00:00:00 --- log: started forth/03.08.20 00:00:24 why not compile by external ':' at the time of building the kernel ? 00:00:53 Define external ':' 00:01:29 * kc5tja redefines ':' in GForth to compile FS/Forth word headers and code fragments. 00:01:44 : H: : ; 00:02:18 H: ;H POSTPONE ; ; IMMEDIATE 00:02:21 kc5tja: so tell me --- what would be the best way for me to make a C forth? as in, just how it should work design-wise 00:02:30 H: : :-header ] ;H 00:02:45 H: ; R> DROP \\exit ;H 00:02:49 do you guys log this channel? 00:02:58 Yes 00:03:01 rad 00:03:20 http://tunes.org/~nef/logs/forth/ 00:03:22 FS/Forth is your own Forth? 00:03:25 Yes 00:03:30 * Serg_Penguin is logging, but MIRC craps br0ken logs 00:03:34 subroutine threading? 00:03:51 uuter: Subroutine threaded with native code inlining for primitives. 00:03:58 thats cool 00:04:12 just a BIT in the Flag byte in the header? 00:04:25 uuter: My initial performance work indicates that it is approximately 2.2x slower than GCC's best possible code output, and on par with GCC's worst code. 00:04:46 x86 I imagine? 00:04:54 uuter: No. Primitives are implemented as COMPILER words, which place literal opcodes into the dictionary via , and C, 00:05:31 * kc5tja does not have an IMMEDIATE flag; instead I have two vocabularies, FORTH and COMPILER. 00:05:36 thats something i need to look into 00:05:41 Words defined in the COMPILER vocabulary are assumed immediate. 00:05:54 you have separate vocabularies? 00:06:02 uuter: I only have two. 00:06:26 is it multi-tasking? 00:06:50 It will be. 00:07:10 i have a question regarding the input stream 00:07:27 i posted to the BB as well 00:07:41 * kc5tja hasn't checked the BB in eons. 00:07:52 rk: I can't answer your question. 00:07:55 is FS/Forth ANS compliant? 00:08:11 Absolutely, 100%, positively, vehemently, undeniably, impunitively NOT. :) 00:08:34 My last two Forth environments were originally designed to be ANS compliant. That was a mistake. 
00:08:38 im not a zealot for the ANS spec, but i try to stick to it 00:08:50 ANS is the most fscking difficult of all the Forth specifications I've ever tried to implement. 00:08:57 yeah 00:09:12 can somebody explain in more detail what does> does? 00:09:21 FS/Forth 2.0 follows the Moore philosophy of utter minimalism and MachineForth concepts to the extent that it makes sense to. 00:09:23 even Chuck doesn't like it as far as i know 00:09:37 uuter: Chuck abhors it. 00:09:56 i dig the minimalism of it all 00:10:14 do you implement SOURCE per ANS? 00:10:40 does> is used in defining words 00:11:45 its totally rad you guys have this set up 00:13:55 What does 'this' refer to again? :) 00:14:07 IRC 00:14:10 Ahh 00:14:43 i've been writing a Forth in PPC asm for the last while, and some info is just hard to get 00:14:52 * kc5tja nods 00:15:15 #forth is great. 00:15:23 always somebody to help 00:15:30 Sometimes at least. :) 00:15:47 Sometimes I'm off in another world, talking about Stirling cycle engines and Constantinesco transmissions. :) 00:15:59 rk: i would not add the complications of maintaining portability to your first attempt 00:16:07 Or, when a7r is here, talking about my RX-7. 00:16:54 Good suggestion; I've found that you never can accurately predict what things will need to be abstracted for portability purposes until you actually try to make the port. 00:17:13 totally 00:17:16 This falls in line with the general Forth guideline: No Hooks. 00:17:21 rk: do you have programming experience? 00:17:30 uuter: yeah. 00:17:32 Or, as an extreme programmer, YAGNI: You Aren't Gonna Need It. 00:18:53 well, when you're writing assembly, you don't write 'just in case' code 00:19:16 I used to. I was usually right, but not always. :) 00:19:45 Now I write unit tests for as much as I can, and follow the XP test-first-by-intention rule as strictly as I can. 00:19:46 i just re-write if it comes to that 00:20:06 That, and refactor mercilessly. 
00:20:25 I find that it takes longer for me to write code this way; however, I spend less time debugging. So it all balances out. 00:20:26 i step through my chunks with a piece of paper and a pencil before i even assemble them 00:20:51 I rarely resort to that anymore. 00:21:07 a longer writing cycle is totally worth the lack of endless debugging 00:21:12 * kc5tja nods 00:21:22 test-first is hard to do, too. 00:21:30 plus you don't feel dirty for trying to pass off sloppy code 00:21:31 They don't call it test-driven design for nothing. 00:21:54 I know my code isn't sloppy. It's backed by tests that are repeatable and runnable at any arbitrary moment in time. 00:21:56 :P 00:22:09 yeah, but is it *TIGHT* 00:22:14 thats what bothers me 00:22:17 Who cares if it's tight? 00:22:21 What is tight, anyway? 00:22:27 just knowing it works isn't good enough for me 00:22:37 efficient 00:22:39 Tight code is useless if it runs 0.002% of the time. 00:22:45 Lot of wasted effort. 00:23:04 I'd rather spend my time optimizing code that is run frequently instead. 00:23:09 WORD is not all that useless :) 00:23:18 10/90 00:23:36 In practice, most of the software I write is so heavily I/O bound that fast code is irrelevant. 00:23:47 yeah 00:23:49 I prefer optimizing for size first, speed second. 00:24:14 This actually is related to the no-hooks philosophy too. 00:24:23 well, PPC is load/store, so an efficient GPR layout helps a lot 00:24:32 Uber-fast code on CPU architecture X is not guaranteed to be as performant on architecture Y. 00:25:15 PowerPC's rules for optimization aren't that different from x86's. 00:25:28 Remember that all modern x86 CPUs have RISCs at their core. 00:25:45 yeah, but the ISA is still microcoded as CISC 00:25:51 Nope. 00:25:53 Not true. 00:26:08 Only complex instructions use multi-RISCop sequences. 00:26:09 x86 runs RISC86 or MicroOps, yes? 00:26:21 Simple instructions are encoded to a single RISCop. 
00:26:40 i read that the fp stuff can be hundreds of micro-ops 00:27:06 FMUL on a Pentium III takes 1 clock; IMUL takes 3 clocks (1 to convert integers to FP, 1 to FMUL, and 1 to convert back to integer) 00:27:49 i only learned x86, but by looking at other archs, x86 seems pretty ... well ... bloated 00:27:51 im not all that versed on x86 stuff, just what i hear from the people i hang out with 00:27:59 This is why I laugh at people who insist that integer math is somehow inherently "faster" for modern CPU architectures. 00:28:15 So far, they've gotten several things wrong. :) 00:28:25 :P 00:28:35 * kc5tja does x86 assembly language programming for fun and profit, so I can assure you that x86 isn't "all that bad." 00:28:40 its probably my understanding 00:28:41 PowerPCs are faster CPUs. 00:28:49 But not for the reasons you might think. 00:29:02 PPCs have a smaller pipeline 00:29:11 P4 has a ~20 stage pipeline? 00:29:12 somebody told me that a 500mhz mips would beat a 733mhz x86 00:29:19 Well, 80486-P2 had a 4-deep pipeline too. 00:29:30 rk: Easily. 00:29:41 kc5tja: heh 00:29:42 What makes RISC architectures faster than x86 in general are the following: 00:30:07 1) Larger set of registers exposed to the programmer. Shadow registers are nice, and they do help. But they're not nearly as good as having a firm set of 32 registers at your disposal. 00:30:26 :) 00:31:02 2) Fixed-length instructions make instruction decoding constant-time. 00:31:25 In x86-land, opcodes are variable length; hence, while modern architectures are quite fast, it's still variable length. 00:31:32 Avoid instructions with tons of prefixes, for example. 00:31:49 3) RISC register sets don't suffer from read-after-write or write-after-write after-effects. 
00:32:43 Due to the shadow register architecture that x86 uses to optimize register operations, if you write a 32-bit value to EDX then use the 16-bit low word of same register (DX), you'll incur a pipeline stall while the registers multiplex the proper data to the integer unit. 00:32:48 Ugly stuff. 00:33:28 sparc has some weird constraints on FP and pipeline bubbles 00:33:50 4) Modern x86 CPUs have so many transistors that it's difficult to fit fully independent integer and FP units. The P3 and P4, for example, implement the integer multiply instructions as wrappers for the floating point operations. Hence, an integer multiply intrinsically uses an FPU, which can cause a pipe stall if the FPU is already in use. 00:34:18 RISCs, OTOH, routinely implement distinct integer and FP units, for maximum instruction throughputs. 00:34:35 heh, i try to stay away from the layered ISAs 00:35:40 5) For compatibility with old software, x86 must support self-modifying code. This means that the code cache isn't 100% independent of the data cache. Therefore, the code cache will flush its own lines as needed when executing code. This can cause irregular instruction timings, because the code cache can refill at any time. 00:36:43 kc5tja: i heard that in the time of the 386, Moore _beat_ it w/ only a 4k-gate CPU ;)) 00:36:45 Meanwhile, the code and data caches of true RISC processors are 100% independent of each other. Consequently, intermixing code and data poses absolutely no problems whatsoever for the RISC, and thus, no performance degradation. 00:37:00 Serg_Penguin: Easily. 00:37:28 kc5tja: but i forgot the name of the CPU 00:37:29 Serg_Penguin: The humble 6502 outperforms a 16-bit 8086 board running 1MHz faster. 00:37:38 Serg_Penguin: That'd be the Novix NC4000, I believe. 00:37:49 Serg_Penguin: His MISC M17 would wipe the floor with the 80486 though, so not to worry. :) 00:37:53 was it 8 or 16 bit ? 00:37:58 Serg_Penguin: 6502 is 8-bit. 00:38:03 Novix ? 00:38:08 Oh, Novix is 16-bit. 
00:38:13 i'm sure 6502 is 8bit 00:38:48 * kc5tja notes that the 65816 CPU gets about 80% the performance of the 68000, with a whole lot less pins, and still with an 8-bit data bus. :D 00:38:56 (clock for clock) 00:39:28 but why are such bright things marketed so poorly ? 00:39:44 Because progress is more visible with the poor stuff. 00:39:53 "Look! Now we support 16MB of memory!" 00:40:00 "Look! Now we support 8MHz clock speeds!" 00:40:07 "Look! Now we support multiply instructions!" 00:40:12 i really want a computer thought out the Forth way ! 00:40:14 "Look! Now we support multitasking!" 00:40:32 My 65816-based PC is designed to run Forth. 00:40:49 w/ 3GHz 64-bit stack CPU ;)))) 00:40:51 Granted the CPU isn't a Forth processor, but it's also a heck of a lot cheaper than implementing my own. 00:41:04 65816 based? 00:41:07 Apple IIgs? 00:41:25 It was used in the Apple IIgs, but the chip I'm using is just "slightly" faster than the IIgs's CPU. :) 00:41:38 Like, somewhere in the vicinity of 12MHz faster. ;) 00:41:55 yeah, you can/could buy upgrades for the IIgs 00:42:03 I never had a IIgs. 00:42:16 I used one in the past, though. 00:48:09 its a good machine 00:48:52 It's the only Apple I'd consider using. 00:49:04 (well, not counting the Mac-series.) 00:49:08 Macintosh is good. 00:49:11 The Apple-II was just atrocious. 00:49:34 i assume you were a Commodore fan? 00:52:03 For the most part, yes. 00:52:23 Ataris had really cool video architectures -- the Amiga is a direct descendant of the Atari 800XL, for example. 00:53:22 uh, i thought Amiga was a Commodore line? 00:53:31 and Atari was distinct 00:54:26 It was. 00:54:35 Jay Miner designed the chipsets for both. 00:55:09 The Amiga was originally proposed to Atari's management. But it was rejected. So Jay and a bunch of others left Atari, to set up Amiga, Inc, in Los Gatos. 00:55:23 After a long battle, Commodore eventually bought them out, hence Commodore-Amiga. 
00:55:51 yeah 00:56:36 i probably would have wanted a RISC OS machine 00:56:38 And Atari was left gaping and groping for something to compete with Commodore once they saw the first Lorraine in 1984. :) 00:57:05 That was why they released the Atari ST. 00:57:13 RISCs back then were insanely expensive. 00:57:16 the ST and the Falcons did ok 00:57:21 $20K for a 1st generation Sparc. 00:57:31 Yes, but they paled in comparison to Amigas. 00:57:33 and didn't they put out a Transputer as well? 00:57:39 No. 00:58:02 Inmos was the one who made the transputer. Commodore owned Mostek Semiconductors (later Commodore Semiconductor Group). 00:58:38 Yeah, but i thought i read that Atari put out a few quad processor 040 computers 00:58:54 called 'transputers' however incorrectly 00:59:06 I'm not familiar with that. 01:01:53 I do find it hard to believe, though, considering that TOS wasn't multitasking beyond the normal capabilities of what GEM allowed. 01:02:09 They had to have some alternative OS to run on that transputer thing, I'd guess. Maybe some flavor of Unix? 01:02:31 im not too sure 01:02:44 i don't even think it was production 01:04:36 Hmm 01:05:31 oh, question, does WORD take the delimiter off the stack? 01:05:41 Yes 01:05:59 so are tabs acceptable in source? 01:06:02 The SOLE exception: 32 WORD will parse to the next whitespace, not necessarily to the next occurrence of ASCII 32. 01:06:19 In all other cases, the character code is treated as a single-character case. 01:06:26 hmm 01:06:42 so 32 WORD will cover tabs as well? 01:06:57 This is one reason why my Forth doesn't have WORD -- instead, I have ParseWord (which implicitly knows to check for whitespace) and ParseString, which parses up to the next specified character. 01:06:59 Yes 01:07:02 And new-lines. 
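The ParseWord/ParseString split described above (kc5tja's own names, not standard words) can be modeled in a few lines of Python; this is an illustrative sketch of the two parsing behaviors, not FS/Forth source:

```python
# ParseWord skips leading whitespace (spaces, tabs, newlines) and collects
# characters up to the next whitespace -- mirroring how 32 WORD special-cases
# whitespace.  ParseString scans up to one specific delimiter character.

def parse_word(src, pos=0):
    """Skip leading whitespace, then return (word, new_pos)."""
    while pos < len(src) and src[pos].isspace():
        pos += 1
    start = pos
    while pos < len(src) and not src[pos].isspace():
        pos += 1
    return src[start:pos], pos

def parse_string(src, delim, pos=0):
    """Collect characters up to the next occurrence of delim."""
    start = pos
    while pos < len(src) and src[pos] != delim:
        pos += 1
    return src[start:pos], pos + 1   # position past the delimiter

print(parse_word("  DUP\t2 +"))            # ('DUP', 5)
print(parse_string('Hello world" x', '"'))  # ('Hello world', 12)
```

Tabs and newlines fall out of `isspace()` for free, which is the point of the "SOLE exception" above: whitespace is one parsing case, any other delimiter is a single-character case.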
01:07:14 ok 01:07:18 thanks 01:07:47 kc5tja: curious: how could you define VARIABLE, ;, and : using CREATE and DOES> (if possible) 01:08:00 You cannot define : and ; using CREATE and DOES> 01:08:07 --- join: serg (~serg@h31n2fls31o965.telia.com) joined #forth 01:08:20 hi from Robert's SSH login ;)) 01:08:20 VARIABLE is defined thusly: : VARIABLE CREATE 0 , ; 01:09:01 :) 01:09:01 i'm showing a UNIX to one guy for the first time ;)) 01:09:06 ??? 01:09:13 kc5tja: i dont get that 01:09:25 rk: It creates a new word, and stores 0 in its parameter area. 01:09:34 (initializes the variable, basically) 01:09:44 --- quit: serg (Client Quit) 01:09:52 rk: When said word is executed, the address of its parameter area is on the stack -- e.g., the address of its contents. 01:10:15 kc5tja: so pushing the address is CREATE default behaviour? 01:10:23 rk: Yes 01:10:45 ooh ok makes sense 01:11:13 :) 01:11:26 CONSTANT, by comparison, is this: : CONSTANT CREATE , DOES> @ ; 01:11:34 kc5tja: could you define : or ; that way? 01:11:40 kc5tja: yeah, you showed me that one 01:12:25 rk: I suppose it's conceivable. : : CREATE ] DOES> EXECUTE ; is the closest I can think of, but I don't know if that'll work across multiple Forth implementations. 01:12:33 ; definitely cannot be implemented using CREATE or DOES>. 01:13:02 heh 01:13:04 ok 01:13:10 Typically, : is implemented specially because it skips the overhead of dealing with pushing addresses on the stack and EXECUTing them. 01:13:20 what about ARRAY? 01:13:29 There is no standard definition for ARRAY. 01:13:37 ARRAY is what you want it to be. 01:13:57 * kc5tja prefers to allocate arrays explicitly using CREATE and ALLOT manually. 01:14:03 kc5tja: how do you make an array then? a bunch of consecutive variables? 01:14:03 ( allocate 16 GDT entries ) 01:14:12 CREATE gdt 8 16 * ALLOT 01:14:14 heh 01:14:43 That creates a 128 byte allotment of memory with 'gdt' pointing right at it. 01:14:52 I dereference an array using + and @/! operators. 
01:15:01 gdt @ ==> gdt[0] 01:15:11 gdt 1 CELLS + @ ==> gdt[1] 01:15:18 gdt 2 CELLS + @ ==> gdt[2] 01:15:18 etc 01:15:31 : array create allot ; \ should work 01:15:41 However, if this code appears all the time in your code, you can use CREATE/DOES> to simplify things. 01:15:57 : array create allot does> swap cells + ; 01:16:05 16 8 * array gdt 01:16:10 0 gdt @ ==> gdt[0] 01:16:15 1 gdt @ ==> gdt[1] 01:16:20 2 gdt @ ==> gdt[2] 01:16:21 ..etc.. 01:16:24 heh 01:16:27 cells? 01:16:39 A cell is defined as the basic working size for the Forth virtual machine. 01:16:49 For example, if you have a 16-bit Forth, a cell is 16 bits. 01:16:55 If you have a 32-bit Forth, a cell is 32 bits. 01:16:57 Etc. 01:17:13 heh 01:17:19 CELLS is a word that multiplies a number by the byte size of a cell, so that array elements can be dereferenced easily. 01:17:42 It's all pointer arithmetic. 01:17:44 :) 01:18:06 see array 01:18:07 ( 845FC4 ) CREATE ALLOT 35516 (DOES>) ; 01:18:19 whats the diff between does> and (does>) 01:18:39 Just as gdt[2] is *((GDT *)((unsigned long)(&gdt[0])+(2*sizeof(GDT)))) in C, so it is gdt 2 CELLS + @ in Forth. :) 01:19:08 hrm 01:19:20 thats pretty ugly c code 01:19:24 (does>) is an internal helper word which does the grunt-work for DOES>. DOES> is an immediate/compiler word that compiles code into the current definition; (does>) is what DOES> compiles in to do the useful work. 01:19:32 But look at what it's doing. 01:19:44 It's converting &gdt[0] into a 32-bit unsigned integer (address). 01:20:02 (&gdt[0]) = gdt 01:20:02 It's then multiplying 2 (the array index) times the size of a GDT structure, then adding it. 01:20:19 The resulting pointer is then converted back to a GDT pointer, where the initial * dereferences the pointer. 01:20:40 but is that all really necessary? 01:20:48 in C, that is? 01:20:49 That's what the computer does, sure. 01:21:01 Only it can do it in a single instruction in most cases. 01:21:10 But ultimately, that's what the computer has to do. 
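The CELLS arithmetic above is just base + index * cell-size. A Python sketch of it (my own model: FS/Forth compiles native code, while here a `Memory` class over a bytearray stands in for the dictionary, with CELL assumed to be 4 bytes for a 32-bit Forth):

```python
import struct

CELL = 4  # assumption: a 32-bit Forth, so one cell is 4 bytes

class Memory:
    """Flat byte-addressed memory standing in for the Forth dictionary."""
    def __init__(self, size):
        self.bytes = bytearray(size)
    def fetch(self, addr):            # the Forth @ operator
        return struct.unpack_from("<i", self.bytes, addr)[0]
    def store(self, addr, value):     # the Forth ! operator
        struct.pack_into("<i", self.bytes, addr, value)

def cells(n):
    """The CELLS word: scale an element index to a byte offset."""
    return n * CELL

mem = Memory(16 * CELL)
gdt = 0                               # CREATE gdt 16 CELLS ALLOT -> base addr
mem.store(gdt + cells(2), 0x1234)     # gdt 2 CELLS + !
print(hex(mem.fetch(gdt + cells(2))))  # gdt 2 CELLS + @  -> prints 0x1234
```

The `does> swap cells +` part of the transcript's `array` word is exactly the expression `gdt + cells(i)` here: the index arrives first, gets scaled by CELLS, and is added to the base address CREATE left behind.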
01:21:23 The programmer usually just uses gdt[2] notation. 01:21:37 kc5tja: C automatically does the correct pointer arithmetic on pointers, so that line could be converted to *(gdt + 2); 01:21:48 rk: You missed the point. 01:21:54 I'm saying that the two things are equivalent. 01:22:01 kc5tja: oh, ok. 01:22:15 If you look at what the Forth code is doing, it's exactly the same. 01:22:21 see (does>) 01:22:21 ( 83DC40 ) LATEST NAME> >CODE CELL + ! ; 01:22:25 gdt 2 sizeof_GDT * + @ 01:24:08 OK, I gotta go to bed. I'm way past my normal bedtime, and I have work I need to do tomorrow. 01:25:25 --- quit: kc5tja ("THX QSO ES 73 DE KC5TJA/6 CL ES QRT AR SK") 01:25:35 --- quit: Serg_Penguin () 02:23:52 --- quit: rk (Read error: 113 (No route to host)) 03:16:54 --- join: rk (~rk@ca-cmrilo-docsis-cmtsj-b-36.vnnyca.adelphia.net) joined #forth 03:29:26 --- join: Serg_Penguin (Serg_Pengu@212.34.52.140) joined #forth 04:06:51 --- quit: Serg_Penguin (Read error: 104 (Connection reset by peer)) 07:24:08 --- join: jdrake (~Snak@65.48.16.52) joined #forth 07:48:31 --- join: ian_p (ian@inpuj.net) joined #forth 09:08:06 --- join: rO|_ (~rO|@pD9E594C0.dip.t-dialin.net) joined #forth 09:16:37 --- quit: rO| (Read error: 60 (Operation timed out)) 09:26:39 --- join: Herkamire (~jason@h000094d30ba2.ne.client2.attbi.com) joined #forth 09:38:07 howdy 09:40:27 uuter: howdy 10:07:01 howdie 10:07:30 Herkamire, was that you getting the mp3s from me? 10:07:35 I have removed them now, 2 days up. 10:08:02 yeah. I got em 10:08:18 cool, enjoying? 10:08:30 great stuff, Nightingale is really the Dean of Motivation 10:08:42 yeah. good stuff 10:08:43 Well, he is now the dead dean of motivation, so in some way his legacy lives on. 10:08:51 listened to 5 or so tracks 10:08:58 good, I'm glad, I like sharing that kind of stuff. 
10:09:25 bbl 12:57:43 --- quit: jdrake ("Snak 4.9.8 IRC For Mac - http://www.snak.com") 13:17:39 "According to a recent survey undertaken by MIND amongst people suffering from depression, many felt much better after eating a banana." -- banana.com 13:19:42 ok, back to work 13:22:17 --- join: thin (thin@bespin.org) joined #forth 13:35:25 --- quit: thin ("Leaving") 13:44:53 --- join: kc5tja (~kc5tja@ip68-8-127-122.sd.sd.cox.net) joined #forth 13:44:54 --- mode: ChanServ set +o kc5tja 14:43:43 --- join: thin (thin@bespin.org) joined #forth 14:43:49 howdy 14:44:00 kc5tja: chuck doesn't like DOES> ? 14:47:28 thin: I don't think so 14:47:52 why not 14:48:10 I'm guessing something along the lines of "it was created to solve a problem that I don't have" 14:48:39 I haven't even considered using or creating something like does in at least a year. 14:49:08 back when I did play with it is when I was playing around with syntax and such. 14:49:30 re thin 14:49:53 is DOES a word? 14:50:01 besides DOES> 14:50:04 Chuck has claimed that he now has a problem in his ColorForth that DOES> would solve elegantly. 14:50:33 It's not that he doesn't like DOES> -- it's just that it has no place in a minimal Forth system until it's really, truly needed. 14:50:46 He currently implements DOES>-like behavior through other means. 14:51:05 : string R> COUNT ; 14:51:10 how so? 14:51:21 : hw string [ 11 ," Hello world" 14:51:34 hw TYPE ==> Hello world 14:52:06 hmm 14:52:12 Here, 'string' is the DOES>-like behavior. 14:52:24 It defines what happens when hw is executed. 14:52:32 What happens after that is CREATE-like behavior. 14:52:45 ," is my own invention of course. 14:53:20 I could have just omitted the 11 too, and let ," determine the string length 14:53:49 In a way, I kind of like this solution better. 14:53:55 It is more ovious. 14:53:57 obvious even 14:54:17 : print R> COUNT TYPE ; 14:54:34 : title print [ ," Beauty and the Feast" 14:54:42 : foo ... title ... ; 15:08:23 that's it! 
the reason I don't use DOES> is that I just have words that compile code. 15:08:53 like these: CREATE VARIABLE LIT 15:10:22 are these equivalent?: 1) .... DOES> foo bar ; 2) .... postpone foo postpone bar ; 15:11:22 kc5tja: I like the term elegantly :) 15:11:31 code should be elegant :) 15:13:26 No, the two fragments are not the same. 15:13:50 DOES> terminates the definition it appears in, AND creates a new (hidden) definition which is bound to the most recently created (at run-time) word. 15:14:22 That's why DOES> is a monster to implement: it has run-time, compile-time, AND load-time execution semantics. 15:14:24 postpone just compiles a call to an immediate word 15:14:33 Postpone has two behaviors. 15:14:42 If the word is immediate, it compiles a call to it instead of executing it. 15:14:55 If the word is NOT immediate, it compiles code to compile a call to it. 15:15:32 so in some cases my example would work. 15:15:37 Hence postpone's name -- it defers (postpones) the execution behavior of a word (immediate or not) to the level of the word currently being defined, rather than the word in which it appears. 15:15:52 No 15:15:57 The two are wholesale different 15:16:39 oh, because CREATE compiles code 15:17:03 : DOES> POSTPONE EXIT HERE last-xt FIXUP POSTPONE (DOES>) ; 15:17:27 where FIXUP is a word that patches the most recently created word's code-field address. 15:18:01 ok, I can see how DOES> is sometimes nice for conventional forths. 15:18:10 (DOES>) is the first thing the CREATEd word will execute, which will push its parameter address on the stack, for the benefit of subsequent code. 15:18:23 For example: 15:18:33 but in mine I can just define the word, and then use any word that compiles code, to make it do something. 15:18:46 Yes, that's why I said that ColorForth has a reduced need for DOES>. 
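The two POSTPONE behaviors described above can be modeled with a toy compiler; the `Forth` class and tuple representation here are entirely my own invention, meant only to show the branch on the immediate flag:

```python
# Toy model of POSTPONE's two behaviors: for an immediate word it compiles
# a call to it (instead of executing it now); for a non-immediate word it
# compiles code that will itself compile a call later.

class Forth:
    def __init__(self):
        self.here = []                 # the definition being compiled

    def compile_call(self, word):
        self.here.append(("call", word))

    def postpone(self, word, immediate):
        if immediate:
            # execution deferred: compile a call rather than running it
            self.compile_call(word)
        else:
            # compile code that, when run, compiles a call to the word
            self.here.append(("compile-call-to", word))

f = Forth()
f.postpone("IF", immediate=True)     # IF is an immediate word
f.postpone("DUP", immediate=False)   # DUP is an ordinary word
print(f.here)   # [('call', 'IF'), ('compile-call-to', 'DUP')]
```

Either way, the word's behavior lands one compile level later than where POSTPONE appeared, which is the "defers" phrasing in the transcript.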
15:18:53 Here are two completely identical pieces of code: 15:19:41 the only thing I don't have is something like: postpone 15:19:42 : flags SWAP LSHIFT CREATE , DOES> @ AND ; 15:19:50 0 1 flags U_X 15:19:55 0 2 flags U_W 15:19:59 0 4 flags U_R 15:20:12 3 1 flags G_X 15:20:18 3 2 flags G_W 15:20:21 --- join: tathi (~josh@pcp02123722pcs.milfrd01.pa.comcast.net) joined #forth 15:20:23 3 4 flags G_R 15:20:27 and the following: 15:20:59 : flags SWAP LSHIFT LITERAL ; 15:21:15 : permissionMask @ AND ; 15:21:32 : U_X permissionMask [ 0 1 flags 15:21:38 : U_W permissionMask [ 0 2 flags 15:21:45 : U_R permissionMask [ 0 4 flags 15:21:56 : G_X permissionMask [ 3 1 flags 15:22:01 : G_W permissionMask [ 3 2 flags 15:22:08 : G_R permissionMask [ 3 4 flags 15:22:31 It can be made a bit more convenient by doing this: 15:22:37 : U_X [ 0 1 flags ] AND ; 15:22:45 : flags postpone permissionMask swap lshift , ; 15:23:05 If your ] is smart enough to inline a numeric literal sure. 15:23:06 yeah 15:23:12 * kc5tja prefers ]L for that syntax. 15:23:34 what does it mean to inline a literal? 15:24:41 Numeric constants are determined at compile-time, then compiled into the final code as if you'd just specified the correct number to begin with. 15:24:45 Like inlining code. 15:24:57 Only with numbers 15:25:57 you mean it would compile native code to do an andi instruction instead of a literal followed by a 'and' instruction (with a stack push and pop)? 15:27:34 --- quit: thin ("brb") 15:27:43 in my forth 15:28:40 in my forth I would do something like: : flags swap lshift r-tos r-tos ,andi ; 15:28:53 (r-tos is the top of stack register) 15:29:25 : U_X [ 0 1 flags ] ; 15:29:28 or better yet: : flags swap lshift r-tos r-tos ,andi ,; ; 15:29:34 : U_X [ 0 1 flags 15:33:08 That's inlining. 15:33:51 Inlining is a generic term meaning, "Compile it in place, do not call a subroutine to do it." 
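The `flags` defining word above boils down to one arithmetic fact: each permission word's bit is (value LSHIFT shift), ANDed against a mask at run time. Here is that arithmetic in Python (the names U_X, G_X, etc. follow the transcript; the example mask value is mine):

```python
def flags(shift, value):
    """Model of what `flags` compiles: the bit (value << shift) is baked in
    at definition time; the word it defines ANDs that bit against a mask
    at run time.  (SWAP LSHIFT means the first number is the shift.)"""
    bit = value << shift
    return lambda mask: mask & bit

# the transcript's definitions: Unix-style permission fields,
# field shift first, permission bit within the field second
U_X = flags(0, 1)
U_W = flags(0, 2)
U_R = flags(0, 4)
G_X = flags(3, 1)
G_W = flags(3, 2)
G_R = flags(3, 4)

mode = 0b101101                            # example mask, my own value
print(U_X(mode), U_W(mode), G_X(mode))     # 1 0 8
```

Returning a closure over `bit` plays the role DOES> plays in the first Forth version: definition-time data captured once, a shared run-time behavior applied to it.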
15:33:57 1 2 + 15:34:06 is the same as [ 1 LITERAL ] [ 2 LITERAL ] + 15:34:17 where LITERAL is a word that compiles the code to push a number on the data stack. 15:36:03 how were you thinking that would be implemented? 15:37:02 I've been trying to decide if I should have some sort of stack optimizer that all words that compile code would have to use 15:37:37 or if I should just do normal literals like you said: [ 1 literal 2 literal ] + 15:37:45 It's up to you. 15:37:53 I'm personally taking the simplest way out possible. 15:38:06 and make + immediate, and check if the last two instructions are literals, in which case it would remove them and replace with its own code. 15:38:09 Like I said, I have two versions of ]: one which does and one which does not compile a literal. 15:38:21 All my basic primitives are immediate words. 15:39:03 Relatively few primitives are duplicated into the interpreter's vocabulary. 15:39:08 I don't like the idea of having ] compile a literal 15:40:02 something like ]L sounds good if you don't want to type: lit ] 15:40:40 I don't have any choice in my colorforth though 15:40:45 ] is not a word 15:41:34 A colorForth is smart enough to detect a yellow->green transition, while a plain-ASCII Forth isn't. 15:42:07 what should it do on yellow->green? 15:42:11 I don't get that 15:42:17 Yellow is traditionally interpreted code. 15:42:20 Green is compiled. 15:42:27 I haven't been changing to yellow mid-definition to calculate a literal. 15:42:32 I've been changing to compile some code 15:42:35 Hence if you interpret code in the middle of a definition, it's assumed there is a literal on the stack to compile. 15:42:47 I don't like that assumption. 15:42:54 in my code it is false most of the time. 15:43:05 In ColorForth, I've never had a violation of that assumption. 15:43:07 especially as I don't have immediates 15:43:29 True, CF has immediates in the form of a MACRO vocabulary. 
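The idea floated above — make `+` immediate and have it check whether the last two compiled instructions are literals — is a peephole constant-folding step. A minimal sketch, using an invented `("lit", n)` / `("prim", name)` instruction representation rather than any real Forth's:

```python
# When `+` is compiled and the last two compiled instructions are literals,
# fold them into one literal at compile time instead of emitting a run-time
# add (no stack push/pop at run time, just the folded constant).

code = []

def compile_literal(n):
    code.append(("lit", n))

def compile_plus():
    if len(code) >= 2 and code[-1][0] == "lit" and code[-2][0] == "lit":
        b = code.pop()[1]
        a = code.pop()[1]
        code.append(("lit", a + b))   # constant-folded at compile time
    else:
        code.append(("prim", "+"))    # fall back to a run-time add

compile_literal(1)
compile_literal(2)
compile_plus()
print(code)   # [('lit', 3)]
```

This is the "stack optimizer that all words that compile code would have to use" in miniature: every compiling word peeks at the tail of the output before emitting anything.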
15:43:46 how would you deal with an assembler, or the flags example above? 15:44:01 I have many words that take parameters on the stack, and compile some code. 15:44:15 mostly my assembler, but also other words like LIT 15:44:23 FS/Forth lacks an integrated assembler. I compile raw opcodes from numeric literals. 15:44:55 you just do 1282341734 , sort of thing 15:45:37 ? 15:45:51 However, if I were to implement an assembler, I'd probably end up using the Moore-ish approach where I have : XYZ doesCode [ parameters 15:46:00 Well, I use hex notation, but otherwise, yes. 15:46:19 I tend to use C, mostly since x86 has 8-bit opcodes. 15:46:35 :) I almost typed $ for hex, but I don't want to explain my notation :) 15:46:51 I use $ notation too. 15:46:55 coool 15:47:47 oh :) so you can just do c, 15:47:53 then c, the parameters 15:48:30 Parameters I use either C, or , depending on operand size. *nods* 15:48:37 --- join: jdrake (~Snak@65.48.16.52) joined #forth 15:48:40 cool 15:48:50 that's neat. I didn't know the opcodes were 8bin 15:48:53 8 bit 15:49:05 I admit that it's not as easy to read or maintain that way, but it gets the job done, and with a minimum amount of fuss on my end. 15:49:14 on ppc the primary opcode is 6bits 15:49:18 Especially since x86 isn't nearly so orthogonal that an assembler can be written for it in only two blocks. 15:49:40 Well, x86 opcodes vary in length from 8 bits to 16 bytes. 15:50:03 It depends on the instruction, what prefixes you prepend to it to modify its behavior, and operand sizes. 15:50:25 In most 32-bit code, opcodes typically range between 1 and 8 bytes. 15:51:36 funny they call it 32-bit code... 15:51:48 It's 32-bit because it manipulates 32-bit chunks of data at a time. 15:52:02 The 21-bit F21 has 5-bit opcodes; does that make it a 5-bit CPU? :) 15:52:02 32 bit registers then? 15:52:07 Yes 15:56:31 going swimming. 
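The assembler-less approach above — compiling raw opcodes with `C,` (one byte) and `,` (one cell) — amounts to appending bytes or cells to the dictionary. A Python model (the `dictionary` bytearray, function names, and example values are my own; 0x90 is the x86 NOP opcode byte):

```python
import struct

# Model of compiling raw opcodes byte-by-byte and cell-by-cell, as in
# "I compile raw opcodes from numeric literals" above.

dictionary = bytearray()      # stands in for the Forth dictionary's HERE area

def c_comma(byte):
    """The C, word: append one byte to the dictionary."""
    dictionary.append(byte & 0xFF)

def comma(cell):
    """The , word: append one 32-bit little-endian cell."""
    dictionary.extend(struct.pack("<I", cell & 0xFFFFFFFF))

c_comma(0x90)            # an x86 NOP opcode byte, compiled with C,
comma(0xDEADBEEF)        # a cell-sized operand, compiled with ,
print(dictionary.hex())  # 90efbeadde
```

The choice between `C,` and `,` is exactly the "depending on operand size" remark: x86's 8-bit opcode bytes go in with `C,`, cell-sized immediates with `,`.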
bbl 16:27:40 --- quit: kc5tja ("THX QSO ES 73 DE KC5TJA/6 CL ES QRT AR SK") 17:12:58 --- quit: jdrake ("Snak 4.9.8 IRC For Mac - http://www.snak.com") 17:27:35 --- quit: tathi ("leaving") 17:36:07 --- quit: fridge (Read error: 60 (Operation timed out)) 17:55:28 --- quit: ian_p ("ircII EPIC4-1.1.7[399] -- Are we there yet?") 18:41:34 --- quit: TreyB () 18:59:12 --- join: TreyB (~trey@cpe-66-87-192-27.tx.sprintbbd.net) joined #forth 20:34:18 --- quit: skylan (Connection timed out) 20:58:42 --- join: jdrake (~Snak@65.48.16.52) joined #forth 22:11:43 --- quit: Herkamire ("bedtime") 22:24:58 --- part: jdrake left #forth 22:27:21 --- join: Serg_Penguin (Serg_Pengu@212.34.52.140) joined #forth 22:30:24 --- quit: Serg_Penguin (Client Quit) 23:39:57 --- join: ianP (ian@inpuj.net) joined #forth 23:59:59 --- log: ended forth/03.08.20