00:00:00 --- log: started forth/06.09.08
00:05:03 --- quit: JasonWoof ("off to bed")
01:25:58 --- quit: Anbidian ()
01:59:47 --- join: Cheery (n=Cheery@a81-197-19-23.elisa-laajakaista.fi) joined #forth
02:43:44 --- join: Topaz (n=top@82-43-188-55.cable.ubr07.newm.blueyonder.co.uk) joined #forth
03:20:18 --- quit: Topaz ("Leaving")
04:15:11 --- join: PoppaVic (n=pete@0-1pool47-14.nas30.chicago4.il.us.da.qwest.net) joined #forth
05:21:51 --- join: vatic (n=chatzill@pool-162-84-156-148.ny5030.east.verizon.net) joined #forth
05:28:12 Hi Quartus! Do you have a recommendation for a Forth compiler/compilation overview? I'm starting to tackle programming this for my PIC Forth, and haven't completely sorted out the issues with state: entering and leaving compile state, compiler directives, etc. Trying to zero in on what the fundamental words that I need to understand and write are: the absolute first ones. Reading Rather's...
05:28:14 ..._Programmer's Handbook_, which is helpful, but maybe you know of others/better?
05:39:04 * Quiznos slips some trits in PoppaVic's non-binaries
05:39:30 trits? Is that like a tribble?
05:39:40 Martian math
05:39:47 -1 0 1
05:39:48 ahh
05:39:51 heh
05:40:02 gotta know 50s Martian movies to know that
05:40:05 heh
05:40:13 Sorry, until Heinlein's trinary computers arrive, I'm safe
05:40:31 pff lol
05:48:23 vatic: have you considered getting rid of state altogether?
05:49:30 State is... interesting, but yeah... I have to wonder if we can dispense with a var that anything above the engine sees.
05:50:02 TreyB: well, for my current effort, there's a lot of value in trying to be ANS compliant. As for later, that's up for grabs...
05:52:20 Consider this: instead of interpreting, compile to a temporary buffer. You can execute the buffer at any time so long as you don't have any unresolved items on the control flow stack.
05:52:41 You could execute after each word, or at the end of a line of input.
05:53:10 hehe
05:53:23 Yer still "interpreting" - just at a different point.
05:53:31 i like that idea, and since i'd get the buffer from the kernel instead of HERE, its distance from HERE can allow for even further processing without interfering with the dictionary space
05:54:07 If you see "[", push a marker on the control flow stack. When you see "]", execute everything from the marker to the end of the buffer, and then continue compiling.
05:54:19 I've never much liked HERE and such... There is something extremely funky about such a hack
05:54:32 it's simple. not a hack
05:54:37 Note that with this scheme you can nest: [ blah [ foo ] ]
05:54:47 but forth is sposed to be highly hackable. that's the point
05:55:07 Quiznos: it's extremely limiting and makes some assumptions that continually carry over into the future.
05:55:23 You can have a STATE variable, but it would always report compilation :-)
05:56:29 If you do this, you don't have to write any state-smart words, and all of the control-flow words work interactively for free.
05:56:34 TreyB: Although the PIC MCU is Harvard arch, so you can read from codespace, but not execute dataspace. Also, codespace is EEPROM, which adds its own peculiarities when writing to it...
05:57:02 TreyB that's cool
05:57:23 TreyB what marker is pushed on [?
05:57:34 ptr to next inline cell?
05:58:44 Quiznos: pretty much. You just need a way for the next ] to know where to start executing from.
05:59:13 TreyB yea, but what about intervening words unbalancing rstk?
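A minimal ANS Forth illustration of the compile-then-execute flavor discussed above. This is not TreyB's buffer scheme itself; the standard word :NONAME already gives the same taste -- the xt it returns is, in effect, a throwaway compiled buffer that EXECUTE plays back:

    \ compile anonymous code, then execute it immediately
    :noname ( -- ) ." compiled, then executed" cr ;  execute

TreyB's proposal differs in that the interpreter itself always compiles into such a scratch area and executes it whenever the control-flow stack is balanced, so STATE never needs to report interpretation.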
06:00:47 You have the same problem that you have in interpreted mode for normal Forth: you can't mess with the return stack.
06:00:54 ok
06:01:31 I *believe* that ANS says you can only alter the return stack in : or CODE definitions, but I'd have to check.
06:02:18 As a rule, I find all the tinkering w/i RS to be frightening.
06:02:19 i was reading minforth code the other day and don't recall seeing state, but i have to check; i'm fond of rm'ing state from my forth
06:02:35 have some guts, pop the stk
06:02:36 lol
06:02:59 Been there; Done That.
06:03:42 I guess you could use >R, R@, and R> assuming you follow the normal rules: don't touch something on the return stack that you didn't put there. This would apply to the nested [ ] scopes.
06:04:10 sure
06:04:11 k
06:06:06 vatic: In the case of the PIC MCU, you'd want to defer execution as long as possible to avoid EEPROM programming. You'd build up the execution buffer in data RAM and then write it to EEPROM before execution.
06:06:31 are PICs reprogrammable that way?
06:06:40 "runtime"?
06:07:12 Depends on the PIC, I'd guess.
06:07:47 now, is that reprogramming of microcode? or user-code?
06:08:22 Insufficient Data (tm)
06:10:23 k
06:12:38 Quiznos: the "F" (for flash) series PICs are. You have to burn in 32-byte multiples and I think on 32-byte boundaries...
06:13:40 Quiznos: If I understand your question, it's user code. You can't fiddle with the underlying design of the MCU...
06:13:51 ok ty
06:14:22 Short life on Flash - easier to use the [e]eprom and some RAM
06:15:55 PoppaVic: [e]eprom is very limited on these chips, only about 512 bytes. Only a few have external addressing capability, so external RAM would have to be read through a kludge with the PIC's ports...
06:16:06 eww
06:17:21 heh
06:19:39 --- join: Ray_work (n=Raystm2@199.227.227.26) joined #forth
06:20:33 Good morning.
06:20:52 howdy
06:21:02 Howdy, PoppaVic.
06:21:52 How they hangin', Ray?
06:22:31 re
06:24:48 --- join: virl (n=virl@chello062178085149.1.12.vie.surfer.at) joined #forth
06:28:17 PoppaVic: one doesn't always choose the chip one works with... ;-)
06:28:34 vatic: well aware, and also of pain.
06:29:06 PoppaVic: yup. But tell me, what's an ideal Forth chip these days?
06:29:49 there is none. Unless you count a few high-$ specialty chips that try to bury forthish in the opcodes
06:30:59 PoppaVic: that was my conclusion after looking around. There's always some problem. Anyway, at least the PICs are "feature-laden" :-)
06:32:41 yeppers... I'm still not convinced that CISC or RISC or "stack-processors" are any great shakes over the long, generic haul
07:21:42 --- quit: madgarden (Read error: 110 (Connection timed out))
07:23:30 --- quit: PoppaVic ("Pulls the pin...")
07:25:10 --- join: PoppaVic (n=pete@0-2pool238-89.nas24.chicago4.il.us.da.qwest.net) joined #forth
08:31:14 --- join: ASau` (n=user@195.98.180.3) joined #forth
08:31:34 --- nick: ASau` -> ASau
08:32:35 Good evening! ("Dobry vecer!")
08:42:25 crazzy flubber!
08:43:21 tootles
08:43:26 --- quit: PoppaVic ("Pulls the pin...")
08:55:28 --- join: neceve (n=claudiu@unaffiliated/neceve) joined #forth
09:08:42 hi vatic. I'm not sure what you're after. Are you trying to write application code in a Forth that isn't finished yet?
09:16:04 Quartus: If you get a moment, I'd appreciate a read of my messages this morning with an eye towards punching holes in my how-to-eliminate-state theory.
09:17:01 You mean, how to eliminate STATE-smart words?
09:17:58 I'm not clear on how you forward-parse in a system that can't exit compilation mode.
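A minimal sketch of the return-stack rule mentioned above: >R, R@ and R> are fine as long as they are balanced within one definition and you only remove what you put there. MY-THIRD is a made-up example word, not anything from the discussion:

    \ copy the third stack item to the top, parking two items on the
    \ return stack and taking them back before the definition ends
    : my-third  ( a b c -- a b c a )  >r >r  dup  r> swap  r> swap ;
    1 2 3 my-third . . . .   \ prints 1 3 2 1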
Are you suggesting no IMMEDIATE words?
09:17:59 I proposed a way to get rid of STATE altogether.
09:18:12 No, we still need immediate words.
09:18:27 Ok, what about defining words?
09:18:55 But the notion of IMMEDIATE changes more to "parsing".
09:21:09 Hi Quartus, thanks! I'm trying to write the minimum number of primitives in PIC asm to get a compiler going for that chip. Using that sheet you gave me a while back, MAF, and Rather's book, I think I can start by writing ":" ";" "create" "immediate" "postpone" (I already have stuff like "word" and "literal" written) and then figure out what else I need or where I'm ignorant. I'm looking for...
09:21:11 ...overview discussions of Forth compilation because I don't understand all the dimensions of it. Anyway, not a huge hurry. (I can take a check behind TreyB) ;-)
09:22:43 vatic, writing a 'minimum-primitives' Forth depends entirely on what you mean by 'minimum'. You can take it down to some ridiculous minimum for the theoretical thrill of it, or write everything that benefits from being optimized as a machine-language primitive.
09:23:08 I don't need my answer right away. I need to chew on Quartus's question about defining words for a bit.
09:23:31 TreyB, also -- how to manage conditional compilation? Postponement?
09:24:33 Conditional compilation falls into the immediate/parsing realm, I think.
09:25:02 You have exactly the same things you need & want to do in an always-compiling system such as you discuss, so you still have to find ways to do them. Such a scheme doesn't eliminate any complexity, as far as I can tell; it just moves it around. I'm not sure what the fuss about STATE is; it's really very simple and easy to use.
09:29:06 I once wrote a Forth that would begin compilation if a compile-only word was encountered while interpreting, and then execute the sequence once the control stack was again balanced. I believe Open Boot does the same thing. It was amusing, but offered no particular advantage.
09:29:59 TreyB i recall once seeing the source for a forth where the coder used the state var to hold the 8086 opcode value to either call or return
09:30:02 vatic, if I were implementing a new Forth, I'd start with the VM: stacks and memory, then add the zero-operand instructions as individual routines, and then the dictionary and interpreter.
09:30:10 vatic, I think you can get as minimal as P'' (pronounced P prime prime), where you have a store, a fetch, and two words to move up and down in memory.
09:31:10 Quartus: You got flow control words in "interpreter" state for free, I'd think.
09:31:33 this is even more basic than BrainF*** where you add a print and a
09:31:46 'store literal', I think.
09:31:55 to P'' that is.
09:32:04 TreyB, as long as you remembered that between those control-flow words, you weren't interpreting any longer. Not free.
09:32:55 wouldn't switching states be as simple as whether the code is compiling (const, var, :) or not [; or not]?
09:32:57 In the method I described you never interpret, so we don't have to remember that.
09:33:50 TreyB, right, but you then move the complexity into the defining words, and you still have the need for immediacy, so you still have two states. You have just chained yourself to one of them.
09:35:11 I don't see how to get rid of immediacy without losing the forthishness of it.
09:35:31 I mean, dropping immediacy means not extending the compiler.
09:36:58 Right. So you still have two states. But rather than being free to choose between them, you're locking yourself into one.
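A small illustration, in standard ANS Forth, of the point that immediacy is what lets you extend the compiler: an IMMEDIATE word runs during compilation and uses POSTPONE to decide what gets laid down. NOTE" and GREET are hypothetical names, not anything from the discussion:

    \ NOTE" runs while GREET is being compiled and appends
    \ string-printing code plus a CR to the definition under construction
    : note"  ( compilation: "text<quote>" -- )  postpone ."  postpone cr ; immediate
    : greet  note" entering greet"  ." hello" cr ;
    greet   \ prints: entering greet / hello

Dropping IMMEDIATE would leave no hook for words like this, which is what "not extending the compiler" means above.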
09:38:37 Two seem better than three.
09:38:59 --- nick: Raystm2 -> nanstm
09:39:15 And I don't see the "locking into" part, so either I've missed something or I haven't explained something sufficiently.
09:39:41 Yet one is clearly more hampering than two. It can't be analyzed by using only that kind of reductionism.
09:40:06 TreyB, modify your system so it does what you suggest, and then port some code to it. The issues will become immediately clear.
09:41:53 Yeah, I haven't said much about it before because I didn't have anything useful to show. I don't doubt that I've got unforeseen issues to deal with.
09:42:32 --- join: snowrichard (n=richard@12.18.108.232) joined #forth
09:42:46 I would be interested in seeing what the issue with STATE is supposed to be, that isn't a contrived example involving redefining DUP so that it can count how many times it was compiled. :)
09:43:49 heh
09:43:54 hi
09:44:47 need caffeine brb
09:48:36 k
09:48:47 Hi snowrichard.
09:49:36 Ray_work: yup. That's the assertion of Frank (Pygmy Forth) Sergeant, right? It sounds very interesting, but I was way beyond that stage when I saw his article. I've already got the things Quartus mentioned (stacks, mem, VM, lots of primitives, BRANCH, 0BRANCH). I need to add a compiler now...
09:50:46 The compiler in a Standard Forth is a mode of the interpreter. If you're in compilation state, instead of interpreting, you lay down code -- unless it's immediate, in which case it's executed; or unless it's a number, in which case a literal is compiled into the definition.
09:54:42 The CORE defining words are : ; CREATE DOES> VARIABLE CONSTANT, along with POSTPONE and S" ['] [CHAR] ." and ABORT".
09:55:20 You'll need the equivalent of COMPILE, from CORE EXT inside your interpreter, so it makes sense to make that visible as a word.
09:56:39 Quartus: so if I already have an ANS interpreter, is adding COMPILE, the next step in implementing the compilation mode?
09:56:53 It's certainly a necessary step at some point.
09:57:47 Quartus: so MAF establishes a sequence for the creation of words. Perhaps I should use that as a guide?
09:57:56 a sequence?
09:58:24 Quartus: it has to define one word first so it can be used in other words...
09:58:37 Oh, I see. You could do that.
09:59:17 Quartus: you don't really see that in ASM listings of a Forth, where everything is there but not in any sequence of implementation...
09:59:46 Yes, because MAF is written in Forth, it's bottom-up, so you can see the order of implementation more clearly.
09:59:51 Quartus: MAF is really deliberate in that respect...
10:00:14 Quartus: so I'll keep studying it! :-)
10:00:28 Hard, but valuable...
10:01:35 Well, small steps. Write a 'header' word that creates a new dictionary entry that can't yet be found, and then you can build : as parse-name header ]
10:02:03 (in general terms. You'll likely want to leave a colon-sys on the stack for ; to pick up.)
10:04:11 Quartus: yeah, I've been looking at that and trying to assess whether that's where I need to jump. The thing with the PIC is that I'll have to write all this Flash memory support so that I can save to the code space. Not a quick experiment...
10:04:51 Build it simply first. You can add different memory spaces to it later once it works.
10:06:07 Quartus: good advice. Sadly, I don't think I can execute out of the data RAM. Or it'd be so tortured it wouldn't be portable to Flash memory.
10:06:39 maybe I'm wrong about that...
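A tiny runnable example of the COMPILE, word Quartus mentions above (CORE EXT): it appends the execution semantics of an xt to the definition currently being compiled, which is exactly what the interpreter's compile mode does for ordinary words. COMPILE-DUP and SQUARE are made-up names:

    \ an immediate helper that compiles DUP into the current definition
    : compile-dup  ( -- )  ['] dup compile, ; immediate
    : square  compile-dup * ;     \ equivalent to : square dup * ;
    3 square .                    \ prints 9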
10:07:24 Quartus Forth writes code to a kind of quasi-permanent storage -- it has to be written to via an OS routine. Implementing that was as simple as setting up a codespace, a CSHERE (equivalent to HERE but in codespace), a comma word for codespace, etc.
10:09:11 Quartus: no OS on the PIC. Data has to be flashed in 32-byte blocks and (I believe) on 32-byte boundaries. A bit of a challenge...
10:09:34 Right, I know there's no OS on the PIC. You'd have to implement your own routine to write data to codespace.
10:10:02 Quartus: so I guess that's my next task...
10:10:38 Thanks for your help, Quartus... :-)
10:10:44 That sounds quite simple, though; align new words to a 32-bit boundary, keep a 32-byte buffer in normal RAM, write it out when it's full.
10:11:22 Or as needed. I've never tried to build such a thing, so it may be necessary to flush it before it's full on occasion for backpatching branches.
10:11:43 sorry, a 32-byte boundary. Don't know why I typed bit. Lack of coffee. :)
10:12:00 no prob, I understood...
10:13:05 Quartus: so you're thinking there might be gaps in the code space? When things didn't come out evenly at 32-byte segments?
10:13:13 Right.
10:13:51 Quartus: or at least I could start there and compact it (if possible) later?
10:14:28 Well, maybe I'm misunderstanding. Looking at what you wrote, you don't need to execute at 32-byte boundaries, do you? You can execute code with a smaller alignment than that.
10:14:40 So you wouldn't need to align code on 32-byte boundaries at all, in that case.
10:14:53 Just maintain your 32-byte buffer, flush it as needed.
10:15:00 Quartus: yes, it's only the writing that's restricted.
10:15:51 It could all be hidden in cs@ and cs! words, I think; completely transparent above that level of abstraction.
10:16:17 Quartus: I'll have to get my head around that, but it sounds promising...
10:16:20 Like a disk cache, conceptually.
10:16:49 I suppose you'd want to code it to minimize the number of flash writes.
10:17:47 Quartus: yes, there's a several-millisecond delay while the process goes on. I think I'll have to stop receiving chars from the serial terminal while it's going on...
10:18:18 implement flow control...
10:18:31 I was thinking more in terms of there being a finite number of possible writes to flash.
10:18:54 Quartus: it's in the millions of writes...
10:19:25 Which isn't all that high, really, so I'd want to build the system to write once for every 32 bytes (or largest part thereof), rather than re-writing the 32-byte buffer every time it's altered.
10:20:03 OK
10:20:27 Though I'd probably build the latter version first to get it working, and then improve it.
10:21:16 Quartus: the buffer could be a larger multiple of 32, perhaps. There's a bit of memory on these chips...
10:21:48 I don't see how that would help you -- unless it's faster to write out multiple 32-byte blocks at once?
10:22:16 Hmm... Don't know, really.
10:22:46 You'd have to flush the cache before executing written code, at any rate. No way around that.
10:24:10 Quartus: well, you've given me enough insight for at least a week's worth of work!
10:24:32 :) Ok, feel free to ask more questions as you go.
10:24:52 Quartus: perhaps I should explore the possibilities through some prototypes and then maybe I could be more concrete...
10:25:02 That's the best way to go.
10:25:15 Always aim for simplicity.
10:25:20 Quartus: You've been very helpful, thanks! :-)
10:25:29 You're welcome!
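A rough sketch of the "disk cache" idea for 32-byte flash rows. All names here are made up (/ROW, ROW-BUF, CSP, ROW-START, ROW-FLUSH, CS-C!), FLASH-WRITE-ROW is left as a stub standing in for whatever PIC-specific routine actually burns a row, and this is not how Quartus Forth or any particular system does it:

    32 constant /row
    create row-buf /row allot           \ staging buffer held in data RAM
    variable csp        0 csp !         \ next free codespace (flash) address
    variable row-start  0 row-start !   \ flash address the buffer maps to

    : flash-write-row ( ram-addr flash-addr -- ) 2drop ;  \ stub: real word burns one 32-byte row

    : row-flush ( -- )  row-buf row-start @ flash-write-row ;
    : cs-c!     ( char -- )             \ append one byte of code
        csp @ /row mod row-buf + c!
        1 csp +!
        csp @ /row mod 0= if  row-flush  /row row-start +!  then ;

As Quartus notes above, a real version would also flush a partial row before executing freshly compiled code, and would try to minimize the number of times any one row is rewritten.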
10:55:43 --- quit: uiuiuiu (Remote closed the connection)
10:55:45 --- join: uiuiuiu (i=ian@dslb-084-056-209-171.pools.arcor-ip.net) joined #forth
11:16:33 --- join: madgarden (n=madgarde@Kitchener-HSE-ppp3576891.sympatico.ca) joined #forth
11:29:24 hey
11:33:19 --- join: snoopy_1711 (i=snoopy_1@dslb-084-058-134-208.pools.arcor-ip.net) joined #forth
11:39:26 --- join: fission (n=fission@69.60.114.33) joined #forth
11:50:01 --- quit: Snoopy42 (Read error: 110 (Connection timed out))
11:50:13 --- nick: snoopy_1711 -> Snoopy42
12:07:12 snowrichard
12:09:45 h'lo. Can anyone explain to me how a typical Forth address interpreter works? (pseudocode acceptable)
12:10:05 Zarutian there are source examples on taygeta.com
12:10:14 and a forth webring
12:10:25 knock yourself out :)
12:12:37 Quiznos: Taygeta Scientific Incorporated is what I get at www.taygeta.com, and at taygeta.com I only get an empty page
12:12:47 oh, hole
12:12:49 hold
12:13:11 taygeta.com is good
12:13:13 fix your dns
12:13:21 wwwtaygeta.com too
12:13:25 with a proper .
12:15:14 Quiznos: dns is okay, but FireFox is acting up
12:15:33 ok
12:15:41 elinks here
12:16:28 Zarutian: try ftp://taygeta.com/pub -- it's an FTP site. The web pages have been completely redone recently...
12:17:28 zarutian, you mean how threaded code is executed?
12:19:18 Quartus_: yes, but I am thinking of how the address interpreter makes a distinction between an address that points to machine code and an address that points to an array of addresses.
12:20:45 oh ok. That's a much more specific question :). Different ways to handle it. One way would be to have a specific bit set in the machine-code addresses.
12:21:42 another would be to have all words considered machine-language primitives, but those that are address lists have the list interpreter as their machine code.
12:21:46 another would be to have inline strings "machine code:" "address-array:"
12:22:35 yes, I believe the most straightforward implementation is not to have address lists, but to generate calls instead, and inline shorter sequences.
12:22:40 Quartus_: or checking if an address is below a certain constant: if it is, then it is machine code; if it isn't, then it is an array of addresses
12:23:03 (then it points to machine code)
12:23:15 zarutian, that would be another way. As long as there's some sort of differentiator.
12:24:08 it can be done in the execute loop by identifying which is which, or in the words themselves, or the need can be done away with completely by generating native code.
12:24:15 --- join: swsch (n=stefan@pdpc/supporter/sustaining/swsch) joined #forth
12:25:41 Quartus_: but by generating native code (that is, the code address in the dictionary entry structure always points to a block of machine code) one would lose some compactness, no?
12:26:46 no, in fact it turns out to be slightly smaller on x86, as while a call is 5 bytes, many inlined primitives are shorter than that.
12:27:05 --- part: swsch left #forth
12:27:14 smaller on 68k, too. Definitely faster than threading wherever.
12:30:17 Quartus: x86 and 68k are both CISC architectures, yes?
12:30:24 zarutian, your notion of identifying primitives by address would prohibit the creation of any new primitives, as with a Forth assembler.
12:31:08 zarutian, yes. My experiments with ARM suggest it continues to hold true there, too.
12:31:52 certainly it does for speed. Conditional execution, in ARM, can be leveraged to great effect.
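A toy model of Zarutian's question, written in high-level Forth: a real address interpreter is a few machine instructions (the NEXT routine), but its control flow can be mimicked with an array of xts and EXECUTE. RUN-LIST, 'IP and DEMO are made-up names, and this sidesteps the primitive-vs-list distinction entirely by storing xts:

    variable 'ip                        \ the toy "interpretation pointer"
    : run-list ( list-addr -- )
        'ip !
        begin  'ip @ @  ?dup            \ fetch next xt; zero terminates
        while  1 cells 'ip +!  execute  \ advance, then run the word
        repeat ;

    create demo  ' dup ,  ' * ,  ' . ,  0 ,   \ a "definition": square and print
    5 demo run-list                            \ prints 25

Because 'IP here is a single variable, nesting one list inside another would clobber it; saving and restoring it for nested calls is exactly what the return stack does in a real threaded-code system.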
12:32:04 Quartus_: re id prim by addr: yes indeed, just was thinking how most address interpreters would do it
12:33:14 the most common traditional approach is to have the code field point to either the primitive itself, or the address interpreter.
12:33:49 aah, that is more sensible when one comes to think about it
12:34:35 Increasingly, modern implementations generate optimized native code, with gforth as a notable exception.
12:35:42 Quartus_: what optimizations are utilized (if you don't know, go by your gut instinct)
12:36:09 inlining, regalloc, mostly
12:36:33 and some peepholing of course
12:37:19 Simplest: inlining, first and foremost. Tail-call elimination. Literals, constants, variables, all inlined. Short branch compilation. Strength reduction.
12:37:52 yeah, strength reduction too
12:37:57 after that, peephole optimization, constant folding. After that, register allocation optimization.
12:38:02 Strength reduction? I haven't heard of that term before, so would you care to enlighten me?
12:38:28 as in 16 * becoming 4 lshift
12:38:46 that's peepholing, not strength reduction
12:39:31 Zarutian - http://www.cs.rice.edu/~keith/EMBED/OSR.pdf
12:39:35 strength reduction is replacing a multiplication by a loop index with repeated addition
12:39:41 Segher, you're so helpful :) you may need peepholing to catch some strength-reduction opportunities, but replacing a multiply by a shift is most certainly strength reduction.
12:39:46 (for example)
12:40:25 quartus_: no it isn't, but i'm not going to fight over it :-)
12:41:14 you can call it "instruction selection" instead if you want, though
12:41:22 segher_ - or 'operator strength reduction'.
12:41:47 any replacement of a complex instruction sequence with a simpler sequence that has the same effect is strength reduction. Perhaps you learned a different definition.
12:42:43 that definition is so generic that all peepholing (and most other compiler optimisations, for that matter) fall under it as well
12:43:09 but sure, yeah, let's blame it on different terminology and be done with it :-)
12:43:17 sure, that's true. It's still what strength reduction means, though.
12:43:25 peepholing is most interesting when it performs strength reduction that reduces an instruction sequence to the 'no-instruction' sequence that produces the same effect.
12:44:44 commonly strength reduction is used to describe exactly what I had in my example: replacing a multiply by a shift, or a divide by a reciprocal multiply, or an addition by a series of increments, that sort of thing.
12:44:55 but probably also peepholing:strength reduction::polymorphism:dispatch
12:45:57 MPE sells compilers that do some impressive optimization. As I recall they also do source-level inlining.
12:46:31 editing-time strength reduction would be cute.
12:47:13 Forth encourages that, with 1+ and 2* and 2/ etc.
12:47:22 http://lgxserver.uniba.it/lei/foldop/foldoc.cgi?strength+reduction
12:48:13 Quartus - I mean, if you typed "1 +" and your editor replaced that with "1+"
12:48:21 I can't easily follow that reference from this gadget right now. Wikipedia has a page on compiler optimizations that defines strength reduction as I'm using the term.
12:48:42 the wikipedia page is actually a copy from foldoc, heh
12:48:48 ayrnieu, I would quickly stop using any such editor. That's a task for the compiler.
12:49:03 Quartus - it would be cute, in any case.
12:49:22 if by cute you mean horrible, sure :)
12:49:33 most cute things are also horrible.
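The "16 * becomes 4 lshift" example from above, written out as plain Forth. The rewrite would normally be done by the compiler, not by hand; the word names here are made up:

    : scale16-mul   ( n -- n*16 )  16 * ;       \ multiply form
    : scale16-shift ( n -- n*16 )  4 lshift ;   \ cheaper equivalent
    3 scale16-mul .     \ 48
    3 scale16-shift .   \ 48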
12:50:03 "cute" and "an actual good idea" are quite different things :-) 12:50:59 that foldoc page limits strength reduction to iterative cleverness, where the loop is restructured to allow strength reduction of the expression that increments the 'loop variable'. 12:51:50 Loop-invariant code motion, is a term that may be applicable to that. 12:52:10 but FOLDOC is _the_ authorative reference for such things, it must be right :-) 12:52:24 there's lots of terms for different sorts of optimization, and some overlap. 12:52:28 segher - there are no authoritative dictionaries anywhere. 12:52:30 code motion is different\ 12:52:56 ayrnieu: there was a smily there -- here, have another one --> :-) 12:53:20 segher - go read the Illuminatus! trilogy, heathen. 12:53:54 which is that 12:54:45 anyway, zarutian, segher's pedantry notwithstanding, I'm talking about replacing something 'strong' like a multiplication with something equivalent but 'weaker', albeit faster and/or smaller, like a shift. 12:55:44 all along, we should've been saying 'simplification'. "Another common optimization is simplification." :-) 12:56:51 Really, everything we say about computers (including 'computers') is more practically an undefined term, as even the people who coin the terms hardly know what they mean by them. Look at these real-word terms: Operating System. Interactive Development Environment. Open Tool Platform. 12:58:20 at least data structures have nice names. 13:01:47 --- join: Quartus__ (n=Quartus_@209.167.5.1) joined #forth 13:02:19 Some technical difficulty. Not sure what I said last that made it out. 13:03:58 Quartus - the last thing I saw was "albeit faster and/or/ smaller, like a shift." 13:04:23 ah ok. I missed anything subsequent to that. 13:09:19 --- join: swsch (n=stefan@pdpc/supporter/sustaining/swsch) joined #forth 13:11:34 --- quit: Quartus_ (Read error: 104 (Connection reset by peer)) 13:12:49 --- part: swsch left #forth 13:48:16 --- quit: ASau (Read error: 104 (Connection reset by peer)) 13:48:52 Optimization is not necessarily simplification. Neither is strength-reduction necessarily simplification. Replacing a given calculation by a sequence of equivalent, but faster and/or smaller instructions, may complicate it furiously and make it hard to follow, by looking at the disassembled result, what calculation is actually being performed -- it's an optimization nonetheless. 14:02:47 --- join: JasonWoof (n=jason@unaffiliated/herkamire) joined #forth 14:02:47 --- mode: ChanServ set +o JasonWoof 14:15:37 --- quit: Ray_work ("User pushed the X - because it's Xtra, baby") 14:18:25 --- quit: Cheery ("Download Gaim: http://gaim.sourceforge.net/") 14:56:49 --- join: Anbidian (i=anbidian@S0106000fb09cff56.ed.shawcable.net) joined #forth 15:08:49 --- quit: snowrichard ("Leaving") 17:09:16 --- quit: virsys (Read error: 104 (Connection reset by peer)) 17:26:09 --- join: virsys (n=virsys@or-71-53-74-48.dhcp.embarqhsd.net) joined #forth 17:43:41 wow 17:44:23 Moore: "ANS Forth standard doesnt describe Forth, but a langauge with the same name." 17:44:43 on /. 2006 Sept 14 17:45:05 article "Chuck Moore holds Forth" 17:45:16 Really. six days from now; you must get the advanced feed. 17:45:27 oops 17:45:32 2001 17:45:33 heh 17:45:33 that's day 2006 of September 1400. 17:46:03 i'm watching star trek on tvland. 
17:52:23 --- join: lectus (n=conta@20132237140.user.veloxzone.com.br) joined #forth
18:09:49 --- part: lectus left #forth
18:12:50 --- join: lectus (n=conta@20132237140.user.veloxzone.com.br) joined #forth
18:12:59 How do I learn Forth?
18:13:13 read
18:13:21 www.taygeta.com
18:13:27 and google
18:17:02 lectus, there are several good links in the topic.
18:17:47 taygeta.com is a random collection of mostly very old documents, and is not a teaching resource.
18:21:53 --- nick: lukeparrish -> docl
18:23:51 --- nick: docl -> lukeparrish
18:28:17 --- part: lectus left #forth
18:40:17 --- quit: Zarutian (Read error: 104 (Connection reset by peer))
18:48:18 --- join: Zarutian (n=Zarutian@194-144-84-110.du.xdsl.is) joined #forth
19:01:53 --- quit: vatic (Remote closed the connection)
21:12:48 --- join: segher__ (n=segher@dslb-084-056-129-199.pools.arcor-ip.net) joined #forth
21:14:01 --- quit: segher_ (Read error: 60 (Operation timed out))
22:38:49 --- quit: Anbidian ()
23:18:34 --- quit: Quartus__ (Read error: 104 (Connection reset by peer))
23:59:59 --- log: ended forth/06.09.08