00:00:00 --- log: started forth/04.01.27
00:12:33 Well, I think I'm at a good stopping point for this evening.
00:12:39 Didn't make my goal, but I'm *fairly* close.
00:13:12 I got it to parse words and display them in order of entry on the line, regardless of line-breaks. Elegantly handles trailing white-space too.
00:15:33 --- quit: kc5tja ("THX QSO ES 73 DE KC5TJA/6 CL ES QRT AR SK")
00:24:33 --- join: Nutssh (~Foo@gh-1029.gh.rice.edu) joined #forth
00:24:57 --- quit: Nutssh (Client Quit)
00:34:26 --- part: imaginator left #forth
00:38:57 --- quit: Serg ()
02:42:25 --- join: Teratogen (leontopod@intertwingled.net) joined #forth
02:57:36 say, icon is not a bad little language
02:57:38 =)
03:02:04 --- join: arke (~Chris@wbar8.lax1-4-11-099-104.dsl-verizon.net) joined #forth
03:03:36 --- join: qFox (C00K13S@cp12172-a.roose1.nb.home.nl) joined #forth
03:14:27 --- quit: arke (Read error: 104 (Connection reset by peer))
04:11:14 --- join: TheBlueWizard (TheBlueWiz@207.111.96.29) joined #forth
04:11:14 --- mode: ChanServ set +o TheBlueWizard
04:11:41 hiya all
04:58:39 gotta go...bye all
04:58:46 --- part: TheBlueWizard left #forth
05:51:50 --- quit: [Forth] ("abort" Reality Strikes Again"")
05:54:49 --- quit: I440r ("going to work")
06:56:51 --- join: downix (~downix@adsl-219-34-197.mia.bellsouth.net) joined #forth
07:02:42 --- join: arke (~Chris@wbar8.lax1-4-11-099-104.dsl-verizon.net) joined #forth
07:20:55 --- join: Herkamire (~jason@h000094d30ba2.ne.client2.attbi.com) joined #forth
09:35:16 --- join: kc5tja (~kc5tja@66-91-231-74.san.rr.com) joined #forth
09:35:27 --- mode: ChanServ set +o kc5tja
09:37:02 Hi
09:37:08 hey Sam
09:37:14 Hello.
09:37:24 Made some more progress on FTS/Forth yesterday.
09:37:34 It doesn't yet interpret, but it does have a proper read/evaluate loop now.
09:37:36 cool
09:38:59 What architecture is that Forth system made for?
09:39:06 The bad news is that FTS/Forth has about 1/10th the speed of fully optimized C code, and 1/2 the speed of generic C code.
09:39:13 downix: x86
09:39:21 Oh, nice.
09:39:40 warpzero wants me to port to PowerPC, which I will probably take him up on once I get FTS/Forth sufficiently bootstrapped.
09:40:04 Will you implement better optimization later on?
09:40:26 Yes, my goal is to re-implement the various peephole optimizers in a later generation of the code.
09:40:50 I'm also planning on making more words machine language primitives (some are inlined, some are called).
09:41:13 hey now
09:41:26 A free and fast Forth system for x86, this could be good. :)
09:41:49 Robert: Built on the philosophies of cmForth and PygmyForth, too.
09:42:13 But, remember that it's *not* cmForth or PygmyForth.
09:42:18 What? Minimalism?
09:42:21 Very.
09:42:46 For example, this is how I define four variables in FTS/Forth right now (because of bootstrapping purposes, but still...):
09:42:53 : buffer r> ;
09:43:00 : blk buffer [ 0 , 0 , 0 , 0 ,
09:43:04 : >in blk 4 + ;
09:43:08 : src blk 8 + ;
09:43:11 : #src blk 12 + ;
09:43:58 I could make >in, src, and #src their own 1-cell "buffers", but that actually takes up more space in the block than can actually fit.
09:44:31 I'll have more room to fix that when I refactor the interpreter loop to support nested blocks.
09:44:58 Are you going to use raw blocks for storing files and source code?
09:45:20 FTS/Forth currently is being built for Linux. Blocks will be kept in a "blocks.fb" file.
09:45:39 Oh.
09:45:42 When FTS/Forth is ported to run on raw hardware, then blocks will be used to touch actual hardware devices directly.
09:47:42 In fact, I really can't wait for that day, because then the overhead involved with even Linux disappears.
09:48:08 :)
09:48:31 Even if Linux is running only one process, there is still a lot of run-time overhead.
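[editor's note] The `: buffer r> ;` definitions above work because a CALL pushes the address of whatever follows it, which is exactly where the inline `0 , 0 , 0 , 0 ,` cells live. Below is a toy Python simulation of that trick; the names, cell-addressed memory, and layout are illustrative assumptions, not FTS/Forth source:

```python
# Toy model of ": buffer r> ;" -- buffer pops its own return address onto
# the data stack, so the inline cells following the call become the buffer,
# and the implicit exit returns to blk's caller, skipping the data.

MEM = {}
dstack, rstack = [], []

def buffer():
    # : buffer r> ;  -- move our return address to the data stack;
    # the implicit EXIT then returns to blk's caller instead of to blk.
    dstack.append(rstack.pop())

BLK_DATA = 1000                      # where blk's inline cells were comma'd in
for i in range(4):
    MEM[BLK_DATA + i] = 0            # the [ 0 , 0 , 0 , 0 , part

def blk():
    # : blk buffer [ 0 , 0 , 0 , 0 ,
    # compiled as CALL buffer followed immediately by four data cells,
    # so the return address pushed by the call *is* the data address.
    rstack.append(BLK_DATA)
    buffer()

blk()
assert dstack.pop() == BLK_DATA      # blk leaves the buffer's address
MEM[BLK_DATA + 1] = 7                # ">in" is the next cell over (blk 4 + with byte addressing)
```

In the real system the offsets are byte offsets (blk 4 +, blk 8 +, blk 12 +) because cells are 4 bytes; here memory is cell-addressed for brevity.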
09:48:48 The kernel will periodically try to change processes, which will cause the data cache to thrash a bit.
09:49:20 And with modern CPUs taking *thousands* of cycles to fetch memory nowadays, that can cause noticeable slowdowns.
09:49:32 Some OSes are better than others with regard to this, obviously.
09:49:58 Marcel Hendrix did some comparison tests between Linux and Windows 98, and noticed that the *same binaries* run an average of 15 to 20% slower under Windows 98.
09:50:17 Windows NT has no noticeable sluggishness compared to Linux, but I'm willing to bet you some 20% slowdown compared to raw metal.
09:51:00 20% of 800MHz (for my box) is still 160MHz. :) Just running without an OS would make my system compare to a 960MHz computer in performance.
09:52:33 * kc5tja notes that one of the reasons the Cray supercomputers can get such insanely fast computation speeds is precisely because they *LACK* memory management hardware. So an 800MHz Cray X1 can compete rather admirably with other supercomputers running in the 1GHz to 2GHz class.
09:55:14 Seymour Cray strongly believed in explicitly managed storage, and the sheer speed of PC/GEOS and block-based Forth systems strongly suggests this is the proper way to go.
09:55:45 kc5tja: how do you make sure your peephole optimizations don't move stuff across a point where an IF branches to?
09:56:18 The peephole optimizers work *before* a primitive is inlined.
09:56:30 Let's consider an example, which best explains it.
09:56:41 Suppose I have a primitive called # which compiles a literal.
09:56:43 eg if you have "drop dup" optimized out and you compile this code: "if drop then 1"
09:57:28 First, you can't optimize out drop dup. :)
09:58:13 "dup drop" then?
09:58:15 The best you can hope for is to replace the drop with "mov eax,[esi]" to get the same effect.
09:58:55 The compiler maintains pointers to the previous and current locations where it's compiling to.
09:59:07 For every word it sees, it copies the current location to the previous location.
09:59:32 Thus, after THEN is executed by the compiler, which we know doesn't compile any code, the "current" and "previous" code pointers will still point to the same location.
09:59:40 (which is where the literal 1 will go)
10:00:17 So when the DUP part of literal examines memory, it won't find the instance of DROP, because the previous code pointer is no longer pointing to it.
10:01:00 interesting
10:01:20 However, I've been considering taking a slightly alternative approach to the compiler with FTS/Forth.
10:01:34 yeah.
10:01:35 Chuck does his magic by maintaining those two pointers, and examines actual byte sequences in memory.
10:01:49 * kc5tja is thinking of maintaining a bitmap that records what the last instruction was.
10:01:58 E.g., if + was compiled, it'd set the "+" bit, etc.
10:02:27 maybe I will do that cool optimizing thing where you keep the top few stack items in whatever registers are open and you compile your inlined primitives to use whatever registers they are in
10:02:27 This makes the peephole optimizer implementation slightly easier (and certainly smaller) at the expense of greater care needed to code primitives.
10:03:12 yeah
10:03:24 well put, "greater care"
10:03:43 I don't know that it would really be harder, but I imagine I would have to be careful
10:04:42 Well, if I have to be careful, it's probably going to be harder in some way (harder to debug, harder to implement, whatever).
10:04:56 In this case, it'd make it harder to debug.
10:05:08 I try to ignore optimizing (because I really like to do it, and it's completely unnecessary at this point), but when I ran that benchmark, I realized that I had about a 6X slowdown because of stack use
10:05:17 I also think it frees me up to do better optimization than Chuck currently does, because it can span a larger number of primitives.
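[editor's note] The previous/current-pointer scheme above can be sketched in a few lines of Python. This is a hypothetical model, not FTS/Forth's generator: the "drop followed by a literal collapses to `mov eax,n`" fusion rule and the instruction strings are illustrative assumptions, but the key point (THEN resets the look-back pointer so nothing fuses across a branch target) matches the description:

```python
# Toy model of the two-pointer peephole scheme: each compiling word records
# where it compiled to; a branch target (THEN) forgets that record so the
# next word cannot fuse with code that might be skipped at run time.

code = []
prev = None          # index of the previously compiled instruction

def emit(ins):
    global prev
    prev = len(code)
    code.append(ins)

def compile_drop():
    emit("mov eax,[esi] / add esi,4")        # pop: TOS <- next-on-stack

def compile_literal(n):
    global prev
    # the "dup" half of a literal looks back one instruction: a drop
    # followed by a push of n nets out to just overwriting TOS with n
    if prev is not None and code[prev].startswith("mov eax,[esi]"):
        code[prev] = f"mov eax,{n}"
        return
    emit(f"sub esi,4 / mov [esi],eax / mov eax,{n}")

def compile_then():
    global prev
    prev = None      # THEN compiles nothing, but it IS a branch target

# "if drop then 1": the drop may be skipped at run time, so no fusion
compile_drop(); compile_then(); compile_literal(1)
assert len(code) == 2

# straight-line "drop 1" fuses into a single instruction
code.clear(); prev = None
compile_drop(); compile_literal(1)
assert code == ["mov eax,1"]
```

The fusion is sound because drop's `add esi,4` and the literal's `sub esi,4` cancel, and the store writes back the cell the drop just read.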
10:05:30 yeah :)
10:05:47 I was avoiding this concept because I thought it would make my system more complex
10:05:57 but I think I could do it so it wasn't any harder to make primitives
10:06:03 Oh, it definitely makes things more complex.
10:06:04 I'm using register names as it is
10:06:07 No question there.
10:06:09 sure
10:06:20 but it might even make it easier to make primitives.
10:06:22 But the payoff for peephole optimization is huge compared to its implementation complexity.
10:07:09 Now, a dataflow optimizer...that's something that's really complex to implement (lots of mutual recursion too, so make sure you have a large return stack!), but the payoff relative to its complexity may not be nearly as large.
10:07:19 But I'm willing to try it in the future, as a need arises.
10:07:29 my primitives that use two registers would be simplified (in source form anyway)
10:08:23 s/registers/stack items/
10:08:44 One of the biggest optimizations I'm going to put in FTS/Forth is the auto-detection of CREATE-d words (which includes VARIABLEs), turning them into literals (complete with all the subsequent peephole optimizations possible with them).
10:09:01 :)
10:09:12 I did something similar.
10:09:25 Words which have been modified with DOES> will not be detected with this scheme (which is a GOOD thing) because the target address of the leading CALL instruction will not point to (do-create).
10:09:30 my compiler can't tell the difference between numbers, constants, and variables.
10:09:45 and I don't have create. I just do HERE CONSTANT XXX
10:10:02 (except that CONSTANT is a color)
10:10:09 * kc5tja nods
10:10:47 Well, remember that I'm bootstrapping FTS/Forth, so bizarre constructs like : blk buffer [ 0 , 0 , 0 , 0 , will likely be replaced with create blk 0 , 0 , 0 , 0 , soon enough anyway
10:12:23 I also think that FTS/Forth will eventually split its code and data spaces distinctly.
10:12:29 what do you call the item under TOS?
10:12:41 The second top of stack.
10:12:53 2tos 3tos
10:14:06 I'm also thinking of rolling the interpreter main-loop and compiler loop into the same loop in FTS/Forth, kind of like how ANSI Forth is built.
10:14:14 But I won't have a STATE variable.
10:14:38 [ and ] will change two pointers: one is a pointer to a word-classification word, and the other is a pointer to a jump table.
10:15:53 iclassify will be a word that classifies for interpret-mode, while cclassify will do the same for compile-mode.
10:16:11 (the only difference between the two is that cclassify will check the compiler vocabulary before dropping into iclassify. :))
10:17:10 This opens up a lot of possibilities to me. Especially a *LOT* of text-processing applications. I'm very excited at the prospect of this.
10:17:24 Coding my website maintenance package in FTS/Forth may not be as hard as I anticipated if I implement this.
10:18:08 what does iclassify do?
10:18:16 Interpret-mode Classify
10:18:30 ( caddr u -- caddr u 0 | xt 1 | n 2 )
10:18:31 what does classify do?
10:18:57 it either returns the word verbatim (word not found), returns its XT (found in FORTH vocabulary), or returns a number (it was a number).
10:18:59 Undefined, word, number
10:19:08 nice
10:19:17 cclassify would be this: ( caddr u -- caddr u 0 | xt 1 | n 2 | xt 3 )
10:19:28 where option 3 is (it was found in the COMPILER vocabulary)
10:19:30 another identifier for immediates
10:20:07 Although, I'm going to return the constants 0, 4, 8, and optionally 12 too (e.g., they're pre-scaled for jump table use on a 32-bit system). This removes two shift operations. :)
10:20:37 identifier for immediates? What do you mean?
10:20:40 Oh, sorry.
10:20:53 For some reason I was thinking of assembly language when you said 'immediates.' :)
10:21:07 :)
10:21:29 And when you think about it, iclassify is built up from primitives too.
10:21:33 We can start at the root:
10:21:36 : undef 0 ;
10:21:53 Then make a word find-forth that returns (xt 1) if found, else drops into undef.
10:22:09 Then the word number, which tries find-forth first, and if undefined, tries to convert the word to a number, etc.
10:22:48 The exact ordering isn't yet set of course, nor are the table indexes. But you get the general idea.
10:23:09 mmmmm.... it seems to me that adding an optimizer that handles immediates to the stack-optimizing stuff might make it a bit messy
10:23:31 Ummm
10:23:40 I mean so "4 +" gets compiled as: addi tos, tos, 4
10:23:40 Nearly every primitive in FTS/Forth is coded as an "immediate."
10:23:47 Yes.
10:23:56 The code for + is responsible for checking to see if a literal was compiled first.
10:24:08 It is also responsible for letting the compiler know that a + was compiled.
10:24:17 This way words like @ can emit proper instruction encodings
10:24:24 E.g., 4 + @ ==> mov eax,[eax+4]
10:24:36 right
10:26:48 I also intend on optimizing the 2* and 2/ words big-time.
10:26:49 --- quit: proteusguy (Connection reset by peer)
10:27:17 2* 2* 2* 2* should expand to SHR EAX,4, not four instances of SHR EAX,1, like it currently does. :)
10:27:23 err
10:27:26 SHL rather
10:27:29 :)
10:27:54 Why not simply let the user write "4 <<" or something and then optimize that?
10:28:18 I can, I suppose.
10:30:00 I am imagining a system where when you compile "4" it does not compile anything, it just keeps track of it (at compile time), and when you go to compile a primitive, you check if TOS has a constant value, or a register, and compile a different instruction for each
10:30:27 You're getting into dataflow optimization.
10:30:43 That's anybody's ultimate goal.
10:31:03 Taken to the extreme, colon definitions don't even emit code.
10:31:10 They just sit in a compiler-maintained database.
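[editor's note] The classify-plus-jump-table design discussed above (with pre-scaled 0/4/8/12 return values so the dispatch needs no shifts) can be sketched in Python. The vocabularies and handler bodies here are made-up examples; only the return-value scheme follows the log:

```python
# Toy model of iclassify/cclassify returning pre-scaled jump-table offsets.
# On a 32-bit system, class indices come back already multiplied by the
# cell size, so dispatch is just "table + index @ execute" -- no shifts.

CELL = 4

FORTH = {"dup", "swap", "+"}     # illustrative FORTH vocabulary
COMPILER = {"if", "then"}        # illustrative COMPILER vocabulary

def iclassify(word):
    # ( caddr u -- caddr u 0 | xt 4 | n 8 )
    if word in FORTH:
        return ("xt:" + word, 1 * CELL)
    try:
        return (int(word), 2 * CELL)
    except ValueError:
        return (word, 0 * CELL)

def cclassify(word):
    # checks the COMPILER vocabulary first, then drops into iclassify
    if word in COMPILER:
        return ("xt:" + word, 3 * CELL)
    return iclassify(word)

def dispatch(table, word, classify):
    value, index = classify(word)
    return table[index // CELL](value)   # the // CELL is only for Python lists

interpret_table = [
    lambda w: f"undefined: {w}",
    lambda xt: f"execute {xt}",
    lambda n: f"push {n}",
    lambda xt: f"execute immediately {xt}",
]

assert dispatch(interpret_table, "dup", iclassify) == "execute xt:dup"
assert dispatch(interpret_table, "42", iclassify) == "push 42"
assert dispatch(interpret_table, "frobnicate", iclassify) == "undefined: frobnicate"
assert cclassify("if")[1] == 12
```

Swapping `[` / `]` then amounts to swapping which classify word and which table the single read/evaluate loop points at, with no STATE variable.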
10:31:29 for "+" there are three cases: 1) both are registers (values not known at compile time): add instruction 2) one value is a number: addi instruction 3) both are numbers: don't compile anything, just add them and keep track
10:31:33 Until you actually use one of the words; then code for that word, and all words it depends on, gets globally optimized as a group, and code emitted on-demand.
10:31:51 spiffy
10:32:20 * kc5tja notes that code so generated competes with the likes of *highly* optimized C very easily.
10:32:31 :) :)
10:32:44 And, in fact, draws much from the arcane arts of optimizing functional programming languages.
10:32:56 :)
10:33:28 I wonder how many times I can move a value between registers in the time it takes to fetch a word from the data cache
10:33:47 If cache is on-chip, one.
10:34:01 Maybe two.
10:34:20 Depends on how many integer pipelines you have, and whether the CPU's scoreboard can properly execute them out of order.
10:35:10 weird
10:35:22 That's the thing with functional languages: due to referential transparency, there is *zero* distinction between a macro and a function.
10:35:51 A definition can be compiled, or it can be inlined -- you don't know, and it doesn't matter to the program -- it'll still work either way.
10:36:23 Forth, being a concatenative language, exhibits this property to a very, very large extent.
10:36:38 You can easily change any arbitrary colon definition into a compiler macro, and lo and behold, the program still works. :)
10:36:50 Sometimes, but not always, vice versa too.
10:36:56 E.g., : rdrop r> r> drop >r ;
10:37:10 is an example of a "primitive" turned colon-definition that works just fine.
10:37:32 There are a couple of gotchas that you need to look out for (especially in FTS/Forth, which does tail-call optimization).
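[editor's note] The three-case "+" above is a compact statement of compile-time constant tracking, and is easy to sketch. This is a hypothetical Python model: register names and the addi/add mnemonics are illustrative, but the three cases are exactly the ones listed in the log:

```python
# Toy model of the three-case "+": track compile-time knowledge of each
# stack item (a known number, or a run-time register). Two known numbers
# fold away entirely; one known number uses an immediate add; otherwise
# a plain register add is emitted.

class Comp:
    def __init__(self):
        self.stack = []            # compile-time stack: ints or register names
        self.code = []
        self.free = ["r1", "r2", "r3"]

    def lit(self, n):
        self.stack.append(n)       # compile nothing yet; just remember the value

    def runtime_value(self):       # something only known at run time
        r = self.free.pop(0)
        self.stack.append(r)
        return r

    def plus(self):
        b, a = self.stack.pop(), self.stack.pop()
        if isinstance(a, int) and isinstance(b, int):
            self.stack.append(a + b)               # case 3: pure folding
        elif isinstance(b, int):
            self.code.append(f"addi {a},{a},{b}")  # case 2: immediate form
            self.stack.append(a)
        elif isinstance(a, int):
            self.code.append(f"addi {b},{b},{a}")
            self.stack.append(b)
        else:
            self.code.append(f"add {a},{a},{b}")   # case 1: register form
            self.stack.append(a)

c = Comp()
c.lit(2); c.lit(3); c.plus()
assert c.stack == [5] and c.code == []             # "2 3 +" emits nothing

c = Comp()
c.runtime_value(); c.lit(4); c.plus()
assert c.code == ["addi r1,r1,4"]                  # "x 4 +" -> one addi
```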
10:37:53 But for the overwhelming majority, Forth can be considered a functional language *in this sense.*
10:38:09 This is why FL-type optimization schemes and compiler schemes map so well to Forth.
10:41:49 You know, these non-destructive IF statements are coming in rather handy for some things.
10:42:16 99% of the time, I use drop if or drop -if, but (especially for debugging) the non-destructive if holds its own when coding the interpreter. :)
11:07:07 * warpzero is back (gone 12:53:03)
11:08:29 * warpzero is away: Well I did not think the girl could be so cruel, and I
11:09:13 * warpzero is back (gone 00:00:44)
11:09:38 --- join: slava (~slava@CPE0080ad77a020-CM.cpe.net.cable.rogers.com) joined #forth
11:09:40 hi all
11:09:47 * warpzero is away:
11:09:48 i added a stack-effect checker to my language's compiler
11:10:05 * warpzero is away:
11:10:18 --- quit: warpzero ("Well I did not think the girl could be so cruel, and I'm never going back to my old school.")
11:10:34 ok> "testing" see
11:10:34 : testing
11:10:34 ( compiled: I I I -- O O ) 2 * + 3 swap / - 5 * sq dup ;
11:11:03 --- join: warpzero (~warpzero@dsl.142.mt.onewest.net) joined #forth
11:11:28 slava: Nice.
11:12:21 * warpzero is away: Well I did not think the girl could be so cruel, and I'm never going back to my old school.
11:12:30 That, actually, is the beginning of a dataflow optimizer.
11:13:12 kc5tja, of course in many cases it is not possible to deduce the stack effect (anything involving control flow).
11:13:24 kc5tja, the plan is to, in some cases, avoid generating any code for 'dup', 'swap', etc.
11:19:52 Anything involving control flow can have its stack effects deduced as long as things remain balanced inside and outside of the control flow.
11:20:24 kc5tja, like an 'if' with both branches having the same effect, right?
11:20:33 kc5tja, i was thinking about how i could deduce the stack effect for recursive functions.
11:20:44 Words which have unbalanced stack effects are marked as such for special treatment by the optimizer and compiler.
11:21:07 slava: Yes, or a while-loop where the input and output stack effects are the same.
11:21:50 kc5tja, my plan is to use the jvm stack for definitions that fit a certain form -- where the input effect is known, and the output is either 1 or 0 (this is a limitation of the JVM).
11:22:09 it should be a bit faster than the current way, which uses an array for a stack.
11:22:24 You mean the number 1 or the number 0, or 0 or 1 stack elements?
11:22:29 0 or 1 elements
11:22:39 java methods can only return 1 value, or nothing at all.
11:22:48 * kc5tja nods
11:22:53 eg, right now the compiled form of a word like '+' is:
11:23:00 data stack -> jvm stack
11:23:02 data stack -> jvm stack
11:23:06 call + library routine
11:23:10 jvm stack -> data stack
11:23:22 (the + is a library routine to handle bignums, etc)
11:23:28 * kc5tja nods
11:23:44 so if you write a word that has a lot of arithmetic, it's a waste shuffling literals, results, everything back and forth.
11:24:47 i think when the compiler is done, the performance will beat any other language hosted on top of the JVM, except java.
11:25:04 the main problem is that i'm not sure i'll ever be able to implement arithmetic without boxing.
11:29:16 You know, all things considered, I'm surprised my Forth compiler produces code that runs as fast as it does at all.
11:29:19 :)
11:29:30 I'm looking at this code, and I realize just how hhoorriibbllee it is. :)
11:29:38 It's *REALLY* bad. :)
11:31:55 fts/forth?
11:32:17 i'm sure the generated code is not as bad as mine :)
11:33:17 Are you sure about that? :)
11:33:26 I do absolutely zero peephole optimization whatsoever.
11:33:49 * slava doesn't even know what peephole optimization is
11:37:19 Eliminates duplicate or unnecessary code on a local scale.
11:38:33 oh ok.
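[editor's note] The stack-effect checker slava pasted above (which inferred `( I I I -- O O )` for his `testing` word) can be sketched for the straight-line case. This is a hypothetical model, not Factor's code; the effect table is an illustrative assumption:

```python
# Toy model of a stack-effect checker: walk a definition left to right,
# apply each primitive's known ( consumed -- produced ) effect, and track
# the deepest shortfall -- that is how many inputs the word requires.

EFFECTS = {          # word: (consumed, produced)
    "dup": (1, 2), "swap": (2, 2), "drop": (1, 0),
    "+": (2, 1), "-": (2, 1), "*": (2, 1), "/": (2, 1), "sq": (1, 1),
}

def infer(words):
    """Return (inputs, outputs) for a straight-line definition."""
    depth = 0        # stack depth relative to the starting point
    inputs = 0       # deepest dip below the start seen so far
    for w in words:
        consumed, produced = EFFECTS[w] if w in EFFECTS else (0, 1)  # a number
        depth -= consumed
        inputs = max(inputs, -depth)
        depth += produced
    return inputs, inputs + depth

# : testing 2 * + 3 swap / - 5 * sq dup ;   should be ( I I I -- O O )
assert infer("2 * + 3 swap / - 5 * sq dup".split()) == (3, 2)
```

As the log notes, extending this past straight-line code requires both branches of an `if` (or a loop body) to have matching effects, and unbalanced words need to be flagged for special handling.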
11:38:44 E.g., 2* 2* 2* normally would compile to SHL EAX,1 : SHL EAX,1 : SHL EAX,1 in FTS/Forth.
11:38:53 Once peephole optimization is in, it'll compile to SHL EAX,3. :)
11:39:16 hmm
11:39:26 this doesn't seem very forthish to me :)
11:39:27 the name "peephole" comes from the fact that the optimizer looks at a sliding window of code
11:39:41 why not just 3 ^2, with a suitable ^2 ?
11:39:50 oops
11:39:59 slava: Because a bit-shift doesn't compute a power, it computes a multiple.
11:40:15 i just realized that, but i mean, you can write a word *8 or something.
11:40:21 And 2* 2* 2* to compute 8 * is very Forthish.
11:40:24 Well, we normally do:
11:40:28 : 8* 2* 2* 2* ;
11:40:31 right.
11:40:49 slava: so you have a word that still emits the same code he pasted..
11:40:58 But I'm just saying, why would we want to compute three 1-bit shifts when the peephole optimizer can just replace that whole sequence with a single 3-bit shift?
11:41:14 its true, but wouldn't the optimizer then need to have intimate knowledge of 2*?
11:41:20 or does it work at the instruction level?
11:41:34 slava: More accurately, 2* does its own peephole optimization.
11:41:44 The compiler doesn't know nor care about what or how 2* works.
11:42:10 ok
11:42:28 : 2* previous-shift? if inc-shift-count ;; then emit-opcode ;
11:42:56 interesting
11:43:06 what is ;;
11:43:14 Whereas right now, FTS/Forth's code for 2* is just : 2* emit-opcode ; . :)
11:43:37 and emit-opcode is defined, how?
11:43:37 ;; is my word for EXIT, because I use it so often (note I lack an ELSE clause to IF; I also lack BEGIN, WHILE, REPEAT, AGAIN, and UNTIL).
11:43:46 Well, I was giving a template.
11:43:47 If you must know...
11:44:01 : 2* $d1 c, $e0 c, ;
11:44:07 : 2/ $d1 c, $f8 c, ;
11:44:56 oh right.
11:45:00 :)
11:45:02 i thought you had some kind of magic emit-opcode :)
11:45:22 Nope. Unfortunately not.
11:47:21 * kc5tja needs to start biking into work again.
11:47:40 * kc5tja is getting fat.
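[editor's note] The `: 2* previous-shift? if inc-shift-count ;; then emit-opcode ;` template above is straightforward to model. A minimal Python sketch (the instruction strings stand in for the `$d1 c, $e0 c,` byte emission; the non-shift `plus` word is an illustrative addition):

```python
# Toy model of 2* doing its own peephole merge: if the previous instruction
# was already a left shift of EAX, bump its count instead of emitting a
# fresh SHL EAX,1.

code = []

def two_star():
    if code and code[-1].startswith("SHL EAX,"):
        n = int(code[-1].rsplit(",", 1)[1])
        code[-1] = f"SHL EAX,{n + 1}"   # inc-shift-count
        return
    code.append("SHL EAX,1")            # emit-opcode ($D1 $E0)

def plus():
    code.append("ADD EAX,[ESI]")        # any non-shift instruction breaks the run

for _ in range(3):
    two_star()
assert code == ["SHL EAX,3"]            # 2* 2* 2*  ->  one 3-bit shift

plus()
two_star()
assert code[-1] == "SHL EAX,1"          # a fresh run starts after the add
```

A real implementation would also reset the look-back at branch targets (THEN), exactly as in the previous/current-pointer discussion earlier in the log.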
:( I gots me a bowl full o' jelly that'd make Santa Claus happy.
11:47:57 lol
11:51:28 --- join: proteusguy (proteusguy@236.sub-166-153-177.myvzw.com) joined #forth
11:51:32 hi proteusguy
11:52:08 kc5 - do you have a word for tail-recursion?
11:52:16 once the compiler is done, i hope to move many library routines from java to factor. right now the java code is 6,000 lines, this is way too much.
11:52:29 howdy.
11:52:37 Hi proteusguy
11:52:50 cleverdra: Both ; and ;; do proper tail-call recursion.
11:53:01 kc5tja, factor does tail recursion too :)
11:53:07 its a good idea.
11:53:13 kc5 - what about general tail-call optimization?
11:53:27 cleverdra: What is the distinction between the two?
11:53:44 rehi Robert. Such a friendly little channel is forth! :P Especially compared to #python.
11:53:55 Hi proteusguy !!! :D
11:54:01 (sorry, I couldn't resist.)
11:54:01 haha
11:54:04 proteusguy, because forth > python :)
11:54:08 kc5 - '; and ;; do proper tail-call recursion' would allow an implementation that only notices self-recursion and optimizes that
11:54:22 Oh, no, it does it for any word.
11:54:32 python forth > . = true ?
11:54:33 kc5 - 'tail-call optimization' turns all instances of 'function-call exit' into 'JMP into function'
11:54:37 kc5 - ah, OK =)
11:54:37 So, for example, if I have : foo bar baz ;, the reference to baz is turned into a JMP instead of a CALL.
11:55:17 cleverdra: It's actually MUCH easier to get full tail-call optimization than to restrict it to just the word.
11:55:19 lightbulb lights up in slava's head
11:56:44 tail-call optimization changes the way you program -- some people seem to think of it as something that you can leave to an implementation, sigh.
11:57:13 cleverdra: Well, let's just say that it's **substantially** simplified the Forth compiler.
11:57:15 Just think of writing a state machine, say, with and without tail-call optimization.
11:57:23 No traditional control flow primitives.
11:57:26 Just IF and THEN. That's it.
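[editor's note] The `: foo bar baz ;` example above (baz becomes a JMP, not a CALL) captures the whole mechanism of general tail-call optimization, so a tiny sketch suffices. This is a hypothetical Python model of the code generator, not FTS/Forth's actual one:

```python
# Toy model of ";" doing general tail-call optimization: a trailing CALL in
# any definition is rewritten into a JMP, and the RET is dropped -- the
# callee's own return then unwinds straight to foo's caller.

def compile_word(name, callees):
    """Compile a colon definition that just calls each word in callees."""
    code = [("CALL", w) for w in callees]
    if code and code[-1][0] == "CALL":
        code[-1] = ("JMP", code[-1][1])   # tail position: jump, don't call
    else:
        code.append(("RET",))             # empty body still needs a return
    return code

foo = compile_word("foo", ["bar", "baz"])
assert foo == [("CALL", "bar"), ("JMP", "baz")]

noop = compile_word("noop", [])
assert noop == [("RET",)]
```

Since the rewrite looks only at the last compiled call, it naturally handles any word in tail position, not just self-recursion, which is the distinction cleverdra was probing at.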
11:57:31 cleverdra, in my language, loops are implemented using recursion.
11:57:49 Well, because I use the CPU flags only for IF, I have "IF" and "-IF" (tests for zero, tests for sign, respectively).
11:57:50 cleverdra, thanks to tail call optimization.
11:57:56 I have for/next if/then and tail recursion.
11:58:03 you can make any construct with those
11:58:04 hi Herkamire :)
11:58:09 hi slava :)
11:58:09 i have you guys all beat
11:58:11 I may consider implementing FOR/NEXT in the future, but for now, I can live without it.
11:58:14 : ifte ? call ;
11:58:29 : ? ( condition ifT ifF -- ifT/ifF ) ... ;
11:58:30 slava - that doesn't give you enough by itself, though -- tail-recursion will allow you to write a huge state machine in one function, sure, but you can't split it up into multiple SM-NEXT'ing =) functions.
11:58:59 --- quit: ChanServ (Shutting Down)
11:59:01 cleverdra, any call at the end of a word replaces the execution pointer instead of pushing a new one
11:59:05 Actually, slava, I have you beat. :)
11:59:10 slava: you've renamed SDL --> Factor? or is factor something else?
11:59:16 Herkamire, lsd --> factor
11:59:17 IF ;; THEN
11:59:18 :)
11:59:35 slava - do you know anything about SysRPL?
12:00:30 :) I don't have ELSE in my forth
12:00:52 --- join: ChanServ (ChanServ@services.) joined #forth
12:00:52 --- mode: brunner.freenode.net set +o ChanServ
12:02:36 I have a word that jumps to a specified address. that counts as a control construct too. and ;
12:02:41 --- join: Nutssh (~Foo@dunwlessnat.rice.edu) joined #forth
12:03:30 --- quit: slava (brunner.freenode.net irc.freenode.net)
12:03:30 --- quit: mur (brunner.freenode.net irc.freenode.net)
12:05:44 --- join: mur (~mur@mgw2.uiah.fi) joined #forth
12:07:54 I now need to implement ['] and execute in my cross-compiler.
12:08:10 --- join: slava (~slava@CPE0080ad77a020-CM.cpe.net.cable.rogers.com) joined #forth
12:08:14 --- quit: slava (Connection reset by peer)
12:08:24 Back in a bit....
12:08:55 --- join: slava (~slava@CPE0080ad77a020-CM.cpe.net.cable.rogers.com) joined #forth
12:19:50 --- quit: Nutssh ("Client exiting")
12:26:22 --- join: Nutssh (~Foo@dunwlessnat.rice.edu) joined #forth
12:43:36 --- quit: Nutssh ("Client exiting")
12:47:42 --- join: Nutssh (~Foo@dunwlessnat.rice.edu) joined #forth
12:49:29 --- quit: arke (Read error: 104 (Connection reset by peer))
12:54:01 --- join: arke (~Chris@wbar8.lax1-4-11-099-104.dsl-verizon.net) joined #forth
12:54:22 hi all
12:54:29 erm, oops sorry
12:54:32 hi all
12:58:26 hello arke!
12:59:30 chandler: how are you today ?
13:00:03 arke: g'day.
13:00:44 how is y'all?
13:00:56 : : : ; : ; ; ;
13:01:18 --- quit: Robert ("brb")
13:01:51 Hrm, this multiDesk for winblows actually works quite well.
13:04:37 hello , everyone !
13:04:48 terve mur !
13:05:31 soon sleep
13:06:15 --- join: arke_ (~Chris@wbar8.lax1-4-11-099-104.dsl-verizon.net) joined #forth
13:06:36 --- quit: arke (Read error: 54 (Connection reset by peer))
13:06:40 --- nick: arke_ -> arke
13:08:53 hi arke
13:08:59 :)
13:10:46 arke, i'm writing the JVM local variable allocator for my compiler.
13:10:56 arke, hoping for a 3x speedup in some situations.
13:10:59 --- join: Robert (~snofs@c-185a71d5.17-1-64736c10.cust.bredbandsbolaget.se) joined #forth
13:11:33 slava: cool stuff!
13:12:12 slava: how are you doing it? (specifics)
13:12:31 arke, using the java virtual machine's local variables for the datastack, instead of an array
13:12:48 arke, and instead of generating code for swap/dup/et al, just renaming locals at compile time
13:15:43 heh :)
13:15:51 Great optimization for any forth ^_^
13:16:13 dup, swap, drop, nip are all compile time :)
13:17:35 yes.
each compiled word will be of the following form: JVM locals> stack array>
13:18:01 and if the word contains calls to other words, the other word's core is called
13:18:28 :)
13:18:45 so for example in : recip 1 swap / ; there are no pointless copies between 1, swap and /
13:21:07 recip() { int a; int b; a = 1; b = stack[0]; a = a / b; stack[0] = a; } <--- like that? :)
13:23:20 --- join: _proteus (~proteusgu@216.27.161.121) joined #forth
13:24:02 --- quit: _proteus (Read error: 104 (Connection reset by peer))
13:24:22 --- join: _proteus (~proteusgu@216.27.161.121) joined #forth
13:34:00 Hmm. Another interesting dilemma.
13:34:13 I can produce smaller, faster code if I have execute jump to the address in the A register.
13:34:37 But then I'd always have to write code like a! execute, which of course destroys the contents of the A register.
13:35:24 Or, I can have execute compile a code sequence that assumes traditional execute semantics, but which touches another register.
13:35:39 Or maybe you don't want to keep a that long
13:35:51 Or maybe add another B register, and you can choose, or whatever
13:36:11 Well, execute is a primitive; I can't choose. It's one or the other.
13:36:34 Then I would say use A :)
13:36:35 The only way to choose is to redefine execute on a case-by-case (e.g., program-by-program or block-by-block) basis.
13:36:51 arke, re: your sample code; sort of :)
13:37:17 or you could have : execute-a compile a! compile execute ; immediate
13:37:28 er, other way around
13:37:39 : execute compile a! compile execute-a ;
13:37:43 Well, that's what I'm thinking right now.
13:37:59 But I can also have postpone >r postpone ;; too. :)
13:38:01 :)
13:38:18 heh, that would be even better actually
13:38:38 --- quit: ianp (Read error: 54 (Connection reset by peer))
13:38:45 provide both ^_^
13:38:58 what exactly does 'postpone' do?
13:39:09 postpone = compile
13:39:13 and [compile]
13:39:17 in one word.
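[editor's note] slava's compile-time-renaming idea above (dup/swap/drop/nip emit no bytecode; they just rename JVM locals) can be sketched compactly. This is a hypothetical model, not Factor's compiler; the local names and pseudo-bytecode strings are illustrative assumptions:

```python
# Toy model: the compiler keeps a compile-time picture of the data stack
# as a list of JVM local-variable names. Stack shufflers only permute that
# list; only literals and real operations emit (pseudo-)bytecode.

class JvmComp:
    def __init__(self):
        self.locals = []     # compile-time picture of the stack
        self.n = 0
        self.code = []

    def fresh(self):
        name = f"l{self.n}"; self.n += 1
        return name

    def push_literal(self, v):
        name = self.fresh()
        self.code.append(f"iconst {v} ; istore {name}")
        self.locals.append(name)

    def dup(self):  self.locals.append(self.locals[-1])           # no code
    def drop(self): self.locals.pop()                             # no code
    def swap(self):                                               # no code
        self.locals[-2], self.locals[-1] = self.locals[-1], self.locals[-2]

    def div(self):
        b, a = self.locals.pop(), self.locals.pop()
        name = self.fresh()
        self.code.append(f"iload {a} ; iload {b} ; idiv ; istore {name}")
        self.locals.append(name)

# : recip 1 swap / ;  with the argument already in local "x"
c = JvmComp()
c.locals = ["x"]
c.push_literal(1)
c.swap()                     # free: just renames
c.div()
assert c.code == ["iconst 1 ; istore l0",
                  "iload l0 ; iload x ; idiv ; istore l1"]
```

Only two instructions are emitted for `recip`; the swap never touched the generated code, which is the "no pointless copies" claim from the log.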
13:39:22 --- quit: Robert ("brb")
13:39:43 I'm gonna have RkF be "register"-based
13:40:06 arke, what is RkF?
13:40:16 arke, so what is [compile] then? :)
13:41:40 --- join: Robert (~snofs@c-185a71d5.17-1-64736c10.cust.bredbandsbolaget.se) joined #forth
13:41:50 slava: compile an immediate :P
13:42:07 --- quit: proteusguy (Read error: 110 (Connection timed out))
13:42:35 --- quit: Nutssh ("Client exiting")
13:48:18 --- quit: arke (Read error: 104 (Connection reset by peer))
13:48:21 --- join: arke_ (~Chris@wbar8.lax1-4-11-099-104.dsl-verizon.net) joined #forth
13:49:00 I decided to call the execute word a; -- it jumps to the address in the A register.
13:49:56 heh, my 16-bit CPU has a branch instruction, but no return instruction :P
13:50:20 The Steamer16 CPU has only a conditional branch instruction. No subroutines, no returns, and not even a return stack.
13:51:05 Heh.
13:51:25 How many other instructions does it have, two? ;)
13:51:36 8 altogether.
13:51:37 The Steamer16 has 7 other instructions.
13:51:51 It is the most MISC of the MISC architectures I've seen.
13:51:54 and a looping 3-depth stack :)
13:52:01 what's MISC mean?
13:52:08 But it's actually not optimized to run Forth; it's better at running C code than Forth code (believe it or not).
13:52:15 Minimal instruction set computer.
13:52:17 minimal instruction depth?
13:52:18 Minimum Instruction Set Computer.
13:52:25 aa. ok.
13:52:33 kc5tja: How fast does it run?
13:52:54 Robert: 16 to 30MHz; performance-wise it's about as fast as an 80386 at the same clock speeds.
13:53:56 Not too bad.
13:55:29 --- join: arke (~Chris@wbar8.lax1-4-11-099-104.dsl-verizon.net) joined #forth
13:55:33 --- quit: arke_ (Read error: 104 (Connection reset by peer))
13:55:58 --- quit: _proteus ("Leaving")
13:56:41 * kc5tja nods
13:56:52 It'll actually smoke a 386 if you could get it to blow through straight-line code.
13:57:03 But it's the simulated subroutines that slow it down.
13:57:43 How much power does it use?
13:57:54 Whatever a normal CPLD chip consumes.
13:57:59 It's implemented in programmable logic.
13:58:04 Oh.
13:58:46 Isn't it possible to make really low-power CPUs (while still being relatively fast) these days?
13:59:05 If you design it as an ASIC, yes.
13:59:13 ASIC?
13:59:20 The CPUs that Chandler quoted yesterday are a fine example.
13:59:38 Application-Specific Integrated Circuit -- e.g., you have ultimate control over individual transistors, where they get routed to, etc.
13:59:49 ASIC is a step above a custom-fab
13:59:54 but below an FPGA
13:59:58 Oh, okay.
14:00:07 The dual 1GHz MIPS-64 core draws only 5W.
14:00:17 Umm...no.
14:00:19 Just the opposite.
14:00:22 ASIC *is* custom fab.
14:00:43 Hifn produced highly integrated ASICs.
14:00:49 * kc5tja was responsible for testing them. :)
14:00:59 (which was fun. 1024-bit integer math coprocessors are *nice*.)
14:01:05 Heh.
14:01:10 Yum..
14:01:31 What is the relationship between clock frequency and power used?
14:01:33 Linear?
14:01:50 Robert: Not always. Depends on the circuit and what is being performed.
14:02:01 A general CPU.
14:02:06 Within relatively narrow margins, it can be approximated via a linear relationship though.
14:02:48 At the very *least* it's a linear relationship though.
14:03:04 What do you mean?
14:03:05 I've never heard of a chip with log(n) clock frequency to power relationship. :)
14:03:18 Oh, heh.
14:04:05 downix: I think what you're referring to is Sea-Of-Gates technology, which isn't technically ASIC technology.
14:04:20 The Novix NC4000 was implemented via SOG.
14:04:33 what about ARM CPUs? they draw very little power but are still quite fast.
14:05:00 slava: See my post above about the dual 1GHz MIPS-64 CPU, posted originally by Chandler. It draws only 5W.
14:05:05 It'd smoke an ARM.
14:05:18 Actually, there's a kind of semi-fab ASIC out there too 14:05:21 it's quite odd 14:05:34 called "synthetic ASIC" 14:14:41 --- quit: arke (Read error: 110 (Connection timed out)) 14:15:15 * kc5tja just realized that a; isn't sufficient; he'll also need an acall too to fully implement the 'execute' word. 14:15:19 : execute a! a; ; 14:15:25 I suppose I can do that for the time being. 14:20:33 --- quit: downix (Remote closed the connection) 14:20:42 kc5tja, still fits in 3 screens? :) 14:20:55 I'm sorry? 14:21:09 i recall one of the goals of fs/forth was to fit in 3 screens 14:21:18 but now you're adding all these new primitives :) 14:21:19 No, that wasn't a goal. 14:21:51 The current cross compiler fits in 30, which is about right for a compiler of this complexity. 14:22:03 The new cross compiler I've got coming up will take about half that many screens. 14:22:33 i c 14:22:43 what are you cross-compiling from? 14:23:01 I'm creating FTS/Forth in GForth. 14:27:52 i see. 14:27:58 gforth is not very fast is it? 14:28:10 It's fast enough for my needs. 14:28:22 but it doesn't compile, it's just threaded, right? 14:28:31 It's about 10 to 20 times slower than highly optimized C code. 14:28:41 Threaded code is still a form of compilation. 14:28:52 But you're correct: there is still an inner interpreter.
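[A minimal sketch of what "threaded code with an inner interpreter" means, as discussed above: a colon definition compiles to a list of execution tokens, and an inner interpreter just walks the list, executing each token. Python stands in for Forth here, and the names are illustrative, not GForth's internals.]

```python
# Threaded code in miniature: compilation produces a sequence of
# execution tokens (no native machine code), and the inner
# interpreter (NEXT) dispatches through it. The per-token dispatch
# is the overhead that makes this slower than compiled C.

stack = []

def dup():  stack.append(stack[-1])
def plus(): stack.append(stack.pop() + stack.pop())

# "Compiling" : double dup + ; yields a thread of tokens:
DOUBLE = [dup, plus]

def inner_interpreter(thread):
    for token in thread:   # the NEXT loop: fetch token, execute, repeat
        token()

stack.append(21)
inner_interpreter(DOUBLE)
print(stack.pop())  # 42
```

This is why "threaded code is still a form of compilation" is fair: the source is translated once into the thread, even though execution still runs through a dispatch loop.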
14:46:13 --- join: arke (~Chris@wbar8.lax1-4-11-099-104.dsl-verizon.net) joined #forth 15:05:31 --- join: _proteus (~proteusgu@216.27.161.121) joined #forth 15:10:10 --- join: blockhead (default@dialin-868-tnt.nyc.bestweb.net) joined #forth 15:12:22 --- nick: _proteus -> proteusguy 15:14:17 --- join: arke_ (~Chris@wbar8.lax1-4-11-099-104.dsl-verizon.net) joined #forth 15:14:17 --- quit: arke (Read error: 104 (Connection reset by peer)) 15:17:20 --- join: arke (~Chris@wbar8.lax1-4-11-099-104.dsl-verizon.net) joined #forth 15:17:21 --- quit: arke_ (Read error: 104 (Connection reset by peer)) 15:34:31 --- quit: arke (Read error: 104 (Connection reset by peer)) 15:38:49 --- join: arke (~Chris@wbar8.lax1-4-11-099-104.dsl-verizon.net) joined #forth 15:40:45 Our topic is over a month old. 15:41:28 not as old as the one in the #povray channel 15:41:39 it is still congratulating the class of 2003 :D 15:41:56 THAT OLD! 15:41:57 lol 15:41:58 ahaha 15:42:19 yup 15:42:38 --- join: I440r (~mark4@12-160.lctv-a5.cablelynx.com) joined #forth 15:50:08 --- join: arke_ (~Chris@wbar8.lax1-4-11-099-104.dsl-verizon.net) joined #forth 15:50:08 --- quit: arke (Read error: 104 (Connection reset by peer)) 15:55:06 Though our topic is old, its message is timeless. 15:57:18 what is the half-life of an irc channel topic? 15:57:54 In here? 6 months. 15:57:54 kc5tja: check the topic on #povray - seriously! :D 15:59:05 what's a povray? 16:01:17 Teratogen: note that topic :) 16:20:00 --- join: arke (~Chris@wbar8.lax1-4-11-099-104.dsl-verizon.net) joined #forth 16:20:01 --- quit: arke_ (Read error: 104 (Connection reset by peer)) 16:20:56 --- join: arke_ (~Chris@wbar8.lax1-4-11-099-104.dsl-verizon.net) joined #forth 16:20:56 --- quit: arke (Read error: 104 (Connection reset by peer)) 16:28:21 --- join: ianp (ian@inpuj.net) joined #forth 16:33:54 Teratogen: povray is a ray-tracing program for POSIX-compatible operating systems.
16:34:10 and windows 16:34:13 and mac 16:34:28 as well as the unixes 16:34:57 used to be amiga versions as well 16:35:13 I'm sure it's easy to bring the Amiga version back up to date. 16:35:17 Probably just nobody has. 16:35:22 in fact, povray was originally written on the amiga 16:35:30 yup 16:35:31 kc5tja: yes, nobody has 16:36:10 the amiga os was posix compliant ? 16:36:32 probably not. the amiga was very different 16:36:39 what is posix? a unix? 16:36:49 i still consider the amiga os the BEST os i ever saw 16:36:57 I440r: yes. 16:37:05 I miss my amiga 16:37:11 but there was no new software for it 16:37:31 and motorola stopped the 68?00 series 16:37:53 POSIX is a paper standard for implementing a Unix-compatible operating environment. 16:37:55 and the company went bankrupt and was bought and sold over and over again 16:37:58 no they didn't - coldfire is 68k :) 16:38:10 kc5tja: then amiga is not posix-compatible 16:38:18 Motorola still makes the 68000 and 68010 processors too, as far as I know. 16:38:21 I440r: 100%? 16:38:21 the UK division of amiga made a HUGE profit 16:38:33 THEY weren't bankrupt 16:38:37 blockhead: Not 100%. But there is ixemul.library, which implements the vast majority of the standard. Proper subset. 16:38:52 I440r: recent profit? 16:39:06 kc5tja: really? dang! 16:39:22 no. prior to amiga declaring bankruptcy the UK division of amiga was making huge profits YEAR after YEAR 16:39:41 the uk division wanted to buy the company and continue but amiga us said no 16:39:48 THAT is what killed the amiga 16:40:31 I owned amigas from 1989 to 2001 :( 16:40:52 damn fine system 16:40:53 No. What killed Amiga was Irving Gould, and if I ever find that mother fucker, I'm going to castrate him live, with a spoon, and make him eat his own testicles so he can suck himself from the inside. 16:41:06 kc5 - hm, why? 16:41:16 kc5 - how did he kill the Amiga, even.
16:41:31 one word: enron 16:41:41 'cept it was commodore 16:41:53 cleverdra: Because he siphoned money out of Commodore to kill it. Commodore wanted to advertise. Commodore wanted 16M colors before 2000. Commodore wanted 16-bit 5.1 sound. Commodore wanted memory protection in the OS (though still single address space!). 16:41:57 Gould wouldn't have it. 16:42:22 I see. 16:42:33 kc5tja: commodore also blew a chance to have an amiga appear in a star trek movie 16:42:50 blockhead: The Amiga was used in the post-production of Star Trek IV. 16:42:51 that would have been good publicity 16:42:52 2x speedup in my compiler! yay 16:43:23 kc5tja: yes but that's not a visible use of the amiga :/ People remember Scotty talking to a Macintosh mouse 16:43:35 Hello, computer! 16:43:43 that scene, yes! 16:43:50 that could have been an amiga 16:43:58 that alone would have boosted sales mega 16:44:17 Commodore didn't deserve it 16:45:11 * blockhead remembers that this is not comp.amiga.advocacy and stops 16:45:47 I'm referring to the way that they screwed up their own market, not technical merit :-) 16:45:58 oh! 16:46:38 I had a nice forth on my amiga (trying to get on topic) 16:47:06 It's interesting that all my knowledge of operating systems, microprocessor systems, et al. all started with my express desire to replicate the core essence of AmigaOS on the PC. 16:47:11 I came extremely close. 16:47:27 kc5tja: mmm? how close? 16:47:28 Dolphin could pull off over 1,000 task switches per second on a 386SX-16 with 4MB of RAM. 16:47:54 dolphin was what the OS was called? 16:48:00 Supported exe/VMS-style event-driven synchronization (in fact, TaskWait() was the only primitive that could suspend a task!). 16:48:05 block: Yes. 16:48:09 blockhead: Yes even. 16:48:16 * kc5tja thought irssi would auto-complete that. 16:48:29 interesting. sourceforge it and see what happens? 16:48:39 Nothing. 16:48:48 Nobody is interested in a single-address-space operating system for the PC platform.
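[A simplified model of the single-wait-primitive design mentioned for Dolphin: every blocking operation reduces to TaskWait(event_mask), which suspends the task until any event in the mask is signalled, VMS/Amiga-style. This is an illustrative data-structure sketch, not the actual Dolphin executive; the names are assumed.]

```python
# Event-driven synchronization with one suspend primitive.
# A task is runnable when its waiting_on mask is 0; TaskWait sets
# the mask (suspending it), and a matching signal clears it.

class Task:
    def __init__(self, name):
        self.name = name
        self.pending = 0     # bitmask of signalled events
        self.waiting_on = 0  # mask passed to TaskWait; 0 = runnable

def task_wait(task, mask):
    """The only primitive that can suspend a task."""
    task.waiting_on = mask

def signal(task, mask):
    task.pending |= mask
    if task.pending & task.waiting_on:
        task.waiting_on = 0  # an awaited event arrived: runnable again

EV_TIMER, EV_IO = 1 << 0, 1 << 1

t = Task("shell")
task_wait(t, EV_IO)          # suspend until I/O completes
print(t.waiting_on != 0)     # True: task is suspended
signal(t, EV_IO)
print(t.waiting_on == 0)     # True: task is runnable again
```

With no memory protection and no address-space switch, a context switch in such a design is little more than saving registers and picking the next runnable task, which is consistent with the 1,000+ switches/second figure on a 386SX-16.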
16:49:12 there's obviously interest, however minimal, in AROS 16:49:12 Lack of memory protection is also just begging for viruses and whatnot to infect the system in today's Internet world. 16:49:19 too bad. the whole multi-user thing always seemed like an unnecessary frill for most people 16:49:25 (IMHO) 16:49:29 multi-user != memory protection 16:49:32 Dolphin was not multi-user. 16:49:47 as an example consider BeOS, which was a single-user protected-memory system 16:49:47 it sounds very much like the amiga OS from what you have said 16:49:55 * chandler is an ex-Be-ite 16:50:09 now there's a sad story :-) 16:50:14 * kc5tja notes that BeOS also draws heavily from AmigaOS in its coding talent. 16:50:27 isn't BeOS still around, supported by its loyal fans? 16:50:40 it's even less around than AmigaOS 16:50:49 blockhead: I think there are efforts to recreate it as an open source operating system. Not sure how far they've gotten yet though. 16:51:19 not far... another company called Zeta has the source and is currently screwing it up because they have no idea how to continue Be's vision 16:51:46 and plus like any good German company the first thing they did was apply a massively... funky theme to it and the icons 16:52:10 chandler: hmm, like windows? 16:52:41 worse. ultra-tiny scroll bars, really ugly corporate logo all over the place 16:52:45 one pixel borders around buttons 16:52:58 like Windows XP :D 16:52:59 and HUGE icons 16:53:05 no, the XP interface is bloated 16:53:08 romper-room esque 16:53:12 One pixel borders around icons and whatnot is fine. 16:53:16 I like fine lines. 16:53:19 I don't click on them. 16:53:24 kc5tja: except it doesn't look right compared to the rest of the system 16:53:31 Microsoft, however, takes the opposite tack. 16:53:36 they also made the widget backdrops white instead of grey 16:53:42 They have 16-pixel wide borders, and 8-pixel buttons.
16:53:47 the look of the BeOS GUI was perfect though 16:54:01 chandler: Except for the window title bars, I completely agree. 16:54:07 BeOS was a nice system. I liked it a lot. 16:54:11 aw, what's wrong with the yellow tabs? 16:54:18 they're the personality and soul of BeOS 16:54:29 and besides you could change it if you wanted to 16:54:44 Same thing with any other tiny thing. Mouse pointer inaccuracy made pointing at and clicking on the window title bar (e.g., to move the window) unnecessarily lengthy. 16:55:41 I'd have guessed you'd have used the Amiga style anyway :-) 16:56:37 Had I known it existed, probably so. 16:56:42 heh 16:56:54 If I had the time for it, I'd pick up the development of amiwm for X11. 16:56:56 treewm pleases me a great deal -- it has one-pixel borders =) 16:57:03 Maybe I'll recreate it from scratch, using FTS/Forth. 16:57:10 not using X11 pleases me a great deal 16:57:42 I use it more or less out of necessity. 16:58:06 I know, I am just careful to never associate "sucks less" with actual pleasure 16:58:17 chandler - I came across treewm because of your discussion about interfaces that match better to the way people work, actually. 16:58:40 * kc5tja googles for treewm 16:59:12 * chandler too 16:59:30 chandler - not because treewm struck me as particularly following that, just because I started trying a number of different window managers (starting with XFce) that had a bit more of a visual component than ratpoison =) 16:59:41 hm, seems interesting 17:00:10 I am still trying to figure out what model of virtual desktops works best 17:00:24 treewm has a root desktop, windows (which you can turn into desktops that include themselves as the first window), and subdesktops (which seem like windows to their parent). 
17:00:35 I have made at least one conclusion, that the only way the concept can be seamless is if the apps explicitly recognize it 17:00:53 as such multiple desktops should really be a toolkit-level problem 17:01:39 So I, say, start Mozilla somewhere-or-other and then turn it into a desktop, so that any windows created when I deal with it start in that desktop -- a nifty enough way to keep download windows and such from cluttering my other activities. 17:02:09 HAHA! 17:02:12 cleverdra: does this work if Mozilla pops up a dialog when you are in another desktop? and then do you have the option of moving one mozilla window to another desktop? 17:02:19 and it keeps windows together in a nice physical sense -- you can more easily move between desktop-level windows than between windows of different desktops. 17:02:24 Can anyone say SCREENS?! (AmigaOS style that is) 17:02:25 :) 17:02:31 That's too funny. 17:02:43 * kc5tja will have to try treewm to get some ideas. 17:02:52 chandler - you can almost certainly move windows between desktops -- but I haven't done this yet. 17:02:52 kc5tja: but not screens that are at different resolutions and scan rates :D 17:03:04 cleverdra: then how does it choose which window to put a mozilla popup on? 17:03:06 er, which workspace? 17:03:29 treewm looks cool. near as I can tell it's unix only 17:03:38 I have never used Amiga screens. I will say that the best virtual desktop implementation I have used is Be's 17:03:42 treewm also has an Alt+ESC to make a random window fully maximized -- with no distractions, and everything returning to its former state with another Alt+ESC -- using X trickery, I think. 17:03:48 blockhead: Window managers tend to be Unix only, being designed for X Windows. :) 17:04:09 chandler - windows start in the lowest-level desktop that has focus. 17:04:10 but I don't know how Be's compares.
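[A sketch of the nested-desktop model described for treewm: desktops form a tree, a window can itself be turned into a desktop, and a new window lands in the lowest (deepest) desktop that currently has focus. This is a data-structure illustration of the behavior cleverdra describes, not treewm's actual code; all names are invented.]

```python
# Tree-of-desktops model: follow focus pointers down to the deepest
# focused desktop, and create new windows there.

class Desktop:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = []       # sub-desktops and windows
        self.focused_child = None
        if parent:
            parent.children.append(self)

    def focus_path_leaf(self):
        """Walk the focus chain to the lowest focused desktop."""
        node = self
        while node.focused_child is not None:
            node = node.focused_child
        return node

root = Desktop("root")
mozilla = Desktop("mozilla", parent=root)  # a window turned desktop
root.focused_child = mozilla

# A popup created while mozilla's desktop has focus lands inside it:
target = root.focus_path_leaf()
popup = Desktop("download-window", parent=target)
print(popup.parent.name)  # mozilla
```

This also shows chandler's objection concretely: the window manager can only place a popup by focus or by scripted window matching; putting one app's windows sanely on two desktops needs cooperation from the toolkit.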
I do know it lets you pick independent resolution and refresh rate 17:04:23 kc5tja: I'm using a window manager now, on Windows :) 17:04:28 chandler: In AmigaOS, screens were first-class GUI objects like Windows were. 17:04:28 cleverdra: so a popup from mozilla doesn't go to mozilla's desktop, it goes to the current desktop? 17:04:29 --- quit: I440r ("Leaving") 17:04:42 kc5tja: ... first-class is an overused term. What does that mean? 17:04:43 You could depth-arrange them and move them more or less like windows (hardware permitting of course). 17:04:52 hm, sounds cool 17:04:57 Applications opened and closed screens like they opened and closed windows. 17:05:09 Most applications were coded to run in their own screen. 17:05:19 chandler - yes -- at least so far as I understand. treewm has various scripting (by window name and by something X-level) that you can probably manage mozilla pop-ups with, but I haven't dealt with these yet. 17:05:26 treewm has fairly poor documentation. 17:05:27 cleverdra: that's what I'm saying then 17:05:32 that's why it needs to be a toolkit issue 17:05:40 otherwise you can't sanely put mozilla on /two/ desktops 17:05:49 you can either keep it all on one 17:05:52 First-class in that they weren't opaque to the application software. Applications were very much aware of what screens were, how to create them, manipulate them, close them, etc. 17:05:57 or if you put it on two you have a problem of deciding where to map windows 17:06:04 kc5 - oh, nifty. 17:06:10 kc5tja: woot! that's just what I'm talking about 17:06:38 treewm comes with patched xkill and xprop, though I never use these to care. 17:06:38 And, here's the best part, screens had title-bars just like Windows, which means they made nice status bars that didn't suck up space in each window. :D 17:06:51 gotta go. 'night all 17:06:55 And menus were displayed ONLY when necessary (again, not associated with each window, like X11 or Windows). 17:07:00 Oh, the GUI was sooooo nice. 
17:07:04 --- quit: blockhead ("Client Exiting") 17:07:15 Intuition was also trivially easy to write software for. 17:07:17 And I do mean trivial. 17:08:55 * cleverdra goes for dinner. 17:09:02 * kc5tja is actually going to be coding an Intuition-like GUI for my client. Specially tailored to his specific application, though. 17:09:06 And no, not open source. :) 17:10:36 oh, one warning for anyone using treewm: its 'vi mode' conflicts with 'xscreensaver' (xscreensaver whines about it immediately if you've set it to display such messages) -- so keyboard events go to treewm's little vi-input window (or to the last-focused window, if you close that) as well as xscreensaver. I once entered my password into an IRC channel this way. Bah. 17:10:46 kc5tja, what made it so trivial to write for? 17:11:30 this only happens if you have 'vi mode' enabled (which you only ever do for short commands (like !xscreensaver-command -lock), because it interferes with normal use) when xscreensaver kicks in. 17:11:34 * cleverdra goes for real. 17:11:51 slava: To open a window, you called precisely one function: OpenWindow(). 17:12:01 To add gadgets to the window, you called precisely one function: AddGadgets(). 17:12:07 etc. 17:12:20 * chandler starts up MCL - (make-instance 'window) 17:12:21 Intuition's interface was amazingly simple to write software for. 17:12:26 Dead simple. 17:12:48 have you used the Be APIs or Cocoa? 17:12:53 It was very data-driven too. Gadgets were represented by static C structures, a pointer to which you passed to AddGadgets(). 17:13:19 chandler: No, but I know they're more complex. You have to use API calls to "build" the window's contents piecemeal and add it to the windows. 17:13:23 Intuition didn't do that. 17:13:31 actually.. not in Cocoa 17:13:33 ugh 17:13:37 blockhead left? 17:13:37 You just laid out a bunch of structures in your source code, and invoked AddGadgets(), and whamo -- there they were.
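[A sketch of the "data-driven" style being praised: the UI is a static table of gadget records handed to a single call, rather than built piecemeal through many API calls. The OpenWindow()/AddGadgets() names come from the conversation above; the record fields and Python framing are invented for illustration.]

```python
# Declarative, data-driven UI in the Intuition spirit: the gadget
# table is laid out statically (like C structs in source code) and
# consumed by one call.

GADGETS = (
    {"kind": "string", "label": "Name",   "x": 10, "y": 10},
    {"kind": "button", "label": "OK",     "x": 10, "y": 40},
    {"kind": "button", "label": "Cancel", "x": 80, "y": 40},
)

class Window:
    def __init__(self):
        self.gadgets = []

def open_window():
    """One call to open a window."""
    return Window()

def add_gadgets(window, gadget_table):
    """One call consumes the whole static table -- 'whamo, there they were'."""
    window.gadgets.extend(gadget_table)

w = open_window()
add_gadgets(w, GADGETS)
print(len(w.gadgets))  # 3
```

The contrast with the piecemeal style (create a widget, configure it, attach it, repeat) is the whole point: the data describes the interface, and the library interprets the data.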
17:13:40 you load your nib file 17:13:52 --- join: Sonarman (~matt@adsl-64-160-165-244.dsl.snfc21.pacbell.net) joined #forth 17:14:02 Oh, "resource" files, eh? 17:14:04 and a nib file is basically a bunch of serialized objects 17:14:08 * kc5tja never liked resource files either. 17:14:16 not really the same... object serialization 17:14:22 what was amigaos like? 17:14:35 eventually i want to do some kind of UI toolkit for Factor, where the UI is specified with a bunch of nested lists (like s-expressions) 17:14:41 Yeah, I just feel more comfortable building structures myself in source code. Just a personal preference. 17:14:49 arke_: Fast. 17:14:58 slava: if you have a lisplike language you should look into presentation-based UIs 17:15:07 kc5tja: well, so was dolphin, it seemed, but what about feel? :) 17:15:12 seems* 17:15:15 --- nick: arke_ -> arke 17:15:34 arke_: Imagine the most user-friendly Unix-variant you've ever used, but it wasn't quite Unix. It was faster, more responsive than Unix. 17:15:54 name a window manager I could compare it to 17:16:17 CLI commands were often formed more like Smalltalk method invocations: copy from dh0:#? to dh1:DH0-Backup/ all quiet 17:16:27 arke: uhhh.... amiwm. :) 17:16:54 But that is the closest emulation you'll get short of actually using an Amiga or running UAE (which you need Kickstart ROMs to use, but I don't have mine anymore) 17:16:56 so how would you compare it to BeOS? 17:17:07 BeOS is fast and definitely Unix-like. 17:17:13 yeah 17:17:16 I would say it's too Unixy for my tastes. 17:17:22 hm. that's why I liked it :-) 17:17:23 But definitely something I could live with. 17:17:40 Given a choice, I'd use BeOS over Linux. 17:17:46 oh yeah 17:17:51 but BeOS isn't a practical choice anymore 17:17:55 Yep.
17:17:55 I'd still be running it today if it was 17:18:35 I tried booting the BeOS test thing on my box, and it hung during boot :( 17:18:45 but I had to make a choice, and when I went back to using Linux full time I discovered it had gotten more bloated but not more user-friendly. KDE has not improved on fvwm95 in any significant way. 17:18:46 But remember that AmigaOS had no memory protection, so its task switches took, oh, maybe a few tens of cycles. 17:18:59 E.g., it was more real-time than QNX, which advertises itself explicitly as a real-time OS. :) 17:19:06 hehe 17:19:34 :) 17:19:35 but real-time doesn't necessarily mean how many cycles, it's about whether you can hit a guarantee 17:19:42 Right 17:19:50 and QNX is also about reliability 17:19:55 hence the ultra-microkernel design 17:20:00 But the folks at NASA JPL were using Amigas for scientific stuff over QNX boxes because of its responsiveness. 17:20:05 kc5tja: so, back to Dolphin .... 1k+ task switches in a second? 17:20:12 arke: Yes. 17:20:14 on a measly 386SX? 17:20:16 sigh, JPL was doing a lot of cool stuff 17:20:29 And that's the other thing that was amusing. 17:20:41 http://www.flownet.com/gat/jpl-lisp.html 17:20:59 kc5tja: what did you code it in? 17:21:09 Imagine owning a home computer that was so powerful that folks who dealt with PCs and Macs didn't ever know anything about it, but to learn more, you had to go to people who used Suns, Cray supercomputers, S/390s, AS/400s, and other mega-machines. 17:21:35 arke: The executive was coded in raw assembly language, and consumed about 3KB of code. 17:21:51 Still massively "wow" 17:21:54 With support code, the total kernel size was slightly less than 8KB. 17:22:25 * kc5tja notes that out of that 3KB of code, 2KB was reserved for the combined IDT and GDT tables required by the CPU.
17:22:36 --- quit: arke (Read error: 104 (Connection reset by peer)) 17:22:42 --- join: arke (~Chris@wbar8.lax1-4-11-099-104.dsl-verizon.net) joined #forth 17:22:44 Preemptive, prioritized multitasking is not hard to do. 17:22:56 I think today's OSes suck way more power than they should. 17:23:32 * kc5tja does have future, long-range plans to continue to develop Dolphin, but it won't be single-address-space anymore. 17:23:40 heh, a lot of people say that, but few actually look at the cost of those modern features 17:23:48 Though if there is sufficient demand for it, I suppose I could concede and create a Dolphin-Lite. :) 17:24:21 I don't think anyone sets out to create a bloated system... it happens as a result of various tradeoffs in the design decisions 17:24:30 chandler: Agreed, but Linux is **too** big for what it actually does. 17:24:40 chandler: can you name one possible one for linux? 17:25:12 arke: compared to what? compared to amigaos? memory protection. 17:25:12 In some cases, Linux (without init) boots slower than WinXP 17:25:33 when you have a bunch of drivers compiled in that try to detect stuff? 17:25:40 I am no linux apologist 17:25:46 but I think Linux's problem is more one of management 17:26:11 windows does hardware detection just like linux does 17:26:24 no, no, no. Windows actually does hardware detection. Linux does not. 17:26:40 which is to say, there are some userspace programs that read the PCI IDs and attempt to detect like Windows 17:27:03 Actually, the reverse is true. 17:27:11 Windows, as a rule, does not do hardware detection. 17:27:23 Well, Linux does some hardware detection. It needs to provide the /dev tree, right? :) 17:27:25 It'll do it once or twice, and it'll configure its registry accordingly, so that the next time it boots, everything is there. 17:27:37 Linux, however, auto-detects all the hardware every time it boots up. 17:27:46 Windows XP does it when you log on.
17:27:51 but linux isn't following any sane device -> driver mapping 17:27:56 in fact it does the opposite 17:27:59 driver -> device 17:28:10 that's how the static drivers work 17:28:24 --- join: Nutssh (~Foo@gh-1177.gh.rice.edu) joined #forth 17:28:28 chandler: Yes, but the drivers still need to auto-detect things, to get I/O ports and DMA channel configurations. 17:28:45 It's true that there is gross /dev pollution. 17:28:47 of course, but the mode of hardware detection is "let's just try /everything/ and see what sticks!" 17:28:50 brb, phone 17:28:52 That's a hold-over from Unix System 1. :) 17:29:09 Now... 17:29:14 in AmigaOS, this works completely differently. 17:29:35 It'll auto-detect what hardware you have, and then register the ROM-resident drivers on the expansion card as device drivers in the OS. 17:30:05 Or, a reference to a disk-resident driver will load the driver for the first time when it is first used, whereby it'll auto-detect its hardware, and either succeed or fail, depending on whether the proper hardware is present. 17:30:15 In either case, least memory consumption, and overall best run-time performance. 17:30:15 Either way, Windows XP boots up mighty friggin quick here 17:30:36 Detect+load on demand? 17:30:46 is that your plan for dolphin? :) 17:31:14 Load-on-demand-then-auto-detect is my plan. 17:32:06 Just like it was in AmigaOS. 17:32:44 coolness coolness. 17:32:57 And then keep a cache of commonly loaded stuff. 17:33:04 I guess :) 17:33:07 Actually, there is no need. 17:33:12 kc5tja: btw, tell me more about Dolphin :) 17:33:12 AmigaOS works by reference counting modules. 17:33:23 When a reference count drops to zero, it does not unload it from memory right away. 17:33:29 It'll wait until a low-memory condition occurs. 17:34:06 which avoids paging which makes things superfast.
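[A sketch of the AmigaOS module policy just described: modules are loaded on demand and reference counted, but a count dropping to zero does not unload the module; zero-count modules stay cached until a low-memory condition triggers an expunge. The class and method names are illustrative, not the AmigaOS API.]

```python
# Load-on-demand module cache with lazy unloading: keep zero-count
# modules resident (a free cache of commonly used code) and evict
# them only when memory runs low.

class ModuleCache:
    def __init__(self):
        self.loaded = {}  # module name -> reference count

    def open(self, name):
        if name not in self.loaded:
            self.loaded[name] = 0          # load-on-demand
        self.loaded[name] += 1

    def close(self, name):
        self.loaded[name] -= 1             # may hit 0; stays cached

    def expunge(self):
        """Called only on a low-memory condition: evict unreferenced modules."""
        for name in [n for n, rc in self.loaded.items() if rc == 0]:
            del self.loaded[name]

cache = ModuleCache()
cache.open("intuition.library")
cache.close("intuition.library")
print("intuition.library" in cache.loaded)  # True: still cached
cache.expunge()                             # low-memory condition
print("intuition.library" in cache.loaded)  # False: now unloaded
```

As noted in the discussion, this gets the benefit of a cache without paging: a re-open of a cached module is free, and nothing is ever written out to disk.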
17:34:10 Well, there's not much to tell, except that I never completed the project, nor got any further than a single-address-space, preemptive, prioritized multitasking engine. 17:34:23 AmigaOS never, ever, ever, ever supported paging anyway. 17:34:27 No virtual memory. Deal with it. 17:35:03 kc5tja, how about making fts/forth stand-alone? 17:35:18 slava: I have medium-level priority plans for doing exactly that. 17:38:01 In fact, the hardware native version of FTS/Forth will draw very much from my Dolphin experience. 17:38:13 --- join: arke_ (~Chris@wbar8.lax1-4-11-099-104.dsl-verizon.net) joined #forth 17:38:38 Ack I hate thisss 17:38:51 --- quit: arke (Nick collision from services.) 17:38:53 --- nick: arke_ -> arke 17:39:34 AmigaOS never, ever, ever, ever supported paging anyway. 17:39:39 last thing I saw. 17:39:49 What else happened before I came? 17:39:55 look at the clog logs 17:40:17 where? 17:40:51 http://tunes.org/~nef/logs/forth/04.01.27 17:41:55 --- join: arke_ (~Chris@wbar8.lax1-4-11-099-104.dsl-verizon.net) joined #forth 17:41:56 --- quit: arke (Read error: 104 (Connection reset by peer)) 17:42:26 http://tunes.org/~nef/logs/forth/04.01.27 17:42:30 (in case you missed that.) 17:42:35 ack damn connection 17:43:36 --- join: arke (~arke@melrose-251-251.flexabit.net) joined #forth 17:44:13 --- quit: arke_ (Client Quit) 17:44:27 if it doesn't work, use the shell account. 17:44:34 and don't forget to screen it. 17:44:39 brb, screen 17:44:39 --- quit: arke (Client Quit) 17:46:23 --- join: arke (~arke@melrose-251-251.flexabit.net) joined #forth 17:48:08 Ok, now I'm staying :) 17:48:32 Not me. 17:48:33 :) 17:48:43 In about 45 minutes, I'll be in Aikido. 
17:48:44 :) 17:50:08 :) 17:51:29 btw there are /much/ nicer logs for #forth at meme.b9.com 17:51:30 http://meme.b9.com/cview?channel=forth&date=040127 17:53:01 wow, cooness 17:53:05 coolness :) 17:55:56 kc5tja, i'm expecting nothing less from you than a native x86 fts/forth with multitasking and a forthish GUI :) 17:57:14 slava: If it'll even have a GUI standard. 17:57:21 Multitasking it'll have though. No worries there. 17:57:28 i wonder what a forthish GUI is. 17:57:44 slava: Think Palm Pilot. That's a very Forthish GUI. Very simple. Some might even argue too simple. 17:57:56 well palm pilot is not appropriate for a large screen. 17:58:01 no, the Palm Pilot does very well for me =) 17:58:04 slava: Sure it is. 17:58:05 tile-based windowing 17:58:21 not one app per screen 17:58:30 slava: Sure. 17:58:38 I take it you don't like Linux virtual consoles then. 17:58:53 i'm in x all the time :) 17:58:53 * kc5tja uses the ion window manager, precisely because it can do both tiled AND paned window management. 17:59:09 More often than not, however, it's tabbed. One screen, one app. 17:59:23 s/paned/tabbed/ -- sorry, tiled and paned mean the same thing. :) 17:59:28 the UI I want for factor is where every window is an outer interpreter loop. 17:59:40 with a database backing for storage. 17:59:47 to edit a definition, type something like: "sq" ed 17:59:47 FTS/Forth will not behave like that. Sorry. 17:59:58 and it enters : sq dup * ; in the input area 18:00:06 and you use command line editing to change it 18:00:29 FTS/Forth will manage application views as overlays. 18:00:30 kc5tja, i don't expect fts/forth to do this :) 18:00:49 When you switch to another application, the view-related code for the previous app gets overwritten with a freshly recompiled version of what you're switching to. 18:01:44 chandler: Is Meme your own program?
18:02:01 kc5tja: no, though I wrote one of the libs it depends on 18:02:07 it belongs to Kevin Rosenberg, aka kmr 18:12:21 --- quit: qFox ("if at first you dont succeed, quit again") 18:39:35 --- join: I440r (~mark4@12-160.lctv-a5.cablelynx.com) joined #forth 19:17:00 --- quit: Teratogen ("SKYKING, SKYKING, DO NOT ANSWER") 19:32:45 --- quit: I440r (Read error: 60 (Operation timed out)) 19:43:27 * warpzero is back (gone 08:31:07) 20:22:23 --- join: I440r (~mark4@12-160.lctv-a5.cablelynx.com) joined #forth 20:33:13 back :) 20:44:34 defrost - ah, nifty. Basically what I wrote, with a very different style =) 20:44:44 hi arke 20:45:15 sorry, mischan. 20:45:55 --- quit: Nutssh ("Client exiting") 20:57:59 Back 20:58:24 chandler: Very cool. :) I'm glad to see "real world" applications written in Lisp. We need a greater diversity of languages out there. 20:59:06 welcome back :) 21:00:09 I took some falls today that were well above my rank to take. :) 21:00:18 I could have been rather hurt, but I wasn't. 21:00:23 That being said, they were damn fun! 21:01:50 hehe :) 21:09:48 --- quit: cmeme (Connection timed out) 22:03:19 --- join: Nutssh (~Foo@gh-1177.gh.rice.edu) joined #forth 22:12:58 --- part: Nutssh left #forth 22:30:45 --- quit: Sonarman ("leaving") 22:57:07 So did you whack someone with your stick real good? 23:07:54 --- quit: OrngeTide ("Changing server") 23:46:39 --- join: Nutssh (~Foo@gh-1177.gh.rice.edu) joined #forth 23:48:35 Heh 23:48:38 No. 23:48:41 I don't have a stick anymore. 23:48:46 Someone stole my stick last year. >:( 23:48:55 I have to order another one. 23:50:08 Anyway, I'm going to go to bed. 23:50:20 goodnight 23:50:25 --- quit: kc5tja ("THX QSO ES 73 DE KC5TJA/6 CL ES QRT AR SK") 23:50:46 Night. 23:59:59 --- log: ended forth/04.01.27