00:00:00 --- log: started forth/10.04.21
00:12:50 --- quit: maht_ (Ping timeout: 276 seconds)
00:57:59 --- join: kar8nga (~kar8nga@jol13-1-82-66-176-74.fbx.proxad.net) joined #forth
02:04:43 --- join: Rugxulo (~user@adsl-065-013-115-246.sip.mob.bellsouth.net) joined #forth
02:05:04 anybody actually done anything interesting in gforth-ec (8086)?
02:05:25 * Rugxulo doubts it, but you never know ...
02:07:45 Nobody has done anything interesting with gforth at all yet.
02:08:16 What is "interesting" here anyway?
02:08:16 I honestly haven't even tried building EC yet, heard it was broken (might be wrong nowadays, not sure)
02:08:31 anything more than 2+2
02:09:22 * schme hasn't used gforth-ec at all. But I did use gforth as a playground environment for something to use on an embedded forth later on.
02:09:28 which was.. more than 2+2 I guess.
02:09:32 it had loops and everything ;)
02:15:37 * Rugxulo can't remember, who was the IsForth dude? madgarden???
02:16:14 No.
02:16:36 He disappeared.
02:16:45 :-(
02:20:06 --- quit: Rugxulo (Read error: Connection reset by peer)
02:20:20 --- join: Rugxulo (~user@adsl-065-013-115-246.sip.mob.bellsouth.net) joined #forth
02:25:08 --- quit: Rugxulo (Read error: Connection reset by peer)
02:25:47 --- join: Rugxulo (~user@adsl-065-013-115-246.sip.mob.bellsouth.net) joined #forth
02:31:19 --- join: Rugxulo` (~user@adsl-065-013-115-246.sip.mob.bellsouth.net) joined #forth
02:32:57 --- quit: Rugxulo (Ping timeout: 240 seconds)
02:33:42 --- part: Rugxulo` left #forth
06:22:11 The ISForth guy was I440r, right?
06:22:52 I think I have an email for him - I could get a message to him.
06:41:04 --- quit: zashi1 (Quit: Leaving.)
07:19:21 Yeah, I440r
07:19:29 Hmm.
07:19:49 Now I want to make an "ISNTForth"
07:27:22 --- join: tgunr (~tgunr@cust-66-249-166-11.static.o1.com) joined #forth
07:30:15 --- quit: tgunr (Remote host closed the connection)
08:08:00 --- quit: kar8nga (Remote host closed the connection)
08:14:14 --- quit: ASau` (Quit: off)
09:32:08 Hello all.
09:32:17 Howdy.
09:33:07 How is it going?
09:42:47 --- join: gavino (~g@w005.z209031033.sjc-ca.dsl.cnc.net) joined #forth
09:42:47 --- mode: ChanServ set +b gav*!*@*
09:42:47 --- kick: gavino was kicked by ChanServ (Banned: gavino_again)
10:15:28 yoh
10:15:57 Hi.
10:36:42 --- quit: crc (Ping timeout: 260 seconds)
10:38:29 --- join: crc (~charlesch@184.77.185.20) joined #forth
11:21:30 --- join: kar8nga (~kar8nga@jol13-1-82-66-176-74.fbx.proxad.net) joined #forth
11:26:38 --- join: segher (~segher@84-105-60-153.cable.quicknet.nl) joined #forth
11:37:20 --- quit: kar8nga (Remote host closed the connection)
12:05:14 :)
12:08:57 What's up?
12:10:57 going to bed
12:11:01 :D
12:15:59 I see.
12:16:01 Goodnight. :)
12:17:59 --- quit: ygrek (Ping timeout: 245 seconds)
12:19:45 --- join: ygrek (debian-tor@gateway/tor-sasl/ygrek) joined #forth
12:35:32 --- quit: madwork (Ping timeout: 258 seconds)
12:36:37 --- join: madwork (~madgarden@204.138.110.15) joined #forth
13:34:48 --- join: kar8nga (~kar8nga@jol13-1-82-66-176-74.fbx.proxad.net) joined #forth
13:39:48 KipIngram: Have you read this? http://www.ultratechnology.com/1xforth.htm
14:04:36 --- quit: kar8nga (Remote host closed the connection)
14:22:59 --- quit: ygrek (Ping timeout: 245 seconds)
14:31:40 Deformative: Oh yes. Several times over the years.
14:32:49 First time for me.
14:32:58 I think I might implement something similar on my fpga.
14:35:15 I should send you my instruction set. I spent quite a lot of time thinking about it, so perusing it might at least stimulate your thought process.
14:36:48 Sure thing.
14:38:08 Only 4 exams, 1 project, and 1 assignment before I am done with the term.
(Next Thursday)
14:39:03 I just couldn't give up SWAP. It was too easy to implement hardware-wise.
14:39:59 Understandable.
14:40:25 char ops[][6] = { "nop" , "ret" , "co" , "next" , "unext", "@p+" , ">r" , "r>" ,
14:40:27 ">a" , "!a+" , "!b+" , "(@a+" , "(@b+" , "@)" , "" , "" ,
14:40:29 "dup" , "drop" , "swap" , "over" , "" , "" , "" , "" ,
14:40:31 "and" , "xor" , "not" , "+" , "+c" , "+*" , "2/" , "2*"
14:40:33 };
14:40:52 If you find some of the SeaForth chip documentation it will help too - I based mine heavily on those.
14:41:20 I have some quirks in how my a and b address registers work.
14:42:14 Essentially >a moves the top of stack into the a register, the previous contents of the a register into the b register, and the previous contents of the b register into a "scratch" register.
14:42:34 If @) is executed immediately following that then it pushes the scratch register content onto the stack.
14:42:54 That gives me the ability to load a and b using just one opcode and also the ability to recover them.
14:43:32 The memory access words are optimized for Xilinx internal block RAM.
14:44:13 That's why the @ words are in two pieces; (@a+ and (@b+ do a memory access using the content of the specified address register (and increment the register).
14:44:22 @) follows and pushes the output of the memory onto the stack.
14:44:33 Splitting those up solved *all* *kinds* of problems for me.
14:45:26 KipIngram: This is very interesting.
14:45:27 My unext operation is a bit different from the SeaForth model as well. But you should read how they did it first; then when I tell you how mine works it will make more sense.
14:46:25 And as you can see I have six opcode slots left over at this point.
14:48:01 schme: Really? I agonized over it quite a bit - I hope it makes at least some sense. :-)
14:48:14 KipIngram: I find it very interesting at least.
14:48:26 Sorry, I was on the phone, back now.
14:49:08 26 opcodes?
14:49:11 Weird number.
14:49:20 pfft. 26 is the new 32.
14:49:21 Well, 32 are supported; I have six spare slots.
14:49:32 There are a couple of things I still want to add.
14:49:35 I see.
14:50:03 I want to define a state register.
14:50:06 KipIngram: What are your plans for this design in the future? Are you actually going to use it for something or is it just a hobby thing?
14:50:08 For example, I'd like memory access words that don't use the a / b registers. Those would be ! and (@ and they'd use the stack top as the address.
14:50:10 And I want to be able to define custom states.
14:50:17 Rather than just immediate/compile.
14:50:39 The (@ word is no problem. The ! word is a *real* problem, though, because it requires that all stack registers be able to take the cell two beneath them as a possible next value.
14:50:51 They currently don't have to do that, so it adds a level of logic and slows down my clock.
14:51:19 schme: Yes, I plan to use it at work.
14:51:26 KipIngram: excellent.
14:51:44 I made every effort to keep it small, so it will "fit off in the corner" of almost all of our FPGA designs.
14:51:57 niice
14:52:00 * schme goes to bed then.
14:52:27 Deformative: what's your goal there?
14:52:30 Sounds complicated.
14:53:15 Although I suppose requiring STATE==1 instead of STATE!=0 for the current stuff is pretty easy; then you'd have all other values available for other purposes.
14:53:24 So maybe not so complicated.
14:53:37 But what will you then *do* if STATE has some other value? Jump through a table or something?
14:53:45 * KipIngram gets dizzy...
14:54:00 Heh.
14:54:08 Well, states would be colors.
14:54:16 Create is its own state.
14:54:20 Compile and interpret obviously.
14:54:33 But less obvious are numbers, which can be their own state.
14:54:36 Oh - you've been reading colorforth. I'm afraid to say I've never quite followed Chuck there...
14:54:36 How to handle the literals.
14:55:04 But you either compile the literal or interpret it, right?
14:55:46 It doesn't surprise me that you'd have some interesting ideas at and around the source code level, though - sounds like you have a really good background in such things. I'm more tuned to the hardware level.
14:56:10 Yeah, I don't know enough about hardware yet.
14:56:41 One of the important things you need to know about your chosen FPGA is that it uses 4-input LUTs.
14:56:41 But to be able to do a "case"-like check over the current state would be interesting.
14:56:54 It would be more useful for user defined words than it would be for the standard INTERPRET.
14:56:56 So any function of four inputs is equally "expensive"; it takes one LUT.
14:57:12 Yeah, I saw that.
14:57:18 So a two-input mux takes one LUT per bit (two inputs for the two inputs, one for the select, and one left over).
14:57:32 I skimmed the user manual last night per your advice.
14:57:34 But a *four* input mux takes more.
14:58:05 So it helps a lot if your stack elements need only two-input muxes. That's why ! gives me such grief.
14:58:20 The Spartan 6 family, though, has six-input LUTs. Very, very nice.
14:58:22 I also saw that every LE has only one output.
14:58:26 Opens all kinds of possibilities.
14:58:34 --- quit: Snoopy_1611 ()
14:58:56 Right. And that's another Spartan 6 improvement: every LUT can do either one six-input function or two five-input functions.
14:59:00 --- join: Snoopy_1611 (Snoopy_161@dslb-088-068-206-229.pools.arcor-ip.net) joined #forth
14:59:20 I see.
14:59:20 And also has two registered outputs instead of one.
15:00:03 I am going to keep my particular LEs in mind, but not focus on them.
15:00:15 Much of it will be optimized away by the compiler anyway.
15:00:25 And the whole joy of an HDL for me is that it is portable.
15:01:35 I agree with you up to a point, but the compiler can only do what's possible.
If you require things that are hard to do or require many levels of logic you will lose performance, whereas if you understand what your hardware can do and "meet it in the middle" you'll get much better results.
15:01:46 It's a holistic process.
15:02:33 I understand. But I am still just a beginner, so hand optimization is still a bit advanced for where I am.
15:02:46 For instance, include ! in your instruction set and your stack elements will need more than a two-input mux. That will run more slowly. Even though that's the only instruction that needs the extra functionality, that layer is then *there*.
15:02:51 Forever, on every instruction cycle.
15:03:29 Alternatively, if you choose to make ! a macro composed of two machine level operations, you pay the extra cycle only when you run ! and achieve a greater clock speed.
15:03:57 I have no problem with the "walk before you run" approach.
15:04:04 Well, my board only has a 50MHz clock.
15:04:12 I doubt I will cap out.
15:04:27 :-) That's true - you have tons of time.
15:04:28 And there are plenty of LEs to waste space.
15:05:16 There is a hack we utilized in my engineering course to make the system run at 100MHz, I might try that with my board as well.
15:05:32 I haven't looked into it too much though.
15:06:42 I think my approach to doing ! will be to have DROP "stow" the dropped value in a scratch register and then have the primitive part of !, which I might call (!), use that as the address.
15:06:56 Then at the software level ! will compile to DROP (!).
15:07:11 A little tacky, but it will work.
15:08:56 I see.
15:09:31 Well, I need to finish my algorithms project by midnight, so I am going to go to a lab now.
15:09:38 Talk to you more about this later.
15:09:39 o/
15:09:56 Night.
15:39:39 --- join: aguai_ (~aguai@123.120.226.178) joined #forth
15:43:33 --- quit: aguai (Ping timeout: 260 seconds)
16:50:35 --- quit: alex4nder (Quit: Lost terminal)
16:59:41 --- part: TR2N left #forth
17:59:25 Finished programming.
17:59:30 Hooray.
17:59:49 I still have to study for exams, but whatever.
18:00:41 morning
18:00:47 --- quit: schme (Ping timeout: 276 seconds)
18:00:55 Evening.
18:01:07 >_<
18:01:17 good evening
18:01:30 Heh.
18:01:34 How's it going?
18:01:48 playing travian
18:02:00 --- join: schme (~marcus@c83-254-196-101.bredband.comhem.se) joined #forth
18:02:00 --- quit: schme (Changing host)
18:02:00 --- join: schme (~marcus@sxemacs/devel/schme) joined #forth
18:06:15 I wonder how one would make something like Duff's device in Forth.
18:06:18 * Deformative ponders.
18:11:28 Tom Duff?
18:12:20 Loop unrolling for quick memcpy.
18:12:36 Yes, Tom Duff.
18:13:32 google-ing that & reading wikipedia
18:51:26 Deformative: On a SeaForth processor, or on the one I'm making, you'd set up the a and b registers to point to the to and from points, push the count onto the return stack, and then do this:
18:51:45 (@a+ @) !b+ unext
18:52:36 What is unext?
18:53:18 The processor stores three opcodes in each 16-bit cell. It also keeps a copy of the previous three opcodes (the previous cell), so it has six opcodes in hardware registers at a time.
18:53:34 unext just starts those over again - no fetching required.
18:53:47 So, the loop isn't unrolled.
18:53:53 Technically, though, I guess there is one cycle per pass for the decrement and comparison.
18:53:58 You would need manual control over the ip.
18:54:05 Brb, grabbing a bit of food.
18:54:07 IP isn't used here; it's only used for fetching.
18:54:10 I will be back in like 20 minutes.
18:55:34 Hmmm. I can predetect opcodes in many cases; I'd be able to see the unext coming a cycle early.
I might be able to arrange to overlap its execution with the !b+, in which case this would run at the speed of a 100% unrolled loop.
18:56:18 It would be easy for me to see if the opcode I execute just before unext (the !b+ in this case) "conflicted" with pre-execution of unext. If it did, I'd just wait and execute it on time. If it didn't, I'd pre-execute it.
18:56:40 I do this with the ret instruction already; it will pre-execute unless the operation just before it is tinkering with the return stack.
18:56:51 Cool - I feel sure I can make that work.
19:05:34 Yes - I see how to do that.
19:06:20 Furthermore, I can add an opcode, and I think this is worth doing, called !b+)
19:06:31 In that string above here's what happens:
19:06:58 (@a+ fetch from address pointed to by a, increment a, leave value in the "memory out register."
19:07:14 @) push memory out register to the stack
19:07:37 !b+ write top of stack to address pointed to by b, increment b
19:07:44 unext loop
19:07:59 So, I will overlap unext with !b+; so the loop itself costs no time.
19:08:12 But with the new opcode the loop will just be this:
19:08:23 (@a+ !b+) unext
19:08:37 !b+) will write the memory out register to the address pointed to by b, with increment.
19:08:59 That will run as fast as the hardware allows: one cycle for read, one for write, one for read, one for write, etc.
19:09:37 --- quit: madwork (Ping timeout: 265 seconds)
19:09:39 Deformative: Thanks for bringing up this topic - that's two nice improvements to my processor. :-)
19:16:28 I'm pretty stingy with my opcodes, but spending one to speed up block memory moves by 50% seems like a good investment.
19:26:11 --- join: madwork (~madgarden@204.138.110.15) joined #forth
19:30:15 KipIngram: But you still don't unroll your loop.
19:31:04 It's 100% unrolled now. There is no run-time loop overhead. And I get that *without any code space cost* that I would pay with traditional unrolling.
19:31:09 It's *better than unrolling*.
19:31:28 In terms of memory usage, perhaps.
19:31:42 Traditional unrolling is a hack that trades space for time.
19:31:56 This approach gets me all the time advantages without the space penalty.
19:32:06 Oh wait, you update your ip and execute the instruction in one cycle?
19:32:10 If so, then I get it.
19:32:18 The unext loop does not use ip.
19:32:28 ip is used only to fetch the next cell from memory.
19:32:31 How do you do this conditionally then?
19:32:40 unext uses cell data already fetched and in the hardware.
19:33:01 unext is a conditional thing. If the count hasn't expired it "replays" opcodes it already has stored.
19:33:16 If the count has expired it then falls through.
19:33:29 The next cell is fetched then.
19:34:14 unext decrements the top cell of the return stack and hits "replay" if the result is non-zero.
19:34:51 And as long as the opcode right before unext doesn't do anything to the return stack (which would be really weird) it can do that *during* the previous opcode. So no time consumed by unext itself.
19:35:18 In other words, the decrement, test, and decision will all occur while I'm doing the !b+) instruction.
19:35:42 Hey, my wife's back from the gym with our daughter now. Going back to our TV show.
19:35:45 Laters.
19:35:51 Ok.
19:36:01 I will try to understand your explanation.
20:59:30 Deformative: I'm back. How did that go?
21:00:33 From what I understand, you prefetch the instruction, check the conditional, and jump all in one clock cycle.
21:01:41 Well, the instruction cells (up to two of them, which can contain up to six opcodes) are fetched just once; they're cached in hardware registers for the duration of the loop.
21:02:49 But yes, the unext instruction shares a clock cycle with the last "working instruction." So the 1) decrement, 2) zero check, and 3) store with increment instruction all share a clock cycle.
21:03:35 And if the loop continues then the next memory read occurs on the very next clock cycle, followed on the next cycle by the next write, etc.
21:04:21 I see.
21:04:35 Sounds good, but over-complicated for someone like me.
21:05:14 In truth, though, it turned out to be very simple in terms of its hardware implementation. If it hadn't then Chuck Moore wouldn't have put unext into his hardware. He doesn't go for complexity.
21:05:55 But the idea had to cook in my head for several weeks after someone here mentioned it before it "seemed simple" to me. My subconscious needed time to work on it.
21:06:36 Chuck's hardware has five opcodes per cell, so he just caches one. Think of it this way. He fetches a cell. So he has five opcodes right there in front of him.
21:06:47 If one of them is unext and the loop continues, then he just starts those same five over again.
21:06:50 Very simple.
21:07:14 I only have three opcodes per cell. And if I use unext then it takes one of those slots.
21:07:23 I didn't think a two opcode loop seemed very useful.
21:08:08 So I complicated things just a bit by adding a second register that catches the "previous cell"; whenever I'm done with a cell full of opcodes I stuff it into that register as I bring the next three opcodes into the first register.
21:08:15 So I have six opcodes "in front of me."
21:08:36 Juggling them back and forth took a bit of thought, but it didn't take very much logic once I figured out how to implement it.
21:08:40 Oooh, unext only works over 5 opcodes.
21:08:41 I see.
21:08:45 I thought it was arbitrary.
21:09:02 It works "up to" five opcodes.
21:09:34 I understand that.
21:09:58 My very first idea, which I thought of on my own, was to have an arbitrary length queue.
21:10:04 I thought it could work on user defined words and stuff too.
21:10:09 But I am thinking it can not.
21:10:13 Then someone here explained unext to me, and I decided that the simplicity it offered was better.
21:10:51 My original concept would have - it would just have caught the fully expanded opcode stream for any amount of code up to the hardware limit.
21:11:05 My idea was that you would then not need to "rethread" through your code on subsequent passes through the loop.
21:11:20 But the logic required to do that would be significant and complex.
21:11:27 unext is *so* simple.
21:11:56 I understand now.
21:12:03 But it cannot operate on user defined words?
21:12:06 Only hardware words?
21:12:29 Yes, only on an "up to five" sequence of hardware opcodes.
21:12:36 I think of it as a "code machine gun."
21:12:45 Or rather an "opcode machine gun."
21:13:04 Duff's device can work on code which is not machine code.
21:13:27 Ok. I was just thinking about the specific example you cited: a memory block move.
21:13:52 And it seems that if the "work part" that you're doing gets very involved then the loop overhead is less significant by comparison.
21:14:05 It seems like it's just the simple cases where it really buys you something.
21:14:52 Good point.
21:15:03 Things like memory moves, io operations, maybe certain very simple arithmetic operations?
21:15:18 What if you wanted to clear a block with some constant?
21:15:25 Say you wanted to set a whole block of memory to high or low.
21:15:46 I suppose you could put the constant in a register somewhere.
21:19:06 You could use the !b+) word for that, I think. It doesn't take anything from the stack.
21:19:26 So get your value into the memory output register (which I think I'm going to have DROP do), and then just
21:19:35 !b+) unext
21:19:40 One cycle per write.
21:19:42 :-)
21:19:49 I see.
21:20:04 You said something a while back about Forth being one huge clever hack.
21:20:11 How do you load constants in your asm?
21:20:16 @p+
21:20:28 That fetches from the cell referenced by the IP and then increments it.
21:20:52 You can have three of those in one cell, so you could have @p+ @p+ @p+ in one cell and then three cells of literals, if you wanted.
21:21:59 Anyway, all of these things seem like a hodge podge of hacks to me, but they really do make the metal scream.
21:22:31 But that has the potential of overflowing your bound.
21:24:26 Oh, I wouldn't do that with unext.
21:24:32 I was just explaining how I loaded literals.
21:24:51 Oh.
21:24:55 I wouldn't try to include a literal in the unext loop; I'd have a preamble that got the literal into the scratch / memory out register.
21:25:01 How do you pack the literal into the assembly?
21:25:04 I don't understand.
21:25:08 Then the unext loop would write that register over and over to memory.
21:25:12 Hold on; I'll lay it out.
21:25:45 Presume the count is on the top of the stack and the starting address just under. Let's say we want to clear a block to zero.
21:26:16 >r dup >a <--- one cell
21:26:26 wait.
21:26:36 >r dup >a >a 0 drop
21:26:53 That has everything ready. Now insert nop opcodes if necessary to "cell align".
21:27:01 Then just !b+) unext
21:27:21 That would then use cycles to store zero to all the required cells.
21:28:39 What does 0 compile to?
21:29:24 @p+ at whatever point it's supposed to execute, and then a 16-bit zero literal in the following cell.
21:30:49 So literals take a whole cell.
21:30:51 I see.
21:31:23 Yes. At one point I considered another approach to literals that "built them up" a nibble at a time. Small literals took very, very little code space that way.
21:31:32 It was actually a tough decision as to which way to go.
21:32:27 That approach had an opcode that put a zero on the top of the stack and did a "mode switch". Every five-bit opcode that followed then shifted the stack top left a nibble and OR'd in a new nibble.
21:32:41 When one of those opcodes had the MSB set that switched back to normal execution mode.
21:32:43 I see.
21:32:51 That made literals more compact in code space but slower to load.
21:33:09 Or rather, it made many commonly used literals (small numbers) compact in code space.
21:36:34 Makes sense.
21:36:43 The unext mechanism, though, works *only* on opcodes. There can't be any literals, subroutine calls, etc.
21:37:38 There is another opcode, "next," that works just like unext except that it loops on real code. No caching. So the loop contents can be as long as you like, contain anything, etc.
21:37:44 Yeah, because literals are their own cell.
21:37:54 next does a real conditional jump if it wants to repeat the loop.
21:38:27 But that jump can also be "merged" with the preceding opcode as long as the preceding opcode doesn't monkey with the return stack.
21:38:40 So once again no loop overhead.
21:39:12 That's thanks to you, though.
21:39:49 I didn't click until this evening that I could apply my "predetect and overlap" technique to unext and next. I'd used it only for the return instruction.
21:40:09 So things like unext and literal end the cell.
21:40:12 Right?
21:40:19 There can be nothing after these in the cell.
21:40:22 It was thinking about your Duff's device exercise that got me to see it.
21:40:43 No. I need to check my implementation; I may have chosen for unext to end the cell. But literal doesn't have to.
21:40:56 If it doesn't then things are sort of "out of order" but it still works.
21:41:03 --- join: ygrek (debian-tor@gateway/tor-sasl/ygrek) joined #forth
21:41:11 I see.
21:41:24 IP is always pointing to the next cell, no matter which of the three opcodes you're executing.
21:41:37 Yeah, it occurred to me after I said that.
21:41:45 So if one of them is @p+ the literal gets pushed and IP incremented. You then just go to the next opcode as usual.
21:41:50 Yep.
21:41:51 I see.
21:42:20 I'd have to study my code to know for sure if unext ends the cell or not.
21:43:19 ret of course does; you can't jump into the middle of a cell, so if something came after ret there would be no way to get at it.
21:43:41 Makes a lot of sense.
21:45:27 I just looked at the code. It looks like unext does not end the cell. It seems that if the count is not expired then it clicks you back to the start of the sequence, but if the count *is* expired it does nothing, which just means you proceed to the next opcode.
21:45:53 Well unext doesn't need to, because if it falls through, you can still use the cell.
21:46:26 Right. I had some recollection of considering having it end the cell for simplicity, but I imagine that in the end it turned out to be simpler to just let things march on.
21:46:28 Do you just use a priority selector to get the current op out of a cell, or what do you use?
21:46:44 I have a four-state state machine.
21:47:00 State 0 is "fetch the cell," state 1 is "opcode 1," etc.
21:47:09 Ah, more trivial.
21:47:22 KISS principle.
21:47:38 A priority selector would be interesting: mux the outputs with the select flag and drive the inputs low after execution.
21:47:40 Since unext uses opcodes already fetched it returns you to S1, not S0.
21:47:47 Of course it would be more complicated and probably result in more logic.
21:47:53 But it is just cooler in my head for some reason.
21:48:25 I thought about several ways to do it, and tried to pick the best combination of speed and logic economy.
21:48:28 Subjective, of course.
21:48:50 I am sure most/all architectures use standard Moore machines.
21:49:21 I remember what those are but don't really think in those terms.
21:49:29 What I suggested is just a Moore machine that uses a priority selector with feedback instead of a state register.
21:50:43 Something else that might interest you is that I have conditional execution of threaded calls.
21:50:44 I think mine is less optimal by a lot.
21:51:19 A colon definition compiles as a 16-bit cell that does not contain opcodes but rather contains an effective address of the target code.
21:51:41 But that cell can be a 1) unconditional call, 2) unconditional jump, 3) conditional call, or 4) conditional jump.
21:51:59 There are three possible conditions (not counting unconditional).
21:52:16 This makes a lot of stuff really compact.
21:52:36 It reduces the number of bits available for the effective address, though, so I have a banked memory mechanism to allow plenty of code space.
21:52:42 Hm, 16 opcodes would be optimal for my fpga.
21:52:50 Since 4 bit states is preferable.
21:52:58 Unless I take additional inputs.
21:53:22 My statement only holds if state transition is not dependent on anything but current state.
21:53:26 Which is just about never true.
21:53:28 So nevermind.
21:53:41 It really helped me to look one opcode ahead and make some decisions based on that in some cases.
21:55:19 The biggest shortcoming my system has at the moment is that all clock cycles are the same length. Ideally the ones that need to involve a memory read or write would be longer and the ones that don't would be shorter.
21:55:31 I think I can add that without too much trouble, but I haven't done it yet.
21:56:02 I'd use a much faster clock and then decide how many cycles to include in each state based on what I'm doing.
21:56:17 Interesting.
22:10:30 --- join: arquebus (~shintaro@201.139.156.133.cable.dyn.cableonline.com.mx) joined #forth
22:14:12 Hmm.
22:14:26 I should either study differential equations or do my algorithms assignment.
22:14:30 But both are just so boring.
22:15:31 what, are you a freshman in college?
22:15:51 Yes.
22:16:38 what do you have to do, sorting or trees in C, for your algorithm assignment?
22:17:28 Graph algorithms.
22:17:34 Prim's and such.
22:18:21 huh, didn't know algorithms could be graphed, live and learn
22:18:29 Also, on this written homework I think huffman encoding, dynamic programming, and poly-alphabetic cipher are on there.
22:19:54 I just wikipedia'd dynamic programming, sounds forthish
22:20:00 Subsets of dp include subsequencing and the knapsack problem.
22:20:34 And the course is in C++, but I of course do it in C as you said earlier. :D
22:21:23 that's pretty intense for freshman coursework, what uni do you go to?
22:21:34 University of Michigan.
22:22:51 that's pretty good, a lot of universities have CS majors that are a joke nowadays
22:23:06 I am not CS.
22:23:12 I am computer engineering. More hardware centric.
22:24:31 ah, ok, sort of half way between CS and EE I guess
22:24:50 Well, lower half of CS and higher half of EE.
22:25:35 I view it as a more specialized major which overlaps both rather than a more general one containing both.
22:25:36 have you done a lot of assembly yet?
22:25:56 Some, in my intro course; I have a lot more in organization next term.
22:26:35 Well, I shouldn't say I am a freshman; freshmen do not take these courses.
22:26:43 I am first year, but I am roughly a junior.
22:27:05 how is it possible to be a junior in your first year?
22:27:25 I tested out of a lot.
22:27:30 And I take a full course load.
22:28:01 wow, I guess you did a lot of programming in high school or something
22:28:08 I am still a sophomore I suppose. I am a junior in about a week.
22:28:10 ^^
22:28:45 pretty good
22:28:51 Yeah, but the programming didn't get me a whole lot. I got a lot more from other stuff.
22:29:16 I tested out of Programming, Chemistry, Physics I, Physics II, Calc I, and Calc II.
22:29:26 I didn't have courses for programming or Physics II.
22:29:48 I just purchased textbooks, read them for a few weeks, took the tests and got good scores.
22:30:13 sounds awesome, didn't know you could do that
22:30:35 Yeah, I had a lot of spare time in high school.
22:30:50 I didn't do any homework, and the teachers didn't care if I went to class.
22:31:13 So I was either in the metalshop, reading a textbook, or sitting on my ass all the time. ^^
22:31:17 but you still passed ok I guess
22:31:23 Yeah.
22:32:25 I wish I'd discovered programming when I was in high school; I'm in my 40s now. When I was in high school computers sucked, just Apple IIs and Trash80s
22:32:55 Yeah, I started programming when I was 14.
22:33:09 The internet makes it easy now.
22:33:23 I downloaded MIT's course and went from there.
22:34:30 I haven't found anything good at MIT courseware, I did come across aduni.org which is almost an entire CS major free online
22:35:07 6.0001
22:35:36 what's 6.0001?
22:36:04 Structure and Interpretation of Computer Programs.
22:36:05 It is in lisp.
22:36:13 That was my introduction to CS.
22:36:37 yes, the bible of programming, I can almost read it but I get stuck on the calculus examples they use
22:36:52 that and K&R
22:37:15 Yeah, K&R was my second book.
22:37:16 ^^
22:38:24 I wish someone had told me about K&R 10 years ago; I fried my brain on "C++ for Dummies" and "Teach Yourself C++ in 21 Days"
22:38:35 Ouch.
22:38:44 exactly
22:39:16 Well, it is 1:30, I should get to that work.
22:39:19 I will talk to you later.
22:39:23 ok, later
22:39:45 --- part: arquebus left #forth
23:59:59 --- log: ended forth/10.04.21