00:00:00 --- log: started forth/03.08.13
02:26:21 --- quit: gilbertdeb ("Client Exiting")
02:31:06 --- quit: Stepan (Remote closed the connection)
02:46:07 --- join: Stepan (~stepan@frees.your.system.with.openbios.org) joined #forth
05:32:49 --- quit: proteus_ (Read error: 110 (Connection timed out))
06:03:31 --- join: fridge (~fridge@dsl-203-33-160-130.NSW.netspace.net.au) joined #forth
06:21:41 --- join: proteus_ (~username@sgi.scigames.com) joined #forth
06:44:02 --- join: Serg_Penguin (Serg_Pengu@212.34.52.140) joined #forth
06:44:17 --- quit: Serg_Penguin (Client Quit)
07:48:40 --- join: gilbertdeb (gilbert@fl-nken-u2-c3b-118.miamfl.adelphia.net) joined #forth
08:22:42 I'm looking for a repository of forth coding examples
08:23:00 what kind of code?
08:24:55 well anything really, just want to see how people solve various problems in forth
08:25:01 I keep getting stuck in C mode
08:25:21 --- join: ma] (markus@lns-th2-3-82-64-37-68.adsl.proxad.net) joined #forth
08:25:22 you're ready for: Thinking Forth
08:25:29 lemme see...
08:26:34 oh, have you checked the news groups?
08:26:53 no
08:27:13 --- quit: ooo (niven.freenode.net irc.freenode.net)
08:27:23 --- join: ooo (~o@jalokivi.netsafir.com) joined #forth
08:27:30 any news on the stack processor front?
08:28:17 hi ma].
08:28:26 Any news would have to do with $$$ :P
08:28:50 fridge short of pasting a bunch of links to articles ...
08:28:59 I'm not sure how else to go about this.
08:29:09 but yes you're right, there is a 'forth way'.
08:29:14 http://www.ultratechnology.com/4thlego.htm
08:30:53 ...but the path is small, and most developers' brains clumsy :-(
08:31:12 ma] do you know of this path?
08:31:36 I've just invested 400E in a spartan300e
08:32:33 so, till now I'm an amateur...
08:32:47 aren't 5 bits per instruction maybe too much
08:33:40 forth.sf.net has some examples
08:35:02 yesterday I understood that a direct connection between two processors corresponds to a special group of linda programs
08:36:58 anybody done linda stuff?
08:37:09 whats
08:37:31 that?
08:37:35 parallel programming system
08:38:05 afaik: central list contains all tasks to be done
08:38:26 clients take one after the other, do the work, and put back the result
08:38:33 --> absolutely scalable
08:40:26 the only imposed limit should be that you don't know when those tasks run :-)
08:43:12 http://www.ultratechnology.com/4thlinda.html
09:21:10 --- quit: ooo (niven.freenode.net irc.freenode.net)
09:21:10 --- quit: Stepan (niven.freenode.net irc.freenode.net)
09:21:10 --- quit: ChanServ (niven.freenode.net irc.freenode.net)
09:21:10 --- quit: ian_p (niven.freenode.net irc.freenode.net)
09:21:11 --- quit: onetom (niven.freenode.net irc.freenode.net)
09:21:11 --- quit: fridge (niven.freenode.net irc.freenode.net)
09:21:11 --- quit: Fractal (niven.freenode.net irc.freenode.net)
09:21:11 --- quit: ma] (niven.freenode.net irc.freenode.net)
09:21:11 --- quit: TreyB (niven.freenode.net irc.freenode.net)
09:21:49 --- join: ChanServ (ChanServ@services.) joined #forth
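
The Linda model described above (08:38) maps almost directly onto a shared task pool with anonymous workers. A minimal sketch in Python; the worker count and the squaring "work" are invented for illustration, and a real Linda tuple space matches tasks by pattern rather than using a plain FIFO:

```python
# Linda-style coordination: a central pool of tasks, workers that
# repeatedly take one, do the work, and put back the result.
import queue
import threading

tasks = queue.Queue()    # the central list of tasks to be done
results = queue.Queue()  # completed work goes back here

def worker():
    while True:
        item = tasks.get()
        if item is None:          # sentinel: no more work
            break
        results.put(item * item)  # "do the work" (square it, here)
        tasks.task_done()

# Any number of clients can drain the same pool -- this is what makes
# the scheme scale: you never know *when* a given task runs, only that
# every task is eventually taken, processed, and its result put back.
workers = [threading.Thread(target=worker) for _ in range(4)]
for w in workers:
    w.start()

for n in range(10):
    tasks.put(n)
for _ in workers:
    tasks.put(None)  # one sentinel per worker
for w in workers:
    w.join()

print(sorted(results.queue))  # [0, 1, 4, 9, ..., 81]
```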
09:21:49 --- join: ooo (~o@jalokivi.netsafir.com) joined #forth
09:21:49 --- join: ma] (markus@lns-th2-3-82-64-37-68.adsl.proxad.net) joined #forth
09:21:49 --- join: fridge (~fridge@dsl-203-33-160-130.NSW.netspace.net.au) joined #forth
09:21:49 --- join: Stepan (~stepan@frees.your.system.with.openbios.org) joined #forth
09:21:49 --- join: onetom (~tom@novtan.bio.u-szeged.hu) joined #forth
09:21:49 --- join: ian_p (ian@inpuj.net) joined #forth
09:21:49 --- join: Fractal (bron@we.brute.forced.your.pgp.key.at.hcsw.org) joined #forth
09:21:49 --- join: TreyB (~trey@cpe-66-87-192-27.tx.sprintbbd.net) joined #forth
09:21:49 --- mode: niven.freenode.net set +o ChanServ
09:44:50 gilbertdeb: what did you do with SP?
09:45:01 SP?
09:45:11 stack proc.
09:45:18 Nothing.
09:45:22 I don't have one...
09:45:52 till now: me too
09:46:57 whats the spartan300e again?
09:47:08 a fpga
09:47:30 should sooner or later contain 10 to 20 stack processors...
09:47:35 url?
09:47:49 in my head.
09:48:01 each proc 256 words of local mem
09:48:29 url to spartan300e stuff.
09:48:41 I only get two links from google.
09:51:58 n/m i found xilinx.
09:52:00 ok, it's a Spartan-IIE XC2S300E from xilinx; same arch as xilinx virtex
09:52:16 ma] I honestly have never ever used a fpga.
09:52:22 what interesting things can you do with it?
09:52:24 me too
09:52:43 what motivated you to get it?
09:54:08 to try out cpu design.
09:55:33 --- quit: onetom (niven.freenode.net irc.freenode.net)
09:55:38 --- join: onetom (~tom@novtan.bio.u-szeged.hu) joined #forth
09:56:07 ma] and then where do you plug it in?
09:57:15 applications of fpga http://www.fpgacpu.org/links.html
09:57:37 parallel port; it has local mem
09:58:39 em, the board has local mem, but needs it.
09:59:56 ma] how are you learning this stuff?
10:00:04 do you have prior experience?
10:01:27 no
10:01:50 lots of doc from xilinx
10:02:08 very cool!
10:06:23 brb
10:08:29 brb?
10:09:13 be right back.
10:25:28 http://pet.dhs.org/~ecl/theproject/theproject.html
10:31:00 --- join: a7r (~a7r@206.72.82.135) joined #forth
10:37:33 ma] interesting.
11:14:19 --- join: rk (~rk@ca-cmrilo-docsis-cmtsj-b-36.vnnyca.adelphia.net) joined #forth
12:44:18 gilbertdeb: still there?
13:31:07 jan gray (fpgacpu.org) has a risc processor that is streamlined to what is realizable on a fpga
13:31:31 I have a lot to read regarding fpga's.
13:31:50 a streamlined misc will probably have the same size, but: WHY registers?
13:32:51 there are only the words push and pop that want to access both stacks at the same time
13:33:19 so why not kick them and provide alternatives that execute both in one cycle
13:33:50 then one could unify both stacks into one
13:35:11 for building 32 processors on the 300e one would probably have to reduce to 4bits/insn
13:37:12 how do you run it after it is built?
13:38:06 the spartans allow you to initialize the on-chip ram
13:38:21 yeah but how do you test it?
13:38:39 how do you display it?
13:38:47 can you plug in a monitor or some such?
13:38:48 with a simulator like ModelSim (where I'm currently stuck...)
13:38:59 I see.
13:39:14 there is a VGA extension for the X5-X300
13:39:30 is it expensive?
13:40:22 400 Euro for: the board + ram ext + LED ext + VGA ext
13:52:49 can you interact with it once it is set up?
13:56:20 --- quit: rk ("Client Exiting")
13:59:04 the VGA ext also has PS/2
13:59:22 cool!
13:59:43 so on that board you can have a _complete_ computer!
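
The bits-per-instruction question raised above (5 bits at 08:32, "4bits/insn" at 13:35) is a packing trade-off: fewer bits per slot means more instructions per memory word but fewer distinct opcodes. A rough sketch in Python, assuming 4-bit opcodes; the opcode names are invented and not taken from any real MISC instruction set:

```python
# With 4-bit opcodes, a 16-bit memory word holds 4 instructions and a
# 32-bit word holds 8 -- but only 2^4 = 16 distinct opcodes exist.
OPS = ["nop", "lit", "dup", "drop", "add", "xor", "fetch", "store",
       "call", "ret", "jmp", "jz", "push", "pop", "shl", "shr"]

def pack(mnemonics, width=32):
    """Pack a list of mnemonics into one word, 4 bits per slot."""
    assert len(mnemonics) <= width // 4
    word = 0
    for i, m in enumerate(mnemonics):
        word |= OPS.index(m) << (4 * i)
    return word

def unpack(word, width=32):
    return [OPS[(word >> (4 * i)) & 0xF] for i in range(width // 4)]

w = pack(["lit", "dup", "add", "store"], width=16)
print(hex(w), unpack(w, width=16))  # 0x7421 ['lit', 'dup', 'add', 'store']
# At 5 bits/insn only 6 slots fit in a 32-bit word (30 bits used);
# at 4 bits you get 8 slots, but half as many opcodes -- the trade-off
# being weighed above.
```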
14:10:23 --- join: rk (~rk@ca-cmrilo-docsis-cmtsj-b-36.vnnyca.adelphia.net) joined #forth
14:19:00 --- quit: a7r (Read error: 113 (No route to host))
14:37:53 ma] what about this: http://www.altera.com/products/devices/cyclone/overview/cyc-cyclone_pricing.html
14:58:42 you'd have to buy in 2^10's :-) to get those prices
14:58:58 there is a java board with a cyclone
14:59:17 where does it say that at?
14:59:27 url for the java board?
14:59:41 it seems no one here has prices for the alteras.
15:00:19 http://www.jopdesign.com/cyclone/index.jsp
15:03:01 also cyclone: http://www.elcamino.de/Products/prod%20dcc.html
15:03:01 I'm a little confused.
15:03:19 with the one you have, can you get other chips into the board, or is it soldered on?
15:03:22 if you ask kindly you may get the 699 Euro board for 600
15:03:29 hehehe.
15:05:23 elcamino also has boards where you can change the FPGA chip, but they cost
15:05:40 with your model, is that possible?
15:06:22 no; don't know why, but elcamino use only altera chips
15:07:15 I vaguely remember a comment: alteras in general have more routing resources on board
15:07:45 but that also makes their routing slower, and less optimizable than the xilinxes
15:14:49 http://groups.google.fr/groups?q=altera+cyclone+price&hl=fr&lr=&ie=UTF-8&oe=UTF-8&selm=3E9F6D62.B9B9BC2B%40yahoo.com&rnum=4
15:15:14 has references to sites that sell single fpga chips
15:15:59 gilbertdeb: did you ever design a board and put some chips on it?
15:16:09 Nope.
15:16:32 I have a bread board and some IC's upstairs.
15:16:43 I haven't looked at it for a while.
15:18:00 what about you?
15:18:10 no
15:18:21 there will be smoke, will there not? :D
15:18:56 diabolicus invokus
15:19:05 heheh
15:28:11 I'll be back.
16:49:12 --- join: jma (~jma@h-64-105-21-62.DNVTCO56.covad.net) joined #forth
16:59:11 --- quit: jma ()
17:04:01 --- join: TheBlueWizard (TheBlueWiz@pc18dn1d.ppp.fcc.net) joined #forth
17:04:01 --- mode: ChanServ set +o TheBlueWizard
17:04:06 hiya all
17:09:29 --- join: Frek (~anvil@81-216-25-254.man2.calypso.net) joined #forth
17:14:27 hiya Fractal
17:14:29 oops
17:14:33 hiya Frek
17:14:45 hi TheBlueWizard
17:34:07 gotta go...bye all
17:34:18 bye
17:34:27 --- part: TheBlueWizard left #forth
18:10:06 .
18:27:35 * ma] will sleep a round
18:27:55 --- quit: ma] ("Client Exiting")
18:32:37 --- join: jstahuman (~justahuma@pcp053555pcs.brlngt01.nj.comcast.net) joined #forth
18:37:17 --- quit: proteus_ (Read error: 110 (Connection timed out))
18:51:07 --- join: kc5tja (~kc5tja@ip68-8-127-122.sd.sd.cox.net) joined #forth
18:51:08 --- mode: ChanServ set +o kc5tja
18:51:19 hi kc5tja
18:51:32 what do you know about fpga?
18:52:02 That's like asking me what I know about Earth.
18:52:10 Can you be more specific?
18:52:15 :)
18:52:23 have you ever used one?
18:52:26 No.
18:52:35 do you know anyone who has?
18:52:45 Well, define 'used.'
18:53:03 hmmm?
18:53:09 Because I've toyed with hardware that used one, but it wasn't my hardware. I worked with them in an embedded MIPS-based application, where I developed MIPS software.
18:53:26 was it fast?
18:53:41 As fast as it needed to be.
18:57:21 do you know who manufactured the FPGA you were playing with?
18:57:49 Xilinx.
18:57:55 ah cool.
18:57:58 which model?
18:58:04 The FPGA was **way** faster than the CPU, so it was never a concern.
18:58:06 I have no diea.
18:58:07 idea
18:58:16 Like I said, it was hardware I never designed.
18:58:27 was it a 32bit or 64bit MIPS do you know?
18:58:29 I was just given a board, pre-built, and was told to test it.
18:58:37 64-bit MIPS; R4300
18:58:37 ah okay.
18:58:43 Hmmm.
18:58:51 I'm getting some very evil ideas :D
18:58:57 Such as?
18:58:57 what can I sell here?
18:59:03 I want to buy one.
18:59:10 Well, I think most R4300 CPUs are 32-bit.
18:59:17 I never used anything beyond the 32-bit instruction set.
18:59:36 But the specific CPU we used (Quantum Effect Devices -- OOPS, sorry -- just remembered, R5500) had 64-bit instructions too.
19:00:00 I quoted R4300 because that's the programmer's reference book I used while programming it. :)
19:00:09 MIPS is a nnnniiiicccceeee RISC CPU.
19:00:27 MIPS and PowerPCs are my two all-time favorite RISC architectures.
19:00:38 kc5tja yeah, but it is a pain to do anything for on the i2.
19:02:02 Why do you say that?
19:02:21 I don't have a cc license or an asm license for the i2.
19:02:31 you have to pay for stuff on irix!
19:02:50 Pity -- isn't there an SGI i2 version of Linux?
19:03:10 the headaches start flowing from there.
19:03:46 apparently, the r10k IP28 has some weirdities that no one has bothered to hack around to get either netbsd or linux running on there.
19:04:02 OK, what the hell is IP28?
19:04:08 What does it stand for?
19:04:17 I have no idea.
19:04:20 I've never heard of such a thing, but you keep mentioning it. :)
19:04:32 It gets quoted back at me a lot.
19:04:50 I'll find out now :)
19:08:43 kc5tja I think the IP28 refers to the indigo2
20:03:42 kc5tja I really think an fpga would be a fantastic toy to mess around with.
20:04:01 Yeah, but you need to at least know how to work with digital electronics before playing with one.
20:04:07 Then you need a programmer for one.
20:04:15 I'm reading this article -> http://www.fpgacpu.org/teaching.html
20:04:19 Then you need a software package to compile a hardware description language to it.
20:04:26 yeah there are packages.
20:04:55 Not for Linux. Simulators, but not actual hardware compilers.
20:05:00 ma] who was here earlier purchased this board: https://host3.quickdns.net/burched/index.html
20:05:03 As long as you're happy with using Windows for it.
20:05:26 I gathered there was a web-based tool to play with as well.
20:05:38 Webpack tools it is called.
20:05:48 all for 215 bucks!
20:05:51 I think I can afford it.
20:05:52 I doubt you can program one with that though.
20:06:08 <-- watches his whole social life disappear completely now.
20:06:10 I'd rather spend the extra $30 and go for the $250 package that Xilinx offers.
20:06:19 this is xilinx based.
20:06:28 But these are for very basic, introductory FPGAs. Just remember that. :)
20:06:28 take a look at it.
20:06:41 300k gate Xilinx SpartanIIE FPGA
20:07:01 that's news to me.
20:07:02 is 300K adequate?
20:07:04 They're getting generous.
20:07:11 I don't know.
20:07:23 Never played with one, never hacked on one, never owned one, never did anything with one except look at one. :)
20:07:32 but you are now aren't you!
20:07:39 * gilbertdeb nods for kc5tja.
20:07:40 The number of gates isn't the only consideration though, because you have to place and route signals.
20:07:54 No, I'm still not playing with one.
20:08:06 I only have the most basic, rudimentary knowledge about FPGAs.
20:08:20 how are you gonna build your machine then?
20:08:48 I was going to use discrete component TTL.
20:09:01 But if programmable logic ends up being cheaper, then I'd use a CPLD.
20:09:04 (similar to an FPGA)
20:10:02 For video output, FPGA is probably the way to go. But it's an expensive proposition for just one peripheral -- $250 for the programmer's toolkit, plus I don't know how much for blank chips. Then I need those serial EEPROMs to boot the FPGAs with.
20:10:11 Or I could find a way to load the data off the main CPU.
20:10:13 Not sure yet.
20:10:30 Main CPU would be cheaper, but I'd need more ROM space to program the FPGAs with, which could be rather expensive.
20:10:44 So, there are too many options for me to consider at the moment.
20:11:06 how many gates do CPLD's tend to have?
20:11:36 Not many, but they're not wired the same way as an equivalent density FPGA.
20:11:48 A CPLD is basically a really big, fat, fast PLA.
20:12:02 who does the "P" for you?
20:12:04 Only the most basic of sequential logic can be performed on it.
20:12:13 ??
20:12:17 You do.
20:12:21 Just like with a FPGA.
20:12:24 with a special device?
20:12:25 oh okay.
20:12:34 Yes, CPLDs need their own programmers too.
20:12:59 So if I were to build a system, it might be worth my time to just use FPGAs and skip CPLDs because then I'd need only one programmer.
20:13:47 FPGA's are re-usable right?
20:13:58 Yes -- they use RAM for their memories.
20:14:18 Hence the name, Field Programmable Gate Arrays
20:14:34 yes I understand that part.
20:14:51 CPLDs usually are EPROM or EEPROM based.
20:14:54 I wish there were a system where I could have a couple of pluggable FPGAs.
20:20:14 I saw a couple of Xilinx fpga's on ebay, but I'm not quite sure where they 'go'.
20:20:48 Well, you have to design a circuit around them.
20:25:06 --- join: ma] (markus@lns-th2-3-82-64-37-68.adsl.proxad.net) joined #forth
20:25:16 ah there is ma] again.
20:25:30 he has one of the boards.
20:26:26 markus is nosleep...
20:27:56 do you have any more info on the spartan?
20:28:31 did you fetch the specs?
20:29:37 (I was speaking of my time constraints: I'm digging into j2ee. somehow earn money...)
20:30:51 nah, I'm referring to the board and such.
20:32:28 It's remarkable how _much_ effort cpu designers put in to get good single thread performance
20:32:41 Yes.
20:32:43 * kc5tja grins
20:32:51 what do you mean ma]?
20:32:55 And Chuck does better with only a handful of transistors.
20:33:10 out of order execution
20:33:32 take a feature list of a modern processor
20:33:56 kc5tja: define 'better'
20:34:21 I thought pipelining made perfect sense, but superscalar superpipelining with out of order execution is just too much. Hell, even superscalar is a BIG FLASHING CLUE to people, "Gee, maybe we should just use multiple CPUs instead."
20:34:59 ma]: Thousands of transistors instead of millions, for comparable performance. The performance-per-transistor is several orders of magnitude better than, say, even MIPS or PowerPC.
20:35:17 kc5tja are you kidding?
20:35:38 gilbertdeb: umm...you do know how many gates are in the F21 CPU, right?
20:35:51 I know they are in the low thousands, yes.
20:36:13 gilbertdeb: There are less than 4400 gates. Each gate averages four transistors. Hence, less than 18000 transistors, for a processor with (clock for clock) nearly equal performance to a PowerPC.
20:36:14 but I was asking about vs MIPS/PPC
20:36:42 It can easily fit on the spartan.
20:36:47 with lots of room to spare.
20:36:53 Careful.
20:37:05 hmm?
20:37:25 It's true that the Spartan has room on the die for that many gates, but the question is, will they *route* in the necessary way and still keep within its performance requirements?
20:37:45 well ma]?
20:37:47 Answer the man :D
20:37:57 Chuck hand-placed all his transistors. He has the ultimate in control.
20:38:18 Not only that, but the transistors used in commercial chips are over-sized (to handle process variations); hence, they'll be inherently slower.
20:38:25 jan gray has a simplified risc with a few hundred CLB's
20:38:55 But, an FPGA can hold a stack CPU (it's been done before), and yes, clock for clock, it can compete with other existing RISCs.
20:39:29 if/when I get one, I might get to try a Move machine!
20:39:30 The Steamer16 CPU fits in a CPLD, and is faster (on average) than a comparably clocked 80386. The Steamer16 only has a 3-deep data stack, and absolutely no return stack.
20:39:41 Moves are nice.
20:39:46 * kc5tja likes the concept of a move machine.
20:40:02 and the steamer has __3__ bits per insn
20:40:03 You don't get a pay-off unless you have at least four buses.
20:40:10 ma]: Yup.
20:40:13 Forgot to mention that. :D
20:41:08 The interesting thing is, almost by accident, the Steamer16 is almost ideal for code morphing -- e.g., taking a bytecoded stack machine or C/Oberon-compiled program and morphing it to raw Steamer16 code before execution.
20:41:09 gilbertdeb: what's a Move machine?
20:41:22 well kc5tja?
20:41:25 answer the man :D
20:41:34 ma]: A Move machine has precisely zero opcodes, and hence, one instruction: move. The two operands are source register and destination register. That's it. :)
20:42:07 Typical move architectures have two to four slots per instruction, and hence, can move two to four data around per cycle.
20:42:16 and you do measure it in mips
20:42:36 ma]: Some have measured it in GIPS for the higher-speed architectures.
20:43:01 And it's software pipelined in ways that even traditional VLIW architectures can't be, so it achieves even higher throughputs than VLIW architectures.
20:43:46 The CPU waits for *nothing* (all CPU delays are exposed to the programmer, and hence, are available for exploitation).
20:44:00 NOPs involve moving from a dummy register to the same dummy register.
20:44:16 MOVE architectures have their own set of problems though. Multitasking is . . . a nightmare to say the least. :)
20:44:22 ahh, and on some registers you have the adder, the ander, the PC
20:44:28 Yes.
20:44:29 Like,
20:44:34 move op1 -> addA
20:44:36 move op2 -> addB
20:44:39 (do something else)
20:44:45 move addResult -> multiplyA
20:44:52 mov someotherResult -> multiplyB
20:44:55 (blah blah)
20:45:01 mov multiplyResult -> GP0
20:45:02 etc
20:45:28 Sounds like programming a 3D graphics engine.
20:47:10 It wouldn't surprise me if they use MOVE architectures to attain their speed.
20:47:15 They have VERY little hardware overhead.
20:47:20 how many registers do you pack in a move machine?
20:47:30 And they're the fastest serially-parallelizable architecture I know.
20:47:50 Most MOVE machines have enough space for 256, but I think 64 is often the limit.
20:48:00 there must be some chips out there?
20:48:11 They have adder registers, subtractor registers, registers for all the logical operations, immediate operand fetches, etc.
20:48:17 ma]: Just research engines.
20:48:22 Nothing has made it to market that I know of.
20:48:49 But it's clearly influenced some other chip makers; the EPIC architecture from Intel is influenced by them (it shows in the way they make lots of things explicit to the programmer).
20:49:01 EPIC == Merced, McKinley, etc.
20:49:06 IA-64 basically.
20:49:57 And MOVE instructions are hell on caches. A 4-bus MOVE architecture has 64-bit instructions. Fortunately, they move four data around per cycle.
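
The move sequence kc5tja types out above is concrete enough to run. Here is a toy Python model of such a machine; the rule that writing one designated operand register triggers the functional unit follows how real TTAs work, but the trigger-on-B convention and all register names are assumptions of the sketch:

```python
# The only "instruction" is move(src, dst); computation happens as a
# side effect of moving into a functional unit's trigger register.
class MoveMachine:
    def __init__(self):
        self.regs = {"addA": 0, "addB": 0, "addResult": 0,
                     "mulA": 0, "mulB": 0, "mulResult": 0,
                     "GP0": 0, "dummy": 0}

    def move(self, src, dst):
        value = self.regs[src] if isinstance(src, str) else src  # allow immediates
        self.regs[dst] = value
        # writing the "B" operand triggers the functional unit
        if dst == "addB":
            self.regs["addResult"] = self.regs["addA"] + value
        elif dst == "mulB":
            self.regs["mulResult"] = self.regs["mulA"] * value

m = MoveMachine()
m.move(3, "addA")            # move op1 -> addA
m.move(4, "addB")            # move op2 -> addB (triggers the add)
m.move("dummy", "dummy")     # a NOP: dummy register to itself
m.move("addResult", "mulA")  # move addResult -> multiplyA
m.move(10, "mulB")           # mov someotherResult -> multiplyB
m.move("mulResult", "GP0")   # mov multiplyResult -> GP0
print(m.regs["GP0"])         # (3 + 4) * 10 = 70
```

A real TTA would also leave functional-unit latency exposed: the result register simply holds a stale value until the add has propagated, which is the "reads a result register too soon" failure mode discussed just below.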
20:50:12 Still smaller than IA-64 instruction groups though... :)
20:51:35 I would say that with enough functional units, a MOVE architecture is a god-send for functional programming languages.
20:52:37 kc5tja what about a MOVE vm for functional languages?
20:52:50 I just said.
20:53:11 Functional languages are rather slow now-a-days, primarily because of the limitations of the host processor environment.
20:53:23 But a MOVE architecture can be built to a functional language's specific needs.
20:53:42 Including, for example, having four or more complete ALUs at its disposal for massively parallel function evaluation purposes.
20:53:58 And maybe even a memory management coprocessor interface, to help speed up garbage collection and memory allocation.
20:54:09 4 ALU's?
20:54:18 the advantage over a misc should be small
20:54:34 Functional languages are very ill-suited to MISC architecture machines.
20:54:45 A MOVE architecture is all about exposing hardware level parallelism to the programmer.
20:54:54 It's a different kind of simplicity.
20:55:23 i see
20:56:12 you'd need a lot of multiplexers
20:56:36 8 for a 4-bus machine
20:56:59 2 per bus (one for source, one for destination)
20:57:29 Each register would have one write port, and n read ports, where n is the number of slots per instruction.
20:58:07 Additional multiplexors would be on a per-functional-unit basis, and not involved with the instruction decoding process.
20:58:24 Any delays in the functional unit would be exposed to the programmer.
20:58:41 If a program reads a result register too soon, well, you're flat out of luck. :D
20:59:02 absolutely explicit
20:59:11 Yes.
21:01:01 This, BTW, is precisely why multitasking is a (#$&* on such architectures.
21:01:06 and you'd fill each bus by read-enable'ing one of the registers
21:01:18 Yes.
21:01:24 so you wouldn't do multitasking.
21:01:33 I would.
21:01:52 because you love the pain?
21:01:58 Even if it's only cooperative, there are just too many problems that are very, very conveniently solved using multitasking.
21:02:04 No.
21:02:35 so it would be very coarse MT
21:02:44 Maybe.
21:02:54 Or it could be a Unix variant, it shouldn't matter.
21:03:02 As long as the hardware state is queriable.
21:03:30 It's just that there are so many registers to save.
21:04:14 And when reloading the FU's registers, care must be taken to ensure proper timing, to allow the results to propagate to the result registers before actually returning to the task.
21:04:15 Current x86 variants have this problem now with the SSEx and MMX registers.
21:04:38 (not the timing problem, but the lots-of-regs problem)
21:04:38 TreyB: I hardly call it a problem. Most RISC processors have a much worse issue with 32 GPRs for both integer AND floating point.
21:04:58 And IA-64 is *nasty* -- 128 registers for integer and floating point.
21:05:14 Plus some other administrative registers (like predicate registers and flags registers, etc)
21:05:17 Overkill is necessary for Marketing purposes.
21:05:24 gilbertdeb: It's not influenced by marketing.
21:05:34 whats it influenced by?
21:05:37 BAD design? :D
21:05:52 gilbertdeb: Remember the chip has *zero* instruction re-ordering capability, and the superscalar abilities of the chip are *explicitly exposed* to the programmer.
21:06:08 So to keep all of its execution units busy, it has a lot of registers, because it lacks register renaming logic.
21:06:40 I don't think it's bad design, but it might be overkill -- I'd say 64 registers should be enough for both integer and FPU.
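
The numbers in this stretch of the discussion are easy to make concrete: with 256 addressable registers a source or destination field needs 8 bits, so one slot is 16 bits, a 4-slot (4-bus) instruction is the 64 bits complained about above, and you need one source mux plus one destination mux per bus. A checkable sketch in Python; the field layout is an assumption, not any real TTA's encoding:

```python
REGISTERS = 256
ADDR_BITS = REGISTERS.bit_length() - 1  # 8 bits name one of 256 registers
SLOT_BITS = 2 * ADDR_BITS               # one src field + one dst field
SLOTS = 4                               # a 4-bus machine
print(SLOTS * SLOT_BITS)                # 64-bit instructions

def pack_slots(moves):
    """Pack [(src, dst), ...] register numbers into one instruction word."""
    word = 0
    for i, (src, dst) in enumerate(moves):
        word |= (src << ADDR_BITS | dst) << (SLOT_BITS * i)
    return word

insn = pack_slots([(1, 2), (3, 4), (5, 6), (7, 8)])
print(hex(insn))   # 0x708050603040102

# The multiplexer count arrived at above: one source mux and one
# destination mux per bus.
print(2 * SLOTS)   # 8 muxes for a 4-bus machine
```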
21:06:44 they had to build a 64 bit CISC didn't they?
21:06:47 This kind of CPU makes compiler writing difficult.
21:06:51 gilbertdeb: IT'S NOT CISC!
21:06:56 --- quit: ma] (niven.freenode.net irc.freenode.net)
21:06:56 --- quit: TreyB (niven.freenode.net irc.freenode.net)
21:06:58 the IA-64?
21:07:08 gilbertdeb: It's VLIW!
21:07:09 --- join: ma] (markus@lns-th2-3-82-64-37-68.adsl.proxad.net) joined #forth
21:07:09 --- join: TreyB (~trey@cpe-66-87-192-27.tx.sprintbbd.net) joined #forth
21:07:15 Based on the PA-RISC architecture.
21:07:15 ack.
21:07:46 TreyB: Only because we're unfamiliar with it.
21:07:55 I've read 0 on the ia-64, mostly because it's going to be a while before I can even afford a machine which has one.
21:08:17 gilbertdeb: I can't afford a PowerMac, but I know tons about the PowerPC.
21:08:26 Processor design is one area that I'm intensely interested in.
21:08:38 Welcome back.
21:08:40 I think there are other more interesting things for now than the ia-64, honestly.
21:08:40 * TreyB has lust in his heart for the x86-64 machines.
21:08:43 I never left.
21:08:54 and, do we fer crying out loud need what they're offering???
21:09:01 * gilbertdeb never left either.
21:09:04 x86-64 isn't bad, but it's only prolonging the cancerous architecture.
21:09:14 kc5tja: I think my server node split.
21:09:32 gilbertdeb: Sure, but I cannot blame Intel for wanting to fix their errors.
21:09:49 IA-64 is an honest-to-goodness attempt at saying, "Here is a REAL CPU architecture."
21:09:51 Perhaps, but they've gotten very good at building x86 instruction decoders.
21:10:00 by Intel.
21:10:05 or by the whole industry?
21:10:13 TreyB: Anyone.
21:11:05 As far as whether we need it or not, well, I can't say. Maybe not. But that's not the point.
21:11:22 They need to stay in business, and making ho-hum run-of-the-mill CPUs won't get new sources of income.
21:11:32 If we didn't need it, they wouldn't build it and make money at it.
21:11:49 IA-64 is being used in the supercomputer industry, ironically. Not because of its integer performance, but because of its floating point performance. It blows the doors off of anything I've ever seen.
21:11:52 TreyB I disagree.
21:12:07 they're offering it :D
21:12:33 then if we don't need it, it will die a shameful death.
21:12:50 ia64 might just croak :-)
21:12:53 gilbertdeb: There is a need for 64-bit integer processing. There is a need for more than 8 programmer-visible registers.
21:13:02 Right.
21:13:15 How much more isn't really certain. 32 GPRs seems to be the going rate, but AMD's 16 isn't that bad either.
21:13:28 x86-64: The Only 15 Registers You'll Ever Need (tm)
21:13:45 But the thing I hate most is that Hammer support will be slow coming, because it's not 100% backward compatible with the 32-bit software environment.
21:14:00 In what way, kc5tja?
21:14:09 TreyB: Well, it does lack segment registers.
21:14:15 (WHICH SUCKS EGGS)
21:14:26 Only in 64 bit mode. Which I happen to like :-)
21:14:26 Which Linux and NT use for system management purposes.
21:14:37 TreyB: And that is the *ONLY* mode you can use those extra registers in, too.
21:15:01 You can still use FS and GS in 64bit mode.
21:15:01 Because that's the only mode where the single-byte INC r8 opcode space is used for register override prefixes.
21:15:23 * TreyB has a full set of x86-64 docs right next to his desk.
21:15:42 Your attempt to impress me has failed. I have them in PDF format. >:)
21:15:52 LOL
21:16:05 * TreyB has those, too.
21:16:12 Seriously, though, I think it was a mistake not to make the extra registers visible in the more native modes.
21:16:21 I really, really do.
21:16:33 I can understand why they did it, though.
21:16:54 I can foresee nasty compatibility problems with software under NT and Linux because of it.
21:17:10 I don't understand why you say that.
21:17:28 We'll see.
21:18:44 Under a 32bit kernel, x86-64 looks like ia-32. Under a 64-bit kernel, ia-32 code runs unchanged, and 64bit code has access to all the new regs.
21:19:01 If memory serves me correctly, native word size in 64-bit mode is 64 bits. The address and operand size override prefixes drop it down to 32 bits. Thus, 16-bit operands aren't supported. I can't be sure about this though.
21:19:35 I'm too lazy to look up how x86-64 handles intermixing 32- and 64-bit code under 64-bit mode.
21:20:23 I think you get RxX unless you specifically ask for ExX.
21:20:39 Assembler issues. I'm talking machine issues.
21:21:12 66 EA FC means something totally different in 32-bit mode and 64-bit modes.
21:21:32 XCHG BP,SP (16-bit mode) versus XCHG EBP,ESP (64-bit mode)
21:21:40 * TreyB whips out the manual.
21:21:44 I used those registers because I memorized that opcode sequence in writing my last Forth. :)
21:28:17 kc5tja: what's your name
21:28:18 ?
21:32:04 Gordon Moore is his middle name for sure :D
21:33:59 so the speed of a move machine is limited by pulling the values from the source registers
21:34:10 and filling the bus, which may be long
21:34:49 then dest must take the value, and the next cycle can start
21:34:51 not?
21:35:59 we don't need to decode the instruction :)
21:37:47 you could build a stack machine on it and use compressed code
21:39:19 but else, it will provoke code bloat like risc, not?
21:39:54 --- quit: ma] ("'too much real live here'")
21:40:22 --- join: ma] (markus@lns-th2-3-82-64-37-68.adsl.proxad.net) joined #forth
21:41:48 Sorry, was replying to e-mail
21:41:51 Samuel A. Falvo II
21:42:34 MOVE architectures are like VLIW machines -- code bloat isn't so much the problem as instruction size is.
21:42:53 I agree that a stack architecture should be left simple in the only way a stack machine can be.
21:43:13 * kc5tja remembers seeing attempts at building a stack machine/MOVE machine hybrid, and the results weren't too pretty.
21:43:57 Besides, the smaller stack architecture has more point-to-point logic than the bus-dominated MOVE machines, so they potentially run with a shorter instruction period.
21:44:32 If parallelism is really that important to me, I'd just write software to use multitasking, and distribute the tasks across multiple stack CPUs.
21:44:43 stack machines, it seems, are made for SMP.
21:44:43 :)
21:46:37 But usually not for multitasking. I haven't seen a stack CPU that has an MMU or context management (but I haven't seen man stack CPUs either).
21:46:50 ...seen many...
21:47:53 You don't need those for multitasking.
21:48:35 You only need those for enforcing protection between programs you truly don't trust. Unix requires it because it's inherently a multi-user system, and it makes sense to have it.
21:48:46 But a personal desktop machine truly won't need such protection features.
21:48:50 * TreyB searches for another word... multi-programming.
21:49:21 kc5tja: I can't even relate how much I disagree with respect to "personal desktop machine" not needing such features.
21:49:28 I can.
21:49:51 Commercial vendors need to take responsibility for the failures of their software.
21:50:07 * kc5tja remembers the Amiga.
21:50:27 I've *NEVER* had the Amiga crash on me when running multiple programs concurrently. Every application I ran was extremely well behaved.
21:51:07 Consequently, because it lacked MMU management overhead, message passing (pretty much the only IPC it implemented and needed) was a simple pointer exchange, and hence, immensely fast.
21:51:40 User responsiveness was insanely quick. My 7MHz Amiga is still more responsive than my 800MHz Athlon box about 50 to 75% of the time.
21:51:52 (granted it doesn't draw the screen faster, but it does *respond* faster)
21:51:54 And at the time, no one wrote worms or trojans.
21:52:06 Sorry, that's not my responsibility.
21:52:23 Worms and trojans are distributed only by programs which are written with backdoors.
21:52:44 --- quit: jstahuman ("Lost terminal")
21:53:04 Rule #1 about trust: never run external software without prior authorization.
21:53:21 This rule *ALONE* indicates the failures of an ACL-based system, and just *screams* for a capability-based security model.
21:53:28 Capabilities, of course, simply don't require MMUs.
21:53:35 (though they're nice to have)
21:53:49 How can you say that about cap-systems and MMUs?
21:54:01 Rule #2 about trust: If the user is stupid enough to run software that destroys his/her data or family/friend relationships, well, that's their problem.
21:54:18 Because a capability is a reference to an object, basically.
21:54:32 Without a reference to an object, it's not possible to locate or use said resource.
21:54:44 I contribute to EROS very occasionally. I know about capability systems.
21:55:12 Without an Open() type function, there's no realistic means of querying the state of the system.
21:55:45 If you want a window to display on the screen, you ask the user for one (similar to an OpenWindow() type call, but it will refuse if the user denies it)
21:55:52 memset(0, 0, MAX_INT);
21:55:56 Ditto for a file. NewEmptyFile(), for example.
21:56:19 Without an MMU, you have no hope.
21:56:26 TreyB: Yes, that'll kill the system. But again, it's the user's fault for (a) running the program in the first place, and/or (b) letting the program he/she was using run the errant program.
21:56:41 What about developers making mistakes?
21:56:49 TreyB: They have the most to gain without MMUs.
21:56:57 I developed software without an MMU for decades.
21:57:09 I write crash-free programs routinely.
21:57:26 (note crash-free != bug-free, though my bug-rates are substantially reduced over other programmers)
21:57:35 I didn't say you couldn't.
21:57:38 Especially since I write tests first, then production code after.
21:58:12 We'll just have to agree to disagree on this, I think.
21:58:20 Yep.
21:58:32 Because it's all in the development process.
21:58:48 If the process promulgates buggy code, then you'll need an MMU to cover your butt.
21:58:51 Otherwise, it's not needed.
21:58:53 --- join: ma_] (markus@lns-th2-5-82-64-69-172.adsl.proxad.net) joined #forth
21:59:48 I agree that you can live without an MMU if you have absolute source-code control of the entire system. Short of that, I'll take the MMU.
22:00:22 You can take what you want, but the only thing an MMU is really good for is virtual memory.
22:00:36 (which does require a protection of sorts, I'll admit)
22:00:55 Never once have I ever used an MMU to "protect" me from malicious or buggy software.
22:01:07 Buggy software is still just as buggy, and I still lose just as much data because of it.
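
The capability model kc5tja argues for above can be sketched in a few lines: authority is nothing more than an object reference, and with no ambient Open(), a program can only touch what it was handed. Python cannot actually confine code (modules and builtins leak ambient authority), so this illustrates the discipline rather than enforcing it; all names (FileCap, NewEmptyFile, run_untrusted) are invented for the sketch:

```python
class FileCap:
    """Holding this object *is* the permission to use the file."""
    def __init__(self, name):
        self.name = name
        self._data = []

    def write(self, text):
        self._data.append(text)

def NewEmptyFile(name):
    """The trusted shell mints capabilities on the user's behalf."""
    return FileCap(name)

def run_untrusted(plugin, *granted_caps):
    # The plugin receives only the references we pass in; with no
    # global Open(), it has no way to name any other file in the system.
    plugin(*granted_caps)

def well_behaved_plugin(out):
    out.write("hello")

scratch = NewEmptyFile("scratch.txt")
run_untrusted(well_behaved_plugin, scratch)
print(scratch._data)  # ['hello']
```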
22:01:43 Have you ever written an OS where you had to support code written by other programmers?
22:01:48 Yes
22:01:54 Three of them, in fact.
22:02:08 With or without MMUs?
22:02:16 All without.
22:02:27 What did you do about hostile code?
22:02:34 Nothing.
22:02:37 It's not my responsibility.
22:02:40 kc5tja how is it that you don't have a 6-figure job with all this experience?
22:03:00 s/6/6+/
22:03:07 gilbertdeb: Because they don't exist?
22:03:16 Linus doesn't have a 6-figure job...
22:03:17 someone's gotta be displaceable!
22:03:23 gilbertdeb: Me and what qualifications?
22:03:25 Linus is a commie :D
22:03:33 kc5tja your experience.
22:03:38 Because I wrote a Forth environment? An OS environment? Woohoo! Any teenager can do that now-a-days.
22:03:43 thats gotta matter dammit!
22:03:47 Experience means SHIT in today's economy.
22:03:54 why? what matters then?
22:03:59 Money matters.
22:04:05 College degrees.
22:04:12 And most importantly, who you already know.
22:04:19 I'm a nobody.
22:04:22 Hence, I have no job.
22:04:32 kc5tja you're in college, you KNOW that there are so many other people with stupid degrees who lack your experience!
22:04:44 gilbertdeb: And this changes things how?
22:04:46 :)
22:04:55 How long have you been alive, and still not figured this stuff out?
22:05:00 The world isn't fair.
22:05:02 don't the recruiters understand a damned thing?
22:05:09 In fact, it's stacked patently against those who are fair.
22:05:14 gilbertdeb: $$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$
22:05:21 That's all they see, that's all they care about.
22:05:30 what do you mean $$$?
22:05:39 If the world were fair, we'd all have Constantinesco torque converters in our cars.
22:05:44 hahaha
22:05:49 rofl
22:06:01 you know you're a geek if ...
22:06:04 We'd be driving gas-turbine-powered cars.
22:06:21 Our airplanes would be powered with highly refined Stirling cycle engines.
22:06:31 But the world isn't fair.
22:06:38 All these technologies were shot down.
22:06:39 Hard.
22:06:53 All because people don't want to change.
22:07:10 so what does $$$ have to do with getting hired based on your knowledge and experience?
22:07:15 I missed that part.
22:07:18 And that's the crux of the problem. People are uncomfortable when presented with the idea that sometimes, doing things differently can simplify your life in the long run.
22:07:30 gilbertdeb: IT DOESN'T. Don't you get it?
22:07:33 They care about CASH.
22:07:36 CASH CASH CASH CASH.
22:07:37 No more
22:07:38 No less.
22:07:44 every recruit that gets hired, they get paid.
22:07:50 Mhmm.
22:07:51 They don't care about qualifications.
22:07:56 Not one bit.
22:08:10 * kc5tja is a little pissy about this subject.
22:08:31 Having been unemployed since last November, and dangerously close to running out of any and all funds, to the point where selling his computer and ham radio have entered his mind.
22:08:31 kc5tja you mean they don't care about paper qualifications or real world qualifications?
22:08:38 gilbertdeb: None.
22:09:04 --- quit: ma] (Connection timed out)
22:09:08 Especially when it's a seller's market.
22:09:35 We have *SO* many unemployed engineers here right now that I don't hold a snowball's chance in hell of competing for a job.
22:09:44 I just lack the Ph.D. and Masters degrees that these folks have.
22:09:54 I lack the money to afford long-distance travel for interviews.
22:10:11 I'm FUCKED in every sense of the word but the good one.
22:10:26 And now *I* have to work myself out of the hole.
22:10:29 Hence my own business.
22:10:30 It's hard.
22:10:35 I just lost a *LOT* of money today.
22:10:37 (hence my pissiness)
22:10:58 And I'll be losing money big-time with my next client, but if it goes well, it'll open a gateway to get more (well-paying) clients with.
22:11:02 Hence, it's a risk.
22:11:14 it is Business as usual.
22:11:23 But right now, I'm damn broke, and there's no escaping that.
22:11:26 And unemployable.
22:11:37 And frankly, I'm not sure I *WANT* to work for anyone else anymore.
22:12:04 --- join: Serg_Penguin (Serg_Pengu@212.34.52.140) joined #forth
22:12:05 I'm sick and tired of getting fired all the time. I'm sick and tired of being lied to by management (though I really must exclude Hifn from that; they were very open about it, and let me know months in advance of the layoffs), etc.
22:12:26 kc5tja is the company still alive?
22:12:30 gilbertdeb: Very much so.
22:12:41 who did they retain??
22:12:57 kc5tja: look at it from another side :)))
22:13:02 gilbertdeb: Their stock hasn't improved much, but that's a relatively poor indicator of company health. They're stable, and growing slightly, which is amazing considering the state of the industry.
22:13:13 gilbertdeb: They let go 25% of their work force.
22:13:22 i had been fired sometimes, but my next job gives more $$$ all the time ;))
22:13:32 They kept, as you might imagine, everyone who had degrees, who had been with the company longer than 10 years, etc.
22:13:34 Serg_Penguin not so in this economy.
22:13:51 Serg_Penguin: I've been unemployed since November of last year.
22:14:11 but these degrees do not matter in the face of ...
22:14:14 gilbertdeb: no, it's coz i'm changing myself ;))))
22:14:15 I'm preaching to the choir.
22:14:18 IT DOESN'T MATTER!
22:14:54 Serg_Penguin how?
22:14:55 I could show up half the engineers at Hifn (especially love the ones who don't know what BCD is!), but I was the one let go, because I was the "highest risk."
22:15:13 Serg_Penguin: Then you're not getting fired. A fire or layoff is an administrative decision on the part of the company, not you.
22:15:24 gilbertdeb: Scientology. ( kc5tja won't like it ;)))))
22:15:30 Emmm.
22:15:32 Emmm.
22:15:36 Yeah.
22:15:37 (and the difference between a fire and a layoff is: with a fire, you were bad. With a layoff, you were good, but we don't care, we're letting you go anyway)
22:15:57 Serg_Penguin: Don't say I didn't warn you.
22:15:59 That's all I have to say.
22:16:07 And with that, the discussion is over.
22:16:28 kc5tja: yes, i agree, it's over ;)))
22:16:41 Serg_Penguin you're quitting scien#@$@#? :D
22:16:56 kc5tja: i left jobs both on my own decision, and by being kicked
22:17:36 gilbertdeb: no, i dropped it for maybe 2 years, and now i feel i should return to it
22:17:45 bah.
22:17:52 give myself another kick to climb the social stairs
22:18:14 ;))))
22:18:42 w/ 1st kick, i climbed from dropout to worker ;)))
22:19:09 and was ha-a-a-appy a long time, and needed no changes
22:20:14 but now i realized that i wanna be more able and more bright
22:20:26 so i'll do it the way i already tried ;)))
22:20:55 G'night, all.
22:21:12 anyone, say something ;)) i feel like i'm reading a lecture from the lectern ;))
22:21:46 i don't like this, i used to be among fellows of the same level :)))
22:21:59 I have nothing to say.
22:22:13 I do not share your enthusiasm regarding scientology.
22:22:29 We've already had these discussions before, and I do not wish to repeat them.
22:24:43 That being said, I'm trying to immerse myself in some Stirling engine conceptual designs that are easy to build.
22:25:00 I mastered the power piston, but not the displacer system, in my last attempts.
22:25:00 so do you wanna build computers or engines?
22:25:09 gilbertdeb: Am I restricted to just one?
22:25:19 * gilbertdeb points out the 'or'.
22:25:24 I enjoy creating things -- exercising the natural laws is itself an artform.
22:25:57 I particularly wish to equip a bicycle with a Constantinesco torque converter because of the hills around here.
22:26:14 btw, i processed my photos made with the ancient camera ;))
22:26:18 Changing gears while going up a hill causes too much momentum loss, because I must stop pedaling for a brief moment.
22:26:33 Serg_Penguin: Is this the second batch of photos, or the first?
22:26:41 first film
22:27:07 photos made in sunlight are ALL exposed about right
22:27:41 except for 2 in VERY bright mid-day no-cloud sun - they are overexposed
22:28:21 all indoor sunlight photos are underexposed, even though we used the light meter of a friend's camera
22:28:45 only the ones where the man stands right in the sun are close to OK
22:28:56 gilbertdeb: Building engines helps keep me in the real world -- getting my hands dirty, distracting me from other issues. Helps me to relax sometimes.
22:30:00 wanting to get hands dirty is a good thing
22:30:06 C wouldn't exist
22:30:20 I c.
22:30:25 also, i discovered that 'exposure bracketing' of 2 steps is not enough
22:30:26 ma_]: ?
22:30:46 both photos are either normal or have the same error
22:31:10 kc5tja: using a C compiler is a clean solution, but inferior
22:31:24 ma_] hmmm?
22:32:22 ma_]: Actually, for this one project I want to work on, I want to use Oberon. The language's native support for modularity is just the ticket for a DTP program I want to write for Linux.
22:33:02 but I need a custom compiler and run-time environment, because all the existing Oberon compilers for Linux are either self-supporting operating environments (a la Smalltalk) or use GCC to compile to static binaries that can't be manipulated at run-time.
22:33:09 Bugger....
22:35:31 what can be so good about modularity that you'd choose a language because of it?
22:37:20 hmm... what is Oberon closer to ? and what are its good and bad sides ?
22:37:35 Serg_Penguin pascal.
22:37:37 modula-2
22:37:43 it is a Wirth language.
22:37:57 the swiss have given us a language :D
22:38:01 ma_]: Type safety combined with modularity makes software *very* easy to write and maintain.
22:38:56 can it be used at runtime like Forth ?
22:38:59 Serg_Penguin: Closer to Modula-2. Good sides: supports native-code compilation complete with type safety and garbage collection. Bad side: non-standard dialects are abundant because of various commercial research and development efforts, and the lack of a portable language definition.
22:39:42 Serg_Penguin: Not quite, but it can support modules at run-time which compile Oberon sources into new modules if you desire it. It's not impossible.
22:40:05 Serg_Penguin: That is, after all, how the Oberon System 3 and System 4 operating systems work.
22:40:27 so, may i have an Oberon compiler or interpreter (or both) in an application ?
22:40:31 But at least all the non-standard dialects support a common basic set of functionality.
22:40:56 say, to make editable game scripts or app plugins
22:41:08 Serg_Penguin: Oberon is a compiled language only. It isn't interpreted at all.
22:41:31 Yes, Oberon can be used as a scripting language, if you provide a compiler for it.
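
A rough Python analogue of the compile-modules-at-run-time scheme just described: application code hands source text to a compiler and gets back a fresh, callable module. The Oberon systems do this with native code; here types.ModuleType and compile() stand in, and all names (compile_module, on_tick) are invented for the sketch:

```python
import types

def compile_module(name, source):
    """Compile source text into a new module object at run time."""
    module = types.ModuleType(name)
    code = compile(source, filename=f"<{name}>", mode="exec")
    exec(code, module.__dict__)
    return module

# An "editable game script" a user might tweak while the app runs:
script_source = """
def on_tick(state):
    state['score'] += 1
    return state
"""

plugin = compile_module("GameScript", script_source)
print(plugin.on_tick({"score": 41}))  # {'score': 42}
```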
22:41:47 A compiler isn't THAT hard to write, but it's not as simple as Forth's.
22:42:11 Oberon is intended for use as a systems programming language though, a la C.
22:42:21 It's designed to write operating systems and device drivers, as well as applications.
22:42:47 Frankly, I'd rather use Forth as a scripting language.
22:43:44 will it be for i86, or will you use something like gnu lightning?
22:43:51 In my DTP application, I was going to compile Oberon modules to bytecode, and then interpret/run-time compile the bytecode.
22:44:29 I was going to use a VM that was optimized specifically for Oberon.
22:44:39 I tend to stay as far away from GNU software as possible.
22:44:45 I *REALLY* dislike their license terms.
22:45:17 why is "self-supporting operating environments like smalltalk" unacceptable?
22:45:55 Because Linux is already an operating system. Self-contained operating environments present user interfaces that are "too different" from the surrounding system's.
22:46:08 They often have lack-luster memory performance too.
22:46:28 Consider Java: its #1 problem is not run-time speed, but rather, memory management performance.
22:46:49 The JVM pre-allocates about 16MB (and that's conservative!) of memory for use as the JVM memory pool.
22:46:59 hmm
22:47:00 Problem is, as it fills up, the application will start to thrash.
22:47:28 When Java garbage collection kicks in, it must walk the *entire* application's data set, which could touch the entire 16MB space; this effectively kills Java's performance cold.
22:48:15 something as advanced as the Train Algorithm?
22:48:15 My roommate is writing a fairly sizable application in Java at the moment, and while he loves the language, he abhors its execution performance, and its resistance to profiling for optimization.
22:48:27 kc5tja for school?
22:48:29 or $$$
22:48:57 gilbertdeb: I can't answer that question.
22:49:13 ack.
22:49:23 ma_]: I saw the train algorithm. I really didn't think it was anything special.
22:49:27 invite him over sometime :D
22:49:38 gilbertdeb: I can't answer it because he doesn't know if it'll be open source or commercial yet.
22:49:51 gilbertdeb: That won't happen.
22:49:55 heheh.
22:50:31 the touched memory would be a small portion of the heap
22:50:49 The nice thing I like about Oberon's runtime environment is that the garbage collector is cooperative -- it runs while no other task is running, period. Usually, a call to the GCer is made in the application's main I/O loop.
22:51:46 ma_]: It's just a variation of the usual garbage collection in aged, smaller heaps. (Goddammit, I forgot the name of that method now. #OIHLSDJFLKWJE$!!!)
22:52:39 I also noticed you swapped two letters 50 lines ago
22:52:40 I'll remember the name tomorrow morning, while I'm in the shower or something.
22:52:44 :-)
22:53:04 ??
22:53:31 once i invented a thing while in the WC ;)))
22:53:49 kc5tja : Is it a reference counting or mark/sweep implementation?
22:53:55 you are just too perfect in too many things :)
22:53:57 ma_]: Generational garbage collection. That's it.
22:54:10 Fractal: Mark/sweep is how it's often implemented. It's definitely not reference counted.
22:54:41 Though, if I did it, I'd probably use mark/copy.
22:55:18 and you'll get fast allocation for free
22:55:59 Yup, but I'd do it more for the simplicity of it, rather than the performance. In fact, overall, I think it'll slow the system because of the copying.
22:56:24 Though I've seen some reports that say mark/copy is faster than mark/sweep. I can't figure out why, but the numbers don't lie. :)
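
The mark/copy (semispace) scheme under discussion is small enough to sketch. Live objects are evacuated into an empty to-space and everything left behind is garbage, which is why allocation becomes a single pointer bump -- the "fast allocation for free." Python objects cannot be moved, so "copying" below means transplanting references into the new space, and recursion stands in for Cheney's iterative scan; the object model is invented for the sketch:

```python
class Obj:
    def __init__(self, value, children=()):
        self.value = value
        self.children = list(children)
        self.forward = None  # forwarding mark, set once evacuated

class SemispaceHeap:
    def __init__(self):
        self.space = []  # the current semispace; bump-allocated

    def alloc(self, value, children=()):
        obj = Obj(value, children)
        self.space.append(obj)  # allocation is just a bump/append
        return obj

    def collect(self, roots):
        to_space = []

        def evacuate(obj):
            if obj.forward is None:  # not yet moved; cycle-safe
                obj.forward = obj
                to_space.append(obj)
                obj.children = [evacuate(c) for c in obj.children]
            return obj.forward

        roots = [evacuate(r) for r in roots]
        for o in to_space:
            o.forward = None       # reset marks for the next cycle
        self.space = to_space      # everything not evacuated is garbage
        return roots

heap = SemispaceHeap()
a = heap.alloc("a")
b = heap.alloc("b", children=[a])
heap.alloc("garbage")
(b,) = heap.collect(roots=[b])
print(len(heap.space), [o.value for o in heap.space])  # 2 ['b', 'a']
```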
22:56:46 I guess it depends on what kind of application you're running.
22:56:51 Is oberon an OO language?
22:57:01 Fractal: Not . . . as . . . such. :D
22:57:13 Oberon-2 is definitely the first of the true-OO implementations.
22:57:29 But Oberon-1 has record type inheritance, with no dynamic dispatching of any kind.
22:57:31 But it's an imperative language, right?
22:57:40 Er
22:58:01 However, it can be implemented by defining message types and handler functions in a very elegant way. The first implementation of Oberon was object oriented, even though the language itself wasn't 100%.
22:58:27 The nice thing about message records is that the messages themselves can be manipulated as objects (as in Smalltalk).
22:58:34 Yes, it's imperative.
22:58:48 Though I still tend to write code in a more or less functional style.
22:59:11 Oberon-2 adds "type-bound procedures," which are equivalent to virtual function definitions in C++.
22:59:25 All without adding a single keyword to the language. :D
22:59:39 (though it does introduce the FOR keyword, which is lacking in Oberon-1; but that's different)
23:00:36 * kc5tja rarely uses FOR anyway
23:07:42 kc5tja: why should instruction size be more of a problem than code bloat, if you can get 64 bits in one access even from external ram?
23:07:49 on the move machine...
23:11:20 brb
23:11:38 gilbertdeb: ok
23:12:14 hmm?
23:12:22 gilbertdeb: I'm too much of a newbie in IRC things
23:12:36 ah. a misfire I take it?
23:13:07 gilbertdeb: the 'brb'
23:13:15 ah that.
23:15:16 ma_]: I define code "bloat" as the excessive quantity of instructions to accomplish a goal.
23:15:31 ma_]: Code "size" refers to individual instruction size.
23:16:00 So if a program requires 16 instructions to accomplish a goal, and it's pretty minimal, then I'd say it's not very bloated, even if they occupy 256 bytes total.
23:16:39 But if it requires 256 instructions to accomplish a particular goal AND IT'S PROVABLE THAT IT'S NOT A MINIMALIZED SOLUTION, even if they're only one byte long, I would say it's bloated.
23:16:46 I don't know if this definition is arbitrary or not.
23:17:17 But it seems to work well for me.
23:22:15 --- quit: ma_] (niven.freenode.net irc.freenode.net)
23:22:15 --- quit: TreyB (niven.freenode.net irc.freenode.net)
23:22:20 --- join: ma_] (markus@lns-th2-5-82-64-69-172.adsl.proxad.net) joined #forth
23:22:20 --- join: TreyB (~trey@cpe-66-87-192-27.tx.sprintbbd.net) joined #forth
23:23:02 so the move machine's code is a 'grown' misc's code, since you have to specify the op three times, two moves for the sources, and one to get the result
23:24:16 each register address contains the op
23:25:15 moving sources and taking the result is only interspersed by the parallel actions done
23:25:52 * kc5tja nods
23:26:01 Most computations probably won't involve the "programmer registers" at all.
23:26:16 In fact, I see little need for general purpose registers in such an architecture, though most have 16 or so to play with.
23:26:53 do they have 3 or 2 registers for binary ops (like add)
23:27:09 Usually 2.
23:27:23 Source A, Source B, then result registers for ADD, SUB, AND, OR, XOR, etc.
23:27:43 Or, in larger units, they have two operands for A and B, then one result: ADD.
23:28:00 So: AddA, AddB, AddResult, SubA, SubB, SubResult, etc.
23:29:10 I would probably have two arithmetic units, and two logical units.
23:29:15 if I were to build one, that is.
23:30:10 ok, i was thinking of a machine that has 3 regs for every function, so they could all go parallel
23:30:58 so my stmt about 'grow' is in fact wrong
23:31:19 Well, it depends on the CPU architecture in question.
23:31:45 Some have been tried with the all-inclusive units, and others have been tried with dedicated adders, subtractors, etc.
23:38:44 The worst thing about move machines is probably that they'll never provide fast subroutine calls and returns
23:39:08 --- quit: Serg_Penguin ()
23:40:37 I predict they will be incredibly fast.
23:40:54 It's just that you need to save the state of the machine you intend on changing on the stack. :D
23:41:33 parse error
23:42:45 I guess I'm just not sure what is confusing, but it's not terribly important.
23:43:00 Fact is, MOVE machines have a lot of research yet to be done on them.
23:43:44 --- quit: ma_] (niven.freenode.net irc.freenode.net)
23:43:44 --- quit: TreyB (niven.freenode.net irc.freenode.net)
23:43:59 Damn these net-splits!
23:44:07 --- join: ma_] (markus@lns-th2-5-82-64-69-172.adsl.proxad.net) joined #forth
23:44:07 --- join: TreyB (~trey@cpe-66-87-192-27.tx.sprintbbd.net) joined #forth
23:44:17 net-split. Last I saw was "parse error."
23:44:29 I can't see how this works with only the 'mv' instruction.
23:44:46 Have a stack unit with two registers: push, pop.
23:44:55 Any word written to the push register will cause it to be stacked.
23:45:17 Any read from the pop register will cause its value to be unstacked.
23:45:29 so you do additionally trigger events
23:45:47 So, basically, writing to the push register causes the value to be stored in pop, but the previous contents of pop to actually be stacked. That'll optimize memory bandwidth.
23:45:54 Yes
23:46:10 The "official" name for a move architecture processor is the "transport triggered architecture," or TTA.
23:46:37 because it's not async like chuck moore's stuff
23:47:09 No, because writes to registers cause actions to be performed.
23:47:26 Asynchronous logic doesn't really have anything to do with how a CPU is programmed or how it works.
23:47:46 * kc5tja remembers the StrongARM SA-110 clone that was implemented with async logic. :)
23:48:34 one where there is no CLK and the frq rises when you raise the voltage?
23:48:45 * kc5tja nods
23:49:30 It was a university research project, to compare the two technologies, as I recall.
23:50:07 * kc5tja wants to play with async logic in discrete component TTL some day. I'm pretty positive it's possible.
23:50:24 Well, TTL LS and/or ACT logic.
23:52:44 and the reasons why industry uses synchronous design are?
23:53:57 Fewer transistors (hence, cheaper to fab), cheaper to engineer, and more predictable performance come to mind.
23:54:22 At least from what I remember.
23:55:11 The pros of the async design were that it drew less power and performed *faster* (obviously, the maximum possible speed for a given voltage/temperature combo).
23:56:14 Note that I said cheaper to engineer -- by this, I mean that commonly available design tools are built for static, synchronous logic. Designing for asynchronous logic pretty much defeats the use of such software, and therefore, more engineering time is required to verify the correctness of circuits.
23:57:29 but chuck moore is doing async stuff with his OK, not?
23:57:33 Hence, Chuck's OKAD(-II) may be the only reliable software on the planet able to simulate async logic. :)
23:57:42 Yes, he is.
23:57:51 But his CAD doesn't work with Verilog, or VHDL.
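
The transport-triggered stack unit described above (23:44-23:45) in executable form: the stack appears as two registers, the side effects ride on the moves themselves, and caching the top of stack in pop keeps each push or pop to a single memory access, per kc5tja's bandwidth note. The class and method names are invented for the sketch:

```python
class StackUnit:
    def __init__(self):
        self.pop_reg = 0   # cached top of stack
        self.memory = []   # everything below the top

    def write_push(self, value):
        # Writing `push` stores the value in `pop` and spills the
        # previous top to memory -- one memory access, not two.
        self.memory.append(self.pop_reg)
        self.pop_reg = value

    def read_pop(self):
        # Reading `pop` yields the top and refills it from memory.
        value = self.pop_reg
        self.pop_reg = self.memory.pop()
        return value

s = StackUnit()
s.write_push(1)
s.write_push(2)
print(s.read_pop(), s.read_pop())  # 2 1
```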
23:58:01 He places transistors on the chip's die by hand.
23:58:43 OKAD-II uses ColorForth to place groups of transistors, but he's still describing the chip by hand for the most part.
23:59:59 --- log: ended forth/03.08.13