00:00:00 --- log: started forth/16.07.18
13:24:27 Hi
13:25:03 hello
13:25:37 what's up
13:28:47 thinking about implementing forth on my Tandy m100
13:29:16 oh
13:29:25 I've got a Tandy 200
13:29:28 not sure it works though
13:29:45 ... now I have a service manual
13:30:02 Nice
13:30:17 it's using an 8085, isn't it
13:30:25 so basically an 8080
13:30:34 Mine is an 80C85
13:30:43 Same arch
13:31:48 yup
13:31:55 no index registers like on the z80
13:32:41 there's an 8080 fig-forth
13:32:46 I want to replace it with a z80 lol
13:33:23 I have a z80 chip tho that I can use on a breadboard
13:34:09 not sure that'll be easy, since the pinout is very different
13:35:02 I have two 512k SRAM chips too, could build a custom forth computer, also have like four 8086 chips
13:35:23 is forth good for multithreading
13:41:02 it could be
13:41:22 http://www.bradrodriguez.com/papers/mtasking.html
13:46:28 Like several CPU cores at once, or would you need to run multiple instances on each CPU and link them together?
13:48:07 okay, mine appears to work, but the screen is very faint and the G key doesn't work
13:48:18 oh, on multiple cores?
13:48:28 you'd need to work out your own synchronisation
13:58:15 Oooh, I do have an arduino due that I think there is a forth implementation for
14:00:17 I wish there were more forth/stack-related papers from this millennium ;(
14:00:47 is forth still relevant to this day
14:00:49 how *does* multithreading work in forth, anyway? Separate data stacks for separate threads?
14:01:08 Or just a retro lang with hobbyists
14:01:36 it's as relevant as you want it to be, but it's obviously not as popular as many others
14:02:40 can it do more complex things easily, or does it turn into a Turing tarpit if you want to do anything like make a game or something
14:03:20 could you elaborate on what you mean by "turing tarpit"?
14:03:31 like brainfuck
14:04:30 the worst problem I've heard most people describe as programs get larger in forth is all of the stack management, but they always say that lots of factoring / using the return stack / locals makes it reasonable
14:04:47 ah cool
14:05:05 so you just keep building it and making words until it looks nice?
14:05:14 sort of, yeah
14:05:25 the important thing (to me) is that it's turing-complete and has metaprogramming
14:06:26 so worst-case scenario you have to extend the language to fit how you like it
14:09:12 that sounds good actually
14:09:25 shrink the code size
14:17:03 well, forth is used in many cnc machines and other hard realtime systems
14:19:20 so anyway... how does multithreaded forth work? Separate data stacks for each thread? Or what?
14:21:09 reepca: depends on whether you are doing cooperative or preemptive multithreading, but usually there are separate data and return stacks per task
14:22:38 By "multithreaded" I mean "assume a multi-core system with true concurrency"
14:24:02 hmm... multi-core with one shared memory, or multi-core where each core has its own fast SRAM (a few KiBiCells worth)?
14:25:27 shared memory
14:26:21 with the former you just change the thread control blocks to have two pointers, to the data and return stacks respectively, instead of only one pointer to a stack of activation records
14:28:15 alright, seems simple enough.
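A minimal sketch of the per-task state just described, in the spirit of the round-robin tasker in the Brad Rodriguez paper linked earlier. All names here are illustrative, and sp@ / sp! / rp@ / rp! are common but non-standard words (gforth provides them); a real tasker would also have to allot the stacks and seed each new task's return stack with its entry point:

    variable current-task    \ TCB of the running task
    variable next-task       \ scratch cell that survives the stack switch

    \ TCB layout: cell 0 links to the next task in the ring,
    \ cell 1 holds the saved SP, cell 2 the saved RP.
    : tcb-sp ( tcb -- addr )  cell+ ;
    : tcb-rp ( tcb -- addr )  2 cells + ;

    : pause ( -- )                       \ cooperative task switch
      current-task @ @ next-task !       \ follow the link to the next task
      sp@ current-task @ tcb-sp !        \ save my data stack pointer
      rp@ current-task @ tcb-rp !        \ save my return stack pointer
      next-task @ current-task !
      current-task @ tcb-sp @ sp!        \ restore its data stack...
      current-task @ tcb-rp @ rp! ;      \ ...and its return stack; the
                                         \ final EXIT resumes the new task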
14:32:40 but I warn you, the cache coherency traffic and such will probably kill your performance and responsiveness
14:35:12 Could you explain why that would happen more than it would in more typical multithreading implementations in other languages?
14:35:52 no, as this happens with all programming languages
14:36:54 oh, okay.
14:37:52 so what are you going to use multicore multithreading for exactly?
14:41:00 I suppose I was thinking about being able to scale up well
14:41:54 The approach taken by high-performance CPUs recently seems to be to add more cores
14:42:16 scale up on what kind of platform?
14:42:58 x86 I suppose, it's all I have right now.
14:44:06 x86 is so slow in so many regards and punishes you if you even have moderately branch-heavy code
14:44:43 yeah
14:45:01 reepca: so right, a lot of the big wins in Forth come from the way you just need to maintain the parameter and return stack
14:45:15 and you don't need to worry about moving things on and off the stack for function calls
14:45:20 what I recommend is to look into rather high-density FPGAs such as Virtex and run quite a few softcores such as the J1 on them
14:45:59 reepca: a lot of the reason folk get tied up in knots with Forth is because they think words are like functions
14:46:09 but even in that case, which kind of parallelism would be used? SIMD or multithreading?
14:46:27 in C a function that's bigger than about a page of text is too big; in Forth a word that's bigger than a couple of lines is really too big
14:46:42 gordonjcp: exactly! I think of words more like small routines, or at least a way to cut down on repetition
14:46:44 it would fit SIMD quite well
14:46:49 Zarutian: yup
14:46:59 there's basically no penalty for going from one word to the next
14:47:08 certainly not nearly as much as calling functions
14:47:16 gordonjcp: this actually has started to chafe me in other programming languages.
14:47:25 Zarutian: yeah
14:47:30 Zarutian: I look at what C compiles to
14:47:32 fuck me
14:47:36 leave my stack alone
14:47:46 what the hell is that, 30-odd bytes of stack
14:47:48 Which reminds me... are there any good examples of problems that can be scaled up using multithreading but *not* using SIMD?
14:48:05 what are you doing, why do you think it's okay to have like a kilobyte of stack?
14:48:17 reepca: processing of very dissimilar datasets
14:48:39 pipeline stuff? MISD, basically?
14:48:58 gordonjcp: not to mention that all this puts so much pressure on the instruction cache, the branch prediction tables and all that
14:49:15 yup
14:49:44 I've been mulling over ideas for how to build a "modern" version of Brad Rodriguez's PISC processor
14:50:02 I can actually buy 74LS181 ALUs from Farnell for about a tenner each
14:50:05 but
14:50:08 heh, I like running forth on hardware that doesn't even try to do branch prediction
14:50:08 buuuuuuuut
14:50:18 reepca: hmm... do you know what a bit blitter is? I look at SIMD as basically more operations in that vein
14:50:21 they have about 12 or 13 inputs
14:50:35 I could blow EPROMs with a truth table that matches the 74LS181
14:50:38 Zarutian: nope
14:50:54 think it would fit in a 27128 or 27256, iirc
14:51:08 and the register ICs are expensive
14:51:21 gordonjcp: PISC? what does that stand for? P-something Instruction Set Computer
14:51:26 but I reckon I can "fake" a dual-port register RAM with a bunch of 8-bit latches
14:51:28 Zarutian: Pathetic
14:51:36 even better
14:51:42 I've got 8-bit up/down counters
14:51:49 gordonjcp: and has it got an ALU in it?
14:52:04 they've got separate inputs and outputs, so they're dual port
14:52:15 and one day I could use the counter to make it directly be a PC or stack reg
14:52:18 hell, I have a Forth VM spec that doesn't have a UM+ instruction/primitive in it
14:52:20 Zarutian: yup, four 74LS181s
14:52:44 is there a processor that doesn't have an ALU of some kind in it?
14:53:50 gordonjcp: only three instructions to manipulate data: XOR, AND, and LeftBitRotate
14:56:45 gordonjcp: so, I reckon its hardware implementation can get away with three SRAMs (two for the stacks and one for program & data), two 8-bit up/down counters (for the stack indexes), three registers (TOS, PC and instruction) and one ROM for ROM-logic-based microcoding
14:57:29 Zarutian: is this the right one? https://en.wikipedia.org/wiki/Blitter
14:58:48 reepca: pretty much, except in the SIMD case the memory is actually registers
14:59:22 Zarutian: interesting
15:05:11 gordonjcp: not sure how fast it would be, but that would depend on the settling times of the memories, I guess
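Arithmetic on a machine like that leans on the classic carry-propagation trick: addition (the UM+ that the VM spec above leaves out) can be synthesized from just XOR, AND, and a left shift. A sketch in standard Forth, where 2* stands in for LeftBitRotate (on a rotate-only machine you would mask off the wrapped-around bit):

    \ Add by repeatedly splitting into a carry-less partial sum (XOR)
    \ and the carries (AND, shifted left one place), until none remain.
    : xor-add ( a b -- a+b )
      begin dup while          \ loop while any carry bits are left
        2dup and 2*            \ the carries, moved into position
        >r xor r>              \ partial sum underneath, carries on top
      repeat drop ;

    1 2 xor-add .              \ prints 3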
16:19:32 Zarutian: yeah
16:19:42 Zarutian: "pretty quick" I suspect
16:23:45 gordonjcp: what is the usual settling time from an address change to when a data read or write is safe?
16:26:23 is the xt of each word guaranteed to be unique in ANS Forth? That is, could I use it the same way I might use a symbol in lisp, to identify a word?
19:15:00 Hi
19:16:00 supposedly forth is pretty fast, but I'm curious about whether in 1 2 + it would be faster to not push those to the stack and instead keep them in registers for an add
19:21:30 depends on the processor arch
19:22:06 also depends on the forth. There are Forths out there smart enough to determine that you are doing a literal add, and optimize it to 3
19:22:29 plus, you can optimize manually: [ 1 2 + ] LITERAL
19:23:23 nice
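Spelled out, that manual optimization is standard ANS Forth: [ drops into interpretation in the middle of a definition, the arithmetic runs at compile time, and LITERAL compiles the result as an inline constant:

    : size-calc   1 2 + ;                \ 1 and 2 pushed, + run, on every call
    : size-fold   [ 1 2 + ] LITERAL ;    \ 3 computed once, while compiling

Both leave 3 on the stack, but size-fold pays the cost only once, at compile time.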
19:24:12 reepca, I'm not sure you can say it's a 1:1 mapping, but pretty sure you *can* say that if there are multiple things that use the same xt, they are all performing the same thing
19:24:17 and so are basically identical
19:24:32 I'm thinking of implementing a small forth on my tandy in basic and then bootstrapping an optimized native forth out of that
19:24:57 hrm, from basic sounds like a headache
19:25:43 I did something similar, but on a different platform that lacked anything but an assembler
19:26:22 but I actually built a linux x86 assembly forth, then bootstrapped a forth on the target arch from that
19:26:31 I can load an assembler into this thing, it's cool sifting through archives of old software originating from BBSes
19:26:54 I'm playing with the DCPU stuff of notch fame
19:27:13 so it doesn't have the same level of history
19:29:02 Looked at some forth examples and it appears you can achieve very high-level programming
19:29:36 I'd just consider that you will need to know a lot about the assembly for the platform, and even quite a bit about the machine-code-level representation thereof, to do a good native forth
19:30:06 yeah, HYPE is a fun thing to poke through and pick apart
19:31:07 less than 2k of compiled code and you get OOP extensions to forth
19:31:22 it's an 8085, I know a bit of z80 and supposedly they're very similar
19:32:07 you done forth stuff before, or is this your first incursion into the subject?
19:32:12 Lol could write a z80-mnemonic 8085 assembler so I don't gotta relearn :v
19:32:37 no I haven't, except a bit of arithmetic
19:33:01 vendan: hm, the problem is if I define words that are meant to be used as sort of "interned symbols" (and only as that), then conceivably, since they perform the same operations, they could have the same xts.
19:33:05 well, the bigger bit isn't the mnemonics, it's the machine code for it.
19:34:28 I guess this explains why gforth has name tokens
19:35:15 like a store operation might be pop a, ld hl,NAME ld (hl),a
19:36:47 reepca, it'd be easy to test: make 2 "interned symbols", ' them, and print the result
19:37:16 though if you are CREATE'ing them or similar, it should be unique
19:37:41 hard to tell with gForth and similar though, as it's fairly optimizing, iirc
19:38:01 gnawzie, store should probably involve 2 pops :)
19:38:16 pop a pop b mov [a], b or so
19:38:23 not sure I have the order right on those
19:38:44 is a the address there?
19:39:42 yeah, my x86 forth is pop eax pop ebx mov [eax], ebx
19:39:45 I guess whether I rely on the way gforth does xts or use the name tokens, it's non-portable either way
19:39:57 but with the name tokens at least future gforth versions should be compatible
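The experiment suggested above takes two lines in any system; whether the two xts differ is exactly what ANS Forth leaves unspecified (gforth happens to give every colon definition its own xt):

    : red    ( -- ) ;          \ two words with identical, empty bodies
    : green  ( -- ) ;
    ' red ' green = .          \ 0 in gforth (distinct xts), but a system
                               \ that merged identical code could print -1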
19:40:06 what are you using them for?
19:41:22 sort of like enums. Although I suppose I could set up defining words to do something similar
19:41:46 the dcpu allows 2 memory operands per op, and has an operand mode that dereferences the stack pointer and post-increments it
19:41:55 I liked the way keywords worked in common lisp, so I'm seeing how close I can get to that
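One way to get enum- or keyword-like symbols from a defining word, which sidesteps the xt question entirely: each CREATEd word hands back a distinct value of its own. A sketch in standard Forth, where the names and the running counter are just one illustrative choice:

    variable enum#   0 enum# !
    : enum: ( "name" -- )            \ define a word that pushes its own number
      create enum# @ , 1 enum# +!
      does> @ ;

    enum: :red   enum: :green   enum: :blue
    :green .                         \ prints 1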
19:42:12 I think I could actually do 'set [SP++], [SP++]' and that'd do a store in 1 instruction
19:42:25 but I separated it into 2 so I knew it was correct
19:42:47 never messed with lisp, so no idea
19:43:04 so do you just insert those chunks of machine code into addresses and have the dictionary point to them?
19:43:40 kinda. Have you looked into any guides about making a forth, and the common data structures of them?
19:44:15 I looked at wikipedia, need to go reread it and probably take notes
19:45:05 http://www.bradrodriguez.com/papers/moving1.htm is also a very very good read
19:45:40 has tons of useful info
19:45:59 that and ANS Forth helped me make my first forth implementation
19:46:09 still botched it, but my second one was better
19:46:41 how far can compiler optimization go on register machines?
19:47:07 third implementation was decent, but a little ugly... Fourth (HA) is better, but the code is ugly. Working on reformatting it to be fully self-hosting
19:47:29 reepca, depends on the forth?
19:47:38 pretty damn far though, if you try hard enough
19:47:43 ANS forth?
19:47:58 http://lars.nocrew.org/dpans/dpans.htm
19:48:14 it's the "standard" that few truly adhere to
19:48:22 and causes lots of controversy
19:48:54 but it's still a good reference
19:49:12 Any place I could look to understand the kind of optimizations that can be made?
19:49:40 a subroutine-threaded forth can get rather close to raw assembly if it wants
19:50:15 i.e. 1 2 + could be optimized to mov a, 1 add a, 2 (though it'd probably optimize it away to mov a, 3)
19:51:21 The main concern nagging me is "
19:51:26 oops
19:51:54 isn't it inherently slower having to constantly load from main memory?
19:52:05 to get stuff from the stack
19:52:21 like I said, depends on the forth, depends on the machine architecture
19:52:42 many common uses of forth are on microcontrollers and similar, where registers are the same speed as main memory
19:53:05 and it's not like C code is magically "all in registers"
19:53:31 C code, and most other HLLs, make extensive use of the stack
19:54:14 and everything in a small chunk of main memory (i.e. the stack) means the stack can stay in L1 or L2 cache, which is many orders of magnitude faster than main memory on modern hardware
19:55:02 your limiting factor is far more likely to be reading the program memory itself
19:55:08 Would forth be a poor choice for doing graphics
19:55:19 cause that will be out of order, and all over the place, comparatively
19:55:48 one reason subroutine threaded is better, cause it can take advantage of a modern CPU's pipelining and branch prediction
19:56:16 gnawzie, depends?
19:56:36 there isn't a good modern word set that I know of
19:57:04 so while forth the language, compared to, say, C# the language
19:57:05 it's just something that's been nagging me since I ran a benchmark with optimized forth (not assembly though, I think) that ended up taking 3x longer than the C with no optimizations enabled. What bugs me is that I don't understand where the difference comes from.
19:57:15 may be perfectly capable of doing it
19:57:35 C# the "whole picture" has a bunch of prebuilt stuff for loading image files and such
19:57:51 that makes it way faster as a developer to start messing with graphics
19:58:13 yeah, I've heard people say a lot that the biggest feature a language can have is tons of libraries
19:58:14 reepca, what kind of benchmark though?
19:58:22 let me check
19:59:26 and just as a blunt comment, anyone can make a slow benchmark in any language
19:59:35 doesn't even have to be on purpose :)
20:00:17 You could make bindings in forth to C libraries or whatever, I would think
20:00:28 yeah, that's fair. it's a bunch of floating-point math on arrays
20:00:46 my machine has some ROM subroutines I could take advantage of
20:04:51 yeah, floating point stuff tends to be a little special
20:05:35 it's not even a part of the core dictionary of ANS Forth
20:07:46 my forth implementations haven't had floating point support
20:08:09 though I did build a fixed-point math add-on to my latest forth effort
20:08:57 oh, and reepca, one other question about the benchmarks. How did you do the timing on them?
20:09:29 'time ./cbinary' vs 'time gforth < blah.f'?
20:11:04 "time ./bench" vs "time gforth-fast bench.fs"
20:11:50 yeah, you compared the execution of a C binary to the parsing/compiling and execution of the forth
20:12:04 if it's big enough, it shouldn't matter
20:12:23 but just so you are aware it's a little apple/orangey
20:12:35 yeah
20:12:46 I should stop being lazy and build the timing into the forth one
20:13:16 I just copy-pasted it from a site ( http://dan.corlan.net/bench.html#FORTHOPT ) with outdated results
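For timing inside gforth, so that parsing and compiling stay out of the measurement, something along these lines works; utime is gforth-specific (microseconds since the epoch, as a double number), and BENCH is a stand-in for whatever word wraps the benchmark loop:

    : timed ( xt -- )
      utime 2>r execute 2r>       \ park the start time on the return stack
      utime 2swap d-              \ end minus start, still a double
      d. ." microseconds" ;

    ' bench timed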
20:14:06 on my dcpu forth, for example, there's a massive difference in speed between 1 2 + . and : test 1 2 + . ; and then just timing test
20:14:45 that's largely because the parsing is so relatively expensive on there
20:15:02 100 khz makes you feel the directory searches :D
20:15:13 dictionary*
20:31:29 100 kHz CPU? Is this a transistor CPU? Lol
20:34:57 erm... how does one divide a double-precision number?
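That division question never gets an answer in the log. ANS Forth has no double-by-double divide, but it does divide a double by a single cell: um/mod ( ud u -- rem quot ) unsigned, fm/mod and sm/rem for floored and symmetric signed division, and m*/ ( d n +n -- d ) when a double result is needed. For example, splitting gforth's double microsecond count:

    utime 1000000 um/mod        \ -- remainder seconds
    . ." s, " . ." us"          \ whole seconds, then the leftover microseconds
    utime 1 1000 m*/ d.         \ or scale it: microseconds to milliseconds,
                                \ keeping the double-precision result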
20:37:02 gnawzie, it's an emulated CPU inside a video game
20:37:13 Ah cool
20:37:19 can't give it too much power, cause there will be multiple of them
20:37:27 Have you ever seen The Powder Toy CPUs?
20:37:36 That's wild stuff there
20:37:42 some of them
20:37:54 wireworld is interesting stuff too
20:38:20 There's even ones that come with assemblers, and you copy and paste the coded matter into the memory
20:38:35 I'm personally interested in designing custom cpu architectures for FPGA chips
20:38:56 there's nothing like the thrill of seeing your custom cpu running code in real life
20:39:08 like, actually holding it in your hands
20:39:09 I have two FPGAs, a cheap lattice one and an expensive altera
20:39:30 I've got 3 cheaper alteras
20:39:34 alright, I finally got the timing integrated. it appears that a whopping 15 milliseconds were being used on compiling / starting up gforth
20:39:45 one's on a board with a 256 kbit sdram chip
20:39:49 that one's fun :D
20:40:06 yeah, not terribly surprised
20:40:07 for some reason I forgot that time was specified as double-precision
20:40:12 mine has a 32mb sdram
20:40:15 it's supposed to be fast
20:40:24 Idk speed
20:40:30 well, mine was a $30 cheapo board
20:40:32 so meh
20:40:37 I know it takes up to a 166mhz clock tho
20:40:52 The FPGA itself I've pushed up to 500mhz
20:41:01 even if the runtime speed is slower, the fast startup speed is a nice change coming from steel bank common lisp
20:41:07 nice
20:41:24 mine are rated at like 200mhz
20:41:38 I'm sure when I build a CPU on it it's gonna be way lower
20:41:46 Instead of a simple counter
20:41:47 though my CPU cores tend to be a bit complex, and drag it down to like 80~90mhz
20:41:53 which is still rather nice
20:42:12 i've been wanting to learn to program FPGAs for a while now, but I'm not sure how expensive/large/high-end one should be for learning
20:42:35 also I tried learning VHDL, but apparently the freedomware ecosystem prefers Verilog ;/
20:42:36 I'm still juggling whether I use vhdl or verilog, verilog is easier
20:42:40 I'm planning on embedding one as the control board for a rather well-articulated hexapod
20:42:58 I've got $20 and $30 altera boards
20:43:25 though the lattice stuff seems to be picking up for lower-end hobbyist-level stuff
20:43:42 I want to build a neural-net-based processor with a bunch of cores that load neurons, which are structures of addresses to other neurons, and pass those to other cores to load more neurons
20:43:44 and I think there's even a fully open source build chain for a few of the lattice chips
20:44:09 sounds.... complex
20:44:10 lattice... isn't that like, the only type with an open-source toolchain?
20:45:26 pretty sure the lattice iCE40 chips are the only ones with a fully open source build chain
20:45:34 like, all the way to the bitstream
20:45:43 nah, it just has a register pointing to a neuron structure, which contains addresses to 8 other neurons and ships those addresses to unoccupied cores, which load 8 more neurons etc
20:46:32 Would stack computers be a worthy project to tinker with? What's the current state of stack machines, anyway?
20:46:52 yeah, I think you'll find that's far more complex to actually model in an HDL than you think
20:47:03 at least, in a synthesizable manner :)
20:47:06 better to use a lang lol
20:47:17 I want a stack computer
20:47:32 One with like 16 stacks
20:47:40 stack VMs are big
20:47:47 .net is a stack VM
20:47:58 4 different ram ports
20:48:05 ouch
20:48:21 Simultaneousness
20:48:40 double ouch
20:48:49 Is that a coder nightmare
20:48:58 it's a hardware nightmare
20:49:02 on steroids
20:49:04 I read the Sega Saturn was hell
20:49:07 and amphetamines
20:49:22 double-ported ram is usually expensive
20:49:30 Sega stuff was a mess lol
20:49:48 A genesis + 32x + Sega cd
20:49:49 I've never even seen the idea of quad-ported ram
20:50:11 Well what if they're rather small and the computer is 16 bit
20:50:30 double-ported ram is usually your register file
20:50:49 and has like 8~16 items in it
20:51:09 16 bit address so that's 64x64 pins... nvm lol
20:51:45 I'm not paying for a $4000 FPGA to do that
20:52:24 pretty sure even modern tech is only single-port for main memory, just massively fast and with multiple channels
20:52:57 Jameco has 512k SRAM for $6 but it's hella slow
20:53:19 have consumer stack computers ever been made in this millennium?
20:53:45 doubt it
20:54:04 consumer is mostly x86/64, arm, powerpc, and similar
20:54:34 hm. What kind of limitations did they run into?
20:54:45 stack machines?
20:54:50 aye
20:55:03 no idea
20:55:35 though registers are probably easier to make fast at the hardware level
20:56:22 interestingly, stack machines seem to be easier to program for, at least on some level
20:56:36 as the .net vm is a stack machine
20:56:54 but it JITs to register-based anyway
20:57:55 I may make a junky stack computer on my little lattice FPGA for a little forth system
20:58:20 that's something I'd like to do as well
20:58:22 J1 stuff is an interesting read for that
20:58:48 I'd like to better understand how the hardware design affects performance
20:59:14 http://excamera.com/sphinx/fpga-j1.html
20:59:57 Two memory ports are in use for like avr, so code is separate from data
21:00:22 no
21:00:27 that's 2 channels
21:00:58 dual memory ports means two simultaneous operations are possible on a single bank of ram
21:01:12 i.e. reading address X and writing address X+1 at the same time
21:02:35 and there's a bunch of extremely sensitive definition stuff there, to state in what order a read from X and a write to X in the same cycle happen
21:06:28 alright, well, it's been fun chatting with y'all
21:06:35 but it's after midnight for me now
21:06:41 so I gotta get some sleep
21:06:50 alright, sleep well
21:07:15 Good night
21:07:16 I'm on here most of the time, so feel free to hit me up whenever, I just may not answer right away :D
21:26:36 I wonder if you can extend operator functionality for things like matrix multiplication or rotations
21:26:53 you mean like operator overloading?
21:27:46 That would require some extra work, I'm afraid. Forth is untyped (or, as some call it, unityped) - the operators are what decide how a value is treated.
21:30:04 That being said, you *can* redefine the multiplication operator and pass a flag to it at compile time that will decide which version to use. I think it would screw up the interpreting of * though, so it would effectively become compile-only or state-smart
21:30:31 so you would just create a word to do it then?
21:31:04 The typical way to do it would be to define a word like mat*, yes
21:31:21 cool
21:31:28 That being said, forth is malleable. If you want to shape it to work differently, you can.
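A skeleton of that mat* approach: matrices as flat, row-major cell arrays with their dimension passed explicitly, and one word per operation instead of an overloaded *. Everything here (names, layout, the square-matrix assumption) is one illustrative choice; the locals use the Forth-2012 {: ... :} syntax that gforth supports:

    : mat@ ( a n row col -- x )  >r * r> + cells + @ ;   \ fetch a[row][col]
    : mat! ( x a n row col -- )  >r * r> + cells + ! ;   \ store a[row][col]

    : row*col {: a b n row col -- sum :}   \ dot a row of a with a column of b
      0  n 0 ?do
        a n row i mat@  b n i col mat@  *  +
      loop ;

    : mat* {: a b c n -- :}                \ c := a * b, all n x n
      n 0 ?do  n 0 ?do                     \ outer loop: rows (J), inner: columns (I)
        a b n j i row*col  c n j i mat!
      loop loop ;

A call then reads like m1 m2 m3 4 mat*, and addition, rotation and so on each get their own word (mat+, mat-rotate, ...) rather than a redefined *.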
22:40:00 decided to put forth on an arduino and play with it
23:30:27 gnawzie, you gonna write your own or use an existing Atmel forth?
23:44:54 use an Atmel one
23:45:14 but it seems like the bigger ones require some external programmer
23:59:59 --- log: ended forth/16.07.18