00:00:00 --- log: started forth/19.01.12 01:17:22 --- join: dys (~dys@tmo-113-122.customers.d1-online.com) joined #forth 01:39:50 --- quit: gravicappa (Ping timeout: 258 seconds) 02:04:22 --- quit: ashirase (Ping timeout: 250 seconds) 02:07:03 --- join: ashirase (~ashirase@modemcable098.166-22-96.mc.videotron.ca) joined #forth 02:25:27 --- quit: Kumool (Quit: EXIT) 02:32:01 --- join: gravicappa (~gravicapp@h109-187-2-216.dyn.bashtel.ru) joined #forth 03:58:29 --- quit: dys (Ping timeout: 240 seconds) 04:17:45 --- join: dddddd (~dddddd@unaffiliated/dddddd) joined #forth 04:21:40 --- join: dys (~dys@tmo-099-182.customers.d1-online.com) joined #forth 04:46:44 --- join: smokeink (~smokeink@42-200-116-249.static.imsbiz.com) joined #forth 05:01:18 so the does> for a word I'm writing has the stack effect ( n -- xt) 05:01:53 but because it's does>, it puts its own address on the stack, so it actually makes more sense to write ( n addr -- xt) but that's misleading 05:01:59 someone tell me what's less wrong 05:28:38 Well, those stack effects are there, so people know what to put onto a stack. 05:29:03 So the first one seems more appropriate, I guess? 05:36:27 if your comment follows does> i would say include the addr 05:36:47 if your comment is following a word defined with the word where does> is used, then don't include the addr 05:39:08 e.g., : makefoo ( xt1 xt2 -- ) create , , does> ( n addr -- xt ) swap cells + @ ; ' dup ' swap makefoo foo ( n -- xt ) 05:42:37 and I guess the makefoo comment there should be ( xt1 xt2 ccc ) 05:47:54 I think you're right 06:38:01 --- quit: smokeink (Quit: Leaving) 06:58:30 general syntax thing (as opposed to implementation) - i have a word like [[ s1 s2 ...
]] where each string is simply placed on the stack, followed by the number of strings obtained - adding something to it is just swap 1 + which i have as []+ (ie: [[ 1 2 3 ]] 4 []+) and if i had something before it's just 1+ which i call +[] (ie: 0 [[ 1 2 3 ]] +[]) 06:58:42 does that make sense so far? 07:02:53 it goes a little further :) - [[ a b c ]] [[+ d e f ]] (append second list) and [[ a b c ]] +[[ d e f ]] (inserts second list) 07:07:33 * the_cuckoo thinks it's probably a little too weird 07:08:45 there are some words in the Forth 2012 whose behaviour i don't like... should i use different names than in the standard, or use the same names as the std but they work differently? 07:14:02 what standard? isn't it merely a guideline? 07:14:09 some examples are WORD and FIND, which use counted strings, and M* and <# # #>, which use double numbers 07:14:31 Zarutian: i heard some people don't bother with the standard, and others do 07:20:17 oh sorry "Forth 2012 standard" 07:37:01 How do counted strings even work? 07:37:19 well? 07:37:38 john_cephalopoda: they store the count at c0, c+n is character n 07:37:44 john_cephalopoda: the first byte is the length of the string 07:38:26 http://forth-standard.org/standard/rationale#rat:cstring 07:39:49 dave0: Ah, ok. 07:39:53 Makes sense. 07:40:50 i don't like counted strings, i like the c-addr n type 07:41:08 what is handy about having ( c_addr c_length ) as the on-stack representation of strings is that you can substring without having to copy or touch the original bigger string 07:41:09 : cstring2c-addrn DUP 1 CELLS + SWAP @ ; 07:41:59 ( cstring -- c-addr n ) 07:42:07 john_cephalopoda: there is a count word that goes from counted string to c-addr n ... but going the other way is harder 07:42:08 'COUNT' 07:43:16 dave0: Yeah, going the other way isn't possible without making a copy of the string. 07:44:26 So maybe using cstrings _is_ the best method because you'll be able to easily convert it to c-addr n.
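An editorial aside: the CREATE/DOES> idiom behind the makefoo example earlier in the log can be sketched in Python, with a closure standing in for the body whose address DOES> pushes. The list-stack "xts" at the end are illustrative stand-ins, not part of the original discussion.

```python
# Python sketch of:
#   : makefoo ( xt1 xt2 -- ) create , , does> ( n addr -- xt ) swap cells + @ ;
# The list built at definition time plays the part of the body that
# CREATE allots; the closure plays the part of the DOES> code.
def makefoo(xt1, xt2):
    data = [xt2, xt1]          # `, ,` compiles the top of stack (xt2) first
    def foo(n):                # does> ( n addr -- xt )
        return data[n]         # swap cells + @
    return foo

# ' dup ' swap makefoo foo  -- xts modelled here as functions on a list-stack
dup_xt = lambda s: s + [s[-1]]
swap_xt = lambda s: s[:-2] + [s[-1], s[-2]]
foo = makefoo(dup_xt, swap_xt)  # foo ( n -- xt ): 0 selects swap, 1 selects dup
```

Note that a user of `foo` only ever sees ( n -- xt ); the addr in the DOES> comment is an implementation detail, which is exactly the distinction drawn in the conversation above.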
07:45:15 no 07:45:17 :-) 07:45:51 requires memory accesses and moves and whatever 07:46:35 Hmmm... 08:01:12 WilhelmVonWeiner: I mean that strings should be stored in memory as counted strings. Internally they can be handled differently, but having a cstring in memory for some thing is pretty useful. 08:02:09 I can see the arguments for counted, uncounted, and zero-terminated strings 08:03:07 there is no "internally" with Forth :-p 08:04:37 counted string lengths are limited to what a char can hold, and zero-terminated would only be for unix compatibility 08:04:51 `: cmp ( c c-addr -- c-addr f) 1+ tuck 1- c@ dup 5F = if drop dup then = ;` 08:04:55 Is this a bad word 08:07:48 oh it matches the next character in the string, with _ as a wildcard? 08:07:52 aye 08:20:09 --- quit: rain1 (Ping timeout: 240 seconds) 08:28:51 --- join: rain1 (~My_user_n@unaffiliated/rain1) joined #forth 08:31:10 so i wonder what the approach would be to define a forth stack machine instruction set 08:32:01 my feeling is that it should be at least a 16 bit cell size 08:32:19 or rather, stack width 08:32:31 Same diff, to us 08:32:33 didn't you ask this already and then I suggested a bunch of cpus with different approaches 08:32:43 it doesn't really matter 08:33:39 i've had so much deja vu lately 08:49:24 --- join: Kumool (~Khwerz@adsl-64-237-235-188.prtc.net) joined #forth 08:51:12 yes, you suggested some existing instruction sets 08:51:32 but i'm interested in the process that is used to define the instruction set 08:52:00 come again? 08:53:44 i'm not interested in what the best instruction set would be, but in the process of developing an instruction set 08:54:19 --- quit: Kumool (Ping timeout: 246 seconds) 08:54:27 In those specialty processors, they will have words specific to their implementations - but chances are high there is some intersection/commonality. 
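An editorial aside on the counted-string discussion above, sketched in Python over a flat byte array. (Note that `cstring2c-addrn` as pasted assumes a cell-wide count field fetched with `@`; a standard counted string stores a single length char at the first byte, which is what `COUNT` reads.)

```python
# Memory model: a counted string is a length byte followed by the characters.
mem = bytearray(64)

def place(s, addr):
    """Store s at addr as a counted string (like PLACE in some Forths)."""
    mem[addr] = len(s)                        # count char at c0
    mem[addr + 1:addr + 1 + len(s)] = s.encode()

def count(addr):
    """COUNT ( c-str -- c-addr n ): step past the length byte."""
    return addr + 1, mem[addr]

place("hello", 0)
c_addr, n = count(0)          # (1, 5)
# Substringing with (c-addr, n) is free: adjust the pair, copy nothing.
tail = (c_addr + 1, n - 1)    # points at "ello" in place
# Going the other way needs a copy, as noted above: there is no room
# for a count char directly before an arbitrary (c-addr, n) slice.
```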
I'd look at "3 instruction forth", and some MCU implementations for commonalities - shooting for the absolute minimum first, and adding opcodes/keywords only when the processor/engine/microcode can do more, faster, than the bytecode/tokens/user - and keep it to 32/64/128/256 ops 08:55:42 You might also look for that doc - whatever it is - where they did a word-use count/frequency analysis of forth. 08:55:49 --- join: Kumool (~Khwerz@adsl-64-237-235-188.prtc.net) joined #forth 08:58:29 --- join: rdrop-exit (~markwilli@112.201.166.158) joined #forth 09:00:19 yea 09:00:34 corecode: in determining mine initially, I looked at a listing of all words in my sources, sorted by frequency of use, removed all non-primitives, and then started rewriting the various words that remained in terms of the others. 09:01:25 I've done this a few times, I always end up with around 30 core instructions, though the exact set has varied (slightly) over the last decade 09:07:45 --- join: proteusguy (~proteus-g@119.63.68.196) joined #forth 09:07:45 --- mode: ChanServ set +v proteusguy 09:08:05 It depends on the application area, what you're trying to optimize for, the tradeoffs involved, the limits involved, the underlying technology, and myriad other factors, just like any other engineering endeavour. 09:08:38 sure 09:08:44 It can be completely arbitrary, too - you'll end up tweaking. 09:08:47 that's why i'm interested in the process 09:08:53 not just one result 09:12:35 A good place to start is to read up on the early history of computer architecture, to see how it all evolved, and the various almost forgotten experiments on the way.
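An editorial aside: the frequency analysis described above (list every word used in your sources, rank by use, drop the non-primitives, rewrite the rest in terms of the survivors) can be sketched like this. The file names are hypothetical.

```python
# Count how often each word occurs across a set of Forth sources.
# Forth is whitespace-delimited, so split() is a passable tokenizer;
# a real pass would also strip comments and string literals.
from collections import Counter
from pathlib import Path

def word_frequencies(paths):
    counts = Counter()
    for p in paths:
        counts.update(Path(p).read_text().split())
    return counts

# e.g. for word, n in word_frequencies(["core.fs"]).most_common(30):
#          print(n, word)
```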
09:21:16 https://www.amazon.com/Computer-Architecture-Evolution-Gerritt-Blaauw/dp/0201105578/ 09:21:37 so there are ALU ops, stack ops, thread call/return, literals 09:22:29 i guess op size doesn't have to fit the cell size 09:23:14 ah, also jumps 09:25:22 memory load and store 09:26:08 interrupts 09:28:40 ah, yes, mem load and store 09:28:56 tho that could be an alu op 09:29:00 whatever specialized instructions are worth the trouble for the intended application domain 09:29:12 I/O 09:29:14 yea i'm trying to figure out a base level first 09:31:39 In my forth-ish designs I always have a specific application/domain I'm trying to build, and build just enough forth engine to implement that. Later on I'll extend it when I apply it towards a new application/domain and optimize based on what I've learned from the first time. Every time I've implemented something I "anticipated" needing but that wasn't actually in the scope of my immediate needs - I got it wrong and ended up throwing it away or completely redoing it. 09:31:49 --- quit: PSnacks (Read error: Connection reset by peer) 09:32:27 Note that I've been doing this since 1984... and still can't "anticipate" much better beyond just flat out convincing myself I actually won't ever need it. 09:33:03 --- join: PSnacks (~PSnacks@p200300CB272FF3FC18381BC7823A490C.dip0.t-ipconnect.de) joined #forth 09:36:26 --- quit: Zarutian (Read error: Connection reset by peer) 09:36:48 --- join: Zarutian (~zarutian@173-133-17-89.fiber.hringdu.is) joined #forth 09:38:28 Yes, too many designs start off without any specific purpose in mind, just some vague abstract anticipation of over-generalized needs. 09:39:39 They usually end up drowning in unneeded complexities of their own making. 09:39:50 well, you can design like it's a new chip-design, or you can design to run on existing chip-designs, too. (Treat the target devices as something with 'microcode') 09:40:22 The world doesn't need any more ARM or Intel stuff, for sure.
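An editorial aside: the base level enumerated above - literals, ALU ops, stack ops, memory load/store, jumps, call/return - can be sketched as a toy cell-stack VM. The symbolic opcodes and the example program are illustrative, not any particular instruction set from the discussion.

```python
# A toy stack-machine interpreter covering the instruction classes
# listed above. `mem` is a dict standing in for cell-addressed RAM.
def run(program, mem):
    ds, rs, pc = [], [], 0        # data stack, return stack, program counter
    while pc < len(program):
        op, *arg = program[pc]
        pc += 1
        if op == "lit":                       # literal
            ds.append(arg[0])
        elif op == "add":                     # ALU op
            b, a = ds.pop(), ds.pop()
            ds.append(a + b)
        elif op == "dup":                     # stack op
            ds.append(ds[-1])
        elif op == "drop":
            ds.pop()
        elif op == "@":                       # memory load
            ds.append(mem[ds.pop()])
        elif op == "!":                       # memory store
            addr, val = ds.pop(), ds.pop()
            mem[addr] = val
        elif op == "jz":                      # conditional jump
            if ds.pop() == 0:
                pc = arg[0]
        elif op == "call":                    # thread call
            rs.append(pc)
            pc = arg[0]
        elif op == "ret":                     # return (halt on empty rstack)
            if not rs:
                break
            pc = rs.pop()
        else:
            raise ValueError(op)
    return ds

# Double 7 via a "subroutine" at address 3: leaves [14] on the data stack.
prog = [("lit", 7), ("call", 3), ("ret",), ("dup",), ("add",), ("ret",)]
```

Deciding which of these classes become real opcodes, and which stay words built from the others, is exactly the trade-off being discussed.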
09:40:52 http://ix.io/1y8M 09:40:52 but what about cheap as hell but reliable CPLD/FPGA-esque devices? 09:41:23 I just wrote pattern matching for hex words, it was quite a fun program to write 09:41:26 I am thinking about async lut blocks connected together by interconnects 09:41:28 what about them? Still cost more than existing mcu/mpu, and I understand they all run a little slower 09:41:49 PoppaVic: Agreed, the underlying technology has a huge impact on the design of what goes above 09:42:13 any commentary on my Forth is highly appreciated 09:42:51 PoppaVic: the mcu/mpus or? You do know that these are sequential and combinational logic devices, not von Neumann or Harvard single-threaded cores 09:43:43 Zarutian: yes, I have never played with them, but I know they are building-block-widgets. Not even necessarily "processors". So? 09:44:05 PoppaVic: just confused how they are 'slower'. 09:44:55 Zarutian: folks HAVE coded these fpga as processors - and they are always slower - except in the cases where they are designed as a block of some sort - like for DSP and only DSP, etc 09:45:16 so, it's just not something I get overly excited about. 09:45:34 why the async LUTs? 09:45:34 PoppaVic: that is because those FPGAs are always clock-synched for some obscure and obsolete reasons 09:45:45 how does the async thing work? 09:46:03 corecode: no clocking, which is the main thing that plagues FPGAs 09:46:27 so how do you know that you can execute the next instruction? 09:46:41 sure, if one isn't careful with one's design then you might get glitching. 09:47:09 corecode: LUTs, Look Up Tables. Basically reprogrammable gates 09:47:47 yes i know how FPGAs work 09:48:10 i just don't know how it would work asynchronously 09:48:10 in FPGAs, those 'gates' are always clocked. 09:49:05 that, on top of the regular CPU clock one implements, eats up time.
09:49:10 Reliable asynch design on the scale of FPGAs is still a black art 09:50:15 rdrop-exit: well, if you start with a 'high'-level hardware description language and work down, sure. 09:50:18 you mean the LUTs are clocked? 09:50:28 invisibly clocked? 09:50:47 corecode: yes! This is what hampers FPGAs. 09:50:48 And more brittle to changes in underlying technologies and geometries 09:51:02 where does this clock come from? 09:51:10 i thought the clock was explicit 09:51:22 so even combinational logic is clocked? 09:51:36 corecode: from circuitry inside the FPGA that gets configured as part of the bitstream. 09:51:59 oh that is interesting 09:52:12 do you have a link or so for me to read up on that? 09:53:27 rdrop-exit: let's say 24nm feature size CMOS, phosphorus and boron ion deposition process. Heck, what I have been looking for are devices as cheapish, reliable and ubiquitously available as the 7400 series chips are. 09:54:06 corecode: nope, just got the impression from reading Xilinx, Altera and other FPGA makers' in-depth introductions 09:54:37 i never got that impression 09:55:09 corecode: and reading about how FPGAs really work. 09:56:50 If cheap and ubiquitous is possible, it will eventually happen. 09:56:56 rdrop-exit: one thing that annoys me no end is to have to do, when repairing devices, quite the circuit gymnastics to replace a part discontinued just because the manufacturer didn't think there was a big enough market for it. 09:58:18 it is okay when the replacement is electrically, pin and software compatible with the old part. 09:58:54 right 09:59:50 I understand your frustration 10:00:24 among interesting fpga projects there was/is zynq, which has a small fpga connected to the same bus as an armv7 processor 10:00:33 so you could reconfigure your peripetial 10:00:52 and usually, with these FPGAs the vendors' development software is crap.
10:01:38 peripherialperipheral° 10:01:44 The toolchain is the most frustrating aspect of the current FPGA world 10:02:05 I give up on this word :-) 10:02:15 jackdaniel: those have the same issues that armv7 has. Some of the design decisions they made do not fit and sometimes hinder what one is doing. 10:02:16 :)) 10:02:37 Zarutian: sure, still it is a very cool device for hacking hardware 10:03:09 yeah, it took me a while to realize, "Where is the bitstream description datasheet for these?" is a question the vendors don't want to answer. 10:04:05 2am here, need my beauty rest. Good night all. Keep on Forthin' :) 10:04:17 night rdrop-exit 10:04:22 --- quit: rdrop-exit (Quit: Lost terminal) 10:04:24 sleep well \o 10:07:17 Zarutian: openfpga is reverse engineering bitstreams 10:08:47 corecode: yebb, but this got me thinking. Why risk the wrath of fpga vendors for "stealing trade secrets"? 10:10:19 corecode: so why not make a simple, easily clonable design with the features one wants and without those one does not want. 10:29:13 --- quit: gravicappa (Ping timeout: 258 seconds) 10:30:28 what wrath 10:58:39 corecode: well, bogus lawsuits that are actually resource-consumption attacks ("They outspend you"). 10:58:56 corecode: plus, who knows if they will be around in twenty years' time 11:01:00 are you arguing for creating a new FPGA? 11:19:17 hey guys 11:35:40 --- quit: Kumool (Ping timeout: 246 seconds) 11:42:55 --- join: Kumool (~Khwerz@adsl-64-237-235-188.prtc.net) joined #forth 11:59:29 --- quit: dave0 (Quit: dave's not here) 12:22:40 --- quit: X-Scale` (Ping timeout: 244 seconds) 12:23:31 --- join: X-Scale (~ARM@83.223.226.197) joined #forth 13:06:52 corecode: well, FPGA/CPLD device 13:07:09 that would be cool 13:07:31 but i don't think there is even an open IC 13:07:41 semiconductor processes are difficult 13:08:01 corecode: the state of the art ones currently used, sure.
13:08:26 i have enough trouble etching two layer PCBs at home :) 13:08:37 corecode: but even say 120 nm feature size is plenty to get started. 13:09:11 i'd love to see some hobbyists make their own ICs 13:09:15 that would be awesome 13:10:10 corecode: well there is TSMC. They offer universities, hobbyists and inventors a deal on making ICs at a rather reasonable price. The trouble is that they batch many designs and orders together so there is a bit of a wait 13:10:23 about two months or so at the outset iirc 13:10:52 corecode: well there is this high school student who fabbed his own micron-scale IC in his garage 13:11:21 corecode: bought or got given old lithography machines and such cheap 13:12:28 ok 13:12:32 good for them 13:12:33 he made some modernizations such as replacing the masking part with a modified image-beamer (screen beamer?) 13:12:52 i don't have enough space to do anything like that 13:18:25 neither do I. But now I am wondering if there might be a market for an IC-making machine that is basically the size of a desktop 3d printer 13:19:47 funnily enough using an electron beam instead of ultraviolet light to selectively cure the etch resist actually gives much smaller feature size 13:20:26 but the issue with electron beam is that it is a single beam and not a whole flood of light 13:20:41 oh absolutely 13:21:00 but you also need HF and vapor deposition etc 13:21:02 the latter is used in mass manufacturing for that reason alone 13:21:28 HF? 13:22:48 so, for the electron beam usage one needs a vacuum chamber. But the same one can be used for the vapor deposition, etching and doping 13:22:49 the acid 13:23:02 oh, hydrofluoric acid 13:23:09 yea if you can pull it off, you're rich 13:24:27 is that so hard to come by? Not that one needs a lot of it.
And of course one has to observe the correct lab safety protocols 13:59:37 --- join: Zarutian_2 (~zarutian@173-133-17-89.fiber.hringdu.is) joined #forth 13:59:49 --- quit: Zarutian (Read error: Connection reset by peer) 14:03:49 --- nick: Zarutian_2 -> Zarutian 14:33:29 --- quit: tabemann (Ping timeout: 268 seconds) 14:45:18 --- join: tabemann (~tabemann@24.196.100.126) joined #forth 14:59:15 rfc on this code that lets you match against hex patterns, could easily be modified for other situations http://ix.io/1y8Y 15:07:11 wow 15:07:20 that is some nice code 15:07:35 what will you use that for? 15:11:38 crc: here, I am looking at the ngaImage file in the latest released Retro tarball 15:13:23 crc: the very first instruction is a jump to 0xFFFFFFFF but the image is not that big 15:13:52 corecode: Chip-8 interpreter. 15:14:04 I could do it with nested if-else but that's real ugly 15:14:33 I did it with jump tables but I had to nest my tables and that was also incredibly annoying 15:14:58 why do you match based on char 15:15:06 and via and? 15:15:15 Because 15:15:19 uh 15:15:42 Oh I match on char so you can enter underscores as the pattern I guess 15:16:10 Maybe I could do it with just the char to a number 15:16:12 whatever 15:16:20 __F_ -> 00F0 -> >NUMBER 15:16:20 What do you mean "via and" 15:17:29 well something like : match ( n pat -- f ) SWAP PICK AND = ; 15:17:37 00F0 would only match the number F0 so you need some kind of wildcard character 15:18:19 p 15:18:30 i don't know what these patterns are supposed to do 15:18:47 Oh right 15:19:09 I'm reading 16 bit hex words from a file, so 0000 to FFFF 15:20:01 http://ix.io/1yap ignore the numbers they're just line numbers 15:20:31 Zarutian: the base image contains a jump as a stub for later use. The interface layers will patch this as needed when the embedded images for each are built.
(See interfaces/barebones.forth for an example) 15:22:32 --- quit: tabemann (Quit: Leaving) 15:22:36 Additionally, some of the interfaces use the words from the image, with a processing loop written in C. (E.g., on iOS, where there's no console style interface. On iOS, the process loop copies tokens into the image memory and calls `interpret` for each.) 15:31:31 --- join: tabemann (~tabemann@24.196.100.126) joined #forth 15:33:40 crc: ah okay 15:36:09 --- join: crc_ (~crc@li782-252.members.linode.com) joined #forth 15:37:48 --- quit: crc (Read error: Connection reset by peer) 15:38:12 the base image has everything needed, except for an enforced input method; I tailor the input per-host. (in addition to the interactive listener and file as input, I've done a couple of block based and file editor based interfaces) 15:38:43 --- nick: crc_ -> crc 15:38:53 --- mode: ChanServ set +v crc 15:44:17 I'm torn over how much in the way of IO interfaces should hashforth have 15:44:51 currently it only has TYPE, KEY< and ACCEPT, and any code that is to be loaded which is not typed at the terminal has to be loaded into the image at assembly time 15:46:46 the thing is hashforth is a proof of concept of a portable RAM-based implementation, and while some systems might want to use an FFI others might use bigbanged registers 15:46:52 *bitbanged 15:48:29 --- join: smokeink (~smokeink@42-200-116-249.static.imsbiz.com) joined #forth 15:49:22 hello! Can win32forth run on win732bit ? I tried the latest version 61504 and the install fails 15:49:57 I prefer bigbanged registers. It sounds so... Final. 15:51:19 oh damn, just discovered that the antivirus is interfering with the installation, it thinks forth is a virus 15:52:49 smokeink: it's a mind virus 15:52:55 you'll never stop thinking about it 15:53:16 yes... I tried to forget it but I can't 15:53:16 WilhelmVonWeiner: forthing at the code organ, eh? 
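An editorial aside on the hex-pattern matching discussed above: the `__F_ -> 00F0` idea and the suggested `AND =` check combine naturally if each pattern is compiled to a (mask, value) pair, so that the runtime test is a single AND-and-compare. This Python sketch follows the pasted pattern syntax (one hex digit or `_` wildcard per nibble); the function names are illustrative.

```python
# Compile a pattern like "__F_" into (mask, value), then match 16-bit
# opcodes (e.g. Chip-8 instructions) with a single AND and compare -
# no nested if-else chains or nested jump tables.
def compile_pattern(pat):
    mask = value = 0
    for ch in pat:              # one nibble per character
        mask <<= 4
        value <<= 4
        if ch != "_":           # `_` leaves the nibble as a wildcard
            mask |= 0xF
            value |= int(ch, 16)
    return mask, value

def match(opcode, pat):
    mask, value = compile_pattern(pat)
    return opcode & mask == value

# match(0x80F4, "__F_") is True; match(0x1234, "__F_") is False
```

A dispatcher can then walk a flat list of (pattern, handler) pairs, which keeps the table in one level.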
15:55:30 --- quit: tabemann (Ping timeout: 250 seconds) 15:59:29 --- quit: dys (Ping timeout: 272 seconds) 16:02:44 the latest win32forth 61504 gave this when installing Error(-2): {D27CDB6B-AE6D-11CF-96B8-444553540000} Error Loading Type Library but I ignored it and reran Setup.exe which said forth system Rebuilt successfully 16:14:56 smokeink: if win32forth doesn't work there's also SP-Forth 16:15:39 win32forth seems to work now, but I'll check out SP-Forth 16:15:41 thanks 16:34:38 --- join: tabemann (~tabemann@h193.235.138.40.static.ip.windstream.net) joined #forth 16:40:28 how do you bigbang a register? do you unleash a rowhammer against it? 16:41:24 I considered a MOAB or a small nuke - just to be sure. 16:42:48 nuke it from orbit - it's the only way to be sure 16:56:49 --- join: TheCephalopod (~john@unaffiliated/john-cephalopoda/x-6407167) joined #forth 16:57:06 --- quit: john_cephalopoda (Disconnected by services) 16:57:08 --- nick: TheCephalopod -> john_cephalopoda 17:07:02 hey 17:08:56 Hey 17:09:25 well, I've got hashforth to be relatively usable, if IO-challeged 17:09:28 *challenged 17:10:24 to load new code though you have to type it in (or redirect or pipe it in), or you have to edit the image creation script to bake it into the image 17:12:16 Neat. 17:13:37 I created a new primitive, GUARANTEE, which, well, guarantees that a certain amount of space will be available in the user space, and allocates another block of user space of at least that size if that is not true 17:13:59 I haven't given into the temptation to directly expose ALLOCATE and FREE to the user 17:14:20 since on many embedded systems there may be no ALLOCATE or FREE 17:15:32 implementation isn't insane, but many mcu have damned little ram 17:17:28 PoppaVic: like whole 2KibiBytes like the ATMEGA328p? 17:17:54 you can say kilobytes here, it's okay 17:18:20 I say KB - and mean it. Yes, 2K is very, very, very tight 17:18:40 * tabemann personally finds the kibi/mibi/etc. 
terminology to be annoying 17:18:52 but it's still a ways away from the insanity that is the Atari VCS 2600 17:19:04 a full 256 bytes of RAM 17:20:37 tabemann: ya hadn't heard of KiqiCells then. Qi indicates four as in four values per number place 17:23:38 * tabemann just tried searching for kiqi mebi kiqi and found no mention of kiqi, so he thinks you're pulling his leg 17:30:40 what? my wikisalting was unsuccessful? damn 17:32:05 --- join: rdrop-exit (~markwilli@112.201.166.158) joined #forth 17:33:18 if it were successful, the wiki would be singing the praises of our lord and savior, the God-Emperor Donald Trump 17:35:04 who? 17:36:18 who who? 17:36:29 exactly 17:37:23 but seriously, he will be removed from future history books because serious scholars will think that somebody had sabotaged records 17:38:14 * PoppaVic sighs 17:42:05 just like that mad Chinese emperor who went on a state-internal rampage 17:43:07 that section of history just says that the Chinese didn't interact with the rest of the world because they decided to isolate themselves. 17:43:38 sadly, all records hinting at it were burned by the Maoist revolution iirc. 17:46:35 so you think the donald will be subjected to damnatio memoriae 17:49:35 Hmmm, the Floating Point stack... 17:50:01 obviously, the pointer floats 17:50:37 okay, gotta go - bbl 17:50:45 I am not quite sure what to think about that stack. 17:51:16 see you tabemann 17:53:02 tabemann: ALLOC and FREE ? I thought you were designing an ISA implementable as either HW or SW. 17:53:51 Why isn't the regular stack used for FP operations? Is it because regular floats are 32 bit and they didn't want that to create issues with the parameter stack, which can have arbitrary size? 17:54:34 we don't want no stinkin' floats on our integers 17:55:50 --- quit: tabemann (Ping timeout: 268 seconds) 17:56:03 Also what's with those 2x sized integers? They are weird.
17:56:45 just overweight - it's genetic 17:57:28 John: They come in handy on 16 bit targets 17:57:44 doubles are awesome 17:58:32 You can safely ignore them on a 64 bit target 17:59:00 On 32 bit, it depends 18:01:58 Imagine you need the 32 bit product of two 16-bit integers on a 16 bit target, you would represent it as a double 18:18:59 --- quit: PSnacks (Ping timeout: 250 seconds) 18:21:04 --- join: tabemann (~tabemann@172-13-49-137.lightspeed.milwwi.sbcglobal.net) joined #forth 18:38:32 --- quit: rdrop-exit (Quit: Lost terminal) 18:44:54 --- join: PSnacks (~PSnacks@p4FEAF3A6.dip0.t-ipconnect.de) joined #forth 18:51:18 --- quit: PSnacks (Read error: Connection reset by peer) 18:52:29 --- join: PSnacks (~PSnacks@p200300CB274497FC18381BC7823A490C.dip0.t-ipconnect.de) joined #forth 20:05:37 --- quit: Kumool (Quit: EXIT) 20:06:19 floating point is an abomination. don't let them contaminate your stack. 20:08:55 Any idea how to make the locale work for Sp-forth? The help is in russian. On windows I tried to run it using Applocale (set to Russian) but the output is garbled , on linux also 20:09:24 REQUIRE HELP lib/ext/help.f HELP á¯ࠢª¥ ­¥ ­ ©¤¥­® 20:09:52 guys guys guys!!! Look what I found!!! Byte magazine's 1980 FORTH issue online and complete! https://archive.org/details/byte-magazine-1980-08 20:10:36 proteusguy: I had the hardcopy ;-) 20:11:10 PoppaVic, so did I and at least a metric ton of issues which are, alas, no more. Nice to find the stuff available online. 20:11:30 Yeah. I don'ated them to the local lib - they prolly burned them 20:13:00 proteusguy: at the time, it was an interesting issue - but I found nothing exciting when I looked it over last year. 20:13:55 --- join: gravicappa (~gravicapp@h109-187-2-216.dyn.bashtel.ru) joined #forth 20:14:22 PoppaVic, yeah it's more of a historical perspective. Mostly I like it for the old ads and seeing what the context was in the days ago. How far we've come in some areas yet how little has been achieved in others. 
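An editorial aside on the double-cell example above (the 32-bit product of two 16-bit cells on a 16-bit target): a double is just two cells, low and high, as in UM* ( u1 u2 -- ud ). A Python sketch, unsigned for simplicity:

```python
# On a 16-bit target, the full 32-bit product of two cells is kept
# as a double: a low cell and a high cell, UM*-style.
CELL_BITS = 16
CELL_MASK = (1 << CELL_BITS) - 1

def um_star(u1, u2):
    """UM* ( u1 u2 -- ud ): return the product as (low cell, high cell)."""
    p = (u1 & CELL_MASK) * (u2 & CELL_MASK)
    return p & CELL_MASK, (p >> CELL_BITS) & CELL_MASK

lo, hi = um_star(0xFFFF, 0xFFFF)   # product 0xFFFE0001 split across two cells
```

On a 64-bit target the whole product fits in one cell, which is why doubles can be safely ignored there, as noted above.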
20:15:49 well, we sure love disposable. 20:22:14 I still think all that S-100 stuff was pretty damn amazing. Just too expensive for me in those days. All I could do is drool over their specs in BYTE magazine. 20:23:04 yes, far too expensive; interesting idea - and it went to hell far too fast, what-with all the variants on "standard" 20:30:52 Ultimately the bus just couldn't handle the speed/bandwidth and things needed to be smaller and closer together. But it was great while it lasted. 20:43:19 well, there have been smaller cards/racks - and PC's still enjoy a 'rack' of sorts.. Mainframes/offsite stuff still uses them. 20:43:54 I mostly expected something to keep around for the hobby/lab side.. I guess we got arduino/pi/beagles instead 21:03:20 Now I think it's gonna be the fpga build your own device in software as the next big hobby thing. 21:04:32 meh. Not for me. 21:05:22 haha why is that? 21:05:48 Check this out: https://www.crowdsupply.com/sutajio-kosagi/fomu 21:05:56 mostly, because 90% of the tools are doze-centric; also they are overpriced; also I can't afford half the tools. 21:06:54 Then look again... completely open source tool chain and ISA. That's where we're heading. Yeah the windows-centric commercial tools have made it impossible for me to make progress in this space but that's all about to end. 21:07:05 FORTH-ish CPUs in our own FPGAs for all! 21:08:17 Another similar concept that can now be programmed 100% with open source tooling: http://fleasystems.com/fleaFPGA_Ohm.html 21:09:25 We no longer have to be subjugated by Intel VLSI register based CPU monopoly. Can't make our own native stack processors straight outtta Koopman! :-) Free returns! 21:11:05 --- quit: smokeink (Remote host closed the connection) 21:11:25 --- join: smokeink (~smokeink@118.131.144.142) joined #forth 21:21:49 s/Can't/Can/r sorry about that confusion! We CAN! 21:22:46 well, again: the board isn't "open" - dunno about part or prices. Sounds ambitious and useful. 
21:22:50 --- join: nighty- (~nighty@b157153.ppp.asahi-net.or.jp) joined #forth 21:24:30 "amiga workbench" 21:25:44 * proteusguy finds that impressive although I'd prefer Apple ][. 21:35:32 back 21:36:31 tabemann, welcome. how's progress? 21:36:46 I removed the user space from the core 21:36:54 well, except for in the loader 21:37:10 HERE is now a word implemented in VM assembly 21:53:55 --- quit: dddddd (Remote host closed the connection) 22:23:50 --- quit: PoppaVic (Ping timeout: 264 seconds) 22:36:55 --- join: PoppaVic (~PoppaVic@unaffiliated/poppavic) joined #forth 22:42:33 proteusguy: whoa, I want to write a Forth for fomu now 22:43:05 hm, why did they go with Python 22:43:31 128 KB of RAM is a lot for Forth 22:51:44 siraben, because python is a very popular and easy to use language. Naturally in such environments forth will be superior. 22:52:34 But I don't want to write a forth for formu. I want to write a forth cpu in the fpga in the fomu! That way my code mental model is 100% aligned with the execution model on my device. 22:52:40 I'm interested in eventually seeing how Python is implemented 22:53:00 proteusguy: I like that PDF you linked 22:53:04 I think the Adafruit folk did this mini-python. 22:54:28 Bottom of page 8: FORTH is a difficult language: it easily beats APL as a "write-only language"; you can write a program in the language, but you can't easily read what you've written. 22:54:39 hard to argue with that! :-P 22:55:10 Here's two more issues that I think you will like in particular, siraben : https://archive.org/details/byte-magazine-1979-08 https://archive.org/details/byte-magazine-1981-08 22:55:44 Ah, so that's where those images come from! 22:55:52 2001 was a good movie 22:55:58 Is BYTE still around? 22:56:00 siraben, anyway - I got 10 fomu's on order. Will see when they finally arrive. 22:56:27 BYTE's long gone, alas. I learned how to program and how computers work from BYTE magazine and Computer Language magazine primarily. 
22:56:38 The PDFs are absolutely gigantic, ~ 250 MB 22:56:44 I see 22:57:05 It was a giant magazine. Super thick. Like a big catalog of geek heaven! 22:57:39 --- nick: proteusguy -> proteusdude 22:58:05 --- join: proteusguy (~yaaic@2001:44c8:4517:b6bd:1:0:2d8f:33e1) joined #forth 22:58:05 --- mode: ChanServ set +v proteusguy 22:58:40 Just looking at the ads is great fun. You can see forth was definitely something that had some traction back then. 22:58:51 Lisp is amazing 22:58:53 Forth as well 22:59:21 Feeling strangely nostalgic even though I wasn't born then :P 23:01:01 What's the equivalent of BYTE these days? 23:01:25 doesn't exist man. 23:01:48 Every computer looks totally different 23:01:52 Now they're all the same 23:01:56 one of the things lost in time by the fast turn around news cycles enabled by the internet 23:02:31 Wow I could read this for hours 23:02:41 enjoy! ;-) 23:04:28 --- join: dys (~dys@tmo-112-150.customers.d1-online.com) joined #forth 23:05:24 Is there an actor model implementation in Forth? 23:13:03 proteusguy: you mentioned a paper or something 23:40:33 There is no actor model implementation in forth that I'm aware of. Mine will likely be the first. 23:40:52 I will email you the actor papers I told you about later. 23:59:59 --- log: ended forth/19.01.12