00:00:00 --- log: started forth/19.02.02
01:32:23 --- join: nighty- (~nighty@b157153.ppp.asahi-net.or.jp) joined #forth
02:03:11 --- quit: ashirase (Ping timeout: 246 seconds)
02:06:14 --- quit: pierpal (Quit: Poof)
02:06:35 --- join: pierpal (~pierpal@host1-142-dynamic.116-80-r.retail.telecomitalia.it) joined #forth
02:06:42 --- join: proteus-guy (~proteusgu@2403:6200:88a6:329f:bc93:c99:b7d6:5ad3) joined #forth
02:34:44 --- join: ashirase (~ashirase@modemcable098.166-22-96.mc.videotron.ca) joined #forth
03:35:53 --- join: dddddd (~dddddd@unaffiliated/dddddd) joined #forth
03:37:26 https://live.fosdem.org/watch/k4601
03:37:37 foerth for gameyboy live talk
03:41:37 <`presiden> nintendo gameboy?
03:42:18 <`presiden> oh, it's live?
03:42:28 yeah
04:05:45 It is _so_ simple to build an assembler with forth.
04:11:06 rain1: gbforth?
04:11:13 it has ended now
04:11:18 i think it'll be on youtube soon
04:11:30 https://github.com/ams-hackers/gbforth
04:11:30 --- quit: dddddd (Ping timeout: 250 seconds)
04:11:52 pointfree: I recently got the m570 trackball, it's 5 button
04:12:13 the two spare buttons are mapped to key-up and key-down I think
04:12:31 yeah gbforth is real cool and I'd play with it if I had the time
04:12:34 so my cpu rewrite is slower than my first try
04:12:56 --- join: dddddd (~dddddd@unaffiliated/dddddd) joined #forth
04:13:02 rain1: was the talk popular?
04:16:25 I liked it
05:36:35 --- quit: dave0 (Quit: dave's not here)
06:10:41 ok, i like my new cpu design better
06:18:49 --- quit: nighty- (Quit: Disappears in a puff of smoke)
06:21:56 okay, @ and ! now post-increment the address
06:58:00 --- quit: rdrop-exit (Quit: Lost terminal)
07:49:50 corecode: then call them @++ and !++ ?
08:30:54 --- join: probablymoony (moony@hellomouse/dev/moony) joined #forth
08:31:02 --- quit: moony (Quit: Bye!)
08:34:19 yea, something like that
09:10:50 --- quit: Zarutian (Ping timeout: 245 seconds)
09:14:35 --- join: Zarutian (~zarutian@173-133-17-89.fiber.hringdu.is) joined #forth
09:15:23 --- join: Zarutian_2 (~zarutian@173-133-17-89.fiber.hringdu.is) joined #forth
09:17:32 --- quit: Zarutian_2 (Client Quit)
09:27:30 --- join: darithorn (~darithorn@75.174.231.56) joined #forth
10:16:55 hey guys
10:18:27 * tabemann is wondering why with his bounded channels it's not actually filling the channels before switching tasks
10:28:12 hey tabemann
10:28:22 what's a bounded channel and is the code cool
10:29:56 a bounded channel is a fixed-size queue that can be written to and read from; when it is written to while full, the writing task waits until another task reads something from it; when it is read from while empty, the reading task waits until another task writes something to it
10:30:22 ah okay, got it
10:30:32 it's a simple concurrency construct allowing communication between two or more tasks in constant memory
10:30:33 I think I used them as "mailboxes" in PunyForth for the ESP8266
10:30:43 bounded channels
10:32:00 when you say "mailbox" you remind me of MVars in Haskell, which are like single-cell bounded channels
10:36:18 https://github.com/zeroflag/punyforth
10:40:03 were you just using punyforth, or is that your forth?
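[editor's note: the bounded channel tabemann describes above — a fixed-size queue where send blocks when full and receive blocks when empty, built on condition variables as he mentions later in the log — can be sketched in Python. This is an illustrative analog, not hashforth's implementation; the class and method names are invented.]

```python
import threading
from collections import deque

class BoundedChannel:
    """Fixed-size queue: send blocks while full, recv blocks while empty."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = deque()
        self.cond = threading.Condition()  # one condition variable suffices here

    def send(self, item):
        with self.cond:
            while len(self.items) >= self.capacity:
                self.cond.wait()           # full: wait for a reader to make room
            self.items.append(item)
            self.cond.notify_all()         # wake any waiting readers

    def recv(self):
        with self.cond:
            while not self.items:
                self.cond.wait()           # empty: wait for a writer
            item = self.items.popleft()
            self.cond.notify_all()         # wake any waiting writers
            return item

ch = BoundedChannel(2)
ch.send("a")
ch.send("b")     # a third send here would block until someone recv'd
```

Because the capacity is fixed, a producer that outruns the consumer is throttled rather than growing memory without bound — the property tabemann cites later as the advantage over unbounded channels.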
10:44:33 using it
10:45:00 there are enough Forths already, I shan't be writing my own *forth* any time soon
10:47:32 * tabemann finds writing forths to be too much fun in and of itself
10:52:45 --- quit: dddddd (Ping timeout: 244 seconds)
10:52:53 --- join: dddddd_ (~dddddd@unaffiliated/dddddd) joined #forth
10:54:28 --- nick: dddddd_ -> dddddd
10:56:37 --- quit: pierpal (Ping timeout: 268 seconds)
10:58:25 yea now that i have a second working cpu
10:58:31 it's the same size
10:58:45 --- quit: gravicappa (Ping timeout: 245 seconds)
10:58:45 i guess you can't make it smaller
11:04:23 * tabemann figured out how to go and increase the cells written/read to pauses ratio - turn off the main task
11:05:37 --- quit: zy]x[yz (Quit: leaving)
11:06:33 (because the main task is just spinning waiting for IO)
11:07:30 need a wait function
11:07:41 --- join: zy]x[yz (~corey@unaffiliated/cmtptr) joined #forth
11:08:04 there is a wait function
11:08:19 but it only puts things to sleep when all the tasks are waiting
11:08:31 i mean for a task to yield
11:09:07 well it doesn't spin per se - it checks for input, pauses, checks for input, pauses, ad infinitum
11:09:52 it does non-blocking reads and if it gets nothing it relinquishes control of the CPU to other tasks until the next time it gains control in the round-robin setup
11:12:01 I do like how tasks can be arbitrarily turned on and off without actually killing them in hashforth
11:12:43 turning off the main task is merely done with MAIN-TASK DEACTIVATE-TASK... and it can be turned back on again with MAIN-TASK ACTIVATE-TASK
11:45:46 --- quit: cantstanya (Ping timeout: 256 seconds)
11:49:49 --- join: cantstanya (~chatting@gateway/tor-sasl/cantstanya) joined #forth
11:58:17 --- quit: darithorn (Remote host closed the connection)
11:58:53 --- join: darithorn (~darithorn@75.174.231.56) joined #forth
12:52:00 tabemann: hmm...
sounds like your hashforth might be used on mid-range MCUs for robotics, particularly if subsumption architecture is used
12:52:23 subsumption architecture?
12:54:05 a hierarchy of controlling tasks. Higher priority ones can take control from lower priority ones.
12:54:52 currently it has no sense of priority - purely round robin with activating and deactivating tasks (note that two deactivations require two activations to activate a task, and vice versa)
12:55:04 it's also a cooperative multitasking scheme
12:58:24 not priority in the sense of scheduling priority but in robot control priority
12:59:59 if your control flow words also invoke task switching then it's still cooperative multitasking but transparently so.
13:01:36 currently task switching is simply PAUSE combined with ACTIVATE-TASK and DEACTIVATE-TASK for turning on and off tasks (note that a task calling DEACTIVATE-TASK on itself involves an implicit PAUSE)
13:04:25 heck you could do preemptiveness by having a timer interrupt invoke PAUSE and have PAUSE reset the timer.
13:07:05 cooperative multitasking is much easier to reason about
13:07:24 with preemptive multitasking, you need to make sure that you're not stepping on toes
13:10:09 that's only if you control when PAUSE occurs
13:10:19 e.g. in hashforth PAUSE may occur any time one does IO
13:10:37 unless one specifically turns off IO with PAUSE
13:10:56 where then any IO may block the entire system for an arbitrary amount of time
13:11:16 corecode: the issue is multithreading. Trampling over others' state or other plan interference is what is hard to figure out
13:11:24 or one uses IO primitive words that don't know about nonblocking IO and sleeping and whatnot
13:11:26 Zarutian: yes
13:11:44 Zarutian: if you yield deliberately, it is much easier to reason about interleavings
13:12:23 corecode: nope, because a word you invoked could yield even though you did not know it.
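[editor's note: the scheme tabemann describes — round-robin cooperative multitasking where PAUSE yields the CPU, and where two DEACTIVATE-TASKs require two ACTIVATE-TASKs to undo — can be sketched with Python generators. An illustrative analog only; names and structure are invented, not hashforth's code.]

```python
# Cooperative round-robin scheduler: tasks are generators, `yield` plays
# the role of PAUSE, and activation is a signed count so deactivations
# and activations must balance out (as described for hashforth above).

class Task:
    def __init__(self, gen):
        self.gen = gen
        self.active = 1              # runnable only while count > 0

class Scheduler:
    def __init__(self):
        self.tasks = []

    def spawn(self, gen):
        task = Task(gen)
        self.tasks.append(task)
        return task

    def activate(self, task):        # ACTIVATE-TASK analog
        task.active += 1

    def deactivate(self, task):      # DEACTIVATE-TASK analog
        task.active -= 1

    def run(self):
        while any(t.active > 0 for t in self.tasks):
            for task in list(self.tasks):
                if task.active > 0:
                    try:
                        next(task.gen)        # run the task until it PAUSEs
                    except StopIteration:
                        self.tasks.remove(task)

log = []
def worker(name, n):
    for i in range(n):
        log.append((name, i))
        yield                        # PAUSE: hand the CPU to the next task

s = Scheduler()
s.spawn(worker("a", 2))
s.spawn(worker("b", 2))
s.run()
# round-robin interleaving: a0, b0, a1, b1
```

Note there is no preemption anywhere: a task that never yields would hog the scheduler, which is exactly the trade-off debated below.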
13:12:48 well with that reasoning it could just overwrite all your RAM
13:13:06 or like my blocking channels - both sending and receiving on a blocking channel may call PAUSE unless one specifically uses "try" words
13:13:07 if you can't trust the word's contract, meh
13:13:48 but overwriting all your RAM is obviously erroneous behavior
13:14:18 simply calling PAUSE buried in some body of code because your code called IO which blocked is another story
13:14:29 PAUSEing somewhere unexpectedly and in a way that was accidentally correct is harder to find as a bug
13:14:50 so i agree that a yield happening without you knowing/thinking about it makes it more difficult
13:14:56 that's exactly my point
13:15:09 preemption makes it way worse
13:15:11 overwriting all your RAM is something that usually results in an immediate segfault, and those are relatively easy to track down
13:15:25 corecode: Well, all interrupts make it way worse, often.
13:15:31 yep
13:15:35 the difference with preemption is that you've planned for that to happen
13:15:46 tabemann: planned how?
13:16:09 by using locks, condition variables, STM, and the like
13:16:31 how do they work when some IO word yields
13:16:58 or event loops whose queues are lock free (if you have a CompareAndStore primitive)
13:17:19 i think you're agreeing with me
13:17:49 in hashforth it must be assumed that any IO read/write word other than READ or WRITE will call PAUSE
13:18:56 and that's because READ and WRITE go directly to the POSIX system calls of those names, without taking blocking into account
13:19:28 whereas the IO read/write words built on top of that may PAUSE and may result in poll() being invoked
13:20:09 i apologize for not getting excited about your forths - they seem quite comprehensive
13:20:20 i'm mostly interested in microcontroller stuff tho
13:20:33 I'm not expecting you to get excited about my Forth
13:20:34 bare metal, 16KB flash, that kind of deal
13:20:54 hashforth in particular is a pretty boring TTC POSIX forth
13:20:57 good, or you'd be unhappy :)
13:21:51 in the future I might change it to be a JITed Forth, but right now it's meant to be a model for #forth and not a particularly fast or optimized Forth
13:21:52 corecode: c'mon, at least 32KB flash and 2KB WRAM. (If your BOM allows, a 128 KB extra SPI-accessible SRAM chip)
13:22:36 a common chip i use has 16KB flash
13:22:52 the attiny13a even has just 1KB
13:22:53 tabemann: is this a forth with code routines in x86 or ARM assembly? or is it more like a semi-bytecoded VM?
13:23:11 would be nice if i could cross-compile forth for it
13:23:29 corecode: attiny13a? an AVR core? In an 8 pin soic pdip package?
13:23:34 yes
13:23:46 not a fan, but good enough for blinking LEDs
13:23:53 it's written in C and it's TTC, so the code is all tokens and inline parameter words
13:24:03 What's the best forth to use for working through the exercises in 'Starting FORTH'?
13:24:24 corecode: yeah or some timings if your main mcu is overloaded but not much else I think.
13:24:29 gforth isn't cutting it for the editor and 'under the hood' sections.
13:25:08 dunno, haven't touched gforth in ages
13:25:13 Croran: try eForth. You could try to port its primitives to assembly of the ISA of your choice.
13:25:24 Croran: did that with DCPU16 for kicks.
13:25:54 lol what? I'm Starting FORTH not Porting FORTH
13:25:58 tabemann: so tokens, basically a switch or jump/function table in the C runtime?
13:26:41 Zarutian: yeah
13:27:00 Croran: well, on what os and isa are you trying this out on? there might be a binary of eForth for it.
13:27:05 Linux 64-bit
13:27:23 Croran: 64 bit what? ARM? x86? SPARC?
13:27:28 x86
13:27:38 4.15.0-44-lowlatency #47~16.04.1-Ubuntu SMP PREEMPT Mon Jan 14 21:29:51 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
13:28:24 bbl
13:28:54 tabemann: so, got a description of the tokens? including their numeric values and what they do?
13:29:18 I should port eforth to my vm one of these days...
13:29:54 tabemann: thinking about writing a small emulator in the forth I am using to be able to run hashforth code
13:30:23 Zarutian, they're in the code that is at http://github.com/tabemann/hashforth
13:31:27 note that to build the image used you have to use attoforth, which is at http://github.com/tabemann/attoforth , but apparently that doesn't work properly under FreeBSD or OpenBSD (neither of which I have)
13:32:07 hopefully at some point in the future it will be self-hosting
14:00:27 back
14:24:37 --- quit: darithorn (Quit: Leaving)
14:26:38 --- join: darithorn (~darithorn@75.174.231.56) joined #forth
15:53:48 hmm
16:18:46 --- join: dave0 (~dave0@193.060.dsl.syd.iprimus.net.au) joined #forth
16:19:51 hi
16:24:05 h'lo
16:24:11 hi Zarutian
16:32:03 --- join: pierpal (~pierpal@host132-240-dynamic.52-79-r.retail.telecomitalia.it) joined #forth
16:35:15 --- join: rdrop-exit (~markwilli@112.201.168.172) joined #forth
16:59:18 --- quit: john_cephalopoda (Ping timeout: 240 seconds)
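[editor's note: the token-threaded ("TTC") scheme discussed above — numeric tokens dispatched through a switch or function table, with inline parameter words following some tokens — can be sketched as a tiny inner interpreter. The token set and numbering here are invented for illustration; hashforth's actual tokens are in its source as tabemann says.]

```python
# A minimal token-threaded inner interpreter: code is a flat list of
# numeric tokens, each selecting a primitive in a big dispatch switch.

LIT, ADD, DUP, MUL, HALT = range(5)   # made-up token numbering

def run(code):
    stack = []
    ip = 0                            # instruction pointer into the token list
    while True:
        token = code[ip]
        ip += 1
        if token == LIT:              # inline parameter follows the token
            stack.append(code[ip])
            ip += 1
        elif token == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif token == DUP:
            stack.append(stack[-1])
        elif token == MUL:
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif token == HALT:
            return stack

# 3 4 + dup *  =>  49
program = [LIT, 3, LIT, 4, ADD, DUP, MUL, HALT]
```

In C the `if`/`elif` chain would typically be a `switch` or an array of function pointers indexed by the token, which is exactly Zarutian's "switch or jump/function table in the C runtime".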
17:01:07 --- join: john_cephalopoda (~john@unaffiliated/john-cephalopoda/x-6407167) joined #forth
17:32:46 hey guys
17:36:21 hi tabemann
17:56:50 hello forthniter
18:02:36 Greetings Forthlings
18:12:21 * tabemann should probably write some documentation for hashforth
18:12:33 however, it does now have more comments than attoforth has
18:13:37 --- quit: darithorn (Quit: Leaving)
18:14:08 it does now have condition variables and bounded channels - I previously thought I wouldn't need condition variables, but they're very useful for implementing bounded channels
18:18:40 Is "bounded channel" a new buzzword for a synchronization FIFO?
18:19:02 it's a synchronization FIFO with a finite maximum capacity
18:19:28 i.e. it's one that can be implemented in constant memory space
18:19:55 All FIFOs have a finite capacity
18:20:31 but an unbounded one is limited by how much memory or how much addressing space one has
18:21:00 whereas a bounded channel, if one creates one with 8 slots, will block upon sending if it's full
18:21:09 of 8 items
18:22:14 Ok, so it's the software implementation of a normal hardware FIFO.
18:23:26 gee, it's so nice to know the simple-shit we did for years has neat-o names! ;-)
18:23:57 I implemented both bounded and unbounded channels for attoforth, and found that bounded channels have much nicer characteristics, e.g. if the producer side is sending faster than the consumer side is receiving, it won't explode your memory usage
18:24:26 PoppaVic: sorry, but I'm not a hardware guy
18:24:43 well, yes: it's a throttle.. When you ain't shoveling fast enough for MY next shovel, I get to go take a leak ;-)
18:25:24 tabemann: I'm only a "hw-guy" in the sense I can solder, read a VM, etc.
18:25:56 I can't keep up with all the new buzzwords, there's a tendency to rename the wheel to new-fangled thingamabob as if the wheel were a new thing.
18:25:56 ..I would love to get my hands on a scope and remember wtf I done-forgotted.
18:26:24 rdrop-exit: well, I find it handy to nod sagely and go get another beer ;-)
18:27:14 rdrop-exit: where I come from, "channels" by default are unbounded, so it's necessary to qualify them with "bounded" if one means the finite-capacity kind
18:27:17 (sometimes the departures clue to speaker, sometimes the beer clues me - everyone is happy ;-)
18:28:16 tabemann: sounds like you work with people that think everyone has infinite resources, all the time, everywhere, on everything.
18:28:36 e.g. a Haskell "Chan" or "TChan" is unbounded, whereas a "TBChan" is bounded
18:29:18 PoppaVic: I'm used to people coding with the fiction that the machine has unlimited memory resources
18:29:47 yeah. That's sorta' handy at home, but it's never true with N people and projects involved.
18:30:11 --- join: darithorn (~darithorn@75.174.231.56) joined #forth
18:30:53 of course that kind of thing is why we get web browsers which routinely eat 4 GB of RAM without actually doing much of anything
18:31:06 I just looked it up on Wikipedia; apparently the channel term originated with Tony Hoare's CSP paper, of which I have a bound printout on my shelf. My memory didn't make the connection with it.
18:31:27 --- quit: darithorn (Client Quit)
18:33:09 I would never consider using an "unbounded channel". I guess that's why I didn't make the connection.
18:33:45 It sounds like another concept from the "all resources are infinite and free" school of engineering.
18:33:56 yes it is
18:34:55 I didn't bother to even implement unbounded channels for hashforth, because I had too much trouble getting them to work properly from a synchronization standpoint when working on them for attoforth
18:35:44 whereas bounded channels have very simple and nice synchronization properties - if you try to receive on an empty channel you block until someone sends on it, and if you try to send on a full channel you block until someone receives on it
18:36:20 --- join: darithorn (~darithorn@gateway/tor-sasl/darithorn) joined #forth
18:36:53 Cool
18:38:22 with unbounded channels you have to carefully balance the rates of consumption and production so production is never faster, or else
18:40:22 I was thinking about it the other day; I think this is why I have a hard time understanding some of the designs coming out of "modern" programmers.
18:42:02 I come from a world where garbage collecting and allocating everything on the heap is by far the norm, and where elaborate data structures are used for everything
18:42:19 They don't seem to consider engineering tradeoffs until it's too late.
18:43:26 a lot of this is because too many programmers can't manage memory correctly manually, so they need the garbage collector to produce code that is safe and does not leak memory
18:44:01 whereas with hashforth I'm trying to allocate as much in user space as possible, with being able to erase it all via MARKER when one is done in mind
18:44:44 There's also this notion of ownership as in Rust
18:44:55 Memory safety without garbage collection
18:45:14 (the only things I've done with ALLOCATE/RESIZE/FREE in hashforth are buffers, because they need to be resizable in a way that ALLOT doesn't really allow)
18:46:16 Rust is essentially trying to be a latter-day C++
18:46:55 a better C++ than C++
18:48:20 as high level languages go I like Haskell, and yes, it is essentially the opposite of Forth in many ways
18:49:44 one thing I did do is make it so that one can easily clean up tasks, condition variables, and bounded channels via MARKER
18:50:56 one just needs to make sure that any such tasks are deactivated at that time, that no tasks are using or waiting on such condition variables, and likewise that no tasks are using or waiting on such bounded channels
18:53:26 There should be no need for ALLOCATE/RESIZE/FREE in the implementation of the Forth system itself; they're ANS creations that should be totally optional extensions at best.
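[editor's note: the MARKER-style cleanup discussed above works because Forth dictionary allocation is a bump allocator — ALLOT just advances HERE, and a marker remembers the old HERE so everything allotted after it can be discarded at once. A Python analog, with Forth-inspired but invented names:]

```python
# Bump allocation with MARKER-style rollback: no free lists, no per-object
# bookkeeping - releasing a marker forgets everything allotted after it.

class Dictionary:
    def __init__(self, size):
        self.mem = bytearray(size)
        self.here = 0                # next free address ("HERE")

    def allot(self, n):
        addr = self.here
        self.here += n               # just bump the pointer
        return addr

    def marker(self):
        return self.here             # capture the current dictionary pointer

    def release(self, mark):
        self.here = mark             # everything allotted since is reclaimed

d = Dictionary(1024)
a = d.allot(16)                      # survives the rollback below
mark = d.marker()
b = d.allot(100)
d.release(mark)                      # b's space is reclaimed wholesale
```

This is why tabemann's caveat matters: rollback is purely positional, so any task or channel living in the released region must already be quiescent, or live references will point into reclaimed memory.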
18:56:12 yes, I am treating them as optional - the only place I'm actually using them is when loading forth code into memory from file before executing it, so I can load an arbitrary amount of code into memory without placing any restrictions on it, and so I can load code from within other bodies of code that have been loaded, so they don't interfere with one another as would be the case were there a single buffer allocated for loading code
18:57:18 also another reason is that if the code loaded compiles code (as it most likely would), if I ALLOTed the buffer for executing the code, I would not be able to un-ALLOT it afterwards
18:57:48 whereas I can FREE memory for code that has been allocated without any issues
19:05:12 The important thing is that implementing the VM spec does not require dynamic memory allocation.
19:05:43 it's an optional service, not a VM opcode
19:05:57 That's not what I mean.
19:06:13 one can implement the VM without it
19:06:42 I'm relieved :)
19:07:09 while the VM as I've implemented it uses malloc() internally, one can easily implement a VM that uses a fixed memory layout that implements the same VM opcode set
19:07:26 Cool
19:10:54 I imagine that many smaller systems would use a fixed memory layout rather than a heap
19:12:34 Personally, were I to implement your VM, I would use a fixed memory layout regardless of platform.
19:14:05 e.g. if I implemented your VM in C, I would set any fixed limits through the C compiler command line.
19:14:51 The few times I've ever used malloc() have been for text processing applications.
19:15:42 I allocate four blocks of memory - the user space, the main data stack, the main return stack, and the image-contained source space (so it's not put in the user space)
19:16:19 tasks beyond the initial task have their stacks allocated in the user space
19:17:27 actually, I also allocate blocks of memory for storing word headers and service headers
19:17:46 There's no concept of "user space" in Forth.
19:18:35 ... or is that an ANS standard construct?
19:20:26 Though there are USER variables.
19:22:34 I mean the space ALLOT points into
19:22:45 and where code gets compiled into
19:22:56 i.e. what is written to by ,
19:22:57 The dictionary?
19:23:25 I'm keeping my word headers separate from the space code gets compiled into
19:23:45 Unless you're using hardware stacks, everything is part of the same contiguous address space.
19:24:29 well yes, all of these exist in the same addressing space, but they are not contiguous
19:24:32 i.e. one memory map for the whole thing.
19:25:14 Why not?
19:25:39 because I am not currently allocating them all with one big malloc() or mmap()
19:25:41 What's in the holes?
19:25:53 --- quit: darithorn (Quit: Leaving)
19:26:11 Right, but you said implementing the VM does not assume it.
19:26:20 yes
19:26:39 So why are there holes?
19:27:08 because one cannot assume that malloc() will allocate each of them contiguously
19:27:42 What does malloc() have to do with it?
19:28:00 because that's how I'm allocating memory; I could use mmap() instead of course
19:28:19 I need to put my daughter to bed though - I'll be back on later
19:28:51 You are purposely creating holes in the VM memory map, I don't understand why.
19:31:51 If I implement your VM spec with 16MB of RAM, I need to be able to use any address within that 16MB contiguous memory to implement a Forth.
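[editor's note: the fixed, hole-free layout rdrop-exit argues for can be sketched by carving all of the regions tabemann lists out of one contiguous block at fixed offsets, instead of handing each to a separate malloc(). Region names and sizes here are invented for illustration.]

```python
# One contiguous address space for the whole VM: every region is just an
# offset into the same block, so there are no holes and any VM address
# maps directly into `ram`. Sizes are arbitrary illustrative choices.

RAM_SIZE    = 1 << 16        # one flat 64KB address space
DSTACK_SIZE = 256            # main data stack
RSTACK_SIZE = 256            # main return stack

ram = bytearray(RAM_SIZE)

DSTACK_BASE = RAM_SIZE - DSTACK_SIZE      # data stack at the top of RAM
RSTACK_BASE = DSTACK_BASE - RSTACK_SIZE   # return stack just below it
USER_BASE   = 0                           # user space / dictionary grows up

# The dictionary growing up must never collide with the stacks above it.
assert USER_BASE < RSTACK_BASE < DSTACK_BASE <= RAM_SIZE
```

With separate malloc() calls per region, by contrast, nothing guarantees the blocks are adjacent — which is exactly the source of the "holes" being debated.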
19:49:17 Catch you all later, keep on Forthin'
19:49:35 --- quit: rdrop-exit (Quit: Lost terminal)
20:09:03 /quit
20:09:06 oops
20:09:08 haha
20:26:13 --- join: gravicappa (~gravicapp@h77-94-116-79.dyn.bashtel.ru) joined #forth
21:13:23 --- join: rdrop-exit (~markwilli@112.201.168.172) joined #forth
21:15:02 --- join: nighty- (~nighty@b157153.ppp.asahi-net.or.jp) joined #forth
21:49:19 If I had a bunch of values on the stack and I wanted to put them into a variable using 'create', how would that be done?
21:49:55 or do i have to use ! and a loop?
21:58:59 --- quit: gravicappa (Ping timeout: 272 seconds)
22:07:14 You'd probably want to avoid the situation to begin with. But to answer the question, if the variables already exist you'd use ! for each one.
22:07:56 or, you'd CREATE a name and comma them all into place
22:09:32 Croran: Is there a reason you're piling the values up before storing them?
22:10:20 rdrop-exit: no i guess not. I could use ! to store them as i generate them.
22:11:35 If the variables don't already exist, PoppaVic's solution is the correct one: use CREATE, then generate and comma each value one by one
22:12:46 create settings 126 , 2389 , 992 , etc...
22:13:27 You know? I've been looking forever, and I'll be goddamned if I can find a counted list of forth words - the count being the uses.. I swear to god I did this way back on FIG or F83, (not F-PC, iirc).. Damn.. A whole fucking interweb and no one else has ever done and posted it.
22:13:47 rdrop-exit: he might even 0 , first - and then come back and store the count.
22:14:28 There have been lists in FD and/or FORML articles, and the Stack Machine book
22:15:02 thanks
22:15:14 rdrop-exit: hm, Maybe it's in FD.. I remember modding the header to add a count field and then rebuilding and listing the results.
22:15:15 np
22:16:12 I think that's a C.H. Ting article from FD, based on earlier work by somebody else in another FD or FORML article
22:16:29 Lemme check...
22:16:53 Yeah, possible.
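[editor's note: the use counts PoppaVic is after — how often each word occurs when a body of Forth source is compiled, as a guide to which operations deserve the shortest opcodes — can be approximated with a few lines of Python. This counts compile-time occurrences in source text, like the F83 rebuild counts discussed below, not runtime call frequencies; the function name is invented.]

```python
from collections import Counter

def word_counts(source):
    """Count occurrences of each whitespace-delimited Forth word.

    Forth's outer interpreter essentially splits on whitespace, so this
    approximates how often each word would be compiled or interpreted.
    (Parsing words like \ and ( are ignored, so comments are miscounted.)
    """
    return Counter(source.upper().split())

# The most frequent words are the candidates for dedicated opcodes.
src = ": SQUARE DUP * ; : CUBE DUP SQUARE * ;"
ranked = word_counts(src).most_common()
```

As rdrop-exit cautions, counts like these depend heavily on the corpus and the existing word set, so they are a ballpark for opcode assignment rather than a definitive ordering.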
Well, I'll pull my existing hardcopy to the bed sunday/monday and sort thru 'em..
22:17:12 FD v7#4 F83 Word Usage -- C.H. Ting
22:17:15 I'm pretty sure very little was published when my membership lapsed.
22:17:38 v7#4, ok - lemme go look quick.
22:18:03 All the FDs are available online, but I still keep my originals
22:18:51 yeah, at worst they are great bathroom reads ;-)
22:18:58 :))
22:20:42 OK, cool.. I was afraid I'd scream when I saw it - Fig.1 - Fig.3 list the top users (30+ calls).
22:21:18 silly buggers: listing shit alphabetical when they are talking numerics
22:21:56 Are these runtime counts or compilation counts?
22:22:55 The stack machine book gives runtime counts
22:22:57 iirc, they are the counts generated from rebuilding f83
22:24:12 So he may be counting the number of times the word is compiled into another, as opposed to how frequently it is called
22:24:16 yeah, rebuilt and loaded 7 files or 230 screens
22:24:58 Right, these are USE statistics.. So, it will help me to decide what opcodes should be allotted and in what order.
22:25:29 I don't put much faith in the counts from the stack machine book; if you use a different opcode set, you might get completely different counts.
22:25:33 hard part will be keeping Pest away from the damned paper.
22:25:33 bbiab
22:47:43 oh, http://forth.org/fd/FD-V07N4.pdf - for anyone wondering
22:48:51 --- quit: rpcope (Ping timeout: 244 seconds)
22:49:53 --- join: rpcope (~GOTZNC@muon.copesystems.com) joined #forth
23:07:14 gforth is converting my if...then into a while...then
23:07:31 is there some legitimate optimization? or is this a bug?
23:12:07 https://pastebin.com/sE6XNnTA
23:15:32 all those structures are built from the same building blocks
23:16:04 It's less than entertaining to 'see' or decompose a compiled word and see that sorta' thing, but it's edumacational.
23:18:15 while...then doesn't seem like a combination i've seen described...
23:21:52 see WHILE
23:21:52 : compile-only-error
23:21:53   -14 throw ;
23:21:53 latestxt
23:21:53 : WHILE
23:21:53   POSTPONE IF 1 CS-ROLL ;
23:21:55 latestxt
23:21:57 interpret/compile: WHILE ok
23:22:23 so, perhaps yer implementation went the other way
23:22:55 huh
23:23:21 you know, I suppose, that all branches anywhere and in any language are going to devolve to 0branch and branch, right?
23:27:51 Nah. They all devolve to NOR.
23:28:04 * PoppaVic facepalms...
23:32:10 back
23:36:45 --- quit: jedb (Read error: Connection reset by peer)
23:37:32 --- join: jedb__ (jedb@gateway/vpn/mullvad/x-lmtntcsoamjszxwx) joined #forth
23:38:01 --- nick: jedb__ -> jedb
23:59:59 --- log: ended forth/19.02.02