00:00:00 --- log: started forth/18.12.28 00:46:28 you reckon a word like : signed ( n -- u flag) dup abs swap <0 ; would be handy? 00:50:50 I don't think so 00:51:38 You already have 0> 0< 0>= 0<= 00:52:40 I would rather define "abs" from "sgn". ": sgn -1 max 1 min ;"? 00:52:41 Also note that in 2's complement the absolute value of the most negative number is itself 00:54:48 ooh sgn is interesting 00:55:25 : sgn ( n -- -1|0|1 ) dup 0< swap 0> - ;inline 00:55:39 sgn Integer signum function. 01:00:03 Which is faster would depend on how your primitives are coded 01:01:18 ": .sgn (sign-bit) and ; : abs (sign-bit) cleared ;" would be closer to definition. 01:02:37 I derive abs from nabs, since I find nabs more useful in general as it doesn't overflow 01:02:41 : abs ( n -- +n|mnn ) nabs negate ;inline 01:03:21 nabs (negative absolute) is the primitive in my Forths 01:03:35 a nabs Negative absolute value, unlike a positive absolute 01:03:36 b doesn't overflow on |mnn|, 01:03:36 c |dup 0> if negate then|, |dup 63 arshift tuck xor -|. 01:03:43 nabs ( -n|+n -- -n ) 01:04:38 row c is just high level pseudo-code 01:05:11 The actual implementation of nabs is as a primitive 01:05:55 mnn is most negative number, i.e. $ 8000...0 01:15:53 --- quit: rdrop-exit (Ping timeout: 244 seconds) 02:03:35 --- quit: ashirase (Ping timeout: 250 seconds) 02:19:36 --- join: ashirase (~ashirase@modemcable098.166-22-96.mc.videotron.ca) joined #forth 02:48:51 WHy does gforth have 4 ways of including a file 02:50:47 Redundancy! ;þ 03:12:04 --- join: xek_ (~xek@apn-37-248-138-82.dynamic.gprs.plus.pl) joined #forth 03:56:48 crc, do I recall correctly that you wrote a paper about using queues as an alternative to stacks? 04:02:22 proteusguy: It might have been me http://www.0xff.in/bin/SVFIG-Jan27-2018-The-Case-for-FIFOs-in-Place-of-Stacks.pdf 04:03:09 pointfree, ah right!! :-) thanks! I was working on something similar and remembered your paper. Want to see if there's anything I haven't considered. 
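The sgn/nabs exchange above can be modeled outside of Forth. A minimal Python sketch (the 16-bit cell width, the `wrap` helper, and all function names are my own illustration, not the channel's actual primitives):

```python
# Python model of the #forth sgn/nabs discussion (illustrative only).
# Cells are simulated as 16-bit two's complement integers.
CELL_BITS = 16
MNN = -(1 << (CELL_BITS - 1))          # most negative number, i.e. $8000

def wrap(n):
    """Wrap a Python int into a signed 16-bit cell, like machine arithmetic."""
    n &= (1 << CELL_BITS) - 1
    return n - (1 << CELL_BITS) if n & (1 << (CELL_BITS - 1)) else n

def sgn(n):
    """: sgn ( n -- -1|0|1 ) dup 0< swap 0> - ;  (Forth flags are -1/0)"""
    return (-1 if n < 0 else 0) - (-1 if n > 0 else 0)

def nabs(n):
    """Negative absolute value: never overflows, since -|n| is always representable."""
    return n if n < 0 else wrap(-n)

def abs_(n):
    """: abs nabs negate ;  -- wraps on MNN, as noted in the log."""
    return wrap(-nabs(n))

print(sgn(-7), sgn(0), sgn(7))   # -1 0 1
print(nabs(5), nabs(-5))         # -5 -5
print(abs_(MNN) == MNN)          # True: |MNN| is not representable, so abs wraps back to MNN
```

This makes the channel's point concrete: `nabs MNN` is simply `MNN` (representable), while `abs MNN` overflows back to `MNN`.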
04:07:47 pointfree: How does your fifo work? shouldn't `2 2 3 3 * * + .` give 15? 04:08:07 `2 2 * 3 * 3 + .`? 04:13:36 WilhelmVonWeiner: It's equivalent to 2 2 * 3 3 * + . 04:13:36 : d> depth 1- roll ; ( consume) 04:13:36 : d@ depth 1- pick ; ( peek) 04:13:36 : + d> d> + ; : * d> d> * ; 04:13:36 2 2 3 3 * * + . ( 13) 04:15:44 so consume puts the item at the front of the queue to the back? 04:20:23 https://0x0.st/snBG.txt like this? That's pretty interesting actually 04:23:57 WilhelmVonWeiner: Yep 04:23:57 Now I think of it as consuming an item and putting it on the stack for localized stack oriented use. The new d> >d and d@ word names are inspired by Sam Falvo's bspl https://github.com/KestrelComputer/kestrel/blob/master/software/src/bspl/examples/hello.bs 04:23:57 When I made the slides I had allocated a buffer and done something like this https://en.wikipedia.org/wiki/Producer%E2%80%93consumer_problem#Without_semaphores_or_monitors which might be faster depending on the implementation of roll. 04:26:54 https://rosettacode.org/wiki/Roots_of_a_quadratic_function#Forth vs https://x0r.be/@pointfree/100708257252497517 is another example for comparison. 04:28:29 --- join: dddddd (~dddddd@unaffiliated/dddddd) joined #forth 05:08:18 I always thought a deque instead of a stack was interesting. http://concatenative.org/wiki/view/Deque 05:09:06 The future is circular deques 05:10:24 Yay, my assembler and Forth implementation are getting somewhere! \o/ 06:15:07 WilhelmVonWeiner: Interesting! I hadn't seen that one. 06:15:15 I've seen http://www.enchiladacode.nl/ a tree based concatenative language. 06:15:24 I think the queue style forth could benefit from more than one fifo: multiple fifos linked together in graphs is more what people have in mind for dataflow or fbp languages. 06:15:35 So a graph is composed of triples: source-node-sink or subject-predicate-object (semantic web) or row-cell-column (spreadsheet) or just input-word-output.
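pointfree's d>/d@ words explain why `2 2 3 3 * * + .` prints 13 rather than 15: every operator consumes from the *bottom* of the stack, so the stack behaves as a FIFO. A Python model of that behaviour (illustrative only, not the actual Forth implementation):

```python
# Sketch of the queue-style evaluation: d> consumes the deepest item
# ("depth 1- roll"), while results are pushed on top as usual.
stack = []

def push(n): stack.append(n)

def d_from():            # : d> depth 1- roll ;  ( consume oldest item )
    return stack.pop(0)

def fifo_add():          # : + d> d> + ;
    push(d_from() + d_from())

def fifo_mul():          # : * d> d> * ;
    push(d_from() * d_from())

# 2 2 3 3 * * + .
for n in (2, 2, 3, 3):
    push(n)
fifo_mul()               # consumes 2 2 -> queue is 3 3 4
fifo_mul()               # consumes 3 3 -> queue is 4 9
fifo_add()               # consumes 4 9 -> queue is 13
print(stack[0])          # 13, not 15: words see operands in arrival order
```

In other words, the postfix program reads like a dataflow pipeline: operands are consumed in the order they were produced, which is exactly the `2 2 * 3 3 * +` grouping.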
06:15:47 Multiple queues can be done by limiting the depth of the consumers and producers instead of just using the full DEPTH. 06:15:57 The extents of the producers >d and consumers d> could partially overlap each other for linking fifos together. I haven't tried this part yet. 06:44:54 --- quit: pierpal (Quit: Poof) 06:45:10 --- join: pierpal (~pierpal@95.239.223.85) joined #forth 06:55:00 --- join: Kumool (~Khwerz@adsl-64-237-235-196.prtc.net) joined #forth 07:10:55 --- quit: dave0 (Quit: dave's not here) 07:23:10 Can't think of good ways to use coroutines, anyone have any examples? 07:26:18 i've found them to be nice for decoupling lexing from its input source 07:28:22 your lexer turns into a simple loop and then you can just feed it by resuming it. it can be easier than keeping track of a complex state machine. not sure how well it translates to forth, though 07:50:41 --- quit: Kumool (Ping timeout: 250 seconds) 07:55:00 --- join: Kumool (~Khwerz@adsl-64-237-234-190.prtc.net) joined #forth 08:01:31 --- quit: Kumool (Ping timeout: 250 seconds) 08:01:46 --- join: gravicappa (~gravicapp@85.26.165.17) joined #forth 08:03:21 --- join: Kumool (~Khwerz@adsl-64-237-235-117.prtc.net) joined #forth 08:03:35 Speaking of lexing, has anyone already thought of a clean way to implement parser combinators in forth? http://theorangeduck.com/page/you-could-have-invented-parser-combinators 08:03:42 I guess it makes most sense to implement the LIT BOTH EITHER parser combinators as primitives and use them in high level forth to make additional parser combinators. 08:07:29 I think there's no shame in relying on some asm codeword primitives, the primitives define what it will be like to use the forth in question and who says the existing conventional forth primitives are best. 08:08:41 --- quit: Kumool (Ping timeout: 246 seconds) 08:12:02 pointfree: I love parser combinators, amazing stuff.
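The LIT/BOTH/EITHER primitives proposed above can be sketched compactly. A hedged Python illustration in the Hutton/theorangeduck style the link describes (a parser maps a string to a list of (result, rest) pairs; an empty list means failure; all names here are my own):

```python
# Minimal sketch of the three combinator primitives from the discussion.
def LIT(c):
    """Match one literal character."""
    return lambda s: [(c, s[1:])] if s[:1] == c else []

def BOTH(p, q):
    """Run p, then q on the remaining input; succeed only if both succeed."""
    return lambda s: [((r1, r2), rest2)
                      for (r1, rest1) in p(s)
                      for (r2, rest2) in q(rest1)]

def EITHER(p, q):
    """Try both alternatives, collecting every successful parse."""
    return lambda s: p(s) + q(s)

# Higher-level parsers built from the three primitives, as suggested:
ab     = BOTH(LIT('a'), LIT('b'))
a_or_b = EITHER(LIT('a'), LIT('b'))

print(ab("abc"))      # [(('a', 'b'), 'c')]
print(a_or_b("bx"))   # [('b', 'x')]
print(ab("ba"))       # []  -- failure
```

The list-of-results representation also gives a natural answer to the backtracking question raised next: alternatives are simply all carried along until they fail.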
After reading that paper by Hutton I had to immediately implement it in Scheme and SML 08:12:26 So you might want some sort of way to express backtracking in Forth 08:13:53 https://www.complang.tuwien.ac.at/forth/backtracking-in-ansforth 08:14:12 An ANS Forth compatible way would be through CATCH and THROW 08:16:01 Maybe something with RDROP 08:16:33 Of course 08:18:08 Got multi stage booting on my OS now. So the first stage runs a bunch of tests and the second stage is the "main" part 08:18:22 Helps ensure my words won't break 08:19:05 siraben: Is this an x86 OS? 08:19:51 pointfree: glad you asked :) it's not. https://github.com/siraben/zkeme80 08:20:52 Quite fond of the Z80 now, initially had a hard time with the lack of registers and other weird quirks 08:20:57 Cool!! 08:20:57 It's "just enough" 08:22:31 Is the idea to replace TI's OS with something more programmable? 08:23:41 Definitely. 08:23:56 I lack a lot of math words, so it's not that usable in real life yet 08:24:11 But I want it to be an extensible, stack-based calculator 08:24:14 OSp 08:24:18 OS* 08:25:11 pointfree: Although it says it's written in Scheme it's essentially Lisp-style assembly 08:25:22 Very convenient for extensibility 08:26:32 You could add all kinds of stuff for graphing, geometry, notes, etc. 08:27:12 I have a https://en.wikipedia.org/wiki/TI-89_series#TI-89_Titanium from high school and college. 08:27:51 I think it uses a 68k processor 08:28:07 Cool, I've seen that model before 08:28:20 Although IIRC the hacking community isn't as strong around it 08:28:34 I think the signing keys aren't factored yet 08:29:16 It was the highest model allowed on the SAT. 08:29:22 Wow 4 MB of flash to play with on the 89? 08:30:07 Huh I've never touched the 68k before 08:30:38 It's got a lot of general purpose registers. 08:30:47 People seem to like it.
08:31:21 Hah on the Z80 we have the 16-bit HL BC DE 08:31:38 IX IY as well if index regs are your thing 08:33:12 Can't wait to add more math stuff 08:33:35 Time to finally find out how the black magic floating point works 08:33:46 Of floating point* 08:48:08 hi 08:59:15 Hello corecode! 09:00:41 proteusguy: not me, sorry 09:02:27 crc, no worries - twas pointfree who dun it. ;-) 09:05:10 pointfree, I'm doing some stuff with Actor model so queues make a lot of sense for message passing. 09:20:15 proteusguy: This morning I've been experimenting with limiting the extent of the consumer words and having them partially overlap -- words as actors effectively talking tacitly through the overlapping portions of queues. I'm still thinking about it. 09:28:53 I've been thinking about adding lists to my forth, with all the fun functional programming stuff 09:29:03 Overlapping portions of queues is an implementation optimization or you actually intend to have shared state? pointfree 09:29:14 Map, filter, reduce, heck maybe even GC 09:34:50 nuthin like complicating the simple. Yup. 09:48:19 --- join: Kumool (~Khwerz@adsl-64-237-233-75.prtc.net) joined #forth 09:50:34 --- join: tbemann (~androirc@2602:30a:c0d3:1890:3d05:3556:7fca:b1a4) joined #forth 09:50:55 --- quit: pierpal (Read error: Connection reset by peer) 09:56:32 --- join: pierpal (~pierpal@95.239.223.85) joined #forth 09:59:13 --- quit: tbemann (Quit: AndroIRC - Android IRC Client ( http://www.androirc.com )) 10:00:41 --- quit: pierpal (Ping timeout: 250 seconds) 10:01:28 back 10:03:11 siraben: I saw someone having a hand-coded floating point implementation for some forth out, you might be able to find it 10:04:16 Cool, if it's standard forth then it should work no problem 10:04:33 *out there 10:05:47 --- join: pierpal (~pierpal@95.239.223.85) joined #forth 10:06:37 Which implementation?
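For calculator math, the scaled-integer (fixed-point) approach suggested in the channel avoids a float package entirely. A hedged sketch in Python with a fixed scale of 1000 (three decimal places); the helper names are invented for illustration:

```python
# Scaled integer math: every value is stored as round(x * SCALE), and
# multiply/divide rescale through a wide intermediate, like Forth's */ .
SCALE = 1000

def to_fix(x):      return round(x * SCALE)   # 1.5 -> 1500
def fix_mul(a, b):  return a * b // SCALE     # intermediate is double-width
def fix_div(a, b):  return a * SCALE // b
def show(a):        return a / SCALE          # only for printing

a = to_fix(1.5)
b = to_fix(2.5)
print(show(fix_mul(a, b)))   # 3.75
print(show(fix_div(a, b)))   # 0.6
```

On a Z80 this maps to double-cell integer routines; the division and modulus only happen where the math requires them, which is the argument made against BCD below.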
10:07:04 it'd be slower than shit, too 10:07:13 I think it might've been mecrisp 10:07:36 True I'd be better off using Z80 routines 10:08:14 you might be better off with fixed point across double words 10:08:20 *double cells 10:08:28 Hm 10:08:35 I am on a calculator after all 10:08:44 I'll get double words working 10:08:45 scaled integer math 10:09:18 --- quit: pierpal (Read error: Connection reset by peer) 10:09:24 or maybe even bcd 10:09:29 but mind you people've done floating point on platforms like Z80 before (remember all the Microsoft BASIC implementations with floating point) 10:10:08 you remember M$ BASIC on the z80? wow. 10:10:23 * PoppaVic chuckles 10:10:28 I used an MS BASIC on the 6502 aka Applesoft BASIC 10:11:58 --- join: pierpal (~pierpal@95.239.223.85) joined #forth 10:14:51 Yeah the Z80 has a bunch of BCD instructions I have no idea how to use 10:16:39 I always saw BCD as a waste of numerical resolution all to slightly simplify printing decimal numbers 10:17:01 --- quit: pierpal (Ping timeout: 272 seconds) 10:17:03 all to essentially save a division and a modulus 10:17:17 Exactly what's the point 10:17:26 The user doesn't care except for the final print 10:17:53 what I mean is there's no point if you already need division and modulus elsewhere 10:18:33 Looks like I have a whole bunch of floating point stuff to implement 10:18:35 --- join: pierpal (~pierpal@95.239.223.85) joined #forth 10:18:58 floating point really is not necessary - you can just used fixed point as I mentioned 10:19:15 Interestingly TI's own operating system has a bunch of ROM calls exposing the floating point stack 10:19:17 *use 10:19:19 Right 10:25:50 --- quit: pierpal (Ping timeout: 245 seconds) 10:31:32 --- quit: gravicappa (Remote host closed the connection) 10:33:53 --- join: pierpal (~pierpal@95.239.223.85) joined #forth 10:39:11 --- quit: pierpal (Read error: Connection reset by peer) 10:39:22 --- join: pierpal (~pierpal@95.239.223.85) joined #forth 10:41:42 --- quit: 
pierpal (Read error: Connection reset by peer) 10:41:59 --- join: pierpal (~pierpal@95.239.223.85) joined #forth 10:55:14 My friend is making a Forth without actually reading any Forth literature 10:55:32 Weird interpreter states for `IF..THEN` 10:56:03 dictionary is a map 10:56:40 unaware of the return stack's importance 10:57:02 I'm almost offended, yknow? 11:08:41 A fellow SVFIG member is scanning some FORML proceedings, foreign language forth magazines, oddities, and rarities. Not sure where to host the files. Perhaps just libgen.io or http://www.forthlibrary.net/ itself or both. 11:08:58 I've also got a large backlog of people who want to buy forth books but I haven't gotten to them largely due to excessive traveling and other tasks. I have not forgotten anyone, but apologies for the delay. 11:09:05 After Zarutian's question last night regarding the lack of `see` in retro, I've coded up a quick implementation (using autopsy's disassembler): https://gist.github.com/crcx/8420576c3d1c75d5374af5dc0781cd89 11:34:40 --- quit: pierpal (Read error: Connection reset by peer) 11:37:57 --- join: pierpal (~pierpal@95.239.223.85) joined #forth 11:39:08 I remember trying to write a Forth ages ago without really understanding how compilation worked - I got nowhere 11:40:34 crc: Chuck says he's been working on decompiling arbitrary machine code into forth. It's an interesting task. 11:40:41 Perhaps one could, given the code for say, the word "+" search the dictionary backwards for an exact match between the given machine code of ADD... 11:40:53 ...and the body of a word in the dictionary ( rather than matching on the header match the codeword body) 11:41:02 Then, advance over the matched machine code and match the next and so on. 11:41:11 Take the addresses of the matched dictionary codewords and search for word bodies containing exact matches to substrings of the array of codeword addresses.
11:43:56 ...and so on until there are no exact matches to substrings of the machine code in question, arrays of addresses, or the LATEST word has been reached. 11:44:10 --- quit: pierpal (Ping timeout: 245 seconds) 11:48:29 pointfree: that sounds interesting 11:48:40 The advantage would be that the decompiled code would look like the familiar high level code authored by the forther themself and could be factored for clarity simply by adding additional high level words for SEE to find. 11:50:48 Is this in a Forth day or anything? 11:51:03 I assume it derives from his work on sourceless forth? 11:51:58 pointfree: I love how you use parentheses like you're writing Forth code ( Like this, if nobody noticed.) lol. 11:52:47 WilhelmVonWeiner: I don't know anything about Chuck's approach. These are just my off-the-cuff thoughts on how it might be done. 11:52:58 --- join: pierpal (~pierpal@95.239.223.85) joined #forth 12:01:20 --- join: mtsd (~mtsd@94-137-100-130.customers.ownit.se) joined #forth 12:05:33 If there is no exactly matching body because the given binary code is arranged differently, the decompiler would just fall back to lower level codewords or lower level high-level words... 12:05:42 ...and thus the decompilation could be factored by slicing up the decompiled code into definitions. 12:05:54 WilhelmVonWeiner: I did get the idea by reading about sourceless forth on ultratechnology... 12:05:56 Maybe this could be a way to edit arbitrary binary executables in high level forth by writing a little block editor that calls SEE 12:20:16 --- quit: pierpal (Read error: Connection reset by peer) 12:20:26 --- join: pierpal (~pierpal@95.239.223.85) joined #forth 12:29:34 --- quit: pierpal (Read error: Connection reset by peer) 12:34:33 --- join: pierpal (~pierpal@95.239.223.85) joined #forth 12:46:16 Phew... All those pointers confuse me. 
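pointfree's decompile-by-matching idea sketches naturally as a greedy scan: at each position, look for a dictionary body that exactly matches the code, emit its name, and advance. A toy Python version (all byte values and names here are invented for illustration, not real machine code):

```python
# Toy SEE-by-matching: scan a machine-code blob against dictionary word
# bodies, emitting names; unmatched bytes fall back to raw literals.
DICT = {                     # name -> machine-code body
    "+":    b"\x01\x02",
    "dup":  b"\x03",
    "*":    b"\x04\x05\x06",
}

def decompile(code):
    out, i = [], 0
    while i < len(code):
        for name, body in DICT.items():
            if code.startswith(body, i):
                out.append(name)
                i += len(body)
                break
        else:
            out.append(f"${code[i]:02x}")   # no match: keep the raw byte
            i += 1
    return " ".join(out)

print(decompile(b"\x03\x01\x02\x04\x05\x06"))   # dup + *
```

The second pass the log describes (matching arrays of codeword addresses against word bodies) is the same idea one level up, which is what lets the output factor into familiar high-level definitions.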
12:56:43 hey john_cephalopoda 12:58:03 So my dictionary layout looks something like [ size 8 | name 8+ | prev_ptr 32 | entry_ptr 32 | definitions 32+ ] 12:58:57 Now I get entry_ptr and JMP to that memory address. 13:01:17 --- join: [1]MrMobius (~default@c-73-134-82-217.hsd1.va.comcast.net) joined #forth 13:01:23 For a regular word, this is a common ENTER routine. Now I have to implement it, so it will execute all words following the current IP. 13:04:01 --- quit: MrMobius (Ping timeout: 250 seconds) 13:04:02 --- nick: [1]MrMobius -> MrMobius 13:04:30 I have no clue how to do that though. I am just super-confused by all those pointer pointers. 13:15:18 did you read brad rodriguez' series? 13:23:05 Yeah, I'm using it as a base. 13:24:20 I am writing a Forth with DTC 13:28:41 Maybe I should try STC instead. 13:28:45 At least for the start. 13:28:56 Changing it later will probably be relatively simple. 13:41:28 ok so which pointers confuse you? 13:42:43 and why do you have pointer pointers? 13:42:58 Well, there are pointers that point at some entry code and some that are just pointers to pointers to entry code. 13:43:10 ah, take it one indirection at a time 13:43:14 it's a tree, that's right 13:43:24 I'll try CALL instead. 13:43:42 what is CALL? 13:43:49 --- quit: Kumool (Ping timeout: 240 seconds) 13:43:50 what platform are you writing this for? 13:45:54 corecode: he's targetting x86 13:46:17 what are your register assignments? 13:50:25 high latency chat 13:50:55 corecode: just usual IRC 13:56:33 back 13:58:48 one optimization I implemented was to make the xt pointer point directly to the entry_ptr 14:00:05 note that I have separate secondary pointers, because the data directly following the header may not be (indirect threaded) code 14:00:57 thanks to DOES> 14:04:13 my layout is basically [ name data 8*n ] name length 8 | code ptr 64 | next ptr 64 | next_of_all_ptr 64 | flags 64 | secondary ptr 64 ] 14:07:08 what is a secondary pointer? 
14:08:50 pointer that points to ITC code 14:10:12 because with ITC code, the code ptr points to either ENTER or DO-DOES> (or whatever you call it), and a separate pointer is needed to point to the ITC code (because of DOES> making it so there is no guarantee that it directly follows the header) 14:14:42 why wouldn't it follow the header? 14:15:45 because data might be there 14:16:02 like here is the definition of CONSTANT: 14:16:14 : CONSTANT CREATE , DOES> @ ; 14:16:37 the data added by , directly follows the header, note the code @ 14:17:05 yea why would it 14:17:35 you mean for a defined constant 14:18:15 but why wouldn't a defined constant then have a code-pointer to doDOES? 14:18:20 the reason being is that @ is shared amongst all constants, whereas , allots data right after the header is alloted 14:18:26 yes 14:18:38 --- quit: Zarutian (Ping timeout: 244 seconds) 14:18:55 no, wait 14:19:06 --- quit: mtsd (Quit: WeeChat 1.6) 14:19:22 you're talking about when executing the new constant 14:19:27 not CONSTANT itself? 14:19:31 yes 14:21:01 CONSTANT defines a word whose secondary pointer does not directly point to the cell directly after the word's header 14:21:15 for the new constant, wouldn't the code pointer point to doDOES, which executes the @? 14:21:36 but doDOES needs to know where @ is 14:21:43 in ITC 14:21:47 or TTC 14:21:59 or DTC 14:22:05 so in all? :) 14:22:13 no 14:22:27 something is pointing to the XT, no? 14:22:27 in native code you don't need a secondary pointer 14:22:44 to the header or to the XT 14:23:04 well, yes, that'd be in a register 14:23:04 otherwise ENTER/doCOLON wouldn't be able to do its thing either? 14:23:17 right, so doDOES could use that register? 
14:23:44 it needs that register to find where to read the secondary pointer 14:23:53 --- join: rain2 (~My_user_n@unaffiliated/rain1) joined #forth 14:23:54 i don't understand 14:23:58 hdi 14:24:00 hi 14:24:02 hey 14:24:26 can someone link me to interpreters for high level languages written in forth? 14:25:06 --- join: Zarutian (~zarutian@173-133-17-89.fiber.hringdu.is) joined #forth 14:25:17 wb Zarutian 14:26:07 sorry, I can't help you with that offhand 14:28:28 corecode: the thing is that the code executed by the code pointer, whether doENTER or doDOES, needs to know where the header for the current word is to find the current secondary pointer and, for doDOES, the location of the data directly following the HEADER 14:29:24 rain2: Like this? http://www.nicholson.com/rhn/files/Tiny_BASIC_in_Forth.txt 14:29:52 yeah, ty! 14:30:15 --- join: Kumool (~Khwerz@adsl-64-237-233-75.prtc.net) joined #forth 14:31:32 rain2: Or this https://hackaday.io/project/13420-rigtigs-big-3d-printer/log/53758-esp14-as-gcode-interpreter 14:34:44 pointfree: "Let's take a programming language that takes the speed of ASM and the ease of writing programs of BASIC to write a BASIC interpreter" ? :þ 14:38:44 rain2: also see http://cosy.com which is a sort of APL in Forth 14:40:59 rain2: I also got an x86 compiler in Forth in the works. https://github.com/jmf/impexus/tree/master/arch/x86 14:42:43 tabemann: why don't i need that? 14:43:42 why don't you need it? if your forth is native code/subroutine threaded you just put your code directly in the code ptr, without any need for a secondary ptr 14:44:20 john_cephalopoda: you don't seem to have a foss license at the start of your code 14:44:31 or dedication to the public domain 14:45:13 i use DTC, with native code in the code field 14:45:14 sec 14:46:12 WilhelmVonWeiner: Not yet, yeah. I wanted to add CC-0 but github doesn't allow me to add it automatically. I was too lazy to add it yet. 14:46:35 Once I got the Forth core working, I'll CC-0 it. 
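The doDOES/secondary-pointer argument above can be made concrete with a small cartoon in Python: a defined constant's code pointer goes to doDOES, which needs *two* addresses reachable from the word itself, its own data (laid down by `,`) and the shared DOES> code (here, `@`). All names are illustrative:

```python
# Cartoon of ITC with DOES>: why a word needs both a data field and a
# secondary pointer to the shared DOES> code.
stack = []

class Word:
    def __init__(self, code, data=None, secondary=None):
        self.code = code            # code pointer (doENTER / doDOES / primitive)
        self.data = data            # cell following the header, from ,
        self.secondary = secondary  # pointer to the shared DOES> threaded code

def fetch(w):                       # the @ after DOES>, shared by ALL constants
    stack.append(w.data)

def do_does(w):                     # doDOES: via w, find data *and* DOES> code
    w.secondary(w)

def constant(value):                # : CONSTANT CREATE , DOES> @ ;
    return Word(code=do_does, data=value, secondary=fetch)

TEN = constant(10)
TEN.code(TEN)                       # executing a word runs its code pointer
print(stack.pop())                  # 10
```

In native-code/STC the secondary pointer disappears, as corecode says: the compiled code for each constant can simply embed its own data address.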
14:46:48 tabemann: https://gist.github.com/corecode/95fbd378bb96e469b781e9d573807517 14:47:52 I'm no assemblerhead, xchgl swaps two longs I take it? 14:48:19 yes, the two registers 14:48:50 you might mean native code 14:49:30 actually, you might be right 14:49:50 I was thinking DTC was just like ITC except that a JMP is used instead of extra indirection 14:50:17 and native code was like where the forth is compiled into code that is executed directly, without any loop executing the forth code 14:50:20 terminology is difficult 14:50:51 "loop" 14:51:17 STC is just call instructions, with ret for NEXT 14:51:20 no? 14:51:42 i need to figure out how to do fast 16bit forth on a 32bit platform 14:54:35 * Zarutian notes that https://www.digikey.com/product-detail/en/on-semiconductor/N01S830HAT22I/N01S830HAT22IOS-ND/6166720 plus ATMEGA328p with 16MHz crystal might make a decent hardware base for an 'Forth' MCU. 14:56:02 next step after this forth is a fpga forth 15:01:39 corecode: Like an fpga stack machine? https://www.fpgarelated.com/showarticle/790.php https://github.com/drom/quark 15:01:56 yea some like that 15:02:19 Cool! 15:07:46 ok now i need to tie in my forth kernel so that i can start executing strings 15:09:40 corecode: I sunk a lot of time into compiling high-level forth from the PSoC 5LP's arm core to its memory mapped cpld logic and routing fabric. 15:09:42 I sunk most of the time into reverse engineering the logic fabric. http://www.psoctools.org/ 15:09:59 hehe 15:10:16 I really need time to play with my PSoC 15:10:27 so i can use psoc on linux now? 15:10:31 I have 2 FPGA boards and this PSoC and no time 15:13:45 corecode: Yes http://www.psoctools.org/psoc5lp-tools.html http://www.psoctools.org/psoc5lp-example-configs.html 15:13:57 OpenOCD support, gcc, and mecrisp forth support but no yosys yet although I do understand the PSoC hardware well enough to add it. 15:15:54 nice! 
15:18:56 how do i return from forth code 15:19:11 or, alternatively, how do i call C code from forth 15:19:12 hmmm 15:21:45 I've always wondered that 15:22:34 if you knew how a library was structured how would you call a function 15:22:57 Because I google about FFIs and this and that but can never find an answer because I don't really know the question 15:23:36 i don't need simple ffi, i can hard-code some initial function escapes 15:23:52 well exactly, I don't even know how that works 15:24:07 what terms do I search for information on that kind of stuff 15:25:05 WilhelmVonWeiner: Use registers according to the ABI I suppose. 15:26:10 and set up the stack right 15:41:31 WilhelmVonWeiner: ffi for which os and cpu architecture? 15:45:47 --- join: dave0 (~dave0@47.44-27-211.dynamic.dsl.syd.iprimus.net.au) joined #forth 15:46:01 hi 15:46:39 Well, for any os and architecture. Just the concepts and related terminology 15:47:14 hello dave0 15:47:54 hi WilhelmVonWeiner 16:04:06 the main thing will be the application binary interface (ABI), this will differ by host, but an ABI will specify the calling and return conventions you need to use when interacting with the foreign functions 16:07:29 on 32-bit x86, cdecl is common, and syscall on some older systems. microsoft has used several (pascall, stdcall, fastcall) 16:08:33 On 64-bit x86, most unix like targets I've dabbled with, System V AMD64 conventions are used 16:13:52 http://forthworks.com/retro/r/9.2.10-hosted-source.tar.gz was the last x86 assembly retro release; it had an ffi implemented (generic/generic.asm) and an example of using it (generic/generic.f), (using cdecl) 16:14:12 That also includes a Win32 FFI in the windows directory 16:18:35 --- quit: john_cephalopoda (Ping timeout: 252 seconds) 16:19:16 i did forth EMIT with c PUTCHAR ... 
i used a global variable to pass the character 16:20:10 getting the right register for c arguments on all the different os's is too hard 16:20:34 --- join: john_cephalopoda (~john@unaffiliated/john-cephalopoda/x-6407167) joined #forth 16:49:19 Ok, new code. 16:49:32 https://github.com/jmf/impexus 16:49:48 I'm going the SRT route. 16:50:03 *STC 16:50:23 why? 16:50:37 Seemed simpler to me. 17:13:56 when I make my next forth I'd probably go with STC 17:14:18 i don't understand that code at all 17:14:40 where is the forth? 17:18:05 it looks like the very beginnings of a forth implementation 17:18:06 crc: that Quotes&Combinators reminds me a bit of SmallTalk80/Squeak blocks, non-lexical lambda functions from Scheme and curly braced sub-scripts in Tcl. 17:18:34 corecode: It's all forth. 17:19:01 corecode: It's literally a gforth program - which generates x86 bytecode and looks a lot like assembly :D 17:19:27 I've got quotes and combinators implemented for attoforth 17:19:46 yea it creates bytecode 17:19:48 I've got both non-lexical nested functions and closures 17:19:56 and where is the forth kernel? 17:20:50 corecode: The part that actually does Forth-y stuff isn't really big yet. 17:21:54 tabemann: how do you implement closures? (which are lambda functions with their own environments (which can overlap)) 17:23:53 Zarutian: I've got code that reads a specified number of values off the stack and saves them in (highly nonstandard) anonymous CREATE ...
DOES>, where then the code after DOES> runs the content of the nested function; after it does the anonymous CREATE it compiles a BRANCH to jump over the code for the nested function 17:24:11 it's all highly nonstandard 17:24:40 ANS Forth isn't a standard, it is more of an inspiring guide so to speak 17:24:43 note that the anonymous CREATE is not done into the data space but rather into an allocated block of memory on th eheap 17:24:52 *the heap 17:25:32 there is also a separate function for freeing closures 17:26:08 --- quit: moony (Quit: Bye!) 17:27:29 corecode, what language will you write your fpga in? Have you done fpgas before? I'm considering chisel3 which was used for the riscv folk. Also have some excellent simulator tools. 17:27:30 my Forth is kinda standardish but doesn't try too hard to be standard 17:28:33 --- join: moony (moony@hellomouse/dev/moony) joined #forth 17:29:15 you got two stacks (or more) and spell things RPN then you're a forth in my book! ;-) 17:29:23 lol 17:29:34 I've got three stacks and spell things RPN 17:29:43 forth+! 17:30:18 what's stack #3 for? vocabulary, float? 17:30:26 float 17:30:46 makes sense 17:33:02 note that vocabulary in attoforth isn't really suited to a stack, considering that it is implemented such that it could be multiuser if one wanted it to be, so that each task has its own data space into which code can be compiled, and furthermore the data space for each task is not a contiguous block of memory but is allocated into separate chunks to ensure that there's normally enough space left at any time 17:33:42 and of course closures involve allocating words into the heap 17:38:48 --- part: PoppaVic left #forth 17:46:09 I so like Forths with anonymous code blocks! 17:46:48 --- join: kumul (~kumool@adsl-64-237-233-75.prtc.net) joined #forth 17:47:47 I've also got control structures that use them 17:48:27 : FOOBAR [: ." foo" ;] [: ."
bar" ;] CHOOSE ; ok 17:48:28 FALSE FOOBAR bar ok 17:48:28 TRUE FOOBAR foo ok 17:49:05 that doesn't involve closures, just anonymous code blocks, so there is no need to free anything 17:53:37 : quux 0 1 <: . ;> 1 1 <: . ;> 2 roll 2 pick 2 pick choose free-lambda free-lamb 17:53:38 da ; ok 17:53:38 true quux 0 ok 17:53:38 false quux 1 ok 17:55:27 : FOOBAR 0 [: DUP . 1+ DUP 10 = ;] LOOP-UNTIL DROP ; ok 17:55:27 FOOBAR 0 1 2 3 4 5 6 7 8 9 ok 17:55:56 : FOOBAR 0 [: DUP 10 < ;] [: DUP . 1+ ;] WHILE-LOOP DROP ; ok 17:55:56 FOOBAR 0 1 2 3 4 5 6 7 8 9 ok 17:56:26 : FOOBAR 10 0 DO I . LOOP ; ok 17:56:28 FOOBAR 0 1 2 3 4 5 6 7 8 9 ok 17:57:00 that's the next one: 17:57:03 : FOOBAR 10 0 ['] . COUNT-LOOP ; ok 17:57:03 FOOBAR 0 1 2 3 4 5 6 7 8 9 ok 17:57:26 well properly it's a ?DO loop not a DO loop 17:57:58 and last but not least: 17:58:22 FOOBAR -9 0 [: . -1 ;] COUNT+LOOP ; ok 17:58:22 FOOBAR 0 -1 -2 -3 -4 -5 -6 -7 -8 -9 ok 17:58:31 wnoops : FOOBAR 17:58:59 that's a ?DO +LOOP loop 18:00:19 I've been thinking of adding words like MAP and FILTER 18:01:25 it seems unforthy to write a Lisp-style list API for that though 18:02:19 like allocating one item on the heap for each list item 18:02:43 I might write something based on array list, where MAP and FILTER would copy from one buffer into another 18:02:48 *array lists 18:05:13 Yay Lisp style data structures 18:06:13 the problem is that they work much better with a GC 18:06:45 Yeah you'd generate quite a bit of waste 18:06:49 especially generational garbage collectors where allocation is literally just advancing a pointer 18:07:11 Then again you could make it so that MAP overwrites a data structure 18:08:16 tabemann: you could run a simple GC that starts from a root node and recursively goes down marking objects that can be reached 18:08:28 And then doing a sweep and clearing unmarked objects 18:10:25 the problem is loops 18:10:37 you need something like three-color or semi-space 18:10:54 the thing is semi-space would not work 
well for something like forth 18:11:58 I'm planning on just doing a copy from one array of cells to another array of cells (or the same array of cells), leaving memory management up to the user 18:12:50 Zarutian: re: quotes&combinators: I did borrow ideas from a lot of places... 18:39:01 --- quit: reepca (Read error: Connection reset by peer) 18:39:36 --- join: reepca (~user@208.89.170.250) joined #forth 18:40:59 --- quit: kumul (Quit: Leaving) 18:46:05 rain2: https://github.com/tgvaughan/scheme.forth.jl 18:46:12 This is a Scheme interpreter written in Forth 18:46:30 Enough to run a metacircular interpreter https://github.com/tgvaughan/scheme.forth.jl/blob/master/examples/metacirc.scm 18:47:13 This is pretty impressive because it implements Lisp types too and a mark and sweep GC 18:47:43 Main forth code here https://github.com/tgvaughan/scheme.forth.jl/blob/c775ab2562b213ac96fe1803d4e7001a73e874a8/src/scheme.4th 18:50:41 the thing there is they're aiming at even more abstraction, because their forth is implemented in Julia 18:51:26 of course I have read of things like a Forth (Transforth) implemented in F# 18:52:46 I was originally planning on writing my Forth in Haskell, until I realized how much of an unholy abomination it'd be and did the sensible thing and wrote it in C instead 19:05:40 --- quit: reepca (Read error: Connection reset by peer) 19:06:18 --- join: reepca (~user@208.89.170.250) joined #forth 19:16:38 --- quit: Kumool (Ping timeout: 250 seconds) 19:19:46 --- join: learning_ (~learning@4.35.154.131) joined #forth 19:26:24 --- join: Kumool (~Khwerz@adsl-64-237-237-177.prtc.net) joined #forth 19:32:26 --- quit: Kumool (Ping timeout: 272 seconds) 19:38:13 yep, got FILTER now implemented 19:45:38 --- join: Kumool (~Khwerz@adsl-64-237-234-34.prtc.net) joined #forth 19:47:17 --- quit: dddddd (Remote host closed the connection) 19:47:21 --- quit: learning_ (Remote host closed the connection) 19:50:26 --- quit: Kumool (Ping timeout: 246 seconds) 19:52:17 --- 
quit: proteusguy (Remote host closed the connection) 19:53:18 --- quit: pierpal (Quit: Poof) 19:53:37 --- join: pierpal (~pierpal@95.239.223.85) joined #forth 20:04:51 --- join: Monev (~Khwerz@adsl-64-237-238-74.prtc.net) joined #forth 20:10:58 --- quit: Monev (Ping timeout: 246 seconds) 20:16:42 --- join: Monev (~Khwerz@adsl-64-237-235-141.prtc.net) joined #forth 20:25:00 --- quit: Monev (Ping timeout: 245 seconds) 20:30:27 tabemann: they wrote it in Julia but interestingly it's low level 20:30:33 tabemann: can you share how you did that? 20:34:03 --- join: Monev (~Khwerz@adsl-64-237-234-241.prtc.net) joined #forth 20:37:12 I will in a moment 20:37:19 busy debugging my FILTER-MAP word 20:40:27 --- quit: Monev (Ping timeout: 244 seconds) 20:42:25 --- join: Monev (~Khwerz@adsl-64-237-235-233.prtc.net) joined #forth 20:45:55 --- join: PoppaVic (~PoppaVic@unaffiliated/poppavic) joined #forth 20:47:54 --- quit: Monev (Ping timeout: 268 seconds) 20:48:30 --- quit: PoppaVic (Quit: Tuesday is Solylent Green Day) 20:50:26 https://github.com/tabemann/attoforth/blob/master/doc/control.md 20:50:48 https://github.com/tabemann/attoforth/blob/master/src/forth/control.fs 20:51:48 What's & 20:53:33 POSTPONE 20:54:05 for more brain-breaking implementation details, look at https://github.com/tabemann/attoforth/blob/master/src/forth/lambda.fs 20:54:15 Yep looking at it now 20:54:25 Seems more or less pretty standard 20:54:42 I scarcely understand half of lambda.fs myself at this point 20:54:45 and I wrote it 20:54:52 How come? 20:55:09 A lot of metaprogramming going on I se 20:55:11 see 20:55:44 how come? well... 
I tend to forget the stack state of my code after I've written it 20:55:54 so unless it's obvious when I come back to it 20:56:02 --- join: Monev (~Khwerz@adsl-64-237-235-251.prtc.net) joined #forth 20:56:30 it's a pain coming back to code and trying to do something with it - I oftentimes have to literally go from the first word of the word on and reconstruct each stack state along the way 20:58:26 but yeah 20:58:50 this stuff is an example of code that is underlyingly far more complicated than its surface interface is 20:59:09 well, closureless lambdas aren't that bad 20:59:57 the stuff in control.md is simple though 21:00:03 *control.fs 21:02:07 I should implement FOLD-LEFT and FOLD-RIGHT 21:12:23 --- quit: Monev (Ping timeout: 250 seconds) 21:14:32 --- join: Monev (~Khwerz@adsl-64-237-236-173.prtc.net) joined #forth 21:33:49 --- quit: Monev (Ping timeout: 240 seconds) 21:46:45 --- join: smokeink (~smokeink@118.131.144.142) joined #forth 22:17:20 --- join: gravicappa (~gravicapp@85.26.234.223) joined #forth 22:33:16 --- join: [1]MrMobius (~default@c-73-134-82-217.hsd1.va.comcast.net) joined #forth 22:33:40 Ah I see. 22:33:45 How would you implement FOLD? 22:35:50 --- quit: MrMobius (Ping timeout: 268 seconds) 22:35:50 --- nick: [1]MrMobius -> MrMobius 23:09:05 back 23:09:37 by simple iteration over an array, while passing a value returned by one iteration into the same word iterating on the next value 23:10:24 * tabemann was busy making it so that iteration, mapping, and folding functions can see the stack as if the function dictating the iteration, mapping, or folding was not there 23:10:31 by using the return stack 23:11:14 to avoid having to use closures, as closures have to be allocated and thus will leak memory if they are not properly freed 23:11:46 anyways, I'm off to bed 23:42:22 --- quit: smokeink (Quit: Leaving) 23:52:56 --- join: smokeink (~smokeink@42-200-116-35.static.imsbiz.com) joined #forth 23:59:59 --- log: ended forth/18.12.28
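tabemann's description of FOLD at 23:09:37 (simple iteration, passing the value returned by one call into the next) is the classic left fold; FOLD-RIGHT just walks the array in reverse. A Python sketch, with illustrative names:

```python
# FOLD as plain iteration over an array, threading an accumulator through
# the folded word (xt), as described in the log.
def fold_left(xt, init, xs):
    acc = init
    for x in xs:                 # pass the value returned by one iteration
        acc = xt(acc, x)         # into the same word on the next value
    return acc

def fold_right(xt, init, xs):
    acc = init
    for x in reversed(xs):
        acc = xt(x, acc)
    return acc

print(fold_left(lambda a, b: a + b, 0, [1, 2, 3, 4]))   # 10
print(fold_right(lambda a, b: a - b, 0, [1, 2, 3]))     # 1 - (2 - (3 - 0)) = 2
```

Threading the accumulator explicitly is also what makes the return-stack trick work: the folding word can hide its own bookkeeping so the xt sees the stack as if the fold were not there.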