00:00:00 --- log: started forth/21.03.30
00:23:19 --- part: patrickg left #forth
01:09:37 --- quit: Lord_Nightmare (Quit: ZNC - http://znc.in)
01:11:01 --- quit: f-a (Quit: leaving)
01:11:38 --- join: Lord_Nightmare joined #forth
01:19:50 --- join: xek joined #forth
02:38:06 https://i.imgur.com/1LYCzDU.mp4
02:39:00 Canon Cat looks like a whole Emacs machine written in Forth
03:08:08 Very nice
03:10:25 I am working on a C Forth, I will make the code public soon when it's a bit more developed
03:10:50 I liked the idea of pForth but I think it's too basic, so I'm working on a fully fleshed out C Forth
03:11:44 Nice. Hopefully at some point I'll start implementing my own Forth too.
03:12:05 I've yet to check pForth.
03:13:23 One of the main "selling points" for me about Forth was that many people roll their own implementations.
03:17:38 It seems like in gforth, using marker and dynamic linking with C libraries don't mix well, or I'm doing something wrong.
03:18:16 After I load my code, and then use marker to reset the dictionary, and then reload my code, I get an "Invalid memory access" at libcc.fs:908:35: 0 $7F7BC8140520 lib-sym
03:19:20 Maybe gforth links it once, but after a marker reset, it doesn't load it a second time, and crashes.
03:46:51 Maybe it just doesn't handle it correctly
03:47:18 I know the gforth devs don't really think that highly of memory management using the dictionary, you'll get better support from them with ALLOCATE etc
03:47:29 Which is a bit sad
04:04:40 --- join: tech_exorcist joined #forth
04:12:06 --- quit: dave0 (Quit: dave's not here)
04:12:58 Hmm, was it okay to use the return stack inside a begin-while-repeat loop?
04:14:42 The standard says it's a separate control flow stack.
04:22:28 --- quit: cmtptr (Ping timeout: 260 seconds)
04:22:40 --- join: cmtptr joined #forth
04:28:04 --- join: f-a joined #forth
04:31:46 neuro_sys: The control flow stack exists at compile time, begin-while-repeat doesn't touch the return stack
04:32:30 The control flow stack basically gets the address of the BEGIN, and then when you do UNTIL it pops that off to calculate the branch offset, similar logic for REPEAT
04:35:33 On a lot of forths the "control flow stack" is actually just the data stack
05:22:25 --- join: proteusguy joined #forth
05:22:25 --- mode: ChanServ set +v proteusguy
05:25:58 I see
06:01:30 --- join: pareidolia joined #forth
06:08:57 neuro_sys: whoa, do you have a link to that mandelbrot program?
06:11:16 siraben: https://gist.githubusercontent.com/neuro-sys/d4b2a1c91c702e30edb6f9741d6bac6e/raw/4daee8cb64f7aa6927bcc69631ebd9ac0db9e82b/mandelbrot.fs
06:11:28 I will convert it to integer arithmetic later
06:11:38 runs under which forth?
06:11:46 And will do a different color algorithm
06:11:49 siraben: It's gforth
06:12:00 And only Linux due to the SDL dependency
06:12:01 Ah ok, is sdl.fs included?
06:12:05 Let me add that
06:12:15 macOS also has SDL and X
06:12:25 theoretically it should work
06:12:27 --- quit: f-a (Quit: leaving)
06:12:32 https://gist.githubusercontent.com/neuro-sys/d4b2a1c91c702e30edb6f9741d6bac6e/raw/7ab0db02ef1e02ade3f7383f8d798df78e9401eb/sdl.fs
06:12:35 Here's the sdl.fs
06:12:40 It should be SDL 1.0 and not 2.0
06:12:54 ah, why SDL 1?
06:13:12 No particular reason
06:13:40 Both should be available in most package repositories, maybe Brew on OSX?
06:13:53 sdl2 is usually separate
06:14:00 Yeah, though I use Nix :P
06:14:04 https://formulae.brew.sh/formula/sdl
06:14:20 Nix > brew any time (but needs more darwin maintainers for more exotic packages)
06:14:54 In any case, I'll give it a shot
06:15:05 doesn't depend on special linux syscalls right
06:15:59 Not really. All it needs is a put-pixel function and wait-for-keyboard etc.
06:16:11 You can remove everything SDL related and implement put-pixel yourself
07:56:30 --- join: f-a joined #forth
08:06:35 neuro_sys: Are you going to put all this in repos on your github at some point?
08:07:04 I'd like to port this to my forth when I'm ready to do that, would be a good C interface demonstration
08:07:18 And performance comparison
08:08:56 SDL in forth?
08:09:39 Of course
08:10:21 gforth lets you use C interfaces, as do many desktop forths
08:13:56 oh yeah
08:14:01 --- quit: f-a (Read error: Connection reset by peer)
08:14:02 just didn't know somebody had tried it already
08:14:08 although ofc, it makes sense that somebody would
08:18:25 --- join: f-a joined #forth
08:27:04 --- quit: f-a (Quit: leaving)
08:28:32 veltas: Yes
08:28:51 I'm a bit fixated on fixed point arithmetic lately, so I'll get rid of FPU words.
08:30:06 Speaking of which, I'm reading this http://home.citycable.ch/pierrefleur/Jacques-Laporte/Volder_CORDIC.pdf used on a military aircraft in 1956 (B-58).
08:30:34 Not related to the mandelbrot though
08:39:01 Interesting
08:42:30 --- quit: jedb (Remote host closed the connection)
09:02:07 so this would be TRIVIAL in forth but it's a conundrum in c lol
09:02:38 im writing code to parse a json script for the user interface. in this json i can define a screen which contains windows and a menu bar
09:03:09 the menu bar contains pulldown menus and each menu item references a function in the main application
09:03:22 the functions for each menu item are pre-compiled into the executable
09:03:34 but the entire user interface is dynamically created at run time
09:03:58 how do i go from a string name of a function to the address of that function at run time lol
09:04:08 argh!
lol
09:04:34 im going to have to literally create an associative array at compile time giving the string names of every function and its address
09:10:34 macros might help a bit with that
09:10:37 But otherwise yeah probably
09:13:48 not using macros, that either means using the c preprocessor (sucks) or M4 and that sucks worse :P
09:14:10 i can do it without, i already know the solution, it's just clunky compared to forth :)
09:17:04 --- join: f-a joined #forth
09:17:54 > complains about C
09:17:58 > refuses to use the preprocessor
09:18:14 for something like this YES lol
09:18:19 not needed
09:18:40 create a typedef struct so people using my lib can create an associative array of functions
09:19:02 im not going to create a macro to populate that array because i think that would be FUGLY
09:19:21 im not even sure HOW to do that in C :)
09:20:04 each time a macro is invoked it would have to add an item to a linked list which is again not an economic way of doing it
09:20:24 you can create items of the typedef and populate an array with them
09:20:52 an array of pointers to each one - and each typedef gives a "string" name for the function and a pointer to it
09:21:30 parse in a "menu-function-name" and with that i can derive the menu item function address
09:21:42 the function name is not the same as what is displayed in the pulldown
10:05:59 veltas: Here it is https://github.com/neuro-sys/forth-mandelbrot/blob/main/mandelbrot.fs
10:20:19 --- quit: tech_exorcist (Remote host closed the connection)
10:20:52 --- join: tech_exorcist joined #forth
10:24:01 --- quit: tech_exorcist (Remote host closed the connection)
10:25:03 --- join: tech_exorcist joined #forth
10:46:51 That sdl.fs is a good demonstration of importing C functions in gforth
10:49:57 --- quit: gravicappa (Ping timeout: 268 seconds)
10:51:52 --- join: gravicappa joined #forth
11:09:50 I've heard people say forth is like a functional programming language, I kinda don't see why
11:10:01 err not even close
lol
11:10:05 forth doesn't have first class functions or anything
11:10:11 idk why people say it's like a functional language at all
11:10:21 I guess they're either misunderstanding forth or misunderstanding functional programming
11:10:22 or both
11:10:53 isnt functional programming all about lambda expressions?
11:11:33 yeah
11:11:38 higher order functions
11:13:42 you can say
11:13:44 mhhh
11:13:49 that when programming forth
11:13:56 you use a similar approach to pointfree style
11:14:06 this . that . thatother
11:14:11 THATOTHER THIS THAT
11:14:34 but that is a stretch
11:15:17 https://www.youtube.com/watch?v=_IgqJr8jG8M
11:15:35 This video makes some comparisons to lambda calculus and concatenative languages. It's a pretty cool talk BTW.
11:18:33 I'll have a look
11:26:11 nihilazo: No, Forth is like Lisp. And Lisp is barely like a functional programming language; if it was invented today it wouldn't be considered functional.
11:26:35 oh ok
11:26:52 idk how forth is exactly like lisp other than both having metaprogramming stuff
11:27:05 which is nothing to do with functional programming as an idea at all really
11:27:09 I'll watch that talk tho
11:27:12 It's like Lisp in the respect that it can have a very small implementation, you can do metaprogramming
11:27:31 The parsing is very simple
11:27:44 There are lots of similarities
11:27:46 veltas, java falls into that category too
11:28:01 Which category?
11:28:17 I don't think any of what I just said applies to Java
11:28:35 small implementation (can be embedded) and can be extended if that's what you mean by metaprogramming
11:28:58 The first half hour of the video is a survey of different computational models, but at around the 30 minute mark it goes into Forth (and generally stack based concatenative languages) and shows how you can see and use it in a functional paradigm.
11:29:34 mark4: The property it's least similar on is having a simple parser, Java's parsing is a C-style language's parsing, which is not as simple as Lisp or Forth.
11:29:55 also common lisp's parsing isn't actually simple...
11:30:21 I mean original lisp
11:30:35 agreed
11:31:10 --- quit: f-a (Quit: leaving)
11:31:11 Forth's parsing isn't simple either in some senses, there is a bit of nuance to what I mean
11:31:52 And I have not attempted to be more specific because really I find language comparisons very unproductive, I'm just trying to give some help to why it is people say Forth is similar to 'functional' languages
11:32:28 I think another reason might be that when I learned Forth, it was as novel to me as when I first programmed in Haskell (my first proper functional language), and made my head hurt just as much
11:34:57 Programming language fights are the worst headaches.
11:56:13 --- join: f-a joined #forth
11:57:04 Comparing programming languages is also meant to be constructive sometimes, I just find it doesn't work that well when you try and reduce it to paradigms
11:58:12 I find what I learn from different programming languages and styles can end up helping me anywhere, never where I expect
12:01:32 I like to enjoy them with no expectation, but I'm sure it helps me in my work life too.
12:03:32 They're pretty much like natural languages too, they are equivalent in the sense that you can form a logical proposition in any of them, but everyone's got their favorite, and one or two that they use daily.
12:04:42 Lately I'm enjoying Forth a lot, some of my friends think I'm now a bit crazy since I share and talk about it regularly.
12:08:54 Now I'm wondering, are there Forths that output machine code after compilation, stripping any compilation features? Like with zero overhead at runtime.
12:09:49 Yes, I think some embedded forths do this (symbol info is kept locally, the embedded device just gets pure threaded code)
12:10:45 You can also keep symbols and the dictionary separate, and have a separate space for interactive feature words, and then you can just cut off those features and symbols in your output image
12:11:23 And in SwiftForth for example, one of the licences allows you to use their code in your program, but you may not expose the compiler or interpreter to the user
12:12:06 I see, that's cool. I compared my mandelbrot with the equivalent written in C, and the C one obviously was way faster.
12:12:07 --- part: f-a left #forth
12:12:56 I kind of would like to compare it to some Forth that outputs optimized code. At some point I would also like to go into writing a Forth, at which point I'd like to focus on performance.
12:12:59 The fastest would be to vectorise the work (or do it on GPUs), C is probably easier to do that with too
12:13:19 Yes, but to be fair, I'm more interested in sequential performance.
12:13:42 I wonder why
12:13:47 Not that I care that it actually renders fast (in that case I'd write it in C and not Forth).
12:14:53 If I care about performance I care about getting the CPU to do it as fast as it can, and then I'm writing it either in C with intrinsics or some kind of vector extension, or for a GPU in a specialised compute shader language
12:14:53 Well, as I'm only interested in the performance comparison with C, and not that the program runs the fastest possible way.
12:15:14 And you can technically write the 'fast' part in C or assembly, and link it to forth
12:15:32 So then I would say you can get 99% of the performance by writing mostly forth (or any language really)
12:16:02 I think that's an important consideration when people decide on a language based on performance requirements; provided you can link C code, that doesn't matter as much as people think it does
12:16:28 And the people who claim they care about raw performance don't tend to optimise their C code that much anyway
12:16:31 Just interesting
12:16:36 True. I made a confusing remark. I'm not interested in writing performant code. I'm interested in how a Forth can compare to a C compiler in terms of performance (without vectorization or inline assembly tricks).
12:16:50 Yeah it's a valid question
12:20:02 I think desktop Forth performance is somewhat limited by interest, and also the enthusiasm for optimising Forth, and maybe lack of funding
12:20:55 i.e. I think if the right people cared enough then Forth could be comparable with stuff like LuaJIT, node, JVM in terms of performance
12:22:15 > Lately I'm enjoying Forth a lot, some of my friends think I'm now a bit crazy since I share and talk about it regularly
12:22:25 Forth is the ultimate "I'm not crazy, you are" language :P
12:22:41 Or "I'm not crazy, everyone else is crazy"
12:36:27 the thing I always hit with writing a perf-oriented forth compiler on paper was choosing the registers->stack mapping
12:36:54 either I have to do an uncomfy amount of inlining, or there's significant overhead going from code for which the stack exists to code where it doesn't
12:41:42 does json allow for templates the way php does?
12:41:57 not knowing PHP, almost definitely not
12:42:26 let me give you an example that im working on which is basically just SHORT HAND for the structures im going to be parsing
12:42:46 https://dpaste.com/HR7KXLQV9
12:43:24 anywhere within the main json blob that you see |attribs| you would mentally replace that structure with the json structure at the bottom of the file
12:43:36 it would be nice if json parsers allowed you to create macros like that
12:43:38 I'd recommend using yaml or dhall or something as an authoring format, and converting it to JSON
12:43:44 yaml has pointers that'd let you do this
12:43:49 no. json is ubiquitous and very simple :)
12:44:02 yaml->json and dhall->json are lossless conversions
12:44:08 im writing my own parser lol i can implement this :)
12:46:17 like that's not json at that point :P
13:07:54 i know :)
13:07:57 its jsonish
13:08:15 hmm i could implement json loops! }:)
13:08:35 omg please dont give anyone that idea :/
13:09:58 --- quit: sts-q (Remote host closed the connection)
13:11:07 >_>
13:31:18 Well, I've added something to my system that's rather radical and will raise eyebrows. I can motivate it by the dictionary search for a word. In that search, there is a linked list of vocabularies. Each one of those has a linked list of words. And each one of those has a string of characters that have to be checked. So it's a three-deep search. First imagine searching for a string that's not there.
13:31:20 You're guaranteed to get a false result, after searching the entire structure.
13:31:33 If that is your coding goal, it's fairly simple and clean. So imagine you've coded that.
13:32:07 Now consider having the search succeed. Here you are, way down at the bottom of that three-deep process, and now you're faced with modifying the code above you to propagate your successful result back to the top.
13:32:15 That complicates the code CONSIDERABLY.
13:33:05 So I have extended my stack frame system with what I call a "super frame."
Basically it gives me the ability to have my successful search result on top of the stack, and I can leap all the way back up to the top, just bypassing that whole process of unwinding.
13:34:20 The super frame restores both return and data stacks to their state when the frame is opened, except the TOS value cached in a register is maintained. That's how the result gets back up to the top.
13:34:48 --- quit: gravicappa (Ping timeout: 240 seconds)
13:35:35 Unlike the standard stack frame the process can't be nested. The required return stack value is cached in an otherwise unused register.
13:35:59 But it's such a radical idea I really can't imagine needing to nest it.
13:36:19 It would be possible to add a third stack to the system and use that to nest it, but I'm assuming I'll never do that.
13:37:59 this sounds like how I implemented exceptions a while back
13:38:02 I imagine there are structured programming fanatics that would just lose their minds over something like this.
13:38:15 Yes, it actually is similar to the error handling, except with a controllable return point.
13:38:17 iirc I solved nesting by having the "set up handler" code push the old handler to the return stack
13:38:35 Well, that's what the standard stack frame does.
13:38:48 Saves the frame pointer to the return stack, and then sets it equal to the current stack pointer.
13:39:05 It just gives a way to access stack elements relative to a fixed point.
13:39:27 I could do the same thing using the stack pointer, but I'd have to keep up with everything moving around as the stack changed.
13:40:03 When I close a standard frame, I put the frame pointer back into the stack pointer, so it avoids the need to clean junk off of the data stack.
13:40:10 Provides a clean return state for that.
13:40:35 hm, okay; my system at the time didn't do anything like that :P
13:40:47 Well, it's kind of unorthodox.
13:41:19 But it is helpful.
Makes it so in this nested search I just need to get the loop termination item on the top of the stack so I can test it, and then the framing stuff handles cleanup for me.
13:42:04 In the standard frame the TOS is recovered from the initial state as well. I need to ponder that and see if it's what I want.
13:42:26 In that case I'm not really trying to pass information back up.
13:43:46 I had the standard frame stuff in my last one, so I know I like it. This super frame thing I really haven't gotten much experience with yet - time will tell if it was a good idea or not.
13:44:28 --- quit: spoofer (Quit: leaving)
13:44:43 --- join: spoofer joined #forth
13:44:54 It really seems to be extremely helpful in this dictionary search case, though I haven't actually put that code in and made it work yet. It's still on the drawing board.
13:46:51 But the entire process of dictionary search, in the case of multiple vocabularies in the search order, is a pretty sophisticated process.
13:47:23 I could go back to my old one and see how much code I wrote last time - I can't believe it will be as terse as this candidate code, which is just 12 short lines.
13:47:46 Essentially implementing FIND.
14:25:21 remexre: Probably if you want to write a performance oriented forth compiler you should do a lot of research on stack-based JITs
14:27:07 KipIngram that error handling thing is just exceptions, you've come up with exceptions by another name
14:27:30 And possibly very slightly different semantics
14:27:45 veltas: I'm "not unfamiliar with" c2 (the JVM JIT), but it knows statically how call's affect the stack
14:27:49 which one wouldn't for a typical forth
14:27:54 calls*
14:28:02 Incorrect
14:28:34 which part?
14:28:35 If you write an optimising compiler you will need to do data flow analysis, many forth words can be statically analysed to know the number in/out
14:29:03 right, I'm saying once you have to call out to something that you can't statically analyze, e.g.
a deferred word, you're paying a really bad penalty that I couldn't find a way around
14:29:04 And also you could handle different stack results based on different input too
14:30:03 You can statically analyse a deferred word just-in-time
14:30:28 There are some situations where you don't have the same info, yes, although you could have an option to provide that info for performance
14:30:55 Many problems to solve, new ground to tread, I think it's as interesting and worth a pursuit as anything in Forth.
14:31:12 Not something I'll go into myself likely, I'll respect anyone who does though
14:44:01 I think I prefer simpler Forths though, lower bar for understanding, so highly optimised Forths I'm not too keen on. I'd rather optimise critical parts in assembly than maintain an optimising compiler
14:52:45 yeah, that's where I'm at at this point
14:53:10 although I do want to maintain an optimizing compiler, just the optimizations I'm doing are way before it gets anywhere close to being as low-level as Forth :)
14:55:36 veltas: Yes, I accept that description. It is an exception handler, and in the case of the dictionary search the exception is "success."
14:57:02 KipIngram: Another thing people do is a far exit by dropping from the return stack
14:57:05 The normal exception system always dumps me back in through WARM/QUIT, whereas this one lets me specify a location. If it was possible to nest these things I could just use the mechanism for normal errors. But that would require me adding a third stack, to save these exception roots on, and I don't know if I want to do that.
14:57:32 Obviously that is less maintainable and more error prone for any amount of sophistication in your code
14:57:42 Yes - this does that, basically, except it just has a return stack pointer value that it "drops back" to, without actually picking its way down.
14:58:13 It's a "rapid implementation" of an arbitrary number of "r> drop" pairs.
14:58:23 What does it do to the data stack?
14:58:30 Except it also restores the data stack.
14:58:45 {| saves the data stack pointer on the return stack; |} pulls it back.
14:58:57 The TOS entry that's cached in a register isn't restored, though.
14:59:12 Problematic
14:59:21 { ... } restores the TOS, but {| ... |} doesn't; that's your way of returning a value from below.
14:59:25 TOS should be an invisible optimisation
14:59:37 Fair enough
15:00:16 In this case there's a 0 |} just before the end of FIND. That 0 is your fail result. If the |} down below is executed, it's done with the success pointer in TOS, and that gets retained.
15:00:43 This really doesn't let me do the usual thing of having FIND return two values on success and one on fail.
15:00:49 I just get one result either way.
15:01:03 But I have a lot of words that do conditional tests without consuming the value, so I'm in good shape.
15:01:28 I have a common mechanism of sticking a . at the beginning of a word, and that causes one extra value to be retained.
15:01:45 So = consumes the top two items, but .= consumes only the top one. And so on.
15:02:04 So I can do .0= after FIND to branch to success/failure cases.
15:02:28 I have a .0=; conditional return, for example.
15:02:31 In my Z80 forth FIND (actually, a generalised version of FIND) is written in assembly
15:02:43 I've done that in the past.
15:02:48 Because it significantly improved performance at the interpreter
15:02:53 More recently though it's written in Forth.
15:03:01 Yes, I can definitely see that.
15:03:05 A 3.5MHz 8-bit processor benefits there
15:03:12 :-)
15:03:16 Yes, it would, wouldn't it?
15:03:27 I haven't imposed such a system on myself yet.
15:03:43 If it's not 'slow' I would write it in Forth generally, even TYPE was implemented in Forth on there
15:03:57 But my general idea is that this will be the OS of any gadgets I build during retirement.
15:04:02 So who knows what that will be.
15:04:05 Nice
15:04:13 I like stuff like that, very personal technical projects
15:04:21 Can get some good gems in there
15:04:24 Oh me too.
15:04:40 This seems a good way to "prepare" for it, while I still have work responsibilities.
15:04:54 How close to retirement are you?
15:04:54 Get this all working, get it so it will self and cross compile, and so on.
15:05:02 I'm 58.
15:05:14 So somewhere within 10 years, I assume.
15:05:34 So you're going to join the rest of us in the retirement age of 90 as the countries we live in go bankrupt :P
15:05:44 What I hope to do is have a good profiling system associated with this.
15:06:02 So I'd profile FIND, and find out exactly where the time is really being spent, and assembly optimize just those points.
15:06:23 You'd get most of the benefit just by optimizing the comparison scan for each name, I imagine.
15:06:43 Yeah probably, I had the same logic, but it was easy enough to just write the whole thing in asm at the time
15:06:58 That would whack away four or five of the 12 lines I've got now that I think implement this.
15:07:20 Sure. And then you're sure. :=)
15:07:34 I am kind of interested, though, in minimizing the amount of porting work associated with this system.
15:08:08 Well, not really "minimizing it." Reducing it as much as possible consistent with "close to optimum" performance.
15:08:39 I've got a layer in there that I refer to as "virtual instructions."
15:08:43 They're assembly macros.
15:08:53 And primitives are implemented using those.
15:09:06 So in theory I'd only need to port those macros, and the rest would be portable.
15:09:13 I figure I'll wind up with a few dozen of them.
15:09:39 Whereas if I wanted to absolutely minimize I could probably get it down to one dozen, or even less, but then I'd have some tacky performing primitives.
15:10:31 I'm trying to set it up so that I can get optimum primitive code for both x86 and ARM architectures.
15:11:54 I think the fig forth approach was good enough, just write colon definitions for large parts and write basic words in assembly
15:12:07 "basic" I mean simple, not BASIC
15:12:12 One of the things I did in the name of porting was put the terminal into raw mode, and implement all of the line editing in Forth. I really want to just need to replace KEY and EMIT and have that be it.
15:12:50 Plus I just don't like the standard way a cooked terminal does things.
15:12:54 It's not "Forth like."
15:13:21 Right - FIG got a lot of stuff right.
15:13:36 What threading model do you prefer?
15:13:44 I've always preferred indirect threading.
15:14:01 For one thing it makes CREATE/DOES> a lot easier to implement.
15:14:04 Hmm not sure
15:14:19 It just feels the most like original Forth to me.
15:14:28 Yes, and it's true to say
15:15:07 I learned all that really well before I ever even considered other alternatives, so they always seem a bit alien to me.
15:15:17 However it improves CODE definitions a lot to use direct threading, and has not a huge overhead on others
15:15:36 Yes, direct threading offers some definite pros.
15:15:53 TOS optimisation I like because it improves a lot of code and has really no impact on other code words
15:16:06 Yes, I think it's clear that caching TOS is an advantage.
15:16:06 It's hard to find a situation where it's negative
15:16:19 Caching more can have a much smaller advantage, but only if you have a really smart compiler.
15:16:22 It's a little harder to wrap your head around TOS when writing assembly though
15:16:29 The GForth guys have dug into that pretty deeply.
15:16:53 And you have to decide where the stack pointer is going to point.
15:17:03 Does it point to where TOS would be? Or to 2OS?
15:17:13 In this system I'm writing now it points to the 2OS location.
15:17:24 So TOS really belongs at [SP-CELL].
15:17:38 Pointing to a fake TOS might make sense if you want to easily spill the cached stack when calling
15:17:42 You could do it either way, of course.
15:18:08 I've never done one that works the other way, so I'm less familiar there.
15:19:05 But as far as the benefits of caching, you can't deny the advantage on @, which just becomes TOS = [TOS]; NEXT.
15:19:20 Or +1: INC TOS; NEXT
15:19:37 Sorry; 1+.
15:20:28 Because that's such an advantageous structure for performance, I have other words for the first few powers of two. I have 1+, 2+, 4+, and 8+, which come in really really handy when processing linked lists.
15:20:41 And the corresponding decrements.
15:20:53 They are handy
15:21:45 Those are used extensively throughout FIND, leaving me better off than I would be vs. an assembly version.
15:26:57 --- join: jedb joined #forth
15:36:58 --- quit: MrMobius (Read error: Connection reset by peer)
15:37:58 --- join: MrMobius joined #forth
15:40:03 --- join: X-Scale` joined #forth
15:41:44 --- quit: dys (Ping timeout: 265 seconds)
15:42:38 --- quit: tech_exorcist (Ping timeout: 265 seconds)
15:43:35 --- quit: shmorgle (Ping timeout: 240 seconds)
15:43:35 --- quit: X-Scale (Ping timeout: 240 seconds)
15:43:38 --- nick: X-Scale` -> X-Scale
15:43:58 --- join: tech_exorcist joined #forth
16:09:45 veltas: Starting in the last version I had a collection of conditional double returns (like 0=;; instead of 0=; ) and also a word n; that returned an arbitrary number of levels up - I only used that in one spot in the system, though.
16:10:12 I imagine that spot will get done with {| ... |} this time instead.
16:10:42 Counting up the number to use for n was dangerously "manual."
16:11:38 My |} word does NOT change the IP. So you roll on to the end of the word where |} is used. But when you hit ; it has the effect of the ; up in the word {| was executed in.
16:12:17 I wouldn't know what IP value I'd need at the time {| is executed - it's out there somewhere down the stream.
16:13:38 So when I finally make the topmost "search" call, after doing {|, the code after that word and up to the ; only executes in the failure case. In the success case I execute the code after |} down below, up to ;.
16:14:05 What that really means to me is that there won't be any code to speak of between |} and ;.
16:14:58 Up at the top you'd put the failure result on the stack if necessary, and down below you put the success result on the stack.
16:16:04 If the stack is already in those states, then ; will come right after |}
16:19:16 Yes, I too have pondered how to make the exception syntax nicer
16:19:37 I don't like ANS exceptions
16:34:50 --- quit: tech_exorcist (Quit: tech_exorcist)
16:36:41 Well, feel free to pilfer any of these thoughts if you decide you like them.
16:37:56 Actually, maybe there's nothing stopping me from saving the old value of that new register on the return stack. Then I could nest this. Why not?
16:41:33 --- join: shmorgle joined #forth
16:46:28 For one thing I think any control feature should inline actions, not use xt's
16:46:28 --- quit: clog (Ping timeout: 240 seconds)
16:46:28 --- log: stopped forth/21.03.30
16:47:11 --- log: started forth/21.03.30
16:47:11 --- join: clog joined #forth
16:47:11 --- topic: 'Forth Programming | do drop >in | logged by clog at http://bit.ly/91toWN backup at http://forthworks.com/forth/irc-logs/ | If you have two (or more) stacks and speak RPN then you're welcome here!
| https://github.com/mark4th'
16:47:11 --- topic: set by mark4!~mark4@cpe-75-191-74-68.triad.res.rr.com on [Sun Feb 28 11:55:01 2021]
16:47:11 --- names: list (clog shmorgle X-Scale MrMobius jedb spoofer pareidolia +proteusguy cmtptr xek Lord_Nightmare boru lchvdlch joe9 lispmacs[work] Vedran +mark4 proteus-guy rixard nihilazo cantstanya jess Keshl lispmacs klys crest_ djinni Kumool cp- rprimus jimt[m] siraben dddddd jyf ecraven xybre bluekelp veltas phadthai rann ovf dnm lonjil2 krjt mstevens a3f jn__ fiddlerwoaroof _whitelogger +KipIngram kiedtl @crc tolja tabemann mjl ornxka wineroots guan remexre TangentDelta)
16:47:11 --- names: list (dzho neuro_sys APic rpcope koisoke_ nitrix)
17:10:15 I'm not sure exactly what you mean, but it sounds positive somehow.
17:10:23 Sounds "performance oriented."
17:11:06 With indirect threading I can't literally inline code. But I can automatically construct "super-primitives," that is, a single code unit with the action of several primitives strung together.
17:11:14 Or I can use an assembler and go completely custom.
17:12:03 I think I like this idea of saving the superframe register on the return stack - with that I can use that feature to implement the error exceptions.
17:12:14 It's like the difference between ": with-xt ['] word rep ;" and ": inline 0 ?do word loop ;"
17:12:38 I see.
17:13:06 With indirect threading you can have a word that just starts executing from IP as machine code
17:13:20 So you can inline machine code
17:17:06 Hmmm. Yes, but I have to have a way to move the IP past it. I guess there would be a CODE or something like that?
17:17:13 then the code would just end with NEXT.
17:17:23 So the IP would be moved past it before you jumped to it.
17:17:51 No, I would set up IP in the code, have an easy macro that sets IP to the machine IP
17:18:27 It's not a CODE word, it doesn't need to use NEXT imo
17:19:00 Ok. That makes sense too.
17:19:09 Sure, it has to return to the threading.
17:19:14 When it's done.
17:19:20 That means ending in NEXT. 17:19:34 It needs to carry on the colon word it is inlined within 17:19:44 Yes, correct. 17:20:01 So you move IP to the location of the next compiled cell, and then when you run NEXT it will pick up from there. 17:20:17 If the word that starts the inlined section just jumps to it, then at end of that assembly section you might be able to write something to optimise this 17:20:23 I don't really see the advantage of putting it inline. 17:20:39 If it's inline you don't need a header/symbol 17:20:41 If it's a normal primitive you jump to it. But you have to jump to it inline too. 17:20:49 Oh, I see. 17:20:52 Ok, fair point. 17:21:16 So no real performance advantage, but an advantage in dictionary compactness. 17:21:27 Yeah, that's an interesting idea. 17:21:34 At end of inline code you can have a macro that either adds to IP the size of code so far, or CALL's a routine to continue with IP = return pointer 17:21:41 My definition cells and code are in the same memory section anyway. 17:21:57 Right. 17:22:01 I grok it now. 17:24:04 Yes this is why I prefer inline control structures over control words that take an xt 17:24:35 Although inline words are lighter than normal words still 17:46:37 Sure. It's really the same thing ." does - anything can go inline as long as you know its size. 17:47:34 All this stuff is seriously good fun - all the ways Forth lets you diddle and twiddle around.
:-) 17:49:38 Yup 18:15:33 all of this is alien to me, since the forth I'm working on is SRT/NCI; inlining just involves grabbing a word's definition and copying it into the code being compiled 18:15:51 aside from certain words which combine with preceding constants 18:37:08 --- join: eli_oat joined #forth 18:42:34 --- quit: eli_oat (Quit: WeeChat 2.8) 18:42:50 --- join: eli_oat joined #forth 18:46:32 --- join: boru` joined #forth 18:46:35 --- quit: boru (Disconnected by services) 18:46:37 --- nick: boru` -> boru 18:50:58 KipIngram, about what you were saying there about 1+, 2+, 4+, 8+ 18:52:11 in my zeptoforth I defer constants when they are initially compiled, so if the following word is one of a small number of words such as + and -, if the constants are in the right range, they can be compiled right into the instruction for the word, avoiding pushing and popping on the stack 18:52:33 --- quit: eli_oat (Quit: WeeChat 2.8) 19:00:04 Oh, that's interesting. 19:00:06 Nice idea. 19:00:45 Yeah, I just saw no point in all that stack commotion given the number of times I add and subtract certain values in the system. 19:01:13 Especially when it completely eliminates all memory access (just updates the TOS in-register). 19:01:18 SOOOO much more efficient. 19:03:50 How did you arrive at the name "zeptoforth"? 19:05:23 And yes - subroutine threading offers the most natural inlining of all. 19:10:11 Do you implement the CREATE / DOES> mechanism in zeptoforth? 19:10:32 If so, I'd be curious to know whether you found that difficult to do in subroutine threading. 19:10:50 I go back and forth over whether I really think of subroutine threading as actual Forth. 19:19:08 --- join: dave0 joined #forth 19:23:41 maw 19:23:48 Ok, I modified {| ... |} to make it nestable. I'm unsure if I want to use it for standard error handling, though. It would work great for handling errors.
But if there's ever an error condition I want to trap that I come upon when I'm INSIDE ANOTHER EXCEPTION FRAME, then it wouldn't do the right thing. 19:23:58 So I may keep error handling apart. 19:24:23 I've always had a little gleam in my eye around learning how to trap memory access errors, divide by zero, and that kind of thing, and handling those within the system. 19:24:32 I assume that would involve hooking some interrupt. 19:25:08 Hey dave0. 19:25:14 Long time - have you been well? 19:26:24 hey KipIngram 19:26:43 yup, i've been exercising lately! 19:26:50 going for nice walks 19:26:55 KipIngram: how are you? 19:29:15 Good for you. I seriously need to do that. I mean, SERIOUSLY. I'm sitting around letting myself get in awful shape. 19:29:21 I was just thinking about that over the weekend. 19:29:51 Even if it's just walking and standing up from where I work and waving the little 25 pound kettle bell I keep beside me around - I'd be worlds better off. 19:30:13 When I bought that bell I envisioned doing that regularly and gradually working up to heavier bells. 19:32:03 oh is it an exercise thing? lift the bell like weights? 19:32:31 i just walk :-) it's good cos you're allowed to exercise outside while it's covid 19:33:30 A kettlebell is just a little weight with a U shaped handle on it. 19:33:43 You just pick it up and move it around in various ways. 19:34:02 A kettlebell: https://www.roguefitness.com/media/catalog/product/cache/1/rogue_header_2015/472321edac810f9b2465a359d8cdc0b5/r/o/rogue-kettlebell-1.0-cerakote-h.jpg 19:34:38 aah right right 19:34:52 One of the things I do with it is stand up and hold it in both hands in front of me. Then lift it up in front of my face and carry it around my head in a circle (so it passes behind my head). 19:35:01 That seems to engage most of my arms and upper body. 19:35:02 i could really work on my shoulders 19:35:09 cool 19:35:18 That move I just described hits the shoulders well.
19:36:03 i'm getting muscular legs but not losing any weight :-p 19:36:18 instead of getting thinner, i've built up my muscles to carry the extra weight lol 19:36:34 Well, muscle weighs something - even if you don't lose weight if you build muscle it will affect your shape in a good way. 19:36:40 i haven't weighed myself cos it's embarrassing :-p 19:36:42 I've lost a goodly little bit of weight recently. 19:36:54 nice 19:37:03 Yeah - I know that feeling. I'm on a good track there at the moment though. Almost below 200. 19:37:04 do you eat healthy? 19:37:12 Pretty healthy, yes. 19:37:46 that's as important as exercise and vice versa 19:37:57 Yeah. 19:38:36 I've cut my drinking back a good bit too. I've never been a lush, or anything, but there've been times I've taken in too many calories that way. 19:39:16 i've totally quit beer, but did i tell you i was drinking non-alcoholic beer for a while? 19:39:40 heineken zero tastes really nice 19:39:56 No, we never discussed that that I recall. 19:40:22 So, I seem to recall you were working on a system last time we interacted. How's it coming? 19:40:44 oh like everything i did a bit and never finished it lol 19:41:23 KipIngram: there's a guy on a different channel "gilberth" who is making a homebrew cpu in the 50's style 19:41:30 it would be perfect for forth 19:41:33 :-) Yup - I know that feeling too. I got mine pretty finished, but then got busy and left it sitting for a while. I'm currently "reworking" it, which really means writing it again from scratch with a willingness to raid it for code. 19:41:40 http://clim.rocks/gilbert/b32/ 19:41:44 Cleaning up places that now strike me as untidy. 19:42:17 Wasn't it you that was doing something really clever with the rtn instruction? 19:42:26 It was your inner interpreter, if I recall.
19:42:39 here's a better link: http://clim.rocks/gilbert/b32/doc/intro.html 19:43:05 KipIngram: yup, i had rediscovered "return oriented programming" 19:43:44 https://en.wikipedia.org/wiki/Return-oriented_programming 19:43:57 :-) That was my first exposure to it - I was really interested in seeing how it turned out. 19:44:05 Seriously? The idea was already out there? 19:44:17 That's crazy, man. The world's just full of clever people. 19:45:01 i never knew how many tricks went into programming languages 19:45:36 Oh, another reason I'm re-doing this is to improve portability. 19:46:03 I want to use it on any gadgets I build in the future, so it will help if it's easy to port. 20:30:26 --- quit: dave0 (Quit: dave's not here) 21:50:05 --- join: dave0 joined #forth 22:06:13 --- join: gravicappa joined #forth 22:40:28 --- quit: gravicappa (Ping timeout: 260 seconds) 22:53:39 --- join: gravicappa joined #forth 23:02:44 --- join: jedb_ joined #forth 23:05:00 --- quit: jedb (Ping timeout: 240 seconds) 23:06:08 --- quit: gravicappa (Ping timeout: 240 seconds) 23:30:28 --- join: gravicappa joined #forth 23:48:00 --- quit: gravicappa (Ping timeout: 240 seconds) 23:59:59 --- log: ended forth/21.03.30