00:00:00 --- log: started forth/21.04.03
00:48:50 --- join: dave0 joined #forth
00:49:31 maw
01:29:03 --- quit: gravicappa (Ping timeout: 268 seconds)
01:35:45 --- join: gravicappa joined #forth
01:37:47 --- join: xek joined #forth
01:43:14 --- quit: gravicappa (Ping timeout: 268 seconds)
02:38:36 --- join: Zarutian_HTC joined #forth
03:31:49 --- quit: proteusguy (Ping timeout: 240 seconds)
03:32:19 --- quit: xek (Quit: Leaving)
03:45:59 --- join: proteusguy joined #forth
03:45:59 --- mode: ChanServ set +v proteusguy
05:32:36 --- join: gravicappa joined #forth
06:13:21 --- quit: Zarutian_HTC (Read error: Connection reset by peer)
06:51:44 --- quit: dave0 (Quit: dave's not here)
07:15:36 --- join: Zarutian_HTC joined #forth
07:17:42 --- quit: dddddd (Quit: dddddd)
07:58:34 --- quit: rprimus (Remote host closed the connection)
08:36:33 --- join: rprimus joined #forth
09:12:03 --- quit: Zarutian_HTC (Ping timeout: 240 seconds)
09:39:55 --- quit: gravicappa (Ping timeout: 268 seconds)
09:40:53 --- join: gravicappa joined #forth
10:42:08 --- join: tech_exorcist joined #forth
10:53:36 --- quit: tech_exorcist (Remote host closed the connection)
10:54:19 --- join: tech_exorcist joined #forth
10:55:18 --- quit: tech_exorcist (Remote host closed the connection)
10:55:40 --- join: tech_exorcist joined #forth
10:57:05 --- quit: tech_exorcist (Remote host closed the connection)
10:57:27 --- join: tech_exorcist joined #forth
10:58:39 --- quit: tech_exorcist (Remote host closed the connection)
10:59:15 --- join: tech_exorcist joined #forth
11:00:21 --- quit: tech_exorcist (Remote host closed the connection)
11:00:43 --- join: tech_exorcist joined #forth
11:01:27 --- quit: tech_exorcist (Remote host closed the connection)
11:17:10 No, it's not yet. I'm not ready.
11:18:08 Also Kevin McCabe's "Forth Fundamentals." Especially volume one.
11:18:38 Volume 2 is just a compendium of definitions, but volume 1 goes deep under the hood. It's where I really got my first proper training.
11:18:50 I don't think it's in print anymore, though.
11:19:23 I think it mostly documents FIG Forth.
11:20:23 But my first "Forth," written on a TRS-80 Color Computer in assembly, was a horror to look upon. I didn't know how anything under the hood was supposed to work, so I just made it behave the way it was supposed to behave, and it was probably 5-10 times longer than it needed to be.
11:20:49 I read McCabe and the scales fell from my eyes, and afterward I shuddered when I thought of that first one. :-|
11:26:47 So, it's probably good to keep in mind that merely seeing Forth operate is almost certainly not going to convey to a new student how elegant and simple it is internally.
11:27:57 I was first drawn to it because it was an "RPN language." I was in college and used an HP-41CV calculator, and swore by RPN. So that was the hook that first hooked me. But once I learned how brilliantly it's put together internally, that's what's kept me over the years.
11:28:52 It's also why I have remained a fan of indirect threading. Code threading just doesn't really "feel like Forth" to me.
11:29:42 dave0: Your "return threading" definitely still feels like Forth to me. Yes, it's direct threading instead of indirect, but it's a fairly ingenious way to get the uP to do next for you.
11:29:54 proteusguy: If you're willing to list it https://github.com/veltas/zenv
11:30:05 And I'll have another soon enough hopefully
11:32:51 --- join: Zarutian_HTC joined #forth
12:00:57 I have a friend I chat with elsewhere on irc, who is a former Dell dude and living a relaxed retirement as a consequence. CS training. He's fond of saying Forth looks like a slightly glorified macro assembler to him.
12:01:19 I see what motivates the comment, and I can't exactly say he's "wrong," but at the same time he just doesn't have a full appreciation.
12:01:32 And I haven't figured out how to give him one - he's not interested in actually diving in.
12:02:47 I've posted some code snips for him, but he hasn't taken the effort to actually pick through any of them.
12:04:11 I think it's very much like reading English, though - when you first learn to read, perhaps you look at individual letters and "piece together" words. Then you reach a stage where it's just the shape of the word, injecting meaning into your mind. And if you get really good, you get to where the shape of whole phrases shoves meaning into you. And then you'll read right over typos and grammar errors and so on
12:04:13 without noticing them - you just aren't looking at the structure at that fine a level any more.
12:04:34 But you have to spend a lot of time working with Forth to get to that level of "idiomatic comprehension."
13:18:09 --- quit: jess (Quit: updates)
13:24:20 proteusguy: Thanks. I'm currently reading Threaded Interpretive Languages by R. G. Loeliger, it seems to be very decent so far.
13:27:10 Speaking of which, it mentions (I've read it before elsewhere) that the way they implement the dictionary is such that it stores only the first 3 letters of the word names. Does it mean that if the user defines two words with overlapping initial 3 letters, that would cause an overwrite?
13:29:12 Ah yes, that's exactly what it mentions in the same paragraph. :)
13:47:58 --- quit: gravicappa (Ping timeout: 265 seconds)
14:00:54 Yes, some of the old systems did that, when RAM was precious.
14:01:04 It sped up dictionary searches too.
14:01:13 I wrote one of my early ones like that.
14:01:31 It stored the *count* and the first three letters.
14:01:44 So your candidate word had to be the right length, but only the first three letters had to match.
14:02:09 So it would only be a redefinition if it was the same length and had matching first three letters.
14:03:23 Also, very few of the early systems would actually overwrite something already in the dictionary. The new word would still get added at the end.
You couldn't know ahead of time that the new word would fit in the space occupied by the old one.
14:03:45 So what really happened is you just made the old definition "unfindable," because your search would terminate when it found the redefined word.
14:17:43 --- join: jess joined #forth
14:26:26 It occurs to me that it would be possible to scan a dictionary and identify how many first characters needed to match to be sure of no collisions. Then you could limit searches to that to gain speed later.
14:27:07 Are concatenative FORTH-inspired (but not actually FORTH) languages welcome here?
14:28:06 Another way to speed up searches is to invest 1kB or so in a small hash table, and load it with pointers to various words. You'd always confirm a match, but you could go straight to the right place instead of searching for it.
14:28:17 I'm not sure how well that would play with vocabularies, though.
14:29:44 kiedtl: For me that would depend on the exact discussions that came up - on how "applicable" they were to Forth. I'd be fine with some leeway there, but I'd want others to express an opinion as well.
14:30:19 What's an example of such a language?
14:30:52 I'm prone to including non-standard mechanisms in my Forths, so I might learn something fun. :-)
14:32:52 I for one (being a newcomer to Forth) think one of the nice things about Forth is that it's not too strictly defined. So I'd definitely welcome it, even though there's a separate concatenative languages channel, I like it here. :)
14:33:08 I agree.
14:33:42 I'm not a CS guy - I'm not even sure what a "concatenative" language is.
14:33:47 * KipIngram is googling...
14:35:01 --- join: dave0 joined #forth
14:35:52 KipIngram: Factor, Kitten (abandonware as far as I can tell), Joy, the language I'm working on
14:35:59 re examples of languages
14:37:11 CS folk like to think in terms of abstract notions, and are averse to physical, earthly contraptions that deal with the ugly and gruesome facts of life.
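The truncated-header scheme described above (store only the count and the first three letters, search the dictionary newest-first so a "redefinition" merely shadows the old word) can be sketched in Python. This is a hypothetical model for illustration, not code from any system mentioned in the log:

```python
# Model of an old-style Forth dictionary header: each entry stores only
# the name's length (count) plus its first three characters. Lookups scan
# from the newest entry backward, so a "redefinition" shadows the older
# word instead of overwriting it - the old entry just becomes unfindable.

def make_key(name):
    """Header key: (count, first three characters)."""
    return (len(name), name[:3])

class Dictionary:
    def __init__(self):
        self.entries = []  # appended in definition order

    def define(self, name, body):
        self.entries.append((make_key(name), body))

    def find(self, name):
        key = make_key(name)
        # Newest-first, as a linked-list Forth dictionary search would be.
        for entry_key, body in reversed(self.entries):
            if entry_key == key:
                return body
        return None

d = Dictionary()
d.define("DUPLICATE", 1)
d.define("DUPLICITY", 2)   # same count, same first 3 letters: shadows it
print(d.find("DUPLICATE")) # -> 2, the newer entry wins
print(d.find("DUP"))       # -> None, different count
```

Note the two failure modes from the discussion: words of equal length sharing their first three letters collide, while a shorter word like DUP stays distinct because the count must also match.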
14:38:31 maw
14:38:48 Or in other words, they want to be mathsy, and not "machine operators".
14:39:02 lol
14:48:31 That is true, but both approaches have merit.
14:48:44 Forth definitely has an "informality" to it.
14:50:40 I've had it in mind for a while that at some point I want to investigate the use of regular expressions in a Forth system.
14:50:52 I have no idea how yet, but it just seems like it might be fruitful.
14:51:41 Hmm that's a good idea.
14:52:56 In one of Chuck's old books he discusses the idea of first trying to find a word in the dictionary, and if you can't you peel a prefix or suffix character off of it and try again.
14:53:15 Eventually when you do find a match, the prefix/suffix is then applied as a parameter.
14:53:32 He had some examples of situations where it looked useful, but I can't remember what any of them are.
14:53:53 This was something he wrote in the VERY VERY early days, before Forth really had the final form it took.
14:54:52 Anyway, though, both of those things (regular expressions and prefix/suffix processing) are kind of a step away from standard Forth and toward a stronger "syntax."
14:57:37 Though one of the things I do love about Forth is its "rabid simplicity."
14:59:11 Anyone mind critiquing the syntax/design for the previously-mentioned language I'm working on? Here's a sample: http://0x0.st/-cXq.txt
14:59:54 are : ;; delimiting the "body" of the IF?
14:59:59 I'll take a look.
15:00:24 and is there a reason for them to be necessary? as it's usually implemented, IF embeds the branch by being immediate
15:00:39 remexre: Yeah, they are. {} is taken up by tables/arrays.
15:01:02 Yes, they are necessary. There are no immediate words in this language; parsing isn't done in the traditional FORTH way.
15:01:10 i.e. the parser is always in COMPILE mode.
15:01:16 s/COMPILE/compile
15:01:22 Is that the definition of "word"?
15:01:33 of .n, I think?
15:01:37 No, it's the definition of .n
15:01:43 Yeah, .n prints out a number.
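One possible reading of the prefix/suffix lookup idea attributed to Chuck above, sketched in Python (this is an illustrative guess at the mechanism, not his actual code, which the log doesn't show): if a token isn't found, peel characters off its front or back until a core matches, then return the peeled characters as a parameter.

```python
# Hypothetical sketch of prefix/suffix dictionary lookup: try the whole
# token first, then progressively shorter contiguous cores; when a core
# matches a dictionary entry, hand back the peeled-off prefix and suffix
# so the matched word can treat them as a parameter.

def find_with_affix(dictionary, token):
    """Return (word, prefix, suffix) for the longest matching core,
    or None if no core of the token is in the dictionary."""
    for n in range(len(token), 0, -1):        # longest cores first
        for start in range(0, len(token) - n + 1):
            core = token[start:start + n]
            if core in dictionary:
                return core, token[:start], token[start + n:]
    return None

words = {"DUMP": "dump-code", "+": "plus-code"}
print(find_with_affix(words, "DUMP'"))  # ('DUMP', '', "'")
print(find_with_affix(words, "XDUMP"))  # ('DUMP', 'X', '')
```

The longest-core-first order matters: without it, a one-character word like + could hijack tokens that contain it.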
15:01:46 I see.
15:02:15 It seems long, for a single definition. Has the look on the page that a C function might.
15:02:46 I'm afraid I've swallowed the "factor, factor, factor" pill rather firmly. :-|
15:02:48 yeah, I get that feeling too, but I'm not sure how much of that is 8ch tabs and only using 10 or so chars per line
15:02:59 Right.
15:03:07 The compiler can *almost* compile that rn. Just the `until` needs to be implemented, I'm using a retro-forth-esque until implementation that works with quotations for now
15:03:26 8ch tabs and 10 chars per line are optional ofc, I just find that using less per line keeps things clear
15:03:28 for me, anyway
15:03:37 I know a lot of people hate 8ch tabs with a passion, heh
15:03:52 I guess if the comparison is helpful, in my Forth that's https://p.remexre.xyz/XiZwuVE8UAA=
15:04:11 which yeah, is much more aggressively factored
15:04:57 I don't have mine in one place right now. Let me go see if I can pull it out - I have the forth interspersed with the assembly as "comments"
15:04:58 (this has prefixes on "private things" because it was written before I'd implemented "namespaces")
15:05:50 It's possible that factoring that into several definitions is something I'd do later, but I'm a bit more concerned with stabilising the syntax/lang impl atm than optimisations
15:06:38 well, I'd consider factoring to be a readability optimization :)
15:06:51 --- join: f-a joined #forth
15:07:37 I felt like an amalgamated version (mine) was more readable than yours, but that's probably because I hate SCREAMING FORTH CODE with a passion :^)
15:07:59 so if you're trying to determine how readable/not it is, I'd try to figure out what very readable programs in your syntax are, and how readable they are / how much pain it is to write them that way
15:08:23 --- part: f-a left #forth
15:08:31 ah, yeah, that's yet another thing I did before I finally bit the bullet and implemented namespaces -- I was gonna make everything provided by the forth
system CAPS and everything in user code small
15:09:07 and some private things in the system are PREFIX/name...
15:09:13 really I should go back and fix all that...
15:09:26 How are you implementing namespacing? I was just going to define an `import' word that loads a file and prefixes all definitions with 'module:'.
15:09:47 immediate words :) 1sec
15:10:18 hmmst
15:10:26 so the implementation is https://git.sr.ht/~remexre/stahl/tree/main/item/kernel/src/common/forth/01-modules.fth
15:10:38 example is https://git.sr.ht/~remexre/stahl/tree/main/item/kernel/src/common/crypto/20-blake2s.fth
15:10:47 Ah, you're the nondocumented stahl fellow :P
15:10:53 s/non/un
15:11:08 there's a website with.... some documentation now
15:11:34 Nice. I'll have to remember to take a look at some point.
15:11:36 but yeah basically, END-MODULE( goes and rewrites the dictionary; first smudging everything since MODULE, then unsmudging everything mentioned before a )
15:14:04 Interesting. It's a non-option for me, though, since I'm eschewing immediate words.
15:15:16 Ok, mine is here:
15:15:18 https://pastebin.com/wmqWSbrf
15:15:22 It's long, but it does a lot.
15:15:31 Features listed in the paste.
15:15:50 That definition of FOLD looks hideously long to me now.
15:16:28 Words defined with :: instead of : are "temporary" - after defining NUMBER I would use .wipe to remove those headers.
15:16:53 Floating point numbers are left on the FPU stack.
15:17:08 And only base 10 is supported for floats.
15:18:31 Yeah, immediacy doesn't combine well with a more formal language structure.
15:18:49 It's really sort of a hack, but Forth is "hack-tolerant."
15:18:56 Are the temporary words inlined where they are called?
15:19:13 Forth *is* a hack, if you think about it. It's just a list of functions to call :P
15:19:21 A hack to save memory.
15:19:25 No, but the space taken by headers is completely recovered.
15:19:36 They do have a permanent CFA/PFA pair, though.
15:19:36 uh
15:19:46 If they're wiped
15:19:57 How would they be called when the word calling them was called?
15:20:23 cfa/pfa is all that's needed to call it if you have the address
15:20:25 After compilation you don't need the headers anymore - the definition that uses a temp word has its CFA in the definition.
15:21:01 My current effort has some programming constructs I didn't have when I wrote this, so I will be endeavoring to do better.
15:21:23 Ahhh. I misread and thought you said that the entire dictionary entry was wiped, not just the heads.
15:21:26 *headers
15:21:41 No, the CFA/PFA pair and the definitions themselves are permanent.
15:21:49 Yes, I understand now.
15:22:02 It'd be interesting to make a Forth that inlined everything.
15:22:18 Though it wouldn't work if an entry did magic with the return stack.
15:22:21 Implementing that temporary def support was rather involved - I'm taking a simpler approach this time.
15:22:45 Temporary words will just have a bit set in their header that makes them unfindable later.
15:23:09 Dictionary searches will still have to pass over them - they just won't match.
15:23:58 Also these days I use .: instead of ::
15:24:15 It's supposed to evoke the hiding of Linux filenames when they begin with .
15:24:23 Neat.
15:31:09 So, what exactly is the payoff in your case for avoiding immediate words?
15:48:48 A cleaner interpreter, imho. No compiling modes (everything's in COMPILE mode), everything's parsed/executed as it would be in another language's VM, slightly more conventional syntax, etc
16:09:09 So you don't really have an interactive interpretive environment?
16:12:07 Chuck got rid of his STATE as well - he started controlling the color of each word he typed, and let the color determine that stuff.
16:12:27 So you could always compile, anytime, and always execute, anytime, and so on
16:12:43 He got rid of BASE by using different colors for decimal and hex numbers.
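The "unfindable bit" plan described above (the simpler successor to the .wipe mechanism) can be modeled in a few lines of Python. This is an illustrative sketch with invented names, not code from the pastebin system:

```python
# Model of temporary words hidden via a header flag: setting the bit makes
# a word invisible to dictionary search without reclaiming its space, so
# already-compiled references to its execution token keep working.

HIDDEN = 0x01

class Header:
    def __init__(self, name, xt):
        self.name = name
        self.flags = 0
        self.xt = xt  # stands in for the CFA/PFA pair

def find(headers, name):
    for h in reversed(headers):   # newest first
        if h.flags & HIDDEN:
            continue              # searches pass over it; it never matches
        if h.name == name:
            return h.xt
    return None

def wipe(headers, names):
    """Like .wipe: make helper words unfindable, keep their code callable."""
    for h in headers:
        if h.name in names:
            h.flags |= HIDDEN

dictionary = [Header("helper", 100), Header("NUMBER", 200)]
wipe(dictionary, {"helper"})
print(find(dictionary, "helper"))  # None: hidden from lookup
print(find(dictionary, "NUMBER"))  # 200: still findable
# helper's xt (100) is untouched, so NUMBER's compiled call to it still runs.
```

This matches the trade-off stated in the log: unlike physically removing headers, searches still walk over hidden entries, but the implementation is far simpler.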
16:13:05 I got rid of BASE by requiring explicit inclusion of non-decimal base information in the number.
16:13:54 And for outputting numbers in various bases, I have a setup that uses a C-like formatting string.
16:15:10 A version of TYPE that you pass a formatting string to as well - every time it encounters an escape character it plucks a number from the stack and outputs it as directed by the format field.
16:15:30 does your system "interpret" format strings or "compile" them?
16:16:10 It interprets them - it actually implements a little baby stack machine in a stack cell.
16:16:38 huh, neat
16:16:40 There's a little snip of code it runs for each encountered format string.
16:16:52 Yeah, it was fun to put together.
16:17:08 Did a whole lot of what printf does, and I feel like it must be smaller.
16:17:52 yeah, glibc printf supports the kitchen sink...
16:18:53 including stuff for which the solution is "want different things"
16:19:43 I'll use the same approach this time, but I'll probably re-write it.
16:19:57 Never hurts to rewrite a piece of code - it almost always turns out better.
16:20:22 yep :)
17:20:38 --- join: eli_oat joined #forth
17:27:19 --- quit: eli_oat (Quit: WeeChat 2.8)
17:52:43 --- quit: dave0 (Quit: dave's not here)
18:42:31 --- join: boru` joined #forth
18:42:34 --- quit: boru (Disconnected by services)
18:42:36 --- nick: boru` -> boru
18:54:46 KipIngram: There is an interactive repl, but input is 'compiled' and stuffed into a temporary word (anagolous to `main') before being executed.
18:55:00 analogous
18:55:02 *
18:55:05 I need to sleep.
19:25:17 --- quit: veltas (*.net *.split)
19:25:17 --- quit: ovf (*.net *.split)
19:25:17 --- quit: dnm (*.net *.split)
19:25:24 --- join: veltas joined #forth
19:25:55 --- join: dnm joined #forth
19:26:58 --- join: ovf joined #forth
19:28:33 --- quit: lispmacs (Ping timeout: 240 seconds)
19:40:29 --- join: dave0 joined #forth
19:43:04 maw
19:49:37 Ah, ok.
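The format-string TYPE described above can be illustrated with a tiny Python interpreter. The escape syntax here (%d, %x) is an assumption borrowed from C, since the log doesn't show the actual format characters used:

```python
# Illustrative model of a TYPE word that interprets a format string:
# each escape sequence pops a number from the data stack and prints it
# in the requested base, doing a small subset of what printf does.

def ftype(fmt, stack):
    """Interpret fmt; '%d' pops and prints decimal, '%x' hex,
    any other escaped character prints literally ('%%' gives '%')."""
    out = []
    i = 0
    while i < len(fmt):
        ch = fmt[i]
        if ch == "%" and i + 1 < len(fmt):
            spec = fmt[i + 1]
            if spec == "d":
                out.append(str(stack.pop()))
            elif spec == "x":
                out.append(format(stack.pop(), "x"))
            else:
                out.append(spec)
            i += 2
        else:
            out.append(ch)
            i += 1
    return "".join(out)

stack = [255, 42]                     # top of stack is the end of the list
print(ftype("dec=%d hex=%x", stack))  # dec=42 hex=ff
```

As in the log's scheme, this sidesteps a global BASE: the output radix travels with the format field rather than living in a variable.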
I've seen a Forth system that does that - executes line by line, but compiles the whole line before starting execution.
19:49:56 One advantage it offers is that control constructs like IF and so on can be used interactively.
19:50:41 Evening dave0.
19:53:27 --- join: Zarutian_HTC1 joined #forth
19:53:27 --- quit: Zarutian_HTC (Read error: Connection reset by peer)
19:53:51 maw KipIngram
21:03:56 Hey dave0 - refresh my memory on that return-based direct threading? I know you "return through" the definitions, so each cell of a definition is an address of code. What is the code pointed to? For a regular definition, would it be "docol"? How do your (:) and (;) operations work?
21:05:27 I'm thinking there might be something you could do with the "load effective address" instruction that would help out with (:).
21:05:53 I suppose for primitives the addresses in a definition just point right at the primitive, right?
21:05:57 yes i keep the address of the Enter primitive in %rbp
21:06:22 Ah, so the code pointed to is jmp %rbp?
21:06:23 at the start of a definition it's jmp %rbp which is only 2 bytes
21:06:26 yep!
21:06:43 Nice. How does the (:) code recover the address of the new definition?
21:06:47 it's only 2 bytes so it works all the way down to 16 bit cells
21:07:11 ah it grabs the address from %rsp-8
21:07:20 Got it.
21:07:23 Makes sense.
21:07:34 it makes execute a bit trickier
21:07:46 Yes, but figure-out-able.
21:07:58 but everything else is the same... and ret is only 1 byte! so it's really tiny
21:08:18 Right.
21:08:40 It feels like the best way to do a direct threaded implementation to me.
21:09:12 i felt clever to figure it out :-)
21:09:19 but i just reinvented it
21:09:21 So (:) grabs [sp-8], increments it, and puts it on the return stack.
21:09:28 exactly
21:10:17 Oh, wait - no I'm sorry - it puts rsp on the return stack. Then it gets [rsp-8], increments it by 2, and puts that in rsp.
21:11:07 hang on i'll check
21:12:04 yes, except instead of 2, i increment it a whole cell (64 bits)
21:12:29 Hmmm. But your new definition starts right after the jmp rbp?
21:12:31 just to make everything the same size as a cell
21:12:35 I see.
21:12:38 You have some pad.
21:12:54 well it's little endian, so the jmp %rbp is at the low address
21:13:04 Right.
21:13:19 yes, i waste 6 bytes, but who cares about that :-)
21:13:33 Well, you might be able to do something useful there.
21:14:00 I am thinking there's probably an lea instruction you could put there that would prevent you from having to look back at where you came from to get the new word address.
21:14:17 As far as I can tell, the lea instruction is the only one that can operate "PC relative."
21:14:57 So you could lea into rsp; you might get the whole (:) operation in those eight bytes.
21:15:11 you could use a call rbp .. oh wait no because that means writing to the stack which doesn't work
21:15:19 No, that writes over...
21:15:21 yeah.
21:15:43 Lemme go look at the details of LEA.
21:15:45 Brb...
21:16:03 it would be a neat trick to pack docol into the start of every word :-)
21:18:41 Ugh. Looks like LEA takes a lot of space.
21:20:17 You might read here:
21:20:19 http://www.nynaeve.net/?p=192
21:21:19 The goal would be to get rsp onto the return stack, get rsp loaded a few bytes ahead of the execution point, and then ret.
21:21:45 It's like you want something like this: lea sp, [*+4]
21:21:48 where the 4 is a guess.
21:22:04 oh yeah amd64 has rip relative
21:22:11 8 bytes probably just isn't quite enough.
21:22:18 You could probably do it if you allowed 16.
21:22:28 KipIngram: it's an interesting optimization!
21:22:30 And you'd avoid the performance cost of the jump.
21:22:38 How big is your stuff at rbp?
21:23:05 yes and it's a jump to a register so it might be hard to predict
21:23:18 KipIngram: oh not sure, i haven't looked at the binary
21:23:23 I use a jmp r15 at the end of each primitive.
21:23:33 i used rdi for the return stack
21:23:45 r15 is my "next," and is also the base address of the whole system.
21:24:03 So r15 allows that reg jmp, and also gets added to numbers that exist in my image as an offset.
21:24:33 ah yes i remember you saying that.. because macos wont let you use absolute addresses
21:24:41 I experimented with using lodsq in my next, but it turns out to be slower.
21:24:49 Right.
21:25:15 Once I get to where I can recompile my system, I could experiment with absolute addresses then. Since I could generate anything I wanted.
21:25:34 It would remove at least two instructions from next.
21:25:41 And a couple from (:).
21:25:49 hmm
21:26:05 you could set up rdi to use a stosq
21:26:22 For the return stack?
21:26:43 so it'd be like mov rax,rsp ; stosq ; lea rsp,[rip+4] ; ret
21:26:46 yep
21:26:56 Oh - for you.
21:26:58 Yes.
21:27:01 to save as many bytes as possible in docol
21:27:15 Right. Stosq is a great idea.
21:27:20 You should tinker with that.
21:27:49 i have to finish the rest of it first :-) it's only been 2 years lol
21:27:55 :-)
21:28:11 I wish there was still a program like DEBUG from back in the DOS days.
21:28:25 where you could rapidly tinker with little sequences like that and see how big they were.
21:29:15 i didn't use DOS .. could you build a little machine code program with DEBUG ?
21:30:13 Here you go:
21:30:15 241 0000005F 4889E0 mov rax, rsp
21:30:17 242 00000062 48AB stosq
21:30:19 243 00000064 C3 ret
21:30:23 That's six bytes.
21:30:36 you have to set the new rsp
21:30:40 If you can get the lea instruction in two, you're golden.
21:30:41 to just after the ret
21:30:45 Yeah - I didn't know how to code that.
21:31:00 I just wanted to see how big the other stuff was.
21:31:22 you reckon inlining docol would be a win?
21:31:23 Playing some more - one sec.
21:31:27 okay
21:32:47 I think it's a bit too big to go in 8:
21:32:49 241 0000005F 4889E0 mov rax, rsp
21:32:51 242 00000062 48AB stosq
21:32:53 243 00000064 488D6002 lea rsp, [rax+2]
21:32:55 244 00000068 C3 ret
21:32:59 That's the wrong register, but the size should be typical.
21:33:09 10 bytes
21:33:13 You could definitely inline it if you were willing to spend 16 bytes.
21:33:20 And I definitely think it would be faster.
21:33:28 Whether it's worth it or not... don't know.
21:33:36 It's your system, so, you have to like it.
21:33:46 it would take a bit of setup.. with rsp and rdi
21:33:51 If it fit in the 8, I'd say absolutely.
21:34:02 yes forth is marvellous that you can just fiddle with it
21:34:02 Yes, and it would affect (;).
21:34:37 Oh, I have an idea.
21:34:42 Let me play some more. :-)
21:35:36 241 0000005F 4889E0 mov rax, rsp
21:35:37 242 00000062 48AB stosq
21:35:39 243 00000064 488D2501000000 lea rsp, [prim]
21:35:41 244 0000006B C3 ret
21:35:50 That actually points to an address just past ret.
21:36:43 That looks like it's [rip+1]
21:37:02 how are you making this assembly?
21:37:25 241 0000005F 4889E0 mov rax, rsp
21:37:27 242 00000062 48AB stosq
21:37:29 243 00000064 488D2501000000 lea rsp, [prim]
21:37:31 244 0000006B C3 ret
21:37:33 245 prim:
21:37:37 That address prim is just past the ret.
21:37:46 I just punched a few lines into my own source and ran nasm on it.
21:37:59 i'm gonna try
21:38:08 Good luck, man.
21:38:12 Bet it'll work.
21:40:27 i got exactly what you got for that assembly
21:40:52 very interesting
21:40:58 i never thought of inlining docol
21:41:23 it wouldn't work on i386 because it doesn't have pc-relative addressing
21:41:43 How's this:
21:41:55 Oh wait - missed something.
21:41:57 One sec.
21:44:26 Nah, the LEA instruction is just too big.
21:44:29 It's 7 bytes.
21:44:42 So you really can't do anything here unless you spend two cells.
21:44:48 --- join: pointfree joined #forth
21:45:38 That's right - i386 only has that mode for a handful of control transfer instructions. But the x64 changes made it available to many.
21:46:13 Since you're already spending 8 bytes, even though you're only using 2, this is a question of "is it worth 8 bytes per definition to inline (:)"
21:46:50 it's the classic computer trade-off :-)
21:46:54 speed vs memory
21:47:05 It also avoids that memory reference that you have to do.
21:47:11 Where you read out [rsp-8]
21:47:12 oh true true
21:47:35 So, you could avoid a jump, and a full cell read.
21:48:23 Anyway, it bears thinking on. Let me know if you try it.
21:49:01 I'll do it once more, with aligns before and after.
21:51:05 Here you go - looks like it's 13 bytes to do the full inlining:
21:51:07 241 0000005F 90 align 8
21:51:08 242 00000060 48 89 E0 mov rax, rsp
21:51:10 243 00000063 48 AB stosq
21:51:12 244 00000065 48 8D 25 04 00 00 00 lea rsp, [prim]
21:51:14 245 0000006C C3 ret
21:51:16 246 0000006D 90 align 8
21:52:38 prim was right after the second align 8.
21:54:57 ooh
21:55:14 try lea rax,[prim] ; xchg rax,rsp ; stosq ; ret
21:55:32 i believe xchg is 1 byte smaller
21:55:44 but maybe slower
21:55:53 far out i should fiddle with this
21:58:28 I do seem to recall xchg as being a tight instruction.
21:58:40 --- quit: cmtptr (Ping timeout: 240 seconds)
21:58:51 Yes - fiddle. That still wouldn't get you to 8 bytes, so it would be a matter of choosing the fastest one.
21:59:32 It's a damn shame it uses so many bytes in the LEA instruction even when your offset is so small.
21:59:41 --- join: cmtptr joined #forth
21:59:48 Hmmm. Let me try something else. ;-)
21:59:55 ok :-)
22:02:07 No, no go. Even if you arrange the offset in lea to be zero, it still consumes all those bytes.
22:02:26 I was thinking it might be smaller to lea a zero offset addr and then just add.
22:02:36 I think you got the best one, though.
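The return-threading scheme discussed above can be modeled in Python to make the control flow concrete. This is a deliberate simplification of dave0's design: the variable sp stands in for the hardware rsp walking the thread, popping a cell models the RET instruction, and the Enter step models the jmp %rbp stub saving the caller's thread pointer to a separate return stack (rdi in the real system). Cell widths, the %rsp-8 fetch, and padding are abstracted away:

```python
# Toy model of "return threading": each cell of a definition holds the
# address of code to run; a colon definition begins with an ENTER stub;
# RET (modeled by popping the next cell) drives next.

ENTER, EXIT, LIT = "enter", "exit", "lit"

def run(mem, start, data):
    sp = start                # models rsp walking the threaded code
    rstack = []               # models the separate return stack (rdi)
    while True:
        target, sp = mem[sp], sp + 1   # RET: fetch next code address
        if target == EXIT:
            if not rstack:
                return data
            sp = rstack.pop()          # resume the caller's thread
        elif target == LIT:
            data.append(mem[sp]); sp += 1
        elif isinstance(target, int):  # address of a colon definition
            assert mem[target] == ENTER   # the jmp %rbp stub
            rstack.append(sp)             # Enter: save caller's thread ptr
            sp = target + 1               # continue just past the stub
        elif target == "dup":
            data.append(data[-1])
        elif target == "+":
            b, a = data.pop(), data.pop(); data.append(a + b)

# DOUBLE: dup + ;    MAIN: lit 21 DOUBLE ;
mem = [ENTER, "dup", "+", EXIT,   # DOUBLE at address 0
       ENTER, LIT, 21, 0, EXIT]   # MAIN at address 4
print(run(mem, 5, []))            # [42]
```

The point the model captures is the one from the log: nothing special dispatches calls - advancing through a definition *is* returning, and only entering a nested definition needs extra machinery.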
22:02:53 Just between you, me, and the lamppost, I'd absolutely do this. :-)
22:03:15 If I ever decide to try a direct threaded system, I will do it.
22:05:05 lol
22:05:11 cool :-)
22:05:20 Ok, that was fun.
22:05:23 i love how easy it is to experiment with forth
22:05:29 Yes indeed.
22:05:55 Maybe you've already implemented this so you know, but it won't surprise me if you find implementing CREATE / DOES> rather trying.
22:06:08 I've always felt like indirect threading really makes that easier.
22:07:06 i'm not far along
22:08:15 So my system has a CFA/PFA pair for : definitions, each of which is a pointer.
22:08:30 The CFA points to code (docol), and the PFA *points to* the new definition list.
22:08:46 That CFA/PFA pair is up in higher memory right by the name string.
22:09:06 So, primitives don't need the PFA - and my primitive header macro doesn't provide one.
22:09:21 So that saves a bit of wasted space.
22:10:52 Right now my constants actually have the PFA point to a cell elsewhere with the constant value in it, but I might just have the value be IN the PFA pointer cell. It's only a four-byte cell, though, so that would make it impossible (with that mechanism) to have 64-bit constants.
22:11:02 --- join: gravicappa joined #forth
22:11:23 One thing I can do that you can't is have multiple entry points in a definition.
22:11:42 You can't define an entry point without having your code snip there.
22:12:07 Since my CFA/PFA pointers are elsewhere, I can have as many names as I wish pointing into the same list of cells.
22:13:19 So right now, I've got this new one to the point where it has a QUIT loop that just lets me type in lines of code - it applies BL WORD iteratively to those lines and prints individual words back at me, one per line, until it runs out, and then loops.
22:13:33 So next stage is to implement the dictionary lookup of the words.
22:13:52 And finish that QUIT loop out into an actual Forth outer interpreter.
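The multiple-entry-point trick described above follows directly from keeping the CFA/PFA pair away from the code body. A small Python model with invented names (FULL and TAIL are hypothetical words, not from the system being described):

```python
# Model of headers that live apart from the threaded code: each header
# holds a CFA (which interpreter code to run) and a PFA (a pointer into
# a list of cells). Because headers only *point into* the body, several
# names can enter the same body at different places.

DOCOL = "docol"

class Word:
    def __init__(self, name, cfa, pfa):
        self.name = name
        self.cfa = cfa   # code field: docol for colon definitions
        self.pfa = pfa   # parameter field: index into the cell list

threaded_code = ["setup", "common-part", "exit"]  # one shared body

# Two names, one body: FULL runs everything, TAIL enters one cell later
# and skips the setup step.
full = Word("FULL", DOCOL, pfa=0)
tail = Word("TAIL", DOCOL, pfa=1)

def run(word):
    assert word.cfa is DOCOL
    return threaded_code[word.pfa:]   # the cells this entry point executes

print(run(full))  # ['setup', 'common-part', 'exit']
print(run(tail))  # ['common-part', 'exit']
```

With the conventional layout, where the header sits immediately before the code, a second entry point would need its own header embedded mid-body, which is exactly the limitation the log points out.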
22:14:49 It's already snapshotting my system before calling INTERPRET, so I'm all set to implement error detection and recovery as well (error recovery will copy the snapshot back into the working image).
22:15:05 So if a line I type encounters an error, it resets everything back to how it was at the start of the line.
22:16:35 I've got the dictionary lookup code written in a text file - just need to enter it into the assembly file.
23:59:59 --- log: ended forth/21.04.03
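The snapshot-before-INTERPRET recovery scheme described above can be sketched generically in Python (the state contents and error condition here are invented for illustration):

```python
# Sketch of snapshot-and-restore error recovery: copy the interpreter's
# mutable state before each input line, and restore it if the line raises
# an error, so a bad line leaves no trace in the image.

import copy

class Interpreter:
    def __init__(self):
        # Stand-in for the working image: dictionary plus allocation ptr.
        self.state = {"dict": ["DUP", "DROP"], "here": 0x1000}

    def interpret(self, line):
        if "bad" in line:                    # invented error condition
            raise ValueError("undefined word")
        self.state["dict"].append(line)
        self.state["here"] += 8

    def quit_loop_step(self, line):
        snapshot = copy.deepcopy(self.state)  # taken before INTERPRET
        try:
            self.interpret(line)
        except ValueError:
            self.state = snapshot             # roll the whole image back
            return "error: line discarded"
        return "ok"

forth = Interpreter()
print(forth.quit_loop_step("SQUARE"))  # ok
print(forth.quit_loop_step("bad"))     # error: line discarded
print(forth.state["dict"])             # ['DUP', 'DROP', 'SQUARE']
```

The appeal of the approach is that recovery needs no per-operation undo logic: whatever a half-compiled line scribbled on the dictionary is wiped out wholesale by the copy-back.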