00:00:00 --- log: started forth/18.10.04
01:10:49 --- join: ncv (~neceve@2a02:c7d:c5c9:a900:6eaf:6ef7:3b81:d5f6) joined #forth
01:10:49 --- quit: ncv (Changing host)
01:10:49 --- join: ncv (~neceve@unaffiliated/neceve) joined #forth
02:00:27 rdrop-exit: the BASIC on my old Sharp PC1500A did that. After entering a line it was tokenized and stored as a stream of tokens rather than characters.
02:01:12 This is not the same thing, you’re tokenizing source.
02:02:47 --- quit: proteus-guy (Ping timeout: 260 seconds)
02:03:15 --- quit: ashirase (Ping timeout: 264 seconds)
02:05:20 In a sourceless environment you would not be storing the source; you would store object code and just enough extra information to aid disassembly.
02:05:22 --- join: proteus-guy (~proteusgu@2403:6200:88a6:329f:2920:1ca0:b94d:d9b) joined #forth
02:09:02 --- join: ashirase (~ashirase@modemcable098.166-22-96.mc.videotron.ca) joined #forth
02:10:14 --- quit: nighty- (Quit: Disappears in a puff of smoke)
02:13:13 At least that is what I understood from Chuck’s interview.
02:13:32 It reminds me a lot of the M-Code cross-assembler/disassembler.
02:14:27 --- quit: proteus-guy (Ping timeout: 260 seconds)
02:15:43 It’s a more extreme approach than ColorForth.
02:16:48 --- join: proteus-guy (~proteusgu@2403:6200:88a6:329f:2920:1ca0:b94d:d9b) joined #forth
02:17:12 He discusses it in the last few minutes of the interview.
02:17:26 --- join: john_metcalf (~digital_w@host109-149-153-210.range109-149.btcentralplus.com) joined #forth
02:17:38 I’m hoping he’ll give further details at the upcoming SVFIG Forth Day.
02:42:07 --- quit: Labu (Quit: WeeChat 2.0.1)
02:43:28 --- join: Labu (~Labu@labu.pck.nerim.net) joined #forth
03:00:01 --- join: nighty- (~nighty@s229123.ppp.asahi-net.or.jp) joined #forth
03:38:46 rdrop-exit: I've spent a lot of time thinking about that part of Moore's thinking.
03:39:58 As far as I could tell from some of the older material on the Ultratechnology website, he has a persistent symbol table. What he stores in his "object code" is a pointer into that symbol table. There's a field in the symbol table that supplies the word's CFA. If the word hasn't been compiled yet, that field is zero.
03:40:13 When he compiles it, he sets that CFA field in the symbol table.
03:40:59 --- nick: Guest50250 -> KipIngram
03:41:08 --- mode: ChanServ set +v KipIngram
03:41:39 So when he's compiling a definition he just has to follow the object pointer to the symbol table, grab the CFA, and shove it into memory.
03:42:32 No search.
03:43:18 You can create effective compression schemes such that the most frequently used words (the most frequently used indices into the symbol table) can be stored compactly in the source.
03:44:11 I decided this had a limitation. If the symbol table can supply only one CFA value for any given entry, that implies that a given symbol string can have one and only one meaning throughout your system.
03:44:25 That conflicts with vocabularies, where you can define the same string to mean multiple things.
03:45:08 You could try to overcome that by "complexifying" the symbol table - try to have it so that it was the vocabulary path + symbol string that led to a symbol table entry.
03:45:24 But in my case I have multiple processes, and so processes can multiply define symbols too.
03:45:28 So I'd also have that to deal with.
03:46:44 You’re speaking of ColorForth; sourceless Forth was a different approach he was using in his CAD system before CF, one that he had given up on but is now revisiting.
03:46:57 I opted to deal with it by having my symbol table simply give me a string <--> unsigned integer mapping, and then I still have traditional linked header lists in each vocabulary / process that work in the usual way, except they store the integer instead of the string.
03:47:37 So I don't really consider my system "sourceless" in any way - I just consider it as supporting source compression.
03:48:22 And it likely will speed up compilation *some*, since I will only have to do integer comparisons when searching header lists.
03:49:21 I could probably play some trickery and have a spot in the symbol table to store one CFA and whether or not a symbol was only defined once in the system.
03:49:35 If so, I could use that CFA slot - if not, I'd have to search local dictionaries.
03:49:47 Then I'd get the benefit for any symbols that were not multiply defined.
03:51:28 Or something along those lines - I haven't really thought enough about it.
03:51:55 The source tokens could say "use symbol table CFA" or "use local dictionary."
03:54:38 Actually that wouldn't work, would it? It would imply an order in which source was loaded.
03:54:51 Once again, you’re speaking of an approach similar to CF which tokenizes/compresses source code; this is not the same approach as a sourceless Forth.
03:55:03 Anyway, I abandoned my attempts to figure out how to extend that idea to multiple meanings.
03:56:03 You’re speaking of an approach similar to ColorForth; sourceless Forth was a different approach he was using in his CAD system before CF, one that he had given up on but is now revisiting.
03:56:12 Well, the way he did it feels like it's a huge step toward sourceless - the stored form of the program more or less supplies the meaning independent of other structures.
03:56:20 I mean, the names have to be stored somewhere, right?
03:56:35 Or is he hashing them somehow?
03:58:07 And since hashes are one-way, how would you ever get a rendering of your source in an editor?
03:58:43 I don't see how to do this without a symbol table, and if you're able to store everything you need to compile IN the symbol table, that seems pretty sourceless to me.
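The string <--> integer interning scheme described above can be sketched in a few lines of Python. This is an illustrative model, not code from either system; `SymbolTable`, `intern`, and `find` are invented names. The point is that once every name is interned to an integer, dictionary search is a chain of single integer compares instead of string compares.

```python
class SymbolTable:
    """Maps each distinct name string to a unique small integer (and back)."""
    def __init__(self):
        self.ids = {}    # string -> integer
        self.names = []  # integer -> string

    def intern(self, name):
        """Return the integer for `name`, assigning a fresh one if new."""
        if name not in self.ids:
            self.ids[name] = len(self.names)
            self.names.append(name)
        return self.ids[name]

# Each vocabulary keeps a traditional linked header list, but each header
# stores the interned integer instead of the string, so searching the list
# is one integer comparison per entry.
def find(header_list, sym_id):
    for entry_id, cfa in header_list:
        if entry_id == sym_id:  # integer compare, no string compare
            return cfa
    return None
```

Multiple vocabularies or processes can each hold their own header list while sharing one symbol table, which is exactly what lets the same string carry different meanings in different places.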
04:02:50 When you want to view the source, you recover it by disassembling the object with the aid of supplemental info (opcode table, labels, comments). You store the object code and just enough extra info apart from the object code to reproduce the source.
04:03:07 --- join: dddddd (~dddddd@unaffiliated/dddddd) joined #forth
04:04:40 Yes, that describes how this would work too. You recover the source by using the tokens to access the symbol table, so that you can render the names.
04:05:08 I do see how some traditional Forth words might present problems there.
04:05:10 The source is not tokenized.
04:05:27 Like IF ... THEN, which doesn't get compiled in quite that way.
04:06:33 Ok, so give me an example.
04:07:01 What is stored for some arbitrary word foo?
04:08:19 And how do you recover "foo" from that when you're decompiling?
04:08:40 I'm just trying to glom on to exactly how this is different from tokenized.
04:11:25 A symbol table entry, that points somewhere in the object code.
04:12:21 Does the object code point back?
04:12:35 Or do you have to search the symbol table for the right entry when decompiling?
04:13:23 No, the object code is machine instructions for the processor; it’s not an intermediate form.
04:13:45 Ok. I'm starting to see the difference then.
04:13:54 So you would have to search the symbol table.
04:14:03 This wouldn't be as fast as his ColorForth era stuff, then.
04:14:41 But who cares - he isn't actually "compiling," so I guess in that sense it's "faster."
04:14:46 Since he's eliminating that phase altogether.
04:14:57 Those symbol table searches would occur at edit time, right?
04:15:09 Right
04:15:26 Ok, thanks - I think I get the difference now. Interesting.
04:17:06 So, regarding structures like IF ... THEN --> ?jmp, anything interesting to say about how he knows how to decompile that back to IF ... THEN?
04:17:18 Or did he change his whole language structure to accommodate this?
04:17:31 Maybe he just eliminated such things.
04:17:40 When thinking about Chuck’s ideas of Forth, I think it’s useful to think of it as assembly programming (sometimes for a real processor, sometimes for a virtual processor) rather than thinking of Forth as a compiled or interpreted language.
04:18:14 I guess that last question is "Did he change the language to eliminate decompilation ambiguity?"
04:18:41 Chuck tries to minimize the semantic gap between source code and object code, between language and processor. This is what he means in the video when he says "one-to-one correspondence between source and object". In effect his development environment would be closer to an interactive assembler/disassembler than a traditional compiler or interpreter for a parsed language.
04:19:02 Ok, I'll take that as "yes."
04:19:17 I'll assume he meant "one to one" in the mathematical sense, which means "no ambiguity."
04:19:42 He did not give details; I’m hoping he’ll explain more at the SVFIG Forth Day later this month.
04:20:16 Have a look at the last few minutes of his EuroForth 2018 interview.
04:20:24 K.
04:20:58 That one-to-one requirement restricts the language to some extent.
04:21:07 In regular Forth nothing is stopping me from doing this:
04:21:15 https://wiki.forth-ev.de/doku.php/events:ef2018:start
04:21:18 : foo postpone bar ; immediate
04:21:25 : bam postpone bar ; immediate
04:21:33 Now foo and bam compile to exactly the same thing.
04:21:37 That's not one-to-one.
04:22:00 Now, you could ask why I'd want to do that, and it would be a fair question.
04:22:13 In my Forth that would be « : foo bar ;inline »
04:22:25 But the answer might be that some days I just feel foo and some days bam.
04:22:55 If I'm not allowed to do it, that's a restriction.
04:23:40 Numerous things in Forth wind up compiling ?jmp or jmp.
04:24:11 Maybe it's possible to look at the local object structure to back-infer what control structure was in the source.
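The foo/bam example can be made concrete with a toy model (Python, purely illustrative; the dictionary contents are invented). When compilation is many-to-one, decompilation by reverse lookup can only return a set of candidate names, never a unique source rendering:

```python
# Model: a dictionary mapping each defined name to the object code it
# compiles to. foo and bam both just compile a call to bar, exactly as
# in the « : foo postpone bar ; immediate » example above.
compiles_to = {
    "foo": ("call", "bar"),
    "bam": ("call", "bar"),
    "baz": ("call", "qux"),
}

def decompile(code):
    """Return every name that could have produced this object code."""
    return sorted(name for name, obj in compiles_to.items() if obj == code)
```

`decompile(("call", "bar"))` yields two candidates, so the mapping is not invertible; a language that forbids such definitions keeps every object sequence recoverable as exactly one source form.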
04:26:04 So when you compile a high level Forth word, you get just a list of addresses.
04:26:25 Since you can load source in varying order, the same source can wind up having different addresses on different loads.
04:26:35 : foo a b c d ;
04:26:52 On one load, you could have a > b > c > d, on another some other ordering, depending on how you loaded prior source.
04:27:02 How does this new approach handle unloaded code?
04:27:30 Unloaded code?
04:27:44 Something you'd defined but hadn't loaded from disk to RAM yet.
04:28:06 Seems there has to be a representation of foo that is not based on RAM addresses.
04:29:19 I could see it perhaps being done by regarding the disk space this is all stored on as a "virtual address space," and then everything would be "compiled there."
04:29:37 Not necessarily; if you’re interactively assembling you always have object code.
04:29:39 Then when you brought it into RAM you'd have to do some sort of conversion, so that the *loaded* definitions now used RAM addresses.
04:30:26 I could see that all being resolved with a virtual memory system.
04:30:49 It would all be built in "virtual space," and the fact that you had some of it in physical RAM would really be irrelevant.
04:32:04 I don’t think Chuck loads things piecemeal from disk anymore, just the development environment as a whole. I recall him saying his block was just « : block 1024 * ; »
04:32:38 Aha - yes - that smacks of a big virtual address space.
04:32:41 Makes sense.
04:33:05 That’s the way my Forth works.
04:33:21 And I had definitely gotten the impression that his later stuff has moved away from being "generic" in the same way old Forth was, and is more about building something that precisely targets a particular application.
04:33:52 My block is defined as « : block ( blk# -- a ) k STORAGE + ;inline »
04:34:06 That's in keeping with his long-term philosophy: solve the problem in front of you; waste no energy "supporting the solution of problems you don't have."
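Both block definitions quoted above are nothing but address arithmetic, which is what makes the "one big virtual address space" view work. A Python model (the `STORAGE` base of 0x100000 is an invented placeholder, not a real address from either system):

```python
BLOCK_SIZE = 1024      # bytes per Forth block
STORAGE = 0x100000     # invented base address for the mass-storage region

def chuck_block(blk):
    """« : block 1024 * ; » - block number to byte offset in one flat space."""
    return blk * BLOCK_SIZE

def block_addr(blk):
    """The « k STORAGE + » variant: scale to bytes, then add a storage base."""
    return blk * BLOCK_SIZE + STORAGE
```

Either way, consecutive block numbers map to addresses exactly `BLOCK_SIZE` apart, so "loading" a block is just pointing at the right region of the space.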
04:35:28 Right
04:35:45 I lusted after his CF era "fast compile" pretty badly.
04:36:01 But in the end I decided it wasn't entirely compatible with what I wanted to be able to do with my system.
04:36:06 I want an *operating system*.
04:36:16 Time for dinner, gotta go, nice chatting with you.
04:36:18 That's inherently a system that I might want to solve problem A with right now, and B tomorrow.
04:36:30 Or A and B today, B and C tomorrow, A and C the next day.
04:36:53 I felt like I had to have vocabularies, and had to have multiple processes able to define the same string to mean different things.
04:37:08 So I wasn't focused in the same way CM was in that era.
04:37:18 I think around then he was focused on being able to compile chip designs.
04:37:30 And his "development system" would be devoted to that one thing.
04:37:50 Obviously that specificity could be exploited.
04:39:00 The whole notion of "operating system" is sort of contrary to CM's view of things.
04:39:11 It's kind of automatically excess baggage.
04:40:38 Of course with the "virtual memory" way of looking at it you could try to have both things. Each application would be its own totally separate virtual space.
04:40:50 And that might be something modern processors could handle quite well.
04:42:01 What happened to forth.org and the ultratechnology.com site?
04:42:02 I have noticed in my MacOS Forth the thing gets its own virtual space on the Mac every time it's loaded.
04:42:08 Independent of what else I may be doing with the computer.
04:42:25 Everything's always at the same address, unless I add code such that it moves.
04:42:40 The ultratech site is still "there."
04:42:45 I pulled it up a couple of days ago.
04:42:55 I don't think it reflects any current activity.
04:43:03 Just a bunch of historically interesting documents and videos.
04:43:28 It's not still "there".
04:43:43 http://www.ultratechnology.com/
04:43:50 Just opened it just now.
04:44:04 Are you not getting a 403?
04:44:09 No, getting a page.
04:44:22 "UltraTechnology has been out of business since 2002. This site documents its history with code, simulations,..."
04:45:05 No idea what the hell is happening on my end then
04:45:48 Hmmm. I can't ping the site.
04:46:04 Is it possible it's gone down recently and I had something in a cache or something?
04:46:14 Let me try it from Safari, where I've never previously opened it.
04:46:50 It opens there too.
04:47:02 That's weird.
04:48:34 So anyway, when I load my system MacOS sets it up so that everything is always at the same virtual address. So I certainly could use some kind of sourceless representation that just had pointers into that compiled code.
04:49:31 My language doesn't have / maintain that 1-to-1 correspondence, though.
04:51:00 Strictly speaking, though, I don't know that this needs to be *completely* 1-to-1. I could have process A define : foo bar ; and process B define : foo bam ;
04:51:16 foo would be in the symbol table twice, pointing at bar once and bam once.
04:51:40 If you search for the symbol table entry that points to bar, you find foo.
04:51:43 Similarly for bam.
04:51:48 So decompilation works right.
04:52:05 You'd still have to have some way of knowing, when you typed foo and hit enter, which one to run.
04:52:20 But that's an "edit time / interaction time" thing, and there could be structures for that.
04:53:01 Structures that were accessed only when you were editing your code; "your foo" would get stored in the code pointing to the right place.
04:53:43 As far as decompilation and rendering goes, this could be a many-to-one mapping.
04:54:55 Something as simple as having the symbol table be a collection of linked lists would work - different processes would have different "entry points," and within that linked list it would be 1-to-1.
04:55:09 mmm, dat sexy html1.0
04:55:28 Oh, the UltraTech site? Yeah - pretty "primitive" looking. :-)
04:55:42 For someone my age it at least has some nostalgia.
04:56:15 Things were more... "pragmatic" in those days.
04:56:20 Deliver the information - period.
04:59:30 I guess I think the later evolution of web presentation is fine - as processors and network tech grew in power, we became able to be "prettier."
04:59:45 The part I object to, hugely, is the decisions that led to the ability for websites to infect your computer and so on.
05:00:00 It should have been kept focused on rendering stuff on your screen.
05:00:12 This whole "active content" business was a bad path.
05:00:30 Some really bad security decisions were made.
05:02:14 I don't know how they botched it so badly - the *concept* of your browser providing a virtual computing sandbox that web content could "run on" isn't broken in and of itself.
05:02:32 The idea of cookies isn't bad in and of itself. But they managed to fail in the implementation somehow.
05:05:37 rdrop-exit: I've thought at times about a better 1-1 mapping of source and compiled code.
05:06:42 Instead of IF ... THEN implemented with ?jump you could have ?SKIP: . That gets you the same capabilities, and encourages (forces) factoring - the ... content would be in its own word.
05:07:00 And it's immediately decompile-able.
05:07:22 Doesn't require immediacy.
05:08:08 Another way of doing the same thing would be to put a conditional return at the front end of the ... word.
05:08:15 I've done that a good bit in my code.
05:08:31 I find that with my conditional returns I use a lot fewer IF ... THEN clauses.
05:08:52 ?SKIP: would be a little faster, since it would avoid the call/return.
05:09:51 But if everything has to be compiled into using jump and ?jump then it's harder to avoid ambiguous stuff.
05:11:22 --- join: siraben (~user@unaffiliated/siraben) joined #forth
05:15:40 Do things like race conditions appear at all at the assembly level or in Forth? I never encountered them anywhere. Say I wanted to make a race condition?
05:17:28 Or do such things only happen within the context of threading and OSes?
05:17:47 If you have two independent workers, then simply put a variable incrementation in a loop and see if it sums up correctly when both workers are done.
05:17:48 That kind of thing comes up anytime you have multiple threads.
05:18:02 Have you implemented threads?
05:18:23 Well, you need either multiple cores, or else a mechanism for interrupting one thread and switching to another context.
05:18:25 I can't see how interleaving computations can make them faster than doing one at a time.
05:18:31 Oh, I don't have multiple cores, that's why.
05:18:37 It doesn't, really.
05:18:42 Unless you have multiple cores.
05:18:51 unless you have blocking operations
05:18:53 You can just provide the impression to the user that multiple things are happening at once.
05:19:08 But the time you spend switching back and forth is "wasted."
05:19:18 I have a non-blocking version of KEY called KEYC which returns 0 when no key is entered
05:19:33 Right - you could build multi-threading around that.
05:19:35 By the way, not sure if that's a good name for it, suggestions?
05:19:53 read-char-no-hang ?
05:19:54 Application would use KEY but under the hood it would check to see if a keystroke is available, and if it's not, go do something else for a little while.
05:20:10 I usually call it ?KEY.
05:20:15 The one that checks and returns a flag.
05:20:16 So what does ? mean?
05:20:24 maybe-key
05:20:34 Returns a flag?
05:20:37 Yeah, or "check key" or whatever.
05:20:41 I don't have any flags, hm.
05:20:54 ?KEY will return 0 or 1 - not the actual keystroke.
05:21:00 Huh.
05:21:02 You'd still have to run KEY to get it.
05:21:11 But you could avoid having KEY block.
05:21:14 So how would you use it?
05:21:32 ?KEY IF KEY THEN
05:21:35 would be the simplest way.
05:21:53 For me it's BEGIN KEY DUP 9 <> WHILE CASE 1 OF FOO ENDOF 2 OF BAR ENDOF ...
ENDCASE REPEAT DROP
05:21:55 But you could also say ?KEY IF KEY ELSE ...run something else... THEN
05:22:00 I mean KEYC, not KEY
05:22:03 In that example
05:22:08 Oh I see.
05:22:22 I thought ?KEY had this stack image ( -- F | keycode T )
05:22:35 Or just run KEYC until a non-zero value is returned, in my case.
05:22:38 You could put it in a loop, and the loop body would do other useful work while waiting for a key to be ready.
05:22:45 So what are flags?
05:22:51 The processor's flags themselves?
05:22:54 Just a 0 or 1 returned on the stack.
05:22:59 No, I meant a "Forth flag."
05:23:05 A Boolean.
05:23:09 Ah I see.
05:23:27 In some Forths, for true you do not get 1 but 0xFFFF (whatever your cell size).
05:23:28 I'm using the convention of 0 = false, 1 = true, but apparently it's supposed to be 65535 (or all 1s).
05:23:36 For a while Chuck Moore had IF use the processor flag, but he later went back to stack "flags," I think.
05:23:47 It can be whatever you want it to be.
05:23:52 Of course.
05:23:55 My last one used -1, this current one uses 1.
05:24:00 Zarutian: interesting
05:24:11 What does your STATE = 0 correspond to?
05:24:17 Interpreting.
05:24:23 STATE = 1 is compiling.
05:24:31 Oh, I got it the other way round. Hm.
05:24:39 :-) Convention.
05:24:47 Be consistent and you're golden.
05:24:52 The handy thing about having true be 0xFFFF is that AND, OR, XOR and other boolean and multibit operations are the same.
05:25:00 Right.
05:25:07 Ah, that's why.
05:25:19 In a few places I couldn't resist the urge to "do math" on my true/false results.
05:25:29 But that can work either way - you just have to do the right math.
05:25:35 I'm thinking of adding 32-bit numbers and operations but can't possibly see how they would be fast.
05:25:53 Surely there's a collection of 32-bit snippets for the Z80 somewhere
05:25:53 Doubles really aren't fast if you have to implement them on top of singles.
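The advantage of the all-ones truth convention discussed above is easy to demonstrate. A Python sketch with a 16-bit cell; `select` is an invented example word showing the branch-free trick that all-ones flags enable (it would not work with 1-as-true):

```python
CELL_MASK = 0xFFFF           # 16-bit cell
TRUE, FALSE = CELL_MASK, 0   # "well-formed" flags: all ones / all zeros

def flag(b):
    """Forth-style flag from a Python boolean."""
    return TRUE if b else FALSE

# With all-ones truth, AND/OR/XOR serve as both bitwise and logical ops.
# For example, a branchless select: pick x when f is TRUE, y when FALSE.
def select(f, x, y):
    return (x & f) | (y & ~f & CELL_MASK)
```

With 1-as-true, `x & f` would keep only the low bit of `x`, so the "do math on flags" style really does depend on the flag being all ones (or on multiplying by 0/1 instead).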
05:26:21 I decided that 64 bits was wide enough; I passed over implementing doubles.
05:26:33 Of course.
05:26:50 I use decimal points in numbers to trigger floating point conversion.
05:27:04 Back from dinner
05:27:15 Interesting fact: the TI 84 calculator OS can do calculations up to ± 2^33 exactly; beyond that it uses floating point.
05:27:32 But it has rationals, reals, complex, etc. because it's a calculator.
05:27:51 Yeah, rational computing is pretty interesting.
05:27:53 Considering that you can barely multiply on the Z80, that's pretty impressive.
05:28:16 Crappy IEEE 754 floating point is the bane of my existence; gimme scaled values instead when the range of values is known.
05:28:24 It's exact. There's a mathematician out there named Wildberger or something like that who has developed a complete theory of math (or a lot of it, anyway) using rationals.
05:28:33 He has a well-developed theory of trigonometry.
05:28:35 I use two calculators, HP16C and HP48GX
05:28:35 Should I use BCD?
05:28:50 For floating point?
05:28:55 For doubles
05:29:00 Or arbitrary precision
05:29:05 Oh, I wouldn't. That will be slower still.
05:29:20 Even for arbitrary precision, I'd still use full cells for the pieces.
05:29:23 KipIngram: rationals? Sorry, my English vocabulary isn't good mathwise. Is it the x / y things? Like one learned about in elementary school.
05:29:24 It will be faster.
05:29:25 I don't trust my asm-fu enough to do 32-bit multiplications
05:29:28 Zarutian: Yes.
05:29:31 "Fractions"
05:29:39 Yes, a rational number is a quotient of two integers.
05:30:14 Right. Now I know what you are talking about.
05:30:22 Convergent rational approximation of pi:
05:30:23 : pi ( -- numerator denominator )
05:30:24 $ 24baf15fe1658f99 $ bb10cb777fb8137 ;
05:30:39 siraben: a 16-bit cell holds roughly three decimal digits.
05:30:50 If you go BCD, you will have to do many more operations.
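The two hex constants in that `pi` definition can be checked directly against pi (Python; this only verifies the constants quoted above, it is not part of the original definition):

```python
import math
from fractions import Fraction

# Hex constants from the « : pi ... ; » definition above
PI_NUM = 0x24baf15fe1658f99
PI_DEN = 0x0bb10cb777fb8137

PI_APPROX = Fraction(PI_NUM, PI_DEN)

# A continued-fraction convergent p/q satisfies |x - p/q| < 1/q**2, so with
# q on the order of 8e17 this ratio agrees with pi far beyond double
# precision; the float quotient should match math.pi to the last bit or so.
error = abs(PI_NUM / PI_DEN - math.pi)
```

This is the sense in which it is "the best rational approximation for a 64-bit u*/": both numerator and denominator fit in an unsigned 64-bit cell, so `n u*/` scales by pi with a 128-bit intermediate product and no floating point.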
05:30:57 *gasp*
05:31:07 Since you'll be working with one digit at a time, when you could be handling three-ish.
05:31:33 Been thinking of stealing ideas from Scheme's (the Lisp) numerical tower system.
05:31:48 Ah yes, that's quite good.
05:31:59 rdrop-exit: Wow - how many digits does that get right?
05:32:06 Common Lisp has a similar numerical tower
05:32:19 Zarutian: So you're going to support fractions, complex numbers, reals, and signed versions of each?
05:32:44 I'm actually thinking of support for tensors.
05:32:53 Not in the core system, but as a "library" of sorts.
05:33:05 I just wish that a specific electronics CAD and simulation package used CL's or Scheme's numerical tower when simulating circuits, and not just LTSPICE's shitty floating-point-based implementation.
05:33:06 That’s the best rational approximation for a 64-bit u*/ with a 128-bit intermediary product.
05:33:42 Don’t remember how many digits it gets right.
05:34:57 I've seen a Mandelbrot program in Forth
05:35:42 Nice.
05:36:19 Oh thank goodness: http://z80-heaven.wikidot.com/math
05:37:00 Do people use Forth for cryptographic applications?
05:37:06 Vectors, matrices, and tensors always have components, and those components only have a well-defined meaning in the context of a specified reference frame. So that stuff I mentioned above would store not only the components, but also a reference frame indication.
05:37:15 Then it can be thought of as representing something physically real.
05:37:40 And the underlying code could switch it between coordinate systems, etc.
05:38:07 Maybe at some point I should just turn this interpreter into a larger system and replace the TI OS
05:38:25 Yeah, why not?
05:38:44 Except I'd suddenly make my calculator not a calculator
05:38:50 Then you could get rid of that flash-app annoyance you were discussing yesterday.
05:38:59 Well, unless you made it back into one.
05:39:26 Right.
It's just a bit tedious figuring out using these ROM calls
05:39:26 I.e., had your own calculator functionality.
05:40:05 There seems to be a little architecture for the OS. There are 6 9-byte floating point "registers" with specific calls to transfer data from one to another
05:40:21 Actually, you might be interested to know that OS variables are type-tagged.
05:46:39 TI produced Lisp machines in the 80s; it is not surprising they have retained some ideas
05:47:19 Do they have a compiler for internal use or do you think that their engineers just write in ASM?
05:48:33 Computing would be different if the Lisp machines had won.
05:49:13 Sure, enterprise computing all along the way ;) I'm happy with Common Lisp on stock hardware
05:50:10 Forth is like Lisp in many ways, especially with respect to its meta-programming capabilities.
05:59:59 I poked into that not long ago, and got the notion that that similarity is somewhat superficial.
06:00:06 Under the hood I think they're pretty different.
06:00:49 Forth is the "ultimate" low-level language, Lisp is the "ultimate" high-level language, I think.
06:02:11 --- quit: rdrop-exit (Quit: rdrop-exit)
06:10:40 What should I call a word that allocates the number of bytes given by the top of the stack?
06:10:53 Like ALLOT but more permanent, because the data resides outside the Forth system.
06:11:00 ALLOC?
06:30:08 That seems reasonable, but the important thing is that it has meaning to you.
06:30:49 My "outside the system" allocation is from my heap - I called those words HEAP and -HEAP (allocate a page, free a page).
06:32:29 I actually have two layers of that - (HEAP) and (-HEAP) do a "raw" and very fast allocation / deallocation; HEAP and -HEAP use the first two cells of the page to maintain a doubly linked list of all of a process's heap pages and return the address just past those cells to the user.
06:32:59 --- join: mark4 (~mark4@cpe-2606-A000-8096-FE00-5A94-6BFF-FEA6-29D4.dyn6.twc.com) joined #forth
06:33:40 Error recovery follows those links and snapshots every page belonging to the process for possible restoration if an error is encountered.
06:34:52 In addition to user-code calls to HEAP / -HEAP, pages get automatically allocated now and then as needed for dictionary growth.
06:35:33 I think I want error recovery to "unwind" any user calls to HEAP or -HEAP that occurred on the error-containing line.
06:35:50 Otherwise pages can get allocated and then error recovery destroys any reference to them - it's a leak.
06:35:54 I see.
06:36:02 Unwinding HEAP calls is no problem; I can add that easily.
06:36:06 The TI-84 has a Variable Allocation Table so nothing is really leaked.
06:36:16 Just de-allocated by a ROM call.
06:36:27 Each variable is identified by a type and an 8-character name.
06:36:34 Unwinding -HEAP calls is easy too, but if I have multiple threads running, the possibility exists that another thread could allocate the freed page before it gets "un-freed."
06:36:39 So that's a glaring problem.
06:36:50 Ah, free issues.
06:36:54 I let the OS do that.
06:37:12 This is like a tagged malloc
06:37:21 I could just mark the need to free the page but postpone the actual freeing until successful completion of the line, but then I've got a problem if I run a word that does a lot of heap churning and runs for a long time.
06:37:28 Well, this is meant to be the OS.
06:37:35 It's not, yet, but it will be.
06:38:28 The way I have this written right now, HEAP will return the most recently freed page preferentially.
06:38:38 So that's just a problem waiting to happen when I go multi-thread.
06:42:01 I could reduce its severity by deferring the free for up to some small number of pages.
06:42:23 Then in most situations the frees would get deferred to success, but if a word really took off and needed to do a lot of stuff it would be able to.
06:42:43 Imperfect, but might cover me "well enough."
06:43:10 What do you mean by a page?
06:43:18 Is that one unit of allocation?
06:43:23 Or an OS page?
06:43:34 Yes - to keep this efficient I allocate fixed size pages from this "heap."
06:43:53 How do you deal with address space layout randomization?
06:44:05 Not sure what you mean.
06:44:17 Modern OSes tend to randomize their memory allocation
06:44:26 So you can't predict the next free byte, so to speak.
06:44:36 Not byte, but allocation in general.
06:44:53 Well, this is written with the eventual idea of running on bare metal, in which case that will all be under my control.
06:44:59 Ah. Right.
06:45:03 But I'm not actually using OS calls to allocate and deallocate memory.
06:45:14 I just give the thing a bunch of memory when it loads, and it does with it what it wants.
06:45:21 I never presume that two heap pages will be adjacent.
06:46:36 Among other things, this means that something like CREATE FOO 256 ALLOT is dangerous.
06:46:51 There might not be 256 bytes left in the current definition page.
06:47:28 I expect I'll have another, more complete, dynamic memory system eventually.
06:47:52 But I didn't want the complexity of a full-on heap with variable block size, garbage collection, and so on in my core system.
06:48:22 What I imagine I'll ultimately end up with is this simple heap growing from the bottom of free RAM, and a "real" heap growing down from the top.
06:49:32 In which case the above code would be done as 256 FANCY_HEAP CONSTANT FOO
06:50:07 Maybe with some sugar, so it was 256 BUFFER FOO
06:50:35 Then later I could say FOO FREE or something.
06:50:50 Or -BUFFER
06:51:17 When I first wrote this thing I was thinking in terms of essentially unlimited MacOS memory.
06:51:38 I used 64k heap page size, just grabbed one whenever I wanted one, and also had each process residing fully within one page.
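A minimal Python model of the fixed-size page heap described above (illustrative only; `PageHeap`, `heap`, and `unheap` are invented names standing in for HEAP / -HEAP). The LIFO free list reproduces the "most recently freed page is returned preferentially" behavior, which is exactly what makes deferred un-freeing racy once a second thread can grab the page:

```python
class PageHeap:
    """Fixed-size page allocator modelling HEAP / -HEAP."""
    PAGE = 1024  # bytes per page

    def __init__(self, base, npages):
        # LIFO free list: the most recently freed page is handed out first
        self.free = [base + i * self.PAGE for i in reversed(range(npages))]

    def heap(self):
        """HEAP ( -- addr ): allocate one page."""
        if not self.free:
            raise MemoryError("heap exhausted")
        return self.free.pop()

    def unheap(self, addr):
        """-HEAP ( addr -- ): free one page."""
        self.free.append(addr)
```

In a multi-threaded version, `heap` and `unheap` would need a lock around the free-list operations, and "un-freeing" a page during error recovery is only safe if no other thread could have popped it in the meantime.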
06:51:59 But that model doesn't port to small memory environments well, like this little Cortex M4 system I've got (192k RAM).
06:52:26 So I reworked it such that processes use several pages for various purposes, and such that header and definition spaces can grow to new pages.
06:53:05 Task blocks, which contain things like the stacks, the task variables SP0 and RP0, and so on, still exist in single heap pages.
06:53:15 So that's what sets the lower bound on my heap page size.
06:54:11 Right now it's set to 1k. So the "raw" blocks are 1kB + two cells (for the linked list); when I call HEAP I get back an address and I know there are 1024 bytes there for me to use.
06:55:09 Error recovery calls (HEAP) to get snapshot pages, and (-HEAP) when it doesn't need them any more.
06:55:25 I don't want those pages put on the list, because the error recovery is following that list to know what to snapshot.
06:55:37 It would never finish if those went on the list - it would wind up snapshotting its snapshots.
06:55:52 It maintains its own list of things it has to undo later.
07:00:54 Ah, so you snapshot everything so that you can restore the system exactly as it was before the failure.
07:02:34 That's right.
07:02:41 So I can restore the *process*.
07:02:45 I don't snapshot the whole system.
07:02:54 Right now it's essentially the same thing, because I have only one process.
07:03:12 But the heap management variables lie outside the process, and therein is the issue.
07:03:39 But it's quite a sledgehammer - it keeps the process sane almost completely.
07:04:00 And I get the benefit of not having an error totally reset my data stack.
07:04:17 It resets the return stack - it restores the process pages and then runs QUIT.
07:04:32 But any items that were on the data stack at the beginning of the line are still there after the error recovery.
07:05:14 I wanted to be able, for example, to do a bunch of interactive things - say I just spent 20 lines setting up the stack for some purpose. 07:05:22 I didn't want to have a casual typo nuke that work. 07:05:57 So generally speaking I can just use my command history to cursor back to the bad line, fix it, and hit enter. 07:06:24 Another thing that will be an issue will be communicating processes. 07:06:29 Right. Many implementations nuke the data stack when an error happens. 07:06:32 Kind of annoying. 07:06:38 If A sends B a message and then A encounters an error, there's no good way to unroll the message send. 07:06:49 Not only could B have already received it, it could have already acted on it. 07:06:58 Ah, so you're adding processes as well? Do you keep track of them in memory? 07:07:00 So it's not "perfect." But it's really quite good. 07:07:15 Does your target have multiple cores? 07:07:17 Yes, there will be a process ring. 07:07:28 Doubly linked, with a system variable providing an access point to it. 07:07:41 Well, I don't have a certain target. 07:07:44 My Mac has four cores. 07:07:51 This Cortex M4 board has only one. 07:08:00 I have a Linux system at the office that has 32 cores. 07:08:24 But even on a one-core system I'd be able to write applications structured as communicating processes. 07:08:37 https://en.wikipedia.org/wiki/Flow-based_programming 07:11:40 Or I might just decide whether to use multiple processes based on my target. 07:11:45 I want the ability to do it. 07:12:04 One of my more common uses of that might be to just have multiple sessions of interactive work going. 07:12:13 Sort of like running screen on a Linux system and having several windows. 07:12:43 Those processes don't communicate with one another - it's just a way for me to have several activities going on at once as far as my own workflow goes. 07:13:31 I intend to have an *option* of associating an output buffer with a process.
07:13:50 When I switch to a process, if it has that engaged, the system will re-draw the screen. 07:14:12 If it doesn't have an output buffer, I'll just get a prompt associated with that process, with no change to the screen overall. 07:14:29 It was only recently I realized that was a "separate feature." 07:18:23 Wow. You're going to create a full-fledged operating system with that thing. 07:18:32 --- quit: Zarutian (Read error: Connection reset by peer) 07:18:44 --- join: Zarutian_2 (~zarutian@173-133-17-89.fiber.hringdu.is) joined #forth 07:18:46 One problem I have now is how to INTERPRET a string in memory 07:20:46 What's the problem? 07:21:27 INTERPRET is really just a loop of BL WORD then FIND then EXECUTE or NUMBER. 07:21:50 If FIND fails, you try NUMBER, and if that fails you just fall through your error trapdoor. 07:22:32 Terminate it with the immediate NULL word trick - have an entry in your dictionary that has the null string as its name. 07:22:35 Make it immediate. 07:22:53 What it does is bugger the return stack so that when it returns, it returns to the QUIT loop instead of the INTERPRET loop. 07:23:28 Then INTERPRET is literally just an infinite loop. 07:24:09 WORD is implemented with a word buffer 07:24:16 --- join: kumool (~kumool@adsl-64-237-233-169.prtc.net) joined #forth 07:24:23 I put my input into a buffer until ENTER is pressed 07:24:36 So I guess I could just copy the string into the string buffer and execute it 07:26:09 Yeah, QUERY gets a line from the keyboard to the TIB buffer. Then WORD pulls the next word from that buffer into another place, often an "unofficial" place like PAD. 07:27:00 PAD is traditionally just "some spot" up above the end of the dictionary. 07:29:52 I had to change how I did that, though, because the normal definition of PAD could go out beyond the end of my heap page. 07:29:53 WORD typically stores the word as a counted string.
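The INTERPRET loop just described - BL WORD, then FIND, then EXECUTE or NUMBER, with failure falling through the error trapdoor - might be sketched in Python like this. The dictionary and the two primitives are stand-ins I made up, and the loop simply running out of input stands in for the immediate-null-word trick that bails back to QUIT.

```python
# Toy sketch of the classic Forth outer interpreter; illustrative only.

def interpret(line, dictionary, stack):
    for token in line.split():            # stand-in for BL WORD
        xt = dictionary.get(token)        # stand-in for FIND
        if xt is not None:
            xt(stack)                     # EXECUTE
        else:
            try:
                stack.append(int(token))  # NUMBER
            except ValueError:
                raise RuntimeError(token + " ?")  # the error trapdoor

# a couple of made-up primitives
words = {
    "+":   lambda st: st.append(st.pop() + st.pop()),
    "dup": lambda st: st.append(st[-1]),
}

stack = []
interpret("3 4 + dup", words, stack)   # stack is now [7, 7]
```

Interpreting a string already in memory is then just a matter of pointing this loop at that string instead of at the terminal input buffer, which is essentially the question raised above.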
07:29:53 Investigate the words SKIP SCAN and ENCLOSE 07:29:53 So the whole line is stored in TIB without a count - that's null-terminated. 07:29:53 WORD is written so that it will successfully pick up a null string when called at the end of the line. 07:29:53 I think I changed PAD so that it just got put way up in high memory, below my command history. 07:29:53 My symbol table and my command history both involve buffers larger than a heap page, so I just slapped them up at the top of my RAM for now. 07:29:53 Eventually I intend for both of those to be disk resident and accessed via BLOCK. 07:31:03 And yes, I mean for this to be a full operating system. That little Cortex M4 board has 192k of RAM, 512k of flash, and a 120 MHz processor. 07:31:20 --- nick: Zarutian_2 -> Zarutian 07:31:27 I've got it in mind to do a multi-session OS for that guy. 07:31:30 Console only, of course. 07:33:46 What I eventually want to do is to put this processor, or a similar one, on a board of my own with other features, like Bluetooth. 07:34:10 Arrange it so that a family of them will find each other over Bluetooth and form up a mesh network, and behave such that I can program all of them as an integrated system. 07:36:06 --- quit: tabemann (Ping timeout: 252 seconds) 07:36:26 Great success--I can now write programs and store them, so I am no longer that tied to my computer. 07:36:52 It's like a "file system", if names were truncated to 8 characters and there are no folders and what have you 07:46:24 Vocabularies would be like your subfolder system. 07:46:39 The usual syntax is 07:46:42 vocabulary alpha 07:46:45 alpha definitions 07:46:51 ... define words in alpha ... 07:47:12 There's generally a variable called CURRENT that designates the vocabulary that should receive definitions. 07:47:16 don't you want an "also" in there? 07:47:45 Possibly, though I've deviated quite a bit from standard Forth, so I'm unsure. 07:47:57 also alpha definitions?
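The TIB/WORD mechanics discussed above (and the SKIP/SCAN idea) might look like this in Python: a null-terminated TIB, a WORD that skips leading blanks, scans to the next delimiter, and returns a counted string - including the zero-length counted string at end of line that makes the immediate-null-word trick work. Purely a sketch; nothing here is any particular system's code.

```python
# Illustrative sketch of WORD over a null-terminated TIB.

BL = 0x20   # blank, the usual delimiter

def word(tib, pos):
    """Return (counted_string, new_pos); a null string at end of line."""
    while tib[pos] == BL:               # SKIP leading delimiters
        pos += 1
    start = pos
    while tib[pos] not in (BL, 0):      # SCAN to the next delimiter or NUL
        pos += 1
    token = tib[start:pos]
    return bytes([len(token)]) + token, pos   # counted string: length byte first

tib = b"DUP SWAP\x00"
w1, p = word(tib, 0)    # b"\x03DUP"
w2, p = word(tib, p)    # b"\x04SWAP"
w3, p = word(tib, p)    # b"\x00" -- the null string, end of line
```

Calling `word` again at end of line keeps returning the null string, which is exactly the behavior the dictionary's null-named immediate word relies on.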
07:48:23 I implemented CONTEXT as a stack, so when I execute alpha there it just pushes it to the context stack. 07:48:30 I think so. that's how mine works, but I've seen so many variations I don't know what's considered canonical 07:48:30 The "also" is automatic for me. 07:49:12 Either mark or crc suggested a stack of *context stacks*, rather than just one, and I liked that idea quite a lot. 07:49:22 I haven't implemented this part yet on this system, so I may do it that way. 07:49:24 mark's is just confusing to me 07:49:52 Well, it seemed to make sense to me. If you execute a vocabulary that's already in the context stack, what do you do? 07:50:02 Obviously you want that vocab on top of the stack. 07:50:02 I can't remember now exactly how his worked, but I remember not understanding how one would use it effectively 07:50:09 But do you delete the lower copy? 07:50:22 His approach was just to define a whole new stack, make it how you want, use it, and then discard it. 07:50:50 oh I remember what it was - he enforces an always-appear-only-once-in-the-stack rule 07:51:05 But at the same time he saves previous stacks so he can restore them. 07:51:17 Huh, that's weird. I should be able to transfer variables from the calculator to the computer, but strings aren't being transferred. 07:51:28 And, actually, I see removing the deeper copy as an optimization. 07:51:29 if your stack is (bottom-to-top) "alpha, beta, delta", and you write "beta", then it doesn't push beta again - instead, it moves beta to the top 07:51:32 It would be a way to finally get data in/out of the calculator, through strings. 07:51:37 Anything you'd find in there you'd find the first time, so... 07:51:38 it becomes "alpha, delta, beta" 07:51:45 which is just confusing as hell imo 07:51:49 Searching it again consumes time, but presents no other issue that I can see. 07:52:35 Since he restores the previously existing stack, moving beta like that doesn't bugger the outside context.
07:54:21 I believe in my previous implementation I had some word that emptied that stack except for Forth. 07:57:19 quit is an infinite loop: begin interpret again 07:57:19 I have that too: only 07:58:00 abort is the only way out of the loop and that jumps right back into it :) 07:59:25 are we talking about my context stack mechanism above? 07:59:39 I was shitting on it 07:59:53 where only forth root compiler forth does not put forth on the stack twice but rotates forth out to the top 08:00:00 you think its bad? 08:00:20 I just don't understand it, but maybe I just need to spend more time staring at it 08:00:41 I mean, I understand how it works. I don't understand how it's useful 08:01:04 it keeps context clean. it removes the need for "also" 08:01:14 the way context stacks work traditionally is counterintuitive 08:01:21 you cant do only forth compiler root 08:01:36 because you have to put ALSO in between everything 08:01:42 only forth also compiler also foo 08:01:51 because if you say only forth compiler then COMPILER overwrites FORTH 08:01:59 also duplicates the top item so 08:02:15 only forth ALSO compiler makes it root forth forth and THEN overwrites the second forth with compiler 08:02:19 STUPID 08:02:22 i removed also 08:02:37 so in yours, what if you want to replace the top vocab with a different vocab? 08:02:40 only forth compiler root any time you invoke a vocab it adds itself NON-destructively to the stack 08:02:57 only forth also compiler .... previous 08:03:04 previous removes the top item of the context stack 08:03:11 that removes compiler 08:03:16 yes 08:03:23 ONLY removes everything except root 08:03:29 what if you wanted forth gone from the stack, and have only compiler? 08:03:38 only compiler 08:03:43 root and compiler 08:03:49 oh, so you have that model where all vocabs are defined in root? 08:04:03 if you want ONLY compiler you would use seal.
but as all the vocab controlling words are in root you cant add or remove anything any more 08:04:23 yes every vocab is part of root. all vocab controlling words are in root 08:05:07 so lets say module A sets up a context stack for itself. it then includes YOUR module and YOU modify context. when you have been fully included and module A continues to compile the context stack is not how it left it 08:05:44 thats why traditional forths allow you to add items to the stack that are already there. that way you can remove them and NOT pull the rug out from under some other module that just happened to include yours 08:06:00 i allow the creation of new context stacks 08:06:07 right, you have a module-local context stack 08:06:08 context: my-context-stack 08:06:18 and you can discard it later 08:06:27 i have a STACK of context stacks 08:07:05 right 08:07:15 i think this is a much cleaner solution than a cluster-fucked context stack with root compiler forth compiler foo bar bam foo compiler bar bam forth 08:16:08 my main reason for doing this was because i found the also mechanism to be counterintuitive 08:16:16 on the forth stack you do 1 2 + 08:16:20 not 1 dup 2 + 08:16:29 sure 08:16:33 you dont need to duplicate the top item of the stack just to be able to push a new item onto it 08:18:24 mark4: I like your way.
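As I understand the scheme mark4 describes, it could be modeled like this in Python (my sketch, with invented names): invoking a vocabulary pushes it non-destructively, except that a vocabulary already on the stack is moved to the top rather than duplicated (the appear-only-once rule); PREVIOUS pops the top; and ONLY clears everything except root.

```python
# Hedged model of the appear-only-once context stack described above.

class Context:
    def __init__(self):
        self.stack = ["root"]           # root is always present

    def invoke(self, vocab):
        if vocab in self.stack:
            self.stack.remove(vocab)    # move-to-top instead of duplicating
        self.stack.append(vocab)

    def previous(self):
        if len(self.stack) > 1:         # PREVIOUS drops the top vocab
            self.stack.pop()

    def only(self):
        self.stack = ["root"]           # ONLY removes everything except root

ctx = Context()
for v in ("forth", "compiler", "forth"):
    ctx.invoke(v)
# ctx.stack is now ["root", "compiler", "forth"] -- forth moved, not duplicated
```

The module-isolation concern raised above would then be handled not inside this class but by keeping a stack of these `Context` objects, creating a fresh one per module and discarding it afterward.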
08:19:27 I understand it now, but in mine vocabularies aren't treated special (that is, their definition goes into whatever vocab is current, not into root), so I don't see how I could do it that way 08:19:53 for example, if there's a vocab foo and inside it is a vocab bar, and you only want bar on the stack, you'd have to do it like "also foo bar" 08:20:16 well keeping vocs in a special vocabulary with all vocabulary defining and controlling words makes it cleaner imho 08:20:50 vocabulary blah and blah itself is defined in root no matter what voc is current 08:21:06 your way you can create a voc called foo and another called bar inside foo 08:21:15 and unless foo was in context you couldnt add bar to context 08:21:32 yes, which means you don't have to worry about name collisions 08:21:44 --- quit: MrMobius (Ping timeout: 268 seconds) 08:22:26 while having a function called "init" in different places sounds reasonable 08:22:39 having two vocabs with the same name sounds confusing 08:22:58 you dont want 23458629875 vocabs all called "asm" 08:23:03 you might call one asm86 08:23:10 another asm-6502 08:23:12 or asm-arm 08:23:14 or whatever 08:23:53 the problem is that I don't want to have to have every module in my entire codebase memorized at all times 08:23:58 Ah, I typically don't do that - like zy]x[yz I just put vocabularies in the now-current vocabulary. 08:24:12 But, interesting. I'll cogitate. 08:24:36 But I think that mirrors the change from FIG to F79. 08:24:54 iirc, F79 had all vocabularies in forth (or root - whatever was "basic"). 08:26:45 I sometimes do use the same vocabulary name - like "helper" 08:26:50 editor helper definitions ... 08:27:08 graphics helper definitions ... 08:27:27 hamburger helper definitions ... 08:27:39 Typically the helper vocabulary has words that I use to build the tools, but don't want exposed when I'm using the app. 08:28:09 lmao...
08:29:22 loll 08:29:26 tuna helper definitions 08:29:43 thats why you do headerless words 08:29:58 in isforth i have and behead 08:30:15 think of them as arrows pointing towards the words that have headers 08:30:37
mine "beheads" or "hides" all words by default after it finishes parsing a module unless they've been marked with "export" 08:30:51 headers> (headers to) turns headers back on for subsequent words 08:31:01 but admittedly it's kind of tedious to have the word "export" all over the place 08:32:13 Yes, true - since doing the above I've come up with my :: / ::WIPE mechanism for headerless words. 08:33:07 Haven't implemented that yet - just "evoked" it in my assembly "hand compilation." 08:33:27 I have macros head_d "name" label and head_d0 label. 08:33:56 There's still a "header" - it just doesn't contain a symbol table index or a link. 08:34:03 Still has the CFA/PFA pointer pair. 08:34:22 I'm indirect threaded. 08:43:25 x4 is direct threaded, t4 is subroutine threaded and my n4 (android ndk) won't work as anything but indirect threaded 09:21:06 --- quit: ncv (Remote host closed the connection) 09:39:04 --- join: dys (~dys@tmo-080-126.customers.d1-online.com) joined #forth 09:56:11 --- join: MrMobius (~default@c-73-134-82-217.hsd1.va.comcast.net) joined #forth 09:57:29 --- quit: MrMobius (Client Quit) 09:57:51 --- join: MrMobius (~default@c-73-134-82-217.hsd1.va.comcast.net) joined #forth 09:59:34 --- quit: MrMobius (Client Quit) 10:02:33 --- join: MrMobius (~default@c-73-134-82-217.hsd1.va.comcast.net) joined #forth 10:04:50 --- quit: kumool (Quit: Leaving) 10:13:04 please help me 10:13:06 i am in hell 10:13:50 trying to explain to a customer service rep why the problem he's dealing with is a dup of another bug I've already fixed in a later release and he just needs to tell the customer to upgrade 10:14:32 and his reading comprehension skills appear to be below par. every time I provide him with an explanation, he comes back with some twisted, backward version of it 10:21:01 is he indian?
10:22:13 usually they are, but I don't think this one is 10:23:22 just say "call me back when the customer has upgraded to version xxx.xx.x" 10:23:42 then say "have a nice day" and hang up :P~ 10:24:11 heh, yeah. I need to work on my blow-off skills 10:24:12 or say "hang on i just found the bug let me fix it wait a sec" ... .. "ok let me email you an update for the customer" 10:24:13 lol 10:45:02 --- quit: Labu (Quit: WeeChat 2.0.1) 10:53:57 --- join: Labu (~Labu@labu.pck.nerim.net) joined #forth 14:06:08 --- join: pierpal (~pierpal@host168-226-dynamic.104-80-r.retail.telecomitalia.it) joined #forth 14:32:47 Ok, so that link you guys posted earlier (https://colorforth.github.io/POL.htm) has some pretty good stuff in it. 14:33:21 Also some pretty "far out" stuff - I still don't have my head around the whole concept of levels yet, though I think the notion that something was down that road has tugged at me a couple of times over the years. 14:34:35 Looks like "levels" is sort of the notion of integrating state machines into threaded execution in a systematic way. 14:35:38 He uses infix arithmetic as an example (noting that people are "negatively impressed" when that's missing from a system), but I can't help thinking it might be right up the alley of what I need for regular expression processing. 14:36:07 He apologized for not knowing the "industry standard terminology" for that set of ideas. 14:36:27 As the guy who had no clue what closures were, I can relate... 14:38:37 so long as you describe your ideas as fully as you can, the lack of 'proper' terminology does not matter that much. 14:40:30 That's how I feel too. 14:41:15 --- quit: dys (Ping timeout: 252 seconds) 14:42:17 I find the whole terminology thing frustrating sometimes - I pursue a pretty vigorous "self education" program in physics, and one of the things holding me back at this point is a better understanding of discrete math (group theory, abstract algebra, etc.)
The literature I'm able to find on those things online is extremely "circular" in its presentation - to know A, you need to know B, to know B, you need to 14:42:19 know C, and eventually it turns out you need to know A. 14:42:37 I imagine there *is* a path into it all that isn't self-referential, but I haven't really found it yet. 14:44:47 well, I have also found that often math notation is arbitrary and sometimes self-contradicting if the same kind of symbols are used for two different things in two subdisciplines of mathematics 14:46:00 Yeah, for sure. Which means you need to understand it pretty darn well in order to get to where you can "just know" what's meant. 14:46:41 yeah, it is basically the 'traditional chinese academia' syndrome 14:47:42 --- join: TCZ (~Johnny@ip-91.246.66.247.skyware.pl) joined #forth 14:57:30 --- quit: TCZ (Quit: Leaving) 15:10:22 Zarutian, yeah, also I've noticed a lot of pushback at attempts to re-factor and clean up arbitrary cruft in math and physics, e.g. https://en.wikipedia.org/wiki/Natural_units and https://en.wikipedia.org/wiki/Rational_trigonometry 15:11:54 --- join: kumool (~kumool@adsl-64-237-233-169.prtc.net) joined #forth 15:11:55 well, I have on one occasion forced a physicist who was trying to explain something with an equation to write it as a keyword-parameterized s-expression 15:12:42 ha 15:14:41 turned out that a parameter to a series summation she thought she understood was not as she understood it 15:37:18 Ah, rational trig - that's the stuff I mentioned earlier by that Wildberger guy. 15:37:26 I watched a bunch of his YouTube videos on that. 16:05:54 --- quit: phadthai (Ping timeout: 272 seconds) 16:06:34 KipIngram: rational trig has found a home https://arxiv.org/abs/1401.2371 under geometric algebra https://www.av8n.com/physics/maxwell-ga.htm -- another great re-factoring.
16:20:23 --- join: phadthai (mmondor@ginseng.pulsar-zone.net) joined #forth 16:20:35 I'm writing forth words for compiling resonant metamaterial designs; this one will use natural units and geometric algebra and then convert back to the crufty units at the end for the pcb fabs and 3d printers. 16:21:28 --- join: wa5qjh (~quassel@freebsd/user/wa5qjh) joined #forth 16:29:55 --- quit: pierpal (Read error: Connection reset by peer) 16:41:26 --- join: dave9 (~dave@90.20.215.218.dyn.iprimus.net.au) joined #forth 16:41:53 BTW, we've mentioned word frequencies a couple of times - they appear in an appendix of Koopman's book: 16:42:01 hi 16:42:06 hi KipIngram 16:42:16 https://users.ece.cmu.edu/~koopman/stack_computers/stack_computers_book.pdf 16:42:34 I'm pretty sure I've seen a stand-alone paper, where he talks about the test cases, but I think this is likely exactly the same results. 16:44:31 Also a discussion in chapter 6, section 3. 16:44:42 * dave9 clicks 16:44:47 Hi Dave. 16:51:57 --- quit: siraben (Ping timeout: 252 seconds) 17:00:03 --- quit: nighty- (Quit: Disappears in a puff of smoke) 17:05:03 KipIngram: re word frequencies: some primitive words get used a lot, no? while some of the other primitives are only used once. 17:06:36 * Zarutian is reading https://www.freebsd.org/cgi/man.cgi?query=elf&sektion=5&apropos=0&manpath=FreeBSD+11.2-RELEASE+and+Ports purely out of curiosity 17:28:44 --- quit: mark4 (Read error: No route to host) 17:39:51 --- join: rdrop-exit (~markwilli@112.201.162.180) joined #forth 17:41:59 I don’t think the frequency stats from Koopman’s book are all that useful. 17:42:18 The thinking’s circular. 17:45:10 Yeah, could be. I've never tried to use them. But how do you mean, exactly? My recollection is he took some typical applications and counted? 17:50:12 The stats are a basis for designing a Forth ISA, and a different ISA would result in completely different stats.
17:59:36 In other words, who's to say that the reason the primitive X is frequent isn’t just a fluke of the way the Forth instruction set is factored, and that a different factoring might have favored primitive Y over primitive X. 18:00:56 --- join: tabemann (~tabemann@168-83-181-166.mobile.uscc.net) joined #forth 18:09:40 --- join: nighty- (~nighty@kyotolabs.asahinet.com) joined #forth 18:12:18 e.g. perhaps the « hammer » instruction is being abused, because the ISA didn’t include a « screwdriver » instruction 18:19:06 Oh. I presumed that the stats were based on a "standard" Forth. 18:19:16 But yes, what you said certainly sounds reasonable. 18:32:58 As Chuck discovered when he started to design chips, some aspects of the traditional Forth Virtual Machine did not map well to a real Forth machine implementation, ergo he came up with Machine Forth. 18:40:52 Right. 18:41:07 Sometimes hardware really does tell you how it wants to be designed. 18:41:29 --- join: atalas (~atalas@c-73-188-73-139.hsd1.pa.comcast.net) joined #forth 18:41:48 I think most businesses try to separate hardware and software design too much in their organizations. 18:45:20 IMO this is why new programmers come up with crazy over-abstracted messes; they lack a grounding in hardware realities to anchor their design thinking. 18:48:40 Yes, couldn't agree more. 18:49:02 hi btw 18:49:08 Hi! 18:49:12 hello :) 18:52:36 i intended to stop by and absorb some forth wisdom 18:52:48 but then i found 10 months of irc logs 18:53:12 Oh shoot - we intended to charge for our wisdom. 18:53:22 Scratch that idea, I guess... :-) 18:53:40 There’s a very fine line between a fool and a wise man ;) 18:54:32 I only very recently started using IRC, I should dig back through the logs. 18:55:03 id prob fit in with either. :) 18:56:17 In years past, weeks or months could pass in here with hardly a word.
18:56:28 I dropped back in a few weeks ago, though, and it's been nicely active since then. 18:56:44 i was going to ask how active the channel was 18:56:47 Not so you feel there's no way to keep up, but enough to entertain. 18:58:24 I’m on the Reddit Forth topic and CLF, and now here. Not sure if there are any other significant online Forth fora. 18:59:26 Not many Forthwrights left. 18:59:46 im noticing that 19:00:41 i only became interested again because i want to do an embedded project 19:00:56 Cool 19:01:08 Ah, excellent. 19:01:12 Forth is great for embedded. 19:01:30 so im teaching myself forth. its quickly becoming a favorite language. 19:01:58 Excellent 19:05:07 I’ve gotta run, long day ahead. Will try to visit again tomorrow. Ciao. 19:05:27 --- quit: rdrop-exit (Quit: rdrop-exit) 19:22:56 'night KipIngram. i ttyl. 19:23:04 --- part: atalas left #forth 19:27:34 --- quit: wa5qjh (Remote host closed the connection) 19:27:48 --- join: leaverite (~quassel@freebsd/user/wa5qjh) joined #forth 19:40:57 --- join: pierpal (~pierpal@host168-226-dynamic.104-80-r.retail.telecomitalia.it) joined #forth 19:45:30 --- quit: dave9 (Quit: dave's not here) 20:01:52 --- quit: tabemann (Ping timeout: 244 seconds) 20:21:49 --- quit: dddddd (Remote host closed the connection) 20:22:54 --- join: tabemann (~tabemann@2602:30a:c0d3:1890:b90b:3bee:a83b:ed44) joined #forth 21:10:59 --- quit: kumool (Quit: Leaving) 21:52:29 --- quit: Zarutian (*.net *.split) 21:52:29 --- quit: KipIngram (*.net *.split) 21:52:32 --- quit: WilhelmVonWeiner (*.net *.split) 21:55:00 --- quit: ashirase (Ping timeout: 252 seconds) 21:55:00 --- quit: catern (Ping timeout: 252 seconds) 21:57:30 --- join: Zarutian (~zarutian@173-133-17-89.fiber.hringdu.is) joined #forth 21:57:30 --- join: KipIngram (~kipingram@185.149.90.58) joined #forth 21:57:30 --- join: WilhelmVonWeiner (dch@ny1.hashbang.sh) joined #forth 21:57:30 --- mode: kornbluth.freenode.net set +v
KipIngram 21:57:45 --- join: ashirase (~ashirase@modemcable098.166-22-96.mc.videotron.ca) joined #forth 21:59:28 --- join: dys (~dys@tmo-096-199.customers.d1-online.com) joined #forth 21:59:45 --- join: catern (~catern@catern.com) joined #forth 23:32:00 --- quit: dys (Ping timeout: 252 seconds) 23:59:59 --- log: ended forth/18.10.04