00:00:00 --- log: started forth/18.10.05 00:04:30 --- join: dave9 (~dave@90.20.215.218.dyn.iprimus.net.au) joined #forth 00:21:03 re 00:33:11 --- quit: pierpal (Read error: Connection reset by peer) 00:37:42 --- join: pierpal (~pierpal@host168-226-dynamic.104-80-r.retail.telecomitalia.it) joined #forth 00:47:47 --- quit: leaverite (Ping timeout: 268 seconds) 00:52:13 --- join: wa5qjh (~quassel@175.158.225.207) joined #forth 00:52:13 --- quit: wa5qjh (Changing host) 00:52:13 --- join: wa5qjh (~quassel@freebsd/user/wa5qjh) joined #forth 02:04:21 --- quit: ashirase (Ping timeout: 252 seconds) 02:08:42 hi dave 02:09:25 hi nerfur 02:11:29 --- join: ashirase (~ashirase@modemcable098.166-22-96.mc.videotron.ca) joined #forth 02:49:48 --- quit: pierpal (Ping timeout: 252 seconds) 02:58:24 --- join: mayuresh (~mayuresh@182.58.205.196) joined #forth 02:58:54 --- quit: mayuresh (Client Quit) 03:01:20 How do we revive Forth 03:01:33 I was thinking of just rewriting a bunch of existing stuff in Forth tbh 03:05:03 --- quit: nighty- (Quit: Disappears in a puff of smoke) 03:11:11 Seed it into the next generation somehow. 03:11:23 Kids, who haven't been "corrupted" by formal software training yet. 03:11:34 Weave it into games somehow. 03:12:08 I'm sorry - not knocking the formal training; it's fine. But I think once you get on that bandwagon it's hard to get off. 03:12:43 Nah 03:13:01 I think I like it because my first programming experience was on a stack-based calculator - stack programming 'gets into your head'. 03:13:19 I think most programmer programmers can pick it up real quick, LISP has taken off again with Clojure, so it's paradigm-related 03:13:38 I started programming with Ruby, it's not like being primed for anything 03:14:23 So I realized this morning there's another conditional mechanism that fits in with my naming conventions. 03:14:26 Conditional : 03:14:34 0=: ALPHA ...code... 
; 03:14:35 The best way is to present Forth as a way of writing functional programs 03:14:37 like ML 03:14:47 Yes, I do agree with that. 03:15:01 That code above would be more efficient than using conditional return: 03:15:07 : ALPHA 0<>; ...code... ; 03:15:15 Because it would dodge pushing and popping the return stack. 03:15:34 And I could apply my "dot prefix" too: 03:15:37 Kids these days love functional code over imperative, and concatenative is (kinda, basically) functional 03:15:42 .0=: ALPHA ...code... ; 03:15:48 KipIngram: how many words do you have already 03:15:50 Which would do the same as usual - retain the flag. 03:16:02 Named? About 280. 03:16:14 If I count no-name headers it's something over 400. 03:16:22 I... factored a lot. :-) 03:16:44 NUMBER has 26 no-name definitions under it, to do the whole job (integer and floating point). 03:17:27 QUERY also has quite a few, handling the EXPECT part and the command history and so on. 03:17:45 is it really factoring if you have over 400 words 03:18:08 Well, yes, I think so. I find the factored code very easy to read and remember the operation of. 03:18:40 I think that's what factoring IS - express your goal using short definitions. 03:19:12 I understand what you're saying, though. 03:19:29 1970 Forth had 200 words in the dictionary iirc 03:19:31 You're implying that a better set of words could do the same work with fewer words. 03:19:50 And that could be true - I'd just say if that's so then I just haven't gotten to that word set yet. 03:20:05 But it's immensely better code than I've written in the past. 03:20:14 My implementation before this one was rather ugly. 03:21:14 I certainly like words that get re-used a lot - those would get names in the dictionary. 03:21:35 Maybe over time I'll see improvements. 03:21:54 It's a lot more likely that I will with it broken down into short definitions like this than if, say, NUMBER was one opaque wall of code. 03:22:00 I do like how old school Forth had . 
meaning : 03:22:28 and DUP was defined something like `. DUP *DUP ;` because *DUP actually executed the instructions 03:23:03 I'm not familiar with that... can you explain a bit more? 03:23:15 you want me to find you the original paper 03:23:24 it's linked on the Forth.org site somewhere 03:23:25 Sure - that would be great. 03:24:21 So I'm going to interpret your comment above as "If I have 150 words that didn't even warrant names, maybe I can factor *better*." 03:24:28 I think that's a very valid insight. 03:24:57 http://www.ultratechnology.com/4th_1970.html 03:25:10 Turns out Jeff Fox html-ized the PDF long ago 03:25:19 :-) 03:25:29 That guy is sort of the "scribe of Forth." 03:25:39 also `;` was `RETURN` 03:25:56 `. DUP *DUP RETURN` almost looks like Forth 03:26:00 I have a friend who'd probably have preferred that before he got used to ; 03:26:10 He's constantly poking at me over my "terse, symbolic" names. 03:26:23 I seem to gravitate to terse notation. 03:26:37 Well my favourite quote from Dennis Ritchie is something along the following lines 03:26:46 when asked if he could go back and change anything about Unix 03:27:12 His answer was "I'd add the e to `creat`". 03:27:26 :-) 03:27:57 So that (dropping letters from words) I don't tend toward. 03:28:00 maybe a little. 03:28:03 ever since reading that, I always use `int count` as opposed to `int cnt` because who the hell needs to save two characters in 2018 03:28:08 But mine are usually very "symbol like." 03:28:23 But I do try to make the symbol structure have meaning, and stick with it. 03:30:48 So I've got the ultratechnology website pulled up - what sort of phrase can I search for to find that page you mentioned? 03:31:22 It's in the section "The Language" 03:31:39 or the following one 03:31:55 Also it was a Ken Thompson quote, not a Dennis Ritchie quote 03:32:57 "I'd spell creat with an e." 03:33:05 Hmmm. My view of the site doesn't seem to have such a section. 
03:33:20 http://www.ultratechnology.com/f70c8.html 03:34:05 if you click "Next Chapter" you get the section about DUP and *DUP 03:34:45 Oh, interesting. 03:34:58 So the * prefix made the word compile, so to speak? 03:35:35 Did systems at that time not have STATE? 03:36:48 Hope not 03:37:10 Well, it seems that *DUP does the same thing that DUP would do with STATE=1. 03:38:00 These days you could do *DUP as ] DUP [ 03:38:48 It's funny ] ... [ seems to "alarm" newcomers - whereas they're fine with [ ... ] 03:39:32 [ ... ] implies something structured, but Forth "doesn't go there." 03:40:11 People are so used to programming environments imposing (requiring) some sort of structure. 03:40:49 Never really seen anyone I've introduced to Forth confused about that 03:40:55 Forth has structure 03:41:08 just no real syntax 03:41:12 Well, not in the sense that the [ and ] are "connected" in any way. 03:41:17 Other than by their action. 03:41:30 Yes - syntactic structure is what I meant. 03:41:47 Forth actually has a beautiful structure, in its own way. 03:42:30 problem with introducing people to Forth is few good beginner introductions 03:42:43 can only think of easyforth and Starting Forth as anything I'd recommend 03:43:41 First Forth needs a website with parallax and some kind of 3D React-based material framework 03:43:42 Yeah. I try to just explain it, by saying that it lets you take a group of operations the system already knows how to do and give them a name. 03:44:00 FORTH. Simple. Elegant. Computer. 03:44:22 "Forth lets you use your computer how you want." 03:44:33 a whole page about Forth with no real information whatsoever 03:44:48 And by choosing the right groupings and giving them good meaningful names, you create a language for solving a problem. 03:45:36 I have a friend (professional CS guy - got wealthy at Dell) who calls Forth a glorified macro assembler. 03:45:48 Though he's never worked with it - he just listens to me and another guy talk on IRC. 
03:46:05 I see why he says that, but if that's true, then the "glorified" part packs an awful lot of punch. 03:46:40 Just being set free of assembly's "line format" is hugely beneficial. 03:47:14 C is just a glorified macro assembler if you see what the compilers can spit out 03:47:24 Yes, agreed. 03:47:32 fits the definition of "macro" real well 03:47:52 I'd consider "converting" that guy a major coup, but he is one of the most obtuse, stubborn guys I've ever encountered. 03:48:16 Inject him with Chuck Moore's blood and see what happens 03:48:19 Very smart, so you pretty much never catch him "intellectually exposed." 03:49:38 He teases us about the "stature" we ascribe Chuck. He joked one day that someone could make a fortune selling Chuck Moore dolls to Forth folk, if there were more than eight of us. 03:50:28 I chafed a little, but, well, I also had to snicker a little. 03:51:53 So back to your opening line this morning - I'm not claiming it's the only way, but I do think trying to seed Forth into youngsters would have to be helpful. 03:52:10 The trick there is finding a way to tie it into something they're interested in. 03:52:20 When you can get a kid *curious* about something, there's power there. 03:53:22 I love the rare occasions I can run into a kid who gets curious about physics - their minds can be like sponges. 03:53:32 Man 03:53:37 It's just that most of them never engage. 03:53:39 It'll hurt when Chuck dies 03:53:44 No doubt. 03:53:52 wonder how EtherForth is coming along 03:54:26 He's been a pretty important figure in my life. 03:54:39 --- join: dddddd (~dddddd@unaffiliated/dddddd) joined #forth 03:56:21 In my school of programming thought he changed me 03:56:41 Yes. 03:57:08 the most important quote he's said was something like "G G G G G G G G is better than a loop" 03:57:28 Yeah - I remember that one. 03:57:31 hm, maybe. 
guy like that says it, he's probably onto something 03:58:03 That's the thing - with him it's not listening because he's charming / charismatic - it's all driven by the power of his ideas. 03:59:16 He's established a "reputation"; what he says is so often good that you don't want to miss it. 04:02:09 I need to dig through the latest stuff - I've covered the older stuff pretty well, but I think I've missed some more recent things. 04:08:19 I'm not sure I'm really getting the difference between : (definition) and . in this stuff you just pointed me at. 04:09:41 Looks like this material is VERY old - from the very very beginning. 04:09:52 You did say "old school." 04:12:42 Well the beginning beginning was something like 1954 04:12:42 Oh, ok - this page gives more discussion: 04:12:45 http://www.ultratechnology.com/f70c2.html 04:13:05 There's no difference between . and :, that's what I said 04:13:21 --- join: nighty- (~nighty@s229123.ppp.asahi-net.or.jp) joined #forth 04:13:26 Ok. Then why have both of them? 04:13:51 That page I just linked seems vaguely to imply that . might be more like code: 04:14:03 But I'm not sure that was the intended meaning. 04:14:39 Oh, gee - that page says that : causes the following *text* to be stored in the dictionary. 04:14:47 Oh nevermind then 04:14:54 mustve misread 04:15:01 Like it's a macro expansion thing or something. 04:15:27 "A : declares a definition and stores the following character string (until the first ;) in the parameter part. The code specified directs the scanner to interpret this string." 04:16:21 "A noun fills core with data, a verb with code and a definition with a character string." 04:18:42 It also looks like at that time space delimiting wasn't the sole method of breaking words, either. 04:19:02 Says here that X@J+ would be broken into X @J + 04:24:26 So I thought more about that sourceless programming stuff. 
I see now how it would basically work, but it did occur to me that there's no good way to store code layout information (i.e., how the code would be rendered on an edit screen). 04:24:44 So it seems like some automated form of layout would be called for. 04:28:00 --- join: pierpal (~pierpal@host168-226-dynamic.104-80-r.retail.telecomitalia.it) joined #forth 04:59:04 It's kind of hard to see why Forth didn't take off in the 70's. 04:59:14 it offered so many advantages over other contenders. 04:59:48 --- quit: wa5qjh (Remote host closed the connection) 04:59:52 If only Chuck had been at Bell Labs or something like that... 05:01:03 --- join: siraben (~user@unaffiliated/siraben) joined #forth 05:02:49 No-name words are like anonymous functions 05:03:36 how do you use them? 05:04:13 In practice, or how they are used? 05:04:36 When you define a no-name word it pushes its address onto the stack. Then you can just run EXECUTE 05:04:45 (depending on the implementation, of course) 05:04:56 aaah, makes sense 05:10:20 I'm setting them up so that the name is temporarily present, in a "scratch area." 05:10:33 I still have to have a CFA/PFA pointer pair in the header area, since I'm indirect threaded. 05:10:46 Just don't need the symbol table index and linked list link. 05:11:06 I'm going to have them defined using :: - it will work exactly like : except the name is placed in the scratch area. 05:11:19 That area will get searched first, before the dictionary, during subsequent compilation. 05:11:52 Then when I've finished a batch of work and don't need that set of names anymore, I'll run a word I'm currently calling ::WIPE; it will toss the scratch area names. 05:12:33 so while the names are present they'll get used exactly like words defined with : would be. 05:13:19 In my assembly source I use a different macro to make the noname headers - for definitions it's head_d for named words; that macro takes a name string and a label as parameters. 
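The `::` scratch-name scheme described above can be modeled in miniature. A hypothetical Python sketch (all names invented; the system being discussed is actually written in assembly): temporary names go into a scratch table that is searched before the dictionary proper, and `::WIPE` tosses the whole batch at once.

```python
class Dictionary:
    """Toy model of a Forth dictionary with a scratch area for :: names."""

    def __init__(self):
        self.main = {}     # permanent words, defined with :
        self.scratch = {}  # temporary words, defined with ::

    def colon(self, name, xt):
        self.main[name] = xt

    def colon_colon(self, name, xt):
        # :: works exactly like : except the name lands in the scratch area
        self.scratch[name] = xt

    def find(self, name):
        # the scratch area is searched first, before the dictionary
        if name in self.scratch:
            return self.scratch[name]
        return self.main.get(name)

    def wipe(self):
        # ::WIPE - discard all scratch-area names in one go
        self.scratch.clear()
```

While the scratch names exist, `find` resolves them exactly like `:`-defined words; after `wipe`, lookups fall through to the permanent dictionary again.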
05:13:34 Or a "root" of a label - I make some labels by appending various things to that part. 05:13:46 No-name words have macro head_d0, which only takes the label root. 05:13:59 head_d handles all of the linking and so forth. 05:32:28 --- quit: dave9 (Quit: dave's not here) 05:58:48 Should I implement a search searching algorithm in Forth or assembly? 05:59:10 i.e. a naive implementation in assembly or a good one in Forth? 05:59:42 Wondering if the algorithmic gains would match those of writing it in assembly 06:10:15 is that an algorithm for searching for searches 06:19:31 Oops I mean string searching 06:22:52 It's hard to say without knowing what algorithms you're considering. 06:23:07 Boyer–Moore, say. 06:23:18 A lot of the fancy algorithms have better O() behavior, but you need a really large population to make it pay off (to overcome the overhead). 06:23:24 Can't do a fancy one in assembly? 06:23:48 I've never actually written a string searching algorithm, time to practice I suppose. 06:23:52 I'm planning to include a profiler in my system - in a case like that I'd write the good one in Forth, profile it, and perhaps write parts of it in assembly. 06:23:53 Except for a naive one… 06:24:03 99% of the code we write isn't performance critical. 06:24:07 Yeah, I'd like to have benchmarks as well. 06:24:08 Right. 06:24:34 I think I'd start out in Forth, to experiment and get it working. 06:24:34 I've been looking at more ROM call documentation for interop with the OS. I no longer need to worry about memory usage 06:24:45 Oh, good. 06:24:51 gforth has a floating point stack, so does the TI 06:25:06 So I don't need to reinvent the wheel :) 06:25:11 I have a no-name word that's part of FIND; it's the word that searches my symbol table. 06:25:22 That's going to be a system-wide, persistent table, so it may get large. 06:25:37 It's written in Forth, but I've got it flagged for conversion to assembly. 
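As a baseline for the comparison being discussed: a naive string search is tiny and has almost no setup cost, which is exactly why the fancier algorithms need long inputs before their better O() behavior pays off. A minimal Python sketch:

```python
def naive_search(haystack, needle):
    """Return the index of the first occurrence of needle, or -1.

    O(n*m) in the worst case, but with a tiny constant overhead -
    on short inputs this often beats Boyer-Moore-style algorithms,
    which must build skip tables before scanning.
    """
    n, m = len(haystack), len(needle)
    if m == 0:
        return 0          # empty needle matches at position 0
    for i in range(n - m + 1):
        if haystack[i:i + m] == needle:
            return i
    return -1
```

This is the kind of routine that is reasonable to prototype in Forth first and drop to assembly only if a profiler shows it matters.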
06:25:42 Oh I just remembered I do have a string searching algorithm... It's a string comparison routine 06:25:46 Not all of Find - just that "inner loop" part of it. 06:26:05 But it does just ( str1 str 2 -- f ) 06:26:11 Where f is true or false 06:26:18 ( str1 str2 -- f ) 06:26:19 If you don't need > and < and only need an equality check it's easier. 06:27:00 Is it comparing counted strings, or null-terminated ones? 06:27:09 It's often a trade-off between performance vs. the time it takes to write them 06:27:11 Null-terminated. 06:27:34 At some point you'll likely need a counted string test as well. 06:27:40 It's just tempting to optimize things, especially since assembly is at my fingertips 06:27:53 Yes, definitely. 06:27:58 Ah, FIND checks the lengths first before doing a detailed comparison. 06:28:07 A counted string test shouldn't be hard 06:28:10 Most of the time readability and maintainability are more important, but we all like to feel "clever". :-) 06:28:30 It's not, and often you reject straight away based on mismatched counts. 06:28:41 Looking through my commits they were often very incremental. Forth promotes that level of thinking even at the assembly level. 06:28:47 Because everything uses the stack. 06:29:02 I don't actually have a "string" comparison in mind; I'm limiting my names to no more than 15 bytes (so a counted string fits in 16). 06:29:12 I just do a pair of 64-bit compares. 06:29:29 Oh you and your 64 bit paradise 06:29:29 I've got it set up so the things are null-padded out to 16. 06:29:42 I see. 06:29:45 Yeah, I know. Well, I am porting to a 32-bit target, so I'll lose some of that. 06:30:34 I can finally write INCLUDE and split source files on my target 06:30:43 Nice. 06:30:58 I am sure I have unnoticed cell-size dependencies in this thing. 06:31:10 I'll try to get it running in 32 bit "mode" here on the Mac first. 06:31:15 So I can at least confirm it's working. 
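The name-comparison scheme just described (names capped at 15 bytes, null-padded out to 16, compared as a pair of 64-bit words) can be sketched in Python; the function names here are invented for illustration:

```python
def pack_name(name: bytes) -> bytes:
    """Null-pad a name (at most 15 bytes) out to 16 bytes, so a counted
    string fits in 16 and the whole thing lines up as two 64-bit words."""
    assert len(name) <= 15, "names are limited to 15 bytes"
    return name + b"\x00" * (16 - len(name))

def names_equal(a: bytes, b: bytes) -> bool:
    """Equality test as a pair of 64-bit compares instead of a byte loop.
    Because both names are null-padded, no separate length check is needed:
    a length mismatch shows up as a byte mismatch in the padding."""
    pa, pb = pack_name(a), pack_name(b)
    return pa[:8] == pb[:8] and pa[8:] == pb[8:]
```

On a 32-bit target the same idea becomes four 32-bit compares, which is part of what gets lost in the port.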
06:31:46 This is my first implementation, my next one would probably be completely re-written 06:32:08 Well, I would have to anyway because this assembler is just for TI-84 Z80 06:32:41 I imagine your first several will just keep getting better. 06:33:09 Should I go with a direct or indirect threaded model next time? 06:33:21 The guide I was reading (Moving Forth) advocated DTC 06:34:32 It can depend on your processor. 06:34:46 I find indirect threading to be more "elegant," so it's the only kind I've ever written. 06:35:07 Is the difference just between a jp docol or .dw docol ? 06:35:16 With direct threading you have to be able to put code anywhere. 06:35:17 Following the header of a word 06:35:22 Yes, more or less. 06:35:29 What do you mean put code anywhere? 06:35:38 In direct threading the cells of a definition have to point to *code*. 06:35:44 In indirect they point to a code pointer. 06:36:07 Some microcontrollers won't let you execute code from RAM. 06:36:20 Oh. 06:36:21 They only execute from flash, and often it's cumbersome to treat flash as data. 06:36:29 Harvard architecture stuff. 06:36:35 Separate instruction and data spaces. 06:36:50 Indirect works better on those. 06:36:58 Well the TI for most purposes considers RAM as the de-facto place to store and execute code 06:37:08 Flash is reserved for larger applications or backup 06:37:19 So you could likely do any of the three threading models. 06:37:54 Do your definitions, data, etc. immediately follow their headers? 06:37:54 I just refined my NEXT to do a jump instead of being inline, haven't noticed any performance drops. 06:37:58 Yes. 06:38:13 You might consider moving them to a separate region (your headers). 06:38:20 How come? 06:38:25 Then instead of the parameter zone following the header, there is a pointer in the header to it. 06:38:33 Well, one thing it lets you do is multiple entry points. 
06:38:36 Like this: 06:38:44 : alpha 1 2 3 : beta 4 5 6 ; 06:39:08 In the definition space you just have 1 2 3 4 5 6 06:39:15 alpha and beta point to the right places. 06:39:27 Huh. I never considered that. 06:39:35 I like jumping to next; it gives you interesting patching possibilities. 06:39:56 You can easily replace next with something less efficient but "information richer," like profiling stuff, debugging stuff, etc. 06:40:06 Then when you're done patch it back out. 06:40:24 Debugging assembly can be pretty painful on my target 06:40:27 You can do that just by re-writing the next code that gets jumped to. 06:40:58 Also, if your headers have a pointer to the parameter space instead of having it inline, you can revector any word. 06:41:10 You can supply a new definition, and overwrite the code and parameter pointers in the header. 06:41:16 All instances will then use the new definition. 06:41:35 In traditional Forth systems "vector-able" words are special. 06:41:55 You can also have headers with extra information in them. 06:42:10 They don't all have to be the same size. That opens interesting possibilities as well. 06:42:16 I think that's how I'm going to do DOES> 06:42:48 The way I implemented DOES> was to change the target of the JP instruction after the header. 06:42:59 Whereas a : definition will have the usual docol code pointer and parameter (definition) pointer, a DOES> word will have a "dodoes" code pointer, a pointer to the DOES> code, and then an extra pointer to the data area. 06:43:12 I mean CALL: [header] call docol .... becomes [header] call dodoes ... 06:43:23 dodoes will just push the contents of that extra cell (parameter address) to the stack before treating the regular pointer just as a : def does. 06:43:36 Assembly CALL on my target leaves the return address on the parameter stack. 06:43:37 Yes. 06:43:52 That part is easy - it's getting the data pointer handled that can be tedious. 06:43:58 I can make it easy with an extra header cell. 
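The indirect-threading model under discussion can be modeled in miniature: every word carries a code field saying how to interpret its body - run it directly for a primitive, or walk it cell by cell (docol, the address interpreter) for a colon definition. A toy Python sketch, not either speaker's implementation; all names are invented:

```python
# Each dictionary entry is a pair (code, body): `code` plays the role of
# the code field, `body` the parameter field.

def execute(xt, stack, words):
    code, body = words[xt]   # indirect step: fetch the code field, then run it
    code(body, stack, words)

def do_primitive(fn, stack, words):
    # primitive word: the "machine code" is a Python callable
    fn(stack)

def docol(body, stack, words):
    # colon definition: the address interpreter walks the thread,
    # executing the word each cell names
    for xt in body:
        execute(xt, stack, words)

words = {
    "DUP": (do_primitive, lambda s: s.append(s[-1])),
    "+":   (do_primitive, lambda s: s.append(s.pop() + s.pop())),
    "2*":  (docol, ["DUP", "+"]),   # : 2* DUP + ;
}
```

Direct threading would store executable code addresses in the body itself, skipping the extra code-field fetch - which is why it needs the target to be able to put code anywhere.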
06:44:05 I see. 06:44:27 I'm not particularly happy with some of the SMC going on, but my assembler doesn't seem to be able to escape " in macros 06:44:38 So ." is actually .Q but SMC'd to ." 06:44:43 One of the cool things about Forth is that there can be so many variations of this sort, none of which are "wrong." 06:44:51 Right. 06:45:47 I moved my actual name strings into a separate table - the headers actually contain an index into that table. 06:45:49 How many possible assembler words do you have on your target? 06:45:57 Your instruction set 06:45:58 That's to allow a source compression scheme I'm interested in. 06:46:12 Oh, well, it's x86-64. 06:46:20 Is that what you meant? The actual processor instructions? 06:46:27 Or did you mean primitive count? 06:46:44 I only actually use a small subset of the processor instructions. 06:46:47 processor 06:46:52 The primitive processor instructions 06:47:04 x86 has a lot of math primitives right? 06:47:09 I haven't counted how many different opcodes I use. 06:47:28 Yes, and also floating point, and some SIMD support instructions, and myriad other things. 06:47:37 It's grown into something of a monstrosity over the years. 06:48:07 I suppose there's a bit of a Zen-like feeling with the Z80's minimalism 06:48:14 What I mostly use are jumps, moves, a handful of the arithmetic instructions, and a couple of the floating point ones. 06:48:26 I'll probably want to look into some kind of an API to the SIMD stuff at some point. 06:48:26 Right, Forth only needs a subset. 06:48:30 SIMD? 06:48:42 single instruction multiple data 06:48:50 Processing multiple data streams at once. 06:49:08 Wow. 06:49:27 But I don't want any of that used in the core system, because it would make porting to other targets difficult. 06:49:56 It's really a shame that most mobile phones are closed off devices, I'd like to see a full Forth system there. 06:50:07 Yes, definitely. 
06:50:21 I wrote macro wrappers for all my machine code usages; I've got about 50 of those. 06:50:42 The idea being that the system itself is expressed in a "portable assembly language," and I can port by just re-writing those macros. 06:50:56 Are you thinking of "exotic" targets, say PDAs, e-ink devices? 06:51:06 Sort of a manual implementation of what the golang guys do for all their targets. 06:51:17 The golang compiler generates such a "portable assembly" format. 06:51:24 I see. 06:51:27 Possibly, but not initially. 06:51:38 Makes sense because implementations are similar in that they implement a Forth "VM" 06:51:39 I really want to move it to a good solid processor I can use in embedded designs. 06:51:59 Right - these macros are sort of Forth VM instructions. 06:52:19 Low level ones - actually all the primitives are VM instructions, but this takes it a level lower. 06:52:51 My source file is about 3200 lines long, but only about 500 should need to be re-written for a port. 06:53:05 That's the hope, at least. 06:53:14 I haven't made it happen yet, so there could be some ugly surprise lurking. 06:54:54 Say I wanted to make a system to water my plants automatically, what hardware would it take? 06:55:04 I've never done a project of this sort so I haven't a clue where to start. 06:55:12 Apart from making the Forth system, of course. 06:56:51 Will you have water tubes pre-placed, that just need to be turned on? 06:56:56 Or are you thinking some sort of robot? 06:57:21 For the former, a small microcontroller with digital i/o to control valves is all you'd need. 06:57:45 If you wanted to measure the water you'd need flow rate sensors, but it's likely that just turning a hose on for some period of time would be adequate. 06:58:06 And your time measurement doesn't need to be ultra-precise; 10.3 seconds is as good as 10. 06:58:20 So you could use a simple timing loop that you'd calibrated during development. 
06:58:30 Not a robot, just, for instance, turning sprinklers on and off 06:58:46 And how would I connect digital I/O components together? 06:58:49 The digital outputs wouldn't be able to drive the valves directly - you'd need small transistor circuits (a "driver" circuit) on each one. 06:59:04 But that's likely just one transistor; this is a pretty simple application. 06:59:24 Ah, so working with some actual hardware and circuits is required. 06:59:31 Similar driver circuits on each phase of a stepper motor's windings would let you do motor control, and so on 06:59:41 Yes, but very simple ones. 07:00:04 You could probably buy such a thing already made - designed to wire your digital output to and then wire to your valve. 07:00:31 You also might be able to buy valves with the driver built in. 07:00:42 So designed for direct digital control. 07:01:24 I see. I'd want to experiment with more home automation in the future. 07:01:44 But self-implemented rather than using something made already, unless absolutely necessary. 07:01:46 But just to tell you how simple this is, I'm picturing a single transistor with the source grounded, the drain connected to the valve, and the gate connected to your digital output. 07:01:53 Right. 07:02:00 You're just turning the transistor on, and it sinks current through the valve actuator. 07:02:21 So what microcontroller would you be thinking of? 07:02:56 Almost anything. The little Cortex M4 board that I have (the Adafruit Itsy) would do fine - it has digital i/o connections already on the board. 07:03:30 Wow, so it really would just be a matter of buying some wires and linking them. 07:03:44 Yes, for something this low-performance. 07:03:52 There are very few problems to worry about here. 07:04:21 As you march up to more involved applications, your need for precision time control and so on would increase. 07:04:37 Have you done some home automation projects? 07:04:41 If so, what? 
07:04:45 You might find yourself needing to carefully synchronize outputs, etc. Like in a motor controller - you want to manage those phase actuations "just right." 07:04:51 Oh yes. 07:05:13 I've led the development of pick and place machines, done machine vision, etc. 07:05:17 That's the backbone of my career. 07:05:56 Machine vision not using deep learning or neural networks, right? 07:06:04 It's been a while; I joined a little company here in Houston in 2012 as their software engineering manager. Enterprise grade flash data storage. 07:06:13 Five months later IBM bought the company, imported new management. 07:06:20 Since then I've done performance testing on the product. 07:06:23 Ah, so you work at IBM now? 07:06:30 I feel... quite underutilized. :-) 07:06:33 But the pay is good. 07:06:48 Yes, for the last six years now. 07:07:14 Performance testing is ok - I know a lot about how the system works under the hood, and that helps me write good tests. 07:07:31 Flash is pretty interesting stuff. 07:07:34 It's basically crap. 07:07:34 I see. Is it interesting or quite repetitive? 07:07:46 Doing performance testing, that is. 07:07:49 I've managed to keep it interesting, but there's always the threat of it becoming repetitive. 07:08:04 We use that crap to build highly reliable storage systems. 07:08:09 Lot of interesting engineering there. 07:08:11 Flash is crap? 07:08:14 It's like making armor out of tissue paper. 07:08:16 I mean, how so? 07:08:27 Yeah, flash chips will just "forget data" if you leave them to their own devices. 07:08:35 Bit error rate is rather high. 07:08:47 So we do all kinds of software layers to work around that. 07:08:48 Oh you're talking about flash as in RAM chips? 07:08:58 Well, flash chips, but yes. 07:09:07 I was thinking of flash drives like SSDs, haha 07:09:13 Flash also wears out - it only tolerates a certain number of program erase cycles. 07:09:19 So we have to do wear leveling, and so on. 
07:09:34 And the unit of erasing is larger than the unit of writing, so "garbage collection" is required. 07:10:04 Ah, and is that the reason why it's hard to securely erase data on flash memory? Because of wear leveling? 07:10:06 You wind up with your data scattered out so that there are no completely empty "erase zones." 07:10:17 So you have to move the still-valid data out of there, so you can erase and re-use the zone. 07:10:39 Yes, typically when you "rewrite" a logical page, it just writes the new data to a new location. 07:10:47 The old data is still sitting there - we say it's "stale." 07:10:50 Side question: What's your stack signature of FILL ? 07:10:59 Ah I see. But it's still technically recoverable. 07:10:59 You can no longer access it via a logical page address, but it's not "gone." 07:11:08 Right. 07:11:10 Yes. 07:11:20 Except you can't really overwrite it explicitly? 07:11:34 Well, I haven't written FILL yet, but I think you need an address, a count, and a fill byte. 07:11:50 You can, by addressing the physical flash page and writing something new. 07:12:12 But the user interface to the thing works with logical addresses, that are mapped to physical via a "logical page table" (LPT). 07:12:23 Once you re-write the logical page, nothing in the LPT points to the old data any more. 07:12:36 So you can't get rid of it via normal interaction with the system. 07:12:48 You'd have to have a mode that let you access the flash by physical page. 07:12:56 gforth uses ( addr how_many byte -- ) but I find ( byte addr how_many -- ) easier because it's just : FILL 0 DO 2DUP C! 1+ LOOP 2DROP ; 07:13:13 Right. I haven't really given it any thought. 07:13:18 Another one of those things you can do however you like. 07:13:23 Oh I could just have prefaced it with a ROT 07:13:28 But then again, convention :) 07:13:44 Yep. 
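The logical-page-table behavior described above can be shown with a toy model (hypothetical Python, greatly simplified - no erase zones or wear leveling): rewriting a logical page writes to a fresh physical page and repoints the LPT, leaving the old copy "stale" but physically intact.

```python
class FlashSim:
    """Toy flash translation layer: logical pages map to physical pages
    through an LPT; rewrites never touch the old physical page."""

    def __init__(self, pages):
        self.physical = [None] * pages  # physical page contents
        self.lpt = {}                   # logical page -> physical page
        self.next_free = 0

    def write(self, logical, data):
        phys = self.next_free           # always write to a fresh page
        self.next_free += 1
        self.physical[phys] = data
        self.lpt[logical] = phys        # old mapping, if any, goes stale

    def read(self, logical):
        return self.physical[self.lpt[logical]]

    def stale_pages(self):
        """Pages holding data that nothing in the LPT points at anymore -
        unreachable through the logical interface, but not gone."""
        live = set(self.lpt.values())
        return [i for i, d in enumerate(self.physical)
                if d is not None and i not in live]
```

Garbage collection would copy the live pages out of an erase zone and erase it; the stale copies are what make secure erasure through the logical interface impossible.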
07:13:45 I fixed my CMOVE to follow ( src dest count -- ) instead of ( dest src count -- ) 07:13:55 And so on 07:13:57 I'm "largely" compatible with convention, but I am always willing to break with it if I have a good reason. 07:14:24 And an argument could be made for source count dest. 07:14:31 Ah but having a mode that could interact with the physical page would open up the possibility of explicitly burning out that page. 07:14:42 I.e., you have a counted string represented by addr count, and you just say "destination, move it." 07:14:50 Right. 07:15:00 You could kill a page by just writing it over and over. 07:15:13 Well, erasing it and writing it over and over - I think it's actually the erase that consumes the life. 07:15:13 Are you using Forth for your performance testing? 07:15:22 No, no good opportunity. 07:15:29 What do you use then? 07:15:37 I've written quite a bit of code to facilitate it (actually to manage the results) in C and Python. 07:15:40 No Forth, though. 07:16:20 I've always found Forth to be most useful when directly dicking around with embedded hardware. 07:16:46 I'm trying with this current project to get a "Forth operating system" that's generally useful, but I've just never figured out how to use Forth well running under some other OS. 07:17:11 Are there languages in similar spirit to Forth? Small, interactive, low-level etc? 07:17:20 Plus I've found that writing good Forth is HARD WORK, whereas those other languages (especially Python) let me grind out stuff pretty fast. 07:17:34 Ah yes. It's good to have a large body done for you already. 07:17:38 I'm not aware of any - it's certainly the "crown jewel" of that domain. 07:17:55 I think it's a good abstraction over assembly. 07:17:56 Yeah, Python has an amazing array of packages ready to pick up and use. 07:18:04 A more universal form of ASM 07:18:15 Python has good list-processing facilities too, I hear 07:18:17 It's just nothing to slurp in a JSON file in Python, for example. 
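The one-line JSON slurp being described really is one line with Python's standard-library `json` module (the data here is made up for illustration):

```python
import json

# One line of parsing and you have a native Python structure:
# dicts, lists, strings, and numbers, nested arbitrarily.
data = json.loads('[1, 2, 3, "abc", [17.4, 26.3]]')
```

For a file it's `json.load(open("somefile.json"))` - the second line being the import, as noted below.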
07:18:24 One line of code, and you have a Python structure. 07:18:34 Well, two lines if you count the package import. 07:18:48 Haskell is becoming more like Python in a sense, with its growing libraries 07:18:59 Except, again, there's a lot of weird stuff with naming 07:19:01 And Python minces through string parsing easy as pie. 07:19:10 I'm stealing that - I want my Forth to handle strings in a similar way. 07:19:46 Parser combinators are a very good way to write parsers easily. 07:19:47 I've felt that Forth has never had a good universally accepted string handling facility. 07:20:18 KipIngram: https://github.com/siraben/monadic-parsing 07:20:35 Moore said the big problem with strings is basically having a place to put them, and I think he's right. 07:20:42 If you look at some of the examples, there's one that reads lists of numbers like [1,2,3,4], a BNF parser and a Lisp s-exp parser 07:20:46 It encourages you to have something like a heap, and I guess he never wanted to go there. 07:20:57 They're written in a very composable way. I should port it to forth. 07:21:08 Ah yes, strings and heaps often go together. 07:21:21 I do plan to have a heap for such things, and a string input word s. 07:21:33 s will let me delimit my string with any character I want. 07:21:43 s "this is a string" or s /this is a string/ etc. 07:21:55 Here's a BNF parser in Forth I found online: http://www.bradrodriguez.com/papers/bnfparse.htm 07:22:00 That way if I want quotes in my string I can have them. 07:22:08 Ah I see. 07:22:20 I also want good support for regular expressions. 07:22:22 A custom version of ." 07:22:34 Well, custom version of s" 07:22:37 ." prints it. 07:22:45 s" just "makes it." 07:22:53 What about recognizing "stronger" languages like context-free grammars? 07:22:55 And once that heap exists, I can have any kind of structure. 07:23:05 Say, parsing JSON, as you said with Python.
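On an ANS system the proposed s could be sketched with PARSE. This assumes a single blank after s, the delimiter immediately next, and interpret state only; the name and behavior are illustrative, not KipIngram's actual design:

```forth
\ Sketch: the first character after the blank is the delimiter;
\ PARSE collects text up to its next occurrence. Interpret
\ state only; the result points into the transient input buffer.
: s ( "<char>ccc<char>" -- c-addr u )
  SOURCE DROP >IN @ + C@   \ peek the delimiter character
  1 >IN +!                 \ step past it
  PARSE ;                  \ gather up to the next delimiter

s /this is a string/ TYPE  \ would print: this is a string
```

Because any character can delimit, embedded quotes need no escaping, which is the point made above.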
07:23:13 [ 1, 2, 3, "abc" [ 17.4, 26.3 ] ] 07:23:15 Input the grammar, and write a word to generate a parser for it 07:23:51 Right, or Lisp style s-exps (1 2 3 "abc" (17.4 26.3) foo) 07:24:35 KipIngram: You might be interested in that website, it defines a word called BNF: and very much defines the grammar in Forth 07:25:04 For instance: BNF: '(' ')' | | ;BNF 07:25:21 Recognizes a language of balanced parens ( (like) ((this))) 07:28:36 siraben: neat! 07:28:42 Cool. 07:32:01 That would mean you could bootstrap Forth yet again, with a Forth parser written in Forth. 07:36:00 --- quit: tabemann (Ping timeout: 252 seconds) 07:51:25 --- join: kumool (~kumool@adsl-64-237-233-169.prtc.net) joined #forth 07:52:34 Yes, getting this thing to build itself is definitely a goal of mine. 07:52:37 :-) 07:52:56 I don't think I'm going to try to "bootstrap," though - I'm going to assume I have a full working system, and have it build an image of itself. 07:54:31 --- quit: Zarutian (Read error: Connection reset by peer) 07:54:45 --- join: Zarutian (~zarutian@173-133-17-89.fiber.hringdu.is) joined #forth 07:55:26 Ah yes. I've had a glimpse of that when I implemented SEE on my system and performed SEE SEE then used that decompiled definition to implement it. 07:55:36 :-) 07:55:39 Right. 07:55:49 Very meta. 07:55:57 Yes. 07:59:18 Wow the TI has ROM calls for calculating MD5 hashes 08:01:18 On the other hand, there's a ROM call to subtract 5 from a 16-bit register, HL. -_- 08:12:10 --- quit: kumool (Quit: Leaving) 08:46:38 Having implemented F+ F/ F* square root and reciprocal floating point, I think I'll hit the hay. Night all! 09:20:58 --- join: [1]MrMobius (~default@c-73-134-82-217.hsd1.va.comcast.net) joined #forth 09:23:44 --- quit: MrMobius (Ping timeout: 244 seconds) 09:23:45 --- nick: [1]MrMobius -> MrMobius 10:06:56 I'm starting to think that defining words aren't so great after all 10:07:01 create and does>, I mean 10:07:26 you can't extend them.
if I make a word with create and does>, there's no way to make another word which builds on that 10:12:01 --- join: kumool (~kumool@adsl-64-237-233-169.prtc.net) joined #forth 10:59:22 --- quit: Zarutian (Read error: Connection reset by peer) 11:02:00 --- quit: pierpal (Quit: Poof) 11:02:17 --- join: pierpal (~pierpal@host168-226-dynamic.104-80-r.retail.telecomitalia.it) joined #forth 11:11:53 Interesting point. So you're wanting an inheritance mechanism. 11:12:10 I just want the ability to reuse code 11:12:17 So what you'd like is something like this? 11:12:38 : ALPHA CREATE ... DOES> ...a... ; 11:12:51 : BETA ALPHA ... DOES> ...b... ; 11:13:22 The first ... in BETA would add EXTRA storage, above what ALPHA did, and the ...b... would run in addition to ...a... when words created by BETA were run? 11:13:41 Do you know for a fact that ^ won't work? 11:13:49 I'd have to try it to know. 11:13:52 : foo create , does> @ + ; : bar create , does> @ + 5 - ; 11:14:00 here "@ +" is common 11:14:16 What if you replace the create in bar with foo? 11:14:34 yeah, that would be ideal, but then there's no way to tack on the "5 -" part 11:14:49 You can't include a DOES> in bar? 11:14:55 Hang on - I'm trying this. 11:15:11 I mean, I guess you could but that just overrides it, it doesn't extend it 11:15:19 so you'd have : bar foo does> @ + 5 - ; 11:16:10 Yeah, doesn't work the slick way. 11:16:20 But that's a... nice idea. Can we make it work that way? 11:17:01 bar created words don't seem to run foo's DOES> code. 11:17:06 I just get an address - 5. 11:17:16 In GForth. 11:17:29 right, because DOES> overwrites the code field 11:17:32 it doesn't extend it 11:17:33 Yes. 11:17:36 I'll think about it. 11:18:09 All we need is for a call to foo's DOES> code to get put in the bar words. 11:18:15 Before its own DOES> code. 11:18:27 yeah so I guess you could like 11:18:37 well 11:18:46 Yeah, there's no way to manually get at that code. 11:18:51 I mean, we could make a way.
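Short of a new mechanism, one conventional way to get the reuse discussed above is to factor the shared DOES> action into an ordinary word that both definers call. The clauses stay separate, but the run-time code lives in one place:

```forth
\ Factor the common "@ +" run-time into a plain word; foo and
\ bar each install their own DOES> clause but share it.
: offset+ ( n body -- n' )  @ + ;
: foo  CREATE , DOES> offset+ ;
: bar  CREATE , DOES> offset+ 5 - ;

10 foo add10    3 add10 .    \ 13
10 bar add10-5  3 add10-5 .  \ 8
```

This doesn't let bar's children transparently run foo's DOES> code the way an extensible mechanism would, but it removes the duplication that prompted the complaint.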
11:18:58 But it's not pre-existing. 11:18:58 you could come up with some hackery to find that but it would be nasty 11:19:05 Yes. 11:19:11 Much better if it just WORKED that way. 11:19:28 foo's DOES> code should have CREATE's function pre-pended to it. 11:19:43 bar's DOES> code should have foo's function pre-pended to it. 11:19:59 It would still work right when CREATE was used if we think of it that way. 11:20:16 I.e., IT NEEDS TO BE THAT WAY. 11:20:41 So we don't want to think of "an automatic push of the address" followed by the DOES> code. 11:21:00 We want to think of running the invoked defining word's runtime, then executing the new word's runtime. 11:23:58 So we want a generic "DOES>" methodology. 11:25:14 well, you're thinking of fixing the issue 11:25:23 I guess my original point is that maybe I just need to use create/does> less 11:27:46 Well, yes - I am currently engrossed with writing a deliberately non-standard system, so this sort of thing is grist for my mill. 11:28:20 Wonder how Mark did this part? 11:28:31 --- join: dys (~dys@tmo-102-85.customers.d1-online.com) joined #forth 11:30:07 I'd planned to have a dodoes runtime that pushed the extra parameter cell content (suitably relocated) to the stack, but this makes me think that's not right. 11:30:36 What I want is for it to have docol as its runtime, and to deliberately insert a call to the defining word's runtime as the first cell of this word's runtime. 11:32:28 The rub here is that CREATE gets the address to push from the first PFA cell, not the "extra cell." 11:32:45 That won't be the right thing for words that have the extra cell. 11:33:44 These words may need to have two complete CFA/PFA pairs. 11:53:47 --- quit: pierpal (Quit: Poof) 11:54:04 --- join: pierpal (~pierpal@host168-226-dynamic.104-80-r.retail.telecomitalia.it) joined #forth 12:11:57 yay the penti keyboard is great for forth on phones 12:12:03 https://software-lab.de/penti.html 12:15:49 typed from a penti keyboard 12:34:03 Interesting.
12:34:13 What's the physical device this is on? 12:44:51 KipIngram: an android phone https://play.google.com/store/apps/details?id=de.software_lab.pentikeyboard 12:44:51 ...but it could be on a microcontroller breadboard. It only requires 6 buttons for efficient typing. 12:44:51 ...or you could type with one of those 6-button gaming mice. 12:45:53 --- quit: pierpal (Read error: Connection reset by peer) 12:46:07 --- join: pierpal (~pierpal@host168-226-dynamic.104-80-r.retail.telecomitalia.it) joined #forth 12:47:42 I'm already starting to feel that this will make coding on small devices much more comfortable after using it on my phone this morning. 12:49:37 --- quit: kumool (Remote host closed the connection) 12:49:42 --- join: kumul (~kumool@adsl-64-237-233-169.prtc.net) joined #forth 12:50:15 --- quit: kumul (Remote host closed the connection) 12:50:27 --- join: kumul (~kumool@adsl-64-237-233-169.prtc.net) joined #forth 12:53:11 Installing. 12:55:12 the guy talks about legality, but really though, nothing that amazing about chording 12:55:36 It took a bit of time to get used to it. The mnemonics help. 12:55:36 How do you activate it? 12:55:43 I selected it in my keyboard list. 12:55:55 But I don't see anything except the word Penti overlaid on my screen. 12:56:15 place your fingers where you want them. 12:57:25 (all five fingers in comfortable positions on the display) 12:59:04 Heh. I don't find a comfortable five finger position in portrait. 12:59:13 Plus my hand covers up where the text appears in Hangouts. 13:09:46 I find this idea interesting, but I think I'd want to design an input device. 13:12:08 --- join: Zarutian (~zarutian@173-133-17-89.fiber.hringdu.is) joined #forth 13:18:02 Well, shit. 13:18:08 Oh, sorry - wrong channel. 13:18:16 Normally would have been a bit more circumspect here. 13:20:16 I'm reading about ARM assembly, and I already see problems with my "portable" macros. :-( 13:20:36 In ARM only "load" and "store" can access memory.
13:20:40 Everything else has to use registers. 13:20:47 So ADD has to use registers, etc. 13:20:57 That breaks my portability fairly thoroughly. 13:24:25 I suppose that's what I get for trying to write the macros before familiarizing myself better with ARM. 13:37:53 --- quit: kumul (Ping timeout: 272 seconds) 13:43:32 --- join: kumool (~kumool@adsl-64-237-233-169.prtc.net) joined #forth 13:58:56 KipIngram: you did know that R in ARM stands for RISC, no? which is basically a load/store register architecture 13:59:11 (Acorn RISC Machines if anyone is curious) 14:03:46 --- quit: kumool (Ping timeout: 244 seconds) 14:34:33 I did know that, but not further details. 15:00:12 --- quit: Zarutian (Ping timeout: 252 seconds) 15:01:00 --- join: wa5qjh (~quassel@freebsd/user/wa5qjh) joined #forth 15:22:12 --- join: Zarutian (~zarutian@173-133-17-89.fiber.hringdu.is) joined #forth 15:33:43 --- join: kumool (~kumool@adsl-64-237-233-169.prtc.net) joined #forth 16:27:35 --- quit: Zarutian (Ping timeout: 268 seconds) 16:27:53 --- join: Zarutian (~zarutian@173-133-17-89.fiber.hringdu.is) joined #forth 16:34:23 --- join: kumul (~kumool@adsl-64-237-233-169.prtc.net) joined #forth 16:34:23 --- quit: kumul (Remote host closed the connection) 16:35:20 --- quit: kumool (Remote host closed the connection) 17:15:30 --- quit: siraben (Ping timeout: 252 seconds) 17:33:43 --- join: tabemann (~tabemann@213-84-181-166.mobile.uscc.net) joined #forth 17:46:50 zy]x[yz: I think I worked out an indirect-threaded implementation of your extensible DOES> mechanism. 17:47:19 So in all of these cases we execute a defining word, which actually creates our header. 17:47:34 I think the storage allocation aspect of this all already works. 17:47:48 The storage allocation actions would just follow one another. 17:47:56 Here's what DOES> would do: 17:48:24 1) Shift every instruction cell from LATEST to HERE (not including the one AT HERE) forward two instruction cells.
17:48:50 2) Populate the two icells that open up with docol and the address where we're going to compile the DOES> code. 17:49:37 Oh, I believe I see a problem with this. Let me think a few more minutes. 17:49:39 --- join: kumul (~kumool@adsl-64-237-233-169.prtc.net) joined #forth 17:50:57 No, wait, I think we're ok. Maybe. I'll finish. 17:51:37 3) Place a pointer to the original CFA content that our defining word left us with (which is now two icells beyond where it was) in the first icell of the DOES> code area. 17:51:48 4) Compile the DOES> code on from there. 17:52:03 So I'm iffy about this - moving the stuff might break something - I need to draw a picture. 17:52:17 But the idea here is that we don't want to have the address push hidden away in some "dodoes" routine. 17:52:36 We want to compile a first cell to the DOES> area that performs that for us, and then put our own DOES> code after that. 17:52:46 Then the word is just a : definition that uses docol. 17:53:24 The original CFA/PFA pair that our defining word gave us becomes a "no-name header" that we can use from our DOES> code. 17:53:41 And the named word takes on a docol action running the pre-pended DOES> code. 17:54:10 I think if you extended this to several levels, you'd wind up with multiple no-name headers created after your named CFA/PFA pair. 17:54:35 And I think this would only work if you have PFA *pointers* like I do - it would not work if parameter fields directly follow the CFA cell. 17:55:19 And I've only "imagined" this in my head - I may be overlooking something. 17:58:55 Ah, ok - so I do think that's broken. But I think I see how to fix it. Drawing some pictures now. 18:14:42 zy]x[yz: See what you think of this: 18:14:44 https://pastebin.com/MxmDhYT7 18:15:02 You get an extra level of nesting with each extension. 18:15:06 But I think it "works." 18:15:10 Or is at least in the ballpark. 18:16:56 This does push the CFA/PFA list down each time and install a new one.
The new one uses a "doboth" runtime; it EXECUTEs the CFA/PFA pair (which reside at a fully valid CFA address) and then behaves as docol with the definition found in the following cell. 18:17:13 If the CFA/PFA after you happens to use doboth as well, then it does the same thing to what follows it. 18:17:30 So you push all the way to the bottom, where you EXECUTE a dovar / address pair. 18:17:43 Then as you pop back up you pick up the DOES> code from each level. 18:18:27 So when you use a "several levels deep" defining word, you get a named header for the new word, followed by a raft of no-name headers. 18:19:18 And you wind up executing those from the earliest to the latest, finally executing the named pair. 18:20:16 I don't know if this qualifies as "simple." 18:20:20 Or efficient. 18:38:10 hmm. tempting 18:53:52 --- join: rdrop-exit (~markwilli@112.201.162.180) joined #forth 19:14:07 :-) 19:14:11 I'm on the fence about it. 19:14:20 It is tempting - it seems like the way the functionality should behave. 19:14:28 I was hoping it would be a bit simpler to implement, though. 19:21:14 --- quit: tabemann (Ping timeout: 245 seconds) 19:29:09 --- quit: wa5qjh (Remote host closed the connection) 19:31:27 i think i'm still going to lean toward kiss and use create/does> more sparsely 19:40:08 I’m eliminating create/does> in the next iteration of my Forth 19:41:27 I’ve never liked the lack of factoring in a create/does> word. 
19:43:49 --- join: tabemann (~tabemann@2602:30a:c0d3:1890:b90b:3bee:a83b:ed44) joined #forth 19:45:53 yeah 19:57:18 --- join: wa5qjh (~quassel@freebsd/user/wa5qjh) joined #forth 20:30:55 --- quit: dddddd (Remote host closed the connection) 20:34:47 --- quit: pierpal (Read error: Connection reset by peer) 20:59:21 --- quit: jackdaniel (Ping timeout: 252 seconds) 21:27:53 --- join: pierpal (~pierpal@host168-226-dynamic.104-80-r.retail.telecomitalia.it) joined #forth 21:30:27 --- join: robdog (~asd@2601:700:4180:1543:5838:c11c:5127:1cd6) joined #forth 21:52:22 --- join: jackdaniel (~jackdanie@hellsgate.pl) joined #forth 22:15:49 --- quit: dave0 (Remote host closed the connection) 22:16:36 --- join: dave0 (~dave@90.20.215.218.dyn.iprimus.net.au) joined #forth 22:34:53 --- quit: kumul (Quit: Leaving) 22:47:27 --- quit: dys (Ping timeout: 268 seconds) 23:05:40 --- quit: rdrop-exit (Quit: rdrop-exit) 23:44:58 --- quit: pierpal (Read error: Connection reset by peer) 23:59:59 --- log: ended forth/18.10.05