00:00:00 --- log: started forth/19.03.21 00:18:18 crc: i did a pull/update - got the underflow fix - side effect though - terminal left in a dodgy state... maybe you could use an atexit which checks if termios is active and reset that there? a stty sane isn't too arduous of course 00:37:57 --- join: dave0 (~dave0@223.072.dsl.syd.iprimus.net.au) joined #forth 00:38:26 re 00:41:01 <`presiden> 0 00:41:38 --- join: xek_ (~xek@apn-31-0-23-83.dynamic.gprs.plus.pl) joined #forth 00:42:33 morning 01:08:07 hi `presiden, the_cuckoo 01:08:31 <`presiden> hi 0 02:03:59 --- quit: ashirase (Ping timeout: 255 seconds) 02:07:57 --- join: ashirase (~ashirase@modemcable098.166-22-96.mc.videotron.ca) joined #forth 02:15:11 crc: but if it's emulating hardware that reads a program from memory into cache, would that not truncate the image? Or overwrite the beginning n cells? Unless the hardware protected against that 02:49:46 Hmmm. Active use of the return stack in a word really can foul up the ability to factor. 03:00:53 it's like maths 03:01:04 expand it back out 03:01:11 and refactor ;) 03:01:48 Well, I just mean that once you have something on the return stack you can't get at it once you've called a word. 03:04:24 I just wrote a word yesterday that turned out to be exceptionally heavy on return stack operations. Very unusual for one to land that way. 03:05:42 You can use J 03:05:54 to dig under the return address 03:06:00 ...not advised though... 03:07:43 Yeah, not a very appealing idea. :-) 03:10:21 I've given a little thought to making my A and B address registers stacks. 03:10:40 Then I could use >A and A> similarly, and that would leave the return stack completely untainted. 03:11:36 I wouldn't. 03:11:58 It seems bloated to have more than two registers. 03:12:34 Means my code is probably too complicated, probably just a weird psychological bias 03:13:04 Well, we're sometimes entitled to our biases. As long as they aren't hurting anyone they're harmless. 
:-) 03:13:10 but also I like to think about how the features I'm using would be implemented in hardware, so I like to keep it simple. 03:13:24 Yes, that's definitely a good point. 03:14:00 i used r-pick and r-roll :-) 03:14:16 Well, for that matter I could just use A and B. I just hate to overwrite them any more than I can help. 03:14:19 like pick and roll except for the return stack :-p 03:14:47 I REALLY don't like the "roll" words, for the reason WilhelmVonWeiner just stated. 03:15:02 I think about what's actually HAPPENING when that word executes, and it's... "yuck." 03:15:11 A whole ton of stuff is moving. 03:15:17 <`presiden> yeah, I guess it doesn't roll your tongue 03:15:37 * `presiden goes into the corner 03:16:58 This word takes a memory region under management by a boundary-tag memory allocation algorithm and splits a sub-region out of it. Fixes up the tags and establishes the required new ones. There are just a LOT of interlinkages within that operation. 03:17:13 --- join: dddddd (~dddddd@unaffiliated/dddddd) joined #forth 03:17:13 I'm still looking to see if I've overlooked a better structure. 03:20:18 KipIngram: LOL 03:20:22 "Yuck" 03:20:29 my thoughts exactly 03:21:08 pick and roll would probably be quite complex 03:22:04 Yeah. In a software stack PICK is no problem. In a hardware stack you have to have data pathways from every slot you can PICK from to a mux, so that's already pretty ugly. 03:22:15 That's a LOT OF WIRES. 03:22:59 Especially in a 64-bit system. 03:23:32 The same thing can happen when you start trying to think about making shift higher performance. 03:23:41 BLEH I can feel my college EE lecturer punching my gut as we speak 03:23:52 Shifting one bit is pretty straightforward - shifting an arbitrary number of bits gets ugly FAST. 03:24:46 What exactly does a barrel shifter do? Does it have paths for all "power of 2" bit count shifts, and then just stack those to get whatever you want? 03:26:40 Oh, hmmm. 
03:27:01 The Wikipedia article makes it seem like it implements a full crossbar. 03:27:04 Nightmarish. 03:27:25 Oh, no, that was misleading. 03:27:33 Looks like it does (usually) stack stages. 03:27:45 I was never taught how they work, and I really do need to brush up on my circuit design. 03:28:59 In some encryption applications you need to rearrange bits in large bit fields. That's an interesting problem to try to develop a clean lexicon for as well. 03:31:08 A colleague of mine has a pet question he likes to ask applicants re: FPGA design. 03:31:22 Back when I did that we still drew schematics, so what you were getting was right in front of your face. 03:31:44 These days they use Verilog, and in ways that "infer" logic, so there's a layer of abstraction between you and the result. 03:32:02 His question is about achieving a certain function via Verilog. I can't remember exactly what it is. 03:32:24 But there's a really simple bit of Verilog that looks like it would be the obvious answer - just as straightforward as can be. 03:32:41 But it turns out that that Verilog will fill like half of a big FPGA. 03:33:20 I want to go back to university, take a course involving VLSI. 03:33:25 It creates this vast structure of logic. Most inexperienced applicants will give that answer. He's ok with that - what he's going for is to see if they can tell him what's wrong with it when he asks them to take a closer look. 03:33:40 He wants to see if they can "see through to what they're really doing." 03:34:02 I can recommend a really good book if you want to self-study. 03:34:17 I'd have to go look at it to get the title / authors, but I could do that later. 03:34:34 I found it to be very readable. I've never had any "real" training in that area. 03:37:58 Please. 03:38:34 Ok. I'll find it and let you know. It may be on my shelf at the office. 
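[editor's note] The "stacked stages" answer to the barrel shifter question can be made concrete. Here is a minimal sketch in Python (the channel's code is Forth, but Python keeps the example self-contained and runnable): a logarithmic barrel shifter built from log2(width) two-way mux stages, each stage controlled by one bit of the shift amount, instead of a full crossbar.

```python
def barrel_shift_left(value, amount, width=64):
    """Logical left shift built from staged power-of-two shifts.

    Stage k either passes the word through or shifts it by 2**k,
    selected by bit k of `amount`. For width=64 that is 6 stages;
    in hardware, each stage is one 2:1 mux per bit with a single
    shared control line, rather than one path per (slot, distance)
    pair as in a crossbar.
    """
    mask = (1 << width) - 1
    stages = width.bit_length() - 1   # log2(width) for power-of-two widths
    for k in range(stages):
        if (amount >> k) & 1:
            value = (value << (1 << k)) & mask
    return value
```

Any shift distance 0..width-1 is reachable because the stage shifts (1, 2, 4, ...) sum to whatever the amount's bits encode.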
04:07:12 the_cuckoo: I've added an atexit() call to restore_term() which will hopefully resolve this 04:09:00 WilhelmVonWeiner: I'd have to consider what to do on actual hardware when that time comes; as long as it's on a host system, I may as well check and abort if the image is too large :) 04:11:41 --- join: mtsd (~mtsd@94-137-100-130.customers.ownit.se) joined #forth 04:12:11 WilhelmVonWeiner: This is what this word needs to do: 04:12:14 2 // : split ( M A --) ... ; 04:12:16 3 // 04:12:18 4 // Initial: 04:12:20 5 // A A+N+2 04:12:22 6 // -------------------------------------- 04:12:24 7 // N+4 N+2 04:12:26 8 // 04:12:28 9 // Final: 04:12:30 10 // A A+M+2 A+M+4 A+N+2 04:12:32 11 // -------------------------------------- 04:12:34 12 // M+4 M+2 N-M N-M-2 04:12:36 13 // 04:12:38 Addresses above the --- lines, content below. 04:13:48 I do have a good operation factored out of it - I have "form" which sets up one range. So I call form twice, once to get A and A+M+2 set, and again to get A+M+4 and A+N+2 set. 04:15:24 form has a quite nice definition, length-wise. 04:15:45 : form swap 2 + tuck 2 + over w! over + w! ; 04:20:38 I can't decode what your diagram is meant to mean, so sorry, I don't think I can be much help. 04:22:12 Sorry. The "initial" picture just says that address A contains N+4 as a 16-bit number, and address A+N+2 contains N+2 as a 16-bit number. 04:22:20 It's just a picture of a RAM range. 04:23:04 I'm carving an M-byte allocation out of a pre-existing N-byte allocation. 04:26:42 w! ( w a -)? 04:27:07 Yes; 16-bit store. 04:31:43 crc: excellent - thanks 04:33:31 lol at me trying to decode the logic here... `A ?` gives N + 4? 04:34:25 Yes. Ok, I think I see a nice clean way out using my stack frames. 04:34:31 Should have thought of that sooner. 04:40:44 Yes, that's very clean. 04:56:03 pastebin it? 04:56:15 Yes, just a sec. Finishing touches. 04:59:39 https://pastebin.com/18Y9y1cd 05:02:14 {{ ? 
05:06:57 {{ and }} are my words for opening and closing stack frames. 05:07:11 That's what enables the use of the 3@ 1@ etc. fetch words. 05:07:20 The number in those is the index into the frame. 05:07:34 The "is there enough room left" check isn't working quite right. 05:08:42 --- quit: mtsd (Quit: leaving) 05:40:37 --- join: mtsd (~mtsd@94-137-100-130.customers.ownit.se) joined #forth 06:08:35 --- quit: dave0 (Quit: dave's not here) 06:09:16 --- quit: mtsd (Quit: Leaving) 07:00:10 regarding shift or bit-rotating hardware: for 64 bits you need 64 muxes that take in 6 control lines each. Those control lines are shared, so those are all you need. I think that is pretty much how many barrel shifters work 07:00:37 (cheaper to do with 16-bit cells though) 07:02:15 regarding PICK and hardware stacks: one implementation 'bubbles up' the requested value. Takes a while though. 07:04:00 another topic regarding hardware. 07:04:23 I was a bit inspired by the Hangul writing system of Korea 07:07:39 with that minimalistic-ISA Forth machine I described here, one could implement it in hardware simply, or with increasing complexity, to run the code faster by decoding and executing bigger bundles of instructions 07:13:26 Do you have a link to that? 07:13:37 That does sound interesting. 07:18:01 --- join: gravicappa (~gravicapp@h37-122-117-136.dyn.bashtel.ru) joined #forth 07:51:51 I don't have a link to it as I am thinking and formulating this right now 07:53:19 for instance take : NIP SWAP DROP ; 07:53:35 Ok. 07:53:48 Which would just be nip nip. 07:54:00 Oh, never mind. 07:54:07 I overlooked that you had the : and ; there. 07:54:11 ok, proceed. 07:54:26 My nip is a primitive. 07:54:46 It's a really easy one - just increments the stack pointer. 07:55:12 precisely my point. 
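[editor's note] KipIngram's `form` and `split` words from a little earlier can be modeled directly against the Initial/Final diagram. This is a sketch in Python under stated assumptions: a bytearray stands in for RAM, 16-bit stores are little-endian (the log doesn't say), and `w_store`/`w_fetch` are hypothetical helper names standing in for `w!` and its fetch counterpart.

```python
def w_store(mem, addr, w):
    """16-bit store (little-endian by assumption), modeling w!"""
    mem[addr] = w & 0xFF
    mem[addr + 1] = (w >> 8) & 0xFF

def w_fetch(mem, addr):
    """16-bit fetch, the inverse of w_store."""
    return mem[addr] | (mem[addr + 1] << 8)

def form(mem, length, addr):
    """Model of `: form swap 2 + tuck 2 + over w! over + w! ;`
    with stack effect ( len addr -- ): writes len+4 at addr and
    len+2 at addr+len+2, i.e. the two boundary tags of one range."""
    w_store(mem, addr, length + 4)
    w_store(mem, addr + length + 2, length + 2)

def split(mem, m, a, n):
    """Carve an M-byte allocation out of the N-byte one at A,
    exactly as described: two calls to form."""
    form(mem, m, a)                  # tags at A and A+M+2
    form(mem, n - m - 4, a + m + 4)  # tags at A+M+4 and A+N+2
```

Tracing the second `form` call: it writes (N-M-4)+4 = N-M at A+M+4 and N-M-2 at A+M+4 + (N-M-4) + 2 = A+N+2, matching the "Final" row of the diagram.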
On some implementations it is a primitive instruction, on some it isn't 07:58:20 in the ISA of the Forth cpu I specified, NIP would be 0x0009 followed by 0x000A followed by 0x000F 07:58:58 sorry swap 0x0009 and 0x000A around 07:59:32 those are the op codes of SWAP, DROP, and EXIT respectively 08:03:42 taking these three cells (say in a four-cell fetch) and decoding them together tells the hardware what to do. Basically, when the hardware sees this, it just decrements the data stack index control register 08:04:25 (most hardware stacks are implemented as a memory device whose address lines are driven by an index control register) 08:05:43 How do you know those particular bits would cause that in a simple way? 08:06:07 Wouldn't you potentially need a fair bit of logic to generate the control signals, or a table lookup or something? 08:06:57 Let me ask that better. Say you had optimized your instruction mapping and bit patterns so that the individual three instructions did what they do in a really efficient way. 08:07:32 Now you're asking that the three fetched together produce some other action - seems like that couldn't also be really "natural." 08:37:49 --- quit: dgi (Ping timeout: 268 seconds) 09:15:12 KipIngram: the idea is that those three fetched together produce the same result as if they had been fetched and executed individually 09:15:19 --- join: DKordic (~user@79-101-206-168.dynamic.isp.telekom.rs) joined #forth 09:19:56 the only thing that would change is that instead of seeing three separate instructions in this case, the cpu sees one 09:24:07 isn't this just an optimisation which could be handled by having a single instruction at the cpu level, and the compiler sees the 3 and drops in the replacement? 09:24:18 NO 09:24:29 DON'T THINK ABOUT THAT 09:25:16 why not? 
:) 09:25:17 that leads to uA and uOps and other pains in the soul 09:25:51 cancers on computing 09:26:46 also if you're dropping in the replacement - the programmer should just be writing that word 09:26:57 i'm struggling to figure out what uA stands for 09:27:10 micro-architecture 09:27:13 o 09:27:35 i don't get the objection to the compiler optimisation - can you explain? 09:28:27 seems to me that software is entirely the right place to do that kind of thing 09:28:36 if the compiler is dropping in a replacement instruction for a user who types : NIP SWAP DROP ; the user should just be using that instruction 09:29:19 they're using nip - that is what they've asked for - how it is done isn't all that important, surely? 09:29:32 it's hiding complexity 09:29:35 yes 09:29:46 so is : nip swap drop ; 09:29:47 moving it from edit time where it's easy to fix, to compile time 09:30:05 yes 09:30:16 no no, that's not what I mean by hiding 09:30:42 it is hiding the complexity of nip if they haven't authored the word themselves 09:31:06 the compiler becomes complicated now that it's designed to do more 09:31:55 yeah - but i don't see a problem with complicated code - providing it's well structured and understandable in itself 09:32:15 the effort of designing, implementing, and using (and predicting?) the compiler is made harder, slower, and more complicated 09:33:52 the code might not appear complicated, but as Bastiat said in his most famous essay, all this is that which is seen. 09:34:51 you've lost me in your appeal to authority there :) - can you expand on "all this is that which is seen"? 
09:35:45 quoting an author is not an appeal to authority, Bastiat isn't even a programmer 09:36:04 that was tongue in cheek - hence the smiley :) 09:36:15 it's a reference to his famous essay That Which is Seen, and That Which is Not Seen 09:36:18 lol 09:36:20 oh 09:38:03 the programmer typing NIP at the prompt or in their editor is massively simpler than an optimising compiler that recognises swap drop exit at the point of compilation and then replaces it with the instruction NIP 09:39:12 agreed 09:39:54 Nip is NOT complex in a software stack system that caches TOS. 09:40:12 It's simpler than just about anything else. 09:43:33 KipIngram: indeed - i was just using it as an example of something which, from the point of view of use, is a single instruction, but is in fact implemented as many - and i was following on immediately from what Zarutian seemed to be suggesting (though i freely admit i was asking him for clarification - WVW interjected and we are where we are now) 09:43:55 did I rant on about a misunderstanding? 09:44:15 no - i don't think so - i found your comments interesting 09:45:31 the_cuckoo: you are assuming that there is a compile-time stage somewhere where the 'binary executable' is remade. Not so. The idea is that the same 'executable' runs identically (just faster) on two members of the cpu family 09:46:55 interesting 09:47:01 ah - i see :) - and therein probably lies my own misunderstanding 09:47:14 would be a mad decode stage but interesting 09:48:19 would be interesting if in the fetch-decode-execute pipeline, your decode stage is also an "optimise" that holds two instructions 09:48:36 minor but interesting 09:50:10 you haven't heard about mad stuff yet, like the word fetcher that, when encountering a call, goes and fetches instructions from there while updating the return stack. That is, until it comes across R> or >R. 
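[editor's note] Zarutian's bundle-decoding idea can be sketched like this, with Python standing in for the hardware. The opcode values are the ones quoted in the discussion (SWAP=0x0009, DROP=0x000A, EXIT=0x000F, after the stated correction); everything else is illustrative. The point being modeled: the same code cells give the same result whether the decoder dispatches them one at a time or recognises the SWAP DROP EXIT bundle and performs the combined effect in one step, with no recompilation of the 'executable'.

```python
SWAP, DROP, EXIT = 0x0009, 0x000A, 0x000F  # opcodes quoted in the log

def run_naive(code, stack):
    """A simple member of the cpu family: fetch and execute one
    primitive per step."""
    for op in code:
        if op == SWAP:
            stack[-1], stack[-2] = stack[-2], stack[-1]
        elif op == DROP:
            stack.pop()
        elif op == EXIT:
            break
    return stack

def run_fused(code, stack):
    """A fancier decoder sees the same three cells as one NIP:
    delete the second item in a single step (on a hardware stack,
    essentially one adjustment of the data stack index register)."""
    if tuple(code[:3]) == (SWAP, DROP, EXIT):
        del stack[-2]
        return stack
    return run_naive(code, stack)
```

The binary is identical in both cases; only the decode stage differs, which is why no compiler-level replacement is involved.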
09:50:48 the decoder for the functional unit only sees a stream of primitive instructions 09:52:25 (well, the word fetcher also does most of the EXIT instructions) 09:58:33 s/R> or >R./R>, >R, or SKZ/ 10:11:53 --- quit: nighty- (Remote host closed the connection) 10:19:15 48*963 10:22:34 sorry, cat got on the keyboard 10:26:13 does your cat not understand rpn? 11:01:15 Zarutian: that's really close to how mine worked. 11:01:23 Having the fetch and execute units like that. 11:01:34 How did you handle conditional jumps? 11:01:56 Zarutian: i may well be misremembering (as it was more than half a lifetime back), but isn't this in some way related to 'pipelining' in the olde worlde motorola RISC stuff? (i did my uni thesis on it, but i'm struggling to remember the details - just rings a bell, but probably way off) 11:02:49 KipIngram: with SKZ being the only conditional way to change control flow. 11:03:48 What does the fetcher do? 11:03:52 the_cuckoo: well, the same root inspired it, but nothing much else in common. 11:04:13 When it encounters the conditional jump in the instruction stream? 11:04:19 Where does it then fetch from? 11:04:26 ok - so not completely off base then - will try to see if i can locate the text book later 11:05:15 KipIngram: goes into 'fetch-one-at-a-time' mode until the functional unit has decided if the cell after SKZ should be skipped or not. 11:05:34 Ok. 11:05:39 I tried to do better than that. 11:05:47 Or at least do better "sometimes, when possible." 11:06:01 I had a signal system from the execute unit back to the fetch unit. 11:06:11 I'll describe "one signal" of it. 11:06:34 Each signal is two bits, and can have states "empty," "full true," or "full false." 11:06:50 The conditional jump selected one of these signals to use for the decision. 11:07:22 If the signal was "empty" when the fetch unit processed the conditional jump, it had to stop and wait. But if it was full true or full false, it used that to decide where to take the fetching. 
11:07:41 note that there is no BRZ or JMP in the ISA I specified 11:07:41 Also, I had a pipeline between the fetch and execute units. 11:08:12 So the idea was that you would attempt to write code that calculated the decision as far before the jump point as possible. 11:08:39 Anyway, that probably makes it clear. 11:08:49 It wouldn't always be possible to do the decision early. 11:08:55 That would depend on the application. 11:09:15 But at least some cases, like say looking for the null at the end of a linked list when you were processing each element, allowed a bit of gain. 11:09:30 You can check the flag and post the signal BEFORE you do the processing on the element. 11:09:38 Check the link I mean. 11:10:08 So you can give the fetch unit at least a few instructions' worth of time to start filling your pipeline again. 11:11:08 The idea was simple and clean, but it gets sort of messy when you consider the need for multiple signals. I just decided to support a certain number of signals, and the programmer would manually select one for each decision point. 11:12:05 I just really hated the idea of emptying the pipeline. :-( 11:12:44 That pipeline also reduces the subroutine call overhead to near zero - while the execution unit is executing opcodes, the fetch unit is peeling through layers of calls looking for the next opcodes. 11:13:07 And in a system like this you can have the fetch unit "see a return coming" and you can get zero time penalty for returns. 11:25:57 how deep was that pipeline? 11:26:59 I was thinking to use at most four to sixteen instruction 'slots' 11:35:23 Well, this was a paper design only, though I'd thought at great length about a Spartan 6 implementation and using LUTs effectively and so on. I never chose a depth for the pipe. 11:35:50 The important thing, and the reason the pipe was there, was to give the execution unit something to do while the fetch unit peeled through however many calls it had to. 11:35:58 How deep do we REALLY stack our Forth words? 
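[editor's note] The empty / full-true / full-false signalling between the execute and fetch units might be modeled as below. The original was a paper design and all names here are mine, so this is only an illustration of the protocol: the execute unit posts a branch decision as early as the code allows; when the fetch unit reaches the conditional jump it either consumes the posted decision immediately or stalls.

```python
from enum import Enum

class Sig(Enum):
    """The two-bit signal states from the description above."""
    EMPTY = 0
    FULL_TRUE = 1
    FULL_FALSE = 2

class FetchUnit:
    def __init__(self, nsignals=4):
        # A small bank of signals; the programmer picks one per
        # decision point, as described in the log.
        self.signals = [Sig.EMPTY] * nsignals
        self.stalls = 0

    def post(self, idx, taken):
        """Execute unit posts a decision into signal `idx`."""
        self.signals[idx] = Sig.FULL_TRUE if taken else Sig.FULL_FALSE

    def cond_jump(self, idx, target, fallthrough):
        """Fetch unit reaches the conditional jump: use the posted
        decision if present, otherwise stall (modeled as a counter
        and a None result)."""
        sig = self.signals[idx]
        if sig is Sig.EMPTY:
            self.stalls += 1          # would wait for the execute unit
            return None
        self.signals[idx] = Sig.EMPTY # consume the signal
        return target if sig is Sig.FULL_TRUE else fallthrough
```

The linked-list example from the log maps onto this directly: post the "link is null?" decision before processing the element, so the fetch unit never sees an empty signal at the jump.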
11:36:08 Maybe 8 or 10 deep, you think? A dozen? 11:36:16 Worst case, I mean. 11:37:10 KipIngram: return stack depth? 11:39:37 KipIngram: depends on the complexity of the primitives provided. A bigger return stack is required for more primitive primitives. 11:41:44 My stacks share a space, so there's really only a "combined total depth." That's around 400-450 or so. 11:42:38 That's just a consequence of my 4kB memory page size - each task entity gets one of those pages for stacks and "task variables." 11:43:00 I have run the system with the page size set as low as 512. 11:43:11 In that case the combined stack depth is around 50. 11:43:28 But it would not run with page size 256. 11:43:35 I assume the stacks collided. 11:45:17 So, a combined stack depth of 20 or so is insufficient on my system, but 50 is adequate for the base system at least to run. I don't know how much I had to spare. 11:52:00 for me, compiling my stdlib will reach an address depth of 76 items and 19 on the data stack. application code can get much deeper. 11:53:01 Ok, with page size 0x1AA it will start, interpret, and do a : definition. 11:53:10 For page size 0x1A9 it won't. 11:54:33 I use 88 bytes of the task block for non-stack purposes. 11:55:39 0x1AA is 426; take the 88 away and that's 338. At 8 bytes per slot, that's 42.25. So obviously there was some hit-or-miss / accidental stuff going on in my testing. 11:55:46 It can't depend on a partial slot. 11:56:05 I think it's ok to say, though, that I need just over 40 stack slots, shared. 11:56:30 So the common thing you hear out there about each stack being 32 deep - that seems plenty adequate. 11:56:46 Of course, this test wasn't really stressing the system very much. 11:56:59 how are your stacks shared? 11:57:12 They grow toward each other from the two ends of a zone. 11:57:30 I can't load any of my source with the page size set to other than 4k, because that also controls the mass storage block size. 11:57:59 oh, i see. 
so it's not one shared stack, they just exist in an overlapping region 11:58:10 In one heap page, yes. 11:58:25 The first 88 bytes of the page are for other things. Then the return stack grows up from there, and the data stack down from the top. 12:01:11 I give away two cells at the top, to put a pad between the data stack and whatever is in the next page of the heap. 12:01:37 Not going to guard against a real runaway stack underflow, but it is a bit of "casual protection." 12:01:54 Like most Forth systems, mine only checks for underflow in the interpreter loop after it executes each word. 12:02:06 Nothing gets checked "within word execution." 12:02:46 So I guess I should really say there are 104 bytes of non-stack in the page. 12:03:26 I've thought about initializing that page to some unique number before setting it up for use. Then later I could go look at it and see how far the stacks had grown. 12:05:46 I think I'll go ahead and do that - it's useful information. 12:19:06 --- quit: gravicappa (Ping timeout: 252 seconds) 12:44:54 Cool. Looks like I'm using at most 16 data stack entries, during "basic system operation." 12:46:57 And it appears about seven return stack levels. 12:47:06 . 12:47:14 Sorry - 12. 12:52:13 --- join: mtsd (~mtsd@94-137-100-130.customers.ownit.se) joined #forth 13:17:00 --- quit: rain1 (Quit: WeeChat 1.6) 13:44:41 --- join: rain1 (~My_user_n@unaffiliated/rain1) joined #forth 14:18:00 --- join: dave0 (~dave0@223.072.dsl.syd.iprimus.net.au) joined #forth 14:18:48 hi 14:27:35 --- quit: mtsd (Quit: leaving) 14:28:11 morning dave0 - hit the coffee yet? 14:28:44 hi the_cuckoo 14:28:50 coffee! C4[_]~ C8[_]~ C3[_]~ C2[_]~ 14:29:32 :) 14:29:38 how goes it? 14:30:30 alright, haven't woken up yet lol 14:30:44 you? 
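[editor's note] Both the page arithmetic from this exchange and the fill-pattern trick KipIngram adopts can be sketched in a few lines of Python. The 88-byte overhead and 8-byte cells are taken from the log; the sentinel value and function names are mine.

```python
SENTINEL = 0xDEADBEEF  # distinctive fill value; any unique number works

def combined_stack_slots(page_bytes, overhead=88, cell=8):
    """The arithmetic from the log: whole slots shared by the data
    and return stacks in one task page, e.g. (0x1AA - 88) // 8 == 42."""
    return (page_bytes - overhead) // cell

def high_water(cells, overhead_cells=88 // 8):
    """Count slots each stack consumed in a page pre-filled with the
    sentinel: the return stack grows up from just past the overhead
    area, the data stack grows down from the top, toward each other."""
    rs_used = 0
    for c in cells[overhead_cells:]:
        if c == SENTINEL:
            break
        rs_used += 1
    ds_used = 0
    for c in reversed(cells):
        if c == SENTINEL:
            break
        ds_used += 1
    return rs_used, ds_used
```

A collision would show up as no sentinel cells left between the two high-water marks, which matches the observed failure at page size 256.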
14:30:59 getting to the point of needing sleep :) 14:31:06 ah 14:31:11 it's already friday here 14:31:45 am not quite there yet, but i work a four day week, so thursday is kinda friday for me 14:32:00 and friday is sort of saturday 14:32:02 3 day weekend! 14:32:16 indeed - kinda nice :) 14:33:28 i normally take the friday to hack on private stuff, but i'm not sure what i'll do tomorrow 14:33:57 is there such a thing as coders block? like writers block :-) 14:34:33 yeah - it normally involves gaming 14:37:29 nethack does that to me 14:37:50 i was a huge fan of rogue back in the day 14:37:57 it only took about 5 years or so to finally burn out on it to the point that it's not as much of a risk to me anymore 14:38:14 maybe 6 years 14:38:55 it's perfect for the office because to someone just passing by it blends in perfectly with the rest of my screen full of terminals and code 14:38:57 i'm still working my way through minesweeper 14:40:05 * the_cuckoo loves minesweeper :D 14:48:16 dave0: re coders block: well, aren't they enveloped by curly braces usually? Otherwise they wouldn't be fast in place. 14:48:52 haha 14:55:22 --- quit: xek_ (Ping timeout: 244 seconds) 15:03:37 --- join: Keshl_ (~Purple@207.44.70.214.res-cmts.gld.ptd.net) joined #forth 15:04:33 --- quit: Keshl (Read error: Connection reset by peer) 16:20:31 Well, this stuff seems to be working. 16:20:47 The "free" implementation turned out to be pretty easy - just 5 lines. 16:21:07 As far as I've been able to tell it's working, but I haven't really tested it hard enough yet. 16:22:25 Zarutian: thanks to our conversation earlier, as soon as I finished this I found myself imagining hardware support for it. 16:23:04 You've got a 64-bit address space. Imagine you allocate 1/256 of that to this mechanism. That's still an (effectively infinite) 2^56 bytes of logical space. 16:23:35 Then you just start using it, linearly. Ask the hardware for an address where you will have N bytes to work with. 
Tell it when you're done with them. 16:23:47 At boot time you'd specify what portion of the physical RAM to back this with. 16:23:58 Any power-of-two-sized aligned range 16:24:17 It would handle the rest, and it would also give you an exception if you tried to use an old address that you'd wrapped past. 16:25:17 Basically if the allocation point was at N, you could access any address down to N - without error. 16:27:03 I can see an opcode BUFF - give it a size, get an address - and an opcode FREE - give it an address. 16:27:24 I'm thinking this is fairly simple hardware. 16:31:22 --- quit: dave0 (Quit: dave's not here) 17:19:11 --- join: tabemann (~travisb@h193.235.138.40.static.ip.windstream.net) joined #forth 17:31:39 --- quit: john_cephalopoda (Ping timeout: 252 seconds) 17:41:14 --- join: dave0 (~dave0@223.072.dsl.syd.iprimus.net.au) joined #forth 17:42:57 re 17:44:33 --- join: john_cephalopoda (~john@unaffiliated/john-cephalopoda/x-6407167) joined #forth 18:31:39 --- join: nighty- (~nighty@b157153.ppp.asahi-net.or.jp) joined #forth 18:45:33 --- quit: dave9 (Ping timeout: 272 seconds) 18:46:09 --- join: rdrop-exit (~markwilli@112.201.168.172) joined #forth 18:46:33 Good morning Forthwrights :) 18:46:50 --- quit: dave0 (Ping timeout: 244 seconds) 18:51:08 Greetings, rdrop-exit. 18:51:11 How goes the quest? 18:53:41 Hi KipIngram, just got back from a trip, haven't been productive for the past couple weeks 18:55:54 Oh, nice. 18:55:56 Where'd you go? 18:57:41 Nice, France, and Monaco 18:57:41 --- join: dave9 (~dave@223.072.dsl.syd.iprimus.net.au) joined #forth 18:59:11 Wasn't really a vacation, my mother had an operation, so I went to give moral support 19:00:03 All went well, was nice to visit my old haunts 19:00:19 --- join: dave0 (~dave0@223.072.dsl.syd.iprimus.net.au) joined #forth 19:01:27 Oh so cool. 19:01:34 Oh. 19:01:36 re KipIngram 19:01:38 Well, sorry about that. 19:01:41 Hi dave0. 19:01:44 I hope she's doing ok. 19:01:53 Glad it went well. 
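[editor's note] Returning to the BUFF/FREE hardware allocator idea from a few messages back: here is a minimal software model under stated assumptions. My reading of "access any address down to N - without error" is that a logical address stays valid only while it is less than one backing-window's distance behind the allocation point; the class and method names are mine, and FREE (telling the hardware you are done) is omitted since the log doesn't describe how reclamation interacts with the wrap check.

```python
class LinearAllocator:
    """Sketch of the scheme: a huge logical space consumed linearly,
    backed by a power-of-two window of physical RAM; translating an
    address that has wrapped out of the window raises an error, the
    software stand-in for the hardware exception described."""

    def __init__(self, backing_size):
        assert backing_size & (backing_size - 1) == 0, "power of two"
        self.backing = backing_size
        self.point = 0                 # next logical address to hand out

    def buff(self, n):
        """Opcode BUFF: give it a size, get a logical address."""
        addr = self.point
        self.point += n
        return addr

    def translate(self, addr):
        """Map a logical address to a physical one, faulting if the
        allocation point has already wrapped past it."""
        if addr < self.point - self.backing or addr >= self.point:
            raise MemoryError("address wrapped past the backing window")
        return addr % self.backing
```

The physical address is just the low bits of the logical one, which is what keeps the hardware simple: one subtraction and a comparison for the validity check, a mask for the translation.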
19:02:00 Yes all well, thanks :) 19:02:41 My "old haunt" is gone. I lived in a small town, and there wasn't much going on. But there was a diner called Tommy's, where a lot of people my age congregated. 19:02:44 Lots of good times there. 19:02:50 While I was in college, though, it burned down. 19:03:08 Where was this? 19:03:32 :-) A no-one's-ever-heard-of-it place called Troy, Alabama. 19:03:43 Cool name :) 19:03:47 Yeah. 19:04:13 And the little university there, which my dad was a professor at, has made a run at the online education thing, and has become a lot more well known than it ever was then. 19:04:45 My youth was split between San Francisco and the South of France 19:04:49 In Alabama, people have HEARD of Auburn University and the University of Alabama. 19:04:57 Troy was basically the third largest. 19:05:03 But quite a large difference in reputation. 19:05:12 That is an... interesting split. 19:06:43 Lots of Pan Am and TWA flights 19:06:49 My dad taught organic chemistry, and some related courses there. 19:07:07 I hung out a lot in his lab and stockroom, and played with the gear - that was kind of cool. 19:07:12 Nice 19:07:44 If I'd gone the wrong direction I could have gone off and become the Unabomber or something - I knew how to mix up some fairly fun stuff. :-) 19:07:54 :) 19:07:57 never did anything more than "beaker level" though. 19:08:21 And I've forgotten most of it. :-( 19:09:08 My father was in the Navy, met my mother while in the Sixth Fleet in the Med 19:09:35 Ah, that explains the split geography. 19:12:02 Yes, my mother decided to move back to the South of France a few years ago. 19:13:09 It's better suited to retirees than the Bay Area 19:14:37 (in her opinion) 19:15:29 My dad passed away in 1999, and my mom lives with my sister in the Auburn area now. Sister is a 3rd grade schoolteacher, but she's turned in her retirement paperwork for the end of this year. 19:16:06 My sister never married, so it's just the two of them. 
19:16:20 I wish they'd move out here near us, but my mom just won't hear of it. 19:16:27 My father passed away last year, I'm the only child of that marriage, my father remarried 19:16:49 My sister and I are her only two children, so I've got ALL of her grandchildren - you'd think that would be a factor. 19:16:58 I'd sure move to where my grandchildren were. 19:17:21 Although most of them are grown now - only one still left at home, so maybe it's not the same thing. 19:18:31 I tried to get my mother to move here for the same reason (grandkids), but she doesn't like the humidity, and anyhow my kids are grown up and move around a lot 19:19:03 No kids at home anymore 19:19:08 Just 2 dogs 19:19:24 I'm down to my last one - she's 14, so not a whole lot more to go on that. 19:19:32 I've got four others scattered around the state. 19:19:46 All of them still close enough to drive in for a visit, thankfully. 19:19:58 And even more thankfully they all still seem to like to. 19:20:32 My youngest is 23, she's a ramp model, moves countries every 3 months. 19:20:44 Oh wow. 19:20:52 She's in Brazil right now 19:21:15 Only see her a couple weeks a year 19:22:08 I have four kids, eldest is 32 19:22:17 I'm fortunate to still be able to see mine fairly frequently. 19:22:26 I suppose it could change at any time, but I'll take it while I can get it. 19:23:19 Right 19:25:38 I think only my eldest will stay here for the long run 19:26:05 back 19:26:14 Hi tabemann 19:26:26 hey guys 19:27:01 * tabemann is actually getting around to writing a line editor for hashforth 19:27:37 one thing that is making it much easier is I arranged for there to be one line editor per thread, and it's stored in a user variable 19:27:42 Hi tabemann. 19:27:48 Nice. 19:27:56 (line editor). 
19:27:57 I've decided to try switching my Forth to the old WordStar key bindings for a while 19:28:00 s/thread/task 19:28:14 I'm too used to the GNU readline key bindings 19:28:17 As an experiment 19:28:53 tabemann: I'm planning to support both "tasks" and "processes." 19:29:01 A task is a stream of execution, with a stack. 19:29:19 only one? 19:29:19 A process has a dictionary, input/output/error buffers, etc. 19:29:37 A process can have as many tasks as it wants to run - each one knows who its parent is. 19:29:45 It starts out with one, which is what you're typing into. 19:29:58 * tabemann laughs at "only one" 19:30:02 I meant only one stack? 19:30:11 The main assumption I've made is that only one task belonging to any particular process will attempt to manipulate the dictionary. 19:30:26 No, that was a mis-speak. 19:30:29 "stacks." 19:30:31 Thanks. 19:30:33 :) 19:30:54 * tabemann still finds "with a stack" funny 19:31:14 It actually is kind of funny. I'm likely getting tired. 19:31:37 Each thread knows who its parent process is, and any of them COULD execute input/output or dictionary-related words. 19:31:52 I'm just assuming that I'll only actually DO those things with one. 19:32:13 My host Forth has only two tasks, anything else is coroutines. 19:32:35 currently I'm having all input/output be by stdin/stdout/stderr, but I'm planning on changing that so that each of those can be changed per-task 19:33:37 I use only 2 file descriptors, one for the user terminal, and one for a serial tether. 19:34:13 hashforth has one kind of task, a cooperative task; attoforth has one kind of task, a preemptive task - note that hashforth tasks share a wordlist order while attoforth tasks each have their own wordlist order 19:36:48 I want to (potentially) be able to support at least one task per available core. 19:37:06 And even better, 2-3, since any one task might block for some period of time. 
19:37:43 I'm of the rather strong opinion that for generic applications the optimum thread count is some 1 < x < "small integer" multiple of the core count. 19:38:01 both hashforth and attoforth are single-core; it would probably be easier to make attoforth multicore since it's already designed with preemptive multitasking in mind and the runtime itself manages tasks 19:38:16 >1 purely to allow for tasks to have small delays without shutting down a core. 19:38:17 whereas the hashforth runtime knows nothing about tasks and tasks solely exist in the user space 19:38:55 I started this iteration off from the very beginning with the idea that I wanted to multi-task and multi-process. 19:39:12 That is woven into it at the foundation layer. 19:39:59 if I were to make hashforth multicore what I would do is to instantiate multiple runtimes to run in parallel and expose these different runtimes to the user 19:40:20 note that I would have to do some things differently, like have separate user spaces 19:40:24 for each core 19:40:49 Yes, given how small Forth generally is that's an ENTIRELY valid approach. 19:41:07 I started developing this under MacOS. 19:41:18 The only supported executable format is Mach-O. 19:41:26 It requires 100% relocatability. 19:41:46 I achieved that by allocating a register to point to my "system base address," and did EVERYTHING relative to that register. 19:41:55 In my case the development and target environments are totally separate 19:42:06 But I also used a second register to point to the active task block. 19:42:26 And the first cell of the task block points to the process block. 19:42:42 So I have quite easy access to task, process, and system level quantities. 19:42:57 That really paved the way for smooth multi-task/multi-process operation. 19:43:39 in attoforth I passed around a pointer to the current task object in almost every runtime function 19:44:50 Yes, that sounds similar.
19:44:51 I just have a Task Select bit in the host VM 19:45:00 I just have a register for it - that's basically "passing around a pointer." 19:46:54 it's probably more efficient, though, because you're guaranteed that it's going to stay in a single register 19:53:45 In a way our approaches are diametrically opposed 19:53:47 Well, probably, but the concept is the key thing, I guess. 19:54:10 I've tried to arrange for as much efficiency as I possibly can. 19:54:33 The MacOS pretty much FORCED me into the "system base address relative" approach. 19:54:44 Maybe there was another way to do it that I missed, but I'm pretty committed to this one now. 19:54:49 I use R15 for that. 19:55:04 It shows up in my NEXT and my DOCOL, DOVAR, DOCON, etc. 19:55:25 It actually only costs 1-2 instructions per NEXT/DOCOL. 19:55:39 Which, since there are only 4-5 instructions in those, is kind of expensive. 19:55:49 But I didn't see a way around it. 19:56:09 What it means is that all of my definitions are stored as lists of OFFSETS into the system region, rather than as absolute addresses. 19:56:10 --- quit: dddddd (Read error: Connection reset by peer) 19:56:18 hashforth as written does not involve any executable code - as written it is pure TTC - but I plan on writing an implementation at some point which converts the TTC code into native code 19:56:28 i.e. SRT/NCI 19:57:30 which will be necessary if I am to try to wring more performance out of it - e.g. a simple counted loop using BEGIN WHILE REPEAT is half the speed of a for x in range() loop in Python 19:58:18 Well, I think ALL of these things are interesting and educational. 19:58:21 the advantage is the whole user space will not need to change, and the runtime portion is simple 19:58:28 So few people really have this kind of low-level understanding these days. 19:58:45 I'm about 35x faster than Python in the testing I've done. 19:58:53 I do have to go, though - the coffee shop is closing 19:58:54 And about 33% of the speed of C.
19:59:26 Ok - be safe getting home. 19:59:32 Ciao 19:59:35 Thanks for dropping in - always fun. 20:00:06 :) 20:02:17 I defined a bunch of macros in my assembly file. 20:02:22 Macros to make headers. 20:03:57 That's where I discriminate amongst system variables, process variables, and task variables. 20:03:58 VARIABLEs are *process* variables. 20:03:58 The set of task variables is pretty much fixed. 20:03:58 And the set of system variables mostly is too, though a couple of times I've thought about whether I'll need an SVAR word. 20:03:59 To declare new ones. 20:03:59 Or SYSVAR 20:04:14 Process variables seemed like the closest correspondence to traditional Forth variables. 20:04:25 It all sounds so complicated to me 20:04:48 --- quit: tabemann (Ping timeout: 268 seconds) 20:04:53 It's not really. 20:05:15 It's just a run-time for each type (varies the register added), and a header for each type (in the assembly file). 20:05:31 There's no real "engine" required to drive it or anything. 20:05:49 And besides, I haven't really thought of any need I'll have later for task or system vars. 20:05:53 I think they all exist. 20:06:01 And VARIABLEs are process vars. 20:06:07 So there's really still just the one thing. 20:06:29 I think it's easy for us to under-describe stuff when we understand it well. 20:06:38 Potentially makes it sound complex / confusing. 20:09:15 I guess it's the old dichotomy between fat Forths and minimal Forths. 20:10:21 Maybe so, though I'm TRYING to keep this one fairly clean. 20:10:40 I certainly don't think THAT will turn out to be where I failed. 20:11:31 All of the different kinds of variables work in exactly the same way, and all are equally efficient. 20:11:37 It's just "which register" that varies. 20:12:03 And I don't REALLY know if the distinction matters to me any more once I start the thing up and write Forth.
20:12:35 If I want the base address of the heap, I say 20:12:39 LOHEAP @ 20:12:50 If I want the value of STATE, I say 20:12:53 STATE @. 20:12:56 Ooops. 20:12:57 STATE @ 20:13:23 The fact that the first offsets from R15 and the second from the task-block register is kind of moot. 20:14:28 I have to be aware of all that when I'm writing assembly, but not when I'm writing Forth. 20:14:37 You're anticipating so much about the target application in your design 20:15:03 How so? I feel like I could write any application in this. 20:15:29 But if you mean I'm planning multiple processes, each possessing multiple threads (potentially), then guilty. 20:15:31 What if your application doesn't require multiple processes and heaps 20:15:44 etc... 20:16:05 Remember, the requirement that everything be relocatable in SOME WAY was forced onto me. 20:16:12 That wasn't a choice. 20:16:33 That's not my point 20:16:47 it's an os for naked hw 20:18:14 Hopefully. I hope I make the naked hw before I die. 20:18:26 Yes, and like any OS it carries a lot of overhead that may be unnecessary for the final application 20:19:01 Yeah, I'm sure. But I HOPE "less." 20:19:29 I *do* require it to support multiple applications - high performance applications. 20:19:38 but I'm trying not to overdo it. 20:19:51 I'm just following the best direction my "nose" leads me in this. 20:20:18 There's a tradeoff between "thorough prior support" and "application specific" that I'm trying to straddle here. 20:20:37 And that is, to some extent, a subjective set of decisions. 20:20:44 I'd be shocked if you agreed with every single one of them. 20:21:00 But if you can't make decisions, the product never gets to market. 20:21:26 I start a target out with a serial monitor and only place stuff on the target that is required by the application. 20:21:28 You start out trying to follow theory, and training. 20:21:48 Eventually you have to say "*THIS* is what we're going to build - cross your fingers."
20:22:04 Then later you find out how well it worked. 20:22:33 I think that sounds great. I think you may just be optimizing to a different target than me. 20:22:49 And I'm just *trying*, to the best of my ability, to "optimize." 20:22:50 I optimize for the application. 20:22:54 It's a gut-driven approach. 20:25:40 It's difficult to optimize something that's general purpose 20:25:57 almost impossible 20:26:38 The complexity goes through the roof 20:26:45 Ok - I don't disagree. So yes - if you're going to write custom code, from scratch, for every single application you ever do, you're 100% correct 20:26:54 I won't deny that for a second. 20:27:12 But that's only if the goal is machine efficiency, in that specific application. 20:27:20 I think Chuck was right on in his thoughts on all that. 20:27:52 If, on the other hand, you're looking for something that gets you 90% of the optimality, and 90% of the speed of development, well then that MIGHT be an achievable goal. 20:27:55 I'm seeking it. 20:28:12 Anyway, 10:30 here - I better get to bed. 20:28:18 I'll catch back up tomorrow. 20:28:23 Rest / work well. 20:28:58 Nighty night 20:32:12 That's why there's a difference between development environment and target environment; the two can be combined if the application requires it, but the default should be separation.
20:44:05 --- join: travisb (~travisb@2600:1700:7990:24e0:e195:6b4e:1b5c:932b) joined #forth 20:44:11 --- join: gravicappa (~gravicapp@h37-122-117-136.dyn.bashtel.ru) joined #forth 21:32:09 --- quit: arrdem_ (Ping timeout: 240 seconds) 21:32:17 --- join: arrdem_ (sid333803@gateway/web/irccloud.com/x-zvavenqakrrkqhvj) joined #forth 21:32:22 --- quit: jhei (Ping timeout: 250 seconds) 21:32:48 --- quit: pointfree (Ping timeout: 250 seconds) 21:33:30 --- quit: ovf (Ping timeout: 268 seconds) 21:34:50 --- join: jhei (sid81469@gateway/web/irccloud.com/x-twtcqxkvqilhzmcw) joined #forth 21:35:11 --- join: pointfree (sid204397@gateway/web/irccloud.com/x-ylwyleubpvlnpyyy) joined #forth 21:37:41 --- join: ovf (sid19068@gateway/web/irccloud.com/x-udpstdvlxhoogttn) joined #forth 22:33:20 <`presiden> why sometimes something looks easy and nice in your head, but when you actually implement it.... 23:59:59 --- log: ended forth/19.03.21