00:00:00 --- log: started forth/19.02.27 00:39:02 --- join: [1]MrMobius (~default@c-73-134-82-217.hsd1.va.comcast.net) joined #forth 00:41:58 --- quit: MrMobius (Ping timeout: 255 seconds) 00:41:58 --- nick: [1]MrMobius -> MrMobius 00:42:57 --- mode: ChanServ set +v bluekelp 00:51:25 <`presiden> investigating what's causing a bug makes you feel like a real detective 02:04:27 --- quit: ashirase_ (Ping timeout: 272 seconds) 02:06:43 --- join: dave0 (~dave0@193.060.dsl.syd.iprimus.net.au) joined #forth 02:07:02 --- join: ashirase (~ashirase@modemcable098.166-22-96.mc.videotron.ca) joined #forth 02:07:39 hi 02:09:53 hi dave0 02:10:00 hi bluekelp 02:28:26 hi guys 02:54:01 Finally started implementing interrupt support on zkeme80 02:54:39 I'm thinking of a word such as :I that would allow the definition of interrupt handlers 02:54:45 and ;I 03:00:40 And keeping the stacks separated 03:01:26 is that a Z80 cpu? 03:01:40 zkeme80 is my forth system for the z80, yes 03:01:43 were you doing a forth on the calculator? 03:01:52 https://github.com/siraben/zkeme80 03:02:18 --- join: dddddd (~dddddd@unaffiliated/dddddd) joined #forth 03:02:22 ah yep i remember 03:02:50 that looks nifty. alas i have no ti-83/4 calcs any longer 03:04:01 I have a book on implementing forth on the z80 though 03:04:37 bluekelp: there's an emulator here 03:04:38 https://www.cemetech.net/projects/jstified/ 03:04:48 Just install guile, clone the repo and run "make build" 03:04:58 Then zkeme80.rom is the ROM you need to pass to the emulator 03:08:15 http://www.ticalc.org/ 03:14:28 yunfan: thanks for the esp8266 link 03:14:33 I have about 10 of those boards 03:15:47 do you want to port my forth to the xtensa cpu used in esp? 03:18:28 far too busy as it stands 03:18:47 ok 03:19:22 Yes! It seems to be working. 
I'm getting a routine to run "at the same time" as my boot process 03:19:28 writing C for the ESP is interesting depending on your approach 03:19:34 Although I'm noticing a big slowdown 03:19:49 because avoiding the SDK you get into a war with the watchdog 03:22:14 --- quit: proteusguy (Ping timeout: 246 seconds) 03:23:20 why war? 03:29:25 I can't find the page I was reading about it 03:34:39 ok 03:34:48 well, you always have to service the watchdog, or disable it 03:42:15 Does Linux / MacOS offer a system mechanism for "hooking" the illegal memory access exception? 03:42:26 I sure would like it if I could recover without failure from those. 03:42:45 Print my own error message and re-start the outer interpreter. 03:44:25 Seems like it ought to be simple enough to provide a way to register a callback - "Don't fail; call this instead." 03:49:50 KipIngram: in unix, it's signals, and it's very machine dependent 03:53:57 Explain more? Or give me a more specific term I can Google? 03:54:31 https://termbin.com/0nie 03:55:07 Nice comments... ;-) 03:55:12 lol 03:55:15 Thanks dave0; I'll peer into that... 03:55:18 it is not very well documented :-) 03:55:33 It's at least as well documented as most of my code... 03:55:34 sigaction is the unix/posix thing 03:56:20 https://stackoverflow.com/questions/2350489/how-to-catch-segmentation-fault-in-linux 03:56:43 --- join: proteusguy (~proteusgu@cm-58-10-208-131.revip7.asianet.co.th) joined #forth 03:56:44 --- mode: ChanServ set +v proteusguy 03:56:56 sigaction actually sets the handler for the various exceptions: SIGILL for illegal instruction, SIGFPE for divide by zero, SIGBUS for bus error (writing to non-existing memory), SIGSEGV for writing to other memory 03:57:11 --- join: rdrop-exit (~markwilli@112.201.168.172) joined #forth 03:57:25 That looks like it ought to get me there. 
03:57:56 sigaltstack sets a stack to be used while "handling" the signal, i needed it because i play with the stack pointer in a non-standard way 03:58:08 ah i will read stack overflow too 03:58:09 Yes you certainly do. 03:58:28 But it's damn interesting - I still look forward to seeing what you "get" performance-wise from that. 03:58:55 the actual handler pokes in a new PC (program counter), which is where the program resumes after "handling" the error 03:59:27 KipIngram: i think i've been doing it for a year now and it's maybe 10% done lol 03:59:45 Should be very simple on my system - I already have ERR that takes an error number, prints the associated message, and runs QUIT. 04:00:01 Oh, and copies my error recovery snapshot back into place. 04:00:22 i unimaginatively called it "exception" 04:00:23 It restores the system to the state it had at the beginning of the interpretation line that contained the error. 04:00:42 ah yep i remember you saying 04:00:45 https://pubs.opengroup.org/onlinepubs/9699919799/ 04:00:52 It's quite a sledge hammer - it restores EVERYTHING that is local to the process. 04:01:18 Doesn't reverse changes to the heap, won't reverse activity affecting disk buffers, etc. - global resources aren't restored. 04:02:10 Doesn't save and restore stuff belonging to other processes, so if your error munged one of them you might have problems. 04:02:38 yeah but Forth has always allowed you to crash the system :-) 04:02:52 But for the most common errors (unrecognized word, badly formatted number, and if I make this work reading bad memory and so on), it does a great job. 04:03:00 Yes, exactly. 04:03:07 There is no protection against anything here. 04:03:46 I restore everything but the return stack 04:04:10 Well, QUIT resets the return stack anyway. 04:04:18 exactly 04:04:35 KipIngram: oh also check out https://www.gnu.org/software/libsigsegv/ 04:04:52 Cool cool. 04:04:57 Thanks. 04:05:05 no worries mate! 
04:05:38 If I exit the system with BYE, when I bring it back up I get the same display screen and data stack. 04:06:15 it might be worth using that library, it is more-or-less portable and not very large... i'm pretty sure GNU clisp uses it to catch page faults and extend the stack 04:06:16 Ah, nice. I do plan to look into that sort of thing. Need my disk support working first, because I'll want to write that stuff into my own file system. 04:07:09 I want to make that work at the process level; each process can have its own save state. 04:07:37 My BYE is going to exit the active *process*, like closing a screen in Gnu screen. 04:07:49 You'll just move to the next process over in the ring, if there is one. 04:07:57 --- quit: zy]x[yz (Ping timeout: 272 seconds) 04:08:06 BYE from the last process will truly exit, and I also have a word EXIT that will leave everything. 04:10:41 I haven't actually *done* any of that yet, but I have, I think, laid all the groundwork for it - I really don't expect any serious problems getting it to fly. I may be sorely disappointed, I suppose. 04:10:59 On the host instead of MARKER, I have SAVE and RESTORE 04:11:10 But all the words already use proper "in process" memory references, the process ring exists, with just one process in it, etc. 04:14:23 Mostly what I have left is to just write a word that allocates the RAM for a new process, initializes its variables, and puts it in the ring, and words to navigate the ring. 04:14:38 Eventually I'll define keystrokes that do that. 04:15:00 Cooperative or Preemptive? 04:15:31 Well, cooperative first; I'll think about preemptive later. 04:16:23 The init of that first process is spread out in the assembly code and then the Forth that runs right at the start; I need to re-arrange that so that it can just call the same word I just mentioned. 04:16:45 I stick to Cooperative 04:16:57 Then BYE will just release the active process's heap pages, remove it from the ring, and jump over a notch in the ring. 
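[editor's note] The process ring KipIngram describes — spawn into a circular list, BYE unlinks the active process and jumps over a notch — can be sketched as a doubly linked ring in C. All names (proc_t, ring_spawn, ring_bye) are invented for illustration; his real system keeps this state in Forth/assembly.

```c
#include <stdlib.h>

typedef struct proc {
    struct proc *next, *prev;
    int id;                     /* stand-in for per-process state */
} proc_t;

static proc_t *active;          /* the process currently "on screen" */

/* Create a process and link it into the ring after the active one. */
void ring_spawn(int id) {
    proc_t *p = malloc(sizeof *p);
    p->id = id;
    if (!active) {
        p->next = p->prev = p;  /* first process: a ring of one */
    } else {
        p->next = active->next;
        p->prev = active;
        active->next->prev = p;
        active->next = p;
    }
    active = p;
}

/* BYE: release the active process, move one notch over in the ring.
   Returns 0 when the last process exits, 1 otherwise. */
int ring_bye(void) {
    proc_t *p = active;
    if (p->next == p) {         /* last process: truly exit */
        free(p);
        active = NULL;
        return 0;
    }
    p->prev->next = p->next;
    p->next->prev = p->prev;
    active = p->next;           /* jump over a notch in the ring */
    free(p);
    return 1;
}
```

In the real system the "release" step would also hand the process's heap pages back to the memory manager before unlinking.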
04:17:43 Sort of the "philosophy" of this system is that it will all be working in a cooperative way to solve whatever problem is relevant. No consideration at all of a process being "adversarial." 04:17:56 I think that would just take me too far from Forth. 04:18:33 Right 04:20:35 I'm hoping that by avoiding all the protections and security features that mainstream operating systems have to provide I can gain a performance advantage. 04:21:28 I usually design with a fixed number of tasks, no dynamic creation of tasks. 04:22:18 Well, I can *do* dynamic creation, but on the other hand I could also just do all that at startup, create the app-specific structure I need, and then just let it run. 04:22:25 So it would operate as though it were that sort of system. 04:23:01 I use coroutines for the rest. 04:23:29 And I've described my memory manager as "fixed page size," but that's not 100% true. As long as there's still lots of RAM left up at the top, I can allocate pages of any size. So again at startup I could lay out whatever sort of memory structure my application needed. 04:24:24 I have that on my mind right now, with an eye toward potentially making the disk block buffers larger than 4k, to get better bandwidth on the storage. 04:24:59 Those buffers are allocated as they are first used, but once they exist they're then permanently associated with that particular disk buffer record. 04:28:39 I prefer to have a fixed number of disk buffers, which can be set at meta/cross compile time. 04:29:53 Yes, I think that makes sense in your system. You use this to develop embedded applications, right? 04:30:04 yes 04:30:24 Most such apps don't need all that dynamic stuff. 04:30:29 They do what they do. 04:31:25 I want mine to be a generally usable OS, where there's no telling what I might be doing in some of my processes. 04:32:25 right 04:34:45 I doubt I'll do anything general purpose in the coming years, but if I did I'd probably fall back on C+POSIX. 
04:35:23 At some point I may even try to spin this off in a way that lets me specify it as my login's Linux shell app. 04:35:29 "forsh" or something. 04:35:43 :) 04:35:58 Of course in that case I'd also need to integrate well with the host file system. 04:40:40 i also want a forth based shell :) - it's on my todo list :) 04:40:48 :-) 04:40:57 We could make a t-shirt... 04:41:05 :) 04:42:49 the only thing stopping me right now is the job control stuff which i use a lot (ie: i start vi, and to return to the shell, i use ctrl-z - jobs and fg are used to recover them) 04:43:22 If you're developing a general purpose OS, take a look at the Rump Kernel used in NetBSD for drivers: 04:43:23 https://en.wikipedia.org/wiki/Rump_kernel 04:43:58 Oh, cool. 04:44:09 Yes, there's a lot of work between me and bare metal. 04:44:24 Bluetooth, network, USB, etc. 04:45:37 You can use it to get access to existing drivers even from your own bare metal OS. 04:50:44 https://www.usenix.org/system/files/login/articles/login_1410_03_kantee.pdf 04:58:26 That's a good resource. 04:59:40 Hmmm. I'm contemplating a couple of (simple) changes to my memory manager, that would let me work with more than one block size, and also one that would improve the system's ability to pull "HIHEAP" back down to lower addresses. Right now if you free the page immediately below HIHEAP it will draw it back down rather than put the page on the free list. 05:00:06 But "pathological" deallocation orders can leave me with some pages on the free list that could be given back to the >HIHEAP pool. 05:00:23 Thinking of ways I could bolster that code without getting too inefficient. 05:01:35 When I free a block, I could put a "magic number" at the high end of the block. 05:02:15 Then I could just check right below HIHEAP to see if that number is there. If it is, I could then search the free list (or have a pointer to the free list entry below the magic #) and move that page back to totally free RAM. 
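[editor's note] The HIHEAP "healing" idea above — stamp a magic number plus a back-pointer at the high end of each freed page, then check just below HIHEAP and pull the pointer back down — can be sketched in C. This is an illustrative reconstruction, not KipIngram's code (his is x86-64 assembly); names, the page size, and the magic value are all invented.

```c
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE  4096u
#define FREE_MAGIC 0xFEEDFACEu

/* Doubly linked free-list node at the base of each free page. */
typedef struct page { struct page *prev, *next; } page_t;

/* Stamp at the top of a free page: the magic word plus a pointer to
   the page's list node, so heal() need not search the free list. */
typedef struct { uint32_t magic; page_t *node; } tail_t;

static _Alignas(4096) char arena[4 * PAGE_SIZE];  /* toy heap for the sketch */
static page_t *free_list;
static char   *hiheap;       /* lowest address of the untouched high pool */

void heap_init(void) { free_list = NULL; hiheap = arena + 3 * PAGE_SIZE; }

static tail_t *tail_of(page_t *p) {
    return (tail_t *)((char *)p + PAGE_SIZE - sizeof(tail_t));
}

void free_page(page_t *p) {
    p->prev = NULL;
    p->next = free_list;
    if (free_list) free_list->prev = p;
    free_list = p;
    tail_of(p)->magic = FREE_MAGIC;   /* mark the high end as "free" */
    tail_of(p)->node  = p;
}

/* HEAL: if the page just below hiheap is free, unlink it and pull
   hiheap back down.  Returns 1 if a page was recovered, 0 if not.
   (Allocation code must clear the magic so live data can't fake it.) */
int heal(void) {
    tail_t *t = (tail_t *)(hiheap - sizeof(tail_t));
    if (t->magic != FREE_MAGIC) return 0;
    page_t *p = t->node;
    if (p->prev) p->prev->next = p->next; else free_list = p->next;
    if (p->next) p->next->prev = p->prev;
    t->magic = 0;
    hiheap = (char *)p;
    return 1;
}
```

As in the chat, calling heal() in a loop until it returns 0 does the whole job, and each single call is cheap enough to run incrementally from QUIT or a background task.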
05:02:34 Of course, the allocation code would have to make sure that magic number was NOT present. 05:02:40 So adds a little work. 05:02:43 But it might be worth it. 05:05:04 The free list linked list is stupid simple right now - just a straight one-direction list. To make this work without searching the free list I'd have to promote that to doubly linked status. 05:05:19 And then there would be the usual tedious "end conditions" that you always have to deal with with such lists. 05:05:32 Aren't you anticipating too much at this point? 05:05:41 Maybe. 05:05:54 But that's part of what I do - I think very deeply about stuff like this. 05:06:05 I've never been a "just start hacking away" designer. 05:06:13 So I'm sure I err sometimes to the high side of that. 05:06:43 I don't like to make systems too "smart" to begin with. 05:07:26 I got here by a logical path, though - first I started thinking "better storage bandwidth," and that meant "larger block sizes." That led to "memory manager supports more than one block size," --> "multiple free lists." 05:07:48 And finally that brought up how to improve on blocks getting permanently associated with one of the free lists. 05:08:16 If the system was good at putting RAM back in the high pool when the opportunity arose, then that memory manager would be able to evolve better in response to changing conditions. 05:08:53 The other solution is to just write a word that looks over the free lists and gives back what it can. 05:09:04 And does some work in the process, but I'd just run that word when I wanted to. 05:15:48 The tradeoffs are usually too vague in the early stages to pin down that much sophistication up front. 05:17:39 Maybe. But I do think DEEPLY. 05:18:00 It's just an approach that has worked for me enough to keep me doing it. 05:18:15 Besides, it's not necessarily the "early stages" - this thing is pretty far along. 
05:18:25 ok 05:18:36 I work in storage professionally - it's an absolute truism that small block size --> low bandwidth. 05:18:51 You just can't get full bandwidth out of a storage device with 4k operations. 05:19:14 You can with 64k, usually, and in between it's a transition. 05:19:29 It is a tradeoff - you can't get full *IOPS* with the larger blocks. 05:19:35 So it's a matter of what you need in a given situation. 05:19:57 And when you're in a position to measure that need. 05:20:46 It's meant to be a generally usable system - I can say right now that I want to be able to deal with high IOPS situations and high bandwidth situations. 05:20:54 Without having to know exactly what those situations will be. 05:21:46 Yes, I can be a little stubborn sometimes. ;-) 05:21:55 It's come with age. 05:22:26 And besides, right now I'm just thinking about to what extent I can move in the aforementioned direction CHEAPLY. 05:22:38 I'm not about to go toss tons of baggage onto the code. 05:24:03 It's not you, it's me, I'm allergic to "General Purpose". :) 05:24:40 :-) That's ok - I've done it that way too over the years. Most of my career (the most fun part of it, anyway) was in very specific embedded systems, and I was just as allergic. 05:25:03 Very anti feature creep. 05:25:20 :) 05:26:43 I do also see a concern here. These memory management routines are very fast, and they're *reliably* fast. They always take about the same length of time. But if I go incorporate high-memory healing into it, then the "free" operation becomes something with potentially unknown variable time requirements. 05:26:58 It might have to walk down through 50 pages there at the top that suddenly become freeable. 05:27:05 That's worth being cautious of. 05:27:21 How long it took would depend on what other processes had done to the heap. 05:27:59 That's probably my biggest concern - there's no good way to predict how that might bite me in the wrong situation. 
05:28:26 And like I said, I could just write a "healing word" that does the necessary work that I can run when I choose. 05:29:54 And I could write that word without changing a darn thing in the existing code. It would have to walk the free list repeatedly, until it was no longer able to free anything, but nothing else would need to be changed. 05:31:17 At some point I'll probably re-write the allocate and free words in assembly. They're short enough that that wouldn't be too bad. But... later. 05:33:40 Can't your "healing" be a task that does a fixed amount of work each time it gets a turn? 05:34:06 Yes, that's definitely a possibility too. 05:34:13 Good idea. 05:34:23 We have "sweepers" like that in our products at work. 05:34:48 Flash memory is crap, and it forgets. We have error detection and correction circuitry, but only up to a certain number of errors. 05:35:29 A sweeper crawls through the whole system over the course of every couple of weeks, reading each page, and if the error count is above a certain level (if we're getting too close to uncorrectability) writes it back out again clean. 05:36:07 Sometimes I feel like making enterprise grade storage out of this stuff is like making armor out of toilet paper. 05:36:21 :)) 05:36:33 But it keeps us in revenue. 05:36:46 Interesting work 05:37:02 It is - there are a lot of interesting features. 05:37:59 Well, I better get my rear in gear - I'll catch you guys in a little bit. 05:38:11 Ciao KipIngram 05:43:39 --- quit: rdrop-exit (Quit: Lost terminal) 06:19:53 how would i use the forth parser on a socket/file? 06:20:03 or is that not idiomatic? 06:43:36 corecode: redirect KEY? to fetch from it or some such 06:44:23 ah, i had to refill 06:44:39 now i need to figure out why parse seems to read a whole line instead of just one word 06:44:53 ah, because i'm a dummy 07:08:11 --- quit: dave0 (Quit: dave's not here) 07:15:19 corecode: Some of the folks here have one-word-at-a-time systems and swear by them. 
07:15:41 I've never taken the plunge, and I think I would find it displeasing, but you never know. It's not dumb. 07:15:56 If it helps you for it to work that way, make it work that way. 07:18:00 i need to do a command dispatch now 07:18:09 thinking about using a secondary wordlist 07:18:21 or just do a whole set of ifs 07:23:14 --- quit: Zarutian (Read error: Connection reset by peer) 07:23:36 --- join: Zarutian (~zarutian@173-133-17-89.fiber.hringdu.is) joined #forth 07:32:44 set of ifs == yuck 07:33:06 --- quit: tabemann (Ping timeout: 250 seconds) 07:33:21 indeed 07:55:20 KipIngram: one word at a time system? So no words that can parse ahead? 07:55:20 How would something like : even work then 07:56:31 that's not what it implies - the : would put the parser into a state which would push the following tokens to the thing which is handling the : 07:58:19 No - you can still parse ahead. 07:58:34 It just means that as soon as you hit the space bar after a word, that word is immediately accepted and executed. 07:58:40 Instead of buffering up a line. 07:58:55 I'm not advocating this, btw - we just had a discussion about it in here last week. 07:59:14 The other guys in the discussion were very high on that approach. 07:59:34 But I decided that sometimes I like to type a line of code and then look at it to "review it" before committing to it. 08:00:03 saw the discussion, but can't say i'd like that - the ability to edit a line before hitting return/parsing seems required to me 08:00:06 depends, some Forths buffer one line so you can edit it as you wish, then PARSE gets invoked and gets one word from it at a time. 08:00:09 To answer your question, though, in the situation you were imagining everything would have to be postfix. 08:00:14 : 08:00:16 would become 08:00:25 "" : 08:00:32 It could still work 08:00:41 The system would have to be able to understand literal strings, though. 
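[editor's note] The one-word-at-a-time parsing being debated above — accept a token as soon as the delimiter arrives, instead of buffering a full line — comes down to a WORD/PARSE-style scanner. A minimal C sketch (next_word is an invented name; real Forths do this against an input source set by REFILL):

```c
#include <ctype.h>
#include <string.h>
#include <stddef.h>

/* Return the next whitespace-delimited token from *src, advancing the
   cursor, so an interpreter can accept-and-execute one word at a time.
   Copies at most outsz-1 chars; returns NULL when input is exhausted. */
const char *next_word(const char **src, char *out, size_t outsz) {
    const char *s = *src;
    while (*s && isspace((unsigned char)*s)) s++;   /* skip leading blanks */
    if (!*s) { *src = s; return NULL; }
    size_t n = 0;
    while (s[n] && !isspace((unsigned char)s[n])) n++;
    size_t cpy = n < outsz - 1 ? n : outsz - 1;
    memcpy(out, s, cpy);
    out[cpy] = '\0';
    *src = s + n;
    return out;
}
```

Whether the buffer behind *src holds a whole edited line or just the characters typed so far is exactly the line-buffered vs. word-at-a-time choice the channel is arguing about; the scanner itself is the same either way.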
08:01:27 the_cuckoo: Not 'required' operationally, but yes, I see it as a personal requirement of mine. 08:02:01 Gee, where were you the other day? I was all alone holding out for line buffering. 08:02:27 sure - would drive me insane if i didn't have it though - would be like typing on irc without the ability to edit - man, that would be embarrassing :) 08:02:27 * PoppaVic chuckles 08:02:57 the_cuckoo: you haven't used talk on a multi-user unix system then ;-) 08:03:12 heh.. man 1 write ;-) 08:03:18 yeah - naturally :) - and i hate it :) 08:03:50 Zarutian: and the irritation of it arriving while you were trying to program ;-) 08:03:54 otoh, i only used it with colleagues and that's fine 08:05:59 --- quit: dave9 (Ping timeout: 250 seconds) 08:06:02 PoppaVic: yeah, I once set up a crontab script that very infrequently and randomly ran write "*HIC* Excuse me.\n" at a random terminal of users I knew had not used unix shells much. 08:06:41 Bwahahahahahahahahah... 08:07:35 best was that there was always an isopropyl alcohol bottle in the room where that computer was. 08:07:49 ancient stupidity. sysops had to patch solaris a dozen times before that shit stopped 08:08:28 in plain view of the window of the locked door. If people complained I removed the bottle and put it out of sight until a fortnight or so later. 08:09:33 mind you, this was/is a uni student shell server that people were warned might not be reliable. 08:10:53 this kind of crap got me into really looking into comp.sec. and fine-grained process access control 08:12:01 a while after that I discovered KeyKOS, its descendant EROS, and its descendant CapROS. Now those ideas live on in seL4/Genode 08:16:46 What was the problem with isopropyl alcohol? 08:18:39 KipIngram: obviously, it was evil: "alcohol". 08:24:45 KipIngram: nothing, it was just there to imply the shell server sometimes got _smashed_. 08:25:51 Isopropyl alcohol is rubbing alcohol, isn't it? 08:25:57 I thought that was poisonous. 
08:29:21 it causes blindness and mushes the brain as well, iirc 08:29:26 organ-failures all over 08:30:53 rubbing alcohol and isopropyl alcohol are not the same. The latter is a specific industrial solvent 08:32:14 the former is often diluted with stuff one does not want on electronics. 08:42:30 rdrop-exit: you know, another place I could put an increment of heap cleansing would be in the QUIT loop. 08:42:47 Just do a little of that work every time I acquired a new line. 08:42:58 Same idea as your background task idea. 08:56:52 --- quit: Keshl (Read error: Connection reset by peer) 08:56:54 --- join: Keshl_ (~Purple@207.44.70.214.res-cmts.gld.ptd.net) joined #forth 09:13:59 --- join: ircbfs (~ircbfs@81.221.130.119) joined #forth 09:14:03 ircbfs: hi 09:14:10 ircbfs: testing 09:14:13 cool 09:18:00 --- quit: ircbfs (Ping timeout: 244 seconds) 09:20:50 --- join: ircbfs (~ircbfs@81.221.130.119) joined #forth 09:25:19 --- quit: ircbfs (Ping timeout: 255 seconds) 09:29:20 --- join: ircbfs (~ircbfs@81.221.130.119) joined #forth 09:31:18 ircbfs: hi 09:31:21 test 09:33:31 --- quit: ircbfs (Ping timeout: 250 seconds) 09:40:36 corecode: Is that a Forth bot? 09:41:49 [I.e. 
written in forth] 09:47:42 --- join: ircbfs (~ircbfs@81.221.130.119) joined #forth 09:47:42 --- quit: ircbfs (Remote host closed the connection) 09:50:44 yes 09:50:56 50 lines at the moment 09:56:07 --- join: ircbfs (~ircbfs@81.221.130.119) joined #forth 09:57:56 --- quit: ircbfs (Remote host closed the connection) 09:58:03 --- join: ircbfs (~ircbfs@81.221.130.119) joined #forth 09:58:09 ircbfs: hi 09:58:10 testing 10:01:34 --- quit: ircbfs (Remote host closed the connection) 10:01:42 --- join: ircbfs (~ircbfs@81.221.130.119) joined #forth 10:01:58 sorry, i'll take this elsewhere 10:03:19 --- join: dave9 (~dave@223.072.dsl.syd.iprimus.net.au) joined #forth 10:04:54 https://gist.github.com/f069b26345ff5c0403189bee15db4e37 10:05:17 john_cephalopoda: in case you're interested 10:26:11 --- join: zy]x[yz (~corey@unaffiliated/cmtptr) joined #forth 10:34:29 --- quit: zy]x[yz (Ping timeout: 240 seconds) 10:34:48 --- join: zy]x[yz (~corey@unaffiliated/cmtptr) joined #forth 10:48:06 --- quit: Zarutian (Read error: Connection reset by peer) 10:49:48 --- join: Zarutian (~zarutian@173-133-17-89.fiber.hringdu.is) joined #forth 10:53:52 corecode: Oooh, nice 11:22:17 --- quit: gravicappa (Ping timeout: 250 seconds) 11:27:42 --- join: mark4 (~mark4@148.80.255.161) joined #forth 11:35:52 --- quit: mark4 (Remote host closed the connection) 11:45:24 --- quit: Zarutian (Ping timeout: 245 seconds) 12:29:06 --- join: Zarutian (~zarutian@173-133-17-89.fiber.hringdu.is) joined #forth 12:35:19 --- join: mtsd (~mtsd@94-137-100-130.customers.ownit.se) joined #forth 12:45:07 --- join: mark4 (~mark4@148.80.255.161) joined #forth 13:08:39 Ok, guys - I did some nice things to the memory manager today. 13:09:02 First off, I rewrote them in machine code; just like the "block already present" part of BLOCK, they're things that need to run fast. 
13:09:16 corecode: http://forth.org/forth_style.html 13:09:48 Second, I switched from a singly linked free list to a doubly linked one, and implemented that "magic number" at the high end of the block (as well as the page size) so that it's easy to tell if a free page exists right below the high heap pointer. 13:09:52 or sam falvo's style of aligned ": word definition ;" 13:10:22 And finally I wrote a word HEAL that does that check, and if there is a free page at the top it unlinks it from the free list and moves the pointer back down. 13:10:54 Just one time - HEAL returns a flag that's true if it recovered a page and 0 if it didn't; so the idea would be if you wanted to do the whole job you'd call HEAL until it returned false. 13:11:12 I can see all of this assembly code, for all three routines, on my screen at one time - it's a pretty small amount of stuff. 13:11:27 It just turns out to be a lot easier to manipulate doubly linked lists in asm than in Forth. 13:11:41 That's a situation where having registers around helps, I think. 13:11:55 I hate to be saying that, but it seemed pretty obviously easier. 13:12:04 --- quit: mark4 (Read error: Connection reset by peer) 13:13:53 --- quit: mtsd (Quit: WeeChat 1.6) 13:14:09 --- join: mtsd (~mtsd@94-137-100-130.customers.ownit.se) joined #forth 13:14:28 --- quit: mtsd (Client Quit) 13:14:47 By having the size stored up at the top of the free blocks, I pave the way for having heaps of multiple page sizes, just by having a free list pointer for each one and allocate and free routines for each one. 13:20:19 whee 13:20:24 OK, back to programming 13:22:19 So now I can add support for an additional heap page size by 1) adding a variable for that size's free list, and 2) adding allocate and free routines for that page size, which will be four lines each, including the header. 13:22:52 Those just point rax at the appropriate free list pointer and load the page size into rdx, and then jump to existing labels. 
13:23:02 Plus these are machine code fast now. 13:23:07 This was a big improvement. 13:23:30 You know, dealing with doubly linked lists in *Forth* is like pulling teeth - it's a real pain in the ass. 13:23:40 But dealing with them in assembly is almost trivial. 13:23:51 That's one situation where having registers helps enormously. 13:24:20 Of course, with my frame system I could emulate the approach using the stack. 13:25:25 What do you think of a "LastX" register? The old HP calculators had one. 13:25:49 Say you said ALPHA BETA +. Then LastX would produce BETA. 13:26:30 That would mean adding one line to a whole slew of primitives: mov <reg>, rcx where <reg> is the register I choose for this. 13:33:58 Oh wow - I just made a scary discovery. 13:34:26 --- join: mark4 (~mark4@148.80.255.161) joined #forth 13:34:28 Remember how I have my actual source asm file in a Dropbox folder, and then have project folders on my Mac and my Linux box linking to it with soft links? 13:34:38 I just noticed that git wasn't committing changes to that file. 13:34:48 It just sees the (unchanging) symbolic link and doesn't do anything. 13:35:02 So the last few days I haven't been getting my changes committed. 13:35:15 Looks like it was just committing the .lst file, and I didn't notice until now. 13:35:27 annoying 13:35:48 I did cp source.asm source.gold and added source.gold and committed. That worked, but it's a process I'll have to go through every time I want to commit. 13:35:58 It is annoying - this seems like a very reasonable use case. 13:36:14 Maybe there are reasons you don't always want it, but sometimes you clearly would - it should be configurable. 13:36:47 time for a bug report? :) 13:37:01 it's documented that git doesn't follow symlinks 13:40:06 I'll write a little script. 13:40:12 That does the copy and then the commit. 13:40:14 No problem. 13:40:44 make it part of the makefile steps ;-) 14:03:39 corecode: bot looks nice :) 14:06:01 I don't have a makefile. 
I just have a script that runs the assembler and linker. 14:06:15 I don't commit every single build. 14:06:19 I'll make it a separate script. 14:06:22 "commit" 14:06:46 well, sheesh - use a damned Makefile 14:14:51 there are enough of us now with projects that it might be nice to have a central place to document them or have links to repos, etc. 14:15:02 does anyone have thoughts on what would be good for this? 14:15:47 e.g., I have to read quite a way up and infer a whole lot to make sense of, e.g. KipIngram's project, since I haven't been able to read everything in the channel over the past N weeks, etc. 14:16:18 there are a lot of interesting things and i think it would be convenient to see what's going on at a higher level 14:19:41 KipIngram: Sounds like you might make use of a pre-commit hook. 14:20:18 KipIngram: it's GIT - use a sledge. 14:24:26 bluekelp: Um, what can I tell you about it? 14:24:47 It's pure nasm assembly, using OS syscalls for read, write, exit, seek, open, and close. 14:24:59 Headers are stored apart from definition bodies. 14:25:24 Indirect threaded. 14:26:03 As far as the Forth itself goes, I think those are the main aspects. I'm trying to write a full-on operating system, so that I can eventually replace those syscalls with original code and run on bare metal. 14:26:08 Oh, and the ioctl syscall. 14:26:11 Uses that too. 14:26:32 it's intended to be multi-process / multi-task, supports a per-process command history. 14:26:52 Very similar to the bash command history (though I corrected what I see as a bug in that). 14:27:28 Has a simple memory manager to let me manage RAM as processes appear, disappear, etc. 14:27:48 I plan to support inter-process comm using streams, probably based on msgpack. 
14:46:45 KipIngram: everything you state there sounds like X4 but x4 is direct threaded :) 14:47:37 no IPC yet or even threading yet but i have not decided if im going to do threads or just have forth do the multi threading itself 14:47:54 probably be better to use pthreads 14:48:31 did you write your own memory manager? 14:48:48 x4 has had one for years and a terminfo / curses interface 14:49:01 but im trying to fix a bug in that related to the new extended terminfo file format 14:49:13 Yes, I wrote a simple memory manager, first in Forth and then today switched it to assembly and improved it a bit. 14:49:34 I just have the assembly source call out a huge memory reservation - it gets all that from the OS at startup. 14:49:43 Then I parcel it out / gather it back up with my routines. 14:49:59 It's intended to be FAST, not full-featured. 14:51:08 No garbage collection - once allocated, blocks stay put. 14:51:27 My original idea was that it would support only one page size, which means fragmentation isn't an issue. 14:51:57 But now I'm considering deploying multiple subsystems, each handling a page size, in the same pool. Mostly because it would be so easy to do. 14:52:04 That does raise the spectre of fragmentation, though. 14:52:45 I don't have tasks and IPC working yet either, but I've designed things from the very beginning to support those things, so I'm hoping it will be relatively straightforward to bring them up. 14:52:56 my memory manager also chunks out allocations but merges free areas on deallocate 14:53:07 and if all blocks in a given heap are deallocated the heap is freed 14:53:10 Yes, the improvements I made today would allow for merging. 14:53:22 if you try to allocate more space than is available in a heap a new one is allocated 14:53:25 So it would have to be "hard" fragmentation to create a real problem. 14:53:40 Ah, neat. I'm not doing that yet, but I guess I could. 14:53:42 do you have guard blocks at the start and end of your heap? 
14:53:51 I figure if I go bare metal the initial allocation will be "all that there is." 14:53:57 you can put a pointer to prev/next heap inside the guard blocks and speed up merging 14:54:13 and you can detect page overrun when a guard block is trashed 14:54:31 have you looked at x4's memory manager? 14:54:53 Hmmm. You're making me think about the possibility of a process allocating a big block within which it would allocate its small blocks while running. 14:55:00 Then when it was done it could just let go of the big block. 14:55:05 That would combat fragmentation. 14:55:11 No, I haven't. 14:55:20 i have a memory manager test that allocates 20 thousand buffers of random sizes from 16 bytes to 16k 14:55:27 and then it frees them all up in a random order 14:55:36 I suspect mine is much, much simpler than yours - it's like as simple as it could possibly be. 14:55:55 mine needs simplifying 14:56:10 "it works" but its not written as well as i would like it to be 14:56:34 but its also not on a par with the ones used in linux itself 14:56:36 Been there, for sure. 14:56:49 I've been a fairly good boy this time, though - this is all decently clean so far. 14:57:08 its not that its not clean, its just more complicated than i think it needs to be 14:57:19 even so it is still quite fast 14:57:37 I just got the BLOCK stuff put in, and am gearing up to do a file system. 14:57:51 That's what made me start to think about larger memory blocks. 14:58:00 Larger blocks --> better disk bandwidth. 14:58:33 i was writing a DOS forth (16 bit) that was isforth (x4) ish 14:58:35 BLOCK is implemented with an array of buffers and block numbers are set-associated with "rows" of that array. 14:58:44 A block can use any buffer in its row. 14:58:54 and i wrote a block editor that allowed you to specify the dimensions of the blocks 14:58:57 Right now my row length is two, but it would be easy to extend it. 14:59:39 Easy and pretty low cost, performance-wise. 
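[editor's note] The BLOCK scheme just described — buffers grouped into rows, a block number set-associated with one row, any buffer in the row usable — combines naturally with the use counter discussed below it. A C sketch (sizes, names, and the modulo row hash are all illustrative assumptions, not KipIngram's actual layout):

```c
#include <string.h>

#define ROWS     8
#define WAYS     2          /* "row length is two" */
#define BLK_SIZE 4096
#define NO_BLOCK (-1)

typedef struct {
    int  blkno;             /* which disk block lives here, or NO_BLOCK */
    int  use_count;         /* tasks currently holding this buffer */
    char data[BLK_SIZE];
} buf_t;

static buf_t bufs[ROWS][WAYS];

void blocks_init(void) {
    for (int r = 0; r < ROWS; r++)
        for (int w = 0; w < WAYS; w++)
            bufs[r][w].blkno = NO_BLOCK;
}

/* BLOCK: return the buffer holding blkno, loading it if needed.
   A resident block stays put until its use count drops to zero;
   returns NULL if every buffer in the row is still in use. */
buf_t *block(int blkno) {
    buf_t *row = bufs[blkno % ROWS];
    for (int w = 0; w < WAYS; w++)          /* already resident? share it */
        if (row[w].blkno == blkno) { row[w].use_count++; return &row[w]; }
    for (int w = 0; w < WAYS; w++)          /* find a releasable slot */
        if (row[w].use_count == 0) {
            row[w].blkno = blkno;
            memset(row[w].data, 0, BLK_SIZE);  /* real code reads from disk */
            row[w].use_count = 1;
            return &row[w];
        }
    return NULL;                            /* whole row pinned */
}

void block_release(buf_t *b) {
    if (b->use_count > 0) b->use_count--;
}
```

Extending the row length is just raising WAYS, which matches the "easy and pretty low cost" remark; the use counter is what lets multiple tasks write the same resident block safely, as long as they stay out of each other's part of it.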
15:00:54 Earlier I was thinking about how to implement locking around the heap routines. 15:01:15 And I'll need that around the block buffer rows as well. 15:01:44 I am thinking that I'm going to make it so that once you BLOCK in a block it will stay until you release it. 15:02:03 And multiple tasks will be able to share a resident block, so there will be a use counter around it. 15:02:13 No re-using that buffer until the use counter goes to zero. 15:02:46 I figure it's perfectly ok for multiple tasks to be writing to the same block, as long as they aren't directly conflicting around a particular part of that block. 15:03:09 As far as preventing that goes, I just have to be a good programmer. 15:04:29 and your end users have to be good end users :P 15:05:19 :-) I guess I'm operating on the principle that this is mostly for me. 15:05:55 Earlier in my career I had some positions of real authority (big fish in small ponds), but I surely don't anymore. I don't feel much in control of anything at work these days. 15:06:15 I think this is partially my way of having something technical that is absolutely mine, to do exactly as I please with. 15:06:34 No one rushing me, no one looking over my shoulder, etc. 15:06:48 ya 15:29:37 hi 15:41:31 --- quit: Zarutian (Ping timeout: 255 seconds) 15:44:01 --- join: Zarutian (~zarutian@173-133-17-89.fiber.hringdu.is) joined #forth 15:52:21 --- join: yrm (~yrm@host155.shizentai.jp) joined #forth 15:54:25 --- quit: mark4 (Ping timeout: 250 seconds) 16:24:22 Hi corecode. 16:24:56 --- join: dave0 (~dave0@223.072.dsl.syd.iprimus.net.au) joined #forth 16:25:56 hi 16:43:35 KipIngram: someone always rushing you regarding technical stuff at work? 16:45:38 KipIngram: what do you do at work? 16:52:01 Oh, everyone's always in a hurry at work. 16:52:40 corecode: Nothing too spectacular. I do performance testing, and troubleshooting when the performance isn't right, on my group's line of flash-memory based enterprise grade storage systems. 
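[editor's note] The BLOCK scheme described above - block numbers set-associated with "rows" of a buffer array, a use counter per buffer, and no recycling until the counter hits zero - might look like this as a C sketch. All sizes and names are invented for illustration, and the row locking and actual disk I/O the chat mentions are deliberately elided.

```c
/* 2-way set-associative BLOCK buffer cache sketch with use counters.
   A real version would take a per-row lock and read/write the block
   from disk; here a memset stands in for the disk read. */
#include <stdint.h>
#include <string.h>

#define BLK_SIZE 1024
#define NUM_ROWS 8
#define ROW_LEN  2              /* buffers per row; easy to extend */

typedef struct {
    int32_t blkno;              /* -1 = empty */
    int     users;              /* buffer stays put until this hits 0 */
    uint8_t data[BLK_SIZE];
} buf_t;

static buf_t cache[NUM_ROWS][ROW_LEN];

void cache_init(void)
{
    for (int r = 0; r < NUM_ROWS; r++)
        for (int w = 0; w < ROW_LEN; w++) {
            cache[r][w].blkno = -1;
            cache[r][w].users = 0;
        }
}

/* BLOCK: return a buffer holding blkno, bumping its use counter so
   several tasks can share the same resident block. */
uint8_t *block(int32_t blkno)
{
    buf_t *row    = cache[blkno % NUM_ROWS];
    buf_t *victim = NULL;
    for (int w = 0; w < ROW_LEN; w++) {
        if (row[w].blkno == blkno) {       /* already resident: share */
            row[w].users++;
            return row[w].data;
        }
        if (row[w].users == 0 && !victim)  /* recyclable candidate */
            victim = &row[w];
    }
    if (!victim)
        return NULL;                       /* whole row still in use */
    victim->blkno = blkno;
    victim->users = 1;
    memset(victim->data, 0, BLK_SIZE);     /* stands in for disk read */
    return victim->data;
}

/* RELEASE: drop one reference; at zero the buffer may be recycled. */
void release(int32_t blkno)
{
    buf_t *row = cache[blkno % NUM_ROWS];
    for (int w = 0; w < ROW_LEN; w++)
        if (row[w].blkno == blkno && row[w].users > 0) {
            row[w].users--;
            return;
        }
}
```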
16:53:19 This is one of ours: 16:53:21 https://www.ibm.com/us-en/marketplace/flash-storage 16:53:55 Most of the time it's basically just turning a crank - new software version is deployed, I load it up and test. 16:54:07 Once in a while I find a way to make it extra interesting. 16:54:18 I developed most of my test infrastructure myself. 16:55:11 I use an open source package called fio to generate traffic, but I designed the database I keep the test results in, wrote Python and C tools to gather up the numerical results and put them in that database, and a collection of tools for creating reporting assets. 16:55:43 And I know more about how the thing works under the hood than most of the people in the test group, so when there are problems I often work closely with the developers on resolving them. 16:56:03 The group was an independent company up through 2012 called Texas Memory Systems - I was TMS's software engineering manager. 16:56:25 IBM acquired the place and brought in legacy IBM management, so I had to find something else to do. 16:56:52 --- quit: john_cephalopoda (Ping timeout: 250 seconds) 16:57:44 i always wonder what managers do 16:57:50 they mostly seem to be in the way 16:57:59 Well, at IBM they mostly have meetings. 16:58:03 yea 16:58:13 i think ibm keeps at&t afloat 16:58:18 Earlier in my career I was in a number of management roles, including a couple of gigs at small companies as VP of engineering. 16:58:24 I was EXTREMELY hands-on technically. 16:58:56 --- join: john_cephalopoda (~john@unaffiliated/john-cephalopoda/x-6407167) joined #forth 17:01:06 At one of those VP places (the job I consider my favorite of all I've ever had), I had about 65 engineers and techs on my team. Electronics, software, and mechanical stuff. 17:01:13 Most of them were young-ish and inexperienced.
17:02:02 My job was to figure out how to break the goals down into work items that the various people could handle, schedule it all, and watch over them to make sure the pieces were all going to fit together when the time came. 17:02:50 I'm not the only guy in the world who could have done it, but I was the only guy THERE that could do it, except for the owner of the company himself. He was a fantastic engineer and had a great sense of what made that particular kind of product good. 17:02:59 We made equipment for programming programmable devices. 17:03:12 These days it's easy - most stuff "programs itself" and gives you sort of an API. 17:03:36 But back then it often called for precise timing, accurate voltages and currents (and sometimes accurate current RAMPS) and so on. 17:03:48 So there was definitely a niche for good gear of that type, and we made the best there was. 17:04:01 We pretty much just took over that industry in the late 1990s. 17:04:12 From 1995 to 2000 our revenue went up about 50% a year. 17:04:26 Then in 2001 the whole tech sector downturned, and we CRATERED. 17:04:58 We still sold plenty of desktop units used by working engineers, but the big customers that bought the automated stuff didn't need new equipment - the gear they already had was enough. 17:05:19 So anyway, that party ended right in there - I quit in 2002 and started a consulting company. 17:05:34 Let's please not debate the wisdom of starting a consulting company in the middle of a technology downturn. :-| 17:05:44 I did ok, but never made as much as I did working for others. 17:06:01 I stuck with it for about 5 years and then went and found a real job again. 17:06:54 corecode: So, when you wonder what managers do, I think you should frame that in terms of BIG companies. At little companies there is no set formula, and sometimes managers work their asses off. :-) 17:07:51 Those were great days. I miss them.
17:41:03 --- join: tabemann (~travisb@rrcs-162-155-170-75.central.biz.rr.com) joined #forth 17:51:45 --- quit: dave0 (Quit: dave's not here) 17:55:23 --- quit: dddddd (Remote host closed the connection) 18:29:06 WilhelmVonWeiner: i think that i need a thinner ISA BOOK about xtensa 18:43:19 --- quit: proteusguy (Ping timeout: 255 seconds) 19:46:16 --- join: smokeink (~smokeink@42-200-118-142.static.imsbiz.com) joined #forth 20:01:35 --- join: gravicappa (~gravicapp@h37-122-126-13.dyn.bashtel.ru) joined #forth 20:03:52 --- quit: tabemann (Ping timeout: 255 seconds) 20:04:46 --- quit: smokeink (Ping timeout: 255 seconds) 20:05:47 --- join: smokeink (~smokeink@42-200-118-142.static.imsbiz.com) joined #forth 20:14:13 --- quit: smokeink (Ping timeout: 255 seconds) 20:15:27 --- join: smokeink (~smokeink@42-200-118-142.static.imsbiz.com) joined #forth 20:21:32 --- join: nighty- (~nighty@b157153.ppp.asahi-net.or.jp) joined #forth 20:22:00 --- quit: nighty- (Read error: Connection reset by peer) 20:36:56 --- join: tabemann (~travisb@2600:1700:7990:24e0:78df:6a9:450a:b439) joined #forth 20:59:24 Where are you even finding a thick book on it? 20:59:43 Last time I went looking, which was a fair long time ago, to be fair, Xtensa was all about the NDAs. 21:01:15 --- join: [1]MrMobius (~default@c-73-134-82-217.hsd1.va.comcast.net) joined #forth 21:04:17 --- quit: MrMobius (Ping timeout: 244 seconds) 21:04:17 --- nick: [1]MrMobius -> MrMobius 21:04:34 --- join: nighty- (~nighty@b157153.ppp.asahi-net.or.jp) joined #forth 21:08:54 ttmrichter: lesser than 100 pages? 21:11:09 The one I found is 6-700 pages. That's, you know, only 10X. :D 21:12:46 sounds like a cpp project compared to a forth project 21:13:01 This is an improvement over when I first went searching and found only vague handwavy docs and NDA barriers. 21:13:20 Well, any non-stack CPU is going to be a mess to describe fully. 
21:13:44 Even the old 8-bit cores with simple instruction sets would be hard to properly document in 100 pages. 21:13:52 --- join: rdrop-exit (~markwilli@112.201.168.172) joined #forth 21:14:15 i guess a progressively deepening introduction might be better than just a page limit :] 21:14:55 That would involve chip vendors hiring people who could write. 21:15:13 Which would involve them actually caring if customers could figure their stuff out. 21:15:39 In reality, I think a lot of them get a healthy income on the side from consultation on how to use their messes. :D 21:16:11 * ttmrichter isn't naming any specific vendor Silicon Labs here. 21:16:38 ttmrichter: "Even the old 8-bit cores with simple instruction sets would be hard to properly document in 100 pages." 16-bit, 112 pages. PDP-11, best ISA ever. 21:16:47 https://gordonbell.azurewebsites.net/Digital/PDP%2011%20Handbook%201969.pdf 21:17:11 Ah, that's not an 8-bit core and yes, the PDP-11 is an improvement over every ISA before or since. 21:17:19 Yup. 21:17:21 Except for that pain in the ass part that made it really not extensible. 21:17:36 (Which gave us things like middle-endian numbers in later iterations.) 21:17:41 Well, VAX was close enough. 21:17:46 * ttmrichter handwobbles. 21:17:56 VAX was OK, but ... it wasn't a PDP-11 in elegance. 21:18:01 I've got a PDP-11/73 by the way. It works. 21:18:13 You *HAD* a PDP-11/73. 21:18:27 Even as we type, operatives are en route to divest you of it and give it me. 21:18:49 It was the holy grail to add to my collection. Figured I'd be 50+ and wealthy before I could buy one. Literally found one in a junk pile. 21:19:24 I actually want a /40. 21:19:32 Yeah. Me too. 21:19:49 But I brag about my 11/73 instead, because I'm realistic. 21:19:58 :P 21:21:01 But yeah. Extend the concept to 64 bits and we'd have the best architecture of all time. 21:22:09 Instead we've got... well cat /proc/cpuinfo and cry.
21:23:05 I also want a PDP-8/E 21:23:25 Well, Intel is pretty much a collection of every ISA anti-pattern humanity could think of. :-/ 21:23:52 So, the absence of any pattern. Gotcha. 21:23:58 It's like someone said "how can we do this worse?" and someone else said "hold my beer". 21:24:13 "Hold my ALU" 21:24:26 mechaniputer: riscv book's goal is 100 pages 21:24:27 * ttmrichter heard the story--from a former Intel engineer--of how "real" vs. "protected" mode came about. 21:25:00 also if you write it in chinese characters, 100 pages is not too few, you could check twitter 21:25:20 yunfan: Yeah that would be nice. I'm a FOSS-obsessive type though so I'm waiting on LowRISC which seems to be slowing down... :( 21:25:38 What's the F/OSS barrier with RISC-V? 21:26:17 mechaniputer: i will got my fpga board with riscv soft core tomorrow :D 21:27:09 btw, when i bought an Allwinner H3-based SoC years ago, i already have an AR100 as gift inside that chip 21:27:17 ttmrichter: Well, I want to know what the abstraction below my software is to the most fundamental level possible. I want the behavior of my hardware to be completely and unambiguously documented without additional secrets. Which the PDP-11 achieved but no modern CPU does. 21:27:18 which is an OR1k core 21:27:47 mechaniputer: me too, i hate those history legacy 21:29:56 --- quit: `presiden (Quit: WeeChat 2.4-dev) 21:30:50 ttmrichter: I'm sad that Alpha never took off. That would have been better than what we have. I've got an Alpha machine that I've barely touched. I want to get some kind of BSD running on it eventually. 21:32:42 yunfan: Do you know of the Yeeloong laptop that RMS was using for several years? It was a small laptop from a Chinese company that could run with completely free software. He switched to some kind of Lenovo with a modified BIOS once it was too slow to use. 21:34:26 yunfan: It seemed cool but it was hard to find. It used a MIPS CPU.
21:35:05 --- quit: gravicappa (Ping timeout: 258 seconds) 21:35:50 OR1K seems cool too. I just wish I could buy actual hardware for it. 21:39:06 mechaniputer: i knew that its the loong chip , to be honest, we dislike it 21:39:21 mechaniputer: you could still buy some on taobao 21:39:22 Ok, I made use of the undercode that I wrote for EXPECT and QUERY to implement a block line editor. 21:39:34 : EDIT ( block line --) ... ; 21:39:39 mechaniputer: but the loong chip team leader is a maoist, which i dislike 21:39:59 It presents me with the line, with the cursor in line 0 and the initial length reaching out to the last non-space char on that 64-char line. 21:40:01 yunfan: Ah, me too. I'm more of a Trotskyist :P lol 21:40:09 It will let me extend it to 64 chars, but no further. 21:40:18 And it drops it into the right spot. 21:40:23 Actually it's editing it in place. 21:40:33 and also since we have riscv, i dont think mips needs more attention 21:40:39 Yeah 21:40:49 Had to improve the old code a little, because EXPECT kept a null at the end of the line, so what was beyond that didn't matter. 21:40:53 mechaniputer: so what is Trotskyist 21:41:08 I had to make sure spaces were getting deposited out there, and of course I don't have a null in this circumstance. 21:41:16 yunfan: Socialism in the tradition of Leon Trotsky. 21:41:43 mechaniputer: ah, i got it via wikipedia, they say there were Trotskyist at hongkong 21:42:29 mechaniputer: I don't know much about Alpha. By the time I was in a position to be able to look closer at it, it was dead. 21:43:12 which fund the recently labour worker's movement in china mainland 21:44:15 ttmrichter: Yeah. It was good though. The successor to VAX which was the successor to PDP-11. DEC designed Alpha to be the (parallel) architecture of the next 30 years, right before they went out of business. 21:44:33 --- quit: smokeink (Ping timeout: 250 seconds) 21:44:42 What was the main cause of the Alpha's death? 21:44:47 Typical DEC marketing?
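[editor's note] The EDIT behavior described above - a block line is a fixed 64-character field with no terminating null, so the editor's initial length runs out to the last non-space character, and trailing space must be deposited explicitly - can be illustrated in C. The function names here are invented; they are not words or code from the system under discussion.

```c
/* A block line is 64 characters inside a block buffer: no null
   terminator, blanks pad the tail. */
#include <stddef.h>

#define LINE_LEN 64

/* Initial edit length: index just past the last non-space character.
   An all-blank line has length 0. */
size_t line_length(const char line[LINE_LEN])
{
    size_t n = LINE_LEN;
    while (n > 0 && line[n - 1] == ' ')
        n--;
    return n;
}

/* After editing in place, deposit spaces out to the end of the field
   instead of writing a null, since the field lives inside the block. */
void pad_line(char line[LINE_LEN], size_t used)
{
    for (size_t i = used; i < LINE_LEN; i++)
        line[i] = ' ';
}
```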
21:45:12 (I swear that if DEC's marketing had bought KFC, they'd have branded it Kentucky Hot Dead Chicken Parts...) 21:45:49 ttmrichter: Just the closure of the company. And perhaps the marketing too. But it was WAY better than x86. 21:46:03 I've got a manual for it. 21:46:57 Alpha was the death rattle of DEC. We would all be better off if it had survived though. 21:47:59 --- join: smokeink (~smokeink@42-200-118-142.static.imsbiz.com) joined #forth 21:51:50 mechaniputer: Being way better than x86 is an astonishingly low bar. Like the PDP-11 was an improvement over all ISAs before and since, the x86 is worse than all ISAs before and since. :D 21:51:59 The x86 is the anti-PDP-11. 21:53:43 I don't mean to offend anyone, but I don't think socialism really works without government force being used to impose it. 21:53:52 Human nature just isn't compatible with it. 21:54:35 Human beings strive for their own well-being, and that of the people they have CHOSEN to make important in their lives. 21:54:54 You can't just tell them to behave a certain way and have them naturally do it. 21:55:25 That seems so obvious to me - I don't know why so many people don't get it. 21:55:37 well although i dislike them but i still say your words are meaningless, because any society needs some force to obey the orders 21:55:47 * PoppaVic chortles 21:55:54 That's true, but it's a question of degree. 21:56:08 "the orders" matter as well 21:56:12 its a level problem 21:56:20 I willingly go to work each day, because I am going to be paid. 21:56:23 and the line changes up and down in history 21:56:28 But also because I will not be paid if I don't go. 21:56:54 If I got taken care of by the government, and the amount of work I did had no effect on my prosperity, I'd stay home and play with Forth. 21:57:22 KipIngram: The whole point of socialism is for the government to obey the inclinations of the people. Anything else is a fake. 21:57:38 Well, I think we've mostly had fakes in history then.
21:57:46 Indeed. 21:58:06 That's human nature too - power corrupts, absolute power corrupts absolutely, etc. etc. 21:58:13 Yeah. So let' 21:58:31 's use the inclinations of the populace to set the overall direction. 21:58:56 Well, you have to be concerned about the tyranny of the majority. 21:59:05 True. Nothing is perfect. 21:59:11 The inclinations of the populace in the American South 200 years ago was to own slaves. 21:59:33 not accurate 21:59:42 Oh? 21:59:58 How do you explain, then, the fact that they, um... owned them? 22:00:12 The rich had the power, and the slaves. 22:00:23 relatively few "owners", more with some envy. Part was prestige, part was looking down on the dirt-farmers, etc 22:00:58 Well, take any one of those and give them a little success, and they would have joined the club. 22:01:07 KipIngram: accurate to say many always have to have others to look down on - and TRY to own, (and again this is ego and prestige) 22:01:09 Not being able to afford something doesn't mean you're not inclined toward it. 22:01:29 I do think that finding someone to look down on seems to be a core human feature. 22:01:34 KipIngram: Indeed. Which is where society and relationships come in. 22:01:36 I think it arises from insecurity. 22:01:41 not saying a lotta' folks would have wanted a slave, or a model-T, or an extra mule, ox, etc 22:01:44 KipIngram: +1 22:01:45 Lack of satisfaction with your own self. 22:01:59 would [not] have.. 22:02:06 Fear that YOU will be at the bottom. 22:02:16 So you have to find someone else that's "more clearly and obviously" at the bottom. 22:02:23 i might went to the whole world to see if i could help them planting food and raising fish :D 22:02:35 besides, it's easier and cheaper to have wage-slaves and plantation-cities 22:02:37 if i dont need work and money :D 22:03:19 mechaniputer: " others were not true socialists " :D 22:03:26 sounds like muslim groups 22:04:36 yunfan: Fair enough point but read some Marx. It all becomes clear.
All wealth comes from the working class, and so all power comes from the workers. It is theirs, but is stolen by others. 22:04:43 Yes, it turns out it was never true socialism after it goes south. 22:05:08 This experiment has been tried. Several times 22:05:20 And each time LARGE NUMBERS OF PEOPLE have been slaughtered. 22:05:23 It always winds up there. 22:05:40 Go listen to Jordan Peterson - he has a bead on this stuff. 22:05:46 mechaniputer: since i live in china, so i knew some basic marx's theory, but to be honest, i dislike them, prefer freedom, but as i work as problem, i were thinking his vision might someday become true 22:06:44 Anywhere large amounts of power materialize, you have to keep watch. It will get abused. 22:06:53 One place large power is materializing today is in the big corporations. 22:06:59 That's capitalism - not socialism. 22:07:08 yunfan: Marxism is all about freedom. Self determination. Ability to decide what to do with one's own ability to produce wealth, rather than be subservient to the influence of others who just want to profit from your ability to produce wealth. 22:07:11 But it's POWER, and they will abuse it if we don't hold the line on them. 22:07:15 That is a risk. 22:07:16 i think the problem might be in industry 22:07:39 many big important industry were controlled by center based model 22:07:51 which made this model nature to us 22:08:02 man, too funny 22:09:00 the ancient egyptian needs center based power to managing large irrigation projects 22:09:51 which help build their government , and the manager, the Pharaoh came 22:10:05 Marxism has been corrupted by the interests of news corporations and pretty much all governments. But it is meant to encapsulate a concept which is beyond their scope. Everyone acts to their own benefit and gain. Marxism depends on this except that it accepts that not everyone will become rich.
22:30:34 --- quit: DKordic (Ping timeout: 244 seconds) 22:42:54 --- quit: zy]x[yz (Ping timeout: 245 seconds) 23:05:16 --- join: zy]x[yz (~corey@unaffiliated/cmtptr) joined #forth 23:13:38 Good afternoon Forthwrights :) 23:14:58 rdrop-exit: Good morning! 23:15:23 Good morning to you mechaniputer :) 23:20:14 all this cheeriness first thing in the morning - bah 23:21:10 Top of the mornin' to you the_cuckoo :) 23:21:11 --- quit: zy]x[yz (Ping timeout: 250 seconds) 23:23:29 bah 23:23:31 :p 23:23:43 ugh.. sm/rem died a horrible death in the rewrite. I am tempted to just chuck the bastard 23:27:50 do you need it? 23:28:11 prolly not, it seemed to work originally, but I can better use the opcode. 23:29:27 I only do floored and unsigned 23:30:46 I implemented Euclidean once, but never had any practical use for it 23:32:17 well, I see a few imps define it in readable forth, so I can always write a colonword 23:33:30 This is for your bytecoded VM? 23:34:10 yeah, I had to rework the hell out of the past few changes - fucked up the works for a few days 23:34:20 ouch 23:34:56 yeah - I was mixing the mindset of controller and linux - snarfled up pointers all over the goddamned place 23:35:31 ..so, in rewriting shit I started writing a pile of "static inline" funcs as well.. Ditching macros 23:35:57 C? 23:36:01 yup 23:37:19 so, I finally got it back to the point where I am testing each token - that's the only one seems borked, and I'm fine with that - I've never used it in my life anyway 23:38:42 As opcodes I only have /mod */mod u/mod u*/mod 23:39:02 The signed ones are floored 23:39:44 For multiplication I have: * u* *hi u*hi 23:39:50 yeah, I just check what they posta return and slap in the usual C. I'll be happy if it performs that from tokens, but I also check back to whatever gforth thinks is "right" 23:59:11 --- join: zy]x[yz (~corey@unaffiliated/cmtptr) joined #forth 23:59:59 --- log: ended forth/19.02.27
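[editor's note] The sm/rem discussion near the end of the log turns on the two division conventions Forth names SM/REM (symmetric) and FM/MOD (floored). C99's `/` and `%` truncate toward zero, which matches the symmetric case directly; the floored case needs a correction when the operands have opposite signs and the remainder is nonzero. A hedged C sketch (the struct and function names are invented for illustration):

```c
/* Symmetric vs floored signed division, Forth-style, in C. */
#include <stdint.h>

typedef struct { int64_t quot, rem; } divmod_t;

/* Symmetric (SM/REM): quotient rounds toward zero, remainder takes
   the sign of the dividend.  C99 `/` and `%` already behave this way. */
divmod_t sm_rem(int64_t n, int64_t d)
{
    divmod_t r = { n / d, n % d };
    return r;
}

/* Floored (FM/MOD): quotient rounds toward negative infinity,
   remainder takes the sign of the divisor.  When the signs differ and
   the remainder is nonzero, shift the truncated result down by one. */
divmod_t fm_mod(int64_t n, int64_t d)
{
    divmod_t r = { n / d, n % d };
    if (r.rem != 0 && ((n < 0) != (d < 0))) {
        r.quot -= 1;
        r.rem  += d;
    }
    return r;
}
```

For example, dividing -7 by 2 gives quotient -3, remainder -1 symmetrically, but quotient -4, remainder 1 floored; the two conventions only disagree when the operand signs differ.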