00:00:00 --- log: started forth/18.10.25 01:44:09 --- quit: Zarutian (Ping timeout: 240 seconds) 01:58:15 --- join: xek (~xek@apn-31-0-23-83.dynamic.gprs.plus.pl) joined #forth 02:06:34 --- join: ncv (~neceve@2a02:c7d:c5c9:a900:6eaf:6ef7:3b81:d5f6) joined #forth 02:06:34 --- quit: ncv (Changing host) 02:06:34 --- join: ncv (~neceve@unaffiliated/neceve) joined #forth 03:08:49 --- quit: pierpal (Ping timeout: 240 seconds) 03:57:16 Wouldn't you still have the issue of tasks accessing that task, then? Seems like that just adds something to "the terminal," but that whole is still a shared resource. 03:57:51 I'll have some sort of mutex so that only one process can output to the screen or read from the keyboard at a time. 03:58:26 But I can still decide whether processes actually access the physical console or whether they just output to an output buffer that belongs to them, and a "console handler" decides which of those buffers to put on the screen at any given time. 03:58:59 I'm thinking I'll make "having an output buffer" optional - a process without one won't re-paint the screen when it takes over; it will just start doing I/O. 03:59:11 But one with a console buffer will re-draw before starting to do I/O. 04:19:44 My situation is simpler, cooperative tasks, only two of them, foreground terminal, background tether. 04:46:01 I'm also planning cooperative task switches (on a given core), but also will be supporting multi-core. The console connection is something I'll explicitly control - just whichever one I want to be working with. 04:46:08 Sort of like screen windows in Linux. 04:46:18 Probably similar keystrokes to switch around. 04:56:12 --- quit: wa5qjh (Quit: http://quassel-irc.org - Chat comfortably. Anywhere.) 04:59:54 --- join: Zarutian (~zarutian@173-133-17-89.fiber.hringdu.is) joined #forth 05:08:00 I see, my OS hosted Forth is only for tethered development, a single core on a modern processor is already overkill. 05:09:59 Got it. 
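The optional-output-buffer scheme described above (per-process buffers plus a console handler that decides what is on screen) might look something like this toy Python model; all names are hypothetical:

```python
# Toy model (hypothetical names) of the scheme above: a process may own an
# output buffer; the console handler repaints from the active process's
# buffer on switch, while a buffer-less process just starts doing I/O.

class Process:
    def __init__(self, name, buffered=True):
        self.name = name
        self.buf = [] if buffered else None

    def emit(self, line, screen, active):
        if self.buf is not None:
            self.buf.append(line)        # keep our own copy of the output
        if active is self:
            screen.append(line)          # only the active process hits the screen

def switch_to(proc, screen):
    if proc.buf is not None:
        screen[:] = proc.buf             # re-draw from the buffer before I/O
    return proc                          # no buffer: screen left as-is

def demo():
    screen = []
    fg, bg = Process("fg"), Process("bg", buffered=False)
    active = switch_to(fg, screen)
    fg.emit("hello", screen, active)
    bg.emit("quiet", screen, active)     # inactive and unbuffered: discarded
    switch_to(bg, screen)                # unbuffered: no repaint happens
    return screen
```

The mutex mentioned in the log is omitted here; the model only illustrates the repaint-on-switch decision.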
A "general goal" of mine is to be able to take advantage of all available processing resources; I hope to support any reasonable approach to that, but flow-based programming is kind of driving my decisions. 05:10:28 Probably with good ability to use msgpack for process-to-process communication. 05:11:18 Though I did have an interesting idea a couple of weeks ago, for supporting "fine grain parallelism opportunities," where a child task would inherit the parent's frame pointer, and would take parameters and deliver results directly into the parent's stack. 05:12:08 Parent would prepare the stack, including "vacancies" for results, establish the frame, and then just make sure it didn't drop the stack far enough to remove that while the child(ren) run(s). 05:12:27 Meanwhile the parent can do anything it likes with its stack except for that one thing. 05:12:58 And can even define further frames - the children will have their own frame pointers - just the value of the parent frame pointer at spawn time will be passed. 05:13:24 Probably will have a pool of "ready to go" children on tap, so all I really need to do is set the frame pointer and the IP. 05:13:45 This will be for code sections that have this form: 05:14:01 ... A1 A2 ... An ... B1 B2 ... Bn ... 05:14:13 where I recognize that the A and B sequences can be run in parallel. 05:14:51 Though I'll probably choose to implement it so that the A and B sequences have to be factored into separate words, so that the child tasks can have an explicit word to execute. 05:15:47 I found a problem in my code last night. 05:16:43 It revolved around how the stack pointer gets interpreted. Physically, my SP always points to the second item on the stack - the TOS is in a register. But *logically* what should we consider the value of SP to be? 05:17:00 Should it be that physical value, or should it be the address that TOS would be at if it wasn't in a register? 05:17:08 I decided that the latter of those options was right. 
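The parent-frame idea above can be sketched in a few lines (a Python simulation with threads, not the author's Forth; all names are hypothetical): the parent reserves "vacancies" on its own frame, and children deliver results directly into those slots while the parent keeps the frame alive until it joins them.

```python
# Sketch of "fine grain parallelism" via the parent's frame: children write
# results straight into slots the parent reserved, so no separate result
# channel is needed. Hypothetical names; threads stand in for child tasks.
import threading

def child(frame, slot, word, arg):
    frame[slot] = word(arg)        # deliver the result into the parent's frame

def parent():
    frame = [6, 7, None, None]     # two arguments, two vacancies for results
    a = threading.Thread(target=child, args=(frame, 2, lambda x: x * x, frame[0]))
    b = threading.Thread(target=child, args=(frame, 3, lambda x: x * x, frame[1]))
    a.start(); b.start()
    # the parent may use deeper stack freely, but must not drop the vacancies
    a.join(); b.join()
    return frame[2], frame[3]
```

The join at the end plays the role of "make sure it didn't drop the stack far enough to remove that while the children run."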
05:17:18 My current philosophy is that if I really need fine-grained parallelism for an HPC type product, I'd implement on an ASIC or an FPGA. 05:17:22 But when I wrote SP@ and SP!, they took different approaches. 05:17:37 So I wound up with a situation where the code sequence SP@ SP! dropped the stack. 05:17:54 SP@ was the one that was wrong. 05:18:06 And in places where I'd used SP@, I'd "accounted for that." 05:18:23 So I had to fix SP@, then I had to tinker with those other places so they no longer presumed the incorrect operation. 05:18:45 Oh, well, yes - hardware is great for that kind of thing. 05:18:50 I won't always have it available, though. 05:19:02 On the embedded stuff I've thought about building I do intend to include an FPGA. 05:19:26 I'm gunning for a "flexible core" circuit that I can use for a lot of different projects - that core will include an FPGA that can be used where needed. 05:20:02 But I want the other as well, and that idea of having the children work directly in the parent stack gave me a much lower-overhead way of approaching that sort of thing. 05:20:56 My previous SP@ definition was "move w, sp; push tos; move tos, w; next" 05:21:11 The fix is simpler - just "push tos; move tos, sp; next" 05:21:54 I found a reasonable looking FPGAA on Digikey for about $6. 05:22:05 FPGA, rather 05:23:08 I'm interested both in moderately high-end things like lab instrumentation and so on and fairly inexpensive stuff like data acquisition, motor controllers, etc. 05:23:27 So I need a "few dollars" path to everything. 05:23:47 FPGAs are interesting to me for HPC type applications, for small embedded systems MCUs make more sense economically (and power wise too). 05:24:15 Yes; for some projects I'd just leave the FPGA off of the board. 05:24:28 But I want something I can pick up and use without having to design from scratch. 
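The SP@ bug above is subtle enough to deserve a model. In this toy Python simulation (hypothetical code, not the actual kernel), TOS is cached in a "register", the physical SP points at the second item, and the *logical* SP is defined as the address TOS would occupy in memory. The fixed SP@ ("push tos; move tos, sp") captures SP *after* the push, so it yields the logical SP and SP@ SP! is a no-op; the old version ("move w, sp; push tos; move tos, w") captured it before the push, one cell too high, so SP@ SP! dropped an item.

```python
MEM = 16                             # cells; the stack grows downward

class Stack:
    def __init__(self, items):
        self.mem = [0] * MEM
        self.sp = MEM                # physical SP: address of the second item
        self.tos = 0                 # TOS lives in a "register"
        for v in items:
            self.push(v)

    def push(self, v):
        self.sp -= 1
        self.mem[self.sp] = self.tos
        self.tos = v

    def sp_fetch_fixed(self):
        # "push tos; move tos, sp": after the push, SP is exactly the
        # address the old TOS now occupies, i.e. the logical SP.
        self.sp -= 1
        self.mem[self.sp] = self.tos
        self.tos = self.sp

    def sp_fetch_buggy(self):
        # "move w, sp; push tos; move tos, w": captures SP *before* the
        # push, one cell too high, so SP@ SP! drops one stack item.
        w = self.sp
        self.sp -= 1
        self.mem[self.sp] = self.tos
        self.tos = w

    def sp_store(self):
        # SP!: TOS holds the new logical SP; reload TOS from that address.
        a = self.tos
        self.tos = self.mem[a]
        self.sp = a + 1
```

With three items pushed, the fixed pair restores (sp, tos) exactly, while the buggy pair leaves the stack one cell shorter with the old second item on top.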
05:28:44 My host Forth is very rudimentary, it's a reimplementation of an old DOS based tethered Forth on POSIX, with some major redesign. 05:32:00 One of these days I want to take another crack at the sourceless Forth approach, but only for targets, I'll keep the host Forth as it is. 05:33:55 The SVFIG Annual Forth Day is in a couple of days, I hope they record Chuck's fireside chat. 05:42:41 --- join: pierpal (~pierpal@95.234.60.245) joined #forth 06:28:40 Annual Forth Day? What will take place then? 06:51:31 Some presentations and Chuck's traditional fireside chat, I assume. 06:58:12 --- join: dddddd (~dddddd@unaffiliated/dddddd) joined #forth 07:08:10 --- quit: pierpal (Ping timeout: 252 seconds) 07:22:31 --- quit: rdrop-exit (Quit: Lost terminal) 07:36:35 --- quit: tabemann (Ping timeout: 250 seconds) 08:13:49 --- join: pierpal (~pierpal@95.234.60.245) joined #forth 08:24:51 --- quit: pierpal (Quit: Poof) 08:25:13 --- join: pierpal (~pierpal@95.234.60.245) joined #forth 09:17:10 I wonder what Chuck's been working on. 09:17:23 GreenArrays can't be getting much business really 09:20:04 isn't he like 80 years old? hopefully he's mostly working on keeping himself entertained 09:20:17 That's what Forth is for. 09:20:46 I remember reading him comment something along the lines of "Instead of doing a crossword, I solve some problem using Forth." 09:21:17 hadn't he already long lost interest in forth decades ago? I thought that was why he shifted to mostly hardware design 09:21:45 His hardware is designed to run newer and more esoteric Forths 09:21:56 lol 09:22:04 ColourForth > ArrayForth > EtherForth 09:22:36 he sure is hung up on that one number, huh? 
09:22:59 and iirc he wrote a lot of machine code on the GA144 anyhow 09:23:55 I don't know the meaning of your metaphor 09:27:29 --- quit: pierpal (Ping timeout: 240 seconds) 09:31:26 --- join: pierpal (~pierpal@95.234.60.245) joined #forth 09:49:50 the number four 09:50:11 you'd think after 50 years or whatever he'd move on to something he liked enough to call fifth 09:59:17 --- quit: xek (Remote host closed the connection) 09:59:41 --- join: xek (~xek@apn-31-0-23-83.dynamic.gprs.plus.pl) joined #forth 10:18:29 --- quit: pierpal (Read error: Connection reset by peer) 10:23:35 --- quit: jedb (Read error: Connection reset by peer) 10:31:23 --- join: jedb (~jedb@199.66.90.113) joined #forth 10:33:13 :-) 10:44:42 zy]x[yz: you didnt know? He likes Bethovens Fifth, so there is that. 10:49:47 --- quit: dave0 (Quit: dave's not here) 11:16:41 well, all the forths are just refinements or variations on the same paradign 11:16:45 *paradigm 11:16:51 maybe sourceless is fifth. 11:45:24 --- join: Mat4 (~eh@ip5b409c40.dynamic.kabel-deutschland.de) joined #forth 11:45:44 hi 11:46:10 --- quit: Mat4 (Client Quit) 11:53:10 --- join: pierpal (~pierpal@95.234.60.245) joined #forth 12:03:31 --- quit: john_metcalf (Ping timeout: 252 seconds) 12:34:49 --- quit: pierpal (Ping timeout: 240 seconds) 13:43:00 --- quit: xek (Remote host closed the connection) 13:49:30 Given that "sourceless" can't be taken all the way (you still have to represent the parts that don't have a 1-to-1 correspondence with machine code), I'm still struggling to find what makes it "extra-super-valuable." 13:49:51 One could argue that moving as far toward sourceless as possible achieves the highest "source compression," but that's about all I'm coming up with. 13:50:46 If we could take it all the way, 100%, then I think it would achieve a higher tier of value. 
But we can't, and those systems for managing that incompleteness seem to me like they'd be just as complex as, if not more complex than, a straight source->code compiler. 13:53:53 There's also the problem that arises on older, smaller systems. 13:54:09 Systems that 1) don't have an MMU and 2) for which you want to write more code than will fit in RAM at once. 13:54:38 Suddenly you're faced with the need to store a loadable (relocatable) disk-resident format as well. 13:55:24 Chuck used to claim as a benefit the absence of the need for such an "object" format, because you just compiled source directly to RAM when you wanted to load it - the source was the only disk-resident format. 13:56:19 This is all relieved if you have an MMU - now you can have a huge virtual space, represent it explicitly on disk, and then just *map* pages of that when you load them. 13:56:29 The MMU does your "relocatable loading" stuff for you. 13:59:32 It's also relieved if you only have a single application, 100% of which will be in RAM whenever you're using it. 14:00:03 I think that's the sort of application Chuck has gravitated to over the years - it's the logical conclusion of his "solve only the problem in front of you" philosophy. 14:00:27 Even thinking about storing multiple problem solutions on a hard drive is a deviation from that philosophy. 14:01:37 werent the first machines that he wrote forth on basically without hard drives? 14:02:01 code+data loaded from punched cards or tape 14:02:19 Ok, let me correct something I said. 14:02:28 It's not wanting to have more than one program that's the problem. 14:02:38 As long as you only LOAD one at a time, you can work with literal RAM addresses. 14:02:57 It's wanting to have a flexible, changing mixture of programs co-resident in memory that brings in the relocation requirement. 14:03:26 If you're only going to run one at a time, you can work exclusively with actual RAM addresses and all is well. 
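The relocation burden discussed above can be made concrete with a minimal sketch (hypothetical format, Python for illustration): the disk-resident image carries a fixup table naming every cell that holds an address, and the loader adds the load base to exactly those cells.

```python
# Minimal relocatable-image loader: `image` is a list of cells, `fixups`
# lists the indices of cells holding image-relative addresses, and the
# loader rewrites those cells for whatever base the image lands at.
# Hypothetical format, for illustration only.

def load(image, fixups, base):
    """Load `image` at `base`, adding `base` to each address-holding cell."""
    mem = list(image)
    for i in fixups:
        mem[i] += base             # image-relative offset -> real address
    return mem
```

With an MMU you map the image at a fixed virtual address instead and the fixup table disappears, which is exactly the relief described above.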
14:03:32 sure, and I am follower of using units of 128KibiBytes RAM each dedicated to a 'core'. 14:04:08 meaning that each 'core+ram' unit only has one program loaded at time 14:04:11 Ok, I'll back up another step. You can have multiple programs if you put them in the same place every time. 14:04:29 like overlays? 14:05:08 Well, what you said - I buy that. If you always run a program with it residing in the same 128k block of RAM, you can use and store actual RAM addresses. 14:05:16 but yeah the MMU (or DAT like IBM called it) is mainly there to virtualize that you only have one 'computer' 14:05:42 Whenever you've got this problem solved, now your "source" can contain the non-1-to-1 stuff, and mixed in with it would be some sort of a record that said "N bytes of code here." 14:05:51 if you think about it a POSIX 'process' is basically a small 'vm' up to a point. 14:06:08 So rendering that in the editor you'd render the non-1-to-1 stuff from its source, and the "N-bytes" pieces you'd decompile from RAM. 14:06:41 hmm.. that assumes that you only have one ISA architecture in that computer system 14:06:47 Yes, I've quite noticed that my Forth looks, on memory inspection, like it's loaded at the same location every time - the stuff I actually wrote starts at 4k. 14:07:04 So it totally baffles me why MacOS puts these "relocatability" restrictions on me. 14:07:16 It never needs to relocate it - it's always sitting in the same virtual address space. 14:07:33 because of an ineffective crap called ASLR 14:08:07 Yeah, I've seen that term pop up a lot. What do they think it's buying them? 14:08:09 plus the issue of dynamically loaded libraries and VSO for syscalls and such 14:08:23 Oh, they think it helps security. 14:08:37 it does not help security AT ALL 14:08:46 But that seems to be the claim. 
14:10:17 yeah, means that I had to put in an ELF flag on a binary that means NO_ASLR because even though the sections say where to load the .text ones the OS did not load them 14:11:10 and it makes it much damn harder to debug with core dump diffs 14:11:33 Yeah, I can imagine. 14:12:24 and it makes it much damn harder to debug with core dump diffs 14:12:25 and it makes it much damn harder to debug with core dump diffs 14:12:36 sorry about that 14:13:01 :-) 14:13:03 was trying to switch to an open terminal window so I could kill chromium 14:13:11 No worries - I thought maybe you were stressing your point. :-) 14:13:43 killall chromium-browser is what I have in that terminal windows history 14:14:04 some sites I visit seem to have memory leaks or who knows. 14:15:37 so yeah ASLR has no benefit and only gets in the way. It is also an indication that whoever promulgated it has the 'safe'/'fort' imagery of security in their head. 14:16:29 Well, anyway, I quite recognize the benefit of moving the language as far as possible toward having a 1-to-1 correspondence between "the language I type" and "the code in RAM." I.e., moving the map closer to the territory, as one of you guys put it. 14:17:04 But given that getting 100% of the way to that goal isn't feasible, I think exactly how the machine code is internally represented in the source isn't terribly relevant. 14:17:19 That's hidden from me as a *user* of the system anyway. 14:17:44 KipIngram: depends on the ISA. I quite like minimal ISAs for their completeness and case-enumerability. 14:18:44 I think the two extremes, practically speaking, are as follows (these are pictures of the data structure lying beneath the editor rendering): 14:19:16 ... etc. 14:19:30 That's the "sourceless" end - we go to the code to do the N-byte decompiles. 14:19:42 And the other extreme is just "source for everything," like in the old days. 
14:19:48 KipIngram: that might result in the code being mostly calls (if DTC is being used) 14:19:54 Even if that source has a token-based representation, it's still logically the same. 14:20:22 Can you explain that? I don't see how that structure implies anything about the factoring level. 14:21:12 let say I have an ISA that has 32 instructions and is dual stack machine based 14:21:26 Ok. 14:21:33 lets say J1 for concrete example. 14:21:37 Ok. 14:22:36 many of the words are basically colon definitions but do not have equiv of DOCOL at the front just bunch of direct calls 14:22:57 So like subroutine threading? 14:23:34 yeah 14:24:19 Ok. That's where I wound up when I was tinkering with my own FPGA Forth. 14:24:48 One bit in the cell told me whether it was a call or packed opcodes. 14:24:50 unless you want to have loads and loads of 'call ' in the decompiled output to the editor you will need to lose a bit of the 1-to-1-ness 14:25:23 Oh, I figured "call FOO" would just be represented in the rendering as FOO. 14:25:40 (pretty much omitting the 'call' part of the string) 14:25:42 exactly 14:26:20 but how to disambiguate from + and call + 14:26:57 Where would + show up in the rendering such that it didn't mean call +? 14:27:28 if you have + as instruction but want to have the flexibility of changing + if it turns out that you want to switch to double sized ints instead of the native ints 14:27:47 I just used the + as an example 14:29:23 Ok. So "+" is representing a logical operation of sorts, that's not bound to a specific implementation yet. 14:29:27 I presume that one would see ADDI for the instruction and + for the word that could be using ADDI as part of its implementation 14:30:09 Ok, I kind of see the general direction you're heading. 14:30:34 I tend to think of primitives as the "hardware" of the virtual machine - I generally wouldn't try to dive inside them. 14:30:57 And I've never tried to change what, say, + means. 
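The one-bit call/opcode scheme above can be sketched like this (hypothetical encoding, loosely J1-flavored, in Python for illustration): the top bit of a 16-bit cell says whether it is a call or packed opcodes, and the decompiler renders a call simply as the callee's name.

```python
# Decompiler sketch for a cell format where bit 15 marks a call and the
# remaining 15 bits are the target address; otherwise the cell holds three
# packed 5-bit opcodes. All tables and the packing are example data.

NAMES = {0x0100: "FOO", 0x0200: "BAR"}    # address -> word name
OPS = {0: "dup", 1: "+", 2: "swap"}       # opcode -> mnemonic

def decompile(cell):
    if cell & 0x8000:                     # call bit set: render the callee name
        return NAMES[cell & 0x7FFF]       # the 'call' itself is omitted
    # otherwise treat the cell as three packed 5-bit opcodes
    return " ".join(OPS[(cell >> s) & 0x1F] for s in (10, 5, 0))
```

Note that a rendered `+` could come from either branch, which is exactly the "+ versus call +" ambiguity raised above.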
14:31:10 It's a name for *that function*, that has some machine code implementation. 14:31:48 This is interesting to me, though - the talk of calculators the last few days got me to thinking about how I'd like to be able to use my Forth as a very smooth calculator. 14:32:06 By activating a "calculator" vocabulary, where I'd want + - * / to do *floating point*. 14:32:26 I go further than that. I define a VM spec and have the smallest amount of primitives. 14:32:30 I wrote a word yesterday called BECOMES, that supported such redefinition. 14:32:37 Right. 14:33:19 : + BECOMES F+ ; IMMEDIATE gives you a word + that will execute F+ if STATE=0 and compile F+ if STATE=1. 14:33:29 So more efficient than just saying : + F+ ; 14:34:06 it just means that I can port that VM spec to any platform I have and run my forth image on that 14:36:28 (for I/O I 'stole' the 'hardware' interface idea/spec from DCPU-16, means that I am not limiting myself to only some predefined hardware like ngaro vm did) 14:40:06 Yeah - sounds interesting. I've toyed at times with just how far I could push the "textual specification" of a Forth toward 100%. To some extent I was willing to adjust the language to make that more possible. 14:40:16 I felt like I made some progress, but never really "got there." 14:40:21 To my satisfaction, at least. 14:50:09 --- join: leaverite (~quassel@175.158.225.193) joined #forth 14:50:09 --- join: wa5qjh (~quassel@175.158.225.193) joined #forth 14:50:09 --- quit: leaverite (Changing host) 14:50:09 --- join: leaverite (~quassel@freebsd/user/wa5qjh) joined #forth 14:50:09 --- quit: wa5qjh (Changing host) 14:50:09 --- join: wa5qjh (~quassel@freebsd/user/wa5qjh) joined #forth 14:51:59 --- join: [1]MrMobius (~default@c-73-134-82-217.hsd1.va.comcast.net) joined #forth 14:54:55 --- quit: MrMobius (Ping timeout: 252 seconds) 14:54:55 --- nick: [1]MrMobius -> MrMobius 15:08:24 --- quit: wa5qjh (Quit: http://quassel-irc.org - Chat comfortably. Anywhere.) 
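The BECOMES behavior quoted above can be modeled in a few lines (a Python simulation, not the actual Forth): the defined word is IMMEDIATE and state-smart, executing its target when interpreting and compiling the target itself when compiling, so no extra nesting level is laid down.

```python
# Toy model of : + BECOMES F+ ; IMMEDIATE - a state-smart word that runs
# F+ at STATE=0 and compiles F+ directly at STATE=1. Hypothetical names.

ENV = {"STATE": 0}    # 0 = interpreting, 1 = compiling
compiled = []         # the definition currently being compiled

def f_plus(stack):
    stack.append(stack.pop() + stack.pop())

def becomes(target):
    """Build a word that executes or compiles `target` depending on STATE."""
    def word(stack):
        if ENV["STATE"] == 0:
            target(stack)            # interpret: execute F+ now
        else:
            compiled.append(target)  # compile: lay down F+ itself, not a call to +
    return word

plus = becomes(f_plus)
```

Compiling the target directly is why this beats `: + F+ ;`, which would add a run-time call layer.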
15:09:23 --- quit: leaverite (Quit: http://quassel-irc.org - Chat comfortably. Anywhere.) 15:11:14 --- join: wa5qjh (~quassel@175.158.225.193) joined #forth 15:11:14 --- quit: wa5qjh (Changing host) 15:11:14 --- join: wa5qjh (~quassel@freebsd/user/wa5qjh) joined #forth 15:11:21 * Zarutian plugs http://members.chello.at/~easyfilter/bresenham.html for those who want to see how lines, ovals, circles and bezier curves. 15:28:18 That looks nice, Zarutian. 15:28:59 the paper linked from there is pretty nice 15:29:34 * KipIngram grabs it... 15:29:42 * KipIngram collects interesting pdfs... 15:30:06 I'm "math geeky" enough that I quite enjoy such things. 15:46:56 --- quit: WilhelmVonWeiner (Quit: Lost terminal) 15:47:05 --- join: WilhelmVonWeiner (dch@ny1.hashbang.sh) joined #forth 15:56:20 s/curves./curves are made./ 16:11:07 --- join: pierpa (5fea3cf5@gateway/web/freenode/ip.95.234.60.245) joined #forth 16:20:12 --- quit: jedb (Remote host closed the connection) 16:20:26 --- join: jedb (~jedb@199.66.90.113) joined #forth 17:36:13 --- quit: wa5qjh (Remote host closed the connection) 17:38:07 --- join: wa5qjh (~quassel@175.158.225.193) joined #forth 17:38:07 --- quit: wa5qjh (Changing host) 17:38:07 --- join: wa5qjh (~quassel@freebsd/user/wa5qjh) joined #forth 17:42:52 --- join: ttmrichter (~ttmrichte@2a00:f10:401:0:4d3:e0ff:fe00:20d) joined #forth 17:48:07 --- join: rdrop-exit (~markwilli@112.201.162.180) joined #forth 17:51:52 I just checked the SVFIG page, the Annual Forth Day is in November, this saturday's is just a regular meeting. 17:52:42 November 17 18:18:19 --- join: tabemann (~tabemann@rrcs-162-155-170-75.central.biz.rr.com) joined #forth 18:27:39 --- quit: dddddd (Remote host closed the connection) 18:32:46 --- join: dave0 (~dave@90.20.215.218.dyn.iprimus.net.au) joined #forth 18:33:56 hi 18:39:03 --- quit: rdrop-exit (Read error: Connection reset by peer) 18:39:49 --- join: rdrop-exit (~markwilli@112.201.162.180) joined #forth 18:41:24 Hey Dave. 
18:41:51 hi KipIngram 18:41:52 sup? 18:42:25 hey guys 18:42:41 hi tabemann 18:43:28 I'm trying to figure out how to move multitasking out of my Forth kernel 18:43:56 it would be easy, with a few supporting primitives, if I were to do cooperative multitasking, but I insist on preemptive 18:45:17 currently I have a count-down that decrements for every word executed, which switches task when it reaches zero or the task yields or sleeps 18:45:21 i call cooperative multitasking "coroutines" 18:48:40 the other problem is that to bootstrap the forth system I have to have a hard-coded simple interpreter, which is directly tied into the core runtime, whereas I would like to move it out of the core runtime so I don't have the overhead of an interpreter baked right into the runtime (note that there is another, more full interpreter that is used for interactive use, loading code from files, or evaluate - the baked-in interpreter 18:48:40 is solely used for interpreting a single very long string for bootstrapping 19:25:34 I see cooperative multitasking and coroutines as different in Forth. I use both. 19:26:38 I use PAUSE for tasks, YIELD for coroutines. 19:57:49 --- quit: wa5qjh (Read error: Connection reset by peer) 20:02:07 --- join: wa5qjh (~quassel@110.54.189.149) joined #forth 20:02:07 --- quit: wa5qjh (Changing host) 20:02:07 --- join: wa5qjh (~quassel@freebsd/user/wa5qjh) joined #forth 20:03:09 --- quit: tabemann (Ping timeout: 246 seconds) 20:18:02 --- quit: wa5qjh (Remote host closed the connection) 20:19:41 --- join: wa5qjh (~quassel@175.158.225.211) joined #forth 20:19:41 --- quit: wa5qjh (Changing host) 20:19:41 --- join: wa5qjh (~quassel@freebsd/user/wa5qjh) joined #forth 20:20:36 --- quit: pierpa (Quit: Page closed) 20:49:08 yunfan, saw you mention riscv the other day. the tool you meant as an alternative to verilog is called chisel. it actually generates verilog. have you played with riscv. 
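The count-down scheme described above (decrement a quantum for every word executed, switch when it hits zero or the task finishes) can be sketched as a toy round-robin loop; this is a Python illustration, not the kernel under discussion, and all names are hypothetical.

```python
# Each task is an iterator; advancing it once stands in for executing one
# word. The scheduler gives each task a quantum of words, then requeues it.

QUANTUM = 3

def words(name, n):
    """A toy task: executing one 'word' just yields a label."""
    for i in range(n):
        yield f"{name}{i}"

def run(tasks):
    trace, queue = [], list(tasks)
    while queue:
        task = queue.pop(0)
        finished = False
        for _ in range(QUANTUM):         # count down one per word executed
            try:
                trace.append(next(task))
            except StopIteration:
                finished = True
                break
        if not finished:
            queue.append(task)           # quantum hit zero: switch tasks
    return trace
```

A real preemptive kernel would also switch on yield or sleep, which here would just mean requeueing before the quantum runs out.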
20:53:10 --- join: tabemann (~tabemann@2602:30a:c0d3:1890:d004:a860:b5ff:7c1c) joined #forth 21:05:00 --- join: pierpal (~pierpal@95.234.60.245) joined #forth 21:24:52 --- quit: pierpal (Ping timeout: 272 seconds) 21:37:26 --- quit: ncv (Remote host closed the connection) 21:38:52 --- join: ncv (~neceve@2a02:c7d:c5c9:a900:6eaf:6ef7:3b81:d5f6) joined #forth 21:38:52 --- quit: ncv (Changing host) 21:38:52 --- join: ncv (~neceve@unaffiliated/neceve) joined #forth 21:47:01 --- join: pierpal (~pierpal@95.234.60.245) joined #forth 22:11:49 --- quit: pierpal (Ping timeout: 264 seconds) 22:13:45 proteus-guy: nope, i'd like to, i am fascinated with precise design 22:35:26 --- quit: ncv (Remote host closed the connection) 22:42:31 --- join: pierpal (~pierpal@95.234.60.245) joined #forth 22:47:22 --- join: [X-Scale] (~ARM@33.121.108.93.rev.vodafone.pt) joined #forth 22:48:53 --- join: a3f_ (~a3f@irc.a3f.at) joined #forth 22:50:53 --- join: phadthai_ (mmondor@ginseng.pulsar-zone.net) joined #forth 22:56:13 --- quit: KipIngram (*.net *.split) 22:56:13 --- quit: X-Scale (*.net *.split) 22:56:13 --- quit: catern (*.net *.split) 22:56:13 --- quit: APic (*.net *.split) 22:56:13 --- quit: phadthai (*.net *.split) 22:56:13 --- quit: irsol (*.net *.split) 22:56:13 --- quit: a3f (*.net *.split) 22:56:20 --- nick: [X-Scale] -> X-Scale 22:57:11 --- join: irsol (~irsol@unaffiliated/contempt) joined #forth 23:00:20 --- nick: phadthai_ -> phadthai 23:03:26 --- join: APic (apic@apic.name) joined #forth 23:03:28 --- join: KipIngram (~kipingram@185.149.90.58) joined #forth 23:03:53 --- nick: KipIngram -> Guest89117 23:11:46 --- join: catern_ (~catern@catern.com) joined #forth 23:26:25 --- quit: pierpal (Ping timeout: 252 seconds) 23:40:23 --- join: pierpal (~pierpal@95.234.60.245) joined #forth 23:59:59 --- log: ended forth/18.10.25