00:00:00 --- log: started forth/19.06.01
02:00:58 --- join: dys (~dys@tmo-106-221.customers.d1-online.com) joined #forth
02:04:07 --- quit: ashirase (Ping timeout: 258 seconds)
02:08:56 --- join: ashirase (~ashirase@modemcable098.166-22-96.mc.videotron.ca) joined #forth
02:22:27 --- join: dddddd (~dddddd@unaffiliated/dddddd) joined #forth
03:20:22 --- quit: dne (Remote host closed the connection)
03:22:25 --- join: dne (~dne@jaune.mayonnaise.net) joined #forth
03:34:45 --- join: pierpal (~pierpal@host196-36-dynamic.16-87-r.retail.telecomitalia.it) joined #forth
05:22:49 --- quit: Zarutian (Ping timeout: 246 seconds)
06:55:27 --- quit: dys (Ping timeout: 258 seconds)
07:28:02 --- quit: dave0 (Quit: dave's not here)
07:41:54 --- join: PoppaVic (~PoppaVic@unaffiliated/poppavic) joined #forth
08:27:16 --- quit: pierpal (Ping timeout: 246 seconds)
08:32:32 --- join: dys (~dys@tmo-098-140.customers.d1-online.com) joined #forth
09:33:10 --- join: Zarutian (~zarutian@173-133-17-89.fiber.hringdu.is) joined #forth
09:35:46 --- quit: proteusguy (Remote host closed the connection)
09:37:05 --- join: proteusguy (~proteusgu@cm-58-10-155-156.revip7.asianet.co.th) joined #forth
09:37:05 --- mode: ChanServ set +v proteusguy
10:57:48 back
11:23:03 --- part: tabemann left #forth
11:24:52 --- join: tabemann (~tabemann@2600:1700:7990:24e0:b944:a349:56b9:12fb) joined #forth
11:45:49 --- quit: Zarutian (Ping timeout: 245 seconds)
11:50:10 --- join: Zarutian (~zarutian@173-133-17-89.fiber.hringdu.is) joined #forth
11:52:30 hey
11:54:23 * tabemann is happy that he now has a malloc()-free Forth (aside from the fact that malloc() is still used to allocate the Forth memory space overall, even though I could use mmap(), and that I still preserve the original ALLOCATE, RESIZE, and FREE services internally just so I can run older images that rely on them)
12:08:28 --- quit: gravicappa (Ping timeout: 272 seconds)
12:46:37 --- join: xek_ (~xek@apn-37-248-138-83.dynamic.gprs.plus.pl) joined #forth
13:11:46 I don't use malloc at all
13:12:35 really the only thing I use ALLOCATE for is loading source files, so I can evaluate source files as wholes rather than line by line
13:13:06 without having to ALLOT a fixed maximum file size
13:13:40 the difference now is that I'm not using malloc(), as my ALLOCATE is now written entirely in Forth
14:18:03 --- quit: xek_ (Remote host closed the connection)
15:14:05 --- quit: tabemann (Ping timeout: 252 seconds)
15:24:54 somewhat related to allocation, etc.
15:25:21 what's the traditional approach for avoiding chunks of address space you're not allowed to overwrite on an MMU-less system?
15:26:33 e.g. I load at 0x1000, my ALLOT-space starts at 0x2000, but there are various smallish chunks of memory-mapped devices throughout the range 0x4000-0xffff
15:27:15 that's unfortunate
15:29:31 i'm guessing the hardware already exists and you don't have the ability to change it so that memory starts at 0x10000?
15:30:36 oh, small chunks in that range. so it's probably a 16-bit system
15:33:02 well, if all memory accesses (beyond fetching instructions) are through @ and ! then you can implement a pseudo-MMU in them.
15:33:03 yeah; I'm using a 16-bit system as an example, though the final hardware will probably be 32-bit
15:34:28 oof, would rather not do that...
15:35:56 why not? too slow?
15:36:19 mainly
15:36:58 have you profiled such code on actual hardware? are you sure it is too slow?
15:37:56 no, I haven't even started writing it yet; it just sounds slow
15:40:23 --- join: tabemann (~tabemann@rrcs-98-100-171-35.central.biz.rr.com) joined #forth
15:42:17 well, I think most addressing modes of CISCs like legacy x86 (both 32-bit and 64-bit versions) are quite slow due to decoding latency imposed by various bad ISA design choices that were made. Why? Because they sound slow.
15:49:03 sure, but isn't all the hardware designed to mitigate that?
15:50:22 --- join: TCZ (~jankowals@ip-91.246.67.25.skyware.pl) joined #forth
15:50:29 the thing is, with RISC aren't you just being faster per instruction by doing less?
15:51:14 I think a better comparison would be speed per byte of code executed for a normal workload
15:52:46 I suppose
15:53:04 * tabemann doesn't know why he is so happy about having a working allocator, since Forth shouldn't require an allocator
15:53:41 --- part: TCZ left #forth
16:07:48 or maybe define a set of individual primitive operations, then define each instruction in terms of these, and then for a normal workload figure out how many of these primitive operations would be evaluated per unit time were they separate instructions
16:08:24 CISC of course is going to be slower, but the thing is that it is doing more
16:08:57 remexre: well, the mitigations are only statistically effective. That is, in most cases they have measured and predicted that the mitigation works, but if any of their stated and unstated assumptions are violated then you hit the lowest part of the chip's performance baseline.
16:09:53 another way to do it is to translate the CISC instructions into RISC instructions and see whether they speed up or slow down
16:11:43 Zarutian: Are there any addressing modes that're known to be particularly slow vs. corresponding mov/add/shr instructions?
16:11:43 --- quit: john_cephalopoda (Ping timeout: 257 seconds)
16:12:37 tabemann: they speed up, because that is what is done nowadays inside legacy x86 cores.
16:14:20 remexre: the one that fetches the base address from a literal address given, then multiplies it by the contents of register A, then adds the contents of register B to it, before finally fetching the value sought
16:15:13 Zarutian: e.g. mov eax, [ebx+ecx*8+4]?
16:15:53 and you're saying that's slower than mov edx, ecx; shr edx, 3; add edx, ebx; add edx, 4
16:16:37 about RISC versus CISC, I picture that RISC would be simpler to pipeline
16:17:05 which would be an advantage unto itself (and a reason to translate CISC code into RISC in silico)
16:19:20 remexre: well, it is slower than the equivalent of that on non-CISC ISAs.
16:21:00 --- join: dbucklin (~dbucklin@ec2-18-221-180-137.us-east-2.compute.amazonaws.com) joined #forth
16:21:27 probably because separate instructions either have simpler pipelining logic or, in the case of modern x86(-64), do not require a complex layer of decoding logic to convert CISC instructions to RISC ones
16:24:54 --- join: john_cephalopoda (~john@unaffiliated/john-cephalopoda/x-6407167) joined #forth
16:26:13 Zarutian> well, if all memory accesses (beyond fetching instructions) are through @ and ! then you can implement a pseudo-MMU in them
16:26:54 i was thinking that too, but then the vm and the compiler would both also have to know how to skip blacklisted gaps
16:27:26 the problem is how do you enforce MMU-type rules upon fetched words
16:27:31 seems complicated
16:27:33 *opcodes
16:29:31 zy]x[yz: depends on whether you are using assembler words or not. If not, then no worries. Plus I presume the compiler words use @ and ! internally.
16:29:57 zy]x[yz: hmm.. might want to change the HERE incrementer too, to skip those gaps
16:34:11 Is anyone here interested in 3d printing?
16:34:11 I'm about to write a Forth high-level POL that compiles things down to G-code: comma-compiling a material should extrude that material to the build area. ALLOT should move the nozzle. Something like that. The build area as a dictionary. Printing concepts mapped into Forth.
16:34:11 I'm new to 3d printing, so if anyone has any protips I'd love to hear them. I'm going to be printing 3d circuits, so I'm looking to get a multimaterial printer.
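The pseudo-MMU idea discussed above (route all data accesses through @ and ! so they can refuse writes into memory-mapped device windows, and make the HERE incrementer skip those gaps) can be sketched as follows. This is only an illustration in C, not anyone's actual Forth kernel; the range table, function names, and the specific 0x4000-0xffff device windows are taken from or modeled on the example in the conversation, and the overlap logic assumes the reserved ranges are sorted in ascending order.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical memory-mapped device windows inside 0x4000-0xffff,
   sorted ascending, as in the 16-bit example discussed above. */
typedef struct { uint32_t lo, hi; } range_t;
static const range_t reserved[] = { { 0x4000, 0x40ff }, { 0x8000, 0x80ff } };
#define NRESERVED (sizeof reserved / sizeof reserved[0])

static uint8_t mem[0x10000];

static int is_reserved(uint32_t addr) {
    for (size_t i = 0; i < NRESERVED; i++)
        if (addr >= reserved[i].lo && addr <= reserved[i].hi) return 1;
    return 0;
}

/* A pseudo-MMU version of C! : refuse plain stores into device windows.
   A real kernel would trap, or dispatch to a device driver, instead. */
int guarded_store(uint32_t addr, uint8_t value) {
    if (is_reserved(addr)) return -1;
    mem[addr] = value;
    return 0;
}

/* A HERE/ALLOT that skips the blacklisted gaps, per the "change the
   HERE incrementer" suggestion. ALLOT-space starts at 0x2000 as in the
   example. Returns the start address of the allocation. */
static uint32_t here = 0x2000;

uint32_t allot(uint32_t n) {
    for (size_t i = 0; i < NRESERVED; i++)
        if (here <= reserved[i].hi && here + n > reserved[i].lo)
            here = reserved[i].hi + 1;   /* would overlap: jump past the window */
    uint32_t start = here;
    here += n;
    return start;
}
```

Note the cost the channel is debating: every @ and ! now pays a range check, which is why the suggestion was to profile on real hardware before assuming it is too slow.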
17:55:41 --- quit: tabemann (Ping timeout: 244 seconds)
18:00:01 --- join: dave0 (~dave0@069.d.003.ncl.iprimus.net.au) joined #forth
18:00:34 hi
18:19:58 --- quit: dddddd (Remote host closed the connection)
18:25:11 --- join: tabemann (~tabemann@2600:1700:7990:24e0:b944:a349:56b9:12fb) joined #forth
18:50:45 --- quit: MrMobius (Ping timeout: 258 seconds)
18:57:54 --- quit: dys (Ping timeout: 245 seconds)
20:27:25 --- quit: dave0 (Quit: dave's not here)
21:13:32 --- join: MrMobius (~default@c-73-134-82-217.hsd1.va.comcast.net) joined #forth
22:07:26 --- join: gravicappa (~gravicapp@h109-187-8-54.dyn.bashtel.ru) joined #forth
22:12:23 --- join: dave0 (~dave0@069.d.003.ncl.iprimus.net.au) joined #forth
22:13:48 re
22:25:55 hey
22:27:20 hi tabemann
22:28:13 * tabemann is busy implementing intmap
22:28:36 i.e. a map whose keys are integers
22:29:09 hash table or tree?
22:29:17 hash table
22:29:28 ah
22:29:34 it's trivial to make a fast hash table-based intmap
22:29:48 because your hash function literally just becomes either mod, or an offset then mod
22:30:15 if the map doesn't change, you can binary search on a sorted array
22:30:22 then you just point into the buffer, and see if there's a collision, where if there is you wrap around to the next slot
22:30:25 and forth can sort it at compile time :-)
22:30:28 --- part: PoppaVic left #forth
22:30:53 until you find an open slot
22:31:17 there's a name for that but i don't know it
22:33:39 the going to the next slot if you have a collision?
22:33:49 yep
22:34:09 I have implemented two similar (mostly sharing the same code) versions of the intmap - one that is ALLOT-allocated and one that is ALLOCATE-allocated
22:34:16 --- join: dys (~dys@tmo-102-38.customers.d1-online.com) joined #forth
22:34:33 i think it was in school.. there are different ways of handling collisions
22:34:35 the ALLOT-allocated one returns a boolean saying whether there was enough room to set a key
22:34:55 the ALLOCATE-allocated one expands the buffer as needed
22:35:11 cool
22:35:46 I know people around here don't like ALLOCATE, but as it's available in my Forth impl, why not provide the option of using it
22:55:56 --- join: pierpal (~pierpal@host196-36-dynamic.16-87-r.retail.telecomitalia.it) joined #forth
23:59:59 --- log: ended forth/19.06.01
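The collision strategy described in the intmap conversation, scanning forward from the hashed slot and wrapping around until an open slot is found, is called linear probing (a form of open addressing). A minimal sketch of the fixed-capacity, ALLOT-style variant follows; it is written in C for illustration (the actual code under discussion is Forth), and the names, the 16-slot capacity, and the 32-bit key width are all made up for the example. As described above, the hash is literally just mod, and intmap_set returns a boolean saying whether there was room. Deletion is omitted, since removing entries under linear probing needs tombstones.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

#define SLOTS 16   /* fixed capacity, like the ALLOT-allocated variant */

typedef struct { int32_t key, value; bool used; } slot_t;
static slot_t map[SLOTS];

/* The hash function is literally just mod, as noted in the conversation. */
static size_t hash(int32_t key) { return (uint32_t)key % SLOTS; }

/* Returns false when no open slot can be found, mirroring the boolean
   the ALLOT-allocated version returns when there is not enough room. */
bool intmap_set(int32_t key, int32_t value) {
    size_t i = hash(key);
    for (size_t probes = 0; probes < SLOTS; probes++) {
        if (!map[i].used || map[i].key == key) {   /* open slot, or update in place */
            map[i].key = key;
            map[i].value = value;
            map[i].used = true;
            return true;
        }
        i = (i + 1) % SLOTS;   /* linear probe: wrap around to the next slot */
    }
    return false;              /* table full */
}

bool intmap_get(int32_t key, int32_t *value) {
    size_t i = hash(key);
    for (size_t probes = 0; probes < SLOTS; probes++) {
        if (!map[i].used) return false;            /* hit an open slot: not present */
        if (map[i].key == key) { *value = map[i].value; return true; }
        i = (i + 1) % SLOTS;
    }
    return false;
}
```

The ALLOCATE-allocated variant mentioned above would differ only in growing and rehashing the slot array when it fills, instead of returning false.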