00:00:00 --- log: started forth/20.12.11
01:50:26 --- quit: hosewiejacke (Ping timeout: 260 seconds)
02:20:07 --- join: hosewiejacke joined #forth
02:27:24 --- quit: hosewiejacke (Remote host closed the connection)
02:32:12 --- join: hosewiejacke joined #forth
02:55:24 --- join: marksmith joined #forth
03:20:33 --- quit: jsoft (Quit: Leaving)
03:42:06 --- quit: dave0 (Ping timeout: 272 seconds)
04:11:29 Here's a design for reference counting that uses compile-time knowledge to make it extremely efficient: https://www.microsoft.com/en-us/research/uploads/prod/2020/11/perceus-tr-v1.pdf
04:12:20 --- quit: iyzsong (Quit: ZNC 1.7.5 - https://znc.in)
04:13:11 --- join: iyzsong joined #forth
04:21:10 --- join: rixard joined #forth
04:21:30 --- quit: rixard_ (Read error: Connection reset by peer)
04:37:05 proteusguy: nice, looks like they also provide the syntax and semantics for their linear resource calculus
04:43:49 --- join: inode joined #forth
04:44:44 --- join: Gromboli joined #forth
04:54:47 yeah I'm kinda curious what the real implementation cost would be. Might be something worth incorporating into ActorForth.
05:00:02 --- quit: hosewiejacke (Ping timeout: 246 seconds)
05:33:24 --- join: hosewiejacke joined #forth
06:57:38 --- quit: phadthai (Ping timeout: 260 seconds)
07:03:35 --- join: phadthai joined #forth
07:16:57 --- quit: DKordic (Ping timeout: 256 seconds)
07:44:25 --- join: DKordic joined #forth
07:50:22 --- quit: hosewiejacke (Ping timeout: 260 seconds)
08:19:33 --- join: hosewiejacke joined #forth
08:47:32 --- quit: hosewiejacke (Ping timeout: 256 seconds)
08:49:47 --- quit: marksmith (Ping timeout: 264 seconds)
08:54:04 --- quit: gravicappa (Read error: Connection reset by peer)
08:54:30 --- join: gravicappa joined #forth
10:10:02 --- join: WickedShell joined #forth
11:00:58 --- join: X-Scale` joined #forth
11:02:13 --- join: hosewiejacke joined #forth
11:02:59 --- quit: X-Scale (Ping timeout: 256 seconds)
11:02:59 --- nick: X-Scale` -> X-Scale
11:10:27 --- quit: hosewiejacke (Ping timeout: 240 seconds)
11:17:56 --- quit: xek (Quit: Leaving)
12:01:28 --- quit: rixard (Read error: Connection reset by peer)
12:02:19 --- join: rixard joined #forth
12:09:44 --- join: the_cuckoo joined #forth
12:14:02 --- quit: gravicappa (Ping timeout: 256 seconds)
12:45:46 --- join: marksmith joined #forth
12:57:49 --- quit: marksmith (Quit: Leaving.)
13:01:15 --- join: Zarutian_HTC joined #forth
13:32:12 what would you call the word that takes a number and sets all bits below the most-significant 1?
13:32:33 e.g., 48 becomes 63, 99 becomes 127
13:32:43 i'm thinking "saturate"
13:42:01 1bit-right-streak ?
13:44:32 i was hoping for something shorter than what i had come up with
13:46:09 I am usually terrible at naming Forth words, so the name string sometimes ends up being the biggest part of the definition in memory
14:06:58 shorter than "saturate"? hmm, how about "saturat"?
14:07:09 satur8
14:21:31 lol i like it
14:33:46 --- quit: Gromboli (Quit: Leaving)
14:34:38 --- quit: Zarutian_HTC (Ping timeout: 240 seconds)
14:34:50 --- join: Zarutian_HTC joined #forth
15:32:03 cmtptr: What is this for?
15:35:13 I think to try and give a good name I'd need to know a little about the context for needing this word
15:43:22 veltas, rounding a number up to the nearest power of 2
15:49:40 --- quit: WickedShell (Remote host closed the connection)
15:50:08 How is this word implemented?
16:00:42 Might call it set>msb or something like that (set up to most sig bit)
16:01:22 I don't think I'd let that be a word though, unless it fell out during refactoring
16:03:13 it's part of a larger algo
16:03:33 so yeah it felt like a natural part to factor
16:03:59 it's really rounding up to power of 2 -1
16:07:05 --- join: astrid joined #forth
16:07:07 --- join: TangentDelta joined #forth
16:07:09 --- join: Gromboli joined #forth
16:10:34 Hello
16:11:24 Hello
16:12:07 Hello
16:12:22 Working on anything fun and exciting?
16:20:14 I've been digging into this HP IDACOM PT-500 protocol tester. It runs a modified flavor of FIG-FORTH
16:23:12 It has 6 Motorola 68000's in it that all run FORTH concurrently. You can move the screen+keyboard to any of them, and send messages between them.
16:37:42 --- quit: Zarutian_HTC (Remote host closed the connection)
17:13:59 back
17:26:03 TangentDelta, that sounds cool
17:26:21 are you using it as a protocol tester or just having fun?
17:30:00 Just having fun with it at the moment.
17:30:44 I was using it as a protocol tester in my telecom lab, using it to capture stuff on a T1.
17:34:34 It's sad how little attention FORTH gets these days.
17:34:53 I work for a friend that does all kinds of cool things with old computer hardware/software.
17:35:35 He designed, and sells, a single-board computer around the Rockwell R6501Q. It didn't get much attention at first (mostly due to a lack of software).
17:37:32 We found ROMs for Rockwell's RSC-FORTH for the R6501Q and got it going on the single board.
17:37:49 nice!
17:38:04 did you guys post about that recently on one of the forums? I saw something like that
17:38:21 and I got TinyBASIC working too, for those heathens :P
17:38:40 Yeah. He made a post about it on the 6502 forum and then the VCFed forum.
17:39:23 TinyBASIC uses an intermediate language. I wonder if a virtual machine for that language could be written in FORTH...
17:40:18 just for fun? seems like that would be slower
17:40:30 Just for fun :P
17:40:43 haha then the answer must be yes
17:40:44 TinyBASIC is really slow regardless
17:41:00 iirc part of tinybasic is written in that intermediate language
17:41:20 A large portion of it is
17:41:22 which is already a forth-like approach :P
17:42:30 Lol, I was thinking about extending it so that you could have TinyBASIC extend itself, but then you'd end up with a poor man's FORTH.
17:43:43 heh
17:44:06 I thought about writing a 6502 simulator in a 6502 forth then running basic on that
17:44:17 Hahaha
17:44:26 and if it's fast enough making another 6502 simulator in that basic to simulate something else
17:44:37 where the innermost layer is just blinking an LED
17:44:54 if you do enough layers, you could get the blink rate down to once a second just through simulator overhead :P
17:45:25 Someone wrote a Lisp for FORTH, and I found a FORTH written in Lisp.
17:45:59 i can't even wrap my brain around that
17:46:07 Hahaha
17:46:23 where would you stick all those list values in your forth? hopefully not on your stack
17:46:27 https://github.com/scott91e1/forthlisp
17:46:44 Probably in an ALLOT array.
17:47:36 out of memory crash in 3 2 1...
17:48:08 Oh it uses the heap
17:48:23 "allocate" is like C's malloc
17:49:50 oh nice
18:06:51 --- join: boru` joined #forth
18:06:53 --- quit: boru (Disconnected by services)
18:06:56 --- nick: boru` -> boru
18:37:05 cmtptr, fillr is what I would call it, filling in the bits to the right of the msb.
18:40:01 Or even just fill, if there's never a use case for filling to the left or any other word that would be a complement to fillr.
18:49:56 proteusguy: how's ActorForth going?
18:52:07 proteusguy, thanks, that's a good one. i think i had called it fill at one point but i felt like it was too ambiguous. i think i like fillr
18:52:25 fillr up
18:56:45 :-) there you have it
18:58:26 siraben, paused since my Mom passed away. Been thinking on it again. I'm starting to consider implementing it in C++ rather than Python. Actually spent part of last night setting up a modern C++20 dev environment. Also getting a build system set up. Seems like CMake with Ninja is what's popular now.
19:04:41 Right, I've seen those used in conjunction a lot. Personally I use Nix as soon as a project requires other libraries and dependencies.
19:07:02 Why switch from Python to C++? Is it a matter of performance?
19:16:25 siraben, performance will definitely improve, but the real driver is that Python doesn't give me low enough level access to do some of the simple stuff. It makes it difficult to escape out of its object model. What I had to do to get branching and looping "working" was insane.
19:17:07 That makes sense.
19:17:37 The latest C++ standard also has continuations in it, which seem promising.
19:17:53 Full continuations? What's the equivalent of call/cc?
19:18:11 Heh, C already kinda has continuations via setjmp/longjmp
19:18:42 Yeah, but it's not "safe". Naturally the C++ ones are.
19:21:33 proteusguy: have you heard of Zig?
19:22:40 don't think so
19:24:08 Zig looks like a viable C alternative; while languages like C++ and Rust are expressive and performant, they are very complex
19:25:49 and take too long to compile
19:30:43 --- join: hosewiejacke joined #forth
19:30:57 Zig looks interesting. It says it's also a C compiler - is it a C++ compiler as well?
19:32:36 I think so; https://github.com/ziglang/zig/issues/4786
19:38:40 siraben, it "sorta" does. Funny, it's written in C++ but doesn't integrate with it very well.
19:39:54 --- mode: ChanServ set +v proteusguy
19:49:54 --- quit: fiddlerwoaroof (Quit: Gone.)
19:50:24 --- join: fiddlerwoaroof joined #forth
19:56:37 --- quit: fiddlerwoaroof (Quit: Gone.)
19:57:40 --- join: fiddlerwoaroof joined #forth
20:01:30 --- quit: fiddlerwoaroof (Client Quit)
20:02:29 --- quit: hosewiejacke (Ping timeout: 258 seconds)
20:02:38 --- join: hosewiejacke joined #forth
20:03:06 --- join: fiddlerwoaroof joined #forth
20:15:09 hey guys
20:29:24 --- quit: Gromboli (Quit: Leaving)
20:30:19 tabemann_, hey!
20:32:19 --- join: dave0 joined #forth
20:33:53 * tabemann_ just wrote something that should solve most of his dynamic allocation needs - a generic memory pool
20:34:04 simple, fast, and constant-time
20:35:44 and the main downside I think is potentially losing up to half of your usable memory
20:36:02 no, it's not a heap
20:36:34 it's really just a free list for constant-sized blocks
20:36:59 unlike a heap, there is no way memory can get "lost"
20:37:17 and 16 is your smallest block?
20:37:23 no, 4 bytes
20:37:31 and 8 bytes is the next smallest?
20:37:37 16 bytes is my heap implementation's minimum size
20:37:37 *largest
20:38:25 so how many 17 byte blocks can I allocate?
20:38:30 my memory pool has a minimum size of 4 bytes because it needs to put the free pointer somewhere
20:38:59 seems reasonable.
20:39:09 the maximum size is how much free RAM there is on the machine
20:39:53 one plus of the memory pool implementation is that one does not need to devote all the RAM one needs to it ahead of time
20:40:04 it allows adding more RAM to it after the fact
20:40:19 currently it doesn't support removing RAM from it, though, after the RAM has been added
20:41:08 --- quit: sts-q (Ping timeout: 272 seconds)
20:41:36 I think actually that's the main drawback of memory pools over garbage collection - you lose up to half of your ram
20:42:17 if you have a pool of 16 byte blocks and a pool of 32 byte blocks and you try to allocate 17 byte blocks then they will all be taken from the 32 byte pool
20:42:26 and 15 of the 32 bytes will be unused in every block
20:42:37 which would not happen if it were garbage collected
20:43:17 this is supposed to be a soft real-time system
20:43:37 GC would be insane for that
20:44:20 there's a reason why I left my heap allocator out of my standard images
20:44:38 big, expensive resource-wise, slow, not real-time at all
20:44:39 I know, I'm just saying that's the main trade-off as far as I can tell
20:44:46 constant time but less usable memory
20:47:11 I think your hypothetical problem only arises when the programmer makes poor choices as to what block sizes to use for their memory pools
20:47:15 --- join: sts-q joined #forth
20:47:45 if they are going to have many 17-byte blocks allocated, then make a pool of 17-byte blocks instead of a pool of 16-byte blocks
20:58:04 tabemann_, ya if you know that in advance
20:59:28 but if you have a random chance of needing to allocate 1-32 bytes, since the size might depend on runtime input, then you're losing a good chunk of your memory there too
21:01:06 for that what I'd do is determine the distribution of sizes needed, and create an array of memory pools based upon that - then dynamically select which of the memory pools one needs at runtime
21:03:37 so if there is an equal chance of the buffer being length 1-32 you would create 32 pools?
21:04:16 --- quit: fiddlerwoaroof (Quit: Gone.)
21:04:38 know what
21:04:50 this particular use case probably is better served by a hybrid approach
21:04:58 --- quit: hosewiejacke (Ping timeout: 258 seconds)
21:05:07 use a heap combined with multiple memory pools
21:05:43 e.g. have 4, 8, 12, 16, 20, 24, 28, and 32 byte memory pools
21:05:54 but don't allocate all the space needed for them ahead of time
21:06:03 rather allocate minimal amounts in a heap
21:06:14 and when those run out of space, allocate more space from the heap
21:06:20 --- join: fiddlerwoaroof joined #forth
21:07:07 seems like you'd lose your constant time then if you need to allocate more heap space
21:07:12 and add it to the memory pools in question
21:07:50 yes, you would; it's not an ideal case, but if you're more concerned with space wasted than with realtime characteristics, it's probably the way to go
21:08:03 so you have a tradeoff
21:08:23 you can have really nice realtime characteristics, at the cost of the possibility of wasting RAM
21:08:57 or you can have less realtime-friendly characteristics, with the plus that one can use RAM more efficiently
21:09:42 --- quit: dave0 (Quit: dave's not here)
21:17:49 --- join: gravicappa joined #forth
21:35:33 --- quit: ecraven (Quit: bye)
21:35:46 --- join: ecraven joined #forth
22:06:41 --- quit: gravicappa (Ping timeout: 258 seconds)
22:44:44 --- join: hosewiejacke joined #forth
23:23:33 concatenative, polymorphic programming in Haskell: https://github.com/leonidas/codeblog/blob/master/2012/2012-02-17-concatenative-haskell.md
23:24:50 treating a stack as a heterogeneous tuple, it's possible to encode push, dup, swap, nip, etc. and compose them in the usual way
23:52:48 TangentDelta: ever seen an R6501Q or similar 6502 derivatives used in dial-up modems?
23:59:59 --- log: ended forth/20.12.11