00:00:00 --- log: started forth/20.12.12 00:05:28 --- part: hosewiejacke left #forth 01:03:46 --- quit: iyzsong (Quit: ZNC 1.7.5 - https://znc.in) 01:08:49 --- join: marksmith joined #forth 01:19:11 --- quit: marksmith (Ping timeout: 264 seconds) 01:19:46 --- join: iyzsong joined #forth 01:48:34 --- join: marksmith joined #forth 02:34:57 siraben: nice 02:59:20 I struggle to motivate myself to apply complicated language features to a language that's designed to be inherently simple in construction 02:59:57 But I get that in e.g. Haskell it's nice to provide concatenative features and harness the type system there to do it (not least because you *must* use the type system?) 04:11:26 --- quit: marksmith (Ping timeout: 272 seconds) 04:53:34 --- join: marksmith joined #forth 05:22:17 You could also embed a dynamically typed language in Haskell, by making a new datatype V which has all the possible values, data V = I Int | B Bool | F (V → V) | L [V] | TypeError, then define something like 05:22:42 double :: V → V 05:22:42 double (I n) = I (n + n) 05:22:42 double _ = TypeError 05:23:21 definitely because of the type system these things can be done, so it's up to the user on how strongly typed they want their DSL to be 05:25:08 --- join: gravicappa joined #forth 05:25:42 I might toy around with a statically typed embedding of Forth in Haskell, so that something like TRUE 1 + would be unrepresentable, then generate Forth code 06:06:35 inode: Possibly. I have an ISA modem card with a large amount of Rockwell ICs on it. I haven't bothered to look any of them up in the data book though. 06:08:17 I have some Protel COCOT payphones that use a processor with a 6502 core, designed to be used in a payphone. 06:09:48 Not a Rockwell part though. 
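[editor's note] The tagged-value embedding sketched above can be fleshed out into a small runnable module. This is a minimal sketch based on the definitions in the log; the helper `app` and its name are illustrative additions, not from the discussion.

```haskell
-- Tagged-union embedding of a dynamically typed language in Haskell,
-- completing the inline definitions from the log.
data V = I Int | B Bool | F (V -> V) | L [V] | TypeError

double :: V -> V
double (I n) = I (n + n)
double _    = TypeError

-- `app` is an illustrative helper (not from the log): apply an embedded
-- function value; any type mismatch collapses to TypeError at runtime,
-- which is exactly the dynamic-typing trade-off being discussed.
app :: V -> V -> V
app (F f) v = f v
app _     _ = TypeError
```

Note that type errors here surface only when the program runs, e.g. `double (B True)` evaluates to `TypeError` rather than being rejected by GHC.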
06:34:02 rockwell did some really interesting things later too like slightly incompatible 35mhz smd 6502s with added thread pointer 06:34:21 i think for modems 06:35:28 might not be that model but some had a mode where X specifies which zero page byte is accumulator 06:35:46 both of those are big speed-ups for forth 06:52:58 siraben: Yes a library for doing forth-style code in Haskell that compiled to something like standard forth would be cool 06:53:04 With type safety 06:54:44 Well I suppose it depends how flexible/approachable it is, I'm sure it can be done, just whether it would really be of use to Forthers is the question 06:59:00 --- join: Gromboli joined #forth 07:21:13 This discussion of concatenative programming from the functional perspective is quite interesting, would be nice if that got traction one day 07:33:12 I like to think that FP combines the concatenative and algebraic styles together 07:34:00 So you can write foo x y z = x + y * z just as easily as main = interact $ prettyPrint . solve . parse 07:40:40 http://evincarofautumn.blogspot.com/2012/02/why-concatenative-programming-matters.html 07:41:08 Under "Point-free Expressions" they compare point-free forms and concatenative, as if they are different 07:41:18 Of course I know FP is capable of representing things concatenatively anyway 07:41:22 Well any language can really 07:41:38 Anyone here read Algebra of Programming? The book uses a ton of pointfree FP which especially helps with algebra reasoning 07:41:41 s/algebra/algebraic 07:44:36 I remember getting the same 'chills' learning Forth that I got when seeing point-free programming in Haskell the first time 07:44:48 I like factoring, what can I say 07:46:18 Have you heard of hlint? It's a linter that often suggests to rewrite things in pointfree form, e.g. (\x → f (g x)) becomes (f . g) 07:47:56 Nope, I don't write Haskell though 07:48:14 I wrote Haskell years ago for some course at uni, I thought it was really nice but just lost interest.
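[editor's note] The statically typed Forth embedding mentioned earlier (where `TRUE 1 +` is unrepresentable) can be sketched with a standard trick: model the stack as a nested pair type, make each word a function between stack types, and compose left to right with `>>>`. All names here (`lit`, `prog`, etc.) are illustrative, not from the log.

```haskell
import Control.Arrow ((>>>))  -- left-to-right composition, reads like concatenation

-- Push a literal onto the stack (the stack is a nested pair, `s` is the rest).
lit :: a -> s -> (s, a)
lit x s = (s, x)

-- Add the top two Ints; the type demands Ints on top of the stack.
add :: ((s, Int), Int) -> (s, Int)
add ((s, a), b) = (s, a + b)

-- Duplicate the top of the stack.
dup :: (s, a) -> ((s, a), a)
dup (s, a) = ((s, a), a)

-- The Forth program "1 2 + dup +" leaves 6 on top of the stack.
prog :: s -> (s, Int)
prog = lit 1 >>> lit 2 >>> add >>> dup >>> add

-- The analogue of "TRUE 1 +" is rejected at compile time:
--   bad = lit True >>> lit 1 >>> add
-- because `add` requires ((s, Int), Int) but the stack type is ((s, Bool), Int).
```

Since each word carries its stack effect in its type, GHC checks stack depth and element types for free, which is the type safety being asked for; generating standard Forth from such a representation would be a separate step.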
07:48:41 I don't have a negative opinion, and I think it would be a good thing to learn, but my interest has gone in other directions 07:49:52 Of course, only so much time to do things 07:56:12 siraben would you say that retro was a functional language, a procedural language, or neither? 07:56:46 I haven't used retro, but from what someone posted here recently, it looks at least procedural 07:56:52 Does it have higher-order functions? 07:57:11 It uses quotations a huge amount 07:57:39 And functions that manipulate the xt's from those quotations and other things on the stack 07:58:15 Hm, so I'm guessing that allows things like functions to be passed and applied 07:58:22 Yes 07:58:23 I want to say it's not a FP, but I think I can't give a good generic answer for why that wouldn't also reject Lisp 07:58:25 Does it have closures? 07:58:44 No but neither do most lisps (if I remember rightly?) 07:58:51 Or does it crc? 07:59:09 IIRC McCarthy's original LISP-2 used dynamic scope and thus didn't have closures 07:59:36 Usually Lisps these days have closures and lexical scope to varying degrees 08:00:11 funnily enough, I wonder how much closures are even needed in pointfree style of programming 08:00:19 Hmmm 08:01:10 I used to think there was a clear line between FP and procedural, but I realise that might not be true, or maybe I don't understand the definition at all 08:02:13 As a forth, retro is procedural. But it does draw some influence from functional programming 08:02:18 I think the main point is some languages are closer to the lambda calculus heritage, and some closer to the turing machine heritage. 08:02:27 No closures 08:03:11 Yes crc, although by that definition lisp is procedural.
But it's a reasonable definition for current languages with how FP has evolved over time 08:05:53 retro sounds reflective, which overlaps with FP a bit 08:06:27 Well, that'd make C functional since it has function pointers, heh 08:09:44 Exactly 08:09:56 another good measure is how amenable is the language to equational reasoning. can you replace a word by its body and expect the same behavior? 08:10:10 and likewise, replace expressions with equivalent ones and expect the same behavior 08:10:21 It is a bit subjective, especially with earlier languages which kind of fail even modern definitions of a "high level language" most days 08:11:05 There was a great paper I came across on pinning down what is really meant by "expressiveness" in programming languages 08:12:54 https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.51.4656 08:13:58 after thinking about it more I'd say Retros is procedural but not functional for the same reason C isn't functional, closures are pretty key to FP IMO 08:14:04 s/Retros/Retro 08:27:24 I personally think C++ and Lua are more procedural than retro, but they both have closures. 08:35:17 I was reading something about Turing suggesting to somebody they should write a program on their early computer that 08:35:42 let them run a program emulated, so they could debug it in detail and 'trace' what it does for study 08:36:12 And I thought it was quite funny because he was kind of like the real computer analogue of a universal turing machine 08:58:17 Hah, nice. Turing was very forward-thinking. 
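[editor's note] The equational-reasoning test above ("can you replace a word by its body and expect the same behavior?") is concrete in pure Haskell: inlining a definition, or rewriting point-full code into point-free form as hlint suggests, never changes the result. Names here are illustrative.

```haskell
double :: Int -> Int
double = (* 2)

-- Point-full version.
quadruple :: Int -> Int
quadruple x = double (double x)

-- Point-free version, as hlint would suggest: \x -> f (g x) becomes f . g.
quadruple' :: Int -> Int
quadruple' = double . double

-- Replacing `double` by its body is also sound (referential transparency).
quadruple'' :: Int -> Int
quadruple'' = (* 2) . (* 2)
```

All three definitions are interchangeable anywhere in a program, which is exactly the property that makes point-free factoring in Haskell feel like factoring words in Forth.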
09:58:47 --- quit: gravicappa (Ping timeout: 240 seconds) 10:02:16 --- join: gravicappa joined #forth 10:09:33 --- quit: lispmacs (Read error: Connection reset by peer) 10:18:14 --- join: lispmacs joined #forth 10:57:18 --- quit: catern (Ping timeout: 240 seconds) 10:58:38 --- quit: ornxka (Ping timeout: 240 seconds) 10:59:30 --- join: catern joined #forth 10:59:32 --- join: ornxka joined #forth 11:00:21 --- join: X-Scale` joined #forth 11:03:35 --- quit: X-Scale (Ping timeout: 264 seconds) 11:03:36 --- nick: X-Scale` -> X-Scale 12:04:52 --- quit: gravicappa (Ping timeout: 256 seconds) 12:47:41 --- join: Zarutian_HTC joined #forth 13:01:11 hey guys 13:04:53 h'lo 13:07:40 * tabemann_ is weighing whether a hybrid heap-memory pool solution is too expensive in flash needed 13:08:11 i.e. memory exists in a heap, out of which space is allocated by a memory pool as needed 13:08:12 been thinking about using something like Infocom's ZSCII to encode word names 13:11:30 --- quit: Vedran (Ping timeout: 256 seconds) 13:12:11 in some old Forths they only used the length of a word and the first three characters 13:12:30 --- join: Vedran joined #forth 13:22:31 that doesn't work with languages other than English 13:23:06 plus I want SEE to work properly 13:23:52 well yes 13:23:58 I wasn't recommending the approach 13:54:32 --- quit: Zarutian_HTC (Remote host closed the connection) 14:08:28 --- join: zolk3ri joined #forth 14:31:25 --- quit: zolk3ri (Remote host closed the connection) 14:52:44 --- quit: Lord_Nightmare (Remote host closed the connection) 14:53:07 --- join: Lord_Nightmare joined #forth 15:04:29 --- join: Zarutian_HTC joined #forth 15:05:04 --- quit: Zarutian_HTC (Remote host closed the connection) 15:25:49 tabemann_: what are your constraints? 15:26:54 And is the concern here about using too much space for the words needed, what are the constraints there?
15:27:43 flash usage isn't too much of a concern, as the chips I'm using have about 512K to 1M of flash on average 15:28:14 even still, a fully loaded system will have a lot of extra non-standard functionality, which will add up 15:28:35 ("non-standard" as it's not included in the official binaries, and has to be loaded by the user) 15:29:03 other constraints are realtime performance - in the common case it will be fast, not much slower than plain memory pools 15:29:32 but in the worst case it will have the behavior of a heap, because it will need to allocate more memory to add to the memory pools 15:30:00 another constraint is memory usage efficiency 15:30:23 in this regard it aims to have better performance than plain fixed-size memory pools 15:30:38 because it is internally an array of separately-expanded memory pools for a range of sizes 15:31:54 each of which is expanded as needed with memory taken from the heap 15:32:09 "better performance than plain fixed-size memory pools" is this because the data could be larger so would be some kind of list? 15:32:26 but which can have minimum allocation sizes smaller than those permitted by the heap (which has a minimum allocation size of 16 bytes) 15:33:54 veltas: it's because it is an array of memory pools, and it automatically selects the smallest pool necessary, and expands it as necessary so it does not all need to be pre-allocated (aside from the total memory allocated for the heap) 15:34:55 *smallest block pool necessary 15:37:16 When you say an array of memory pools, is it like dlmalloc?
http://gee.cs.oswego.edu/dl/html/malloc.html 15:38:35 Well it uses an array of free block lists, starting from smaller to larger sizes 15:43:20 aw shite - just realized a problem with what I'm implementing - my memory pool implementation has no size marking, because all the blocks are guaranteed to have the same size 15:43:45 but for this, because there's multiple block sizes, each block needs to have a size field 15:44:01 and it has to be four bytes in size to maintain alignment 15:45:02 Yes sounds like a full heap implementation 15:47:03 I personally think a heap is unnecessary for an embedded system, but at the same time I'd expect it in any embedded system at least as an option 15:47:24 I already have a heap implementation, that's the thing 15:47:47 I wanted something faster, with better performance in the common case though 15:48:05 screw it, I'm not going to bother with this 15:48:20 How does the current one work? 15:49:06 Does it just create a big list of all freed blocks, allocate from the first one it sees of the right size, and increment the heap end address if none fits, storing the size of each block at the front? 15:49:20 not quite 15:49:26 it has an array of freed block lists 15:49:51 Of different size ranges? 15:50:05 it would choose a list out of the array based on the logarithm of its size + 1 15:50:10 Yeah 15:50:12 So that's just dlmalloc without the small block optimisation 15:50:41 and it has all kinds of logic for splitting and merging blocks 15:51:18 Yup 15:51:20 e.g. if, when allocating, there is space left over in the rest of the block allocated from, that would be unlinked and then relinked into a new location in the list structure 15:52:03 I wrote something like that for the ZX Spectrum for a text editor, it was 'fine'.
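[editor's note] The size-class selection described above ("choose a list out of the array based on the logarithm of its size") can be sketched as a pure function. This is illustrative only, assuming power-of-two buckets; the actual implementation discussed in the log may bucket differently.

```haskell
import Data.Bits (countLeadingZeros, finiteBitSize, shiftL)

-- Segregated free lists: a request of n bytes is served from the bucket
-- holding blocks of the next power-of-two size >= n, so the bucket index
-- is ceil(log2 n). Computed via count-leading-zeros, avoiding floating point.
bucketIndex :: Int -> Int
bucketIndex n
  | n <= 1    = 0
  | otherwise = finiteBitSize n - countLeadingZeros (n - 1)

-- Block size actually handed out for a request of n bytes (the remainder
-- beyond n is the internal fragmentation mentioned later in the log).
bucketSize :: Int -> Int
bucketSize n = 1 `shiftL` bucketIndex n
```

For example, a 17-byte request maps to bucket 5 and receives a 32-byte block; dlmalloc's bins follow the same idea, with exact-size bins added for small requests.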
If it works on a 3.5MHz Z80 it will work in embedded 15:53:13 For my spectrum code originally I wanted to write it with compaction (to remove all 'gaps' if there was no free memory left because of fragmentation) 15:53:43 But the usage code got way too complicated, you have to keep re-loading your locator in case it's moved! Or be too restrictive about where allocation is allowed 15:53:47 my heap achieves compaction by merging adjacent blocks 15:53:53 when blocks are free 15:54:16 By compaction I mean moving *allocated* memory 15:54:52 You will get fragmentation however you implement it, unless you use fixed size blocks or make other restrictions on usage 15:55:48 The funny thing is that the code needed to handle compacting allocations lost me any real benefit I probably would have had, and most of my allocations were small anyway so it didn't really matter 15:55:56 I mean size benefit 15:56:05 Because on spectrum the code was in RAM 15:56:33 On embedded systems would be different in that regard 15:57:16 How do you know your allocation needs optimising, is this from profiling? 
15:59:39 it's because it's not very space-efficient for very small allocations 16:00:02 the minimum block size is 16 bytes, and each block also has a 16-byte overhead 16:00:31 whereas my memory pool has a minimum block size of 4 bytes 16:00:37 with no extra overhead 16:01:19 the only issue with the memory pool is that each block has a fixed maximum size, and if less data is needed than the full block size it is wasted 16:01:27 *the remainder is wasted 16:01:53 Yeah I think you're trying to square a circle a bit on this one 16:03:46 note that the heap can be combined with the memory pool, as the user can take space allocated on a heap and add it to the memory pool 16:04:22 and that was my objective here 16:04:27 You can avoid storing the size per-block if the size can be passed to free and the actual size is a deterministic function of requested size 16:05:05 And if you are freeing something you generally either know or can remember how large it is 16:06:03 It's been talked about that this is a 'broken' feature of C, C++ etc that free/delete does not pass size info (in C++ it really is, because classes absolutely know how large things are) 16:06:50 This saves like 16 or 8 bytes per allocation on modern computers. 16:10:37 I'm surprised that you actually implemented a compacting heap for a ZX Spectrum, btw 16:11:33 I didn't! 
16:11:47 I said I started doing it and gave up because it turned out not to be worth it :P 16:12:10 It might be worth it on an embedded device with very constrained RAM where the flash space is not an issue 16:12:26 * tabemann_ notes that the Mac had a compacting heap originally 16:12:42 If you find yourself writing extra code to try and save space in the same space you're trying to save, you're doing it wrong 16:12:47 Yes Mac and Windows both 16:13:08 compacting heaps make it hard to write bug-free code though 16:13:35 On Windows it 'hid' the details from you often because it would compact on a 64K boundary, and your segment registers would get mysteriously updated if it happened while your program was yielded 16:13:38 because you need to keep on locking and unlocking blocks 16:14:26 Yes, this is what I realised and that adds extra code, therefore the space lost by fragmentation is not as bad as the space (from extra code) *and* effort required by the compacting 16:15:01 It would be worth it on e.g. x86 where you can hide the details like MS did 16:16:10 Why would you be surprised? 16:16:33 ? 16:16:41 oh yeah 16:17:13 I thought that the ZX Spectrum would be too small of a system to really support something like a real heap, with its overhead 16:18:28 Definitely was constrained, heap is convenient though and I was writing a text editor so seemed worth it 16:19:47 Had an array of pointers to text lines, means the line length can be more flexible and probably more memory efficient than what forth does 16:20:27 Old Forth block editors just allocate the whole block to a number of fixed-length lines, short lines waste loads of space etc. 
16:20:45 yeah 16:22:13 The thing wasting space on that project was trying to write code in C with SDCC 16:24:54 The idea was to try and make an interactive environment with some kind of language, and I decided then that it would actually make more sense to create a small language and implement the *editor* and as much as possible in that language 16:25:14 Even write the compiler and interpreter in the language, all you need then is to write the VM in machine code 16:25:49 This is before I knew forth obviously, or the solution would have been painfully obvious 16:26:07 yeah 16:26:22 okay, I'm gonna have dinner now, will bbl 16:26:30 I like rediscovering all the same problems others have solved 16:26:36 definitely 16:26:37 Have fun 16:33:23 I think the right way to do a text editor with constrained space is to delimit lines where they are allocated, and leave space between them. Then you can compact if needed, or shuffle them along (less frequently than if they're all bunched up) 16:34:17 The actual BASIC environment that comes with the Spectrum decided to delimit them in memory all contiguously, and then as lines are inserted all further lines must be moved along in memory, which is visibly slow in the editor! 16:36:14 The Spectrum fit a decent amount of code in, because it was stored tokenised, values above $80 were used for keywords/built-ins/etc 16:42:48 Just put everything in B-trees with traversal optimisations and then use a fixed-size pre-allocated pool for allocations, fully real-time. Just don't ask me to code the B-tree :P 16:54:24 --- quit: _whitelogger (Remote host closed the connection) 16:57:20 --- join: _whitelogger joined #forth 16:59:09 It's annoying that the 6502 took off and became super popular. It's my favorite 8-bit microprocessor though :P 16:59:33 The 6800 is way more capable, but nowhere near as cheap as the 6502 was. 
17:03:03 6502 is fine if you treat it like a 128-register RISC and don't think too hard about it 17:03:38 What a true 8-bit processor it was 17:26:45 --- join: Zarutian_HTC joined #forth 17:30:18 --- quit: marksmith (Ping timeout: 240 seconds) 18:05:35 --- join: marksmith joined #forth 18:05:51 --- join: boru` joined #forth 18:05:53 --- quit: boru (Disconnected by services) 18:05:56 --- nick: boru` -> boru 18:17:18 --- join: dave0 joined #forth 18:56:08 --- quit: Zarutian_HTC (Remote host closed the connection) 18:57:29 back 18:57:32 hey guys 18:58:41 to me the big problem with the 6502 is that the index registers were 8-bit 19:05:46 Yeah. That's the most annoying thing for me. 19:06:55 It's not too bad with all of the different indirect indexed modes, but it's slower and takes up more code. 19:10:18 --- quit: marksmith (Ping timeout: 240 seconds) 20:36:33 --- quit: dave0 (Quit: dave's not here) 20:38:16 --- quit: sts-q (Ping timeout: 260 seconds) 20:39:15 --- quit: phadthai (Quit: bbl) 20:52:54 --- join: sts-q joined #forth 21:04:53 --- join: phadthai joined #forth 21:14:17 --- quit: Gromboli (Quit: Leaving) 21:18:33 --- join: gravicappa joined #forth 22:12:50 today's advent of code was nice 23:59:59 --- log: ended forth/20.12.12