00:00:00 --- log: started forth/18.10.14 00:34:04 --- quit: clog (^C) 00:34:04 --- log: stopped forth/18.10.14 00:34:17 --- log: started forth/18.10.14 00:34:17 --- join: clog (~nef@bespin.org) joined #forth 00:34:17 --- topic: 'Forth Programming | logged by clog at http://bit.ly/91toWN | If you have two (or more) stacks and speak RPN then you're welcome here! | https://github.com/mark4th' 00:34:17 --- topic: set by proteusguy!~proteus-g@cm-134-196-84-89.revip18.asianet.co.th on [Sun Mar 18 08:48:16 2018] 00:34:17 --- names: list (clog dave9 dys smokeink2 proteus-guy X-Scale smokeink ashirase reepca MrMobius unrznbl[m]1 tabemann Zarutian nerfur jedb nighty- ovf dave0 ecraven sigjuice vxe pointfree +KipIngram Keshl pointfree[m] lonjil NB0X-Matt-CA catern mstevens carc APic malyn z0d jackdaniel yunfan rprimus C-Keen amuck rann jhei +crc FatalNIX Lord_Nightmare djinni zy]x[yz koisoke Labu phadthai irsol rpcope a3f bb010g jimt[m] dzho diginet2 cheater dne WilhelmVonWeiner bluekelp rain2) 00:34:17 --- names: list (newcup jn__) 02:41:20 --- log: started forth/18.10.14 02:41:20 --- join: clog (~nef@bespin.org) joined #forth 
02:41:22 --- join: smokeink (~smokeink@118.131.144.142) joined #forth 02:43:29 --- join: siraben (~user@unaffiliated/siraben) joined #forth 02:48:22 --- join: ncv_ (~neceve@2a02:c7d:c5c9:a900:6eaf:6ef7:3b81:d5f6) joined #forth 02:48:22 --- quit: ncv_ (Changing host) 02:48:22 --- join: ncv_ (~neceve@unaffiliated/neceve) joined #forth 02:49:39 --- quit: ncv_ (Remote host closed the connection) 02:50:16 --- quit: ncv (Remote host closed the connection) 02:50:49 --- join: ncv (~neceve@2a02:c7d:c5c9:a900:6eaf:6ef7:3b81:d5f6) joined #forth 02:50:49 --- quit: ncv (Changing host) 02:50:49 --- join: ncv (~neceve@unaffiliated/neceve) joined #forth 02:52:25 oh, I thought ['] are 3 separate words 02:52:36 if it's a single word then everything is clear 02:57:05 i came from C and allowing any characters in Forth words was a bit weird to get my head around 03:14:26 --- quit: ncv (Remote host closed the connection) 03:15:06 It's a cool and powerful feature I guess, one can define many beautiful words in any language they wish ... 
in C one has to use English and that's not cool 03:15:58 --- join: ncv (~neceve@2a02:c7d:c5c9:a900:6eaf:6ef7:3b81:d5f6) joined #forth 03:15:58 --- quit: ncv (Changing host) 03:15:58 --- join: ncv (~neceve@unaffiliated/neceve) joined #forth 03:17:39 not only that, you can use real chars like × for multiply and ÷ for divide if you want 03:19:58 Most stack operations form a mathematical group 03:20:23 Like SWAP, ROT, -ROT, 2SWAP, 2ROT 03:21:06 I wonder what the "minimal" operations are, so that you could define everything else on top of that 03:21:36 I think it's just >R R> and SWAP, right? 03:21:58 nice 03:22:27 dave9: There's always the danger of making an APL by accident 03:22:51 i used to know a bit of apl 03:23:07 +/ 1 2 3 4 5 adds up the numbers 03:23:26 smokeink: Yeah that was a little strange for me as well, but it makes reading programs so much easier 03:23:31 You deal with a word at a time 03:23:44 usually can find out what it does with SEE, or reading the docs 03:23:59 Order of operations is explicit, left to right. 
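The group structure mentioned above is easy to see if each stack word is modeled as a permutation of the top few items: composing two words gives another permutation, and each word has an inverse. A quick Python sketch (word set and names illustrative; top of stack is the end of the list):

```python
# Model Forth stack words as permutations of the top of a Python list
# (top of stack = end of list). Sketch only; the word set is illustrative.

def swap(s):     return s[:-2] + [s[-1], s[-2]]          # ( a b -- b a )
def rot(s):      return s[:-3] + [s[-2], s[-1], s[-3]]   # ( a b c -- b c a )
def nrot(s):     return s[:-3] + [s[-1], s[-3], s[-2]]   # -ROT ( a b c -- c a b )
def two_swap(s): return s[:-4] + s[-2:] + s[-4:-2]       # ( a b c d -- c d a b )

s = ['a', 'b', 'c', 'd']
assert rot(rot(rot(s))) == s        # ROT has order 3
assert nrot(s) == rot(rot(s))       # -ROT is the inverse of ROT: ROT ROT
assert two_swap(two_swap(s)) == s   # 2SWAP is its own inverse
```

These identities are the group property being discussed: every juggling word is invertible by some sequence of the others.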
03:25:01 --- join: gbc (~quassel@110.54.218.92) joined #forth 03:30:45 --- nick: gbc -> wa5qjh 03:30:54 --- quit: wa5qjh (Changing host) 03:30:54 --- join: wa5qjh (~quassel@freebsd/user/wa5qjh) joined #forth 03:32:50 --- quit: wa5qjh (Remote host closed the connection) 03:59:21 --- quit: reepca (Ping timeout: 244 seconds) 03:59:53 --- join: reepca (~user@208.89.170.250) joined #forth 04:01:59 --- quit: smokeink2 (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client) 04:02:22 --- quit: smokeink (Remote host closed the connection) 04:15:24 --- join: dddddd (~dddddd@unaffiliated/dddddd) joined #forth 04:20:30 --- quit: ncv (Remote host closed the connection) 04:20:50 --- join: ncv (~neceve@2a02:c7d:c5c9:a900:6eaf:6ef7:3b81:d5f6) joined #forth 04:20:50 --- quit: ncv (Changing host) 04:20:50 --- join: ncv (~neceve@unaffiliated/neceve) joined #forth 04:54:35 siraben: http://tunes.org/~iepos/joy.html 04:54:49 Saw this on #concatenative not too long ago. 04:56:08 absolutely nutty. 05:32:23 --- quit: dne (Ping timeout: 260 seconds) 05:32:41 --- join: dne (~dne@jaune.mayonnaise.net) joined #forth 05:35:52 What's #concatenative all about? 05:36:14 WilhelmVonWeiner: Yeah I've seen that before, time to read it in depth. 05:41:17 I'm exploring program generation for Forth. 05:42:03 The idea is that we can ask, Prolog-style, for a program that does this operation: ( a b c d -- c d b a ) and it would gladly respond "SWAP 2SWAP" 05:43:06 If anyone has some challenges for this system please let me know 05:43:18 Also has >R R>, and other return stack operators 05:46:30 about concatenative languages I suppose 05:49:48 --- join: rdrop-exit (~markwilli@112.201.162.180) joined #forth 05:50:10 You mean « 2swap swap » 05:50:26 Oh sorry my stack notation is backwards 05:50:31 I'll fix it 05:51:09 It's just that in my language it's easier to take the first element 05:51:13 (Scheme) 05:52:57 There have been a few implementations of similar ideas. 
I don’t think they’re ever worth the trouble. 05:54:05 Fixed 05:54:08 Oh it's easier than it seems 05:54:17 wouldn't that be swap 2swap swap 05:54:39 oh c and d weren't swapped. nvm 05:54:49 It generated: ((two-swap swap two-swap)) 05:55:02 Is that correct? 05:55:06 no 05:55:44 it should be 2swap swap 05:55:52 Ah I need to fix my representation in memory brb 05:56:30 What's the convention for notation? 05:56:38 ( bottom top -- bottom top ) ? 05:56:44 yes 05:57:02 top is always the rightmost 05:57:56 If you’re trying to come up with an automatic stack juggler, you don’t need the left side. 05:58:18 It can run programs backwards or forwards 05:58:33 i.e. given a stack effect it can make a program, given a program it can run it 05:59:17 You could call swap « ba », over « aba », lift « aab », swish « bac », etc… 05:59:58 2swap « cdab » 06:00:24 You're trying to automate stack rearrangement? 06:00:48 Why? Just for fun? 06:04:34 Here are the fixed results ((two-swap swap) (swap swap two-swap swap) (two-swap swap swap swap)) 06:04:43 Should be correct for a b c d -> c d b a 06:04:46 Yeah, just for fun. 06:04:57 Yikes 06:05:16 Program generation is just interesting 06:05:27 Especially through relational/logic programming 06:10:07 One could add an instruction pointer and perform conditional branching, but infinite loops are everywhere and so is the Halting Problem 06:14:01 --- quit: rdrop-exit (Quit: rdrop-exit) 06:20:09 Can you make an optimality claim? 06:20:17 I.e., are you getting the shortest path to the rearrangement? 06:21:43 No. It's breadth-first search 06:21:50 Although the first solution found is usually the simplest 06:22:07 I think the first solution is always optimal 06:27:27 Well, that's how you want it. :-) 06:27:30 "first is best." 
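The search described above - given a stack effect, find a word sequence realizing it - can be sketched as a breadth-first search over stack states. This is a toy Python model of the idea (the system in the log is written in Scheme; the word set here is illustrative), for the ( a b c d -- c d b a ) example:

```python
from collections import deque

# Breadth-first search for the shortest word sequence achieving a stack
# effect. Toy sketch over three juggling words; top of stack = end of tuple.

WORDS = {
    'swap':  lambda s: s[:-2] + (s[-1], s[-2]),
    'rot':   lambda s: s[:-3] + (s[-2], s[-1], s[-3]),
    '2swap': lambda s: s[:-4] + s[-2:] + s[-4:-2],
}

def find_program(start, goal):
    seen, queue = {start}, deque([(start, [])])
    while queue:
        stack, prog = queue.popleft()
        if stack == goal:
            return prog              # BFS: the first hit is shortest
        for name, fn in WORDS.items():
            nxt = fn(stack)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, prog + [name]))

prog = find_program(('a', 'b', 'c', 'd'), ('c', 'd', 'b', 'a'))
print(prog)   # prints ['2swap', 'swap']
```

Because BFS enumerates programs in order of length, the first program that reaches the goal state is length-minimal, which matches the optimality claim in the log.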
06:27:50 --- quit: reepca (Remote host closed the connection) 06:28:07 --- join: reepca (~user@208.89.170.250) joined #forth 06:32:56 I just did a search and it's possible to make all 24 permutations of the top four elements of a stack with the following words: ROT SWAP 2SWAP 2ROT 06:33:04 I'll try to see if I can cut it down 06:34:08 doesn't 2rot expect 6 stack items? 06:44:24 siraben: use ROLL 06:45:31 Ah sorry 2ROT hasn't been implemented yet 06:45:37 So it didn't appear in the generated programs 06:48:36 I think I'd just go with >R R> and SWAP. :-) 06:48:50 Can't you do anything with those? 06:50:42 Yeah I'm doing a search now 06:50:56 With those, coincidentally enough 06:51:51 I suppose I'm taking a short break from all this low-level programming, haha. 06:51:58 :-) 06:52:31 Watching Scheme take up 2.33 GB of memory :O 06:56:06 I'll post the results here once it's done. Right now at 22/24 permutations done. 07:00:17 If I find myself with an uncomfortable stack situation on my hands, I usually look back at how I got there, and see if I can restructure the computation of those things so they wind up more happily laid out. 07:00:40 I.e., try to "do stack manipulation" by "not needing it." 07:00:45 Right, of course. 07:01:13 The really tacky situations where you can't do that - that's what prompted me to implement my stack frame feature. 07:01:20 Also helps to know some algebraic properties, like the commutativity of + and * ( a b + is the same as b a + ) 07:01:35 The less the better 07:01:36 Sometimes you really do need access to a biggish number of stack items all in the same bit of code. 07:01:52 Yeah, you could do offsets from the stack pointer, right? 07:01:58 I'm all for Forth's "need only the top couple of items," but sometimes you just can't, I think. 07:02:23 Such as? 07:02:26 Well, I have a separate "frame" pointer, so that I can still use the stack in the normal way without making all the offsets change all over the place. 
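The suggestion above that >R, R> and SWAP suffice can be checked exhaustively for four items by searching over (data stack, return stack) states: every one of the 24 orderings is reachable with the return stack left empty, since the three words let you bury, swap, and restore items at any depth. A Python sketch of that check (state encoding illustrative):

```python
from collections import deque
from itertools import permutations

# Exhaustive check that >R, R> and SWAP reach every permutation of the
# top four stack items. State = (data, return) stacks as tuples, tops
# at the end; sketch only.

def moves(data, ret):
    if len(data) >= 2:                               # SWAP
        yield data[:-2] + (data[-1], data[-2]), ret
    if data:                                         # >R
        yield data[:-1], ret + (data[-1],)
    if ret:                                          # R>
        yield data + (ret[-1],), ret[:-1]

start = (('a', 'b', 'c', 'd'), ())
seen, queue = {start}, deque([start])
while queue:
    for nxt in moves(*queue.popleft()):
        if nxt not in seen:
            seen.add(nxt)
            queue.append(nxt)

# Keep only states where the return stack has been emptied again.
reached = {d for d, r in seen if not r}
assert reached == set(permutations('abcd'))
print(len(reached))  # 24
```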
07:02:54 (hm the last permutation is taking a long time to solve) 07:03:23 Um, in my command history code. If I recall, I have 1) string byte, 2) string address, 3) cursor position, 4) offset in input buffer, 5) input buffer address, and 6) pointer into the command history list. 07:04:01 And I just couldn't figure out how to arrange to only need access to a couple of those at any given time, and even then, what comes next will need something else, and it might be anywhere in that list. 07:04:41 Maybe you could use variables? 07:04:49 I absolutely did not want to do that. 07:04:57 Why not? Slower? 07:05:24 I expect it would have been about the same - the frame cells behave like variables in terms of accessing them. 07:05:52 I just don't like creating variables that sit there forever idle most of the time. 07:06:05 You could think of the frame thing as a "local variable" system of sorts. 07:06:17 I use {{ to open a frame, }} to close it. 07:06:18 Ah, same here, I don't like variables just sitting there 07:06:29 Pre-existing frame pointer gets saved on the return stack, and restored on }} 07:06:36 --- join: rdrop-exit (~markwilli@112.201.162.180) joined #forth 07:07:17 I have a set of words, 0@ 1@ 2@ ..., 0! 1! 2! ..., and a few others, like +! -! etc. 07:07:35 All with the offset hard-coded, so I don't have to run a number through the stack to specify the offset. 07:07:39 I supported up to 6. 07:08:02 If you’re dealing with a proper stack machine (real or virtual) you don’t necessarily have random access to the stacks, or even direct access to a stack pointer. 07:08:22 If you had a preference for writing general words and specifying an offset numerically, you could do that too. 07:08:43 rdrop-exit: that's true, and it's another reason for hard coding the parameter into the word. 07:09:00 By the way, what happens on a stack underflow for you? 07:09:00 I could implement that in a hardware stack, though it would introduce new data paths in the wiring. 
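The {{ ... }} frame scheme described above can be modeled in a few lines: opening a frame saves the old frame pointer on the return stack and snapshots the stack pointer, and the n@ / n! words then address cells relative to that snapshot, so later pushes and pops don't shift the offsets. A hypothetical Python sketch - the method names and cell layout are guesses for illustration, not the actual implementation:

```python
# Toy model of a {{ ... }} stack-frame facility. {{ saves the current
# frame pointer on the return stack and snapshots the data-stack depth;
# n@ / n! then address cells relative to that snapshot. Layout is an
# illustrative assumption (cell 0 = top of stack when {{ ran).

class Frames:
    def __init__(self):
        self.data, self.ret, self.fp = [], [], 0

    def push(self, x): self.data.append(x)
    def pop(self):     return self.data.pop()

    def open_frame(self):            # {{
        self.ret.append(self.fp)
        self.fp = len(self.data)     # snapshot of the stack pointer

    def close_frame(self):           # }}
        self.fp = self.ret.pop()

    def at(self, n):                 # n@ : fetch frame cell n
        return self.data[self.fp - 1 - n]

    def store(self, n, x):           # n! : store into frame cell n
        self.data[self.fp - 1 - n] = x

f = Frames()
for x in (10, 20, 30):
    f.push(x)
f.open_frame()
f.push(99)                # later pushes don't disturb the offsets
assert f.at(0) == 30      # 0@ is the cell on top when {{ ran
assert f.at(2) == 10
f.store(1, 21)            # 1! updates a middle cell in place
assert f.pop() == 99
f.close_frame()
```

Because the offsets are taken from the frozen frame pointer rather than the live stack pointer, code changes that push or pop extra items no longer break every offset - the pain point described in the log.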
07:09:16 I restore the system from the error recovery image and run QUIT. 07:09:45 Once you start doing random access into the stack you’re actually programming to a register machine model rather than a stack machine. 07:10:13 I understand. This isn't something I use as a routine thing - it's there to bail me out of an occasional ugly situation. 07:10:22 I use circular stacks, they’re never empty. 07:10:39 Yeah, I'd think about that as well if I were building a hardware implementation. 07:10:54 That behavior can be very useful in some situations. 07:12:00 I think I have two places, maybe three, in my actual system where I use stack frames. 07:12:33 Maybe there are better ways to structure those things I just didn't see. 07:13:04 I am sensitive about not using the facility too easily - I think a risk is coming to rely on it rather than fighting for the "good way." 07:13:33 When I first implemented those words, I used the stack pointer. 07:13:41 I never do random access on the stacks. 07:13:47 rdrop-exit: Circular stacks? 07:13:52 Enlighten me 07:14:05 But that got hard to keep up with. It was a real pain figuring out what offset I needed to use "this time," and changes to the code later tended to break all of it. 07:14:13 Much cleaner with a frame pointer that's under better control. 07:14:27 KipIngram: How did you implement a frame pointer? 07:14:29 siraben: the stack pointer wraps around when it hits one end of the buffer. 07:14:30 Like on Chuck’s Chips, the stacks have no beginning or end. The addressing is circular. 07:14:31 What model are you using? 07:14:37 With a register. 07:14:47 Oh, lucky you. 07:14:52 {{ pushes that register to the return stack and copies the stack pointer into it. 07:15:02 I'll have to implement it with a hardcoded 16-bit memory cell 07:15:02 So it's a "snapshot" of a stack pointer. 07:15:09 I use 256 deep stacks on my host Forth. 07:15:41 Mine are currently a bit under 512. 07:15:45 How do you check for underflow? 07:15:55 You don’t. 
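A circular stack of the kind described above is just a fixed buffer with a pointer that wraps modulo the buffer size, so pushes and pops can never walk out of the allocated range - which is why underflow checking becomes unnecessary. A minimal sketch (buffer size and zero-fill are illustrative):

```python
# Sketch of a circular stack: a fixed buffer whose pointer wraps
# modulo the size, so the stack can never under- or overflow into
# other memory. Size here is illustrative.

class CircularStack:
    def __init__(self, size=8):
        self.buf = [0] * size        # zero-initialised, as in the log
        self.p = 0                   # index of the current top

    def push(self, x):
        self.p = (self.p + 1) % len(self.buf)
        self.buf[self.p] = x

    def pop(self):
        x = self.buf[self.p]
        self.p = (self.p - 1) % len(self.buf)
        return x

s = CircularStack()
s.push(1); s.push(2)
assert s.pop() == 2 and s.pop() == 1
# "Underflow" just wraps around and reads stale cells instead of trapping:
assert s.pop() == 0
```

The trade-off is that overflow silently overwrites the oldest entries and underflow silently returns stale data; the discipline is simply never to take off anything you didn't put there.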
07:16:07 Maybe I can weave it into my interpreter 07:16:17 siraben: a circular stack pointer can never leave the range allocated for the stack. 07:16:34 That's actually a clever idea. 07:16:35 On my system each time I return from EXECUTE in the INTERPRET loop I call ?STACKS. 07:16:42 It compares SP against the value in SP0. 07:16:53 The only issue arises during heavily nested calls, right? 07:17:04 Have you checked when the deepest call happens? 07:17:13 Currently I don't. 07:17:17 KipIngram: I'm thinking of that check as well. 07:17:22 ?STACKS 07:17:24 But my stack regions are initialized to zero at the startup. 07:17:35 Just don’t take anything off the stack that you didn’t put there. 07:17:38 I can look at that memory and find the "deepest" non-zero word at any time. 07:17:59 Right - if underflow happens you've made a mistake. 07:18:44 siraben: I do plan to have a profiling system, and one of the things it might monitor is stack depth. 07:19:07 Profiling will involve adding "overhead code" to several performance sensitive places, so as long as I'm doing it I may as well gain all the information I can. 07:19:17 Gotta go, see you all later. 07:19:18 But that stuff won't be on except when I'm explicitly doing profiling. 07:19:22 Later. 07:19:27 --- quit: rdrop-exit (Quit: rdrop-exit) 07:20:35 Right. I'll need to figure out timers on my calculator 07:20:52 I've seen assembly programs that work as stopwatches, so it's somewhere there. 07:22:51 Yeah, I'll have to figure out how to get time info on my MacOS as well. 07:24:00 If I were doing this in hardware of my own design I'd just have a cycle counter. 07:24:14 A 64-bit counter running at 4 GHz won't wrap for 146 years. 07:25:40 Funny thing about computing is that issues sometimes don't crop up until decades later 07:26:02 e.g. year 2038 problem 07:26:47 I'll figure those out then. :-) 07:27:10 But yeah, that's another argument for having as near a total understanding of what you're doing as possible. 
07:27:14 Well, it's "good enough" 07:27:22 The more you have your head around things, the more you can head trouble off at the pass. 07:27:40 If the bug arises in a way you simply had no clue about, it's hard for you to prevent it. 07:28:19 No wonder people are in a huge fuss about program correctness 07:28:21 If you use 10 packages written by 10 other people and you only have a 10% understanding of each one, you're just not going to be able to foresee all the interactions. 07:28:29 *Proving* that your compiler is correct, not testing it. 07:28:33 Right. 07:28:54 Combinatorics all the way. 07:29:12 I remember a blog post on something about "There are only 4 billion floats so test them all!" 07:29:41 Ugh. 07:29:47 That hardly seems like the right answer to me. 07:29:49 https://randomascii.wordpress.com/2014/01/27/theres-only-four-billion-floatsso-test-them-all/ 07:30:00 Understand your solution, completely, and KNOW that it's right. 07:30:15 "The ceil function gave the wrong answer for many numbers it was supposed to handle, including odd-ball numbers like ‘one’." 07:30:31 I think something similar happened in intel processors 07:30:46 So that tells me that the guy that wrote the ceil function just didn't REALLY have a good understanding of floats. 07:30:56 "The floor and round functions were similarly flawed" 07:30:57 Wow 07:31:13 I don't like using other people's code, and I REALLY hate being asked to MAINTAIN other people's code. 07:31:19 A failure rate of 33% over all floats, too 07:31:51 Most of the code I've seen is either a. well documented or b. written by me 07:31:54 "Don't write your own code for that - use the library." 07:32:18 Ok, great, you go pull up the library, and you find it wasn't just a solution to this problem you have - it was intended to be a solution to a zillion other things as well. 07:32:31 And suddenly "really understanding" the library becomes a formidable undertaking. 
07:32:34 I would use libraries for quick dirty tasks but would reconsider it for a long term solution 07:32:45 And yet, if you don't, and you use it, you're exposing yourself to a lot of risk. 07:32:57 I'd rather write my own laser-focused solution for the problem I have in front of me. 07:33:06 Yes, I do that in Python all the time. 07:33:08 Libraries also make programmers rely on them a lot 07:33:18 Yeah, some languages encourage library usage 07:33:18 When I need a quick and dirty solution that I'm going to run a couple of times and then throw away. 07:33:30 When I can tell by looking that it's given me what I wanted, in my case. 07:33:38 There's also great unix tools like grep, awk, sed, bash etc. 07:33:47 Which are generic enough so that you can throw anything at them 07:33:56 But people will then accuse me of having "not invented here" syndrome, which is considered a bad label. 07:34:15 It's more like "not understood here" syndrome. 07:34:55 That being said, there are areas where libraries/packages absolutely make sense, for instance, LaTeX. 07:34:56 I just hate handing someone code while I'm thinking, "Shit, I don't really know if it's going to work every time or not." 07:35:16 Sure - we do have to use tools. 07:35:18 There's packages to make slides, chemical equations, syntax highlight code, make diagrams etc. 07:35:22 Right 07:35:52 Remember the leftpad debacle? 07:35:59 But when I use those, I can look at the output, which is what I'm actually *delivering*, and see that it's right. 07:36:08 No, I'm not familiar with that one. 07:36:34 We also use compilers and assemblers. 07:36:41 https://www.davidhaney.io/npm-left-pad-have-we-forgotten-how-to-program/ 07:36:57 There was a package on NPM called Leftpad, which just... pads a string with characters on the left 07:37:09 And when the developer removed it, it caused a lot of other packages to break 07:37:22 It's literally 11 lines long 07:38:22 Yeah, I agree totally with that guy. 
07:38:35 But many programmers are... addicted to libraries. 07:38:53 "How can I take someone else's work and use it?" 07:39:06 Shoot, you even get it in music. 07:39:10 All the remixing and stuff. 07:39:24 It's amazing how people can "program" by constantly searching things on the internet. 07:39:48 But what happened was that computers and some software made it possible for hordes of people with no musical skills whatsoever to declare themselves "musicians." 07:39:54 Right. 07:40:04 It's a trade-off, ultimately. 07:40:15 And if they're cute, and charismatic in any way, they get put on MTV or whatever and the public has a fit over them. 07:40:42 You don't even know if you're hearing their real *voice* - often it's digitally processed to a huge extent. 07:40:42 Made worse by tools that automate the process; automatic movie makers, poster templates, even photo filters 07:40:50 Yes. 07:41:05 It also has the effect of making it harder to separate the crap from the cream 07:41:06 I think we still have as many people born with the latent talent as we used to. 07:41:17 But they now have to compete with a huge "noise level" created by this other stuff. 07:41:24 So we're probably overlooking many of them. 07:41:44 Live musical performances are still hard to fake, so that's good. 07:41:50 ^not including lip-syncing 07:42:04 True, but you can still stick digital processing in that path to the speakers. 07:42:11 But it is a more... delicate situation for sure. 07:42:28 Here are the permutations, as promised: https://pastebin.com/raw/Q42TdEGT 07:42:55 Feels good to not have figured them out myself. 07:44:00 Since there are 120 possible permutations for 5 elements, I'll leave it running overnight 07:44:51 Using libraries also provides one with at least an "excuse" layer of protection against responsibility for mistakes. "Well, it turns out there was a bug in the blah-blah package. 07:45:16 Yeah, our customers suffered, but responsibility really lies with a third part... 
07:45:23 party 07:45:44 And by the way, why wasn't this caught by the test guys???? 07:45:53 That being said, I would absolutely use a cryptography library though. 07:46:10 Rolling my own performant SHA-512 is not a good idea 07:46:20 That's different - that's a "huge expertise area." 07:46:35 I suppose the best libraries are the ones for non-trivial tasks. 07:46:42 "Non-trivial" is highly relative, however. 07:46:49 right, I really see that as the point of the article you linked. 07:47:06 What he's really criticizing is people using libraries for those simple one-line / eleven-line things. 07:47:28 "Take on a dependency for any complex functionality that would take a lot of time, money, and/or debugging to write yourself." 07:47:30 It becomes hard to know where to draw that line, though. 07:47:32 Right. 07:47:41 What about a function to solve a set of linear algebraic equations? 07:47:53 The Haskell library has a lot of one-line function definitions, are those trivial? 07:48:01 Gaussian elimination isn't rocket science. 07:48:03 Each of them may be trivial, but together they are not trivial at all 07:48:25 Knowing math certainly helps. 07:48:32 Well, if they're intended to work together in some systematic way, then I think you have to regard the whole architecture as the thing you're using. 07:48:56 And computational complexity. Do I really need a super fast string searching algorithm? 07:50:12 That gets us into unnecessary performance optimization. 07:50:36 It's a hard issue. Libraries exist for a reason, otherwise we would never have built sufficiently large and _composable_ programs. 07:50:47 A CS buddy accuses me of "premature optimization" when I'm sweating over NEXT at the very start of developing a Forth. 07:50:58 But in that case I KNOW IN ADVANCE it's a critical piece of code. 07:51:11 I'm not blind to the overall system, even though I haven't written much of it yet. 
07:51:35 Oh yeah, I worried deeply about NEXT as well. 07:51:49 The notion that you can't know where the performance critical parts will be until you profile it just doesn't apply to that aspect of Forth. 07:52:12 It should be possible to profile every single word in Forth, just start the timer before you execute and stop it when you return, right? 07:52:39 Yeah, it's just that doing that will affect your performance quite a bit. 07:52:56 Try to be sure that what you're timing is actually THE WORD, and doesn't also include some of the work you do managing the timers. 07:53:06 Or... why not take a video of it and count the frames? 07:53:12 I've done that 07:53:30 FPS * Frames = seconds 07:53:48 So you can repeat something a thousand times and get the average 07:54:42 You mean just by executing the word from the command line? 07:55:00 Kind of 07:55:12 Wouldn't you also be timing portions of the interpreter loop? 07:55:23 You could make it a word 07:55:25 If you had a character print at the start and end of the word, you could try that. 07:55:31 But then you'd be timing parts of the print routine. 07:55:37 Yeah 07:56:10 I just plan to patch a new vector into NEXT and docol, and maybe dosem - not sure yet. 07:56:25 Have the extra code added manage the profile info. 07:56:46 And I'll have to pay attention to make sure that I don't measure the added code along with the original code. 07:59:49 I've done timing from the interpreter, but only when I have a loop I can run a huge number of times, and what I want is the time per iteration of the WHOLE LOOP. 08:00:02 There just by making the count huge you can amortize the interpreter overhead out. 08:00:07 Almost, at least. 08:00:17 My simplest one was to run this: 08:00:26 : downcount 1- .0<>me drop ; 08:00:30 10000000000 downcount 08:01:37 I timed that by typing "date" in another window. Hit enter in Forth, clicked to the other window and hit enter again as fast as I could. 
08:01:46 Then got another date command ready and waited. 08:01:53 I'd say my error was well under a second. 08:02:10 Took 37 seconds to run the loop, so I was likely within a couple of percent. 08:02:37 And of course I was only getting one second resolution on the timestamps. 08:03:09 But I only needed a rough feel - what I was after was an upper bound on how long my NEXT takes. 08:05:20 Ah. 08:05:26 What I should do is use an audible beep 08:05:49 That will introduce some overhead, perhaps on the order of 100s of milliseconds, but if I wait long enough I can get the error down 08:06:06 It can be audible because the TI-84 can send data to a port 08:06:24 You could also do it numerous times and do statistics on the data. 08:06:32 Then you can remove some of your own reaction time. 08:06:43 And also quantify the variance in your reaction time. 08:06:55 I assume you'd be starting and stopping a stopwatch on the beeps? 08:07:16 I would be recording the beeps and using an audio editor to subtract the time 08:07:23 Ah, much better. 08:07:27 Nice. 08:07:48 I'd expect that to be quite reliable. 08:08:20 So you can ascertain your measurement overhead by doing that for several different loop counts. 08:08:36 You know that any iteration count is going to be N* 08:08:49 --- join: kumool (~kumool@adsl-64-237-233-141.prtc.net) joined #forth 08:08:50 So with several counts you'll get a plot that has a non-zero y-intercept. 08:08:54 That's your overhead. 08:09:05 Right. 08:09:07 --- quit: kumool (Read error: Connection reset by peer) 08:09:28 Here's where I'll use a library: I'll use Mathematica to extract the peaks automatically and do the data analysis 08:09:29 --- join: kumool (~kumool@adsl-64-237-233-141.prtc.net) joined #forth 08:10:01 I haven't had the time to switch to fully using Python (someday), but Mathematica has excellent documentation. 08:10:52 I can only do 16 bit iterations, however. 
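The overhead-measurement idea above - time the loop at several different iteration counts, fit a line, and read the fixed overhead off the y-intercept - looks like this with synthetic numbers standing in for real measurements:

```python
# Estimate per-iteration time and fixed measurement overhead by fitting
# total_time = per_iteration * n + overhead over several loop counts.
# The timings below are synthetic stand-ins for real measurements.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

counts = [1_000_000, 2_000_000, 5_000_000, 10_000_000]
# pretend: 3.7 ns per iteration plus 0.25 s of fixed start/stop overhead
times = [3.7e-9 * n + 0.25 for n in counts]

per_iter, overhead = fit_line(counts, times)
assert abs(per_iter - 3.7e-9) < 1e-12   # slope: time per iteration
assert abs(overhead - 0.25) < 1e-6      # y-intercept: your overhead
```

With real data the points won't sit exactly on the line, but the intercept still separates the fixed start/stop cost from the per-iteration cost, which is the quantity of interest.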
08:10:58 Nesting them should give me 32 bit and more 08:11:05 ^the equivalent of 08:11:17 Yes. 08:24:29 hey guys 08:43:56 --- quit: kumool (Quit: Leaving) 08:46:27 --- join: kumool (~kumool@adsl-64-237-233-141.prtc.net) joined #forth 08:52:28 --- quit: siraben (Read error: Connection reset by peer) 08:53:27 --- join: siraben (~user@unaffiliated/siraben) joined #forth 09:00:10 --- quit: dys (Ping timeout: 252 seconds) 09:14:19 --- quit: siraben (Read error: Connection reset by peer) 09:15:20 --- join: siraben (~user@unaffiliated/siraben) joined #forth 09:15:37 Morning. 09:16:03 --- join: dys (~dys@tmo-116-133.customers.d1-online.com) joined #forth 09:19:25 --- quit: kumool (Ping timeout: 252 seconds) 09:20:31 --- quit: dys (Ping timeout: 250 seconds) 09:21:26 --- quit: siraben (Ping timeout: 246 seconds) 09:22:57 --- join: siraben (~user@unaffiliated/siraben) joined #forth 09:30:17 --- join: dys (~dys@tmo-114-12.customers.d1-online.com) joined #forth 09:40:27 --- quit: dys (Ping timeout: 250 seconds) 09:47:07 --- quit: ncv (Remote host closed the connection) 09:58:38 --- join: kumool (~kumool@adsl-64-237-233-141.prtc.net) joined #forth 10:00:52 --- join: [1]MrMobius (~default@c-73-134-82-217.hsd1.va.comcast.net) joined #forth 10:04:31 --- quit: MrMobius (Ping timeout: 252 seconds) 10:04:31 --- nick: [1]MrMobius -> MrMobius 10:29:30 --- join: dys (~dys@tmo-098-168.customers.d1-online.com) joined #forth 11:09:12 --- quit: dave9 (Quit: dave's not here) 12:46:21 --- quit: kumool (Ping timeout: 244 seconds) 12:55:39 --- join: mtsd (~mtsd@94-137-100-130.customers.ownit.se) joined #forth 13:16:07 --- join: kumool (~kumool@adsl-64-237-233-141.prtc.net) joined #forth 13:46:47 gforth annoys me sometimes. 13:47:11 What the hell is COMP' meant to even mean? Why can't I just ' or ['] the word ; ? 13:50:50 Maybe I should switch to ciforth. 13:55:38 `: ;; ['] ; EXECUTE ['] IMMEDIATE EXECUTE ; IMMEDIATE` 13:56:25 Works in ciforth/lina. 
Does not work in gforth - ":1: Cannot tick compile-only word (try COMP' ... DROP)", SIGH 13:57:51 `: ;; [COMP'] ; DROP EXECUTE...` why why why why why 14:02:48 What's ;; supposed to do? 14:03:13 I use that for a two-level return, and though it's a primitive it behaves as 14:03:28 : ;; R> DROP R> DROP ; 14:05:19 It just saves me typing IMMEDIATE after all my macros 14:05:32 Ah, I see. 14:06:01 Macro is just a string that gets compiled when its name is invoked? 14:07:18 it's a word that executes its runtime behaviour during compile time, you know, `: TEST ." TEST!" ; immediate` 14:07:49 I called it a macro because that's what the term "immediate word" says to me 14:37:22 Oh, ok - so immediate word. 14:37:46 In that old old book of Moore's, he had the concept of compiling a string to the dictionary for later execution. 14:37:57 In fact, I think that's what his "definitions" were at that time. 14:38:16 He did this by having the string serve as a third possible source for the next execution word. 14:38:25 So you had the keyboard, a disk block, or a macro string. 14:39:06 It's sometimes hard to parse his explanations in that book - he's not the most "clear" writer around. 14:39:11 KipIngram: sounds suspiciously like Tcl or Scheme's eval being fed the input at a later date. Allows for quite late binding. 14:39:27 But as best I can tell at that time he didn't even have the concept of our contemporary "threaded" definitions. 14:40:08 I think he had machine code definitions / primitives, and these "macro" definitions. 14:41:48 I guess he got tired of typing in the same stuff over and over again to get his 'interpreter' to do specific stuff so he thought "hey! I know, I'll let it remember such an input to do again at a later time" 14:42:03 Yeah. 15:36:47 From my first-edition Starting Forth: "Here's the definition of semicolon: `: ; COMPILE EXIT r> DROP ; IMMEDIATE`" 15:36:57 now that's elegance 15:39:15 --- quit: tabemann (Ping timeout: 276 seconds) 15:47:59 What's the r> drop for? 
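The "immediate word = macro" point above can be made concrete with a toy outer interpreter: while compiling, ordinary words are appended to the definition being built, but words flagged immediate execute right away - which is exactly how ; manages to end a definition from inside one. A stripped-down Python model (not gforth or any real Forth; the token handling is deliberately minimal):

```python
# Toy model of immediate words. During compilation most words are
# appended to the definition under construction; words flagged
# immediate run right away, which is what makes them macro-like.

stack, dictionary, compiling, current = [], {}, False, None

def word(name, immediate=False):
    def deco(fn):
        dictionary[name] = (fn, immediate)
        return fn
    return deco

@word('dup')
def dup(): stack.append(stack[-1])

@word('+')
def add(): stack.append(stack.pop() + stack.pop())

@word(':')
def colon():
    global compiling, current
    compiling, current = True, []

@word(';', immediate=True)          # ; must run DURING compilation
def semi():
    global compiling
    name, body = current[0], current[1:]
    dictionary[name] = ((lambda: [run(w) for w in body]), False)
    compiling = False

def run(tok):
    if tok in dictionary:
        fn, imm = dictionary[tok]
        if compiling and not imm:
            current.append(tok)     # ordinary word: compile it
        else:
            fn()                    # immediate, or interpreting: run it
    elif compiling:
        current.append(tok)         # definition name or literal
    else:
        stack.append(int(tok))      # number in interpret state

def interpret(src):
    for tok in src.split():
        run(tok)

interpret(': double dup + ;')
interpret('21 double')
print(stack)   # prints [42]
```

If ; were not flagged immediate it would just be compiled into the body like any other word and compilation would never terminate - the same reason gforth treats it as compile-only and complains when you try to tick it.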
15:48:06 I believe my definition for ; is 15:48:53 : ; COMPILE EXIT [ IMMEDIATE 15:49:12 Except I have some tail optimization stuff in there, but ^ that was the "starting point." 15:49:36 I call EXIT (;) 16:42:44 --- quit: nighty- (Quit: Disappears in a puff of smoke) 17:02:16 --- quit: kumool (Quit: Leaving) 17:05:12 --- join: kumool (~kumool@adsl-64-237-233-141.prtc.net) joined #forth 17:14:33 --- join: rdrop-exit (~markwilli@112.201.162.180) joined #forth 17:16:19 : ; ( -- ) semi & [ ; directive 17:19:01 What's directive? 17:19:25 : directive ( -- ) immediate compile-only ; 17:19:42 It’s for compiler directives. 17:19:58 Permutations for five stack elements: https://pastebin.com/raw/87cZNq2P 17:20:09 & is my shorthand for POSTPONE 17:20:23 I could only search 40 out of 120 (5 factorial) possible ones until I had a heap overflow error on my computer 17:20:24 Ah I see. 17:20:43 I should have an alias for that, typing on my calculator is slow as it is already. 17:21:06 Actually for directives, I usually provide both a compile-time and a run-time stack picture: 17:21:18 : ; { -- } ( -- ) semi & [ ; directive 17:22:32 What's { } ? 17:23:03 It’s my notation for the compile-time stack picture 17:23:12 So it's a comment? 17:23:18 yes 17:23:25 I see. 17:25:49 For directives I comment what the word does when it executes during compilation, and what the effect is later, when the word it’s compiled into runs. 17:26:21 : then { a -- } ( -- ) resolve ; directive 17:29:23 : ['] { -- } ( -- ca ) ' & literal ; directive 17:38:08 I just read the permutations link you posted 17:41:44 Should be correct 17:43:12 I’m of the opinion that one should factor so as not to juggle that many items on the stack 17:43:56 rdrop-exit: I share that opinion, but I've found that occasionally that leads to very difficult coding. 
17:43:57 at any one time 17:44:58 Be right back 17:44:59 --- quit: siraben (Quit: ERC (IRC client for Emacs 26.1)) 17:46:30 I also think you can mitigate that problem to some extent by global changes to your code flow, so that you put things on the stack as much as possible to avoid needing things deep in the stack. 17:47:01 --- join: siraben (~user@unaffiliated/siraben) joined #forth 17:47:17 --- join: nighty- (~nighty@kyotolabs.asahinet.com) joined #forth 17:48:17 There's something about the regularity of the generated code, perhaps it corresponds to some numbering system 17:48:20 Something base 3, because there are only three possible instructions, >r r> and swap 17:48:26 rdrop-exit: I agree with that as well. 17:48:29 Juggling five items is excessive. 17:48:34 KipIngram: My return stack is slower than the parameter stack because it uses the IX register 17:49:27 When you’re juggling that much an alarm bell should go off in your head that it’s time to rethink and refactor 17:49:39 My thoughts exactly. 17:49:55 Agree totally - I'm just contending that you can't ALWAYS (100% of the time) avoid it. 17:50:01 In every single thing you'll ever code. 17:50:02 I even get concerned when I'm using the return stack in my CODE words to temporarily store items 17:50:03 brb 17:50:07 e.g. 2SWAP 17:50:59 KipIngram: This is getting closer to more code generation. 17:51:16 I could feasibly simulate my registers and push/pops to them and say, I want a CODE word that does 2SWAP 17:51:35 siraben: You really bring a totally different approach to this from mine. 17:51:35 It would help somewhat if I were really stuck 17:51:41 I'm not really an "automation" sort of guy. 17:51:41 KipIngram: How so? 17:51:44 Ah, haha. 17:52:03 I was just really interested in relational programming and this seemed like a good target. 17:52:06 I try to wrap my mind as tightly as possible around the problem, and then pour it out into code.
17:52:25 me too 17:52:44 I like both ways. Sometimes I just want to explore without much mental effort 17:52:52 e.g. how do I permute all 4 items on a stack with only three instructions? 17:53:22 * siraben sees that 2NIP 2TUCK 2ROT 2OVER are still unimplemented 17:53:23 Hm. 17:53:57 That type of thinking belongs in your optimizer, if you have one 17:54:22 I started a 32 bit counter on my calculator before I went to bed, now it's at ~459500 17:54:29 But I forgot to start a timer! Dammit. 17:54:49 A timer on my phone or something would have allowed me to check the loop 17:55:17 KipIngram: An alternative strategy I came up with is to use my non-blocking KEY variant called KEYC, asking for a key until the user presses ENTER 17:55:27 So you could press ENTER and stop the timer at the same time 17:55:32 rdrop-exit: Optimizer? 17:55:49 Ah, looks like the 32 bit counter overflowed 17:56:17 Or maybe not, who knows. 17:56:20 If you want to optimize stack operations across words, e.g. 17:57:01 DUP followed by a DROP 17:57:19 I'll generate a bunch of those 17:57:40 --- join: wa5qjh (~quassel@110.54.188.224) joined #forth 17:57:40 --- quit: wa5qjh (Changing host) 17:57:40 --- join: wa5qjh (~quassel@freebsd/user/wa5qjh) joined #forth 18:01:40 rdrop-exit: Here's 300 of them https://pastebin.com/raw/jRiH03ub 18:01:46 I see your "DUP DROP" example 18:02:20 There’s no point to generating them in the first place 18:02:25 :-) 18:02:40 Curiosity 18:04:20 Perhaps a good Forth editor should be more interactive. i.e. 
As you type a word definition in, there should be a box to the side that shows the state of the stack, return stack and other variables 18:04:34 Not on an embedded system, of course 18:05:06 So the stack juggling that you do in your head can be made visible, and when you delete words from your current definition, the effect is reversed 18:05:31 If your Forth is SRT+inlining you can add an optimizer to deal with stack movement across words 18:05:41 In my host Forth I have a primitive called REFLECT which copies a snapshot of the Forth VM registers and stacks to memory 18:07:17 SRT? 18:07:29 Subroutine-Threaded 18:07:36 Ah, oh well. 18:08:28 Not possible here, because it'll crash my target. 18:09:28 Your target can’t handle SRT? 18:09:51 Technically yes, but if I add it when I want to store programs in a different memory location, it'll crash it 18:10:01 The program counter can't exceed $C000 18:10:19 So that's why indirect is the best, treat WORDS as data instead of code. 18:10:56 I remember you explaining the $C000 thing the other day 18:11:03 Yeah, it's weird 18:11:18 Be right back, getting out of a car 18:11:20 --- quit: siraben (Quit: ERC (IRC client for Emacs 26.1)) 18:11:32 brb 18:12:12 --- join: dave9 (~dave@90.20.215.218.dyn.iprimus.net.au) joined #forth 18:12:34 hi 18:16:05 --- quit: dddddd (Remote host closed the connection) 18:23:46 --- join: siraben (~user@unaffiliated/siraben) joined #forth 18:23:51 --- join: siraben` (~user@182.232.32.212) joined #forth 18:24:18 Hello dave9 18:24:36 hi rdrop-exit 18:24:48 --- join: smokeink (~smokeink@42-200-119-209.static.imsbiz.com) joined #forth 18:26:03 --- quit: siraben (Client Quit) 18:26:36 --- join: siraben (~user@unaffiliated/siraben) joined #forth 18:32:02 --- quit: siraben` (Quit: ERC (IRC client for Emacs 26.1)) 18:32:28 --- part: siraben left #forth 19:05:58 --- quit: wa5qjh (Remote host closed the connection) 20:22:13 --- quit: dave9 (Quit: dave's not here) 20:26:56 --- quit: kumool
(Quit: Leaving) 21:45:13 --- quit: dys (Ping timeout: 252 seconds) 21:46:47 --- quit: smokeink (Remote host closed the connection) 21:47:17 --- join: smokeink (~smokeink@42-200-119-209.static.imsbiz.com) joined #forth 21:47:59 --- quit: smokeink (Remote host closed the connection) 22:01:55 --- quit: proteus-guy (Ping timeout: 252 seconds) 22:06:07 --- join: proteus-guy (~proteusgu@2403:6200:88a6:329f:c150:b4eb:ca74:c2f1) joined #forth 22:27:29 --- join: tabemann (~tabemann@2602:30a:c0d3:1890:d004:a860:b5ff:7c1c) joined #forth 22:47:48 --- quit: reepca (Ping timeout: 260 seconds) 23:59:59 --- log: ended forth/18.10.14
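The stack-permutation search discussed earlier (siraben's "how do I permute all 4 items on a stack with only three instructions?", and the base-3 observation that each slot in a sequence is one of >r, r>, swap) can be sketched as a breadth-first search over machine states. A Python sketch under my own state encoding, not the generator behind the pastebin links:

```python
from collections import deque

# Breadth-first search for the shortest sequence of the three
# instructions mentioned in the log (>r, r>, swap) that turns one
# 4-item data stack into another, with the return stack empty again
# at the end. Stacks are tuples, top of stack last.

def step(state, op):
    ds, rs = state
    if op == ">r" and ds:
        return (ds[:-1], rs + (ds[-1],))
    if op == "r>" and rs:
        return (ds + (rs[-1],), rs[:-1])
    if op == "swap" and len(ds) >= 2:
        return (ds[:-2] + (ds[-1], ds[-2]), rs)
    return None  # instruction not applicable (would underflow)

def find_sequence(start, goal):
    init = (tuple(start), ())
    paths = {init: []}
    queue = deque([init])
    while queue:
        state = queue.popleft()
        if state == (tuple(goal), ()):
            return paths[state]
        for op in (">r", "r>", "swap"):
            nxt = step(state, op)
            if nxt is not None and nxt not in paths:
                paths[nxt] = paths[state] + [op]
                queue.append(nxt)
    return None  # unreachable

# ROT applied to the top three of ( a b c d ): a b c d -> a c d b
print(find_sequence("abcd", "acdb"))  # ['>r', 'swap', 'r>', 'swap']
```

The state space grows quickly with stack depth, which is in the same spirit as the heap overflow reported after searching 40 of the 120 five-item permutations.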
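The "DUP followed by a DROP" case that rdrop-exit names as an optimizer target can be handled by a one-pass peephole rule. A generic Python sketch, not any particular Forth's optimizer:

```python
# Toy peephole pass: DUP immediately followed by DROP is a no-op,
# so both tokens can be deleted. Scanning against the output list
# (rather than the input) lets cancellations cascade, e.g. the pair
# hiding inside  dup dup drop drop.
def peephole(tokens):
    out = []
    for tok in tokens:
        if out and out[-1] == "dup" and tok == "drop":
            out.pop()        # cancel the dup instead of emitting drop
        else:
            out.append(tok)
    return out

print(peephole(["over", "dup", "drop", "+"]))    # ['over', '+']
print(peephole(["dup", "dup", "drop", "drop"]))  # []
```

In a subroutine-threaded Forth with inlining, as described in the log, such rules would run after inlining so that the DUP and the DROP coming from two different words can meet.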