00:00:00 --- log: started forth/18.10.28 00:31:05 --- join: dys (~dys@tmo-101-161.customers.d1-online.com) joined #forth 03:08:06 --- log: started forth/18.10.28 03:08:06 --- join: clog (~nef@bespin.org) joined #forth 03:08:06 --- topic: 'Forth Programming | logged by clog at http://bit.ly/91toWN | If you have two (or more) stacks and speak RPN then you're welcome here! | https://github.com/mark4th' 03:08:06 --- topic: set by proteusguy!~proteus-g@cm-134-196-84-89.revip18.asianet.co.th on [Sun Mar 18 08:48:16 2018] 03:08:06 --- names: list (clog wa5qjh NB0X-Matt-CA dys pierpal smokeink dave0 jedb rdrop-exit Labu dne MrMobius Zarutian ncv tabemann Lord_Nightmare catern +KipIngram APic irsol phadthai a3f_ X-Scale ttmrichter WilhelmVonWeiner jn__ malyn dave9 koisoke bb010g jimt[m] dzho diginet2 rain2 newcup +crc jhei rann amuck C-Keen rprimus yunfan jackdaniel mstevens vxe ovf siraben djinni pointfree[m] carc moony nerfur lonjil rpcope bluekelp zy]x[yz FatalNIX ecraven proteus-guy groovy2shoes) 03:08:06 --- names: list (pointfree sigjuice nighty- johnnymacs Keshl cheater) 03:13:53 --- quit: nerfur (Ping timeout: 250 seconds) 03:18:01 --- quit: wa5qjh (Remote host closed the connection) 05:09:12 --- quit: smokeink (Remote host closed the connection) 05:29:17 --- join: smokeink (~smokeink@118.131.144.142) joined #forth 05:31:26 --- quit: smokeink (Remote host closed the connection) 05:31:47 --- join: smokeink (~smokeink@118.131.144.142) joined #forth 05:32:00 --- quit: smokeink (Remote host closed the connection) 05:32:21 --- join: smokeink (~smokeink@li1543-118.members.linode.com) joined #forth 05:45:10 Morning guys. 06:00:47 --- join: proteusguy (~proteus-g@cm-134-196-84-17.revip18.asianet.co.th) joined #forth 06:00:51 --- mode: ChanServ set +v proteusguy 06:02:58 tabemann: My "heap" isn't really anything very sophisticated - it's just a simple fixed-block size allocator/deallocator.
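The fixed-block allocator itself isn't shown in the log; as a rough illustration of the idea (a pool of equal-sized pages threaded onto a free list), here is a minimal C sketch. All names and sizes are invented, not KipIngram's actual implementation:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical fixed-block-size allocator: every page is the same size,
 * and free pages are threaded onto a singly linked free list using the
 * page's own first cell as the link. */
#define NPAGES     16
#define PAGE_CELLS 32                      /* page payload in pointer-sized cells */

static void *pool[NPAGES][PAGE_CELLS];     /* the backing store */
static void *free_list = NULL;

static void pool_init(void) {
    /* Push every page onto the free list. */
    for (int i = 0; i < NPAGES; i++) {
        pool[i][0] = free_list;
        free_list = pool[i];
    }
}

static void *page_alloc(void) {
    if (!free_list) return NULL;           /* pool exhausted */
    void *p = free_list;
    free_list = ((void **)p)[0];           /* pop the head of the free list */
    return p;
}

static void page_free(void *p) {
    ((void **)p)[0] = free_list;           /* push back onto the free list */
    free_list = p;
}
```

Allocation and deallocation are both O(1), which is what makes a fixed-block scheme attractive inside a Forth kernel.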
It doesn't play much of a role in the core system operation; the system runs "within" heap pages after they're allocated. So my error recovery uses the heap to create space for an image of the system, and I restore that if I hit an error. 06:03:39 So of course the heap could become corrupt, but it's the best I have so I use it that way. Most of my typical errors can be recovered from just fine with this. 06:04:05 But in a system like this where there really aren't any restrictions the right kind of error could gob my heap and be unrecoverable. 06:06:27 In my last Forth I realized that there was a whole class of errors that involved manipulating vocabularies and then encountering an error before the end of the interpreted line. Basically leaves the dictionary in an incoherent state. I tried in that system to cope with those piecemeal, and got most of them but I was never sure it was thorough. 06:07:06 So this time I made that sort of a sledgehammer - the whole process gets imaged before interpretation begins, and if an error occurs that image is restored. Stacks, dictionary, everything. 06:07:10 Seems to work pretty good. 06:11:47 --- quit: smokeink (Remote host closed the connection) 06:12:08 --- join: smokeink (~smokeink@118.131.144.142) joined #forth 06:15:25 hm. i have had a long-festering forth-ish project with quotations and first-class lexical environments. if something was unresolved in context it was just invalid. 06:18:48 --- quit: smokeink (Remote host closed the connection) 06:19:09 --- join: smokeink (~smokeink@42-200-118-172.static.imsbiz.com) joined #forth 06:21:44 --- join: jedb_ (jedb@gateway/vpn/mullvad/x-djmrgpvdjebzcsof) joined #forth 06:22:34 --- quit: jedb (Ping timeout: 252 seconds) 06:27:12 koisoke (IRC): Quotations in Forth? How would you do that beyond just compiling a word?
06:27:26 Forth has a lot of quotation-like ideas 06:33:34 IF et al taking a quotation instead of compiling a jump over following code 06:52:40 --- nick: jedb_ -> jedb 06:57:17 Can you give an example? 06:59:19 0 1 < { foo } { bar } if vs 0 1 < if foo else bar then 07:03:17 is there a client that suffixes people's names with (IRC) 07:03:43 i saw a guy doing that elsewhere and thought he was just a weird dude, but now i see it here too 07:04:50 have not seen such 07:05:22 i could see doing that with a log when using it elsewhere 07:08:05 koisoke: Can you describe what actually happens at run time when that line above is processed? Do the xt's for foo and bar get placed on the stack, and then one is selected by if? 07:08:36 Or is it all handled at compile time somehow? 07:12:36 If it moves the xt's through the stack at runtime, then I regard that as an inefficiency. The traditional code produces the flag, and then immediately just jumps or not based on it. It's as efficient as you could have it be. 07:13:47 Plus the quoted method just doesn't appeal to my sense of readability - you don't actually see that you're making a decision until "later." 07:14:15 That's especially the case if you have some anonymous function thing where your stuff in the {}'s can be a long string of code. 07:14:29 The usual approach produces the flag, and then immediately tells you you're now making a choice. 07:16:31 --- quit: pierpal (Quit: Poof) 07:16:53 --- join: pierpal (~pierpal@95.234.60.245) joined #forth 07:18:07 --- join: nerfur (~nerfur@broadband-95-84-184-13.ip.moscow.rt.ru) joined #forth 07:18:16 KipIngram: XTs for foo and bar get put on the stack 07:18:37 optionally optimized at compile time 07:19:42 the stuff in the {}s can be a long string of code, but the XTs for the compiled {} contents get put on the stack 07:19:54 The sort of tinkering with if I've thought about trying is to require that the inner clauses be factored into words.
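As a hedged sketch of the run-time behavior koisoke describes (both quotation xts pushed, one chosen and executed by if), in C rather than Forth, with function pointers standing in for xts and all names invented:

```c
#include <assert.h>

/* Each { ... } compiles to an execution token; at run time both xts sit
 * on the stack and the quotation-style IF picks one based on the flag.
 * This mirrors "0 1 < { foo } { bar } if". */
typedef int (*xt)(int);

static int foo(int n) { return n + 1; }   /* stand-in for { foo } */
static int bar(int n) { return n - 1; }   /* stand-in for { bar } */

/* flag, xt-if-true, xt-if-false: select and execute one quotation. */
static int quotation_if(int flag, xt word_true, xt word_false, int arg) {
    xt chosen = flag ? word_true : word_false;
    return chosen(arg);
}
```

The cost KipIngram objects to is visible here: two xts move through the stack and an indirect call is made, where a compiled IF would be a plain conditional branch.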
So in the simplest case (replacing if ... then) you'd get the flag in place and then just say if ... 07:19:59 --- quit: smokeink (Remote host closed the connection) 07:20:03 If the flag is true, word is executed, if not it's skipped. 07:20:12 So no jump is needed, and no figuring out the jump distance. 07:20:25 (if) just conditionally nudges IP forward by a cell. 07:20:28 --- join: smokeink (~smokeink@118.131.144.142) joined #forth 07:21:02 For if ... else ... then you'd have a different word. So 07:21:10 inner clauses? 07:21:15 ...make flag.. ifelse ... 07:21:28 The ...'s in if ... else ... then 07:21:37 The code clauses you're making conditional. 07:22:13 I haven't cared about those ideas enough to try them. 07:22:26 But they force factoring, which I regard as a potentially good thing. 07:22:36 ok but "factored into words" doesn't mean anything there that i can discern 07:23:00 Instead of if ..a.. else ..b.. then you'd have 07:23:05 : word1 ..a.. ; 07:23:10 : word2 ..b.. ; 07:23:22 ifelse word1 word2 07:23:33 so quotations by another name :P 07:23:38 All I meant by "factored into words" was that code strings had been made definitions. 07:23:51 Well, I would at no point be putting xt's onto the data stack. 07:24:25 So what you call quotations - if I said { a b c } somewhere would that cause a definition containing a b c to be made? 07:24:32 An unnamed one? 07:24:33 where would IFELSE pluck them from? 07:24:52 The code stream following ifelse. 07:25:07 I'm not sure how I'd make ifelse work yet, so let's talk about if. 07:25:15 ... if ... 07:25:28 When if executes, IP is pointing at 07:25:36 If the flag is true, if just removes it and continues. 07:25:56 If the flag is false, if removes it and increments IP by one cell. 07:26:57 that makes ifelse dependent on the parser behavior which i think is pretty gross 07:27:35 I'm not sure what you mean.
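The "(if) just conditionally nudges IP forward by a cell" idea can be simulated with a toy token interpreter in C. The tokens and the DOUBLE word are invented for illustration:

```c
#include <assert.h>

/* Toy threaded definition: a flat array of tokens. OP_IF pops a flag;
 * when the flag is false it steps IP over the single word that follows,
 * so no jump offset ever needs to be compiled. */
enum { OP_IF, OP_DOUBLE, OP_END };

static int run_definition(int flag) {
    const int code[] = { OP_IF, OP_DOUBLE, OP_END };  /* ... (if) DOUBLE */
    int stack[8], sp = 0;
    stack[sp++] = 10;     /* some operand already on the stack */
    stack[sp++] = flag;   /* the flag that (if) will consume */
    for (const int *ip = code; *ip != OP_END; ip++) {
        if (*ip == OP_IF) {
            if (!stack[--sp])
                ip++;             /* false: skip the next cell */
        } else if (*ip == OP_DOUBLE) {
            stack[sp - 1] *= 2;   /* : DOUBLE DUP + ; in spirit */
        }
    }
    return stack[sp - 1];
}
```

Because the skip distance is always exactly one cell, it can be "canned into the primitive" as described, and the if word needn't be immediate.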
And like I said, I'm not sure I see a clean ifelse implementation, but that idea for if (which handles only the if ... then case) seems fine to me. 07:27:50 Parsing wise it's much simpler than what we do now - that if doesn't have to be immediate. 07:28:02 That maps very cleanly onto compiled code. 07:29:28 it maps very cleanly onto a conditional jmp. it doesn't map so cleanly into a conditionally branching definition 07:29:40 For ifelse that approach works fine if the second word needs to be the one executed - ifelse just detects that and nudges IP past the first word. 07:29:55 But if we need to execute the first word, it's not immediately obvious to me how to keep the second word from executing. 07:30:23 { ... } evaluates to an xt 07:30:27 Well, it's not a normal Forth jump, because it doesn't require a distance field. 07:30:34 The distance is known and can be canned into the primitive. 07:30:55 At run time or compile time? 07:31:01 compile time 07:31:37 (am aware of some locals libraries that use that syntax too but don't mean that here) 07:32:18 --- quit: smokeink (Remote host closed the connection) 07:32:33 --- quit: pierpal (Ping timeout: 244 seconds) 07:32:39 --- join: smokeink (~smokeink@li1543-118.members.linode.com) joined #forth 07:32:42 --- quit: smokeink (Remote host closed the connection) 07:33:04 --- join: smokeink (~smokeink@118.131.144.142) joined #forth 07:33:51 --- quit: smokeink (Remote host closed the connection) 07:34:14 --- join: smokeink (~smokeink@61-216-40-75.HINET-IP.hinet.net) joined #forth 07:34:17 But the xt's appear on the stack at run-time? 07:34:33 See, I've been trying to go the other way - I've been working hard to factor toward shorter definitions. 07:34:49 The last thing I want to do is stick a bunch of logically name-able code into a higher level definition. 07:35:12 I'd name those code strings, temporarily, use the names in the higher def, and then drop the names. 07:35:41 Also, how do you manage that?
Here you are in the middle of a definition, and you need to create a new (unnamed) definition. 07:35:53 You have to have some fancier footwork than a simple linear dictionary to make that work. 07:36:12 Are you putting your definitions in a heap or something? 07:36:30 the XTs appear on the stack in principle but i keep meaning to write an optimizer that will compile a conditional branch without pushing to the stack 07:36:52 Ok, so an optimizer that will produce the code I get to start with? 07:36:55 ;-) 07:37:36 I do get the argument that { code } { code } if is more "postfixy" than what we do now. 07:37:45 But I only think of Forth as postfix for DATA. 07:38:14 yes but you have a compiler that can't cope with data as code because it needs to pull its conditional branches from the parse area 07:38:17 I also do see that you could do some interesting dynamic things with that - you could have the xt's generated in some dynamic way at run-time. 07:38:28 Whereas the traditional approach has everything hard nailed down at compile time. 07:38:42 Ok ^ I think we just said the same thing. 07:38:47 I agree with that. 07:39:00 i think we did, with different moral valences :D 07:39:28 If you're interested in whole new areas of capability, where you're effectively treating code chunks as data, then I think I'd prefer that "code data" be handled postfix like all other data. 07:39:40 --- quit: smokeink (Remote host closed the connection) 07:39:43 yes 07:40:01 --- join: smokeink (~smokeink@118.131.144.142) joined #forth 07:40:03 So fair enough - I think you're probably on the right track for a "Forth++" with more dynamic flexibility. 07:40:27 I'm interested in bringing some type support into my Forth. 07:40:46 But I pretty consciously decided I was not going to try to cross that "static dynamic" divide. 07:41:00 All of the extra things I plan to get from types etc. I plan to get at compile time.
07:41:08 fair enough 07:41:15 What gets compiled and later executed will look pretty entirely like a standard Forth system. 07:41:30 Types will affect which words I compile, but once compiled they'll just be words. 07:43:49 there is a certain standard of humility here where one has implemented something only half-assedly with promissory notes. i have implemented polymorphism based on TOS only half-assedly with promissory notes 07:50:41 KipIngram: regarding 'types': never understood why some people are enamored with them 07:51:40 --- quit: proteusguy (Ping timeout: 244 seconds) 08:04:24 Types are good abstractions in large programs. 08:06:58 --- quit: smokeink (Remote host closed the connection) 08:10:50 Zarutian: I don't want them in my core system. But what's really motivating me there is the desire to run an environment that behaves somewhat like Octave (Matlab). 08:11:09 I.e., to have A B + "just work," whether A and B were scalars, vectors, matrices, etc. 08:12:19 There's a limit to how much trouble I'd go to for that, but to have it work statically (compile time) feels like something that would be fairly easy. 08:28:44 --- join: dddddd (~dddddd@unaffiliated/dddddd) joined #forth 08:29:58 KipIngram (IRC): Sounds like you want generic operators. 08:30:24 I would go with a type stack, and + would fetch the appropriate word "like +VEC" and execute it 08:30:42 A type stack is exactly what I have in mind. 08:30:48 Right. 08:31:03 The top elements of the type stack will get used as part of the search criteria. 08:31:08 Should only take a byte/entry 08:31:13 Well in my Forth if I were to do it 08:31:33 And the words will have their stack effects documented in the headers, so the compiler can track what the type stack "will be." 08:31:45 Right. 08:31:57 Seems like the obvious and natural way to go. 08:32:02 What will the entries actually be on the parameter stack? 08:32:05 A pointer? 08:32:21 Well, integers will still just be integers.
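A rough model of the type-stack dispatch being discussed: a generic + consults the top two type-stack entries to pick the concrete word (like +VEC), and records the stack effect so later words can be resolved too. The types, word ids, and the "result keeps the operand type" rule are all invented for illustration:

```c
#include <assert.h>

typedef enum { T_INT, T_VEC } type_t;                /* hypothetical types  */
typedef enum { W_PLUS, W_PLUS_VEC, W_NONE } word_t;  /* words "+" / "+VEC"  */

static type_t tstack[16];   /* the compile-time type stack */
static int tsp = 0;

static void tpush(type_t t) { tstack[tsp++] = t; }

/* Generic "+": pop the two operand types, select which word to compile,
 * then push the result type (the documented stack effect). */
static word_t compile_plus(void) {
    type_t b = tstack[--tsp];
    type_t a = tstack[--tsp];
    word_t w = (a == T_INT && b == T_INT) ? W_PLUS
             : (a == T_VEC && b == T_VEC) ? W_PLUS_VEC
             : W_NONE;
    tpush(a);   /* assumed rule: the result has the operands' type */
    return w;
}
```

Everything here happens at compile time, matching the stated goal of staying on the static side of the "static dynamic" divide: once W_PLUS_VEC is compiled, it's just a word.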
08:32:28 Floats will still be floats on the FPU stack. 08:32:46 But yeah, larger stuff like vectors and matrices will be allocated in a heap, and I'll have a pointer to them. 08:33:14 o.O a vector stack, matrix stack... 08:33:19 Heap allocation is better, though. 08:33:26 Because those things would be variable size. 08:33:29 Right. 08:33:46 I've been experimenting with data structures 08:33:51 Namely that of a linked list 08:34:11 But this is a whole tier of complexity beyond a "standard Forth system," so I didn't want to try to build that kind of thing in from the ground floor. 08:34:37 So I have a linked list with a NULL pointer at the end, and one of my words is called MAPL which takes an execution token and a pointer to the start of the linked list and applies the token to each element of the list. 08:34:52 --- quit: nighty- (Quit: Disappears in a puff of smoke) 08:35:00 So if I had : DOUBLE DUP + ; and a list 1->2->3->4, MAPL DOUBLE over it would give 08:35:05 2->4->6->8 08:35:25 Binary trees are next 08:39:15 Well, the Forth dictionary is a linked list, but in addition to that I have memory pages belonging to a process linked in one too. 08:39:28 So when I kill a process I can find all of its memory and free it. 08:39:44 And the memory allocator has a free list of released pages - that's also a linked list. 08:40:01 I don't really have any "generic list utilities" though. 08:40:13 The words that manage those things just navigate the lists. 08:40:44 The process memory page list is actually doubly linked.
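The MAPL word described above maps directly onto a short C sketch, with a function pointer standing in for the execution token (names are illustrative):

```c
#include <assert.h>
#include <stddef.h>

/* NULL-terminated singly linked list, as described. */
typedef struct node { int value; struct node *next; } node;
typedef int (*xt)(int);

static int double_xt(int n) { return n + n; }  /* : DOUBLE DUP + ; */

/* MAPL: apply the execution token to each element of the list in place. */
static void mapl(xt fn, node *head) {
    for (node *p = head; p != NULL; p = p->next)
        p->value = fn(p->value);
}
```

Running mapl with double_xt over 1->2->3->4 turns it into 2->4->6->8, matching the example in the conversation.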
08:44:10 --- quit: tabemann (Disconnected by services) 08:44:30 --- join: tabemann (~tabemann@2602:30a:c0d3:1890:1870:3255:3792:ba89) joined #forth 08:45:05 --- join: tabemann_ (~androirc@2600:380:6822:81bc:d48:9b02:4870:4f65) joined #forth 08:46:18 --- quit: tabemann_ (Remote host closed the connection) 08:53:55 --- quit: dave0 (Quit: dave's not here) 09:15:12 --- join: pierpal (~pierpal@95.234.60.245) joined #forth 09:28:45 --- quit: pierpal (Read error: Connection reset by peer) 09:28:59 --- join: pierpal (~pierpal@95.234.60.245) joined #forth 09:40:09 --- quit: Zarutian (Ping timeout: 240 seconds) 09:43:41 --- join: Zarutian (~zarutian@173-133-17-89.fiber.hringdu.is) joined #forth 10:07:04 “A programming language is low level when its programs require attention to the irrelevant.”—Alan Perlis, Epigrams on programming 10:07:21 I saw this quote and LOLED real hard 10:08:10 Because to this philosophy, Forth is pretty high level 10:28:27 --- quit: ncv (Remote host closed the connection) 10:34:56 --- quit: pierpal (Ping timeout: 244 seconds) 10:46:34 --- join: pierpal (~pierpal@95.234.60.245) joined #forth 11:00:09 --- quit: pierpal (Ping timeout: 240 seconds) 11:05:35 --- join: xek (~xek@apn-31-0-23-83.dynamic.gprs.plus.pl) joined #forth 11:11:21 I absolutely regard Forth as transcending levels. 11:12:20 You can use it in a thoroughly low-level way; I've got a friend who likens it to a macro assembler, and I don't think he's *far* wrong. Though he's not as right as he thinks he is. 11:12:32 But then you can make it as high level as you want it.
11:26:03 --- join: pierpal (~pierpal@95.234.60.245) joined #forth 11:30:57 --- quit: pierpal (Read error: Connection reset by peer) 11:42:52 --- quit: proteus-guy (Ping timeout: 252 seconds) 11:54:50 --- join: proteus-guy (~proteusgu@2403:6200:88a6:329f:51f5:f53e:a198:afdb) joined #forth 12:15:01 --- join: pierpal (~pierpal@95.234.60.245) joined #forth 12:15:58 --- quit: pierpal (Read error: Connection reset by peer) 12:18:09 --- join: pierpal (~pierpal@95.234.60.245) joined #forth 12:28:58 --- quit: pierpal (Read error: Connection reset by peer) 12:33:13 --- join: pierpal (~pierpal@95.234.60.245) joined #forth 12:34:29 --- quit: pierpal (Read error: Connection reset by peer) 15:07:49 --- quit: xek (Ping timeout: 252 seconds) 15:26:41 --- join: wa5qjh (~quassel@175.158.225.219) joined #forth 15:26:41 --- quit: wa5qjh (Changing host) 15:26:41 --- join: wa5qjh (~quassel@freebsd/user/wa5qjh) joined #forth 15:39:39 --- quit: tabemann (Ping timeout: 276 seconds) 16:46:40 --- join: tabemann (~tabemann@rrcs-162-155-170-75.central.biz.rr.com) joined #forth 16:53:50 wow. IBM just bought Red Hat. 16:54:39 whaaatt microsoft owns github, oracle owns solaris, ibm owns redhat, wtf is happening?! 16:56:04 as for oracle owning solaris, apparently they laid off most of the solaris developers 16:56:10 Geez. 16:56:15 I hate mergers 16:56:26 Me too - as if these guys aren't big enough. 16:56:41 It's the classic way to put down competition, though. Buy it. 16:56:47 mergers are almost always followed by layoffs or reductions in product line 16:56:58 Then you decide whether to use the purchased tech, or just make it go away. 16:57:24 Yeah, they're basically contrary to how capitalism is SUPPOSED to work. 16:57:42 capitalism never works how it is supposed to work 16:57:43 tabemann: Saw the top post on Hacker News 16:58:01 Economics 101, where they teach you about all the good stuff that rises from capitalism, always attaches a rider: "under conditions of perfect competition." 
16:58:08 Lose that, and you lose all of the goodie. 16:58:48 Well, Marx's reason for believing capitalism would fail was based on the idea that wealth and power would ultimately become thoroughly centralized, and eventually the masses would revolt. 16:58:57 What does IBM focus on nowadays? 16:59:01 And we really do seem to see that centralization going on. 16:59:03 They're still in the hardware business? 16:59:04 Mostly services. 16:59:23 I happen to work in an office that develops a hardware / software product, but that's not the company-wide norm. 16:59:31 IBM doesn't really do much hardware these days, aside from its ultra-high end System z systems 16:59:37 Some pretty good comments here: https://news.ycombinator.com/item?id=18321884 16:59:40 And they got that product by buying the small company I used to work for. 16:59:47 "Texas Memory Systems." 17:00:01 tabemann: Right. 17:00:04 Ugh, mergers. 17:00:11 We do enterprise grade flash storage systems. 17:00:25 It's actually pretty good gear, but like I said, it was a small company development. 17:00:29 tabemann> as for oracle owning solaris, apparently they laid off most of the solaris developers 17:00:30 IBM bought TMS in 2012. 17:00:31 Microsoft buying GitHub caused a controversy for a little bit 17:00:46 Especially when the media misinterprets everything 17:01:02 "Microsoft gets access to 2 million developers" 17:01:32 Well it'd be interesting to see how this plays out for Red Hat. 17:02:06 could be worse. if you're my company, you buy up part of a canadian company and then lay off your own employees because of some agreement that the canadian government will subsidize the canadian wages for some period of time 17:02:25 UGH. 17:02:31 See, that's just BROKEN. 17:03:07 Anyway, I'm a thorough believer in capitalism in the abstract, but based on the reasoning above I do feel that unless it has some high-level regulation it's not going to work well.
In my opinion such regulation (in America) should be aimed at winding up with a system that prioritizes the prosperity of *Americans*. 17:03:45 fundamentally I am against capitalism as a whole and am for worker owned and self-managed cooperatives 17:03:54 that's the trouble with the free-market/interventionist debate in politics. neither side acknowledges that it's a two-dimensional problem 17:04:06 Well, there's nothing un-capitalistic about workers owning a company. 17:04:34 I should re-phrase; what I'm really a believer in is *freedom* (individual freedom), and I think capitalism is the system most consistent with that. 17:04:40 there's a venn diagram of private business and national competition 17:04:47 But it doesn't work well when there are just a few giant corporations running things. 17:05:04 I think we should regulate it so that the economy is based primarily on small and medium size companies. 17:05:23 but that's how it always ends up, because it is in corporations' interest to maximize their profit and therefore their percentage of the market 17:05:27 I'd do it by having taxes go up (sharply) for larger companies. 17:05:32 Make it less profitable to become huge. 17:06:04 Yes, I know tabemann. 17:06:14 That's why it's not going to happen without public sector intervention. 17:06:19 eh, I don't agree with punishing success 17:06:25 I think it's the *size* that's bad, not the mode of operation. 17:06:33 there just needs to be a low barrier to entry for competition 17:06:40 No company should be allowed to become large enough to have an individual impact on its market. 17:07:03 That's when you get all those ECO 101 nice things happening - when there's a large number of independent players 17:07:41 That also makes no one company large enough to "stand out" in government lobbying, etc. 17:08:04 No one's big enough to "buy off" the government.
17:08:18 the way you fix government lobbying is you limit government power - remove the motive 17:08:18 Yes, that too, for sure. 17:08:22 I'm for workers directly expropriating their workplaces and turning them into worker cooperatives - of course those in power would never accept this 17:08:30 We seem to make it as hard as possible to start a small business. 17:08:37 Regulations that you need an army of lawyers to navigate. 17:08:50 Accounting rules that are too complex for a small business guy to do his own accounting. 17:08:52 Etc. 17:09:03 And the big companies *love it* that way. 17:09:48 I think there's a LOT of criticism to be leveled at what our economy is like now, but I don't regard any of it as a criticism of "free enterprise." 17:09:59 Except to the extent that it shows free enterprise needs to be regulated, or we get... this. 17:10:13 the thing to me is that small businesses can be just as worker-unfriendly as large businesses, when they're not worker-owned and managed 17:10:39 They can. But there are options. 17:10:43 Other companies, etc. 17:10:58 I do believe that in a competitive market these things will USUALLY work out. 17:11:15 Sometimes you get bad situations, like when the entire American South shut out blacks back in the day. 17:11:24 That was a pervasive social problem that the market wasn't going to solve. 17:12:23 that was the government though, not the private sector 17:12:35 that was society as a whole 17:12:36 separate but equal was a government policy 17:12:44 so were jim crow laws 17:13:12 but separate but equal and jim crow were supported by white society overall - that's why they were even created 17:13:33 definitely not overall 17:13:43 Right - pervasive. Something "global" had to be done. 17:14:05 If there's just one asshole diner owner, his ass-holery will hurt his business, and the world will go on past him. 17:14:12 That wasn't in play in that situation. 17:14:42 it wasn't though?
the economy of the south was shit at the time 17:14:46 So *generally* I'm in favor of people (all people - business owners included) being free. But sometimes it doesn't work - we always need to be ready to fix those things. 17:15:26 I just meant that the "general public" wasn't going to object to the racist behavior - it permeated the whole of society in that part of the country. 17:15:56 As opposed to, say, someone in my area right now trying to open a "Nazi supporting" business. 17:16:12 99.9% of everyone would be down on that. 17:16:54 plus it would just be a bad investment because the nazis lost pretty hard over 70 years ago 17:17:01 :-) Right. 17:17:01 who cares? don't shop there 17:17:11 Well, right - that's my point. 17:17:12 they never did seem to recover after that 17:17:14 Very few people would. 17:17:38 the alt right seems to want to change that though, even though the alt right in reality is a small few 17:17:49 Yes, a very small few. 17:18:02 I think a huge portion of media attention is directed to small slivers on the right and left. 17:18:20 I don't think we get an accurate picture of our culture from the media these days. 17:18:28 They seek out the sensational. 17:18:48 yeah msm sucks. that's why i only listen to alex jones 17:19:10 I think there are a ton of regular ordinary folk just going about the business of living, that are considered entirely un-newsworthy. 17:19:11 * Zarutian gives zy]x[yz a Look 17:19:14 lol 17:19:17 lmao 17:19:25 I'm going to open a business only supporting my chosen group 17:19:25 the frogs are turning gay!! 17:19:26 Forthers 17:20:00 WilhelmVonWeiner: hmm... how are you going to find out if a person is a Forther?
17:20:03 I thought frogs these days were white supremacists 17:20:35 Zarutian: ask em about their return addresses 17:20:42 or stack frames or something stupid 17:20:59 ask them what their opinion is of RPN 17:21:13 tabemann, that's just communist satanist misinformation to distract you from pizzagate 17:21:36 To get in there's a button that says DOOR OPEN and a button that says OPEN DOOR - the latter drops them into a pit 17:23:21 zy]x[yz: I thought Pepe the Frog was like their mascot 17:23:34 nah he just gets posted a lot 17:23:36 feels good man 17:23:39 he's a dead meme now 17:24:15 tabemann, if you really don't know, that was always just a 4chan troll thing 17:24:24 is #Forth really the place for these discussions anyhow 17:24:31 when is IBM going to Buy Forth, Inc 17:24:46 some alt right losers probably adopted it and ran with it, but that's all 17:24:51 when Forth, Inc starts making fuckloads of money 17:25:21 tabemann: so zero usd? (generally a fuckload is zero fucks) 17:25:21 Breaking news: IBM rewrites Watson in Forth, who is now rapidly approaching sentience 17:25:39 and of course when IBM buys Forth, Inc they'll insist on making it infix 17:25:53 PL/Forth 17:28:23 IBM Forth: `PROC: PRINT 1 + (FN(2));` 17:28:33 lolol 17:28:57 why haven't I yet figured out a use for the closures I added to my Forth? 17:29:20 why haven't I figured out a use for closures (full stop) 17:29:26 lol 17:29:57 no, I use closures all the fucking time when I code in Haskell, but I haven't seen the need when coding in Forth 17:30:53 I don't find that shocking or anything - maybe Forth and Haskell (and lots of other things) just have their own "orientation" for solving problems. 17:31:28 tabemann: 17:31:31 oh my god 17:31:44 i hadn't realised i've been using closures this whole time 17:32:13 you have a Forth that has closures as a feature? 
no, I've been writing Haskell for university 17:33:28 any time you have a function that references names outside its local scope which are not global, you've got a closure 17:34:26 I usually think of that as just referencing a name outside the local scope that's not a global. ;-) 17:34:42 I think I know about a lot of things and just don't know enough to give them fancy names. 17:34:51 lol innit 17:34:53 ah 17:35:43 Like if I were writing a big program with names all over, I could probably explain to you why I had scoped each one the way I had. 17:35:58 But I wouldn't be able to do so using the right lingo. 17:36:38 You ever heard of "dependency injection" 17:36:54 a Javaism I don't like to think about 17:36:57 I think I've heard of it. Couldn't tell you what it is. 17:37:03 it sounds like a mega complex hyperthought super-genius idea 17:37:12 I don't know how it works, and don't really want to know 17:37:18 I heard a neat one a couple of days ago. "False sharing." 17:37:23 ""Dependency Injection" is a 25-dollar term for a 5-cent concept. [...] Dependency injection means giving an object its instance variables. [...]." 17:37:34 from a Mr James Shore 17:37:37 Dependency Injection is what you use when you don't have elegant ways to close over out-of-scope variables. 17:37:59 That's where a thread running on core A constantly reads a variable in a cache line, and a thread running on core B constantly writes to a totally different variable that happens to be in the same cache line. 17:38:14 So those writes keep invalidating A's read-only variable so it keeps having to re-load the cache line. 17:38:18 Can just clobber performance. 17:38:29 I've heard of that 17:38:44 I'd never heard the name before, but of course it makes perfect sense. 17:38:59 I honestly understood that better than dependency injection 17:39:00 So you tweak your code so those things don't share a cache line anymore.
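The standard "tweak" for false sharing is to pad or align the two variables so they can never land in the same cache line. A C11 sketch, assuming a 64-byte line size (the common x86 value; the names are invented):

```c
#include <assert.h>
#include <stdalign.h>
#include <stddef.h>

/* Assume 64-byte cache lines; verify against the target in real code. */
#define CACHE_LINE 64

/* Aligning the member to the line size makes the struct occupy a full
 * cache line, so each counter owns its line outright. */
struct padded_counter {
    alignas(CACHE_LINE) long value;
};

struct counters {
    struct padded_counter reader;  /* core A mostly reads this */
    struct padded_counter writer;  /* core B constantly writes this */
};
```

With this layout, core B's writes to `writer` can no longer invalidate the line holding `reader`, at the cost of some wasted memory per counter.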
ttmrichter: Dependency Injection is what you use when you want to sound smarter and potentially get paid more 17:39:44 It's stuff like that that I think makes it important to understand your whole system. 17:39:58 If you try to "abstract away" too much, you don't know how to avoid / deal with such things. 17:45:09 WilhelmVonWeiner: Well that's what using the term means, yes. :) 17:47:41 to me "dependency injection" is like one of those terms whose primary purpose is to obscure the actual concept and make one sound smarter 17:48:34 ^ 17:49:42 honestly, I hate everything people refer to as "design patterns" 17:50:08 not the concepts themselves, but the idea of them being "design patterns" 17:50:24 "Agile programming." 17:50:35 as if I'm supposed to be some kind of dumbfuck because I don't have these things memorized 17:50:36 I used agile project techniques years before they were cool. 17:50:46 Never occurred to me to name them with sexy buzzwords and get famous. 17:50:50 Just felt like common sense. 17:51:15 to me "agile" turns out to be implemented mostly just as daily scrum meetings and like 17:51:58 I've never worked anyplace that's implemented the whole thing 17:52:09 I didn't include that part. Meetings were often just hallway conversations, and formal and informal communication just happened "when it was needed." 17:52:33 But, stuff like "don't plan too far in advance," "evolve the specification based on new understanding," etc. 17:52:40 That just felt like the right way to do things. 17:52:54 My long term plans were fuzzy and general - near term plans sharper, etc. 17:53:19 my current company is basically trying to replicate a previous product of theirs as an HTML5-based web product 17:53:47 so in that way it's almost the complete opposite of that 17:54:09 Ah - well, you have an extremely well-defined target. 17:54:19 We had a class of products like that. 17:54:34 The little socket carriers that attached to the top of our programmers.
Those were basically just a socket on a board with connectors, with some standard circuitry for switching. Very simple. 17:55:05 Those had a very precise project execution and they could just flow like clockwork. 17:55:29 The programmers themselves were quite different from generation to generation, and planning for those had to be a lot more flexible. 17:58:45 * tabemann is currently wondering what next to do with his Forth 17:59:07 maybe reimplement numeric parsing in Forth? 18:00:14 That's a good one. 18:00:19 I had a lot of fun writing that in Forth. 18:00:37 It's implemented in the assembly file, but I still wrote it in Forth and then hand translated that to assembly. 18:00:41 got the Forth in the comments. 18:01:11 I'm still going to have numeric parsing in C, because it's needed by the bootstrapper 18:01:29 That's actually how I arrived at some of my new words - got to various places and thought "Oh, I could simplify this a lot if I could do ." 18:01:35 but I'm duplicating it in Forth for the normal user environment 18:02:40 I already duplicated word lookup so it's now in Forth except for bootstrapping 18:04:15 note that I've thought of implementing some means of hardcoding my core Forth words that are not in C, but it seems too hard to be worth it, considering that I am not using an assembler so I cannot use labels to calculate branch addresses trivially 18:04:17 Yeah, I did that in Forth too, with the same sort of hand translation. 18:04:29 Quite a lot of the system has been done that way, actually. 18:04:33 --- join: smokeink (~smokeink@118.131.144.142) joined #forth 18:04:54 The command history, the output formatting (uses format strings similar to C), and a good bit of other stuff. 18:06:27 you wrote these originally in Forth and then you translated them to compiled code? 18:12:51 Yes. 18:13:19 Here's an example: 18:13:22 3145 ;;; : CSTR .C@++ OVER + 0 SWAP C!
; 18:13:24 3146 head_d "cstr", cstr 18:13:26 3147 cstr_d: defi dcloadpp-o, over-o, plus-o, zero-o 18:13:28 3148 defi swap-o, cstore-o, dosem-o 18:13:56 It's pretty easy when I have the system well in mind. 18:14:07 If I put it aside for a year and tried to do it I'd have to struggle back up to speed. 18:16:07 it'd be easier for me to do if I rewrote my Forth in assembly 18:16:46 Yes. My last one was in C - getting the assembly "off the ground" was a good bit of effort, but now that it's as far along as it is I'm really glad I'm working that way. 18:23:49 my main problems are that I don't know assembly well and I want portability 18:24:17 because I'd really like this forth to be able to run on an ARM-based single board computer without needing rewriting 18:27:57 Yeah. I'm trying to solve that via macros and register aliases, but I haven't quite hit the mark yet. 18:28:16 Of course, the macros would have to be rewritten, but that's only about 20% or less of the job. 18:29:35 --- join: nighty- (~nighty@kyotolabs.asahinet.com) joined #forth 18:34:31 I'm sure the approach will work out - I just jumped the gun and wrote the macros before I knew my ARM target well enough. 18:41:41 --- join: pierpal (~pierpal@95.234.60.245) joined #forth 18:43:39 --- quit: pierpal (Client Quit) 18:43:57 --- join: pierpal (~pierpal@95.234.60.245) joined #forth 18:44:02 back 18:44:44 to me the thing is that I don't think on modern architectures the performance hit from using C is that great unless you're working in small systems 18:48:21 Very possibly. I just like knowing I'm exerting full control. 18:48:55 That said, I have looked at some of the code gcc generates, and it hasn't looked that good to me. 18:49:37 Like, I want TOS in a register. Maybe gcc puts it in one - it's hard to even really tell. But with assembly I KNOW it's in a register. 18:49:41 I KNOW *everything*. 18:50:29 I certainly haven't run any tests to quantitatively prove to myself I'm better off in assembly. 
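[A Forth-style numeric parser like the one discussed above is small enough to sketch. This is a hypothetical stand-alone C version, not either poster's actual code: accumulate digits in the given base, accept an optional leading minus, and reject the whole string if any character is not a digit in that base - the failure that lets a Forth outer interpreter fall back to its "undefined word" path.]

```c
/* Hypothetical sketch of Forth-style numeric parsing - not either
 * poster's actual code.  Accumulates digits in the given base (2-36),
 * with an optional leading '-'.  Returns 1 on success, 0 on failure. */
int parse_number(const char *s, int base, long *out)
{
    long acc = 0;
    int neg = 0;
    if (*s == '-') { neg = 1; s++; }
    if (*s == '\0') return 0;            /* a bare "-" is not a number */
    for (; *s; s++) {
        int d;
        if (*s >= '0' && *s <= '9')      d = *s - '0';
        else if (*s >= 'a' && *s <= 'z') d = *s - 'a' + 10;
        else if (*s >= 'A' && *s <= 'Z') d = *s - 'A' + 10;
        else return 0;                   /* not a digit at all */
        if (d >= base) return 0;         /* e.g. '9' is invalid in base 8 */
        acc = acc * base + d;
    }
    *out = neg ? -acc : acc;
    return 1;
}
```

[A real Forth version would read BASE from a variable and, like standard >NUMBER, report how much of the string it consumed rather than just pass/fail.]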
18:50:35 It's just a personal preference thing. 18:53:28 I guess another argument in favor of assembly is that I'm totally primed to investigate advanced parts of the instruction set that gcc might not even support yet. 18:53:34 I perfectly understand why one would want an assembly Forth 18:54:15 What is your threading model? 18:54:36 I opted for indirect threading 18:54:56 so I wouldn't have to do any actual code generation, and so I would be portable 18:55:10 I was able to implement "right and proper" indirect threading in C, but only by using a gcc language extension (storing label values in variables) that made it work. 18:55:47 I doubt a direct threading model would be possible - that requires code and data in intimate proximity. 18:56:15 I could be wrong, though. 18:56:30 my indirect threaded code exists in the heap, and I'd have to do something with mmap to get memory that I can execute code in 18:56:51 on systems where execution in the heap is not allowed 18:57:49 Yeah, there's a call. mprotect, I think. I've got that somewhere in the files for that old imp. 18:58:18 My whole indirect threaded image was in a block I got with malloc. 18:58:31 I did get it to where I could poke machine code into that and successfully call it. 18:58:41 * Zarutian usually solves that kind of crap by having only one page executable and implement a small vm. 18:58:46 But I never actually moved the primitives from wherever C put them. 18:59:20 I haven't figured out how to do that with MacOS system calls yet, though. 19:00:11 GCC does so many dirty tricks it's hard to say what your assembly will look like 19:01:31 I prefer writing my own 19:09:32 I'm probably used to programming in "slow" languages, if you call Haskell, Java, and JavaScript slow languages 19:09:37 *too used to 19:10:23 Although quite a lot of work has gone into compilers for functional languages. 
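[The gcc extension mentioned here - storing label values in variables - is what makes "right and proper" indirect threading possible in C. A minimal sketch, hypothetical and not either poster's actual inner interpreter: each primitive is a label, each word's code field holds a label address, the threaded program is a list of code-field addresses, and NEXT is a fetch plus a computed goto.]

```c
/* Indirect-threaded inner interpreter using GCC's labels-as-values
 * extension (&&label, goto *ptr).  Hypothetical sketch - not either
 * poster's actual code.  Runs the threaded program "3 4 + HALT". */
typedef void *cell;

long run_threaded(void)
{
    /* Code fields: each holds the machine address of a primitive. */
    cell lit_cf  = &&do_lit;
    cell plus_cf = &&do_plus;
    cell halt_cf = &&do_halt;

    /* Threaded code: pointers to code fields.  LIT takes an inline
     * operand from the instruction stream. */
    cell program[] = { &lit_cf, (cell)3L, &lit_cf, (cell)4L,
                       &plus_cf, &halt_cf };

    cell *ip = program;          /* instruction pointer              */
    long stack[16], *sp = stack; /* tiny data stack, grows upward    */
    cell *w;                     /* "word" register: current code field */

    /* NEXT: fetch the next code-field address, jump through it. */
    #define NEXT do { w = (cell *)*ip++; goto **w; } while (0)
    NEXT;

do_lit:                          /* push inline literal              */
    *sp++ = (long)*ip++;
    NEXT;
do_plus:                         /* add the top two stack items      */
    --sp;
    sp[-1] += sp[0];
    NEXT;
do_halt:
    return sp[-1];
    #undef NEXT
}
```

[The extra hop through the code field is what makes this "indirect"; direct threading would put label addresses straight into the program. Note that only compiler-emitted code is ever executed here - the threaded lists are pure data, which is exactly why this model needs no mmap/mprotect tricks on systems with non-executable heaps.]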
19:10:54 ^especially because code optimization is easier to do when you have proofs of correctness 19:12:33 GHC Haskell is notorious though for having performance that is hard to predict ahead of time 19:13:05 tabemann: That is an artifact of a fatal (IMO) design decision in Haskell proper. 19:13:29 ttmrichter: Please do elaborate. 19:13:48 Having lazy evaluation be the default kills Haskell dead. 19:14:17 It is impossible, without Deep Knowledge of a given implementation, to predict memory and time complexity of any given non-trivial expression. 19:14:28 I agree, lazy evaluation makes reasoning about performance hard. 19:14:47 KipIngram: You don't get lazy evaluation with languages like Forth 19:15:42 https://paul.bone.id.au/pub/pbone-2016-haskell-sucks.pdf 19:15:49 Ignore the click-bait title. It's a joke. 19:16:03 It's a presentation made at a functional languages user group. 19:17:29 there is a simple solution - you declare every field in every data structure you declare as strict, and you use strict data structures like Data.Sequence 19:17:58 it is a pain, though, considering that otherwise Haskell is a very good language 19:18:23 --- quit: dddddd (Remote host closed the connection) 19:18:29 I like its type system. 19:19:18 "All told, a monad in X is just a monoid in the category of 19:19:20 endofunctors of X, with product × replaced by composition 19:19:22 of endofunctors and unit set by the identity endofunctor. 19:19:24 " 19:19:27 lol 19:19:28 * KipIngram 's eyes glaze over... 19:19:31 Right. 19:19:50 you do realize that's kinda a joke at this point 19:19:55 Very far from the hardware 19:20:02 tabemann: That's not a "simple" solution. 19:20:15 Yes, I sensed that the presenter was approaching that with humor. 19:20:19 That's a horribly error-prone and invisible-to-casual-inspection solution. 19:20:31 Paul likes Haskell. 19:20:49 He's just not an addict, so he can actually see and comment intelligently on its flaws. 
:D 19:20:59 I like Haskell too 19:21:14 siraben: very far from actually running. Why? Either the compiler stops because of an obscure type error or the program gets trivially optimized away because you forgot to force a thunk or something akin. 19:21:17 I've been learning SML, and it's more sane than Haskell 19:21:18 I like Haskell-the-language but have banned it from any system I control because of Haskell-the-ecosystem. 19:21:38 SML has shit like eqtypes though 19:21:42 Zarutian: Right. Knowing when expressions are exactly evaluated is a huge gues. 19:21:43 guess* 19:21:46 ttmrichter: Haskell has an ecosystem? 19:21:46 I'm sorry, but I cannot accept eqtypes 19:21:47 tabemann: eqtypes? 19:22:02 siraben: specially overloaded comparison operators 19:22:02 Zarutian: Hackage. Cabal. The entire community, practically. 19:22:21 as opposed to type classes 19:22:35 ttmrichter: I thought Haskellers just whipped up whole typesystems whenever they needed 19:22:40 The community consists largely of people who're so used to shooting themselves in the foot that they think leaving bloody footprints is normal. 19:22:44 type classes to me seem far, far more elegant than what SML and OCaml do with comparison operators 19:23:05 tabemann: I missed typeclasses in SML 19:23:06 I wish they were there 19:23:50 and while SML and OCaml have parameterized modules, they are so much harder to actually use than type classes in practice 19:23:55 so no one actually uses them 19:24:48 Anyone here know Standard ML? 19:24:57 I've been trying to debug a List Functor signature 19:25:07 I know SML mainly from Okasaki's book 19:25:25 I used to code in OCaml before I started coding in Haskell 19:25:27 That's a pretty good paper 19:26:11 I have been trying for years to figure one thing out about Haskell and such: why the enamourment with types? 
19:26:45 I want to see a small language that can be extended rather than a large one 19:27:03 types allow you to know that your code is safe statically 19:27:09 Right 19:27:23 siraben: ya basically want what Guy Steele calls a 'growable' language. 19:27:26 Zarutian: It's not the types that bug me in Haskell, though they're not the end-all/be-all that strong advocates claim. It's the abuse of them. Specifically the unnecessary cramming of everything into monads, which are constructs that actively resist composition. 19:27:34 Zarutian: Ah that talk, "Growing a Language" 19:27:43 He was talking a bit about generics in Java 19:27:45 siraben: 1ML is an interesting concept, but sadly seems to have stalled. 19:28:07 We've got a growable language. 19:28:18 Forth is the ultimate growable language 19:28:19 Yeah, Forth is definitely growable. 19:28:19 We're all *here*, so we all know about it. 19:28:20 And now, of course, the latest hotness in Haskell is that lens crap. 150 custom operators and growing! 19:28:31 +dependent types 19:28:39 tabemann: unless the code falls into Primitive Recursive Algorithms computability wise you can never be sure if a program is safe statically 19:28:50 I can't wrap my brain around dependent types 19:28:53 Haskell has dependent types now too? :-o 19:28:54 Crap. 19:29:04 I believe it's a language extension 19:29:11 Haskell is attempting to have a limited subset of dependent types 19:29:16 Ugh. 19:29:18 attempting is the key word 19:29:21 I have sort of an intellectual appreciation for functional programming, but I think my background ill suits me to TRULY appreciate the idea. 19:29:23 I don't use that part of haskell 19:29:29 For me software is for telling hardware what to do. 19:29:38 * ttmrichter high-fives Kip. 19:29:53 I started functional programming with Scheme (a Lisp dialect) 19:29:56 If it's not getting hardware to do something, it's pretty much wankery. 
19:30:14 --- join: [1]MrMobius (~default@c-73-134-82-217.hsd1.va.comcast.net) joined #forth 19:30:34 KipIngram: re functional programming, well I like map, reduce and co. Especially because one is firing off "oy! let every element do this thing on themselves!" kind of thing. 19:30:53 Yeah, map, filter, foldl, foldr, zip, etc. are good abstractions 19:31:01 reduce as well 19:31:19 Zarutian: that seems to qualify. I'm not *totally* opposed to any sort of abstraction. 19:32:09 At the "ground floor," though, I want to be able to see a layer that's saying "do this, then do that, then do the other thing over there." 19:32:30 Or "do all of these things - timing doesn't matter: ...list..." - that's fair too. 19:32:45 KipIngram: I'd say the biggest idea in functional programming is thinking without mutable state. 19:33:06 hell, I have been toying with the idea of using Esprima to parse EcmaScript (Java? isn't that a coffee brand from that island) functions and spit out SeQueL queries. And filter, map and co will be used extensively inside the passed in functions. 19:33:07 --- quit: MrMobius (Ping timeout: 252 seconds) 19:33:07 --- nick: [1]MrMobius -> MrMobius 19:33:11 I try to avoid using ! and @ in my code because it's slower than just keeping something on the stack 19:33:14 BTW, slightly off topic, but I stuck a toe into Chapel a year ago or so, and thought it made a lot of sense. 19:33:44 Good morning Forthwrights 19:33:56 I quite liked the idea of being able to write parallel code without having to know the details of my core count and so on. 19:34:03 * Zarutian continues: why? Because I absolutely detest SQL. 19:34:18 I suspect that in a small core count system, explicitly planning the activities of each core will lead to the best performance. 19:34:36 But if you've got some million-core cluster monster and you can't even know which of those cores might be down? 
19:34:43 KipIngram: parallel and concurrent code is why I like FlowBasedProgramming 19:34:44 That's going to require some software help to manage / use. 19:34:56 Zarutian: Yes, definitely. 19:35:00 Can you do flow based programming in Forth? 19:35:12 I see that model working extremely well for some applications. 19:35:30 But maybe not so perfectly for, say, finding the eigenvalues of some gigantic array. 19:35:30 siraben: well, do not know. You could try on Green Arrays 19:35:47 Ah those chips 19:36:10 Sure you can, if your Forth supports multiple processes and pipes between them. 19:36:11 Paraphrasing Chuck, there's always state, what differentiates different approaches is where it is kept. 19:36:27 KipIngram: sure. But I think most kinds of applications used for businesses and such might work. 19:36:39 I think so too. 19:36:57 I've definitely got FBP as a "primary target" for my system. 19:37:05 I'm making design decisions based on supporting it well. 19:37:56 At the same time, though, I'm also designing to do good support of a situation where you have, say, two words in a definition both of which are significant and that can be executed in parallel. 19:38:15 Just an easy way to say "That word right there - run it somewhere else; I'll wait when I need to." 19:38:21 There's assembly instructions to do that? 19:38:29 Well none on the Z80 19:38:32 To do what? 19:38:40 fork off a process like that? 19:38:40 Parallel execution 19:38:46 Ah, using the OS 19:38:49 No, there are instructions to support synchronization. 19:38:58 Well, it will work when I'm bare metal too. 19:39:06 While under MacOS I will use the OS to get new threads. 19:39:09 siraben: well, you have looked into Sega Mega Drive (or as the yanks called it Sega Genesis)? 19:39:16 But this whole mechanism exists inside my Forth. 19:39:19 Zarutian: I've not heard of it. 19:39:36 Or, rather, will exist. 19:39:39 haven't written it yet. 19:39:52 But I have a pretty good idea of what I'm going to do. 
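[The "run it somewhere else; I'll wait when I need to" idea maps naturally onto create-now/join-later. A minimal POSIX sketch with hypothetical names, a C function standing in for the Forth word:]

```c
#include <pthread.h>

/* "That word right there - run it somewhere else; I'll wait when I
 * need to."  Hypothetical sketch: pthread_create fires the work off,
 * pthread_join is the later wait. */
static void *square(void *arg)           /* stand-in for a Forth word */
{
    long n = (long)arg;
    return (void *)(n * n);
}

long run_elsewhere_then_wait(long n)
{
    pthread_t t;
    void *result;
    pthread_create(&t, NULL, square, (void *)n); /* run it somewhere else */
    /* ... the caller keeps doing its own work in the meantime ... */
    pthread_join(t, &result);                    /* wait when needed */
    return (long)result;
}
```

[Under MacOS or Linux the OS thread is the obvious substrate, as said above; bare metal would substitute whatever synchronization primitives the target provides.]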
19:39:59 In broad outlines, at least. 19:40:47 siraben: well on that console you have at least two processors, one for sound processing, one for main logic and possibly one for doing graphix. Plus who knows what you want to put on your cart. 19:41:01 --- join: proteusguy (~proteus-g@cm-171-100-61-10.revip10.asianet.co.th) joined #forth 19:41:01 --- mode: ChanServ set +v proteusguy 19:41:09 IIRC flow based programming is the high level analogue to combinational logic at the hardware level 19:41:25 (in contrast to sequential logic) 19:41:49 fan-out fan-in 19:41:50 rdrop-exit: a loose analogy but not quite 19:42:12 sure but in combinational logic you have no loop backs 19:42:53 if you did you'd get hysteresis, glitching and/or recursion 19:43:48 Zarutian, Buffers. :P 19:43:49 asynchronous vs synchronous sequential machines 19:43:54 Zarutian: I see. 19:45:50 rdrop-exit: I'm no expert on flow-based yet, but yes, I do get the impression that it's about the processes being completely autonomous from one another except passing data. 19:46:02 I.e., no synchronization between them; data's either available or it's not. 19:46:15 Just like a signal value change in a combinational circuit. 19:46:20 So I think that's a good analogy. 19:48:14 When I hear of people trying to implement functional programming primitives in Forth it reminds me of the old Lisp hardware 19:49:28 Thinking Machines IIRC 19:51:12 Why did the von Neumann architecture win in the end? 19:51:47 Flexibility and economy would be my guess 19:52:11 I suppose. 19:52:28 You can get better performance out of a Harvard Architecture, but it's less flexible 19:53:36 Ye olde Von Neumann bottleneck 19:55:49 Those questions (why did X beat Y?) are rarely "pure." 19:56:01 Lots of economic / market influences. 19:56:09 Why did VHS beat BetaMax? 19:56:21 Most people think Beta was superior technically. 
19:56:40 --- quit: WilhelmVonWeiner (Ping timeout: 245 seconds) 19:56:48 One answer I've heard on that one is "because the adult film industry went VHS." 19:57:29 Any standard needs to reach some critical mass before it becomes entrenched 19:57:31 --- join: WilhelmVonWeiner (dch@ny1.hashbang.sh) joined #forth 19:57:55 --- nick: WilhelmVonWeiner -> Guest14767 19:58:34 Economies of scale take over 19:58:47 Right. 20:00:12 For the initial adoption of a standard, politics weighs more than technical merit 20:00:45 and also "historical accident" 20:01:53 plays a role 20:03:07 Look at CP/M and MSDOS 20:04:08 I've always felt sorry for Gary Kildall 20:04:40 especially when I rewatch old episodes of Computer Chronicles on Youtube 20:04:56 He ended up dying in a bar fight 20:05:40 BTW the episode with Elizabeth Rather presenting Forth is on Youtube 20:06:00 --- quit: tabemann (Ping timeout: 246 seconds) 20:07:09 --- quit: jn__ (Ping timeout: 240 seconds) 20:07:12 https://www.youtube.com/watch?v=Jtvgf_CyiS0 20:08:48 --- join: jn__ (~nope@aftr-109-90-232-81.unity-media.net) joined #forth 20:18:35 Well, headed to bed guys. Catch you all later. 20:18:57 Good night Kip, don't let the bed bugs bite 20:23:55 --- join: tabemann (~tabemann@2602:30a:c0d3:1890:1870:3255:3792:ba89) joined #forth 20:55:33 --- quit: wa5qjh (Remote host closed the connection) 20:57:52 --- join: wa5qjh (~quassel@freebsd/user/wa5qjh) joined #forth 21:37:45 --- join: dave0 (~dave@90.20.215.218.dyn.iprimus.net.au) joined #forth 21:38:18 hi 22:12:10 --- quit: smokeink (Remote host closed the connection) 23:01:37 VHS won over Beta for several reasons. Adult didn't hurt, but wasn't the driver. The driver was that VHS kit was cheap to make. Sony was stupid with licensing fees for Beta. 23:28:40 --- join: smokeink (~smokeink@118.131.144.142) joined #forth 23:59:59 --- log: ended forth/18.10.28