00:00:00 --- log: started forth/18.10.16
04:21:07 Changing from direct to indirect is strangely buggy still, will retry again on the weekend
04:21:16 Even on the "minimal" Forth
04:35:43 What sort of bugs?
04:36:09 And is it "unstable" or "doesn't work"?
04:36:26 Doesn't work.
04:36:42 Also, I lose TOS, which means I have to rewrite most of the CODE words
04:36:52 I suppose the gain is that I can get to have kilobytes of space instead
04:37:42 As it stands, I still have a measly 400 bytes of space after HERE, which means loading larger programs won't work.
04:38:13 I'm not clear yet on how it cost you a register - you said you already had a W.
04:38:22 But I need an X, right?
04:38:36 What are you calling X?
04:38:43 Another spare register?
04:39:10 Ok, walk me through the registers again, of your direct-threaded Forth.
04:39:17 Or, let me start.
04:39:21 Right
04:39:24 You start.
04:39:24 You have an IP (instruction pointer).
04:39:49 So tell me how your direct-threaded NEXT operates, step by step.
04:40:51 (IP) -> W
04:40:56 IP+2 -> IP
04:40:58 JP (W)
04:41:04 That's it
04:41:07 Ok.
04:41:31 DOCOL is like this:
04:41:37 PUSH IP
04:41:41 W+2 -> IP
04:41:45 NEXT
04:41:59 Ok, so you can avoid having to use another register.
04:42:11 And yes, I feel like indirect threaded should consume an extra register.
04:42:15 but you can do this:
04:42:20 NEXT:
04:42:29 (IP) -> W
04:42:34 IP+2 -> IP
04:42:40 (W) -> W
04:42:44 JP (W)
04:42:52 Now - you no longer have W pointing at your word.
04:43:09 So when you need that (non-primitive), you have to recover it from IP again.
04:43:20 (IP-2) -> W
04:43:34 That's not fun, but it's FAR better than losing the TOS register.
04:43:35 The problem is that I can't do (W) -> W
04:43:42 Oh. Why?
04:43:46 Z80
04:43:55 I can't do it directly, anyway
04:44:27 Also, the only register that can do JP (W) is the 16-bit HL register
04:44:30 Well, that sucks. But ok - yeah; those old processors had a lot of limitations there.
04:44:52 So, this means I need to do LD HL, (HL) (dest, src)
04:45:01 What's your whole list of registers?
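The direct-threaded NEXT and DOCOL steps above might look like this in Z80 assembly. This is only a sketch in the style of other Z80 Forths; the register assignments (BC = IP, HL = W, IX = return stack pointer, SP = data stack) are assumptions for illustration, not necessarily the poster's actual layout:

```asm
; Direct-threaded NEXT: (IP) -> W, IP+2 -> IP, JP (W)
next:   ld a, (bc)      ; fetch low byte of (IP) into W
        inc bc
        ld l, a
        ld a, (bc)      ; fetch high byte of (IP)
        inc bc          ; IP has now advanced by 2
        ld h, a
        jp (hl)         ; jump into the word's code field

; DOCOL, assuming each colon definition's code field is CALL docol
; (as mentioned later in the log): the CALL leaves the body address
; on the machine stack, so "W+2 -> IP" becomes a POP.
docol:  dec ix          ; PUSH IP onto the return stack
        ld (ix+0), b    ; high byte
        dec ix
        ld (ix+0), c    ; low byte (low byte ends up at lower address)
        pop bc          ; body address (pushed by CALL) -> IP
        jp next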
04:45:05 Which means, I need to dereference HL into something else
04:45:15 DE, HL, BC, IX, IY mainly
04:45:24 IX is for the return stack, IY is used by the OS for interrupts
04:45:27 So I'm left with DE, HL, and BC
04:45:37 Technically AF as well, but A is the accumulator
04:45:56 Maybe I can salvage IY
04:46:13 * KipIngram drumming fingers on desk...
04:46:15 Hmmm.
04:47:31 Which register is your data stack pointer?
04:47:38 PSP
04:47:43 I mean SP
04:47:48 Oh, ok, the regular stack.
04:47:58 I could save/restore it, but that would add extra overhead
04:48:06 But in DOCOL you said PUSH IP.
04:48:15 Yes
04:48:16 That makes that one seem like the return stack.
04:48:19 Oh sorry
04:48:25 I mean PUSH_IP_TO_RS
04:48:30 push IP to the return stack
04:48:35 "push"
04:48:41 Ok.
04:48:55 exit is: POP_IP_FROM_RS \ NEXT
04:49:03 So the problem here is that you are starved for registers that can address RAM.
04:49:25 Right.
04:49:43 Losing TOS in a register hurts. But I think anything you do here is going to hurt.
04:49:56 Oh, well.
04:49:59 Do you have a register-to-register xchg instruction?
04:50:15 ex de, hl
04:50:20 is the only valid exchange
04:50:26 Otherwise it has to go through the 8-bit accumulator
04:50:30 Urgh.
04:50:50 e.g. to do BC -> HL you have to do:
04:51:10 ld a, c \ ld l, a \ ld a, b \ ld h, a
04:51:16 Yeah.
04:51:40 And each load takes around 4 cycles, but NEXT is something that I would be running a lot
04:51:41 Well, shoot. I don't really see any way out that probably wouldn't impact you as much as not having a registered TOS.
04:51:47 You have several ways out, but they all cost.
04:51:52 Right. A full rewrite is needed.
04:52:03 Just the primitives, the WORD words don't need to be
04:52:36 Well, isn't it just: add a pop at the beginning and a push at the end of all the words?
04:53:04 Yeah
04:53:20 Oh, because the register allocations will be different
04:53:27 Oh. Ok.
04:53:35 So in your test version, how is it misbehaving?
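The missing LD HL,(HL) is the crux here. A 16-bit fetch through HL has to go byte-by-byte via a scratch register pair, and clobbering that pair is exactly what costs the registered TOS. A sketch of the "(W) -> W; JP (W)" step of indirect NEXT, assuming W lives in HL and DE is the scratch (DE as the sacrificed TOS register is an assumption for illustration):

```asm
; Indirect NEXT tail: dereference W in place, then jump.
; Z80 has no LD HL,(HL), so a scratch pair is unavoidable here.
        ld e, (hl)      ; low byte of the code-field contents
        inc hl
        ld d, (hl)      ; high byte
        ex de, hl       ; HL <- (W)
        jp (hl)         ; jump to the code routine
```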
04:53:42 Or can you tell yet?
04:53:48 It just crashes.
04:54:10 My current strategy is an unusual one, to accomplish (W) -> W then JP (W).
04:54:32 I do this: (W) -> W, then perform self-modifying code to change the destination of the JP
04:55:26 Hey, if it works... I've never shared the profession's aversion to self-modifying code. I understand the general discomfort, but basically if something WORKS it works.
04:55:45 i.e. I dereference W into itself, then dereference W again into the two bytes after JP
04:55:50 Theoretically it should work; I'm not sure why it's not.
04:55:52 Right.
04:56:35 So basically you're re-writing NEXT on every word.
04:56:45 Yeah
04:57:17 In an effort to try to not change my current register allocations
04:57:30 Right.
04:57:55 Exactly the sort of thing I'd do. So no chance you have the endianness of that jump target wrong?
04:58:26 I thought that it might be, but I really don't think so.
04:58:36 Should be JP [lowest byte] [highest byte]
04:58:56 Ok.
04:59:12 And JP is an absolute jump?
04:59:14 Not relative?
04:59:19 Absolute.
05:01:42 I guess you can't push a memory word, can you?
05:01:48 push (w)?
05:02:10 Nope.
05:02:21 If you could do that then push (w); ret would get you there.
05:02:21 Only PUSH reg_16 or PUSH index_reg
05:02:36 Ah, if I could, yes.
05:03:18 Just as a test, have you tried push ix; w->ix; (ix)->w; pop ix?
05:03:22 One thing could be to blame: my macros.
05:03:57 Huh, there's EX (SP), HL
05:04:33 I'll try that
05:04:45 I tend to avoid the index registers though, they're slow.
05:05:08 Interesting.
05:05:19 Well, "slow" is relative.
05:05:29 Takes microseconds but they add up
05:05:30 I have to keep reminding myself how old this tech is.
05:05:38 We were still "toddlers."
05:06:23 This would never be a problem if I was able to execute RAM above 0xC000
05:06:32 Yeah.
05:06:39 So that's deliberately built into the calculator?
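The self-modifying variant described here can be sketched concretely. JP nn is encoded as C3, low byte, high byte, so the low byte of the fetched code-field address goes to jump+1 and the high byte to jump+2 (little-endian, matching "JP [lowest byte] [highest byte]" above). This assumes W = HL and deliberately leaves the other register pairs untouched, which is the whole point of the trick; labels are illustrative:

```asm
; Self-modifying "(W) -> W; JP (W)": patch the JP's own operand
; instead of burning a scratch register pair.
        ld a, (hl)      ; low byte of the code-field address
        ld (jump+1), a  ; patch JP operand, low byte first
        inc hl
        ld a, (hl)      ; high byte
        ld (jump+2), a
jump:   jp 0            ; operand bytes rewritten just above
```

One caveat worth noting for debugging "it just crashes": this only works when NEXT itself lives in writable, executable RAM, and the patched bytes must land exactly one and two bytes past the C3 opcode.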
05:06:49 The operating system, I believe
05:07:12 The assembler really isn't helping me with this either. I'll need to write my own later.
05:07:17 I think it must be in the hardware somehow - the OS shouldn't know what code you're executing.
05:07:48 Ah, it's probably a hardware thing then.
05:07:56 If OS code is executing, your IP value isn't in IP.
05:08:17 yeah - they must check the top bits of the program counter and send that into an interrupt.
05:08:54 Maybe you could mask interrupts for the brief time you're passing through your direct-threading preambles?
05:09:21 I haven't learned interrupts yet. Maybe that's what I'm missing.
05:09:22 NEXT would mask just before it does the JP (W)?
05:09:32 I'll check my guide on that
05:09:46 Ok. Problem there is that there usually is a "non-maskable interrupt."
05:09:55 If they're using that one then you can't beat them that way.
05:10:08 There seems to be an interrupt mode called mode 2: "The most fun. In mode 2, the CPU can theoretically jump to any address in memory. This is the interrupt mode we are interested in."
05:10:26 But there's usually an instruction or a flag in the flag register for masking maskable interrupts.
05:11:16 NEXT could mask just before it does the JP (W), and then you'd unmask at the start of all the routines that you get to from the preambles.
05:11:44 I'm assuming those bits of code that JP (W) sends you to are themselves jumps?
05:11:50 Jumping you back down to low RAM?
05:12:03 I.e., you have jp docol before your definitions?
05:12:17 Wow, a bunch of instructions I haven't used yet:
05:12:26 EXX, EI, IM, etc.
05:12:45 I have a CALL DOCOL before my word definitions
05:13:11 Ah, so that puts the address you need to load into IP on your stack.
05:13:41 So yeah, figure out how to mask interrupts, mask just before JP (W) in NEXT, and then unmask as the first thing in docol.
05:14:03 And dovar, and do.
05:14:18 Sorry, I don't completely understand the use of interrupts.
05:14:19 I'll read more on it.
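The masking scheme suggested above is short in practice: DI and EI are the Z80's mask/unmask instructions for maskable interrupts. This sketch assumes the OS trap is driven by a maskable interrupt (as discussed, it cannot beat an NMI-based trap) and reuses the earlier illustrative layout with W in HL:

```asm
; Mask maskable interrupts across NEXT's final jump, so the OS can't
; observe the program counter while it passes through high-RAM
; preambles.
next:   ; ... fetch (IP) -> W, advance IP, as before ...
        di              ; mask just before JP (W)
        jp (hl)

docol:  ei              ; unmask as the first thing in DOCOL
        ; ... push IP to the return stack, load new IP, jp next ...
```

The same EI would go at the start of dovar and every other routine reachable from a preamble, since any of them may be entered with interrupts disabled.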
05:14:46 You know, it's interesting. With your direct-threaded thing you have
05:15:05 CALL DOCOL; etc.
05:15:12 And indirect threaded just needs
05:15:21 DOCOL; etc.
05:15:24 Right.
05:15:40 So that's really already there - you'd just be incrementing all of the cells of your definition by one byte.
05:15:49 Point just after the CALL opcode instead of at it.
05:15:57 Should work even if it was still there.
05:16:14 But you have your headers back there too, I think.
05:16:44 Usually people don't CALL DOCOL.
05:16:48 They JP DOCOL.
05:17:04 You have W pointing there, so incrementing it past the CALL instruction gets you the thing you need for IP.
05:17:15 You're using macros as well, right?
05:17:21 Yes.
05:17:22 Heavily.
05:20:22 This is perhaps the most constrained target I'm going to write for, right?
05:21:48 I think so, yes.
05:21:57 In its day the Z80 was considered a big advance over the 8080.
05:22:04 But I doubt you'll wind up needing to program it.
05:22:34 Every other assembly language seems to be so high-level
05:22:37 Like the x86
05:22:41 Wow
05:23:08 Too bad this isn't a 6809 - I think of it as pretty much the best of those early-generation units.
05:23:13 Very "orthogonal."
05:23:41 My very first Forth was on a 6809 - it was the brains of the TRS-80 Color Computer.
05:24:38 Around how old were you when you did that?
05:26:26 Oh, 20?
05:26:38 It was horrible.
05:26:40 It worked.
05:26:47 Horrible how?
05:26:49 But I had no real knowledge of Forth internals when I did that.
05:27:11 Later, when I got hold of a book outlining Forth "guts," I was blown away by how simple it all was when done right.
05:27:17 O.o it's possible to implement a Forth without understanding its internals?
05:27:37 Sure. You're just likely to overcomplicate a lot of things.
05:27:48 I need to find a good book on implementing Forths
05:27:57 A lot of "firsts" are sub-optimal
05:28:12 Forth Fundamentals volume 1 is the book I used, and it's excellent.
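Back on the threading question: the "increment each cell by one byte" observation works because CALL nn is encoded CD, low byte, high byte. The two bytes after the opcode are literally the address of DOCOL, i.e. a ready-made indirect code field. A sketch with hypothetical word names (dup_, star, exit) purely for illustration:

```asm
; Direct-threaded layout: thread cells point at executable code.
squared: call docol     ; CD lo hi = 3 bytes; code field
         dw dup_        ; thread of code-field addresses
         dw star
         dw exit

; Indirect-threaded reuse of the same image, as discussed above:
;   DTC cell:  dw squared      ; points at the CALL opcode
;   ITC cell:  dw squared+1    ; points at the CALL's operand,
;                              ; which already holds docol's address
```

As noted, the CALL opcode byte can even stay in memory unused; only the cell values in threads change.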
05:28:16 McCabe.
05:28:20 It's hard to find.
05:28:41 It's not even on Library Genesis
05:28:51 I've got a copy, and at some point I'm going to find a way to get it online.
05:28:57 But I don't want to ruin the book in the process.
05:29:02 Just haven't tackled it yet.
05:29:04 So what are some possible reasons why Forth isn't as popular as it used to be?
05:29:11 It's too good to be "lost" to the community.
05:29:30 Lisp has exploded in popularity and reverence
05:29:59 ^it also helped that there was an MIT course called Structure and Interpretation of Computer Programs that made Lisp shine
05:30:00 Well, I think a couple of possible reasons are: 1) one of Forth's biggest strengths is that it's low-RAM-consuming.
05:30:04 But that hardly matters anymore.
05:30:16 Ah, it's niche.
05:30:48 2) It's not "intellectual" in the sense of obeying some formal set of syntax rules. It's fairly arbitrary, in actual use.
05:31:00 Programming languages shouldn't have ambiguous order of operations. Those come from algebra but don't fit the world of computing
05:31:02 I consider that an advantage, but I think people like following rules.
05:31:18 Finally, I think Forth people tend to alienate other parts of the community.
05:31:26 How so?
05:31:34 We have a tendency to go around talking about how Forth is just clearly better than anything they're doing.
05:31:41 We act kind of superior about it, and that doesn't win friends.
05:31:59 Aren't Lispers the same way?
05:32:10 And also Forth really just doesn't play well with other languages. It likes to be in charge.
05:32:24 (unfortunately, I haven't had anyone to advocate Forth to in real life)
05:32:39 I've always found that I can do good work with a system that's Forth all the way to the hardware. It just doesn't fit into our modern "write massive libraries" mentality.
05:32:52 Plus it's HARD. Easy to learn, but hard to become *good*.
05:33:02 It takes a lot of work to understand an application well enough to write good Forth.
05:33:06 Right. You need complete control in Forth.
05:33:11 Also it has no security model
05:33:17 Python, say, makes it a lot easier to hack something out quick and dirty.
05:33:33 Right.
05:33:38 Its security model is "none."
05:33:48 Because any security = slower
05:33:49 Here's the universe - you're God.
05:34:29 Yeah, I was thinking as I drove to work yesterday about the comparative speed of interrupts and polling.
05:34:47 Really, interrupts should be at least about as fast as the best polling loop you could write.
05:34:52 Often they should be a hair faster.
05:34:59 Frankly, "god mode" can be dangerous for some embedded systems.
05:35:07 But invariably in modern systems an interrupt means a security-level change - a context switch to the kernel.
05:35:09 I wouldn't want to open a REPL on a pacemaker
05:35:11 And that's expensive.
05:35:22 No doubt.
05:35:47 But I can't help but look at embedded systems through the eyes of a developer.
05:35:53 And as a developer, I want to be able to do anything.
05:36:16 Right. But one question is _should_ you be able to do anything?
05:36:20 Especially on life-critical systems.
05:36:31 Certainly not, on a real pacemaker in a live human.
05:36:42 But that part comes later, after I've done my thing.
05:37:25 I don't understand the absurd limitation of mobile phones not being very programmable
05:37:38 All of our big-time operating systems are designed with the idea that they will be populated with adversarial users, simultaneously.
05:37:44 So the thick security layer is required.
05:37:47 There's really no harm in that, just use sandboxes, etc.
05:37:56 Have you heard of Qubes OS?
05:38:00 No one ever uses my Mac notebook except me, but I bear that layer anyway.
05:38:05 No.
05:38:24 It goes all the way by allowing you to have applications in specific "workspaces", which are equivalent to virtual machines
05:38:37 So your browser would be completely isolated from your personal information, for instance.
05:38:50 A single failure is just a single failure
05:38:58 Yes. Containers.
05:39:17 I haven't been able to get it working on my Mac; sometime later when I switch to a PC
05:39:48 I'm writing my Forth with the view that it will be completely in control of any machine it's running on (eventually), and that 100% of the software on the machine is friendly and cooperative.
05:40:32 https://blog.invisiblethings.org/2015/12/23/state_harmful.html
05:40:42 "State considered harmful - A proposal for a stateless laptop"
05:40:52 Pretty good paper
05:40:58 Cool.
05:41:16 Though of course what a computer IS is a state machine.
05:41:25 So they must be using "state" in a particular way.
05:41:46 A good world would be one in which you carry your data with you but share laptops and so on. You can verify that the laptop is secure and plug in your data and use it as if it were yours.
05:42:06 Then when you're done you save the image onto your drive and leave no trace on the hardware
05:42:25 I do think that would be a good world, but it involves a lot of trust of the hardware.
05:42:49 You heard of "digital dead drops?"
05:42:50 A lot of the old computer systems that underlie our world will have to change
05:43:01 I've heard of those, but they sound like a good way to get viruses
05:43:07 Exactly.
05:43:22 They're the flip side of what you just described: bring your own hardware, here is data.
05:43:37 But wow - the analogy with STDs is striking.
05:43:50 The fundamental problem with security is that it's harder to tell when you're safe _digitally_ than physically.
05:43:59 Haha
05:44:32 I don't know if my laptop was compromised by an evil maid attack, but I can tell if someone broke my lock
05:44:43 Right.
05:44:59 We erred when we started building computers so that digital maid attacks were even possible.
05:45:18 Research in encrypted computing is still in its infancy
05:45:20 But the concept is fascinating.
05:45:21 To respond to something from the outside in some heavy way, without your user authorizing / requesting it...
05:45:28 Yes it is.
05:45:49 I feel the same way about "active content" on the web.
05:46:01 Browsers should fetch data and render it on your screen.
05:46:03 End of story.
05:46:31 We've softened the boundary between "code" and "data" too much.
05:46:34 Active content?
05:46:49 I have JavaScript disabled by default
05:46:51 Like JavaScript and other such things that "execute" on the client PC.
05:47:09 Around 80% of the time it works fine, except when I need to sign in or use interactive content
05:47:20 I use uMatrix, a great add-on for Firefox
05:47:32 We probably don't really need it as much as we did.
05:47:45 There was a period in there where networks were slower, and it enabled very high-performance web experiences.
05:47:51 I enable JavaScript for some websites like Wikipedia and DuckDuckGo
05:47:54 A cleaner model would probably work in today's world.
05:47:54 Right
05:47:59 Now a lot of JavaScript is junk
05:48:25 Also, not to mention that JavaScript is so slow and inadequate as a language that people are compiling TO JavaScript
05:48:37 Or bypassing it completely with WebAssembly
05:49:05 It was just an example. I meant to refer to anything where you hand control of your processor over to something embedded in the web content.
05:49:20 Right
05:49:52 Just viewing a website should not make your computer vulnerable - that was a bad step.
05:51:24 Query injection attacks are similar - the DBMS is allowing data to be treated as code.
05:51:36 It should have a strong wall between "this is query" and "this is data."
05:52:15 Not to mention that JSON is often eval'd because it is compatible with JavaScript syntax
05:52:27 Yes. :-)
05:52:47 One thing you can count on the world for: if you open a door, someone will walk through it.
05:53:11 One problem is that when a lot of our network infrastructure was first developed, the community was mutually trusting.
05:53:21 Scientists, military, etc. - all a team.
05:53:31 I sometimes wonder what an alien civilization's computing would look like
05:53:32 Then we dropped it on everyone without fixing those things.
05:53:40 +KipIngram> I'm writing my Forth with the view that it will be completely in control of any machine it's running on (eventually), and that 100% of the software on the machine is friendly and cooperative.
05:53:42 How much of it is really fundamental?
05:53:43 :-)
05:53:50 outside of an embedded environment, you know you'll never get that, right?
05:54:05 I just laughed when they uploaded a virus to the alien network in Independence Day.
05:54:09 even if you replace the OS, there's still the firmware, the microcode, etc.
05:54:14 Wow - they use the same protocols we do...
05:54:37 Ha
05:54:46 Definitely impossible
05:54:56 yeah, what I primarily meant by that is that there won't be other software on the system being run by other users.
05:55:09 Binary is fundamental, right?
05:55:24 no
05:55:26 NASA attempted to exploit that in their Voyager golden records
05:55:27 siraben: They could have at least put in a few lines about how they'd had that alien ship at Area 51 for decades, and had studied their computers.
05:55:34 They could have made it ok with just a few lines of dialog.
05:55:46 Perhaps
05:56:01 It still would have been ridiculous for someone who'd known nothing about it at all until a few hours prior to have done the viral attack.
05:56:13 It would have been a staffer who'd been studying the stuff for years.
05:56:34 Then again, Hollywood never strives for accuracy
05:56:42 No, they don't.
05:56:58 And that's ok, I guess, but I think we have hordes of terribly misinformed people out there.
05:57:15 But anyway, that movie was worth it for Bill Pullman's speech.
05:57:34 Talk about the (mis)representation of programming in movies
05:57:38 It's crazy, right?
05:57:41 Yes.
05:57:43 Blackhat.
05:57:55 Oh my God - I felt physical pain during that movie.
05:58:34 https://en.wikipedia.org/wiki/Gell-Mann_amnesia_effect
05:59:38 (perhaps relevant to the Hollywood discussion)
05:59:56 Yes.
06:00:20 "I'm an expert - I watched."
06:01:24 "We need to start turning away from media."
06:01:25 Amen.
06:01:36 The media has long since stopped being mere "distributors of information."
06:01:44 They actively try to steer our thinking.
06:02:17 Computing is especially misrepresented because 1. most people don't understand it 2. it looks cool and magical
06:02:26 Right, exactly.
06:02:36 and 3. the people who perform the tasks are wizards
06:02:37 It *is* cool, but Hollywood needs it to be cooler.
06:02:47 To appreciate its actual coolness, you need some knowledge of it.
06:02:51 And the audience doesn't have that.
06:03:07 Just last week I was taking notes in Emacs, and it was mirrored to the screen, and there was always the occasional glance like "how did you do that?"
06:03:27 For instance, I did the equivalent of 20 undos in a couple of keystrokes
06:03:43 Yes.
06:03:44 "Any sufficiently advanced technology is indistinguishable from magic"
06:03:52 Clarke rocked.
06:04:03 I have felt that sensation when looking at videos of other users
06:04:25 Watching someone who really knows how to fly vim or emacs is... entertainment.
06:04:28 Especially before getting to know my editor
06:04:30 It can be quite impressive.
06:04:47 I don't hold a candle to a few of the whiz kids in my office.
06:05:15 Google Docs is painfully slow, I can't believe people tolerate the latency
06:05:29 yeah - it's crazy.
06:05:31 Eclipse.
06:05:32 ~5 seconds just to get from URL to being able to edit the file
06:05:33 Can't stand it.
06:05:36 Or more
06:05:52 There's really no excuse to make things fast, right?
06:06:06 You mean not to?
06:06:16 Right, not to make them fast
06:06:21 That's how I feel.
06:06:35 We have these machines that are just absolutely remarkable compared to what was available a few years ago.
06:06:57 But we've bloated our OSes and done other things that just throw that power out the window.
06:07:56 We should have a rule that at least 90% of the power of your processors has to reach the end-user application. Or something like that.
06:08:01 And I'm joking, I don't believe in such rules.
06:08:08 But wow - we're so far from that it's crazy.
06:08:50 Unfortunately on macOS you can't replace the window manager with something less resource-intensive.
06:09:01 We've sort of done it to ourselves, though. People drool and go "ooh, aah" over glitzy graphical effects in the OS and so on, instead of focusing on what the thing lets them get done.
06:09:14 I have used i3 on Linux before, wonderful software.
06:09:32 i3?
06:09:49 A window manager
06:09:53 ah
06:10:14 If you peek into a restaurant kitchen, you don't see polished marble tables and oak cabinets, you see efficiency, and it ain't pretty.
06:10:20 That's how I feel about it
06:10:34 RIGHT - good analogy.
06:10:38 Of course, aesthetics matter, but only in the final product; that's why I use LaTeX for paper writing
06:10:49 But peek into an upscale suburban home kitchen.
06:10:53 You don't see efficiency.
06:10:56 You see "show."
06:11:03 Ugly but efficient writing process, but beautiful output
06:11:30 My wife and I have strongly different opinions on kitchens.
06:11:35 I love that quote from Starting Forth
06:11:36 She likes the swanky-looking status symbol stuff.
06:11:41 I want a WORKING kitchen.
06:11:53 I stay out of the kitchen :)
06:11:58 Comparing the raw power of Forth to playing a piano: you don't expect the piano to do things you don't tell it to, so why let a programming language get in the way?
06:11:58 Heh.
06:12:04 I cook more and better than my wife does.
06:12:05 Of course, that only works to some extent.
06:12:14 I NEED a safe language sometimes
06:12:20 Hah
06:12:38 Because you emphasize efficiency?
06:12:51 No, because I'm just better at it.
06:13:14 I was single for a while between marriages, and it's one of the things I decided to become competent at.
06:13:27 I see.
06:13:40 But I scooped my wife up her last semester in college (she's 10 years younger), so she never really had the same sort of pressure to learn.
06:13:51 She had her parents, she had the dorm cafeteria, and then she had me.
06:14:10 My wife is an excellent cook, but rarely has the time
06:14:14 Environment matters.
06:14:32 I'm too young to have a wife, so.
06:14:34 Mine either - she's also an engineer and these days works considerably harder than I do.
06:15:06 My job's actually pretty ridiculously low-stress given the money. I've been very lucky.
06:15:20 I had my years in the trenches, but I'm not really there now.
06:16:46 Officially I've been retired for 9 years
06:17:10 But I still dabble
06:19:05 KipIngram: I enjoyed my recent session on program generation; I'm learning Prolog right now and it's very mind-bending.
06:19:12 I'll port it to Prolog eventually.
06:20:18 Very declarative. Instead of "do this, do that", it's "f(x) = y, make it happen"
06:20:36 One can do something similar with backtracking in Forth
06:20:45 Well, a smaller class of search problems, that is
06:21:32 The creator of Erlang said that there haven't really been any new ideas since Prolog
06:21:44 Everything's the same, more or less, in programming language theory.
06:21:45 Check out the book « Intelligent Embedded Systems » by Louis Odette, he implemented Prolog in his Forth
06:22:11 "Intelligent Embedded Systems" probably means something very different nowadays
06:22:28 I'll check it out.
06:22:38 It's an old book
06:23:00 Wow, these books are hard to find.
06:23:30 He also had some papers in FORML proceedings and JFAR, IIRC
06:23:47 1991
06:24:11 It came with a diskette, I wonder if I ever backed it up
06:24:51 Yes, Prolog is interesting.
06:26:17 IIRC Paperback Software did VP-Expert in Forth
06:26:24 I also heard the creator of Erlang endorse the "don't worry about speed - computers will be 10x faster next decade anyway" position.
06:26:46 I think that's a dangerous mentality. Definitely something to be embraced "moderately."
06:27:11 Haha, I think he was joking
06:27:27 Unlikely to be the case anymore, anyway.
06:27:32 People are buying computers less
06:27:36 Even mobile sales have slumped
06:28:26 My phone is 6 years old (iPhone 5S) and apart from a suboptimal battery performs all my tasks adequately.
06:28:26 some peeps' position: "but I want ludicrous speed! Ludicrous, I say!"
06:28:46 But if engineers and programmers were better, we could last at least 10 years with current hardware
06:29:14 No, I don't think he was.
06:29:19 Not in the talk I saw.
06:29:30 And right - we're done with those days.
06:29:54 Which is a good thing. We can finally make use of this huge pool of resources
06:30:09 And figure out how to really make use of parallelism.
06:30:20 I think we just went crazy with "look! gigabytes of RAM! I don't need to worry about memory any more!"
06:30:22 We spent decades chasing faster cores.
06:30:30 KipIngram: like I keep telling peeps, FlowBasedProgramming.
06:30:42 Hybrid CPU+FPGA cores
06:30:52 How is parallelism nowadays?
06:30:56 I haven't used it so much
06:31:28 rdrop-exit: yeah, but not with complete hard cores but components thereof, hopefully
06:32:10 Joe Armstrong has done quite a bit with multicore computing
06:32:15 Zarutian: Yes - I'm totally aiming at clean support for that in my Forth.
06:32:25 It's going to be the basis of processes working together.
06:33:04 This reminds of the Green Array chips created by Chuck Moore 06:33:07 Intel has x86+FPGA in the same package, that’ll be the future for PCs it seems 06:33:11 144 concurrent "computers" 06:33:19 Just imagine 06:33:31 siraben[m]: yeah, FlowBasedProgramming fits nicely on Green Arrays. 06:33:49 What's flow based programming and how do I learn that paradigm? 06:33:59 rdrop-exit: yeah, but I suspect that more and more code then will be dumping the x86 part. 06:34:30 siraben[m]: http://www.jpaulmorrison.com/fbp/ 06:35:18 I think the easy stuff will be done on the X86 the hard stuff will be offloaded to the FPGA 06:36:18 https://www.nextplatform.com/2018/05/24/a-peek-inside-that-intel-xeon-fpga-hybrid-chip/ 06:36:59 Is Flow Based Programming not the same as unix pipes combined with the actor model? 06:38:17 KipIngram: if you can stomach writing a json parser (or was it sexpr parser) in Forth to read in FBP netlist and component definitions then your Forth gains something that most Forths lack somewhat: Utility. 06:38:32 siraben[m]: not quite but you are in the right direction. 06:39:16 siraben[m]: that is you can FBP with unix pipes and processes/programs 06:39:34 can do FBP* 06:40:29 KipIngram: I think I'll take some time off directly writing assembly/Forth to work on an assembler. The more I think about it the more it makes sense. 06:40:30 My own is unmaintained, underdocumentedd 06:40:50 And plus, it's a fun exercise :) 06:41:04 It's extremely easy to factor Lisp code with Emacs 06:41:57 What assembler are you using? 07:08:46 --- quit: rdrop-exit (Quit: rdrop-exit) 07:10:17 hmm 07:10:21 I have a problem 07:12:30 I added a decompiler to Attoforth, but it has a couple problems - namely that inline numbers in code just come out as numbers, including ones that are really pointers, and second that there is no visible "if" "else" "then" "begin" "until" "while" "repeat" "do" "loop" etc. 
but rather those all come out as a pile of BRANCHes, ?BRANCHes, (LITERAL)s, and numbers 07:21:27 tabemann: known 'problem'; it depends on how 'smart' you are willing to make your decompiler, to recognize higher-order control structures and such 07:25:11 --- quit: tabemann (Ping timeout: 250 seconds) 07:33:16 Right - I hesitate to even use the word "problem" for that. 07:33:48 You could "assist yourself" by having several different copies of 0BRANCH, so that information could be gleaned from which one you were looking at. 07:33:58 But that sort of compromises the "purity" of the compiled code. 07:34:11 It's a pretty small price, in the grand scheme of things. 07:34:17 Sounds like an undecidable(?) problem 07:34:33 You see what I mean? Have one that you only use for IF, one you use for WHILE, etc. 07:34:41 They'd all do the same thing - just 0BRANCH. 07:34:48 There's infinitely many control structures. 07:34:51 But it would help you know what structure compiled that. 07:35:01 Well, yeah - I'm just thinking of the base language. 07:35:14 Since Forth lets you create new control structures, you could always outfox it. 07:35:46 Hilarity may ensue 07:35:55 Some decompilers just decompile those as 0BRANCH, and let you figure out what the original code looked like. 07:36:08 I just don't prettify control structures 07:36:14 Aids debugging 07:36:20 Right 07:36:20 I'm evolving toward less. 07:36:33 I want to be able to go from Forth → Assembly 07:36:43 I'm doing a lot of things with conditional return, jumps to the start of words, and so on. 07:36:57 I'm just not NEEDING the traditional structures. 07:37:40 Yeah, I find the IF WHILE BEGIN et al. family restricting sometimes 07:39:01 Plus they're long (textually) and push up definition length fast. 07:39:24 I'm very "symbolic" in my programming; I tend to like names that are a short little string of special characters rather than a long English word.
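Why the control words vanish: IF compiles a ?BRANCH plus an inline offset, and THEN merely resolves that offset, so nothing named "IF" or "THEN" survives in the threaded code. A toy sketch (the cell layout and word names are illustrative, not from Attoforth or any particular Forth):

```python
# Hypothetical threaded-code image for something like
#   : PICK-ONE  ?BRANCH-on-flag  1  ELSE-part 2  ;
# compiled as: test, ?BRANCH <offset>, true-arm literal, BRANCH <offset>,
# false-arm literal, EXIT. A naive decompiler can only dump the
# branches and inline operands back out.

code = ["?BRANCH", 4, "(LITERAL)", 1, "BRANCH", 2, "(LITERAL)", 2, "EXIT"]

def naive_see(code):
    """Dump threaded code the way a simple decompiler would:
    words with inline operands get their operand printed raw."""
    out = []
    i = 0
    while i < len(code):
        tok = code[i]
        if tok in ("?BRANCH", "BRANCH", "(LITERAL)"):
            out.append(f"{tok} {code[i + 1]}")  # operand is inline
            i += 2
        else:
            out.append(str(tok))
            i += 1
    return out
```

A "smarter" SEE would pattern-match a backward ?BRANCH into BEGIN ... UNTIL, a forward ?BRANCH/BRANCH pair into IF ... ELSE ... THEN, and so on; the point of the discussion above is that this matching is heuristic, and user-defined control structures can always outfox it.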
07:39:45 These days I put a lot of emphasis on keeping definitions short - hopefully no longer than 50-60 characters. 07:40:01 7-10 words. 07:40:11 I can't even represent words > 32 in length 07:40:16 It gets truncated 07:40:17 I'm really taking the "Forth says factor" thing to heart. 07:40:26 I don't consider that much of a limitation. 07:40:51 My symbol table is structured so that any word longer than 15 chars would require a linked record. 07:40:57 I hope to never use that feature. 07:41:41 My longest word is SEE 07:41:42 https://github.com/siraben/ti84-forth/blob/master/forth.asm#L2625 07:41:54 That was a good example of how decompiling helps 07:42:12 I would never have been able to (without a lot of debugging) write 0branches and branches everywhere 07:45:28 I'm intrigued by other control flow primitives. Like I noted, I'm using conditional returns a lot. I'm also interested in primitives that would conditionally skip a word. Just one word - factor. 07:45:42 That's sort of like pushing the conditional return up a level. 07:45:59 Instead of always calling and conditionally returning, it should be faster to just conditionally call (on the opposite condition, of course). 07:46:00 They're all equivalent. All you really need is 0BRANCH 07:46:14 Yes, I know, or perhaps you DON'T need 0BRANCH. 07:46:30 Conditional return and conditional skip don't require an offset to be compiled. 07:46:54 The skip of course has the word you're skipping or not, but it's still "more like everything else" than a branch offset is. 07:47:19 I agree - it's all six of one, half a dozen of the other. 07:47:58 And the skip also has an implicit branch of length one cell. 07:48:01 Forward. 07:48:26 And instead of a DO LOOP, with arbitrary content, you could have an "iterator word," that just calls one cell N times. 07:48:26 KipIngram: I provide SKZ (SKip if Zero) as the only conditional control flow transfer primitive. 07:48:30 Once again, factor... 07:48:40 Right.
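The SKZ-versus-0BRANCH point can be made concrete in a toy inner interpreter: SKZ needs no compiled offset because its "branch" is implicitly one cell forward, while 0BRANCH carries its offset inline. This is a sketch under assumed semantics, not Zarutian's actual implementation:

```python
# Toy inner interpreter comparing two conditional primitives:
#   SKZ      - skip the next cell if top of stack is zero (no offset)
#   0BRANCH  - classic: inline offset follows, taken if TOS is zero
# Cells are strings (primitives), numbers (inline operands),
# or Python callables standing in for compiled words.

def run(code, stack):
    ip = 0
    while ip < len(code):
        op = code[ip]
        ip += 1
        if op == "SKZ":
            if stack.pop() == 0:
                ip += 1                  # implicit forward branch of one cell
        elif op == "0BRANCH":
            offset = code[ip]
            ip += 1
            if stack.pop() == 0:
                ip += offset             # offset is relative to the next cell
        elif op == "LIT":
            stack.append(code[ip])       # inline literal
            ip += 1
        elif callable(op):
            op(stack)                    # "execute" a compiled word
        else:
            raise ValueError(f"unknown cell: {op!r}")
    return stack
```

With SKZ, the guarded word sits right there in the thread ("more like everything else" than a branch offset); the cost is that only a single cell can be guarded, so anything bigger must be factored into its own word, which is exactly the discipline being advocated above.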
07:48:52 I need to read Thinking Forth 07:48:57 It's good. 07:49:01 Oh, I hadn't seen the "iterator word" idea. 07:49:04 It's more of a book about software engineering than actual code 07:49:10 Yes. 07:49:46 Zarutian: I'd be apt to provide a whole family of SKIP words, for different conditions, but I do see that you can make any of them from one. 07:49:55 I'd just want to have my choice without paying any efficiency cost. 07:51:12 In my Forth processor I tinkered with, I considered 32-bit cells, which contained either six packed 5-bit opcodes or an address plus instructions on what to do with it. 07:51:20 There are two leftover bits, so you have four options. 07:51:35 One is packed opcodes, one is call, one is jump, one can be a conditional jump. 07:52:16 It worked nicely to have a fetch unit using those two bits to steer fetching - it would forward packed opcodes to an execution unit. 07:52:46 The conditional jump became the issue - in a naive system the decision hasn't yet been made by the execution unit, so the fetch unit has to wait. 07:52:49 Blows the pipelining. 07:53:08 So I experimented with ways of separating the calculation of the decision from the "implementation" of the decision. 07:53:14 When possible - it's not always. 07:53:27 The execution unit could post "flags" to the fetch unit. 07:53:28 KipIngram: one iterator perhaps : TIMES ( times -- ) R> SWAP BEGIN OVER @EXECUTE 1- DUP 0= UNTIL DROP CELL+ >R ; 07:53:46 So if you could know early what you were going to need to do later, you could have the result waiting when the fetch unit got to the decision point. 07:53:56 Yes. 07:54:35 One example of where my approach worked well was classic linked list processing. 07:54:38 Normally you see this: 07:54:46 while (p) { 07:54:51 process(p->data); 07:54:58 p = p->next; 07:55:00 } 07:55:05 Instead, I'd do something like this: 07:55:19 What's @EXECUTE ?
07:55:26 while (not done) { 07:55:34 q = p->next; 07:55:40 post_flag(q); 07:55:52 process(p->data); 07:55:55 p=q; 07:55:57 } 07:56:02 siraben[m]: it fetches from an address, if the contents are nonzero it calls it 07:56:15 That's probably not strictly correct, but you see the point - you've moved the communication of the "is there more" to before the processing of the data. 07:56:32 Zarutian: Interesting. 07:56:34 But sometimes of course the processing of the data is what tells you whether there's more or not. 07:57:37 For one loop it's really simple - just a bit. But making it handle nested loops was somewhat more involved - you needed a multi-bit flag mechanism between the execution and fetch units. 07:58:55 The separation into fetch and execute units worked great for heavily factored code. Since the fetch unit picked up six opcodes every time it got any, it could pull ahead of the execution unit. 07:59:11 So it's over there working through your factored call tree while the execution unit is processing the primitives. 07:59:27 KipIngram: this is starting to sound like what one version of the Itanium did 07:59:45 And also I could usually "see return coming" in a set of packed opcodes (unless it happened to be first), and wound up with zero execution time returns. 08:00:02 Really? Ok. Wouldn't be the first time I "invented" something already out there. :-) 08:00:30 I got the general idea from some chip of Chuck's, where he had three five-bit opcodes packed in a 16-bit word. 08:00:32 KipIngram: naah, it is very similiar to what DSPs and the Mill do 08:01:18 And if it's Harvard architecture, or uses dual-port RAM in FPGAs, there's no memory contention between fetch and execute. 08:01:48 dual-port RAM for program code is so damn nice 08:01:54 One thing you lose is any ability to do "clever" things with the return stack. 08:02:06 Since fetch and execute are generally (hopefully) out of phase with one another. 08:02:11 Fetch owns the return stack. 08:02:20 Oh, right. 
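The restructured loop above, in runnable form (a Python sketch of the control-flow pattern only): the next link is fetched *before* the current node is processed, so "is there more?" is known early. On the hardware being described, post_flag(q) would hand that answer to the fetch unit ahead of time; here it is just a local variable.

```python
# Classic vs. restructured linked-list walk. The restructured version
# reads p.next first, so the loop-continuation decision is available
# before (and independent of) the per-node processing.

class Node:
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

def process_list(p, process):
    while p is not None:
        q = p.next          # decide "more or not" first...
        process(p.data)     # ...then do the work on the current node
        p = q               # advance using the already-fetched link

# Usage: walk a three-node list, collecting the data fields.
lst = Node(1, Node(2, Node(3)))
out = []
process_list(lst, out.append)
```

Both orderings visit the same nodes; the value of the second one is purely in when the continuation decision becomes available, which is what lets a fetch unit keep its pipeline full. As noted above, it only works when processing the data is not itself what determines whether there is more.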
08:02:27 Actually, fetch would see the return instructions. 08:02:46 Been a few years. 08:02:53 wouldn't the fetch unit also see the >R and R> instructions? 08:03:16 If they were there it would have to, but it has no idea what the execution (data) stack state is at the time. 08:03:19 So they're not very usable. 08:03:33 Anything that requires syncing fetch and execution slows the thing down. 08:04:21 Most of the time when we use >R and R> we're just temporarily storing data. It's kind of unusual to actually manipulate R for control flow. 08:04:37 So in this thing I'd probably provide a second data stack or something for that purpose, unrelated to return addresses. 08:05:07 I didn't really explore the idea of it having two data stacks. 08:05:08 I've seen that if one limits the complexity of instructions and uses a fast primary memory (not main but primary) then fetch and execute units can be in sync without speed penalty 08:05:31 Possibly, but that wasn't the case in the Spartan FPGA I was considering at the time. 08:05:57 Remember that sometimes you might have a large number of calls before you got to any opcodes. 08:06:28 My whole goal was to lower the speed penalty of factoring as close to zero as I possibly could. 08:06:50 Can't wait to implement Forth on a different, perhaps more interesting target. 08:06:55 yeah, Xilinx FPGAs are all so weird time-domain-wise that you get waits like that. 08:08:00 (synchronous clocked time-domains, that is) 08:08:31 Anyway, I quickly gave up on any kind of elegant "automation" of those conditional flags - I was planning a small pool of such flags that were explicitly selected / managed by the programmer. 08:08:36 "Know what you're doing." 08:08:48 And the depth of loop nesting would be limited by the number of flags. 08:09:15 But most of the time we don't really nest THAT deeply.
08:09:16 Heck, I found an Atmel (I think) FPGA that did not have any hard D flip-flops, only LUTs (blocked together) and routing interconnects. 08:09:31 Interesting. 08:09:44 I know a good bit about older Xilinx and not a WHOLE lot about anything else. 08:09:51 A little bit about Actel. 08:11:24 Or was it Actel? Anyway, what I recall was that the LUTs were not clocked at all. It felt like you were making logic out of something like 7400-series chips, but as a bitstream for the LUTs and routing interconnects. 08:11:59 Right. I'm not sure who it would have been. My main memory of Actel is that they did better than most on power consumption. 08:12:04 (you implemented the flip-flops via a few LUTs, but those LUTs were cheap) 08:12:14 And they didn't need to be loaded at power up - you burned them and they worked. 08:12:42 Flash, OTP (anti-fuse or whatever) or something else? 08:12:47 We actually made the programmer that Actel branded and sold as their own programmer. 08:13:03 Yes, anti-fuse. 08:13:13 In Actel. 08:14:05 I wouldn't say no to MRAM-based FPGAs based on the blocked-together LUTs idea above. 08:14:18 Maybe we'll see it. 08:15:01 Have each sector reconfigurable via an internal-?port?. Then you could just use some sectors as memory if you wanted to. 08:15:30 I really like the idea of that sort of "full flexibility." 08:15:32 (one port per sector, obviously) 08:16:02 I think on-the-fly reconfigurability would be interesting too. 08:16:17 Have a system that deployed the optimized hardware it needed, when it needed it. 08:16:29 and have the 'bitstream' defined and publicly documented. 08:17:18 Yes, AMEN. 08:17:28 Then we could have clever third party development tools. 08:17:36 Instead of being "locked in." 08:19:00 humm... maybe we should design this kind of FPGA and fake a company, like a collection of hackerspaces did. Why? Because they managed to get a Chinese copycat manufacturer to "steal" their design and manufacture it. 08:19:17 :-) NICE. 08:19:24 GAME THAT SYSTEM!
08:20:11 the hackers just wanted the cheap but reliable components (the targeted copycat manufacturer was known for making cheaper but better chips) 08:20:14 haha that's awesome 08:22:10 last I knew, the manufacturer found out. Now there is back and forth between them on new designs and tweaks. 08:22:25 That's like a golden outcome. 08:24:23 hey, the manufacturer sells some chips and does not have to worry about stupid FTDI (Failed Technology Division Incorporated) tactics. 08:35:35 --- quit: dave9 (Quit: dave's not here) 10:17:02 --- join: dys (~dys@tmo-105-37.customers.d1-online.com) joined #forth 10:21:22 --- quit: dys (Ping timeout: 252 seconds) 10:31:27 --- join: dys (~dys@tmo-104-166.customers.d1-online.com) joined #forth 10:36:26 --- join: [1]MrMobius (~default@c-73-134-82-217.hsd1.va.comcast.net) joined #forth 10:39:36 --- quit: MrMobius (Ping timeout: 272 seconds) 10:39:36 --- nick: [1]MrMobius -> MrMobius 10:45:17 --- quit: ncv__ (Remote host closed the connection) 11:27:14 --- quit: dddddd (Ping timeout: 246 seconds) 11:36:34 --- join: dddddd (~dddddd@unaffiliated/dddddd) joined #forth 13:54:33 --- quit: groovy2shoes (Ping timeout: 260 seconds) 13:59:37 --- quit: ashirase (Ping timeout: 244 seconds) 14:44:55 --- join: ashirase (~ashirase@modemcable098.166-22-96.mc.videotron.ca) joined #forth 16:21:18 Morning. 16:27:56 --- join: kumool (~kumool@adsl-64-237-233-141.prtc.net) joined #forth 16:30:34 Hey. 17:00:41 --- quit: nighty- (Quit: Disappears in a puff of smoke) 17:24:00 lina forth is really nice 17:24:24 refined wordset, runs real fast, good string handling words, good documentation 17:24:43 poor error messages but nothing a `." ERROR!!!!"` can't fix 17:46:28 Oh, neat - I'll have to take a look. I'm interested in string handling ideas. 17:54:39 Hmmm. The "brain dead" appellation for normal counted strings is a bit judgemental / unfriendly. 17:55:55 Ok, their split word only works once, so you'd have to loop on it, but that's still pretty decent.
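The "split once, so loop on it" approach composes cleanly into a split-everything word. A Python sketch (split_once is a stand-in for a word that splits at the first separator only; lina's actual word isn't shown here):

```python
# Building a full split out of a split-at-first-separator primitive,
# by looping on the remainder until nothing is left.

def split_once(s, sep):
    """Split at the first occurrence of sep.
    Returns (head, rest); rest is None when sep wasn't found."""
    i = s.find(sep)
    if i < 0:
        return s, None
    return s[:i], s[i + len(sep):]

def split_all(s, sep):
    """Loop split_once until exhausted, collecting the pieces."""
    parts = []
    rest = s
    while rest is not None:
        head, rest = split_once(rest, sep)
        parts.append(head)
    return parts
```

For these inputs this matches Python's str.split, the model mentioned just below: the whole array of pieces comes back at once.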
17:56:11 I imagine I'll write mine similar to Python string split - where it returns an array of strings. 19:41:03 --- join: nighty- (~nighty@kyotolabs.asahinet.com) joined #forth 19:47:25 WilhelmVonWeiner: fast? 19:49:09 --- quit: dddddd (Remote host closed the connection) 19:52:25 --- join: tabemann (~tabemann@2602:30a:c0d3:1890:d004:a860:b5ff:7c1c) joined #forth 20:03:08 I'm thinking of adding list-structured memory to mine 20:03:23 Lisp-style 20:33:57 --- join: dave9 (~dave@90.20.215.218.dyn.iprimus.net.au) joined #forth 20:34:56 hi 20:38:24 Hi 20:38:33 What time is it over there? 20:43:44 hi siraben[m] 20:43:51 almost 3pm 20:44:27 Cool. Australia? 20:46:35 yup 20:46:45 i slept in till 2pm heh 20:52:17 3pm? you live in the east of au? dave9 20:52:35 yep 20:52:43 i'm in wollongong, south of sydney 20:52:52 aha, famous place 20:53:29 "wollongong" means "windy city" and it is! 20:54:31 dave9: you know what, the name's pronunciation sounds like "sleeping dragon mountain" in chinese 20:54:56 are you making fun of me sleeping in? lol 20:55:28 nope, it's just i had often heard this name on a chinese quora-like site 20:56:35 a friend told me he knows a guy who lives in AU and uses forth 20:59:10 do you know quaak haak 21:06:14 --- join: rdrop-exit (~markwilli@112.201.162.180) joined #forth 21:07:45 hi 21:08:23 Hi 21:08:47 Hello Siraben[m] 21:51:35 --- join: wa5qjh (~quassel@175.158.225.196) joined #forth 21:51:35 --- quit: wa5qjh (Changing host) 21:51:35 --- join: wa5qjh (~quassel@freebsd/user/wa5qjh) joined #forth 21:54:23 --- quit: rdrop-exit (Quit: rdrop-exit) 22:03:13 hey guys 22:22:25 --- quit: dys (Ping timeout: 252 seconds) 22:39:05 --- quit: djinni (Quit: Leaving) 22:39:42 --- quit: FatalNIX (Ping timeout: 272 seconds) 22:40:07 --- join: djinni (~djinni@68.ip-149-56-14.net) joined #forth 22:41:02 --- join: FatalNIX (~FatalNIX@caligula.lobsternetworks.com) joined #forth 23:14:11 --- quit: kumool (Quit: Leaving) 23:59:59 --- log: ended forth/18.10.16