00:00:00 --- log: started forth/18.10.09
02:09:58 hi there
02:21:51 Hi gf.
04:24:35 #notmygf
04:26:07 lol
05:18:36 lol
05:20:37 Morning, gents.
05:20:54 Evening.
05:22:43 hello
05:28:20 :-) Where are you, siraben?
05:28:27 Thailand.
05:28:39 Ah, wow - long way from here.
05:28:45 Small world we live in these days.
05:29:24 Yeah, the world would have been unimaginably large way back then.
05:30:38 Yeah, I think it changes our whole mentality to the point where we just can't relate to things the way our, say, great-grandparents would have.
05:31:17 KipIngram: The small feeling is no doubt largely caused by modern software :-)
05:31:43 Well, and hardware. Software has to have something to run on.
05:31:50 But yeah, the "information arena."
05:32:27 My supervising professor sort of predicted this back in the late 80's. Nothing prophetic there, really - the tech was far enough along that you could see what was coming.
05:32:42 But I really think of the 90's as the decade that the internet really "came of age."
05:32:53 Before that it was sort of a toy used mostly by scientists and other such.
05:32:56 Still boggles my mind how it is possible that information from my computer is being sent through various networks, routers and thousands of miles of cables to the other side of the world.
05:33:08 Yep.
05:33:21 In what, around 0.2 seconds too?
05:33:29 Yeah, thereabouts.
05:33:41 Doing /ping KipIngram returned 00:00.00
05:33:56 And even more amazingly, that it's so cheap to do that that we can do it for idle recreation.
05:34:29 Having the capability is one thing - having it be PERVASIVE is a totally different thing.
05:34:45 And my kids just don't know a world that's not like that.
05:34:48 No clue.
05:34:52 The latter is becoming more frequent. It's terrible.
05:35:16 The internet is becoming more hostile; not having an ad-blocker or some sort of privacy protection on one's browser is like opening the door to strangers.
05:35:29 Oh, pervasive as in ubiquitous?
05:35:34 Yes.
05:35:41 I thought as in "invasive", haha
05:35:44 But what you said is also descriptive.
05:35:51 Not of the word pervasive, but of the internet.
05:35:57 Right.
05:36:28 It's also still quite weak structurally, with a lot of centralization.
05:36:41 And while it's a great thing that everyone can be part of the internet, that has also contributed to a massive decline in the average quality of information you get from the net.
05:37:12 Yes - I wish it was more fully peer-to-peer.
05:37:22 I dislike centralized authority in general.
05:37:37 Government, business, churches, whatever.
05:37:47 It's weird that a lot of "decentralize the net" projects have gotten tangled in monetary gain and over-enthusiasm.
05:38:04 Vendor lock-in :( , very sad.
05:38:14 Oh yeah, for sure - money is an almost irresistible temptation.
05:38:43 I don't think what Facebook is today is anywhere close to what was on Zuck's mind when he first started tinkering with it.
05:39:12 Yeah. I'm reminded of that movie "The Social Network"; the Facebook depicted there is completely different to now.
05:39:18 I think lots of those pioneer guys were initially well-motivated, but success brings temptation.
05:39:49 And I suspect I wouldn't be immune, at all.
05:40:03 The GNU project is much older and it's still going strong.
05:40:03 Give me an opportunity to "cash in," and it would be hard to resist.
05:40:10 Kids to send to college, retirement to think about, etc.
05:40:39 Do you remember when you first heard about GNU, by any chance?
05:40:53 Oh, gosh. Sometime in the mid to late 1990s?
05:40:56 That's a wild guess.
05:41:45 I quite like the idea of open-source systems, but I also think they are a bit radical for my tastes - I certainly do regard the creation of software as an intellectual activity that should have IP rights attached.
05:41:50 GNU, and even some of the infrastructure projects at Facebook (see also "Open Compute Project"), are creating ethically somewhat "neutral" technology. Compilers, or servers/switches, can be used for all kinds of purposes.
05:41:52 GNU is sort of "communist" about it. :-)
05:42:06 Right, especially with RMS at the wheel.
05:42:17 Yep.
05:42:21 He has very strong opinions on things.
05:42:22 I favor the LGPL.
05:42:57 In the same way that people buy art, sometimes we just want to buy software like games and not necessarily worry about having the rights to modify, share and distribute it.
05:43:27 Yes - no different from buying a piece of hardware.
05:43:55 But I also disagree with software licenses that try to say you're not allowed to modify YOUR OWN COPY of the software for YOUR OWN PERSONAL USE.
05:44:05 It's like you're not really buying the software.
05:44:10 That being said, I'd still like the hardware's schematics and microcode to be made free; otherwise we get disasters like Intel ME.
05:44:14 If it's mine, I can do anything I want to with it, privately.
05:44:25 Yeah, it should be like that.
05:44:34 Technically, nothing's stopping you from doing so anyway.
05:44:37 Yeah. When IBM first rolled out the PC they provided a "technical reference" that had just that.
05:44:44 Full schematics, BIOS assembly listings, etc.
05:45:03 Well, right - it's a largely unenforceable law.
05:45:08 License, rather.
05:45:19 That's fine because there's intellectual property rights, and also the barrier to producing < 10 nm chips.
05:45:44 It's like giving out the schematics of a building - it doesn't suddenly lead to a lot more competition.
05:45:48 I'd love to see a grass-roots ability to craft silicon.
05:45:59 Would 3D printing help?
05:46:08 Currently it's still in its infancy, but it'll improve.
05:46:21 Well, it's the same sort of ability, but the precision and cleanliness you need for silicon work is pretty astounding.
05:46:30 https://github.com/homecmos/homecmos-wiki <-- some resources about that. There's also an IRC channel.
05:46:36 Have you toyed with FPGAs?
05:46:36 There is a "DIY integrated circuit" thing out there, but it's still very crude.
05:46:48 Ah ^ yes.
05:46:56 I've worked extensively with FPGAs.
05:47:01 I'm a hardware guy by training.
05:47:01 What makes them special? And how do you even modify it?
05:47:30 Well, an FPGA just has a pool of logic resources and some way to connect them up (like selecting paths through muxes, or whatever).
05:47:54 The details vary, but with, say, Xilinx, when the chip powers on it reads a long bit string out of a separate chip sitting by it.
05:47:58 Well, taking into account that we are currently on technologies with features several hundred times smaller than a dust particle, performance-wise we are ages from feasible digital devices made at home.
05:48:02 The bits of that bitstring select those mux paths.
05:48:03 So you could simulate several different chips with the same FPGA?
05:48:08 Yes, definitely.
05:48:14 If it's resource-rich enough.
05:48:15 Analog and slow might be different.
05:48:31 You wouldn't get the exact same timing as a custom implementation.
05:48:45 And you're likely slower and more power-hungry, because that flexibility comes at a price.
05:48:52 But you can indeed do that sort of thing, and people do.
05:49:02 Have you worked with the GreenArrays chips?
05:49:10 The ones designed for use with Forth specifically.
05:49:12 No.
05:49:19 I think they look quite interesting, but I've never taken the plunge.
05:49:25 By Charles Moore, too.
05:49:29 Yes.
05:49:46 I like the concept of a LARGE number of SIMPLE communicating cores, as opposed to a hugely complex smaller-count core.
05:49:48 Seems like an interesting concept. You factor at the hardware level and split a task into 144 parallel units.
05:49:53 Yes.
05:49:54 Yes.
05:50:21 I think that we should have embraced that sort of parallelism decades ago, but the vendors just kept pouring more and more logic into trickery to speed up single cores.
05:50:33 Now we've got monstrosities like the x86 architecture. :-)
05:50:54 What's interesting is that that chip was designed with software written in Forth, which is incredibly small compared to VLSI design software, and it works. (By the way, I'm a chip designer.)
05:51:18 Do you have the Forth software that was used to design the chip?
05:51:27 Yeah, apparently it's only 500 lines or so.
05:51:36 According to Charles Moore.
05:51:38 colorForth, in fact.
05:51:43 I believe.
05:51:43 I'm fascinated with Chuck's work in chip design.
05:51:51 To me that's just the flagship example of what Forth makes possible.
05:52:04 He wanted a chip, so he MADE IT HIMSELF, and wrote the software to do it.
05:52:06 Wow.
05:52:28 I had heard about Moore for quite a number of years, as in "Moore's Law".
05:52:37 Saving himself the costly licence.
05:52:44 I feel like I want to toy with that myself (writing it and using it). I doubt I'll ever make a chip, but I think it would be cool to lay out transistor circuits on direct silicon and then do SPICE-like simulations of the circuit.
05:53:04 KipIngram: GreenArrays kind of parallelism? Fits like a splinter to the arse with Flow-Based Programming, I reckon.
05:53:06 http://www.colorforth.com/ is dead, apparently.
05:53:09 I don't think that's the same Moore.
05:53:13 That was Gordon Moore.
05:53:25 Moore Moores!
05:53:25 Ah, that clears it up.
05:53:32 lmao
05:53:54 Chuck really never got any notoriety outside of our community.
05:53:55 By the way, as far as I know, it is really simple software, much simpler and less powerful than, say, Cadence, which is the industry's standard.
05:54:13 I think it's a shame - I was thinking a couple of days ago about how different the world might be today if he'd been a big shot at Bell Labs or something like that.
05:54:29 Yes, else he couldn't have written it himself. :-)
05:55:09 KipIngram: exactly
05:55:18 Moore's emphasis on simplicity just fits my mentality PERFECTLY.
05:55:28 Simplicity and having the WHOLE problem in your mind as you solve it.
05:56:00 I think the abstraction pursued by modern software engineers comes at a terrible price no one really grasps the full impact of.
05:56:14 A lot of work on functional programming languages just reminds me of Forth. Make a bunch of small, pure functions/words and combine them.
05:56:36 Right. Those guys are pursuing a higher level of "mathematical" rigor, though.
05:56:39 At least I think they are.
05:56:48 Forth is extraordinarily "pragmatic."
05:57:02 Right. I still like imperative for practical low-level purposes.
05:57:16 But many languages make the mistake of making the programmer keep track of temporary variables.
05:57:37 I love how Forth eliminates that. Variables should be for the things that _truly_ vary and whose state should be recorded.
05:57:39 I do like how they have (claimed to have) proven the correctness of the L4 microkernel.
05:57:50 I like microkernels in general, though not as much as I like Forth.
05:58:23 The idea that, if you're going to have an "operating system," you keep it lean and precise really appeals to me.
05:58:36 A lot of my words are "pure" in the functional sense. Operate on the stack and return on the stack, nothing else.
05:58:56 The ones that actually have side effects are things like RAND, PLOT, various printing operators, I/O and so on.
05:59:04 Well, yeah - most of them are like that. Aside from a few that deal with memory and hardware and so on.
05:59:17 The huge majority of Forth words can be looked at as functional in that respect.
05:59:27 S' = f(S), where S is "the stack."
05:59:30 What really ticks me about Forth, apart from its conceptual simplicity, is its metaprogrammability: words that create words. That was eye-opening.
05:59:43 Yes.
05:59:46 Just like Lisp's macros.
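The "words that create words" idea being discussed is standard Forth's CREATE ... DOES> mechanism; for instance, CONSTANT itself can be defined in Forth (a minimal standard-Forth sketch):

```forth
\ A defining word: CONSTANT builds new words.
: CONSTANT ( n "name" -- )
  CREATE ,        \ make a new dictionary entry and store n in its body
  DOES>  @ ;      \ when the new word later runs, fetch and push that n

42 CONSTANT ANSWER
ANSWER .          \ prints 42
```

Every word made with CONSTANT shares the DOES> behavior but carries its own data, which is the essence of the metaprogrammability mentioned above.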
05:59:58 I've got a friend that calls Forth a glorified macro assembler.
06:00:00 And the ' operator reminds me of ' (quote) in Lisp.
06:00:12 Well, maybe it is to some extent, but the way you can use it goes so much further.
06:00:41 '(+ 2 3) is the data +, 2 and 3. What's better in Forth is that you don't need to "unquote" words, just make them IMMEDIATE.
06:00:50 I've been reading this VERY OLD doc of Chuck's the last few days:
06:00:52 https://colorforth.github.io/POL.htm
06:00:54 Someone posted the link here.
06:01:07 It's so old that what he's describing is quite NOT Forth as we know it today.
06:01:17 It has similarities, but big differences as well.
06:01:36 Mmm, it is more like using the compiler yourself to do whatever you like; it's flexibility at its best.
06:01:39 IMHO
06:01:43 I think some of the ideas are still interesting - re: others I'm like, "Yeah, you didn't have that quite figured out yet."
06:01:49 The power of IMMEDIATE is a tricky one. IMMEDIATE words that have side effects can become problematic.
06:02:05 Yeah, you have to be able to keep up with what you're doing.
06:02:16 Yeah, the Forth advocates really weren't joking when they said IF, ELSE, WHILE, et al. were defined in Forth.
06:02:26 Macros all the way down.
06:03:16 One of the most notable differences in that old doc is that he had a class of things (I've lost track of what he called "definitions") where when you did the defining it just stored the string in the dictionary.
06:03:24 So : foo bar bam boom ;
06:03:44 literally stored "bar bam boom" in the dictionary, and when you executed foo it switched the input to that string until it was empty.
06:03:51 So absolutely a "macro."
06:04:09 And there's a lot of interesting stuff in there re: extending dictionary searches to the disk.
06:04:13 So like [eval "bar bam boom"] in Tcl.
06:04:24 Yes.
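A modern standard Forth can mimic those string-stored "macro" definitions with EVALUATE, which re-interprets a string exactly as if it were typed input. A sketch (`bar`, `bam`, `boom` are placeholder words invented for illustration):

```forth
\ Placeholder words, just so the example runs.
: bar 1 . ;   : bam 2 . ;   : boom 3 . ;

\ foo keeps its body as text and re-interprets it every time it runs,
\ much like the string-stored definitions in Chuck's old doc.
: foo S" bar bam boom" EVALUATE ;

foo   \ prints 1 2 3
```

The difference from a normal colon definition is that the text is re-parsed on every execution, so later redefinitions of bar/bam/boom take effect retroactively - exactly the "macro" behavior described above.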
06:04:26 Sounds like (eval '(bar bam boom)) in Lisp.
06:04:46 I'm not even sure he HAS today's : definitions in those ideas.
06:04:57 * Zarutian might do the picol subset of Tcl in Forth because he finds the command processing handy and most people get it quickly
06:05:00 Looks to me almost like his "fast" words are defining new primitives.
06:05:07 The fact that we can extend the READER is also powerful. If you wanted you could add floating point like 123e-3, or imaginary numbers, etc.
06:05:43 Yes, certainly. I suspect I will have an "alternate interpreter" at some point, because I want to be able to easily do the sort of stuff I'd do in Octave or Matlab.
06:05:53 Type literal vectors and matrices, etc.
06:06:02 My current strategy is: in the INTERP loop, if FIND fails, there's a word to "backtrack" the word buffer pointer and run NUMBER, but it could be more.
06:06:10 Have "+" work to add two numbers, add two vectors, or matrices, etc.
06:06:11 So it "ungets" the word.
06:06:20 Ah, so you're talking about generic operators.
06:06:21 Operator overloading - driven by the type of what's on the stack.
06:06:27 But no way I want that in my base system.
06:06:27 Types!
06:06:40 Types are absolutely necessary for a calculator.
06:06:48 i.e. a scientific one.
06:06:50 I suspect that environment would implement "the stack" in a heap memory structure.
06:06:58 And there would be a "type stack" alongside.
06:07:11 But I want the types to matter at compile time, not run-time.
06:07:24 In the TI-84 the first byte of each floating-point stack entry is reserved for the type of the object.
06:07:29 * Zarutian really likes physical dual stack machines.
06:07:58 (really easy to implement and wire up)
06:08:49 There's something called the OP stack and OP registers on my calculator.
06:08:57 Nine bytes.
06:09:01 For each entry^
06:09:30 KipIngram: So yes, it's a heap memory structure.
06:10:31 I gotta drive to the office - back in a bit.
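The find-then-fall-back-to-NUMBER strategy described here is the classic Forth outer interpreter. A simplified interpret-only sketch in figForth style (assuming a NUMBER ( c-addr -- n ) that pushes a literal or aborts; compile state is ignored):

```forth
: INTERPRET ( -- )
  BEGIN
    BL WORD DUP C@        \ parse next blank-delimited word; keep its length
  WHILE                   \ a zero-length word means input is exhausted
    FIND IF  EXECUTE      \ found in the dictionary: run it
    ELSE     NUMBER       \ not found: retry the same text as a number
    THEN
  REPEAT DROP ;
```

Since FIND leaves the original string address when it fails, the "unget" is free here - NUMBER simply reuses the address that WORD produced.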
06:10:32 Then you could write words that only run when the types are valid at run-time. Ultimately a trade-off between speed vs. robustness.
06:10:34 Ok.
06:11:52 Well, you wouldn't even find the wrong words.
06:12:03 The type requirements would be factored into the search.
06:12:16 Lets you pretty much completely prevent underflow, overflow, etc.
06:12:23 Mmm, I can't remember the name, but not too long ago I saw an implementation featuring variable-size elements, encoding the type in the most significant byte.
06:12:26 But the word, once found, is compiled and runs as usual.
06:12:33 So no run-time impact of the methodology.
06:12:42 You COULD consider types at run-time, but that's not what I'm planning.
06:13:32 Each word's "stack comment" would define its impact on the type stack.
06:14:02 This one, I think: https://github.com/zevv/zForth
06:14:06 Right, compile-time correctness. I could start a word's definition with the types before and after. Then proceed to compile the definition, checking as I go, then have no effect on runtime.
06:14:17 The type is implicit in the value itself.
06:14:51 KipIngram: What this gets into is a type system. Imagine words like DUP, SWAP, OVER etc. that work on stack entries of any type, so you would specify the type with type variables.
06:15:33 For instance: TYPE{ a b SWAP -> b a } would say "let a and b be stack entries of any type".
06:15:50 That definitely would be an interesting area to explore.
06:16:02 Your words won't compile until you have satisfied your type comment.
06:16:36 Doesn't slow anything down, catches mistakes better.
06:17:37 You could prove properties about words. For instance, that "a b +" is equal to "a b SWAP +" for all a and b.
06:18:14 But the latter would start to become redundant. Most people don't need to prove properties.
06:18:51 I'll begin to toy with more ideas from functional programming later in the year.
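Spelled out, the proposed declarations might look like this (purely hypothetical syntax from the TYPE{ ... } idea above - not an existing Forth extension; `n` marks a number, letters are type variables):

```forth
\ Hypothetical compile-time type comments, per the discussion above.
TYPE{ a b  SWAP -> b a }     \ SWAP accepts entries of any two types
TYPE{ a    DUP  -> a a }     \ DUP duplicates an entry of any type
TYPE{ n n  +    -> n }       \ + demands two numbers, yields one

\ The checker would verify this body against its declared effect:
TYPE{ n n  NORM2 -> n }
: NORM2 ( a b -- a*a+b*b )  DUP * SWAP DUP * + ;
```

Checking happens entirely while compiling NORM2; once it passes, the compiled code is ordinary Forth with no run-time overhead, which matches the "no run-time impact" goal stated above.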
07:09:02 siraben: Yes, you got it.
07:09:12 No run-time impact, but a lot of added power.
07:09:29 Run-time dynamic type checking can do some things that this can't, but I think it's an interesting "middle ground."
07:09:56 Run-time checking is better than none at all, I suppose.
07:10:21 Well, or worse, from a performance perspective.
07:10:22 Having type checking would perhaps make Forth more accessible to complete beginners to programming.
07:10:25 Right.
07:10:38 Run-time checking is not valued in statically-typed languages.
07:10:47 Because it means you *might* fail.
07:10:54 Anyway, this would all be rolled up in a replacement interpreter - you'd run it when you wanted the power, make your definitions. Then you could quit the fancy interpreter and fall back to the regular one.
07:11:04 But the defined words would still be "executable" from there.
07:11:13 I was fooling around with the idea of a "universal assembler".
07:11:27 Google has done that to support lots of processors under golang.
07:11:29 Since things like pushing and popping registers vary, having a way to define CODE words within the REPL would be good.
07:11:37 Work with the Forth VM.
07:11:48 The golang compiler produces such a thing, and then they are able to automatically scrape processor data sheets to generate the final translation.
07:12:16 Looks like I need to learn a heck of a lot more stuff before going into the territory of automated program optimization and generation.
07:12:55 Because processors have different instructions, and I'd like to be able to take advantage of that.
For instance, CMOVE uses the LDIR instruction instead of an explicit loop.
07:13:08 It's what I've tried to do with my macros, so I had one source that would assemble for x86 or Cortex M4. But I missed the mark and have to revisit it.
07:13:36 Right, in x86 that can use rep movsb.
07:13:51 I'd like to remove as much of the tedium as possible, so the ultimate goal would be to completely generate and optimize new implementations for new instruction sets.
07:14:11 It's not always going to be possible, but it would be faster and easier than rewriting even those small primitives.
07:15:01 Macros can go a long way already, however.
07:15:43 Yeah, I'm not trying to pull off some profound accomplishment here - I just want to have a couple of targets both of which I can support.
07:15:50 So it should be well within reach.
07:16:04 I just made the mistake of attempting my macros before I'd learned enough about the second target.
07:18:01 Right. Maybe the best way is to just learn the target and be clever.
07:18:27 Well, for just one target almost certainly. :-)
07:18:39 But I sure do like the idea of porting 500-ish lines vs. porting 3200.
07:18:51 One thing I have noticed with many ISAs, both RISC and CISC, is that conceptually one could split each instruction into smaller ones. Especially with CISC.
07:19:07 Though that 3200 count includes the Forth words too, so it's probably not THAT many.
07:20:21 For instance, the complex addressing modes on the 6502 and its variants.
07:20:43 Isn't x86 MOV Turing-complete? So the universal implementation would just be implementing MOV.
07:21:27 siraben: yeah, pretty much. Also I heard that some anti-reverse-engineering techniques just translate the code into x86 MOV instructions.
07:22:01 That would be an absolute nightmare to reverse engineer.
07:22:33 I think the most complex addressing mode I have seen so far was indexed double-indirect offsetted address.
07:22:52 No control flow of any kind, just unconditional MOVs.
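For reference, the portable fallback that LDIR (Z80) or rep movsb (x86) replaces is just a byte-copy loop. A standard-Forth sketch of that fallback:

```forth
\ Portable low-to-high byte copy - what LDIR / rep movsb do in one instruction.
: CMOVE ( from to u -- )
  0 ?DO
    OVER I + C@     \ fetch byte at source + i
    OVER I + C!     \ store it at destination + i
  LOOP 2DROP ;
```

A code generator for a new target only has to notice this pattern and emit the single block-move instruction when the ISA has one, falling back to the loop otherwise.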
07:23:47 siraben: yep, well, one unconditional jump at the end to the beginning.
07:24:11 How do you get operations that way?
07:24:34 Don't you need shift too, and NOT and AND or OR?
07:25:08 Nope.
07:25:15 ldi r3, [[offset]+(index*item_size)] was the instruction, iirc.
07:25:24 https://www.youtube.com/watch?v=R7EEoWg6Ekk
07:25:35 Table lookups for those things, I guess?
07:25:52 Skip to 0:39 for the video start.
07:25:55 Yeah.
07:26:01 Interesting. :-)
07:26:37 The equiv Forth code: ( offset index item_size ) * SWAP @ + @ r3 !
07:26:40 Well, makes sense I guess, since even doing ALU operations is just "move the operands in, move the results out."
07:27:04 KipIngram: heard of register-to-register transfer machines?
07:27:13 Yes ^
07:27:18 That's what I was alluding to.
07:27:21 Ah, here's the four-page PDF of the original paper:
07:27:22 https://www.cl.cam.ac.uk/~sd601/papers/mov.pdf
07:27:44 "It is well-known that the x86 instruction set is baroque, overcomplicated, and redundantly redundant. We show just how much fluff it has by demonstrating that it remains Turing-complete when reduced to just one instruction."
07:28:46 Fun, fun. :-)
07:30:36 Also reminds me of the beauty of CS in that proofs are by construction.
07:34:55 Once upon a time I designed for fun a CPU with just one arithmetic operation (NOR) and conditional jump. It was a stack machine; in fact all instructions were kind of microcoded like Forth words, with the indirection built in to make it easy. I don't remember all the details. Also it had shadow registers for top of stack and second of stack.
07:35:52 Just on paper, but it had many ideas that I've found later in other places.
07:36:09 yoplaid: I've done a fair bit of "paper" processor design too.
07:36:14 And with discrete components; it was fun.
07:36:23 One was a fairly well-developed FPGA-suitable Forth machine.
07:36:31 lol, train trips to the university are long, my friend
07:36:56 I've also done a good bit of thinking about a processor made more or less exclusively using 22V10 programmable logic chips.
07:37:00 And memory, of course.
07:37:24 I've got a few tubes of 22V10s in the garage, but that's as far as I got. :-)
07:37:40 The goal there was to have a "guaranteed back-door free" system.
07:38:07 Wow, this one was using 74-family chips. I've also been toying with the idea of using ROMs as combinational modules.
07:38:37 Well, that's basically what the major FPGAs do - they're filled with "LUTs" (Look-Up Tables).
07:38:48 You implement the logic by suitably programming those tables.
07:38:59 By the way, do you know about the ZPU processor?
07:39:18 It seems vaguely familiar, but I can't give anything solid on it at this moment.
07:39:56 * Zarutian has been looking at MRAM from Everspin Technologies. Quick as SRAM, nonvolatile and no write limit as in Flash.
07:40:06 Ah, looked it up.
07:40:20 Yes, MRAM looks like it has good potential.
07:40:31 Though we haven't started a storage product based on it yet.
07:40:32 yoplaid: so ROM logic basically? (using ROM for combinational stuff)
07:42:00 Yeah, not too efficient, but fun to try.
07:42:01 KipIngram: order a shitload of their biggest, storage-wise, chips with parallel access and try it out?
07:42:41 MRAM? What does the M stand for?
07:43:12 yoplaid: had to do an automatic machine 'cycler' using only a ROM and a register for a project once.
07:43:17 magnetoresistive
07:45:13 Wow, just looked at Wikipedia; looks interesting.
07:46:35 just catching up on the convo
07:46:49 Morning, Mr. M.
07:46:56 Morning guys
07:47:04 hi
07:47:20 I was interested in the GA144 too a couple years ago, but it seems like an answer looking for a problem.
07:47:36 I couldn't think of anything I needed that kind of parallelization for.
07:47:59 And actually most things we use computers for are hard or impossible to do in parallel, it seems.
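The ROM-as-combinational-logic trick is easy to play with in Forth itself: precompute the truth table into a small "ROM" and replace the logic with an address calculation plus a fetch, which is exactly what an FPGA LUT does in hardware. A sketch for a 2-input XOR as a 4-entry table:

```forth
\ A 4-entry "ROM" holding the XOR truth table, indexed by two input bits.
CREATE XOR-LUT 0 C, 1 C, 1 C, 0 C,

\ The "logic" is now just index arithmetic and a table fetch.
: LUT-XOR ( a b -- a^b )    \ a and b are single bits, 0 or 1
  2* + XOR-LUT + C@ ;

1 0 LUT-XOR .   \ prints 1
```

Scaling this up, an n-input function needs a 2^n-entry table - which is why real FPGA LUTs stop at around 4-6 inputs and are stitched together with routing instead.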
I see why we stuck with x86 and serialization.
07:50:34 But x86 is so slow; sure, it perhaps has good throughput, but damn, it takes a long while to respond.
07:50:49 I read an article about someone running a cluster of 16 or so 8051s to calculate Mandelbrot sets, which outperformed the 40 MHz PC or whatever it was they were using at the time.
07:52:20 Zarutian, interesting. What do you mean by respond?
07:52:33 MrMobius: to interrupts and branching
07:52:56 This is an interesting gadget a friend brought to my attention in the last day or two:
07:52:57 https://en.wikipedia.org/wiki/ESP32
07:53:10 Small, cheap, but capable, and with WiFi and Bluetooth built in.
07:53:12 Interesting.
07:53:39 KipIngram: you are hearing about the ESP32 just now? Well, I thought you read hackaday.com.
07:53:53 No, not usually.
07:54:11 I mostly just work on my own stuff, so I wind up relying on acquaintances to clue me in on these things.
07:54:13 :-)
07:54:54 Zarutian: I do want to build a system of my own someday, likely from FPGAs, and that will include a capable storage system. At the moment I plan flash, but that's of course subject to change.
07:55:09 My office at IBM makes enterprise-grade flash-based storage, so I know a lot about how to make it work well.
07:56:33 KipIngram: depends on what you plan to use the flash for. I was thinking to use the MRAM chips for main memory storage (instead of SRAM). But for 'harddisk' applications flash is okay.
07:58:46 that would be nice
08:11:11 Yeah, for "disk drive."
08:36:34 KipIngram: it's not the same chip, but: https://github.com/zeroflag/punyforth
08:38:45 KipIngram: How viable do you think Forth is as a teaching language for 4th graders?
08:39:11 It's concise and powerful, but the lack of error handling can be demotivating for beginners, I think.
08:42:18 Great, but they will learn Forth, which is a blessing and a problem, since it doesn't look anything like any other C-like language.
08:44:49 Ah yes.
08:45:09 I would say it's a good language for once they understand _programming_.
08:45:33 Syntax turns out to be really important for learning.
08:48:10 I'd say that it is also a great language to understand what programming is, as is assembly and, not so much, C.
09:08:48 siraben: syntax turns out to be really important indeed, especially when it hampers learning.
09:10:30 Heck, that Scratch-esque environment that Microsoft built upon Google Blocks to parse and edit JavaScript might even be easier to do for some Forths.
09:17:13 From one topic to the next: I've been mulling how to do the shearing graphics operation. Perhaps modify a copy of Bresenham's line algorithm so instead of plotting a point it draws a 'scanline' of the image?
09:20:10 Using that shearing operation one can then rotate images between 0°-90°.
09:21:30 (doing whole 90° rotations is pretty easy relative to that)
09:30:53 goodbye
09:30:58 see you soon
09:32:14 * Zarutian waves
11:31:59 siraben: Don Golding had a company with the most popular robotics platform prior to Lego Mindstorms. High school students programmed the robots in a clean "English-like" Forth. Many middle/high school students' first exposure to programming was with Forth.
11:31:59 Don Golding talked about it at last month's svfig meetup. You could contact him if you are interested in the tutorials and educational materials that went with it.
11:31:59 He's bringing it back with a v2.
11:32:00 Forth has also been taught to younger students.
11:33:43 From what I've heard, Forth works best when students *don't* already have exposure to other languages.
17:42:14 hi
20:34:59 --- topic: 'Forth Programming | logged by clog at http://bit.ly/91toWN | If you have two (or more) stacks and speak RPN then you're welcome here! | https://github.com/mark4th'
20:34:59 --- topic: set by proteusguy!~proteus-g@cm-134-196-84-89.revip18.asianet.co.th on [Sun Mar 18 08:48:16 2018]
23:59:59 --- log: ended forth/18.10.09