00:00:00 --- log: started forth/18.10.23 00:03:18 --- join: wa5qjh (~quassel@freebsd/user/wa5qjh) joined #forth 00:13:08 --- quit: pierpal (Quit: Poof) 00:13:26 --- join: pierpal (~pierpal@host247-234-dynamic.18-79-r.retail.telecomitalia.it) joined #forth 00:16:23 --- quit: wa5qjh (Quit: http://quassel-irc.org - Chat comfortably. Anywhere.) 00:45:28 --- quit: pierpal (Read error: Connection reset by peer) 00:58:53 --- join: nighty- (~nighty@kyotolabs.asahinet.com) joined #forth 01:05:22 --- quit: dys (Ping timeout: 252 seconds) 02:24:10 --- join: ncv (~neceve@2a02:c7d:c5c9:a900:6eaf:6ef7:3b81:d5f6) joined #forth 02:24:11 --- quit: ncv (Changing host) 02:24:11 --- join: ncv (~neceve@unaffiliated/neceve) joined #forth 02:30:53 --- quit: nighty- (Quit: Disappears in a puff of smoke) 03:38:31 --- join: pierpal (~pierpal@host247-234-dynamic.18-79-r.retail.telecomitalia.it) joined #forth 03:45:13 --- quit: pierpal (Ping timeout: 252 seconds) 04:02:22 --- join: dave0 (~dave@90.20.215.218.dyn.iprimus.net.au) joined #forth 04:02:28 re 04:27:43 siraben: I've implemented my frame pointer system so that I really can treat frame entries like variables. I have @ and ! and +! and so on. 04:28:12 The offset from the frame pointer is hard coded into those primitives, so they're basically same efficiency as variables. 04:28:29 So that's my "local variable" methodology. 04:28:54 When I write my source editor, I'm thinking I'll provide a feature within it for "labeling" the frame entries with names. 04:29:40 That will require a bit of intelligence in the editor. And the compiled code will still just use the frame access words - only the editor will have the association between the offset and a "local name." 04:35:06 KipIngram: That seems like an interesting concept. 04:35:11 KipIngram: Good news! My target is working again! 
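KipIngram's frame-words scheme above can be sketched roughly in Python (a toy model; the class and function names are invented for illustration, and the real primitives would bake the offset into compiled code):

```python
# Toy model of frame-pointer-relative "local" words: each generated
# accessor has its offset from the frame pointer baked in, so access
# cost matches a plain variable. All names here are invented.

class Frame:
    def __init__(self, size):
        self.cells = [0] * size      # the frame entries

def make_frame_words(offset):
    """Build @ / ! / +! style accessors hard-coded to one offset."""
    def fetch(frame):                # analogue of a frame-entry @
        return frame.cells[offset]
    def store(frame, n):             # analogue of a frame-entry !
        frame.cells[offset] = n
    def plus_store(frame, n):        # analogue of a frame-entry +!
        frame.cells[offset] += n
    return fetch, store, plus_store

f0_fetch, f0_store, f0_plus_store = make_frame_words(0)
frame = Frame(4)
f0_store(frame, 10)
f0_plus_store(frame, 5)
print(f0_fetch(frame))   # 15
```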
04:35:25 I just needed to boot into Windows and flash the OS from there, apparently the software works better there 04:35:27 --- join: xek (~xek@apn-37-248-138-80.dynamic.gprs.plus.pl) joined #forth 04:37:32 Oh, outstanding. 04:37:36 That is good news. 04:37:56 Yeah, I think it's an interesting way to go about locals. 04:38:09 I like it because it doesn't involve "invading the run-time" in any way. 04:38:29 At the same time, I've found just having the frame and referencing it numerically to be fairly easy coding. 04:38:46 I just scratch a little note down next to me reminding me what quantity is in what slot. 04:39:23 I haven't done a whole lot of coding that way, though - there are only a couple of places in my system where I have enough on the stack to need a frame. 04:39:44 Generally speaking I prefer the "Forth wisdom" on just avoiding that need to begin with. 04:40:18 Maybe there are better ways to do those few places so I wouldn't need that kind of access. 04:40:28 I just haven't found them yet. 04:44:46 --- join: nighty- (~nighty@s229123.ppp.asahi-net.or.jp) joined #forth 04:58:09 Right. For me, the stack is local enough. 04:58:44 --- join: wa5qjh (~quassel@175.158.225.215) joined #forth 04:58:45 --- quit: wa5qjh (Changing host) 04:58:45 --- join: wa5qjh (~quassel@freebsd/user/wa5qjh) joined #forth 04:58:48 Although I wish I could do better than SWAP 1+ SWAP 05:03:05 --- quit: wa5qjh (Remote host closed the connection) 05:15:28 >r 1+ r> 05:25:18 When I first wrote those stack access words, I offset them from SP (didn't have a frame pointer). In theory you can do anything that way. It's just that the reference points move around and it's a pain in the ass to keep up with. 05:25:47 But at that point I would have said 1++ or 2++ or 3++ etc. 05:26:11 A stricter naming convention would have been 1+, but that wound me up with 11+ 12+ etc, and I just didn't like it. 05:26:35 Maybe I should separate the numerical parameter somehow, so 1:1+ 05:26:55 2:@ etc. 
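The two phrasings compared above ( SWAP 1+ SWAP vs. >R 1+ R> ) both increment the cell under the top of stack; a quick Python model of the stack effect, with lists standing in for the data and return stacks:

```python
# Both SWAP 1+ SWAP and >R 1+ R> increment the item *under* the top.
# Lists model the stacks; the end of the list is the top.

def swap_1plus_swap(ds):
    ds[-1], ds[-2] = ds[-2], ds[-1]    # SWAP
    ds[-1] += 1                        # 1+
    ds[-1], ds[-2] = ds[-2], ds[-1]    # SWAP

def to_r_1plus_r_from(ds, rs):
    rs.append(ds.pop())                # >R
    ds[-1] += 1                        # 1+
    ds.append(rs.pop())                # R>

a = [5, 9]; swap_1plus_swap(a)         # -> [6, 9]
b = [5, 9]; to_r_1plus_r_from(b, [])   # -> [6, 9]
print(a == b)  # True
```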
05:29:10 And I have missed having the stack pointer access ones since switching them over. 05:29:13 Maybe I should have both. 05:29:41 s and f 05:29:52 So 1s1+ to do what you're after. 05:29:59 1f1+ to increment frame item 1. 05:30:36 Of course, I could just define words that took a stack or frame item # and gave me back an address, then just use the regular words. 05:30:40 But that's a lot of extra code. 06:53:05 KipIngram: I had a moment of insight recently. 06:53:15 My word definitions cannot exceed $C000, but my programs can 06:53:33 Because a program is just a list of words. 06:53:53 So, theoretically I could come up with an alternative threading model to use at high-address space 06:54:14 Which means I get the benefits of direct threaded being fast on my machine, but the extra space when I need it. 06:59:19 Hmmm. I can't comment on that until you propose some details. 06:59:35 But right - the definitions themselves are "data" to the processor. 07:00:09 When I've thought of things like this, they usually look good until I start thinking about mixing words defined in the various ways together into one definition. 07:00:29 NEXT has to know what to do for each one of them, and if they have different structures NEXT has to have a way of figuring that out and dealing with it. 07:06:20 KipIngram: at the very bottom of my forth.asm file is the interpreter. 07:06:43 How a program is run is that the instruction pointer points to the first word of the interpreter. 07:07:14 But I can also embed strings, literals etc. in the word as long as it's prefaced by the proper word 07:07:51 e.g. .dw lit, 120, dot is equivalent to "120 ." 07:07:52 Right, sure. 07:08:00 yes, that's pretty standard. 07:08:02 I'm just trying to find a way to make this as much data as possible 07:08:36 Well, that's exactly what indirect threading does. 07:08:38 It's all data then. 07:08:40 Inevitably there's going to be a slowdown of sorts, that much is unavoidable. 07:08:50 Right.
07:08:54 Except your primitives of course. 07:09:07 I've thought at times about how to mix direct and indirect threading. 07:09:24 It's fairly easy if a word is all direct or all indirect, all the way to the bottom. 07:09:42 That might mean my programs in the high memory region (> $C000) will have to be only pointing to words defined in the low memory region. 07:09:47 Including primitives that are written to support direct or indirect. 07:09:50 That works fine. 07:10:00 But like I said, it's mixing them that raises issues. 07:10:19 We'll see, I'm interested to see what arise. 07:10:21 arises* 07:10:31 Absolutely - such thoughts are always worthwhile. 07:10:41 Even if they lead nowhere, they polish your knowledge. 07:10:46 And that might involve some words that can self-modify 07:10:57 Precisely. 07:14:10 I'll add QQ 07:14:16 Oops 07:14:34 I'll add some self modifying word primitives 07:15:09 For instance SET! might change the word pointed to by the code pointer 07:15:35 Interesting. I've never gone down that road (self-modifying code). 07:15:38 and some branching that depends on the word at the instruction pointer 07:15:45 So you may find something interesting there. 07:16:12 This self modification would have to happen at run-time, I suspect? 07:16:21 Yeah, everything will still be data but choices can be made on that data and so on 07:16:26 Yes run-time. 07:16:49 I've tried to avoid any sort of time-consuming thing at run-time. 07:17:45 Oh the modification will happen at run time 07:17:46 But once the changes are made the code is run as normal 07:17:55 My most likely approach would probably be to have the instruction cells "tell me what they were" somehow - a bit would indicate direct or indirect threaded, and NEXT would act accordingly. 07:17:57 So, in memory, DUP + might become DUP * 07:18:06 But I think that decision process would take longer than just going all indirect. 07:18:32 Since it has to be in *NEXT*. 
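The tag-bit dispatch KipIngram describes — NEXT inspecting each cell to decide direct vs. indirect — could be sketched like this (purely illustrative Python; the cell encoding and all names are invented, and the real thing would be a few instructions of assembly):

```python
# Sketch of a NEXT that mixes direct- and indirect-threaded cells.
# Invented convention: low bit of a cell set means "indirect" (the
# rest of the cell is an address whose contents name the primitive);
# clear means "direct" (the rest of the cell names the primitive).

INDIRECT = 1

def next_step(memory, ip, primitives):
    """One pass through NEXT: fetch a cell, dispatch on its tag bit."""
    cell = memory[ip]
    ip += 1
    if cell & INDIRECT:
        target = memory[cell >> 1]    # extra fetch through the code field
    else:
        target = cell >> 1            # the cell itself names the routine
    primitives[target]()              # "jump" to the routine
    return ip

# Demo: primitive 0 reached directly, primitive 1 reached indirectly.
out = []
prims = {0: lambda: out.append("dup"), 1: lambda: out.append("plus")}
mem = {0: 0 << 1,            # direct cell -> primitive 0
       1: (10 << 1) | 1,     # indirect cell -> fetch memory[10]
       10: 1}                # code field holding primitive number 1
ip = next_step(mem, 0, prims)
ip = next_step(mem, ip, prims)
print(out)   # ['dup', 'plus']
```

The per-cell test is exactly the cost being debated: it runs on every dispatch, which is why an all-indirect NEXT can end up cheaper than a conditional one.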
07:18:39 That's sort of the most important code in the whole system. 07:19:14 Right, and I don't want to slow down everything 07:19:57 * Zarutian butts in late into the conversation. 07:20:21 I should explore writing applications instead of programs for my calculator. I can go larger than 8 KB 07:20:25 siraben: what is the constraint that $C000 is about on your target? 07:20:26 Because apps are loaded in pages. 07:20:36 ^pages 07:20:46 Morning Zarutian. Always welcome. 07:20:51 I've seen applications with sizes like 33 KB 07:21:22 Zarutian: my target is back 07:21:44 I just needed to flash the OS from Windows instead of Mac, apparently TI writes better connect software there. 07:21:48 siraben: was it the flash chip that gave up its magic-smoke? ;-þ 07:21:56 oh, I see 07:22:00 I did an OS check and everything worked 07:22:06 Weird 07:22:16 Must be some bit flip 07:22:33 A cosmic ray did it! 07:22:39 happens 07:22:57 or just some static or something. 07:24:32 --- quit: tabemann_ (Ping timeout: 244 seconds) 07:25:52 Hopefully get a couple more years out of this thing 07:27:33 Just added a SQRT primitive 07:28:03 OS call for that? 07:28:07 It's pretty cool how it just works. I don't know how I'd have implemented it. 07:28:09 * siraben copied some stuff 07:28:19 No 07:28:40 OS call for the floating point square root though 07:31:58 KipIngram: So how's NASM, do you have all the primitives there? 07:32:03 Multiplication, division et al. ? 07:34:10 siraben: I am still curious about what the $C000 restriction is about. No direct calls to words at or above that address? 07:34:43 No execution is allowed. i.e. the system's program counter cannot exceed that. 07:34:50 But data can be read/written 07:35:19 Well, obviously I think the system still manages to execute code above $C000 so it must be something about interrupts? Not sure. 07:35:36 I'll find out eventually. 07:36:23 https://en.wikibooks.org/wiki/TI_83_Plus_Assembly/Memory 07:36:29 siraben: oh, I see.
Indirect threaded code it is then or extremely small vm 07:36:31 "Know however that PUSH and POP change the memory at $C000-$FFFF, and that you can NOT execute in this area. " 07:36:34 I haven't written division yet - I've written U/MOD. 07:36:46 But yes, all of those will be simple and straightforward in nasm. 07:37:25 I'm not sure what that statement means. 07:37:31 HOW do push and pop change that memory? 07:37:40 Why would they change memory anywhere other than where SP is pointing? 07:38:14 Not sure exactly. It's a very forbidden area. 07:38:40 I found a forum post which states "I would just like to remind people that making all of $C000-$FFFF executable is entirely possible. For example, see Fullrene." 07:39:22 Curious. 07:39:28 Oh, that's encouraging. 07:40:14 It's a pretty (at least historically) active community 07:40:22 siraben: how big is the Stack Pointer register? Full 16 bit? 07:40:44 Yes. 07:41:16 I sometimes worry too much about performance. 07:41:25 probably something to do with interrupts then. 07:41:34 I just can't help it when I'm writing assembly, though, every cycle counts. 07:41:59 here is a handy guide for determining if a program is fast enough: if you do not have time to cheer it on then it is fast enough 07:42:00 Yeah, messing with interrupts can make my program very very fast 07:42:06 I still don't know exactly how they work, should go off and read. 07:43:01 I thought I knew all about the Z80 there was to know, looks like no. 07:43:58 You know, actually having a working target has reignited my motivation again :-) 07:44:20 well, I was cured of such thinking when I saw piclist.com, man the knowledge on the PICs is bottomless (handy to look for some assembler routines too) 07:45:19 What's piclist? 07:46:41 it is a site that is all about Microchip's Programmable Integrated Circuit Microcontroller Unit 07:47:29 I see. 07:47:31 PICs are just EVERYWHERE.
07:47:35 Have any of you ever run into instances when Forth was too slow, and assembly was a better option? 07:48:25 KipIngram: I am switching to AVR, why? Because it can easily host a Direct Threaded Forth. 07:48:44 siraben: only in hard-hard-realtime code 07:50:52 I just learned of a new instruction; RST 07:51:11 RST xx = Software restart to address $00xx 07:51:35 Funny instruction. RST 00 simulates taking batteries out 07:51:43 Zarutian: That's a fine reason. 07:52:45 Looks like the b_call "instruction" I've been using this whole time was a macro that used RST 07:53:10 --- quit: Zarutian (Read error: Connection reset by peer) 07:53:30 --- join: Zarutian (~zarutian@173-133-17-89.fiber.hringdu.is) joined #forth 07:53:44 Interrupts are really over my head, how is it possible to execute code when other code is being executed? 07:54:38 It's not. 07:54:43 Unless you have multiple cores. 07:54:57 siraben: you just stop executing whatever you're running now and run the interrupt code 07:54:59 An interrupt... "interrupts" the running code. 07:55:12 then, when that interrupt code "returns", you continue where you left off 07:55:16 It's just like a subroutine call being inserted, except that generally the flags register is also saved. 07:55:55 And of course there's no "interaction" between the "caller" and "callee" like in a regular subroutine, because you have no idea when the interrupt is going to happen. 08:00:14 Huh, but how can an interrupt happen when I'm running my code? 08:00:26 the cpu just stops running your code and starts running the interrupt code 08:00:33 mostly because some pin went high or low 08:01:28 Right, that's exactly right. 08:01:44 It can happen because the processor state machine is designed to respond to those signals. 08:01:56 Normally the state machine is fetching your instructions and acting on them. 08:02:08 But if it sees this other input, it goes off and does something altogether different.
08:02:42 The state machine only does the normal "your code" processing if none of those signals are high. 08:02:46 many cpus have some sort of interrupt "vector", which is just a list of addresses where to jump when an interrupt occurs 08:03:04 so if you put the address of one of your functions there, it will be invoked whenever that interrupt occurs 08:03:19 Right - the different signals can all cause different actions, and that's usually implemented by just varying the location the state machine looks to decide where to go execute. 08:03:25 "interrupt vector table" seems to be the right word 08:04:40 siraben: that same state machine, when interrupts don't occur, normally just increments the program counter to get to the next instruction. 08:05:21 But when it fetches the instruction, the particular bit pattern of the instruction can tell it to do something different there too - can tell it to instead fetch the next bit of your progam and use it as an address for fetching code (replace the program counter value with it). 08:05:24 That's just a jump. 08:05:34 But ALL of this stuff is just a state machine responding to various input signals. 08:05:48 Which one it responds to and how depends on what state the machine is in. 08:08:13 KipIngram: re state machine that fetches next instruction can do an subroutine call to an interrupt handler instead is exactly how I implement such functionality in ROM-logic 08:21:52 --- quit: Zarutian (Read error: Connection reset by peer) 08:22:03 --- join: Zarutian (~zarutian@173-133-17-89.fiber.hringdu.is) joined #forth 08:29:08 Cool. siraben, this is actually very simple stuff once you know some digital logic. The design of a processor can be "involved," but conceptually there's nothing particularly "difficult" about it. 08:29:30 State machines are not the very first thing you learn in that area, but it's not far from it. 
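The fetch/execute-with-interrupt-check behavior described above, as a toy simulation (all opcodes, addresses, and the vector layout are invented for illustration):

```python
# Toy CPU loop: before each fetch the "state machine" samples the
# interrupt line. When it is asserted, the current PC (and flags)
# are pushed and control transfers through a vector table entry --
# an uninvited subroutine call. Everything here is invented.

def run(program, vector_table, irq_at, steps):
    pc, stack, trace = 0, [], []
    for tick in range(steps):
        if tick == irq_at:                 # a pin went high this tick
            stack.append(("flags", pc))    # save flags + return address
            pc = vector_table[0]           # jump via the vector
        op = program.get(pc)
        if op == "reti":                   # return from interrupt
            _, pc = stack.pop()            # restore flags + PC
            continue
        trace.append((pc, op))
        pc += 1
    return trace

prog = {0: "nop", 1: "nop", 2: "nop",      # main code
        100: "handler", 101: "reti"}       # interrupt service routine
trace = run(prog, vector_table={0: 100}, irq_at=1, steps=5)
print(trace)   # [(0, 'nop'), (100, 'handler'), (1, 'nop'), (2, 'nop')]
```

Note that the interrupted code resumes at address 1 exactly where it left off, with no cooperation from the "caller".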
08:31:02 Ah yes, the Z80 has an interrupt vector as well 08:31:50 Right, there's two modes of interrupts, software and hardware 08:31:50 http://tutorials.eeems.ca/ASMin28Days/lesson/day23.html 08:32:04 I'll keep reading, thanks KipIngram , Zarutian , ecraven . 08:35:40 --- part: xpoqp left #forth 08:36:01 --- quit: Keshl (Read error: Connection reset by peer) 08:36:26 --- join: Keshl (~Purple@24.115.185.149.res-cmts.gld.ptd.net) joined #forth 08:44:27 A software interrupt is even more like a subroutine call, in that you trigger it from a specific point in your program. 08:44:37 Like a hardware interrupt, it pushes the flags as well as the return address. 08:44:56 Often that changes privilege too, and is how a lot of processors give access to the OS. 09:05:18 --- quit: dave0 (Quit: dave's not here) 09:42:01 Right. It's used for key input for instance. 09:42:20 Yes, or disk access or any of various other things. 09:45:38 Stopped by an electronics store today and there were a lot of embedded devices 09:46:18 The sellers didn't know the chip models but it'd be interesting to see if a Forth could have been implemented on that 09:55:50 --- join: [1]MrMobius (~default@c-73-134-82-217.hsd1.va.comcast.net) joined #forth 09:59:13 --- quit: MrMobius (Ping timeout: 252 seconds) 09:59:13 --- nick: [1]MrMobius -> MrMobius 10:30:32 I don't think there's a processor in existence that couldn't host a forth 10:36:00 PIC 10:43:55 Yeah, I've felt like I could cram a "Forth-ish" onto a PIC, but it's not a very friendly architecture. 10:44:41 The instruction set is pretty clean, though, I once spent an afternoon writing an assembler in Forth for a PIC. Intellectual exercise only - I never used or ran anything it did, but I remember thinking it was pretty straightforward. 10:45:04 When you get to design your syntax any way you want (which means you design it to be easy to make work) it's... quite helpful. 
12:38:38 --- join: dddddd (~dddddd@unaffiliated/dddddd) joined #forth 12:47:04 KipIngram: I'm about to write a forth for dsPIC33 because I need the ctmu. I'm not aware of any other micros that can measure time down to 5ps. 12:49:12 Oh... nice. 12:49:52 What are you going to measure with it? 12:50:11 I'm curious about anything that requires that kind of resolution. 12:56:21 KipIngram: I'm using a few LEDs as light sensors for a kind of compressed sensing optical flow camera. I'm measuring discharge time under light exposure rather than measuring the photocurrent directly. ( http://ch00ftech.com/2011/11/16/ldds/ ) 12:56:31 --- join: john_metcalf (~digital_w@host81-136-110-38.range81-136.btcentralplus.com) joined #forth 12:57:00 more time resolution gives the camera greater spatial resolution. 12:57:48 Some people are using the ctmu for tdr to measure lengths of wire. 12:59:27 Cool. I grok that one better than your first thing. 12:59:56 Damn, that would measure wire length down to like 1/20 inch. 13:00:33 Chuck Moore once wrote about a "one wire keyboard." 13:01:16 It was just a wire folded so both ends were accessible, and then all the keys were set up so they'd short the two pieces together. 13:01:27 Measure how far from the ends the short was. 13:01:31 Identify the key that way. 13:01:59 Treat it like a transmission line - just send a pulse into it and time it. 13:03:13 Only problem I saw with it is that you could only identify the nearest key that was depressed - everything else was "hidden." 13:03:25 So I don't know how he thought he was going to do shift keys. 13:05:15 You could compare lengths with a race condition between two delay lines. 13:05:42 I heard of keyboards that are basically giant parallel in serial out shift register. 13:08:58 If you project a 2d lattice at a golden ratio angle onto a 1D variable resistor you get a 1D aperiodic quasicrystal on the variable resistor.
Every 2D location maps to a distinguishable 1D location -- no translational symmetry and thus no keyboard ghosting. 13:11:03 ...and it's continuous, so this could work for touch panels. 13:11:14 pointfree: can you unpack that a bit? 13:14:35 Can I imagine that as a square grid of points, and I look at it at the right angle and can see every key? 13:14:46 Every point, that is? 13:15:41 Or, another way - if I choose the right angle I can draw a grid of parallel lines that come in from the outside, and each line hits a lattice point and doesn't touch any other points? 13:16:20 Or yet another way - if I tied a string to each point in the 2D lattice, there's an angle I can hold it at such that all the strings hang down parallel and don't touch? 13:42:20 --- quit: xek (Remote host closed the connection) 14:14:49 --- join: xek (~xek@user-5-173-128-60.play-internet.pl) joined #forth 14:17:29 --- quit: xek (Remote host closed the connection) 14:42:54 --- join: wa5qjh (~quassel@175.158.225.215) joined #forth 14:42:54 --- quit: wa5qjh (Changing host) 14:42:54 --- join: wa5qjh (~quassel@freebsd/user/wa5qjh) joined #forth 15:05:43 --- quit: cheater (Ping timeout: 260 seconds) 15:07:57 --- join: CORDIC (~user@109-93-214-149.dynamic.isp.telekom.rs) joined #forth 15:08:18 --- join: cheater (~cheater@unaffiliated/cheater) joined #forth 15:08:49 --- quit: DKordic (Ping timeout: 240 seconds) 15:13:09 --- quit: CORDIC (Ping timeout: 240 seconds) 16:02:11 --- join: pierpa (4f12eaf7@gateway/web/freenode/ip.79.18.234.247) joined #forth 17:11:52 --- quit: wa5qjh (Quit: http://quassel-irc.org - Chat comfortably. Anywhere.) 17:12:01 --- join: dave0 (~dave@90.20.215.218.dyn.iprimus.net.au) joined #forth 17:12:36 hi 17:14:46 --- join: wa5qjh (~quassel@175.158.225.215) joined #forth 17:14:46 --- quit: wa5qjh (Changing host) 17:14:46 --- join: wa5qjh (~quassel@freebsd/user/wa5qjh) joined #forth 17:21:51 --- quit: wa5qjh (Remote host closed the connection) 17:29:10 Hi Dave. 
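pointfree's golden-ratio projection claim from earlier is easy to check numerically: project each lattice point (x, y) to x + y·φ and confirm every key lands at a distinct 1D position. A rough sketch (a real panel would read resistance, not floats):

```python
# Project each 2D lattice point (x, y) onto a line: p = x + y * phi.
# Because phi is irrational, x1 + y1*phi == x2 + y2*phi forces
# (x1, y1) == (x2, y2) for integer coordinates, so every key maps
# to a distinguishable 1D position -- no ghosting from overlaps.
import itertools
import math

phi = (1 + math.sqrt(5)) / 2
N = 8                                    # an 8x8 "keyboard"
points = list(itertools.product(range(N), repeat=2))
proj = sorted(x + y * phi for x, y in points)

# Smallest gap between any two projected keys:
min_gap = min(b - a for a, b in zip(proj, proj[1:]))
print(len(set(proj)) == len(points))    # True: all 64 keys distinct
```

The gaps are uneven (that is the aperiodic quasicrystal), but none of them is zero, which is the property that matters for telling keys apart.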
17:30:23 --- join: wa5qjh (~quassel@freebsd/user/wa5qjh) joined #forth 17:30:59 hi KipIngram 17:31:05 how's it going? 17:31:40 Pretty good. Been buried in work lately though - haven't done any Forth coding in weeks. :-( 17:31:50 Read a couple of interesting docs, but that's it. 17:32:16 siraben inspired me to learn z80 assembly 17:32:46 so i've been writing little code snippets 17:35:45 Heh heh. 17:36:26 Well, historical information isn't bad. I'll be surprised if you get to use it for anything, though. But hey, every angle you can look at programming from has to do some good. 17:36:50 i'm happy to run it on an emulator 17:37:07 I still think that disk management is the next thing I'm going to work on. 17:37:22 Logical disk management - not going to do any disk controller interfacing or anything like tha tyet. 17:37:24 that yet 17:37:25 sounds difficult :-( 17:37:36 The disk handling? 17:37:40 I don't think it will be too bad. 17:38:00 In some ways it's similar to what I'm already doing with RAM and with my symbol table - basically just a linked list structure. 17:38:41 dave0: What other assembly languages are you familiar with? 17:39:55 I've used x86 a good bit, and long ago did quite a bit of 6809 programming. I read thoroughly about the 68000, because I was half in love with it, but never got an opportunity to program it. 17:40:06 siraben: i can code x86 m68k 6809 and PIC 17:40:31 m68k had lots of registers compared to other CISC at the time 17:40:35 Looks like I'm going to be tackling Cortex M4 for this little Itsy board port. 17:40:35 I should learn x86, hm. 17:40:39 it was comfortable 17:41:14 siraben: my computers are all pc's so i had to eventually learn x86 17:41:26 i don't know the newer features like vectors 17:41:34 dave0: I'm really glad they added the extra regs to x86_64; I couldn't do some of the things I'm doing otherwise. 17:41:40 Me either. 17:41:44 x86 has a lot of nice instructions, right? Compared to the minimalism of the Z80. 
17:42:14 It's a monstrosity. Which is a way of saying yes, it has tons of instructions. 17:42:23 They just kept piling them on and finding some way to stick them in. 17:42:39 siraben: the 16 bit x86 was as complicated as the z80 17:43:02 Recently I'm working on a way for me to input to a program the available register movement operations and the stack effect. 17:43:06 The idea is that it would generate assembly code for, say ROT without me lifting a finger. 17:43:17 Heh heh heh. 17:43:21 siraben: i386 and later got a bit more sensible 17:43:23 Since there's only a finite number of registers and operations, it's a good way to reduce errors. 17:43:26 Look into the golang assembler. 17:43:29 Ah I see. 17:43:37 KipIngram: You write Go? 17:43:49 No, but I watched a video presentation of how they do that part. 17:44:09 i believe the go assembler came from plan 9 17:44:11 They have a "meta assembly language" and they scrape processor data sheets for the info on how to convert that to the processor's machine code. 17:44:35 i wish i knew how to write an assembler :-( 17:44:35 So the compiler generates that meta language, independent of the target. 17:44:50 Then an automatically generated (right up your alley) piece of code takes it the rest of the way. 17:45:10 dave0: we can chat about that in PM if you like. 17:50:54 I'm interested in writing an assembler as well 17:51:21 Well I have the gist of how to write a simple one, with labels and converting instructions to opcodes, I'm figuring out how to make macros work. 17:51:33 Scheme has made it very easy to create an assembler. 17:51:54 Ah, ok, you're further along. Yeah, I was just going to basically outline the two pass idea. 17:52:06 Weird - my connection from my Mac to here just got cut somehow. 17:52:13 I can't even ping my seedbox. 17:52:27 But I can from a server at the office, and that's how I'm currently connected. 17:52:35 Screen session was still running - I just attached to it.
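The two-pass idea mentioned above (collect label addresses, then encode) in miniature. The ISA here is a made-up toy, and $9D95 is used only because it is the commonly cited TI-83 Plus program load address:

```python
# Minimal two-pass assembler sketch for an invented toy ISA.
# Pass 1 assigns addresses and records labels; pass 2 encodes
# instructions, resolving label operands to addresses.

OPCODES = {"nop": 0x00, "lda": 0x3A, "jmp": 0xC3}   # toy encoding
SIZES   = {"nop": 1,    "lda": 3,    "jmp": 3}      # bytes per insn

def assemble(lines, org=0):
    labels, addr = {}, org
    # Pass 1: walk the source, recording each label's address.
    for line in lines:
        line = line.strip()
        if line.endswith(":"):
            labels[line[:-1]] = addr
        elif line:
            addr += SIZES[line.split()[0]]
    # Pass 2: emit bytes, substituting label operands.
    out = []
    for line in lines:
        line = line.strip()
        if not line or line.endswith(":"):
            continue
        parts = line.split()
        out.append(OPCODES[parts[0]])
        if len(parts) > 1:
            val = labels.get(parts[1])
            if val is None:
                val = int(parts[1], 16)              # hex literal
            out += [val & 0xFF, (val >> 8) & 0xFF]   # little-endian
    return out

src = ["start:", "nop", "jmp start"]
print([hex(b) for b in assemble(src, org=0x9D95)])
# ['0x0', '0xc3', '0x95', '0x9d']
```

Macros and expressions complicate pass 1 (sizes may depend on expansion), which is the part siraben says he is still working out.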
17:53:07 The hard part of writing an assembler is getting the header of the executable right. 17:53:23 Oh, certainly. 17:53:38 I've written an assembler that, um, compiles to a list of bytes to be pasted inside another ASM file to be assembled by another assembler 17:53:42 But I hope to control the full stack 17:54:02 Ah, good - back on the home connection. 17:54:45 --- quit: wa5qjh (Quit: http://quassel-irc.org - Chat comfortably. Anywhere.) 17:54:58 This is an abandoned project of mine to compile a small language to Z80 17:54:59 https://github.com/siraben/sicp-to-z80 17:55:00 :-) 17:55:06 More of a compiler than assembler 17:56:00 Documentation on assembling for my calculator is so scarce I have to read C++ code 17:56:54 --- join: wa5qjh (~quassel@110.54.175.111) joined #forth 17:56:54 --- quit: wa5qjh (Changing host) 17:56:54 --- join: wa5qjh (~quassel@freebsd/user/wa5qjh) joined #forth 18:05:42 --- quit: wa5qjh (Remote host closed the connection) 18:08:37 It's getting that way all over - most people totally neglect assembly. 18:11:00 And for good reason, I suppose. 18:11:17 --- join: wa5qjh (~quassel@175.158.225.213) joined #forth 18:11:17 --- quit: wa5qjh (Changing host) 18:11:17 --- join: wa5qjh (~quassel@freebsd/user/wa5qjh) joined #forth 18:11:51 Well, I think it's a shame people don't want to understand those deep levels. 18:13:46 Kind of like the self-driving car thing; next thing you know people aren't going to know how to drive. 18:13:50 I *like* driving. 18:14:32 But seriously - if that tech becomes "the norm / commonplace," we'll wind up with people who never bother to learn to drive. 18:14:58 For me I think it depends on the level of abstraction people need. A person who just wants to compute the average of a list of numbers can use a spreadsheet and not worry about it. 18:15:08 Of course, the more understanding the better. 18:15:17 Yes, but I don't consider that "programming." 18:15:23 That's using an appliance. 18:15:26 Like a toaster oven. 
18:15:49 Right. 18:16:37 We could write a C program with several nested loops, that accesses an array. 18:16:57 And we could make it so if you traversed the array one way (say by rows), it would run fast, but if you do it the other way it will crawl. 18:17:12 A programmer should know why that's the case, and how to do it the right way. 18:17:58 I wonder if any compilers will re-order that kind of thing for you? 18:18:53 On the other hand, the array might be control registers in a piece of hardware. 18:19:03 And they might need to be accessed in a particular order. 18:19:23 So a compiler would have to be awfully smart to know how and when to do that optimization. 18:21:04 there are ways to tell the compiler those things 18:21:12 volatile for example is really useful 18:21:20 Right. I don't know many of those things. 18:21:29 But I know how to nest my loops in the right order. ;-) 18:21:35 Old school over here. 18:21:50 the reason people dont write much assembly code is because humans cant do it very well :) 18:21:57 compared to the compiler 18:22:01 Not many humans, at least. 18:22:23 There are people in the business these days that never would have made the cut 30-40 years ago. 18:22:34 Programming isn't unique in that, though. 18:22:44 I see it in music, too. 18:23:04 I'm sure it affects any field that technology has had a large impact on. 18:25:54 siraben, what do you mean documentation is scarce? isnt it standard Z80 stuff? 18:27:27 MrMobius: No, the way the calculator does signing and so on 18:27:47 oh i see 18:27:56 didnt they figure out a way to sign things without TI? 18:28:35 Of course, but after looking at some hexdumps there's a checksum at the end 18:28:42 Which I don't know how is calculated.
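On the trailing checksum siraben mentions: for TI-8x linking files (as opposed to the RSA-based OS/app signing) the checksum is commonly documented as just the low 16 bits of the byte-sum of the data section, stored little-endian in the last two bytes. A sketch under that assumption:

```python
# Hedged sketch: the trailing checksum on TI-8x variable/program
# files is (as commonly documented) the low 16 bits of the sum of
# the data section's bytes, appended little-endian.

def ti_checksum(data: bytes) -> bytes:
    total = sum(data) & 0xFFFF
    return bytes([total & 0xFF, total >> 8])

payload = bytes([0x0D, 0x00, 0xF8, 0x01])   # made-up data section
print(ti_checksum(payload).hex())           # '0601'
```

Hexdump verification is straightforward: sum every byte of the data section and compare against the final two bytes of the file.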
18:28:47 it's* 19:01:32 --- quit: dave0 (Quit: dave's not here) 19:25:02 --- join: reich (~reich@71-88-195-60.dhcp.kgpt.tn.charter.com) joined #forth 20:30:24 --- join: tabemann (~tabemann@172-13-49-137.lightspeed.milwwi.sbcglobal.net) joined #forth 20:32:57 --- quit: pierpa (Quit: Page closed) 20:38:46 --- quit: dddddd (Remote host closed the connection) 20:51:22 --- join: pierpal (~pierpal@host247-234-dynamic.18-79-r.retail.telecomitalia.it) joined #forth 20:54:25 --- quit: pierpal (Read error: Connection reset by peer) 21:06:50 * tabemann just got a more user-friendly closure interface working 21:08:41 e.g. you can do : foobar 0 1 2 3 4 <: . . . . ;> execute ; foobar 3 2 1 0 ok 21:18:43 --- join: pierpal (~pierpal@host247-234-dynamic.18-79-r.retail.telecomitalia.it) joined #forth 21:23:13 --- quit: pierpal (Ping timeout: 264 seconds) 21:27:51 --- join: pierpal (~pierpal@host247-234-dynamic.18-79-r.retail.telecomitalia.it) joined #forth 21:37:49 --- quit: pierpal (Read error: Connection reset by peer) 21:38:58 --- join: pierpal (~pierpal@host247-234-dynamic.18-79-r.retail.telecomitalia.it) joined #forth 21:42:09 --- quit: ncv (Ping timeout: 252 seconds) 21:43:08 Do you mean 4 3 2 1 ? 21:43:09 --- quit: pierpal (Ping timeout: 240 seconds) 21:46:15 --- join: dave0 (~dave@90.20.215.218.dyn.iprimus.net.au) joined #forth 21:46:26 re 21:48:26 --- join: pierpal (~pierpal@host247-234-dynamic.18-79-r.retail.telecomitalia.it) joined #forth 21:55:54 back 21:55:57 no, 3 2 1 0 21:56:05 What happened to 4? 21:56:12 4 is the number of cells to enclose 22:04:14 this is the first non-garbage-collected implementation of closures I have personally encountered (even though I suspect C++11 may also have such) 22:16:40 So what happens if I push numbers before foobar/ 22:16:40 ? 22:16:49 7 8 9 foobar => 3 2 1 0 ok ? 22:17:42 foobar doesn't take any arguments on the stack 22:18:00 however, you could make a function : quux 4 <: . . . . 
;> ; 22:18:22 that when you execute 7 8 9 10 quux it outputs an xt 22:18:42 which when executed outputs 10 9 8 7 22:19:47 I should go to bed though 22:20:51 huh I thought it did stack capture 22:21:07 it does do stack capture 22:21:24 it's just that in my original function, the function provided the arguments already 22:21:54 and it captures the number of arguments that you specify at the time of the closure's creation 22:24:59 one neat thing that you can do with this is create an xt that returns a value that is incremented each time it is executed, without using any outside storage visible to the user 22:28:03 : counter 1 cells allocate! tuck ! 1 <: dup @ dup 1+ rot ! ;> ; ok 22:28:03 $100 counter constant my-counter ok 22:28:03 my-counter execute . 256 ok 22:28:03 my-counter execute . 257 ok 22:28:03 my-counter execute . 258 ok 22:28:04 my-counter execute . 259 ok 22:29:33 note that allocate! is just : allocate abort" unable to allocate" ; 22:29:42 whoops 22:29:52 note that allocate! is just : allocate! allocate abort" unable to allocate" ; 22:30:04 I should go to bed though 22:33:40 Ah, so they're closures! 22:33:50 That's good. Internal state should be hidden. 22:34:01 I hate guts spilling out everywhere 22:34:05 Sweet dreams. 23:59:59 --- log: ended forth/18.10.23
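tabemann's counter above — one heap-allocated cell captured by a closure, invisible to the caller — maps directly onto closures in most languages. A rough Python analogue of the Forth counter:

```python
# Python analogue of the Forth counter closure above: one cell of
# captured state, hidden from the caller, incremented on each call.

def counter(start):
    cell = [start]             # the single allocated cell
    def step():                # the "xt": return value, then bump it
        value = cell[0]
        cell[0] = value + 1
        return value
    return step

my_counter = counter(0x100)
print(my_counter(), my_counter(), my_counter())   # 256 257 258
```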