00:00:00 --- log: started forth/20.09.28 00:25:39 --- quit: tabemann (Remote host closed the connection) 00:26:10 --- join: tabemann joined #forth 01:01:08 --- join: mtsd joined #forth 01:32:40 --- quit: gravicappa (Ping timeout: 246 seconds) 01:51:12 --- join: xek joined #forth 02:43:34 --- quit: mtsd (Quit: mtsd) 02:56:41 --- quit: jsoft (Quit: Leaving) 03:05:50 --- join: mtsd joined #forth 03:12:27 --- quit: mtsd (Quit: mtsd) 03:18:31 --- join: dave0 joined #forth 03:47:20 --- join: xek_ joined #forth 03:49:51 --- join: xek__ joined #forth 03:50:33 --- quit: xek (Ping timeout: 272 seconds) 03:52:46 --- quit: xek_ (Ping timeout: 264 seconds) 04:00:54 cheater, probably pretty well if you go with an STC like Tali Forth 2 04:01:51 STC? 04:02:00 subroutine threaded forth 04:02:47 there are different ways to organize things internally. one way is to have a list of addresses of subroutines and jump to each one by one 04:03:22 it's a lot faster though to have the same list with a jump to subroutine instruction before each address 04:03:46 so the call takes up 50% more room but you avoid pointer calculations which are really slow on the 6502 04:04:47 cheater, cells could also be 32 bit or even 64 bit. it just makes sense to keep them big enough to hold a pointer 04:07:46 why not 8 bit? 04:09:04 then you would have 8 bit pointers and only be able to address 256 bytes 04:09:21 or have a scheme where pointers take up 2 cells 04:17:42 i mean for short jumps that's fine enough, right? 
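The two threading models described above (a bare list of subroutine addresses walked by an inner interpreter, versus subroutine-threaded code with a jump-to-subroutine instruction before each address) can be sketched in Python. This is purely illustrative: the word names and the `run_threaded` loop are invented for the sketch, not taken from Tali Forth 2 or any real 6502 Forth.

```python
# Minimal sketch (not 6502 code) of the two threading models discussed
# above. DUP and PLUS stand in for Forth primitives.

def DUP(stack):  stack.append(stack[-1])
def PLUS(stack): stack.append(stack.pop() + stack.pop())

# Threaded model: the compiled word is a list of "addresses" (here,
# Python function objects) and an inner interpreter ("NEXT") steps an
# instruction pointer through it.
def run_threaded(thread, stack):
    ip = 0
    while ip < len(thread):      # NEXT: fetch the next address, jump to it
        thread[ip](stack)
        ip += 1

# Subroutine-threaded model: each entry is preceded by a call
# instruction, so the word is straight-line code with no inner
# interpreter. The Python analogue is a function of plain calls:
def double_stc(stack):
    DUP(stack)                   # like: JSR DUP
    PLUS(stack)                  # like: JSR PLUS

stack = [21]
run_threaded([DUP, PLUS], stack) # "21 DUP +" via the inner interpreter
print(stack)                     # -> [42]
```

The 50% size cost from the log is visible here: each thread entry grows from one address to a call instruction plus an address, but the pointer-chasing NEXT loop disappears.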
04:18:07 it's only long jumps where that's a problem, and you should have two or more cells 04:18:18 --- join: Zarutian_HTC joined #forth 04:18:39 65c816 has 24 bit address space 04:18:53 so you might want to use 3 bytes for the pointers 04:19:06 i guess you'd have to push them all on the stack and then jump, or something 04:20:15 for example, you could actually push the machine code instruction for jump onto the stack, then the 24 bits of the pointer, and then have a forth function called "interpret_jump" 04:20:19 they will basically all be long jumps and even short jumps wouldnt work well since youre either jumping into the first 256 bytes where the stack is or doing a relative jump which will be slow 04:20:45 yeah 04:21:41 ya someone made a forth for 816 and says he got a huge speedup 04:21:48 wonder how? 04:21:58 oh yea the 816 has 16-bit registers 04:22:00 nice 04:22:25 i wonder if you could make a basic interpreter that runs on forth? 04:23:11 and better addressing so some of that is less painful. havent used it myself though. some people say the processor is unpleasant to work with 04:23:31 wonder why they say that 04:23:53 sure. you would hate your life and it would be ungodly slow on the 6502 but you could do it 04:24:54 several reasons. its not a 16 bit redesign.
its an 8 bit 6502 with 16 bit stuff frankensteined on top 04:28:09 i'm mostly thinking of putting it on the 816 to be honest 04:28:45 also let's be honest with each other no one's going to write huge programs like this, for anything even medium complex they'd write some other language on some other machine and cross compile 04:29:21 so maybe if you have an 8 bit pointer and can only address 256 bytes at first, maybe that's perfectly fine for a simple program 04:33:31 like most 16 bit machines will have an 8 bit and a 16 bit version of an instruction like ADD but the 816 just has an 8 bit mode and a 16 bit mode for each register and only one ADD instruction 04:33:42 oh 04:33:46 interesting 04:33:53 MrMobius, basically the 6502 was meant to treat the zero page as registers, that's why operating on that is cheap (or supposed to be, i don't actually know) 04:34:28 so when you jump into a function or into an interrupt you may not know what modes you're in which is a real pain 04:36:01 cheater, the problem is though that using 8 bit pointers doesnt just mean your program has to fit in 256 bytes but also that the whole forth interpreter does if you want to be able to access the address of primitives or other stuff 04:36:35 hmmm right 04:36:48 and even then you cant fit much of a meaningful program in 256 bytes. the code for my robot game is 40k or so I think 04:37:03 what if i made it a thing where /writing/ to the stack is more expensive but reading from it is cheaper? 04:37:24 how would you do that? 04:37:24 i.e. when you write to the stack, you basically write the machine code you'd want to have 04:37:42 but that'll have to be taken from memory somewhere else, which is more expensive 04:38:00 some sort of partly pre-compiled forth 04:38:47 hmm, it doesnt work like what youre describing 04:38:53 why not? 04:39:16 what machine code are you writing when you write to the stack? you dont put instructions on the stack, just data 04:41:37 i was thinking of one of two designs.
design 1: have the stack grow backwards, i.e. opposite to the way machine code is interpreted. then run that. design 2: when committing data to the stack, pre-place holes that you will later populate with instructions once you're ready to. once you're at the point of executing that instruction together with its data, you take the ultimate result, place it where the instruction was, and discard the rest (i.e. the 04:41:38 locations where the hole's data was). this means the stack structure is retained and the result of this instruction can be used in the next instruction, for which there will be a hole as well. 04:43:15 why would you put machine code on the stack though? there's no reason to jump into the stack and start executing things 04:44:07 if youre leaving a hole there to jump into, you can only put one instruction there 04:44:34 even the simplest forth instruction takes several steps so needs a dozen or more bytes of instructions 04:45:44 i mean yeah. i guess this is a thing that would need to be integrated into the design. i was just trying to present the idea in a simple fashion 04:46:32 nothing's stopping you from having larger holes, or having multiple-hole systems 04:47:08 like maybe commit to two holes: the instruction to jump to, followed by its data (not holes), then at the end you have a hole for another jump instruction 04:47:24 doesnt sound like it would be any faster than what already exists, but what you could do is get a forth for your pc and learn it really well then see if you can implement your improved version on the 816 or some other platform 04:48:25 tbh i'd probably write an 816 emulator and try to figure out how to build something for that 04:48:30 ya what youre describing is the thread of instructions not the data stack. you can inline data in the thread and use the return address to get the address of the data 04:48:34 i mean 8502 04:48:38 er 6502 04:48:41 what's wrong with me today 04:49:16 what is the thread of instructions?
that's not a standard forth feature, is it? 04:50:01 its the list of subroutine calls I was talking about before. every forth has some sort of thread 04:50:49 go for it. there are a lot of 6502 emulators. it's a good project to learn about the chip. there's even a pretty well tested verification program you can run to test your emulator 04:57:01 don't you think a 6502 forth would be faster without a subroutine thread? 04:57:17 looks like referring to non-immediate data is the real perf killer here 04:58:17 --- join: mtsd joined #forth 04:59:22 that's why i was thinking of this hole system, because that makes the data always immediate to the instructions 05:01:30 how many bytes of arguments does a 6502 machine instruction take? is this variable? what about the 816? 05:03:47 cheater, faster? how else are you going to run anything? 05:04:50 I see what you mean. there is a form of this you can use in some situations 05:06:12 you write instructions to memory and modify their arguments at run time. this is called self modifying code and only works in specific circumstances. you couldn't do everything that way. there is overhead to filling the holes 05:06:32 yes variable. 0-2 on 6502. 0-3 I think on 816 but not sure 05:15:51 --- join: gravicappa joined #forth 05:22:10 --- quit: dave0 (Quit: dave's not here) 05:48:28 hmm, looks like they added incremental compilation to the zig programming language 05:48:45 yeah, i was thinking of smc, but instead of modifying the instruction arguments, you put up arguments and you modify the instruction that's being called 05:48:51 I wonder if we'll see NASA running that instead of forth on satellites 05:50:03 cheater, how would that be better though? it takes time to write the instruction there and the instruction only does one thing.
it takes longer to put the instruction in the hole than to execute it 05:50:52 it only makes sense if the overhead of changing the instruction is less than the improvement you get from using the immediate version which seems like it would only be in a loop 05:52:14 it's a trade off, either the cost is large at write (my way) or at read (your way) 05:53:50 so for example, you might be able to optimize the writes. if you know you'll be doing only one specific thing with the data once that data is on the stack, just include the instruction with the data, so it's put on the stack together; and the jump-back instruction, which is the other part that gets modified, that still gets "fixed up" by the runtime 05:54:23 or let's say you have a bunch of data, and you want to fold or map over it, doing basically the same thing to each piece of the data 05:54:55 you'd load the data into memory with holes, then load one instruction into a zero-page "register" for easy access, and write it to all the holes 05:56:24 you might need to do that multiple times due to there being multiple things you need to write into the holes 05:56:48 but in general you could transform data like this. it's very reminiscent of the map/fold paradigm you'd find in functional programming. 05:58:26 --- quit: mtsd (Quit: Leaving) 05:58:39 cheater, that's just not how that works at all. what youre describing does not match the way forth works or more importantly how the processor works 06:00:36 would you like to explain the issues to me? it's okay if you don't feel like it, i'm just curious 06:01:29 lets say you have an 8 bit stack, which I think is a bad idea and you should not do 06:01:57 you can put an instruction to add for example in the hole and that will add the value to the accumulator but what then?
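The self-modifying-code idea being debated here, and the point that patching only pays off when the patched instruction runs many times, can be shown with a toy interpreter. The opcode bytes $A9 and $69 happen to match the real 6502 encodings for LDA-immediate and ADC-immediate, but everything else (the list-as-memory model, ignoring flags and carry) is a deliberate simplification for the sketch.

```python
# Toy sketch of the "hole" / self-modifying-code idea: an instruction's
# immediate operand sits in "memory" and is patched at run time.
# $A9 = LDA #imm and $69 = ADC #imm as on a real 6502, but this model
# ignores the carry flag and all other processor state.

LDA_IMM, ADC_IMM, HALT = 0xA9, 0x69, 0x00

# "LDA #<hole> ; ADC #<hole> ; HALT" with operand holes at offsets 1 and 3
code = [LDA_IMM, 0x00, ADC_IMM, 0x00, HALT]

def run(code):
    a, pc = 0, 0
    while code[pc] != HALT:
        op, arg = code[pc], code[pc + 1]
        if op == LDA_IMM:
            a = arg
        elif op == ADC_IMM:
            a = (a + arg) & 0xFF     # 8-bit add, carry ignored
        pc += 2
    return a

# Filling a hole has a fixed cost, so it only pays off when the patched
# code then runs many times, e.g. mapping "add 2" over a block of data:
code[3] = 2                          # patch the ADC operand once
data = [10, 20, 30]
for i, v in enumerate(data):
    code[1] = v                      # feed each element in via the LDA hole
    data[i] = run(code)
print(data)                          # -> [12, 22, 32]
```

Note how the per-element patch of the LDA operand is exactly the write-side cost MrMobius describes: every trip through the loop pays for a store into the code before any of it executes.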
06:02:18 that value then needs to be written somewhere and the stack pointer needs to be adjusted 06:02:52 it would be better to use a 16 bit stack but that is even worse since nothing you put in the hole can do a 16 bit add 06:03:58 the hole can be multiple bytes long 06:04:02 so you could leave an enormous hole and put a whole bunch of instructions in there but then we're back to the huge overhead of copying that in 06:04:11 also if you just have one stack, then you can use the program counter 06:04:16 which is 16 bit 06:04:32 again, thats just not how that works. the program and stack are different 06:04:40 why must they be different? 06:05:07 also, if you have the hole multiple bytes long, you arent using the immediate mode any more so you lose the speed up you get from loading an immediate 06:05:48 not necessarily. the hole can be actually multiple holes: one small hole for the instruction with arguments following it, and then another longer hole for the handler 06:06:27 because as the program counter advances, you leave the results of calculations behind you in the space for data. you need some way to pull that forward and keep working on it. thats what the data stack does. its always in the same place no matter where your program counter is 06:07:28 cheater, are you on windows? 06:07:45 yeah 06:08:41 download this 6502 simulator+assembler and give it a try http://exifpro.com/utils.html 06:09:06 its a really cool program. i keep it running on my taskbar 24/7 to test asm snippets 06:09:13 haha that's nice 06:09:31 try to implement what youre describing there and youll see what I mean 06:09:43 you may not be able to see that this doesnt work until you implement it yourself 06:10:19 also ##6502 is very active 06:11:24 *hacker voice* i'm in 06:11:29 what do i do now? 06:11:41 i guess i'll look for a hello world 06:12:08 http://www.rosettacode.org/wiki/6502_Assembly 06:12:09 nice 06:13:32 http://www.rosettacode.org/wiki/Even_or_odd#6502_Assembly hmm...
this is Apple II code... is the syntax right for the exifpro assembler? and the apple ii specific stuff? 06:14:01 hmm, no 06:14:12 probably better to discuss in ##6502 06:15:25 yep let me join 06:50:54 cool tidbit about forth on c64 https://www.reddit.com/r/c64/comments/j0y1rv/is_forth_a_good_match_for_6502/g6whc4p/?context=3 07:06:53 --- quit: cp- (Quit: Disappeared in a puff of smoke) 07:08:19 --- join: cp- joined #forth 07:09:22 --- quit: cp- (Client Quit) 07:10:19 --- join: cp- joined #forth 08:28:20 --- join: mark4 joined #forth 08:29:30 tabemann, that would actually be the primary use for zero page :) 08:30:36 i would consider using index x for a stack pointer somewhere other than 0x100 area, leave that for the processor stack but have two software stacks elsewhere in ram. 64k is freaking huge :) 08:31:11 i would also put the forth kernel under the basic rom which can be paged out to reveal the ram under it 08:31:51 --- quit: Zarutian_HTC (Ping timeout: 258 seconds) 08:39:29 mark4, yep. probably want to put the x-based data stack in zero page 08:46:23 sp yes, stack no :) 08:47:02 i would have sp and rp as zero page variables pointing to 0x200 and 0x300 08:47:14 tho for a c64 that would probably be $200 and $300 :) 08:48:39 mark4, hardware stack is fixed at $100 so no choice about that 08:49:03 right but i would not use that for parameters 08:49:15 actually i guess that could be the return stack? 08:49:21 right, r stack 08:49:28 i forget do you have access to SP on the c64? 08:49:50 not directly. you can transfer SP to X, modify then transfer it back 08:49:59 0x00 to 0xff is zero page, page 1 is stack 08:50:04 aha thats right.
thats good enough :) 08:50:29 been a very very long time since i looked at a c64 :) 08:50:34 very many fond memories :) 08:50:38 hehe 08:51:38 i learned 6502 in 2 weeks and then bought a c64 :) 08:51:56 i did a hand reverse engineer of a disassembler that was published in CCI :) 08:52:15 took 2 weeks and then i knew almost every opcode by heart :) 08:52:44 nice :) 08:53:15 I didnt get one until 10 or 15 years ago off ebay. it was fun to play around with but didnt work any more when I dug it out of the closet a few years ago 08:53:31 awww 08:53:35 wish i still had mine 08:53:56 tried to repair it last year and some of the ttl chips have been replaced and socketed before it got to me since they didnt do that at the factory. problems match that chip being bad 08:54:31 theres a guy on youtube that repairs c64s :) 08:54:40 they are neat. I had a lot of ideas for neat things but kind of lost interest when I found out the process they used for the chips was flawed and the chips' lives shorten the longer you use them 08:55:12 so no sense in leaving it on to run an 8 hour simulation or something even if it would be neat or running a server on it 08:55:40 yep! I would be totally lost trying to fix anything without that :P 08:58:23 yea and you can do it in an emulator too :) 08:59:16 bbl have an errand to run 09:01:52 --- join: Zarutian_HTC joined #forth 09:23:24 --- quit: cantstanya (Ping timeout: 240 seconds) 09:26:22 --- quit: xek__ (Ping timeout: 264 seconds) 10:19:29 --- join: xek__ joined #forth 10:51:52 --- join: WickedShell joined #forth 11:15:44 --- quit: Zarutian_HTC (Remote host closed the connection) 11:29:13 --- quit: gravicappa (Ping timeout: 256 seconds) 11:30:05 --- join: gravicappa joined #forth 12:38:04 MrMobius: say more about the chip life being shortened with use? 12:40:40 I have an old 64 in a box again after booting it up and confirming it worked a few years ago. much nostalgia.
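The memory layout mark4 described earlier (hardware stack fixed at $0100 serving as the return stack, a software data stack in page $02 with its pointer kept in a zero-page variable) can be modeled in a few lines. The zero-page address $FB and the helper names `dpush`/`dpop` are invented for this sketch; on a real c64 you would pick free zero-page locations yourself.

```python
# Sketch of the two-software-stack layout from the discussion above:
# 64K of RAM, with the data stack pointer held in zero page (address
# $FB here, chosen arbitrarily) and the stack itself descending
# through page $02. The hardware stack at $0100 is left alone for
# return addresses.

mem = bytearray(0x10000)      # 64K of RAM
SP_ZP = 0x00FB                # zero-page variable holding the data stack pointer

mem[SP_ZP] = 0x00             # empty descending stack within page $02

def dpush(byte):
    mem[SP_ZP] = (mem[SP_ZP] - 1) & 0xFF
    mem[0x0200 + mem[SP_ZP]] = byte

def dpop():
    byte = mem[0x0200 + mem[SP_ZP]]
    mem[SP_ZP] = (mem[SP_ZP] + 1) & 0xFF
    return byte

dpush(42)
print(dpop())                 # -> 42
```

Keeping the pointer in zero page mirrors the 6502 trick in the chat: zero-page (or X-indexed) accesses are cheaper than absolute ones, which is why the data stack pointer, or the stack itself, lives there.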
12:43:48 it was boron or something they used in the process and didnt have it exactly right so the chips MOS made themselves will die with use 12:49:38 interesting. I'll research more. 13:02:52 bluekelp, I think I might be partly wrong. I heard Bil Herd talk about it in a video but it might just be the PLAs 13:03:04 and you can get PLA replacements 13:11:28 --- quit: remexre (Ping timeout: 240 seconds) 13:13:37 --- join: remexre joined #forth 13:46:07 --- quit: gravicappa (Ping timeout: 256 seconds) 13:58:49 --- quit: xek__ (Ping timeout: 256 seconds) 14:05:20 --- quit: _whitelogger (Remote host closed the connection) 14:06:01 --- join: f-a joined #forth 14:08:22 --- join: _whitelogger joined #forth 14:13:09 a friend of mine is trying to get me into soldering 14:13:15 maybe I should dust off forth too 14:26:27 --- join: WQX joined #forth 14:54:37 --- quit: f-a (Quit: leaving) 15:17:11 --- part: WQX left #forth 16:50:05 --- join: dave0 joined #forth 18:23:51 --- quit: Lord_Nightmare (Quit: ZNC - http://znc.in) 18:26:21 --- join: Zarutian_HTC joined #forth 18:28:02 --- join: Lord_Nightmare joined #forth 18:41:33 --- quit: dave0 (Quit: dave's not here) 18:49:14 --- join: boru` joined #forth 18:49:17 --- quit: boru (Disconnected by services) 18:49:20 --- nick: boru` -> boru 19:10:00 --- quit: mark4 (Remote host closed the connection) 19:19:53 --- quit: iyzsong (Quit: ZNC 1.7.5 - https://znc.in) 19:21:02 --- join: Zarutian_HTC1 joined #forth 19:21:02 --- quit: Zarutian_HTC (Read error: Connection reset by peer) 19:22:14 --- join: iyzsong joined #forth 19:27:05 --- quit: WickedShell (Remote host closed the connection) 19:41:45 --- quit: lonjil (Quit: No Ping reply in 180 seconds.) 19:46:54 --- join: lonjil joined #forth 20:31:59 --- join: jsoft joined #forth 20:58:21 --- join: gravicappa joined #forth 21:25:46 yay, fixed my TI-84+ Forth-based OS build by pinning a specific version of nixpkgs, so it will build for eternity now! 21:49:43 siraben, outstanding! 
21:49:51 eternity is a long time! ;-) 22:14:15 --- quit: jsoft (Remote host closed the connection) 22:18:37 --- join: jsoft joined #forth 22:20:24 --- nick: Zarutian_HTC1 -> Zarutian_HTC 22:23:23 --- quit: jsoft (Client Quit) 22:31:28 --- join: jsoft joined #forth 22:40:09 --- quit: reepca (Ping timeout: 256 seconds) 22:44:48 --- join: reepca joined #forth 23:42:55 --- join: mtsd joined #forth 23:59:59 --- log: ended forth/20.09.28