00:00:00 --- log: started forth/21.05.15 01:44:53 --- join: f-a joined #forth 02:41:08 --- quit: dave0 (Ping timeout: 245 seconds) 02:42:50 --- join: dave0 joined #forth 03:34:08 --- quit: Zarutian_HTC (Read error: Connection reset by peer) 03:34:16 --- join: Zarutian_HTC joined #forth 03:43:43 --- quit: f-a (Quit: leaving) 03:52:01 --- join: tech_exorcist joined #forth 04:31:12 --- join: lispmacs` joined #forth 04:42:16 --- join: f-a joined #forth 04:45:00 --- quit: lispmacs` (Remote host closed the connection) 04:52:36 --- quit: rixard () 05:37:53 --- join: pbaille_ joined #forth 05:40:11 --- quit: dave0 (Quit: dave's not here) 05:40:34 --- quit: pbaille (Ping timeout: 240 seconds) 05:42:54 --- quit: f-a (Ping timeout: 240 seconds) 05:45:03 --- join: f-a joined #forth 06:19:33 --- quit: actuallybatman (Ping timeout: 240 seconds) 07:04:18 --- join: rixard joined #forth 07:39:14 --- quit: Zarutian_HTC (Ping timeout: 240 seconds) 08:25:43 --- quit: proteus-guy (Ping timeout: 260 seconds) 08:38:10 --- join: proteus-guy joined #forth 08:42:09 --- quit: Keshl (Ping timeout: 268 seconds) 08:45:14 --- join: Keshl joined #forth 08:49:18 Regarding yesterday's conversation, I'm sure it's more that the test team "no longer has the old hardware" 08:49:40 rather than "not wanting to test." 08:49:53 They can't really keep hardware forever, I guess. 08:50:06 But I see your point - they really SHOULD keep it until it's end of lifed. 08:56:33 Something weird is up with my error recovery. 08:56:47 I realized last night that the way I'd written it, it wasn't preserving and restoring the stacks. 08:57:06 But when I extend the moves to get them, then it crashes when it restores the return stack. 08:57:30 And I can't figure out why - looking at the code the return stack shouldn't be involved at that point. 09:06:56 Ok, cool - now that I slept it was obvious what I'd overlooked - not it works. 
09:06:58 now 09:41:33 --- quit: f-a (Read error: Connection reset by peer) 09:44:57 --- join: f-a joined #forth 09:58:41 I will say, though, that in all this Forth work I've done clean error recovery is one of the most involved and delicate things I work on. 10:01:46 --- join: actuallybatman joined #forth 10:05:31 The wrinkle I just fixed today was caused by doing things in the wrong order. 10:06:26 I'd been restoring RAM, then recovering the stack pointer. But if the line with the error removes a bunch of stuff from the stack before hitting the error, then that caused me to recover the RAM with the stack pointer aimed into the stuff I just restored. Before I could get the stack pointer restored it overwrote stuff. 10:06:39 So the solution was to restore the stack pointer first, then recover the RAM. 10:06:50 Obvious in hindsight, but it's just easy to overlook some of these things. 10:07:06 So I will tip my hat to you gents that are interested in "proving correctness." :-) 10:07:43 A lot of the time you can just look at your code and know it's right. But sometimes it can be damn subtle, so getting a math-quality assurance is a "good thing", tm. 10:27:39 You know, I just noticed that this error recovery has an interesting application. 10:28:27 If you have your system in some state, and you want to see what the effect of some bit of code would be without destroying that state, you can do the test code and then follow it with something like 1 ERR at the end of the same line. 10:28:41 Force the system through error recovery - it will undo all of the actions of the line. 10:28:55 But you would have gotten a chance to see what your test code did. 10:32:10 It also occurs to me that my word FLUSH should re-snapshot the system after it writes the buffers out. Because otherwise error recovery would restore the system to believing the writes hadn't been done, and that's untrue. 
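The restore-the-stack-pointer-first ordering discussed above can be sketched as a toy Python model. This is only an illustration (the real system is Forth over machine code); the class and method names here are all hypothetical.

```python
# Toy model of line-level error recovery in a Forth-like VM.
# RAM holds both user data and the data stack; the stack grows
# downward from the top of RAM. All names are hypothetical.

RAM_SIZE = 16

class VM:
    def __init__(self):
        self.ram = [0] * RAM_SIZE
        self.sp = RAM_SIZE          # empty descending stack
        self.snap_ram = None
        self.snap_sp = None

    def snapshot(self):             # taken before each input line
        self.snap_ram = self.ram[:]
        self.snap_sp = self.sp

    def push(self, x):
        self.sp -= 1
        self.ram[self.sp] = x

    def pop(self):
        x = self.ram[self.sp]
        self.sp += 1
        return x

    def recover(self):
        # Order matters: restore the stack pointer FIRST, then RAM.
        # If RAM were restored while sp still pointed below snap_sp,
        # anything pushed before sp was fixed up would overwrite
        # freshly restored stack cells.
        self.sp = self.snap_sp
        self.ram = self.snap_ram[:]

vm = VM()
vm.push(1); vm.push(2)
vm.snapshot()
vm.push(99)      # the line does some work...
vm.recover()     # ...then hits an error (or a deliberate 1 ERR): undo it all
assert vm.pop() == 2 and vm.pop() == 1
```

The same model shows why the "1 ERR at the end of the line" trick works: everything the line did lives between `snapshot()` and `recover()`, so forcing recovery rolls all of it back.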
10:38:12 --- quit: pbaille_ (Read error: No route to host) 10:38:34 --- join: pbaille joined #forth 10:47:18 --- join: rtdos joined #forth 10:48:14 --- join: proteanthread joined #forth 10:48:22 --- join: bamberbiz joined #forth 10:50:54 So my total image size is up to just a hair under 16k now. 11:10:49 --- join: Zarutian_HTC joined #forth 11:47:01 this is for x86? 12:13:08 --- quit: bamberbiz (Ping timeout: 240 seconds) 12:25:51 --- join: bamberbiz joined #forth 13:03:33 Yes. 13:03:38 --- part: f-a left #forth 13:03:58 I intend for it to be portable, and am making design decisions with an eye toward that, but x86 (64-bit) is the only way I've deployed it to date. 13:04:40 There's supposed to be a layer of short machine code macros that I actually *write* the primitives with, and thought has gone into choosing those so that they're efficiently implementable in both x86 and ARM. 13:05:04 So the primitives themselves (and of course all the : definitions) should be platform portable. 13:05:20 At least that's the hope. 13:05:48 I haven't adhered to that perfectly - a few times I was in a hurry and just stuck naked machine code into primitives. 13:06:00 But it's not a whole lot and it shouldn't take me to long to find all those places and fix them. 13:06:22 "too" long 13:06:37 2 long 13:06:44 :-) 13:06:55 That's how my daughters would write it, yep. 13:07:10 They think that kind of thing is gr8. 13:07:52 I can read teen text language, but I don't create it on the fly very readily. 13:08:04 I have seen "two long ago" written without the l33t-5m0ji Cuneiform pictograms 13:08:40 It's a sub-culture I've just never explored. 13:09:07 I've nosed around hidden websites with TOR, and for about an evening I had Freenet on a computer once. 13:09:14 But that stuff scared me and I shut it down. 13:09:26 Especially freenet. 13:09:45 That's a dark and vicious place. 13:10:18 I use tor quite a bit. Damn handy for making sshd or such behind a nat accessible. 
13:10:57 Yes, tor is a really nice technology. But as far as doing what it was intended to do, well, I can just picture huge warehouses full of tor nodes under government monitoring. 13:11:10 It wouldn't surprise me to learn that half or more of the network was compromised. 13:11:15 and for checking airline ticket prices 13:11:43 Right - it's good for "tier one" masking of your location. 13:12:07 But I bet someone in the gov could find you without much trouble - I don't think it's fully successful in that regard. 13:12:15 If they really wanted to, that is - might take them some work. 13:12:53 Somewhere upstairs I've got a book on P2P technologies that walks through all those different things. 13:13:10 I thought diaspora was a pretty neat idea - p2p facebook. 13:13:23 But then the guy behind it suddenly turned up "suicided." 13:14:13 for tier 3 one uses an email to web gateway, a mixnet, and a remailer that sends to alt.anonymous.messages or as sms payloads through iridium global paging channels 13:14:54 I actually backed diaspora on kickstarter - got a t-shirt and everything. 13:14:59 But it never took off. 13:16:00 I also heard that there was a Canadian guy who had developed a distributed p2p file store system, and a "side benefit" of it was that it might have massively increased the public interest in tor. Being part of it meant you were supporting tor. 13:16:02 how userfriendly was diaspora supposed to be? 13:16:11 But *that* guy got killed in a "wrong address police raid." 13:16:31 Oh gosh, it's been a long time. 13:16:46 Fairly, I think - most of the network discovery and handling was supposed to be under the hood. 13:16:54 But it never got far enough along to really see. 13:19:26 when facebook stopped showing the feed in reverse chronological order and started with the promoted bullshit I stopped using it 13:21:10 the utility of fb was that I could check once a day on what folks I knew were publicly up to. I did not mind the interspersed ads. 
13:23:50 --- part: bamberbiz left #forth 13:25:40 but it got so pushy that I elected to use a scraper against them but the dev behind it gave up 13:44:56 --- quit: gravicappa (Ping timeout: 268 seconds) 13:47:01 Yes, me too. 13:47:18 For a while you could manually set it back to reverse chronology, but they just kept on moving in on it. 13:47:30 They mean for you to look at what THEY want you to look at. 13:47:32 I don't need it. 13:48:17 I regard them as one of our most wicked corporate entities. 13:51:42 Hmmm. 13:52:22 I don't have double numbers in my system. But nonetheless, when I do * the upper half of the product IS in rdx, and stays in rdx until I do something else later that needs that register. 13:52:45 It would be easy for me to have a primitive that just pushed my TOS to the stack and moved rdx into my TOS cache. 13:53:03 Long as I ran it immediately after * (more or less), then I'd have the full product on the stack. 13:53:22 Another primitive to take the top to stack items and move them into an internal double representation would be equally simple. 13:53:27 That seems worthwhile to me. 13:54:18 UGH. 13:54:26 Top *two* stack items. TWO. 13:54:45 Stay tuned - by the end of the day I must be going to make every possible to/too/two mistake. 13:55:35 I've noticed that I never foul up "sound alike" words when I'm writing an email. Email seems to activate the "I'm writing" part of my brain, whereas IRC activates the "I'm talking" part, and evidently if it sounds right that part of my brain is happy. 13:57:34 Any name suggestions for those words? 13:58:15 I could do (int) and (long), I guess. 13:58:44 (int) would push TOS and move the upper part to TOS; (long) would pop to rdx. 14:03:07 Ok, that's done. But rdx doesn't survive across lines - there's enough code in there that rdx gets munged. So you do have to USE your long immediately after running (long) though. 14:03:15 rdx? this is legacy x86? 14:03:31 That's one of the 64-bit register names. 
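The (int)/(long) idea above can be modeled in a few lines of Python: simulate 64-bit cells, with a scratch "rdx" holding the high half of the most recent product. The real words are x86 primitives, so this is only a behavioral sketch with hypothetical function names.

```python
# Behavioral sketch of the (int)/(long) double-cell idea.
# x86 MUL leaves a 128-bit product split across two registers;
# here "rdx" is just a Python variable holding the high half.

MASK = (1 << 64) - 1

stack = []
rdx = 0                      # high half of the last multiply

def star(a, b):              # models Forth * : returns low cell, hi in rdx
    global rdx
    full = a * b
    rdx = (full >> 64) & MASK
    return full & MASK

def paren_int():             # (int): push the high half as a second cell.
    stack.append(rdx)        # Must run right after * , before rdx gets
                             # clobbered by later code.

def paren_long():            # (long): fold two cells back into one value
    hi = stack.pop()
    lo = stack.pop()
    return (hi << 64) | lo

stack.append(star(2**40, 2**40))   # low cell of the 128-bit product
paren_int()                        # ...and now the high cell too
assert paren_long() == 2**80       # full double-width product recovered
```

The "rdx doesn't survive across lines" caveat shows up here too: anything that reassigns `rdx` between `star` and `paren_int` destroys the high half, which is why a proper double type keeps both cells on the stack instead.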
14:03:48 I'm not sure what you mean by "legacy"? 14:04:26 doing it that way might make it hard to use interrupts if you port to arm 14:04:42 Doing what exactly? 14:04:56 You're right - I have no idea how well that would port. 14:05:19 It is exploiting what's really an idiosyncrasy of the architecture as far as my Forth is concerned. 14:05:50 KipIngram: before the wholesale isa restructuring that happened in 2025 14:06:00 Probably better to have proper double words if I want them so the data is always part of the stack. 14:06:20 Uh... did I fall through a time warp or something? 14:06:36 2025? 14:06:59 shit, please disregard that, dont want another visit by the Time Service 14:07:07 lmao 14:07:55 I haven't really kept careful track of the x86/x64 instruction set. 14:07:59 no, I just detest x86 and its needless convolutedness 14:08:08 Ah, I see. 14:10:05 riscv makes much more sense 14:10:30 Well, I have no particular love for intel's instruction set either. 14:10:52 They've tried to just keep cramming more into it for way too long - it was bound not to end well. 14:10:54 but I think something like the Mill isa is quite better 14:11:57 You talking about this? 14:11:59 http://millcomputing.com/wiki/Instruction_Set 14:13:24 yeah, ibm 390 isa makes much more sense for software whose useful lifetime is expected to be half a century or a decade more 14:13:31 yepp 14:14:44 Does anything actually run that? 14:15:23 but intel lucked into that their 8088 cpu was used in the ibm pc 14:15:48 ibm390 or mill? 14:15:53 Yes, that was quite a boon for Intel. 14:15:56 Mill 14:17:24 not yet afaict but I do not expect to see widespread use of it until after a decade or such, if risc-v hasnt eaten their lunch in the interim 14:19:36 oy, just recalled that I wanted to ask here: what are peeps opinions on logicsim evolution? 14:20:14 That is an interesting outfit. 
14:27:17 what I like about the Mill is the concept of NotAResult "value" 14:27:42 re Mill - looks like they have decimal floating point, so they get my vote 14:30:27 I believe I decided somewhere along the way that for a given number of bits binary floating point has smaller round-off error. 14:31:07 There's a really good document out there on all that... 14:32:03 I choose to use algorithms that do not rely on floating point 14:32:23 This one, I think: 14:32:25 https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html 14:32:59 not all the isa and mcus I target got die or power budget for an fpu 14:33:00 Yes - that's it; it's bookmarked in my browser. 14:36:43 recently-ish I started looking into doing dsp stuff via simple 7400 series devices operating on delta sigma encoded bitstreams 14:37:40 I blame mitxela (look him up on youtube) for that 14:45:42 --- quit: pbaille (Remote host closed the connection) 14:46:17 --- join: pbaille joined #forth 14:46:27 delta sigma is really cool. 14:46:57 I started my career with 7400 series stuff. 14:48:02 Hah - he looks fun. 14:50:14 --- quit: pbaille (Ping timeout: 240 seconds) 14:53:44 for instance to add two signals together use a quad input selector 14:54:04 --- join: pbaille joined #forth 14:54:14 the two select bits are driven by the two signals 14:56:42 input 0 is tied to constant 0, input 3 is tied to constant 1, and inputs 1 and 2 are connected to a bitstream alternating as fast as it can between 0 and 1 values 14:58:16 --- quit: pbaille (Ping timeout: 245 seconds) 14:58:18 and I have not gotten further than that 15:00:10 --- quit: tech_exorcist (Ping timeout: 268 seconds) 15:02:08 --- quit: joe9 (Ping timeout: 240 seconds) 15:10:54 I find that kind of stuff fascinating. 15:11:13 We used delta sigma in the seismic sensors I worked on at my job just before this one. 15:11:57 Delta sigma is of course completely technically "sound," but what you get from it has always felt a bit like black magic to me. 
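The quad-selector adder described above can be simulated in Python to see why it works: when both input bits agree the mux outputs that value, and when they disagree the alternating bitstream contributes an average of 1/2, so the output bit density comes out to the average of the two input densities. A sketch, with hypothetical function names and a simple first-order encoder standing in for real delta-sigma hardware:

```python
def ds_encode(x, n):
    """Crude first-order delta-sigma: n bits whose density ~= x in [0,1]."""
    acc, bits = 0.0, []
    for _ in range(n):
        acc += x
        if acc >= 1.0:
            bits.append(1)
            acc -= 1.0
        else:
            bits.append(0)
    return bits

def mux_add(a_bits, b_bits):
    """Quad-input selector: select bits are the two signals themselves.
    Input 0 is tied to constant 0, input 3 to constant 1, and inputs
    1 and 2 to a fast-alternating 0/1 stream."""
    out, alt = [], 0
    for a, b in zip(a_bits, b_bits):
        sel = (a << 1) | b
        if sel == 0:
            out.append(0)          # both low  -> 0
        elif sel == 3:
            out.append(1)          # both high -> 1
        else:
            out.append(alt)        # disagree  -> alternating stream
            alt ^= 1
    return out

a = ds_encode(0.25, 4000)
b = ds_encode(0.50, 4000)
out = mux_add(a, b)
# Output density is the average of the input densities: (0.25 + 0.5) / 2
assert abs(sum(out) / len(out) - 0.375) < 0.01
```

Note the result is the *average* (a+b)/2 rather than the sum; that halving is inherent to keeping the result inside the 0..1 density range of a single bitstream.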
15:17:35 --- join: pbaille joined #forth 15:21:30 It wasn't too long after I started working that PAL devices started to become common. We still used some 7400 series logic in most cases, but we'd use PALs rather than a fleet of discrete logic chips. 15:22:05 At the time the 22V10 was considered a "luxury" - my boss would gripe at us if we used too many of them. He preferred the less expensive devices. 15:22:22 Then a little further along AMD Mach devices and other things like that started to show up. 15:22:42 But FPGA's weren't really a thing yet at the time - they came a little later. 15:23:15 In fact, I had a sales guy that was trying to sell us parts at one job try to talk me into investing in this start-up called Xilinx. 15:23:19 :-( 15:23:24 I wish he'd been more convincing. 15:31:55 KipIngram, ya you lose a lot of range with decimal but at least you eliminate the rounding errors from converting from base 10 which is useful for some things 15:43:34 Yes, if you have a decimal number to begin with, then you can represent it accurately. But I think that doc I linked explains that the per-operation rounding error from calculations is larger when using base 10 than base 2. 15:43:47 I may be mis-remembering. 15:44:07 But there's a lot of good stuff in that doc; I'll probably peruse it again, since I've got it in a tab now. 16:05:13 --- quit: pbaille (Ping timeout: 265 seconds) 16:18:06 that would make sense for x bits of base 2 compared to x bits of base 10 16:31:03 --- quit: proteanthread (Ping timeout: 250 seconds) 16:31:03 --- quit: rtdos (Ping timeout: 250 seconds) 16:37:25 --- join: pbaille joined #forth 16:41:54 --- quit: pbaille (Ping timeout: 246 seconds) 16:56:20 --- join: pbaille joined #forth 16:56:27 dude! i programmed 22v10s in college 16:57:35 what are those? some sort of GateArrayLogic or ProgramableArrayLogic? 
16:57:35 --- quit: proteus-guy (Ping timeout: 260 seconds) 17:00:33 --- quit: pbaille (Ping timeout: 240 seconds) 17:01:03 gals 17:01:28 we used vhdl 17:01:55 that's the only sort of hdl programming i've done 17:02:06 maybe i should get into that again 17:10:20 --- join: proteus-guy joined #forth 17:30:07 --- join: rtdos joined #forth 17:30:36 --- join: proteanthread joined #forth 17:33:55 --- join: dave0 joined #forth 17:34:23 maw 17:41:19 wam 17:41:38 Yessir - 22V10's rocked. 17:41:55 I did a paper design once of a Forth processor core totally built with 22V10's. 17:42:22 --- join: pbaille joined #forth 17:42:29 Hardware stack, got the instruction set crafted (32 five-bit opcodes) and got the equations for the instruction set to fit in a batch of 22V10's. 17:42:49 But as with so many of my personal projects - never got it built. 17:43:10 I did go so far as buying several tubes of 22V10's from Digikey, though - they're still in the garage somewhere. 17:44:48 Had to put the signals on the right pins - some of them have more OR terms supported than others. 17:45:40 For those of you unfamiliar, the pattern in those devices was that each term could be an AND combination of all of the inputs, with either polarity chosen. But each pin only supported a limited number of such terms for the OR operation. 17:45:47 Like 6-9 in the 22V10. 17:45:54 iirc 17:46:34 --- quit: pbaille (Ping timeout: 240 seconds) 17:50:53 It was a fun time, as companies just kept figuring out how to offer more flexible programmable interconnects for those things. Then gradually you started to be able to have internal state bits, and so on. Eventually it just "blurred into" FPGAs. 
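That AND-OR structure is easy to model: each product term is an AND over inputs (true, complemented, or don't-care), and each output pin ORs together a limited number of such terms. A toy Python model, with hypothetical names and the per-pin term limit left as a parameter (the exact count varies by pin on a real device):

```python
# Toy model of a PAL-style macrocell: a width-limited OR of
# programmable AND (product) terms.

def product(inputs, term):
    """One product term. `term` maps input index -> required value
    (0 = use the complement, 1 = use the true input); unmentioned
    inputs are don't-cares."""
    return all(inputs[i] == v for i, v in term.items())

def macrocell(inputs, terms, max_terms=8):
    """OR together product terms, enforcing the per-pin term limit
    (this is why signals had to be placed on the right pins)."""
    assert len(terms) <= max_terms, "pin doesn't support that many OR terms"
    return any(product(inputs, t) for t in terms)

# XOR of inputs 0 and 1 takes two product terms: a*/b + /a*b
xor_terms = [{0: 1, 1: 0}, {0: 0, 1: 1}]
for a in (0, 1):
    for b in (0, 1):
        assert macrocell((a, b), xor_terms) == (a ^ b)
```

The term-limit assertion mirrors the pin-placement constraint mentioned above: a function needing more product terms than a pin supplies simply can't be fit on that pin.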
18:33:38 --- quit: dave0 (Ping timeout: 245 seconds) 18:38:09 --- join: boru` joined #forth 18:38:11 --- quit: boru (Disconnected by services) 18:38:14 --- nick: boru` -> boru 18:46:57 --- join: pbaille joined #forth 18:52:02 --- quit: pbaille (Ping timeout: 268 seconds) 19:22:14 --- quit: sts-q (Ping timeout: 240 seconds) 19:24:09 --- join: sts-q joined #forth 19:41:10 --- quit: proteus-guy (Ping timeout: 245 seconds) 19:53:43 --- join: proteus-guy joined #forth 21:04:48 --- join: gravicappa joined #forth 21:48:58 --- join: dave0 joined #forth 22:30:48 --- join: pbaille joined #forth 23:08:15 --- quit: proteanthread (Ping timeout: 245 seconds) 23:08:16 --- quit: rtdos (Ping timeout: 245 seconds) 23:34:05 --- quit: pbaille (Ping timeout: 245 seconds) 23:39:52 --- quit: proteus-guy (Ping timeout: 252 seconds) 23:52:39 --- join: proteus-guy joined #forth 23:59:59 --- log: ended forth/21.05.15