00:00:00 --- log: started forth/10.03.17
00:02:56 --- join: proteusguy (~proteusgu@zeppelin.proteus-tech.com) joined #forth
00:55:13 --- quit: mathrick (Remote host closed the connection)
00:57:02 --- join: mathrick (~mathrick@users177.kollegienet.dk) joined #forth
01:39:17 --- quit: skas (Quit: Leaving)
03:01:08 --- part: TR2N left #forth
03:13:48 --- join: kar8nga (~kar8nga@jol13-1-82-66-176-74.fbx.proxad.net) joined #forth
03:28:33 --- join: flash (~flash@222.131.175.137) joined #forth
03:30:10 --- quit: foxes (Ping timeout: 276 seconds)
04:13:35 --- quit: kar8nga (Remote host closed the connection)
04:49:53 --- quit: nighty^ (Read error: Connection reset by peer)
04:56:41 --- join: kar8nga (~kar8nga@jol13-1-82-66-176-74.fbx.proxad.net) joined #forth
06:09:38 --- quit: Deformative (Ping timeout: 246 seconds)
06:26:49 --- join: info (~lol@84.22.56.34) joined #forth
06:27:17 --- quit: info ()
06:47:55 --- join: nighty^ (~nighty@x122091.ppp.asahi-net.or.jp) joined #forth
07:05:21 --- quit: |dinya_| (Quit: Smile!.. tommorow will be worse :) (c) Murphy)
07:34:06 Hmmm. I'm working on the outer interpreter stuff for my FPGA processor. Still using an emulator, but it does involve actual machine code.
07:34:30 Now that I'm faced with using them, I don't really like the A and B address register model of the C18.
07:35:05 I'm constantly having to fret over what's in them: do I need to save / restore, etc.
07:35:47 And you *can't* save B, since you can't read it.
07:36:11 Which really means you can't use it in the outer interpreter if you want to be able to test snips of code that use it.
07:36:30 Now you understand why the classic approach is more widespread.
07:36:42 Yes, I like the classic approach.
07:37:06 It looks like these registers are good really for just one thing - high-speed moves between a memory block and a port.
07:37:21 But they don't "replace" the traditional stack-based @ and !.
07:38:01 The interpreter doesn't have to be terribly high-performance; I find myself tempted to create fully "innocent" @ and ! words.
07:38:25 : @ A@ PUSH A! @A POP A! ;
07:38:27 etc.
07:38:32 Yuck, but it would work.
07:40:25 Or I could give up the "non-auto-increment" words that work with A (i.e., decide that it will be used for auto-increment applications) and use the freed-up opcodes to do traditional ! and @. My hardware does support routing the stack top to the memory address bus.
08:11:06 --- join: Deformative (~joe@bursley-185022.reshall.umich.edu) joined #forth
08:18:19 ASau: I see now. The C18 instruction set looks well-optimized for what the processor is designed for - running small, more-or-less dedicated snips of code in a distributed computing environment. It doesn't look as good for a situation where you're expected to preserve your calling hierarchy's environment.
08:18:40 The docs also talk about "programming with abandon," where you don't worry about dropping your stuff from the stack because there is no stack overflow.
08:18:50 Once again, fine if you don't have to worry about your callers, not so fine if you do.
08:19:18 This isn't called "distributed."
08:19:29 Ok, parallel, or whatever.
08:19:42 Many processors, each of which is running a tight, focused bit of code.
08:19:47 It is a systolic array, which is a very limited class of parallel hardware.
08:20:05 Ok - let's not argue terminology, please.
08:20:13 I know where it would be nice to have such hardware,
08:20:24 but this _requires_ that it provides FPN operations.
08:20:35 Where would you like it?
08:21:05 Most parallel software is solving linear systems. :)
08:21:41 It is FEM, CFD, or QC.
08:21:47 Well... MD sometimes.
08:22:01 DEM is popular too.
08:22:06 Ok. I'm familiar enough with those things. I've written some FEM codes in my distant past.
08:22:14 For solving electromagnetic field problems.
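The "innocent" @ defined at 07:38:25 (`: @ A@ PUSH A! @A POP A! ;`) can be sketched in a toy emulator. This is Python purely for illustration; the `Machine` class and method names are hypothetical, not the actual emulator discussed here.

```python
# Toy machine state: data stack, return stack, A register, word memory.
# (All names hypothetical -- a sketch, not the real emulator.)
class Machine:
    def __init__(self, mem):
        self.mem = mem   # word-addressed memory
        self.ds = []     # data stack (top of stack = end of list)
        self.rs = []     # return stack
        self.a = 0       # A address register

    # C18-style primitives used by the word
    def a_fetch(self): self.ds.append(self.a)            # a@ : push A
    def a_store(self): self.a = self.ds.pop()            # a! : pop into A
    def push(self):    self.rs.append(self.ds.pop())     # push : DS -> RS
    def pop(self):     self.ds.append(self.rs.pop())     # pop  : RS -> DS
    def fetch_a(self): self.ds.append(self.mem[self.a])  # @a : fetch via A

    # : @   a@ push  a!  @a  pop a! ;
    # Fetch from the address on the stack while leaving A undisturbed.
    def safe_fetch(self):
        self.a_fetch(); self.push()   # save caller's A on the return stack
        self.a_store()                # A <- address from the data stack
        self.fetch_a()                # read the cell A points at
        self.pop(); self.a_store()    # restore caller's A
```

With `mem = [10, 20, 30]`, A = 2, and address 1 on the stack, `safe_fetch` leaves 20 on the stack and A still 2 - "yuck" (two extra stack round-trips per fetch), but the caller's A survives.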
08:22:48 I did my PhD work at a university research center where we studied exotic motor / generator stuff, for pulsed operation.
08:24:04 Are there array architectures that do well with sparse linear systems?
08:25:05 Anyway, I'm reconsidering this A/B approach for my purposes. I may play with arrays from time to time, just because I can, but generally I will have one of these tucked over in the corner of the FPGA, supporting a fairly substantial application program.
08:25:36 I need for it to be easy for a subroutine to do its job and return, without fouling up whoever called it.
08:25:58 I don't know about specifically sparse systems,
08:26:06 And if that requires a bunch of overhead to save and restore resources, then the point of having those special resources in the first place is destroyed.
08:26:16 it may be that it would be easier to solve at least dense blocks.
08:28:27 That would be fun to work on - an array machine that excelled at sparse systems. I imagine there is a good solution, but it's not at all clear to me what the optimum hardware would look like.
08:29:04 If I get a position at Uni Basel, I may tell you some ideas.
08:29:28 And on my other issue, the A / B register architecture takes a *lot* of opcodes. Six of them, in fact, all just related to reading and writing memory.
08:32:01 --- join: qFox (~C00K13S@5356B263.cable.casema.nl) joined #forth
08:32:44 What sort of position are you pursuing?
08:35:55 Oh, I failed to count two. *Eight* opcodes. 25% of all opcodes relate to the A and B registers.
08:36:32 Nine.
08:36:46 God, I can't count today. Only halfway into my first cup of coffee.
08:39:41 !b, !a, !a+, @b, @a, @a+, b!, a!, a@
08:40:01 Then there are @p+ and !p+, but those really serve an entirely different purpose.
08:45:52 Ok - decision. Our applications at work do, in fact, involve moving data from i/o devices to memory. So the A/B registers will prove useful in that sense.
08:45:58 I don't want to rework the processor at this point.
08:46:02 It's done and working.
08:46:37 I will write "safe" @ and ! words for the outer interpreter, but application code will be responsible for doing whatever it thinks is appropriate.
08:57:49 --- quit: Deformative (Ping timeout: 264 seconds)
09:14:10 --- join: Deformative (~joe@67-194-32-211.wireless.umnet.umich.edu) joined #forth
09:17:28 --- join: alex4nder (~alexander@68-29-150-82.pools.spcsdns.net) joined #forth
09:17:30 hey
09:22:46 o/
09:38:09 --- join: Azure_Ag (~azure@electric.azureprime.com) joined #forth
09:38:22 sup Deformative?
09:38:41 Not much, trying to get some things worked out.
09:38:48 Emailing my advisor and stuff.
09:44:10 You have regular interaction with an advisor as an undergraduate? I "had an advisor" when I was an undergrad, but I met with him once at the beginning of the program and then never saw him again.
09:44:27 Of course there was no email then, so these days I guess they can cover a lot more ground.
09:45:02 I had three advisors.
09:45:15 One was the general 'academic advisor' who I met with once and then ignored.
09:45:29 And then I had two concentration advisors who helped me pick classes and such.
09:46:24 We were given a catalog at the beginning that described the course work. It was all either 1) core (all EEs took those), 2) concentration (once you picked a concentration you had to take those), and 3) two electives, which could be literally anything with a high enough number.
09:46:34 So there really wasn't a lot of picking to do.
09:46:58 You just paid attention to the prerequisite issues and that was that, but they seemed to think we could do that on our own.
09:47:48 And if you fouled up your prerequisites and had to go an extra semester or something, it was regarded as your own foolishness.
09:47:55 After all, you had the catalog...
09:48:20 And the catalog that applied was the one you entered under, so even if they made changes they applied only to new incoming students.
09:52:07 What Azure_Ag said.
09:52:49 I met with my advisor today to discuss course selection and try to get me access to the advanced computing cluster.
09:53:16 Turns out she is teaching my computer organization class next term.
09:53:20 So that's cool, I suppose.
09:53:42 :-) I tried that too when I was a freshman. I wanted to work in the terminal room instead of with punch cards. "No. Dismissed."
09:53:47 --- quit: kar8nga (Read error: Connection reset by peer)
09:54:39 Heh. :)
09:55:51 Well... Buy an 8-core board and 8 CPUs for it.
09:56:21 Then you have a cluster that outperforms old supercomputers by more than 100 times.
09:57:14 What's the name of that special Linux distro that has the cluster computing kernel in it?
09:57:15 ASau: This cluster is: over 3,500 Opteron cores with an average of 2 GB RAM each...
09:57:37 KipIngram: who cares?
09:57:43 It will do its stuff on either a "real" networked cluster or on a multi-core machine, won't it?
09:57:45 You can build the kernel yourself.
09:57:57 I guess that would be educational.
09:58:12 I want to experiment with path tracing and things.
09:58:19 I like stuff that will run out of the box, though.
09:58:21 Deformative: your job may be restricted to 2 CPUs at a time.
09:58:52 KipIngram: there's no such stuff, if you care about performance.
09:58:59 ASau: Unlikely.
09:59:14 E.g. you have to compile GAUSSIAN and GAMESS yourself.
09:59:46 Actually, they upgraded the system recently.
09:59:49 4888 cores now.
09:59:58 PelicanHPC.
10:00:12 It is currently floating around 60% busy.
10:00:25 http://pareto.uab.es/mcreel/PelicanHPC/
10:00:36 That's one - I've seen similar efforts at other places.
10:00:47 Ugh, I also need to get an override from my professor so that I can skip into operating systems.
10:00:56 And I need to get a new keycard...
10:01:03 So much nonsense to do today.
10:01:48 ASau: I do imagine that having a system that's general enough to run on any cluster would certainly cost performance vs. a system that was tuned to your cluster.
10:01:53 Hard to see how it wouldn't.
10:02:32 KipIngram: See also: dragonflybsd
10:02:36 Well, bbl, class is over.
10:03:08 KipIngram: I can save you some time: don't look at dragonflybsd.
10:03:33 There's nothing interesting except fantasies.
10:03:55 KipIngram: actually, the base system is of less importance.
10:04:07 Well, I'm not actually about to do this, anyway. I just find "plug and play clusters" an interesting concept.
10:04:09 You don't usually use a cluster for I/O and data processing.
10:04:34 Well... Unless you're in a commercial setting.
10:04:50 Most of the time is spent on CPU-intensive tasks.
10:04:58 And there you need your libraries tuned.
10:05:07 E.g. ATLAS.
10:06:58 --- quit: Deformative (Ping timeout: 246 seconds)
10:11:28 Hmmm. I really wish I had a "pc -> return stack" instruction. Then I could do that just before a loop and do an unconditional jump via the RET instruction.
10:12:00 And instead of JZ I like the idea of a "skip if zero" or "skip if not zero" instruction; then I could use a similar trick for conditional branches.
10:14:02 Plus then I could conditionalize any instruction, not just jumps.
10:14:45 ARM style?
10:14:46 --- join: Quartus` (~Quartus`@74.198.8.57) joined #forth
10:18:03 --- quit: gogonkt (Ping timeout: 256 seconds)
10:18:48 --- join: gogonkt (~info@116.5.49.13) joined #forth
10:30:45 --- quit: nighty^ (Quit: Disappears in a puff of smoke)
10:36:50 --- join: Maki (~Maki@dynamic-78-30-167-37.adsl.eunet.rs) joined #forth
10:54:07 --- quit: mathrick (Ping timeout: 258 seconds)
10:58:43 --- join: forther (~forther@207.47.34.100.static.nextweb.net) joined #forth
10:59:16 hi
10:59:32 --- join: kar8nga (~kar8nga@jol13-1-82-66-176-74.fbx.proxad.net) joined #forth
11:00:40 --- quit: Quartus` (Ping timeout: 265 seconds)
11:01:58 I wouldn't call the s40 a systolic array.
11:02:22 First of all, it has no single clock,
11:02:52 so it's rather a wavefront array.
11:02:54 Yes, I understand that it isn't.
11:03:05 But it is a bit similar by design to that.
11:03:18 Also, it can run non-systolic kinds of apps.
11:04:15 In fact, none of our stand-alone applications are pure "systolic".
11:04:56 But, yes, it is perfect for "systolic" things too.
11:07:47 --- quit: Azure_Ag (Quit: Lost terminal)
11:23:51 --- join: segher (~segher@84-105-60-153.cable.quicknet.nl) joined #forth
11:25:21 --- join: mathrick (~mathrick@users177.kollegienet.dk) joined #forth
11:31:16 --- join: ygrek (debian-tor@gateway/tor-sasl/ygrek) joined #forth
11:42:41 --- quit: forther (Quit: Leaving)
11:50:27 --- quit: ygrek (Ping timeout: 245 seconds)
11:52:31 ASau: I guess so - I'm not intimately familiar with the ARM instruction set. If I implement a "skip on zero" instruction, for instance, it will just set a "skip flag" and the next instruction won't function if the flag is set, but all instructions will clear the flag.
11:54:57 --- quit: alex4nder (Ping timeout: 256 seconds)
11:55:40 KipIngram: ARM pretty much lets you add a bunch of conditionals to all instructions.
11:56:47 I guess I did have some awareness of that. This instruction could prefix any other instruction.
11:56:51 but not in the way you describe it. The conditional is encoded in the opcode (:
11:57:05 cools.
11:57:12 ARM does not do the clearing you mention, though.
11:57:52 Hmmm. I guess I could have an explicit clear instruction. That might be useful.
11:59:00 It's useful on ARM anyway. Saves you some "if ... then branch here and then head back and continue", because you can just put the 3-4 instructions right in there with the conditional :)
11:59:13 Right.
11:59:33 I only have one tremendously handy opcode to steal, though.
11:59:41 Oh ya?
11:59:54 Yeah. I emulated the C18 instruction set pretty completely.
12:00:01 cools.
12:00:16 I didn't need an explicit call opcode because I use the MSB of a cell to specify "call" or "three opcodes."
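The skip-flag scheme described at 11:52:31 - set a flag, the next instruction doesn't function if the flag is set, and every instruction clears the flag - can be sketched as an emulator inner loop. Python for illustration only; the opcode names (`skip0`, `lit`, `add`) are invented for the sketch.

```python
# Toy inner interpreter with a skip flag (hypothetical opcodes).
# "skip0" sets the flag when the top of stack is zero; a set flag
# swallows exactly one instruction, and every instruction -- executed
# or swallowed -- clears the flag.
def run(program, ds):
    pc, skip = 0, False
    while pc < len(program):
        op, *arg = program[pc]
        pc += 1
        if skip:
            skip = False              # any instruction clears the flag...
            continue                  # ...and this one does not execute
        if op == "skip0":             # conditionalize the next instruction
            skip = (ds[-1] == 0)
        elif op == "lit":
            ds.append(arg[0])
        elif op == "add":
            ds.append(ds.pop() + ds.pop())
    return ds
```

With 0 on top of the stack, `skip0` suppresses the following `lit 99`; with a nonzero top, the `lit 99` runs normally - so any single instruction, not just a jump, gets conditionalized.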
12:00:34 I spent that one on separate add and add-with-carry, so I didn't need that "extended mode" business.
12:00:48 I'll probably take !p+ for this.
12:01:24 I don't know.
12:01:36 I'm thinking that maybe for a Forth-ish environment a skip flag like you describe might be better than the ARM way.
12:01:39 or maybe not.
12:01:40 I wanted to turn !p+ into a "push p+".
12:01:49 cools.
12:02:35 That way I could return-stack the pc to set a jump point and then "ret" to jump to it.
12:02:56 sounds good to me (:
12:03:01 what is stopping ya?
12:03:25 Nothing really. It's just that the darn thing is completely done and working (at least in emulation) and changing it will involve work.
12:03:31 Guess that's what life's for, though.
12:03:33 aaaah.
12:03:43 mebbee. for v2.0 :)
12:03:57 Were you around earlier when I was grousing about the A / B register thing?
12:04:05 nope.
12:04:17 I could scroll back (:
12:04:19 I'm not liking it much now that I have it working and started trying to code some stuff.
12:04:45 ugh. what a sticky spot to be in :)
12:04:49 Because when you use them you destroy their content. Unless you save it.
12:05:12 yup
12:05:15 And then I realized that *9* of my 32 opcodes relate to A and B. Wow, that's a lot.
12:06:00 hahaha yes. 25%+
12:06:09 In other words, if I abandoned the A/B architecture I would eliminate the "do I save/restore them" problem *and* free up a bunch of instructions for other stuff.
12:06:28 path dependency :)
12:09:03 I think I'll change some things. I'll be happier with it if I do, I believe.
12:09:12 Earlier I was feeling lazy.
12:09:34 Are you on some deadline?
12:09:39 No, not at all.
12:09:47 oh ya def change it up then :)
12:10:16 This is just building something for the future.
12:14:53 I do like the "autoincrement" feature on A, and the way that A and B support really fast port-to/from-memory transfers.
12:15:22 I decided earlier that I'd keep A and B but never presume that they were preserved.
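Combining the skip flag with the "push the pc, then ret to it" idea from 12:02:35 gives loops without a dedicated jump opcode: record the loop-top pc on the return stack once, and "ret" jumps back to it until a skip cancels the ret. Again a hypothetical Python sketch; the opcode names (`pushpc`, `ret`, `skip0`, `rdrop`, `dec`, `lit`) are invented, not the real instruction set.

```python
# Toy loop using "pushpc" (pc -> return stack) and "ret" (jump to the
# saved pc). A sketch of the trick only, with invented opcode names.
def run(program, ds):
    rs, pc, skip = [], 0, False
    while pc < len(program):
        op, *arg = program[pc]
        pc += 1
        if skip:
            skip = False            # every instruction clears the skip flag
            continue
        if op == "pushpc":          # mark the loop top on the return stack
            rs.append(pc)
        elif op == "ret":           # unconditional jump back to the mark
            pc = rs[-1]
        elif op == "skip0":         # exit path: skip the ret when TOS = 0
            skip = (ds[-1] == 0)
        elif op == "rdrop":         # discard the mark once the loop is done
            rs.pop()
        elif op == "dec":
            ds.append(ds.pop() - 1)
        elif op == "lit":
            ds.append(arg[0])
    return ds

# Count 3 down to 0: "ret" re-runs "dec" until skip0 cancels it.
countdown = [("lit", 3), ("pushpc",), ("dec",), ("skip0",), ("ret",), ("rdrop",)]
```

The loop body needs no branch target encoded anywhere - the target lives on the return stack, which is exactly what makes a "pc -> return stack" opcode attractive.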
12:15:39 Plus it does help my hardware a bit to have a stable place for the memory address to "be."
12:16:40 I think I will keep A and B, keep A! and B!, and keep !A+, !B, @A+, and @B. I'll drop the non-auto-increment-on-A operations and drop the ability to read A.
12:17:00 That frees three opcodes.
12:17:37 !p+ will become "push p", and I can use two of the three new opcodes for conditional support.
12:18:57 And it won't be too hard to make the changes.
12:19:08 excellent.
12:19:15 I need some yoghurt then.
12:26:03 --- join: forther (~cf2f2264@gateway/web/freenode/x-ddqjogztjpgqpwer) joined #forth
12:36:11 --- join: terminals (~kiss@189.75.14.74) joined #forth
12:38:40 Why do you need non-incrementing !b @b?
12:40:53 I think that Chuck had it in mind that A would be used for memory and B for an i/o port.
12:41:06 Talking A/B, it would make sense to restrict it to a! b! !a+ @a+ !b+ @b+.
12:41:52 !a+ doesn't increment when on ports.
12:42:12 !b+ could be done the same way.
12:42:18 Yes - that's true. I didn't want to have to bother with the hardware deciding whether to increment or not.
12:42:49 Good point, though.
12:42:58 With a single core, ports are not important.
12:43:19 I still will have hardware that I talk to. Analog-to-digital converters and so forth.
12:43:36 right
12:43:48 ok then
12:44:17 So you're proposing that a and b be totally symmetrical in terms of instructions.
12:44:28 With address-controlled increment.
12:44:29 yes
12:44:55 Ok; let me give that some thought. Thanks. :-)
12:45:33 I like a@ too,
12:46:02 especially for a restricted c18 environment.
12:46:53 btw
12:47:10 Elaborate - I had talked myself out of it.
12:47:17 *+ leaves the result in A.
12:47:48 I mean the lower part of the product.
12:48:57 Really? I saw that in some of the docs I read, but the one I considered "the latest" didn't seem to imply that.
12:49:08 And swap is shorter with a@.
12:49:29 Very true.
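The "address-controlled increment" proposed at 12:44:28 - one symmetrical opcode set where the increment happens only for memory, so "!a+ doesn't increment when on ports" - might look like this in emulation. The address map here (ports at and above a `PORT_BASE` boundary) is an assumption made up for the sketch, not anything from the chat.

```python
# Hypothetical address map: RAM below PORT_BASE, i/o ports at or above it.
PORT_BASE = 0x8000

def store_a_plus(state, mem, ports, value):
    """Sketch of !a+ with address-controlled increment."""
    a = state["a"]
    if a >= PORT_BASE:
        ports[a - PORT_BASE] = value  # port write: A stays put
    else:
        mem[a] = value                # memory write...
        state["a"] = a + 1            # ...auto-increments A
```

Two stores to RAM land in consecutive cells; two stores aimed at a port hit the same port twice - so one opcode covers both block moves and repeated i/o, and the hardware decides to increment purely from the address decode.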
12:49:50 Modern c18 uses A as the second multiplicand and as the lower part of the up-to-36-bit result.
12:50:03 --- quit: kar8nga (Remote host closed the connection)
12:50:04 Was there an early version that just used S and T?
12:50:23 yes
12:50:34 Ah. I must be using an antiquated document, then.
12:50:34 s24 had it.
12:50:50 Got it.
12:53:19 Geez. I may be better off just wiping my control logic clean and rebuilding it, rather than trying to "adjust" it.
12:53:24 How annoying.
13:15:32 --- join: alex4nder (~alexander@70-7-168-108.pools.spcsdns.net) joined #forth
13:25:37 --- quit: Maki (Quit: Leaving)
13:38:09 --- join: Deformative (~joe@bursley-185022.reshall.umich.edu) joined #forth
13:44:57 --- quit: ASau (Read error: Connection reset by peer)
13:50:34 --- quit: alex4nder (Ping timeout: 248 seconds)
14:01:53 --- quit: forther (Quit: Page closed)
14:06:33 --- quit: qFox (Quit: Time for cookies!)
14:27:29 --- join: kar8nga (~kar8nga@jol13-1-82-66-176-74.fbx.proxad.net) joined #forth
14:28:35 --- quit: Deformative (Read error: Operation timed out)
14:30:41 --- join: Deformative (~joe@67-194-24-60.wireless.umnet.umich.edu) joined #forth
14:49:22 --- quit: Deformative (Ping timeout: 264 seconds)
14:51:24 --- join: crc_ (~charlesch@71.23.210.149) joined #forth
14:54:34 --- quit: crc (Ping timeout: 248 seconds)
14:56:01 --- quit: kar8nga (Remote host closed the connection)
15:00:18 --- join: Deformative (~joe@67-194-49-71.wireless.umnet.umich.edu) joined #forth
15:13:28 --- quit: proteusguy (Ping timeout: 276 seconds)
15:25:53 --- join: proteusguy (~proteusgu@zeppelin.proteus-tech.com) joined #forth
15:43:39 --- join: Pusdesris (~joe@bursley-185022.reshall.umich.edu) joined #forth
15:44:57 --- quit: Deformative (Ping timeout: 260 seconds)
15:53:18 --- join: alex4nder (~alexander@68-27-162-228.pools.spcsdns.net) joined #forth
16:01:09 --- quit: terminals (K-Lined)
16:30:58 --- join: skas (~skas@eth488.act.adsl.internode.on.net) joined #forth
16:38:55 --- nick: Pusdesris -> Deformative
17:16:08 --- quit: grub_booter (Read error: Operation timed out)
17:20:21 --- join: grub_booter (~charlie@2002:54c5:19d5:0:21a:a0ff:fedb:cda0) joined #forth
17:36:00 --- quit: alex4nder (Ping timeout: 276 seconds)
18:08:03 --- join: nighty^ (~nighty@210.188.173.245) joined #forth
18:44:16 --- join: alex4nder (~alexander@polaris.andern.org) joined #forth
18:44:35 hey
19:08:49 --- quit: nighty^ (Remote host closed the connection)
19:19:27 --- nick: crc_ -> crc
19:28:56 --- quit: crc (Ping timeout: 268 seconds)
19:30:59 --- join: crc (~charlesch@71.23.210.149) joined #forth
19:47:33 morning
20:00:30 evening here
20:08:24 11 am here
20:09:05 11pm here :)
20:09:56 14:00 here :-]
20:09:57 :D
21:24:52 --- quit: grub_booter (Read error: Operation timed out)
21:25:17 --- join: grub_booter (~charlie@2002:54c5:19d5:0:21a:a0ff:fedb:cda0) joined #forth
21:31:24 Hi.
21:36:21 --- quit: proteusguy (Ping timeout: 260 seconds)
21:37:58 HI
21:38:06 Morning
21:49:19 --- join: proteusguy (~proteusgu@zeppelin.proteus-tech.com) joined #forth
22:10:46 --- join: nighty^ (~nighty@210.188.173.245) joined #forth
22:22:40 --- quit: malyn (*.net *.split)
22:22:40 --- quit: crc (*.net *.split)
22:22:40 --- quit: cataska (*.net *.split)
22:22:41 --- quit: yiyus (*.net *.split)
22:22:41 --- quit: schme (*.net *.split)
22:22:41 --- quit: grub_booter (*.net *.split)
22:22:41 --- quit: crcx (*.net *.split)
22:22:42 --- quit: flash (*.net *.split)
22:22:42 --- quit: saper (*.net *.split)
22:22:43 --- quit: nighty^ (*.net *.split)
22:22:43 --- quit: skas (*.net *.split)
22:22:43 --- quit: KipIngram (*.net *.split)
22:22:43 --- quit: nighty_ (*.net *.split)
22:22:44 --- quit: TreyB (*.net *.split)
22:22:44 --- quit: Snoopy_1611 (*.net *.split)
22:22:44 --- quit: nottwo (*.net *.split)
22:22:44 --- quit: madgarden (*.net *.split)
22:22:44 --- quit: proteusguy (*.net *.split)
22:22:45 --- quit: mathrick (*.net *.split)
22:22:45 --- quit: tmitt (*.net *.split)
22:22:45 --- quit: gnomon (*.net *.split)
22:22:46 --- quit: gogonkt (*.net *.split)
22:22:46 --- quit: maht (*.net *.split)
22:22:46 --- quit: DavidC99 (*.net *.split)
23:59:59 --- log: ended forth/10.03.17