00:00:00 --- log: started forth/09.11.08 00:04:57
07:50:08 Why do they try so hard to split the FP stack from the regular one?
07:50:27 No modern CPU needs it.
07:51:22 Single-precision IEEE 754 FPNs fit in a single 32-bit cell,
07:51:49 and a 64-bit CPU deals with double-precision numbers with the same ease!
08:15:25 --- quit: ASau (Read error: 54 (Connection reset by peer))
08:15:53 --- join: ASau (n=user@83.69.227.32) joined #forth
08:38:14 --- join: tathi (n=josh@dsl-216-227-91-166.fairpoint.net) joined #forth
08:47:54 --- quit: Judofyr (Remote closed the connection)
08:49:58 --- join: gogonkt_ (n=info@218.13.53.137) joined #forth
08:50:17 I agree with you on the FP stack point, ASau. Probably a holdover from history, when integer cells were smaller.
08:52:42 I could also see someone argue that floating-point numbers are *never* addresses, whereas integers sometimes are. By keeping the FP stack separate you're free to use the normal stack for addressing calculations without concern for where your data is on the other stack.
08:53:20 So it seems like a bit of a tradeoff between practicality and elegant consistency.
08:53:45 There is one point: the NPX simulates a separate stack, and keeping FPNs there may lead to more efficient code.
08:53:46 My first instinct is always toward elegance, but I will pull the practicality trigger from time to time if it seems to make sense to me.
08:54:06 Another point of practicality.
08:54:15 NPX?
08:54:18 But you can have other code that suffers from this distinction as well.
08:54:39 "Numeric Processing eXtension."
08:54:54 "C" is for damn "interfaCe".
08:55:25 Or whatever Intel calls it.
08:56:07 OTOH, you still have to deal with the _very_ limited depth of that internal stack.
08:56:41 Strictly speaking you almost never deal with addresses.
08:56:46 They're not interesting.
08:57:11 If you had an array of floating-point numbers in memory you would be indexing into it, so yes, you would deal with addresses.
08:57:25 Objects placed at those addresses most frequently have quite fine internal structure.
08:57:38 The thing that I have run into is that it's hard to write portable code with floating-point numbers on the data stack, because you can't assume they're the same size as a cell.
08:57:50 But I suppose that could be a lack of skill on my part.
08:57:52 Everything about processing data in memory requires that you deal with addresses - in Forth you do so explicitly. Other languages hide it from you to a greater extent.
08:58:10 You cannot address into an FPN anyway.
08:58:20 tathi: yes, I agree that that requires some skill.
08:58:34 So does stack juggling in general.
08:59:38 But ASau's point was strongest in the case where you have a 32-bit cell and floats are either 32 or 64. That's a huge simplification. If that didn't hold I don't think I'd want to even consider using one stack for both.
08:59:55 But if it does hold, you just need to know whether your float is single or double.
09:00:32 Even if you insist on a separate FPN stack, you should provide some symmetry, so that you can treat FPNs as regular numbers.
09:01:43 The way it is done in the current standard isn't acceptable even if you require a separate stack.
09:02:21 ASau: What is it that you don't like about having a separate FP stack?
09:02:32 With a non-split data stack, you can measure the size of an FPN at compile time.
09:02:51 tathi: you can't do complex things with FPNs.
09:03:05 Like what?
09:03:23 I frequently meet tasks where I need up to 5 parameters,
09:03:45 some of which are, strictly speaking, complex numbers,
09:03:51 but I can deal with real ones for now.
09:04:22 And sometimes I need to store an FPN temporarily somewhere else.
09:04:35 For integer values I have the "return" stack.
09:05:21 For real values I have _only_main_memory_, if I don't break the standard.
09:06:07 This means that I do: ALLOCATE THROW ... F! ... F@ ... FREE THROW
09:06:11 Ah, right.
09:06:38 While I could do "F>R ... FR>"
09:07:51 Using a dynamic memory allocator is overkill for doing stack-frame allocation.
09:07:53 --- join: GeDaMo (n=gedamo@212.225.108.57) joined #forth
09:08:02 Yes, I understand now.
09:08:39 Unfortunately alignment issues make F>R and FR> tricky on some platforms.
09:08:45 I wish people would implement them anyway...
09:09:09 Well... There _is_ a way to do those, if you have a memory-addressable data stack.
09:09:19 You understand that trickery.
09:09:26 ;)
09:10:09 Actually the whole thing with the return stack is quite unclear.
09:10:16 --- quit: gogonkt (Read error: 110 (Connection timed out))
09:10:17 right
09:10:41 Sometimes it's really annoying that the standard supports such a wide range of systems.
09:10:45 :)
09:10:48 I'd like to have something to push data _under_ the return address.
09:10:59 Now *that* would be neat.
09:11:31 Even _this_ restriction is quite tolerable.
09:12:11 I'm getting to the point where I just code for gforth, as none of the other free systems seem to be completely standard.
09:12:23 Yes.
09:12:28 Same here.
09:12:48 I'm trying to break out of this jail, but to no avail so far.
09:13:19 I've been sending patches to PFE's maintainer as I find things.
09:13:30 I need to find a week or two and move to FICL or pForth.
09:13:43 Yeah, FICL isn't too bad.
09:13:45 Do they fix it?
09:13:53 Sometimes
09:14:10 Actually, Sadler isn't completely passive.
09:14:13 I'm not sure he has incorporated my patch to fix :NONAME
09:14:21 Who's Sadler? FICL?
09:14:25 Yes.
09:14:36 Maybe we could cooperate and revive FICL.
09:14:57 --- join: kar8nga (n=kar8nga@jol13-1-82-66-176-74.fbx.proxad.net) joined #forth
09:15:26 I have my own build system, which should support a cross-compiler OOB.
09:15:29 Or almost OOB.
09:15:43 I had to override FICL's native one.
09:15:53 OOB?
09:15:56 Out of the box.
09:15:57 But isn't the idea of Forth that you define the things you want? So to push under the return address you : >Ru  R> SWAP >R >R ; ?
09:15:58 ah
09:16:25 KipIngram: Sure, and normally you do just that.
09:16:31 KipIngram: strictly speaking, you cannot assume that the return point is encoded in a single cell.
09:16:43 ASau is trying to write code that runs out of the box everywhere, I believe.
09:16:50 Or most places, at least.
09:17:05 And the recent trend in our "standards committee" is to support those strange ones.
09:17:20 I don't think I'm as hot to stay standard as you guys are. I always have a target system, usually embedded, and my interest is in the product I'm designing at the moment. I reuse techniques, but not code.
09:17:42 tathi: I have surprisingly positive experience with _correct_ code.
09:17:56 That's great, but it will almost certainly be less efficient than a solution carefully crafted for the system at hand.
09:18:33 Of course I guess one could argue that I should just go all the way down my road and write everything in assembly, but I'm not quite that desperate for performance.
09:18:33 KipIngram: I don't actually care that much; I just find this sort of thing entertaining.
09:18:55 I like having an integrated assembler so I can use code if I want. Which automatically makes me target-specific.
09:19:08 ASau: with correct code actually working on various systems?
09:19:16 tathi: yes. :)
09:19:51 That's fair enough. Most of the times I've run into trouble, I was trying to write a cross-compiler, so I was poking around in dark corners of the standard. :)
09:19:53 --- quit: gnomon (Read error: 60 (Operation timed out))
09:21:12 My recent experience shows that even by applying mechanical changes to the code just to silence lint or cc warnings, you can fix bugs in it.
09:21:40 The most surprising part is that it damn well works after that "outrage."
09:21:43 The code to FICL?
09:21:52 That was "C".
09:22:51 Though I worked for a rather long time like a type-inference engine. %|
09:22:58 --- join: gnomon (n=gnomon@CPE0022158a8221-CM000f9f776f96.cpe.net.cable.rogers.com) joined #forth
09:23:07 See, my profession is really electrical engineering, not computer science. I write code to run on circuitry that I've designed. I don't expect to design one amplifier and have it fit every application I ever encounter - I expect to optimize the circuit design in each case. So optimizing the code (to some extent) for each application is a natural mindset for me.
09:23:57 yeah, I usually take that approach to Forth.
09:24:01 For me the circuitry and the code are just facets of an integrated solution.
09:24:03 KipIngram: my education is research chemist, my profession is programmer, and I damn well fix code to work on circuitry designed by some other person.
09:24:39 And I tend to call that person an idiot, because the device doesn't work as described in the documentation.
09:24:39 But for desktop stuff, sometimes it would be nice if I could write code that would work on other machines than just mine.
09:24:52 Not always.
09:24:56 Or be able to occasionally reuse other people's code, even. :)
09:25:38 I design programs to be easy to fix in the future, when some requirement changes.
09:26:58 It is easier to change a constant than to rewrite the whole module.
09:33:31 Yes, I agree with all of that, and I do try to think ahead when designing. But normally we have a hard delivery date for the tool, so I keep my focus on that.
09:35:40 Precisely the thing I have always admired about Forth is that it offers an "intimate" relationship with the underlying hardware. I understand the drive to standardize, but I really don't like anything that tries to tell me how I should interface with my hardware, or restricts my ability to do so as I please.
09:36:22 One of the problems with this non-standard hardware is that it is hard to use.
09:36:56 I have a device that simultaneously uses serial, parallel and Ethernet peripherals.
09:37:22 And I'm not quite sure how that watchdog exactly works;
09:37:47 I know only that I have to access it via the PPI and that it resets the CPU.
09:37:54 My company makes "down hole" tools for use in oil wells. Most of our stuff has to operate up to 500 degrees. That requirement usually dominates the design pathway completely.
09:38:16 500 which degrees?
09:38:23 So the temperature requirement drives the design, and then I want a software tool that makes it as easy as possible to diddle the hardware around the way I want.
09:38:23 Real degrees or Celsius?
09:38:29 F
09:38:36 Oh, damn.
09:38:59 Exactly.
09:39:01 :-(
09:39:09 Very tough.
09:39:29 Is that 300 C or what?
09:39:40 260 C, I think.
09:39:59 yes, 260 C
09:39:59 That's the extreme; 175C and 200C tools are more common.
09:40:00 Well... Quite high for electronics.
09:40:53 * ASau is more accustomed to temperatures around 1200...1500 K.
09:41:49 Anyway, somewhere I read a quote (may have been in Thinking Forth) where a guy said that Forth had influenced his hardware work, by causing him to think in terms of more well-defined electronic "widgets" that he could string together in flexible ways. He found himself refactoring his hardware designs. :-) I loved that quote.
09:42:26 Yeah, it's "TF" most probably.
09:42:41 I remember the quote vaguely.
09:43:30 Reality is harsher, though.
09:43:41 I like Forth exactly because it tends to tear down the barrier we've erected between "the hardware" and "the software." It gets closer to treating it all as "the system."
09:43:52 Especially when you start thinking about FPGA-based Forth processors.
09:44:07 I haven't designed one yet, but I keep threatening to around the office.
09:44:33 Do they work at those 500 K temperatures?
09:44:56 FPGAs.
09:45:10 Some of the Actel units go to the highest temps. I haven't tried one at 500F yet, but they do work well above 200C.
09:45:24 Xilinx units (Spartan 2) putz out around 180C.
09:45:52 500 K is the working temperature for those semiconductor measurements.
09:46:33 500K is what, 227C? I think an Actel might make it that high. At least very close.
09:46:38 Molten-salt conductivity &c.
09:47:02 I don't remember exactly; that was another research group.
09:49:27 http://en.wikipedia.org/wiki/Molten_salt_battery
09:50:26 National Semiconductor and Texas Instruments both make lines oriented specifically toward the down-hole world; if you need components of various types that will run hot, that's a good place to start.
09:50:45 The TI lineup includes some DSPs. Couple that with Actel FPGAs and you have a pretty good working approach.
09:51:01 You still have to test stuff to be sure, but at least you've stacked the deck in your favor.
09:51:13 If I ever get back to working with those temperatures,
09:51:25 and I hope I shall,
09:51:31 I shan't do any programming.
09:51:52 Not of this kind, in any case :D
09:52:20 I actually don't do a whole lot of the hands-on stuff any more. These days I manage the electrical engineering team. I try to stay pretty hands-on, though, and grab occasional projects when the group's overloaded.
09:54:18 I worked with this kind of material: http://en.wikipedia.org/wiki/Y123
09:55:33 Love that picture of the molecule. I'll read it in a sec - got to find something for my wife first.
09:55:33 And this explains why those temperatures: http://en.wikipedia.org/wiki/Y123#Synthesis
09:59:32 I see - so your applications are in superconductivity? What are some of the target applications?
10:00:01 I left that group when I graduated.
10:00:19 When I first moved to Houston I interviewed for a job with the University of Houston's superconductivity program - they were interested in superconducting flywheel batteries, and I had flywheel battery and related experience.
10:00:36 Now I want to get back, but I have to be paid for the work.
10:00:41 I took another job, so I don't know where they went with it.
10:01:48 Or at least have some short-term return.
10:02:46 As for now my interest in Forth is more of an academic kind than a strictly practical one.
10:03:05 I _could_ write and deploy several Forth applications
10:03:19 given almost complete freedom of choice of tools.
10:03:31 But I'm not given that amount of time.
10:03:52 Hence the web interface is written in Scheme rather than Forth.
10:03:55 :)
10:05:46 The next developer will be quite surprised.
10:08:40 I'd be really interested in how you approached a practical problem with Scheme. I've taken looks at it, because many years ago I had a strong interest in AI and hence Lisp.
10:09:22 It always seemed to me that getting actually productive in Lisp required a lot of "starting knowledge." I never found the time to get there.
10:09:48 Yes, it's amazing how some groups don't "get it" that we have to actually earn livings. :-)
10:16:12 I needed to immediately replace a C-written, non-working web interface with anything that worked.
10:16:35 I don't know PHP and don't want to learn it just for the sake of it.
10:17:03 I'll share what I'd actually like to do with Forth someday, along with other things.
10:17:10 Thus I had 4 languages to choose from: C, which had proved to be inefficient,
10:17:28 Forth, which lacks the necessary libraries,
10:17:41 I'd like to see a 100% open-source computing platform. Open, freely available circuit designs, implemented within the gEDA platform.
10:17:44 and Common Lisp and Scheme, which have all the necessary libraries.
10:17:57 CL was ruled out because of deployment problems.
10:17:59 Completely available PCB layout, Gerber files, bills of materials, etc.
10:18:05 Scheme survived.
10:18:11 That's all! :)
10:18:39 A processor based on FPGA technology, with parallel processors that execute Forth as their native language.
10:18:41 T1/Niagara + NetBSD?
10:18:47 So this thing would be easy to program in "assembler."
10:18:55 --- join: Judofyr (n=Judofyr@cC694BF51.dhcp.bluecom.no) joined #forth
10:18:59 Then a bridge to open-source Linux.
10:19:29 So you'd wind up with a fully functional, fully capable computer system with absolutely no "hooks" for companies like Microsoft, Intel, AMD, etc. to control anything with.
10:20:20 Then encryption technology to keep anyone from hooking into the Internet traffic, and you have total freedom on the net.
10:20:46 Forth's just a part of that, but an important part in my mind.
10:21:13 I'd like to see open-source community processes at work in the BIOS, and, indeed, in the VHDL / Verilog that implemented the processor core itself.
10:21:34 Hopefully we'd never find it necessary to implement a public-domain FPGA, but even that could be done if necessary.
10:22:03 And might be considered if it were the *only* piece of the puzzle that was left.
10:22:29 It's hard for me to see how Actel or Xilinx could "needle into" processor operation the same way Intel and AMD can with today's approach.
10:23:55 I think that _if_ you could produce reasonably cheap, fully capable hardware with a reasonably powerful CPU,
10:24:18 you _could_ find geeks to port NetBSD, OpenBSD or Linux to it.
10:25:30 My approach (no idea if it would turn out to be the best one) would be to keep the processor itself extremely simple. Probably pipelined, but in a very straightforward, easy-to-understand way.
10:25:42 Simple = small, so you could have many such processors on your chip.
10:25:44 I could invest some time into something like a Palm Tungsten.
10:25:57 Even if it bears a Forth CPU inside.
10:26:14 So you have a system that can run a lot of threads simultaneously. Run ps ax on your Linux box and you see a lot of threads. So you'd spread those out over the hardware resources.
10:26:53 I assume that when two Linux threads need to synchronize they handle that explicitly themselves, perhaps using an OS mechanism but not requiring that the hardware help.
10:27:36 So my guess is that the Linux thread farm would drop fairly easily onto what I suppose would best be considered a multi-core processor with a high core count.
10:29:38 Another reason to keep the CPU simple would be to facilitate a very high clock speed. I looked into this a bit a while back and decided that I could approach 250 MHz without much trouble in a Spartan 3 or a Virtex.
10:31:21 Anyway, the basic idea is to use lots of parallelism to overcome the fact that Intel and AMD can design at the transistor level and bring *extremely* sophisticated techniques to bear on speeding stuff up.
10:31:36 In the end I'd hope to wind up with a computer that would run as fast or faster.
10:32:32 I don't know how Linux threads synchronize; I can tell you how NetBSD threads synchronize.
10:32:41 It seems that they use atomic operations.
10:33:01 Like "test and set," etc.?
10:33:08 Semaphore interface operations?
10:33:31 Yes.
10:33:49 The atomic_ops family of functions provides atomic memory operations.
10:33:49 There are 7 classes of atomic memory operations available:
10:33:49 atomic_add(3)   These functions perform atomic addition.
10:33:49 atomic_and(3)   These functions perform atomic logical ``and''.
10:33:52 atomic_cas(3)   These functions perform atomic compare-and-swap.
10:33:55 atomic_dec(3)   These functions perform atomic decrement.
10:33:57 It would be simple enough to put a few words like that in the native instruction set.
10:33:58 atomic_inc(3)   These functions perform atomic increment.
10:34:04 atomic_or(3)    These functions perform atomic logical ``or''.
10:34:05 atomic_swap(3)  These functions perform atomic swap.
10:34:08 Quoting atomic_ops(3)
10:35:40 The basic approach to something like this would be to lay the processors out in the FPGA in a square array, and share the built-in memory amongst them. So each processor would have its own local memory and a narrow comm link with its four nearest neighbors.
10:35:53 The main memory interface (to off-chip RAM) would have to be shared. 10:36:20 So you might think of that local memory as microcode memory, or as a cache, or whatever. It would really fill all of those roles. 10:36:48 I wouldn't put the stack there - I'd implement it as a bidirectional hardware shift register for speed. Probably eight cells deep, maybe 16. 10:36:52 I'm not sure how easy it is to program. 10:36:55 Same with the return stack. 10:37:16 And this matters much. 10:37:30 These RAM blocks are dual ported, so another possibility would be for each processor to share 1/4 of its local RAM with the four nearest neighbors. 10:37:58 If it is hard to program everyone would sacrifice speed, 10:37:58 money and everything else rather than programming time. 10:38:32 Well, ultimately it has to connect up with existing off-the-shelf kernel code, etc. 10:38:44 I understand what you're saying though; I've pondered this point a good bit. 10:39:24 I first thought of this processor array for use in hardware control applications, where precise timing is vital. So I envisioned an architecture where instruction execution time was precisely preditable. 10:39:27 predictable. 10:40:05 And where I synchronized threads manually by lining up the code for all 16 or so processors in a special editor I wrote for the purpose. 10:40:29 I planned to use conditional execution to avoid pipeline-breaking jumps. 10:41:03 So it would be easy to write the code in a way that let you say "these instructions, even though they're on separate processors, will execute at exactly the same time, guaranteed." 10:41:58 I do think that a special editor would be required. You'd want to be able to highlight a line of code that's on one processor and see in all the other windows the code that was time aligned with that on the other processors. 10:42:13 It'd be easy to see what was going on everywhere at any given time. 10:42:41 This leads into very narrow field. 10:43:01 Complex, yes. 
But remember we're trying to replace not just the code writing complexity of a conventional system but also the processor design complexity. Certainly sophisticated thought will be required at some level. The performance we're going for doesn't come for free. 10:43:50 See what I said above. 10:44:12 If it is too hard to use, performance and everything else 10:44:12 will be sacrificed to ease of programming. 10:44:18 At this level of coding our goal is to achieve the functionality of a processor, at performance levels that match Intel and AMD. I assume that stuff corresponding to the BIOS and so forth would live a layer up fromt here, and would code up in a very similar way. 10:44:42 This may interest HPC folks. 10:44:58 You need to adjust to their application needs then. 10:45:31 Like ship optimized BLAS and LAPACK at the very least. 10:46:11 You know, I suppose another approach would be to have a string of processors instead of an array, and then use them to build up a more standard instruction processing pipeline. Then multiple such strings would represent multiple cores. 10:46:48 That would "look more like" existing processors, and thus might make it easier to mesh up with existing open source code optimized for such processors. 10:47:56 But then you're just a step away from specializing each of the processors in the string for whatever purpose it's supposed to fill, and you're moving away from a parallel processing approach. 10:48:07 Well... I think that you should decide on what you're trying to achieve first. 10:48:08 More effort in the VHDL. 10:48:28 If you want to address computer geeks, than which ones? 10:48:31 Forth geeks? 10:48:37 More regular ones? 10:48:50 Do you want to address HPC folks? 10:49:40 Well, I stated that very clearly - an open source computer that will rival the performance of units based on Intel / AMD processors. At a reasonable cost. 10:49:58 I don't expect this thing to be as cheap as mass-produced units. 
But it shouldn't be 10x the cost either. Maybe twice. 10:50:41 Some people use Linux because it matters to them to be free of corporate control. These are the people who would want this computer. 10:51:03 What is the intended audience? 10:51:06 Who are its users? 10:51:24 They might buy it from a company that built them for sale, but they could always have their own PCBs made, etc. 10:51:50 NetBSD, OpenBSD, Linux and others need an MMU. 10:52:58 There is some weird Linux fork that doesn't need one, 10:53:11 but it is meant for pretty restricted use. 10:53:12 I don't want to get away from the technical discussion, but my interest in this comes basically from my distrust of Big Business (or Big Government, or "Big" anything). 10:53:19 I don't trust Microsoft not to spy on me, so I use Linux. 10:53:34 I also don't trust Intel / AMD not to spy on me, though I don't think that's as much of an issue *today*. 10:53:39 That's pretty nice, but the intended topic here is technical discussion. 10:53:45 But it could be someday, and I would like for the world to have a place to turn. 10:53:46 Correct. 10:53:50 Enough of that. 10:53:56 But you asked about the intended audience. 10:54:08 Sure. 10:54:26 Consider that someone creates a CPU that does NOOPs at 10 GHz. 10:54:41 Which is 4x as much as current frequencies. 10:54:44 So a totally parallel processed approach would use identical processor blocks across the board, and the tailoring of the functionality would happen in the Forth "firmware." 10:54:48 But who needs it?? 10:54:58 Well, we can't go that fast in FPGAs. 10:55:15 We can approach 1 GHz, but I don't know if we can get all the way there. 10:55:29 Alright, it does 100 NOOPs at 1 MHz. 10:55:51 Which is still 5x-10x as much as current freqs. 10:56:05 I'm just looking at the tco (time from clock to output) of the flip-flop cells in the mainstream FPGA offerings. That's a nanosecond or two. 10:56:26 Still, who needs even these parallel no-ops? 
10:56:31 Then there are other things to consider. Like I said earlier, I saw getting to 250 MHz or so without any real trouble. 10:56:57 Listen. 10:57:04 I don't know - I haven't done the math on what it would really take to match state-of-the-art processor performance. 10:57:09 _If_ you find some application domain 10:57:22 where using your CPUs gives some valuable advantage, 10:57:24 I've thought more about what I *could* put into a big off-the-shelf FPGA. Then I'd figure out how to use it / what to do with it. 10:57:27 you _may_ win. 10:57:53 If your application domain is wide enough, you may win more. 10:58:38 See, I don't want any low-level processor style other than this one. So, for example, your graphics engine would be an array of these processors that ultimately fed three or four digital-to-analog converters (RGB, CMYK, etc.). 10:58:44 Right now I don't see even a theoretical possibility to use processors of your design. 10:58:59 So once you understood how the basic processor cell worked you'd understand how *everything* in the computer worked. 10:59:38 Same for USB - same processors with a simple analog interface to hook up to the connector. 11:00:33 As an application programmer, I care little about timing diagrams. 11:00:35 Fair enough - I'm still very much in blue-sky mode. But certainly I see hooking a string of these things together to emulate the pipeline processing in a conventional processor as a possibility. So that looks like *a* way; I just don't know if it's the best one. 11:00:41 What matters to me is simplicity of use. 11:01:09 If a device is memory-mapped, that's easy. 11:01:14 This is meant, ultimately, to be a Linux box. Just as easy to use as any other. 11:01:40 If it requires constantly tweaking its registers in a time-predictable manner, 11:01:42 Oh I see. 11:01:43 this is NOT easy. 11:02:07 Well, at some level the timing has to be handled, but yes, I see your interest in a higher-level interface that's memory-mapped. 
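The "memory-mapped is easy" point above is that, to the driver writer, the device is just a few addresses: load registers, start the operation, poll a status bit. A minimal Python sketch of that programming model; the register offsets and the result value are invented for illustration, and a real driver would read and write fixed physical addresses instead.

```python
# Hypothetical register offsets for a memory-mapped device.
CTRL, STATUS, DATA = 0x0, 0x4, 0x8

class FakeDevice:
    """Stands in for a memory-mapped register block (pure software)."""
    def __init__(self):
        self.regs = {CTRL: 0, STATUS: 0, DATA: 0}

    def write(self, offset, value):
        self.regs[offset] = value
        if offset == CTRL and value == 1:   # writing 1 "starts the operation"
            self.regs[DATA] = 42            # pretend result appears
            self.regs[STATUS] = 1           # "done" flag goes high

    def read(self, offset):
        return self.regs[offset]

dev = FakeDevice()
dev.write(CTRL, 1)                   # load registers, start the operation
while dev.read(STATUS) & 1 == 0:     # ...and wait till it finishes
    pass
print(dev.read(DATA))                # prints: 42
```

No cycle-accurate timing appears anywhere in the driver code; that is exactly the simplicity being asked for.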
11:02:49 Even some kind of DMA controller programming is easy enough. 11:03:18 When we interface serial ADCs and DACs at work we put a FIFO in the FPGA. FPGA circuitry manages the timing, serializes and deserializes, moves words into and out of the FIFO, etc. The FIFO half-full flag interrupts the DSP, and it has a simple register interface to everything. 11:03:25 Load registers, start the operation and wait till it finishes. 11:03:33 Just reads words until the FIFO is empty, or writes them till it's full, etc. 11:04:09 Or a h/w FIFO. 11:04:12 With existing processors and graphics accelerators and so on, a lot of fancy functionality is built into the hardware, its firmware, etc. Someone has done that for us, and we use it. 11:04:17 A software FIFO is NOT easy. 11:04:20 I'm fed up with it. 11:04:23 %| 11:04:41 But in the vision I'm proposing we've "opened the kimono"; someone in the open source community becomes responsible for every part of the system. 11:05:01 So you might or might not choose to play at those levels of the process, but I imagine that someone would. 11:05:13 The OSS world works exactly this way: 11:05:32 you propose a design which is at least theoretically valuable, 11:05:37 and provide a prototype. 11:05:42 Then we'd no longer have to fret over when Intel will get around to making the Poulsbo graphics drivers available, or whatever. We'll own the chain from one end to the other, and the big companies can just hop right over it. 11:05:50 If it is viable, followers appear. 11:06:07 If it isn't, followers don't appear and your project dies out. 11:07:53 I may not get to any of this until I retire, though; I'm busy earning a living, I have five daughters ages 5 to 16, etc. Life intervenes, and ultimately I do work to live rather than live to work. 11:08:07 I do hope to do some of these things someday, though. 11:08:16 Same here. 11:08:19 I am putting some of the supporting pieces into place. 11:08:23 Only without kids. 
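The FIFO-behind-registers pattern described above (hardware fills a FIFO, the half-full flag raises an interrupt, the handler drains until the empty flag is set) can be sketched like this. It is a Python simulation only; the flag and register names are illustrative, not a real device's API.

```python
from collections import deque

# Words the FPGA side has already deposited into the hardware FIFO.
fifo = deque([10, 20, 30, 40])

def fifo_empty():
    return len(fifo) == 0       # stands in for reading the EMPTY flag

def fifo_data():
    return fifo.popleft()       # reading the DATA register pops one word

def half_full_isr(out):
    # What the half-full interrupt handler does: read until empty.
    while not fifo_empty():
        out.append(fifo_data())

received = []
half_full_isr(received)
print(received)   # prints: [10, 20, 30, 40]
```

The key property is that the processor-side code never touches the serial timing; the FPGA circuitry owns that, and software only sees whole words and status flags.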
11:09:02 At the moment I'm very carefully setting up the basis of gEDA component libraries on my PC, with git as version control, so that as I build up useful parts of this it will be easy to share them in a rigorous, well-specified way. 11:09:31 I might use the same infrastructure to support future consulting projects, so I can argue that I'm keeping an eye on my professional future at the same time. 11:09:37 Makes it easier to justify the investment. 11:10:07 In today's world I expect to have to keep earning well into what would otherwise be "retirement," and I'd rather do that from home than from an office. 11:10:11 AFAIR, we provide git-based packages. 11:10:20 I'd hate to have to drive to work every day when I'm 70. 11:10:26 If not, I'm afraid that I'm the only person who does that. 11:10:50 Git just seems like the right tool to me. Very P2P. 11:11:01 Well... We don't. 11:11:06 But I could do that. 11:11:30 Using it is really easy too - just install it from the Ubuntu package repository, go to the directory you want to keep your project in, say "git init," and you're off to the races. 11:11:42 Then anytime you feel like it, "git add * ; git commit". 11:11:46 And someone did update geda yesterday or so. 11:11:53 Back up the .git directory regularly. 11:12:23 We back up the VCS snapshot by default. 11:12:24 I have whatever gEDA Ubuntu gives me. Maybe there's a PPA for accessing the latest one; I should check. 11:12:39 I don't really love gschem that much, but I do like PCB a *lot*. 11:12:42 "PPA"? 11:13:25 PPAs are a relatively recent way of getting the standard Debian package upgrade process to "go outside" the usual repositories and update individual packages for you. 11:13:36 Oh, Debian. 11:13:50 For example, I use a Twitter client called Gwibber. Ubuntu provides Gwibber, but it's a back version. 11:13:58 That very system, with packages three years old. 
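The git workflow mentioned above, spelled out as a shell session. The project name "demo-parts" and the footprint file are placeholders; the two config lines are only needed on a machine where git identity hasn't been set up yet.

```shell
# Start version-controlling a component library directory.
mkdir -p demo-parts && cd demo-parts
git init -q
git config user.email "you@example.com"   # one-time setup, if needed
git config user.name "You"

echo "0805 resistor footprint" > r0805.fp # placeholder library content
git add .
git commit -q -m "first snapshot of the component library"

# "Back up the .git directory regularly" - it holds the whole history:
tar czf ../demo-parts-backup.tar.gz .git
```

Because the entire history lives in `.git`, copying that one directory (or the tarball) to another machine is a complete backup, which is what makes git feel "very P2P".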
11:14:20 By hooking up with the Gwibber PPA, the standard software update process can go straight to the package maintainers, if they offer a PPA. 11:14:38 So you stay right up to date without having to mess around with Subversion, or CVS, or git, or whatever. 11:14:54 Without having to almost become a package developer yourself. 11:15:20 We have some Debian users, who obviously override the default package management system. 11:16:51 Anyway, I want to start designing circuitry soon. Power supply first, but fairly quickly a design that includes an FPGA and a straightforward way of interfacing it. 11:17:43 The one nasty in the vision is that I don't see a way to avoid using FPGA vendor-supplied design environments. Ideally we'd just understand the encoding of the bitstream used to program the FPGA and could have open source tools to generate them. As far as I know, though, that information is held proprietary. 11:19:28 But my goal, at any rate, is to get a hardware design in place that will let me start playing with these parallel Forth processors, so I can start to figure out what I can do with them. 11:20:04 I suppose I could emulate them in software for some "early results," but I would like to see how effectively I could, for example, drive a display or whatever using the techniques I outlined earlier. 11:20:50 Like how well could I decode an AVI file and play it. 11:21:34 Or interface a gigabit Ethernet port. 11:21:37 Or whatever. 11:21:50 All with this one basic, common processor cell running Forth. 11:22:35 Ok, enough carrying on by me. I have to take my daughter to cheer practice and buy some groceries. Good to chat with you again, ASau. 
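Under the hood, a PPA is just an extra APT source: enabling one amounts to dropping a line like the following into apt's configuration, after which the regular update process pulls the maintainer's newer packages. The team and archive names here are hypothetical, and "karmic" stands in for whatever Ubuntu release is installed.

```
# /etc/apt/sources.list.d/some-ppa.list  (hypothetical names)
deb http://ppa.launchpad.net/some-team/some-ppa/ubuntu karmic main
```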
18:02:58 :) 18:03:28 are there some words for this: convert s" 32" to 32 and leave it on my stack? 18:18:21 see >NUMBER 18:31:41 I saw, but don't understand. reading about c" and s" 18:34:17 c" gives you a single address: the first character is the length. 18:34:23 s" gives you address and length on the stack 18:35:26 `0 s>d s" 32" >number 2drop d>s` 18:37:25 tathi, i tried 0.0 s" 32" >number 18:37:28 is it ok? 18:37:39 that should be fine 18:38:07 thx :~) 18:38:16 You ok from there? 18:38:25 >number returns the rest of the string 18:38:35 so you can check if it managed to convert the whole thing 18:38:47 (or just 2drop it if you don't care) 18:39:24 Note that it doesn't handle a minus sign, so if you care about negative numbers you have to deal with that yourself. 18:39:39 nod 18:40:14 I am using the CONTENT_LENGTH for an input string 18:40:20 so, it will be an int 18:40:23 gforth has 's>number?' ( addr u -- d f ) 18:40:35 cool!! 18:40:43 that handles the sign for you 18:41:05 s>number 18:41:08 it works 18:42:20 Right. s>number is `s>number? drop` 18:42:36 if you don't care whether or not it succeeds in converting the whole input 18:43:58 OK, I need sleep. 18:44:03 Goodnight, all! 
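Collecting the conversion recipe from the exchange above into one sketch (stack comments added; the second form, s>number?, is a gforth extension, not ANS Forth):

```forth
\ ANS Forth: >NUMBER wants a double-cell accumulator under the string.
0 s>d            \ accumulator: 0 as a double-cell number
s" 32" >number   \ -- ud addr' u'   u'=0 means everything converted
2drop            \ discard the leftover string address/length
d>s              \ narrow the double result: 32 is now on the stack

\ gforth extension, which also handles a leading minus sign:
s" -17" s>number?   \ -- d flag   flag is true if conversion succeeded
```

Checking u' before the 2drop is how you detect junk like "3x2" that only partially converts.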
18:44:07 --- quit: tathi ("leaving") 18:44:49 night 23:59:59 --- log: ended forth/09.11.08