00:00:00 --- log: started forth/10.04.23
00:01:00 --- quit: gnomon (Ping timeout: 264 seconds)
00:01:16 --- join: gnomon (~gnomon@CPE0022158a8221-CM000f9f776f96.cpe.net.cable.rogers.com) joined #forth
00:10:39 --- join: ASau` (~user@77.246.231.114) joined #forth
00:17:59 --- quit: nighty__ (Ping timeout: 258 seconds)
00:18:59 --- join: nighty__ (~nighty@210.188.173.245) joined #forth
00:38:41 --- quit: nighty__ (Ping timeout: 258 seconds)
00:49:26 --- quit: kar8nga (Remote host closed the connection)
00:51:50 --- join: nighty__ (~nighty@210.188.173.245) joined #forth
01:00:34 --- quit: ASau (Ping timeout: 248 seconds)
01:59:19 --- quit: nighty__ (Remote host closed the connection)
03:47:07 --- join: skas (~skas@ppp121-45-193-75.lns20.cbr1.internode.on.net) joined #forth
04:00:19 --- quit: skas (Quit: Leaving)
04:11:19 --- quit: cataska (Quit: leaving)
07:33:51 Well fuck.
07:33:55 That Calc IV exam was hard.
07:40:09 Ah, multivariate integrals!
07:40:31 That is Calc III.
07:40:39 Calc IV is differential equations.
07:40:59 Specifically, systems of differential equations are hard for me.
07:41:03 Eigenvalues and such.
07:41:20 Calc III was very easy for me.
07:44:29 If you're talking about linear systems, then this shouldn't be in a calculus course at all.
07:46:01 Linear systems of differential equations.
07:46:15 Evaluating critical points.
07:46:18 That sort of thing.
07:46:29 Laplace transforms are not too difficult.
07:46:36 I just had difficulty with the systems.
07:47:12 Once you've found a basis (any), this doesn't belong to calculus.
07:47:23 It's a purely algebraic problem.
07:49:25 --- join: kar8nga (~kar8nga@jol13-1-82-66-176-74.fbx.proxad.net) joined #forth
07:49:57 All differential equations are algebraic.
07:50:01 With a ton of formulas.
07:51:40 And transforms and procedures.
08:32:07 --- quit: Al2O3 (Quit: Al2O3)
09:08:40 --- join: arquebus (~sdf@201.139.156.133.cable.dyn.cableonline.com.mx) joined #forth
09:18:00 --- part: arquebus left #forth
09:18:25 --- join: arquebus (~sdf@201.139.156.133.cable.dyn.cableonline.com.mx) joined #forth
09:19:18 Anyone here know if there are graphics libraries for gforth, like SDL or OpenGL bindings?
09:19:35 --- join: skas (~skas@ppp121-45-196-67.lns20.cbr1.internode.on.net) joined #forth
09:25:23 I don't know of any, but that's not to say there aren't.
09:25:38 --- part: skas left #forth
09:27:34 Deformative: ok, thanks. It really surprises me that places like forth.org don't have a page dedicated to listing libraries, and that even the gforth docs don't list libraries.
09:34:12 arquebus: no, there are no such libraries; everyone writes their own FFI wrappers.
09:35:04 arquebus: There's no surprise that forth.org doesn't mention anything new, since Forth is mostly dead and interest is mostly dead with it.
09:36:01 --- quit: ASau` (Quit: off)
09:37:35 --- quit: arquebus (Quit: arquebus)
10:21:45 :D
10:21:56 Feath
10:34:59 --- join: forther (~62d2faca@gateway/web/freenode/x-ajiaynbvpboiqlta) joined #forth
10:35:15 hi
10:35:44 --- join: ASau (~user@83.69.227.32) joined #forth
10:42:37 --- quit: crc (Ping timeout: 240 seconds)
10:44:30 --- join: crc (~charlesch@184.77.185.20) joined #forth
10:50:01 Hi forther.
10:50:29 hi Deformative
10:51:59 What's up?
11:00:54 Nothing. Just wondered if something interesting was being discussed in here.
11:05:08 Not really.
11:05:13 Just me complaining about my exam.
11:05:53 And setting up my laptop, getting it ready for my new fpga.
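A side note on the systems-of-ODEs exchange above (07:40-07:47): the point that everything turns algebraic once you reach the eigenproblem shows in a small worked example (numbers invented here purely for illustration). For

    \[ \dot{\mathbf{x}} = A\mathbf{x}, \qquad
       A = \begin{pmatrix} 0 & 1 \\ -2 & -3 \end{pmatrix}, \]

the characteristic polynomial \( \det(A - \lambda I) = \lambda^2 + 3\lambda + 2 \) has roots \( \lambda_1 = -1 \) and \( \lambda_2 = -2 \), with eigenvectors \( \mathbf{v}_1 = (1, -1) \) and \( \mathbf{v}_2 = (1, -2) \); finding those is pure linear algebra, and the only calculus left is writing down

    \[ \mathbf{x}(t) = c_1 e^{-t} \begin{pmatrix} 1 \\ -1 \end{pmatrix}
                     + c_2 e^{-2t} \begin{pmatrix} 1 \\ -2 \end{pmatrix}. \]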
11:09:49 --- join: ygrek (debian-tor@gateway/tor-sasl/ygrek) joined #forth
11:20:19 exam?
11:30:42 Calc IV.
11:35:41 --- quit: kar8nga (Read error: Connection reset by peer)
11:50:29 And prepping my old computer for my fpga.
11:52:01 --- join: segher (~segher@84-105-60-153.cable.quicknet.nl) joined #forth
12:15:09 --- join: qFox (~C00K13S@5356B263.cable.casema.nl) joined #forth
12:38:16 --- quit: Deformative (Ping timeout: 246 seconds)
13:01:36 --- quit: forther (Quit: Page closed)
13:19:32 --- quit: madgarden (Ping timeout: 276 seconds)
13:20:00 --- join: madgarden (~madgarden@CPE001d7e527f89-CM00159a65a870.cpe.net.cable.rogers.com) joined #forth
13:22:45 --- join: GoNoGo (~GoNoGo@2a01:e35:2ec5:dd70:b50d:352a:254a:6041) joined #forth
13:37:08 --- quit: ASau (Remote host closed the connection)
13:37:57 --- join: ASau (~user@83.69.227.32) joined #forth
13:56:19 --- quit: ygrek (Ping timeout: 245 seconds)
14:46:48 --- join: Deformative (~joe@2002:43c2:b52f:d:224:8cff:fe67:e2dd) joined #forth
15:26:09 --- quit: GoNoGo (Quit: ChatZilla 0.9.86 [Firefox 3.6.3/20100401080539])
15:53:54 --- join: tathi (~josh@dsl-216-227-91-166.fairpoint.net) joined #forth
15:56:57 Tim Trussell has been messing with SDL and OpenGL under gforth since the beginning of this year, and posting to comp.lang.forth about it: his code can be found at ftp://ftp.taygeta.com/pub/Forth/Archive/tutorials/gforth-sdl-opengl/
16:18:25 --- quit: Deformative (Read error: Operation timed out)
16:20:22 --- join: Deformative (~joe@bursley-185022.reshall.umich.edu) joined #forth
16:21:06 --- quit: qFox (Quit: Time for cookies!)
17:22:51 Hey KipIngram.
17:27:11 I might be joining the research staff here.
17:27:19 A professor wants me to work for him, it seems.
17:27:42 I don't know if research staff is the right word.
17:27:53 Regardless.
17:28:23 The professor here who does a lot of research in ultra-low-power computing invited me to work for him.
17:42:24 Yeah, not staff.
17:42:33 But I would be working or whatever.
17:57:49 Does anyone know where the forth instruction set is available?
17:57:59 Erm
17:58:00 Seaforth
18:01:33 Nevermind, found it.
22:10:36 Deformative: After digesting the Seaforth instruction set for a while I decided it was optimized for small but fast processors operating in a parallel-processor environment. The idea is that the fleet of processors cooperates, with each working on a piece of the problem.
22:11:02 I deviated from it on purpose because my intent is to have a single processor that has to do all of the work, and therefore needs more substantial resources.
22:11:23 My stacks are deeper, for one thing. There are other subtle differences as well.
22:20:06 I know.
22:20:19 This is cool: http://www.microcore.org/
22:20:38 Do you know of any other stack architectures? I am looking for inspiration.
22:29:27 Do a websearch on 4stack.
22:29:49 I didn't find myself particularly inspired by it, but it's worth looking at anyway.
22:30:13 The whole idea of tightly coupled stacks seemed worth a perusal.
22:30:22 How deep did you make your stacks?
22:30:34 I am looking at 18 words.
22:30:54 And 16-bit words.
22:31:18 So that is like 288 LEs per stack.
22:34:28 Well, it's arbitrary; they're hardware LIFOs, not addressed banks.
22:34:48 I think I'll go with 32 elements each on the initial implementation.
22:35:00 16 bits wide.
22:35:21 In a Spartan 6 I can get 8 bits into a slice. :-)
22:35:26 That's really cool.
22:35:37 Cool.
22:35:42 32 is a pretty big stack.
22:35:44 But might as well.
22:36:13 Sure. I will probably develop tools that let me determine how much of it I use, then I can cut them down at the last minute before deploying hardware.
22:36:23 If I need the room.
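Since the stacks under discussion are hardware LIFOs rather than addressed RAM banks, here is a minimal Verilog sketch of one, using the 16-bit-wide, 32-deep figures from the exchange above; the module and signal names are my own illustration, not either poster's actual design:

    // Shift-register LIFO: every cell moves on a push or pop, which is
    // what makes it a hardware stack rather than an addressed bank, and
    // why it costs roughly WIDTH x DEPTH register bits (cf. 18 x 16 = 288).
    module lifo #(parameter WIDTH = 16, DEPTH = 32) (
        input  wire             clk,
        input  wire             push,
        input  wire             pop,
        input  wire [WIDTH-1:0] din,
        output wire [WIDTH-1:0] tos      // top of stack
    );
        reg [WIDTH-1:0] cell [0:DEPTH-1];
        integer i;

        assign tos = cell[0];            // the top is always cell 0

        always @(posedge clk) begin
            if (push) begin
                cell[0] <= din;                    // new top...
                for (i = 1; i < DEPTH; i = i + 1)
                    cell[i] <= cell[i-1];          // ...everything shifts down
            end else if (pop) begin
                for (i = 0; i < DEPTH-1; i = i + 1)
                    cell[i] <= cell[i+1];          // everything shifts up
            end
        end
    endmodule

There is no bounds checking here; a real design would either track depth or accept that underflow reads back stale data.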
22:36:26 It sounds like I would be doing some DSP programming if I join this research.
22:39:53 DSP is very interesting and very "in the middle" of a whole lot of modern technology.
22:40:08 Know it well and you will always be employable.
22:50:14 --- join: Pusdesris (~joe@2002:43c2:b722:4:224:8cff:fe67:e2dd) joined #forth
22:50:18 Internet junked out.
22:50:20 --- quit: Deformative (Read error: Connection reset by peer)
22:50:28 --- nick: Pusdesris -> Deformative
22:50:38 I don't know enough about DSP to comment.
22:50:50 I think dynamically programmable FPGAs are much cooler.
22:55:02 Well, there are FPGAs that have DSP-specific circuitry in them. Things like multiply-accumulate blocks, and so on. Pretty cool, really. At some point I'll want to take a good hard look at one of those and figure out how to incorporate support for that into my processor.
22:56:05 When you get right down to it, pretty much all of DSP relates to Fourier transforms, Shannon's theorem, and discretization issues. The basic concepts are pretty simple - they just get applied in amazingly sophisticated ways.
22:56:20 I don't think I will have time to take DSP courses.
22:56:48 Maybe in grad school, if I want to be very general and never focus on anything in particular.
22:57:01 I hate Fourier and anything mathy.
22:57:21 Then you won't like DSP - some of those concepts are absolutely critical.
22:57:41 Look, though, "very general" and "nothing specific" doesn't set you up well for entry-level jobs.
22:57:49 You need a bagful of tricks that you excel at.
22:58:23 I know.
22:58:26 When you get far enough into your career to be a CTO, etc., then generality can fly, but you've got to "get there" first.
22:58:29 Which is why I don't think I will take DSP stuff.
22:59:31 I think I will specialize in architecture and compiler design.
22:59:42 Maybe write HDL compilers someday, unlikely though.
23:00:45 That actually might become very important - I love the idea of computers becoming programmable down to the logic level.
23:01:00 Me too.
23:01:03 Just buy a blank slate and make it whatever you want it to be.
23:01:40 Did I show you this? http://cccp.eecs.umich.edu/
23:02:07 Yes, you did.
23:02:15 To dynamically generate domain-specific coprocessors on demand would be amazing.
23:02:32 I mean, it wouldn't be optimal, but the asm generated by C compilers isn't optimal either.
23:02:45 I'd like to dynamically generate problem-specific *processors*. Just reconfigure the whole machine on the fly.
23:03:17 We are thinking on different levels: I am thinking desktop, you are thinking embedded, I think.
23:05:12 Like, video games would automatically generate real-time raytracers.
23:05:18 Actually I was thinking desktop too. An example that came to my mind was "editing text - dynamically configure for ultra-low power; watching a movie - dynamically configure for the necessary video performance; etc."
23:05:29 Yes - exactly.
23:05:42 I think we're thinking on exactly the same level.
23:05:46 But I would keep a constant core, something which does the actual generating.
23:06:22 The core would be smaller than the co-processors, so maybe more of a 'programmer' than a core.
23:06:23 Ok - I suppose something would have to stay constant, but maybe that part gets shut down once it's done with its work. It's like a "monitor."
23:06:52 Yeah, that technology would be amazing.
23:07:51 I see one obstacle - the hardware necessary for reconfiguration (programmable routing, etc.) will necessarily impose resource and performance overhead.
23:08:06 I could see it completely eclipsing GPGPU.
23:08:08 So it just may not be possible for programmable hardware to rival custom transistor structures.
23:08:11 Just have a VGA controller.
23:08:15 No GPU needed.
23:09:47 Complex garbage-collection-like algorithms could be built into the "core."
23:10:31 Hold onto the dynamic circuits until the LEs are needed for something else.
23:10:43 I hate using the word complex.
23:11:19 Wow - virtual logic... Now that's an interesting concept.
23:12:19 Heh.
23:13:52 I mean, it wouldn't be terribly difficult to generate cores at compile time for the C (or whatever) compiler, then load from this finite list of cores as needed.
23:13:53 --- join: kar8nga (~kar8nga@jol13-1-82-66-176-74.fbx.proxad.net) joined #forth
23:15:03 By generate, I mean the C compiler could do some statistical analysis to find critical functions to build into hardware.
23:15:06 I get it - I think it's a neat idea.
23:15:49 I do think that in most cases, though, you could just configure the "best hardware mix" when you loaded a given application (text editor, movie player, game, etc.) and then dump that and reload when you chose another app.
23:16:05 I don't know if true "on the fly" will be really necessary, though it's a fascinating concept.
23:16:28 Unfortunately, it would add significant complexity to how computers work in general, and it is possible that you would need to generate multiple assembly versions of programs, one for each core generated, in case one isn't available.
23:17:04 Or use JIT technology!
23:17:07 That would be interesting.
23:17:08 I'm concerned about the overhead programmable routing would add. As a hardware guy I'm quite sensitive to that aspect.
23:17:48 Hmm.
23:18:00 If you *know* where a signal goes, then it's just a wire.
23:18:11 It has some capacitance that you have to charge, and that's it.
23:18:28 If you have to be able to choose from, say, four places that signal comes from, then you need a four-input mux.
23:18:50 That has inherent delay as you propagate through the transistor structures.
23:18:52 No avoiding it.
23:19:03 So a custom design will *always* be faster.
23:20:18 And smaller.
23:20:21 I see.
23:20:22 And lower power.
23:20:46 I have no doubt about lower power.
23:21:10 FPGAs are inevitably hungry.
23:21:39 They certainly shine as prototyping tools, though.
23:21:48 Or for applications where you can't afford custom silicon.
23:21:50 Yeah.
23:22:42 I bet there is still some possibility for dynamically generated coprocessors to be useful, though.
23:23:33 I imagine so.
23:24:16 I need to draw a big picture of my processor before I start translating to Verilog.
23:24:26 I think I have some sloppy logic here and there that could be optimized.
23:24:27 You are using Verilog now?
23:24:29 I "grew" it.
23:24:41 Yes, I decided I'd go with Verilog instead of VHDL.
23:24:46 Cool!
23:24:55 Your Verilog==C, VHDL==ADA remark got my attention.
23:24:57 I am going to write mine in Verilog too.
23:25:04 Provided I have time.
23:25:07 I know C fairly well - I don't know ADA.
23:25:17 Same here. ^^
23:25:30 ADA struck me as something of a farce.
23:25:49 Yeah, only alive because the military forces it to be.
23:25:54 Right.
23:26:17 VHDL has all sorts of history stemming from the same source.
23:26:20 From what I read.
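To make the routing-overhead point above (23:18) concrete: if a destination must be able to take its signal from one of four sources, the configurability itself costs a four-input mux and its propagation delay, where a fixed design would use a plain wire. A throwaway Verilog illustration, with all names invented:

    // 4:1 mux: the price of being able to choose a signal's source.
    // A hard-wired design would replace all of this with one wire.
    module mux4 #(parameter WIDTH = 16) (
        input  wire [WIDTH-1:0] a, b, c, d,  // four candidate sources
        input  wire [1:0]       sel,         // routing configuration bits
        output wire [WIDTH-1:0] y
    );
        assign y = (sel == 2'b00) ? a :
                   (sel == 2'b01) ? b :
                   (sel == 2'b10) ? c : d;   // chained selects = extra delay
    endmodule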
23:26:22 I downloaded some Ubuntu Verilog tools and ran some examples, so I'm ready to go whenever I'm ready.
23:26:56 But I think I need to 1) draw it and inspect it carefully for optimality and 2) write a test suite to really kick the tires on the logic. Make sure every instruction works right in every possible case.
23:27:43 I really don't need to send the thing downhole and have something not work right.
23:28:12 Oh, the joys of tinkering. :)
23:28:50 I can just fool around and claim that it is just a toy anyway; no pressure on me.
23:28:58 If it's a wireline tool I could reprogram it from the surface, but if it's an MWD tool (Measurement While Drilling) then I'd have to trip it out and back in if it failed. That costs a fortune in lost drilling progress.
23:29:47 How do these devices of yours work?
23:29:49 These guys in the oil industry are absolutely *obsessed* with getting the oil out, not tomorrow but *NOW*.
23:29:53 Are they robotics, or?
23:30:36 Well, some of them have motors and stuff. They're all cylindrically shaped and either get inserted into the drill string (MWD) or dropped down a finished well (wireline).
23:30:49 They might use acoustic techniques, radiation measurement, or whatever.
23:31:01 Some of them cut out cores to be returned to the surface.
23:31:18 Some of them pump drilling fluid through the interior of the tool and analyze it optically.
23:31:24 All kinds of things.
23:31:54 And it all has to endure 500 °F temps and (in the MWD case) the vibration, shock, and so on of drilling.
23:32:47 --- join: Pusdesris (~joe@bursley-185022.reshall.umich.edu) joined #forth
23:32:56 Sorry, connection dropped again.
23:34:04 Making a hardware stack in Verilog is so easy.
23:34:20 I am excited to work on this stuff.
23:34:35 First I will make some trivial programs to get to know the equipment, though.
23:34:41 Simple stochastic stuff, probably.
23:34:42 --- quit: Deformative (Ping timeout: 248 seconds)
23:34:52 Sierpinski gasket or such.
23:34:59 Then I will dive in. :)
23:39:12 Sierpinski gasket - that's an awfully mathy concept for a guy that hates mathy stuff.
23:40:14 Not really.
23:40:27 It is one of the simplest stochastic algorithms.
23:41:17 Just halve the distance to one of three points at random.
23:41:42 Still mathy, though. It even relates to chaos theory in some ways. Mind you - I'm not an expert. This is just one of those areas of the type you referred to earlier where I have a general knowledge.
23:42:11 But yeah - simple in some aspects.
23:42:24 Trivial to implement.
23:42:27 In software.
23:42:30 What would you do with that using an FPGA? Some sort of graphics algorithm?
23:42:34 Should be about as easy in hardware.
23:43:03 Yes, given how easy divide-by-2 is in hardware.
23:43:44 Well, I would store the image in RAM, and have a few concurrent modules running, constantly calculating points.
23:44:03 Each time a point is found, it will pipe to a queue module which increments the value at that point in memory.
23:44:14 And the VGA module would periodically output from the memory.
23:44:42 I may or may not place a cap on the incrementing; it might produce a cool shimmer effect without the cap.
23:44:57 Play around with calculating it at different clock rates.
23:45:03 Just generally getting to know the hardware.
23:46:16 Seemed like a good first project to me.
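The chaos-game rule just described - halve the distance to one of three vertices chosen at random - is small enough to sketch as a single Verilog point generator, of the kind the plan above would run several of in parallel. The LFSR taps, screen size, vertex coordinates, and all names here are my assumptions, not anything from the log:

    // One chaos-game point generator: each clock, pick one of three
    // triangle vertices pseudo-randomly and halve the distance to it.
    // A real design would feed (x, y) into the increment-queue module
    // described above; here they are just outputs.
    module sierpinski_point (
        input  wire       clk,
        output reg  [9:0] x = 10'd320,   // current point (start anywhere)
        output reg  [9:0] y = 10'd240
    );
        reg [15:0] lfsr = 16'hACE1;      // maximal 16-bit LFSR (taps 16,14,13,11)
        wire fb = lfsr[15] ^ lfsr[13] ^ lfsr[12] ^ lfsr[10];

        reg [9:0] vx, vy;                // chosen vertex (640x480 screen assumed)
        always @* begin
            case (lfsr[1:0])
                2'b00:   begin vx = 10'd320; vy = 10'd0;   end
                2'b01:   begin vx = 10'd0;   vy = 10'd479; end
                default: begin vx = 10'd639; vy = 10'd479; end  // 2'b11 reuses a vertex
            endcase
        end

        always @(posedge clk) begin
            lfsr <= {lfsr[14:0], fb};
            // Halving the distance is just add-then-shift; the extra bit
            // keeps the 10-bit addition from overflowing.
            x <= ({1'b0, x} + {1'b0, vx}) >> 1;
            y <= ({1'b0, y} + {1'b0, vy}) >> 1;
        end
    endmodule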
23:46:25 Hey, have you ever seen the movie "Primer"?
23:46:42 Nope.
23:46:48 Somehow this Sierpinski conversation made me think of it.
23:47:08 You should check it out - it was fun. A very amateur movie, production-wise, but interesting just the same.
23:47:10 Oh wait, I think I saw about half of it.
23:47:15 Ties your brain in knots.
23:47:34 It's the one where they realize that something is traveling through time because bacteria are growing on what they put in the device, right?
23:48:10 Right.
23:48:31 I spent a fair bit of time one day trying to draw a full set of timelines for the plot.
23:49:04 Yeah, I only saw a bit of it.
23:49:14 It was on one day when I decided to go to math class.
23:49:15 Then of course I discovered that someone else had done a much better job and published their result on the web.
23:49:20 In high school.
23:50:34 I don't think I will implement my own ALU.
23:50:42 I will borrow one from opencores.org
23:51:21 Cool! A hardware Huffman decoder.
23:51:23 That is awesome.
23:52:00 Maybe I will make an ALU.
23:52:07 Now that I think of it, it isn't terribly difficult in Verilog.
23:52:48 No, it's really not difficult at all. I took the Seaforth approach - everything gets computed (for all opcodes) in parallel - then the result your opcode needs gets selected.
23:52:59 Probably not at all the most power-efficient method.
23:53:28 Ha ha - I'm winning one of my online chess games.
23:53:45 I haven't played chess in so long.
23:53:50 I was never very good.
23:53:54 My brother was, though.
23:54:34 He would always beat me, even though he is younger.
23:54:52 I'm decent. I've never studied my openings enough, so I do myself in early on when I'm against someone who has.
23:55:47 When I was at my best, I got that way by observing computers at maximum difficulty.
23:55:51 It's boring - if you don't repeat the industry-standard opening sequences verbatim, then you lose ground, and an opponent who knows the standard sequences and how to exploit deviations from them slaughters you.
23:55:52 Observed some trends.
23:56:23 I'm very good in the middle and end games, though.
23:57:29 Doing all the extra calculations just results in wasted LEs, right?
23:57:31 I've tested myself by playing unrated games online and letting an engine help me on, say, the first 8-10 moves. Just far enough to get past the basic sequences. Then I play on my own, and do pretty well.
23:57:40 So I lose in the openings.
23:58:05 Makes sense.
23:58:35 I could memorize them if I wanted to, I guess, but I've just never felt the motivation. Too much other stuff to do.
23:59:09 Ok - got to get up in 6 hours and drive to Austin. Better rest. Have a good night.
23:59:32 Yep, goodnight.
23:59:44 Seems like I always rest because I think I have to, not because I'm really tired.
23:59:59 --- log: ended forth/10.04.23
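The "everything gets computed in parallel, then the result your opcode needs gets selected" ALU style described at 23:52:48 comes out roughly like this in Verilog; the four-operation opcode set here is invented purely for illustration (Seaforth's real instruction set is of course larger):

    // Every result is computed unconditionally, every cycle; the opcode
    // only drives the final mux. Simple and fast, but, as noted in the
    // log, probably not the most power-efficient method.
    module alu #(parameter WIDTH = 16) (
        input  wire [WIDTH-1:0] a, b,
        input  wire [1:0]       op,
        output reg  [WIDTH-1:0] y
    );
        wire [WIDTH-1:0] sum  = a + b;   // all four compute in parallel
        wire [WIDTH-1:0] ands = a & b;
        wire [WIDTH-1:0] xors = a ^ b;
        wire [WIDTH-1:0] nota = ~a;

        always @* begin
            case (op)                    // the opcode just selects
                2'b00: y = sum;
                2'b01: y = ands;
                2'b10: y = xors;
                2'b11: y = nota;
            endcase
        end
    endmodule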