00:00:00 --- log: started retro/12.04.13
07:17:20 --- join: Kumul (~Kumul@adsl-72-50-76-18.prtc.net) joined #retro
08:46:50 --- quit: jyfl987 (Quit: leaving)
10:22:22 --- join: __tthomas__ (~Owner@24.130.7.34) joined #retro
12:16:24 http://sprunge.us/hjIJ <- experimental patch using a helper function for strings
13:12:32 --- join: Mat2 (5b4085c5@gateway/web/freenode/ip.91.64.133.197) joined #retro
13:16:41 hello
14:38:34 <__tthomas__> hey
14:41:10 Hi 1
14:41:14 oops, Hi !
14:42:33 <__tthomas__> how are things?
14:44:33 coding
14:45:18 i'm testing the vm against gforth, LuaJIT and gcc
14:48:52 <__tthomas__> awesome.. I wouldn't mind playing with opcode parsing and reduction algorithms.. best way to make faster code, get rid of unneeded code..
14:49:47 <__tthomas__> it would be like writing a sudoku solver, fun stuff..
14:51:25 you need code tests, code tests and ... code tests for performance
14:51:35 :)
14:52:25 <__tthomas__> You could do the cheesy thing and just inline everything, completely removing calls and returns..
14:52:56 that's one of my plans
14:53:54 but most of the performance advantage relates to reducing BTB mispredictions
14:54:42 <__tthomas__> I was reading a nice paper on how processor speed keeps increasing and memory speed doesn't keep up..
14:55:03 <__tthomas__> basically it was showing how bad C++ code really can be..
14:56:40 memory latencies are a weak point of superscalar pipelined cpu designs
14:57:27 <__tthomas__> It is also why video cards have such large memories these days, too expensive to send data over to the video card in real time..
15:00:19 yes, and they profit from SRAM
15:01:36 <__tthomas__> also putting memory directly into a register for an ALU instruction and not hitting the stack is a big win..
15:03:21 hmm, i think these are classical stream processors with big register files and SIMD processing
15:03:43 or VLIW architectures
15:04:25 so the best view would be to see the whole graphics board as a cpu
15:26:02 I think
15:28:06 <__tthomas__> I think run time profiling is the way to go regardless.. see what it actually does, then try various optimization paths..
15:32:00 so you're thinking about JIT optimisation ?
15:33:35 <__tthomas__> Wasn't thinking jit, was more thinking run it under a profiler, then it makes a graph and produces a whole new image.. Then next time you run it, it uses the new image and repeats.. Till you tell it to quit profiling..
15:34:10 <__tthomas__> This way it could detect dead branches and loops and remove/optimize/inline..
15:35:38 <__tthomas__> I was thinking of this for cases where you output a word as an application, so it would start a whole new image with just opcodes, no dictionary, no unused words...
15:38:15 my vm is able to compile new instructions, it wouldn't take much effort to combine this with some form of static profiling as you explained
15:38:43 I will test these out
15:40:36 <__tthomas__> I was thinking of doing something for a debugger: every step it takes, it records just what has changed for that opcode and keeps track of this.. would let you backtrace and change variables, single step, take branches.. I think it could be useful.. theoretically it wouldn't take a ton of memory..
15:43:36 a profiling debugger, good idea !
15:45:00 <__tthomas__> You could even dedicate a few ports to it for controlling the VM externally, and reporting stacks..
15:47:14 <__tthomas__> I was already kind of thinking of ripping off the protocol that ipython uses for supporting multiple backends.. they have a server that runs code and clients connect.. streams for errors, input, output, locals, globals, etc.. It works really well, even history and the last few things typed..
15:50:23 it's half past midnight here, I need some sleep, see you later
15:50:34 <__tthomas__> later man..
15:50:45 ciao
15:50:51 --- quit: Mat2 (Quit: Page closed)
16:15:45 --- join: CIA-159 (~CIA@cia.atheme.org) joined #retro
16:17:12 --- quit: CIA-98 (*.net *.split)
16:38:53 --- quit: Kumul (Ping timeout: 264 seconds)
17:56:10 --- join: Kumul (~Kumul@adsl-207-204-135-41.prtc.net) joined #retro
23:11:40 --- quit: Kumul (Quit: gone)
23:59:59 --- log: ended retro/12.04.13