00:00:00 --- log: started retro/12.09.26
00:17:59 --- join: tangentstorm (~michal@108-218-151-22.lightspeed.rcsntx.sbcglobal.net) joined #retro
06:32:41 --- join: kumul (~kumul@adsl-72-50-90-224.prtc.net) joined #retro
10:16:46 --- join: Mat2 (~quassel@91-64-133-197-dynip.superkabel.de) joined #retro
10:18:22 --- quit: Mat2 (Client Quit)
10:18:52 --- join: Mat2 (~quassel@91-64-133-197-dynip.superkabel.de) joined #retro
10:19:06 hello
10:22:50 .
11:40:28 ciao
11:40:35 --- part: Mat2 left #retro
11:58:50 howdy
11:58:54 hi docl
11:59:20 hi crc
11:59:27 * docl has been playing with haskell some more
11:59:39 trying to get a handle on IO
11:59:53 it's slooowly starting to click in place.
12:00:58 I was never able to get io to build when I last looked at it
12:01:46 I figured out how to implement getLine to respond to other delimiters besides the enter key, but in the implementation I am looking at, backspace does not seem to work.
12:10:20 is there anything interesting in the language that I should look at?
12:50:51 --- join: Mat2 (~quassel@91-64-133-197-dynip.superkabel.de) joined #retro
12:50:58 Hi !
13:15:16 crc: I'm kind of wondering if it might be worth looking into lazy evaluation and monads (which let you keep IO and nondeterministic stuff outside of pure functions)
13:21:00 * tangentstorm is a big fan of haskell
13:23:36 tangentstorm: cool. I'm still in the fairly early stage of learning it.
13:26:45 I'm trying to think how it would mix with forth, when you don't really have types in forth... :D
13:26:46 i'm working on a language that's sort of like pascal + haskell, but built from the ground up in forth
13:26:58 i wanted to be able to take a type-free imperative language, and then add the type system as a library... so it's almost like a lint tool for forth
13:27:48 that way you're not stuck trying to extend the language in a language that lacks the feature you'd want most in order to extend it :D
13:40:39 tangentstorm: I envision a functional forth
13:43:33 (but without a monad-like concept as in haskell it would be of limited use)
13:44:48 Mat2: well, you kind of have everything you need to do monads in retro... think about it: they're just changing the execution context. in forth, that means swapping out dictionaries
13:46:47 yes, but monads do not only hold state but also handle synchronisation for parallel processing. However, there is a way to get this feature in retro, no doubt
13:47:30 ... the compiler state that toggles between executing words at compile time or compiling them for runtime is pretty much the same thing as the Either or Maybe monad.
13:51:49 hmm, I will include an experimental monad concept in my retro conversion
13:53:12 after implementing hashed dictionaries
13:53:37 you pretty much would need to change the dictionary structure, so that each constructor in the monad had its own dictionary... or you just compile a case statement into each procedure... it's not really terribly hard, it's just not what ngaro is optimized for. you just build a dynamic layer on top of what's already there... but then you've got to lift everything else up into the type system too or you're not getting much
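(A side note on docl's getLine remark near the top of the log: below is a minimal Haskell sketch of reading input up to an arbitrary delimiter while letting backspace drop the last collected character. It is only a guess at what such a function could look like, not docl's code; the name getUntil and the '\DEL'/'\b' handling are assumptions.)

    import System.IO (BufferMode (NoBuffering), hSetBuffering, hSetEcho, stdin)

    -- Read characters until the given delimiter; backspace/delete removes the
    -- most recently collected character instead of ending up in the result.
    getUntil :: Char -> IO String
    getUntil delim = do
      hSetBuffering stdin NoBuffering   -- deliver characters as they are typed
      hSetEcho stdin False              -- suppress the terminal's own echo
      go []
      where
        go acc = do
          c <- getChar
          if c == delim
            then return (reverse acc)
            else case c of
              '\DEL' -> go (drop 1 acc)   -- delete
              '\b'   -> go (drop 1 acc)   -- backspace
              _      -> go (c : acc)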
13:53:51 what are you converting?
13:54:19 retro, for a new vm I've been working on over the last year
13:54:29 up to now
13:56:39 cool.. what's it do?
13:57:11 https://launchpad.net/navm
13:59:08 one advantage over ngaro is better performance and much denser encoding
14:00:41 what's a VSIB inspired ISA? :) or an ISA? ... googling...
14:01:19 Very Short Instruction Byte ;)
14:01:41 ISA = Instruction Set Architecture
14:02:24 oh that makes more sense. here i was reading about vehicle security
14:02:34 *lol*
14:03:05 the encoding bundles up to 16 instructions into an opcode
14:04:51 and the vm executes opcodes of two or three instruction bundles each iteration
14:05:00 how big is an opcode?
14:05:34 i guess i'm confused by the word "Byte"
14:05:41 8 bytes for 16 instructions
14:07:28 there are 16 basic operations which can be freely combined (I call that instruction fusion)
14:07:36 okay, so like you have a 64 bit word... oh ok
14:07:45 yes
14:08:20 that's what i was thinking... that you'd be limited to 16, but the page makes it sound like you can add more at runtime
14:09:11 i understand now
14:09:20 yes, you can compile new instructions from instruction streams which extend the instruction set
14:09:58 up to 16, which need a byte for their encoding
14:10:26 but always composed of the 16 basic operations, right?
14:10:32 yes
14:11:17 for example you can compile a fibonacci routine into an extended opcode (I1 - I16)
14:12:36 it is like some kind of microprogramming
14:12:45 i like it :)
14:13:19 good to read :)
14:13:49 it's like you're storing a procedure directly on the cpu
14:14:49 the performance is very good; for execution of $FFFFFFFFFFFFFFFF instructions, my Intel Atom N550 netbook needs:
14:14:59 on an intel box, you could store those directly in the mmx registers
14:15:12 0m13.951s
14:15:17 not mmx.. the thing that came after mmx
14:15:25 sse
14:15:46 I cache the current opcode in a 64 bit register
14:16:05 SSE
14:16:12 for comparison, gforth needs:
14:16:33 0m22.857s
14:17:12 nice :)
14:17:26 and the gforth vm is based on some kind of simple JIT compilation
14:17:40 my vm is a token threaded one
14:19:43 the hard thing for me is reading the current retro sources, because I rewrite large parts from scratch
14:20:02 and will implement some parts very differently
14:20:32 modern intel cpus have these SSE registers with 128 bits each... they support bit-rotating the entire register at once... so you could have 32 of your 4-bit instructions rotating around in a loop in each one of the 8 registers.
14:21:53 you can also get at the individual bytes
14:21:57 that is my plan for the next vm version; sadly the SSE extensions of gcc aren't compatible with clang and icc
14:22:31 (my keyboard is a bit broken)
14:22:55 probably I'll switch to assembler or OpenCL
14:23:42 in the future
14:24:01 well cool
14:25:47 i was playing with a 4-bit instruction set a while back too... i was making a cloud of virtual processors that were all limited to 256 bytes of ram
14:26:08 something like the greenarrays chips
14:26:48 i just took a wild guess at what instructions to add to it though. :D
14:27:08 what are your experiences ?
14:27:44 i got it running but i haven't written any code for it yet
14:27:47 one sec.
14:28:54 https://github.com/sabren/b4/blob/master/go/vm.pas
14:29:15 that was the instruction set i picked ( lines 23-26 )...
14:29:43 https://github.com/sabren/b4/blob/master/ref/b4vm.png
14:29:55 that was how i was going to lay out the ram in each machine
14:30:58 that's a bit similar to my instruction set
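(A rough sketch of the opcode packing Mat2 describes above: sixteen 4-bit instruction slots in one 64-bit cell, versus one instruction per cell in ngaro. The slot order, low nibble first, and the names decodeBundle/encodeBundle are assumptions, not taken from the navm sources.)

    import Data.Bits (shiftL, shiftR, (.&.), (.|.))
    import Data.Word (Word64, Word8)

    -- Unpack a 64-bit opcode into its 16 four-bit instruction slots.
    decodeBundle :: Word64 -> [Word8]
    decodeBundle op = [ fromIntegral ((op `shiftR` (4 * i)) .&. 0xF) | i <- [0 .. 15] ]

    -- Pack up to 16 slots back into one opcode; slot 0 ends up in the low nibble.
    encodeBundle :: [Word8] -> Word64
    encodeBundle = foldr step 0 . take 16
      where step slot acc = (acc `shiftL` 4) .|. fromIntegral (slot .&. 0xF)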
14:31:08 what is the function of ELS ?
14:31:16 i was modeling it after ngaro : you have 256 opcodes, the first 16 or 32 are primitive to the machine, and the rest just jump to an address in ram
14:31:42 ELS = jump if not zero (for some result flag)
14:31:49 "else"
14:32:13 er.. i guess jump WHEN zero
14:32:19 ok
14:34:16 the stacks would be limited to 8 bytes/words/whatever each... and when you push/pop it basically just shifts the entire $10 - $1F address space left or right
14:35:20 but like i said, i haven't ever tried to program it :) i was curious about the instruction set you picked
14:35:27 the instruction set of my vm is: ADD, SHL, SHR, AND, GOR, CP, DUP, DROP, SWAP, OVER, B, BS, BR, MUL, SYS, X
14:35:33 i hadn't thought about making it programmable. that's cool
14:36:50 what's GOR? and B/BS/BR/X ?
14:37:03 i'd guess SYS is a bios thing?
14:37:53 B = branch, BS = branch subroutine, BR = branch return, X = execute extended instruction (I1-I16), GOR = or
14:38:19 makes sense
14:38:21 and SYS = trampoline into system functions
14:39:45 so did you get retro to run on it?
14:40:15 before you started converting retro i mean
14:41:05 no, i've investigated some tests to assure the instruction set is Turing complete
14:42:13 at current the listener works fine, and right now I am reimplementing and rewriting most routines
14:44:49 some instruction combinations are useless, like DUP+DROP, so these are replaced with other, more complex instructions
14:46:14 RD, for example, decrements TOS and repeats the current cached opcode if TOS != 0
14:46:58 the greenarrays chips have something like that... they call it micronext
14:48:02 yes, others of these instructions implement indirect pointer access
14:48:09 so you really kind of have a variable-length instruction set there
14:48:28 that was the idea
14:50:15 the drawback is: these instructions consume two slots, like all instructions generated at runtime
14:52:53 ok, with 16 instructions in 64 bit, that is still very dense
14:53:18 definitely :)
14:54:13 i see "BSD license" but i don't see any code there...?
14:55:03 I will upload the latest sources after I find a solution for synchronization of my mercurial repo
14:55:42 at the moment I have found no way to upload it :(
14:56:33 yeah that site is really hard to navigate... i've never tried setting up an account
14:57:34 ok, so it seems I'm not alone with these kinds of problems
14:57:51 :)
14:58:01 Do you know a good hosting service ?
14:59:17 i used to run one for cvs ... i just use github . i know google code offers mercurial hosting for open source projects though.
15:00:00 I have an old assembla repo but would be unwilling to replace my old sources there
15:01:57 http://hg-git.github.com/
15:03:58 thanks, I will give github a try
15:04:44 ( if you want to try github without giving up your tools... i don't really have any great reason for recommending them except they go out of their way to make things run smoothly, and it's very easy to see what people are working on )
15:04:49 afk
15:06:00 it's after midnight here, I will go to bed
15:06:03 ciao
15:06:10 --- part: Mat2 left #retro
15:12:50 --- join: erider (~chatzilla@unaffiliated/erider) joined #retro
15:12:55 hi all
15:31:14 tangentstorm: thanks, I was wondering what the most analogous concept in forth would be. it makes sense that it is similar to compile time vs interpret time.
15:40:03 tangentstorm: hmm. that reminds me... lisp-style macros are based on that same distinction.
16:56:27 docl : yeah... immediate words pretty much do what lisp macros do... but that's just one tiny example of a monad... you really want to be able to have any number of contexts, and change them at runtime.. Like the maybe monad basically turns off execution for the rest of the block once you have "nothing" as a result instead of "just x"...
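(A small Haskell illustration of the point just above: in the Maybe monad, the first Nothing result switches off the rest of the block. The lookup-based example and the settings map are made up for illustration.)

    import qualified Data.Map as M

    divide :: Int -> Int -> Maybe Int
    divide _ 0 = Nothing
    divide x y = Just (x `div` y)

    settings :: M.Map String Int
    settings = M.fromList [("total", 84), ("parts", 2)]

    -- Every step may fail; a Nothing short-circuits everything after it.
    perPart :: Maybe Int
    perPart = do
      total <- M.lookup "total" settings   -- Just 84
      parts <- M.lookup "parts" settings   -- Just 2
      divide total parts                   -- Just 42

    missing :: Maybe Int
    missing = do
      total <- M.lookup "total" settings   -- Just 84
      scale <- M.lookup "scale" settings   -- Nothing: divide below never runs
      divide total scale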
16:59:49 lazy evaluation looks interesting
16:59:56 maybe you could get by adding a hook that ran in all the primitives, or in the bytecode handler itself... i think you'd basically have to build an interpreter
17:00:55 you can do lazy evaluation in retro with quotations :)
17:01:25 yup
17:03:30 well: i'm yanking all my pascal vm code out of that literate programming tool today, and just using regular pascal files. my goal is to get retro running in the terminal tonight
17:30:56 @crc is there a test suite somewhere for checking the vms?
17:51:32 nothing apart from being able to run the retroimage + retro tests
22:04:07 --- quit: kumul (Quit: adew)
23:59:59 --- log: ended retro/12.09.26
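(Closing note on the lazy-evaluation exchange above: in retro the delaying is spelled out with quotations, while Haskell gets the same effect from laziness by default. Below is a tiny sketch that models a quotation explicitly as a delayed computation; the names Quote, call, and ifte are invented for the example.)

    -- A quotation modelled as a computation that only runs when invoked.
    newtype Quote a = Quote { call :: () -> a }

    -- A conditional combinator in the quotation style: pick one quotation
    -- and run it; the body of the other one is never evaluated.
    ifte :: Bool -> Quote a -> Quote a -> a
    ifte c t f = if c then call t () else call f ()

    main :: IO ()
    main = print (ifte False (Quote (\_ -> error "never evaluated"))
                             (Quote (\_ -> 6 * 7)))   -- prints 42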