00:00:00 --- log: started retro/12.04.17
05:04:24 --- quit: jyfl987 (Quit: leaving)
08:20:21 --- join: Kumul (~Kumul@adsl-72-50-64-45.prtc.net) joined #retro
09:31:53 --- join: __tthomas__ (~Owner@24.130.7.34) joined #retro
09:49:12 --- quit: Kumul (Quit: gone)
10:42:01 --- join: Mat2 (5b4085c5@gateway/web/freenode/ip.91.64.133.197) joined #retro
10:42:13 <Mat2> Hi @ all
10:49:22 <__tthomas__> hey dude
10:49:44 <__tthomas__> have you seen dynasm Mat2
10:50:00 <Mat2> Hi tthomas!
10:50:34 <Mat2> yes, sadly only after I had programmed my own inline assembler for C
10:50:51 <__tthomas__> heh
10:51:29 <Mat2> however, I now use LLVM
10:51:59 <Mat2> for that purpose
10:51:59 <__tthomas__> LLVM is way too big for me to consider using
10:52:39 <Mat2> its IR layer is just a tiny module
10:52:54 <Mat2> but it offers some nice code analysis for free
10:53:54 <__tthomas__> I think if I had to choose something for code generation I would probably go with FASM..
10:54:31 <__tthomas__> it can compile itself for windows/freebsd/linux and macosX
10:54:33 <Mat2> I now have some performance results for my vm compared to gforth with its basic native-code generator (they call that "building of dynamic super-instructions")
10:54:39 <__tthomas__> plus they are working on an arm version..
10:55:32 <__tthomas__> how does it compare?
10:55:47 <Mat2> gforth-fast (0.7.0):
10:56:08 <Mat2> real 0m1.499s user 0m1.484s sys 0m0.008s
10:56:15 <Mat2> my vm:
10:56:32 <Mat2> real 0m1.636s user 0m1.628s sys 0m0.004s
10:57:00 <Mat2> that is a TTC interpreter against gforth with all optimizations and native-code generation
10:57:19 <__tthomas__> what type of program are you compiling/running? I was thinking of writing a sudoku number series generator, which might be nice for benchmarks...
10:57:55 <Mat2> it's a raw threading test iterating over $FFFFFFF dispatches
10:58:16 <Mat2> here is the current ngaro version for comparison:
10:58:41 <Mat2> real 1m3.402s user 1m3.296s sys 0m0.012s
10:59:16 <Mat2> I'm quite happy with the result
10:59:27 <__tthomas__> nice..
10:59:39 <__tthomas__> all 3 c versions?
10:59:43 <Mat2> yes
11:01:40 <Mat2> I haven't yet incorporated the instruction compiler, so vm-code is only interpreted
11:01:44 <__tthomas__> would be neat to see how that compares to an interpreted language like python, with and without pypy
11:02:35 <Mat2> I have some tests against Parrot with its JIT compiler and LuaJIT 1.6, and it beats both by 50-75%
11:02:51 <__tthomas__> Parrot still exists?
11:03:09 <Mat2> yes, they have a new version out
11:03:16 <Mat2> this month
11:04:17 <Mat2> if you want some comparison, I will test nAVM against pypy and python
11:04:48 <__tthomas__> I know pypy in a lot of cases is roughly 50% of C speed and in some cases can be faster..
11:05:15 <Mat2> but that needs some investigation first, because it can be that their code generators generate inefficient vm-code
11:07:02 <__tthomas__> stack optimizations seem to be the most difficult thing..
11:07:22 <Mat2> hmm, pypy is probably a good target to aim at for the instruction compiler :)
11:08:56 <Mat2> another candidate is LuaJIT 2.x
11:09:05 <__tthomas__> I haven't really played with pypy much, mostly since all the python programs I have ever written were IO bound, so I wouldn't be able to tell the difference..
11:10:03 <__tthomas__> I do notice the speed difference when loading source files in the python version of ngaro though, but once it is running vs. compiling it's more than useable..
11:11:23 <Mat2> the problem with JIT compilers lies in the inherent trade-off between interpretation time and optimization overhead
11:11:55 <__tthomas__> yeah, I think pypy doesn't even try to optimize a loop until it runs 1013 or so times..
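For readers unfamiliar with the technique, here is a minimal sketch of a token-threaded (TTC) dispatch loop of the kind being benchmarked above. It assumes the GCC/Clang computed-goto ("labels as values") extension; the opcode set, the vm_run name, and the toy program are illustrative, not taken from Mat2's vm.

    /* Token-threaded dispatch: each instruction is a small token used to
       index a table of handler addresses; one table lookup per dispatch.
       Requires the GCC/Clang computed-goto extension. */
    #include <stdint.h>
    #include <stdio.h>

    enum { OP_NOP, OP_DEC, OP_JNZ, OP_HALT };

    static long vm_run(const uint8_t *code, long counter) {
        static const void *table[] = { &&op_nop, &&op_dec, &&op_jnz, &&op_halt };
        const uint8_t *ip = code;
    #define DISPATCH() goto *table[*ip++]
        DISPATCH();
    op_nop:  DISPATCH();
    op_dec:  counter--; DISPATCH();
    op_jnz:  if (counter) ip = code;   /* restart the loop body while counter != 0 */
             DISPATCH();
    op_halt: return counter;
    }

    int main(void) {
        /* a raw dispatch test in the spirit of the $FFFFFFF-iteration benchmark */
        const uint8_t prog[] = { OP_DEC, OP_JNZ, OP_HALT };
        printf("%ld\n", vm_run(prog, 0xFFFFFFFL));
        return 0;
    }

gforth's "dynamic super-instructions" go further by concatenating the native code of adjacent primitives, eliminating most of these per-dispatch lookups, which is why a plain TTC loop landing within roughly 10% of it (1.636s vs. 1.499s above) is a respectable result.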
11:12:21 <Mat2> so in some cases interpreted code will execute faster, if the time overhead for compilation outweighs the speed advantage
11:12:59 <__tthomas__> why don't JITs keep that information around for subsequent runs?
11:13:18 <__tthomas__> then it can skip anything that has already been optimized..
11:13:48 <Mat2> because of possible state changes
11:14:07 <__tthomas__> state changes? you can tell if a file has been altered..
11:14:29 <Mat2> dynamic languages can alter the program state at run-time
11:14:57 <Mat2> but for static languages like C or Java that is possible, of course
11:14:57 <__tthomas__> yeah, and pypy marks those sections and doesn't optimize them..
11:15:09 <Mat2> ???
11:15:20 <__tthomas__> it only optimizes a subset, so if a function is always called with an int argument, it creates an int version..
11:15:53 <__tthomas__> otherwise it falls back to interpretation..
11:16:25 <Mat2> seems the compiler does not generate AST information
11:16:26 <__tthomas__> though most code is not that dynamic, always called with the same type of arguments for a given program..
11:17:16 <__tthomas__> I don't think it does the AST method.. it optimizes python bytecode..
11:17:38 <__tthomas__> removes all the guards and checks needed for dynamic methods..
11:17:58 <Mat2> so pypy relates to a static python variant, I think
11:18:38 <Mat2> that's the same solution as implemented by BASIC compilers in the 80's
11:18:59 <Mat2> not very efficient
11:19:00 <__tthomas__> not sure, but I watched a few videos on it from this year's pycon.. interesting stuff, showed how python bytecode looks, and how they could eliminate a lot of the opcodes based on the types passed in...
11:20:24 <__tthomas__> it does a lot of other optimizations and JITs the reduced code as well.. so it's not just raw translation, they are graphing the flow of the entire program as it runs..
11:21:52 <Mat2> weird
11:22:23 <__tthomas__> I thought it was strange that they weren't looking at python source at all but interpreting and reading in bytecode, though it does make sense..
11:23:19 <Mat2> probably they chose a CPS approach ... but that would only make sense to me for functional languages
11:23:39 <Mat2> CPS = continuation-passing style
11:24:17 <__tthomas__> well it is a stack based vm...
11:26:19 <Mat2> it is possible if each function consumes only one or two parameters
11:26:35 <Mat2> and any stack code can be transformed into such a representation
11:27:31 <Mat2> ok, but well, for a language like python that would require some complications
11:28:19 <__tthomas__> why? it is just dictionary lookups for addresses which are dropped on the stack, and a few opcodes for manipulating them..
11:29:51 <Mat2> the parser must split subroutine calls with more than two parameters into a sequence of routines which consume only two or one
11:30:14 <Mat2> and that requires some kind of stack analysis at run-time
11:30:36 <__tthomas__> I believe all parameters are kept in a locals dict when passing, so it can load a variable directly onto the stack when needed..
11:35:44 <Mat2> hmm
11:36:03 <__tthomas__> I don't know the exact mechanics of how the stack frame works, I think the stack frame contains bytecode with locals and a reference to the global dict..
11:36:58 <__tthomas__> but it doesn't work all that differently from Forth conceptually..
11:40:07 <Mat2> python has some similarities with basic for me
11:40:30 <Mat2> and basic stores variables inside a specific array
11:40:42 <Mat2> so does python, it seems
11:41:13 <Mat2> I do not know why such a language is based on a stack vm
11:41:24 <__tthomas__> Java and C# are as well..
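A minimal sketch of the "guard and specialize" idea __tthomas__ describes above: compile an int-only version of an operation once a call site has only ever seen ints, keep a cheap type check in front of it, and fall back to the generic path when the guard fails. The tagged-value layout and all names here are hypothetical, not pypy's actual machinery.

    /* "Guard and specialize": cheap type check in front of a compiled
       fast path, with fallback to the generic/interpreted path. */
    #include <stdio.h>

    typedef enum { TAG_INT, TAG_OTHER } tag_t;
    typedef struct { tag_t tag; long i; } value_t;

    /* generic path: stands in for full dynamic dispatch in the interpreter */
    static value_t add_generic(value_t a, value_t b) {
        value_t r = { TAG_OTHER, 0 };
        /* ...dictionary lookups, method resolution, boxing, etc... */
        return r;
    }

    /* specialized path, compiled once the call site has only ever seen ints;
       inside it, all dynamic guards and checks have been removed */
    static value_t add_int_int(value_t a, value_t b) {
        value_t r = { TAG_INT, a.i + b.i };
        return r;
    }

    static value_t add(value_t a, value_t b) {
        if (a.tag == TAG_INT && b.tag == TAG_INT)  /* the guard */
            return add_int_int(a, b);              /* fast path */
        return add_generic(a, b);                  /* deoptimize: fall back */
    }

    int main(void) {
        value_t x = { TAG_INT, 40 }, y = { TAG_INT, 2 };
        printf("%ld\n", add(x, y).i);  /* 42, via the specialized path */
        return 0;
    }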
11:42:07 <Mat2> ok, these two are designed around huge vm designs which rely on compilation for speed
11:42:12 <__tthomas__> very different from the C stack.. which isn't very stacklike at all..
11:42:49 <Mat2> and stack code can be seen as one variant of SSA representation
11:43:01 <Mat2> so this makes sense
11:43:17 <Mat2> that's what I mean ^
11:43:39 <__tthomas__> the basics I have seen were just text interpreters that scanned strings and ran through a giant case statement.. kept all variables inside of a big array.. very very slow..
11:44:31 <__tthomas__> but easy to implement..
11:44:42 <Mat2> ok, so straight back-to-the-past designs *lol*
11:45:35 <__tthomas__> I believe everything useful in computer science was created pre 80's
11:45:58 <__tthomas__> all the new stuff is just re-invention/discovery...
11:46:26 <Mat2> yes, but the old stuff was better implemented in most cases
11:46:45 <__tthomas__> heh.. well I need food... catch you later..
11:46:53 <Mat2> see you
12:19:28 <__tthomas__> back, that was tasty
12:22:28 --- quit: Mat2 (Ping timeout: 245 seconds)
12:23:52 --- join: Kumul (~Kumul@adsl-72-50-64-45.prtc.net) joined #retro
12:25:35 --- join: Mat2 (5b4085c5@gateway/web/freenode/ip.91.64.133.197) joined #retro
12:25:56 <Mat2> had some connection troubles
12:26:24 <__tthomas__> I see that..
12:27:01 <__tthomas__> saw something interesting today, using roll for branching..
12:27:21 <__tthomas__> well it was actually for implementing MIN without conditionals, it just gave me an idea..
12:27:43 <__tthomas__> : if rot 1+ roll swap drop do ;
12:28:03 <Mat2> what is the stack usage of roll?
12:28:22 <__tthomas__> grabs the indexed element of the stack and moves it to the top..
12:29:48 <Mat2> ok
12:29:54 <__tthomas__> so 0 roll is a no-op, 1 roll is swap..
12:30:09 <Mat2> I call this instruction pick
12:30:22 <__tthomas__> I thought pick made a copy and left the original untouched
12:30:40 <__tthomas__> so 1 pick is the same as over
12:31:03 <Mat2> that's the ANSI definition, yes
12:32:54 <Mat2> i've implemented some instructions for roll 1+, roll 1-, implementing indexed addressing
12:34:23 <Mat2> but found it easier to use stack elements as index registers directly
12:34:48 <__tthomas__> yeah.. I agree, pick and roll are the goto of forth.. :)
12:35:11 <__tthomas__> if you use them a lot, you are probably doing something wrong..
12:35:33 <Mat2> then you are writing not-well-factored code, for sure
12:36:36 <Mat2> however, indexed addressing is a weak point of traditional forth systems
12:37:10 <Mat2> that's why mr. moore's current cpu's implement two index registers (x and y)
12:37:34 <__tthomas__> true, but you aren't supposed to be using the stack as an array.. :) pick and roll are useful for implementing primitive words if you are implementing the stack as an array, but that is an implementation detail, not something you should rely on..
12:38:03 <__tthomas__> that is also why chuck's definition of dup is.. : dup !a a a ;
12:38:23 <__tthomas__> though his cpu has a real stack..
12:39:03 <Mat2> I chose the approach of using each stack element as a possible index register
12:39:42 <__tthomas__> yeah, stacks on the GA-144: the top two elements of the data stack are registers, and the top of the return stack is a register..
12:40:33 <__tthomas__> register b is only for addresses, register a is general purpose, addresses and data.. I also think b is smaller so it can't be used for memory outside of the chip..
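A sketch of PICK and ROLL over a stack kept as a C array, matching the semantics agreed on above (0 roll is a no-op, 1 roll is swap, 1 pick is over). The stack_t layout is illustrative; as __tthomas__ notes, these words are cheap only when the stack really is an array.

    /* PICK and ROLL as primitives over a stack-as-array. */
    #include <stdio.h>

    typedef struct { long cell[64]; int depth; } stack_t;

    /* n PICK: copy the n-th element below TOS to the top (1 pick = over) */
    static void pick(stack_t *s, int n) {
        s->cell[s->depth] = s->cell[s->depth - 1 - n];
        s->depth++;
    }

    /* n ROLL: move the n-th element below TOS to the top (1 roll = swap) */
    static void roll(stack_t *s, int n) {
        long x = s->cell[s->depth - 1 - n];
        for (int i = s->depth - 1 - n; i < s->depth - 1; i++)
            s->cell[i] = s->cell[i + 1];   /* close the gap */
        s->cell[s->depth - 1] = x;
    }

    int main(void) {
        stack_t s = { { 1, 2, 3 }, 3 };    /* 1 2 3, TOS = 3 */
        roll(&s, 2);                       /* 2 roll == rot:  2 3 1 */
        pick(&s, 1);                       /* 1 pick == over: 2 3 1 3 */
        for (int i = 0; i < s.depth; i++) printf("%ld ", s.cell[i]);
        printf("\n");
        return 0;
    }

The O(n) shuffle inside roll is one concrete reason these words are "the goto of forth": heavy use means the stack is being treated as random-access memory.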
12:41:32 <Mat2> I have these six instructions: ldn, stn, ldi, sti, ldn, stn
12:42:15 <Mat2> ldn n - push the value at the pointer in stack element n onto the stack
12:42:56 <Mat2> stn n - store tos at the address of the pointer in stack element n
12:43:24 <Mat2> ldi n - load the value from the pointer at element n and increment the pointer
12:43:31 <__tthomas__> yeah, I was thinking of decomposing all the ngaro instructions into 1-operation instructions, just to see how many unique things they actually do..
12:44:30 <Mat2> you can come down to 8 instructions with some tricks
12:45:41 <Mat2> if each of them operates with an immediate value or stack index
12:46:07 <__tthomas__> not exactly what I mean.. the word dup copies the top of stack to a variable, increments the stack pointer, then copies the variable to the new position of the stack pointer.. so it is several instructions, I was going to make a stack increment, load stack into register, load register into stack..
12:46:35 <Mat2> ah ok
12:46:48 <__tthomas__> it is just a curiosity, nothing else..
12:47:13 <Mat2> my ngaro vm at github implements such instructions
12:47:41 <Mat2> I came to some 32 primitives
12:48:01 <__tthomas__> interesting
12:48:52 <__tthomas__> I wasn't going to make them 0-operand, I was going to do like mov a, [sp]
12:49:05 <__tthomas__> inc sp
12:49:59 <Mat2> the instruction compiler implements an accumulator-store architecture like on the m6809
12:50:37 <__tthomas__> not familiar with the 6809 instruction set, only 386
12:50:37 <Mat2> but the design was a failure
12:52:07 <Mat2> it would take large efforts in register analysis, like for traditional compilers such as gcc, to compile optimized native-code for modern superscalar pipelined architectures
12:52:22 <Mat2> with this approach
12:53:09 <Mat2> whereas 0-operand stack-code would just need a brain-dead code generator parsing the stack code like a one-sided SSA tree
12:53:12 <__tthomas__> I think gcc only needs 3 registers, may even be as little as 1
12:53:50 <__tthomas__> which makes me wonder how bad gcc code generation is for non intel platforms
12:54:49 <Mat2> that's because typical in-order designs depend on static register scheduling
12:55:13 <Mat2> and PowerPC on branch-target analysis
12:55:40 <Mat2> the newer gcc versions are quite efficient at this though
12:56:24 <__tthomas__> hmm.. historically gcc code is much slower than microsoft's and intel's c compilers..
12:56:41 <__tthomas__> especially intel's..
12:57:34 <__tthomas__> not that I care, I don't run gcc with the optimization flag anymore, ever since its optimizer ate my code..
12:57:41 <Mat2> the Intel compiler can auto-vectorize code
12:58:43 <Mat2> clang for example is not as efficient as gcc for threaded code
12:58:50 <__tthomas__> not sure what you mean, but it can use all opcodes and does the best job of using the right ones for the architecture.. microsoft's generates fast code without cheating as they target lower end cpu's, but it is still pentium and above..
13:00:10 <Mat2> I only know the Visual C compiler from 2003, and that generates not very well optimized code by default, in comparison with gcc
13:00:41 <__tthomas__> is that visual c++ 6.0, what is that?
13:00:56 <__tthomas__> compare it to gcc from the same era though..
13:01:11 <Mat2> the C compiler from Visual Studio
13:01:41 <__tthomas__> That must be 6.0, the version after that was 2005...
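A sketch of how the stack-indexed load/store instructions Mat2 describes above (ldn, stn, ldi, sti) might look inside a switch-dispatched C vm. Only the spelled-out semantics are taken from the log; the vm structure, the operand encoding, stn/sti popping the stored value, and the post-increment placement for sti are assumptions.

    /* Stack elements double as index registers: each of these opcodes
       treats data-stack element n (0 = TOS) as a pointer into memory. */
    #include <stdint.h>
    #include <stdio.h>

    enum { LDN, STN, LDI, STI, HALT };

    typedef struct {
        intptr_t ds[32];   /* data stack; ds[sp] is TOS */
        int      sp;
    } vm_t;

    static void run(vm_t *vm, const uint8_t *code) {
        for (;;) {
            uint8_t op = *code++;
            uint8_t n  = *code++;                          /* stack-index operand */
            intptr_t *p = (intptr_t *)vm->ds[vm->sp - n];  /* element n as pointer */
            switch (op) {
            case LDN: vm->ds[++vm->sp] = *p; break;        /* push value at pointer */
            case STN: *p = vm->ds[vm->sp--]; break;        /* store TOS through pointer */
            case LDI: vm->ds[++vm->sp] = *p;               /* load, then bump pointer */
                      vm->ds[vm->sp - 1 - n] += sizeof(intptr_t); break;
            case STI: *p = vm->ds[vm->sp--];               /* store, then bump pointer */
                      vm->ds[vm->sp - n + 1] += sizeof(intptr_t); break;
            case HALT: return;
            }
        }
    }

    int main(void) {
        intptr_t mem[2] = { 42, 99 };
        vm_t vm = { .sp = 0 };
        vm.ds[0] = (intptr_t)mem;                  /* TOS holds a pointer into mem */
        const uint8_t prog[] = { LDI, 0, LDN, 1, HALT, 0 };
        run(&vm, prog);
        printf("%ld %ld\n", (long)vm.ds[1], (long)vm.ds[2]);  /* 42 99 */
        return 0;
    }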
13:02:40 <Mat2> right
13:03:31 <__tthomas__> I use lcc on windows if I am using a free C compiler, it compiles fast and is easier to get set up than cygwin or mingw
13:03:44 <__tthomas__> though it doesn't support c++
13:04:00 <__tthomas__> or I will use tcc if I don't care about library support
13:04:21 <Mat2> i've found pcc interesting
13:04:34 <__tthomas__> yes definitely, I hope all the bsd's switch..
13:04:49 <__tthomas__> though freebsd is least likely to as it doesn't use pkgsrc
13:05:01 <Mat2> the OpenBSD folk seem to swear by it
13:05:23 <__tthomas__> of course, it compiles very fast and they don't have to worry about gcc dropping a platform out of the blue, or changing ABI's
13:05:38 <__tthomas__> when developing, the number one thing is compile speed..
13:06:01 <Mat2> yes, or turning up new, obscure bugs
13:06:27 <__tthomas__> god I still have nightmares of gcc 2.95 on BeOS
13:07:03 <Mat2> by the way, I need to install AROS on my netbook
13:07:17 <__tthomas__> amiga OS eh?
13:07:49 <__tthomas__> I bought an amiga off a guy for 60 bucks years ago, but never was able to find a power brick for it so I gave it away..
13:08:21 <Mat2> an AmigaOS 3.1 clone for standard PC's
13:08:53 <__tthomas__> yeah, pretty cool.. that is why I am so interested in propellers, I so want to make a homebrew pc like in the 80's..
13:09:22 <Mat2> well, it now looks more like a crossover with MorphOS
13:09:47 <__tthomas__> have you seen Symbos?
13:10:34 <Mat2> yes, next week I will present it at an education class as an example of microkernel design
13:10:54 <Mat2> I mean efficient microkernel design
13:10:55 <__tthomas__> I wonder if that is runnable on a propeller with a z80 core..
13:11:07 <Mat2> sure, it would run nicely
13:11:15 <__tthomas__> I think memory would be the issue..
13:11:44 <Mat2> ok, take a hive
13:11:56 <__tthomas__> heh, or a ramblade..
13:12:11 <__tthomas__> actually, I have a hydra and there is a 512K expansion card for it..
13:13:58 <Mat2> reads fine
13:14:53 <Mat2> you will need some adjustments to the graphics module
13:16:14 <__tthomas__> GEOS might be a better fit
13:16:17 <Mat2> but because the hydra uses pattern-based modes you can use the msx version as a template, I think
13:16:35 <Mat2> GEOS for the C64?
13:16:39 <__tthomas__> yeah
13:16:57 <__tthomas__> it runs on less memory
13:18:10 <Mat2> I have the kernel sources here, it would not take much effort to implement a geos kernel for the propeller (or better, AVR32)
13:18:31 <__tthomas__> which avr32?
13:18:38 <Mat2> yes
13:18:40 <__tthomas__> uc3 or ap7000?
13:19:32 <__tthomas__> pic32 is actually really awesome, and it would run great on that..
13:19:48 <Mat2> ok
13:20:06 <__tthomas__> unfortunately some of the peripherals built into the chip are proprietary and require extra licensing, I think ethernet and possibly the can interfaces..
13:20:09 <Mat2> I mean one with the AVR32 7000 core
13:20:23 <__tthomas__> NGW-100 dev board?
13:20:38 <Mat2> don't know that board
13:21:11 <__tthomas__> it is their router board based on the avr32 AP7000, it was really cheap, around 89$, and ran linux, had dual ethernet..
13:22:11 <Mat2> every board with 48 kB ram will do
13:22:44 <__tthomas__> Atmel told me they discontinued the AP7000 and the only SoC they offered was the SAM Arm series.. UC3 was their newer low power version of avr32, but no mmu and no linux support..
13:25:05 <Mat2> that's new to me
13:27:26 <Mat2> they seem to concentrate on their UC3 series
13:30:30 <Mat2> ... with 128 kB on-chip SRAM
13:31:56 <Mat2> EVK-1100 dev board
13:33:57 <Mat2> so this should be ok
13:34:25 <Mat2> and I bet you can run geos from an emulator
13:34:26 <__tthomas__> yeah, I loved the AP7000, it was the best embedded linux environment I ever programmed for..
13:35:07 <Mat2> I question why they abandoned it
13:35:40 <__tthomas__> I think it was because they already had a SoC in the form of ARM, and arms are pretty hot, so it wasn't worth the engineering effort..
13:36:34 <__tthomas__> it had built in AC97, USB, dual ethernet macs, video.. very awesome chip.. my work chose to not use them anymore because they were discontinued..
13:37:00 <__tthomas__> so we moved to microblaze on fpga, then coldfire PPC
13:37:32 <Mat2> PowerPC?
13:37:37 <__tthomas__> yeah..
13:37:58 <Mat2> where does one find a cheap PowerPC board?
13:39:13 <__tthomas__> http://www.freescale.com/webapp/sps/site/taxonomy.jsp?code=MPC8XXX7XXX
13:39:24 <__tthomas__> Sorry, MPC, same architecture I believe..
13:39:57 <__tthomas__> Genesi sells powerpc I think, for running amiga OS
13:40:09 <Mat2> yes, but their boards are expensive
13:40:19 <__tthomas__> http://www.genesi-usa.com/products/pegasos
13:40:48 <__tthomas__> in fact they use the processor I just showed you..
13:41:37 <__tthomas__> Looks like pegasos is discontinued anyhow, not real surprised by that.. Amiga Inc. has really been f*ing up these last few years..
13:44:06 <Mat2> the only hardware seller left is Genesi, I think
13:44:29 <Mat2> and they offer this X1000 board for OS 4.1
13:44:42 <__tthomas__> yeah, and those are expensive..
13:45:10 <Mat2> it's a shame
13:45:11 <__tthomas__> a lot of money for an os with very little 3rd party software support..
13:45:33 <__tthomas__> it took them forever to get the OS ready, then there was all the fighting over the hardware...
13:45:56 <Mat2> yeah, but there is AROS
13:46:02 <__tthomas__> yeah, fortunately..
13:46:14 <__tthomas__> I understand there were a lot of displaced amiga fans..
13:46:36 <__tthomas__> a lot of the beos users were amiga refugees
13:47:04 <Mat2> the best OS I ever used was OS/2 Warp
13:47:21 <__tthomas__> yeah, I heard good things.. for me it was pc dos 5.0
13:47:57 <__tthomas__> though win7 is nice I must say..
13:48:02 <Mat2> have you heard about FreeDOS-32?
13:48:09 <__tthomas__> yeah.. :)
13:48:40 <__tthomas__> tried to run it a few times, but never could figure out internet on it, and these days a computer is almost useless without internet..
13:49:01 <__tthomas__> unless you run it in an emulator, or don't have network connectivity to begin with..
13:49:31 <Mat2> I think it's a nice approach
13:49:55 <__tthomas__> yeah, I want a limited-resource single user os..
13:50:03 <__tthomas__> something that is quick and just works..
13:50:31 <Mat2> same for me
13:50:58 <Mat2> there is not much choice in this regard
13:51:05 <__tthomas__> yeah, I hate all the phone os's..
13:51:17 <__tthomas__> none of them make a good phone..
13:52:24 <Mat2> those are gaming os designs for people who play with phones
13:52:28 <__tthomas__> symbian was nice..
13:52:35 <Mat2> epoc
13:53:10 <__tthomas__> well browsing on a phone is terrible, email is adequate at best, and phone features/contact management is infuriating..
13:53:44 <__tthomas__> I wish there was a commandline phone, could just type, dial mom, message wife.. search contacts for
13:54:43 <Mat2> would be nice, but I bet most people demand finger gestures and screen cleanings
13:55:35 <__tthomas__> which is funny, considering how text driven the world actually is.. web search would not work as a gui.. and most people communicate through documents and email.. not gui's and icons..
13:56:04 <Mat2> how about a CleanMyScreen OS from Cleenex Software? *lol*
13:56:14 <__tthomas__> eh?
13:57:25 <Mat2> Cleenex is a company which sells so-called brain-mind software for relaxing
13:57:54 <__tthomas__> Cleenex sells software?
<__tthomas__> is this a different company than Kleenex?
13:58:03 <Mat2> yes
13:58:08 <__tthomas__> ah, that was the confusion.. :)
13:59:42 <Mat2> that's software for crazy people fearing that thinking can damage their minds
14:00:27 <__tthomas__> heh
14:02:10 <__tthomas__> hmm.. wonder if android is moddable for this.. I don't have the time or necessarily the interest, but it would be awesome..
14:02:44 <Mat2> as far as I know, you can control android completely from within a shell
14:03:08 <Mat2> (which must be installed first of course)
14:03:09 <__tthomas__> yeah, you can, I have python installed on my phone.. I am not sure about answering calls though..
14:03:27 <__tthomas__> or replacing built in functionality..
14:03:39 <__tthomas__> I think even phonegap and javascript can do most things..
14:04:04 <Mat2> there exist some kinds of shell programs and scripts for this
14:04:30 <__tthomas__> yeah, with python I can write a script to dial the phone, send a text, print out gps, even open the calendar..
14:04:46 <Mat2> you need a rooted android device for sure
14:05:12 <__tthomas__> nah, you just need a carrier that doesn't suck.. my phone isn't rooted and I can do most things..
14:05:25 <Mat2> ok
14:06:39 <Mat2> my plan is at some point to buy a cheap android phone, get rid of the useless gui and use a decent shell
14:06:47 <__tthomas__> my s60 phone was awesome for this type of thing, but our new carrier's band for wifi doesn't support higher speeds so I switched to android..
14:07:46 <__tthomas__> I really like that idea..
14:09:36 <Mat2> hopefully newer Android versions will not implement some kind of user protections...
14:10:50 <Mat2> against os modifications
14:11:08 <Mat2> need some sleep, see you
14:11:14 <__tthomas__> it isn't android that enforces that, it's the hardware maker
14:11:16 <__tthomas__> later man..
14:11:24 <Mat2> ciao
14:11:30 --- quit: Mat2 (Quit: Page closed)
17:21:22 --- quit: __tthomas__ (Quit: Leaving.)
20:40:57 --- join: jyfl987 (~jyf@unaffiliated/yunfan) joined #retro
22:52:28 --- quit: Kumul (Quit: gone)
23:59:59 --- log: ended retro/12.04.17