00:00:00 --- log: started forth/16.09.18
00:02:58 le v a p o u r w a v e a e s t h e t i q u e :: <3 :: https://www.youtube.com/watch?v=4PdDI2swEaw
11:48:38 I'm thinking I should start coding Krivine in RISC
11:48:58 I have a Raspberry Pi, and it would be a good start for my adventure: a board I couldn't do a lot of damage on
13:11:36 John[Lisbeth]: RISC isn't a language
13:11:54 John[Lisbeth]: what is this Krivine thing you keep going on about?
13:34:12 I think we need to bring ASau back.
13:35:11 Without him the channel has become dumber :)
13:56:31 hmm, gforth-fast is 15x slower than an exact C port of the same test (0-arity functions, with 2 global stacks)
13:57:14 http://lpaste.net/2178288076864880640
14:02:02 nha_, Well, you made a sort of STC in C.
14:02:11 what's STC?
14:02:51 subroutine-threaded code
14:03:17 gcc -O3 produces surprisingly good code for it
14:03:27 the restrict qualifiers were a huge speedup too
14:03:33 With -O3 it's no longer STC :)
14:04:32 But even in its naive form it's faster than a C Forth.
14:05:18 Forth seems pretty cool, but the lack of high-performance implementations is kind of a buzzkill
14:05:22 Of course, it's possible to generate machine code from C and even do JIT, but it seems that Forth system writers prefer not to deal with it.
14:05:24 especially since it's kind of a low-level language
14:06:37 and gforth is producing like 350k images, which isn't so small
14:09:17 If you can make a good optimizing code generator for Forth, then you don't need Forth, because you already have enough knowledge to build a higher-level language. One of the great selling points of Forth is that you can implement the whole system in a week, without real knowledge of compiler writing etc.
14:12:09 true-grue: such as Scheme or Haskell? (C/C++ is not really high level, nor much low level.)
14:12:46 Zarutian, Like Oberon, for example, if we are talking about low-level programming for MCUs etc.
14:13:20 true-grue: yeah, I have heard of Oberon. Never seen it nor used it, though.
14:14:30 Zarutian, I'm talking about an Oberon-like language. But Scheme (without GC) could be a good choice too. Everything is possible if you can build your own compiler :)
14:16:42 what I have found looking at optimizing compilers such as gcc and g++ is that beyond peephole optimizations and dead-code elimination, they are usually looking for repeated patterns that should actually be language constructs (or like Scheme's hygienic macros).
14:16:56 But, in fact, not many people have enough knowledge to build good compilers. And even fewer people are good at language design.
14:18:38 A peephole optimizer by itself can be a powerful thing.
14:18:57 true-grue: yeah. Like the C and C++ compilers produced by Intel. I have heard of companies that tried those out and got rather horrible machine code, using simple algorithms from a Knuth book as an example.
14:19:28 sometimes gcc is better, sometimes icc is
14:19:52 true-grue: isn't there one such in cmForth, targeting the RTX2010 ISA?
14:19:52 sometimes rearranging code makes one produce better code than the other, too
14:20:17 icc seems to inline much more aggressively with default settings
14:20:58 not knowing much about the technologies used in compilers, but knowing that in general ideal optimizing compilation is a hard AI sort of problem, I'd expect compilers to be using heuristics - machine learning and stuff like that.
14:21:13 nha_: sure, but in many cases there were many 'arch checks' that looked for Intel-specific x86 CPUs and used tricks to speed up a lot of the code.
14:21:30 nha_: the problem was that those checks were often revision-specific.
14:21:52 yeah, it's a stupid policy
14:22:51 register allocation seems not so great in compilers
14:23:15 I've had llvm produce some pretty bad spill patterns
14:23:56 how effective is it to just cache the top of the stack in registers for Forth?
14:24:22 seems like it would give a good boost for a really simple optimization
14:24:44 gforth 0.7.9 on my system with a floating-point benchmark runs at 1/3 the speed of C
14:24:45 I thought using Static Single Assignment form of the code and a simple Least Recently Used policy was the norm for register selection and spilling.
14:25:07 but that's when the C compiler doesn't use any -O, I think
14:25:49 nha_: completely depends on the ISA. Are you running your Forth on top of x86, ARM, SPARC or other RISCs?
14:26:08 going up to -O3 changes it to gforth being 1/6 the speed of C
14:26:19 Intel(R) Core(TM) i7-4712HQ CPU @ 2.30GHz
14:27:30 question: I've read a bit from some of Anton Ertl's papers about implementing fast Forth compilers. Anyone know how much of those ideas are currently used in gforth?
14:28:00 as far as I can tell, gforth is still using a direct-threaded interpreter, right?
14:28:02 nha_: usually TOS and NOS are kept in registers, but x86 is notoriously starved of programmer-visible registers.
14:28:51 x86_64 isn't so bad
14:29:05 but most of the commercial Forths seem to all be 32-bit
14:30:22 nha_: many commercial Forths are used for stuff like running CNC lathes and such, where the raw OS is often just a variant of DOS. (DR-DOS and PDOS seem to be popular)
14:33:33 other places where I have seen Forths are as the debug monitors for medium-sized MCUs (think PIC32 and such) in various machines.
14:53:13 mecrisp-stellaris-ra is an optimizing Forth https://www.youtube.com/watch?v=a8Dg08DkILo
14:54:55 mecrisp-stellaris will run on Linux as well, not just an ARM microcontroller.
15:13:28 true-grue: that's exactly the point of using libraries, isn't it? ;)
15:14:08 true-grue: as for the peephole optimiser...
15:20:46 I think that constant propagation, dead-code elimination and common subexpressions are a bit more important.
15:21:17 ASau, In fact, it's possible to do these things with just a PO.
15:21:21 why is dead-code elimination important?
15:21:48 true-grue: what is "PO"?
15:22:09 or is dead code something different from unreachable code?
15:22:17 ASau, peephole optimiser.
15:22:43 Highly unlikely, then.
15:24:01 A good PO does things which are closer to abstract interpretation.
15:24:06 nha_: because you don't want to emit unconditional jumps around code that is never executed; such jumps are useless, and the code is unnecessary waste that affects performance.
15:24:56 nha_: they are the same, iirc
15:26:39 is it really such a high-priority optimization, though?
15:26:42 true-grue: not sure what you call a "good PO" exactly, can you reference some paper?
15:26:44 I guess jumps stall the pipeline, right
15:27:36 It is pretty cheap and very useful.
15:27:43 Zarutian, Yes, cmFORTH does some optimization. It's not a systematic PO approach, but something closely tied to a few particular instructions and situations.
15:28:17 true-grue: such as combining EXITs and arithmetic instructions.
15:29:58 ASau, I suggest looking at the works of N. Ramsey. He is a well-known person in the functional world, by the way.
15:30:07 nha_: besides, it is very useful on the programmer's side, since a dead-code warning usually helps a lot with coding.
15:31:29 I understand the abstract interpretation part, but I don't see how good a peephole optimiser must be in order to be close enough to it.
15:32:01 Well, even GCC uses a very sophisticated PO.
15:32:02 Normally, it doesn't have an unlimited buffer.
15:32:52 Or do you mean BURG-style code generation?
15:33:39 No. I'm talking about terms like RTL, combiner, extender etc.
15:33:53 expander
15:34:49 Tree rewriting is a simpler and clearer thing.
15:35:09 Especially its top-down version.
15:36:05 Alright, probably I do need to check those works.
15:36:14 Or I need to get some sleep.
15:41:36 nha_: consider the following example.
15:41:53 You write code that you want to use on 32-bit and 64-bit systems.
15:42:23 Probably some small and very frequently called subroutine.
15:42:48 If you don't do dead-code elimination, you have to resort to a textual preprocessor.
15:43:44 If you do eliminate dead code, you can just write the code naturally and avoid creating a mess. The optimiser will handle it.
15:44:02 is DCE done in the back end?
15:44:55 because it seems like the language front end should know if a branch isn't taken when the condition is constant
15:45:03 and just avoid generating code on the spot
15:45:36 It depends on how you split backend and frontend.
15:46:17 Doing it in the backend has its benefits.
15:50:17 BTW, this example also demonstrates why a good compiler and a good language - that is, _not_ Forth - are important.
15:51:34 ASau: also add C and C++ to the not-good-language list
15:52:07 Yes, C and C++ are also not good enough.
15:52:17 Though still much better than Forth.
15:52:39 not even marginally, I say
15:54:45 Part of the problem with C is that its constants are not "constant enough."
15:55:57 ASau: did you not notice that you were kicked out of this channel recently?
16:01:48 when people get kicked out of a channel like that, there's usually a reason for it. Coming into a channel primarily about Forth and repeatedly making anti-Forth comments (such as those above) isn't the smartest move. Neither is an ad hominem attack against another regular, such as the one you levied at me. That's actually what got me to take action, and a small but important part of what eventually led to you being kicked.
16:03:59 What makes a good language, BTW, is very subjective. If you're going to keep making anti-Forth comments, maybe reconsider where you are making them and what channels you participate in on an ongoing basis.
16:05:20 BTW, remind me, have you ever proven your point in any other way than gagging your opponent?
16:05:44 What makes a good language is not "very subjective"; there exist objective metrics.
16:05:53 ASau: the point you have proven is that there is a consensus that you should be banned from here permanently
16:06:01 Otherwise you could argue that, say, FORTRAN II is a very good language.
16:06:05 I wouldn't kick him personally
16:06:14 why do you care if he hates Forth or not
16:06:14 haha
16:06:24 nha_: this has gone on for years, like since 2009 or so
16:07:08 nha_: I don't care whether or not he hates Forth. I do care if we can't go a whole day without flagrant anti-Forth slams; honestly it looks bad on the channel
16:07:31 bluekelp said it in channel that there's a consensus that ASau shouldn't be here
16:08:05 * Zarutian doesn't care either way.
16:09:26 regarding those 'objective' metrics: do they measure expressibility, composability and succinctness of programming languages?
16:10:03 They measure more important things.
16:10:47 ASau: those three things are the most important things about programming languages. The others are implementation metrics.
16:11:15 Zarutian: no, they are not important at all.
16:11:39 "best programming language" is a matter of opinion
16:11:45 yeah, but that's the excuse people make for Python all the time
16:11:45 All those are attempts to explain why some languages are better than others.
16:11:51 oh, CPython isn't Python, it's just an implementation
16:12:03 meanwhile all the other ones, like PyPy, are still slower than Lisp
16:12:07 If you think C or C++ are better, maybe it's time to /part this channel and /join ##c and ##c++
16:12:15 and CPython is like 20-50x slower than C, or whatever it is
16:12:16 haha
16:12:49 certain language features just kill performance
16:13:45 nha_: if you really want speed, there is always assembly language. In fact, some words in some Forth programs are rewritten in assembler for speed
16:13:56 That's only one part of performance.
16:14:44 nha_: if you really want speed, use an FPGA with ALUs and other macro devices, such as memory blocks, scattered about in its fabric.
16:14:50 nha_: here you can easily see how some people are stuck in the 90s. :)
16:14:52 the only problem with assembly language is you have to rewrite it every time you want to run on a new architecture
16:16:09 Approximately in the nineties, compilers broke the barrier where programming in assembler was the best approach to gaining run-time performance.
16:16:23 ASau: and also easily see how some people do not see that the current contemporary archs are at eventual design dead ends.
16:16:49 They may be dead ends, but you need to hit the limit first.
16:17:07 everyone has a machine running on those archs, though, so we're stuck with it
16:17:25 nha_: we're not stuck.
16:17:27 and as far as language features killing performance: https://ghc.haskell.org/trac/ghc/wiki/Commentary/Rts/HaskellExecution/FunctionCalls
16:17:28 carefully written assembler will equal or beat whatever a compiler can do
16:17:31 We haven't hit the limit.
16:17:51 ASau: they are already hitting the limits regarding CPU clock speeds, sizes of L1 caches, instruction-issue latencies and others
16:17:54 DocPlatypus: that's a fallacy that shows you're stuck in the 90s.
16:17:56 though these days the speed gain is smaller than it used to be on standard desktop PCs and anything fancier
16:17:58 currying makes function calls much more of a hassle for code generators
16:18:05 ASau: I am not stuck in the nineties
16:18:12 especially in dynamic languages, where the arity isn't always known
16:18:15 You are.
16:18:41 a lot of platforms and CPUs from prior to 2000 are still in existence. Example: people are still writing new games for the Atari 2600, not for commercial sale but as a hobby
16:19:02 "Not for commercial sale" says it all.
16:19:49 Some people operate old cars as a hobby.
16:20:00 DocPlatypus: I put a qualifier on "not for commercial sale". I have seen 6502 cores still in use for low-power SoC kinds of stuff.
16:20:26 Zarutian: interesting
16:20:55 I had no idea people were still using the 6502. I still remember my 6502 assembly language, if someone's hiring
16:21:07 DocPlatypus: ASau seems to think that the majority of CPUs are desktop, server and smartphone based.
16:21:34 Zarutian: a lot of them are, but there are CPUs in places where you would not think
16:22:01 Zarutian: you don't need to tell me about the 8051 here.
16:22:25 outside of the simplest analog-only electronic gadgets (my analog 2.1 speaker system's amp, for example), just about everything else has a CPU of some type in it.
16:22:44 It doesn't mean that the 8051 has a nice architecture that helps "expressive", "composable" and "succinct" programming languages.
16:22:54 and not all of those are going to be 64-bit CPUs like you'd find in a PC
16:23:04 Zarutian, It's really hard to write VLIW assembly code. And you may find many VLIW machines in the embedded area.
16:23:45 DocPlatypus: well, that depends. Some audiophiles still use tubes and heterodyne amplification stages.
16:24:47 true-grue: true. VLIW code is usually written in something like Mathematica or other programming languages heavy on mathematical operators
16:25:19 true-grue: I'd say that even RISC code isn't easy to write manually, but they will start shouting the opposite. :)
16:25:52 Zarutian: right. those *certainly* won't have CPUs in them :-)
16:26:42 ASau: let me ask you this. Have you actually gone through the book Structure and Interpretation of Computer Programs? In that book those three concepts are explained in detail.
16:27:14 DocPlatypus: don't be so sure. There might be one in the power supply brick, if it uses switch mode ;-P
16:27:43 Zarutian: no, I took a look at that book and it didn't impress me.
16:30:07 If this book talks about "expressiveness," "composability" and other stuff without explaining why it matters, the book is wrong.
16:30:09 ASau, That's true. RISC is too atomic for comfortable manual coding.
16:30:15 by the way... not only have I programmed 6502 assembly language... at one point I actually wrote a short fragment of machine code by hand into the DATA statements of a BASIC program
16:30:42 this was when I was still an elementary school student
16:30:46 So what? A lot of people did that in the 80s.
16:31:07 A lot of people did that when they were in school.
16:31:11 true-grue: what I haven't seen (as I haven't had much opportunity) are stream processors, which are the next step up from VLIW in design. You can say it is VLIW, but at the systems level instead of the component level.
16:31:12 not sure I'd ever try to do that on an x86 or x86_64 chip
16:31:18 Zarutian, SICP? Good book :)
16:31:51 why aren't dataflow languages more popular?
16:32:26 Zarutian, Stream processors? I think they are predecessors of the modern GPU architecture.
16:32:52 nha_: well, there are some FlowBasedProgramming ones.
16:33:31 true-grue: some ideas used in GPU archs are from stream processors; some have not survived.
16:35:27 true-grue: one idea that hasn't survived much is MultipleInstructionsMultipleData. There are exactly two stages of such in most GPUs today: the vertex shader and the texel shader.
16:36:29 Zarutian, In GPUs they have some MIMD processors. And each of these processors has SIMT PEs.
16:39:15 ASau: it explains why they matter.
16:52:01 00:29 < DocPlatypus> by the way... not only have I programmed 6502 assembly language... at one point I actually wrote a short fragment of machine code by hand into the DATA statements of a BASIC program
16:52:16 DocPlatypus: back in the 1980s, this was a pretty standard way of writing machine code programs
16:52:48 on some home computers there weren't DATA statements
16:53:11 DocPlatypus: so, on machines that didn't have DATA and READ, we typed in a short program to let us type in hex from a printout
16:53:33 DocPlatypus: that then POKEd the values into a big long REM statement at the start of the program memory
16:57:49 gordonjcp: tell me, were phototransistors and such components expensive in those days? I never understood why there wasn't something like DIY papertape-esque bar or scan codes used. Say five bits wide, where one was 'clock'. (Each bit 2.5mm wide)
16:57:57 gordonjcp: right, typing in the numbers is one thing. figuring out what those numbers should be, on the fly, is another
16:58:33 DocPlatypus: trivial
16:58:52 Zarutian: you'd probably have to make a whole bunch of different interfaces for every home computer on the market
16:58:58 Zarutian: and there were a *lot* of them
16:59:25 that's not to say that people didn't do it; there were magazines that published stuff as barcodes that you scanned in
16:59:40 gordonjcp: just DIY diagrams and instructions on how to make such a thing. It would have acted like an ASCII stream.
17:01:01 Zarutian: probably something that plugged into a joystick port would work
17:01:20 Zarutian: ultimately it became cheap enough to just glue a cassette to the cover
17:04:18 gordonjcp: no possibility of hand-'written' programs on graphing paper (and later photocopied)? Cassettes often weren't as cheap as the paper.
17:04:33 No idea what "figuring" was like on the 6502; on the K580 and K1801 it was easy enough.
17:06:23 gordonjcp: I was also thinking of a 'kinkos' type of self-publishing for some home computer gazettes.
17:07:47 ASau: the 6502 had a wonderfully simple instruction set with very orthogonal opcodes
17:09:07 The K1801 had almost orthogonal opcodes.
17:09:14 I don't know the 6502.
17:09:37 ASau: very common in the UK; they were used in the ubiquitous BBC Micro
17:09:43 every school had a bunch of them
17:09:55 in the US they cropped up in the Apple II, and a modified form in the Commodore 64
17:09:56 In any case, we agree that in the 80s it was pretty common to write in machine code directly.
17:10:04 Among teenagers too.
17:10:15 ASau: the K1801, that's the Russian PDP-11 clone?
17:10:29 Probably.
17:10:46 I don't know what it corresponded to, or if it corresponded to anything at all.
17:10:48 well, s/Russian/Soviet Union/
17:10:57 and s/PDP11/J11-type CPU/
17:11:05 and s/clone/equivalent/
17:11:06 :-D
17:11:35 That doesn't mean that it is a good idea to continue this practice today.
17:11:56 yeah
17:12:01 we have macro assemblers now
17:12:07 huge amounts of disk space and RAM
17:12:19 hundreds or even thousands of kilobytes of it
17:12:43 We have enough tools now not to do all those things that we had to do back in the past.
17:12:48 gordonjcp: some programmers think that disk space and RAM are free.
17:13:02 Zarutian: it is!
17:13:11 It is.
17:13:19 Zarutian: I'm currently (need to get to bed really, it's 1am)
17:13:28 You need to hit the limit, and it is way above average problems.
17:13:48 gordonjcp: no it isn't. That thinking makes it used/occupied
17:13:52 Zarutian: currently writing stuff in Forth on a system with 400kB of disk storage, easily swappable, and 144kB of RAM
17:14:03 144 fucking kilobytes
17:14:13 it would take me weeks to type enough to fill that
17:14:29 or one bad webcam image
17:16:28 (I'd ask why you want to keep all the memory free, but this is nearly useless.)
17:16:48 ASau: to run other programs.
17:17:05 Doesn't that make the memory occupied???
17:18:07 yes, but there's a difference between running X programs and 2X programs because they are well optimized
17:18:30 ASau: well, I need quite a lot of the memory free to fill it with wavetables
17:19:44 128kB of the RAM is partitioned into four 32kB pages, paged into the bottom half of the CPU's address space (6809)
17:20:03 the remaining 16kB is the OS RAM, where the OS is loaded by the boot ROM
17:22:17 If you're using bank switching already, adding more memory may easily be a lot more beneficial than optimizing your program.
17:23:38 not a lot of point
17:23:42 the sound chip can't talk to it
17:32:40 * Zarutian is off to bed
17:34:37 * gordonjcp likewise
17:34:42 half 1 in the morning
17:34:43 ffs
17:55:53 sometimes "adding more memory" just isn't an option
17:56:04 especially on embedded systems
23:32:55 stop forthing so much, plz, you make me want to implement monitoring scripts in it :-D and the other guys at my company will YUUUUGELY hate me for that :-D
23:47:16 pointfree, not an http server, but an http client
23:47:29 pointfree, sent you a PM
23:59:59 --- log: ended forth/16.09.18