00:00:00 --- log: started forth/08.07.15 00:41:38 --- join: aum (n=aum@60-234-243-247.bitstream.orcon.net.nz) joined #forth 01:16:06 --- join: nighty__ (n=nighty@210.188.173.246) joined #forth 01:39:37 --- join: qFox (i=C00K13S@234pc222.sshunet.nl) joined #forth 02:12:01 --- quit: ASau` (Read error: 110 (Connection timed out)) 02:40:35 --- quit: nighty__ (Client Quit) 03:52:15 --- quit: qFox ("Time for cookies!") 04:46:22 --- quit: aum ("Leaving") 05:46:11 --- join: forther (n=forther@c-24-5-187-203.hsd1.ca.comcast.net) joined #forth 07:22:58 --- quit: forther (Read error: 110 (Connection timed out)) 07:24:35 --- join: JasonWoof (n=jason@c-65-96-161-30.hsd1.ma.comcast.net) joined #forth 07:24:35 --- mode: ChanServ set +o JasonWoof 07:46:13 --- part: craigoz left #forth 08:18:56 --- quit: ramkrsna ("Leaving") 08:46:38 --- quit: ecraven ("bbl") 09:03:27 --- join: pozic (n=Pozic@unaffiliated/pozic) joined #forth 09:04:02 Are "screens" obsolete already? 09:05:05 --- join: kar8nga (n=kar8nga@AMarseille-151-1-51-224.w82-122.abo.wanadoo.fr) joined #forth 09:06:20 largely 09:06:37 I also can make pforth segfault. Is that normal/does anyone care about pforth? 09:06:51 (Using only two words) 09:07:24 it's not a particularly robust implementation. But most Forths are crashable, by design. 09:08:24 gforth does handle the case correctly. 09:09:47 what case? 09:10:20 @ VIEW (bogus code) 09:10:44 is there even a 'view' in gforth? 09:11:07 I don't know, but it doesn't give a segfault. 09:13:08 segfaults are easy in forth 09:13:17 0 @ 09:13:19 -1 @ 09:13:45 some implementations catch segfaults and stay running (like gforth), many do not 09:14:53 the general idea seems to be that if your code segfaults, you should fix it. ie it's not the interpreter's problem 09:16:48 Is there some big pile of open-souce libraries somewhere? 09:17:49 I could use a bignum package for instance.
09:18:19 source* not souce :) 09:32:44 --- join: tathi (n=josh@pdpc/supporter/bronze/tathi) joined #forth 09:32:44 --- mode: ChanServ set +o tathi 09:33:33 This license is fairly hilarious: http://www.jwdt.com/~paysan/bigforth-commercial.html 09:34:00 "Per-Developer license fees also expire 70 years after the dead of the developer, because the hypotetical copyright ownership of that developer expired." 09:34:00 you could look at http://home.iae.nl/users/mhx/bignum.frt 09:34:25 --- join: forther (n=forther@207.47.34.100.static.nextweb.net) joined #forth 09:35:23 I will have to write my own interpreter of Forth code anyway. Is it smart to write one in Forth? 09:35:30 why? 09:35:47 That doesn't really matter. 09:36:02 It does to me. :) 09:36:12 What are you trying to do that you can't do with an existing Forth? 09:37:25 I might say that when I am done. 09:38:26 huh? 09:38:31 pozic: your project is a secret? 09:38:37 JasonWoof: for now, yes. 09:38:49 It's most likely going to fail anyway. 09:38:54 I don't understand why people come in here to discuss their project, then refuse to tell us about it 09:39:07 we can't give you good advice on how to do stuff if you don't tell us what you're trying to do 09:39:42 Just to allure you with the mystery of it... 09:40:14 Ok, is there a Forth interpreter that can run an unbounded number of threads pre-emptively according to my specific wishes? 09:40:34 Huh? 09:40:44 Take unbounded to be some value in the millions on modern hardware. 09:41:04 millions of threads? 09:41:11 Yep 09:41:12 ... 09:41:17 you got a TB of ram? 09:41:29 I don't see why I would need a TB of ram. 09:41:33 Not really RAM overhead that's the issue. 09:41:37 But, the operating system. 09:41:46 They don't need to be OS threads. 09:41:55 pozic: for prempt they do 09:42:07 I think 09:42:12 I want the Forth interpreter to be the one doing the pre-emptive selection. 
09:42:30 you need memory for all the stacks 09:42:37 each thread needs its own stacks 09:42:51 unless you want the stacks to be really small, you need loads of memory 09:42:56 Yeah... 09:43:07 You can do a lot with reasonably small stacks 09:43:11 I usually do 4KiB/stack 09:43:17 pozic, Maybe Erlang would better suit your needs? 09:43:29 DerDracle: I am sure Erlang doesn't. 09:43:40 Sure? 09:43:56 Forth or Lisp would work fine, but Lisp is fairly complicated. 09:44:05 I heard erlang is a rockstar when it comes to ridiculous quantities of threads 09:44:20 Yeah... It's quite amazing. 09:44:21 That is "to describe". (Lots of rules) 09:44:30 in my experience most forth systems don't support preemptive anything 09:44:44 Right, so I need my own interpreter. 09:44:55 Writing a forth interpreter is extremely straightforward fortunately. 09:45:03 ok, go nuts 09:45:11 Which is why I asked whether writing one in a meta-circular way would be a stupid thing. 09:45:24 it would probably be slow 09:45:31 How slow are we talking about? 09:45:37 pozic, The issue here is writing a creative, and intelligent, thread scheduling system. 09:45:41 depends on which forth you use 09:46:00 The fastest free forth there is at first. 09:46:05 And, the very imperative nature of forth leaves you unable to make certain assumptions about code you are interacting with and shared resources. 09:46:16 Which is why I suggested Erlang for the order of threads you're thinking about. 09:46:22 pozic: I think you should write it in C 09:47:35 How is multi-processing implemented on chips btw? Did they just extend x86 with a few instructions? 09:47:50 on what chips? 09:48:10 Processors like the one you can buy for consumer PCs. 09:48:41 At some point something needs to say "execute the following code on processor 4". 09:49:11 pozic, Er, it's not really chip level. 09:49:14 The OS does the scheduling, but then it probably calls some assembly code. 09:49:18 It 'may' be chip level.
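On the remark above that a Forth interpreter is straightforward to write: the core of one is just the inner interpreter. A sketch in C of a token-threaded inner loop, with made-up names (`Vm`, `w_dup`, `run`); a real system would add a dictionary, an outer (text) interpreter, and a return stack:

```c
#include <stdint.h>

#define STACK 64

/* The data stack: Forth words communicate only through it. */
typedef struct { intptr_t s[STACK]; int sp; } Vm;

typedef void (*Word)(Vm *);

static void push(Vm *vm, intptr_t x) { vm->s[vm->sp++] = x; }
static intptr_t pop(Vm *vm)          { return vm->s[--vm->sp]; }

/* A few primitive words, each an ordinary C function. */
static void w_dup(Vm *vm) { intptr_t a = pop(vm); push(vm, a); push(vm, a); }
static void w_add(Vm *vm) { intptr_t b = pop(vm), a = pop(vm); push(vm, a + b); }
static void w_mul(Vm *vm) { intptr_t b = pop(vm), a = pop(vm); push(vm, a * b); }

/* The inner interpreter: a definition is a NULL-terminated array of
   word pointers ("token threading"); walk it, calling each word. */
static void run(Vm *vm, const Word *thread) {
    for (const Word *ip = thread; *ip; ip++)
        (*ip)(vm);
}
```

Under this scheme the colon definition `: SQUARE DUP * ;` compiles to `const Word square[] = { w_dup, w_mul, NULL };`, and `7 SQUARE` is `push(&vm, 7); run(&vm, square);`.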
09:49:36 which talks to some external hardware to transfer the code to a different processor, probably 09:50:07 I don't think there are much of any changes to the instruction set for that sort of thing. 09:50:15 Generally you have a thread handler that is woken up when a process yields (for a real-time kernel) and then a bit of context-switching code that saves all of the registers. 09:50:17 But I don't know a whole lot about it. 09:50:25 lunch, bbl 09:50:28 And then it just.. Jumps to the address. 09:50:28 pozic: unless you're writing an operating system, you don't need to know how the chips handle scheduling/interrupts/etc 09:51:28 JasonWoof: sure, but this was just out of curiosity. 09:51:34 For a non-realtime kernel, you generally have a thread process that runs in kernel-space, and this scheduler preempts processes using whatever signaling mechanism that OS supports. 09:51:49 millions of threads is insane, unless you've got computer hardware the size of a room 09:52:40 Should I mention that I was actually thinking about billions of threads? ^^ 09:53:00 (I know that's not going to work on a current PC.) 09:53:37 pozic: how many processors do you have? 09:54:29 JasonWoof: I currently only have 1, but I am hoping that in a few years time there are processors with hundreds of cores on them. 09:55:31 yeah, right 09:55:50 (which is fairly realistic as >10,000 cores on a small area is being talked about) 09:56:43 I think you should write something you can actually run on your computer 09:57:04 Yes, it would be a prototype first. 09:58:15 pozic, It won't make much of a difference if your scheduler/language is well restricted to prevent using up the majority of your resources simply waiting on shared-memory. 09:59:00 DerDracle: I didn't say anything about sharing data structures. 09:59:08 --- join: ecraven (n=nex@cm207-109.liwest.at) joined #forth 09:59:10 DerDracle: or do you mean that even then there is a problem? 
09:59:11 if you've got 2,000,000 threads, your threads won't get to run more than once in every few minutes 09:59:46 I suspect you need to rethink your design 09:59:55 or perhaps discuss it with people... 09:59:59 pozic, It's a bit hard not to. 10:00:15 pozic, Are these threads going to interact at all with one another? 10:00:23 DerDracle: no, 10:00:38 You don't need threads, then. You need an array-processing language. 10:01:13 Making efficient use of many cores will require NUMA and per-core caches. 10:01:58 By definition I need threads. 10:02:02 ... 10:03:15 I need a number of computational processes that run interleaved. 10:03:27 That's pretty much the definition of a thread, no? 10:03:52 No. 10:04:24 How do you mean "no"? 10:05:48 The typical definition of thread includes an independent path of execution in a given address space. 10:06:08 pozic, The shared address space is part of what makes a thread a thread. 10:06:34 No, that's a more industrial understanding of what a thread is. 10:06:47 pozic, I think you are thinking more of having individual processes work on pieces of a bigger independently computational problem? 10:06:47 The Computer Science interpretation does not state that. 10:06:55 ... 10:07:00 Well, at least "A" 10:07:01 . 10:07:05 Where is the global computer science interpretation you're citing at? 10:07:12 I wasn't aware there was a defining document. 10:07:25 DerDracle: I already retracted "The". 10:07:34 DerDracle: I meant the one that was used in my education. 10:08:10 pozic: You'll have to define "thread" so we can get on the same page. 10:08:10 pozic, Regardless, what you're asking for requires perhaps parallel processes--- I'm not sure why you need these operating within the same program. 10:08:34 pozic, Do you realize that using the forth interpreter to manage threads you will get the same effect as if you wrote your program entirely linearly? 10:09:25 DerDracle: 1) OS threads have too much overhead. 2) No, I don't. Please elaborate. 
10:09:52 pozic, The thread scheduler process is tantamount to a large switch with some selection criteria. 10:10:21 --- quit: tathi ("bleh") 10:10:24 pozic, It will never execute anything in parallel- not even if you have multiple processors. 10:10:52 If you want to utilize multiple processors you need to use operating system processes/threads in some manner. 10:11:08 DerDracle: I will start one OS thread per core. 10:12:11 Then you may as well have a linear process, or switch off your various processing tasks. 10:12:43 DerDracle: no 10:12:54 You can multiplex more than one user-space thread onto an OS (or kernel) thread. 10:13:17 TreyB, He's talking about green threads. 10:13:34 I doubt OS threads give enough control. 10:13:56 They are mostly about high, low, real-time or at most twenty of those levels. 10:14:50 Is the idea to increase performance using green threads? 10:14:54 DerDracle: a linear process that does A and B interleaved is not the same as one that does A first and then B. 10:15:03 True. 10:15:08 But they take exactly the same amount of time. 10:15:09 DerDracle: that's not an idea, that's common practice already. 10:15:20 DerDracle: see Haskell/Erlang. 10:15:28 Only the interleaved one has the added issue with restoring/saving the previous processor state. 10:15:51 DerDracle: why would there be any restoring? 10:15:52 DerDracle: The interleaved case may take more time for context switch overhead. 10:16:06 DerDracle: there is no need to switch contexts. 10:16:07 TreyB, That's what I said above. 10:16:15 DerDracle: there is just one OS thread per core. 10:16:22 ... 10:16:32 I could even rip out the scheduler. 10:16:34 You have to switch context for your green threads, pozic. 10:16:46 The green threads page on Wikipedia covers it pretty well. 10:16:46 Each green thread will have its own stack state. 10:16:54 Yes, but that's a fast operation. 10:17:02 Not really. 10:17:05 Not? 10:17:11 It's an extremely slow operation, that happens very commonly.
10:17:15 I might be naive, but that seems trivial to implement. 10:17:22 It's easy to implement. 10:17:23 But slow. 10:17:27 DerDracle: define slow 10:17:42 DerDracle: it is a constant time operation. 10:17:55 As in several operations copying register states into ram sort of slow? 10:18:23 Depends on the CPU architecture. Some CPUs may require expensive cache flushes as well. 10:18:54 Right, it may cache the previous register state if you don't. 10:19:03 Generally it's not a trivial operation. 10:19:15 I've actually written thread schedulers for real-time operating systems before. 10:19:20 It's a very difficult thing to get right. 10:19:27 Amen, brutha :-) 10:19:58 If it works within 100 CPU instructions, it's fine by me. 10:20:10 In my world that's "fast". 10:20:24 --- join: Maki_ (n=Maki@adsl-224-84.eunet.yu) joined #forth 10:21:05 In #forth I can imagine that people would find that slow. 10:21:12 ... 10:21:26 ...? 10:21:29 It's really because the operation happens very commonly. 10:21:37 It's the integrated time spent context-switching that is the issue. 10:21:45 It may be very fast for an operation that is performed only once. 10:21:59 Yes, I understand. But is there anything to do about that? 10:22:06 Although, in the case you have to deal with cache issues, and also many other synchronization edge cases- it may be very slow even for a standalone operation. 10:22:09 Well, yes. 10:22:18 What? 10:22:29 --- part: kar8nga left #forth 10:22:43 If you have certain restrictions on what processes within your threads can do, you may be able to optimize the context switch. 10:23:02 Restrictions like? 10:23:09 For instance, the linux kernel avoids saving/restoring the state of the FPU- and eliminates floating point from kernel code. 10:23:43 That's quite slick. 10:23:50 You may be able to say that the particular operations your threads are performing do not require a certain set of operations, or only a stack of a very limited size. 10:23:59 Etc etc...
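The context switch being debated (save one green thread's stack and register state, restore another's, all inside a single OS thread) can be sketched with POSIX `ucontext(3)`. Note that POSIX.1-2008 marked these functions obsolescent, though glibc still ships them; `run_round_robin`, `task`, and the trace buffer are illustrative names, not anyone's actual scheduler:

```c
#define _XOPEN_SOURCE 700
#include <ucontext.h>

#define STACK_SZ (64 * 1024)   /* the 4KiB/stack mentioned above would also do */

static ucontext_t sched_ctx, task_ctx[2];
char trace[16];                /* records which green thread ran when */
int t = 0;

static void task(int id) {
    for (int i = 0; i < 2; i++) {
        trace[t++] = (char)('0' + id);
        swapcontext(&task_ctx[id], &sched_ctx);  /* yield: save our registers
                                                    and stack pointer, restore
                                                    the scheduler's */
    }
}

void run_round_robin(void) {
    static char stacks[2][STACK_SZ];             /* one stack per green thread */
    for (int id = 0; id < 2; id++) {
        getcontext(&task_ctx[id]);
        task_ctx[id].uc_stack.ss_sp = stacks[id];
        task_ctx[id].uc_stack.ss_size = STACK_SZ;
        task_ctx[id].uc_link = &sched_ctx;       /* where to go if a task returns */
        makecontext(&task_ctx[id], (void (*)(void))task, 1, id);
    }
    for (int round = 0; round < 2; round++)      /* two threads, interleaved */
        for (int id = 0; id < 2; id++)
            swapcontext(&sched_ctx, &task_ctx[id]);
}
```

`swapcontext` is exactly the "copying register states into ram" step discussed above; it is constant-time, though glibc's version also saves the signal mask with a system call, which is much of its cost. The Linux-kernel FPU trick mentioned above is the kind of restriction that shrinks such a switch.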
10:24:27 Erlang has all of the restrictions of being a functional language, so it can ensure there are no side effects of various operations, and perform various optimizations based on that. 10:24:40 Mostly avoiding synchronization. 10:24:56 And since your threads don't seem to interact at all, you probably don't need any synchronization either. 10:24:58 And by definition their "processes" don't share address space. 10:25:00 If there is no shared memory. 10:25:18 Yes, I am familiar with Erlang semantics. 10:25:26 Right, they're more like, small parallel processes with a fast message passing system. 10:25:42 So they could exist in an entirely different process, or even across a computer network. 10:26:49 --- join: ASau` (n=user@79.111.24.130) joined #forth 10:29:29 The Forth processor seems to be ideal for my purposes. If I implement it in C, will running that code on the Forth processor still be faster than the Forth implementation of the interpreter? 10:30:56 Hm. I doubt the C version would be faster than the Forth version. 10:31:21 I mean, if you're going to be using bison/lex you're probably stuck with C. 10:31:33 I'm not sure if there are any notable parser generators that target forth... 10:31:42 Don't assume I use any library. 10:32:21 You're not telling us what you're using, pozic, so we have to assume. 10:32:27 Using bison/lex is fairly pointless for Forth too. 10:32:47 Hm, I doubt lex is pointless. 10:32:49 There're two or three BNF toolkits. 10:32:54 But, forth is pretty straightforward. 10:33:03 There's the Gray parser generator, too. 10:33:14 gnomon, Does it generate forth? 10:33:36 pozic: there's a classic implementation in lex. 10:33:43 Pratt's c-forth. 10:34:30 DerDracle, yes: http://www.complang.tuwien.ac.at/projects/forth.html 10:34:43 gnomon, Very cool. 10:35:44 --- quit: ecraven (Success) 10:36:58 I don't really understand your responses. I never said anything about using libraries.
I merely asked a question about probable performance from a C implementation on a Forth chip (is there a compiler to Forth from C to begin with on that specific platform (and what about others)). 10:38:21 Which Forth chip is that? 10:38:43 ... 10:39:11 pozic: AFAIR, Ivan Makarychev retargeted lcc to Forth. 10:39:29 I mean the one with 18billion floating point operations per second. 10:39:29 pozic: though I'm not sure, you should try to find it yourself. 10:41:05 pozic: IIRC, he is mak at forth org ru. 10:43:34 ASau`: thanks, it doesn't appear easy to find (but this doesn't matter that much). 10:44:21 pozic: try http://forth.org.ru. 10:45:36 ASau`: my Russian is not so good :) 10:46:08 My Czech either :) 10:46:49 Who is Czech? 10:50:10 Don't mind. 10:52:53 Thanks for your help. Bye. 10:53:08 --- quit: pozic ("leaving") 10:55:36 I was a little late on topic but did pozic mentioned which forth chip he is targeting? Is it ASIC or programmable logic? 10:57:43 I've missed the start of the talk. 11:05:21 I'm not entirely sure if he knows. 11:23:11 --- quit: mathrick (Read error: 104 (Connection reset by peer)) 11:23:29 --- join: mathrick (n=mathrick@users177.kollegienet.dk) joined #forth 11:33:32 --- quit: mathrick (Read error: 104 (Connection reset by peer)) 11:33:56 --- join: mathrick (n=mathrick@users177.kollegienet.dk) joined #forth 11:41:48 --- quit: mathrick (Read error: 104 (Connection reset by peer)) 11:42:13 --- join: mathrick (n=mathrick@users177.kollegienet.dk) joined #forth 11:55:20 --- join: fwiffo (n=user@unaffiliated/fwiffo) joined #forth 11:57:28 he thinks there's a chip named "Forth"?? 11:58:37 man, this is why you can't hire programmers who went to collage but don't have experience 11:58:51 theory alone is worthless 11:59:37 OTOH, those, who attended university, catch up quickly. 13:03:52 bright people catch up quickly. Dull ones don't. Even if they've been to school. 
13:20:46 --- quit: fwiffo (Remote closed the connection) 13:33:16 --- join: qFox (i=C00K13S@234pc222.sshunet.nl) joined #forth 13:40:29 When you send them to college at least they learn how to spell it :) 13:46:58 Actually, they don't. 13:47:10 Some of them do though. 14:02:56 speling is definitely not required for doing well in college 14:03:34 maybe it is if you're an english major :) 14:03:49 maybe, but I suspect you'd still have a computer 14:04:03 goood point 14:04:35 so will fronds do any kind of optimization? :) 14:20:25 --- quit: Maki_ ("Leaving") 14:55:31 --- quit: probonono (Read error: 110 (Connection timed out)) 15:09:46 --- quit: mark4 ("Leaving") 15:38:58 --- join: tathi (n=josh@pdpc/supporter/bronze/tathi) joined #forth 15:38:58 --- mode: ChanServ set +o tathi 15:48:44 --- quit: Quartus__ (Read error: 104 (Connection reset by peer)) 16:04:15 --- quit: qFox ("Time for cookies!") 16:08:45 --- quit: JasonWoof (Read error: 104 (Connection reset by peer)) 16:50:51 --- quit: ASau` (Remote closed the connection) 16:53:13 --- join: ASau` (n=user@79.111.24.130) joined #forth 17:51:01 --- join: mark4 (n=mark4@ip70-190-69-3.ph.ph.cox.net) joined #forth 18:36:34 --- quit: forther (Read error: 110 (Connection timed out)) 18:37:22 --- quit: ASau` (Remote closed the connection) 18:37:37 --- join: fwiffo (i=none@unaffiliated/fwiffo) joined #forth 18:38:37 --- join: ASau` (n=user@79.111.24.130) joined #forth 18:45:16 --- quit: tathi ("leaving") 19:11:54 --- join: forther (n=forther@c-24-5-187-203.hsd1.ca.comcast.net) joined #forth 19:15:33 --- quit: forther (Client Quit) 21:59:42 --- join: X-Scale (i=email@2002:59b4:4f7f:0:0:0:59b4:4f7f) joined #forth 22:01:09 --- join: nighty__ (n=nighty@210.188.173.246) joined #forth 22:03:47 --- quit: X-Scale (Remote closed the connection) 22:05:11 --- join: X-Scale2 (i=email@2002:59b4:4f7f:0:0:0:59b4:4f7f) joined #forth 22:05:46 --- nick: X-Scale2 -> X-Scale 22:19:04 --- join: probonono (n=User@ppp103-111.static.internode.on.net) 
joined #forth 22:25:52 --- quit: probonono (Connection reset by peer) 22:26:12 --- join: probonono (n=User@ppp103-111.static.internode.on.net) joined #forth 22:34:10 --- join: kar8nga (n=kar8nga@AMarseille-151-1-51-224.w82-122.abo.wanadoo.fr) joined #forth 23:25:23 --- join: ecraven (n=nex@140.78.42.115) joined #forth 23:26:41 --- quit: kar8nga ("Leaving.") 23:43:07 --- quit: ASau` (Remote closed the connection) 23:44:35 --- join: ASau` (n=user@79.111.24.130) joined #forth 23:59:59 --- log: ended forth/08.07.15