00:00:00 --- log: started forth/15.08.08
00:09:57 +mornin
00:21:32 --- quit: xyh (Remote host closed the connection)
00:31:28 --- quit: fantazo (Ping timeout: 260 seconds)
00:45:13 --- join: true-grue (~grue@95-27-143-90.broadband.corbina.ru) joined #forth
00:57:46 --- quit: ehaliewi` (Ping timeout: 264 seconds)
02:38:31 anyone awake?
02:51:12 --- join: xyh (~xyh@14.150.213.59) joined #forth
03:02:20 --- quit: xyh (Remote host closed the connection)
03:26:45 --- join: fantazo (~fantazo@213.129.230.10) joined #forth
03:38:40 --- join: xyh (~xyh@14.150.213.59) joined #forth
05:10:22 --- quit: beretta (Quit: Leaving)
05:25:41 --- join: mrm (~user@94.41.237.13.dynamic.ufanet.ru) joined #forth
05:25:59 --- quit: mrm (Remote host closed the connection)
06:19:05 hi Quiznos
06:19:26 --- join: xyh- (~xyh@14.150.213.59) joined #forth
06:21:46 --- quit: xyh (Ping timeout: 264 seconds)
06:23:13 --- nick: xyh- -> xyh
06:38:21 --- quit: proteusguy_ (Quit: Leaving)
06:38:41 --- join: proteusguy (~proteusgu@ppp-110-168-229-95.revip5.asianet.co.th) joined #forth
06:38:41 --- mode: ChanServ set +v proteusguy
06:43:50 hi
06:45:32 hiya Quiznos :)
06:45:52 hi
06:59:43 --- join: mrm (~user@94.41.237.13.dynamic.ufanet.ru) joined #forth
07:00:49 --- part: mrm left #forth
07:09:43 --- quit: defanor (Ping timeout: 245 seconds)
07:18:31 --- join: defanor (~defanor@2a02:7aa0:1619::ca46:4831) joined #forth
07:25:10 --- join: Zarutian (~Adium@46.22.110.168) joined #forth
07:57:11 --- quit: fantazo (Quit: Verlassend)
08:08:32 --- join: gnooth (~gnooth@2602:306:cf96:8b60:53c:80e6:6bd8:f902) joined #forth
08:15:14 --- nick: xyh -> littlt-big-littl
08:15:20 --- nick: littlt-big-littl -> littltbiglittle
08:15:30 --- nick: littltbiglittle -> littlebiglittle
08:47:38 --- nick: littlebiglittle -> xyh
09:30:39 --- quit: Zarutian (Quit: Leaving.)
09:52:51 --- join: fantazo (~fantazo@089144230088.atnat0039.highway.a1.net) joined #forth
10:07:39 --- quit: xyh (Remote host closed the connection)
10:15:22 --- join: Zarutian (~Adium@168-110-22-46.fiber.hringdu.is) joined #forth
10:40:13 --- quit: darkf (Ping timeout: 244 seconds)
11:37:18 --- join: Mat4 (~claude@ip5b40b95e.dynamic.kabel-deutschland.de) joined #forth
11:37:21 hello
11:37:43 h'lo Mat4
11:37:54 hi Zarutian
11:39:31 hi
11:40:42 I will write the dictionary routines for my language implementation today and found it interesting that the call pattern for words is mostly simple. The last defined words are practically the most called ones
11:40:45 hi Quiznos
11:42:05 this led me to think that full hashing can be avoided and that implementing dictionaries as classic binary trees, in addition to a word cache, is sufficient for ensuring dfast dictionary searches
11:42:20 ^fast
11:42:43 you're working on a modern multi-gigahertz machine
11:42:48 quit thinking about speed.
11:43:27 i'm just doing a plain, unoptimised single linked list for my forth
11:43:39 i have billions of cycles to burn
11:45:10 wel, if you have large sources then this little detail will matter
11:46:04 ^well
11:46:41 Pf!
11:46:51 Get those legendary large sources first. :)
11:47:17 ASau: you quit me prematurely yesterday. i wanted to thank you for your criticisms and insight.
11:47:20 * Mat4 found it personally unacceptable to waste resources of any kind
11:47:57 So you prefer to waste your time instead of machine time. :)
11:48:15 Why not learn calculating in memory instead of using a computer?
11:50:09 ASau: dont ignore the compliment!
11:50:44 --- quit: Zarutian (Quit: Leaving.)
11:51:39 ASau: Thanks for your concerns. I can reassure you that 1. I have enough large sources to convert and process, 2. your argument is irrational as ever
11:52:44 ASau: wth is wrong with you that you cant accept a compliment?
11:52:51 damn man.
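[Editor's note] Mat4's idea at 11:42 — a binary-tree dictionary plus a word cache exploiting the fact that recently defined words are the most-called — can be sketched as below. This is a minimal illustration in C, not any participant's actual code: an unbalanced tree ordered by strcmp, with a one-entry most-recently-used cache checked before the tree is walked. All identifiers are invented.

```c
#include <stdlib.h>
#include <string.h>

/* Dictionary as a plain binary search tree keyed on the word's name,
 * plus a one-entry MRU cache consulted before walking the tree. */
typedef struct node {
    const char  *name;          /* word name, NUL-terminated        */
    void        *cfa;           /* code field address (opaque here) */
    struct node *less, *more;   /* children ordered by strcmp()     */
} node;

node *cache;                    /* last word found; hit path is O(1) */

node *dict_insert(node **root, const char *name, void *cfa) {
    while (*root) {
        int c = strcmp(name, (*root)->name);
        if (c == 0) return *root;             /* redefinition: reuse slot */
        root = (c < 0) ? &(*root)->less : &(*root)->more;
    }
    node *n = malloc(sizeof *n);
    n->name = name;
    n->cfa  = cfa;
    n->less = n->more = NULL;
    *root = n;
    return n;
}

node *dict_find(node *root, const char *name) {
    if (cache && strcmp(name, cache->name) == 0)
        return cache;                         /* cache hit, no tree walk */
    while (root) {
        int c = strcmp(name, root->name);
        if (c == 0) { cache = root; return root; }
        root = (c < 0) ? root->less : root->more;
    }
    return NULL;                              /* unknown word */
}
```

As ASau notes later in the log, the hard part is not the lookup but making FORGET and MARKER work once entries are no longer a single time-ordered chain.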
11:53:33 anyhow, I hope someone here has some experience with hashed dictionaries for discussing advantages and disadvantages (besides memory demands)
11:54:19 Mat4: i dont but i've already posted my position. but i'll haggle with you about it
11:54:25 They make implementing redefinitions in the Forth sense tricky.
11:54:42 They make support of markers tricky.
11:55:08 Not such a big deal, though.
11:55:17 Mat4: for me, i think an easy way is to hash forth words to a signed cpu address
11:55:30 Not supporting any of said features makes it feel like a more conventional language.
11:55:48 bc that is the range for all usable memory, 0x08 to 0x0c-1
11:56:00 including kernel space
11:56:39 another, simpler hashing would be to create a new list of CFAs and sort by MRU
11:56:53 like a bubble sort but incrementally.
11:57:11 so then you use an index to pick the CFA in the list[]
11:57:19 more indirection.
11:57:21 If you don't care about markers, you can just lift words as you search through your SLL.
11:57:37 dont lift names; use CFAs
11:57:59 that could be a runtime opt to test.
12:00:37 ASau: That's one reasion why I want to avoid hashing. However adding a field to the entry headers which points to a linked list of redefinitions should be a simple way for support. I think the real problem is markers
12:00:48 ^reason
12:01:45 memory-wise I'm not sure if a binary tree would need less memory than a hashed dictionary
12:01:58 It would.
12:02:04 (Yes, the biggest problem is markers.)
12:02:47 Quiznos: Can you please explain your idea more precisely?
12:07:54 ill try
12:08:31 build a std dict. then during runtime, a word can be called, to build a dynamic list[] of CFAs
12:08:37 --- quit: zhiayang (Quit: snooze.)
12:08:51 i'm thinking it could provide a kind of runtime token-threading.
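[Editor's note] ASau's suggestion at 11:57 — "just lift words as you search through your SLL" — is classic move-to-front: each successful FIND unlinks the entry and relinks it at the head, so frequently used words drift forward. A hedged sketch in C (struct layout and names are invented; as discussed above, this reordering breaks FORGET/MARKER, which rely on definition order):

```c
#include <string.h>

/* Singly linked dictionary entry, roughly an LFA chain. */
typedef struct entry {
    const char   *name;
    void         *cfa;
    struct entry *link;   /* next (older) word */
} entry;

/* Search the list; on a hit, "lift" the entry to the head. */
entry *find_lift(entry **head, const char *name) {
    entry **pp = head;
    for (entry *e = *pp; e; pp = &e->link, e = e->link) {
        if (strcmp(e->name, name) == 0) {
            *pp = e->link;        /* unlink from current position */
            e->link = *head;      /* relink at the head           */
            *head = e;
            return e;
        }
    }
    return NULL;
}
```

After a lookup of a deep entry, the hot path for that word becomes a single comparison at the head.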
12:09:21 so that words could be defined using smaller byte or 16b token CFAs
12:09:48 but in that case, the list would have to be static for stable indices
12:10:34 then a new next() would be needed to do a double load before jmp [ebx]
12:11:32 so, forget about being dynamic in the bubble sorted way
12:12:29 ok. If the entry size is half the word width for example then more than one entry can be compared in one iteration (just my intuitive thought by reading)
12:14:07 maybe but thats not the point. keep it simple
12:16:46 (...And if you build a perfect hash function, you can avoid multiple comparisons altogether.)
12:19:44 i'd skip all that in the name of simplicity.
12:21:28 forth is not supposed to be complicated
12:23:37 If it isn't, why does it require support for "forget" or markers?
12:31:09 bc forget is a natural function to apply to a linked list.
12:31:38 altho and i do agree, it is complicated
12:31:53 and there are other complicated words too that arent factored well
12:31:57 as you well know
12:33:32 Even when you structure your wordlist as a linked list, your dictionary is not a linked list still.
12:33:42 It isn't a DAG even.
12:34:13 why not a list?
12:35:07 variable a : m2 ; ' m2 a !
12:35:19 Now it isn't a DAG even, let alone a list.
12:37:24 : m1 ; variable a : m2 ; ' m2 a ! forget m2 ( this leaks memory usually )
12:45:14 k
12:45:54 but lfa only links names, not cfa's
12:46:48 a cfa is always a member of the Word's struct; i call it the Symbol, as represented by CFA.
12:47:23 --- join: vsg1990 (~vsg1990@cpe-67-241-148-119.buffalo.res.rr.com) joined #forth
12:47:53 so, to your 2nd example, better gc has to be done.
12:48:05 Whatever weasel words you write, either you leave a dangling pointer, or leak memory, or you implement a proper garbage collector.
12:48:13 ya
12:49:07 that's not too difficult
12:49:13 Which part?
12:49:19 gc
12:49:23 Leaking memory isn't difficult, sure.
12:49:34 sure
12:49:43 A garbage collector will require a lot of effort that is normally considered "overcomplicated" in the Forth community.
12:51:07 Note that you cannot implement reference counting and be done.
12:51:28 (Not to mention that reference counting isn't considered garbage collection nowadays.)
12:52:31 but still people love to include reference counting for "performance".
12:57:02 ASau: Garbage collection is avoided mostly because in embedded environments its procedere *can* affect real time responsity
12:57:21 for hosted systems, and depending on the applications in mind, it should be no problem
12:57:43 ^it procedere
12:57:59 or responsivity?
12:58:55 perhaps but i'm not willing just yet to close the book on it.
12:59:06 nor am i ready to fully discuss the subject.
12:59:57 Mat4: "embedded" <> "real-time".
13:00:13 For a start.
13:00:28 Second, all this is bullshit.
13:01:05 It only says that Forth programmers are so knowledgeable that they don't know about RT GC algorithms.
13:03:05 you can bet they know such algorithms quite well. If you study properly, you will notice that such algorithms either can't guarantee real-time safety or are restricted to specific application scenarios
13:04:16 also please note that the term 'real-time' is somewhat generic. You can at least differentiate between soft and so-called hard real-time demands
13:06:38 again an ASau day
13:07:30 if you count in addition requirements for failure security (which I think is a common case) then GC is a no-go in my opinion
13:07:32 Sure, I know that a lot of people use the term "real-time" to denote that you are allowed to miss "some" deadlines and nothing bad will happen.
13:08:04 If you count in requirements for failure security, GC becomes mandatory.
13:08:50 for sure not, there exist better alternatives for such applications
13:09:04 Like what?
13:09:25 Like leaving dangling references?
13:12:11 mainly ensuring by 'speculative' compilation in combination with runtime simulations that memory leaks can't occur
13:13:12 Oh, if you mean certified code, that hardly applies to Forth with its incoherent semantics.
13:14:32 if this Forth is based on a virtual machine this can be done easily
13:14:49 Bullshit.
13:15:13 It doesn't depend on whether it is a virtual machine or a real one.
13:15:20 what a precise argument!
13:16:29 If your machine allows mixing references and integers, you'll have big problems certifying your code irrespective of that.
13:16:47 I don't need to discuss further with you. There are many satellites circulating the earth, for example, which you can take as examples on demand
13:16:56 (This again demonstrates how knowledgeable Forth programmers are.)
13:17:43 Launching a satellite doesn't require bug-free code.
13:17:48 you don't need to certify a Forth program. All that's needed is to monitor memory accesses!
13:18:41 and there are plenty of satellites out there whose whole software environment is based on Forth
13:18:47 How does it prevent writing a random integer instead of a valid address at a valid location??
13:19:20 There're plenty of web sites running PHP. So what?
13:20:18 because memory accesses are registered and type-verified (independent of the VM which executes the Forth program)
13:20:43 (This again demonstrates how knowledgeable ASau is)
13:21:08 How does it prevent writing a valid yet wrong address instead of the correct one?
13:23:27 find it out for yourself. You can find the sources for some of the operating engines on the net (of course not actually used examples). A hint: Forth code is easily parsable. It can be patched - or an exception handler can run a maintenance routine
13:23:36 If you manage to implement your great idea, I suggest that you publish it ASAP.
13:24:17 I think you'll get your Turing Award or Fields Medal.
13:25:02 *lol*
13:25:04 Now you have only one small step to do: implement it.
13:25:27 already done (not by me)
13:25:37 ORLY?
13:25:51 I have a programming project here. I'd better spend some time on it
13:25:52 ciao
13:26:06 --- quit: Mat4 (Quit: Verlassend)
13:26:07 Has he got his Turing Award for breaking one of the most famous problems of the century?
13:26:35 Pf!
13:27:34 --- join: Zarutian (~Adium@168-110-22-46.fiber.hringdu.is) joined #forth
13:39:22 interesting thread
13:47:06 Quiznos: re multi gigahertz machine: what if you intend to specify in BLIF a dual stack machine and it is intended to be run in a Secure Multi Party Computation framework/nodesetup/whathaveyou, and must keep its memory usage and circuit complexity to a minimum yet gain some usable speed?
14:01:25 Zarutian: i cannot answer bc i am not famiar with terms: BLIF, "secure mutiparty computation". splain?
14:01:31 familar
14:12:02 Berkeley Logic Interchange Format
14:13:15 secure multiparty computation is a way for a few people or parties to perform joint computation on sensitive data and get results they can use without revealing aforesaid data to the others in the computation.
14:15:40 BLIF is a way to specify boolean logic circuits btw
14:16:25 ok that has nothing to do with either the design of forth or its implementation.
14:17:19 Quiznos: it only pertains to how crappy the speed and memory size are as a platform for Forth or its implementations.
14:18:32 my comment about modern machines was that the current technology mitigates must of the concern for needing to design fast and optimised implementations from the get-go.
14:18:46 must/much
14:19:36 mitigates in wrong directions.
14:19:50 disagree
14:20:25 I think due to pipeline stalls and cache subsystem issues that Forth code running on an Apple ][e is actually faster than on modern machines
14:20:47 there are no such elements in a R6502
14:20:53 or c02
14:20:55 Do you have code to prove the statement?
14:21:20 mere logic and knowledge from those days is applicable and evident
14:21:39 pipelining and caching is a post-90s thing
14:21:40 ASau: nope but that isn't my point here. My point is that most modern machines are made with the assumption of running mostly straightline machine code.
14:22:05 Sure.
14:22:15 Yet I don't see why that must be bad.
14:22:19 Quiznos: also pipelining costs die space and energy to run.
14:22:34 i know that
14:22:56 but native forths are small and fasat. suitable for fitting within an icache line
14:23:04 which mostly results in shorter battery life and cooked laps
14:23:06 fs
14:23:09 fast
14:23:30 indeed Forths fit into caches easily
14:23:47 there is an awful lot of reasoning to militate against Forthing.
14:24:05 if you are not running anything else on the hardware (and I count out hypervisors too)
14:24:09 complicate hw doest need cocmplicate sw.
14:24:19 plicated*
14:24:40 complicated hardware because old software was slow.
14:24:40 Alright, a Forth implementation fits cache size, indeed.
14:24:49 and besides, forth can translate down to bits if so designed
14:24:53 What about other software?
14:25:12 if they are statically designed, then no, they are stuck
14:25:21 or statically compiled
14:25:32 ie, high-level languages
14:25:43 ie, gcc
14:25:49 What I want you to consider is never saying again that 'you shouldnt have issues with choosing illfitting datastructures or algorithms because todays hardware is so fast anyway' or anything to that effect.
14:26:17 otherwise someone who keeps Knuth in high regard might come after you with an Ackermann function.
14:26:41 on ancient hw with single digit MHz speeds, redesigning Forth engines, and other sw's too, required ingenuity to increase speed.
14:26:49 Ackermann's function is irrelevant.
14:27:07 with gigahertz machines, the need for speed can be set aside
14:27:27 Today you have plenty of data that become a lot easier to process when you organize them in anything more complex than linked lists.
14:27:38 i'm not talking about elegance
14:28:02 List is the egelent ADT
14:28:08 elegant
14:28:16 It is very ineffective at the same time.
14:28:28 so what?
14:28:33 So that!
14:28:38 we've got a billion cycles plus
14:28:39 ASau: indeed. I just dislike the dismissive talk like that afore quoted sentence portraus.
14:28:45 portrays*
14:29:01 it's not dismissive in light of modern tech.
14:29:10 let the tech be complicated, not the sw.
14:29:21 Quiznos: see! exactly this thinking is why software today is bogging down hardware so much.
14:29:25 Quiznos: there're plenty of examples lying on the surface where replacing lists with, say, balanced trees, improves on time and IT IS VISIBLE.
14:29:43 ok...
14:29:52 lemme see if i can rephrase
14:30:02 Do you understand that 30 sec and 1 sec is a noticeable change?
14:30:18 yes
14:30:37 On an Intel QuadCore CPU.
14:30:39 but that would obviously be an IO-bound process
14:31:05 No.
14:31:07 ASau: even 300 ms and 1 ms is a noticeable change in hard real time systems
14:31:34 Zarutian: let's ignore RT systems for now.
14:31:35 * Zarutian wants to note that not all Forths run on Intel's power-hungry processors
14:31:58 Slower processors will make it even more visible.
14:32:08 it's not that i am trying to dismiss ingenuity in sw diesng, but at the GHz speeds we're discussing, it does largely become irrelevant (bell curve-wise)
14:32:26 diesng/design/
14:32:44 Get it from 2 GHz to 700 MHz and it will be 90 sec vs 3 sec.
14:33:04 slowness actually comes predominantly from ring switching, libc calling, task switching, etc
14:33:19 libc/any lib/
14:33:28 You think so.
14:33:31 layers upon layers between the user code and the hw
14:33:40 it's all related
14:33:42 Reality is a lot worse.
14:33:47 no doubt
14:34:05 In reality, most processing time is in applications.
14:34:21 but coding isn't real, it's just manipulation of electrical patterns.
14:34:30 And a lot of applications use ineffective programming techniques.
14:34:39 and the perceived delay of activity reported by sw
14:34:47 E.g. threaded code vs. optimized native code.
14:34:49 agreed
14:35:07 but the space differential is absolutely measurable
14:35:08 And linked lists vs. appropriate data structures.
14:35:48 what better ist adt is there that a forth-dict can use?
14:36:00 or be implemented using?
14:36:19 ist/List
14:36:41 ASau: hmm but with linked lists shadowing comes free no?
14:37:02 Zarutian: the question is whether you really need it.
14:37:29 in five seconds i'll accept your silence as an unanswered acquiescence. ;]
14:37:40 Quiznos: If you stop clinging to some Forth properties, you can use more effective data structures than a linked list.
14:37:58 but you can't tell me what that would be.
14:38:10 i am a continuous student
14:38:17 I can suggest several options.
14:38:22 pls
14:38:34 First, the most obvious is a hash table.
14:38:55 Second, less obvious is some balanced tree.
14:39:05 Third, even less obvious: prefix tree.
14:39:19 Fourth, even less obvious: splay tree.
14:39:51 You can choose between them depending on what you optimize for.
14:39:56 so one or more of those would be only applicable to hashing a symbol's name and its lfa.
14:40:03 can't do that to the Body
14:40:30 If you stop clinging to the rigid representation you have picked up from the FIG model,
14:40:48 you'll understand that you don't actually need to keep everything in a single record.
14:40:55 and i'm not seeing any mention of a smart find that properly short-circuits on name[0] mismatch.
14:41:11 Prefix tree.
14:41:17 k
14:41:17 Balanced tree.
14:41:55 Both options give you a fast way to distinguish symbols by first characters.
14:41:57 ok so, for me, then, the question is how to do that elegantly in asm with macros.
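[Editor's note] The "prefix tree" option ASau names above — the structure that short-circuits on a name[0] mismatch and that Zarutian later describes as "repeated calculated loads" — can be sketched minimally in C. This is an illustration only (a real Forth would pack the nodes far more tightly than one pointer per ASCII character):

```c
#include <stdlib.h>

/* Trie node: one slot per ASCII character; each lookup step is a
 * single calculated load indexed by the next character of the name,
 * so a first-character mismatch fails immediately. */
typedef struct trie {
    struct trie *next[128];
    void        *cfa;         /* non-NULL when a word ends here */
} trie;

trie *trie_node(void) { return calloc(1, sizeof(trie)); }

void trie_insert(trie *t, const char *name, void *cfa) {
    for (; *name; name++) {
        unsigned char c = (unsigned char)*name & 127;
        if (!t->next[c]) t->next[c] = trie_node();
        t = t->next[c];
    }
    t->cfa = cfa;             /* redefinition just overwrites */
}

void *trie_find(trie *t, const char *name) {
    for (; t && *name; name++)
        t = t->next[(unsigned char)*name & 127];
    return t ? t->cfa : NULL;
}
```

Lookup cost is bounded by the name length regardless of dictionary size, which is what makes this attractive for FIND; the price is the memory per node, the trade-off the channel keeps circling.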
14:42:06 * ASau shrugs.
14:42:14 now we're talking about implementation.
14:42:19 don't shrug, it's rude
14:42:23 The whole point of high-level languages is to help writing complex programs.
14:42:34 Quiznos: with prefix tables you can just do repeated calculated loads.
14:42:43 but i want a forth in asm, then in forth.
14:42:49 metacompile forth.
14:42:50 Yes, implementing red-black trees is more complex than implementing single-linked lists.
14:43:00 Quiznos: bootstrapping Forth?
14:43:10 Zarutian: ya
14:43:14 That's the point: you implement a more complex algorithm that makes your program run faster.
14:43:48 ok but not initially; those HL words need to be developed within the forth environ then carried over through metacompile
14:44:02 Quiznos: then tries/Patricia trees/prefix tables are what you might want to look into.
14:44:04 T1 -> T2 -> T3
14:44:17 Zarutian: kl i'm familiar with trie
14:44:28 at least by name to lookup
14:44:54 http://en.wikipedia.org/wiki/Tree -> computing -> all the rest.
14:45:01 ya
14:45:22 i'm familiar with that thread
14:45:28 Or just go to your library and pick up Knuth's book.
14:45:38 right
14:45:54 but this is just for Find.
14:46:03 implementing the simplest ones in macro assembler shouldn't be that hard.
14:46:06 not much usage for this complexity outside of find
14:46:24 If all you do is writing a toy Forth, sure.
14:46:39 no; i want to develop a significant forth
14:46:41 Otherwise, there's a hell of a lot of applications for red-black trees.
14:46:51 k
14:46:54 Quiznos: then you do want to look into tries by using repeated lookups.
14:47:02 k
14:47:13 but not right now
14:47:13 In fact, you can choose any other balanced tree.
14:47:23 AVL, AA, whatever.
14:47:33 i still need to write the cold word :) and warm.
14:47:34 LLRB.
14:47:40 what's that?
14:47:44 ASau: but here is a question: how often is find invoked outside of source parsing and compilation?
14:48:00 llrb?
14:48:23 linkedlist red-black?
14:48:26 Zarutian: the question is whether he wants to use it anywhere outside "find".
14:48:42 "Left leaning red-black tree."
14:48:47 oh
14:49:09 brb
14:49:49 There exist even exotic data structures that still fit the problem.
14:49:54 E.g. skip list.
14:50:07 "More exotic."
14:50:48 Or Huffman trees.
14:51:06 Again, it all depends on what you optimize for.
14:51:14 ASau: oh those are fun! Especially in encoding/decoding!
14:54:25 Well... Huffman tree is a sort of prefix tree...
14:54:27 Whatever.
14:55:28 There's still a damn lot of data structures that implement associative mapping.
15:02:53 ok; so for all this thread, i'd classify the mentioned ADTs as future possibilities.
15:03:23 --- quit: dys (Read error: Connection reset by peer)
15:23:24 --- quit: ASau (Ping timeout: 250 seconds)
15:23:29 --- quit: true-grue (Read error: Connection reset by peer)
16:50:18 --- quit: Zarutian (Quit: Leaving.)
17:14:52 I haven't used symbols in FORTH
17:15:29 when I implement symbols I make it so that I can intern them...once I intern I get some sort of symbol-typed thing
17:15:40 if the symbol already exists, I return the existing one
17:15:50 otherwise I make one, add it to the table, and return the new one
17:16:05 for comparison then of symbols I can just compare the number within them
17:16:14 first symbol might have the number 0 in it
17:16:33 so unless you are talking about the intern part, then no mapping is needed for symbol comparison
17:16:43 just a pointer comparison
17:16:51 symone symtwo = if ...
17:17:45 you can make an immediate form of intern for word definitions
17:18:04 intern goes from string to symbol
17:18:44 if you aren't making new symbols from external sources at runtime, then all the slow part is at compile/read time
17:19:15 and that is if you are naive and use an association list of some sort for linear lookup
17:23:50 if the forth-dict y'all were talking about is the one for finding an execution pointer from a string of a word...then that is compile time so most people don't care to make the lookup sublinear
18:25:07 --- quit: TodPunk (Read error: Connection reset by peer)
18:25:45 --- join: Tod-Autojoined (Tod@50-198-177-186-static.hfc.comcastbusiness.net) joined #forth
18:27:32 protist: i'm not familiar with "intern" splain further?
18:28:07 Quiznos: intern is the function in Lisp languages that takes a string and returns a symbol
18:28:23 Quiznos: if you intern the same string twice, the symbols are considered equal
18:28:29 ok i do know the source of the word
18:28:37 but not its use
18:29:03 Quiznos: (intern "this") -> '|this|
18:29:06 how familiar are you with forth internals?
18:29:14 I wrote a FORTH in x86 asm
18:29:17 k
18:29:29 briefly, in my forth;
18:29:31 what exactly were you trying to do?
18:29:42 ok show me...but I can't promise my FORTH isn't rusty :p
18:29:45 struct Sumbol {char name[]; *nfa, *lfa, *cfa, pfa[] }
18:29:59 nfa is a true pointer now
18:30:07 nfa points to what?
18:30:08 and my forth is ITC
18:30:13 name[]
18:30:26 a counted asciiz string
18:30:33 write full names so I can see what this thing is
18:30:34 :P
18:30:36 --- nick: Tod-Autojoined -> TodPunk
18:30:47 indirect threading
18:30:55 (*fp)() is CFA
18:31:01 *CFA
18:31:03 I would prob just do struct symbol{char *name;int val;};
18:31:23 that's not enuf for a proper word structure with head and body
18:31:42 maybe oh you are defining a word there?
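[Editor's note] The word struct sketched at 18:29 is pseudo-C and would not compile as written. One valid-C rendering of the same fields — a guess at the intended layout, using the classic FIG field names the speakers use (NFA, LFA, CFA, PFA) — might look like this:

```c
#include <stddef.h>

/* ITC word header: the CFA holds a pointer to the routine that
 * interprets this word (machine code for primitives, the colon
 * interpreter for high-level words); the PFA is the open-ended body. */
struct word;
typedef void (*code_t)(struct word *self);

typedef struct word {
    const char  *nfa;    /* name field: counted asciiz string        */
    struct word *lfa;    /* link field: previously defined word      */
    code_t       cfa;    /* code field: (*fp)(), as said above       */
    ptrdiff_t    pfa[];  /* parameter field: body (flexible array)   */
} word;
```

Because NFA is "a true pointer" here (as stated at 18:29:59), the name can live anywhere, which is exactly what lets the name lookup structure be separated from the body, as ASau argued earlier.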
18:32:18 the struct is the whole symbol with separate code and data sections on hostlinux 32 bit
18:34:08 so tell me more about intern?
18:34:48 it is the storing of a name in a searchable way?
18:38:57 : intern dup symboltable lookup dup if swap drop else here swap , symbolcount @ , 1 symbolcount +! dup symboltableinsert then ;
18:39:45 I was assuming the string was represented as one pointer Xp
18:39:51 but that is the idea
18:40:10 and that lookup was a symboltable lookup that just compares the string
18:40:36 from *CFA, the addr of NFA can be derived and getted
18:40:53 ok so it is the same with different wording.
18:40:59 I still don't know what NFA is
18:41:04 name field addr
18:41:15 whose value is a ptr to the stringz
18:41:24 I'm basically making a symbols table as an array of 2-tuples [(name,id),...]
18:41:25 --- quit: vsg1990 (Read error: Connection reset by peer)
18:41:46 don't need the id actually
18:41:51 can just compare the locations
18:41:55 ya, and in stdforth, id is link field addr, LFA, which points to previous word in dict list.
18:42:03 because a symbol with a given name should only exist in one place
18:42:18 or need only exist in one place*
18:42:34 and, LFA-cellsize -> NFA c@ produces count byte of name
18:43:26 why define symbols in C?
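[Editor's note] Protist's Forth INTERN above (lookup; if found, drop the string; else store the name and a running symbolcount, then insert) can be rendered in C as a sketch — the fixed-size linear table is purely for illustration, matching the later remark that most people don't bother making this lookup sublinear:

```c
#include <stdlib.h>
#include <string.h>

/* Intern: map a string to a unique symbol, so later comparisons are
 * plain pointer equality (the Forth "symone symtwo =" at 17:16:51). */
typedef struct symbol {
    char *name;
    int   id;      /* running symbolcount, as in the Forth version */
} symbol;

symbol sym_table[256];
int    sym_count;

symbol *intern(const char *name) {
    for (int i = 0; i < sym_count; i++)        /* lookup: already interned? */
        if (strcmp(sym_table[i].name, name) == 0)
            return &sym_table[i];              /* same string, same pointer */
    sym_table[sym_count].name = malloc(strlen(name) + 1);
    strcpy(sym_table[sym_count].name, name);   /* new symbol: copy the name */
    sym_table[sym_count].id = sym_count;
    return &sym_table[sym_count++];
}
```

Interning twice yields the same pointer, so symbol equality costs one comparison, exactly the point made at 17:16:43.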
18:43:39 easier to send to kernel for service
18:43:48 whatever works :p
18:43:50 not C, just a final null
18:43:56 ya
18:44:09 symbols can be just like a string except if they have the same name they have the same location
18:44:14 that is how I like to write them
18:44:26 all print names eventually are sent to kernel for display
18:44:37 k
18:44:43 interning a string just finds the one already existant if it does exist...otherwise returns the old pointer
18:44:56 then for symbol equality just compare pointers
18:45:08 but it's easier in forth to keep all struct fields together
18:45:29 in my interpreters I just have an enum type {symbol,int...} field....symbols just like strings except for type field set to sym
18:45:31 bol
18:45:34 after compiling a colon def, strings aren't compared
18:46:11 yeah intern in immediate mode and shove it into the code however you will
18:46:55 [ s" thissymbol" intern ] literal
18:47:33 do your interns at compile time...if you need a real-time intern...then you may want a smart lookup
18:49:39 --- join: vsg1990 (~vsg1990@cpe-67-241-148-119.buffalo.res.rr.com) joined #forth
18:57:58 what do you want these symbols for?
18:58:23 I guess enums are like the poor man's symbols
18:58:28 the symbol is the structure that makes up each word defined.
18:58:28 symbols are nice :D
18:58:47 then, the symbol, after compile, is represented by its CFA
18:58:58 nfa is the prname
18:59:11 if you use abbreviations I won't understand :P
18:59:19 lfa is link addr to previously defined symbol
18:59:38 so you are just using symbols for word lookup?
18:59:42 you said you wrote a forth of your own. these are std names that i use.
18:59:57 https://github.com/GordianNaught/FORF
19:00:05 heh
19:00:08 clever name
19:00:09 :p
19:00:12 thanks haha
19:00:31 be here in 6hm i have to sleep now.
19:00:35 6h,
19:00:43 I did do something kinda odd with typetags on code....can be handled better
19:00:47 we can argue semantics then
19:00:51 kk cya
19:00:57 ty gn
19:01:00 afk
19:01:04 gn :)
19:04:14 --- join: zhiayang (~zhiayang@bb219-74-68-213.singnet.com.sg) joined #forth
19:06:58 --- quit: zhiayang (Client Quit)
20:05:03 --- join: darkf (~darkf___@unaffiliated/darkf) joined #forth
20:16:58 --- quit: proteusguy (Ping timeout: 264 seconds)
20:20:58 --- quit: vsg1990 (Quit: Leaving)
20:29:06 --- join: proteusguy (~proteusgu@ppp-110-168-229-47.revip5.asianet.co.th) joined #forth
20:29:06 --- mode: ChanServ set +v proteusguy
22:32:34 --- join: xyh (~xyh@14.150.213.59) joined #forth
23:59:59 --- log: ended forth/15.08.08