        The Chess Variant Forum
        https://chessvariantforum.createaforum.com
        ---------------------------------------------------------
        General Discussion
        *****************************************************
       #Post#: 355--------------------------------------------------
       Re: Games against Programs
       By: GothicChessInventor Date: February 2, 2018, 9:35 am
       ---------------------------------------------------------
       [quote author=HGMuller link=topic=56.msg350#msg350
       date=1517560167]
       In Fairy-Max the mate-distance pruning comes as a free side
       effect of the way I determine mate distance. Rather than
       assigning a different score to a checkmate depending on where it
       is in the tree, Fairy-Max always scores them the same. (It is
       actually capture of the King that receives special scoring,
       +INF.) It uses the general mechanism of a 'delayed-loss bonus'
       to make sure it prefers the fastest mate (or the fastest way to
       gain something in general): at the end of the node it adds 1
       eval point if the node's score was below the current static
       evaluation (meaning a forced loss occurs in the future).
       Mated-in-N scores are of course always below static evaluation
       scores, and are thus passed to the parent as 'mate-in-(N+1)'
       scores.
       To make sure that post-incrementing a score doesn't interfere
       with alpha-beta (i.e. cannot move scores that are bounds because
        they were out of the search window into that window, where they
       would be mistaken for exact scores), alpha and/or beta must be
       pre-decremented when they are below the current evaluation. In
       situations with mate scores this leads to decrementing beta, so
       that for the side that is going to be checkmated beta can
       actually be pushed to below -INF, when that side managed to
       delay the mate long enough compared to the branch where it was
       already detected. In that case the initial score of -INF you
       would start with in full-width nodes is already >= beta, and
       gives an immediate beta cutoff similar to the stand-pat cutoff
       in QS, before you have searched any moves!
       [/quote]
       That is a complicated way of doing something that can be
       achieved with a simple "mate - true distance from root" exact
       score.
       Additionally, what happens if you get a fail-high or fail-low
       later in the search at the same depth where you found a mate and
       altered the bounds only by 1 point? When you complete that
       iteration of the search, your "mate in n" score that is no
       longer outside of the shifting bounds would be identified....
       how?
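        [Editorial note: the "mate - true distance from root" scheme
        referred to here is conventionally implemented along these
        lines. This is a sketch with made-up names; MATE and MAX_PLY
        are illustrative constants, not from any particular engine.]

```c
#include <assert.h>

#define MATE    20000   /* illustrative mate base */
#define MAX_PLY   256   /* illustrative bound on search depth */

/* Score for the side to move being checkmated at 'ply' plies from the
 * root: nearer mates get more extreme scores, so alpha-beta
 * automatically prefers the fastest mate for the winning side. */
int mated_score(int ply)
{
    return -(MATE - ply);
}

/* Mate scores are root-relative, so they must be converted to
 * node-relative form before being stored in the transposition table,
 * and converted back when probed -- this is the extra bookkeeping the
 * scheme requires. */
int score_to_tt(int score, int ply)
{
    if (score >  MATE - MAX_PLY) return score + ply;   /* we mate     */
    if (score < -MATE + MAX_PLY) return score - ply;   /* we are mated */
    return score;
}

int score_from_tt(int score, int ply)
{
    if (score >  MATE - MAX_PLY) return score - ply;
    if (score < -MATE + MAX_PLY) return score + ply;
    return score;
}
```

        Ordinary evaluation scores pass through both conversions
        unchanged; only scores inside the mate band are adjusted.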
       #Post#: 358--------------------------------------------------
       Re: Games against Programs
       By: HGMuller Date: February 2, 2018, 10:27 am
       ---------------------------------------------------------
       It isn't really complicated; the only code it requires is three
       simple assignment statements. At the start of the node
       [font=courier]alpha -= (alpha < currentEval);
       beta -= (beta <= currentEval);[/font]
       and at the end
       [font=courier]bestScore += (bestScore < currentEval);[/font]
       You would need a similar or even larger amount of extra code to
       adapt mate scores with the distance to root to make sure they
       show up right in the root, and correct for that while storing
       and probing in the transposition table.
       I am not sure what you mean by 'later in the search'. There is
        no inconsistency: the search window is shifted down when
       propagating towards the leaves; the score is shifted back up
       when propagated back towards the root. It is just like each
       level of the tree has its own, slightly shifted zero point of
       the score scale, so that transferring scores from one tree level
       to the next needs to translate the score to the new scale by
       adding or subtracting 1 quantum.
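        [Editorial sketch: the three statements dropped into a bare
        negamax frame. The Node tree below is a stand-in for real move
        generation and a real Evaluate(); this is not Fairy-Max's
        actual code. The toy tree demonstrates the effect: two ways to
        reach the same +100 position, and the nearer one ends up one
        point higher at the root.]

```c
#include <assert.h>
#include <stddef.h>

#define INF 30000

/* Toy position: 'eval' is the static evaluation from the side to
 * move's point of view; 'children' are the successor positions. */
typedef struct Node {
    int eval;
    int nchildren;
    struct Node *children;
} Node;

int Search(Node *n, int alpha, int beta, int depth)
{
    int currentEval = n->eval;
    if (depth == 0 || n->nchildren == 0) return currentEval;

    /* pre-shift the window so the post-increment below can never move
     * an out-of-window bound into the window */
    alpha -= (alpha <  currentEval);
    beta  -= (beta  <= currentEval);

    int bestScore = -INF;
    for (int i = 0; i < n->nchildren; i++) {
        int score = -Search(&n->children[i], -beta, -alpha, depth - 1);
        if (score > bestScore) {
            bestScore = score;
            if (score > alpha) alpha = score;
        }
        if (bestScore >= beta) break;      /* beta cutoff */
    }

    /* delayed-loss bonus: a score below the static eval means a loss
     * is still to come; delaying it one ply earns one point, so the
     * same gain reached sooner scores one point higher */
    bestScore += (bestScore < currentEval);
    return bestScore;
}

/* Move A wins immediately; move B wastes a move and wins two plies
 * later. All evals are from the side to move's point of view. */
static Node deepWin  = { -100, 0, NULL };       /* opponent lost      */
static Node ourReply = {    0, 1, &deepWin };
static Node kids[2]  = { { -100, 0, NULL },     /* A: lost right now  */
                         {    0, 1, &ourReply } /* B: lost two plies on */ };
static Node root     = { 0, 2, kids };
```

        Plain negamax would score both root moves +100; with the bonus,
        the delayed win comes back as +99, so the engine takes the
        shortest path to the gain.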
       And the delayed-loss bonus does more than just accounting for
       DTM. For one, it gives you automatic mate-distance pruning, as
       explained earlier; with the root-distance method you would have
       to program that separately. In addition, it also works for other
       gains than checkmates. The engine will always take the shortest
       path to the best forcible position within its horizon. Plain
       minimax / alpha-beta doesn't do that; it only cares whether you
       reach it in the leaves, but not where along the branch towards
       the leaf. This can be a problem with 'bumpy evaluations', like
       you often encounter in late end-games like KBNK, where you
       temporarily have to allow the bare King to flee away from the
       edge, to drive it back later and closer to the fatal corner. The
       evaluation doesn't like that. So if you look along the line of
       optimal play the static evaluation goes up and down, and then up
        to an even higher local maximum, etc. If the search has the
       nearest local maximum well within its horizon, but not the
       higher maximum beyond it, it wouldn't see any reason to
       immediately start pushing towards that local maximum, and
        spending the rest of the depth there. First wasting time deep in
       the valley, until just enough moves are left to climb to the
       nearest maximum, scores just as much. And there are always many
       more ways to waste time than to take the best path. So wasting
       time is what it will do, and will continue to do forever (as
       from that valley the next-higher summit will never come into
       view, it will only be able to see that when it has mounted the
       nearest maximum). Even 50-move blackmail would not help to make
        it finally do something, because the nearest maximum does not
       involve a capture or Pawn push, and at some point it will just
        see "hey, that's strange... now it will be a draw in X moves
        whatever I do!" With a delayed-loss bonus a maximum will look
       lower if it lies further away, so the engine will start climbing
       it as fast as it can. And once it reaches the top, it then can
       see the next-higher maximum, and goes as fast as it can there,
       etc. A delayed-loss bonus is really a very good cure for
       indecisive play.
       #Post#: 382--------------------------------------------------
       Re: Games against Programs
       By: Greg Strong Date: February 3, 2018, 8:33 pm
       ---------------------------------------------------------
       [quote author=HGMuller link=topic=56.msg358#msg358
       date=1517588863]
        It isn't really complicated; the only code it requires is three
        simple assignment statements. [...]
       [/quote]
       Ok, it took me a while to get my mind around this.  I think I
       understand why it works.  And it seems to be significantly
       simpler than the alternative.  You don't need specific
       mate-distance-pruning code, and you don't need the code to
       adjust mate scores when reading and writing from the TT.  And it
       has the added advantage of solving the mate-distance-pruning
        issue in more general cases.  A very clever innovation; I wonder
        why other programs aren't doing this, as I'm not aware of any
        others that do.  True, some programs don't do a full eval at every
       node and this approach requires that, but I think most engines
       do that now.  ChessV does full eval at every node, so I think
       I'm going to give this a try.
       #Post#: 383--------------------------------------------------
       Re: Games against Programs
       By: HGMuller Date: February 4, 2018, 4:11 am
       ---------------------------------------------------------
       Even when you don't want to do a full evaluation in every node,
       you could use the same idea with some other score replacing
        currentEval. E.g. if you were to replace it by -INF+MATERANGE, it
        would add the bonus only for delaying a mate (with the
        consequence that the opponent would always go for the fastest
        path to mate). It would then have no effect on other gains. You could
        also set it to the incrementally calculated material eval
       (usually also including PST). In that case it would cause the
       engine to take the fastest path to material gains, even if no
       follow-up gain would be visible between that gain and the
       horizon. Usually there is a backlash after a gain, because you
        have been busy securing the gain while the opponent, having
        already written it off as a loss, could prepare some
       form of retaliation. This would encourage the engine to postpone
       the gain to just at the horizon, pushing the retaliation over
       the horizon. The next move the horizon would have advanced, so
       it would plan to postpone it even more, if it can, preferably
       never cashing it at all. The delayed-loss bonus counteracts
       this, but if the backlash is too large, one score quantum would
       probably not be enough to entirely solve it. Having a larger
       bonus would require more complex code, however, as you want the
        score after the bonus to be a monotonic function of the score
        before the bonus. And you would have to think very hard about how
        to adapt alpha and beta to prevent out-of-window scores (i.e.
        bounds) from ever being shifted into the window and mistaken for
        exact. (This would lead to arbitrarily large
       blunders.)
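        [Editorial sketch of the substitution just described: INF and
        MATERANGE are illustrative constants, not from any particular
        engine. With the threshold pinned just above the mate-score
        band, only mate scores receive the bonus.]

```c
#include <assert.h>

#define INF       30000
#define MATERANGE  1000   /* illustrative width of the mate-score band */

/* The same one-quantum bonus as before, but measured against a fixed
 * threshold instead of the node's static evaluation. With
 * threshold = -INF + MATERANGE, only mated-in-N scores (which all lie
 * within MATERANGE of -INF) get incremented; ordinary evaluation
 * scores pass through untouched, so the bonus then only steers mates
 * toward the fastest path and has no effect on other gains. */
int delayed_loss_bonus(int bestScore, int threshold)
{
    return bestScore + (bestScore < threshold);
}
```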
       #Post#: 384--------------------------------------------------
       Re: Games against Programs
       By: GothicChessInventor Date: February 4, 2018, 7:31 am
       ---------------------------------------------------------
       [quote author=HGMuller link=topic=56.msg383#msg383
       date=1517739117]
       Even when you don't want to do a full evaluation in every node,
       you could use the same idea with some other score replacing
       currentEval. E.g. when you would replace that by -INF+MATERANGE
       it would add the bonus only for delaying a mate (with as
       consequence the opponent would always go for the fastest path to
       mate). It then would have no effect on other gains. You could
       also put it at the incrementally calculated material eval
       (usually also including PST). In that case it would cause the
       engine to take the fastest path to material gains, even if no
       follow-up gain would be visible between that gain and the
       horizon. Usually there is a backlash after a gain, because you
       have been busy securing the gain while in the mean time the
       opponent could already write it off as a loss, and prepare some
       form of retaliation. This would encourage the engine to postpone
       the gain to just at the horizon, pushing the retaliation over
       the horizon. The next move the horizon would have advanced, so
       it would plan to postpone it even more, if it can, preferably
       never cashing it at all. The delayed-loss bonus counteracts
       this, but if the backlash is too large, one score quantum would
       probably not be enough to entirely solve it. Having a larger
       bonus would require more complex code, however, as you want the
       score after bonus to be a monotonous function of the score
       before the bonus. And you would have to think very hard how you
       would have to adapt alpha and beta to prevent that out-of-window
       scores (i.e. bounds) could ever be shifted in window and be
       mistaken for exact. (This would lead to arbitrarily large
       blunders.)
       [/quote]
       Try this in 8x8 chess using your idea:
       1.e4 e5
       2.Nf3 Nc6
       3.Bc4 Nd4
       4.Nxe5 Qg5
       5.Nxf7 Qxg2
       6.Rf1
       Black to move. Does it show mate in 2?
       #Post#: 386--------------------------------------------------
       Re: Games against Programs
       By: HGMuller Date: February 4, 2018, 9:05 am
       ---------------------------------------------------------
       It doesn't, because it isn't. You can interpose the Queen
       instead of the Bishop.
       But this isn't something that needs to be 'tried'. It is how all
       my engines work, since ~1980. It is a 'proven technology', which
        I conceived when I had improved the first Chess engine I ever
        wrote to the point where it could see mate in 2, with the result
        that it couldn't win KQK anymore, because it consistently
       preferred the mate in 2 over mate in 1.
       #Post#: 389--------------------------------------------------
       Re: Games against Programs
       By: GothicChessInventor Date: February 4, 2018, 12:48 pm
       ---------------------------------------------------------
       [quote author=HGMuller link=topic=56.msg386#msg386
       date=1517756703]
       It doesn't, because it isn't. You can interpose the Queen
       instead of the Bishop.
        [...]
       [/quote]
       I know it is not a mate in 2, which is why I asked.
       When I leave in my "mate - distance" code, it plays the queen
       intervention.
       When I implement your suggestion with the alpha-beta
       modification, it thinks there is a mate in 2.
       So I must not be implementing your coding idea properly.
       #Post#: 392--------------------------------------------------
       Re: Games against Programs
       By: HGMuller Date: February 4, 2018, 3:21 pm
       ---------------------------------------------------------
       This is typically what happens if the correction of bestScore
       after the search completed can shift a score that is a bound
       back into the original window. The code I posted should not
       suffer from that, however. (If inserted in the right place.)
       What also can go wrong is that your Search() routine can return
       from multiple places, (for exceptional conditions), where some
       bypass the incrementing of bestScore.
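        [Editorial sketch of a guard against the multiple-return
        pitfall: funnel every exit through the increment. Evaluate and
        InCheckmate are hypothetical stubs; the early exit stands for
        any exceptional return path.]

```c
#include <assert.h>

#define MATE 29000   /* illustrative mate score */

static int Evaluate(void)    { return 0; }  /* stub static evaluation   */
static int InCheckmate(void) { return 1; }  /* stub: side to move mated */

int Search(int alpha, int beta, int depth)
{
    int currentEval = Evaluate();
    int bestScore;

    alpha -= (alpha <  currentEval);
    beta  -= (beta  <= currentEval);

    if (InCheckmate()) { bestScore = -MATE; goto done; }  /* exceptional exit */

    bestScore = currentEval;  /* ...the normal move loop would go here... */

done:
    /* single exit point: even the exceptional return above receives
     * the bonus, so the parent sees -MATE + 1, a mate one ply further
     * away. A bare 'return -MATE' above would bypass this line and
     * break the scheme. */
    bestScore += (bestScore < currentEval);
    return bestScore;
}
```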
       #Post#: 397--------------------------------------------------
       Re: Games against Programs
       By: GothicChessInventor Date: February 4, 2018, 10:09 pm
       ---------------------------------------------------------
       [quote author=HGMuller link=topic=56.msg392#msg392
       date=1517779264]
       This is typically what happens if the correction of bestScore
       after the search completed can shift a score that is a bound
       back into the original window. The code I posted should not
       suffer from that, however. (If inserted in the right place.)
       What also can go wrong is that your Search() routine can return
       from multiple places, (for exceptional conditions), where some
       bypass the incrementing of bestScore.
       [/quote]
       Yes I will have to look at your code more closely. It was hard
       to pay attention while GETTING READY FOR THE EAGLES TO WIN THE
       SUPERBOWL!!!!
       Yessssssssssssssssssssssssssssssss!!!!!!!!!!!!!!!!!
       #Post#: 407--------------------------------------------------
       Re: Games against Programs
       By: GothicChessInventor Date: February 6, 2018, 11:58 am
       ---------------------------------------------------------
       [quote author=HGMuller link=topic=56.msg392#msg392
       date=1517779264]
        This is typically what happens if the correction of bestScore
        after the search completed can shift a score that is a bound
        back into the original window. [...]
       [/quote]
       Now I see why it will not work with my code.
       I probe the TABLEBASES in RAM, so I need to return "mate -
       tablebase distance - depth from root including quiescence - 1"
       whenever I encounter a tablebase win. So if I hit a 5-piece
       tablebase result and it is white to move and win in 535-ply from
       the ending of Queen + Pawn vs. Queen, and it was found at
       nominal depth 11 that hit depth 37 during the search, the
       winning distance overall is 535 + 37 - 1 = 571-ply. So I return
       the score of 20000 - 571 = 19,429 for that position.
       The reason why I subtract 1 is this: Suppose at depth 1 it is
        white to move and the tablebase says it is a win in 535 ply.
       Well depth = 1, so 535 + 1 would not be correct, since that is a
       "loss in 536 ply." So the true depth is always tablebase depth +
       search depth - 1.
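        [Editorial sketch of the arithmetic in this post; tb_win_score
        and MATE_BASE are made-up names, with 20000 as the mate base
        used above.]

```c
#include <assert.h>

#define MATE_BASE 20000   /* mate base used in the post */

/* tb_dtm: winning distance in plies reported by the tablebase;
 * ply: distance of the probed node from the root, including
 * quiescence plies. The -1 corrects for the probed position itself
 * being counted once by each term. */
int tb_win_score(int tb_dtm, int ply)
{
    return MATE_BASE - (tb_dtm + ply - 1);
}
```

        For the example in the post, a 535-ply win probed at ply 37
        gives a total distance of 571 and a score of 19429.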
       *****************************************************