Figuring out why AIs get flummoxed by some games

In Nim, there are a limited number of optimal moves for any given board configuration. If you don't play one of them, you are essentially ceding control to your opponent, who can win by playing only optimal moves from then on. And again, the optimal moves can be identified by evaluating a mathematical parity function.
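The parity function referred to here is the nim-sum from combinatorial game theory: XOR all the row sizes together, and the player to move is losing exactly when the result is zero. A minimal sketch (the function names, and the assumption that the five-row board uses the standard staircase sizes 1, 3, 5, 7, 9, are mine, not from the paper):

```python
from functools import reduce
from operator import xor

def nim_sum(rows):
    """Bitwise XOR of all row sizes; zero means the player to move is losing."""
    return reduce(xor, rows, 0)

def optimal_moves(rows):
    """Moves (row index, new row size) that hand the opponent a zero
    nim-sum, i.e. a losing position."""
    s = nim_sum(rows)
    # Shrinking row i from r to r ^ s zeroes the nim-sum; it is a legal
    # move only when r ^ s is smaller than r.
    return [(i, r ^ s) for i, r in enumerate(rows) if r ^ s < r]

print(optimal_moves([1, 3, 5, 7, 9]))  # assumed five-row board: [(4, 0)]
```

On this assumed five-row board there is exactly one optimal opening move (empty the last row), which illustrates how unforgiving the game is: every other opening hands control to the opponent.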

So there is reason to think that the training process that worked for chess might not be effective for Nim. The surprise is how serious the situation turned out to be. Zhou and Riis found that, on a five-row Nim board, the AI improved quite quickly and was still improving after 500 training iterations. Adding just one extra row, however, slowed the rate of improvement considerably. And on a seven-row board, the performance gains had essentially stopped by the time the AI had played 500 games.

To better illustrate the problem, the researchers replaced the subsystem that suggested potential moves with one that operated randomly. On a seven-row Nim board, the performance of the trained and randomized versions was indistinguishable over 500 training games. Essentially, once the board became large enough, the system was unable to learn anything by observing game outcomes. The initial state of the seven-row configuration has three potential moves that are all consistent with an eventual victory. Yet when the system's trained move evaluator was asked to rate all potential moves, it scored them as roughly equivalent.
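Taking the standard staircase row sizes 1, 3, 5, 7, 9, 11, 13 as an assumption (the article does not state them), the parity criterion reproduces the "three winning opening moves" figure mentioned above:

```python
from functools import reduce
from operator import xor

rows = [1, 3, 5, 7, 9, 11, 13]  # assumed seven-row layout
s = reduce(xor, rows)           # nim-sum of the whole board

# A move is winning iff it leaves the opponent a zero nim-sum, i.e. it
# shrinks row i from r to r ^ s (only legal when r ^ s < r).
winning = [(i, r ^ s) for i, r in enumerate(rows) if r ^ s < r]
print(winning)  # [(4, 6), (5, 4), (6, 2)] -- exactly three moves
```

All other openings on this board are losing against optimal play, so a move evaluator that rates every opening as roughly equivalent has learned essentially nothing about the parity structure.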

The researchers conclude that Nim requires players to learn the parity function in order to play effectively, and that the training procedure that works so well for chess and Go is incapable of learning it.

Not only Nim

One way to view this conclusion is that Nim (and, by extension, all impartial games) is just weird. But Zhou and Riis also found signs that similar problems can arise in chess-playing AIs trained in this way. They identified several "bad" chess moves – ones that missed a mating attack or led into a losing endgame – that were initially rated highly by the AI's board evaluator. Only because the software searched additional branches several moves into the future was it able to avoid these blunders.
