In the paper “Automated Theorem Proving for General Game Playing” by Stephan Schiffel and Michael Thielscher, the authors apply answer set programming (a declarative formalism similar to Prolog) to improve performance in general game playing. They point out that it is important for the AI to classify what type of game it is playing and to apply techniques appropriate to that class of game. For example, null moves, upper confidence trees, alpha-beta search, Bellman equations (dynamic programming), and temporal difference learning are all appropriate for two-player, zero-sum, turn-taking games. It is also important to recognize concepts “like boards, pieces, and mobility.” Apparently, van der Hoek pioneered the use of automated theorem proving to extract such concepts from the game rules. Schiffel and Thielscher extend the theorem proving to “automatically prove properties that hold across all legal positions.”
One interesting aspect of Dominion is that certain combinations of cards lead to widely varying growth rates: exponential, quadratic, linear, and “super” linear. Gardens-type strategies often lead to quadratic growth in the number of victory points.
If all you could buy were gardens and workshops, what percentage of gardens would give the fastest growth rate?
Answer: Suppose the deck consists entirely of gardens and workshops, the fraction of cards that are gardens is G, and the number of cards is large. The probability that a five-card hand contains no workshop is then G^5, so on each turn a workshop is available to gain one card with probability 1 - G^5, and after t turns the deck holds about t(1 - G^5) cards. The total number of points after t turns is therefore about

pts = (# gardens) * (# points per garden)
    = (G * number of cards) * (number of cards / 10)
    = t^2 G (1 - G^5)^2 / 10.
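To answer the question numerically, we can maximize the factor G(1 - G^5)^2 in that formula over G. The grid search below is my own quick sketch, not from the original analysis. (Setting the derivative to zero gives (1 - G^5)(1 - 11 G^5) = 0, i.e. G^5 = 1/11 and G ≈ 0.62.)

```python
# The growth formula above gives pts ≈ t^2 * G * (1 - G^5)^2 / 10,
# so the optimal garden fraction G maximizes f(G) = G * (1 - G^5)^2.
def f(G):
    return G * (1 - G**5) ** 2

# Simple grid search over G in (0, 1); plenty for a 1-D problem.
best_G = max((i / 10**5 for i in range(1, 10**5)), key=f)
print(round(best_G, 3))  # 0.619: roughly 62% gardens is optimal
```

This agrees with the closed-form answer G = (1/11)^(1/5) ≈ 0.619.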
It is easy to see that if your action phase consists of laboratory followed by library, the laboratory was useless: library draws you up to seven cards in hand regardless, so the laboratory's extra cards are simply absorbed. This kind of reasoning is very easy for a human, but a bit harder for a computer. The computer could apply statistics and some kind of online learning strategy to figure out that the laboratory-library combo is weak, but it would have to run a rather large number of simulations to come to that conclusion, and it still would not understand why the combo is not great. Alternatively, we might be able to create a theorem prover that could prove that a laboratory-library action phase has the exact same effect as a single library. It would be great if we could combine theorem proving, other types of plausible reasoning (Bayesian networks?), and machine learning in an AI.
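The human's counting argument can be captured in a few lines. The sketch below is my own toy model (it assumes the deck is large enough and ignores library's option to set aside action cards as they are drawn); it compares the hand you end with when a laboratory is played before library against a deck with no laboratory at all:

```python
def hand_after_actions(with_lab):
    """Hand size after the action phase, starting from a 5-card hand
    that contains library (and laboratory if with_lab is True).
    Toy model: big deck, no set-asides during library's draw."""
    hand = 5
    if with_lab:
        hand -= 1        # play laboratory (+1 action keeps library playable)
        hand += 2        # laboratory: +2 cards
    hand -= 1            # play library
    hand = max(hand, 7)  # library: draw until 7 cards in hand
    return hand

# Both lines end at 7 usable cards: laboratory contributed nothing.
print(hand_after_actions(True), hand_after_actions(False))  # 7 7
```

A theorem prover could in principle derive this equivalence symbolically from the card definitions, instead of estimating it from thousands of simulated games.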
I am wondering how many of these ideas can be put into an AI.
This PC Dominion game is fun. The AI is not very strong, but it’s great for practice and, most importantly, allows you to replay the last set of kingdom cards.