Friday, December 18, 2009

Game Description: Amazons

Amazons was first introduced to me by Dale Garraway, a visiting professor at Colby while I was an undergrad. It is a nice combinatorial game because it can be played on a common checkerboard using chess pieces (though if you use the queen piece as an amazon, you will need multiple sets) and something to mark destroyed squares. Each turn, a player chooses one of their amazons and moves it just as a queen moves in chess, except that it may not move onto or through a square that is occupied by another amazon or that has been "destroyed". After moving, that amazon shoots an "arrow" at another square a queen's move away from its current location; the arrow's path is blocked the same way, and the square it hits is destroyed for the rest of the game.

The next player then takes their turn, and play continues until the current player has no available move; that player loses.
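To make the move rule concrete, here is a minimal sketch of Amazons move generation. The board representation (sets of coordinates) and the function names are my own, not from any standard implementation; note that the arrow may fly through the square the amazon just vacated.

```python
# All eight queen directions.
DIRS = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]

def queen_targets(start, blocked, size):
    """Squares reachable by a queen move from start, stopping at obstacles."""
    r0, c0 = start
    for dr, dc in DIRS:
        r, c = r0 + dr, c0 + dc
        while 0 <= r < size and 0 <= c < size and (r, c) not in blocked:
            yield (r, c)
            r, c = r + dr, c + dc

def legal_moves(my_amazons, their_amazons, destroyed, size):
    """Yield (from, to, arrow) triples; the arrow is shot after moving."""
    blocked = my_amazons | their_amazons | destroyed
    for start in my_amazons:
        # the moving amazon vacates its own square
        for dest in queen_targets(start, blocked - {start}, size):
            after = (blocked - {start}) | {dest}
            for arrow in queen_targets(dest, after, size):
                yield (start, dest, arrow)
```

On a tiny 3x3 board with one amazon in the corner, this already produces 38 distinct moves, which hints at the game's large branching factor.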

Amazons was invented in 1988 by Walter Zamkauskas. The rules are simple to learn, and the game enjoys computer-based championships between programmed opponents. In 2005, Bob Hearn showed Amazons to be PSPACE-complete, so it is unlikely that an efficient algorithm exists to evaluate Amazons boards of general size.

The play tip that Dale and I learned was that sugar packets do a great job of marking destroyed squares.

Enjoy the winter break! I am planning on returning to a normal posting schedule on Monday, Jan. 11, 2010.

Wednesday, December 16, 2009

Short Disjunctive Game Sums

A few weeks ago, I attempted to define Paul Ottaway's Short Disjunctive sum in terms of game summands instead of the method of play. I am interested in doing this because other game sums are written this way and also because there is a big difference between game definitions and methods used for play.

For example, combinatorial games are defined just in terms of the plays available to each player; Normal and Misere are then two play conventions used in competition. Thus, the basic operations do not sum two misere games together, but rather sum two games, after which a play convention is chosen to compete on that composite state. Winning and losing are not defined by the games themselves, but by the way they're played.

The short disjunctive sum definition seems to refer to a play method, except that it doesn't describe who wins or loses:

The short disjunctive sum of games is played by first choosing a component then making a legal move in that component if such a move exists. The game ends when a player chooses a component in which he has no legal move.

The other week, I tried to express this in set notation the following way (where G1 = {L1 | R1} and G2 = {L2 | R2}):

G1 + G2 = { Left | Right } where
if either L1 = null or L2 = null, Left = { L + G2 : L \in L1} U { G1 + L: L \in L2} U { | 0}
otherwise, Left = { L + G2 : L \in L1} U { G1 + L: L \in L2}
and
if either R1 = null or R2 = null, Right = {R + G2 : R \in R1} U {G1 + R : R \in R2} U {0 | }
otherwise, Right = {R + G2: R \in R1} U {G1 + R: R \in R2}

The idea behind this is that when one of the option sets on the appropriate side is null, there's an extra "bad child" that player can choose to move to (for example, { | 0} for the Left player). From there, the Right player can move to 0 = { | } next turn, and then the game has ended just before the Left player's turn.
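For concreteness, here is that definition transcribed directly into Python. The pair-of-tuples game representation and the names ZERO, BAD_LEFT, BAD_RIGHT are my own:

```python
ZERO = ((), ())            # 0 = { | }
BAD_LEFT = ((), (ZERO,))   # { | 0}: the "bad child" handed to Left
BAD_RIGHT = ((ZERO,), ())  # {0 | }: the "bad child" handed to Right

def short_sum(g1, g2):
    """Short disjunctive sum of g1 = (L1, R1) and g2 = (L2, R2), per the definition above."""
    (L1, R1), (L2, R2) = g1, g2
    left = tuple(short_sum(l, g2) for l in L1) + tuple(short_sum(g1, l) for l in L2)
    right = tuple(short_sum(r, g2) for r in R1) + tuple(short_sum(g1, r) for r in R2)
    if not L1 or not L2:   # Left could choose a component with no Left move
        left += (BAD_LEFT,)
    if not R1 or not R2:   # symmetrically for Right
        right += (BAD_RIGHT,)
    return (left, right)
```

For instance, 0 + 0 under this sum is {{ | 0} | {0 | }}: each player's only "move" is to pick an empty component and lose.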

Why did I use this in my definition? With these sum rules in place, the correct player wins under both Normal and Misere play. In normal play, Left, upon making the bad move, gives Right the last chance to play and win. In misere play, Left would choose that component and, if I understand short disjunctive sums correctly, would be unable to complete a move and would therefore win. In the model above, that choice leaves Right with one final complete move, which loses Right the game.

This was only my basic logic; I don't know how well it holds up under other game operations. It is often the case that preserving the value of strategies is not the same as actual game equivalence! Also, I should apologize for not having studied the notation Paul has actually introduced for these game sums!

Monday, December 14, 2009

This week and next semester

Unfortunately, I am too busy today to put together a proper post. Finals have begun at Wittenberg and I have a stack of exam-writing and grading to get down to. On Wednesday I expect to write a longer post than the past few Wednesdays have allowed. I plan also to post on Friday.

After that, winter break is well underway and things will likely be more scattered until the new year. The spring semester begins here at Wittenberg on January 11, so regular posting will resume quickly. I again have MWF classes so I should be able to keep up the same MWF schedule we've become accustomed to here. I may even get my posts up earlier in the morning.

I'd like to graphically jazz up the blog a bit, but I'm not much of a layout specialist. If you have any ideas or images, I'm all ears! (Or eyes.)

Although the break seems like a great time to post about games, I will be somewhat distracted trying to prepare for the FUN with Algorithms paper deadline on January 15. I can only assume gamesters have a good shot at this conference :)

Friday, December 11, 2009

Game: The Voronoi Game

As someone mentioned on the computational complexity blog yesterday, the Voronoi Game is a game of perfect information without randomness. The two-player version is a good partisan combinatorial game.

The game is based on Voronoi diagrams, which describe which areas of a plane are closest to each point in a collection. Given a set of points, S, in a space, a Voronoi diagram is a partition of that space such that each region contains exactly those points closest to one element of S. Border points (those equally far from two or more elements of S) lie on the boundaries shared between those regions.

In the Voronoi Game, players take turns choosing points on a filled square until each has chosen some fixed number, m. At that point, a Voronoi diagram of the square is created and each player scores the total area of the regions belonging to the points they chose.
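Here is a quick sketch of the scoring step, approximating each player's area by sampling the unit square on a fine grid. The function name and the sampling approach are my own; an exact computation would construct the actual diagram instead.

```python
def voronoi_scores(points_a, points_b, n=200):
    """Approximate each player's claimed area of the unit square on an n-by-n sample grid."""
    score_a = score_b = 0
    for i in range(n):
        for j in range(n):
            # sample at the center of each grid cell
            p = ((i + 0.5) / n, (j + 0.5) / n)
            da = min((p[0] - x) ** 2 + (p[1] - y) ** 2 for x, y in points_a)
            db = min((p[0] - x) ** 2 + (p[1] - y) ** 2 for x, y in points_b)
            if da < db:
                score_a += 1
            elif db < da:
                score_b += 1
            # ties (border points) are credited to neither player
    cell = 1.0 / (n * n)
    return score_a * cell, score_b * cell
```

With one point per player placed symmetrically, each player claims half the square, as expected.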

As far as I can tell, the game was created by Indrit Selimi, who is credited on the original Voronoi Game page. The blog mentions a newer version which can be played with many people.

Is there anything known about strategies for this game?

Thursday, December 10, 2009

Another Game post on Complexity Blog

Just an FYI: Bill Gasarch has posted an interesting discussion about games on the computational complexity blog:

http://blog.computationalcomplexity.org/2009/12/whats-your-game-mr-bond-nim.html

Wednesday, December 9, 2009

CGT Class description

There were some issues and comments from Monday's post that I still have to look into. I will! Fear not!

I wanted to run the latest iteration of a game class description here:

Combinatorial Games are an avenue to model basic decision-making and computation. In this course, we will look at computational and mathematical basics of combinatorial game theory, analyzing such games as Chess, Go, Checkers, Domineering and Amazons as well as many others! Topics will include basic algorithm design, strategy analysis, games as a number system, impartiality, parity and a dip into computational complexity.


I mentioned in a previous post that this class is intended for undergrads and will require only basic programming and discrete math as prerequisites. Is this a good description?

Monday, December 7, 2009

Different Game Sums

Previously there had been some discussion about different methods for summing games. Normally, given two games G1 = {L1 | R1} and G2 = {L2 | R2}, we use the disjunctive sum to describe making one move:

Disjunctive:
G1 + G2 =
{ { L + G2 : L \in L1} U { G1 + L: L \in L2} |
{R + G2: R \in R1} U {G1 + R: R \in R2} }

Informally, a player may choose one of the subgames, and make one of the moves available to them normally on that summand.
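For the programming-inclined, here is how I might transcribe the disjunctive sum, together with a normal-play winner check. The pair-of-tuples game representation and the function names are my own:

```python
ZERO = ((), ())            # 0 = { | }
STAR = ((ZERO,), (ZERO,))  # * = {0 | 0}

def disjunctive_sum(g1, g2):
    """Disjunctive sum: each move plays in exactly one summand."""
    (L1, R1), (L2, R2) = g1, g2
    left = tuple(disjunctive_sum(l, g2) for l in L1) + tuple(disjunctive_sum(g1, l) for l in L2)
    right = tuple(disjunctive_sum(r, g2) for r in R1) + tuple(disjunctive_sum(g1, r) for r in R2)
    return (left, right)

def to_move_wins(g, left_to_move):
    """Normal play: the player to move wins iff some option leaves the opponent losing."""
    options = g[0] if left_to_move else g[1]
    return any(not to_move_wins(o, not left_to_move) for o in options)
```

As a sanity check, * + * is a second-player win under normal play, matching the familiar fact that * + * = 0.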

Lots of last month's discussion concerned "where to apply misereness" to a game. I was trying to apply it to the game objects themselves, while it is usually applied to the play of a game instead. Paul Ottaway stepped in and said this was okay while evaluating games using the short disjunctive sum. When taking the short disjunctive sum, a player does not need to choose a subgame in which they have a move, but if no legal move exists in the component they choose, they immediately lose.

This is not as immediately obvious to define in the above set-notation, so let's give it a try:
(Warning: Attempted Ascii Brackets ahead! Update: Ascii Brackets removed!)

Short Disjunctive: (Update: fixed spacing issue. Did I get it right, though?)
G1 + G2 = { Left | Right } where
if either L1 = null or L2 = null, Left = { L + G2 : L \in L1} U { G1 + L: L \in L2} U { | 0}
otherwise, Left = { L + G2 : L \in L1} U { G1 + L: L \in L2}
and
if either R1 = null or R2 = null, Right = {R + G2 : R \in R1} U {G1 + R : R \in R2} U {0 | }
otherwise, Right = {R + G2: R \in R1} U {G1 + R: R \in R2}

(Did I do that right?)

Alternatively, one may want to consider the sum where each player chooses to make a move on each of the available game boards. This is called the conjunctive sum of games:

Conjunctive:
G1 + G2 = { l1 + l2 : l1 \in L1, l2 \in L2 | r1 + r2 : r1 \in R1, r2 \in R2 }

This may seem simpler, but after getting used to the disjunctive sum, it likely seems much more complicated from a gamester's point of view. Perhaps we will cover these sums more in the future!
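Transcribed the same way as before (under my reading that a Left option of the sum pairs one Left option from each summand, and symmetrically for Right):

```python
ZERO = ((), ())            # 0 = { | }
STAR = ((ZERO,), (ZERO,))  # * = {0 | 0}

def conjunctive_sum(g1, g2):
    """Conjunctive sum: each move plays in every component at once."""
    (L1, R1), (L2, R2) = g1, g2
    left = tuple(conjunctive_sum(l1, l2) for l1 in L1 for l2 in L2)
    right = tuple(conjunctive_sum(r1, r2) for r1 in R1 for r2 in R2)
    return (left, right)
```

One consequence falls right out of the code: if a player has no move in any single component, they have no move in the whole sum, so * conjunctively added to 0 is just 0, while * added to * is again *.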

Are there sums I missed? Please let me know!

Friday, December 4, 2009

Game Description: Cookie Cutter

One thing I remembered hoping to do with this blog was to describe different combinatorial games. In honor of Paul Ottaway, who made many comments the other week concerning adding misere games, I would like to talk about a game he presented: Cookie Cutter. If there are other games you would like me to talk about, I would be glad to oblige. Just let me know!

Cookie cutter is an impartial game that is deliciously reminiscent of Chomp. The state of this game is an n-by-m grid of squares---some of which may have already been cut out of the board---and an i-by-j cutter that is used throughout the game. During a player's turn, they may overlay the cutter anywhere on the board (without turning it) so that it overlaps at least one non-cut square (cookie). The cutter does not need to be totally contained within the n-by-m cookie grid (it can stick out over the side). All cookies under the cutter are then removed from the grid, and play passes to the next player. If no cookies are left, then the game is over and the player who cannot make a move loses.

This is similar to Chomp not only in that it makes me hungry to think about it, but also in that Chomp is a misere instance of this game where the cutter is just as big as the square grid and must cover the bottom-right cookie each turn.

Perhaps this is a bit of a stretch, and some of the patterns Paul notices in his master's thesis are actually more reflective of Kayles.

This relationship is more clear in games with one row of cookies and a cookie cutter of odd width (say k). In this case, the first player may either cut up to k cookies off one end or remove k cookies and split the row into two parts. If k = 3, that's a lot like Kayles on a line graph. There you can either chop up the line into two parts, removing 3 nodes from between the two pieces, or cut two or three nodes off one of the ends. The difference? You can't remove just one node on an end.

It turns out that this form of Cookie Cutter results in periodic nimbers based on the size. For a row of n cookies and an odd cookie cutter of width i, the Grundy value of the game is n mod (i+1). This simple periodicity is due to the ability to remove a single cookie from the end of a row, a move missing in Kayles.
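A brute-force Sprague-Grundy computation agrees with that formula for small rows. This is my own sketch, assuming the cutter may hang off either end of the row as long as it covers at least one cookie:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def grundy(n, i):
    """Grundy value of a single row of n cookies with a width-i cutter."""
    if n == 0:
        return 0
    options = set()
    # a is the cell under the cutter's left edge; cookies sit at cells 1..n,
    # and the cutter must overlap at least one of them
    for a in range(2 - i, n + 1):
        left = max(0, a - 1)             # cookies surviving to the left
        right = max(0, n - (a + i - 1))  # cookies surviving to the right
        options.add(grundy(left, i) ^ grundy(right, i))
    g = 0
    while g in options:                  # mex: minimum excluded value
        g += 1
    return g
```

Checking widths 3 and 5 over the first forty row lengths reproduces the n mod (i+1) pattern.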

Cookie Cutter is only one of three games explored in Paul's master's thesis. His analysis naturally carries beyond the one-row version of the game.

Wednesday, December 2, 2009

Talking about Stubborn Matchmaker tomorrow

Wednesdays often require short posts. This one perhaps more so than others.

Tomorrow, I will be talking about some of my thesis work on the impartial game Stubborn Matchmaker. The game is based on pairing candidates in a Stable Matching environment where both sets have a universal preference. The game ends when the matching is stable (the ith candidate on the left is matched to the ith on the right, for all i).

In case you're in the Wittenbergish area (central-western Ohio), feel free to stop by at 4pm for the talk, which will be in room 320 in the Science Center.

Perhaps someone out there will have an answer to how the patterns we see in the game actually work!

-Kyle

Monday, November 30, 2009

Locality of Plays

When I'm looking at a game for the first time, before I think about strategies and analysis there are two properties I first consider. First, is the game partisan or impartial? Second, can moves be made anywhere, or not? The answers to these help package the game into one of four buckets.

Though I may not have that many game complexity results under my belt, I can't fathom another way to begin proving a relationship between games. The links that have been shown between games always amaze me.

Consider the computational complexity of determining who will win a game, and then narrow your gaze further towards those games which are PSPACE-complete (no one has complained yet that I talk too much about this...). The basic PSPACE problem of satisfying quantified boolean formulae (QBF) is the setting for a game: players alternate choosing boolean values for the variables, in order from the first to the nth. In the end, if the formula is satisfied, then the "there exists" player wins; otherwise the "for all" player wins. To say this differently (and more correctly): if the last (nth) quantifier is "there exists", then the last player wins exactly when the formula is satisfied. If it is instead "for all", then the last player wins exactly when it is not satisfied.
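The QBF game can be brute-forced directly. Here's a sketch (my own naming), assuming the quantifiers alternate starting with "there exists" and the formula is handed in as a predicate on the full assignment:

```python
def exists_player_wins(formula, n, assignment=()):
    """True iff the "there exists" player wins the QBF game with best play."""
    if len(assignment) == n:
        return formula(assignment)        # all n variables have been set
    outcomes = [exists_player_wins(formula, n, assignment + (v,))
                for v in (False, True)]
    if len(assignment) % 2 == 0:          # x1, x3, ...: "there exists" chooses
        return any(outcomes)
    return all(outcomes)                  # x2, x4, ...: "for all" chooses
```

For example, "exists x1, for all x2: x1 or x2" is a win for the "there exists" player (set x1 true), while "exists x1, for all x2: x1 = x2" is not.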

This game falls into the impartial-and-restricted box: the identity of the current player doesn't matter, but moves are restricted in that you can't choose the value of any variable: only the next one in the list.

Thomas Schaefer made a big jump in his collection of game complexity results (On the Complexity of Some Two-Person Perfect-Information Games) by showing that Kayles is PSPACE-complete. Yes, it is true that both games are impartial, but he uses the QBF game to show that Kayles is hard, despite the fact that general Kayles does not enforce any restrictions on where a player can move.

In his proof, Schaefer creates a reduction to Kayles that simulates the forcing, making players take specific nodes or immediately lose. This is one of those beautiful jumps that opens things up for everyone else. Personally, I would always look at Kayles first if I hoped to prove an unrestricted game to be complete for PSPACE.

Are there other excellent jump examples? Do you think other properties deserve this treatment? I considered planarity, but that doesn't seem difficult to overcome.

Monday, November 23, 2009

Initial Game Positions

Note: it's unlikely there will be a post on Wednesday, as I am traveling to Massachusetts for American Thanksgiving.

We know the following non-constructive result about Hex and Chomp: the first player has a winning strategy.

This only applies to games in the starting position, and for different reasons for both games (although they both go by the "strategy-stealing argument" moniker). In Hex, we know that it is better to have more tiles of your color on the board, so it is naturally better to be playing first. In Chomp, we have a very special move from the first board. This move has, as children, all the other children of the initial position. Thus, if you go first, there are two possibilities: either one of those other children is a winning move (make it!) or none are, meaning that the special move is a winning move (make it!).

As mentioned, neither of these results is constructive: neither hints at the best strategy, only that such a strategy exists. We've noticed a similar situation with Atropos (at least as far as brute-force searches have allowed so far). A repeating pattern of winnability appears: boards with n open circles along a side have a winning strategy for the first player exactly when n is congruent to 0 or 3 modulo 4. The other two cases (1 and 2) have a winning strategy for the second player. This may seem a bit convoluted (especially since it's only been verified up through boards of size 11 or so), but it is the same as saying that the first player has a winning strategy exactly when there is an even number of open circles on the initial board.
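The arithmetic behind that restatement, assuming a board with n open circles along a side has n(n+1)/2 open circles in total (a triangular number): that count is even exactly when n is 0 or 3 modulo 4.

```python
def open_circle_count_is_even(n):
    """Parity of the triangular number n(n+1)/2, the total count of open circles."""
    return (n * (n + 1) // 2) % 2 == 0
```

So "n congruent to 0 or 3 mod 4" and "an even number of open circles" really do pick out the same boards.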

In general, Atropos and Hex are difficult to analyze (both are PSPACE-complete) and Chomp has resisted all attempts at a complete analysis. Could it be that for many games, the complexity of those initial positions is just inherently easier than that of a general game state? To some, this is very disconcerting. In some cases, the general-state argument is a bit distracting: it could be that a winning strategy exists from the initial position and is easy to describe. Complexity results come from reductions that usually say nothing about the difficulty of the game in common positions.

Could it be, then, that this easiness comes from the amount of information needed to describe those initial positions? In a game of Hex, to describe an intermediate state, you need to list the color (or lack thereof) of each hexagon. For an initial state, however, you only need to specify the size of the game board.

Have a great Thanksgiving! Also, if you will happen to be in the South-Central Ohio area next week, I will be giving a talk here at Wittenberg on Matchmaker on Thursday (Dec. 3). More info about that soon!

Friday, November 20, 2009

Emotional Machines like Partisan Games

There is a real difference between impartial and partisan games. I don't just mean that analysis is different, or that impartial games will always evaluate to Nimbers. I'm referring to something more basic:

People generally like playing Partisan games.

I can best present evidence by looking at the vast array of published partisan board games. It is hard to find published impartial games! Partisan games support a sort of narrative: I have my pieces and you have yours. I will build up my position, and at some points there may even be no interaction with whatever you're doing. We will race towards some winning condition. I will be able to watch my strategy materialize, whether or not it is a good one.

All of these are aspects of partisan games that are missing in impartial-land. In that mystical place, every move forcibly interacts with your opponent. Your board strength is tied precisely to who the current player is, not to some separate position you can see. Instead of racing towards a finish line, both players are the same racer, just worried about which leg will be forward when the ribbon is snapped.

I think it's natural that we don't like these sorts of games. I'm looking forward to playing a World War 2 simulation game with my dad this Thanksgiving, but I wouldn't be looking forward to it if we swapped teams after each turn!

In the past few years, I've made some off-hand comments (not here, in "real life") about people being bad logical machines but good emotional machines. We often allow our emotions to make decisions for us instead of precisely reasoning through all the logical details. Sometimes this causes bad decisions, but sometimes it lets us get to the heart of the matter and decide quickly. I'm in no way a psychologist, so I won't attempt to back this up with any data and will just keep blundering ahead.

Partisan games really clue into this emotional capability. "Well, at this point, there are lots of my tanks on the board and not many of the opposing tanks... I must be winning." We hope to be able to evaluate the strength of our board position and we can at least approximate this by taking a look at the game state. "Is it better to go first in Hex? Yes, it's better to have more of your pieces on the board." That is the basic response, and it turns out to be true, though an actual proof is somewhat more involved.

This is more difficult with impartial games. Near the beginning of a long game of Kayles on a complicated graph, I am not sure whether I'm winning or losing unless I check all the possible moves. Oof! That is a logical problem my brain doesn't want to have to work through. (At least, not on its own.)

I may continue challenging my students to impartial games, but soon they're going to get sick of it and ask me to break out a Hex board.

P.S. Enjoy the greatest rivalry in sports this weekend!

Wednesday, November 18, 2009

Summing with Misere Games (Part 4)

I got some comments while I was gone, but I haven't even read them yet. No fear: they will be read! Currently I'd like to discuss another issue raised by my students as they continue to code up adding games together. I asked the following question in class:

If I add a misere game with a normal game, is the result misere?

Keep in mind that my students are not in a CGT course; they need to be able to process this information in order to elegantly determine who loses a game. They determined that, yes, the game is misere. When the game is over, that means that the last move from the misere subgame was made, and thus the last player should lose.

We then went about coding up the simple isMisere() function that each Game object should have. Someday I hope to talk more about programming this all up, but today is sadly not that day.

Instead, I realized that there was still a problem. That sum is misere, but it is very different from a sum of normal-play games that is then made misere as a whole. What do I mean by this?

Consider the sum of nim games (each of one heap) with values 1, 2 and 2, with the added property that the last game (one with 2 sticks) is a misere game. When determining whether the sum is misere, we find that it is. The last player to play will lose.

Now, however, consider the misere version of the sum of the same games, where none of those subgames is misere. Again, this is a misere game: the last player will lose. The games, however, are fundamentally different, causing strategies between the two to be unequal. In the first version, taking the non-misere pile of 2 results in a win; the opponent will soon be forced to take that last stick from the misere pile. In the second version---where the whole game is misere, but none of the subgames are---taking a pile of 2 is a losing move; the opponent will just take the other pile of 2 sticks.
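To convince myself, here is a brute-force check of both readings. These are the rules as I understand them, with my own function names: wins_attached treats only the designated heap as misere, so emptying it ends the game and loses; wins_misere makes the whole sum misere.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def wins_attached(normal, misere_heap):
    """Player to move wins; taking the misere heap's last stick ends the game and loses."""
    for idx, size in enumerate(normal):
        for take in range(1, size + 1):
            rest = tuple(sorted(normal[:idx] + (size - take,) + normal[idx + 1:]))
            if not wins_attached(rest, misere_heap):
                return True
    for take in range(1, misere_heap):   # emptying it loses outright, so never try
        if not wins_attached(normal, misere_heap - take):
            return True
    return False

@lru_cache(maxsize=None)
def wins_misere(heaps):
    """Whole-sum misere nim: whoever takes the very last stick loses."""
    if all(h == 0 for h in heaps):
        return True                      # opponent just took the last stick
    for idx, size in enumerate(heaps):
        for take in range(1, size + 1):
            rest = tuple(sorted(heaps[:idx] + (size - take,) + heaps[idx + 1:]))
            if not wins_misere(rest):
                return True
    return False
```

The checks agree with the analysis above: with heaps 1 and 2 normal and a misere 2-heap, taking the whole normal 2-heap wins; in whole-sum misere nim on (1, 2, 2), that same move loses.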

They are both misere, but on a different level. On a final, programming note: depending on how your misereness is implemented, these levels can be represented differently, either as a public or private property.

Perhaps there will be more on this later!

Friday, November 13, 2009

Catching up a bit...

Sometimes it seems like there's some catching up to be done. Let's go ahead and do a bit of that today...

First, I will be at Supercomputing 2009 until Tuesday of next week and I don't think my phone does blogger, so there will likely not be a post next Monday.

Next, on the Computational Complexity blog, I saw a link to a CS Theory blog. This blog updates with job postings and conference deadlines relevant to researchers in the theoretical cs community. Awesome! (If they want Computational Complexity papers, perhaps board game complexities are acceptable...)

A few weeks ago, I mentioned a paper about the winnability of misere Hex. Much like normal Hex and Chomp, the authors are able to determine non-constructively which player has a winning strategy. Unlike those games, it turns out not to necessarily be the first player; instead it depends on the parity of the number of spaces on the game board. Still, their argument uses a strategy-stealing approach, just with a special twist. This is something I hope to talk about more in the near future.

On my office whiteboard there is a short list of potential post topics that I haven't yet got to. The list looks something like this:

* Locality of where-to-play in games.

* Strategies of initial positions

* Why do people seem to prefer partisan games?

There's another one: "* Complexity Results" but I can't recall which results I was talking about when I scribbled that down. In any case, if you'd like to hear about one of these, let me know.

Finally, there were some questions I posed on this blog that I haven't yet answered, mostly because I either haven't finished reading all the papers suggested or because I haven't been able to find those papers. Most notable is this question:
Question: If you add two instances of a game together which creates a new instance of the same game, does this imply that the game is easy?
I mentioned that the Nimania counterexample doesn't really apply for what I was looking for, but I was referred to some other papers:

(i) A. S. Fraenkel and Y. Yesha [1979], "Complexity of problems in games, graphs and algebraic equations," Discrete Appl. Math. 1, 15-30;
(ii) A. S. Fraenkel and E. Goldschmidt [1987], "Pspace-hardness of some combinatorial games," J. Combin. Theory (Ser. A) 46, 21-38.

I have not been able to get access to either of these papers electronically or in our library. (Wittenberg's library is not as grand as a research institution's, but is very good for a small school!) I may wind up just seeing if I can get them from tOSU, but if anyone else has a comment on the relevance of either of these, I would appreciate that also!

Have a great weekend! Another post arrives on Wednesday!

Wednesday, November 11, 2009

GenCon 2009

Today's is a short post, but that's okay. Here are some pictures of me and my fiancee at GenCon 2009 in Indianapolis.

The problem I've mostly had is how to explain what GenCon is, because it neither stands for nor is short for anything meaningful. "Lake Geneva, Wisconsin" doesn't mean that much to a whole lot of people.

Monday, November 9, 2009

Summing with Misere Games (Part 3)

As I mentioned last Wednesday, I taught my software class how to perform the traditional (disjunctive) method for summing games. So far their projects have been excellent, but I think this one is complicated enough that they're getting an extra week to work on it. We'll see how they do (my class only has 2 students, so there is only one project turned in each period).

In order to describe the disjunctive sum, we did some examples. First, I added 1-2-3 Nim to a Kayles 4-line:

[images: the 1-2-3 Nim position and the Kayles 4-line, joined with a +]


We played around with this a bit. My students are both very bright, and we've been moving pretty quickly through the class, so it was fine to take a big chunk of class time to actually try to win this summed game. My students correctly determined that you want to be the second player to go on this board (thus it's in "zero" or P... but not meaning Polynomial-time P... the other one).

I knew I also had to describe what to do with misere games. Thus, we played the same game as above, but with the nim side being misere. The class again correctly decided this game is in P.

For a last trick, I switched, making the nim game normal but the Kayles misere. The class now noticed that the game is in N! The amount of misereness hadn't changed, but where it was located had!

This is what is a bit startling: namely, the misereness can make a difference depending on which side of the sum it's on! I can't think of a better example for students to use the Composite design pattern (I'm putting a strong Object-Oriented emphasis on the software course). This curve ball of misereness can be a bit frustrating, but it can be a fun curve ball to play with as well!

As a question for this post: nim is solved for both the normal and misere versions. What about versions where some of the piles are misere and some are not? Additionally, what if we designate certain subsets of piles to all be misere? As with the examples above, that doesn't mean that each of those piles is misere, but that the collection of piles as a whole has misere status. Can we efficiently evaluate all those piles? What if some of the misereness groupings overlap? (Notice this last case doesn't fall under the theory of summed games, which break down as trees of subgames. Does it matter?)

Friday, November 6, 2009

Summing with Misere Games (continued)

As I mentioned before, it's actually a bit difficult to find a good definition of how to take game sums when including misere games. The rule that a player loses if they make the last move in a misere game seems a bit "tacked on". Then the overall rule is very inelegant: a player loses when they can either no longer make a move or when they make the last move in a misere subgame. The first part needs to be in there in case there are no misere subparts.

This sort of inelegant construction reminds me of defining exponents back in grade school. I understood that 10^2 was 10*10 = 100 and that 10^1 was 10. 10^0, though? Why is that 1? Why not 0? Sure, it fits the pattern, but the definition still seemed a bit skewed. I reasoned that there had to be a patch: 10^x was really 1*10*...*10 instead of just 10*...*10. Thus, 10^1 was 1*10 and 10^0 is just 1. That patch made things make sense. Excellent!

I think a similar patch could be used here. Notice that if we add to any game a one-move misere component (say, a misere nim game with a heap of size 1) the strategy of the game doesn't change at all; a player with a winning strategy should never choose to take the stick in that heap. Now, whenever we sum games together, we'll also add in that one-move misere game, just like we also multiplied by 1 above. With this, we can get rid of the dual rules for who loses a combinatorial game. The rule is now just: whoever makes the final move in a misere subgame loses. While it's true that the boardscape may be polluted with lots of these single misere games, neither of the players will pay them any attention until the final turn.

Yes, this logic may just be trading one patch for another, but this very strongly hints that misere should be the basic form of play instead of "normal" play.

It's hard to agree with this, however, when we are painting games in such a pessimistic light, basing results off the loser instead of winner!

Okay, I still have more to say... hang on until Monday. :)

Wednesday, November 4, 2009

Summing with Misere Games

Again, I have assigned a new project to my software design class, and again I ran into a bit of a descriptive challenge. This time my class must redesign the project to include game sums. It's not too bad to describe the rules of adding two games together (each turn, a player moves on one of the two boards that were added together) and who wins (the player that makes the last move in all game boards).

...until you add in misere games. These games are phrased in terms of the loser, not the winner. The simplest way to "normalize" them, then, is to declare that players cannot make the final move in the game. Thus, misere nim can be rephrased as nim where you are not allowed to take the final "stick". Now this version acts as a normal play game.

So what happens when you have two games, one of which is misere, and you add them together? When, exactly, does one player lose or win? When is the game over?

We can state it as follows: the game ends either whenever a player makes the last play in one of the misere subgames (that player loses) or when a player makes the last play overall (only if none of the games were misere; that player wins).

The interesting point is that this is not equivalent to just counting that "misereness" towards the entire game. For summing a normal with a misere game, you cannot just add the two non-misere versions together and then state that the last person to play loses in the sum instead of wins. Instead, the misereness of each particular subgame must remain attached to that game.

This is unfortunate, because some misere nim games (namely those with more than one pile) cannot be constructed as a sum of single nim heaps, each of which might be misere. Instead you have to sum normal piles and then declare the whole thing to be misere. (This does put a good spin on the structure of the code my class has to come up with, though!)

Strangely, the definition of this adding takes a back seat to straight-up analysis in textbook descriptions of misere games. My first natural question is: how do we add misere games to other games? Instead, the analytical side seems to come up first: how do we evaluate sums with misere games? For gamesters, that is certainly the most pressing question, but for recreational players, this is a bit frustrating.

Unfortunately, I've run out of time! I hope to continue with this Friday, though!

Monday, November 2, 2009

Best Intro Game?

I just had a wonderful visit this weekend with my soon-to-be-in-laws and they described playing Candy Land with their grandkids. Although I never played much as a kid, I think it's a good introductory board game. The game has good visuals, both in its imagery of delicious candy and in its use of colors instead of words or numbers to describe moving around the board. This definitely appeals to 3-5 year olds---whatever that age range is called---and is likely the first board game many American children play.

What about when it's time to play games in the classroom? What should a student's first game be then?

I'm focused on impartial games, and versions of Nim popped up in my elementary and middle school classes. Basic Nim is a candidate for a good introductory game: it is both simple and (surprisingly) has a very sneaky trick to determine who is winning. This gives it a very satisfying quality: we learn about the rules of a game and some of its underlying details and then, happily, we find a strategy to play the game well! This is unlike some other games that are more fun to play but might be disappointing to students because they don't reach this step.
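That sneaky trick is the classical nim-sum (Bouton's theorem): the first player of normal-play Nim wins exactly when the bitwise XOR of the heap sizes is nonzero. A one-liner shows it off:

```python
from functools import reduce
from operator import xor

def first_player_wins(heaps):
    # Bouton's theorem: the first player of normal-play Nim has a
    # winning strategy exactly when the nim-sum (XOR of heap sizes)
    # is nonzero.
    return reduce(xor, heaps, 0) != 0
```

For example, `first_player_wins([1, 2, 3])` is False (1 XOR 2 XOR 3 = 0), which is part of what makes the trick feel so sneaky in the classroom.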

The text Lessons in Play decides not to introduce impartial games until much later, instead beginning with Domineering. This is another nice game, due partly to its simplicity. Another key point here, though, is that this game uses pieces from other known games: a checkerboard and dominoes. This can lend a sense of creativity to students. "Use your imagination to create your own games from the pieces around you!"

I've also heard from people who swear by Hackenbush. This is a game where your turn consists of erasing an edge of a graph (usually drawn on a whiteboard) that corresponds to your color. After your turn, if there are any subgraphs not connected to a ground node, those are also erased. This game has the excellent feature that it is super customizable and doesn't need to use a parallel-to-the-ground board at all. Again, this sort of thinking outside the box can be very beneficial to students!

Note that both Domineering and Hackenbush have easy extensions to impartial games. Thus, that transition can be made easily.

What else do people prefer? Are these two the most common? Is there anyone out there who advises doing impartial games first?

Friday, October 30, 2009

Misere Hardness

I'm very pleased with the way my software engineering class is going. I've accidentally asked the students to code up some difficult games at times, but they've done a good job of figuring out exactly how to fix these issues. For the most part, using a game suite as the continuing project has worked extremely well!

Early on, I asked them to include both nim and misere nim. As with all misere games, misere nim is the same game, except that the person who makes the last move loses. Thane Plambeck used to maintain Advances in Losing, a blog solely about misere games. My first combinatorial game, Atropos, was misere: the first player to create a 3-colored simplex (triangle) loses. The misere notion is very intuitive for many games.

As we continue with the ongoing class project, I have continued to add difficult parts but at some point I stopped asking them to add more games to the suite. Oops! Currently, I've given them another tricky task: have users make moves through a series of mouse clicks (instead of entering choices in text boxes). I figured it was time to throw in some new games along with that. At the same time, I hope they realize they can apply "misereness" with any games, and not just with Nim.

Solution? They've already programmed a simple version of Kayles and Hex, so why not include misere Kayles and misere Hex? Even after thinking about that, I became immediately interested in misere Hex. How does that game work? Whichever player first creates their bridge between both sides loses? Weird! Who wins misere Hex? What is the complexity of this game?

In 1999, Jeffrey Lagarias and Daniel Sleator tackled that first question. They find the immediate intuition---that the first player never has a winning strategy---incorrect and instead base the winnability on the parity of the board size. They do this by showing that the losing player can stretch out the game play until the final hexagon has been colored. Thus, on boards with an even number of hexagons, the first player can win! The proof they give applies to games starting from the initial (empty) game configuration.

Does this answer the complexity question? Not in any obvious way. In order for the complexity to be solved, an algorithm would have to work for any game state. One can invoke a board where the game cannot be dragged out to fill in all the cells. (Give red two nearly-complete paths with only one hex missing, and let the rest of the board be filled in. If it's red's turn, what can they do?) It could be that the proof generalizes well into an algorithm.

Thinking from the complexity angle, as this blog (too?) often does, is there anything we can say about the relationship between the complexity of a game and its misere counterpart? Again, I am interested in considering Atropos, which is already misere. The non-misere version means that the first player to create a 3-colored triangle wins. In this impartial game, each player may choose to color one circle any of the three available colors. The circles are laid out similarly to Hex---so that each has six neighbors---inside a triangle with borders colored to suffice for Sperner's Lemma. This means that any play along the borders can win: there are always two different neighboring colors, so simply paint that circle with the remaining color.

Thus, the strategy from the initial position is easy. Choose one of those bordering circles and paint it the third color. Game over. In mid-game positions, however, a player must choose to paint a circle that neighbors the last-painted circle (if an open circle exists). Thus, the game might not be quite as simple from any given situation; one of the border circles may not be adjacent to the last play. (But why wouldn't the first player have just won the game? Why would you ever get to this situation?)

In general, are there any results about misere games and complexity? It's been a while since we phrased a question, so let's do it: (I've been marking the relevant posts with question# labels).

Question (2): Given a game G and its misere version, M, must determining the winner of at least one of these games be in P?

I think this is the best way to phrase the "What is the relationship between the complexity of G and M?" question. Consider nim and misere nim: both are easy. With Hex and Atropos, each has a PSPACE-complete version, but the opposite looks like it might be easy. Is there a game I'm overlooking which is hard both normally and misere...ishly? (someone help me out with French!)

Of course, I'm still curious about misere Hex.

Question (3): What is the complexity of determining whether the current player in misere Hex has a winning strategy?

For those interested, user Zickzack at boardgamegeek.com has a good description and list of misere games.

Wednesday, October 28, 2009

Economic Game Theory vs CGT

When I started grad school in computer science, I quickly learned it was dangerous to admit that to people. "Oh, that's great! You can help me fix my computer!" Though I'm no Linux guru, it quickly became my safe haven: "Which OS are you using? Oh, I haven't run Windows in years... I don't know anything about that." At least that wasn't a lie; I just have no experience cleaning up viruses, etc. Still, I learned it was often easier to tell people I was studying math instead and save the whole explanation. Again, this wasn't too far from the truth. Theoretical Computer Science doesn't look much like computer science to most people.

Nowadays I'm faced with an alternate dilemma related to explaining my field. Talking with other computer scientists, I will often simplify things by saying I am studying "game theory". Sometimes there are no follow-up questions, but other times people will assume you mean economic game theory or derivations thereof. I still haven't mastered handling this situation, but here's how it seems to go.

Me: "Oh, I'm more focused in combinatorial game theory, which isn't quite the same."

Them: "Combinatorial?"

Me: "Yeah, like board games."

Them: "What is there to study there?"

Me (getting all excited): "Well, we try to say things about who can win a game."

Them: "So you could figure out who's going to win a game of Monopoly?"

Me (somewhat less excited): "Well, we try to work with simpler games, like Hex or Chess."

Them: "Chess is simpler than Monopoly?"

Me: "Well, Chess doesn't have any randomness in it..."

Them: "I think Monopoly is much easier to play than Chess."

Me: "..."

I would much rather replace this with the following conversation:

Them: "Combinatorial?"

Me: "It's similar to the economic side."

Them: "Oh. How do you do that?"

Me: "We investigate these things by looking at simple board games. Sometimes what looks simple can wind up being very complex."

Them: "Wow, board games can do that? That's pretty cool!"

Me (beaming): "Yeah, totally!"

I know some of these similarities and differences, but how can I phrase them well? More importantly, how can I state this in a clean way that will not legitimately spark a "whatever!" from someone with a knowledge of economic game theory? A dangerous task indeed!

Monday, October 26, 2009

Ranking Football Teams... Again

A few weeks ago, I posted about tournament ranking algorithms. Since then, I wrote a little algorithm (the simplest I could think of) and have run it on this year's results after each week's games are over. I'm pretty proud of what's come out of it, but I realize there are still a few issues that may need to get smoothed out.

As I'm mentioning to my software engineering students, I'd really love to have the code be very reusable so it could be used as a ranker for other sorts of tournaments. With this in mind, I've tried to use all the OO-design principles I'm handing out in class. So far, this has been working very well!

I'm definitely at a point where I want to start talking about the algorithm, and listening to complaints about the rankings, etc. The latter is easy (I have Iowa up top) but the former is a bit more scary. If I had a paper, I would simply upload it to arXiv so that it was clearly dated and submitted by me. What do I do if I just have a code base?

One goal for this algorithm is for it to actually match up with rankings from previous years. If I can closely match to those, then perhaps I can not only say something about the strength of the algorithm, but also about how people determine which teams are better.

I mentioned above that this is the simplest algorithm I could think of... so how is it, then, that I am modifying small parts of it? Part of the flexibility of this system is that you can plug in different functions that reflect the value of a win or a loss, the difference in score, and so on. Thus, using different functions, I get different results (both of the ones I have created so far put Iowa up top; please feel free to complain).

There are a lot of these ranking algorithms (tons here) but how can I determine whether any are the same as mine without looking at the source of each of them? I think I'll probably just put something up on arXiv...

Friday, October 23, 2009

Conferences for Combinatorial Games

One thing I'd like to provide here is a list of good options to submit combinatorial game papers to. Unfortunately, I'm just unfamiliar with these options.

Two years ago, I managed to get a paper about Atropos into the Workshop on Internet and Network Economics (WINE) 2007, though I think I was just lucky enough to be covering a lot of current keywords. Nevertheless, it was a great workshop and I saw a lot of cool results as well as met a bunch of excellent people. The audience was very interested in Atropos, and people were immediately remarking on different aspects of the game ("What happens if you change the coloring of the background?"). At the end, someone came up to me and mentioned that this presentation was going to be one of the most memorable from the workshop!

I was ecstatic, but I also realized that I was probably very lucky; it would be unreasonable to expect such luck again. Later, my adviser discovered the Fun With Algorithms conference, which seems to be a great place to submit future combinatorial games papers. I also know that many people submit CG papers to Richard Nowakowski's Games of No Chance compilations.

Unfortunately, that is about the limit of my familiarity with combinatorial games papers. I know that results have appeared in many different conferences/journals/etc in the past. These days, however, what are the suggested avenues for publishing CGT results?

Wednesday, October 21, 2009

Games and Surreal Numbers

At Colby College, where I did my undergrad, a visiting math professor named Leon Harkleroad gave a talk on Surreal Numbers. Being a good math major, I went to the talk, even though my interests were sharply drifting towards computer science. In high school I had been very interested in the notion of infinity, and I was quickly drawn into the material as the discussion of omega and other non-real numbers began.

In surreal-land, numbers are built from two sets of numbers with non-overlapping ranges. The sets, say L and R, are then written in the form {L | R}. Since they are non-overlapping, without loss of generality, we can say that all elements of L are less than all the numbers in R. (This is the aspect that separates surreal numbers from games, wherein L and R can overlap.) Then the "value" of that surreal number is the simplest value lying between the largest value in L and the lowest value in R.

Thus, you can define an infiniteish number, ω, as: {1, 2, 3, 4, ... | }. The amazing thing about surreal numbers is that you can perform basic arithmetic on ω: 2ω, ω + 1 and ω - 1 are all attainable in surreal number land. Leon gave a great talk and was very accessible to the undergraduate level, though I didn't give it too much thought for a long while.
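As a sketch of how that arithmetic works (following Conway's standard definitions, not anything specific from the talk), addition is defined recursively from the left and right sets:

```latex
x + y = \{\, X_L + y,\; x + Y_L \;\mid\; X_R + y,\; x + Y_R \,\}
```

where X_L + y means y added to each element of X_L. Applying this with 1 = {0 | } gives ω + 1 = {ω | }, and ω − 1 comes out as {1, 2, 3, ... | ω}: the simplest number greater than every natural number but less than ω.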

A year after Leon's talk, however, I was lucky enough to hear John Conway give the keynote at a small-college computer science conference in Rhode Island where I presented my first paper (in poster form). He played some dots-and-boxes against a member of the audience on a small board, promising that his opponent would soon figure out how to beat him. Unfortunately, "soon" took quite a long time and his allotted time quickly began to run out; we students were quite anxious for our poster session and for the attention to be lavished on us. After the game was solved, he swiftly started drawing some basic Hackenbush figures on a new slide. Time was of the essence, but he showed how in this game, players could derive numeric values from the game positions. The figures were nothing more than line-graphs sticking out of the ground, but the intuition behind their evaluations was clear.

Then he drew another line, and used a vertical ellipsis to show that it continued infinitely skyward. Even as he was saying, "We refer to this value..." a light in my head went off: "... as ω". The connection to surreal numbers was there.

Unfortunately, time was up. John was forced to end on that note and may have missed getting to connect that to computer science. We hurried out of the room and to the poster hall.

Monday, October 19, 2009

Fall Break

I forgot to mention last week, but Wittenberg has Fall Break this week; there will be no post today, though I will post on Wednesday!

Friday, October 16, 2009

Complexity of Games with Randomness

It is known that Hex is a PSPACE-hard game. (See this page, it is the "Hex ist PSPACE-vollstaendig" paper by Stefan Reisch.) Once we add a bit of randomness, however, the strategies suddenly become far easier! As I think I've mentioned before, if you change the game so that the current player is chosen by the flip of a coin, the winning strategy becomes simple.

I'm also interested in trying to analyze some games by removing their sources of randomness. Instead of relying on die rolls, for example, give each player the set of potential die rolls and let them choose one at a time, removing that "roll" from their set of available moves. Once all options are chosen, either the player has no more moves or they get a new set of false rolls. I don't know of any analytical results like this, but I would love to see them!
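The bookkeeping for these "false rolls" is tiny; here is one possible sketch (the class name and interface are my own invention):

```python
class FalseDice:
    """Derandomized die: each player holds the set of potential rolls
    and spends them one at a time; when the set is empty, a fresh set
    is dealt out."""

    def __init__(self, sides=6):
        self.sides = sides
        self.available = list(range(1, sides + 1))

    def choose(self, value):
        # The player picks any still-unused "roll" instead of rolling.
        if value not in self.available:
            raise ValueError(f"{value} already used this cycle")
        self.available.remove(value)
        if not self.available:  # all options spent: new set of false rolls
            self.available = list(range(1, self.sides + 1))
        return value
```

Each player would hold their own `FalseDice` instance, turning a game of chance into a perfect-information one.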

On the other hand, I also have some interest in answering complexity questions of games that include randomness. For example: What is the probability that the current player has a winning strategy? Alternatively, rephrasing as a decision question, does the current player have a strategy that will allow them to win with probability > .5?

I don't know of any results like this. Do they exist?

Wednesday, October 14, 2009

Programming Hex

Oops!

Two weeks ago I assigned my software engineering students a new project: they were to add Hex to their collection of combinatorial games. To this point they had only implemented impartial games; this was to be a bit of a wrench in the mix.

Unfortunately, I picked a poor partisan game. Hex itself is an excellent game, but I forgot that it was a bit difficult to determine whether the game is over. The obvious approach to determine this comes down to graph connectivity, something beyond the scope of the project. Oops!

Luckily, I quickly recalled the proof of the Hex Theorem: when all hexes are colored, there will be exactly one winner of the game. The late David Gale does an excellent job of describing the theorem in this paper, showing the link between Hex and Brouwer's Fixed-Point theorem. I explained to the students that they should follow the boundaries from one of the corners of the board and see which corner it ends up at. For example, if they choose a corner where the blue side is on the left and red on the right, they can hug either the blue or red boundary (this might be the same if most of the board is colored).

First note that this path created by the boundary cannot end; it will exit through one of the other board corners. Note next that it cannot exit through the corner exactly opposite; the colors there are on the opposite sides! Thus, the path will veer either right or left (from the point of view of the entering corner). If, like we said, blue is on the left and red the right and while hugging the blue boundary we exit to the left, it means blue has not yet won the game. If, on the other hand, the path exits to the right, then there are blue hexes connecting the two sides of the board: blue wins! The opposite is true for red.
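For comparison, the graph-connectivity check I set aside as beyond the project's scope is only a few lines of breadth-first search. A sketch (the board encoding and the axial neighbor offsets are my own assumptions, with blue connecting left to right):

```python
from collections import deque

# Axial-coordinate offsets for the six neighbors of a hexagon.
HEX_NEIGHBORS = [(-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0)]

def blue_wins(board):
    """board: n rows of n cells, each 'B', 'R', or None.  Blue wins if
    a chain of 'B' cells connects column 0 to column n-1."""
    n = len(board)
    queue = deque((r, 0) for r in range(n) if board[r][0] == 'B')
    seen = set(queue)
    while queue:
        r, c = queue.popleft()
        if c == n - 1:
            return True  # reached blue's far edge
        for dr, dc in HEX_NEIGHBORS:
            nr, nc = r + dr, c + dc
            if (0 <= nr < n and 0 <= nc < n
                    and (nr, nc) not in seen and board[nr][nc] == 'B'):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False
```

Simple enough on its own, but the boundary-walking trick fit the students' existing code much more cleanly.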

My students coded up their project and it works very nicely! Still, next time I think I'll assign Clobber instead!

Monday, October 12, 2009

Tournaments

I used to play in a lot of Magic tournaments, sometimes big ones with hundreds of people. The longest I took part in lasted ten rounds, each taking a bit over an hour. For that whole time, I didn't leave the tournament room.

There are a number of ways to run such an event, and likely the easiest is to just hold a single-elimination tournament. Now, in order to determine a champion, only log_2(n) rounds (rounded up) are needed with n players. The problem is that this is very unsatisfying for many players, especially those like myself, who aren't necessarily all that great and are just hoping to play a bunch of games. Tournament organizers want to attract as many players as possible and thus they use "Swiss Seating", a method of pairing opponents that have similar records. This works well and, as I recall, their format usually has (log_2(n) + (2 or 3)) rounds. After that, the top eight players enter a single-elimination bracket to determine the overall winner.

This method is nice for a few reasons: it works while having a very small number of contests between players but still creates a sturdy ranking. Swiss seating is used for a number of combinatorial game tournaments (often with their own variations) but it can't be implemented everywhere.
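The core of Swiss seating is short enough to sketch. This is a deliberately naive version (function names are mine, and real Swiss systems also avoid rematches and handle byes and tiebreakers):

```python
import math

def swiss_rounds(n, extra=3):
    # Rounds in a Magic-style Swiss portion: roughly log_2 of the
    # number of players, plus the two or three extra mentioned above.
    return math.ceil(math.log2(n)) + extra

def swiss_pairings(records):
    # Naive pairing: sort players by record and pair neighbors, so
    # opponents always have similar records.
    ranked = sorted(records, key=records.get, reverse=True)
    return list(zip(ranked[0::2], ranked[1::2]))
```

For instance, with records {'a': 3, 'b': 0, 'c': 2, 'd': 1}, the next round pairs a with c and d with b.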

American college football is plagued with controversy about who the best team is in a season. I won't get into all the details about these rankings, but there are a few major restrictions on the sport. The first is that the level of intensity and preparation for each game requires that teams play at most once per week. This means that teams play at most 13 games during the regular season and that there is (probably) no time for an end-of-season tournament aside from the current one-round system (the top two teams play each other).

Well, you might think, there are only about 120 total college football teams (in the top division; overall, there are more like 650). 13 games is more than enough to come up with a Swiss tournament that does a good job of picking out the top two teams. Unfortunately, Swiss seating requires that teams don't know in advance who they're going to be playing. In college football, the schedule is usually prepared over a year in advance!

In order to overcome this, rankings are done with human input. This, of course, leads to the above-mentioned controversy: determining who the two top-ranked teams are leads to no end of discussion. Worse still, many of the voters used for input are people tied to teams themselves; each of the coaches gets to vote!

Are there any interesting tournament solutions that handle "bad seating" without introducing a human element? It seems feasible, except that perhaps more than just wins and losses would be needed in the data (amount a team won by, for example).

Friday, October 9, 2009

Metaposting

Apparently I am still learning how to blog...

In an effort to clean things up here, in lieu of posting about CGT today, I'm just going to try to fix things up a bit in the settings, etc. So far, I fixed the setting that prevented just anyone from leaving comments! Oops! Now anonymous comments are allowed. (How did I not notice that?)

You may have noticed recently that I added labels to a few posts. I'd like to go back through all the posts and try to add labels to help sort things out. That's another project for today/this weekend.

Right now I have a list of good CGT websites, but I would also like to have a list of links to rules for the different games mentioned here. I might also like to have links to different gamesters, but that might overlap a bit with the current list of links. Perhaps that wouldn't be a bad thing...

I do want this to be an actual useful resource for CGT info. Let me know what I can do to help you out!

"Gamesters" is a word I read once used to describe combinatorial gamers. I don't recall where I first saw that word. Does anyone know who coined the term?

Wednesday, October 7, 2009

Too much competition?

One of the reasons I am drawn to studying game theory is that I am naturally competitive. I am never good at sitting down to play a game and not actively trying to win. I don't always strategize with the same ferocity, but rarely do I throw a game on purpose---even if doing so would be a social victory for me. It's easy to be too competitive!

I still hope to teach a combinatorial games course, and a big part of that would be playing different games during class time. Despite the fact that I would not keep track of wins and losses, I am afraid that some students may believe they are losing too often. Whether or not they are a "sore loser", this can still be very emotionally degrading. How can I do well in this course if I can't even play the games well?

Of course, you can still gain a really strong understanding of the game while losing. It's not true that being a good player is equivalent to being a good game theorist. This is likely not something that is immediately clear, though. So far I have no actual experience teaching this sort of course, so I don't know how to drive this point home.

Does anyone have any good suggestions? How can I convince students it is okay to play poorly?

People in general are competitive, and this lends well to taking a good interest in games. Unfortunately, that could play against the study if the competitive aspect becomes too great.

Monday, October 5, 2009

Algorithms and Game Rules

Most of my posts have had the underlying theme of writing algorithms to play games well, but coming up with the rules for games is also a good exercise in algorithms.

In fourth and seventh grades we were asked to create board games for school. A big difference between the two assignments was that in seventh grade we were asked to actually write out the rules. Although this was for a social studies class, our teacher made it plain that we needed to be very clear with these rules; don't assume the players automatically know how to do everything. She also stressed making it clear how the rules repeat: after one person's turn, the game should move on to the next player.

Combinatorial games are beautiful in part for the simplicity of the unifying design. To describe a new game, it is enough to describe what the left and right children of any state are. Moving to the next person's turn and when the game ends are all built into the overriding definition. For generic board games, however, enough is going on during one turn that algorithmically describing the steps is more worthwhile.

Roll the dice,
move your piece clockwise around the board that many spaces,
if you land on unowned property, you may buy it
otherwise, if you land on owned property, you must pay that player the appropriate amount
otherwise, if you land on...
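That turn description is already most of a program. Here is one way it might look as running code; everything in it (the classes, the prices, the buying policy) is made up for illustration:

```python
import random
from dataclasses import dataclass

@dataclass
class Space:
    price: int = 100
    rent: int = 10
    owner: object = None

@dataclass
class Player:
    position: int = 0
    cash: int = 500

    def wants_to_buy(self, space):
        return self.cash >= space.price  # naive policy for the sketch

def take_turn(player, board, roll=None):
    # One turn of the loop above; `roll` can be fixed for testing,
    # otherwise two dice are thrown.
    if roll is None:
        roll = random.randint(1, 6) + random.randint(1, 6)
    player.position = (player.position + roll) % len(board)
    space = board[player.position]
    if space.owner is None:                 # unowned: may buy it
        if player.wants_to_buy(space):
            player.cash -= space.price
            space.owner = player
    elif space.owner is not player:         # owned by someone else: pay rent
        player.cash -= space.rent
        space.owner.cash += space.rent
```

The repetition Mrs. Pasco insisted on is then just a loop calling `take_turn` for each player until the game's end condition holds.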

If I ever get to teach a freshman seminar on board games, I will definitely integrate creating our own board games. Creating rules and a prototype has a lot to do with software design, and that process would be very useful!

Thank you, Mrs. Pasco. Your social studies course gave me the best computer science lesson of that school year!

Friday, October 2, 2009

Another Rules Change

Sometimes I wonder how much players should expect board game designers to get the rules right the first time. It seems to me that there are lots of games being published these days and it might be difficult to detect all the errors prior to publication. Still, I am often surprised at what is either a small amount of playtesting or a bad translation of playtesting to actual rules.

Having said this, I know I am already guilty of not doing enough playtesting when it comes to designing new combinatorial games. It's hard to see how a game will play out unless you can find opponents, which I assume is more difficult in an academic setting than at an actual game manufacturer. Still, it does seem reasonable that small pieces may be missing from a rule (for example, a stalemate rule). It's nice when the solution to these is easy enough to see and implement.

I'm having a more difficult problem resolving a conundrum with Ninja versus Ninja. I've mentioned this game before, but after posting on the topic of confusing/incorrect rules this week, it again jumped into my head.

To quickly explain the uncertainty, this page describes the different possible moves in one turn after a throw of the two four-sided dice. The goal is for your ninja to infiltrate the opposing dojo and either end your move on one of their ninja (thus eliminating them from the game) or just return for a number of points equal to the distance your ninja was able to infiltrate. Ninja only have three turns to leave their dojo and return, so you do not often hang out in enemy territory. Instead, ninja will often use the reverse move to go deep and then turn around.

In order to prevent the easy capture of ninja, movement is restricted to either going in a straight line or an L-shaped move, and jumping of other ninja (even your own) is not allowed. The reverse allows your ninja to move in a straight line, then turn back around at one point, though this is only allowed in the opponent's dojo. The rules then suggest that a good strategy is to move your ninja directly next to an opponent's ninja, since then "there is no way for that Ninja to eliminate your piece (the two dice can only roll the numbers 2–8)".

This isn't true, however! If they roll an odd number (and are in your dojo and are deeper in your dojo than your piece), they can move further in and then reverse into your ninja. If they roll a 2 and a 3 on the dice, they can move forward two spaces, then reverse three, landing on the ninja they were initially next to.

That seems mostly legal. It plays pretty well and can be a bit hard to make happen (see all the requirements above). Sadly, it means that the strategy of moving your piece in position to attempt to block them getting back to the dojo doesn't really work.

Even more unclear and potentially damaging is the use of combining an L-move and a reversal. In this scenario, a ninja is moved a few squares sideways (not deeper into either dojo), then towards the opponent's dojo, then reverses and heads backwards. Everything seems to work fine during the reverse---so long as you don't reverse past the space you turned at. For instance, a ninja who has 6 spaces to move could move 1 square to the left, then forwards 2 squares, then backwards 3. In this way, the ninja is essentially able to land on (and kill) a piece one square left and one square backwards from its starting position, even though it didn't roll a 2. This combination is often too strong.

My fiancee and I have changed this to not allow the reversing-with-an-L ninja to use more spaces than occur in the L-shape. Thus, the ninja cannot move further backwards than forwards. This is a pretty tame change, though it plays a lot better than allowing the rules as they appear to be stated.
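Our house rule is easy to state as a legality check. A sketch (the function and parameter names are purely illustrative, with the three legs of the move given as space counts):

```python
def legal_l_reverse(total, sideways, forward, backward):
    # House rule for an L-move followed by a reversal: the three legs
    # must use up exactly the rolled total, and the ninja may not
    # travel further backwards than it went forwards.
    return sideways + forward + backward == total and backward <= forward
```

Under this rule, the 6-space move of 1 left, 2 forward, 3 backward from the example above is rejected, while 1 left, 3 forward, 2 backward is fine.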

Other potential rules alterations I would consider:

1) Don't allow ninja to move further backwards than forwards on any reversal. This would actually make true the claim in the "strategies" part of the rules that a neighboring ninja cannot be eliminated.

2) Allow ninja to move further backwards than forwards on reversals, but only if the move does not end them on another ninja. This allows more liberal reversals, but still restricts the ninja from using it to capture other ninja.

3) Don't allow reversals at all. (I want to try playing this way.)

Perhaps there are not many Ninja Versus Ninja players out there at this point, but if anyone has played this game and had a similar problem, please let me know!

On the other hand, which other games (seem to) need a change to the rules to make them (more) playable?

Wednesday, September 30, 2009

No Loops and EXPTIME versus PSPACE

Last time I commented about adding rules to unbounded-turn games to prevent stalemate-causing loops of play. On the other side, there is an aspect of games which have a bounded number of turns: they are often shown to be PSPACE-complete instead of EXPTIME-complete (unless they aren't hard at all, such as with Nim). Thus, if PSPACE does not equal EXPTIME, these games are in PSPACE and not EXPTIME \ PSPACE (or EXPTIME - PSPACE, depending on which "set minus" notation you prefer).

Following our previous discussion of constraint logic, games fall into these two classes according to whether they are bounded or unbounded. Thus, we would expect potentially hard unbounded games such as Oshi to be EXPTIME-complete, while bounded games such as Tsuro are better candidates for PSPACE-completeness. (I'm not sure whether Tsuro is a hard game at all; I finally got to play a bit over the weekend. Perhaps it would be more difficult if I didn't have such a small hand of tiles.)

Interestingly, Go has some play variations which threaten to go against this. Using the ko rule, which prevents looping back to a position from the most recent turn, the game is EXPTIME-complete. Without this rule, however, it is only known that the game is PSPACE-hard, though not necessarily complete. Some versions of Go use the superko rule, which prevents moving to any position that ever previously existed. I personally do not know the complexity of this version. Sadly, I am not familiar with Go and thus can't say which rules are most often used.

Mostly I think it's interesting that this very obvious facet of games---whether or not they have an unbounded number of turns---is a strong intuitive tool in game complexity.

Monday, September 28, 2009

Avoiding Loops and Rewriting Rules

Some games, such as Tic-Tac-Toe, have a clear maximum number of moves. Either the game ends before nine moves have been made, or the ninth move is the last. On the other hand, there are games that don't have such a trait; they have the potential to run an unbounded number of turns.

When I was a kid, I enjoyed playing Stratego with my grandmother. One of the (very sensible) rules that often got me into trouble was the non-repetition rule: a piece cannot be moved back and forth between the same two spaces more than three times in a row (or something similar). Another example is the following rule in Oshi:
Stalemate
When you move one of your pieces, you cannot end its move in the same space it occupied at the beginning of your previous turn.
Oshi is all about moving your pieces around a grid and trying to push the opponents' pieces off the board. It's an excellent game, and this rule prevents a lot of potential "loops" from occurring. By a loop, I mean a situation where the sequence of game states repeats. This could still happen in Oshi if two players play a little back-and-forth dancing game between two different pairs of stalemating pieces. Many unbounded-turn games have this difficulty, and these sorts of loops (which can force stalemates) often cannot be ruled out simply by adding stalemate rules of this sort.

Unfortunately, however, there are often situations in Oshi where the above stalemate rule fails. As I mentioned above, the game pieces are used to push opponents' pieces off the board. In a very basic case, if there were two pieces of the same size, they could push each other the same distance. For example, the red piece could push the white piece towards the board's edge. On the white player's turn, the white piece might want to push back, moving both pieces back towards the other edge.

The red player would likely want to respond by pushing back. The white player responds by doing the same thing, and the game loops. Unfortunately, this very simple repetition happens often during the game. It happens in almost this exact situation, but also in more complex ones. You get pushed, and you immediately respond by pushing back. This seems like a good strategy, but it leads to these bad situations. Thus, for a few years, I've been using the following replacement for the published rule above:
Stalemate
When you move one of your pieces, you cannot end its move in the same space it occupied at the beginning or end of your previous turn.

The rule is only slightly changed, but it makes Oshi far more playable. If you're about to start a game, I highly recommend playing this version!
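For anyone implementing Oshi digitally, the amended rule amounts to a one-line legality check. A minimal sketch, assuming squares are (row, column) pairs and that we remember where the moved piece sat at the start and end of the mover's previous turn (the representation and names here are my own, not from the game):

```python
def move_is_legal(end_square, prev_start, prev_end):
    """The published rule forbids ending on prev_start only; the amended
    rule also forbids prev_end, which kills the simple push-back loop."""
    return end_square not in (prev_start, prev_end)

# Last turn this piece moved from (3, 3) to (3, 4); this turn it may
# end its move on neither of those squares.
print(move_is_legal((3, 3), (3, 3), (3, 4)))  # False
print(move_is_legal((3, 5), (3, 3), (3, 4)))  # True
```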

Friday, September 25, 2009

Constraint Logic: Modelling Computation as Games

For the past few years I've been very interested in a family of games from Erik Demaine and Robert Hearn. This family, known as Constraint Logic, revolves around making moves which consist of flipping the direction of edges in a graph. The rules for when edges may be flipped differ only slightly for each version of the game in the family (in whether an edge can ever be flipped back to its original state, and in the number of players).

Nevertheless, they are able to divide the family into games that are complete for many different complexity classes (and one that is undecidable). The brilliant construction is potentially the cleanest way to characterize these classes in terms of combinatorial games. This version of the paper is very easy to digest and clearly describes the graphs used in all Constraint Logic games. The long version, which contains example reductions to various games, is Hearn's PhD thesis. The examples there show the appeal of using this as a vehicle for completeness results: Constraint Logic is flexible enough to be reduced to many games (it handles planar situations well, for example) while already being a game itself.

I feel like I have kept running into different versions of this framework for the past few years (though I don't recall how) but I don't know whether it has actually been used yet by anyone other than its creators. Does anyone know of an appearance of Constraint Logic in another reduction? If so, I would love to hear about it!

Wednesday, September 23, 2009

Partisan Game states that are Impartial

Partisan games are those where the different players do not have the same options for moves (such as in Hex or Chess, but not like Nim). Some states of partisan games, however, mimic impartial games.

This is demonstrated clearly in the game of Domineering, where players alternate placing dominoes on a checkerboard, each taking up two spaces. The Left player must play dominoes vertically (meaning they take up one space and the space directly below that) while the Right must play them horizontally. No pieces are allowed to overlap on the same square of the board.

Some situations in Domineering behave like impartial games, however. Suppose the only free space to play on a board is three squares arranged in the following way: (forgive my ascii art)

XXXX
XOOX
XOXX
XXXX

(Legend: X means this space is covered by a domino. O means this space is uncovered.)

Here, either player may play, but afterwards there are no available plays on the board for anyone. This is much different from

XXXX
XOOX
XOOX
XXXX

where after one of the two players makes a move, they will then be the only one left able to play there. For example, if Left plays on the above board, the situation then becomes:

XXXX
XXOX
XXOX
XXXX

leaving Left able to play again but Right with no such options. With our top example, however, either may play here, but only that one play is left. This is equivalent to the Nim state where there is one pile with only one stick in it. Either player may move by taking that stick, but then no more moves are available.
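To make the comparison concrete, here is a small sketch (my own helper, not from the post) that counts each player's available moves on boards written in the ascii notation above, where 'O' is uncovered and 'X' is covered:

```python
def moves(board):
    """Return (Left's vertical placements, Right's horizontal placements)
    as lists of top-left coordinates of each legal domino."""
    rows = board.split()
    left = [(r, c) for r in range(len(rows) - 1) for c in range(len(rows[0]))
            if rows[r][c] == "O" and rows[r + 1][c] == "O"]
    right = [(r, c) for r in range(len(rows)) for c in range(len(rows[0]) - 1)
             if rows[r][c] == "O" and rows[r][c + 1] == "O"]
    return left, right

l_shape = "XXXX XOOX XOXX XXXX"  # the first board above
square = "XXXX XOOX XOOX XXXX"   # the second board above

print([len(m) for m in moves(l_shape)])  # [1, 1]: one move each, like a 1-stick Nim pile
print([len(m) for m in moves(square)])   # [2, 2]: whoever moves first keeps a move
```

Either move in the L-shaped region covers its shared square, leaving a single isolated space with no moves for anyone; that is exactly the one-stick-pile behavior.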

In 2005, Gabriel Drummond-Cole found positions in Domineering (and Chess!) equivalent to a Nim pile with two sticks. These Domineering states are quite a bit more complicated than those above. The simplest he finds is:

XXXXXXXXX
XXXOXOXXX
XXOOOOOXX
XOOXOXOOX
XXOOXOOXX
XOOXOXOOX
XXOOOOOXX
XXXOXOXXX
XXXXXXXXX

The ascii art shows its inability to capture what is really happening here (see the paper for better pictures). Gabriel goes on to describe more such situations and even details the patterns that cause this. One interesting note in this instance is that the covered square in the exact middle can also be uncovered: the value is still equivalent to the Nim-heap with 2 sticks.

Gabriel points out a problem with his constructions, however: there is no way to reach this game state from an initial board! Notice the blocked spaces (X's) in the middle of the diagram: they are each just one blocked square surrounded by unblocked squares. Thus, no domino could have been placed there.

Constructing these situations is a large task, aided often by evaluating software such as Aaron Siegel's Combinatorial Game Suite. Gabriel laments he has "not been able to find any ordinary Domineering positions" equivalent to the Nim-heap of size 2.

Finding other partisan situations equivalent to Nim-heaps is a continuing study. A Nim-heap of size 3 is equivalent to a game with two heaps, of sizes 1 and 2. Thus, we can already build an equivalent Domineering game by appending our two boards together:

XXXXXXXXXXXX
XXXOXOXXXOOX
XXOOOOOXXOXX
XOOXOXOOXXXX
XXOOXOOXXXXX
XOOXOXOOXXXX
XXOOOOOXXXXX
XXXOXOXXXXXX
XXXXXXXXXXXX
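The heap arithmetic behind this equivalence (3 = 1 XOR 2, the nim-sum) can be checked by brute force. A quick sketch of my own, not from the post, computing Grundy values of Nim positions directly from the mex rule:

```python
def mex(values):
    """Minimum excludant: the smallest non-negative integer not in `values`."""
    m = 0
    while m in values:
        m += 1
    return m

def grundy(piles):
    """Brute-force Grundy value of a Nim position given as a tuple of piles."""
    options = set()
    for i, p in enumerate(piles):
        for take in range(1, p + 1):
            rest = tuple(q for j, q in enumerate(piles) if j != i)
            if p - take > 0:
                rest += (p - take,)
            options.add(grundy(rest))
    return mex(options)

# A single heap of 3 and the sum of heaps 1 and 2 have the same value,
# matching the nim-sum: 1 ^ 2 == 3.
print(grundy((3,)), grundy((1, 2)), 1 ^ 2)  # 3 3 3
```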

Work on finding partisan states equivalent to a Nim heap of size 4 is the next big task, and I have heard of some potential results... :)

Monday, September 21, 2009

Good Abstract Games

Before choosing to come to Wittenberg University, I got in touch with the school's role-playing group. I had started playing the newest edition of D&D back in Boston and was hoping to find a new group. I was assured there would be a campaign starting up after I got there. Sweet!

I went to the group's opening meeting a few weeks back, even though I had already figured out who I was going to be playing with. Once there I realized that this is actually a full-fledged gaming group, not just focused on role-playing. In particular, the club is proud of the number of board games they have amassed! Every Saturday night, they hold an open gaming night, where most people play some board games.

I will likely be showing up this week, bringing Atropos and Tsuro with me. I am curious, though, which good board games people know of that are interesting theoretically---either because they have difficult strategies not overwhelmed by randomness, or because they look more like abstract combinatorial games. I know there are already well-known games like this (Chess, Go, etc.), but which lesser-known games fit into this category?

Let me know and I will have them ready for Saturday!

As a note: I have been getting plenty of emails about topics mentioned in these posts. I will try to address everything and will post my own thoughts in comments... when I have a chance. I love getting the emails, but please consider posting a comment directly to the blog. :)

Sunday, September 20, 2009

Special Sunday Bonus Edition

Yes, I was ill through Friday. Thus, I am having an extra post today. Enjoy!

I am currently teaching a software engineering course and am assigning an ongoing project. Each week, the students update the project to the new specifications as well as fix any errors I found in the previous round. So far it's going great, and I don't even have any CS majors in the class!

On one of the recent parts, they were asked to implement Nimania, which I mentioned in this post. I gave them the rules, but, being good students, they looked around for more about it online. They found... this post (it's the same link).

I'm really enjoying teaching this class. It seems important that theoretical computer scientists keep up with their programming skills. As a grad student, I was somewhat surprised to find that the theory professors were quite competent programmers: keeping up with current languages and teaching some of the intro CS sequence. By next semester, I will have to learn Python in order to teach our CS1 class, which will be completely new for me. I'm looking forward to this challenge but I'm also grateful for the motivation to learn another language.

Even more so, I am anxious to see how my students will model some specific combinatorial game fundamentals. I see two potential paths for one design decision, and I'm curious to see which way they go. Will they model things to follow basic CGT, or will they do something else? So far, they have only had to model impartial games, and only one at a time. I will introduce adding games together as well as partisan games as separate projects, and it could certainly make a difference which comes first! I'd like to see if I can get them to go the non-traditional route, but without any actual pushing. This seems unlikely, but I'll see what I can do.

Thursday, September 17, 2009

No Post Yesterday... Obviously

Sorry there was no post yesterday. I have been ill a bit this week. With any luck, I'll be able to say something meaningful tomorrow.

Monday, September 14, 2009

Running out the Clock

This past weekend was an exciting one for college (American) football. During the game I watched, however, a thought struck me: was it really a good idea for these teams to be running out the clock?

Michigan was battling Notre Dame in Ann Arbor. Both of these teams have had some recent awkwardly bad seasons, but the rivalry is big enough that this was sure to be a good test.

The last quarter of the game was full of both sides scoring only touchdowns (usually worth 7 points) as each tried to get their offense past the opposing defense. In American Football, at any given time one team has their offensive players out, while the opponents have their defense out. The offense tries to advance down the field with the ball while the defense tries to stop them. A touchdown is scored by getting one of your players through the defense and down to the other end of the field. How's that for a simplification?

The fourth quarter consisted of Michigan's offense scoring a touchdown, putting them 11 points ahead (there are other ways to score points, but let's forget about that for a second). Notre Dame's offense then had the ball, and scored their own touchdown. Michigan had the ball again, but failed to score, thus Notre Dame had another shot and scored again, putting them up by 3 points. Michigan attempted to score and failed, then Notre Dame had another go, but failed as well. On their next turn to be offense, Michigan scored another touchdown, putting them on top by 4 points. With only 11 seconds left to play, Notre Dame did not score again.

The exact amount of time remaining drives strategies that I think are a little short-sighted, and every team uses them. After both teams had scored their first touchdown of the quarter, and with Michigan up by 4 points, Michigan had the ball with about 8 minutes left to go in the game. They failed to score on that drive, but they used tactics that used up more of the clock. The logic is that since they were ahead, "running out the clock" would give the opponents less time in which to score.

Of course, after Michigan failed to score and Notre Dame responded by getting a touchdown themselves, the Wolverines were suddenly fighting against the clock. Now it was them who needed to score within the remaining time.

This pattern went back and forth a few times, with the clock obviously running out on Notre Dame at the end. This strategy continues to confuse me, however. For a large portion of the game, Notre Dame showed that they were able to move the ball very well, meaning that when they were on offense, they had a very good chance of scoring. With eight minutes left to go in the game, it seems more useful to consider: "How long will the rest of our offensive drives last? If they get possession again, will they be able to score? If so, will they be able to eat up the rest of the time during their possession, so that the clock actually runs out on us?" Michigan barely scored their last touchdown in time; the earlier running of the clock was almost their undoing!

This really starts to look a bit like an impartial game. Instead of trying to control as much of the time of possession as possible, how can we instead control the parity of the clock, so that on our last drive towards the end zone we will have enough time, but the opponent won't have much left after we're done?

Naturally, there are a lot of probabilistic factors in there, ignoring the fact that I've oversimplified things. Still, it seems like some better form of analysis could be employed here.

Friday, September 11, 2009

A CGT course for undergrads

I am again curious to hear from people who have taught CGT courses. After starting this blog, I got a lot of comments about potential topics, tools and plans for teaching a graduate-level course in combinatorial games.

At Wittenberg University, however, we do not have graduate students in Math and CS (the "University" comes from our graduate education program, I believe). Also, we sadly do not have enough upper-level CS majors right now to effectively teach a separate course aimed at that audience. We do have two senior-level majors who might be able to take such a course, but it's unclear whether we would actually have any students in it.

It turns out that I have two real options for the near future:

A) I could teach a mid-level combinatorial game theory course aimed at Math and CS students who have completed some form of discrete-math-ish course. This would at least let me assume the students have been exposed to induction and partial ordering and would let me get somewhat deep into the subject.

B) I could teach a freshman-only board game course as a part of our "WittSem" series here. This would be a combination of a class and an orientation tool, and would also define an initial advising group. Incoming students have to take exactly one WittSem in the fall of their freshman year. Here I think I would be able to have a lot of fun and perhaps draw people into the Math/CS department, but we likely wouldn't be able to get deep into the actual material. I would have to be cautious to avoid a course where we just play a lot of board games and don't do anything academic.

Are there any suggestions out there? I would love to hear experienced and inexperienced advice on these two, or perhaps some completely different options that I haven't considered.

Wednesday, September 9, 2009

Kayles of the planar variety

Before I knew much of anything about CGT, while I was looking into the complexity (of "Who can win?") of another game, I saw a cool relationship: all of the "tricky" cases boiled down to solving a different graph game. In this game, a move consisted of choosing a node, then removing it and all its neighbors from the graph. (Thus, under normal play, the player who can no longer select a node (because the graph is empty) loses.)

How interesting! I should look further into this game! It seemed simple, but perhaps not simple enough. I looked around, but didn't know all the best places to look, and thus didn't find out about the exact same game---known as Kayles---until after I'd spent a lot of time figuring out things that were already known. Oops.

As it turns out, Kayles is a very fundamental impartial game, and it's not surprising that my Atropos situations were looking like Kayles boards.
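For small graphs, the game described above (often called Node Kayles when played on arbitrary graphs) yields to a brute-force Grundy computation. A sketch of my own, exponential and therefore only usable on tiny instances:

```python
def grundy(graph, alive):
    """Grundy value when the vertices in `alive` (a frozenset) remain.
    A move picks a vertex and removes it along with all its neighbors."""
    options = set()
    for v in alive:
        options.add(grundy(graph, alive - {v} - graph[v]))
    m = 0
    while m in options:  # mex of the options' values
        m += 1
    return m

# A path on three vertices, as a dict of adjacency sets.
path3 = {0: {1}, 1: {0, 2}, 2: {1}}
print(grundy(path3, frozenset(path3)))  # 2: nonzero, so the first player wins
```

On the path, taking the middle vertex wipes out the whole graph at once, so the first player wins immediately, which the nonzero Grundy value confirms.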

Just knowing this name, however, opened up my resources greatly! Sometimes it's not what you're Googling for, but whether you know what the thing you're Googling for is called. I was interested in the complexity of Planar Kayles (Kayles played only on planar graphs), which is still an open problem (as far as I know). Although we know from Schaefer that Kayles is PSPACE-complete on general graphs, there are some classes of graphs on which it is efficiently solvable.

The most recent example I know of is from Rudolf Fleischer and Gerhard Trippen, who showed that Kayles on star graphs of bounded degree is polynomial-time solvable. This and Bodlaender's result for graphs with small asteroidal number are the two results I'm a bit familiar with. Still, these do not answer the problem of playing Kayles on arbitrary planar graphs.

Is there a general intuition amongst Kayles enthusiasts that goes either way? Does anyone have a good conjecture about the complexity of Planar Kayles?

Monday, September 7, 2009

Adding Resources and Misereness

I mentioned early on that I wanted this blog to list lots of good resources for CGT material. I have gotten to say a lot of other things instead, but today I will repeat a lot of what I've been emailed, and use that to update the surroundings of this blog.

Richard Nowakowski (one of the three authors of Lessons in Play) provides a link to articles from the Games of No Chance series, which he edited. Aside from this, his game page provides good links for "Budding Combinatorial Game Theorists", including Aaron Siegel's Combinatorial Game Suite.

Martin Mueller mentions that his team's Fuego Go-playing program beat a 9 Dan professional last month. He provides some games on the Fuego page. (I'll quickly admit to not knowing (yet) the rules to Go.) I am not familiar with the program, and thus do not know if it uses any randomization, as mentioned in the complexity blog.

Aviezri Fraenkel's selected CGT bibliography certainly deserves a link. It's been a long time since I saw this and said "Woah!"

I added a link to Thane Plambeck's misere games blog. Unlike the basic definition of a combinatorial game, a misere game is one in which the last play is the losing play. When I was first taught Nim (called "Sticks" by my 8th grade teacher) I learned it as a misere game; the player who took the last "stick" loses.

Although I learned the 3-5-7 version of Nim thoroughly, I was ignorant of the evaluation trick until midway through college. In my freshman year, I challenged my linear algebra professor to a game and he proceeded to mentally process the trick and make all the right moves---until the end of the game. He was playing the standard, non-misere version, and at the end asked "Wait... which one of us wins?"
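The trick in question is the nim-sum: XOR the pile sizes, and under normal play move so the XOR becomes zero. Misere play is identical until no pile has more than one stick, at which point you aim to leave an odd number of piles instead. A sketch of my own (using brute search over moves rather than the closed-form winning move):

```python
from functools import reduce
from operator import xor

def is_p_position(piles, misere=False):
    """True if the player to move loses with best play."""
    piles = [p for p in piles if p > 0]
    if misere and all(p <= 1 for p in piles):
        return len(piles) % 2 == 1        # leave your opponent an odd count of 1s
    return reduce(xor, piles, 0) == 0     # the nim-sum trick

def winning_move(piles, misere=False):
    """Return (pile index, new size) reaching a losing position for
    the opponent, or None if no such move exists."""
    for i, p in enumerate(piles):
        for new in range(p):
            if is_p_position(piles[:i] + [new] + piles[i + 1:], misere):
                return (i, new)
    return None

print(winning_move([3, 5, 7]))               # (0, 2): 2 ^ 5 ^ 7 == 0
print(winning_move([1, 1, 2]))               # (2, 0) under normal play
print(winning_move([1, 1, 2], misere=True))  # (2, 1): leave three single sticks
```

The last two lines show exactly where my professor stumbled: the two conventions agree until the endgame, then call for different moves from the very same position.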

I learned that version of Nim, Krypto and Equations all at the same time; which games did you learn early on that helped spark your interest in CGT?

Friday, September 4, 2009

Games and Difficulty to Just Play

I got the chance to play a handful of games of Equations yesterday with my department chair. This is a game I was introduced to in the eighth grade in a half-semester math games course. I finally tracked it down on the Internet last year and got a set at Christmas. Woohoo!

Until yesterday I had only played two or three games with my set.

It's hard to convince people to play Equations, mostly because you have to find someone willing to spend time playing a game who also isn't bored by elementary math. The intersection of these sets isn't tiny, but it isn't large either. Add to this that the rules are pretty complex, and it's very hard to get random people to make it all the way through one game.

There is a reason I'm more interested in games with simple rules: it's easier for people to learn to play right off the bat. Whether or not it is easy to determine who can win is often secondary to making the game accessible to many players. I love the game Equations---and I don't think it could exist without all the complicated rules---but it takes a few games to really understand what all the possible moves are. It's possible that you can make a winning, game-ending move this turn, but never see it (and no one else at the table might see it either). Games are often like this. You can fully listen to or read the rules, but unless you see some of the moves acted out by people, you may not comprehend how the rules can be used. It often takes a few plays of any game before people have an understanding of all the moves they can make.

The above difficulty of seeing immediate-winning moves could be taken a step farther, though.

Question: is there a game where it is hard to get the set of all possible moves?

In the project my students are working on, one of the methods for their game class is getChildren(), which returns all the potential moves from the current state. Are there games for which this is an unreasonable request?

I can see the following potential clarifications/things-I-am-interested-in/etc

Yes: there might be games that have an exponential number of children, relative to their size. I would rather, however, say that we should consider the complexity in terms of (size of game + number of children) since otherwise Nim already has this property.

Maybe: I'm kinda interested in games engineered around this question. I suspect they exist, mostly because I don't think it would be hard to throw in solving a 3-SAT question as part of each turn.

Totally: I want to know if there are already any games like this, either games that are played or games that are studied (or both!).
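To make the question concrete, here is a sketch of the sort of interface I have in mind. Only the name getChildren() comes from the actual project; the rest is my own guess at a minimal version:

```python
from abc import ABC, abstractmethod

class Game(ABC):
    """A game state that can enumerate its children."""

    @abstractmethod
    def getChildren(self):
        """Return all states reachable in one move."""

class NimPile(Game):
    def __init__(self, sticks):
        self.sticks = sticks

    def getChildren(self):
        # A pile of n sticks has n children, which is already exponential
        # in the pile's description size (about log n bits) -- the Nim
        # caveat from the first clarification above.
        return [NimPile(self.sticks - take)
                for take in range(1, self.sticks + 1)]

print(len(NimPile(5).getChildren()))  # 5
```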

Today's post is a bit brief, as I will be driving this afternoon. This is not a long weekend for me, however; there will be the usual post on Monday! :) Have a great weekend!

Wednesday, September 2, 2009

Nimania and a Previous Question

Previously, I wrote about a question I didn't know the answer to:
Question: If you add two instances of a game together which creates a new instance of the same game, does this imply that the game is easy?
I received an email from Aviezri Fraenkel about a potential counterexample: the game Nimania.

I'd never heard of this game before, but I was so impressed by the simplicity of its design that I have already included it as part of a new project assignment for my software design course.

Directly from that specification:
Nimania is a game somewhat like Nim. The game begins with a single pile of sticks, as in Nim, with play starting on turn 1. On turn k, a player chooses a pile. If that pile has only one stick, it is removed. Otherwise, that pile loses one stick and k new copies of the shrunken pile are added to the game (thus, there are now (k+1) piles of that size). A player wins by taking the last stick from the board.
The description of a game state looks a lot like Nim's. Combine the descriptions of this and another game of Nimania and you get a new one: a bunch of piles. Except the turn counters aren't necessarily the same, and they won't ever be synchronized (making a move in one of the subgames increments k only for that subgame).
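A single Nimania move, per the rules quoted above, is easy to write down. A sketch assuming a state is a pair (k, piles); the function name and representation are mine:

```python
def nimania_move(k, piles, i):
    """Apply turn k to the pile at index i; return the new (k, piles)."""
    piles = list(piles)
    p = piles.pop(i)
    if p > 1:
        # The pile shrinks by one stick, and k fresh copies of the
        # shrunken pile appear: (k + 1) piles of size p - 1 in total.
        piles.extend([p - 1] * (k + 1))
    # A pile with a single stick is simply removed.
    return k + 1, piles

print(nimania_move(1, [3], 0))     # (2, [2, 2])
print(nimania_move(5, [1, 4], 0))  # (6, [4])
```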

My question above is far from clear; what does it mean to "create" a new instance of the same game? My intuition is that the sum should form a state of the same game automatically, with no transformations required. This is true of Nim: the state of a Nim game is just a collection of piles. Add two Nim games together and you just union their piles together. Done.

Aviezri Fraenkel sent me a bunch of different potential counterexamples to the question; time to get back to the reading! In the meantime, any more thoughts on the same question are very welcome!

Monday, August 31, 2009

When do we actually play?

One of the best things I discovered when I arrived at Wittenberg a few months ago was that someone in the geology department had a laminating machine. Wittenberg being an extremely friendly place, they were more than happy to let me use it right away.

What did I want? Some nice game boards I could leave on my desk. Perhaps I could "force" visitors to my office to play with me. I've often found there is no better way to begin research on a game than to actually play it with people. It doesn't necessarily matter whether your opponent is a novice or a master; facing someone who doesn't share your mind and watching the game play out is an extremely helpful tool. Between the unexpected moves and the ability to focus on just one side of the game (and perhaps some other psychological factors I'm unaware of), I am often looking for opponents for my games. (Sometimes playing against yourself is useful and necessary too: it is easy to "implement" one player to follow a certain strategy and then see if the other half of you can beat it.)

Finding actual opponents is difficult, though. Consider the following exchange:

Me: "Hey, do you want to try out a new game with me? I get the feeling it's really really hard to play well."

You: "Uhhhh...."

The readers of this blog might be inclined, but it's easy to see that people wouldn't jump at this chance. Now consider the added dialogue when impartial games are concerned:

Me: "You're still waffling... I wanna let you know this game is really cool. It's impartial!"

You: "What does that mean?"

Me: "It means we don't have our own pieces. Good boards for me and good boards for you look exactly the same!"

And Zoom! They're off. This is even harder when I add that explaining the rules to a game such as Dictator (a voting game) will take over ten minutes. See ya!

It was great to go to GenCon a few weeks back, but even then I think I may wind up spending more time studying games such as Tsuro than actually playing them.

A line to this effect has actually been used as part of my "academic profile". From the web site for my Asprey lecture at Vassar:

When not teaching, he creates combinatorial games and analyzes their computational complexity. This, unfortunately, does not leave him enough time to actually play them.
As it is, I barely have enough time to spend even analyzing games.

... maybe the best plan is to just print and laminate more game boards :)