Monday, August 31, 2009

When do we actually play?

One of the best things I discovered when I arrived at Wittenberg a few months ago was that someone in the geology department had a laminating machine. Wittenberg being an extremely friendly place, they were more than happy to let me use it right away.

What did I want? Some nice game boards I could leave on my desk. Perhaps I could "force" visitors to my office to play with me. I've often found there is no better way to begin research on a game than to actually play it with people. It doesn't necessarily matter whether your opponent is a novice or a master; facing someone who doesn't share your mind and watching the game play out is an extremely helpful tool. Between the unexpected moves and the ability to focus on just one side of the game (and perhaps some other psychological factors I'm unaware of), I am often on the lookout for opponents for my games. (Sometimes playing against yourself is useful and necessary too: it is easy to "implement" one player to follow a certain strategy and then see if the other half of you can beat it.)

Finding actual opponents is difficult, though. Consider the following exchange:

Me: "Hey, do you want to try out a new game with me? I get the feeling it's really really hard to play well."

You: "Uhhhh...."

The readers of this blog might be so inclined, but it's easy to see why most people wouldn't jump at the chance. Now consider the added dialogue when impartial games are concerned:

Me: "You're still waffling... I wanna let you know this game is really cool. It's impartial!"

You: "What does that mean?"

Me: "It means we don't have our own pieces. Good boards for me and good boards for you look exactly the same!"

And Zoom! They're off. This is even harder when I add that explaining the rules to a game such as Dictator (a voting game) will take over ten minutes. See ya!

It was great to go to GenCon a few weeks back, but even then I think I may wind up spending more time studying games such as Tsuro than actually playing them.

This situation has actually made it into my "academic profile". From the web site for my Asprey lecture at Vassar:

"When not teaching, he creates combinatorial games and analyzes their computational complexity. This, unfortunately, does not leave him enough time to actually play them."

As it is, I barely have enough time to spend even analyzing games.

... maybe the best plan is to just print and laminate more game boards :)

Friday, August 28, 2009

Word Choice: suboptimal

I recently realized I was using the same word to mean two different things. That word was 'suboptimal', and I was using it to describe move options in a game.

On one hand, I said that a move was suboptimal if there was another move that definitely was a winning move.

On the other hand, I said that a move was suboptimal if it was definitely a losing move.

These both sort of make sense, and I think this is part of the beauty of combinatorial games. I can say that a move is a winning move, and even if the game is not over on that turn, it's (mostly) clear what I mean: the next player does not have a winning strategy. Furthermore, the term 'strategy' likely does not have to be defined for those who have never seen the actual definition (have I ever defined it here?). People play games, and thus have an idea of what these terms mean.

The term suboptimal does make sense... but, taking a look at it, it doesn't necessarily mean what it perhaps should. In the first definition, nothing prevents the "suboptimal" move from itself also being a winning move. There's not necessarily anything "sub" about it. I probably shouldn't use it this way any more (I won't cry for too long).

I am, however, attached to this second definition! I've used it and plan on continuing to use it. Why is this? When I'm reasoning about game states, I often want to say "Player 1 won't make this move because it's suboptimal (Player 2 would have a winning strategy after that move), and Player 2 won't make that move either on their next turn, because Player 1 would also have a winning strategy from there."

How can it be suboptimal for both of them? One of them must already have a winning strategy, and the other does not, so the move isn't actually any worse for the losing player. Although we may not yet know which player that is, the move is not suboptimal for one of them. It seems better to replace this term with, simply, "losing move": a player shouldn't make that play because it's a losing move, not because it's necessarily suboptimal.

Still, there's another part of the usage for suboptimal that gets lost: that move doesn't extend the length of the game. Often, making these suboptimal plays is equivalent to giving up; the game is now going to end quickly, and with the opposing player having a clear view of which plays to make to finish you off. If you avoid the suboptimal play, you may not have a winning strategy any more, but there are places the opponent may "trip up" and give you the advantage again.

Thus, as a player, even if you're in a losing position already, you still probably don't want to make the suboptimal plays.

Have a good weekend! I'm doing a good job of updating MWF, so I will try to keep that up! This, of course, may change in coming semesters...

Wednesday, August 26, 2009

Bigger Hex: clear. Bigger Chess...?

I'm posting late on Wednesday. I'll have to learn to post while eating lunch in the future...

I've gotten a number of emails from people reading the blog, and perhaps I should clarify something about the complexity of games.

Chess is a game of a fixed size, so there are a constant number of possible move options each turn (a big constant, but a constant). The input size is bounded above: we know how big the game board is, the maximum number of each piece there can be, etc. We can come up with an algorithm that takes a maximum amount of time (and space) to determine whether a player has a winning strategy.

When we talk about games (and problems in general) being "complete", this refers to how the amount of time (and space) needed for a solving algorithm to finish increases as the size of the problem grows. With Chess, the board never gets bigger: there are only so many different configurations we can have on that 8x8 board.

Hex is a bit different. There is a standard size (11 x 11) but it's easy to see how this game extends to any given size: just increase the size of the board. Thus, assuming we can play on variable-sized boards, it makes sense to talk about the complexity of this game. Using this, it was shown that Hex is PSPACE-complete.

I've stated this before, but I haven't made it clear that I am considering the generalized, extendable versions of these games. I am, and I will likely continue to do so. There's a problem with this, however: I don't always know how those extensions work.

I claimed at one point that Chess is EXPTIME-complete, but I have no idea how Chess is generalized to an n x n board. I know only that there is a generalization and that it leads to the requirement of an exponential-time algorithm to solve it (I have a lot of reading to do).

Aside from this, though, does anyone actually play n x n Chess? I've seen a few variants: a hexagon-based variant and a weird three-player version, but I've never seen anyone play on a board larger than 8 x 8.

(Thanks to Aviezri Fraenkel for reading and mentioning this. I still have a lot of emailed comments to get to!)

Monday, August 24, 2009

Randomness and Games

During grad school I took a total of five different courses on randomness in computing (not including cryptography)! When my classes were winding down and my focus shifted to games, I wasn't sure how much randomness would still have to offer.

It seems like it makes a big difference.

There are two facets to randomness in games: using randomness in the rules of games (technically they no longer fit the strict definition of a combinatorial board game, but whatever) and using randomness as a tool to play and solve games. This second facet has gotten some attention lately.

In Lance Fortnow and Bill Gasarch's complexity blog, a post was recently made on using randomness to play Go. From their blog:

"Consider the following seemingly crazy way to evaluate a game board: Have each player play randomly and see who wins. Repeat a few hundred times and score the position by the percent of time that White won."

It turns out this may be a really good way to evaluate Go. I have seen (less scientifically) something similar with Atropos. Instead of giving our Java applet a complex AI, I just have it choose uniformly from all the not-immediately-losing options. This makes it a pretty tough opponent in a lot of situations (especially on boards of size 5 and 6). It is certainly less sophisticated, because it's not doing any smart tree-pruning, etc., but it is not a trivial opponent.
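The random-playout idea is easy to sketch. Here is a minimal version, using Nim as a stand-in position since its moves are easy to enumerate (the function names and trial count are my own illustration, not the machinery of any actual Go program):

```python
import random

def random_playout(piles, player=0):
    """Play out a Nim position with both players moving uniformly at
    random; return the index (0 or 1) of the player who takes the
    last stick, i.e. the winner."""
    piles = [p for p in piles if p > 0]
    while piles:
        i = random.randrange(len(piles))          # pick a pile at random
        piles[i] -= random.randint(1, piles[i])   # take a random amount
        if piles[i] == 0:
            del piles[i]
        if not piles:
            return player   # this player took the last stick
        player = 1 - player

def estimate(piles, trials=2000):
    """Score the position: the fraction of random playouts won by
    the player to move."""
    wins = sum(random_playout(list(piles)) == 0 for _ in range(trials))
    return wins / trials
```

For example, `estimate([1])` is exactly 1.0 (the first player is forced to take the single stick and win), while `estimate([2])` hovers near 0.5 under random play even though the position is a guaranteed win with correct play; the playout score is a heuristic, not the game value.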

The first of these facets (randomness in the rules) seems to trivialize much of the work of computing strategies for playing well: algorithms become simpler, often resorting to a more greedy approach. In some sense this is a boon, though. I have some dice I can use to randomize the colors available to a player each turn of Atropos, and with them potential opponents are often more willing to play. It's better to lose to the luck of the die than to not have thought hard enough about it.

For the game Ninja versus Ninja, my fiancee and I both think it would be interesting to get rid of the dice and instead give each player 16 different moves (2, 3, 3, 4, 4, 4, 5, 5, 5, 5, 6, 6, 6, 7, 7, 8: one for each of the possible outcomes when rolling two 4-sided dice), each usable once, and then check the score at the end. We think it would be interesting... but we haven't tried it yet. Too much thought, perhaps. Instead of just having our ninja be within a possible roll of dying to an opponent's ninja, we would know exactly which number they could "spend" to reach it; but then, what if that number is also one I still need? At what point is it worth spending a number to kill a ninja, and so on. (I won't explain all the rules here.)
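For the curious, that 16-move list is just the multiset of sums from two 4-sided dice; a quick check:

```python
from itertools import product
from collections import Counter

# The sums of all 16 ordered rolls of two 4-sided dice.
sums = sorted(a + b for a, b in product(range(1, 5), repeat=2))

print(sums)            # [2, 3, 3, 4, 4, 4, 5, 5, 5, 5, 6, 6, 6, 7, 7, 8]
print(Counter(sums))   # 5 appears four times; 2 and 8 appear once each
```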

Indeed, using randomness to determine who gets to have a turn also simplifies strategies. Random-turn impartial games are certainly affected this way (at the end of the game, a coin flip will determine who wins and who loses), but even partisan games, such as Hex, can become simple.

I know of no examples which go the other way---showing that randomness in a game makes it more difficult---if any exist.

Friday, August 21, 2009

Games and Teaching

Here at Wittenberg we start early: Monday is the first day of classes, meaning we will have six school days before September even starts!

It also means that the incoming freshman class is moving in to campus this weekend, trying to learn how to be college students before they get into their courses and trying to figure out what they might want to do here. When I was in their position, I was excited about doing creative writing, which I wouldn't be allowed to do until my sophomore year due to over-demand. Before then I would take four math classes and one CS class; my path quickly shifted to a math-CS double major that sophomore year.

What it means now is that today we will have department information sessions, in which each department will do its best to lure students into trying classes in their discipline. We expect to see students who already know they want to succeed in Math and CS (we have a joint department here), but what about students like myself, who had a weird senior year as far as math was concerned (another story for another day) and had never taken a serious CS course? My hope today is to help attract those students with games. I found a laminating machine here in my first week and have a nice Atropos board, complete with poker chips and even some cards. (Sadly, I left my Atropos dice at home! Oops!)

I'm psyched to use games as both an attention grabber and a teaching tool. Although it might be too late to use today, does anyone have any advice for brief comments that tie CGT and math or CS together? Usually I try to talk about the complexity side of things, but it's not always easy to field follow-up questions while avoiding actually naming complexity classes.

We'll see how I do today... :)

Wednesday, August 19, 2009


I was going to write a short post asking another question I've had in the course of my research, but then I realized I should try to explain some of the terms behind it first.

Thus: Nimbers.

Nimbers are a way of evaluating impartial games that describes more than just "which player can win" (though it does that too). The simplest example is the term's namesake, Nim. I won't go into describing the many versions of Nim, just the most basic: you are given a bunch of piles of objects (say sticks). Each turn, a player may take as many sticks as they like from one pile. The winner is the player who takes the last stick.

This is very simple in the case of only one pile: take all the sticks. With two piles, the winning move is to take from the larger pile so that the two have the same number of sticks (if they already do, you're in trouble). It turns out that we can quickly evaluate the winnability of a position with any number of piles: take the pile sizes and xor them together. If the total is zero, the player to move has no winning strategy.

Thus, while the single pile of 4 sticks is a win for the player to move (take them all), a game consisting of two piles of 4 is a loss (4 xor 4 = 0).

Also, the game consisting of the three piles 3, 5 and 6 has no winning strategy (3 xor 5 = 6, which xor'ed with the remaining 6 gives 0).
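The xor trick is a one-liner in code; a quick sketch (the function name is my own):

```python
from functools import reduce
from operator import xor

def nim_value(piles):
    """Xor (the "nim-sum") of the pile sizes. By Bouton's theorem,
    the player to move has a winning strategy iff this is nonzero."""
    return reduce(xor, piles, 0)

# The examples above:
assert nim_value([4]) == 4        # single pile: a win (take them all)
assert nim_value([4, 4]) == 0     # two piles of 4: a loss
assert nim_value([3, 5, 6]) == 0  # 3 xor 5 = 6, and 6 xor 6 = 0
```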

This comes from an evaluation technique that can be used to evaluate any impartial game! The value is drawn recursively from the values of the children---the games which can be reached from the parent in one move. In Nim, if we just have one pile, then the children of that pile are all piles with fewer sticks: that pile has the "power" to be changed into any smaller pile. If we have a pile of size 5, it could be turned into a pile of size 0, 1, 2, 3 or 4. Although, when this is the only pile, the correct move is down to 0, there are still other options available.

In the case where we have two piles, say 1 and 3, we have the power to move to any of these children: (0 and 3), (1 and 2), (1 and 1), and (1 and 0). Using our evaluation trick from before, we find those games have nim-values 3, 3, 0 and 1 respectively. Thus, this game should be inherently different from the game with a single pile of size 5. Indeed, look at those values: the single-pile game has a value of 5 (it's just one pile; that's easy), while the second has a value of 2 (1 xor 3 = 2). Why is this?

Notice that for the second one, 2 is the first nim-value not included among the children's values, which we also call the "mex" or "minimum excluded number". More formally, for any set S of natural numbers,

mex(S) = min{x in N | x not in S} where N are the natural numbers

This really captures the power of an impartial game by seeing how many consecutive values appear among its children. This is what we call the nimber of an impartial game, G:

nimber(G) = mex({x in N | there exists child G' of G where nimber(G') = x})

We can compare and see that the nimbers of our above two games match exactly with the nim-values we calculated (with xor'ing) before. The above principle can be applied to any impartial game, however. Unfortunately, this is a recursive operation which takes, in general, an exponential amount of time to calculate. Nim is easy to evaluate because there is the simple xor-trick to shortcut past the nimber definition.
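To make the recursion concrete, here is a direct (and deliberately naive) sketch of mex and the nimber definition, applied to Nim so we can cross-check against the xor shortcut (the code and names are my own illustration):

```python
from functools import lru_cache

def mex(s):
    """Minimum excluded natural number of a set of naturals."""
    n = 0
    while n in s:
        n += 1
    return n

@lru_cache(maxsize=None)
def nimber(piles):
    """Sprague-Grundy value of a Nim position, computed straight from
    the recursive definition. `piles` is a sorted tuple of nonzero
    pile sizes. Exponential in general; Nim just happens to also
    admit the xor shortcut."""
    children = set()
    for i, p in enumerate(piles):
        for take in range(1, p + 1):
            child = list(piles)
            child[i] -= take
            children.add(nimber(tuple(sorted(x for x in child if x > 0))))
    return mex(children)

# Matches the values computed above:
assert nimber((5,)) == 5       # the single pile of 5
assert nimber((1, 3)) == 2     # the two-pile game, 1 xor 3
```

Even with memoization, the number of distinct positions explored grows quickly with the pile sizes, which is exactly the exponential cost mentioned above.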

Having said all that, here is a question I encountered while trying to evaluate some board games: if the nimbers for a game are bounded above, does this imply that there is an efficient algorithm, or shortcut, to evaluate that game? Further, if the bound is not a constant but a slowly-growing function, can we still find an efficient evaluation?

In the course of wishing I knew the answer to this (I was evaluating a game that appeared to have some sort of efficient patterns) I found nimbers emerge for the game that were higher than I thought were possible. I still hope there is some kind of relationship here, however...

Monday, August 17, 2009

Complexity of Tsuro

Another of the games I got to play (and immediately bought) at GenCon was Tsuro. I had seen this game in the store before, but had bought Oshi instead, which is a great game itself. Tsuro, however, has the nice bonus feature that it easily scales from 2 players up to 8. The game is played by having each player follow a different string through a grid, starting from the border. Players take turns placing tiles on the board which determine the directions of the strings. Your goal is to avoid having your string either head off the board or collide with another string.

Tsuro, happily, is the sort of game that doesn't seem to take much to generalize. There is a bit of randomness and hidden information, so a CGT redefining of the game would require some adjustment (players in the game choose from a small handful of tiles each turn, which are secret from the other players) but not a terrible amount.

Without having done any sort of analysis (or any research yet at all on this), it already seems plausible that determining whether a winning strategy exists for this game is PSPACE-complete. Here's a little bit of my reasoning:

1) It's definitely in PSPACE: once you place a tile on the grid, you cannot place another in that spot. Since a general description of the game requires describing the locations of any previously-placed tiles, there can be at most that many more moves, so a brute-force algorithm can just reuse the game board to try all possible lines of play.

2) We can rephrase it as a one-player puzzle, which seems a good candidate for an NP-complete problem: arrange all tiles on the board with one space missing so that all strings neither collide nor lead out of the board.

3) With the path-following nature of the game and the goal of remaining within the structure, some facet of this game may be equivalent to some fixed-point theorem, such as is true with Hex, a well-known PSPACE-complete game.

These are just my thoughts. Does anyone know anything certain about this game?


(Although I posted yesterday, I will likely not post on Sundays!)

Sunday, August 16, 2009

Difficulty of "Who Can Win?"

I am back early from GenCon 2009 (my first time!) because my orientation starts today. I'm not sure what there is to say about the convention, except that I did play a lot of board games, and for many of them I wonder about the difficulty of playing well. Is there a point where you can say "I am winning!" before you've actually won?

Well, being a computer scientist, I would consider having computers try to determine who is winning. If there is no randomness in the situation, then we could go through each possible sequence of moves from the current position and see what the possibilities are. Unfortunately, there are usually an exponential number (in the size of a generic description of the game) of these different sequences, so we can't expect computers to evaluate a game that way.
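As a tiny illustration of that brute-force idea, here is a sketch using Nim, whose moves are easy to enumerate (the function name is my own; real game solvers would add pruning and caching):

```python
def first_player_wins(piles):
    """Brute force: recursively try every move sequence. This is
    exponential in general, which is exactly the problem described
    above. A position is a win iff some move leaves a loss."""
    for i, p in enumerate(piles):
        for take in range(1, p + 1):
            child = list(piles)
            child[i] -= take
            if not first_player_wins(child):
                return True   # this move leaves the opponent losing
    return False  # no winning move (or no moves at all): a loss

assert first_player_wins([4])          # take all 4 sticks
assert not first_player_wins([4, 4])   # the mirror strategy beats you
```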

Some games, such as Nim, have very fast methods for determining whether there's a winning strategy from a given position. Others, such as Chess, have no such fast method. Computers can play chess well, however, because there are good ways to evaluate your strength on the game board. Even if you don't know for certain that you're going to win, if you have a lot more of the strong pieces than your opponent has left, you can confidently state that you are in a better situation.

This doesn't always work, however. I was demoing the game Ninja versus Ninja from Out of the Box on Friday, and made some moves that I thought really put me in a strong position with more ninjas and points than my opponent, only to find that they could counterattack safely and be out of the way of any response from me. To be fair, this is a bit of a difficult comparison, because Ninja versus Ninja has a random aspect to it (dice rolling) that chess does not.

We can classify the difficulty of games based on the running time of an algorithm that determines whether a winning strategy exists for the current player. Nim has a strategy that can be calculated in a polynomial amount of time---which we deem "efficient". Thus, we call Nim "easy". Chess, on the other hand, requires an exponential amount of time to solve, so we call it "hard".

Some games live in classes somewhere between P (polynomial time) and EXPTIME (exponential time), such as NP and PSPACE. Each of these could be equal to P, equal to EXPTIME, or neither (but not both); we don't yet know for absolute certain where these classes sit, though most believe they are separate from P. (Feel free to harass the nearest complexity theorist to try to solve this. Remind them they would become famous if they did.) The best algorithms known right now take exponential time to solve PSPACE-hard problems, so these are also not usually tackled by brute force.

This computational logic carries over into play between humans. These hard games are more challenging to play because it's too difficult to be certain what the best strategy is until you get close to the end of the game. Thus, these games tend to be more fun. Playing Nim a few times is fun until at least one of the players knows the trick to winning. Then the game is no longer interesting: everyone already knows the best possible play.

Of great interest to me is: are there any tell-tale aspects of a game that help determine whether a game is easy or hard? Analyzing a game can be tricky: proving that a given game is PSPACE-hard or is in P can often be quite a feat. One of the things I've considered lately is the way games are added together:

Question: If you add two instances of a game together which creates a new instance of the same game, does this imply that the game is easy?

To clarify, I should mention quickly how we get a new game from adding two games together. Simply put, each turn you make a move on one of the boards. Since CGT defines losing as no longer being able to make a move, you lose if there are no more moves available in either of the two boards of your composite game.

The question above holds for Nim: adding two sets of Nim piles together just creates a new Nim game. On the other hand Hex, which is PSPACE-hard, has no obvious method for adding two games together to get a new Hex game. Perhaps the converse of the question could also be true...
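Nim's closure under sums is easy to see in code (a sketch with names of my own choosing; note also that, by Sprague-Grundy theory, the value of a sum of impartial games is always the xor of the summands' values):

```python
from functools import reduce
from operator import xor

def nim_value(piles):
    """Nim-sum of the pile sizes (nonzero = win for the player to move)."""
    return reduce(xor, piles, 0)

def add_games(piles_a, piles_b):
    """The sum of two Nim games is just another Nim game:
    concatenate the piles."""
    return list(piles_a) + list(piles_b)

g = add_games([1, 4], [2, 3])
# The value of the sum is the xor of the values of the parts:
assert nim_value(g) == nim_value([1, 4]) ^ nim_value([2, 3])
```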

Does anyone know anything about the validity of either?

Wednesday, August 12, 2009

Links and references

As promised, I would really like to provide links to great CGT resources. This initial list will be of things I have used.

First, David Eppstein's CGT page is a necessary starting point. This contains numerous links to other CGT sites as well as links to playable versions of many well-known games.

FUN 2010: 5th International Conference on FUN with Algorithms will take place June 9-11, 2010. The only notification of this I know of is this page from Erik Demaine.

Of course, "Winning Ways for your Mathematical Plays" by Elwyn Berlekamp, John Conway and Richard Guy is the classic text for CGT.

I am hoping to teach a course in CGT in Spring 2010. If I do that, I will likely use "Lessons in Play" by Michael Albert, Richard Nowakowski and David Wolfe.

Erik Demaine and Robert Hearn's survey paper on Algorithmic Combinatorial Game Theory has been a particularly good reference for me.

Aviezri Fraenkel has an extensive "selected" bibliography (pdf) of combinatorial games.

Thane Plambeck runs his own blog focused exclusively on misere games.

I have used David Molnar's page before, but I cannot currently find a location for it... I will fix this and post a link as soon as I can find one.

Finally, I like buying and playing games a lot, and is a good resource for finding info about published games.

I hope these are helpful!


Hello Combinatorial Gamers!


I am a theoretical computer scientist who semi-accidentally fell into the field of Combinatorial Game Theory. I have, multiple times, had an issue with not knowing enough about what is known about combinatorial games--and worse, not knowing how to find out more! This is a hopeful attempt at mimicking the success of Lance and Bill's complexity blog.

Aside from hoping to hear back from other people, this blog will have two main goals:

1) Provide resources (links, mostly) for those looking to know more about Combinatorial Game Theory (CGT).

2) Ask potentially interesting questions about CGT.

A few bonus points:

If I wind up finding another good blog devoted to this same venture, I'll likely end this blog with a link to that.

It would be great if there were more people than just me making posts (as with the complexity blog). I assume Blogger has a method for doing that and would be glad to implement such a thing.