Wednesday, September 30, 2009

No Loops and EXPTIME versus PSPACE

Last time I commented on adding rules to unbounded-turn games to prevent stalemate-causing loops of play. On the other side, games with a bounded number of turns share a notable trait: they are often shown to be PSPACE-complete instead of EXPTIME-complete (unless they aren't hard at all, as with Nim). Thus, if PSPACE does not equal EXPTIME, these games are in PSPACE and not in EXPTIME \ PSPACE (or EXPTIME - PSPACE, depending on which "set minus" notation you prefer).

Following our previous discussion of constraint logic, these two classes correspond to whether a game is bounded or unbounded. Thus, we would expect potentially hard games such as Oshi to be EXPTIME-complete, while games such as Tsuro are better candidates for PSPACE-completeness. (I'm not sure whether Tsuro is a hard game at all; I finally got to play a bit over the weekend. Perhaps it would be more difficult if I didn't have such a small hand of tiles.)

Interestingly, Go has some rule variations that threaten to go against this. With the ko rule, which prevents looping back to the position from the most recent turn, the game is EXPTIME-complete. Without this rule, however, it is only known that the game is PSPACE-hard, not necessarily complete. Some versions of Go use the superko rule, which prevents moving to any position that has ever previously existed. I personally do not know the complexity of this version. Sadly, I am not familiar with Go and thus can't say which rules are most often used.

Mostly I think it's interesting that this very obvious facet of games---whether or not they have an unbounded number of turns---is a strong intuitive tool in game complexity.

Monday, September 28, 2009

Avoiding Loops and Rewriting Rules

Some games, such as Tic-Tac-Toe, have a clear maximum number of moves. Either the game ends before nine moves have been made, or the ninth move is the last. On the other hand, there are games that don't have such a trait; they have the potential to run an unbounded number of turns.

When I was a kid, I enjoyed playing Stratego with my grandmother. One of the (very sensible) rules that often got me into trouble was the non-repetition rule: a piece cannot be moved back and forth between the same two spaces more than three times in a row (or something similar). Another example is the following rule in Oshi:
Stalemate
When you move one of your pieces, you cannot end its move in the same space it occupied at the beginning of your previous turn.
Oshi is all about moving your pieces around a grid and trying to push the opponents' pieces off the board. It's an excellent game, and this rule prevents a lot of potential "loops" from occurring. By loop, I mean a situation where the sequence of game states repeats. This could potentially still happen in Oshi if two players play a little back-and-forth dancing game between two different pairs of stalemating pieces. Many unbounded-turn games have this difficulty, and loops of this sort (which can force stalemates) often cannot be easily ruled out just by adding stalemate rules like this one.

Unfortunately, however, there are often situations in Oshi where the above stalemate rule fails. As I mentioned above, the game pieces are used to push opponents' pieces off the board. In a very basic case, two pieces of the same size can push each other the same distance. For example, the red piece could push the white piece towards the board's edge. On the white player's turn, the white piece might want to push back, moving both pieces back towards the other edge.

The red player would likely want to respond by pushing back. White responds by doing the same thing, and the game loops. Unfortunately, this very simple repetition can and often does happen during the game. It happens in almost this exact situation, but also in more complex ones. You get pushed, and you immediately respond by pushing back. This seems like a good strategy, but it leads to these bad situations. Thus, for a few years, I've been using the following replacement for the published rule above:
Stalemate
When you move one of your pieces, you cannot end its move in the same space it occupied at the beginning or end of your previous turn.

The rule is only slightly changed, but it makes Oshi far more playable. If you're about to start a game, I highly recommend playing this version!
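
In case it helps to see the difference spelled out, here is a minimal sketch, in Python, of how I think of the legality check. The board coordinates, the "previous positions" record, and the function name are all my own hypothetical encoding, not anything from the published rules.

```python
# A minimal sketch of my modified stalemate rule.  Spaces are hypothetical
# (row, column) pairs; prev_positions records where the moving piece sat at
# the start and end of that player's previous turn (None on a first turn).

def is_legal_destination(destination, prev_positions, modified_rule=True):
    if prev_positions is None:
        return True
    prev_start, prev_end = prev_positions
    if destination == prev_start:                    # forbidden by the published rule
        return False
    if modified_rule and destination == prev_end:    # my extra restriction
        return False
    return True

# Last turn this piece moved from (2, 3) to (2, 5).
print(is_legal_destination((2, 3), ((2, 3), (2, 5))))  # False under both rules
print(is_legal_destination((2, 5), ((2, 3), (2, 5))))  # False only with my rule
print(is_legal_destination((4, 5), ((2, 3), (2, 5))))  # True
```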

Friday, September 25, 2009

Constraint Logic: Modelling Computation as Games

For the past few years I've been very interested in a family of games from Erik Demaine and Robert Hearn. This family, known as Constraint Logic, revolves around making moves that consist of flipping the direction of edges in a graph. The rules for when edges may be flipped differ only slightly between the versions of the game in the family (whether an edge can ever be flipped back to its original state, and the number of players in the game).

Nevertheless, these slight differences are enough to divide the family into games that are complete for many different complexity classes (and one that is undecidable). The brilliant construction is potentially the cleanest way to characterize these classes in terms of combinatorial games. This version of the paper is very easy to digest and clearly describes the graphs used in all Constraint Logic games. The long version of the paper, which contains example reductions to various games, is Hearn's PhD thesis. The examples there show what makes this such a good vehicle for proving completeness of games: Constraint Logic is flexible enough to be reduced to a wide range of games (for example, it handles planar situations well) while already being a game itself.
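
To give a flavor of the moves (I'm paraphrasing the basic, unbounded one-player version from memory, so treat this as a sketch and see the paper for the real definitions): edges are weighted 1 or 2, every vertex must have total incoming weight of at least 2, and a move reverses a single edge as long as no vertex's in-flow drops below that minimum.

```python
# A rough sketch of the basic constraint-logic move, as I remember it: a
# directed graph with edge weights in {1, 2}; each vertex needs incoming
# weight >= 2; a move reverses one edge, and only the vertex losing that
# edge's weight can newly violate its constraint.

MIN_INFLOW = 2

def inflow(vertex, edges):
    """Total weight currently directed into `vertex`.
    `edges` maps (tail, head) -> weight."""
    return sum(w for (tail, head), w in edges.items() if head == vertex)

def can_flip(edge, edges):
    """Is reversing edge = (tail, head) legal?  Its head is the only
    vertex that loses in-flow, so that's the constraint to re-check."""
    tail, head = edge
    return inflow(head, edges) - edges[edge] >= MIN_INFLOW

def flip(edge, edges):
    tail, head = edge
    new_edges = {e: w for e, w in edges.items() if e != edge}
    new_edges[(head, tail)] = edges[edge]
    return new_edges

# Fragment of a larger graph: vertex 'c' has two weight-2 edges pointing in,
# so either may be flipped away from it (pretend 'a' and 'b' are satisfied
# by edges not shown here).
edges = {('a', 'c'): 2, ('b', 'c'): 2}
print(can_flip(('a', 'c'), edges))  # True: 'c' keeps in-flow 2 from 'b'
```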

I feel like I have kept running into different versions of this framework for the past few years (though I don't recall how) but I don't know whether it has actually been used yet by anyone other than its creators. Does anyone know of an appearance of Constraint Logic in another reduction? If so, I would love to hear about it!

Wednesday, September 23, 2009

Partisan Game states that are Impartial

Partisan games are those where the different players do not have the same options for moves (such as in Hex or Chess, but not like Nim). Some states of partisan games, however, mimic impartial games.

This is demonstrated clearly in the game of Domineering, where players alternate placing dominoes on a checkerboard, each taking up two spaces. The Left player must play dominoes vertically (meaning they take up one space and the space directly below that) while the Right must play them horizontally. No pieces are allowed to overlap on the same square of the board.

Some situations in Domineering behave like impartial games, however. Suppose the only free spaces left on a board are three squares arranged in the following way (forgive my ASCII art):

XXXX
XOOX
XOXX
XXXX

(Legend: X means this space is covered by a domino. O means this space is uncovered.)

Here, either player may play, but afterwards there are no available plays on the board for anyone. This is much different from

XXXX
XOOX
XOOX
XXXX

where after one of the two players makes a move, they will then be the only one left able to play there. For example, if Left plays on the above board, the situation then becomes:

XXXX
XXOX
XXOX
XXXX

leaving Left able to play again but Right with no such option. In our first example, however, either player may play here, but only that one play is available. This is equivalent to the Nim state where there is one pile with only one stick in it: either player may move by taking that stick, but then no more moves are available.
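
If you want to check that claim mechanically, here is a small Python sketch (my own throwaway encoding, nothing standard) that enumerates both players' domino placements on boards written in the ASCII style above. On the three-square position, each player has exactly one move, and after either move nobody can play, which is exactly how a one-stick Nim pile behaves.

```python
# Sanity-check the claim above: enumerate Left's (vertical) and Right's
# (horizontal) domino placements on an ASCII board, where 'O' is an open
# square and 'X' is blocked.

def open_squares(board):
    return {(r, c) for r, row in enumerate(board)
                   for c, ch in enumerate(row) if ch == 'O'}

def moves(board, player):
    """Boards reachable by `player` ('L' plays vertically, 'R' horizontally)."""
    opens = open_squares(board)
    delta = (1, 0) if player == 'L' else (0, 1)
    results = []
    for (r, c) in opens:
        other = (r + delta[0], c + delta[1])
        if other in opens:
            new_board = [list(row) for row in board]
            new_board[r][c] = 'X'
            new_board[other[0]][other[1]] = 'X'
            results.append([''.join(row) for row in new_board])
    return results

l_position = ["XXXX",
              "XOOX",
              "XOXX",
              "XXXX"]

left_options = moves(l_position, 'L')
right_options = moves(l_position, 'R')
print(len(left_options), len(right_options))   # 1 1
# After either single move, neither player can play: just like taking the
# only stick from a one-stick Nim pile.
print(all(not moves(b, 'L') and not moves(b, 'R')
          for b in left_options + right_options))   # True
```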

In 2005, Gabriel Drummond-Cole found positions in Domineering (and Chess!) equivalent to a Nim pile with two sticks. These Domineering states are quite a bit more complicated than those above. The simplest he finds is:

XXXXXXXXX
XXXOXOXXX
XXOOOOOXX
XOOXOXOOX
XXOOXOOXX
XOOXOXOOX
XXOOOOOXX
XXXOXOXXX
XXXXXXXXX

The ASCII art here shows the limits of this format for describing what is really happening (see the paper for better pictures). Gabriel goes on to describe more such situations and even to detail the patterns that cause this. One interesting note about this instance is that the covered square in the exact middle can also be uncovered: the value is still equivalent to the Nim-heap with 2 sticks.

Gabriel points out a problem with his constructions, however: there is no way to reach this game state from an initial board! Notice the blocked spaces (X's) in the middle of the diagram: they are each just one blocked square surrounded by unblocked squares. Thus, no domino could have been placed there.

Constructing these situations is a large task, often aided by evaluation software such as Aaron Siegel's Combinatorial Game Suite. Gabriel laments that he has "not been able to find any ordinary Domineering positions" equivalent to the Nim-heap of size 2.

Finding other partisan situations equivalent to Nim-heaps is a continuing study. A Nim-heap of size 3 is equivalent to a game with two heaps, of sizes 1 and 2 (their nim-sum is 1 XOR 2 = 3). Thus, we can already build an equivalent Domineering game by appending our two boards together:

XXXXXXXXXXXX
XXXOXOXXXOOX
XXOOOOOXXOXX
XOOXOXOOXXXX
XXOOXOOXXXXX
XOOXOXOOXXXX
XXOOOOOXXXXX
XXXOXOXXXXXX
XXXXXXXXXXXX

Work on finding partisan states equivalent to a Nim heap of size 4 is the next big task, and I have heard of some potential results... :)

Monday, September 21, 2009

Good Abstract Games

Before choosing to come to Wittenberg University, I got in touch with the school's role-playing group. I had recently started playing the newest edition of D&D back in Boston and was hoping to find a new group. I was assured there would be a campaign starting up after I got there. Sweet!

I went to the group's opening meeting a few weeks back, even though I had already figured out who I was going to be playing with. Once there I realized that this is actually a full-fledged gaming group, not just focused on role-playing. In particular, the club is proud of the number of board games they have amassed! Every Saturday night, they hold an open gaming night, where most people play some board games.

I will likely be showing up this week, bringing Atropos and Tsuro with me. I am curious, though, which good board games people know of that are interesting theoretically---either because they have difficult strategies not overwhelmed by randomness, or because they look more like abstract combinatorial games. I know there are already well-known games like this (Chess, Go, etc.), but which lesser-known games fit into this category?

Let me know and I will have them ready for Saturday!

As a note: I have been getting plenty of emails about topics mentioned in these posts. I will try to address everything and will post my own thoughts in comments... when I have a chance. I love getting the emails, but please consider posting a comment directly to the blog. :)

Sunday, September 20, 2009

Special Sunday Bonus Edition

Yes, I was ill through Friday. Thus, I am having an extra post today. Enjoy!

I am currently teaching a software engineering course and am assigning an ongoing project. Each week, the students update the project to the new specifications as well as fix any errors I found in the previous round. So far it's going great, and I don't even have any CS majors in the class!

On one of the recent parts, they were asked to implement Nimania, which I mentioned in this post. I gave them the rules, but, being good students, they looked around for more about it online. They found... this post (it's the same link).

I'm really enjoying teaching this class. It seems important that theoretical computer scientists keep up with their programming skills. As a grad student, I was somewhat surprised to find that the theory professors were quite competent programmers: keeping up with current languages and teaching some of the intro CS sequence. By next semester, I will have to learn Python in order to teach our CS1 class, which will be completely new for me. I'm looking forward to this challenge but I'm also grateful for the motivation to learn another language.

Even more so, I am anxious to see how my students will model some specific combinatorial game fundamentals. I see two potential paths for one design point, and I'm curious which way they will go. Will they model things to follow basic CGT, or will they do something else? So far, they have only had to model impartial games, and only one at a time. I will introduce adding games together as well as partisan games as separate projects, and it could certainly make a difference which comes first! I'd like to see if I can get them to go the non-traditional route, but without any actual pushing. This seems unlikely, but I'll see what I can do.
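
To give a rough idea of the kind of modeling choice I mean (this is a hypothetical Python sketch, not the actual project interface): does the sum of two games become its own game object, whose moves are just the moves available in either component?

```python
# A hypothetical sketch (not my project spec) of modeling a sum of games:
# a wrapper whose children come from moving in exactly one component while
# the other component stays fixed.

class NimPile:
    def __init__(self, sticks):
        self.sticks = sticks

    def get_children(self):
        """Remove one or more sticks, leaving any smaller pile."""
        return [NimPile(n) for n in range(self.sticks)]

class GameSum:
    def __init__(self, left, right):
        self.left, self.right = left, right

    def get_children(self):
        children = [GameSum(g, self.right) for g in self.left.get_children()]
        children += [GameSum(self.left, g) for g in self.right.get_children()]
        return children

# Summing a 3-pile and a 5-pile gives 3 + 5 = 8 possible first moves.
print(len(GameSum(NimPile(3), NimPile(5)).get_children()))  # 8
```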

Thursday, September 17, 2009

No Post Yesterday... Obviously

Sorry there was no post yesterday. I have been ill a bit this week. With any luck, I'll be able to say something meaningful tomorrow.

Monday, September 14, 2009

Running out the Clock

This past weekend was an exciting one for college (American) football. During the game I watched, however, a thought struck me: was it really a good idea for these teams to be running out the clock?

Michigan was battling Notre Dame in Ann Arbor. Both of these teams have had some recent awkwardly bad seasons, but the rivalry is big enough that this was sure to be a good test.

The last quarter of the game was full of scoring, all of it touchdowns (usually worth 7 points), as each side tried to get its offense past the opposing defense. In American football, at any given time one team has its offensive players out while the opponents have their defense out. The offense tries to advance down the field with the ball while the defense tries to stop them. A touchdown is scored by getting one of your players through the defense and down to the other end of the field. How's that for a simplification?

The fourth quarter went like this: Michigan's offense scored a touchdown, putting them 11 points ahead (there are other ways to score points, but let's forget about that for a second). Notre Dame's offense then had the ball and scored their own touchdown. Michigan had the ball again but failed to score, so Notre Dame had another shot and scored again, putting them up by 3 points. Michigan attempted to score and failed, then Notre Dame had another go but failed as well. On their next turn on offense, Michigan scored another touchdown, putting them on top by 4 points. With only 11 seconds left to play, Notre Dame did not score again.

The exact amount of time remaining drives strategies that I think are a little short-sighted, and this happens with every team. After both teams had scored their first touchdown of the quarter, and with Michigan up by 4 points, Michigan had the ball with about 8 minutes left to go in the game. They failed to score on this drive, but they used tactics that burned extra time off the clock. The logic is that since they were ahead, "running out the clock" would give the opponents less time in which to score.

Of course, after Michigan failed to score and Notre Dame responded by getting a touchdown of their own, the Wolverines were suddenly fighting against the clock. Now they were the ones who needed to score within the remaining time.

This pattern went back and forth a few times, with the clock obviously running out on Notre Dame at the end. This strategy continues to confuse me, however. For a large portion of the game, Notre Dame showed that they were able to move the ball very well, meaning that whenever they were on offense, they had a very good chance of scoring. With eight minutes left to go in the game, it seems more useful to ask: "How long will the rest of our drive last? If they get possession again, will they be able to score? If so, will they be able to eat up the rest of the time during their possession, so that the clock actually runs out on us?" Michigan barely scored their last touchdown in time; running out the clock earlier was almost their undoing!

This really starts to look a bit like an impartial game. Instead of trying to control as much of the time of possession as possible, how can we control the parity of the clock, so that on our last drive towards the end zone we have enough time, but the opponent doesn't have much left after we're done?

Naturally, there are a lot of probabilistic factors in there, not to mention that I've oversimplified things. Still, it seems like some better form of analysis could be employed there.

Friday, September 11, 2009

A CGT course for undergrads

I am again curious to hear from people who have taught CGT courses. After starting this blog, I got a lot of comments about potential topics, tools and plans for teaching a graduate-level course in combinatorial games.

At Wittenberg University, however, we do not have graduate students in Math and CS (the "University" comes from our graduate education program, I believe). Also, we sadly do not have enough upper-level CS majors right now to effectively teach a separate course aimed at that audience. We do have two senior-level majors who might be able to take such a course, but it's unclear whether we would actually have any students in it.

It turns out that I have two real options for the near future:

A) I could teach a mid-level combinatorial game theory course aimed at Math and CS students who have completed some form of discrete-math-ish course. This would at least let me assume the students have been exposed to induction and partial ordering and would let me get somewhat deep into the subject.

B) I could teach a freshman-only board game course as a part of our "WittSem" series here. This would be some combination of a class and an introduction tool, along with defining an initial advising group. Incoming students have to take exactly one WittSem in the fall of their freshman year. Here I think I would be able to have a lot of fun and perhaps draw people into the Math/CS department, but we likely wouldn't be able to get deep into the actual material. I would have to be cautious to avoid a course where we just play a lot of board games and don't do anything academic.

Are there any suggestions out there? I would love to hear experienced and inexperienced advice on these two, or perhaps some completely different options that I haven't considered.

Wednesday, September 9, 2009

Kayles of the planar variety

Before I knew much of anything about CGT, while I was looking into the complexity (of "Who can win?") of another game, I noticed a cool relationship: all of the "tricky" cases boiled down to solving a different graph game. In this game, a move consists of choosing a node and then removing it and all of its neighbors from the graph. (Thus, with normal play, the player who can no longer select a node (because the graph is empty) loses.)
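
To pin the rules down, here is a tiny brute-force sketch of that graph game in Python; the graph encoding is just my own, and memoized search like this is of course only good for very small examples.

```python
from functools import lru_cache

# Brute-force normal-play solver for the graph game described above:
# a move picks a remaining vertex and deletes it together with all of its
# remaining neighbors; a player with no vertex to pick loses.

def first_player_wins(vertices, adjacency):
    """adjacency: dict mapping each vertex to the set of its neighbors."""

    @lru_cache(maxsize=None)
    def wins(remaining):
        # A position is a win if some move leaves a losing position.
        for v in remaining:
            after = frozenset(u for u in remaining
                              if u != v and u not in adjacency[v])
            if not wins(after):
                return True
        return False

    return wins(frozenset(vertices))

# Example: a path on three vertices a-b-c.  Picking the middle vertex wipes
# out the whole graph, so the first player wins.
adj = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}
print(first_player_wins(['a', 'b', 'c'], adj))  # True
```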

How interesting! I should look further into this game! It seemed simple, but perhaps not simple enough. I looked around, but didn't know all the best places to look, and thus didn't find out about the exact same game---known as Kayles---until after I'd spent a lot of time figuring out things that were already known. Oops.

As it turns out, Kayles is a very fundamental impartial game, and it's not surprising that my Atropos situations were looking like Kayles boards.

Just knowing this name, however, opened up my resources greatly! Sometimes it's not what you're Googling for, but whether you know what the thing you're Googling for is called. I was interested in the complexity of Planar Kayles (Kayles played only on planar graphs), which is still an open problem (as far as I know). Although we know from Schaefer that Kayles is PSPACE-complete on general graphs, there are some restricted classes of graphs on which it can be solved efficiently.

The most recent example I know of is from Rudolf Fleischer and Gerhard Trippen, who showed that Kayles on star graphs of bounded degree is polynomial-time solvable. That result and Bodlaender's result for graphs with small asteroidal number are the two I'm somewhat familiar with. Still, these do not settle the problem of playing Kayles on arbitrary planar graphs.

Is there a general intuition amongst Kayles enthusiasts that goes either way? Does anyone have a good conjecture about the complexity of Planar Kayles?

Monday, September 7, 2009

Adding Resources and Misereness

I mentioned early on that I wanted this blog to list lots of good resources for CGT material. I have gotten to say a lot of other things instead, but today I will repeat a lot of what I've been emailed and use that to update the links around this blog.

Richard Nowakowski (one of the three authors of Lessons in Play) provides a link to articles from the Games of No Chance series, which he edited. Aside from this, his game page provides good links for "Budding Combinatorial Game Theorists", including Aaron Siegel's Combinatorial Game Suite.

Martin Mueller mentions that his team's Fuego Go-playing program beat a 9 Dan professional last month. He provides some games on the Fuego page. (I'll quickly admit to not knowing (yet) the rules to Go.) I am not familiar with the program, and thus do not know if it uses any randomization, as mentioned in the complexity blog.

Aviezri Fraenkel's selected CGT bibliography certainly deserves a link. It's been a long time since I saw this and said "Woah!"

I added a link to Thane Plambeck's misere games blog. Unlike the basic definition of a combinatorial game, a misere game is one in which the last play is the losing play. When I was first taught Nim (called "Sticks" by my 8th grade teacher) I learned it as a misere game; the player who took the last "stick" loses.

Although I learned the 3-5-7 version of Nim thoroughly, I was ignorant of the evaluation trick until midway through college. In my freshman year, I challenged my linear algebra professor to a game and he proceeded to mentally process the trick and make all the right moves---until the end of the game. He was playing the standard, non-misere version, and at the end asked "Wait... which one of us wins?"
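
For anyone who hasn't seen the trick: XOR the pile sizes together (the "nim-sum"), and the player to move loses exactly when that sum is zero, with one small adjustment at the very end for the misere version. These are standard facts; the Python below is just my own quick illustration.

```python
from functools import reduce
from operator import xor

def nim_sum(piles):
    """Binary XOR of all pile sizes."""
    return reduce(xor, piles, 0)

def first_player_wins_normal(piles):
    # Normal play: last move wins.  Winning iff the nim-sum is nonzero.
    return nim_sum(piles) != 0

def first_player_wins_misere(piles):
    # Misere play: last move loses.  Same as normal play while some pile
    # still has 2 or more sticks; once every pile is 0 or 1, you want to
    # leave an odd number of single-stick piles for your opponent.
    if any(p >= 2 for p in piles):
        return nim_sum(piles) != 0
    return sum(1 for p in piles if p == 1) % 2 == 0

print(first_player_wins_normal([3, 5, 7]))   # True (nim-sum is 1)
print(first_player_wins_misere([3, 5, 7]))   # True as well
print(first_player_wins_misere([1, 1, 1]))   # False: you take the last stick
```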

I learned that version of Nim, Krypto and Equations all at the same time; which games did you learn early on that helped spark your interest in CGT?

Friday, September 4, 2009

Games and Difficulty to Just Play

I got the chance to play a handful of games of Equations yesterday with my department chair. This is a game I was introduced to in the eighth grade in a half-semester math games course. I finally tracked it down on the Internet last year and got a set at Christmas. Woohoo!

Until yesterday I had only played two or three games with my set.

It's hard to convince people to play Equations, mostly because you have to find someone who is willing to spend time playing a game and who also isn't bored by elementary math. The intersection of these sets isn't tiny, but it also isn't large. Add to this that the rules are pretty complex, and it becomes very hard to get random people to make it all the way through one game.

There is a reason I'm more interested in games with simple rules: it's easier for people to learn to play right off the bat. Whether or not it is easy to determine who can win is often secondary to making the game accessible to many players. I love the game Equations---and I don't think it could exist without all the complicated rules---but it takes a few games to really understand what all the possible moves are. It's possible that you can make a winning, game-ending move this turn but never see it (and no one else at the table might see it either). Games are often like this. You can fully listen to or read the rules, but unless you see some of the moves acted out by people, you may not comprehend how the rules can be used. It often takes a few plays of any game before people have an understanding of all the moves they can make.

The above difficulty of seeing immediate-winning moves could be taken a step farther, though.

Question: is there a game where it is hard to get the set of all possible moves?

In the project my students are working on, one of the methods for their game class is getChildren(), which returns all the potential moves (that is, the resulting states) from the current state. Are there games for which this is an unreasonable request?
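
For concreteness, here is a hypothetical Python sketch of the kind of method I mean (not the actual project code, and the names are my own):

```python
# A hypothetical sketch of the interface: a game state must be able to hand
# back every state reachable in one move.

class NimState:
    def __init__(self, piles):
        self.piles = tuple(piles)

    def get_children(self):
        """All states reachable by removing 1..n sticks from one pile."""
        children = []
        for i, pile in enumerate(self.piles):
            for remaining in range(pile):
                new_piles = list(self.piles)
                new_piles[i] = remaining
                children.append(NimState(new_piles))
        return children

print(len(NimState([3, 5, 7]).get_children()))  # 15 possible moves
```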

I can see the following potential clarifications/things-I-am-interested-in/etc

Yes: there might be games that have an exponential number of children, relative to their size. I would rather, however, say that we should consider the complexity in terms of (size of game + number of children) since otherwise Nim already has this property.

Maybe: I'm kinda interested in games engineered around this question. I suspect they exist, mostly because I don't think it would be hard to throw in solving a 3-SAT question as part of each turn.

Totally: I want to know if there are already any games like this, either games that are played or games that are studied (or both!).

Today's post is a bit brief, as I will be driving this afternoon. This is not a long weekend for me, however; there will be the usual post on Monday! :) Have a great weekend!

Wednesday, September 2, 2009

Nimania and a Previous Question

Previously, I wrote about a question I didn't know the answer to:
Question: If adding two instances of a game together creates a new instance of the same game, does this imply that the game is easy?
I received an email from Aviezri Fraenkel about a potential counterexample: the game Nimania.

I'd never heard of this game before, but I was so impressed by the simplicity of its design that I have already included it as part of a new project assignment for my software design course.

Directly from that specification:
Nimania is a game somewhat like Nim. The game begins, on turn 1, with a single pile of sticks, as in Nim. On turn k, a player chooses a pile. If that pile has only one stick, it is removed. Otherwise, that pile loses one stick and k new copies of that pile are added to the game (thus, there are now (k+1) piles of that size). A player wins if they take the last stick from the board.
The description of a game state looks a lot like Nim. Combine the description of one game of Nimania with that of another and you get a new one: a bunch of piles. Except that the turn counters aren't necessarily the same, and they won't ever be synchronized (making a move in one of the games doesn't increment k for both games, just for that subgame).
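
Here is a minimal sketch (my own encoding, not the course spec) of one Nimania component carrying its own turn counter, which is exactly the feature that keeps two glued-together games from collapsing into a single plain description:

```python
# A minimal sketch of one Nimania component: a multiset of piles plus that
# component's own turn counter k.

def nimania_moves(piles, k):
    """All positions reachable in one move from (piles, k).
    Each move picks one pile: a 1-stick pile is removed; a larger pile of
    size n is replaced by (k + 1) piles of size n - 1.  The counter then
    advances to k + 1 for this component only."""
    results = []
    for i, n in enumerate(piles):
        rest = piles[:i] + piles[i + 1:]
        if n == 1:
            new_piles = rest
        else:
            new_piles = rest + (n - 1,) * (k + 1)
        results.append((tuple(sorted(new_piles)), k + 1))
    return results

# Start: one pile of 3 sticks on turn 1.
print(nimania_moves((3,), 1))
# [((2, 2), 2)]  -- the pile shrinks to 2 and one extra copy appears
```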

My question above is far from clear; what does it mean to "create" a new instance of the same game? My intuition is that the sum should form such an instance automatically, with no transformations required. This is true of Nim: the state of a Nim game is just a collection of piles. Add two Nim games together and you just union their piles together. Done.

Aviezri Fraenkel sent me a bunch of different potential counterexamples to the question; time to get back to the reading! In the meantime, any more thoughts on the same question are very welcome!