Tuesday, December 13, 2011

Just a little Gamester humor.

Drew this on the board during the final exam for my freshman CGT seminar. :)


Wednesday, November 30, 2011

NP-complete games?

Are there any "balanced" games that are known to be NP-complete? I know that some are NP-hard (Clobber, Col, Graph NoGo) but I don't know of any that are NP-complete...

By balanced, I mean games that have symmetric starting positions (G = -G).

Are there any like this where the hardness doesn't come from choosing an appropriate move?

Tuesday, November 29, 2011

Extending a Pause

First it was Supercomputing, then Thanksgiving. Now that I've returned, the end-of-semester crush has forced blog updates way down in the priority queue.

If I don't get anything else out this semester, I will return with the New Year. Happy Holidays!

Friday, November 11, 2011

Game Description: Battle Sudoku

One thing I've slowly been chewing on this year is a two-player Sudoku game. The rules are very simple: each turn, you fill in a box with a number not yet in that row, column or sub-square (quadrant in the 4 x 4 case). As with normal play, if you can't move, you lose!
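
To make the move rule concrete, here is a minimal legality check for the 4 x 4 version (a sketch in Python, with helper names of my own rather than anything official): a player loses exactly when no (row, column, value) triple passes it.

```python
# A minimal sketch of the 4x4 Battle Sudoku move rule; board[r][c] is 0 for empty.

def is_legal(board, row, col, value):
    """Can `value` be written into the empty cell (row, col)?"""
    if board[row][col] != 0:
        return False
    if value in board[row]:                                # already in this row
        return False
    if any(board[r][col] == value for r in range(4)):      # already in this column
        return False
    qr, qc = 2 * (row // 2), 2 * (col // 2)                # top-left of the quadrant
    return all(board[r][c] != value
               for r in range(qr, qr + 2) for c in range(qc, qc + 2))

def loser_to_move(board):
    """Under normal play, you lose when you have no legal move."""
    return not any(is_legal(board, r, c, v)
                   for r in range(4) for c in range(4) for v in range(1, 5))
```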

In January, I played with a bunch of games starting with an empty board, but realized there was a simple symmetry strategy for the second player. We brainstormed some potential starting boards to fix things, but more complicated symmetry strategies arose.

After I returned to Wittenberg, Noam Elkies and I corresponded, finally working out the current starting board. All the flaws we'd seen were fixed.

Here is the starting board:

My plan was to bring the game to Integers to play with people. Beforehand, I tested it out on my WittSem class, and it proved challenging. My student Patrick also took an interest and showed that a pseudo-symmetry strategy I was afraid of could be broken! Awesome! (I still don't recall how he does it!)
Here is a .pdf of a sheet of games! :) Enjoy!

Next week, I will be in Seattle for Supercomputing for most of the week, so there may not be regular posts.

Tuesday, November 8, 2011

Somewhat Random Musings

Yes, I didn't post at all last week. I am still very much catching up on things I missed while at Integers. Next week I will be at Supercomputing and will likely miss another post or two. I don't often have the desire to pause the teaching part of the semester to get research/blogging done, but this is turning out to be one of those semesters. Mostly, I want to make sure I write good posts about the game talks at Integers (or what I understood of them). I hope I can find extra time to spend on them, but it's more likely that they'll come out a bit less-than-perfect. Hopefully some of the speakers are readers and can comment on any errors.

While at Integers, I spent a lot of time struggling with NoGo, only to run into problems I encountered at BIRS in January. While there, I found that NoGo on a graph is NP-hard, but was neither able to show that Graph NoGo was PSPACE-complete, nor show any hardness for standard NoGo (on a grid). The same thing happened last month: no new progress. So I tried flipping it around and started looking for an efficient algorithm for NoGo. Either Neil McKay or Alex Fink (I think it was Neil) asked me about it, and I told him what I was doing. He was surprised I had given up on computational hardness so quickly. His comment made sense: I have more experience finding hardness results than showing efficient algorithms for problems (though I would argue that hardness reductions ARE efficient algorithms). Research-wise, you strive for results! So, you should spend your time conquering problems you're good at. Instead, I was trying something a bit different.

Luckily my job is far more focused on teaching than research, so the pressure to publish is less intense. It's very nice to know I can try a completely different tactic if I get frustrated with a problem!

... not that I had any luck with this!

On a related note, student presentations have started in my games class! One question that came up is: What does it mean for a game to be solved? I answered that there's an easy way to evaluate the game without drawing out all of the game tree. I hope that's a good enough answer. For myself, it means there is an efficient algorithm to solve the problem. I generally consider a completeness result to be "solving" the game, though perhaps it's the opposite: the game (probably) cannot be solved!

Friday, October 28, 2011

Integers Liveblogging: Friday

Some awesome quotes from today:

"Actually, I can say I played NoGo; I don't play Go!" -Aviezri Fraenkel

"'Every other weekend'?  Other from what?  What is the complement?"  -Rebecca. ('Every other' is apparently not colloquial in Canada.)

"...and I am guilty of some of this research."  -Florian Luca during his talk on Balancing Fibonacci Numbers.  He went on to describe a problem as a mathematician trying to solve a system to find the street address for a party. :)

Last night, we had a little NoGo tourney between Canada, the US and Europe.  Canada won, 5-4, but there may be a rematch tonight of sorts!

Thursday, October 27, 2011

Integers liveblogging: Thursday

Today a lot happened. I talked and I listened to lots of games talks, many of which will become their own posts in the future. Instead, I'll quickly mention some highlights thus far.

Rebecca confused me yesterday by saying "in P", which I automatically translated as "efficiently solvable" instead of "in Zero". I had just met her, so I asked excitedly if she was in computer science. She said this misconception is even more dangerous because she is dealing with misere games and needs to consider both outcome classes. Positions that are in Fuzzy under misere play but in Zero under normal play are in N-,P+, pronounced "NP" (which computational complexity theorists use for another well-studied complexity class). Dangerous!

I've also seen a great bunch of non-games talks, mostly number theory, which I've understood a bit of. In particular, Carl Pomerance gave a talk I understood most of, about product-free subsets of the Naturals (if x and y are in S, then x*y is not in S... usually x*y (mod p)). Neil Hindman gave an impressive talk, mostly because I don't think he looked at any notes and covered a lot of complex stuff using only a whiteboard! I also found out yesterday that Beatty sequences/ratios were used to help discover quasicrystals, something which had a lot to do with this year's Nobel Prize! Wow!

I've mostly had a blast meeting people and playing games. While we were playing FLex yesterday, Marla commented, "This is all very important!" Everything I explained about what I was thinking was quickly absorbed by her and Rebecca. Awesome!

Meeting people---especially over games---is great! I played some impartial games with Yuval Tanny and Shira Giat. At one point I made a winning move, but was afraid I hadn't gotten the parity correct. Neil put me at ease, saying, "Yes, you counted to two correctly." That's about as high as I feel I can count these days.

Yuval and Shira are clearly awesome; this seems to be the norm for gamesters! There's a lot of energy in the group---all positive---especially from Jess Enright, who gave an awesome talk today about set representation games. Here is a picture of her realizing she won a game with Marla during the talk.




I feel like a bit of a reporter, trying to keep up with everything, but it really helps me stay on point in the talks. Seriously, they were great and I can't wait to get some of those posts up.

I will try to get a post up tomorrow before the evening, but if not, I'll put in my final fake-liveblog next week. I'm leaving on Saturday morning, so I won't be around for the last day of the conference, sadly.

Wednesday, October 26, 2011

Integers Liveblog: Wednesday

The first day of Integers has yielded many excellent number theory and combinatorics talks... which I didn't follow very well.  After lunch, I spent much of the time playing FLex and Adjex with Alex Fink, Larry Rolen, Rebecca Milley and Marla Slusky, all of whom I just met today!

During lunch, Patrick played NoGo against Richard Nowakowski, winning two of three games! Afterwards, Richard said, "It's clear the American strategy is different from the Canadian." Apparently in the States we play aggressively! ;)

With nine minutes left to go before the afternoon talks, Richard looked at Patrick and declared, "okay, one quick one!"  Later, he explained, "there is no such thing as the last game."

Awesome!


Tuesday, October 25, 2011

The Difficulty of Making a Move in International Draughts

As I begin the trip down to the University of West Georgia for Integers, I want to be sure to report on another awesome thing I saw in Banff back in January.

For most games I'm interested in, the rules are simple. Deciding whether one move is legal is usually an easy task. Usually, it's even easy to list all the moves. (Some games, such as Phutball, may have an exponential number of moves, so this task may not be considered "easy".) The challenge usually arises in finding the best move, not just one that works.

This is not always the case for International Draughts. For my American comrades, Draughts is the word the rest of the world uses for variations of the game Checkers. International Draughts is a variant played on a 10x10 board that is one of many "pub games" popular in Europe. The game is different from "standard" draughts in three big ways:
  • Regular pieces may jump other pieces forward or backward.

  • Crowned (Kinged) pieces can move as many spaces in one direction as desired, even after jumping another piece.

  • A turn must capture as many of the opponent's pieces as possible.
This last rule is extremely interesting! Not only must your turn consist of a capture (jumping a piece) if possible, it must capture as many pieces as it can. If you watch the animation on the wikipedia page, it becomes clear that once you have a crowned piece, this maximizing move is not always trivial to find.

In fact, this question was posed last year as a theoretical computer science question: Is it NP-hard to find a legal move? Bob Hearn rose to the challenge and showed that it is, in fact, an NP-hard question. Bob presented this result at BIRS. I've never seen a game where it's computationally difficult to figure out whether you made a legal move!

How is this policy enforced in the "real world" of international draughts?

Friday, October 21, 2011

Nimbers in Konane

I saw a lot of awesome talks at the BIRS workshop in January that I still want to report about. One of these was an awesome talk by Carlos dos Santos that really struck me as a computer scientist. It used a real reduction-like strategy to prove that for any k, *k can be written as a Konane position! A cool result with a cool proof!


Gamesters are interested in which game values exist for different rulesets. Elwyn Berlekamp asked, "What is the habitat of *2?", meaning: which rulesets have a position equal to *2? For Domineering, it was quite complicated to find a *2 position!

Carlos dos Santos does infinitely better with Konane, finding an algorithm to construct a board of value *k for any natural number k. The paper is here. The algorithm is elegant and recursive, and each resulting position has a very clear focal point where one of two or three pieces will be able to make the first move. Naturally, the *4 Konane board is much more complex than the *4 Nim board, but the difficulty is showing that for each successive power of 2, that nimber can be generated. Indeed, I had assumed *1, *2, and *3 could be created, but that *4 would require a 3-dimensional board.

Instead, they all exist, and the result is something I wish I could ask my algorithms class to learn!

Friday, October 14, 2011

Presenting Neighboring Nim to Undergrads

I replaced getting a post ready for Tuesday with preparing a talk for this past Monday. I spoke at our department colloquium about Neighboring Nim, describing the reduction from (Vertex) Geography. Two excellent things happened.

First, I had a great audience. While I played Neighboring Nim against them, they were very eager to make good plays against me. Will, a senior student of mine, immediately took on the role of surveyor, collecting move nominations from the crowd, then calling for a vote. The discussions were loud and boisterous. At one point, I made a very fast response to one of the audience's moves, using the Quick and Confident method. Unfortunately, I moved to an N position, with two P options. The audience quickly picked the board apart (it was by no means trivial) and found the two winning moves. The problem was that, individually, each group didn't see the viability of the other move. So there were two groups arguing vehemently for two separate perfect moves. The tension nearly boiled over. Luckily, the audience finally agreed and made one of the moves. It was very exciting to speak to such a charged group!

Second, I changed the order of my talk around regarding the proof of computational complexity. Usually I claim that a game is hard for some complexity class, explain what that means, then show the reduction. Talking about complexity classes to a general undergraduate audience is a bit of a speed bump, and often shakes people out of the rest of the talk, meaning they don't follow the steps of the reduction. This time, I just said that the two games were related, then explained the reduction. This kept the crowd interested, because it's a description of positions in a game they just learned how to play. At the end of the talk, I spoke briefly about the implications of the transformation: the PSPACE-hardness of Geography implies the PSPACE-hardness of Neighboring Nim. At that point, having seen the reduction in full, I hope it sank in. Although I wouldn't do this in the future when presenting to a primarily graduate-student/professor group, it seemed to work really well for people who hadn't seen computational complexity before.

The slides for the talk are here.

Next week is Fall Break at Wittenberg, so there will probably not be a post on Tuesday as I spend the time off preparing for Integers 2011 the week after. With any luck, once I'm there I'll get to post some notes each day.

Thursday, October 6, 2011

Game Description: Matrix-Game-That-Needs-A-Name

Games are fun and linear algebra is fun. Let's put them together! It's especially fun to try to determine whether a matrix is invertible. Here's a game that revolves around invertibility (it still needs a name).

The game begins with an invertible matrix of non-negative integers. Each turn, a player reduces one of the entries of the matrix so that it remains a non-negative integer. If the resulting matrix is non-invertible, that is a losing move. Depending on how cutthroat you want to play, it might be up to the other player to notice at the beginning of their turn that you created a non-invertible matrix.
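
As a quick sketch of what a turn looks like (assuming the 2 x 2 case and a plain list-of-lists encoding of my own), the moves that don't immediately lose are exactly the single-entry reductions that keep the determinant nonzero:

```python
# A sketch for the 2x2 case: a turn lowers one entry to a smaller non-negative
# integer; making the matrix non-invertible (determinant 0) loses, so these are
# the moves that don't immediately lose.

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def safe_moves(m):
    moves = []
    for r in range(2):
        for c in range(2):
            for smaller in range(m[r][c]):     # any smaller non-negative value
                child = [row[:] for row in m]
                child[r][c] = smaller
                if det2(child) != 0:           # still invertible, so not a loss
                    moves.append(child)
    return moves
```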

I've played with my matrix algorithms class a few times this semester, using this as the starting position:

| 1 2 |
| 3 4 |

This position is in Fuzzy. What's the winning move?

Tuesday, October 4, 2011

Game Tree Woes

Getting students to write proofs is an interesting pedagogical challenge. Which class in a math sequence should start requiring formal proofs? What does formal mean?

Luckily, for CGT this has a simple solution: the game tree. Want to prove something about a specific position? Just draw out the game tree and label the options appropriately. Students should learn this as soon as possible; I broke the news to my freshman class right away.

Bad news: I'm expecting rigorous proofs in this class.

Good news: these can be a proof by picture.

Unfortunately, I didn't do a great job teaching the basic requirements. We had problems with:

* Arrows to left/right options of a position. These were sometimes nearly vertical, so it was impossible to discern whether they pointed to Left or Right options. Other times they pointed up, which is often very difficult to read.

* Not drawing all the options. We hadn't learned about dominated options yet, so all options should have been listed.

I think that the next time I teach this sort of course, I will have to be far more explicit about what makes a game tree. Also, I've done a good job this semester of using examples to motivate, but perhaps I didn't do enough examples of game trees.

Tomorrow the students have their first exam. Dangerous! I've decided to include a question where the goal is to find errors in a given game tree. I really wish I'd included these sorts of questions in the first few homeworks! We'll see how it goes.

(Oops! I inadvertently took last week off from posting. Sorry!)

Friday, September 23, 2011

Game Description: Zuniq

Wow, some good games were suggested by readers two weeks ago! In addition to Nick Bentley donating Y with Shifts, Marcos Donnantuoni commented about his own impartial game, Zuniq. Ernie and I tried this out on Monday, and we immediately liked playing this! Here's how the game works:

The starting board is a grid of dots, similar to dots and boxes. Each turn, a player connects two adjacent (vertical or horizontal; not diagonal) dots, but not exactly like dots and boxes. If a new line segment closes off a region, no future turns can be made inside that region. Also, no two regions can have the same size. Thus, you can't close off a region if a region of that size already exists.

The suggested starting 8x8 board is a bit large, but we gave it a go. (Video: [7:16]) Afterwards we tried with 3x3 boards, to quickly learn that they are not very interesting. (The first player can always win by being just a little bit smart.)

The 4x4 game is definitely more interesting. (Video of lots of little games. [6:08]) After a few of these games, and later with games played against Patrick and my WittSem peer mentor, Alec, I think I can consistently win going first there. Still, my initial guess is that the game is PSPACE-complete in general.


I feel like there are lots of nice theorems that can be shown about this game. At the same time, I'm not familiar with what sort of theorems are "useful" from a CGT perspective (aside from computational complexity results).

Tuesday, September 20, 2011

Quiz Time!

I asked my students the following questions in class. We've covered outcome classes, but are not quite at game values yet.

Let X be the Domineering position that is two empty squares tall.

Let Y be the Domineering position that is four empty squares tall.

Find a Domineering position, G, such that X + G is in P (Zero) but Y + G is in L (Positive).

That one's not too tough. Neither is the next one:

Find G such that X + G is in R (Negative) but Y + G is in P.

This one's nice:

Find G such that X + G is in N (Fuzzy) but Y + G is in L.

My first solution for this one had a composite (sum) game for G, but I've since found a non-composite solution:

Find G such that X + G is in R, but Y + G is in L.

If you know a bunch about the values of Domineering positions, I guess these are relatively simple. If not (or if you are good at forgetting), perhaps they require a bit of extra time to consider.
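
If you'd like to check a candidate G without drawing out trees, here is a brute-force outcome classifier for tiny boards (a sketch with my own encoding: a board is a set of free (row, column) cells, and I'm taking Left as the vertical player). A sum like X + G can be built by placing the components on disjoint coordinates.

```python
# A brute-force outcome classifier for small Domineering boards (a sketch).
from functools import lru_cache

def moves(cells, player):
    step = (1, 0) if player == 'L' else (0, 1)        # vertical vs. horizontal domino
    for (r, c) in cells:
        other = (r + step[0], c + step[1])
        if other in cells:
            yield cells - {(r, c), other}

@lru_cache(maxsize=None)
def wins_moving_first(cells, player):
    opponent = 'R' if player == 'L' else 'L'
    return any(not wins_moving_first(child, opponent) for child in moves(cells, player))

def outcome(cells):
    cells = frozenset(cells)
    left, right = wins_moving_first(cells, 'L'), wins_moving_first(cells, 'R')
    return {(True, True): 'N (Fuzzy)', (False, False): 'P (Zero)',
            (True, False): 'L (Positive)', (False, True): 'R (Negative)'}[(left, right)]

X = {(r, 0) for r in range(2)}                        # the two-square-tall column
Y = {(r, 0) for r in range(4)}                        # the four-square-tall column
print(outcome(X), outcome(Y))                         # both are L (Positive) on their own
```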

Friday, September 16, 2011

Shifting Connection games

After last Friday's post, reader Nick Bentley suggested playing again but allowing each player to not only place a new piece, but also shift an old piece each turn. Ernie and I easily spent all of our Monday meeting trying this out. Here's one of those games. During the game, we had a lot of questions about what was legal (e.g. declining the shift, etc). Hopefully Nick can answer those for us!

This really changes the game! You can see one point where I was about to lose because I hadn't anticipated the power of the shift. Ernie was nice enough to let me go back... :-D

Nick suggested another game to play, and then another reader, Marcos, suggested his own game. I guess Ernie and I have plenty on our plate! :)

I'm really interested in playing the shift version of Hex. Or the shift version of Adjex! Is there some ruleset to combine this with an adjacency restriction?

Monday, September 12, 2011

The Joy of Teaching Games

Last Wednesday was the second day we just spent playing games in class, and the first one where they had learned some of the theory (specifically, outcome classes). What a blast! Partway through the class, I started putting up Amazons positions for them to find the outcome classes of. They picked up on this challenge immediately, students flocking to the boards to post the class they had found, then either verifying or questioning the results of others. I had about twenty positions around the room and only a few of them remained unexplored by the end of the hour. The air was very charged, but the feeling was very positive. Students worked together to solve the problems, which required them to try out moves on physical boards, then confer with the people around them. Your opponent quickly becomes your best teammate as you collaborate to test all possible game tree paths. For some of the harder boards I put on the marker boards, groups banded together to discuss their results as a bigger team. There was not an unengaged mind in the room!

As I've mentioned, this class is a first-year-experience seminar at Wittenberg (a WittSem) and has the dual purpose of helping integrate the students into college life. After teaching the math/compsci-elective version of the class last year, I thought games could make for a nice WittSem topic. I was further spurred on by David Wolfe, who told me he had once taught a freshman-introduction class all about playing Go. (I only just played my first game of Go last week, so I wasn't ready for that!)

These new students have actually been very patient. I promised them early on we would spend entire class periods playing games, and it took over two weeks of class before we covered outcome classes, giving them something to analyze while playing.

Also on the point of teaching, I happened across an old reddit post of Joshua Biedenweg's, prior to his teaching a CGT course at UCSB. Josh finished teaching his course right as I was prepping for mine over a year ago; I took some good advice from him and unfortunately ignored some better advice! (Josh, I'm using Toads and Frogs more this year! Pictorial Evidence:

)

Next on the class agenda is Game Sums, and soon it will be time for them to find actual game values! Woohoo!

Conclusion: Teaching CGT is awesome. If you have the opportunity, take it!

Thursday, September 8, 2011

Messing with Y

Y is a game that is very similar to Hex: players take turns claiming hexagons with the hope of connecting sides. In Y, however, the board is a triangular array of hexagons, and the goal of each player is to connect all three sides instead of just their two sides. Perhaps more surprising than the Hex Theorem is the Y Theorem: if all hexagons are colored, exactly one contiguous group of same-color hexagons touches all three sides.

This game plays similarly to Hex, except that there are, like, 1.5 sub-Hex games going on at the same time. Ernie has taken a quick liking to this game; he's very good at beating me before I know I'm beaten! To escape this dilemma, I proposed that we give this the same treatment we gave to Hex: let's force adjacent play. We spent a bunch of time trying out different starting positions, and unfortunately it always seemed that the first player to move had a strategic advantage, even when that first play was at one end of the board or another (or smack in the center, of course). This means the second player always wants to invoke the pie rule. Earlier this week, we sat down for a few games and taped them. Our first game has Ernie starting in the center. Spoiler alert: he manages to get to all sides, but the game is quite close!

In the next two games, we try instead from the middle of one of the sides. These are both great games! We are already getting a handle on how this game works.

I'm not sure, however, that I like this as much as Adjacent Hex. Of course, I've gotten more interested in FLex (Follow-the-Leader Hex) so perhaps it's time for a game of FLY!

Does anyone have a story to share about trying out an "adjacent-enforced" version of another game?

Monday, September 5, 2011

An answer to an old Nimber question

Near to the start of this blog, I posed a question:

if the nimbers for a game are bounded above, does this imply that there is an efficient algorithm, or shortcut, to evaluate impartial games?

I recently realized there is strong evidence against this. It is true that any impartial ruleset where all the nimber values are either *0 or *1 is very easy to evaluate---all options from one position have the opposite value (otherwise there would be *2 states)---so all paths down the game tree have the same parity. All that's needed to create PSPACE-hard instances, however, are states with the value *2. Why is this? Well, we can change the rules for any game with higher values into an equivalently-winnable* game with maximum nimber *2. The plan here is to break down the options from a game when there are more than two. We do this by splitting up those options into separate groups, then preventing the other player from being able to make any choices.

For example, we can transform the game G = {A, B, C} into the game G' = { { { A, B } }, { { C } } }.

Here's a diagram of the game trees, with the nimbers *0, *1, and *2 respectively assigned to A, B, and C.
For positions with more than 3 options, this splitting can be done in a recursive fashion. Thus, games with any range of nimbers can be transformed into a game with maximum nimber of *2.
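
Here is a tiny sketch checking the example above (my own encoding, nothing standard: a position is just the tuple of its options, with A, B, and C standing in for positions of nimbers *0, *1, and *2):

```python
# Grundy values via mex, with positions encoded as tuples of options (a sketch).

def mex(values):
    m = 0
    while m in values:
        m += 1
    return m

def grundy(position):
    return mex({grundy(option) for option in position})

A = ()        # no options: nimber *0
B = (A,)      # one option of value *0: nimber *1
C = (A, B)    # options *0 and *1: nimber *2

G       = (A, B, C)              # the game {A, B, C}
G_prime = (((A, B),), ((C,),))   # the game {{{A, B}}, {{C}}}

print(grundy(G), grundy(G_prime))   # 3 and 2
```

Both values are nonzero, so the next player wins each game, and the largest nimber appearing anywhere in the transformed tree is *2.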

* game states may not have the same value, but winning strategies from one translate into winning strategies in the other.

Friday, September 2, 2011

Game Descriptions: NimG and Neighboring Nim

What happens if you restrict which games can be moved on in a game sum? This seems to be the idea behind NimG, a game played on a collection of Nim Heaps embedded in a graph. There are a few variants, but each turn a player does two things:

* traverse an edge of the graph adjacent to the last move, and
* remove sticks from the nim heap embedded in either the traversed edge or one of the vertices (depending on the variant).

In the sticks-on-edges version, players remove sticks from the edges while traversing them during the turn. Naturally, if the heap on the edge is empty, it cannot be traversed. In other words, the player who starts their turn on a vertex where all incident edges have empty nim heaps loses. Masahiko Fukuyama studied this game a bunch and more recently Lindsay Erickson has looked into instances on the complete graph.

When sticks are embedded into the vertices, a move still consists of traversing one edge to arrive at a new vertex. Upon arriving at the new vertex, a move is made on the nim heap embedded there. Gwendolyn Stockman may have been the first person to analyze this game in an REU advised by Alan Frieze and Juan Vera. This is the version that interests me: the vertices act like different heaps in a game sum, but the edges restrict which heap can be chosen each turn. In this ruleset, the complete-graph version is equivalent to standard Nim (provided each vertex also has a "self-loop": an edge connected twice to the same vertex). In this game, you lose if all vertices adjacent to the current location contain empty nim-heaps.

In order to avoid the self-loop issue, I altered the rules slightly and set out to do some analysis. Neighboring Nim is exactly the same game as (Vertex) NimG, but a player may choose not to traverse an edge and instead remove sticks from the current vertex (again).
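
To make the ruleset concrete, here is a small brute-force sketch (my own encoding, not from the paper) that decides whether the player to move wins a given Neighboring Nim position:

```python
# A brute-force Neighboring Nim solver (sketch): a position is the current
# vertex plus the heap sizes; a move removes sticks from the current vertex or
# from a neighbor, and the token moves to whichever vertex was reduced.
from functools import lru_cache

graph = {0: (1,), 1: (0, 2), 2: (1,)}       # a small path graph, just as an example

@lru_cache(maxsize=None)
def next_player_wins(current, heaps):
    for v in (current,) + graph[current]:   # stay put, or step to a neighbor
        for take in range(1, heaps[v] + 1): # remove at least one stick from v
            child = list(heaps)
            child[v] -= take
            if not next_player_wins(v, tuple(child)):
                return True                 # some move leaves the opponent losing
    return False                            # no sticks reachable: you lose

print(next_player_wins(1, (1, 2, 1)))
```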

I have actually played this game very rarely, mostly because I can't imagine a standard starting position. Nevertheless, I helped show that the game is PSPACE-hard. I was lucky enough to present this result at BIRS in January, which was quite pleasant. The paper should appear in Games of No Chance 5. (arXiv version)

The cool thing about the computational complexity of this game is that once all vertices have maximum heaps of size 1, the game is the same as undirected vertex geography, which is solvable efficiently. In order to get PSPACE-hard instances, we needed most vertices to have a heap of size 1, with only a few having a second stick on them (less than 1 in 7 of the vertices need the extra stick)! It's very interesting that such a small rule change can result in such a big complexity change!

Monday, August 29, 2011

New School Year!

A busy summer has ended, and the school year begins anew. I worked on a lot of projects over break, and since I'm only slightly modest, I'm sure I'll mention them soon. Since the workshop in Banff in January was so helpful, I'm planning to attend and speak at Integers 2011 in October. I wouldn't have known about this conference had Richard Nowakowski not mentioned the 2009 version to me two years ago.

I spent a lot of time working on an applet for Adjacent Hex, but I didn't quite finish. I hope to get that done during this semester, but can't make any promises.

This semester I'm feeling like a Math professor. I'm teaching two math courses (Discrete Math and a freshman-only games course) and our Algorithms course, which is very math heavy. This is the least programming of any semester I've had here; I don't know how to deal with that!

In addition, I have two excellent senior students doing cool projects. Will is working on an implementation of a Savage Worlds character creator, and Patrick is going to implement a Monte Carlo game tree search in Chapel. Very exciting!

There's lots to talk about. As normal, if there's something you're particularly interested in, let me know.

Sunday, May 8, 2011

Next Semester: Combinatorial Games "WittSem"

Another semester draws to a close. Although I missed posting last week, today will be a bonus final post for the semester. There may be a few more comments during the semester as I figure out how to upload videos that my phone thinks are too big, etc. (Patrick and Ernie had an epic FLex battle a few weeks back.)

In addition, I have a bit of research to get done this summer, so there may be some mention of that. One of my big projects is to prepare for my board games "WittSem" class next semester. This will be a first-year-college-students-only class that has the dual purpose of acting as an introduction to university. I will spend time helping to impart good study habits (do I have these?) and instill a love for games and math.

These WittSems must include a multi-disciplinary spin; in addition to the math, I will talk about cultural/historical aspects of games. To this end, I can cover many geographic regions with different games, but I'm not sure which points are the most interesting and should definitely be covered. Some ideas:

* Follow the evolution of Chess across Asia and Europe.

* Compare rule sets of Go, which are different by country/region.

* Perhaps the same is true of Mancala?

* Look at origins of Konane as well as taboos while playing.

This last one may become a recurring theme. The name of the class is "How to play board games: Culture and Tactics" (or something along those lines), so I was intending to talk about what was expected socially by players during the game.

Any additional help would be most excellent! If you know something interesting about the culture of games (from anywhere!) I would love to know!

Friday, April 29, 2011

Gamesters: only Extroverts?

On my trip to BIRS in January, I met a slew of awesome gamesters. Everyone was extremely friendly and I had intense conversations with many different people. Perhaps this is easy when there is a common ice-breaker: "Want to play a game?" People only turned this down when it got late; otherwise they were all eager for the challenge.

Does this mean everyone there was an extrovert? Does combinatorial game theory lend itself more towards extroverts? Will introverts find a hard time breaking into the field?

As I think back to the workshop, I don't recall a single person who came across as shy. There may have been some language barriers, but those don't usually cause a problem if both players know the rules to a game. Of course, if you sit down and play quietly, you may never find out whether your opponent is introverted.

I'm somewhat worried that introverts may have a hard time being interested in games and CGT as a result. (There are similar reasons I worry that introverted students aren't getting all the benefits my extroverted students are.) I can see that many people might be intimidated by a class based around something so inherently competitive. Despite the fact that no grades are determined by students' actual ability to play games, I understand the completely irrational fear of not wanting to die in a video game.

Perhaps I needn't worry. Perhaps games and puzzles attract an introverted personality. It is easy to confuse introverts with shy or quiet people; the two do not always go hand in hand.

Wikipedia describes introversion as "a personality trait involving a tendency to drive one's perceptions, actions, thoughts and emotions inside, resulting in reduced interest in activity directed to the outside world." Taking the time to study and consider a game state may induce the same sort of energy as spending time alone for some introverts.

Oh dear, I'm getting into a space I know nothing about. Anyone have any thoughts on this?

Friday, April 22, 2011

Recording games!

On Monday I played some great games of FLex with Ernie and about midway through the first one, I wished we had been recording our experience. We were trying to determine what a good first play looks like, and were testing out playing right smack in the middle. We played 3.5 great games (the last one is on pause in my office until next week's meeting) but took the time to really talk about moves and discuss whether each one looked like a mistake to the other person. Many times we backed up to try another game path. So far, my general feeling is that playing in the middle is a really strong opening move... so you shouldn't do it. With the Pie Rule in place, the second player can choose to take that move from you.

Since Monday, I considered getting a webcam to point at my table to record the games. Then I realized I also want sound (I have an old webcam; does it even work?) so I thought it would be nice to use my phone and upload the videos to YouTube. Unfortunately, I'm not about to hold my phone up for such a long time and I didn't find a stand online that would support it directly above a game board.

So yesterday I got a bit inventive. I brought some wire hangers into the office and spent my lunch (eating and) building this contraption.


(Images courtesy of Patrick Copeland.)

With the wire hangers and some books, I suspended my phone above the game board, then wrangled one of my students into playing a few games with me. We played three games of FLex, one is here, and the end of another is here. The tests went very well and the board is quite visible. I hope to post more videos using this in the future! Many thanks to Patrick for testing this wobbly contraption with me!

Friday, April 15, 2011

Game Description: FLex (Follow-the-Leader Hex)

After talking about Weak Hex (Whex) a few months ago, I tried playing it with some comrades here at Wittenberg. As a reminder, Whex is the version of Hex where both the first and second players must play adjacent to the first player's last move, with the winning condition exactly the same as before. Argimiro Quesada showed that this game is PSPACE-complete on general graphs, without needing to describe what happens if no adjacent move is possible (the graphs in his reduction are designed so that this never happens). When I asked him what the rule should be, he came up with the rule: if you can't play adjacent, then you lose.

This is a common first response for how to deal with this situation. While designing Atropos, I first considered using this same rule: if you can't play adjacent because there are no free spaces, you lose. Luckily, my advisor didn't like this and suggested jumping instead: then you can play anywhere. These jumps greatly improve the game, and are also a big part of Adjex (Adjacent Hex).

While first sitting down with some other gamesters to play Whex, we tried the lose-when-no-adjacent-moves-are-possible option, but found that to be a little unsatisfying. It is most exciting to have the game end when one player has built a path between their sides, not beforehand! We decided to add a jump rule here too.

Handling the jumps was a bit more tricky here, however, since the players have very different roles in Whex. Both players have to move adjacent to only the first player, not to whomever happened to play last. If the first player got to make a jump, then there is little question how to continue play. But what should you do if the second player gets to make the jump? Can they play anywhere they want? If so, are there any restrictions on where the first player goes afterwards? Can they play wherever they want, or do they have to play adjacent to where the second player just moved?

We came up with a nice solution for this case: switch the roles of the players. If the second player gets to jump, then they become the player whom both players must play adjacent to, instead of the first player.

Here are some more descriptive rules for the game, which we call Follow-the-Leader Hex (FLex):

This game is just like Hex, except that at any time one player is the Leader and one is the Follower. On your turn, if possible, you must play adjacent to the hexagon the Leader painted on their last move. If none are available, you may play on any unpainted hexagon on the board, and then you become the Leader, while your opponent becomes the Follower.

This game plays very nicely. In the first two games played between myself and Ernie, I won the first one without losing the "Leader" status, then won the second one starting as the Follower, fighting to become the Leader, getting the jump (and becoming the Leader) and using that to win. Having said that, in other games, we have seen the Follower force a win, so it's not clear which role is better. In fact, lately Ernie has had great insight into this game and playing as the Follower, who appears to have some advantage! Most of the games we play nowadays are determined by which player can get a jump first.

While playing, it is vital to use the Pie Rule to avoid a very simple immediate win (see this post on Whex). Unlike regular Hex, in FLex it may be more clear whether the second player should invoke the Pie Rule to win... maybe.

The credit for this game is really due to Argimiro; FLex is a minor tweak on Whex for a situation he had not previously considered. The tweak is due to Ernie, Obed and myself.

Enjoy! I am especially interested in hearing comparisons between Adjex and FLex. Which is more fun to play?

Friday, April 8, 2011

Combinatorial Games: a first-year class

The past few months I have been working on some basic planning for teaching combinatorial games as a first-year college class. At Wittenberg, we have "WittSem" courses; each incoming freshman must take one. This is finally really coming together, so I will likely teach this course in the fall. Woohoo!

This is a bit of a tricky task. Last semester my course started off too difficult because we were using a graduate-level math text and I didn't convert the book problems enough for the students. (Not to mention I was learning some of the material only slightly before teaching it.) My next batch of students will have less math background so I'm going to have to be even more careful. I will probably rely more heavily on worksheets and less on the book problems, though Lessons in Play will continue to be an excellent reference for the class.

I still plan on devoting one day per week to playing games and discovering outcome classes/values for different states. Each week we'll try to add some new evaluation tools and I'll look for great game examples of those tools.

All in all, the class will likely look a bit like the last, but without emphasis on proofs and programming. This last bit will be replaced by some discussion of cultural aspects of games throughout history. This is definitely a bigger task than I had last semester, but I'm already looking forward to it!

Once the semester starts, I'll link to the class page. Of course, if you are an incoming Wittenberg student and have any questions about how you can be allowed to play board games during class, please ask me!

Tuesday, March 29, 2011

PSPACE problems versus NP problems

I've recently been paying attention to community theoretical computer science forums, including on sites such as StackExchange. This question recently came up:

Are PSPACE-complete problems inherently less tractable than NP-complete problems?

Naturally, since we don't know whether PSPACE is not equal to NP, there is not yet a formal answer to this question. Intuitively, however, this becomes a question of whether it is harder to figure out which player can win a game than to find the solution to a (one-player) puzzle. The link above provides a wonderful bunch of points, but let me mention a few specifically. :)

* Ryan Williams notes that some randomized game tree solvers are very effective, and perhaps solving games isn't quite as terrifying as many might think.

* Neil de Beaudrap notes that verifying solutions is still extremely important, which we know how to do for NP-complete problems. Aaron Sterling backs this up with a great comment about humans being able to remember and describe proofs.

* Lance Fortnow points out that we know how to create average-case PSPACE-complete problems, but don't yet know how to do this for NP.

* Suresh Venkat mentions that in practice, NP-solvers are actually often very fast, and NP-completeness is not always seen as a barrier to computing.

This is one of the few questions on the forum that I have actually read multiple answers to, and it's one I feel I can read over and over. Personally, I find it hard to believe that PSPACE isn't much harder than NP, and professionally I'm partly banking on it by spending my time finding PSPACE-complete games!

Friday, March 25, 2011

No posts this week, and slimming down for a bit

Sorry to my readers (all seven of you, perhaps?) but I haven't had time to post this week due to some extra responsibilities. With doctor's appointments for pre-birth checkups increasing, I can already see I will need to slim down for the rest of the semester.

Next week and for the rest of the spring, I will be posting once per week.

Have a great weekend!

Friday, March 18, 2011

Game Description: Col

Col is another "classic" game that was first deeply analyzed by John H. Conway. It is a game that is inherently a graph game, but is often played on a whole piece of paper divided up into regions by drawn shapes. Each turn, a player chooses a blank region and paints it their own color. You may not choose a region that is adjacent to any other regions of the same color. (You can see some turns of the game in this very descriptive wikipedia page.) I'm sure whenever my future children are learning to draw in the lines, I'll try to coax a few games of Col out of them.

Col has an amazing and beautiful property for evaluation: each position has a value equal to either a number or a number plus star. No Ups, Downs, *k's (for k > 1), etc. The elegant proof is based completely on the simplicity of the game value, showing that all left options are less-than-or-equal-to all right options for any game. With this Col fact, you can inductively assert that game values of the form {x | x + *} cannot exist (when x is a number). The only remaining possibilities are numbers and numbers plus a star. Check out pages 47 and 48 of Winning Ways for all the details.

I was recently asked whether I knew the computational complexity of Col. I don't know that this analysis exists, but it could be that it is not computationally hard since the different game values have the nice property above. So far I don't know of any relationship between game value ranges and computational complexity (aside from impartial games with only values of 0 and *---those are solvable in polynomial time because the outcome doesn't depend on which move you choose).

Tuesday, March 15, 2011

Topics for this semester and questions about BIRS 2011

I still have plenty of topics to cover from the BIRS workshop about Combinatorial Games from January, but I want to do each of them justice. At the same time, I'm hoping to get more playable games up and running on my website. With any luck, I'll get some of this accomplished. There is a strong possibility posts will end a bit early this semester, as I'm expecting my first child the first week of May. :)

I haven't gotten to reply to the first comment here yet, so let me do that now.
"What did Fan Xie say? How strong is he? How did the program work? (Monte Carlo?) Can you elaborate on that? :)"

Yes, I certainly can elaborate! I did not get to know Fan Xie terribly well, except to help badger him into joining the NoGo tournament. (He nearly did not play! Luckily, Neil McKay is a persuasive individual!) I didn't pay a great amount of attention to Fan's games, aside from his game against the winning NoGo program. This was a wonderful match, because Fan spent too much time analyzing and explaining whether the computer's moves were good or not (and he did not write that program). Instead of focusing on winning, he was answering everyone else's questions, so the computer came out ahead.

I believe all the computer programs used Monte Carlo (multiple random trial) techniques. From what I understand, they were all modified versions of UAlberta's Go-playing program Fuego. I would like to spend more time talking about this, but the most interesting thing is that these Monte Carlo programs are so easily adaptable from one game to the next. In fact, I believe much of the software is written independent of the different game rules. If you supply the rules, the simulations will run and choose a probable good move.

I am not convinced that this works well for all games, however! I'd like to know how well the algorithm plays Chess, or even Nim! (Maybe impartial games are tough...)

Friday, March 4, 2011

Yet another Table Update Post

Today I'm replacing a usual post here with another big round of updates to this table of game rules.

The biggest update is adding links to websites where you can play the game listed. I didn't find links for all of them, but if you know of one (or a better one than what I have listed) please let me know so I can add it to the list! Additionally, I added links to posts labelled: "Game Description: X" where X is that game.

Since no one complained to me, I removed the column about the complexity of winning the game on that turn, replacing it with a column for general game properties.

As usual, I make some errors here, so please let me know if you find anything I need to fix!

Next week Wittenberg has spring break, so I won't have any new posts until Monday the 14th.

Tuesday, March 1, 2011

More Tsuro Combinatorics

I keep coming back to the game Tsuro, which I've mentioned in this blog several times.

Let me describe the rules quickly. The game consists of a 6 x 6 grid of empty squares. The border of this grid has two white lines leading in to each side of a square (12 per side, so 48 lines total along the border). Players begin by putting their marker on one of these lines.

There are 35 tiles in the set, each the size of the empty squares on the board. Each side of each tile has two path ends that line up with the two white lines on the border (see the picture at the end for an example of some tiles). Each tile has four path pieces, each with two ends at different places on the boundary of the tile. These paths can cross, and there is a tile for each combination of paths (more on this later). Players keep hands of three tiles, drawing a new one every time they play one.

Each turn, the current player chooses a tile from their hand and lays it on the board in front of their marker. This new tile continues the path their piece was on, and they move their piece to follow that path until its end (it may link to other already-played tiles, so it could go longer than just one tile). The player also moves any other pieces that had their paths extended by this tile, again as far as they can until they reach their path's end.

For all those pieces that moved, if the path either leads them off the board or collides with another marker, then they have lost and are out of the game. The winner is the last one standing (or tie if multiple people lose on the last turn). Since there are only 35 tiles and 36 squares, it is possible all tiles can be played without anyone losing. In fact, the game comes with enough markers for 8 players. Can they all complete the game? Can they all complete it if they all have access to all tiles from the beginning (instead of having separate hands)?

While I was showing Tsuro to my aide this week, this question came up. Ernie quickly became interested in trying out the combinations to see if he could get a working solution. This is an excellent challenge; I would guess the problem is NP-complete for general sets of tiles.

The question also came up of whether there are only 35 possible tiles, or whether the game is missing some possible combinations. I'm terrible at counting, but Ernie had some good insight and we worked it out. Here's a sketch of our logic:

Consider a single tile and count the possible configurations of paths in that tile. Choose one of the incoming path locations. There are 7 places that path could connect to. Now go clockwise around the tile to the next not-yet-connected path end. From here there are now 5 possible places this could connect to. After that, there are 3 more for the last one. In total that gives us 105 possible total configurations.

The game doesn't have this many, so there must be some symmetry that is not being considered. Since 35 = 105 / 3, it seems likely that rotationally symmetric positions are not being used (which is the case). Now can we find cases where other symmetry is used? Yup!


Each right piece is symmetric with the corresponding left piece! I think the game uses all pieces. It's excellent that this works out to exactly 35 pieces, leaving that extra final open square. Very elegant!
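
To double-check the count, here is a quick enumeration sketch (with my own labeling of the eight path ends as 0 through 7 going clockwise, two per side, so a quarter turn shifts each label by 2): it builds all the pairings and counts how many remain once rotations of the same tile are identified.

```python
# A brute-force check of the tile count (sketch): enumerate all pairings of the
# eight path ends, then count equivalence classes under the four rotations.

def matchings(points):
    """All ways to pair up an even-length tuple of points."""
    if not points:
        return [frozenset()]
    first, rest = points[0], points[1:]
    result = []
    for partner in rest:
        remaining = tuple(p for p in rest if p != partner)
        for m in matchings(remaining):
            result.append(m | {frozenset({first, partner})})
    return result

def canonical(matching):
    """A rotation-independent signature: a quarter turn shifts each end by 2 (mod 8)."""
    forms = []
    for quarter_turns in range(4):
        shift = 2 * quarter_turns
        rotated = [tuple(sorted((a + shift) % 8 for a in pair)) for pair in matching]
        forms.append(tuple(sorted(rotated)))
    return min(forms)

tiles = matchings(tuple(range(8)))
print(len(tiles), len({canonical(m) for m in tiles}))   # 105 pairings, 35 distinct tiles
```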

Friday, February 25, 2011

Game Description: Mad Rooks

This semester I am lucky enough to be working with a student on an implementation of Mad Rooks for the Android OS. In a previous post, I talked a bunch about the length of a Mad Rooks game, but I realized I didn't have a stand-alone post on the game itself.

Mad Rooks is another game developed by Mark Steere who has created a large number of excellent combinatorial games. This particular game is a sort of King-of-the-Hill Long-Distance Clobber. Both players begin with pieces that act like rooks in Chess in that they can move as many spaces horizontally or vertically on a checker board without jumping pieces. If a rook moves onto a space occupied by an opponent, the opponent's piece is removed (captured). A rook is called "engaged" if it can make a capturing move. Each turn, a player either uses one of their rooks to capture a piece or moves one of their unengaged rooks so that it is engaged.

Just as with Clobber, this game is all-small, which means that if there is a move for one player, then both players have a move. Although this does not mean that each game has a nimber value (though it always seems like it should to me), it does mean that every instance of an all-small game has an infinitesimal value. Infinitesimals are smaller than all positive numbers, yet greater than all negative numbers (the only infinitesimal number is 0). Values such as Up and * are infinitesimals.

It is especially interesting to look at end states in Mad Rooks, where suddenly making engaging moves instead of capturing with another piece is a better move, though engaging when you have to is often disastrous. It doesn't always even matter who has more pieces. Consider a board with rooks only on every square along the diagonal of the checkerboard. Now, if only one of those is Blue but the rest are Red, the next person to play loses the game, despite the fact that Red outnumbers Blue seven-to-one! (Try it!)

One reason some players may prefer this game to Clobber is that in the end all pieces of one player are captured, leaving the winner as the last color standing. In Clobber, you can't look at the final position and determine who won the game, but you can in Mad Rooks!

Tuesday, February 22, 2011

What are we taking in Nim?

Nim is a game with many different incarnations. In some cases, it concerns removing items from just one heap, but with restricted amounts. In others, there are multiple heaps on the table at any time. These variations are played by many people, all using the name "Nim" and all with different rules (some people learn the normal play rules, others learn misere). There are many near-gamesters who are experts at their version of Nim, but have problems if you vary even just the starting position.

Forgetting all this, however, what are those items that players are removing? What are the heaps made out of? I have heard uses of counters, sticks, beans, pennies/coins and others.

In elementary school, I learned the one-pile game as "Nim" and later the multi-pile game as "Sticks". With this second influence, it's very hard for me to not "pick up sticks" when I'm taking my turn in Nim. Not certain this was standard, I consulted Mike Weimerskirch and Thane Plambeck last month. Both of them referred me to the surreal movie Last Year at Marienbad, in which two characters compete at Nim. The games played in the film use matchsticks as props; perhaps sticks are the way to go! (This guy uses sticks, but he also cheats, so perhaps that's a point against it.)

If we consult the authoritative Winning Ways, however, we see that Nim is played with heaps of counters. It's hard to argue with this text! Indeed, Lessons in Play continues this tradition, referring to heaps of counters in the game Nim. Also, Mike told me straight-up that he's a counter man. Again, hard to argue with these Nim masters!

Beans? Beans are gross; who wants to think about playing games with food that will later be cooked? Didn't your parents teach you anything?

Candy, however, is another matter. Candy is delicious (and doesn't get cooked). Who wants to play Candy Nim?

Pennies and various coins make sense, except that the goal of the game is not to make money, but to make (or not make) the last move. Still, I bet these are often used as the "counters".

I'm still not sure what to use, but I'm going to keep with sticks until someone convinces me otherwise!

Friday, February 18, 2011

Game Description: Weak Hex (Whex)

When I was first interested in Adjacent Hex (Adjex) a few years back, a friend of mine pointed out another similar game to me: Weak Hex. Weak Hex, or Whex, is extremely similar to adjacent hex because both players must move adjacent to another player's move. Instead of playing next to the last move of either player, however, both must play adjacent to the last move of the red player!

Here is the description from the original Whex paper by Argimiro A. Arratia Quesada:
"Players can not colour an arbitrary vertex but must proceed as follows: Player 1 begins the game and he must do it by colouring red a vertex adjacent to the source. From this move and on, Player 2 must colour blue an uncoloured vertex adjacent to the vertex last coloured red (i.e. coloured by Player 1), and Player 1 replies by colouring red an uncoloured vertex adjacent to the vertex that he coloured red last."
(If no adjacent moves are available, the next player immediately loses.)

This game is intended for play on a graph instead of the standard hex board, but we can consider that also. In this case, the "source" is likely to be an entire red side (the sink, then, is the other red side). The goal for the red player is to connect the two red sides, and the goal for the blue player is to prevent such a connection. This matches up precisely with the winning conditions of standard Hex, so it is possible to play Whex on the regular hexagonal grid.

Additionally, the game is shown to be PSPACE-complete, so it is certainly hard to play in the general case! This is the same as with Hex and Adjex. In Hex, it is known that the first player has a winning strategy, but that strategy itself is not known. In Whex, however, not only does the first player have a winning strategy, but it can be described very quickly! In fact, I bet you can come up with it yourself!

(Find an opponent, see if you can come up with a method to always win playing first (as red), then come back and finish reading!)

Okay, on your first turn, play in the hexagon "on the bottom".


On each subsequent turn, play in one of the hexagons directly towards the upper-right-hand (red) side. Blue can only choose to occupy (at most) one of those on their turn, so you will get to take the other. Since you don't have to worry about playing adjacent to them, they can't deter you (as is possible in Adjex). Enjoy your march to victory!
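If it helps to see the counting behind this, here is a tiny Python toy (entirely my own abstraction; it ignores the actual hex geometry): on each Red turn there are two fresh cells that advance Red toward the far red side, Blue can block at most one of them, and Red takes the other.

    import random

    def red_march(steps, blue_block=lambda pair: random.choice(sorted(pair))):
        # Abstract model: each step offers Red two advancing cells ("a" and "b");
        # Blue, forced to answer adjacent to Red's last move, blocks at most one.
        path = ["start"]
        for i in range(steps):
            pair = {("cell", i, "a"), ("cell", i, "b")}
            blocked = blue_block(pair)             # Blue takes at most one of the two
            path.append((pair - {blocked}).pop())  # Red takes the remaining cell
        return path  # Red always completes the march, no matter what Blue does

    print(red_march(5))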

If you play with the pie rule, I'm not sure what happens!

Tuesday, February 15, 2011

Blog Layout & Table Updates!

Thanks for tuning in! Molly Dannaher, one of my students in combinatorial games last semester, provided the excellent artwork here, which comes straight from my marker board!

Also, I have made several updates to the game table over the past week, but there's still much more to add.

There was no update Friday as I was home sick.

Thank you for all the emails about potential post topics. I've been a bit weighed down with other work, but I do have lots I want to talk about!

Tuesday, February 8, 2011

Why bother about Computational Complexity?

In a recent phone call with my Ph.D. advisor, Shang-Hua Teng, I was reminded how important computational complexity is for combinatorial games. The punchline is very simple:
"Hard games give humans a chance to beat machines."
If no efficient algorithm exists to determine who wins a game, then a program cannot always efficiently choose the best move to make. Hardness results are evidence that this is the case (although we still don't know that no efficient algorithm can solve NP-hard or PSPACE-hard problems). These games give us some sense of safety that the computer doesn't already have the game won.

This also applies to the concept of experts and novices. It's not too hard to become an expert in an easy game like Nim. Learn the evaluation trick (computable in polynomial time) and you're good to go. At this point, if you play against someone who doesn't know that simple trick, you can beat them every time (unless they are very lucky). In a game such as Go, much more practice is required to become an expert. There is no quick evaluation algorithm to learn. However, since the strategies are not as clear cut, there is a bigger chance that the novice will be able to beat the expert.
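For the curious, here is a minimal sketch of that Nim trick in Python (my own illustration, with heaps given simply as a list of sizes): the next player loses exactly when the XOR of the heap sizes, the nim-sum, is zero, and otherwise a winning move reduces some heap so that the nim-sum becomes zero.

    from functools import reduce
    from operator import xor

    def nim_sum(heaps):
        return reduce(xor, heaps, 0)

    def winning_move(heaps):
        # Returns (heap_index, new_size) for a winning move, or None if the
        # position is already a P position (the player to move loses).
        s = nim_sum(heaps)
        if s == 0:
            return None
        for i, h in enumerate(heaps):
            if h ^ s < h:          # shrinking heap i to h ^ s zeroes the nim-sum
                return (i, h ^ s)

    print(winning_move([3, 5, 7]))  # -> (0, 2): move to heaps (2, 5, 7), nim-sum 0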

When first designing Atropos, our goal was not just to find a 2-player game, but to find a (PSPACE) hard game. We changed the rules until we could get a proof of the hardness working. At that point we were quite satisfied! Some other games I've defined seem less impressive without a similar result. Learning that games are provably hard usually makes me want to play them more!

(I'm trying to keep track of game facts on this table, with quite a bit of bias towards computational complexity! Please let me know if there's something you'd like to add!)

Friday, February 4, 2011

Game Description: Wythoff's Nim/Game

Last semester, one of my students presented Wythoff's Nim and did a great job explaining all the theory behind it. This is a simple, classic game with some beautiful properties, and something people continue to study variants of.

Wythoff's Nim (often called "Wythoff's Game") is just like playing "regular" Nim with two piles of sticks, except that you are also allowed to take X sticks from both piles, where X is any positive number. Thus, from the situation where the piles are of size 2 and 4 (we refer to this as (2,4)), legal plays include moving to (1,4) [taking one stick from the first pile], (2,0) [taking all four sticks from the second pile] and (1,3) [taking one stick from each pile]. Just as in Nim, the only position where no plays are available is (0,0).

This game is often envisioned as a Queen moving across a chess board, who starts anywhere on the board, and cannot move up or to the right. Players can thus move her left any number of spaces (subtracting from the first pile), down any number of spaces (subtracting from the second pile), or diagonally down and to the left (subtracting from both piles). This version of the game is visualized in the playable applet on this site.

This is naturally an impartial game, so the question arises: where are the P positions? (0,0) is automatically one, and (1,2) is another, since it has no safe move (all moves from it lead to N positions). (3,4) is not a P position (you can move to (1,2)), but (3,5) is. It turns out there is a very fancy trick for finding the P positions: a surprising application of the golden ratio, φ!

All P positions are of the form (floor(k*φ), floor(k*φ^2)) for k ≥ 0, or the reverse. If we think about these as a sequence of pairs ((a_k, b_k)), then the sequences (a_k) and (b_k) (taking k > 0) are complementary Beatty sequences, meaning they never intersect and together contain every positive integer. Try out the first few pairs, and you'll see that this makes sense. Thinking in terms of the chess board and the queen, each row should contain a P position, and no row can contain more than one.
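Here is a small Python sketch (my own verification code, not part of the original description) that lists P positions via the golden-ratio formula and can be double-checked against the brute-force recursive definition:

    from functools import lru_cache
    from math import floor, sqrt

    phi = (1 + sqrt(5)) / 2

    def formula_p_positions(n):
        # P positions (floor(k*phi), floor(k*phi^2)) for k = 0 .. n-1.
        return [(floor(k * phi), floor(k * phi * phi)) for k in range(n)]

    @lru_cache(maxsize=None)
    def is_p(a, b):
        # Brute force: a position is P exactly when every move leads to an N position.
        a, b = min(a, b), max(a, b)
        moves = [(a - x, b) for x in range(1, a + 1)]       # take from one pile
        moves += [(a, b - x) for x in range(1, b + 1)]      # take from the other
        moves += [(a - x, b - x) for x in range(1, a + 1)]  # take from both
        return all(not is_p(*m) for m in moves)

    print(formula_p_positions(6))
    # [(0, 0), (1, 2), (3, 5), (4, 7), (6, 10), (8, 13)] -- and is_p agrees on each.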

Many variants of this game have been studied, often relating to Beatty sequences, which I hope to talk about more in the future. In the meantime, it's an excellent game for learning the winning "trick" and then showing off to your friends.

Tuesday, February 1, 2011

Graph NoGo is NP-hard

Last week, I mentioned we had some computational complexity results concerning the game NoGo, which was the "new game" played at the game workshop at BIRS this year. After getting more comfortable with the game play, it felt like the game might be another great example of PSPACE-completeness. I put some effort into solving this, knowing it would be great to resolve the complexity while at the workshop.

Finding gadgets is tough, especially when dealing with the grid geometry of the NoGo board. Thus, I took a step backwards and considered NoGo played on a general graph. The rules to this game are still the same: each maximal connected group of same-colored vertices must be adjacent to at least one uncolored vertex. Still, I had no luck finding a PSPACE-hardness reduction. Instead, I took another step backwards and considered showing NP-hardness for Graph NoGo.

Woohoo! Success! Here is the plan for this proof: we will reduce from the Independent Set problem, known to be NP-hard. Given an instance (G, k) of Independent Set, construct a Graph NoGo position, G', in which only the Black player can move, where each play corresponds to choosing a vertex of G, the original graph, to be part of the independent set. If the Black player can make k plays on the new game, G', then there will exist a corresponding independent set of k vertices in G. To turn this into a Graph NoGo position where the question is "Can the Black player win if they play next?" instead of "Can the Black player make k plays?", we add (k-1) unconnected nodes where only the White player can move. Now, in order for Black to win (going first), they must be able to make k (or more) plays.
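To isolate the counting part of that argument, here is a tiny Python toy (my own abstraction; it ignores the gadgets entirely): Black moves first with b plays available among the vertex gadgets, White has only the k-1 free nodes, and whoever cannot move loses. Black wins exactly when b >= k.

    def black_wins(b, k):
        # Alternating play: each player moves if a move is available;
        # the first player unable to move loses.
        black_left, white_left = b, k - 1
        black_to_move = True
        while True:
            if black_to_move:
                if black_left == 0:
                    return False
                black_left -= 1
            else:
                if white_left == 0:
                    return True
                white_left -= 1
            black_to_move = not black_to_move

    for k in range(1, 5):
        print(k, [black_wins(b, k) for b in range(6)])  # True exactly when b >= k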

So, there are two things to show. First, we need gadgets in Graph NoGo where only one of the players can play. Second, we need to be able to glue those pieces together so that playing in one of these locations means you can't play in an "adjacent" one. As it turns out, neither of these is terribly hard.

As above, if G is our graph for the Independent Set problem, then for each vertex in G, we will use the following gadget for our game of Graph NoGo:
Here, x is the place Black can choose to play their stone in. The only other blank spot cannot be played by either Black or White. Notice that White cannot play on vertex x.

Now, for each edge in G connecting two vertices, we will connect only the x vertices of the gadgets with another gadget:
Now, if Black plays in one of the vertex-gadgets, he can't play in the neighboring ones. Thus, any set of Black plays on the Graph NoGo board G' corresponds to an independent set on G. For example, two adjacent vertices in G, x and y, and the edge that connects them will look like this:
Thus, if we can always efficiently answer the question "Can the Black player win this game of Graph NoGo?", we can also solve Independent Set in polynomial time. Therefore, Graph NoGo is NP-hard.

One nice property of this reduction is that it preserves planarity of the graph. Since Independent Set remains NP-complete even on planar graphs, Graph NoGo is also NP-hard when restricted to planar graphs.

So, how do we now take either of those two steps forward, to reach a PSPACE result or an actual NoGo result? Perhaps next week...

Friday, January 28, 2011

Game Description: Adjacent Hex

While at the CGT (Combinatorial Game Theory) workshop earlier this month, I watched Ryan Hayward give an excellent talk about Hex and "Rex" (Misere Hex). Ryan is an excellent Hex player and even better Hex theorist (and also a good curling skipper). I was determined to play some Hex with him, but needed a little extra help since I had played so little. Thus, I suggested we play a variant: Adjacent Hex.

The game is exactly like standard Hex, except that it mimics the play restriction from Atropos: after an opponent paints a hexagon their color, you must play adjacent to their last move. If all the adjacent hexagons are already colored, you may choose to play in any uncolored hexagon on the board. (We've been referring to this as a "jump".)
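Here is a minimal sketch of the move rule, assuming a Python representation where the board is a dictionary from axial hex coordinates to "red", "blue", or None for uncolored (the representation is my own choice, nothing official):

    HEX_NEIGHBORS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

    def legal_moves(board, last_opponent_move):
        # Cells you may color: those adjacent to the opponent's last move,
        # or ("jump") every uncolored cell if none of those are open.
        uncolored = {cell for cell, color in board.items() if color is None}
        if last_opponent_move is not None:
            q, r = last_opponent_move
            adjacent = {(q + dq, r + dr) for dq, dr in HEX_NEIGHBORS} & uncolored
            if adjacent:
                return adjacent
        return uncolored  # jump: play anywhere uncolored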

Ryan and I played one game with these rules. At one point, I thought I was doomed, but Ryan pointed out a really good move for me. I can't recall whether that move led me to win, but either way it was clear his Hex skills were helping out. We proceeded to play two other games using different rules for jumping, but both agreed the original game was best.

I convinced Alan Guo to play a few games with me. We went back and forth, winning and losing, but building up some good intuitive strategies for the game.

Since returning to Wittenberg, I am undefeated. (My aide, Ernie, would like to add: "So far.") I've played against a number of students, and even beat the combined talents of my aide and teaching assistant, Patrick, earlier this week. Obed, the director of our math resource center, has vowed to get a set to practice with his own students.

This game plays very nicely: knowledge of Hex will help out, though knowing that playing in one hexagon will win you the game doesn't always help if your opponent never plays adjacent to that spot. In most of the games I've played, players are usually fighting to be the one working on their own path, which is sometimes harder than it sounds. Usually one player is currently working on a path, while the other player is just trying to derail that path using the adjacency rule. If their derailing is successful (this can be quite tricky) then that player starts building a path fragment and the other player is now trying to thwart that plan.

Sometimes play moves into an area that is closed off where the color of the hexagons does not matter. Now both players are competing to force their opponent to paint the last hexagon in the region, ensuring that they will be the one to "get the jump" and play wherever they want. A few times I've been in great situations where I decided not to connect two of my important paths because doing so would give my opponent a jump. This is a tough choice!

One nice property of this game is (I think) that it is "automatically" PSPACE-complete. If I understand Stefan Reisch's original "Hex is PSPACE-complete" proof, the gadgets in the reduction don't have any adjacent uncolored hexagons [citation needed], thus ensuring that each move gives the opponent a jump and the adjacency rule doesn't come into play. (I wonder how helpful it would be to produce a translation of Reisch's paper.)

In my office, I have a 14 x 14 hex board, just ready for adjacent-minded opponents!

Tuesday, January 25, 2011

Computational Complexity of Games

At BIRS this year, the game everyone was interested in was Go.

I guess it's hard to compete with that! Aside from Go, one of the big features was NoGo. We had the first ever NoGo world tournament, both for humans and computers. After those tournaments were over, Fan Xie, the reigning human champion, battled Bob Hearn's champion NoGo-playing program. When presented with a new game like NoGo, as a computer scientist I'm always interested in the computational complexity: how hard is it to determine which player has a winning move? How long would it take a computer program to figure this out?

For some games, we have a bad (or good, depending on how you see it) answer: a lot. For Chess, for example, an algorithm that can evaluate the "winnability" of every game requires an exponential number of steps in the worst case. When I say "exponential", I mean "exponential in the amount of information needed to describe the game board". Thus, if I look at all Chess boards that can be described using 1,000,000 bits of information, some of those require around a^(1,000,000) steps to figure out who is going to win, where a is some constant greater than 1. This is not a feasible amount of time, even for computers. As the boards get bigger, the number of steps grows by a multiplicative factor of a for each additional bit of description. We say that these problems are in the class EXPTIME because they take an exponential amount of time. (Figuring out who wins a game of Go is also one of these problems.) To be more general, we also include problems which can be solved in less time. Thus, the "complexity class" EXPTIME is the set of all problems which can be solved in an exponential amount of time (or less).

In general, if you want to figure out who wins a game by just trying all the options, this method usually takes an exponential amount of time because there are an exponential number of scenarios to reach the end of the game which must be tested.
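As a concrete illustration (a hedged Python sketch, not a claim about any particular program), here is that brute-force approach for an impartial game described by a moves function; it gives correct answers but explores a game tree whose size blows up exponentially with the size of the position description:

    def next_player_wins(position, moves):
        # Exhaustive search under normal play: with no moves available, you lose;
        # otherwise you win if some move leads to a losing position for the opponent.
        return any(not next_player_wins(option, moves) for option in moves(position))

    def nim_moves(heaps):
        # Example game: multi-pile Nim, a position being a tuple of heap sizes.
        for i, h in enumerate(heaps):
            for new_size in range(h):
                yield heaps[:i] + (new_size,) + heaps[i + 1:]

    print(next_player_wins((1, 2, 3), nim_moves))  # False: (1,2,3) is a P position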

However, for some games, the player with a winning strategy can be figured out really quickly! In the game Nim, for example, there is a fast procedure ("trick") to figure out whether the next player can win. This can be done in a polynomial amount of time, meaning there is a polynomial function that describes the number of steps needed for a computer to determine who has a winning strategy, again in terms of the size of the information needed to describe the game state. Although polynomials can have large exponents, for any one given polynomial, that exponent remains constant. We say that problems that can be answered in a polynomial amount of time are in the complexity class P. Since anything that can be solved in polynomial time can certainly be solved in exponential time, we know that P is a subset of EXPTIME; the time hierarchy theorem further tells us that the two are not equal.

For some games, we don't know how long the fastest algorithm to determine winnability takes. Sometimes we describe these in terms of resources other than time. The complexity class PSPACE is the set of problems which can be solved with a polynomial amount of "space" meaning a polynomial amount of room to keep a record of important information in the algorithm. The space can be reused; erasing is allowed! Imagine that instead of restricting the amount of time you have to take a test, you are instead restricted by the amount of scrap paper you are given to do your calculations. It has been shown that P is a subset of PSPACE, which is in turn a subset of EXPTIME. It is not known whether either is equal to PSPACE (obviously both cannot be).

We can, however, say that there are some games that are at least as hard as every problem solvable in a polynomial amount of space, and are thus among the hardest problems in PSPACE. These are known as PSPACE-complete. We show that a game (or any sort of computational problem) is PSPACE-complete by showing two things:
  • The game is in PSPACE
  • Any other problem in PSPACE can be efficiently rewritten as an equivalent instance of this game. (Known as hardness. Proving this means the game is PSPACE-hard.)
The notion of completeness works for any complexity class. Thus, for some class XXXXX, proving that a problem is in XXXXX and is XXXXX-hard means that the problem is XXXXX-complete. Chess, for example, is known to require an exponential amount of time because it has been shown to be EXPTIME-complete.

To show the second part (hardness) for a game, Y, we start with any other PSPACE-complete game, X, and show that any instance of X can be rewritten as an instance of Y where the next player wins Y exactly when the next player would win X. This is known as "finding a reduction" or "reducing X to Y". This is where the fun is! Check out this game table to see some known complexity results for games.

Why bother to show that a game is PSPACE-complete? Although we don't know whether P = PSPACE, it is widely believed that the two are not equal. Solutions to problems are generally considered "efficient" if there is a polynomial-time algorithm, so if that belief is correct, PSPACE-complete problems cannot be solved "efficiently".

This post has gotten long, but I realized I should explain a bit more about computational complexity before blundering forward into some complexity results from BIRS. Next week I will talk more about that, and especially how it relates to NoGo.

I'm sure I left lots out! Please help me out in the comments!

Friday, January 21, 2011

Updated Game Table

I made some updates to this game table with suggestions from others (and many things I learned at BIRS).

I am considering removing the last column ("Can I win this turn?") mostly because it's only interesting for a few combinatorial games. Anyone have a strong feeling about this?

Tuesday, January 18, 2011

BIRS Workshop, 2011

Last week I was at the Banff International Research Station for a workshop on Combinatorial Game Theory. It was excellent! I got to meet many CGT bigwigs, play a lot of great games, present some things and even prove a few things.

Here were some highlights:
  • Presenting Atropos
  • Meeting 35 new friends
  • Playing Cookie Cutter with creator Paul Ottaway
  • Listening to current NoGo World Champion, Fan Xie, explain why he was losing to Bob Hearn's NoGo program while playing it.
  • Proving something on Tuesday and seeing the result mentioned in a talk on Wednesday morning!
  • Learning to Curl
  • Witnessing Zhujiu Jiang take on eight mathematicians at once in Go. (He only lost one of the matches.)
  • Playing Games... and losing most of them.
I learned a lot, and once I sort out all my notes I'll have a number of good blog topics from the meeting.

In other news, welcome to the Spring semester! I will return to the schedule of posting Tuesdays and Fridays. As normal, let me know if there's anything interesting you'd like me to cover! :)