Full Report for HexSygo by Christian Freeling

Goal: Have the highest score at the end of the game.

Generated at 22/02/2021, 18:42 from 1000 logged games.

Rules

Representative game (in the sense of being of mean length). Wherever you see the 'representative game' referred to in later sections, this is it!

PLAY

Each turn, either:

- Place a new stone on an empty cell to found a new group (seeding), or
- Grow every one of your groups by adding one stone to each (growth).

CAPTURE

If a group loses its last liberty, it is captured. Unlike Go, captured stones are not removed; they are flipped to the capturing player's colour, Othello-style.
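
As an illustration of the capture rule, here is a minimal sketch of flip-capture on a generic board graph. It assumes a hypothetical neighbours(cell) adjacency function and a board dict mapping cells to 'B', 'W' or None; it is not Ai Ai's or the designer's implementation, and suicide/legality checks are omitted.

```python
def group_and_liberties(board, neighbours, start):
    """Flood-fill the group containing `start`; return its cells and liberties."""
    colour = board[start]
    group, liberties, frontier = {start}, set(), [start]
    while frontier:
        cell = frontier.pop()
        for nb in neighbours(cell):
            if board[nb] is None:
                liberties.add(nb)
            elif board[nb] == colour and nb not in group:
                group.add(nb)
                frontier.append(nb)
    return group, liberties


def place_with_flip_capture(board, neighbours, cell, colour):
    """Place a stone, then flip (rather than remove) any adjacent enemy group
    that is left without liberties -- the key difference from Go."""
    board[cell] = colour
    enemy = "W" if colour == "B" else "B"
    for nb in neighbours(cell):
        if board[nb] == enemy:
            group, liberties = group_and_liberties(board, neighbours, nb)
            if not liberties:
                for g in group:
                    board[g] = colour  # captured stones switch sides
```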

SCORING

Territorial, as per Go. There is no komi.

Miscellaneous

General comments:

Play: Combinatorial

Family: Go

Mechanism(s): Connection, Pattern

Components: Board

BGG Stats

BGG Entry         HexSygo
BGG Rating        7.33333
#Voters           3
SD                0.471405
BGG Weight        0
#Voters (weight)  0
Year              2010

BGG Ratings and Comments

User: mrraow   Rating: 8
An extremely interesting riff on Go. The flip-captures mean there's no need for a ko rule, and the growth rules lead to a shorter game than go on a similar sized board. Interesting trade-offs between growing and seeding; as you'd expect from the Symple mechanism. I think I prefer this to Symple; Sygo feels sharper, in the sense that each stone placement is more critical. In Symple, where you grow seems much less important than the simple fact that you are growing; but that may just be my flawed understanding of the game.

User: T0afer   Rating: 7
Although Sygo is great, I honestly am not sure I like it as much as I first did. In general I view Go variants as less interesting than chess variants because with chess variants they add needed variety or fix (or attempt to fix) several issues with high level chess play or perceived lack of material completeness. With Go it feels like Go doesn't lack that sort of variety or suffer from First Player Advantage or Draw margin enough to warrant spending much time with the various variants. Because of that I'm bumping Sygo down while simultaneously bumping up my admiration of Symple. I just enjoy the elegance and simplicity of Symple more.

User: milomilo122   Rating: N/A
When I first learned about the concept behind this game, I was dubious - it seemed like a recipe for chaos, what with the possibility for placing a large number of stones on a single turn. I've since learned that many people react similarly to the idea. However, when I played the game my doubts dissolved. It works waaaaaaay better than the rules suggest it would. In fact it's a very good game, maybe even a great one. Update: after a few further attempts to play this game, having to place all those stones on a single turn has gotten a little frustrating for me. Analysis overload.

User: orangeblood   Rating: 7
Another fun design from Christian, and another one that he calls among his most important… among his six core games (the others being Grand Chess, Dameo, Emergo, Symple, and Storisende). Of course it will inevitably be compared to Symple (because it uses the same move protocol). However, once you get into your first game, you’ll see it’s not really like Symple at all. Once the game is finished, the scoring is pretty much like Go. But to get there… really wild. When you capture stones they’re flipped, Othello-like, to your own color. This can present problems as you of course are trying to form two eyes in each of your groups. It’s also really fun to think about things like what would be a normal joseki in Go. Should I try a 3-3 invasion? Maybe not, given that you don’t always connect to the whole group, and thus would lose a turn adding to each of your other groups. Anyway, I can’t really say if I prefer Symple or Sygo at this point, but I do like them both very much!

Kolmogorov Complexity Analysis

Size (bytes)    38461
Reference Size  10293
Ratio           3.74

Ai Ai calculates the size of the implementation and compares it to the Ai Ai implementation of the simplest possible game (which just fills the board). Note that this estimate may include some graphics and heuristics code as well as the game logic. See the Wikipedia entry on Kolmogorov complexity for more details.
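
As a quick check, the ratio above is simply the implementation size divided by the reference size; a minimal sketch using the figures from the table (the variable names are illustrative):

```python
# Figures taken from the table above.
game_size_bytes = 38461       # HexSygo implementation
reference_size_bytes = 10293  # simplest possible game ("fill the board")

ratio = game_size_bytes / reference_size_bytes
print(f"Size ratio: {ratio:.2f}")  # ~3.74
```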

Playout Complexity Estimate

Playouts per second         1218.04 (820.99µs/playout)
Reference rate (Connect 4)  2052123.95 (0.49µs/playout)
Ratio (low is good)         1684.78

Tavener complexity: the heat generated by playing every possible instance of a game with a perfectly efficient programme. Since this is not possible to calculate, Ai Ai calculates the number of random playouts per second and compares it to the fastest non-trivial Ai Ai game (Connect 4). This ratio gives a practical indication of how complex the game is. Combine this with the computational state space, and you can get an idea of how strong the default (MCTS-based) AI will be.
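
For concreteness, the ratio in the table can be reproduced from the reported playout rates; a minimal sketch (figures from the table, names illustrative):

```python
# Figures taken from the table above.
hexsygo_playouts_per_sec = 1218.04
connect4_playouts_per_sec = 2052123.95  # reference: fastest non-trivial Ai Ai game

ratio = connect4_playouts_per_sec / hexsygo_playouts_per_sec
print(f"Ratio (low is good): {ratio:.2f}")                           # ~1684.78
print(f"Cost per playout: {1e6 / hexsygo_playouts_per_sec:.2f} us")  # ~820.99 microseconds
```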

Playout/Search Speed

Label           Its/s  SD  Nodes/s  SD      Game length  SD
Random playout  1,189  49  343,716  14,130  289          10
search.UCB      1,229  12  -        -       283          30
search.UCT      1,224  11  -        -       285          11

Random: 10 second warmup for the hotspot compiler. 100 trials of 1000ms each.

Other: 100 playouts, means calculated over the first 5 moves only to avoid distortion due to speedup at end of game.
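
A minimal sketch of this kind of timing harness (warm-up followed by fixed-length timed trials); the random_playout function is a stand-in, not Ai Ai's implementation:

```python
import time
import random
import statistics

def random_playout():
    """Stand-in for a real random playout of the game."""
    moves = 0
    while random.random() > 0.004:  # ends after ~250 moves on average
        moves += 1
    return moves

def benchmark(warmup_s=10.0, trials=100, trial_s=1.0):
    end = time.perf_counter() + warmup_s       # warm-up for the JIT / hotspot compiler
    while time.perf_counter() < end:
        random_playout()

    rates = []
    for _ in range(trials):                    # fixed-length timed trials
        count, end = 0, time.perf_counter() + trial_s
        while time.perf_counter() < end:
            random_playout()
            count += 1
        rates.append(count / trial_s)
    return statistics.mean(rates), statistics.stdev(rates)

mean_rate, sd = benchmark(warmup_s=1.0, trials=10)  # shorter run for illustration
print(f"{mean_rate:.0f} playouts/s (SD {sd:.0f})")
```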

Mirroring Strategies

Rotation (Half turn) lost each game as expected.
Reflection (X axis) lost each game as expected.
Reflection (Y axis) lost each game as expected.
Copy last move lost each game as expected.

Mirroring strategies attempt to copy the previous move. On the first move, they will attempt to play in the centre. If neither of these is possible, they will pick a random move. Each entry represents a different form of copying: direct copy, reflection in the X or Y axis, and half-turn rotation.
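
A minimal sketch of such a strategy, assuming a hypothetical board interface with legal_moves() and centre() methods and a transform function mapping a cell to its mirror image (identity for direct copy, axis reflection, or half-turn rotation):

```python
import random

def mirroring_move(board, last_opponent_move, transform):
    """Copy the opponent's last move under `transform`, falling back to the
    centre on the first move and to a random legal move otherwise."""
    legal = board.legal_moves()
    if last_opponent_move is None:
        centre = board.centre()
        if centre in legal:
            return centre
    else:
        candidate = transform(last_opponent_move)
        if candidate in legal:
            return candidate
    return random.choice(legal)
```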

Win % By Player (Bias)

1: White win %  50.90 ± 3.10  Includes draws = 50%
2: Black win %  49.10 ± 3.09  Includes draws = 50%
Draw %          0.60          Percentage of games where all players draw.
Decisive %      99.40         Percentage of games with a single winner.
Samples         1000          Quantity of logged games played

Note that win/loss statistics may vary depending on thinking time (horizon effect, etc.), bad heuristics, bugs, and other factors, so they should be taken with a pinch of salt. (Given perfect play, any game of pure skill will always end in the same result.)

Note: Ai Ai differentiates between states where all players draw or win or lose; this is mostly to support cooperative games.

UCT Skill Chains

Match  AI           Strong Wins  Draws  Strong Losses  #Games  Strong Score                p1 Win%  Draw%  p2 Win%  Game Length
0      Random       -            -      -              -       -                           -        -      -        -
1      UCT (its=2)  630          2      303            935     0.6442 <= 0.6749 <= 0.7041  51.55    0.21   48.24    288.47
4      UCT (its=5)  630          1      355            986     0.6090 <= 0.6395 <= 0.6688  50.10    0.10   49.80    288.27
7      UCT (its=8)  565          2      433            1000    0.5351 <= 0.5660 <= 0.5964  48.10    0.20   51.70    287.10
8      UCT (its=8)  481          2      517            1000    0.4512 <= 0.4820 <= 0.5130  46.90    0.20   52.90    288.55

Search for levels ended: time limit reached.

Level of Play: Strong beats Weak 60% of the time (lower bound with 95% confidence).

Draw%, p1 win% and game length may give some indication of trends as AI strength increases.
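
The bounds in the Strong Score column behave like 95% confidence intervals on the stronger player's score (wins plus half of draws, over games played). A sketch using the Wilson score interval, which reproduces the first UCT row above; this is an illustration of the calculation, not necessarily Ai Ai's exact method:

```python
import math

def wilson_interval(score, n, z=1.96):
    """Wilson score interval (95% for z=1.96) for a rate estimated from n games."""
    centre = (score + z * z / (2 * n)) / (1 + z * z / n)
    half = (z / (1 + z * z / n)) * math.sqrt(score * (1 - score) / n + z * z / (4 * n * n))
    return centre - half, centre + half

wins, draws, losses = 630, 2, 303      # match 1: UCT (its=2)
games = wins + draws + losses          # 935
score = (wins + 0.5 * draws) / games   # 0.6749
lo, hi = wilson_interval(score, games)
print(f"{lo:.4f} <= {score:.4f} <= {hi:.4f}")  # ~0.6442 <= 0.6749 <= 0.7041
```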

1st Player Win Ratios by Playing Strength

This chart shows the win (green) / draw (black) / loss (red) percentages as UCT play strength increases. Note that for most games, the top playing strength shown here will be distinctly below human standard.

Complexity

Game length               274.16
Branching factor          66.83
Complexity                10^412.30  Based on game length and branching factor
Computational Complexity  10^8.27    Sample quality (100 best): 1.43
Samples                   1000       Quantity of logged games played

Computational complexity (where present) is an estimate of the game tree reachable through actual play. For each game in turn, Ai Ai marks the positions reached in a hashtable, then counts the number of new moves added to the table. Once all moves are applied, it treats this sequence as a geometric progression and calculates the sum as n → infinity.
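
A sketch of that final step under the stated assumptions: given the number of new positions contributed by each successive logged game, fit a geometric ratio and take the series sum a / (1 - r); the fitting method and the counts below are illustrative only:

```python
import math

def estimate_reachable_positions(new_positions_per_game):
    """Treat per-game counts of new positions as a geometric progression
    a, a*r, a*r^2, ... and return the limit of its sum, a / (1 - r)."""
    a = new_positions_per_game[0]
    ratios = [nxt / prev for prev, nxt in
              zip(new_positions_per_game, new_positions_per_game[1:]) if prev > 0]
    r = sum(ratios) / len(ratios)        # illustrative fit of the common ratio
    if r >= 1.0:
        return float("inf")              # counts not shrinking: no finite estimate
    return a / (1.0 - r)

counts = [290, 232, 186, 149, 119]       # hypothetical new-position counts per game
total = estimate_reachable_positions(counts)
print(f"Estimated reachable positions: 10^{math.log10(total):.2f}")
```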

Move Classification

Distinct actions   273   Number of distinct moves (e.g. "e4") regardless of position in game tree
Good moves         204   A good move is selected by the AI more than the average
Bad moves          69    A bad move is selected by the AI less than the average
Response distance  9.09  Mean distance between move and response; a low value relative to the board size may mean a game is tactical rather than strategic.
Samples            1000  Quantity of logged games played

Board Coverage

A mean of 94.79% of board locations were used per game.

Colour and size show the frequency of visits.

Game Length

Game length frequencies.

Mean    274.16
Mode    [279]
Median  281.0

Change in Material Per Turn

This chart is based on a single representative* playout, and gives a feel for the change in material over the course of a game. (* Representative in the sense that it is close to the mean length.)

Actions/turn

Table: branching factor per turn, based on a single representative* game. (* Representative in the sense that it is close to the mean game length.)

Action Types per Turn

This chart is based on a single representative* game, and gives a feel for the types of moves available throughout that game. (* Representative in the sense that it is close to the mean game length.)

Red: removal, Black: move, Blue: add, Grey: pass, Purple: swap sides, Brown: other.

Trajectory

This chart shows the best move value with respect to the active player; the orange line represents the value of doing nothing (null move).

The lead changed on 16% of the game turns. Ai Ai found 6 critical turns (turns with only one good option).

Position Heatmap

This chart shows the relative temperature of all moves each turn. Colour range: black (worst), red, orange (even), yellow, white (best).

Good/Effective moves

Measure                      All players  Player 1   Player 2
Mean % of effective moves    71.86        73.06      70.58
Mean no. of effective moves  31.05        32.49      29.53
Effective game space         10^335.21    10^174.66  10^160.56
Mean % of good moves         62.70        75.40      49.23
Mean no. of good moves       27.16        30.67      23.44
Good move game space         10^307.18    10^173.04  10^134.14

These figures were calculated over a single game.

An effective move is one with a score within 0.1 of the best move (including the best move itself), where -1 (loss) <= score <= 1 (win).

A good move has a score > 0. Note that when there are no good moves, a multiplier of 1 is used for the game space calculation.
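
A sketch of those definitions applied to a single turn's move scores (the scores below are made up for illustration):

```python
def classify_moves(scores, tolerance=0.1):
    """Split one turn's move scores (-1 = loss ... 1 = win) into 'effective'
    (within `tolerance` of the best move) and 'good' (score > 0) moves."""
    best = max(scores.values())
    effective = [m for m, s in scores.items() if s >= best - tolerance]
    good = [m for m, s in scores.items() if s > 0]
    return effective, good

turn_scores = {"a1": 0.42, "b3": 0.40, "c2": 0.05, "d4": -0.30}  # illustrative
effective, good = classify_moves(turn_scores)
print("Effective:", effective)  # ['a1', 'b3']
print("Good:", good)            # ['a1', 'b3', 'c2']

# The "game space" rows above multiply these per-turn counts across the whole
# game, using a multiplier of 1 on turns where there are no good moves.
```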

Quality Measures

Measure           Value   Description
Hot turns         96.72%  A hot turn is one where making a move is better than doing nothing.
Momentum          29.93%  % of turns where a player improved their score.
Correction        49.64%  % of turns where the score headed back towards equality.
Depth             10.86%  Difference in evaluation between a short and long search.
Drama             0.21%   How much the winner was behind before their final victory.
Foulup Factor     81.75%  Moves that looked better than the best move after a short search.
Surprising turns  2.19%   Turns that looked bad after a short search, but good after a long one.
Last lead change  87.23%  Distance through game when the lead changed for the last time.
Decisiveness      3.28%   Distance from the result being known to the end of the game.

These figures were calculated over a single representative* game and are based on the measures of quality described in "Automatic Generation and Evaluation of Recombination Games" (Cameron Browne, 2007). (* Representative, in the sense that it is close to the mean game length.)
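
As an illustration, two of these measures (hot turns and last lead change) can be computed from a per-turn evaluation trace like the one behind the Trajectory chart; the trace and interface below are hypothetical:

```python
def hot_and_lead_measures(best_values, null_values, leader_by_turn):
    """best_values[i]: value of the best move on turn i for the player to move;
    null_values[i]: value of doing nothing (null move) on that turn;
    leader_by_turn[i]: which player is ahead after turn i (0 = level)."""
    turns = len(best_values)
    hot = sum(1 for b, n in zip(best_values, null_values) if b > n)

    last_change = 0
    for i in range(1, turns):
        if leader_by_turn[i] != leader_by_turn[i - 1]:
            last_change = i
    return {
        "hot_turns_pct": 100.0 * hot / turns,
        "last_lead_change_pct": 100.0 * last_change / turns,
    }

best = [0.2, 0.3, 0.1, 0.4, 0.5, 0.6]     # tiny illustrative 6-turn trace
null = [0.1, 0.1, 0.2, 0.1, 0.2, 0.3]
leader = [1, 1, 2, 2, 1, 1]
print(hot_and_lead_measures(best, null, leader))
```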

Opening Heatmap

Colour shows the success ratio of this play over the first 10 moves; black < red < yellow < white.

Size shows the frequency this move is played.

Unique Positions Reachable at Depth

Depth      0  1    2
Positions  1  271  73441

Note: most games do not take board rotation and reflection into consideration.
Multi-part turns could be treated as the same or different depth depending on the implementation.
Counts to depth N include all moves reachable at lower depths.
Inaccuracies may also exist due to hash collisions, but Ai Ai uses 64-bit hashes so these will be a very small fraction of a percentage point.

Shortest Game(s)

No solutions found to depth 2.

Puzzles

Puzzle    Solution

White to win in 12 moves

White to win in 8 moves

Weak puzzle selection criteria are in place; the first move may not be unique.