Full Report for Resolve by Alek Erickson

Resolve is a connection game for two players

Generated at 8/14/23, 10:29 AM from 1000 logged games.

Rules

Representative game (in the sense of being close to the mean game length). Wherever you see the 'representative game' referred to in later sections, this is it!

Same-colored stones with orthogonal adjacency are connected.

The game is over when a player wins by connecting their designated sides of the board with a single group of connected stones of their color, at any time during their own turn or their opponent's turn. Cutting stones are four stones in the following generic crosscut configuration (two same-colored stones on one diagonal of a 2x2 block, with enemy stones on the other diagonal):

OX
XO

On your turn you may select one of two actions:

  1. Place a stone of your color on an empty point. If that stone creates a crosscut, swap it with different adjacent enemy stones that share a crosscut with it, until that stone is no longer part of a crosscut.
  2. Choose a stone of your color that is part of a crosscut, and use it to resolve crosscuts as in (1). Then place a stone of your color on an empty point, if possible.
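Both actions hinge on detecting crosscuts. A minimal sketch of crosscut detection in Python (the board representation, coordinates, and function name are illustrative assumptions, not Ai Ai's implementation):

```python
# Hypothetical sketch of crosscut detection on a square board; not Ai Ai's
# actual code. The board is a dict mapping (col, row) -> 'X', 'O', or absent.

def is_crosscut(board, x, y):
    """Check whether the 2x2 block with lower-left corner (x, y) is a
    crosscut: two same-colored stones on one diagonal and two enemy
    stones on the other."""
    a = board.get((x, y))          # lower-left
    b = board.get((x + 1, y))      # lower-right
    c = board.get((x, y + 1))      # upper-left
    d = board.get((x + 1, y + 1))  # upper-right
    if None in (a, b, c, d):
        return False               # a crosscut needs all four points occupied
    # Diagonals must match, and the two diagonals must be opposite colors.
    return a == d and b == c and a != b

# Example: the generic O-X / X-O configuration from the rules.
board = {(0, 0): 'O', (1, 0): 'X', (0, 1): 'X', (1, 1): 'O'}
```

Calling `is_crosscut(board, 0, 0)` on the example board returns `True`; removing any of the four stones makes it `False`.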

Resolve was designed by Alek Erickson in July 2020, and the rules were eventually finalized through critical discussions and play testing with Dale Walton and Luis Bolaños Mures. The game was partly inspired by Michal Zapawa's swap mechanic from Slyde. The original idea for swapping stones to resolve crosscuts can be traced to Phil Leduc's Thruway and Bill Taylor's Swapway as early as 2008, but the "resolving stone" mechanism in Resolve, where a single stone is serially swapped to fix cuts, is novel.

Miscellaneous

General comments:

Play: Combinatorial

Family: Combinatorial 2020

Mechanism(s): Connection

Level: Standard

BGG Stats

BGG Entry: Resolve
BGG Rating: null
#Voters: null
SD: null
BGG Weight: null
#Voters: null
Year: null

Kolmogorov Complexity Analysis

Size (bytes): 27902
Reference Size: 10673
Ratio: 2.61

Ai Ai calculates the size of the implementation and compares it to the Ai Ai implementation of the simplest possible game (which just fills the board). Note that this estimate may include some graphics and heuristics code as well as the game logic. See the Wikipedia entry for more details.

Playout Complexity Estimate

Playouts per second: 47175.31 (21.20µs/playout)
Reference rate: 205061.03 (4.88µs/playout)
Ratio (low is good): 4.35

Tavener complexity: the heat generated by playing every possible instance of a game with a perfectly efficient programme. Since this is not possible to calculate, Ai Ai calculates the number of random playouts per second and compares it to the fastest non-trivial Ai Ai game (Connect 4). This ratio gives a practical indication of how complex the game is. Combine this with the computational state space, and you can get an idea of how strong the default (MCTS-based) AI will be.

State Space Complexity

Chart: % new positions per bucket.

State Space Complexity: 27215583
State Space Complexity (log 10): 7.43
Samples: 1357406
Confidence: 10.49 (0: totally unreliable, 100: perfect)

State space complexity (where present) is an estimate of the number of distinct game positions reachable through actual play. Over a series of random games, Ai Ai checks each position to see whether it is new or a repeat of a previous position, and keeps a total for each game. As the number of games increases, the quantity of new positions seen per game decreases. These games are then partitioned into a number of buckets, and if certain conditions are met, Ai Ai treats the number in each bucket as the start of a strictly decreasing geometric sequence and sums it to estimate the total state space. The accuracy is calculated as 1 - [end bucket count]/[starting bucket count].
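The estimation procedure above can be sketched in a few lines; this is an illustrative reconstruction under a constant-ratio assumption, not Ai Ai's actual code (whose bucketing conditions are not spelled out here):

```python
# Illustrative sketch: estimate total state space from bucketed counts of
# new positions per game, assuming the counts continue as a strictly
# decreasing geometric sequence beyond the observed buckets.

def estimate_state_space(bucket_counts):
    first, last = bucket_counts[0], bucket_counts[-1]
    # Mean per-bucket ratio implied by the first and last buckets.
    r = (last / first) ** (1 / (len(bucket_counts) - 1))
    seen = sum(bucket_counts)
    # Sum the unseen tail of the geometric series: last*r + last*r^2 + ...
    tail = last * r / (1 - r) if r < 1 else float('inf')
    accuracy = 1 - last / first   # the accuracy measure quoted in the text
    return seen + tail, accuracy

total, accuracy = estimate_state_space([1000, 500, 250, 125])
# total -> 2000.0 (1875 seen + 125 extrapolated), accuracy -> 0.875
```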

Playout/Search Speed

Label          | Its/s   | SD     | Nodes/s   | SD      | Game length | SD
Random playout | 107,239 | 29,667 | 2,586,870 | 715,657 | 24          | 3
search.UCT     | 113,542 | 31,917 |           |         | 20          | 6

Random: 10 second warmup for the hotspot compiler. 100 trials of 1000ms each.

Other: 100 playouts, means calculated over the first 5 moves only to avoid distortion due to speedup at end of game.

Mirroring Strategies

Rotation (Half turn) lost each game as expected.
Reflection (X axis) lost each game as expected.
Reflection (Y axis) lost each game as expected.
Copy last move lost each game as expected.

Mirroring strategies attempt to copy the previous move. On first move, they will attempt to play in the centre. If neither of these are possible, they will pick a random move. Each entry represents a different form of copying; direct copy, reflection in either the X or Y axis, half-turn rotation.
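The copying transforms can be sketched on Resolve's 5x5 board; the 'a1'..'e5' coordinate scheme is assumed from the opening listings later in this report, and the code is an illustration, not Ai Ai's implementation:

```python
# Hypothetical illustration of the mirroring strategies on a 5x5 board
# with cells 'a1'..'e5' (columns a-e, rows 1-5).

SIZE = 5

def mirror(move, mode):
    col = ord(move[0]) - ord('a')       # 0..4
    row = int(move[1]) - 1              # 0..4
    if mode == 'copy':                  # direct copy of the opponent's move
        pass
    elif mode == 'reflect_x':           # reflection in the X axis: flip rows
        row = SIZE - 1 - row
    elif mode == 'reflect_y':           # reflection in the Y axis: flip columns
        col = SIZE - 1 - col
    elif mode == 'rotate_half':         # half-turn rotation: flip both
        col, row = SIZE - 1 - col, SIZE - 1 - row
    return chr(col + ord('a')) + str(row + 1)

# e.g. replying to 'b2' with a half-turn rotation gives 'd4'
```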

Win % By Player (Bias)

1: White win %: 65.30 ± 3.00 (includes draws = 50%)
2: Black win %: 34.70 ± 2.89 (includes draws = 50%)
Draw %: 0.00 (percentage of games where all players draw)
Decisive %: 100.00 (percentage of games with a single winner)
Samples: 1000 (quantity of logged games played)

Note that win/loss statistics may vary depending on thinking time (horizon effect, etc.), bad heuristics, bugs, and other factors, so they should be taken with a pinch of salt. (Given perfect play, any game of pure skill will always end in the same result.)
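The quoted margins look like 95% binomial confidence intervals; under a normal-approximation assumption (my reading, not stated in the report), the White figure can be roughly reproduced:

```python
# Rough check of the quoted ±3.00 margin for White, assuming it is a 95%
# normal-approximation confidence interval for a binomial proportion.
import math

p, n = 0.653, 1000                          # White win rate, logged games
margin = 1.96 * math.sqrt(p * (1 - p) / n) * 100
# margin ≈ 2.95 percentage points, close to the ±3.00 reported
```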

Note: Ai Ai differentiates between states where all players draw or win or lose; this is mostly to support cooperative games.

UCT Skill Chains

Match | AI               | Strong Wins | Draws | Strong Losses | #Games | Strong Score               | p1 Win% | Draw% | p2 Win% | Game Length
0     | Random           |             |       |               |        |                            |         |       |         |
1     | UCT (its=2)      | 631         | 0     | 276           | 907    | 0.6650 <= 0.6957 <= 0.7248 | 51.27   | 0.00  | 48.73   | 23.39
5     | UCT (its=6)      | 631         | 0     | 359           | 990    | 0.6069 <= 0.6374 <= 0.6667 | 50.51   | 0.00  | 49.49   | 23.26
11    | UCT (its=12)     | 631         | 0     | 356           | 987    | 0.6089 <= 0.6393 <= 0.6687 | 50.96   | 0.00  | 49.04   | 22.74
17    | UCT (its=18)     | 631         | 0     | 342           | 973    | 0.6180 <= 0.6485 <= 0.6779 | 50.77   | 0.00  | 49.23   | 22.06
25    | UCT (its=68)     | 631         | 0     | 114           | 745    | 0.8194 <= 0.8470 <= 0.8710 | 51.81   | 0.00  | 48.19   | 19.42
26    | UCT (its=185)    | 631         | 0     | 201           | 832    | 0.7282 <= 0.7584 <= 0.7863 | 51.56   | 0.00  | 48.44   | 17.52
27    | UCT (its=502)    | 631         | 0     | 318           | 949    | 0.6343 <= 0.6649 <= 0.6942 | 54.58   | 0.00  | 45.42   | 16.58
28    | UCT (its=1365)   | 631         | 0     | 361           | 992    | 0.6057 <= 0.6361 <= 0.6655 | 52.32   | 0.00  | 47.68   | 16.60
29    | UCT (its=3710)   | 631         | 0     | 360           | 991    | 0.6063 <= 0.6367 <= 0.6661 | 48.13   | 0.00  | 51.87   | 16.81
30    | UCT (its=10086)  | 631         | 0     | 258           | 889    | 0.6791 <= 0.7098 <= 0.7387 | 44.32   | 0.00  | 55.68   | 17.44
31    | UCT (its=27416)  | 631         | 0     | 255           | 886    | 0.6815 <= 0.7122 <= 0.7410 | 60.50   | 0.00  | 39.50   | 18.83
32    | UCT (its=27416)  | 518         | 0     | 482           | 1000   | 0.4870 <= 0.5180 <= 0.5488 | 74.80   | 0.00  | 25.20   | 21.17

Search for levels ended: time limit reached.

Level of Play: Strong beats Weak 60% of the time (lower bound with 95% confidence).

Draw%, p1 win% and game length may give some indication of trends as AI strength increases.

1st Player Win Ratios by Playing Strength

This chart shows the win (green)/draw (black)/loss (red) percentages as UCT play strength increases. Note that for most games, the top playing strength shown here will be distinctly below human standard.

Complexity

Game length: 37.63
Branching factor: 15.38
Complexity: 10^30.08 (based on game length and branching factor)
Samples: 1000 (quantity of logged games played)

Computational complexity (where present) is an estimate of the size of the game tree reachable through actual play. For each game in turn, Ai Ai marks the positions reached in a hashtable, then counts the number of new moves added to the table. Once all moves are applied, it treats this sequence as a geometric progression and calculates the sum as n -> infinity.

Move Classification

Board Size: 25 (quantity of distinct board cells)
Distinct actions: 105 (quantity of distinct moves, e.g. "e4", regardless of position in game tree)
Killer moves: 35 (a 'killer' move is selected by the AI more than 50% of the time)
Too many killers to list.
Good moves: 74 (a good move is selected by the AI more than the average)
Bad moves: 31 (a bad move is selected by the AI less than the average)
Response distance: 46.41% (distance from move to response / maximum board distance; a low value suggests a game is tactical rather than strategic)
Samples: 1000 (quantity of logged games played)
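The response-distance figure divides the distance from a move to its reply by the maximum board distance. A rough sketch, assuming a 5x5 board with cells 'a1'..'e5' and Euclidean distance (both assumptions; Ai Ai may measure differently):

```python
# Hypothetical sketch of the response-distance measure on a 5x5 board.
import math

SIZE = 5
MAX_DIST = math.hypot(SIZE - 1, SIZE - 1)   # corner-to-corner distance

def cell(move):
    """Convert 'a1'..'e5' to zero-based (col, row) coordinates."""
    return ord(move[0]) - ord('a'), int(move[1]) - 1

def response_distance(move, reply):
    (x1, y1), (x2, y2) = cell(move), cell(reply)
    return math.hypot(x2 - x1, y2 - y1) / MAX_DIST

# e.g. a reply on the opposite corner scores 1.0; replying on the same
# cell would score 0.0. Low averages suggest tactical, local play.
```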

Board Coverage

A mean of 66.96% of board locations were used per game.

Colour and size show the frequency of visits.

Game Length

Game length frequencies.

Mean: 21.36
Mode: [16]
Median: 18.0

Change in Material Per Turn

Mean change in material/round: 0.96 (complete round of play, all players)

This chart is based on a single representative* playout, and gives a feel for the change in material over the course of a game. (* Representative in the sense that it is close to the mean length.)

Actions/turn

Table: branching factor per turn, based on a single representative* game. (* Representative in the sense that it is close to the mean game length.)

Action Types per Turn

This chart is based on a single representative* game, and gives a feel for the types of moves available throughout that game. (* Representative in the sense that it is close to the mean game length.)

Red: removal, Black: move, Blue: Add, Grey: pass, Purple: swap sides, Brown: other.

Trajectory

This chart shows the best move value with respect to the active player; the orange line represents the value of doing nothing (null move).

The lead changed on 6% of the game turns. Ai Ai found 9 critical turns (turns with only one good option).

Position Heatmap

This chart shows the relative temperature of all moves each turn. Colour range: black (worst), red, orange(even), yellow, white(best).

Good/Effective moves

Measure                     | All players | Player 1 | Player 2
Mean % of effective moves   | 51.95       | 47.53    | 58.40
Mean no. of effective moves | 3.62        | 2.84     | 4.77
Effective game space        | 10^11.86    | 10^5.40  | 10^6.46
Mean % of good moves        | 35.01       | 47.58    | 16.65
Mean no. of good moves      | 2.72        | 3.32     | 1.85
Good move game space        | 10^7.15     | 10^4.92  | 10^2.23

These figures were calculated over a single game.

An effective move is one with a score within 0.1 of the best move (including the best move). -1 (loss) <= score <= 1 (win)

A good move has a score > 0. Note that when there are no good moves, a multiplier of 1 is used for the game space calculation.
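The game-space rows appear to multiply the per-turn move counts together over one game. A hedged sketch of that calculation (the function name and log10 accumulation are illustrative; the table quotes powers of 10):

```python
# Illustrative sketch: accumulate log10 of the per-turn effective (or good)
# move counts; a turn with zero qualifying moves contributes a factor of 1,
# as the note above specifies.
import math

def game_space_log10(counts_per_turn):
    return sum(math.log10(max(n, 1)) for n in counts_per_turn)

# e.g. four turns with 10, 10, 1 and 0 qualifying moves give a game space
# of 10^2: 10 * 10 * 1 * 1.
log_space = game_space_log10([10, 10, 1, 0])
```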

Quality Measures

Measure          | Value  | Description
Hot turns        | 81.25% | A hot turn is one where making a move is better than doing nothing.
Momentum         | 6.25%  | % of turns where a player improved their score.
Correction       | 18.75% | % of turns where the score headed back towards equality.
Depth            | 7.90%  | Difference in evaluation between a short and long search.
Drama            | 2.82%  | How much the winner was behind before their final victory.
Foulup Factor    | 21.88% | Moves that looked better than the best move after a short search.
Surprising turns | 3.12%  | Turns that looked bad after a short search, but good after a long one.
Last lead change | 46.88% | Distance through game when the lead changed for the last time.
Decisiveness     | 56.25% | Distance from the result being known to the end of the game.

These figures were calculated over a single representative* game, and based on the measures of quality described in "Automatic Generation and Evaluation of Recombination Games" (Cameron Browne, 2007). (* Representative, in the sense that it is close to the mean game length.)

Openings

Moves | Animation
c2,a2,d2,d4
d2,a2,c2,d4
e4,b2,a3,e3
d4,d3,d5,b5
a1,b3,c1
b1,c2,e3
b1,b3,d4
b1,d5,b2
b1,d5,c5
c1,c2,e5
c1,a3,e3
c1,c5,a2

Opening Heatmap

Colour shows the success ratio of this play over the first 10 moves; black < red < yellow < white.

Size shows the frequency this move is played.

Unique Positions Reachable at Depth

Depth     | 0 | 1  | 2   | 3    | 4     | 5      | 6
Positions | 1 | 25 | 625 | 7525 | 83457 | 615429 | 4170713

Note: most games do not take board rotation and reflection into consideration.
Multi-part turns could be treated as the same or different depth depending on the implementation.
Counts to depth N include all moves reachable at lower depths.
Inaccuracies may also exist due to hash collisions, but Ai Ai uses 64-bit hashes so these will be a very small fraction of a percentage point.
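As a sanity check on the 64-bit-hash remark, the standard birthday approximation for the expected number of colliding pairs can be applied to the ~27.2 million estimated states (the approximation is mine, not a figure from Ai Ai):

```python
# Birthday-bound check: expected colliding pairs among n uniformly random
# 64-bit hashes is approximately n*(n-1)/2 / 2^64, i.e. roughly n^2 / 2^65.
n = 27_215_583                      # state-space estimate from this report
expected_collisions = n * n / 2 ** 65
# ≈ 2e-5 expected collisions: a negligible fraction, as the text claims
```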

Shortest Game(s)

No solutions found to depth 6.

Puzzles

Puzzle | Solution

White to win in 22 moves

White to win in 17 moves

White to win in 20 moves

White to win in 19 moves

White to win in 16 moves

Black to win in 17 moves

White to win in 11 moves

White to win in 16 moves

White to win in 12 moves

White to win in 14 moves

White to win in 18 moves

White to win in 11 moves

Selection criteria: the first move must be unique, and not forced to avoid losing. Beyond that, puzzles are rated by the product of [total moves]/[best moves] at each step, and the best puzzles selected.