Meridians is an abstract strategy game for 2 players. It has a territorial nature, but the goal is to annihilate the opponent's stones.
Generated at 15/03/2022, 01:13 from 1000 logged games.
Representative game (in the sense of being of mean length). Wherever you see the 'representative game' referred to in later sections, this is it!
On each player's first turn, players place a stone of their own color on any empty point.
On each player's second turn, they place another stone of their color so that their two stones have a path between them.
On each subsequent turn, players take the actions below, in this order:
After the second turn, a player with no stones of their color on the board loses.
General comments:
Play: Combinatorial
Family: Combinatorial 2021
BGG Entry | Meridians |
---|---|
BGG Rating | 8.96429 |
#Voters | 7 |
SD | 0.760974 |
BGG Weight | 0 |
#Voters (weight) | 0 |
Year | 2021 |
User | Rating | Comment |
---|---|---|
alekerickson | 10 | best game of 2021 |
PSchulman | 9 | Played once in a learning session. It reminds me of Amazons in the sense that it's a territorial game at heart even though its goal is something else. It has a strong Go feel as well. |
Dr Jochum | 9 | |
AbstractGames | 10 | |
David Ells | 8 | |
luish | 8 | |
cdunc123 | 8.75 | DIY version. One of the freshest new abstract games I have seen in a long while. Such simple rules, but such interesting play. My one hesitation upon reading the rules was that the game has the potential to "go cold" -- both players could end up with large living groups that nearly fill the board, and then the game becomes a question of who has the most room to extend his own-groups without effecting a suicidal merger with other own-groups. (It's kind of like how some games of Gonnect reach a point where players have to fill in their own eyes, and whoever has the most eyes wins.) However, on reflection, I don't think this is much of a worry. My sense is that once the game board's "territory" is mapped out -- once no more captures of enemy stones are possible, and the boundaries are clearly defined (no more "dame" as in Go remain to be filled in) -- then the game is essentially over, and whoever has the most territory wins. Advanced players in close games will probably just count interior space at this point -- namely, how much room for non-suicidal growth there is inside your territory -- rather than play out the cold moves. In fact, in the games I have played so far, one player has ended up resigning after the opponent has made a large capture of his stones, so that it was clear his chances of winning were hopeless. So I've yet actually to reach a stage of "let's count up spaces to see who wins." |
Size (bytes) | 27300 |
---|---|
Reference Size | 10673 |
Ratio | 2.56 |
Ai Ai calculates the size of the implementation, and compares it to the Ai Ai implementation of the simplest possible game (which just fills the board). Note that this estimate may include some graphics and heuristics code as well as the game logic. See the Wikipedia entry for more details.
Playouts per second | 2144.28 (466.36µs/playout) |
---|---|
Reference Rate | 483488.86 (2.07µs/playout) |
Ratio (low is good) | 225.48 |
Tavener complexity: the heat generated by playing every possible instance of a game with a perfectly efficient programme. Since this is not possible to calculate, Ai Ai calculates the number of random playouts per second and compares it to the fastest non-trivial Ai Ai game (Connect 4). This ratio gives a practical indication of how complex the game is. Combine this with the computational state space, and you can get an idea of how strong the default (MCTS-based) AI will be.
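As a rough, illustrative check (not Ai Ai code), the ratio in the table above can be reproduced directly from the two reported playout rates:

```python
# Illustrative only: reproduces the "Ratio (low is good)" figure above
# from the playout rates reported for this game and for the reference
# game (Connect 4, the fastest non-trivial Ai Ai game).
meridians_playouts_per_sec = 2144.28
reference_playouts_per_sec = 483488.86

ratio = reference_playouts_per_sec / meridians_playouts_per_sec
print(f"Complexity ratio (low is good): {ratio:.2f}")   # ~225.48, as in the table
```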
Chart: % new positions per bucket.
State Space Complexity | 1930144532 | |
---|---|---|
State Space Complexity bounds | 92113392 < 1930144532 < ∞ | |
State Space Complexity (log 10) | 9.29 |   |
State Space Complexity bounds (log 10) | 7.96 <= 9.29 <= ∞ | |
Samples | 540426 | |
Confidence | 0.00 | 0: totally unreliable, 100: perfect |
State space complexity (where present) is an estimate of the number of distinct game states reachable through actual play. Over a series of random games, Ai Ai checks each position to see if it is new or a repeat of a previous position, and keeps a total for each game. As the number of games increases, the quantity of new positions seen per game decreases. These games are then partitioned into a number of buckets, and if certain conditions are met, Ai Ai treats the number in each bucket as the start of a strictly decreasing geometric sequence and sums it to estimate the total state space. The accuracy is calculated as 1 - [end bucket count]/[starting bucket count].
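The bucket-summing idea can be sketched as below. This is one reading of the description above rather than Ai Ai's actual implementation, and the bucket counts passed in are made up purely for illustration:

```python
def estimate_state_space(bucket_counts):
    """Estimate total state space from per-bucket counts of new positions.

    Sketch of the method described above (not Ai Ai's code): the counts are
    assumed to continue beyond the last bucket as a strictly decreasing
    geometric sequence, whose tail is summed in closed form.
    """
    observed = sum(bucket_counts)
    r = bucket_counts[-1] / bucket_counts[-2]      # common ratio from the last two buckets
    if r >= 1.0:
        return float("inf"), 0.0                   # not strictly decreasing: no finite estimate
    tail = bucket_counts[-1] * r / (1.0 - r)       # sum of the unseen geometric tail
    accuracy = 1.0 - bucket_counts[-1] / bucket_counts[0]
    return observed + tail, accuracy

# Hypothetical bucket counts, for illustration only.
estimate, accuracy = estimate_state_space([900_000, 450_000, 225_000, 112_500])
print(f"estimated states ~ {estimate:,.0f}, accuracy ~ {accuracy:.2f}")
```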
Label | Its/s | SD | Nodes/s | SD | Game length | SD |
---|---|---|---|---|---|---|
Random playout | 2,704 | 41 | 296,142 | 4,212 | 110 | 23 |
search.UCT | 2,753 | 151 | | | 126 | 12 |
search.AlphaBeta | 138,230 | 46,989 | | | 116 | 13 |
Random: 10 second warmup for the hotspot compiler. 100 trials of 1000ms each.
Other: 100 playouts, means calculated over the first 5 moves only to avoid distortion due to speedup at end of game.
Rotation (Half turn) lost each game as expected.
Reflection (X axis) lost each game as expected.
Reflection (Y axis) lost each game as expected.
Copy last move lost each game as expected.
Mirroring strategies attempt to copy the previous move. On first move, they will attempt to play in the centre. If neither of these are possible, they will pick a random move. Each entry represents a different form of copying; direct copy, reflection in either the X or Y axis, half-turn rotation.
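A minimal sketch of such a mirroring strategy, assuming a square board with 0-based (col, row) coordinates (Meridians itself is played on a hexagonal board, so this is a simplification, not Ai Ai's implementation):

```python
import random

def mirror_move(last_move, legal_moves, board_size, mode="copy"):
    """Pick a move by copying/reflecting/rotating the opponent's last move;
    fall back to the centre on the first move, then to a random legal move."""
    centre = (board_size // 2, board_size // 2)
    if last_move is None:                              # first move: try the centre
        return centre if centre in legal_moves else random.choice(legal_moves)
    col, row = last_move
    if mode == "copy":                                 # direct copy
        target = (col, row)
    elif mode == "reflect_x":                          # reflection in the X axis
        target = (col, board_size - 1 - row)
    elif mode == "reflect_y":                          # reflection in the Y axis
        target = (board_size - 1 - col, row)
    else:                                              # half-turn rotation
        target = (board_size - 1 - col, board_size - 1 - row)
    return target if target in legal_moves else random.choice(legal_moves)
```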
This chart shows the heuristic values throughout a single representative* game. The orange line shows the difference between player scores. (* Representative, in the sense that it is close to the mean game length.)
1: White win % | 54.80±3.10 | Includes draws = 50% |
---|---|---|
2: Black win % | 45.20±3.06 | Includes draws = 50% |
Draw % | 0.00 | Percentage of games where all players draw. |
Decisive % | 100.00 | Percentage of games with a single winner. |
Samples | 1000 | Quantity of logged games played |
Note that win/loss statistics may vary depending on thinking time (horizon effect, etc.), bad heuristics, bugs, and other factors, so should be taken with a pinch of salt. (Given perfect play, any game of pure skill will always end in the same result.)
Note: Ai Ai differentiates between states where all players draw or win or lose; this is mostly to support cooperative games.
Match | AI | Strong Wins | Draws | Strong Losses | #Games | Strong Score | p1 Win% | Draw% | p2 Win% | Game Length |
---|---|---|---|---|---|---|---|---|---|---|
0 | Random | |||||||||
1 | UCT (its=2) | 631 | 0 | 365 | 996 | 0.6031 <= 0.6335 <= 0.6629 | 50.50 | 0.00 | 49.50 | 110.46 |
4 | UCT (its=5) | 631 | 0 | 296 | 927 | 0.6500 <= 0.6807 <= 0.7099 | 51.46 | 0.00 | 48.54 | 114.72 |
18 | UCT (its=19) | 631 | 0 | 324 | 955 | 0.6301 <= 0.6607 <= 0.6901 | 48.06 | 0.00 | 51.94 | 120.31 |
19 | UCT (its=19) | 490 | 0 | 510 | 1000 | 0.4591 <= 0.4900 <= 0.5210 | 49.20 | 0.00 | 50.80 | 122.33 |
Search for levels ended: time limit reached.
Level of Play: Strong beats Weak 60% of the time (lower bound with 95% confidence).
Draw%, p1 win% and game length may give some indication of trends as AI strength increases.
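The score intervals in the table above (e.g. 0.6031 <= 0.6335 <= 0.6629) look like binomial confidence intervals on the strong side's score. A simple normal-approximation version, which gives figures close to (though not exactly matching) those reported, is:

```python
import math

def score_interval(wins, draws, games, z=1.96):
    """Approximate 95% confidence interval for the strong side's score
    (wins plus half of draws, as a fraction of games), using a normal
    approximation. Ai Ai's exact formula may differ slightly."""
    score = (wins + 0.5 * draws) / games
    half_width = z * math.sqrt(score * (1.0 - score) / games)
    return score - half_width, score, score + half_width

# Match 1 above: 631 strong wins, 0 draws, 996 games.
lo, mid, hi = score_interval(631, 0, 996)
print(f"{lo:.4f} <= {mid:.4f} <= {hi:.4f}")   # ~ 0.6036 <= 0.6335 <= 0.6635
```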
This chart shows the win (green) / draw (black) / loss (red) percentages as UCT play strength increases. Note that for most games, the top playing strength shown here will be distinctly below human standard.
Game length | 116.06 | |
---|---|---|
Branching factor | 48.94 |   |
Complexity | 10^188.38 | Based on game length and branching factor |
Samples | 1000 | Quantity of logged games played |
Computational complexity (where present) is an estimate of the size of the game tree reachable through actual play. For each game in turn, Ai Ai marks the positions reached in a hashtable, then counts the number of new moves added to the table. Once all moves are applied, it treats this sequence as a geometric progression and calculates the sum as n → ∞.
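For intuition, the crude estimate b^d from the mean branching factor and game length reported above can be computed as follows; Ai Ai's progression-based method gives the somewhat lower 10^188.38 shown in the table:

```python
import math

# Crude intuition only: mean branching factor raised to the mean game length.
# Ai Ai's own geometric-progression estimate above (10^188.38) is somewhat lower.
branching_factor = 48.94
game_length = 116.06

log10_complexity = game_length * math.log10(branching_factor)
print(f"Naive game-tree complexity ~ 10^{log10_complexity:.1f}")   # ~ 10^196.1
```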
Board Size | 114 | Quantity of distinct board cells |
---|---|---|
Distinct actions | 115 | Quantity of distinct moves (e.g. "e4") regardless of position in game tree |
Good moves | 63 | A good move is selected by the AI more often than average |
Bad moves | 51 | A bad move is selected by the AI less often than average |
Response distance% | 36.64% | Distance from move to response / maximum board distance; a low value suggests a game is tactical rather than strategic. |
Samples | 1000 | Quantity of logged games played |
On average, 91.17% of board locations were used per game.
Colour and size show the frequency of visits.
Game length frequencies.
Mean | 116.06 |
---|---|
Mode | [114] |
Median | 114.0 |
Mean change in material/round | 0.43 | Complete round of play (all players) |
---|
This chart is based on a single representative* playout, and gives a feel for the change in material over the course of a game. (* Representative in the sense that it is close to the mean length.)
Table: branching factor per turn, based on a single representative* game. (* Representative in the sense that it is close to the mean game length.)
This chart is based on a single representative* game, and gives a feel for the types of moves available throughout that game. (* Representative in the sense that it is close to the mean game length.)
Red: removal, Black: move, Blue: Add, Grey: pass, Purple: swap sides, Brown: other.
Moves | Animation |
---|---|
g6,f6,g8,i3,g5,f3,f5 | |
g6,f6,g8,i3,g5,f3 | |
g6,f6,g8,i3,g5 | |
g6,f6,g8,i3 | |
a10,d10,i2 | |
g6,f6,g8 | |
m2,f2,e10 | |
c12,f5,f9 | |
i3,j7,i10 |
Colour shows the success ratio of this play over the first 10 moves; black < red < yellow < white.
Size shows the frequency this move is played.
Depth | 0 | 1 | 2 | 3 | 4 |
---|---|---|---|---|---|
Positions | 1 | 114 | 12996 | 139576 | 1369470 |
Note: most games do not take board rotation and reflection into consideration.
Multi-part turns could be treated as the same or different depth depending on the implementation.
Counts to depth N include all moves reachable at lower depths.
Inaccuracies may also exist due to hash collisions, but Ai Ai uses 64-bit hashes so these will be a very small fraction of a percentage point.
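A sketch of the position-counting idea behind the table above, assuming a hypothetical game object with hash64(), legal_moves(), apply(move) and undo() methods (these names are illustrative, not Ai Ai's API):

```python
def count_positions_to_depth(game, max_depth):
    """Count distinct positions reachable within max_depth plies.

    As in the table above, counts include positions reachable at lower depths.
    Positions are deduplicated by a 64-bit hash, so collisions (and hence
    slight undercounts) are possible but very unlikely.
    """
    seen = {game.hash64()}                 # the start position, depth 0

    def expand(depth):
        if depth == max_depth:
            return
        for move in game.legal_moves():
            game.apply(move)
            seen.add(game.hash64())
            expand(depth + 1)
            game.undo()

    expand(0)
    return len(seen)
```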
1165 solutions found - search incomplete.
Puzzle | Solution |
---|---|
Black to win in 6 moves | |
White to win in 6 moves | |
Black to win in 4 moves | |
Black to win in 4 moves | |
White to win in 7 moves | |
White to win in 2 moves | |
Black to win in 3 moves | |
Black to win in 2 moves | |
White to win in 5 moves | |
White to win in 2 moves | |
Black to win in 2 moves |
Weak puzzle selection criteria are in place; the first move may not be unique.