Season 18 of the Ozfortress Premiership came to a close a little while ago. In all, the playoffs featured four matches that led to a comfortable victory for No Safeword.

Before the Playoffs started, I used my projection machine to simulate the playoffs. Based on the data it had available at the time, it estimated which teams would win each match, and by extension it guessed which teams would progress to subsequent matches and who they would play against in them.

The machine not only estimates the winners of prospective matches but also guesses at the margin of victory, suggesting the sort of scores it expects to see on average across all maps.
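I won't dig into the machine's internals here, but to give a flavour of what this kind of projection involves, here's a minimal Python sketch of a Page-style bracket walk. To be clear about the assumptions: the ratings below are not the machine's real inputs (they're illustrative numbers back-fitted so that their ratios reproduce the projected scores quoted later in this post), and I've assumed an average of five rounds contested per map.

```python
# A minimal sketch of projecting a Page-style playoff bracket.
# The ratings are NOT the projection machine's real inputs; they are
# illustrative numbers back-fitted so that their ratios reproduce the
# projected scores quoted later in this post.

ROUNDS_PER_MAP = 5.0  # assumed average number of rounds contested per map

ratings = {
    "No Safeword": 1.000,
    "Mad Men": 0.345,
    "Xenophobiaphobia": 0.250,
    "Damage Inc.": 0.230,
}

def project(a: str, b: str) -> tuple[float, float]:
    """Expected average map score for a vs b, from their rating ratio."""
    p = ratings[a] / (ratings[a] + ratings[b])  # a's per-round win chance
    return round(ROUNDS_PER_MAP * p, 1), round(ROUNDS_PER_MAP * (1 - p), 1)

def play(a: str, b: str) -> tuple[str, str]:
    """Print a match projection and return (winner, loser)."""
    score_a, score_b = project(a, b)
    print(f"{a} {score_a} - {score_b} {b}")
    return (a, b) if score_a > score_b else (b, a)

# Page playoff system: upper page (1st vs 2nd), lower page (3rd vs 4th),
# semi-final (upper page loser vs lower page winner), then the grand final.
upper_winner, upper_loser = play("No Safeword", "Xenophobiaphobia")
lower_winner, _ = play("Mad Men", "Damage Inc.")
semi_winner, _ = play(upper_loser, lower_winner)
play(upper_winner, semi_winner)
```

Run as-is, this prints the four projected scorelines discussed below; the point is only to show the shape of the simulation, not its actual workings.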

Now that the Playoffs are done, we can look back and see where the projection machine was right, where it was wrong, and decide whether or not it did a satisfactory job.


Upper Page Round 1

This turned out to be one of the most accurate predictions by the machine. Only the very brave would have predicted a Xenophobiaphobia victory, even though they finished second in the regular season. No Safeword had simply been on another level all season long.

The stats reflected this all season, and the projection machine agreed that Xenophobiaphobia stood little chance of victory. It expected No Safeword to win four rounds for every one that Xenophobiaphobia won, and so suggested an average score across all maps of 4-1. The actual average scoreline came very close to this, though the small gap that remained means the machine slightly overestimated Xenophobiaphobia, since No Safeword's projected score was hit exactly.
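To make the arithmetic explicit: a 4:1 round ratio implies a per-round win probability of 0.8, and with my assumed average of five rounds contested per map, 0.8 of five rounds gives the projected 4-1 (a simplification of whatever the machine does internally).

```python
p = 4 / (4 + 1)   # implied per-round win probability: 0.8
rounds = 5        # assumed average number of rounds per map
print(round(rounds * p, 1), round(rounds * (1 - p), 1))  # 4.0 1.0
```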

Projected: No Safeword 4.0 – 1.0 Xenophobiaphobia

Actual: No Safeword 4.0 – 0.5 Xenophobiaphobia

Lower Page Round 1

The machine anticipated a closer match here, and while this was indeed the less lopsided of the two opening results, it wasn't by a lot; certainly the actual match wasn't as close as the machine thought it would be. Nevertheless, it did predict that Mad Men would beat Damage Inc., and sure enough that's what happened. The machine plainly overestimated Damage Inc.'s ability to resist, though it did get Mad Men's average score exactly right.

Projected: Mad Men 3.0 – 2.0 Damage Inc.

Actual: Mad Men 3.0 – 0.5 Damage Inc.

Semi-Final

So the machine correctly predicted the outcome of the two opening matches, but I don't think anyone would have made differing predictions there while maintaining a straight face. It's here in the semi-final that things got more difficult to predict, I think. There was a strong case for expecting a Xenophobiaphobia win, but Mad Men were clearly a strong team too, and a victory for them was a real possibility.

Mad Men's victory was something the projection machine saw coming, but what it didn't expect was the margin by which Xenophobiaphobia were beaten. It had expected this to be the closest match of the Playoffs, but in truth it was perhaps the most one-sided. Nevertheless, it correctly predicted the winner in a match that looked like it could have gone either way beforehand.

Projected: Xenophobiaphobia 2.1 – 2.9 Mad Men

Actual: Xenophobiaphobia 0.5 – 4.5 Mad Men

Grand Final

This is one of the highlights for the machine from the Ozfortress Playoffs. Its projected score balance lines up rather nicely with the true result. Again it slightly overestimated the loser, but got the average score of the victor exactly right for the third time in four matches. It expected Mad Men to win perhaps twice as many rounds as they did.

Something that factors in here is that Mad Men were playing without their usual scout combo. The original projection was made before the playoffs started and assumed that Mad Men would play with Ohai and Ben (although by this point in the season their usual scout combo had become Ohai and V4na). Had the projection featured Teejay and Vanquish instead, the pendulum would probably have swung a little further towards No Safeword, increasing their expected score and lowering Mad Men's.

Projected: Mad Men 1.3 – 3.7 No Safeword

Actual: Mad Men 0.7 – 3.7 No Safeword


Overall I’m pleasantly surprised by how well the projection machine’s Playoffs simulation matched with reality. It correctly predicted all match outcomes and, especially in the two matches featuring No Safeword, the score balances it expected weren’t far from the true ones.

At the same time, I can’t deny that good luck factors into the accuracy seen above. It wouldn’t have taken much to cause the real scores to deviate from the projections much further than they actually did.

The real scoreline in the semi-final could be interpreted as an under-performance by Xenophobiaphobia, which would go some way towards excusing how far it deviated from the closeness the machine expected. That said, the machine overestimated Damage Inc. as well, expecting them to win many more rounds than they actually did.

I suppose the most important thing is that it didn't predict any of the winners incorrectly, which is fundamental when simulating an entire Playoffs structure: get one winner wrong and every subsequent matchup in the simulated bracket changes with it.
