By now we’re familiar with what the Ranking list looks like. A player’s position in the order is ultimately based on two numbers – the number of times he’s appeared in the last 300 matches on record, and the number of times he’s been gilded in that same time frame. This article will shed more light on what exactly the records are and what they look like.

[Image: a records entry for Gamers Assembly – nerdRage vs Windtunnel Tactics]
An example of a records entry from Gamers Assembly. It’s one of almost 500 made so far.

This project didn’t start out as a ranking system. Originally I only wanted to log some basic match data in an Excel spreadsheet so that in the future I could look back and see how far players had come, how much team rosters had changed, and so forth. To do this I made space to log every player and their classes in the match, as well as their team name. To go beyond just the results, I also decided to include a few simple stats that are available from both logs.tf and ESEA logs – DPM (or Heals per Minute for Medics) and KA:D. I kept track of the maps played and what the scores were on each one. I left space for short notes where I could point out any mercs and leave my own observations on the match. I added automated visual aids to all this to help it look nicer. Altogether, these features create what is essentially a very small, simplistic infographic for each match.

In these, I differentiate between the two scout roles – pocket and flank. The detectable differences between the two are often more nuanced than with the similar soldier differentiation (especially for a layman like me), not least when you’re going on stats alone, which is often a necessity in uncasted matches featuring unfamiliar teams. When in doubt, I designate the scout who received less healing as the flank scout.
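
To make the tiebreak concrete, here’s a minimal sketch in Python of that healing-based rule – the names and heal totals are made up, and the spreadsheet obviously doesn’t work in Python:

```python
# A minimal sketch of the healing tiebreak: the scout who received
# less healing is designated the flank scout. Names and heal totals
# are illustrative only.

def assign_scout_roles(scout_a, scout_b):
    """Each argument is a (name, heals_received) tuple.
    Returns (pocket, flank)."""
    if scout_a[1] >= scout_b[1]:
        return scout_a[0], scout_b[0]
    return scout_b[0], scout_a[0]

pocket, flank = assign_scout_roles(("ScoutOne", 9500), ("ScoutTwo", 4200))
print(f"pocket: {pocket}, flank: {flank}")  # pocket: ScoutOne, flank: ScoutTwo
```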

Once I’d done this for a few matches, I started paying more attention to the DPM/HPM/KA:D numbers, as these sometimes shed light on which players excelled in matches that weren’t casted, independently of the match scores. To help with detecting this, I added some automated formatting that recognised whenever a player exceeded his counterpart on the other team in these stats. The colour scheme I’d been using was blue and red, representing the two teams, so I wanted a nice neutral colour to highlight these people. I settled on a light shade of gold, which is why I call it ‘gilding’. It was a while after I added this feature that I realised gildings could be the base upon which a ranking system could be built.
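
The actual conditional formatting lives in Excel, but the rule it encodes is simple to express. A minimal sketch, assuming each player is compared stat-by-stat against his same-role counterpart (all figures illustrative):

```python
# Sketch of the gilding rule: a player is highlighted in gold for any
# stat in which he exceeds his same-role counterpart on the other team.
# Whether that counts as one gilding or several is my assumption.

def gildings(player, counterpart):
    """Both arguments map stat name -> value, e.g. {"DPM": 312.4}.
    Returns the stats in which `player` strictly beats `counterpart`."""
    return [s for s, v in player.items() if v > counterpart.get(s, 0)]

print(gildings({"DPM": 312.4, "KA:D": 1.8},
               {"DPM": 287.9, "KA:D": 2.1}))  # ['DPM']
```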

Over time I added extra gizmos to each of these entries. First I created the projection machine, which simulates the match based on the players it sees. Its calculations are more straightforward and easier to compute than those used to determine the Rankings, and are based more directly on gildings/entries ratios. This means the projection machine has a different personality to the one that determines the Rankings: it’s more excitable about new teams, whereas the Rankings are more conservative. Last season it got very excited indeed about Nunya for several weeks.
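
The projection machine’s actual formula isn’t spelled out here, so the following is only a plausible reading of “more directly based on gildings/entries ratios” – comparing team-average gild rates. The combination rule and all the numbers are assumptions:

```python
# Speculative sketch of a gildings/entries-based projection: average
# each team's gilds-per-appearance ratio and compare. Not the actual
# formula, just one way such a projection could work.

def gild_rate(gilds, entries):
    return gilds / entries if entries else 0.0  # unseen players score 0

def project(team_a, team_b):
    """Each team is a list of (gildings, entries) tuples per player."""
    avg_a = sum(gild_rate(g, e) for g, e in team_a) / len(team_a)
    avg_b = sum(gild_rate(g, e) for g, e in team_b) / len(team_b)
    return round(avg_a, 3), round(avg_b, 3)

# Illustrative six-player rosters; note how a team of near-unknowns
# with a couple of early gilds can look very strong to this approach,
# which is exactly the "excitable about new teams" behaviour.
a = [(10, 40), (5, 30), (8, 25), (2, 10), (0, 5), (3, 12)]
b = [(2, 3), (1, 2), (1, 2), (0, 2), (1, 3), (1, 2)]
print(project(a, b))  # (0.198, 0.417) -> the newer team looks stronger
```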

I also added a section where, based on a mathematical formula, four players from the match are spotlighted as standing out from the rest. This is based on DPM/HPM and, with the former, it compares each person not only to his counterpart but also to the highest value among everyone in the match. It’s a fundamentally more subjective approach than the extremely plain gilding system, which is why it doesn’t affect the Rankings and is instead simply another visual aid to suggest which players may have excelled the most. It lists them in descending order of perceived excellence.
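
The actual standout formula isn’t given either, but the two comparisons it makes – against the counterpart and against the match-wide maximum – could be combined roughly like this. The equal weighting of the two ratios is an assumption:

```python
# Speculative sketch of the standout score for non-medics: each DPM is
# compared both to the counterpart's and to the match-wide maximum,
# then the top four are listed in descending order. Medics would use
# HPM instead.

def standouts(players, top_n=4):
    """players: list of (name, dpm, counterpart_dpm) tuples.
    Counterpart DPMs are assumed nonzero."""
    best = max(dpm for _, dpm, _ in players)
    scored = [(name, round(dpm / counterpart + dpm / best, 2))
              for name, dpm, counterpart in players]
    return sorted(scored, key=lambda p: p[1], reverse=True)[:top_n]

match = [("A", 320, 280), ("B", 280, 320), ("C", 250, 240),
         ("D", 240, 250), ("E", 310, 200), ("F", 200, 310)]
print(standouts(match))  # [('E', 2.52), ('A', 2.14), ('C', 1.82), ('B', 1.75)]
```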

Similar to this is a gizmo that estimates how exciting the match was. Its original purpose was partly to highlight matches whose VODs might be worth watching if I’d missed them live, and partly because I simply wanted to see whether a match quality indicator is even feasible to build from such limited stats. The figure is based primarily on KA:Ds and appears in the top-left corner of each match entry in varying shades of green.

Originally it was based only on how close the teams’ overall KA:Ds were to each other, and on the outright magnitude of those numbers. This was all well and good until i58 came along and Full Tilt faced Froyotech in the group stage. In that match, Muma had a KA:D of 7.7 and Kos’s was 5.3. Both teams therefore had a very high average KA:D, and it produced a match quality number of something like 12 when the usual range was 0.4-4. I mended this by making it deflate the number if there’s too much variance among all the players’ KA:D numbers, in proportion to the magnitude of that variance. If this variance-based safeguard kicks in, the number in the box turns grey. That match’s score, once amended, came down to 3.43.
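
In sketch form, the base score and the variance safeguard might look something like this – the coefficients are invented for illustration, not the real ones:

```python
# Sketch of the quality figure: a base score from how close the two
# teams' average KA:Ds are and how high they are overall, deflated in
# proportion to the variance of individual KA:Ds when that variance
# exceeds a cap. Every coefficient here is invented for illustration.
from statistics import mean, pvariance

def match_quality(kads_a, kads_b, variance_cap=1.0):
    avg_a, avg_b = mean(kads_a), mean(kads_b)
    closeness = 1 / (1 + abs(avg_a - avg_b))  # closer averages -> higher
    magnitude = (avg_a + avg_b) / 2           # more action -> higher
    score = closeness * magnitude

    spread = pvariance(kads_a + kads_b)
    deflated = spread > variance_cap          # the grey-box case
    if deflated:
        score /= spread / variance_cap
    return round(score, 2), deflated

# A lopsided match in the i58 mould: two huge KA:Ds inflate both team
# averages, so the variance safeguard kicks in and deflates the score.
print(match_quality([7.7, 2.1, 1.5, 1.2, 0.9, 0.8],
                    [5.3, 1.8, 1.6, 1.1, 1.0, 0.7]))  # (0.35, True)
```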

It also now takes into account how close the projection machine expects the match to be, and inflates the number a little if a close match was expected (to simulate a level of anticipation). The figure is modified further based on how many gildings the players involved have on record: it’s boosted quite significantly if a match features lots of heavily-gilded players, and barely changes if there are few gildings between them. Despite its complexity it often misjudges matches – seeing excitement in matches that were actually quite dull, and missing it in matches that were thrilling. That said, the highest-scoring matches I’ve seen really were all of tremendous quality. The current record-holder is a fundraiser showmatch between Crowns and Full Tilt that happened just before i58, which it sees as the best match of the past 11 months. In a close second place is the ESEA Season 22 Grand Final between Ronin and Froyotech.
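
Those two modifiers could sit on top of the base figure roughly like so; the thresholds and multipliers here are pure guesses:

```python
# Speculative sketch of the two later modifiers: a small boost when
# the projection machine expected a close match, and a larger one
# scaled by the players' combined gildings. Thresholds and multipliers
# are pure guesses.

def adjust_quality(base, projected_gap, total_gildings):
    """projected_gap: |difference| between projected team strengths;
    total_gildings: gildings on record across all twelve players."""
    if projected_gap < 0.05:                 # anticipation of a close game
        base *= 1.1
    base *= 1 + min(total_gildings, 100) / 200  # heavily-gilded boost
    return round(base, 2)

print(adjust_quality(2.8, 0.02, 60))  # 4.0 for a stacked, close match
```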

Filling in one of these match entries really doesn’t take long. All I have to do is fill in the form with the essentials such as the date, the league, the teams, the scores, etc. Filling in the performance data is easy, too, as I just copy it from logs.tf. When I paste in a new form, the DPM/HPM/KA:D cells come with an averaging function built in, so all I need to do is enter the numbers from each map and the averaging happens automatically. It takes no more than five minutes per match in total. ESEA matches take a bit longer because their logs don’t feature KA:D or HPM, so I have to calculate those manually. Thankfully they still include the components you need to work those figures out yourself.
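
For reference, both figures are straightforward to derive from the raw components: KA:D is kills plus assists over deaths, and HPM is total healing per minute.

```python
# Deriving KA:D and HPM from the raw components ESEA logs do include:
# KA:D is (kills + assists) / deaths, HPM is total healing per minute.

def kad(kills, assists, deaths):
    # Treating a deathless performance as dividing by 1 is a common
    # convention; whether the spreadsheet does the same is an assumption.
    return (kills + assists) / max(deaths, 1)

def hpm(total_healing, match_seconds):
    return total_healing / (match_seconds / 60)

print(round(kad(24, 13, 11), 2))   # 3.36
print(round(hpm(24600, 1740), 1))  # 848.3
```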

The formatting, the projection machine, the standouts, etc., are all done automatically. The spreadsheet recognises the names of the players I type in and produces their gildings/entries figures next to their names. When a form is filled in, the actual Rankings spreadsheet also responds automatically to the new data, so keeping things up to date isn’t a time-consuming effort. Since I started this in May last year, I’ve accumulated close to 500 completed entries, of which the last 300 influence the Rankings.
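
The rolling cutoff means those lookups only count a player’s most recent appearances. A minimal sketch of that bookkeeping, with an assumed per-entry data layout:

```python
# Sketch of the rolling window: only the last 300 entries feed the
# Rankings, so appearances and gildings are counted over that slice
# alone. The per-entry data layout here is an assumption.

def window_counts(entries, window=300):
    """entries: chronological list of matches, each mapping
    player name -> whether that player was gilded.
    Returns {player: (appearances, gildings)} over the window."""
    counts = {}
    for entry in entries[-window:]:
        for player, was_gilded in entry.items():
            apps, gilds = counts.get(player, (0, 0))
            counts[player] = (apps + 1, gilds + int(was_gilded))
    return counts

log = [{"A": True, "B": False}, {"A": False, "C": True}]
print(window_counts(log))  # {'A': (2, 1), 'B': (1, 0), 'C': (1, 1)}
```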
