# AllegSkill - Commander's ranking

What follows is the simplest incarnation of the TrueSkill update algorithm, as used for commander ratings. Note that in this example each team consists of a single player: the commander. As we said, a simple example.

We've provided as much information as is sensible, and we only assume that the reader is familiar with (or able to look up) the error function ($\text{erf}$).

### The update formulae

After the game is played the commanders' ranks are updated as follows (in all cases, the prime mark refers to the updated value):

| | Mu, μ | Sigma, σ |
|---|---|---|
| Winner | $\mu'_w=\mu_w+\frac{\sigma_w^{2}}{c}\cdot V_\text{win}\left( \frac{\mu_w-\mu_l}{c},\frac{\varepsilon }{c} \right)$ | $\sigma'_w=\sqrt{\sigma_w^{2}\left( 1-\frac{\sigma_w^{2}}{c^{2}}\cdot W_\text{win}\left( \frac{\mu_w-\mu_l}{c},\frac{\varepsilon }{c} \right) \right)+\gamma ^{2}}$ |
| Loser | $\mu'_l=\mu_l-\frac{\sigma_l^{2}}{c}\cdot V_\text{win}\left( \frac{\mu_w-\mu_l}{c},\frac{\varepsilon }{c} \right)$ | $\sigma'_l=\sqrt{\sigma_l^{2}\left( 1-\frac{\sigma_l^{2}}{c^{2}}\cdot W_\text{win}\left( \frac{\mu_w-\mu_l}{c},\frac{\varepsilon }{c} \right) \right)+\gamma ^{2}}$ |
| Draws | Substitute $V_\text{win}(t, e)$ with $V_\text{draw}(t, e)$. | Substitute $W_\text{win}(t, e)$ with $W_\text{draw}(t, e)$. |

Here $\beta$ is a constant, while $\gamma$ and $\varepsilon$ are tunable variables:

• $\beta = \frac{25}{6}$ is the standard deviation of a player's performance around their skill.
• $\gamma = \frac{25}{300}$ is the dynamics variable. It prevents sigma from ever reaching zero, which in turn determines how quickly mu can increase or decrease once sigma has stabilised. If we discover that sigma-stabilised ratings move too slowly to reflect genuine changes in skill, we will increase gamma.
• $\varepsilon \simeq 0.08$ is the draw margin, derived empirically from the percentage of games which result in a draw, currently ~1.01%. If the draw rate changes, we will update epsilon accordingly.

c is a variable that expresses the general uncertainty of the system:

$c=\sqrt{2\beta ^{2}+\sigma_w^{2}+\sigma_l^{2}}$

and $V_\text{win}$ and $W_\text{win}$ are TrueSkill functions based on

1. The normal distribution (with a mean of zero and variance of one), and more precisely its probability density function.
$\text{PDF}(x):=\tfrac{1}{\sqrt{2\pi}} \text{e}^{-\frac{x^2}{2}}$
2. The cumulative distribution function of the normal distribution:
$\text{CDF}(y):=\tfrac{1}{2}\left(1 + \text{erf}\left( \tfrac{1}{\sqrt{2}} y\right)\right)$.
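For reference, both functions can be written directly in Python using the standard library's `math.erf`. This is just a sketch of the two definitions above, not code from the game:

```python
import math

def pdf(x):
    """Probability density function of the standard normal distribution."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def cdf(y):
    """Cumulative distribution function of the standard normal, via erf."""
    return 0.5 * (1 + math.erf(y / math.sqrt(2)))
```

For example, `pdf(0)` is about 0.3989 and `cdf(0)` is exactly 0.5.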

No idea what they are? Don't worry, they are just scary maths that stats dudes use to try and represent a large group of unknowns (every tiny detail of your in-game actions) based on a small number of samples (the game outcomes).

### TrueSkill functions

The TrueSkill update functions, developed by Microsoft, are:

$V_\text{win}(t, e) := \frac{ \text{PDF}(t-e)}{\text{CDF}(t-e) }$

$W_\text{win}(t, e) := V_\text{win}(t,e) \cdot \left( V_\text{win}(t,e)+t-e\right)$

There are also two special versions of $V$ and $W$ when draws take place.

$V_\text{draw}(t, e):=\frac{\text{PDF}(-e-t)-\text{PDF}(e-t)}{\text{CDF}(e-t)-\text{CDF}(-e-t)}$

$W_\text{draw}(t, e):=V_\text{draw}^{2}(t, e)+\frac{(e-t)\cdot \text{PDF}(e-t)+(e+t)\cdot \text{PDF}(e+t)}{\text{CDF}(e-t)-\text{CDF}(-e-t)}$
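In code, the draw variants are a direct transcription of the two formulae. A sketch (the helper names `pdf` and `cdf` are ours, implementing the normal PDF and CDF defined earlier):

```python
import math

def pdf(x):
    """Standard normal probability density function."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def cdf(y):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(y / math.sqrt(2)))

def v_draw(t, e):
    """V for drawn games, transcribed from the formula above."""
    return (pdf(-e - t) - pdf(e - t)) / (cdf(e - t) - cdf(-e - t))

def w_draw(t, e):
    """W for drawn games, transcribed from the formula above."""
    num = (e - t) * pdf(e - t) + (e + t) * pdf(e + t)
    return v_draw(t, e) ** 2 + num / (cdf(e - t) - cdf(-e - t))
```

A quick sanity check: with evenly matched commanders ($t = 0$) the numerator of $V_\text{draw}$ cancels to zero, so a draw between equals leaves $\mu$ untouched, exactly as you would expect.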

So, what are the $t$ and $e$ used in these formulae? Well, if you know your maths, you'll realise they can be anything you choose to 'pass into' the function. In our case we are letting

$t = \frac{\mu_w-\mu_l}{c}$       and       $e = \frac{\varepsilon }{c}$

The $V$ and $W$ functions are the core of the TrueSkill system, and vary depending on whether the game resulted in a win or a draw. In both cases, positive values of $t$ represent an unsurprising outcome: the winner was rated more skilled than the loser. Positive values make the functions return small values, which in turn produce small $\mu$ and $\sigma$ updates. The converse also holds: negative values of $t$ represent a surprising outcome, and result in large updates.
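Putting the pieces together, here is what one full win/loss update could look like in Python, using the constants quoted above ($\beta$, $\gamma$, $\varepsilon$). The function names are ours, not the game's, and draws are left out for brevity:

```python
import math

BETA = 25 / 6      # standard deviation of performance
GAMMA = 25 / 300   # dynamics variable
EPSILON = 0.08     # draw margin

def pdf(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def cdf(y):
    return 0.5 * (1 + math.erf(y / math.sqrt(2)))

def v_win(t, e):
    return pdf(t - e) / cdf(t - e)

def w_win(t, e):
    v = v_win(t, e)
    return v * (v + t - e)

def update(winner, loser):
    """One win/loss update. winner and loser are (mu, sigma) pairs;
    returns the updated pairs in the same order."""
    (mu_w, sig_w), (mu_l, sig_l) = winner, loser
    c = math.sqrt(2 * BETA**2 + sig_w**2 + sig_l**2)
    t, e = (mu_w - mu_l) / c, EPSILON / c
    v, w = v_win(t, e), w_win(t, e)
    new_w = (mu_w + sig_w**2 / c * v,
             math.sqrt(sig_w**2 * (1 - sig_w**2 / c**2 * w) + GAMMA**2))
    new_l = (mu_l - sig_l**2 / c * v,
             math.sqrt(sig_l**2 * (1 - sig_l**2 / c**2 * w) + GAMMA**2))
    return new_w, new_l
```

Note how each player's own $\sigma^2/c$ scales their update: the more uncertain the rating, the bigger the correction.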

### AllegSkill example

This scenario pits a newbie commander ($\,\!\mu_A = 25; \sigma_A = 8.333...$, i.e. normal rank, high uncertainty) against a slightly more experienced commander ($\,\!\mu_B = 32; \sigma_B = 5$, i.e. high rank, medium uncertainty).

#### Number crunching

Let's now fire up our favourite computer algebra system and do the calculations. Let's assume the experienced commander, B, won.

$c = \tfrac{5}{6}\sqrt{186} \simeq 11.4$

$\mu '_{B}=32+\tfrac{0.09195636321\sqrt{186}\sqrt{2}}{\sqrt{\pi }} = \cdots \simeq 33.0$

$\sigma '_{B}=\sqrt{\tfrac{3601}{144}-\tfrac{2.758690898\sqrt{2}\left( \tfrac{0.5701294519\sqrt{2}}{\sqrt{\pi }}+0.04463650105\sqrt{186} \right)}{\sqrt{\pi }}} = \cdots \simeq 4.8$

$\mu '_{A}=25-\tfrac{0.2554343423\sqrt{186}\sqrt{2}}{\sqrt{\pi }} = \cdots \simeq 22.2$

$\sigma '_{A}=\sqrt{\tfrac{10001}{144}-\tfrac{21.28619519\sqrt{2}\left( \tfrac{0.5701294519\sqrt{2}}{\sqrt{\pi }}+0.04463650105\sqrt{186} \right)}{\sqrt{\pi }}} = \cdots \simeq 7.2$

Now let's run the same scenario in reverse, with commander A winning.

$c = \tfrac{5}{6}\sqrt{186} \simeq 11.4$

$\mu '_{A}=25+\tfrac{0.6919672626\sqrt{186}\sqrt{2}}{\sqrt{\pi }} = \cdots \simeq 32.5$

$\sigma '_{A}=\sqrt{\tfrac{10001}{144}-\tfrac{57.66393855\sqrt{2}\left( \tfrac{1.544470930\sqrt{2}}{\sqrt{\pi }}+0.04568607959\sqrt{186} \right)}{\sqrt{\pi }}} = \cdots \simeq 6.4$

$\mu '_{B}=32-\tfrac{0.2491082145\sqrt{186}\sqrt{2}}{\sqrt{\pi }} = \cdots \simeq 29.3$

$\sigma '_{B}=\sqrt{\tfrac{3601}{144}-\tfrac{7.473246435\sqrt{2}\left( \tfrac{1.544470930\sqrt{2}}{\sqrt{\pi }}+0.04568607959\sqrt{186} \right)}{\sqrt{\pi }}} = \cdots \simeq 4.6$
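The hand-computed values above can be checked numerically. A self-contained sketch (our own helper names, not game code) that runs both scenarios:

```python
import math

BETA, GAMMA, EPSILON = 25 / 6, 25 / 300, 0.08

def pdf(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def cdf(y):
    return 0.5 * (1 + math.erf(y / math.sqrt(2)))

def update(mu_w, sig_w, mu_l, sig_l):
    """One win/loss update; returns new (mu, sigma) for winner, then loser."""
    c = math.sqrt(2 * BETA**2 + sig_w**2 + sig_l**2)
    t, e = (mu_w - mu_l) / c, EPSILON / c
    v = pdf(t - e) / cdf(t - e)
    w = v * (v + t - e)
    return ((mu_w + sig_w**2 / c * v,
             math.sqrt(sig_w**2 * (1 - sig_w**2 / c**2 * w) + GAMMA**2)),
            (mu_l - sig_l**2 / c * v,
             math.sqrt(sig_l**2 * (1 - sig_l**2 / c**2 * w) + GAMMA**2)))

# Scenario 1: B (32, 5) beats A (25, 8.33...)
b_new, a_new = update(32, 5, 25, 25 / 3)
print(b_new, a_new)    # roughly (33.0, 4.8) and (22.2, 7.2)

# Scenario 2: A (25, 8.33...) beats B (32, 5)
a_new2, b_new2 = update(25, 25 / 3, 32, 5)
print(a_new2, b_new2)  # roughly (32.5, 6.4) and (29.3, 4.6)
```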

#### Results

Let's compare the possible scenarios. Data is shown in the (mu, sigma) form.

| | Before the game | Commander A wins | Commander B wins |
|---|---|---|---|
| Commander A (Newbie) | (25.0, 8.33) Rank: (0) | (32.5, 6.4) Rank: (13) | (22.2, 7.2) Rank: (1) |
| Commander B (Vet) | (32.0, 5.0) Rank: (17) | (29.3, 4.6) Rank: (16) | (33.0, 4.8) Rank: (19) |

As you can see, when the uncertainty is high, ranks can change quickly. But if you look at the players' ratings (the first number in brackets) you will see the variability is much less pronounced.

When the outcome closely matches expectations (Vet beats newb) we can observe how little changes occur:

• Commander A loses little rating, μ, but the system's confidence in his rating has increased (σ has decreased): after playing a game, AllegSkill knows more about this player.
• Given the way the conservative rank is calculated, this ultimately results in a higher rank. This effectively replaces ELO's and HELO's newbie modifiers (the modifiers that allowed newbies to gain ranks faster than they lost them).
• Commander B receives a boost from his victory, but since his σ is lower, the change to his rating, μ, is not as big.
• This is controlled by the $\sigma^2/c$ factor in the updating formulas.

When AllegSkill receives a surprising outcome there are much bigger variations:

• Commander A, the (0) comm, gains a whopping 13 ranks (6 of which come from the drop in sigma).
• This is because AllegSkill received very significant information about commander A.
• Commander B loses a bit of rating, μ, but the loss is somewhat limited by his lower σ.
• The σ reduction (gain in certainty about accuracy of commander B's rank) is also smaller in this scenario: a loss is less significant than a win.
