AllegSkill - Player's ranking
{{stub}}
This article describes how '''players' ranks''' are updated after a game is completed and transmitted to [[ASGS]]. Note that your rank won't change until you log back in, though.
== Defining the players ==
Imagine you have a bunch of people playing on the server, say '''n''' players, so we label them <big><math>\{1, 2, 3, ..., n\}</math></big>.
Now let's pretend the game had an arbitrary number of teams, say '''<big><math>k</math></big>''' teams. The total population, '''n''', is made up of '''<big><math>k</math></big>''' non-overlapping subsets - in maths terms:
*<math>T_{j}\subset \{1,...,n\}</math> where <math>T_{i}\cap T_{j}=\emptyset</math> if <math>i\ne j</math>.
*<math>\subset</math> means roughly "is a subset of"; <math>\cap</math> means roughly "intersected with".
*''i'' and ''j'' are indices that can refer to any given team - could be the 1st and 2nd, the 3rd and 6th, whatever.
In layman's terms:
*No player can appear on more than one team ''at the same time''.
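The two set conditions above can be checked mechanically. Here is a minimal sketch with made-up player and team numbers (the sets themselves are hypothetical, not from a real game):

```python
# Hypothetical example: n = 6 players split into k = 3 teams.
n = 6
teams = [{1, 2, 3}, {4, 5}, {6}]

# Pairwise intersections are empty (no player on two teams at once)...
assert all(a.isdisjoint(b) for i, a in enumerate(teams)
           for b in teams[i + 1:])

# ...and together the teams cover the whole population {1, ..., n}.
assert set.union(*teams) == set(range(1, n + 1))
```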
There are three variables associated with each player, '''<math>i</math>'''. (As before with teams, '''<math>i</math>''' now refers to any given player.)
*'''<big><math>\mu _i</math></big>''', their average skill.
*'''<big><math>\sigma _i</math></big>''', the uncertainty about their '''<big><math>\mu _i</math></big>'''.
*'''<big><math>f_{i}</math></big>''', the fraction of the total game played for their team.
There are also three more variables - however these are global variables; in other words, they are the same for everyone in the game. See [[AllegSkill_-_Commander%27s_ranking#The_update_formulae|here]] for more details on how they're calculated.
*'''<math>\beta</math>''', the standard variance around performance;
*'''<math>\gamma</math>''', the dynamics variable that keeps sigma from reaching zero;
*'''<math>\epsilon</math>''', the draw factor.
== Forming the teams ==
Each team, '''<big><math>j</math></big>''', has a total '''μ''' and a total '''σ''' which are derived from the ratings of its players. Henceforth we will use capitalization to distinguish between players and teams.
{| class="wikitable" border="1" cellspacing="0" cellpadding="5" align="center"
|
|align="center"| Player
|align="center"| Team
|-
! Mu
|align="center"| '''<big><math>\mu</math></big>'''
|align="center"| '''<big><math>M</math></big>'''
|-
! Sigma
|align="center"| '''<big><math>\sigma</math></big>'''
|align="center"| '''<big><math>\Sigma</math></big>'''
|}
So the Mu and Sigma of team '''<big><math>j</math></big>''' are calculated as below:
:<math>M_j=\sum\limits_{T_j}{\mu_i f_i}</math>
*The average skill of the team is the sum of the average skills of the players, each weighted by the fraction of the game they played.
:<math>\Sigma _{j}=\sqrt{\left( \sum\limits_{T_j}{\left( \sigma_i^2 f_i +\beta ^{2}+\gamma ^{2} \right)} \right) -\beta ^{2} -\gamma ^{2}}</math>
*The uncertainty about the team works the same way: the players' variances are weighted by the fraction of the game they played, with the global variables <math>\beta</math> and <math>\gamma</math> added once per player and subtracted once at the end.
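The two aggregation formulae above can be sketched in code. This is a minimal illustration, not ASGS's actual implementation; the values used for <math>\beta</math>, <math>\gamma</math> and the player ratings are hypothetical placeholders:

```python
import math

# Placeholder global variables (the real values come from ASGS).
BETA, GAMMA = 4.0, 0.1

def team_rating(players):
    """players: list of (mu_i, sigma_i, f_i) tuples for one team T_j.
    Returns the team's (M_j, Sigma_j) per the formulae above."""
    big_mu = sum(mu * f for mu, sigma, f in players)
    variance = sum(sigma**2 * f + BETA**2 + GAMMA**2
                   for mu, sigma, f in players)
    big_sigma = math.sqrt(variance - BETA**2 - GAMMA**2)
    return big_mu, big_sigma

# A team of one player who played the whole game (f = 1) reduces to that
# player's own rating: the beta and gamma terms cancel exactly.
M, S = team_rating([(25.0, 8.0, 1.0)])
# M == 25.0, S == 8.0
```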
== Winners and Losers ==
Now that we have a value for each team's Mu and Sigma we can treat the teams as "single players" and use Microsoft's TrueSkill algorithms on them. Once the game is over, the winner's and the loser's values are updated using exactly the same formulae as given in the [[AllegSkill_-_Commander%27s_ranking#The_update_formulae|Commander's ranking article]], the only difference being that we are using '''Μ''' and '''Σ''' instead of '''μ''' and '''σ'''.
The updated Mu and Sigma are referred to as '''Μ<big>'</big>''' and '''Σ<big>'</big>'''.
:'''Note.''' The astute reader will notice that this is the point that keeps the current incarnation of AllegSkill from supporting multi-team games.
== Dividing the spoils of war ==
Now that we have the updated Μ and Σ for the teams, we have to figure out how they filter back down to updated μ and σ for the players.
First we calculate the total variance across all the teams:
:<math>\beta _{total}=\sum\limits_{j=1}^{k}{\left( \sum\limits_{T_{j}}{\left( \sigma _{i}^{2}f_{i}+\beta ^{2}+\gamma ^{2} \right)} \right)}</math>
Then define two new factors for each team, <math>V</math> and <math>W</math>:
:<math>V_{j}=\frac{\sqrt{\beta _{total}}}{\sum\limits_{T_{j}}{\left( \sigma _{i}^{2}f_{i}+\beta ^{2}+\gamma ^{2} \right)}-\beta ^{2}} \cdot \left( M'_{j}-M_{j} \right)</math>
:<math>W_{j}=\frac{\beta _{total}} {\sum\limits_{T_{j}}{\left( \sigma _{i}^{2} f_{i} + \beta^2 + \gamma^2 \right)} - \beta^2 } \cdot \left( 1 - \frac{\Sigma_{j}'^{2}}{\Sigma _{j}^{2}} \right)</math>
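In code, the total variance and the two team factors look roughly like this. Again a hypothetical sketch: BETA and GAMMA are placeholder values, and players are (mu_i, sigma_i, f_i) tuples as before:

```python
import math

# Placeholder global variables (the real values come from ASGS).
BETA, GAMMA = 4.0, 0.1

def per_team_sum(players):
    """Sum over T_j of (sigma_i^2 * f_i + beta^2 + gamma^2)."""
    return sum(sigma**2 * f + BETA**2 + GAMMA**2 for mu, sigma, f in players)

def beta_total(all_teams):
    """Total variance across all k teams."""
    return sum(per_team_sum(team) for team in all_teams)

def v_and_w(team_sum, b_total, M, M_new, S, S_new):
    """V_j and W_j for a team whose aggregate rating moved from
    (M, S) to (M', S') in the TrueSkill update."""
    denom = team_sum - BETA**2
    V = math.sqrt(b_total) / denom * (M_new - M)
    W = b_total / denom * (1 - S_new**2 / S**2)
    return V, W
```

Note that V carries the sign of the team's Mu change (positive for a win, negative for a loss), while W is positive whenever the team's Sigma shrank.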
Then each player, <math>i</math>, has their stats updated:
:<math>\mu '_{i}=\mu _{i}+\tfrac{\sigma _{i}^{2}+\gamma ^{2}}{\sqrt{\beta _{total}}}f_{i}V_{j}</math>
:<math>\sigma '_{i}=\sigma _{i}+f_{i}\left( \sigma _{i}\sqrt{1-W_{j}\tfrac{\sigma _{i}^{2}+\gamma ^{2}}{\beta _{total}}}-\sigma _{i} \right)</math>
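The per-player update can be sketched as below. It assumes V_j, W_j and beta_total have already been computed for the player's team; GAMMA and all the numbers in the example are hypothetical:

```python
import math

# Placeholder dynamics variable (the real value comes from ASGS).
GAMMA = 0.1

def update_player(mu, sigma, f, V, W, b_total):
    """Return (mu', sigma') for a player who played fraction f of the
    game on team j, per the two update formulae above."""
    mu_new = mu + (sigma**2 + GAMMA**2) / math.sqrt(b_total) * f * V
    sigma_new = sigma + f * (
        sigma * math.sqrt(1 - W * (sigma**2 + GAMMA**2) / b_total) - sigma)
    return mu_new, sigma_new
```

Two sanity checks fall straight out of the formulae: a player with f = 0 is untouched, and a positive W always shrinks sigma (scaled by how long the player actually played).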
== Interpretation ==
Even though everyone on a team took part in the same battle, no two players necessarily receive the same change to their rank. Notice that the last two formulas, which are the ones applied to the players themselves, incorporate sigma - the uncertainty about the player.
If a newb and a vet play on the same team and win, the newb will receive more points for it ''even though it was the exact same win''. The more the system knows about you, the vet, the less your rank will change.
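This can be seen directly in the μ update formula. With made-up numbers, and everything held fixed except sigma, the uncertain player's rating moves much further than the established player's:

```python
import math

# Hypothetical values: two teammates share the same f and the same team
# factor V, differing only in their uncertainty sigma.
GAMMA, BETA_TOTAL, V = 0.1, 160.0, 0.5

def mu_delta(sigma, f=1.0):
    """Change in mu from the update formula, everything else held fixed."""
    return (sigma**2 + GAMMA**2) / math.sqrt(BETA_TOTAL) * f * V

newb_gain = mu_delta(8.0)  # high uncertainty
vet_gain = mu_delta(2.0)   # low uncertainty
assert newb_gain > vet_gain  # same win, bigger rank change for the newb
```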
{{AllegSkill}}
Revision as of 22:31, 2 December 2008