AllegSkill - Player's ranking
What follows is the method for updating a two-team game with an arbitrary number of players.
From a population of <math>n</math> players <math>\{1,...,n\}</math>, let <math>k</math> teams compete in a match. Teams are defined by <math>k</math> non-overlapping subsets, <math>T_{j}\subset \{1,...,n\}</math>, of the player population, with <math>T_{i}\cap T_{j}=\emptyset</math> if <math>i\ne j</math>.
In layman's terms, this means that no player can appear on more than one team at the same time.
Each player, <math>n</math>, has three variables associated with them:
- <math>\mu _n</math>, their average skill.
- <math>\sigma _n</math>, the uncertainty in <math>\mu _{n}</math>.
- <math>f_{n}</math>, the fraction of the total game played for their team.
Each team, <math>j</math>, has a mu and sigma derived from the ratings of its players, <math>i \in T_j</math>, thus:
- <math>\mu _j=\sum\limits_{i \in T_j}{\mu_i f_i}</math>
- <math>\sigma _{j}=\sqrt{\left( \sum\limits_{i \in T_j}{\left( \sigma_i^2 f_i +\beta ^{2}+\gamma ^{2} \right)} \right) -\beta ^{2} -\gamma ^{2}}</math>
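The aggregation above can be written as a short Python sketch. The function name <code>team_rating</code> and the <code>(mu, sigma, f)</code> tuple layout are invented for illustration; <code>beta</code> and <code>gamma</code> stand for the system constants <math>\beta</math> and <math>\gamma</math>.

<source lang="python">
import math

def team_rating(players, beta, gamma):
    """Aggregate per-player (mu_i, sigma_i, f_i) tuples for one team T_j
    into the team's mu_j and sigma_j, following the two formulas above."""
    mu_j = sum(mu_i * f_i for mu_i, _, f_i in players)
    # Sum of per-player variances (weighted by fraction played) plus the
    # beta and gamma terms, then remove one beta^2 and gamma^2 before the root.
    var = sum(sigma_i ** 2 * f_i + beta ** 2 + gamma ** 2
              for _, sigma_i, f_i in players)
    sigma_j = math.sqrt(var - beta ** 2 - gamma ** 2)
    return mu_j, sigma_j
</source>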
For the next step, each team is labelled according to whether it won or lost the game (subscripts <math>w</math> and <math>l</math> below). We now calculate a new mu and sigma for each team using the standard TrueSkill update formulae. Definitions of <math>V_{win}</math>, <math>W_{win}</math>, etc. can be found in the Commander's ranking section.
- <math>\mu '_{w}=\mu _{w}+\frac{\sigma _{w}^{2}}{c}\cdot V_{win}\left( \frac{\mu _{w}-\mu _{l}}{c},\frac{\varepsilon }{c} \right)</math>
- <math>\sigma '_{w}=\sqrt{\sigma _{w}^{2}\left( 1-\frac{\sigma _{w}^{2}}{c^{2}}\cdot W_{win}\left( \frac{\mu _{w}-\mu _{l}}{c},\frac{\varepsilon }{c} \right) \right)+\gamma ^{2}}</math>
- <math>\mu '_{l}=\mu _{l}-\frac{\sigma _{l}^{2}}{c}\cdot V_{win}\left( \frac{\mu _{w}-\mu _{l}}{c},\frac{\varepsilon }{c} \right)</math>
- <math>\sigma '_{l}=\sqrt{\sigma _{l}^{2}\left( 1-\frac{\sigma _{l}^{2}}{c^{2}}\cdot W_{win}\left( \frac{\mu _{w}-\mu _{l}}{c},\frac{\varepsilon }{c} \right) \right)+\gamma ^{2}}</math>
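A sketch of this team-level update follows. It assumes the standard TrueSkill <math>V_{win}</math> and <math>W_{win}</math> functions (Gaussian PDF over CDF ratios); if the Commander's ranking section defines them differently, those definitions take precedence. All function and parameter names here are invented for the example, with <code>c</code>, <code>eps</code> and <code>gamma</code> standing for the constants <math>c</math>, <math>\varepsilon</math> and <math>\gamma</math>.

<source lang="python">
import math

def pdf(x):
    """Standard normal probability density."""
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def cdf(x):
    """Standard normal cumulative distribution."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def v_win(t, eps):
    """Assumed standard TrueSkill V function for a win."""
    return pdf(t - eps) / cdf(t - eps)

def w_win(t, eps):
    """Assumed standard TrueSkill W function for a win."""
    v = v_win(t, eps)
    return v * (v + t - eps)

def update_teams(mu_w, sigma_w, mu_l, sigma_l, c, eps, gamma):
    """Apply the winner/loser updates above; returns
    (mu_w', sigma_w', mu_l', sigma_l')."""
    t = (mu_w - mu_l) / c
    v = v_win(t, eps / c)
    w = w_win(t, eps / c)
    mu_w_new = mu_w + (sigma_w ** 2 / c) * v
    sigma_w_new = math.sqrt(sigma_w ** 2 * (1.0 - (sigma_w ** 2 / c ** 2) * w) + gamma ** 2)
    mu_l_new = mu_l - (sigma_l ** 2 / c) * v
    sigma_l_new = math.sqrt(sigma_l ** 2 * (1.0 - (sigma_l ** 2 / c ** 2) * w) + gamma ** 2)
    return mu_w_new, sigma_w_new, mu_l_new, sigma_l_new
</source>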
Now we calculate the total variance:
- <math>\beta _{total}=\sum\limits_{j=1}^{k}{\left( \sum\limits_{i \in T_{j}}{\left( \sigma _{i}^{2}f_{i}+\beta ^{2}+\gamma ^{2} \right)} \right)}</math>
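In code, the total variance is a single sum over every player on every team, using the same hypothetical <code>(mu, sigma, f)</code> tuple layout as the sketches above:

<source lang="python">
def beta_total(teams, beta, gamma):
    """Sum sigma_i^2 * f_i + beta^2 + gamma^2 over all players on all teams.

    teams: list of teams, each a list of (mu_i, sigma_i, f_i) tuples."""
    return sum(sigma_i ** 2 * f_i + beta ** 2 + gamma ** 2
               for team in teams
               for _, sigma_i, f_i in team)
</source>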
Each team has a <math>V</math> and <math>W</math> factor:
- <math>V_{j}=\frac{\sqrt{\beta _{total}}\left( \mu '_{j}-\mu _{j} \right)}{\sum\limits_{i \in T_{j}}{\left( \sigma _{i}^{2}f_{i}+\beta ^{2}+\gamma ^{2} \right)}-\beta ^{2}}</math>
- <math>W_{j}=\frac{\beta _{total}\left( 1-\frac{\sigma_{j}'^{2}}{\sigma _{j}^{2}} \right)}{\sum\limits_{i \in T_{j}}{\left( \sigma _{i}^{2}f_{i}+\beta ^{2}+\gamma ^{2} \right)}-\beta ^{2}}</math>
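A sketch of the per-team factors, taking the team's pre- and post-update ratings and the total variance computed above as inputs (parameter names are invented for this example):

<source lang="python">
import math

def team_v_w(team, mu_j, mu_j_new, sigma_j, sigma_j_new, b_total, beta, gamma):
    """Compute V_j and W_j for one team from its old and updated (mu, sigma)
    and the total variance beta_total.

    team: list of (mu_i, sigma_i, f_i) tuples for the members of T_j."""
    # Shared denominator: per-player variance sum for this team, minus beta^2.
    denom = sum(sigma_i ** 2 * f_i + beta ** 2 + gamma ** 2
                for _, sigma_i, f_i in team) - beta ** 2
    v_j = math.sqrt(b_total) * (mu_j_new - mu_j) / denom
    w_j = b_total * (1.0 - sigma_j_new ** 2 / sigma_j ** 2) / denom
    return v_j, w_j
</source>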
Then each player, <math>n</math>, is updated using the values calculated for their team, <math>j</math>:
- <math>\mu '_{n}=\mu _{n}+\frac{\sigma _{n}^{2}+\gamma ^{2}}{\sqrt{\beta _{total}}}f_{n}V_{j}</math>
- <math>\sigma '_{n}=\sigma _{n}+f_{n}\left( \sigma _{n}\sqrt{1-W_{j}\frac{\sigma _{n}^{2}+\gamma ^{2}}{\beta _{total}}}-\sigma _{n} \right)</math>
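Finally, a sketch of the per-player update; <code>v_j</code>, <code>w_j</code> and <code>b_total</code> are the values computed in the previous steps (the function and parameter names are, again, invented for illustration):

<source lang="python">
import math

def update_player(mu_n, sigma_n, f_n, v_j, w_j, b_total, gamma):
    """Update one player's (mu, sigma) from their team's V_j and W_j factors."""
    mu_new = mu_n + (sigma_n ** 2 + gamma ** 2) / math.sqrt(b_total) * f_n * v_j
    # The sigma update shrinks sigma_n toward its post-game value in
    # proportion to the fraction of the game the player actually played.
    shrunk = sigma_n * math.sqrt(1.0 - w_j * (sigma_n ** 2 + gamma ** 2) / b_total)
    sigma_new = sigma_n + f_n * (shrunk - sigma_n)
    return mu_new, sigma_new
</source>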