Calculates Raw Score (before MAL's hidden adjustments), Mean Absolute Deviation and custom-weighted Pruned/Liked scores with tooltips explaining the weights. Pruned Score attempts to exclude hate votes from people clearly outside the target audience (refer to Rejection Rate), and Liked Score attempts to measure the level of enthusiasm within fans. If you want different stats, just ask Gemini to edit the script for you.
Pruned Score:
Scores 7-10: 1.0x
Score 6: 0.7x
Score 5: 0.35x
Scores 1-4: counted as 5s at 0.22x
Liked Score:
Scores 8-10: 1.0x
Score 7: 0.7x
Scores 5-6: counted as 7s at 0.35x
Scores 1-4: counted as 7s at 0.22x
Rationale: When comparing shows under the assumption that the viewer at least found them watchable, I'm not interested in the depth of people's distaste for a show, so lower scores are pruned to mostly eliminate their effect on the score.
Discounting low scores is imperfect: a divisive show might benefit more than it deserves, and a slightly better show might push some very low, previously uncounted votes up into the low end of the counted range, paradoxically dragging down the pruned average. Gradually rolling off the weights and capping low scores to 5 (Pruned) or 7 (Liked) mitigates this paradox, at the expense of lower scores still having some impact.
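The two weighting tables above can be sketched in Python like this. The weights follow the lists exactly, but the vote counts, variable names, and the MAD helper are made up for illustration and are not taken from any real MAL page:

```python
# Bracket weights from the tables above; scores below the floor are
# counted as the floor value (5 for Pruned, 7 for Liked) before averaging.
PRUNED_W = {10: 1.0, 9: 1.0, 8: 1.0, 7: 1.0,
            6: 0.7, 5: 0.35, 4: 0.22, 3: 0.22, 2: 0.22, 1: 0.22}
LIKED_W = {10: 1.0, 9: 1.0, 8: 1.0,
           7: 0.7, 6: 0.35, 5: 0.35, 4: 0.22, 3: 0.22, 2: 0.22, 1: 0.22}

def weighted_score(votes, weights, floor):
    """Weighted mean where every score below `floor` counts as `floor`."""
    total_weight = sum(weights[s] * n for s, n in votes.items())
    weighted_sum = sum(weights[s] * max(s, floor) * n for s, n in votes.items())
    return weighted_sum / total_weight

# Made-up score distribution for illustration
votes = {10: 1200, 9: 2400, 8: 3100, 7: 2000, 6: 900,
         5: 400, 4: 150, 3: 80, 2: 60, 1: 110}
total = sum(votes.values())
raw = sum(s * n for s, n in votes.items()) / total                 # Raw Score
mad = sum(abs(s - raw) * n for s, n in votes.items()) / total      # Mean Absolute Deviation
pruned = weighted_score(votes, PRUNED_W, floor=5)
liked = weighted_score(votes, LIKED_W, floor=7)
print(f"raw {raw:.3f}  MAD {mad:.3f}  pruned {pruned:.3f}  liked {liked:.3f}")
```

With this example distribution the Pruned and Liked scores both land above the raw mean, since the low brackets are both capped upward and heavily downweighted.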
---
The expected boost from pruning is calculated from an approximated equivalent normal distribution: one which, after truncating the tails above 10 and below 1, yields the same average and mean absolute deviation as the observed votes. The actual boost is then compared to this theoretical boost.
The rationale: since every show's score increases from pruning, and the boost tends to correlate inversely with the score, it is hard to compare boosts between shows with different scores. Seeing how far the boost diverges from this expectation gives you a more comparable idea of how much the pruning inflates the score. It relies on the naive assumption that scores are normally distributed, which is still imperfect: shows with very high scores tend to show a disproportionately positive difference because votes are brickwalled at 10. It works best for shows with more middling averages.
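The paragraph above can be sketched as follows, under one reading of the method: a normal distribution is binned onto the integer scores 1..10, with the tail mass below 1 piled onto 1 and above 10 onto 10 (the "brickwalling"), and (mu, sigma) are then searched so the binned distribution reproduces the observed mean and MAD. The solver, function names, and target numbers here are all made up; the actual script may fit the distribution differently:

```python
from math import erf, sqrt, pi

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def binned_normal(mu, sigma):
    """P(score = s) for s in 1..10 under a clipped, discretized normal."""
    p = {}
    for s in range(1, 11):
        lo = float("-inf") if s == 1 else s - 0.5   # tail mass piles onto 1
        hi = float("inf") if s == 10 else s + 0.5   # and onto 10
        p[s] = norm_cdf((hi - mu) / sigma) - norm_cdf((lo - mu) / sigma)
    return p

def mean_mad(p):
    mean = sum(s * q for s, q in p.items())
    mad = sum(abs(s - mean) * q for s, q in p.items())
    return mean, mad

def fit_normal(target_mean, target_mad):
    """Crude pattern search for (mu, sigma); a stand-in for a real solver."""
    # For an untruncated normal, MAD = sigma * sqrt(2/pi) -- a good start
    mu, sigma = target_mean, max(target_mad * sqrt(pi / 2), 0.3)
    step = 0.5
    for _ in range(8):  # shrink the search grid each pass
        best, best_err = (mu, sigma), float("inf")
        for dm in (-step, 0.0, step):
            for ds in (-step, 0.0, step):
                m2, s2 = mu + dm, sigma + ds
                if s2 < 0.05:
                    continue
                mean, mad = mean_mad(binned_normal(m2, s2))
                err = (mean - target_mean) ** 2 + (mad - target_mad) ** 2
                if err < best_err:
                    best_err, best = err, (m2, s2)
        mu, sigma = best
        step *= 0.5
    return mu, sigma

def pruned_score(p):
    """Pruned Score weights from above, applied to probabilities."""
    w = {s: 1.0 if s >= 7 else 0.7 if s == 6 else 0.35 if s == 5 else 0.22
         for s in p}
    return (sum(w[s] * max(s, 5) * q for s, q in p.items())
            / sum(w[s] * q for s, q in p.items()))

# Example: a show with observed mean 7.78 and MAD 1.05 (made-up numbers)
mu, sigma = fit_normal(7.78, 1.05)
model = binned_normal(mu, sigma)
model_mean, model_mad = mean_mad(model)
expected_boost = pruned_score(model) - model_mean
print(f"expected pruning boost: {expected_boost:.3f}")
```

The actual boost (pruned score minus raw score on the real histogram) would then be compared against this expected value, and the divergence is what gets reported.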
---
Made with Gemini 3 Pro and Thinking, math results third-party verified by Grok 4.1 Expert. I don't code lol.