The Map Meta in 2017 part 2


Last time out we looked at some basic map pick-rate data and made some observations about popularity, how maps have displaced each other in the pool, and what that might mean for the upcoming reintroduction of Dust 2 into the rotation.

Now we’re going to look at some teams, and as the sample sizes here get very small, one of the themes is going to be uncertainty and confidence. If you’ve ever wondered how the hell standard deviation will ever be relevant to your life, here it is.

I’m limiting myself for the most part to the top 10 teams in our CS:GO Glicko rankings, otherwise the sheer quantity of data I’d have to wade through would be too much. But first we’ll start with a specific example to illustrate a few things.

These are Astralis’s win percentages on maps in 2017, along with the number of times each map was played:

The left scale applies to both bars: for the blue it’s the win percentage, and for the orange it’s the raw number of times the map was played. Regardless, what we can see is strength on Nuke, Inferno and Overpass, some difficulty on Cache, and mediocrity on Cobble. But the number of times each map has been played varies quite a bit, particularly Cobble, where they have played only 2 games. So how do we account for this?

We can use Excel’s CONFIDENCE function, which takes the standard deviation and the sample size, to give us a measure of variation that acts as a guide to how much faith we should put in the average figures. Specifically, it gives us a range within which we can be 95% confident the true value for a larger population of results would fall. The more samples we have as evidence, the smaller the range. If we look at the same graph with error bars added to express those possible ranges…
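If you’d rather not do this in Excel, the same margin can be reproduced in a few lines. This is only a sketch: it mirrors Excel’s CONFIDENCE/CONFIDENCE.NORM formula (z · σ / √n) applied to a hypothetical list of win/loss results, not the actual data behind the charts.

```python
from math import sqrt
from statistics import NormalDist, pstdev

def confidence_margin(games, alpha=0.05):
    """Mirror of Excel's CONFIDENCE.NORM: z * stdev / sqrt(n).

    games: list of 1 (map win) / 0 (map loss) results.
    alpha=0.05 gives the 95% confidence level used here."""
    z = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for 95%
    return z * pstdev(games) / sqrt(len(games))

# Hypothetical sample: 7 wins in 10 games on a map
margin = confidence_margin([1] * 7 + [0] * 3)   # ~0.28, i.e. 70% +/- 28
```

At the same win rate, doubling the number of games shrinks the margin by a factor of √2, which is why the frequently played maps end up with tight error bars.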

We can see that most of the maps have enough samples for us to be fairly certain that what we see is close to the team’s true strength on that map, but the value of the small amount of data we have on Cobble is almost zero: the error bars span a range from almost 0 to almost 100. This leaves us with a decision to make: how do we deal with that lack of information?

One option is to take a conservative estimate and use the lowest value the range allows, on the basis that we know with 95% certainty the team is at least that good. Another is to compensate by averaging all the other map performances and using that as a baseline. I’ve gone with the first option here, because teams can have maps they are genuinely bad on, well below their average, and because we won’t always be dealing with clear-cut cases like Cobblestone for Astralis. Often the uncertainty will be somewhere in between, and where to draw the line for replacing a value with an average then becomes the question.
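As a sketch of that first option, again on hypothetical win/loss lists rather than the real match data, the conservative estimate is just the observed rate minus the 95% margin, floored at zero:

```python
from math import sqrt
from statistics import NormalDist, pstdev

def adjusted_win_rate(games, alpha=0.05):
    """Conservative estimate: observed win rate minus the 95% margin,
    floored at 0. games is a hypothetical list of 1/0 results."""
    rate = sum(games) / len(games)
    z = NormalDist().inv_cdf(1 - alpha / 2)
    margin = z * pstdev(games) / sqrt(len(games))
    return max(0.0, rate - margin)

# Two hypothetical samples with the same 50% raw win rate:
small = adjusted_win_rate([1, 0])        # 1-1 in 2 games  -> adjusts to 0
large = adjusted_win_rate([1, 0] * 10)   # 10-10 in 20 games -> ~0.28
```

Two maps with the same raw 50% can then come out very differently: a 1-1 record adjusts to roughly zero while a 10-10 record keeps most of its value, which is the behaviour we want for cases like Astralis on Cobble.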

To analyse this, a contour chart lets us combine the information in a readable way:

The light spots are where there is considerable uncertainty for one of the teams we’re looking at. On the bottom left, Astralis and Cobble is a light patch denoting an uncertain area, and it’s also a slightly less light patch for FaZe. Nuke provides a couple of spots for Cloud9 and Fnatic, while Mirage for NiP and Overpass for mousesports are the other culprits.
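For illustration, the team-by-map grid behind a chart like this can be built from the per-map margins. The records below are made-up examples, not the real 2017 results:

```python
from math import sqrt
from statistics import pstdev

# Hypothetical per-map game logs (1 = win, 0 = loss); real data would
# come from the match records behind the charts above.
records = {
    "Astralis": {"Nuke": [1] * 12 + [0] * 3, "Cobblestone": [1, 0]},
    "FaZe":     {"Nuke": [1] * 6 + [0] * 6,  "Cobblestone": [1] * 3 + [0] * 2},
}
maps = ["Nuke", "Cobblestone"]

def margin(games, z=1.96):   # width of the 95% band
    return z * pstdev(games) / sqrt(len(games))

# Rows = teams, columns = maps; a bigger number means a "lighter"
# (less certain) spot on the contour chart.
grid = [[round(margin(records[t][m]), 3) for m in maps] for t in records]
```

Each cell is the width of the confidence band, so the large Cobblestone values correspond to light patches; a grid like this could then be handed to a contour plotting routine.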

Everything else is a sea of high confidence results.

Now we can use a similar chart to look at win rates adjusted for confidence for the teams we’re looking at:

Here the lighter spots are success and the darker spots are either greater uncertainty, no data, or failure. There’s only one team that hits the uppermost win-rate colour: mousesports on Nuke. Given their recent surge in results, the potential disappearance of Nuke from the map pool could be pretty devastating for them, particularly given how badly they suffer on the popular Inferno.

Overpass is a problem map for NiP and Fnatic, both struggling despite playing plenty of it. Cache is a much broader problem, with North, mousesports, Gambit, Fnatic and Astralis all putting up poor numbers on it. Cloud9, FaZe and G2 are also very average there, which implies the map offers less potential for a team to separate itself from the pack by skill.

Out of the very top teams, SK really only stumble on Inferno, their Nuke black hole being caused by a lack of play on it, while FaZe are merely average on Nuke and Train. G2 have a pretty big slice of winning percentages, but Inferno and Overpass are mediocre and Train is a black spot, a problem given that two of those are popular maps.

In part 3 I’ll look at team trends over time and see how team map picking has evolved, and make some map pool comparisons.