I am afraid that, although the experiment is very interesting and involves a lot of work, the results cannot be conclusive. First, there is the question of whether match preparation, even if it "boosts" your chances of winning, only does so when it has been done repeatedly over time, which would be the logical thing imo. Doing something once as a brand-new team shouldn't make a difference large enough to produce a significant result in a sample of 200. Second, the sample size is probably far from large enough. The sample size is 200 and your "standard" result (victories) is around 75, which means you will only see something if the "boost" effect is about 11% or more, and even then only at the 1-sigma level assuming a Poisson process; and a 1-sigma result, even if obtained, is usually not considered conclusive. I'll explain this briefly for those who do not often work with statistics (and please correct me if I am wrong, which I might be):
- A 1-sigma Poisson confidence level means that, if you repeated the experiment an infinite number of times, ~68% of the results would lie within this uncertainty. It is easy to calculate under a Poisson assumption (counting events such as victories), since the uncertainty is just the square root of the expected value. Thus, in our example, assuming the standard "unmodified" result without training is 75 victories, the 1-sigma level is ±8.6. This means that if we repeated the experiment (1 experiment = 200 matches) infinitely, ~68% of our results would fall between ~66 and ~84. Even if one of these experiments gave us, say, 90 victories, it would not mean much, since a fluctuation outside the 1-sigma band happens "randomly" in about 32% of cases. And in the actual results we do not even see a 1-sigma deviation at all.
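A quick way to sanity-check the numbers above is to simulate them. This is just a sketch with my own made-up repetition count, not part of the actual experiment: draw many Poisson outcomes with mean 75 and count how often they land within ±sqrt(75) ≈ ±8.66 of the mean.

```python
import numpy as np

# Sketch: simulate many repetitions of "200 matches, ~75 expected wins"
# as a Poisson process and measure the 1-sigma coverage.
rng = np.random.default_rng(42)
mean_wins = 75
sigma = np.sqrt(mean_wins)  # ~8.66, the 1-sigma Poisson uncertainty

experiments = rng.poisson(mean_wins, size=100_000)
within_1_sigma = np.mean(np.abs(experiments - mean_wins) <= sigma)

print(f"1-sigma interval: {mean_wins - sigma:.1f} .. {mean_wins + sigma:.1f}")
print(f"fraction within 1 sigma: {within_1_sigma:.3f}")  # roughly 0.68
```

So roughly two thirds of repetitions land inside the band, and roughly one third land outside it by pure chance, which is why a single outside-the-band result proves very little.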
- Assuming again the expected value of 75 victories, with ±8.6 at the 1-sigma level, we see that 8.6/75 is approximately 11%, meaning that only if the boost modifies the probability of victory by that much or more should we expect a deviation beyond the 1-sigma band (and again, that is only 68% confidence). A larger sample would make the results much more sensitive: for example, with a sample of 2000 and an expected 750 victories, we would be 1-sigma sensitive to a boost of sqrt(750)/750 ≈ 3.6%. (I am not saying that you or anyone else should do something similar with 2000 samples... I understand the work involved in doing it with 200 to start with, so in my opinion it is definitely not worthwhile unless you have a really quick way to do each iteration.)
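The scaling in that bullet reduces to a one-liner: under the Poisson assumption, the relative 1-sigma uncertainty on an expected count mu is sqrt(mu)/mu = 1/sqrt(mu). A minimal sketch (function name is mine):

```python
import math

def relative_1_sigma(expected_wins: float) -> float:
    """Relative 1-sigma Poisson uncertainty on a count: sqrt(mu) / mu = 1 / sqrt(mu)."""
    return 1.0 / math.sqrt(expected_wins)

# 200 matches, ~75 expected victories -> ~11.5% minimum detectable boost
print(f"{relative_1_sigma(75):.3f}")   # 0.115
# 2000 matches, ~750 expected victories -> ~3.7%
print(f"{relative_1_sigma(750):.3f}")  # 0.037
```

Note that sensitivity improves only with the square root of the sample size: 10x the matches buys you only ~3.2x the resolution.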
- Training would be broken if a single session of match preparation or similar boosted the chances of winning by more than a few percent. I do not know what number would be appropriate (1%, maybe?), but it is definitely much smaller than 11%. Once you assume a number here, you can calculate how large your test sample needs to be to confirm or exclude that hypothesis.
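That sample-size calculation is just the previous formula inverted: to be 1-sigma sensitive to a relative boost b you need an expected win count mu with 1/sqrt(mu) <= b, i.e. mu >= 1/b², and with the baseline win rate of 75/200 = 0.375 that converts to matches. A sketch of my own arithmetic (not a claim about what the experimenter should do):

```python
import math

def matches_needed(boost: float, win_rate: float = 75 / 200) -> int:
    """Matches needed so the 1-sigma Poisson uncertainty drops below `boost`.

    Requires 1/sqrt(mu) <= boost for the expected win count mu,
    with mu = matches * win_rate, hence matches >= 1 / (boost**2 * win_rate).
    """
    mu_needed = 1.0 / boost**2
    return math.ceil(mu_needed / win_rate)

print(matches_needed(0.11))  # ~221: roughly the current experiment's scale
print(matches_needed(0.01))  # ~26667: a 1% boost needs tens of thousands of matches
```

This is why a realistic per-session boost of ~1% is simply invisible in a 200-match sample, even at the weak 1-sigma level.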
- As I said before, I would not conclude anything from a study that only shows a 1-sigma deviation from a Poisson process, since there is a ~32% probability of that happening by pure chance... you usually want more certainty.
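That ~32% figure can be checked with the normal approximation: the two-sided probability of a fluctuation beyond k sigma is 1 - erf(k / sqrt(2)). A quick standard-library check (function name is mine):

```python
from math import erf, sqrt

def prob_outside_k_sigma(k: float) -> float:
    """Two-sided probability of a fluctuation beyond k sigma (normal approximation)."""
    return 1.0 - erf(k / sqrt(2.0))

print(f"{prob_outside_k_sigma(1):.3f}")   # ~0.317: why 1 sigma is weak evidence
print(f"{prob_outside_k_sigma(3):.5f}")   # ~0.0027: the more usual "3 sigma" bar
```

A 3-sigma result happens by chance only ~0.3% of the time, which is closer to the level of certainty you would want before concluding the boost is real.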