
I am convinced that the match engine does different things depending on whether you're watching or not



I usually play on Key Highlights, but so many times absolutely nothing happens, then I switch to Comprehensive Highlights, and suddenly I'm getting clear-cut chances.

Just now, after 30 minutes of nothing happening in a cup replay, I switched to Full Match, my team was taking a throw-in, and we scored from that throw-in.

I think the game simulation that runs when you're watching the highlights/match is different from the one that runs with no one watching. Prove me wrong!


I think FM20 had more problems than normal in handling key highlights and classifying CCCs, so that may be a reason why you don't see anything on key.

Other than that, it is well established that the ME works the same no matter how you decide to watch, no need for conspiracy theories.


That's just an issue (or the way it is intended) of Key Highlights showing you fewer events. In previous FMs we had a lot more highlights on 'Key' and the matches seemed longer overall. Now it can vary from 3 to 7-8 minutes max as a lot of fluff has been cut. It's super fast and I think that's the way it was intended. It's great for going fast through the seasons. If you want to see more, just play on Extended, but note that it takes twice the time to watch the whole game in most cases.

TL;DR - The chances are the same, you just don't see as many of them on Key Highlights.


2 hours ago, epicrecruit said:

Prove me wrong!

There is a way to prove you wrong (or maybe right), but you have to do the work: play the same match 100 times on key highlights without changing anything (maybe let your assman control the match) and keep all the data (result, stats and performance). Then play the same match 100 times on comprehensive/full without changing anything and keep all the data. Then compare the two sets of data. If averages are similar over both sets of data, for example in both cases you win around 60% of the time, you're wrong. If there is a significant difference that can't be explained by randomness, for example in one case you win 75% of the time, but in the other only 10%, you are correct.
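
For illustration, here's roughly how the comparison could be written up in Python with SciPy. The win counts below are made-up placeholders; you'd fill them in from your own two sets of 100 replays:

```python
# Hypothetical counts from two sets of 100 replays of the same fixture.
from scipy.stats import fisher_exact

wins_key, not_wins_key = 61, 39     # watched on key highlights
wins_full, not_wins_full = 58, 42   # watched on comprehensive/full

table = [[wins_key, not_wins_key],
         [wins_full, not_wins_full]]

# Fisher's exact test: how likely is a gap at least this large
# if the viewing mode has no effect on the result?
_, p_value = fisher_exact(table)
print(f"p-value = {p_value:.3f}")   # a large p-value means the gap looks like randomness
```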


6 hours ago, goranm said:

There is a way to prove you wrong (or maybe right), but you have to do the work: play the same match 100 times on key highlights without changing anything (maybe let your assman control the match) and keep all the data (result, stats and performance). Then play the same match 100 times on comprehensive/full without changing anything and keep all the data. Then compare the two sets of data. If averages are similar over both sets of data, for example in both cases you win around 60% of the time, you're wrong. If there is a significant difference that can't be explained by randomness, for example in one case you win 75% of the time, but in the other only 10%, you are correct.

Except this doesn't prove it because replaying the "same" match 100 times is actually playing a different match 100 times with the same teams.

Whilst your logic is sound (comparing data, allowing for randomness, etc), there is simply too much randomness and chance swinging things one way or the other, which would in no way reflect what the OP wants to measure: the number of chances the AI creates depending on highlight mode. For example, in 1% of games a defender might make a mistake which has a 50% chance of leading to a CCC. However the kicker for that will be the positioning of literally every other player on the pitch, the time, the weather, who makes the mistake, who benefits, etc. Not only that, said player's morale is affected negatively due to this error, which leads to further mistakes. If the mistake is in the first minute, that will have much wider knock-on effects than if it is in the eighty-ninth.

The OP isn't trying to prove that different settings show you fewer chances, which isn't really up for dispute. He is arguing that different settings result in more/fewer chances being created. This is completely impossible to prove because the "same" matches are never the same.



2 hours ago, Junkhead said:

Except this doesn't prove it because replaying the "same" match 100 times is actually playing a different match 100 times with the same teams.

Except what I'm describing is how, for example, medicine is tested against a (side) effect that was random and one that wasn't, or a placebo effect, or how weather prediction is done. The same medicine goes into 100 different humans with 7000 different organs which follow the same basic rules, yet we know how to differentiate between random and non-random effects, and we are able to tell which probability distribution a random effect follows. Similarly, for weather prediction we plug the same initial conditions (same teams) millions of times into a computer model (the ME), and this model converges towards a certain prediction for temperature (score), humidity (number of CCCs) etc. The prediction might be wrong for one particular day, but on average it will correctly predict the weather or the trajectory of a particular storm.
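
If it helps, here's a toy illustration of that "same inputs, many runs" idea. This is obviously not the real ME, just a made-up model with a fixed part and a random part:

```python
import random

def toy_match(home_quality=75, away_quality=70):
    """Made-up match model: fixed inputs plus noise -> home CCC count."""
    base = 2.0 + (home_quality - away_quality) / 10   # the non-random part
    noise = random.gauss(0, 1.5)                      # the random part
    return max(0, round(base + noise))

runs = [toy_match() for _ in range(10_000)]
print("average CCCs over 10,000 runs:", sum(runs) / len(runs))
# Any single run can land anywhere, but the average barely moves between re-runs.
```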

2 hours ago, Junkhead said:

Whilst your logic is sound (comparing data, allowing for randomness, etc), there is simply too much randomness and chance swinging things one way or the other, which would in no way reflect what the OP wants to measure: the number of chances the AI creates depending on highlight mode.

If what you're saying is true, then nothing in the game matters. If randomness has such a hold on the ME, then attributes don't matter, morale doesn't matter, tactics don't matter etc. If we have 1000 sets of 100 played games, and for example the win/loss ratio is completely random over these 1000 sets, then the match engine is not really an engine, it's a random number generator. In the same way, if the number of CCCs doesn't converge around a certain value for these 1000 sets, but is completely random, then the ME is nothing but an RNG.

2 hours ago, Junkhead said:

For example, in 1% of games a defender might make a mistake which has a 50% chance of leading to a CCC. However the kicker for that will be the positioning of literally every other player on the pitch, the time, the weather, who makes the mistake, who benefits, etc. Not only that, said player's morale is affected negatively due to this error, which leads to further mistakes. If the mistake is in the first minute, that will have much wider knock-on effects than if it is in the eighty-ninth.

Sure. However, nothing you've said here shows that the method I'm describing will not prove what OP wants. Think again about medicine: 100 people will have different biology, different pathways in their bodies, and at the smallest level the interaction of electrons in a medicine with the electrons in the human body is completely random. However this randomness follows a distribution and physical laws, so we are still able to deduce "global" effects on the whole body with a degree of accuracy. The ME follows the same logic: the random variables that encode the probability of the tiniest mistakes or whatever follow a probability distribution, so over 1000s of games the distribution of CCCs must show a pattern, and if this pattern is approximately the same for both viewing modes, then OP is disproved. If not, OP is proven.

2 hours ago, Junkhead said:

The OP isn't trying to prove that different settings show you fewer chances, which isn't really up for dispute. He is arguing that different settings result in more/fewer chances being created. This is completely impossible to prove because the "same" matches are never the same.

I understand what OP is trying to prove, and what I am describing is a method of proving that. It doesn't matter that matches are never the same; what matters is that they go through the same black box (the ME) whose properties we are trying to figure out. This black box follows some set of rules, some of which are random, but some are not. If you think about how experiments in science are done, no two experiments are really the same: maybe one experiment is done in a room slightly warmer than the other, maybe it has a slightly higher concentration of oxygen, whatever. Yet we still prove scientific facts by replicable experiments that are never perfectly the same, because we know how to distinguish between random and non-random effects.


This simply is not how the ME works though. The ME calculates the game and then, when you go to the match, the graphics engine translates this into the highlights you see. It does not recalculate the match every time you change highlights, so I fail to see how this would work in practice. The game does not calculate the match as it goes, which is what would be required here.

I think it is more a question of how the game selects highlights to show you. The number of times I see a highlight that is basically 3 passes followed by an offside that adds nothing to the match experience, for example. I never play on key highlights, because you simply miss too much of the game. You will get to see goals, CCCs, and major incidents. So I would suggest what you see as CCCs are not what the ME sees as CCCs. Which is a whole other conversation about how this is interpreted.

55 minutes ago, goranm said:

Except what I'm describing is how, for example, medicine is tested against a (side) effect that was random and one that wasn't, or a placebo effect, or how weather prediction is done. The same medicine goes into 100 different humans with 7000 different organs which follow the same basic rules, yet we know how to differentiate between random and non-random effects, and we are able to tell which probability distribution a random effect follows. Similarly, for weather prediction we plug the same initial conditions (same teams) millions of times into a computer model (the ME), and this model converges towards a certain prediction for temperature (score), humidity (number of CCCs) etc. The prediction might be wrong for one particular day, but on average it will correctly predict the weather or the trajectory of a particular storm.

Weather prediction would be the closer analogy than medical trials here. Medical trials are basically measuring rare outcomes you simply might miss if you look at a narrow range of patients. Or looking for small but clear effects where more people give greater confidence. Weather prediction is looking at a non-linear system where you cannot predict outcomes from initial inputs at all.

The problem here is you would need a solid blank to compare your results to. What do you propose to compare to? What is the baseline? Watching the match on full highlights? Text only? And how many times would you have to run this to get any kind of meaningful baseline? And what would that actually mean? It is a complicated question. Football is not really like independent trials where you can repeat and average, because it is not really like the outcome is coming from a normal distribution with some obvious probability you can get from the statistics of a normal distribution. You would need to first compare data sets which were taken under the same conditions to see if they give the same average results or not (by whatever outcome you choose to monitor). Bearing in mind that things like "average" and "standard deviation" and the like have meaning only within a normal distribution. You would need things like t-tests and ANOVA for analysis. And this would be just to establish how large a dataset you would need to be able to accurately compare two datasets. Then you would have to repeat for the actual hypotheses you have, and compare to this null dataset to establish differences. Not impossible, but far from simple.
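
To make that concrete, this is the sort of test I mean (Python/SciPy, with invented CCC counts just to show the shape of it):

```python
from scipy.stats import ttest_ind, f_oneway

# CCCs per match from two hypothetical batches of replays
ccc_key  = [2, 3, 1, 4, 2, 3, 2, 1, 3, 2]   # key highlights
ccc_full = [3, 2, 2, 4, 1, 3, 2, 2, 3, 3]   # full match

t_stat, p_t = ttest_ind(ccc_key, ccc_full, equal_var=False)  # Welch's t-test
print(f"t-test p-value: {p_t:.3f}")

# With three or more viewing modes, one-way ANOVA plays the same role
ccc_extended = [2, 3, 3, 2, 4, 2, 1, 3, 2, 2]
f_stat, p_f = f_oneway(ccc_key, ccc_extended, ccc_full)
print(f"ANOVA p-value: {p_f:.3f}")
```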

I probably went into way too much detail here, but I love this kind of thing. And the point is that experimental design of trials can be really hard. You have to take care of all the things you are not looking to measure before you can actually try to measure something. And most people fall into the trap of assuming all experimental designs can be based around things like flipping a coin or rolling dice. Which is definitely not the case. Not really a criticism intended for the person who posted about this, just could not resist a chance to expound on this.


1 hour ago, sporadicsmiles said:

This simply is not how the ME works though. The ME calculates the game and then, when you go to the match, the graphics engine translates this into the highlights you see. It does not recalculate the match every time you change highlights, so I fail to see how this would work in practice. The game does not calculate the match as it goes, which is what would be required here.

I'm not claiming that it works like that, you've misunderstood something (I just realised this might have been a reply to OP and not me, but I'll leave it here). The entire point is to show that the ME is independent of the viewing mode. OP can change viewing mode as much as they want in one set of 100 games, and in the other set of 100 games keep it on only full throughout, or only on key throughout. If their claim is true, there should be a noticeable difference in the number of CCCs between the two sets of 100 games that can't be explained by randomness. But there isn't going to be one.

1 hour ago, sporadicsmiles said:

The problem here is you would need a solid blank to compare your results to. What do you propose to compare to? What is the baseline? Watching the match on full highlights? Text only? And how many times would you have to run this to get any kind of meaningful baseline? And what would that actually mean? It is a complicated question. Football is not really like independent trials where you can repeat and average, because it is not really like the outcome is coming from a normal distribution with some obvious probability you can get from the statistics of a normal distribution. You would need to first compare data sets which were taken under the same conditions to see if they give the same average results or not (by whatever outcome you choose to monitor). Bearing in mind that things like "average" and "standard deviation" and the like have meaning only within a normal distribution. You would need things like t-tests and ANOVA for analysis. And this would be just to establish how large a dataset you would need to be able to accurately compare two datasets. Then you would have to repeat for the actual hypotheses you have, and compare to this null dataset to establish differences. Not impossible, but far from simple.

We don't need a baseline, we don't have to do anything elaborate, OP made a falsifiable claim (falsifiable in Popper's sense, not that it is a priori false) which can be tested. We don't even have to know or learn anything about the ME, we just have to test if OP's claim is false, and if it is, we reject it; if not, we either test to more precision or accept the claim. We however do know the basic premise of the ME: it takes in some numbers like attributes, morale, tactical instructions etc. and produces some numbers that come out as score, CCCs etc. We know that there is randomness in the process, but we also know that it's not completely random. Since OP's claim is for any match, we can control certain aspects (maybe we max out one team and min out the other, or maybe we max out both and give them the same tactics or whatever). Now we create one data set for each viewing mode (kept constant throughout the match), and one in which viewing modes are changed randomly throughout each repeated match. The only outcome in which we don't reject OP's claim as false is when the dataset in which the viewing modes were changed randomly throughout the match shows more CCCs beyond what could be explained by randomness.
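
As a rough sketch of that design (the numbers here are invented, and I'm using a Kruskal-Wallis test precisely because it doesn't assume a normal distribution):

```python
from scipy.stats import kruskal

ccc_key_only  = [2, 1, 3, 2, 2, 4, 1, 3, 2, 2]   # viewing mode fixed on key
ccc_full_only = [3, 2, 2, 3, 1, 2, 4, 2, 3, 2]   # viewing mode fixed on full
ccc_switching = [2, 3, 2, 1, 3, 2, 2, 4, 2, 3]   # mode switched randomly mid-match

h_stat, p_value = kruskal(ccc_key_only, ccc_full_only, ccc_switching)
print(f"Kruskal-Wallis p-value: {p_value:.3f}")
# Only a small p-value, driven by the switching set producing more CCCs,
# would stop us rejecting OP's claim.
```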


4 hours ago, Junkhead said:

there is simply too much randomness and chance swinging things one way or the other


I don't think there is much randomness. When playing with a friend, there was some discussion about how random results are, so we replayed a World Cup final (England vs Spain). It was an AI vs AI game, with teams of similar quality, so we decided to run it 10 times and see what happened. We expected 5-5, 6-4, something like that. What surprised us is that it took 38 attempts before England lost it (on penalties). 37 consecutive wins for England! It seems the game was pretty much "rigged" for them to win it.

Sure, one game proves nothing, but I wonder how often this happens...


2 minutes ago, Prokopije said:

Sure, one game proves nothing, but I wonder how often this happens...

One game doesn't, but 38 are telling, even if they're the same teams :) The probability that in a 50/50 match the same side wins 37 times in a row is essentially 0. Even if England was a 90% favourite, the chance that they would win 37 in a row is just 2%.
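
The arithmetic, for anyone who wants to check it:

```python
p_even      = 0.5 ** 37   # 37 straight wins in a genuine 50/50 match
p_favourite = 0.9 ** 37   # 37 straight wins even as a 90% favourite
print(f"{p_even:.1e}")       # ~7.3e-12, effectively zero
print(f"{p_favourite:.1%}")  # ~2.0%
```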


24 minutes ago, Prokopije said:

I don't think there is much randomness. When playing with a friend, there was some discussion about how random results are, so we replayed a World Cup final (England vs Spain). It was an AI vs AI game, with teams of similar quality, so we decided to run it 10 times and see what happened. We expected 5-5, 6-4, something like that. What surprised us is that it took 38 attempts before England lost it (on penalties). 37 consecutive wins for England! It seems the game was pretty much "rigged" for them to win it.

LOL. That is the proof that FM is unrealistic. :D In my experience, England have always been a little overpowered in FM. They often win the World Cup and the Euros in my saves, sometimes even back-to-back titles. I remember I used to download custom databases that would slash English players' attributes, back in the day.

14 hours ago, epicrecruit said:

Just now, after 30 minutes of nothing happening in a cup replay, I switched to Full Match, my team was taking a throw-in, and we scored from that throw-in.

How do you know the goal wouldn't happen if you stayed on key highlights? Is this what happens to you consistently? I don't really need an answer because I know it doesn't.


3 hours ago, goranm said:

what I am describing is a method of proving that

It isn't. It's a method of assessing the probability that the OP is correct. That is different to proving something. Looking at the code would prove it, nothing else.

3 hours ago, goranm said:

If what you're saying is true, then nothing in the game matters. If randomness has such a hold on the ME, then attributes don't matter, morale doesn't matter, tactics don't matter etc.

I'm not actually sure it does, if I'm honest, if we consider JUST the match in isolation. There is little you can do as a manager once the game has begun. I think management IRL is done by minimising the likelihood of mistakes through man management, understanding and preparation. I find if I reflect this in the game (manage morale, train only match prep, practice set pieces, etc.) I get better results.

My understanding was that the match engine calculated a number of things per game, and the influence of the player tightens up the likelihood of things happening. I might be wrong though. I tend to try and play the game by doing things that make sense and that I would actually do in the scenario rather than game it, tbh.

I think one thing we will agree on is that in reality, a football team on a pitch would not create more CCCs if their manager sat on the bench with his eyes closed for 60 minutes than if he did so for 75 minutes.

So why the devil SI would make the game do that is completely beyond me 😂


1 hour ago, goranm said:

We don't need a baseline, we don't have to do anything elaborate, OP made a falsifiable claim (falsifiable in Popper's sense, not that it is a priori false) which can be tested.

I love having science discussions here! Of course you need a baseline in this situation. At the very least you need to have an idea of what the level of variation is between measurements, and how many measurements are required to get any kind of significance between results. I cannot think of a single real-world experiment where you would not run a control experiment with the hypothesis "there is no significant difference between data sets" that you would test with the same methods you are using to test the hypothesis you are actually interested in.

Imagine you do not establish a baseline: my immediate criticism (and what I would say should I review such a result in a paper for a scientific journal) is, how do they know that any difference is significant compared to baseline variation?
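
Something like this is what I mean by a control, with made-up numbers: two batches gathered under identical conditions, tested first so you know how big a gap randomness alone produces:

```python
from scipy.stats import ttest_ind

control_a = [2, 3, 1, 2, 4, 2, 3, 2, 2, 3]   # key highlights, batch 1
control_b = [3, 2, 2, 3, 1, 2, 2, 4, 3, 2]   # key highlights, batch 2, same setup

_, p_control = ttest_ind(control_a, control_b)
print(f"baseline p-value (identical conditions): {p_control:.3f}")
# Any later key-vs-full difference has to stand out clearly against
# the variation you see between these two identical batches.
```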

Anyway, as interesting as this discussion is (and I'd be happy to talk more about it, I love this stuff), I do not want to entirely take over this thread with something as non-football as this!


Here's my theory:

Players whose teams perform better if they aren't watching are doing so because they make absolutely **** decisions based on what they are watching if they are watching. Prove me wrong. ;)

The fact of the matter is this: everything you get to see is technically a replay, as the half has been calculated already when hitting kick-off. That's why you can also save matches and view them in their entirety again, or click on any result in your league and watch the highlights of that. How and whether you choose to watch makes no difference until you make a change. From that moment on, the half is re-simulated again. This is done so SI know in "advance" where any highlight is, to show it as a highlight.
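
A toy sketch of that general idea, purely for illustration (this is obviously not SI's code, just the shape of "simulate up front, re-simulate only on a change"):

```python
import random

def simulate_half(tactics, seed, from_minute=0, start_goals=0):
    """Toy half: events depend only on the inputs and the seed,
    never on which highlight mode anyone is watching in."""
    rng = random.Random(seed * 1000 + from_minute)
    goals, events = start_goals, []
    for minute in range(from_minute, 45):
        if rng.random() < 0.02 + 0.01 * tactics["attacking"]:
            goals += 1
            events.append((minute, "goal"))
    return events, goals

# Calculated once at kick-off; key/extended/full are just different views of it.
events, goals = simulate_half({"attacking": 2}, seed=42)

# A tactical change in minute 30 re-simulates only the rest of the half,
# picking up from whatever the original simulation had produced by then.
goals_by_30 = sum(1 for minute, _ in events if minute < 30)
events_after, goals_after = simulate_half({"attacking": 4}, seed=42,
                                          from_minute=30, start_goals=goals_by_30)
```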


5 hours ago, goranm said:

If what you're saying is true, then nothing in the game matters. If randomness has such a hold on the ME, then attributes don't matter, morale doesn't matter, tactics don't matter etc. If we have 1000 sets of 100 played games, and for example the win/loss ratio is completely random over these 1000 sets, then the match engine is not really an engine, it's a random number generator.

How do you know this wouldn't be exactly the same in real life if you could go back in time and replay a match over and over? The answer is you don't, so you have ZERO frame of reference with which to make such a claim. What impresses me most about the engine IS the randomness factor of it. It encapsulates the varying paths a game can head in when a goal is scored, or a player is sent off. If this wasn't part of the game, it would just get completely predictable.

For example, we'll never know (for the reasons I gave above) who would have won the World Cup in 1966 had the Russian linesman not been corrupt and disallowed the 'goal' where the ball was never over the line :lol:

Or just to balance this for the English among you, if Lampard's 'goal' against the Germans stood instead of being incorrectly (though hilariously) disallowed.

The point is, these events change games, sometimes fractionally, sometimes hugely. Replaying a match you've already played shouldn't really ever be the same. Sure, if you're the stronger team you'd probably expect to win the match more often than you don't, but even still, incidents within a match can alter the whole thing drastically. It's one of the things in the game SI have got spot on for me.


12 hours ago, Prokopije said:

I don't think there is much randomness. When playing with a friend, there was some discussion about how random results are, so we replayed a World Cup final (England vs Spain). It was an AI vs AI game, with teams of similar quality, so we decided to run it 10 times and see what happened. We expected 5-5, 6-4, something like that. What surprised us is that it took 38 attempts before England lost it (on penalties). 37 consecutive wins for England! It seems the game was pretty much "rigged" for them to win it.

Sure, one game proves nothing, but I wonder how often this happens...

Yeah, in a controlled environment (which FM is) randomness is minimised to a certain extent. But there is still a degree of uncertainty in it. How much uncertainty there is will only be known by its designers. Therefore it is very difficult to set up any experiment when we do not know how much uncertainty there is.


14 hours ago, goranm said:

Except what I'm describing is how, for example, medicine is tested against a (side) effect that was random and one that wasn't, or a placebo effect, or how weather prediction is done. The same medicine goes into 100 different humans with 7000 different organs which follow the same basic rules, yet we know how to differentiate between random and non-random effects, and we are able to tell which probability distribution a random effect follows. Similarly, for weather prediction we plug the same initial conditions (same teams) millions of times into a computer model (the ME), and this model converges towards a certain prediction for temperature (score), humidity (number of CCCs) etc. The prediction might be wrong for one particular day, but on average it will correctly predict the weather or the trajectory of a particular storm.

If what you're saying is true, then nothing in the game matters. If randomness has such a hold on the ME, then attributes don't matter, morale doesn't matter, tactics don't matter etc. If we have 1000 sets of 100 played games, and for example the win/loss ratio is completely random over these 1000 sets, then the match engine is not really an engine, it's a random number generator. In the same way, if the number of CCCs doesn't converge around a certain value for these 1000 sets, but is completely random, then the ME is nothing but an RNG.

Sure. However, nothing you've said here shows that the method I'm describing will not prove what OP wants. Think again about medicine: 100 people will have different biology, different pathways in their bodies, and at the smallest level the interaction of electrons in a medicine with the electrons in the human body is completely random. However this randomness follows a distribution and physical laws, so we are still able to deduce "global" effects on the whole body with a degree of accuracy. The ME follows the same logic: the random variables that encode the probability of the tiniest mistakes or whatever follow a probability distribution, so over 1000s of games the distribution of CCCs must show a pattern, and if this pattern is approximately the same for both viewing modes, then OP is disproved. If not, OP is proven.

I understand what OP is trying to prove, and what I am describing is a method of proving that. It doesn't matter that matches are never the same; what matters is that they go through the same black box (the ME) whose properties we are trying to figure out. This black box follows some set of rules, some of which are random, but some are not. If you think about how experiments in science are done, no two experiments are really the same: maybe one experiment is done in a room slightly warmer than the other, maybe it has a slightly higher concentration of oxygen, whatever. Yet we still prove scientific facts by replicable experiments that are never perfectly the same, because we know how to distinguish between random and non-random effects.

Medical graduate here. Medical trials are not conducted with absolute randomness. Ever heard of controlled trials? Every trial is controlled on every other factor, leaving only the factor that we want to test free to vary. Hence any tests involving FM can only be done if we know how much uncertainty is involved, which only the designers know.



43 minutes ago, Dagenham_Dave said:

How do you know this wouldn't be exactly the same in real life if you could go back in time and replay a match over and over?

If you could go back only as a viewer, replaying a match would always give exactly the same match, because there is no random element in real life.


I always feel like the team does worse when I'm watching compared to when I do instant result. It's probably more due to me being a lousy in-game tactician though.

I have thought about conducting a randomised experiment, where I flip a coin at the team selection page: heads I watch, tails I do instant result.
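
If I ever do it, the bookkeeping would be something as simple as this (hypothetical names, Python):

```python
import random

results = []   # one (mode, points) entry per league match

def decide_mode():
    """The coin flip at the team selection page."""
    return "watch" if random.random() < 0.5 else "instant"

def points_per_game(results, mode):
    pts = [p for m, p in results if m == mode]
    return sum(pts) / len(pts) if pts else 0.0

# After each match: results.append((mode, 3)) for a win, 1 for a draw, 0 for a loss.
# At the end of the season:
# print(points_per_game(results, "watch"), points_per_game(results, "instant"))
```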


8 hours ago, zyfon5 said:

Medical graduate here. Medical trials are not conducted with absolute randomness. Ever heard of controlled trials? Every trial is controlled on every other factor, leaving only the factor that we want to test free to vary. Hence any tests involving FM can only be done if we know how much uncertainty is involved, which only the designers know.

I've never said that medical trials are conducted with absolute randomness, I don't know where you concluded that from. And no, you don't need to know how much uncertainty is involved to run tests, otherwise studying black-box systems and reverse engineering, and in general the study of any kind of random or apparently random phenomena, would be impossible.


8 hours ago, Dagenham_Dave said:

How do you know this wouldn't be exactly the same in real life if you could go back in time and replay a match over and over? The answer is you don't, so you have ZERO frame of reference with which to make such a claim. What impresses me most about the engine IS the randomness factor of it. It encapsulates the varying paths a game can head in when a goal is scored, or a player is sent off. If this wasn't part of the game, it would just get completely predictable.

I don't, but FM is not real life. In FM we can replay matches and we will see patterns over a large number of matches. Mathematical models like the ME don't need a frame of reference except for the predictions they make - we also can't replay a specific day over and over yet we do have weather predicting models with "zero frame of reference".

8 hours ago, Dagenham_Dave said:

The point is, these events change games, sometimes fractionally, sometimes hugely. Replaying a match you've already played shouldn't really ever be the same. Sure, if you're the stronger team you'd probably expect to win the match more often than you don't, but even still, incidents within a match can alter the whole thing drastically. It's one of the things in the game SI have got spot on for me.

And that is why you repeat the match 100 times. Over many repetitions patterns will emerge.


11 hours ago, sporadicsmiles said:

I love having science discussions here! Of course you need a baseline in this situation. At the very least you need to have an idea of what the level of variation is between measurements, and how many measurements are required to get any kind of significance between results. I cannot think of a single real-world experiment where you would not run a control experiment with the hypothesis "there is no significant difference between data sets" that you would test with the same methods you are using to test the hypothesis you are actually interested in.

Say you're at the LHC and want to ascertain the existence of the Higgs boson for the first time. What is your baseline when the Higgs boson has never been observed before? After removing noise, how do you know that it actually is the Higgs boson, and not some other previously undetected particle that maybe shares some properties with the Higgs boson? What is your control?


11 hours ago, Junkhead said:

It isn't. It's a method of assessing the probability that the OP is correct. That is different to proving something. Looking at the code would prove it, nothing else.

We prove stuff about nature in this way all the time without access to the code. We can infer something about the structure of the universe without actually knowing all the details of the underlying code. The same goes for the ME.


18 hours ago, goranm said:

we also can't replay a specific day over and over yet we do have weather predicting models with "zero frame of reference".


What an utterly bizarre comparison.


On 31/10/2020 at 12:06, Prokopije said:

I don't think there is much randomness. When playing with a friend, there was some discussion about how random results are, so we replayed a World Cup final (England vs Spain). It was an AI vs AI game, with teams of similar quality, so we decided to run it 10 times and see what happened. We expected 5-5, 6-4, something like that. What surprised us is that it took 38 attempts before England lost it (on penalties). 37 consecutive wins for England! It seems the game was pretty much "rigged" for them to win it.

Sure, one game proves nothing, but I wonder how often this happens...

Imagine the fun you'd have just playing the game as normal instead of pointlessly frustrating yourself playing the same match 38 times for absolutely no reason. I'll never understand why people do this. It doesn't prove a single thing.


7 hours ago, Dagenham_Dave said:

What an utterly bizarre comparison.


It's not bizarre: the ME and weather prediction models work on the same principle, they're both systems of equations in which some, but not all, variables take random values. Both take fixed numbers as input and produce an output, and they don't produce the same output every time the same input is fed into them. So by your argument weather prediction models wouldn't work.

7 hours ago, Dagenham_Dave said:

Imagine the fun you'd have just playing the game as normal instead of pointlessly frustrating yourself playing the same match 38 times for absolutely no reason. I'll never understand why people do this. It doesn't prove a single thing.

It proves that the random part of the ME isn't very influential for the final result. The probability for a 50/50 match to go to the same side 37 times in a row is practically zero. Even if one side has a 90% chance to win, the probability that it wins 37 times in a row is just 2%.


On 31/10/2020 at 01:16, robot_skeleton said:

Schrödinger's M.E.

When I'm tired enough that I re-run a selection of matches to figure out tactics, I become paranoid enough to think that the M.E. is indeed a creation of Schrödinger. I thought I had figured out one satisfying tactical setup upon watching the same match, the Champions League final, multiple times with fairly consistent results on Extended Highlights. Not always the same of course, but you could see a pattern. Then I made the foolish mistake of watching that same match once again in full length. It was one of the most embarrassing and tepid displays of FM football I had seen in a long while, completely at odds with what I had seen until then with that setup.

Then I went to sleep. It fixed my nerves but it didn't fix my tactics! I think it's time for a relapse and fire FM once more... :ackter:


44 minutes ago, Dagenham_Dave said:

Really? From one example? Where we don't know any other information or whether it was actually true or not?Ā 

Not with a very high degree of certainty from that one particular example, however I've run hundreds of such examples and what was said is consistent with what I've observed.


1 hour ago, Dagenham_Dave said:

This is just about the saddest thing I've ever read on here.

Well I'm a mathematician by profession, so stats and figuring out how black boxes work are interesting to me. But I understand if numbers are difficult for you :brock:


12 hours ago, Xavier Lukhas said:

When I'm tired enough that I re-run a selection of matches to figure out tactics, I become paranoid enough to think that the M.E. is indeed a creation of Schrödinger. I thought I had figured out one satisfying tactical setup upon watching the same match, the Champions League final, multiple times with fairly consistent results on Extended Highlights. Not always the same of course, but you could see a pattern. Then I made the foolish mistake of watching that same match once again in full length. It was one of the most embarrassing and tepid displays of FM football I had seen in a long while, completely at odds with what I had seen until then with that setup.

Then I went to sleep. It fixed my nerves but it didn't fix my tactics! I think it's time for a relapse and fire FM once more... :ackter:

I tried that as well, replaying just to see if the tactic works but never really managed to extract value from that.


On 31/10/2020 at 13:06, Prokopije said:

I don't think there is much randomness. When playing with a friend, there was some discussion about how random results are, so we replayed a World Cup final (England vs Spain). It was an AI vs AI game, with teams of similar quality, so we decided to run it 10 times and see what happened. We expected 5-5, 6-4, something like that. What surprised us is that it took 38 attempts before England lost it (on penalties). 37 consecutive wins for England! It seems the game was pretty much "rigged" for them to win it.

Sure, one game proves nothing, but I wonder how often this happens...

There is exactly one such scenario where I've seen that happen, which is one AI manager exploiting ME flaws that are hard to defend against. This happens by chance and research, such as the Barca (assistant) manager with his three strikers back on that release destroying the all-time scoring records in La Liga every single season, scoring 160+ goals each. Or an AI manager flooding the middle of the pitch against another AI manager playing 4-4-2ish on FM17, as the wide midfielders on that would barely defend the middle. Unlike a human player, AI managers can't "read" a match, therefore if they come up against such an AI tactic, they are hopeless no matter how many times you reload.


Otherwise, not.


Skipping over the conspiracy part, FM20 definitely shows a lot fewer highlights under 'Key Highlights' this year. Unsure if intended or not, but it certainly is annoying.

In years gone by it was always good to show a couple of passages of play that led to a chance. This year there can be 10/15+ shots on goal and not one highlight in a half. I'm sure my team or the AI have had something worthy!

At the very least, 1 or 2 things should be shown per half so I can see how the AI appear to be setting up.


24 minutes ago, north_london_fan said:

I was 4-0 up at half time with Israel, against Switzerland, and I went to make some lunch thinking the game was over. Oh boy, was I wrong! We drew 4-4 with them scoring a last minute equalizer.

Manager complacency is sometimes more dangerous than player complacency. :D

