I have agreed to take part in the process of tactic validation, and I will pay close attention to the claims users make when posting their content or tactics. Realistically, I will not have the time to test every tactic. However, if a tactic appears questionable to me — and possibly to other users who reach a similar conclusion — I would be willing to carry out testing myself. Other users could also share their own test results.
If someone else carries out the testing, proper proof (screenshots) should be submitted to justify the results, not just overall statistics.
If we are talking about testing someone else's tactics, we need to set clear rules on what forms of testing are acceptable.
As previously mentioned, holiday testing or full manual play seem to be the more logical choices. The 70/30 testing method cannot be properly justified, as testers would not know which matches were hand-played and which were not. For that reason, I believe there is a strong case for banning this method.
I will be less strict with new content creators, as they will need time to learn the process. We should therefore be more understanding and forgiving at the beginning.
I have always viewed tactic emulation as something slightly different. The aim is not simply to create a winning tactic at any cost, but to analyze the playing style of a real-life coach and replicate that style within the game as accurately as possible.
I would also like to see Game Status Windows and Managerial Stats included in tactic submissions. Editing screenshots is much easier now than it was years ago, so there will always be a risk of foul play.
There are two main reasons why I am particularly keen on including game status information:
1. It would show how many times the game was saved during testing. Excessive saving could indicate foul play, such as reloading a save to replay a poor result.
2. Managerial stats would indicate whether the game was hand-played and to what extent. As far as I am aware, holiday testing is not reflected in managerial stats, whereas instant results and manual play are.
From the limited testing I have conducted recently, I have the impression that holiday testing is more consistent under the new match engine. However, I have only completed 15 tests, which is not enough to draw a firm conclusion. If other users also conduct testing, we would be able to gather broader and more reliable data.
This is important because I personally always compare league statistics rather than focusing solely on cup wins, as cup success can be influenced heavily by random factors. Sometimes you may win several cups simply because you faced weaker opposition, while in another simulation you might draw a top team early and be knocked out.
Some members here, such as Knap, have been testing tactics for much longer than I have, so they would likely have valuable insight into this process.
I will try to create a structured list outlining what needs to be submitted with a tactic.
I have also noticed that some users submit a tactic and then disappear for a month without logging in. I have attempted to contact some of them but received no response. If someone makes a submission, they should at least remain available to engage in discussion and answer questions.