We identified an issue with MetaTester: when we run a Single Test once and then run the same Single Test again without changing anything, it gives different results.
This problem is also happening with Optimizations.
We are attaching a ZIP file with all the relevant data.
This problem is new in one of the latest builds (1010+); we didn't have it in earlier builds. We do a lot of heavy optimization, so this is a very big issue: if you can't trust the results you get from single tests and optimizations (because you can't reproduce them time after time), there's little point in running back tests at all. Has anyone else noticed this? Does anyone have a possible solution?
We already contacted ServiceDesk with no luck.