How can I further improve the accuracy of image detection?
I have tried setting the matching threshold to both its minimum and its maximum, but the problem seems to lie elsewhere.
The problem is that the actions simply pick the "most similar" spot on the screen: you can add noise to an image and the robot will still perform the task. I would like such an inaccurate match to raise an error that the robot then has to handle.
For example: if there is a red mark within the anchor area of your target image, but that mark is no longer present on the screen, the robot should not simply carry on and ignore the unmatched red mark.
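As a rough illustration of the behaviour I am after (not the tool's actual API — a sketch using plain normalized cross-correlation in NumPy, with a hypothetical `best_match` helper and `min_score` parameter): instead of silently accepting whatever spot scores highest, compare the best similarity score against a threshold and raise when it falls short.

```python
import numpy as np

def best_match(screen, template, min_score=0.95):
    """Exhaustive normalized cross-correlation search over a grayscale
    screen. Rather than silently returning the 'most similar' spot,
    raise an error when even the best score falls below min_score."""
    sh, sw = screen.shape
    th, tw = template.shape
    tz = template.astype(float) - template.mean()
    t_norm = np.sqrt((tz ** 2).sum())
    best_score, best_loc = -1.0, None
    for y in range(sh - th + 1):
        for x in range(sw - tw + 1):
            window = screen[y:y + th, x:x + tw].astype(float)
            wz = window - window.mean()
            denom = t_norm * np.sqrt((wz ** 2).sum())
            score = (wz * tz).sum() / denom if denom else 0.0
            if score > best_score:
                best_score, best_loc = score, (x, y)
    if best_score < min_score:
        raise RuntimeError(
            f"best similarity {best_score:.3f} < {min_score}: "
            "the anchor (e.g. the red mark) may have changed"
        )
    return best_loc  # top-left corner of the accepted match

# The 'unmatched red mark' case: the template region was altered on
# screen, so the best score drops below the threshold and we fail loudly.
rng = np.random.default_rng(0)
screen = rng.integers(0, 256, (40, 40), dtype=np.uint8)
template = screen[10:18, 20:28].copy()
print(best_match(screen, template))  # exact match is found

altered = screen.copy()
altered[10:18, 20:28] = 0  # the distinguishing detail is gone
try:
    best_match(altered, template)
except RuntimeError as err:
    print("refused weak match:", err)
```

This is essentially what libraries like OpenCV do internally (`cv2.matchTemplate` followed by a check on the peak score); the point is that the tool would need to expose that score so a too-low value can abort the action instead of being ignored.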