I have two images. One is a screenshot of a desktop (the source image), and the other is a screenshot of the same desktop taken through a remote access window (the image to be compared). I need to compare the two images and work out what percentage of them is the same. My problem is that the remote access window introduces an offset: its title bar shifts the second image down by 20 pixels or so, and the task bar cuts off almost 30 pixels from the bottom of the original image. I know I could trim the second image and do a sub-image search, but this is not a route I'd like to take.
Is there a way to use ImageMagick to compare the two images and work out what percentage of them is the same, even with the offset? Obviously that percentage would be a minimum of 90% or so, given the addition of the title and task bars and the cut-off of the original image.
Thanks. I was interested to see how you'd do it, more for the experiment than for the logic. It was instructive to see how you combined the commands to achieve my spec; a good ImageMagick learning experience.
The logical way is how I did it myself, and it is the route I purposely ruled out in my question above. I used these two commands:
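For reference, the two commands were presumably along these lines (a sketch, not the poster's exact invocation; the filenames and the crop geometry are my assumptions):

```shell
# Hypothetical filenames and geometry: crop the remote-session screenshot,
# dropping ~20 px of title bar from the top and ~30 px of task bar from
# the bottom of a 1920x1080 capture.
convert remote.png -crop 1920x1030+0+20 +repage cropped.png

# Search for the cropped image inside the original screenshot; the metric
# and the best-match offset are printed on stderr.
compare -metric RMSE -subimage-search source.png cropped.png result.png
```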
I guess now I need some help interpreting the output. Does the above output mean that there are 1796 pixels different, which is 2.7% of the image, and the subimage was found at offset 0,0?
The use of -compose difference, as described by user snibgo, is the most logical way to get the information you need. But I thought I would point out one way to combine your two commands.
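For completeness, snibgo's -compose difference approach looks roughly like this (filenames are placeholders):

```shell
# Difference image: each pixel is the channel-wise absolute difference
# |source - remote|; identical regions come out black.
convert source.png remote.png -compose difference -composite diff.png

# Mean of that difference over all pixels, as a fraction of the maximum;
# 0 means the images are identical.
convert diff.png -format "%[fx:mean]" info:
```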
The first file will be the difference image, with differences highlighted in red for the matching subregion. I believe the red is binary: clear where there is a perfect match, and red otherwise. It is blended with a contrast-reduced version of the subsection or of the smaller image (I am not sure which without further testing). So if you want the red intensity to vary with the amount of difference, you should use -compose difference and composite the result with the image and red.
The second file will be as large as the dimensions of the large image minus those of the small image. Its brightness represents the match score for each shift position of the small image relative to the large image, measured from the upper-left corner of the small image to the upper-left corner of the large image. Brighter always means a better match, regardless of the metric. (Some metrics, such as RMSE, are better when the value is smaller, so for those the match-score image is negated.)
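One way to fold the crop into the compare, as a sketch with placeholder names and geometry, is to pipe the cropped image straight into the sub-image search:

```shell
# Crop the remote screenshot and pipe it (as MIFF) into compare, which
# reads the second image from stdin. With -subimage-search, result.png
# expands into the two files result-0.png and result-1.png.
convert remote.png -crop 1920x1030+0+20 +repage miff:- |
  compare -metric RMSE -subimage-search source.png - result.png
```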
And my scripts normcrosscorr, rmsecorr and dotproductcorr, at the link below, do the same. They are very fast, but require an HDRI build of ImageMagick because they use FFTs (real/imaginary components).
"Does the above output mean that there are 1796 pixels different, which is 2.7% of the image, and the subimage was found at offset 0,0?"
IM calculates a score for every possible position of output.bmp within source.bmp. For metric RMSE, the score is the square root of the mean square difference of the pixels. The best position is where this score is lowest.
"1796.32 (0.02741) @ 0,0" says the best score occurred at position 0,0 (which means where the images align at top-left corners), and the score was 1796.32 (out of 65535), which is 0.02741 (out of 1.0). The score is RMSE, not a pixel count.
"-metric AE" would give a pixel count, but the images are too dissimilar.
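To illustrate the difference between the two metrics on synthetic images (not the poster's screenshots):

```shell
# Two 10x10 images that differ in exactly one pixel.
convert -size 10x10 xc:white a.png
convert a.png -fill black -draw "point 3,3" b.png

# AE counts mismatched pixels: here it reports 1. (compare exits with
# status 1 when the images differ, hence the "|| true".)
compare -metric AE a.png b.png null: 2>&1 || true

# RMSE reports the root-mean-square error instead, in quantum units
# (out of 65535 for a Q16 build) with the 0-1 value in parentheses.
compare -metric RMSE a.png b.png null: 2>&1 || true
```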