I'm trying to detect movement in a video stream by comparing frames extracted from it (1 frame per second).
Here is my approach:
Because I don't want to compare the whole image, I extract 9 parts evenly placed throughout the image (16x16 pixels per part).
Code: Select all
convert -extract 16x16+120+64 image1.png crops/image1_part1.png
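Something like this for all 9 parts (just a sketch; the frame size and grid offsets are example values, the real width/height would come from identify -format "%wx%h"):
Code: Select all
#!/bin/bash
# Crop 9 evenly spaced 16x16 parts from one frame.
# WIDTH/HEIGHT are assumed here; adjust them to the actual stream size.
WIDTH=640; HEIGHT=480; SIZE=16
mkdir -p crops
part=1
for row in 1 2 3; do
  for col in 1 2 3; do
    # place the parts on a 3x3 grid at 1/4, 2/4 and 3/4 of the frame
    x=$(( col * WIDTH / 4 - SIZE / 2 ))
    y=$(( row * HEIGHT / 4 - SIZE / 2 ))
    convert -extract ${SIZE}x${SIZE}+${x}+${y} image1.png crops/image1_part${part}.png
    part=$(( part + 1 ))
  done
done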
Then I compute the mean colour of each part:
Code: Select all
convert imageY_partXY.png -colorspace rgb -scale 1x1 -format "%[fx:floor(255*r)],%[fx:floor(255*g)],%[fx:floor(255*b)]" info:
Next I compute the distance (e.g. Euclidean) between the mean colours of corresponding parts of two consecutive frames. If that distance is below a given threshold (which still needs to be defined), I assume there is no movement in that particular part.
If none of the parts shows movement, I consider the two images to be the same. After 5 such images in a row (just an assumption!) I assume the stream shows no movement.
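Put together, the per-part check could look roughly like this (just a sketch: the crop names follow the loop above, and THRESHOLD=10 is only a placeholder value):
Code: Select all
#!/bin/bash
# Compare one part of two consecutive frames via the distance of their mean colours.
THRESHOLD=10   # placeholder, still needs tuning
mean1=$(convert crops/image1_part1.png -colorspace rgb -scale 1x1 \
        -format "%[fx:floor(255*r)] %[fx:floor(255*g)] %[fx:floor(255*b)]" info:)
mean2=$(convert crops/image2_part1.png -colorspace rgb -scale 1x1 \
        -format "%[fx:floor(255*r)] %[fx:floor(255*g)] %[fx:floor(255*b)]" info:)
# Euclidean distance between the two RGB triples
dist=$(echo "$mean1 $mean2" | awk '{print sqrt(($1-$4)^2 + ($2-$5)^2 + ($3-$6)^2)}')
if awk -v d="$dist" -v t="$THRESHOLD" 'BEGIN{ exit !(d < t) }'; then
  echo "part 1: no movement (distance $dist)"
else
  echo "part 1: movement (distance $dist)"
fi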
I know, I could also do it this way:
Code: Select all
compare image1.png image2.png image_out.png
or, allowing for small colour differences:
Code: Select all
compare -fuzz 5% image1.png image2.png image_out.png
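Or, instead of writing an output image, just read a metric (e.g. -metric AE, the count of mismatching pixels, which compare prints to stderr) and threshold that number:
Code: Select all
compare -metric AE -fuzz 5% image1.png image2.png null: 2>&1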
But I only use IM for prototyping; in the end I need a way that works without IM and is fast enough.
This solution kind of works. Of course it does not catch every movement, because it only covers those 9 parts.
Does anyone have a better approach? Or any idea to improve it?
Is using the mean colour a good way to compare the parts?
Any feedback is very much appreciated.
