It's just idle wondering; I have no idea if it's feasible or how to do it. Even if it's not as absolute as reverse engineering the algorithm, it might be useful to quantify the degree of randomness. As I have pontificated before, perhaps a more random dithering pattern is less likely to create aggravating visual patterns. After all, my personal experience has been that GPU dithering is generally bad, while monitor dithering is generally OK. It would also be helpful to be able to determine whether a GPU is capable of multiple dithering algorithms, and when they are used. Idle thoughts aside, I would like to learn OpenCV as it might be useful for my new job. This could be an interesting place to start.
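For what it's worth, here's a rough sketch of one way you could put a number on "degree of randomness" — not a definitive method, just an assumption that you have the captured frames loaded as a NumPy stack and that the dither shows up in the least-significant bit. The idea: if the dither is temporally random, each pixel's LSB should toggle like a coin flip across frames (~1 bit of entropy); a frozen spatial-only pattern gives ~0.

```python
import numpy as np

def lsb_entropy(frames):
    """Mean per-pixel Shannon entropy of the least-significant bit
    across a stack of frames (frames x height x width, uint8).
    ~1.0 bit/pixel = LSB toggles like a coin flip (temporally random
    dither); ~0.0 = the LSB pattern is frozen frame-to-frame."""
    lsb = (np.asarray(frames) & 1).astype(np.float64)  # LSB plane per frame
    p = lsb.mean(axis=0)                   # P(bit == 1) for each pixel
    p = np.clip(p, 1e-12, 1 - 1e-12)       # avoid log2(0)
    h = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
    return float(h.mean())

# Synthetic sanity check: random temporal dither vs. a frozen pattern
rng = np.random.default_rng(0)
random_frames = rng.integers(0, 256, size=(64, 32, 32), dtype=np.uint8)
frozen = np.tile(random_frames[0], (64, 1, 1))
print(lsb_entropy(random_frames))  # near 1.0 for temporally random dither
print(lsb_entropy(frozen))         # near 0.0 for a frozen pattern
```

Comparing that number between the GPU captures and the monitor captures might show whether "more random" really does track "less aggravating".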
Anyway, back on topic. Looking forward to seeing how your findings compare with mine. I did not use OpenCV to decode; instead I used VLC player to screenshot each frame, then decoded the resulting .png files with a random library I found. It'd be interesting to see if that makes a difference. You are of course welcome to my capture samples, though I'm not sure how to share them. They total 20GB now, and the internet here isn't amazing (only 0.4MB/s upload).