Greetings

Today I would like to announce a new project that has been in the works for quite some time by myself and @Slacor: Project VideoDiff, an open source program written in Python, using the OpenCV library, for detecting visual anomalies including (but not limited to) temporal dithering.

Unfortunately it's often not as easy as taking a passive screen recording and using that as input, because most visual anomalies that cause issues are introduced at the GPU output stage. When paired with a lossless capture card such as the DVI2PCIe, however, VideoDiff becomes a useful tool for investigating visual anomalies across many types of devices and operating systems.

Here is a demonstration of VideoDiff capturing temporal dithering that was explicitly enabled on a Linux system with an AMD Radeon M370X graphics chip. You can see the per-frame differences in the blue, green and red color channels, as well as the "mask" method, which overlays the differences from the previous frame onto the current frame.

Demonstration (should be relatively safe to view)
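For the curious, the general frame-differencing idea boils down to something like the sketch below. This is not VideoDiff's actual source, just a simplified illustration; the input filename is a placeholder and any capture file (or camera) OpenCV can open would do.

```python
import cv2

# Toy sketch of per-channel frame differencing plus a "mask" overlay.
# Not the actual VideoDiff code; "capture.avi" is a placeholder input.
cap = cv2.VideoCapture("capture.avi")
ok, prev = cap.read()

while ok:
    ok, frame = cap.read()
    if not ok:
        break

    # Absolute per-pixel difference against the previous frame (OpenCV uses B, G, R order)
    diff = cv2.absdiff(frame, prev)
    b_diff, g_diff, r_diff = cv2.split(diff)

    # One way to build a "mask": any pixel that changed at all, painted red on the current frame
    changed = diff.max(axis=2) > 0
    overlay = frame.copy()
    overlay[changed] = (0, 0, 255)

    cv2.imshow("blue diff", b_diff)
    cv2.imshow("green diff", g_diff)
    cv2.imshow("red diff", r_diff)
    cv2.imshow("mask", overlay)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

    prev = frame

cap.release()
cv2.destroyAllWindows()
```

On a source with no temporal dithering, the per-channel diffs and the mask stay essentially empty from frame to frame.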

Here is the source code.

More information and tests to follow in the future.

Welcome to a new era

    Fascinating, I look forward to seeing where this goes.

    This looks like really impressive work and a potential game changer.

    Roughly how much work went into getting this working?


      Edward Roughly how much work went into getting this working?

      I first ran some image-subtraction and pixel-value comparison tests using a webcam as a "stand-in" (the sensor noise "simulates" dithering) around two years ago, then I got the PC with a lossless capture card and resumed working on this in January.
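      In rough terms, that early stand-in test was something along these lines (not the original script, just the gist; the webcam index and frame count here are arbitrary):

```python
import cv2

# Crude webcam stand-in test: sensor noise plays the role of dithering.
# Device index 0 and the 300-frame sample are arbitrary choices.
cap = cv2.VideoCapture(0)
ok, prev = cap.read()

for _ in range(300):
    ok, frame = cap.read()
    if not ok:
        break
    diff = cv2.absdiff(frame, prev)      # per-pixel change vs. the previous frame
    print("mean per-pixel change:", round(float(diff.mean()), 3))
    prev = frame

cap.release()
```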

      Nice work.

      Are you using OpenCV for its video decoding, or are you doing any machine learning stuff? I've been wondering if there is a machine learning solution to finding the original dithering algorithm that's been applied.


        Seagull Are you using OpenCV for its video decoding, or are you doing any machine learning stuff? I've been wondering if there is a machine learning solution to finding the original dithering algorithm that's been applied.

        Just for video decoding and static computation based on the frame pixel values, but I'm all ears 🙂

          Just to understand how this works, in a nutshell: it takes an existing video file, plays it back, and makes changes visible by coloring them?
          So it could be used with any capture card and existing video files?

          The cheaper cards don't capture raw but use downsampling. Would temporal dithering still be visible? It would be great if there were a really cheap card anyone could afford.


            KM Just to understand how this works, in a nutshell: it takes an existing video file, plays it back, and makes changes visible by coloring them?

            It can also take realtime input if the card cooperates, but yes.

            KM The cheaper cards don't capture raw but use downsampling. Would temporal dithering still be visible? It would be great if there were a really cheap card anyone could afford.

            Unsure. I got the best card I could to make that less of an issue.

            JTL

            It was only idle wondering; I have no idea if it's feasible or how to do it. Even if it's not as absolute as reverse engineering the algorithm, it might be useful to quantify the degree of randomness. As I have pontificated before, perhaps a more random dithering pattern is less likely to create aggravating visual patterns. After all, my personal experience has been that GPU dithering is generally bad while monitor dithering is generally OK. It would also be helpful to be able to determine whether a GPU is capable of multiple dithering algorithms, and when they are used. Idle thoughts aside, I would like to learn OpenCV as it might be useful for my new job, and this could be an interesting place to start.
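            For what it's worth, one naive way to put a number on "degree of randomness" might be the Shannon entropy of the frame-difference histogram. Just a sketch of the idea, not a claim that it's the right measure:

```python
import cv2
import numpy as np

def diff_entropy(prev_gray, gray):
    """Shannon entropy (bits) of the per-pixel frame-difference histogram.
    A flatter, higher-entropy distribution would suggest a more random pattern.
    Inputs are grayscale frames, e.g. from cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)."""
    diff = cv2.absdiff(gray, prev_gray)
    hist, _ = np.histogram(diff, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                     # drop empty bins before taking logs
    return float(-(p * np.log2(p)).sum())
```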

            Anyway, back on topic. Looking forward to seeing how your findings compare with mine. I did not use OpenCV to decode; instead I used VLC to screenshot each frame and then decoded the resulting .png files using a random library I found. It will be interesting to see if that makes a difference. You are of course welcome to my capture samples, though I'm not sure how to share them. They total 20GB now, and the internet here isn't amazing (0.4MB/s upload only).
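            In sketch form, the comparison step over those exported frames would look roughly like this (OpenCV used here only for illustration; the folder name is made up, and it's not the library I actually used):

```python
import cv2
from pathlib import Path

# Hypothetical sketch of diffing VLC-exported .png frames; "vlc_frames" is a made-up folder.
frames = sorted(Path("vlc_frames").glob("*.png"))
prev = cv2.imread(str(frames[0]))

for path in frames[1:]:
    frame = cv2.imread(str(path))
    diff = cv2.absdiff(frame, prev)
    print(path.name, "changed pixels:", int((diff.max(axis=2) > 0).sum()))
    prev = frame
```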


              Seagull You are of course welcome to my capture samples, though I'm not sure how to share them. They total 20GB now, and the internet here isn't amazing (0.4MB/s upload only).

              If you have a computer that can be left on to host them, I have some ideas. Feel free to email me (jtl at teamclassified dot ca).

                JTL Unfortunately I don't have a PC I can leave on. I might have a 1TB OneDrive account that I can use to share. Will get back to you.

                  Seagull I might have a 1TB OneDrive account that I can use to share

                  You'd still need to upload it somehow, so I don't see how that solves the problem.

                  Let's discuss this soon

                  This looks great. What does a 'good' output look like using this software? Much less (or no) noise present?


                    diop This looks great. What does a 'good' output look like using this software? Much less (or no) noise present?

                    Still early in the testing, but that's right.

                    Seagull Some kind of split utility could be used to break the files into manageable chunks. (120MB should take about 5 minutes to upload on a 0.4MB/s connection.)

                      Slacor

                      All on OneDrive now; anyone who wants them can give me their email. @JTL, you should have an invite email?

                        Seagull Haha. I just saw the email without context this morning and thought it was a malicious email of some kind.

                        Thanks for the reassurance.

                        JTL Ha! Not at all surprising. I pay for Google Drive despite getting a huge 1TB of OneDrive for free. My biggest problem with OneDrive is that it often doesn't detect small changes to files, resulting in them not being synced, so it's no good for code. I can share via Google Drive if needed, but you'll need to be able to download the files quickly, as I don't have a huge amount of space to play with.


                          Seagull What's the total size?

                          I'll poke at it more later, but I'll let you know if I need the Google Drive download.

                          so it's no good for code

                          My recommendation there is to look into git and run a private server if needed.
