async Really nice find. I'm certain there are important things to be found here. The list of random shit they are doing instead of actually taking a grid of colored pixels and displaying them is just mind blowing.

It looks like these 3 people are involved with many papers relating to the false 3D effect (including the one I previously mentioned that was funded by Intel Labs — BTW, I just noticed that one of the authors of that paper is associated with NVIDIA)

Profiles:

https://neurotree.org/neurotree/publications.php?pid=153838

https://neurotree.org/neurotree/publications.php?pid=1027

https://neurotree.org/neurotree/publications.php?pid=15601

Some of them are specifically related to stereoscopic displays, but there is a surprising number of papers on these pages that apply to traditional 2D displays as well (AKA relevant to us)

It looks like the "modern" version of this effect may have started with this paper from November 2017:
https://dl.acm.org/doi/pdf/10.1145/3130800.3130815 👀


EDIT: Just found this paper with a ton of information about it

https://theses.ncl.ac.uk/jspui/bitstream/10443/5772/1/Maydel%20F%20A.pdf

Page 78:

Other studies have proven that accommodative responses can be elicited by simulating the effects of LCA with the three primary colours of a screen

a method to render simulated blur that incorporates the LCA of the eye and generates retinal images similar to those found for natural defocus. They showed that this method could be used to drive the accommodative response of observers at distances of up to 1.4 dioptres away from the screen, both when viewed through a pinhole and through a natural pupil

Including more confirmation that it worsens image quality and can cause depth conflicts:

These results indicate that the visual system uses LCA as an important cue to accommodation, even when it is in conflict with other cues such as defocus or microfluctuations, and when it is detrimental for overall retinal image quality (as accommodating away from the screen would worsen the defocus of the image).

Page 90:

presented images to participants that simulated positive or negative refractive errors of up to 1.4 dioptres by differentially blurring the primaries of the screen at luminance edges, as LCA would on a real scene. Their responses [to the simulated LCA] were as robust as those triggered by an actual change in the focal distance of the target.

[…However,] all other cues such as micro-fluctuations and higher order aberrations would be indicating to the visual system that no change in accommodation was required

Page 92:

observers would accommodate close to the peak of their luminous sensitivity. However, our results suggest that the visual system maintains this strategy when accommodating to mixtures of narrowband illuminants, even when it might lead to suboptimal image sharpness. This means that visual displays that use narrowband primaries, particularly those that are used at near distances from the eye, might not be ideal

Page 153 is really interesting:

Modern digital displays are increasingly using narrowband primaries such as lasers and Light Emitting Diodes (LEDs). This allows for a wider colour gamut to be shown as well as higher energy efficiency; however, it is not clear how this might affect our perception, and in particular, our ability to accommodate and keep the image in focus.

considering wavelength for accommodative demand would be more relevant for visual displays that are used at nearer distances from the eye. It is important to note however, that we found large individual differences in this effect

We hypothesised that observers could be either maximising contrast at lower spatial frequencies, even when this is detrimental to contrast at higher spatial frequencies and these higher frequencies are task relevant

For practical applications, this means that mixtures of two narrowband illuminants [[i.e. red and blue]] are not optimal for maximising retinal image quality, particularly at high spatial frequencies.

However, the author didn't seem to realize the importance of checking whether these techniques are already being used in the very devices the studies were run on (I'm very sure at this point that they are)

    async

    Just found two more interesting ones

    1:

    https://ijrat.org/downloads/Vol-2/may-2014/paper%20ID-25201456.pdf

    Lots of technical details about the technique here + more example images

    2:

    https://cse3000-research-project.github.io/static/0a605a3e4f4f6388cec3388286bd0f9d/poster.pdf

    https://repository.tudelft.nl/record/uuid:178a950e-32c3-4397-a014-5a53d740ae74

    This is based on the 2011 Samsung one, although it is more basic, as it's just a small implementation done by a student (which is why the color shifting is more noticeable). However, there are some more examples here.

    Frustratingly, the section about "ethics" literally only talks about the ethics of someone artificially editing a photo, and NOT about the repercussions of these types of images on eyesight… 🤦‍♂️ 🤦‍♂️

    DisplaysShouldNotBeTVs Woah. Nice insights. I went down a few rabbit holes with accommodation today. There is a wealth of information about how the visual system works. I tried to figure out why some spatial frequencies trigger flickering in migraine / VSS, and whether it can be trained away. I'm certain there are techniques to reverse some of the issues caused by the Apple screens. The best possible solution would be something that actually untrains whatever is causing the screen issues, so people don't have to fight every single screen, OS and bulb for all of eternity.

    Tons of places where rivalry can take place and cause issues: blue-yellow opponency, koniocellular vs parvocellular, different spatial frequencies. It is possible to shift things to other pathways with imagery, overlays etc. Also wondering whether things like using equal amounts of red and green for yellow tones and pure white cause issues.

    There are also things that can shift how we view colors.

    Effects of acute high intraocular pressure on red-green and blue-yellow cortical color responses in non-human primates - ScienceDirect

    Also, Apple added support for capturing HDR screenshots/streams. It probably doesn't include all processing, but at least it might be usable for some types of diffing tools or overlays. They also deprecated something like 20 other methods and ways to capture the screen. It almost feels like it should be absolutely impossible to get the output right before it reaches the screen. I don't think there exists a single public tool that can capture in HDR. I might create a tool that measures potential rivalry, or overlays a diff of changes. Not sure if it will be useful without a capture card though.

    Capture HDR content with ScreenCaptureKit - WWDC24 - Videos - Apple Developer
    Capturing screen content in macOS | Apple Developer Documentation (sample project)

      async Created a sample that overlays the screen with a capture of itself while seeing if I could do some quick shaders.

      Realized that it is now possible to capture all windows separately and reassemble them to do advanced things, like blurring background windows, adding a slight dimming to the edges of windows to avoid contrasts, or blurring the borders of windows for less edge-detection strain. Essentially creating more of a custom window manager on the Mac. Could even do things like "ban" red pixels next to pure blue ones, or desharpen.

      Not sure how much can reasonably be done without ending up with massive GPU use and lag though.

      DisplaysShouldNotBeTVs The list of random shit they are doing instead of actually taking a grid of colored pixels and displaying them is just mind blowing.

      Seems like they're preparing users "subliminally" for an AR/VR future, using 2D displays as a testbed? Even on the pure flicker side of things, stuff like this exists:

      https://ledstrain.org/d/2706-guiding-attention-through-high-frequency-flicker-in-images

      Not surprising research would try to explore and exploit any and all available understanding of vision.

        photon78s Not sure, given that a 2-hour session with the Vision Pro this year, an hour with the original Vive back in 2016, and a recent Oculus headset I tried briefly were all fine enough for me. Even when I used a more complex app like VS Code Web on the Vision Pro, text was more readable than on ANY other modern Apple device

        (I really enjoyed the eye tracking control method too, I wish I could use the Vision Pro control method on regular screens). Depth perception felt natural for me in all 3.

        Other stereoscopic displays like Nintendo 3DS are also fine for me, ironically I get less strain from them even in 3D mode compared to any modern device (although I prefer the 2D mode if I'm going to play for multiple hours)

        I didn't have to "prepare" for trying those VR devices, they worked fine for me without any immediate problem — probably because each of my eyes is getting a totally separate image that they each can understand independently

        (The only time I got strain during that Vision Pro test was in the Mindfulness app that used a completely black background, since my eyes couldn't perceive pure black as "far away". Everything else that used passthrough was fine and had accurate depth, although the camera feed was disappointingly low resolution compared to the UI)

        On the other hand… outside of VR and stereoscopic displays, I cannot stand any 2D display that has this "false 3D effect" for more than 20 minutes LOL

        VR headsets actually make me feel like I have better depth perception in the real world after I use them sometimes…

        but "false 3D" 2D displays totally mess up my depth perception SO much and give me tunnel vision that lasts for hourrs

          photon78s

          That's VERY likely to be the case, given that TV marketing is actually much more transparent about these kinds of features and promotes them as exactly that.

          The same tech invented for TVs is probably just being "snuck into" other devices whenever manufacturers think it's "perceptually" subtle enough…


          Ironically, unlike other modern devices that default everything to ON and give you no choice…

          TVs actually do give some degree of control over processing… in fact I was able to make an (initially super strainy) modern LG OLED TV actually great with Netflix on PS5 by enabling a minimal-processing "4:4:4 Passthrough" mode and disabling "deep color", "contrast enhancement", "gradient smoothing" and dozens of other settings, which were all clearly labeled!!

          After messing with all the settings, the image is now acceptably flat in a surprising number of cases, I can consistently focus on most shows, and I can even understand pretty precisely what characters are doing in action scenes instead of seeing a blur of flashy colors… which I consider a huge win!

          And yet GPU settings on laptops totally hide these kinds of options despite using similar techniques!


          (suspiciously, even though that LG TV is now usable for me with the PS5… when an Apple TV 4K is connected instead, with the EXACT same modified TV settings, watching the same show, I can't focus at all and the 3D effect is SUPER intense even in Apple TV menus… out of nowhere it transforms into that "modern MacBook feel". Apple is 100% messing with their HDMI color output just like on all their other products)


          TVs are actually honest about this stuff:

          https://www.lg.com/levant_en/tvs/alpha9

          • "Frequency-based Sharpness Enhancer"
          • "The object depth enhancer precisely separates the main object from the background images and analyzes textures and edges […] This is to elevate the perceived depth of the one whole picture"

          https://news.samsung.com/my/heres-how-the-most-premium-tv-is-going-to-make-your-living-space-as-stylish-as-you

          • "AI Object Depth Enhancer […] mimic the human eye’s focus by improving contrast between foreground and background image. This newly added state-of-the-art technology will further enhance all visuals on the Neo QLED TV, creating a three-dimensional effect"
          • (from a different page) "Experience depth and dimension on screen just the way you see it in real life. Real Depth Enhancer creates an immersive experience by mirroring how the human eye processes depth"

            Hi! Could you help, please?
            How can I find out which company manufactures my MBA 15'' M3's display? LG, Samsung, some other vendor... Where can I find a list that decodes the manufacturer ID codes into names? For my MBA 15'' M3 it is "ManufacturerID"="00-10-fa".

            Looking for info:
            1. How many manufacturers (vendors) supply displays for MacBook models, especially the Air 15'' M3 model
            2. How to determine, on a specific laptop, who manufactured its display, i.e. how to translate "ManufacturerID" into the real name of the manufacturer (vendor) (see the sketch just below)
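
            For item 2, here is a minimal way to pull the raw key from Terminal (just a sketch; it assumes the key shows up in the IO registry, which is where I found mine):

            ioreg -lw0 | grep -i "ManufacturerID"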

            I've created a new topic here with detailed info on my MBA. Please share any useful info:

            https://ledstrain.org/d/2956-how-to-define-macbooks-display-manufacturer-vendor-my-mba-15-m3-air

              vladnft This is a good inquiry, but I hope you do not assume the panel is the only issue there.

                Just dropping a quick script here for people who want to experiment with multiple flags a bit more easily. Just drop it in an .sh file and run it (see the usage note after the script). It requires betterdisplaycli. Do note that some of the values have been found to improve things, and some simply haven't been found to cause adverse effects. If anyone wants to experiment, use something like TestUFO.

                #!/bin/bash
                
                # Helper function to run command and print specifier
                run_and_print() {
                    local specifier=$1
                    local property_type=$2
                    local property_value=$3
                    local display_name="built"
                
                    # Get the current value
                    local current_value=$(betterdisplaycli get -namelike="$display_name" -specifier="$specifier" -framebuffer"$property_type"Property 2>&1)
                
                    # Set the new value
                    local output=$(betterdisplaycli set -namelike="$display_name" -specifier="$specifier" -framebuffer"$property_type"Property="$property_value" 2>&1)
                
                    if [[ $output == *"Failed"* ]]; then
                        echo "\033[31m$specifier\033[0m\033[90m - $current_value"
                    else
                        echo "\033[32m$specifier\033[0m\033[90m - $current_value - $property_value\033[0m"
                    fi
                }
                # Boolean properties
                run_and_print "enableDither" "Bool" "off"
                run_and_print "uniformity2D" "Bool" "off"
                run_and_print "IOMFBTemperatureCompensationEnable" "Bool" "off"
                run_and_print "IOMFBBrightnessCompensationEnable" "Bool" "off"
                run_and_print "enable2DTemperatureCorrection" "Bool" "off"
                run_and_print "enableDarkEnhancer" "Bool" "off"
                run_and_print "DisableTempComp" "Bool" "on"
                
                run_and_print "AmbientBrightness" "Numeric" "0"
                run_and_print "IOMFBContrastEnhancerStrength" "Numeric" "0" # better to look at with it on but it seems to adjust slowly causing flicker and blotching
                run_and_print "IdleCachingMethod" "Numeric" "1" # reduces software cursor flicker from color profile
                
                run_and_print "overdriveCompCutoff" "Numeric" "0" // default 334233600, can cause stuck pixels?
                run_and_print "VUCEnable" "Bool" "off" # unstable?
                
                run_and_print "BLMAHMode" "Numeric" "1" # default 2
                
                # stuff that seems a bit better on
                #run_and_print "APTEnableCA" "Bool" "on"
                # run_and_print "enableBLMSloper" "Bool" "on"
                # run_and_print "APTEnablePRC" "Bool" "on"
                # run_and_print "APTPDCEnable" "Bool" "on"
                # run_and_print "enableDBMMode" "Bool" "on"
                # run_and_print "BLMPowergateEnable" "Bool" "on"
                # run_and_print "IOMFBSupports2DBL" "Bool" "on"
                
                # run_and_print "DisableDisplayOptimize" "Numeric" "1" # unstable

                Donux
                Sure, there are some software "algorithms for image improvement" made by Apple that affect image quality - like dithering.

                The basic goal is to rate all MacBooks' displays by hardware, especially the built-in IPS displays of the MBA 15'' M3.
                For example: say we have 3 suppliers (vendors), and in terms of quality, with the same software algorithms ON, one display supplier (vendor) is better than another… It would be great to determine which one is better, and then to have a quick Terminal command to check which supplier (vendor) is in each specific MBA 15'' M3 specimen…

                If the hardware quality of those vendors is very similar and the image is practically identical, that would also be a good result.

                Is there only one vendor or several? If several (2, 3 or more), then an IPS MBA 15'' M3 vendor rating:
                1st place. Best - vendor name 1 - vendor code 1 (in Terminal, ioreg etc.) - reason why it is the best
                2nd place. Middle - vendor name 2 - vendor code 2 (in Terminal, ioreg etc.) - reason why it is in the middle
                3rd place. Worst - reason why it is the worst

                I've started this research because I am not satisfied with the IPS display that my new MBA 15'' M3 has.
                Even with
                - dithering OFF
                - the standard RGB profile ON in system settings
                my MBA 15'' M3 still has
                - brightness flickering when pressing F1-F2; it does not change smoothly
                - linear gradients that are not smooth; banding effects appear randomly
                - black text on a white background that feels annoying, with too much "contrast". It is also visible when the system is loading and you see the white Apple logo on the black screen. Same effect

                Compared to my MBP 15'' 2014, where
                - brightness changes smoothly and
                - gradients are also very smooth and stable

                So if there is some MBA 15'' M3 with the best built-in display from a vendor (rated 1st place) - one that has
                - smooth brightness changes on F1-F2 and
                - very smooth and stable gradients -
                it would be great to find this "version" of the MBA 15'' M3 with the better display supplier

                  I use Gamma Control for color adjustments now. And at times there is some type of graphics switch that can be seen, as it takes a second or two until those adjustments are applied again. So far I haven't pinpointed exactly what happens, but it might be relevant to figure out.

                  I noticed it happening in one app when it shows some particular icons; for this app, which I briefly tested, it happens upon closing the app. https://apps.apple.com/us/app/almighty-powerful-tweaks/id1576440429?mt=12

                  I've noticed these settings, but changing them around through GlobalPreferences doesn't seem to influence anything. However, it might be a way to force sRGB output. There is a ton of settings loaded for most apps that can be changed here (a sketch of how one might try them follows the list):

                  CAEnableDeepFramebuffer
                  CSEnableIOSurfaceCompression
                  CADisableColorMatching
                  CADisableShadingDither
                  FramebufferServerUseLowQualityScaling
                  NSDeepDefaultWorkingColorSpace
                  NSExtendedWorkingColorSpace
                  NSLinearWorkingColorSpace
                  NSSingleWorkingColorSpace
                  NSWindowUsesZeroScreenForDefaultColorSpace
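
                  If anyone wants to try them, something like this is the obvious starting point (a sketch only; whether these keys are actually honored, and in which defaults domain, is an untested assumption on my part):

                  # Hypothetical: try disabling Core Animation color matching for one app…
                  defaults write com.google.Chrome CADisableColorMatching -bool yes
                  # …or globally, the same way the scale factor globals below are set
                  defaults -currentHost write -g CADisableShadingDither -bool yes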

                  Discovered something interesting: the NSWindowScaleFactor setting below can be used to affect the rendering of different apps. It can also be applied in the plist for specific apps.

                  Setting it to 1 will make apps like Apple Notes blurry, and some uneven values will mess up the text a bit or make it sharper.

                  If you play around with it DO NOT try floats, as that will crash the window server even in safe mode, and you will be forced to fix it in single user mode.

                  ⚡13% 17:25:29 ➜ defaults -currentHost write -g NSCGSWindowSkylightSupportsMoreScaleFactors -bool yes
                  ⚡13% 17:26:00 ➜ defaults -currentHost write -g NSWindowScaleFactor -int 1
                  ⚡13% 17:26:13 ➜ defaults -currentHost write -g NSWindowScaleFactor -int 2
                  ⚡13% 17:26:30 ➜ defaults -currentHost write -g NSWindowScaleFactor -int 3
                  ⚡13% 17:26:38 ➜ defaults -currentHost write -g NSWindowScaleFactor -int 10

                  There is also another option, named NSTypesetterBehavior, that seems to be able to force typography back to how it was in previous versions of macOS: https://developer.apple.com/documentation/appkit/nstypesetterbehavior?changes=_4_1&language=objc
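
                  If it can be forced the same way, it might look like this (an untested assumption that AppKit reads it as a user default; the value follows the NSTypesetterBehavior enum, where 0 is NSTypesetterOriginalBehavior):

                  # Untested sketch: try forcing the original typesetter behavior globally
                  defaults -currentHost write -g NSTypesetterBehavior -int 0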

                    async Played around a bit with this. So if I apply it to for example Chrome

                    defaults -currentHost write com.google.Chrome NSCGSWindowSkylightSupportsMoreScaleFactors -bool yes
                    defaults -currentHost write com.google.Chrome NSWindowScaleFactor -int 1

                    Anything under another window / shadow becomes blurry, and comes into focus again if I switch to the window. But if I have an overlay on the screen, then Chrome is sharp until I stop doing something; then, after a second or so, it all turns blurry. So this is related to idle mode somehow. Also, if using any type of overlay, you most likely have a slight shift of everything all the time.
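
                    To undo the experiment, just remove the overrides again:

                    defaults -currentHost delete com.google.Chrome NSWindowScaleFactor
                    defaults -currentHost delete com.google.Chrome NSCGSWindowSkylightSupportsMoreScaleFactors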

                    Also, I figured out that Kaleidoscope for Mac can be used to diff images to see subtle changes in text rendering, like when turning hardware acceleration on and off.
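
                    A simple way to grab comparable shots for diffing (screencapture ships with macOS; note it captures after color management, but not necessarily after all display-engine processing):

                    # capture silently, toggle the setting, capture again, then diff the two in Kaleidoscope
                    screencapture -x before.png
                    screencapture -x after.png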

                    Played around with the Opple LightMaster IV on my M1 Max. Measured flicker at a frequency of 15006 Hz.

                    Modulation depth:

                    - White 53.1%
                    - Black 39.65%
                    - Red 7.19%
                    - Green 76%
                    - Blue 79.09%
                    - Pink 35.72% (red + blue)
                    - Yellow 44.17% (red + green)

                    What's up with the modulation depth on red? Has anyone else seen this? Slow KSF phosphor, or something else going on?

                      async Do I understand correctly that a higher percentage means more flicker? If so, does that mean the optimum is up to 5 percent for a display to be classified as flicker-free?

                        Donux It is modulation depth, so 15006 times per second the brightness drops 79% when looking at a pure blue background. The question is why that doesn't happen with red. If this were simply backlight flicker (not PWM), then at least to me it makes no sense that it doesn't affect red.
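
                        For reference, modulation depth is normally defined as Michelson contrast, $\mathrm{MD} = \frac{L_{\max} - L_{\min}}{L_{\max} + L_{\min}} \times 100\%$. So for blue at 79%, the troughs dip to roughly 12% of the peak luminance (since $(1 - 0.117)/(1 + 0.117) \approx 0.79$), while red at 7% barely dips at all.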

                        Could it be that KSF phosphor, which is slower to respond, causes this?

                        https://pcmonitors.info/articles/the-evolution-of-led-backlights/

                        Just thinking as I type here, and I'm probably butchering the explanation and understanding of the visual system. A lot of how the visual system works is built around opponency of the following types:

                        • High / low luminance (black / white)
                        • Red / green
                        • Yellow / blue

                        Basically our vision detects a shitton of edges and gratings through these combos.

                        What actually happens if red doesn't flicker (due to KSF phosphor) while green, which hits the M and L cones, does?

                        We have rods that mainly respond to brightness (black and white), and we have 3 types of cones that map to different colors. The type of color and input can make us use certain visual pathways more or less. So looking at large, slowly moving black and white blobs takes a different path than looking at colored details.

                        Cone cell - Wikipedia

                        Some quick info on the 3 pathways:

                        Magnocellular (M) pathway:

                        • Processes motion, depth, and coarse visual details

                        • More sensitive to low spatial frequencies and high temporal frequencies

                        • Plays a role in detecting sudden changes or movement

                        • Most vulnerable to flicker due to its high temporal resolution

                        • Responds strongly to rapid changes in luminance

                        • Studies have shown M pathway neurons can follow flicker rates up to 60-80 Hz

                        Parvocellular (P) pathway:

                        • Processes fine detail and color information

                        • More sensitive to high spatial frequencies and lower temporal frequencies

                        • Important for form and object recognition

                        • Less sensitive to flicker compared to M pathway

                        • More responsive to steady-state stimuli

                        • Can still be affected by flicker, especially at lower frequencies

                        • Triggered by red-green opponency.

                        Koniocellular (K) pathway:

                        • Less well-understood compared to M and P pathways

                        • Involved in color processing, particularly blue-yellow distinctions

                        • May play a role in eye movement control and visual attention

                        • Less well-studied in terms of flicker sensitivity

                        • Some evidence suggests it may play a role in processing rapid color changes

                        Maybe one of the problems of KSF phosphor is mainly present when there is high-frequency flickering: instead of a simple brightness dip that affects the magnocellular pathway, you would keep hammering the parvocellular pathway with edge / no-edge / edge / no-edge, 15000 times per second.

                        Remember that white on the display consists of RGB. So to avoid this, you would need to either remove all red, resulting in a cyan image, or remove both green and blue, resulting in a pure red image. This can be done with Gamma Control if someone wants to experiment.

                        One thing to note is that if blacks are made brighter or color-shifted, this effect could be significantly increased, as pure black doesn't generate any signal for yellow-blue opponency. One could also argue that newer screens, with less bleeding and a less overly blue backlight, could trigger fewer of the different types of opponencies that the visual system uses to detect edges.

                        Say you want to create the maximum number of opponency signals for text: you would make the blacks slightly gray and purple (blue and red), and you would make the whites greenish yellow. The luminance contrast is there in any case. Compared to a perfect screen with pure black and white, you would have 3 opponency channels instead of 1. Not necessarily beneficial, but worth thinking about.

                        In any case, I'm a strong advocate for experimenting with small color shifts that make screens feel better, and I think the different flicker rates of the colors on the MacBook display could potentially be the reason it messes people up.

                        @DisplaysShouldNotBeTVs
