Are we confident that this colored-fringe 3D effect on text is distinct from subpixel antialiasing? The latter can also cause colored fringes on text.
I disabled dithering on Apple silicon + Introducing Stillcolor macOS M1/M2/M3
@waydabber Did you try Quartz Display Services to possibly bypass more of the image processing when streaming a display? As far as I can tell, when using a streaming display now it simply outputs it in a window, but it should be possible to take over the display.
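Roughly what I mean, as an untested Swift sketch (whether this actually skips the processing pipeline is exactly the open question):

import CoreGraphics

// "Take over" a display with Quartz Display Services: capture it exclusively,
// then draw straight into its framebuffer context instead of a normal window.
let display = CGMainDisplayID()  // assumption: ID of the streamed/virtual display
if CGDisplayCapture(display) == .success,
   let ctx = CGDisplayGetDrawingContext(display) {
    // ... blit incoming stream frames into `ctx` here ...
    ctx.setFillColor(CGColor(gray: 0.0, alpha: 1.0))
    ctx.fill(CGRect(x: 0, y: 0, width: 200, height: 200))
    CGDisplayRelease(display)
}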
macsforme It's entirely different, because it affects images too. In addition, even if the text you're displaying is manually rendered (or drawn in an image editor) and only made out of sharp black pixels, AND ClearType or font smoothing is off, you will still see the red and blue fringing on affected devices.
This fringing is definitely an additional effect, and not me simply seeing the physical RGB subpixels. This is because I also own a few older devices with screens (that happen to all be safe) where I can't see this effect at all —
on those usable devices, I can still notice physical RGB subpixels if I look closely, but it is very mild and the edges don't look "more exaggerated than the rest" at all.
On the bad devices, it looks like the fringing is extra full-size colored pixels that are visibly placed to the left and right of a filled shape, instead of colors only staying within the shape's pixels themselves.
DisplaysShouldNotBeTVs Thought quite a lot about this.
- There have been examples of removing the McCollough effect by repeating the same exposure with alternative colors, and I saw one note about untraining it by viewing the same types of patterns at an angle. If these types of adaptations are involved, then in theory it would be possible to create overlays, color shifts or other visual training that offsets it.
- Imo more people should find the most reliable way to see if their symptoms turn on or off, and then do stupid experiments like putting the computer monitor at a 45 degree angle for a day. Then tilt it 45 degrees to the side. Or even do something properly hard like using it upside down for a few hours to force the brain to do some new adaptation. Or simply use a video wallpaper, put a light that hits the screen, or add a big bezel. People can laugh about it, but there are tons and tons of ways to modulate things that no one even thought about testing, and everything that provides some relief is a clue as to what needs to happen to fix it. For example, I get the same type of blinking effect on text when things are really bad that you get from viewing black-and-white striped patterns, but if I hold the iPhone light so it shines onto the screen near the text, it instantly goes away. As do all the glowing effects on text. So obviously I force some other pathway to get active.
- I'm pondering the effect of dark mode and OLED / mini-LED screens. Usually you get a proper reaction and pupil constriction when things are too bright, but with this you can blast a pretty intense focused amount of "white" light into your eyes. On top of this you get super sharp edges, and you get the unnatural combo of white text on a black background that doesn't really exist anywhere. So thinking about the McCollough effect, it isn't entirely unfathomable that this could force some adverse adaptation near other edges. My guess is that our visual system mostly attends to the least bright areas in our visual field, the same way we ignore the sky.
- Dark mode is also really problematic with astigmatism, or if there is any problem with prescription strength or chromatic aberrations.
- Smoothing or Mac antialiasing might offset the chance of getting adverse adaptations from the edge detectors (McCollough effect), but it might also make the eyes think they need to refocus on high-DPI displays.
- Viewing desaturated photos most likely causes adverse adjustments.
- A gamma curve with more blue in blacks seems to work better for me. Adjustable in BetterDisplay (rough programmatic sketch after this list).
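The programmatic route, for anyone scripting it: Quartz Display Services has a gamma-formula call that can lift the blue channel's black level. A minimal Swift sketch with illustrative values:

import CoreGraphics

// Lift blue in the blacks via the Quartz gamma formula, similar in spirit
// to what BetterDisplay / Gamma Control do. The 0.05 blue minimum is illustrative.
let display = CGMainDisplayID()
let err = CGSetDisplayTransferByFormula(
    display,
    0.0, 1.0, 1.0,   // red:   min, max, gamma
    0.0, 1.0, 1.0,   // green: min, max, gamma
    0.05, 1.0, 1.0   // blue:  raised minimum -> slightly blue blacks
)
// The adjustment reverts when this process exits, or call
// CGDisplayRestoreColorSyncSettings() to reset manually.
assert(err == .success)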
async Really nice find. I'm certain there are important things to be found here. The list of random shit they are doing instead of actually taking a grid of colored pixels and displaying them is just mind blowing.
It looks like these 3 people are involved with many papers relating to the false 3D effect (including the one I previously mentioned that was funded by Intel Labs — BTW, I just noticed that one of the authors of that paper is associated with NVIDIA)
Profiles:
https://neurotree.org/neurotree/publications.php?pid=153838
https://neurotree.org/neurotree/publications.php?pid=1027
https://neurotree.org/neurotree/publications.php?pid=15601
Some of them are specifically related to stereoscopic displays, but there is a surprising number of papers on these pages that apply to traditional 2D displays as well (AKA relevant to us)
It looks like the "modern" version of this effect may have started with this paper from November 2017:
https://dl.acm.org/doi/pdf/10.1145/3130800.3130815
EDIT: Just found this paper with a ton of information about it
https://theses.ncl.ac.uk/jspui/bitstream/10443/5772/1/Maydel%20F%20A.pdf
Page 78:
Other studies have proven that accommodative responses can be elicited by simulating the effects of LCA with the three primary colours of a screen
a method to render simulated blur that incorporates the LCA of the eye and generates retinal images similar to those found for natural defocus. They showed that this method could be used to drive the accommodative response of observers at distances of up to 1.4 dioptres away from the screen, both when viewed through a pinhole and through a natural pupil
Including more confirmation that it worsens image quality and can cause depth conflicts:
These results indicate that the visual system uses LCA as an important cue to accommodation, even when it is in conflict with other cues such as defocus or microfluctuations, and when it is detrimental for overall retinal image quality (as accommodating away from the screen would worsen the defocus of the image).
Page 90:
presented images to participants that simulated positive or negative refractive errors of up to 1.4 dioptres by differentially blurring the primaries of the screen at luminance edges, as LCA would on a real scene. Their responses [to the simulated LCA] were as robust as those triggered by an actual change in the focal distance of the target.
[…However,] all other cues such as micro-fluctuations and higher order aberrations would be indicating to the visual system that no change in accommodation was required…
Page 92:
observers would accommodate close to the peak of their luminous sensitivity. However, our results suggest that the visual system maintains this strategy when accommodating to mixtures of narrowband illuminants, even when it might lead to suboptimal image sharpness. This means that visual displays that use narrowband primaries, particularly those that are used at near distances from the eye, might not be ideal
Page 153 is really interesting:
Modern digital displays are increasingly using narrowband primaries such as lasers and Light Emitting Diodes (LEDs). This allows for a wider colour gamut to be shown as well as higher energy efficiency; however, it is not clear how this might affect our perception, and in particular, our ability to accommodate and keep the image in focus.
considering wavelength for accommodative demand would be more relevant for visual displays that are used at nearer distances from the eye. It is important to note however, that we found large individual differences in this effect
We hypothesised that observers could be either maximising contrast at lower spatial frequencies, even when this is detrimental to contrast at higher spatial frequencies and these higher frequencies are task relevant
For practical applications, this means that mixtures of two narrowband illuminants [[i.e. red and blue]] are not optimal for maximising retinal image quality, particularly at high spatial frequencies.
However, the author didn't seem to realize the importance of checking whether these techniques are already being used in the very devices the studies were run on (I'm very sure at this point that they are)
Just found two more interesting ones
1:
https://ijrat.org/downloads/Vol-2/may-2014/paper%20ID-25201456.pdf
Lots of technical details about the technique here + more example images
2:
https://cse3000-research-project.github.io/static/0a605a3e4f4f6388cec3388286bd0f9d/poster.pdf
https://repository.tudelft.nl/record/uuid:178a950e-32c3-4397-a014-5a53d740ae74
This is based on the 2011 Samsung one, although it's more basic as it's just a small implementation done by a student (which is why the color shifting is more noticeable). However, there are some more examples here.
Frustratingly, the section about "ethics" literally only talks about the ethics of someone artificially editing a photo, and NOT about the repercussions of these types of images on eyesight…
DisplaysShouldNotBeTVs Woah. Nice insights. I went down a few rabbit holes with accommodation today. There is a wealth of information about how the visual system works. I tried to figure out why some spatial frequencies trigger flickering in migraine / VSS, and if it can be trained away. I'm certain there are techniques to reverse some of the issues caused by the Apple screens. The best possible solution would be something that actually untrains whatever is causing the screen issues, so people don't have to fight every single screen, OS and bulb for all of eternity.
Tons of places where rivalry can take place and cause issues: blue-yellow opponency, koniocellular vs parvocellular, different spatial frequencies. It is possible to shift things to other pathways with imagery, overlays etc. Also wondering if things like using equal amounts of red and green to make yellow tones and pure white cause issues.
There are also things that can shift how we view colors.
Also, Apple added support for capturing HDR screenshots/streams. Probably doesn't include all processing, but at least it might be usable for some types of diffing tools or overlays. Also they deprecated like 20 other methods and ways to capture the screen. Almost feels like they want it to be absolutely impossible to get the output right before it reaches the screen. I don't think there exists a single public tool that can capture with HDR (rough sketch of the new path after the links below). Might create a tool that measures potential rivalry, or overlays a diff of changes. Not sure if it will be useful without a capture card though.
Capture HDR content with ScreenCaptureKit - WWDC24 - Videos - Apple Developer
Capturing screen content in macOS | Apple Developer Documentation (sample project)
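A minimal sketch of the one-shot HDR screenshot path, assuming macOS 15; the captureDynamicRange property and its case names are from the WWDC24 session above, so double-check them:

import ScreenCaptureKit

// One-shot HDR screenshot via ScreenCaptureKit (macOS 15+, per WWDC24).
struct NoDisplay: Error {}

func captureHDRScreenshot() async throws -> CGImage {
    let content = try await SCShareableContent.excludingDesktopWindows(false, onScreenWindowsOnly: true)
    guard let display = content.displays.first else { throw NoDisplay() }
    let config = SCStreamConfiguration()
    config.captureDynamicRange = .hdrLocalDisplay  // keep the display's own EDR headroom
    config.width = display.width
    config.height = display.height
    let filter = SCContentFilter(display: display, excludingWindows: [])
    return try await SCScreenshotManager.captureImage(contentFilter: filter, configuration: config)
}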
async Created a sample that overlays the screen with a capture of itself while seeing if I could do some quick shaders.
Realized that it is now possible to capture all windows separately and assemble them again to do advanced things, like blur background windows, add a slight dimming to the edges of windows to avoid contrasts, or blur the borders of windows to have less edge-detection strain. Essentially creating more of a custom window manager on macOS (per-window capture sketch below). Could even do things like "ban" red pixels next to pure blue ones, or desharpen.
Not sure how much can reasonably be done without ending up with massive GPU use and lag though.
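For anyone who wants to experiment along, the per-window capture part is roughly this (these are the real ScreenCaptureKit calls as far as I can tell, but treat it as a starting point, not a finished implementation):

import ScreenCaptureKit

// Grab each on-screen window as its own image so they can be recomposited
// with custom effects (background blur, edge dimming, etc.).
func captureWindowsSeparately() async throws -> [(SCWindow, CGImage)] {
    let content = try await SCShareableContent.excludingDesktopWindows(true, onScreenWindowsOnly: true)
    var results: [(SCWindow, CGImage)] = []
    for window in content.windows where window.isOnScreen && window.frame.width > 1 {
        let filter = SCContentFilter(desktopIndependentWindow: window)
        let config = SCStreamConfiguration()
        config.width = Int(window.frame.width)
        config.height = Int(window.frame.height)
        let image = try await SCScreenshotManager.captureImage(contentFilter: filter, configuration: config)
        results.append((window, image))
    }
    return results
}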
DisplaysShouldNotBeTVs The list of random shit they are doing instead of actually taking a grid of colored pixels and displaying them is just mind blowing.
Seems like preparing users "subliminally" for an AR/VR future, but using 2D displays as a testbed? Even on the pure flicker side of things, stuff like this exists:
https://ledstrain.org/d/2706-guiding-attention-through-high-frequency-flicker-in-images
Not surprising research would try to explore and exploit any and all available understanding of vision.
photon78s Not sure, given that a 2-hour session with the Vision Pro this year, an hour with the original Vive back in 2016, and a recent Oculus headset I tried briefly were all fine enough for me. Even when I used a more complex app like VS Code Web on the Vision Pro, text was more readable than on ANY other modern Apple device
(I really enjoyed the eye tracking control method too, I wish I could use the Vision Pro control method on regular screens). Depth perception felt natural for me in all 3.
Other stereoscopic displays like Nintendo 3DS are also fine for me, ironically I get less strain from them even in 3D mode compared to any modern device (although I prefer the 2D mode if I'm going to play for multiple hours)
I didn't have to "prepare" for trying those VR devices, they worked fine for me without any immediate problem — probably because each of my eyes is getting a totally separate image that they each can understand independently
(The only time I got strain during that Vision Pro test was in the Mindfulness app that used a completely black background, since my eyes couldn't perceive pure black as "far away". Everything else that used passthrough was fine and had accurate depth, although the camera feed was disappointingly low resolution compared to the UI)
On the other hand… outside of VR and stereoscopic displays, I cannot stand any 2D display that has this "false 3D effect" for more than 20 minutes LOL
VR headsets actually make me feel like I have better depth perception in the real world after I use them sometimes…
but "false 3D" 2D displays totally mess up my depth perception SO much and give me tunnel vision that lasts for hours
Right. I remember your post about your positive experiences with the Vision Pro. Prepare may not be the right word. Is this all pitched and sold as 2D display experience "enhancement" and without the necessary user testing as usual?
That's VERY likely to be the case, given TV marketing that's actually much more transparent about these kinds of features and promotes them as exactly that.
The same tech invented for TVs is probably just being "snuck into" other devices whenever manufacturers think it's "perceptually" subtle enough…
Ironically, unlike other modern devices that default everything to ON and give you no choice…
TVs actually do give some degree of control over processing… in fact I was able to make an (initially super strainy) modern LG OLED TV actually great with Netflix on PS5 by enabling a minimal processing "4:4:4 Passthrough" mode, disabling "deep color", "contrast enhancement", "gradient smoothing" and dozens other settings which were all clearly labeled!!
After messing with all the settings, the image is now acceptably flat in a surprising amount of cases, I can consistently focus on most shows, and even understand pretty precisely what characters are doing in action scenes instead of a blur of flashy colors… which I consider a huge win for me!
And yet GPU settings on laptops totally hide these kinds of options despite using similar techniques!
(suspiciously, even though that LG TV is now usable for me with PS5… when an Apple TV 4K is connected instead with the EXACT same modified TV settings, watching the same show, I can't focus at all and the 3D effect is SUPER intense even on Apple TV menus… out of nowhere it transforms into that "modern MacBook feel". Apple is 100% messing with their HDMI color output just like all their other products)
TVs are actually honest about this stuff:
https://www.lg.com/levant_en/tvs/alpha9
- "Frequency-based Sharpness Enhancer"
- "The object depth enhancer precisely separates the main object from the background images and analyzes textures and edges […] This is to elevate the perceived depth of the one whole picture"
- "AI Object Depth Enhancer […] mimic the human eye’s focus by improving contrast between foreground and background image. This newly added state-of-the-art technology will further enhance all visuals on the Neo QLED TV, creating a three-dimensional effect"
- (from a different page) "Experience depth and dimension on screen just the way you see it in real life. Real Depth Enhancer creates an immersive experience by mirroring how the human eye processes depth"
And they don't have to invent new things for different use cases, and it probably saves costs in supply/manufacturing as well.
Hi! Could you help, please?
How can I find out which company manufactured my MBA 15'' M3's display? LG, Samsung, some other Japanese vendor...? Where can I find a list that decodes the ID codes into names? For my MBA 15'' M3 it is "ManufacturerID"="00-10-fa".
Looking for info:
1. How many manufacturers (vendors) supply displays for MacBook models, especially the Air 15'' M3 model
2. How to determine, on a specific laptop, who manufactured its display (how to translate the "ManufacturerID" into the real name of the manufacturer (vendor))
Here I've created a new topic with detailed info on my MBA. Please share any useful info:
https://ledstrain.org/d/2956-how-to-define-macbooks-display-manufacturer-vendor-my-mba-15-m3-air
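In case it helps others answer: standard EDID manufacturer IDs pack three 5-bit letters (1 = 'A') into two bytes, so they can be decoded like the Swift sketch below. Note that the 3-byte "00-10-fa" from ioreg on Apple Silicon may not follow this classic scheme:

import Foundation

// Decode a classic EDID/PNP manufacturer ID: two bytes, three 5-bit letters.
func decodePNPID(_ raw: UInt16) -> String {
    let letters = [(raw >> 10) & 0x1F, (raw >> 5) & 0x1F, raw & 0x1F]
    return String(letters.compactMap { v in
        (1...26).contains(v) ? Character(UnicodeScalar(UInt8(64 + v))) : nil
    })
}

// decodePNPID(0x30E4) == "LGD" (LG Display), decodePNPID(0x4C2D) == "SAM" (Samsung)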
Just dropping a quick script here for people that want to experiment with multiple flags a bit easier. Just drop it in an .sh file. It requires betterdisplaycli. Do note that some of the values have been found to improve things, and some are just not found to cause adverse effects. If anyone wants to experiment, use something like TestUFO.
#!/bin/bash
# Helper function to run command and print specifier
run_and_print() {
    local specifier=$1
    local property_type=$2
    local property_value=$3
    local display_name="built"
    # Get the current value
    local current_value=$(betterdisplaycli get -namelike="$display_name" -specifier="$specifier" -framebuffer"$property_type"Property 2>&1)
    # Set the new value
    local output=$(betterdisplaycli set -namelike="$display_name" -specifier="$specifier" -framebuffer"$property_type"Property="$property_value" 2>&1)
    # -e so the ANSI color escapes are interpreted; red = failed, green = set
    if [[ $output == *"Failed"* ]]; then
        echo -e "\033[31m$specifier\033[0m\033[90m - $current_value\033[0m"
    else
        echo -e "\033[32m$specifier\033[0m\033[90m - $current_value - $property_value\033[0m"
    fi
}
# Boolean properties
run_and_print "enableDither" "Bool" "off"
run_and_print "uniformity2D" "Bool" "off"
run_and_print "IOMFBTemperatureCompensationEnable" "Bool" "off"
run_and_print "IOMFBBrightnessCompensationEnable" "Bool" "off"
run_and_print "enable2DTemperatureCorrection" "Bool" "off"
run_and_print "enableDarkEnhancer" "Bool" "off"
run_and_print "DisableTempComp" "Bool" "on"
run_and_print "AmbientBrightness" "Numeric" "0"
run_and_print "IOMFBContrastEnhancerStrength" "Numeric" "0" # better to look at with it on but it seems to adjust slowly causing flicker and blotching
run_and_print "IdleCachingMethod" "Numeric" "1" # reduces software cursor flicker from color profile
run_and_print "overdriveCompCutoff" "Numeric" "0" // default 334233600, can cause stuck pixels?
run_and_print "VUCEnable" "Bool" "off" # unstable?
run_and_print "BLMAHMode" "Numeric" "1" # default 2
# stuff that seems a bit better on
#run_and_print "APTEnableCA" "Bool" "on"
# run_and_print "enableBLMSloper" "Bool" "on"
# run_and_print "APTEnablePRC" "Bool" "on"
# run_and_print "APTPDCEnable" "Bool" "on"
# run_and_print "enableDBMMode" "Bool" "on"
# run_and_print "BLMPowergateEnable" "Bool" "on"
# run_and_print "IOMFBSupports2DBL" "Bool" "on"
# run_and_print "DisableDisplayOptimize" "Numeric" "1" # unstable
Donux
Sure, there are some software "algorithms for image improvement" made by Apple that affect image quality - like dithering.
The basic goal is to rate all MacBooks' displays by hardware, especially the built-in IPS displays of the MBA 15'' M3.
For example: say we have 3 suppliers (vendors), and in terms of quality, with the same software algorithms ON, one supplier's display is better than another's. It would be great to determine which one is better, and then have a quick Terminal command to check which supplier (vendor) is in each specific MBA 15'' M3 unit.
If the hardware quality of those vendors is very similar and the image is practically the same, that would also be a good result.
…
Is there only one vendor or several? If several (2, 3 or more), then:
IPS MBA 15'' M3 vendor rating
1st place. Best - Vendor's name 1 - vendor's code 1 in Terminal ioreg etc - reason why it is the best one
2nd place. W - Vendor's name 2 - vendor's code 2 in Terminal ioreg etc - reason why it is in the middle
3rd place. Worst - reason why it is the worst one
…
I've started this research because I am not satisfied with the IPS display that my new MBA 15'' M3 has.
Even if
- dithering is OFF
- RGB standard profile is ON in system settings
My MBA 15'' M3 still has
- brightness flickering when pressing F1-F2; it is not changing smoothly
- linear gradients are not smooth; some linear artifacts appear randomly
- black text on a white background seems too harsh, with too much "contrast". It is also visible while the system is loading and you see the white Apple logo on a black screen. Same effect
Compared to my MBP 15'' 2014, where
- brightness changes smoothly and
- gradients also very smooth and stable
So if there is some MBA 15'' M3 with the best built-in display (rated 1st place), the one that has
- smooth brightness changing F1-F2 and
- gradients also very smooth and stable
it would be great to find that "version" of the MBA 15'' M3 with the better display supplier
I use Gamma Control for color adjustments now. And at times there is some type of graphics switch that can be seen, as it takes a second or two until those adjustments are applied again. So far I haven't pinpointed exactly what happens, but it might be relevant to figure out.
I noticed it happening in one app when it shows some particular icons, and for this app that I briefly tested it happens upon closing the app. https://apps.apple.com/us/app/almighty-powerful-tweaks/id1576440429?mt=12
I've noticed these settings, but changing them around through GlobalPreferences doesn't seem to influence them. However, it might be a way to force sRGB output. There is a ton of settings loaded for most apps that can be changed here (per-app sketch after the list):
CAEnableDeepFramebuffer
CSEnableIOSurfaceCompression
CADisableColorMatching
CADisableShadingDither
FramebufferServerUseLowQualityScaling
NSDeepDefaultWorkingColorSpace
NSExtendedWorkingColorSpace
NSLinearWorkingColorSpace
NSSingleWorkingColorSpace
NSWindowUsesZeroScreenForDefaultColorSpace
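A sketch of flipping one of these keys for a single app's preferences domain, the Swift equivalent of a defaults write ("com.apple.Notes" is just an illustrative target, and whether the window server honors it per app is exactly what needs testing):

import Foundation

// Hypothetical example: set one of the keys above in a specific app's domain.
let appID = "com.apple.Notes" as CFString  // illustrative target bundle ID
CFPreferencesSetAppValue("CADisableColorMatching" as CFString, kCFBooleanTrue, appID)
_ = CFPreferencesAppSynchronize(appID)  // flush so the app picks it up on next launch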
Discovered something interesting: NSWindowScaleFactor can be used to affect the rendering of different apps. It can also be applied in the plist for specific apps.
Setting it to 1 will make apps like Apple Notes blurry, and some odd values will mess up the text a bit or make it sharper.
If you play around with it DO NOT try floats, as that will crash the window server even in safe mode, and you will be forced to fix it in single user mode.
defaults -currentHost write -g NSCGSWindowSkylightSupportsMoreScaleFactors -bool yes
defaults -currentHost write -g NSWindowScaleFactor -int 1
defaults -currentHost write -g NSWindowScaleFactor -int 2
defaults -currentHost write -g NSWindowScaleFactor -int 3
defaults -currentHost write -g NSWindowScaleFactor -int 10
There is also another option named NSTypesetterBehavior that seems to be able to force typography back to how it was in previous versions of macOS: https://developer.apple.com/documentation/appkit/nstypesetterbehavior?changes=_4_1&language=objc
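For in-app experiments, the documented AppKit knob behind that page is NSLayoutManager's typesetterBehavior; a minimal sketch, with case names per the linked docs:

import AppKit

// Force the pre-10.3 typesetter on a layout manager, i.e. old-style typography.
let layoutManager = NSLayoutManager()
layoutManager.typesetterBehavior = .behavior_10_2_WithCompatibility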