Hi all,
When dealing with a display that has ambiguous backlight specs and no reference spectro on hand, selecting the correct colorimeter correlation file is a guess that risks severe measurement error. For instance, a spec sheet might list a generic "W-LED" backlight while advertising "95% DCI-P3" coverage, which strongly implies a PFS/KSF phosphor layer is being used to achieve the wide gamut but doesn't explicitly confirm it. The same ambiguity can arise with any unverified display. Rather than guessing blindly, I am looking to validate a theoretical workflow for deducing the correct file purely from pre-calibration data:
The RGB Balance Deduction Method:
1. Factory reset the display to its default color preset (untouched RGB gains).
2. Perform a grayscale sweep using Candidate Correlation File A (e.g., standard W-LED).
3. Perform a grayscale sweep using Candidate Correlation File B (e.g., PFS Phosphor).
4. Compare the pre-calibration RGB Balance/Separation graphs (see the sketch after this list for one way to quantify the comparison).
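
To make step 4 concrete, here is a minimal sketch of how I imagine quantifying the comparison, assuming a D65 target and a standard sRGB channel decomposition. The function names, the divergence score, and the input format (a list of XYZ readings from the sweep) are all hypothetical, just to pin down the idea:

```python
import numpy as np

# XYZ (D65 white) -> linear sRGB matrix, per IEC 61966-2-1.
XYZ_TO_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

def rgb_balance(xyz_readings):
    """Per-patch R/G/B levels relative to their mean.

    A perfectly neutral patch comes out as (1.0, 1.0, 1.0); a channel
    deficit or excess shows up as a value below or above 1.0.
    """
    out = []
    for xyz in xyz_readings:
        rgb = XYZ_TO_SRGB @ np.asarray(xyz, dtype=float)
        out.append(rgb / rgb.mean())
    return np.array(out)

def divergence_score(xyz_readings):
    """Single-number summary: mean absolute channel deviation from neutral."""
    return float(np.abs(rgb_balance(xyz_readings) - 1.0).mean())

# Hypothetical usage: sweep_a and sweep_b hold the same grayscale patches
# measured with candidate correlation files A and B respectively.
# score_a = divergence_score(sweep_a)
# score_b = divergence_score(sweep_b)
# Per the hypothesis, the lower score points to the better-matched file.
```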
Hypothesis: Since factory presets are typically tuned against reference spectros, the incorrect correlation file will introduce an artificial mathematical skew, resulting in a severe, unphysical RGB divergence (e.g., a massive deficit in one channel) across the whole sweep. The correct file should show a tighter, more cohesive RGB balance that reflects the actual factory calibration.
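
To spell out why I expect the skew to be systematic rather than random: as I understand it, the correction from a correlation file effectively boils down to a fixed 3x3 transform of the instrument's raw response, so on an additive display a wrong matrix biases every gray level in the same direction. A toy illustration with completely made-up matrices (not real correction data):

```python
import numpy as np

# Toy correction matrices with made-up values, purely for illustration
# (real corrections come from the instrument's actual .ccmx/.ccss data).
CORRECTION_A = np.eye(3)                       # pretend this one matches the backlight
CORRECTION_B = np.array([[1.05, 0.00, 0.00],   # pretend this one over-reads X
                         [0.00, 1.00, 0.00],
                         [0.00, 0.00, 0.97]])  # ...and under-reads Z

# Raw instrument readings for two gray levels on an additive display:
# the darker patch is just a scaled copy of the brighter one.
gray_75 = np.array([0.70, 0.74, 0.80])
gray_25 = (25 / 75) * gray_75

for raw in (gray_75, gray_25):
    xyz_a = CORRECTION_A @ raw
    xyz_b = CORRECTION_B @ raw
    # Relative disagreement between the two candidate corrections.
    print((xyz_b - xyz_a) / xyz_a)
```

The printed relative difference is identical at both gray levels, which is the kind of consistent, level-independent offset the hypothesis treats as the signature of a wrong file.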
1. Is this deductive method mathematically and scientifically sound across all display technologies, or is it possible for the wrong correlation file to coincidentally produce a flatter RGB balance?
2. Excluding visual A/B daylight matching, is there a universal, data-driven method to definitively identify the correct correlation file for any unknown display when a spectro is unavailable?
Thanks in advance for any insights.