Steve, thanks for the reply.
Well, I expressed myself poorly. It shouldn't be called "clipping," but rather the correct adjustment for grayscale.
I understand that a 1D LUT doesn't correct mixed colors; it only corrects grayscale gamma and grayscale color temperature. Only a 3D LUT can correct the other colors.
But as in my example, if the 1D LUT corrects grayscale color temperature, it greatly reduces the nonlinearity the 3D LUT has to correct afterwards. However, since the 1D correction caps the maximum R/B output, doesn't that reduce the number of usable nodes in the 3D LUT? For instance, nodes like (250, 255, 250) and (255, 255, 255) would never receive a signal at all, because those values are already clipped by the 1D LUT.
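To make the concern concrete, here is a small sketch. The channel caps (R capped at 240, B capped at 230) are the hypothetical example values from this post, and the 17-point grid is assumed to sit on the standard uniform 0-255 spacing:

```python
# Hypothetical 1D LUT output caps needed to hit the target white point
# (example values from the post, not measured data).
max_out = {"R": 240, "G": 255, "B": 230}

# Standard uniform 3D LUT grid: 17 nodes per axis spanning 0..255.
nodes = [round(i * 255 / 16) for i in range(17)]

# A 3D LUT node coordinate is reachable only if the 1D LUT can still
# output that value on the corresponding channel.
unreachable = {ch: [v for v in nodes if v > cap] for ch, cap in max_out.items()}

print(unreachable)
# The top R node (255) and the top B nodes are never fed any signal,
# so that part of the grid is wasted.
```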
If the 1D LUT doesn't correct color temperature, it would take the current 8000K white point as the reference and calibrate only gamma. This wouldn't shrink the R/B channels, so (255, 255, 255) would still map to (255, 255, 255), and the 3D LUT could capture all code values. However, grayscale color temperature correction would then rely solely on the 3D LUT's limited 17 grayscale points.
This is the contradiction I'm trying to express.
Upon reflection, I believe a solution exists: if the maximum input values of the 3D LUT's three RGB axes could match the maximum output of the 1D LUT, this issue would be resolved. The 3D LUT could redefine its 17 nodes by distributing them over 0-240 on the R axis, 0-255 on the G axis, and 0-230 on the B axis. That way no nodes would become unusable.
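The redistribution I have in mind can be sketched like this (again using the hypothetical 240/255/230 caps from my example):

```python
# Sketch of the proposed fix: spread each axis's 17 nodes over the
# 1D LUT's actual output range instead of the fixed 0-255 grid.
channel_max = {"R": 240, "G": 255, "B": 230}  # assumed 1D LUT caps
N = 17  # nodes per axis

per_channel_nodes = {
    ch: [cap * i / (N - 1) for i in range(N)]
    for ch, cap in channel_max.items()
}

# Every node now lies inside the reachable range, so none are wasted;
# the last R node sits exactly at the 1D LUT's R cap.
print(per_channel_nodes["R"][-1])  # 240.0
```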
While casually reviewing a .cube 3D LUT file, I noticed a domain parameter (as shown in the image) that appears to define the maximum values for the RGB channels. However, when I asked ChatGPT about it, it told me that standard 3D LUTs distribute their nodes uniformly from 0 to the maximum without compressing the three RGB channels.
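For what it's worth, the Adobe .cube format does define `DOMAIN_MIN` and `DOMAIN_MAX` keywords, each taking one float per channel, which rescale the input domain of the table; a hypothetical header along those lines might look like this (the 0.941/0.902 values are just my example caps 240/255 and 230/255, and whether a given calibration tool actually honors these keywords is a separate question):

```
TITLE "example"
LUT_3D_SIZE 17
DOMAIN_MIN 0.0 0.0 0.0
# Per-channel maxima: approx. 240/255 for R, 255/255 for G, 230/255 for B
DOMAIN_MAX 0.941 1.0 0.902
```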
This leaves me puzzled about how ColourSpace handles this.