Commons:Featured picture candidates/File:Castle Uçhisar in Cappadocia.jpg


Voting period is over. Please don't add any new votes. Voting period ended on 12 May 2014 at 07:21:40 (UTC).

"Castle" Uçhisar in Cappadocia Turkey
Although on a wiki page one can ask the MediaWiki software to render the image at any size, the image description page only offers a few choices, and there's a huge jump from the largest of them to 100%. I think this lack of viewing flexibility encourages overly negative comments, because one jumps from screen-sized to 100% in one go, and it isn't so easy to view at 50%, 75%, or at 6 MP or 12 MP, unless one uses an external image viewer. I'm not opposed to downsampling where there is no loss of detail, and on this picture I estimate one could downsample to 66% without loss, but no further.
Btw, Benh's "2/3 is made up data" claim isn't correct. I suspect you meant to say 1/3. To some extent this is an internet myth. It is true that the resolution captured by a Bayer sensor isn't as high as its stated resolution, and that the colour resolution is significantly worse than the luminance resolution (which is fine, because the eye doesn't mind). But the conclusion some draw, that since e.g. the sensor only captures 2/3 of the stated resolution, one can downsample to 2/3 size without loss of detail, is easily proven false. Take one of your sharp pictures that is a 100% copy straight out of camera. Downsample it to 66%, then upsample it back to the original size. Flip between the two. Look at fine lettering in a sign, say. Depending on your eyesight and monitor dot pitch, you might need to compare the two at 200% to see what is lost. You can get close if you apply some sharpening when you upsample, but it is still not as good. -- Colin (talk) 12:35, 6 May 2014 (UTC)
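A minimal sketch of the round-trip test Colin describes, using Pillow; the filenames and the unsharp-mask settings are placeholder assumptions, not anything specified in the discussion:

```python
from PIL import Image, ImageFilter

# A sharp, 100% out-of-camera JPG (placeholder path).
orig = Image.open("sharp_original.jpg")
w, h = orig.size

# Downsample to 66%, then upsample back to the original size.
small = orig.resize((int(w * 0.66), int(h * 0.66)), Image.LANCZOS)
roundtrip = small.resize((w, h), Image.LANCZOS)

# Optional: sharpen on the way back up, which narrows the gap
# but, per Colin, still doesn't fully recover fine detail.
sharpened = roundtrip.filter(ImageFilter.UnsharpMask(radius=1, percent=60))

roundtrip.save("roundtrip_66.jpg", quality=95)
sharpened.save("roundtrip_66_sharpened.jpg", quality=95)
# Flip between the original and these copies at 100-200% and
# look at fine lettering in a sign.
```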
  • Disclaimer: I don't know much about signal processing and all that. My claim was more to say "space is not free; there's no need to provide this big an image when most of the data is irrelevant". If it were really about providing something as good as it gets, then we would upload plain TIFFs, but what does the extra space really bring? Colin, I meant 2/3. Each pixel on a sensor (Bayer or not) captures only one of the R, G or B channels. The other two channels' values are recovered through interpolation, so I believe that, basically, 2/3 of the data is made up. The only case (to my knowledge) where downsampling really throws away captured data is Sigma's FOVEON sensor, where each pixel genuinely captures all three RGB channels. - Benh (talk) 15:08, 6 May 2014 (UTC)
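To make the interpolation Benh refers to concrete, here is a deliberately naive bilinear demosaic of an RGGB Bayer mosaic in Python/NumPy. This is a crude sketch for illustration only; as Colin notes below, real raw converters are far more sophisticated than this:

```python
import numpy as np
from scipy.signal import convolve2d

def naive_demosaic(mosaic):
    # Each sensor pixel captured only one channel (RGGB tiling
    # assumed here), so 2 of every 3 output channel values must
    # be filled in from neighbouring samples.
    h, w = mosaic.shape
    r_mask = np.zeros((h, w), dtype=bool)
    r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), dtype=bool)
    b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    kernel = np.ones((3, 3))
    rgb = np.zeros((h, w, 3))
    for c, mask in enumerate((r_mask, g_mask, b_mask)):
        samples = np.where(mask, mosaic, 0.0)
        # Average the real samples inside each 3x3 neighbourhood.
        total = convolve2d(samples, kernel, mode="same")
        count = convolve2d(mask.astype(float), kernel, mode="same")
        rgb[..., c] = total / np.maximum(count, 1.0)
    return rgb
```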
    It doesn't work that way. Demosaicing considers luminance and colour data separately. The reason the Bayer sensor is so successful is that the eye isn't sensitive to colour resolution; this is also why many still and moving image formats store the colour data at lower resolution. The luminance is reconstructed mostly from the two green sensors in the grid, but also from the other colours. While mathematically, and on test charts, the sensor can't capture all the XX MP it claims to, it will at times capture accurate pixel-level detail and at other times smudge it. The demosaicing algorithms are more than just crude interpolation, and describing the result as "made up" is unkind. Provided one uses a reasonable lens and low ISO, there really is pixel-level luminance detail in a JPG. The mistake that is made is assuming that downsampling is simply the inverse of the demosaicing algorithm. It isn't; it is much cruder, so data is lost.
    But back on topic: I agree that modest downsampling can be justified if the image suffers no loss. This may happen more with huge stitched images, with the latest 24 MP APS-C or 36 MP full-frame sensors coupled with modest-quality lenses, or with high-ISO images with heavy NR. But remember that JPG is pretty good at removing wasted data, so the extra space, although not free, is pretty cheap. But can downsampling be demanded? Is looking at a 2-metre-wide image from a distance of 20 cm with one's reading glasses on actually a fair way to judge? -- Colin (talk) 19:37, 6 May 2014 (UTC)
    I'm staying off topic since it seems "interesting" (for the rest, we look to pretty much agree). Again, not necessarily to be taken for granted: probably eyes are more sensitive to luminance data; that is why sensors have more green pixels. As far as I understood, the Bayer pattern is more successful because it's easier to interpolate (there is always a sample in the neighbourhood of a blank pixel for a given channel) -- far more so than, say, the X-Trans pattern from Fuji, which is troublesome to interpolate (but which has better low-light capability because of its extra green pixels). I'd like to point out that of course demosaicing is not "crudely made up". But the only tweak we can bring to that process is to make the image look better; we can't guess data we don't have. Since most subjects we shoot have fairly predictable patterns, we often guess with good certainty. In some cases, however, everything is torn apart. The monkey nomination above is a good example of where demosaicing struggles: the monkey's details are just a mess, because the patterns are not predictable and are similar in size to a pixel (btw, the author did a good job with careful selective NR). IMO a good approach is the one taken by Nokia in its latest smartphones: using an N-zillion-pixel sensor but outputting much smaller pictures, because they are a big enough container for the relevant data. - Benh (talk) 09:01, 7 May 2014 (UTC)
    So in short: all pictures (good lens or not) can be downscaled to some extent without losing relevant data (which, btw, should be the reason behind Nokia's approach in their smartphones). - Benh (talk) 09:24, 7 May 2014 (UTC)
    Hmm. I'm not sure then what you mean by "without losing relevant data". You seem to imply "only chucking away stuff that was made up anyway", or even "getting closer to the truth". The transform that demosaicing makes isn't simply an enlargement, so why should a reduction transform take one closer to the truth? The downsampling process is just as likely to eliminate reasonably faithful pixels as totally unfaithful ones -- both get averaged out. It's like saying that because a 128 kbps MP3 has lost a lot of fidelity, one might as well just play it back on a cheap portable stereo, or even that it will sound better on a cheap stereo than on a decent hi-fi. The cheap stereo isn't capable of reproducing the full audio, but it fails in quite a different way from how MP3 fails. If one wants an algorithm that is good at chucking away irrelevant picture data, one need look no further than JPG itself: a soft image will compress more than a crisp one. Using compression is far more intelligent than a downsample that effectively averages every three pixels into two. -- Colin (talk) 12:14, 7 May 2014 (UTC)
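Colin's claim that a soft image compresses better is easy to verify: blur a copy of an image and save both at the same JPG quality, and the soft copy should come out noticeably smaller. A quick sketch with Pillow (the paths, blur radius and quality setting are placeholders):

```python
import os
from PIL import Image, ImageFilter

img = Image.open("crisp_original.jpg")  # placeholder path
soft = img.filter(ImageFilter.GaussianBlur(radius=1.5))

img.save("crisp.jpg", quality=90)
soft.save("soft.jpg", quality=90)

# The blurred copy has less high-frequency detail for JPEG's
# DCT to encode, so it compresses to a smaller file.
print(os.path.getsize("crisp.jpg"), os.path.getsize("soft.jpg"))
```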
    Another consideration is the three stages of sharpening: capture, creative and output. The first is designed to compensate for the loss due to the Bayer filter, AA filter, lens softness, etc. The second is up to the photographer as an artist. And the final step depends on the presentation medium (screen or print). Most image-presentation websites (including MediaWiki) apply some sharpening when presenting downsampled images. For MediaWiki, a reduction of 85% or more will trigger "-sharpen 0x0.4" being passed to ImageMagick when it downsamples.[1][2] I don't know whether this value is a good one, or whether Commons or Wikipedia actually use that default. The point is that it is only when rendering at the viewed size that one makes the final judgement about how much sharpening to apply. Theoretically, we should upload our images without any output sharpening and let MediaWiki apply it to the final rendered JPG in Wikipedia. Re-users of Commons content would want the JPG without any output sharpening so they can resize it and choose the appropriate sharpening for their output size/format. You don't want to apply output sharpening twice.
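For reference, the downsample-plus-sharpen step Colin describes can be reproduced from Python via ImageMagick. Only the "-sharpen 0x0.4" value comes from the configuration he cites; the filenames and 50% resize factor are placeholders, and this assumes the legacy convert command is on the PATH:

```python
import subprocess

# Downsample and apply the MediaWiki default thumbnail sharpening
# ("-sharpen 0x0.4", per the config cited above). Filenames and
# the 50% factor are illustrative placeholders.
subprocess.run(
    ["convert", "upload.jpg", "-resize", "50%",
     "-sharpen", "0x0.4", "thumb.jpg"],
    check=True,
)
```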
    Back to what FP should be judging: the candidate above is soft at 100%, but if a re-user resizes it or prints it at a modest size while applying the appropriate output sharpening, then it can look really sharp. Commons is a repository of educational images that can be used for any purpose. Commons isn't a photo-portfolio website -- if it were, then the image would certainly be offered at a size that doesn't expose any flaws, and already sharpened to a degree that looks crisp on screen. Perhaps we really do need to find a way to nominate an image at FP for review at a certain size, which may not correspond to the size of the uploaded JPG. So Moroder could nominate a 16 MP rendering of this image but still, if he wants to, upload the 36 MP version. -- Colin (talk) 12:14, 7 May 2014 (UTC)
Colin: Actually, you can view it at any resolution on-wiki by clicking one of the lower-resolution versions (say, 1280px) and replacing the 1280 in the URL with whatever you want. --King of 01:28, 9 May 2014 (UTC)
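This URL trick is scriptable, since Commons thumbnail URLs embed the requested width. A sketch; note that the hash directories in the example URL are illustrative placeholders, not this file's real path:

```python
import re

def thumb_at(thumb_url: str, width: int) -> str:
    # Rewrite the "<N>px-" width prefix in a Commons thumbnail URL.
    # Any width up to the original image size should work.
    return re.sub(r"/\d+px-", f"/{width}px-", thumb_url)

# Illustrative URL only; the "x/xy" hash path is made up.
url = ("https://upload.wikimedia.org/wikipedia/commons/thumb/"
       "x/xy/Castle_U%C3%A7hisar_in_Cappadocia.jpg/"
       "1280px-Castle_U%C3%A7hisar_in_Cappadocia.jpg")
print(thumb_at(url, 2500))
```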
Thank you Benh and Colin for those most interesting and helpful comments. --Cayambe (talk) 06:29, 7 May 2014 (UTC)
Confirmed results:
Result: 14 support, 3 oppose, 0 neutral → featured. /A.Savin 14:37, 12 May 2014 (UTC)
This image will be added to the FP gallery: Places/Natural