this post was submitted on 20 Jan 2024
I love seeing the astro images posted here, but may I share an algorithm for making them even more beautiful?

Most astro images are created from separate red, green and blue images taken with electronic detectors (whether using classic BVR filters in an attempt to replicate what the eye might see, or some other combination in a "false color" image). There are two big problems that are common with the images created in this way (even by professionals).

The first is in the choice of stretch: how brightness on the detector maps to brightness in the displayed image. Most choose a linear or a logarithmic stretch. A linear stretch brings out fine detail at the faint end, but can leave the viewer ignorant of details at the bright end. A logarithmic stretch allows you to bring out details at the bright end, but not the faint end. Instead of these, choose an asinh (inverse hyperbolic sine) stretch, which is able to bring out both the faint and bright features. It scales linearly at the faint end and logarithmically at the bright end, giving you the best of both worlds.
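The asinh stretch is a one-liner in practice. A minimal sketch (the function and parameter names here are mine, not from any particular package; `soft` sets the brightness level where the mapping transitions from linear to logarithmic):

```python
import numpy as np

def asinh_stretch(x, soft=0.01):
    """Map linear detector values (normalized so 1.0 is 'bright') to
    display brightness. For x << soft the mapping is ~linear; for
    x >> soft it is ~logarithmic. Normalized so f(0)=0 and f(1)=1."""
    return np.arcsinh(x / soft) / np.arcsinh(1.0 / soft)
```

Applied to an image array, this keeps faint nebulosity visible without blowing out stellar cores; smaller `soft` pushes more of the dynamic range into the logarithmic regime.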

The second is in the handling of saturation: how to display pixels that are too bright for the chosen stretch. Most apply the stretch separately to the red, green and blue channels. This makes the cores of bright objects appear white in the color image, surrounded by a halo closer to the actual color of the object. The color of a pixel should instead be set by considering all of the channels together. This way, bright objects keep a uniform color regardless of whether the stretch has saturated any individual channel.
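One way to implement the joint-channel idea, sketched under my own naming conventions (this is in the spirit of the technique, not a copy of any specific implementation): stretch the total intensity rather than each channel, rescale all three channels by the same factor, and when a pixel overflows, divide all three channels by the peak so the hue is preserved instead of clipping to white.

```python
import numpy as np

def rgb_asinh(r, g, b, soft=0.01):
    """Combine three linear channel images into a displayable RGB array,
    stretching the mean intensity and scaling all channels together so
    saturated pixels keep their color rather than turning white."""
    i = np.maximum((r + g + b) / 3.0, 1e-12)      # mean intensity, guarded
    f = np.arcsinh(i / soft) / np.arcsinh(1.0 / soft)
    scale = f / i                                  # one factor for all channels
    rgb = np.stack([r * scale, g * scale, b * scale], axis=-1)
    # Joint clip: divide every channel by the per-pixel peak (if > 1),
    # so the R:G:B ratio, and hence the hue, is unchanged.
    peak = np.maximum(rgb.max(axis=-1, keepdims=True), 1.0)
    return rgb / peak
```

For a pixel ten times brighter in red than in green and blue, the output keeps that 10:1:1 ratio even when the red channel saturates, instead of collapsing to a white core.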

See here for a direct comparison between the classic approach and this (not really) new algorithm on the old Hubble Deep Field.

If you would like to adopt this algorithm for your own work, there is a Python implementation that you might find useful.
