It seems that every few years someone states, “We are in the golden age of astrophotography!” This is usually followed by a string of examples explaining why they believe it to be so. Here we are again at a crossroads, with astro-imaging gaining traction in popular culture beyond the amateur astronomy community. And much of this is driven by smartphones, microcomputers, CMOS detectors, and even gaming consoles.
Deep-sky astrophotography is again at the cusp of a revolution. It’s an amazing time to be sure, and we’re seeing things we never thought would be possible — mounts so accurate they don’t require guiding corrections during long exposures, cameras more sensitive than ever, and tiny computers that ride along with the telescope while orchestrating the tasks of a host of accessories. Astrophotography has an amazing journey ahead. Here’s where I see it going next.
The Times They Are A-changin’

We saw one revolution already at the close of the 20th century. Film astrophotography, with its chemical processes, darkroom artistry, and gas-hypering alchemy, gave way to digital cameras, specialized computer software, and the virtual darkroom. But even that now-familiar scenario will eventually become the exception rather than the rule.
Everything having to do with astrophotography is changing, including the optics we use. Commercially produced telescopes and camera lenses are better than ever, thanks both to advances in computer-formulated optical designs and to new materials such as extra-low-dispersion glasses. Fast, high-quality optics are more accessible and affordable than ever. For example, with a mirrorless camera on a tripod and an f/1.4 lens, you can now record a Milky Way image in under 10 seconds with equipment that doesn’t cost a small fortune. An f/7 telescope, once considered photographically “fast” by deep-sky imagers, is now passed over unless it’s paired with a reducer/flattener. Scopes with focal ratios of f/5 and f/4 are now the norm, with even f/2 instruments available from several manufacturers. I currently own three telescopes that produce sharp, round stars across a full-frame camera sensor at f/3 and faster — a feat that was until recently considered prohibitively expensive.
The same goes for the detectors in our cameras. CMOS chips today exceed the performance of CCDs. Some CMOS camera manufacturers are closing in on the effective elimination of read noise and achieving the maximum quantum efficiencies possible for digital detectors. What’s next? I have some ideas, and I’m not the only one.
FAST AND DEEP This image of the Milky Way was recorded with a single, 10-second exposure made with a Canon EOS Ra operating at ISO 3200 and an f/1.4 Sigma Art lens on a stationary tripod. The same photograph taken with a DSLR a dozen years ago would have been riddled with noise and would have barely shown our galaxy at all.

When a process can’t be improved, the only thing left to do is to change the process. Any student of modern history understands that today’s amazing, leading-edge, expensive technology is only a few years away from being cheaper and small enough to fit in your pocket or strap to your wrist. A good example is my Apple Watch. It can monitor my heart rate, read my blood-oxygen level, make international phone calls for free, and, if I ask it to, find all the dog photos on my paired smartphone. We already live in the future!
The next big buzzword creeping into the conversation about astro-imaging is computational photography. In fact, computational photography is already revolutionizing photography. You’re most likely already taking it for granted.
AI FOR YOUR DSLR Products like the Arsenal 2 add a Graphics Processing Unit (GPU) to your camera for remote-controlled computational-photography assistance. The device adds new capabilities to your camera, including focus stacking, high-dynamic-range compositing, and even accurate autofocusing on star fields.

Computational photography is the use of computer processing to aid or improve photography in-camera. We already have handheld cameras (in your smartphone) that combine multiple exposures of varying lengths to create high-dynamic-range (HDR) images. That same smartphone camera will take a dozen or more very short images in low light, then align and combine them into a single, low-noise image — all happening automatically in your device while you watch. Today I can take a 5-second, handheld exposure in low light without a tripod, and it won’t be a smeared mess. These are all examples of computer processing applied while the image is recorded, dramatically improving the capabilities of our cameras and the quality of our photos.
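The multi-frame trick at the heart of a smartphone’s low-light mode can be sketched in a few lines. In this toy Python simulation (the frame size, signal level, and noise figure are all invented for illustration), averaging 16 noisy “exposures” of a flat scene cuts the noise by roughly the square root of the frame count:

```python
import random
import statistics

def simulate_frame(true_signal, noise_sigma, n_pixels, rng):
    """One short exposure: a flat scene plus random sensor noise per pixel."""
    return [true_signal + rng.gauss(0.0, noise_sigma) for _ in range(n_pixels)]

def average_stack(frames):
    """Pixel-by-pixel average of many frames, the core of multi-frame noise reduction."""
    n = len(frames)
    return [sum(pixels) / n for pixels in zip(*frames)]

rng = random.Random(42)
single = simulate_frame(100.0, 10.0, 5000, rng)
frames = [simulate_frame(100.0, 10.0, 5000, rng) for _ in range(16)]
stacked = average_stack(frames)

# Averaging 16 frames should cut the noise by about sqrt(16) = 4x.
print(f"single-frame noise:   {statistics.stdev(single):.1f}")
print(f"16-frame stack noise: {statistics.stdev(stacked):.1f}")
```

Real implementations also align the frames before combining them to cancel handshake; the square-root improvement in signal-to-noise is why a batch of short exposures can stand in for one long one.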
This technology is still in its infancy, as growing computational power continues to expand the capabilities of image- and graphics-processing technologies. In every case, industries outside of astrophotography are driving these technologies forward. You’re probably familiar with the Central Processing Unit (CPU) — the “brain” at the heart of most computer systems. An acronym you may be less familiar with is GPU, or Graphics Processing Unit, a term coined by the graphics-hardware company NVIDIA in 1999. Today, GPUs power 3D simulations and games in computers and gaming consoles, and the performance improvements they’ve made possible are staggering. Computer images are simply numbers, and a modern GPU can perform trillions of numeric calculations every second, making those 3D-rendered games run smoothly. Many of the latest advances in smartphone-camera capabilities have arisen because your smartphone, just like your laptop or desktop computer, now comes with an integrated GPU.
Live Stacking

Sure, this technology is amazing, but what does it have to do with astrophotography?
AUTOMATED IMAGING This is the future: completely integrated imaging systems like the Unistellar eVscope 2 (left) and Vaonis Stellina (middle) that perform live stacking and image processing while you watch. Results like the image of the Running Man Nebula (right) excite beginners, and eventually these systems will deliver extremely high-quality results.

Nearly all astrophotographers are familiar with the term stacking, in which many exposures are combined to increase the resulting image’s signal-to-noise ratio. (See my recent article “Image Stacking Demystified” in the April 2022 issue.) Some astronomy-specific, camera-control software packages for planetary imaging have added a new feature called live stacking. In this computational process, your camera takes a batch of very short exposures — perhaps only a few seconds each — and your computer aligns and combines them in real time, just as in your smartphone. The Santa Barbara Instrument Group (SBIG) implemented the earliest incarnation of live stacking for astrophotography way back in the 1990s. SBIG’s program simply accumulated the images as they downloaded from the camera, but the result was limited by the accuracy of the telescope’s tracking — periodic error in the mount would produce oblong stars in the resulting stack. Today’s live-stacking algorithms take advantage of faster computers equipped with powerful CPUs and GPUs to align and stack your image sequence. This means the telescope only needs to track flawlessly for several seconds at a time for the stars to remain round in each image. In fact, equatorial tracking isn’t necessary at all, because field rotation won’t be visible in such short exposures.
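The align-then-accumulate idea can be sketched in minimal Python, with a one-dimensional “sky” and a single bright star standing in for a real frame (all names and numbers here are invented for illustration). Each short sub-exposure is aligned to the first frame’s star position and folded into a running mean, so tracking drift never accumulates:

```python
import random

def brightest_pixel(frame):
    """Crude star locator used as the alignment reference."""
    return max(range(len(frame)), key=frame.__getitem__)

def shift(frame, offset):
    """Shift a 1-D frame by an integer offset, padding with zeros."""
    n = len(frame)
    out = [0.0] * n
    for i, v in enumerate(frame):
        j = i + offset
        if 0 <= j < n:
            out[j] = v
    return out

class LiveStacker:
    """Running-mean stack: each new sub-exposure is aligned to the first
    frame's star position, then folded into the average in place."""
    def __init__(self):
        self.mean = None
        self.count = 0
        self.ref_pos = None

    def add(self, frame):
        pos = brightest_pixel(frame)
        if self.mean is None:
            self.ref_pos = pos
            self.mean = list(frame)
            self.count = 1
            return
        aligned = shift(frame, self.ref_pos - pos)
        self.count += 1
        self.mean = [m + (a - m) / self.count
                     for m, a in zip(self.mean, aligned)]

# Simulate short subs of a single "star" that drifts one pixel per frame.
rng = random.Random(1)
stacker = LiveStacker()
for drift in range(8):
    frame = [rng.gauss(0.0, 1.0) for _ in range(64)]
    frame[20 + drift] += 50.0   # the star, wandering due to tracking error
    stacker.add(frame)

print(brightest_pixel(stacker.mean))   # star stays put at pixel 20
```

Real live-stacking software aligns on dozens of stars in two dimensions and rejects bad frames, but the principle is the same: alignment happens per sub-exposure, so the mount only has to track well for a few seconds at a time.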
SHRINKING COMPUTERS Tiny, power-efficient computers like this PrimaLuceLab Eagle 4 will be taking the place of your laptop at the telescope soon. An integrated GPU is also a standard feature of these devices, and one day will be doing much of the heavy lifting as your images download from the camera.

A few years ago, ATIK (atik-cameras.com) introduced the Infinity, a camera/software combination that performs live stacking. However, any camera can be used for live stacking with a computer and the right software. MallinCam’s control software also does live stacking with its cameras (mallincam.net). SharpCap (sharpcap.co.uk) is another inexpensive option with live stacking that can control products from a growing number of manufacturers. I wrote the alignment routine that can do this with Software Bisque’s TheSkyX Imaging Edition (bisque.com), and it runs fast enough to be used on a small, ride-along computer.
A few commercial products on the market today combine all these capabilities. The Vaonis Stellina and Unistellar eVscope (reviewed in the March 2020 and December 2020 issues, respectively) are completely integrated telescope systems that incorporate live stacking, autonomous alignment, and plate solving (matching star patterns to identify field coordinates) to make imaging easier than ever before. Such “observation stations” can send their images to several connected smartphones or tablets simultaneously and let multiple people image with the same scope.
Many accomplished astrophotographers I’ve chatted with discount these products as insufficient for their needs. Of course they won’t work for everyone — for the same reason a wedding photographer wouldn’t trade a medium-format camera for a smartphone. But the barrier to entering the hobby of astrophotography has historically been prohibitively high, both in monetary cost and in the steep learning curve required to use highly sophisticated gear and complex processing techniques effectively. Products like the Stellina and eVscope may seem pricey to some, but they are cutting-edge technology bringing astrophotography into the mainstream. New, disruptive technologies that upend entire industries almost always seem crude and expensive at first, but they rarely stay that way for long.
Live stacking is the future. How short an exposure can be and still be useful for stacking into a deep-sky image is limited only by the camera’s read and pattern noise. As mentioned, these noise sources are already on a trajectory toward insignificance. The day is coming when there will be virtually no effective limit to how short exposures can be, or to how many frames can be combined to produce results equivalent to the lengthy exposures we need today.
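Why read noise sets that limit is easy to show with the usual noise model: sky shot noise plus one dose of read noise per frame, added in quadrature. In this back-of-the-envelope Python comparison (the sky level and read-noise figures are assumed for illustration), splitting a session into 3,600 one-second subs costs roughly a factor of seven in noise with an older 7-electron sensor, but essentially nothing once read noise approaches zero:

```python
import math

def stack_noise(n_subs, sky_electrons_per_sub, read_noise_e):
    """Noise (in electrons) of n_subs summed frames: sky shot noise
    plus one dose of read noise per frame, added in quadrature."""
    return math.sqrt(n_subs * sky_electrons_per_sub + n_subs * read_noise_e ** 2)

TOTAL_SKY = 3600.0  # assumed: sky electrons gathered over the whole session

# Compare one long exposure against 3,600 one-second subs
# at three read-noise levels.
for read_noise in (7.0, 1.0, 0.0):   # older CCD, modern CMOS, ideal sensor
    one_long = stack_noise(1, TOTAL_SKY, read_noise)
    many_short = stack_noise(3600, TOTAL_SKY / 3600, read_noise)
    print(f"read noise {read_noise} e-: short-sub penalty "
          f"x{many_short / one_long:.2f}")
```

As read noise falls toward zero, the penalty for chopping an exposure into arbitrarily short pieces vanishes — which is exactly what makes unlimited live stacking possible.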
Capturing, storing, and processing thousands or even tens of thousands of 30-megapixel deep-sky images may seem like an insurmountable hurdle today, but this is just a temporary situation. My first hard drive stored 20 megabytes of information, which isn’t enough room to save a single raw image from my Canon EOS Ra mirrorless camera. Hard drives, CPUs, and GPUs are improving all the time. You can pick up a 10-terabyte hard drive for about $150 on Amazon right now. And then there’s “cloud storage,” which is entirely online. Imagine what will happen when we can upload massive amounts of data without needing large hard drives at all.
HIGH-END ASSIST Astrophotographer John Gleason assembled his own electronically assisted astronomy setup to take high-quality, live-stacked images. He combined a Rainbow Astro RST-150 robotic mount with a Takahashi FSQ-106ED astrograph and a QHY5III-462C color camera. These images of M20 (left) and M20 (right) total 9 minutes each using SharpCap in Livestack mode. Gleason then processed the images in Adobe Photoshop.

Until fast and plentiful cloud storage becomes a reality, there are other ways live stacking can improve that won’t require massive storage space. Arthur C. Clarke’s Law of Revolutionary Ideas states that every important new idea evokes three stages of reaction:
1: “It’s completely impossible — don’t waste my time.”
2: “It’s possible, but it’s not worth doing.”
3: “I said it was a good idea all along.”
Many are still at the second stage, though I’d count myself as fully embracing stage three. I think the next step in live stacking deep-sky images may be to simply save the intermediate results every 5 minutes or so. Then you’ll have a folder full of 5-minute (effective) exposures that you can process later.
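That periodic-save idea can be sketched in a few lines of Python (the frame format and cadence here are invented for illustration): fold subs into a running sum, write out the intermediate stack every N subs, then start fresh — leaving a folder of longer “effective” exposures to process later.

```python
def live_stack_session(frames, subs_per_save=100):
    """Fold sub-exposures into a running sum, snapshotting the stack
    every `subs_per_save` frames. Each snapshot behaves like one longer
    'effective' exposure that can be reprocessed later."""
    running = None
    snapshots = []
    for i, frame in enumerate(frames, start=1):
        if running is None:
            running = list(frame)
        else:
            running = [r + f for r, f in zip(running, frame)]
        if i % subs_per_save == 0:
            snapshots.append(list(running))  # save this intermediate stack
            running = None                   # start a fresh one
    return snapshots

# 300 short subs saved in groups of 100 yield three "effective" exposures.
frames = [[1.0, 2.0, 3.0] for _ in range(300)]
stacks = live_stack_session(frames, subs_per_save=100)
print(len(stacks))   # 3 intermediate stacks
print(stacks[0])     # each pixel is the sum of 100 subs
```

In practice the snapshots would be written to disk as FITS files rather than kept in memory, but the storage savings are the same: hundreds of raw subs collapse into a handful of stackable intermediates.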
All common post-processing routines will eventually be handled by artificial-intelligence algorithms: Sub-exposures will be calibrated on the fly and have advanced noise-reduction techniques applied as they are saved. Airplane and satellite trails will be detected and rejected immediately. Sophisticated histogram-stretching operations will be performed on images before you even see them. Lucky imaging will also be revolutionized by faster computers that can evaluate and discard sub-par frames in real time, reducing data-storage requirements tremendously.
Making eye-catching astrophotos is more than collecting and assembling data — and that part of the process, the “art,” will never go away. But make no mistake — live stacking is the future, astrophotography is going mainstream, and getting impressive results will not take nearly as much work as it does now.
Until then, we’ll keep moving forward. We can’t really see what the future holds entirely, but I’m sure it will be amazing when we get there.
This article originally appeared in the August 2022 issue of Sky & Telescope.