
The press release "UH ATLAS telescope discovers first-of-its-kind asteroid" from the Institute for Astronomy at the University of Hawaii shows an image of an "active asteroid" among a field of stars. Because the exposures track the motion of the asteroid, the star trails are elongated while the asteroid's image remains compact, revealing a comet-like tail.

The right panel of the image, however, shows the field with the stars "magically" subtracted.

How is this subtraction actually done mathematically? The caption calls the process "difference imaging"; how does that work, exactly?


ATLAS image of asteroid 2019 LD2 from late June 2019. Stars seem to streak by in the left panel because the images have been shifted and added to follow 2019 LD2, but the comet itself (indicated by two red lines) is almost lost in the crowded field of stars. The same data are shown at right, but with the stars subtracted. ATLAS uses this star-subtraction process (called difference imaging) for all asteroid search images. Here, the difference imaging reveals the tiny comet with its faint tail. Credit: ATLAS/A. Heinze/IfA


Related: the "blurred line" between comets and asteroids.

  • You just take two pictures at different times and see what's changed. – user3528438 May 22 '20 at 00:11
  • Rats! I just noticed that I've posted this question in Space SE; I'd meant to post this in Astronomy SE. – uhoh May 22 '20 at 01:37
  • How is it done? Very carefully :-) . But yeah, you depend on good algorithms to get 'overlay matching' done accurately and then grab the differences. Plus blob-finding so you can remove matching blob shapes even if absolute brightness differs. – Carl Witthoft May 22 '20 at 13:21

1 Answer


Difference Image Analysis (DIA), or difference imaging, is very common in modern astronomy for finding new transient sources (e.g. asteroids, variable stars including microlensing events, and supernovae). It is simple in principle but complicated by a lot of practical details caused by real-world observations.

The basic procedure, which is set out in this presentation, is as follows (a toy Python sketch follows the list):

  • Align and resample your images onto the same pixel grid, usually using the FITS images' World Coordinate System (WCS)
  • Select a reference image, or template, of the same patch of sky which is the "sharpest" (has the best seeing/smallest Full Width at Half Maximum (FWHM))
  • For each image, determine the convolution kernel that blurs the template/reference image by the right amount to match that image
  • Subtract (difference) the convolved template from the current image
  • Run some sort of object detection to find the new sources that have either appeared or changed in brightness since the reference image was taken
  • Astronomical fame... (not really)
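To make the recipe concrete, here is a minimal, self-contained toy in Python. This is not the ATLAS pipeline: the arrays, star positions, seeing widths, and the 5-sigma threshold are all illustrative assumptions, and a simple Gaussian blur stands in for the fitted matching kernel.

```python
# Toy difference-imaging sketch: assumes the science image `img` and
# reference `ref` are already aligned onto the same pixel grid.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Toy reference: two "stars" on a flat sky, with sharp seeing (sigma=1.5).
ref = np.zeros((64, 64))
ref[20, 20] = ref[40, 44] = 1000.0
ref = gaussian_filter(ref, sigma=1.5) + rng.normal(0, 1, ref.shape)

# Toy science image: same stars with worse seeing (sigma=2.5), plus a
# new transient source that is absent from the reference.
img = np.zeros((64, 64))
img[20, 20] = img[40, 44] = 1000.0
img[32, 10] = 300.0                      # the new source
img = gaussian_filter(img, sigma=2.5) + rng.normal(0, 1, img.shape)

# Blur the reference so its PSF matches the image; for Gaussian PSFs the
# widths add in quadrature, so blurring by sqrt(2.5^2 - 1.5^2) does this.
matched_ref = gaussian_filter(ref, sigma=np.sqrt(2.5**2 - 1.5**2))

# Difference image: the constant stars cancel, the transient remains.
diff = img - matched_ref

# Crude detection: pixels more than 5 sigma above the difference noise.
detections = np.argwhere(diff > 5 * diff.std())
print(detections)   # clustered around the transient at row 32, col 10
```

Running this prints only pixels around the new source: the two bright stars subtract away cleanly because the reference has been blurred to the same seeing before differencing.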

The difficulty is in the details, particularly in how the objects' shapes (their Point Spread Function, PSF) vary with time and with position across the CCD. The framework for the current approach to DIA was introduced by Alard & Lupton (1998) for matching a reference image to a target image. The convolution kernel to be applied to the reference image is decomposed into a set of basis functions, and the difference in the sky background between the image and the reference is included as a polynomial of the image coordinates. This then boils down to a big set of linear equations and a chi-squared minimization problem, where you are trying to minimize the difference between the model image, produced by convolving the kernel with the template, and the actual image, weighted by the uncertainties, for which there is a wide variety of solvers.
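As a heavily simplified sketch of that linear solve (in the spirit of Alard & Lupton 1998; the function name `solve_kernel` and its arguments are invented here for illustration), each basis kernel convolved with the reference becomes one column of a design matrix, a constant background is appended as a final column, and the kernel coefficients come out of one weighted least-squares solve:

```python
import numpy as np
from scipy.signal import fftconvolve

def solve_kernel(ref, img, basis, weights):
    """ref, img: aligned 2-D arrays; basis: list of small 2-D kernel
    basis functions; weights: 1/sigma^2 per pixel. Returns coefficients
    for the basis kernels plus a constant background term."""
    # Each column of the design matrix is one basis kernel convolved
    # with the reference, flattened; the last column is the background.
    cols = [fftconvolve(ref, k, mode="same").ravel() for k in basis]
    cols.append(np.ones(img.size))
    A = np.stack(cols, axis=1)
    w = np.sqrt(weights.ravel())
    # Weighted linear least squares: minimize ||w * (A c - img)||^2 over c,
    # i.e. the chi-squared between the model image and the actual image.
    coeffs, *_ = np.linalg.lstsq(A * w[:, None], img.ravel() * w,
                                 rcond=None)
    return coeffs
```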

A follow-up paper by Alard (2000) showed how the spatial variation of the convolution kernel (how it changes as a function of $x,y$ on the CCD image) can be modelled by multiplying the kernel basis functions by polynomials of the image coordinates. The kernel basis functions chosen by these two papers, and which are used by most people, are Gaussians of different widths, modified by polynomials of the kernel coordinates.
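One possible construction of such a basis (the widths and polynomial degrees below are illustrative choices, not the values used in the papers, and `kernel_basis` is a made-up helper) looks like:

```python
import numpy as np

def kernel_basis(half_size=10, sigmas=(1.0, 2.5, 5.0), max_degree=2):
    """Gaussians of several widths, each multiplied by polynomials of
    the kernel coordinates x^i * y^j with total degree <= max_degree."""
    u = np.arange(-half_size, half_size + 1)
    x, y = np.meshgrid(u, u)
    basis = []
    for sigma in sigmas:
        g = np.exp(-(x**2 + y**2) / (2 * sigma**2))
        for i in range(max_degree + 1):
            for j in range(max_degree + 1 - i):
                basis.append(g * x**i * y**j)   # Gaussian * x^i y^j
    return basis
```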

Bramich et al. (2013) goes into a lot more detail on how this is implemented; they introduce changes to account for the fact that the background varies with time in a more complex manner across the frame than the simple offset used in Alard & Lupton (1998). This is becoming more important with the larger fields of view of current survey telescopes such as the ATLAS NEO survey, which made the discovery quoted. A Python implementation of these methods, for those who like to see code, is available as pyDANDIA.
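In that spirit, the fit can be made spatially varying by multiplying each design-matrix column by polynomials of the image coordinates, so that both the kernel and the background are allowed to change across the frame. The sketch below is a simplified illustration of that idea (the helper name is invented; this is not pyDANDIA's actual code), extending the columns from the earlier least-squares snippet:

```python
import numpy as np

def spatially_varying_columns(cols, shape, degree=2):
    """cols: list of flattened design-matrix columns for image `shape`.
    Returns new columns, each multiplied by x^i * y^j polynomials of the
    normalized image coordinates, with total degree <= degree."""
    ny, nx = shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    xx = (xx / nx).ravel()   # normalized image x, flattened like cols
    yy = (yy / ny).ravel()   # normalized image y
    out = []
    for col in cols:
        for i in range(degree + 1):
            for j in range(degree + 1 - i):
                out.append(col * xx**i * yy**j)  # column * x^i y^j
    return out
```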

Sky surveys for transient sources normally construct a grid of pointings across their survey area and then build a set of reference images for each of these pointings. During normal survey operations, each new image taken at a pointing is differenced against the corresponding reference image (after deriving a kernel for each frame to match its seeing to the reference, of course).

Any object that is in both the reference and the image but has moved will show up as a "dipole" of negative and positive images in the difference image, depending on how far it has moved. Anything that is in the same position but has changed in brightness between the reference and the subtracted image will show up as negative (if fainter than it was in the reference) or positive (if brighter than in the reference) in the difference image. This is effectively an "AC" signal of how much has changed between the reference and the current image. To get a correct magnitude for the new source, you also need to perform PSF or aperture photometry on the reference image and add the two measurements together.
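A tiny numerical illustration of that last step, with made-up fluxes:

```python
# Photometry on the difference image gives only the *change* in flux,
# so the reference flux must be added back to get the total.
ref_flux = 1200.0    # aperture flux of the object in the reference image
diff_flux = -150.0   # flux in the difference image (negative: it faded)
total_flux = ref_flux + diff_flux
print(total_flux)    # 1050.0, the object's actual flux in this image
```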

  • This is a great answer, thank you! For the item "Subtract (difference) the current image and the convolved template" could you expand on it a bit? Are there registration shifts for each subtraction? Are all the difference results simply summed for the final image? How are negative numbers following subtractions handled in the final image? Is a grayscale offset added? – uhoh May 22 '20 at 02:48
  • @uhoh: The procedure astrosnapper is describing appears to be for subtracting a single reference image from multiple frames to better detect any moving or varying objects in them. If all you want is one final image, you just need one source image and one reference image; no need to sum anything. – Ilmari Karonen May 22 '20 at 16:07
  • Image shifts caused by e.g. changes in the telescope pointing are taken out in the initial resampling onto the common pixel grid of the reference. Normally each image is differenced separately; remember that this is mostly used by time-domain transient surveys trying to find new things. Surveys have a fixed "set up in advance" grid of pointings on the sky and a ref. image (or stack) for each pointing. Each night's new image(s) for that pointing get differenced as they are taken. Offsets are not added as you want the changes; -ve means it got fainter than the ref. which can still be interesting – astrosnapper May 22 '20 at 16:07
  • This script https://pastebin.com/pj3cTafP produces this result https://i.stack.imgur.com/QfwCZ.png I've simplified the problem by using perfect gaussians, one on the right moves, one on the left doesn't. This highlights my comment about how to handle negative numbers. While your answer addresses all of the real-world subtleties I still don't get the basic idea of how the difference produces a natural image of the moving object rather than the dipolar +/- image shown here. It still seems "magic" to me because I'm missing something very basic I assume. – uhoh May 22 '20 at 22:49
  • @IlmariKaronen see my comment above, I've tried to highlight what it is that I'm missing. If you think you see what it is please feel free to add an additional answer; comments may not be long enough. Thanks! – uhoh May 23 '20 at 03:14
  • @uhoh: Use a reference image that doesn't contain the moving object at all. – Ilmari Karonen May 23 '20 at 11:29
  • @IlmariKaronen these short comments are not helping; use a reference image how? I'm looking for an answer that explains how to do this. Thanks! – uhoh May 23 '20 at 12:56
  • @uhoh: In your test script, use img_a = G(x2, sig). In real life, use an image that was taken when the comet was not there yet. – Ilmari Karonen May 23 '20 at 13:06
  • @IlmariKaronen if you know the answer, post an answer! I think I'm getting the idea though, in addition to the multiple images combined in the left pane which shows star trails because of tracking the object, you are proposing that there is also another image taken before the comet got into this field or after it left. So in addition to this group of images there's yet another one? Oh, and all this kernel business is because the historical image may be taken under very different conditions? Ahh.... I'm getting there now! – uhoh May 23 '20 at 13:13
  • @uhoh: I had hoped you would, if you just took a minute to think about it. :) Anyway, there would've been no point in me trying to post a separate answer without knowing what it was that you were missing (especially as my knowledge of the topic is entirely superficial and basically a subset of what astrosnapper already explained above). And now that you've figured out what you were missing, it looks like you've also managed to answer your own question. :) – Ilmari Karonen May 23 '20 at 20:31
  • http://manoa.hawaii.edu/news/article.php?aId=10683 – uhoh May 24 '20 at 01:10
  • I have updated my answer to include more details from the comments of how surveys carry this out and hopefully give more details of the different sorts of signals that will be seen in the difference image. Also added a link to a Python implementation. – astrosnapper May 24 '20 at 05:30
  • I see that, thank you very much! All impressions of magic have been dispelled ;-) – uhoh May 24 '20 at 13:05
  • btw I just noticed this by accident; without the @uhoh I didn't receive a notification that you'd replied nor that the answer was updated. – uhoh May 24 '20 at 13:06
  • No idea; I doubt a spacecraft CPU would have enough floating point performance for DIA, it's a computationally expensive process. – astrosnapper Feb 08 '21 at 21:10
  • this answer seems to show a JWST satellite flare - the zip contains an mp4 - pretty cool! – uhoh Feb 18 '23 at 22:05