The light curve shown above looks very interesting. The star seems to increase its variability over time, especially in the shorter-wavelength filters. However, it turns out that this is actually caused by erroneous data, which had so far escaped our detection. How come?

When browsing through the very many light curves we have, this object came up a number of times. At some stage we realised that there are several such objects, which made us suspicious: it seemed unlikely that several stars would all increase their variability at the same time. One of our students then happened to investigate one of these sources in detail. It turned out that all the highly variable data points came from one specific telescope. Once these were removed, the star showed only its usual low level of variability.
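One way to spot this kind of problem, sketched below rather than taken from our actual pipeline, is to group the light-curve points by the telescope that took them and compare the magnitude scatter of each group; a single telescope with a much larger scatter than the rest is a good candidate for the culprit. The field names (telescope, mag) are illustrative assumptions about the data structure.

```python
from collections import defaultdict
import numpy as np

def scatter_per_telescope(points):
    """Return a robust magnitude scatter (scaled MAD) for each telescope.

    `points` is assumed to be an iterable of dicts with at least the keys
    'telescope' (an identifier) and 'mag' (the measured magnitude).
    """
    groups = defaultdict(list)
    for p in points:
        groups[p["telescope"]].append(p["mag"])

    scatter = {}
    for tel, mags in groups.items():
        mags = np.asarray(mags)
        # median absolute deviation, scaled to be comparable to a standard deviation
        scatter[tel] = 1.4826 * np.median(np.abs(mags - np.median(mags)))
    return scatter
```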

Once we knew that, we checked the positions of all the unusual stars we could find, and it turns out they all lie within 5 arcminutes of naked-eye stars (brighter than sixth magnitude). Some more checking revealed that over the years the optics and mirror quality of this telescope had slowly degraded. This caused the point spread function (PSF) to become more ‘noisy’ far away from the main brightness peak. This only causes problems near very bright sources, as only there do the very outer wings of the PSF become detectable. As a result, the background estimate near such stars becomes highly uncertain, and the measured flux of the stars appears to vary from image to image.
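A minimal sketch of how such sources can be flagged, assuming we have RA/Dec arrays for our targets and a list of naked-eye stars (V brighter than sixth magnitude); the variable names and the bright-star list are placeholders, not the actual pipeline inputs.

```python
import astropy.units as u
from astropy.coordinates import SkyCoord

def near_bright_star(source_ra_deg, source_dec_deg,
                     bright_ra_deg, bright_dec_deg,
                     radius=5 * u.arcmin):
    """Flag sources lying within `radius` of any naked-eye star."""
    sources = SkyCoord(ra=source_ra_deg, dec=source_dec_deg, unit="deg")
    bright = SkyCoord(ra=bright_ra_deg, dec=bright_dec_deg, unit="deg")
    # nearest bright star for every source and the on-sky separation to it
    _, sep2d, _ = sources.match_to_catalog_sky(bright)
    return sep2d < radius
```

Data from the degraded telescope for any flagged source can then be dropped before the light-curve statistics are computed.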

Now that we know the cause, we can automatically remove all such data in the analysis pipeline. As a side effect, we now need to redetermine all the statistics for our latest paper before we can submit it to the journal. This should only cause small changes, as only very few stars are affected, and no significant changes to any of the results are expected.