In our “tour of HOYS papers”, which we are running while the database is being recovered and we cannot look at new, interesting light curves, we take a look at a figure from our second paper: “A survey for variable young stars with small telescopes: II – Mapping a protoplanetary disk with stable structures at 0.15 AU”. It shows the distribution of photometry errors for the object V1490 Cyg.

Photometric uncertainties are a big part of the analysis of our data. They are determined during the photometry calibration step in the data processing. This manual step is hence vital for quality control and ensures that we extract the most from the available data. The above-mentioned paper explains in more detail how the calibration and error calculation work in practice. As mentioned before, we are looking for volunteers to help us with this quality control once the database is back online and we process the backlog of images accumulated during the outage. Please get in touch if you would like to help us with that.

Many users have gotten in touch worried that their data are not of sufficient quality. In almost all cases, these worries are not justified. Yes, some data are noisier than others, but what counts as high quality depends on the research we are doing. For example, if we are investigating deep, long-term dimming events caused by disk structures, then even photometric uncertainties of 0.1 – 0.2 mag are sufficient, given that these dimming events are often more than one magnitude deep.
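To make this concrete, here is a minimal sketch of such a science-dependent error cut. It is plain Python/numpy with made-up magnitudes, uncertainties, and thresholds chosen purely for illustration; it is not HOYS pipeline code.

```python
import numpy as np

# Hypothetical light-curve points: magnitudes and their photometric uncertainties.
mag = np.array([13.10, 13.12, 14.35, 13.08, 14.20, 13.11])
mag_err = np.array([0.03, 0.15, 0.12, 0.04, 0.18, 0.02])

# Science-dependent error cuts (illustrative values only):
# deep dimming events (> 1 mag) tolerate larger uncertainties than spot work.
max_err = {"dimming": 0.2, "spots": 0.05}

for science_case, cut in max_err.items():
    keep = mag_err < cut
    print(f"{science_case}: keeping {keep.sum()}/{len(mag)} points (err < {cut} mag)")
```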

For our work on stellar spots, with amplitudes of the order of, or smaller than, 0.1 mag, much stricter requirements apply. Most of our data meet these requirements, especially for the brighter stars. In the figure we show histograms of the distribution of errors in the different filters for one object investigated in the Pelican Nebula (IC5070). Over-plotted on these histograms as solid lines are the cumulative distribution functions of these errors. The way to read them is the following: they show what fraction (scale on the right-hand side of each panel) of the data has an error of less than the indicated value. As you can see, especially for the V, R, and I filters, typically of the order of 80% of our data has uncertainties of less than 0.04 mag.
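For readers who want to reproduce this kind of plot for their own data, the sketch below shows how an empirical cumulative distribution is computed and how to read off the fraction of points below a given error. The gamma-distributed uncertainties are placeholders, not real HOYS measurements.

```python
import numpy as np

# Simulated photometric uncertainties for one filter (placeholder values;
# real HOYS errors come from the calibration step described in the paper).
rng = np.random.default_rng(42)
errors = rng.gamma(shape=2.0, scale=0.015, size=1000)  # mag

# Empirical cumulative distribution: after sorting, the CDF at the
# i-th smallest error is i / N.
errors_sorted = np.sort(errors)
cdf = np.arange(1, errors_sorted.size + 1) / errors_sorted.size

# Fraction of data points with an uncertainty below 0.04 mag,
# i.e. the value the solid lines in the figure show at x = 0.04.
frac_below = np.mean(errors < 0.04)
print(f"Fraction of points with error < 0.04 mag: {frac_below:.2f}")
```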

These uncertainties, in particular when combined with the large number of data points in each light curve, allow us not just to detect periodic variations with amplitudes of a few percent, but also to measure those amplitudes very accurately and thus to determine the properties of the spots that cause the variations. See some of our latest posts for more details: week 212; week 210; week 193.
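As an illustration of how such a low-amplitude periodic signal can be recovered, here is a short sketch using astropy's LombScargle periodogram. The simulated period (5 days), amplitude (0.03 mag), and noise level (0.02 mag) are arbitrary placeholder values, and the weighted least-squares amplitude fit is our own illustration rather than the exact method used in the posts linked above.

```python
import numpy as np
from astropy.timeseries import LombScargle

# Simulated spot-modulated light curve: 0.03 mag amplitude, 5-day period,
# 0.02 mag per-point noise (all values are placeholders, not HOYS results).
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 100, 400))   # observation times [days]
dy = np.full_like(t, 0.02)              # per-point uncertainty [mag]
y = 13.0 + 0.03 * np.sin(2 * np.pi * t / 5.0) + rng.normal(0, dy)

# Uncertainty-weighted Lomb-Scargle periodogram to find the period.
ls = LombScargle(t, y, dy)
frequency, power = ls.autopower()
best_freq = frequency[np.argmax(power)]
print(f"Recovered period: {1 / best_freq:.2f} d")

# Weighted least-squares fit of a sinusoid at the best frequency
# to measure the amplitude of the modulation.
X = np.column_stack([np.ones_like(t),
                     np.sin(2 * np.pi * best_freq * t),
                     np.cos(2 * np.pi * best_freq * t)])
coef, *_ = np.linalg.lstsq(X / dy[:, None], y / dy, rcond=None)
amplitude = np.hypot(coef[1], coef[2])
print(f"Fitted amplitude: {amplitude:.3f} mag")
```

With 400 points at 0.02 mag precision, the 0.03 mag signal stands out clearly in the periodogram, which mirrors the point above: many data points with well-characterised uncertainties make even few-percent variations measurable.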