This week we follow up on the post from week 214 (please read it again first :-)). This part of the research on HOYS light curves aims to quantify the underlying variability characteristics of young stars. Ultimately this should establish a methodology that allows us to determine the probability that a given model light curve is consistent with the real light curves of young stars, allowing us to validate models of star and planet formation.

The concept of fingerprinting a star's variability was introduced in the previous post (week 214): the fingerprints visualise the probability that a young star will vary by a certain amount in brightness over a given time period. These fingerprints are used to overcome some of the issues involved in analysing the light curves from which they are created. Because of the complex surroundings of young stars, their light curves can be difficult to characterise. There can be dimming and brightening events on different timescales and with different amplitudes, as well as periodic, semi-periodic or stochastic variations. The difficulties are exacerbated because every star has its own observing cadence. Some stars are observed more often than others, and some light curves contain large gaps where the star was not observable for longer periods, either because the Sun was in the way or because of bad weather.
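
For readers who like to experiment, here is a minimal sketch of how such a fingerprint could be built from a light curve. This is not the exact HOYS implementation; the binning, the function names (fingerprint_counts, fingerprint) and the use of all pairs of epochs are illustrative assumptions.

```python
import numpy as np

def fingerprint_counts(times, mags, time_bins, dmag_bins):
    """Raw fingerprint: a 2D histogram of magnitude change versus
    time separation, built from all pairs of epochs in a light curve.
    times and mags are numpy arrays of equal length."""
    iu = np.triu_indices(len(times), k=1)            # each epoch pair once
    dt = np.abs(times[:, None] - times[None, :])[iu]
    dmag = np.abs(mags[:, None] - mags[None, :])[iu]
    counts, _, _ = np.histogram2d(dt, dmag, bins=[time_bins, dmag_bins])
    return counts

def fingerprint(times, mags, time_bins, dmag_bins):
    """Normalise each time-separation column to unit sum, so every pixel
    gives the probability of a certain brightness change at that lag."""
    counts = fingerprint_counts(times, mags, time_bins, dmag_bins)
    col_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, col_sums,
                     out=np.zeros_like(counts), where=col_sums > 0)
```

Feeding in a real light curve and displaying the result with matplotlib's imshow reproduces the style of maps shown in week 214.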

In week 260 two alternative methods of calculating the error values for each pixel of the fingerprints were introduced: one using Poisson counting statistics and one using bootstrapping of the magnitude values in the light curve. While the two error maps appeared very similar, it is important to know quantitatively how similar they are, and whether it is appropriate to substitute the significantly more time consuming bootstrapping method with the much faster Poisson method. To do this, the error values from the two methods can be plotted against each other and the gradient of the line of best fit calculated. When this gradient is determined across a large sample of objects, an average value for the relationship between the two methods can be derived, along with its uncertainty. The less time consuming method can then be used, as long as a multiplier derived from this relationship is applied.
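
As an illustration of the two approaches, here is a rough sketch of how the per-pixel errors could be estimated, reusing fingerprint_counts from the sketch above. The details are assumptions, not the exact week 260 code: the errors are taken as relative errors (so that a 3:1 signal-to-noise limit appears as a value of 1/3 = 0.33, matching the figure axes below), the bootstrap resamples whole light curve points with replacement, and the number of resamples is arbitrary.

```python
import numpy as np

def poisson_errors(counts):
    """Relative Poisson counting error per pixel: sqrt(N)/N = 1/sqrt(N),
    where N is the number of epoch pairs falling in the pixel."""
    return np.divide(1.0, np.sqrt(counts),
                     out=np.full_like(counts, np.nan), where=counts > 0)

def bootstrap_errors(times, mags, time_bins, dmag_bins,
                     n_boot=200, seed=0):
    """Relative bootstrap error per pixel: resample the light curve
    points with replacement, rebuild the fingerprint counts each time,
    and take the per-pixel scatter over all resamples."""
    rng = np.random.default_rng(seed)
    maps = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(mags), size=len(mags))
        maps.append(fingerprint_counts(times[idx], mags[idx],
                                       time_bins, dmag_bins))
    maps = np.array(maps)
    mean = maps.mean(axis=0)
    return np.divide(maps.std(axis=0), mean,
                     out=np.full_like(mean, np.nan), where=mean > 0)
```

The cost difference is obvious from the structure: the Poisson estimate needs one pass over the counts, while the bootstrap rebuilds the whole fingerprint n_boot times.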

The figure above shows the values for each pixel of the two error maps plotted against each other, together with the line of best fit through the points and the line y = x, i.e. where both sets of error values would be exactly the same, for visual comparison. The gradient of the line of best fit for this example object is 0.65. While there is clearly a strong correlation between the error maps, there is also a clear difference between the gradient of the best fit and y = x. This indicates that it is not appropriate to simply substitute one method for the other to save compute time without applying an adjustment, i.e. a multiplier derived from a large sample of our objects. The figure also shows how the spread of the error values increases towards the signal to noise limit of 3 to 1 (values of 0.33 along the x and y axes). The final analysis of the gradient values is currently being carried out, to determine the relationship between the two error methods and to see whether that relationship changes in a predictable way as the pixel sizes in the fingerprints are increased.
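
To make the comparison concrete, a gradient like the 0.65 quoted above could be computed along the following lines. A least-squares fit through the origin is used here so the gradient is directly comparable to the y = x line; that choice, the clipping at the 3:1 signal-to-noise limit, and the function name are all assumptions of this sketch.

```python
import numpy as np

def error_map_gradient(poisson_map, bootstrap_map, snr_limit=3.0):
    """Gradient of a best-fit line through the origin between the two
    per-pixel error maps: m = sum(x*y) / sum(x*x)."""
    x, y = poisson_map.ravel(), bootstrap_map.ravel()
    # Keep finite pixels up to the S/N limit (error <= 1/3 for 3:1)
    good = (np.isfinite(x) & np.isfinite(y)
            & (x <= 1.0 / snr_limit) & (y <= 1.0 / snr_limit))
    x, y = x[good], y[good]
    return np.sum(x * y) / np.sum(x * x)

# Averaged over many objects, this yields the multiplier (and its
# uncertainty) needed before the faster Poisson method can stand in
# for the bootstrap, e.g.:
# gradients = [error_map_gradient(p, b) for p, b in all_error_maps]
# multiplier, uncertainty = np.mean(gradients), np.std(gradients)
```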