For the likelihood with waveform uncertainty, the (additive) log-normalisation is implemented as

```python
normalisation = (torch.sum(torch.log(psd))
                 - torch.log(torch.prod(psd / psd.max())
                             + torch.prod(variance / variance.max()))
                 + torch.log(psd.max()) * len(psd)
                 + torch.log(variance.max()) * len(variance))
```
but for the case without the uncertainty it is implemented as

```python
normalisation = 0
```
I suspect this is the root of the problem!
In the case of no waveform uncertainty we can treat the waveform variance as zero, so the variance terms drop out and the normalisation becomes

```python
normalisation = (torch.sum(torch.log(psd))
                 - torch.log(torch.prod(psd / psd.max()))
                 + torch.log(psd.max()) * len(psd))
```
This seems to have fixed the problem, and the two likelihoods now look reasonably sensible, but it still needs to be reviewed.
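For review purposes, the two branches above could be collected into a single helper. This is a sketch only: `log_normalisation` and its signature are my own invention, and the expressions are copied verbatim from the notes above, not re-derived.

```python
import torch

def log_normalisation(psd, variance=None):
    # Sketch: collects the two branches from the notes into one function.
    # Reproduces those expressions verbatim; still subject to the review
    # flagged above, this is not a verified derivation.
    if variance is None:
        # No waveform uncertainty: the variance terms drop out.
        return (torch.sum(torch.log(psd))
                - torch.log(torch.prod(psd / psd.max()))
                + torch.log(psd.max()) * len(psd))
    # With waveform uncertainty: rescale by the maxima so the raw
    # products stay finite before taking the log.
    # (If the target quantity is log(prod(psd) + prod(variance)), an
    # alternative stable form would be torch.logsumexp over the two
    # summed-log terms; untested suggestion.)
    return (torch.sum(torch.log(psd))
            - torch.log(torch.prod(psd / psd.max())
                        + torch.prod(variance / variance.max()))
            + torch.log(psd.max()) * len(psd)
            + torch.log(variance.max()) * len(variance))
```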
- [ ] Is FFTing the time-domain covariance matrix equivalent to a one-sided or a two-sided PSD? (see the Wiener-Khinchin check after this list)
- [ ] Need two polarisations
- [ ] How does the mixing of uncertainties work? (see the likelihood sketch after this list)
  - Probably just a wrapper around our own likelihood function.
- Tests of GR generally assume that an approximant is correct; how will this affect things?
  - What effect will the systematics have on this?
    - Defined by the NR training waveforms
- Using two models to train a GPR in order to estimate the overall uncertainty (see the toy GPR sketch after this list)
  - e.g. the comparison in the (SEOBNR?) paper
  - High mass ratio
  - Very high total mass
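On the FFT/PSD checklist item: a minimal numerical check, under the assumption of stationary noise with a circulant time-domain covariance so that the discrete Wiener-Khinchin relation applies (all values are placeholders, and sampling-interval factors are omitted). The FFT of a covariance row should come out two-sided; the one-sided convention folds in the negative frequencies.

```python
import torch

# White noise: the autocovariance is a delta function, so the FFT of one
# row of the (circulant) time-domain covariance matrix should be flat.
n, sigma = 1024, 1.5
cov_row = torch.zeros(n)
cov_row[0] = sigma**2
psd_two_sided = torch.fft.fft(cov_row).real   # flat at sigma^2: TWO-sided
# The one-sided convention keeps only non-negative frequencies and
# doubles them to conserve total power (DC and Nyquist are not doubled).
psd_one_sided = 2.0 * psd_two_sided[: n // 2 + 1]
psd_one_sided[0] /= 2
psd_one_sided[-1] /= 2
print(psd_two_sided[:3], psd_one_sided[:3])
```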
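On the mixing-of-uncertainties item: one common convention (an assumption on my part, not a statement of what the current code does) is that the waveform-uncertainty variance simply adds to the noise PSD in a diagonal frequency-domain Gaussian likelihood, so the "wrapper" only swaps the weighting.

```python
import torch

def log_likelihood(data, mean_waveform, psd, variance=None):
    # Hypothetical sketch: diagonal frequency-domain Gaussian likelihood
    # in which the waveform uncertainty adds to the noise PSD, so mixing
    # the two uncertainties is element-wise addition. Normalisation
    # conventions (df factors, etc.) are deliberately omitted.
    total = psd if variance is None else psd + variance
    residual = data - mean_waveform
    chi2 = torch.sum(torch.abs(residual) ** 2 / total)
    log_det = torch.sum(torch.log(total))
    return -0.5 * (chi2 + log_det)
```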
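On the two-model GPR item: a toy sketch of what the training step could look like. Everything here is a stand-in: the "waveforms" are synthetic 1-D curves over a single parameter, and scikit-learn's GPR substitutes for whatever regressor is actually used. The point is that the predictive standard deviation grows where the two models disagree or outside the training coverage, exactly the high-mass-ratio / high-total-mass regimes noted above.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Stand-in parameter grid (e.g. mass ratio q) and two stand-in models.
params = np.linspace(1.0, 10.0, 50).reshape(-1, 1)
h_model_a = np.sin(params).ravel()
h_model_b = np.sin(params).ravel() + 0.05 * params.ravel() ** 0.5

# Fit a GPR to the difference between the two models; its predictive
# variance is then an estimate of the overall waveform uncertainty.
gpr = GaussianProcessRegressor(kernel=RBF(length_scale=2.0), normalize_y=True)
gpr.fit(params, h_model_a - h_model_b)

# Extrapolating past the training range (e.g. to higher mass ratio):
# std_diff grows there, flagging where the uncertainty estimate blows up.
q_test = np.linspace(1.0, 15.0, 100).reshape(-1, 1)
mean_diff, std_diff = gpr.predict(q_test, return_std=True)
```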