
I have two time-domain signals sampled at 100 Hz that were measured using two different oscillators and therefore have a time drift between them. I have two synchronization points, one at the start and one at the end of the measurement (usually about 24h long).

The drift is very small: about 1 second over a 24h measurement, i.e. a relative drift of about 1/86400 (roughly 0.001%). I assume a linear model for the drift.
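With a linear model, the drift rate follows directly from the two synchronization points. A minimal sketch in Python (the timestamps are illustrative placeholders, not values from the actual measurement):

    # Times of the two sync events as seen by each oscillator's clock (seconds).
    # Illustrative values: the drifting clock gains ~1 s over a 24 h span.
    t0_ref, t1_ref = 0.0, 86400.0
    t0_drift, t1_drift = 0.0, 86401.0

    # Linear model: t_drift = (1 + rate) * t_ref, so the rate is the ratio
    # of the two spans minus one.
    rate = (t1_drift - t0_drift) / (t1_ref - t0_ref) - 1.0
    print(rate)  # ~1.157e-5, i.e. about 1/86400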

The length of the measurement and the relative scale of the drift rule out the usual rational resampling techniques: realizing a ratio on the order of 86401/86400 would require up- and down-sampling the longer signal by huge factors.

My approach would be to drop every 86400th sample of the longer measurement (a sketch follows the list below); the questions I have are:

  • Is there a better approach?
  • What's the frequency-domain equivalent of dropping a sample? I imagine it would shift the spectrum by a tiny amount and introduce some high-frequency noise.
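Dropping samples is straightforward to implement. A minimal sketch, assuming the longer signal is a NumPy array and the drift is exactly one sample per 86400 samples (drop_every_nth is a made-up helper name, not an existing function):

    import numpy as np

    def drop_every_nth(x, n):
        """Return x with every n-th sample removed (0-based indices n-1, 2n-1, ...)."""
        keep = np.ones(len(x), dtype=bool)
        keep[n - 1::n] = False
        return x[keep]

    # 24 h at 100 Hz is 8,640,000 samples; the drifting recording carries
    # ~100 extra samples, and dropping every 86400th sample removes exactly 100.
    fs = 100
    x_long = np.random.randn(24 * 3600 * fs + 100)
    x_sync = drop_every_nth(x_long, 86400)
    print(len(x_long) - len(x_sync))  # -> 100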

To be honest, the second question is more academic than practically relevant; most of the analysis is done in small chunks and I can just exclude the chunks with the dropped samples from the analysis.
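That said, the spectral effect of a dropped sample is easy to probe with a quick numerical experiment (the test tone, lengths, and drop position below are arbitrary choices, not from the measurement): compare the spectrum of a pure tone before and after deleting a single sample.

    import numpy as np

    fs, n = 100, 2 ** 16
    t = np.arange(n) / fs
    f0 = 12.5                                    # exactly bin 8192, so no leakage
    tone = np.sin(2 * np.pi * f0 * t)

    dropped = np.delete(tone, n // 2)            # drop one sample mid-signal
    spec_ref = np.abs(np.fft.rfft(tone))
    spec_drop = np.abs(np.fft.rfft(dropped, n))  # zero-pad back to length n

    # The drop shifts the second half by one sample, i.e. a phase jump of
    # 2*pi*f0/fs at the splice, which smears part of the tone's energy
    # across the band and raises the spectral floor.
    print(np.median(spec_ref), np.median(spec_drop))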

phausamann
  • You can certainly resample. Google "irrational sample rate conversion". – Hilmar Nov 12 '18 at 11:59
  • Is such a fine correction worth it, considering the huge number of samples? – MaximGi Nov 12 '18 at 13:49
  • @Hilmar thanks, I've come across the term and it certainly looks like a way to manage this; it's probably still quite computationally expensive. – phausamann Nov 12 '18 at 15:36
  • @MaximGi yes, it's absolutely necessary: both time series have to be synchronized, so while I don't care about the minimal drift at any given time, the accumulated drift at the end of the measurement definitely is a problem. – phausamann Nov 12 '18 at 15:37
  • You might find this answer useful. – A_A Nov 12 '18 at 16:00

0 Answers