Sequencers for musical applications have to cope with two different representations of time:
- real time (clock time), expressed in seconds with fractional time in nanoseconds;
- song position, expressed in clock ticks, whose duration depends on the tempo at which the song is playing.
Because of the specific character of music (it has tempo, groove, etc.), the second representation is preferred for musical data. For synchronization with real-time sources such as audio or video, the first is preferred. Conversion between the two is trivial once the currently active tempo and the timestamp (in both units!) of the last tempo change are known.
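The conversion might be sketched as follows. This is a minimal sketch, not part of the design above: the struct and function names are invented here, and tempo is assumed to be stored as microseconds per quarter note (as in standard MIDI files) with a fixed ppq resolution.

```c
#include <stdint.h>

/* Hypothetical state kept by the sequencer: the timestamp of the last
 * tempo change in both units, plus the currently active tempo. */
typedef struct {
    uint64_t last_change_ns;    /* real time of last tempo change (ns) */
    uint64_t last_change_tick;  /* song position of last tempo change (ticks) */
    uint32_t usec_per_quarter;  /* current tempo (usec per quarter note) */
    uint32_t ppq;               /* ticks per quarter note (resolution) */
} tempo_state;

/* Convert a song position at or after the last tempo change to real time. */
static uint64_t tick_to_ns(const tempo_state *t, uint64_t tick)
{
    uint64_t dticks = tick - t->last_change_tick;
    return t->last_change_ns +
           dticks * (uint64_t)t->usec_per_quarter * 1000u / t->ppq;
}

/* Convert a real time at or after the last tempo change to a song position. */
static uint64_t ns_to_tick(const tempo_state *t, uint64_t ns)
{
    uint64_t dns = ns - t->last_change_ns;
    return t->last_change_tick +
           dns * t->ppq / ((uint64_t)t->usec_per_quarter * 1000u);
}
```

Note that both conversions are only valid forward from the last tempo change; that is exactly why the tempo change's timestamp must be kept in both units.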
A problem arises when events are queued (using either time format) and the tempo changes, or time is adjusted for synchronization. To determine the dispatch time and order of events, we need to be able to compare their timestamps. At a constant tempo this is easily done. But when the tempo changes, the number of clock ticks (ppq) per second changes, and events in the queue will no longer be ordered correctly. To avoid this problem, the queue will be implemented as two parallel priority queues: one song-position queue and one real-time queue.
The existence of these two queues will be hidden from client applications. Clients will be presented with a single queue that can be fed with either type of timestamp.
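The dual-queue arrangement behind a single client-facing entry point might look roughly like this. All names here are hypothetical, and sorted linked lists stand in for whatever priority-queue implementation is actually used; the point is only that each event lands on the queue matching its timestamp type, so ordering within each queue stays valid across tempo changes.

```c
#include <stdint.h>
#include <stddef.h>

/* An event carries either a tick stamp or a real-time stamp. */
typedef enum { STAMP_TICK, STAMP_REAL } stamp_type;

typedef struct event {
    stamp_type type;
    uint64_t stamp;       /* ticks or nanoseconds, depending on type */
    struct event *next;   /* singly linked, kept sorted by stamp */
} event;

/* Two parallel priority queues, one per time format. */
typedef struct {
    event *tick_head;  /* song-position queue, ordered by ticks */
    event *real_head;  /* real-time queue, ordered by nanoseconds */
} seq_queue;

/* Insert into a sorted singly linked list (a minimal priority queue). */
static void sorted_insert(event **head, event *e)
{
    while (*head && (*head)->stamp <= e->stamp)
        head = &(*head)->next;
    e->next = *head;
    *head = e;
}

/* The client sees one enqueue call; the split into two queues is internal. */
static void seq_enqueue(seq_queue *q, event *e)
{
    sorted_insert(e->type == STAMP_TICK ? &q->tick_head : &q->real_head, e);
}
```

At dispatch time the sequencer would peek at both heads, convert the tick-queue head to real time with the currently active tempo, and fire whichever event is earlier; only that comparison needs the conversion, so a tempo change never forces re-sorting either queue.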