The core sequencer only takes care of passing messages between clients and delivering them at the right time to the right client. All processing has to be done within the clients. Because of this separation of responsibility the sequencer can be optimized for its main task: sequencing events. All (complex) processing is left to the clients.
This approach results in a modular configuration in which only the clients whose services are needed have to be loaded; clients that are not needed do not have to be activated.
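To make the division of labour concrete, here is a minimal sketch of a user-land client, written against the alsa-lib sequencer API (snd_seq_*). The client and port names are made up for the example, and error handling is reduced to the bare minimum; the point is only that the client owns all the logic, while the core merely routes the event.

    /*
     * Minimal user-land client sketch, assuming the alsa-lib
     * sequencer API (snd_seq_*).
     */
    #include <alsa/asoundlib.h>

    int main(void)
    {
        snd_seq_t *seq;
        snd_seq_event_t ev;
        int port;

        /* Register with the sequencer core as a client. */
        if (snd_seq_open(&seq, "default", SND_SEQ_OPEN_DUPLEX, 0) < 0)
            return 1;
        snd_seq_set_client_name(seq, "example-client");

        /* One port; the core routes events from here to subscribers. */
        port = snd_seq_create_simple_port(seq, "out",
                SND_SEQ_PORT_CAP_READ | SND_SEQ_PORT_CAP_SUBS_READ,
                SND_SEQ_PORT_TYPE_APPLICATION);

        /* Build a note-on event.  The core only routes it; what it
         * means (a synth voice, a MIDI byte) is decided by the peer. */
        snd_seq_ev_clear(&ev);
        snd_seq_ev_set_source(&ev, port);
        snd_seq_ev_set_subs(&ev);       /* deliver to all subscribers */
        snd_seq_ev_set_direct(&ev);     /* no queue: deliver now */
        snd_seq_ev_set_noteon(&ev, 0, 60, 100);
        snd_seq_event_output(seq, &ev);
        snd_seq_drain_output(seq);

        snd_seq_close(seq);
        return 0;
    }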
Two types of clients are supported:
Kernel mode clients
User-land clients
It is up to the client developer where to place his or her client: inside or outside the kernel. The right place depends on the needs of the specific client.
Using this client architecture we get a few advantages:
All the ugly hardware-specific details that make life complicated can go into the parts where they belong: the device-specific drivers. There is no need to pollute the sequencer core with them.
There is no need to keep adding new functionality (like MIDI thru etc.) to the sequencer, because the sequencer won't do more than sequencing and routing events from one client to another. All the bells and whistles ("I want to have it make coffee") can be implemented in clients.
OSS compatibility can be guaranteed by creating a client with a /dev/music and /dev/sequencer interface that simply maps the OSS events to ALSA events (a mapping sketch follows this list).
The concept of the sequencer can be kept clean and simple, so it would not be a problem to implement. Being a simple system, the interfaces can reach a stable state quickly, which is a good thing for developers of sequencer clients.
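As an illustration of the OSS mapping mentioned above, here is a sketch of the translation for channel-voice events. It assumes the 8-byte /dev/music event layout from <sys/soundcard.h> and the alsa-lib event structures; the function name and the subset of events handled are choices of this sketch.

    #include <alsa/asoundlib.h>
    #include <sys/soundcard.h>

    /*
     * Translate one 8-byte OSS /dev/music event into an ALSA
     * sequencer event.  Returns 0 on success, -1 for events this
     * sketch ignores.
     */
    static int oss_to_alsa(const unsigned char oss[8], snd_seq_event_t *ev)
    {
        snd_seq_ev_clear(ev);
        if (oss[0] == EV_CHN_VOICE) {
            switch (oss[2]) {
            case MIDI_NOTEON:
                snd_seq_ev_set_noteon(ev, oss[3], oss[4], oss[5]);
                return 0;
            case MIDI_NOTEOFF:
                snd_seq_ev_set_noteoff(ev, oss[3], oss[4], oss[5]);
                return 0;
            }
        }
        return -1;      /* timing, sysex, etc. left out here */
    }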
Some sample applications of clients, to give a feeling for the flexibility of this architecture:
The high-level event interface for a sound card can be implemented as a client. Such a client (for example for a Gravis Ultrasound) can take care of:
Driving the MIDI port: translating ALSA sequencer events into a MIDI byte stream, and interpreting incoming MIDI bytes and translating these into ALSA events (the first direction is sketched after this list).
Driving the on-board synthesizer: the client has to take care of voice allocation, instrument mapping etc. to make the GUS GF1 synth react to ALSA sequencer messages.
Presenting the mixer on the sound card as a device that reacts to control changes (volume).
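A sketch of the first task, the event-to-byte-stream direction, assuming the alsa-lib event structures; write_raw() stands in for whatever routine the driver uses to push bytes to the UART and is purely hypothetical.

    #include <alsa/asoundlib.h>

    void write_raw(const unsigned char *buf, int len);  /* hypothetical */

    /* Turn an ALSA sequencer event into raw MIDI wire bytes. */
    static void event_to_midi(const snd_seq_event_t *ev)
    {
        unsigned char buf[3];

        switch (ev->type) {
        case SND_SEQ_EVENT_NOTEON:
            buf[0] = 0x90 | (ev->data.note.channel & 0x0f);
            buf[1] = ev->data.note.note;
            buf[2] = ev->data.note.velocity;
            write_raw(buf, 3);
            break;
        case SND_SEQ_EVENT_CONTROLLER:
            buf[0] = 0xb0 | (ev->data.control.channel & 0x0f);
            buf[1] = ev->data.control.param;
            buf[2] = ev->data.control.value;
            write_raw(buf, 3);
            break;
        default:
            break;      /* running status, sysex, etc. omitted */
        }
    }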
Bank managers and synth editors can run in parallel with sequencers. The recording source for a sequencer does not need to be a MIDI input port; it can also be the output of some other client, for instance a bank manager. All the sys-ex needed to set up a MIDI device can thus be recorded in the sequencer.
A single application does not need to provide all the functionality one can think of. Why put a GM/XG mixer application inside a sequencer application, as Steinberg's Cubase does, if one can use an external application and let the two interact? If one needs a 'meter bridge' that shows the levels of all the MIDI channels in the system, but the sequencer doesn't provide one, a second 'meter bridge only' client can be used (sketched below)! The same goes for mixers and bank managers (download samples to a wavetable synth on reception of program changes!).
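A sketch of such a 'meter bridge only' client, again assuming alsa-lib; the source address 64:0 and the peak-from-velocity rule are invented for the example.

    #include <alsa/asoundlib.h>

    static unsigned char level[16];     /* one meter per MIDI channel */

    /* Tap an existing stream; the core duplicates each event for
     * every subscriber, so the original receiver is unaffected. */
    static void run_meter_bridge(snd_seq_t *seq, int my_port)
    {
        snd_seq_event_t *ev;

        snd_seq_connect_from(seq, my_port, 64, 0);  /* made-up source */

        for (;;) {
            if (snd_seq_event_input(seq, &ev) < 0)
                continue;
            if (ev->type == SND_SEQ_EVENT_NOTEON)
                level[ev->data.note.channel & 0x0f] =
                        ev->data.note.velocity;     /* crude peak */
        }
    }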
Other applications (apart from drivers) could be support modules for high-end sequencers that need fine-grained real-time control or last-minute database changes.
A Linux-based (embedded Linux) MIDI patch-bay: receive MIDI events on the input, do some processing, and send them (almost instantly) out, as in the loop sketched below.
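The core of such a patch-bay is a simple read-process-write loop, sketched here against alsa-lib; the transpose step merely stands in for arbitrary processing.

    #include <alsa/asoundlib.h>

    static void patch_bay(snd_seq_t *seq, int out_port)
    {
        snd_seq_event_t *ev;

        for (;;) {
            if (snd_seq_event_input(seq, &ev) < 0)  /* blocks */
                continue;

            /* "Do some processing": transpose notes up an octave. */
            if (ev->type == SND_SEQ_EVENT_NOTEON ||
                ev->type == SND_SEQ_EVENT_NOTEOFF)
                ev->data.note.note += 12;

            /* Send it out almost instantly: re-source the event
             * and bypass the queues. */
            snd_seq_ev_set_source(ev, out_port);
            snd_seq_ev_set_subs(ev);
            snd_seq_ev_set_direct(ev);
            snd_seq_event_output(seq, ev);
            snd_seq_drain_output(seq);
        }
    }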
Imagine one would like to have a soft synth. Instead of going through the trouble of developing one in the kernel, the developer can start off with running user-land timidity, and let the existing MIDI playing applications send their data to the 'timidity client' instead of the 'GUS MIDI output port' client. For the player it doesn't make any difference (it only sends events to another destination), and voila, we have a soft synth. Over time the developer can improve this soft synth, make it more real-time, and possibly run it as a kernel client.