First, thank you for this piece of documentation! Very helpful for adapting Tritonus to the newest API. However, running the JMF2.0FCS version of JS with jdk1.3beta still gives me headaches. It looks like a problem with the initialisation of service providers.

General remarks
---------------

* There should be real, working examples, closing the gap between the code fragments included here and complex examples like JavaSoundDemo.

* Some important information on the behaviour of objects given here is missing from the javadoc. It should be added.

* There should be more figures.

Chapter 8
---------

I've read this only briefly, so I can't say much. As an online resource for the MIDI standards, I found this more complete and more readable than the material presented on www.midi.org:
http://crystal.apana.org.au/ghansper/midi_introduction/contents.html
Especially in this chapter, some instructive figures would be helpful.

Chapter 9
---------

Obtaining a desired device: Having to retrieve a device just to find out whether it is a sequencer, a synthesizer, or represents a midi port is not very elegant. This information should be part of the Info object. An alternative is to have methods MidiSystem.getSynthesizerInfo() and MidiSystem.getSequencerInfo(), with MidiSystem.getMidiDeviceInfo() returning only midi devices that are neither Synthesizers nor Sequencers.
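
To make this concrete, here is a minimal sketch of the current, less elegant approach (I use the class and method names as I understand them from the current API; details may differ between releases): the kind of device can only be determined by obtaining the device itself and checking its type.

    import javax.sound.midi.*;

    public class ListMidiDeviceKinds {
        public static void main(String[] args) throws MidiUnavailableException {
            for (MidiDevice.Info info : MidiSystem.getMidiDeviceInfo()) {
                // The Info object alone does not tell us what kind of device
                // this is; we have to obtain the device and test its type.
                MidiDevice device = MidiSystem.getMidiDevice(info);
                if (device instanceof Sequencer) {
                    System.out.println(info.getName() + ": sequencer");
                } else if (device instanceof Synthesizer) {
                    System.out.println(info.getName() + ": synthesizer");
                } else {
                    System.out.println(info.getName() + ": midi port");
                }
            }
        }
    }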

Chapter 10
----------

Let me say that I really like the new model of Receivers and Transmitters!
With timestamps, I see some problems.
- An application deciding to use a MidiDevice's facility to compensate for latencies should be able to find out the maximum time difference handled by the device, so that the application can calculate timestamps it is sure will be handled correctly by the device.
- There is a problem when a device A sends events to a device B with a different timebase. Device A timestamps the events with its current notion of time. It passes the message including the timestamp to device B, which is required to send the message. Now, let's say device B is capable of correcting for latencies, i.e. it will honour A's timestamps. All works well if A and B have the same notion of current time, or if B's time is later, in which case it will consider all events to be in the past and try to send them immediately. However, if, say, B's time is 500 ms earlier than A's, it will delay all events by this amount, which is undesirable. The problem could be solved by having the transmitter correct the timestamp according to the difference in time between the two devices. This is currently not possible, because there is no way for a transmitter to find out which MidiDevice the Receiver it sends to belongs to. Even if it were possible, the implementation would not be easy: since time must never go backwards and should not jump forwards, maintaining the time difference needs a sophisticated clock synchronization algorithm. If the current time difference differs from a previously measured one, it is not appropriate to just set the new amount; instead, a systematic drift has to be applied to the time difference. Is it worth that effort? Perhaps the work should rather go into implementations that deliver messages as fast as possible.
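
As a small illustration of how timestamps and device clocks interact, here is a sketch of sending a message scheduled 500 ms ahead, where the timestamp is derived from the receiving device's own clock (the unproblematic case). If the timestamp were derived from another device's clock, the timebase mismatch described above would appear. The device selection is just a placeholder.

    import javax.sound.midi.*;

    public class TimestampSketch {
        public static void main(String[] args) throws Exception {
            // Placeholder selection: take the first device that offers Receivers.
            MidiDevice outDevice = null;
            for (MidiDevice.Info info : MidiSystem.getMidiDeviceInfo()) {
                MidiDevice d = MidiSystem.getMidiDevice(info);
                if (d.getMaxReceivers() != 0) {
                    outDevice = d;
                    break;
                }
            }
            if (outDevice == null) {
                return;
            }
            outDevice.open();
            Receiver receiver = outDevice.getReceiver();

            ShortMessage noteOn = new ShortMessage();
            noteOn.setMessage(ShortMessage.NOTE_ON, 0, 60, 93);

            // The timestamp is interpreted against the receiving device's clock.
            // If it were computed from a different device's clock, the timebase
            // mismatch described above would delay or rush the event.
            long now = outDevice.getMicrosecondPosition(); // -1 if unsupported
            receiver.send(noteOn, (now == -1) ? -1 : now + 500000); // 500 ms ahead

            Thread.sleep(1000);
            outDevice.close();
        }
    }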

Connecting Transmitters to Receivers: Having a single object act as both sequencer and synthesizer does no harm, but an implicit connection does. Getting the same object from calls to getSequencer() and getSynthesizer(), so that a connection between them exists, is no problem -- applications will not even notice that there are not two different objects. However, assume that an application wants to use a sequencer to play a midi file via a midi port on an external synthesizer. Having the same file also played on the internal synthesizer, with no way to untie the implicit connection to keep the internal synthesizer silent, and not even a way to detect that situation, is not acceptable for professional applications, e.g. sequencer applications. Additionally, assuming that they are the same object (with an implicit connection) leads to non-portable programming techniques.
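
For contrast, here is a sketch of the explicit wiring an application would like to rely on (the device lookup is only a placeholder): the sequencer's output is routed to an external midi port only, which is meaningful only if no hidden connection to the internal synthesizer exists.

    import javax.sound.midi.*;

    public class ExplicitRouting {
        public static void main(String[] args) throws Exception {
            Sequencer sequencer = MidiSystem.getSequencer();

            // Placeholder lookup: a device that is neither a Sequencer nor a
            // Synthesizer and that accepts messages, i.e. a midi out port.
            MidiDevice midiPort = null;
            for (MidiDevice.Info info : MidiSystem.getMidiDeviceInfo()) {
                MidiDevice d = MidiSystem.getMidiDevice(info);
                if (!(d instanceof Sequencer) && !(d instanceof Synthesizer)
                        && d.getMaxReceivers() != 0) {
                    midiPort = d;
                    break;
                }
            }
            if (midiPort == null) {
                System.out.println("no midi port found");
                return;
            }

            sequencer.open();
            midiPort.open();
            // Explicit connection: playback should go to the external port only.
            // If getSequencer() has implicitly wired the sequencer to the internal
            // synthesizer as well, there is no portable way to detect or undo it here.
            sequencer.getTransmitter().setReceiver(midiPort.getReceiver());
        }
    }
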
Connecting more than one device: It is not clear what a MidiDevice should return from getMaxTransmitters() and getMaxReceivers() if the device doesn't impose a restriction on the number of Receivers/Transmitters.

Chapter 11
----------

In the introductory part, some figures would be nice (and perhaps helpful).

p 101: "It [the duration of a tick] can vary[...], and its value is stored in the header of a standard midi file." While not wrong, the discussion here is about Sequence objects. So the sentence should state that the tick duration is stored as global data of a Sequence object.
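
As a sketch of what "global data of a Sequence object" means in code (the file name is made up): once a midi file has been loaded, the information determining the tick duration is carried by the Sequence itself.

    import java.io.File;
    import javax.sound.midi.*;

    public class TickDurationSketch {
        public static void main(String[] args) throws Exception {
            Sequence sequence = MidiSystem.getSequence(new File("example.mid"));
            // Division type and resolution together determine the tick duration
            // (for PPQ, the current tempo is needed as well).
            float divisionType = sequence.getDivisionType(); // e.g. Sequence.PPQ
            int resolution = sequence.getResolution();       // e.g. ticks per quarter note
            System.out.println("division type: " + divisionType
                               + ", resolution: " + resolution);
        }
    }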

Recording and Saving Sequences: There is a sentence that sounds like MidiSystem.getSequencer() returns a different Sequencer object each time it is called. I also noticed this behaviour in JS 0.90. But there is no statement exactly describing this behaviour. Is there a maximum number of instances? Are they managed as a pool or something like that? In Tritonus, the situation will be like this: each (active) Sequencer object needs a queue of the underlying ALSA sequencer (a module running inside the kernel). The queue is allocated on calls to open() and released with close(). There is a fixed number of queues, currently 8. So in Tritonus, there can be an unlimited number of Sequencer objects, but only 8 can be open at a time, provided no other application in the system has allocated a queue. If the behaviour of the Sun implementation differs much, there is a chance of non-portable programming techniques.
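
The following sketch shows what the Tritonus behaviour just described looks like from an application's point of view (that the limit surfaces as a MidiUnavailableException on open() is my assumption of how it would be reported):

    import java.util.ArrayList;
    import java.util.List;
    import javax.sound.midi.*;

    public class SequencerLimitSketch {
        public static void main(String[] args) {
            List<Sequencer> opened = new ArrayList<Sequencer>();
            try {
                // Try to open a generous number of sequencers; in Tritonus the
                // 9th open() should fail because all 8 ALSA queues are in use.
                for (int i = 0; i < 16; i++) {
                    Sequencer s = MidiSystem.getSequencer();
                    s.open(); // allocates an ALSA sequencer queue in Tritonus
                    opened.add(s);
                }
            } catch (MidiUnavailableException e) {
                System.out.println(opened.size() + " sequencers could be opened concurrently.");
            } finally {
                for (Sequencer s : opened) {
                    s.close(); // releases the queue again
                }
            }
        }
    }
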
The semantics of recordEnable() is not clear. Is it possible to record all channels into the same track by calling recordEnable() multiple times with different values for the channel? What happens if it is called again with the same channel value, but another track? Does the later call take precedence, or is the event routed to both tracks? By the way, my feeling is that the method should be named enableRecording() or setRecordingEnabled().
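
To make the ambiguity concrete, here is a sketch of the call sequences in question (assuming a recordEnable(Track, int) signature; the comments are the open questions, not documented behaviour):

    import javax.sound.midi.*;

    public class RecordEnableSketch {
        public static void main(String[] args) throws Exception {
            Sequence sequence = new Sequence(Sequence.PPQ, 24);
            Track trackA = sequence.createTrack();
            Track trackB = sequence.createTrack();

            Sequencer sequencer = MidiSystem.getSequencer();
            sequencer.open();
            sequencer.setSequence(sequence);

            sequencer.recordEnable(trackA, 1); // record channel 1 into trackA
            sequencer.recordEnable(trackA, 2); // also channel 2 into trackA -- allowed?
            sequencer.recordEnable(trackB, 1); // channel 1 again, different track:
                                               // does this replace the first call, or
                                               // are events now routed to both tracks?
            sequencer.startRecording();
        }
    }
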
[I have no practical midi experience, so I do not know whether the following situation is realistic.] The current method of recording requires that all recorded data is kept in memory and only written to a file after the recording is completed. This may be a problem with long recordings: main memory might not be big enough to hold all the data. Perhaps there should be a way to write directly to a file. This is definitely a problem in the audio area, where one minute of CD-quality recording consumes about 11 MB. There, I have an idea of how to solve this in an elegant way, to be discussed at another opportunity.

Editing a sequence:
The Track's tick() method should be named getTicks() or getLengthInTicks().
An interesting question is how this method behaves on unordered tracks (it is also a question how the sequencer behaves). If the events in a track are ordered by tick value, calculating the length is trivial: take the difference between the first and the last event's tick values. However, this approach breaks down if the events are not in temporal order. Another interesting point is how the sequencer behaves if the sequence is modified while it is being played. I noticed that in JS 0.90 it doesn't react at all, because the whole sequence is downloaded to the engine before it is started. Being able to modify sequences during playback is desirable, as it would allow streaming midi events over the network.
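
A sketch of the "trivial" calculation mentioned above, assuming the events in the track are ordered by tick value (with unordered events, this obviously gives a wrong result):

    import javax.sound.midi.*;

    public class TrackLengthSketch {
        // Length in ticks, assuming the events are ordered by tick value:
        // the difference between the first and the last event's tick values.
        static long lengthInTicks(Track track) {
            if (track.size() == 0) {
                return 0;
            }
            MidiEvent first = track.get(0);
            MidiEvent last = track.get(track.size() - 1);
            return last.getTick() - first.getTick();
        }
    }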

Moving to an arbitrary position:
Is it legal to call the methods setTickPosition() and setMicrosecondPosition() while playback is active? If so, is the implementation required to guarantee that currently sounding notes are shut down correctly (i.e., that note-off events are inserted if they are skipped)? Does calling stop() guarantee this behaviour?
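
Until this is specified, applications will probably resort to a defensive pattern like the following (just a sketch of what I would do, not documented behaviour):

    import javax.sound.midi.Sequencer;

    public class RepositionSketch {
        // Stop before repositioning, in case moving the position during
        // playback does not guarantee that sounding notes are shut down.
        static void moveTo(Sequencer sequencer, long newTickPosition) {
            boolean wasRunning = sequencer.isRunning();
            sequencer.stop();
            sequencer.setTickPosition(newTickPosition);
            if (wasRunning) {
                sequencer.start();
            }
        }
    }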

Muting and soloing:
Is it legal to call these methods during play?
Perhaps it's a good idea to throw an IllegalArgumentException if an invalid track number is passed to setTrackMute().
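
The reason: as far as I can tell, an invalid track number currently fails silently, which is easy to overlook. A sketch (whether getTrackMute() really returns false in this case is my assumption):

    import javax.sound.midi.*;

    public class TrackMuteSketch {
        public static void main(String[] args) throws Exception {
            Sequencer sequencer = MidiSystem.getSequencer();
            sequencer.open();
            sequencer.setSequence(new Sequence(Sequence.PPQ, 24));

            sequencer.setTrackMute(9999, true);           // invalid track: silently ignored?
            boolean muted = sequencer.getTrackMute(9999); // presumably false -- but is that
                                                          // "not muted" or "no such track"?
            System.out.println("muted: " + muted);
            sequencer.close();
        }
    }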

Synchronizing with other midi devices:
There is some confusion in that sometimes the term "synchronizing to a master/slave" is used and at other times "acting as a master/slave", which describe opposite directions.

------
Other remarks:

* MidiDevice: It is unclear what value getMaxReceivers() and getMaxTransmitters() should return if the maximum number of Receivers/Transmitters is unlimited.
If a device cannot provide microsecond resolution, but only millisecond resolution, should getMicrosecondPosition() return -1 or (milliseconds * 1000)?

* ControllerEventListener and MetaEventListener should be renamed to ControllerMessageListener and MetaMessageListener for consistency.

* ShortMessage: What happens if getData2() is called on a message with only one data byte? The choices are returning 0 or throwing an exception. Tritonus currently throws an ArrayIndexOutOfBoundsException.
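
For illustration (a sketch; a program change carries only one data byte):

    import javax.sound.midi.*;

    public class GetData2Sketch {
        public static void main(String[] args) throws Exception {
            ShortMessage msg = new ShortMessage();
            msg.setMessage(ShortMessage.PROGRAM_CHANGE, 0, 42, 0);
            // The message has only one data byte. Does this return 0,
            // or throw (Tritonus: ArrayIndexOutOfBoundsException)?
            int data2 = msg.getData2();
            System.out.println("data2: " + data2);
        }
    }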

* MidiSystem.getMidiDevice(): there is an inconsistency in possible exceptions between this method and MidiDeviceProvider.getDevice().

* There is a naming inconsistency between getMidiFileTypes() and isFileTypeSupported().

* MidiDeviceProvider should be an interface, not an abstract class. This would
leave the implementor more freedom about the class hierarchy. There is no
problem in using a non-public class like AbstractMidiDeviceProvider
to provide an implementation for isDeviceSupported().
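
A sketch of the arrangement I have in mind (the names only mirror the suggestion above; this is not the existing API):

    import javax.sound.midi.MidiDevice;

    // The service provider type as an interface...
    public interface MidiDeviceProvider {
        MidiDevice.Info[] getDeviceInfo();
        MidiDevice getDevice(MidiDevice.Info info);
        boolean isDeviceSupported(MidiDevice.Info info);
    }

    // ...with a non-public helper class providing the common implementation
    // of isDeviceSupported() for implementors that want it.
    abstract class AbstractMidiDeviceProvider implements MidiDeviceProvider {
        public boolean isDeviceSupported(MidiDevice.Info info) {
            for (MidiDevice.Info supported : getDeviceInfo()) {
                if (supported.equals(info)) {
                    return true;
                }
            }
            return false;
        }
    }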

* MidiFileReader should be an interface. Since there are only abstract methods,
I see no reason why this should be a class.

* MidiFileWriter: same as MidiDeviceProvider.

* SoundbankReader: same as MidiFileReader.

* There should be a way to register and unregister service providers at runtime and in a portable way. In JavaSound 0.86, there was a class MidiConfig which
did exactly this. Why was it abandoned?