This adds support for having a sound fade in when it is started and
fade out when it is stopped.
This commit does not yet include support for fading out when the sound
approaches the end - it only fades out when explicitly stopped with
ma_sound_stop().
The fade time is set in milliseconds.
This commit includes a new effect called ma_fader, but it currently
only supports the f32 format. Support for other formats will be added
in the future.
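The core of a linear fader is small. The sketch below is illustrative, not miniaudio's actual ma_fader implementation: all names are hypothetical, and it assumes a simple linear interpolation from a start volume to an end volume over a fixed number of frames, applied to interleaved f32 samples.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sketch of a linear fader (not miniaudio's actual ma_fader).
   Volume ramps from volumeBeg to volumeEnd over lengthInFrames. */
typedef struct {
    float volumeBeg;
    float volumeEnd;
    size_t lengthInFrames;
    size_t cursorInFrames; /* how many frames have been faded so far */
} fader_sketch;

static float fader_sketch_volume_at(const fader_sketch* pFader, size_t frame)
{
    float t;
    if (frame >= pFader->lengthInFrames) {
        return pFader->volumeEnd; /* fade finished; hold the end volume */
    }
    t = (float)frame / (float)pFader->lengthInFrames;
    return pFader->volumeBeg + t * (pFader->volumeEnd - pFader->volumeBeg);
}

static void fader_sketch_process_f32(fader_sketch* pFader, float* pFrames, size_t frameCount, int channels)
{
    size_t iFrame;
    int iChannel;
    for (iFrame = 0; iFrame < frameCount; iFrame += 1) {
        float volume = fader_sketch_volume_at(pFader, pFader->cursorInFrames + iFrame);
        for (iChannel = 0; iChannel < channels; iChannel += 1) {
            pFrames[iFrame*channels + iChannel] *= volume;
        }
    }
    pFader->cursorInFrames += frameCount;
}
```

A fade time in milliseconds would be converted to lengthInFrames using the sample rate before processing begins.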
The ma_async_notification object is used for notifying the application
that an asynchronous operation has completed.
Custom notifications can be created by implementing the callback in
ma_async_notification_callbacks. There is currently only a single
callback, onSignal, which is fired when the operation completes. A
helper notification which wraps around an ma_event object called
ma_async_notification_event is implemented which you can use as an
example for building your own notifications.
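The callback pattern described above can be sketched as follows. These declarations are illustrative, not miniaudio's actual ones: a callbacks struct holds an onSignal function pointer, and a concrete notification embeds it and casts back to its own type when signalled.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative sketch of the onSignal callback pattern; names are
   hypothetical, not miniaudio's actual declarations. */
typedef struct notification_sketch notification_sketch;

typedef struct {
    void (*onSignal)(notification_sketch* pNotification);
} notification_callbacks_sketch;

struct notification_sketch {
    notification_callbacks_sketch* cb;
};

/* Called by the asynchronous operation when it completes. */
static void notification_signal(notification_sketch* pNotification)
{
    if (pNotification->cb != NULL && pNotification->cb->onSignal != NULL) {
        pNotification->cb->onSignal(pNotification);
    }
}

/* A concrete notification that just sets a flag. An event-backed one
   (like ma_async_notification_event) would signal a waitable event
   object here instead. */
typedef struct {
    notification_sketch base; /* must be the first member for the cast */
    int signaled;
} flag_notification;

static void flag_notification_on_signal(notification_sketch* pNotification)
{
    ((flag_notification*)pNotification)->signaled = 1;
}

static notification_callbacks_sketch g_flagCallbacks = { flag_notification_on_signal };
```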
* The data buffers and data streams are now first class data sources.
* The ma_resource_manager_data_source object is now just a simple
wrapper around ma_resource_manager_data_buffer and
ma_resource_manager_data_stream.
* Unnecessary pResourceManager parameters have been removed.
* The part of the data buffer that's added to the BST has been split
out from the main data buffer object so that the main object can be
owned by the caller.
* Add ma_resource_manager_data_source_get_available_frames() which is
used to retrieve the number of frames that can be read at the time
of calling. This is useful in asynchronous scenarios.
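The partial-read pattern this enables looks roughly like the following sketch. The names and types are illustrative, not miniaudio's: the caller clamps its read to whatever has been decoded so far instead of blocking on the decoder.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative sketch of reading against an asynchronously filling
   buffer; names are hypothetical. */
typedef struct {
    size_t decodedFrameCount; /* frames the async job has decoded so far */
    size_t readCursor;        /* frames the application has consumed */
} decode_progress;

static size_t available_frames(const decode_progress* p)
{
    return p->decodedFrameCount - p->readCursor;
}

static size_t read_frames(decode_progress* p, size_t requestedFrameCount)
{
    size_t framesToRead = requestedFrameCount;
    size_t avail = available_frames(p);
    if (framesToRead > avail) {
        framesToRead = avail; /* partial read; caller tries again later */
    }
    p->readCursor += framesToRead;
    return framesToRead;
}
```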
This commit changes synchronous decoding so that the calling thread is
the one which performs the decoding. Previously, decoding was done on
the job threads which was then waited on by an event on the calling
thread. The rationale for this design was to keep decoding on a single
code path; however, this creates a problem for programs that would
prefer not to have any asynchronous job threads. In this case, these
synchronously decoded sounds would never get decoded because there
would not be any threads available to actually perform the decoding.
This commit enables the resource manager to be used without
a job thread so long as asynchronous decoding and streaming are not
used. This scenario could be useful for programs that want to pre-load
all of their sounds at load time and save some system resources by not
incurring the overhead of an additional unnecessary thread.
This new system is used for asynchronous decoding of sound data. The
main improvement with this one over the old one is the ability to do
multi-producer, multi-consumer lock-free posting of messages which
means multiple threads can be used to process jobs simultaneously
rather than a single thread processing all jobs serially.
Decoding is inherently serial, which means multiple job threads are only
useful when decoding multiple sounds. Each individual sound will be
decoded serially.
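The posting/processing model can be sketched with a simple ring-buffer job queue. The real queue described above is lock-free and safe for multiple producers and consumers; this single-threaded sketch only shows the interface shape, and every name in it is illustrative.

```c
#include <assert.h>
#include <stddef.h>

/* Single-threaded ring-buffer job queue sketch. The real implementation
   is lock-free MPMC; names here are hypothetical. */
#define JOB_QUEUE_CAPACITY 8

typedef struct {
    int code;    /* what kind of job, e.g. decode a page */
    void* pData;
} job;

typedef struct {
    job jobs[JOB_QUEUE_CAPACITY];
    size_t head; /* next slot to read (consumers)  */
    size_t tail; /* next slot to write (producers) */
} job_queue;

static int job_queue_post(job_queue* q, job j)
{
    if (q->tail - q->head == JOB_QUEUE_CAPACITY) {
        return -1; /* full */
    }
    q->jobs[q->tail % JOB_QUEUE_CAPACITY] = j;
    q->tail += 1;
    return 0;
}

static int job_queue_next(job_queue* q, job* pJob)
{
    if (q->head == q->tail) {
        return -1; /* empty; a job thread would wait on a semaphore here */
    }
    *pJob = q->jobs[q->head % JOB_QUEUE_CAPACITY];
    q->head += 1;
    return 0;
}
```

With application-managed threads, the application would simply call the dequeue-and-process routine from its own workers instead of letting the resource manager spawn threads.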
Another change with this commit is the ability for applications to
control whether or not the resource manager manages its own job
threads. This is useful if an application wants to manage the job queue
itself if, for example, it wants to integrate it more closely with
its existing job system.
The thread priority can be set via ma_thread_create() and can be
configured in the context config, alongside the option for configuring
the size of the stack for the audio thread.
A streaming data source keeps in memory only two pages of audio data
and dynamically loads data from a background thread. It is essentially
a double buffering system - as one page is playing, the other is being
loaded by the async thread.
The size of a single page is defined by the following macro:
MA_RESOURCE_MANAGER_PAGE_SIZE_IN_MILLISECONDS
By default this is currently set to 1 second of audio data. This means
each page has 1 second to load which should be plenty of time. If you
need additional time, the only way to do it is to increase the size of
the page by changing the value of the above macro.
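Converting the macro's millisecond value into a frame count presumably uses the usual milliseconds-to-frames formula. The macro name comes from the text above; the function is illustrative, under the assumption that the conversion is based on the sample rate.

```c
#include <assert.h>

/* Default page size per the text above: 1 second of audio data. */
#define MA_RESOURCE_MANAGER_PAGE_SIZE_IN_MILLISECONDS 1000

/* Illustrative helper: frames per page = sampleRate * ms / 1000. */
static unsigned long long page_size_in_frames(unsigned int sampleRate)
{
    return (unsigned long long)sampleRate * MA_RESOURCE_MANAGER_PAGE_SIZE_IN_MILLISECONDS / 1000;
}
```

So at 48000 Hz each page holds 48000 frames, and while one page plays the background thread has one full second to fill the other.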
* ma_resource_manager_uninit() has been implemented.
* Bug fixes for inserting and removing data buffers from the BST.
* Some old experimental code has been removed.
* Minor whitespace clean up.
This enables early playback of the sound while the remainder of the
sound is loaded in the background. When the first page is loaded, the
sound can start playback. While it's playing, the rest of the sound is
loaded in a background thread. In addition, a sound no longer needs to
wait for every sound ahead of it in the queue to fully decode before it
can start - it only needs to wait for the first page of each queued
sound to decode. This enables much fairer prioritization of
asynchronously loaded sounds.
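The start condition described above reduces to a simple check. This helper is purely illustrative (the name and parameters are hypothetical): a sound may begin playback once at least one full page has been decoded, rather than waiting for the whole file.

```c
#include <assert.h>

/* Illustrative start condition: playback may begin once the first page
   of the sound has finished decoding. */
static int sound_can_start(unsigned long long framesDecoded,
                           unsigned long long pageSizeInFrames)
{
    return framesDecoded >= pageSizeInFrames;
}
```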
This paged decoding system is *not* a true streaming solution for long
sounds. Support for true streaming will be added in future commits.
This commit is only concerned with filling in-memory buffers containing
the whole sound in an asynchronous manner.
* Early work on asynchronously decoding into a memory buffer. This is
just an early implementation - there are still issues needing to be
figured out. In particular, sounds do not automatically start until
the entire file has been decoded. It would be good if they could
start as soon as the first second or so of data has been decoded.