Previously the group could be set to NULL, in which case the master
group would be used. This has now changed: the group parameter can
never be NULL. Use ma_engine_get_master_sound_group() to retrieve the
master sound group.
* Remove ma_engine_sound_set_fade_in/out()
* Add ma_engine_sound_set_fade_point_in_frames()
* Add ma_engine_sound_set_fade_point_in_milliseconds()
* Add ma_engine_sound_set_stop_delay()
* Add ma_engine_sound_get_time_in_frames()
* Remove ma_engine_sound_group_set_fade_in/out()
* Add ma_engine_sound_group_set_fade_point_in_frames()
* Add ma_engine_sound_group_set_fade_point_in_milliseconds()
* Add ma_engine_sound_group_set_stop_delay()
* Add ma_engine_sound_group_get_time_in_frames()
The fade in/out system has been replaced with something more general
and flexible which allows for up to two fade points to be configured
per sound or group, with arbitrary time periods and volumes.
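The idea behind a fade point can be sketched as linear interpolation between two volumes over an arbitrary frame range. The struct and function below are illustrative only, assuming this interpretation; they are not miniaudio's actual declarations.

```c
#include <stddef.h>

/* Illustrative only: a fade point ramps volume linearly between two
 * levels over an arbitrary frame range. Names are not miniaudio's. */
typedef struct {
    size_t frameBeg;   /* frame where the fade starts */
    size_t frameEnd;   /* frame where the fade ends   */
    float  volumeBeg;  /* volume at frameBeg          */
    float  volumeEnd;  /* volume at frameEnd          */
} fade_point;

/* Returns the volume to apply at absolute frame position `frame`. */
static float fade_point_volume(const fade_point* p, size_t frame)
{
    float t;
    if (frame <= p->frameBeg) return p->volumeBeg;
    if (frame >= p->frameEnd) return p->volumeEnd;
    t = (float)(frame - p->frameBeg) / (float)(p->frameEnd - p->frameBeg);
    return p->volumeBeg + t * (p->volumeEnd - p->volumeBeg);
}
```

With two such points per sound, one can express a fade-in at the start and a fade-out near the end independently.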
This commit also includes the addition of a placeholder parameter for
ma_engine_sound_init_from_file() which is used to notify the caller
when an asynchronously loaded sound has finished loading.
Fading is now set using these APIs:
* ma_engine_sound_set_fade_in()
* ma_engine_sound_set_fade_out()
When a sound is stopped, either by naturally reaching the end, or
explicitly with ma_engine_sound_stop(), the fade out will be applied.
Fading will also be applied around loop transitions.
Note that when a sound is stopped implicitly by reaching its end,
fading out will not work when the length of the sound is not known
(that is, when ma_data_source_get_length_in_pcm_frames() returns 0).
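The reason an unknown length prevents an implicit fade-out is that the engine cannot compute where the fade must begin. A minimal sketch of that calculation (function name is illustrative, not miniaudio's):

```c
#include <stddef.h>

/* Illustrative: to fade out over `fadeFrames` before the natural end
 * of a sound, the engine needs to know where the end is. Returns 1 and
 * writes the frame at which the fade should begin, or returns 0 when
 * the length is unknown (reported as 0), in which case no fade can be
 * scheduled. */
static int fade_out_begin_frame(size_t lengthInFrames, size_t fadeFrames, size_t* pBeginFrame)
{
    if (lengthInFrames == 0) {
        return 0;  /* length unknown: cannot fade out on natural end */
    }
    *pBeginFrame = (fadeFrames >= lengthInFrames) ? 0 : lengthInFrames - fadeFrames;
    return 1;
}
```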
* ma_data_source_get_cursor_in_pcm_frames()
* ma_data_source_get_length_in_pcm_frames()
When the data source has no notion of a cursor or length, these
functions return MA_NOT_IMPLEMENTED. This happens when a custom data
source leaves the corresponding callbacks unimplemented.
ma_decoder, ma_audio_buffer, ma_waveform and ma_noise have all been
updated to support these new functions.
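The pattern can be sketched as a vtable with optional callbacks, where a NULL callback maps to MA_NOT_IMPLEMENTED. The declarations below are illustrative, not miniaudio's actual types, and the MA_NOT_IMPLEMENTED value shown is an assumption:

```c
#include <stddef.h>

/* Illustrative sketch of the pattern (not miniaudio's declarations):
 * optional callbacks are left NULL by custom data sources that have no
 * notion of a cursor or length, and the wrapper reports
 * MA_NOT_IMPLEMENTED in that case. Error code values are illustrative. */
#define MA_SUCCESS          0
#define MA_NOT_IMPLEMENTED -29

typedef struct data_source data_source;
struct data_source {
    int (*onGetCursor)(data_source* pDS, unsigned long long* pCursor);
    int (*onGetLength)(data_source* pDS, unsigned long long* pLength);
};

static int data_source_get_length_in_pcm_frames(data_source* pDS, unsigned long long* pLength)
{
    *pLength = 0;
    if (pDS->onGetLength == NULL) {
        return MA_NOT_IMPLEMENTED;  /* callback left unimplemented */
    }
    return pDS->onGetLength(pDS, pLength);
}
```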
This adds support for having a sound fade in when it is started and
fade out when it is stopped.
This commit does not yet include support for fading out when the sound
approaches the end - it only fades out when explicitly stopped with
ma_sound_stop().
The fade time is set in milliseconds.
This commit includes a new effect called ma_fader, but it currently
only supports f32 formats. Support for other formats will be added in
the future.
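What the fader does for f32 data can be sketched as a linear volume ramp applied across a buffer of interleaved frames. The function below is an illustration of the concept, not ma_fader's actual implementation:

```c
#include <stddef.h>

/* Illustrative sketch of a fader effect for f32 samples (names are not
 * miniaudio's): apply a linear volume ramp from volumeBeg to volumeEnd
 * across frameCount interleaved frames. */
static void fader_process_f32(float* pFrames, size_t frameCount, size_t channels,
                              float volumeBeg, float volumeEnd)
{
    size_t iFrame, iChannel;
    for (iFrame = 0; iFrame < frameCount; iFrame += 1) {
        float t = (frameCount > 1) ? (float)iFrame / (float)(frameCount - 1) : 1.0f;
        float volume = volumeBeg + t * (volumeEnd - volumeBeg);
        for (iChannel = 0; iChannel < channels; iChannel += 1) {
            pFrames[iFrame*channels + iChannel] *= volume;
        }
    }
}
```

Supporting other formats would mean either adding per-format variants of this loop or converting to f32 at the boundary, which is presumably why f32 came first.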
The ma_async_notification object is used for notifying the application
that an asynchronous operation has completed.
Custom notifications can be implemented via the callback in
ma_async_notification_callbacks. There is currently only a single
callback, onSignal, which is fired when the operation completes. A
helper notification called ma_async_notification_event, which wraps an
ma_event object, is included and can be used as an example for building
your own notifications.
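The callback pattern can be sketched as follows. The struct layout and names below are assumptions for illustration (a flag stands in for the ma_event that ma_async_notification_event wraps):

```c
/* Illustrative sketch of the notification pattern (layout and names
 * are assumptions, not miniaudio's exact declarations): the object
 * begins with a pointer-to-callbacks-style header containing a single
 * onSignal callback. */
typedef struct {
    void (*onSignal)(void* pNotification);
} notification_callbacks;

/* A minimal custom notification that sets a flag when signalled,
 * analogous in spirit to ma_async_notification_event wrapping an
 * ma_event. */
typedef struct {
    notification_callbacks cb;  /* must be the first member */
    int done;
} flag_notification;

static void flag_notification_on_signal(void* pNotification)
{
    ((flag_notification*)pNotification)->done = 1;
}

static void flag_notification_init(flag_notification* pN)
{
    pN->cb.onSignal = flag_notification_on_signal;
    pN->done = 0;
}

/* What the completing async operation does: fire the callback. */
static void notification_signal(void* pNotification)
{
    ((notification_callbacks*)pNotification)->onSignal(pNotification);
}
```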
* The data buffers and data streams are now first class data sources.
* The ma_resource_manager_data_source object is now just a simple
wrapper around ma_resource_manager_data_buffer and
ma_resource_manager_data_stream.
* Unnecessary pResourceManager parameters have been removed.
* The part of the data buffer that's added to the BST has been split
out from the main data buffer object so that the main object can be
owned by the caller.
* Add ma_resource_manager_data_source_get_available_frames() which is
used to retrieve the number of frames that can be read at the time
of calling. This is useful in asynchronous scenarios.
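The asynchronous use case for querying available frames can be sketched as a non-blocking partial read: read however many frames are decoded and ready rather than waiting for the full request. Names and types below are illustrative, not miniaudio's:

```c
#include <stddef.h>
#include <string.h>

/* Illustrative: in an asynchronous scenario the decoder may still be
 * filling the buffer, so the caller queries how many frames are ready
 * and reads only that many instead of blocking. Names are not
 * miniaudio's. */
static size_t read_available_f32(const float* pDecoded, size_t availableFrames,
                                 float* pOut, size_t requestedFrames, size_t channels)
{
    size_t framesToRead = (requestedFrames < availableFrames) ? requestedFrames : availableFrames;
    memcpy(pOut, pDecoded, framesToRead * channels * sizeof(float));
    return framesToRead;  /* may be less than requested */
}
```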
This commit changes synchronous decoding so that the calling thread is
the one which performs the decoding. Previously, decoding was done on
the job threads which was then waited on by an event on the calling
thread. The rationale for this design was to keep decoding on a single
code path, however this creates a problem for programs that would
prefer not to have any asynchronous job threads. In this case, these
synchronously decoded sounds would never get decoded because there
would not be any threads available to actually perform the decoding.
This commit enables the resource manager to be used without a job
thread, so long as asynchronous decoding and streaming are not used.
This scenario can be useful for programs that want to pre-load
all of their sounds at load time and save some system resources by not
incurring the overhead of an additional unnecessary thread.
This new system is used for asynchronous decoding of sound data. The
main improvement over the old system is lock-free multi-producer,
multi-consumer posting of messages, which means multiple threads can
be used to process jobs simultaneously rather than a single thread
processing all jobs serially.
Decoding is inherently serial, which means multiple job threads are
only useful when decoding multiple sounds; each individual sound will
still be decoded serially.
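The multi-producer, multi-consumer posting can be sketched with the well-known Vyukov bounded MPMC queue built on C11 atomics. This is an illustration of the technique, not miniaudio's actual job queue code; the payload and capacity are placeholders:

```c
#include <stdatomic.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative sketch of a bounded multi-producer, multi-consumer
 * lock-free queue (Vyukov's bounded MPMC queue), in the spirit of the
 * job posting described above. Not miniaudio's actual code.
 * Capacity must be a power of two. */
#define JOBQ_CAP 8

typedef struct {
    atomic_size_t seq;
    int job;  /* payload; a real queue would store a job struct */
} jobq_cell;

typedef struct {
    jobq_cell cells[JOBQ_CAP];
    atomic_size_t enqueuePos;
    atomic_size_t dequeuePos;
} jobq;

static void jobq_init(jobq* q)
{
    size_t i;
    for (i = 0; i < JOBQ_CAP; i += 1) {
        atomic_store_explicit(&q->cells[i].seq, i, memory_order_relaxed);
    }
    atomic_store_explicit(&q->enqueuePos, 0, memory_order_relaxed);
    atomic_store_explicit(&q->dequeuePos, 0, memory_order_relaxed);
}

static int jobq_push(jobq* q, int job)
{
    size_t pos = atomic_load_explicit(&q->enqueuePos, memory_order_relaxed);
    for (;;) {
        jobq_cell* cell = &q->cells[pos & (JOBQ_CAP - 1)];
        size_t seq = atomic_load_explicit(&cell->seq, memory_order_acquire);
        intptr_t dif = (intptr_t)seq - (intptr_t)pos;
        if (dif == 0) {
            /* Cell is free; try to claim this position. */
            if (atomic_compare_exchange_weak_explicit(&q->enqueuePos, &pos, pos + 1,
                    memory_order_relaxed, memory_order_relaxed)) {
                cell->job = job;
                atomic_store_explicit(&cell->seq, pos + 1, memory_order_release);
                return 1;
            }
        } else if (dif < 0) {
            return 0;  /* full */
        } else {
            pos = atomic_load_explicit(&q->enqueuePos, memory_order_relaxed);
        }
    }
}

static int jobq_pop(jobq* q, int* pJob)
{
    size_t pos = atomic_load_explicit(&q->dequeuePos, memory_order_relaxed);
    for (;;) {
        jobq_cell* cell = &q->cells[pos & (JOBQ_CAP - 1)];
        size_t seq = atomic_load_explicit(&cell->seq, memory_order_acquire);
        intptr_t dif = (intptr_t)seq - (intptr_t)(pos + 1);
        if (dif == 0) {
            if (atomic_compare_exchange_weak_explicit(&q->dequeuePos, &pos, pos + 1,
                    memory_order_relaxed, memory_order_relaxed)) {
                *pJob = cell->job;
                atomic_store_explicit(&cell->seq, pos + JOBQ_CAP, memory_order_release);
                return 1;
            }
        } else if (dif < 0) {
            return 0;  /* empty */
        } else {
            pos = atomic_load_explicit(&q->dequeuePos, memory_order_relaxed);
        }
    }
}
```

Any number of threads can call jobq_push and jobq_pop concurrently without locks, which is what allows multiple job threads to drain the queue simultaneously.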
Another change with this commit is the ability for applications to
control whether or not the resource manager manages its own job
threads. This is useful for applications that want to manage the job
queue themselves, for example to integrate it more closely with their
existing job system.