Previously you could set the group to NULL in which case the master
group would be used, but this has now changed and the group parameter
can never be NULL. Use ma_engine_get_master_sound_group() to retrieve
the master sound group.
* Removed ma_engine_sound_set_fade_in/out()
* Add ma_engine_sound_set_fade_point_in_frames()
* Add ma_engine_sound_set_fade_point_in_milliseconds()
* Add ma_engine_sound_set_stop_delay()
* Add ma_engine_sound_get_time_in_frames()
* Removed ma_engine_sound_group_set_fade_in/out()
* Add ma_engine_sound_group_set_fade_point_in_frames()
* Add ma_engine_sound_group_set_fade_point_in_milliseconds()
* Add ma_engine_sound_group_set_stop_delay()
* Add ma_engine_sound_group_get_time_in_frames()
The fade in/out system has been replaced with something more general
and flexible which allows for up to two fade points to be configured
per sound or group, with arbitrary time periods and volumes.
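Evaluating a fade point presumably amounts to interpolating a volume over a frame range. The sketch below is a standalone illustration of that idea; the `fade_point` struct, field names and the linear ramp are assumptions, not miniaudio's actual implementation.

```c
#include <assert.h>

/* Hypothetical fade point: ramp the volume from volumeBeg to volumeEnd
   over the frame range [timeBegInFrames, timeEndInFrames]. */
typedef struct {
    unsigned long long timeBegInFrames;
    unsigned long long timeEndInFrames;
    float volumeBeg;
    float volumeEnd;
} fade_point;

/* Evaluate the gain at an absolute frame position, assuming a linear ramp.
   Before the range the begin volume applies; after it, the end volume. */
float fade_point_gain(const fade_point* p, unsigned long long frame) {
    if (frame <= p->timeBegInFrames) return p->volumeBeg;
    if (frame >= p->timeEndInFrames) return p->volumeEnd;
    float t = (float)(frame - p->timeBegInFrames) /
              (float)(p->timeEndInFrames - p->timeBegInFrames);
    return p->volumeBeg + t * (p->volumeEnd - p->volumeBeg);
}
```

With two such points per sound, one can be placed at the start for the fade-in and one near the end for the fade-out.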
This commit also includes the addition of a placeholder parameter for
ma_engine_sound_init_from_file() which is used to notify the caller
when an asynchronously loaded sound has finished loading.
Fading is now set using these APIs:
* ma_engine_sound_set_fade_in()
* ma_engine_sound_set_fade_out()
When a sound is stopped, either by naturally reaching the end, or
explicitly with ma_engine_sound_stop(), the fade out will be applied.
Fading will also be applied around loop transitions.
Note that when a sound is stopped implicitly by it reaching the end,
fading out will not work when the length of the sound is not known (that
is, when ma_data_source_get_length_in_pcm_frames() returns 0).
This adds support for having a sound fade in when it is started and
fade out when it is stopped.
This commit does not yet include support for fading out when the sound
approaches the end - it only fades out when explicitly stopped with
ma_sound_stop().
The fade time is set in milliseconds.
This commit includes a new effect called ma_fader, but it currently
only supports f32 formats. Support for other formats will be added in
the future.
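An f32-only fader of the kind described above boils down to applying a per-frame gain ramp to a buffer of float samples. This is a minimal sketch under that assumption; the struct and function names are illustrative, not ma_fader's real interface.

```c
#include <stddef.h>

/* Minimal f32 fader sketch: applies a linear volume ramp across a buffer
   of interleaved f32 samples. */
typedef struct {
    float volumeBeg;
    float volumeEnd;
    size_t lengthInFrames;   /* duration of the fade */
    size_t cursorInFrames;   /* how far into the fade we are */
} fader_f32;

void fader_process(fader_f32* f, float* frames, size_t frameCount, int channels) {
    for (size_t i = 0; i < frameCount; i++) {
        float t = 1.0f;
        if (f->cursorInFrames < f->lengthInFrames) {
            t = (float)f->cursorInFrames / (float)f->lengthInFrames;
        }
        float gain = f->volumeBeg + t * (f->volumeEnd - f->volumeBeg);
        for (int c = 0; c < channels; c++) {
            frames[i*channels + c] *= gain;
        }
        if (f->cursorInFrames < f->lengthInFrames) f->cursorInFrames++;
    }
}
```

Supporting other formats would mean either converting to f32 first or duplicating this loop per sample format, which is likely why f32 came first.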
This new system is used for asynchronous decoding of sound data. The
main improvement with this one over the old one is the ability to do
multi-producer, multi-consumer lock-free posting of messages which
means multiple threads can be used to process jobs simultaneously
rather than a single thread processing all jobs serially.
Decoding is inherently serial, which means multiple job threads are
only useful when decoding multiple sounds. Each individual sound will be
decoded serially.
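The multi-producer, multi-consumer lock-free posting mentioned above can be illustrated with a bounded queue in the style of Dmitry Vyukov's well-known design. This is a standalone sketch of the technique, not the resource manager's actual job queue.

```c
#include <stdatomic.h>
#include <stddef.h>
#include <stdint.h>

#define QUEUE_CAP 8  /* must be a power of two */

/* Each cell carries a sequence number that tells producers and consumers
   whether the slot is ready for them, which is what makes this safe for
   multiple threads on both ends without locks. */
typedef struct {
    atomic_size_t seq;
    int value;              /* stand-in for a job */
} cell_t;

typedef struct {
    cell_t cells[QUEUE_CAP];
    atomic_size_t head;     /* dequeue position */
    atomic_size_t tail;     /* enqueue position */
} mpmc_queue;

void mpmc_init(mpmc_queue* q) {
    for (size_t i = 0; i < QUEUE_CAP; i++) atomic_init(&q->cells[i].seq, i);
    atomic_init(&q->head, 0);
    atomic_init(&q->tail, 0);
}

int mpmc_push(mpmc_queue* q, int value) {
    size_t pos = atomic_load(&q->tail);
    for (;;) {
        cell_t* cell = &q->cells[pos & (QUEUE_CAP - 1)];
        intptr_t diff = (intptr_t)atomic_load(&cell->seq) - (intptr_t)pos;
        if (diff == 0) {
            /* Slot is free; claim it by advancing the tail. */
            if (atomic_compare_exchange_weak(&q->tail, &pos, pos + 1)) {
                cell->value = value;
                atomic_store(&cell->seq, pos + 1);  /* publish to consumers */
                return 1;
            }
        } else if (diff < 0) {
            return 0; /* full */
        } else {
            pos = atomic_load(&q->tail); /* another producer got here first */
        }
    }
}

int mpmc_pop(mpmc_queue* q, int* out) {
    size_t pos = atomic_load(&q->head);
    for (;;) {
        cell_t* cell = &q->cells[pos & (QUEUE_CAP - 1)];
        intptr_t diff = (intptr_t)atomic_load(&cell->seq) - (intptr_t)(pos + 1);
        if (diff == 0) {
            if (atomic_compare_exchange_weak(&q->head, &pos, pos + 1)) {
                *out = cell->value;
                atomic_store(&cell->seq, pos + QUEUE_CAP); /* recycle slot */
                return 1;
            }
        } else if (diff < 0) {
            return 0; /* empty */
        } else {
            pos = atomic_load(&q->head);
        }
    }
}
```

With a queue like this, any job thread can pop the next decode job without contending on a lock, which is what allows several sounds to be decoded simultaneously.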
Another change with this commit is the ability for applications to
control whether or not the resource manager manages its own job
threads. This is useful if an application wants to manage the job
queue itself, for example to integrate it more closely with an
existing job system.
The thread priority can be set via ma_thread_create() and in the
context config, alongside the option for configuring the size of the
stack for the audio thread.
A streaming data source keeps in memory only two pages of audio data
and dynamically loads data from a background thread. It is essentially
a double buffering system - as one page is playing, the other is being
loaded by the async thread.
The size of a single page is defined by the following macro:
MA_RESOURCE_MANAGER_PAGE_SIZE_IN_MILLISECONDS
By default this is currently set to 1 second of audio data. This means
each page has 1 second to load, which should be plenty of time. If you
need additional time, the only way to get it is to increase the page
size by changing the value of the macro above.
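Since the macro is expressed in milliseconds, the actual page length in PCM frames depends on the sample rate. A small worked conversion (the helper function name is mine, only the macro name comes from the text):

```c
#define MA_RESOURCE_MANAGER_PAGE_SIZE_IN_MILLISECONDS 1000  /* 1 second */

/* Convert the page size to PCM frames for a given sample rate. */
unsigned long long page_size_in_frames(unsigned int sampleRate) {
    return (unsigned long long)MA_RESOURCE_MANAGER_PAGE_SIZE_IN_MILLISECONDS
           * sampleRate / 1000;
}
```

At 48000 Hz the default works out to 48000 frames per page.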
This enables early playback of the sound while the remainder of the
sound is loaded in the background. When the first page is loaded, the
sound can start playback. While it's playing, the rest of the sound is
loaded in a background thread. In addition, a sound no longer needs to
wait for every sound ahead of it in the queue to fully decode before it
can start - it only needs to wait for the first page of each of the
queued sounds to decode. This enables much fairer prioritization of
asynchronously loaded sounds.
This paged decoding system is *not* a true streaming solution for long
sounds. Support for true streaming will be added in future commits.
This commit is only concerned with filling in-memory buffers containing
the whole sound in an asynchronous manner.
* Early work on asynchronously decoding into a memory buffer. This is
just an early implementation - there are still issues needing to be
figured out. In particular, sounds do not automatically start until
the entire file has been decoded. It would be good if they could
start as soon as the first second or so of data has been decoded.
* Implement the notion of a virtual file system (VFS) which is used
by the resource manager for loading sound files. The idea is that
the application can implement these to support loading from custom
packages, archives, etc.
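The general shape of such a VFS is a table of callbacks the application fills in. The sketch below illustrates the idea with a toy in-memory "archive"; the struct layout and callback names are hypothetical and will not match the real API.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical VFS callback table. The real API's shape may differ; this
   just illustrates routing file IO through the application. */
typedef struct vfs vfs;
struct vfs {
    void*  (*onOpen)(vfs* self, const char* path);
    size_t (*onRead)(vfs* self, void* file, void* dst, size_t bytes);
};

/* A toy VFS backed by a single in-memory archive entry. */
typedef struct {
    vfs base;
    const char* name;
    const unsigned char* data;
    size_t size;
    size_t cursor;
} memory_vfs;

static void* memory_vfs_open(vfs* self, const char* path) {
    memory_vfs* m = (memory_vfs*)self;
    if (strcmp(path, m->name) != 0) return NULL;  /* not in the archive */
    m->cursor = 0;
    return m; /* use the VFS itself as the file handle */
}

static size_t memory_vfs_read(vfs* self, void* file, void* dst, size_t bytes) {
    memory_vfs* m = (memory_vfs*)file;
    (void)self;
    size_t remaining = m->size - m->cursor;
    if (bytes > remaining) bytes = remaining;
    memcpy(dst, m->data + m->cursor, bytes);
    m->cursor += bytes;
    return bytes;
}

memory_vfs memory_vfs_init(const char* name, const unsigned char* data, size_t size) {
    memory_vfs m;
    m.base.onOpen = memory_vfs_open;
    m.base.onRead = memory_vfs_read;
    m.name = name;
    m.data = data;
    m.size = size;
    m.cursor = 0;
    return m;
}
```

A real implementation would back these callbacks with a package file or archive format instead of a single buffer.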
* Add a helper API for decoding a file from a VFS and a file name.
* Add some symbols representing allocation types. These are not
currently used, but I've added them in preparation for changes to
the allocation callbacks. The idea is that an allocation type will
be passed to the callbacks to give the allocator better intel as to
what it's allocating which will give it a chance to optimize.
* Add some placeholders for flags for controlling how to load a data
source. Currently only MA_DATA_SOURCE_FLAG_DECODE is implemented
which is used to indicate to the resource manager that it should
store the decoded contents of the sound file in memory rather than
the raw (encoded) file data.
* Support has been added to the resource manager to load audio data
into memory rather than naively reading straight from disk. This
eliminates file IO from the audio thread, but comes at the expense
of extra memory usage. Support for streaming is not implemented as
of this commit. Early (largely untested) work has been implemented
to avoid loading sound files multiple times. This is a simple ref
count system for now, with hashed file paths being used as the
key into a binary search tree. The BST is not fully tested and
likely has bugs which will be ironed out in future commits.
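The ref-count-plus-BST scheme in the last bullet can be sketched as follows. FNV-1a is used here purely as a stand-in hash, and the node layout is illustrative; neither is taken from the actual implementation.

```c
#include <stdint.h>
#include <stdlib.h>

/* Hash a file path to a fixed-width key (FNV-1a, chosen for the sketch). */
static uint32_t hash_path(const char* path) {
    uint32_t h = 2166136261u;
    while (*path) {
        h ^= (uint32_t)(unsigned char)*path++;
        h *= 16777619u;
    }
    return h;
}

typedef struct node {
    uint32_t key;        /* hashed file path */
    int refCount;
    struct node* left;
    struct node* right;
} node;

/* Find-or-insert: a second load of the same path bumps the ref count
   instead of decoding the file again. */
node* acquire(node** root, uint32_t key) {
    while (*root != NULL) {
        if (key == (*root)->key) { (*root)->refCount++; return *root; }
        root = (key < (*root)->key) ? &(*root)->left : &(*root)->right;
    }
    node* n = (node*)calloc(1, sizeof(node));
    n->key = key;
    n->refCount = 1;
    *root = n;
    return n;
}
```

A matching release path would decrement the count and free the decoded data once it reaches zero.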
* Support has been added for configuring the stereo pan effect. Most
audio engines use a simple balancing technique to implement the
pan effect, but a true pan should "move" one side to the other
rather than simply making one side quieter. With this commit,
the ma_panner effect can support both modes. The default mode will
be set to ma_pan_mode_balance which is just a simple balancing and
is consistent with most other audio engines. A true pan can be used
by setting the mode to ma_pan_mode_pan.
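The difference between the two modes can be shown with a single stereo frame. For simplicity this sketch uses a one-sided pan value in [0, 1] that shifts toward the right (the real parameter range and math may differ):

```c
/* Balance mode: panning right simply attenuates the left channel. */
void pan_balance(float pan, float inL, float inR, float* outL, float* outR) {
    *outL = inL * (1.0f - pan);
    *outR = inR;
}

/* Pan mode: the attenuated portion of the left channel is moved into
   the right channel rather than discarded. */
void pan_true(float pan, float inL, float inR, float* outL, float* outR) {
    *outL = inL * (1.0f - pan);
    *outR = inR + inL * pan;
}
```

At a full pan of 1.0, balance mode silences the left channel and leaves the right unchanged, while true pan mode delivers the left channel's signal into the right.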
* Initial work on infrastructure for spatialization, panning and
pitch shifting.
* Add ma_engine_sound_set_pitch()
Spatialization and panning are not yet implemented, but pitch shifting
should now be working.
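Pitch shifting in an engine like this is typically implemented as a playback-rate change via resampling. The linear-interpolation resampler below is a standalone illustration of that approach, not the engine's actual implementation.

```c
#include <stddef.h>

/* Read mono input at `pitch` times the normal rate using linear
   interpolation. pitch > 1 raises the pitch (and shortens playback);
   pitch < 1 lowers it. Returns the number of output frames written. */
size_t resample_pitch(const float* in, size_t inFrames,
                      float* out, size_t outCap, float pitch) {
    size_t count = 0;
    float pos = 0.0f;
    while (count < outCap) {
        size_t i = (size_t)pos;
        if (i + 1 >= inFrames) break;  /* need two samples to interpolate */
        float frac = pos - (float)i;
        out[count++] = in[i] * (1.0f - frac) + in[i + 1] * frac;
        pos += pitch;
    }
    return count;
}
```

A pitch of 2.0 reads the input twice as fast, so the sound plays an octave higher in half the time.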
* The engine will now auto-start by default. This can be changed in
the config by setting `noAutoStart` to true.
* Initial implementation of ma_engine_play_sound() which can be used
for fire-and-forget playback of sounds.
* Add ma_engine_sound_at_end() for querying whether or not a sound
has reached the end. The at-end flag is set atomically and
locklessly in the mixing thread.
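The atomic at-end flag amounts to a release store in the mixing thread paired with an acquire load on whichever thread polls it. A minimal sketch of that pattern (names are illustrative):

```c
#include <stdatomic.h>
#include <stdbool.h>

typedef struct {
    atomic_bool atEnd;
} sound_state;

/* Called from the mixing thread after the data source runs out of frames. */
void sound_mark_at_end(sound_state* s) {
    atomic_store_explicit(&s->atEnd, true, memory_order_release);
}

/* Callable from any thread, no lock required. */
bool sound_at_end(sound_state* s) {
    return atomic_load_explicit(&s->atEnd, memory_order_acquire);
}
```

The release/acquire pairing ensures that anything the mixer wrote before setting the flag is visible to a thread that observes it as true.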