* ma_data_source_get_cursor_in_pcm_frames()
* ma_data_source_get_length_in_pcm_frames()
When a data source has no notion of a cursor or length, these functions
return MA_NOT_IMPLEMENTED to let the caller know. The same code is
returned when a custom data source leaves these functions unimplemented.
ma_decoder, ma_audio_buffer, ma_waveform and ma_noise have all been
updated to support these new functions.
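A minimal sketch of the pattern a custom data source follows. The result codes and types below are modeled on miniaudio's (the numeric value of MA_NOT_IMPLEMENTED here is illustrative, as are the `my_stream` names):

```c
#include <assert.h>
#include <stddef.h>

/* Result codes modeled on miniaudio's; the numeric values here are
   illustrative, not the library's. */
#define MA_SUCCESS          0
#define MA_NOT_IMPLEMENTED -29

typedef unsigned long long ma_uint64;

/* A custom data source with a cursor but no knowable length,
   e.g. a live capture stream. */
typedef struct { ma_uint64 cursor; } my_stream;

static int my_stream_get_cursor(my_stream* p, ma_uint64* pCursor) {
    *pCursor = p->cursor;
    return MA_SUCCESS;
}

static int my_stream_get_length(my_stream* p, ma_uint64* pLength) {
    (void)p; (void)pLength;
    return MA_NOT_IMPLEMENTED;  /* No notion of length: report it, don't fake 0. */
}
```

Callers should treat MA_NOT_IMPLEMENTED from these functions as "unknown" rather than as a hard failure.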
This adds support for having a sound fade in when it is started and
fade out when it is stopped.
This commit does not yet include support for fading out when the sound
approaches the end - it only fades out when explicitly stopped with
ma_sound_stop().
The fade time is set in milliseconds.
This commit includes a new effect called ma_fader, but it currently
only supports f32 formats. Support for other formats will be added in
the future.
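A sketch of what an ma_fader-style effect does for f32 samples: interpolate a gain across the fade length, tracking a running cursor so the fade can span multiple process calls. The names and struct layout below are illustrative, not miniaudio's actual API; a millisecond fade time converts to frames as `ms * sampleRate / 1000`.

```c
#include <assert.h>
#include <math.h>

typedef struct {
    float volumeBeg, volumeEnd;
    unsigned long long lengthInFrames;   /* = fadeMilliseconds * sampleRate / 1000 */
    unsigned long long cursorInFrames;   /* runs across multiple process calls */
} fader;

static void fader_process_f32(fader* f, float* pFrames, unsigned long long frameCount, int channels) {
    unsigned long long iFrame;
    int iChannel;
    for (iFrame = 0; iFrame < frameCount; iFrame += 1) {
        float t = 1.0f;  /* past the end of the fade: hold the final volume */
        if (f->cursorInFrames < f->lengthInFrames) {
            t = (float)f->cursorInFrames / (float)f->lengthInFrames;
        }
        float gain = f->volumeBeg + t * (f->volumeEnd - f->volumeBeg);
        for (iChannel = 0; iChannel < channels; iChannel += 1) {
            pFrames[iFrame * (unsigned long long)channels + iChannel] *= gain;
        }
        f->cursorInFrames += 1;
    }
}
```

A fade-in is volumeBeg = 0 and volumeEnd = 1; a stop-triggered fade-out is the reverse.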
The ma_async_notification object is used for notifying the application
that an asynchronous operation has completed.
Custom notifications can be implemented by implementing the callback in
ma_async_notification_callbacks. There is currently only a single
callback called onSignal which is fired when the operation completes. A
helper notification which wraps around an ma_event object called
ma_async_notification_event is implemented which you can use as an
example for building your own notifications.
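The pattern boils down to a callback table whose onSignal entry is fired on completion. The types below are an illustrative sketch of that pattern, not miniaudio's actual definitions:

```c
#include <assert.h>

/* Single-callback table, mirroring the idea of
   ma_async_notification_callbacks. */
typedef struct { void (*onSignal)(void* pNotification); } notification_callbacks;

/* A custom notification. The callback table is the first member so the
   object can be passed around as a generic notification pointer. */
typedef struct {
    notification_callbacks cb;
    int signalled;
} flag_notification;

static void flag_notification_on_signal(void* pNotification) {
    ((flag_notification*)pNotification)->signalled = 1;
}

/* What the async system does when the operation completes. */
static void notification_signal(void* pNotification) {
    ((notification_callbacks*)pNotification)->onSignal(pNotification);
}
```

ma_async_notification_event follows the same shape, with onSignal setting an ma_event that the application can wait on.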
* The data buffers and data streams are now first class data sources.
* The ma_resource_manager_data_source object is now just a simple
wrapper around ma_resource_manager_data_buffer and
ma_resource_manager_data_stream.
* Unnecessary pResourceManager parameters have been removed.
* The part of the data buffer that's added to the BST has been split
out from the main data buffer object so that the main object can be
owned by the caller.
* Add ma_resource_manager_data_source_get_available_frames() which is
used to retrieve the number of frames that can be read at the time
of calling. This is useful in asynchronous scenarios.
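The idea behind get_available_frames() can be sketched as follows: while a sound is still decoding asynchronously, the number of frames safe to read right now is whatever has been decoded beyond the read cursor. All names here are illustrative:

```c
#include <assert.h>

typedef struct {
    unsigned long long decodedFrameCount; /* advanced by the decoding job   */
    unsigned long long readCursor;        /* advanced by the reading thread */
} async_buffer;

static unsigned long long get_available_frames(const async_buffer* p) {
    if (p->decodedFrameCount <= p->readCursor) {
        return 0;  /* The reader has caught up with the decoder. */
    }
    return p->decodedFrameCount - p->readCursor;
}
```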
This commit changes synchronous decoding so that the calling thread is
the one which performs the decoding. Previously, decoding was done on
the job threads which was then waited on by an event on the calling
thread. The rationale for this design was to keep decoding on a single
code path; however, this creates a problem for programs that would
prefer not to have any asynchronous job threads. In this case, these
synchronously decoded sounds would never get decoded because there
would not be any threads available to actually perform the decoding.
This commit allows the resource manager to be used without
a job thread so long as asynchronous decoding and streaming are not
used. This scenario could be useful for programs that want to pre-load
all of their sounds at load time and save some system resources by not
incurring the overhead of an additional unnecessary thread.
This new system is used for asynchronous decoding of sound data. The
main improvement with this one over the old one is the ability to do
multi-producer, multi-consumer lock-free posting of messages which
means multiple threads can be used to process jobs simultaneously
rather than a single thread processing all jobs serially.
Decoding is inherently serial which means multiple job threads is only
useful when decoding multiple sounds. Each individual sound will be
decoded serially.
Another change with this commit is the ability for applications to
control whether or not the resource manager manages its own job
threads. This is useful if an application wants to manage the job queue
itself, for example to integrate it more closely with its existing job
system.
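For an application-driven setup, the flow reduces to posting jobs from producers and draining them from worker threads. A minimal single-threaded sketch of that post/drain flow (the real queue is lock-free and multi-producer/multi-consumer; a mutex-free ring buffer and all names below are invented for illustration):

```c
#include <assert.h>

#define JOB_QUEUE_CAP 16

typedef struct { int code; } job;

typedef struct {
    job jobs[JOB_QUEUE_CAP];
    int head, tail;  /* FIFO ring buffer indices */
} job_queue;

/* Producer side: post a job. Returns -1 if the queue is full. */
static int job_queue_post(job_queue* q, job j) {
    int next = (q->tail + 1) % JOB_QUEUE_CAP;
    if (next == q->head) return -1;
    q->jobs[q->tail] = j;
    q->tail = next;
    return 0;
}

/* Consumer side: fetch the next job. Returns -1 if the queue is empty. */
static int job_queue_next(job_queue* q, job* pJob) {
    if (q->head == q->tail) return -1;
    *pJob = q->jobs[q->head];
    q->head = (q->head + 1) % JOB_QUEUE_CAP;
    return 0;
}
```

An application-owned worker thread would loop on job_queue_next() and dispatch on the job code; several such workers can drain the same queue when the real lock-free implementation is used.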
The thread priority can be set via ma_thread_create() and can also be
set in the context config, alongside the option for configuring the
size of the audio thread's stack.
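A configuration fragment showing the idea, assuming the context config exposes threadPriority and threadStackSize fields as in miniaudio's headers; verify the field names against the version you are building against:

```c
/* Fragment, not a complete program. Requires miniaudio. */
ma_context_config contextConfig = ma_context_config_init();
contextConfig.threadPriority  = ma_thread_priority_highest;
contextConfig.threadStackSize = 0;  /* 0 = use the system default. */
```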
A streaming data source keeps in memory only two pages of audio data
and dynamically loads data from a background thread. It is essentially
a double buffering system - as one page is playing, the other is being
loaded by the async thread.
The size of a single page is defined by the following macro:
MA_RESOURCE_MANAGER_PAGE_SIZE_IN_MILLISECONDS
By default this is currently set to 1 second of audio data. This means
each page has 1 second to load which should be plenty of time. If you
need additional time, the only way to get it is to increase the size of
the page by changing the value of the above macro.
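Redefining the macro before including miniaudio changes the page size. The standalone sketch below only shows how the millisecond value translates to frames per page; the macro value mirrors the documented default of 1 second:

```c
#include <assert.h>

#define MA_RESOURCE_MANAGER_PAGE_SIZE_IN_MILLISECONDS 1000  /* documented default */

/* One page's worth of frames at a given sample rate. */
static unsigned long long page_size_in_frames(unsigned int sampleRate) {
    return (unsigned long long)sampleRate * MA_RESOURCE_MANAGER_PAGE_SIZE_IN_MILLISECONDS / 1000;
}
```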
* ma_resource_manager_uninit() has been implemented.
* Bug fixes in inserting and removing data buffers from the BST.
* Some old experimental code has been removed.
* Minor whitespace clean up.
This enables early playback of the sound while the remainder of the
sound is loaded in the background. When the first page is loaded, the
sound can start playback. While it's playing, the rest of the sound is
loaded in a background thread. In addition, a sound no longer needs to
wait for every sound ahead of it in the queue to fully decode before it
can start - it only needs to wait for the first page of each of the
queued sounds to decode. This enables much fairer prioritization of
asynchronously loaded sounds.
This paged decoding system is *not* a true streaming solution for long
sounds. Support for true streaming will be added in future commits.
This commit is only concerned with filling in-memory buffers containing
the whole sound in an asynchronous manner.
* Early work on asynchronously decoding into a memory buffer. This is
just an early implementation - there are still issues needing to be
figured out. In particular, sounds do not automatically start until
the entire file has been decoded. It would be good if they could
start as soon as the first second or so of data has been decoded.
* Implement the notion of a virtual file system (VFS) which is used
by the resource manager for loading sound files. The idea is that
the application can implement these to support loading from custom
packages, archives, etc.
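The VFS idea is that the resource manager only ever sees a callback table, so an application can route "file" reads to an archive or an in-memory blob. The interface below is a simplified sketch; miniaudio's actual ma_vfs has more callbacks (open/close/seek/tell/info) and different signatures:

```c
#include <assert.h>
#include <string.h>

typedef struct vfs vfs;
struct vfs {
    int    (*onOpen)(vfs* pVFS, const char* pPath, void** ppFile);
    size_t (*onRead)(vfs* pVFS, void* pFile, void* pDst, size_t bytes);
};

/* A "package" containing a single in-memory file. */
typedef struct { const char* name; const unsigned char* data; size_t size; size_t pos; } mem_file;
typedef struct { vfs base; mem_file file; } mem_vfs;  /* base first: cast is safe */

static int mem_open(vfs* pVFS, const char* pPath, void** ppFile) {
    mem_vfs* p = (mem_vfs*)pVFS;
    if (strcmp(pPath, p->file.name) != 0) return -1;  /* not in this package */
    p->file.pos = 0;
    *ppFile = &p->file;
    return 0;
}

static size_t mem_read(vfs* pVFS, void* pFile, void* pDst, size_t bytes) {
    mem_file* f = (mem_file*)pFile;
    (void)pVFS;
    size_t remaining = f->size - f->pos;
    if (bytes > remaining) bytes = remaining;
    memcpy(pDst, f->data + f->pos, bytes);
    f->pos += bytes;
    return bytes;
}
```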
* Add a helper API for decoding a file from a VFS and a file name.
* Add some symbols representing allocation types. These are not
currently used, but I've added them in preparation for changes to
the allocation callbacks. The idea is that an allocation type will
be passed to the callbacks to give the allocator better intel as to
what it's allocating which will give it a chance to optimize.
* Add some placeholders for flags for controlling how to load a data
source. Currently only MA_DATA_SOURCE_FLAG_DECODE is implemented
which is used to indicate to the resource manager that it should
store the decoded contents of the sound file in memory rather than
the raw (encoded) file data.
* Support has been added to the resource manager to load audio data
into memory rather than naively reading straight from disk. This
eliminates file IO from the audio thread, but comes at the expense
of extra memory usage. Support for streaming is not implemented as
of this commit. Early (largely untested) work has been implemented
to avoid loading sound files multiple times. This is a simple ref
count system for now, with hashed file paths being used for the
key into a binary search tree. The BST is not fully tested and
likely has bugs which will be ironed out in future commits.
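The ref-counting scheme can be sketched as follows: the file path is hashed, the hash keys a BST node, and acquiring an already-loaded file bumps a ref count instead of decoding again. The hash function and structures below are illustrative, not the resource manager's actual code:

```c
#include <assert.h>
#include <stdlib.h>

typedef struct node node;
struct node { unsigned int key; int refCount; node* left; node* right; };

/* FNV-1a, used here purely for illustration. */
static unsigned int hash_path(const char* p) {
    unsigned int h = 2166136261u;
    for (; *p != '\0'; p += 1) { h ^= (unsigned char)*p; h *= 16777619u; }
    return h;
}

/* Find-or-insert: an existing node gets its ref count bumped; a new node
   is where the (one and only) decode of that file would happen. Error
   handling (calloc failure, hash collisions) is omitted for brevity. */
static node* acquire(node** pRoot, unsigned int key) {
    while (*pRoot != NULL) {
        if (key == (*pRoot)->key) { (*pRoot)->refCount += 1; return *pRoot; }
        pRoot = (key < (*pRoot)->key) ? &(*pRoot)->left : &(*pRoot)->right;
    }
    *pRoot = (node*)calloc(1, sizeof(node));
    (*pRoot)->key = key;
    (*pRoot)->refCount = 1;
    return *pRoot;
}
```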
* Support has been added for configuring the stereo pan effect. Most
audio engines use a simple balancing technique to implement the
pan effect, but a true pan should "move" one side to the other
rather than just simply making one side quieter. With this commit,
the ma_panner effect can support both modes. The default mode will
be set to ma_pan_mode_balance which is just a simple balancing and
is consistent with most other audio engines. A true pan can be used
by setting the mode to ma_pan_mode_pan.
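The difference between the two modes for a stereo frame can be sketched like this: in balance mode, panning left simply attenuates the right channel; in pan mode, the attenuated portion is moved into the other channel instead of being discarded. The exact formulas in ma_panner may differ; this only illustrates the idea:

```c
#include <assert.h>
#include <math.h>

/* pan is in [-1, 1]; truePan selects pan mode over balance mode. */
static void apply_pan(float pan, int truePan, float* pLeft, float* pRight) {
    if (pan < 0) {                 /* toward the left */
        float moved = *pRight * -pan;
        *pRight -= moved;
        if (truePan) *pLeft += moved;   /* "move" rather than just attenuate */
    } else if (pan > 0) {          /* toward the right */
        float moved = *pLeft * pan;
        *pLeft -= moved;
        if (truePan) *pRight += moved;
    }
}
```

At pan = -1, balance mode leaves the left channel untouched and silences the right, while pan mode folds the entire right channel into the left.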