This saves a mandatory call to ma_audio_buffer_ref_set_data(). With
this change, an ma_audio_buffer_ref_init() call is all that is required
to initialize a usable data source.
This is a data source whose backing data is an application-controlled
pointer. No data is copied. It's a way of efficiently wrapping a raw
buffer and using it as a data source.
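The shape of this pattern can be sketched with a hypothetical miniature (names are illustrative, not miniaudio's actual implementation): a ref-style data source stores only a pointer and a read cursor, so a single init call produces a usable data source with no allocation and no copy of the backing data.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical miniature of a ref-style data source. It wraps an
   application-owned float buffer without copying it. */
typedef struct {
    const float* pData;     /* application-controlled; not owned */
    size_t sizeInFrames;
    size_t cursor;
    unsigned int channels;
} buffer_ref;

static void buffer_ref_init(buffer_ref* pRef, const float* pData,
                            size_t sizeInFrames, unsigned int channels)
{
    /* One call fully initializes a usable data source; no allocation,
       no copy of the backing data. */
    pRef->pData        = pData;
    pRef->sizeInFrames = sizeInFrames;
    pRef->cursor       = 0;
    pRef->channels     = channels;
}

static size_t buffer_ref_read(buffer_ref* pRef, float* pFramesOut,
                              size_t frameCount)
{
    size_t framesAvailable = pRef->sizeInFrames - pRef->cursor;
    size_t framesToRead    = (frameCount < framesAvailable) ? frameCount : framesAvailable;
    memcpy(pFramesOut, pRef->pData + pRef->cursor * pRef->channels,
           framesToRead * pRef->channels * sizeof(float));
    pRef->cursor += framesToRead;
    return framesToRead;
}
```

Only the frames actually requested are copied out at read time; the buffer itself is never duplicated, which is what makes this an efficient wrapper.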
This is useful for retrieving information about some aspect of the
node. A good example is human-readable names associated with the node
and its input and output buses. This is useful for user interfaces
where a brief description of the node, such as "Low Pass Filter", can
be drawn on the screen. It is also useful for naming buses, such as
the source/carrier and excite/modulator buses on a vocoder effect,
which would also need to be visible in a UI.
The following flags can now be associated with nodes via the vtable:
* MA_NODE_FLAG_PASSTHROUGH
* MA_NODE_FLAG_CONTINUOUS_PROCESSING
* MA_NODE_FLAG_ALLOW_NULL_INPUT
* MA_NODE_FLAG_DIFFERENT_PROCESSING_RATES
See commit changes for a description of these flags.
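The mechanism can be illustrated with a hypothetical, stripped-down vtable (flag names and struct layout here are illustrative only; real miniaudio vtables also carry processing callbacks and bus counts). The node graph can inspect the flags to special-case a node, for example skipping work entirely for a passthrough:

```c
/* Hypothetical flag values and vtable; illustrative only. */
#define NODE_FLAG_PASSTHROUGH           (1u << 0)
#define NODE_FLAG_CONTINUOUS_PROCESSING (1u << 1)
#define NODE_FLAG_ALLOW_NULL_INPUT      (1u << 2)

typedef struct {
    unsigned int flags;  /* combination of NODE_FLAG_* */
    /* ... processing callbacks, bus counts, etc. ... */
} node_vtable;

/* The graph checks the vtable, not the node instance, so the flag is a
   property of the node *type*. */
static int node_is_passthrough(const node_vtable* pVTable)
{
    return (pVTable->flags & NODE_FLAG_PASSTHROUGH) != 0;
}
```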
* The simplified callback has been removed.
* The `globalTime` parameter has been removed from the callback.
* The order of input and output frames and counts has been swapped to
be consistent with ma_data_converter_process_pcm_frames(), etc.
This enables a program to explicitly enable only the backends it is
interested in. Essentially it's the reverse of the pre-existing
method: instead of disabling specific backends, all backends are
disabled by default and then the desired ones are enabled. Example:
#define MA_ENABLE_ONLY_SPECIFIC_BACKENDS
#define MA_ENABLE_WASAPI /* Only care about WASAPI on Windows. */
#define MA_ENABLE_ALSA /* Only care about ALSA on Linux. */
Note that even if MA_ENABLE_* is used, the backend will still only be
enabled if the compilation environment and target platform actually
supports it. You can therefore use the MA_ENABLE_* options without
needing to worry about platform detection.
Public issue https://github.com/mackron/miniaudio/issues/260
Variables are marked with this annotation to make it clear that access
to the variable should be done through atomics.
I've also reviewed the use of volatile in this commit.
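The intent of such an annotation can be sketched as follows. Note that miniaudio itself targets C89 and uses its own atomic helpers; this sketch uses C11 <stdatomic.h> purely for brevity, and the annotation name is hypothetical:

```c
#include <stdatomic.h>

/* Hypothetical annotation: expands to nothing, but documents that the
   variable must only ever be accessed through atomic operations. */
#define MY_ATOMIC /* access via atomic_* only */

typedef struct {
    MY_ATOMIC atomic_uint state;  /* 0 = stopped, 1 = started */
} device_like;

static void device_start(device_like* p)
{
    atomic_store(&p->state, 1u);  /* never a plain write */
}

static unsigned int device_state(device_like* p)
{
    return atomic_load(&p->state);  /* never a plain read */
}
```

The annotation carries no behavior of its own; its value is that a reader (or reviewer) can immediately see which variables are shared across threads.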
Public issue https://github.com/mackron/miniaudio/issues/259
This was setting the previous pointer of newly attached nodes to NULL
instead of a pointer to the dummy head node. This then results in the
dummy head node never being updated when the node is detached which in
turn results in an uninitialized node being dereferenced.
This would try reading from an uninitialized buffer thereby resulting
in a bad audio glitch. This does two things to fix the problem:
1) When there are no input nodes attached to an input bus, nothing is
read and 0 will be returned for the frames read variable.
2) The buffer is silenced by default.
This fixes a bug with ma_engine where it would glitch in the moment
just after the engine is initialized and before a sound or group is
attached.
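The two fixes can be sketched in isolation (function and parameter names are hypothetical; this is the shape of the fix, not miniaudio's actual code): the output buffer is silenced up front, and when no inputs are attached the read reports zero frames instead of touching uninitialized memory.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical input-bus read illustrating the two fixes. */
static size_t read_input_bus(int attachedInputCount, float* pFramesOut,
                             size_t frameCount, unsigned int channels)
{
    /* Fix 2: silence the buffer by default so callers never see garbage. */
    memset(pFramesOut, 0, frameCount * channels * sizeof(float));

    /* Fix 1: with no attached inputs, read nothing and report 0 frames. */
    if (attachedInputCount == 0) {
        return 0;
    }

    /* ... mix the attached inputs into pFramesOut here ... */
    return frameCount;
}
```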
This provides an optimization by allowing processing to bypass the
resampler. Audio data normally needs to pass through the resampler
even when pitch=1 because the resampler must keep its internal buffers
up to date; if it didn't, a glitch would occur when the pitch later
moved away from 1.
In practice most sounds won't require individual pitch control;
however, in the interest of being consistent with miniaudio's
philosophy of things "Just Working", pitching is enabled by default.
Pitching can be
disabled with MA_SOUND_FLAG_DISABLE_PITCH in ma_sound_init_*() and
ma_sound_group_init().
This is the first step towards decoupling the ma_effect API from the
engine for the eventual removal.
This also fixes bugs regarding channel conversion when processing an
engine node.
This is mainly for consistency with the node API, but also because it
more clearly indicates that it's an absolute time rather than a delay
which sounds more like a relative time.
This changes ma_sound_set_start/stop_delay() to take an absolute time
in frames based on the global clock. Previously these took a relative
time in milliseconds. To use a relative time, add it to the value
returned by ma_engine_get_time(). To use milliseconds, use a standard
sample rate to milliseconds conversion.
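Under the new semantics both conversions are simple arithmetic. This sketch shows them together; `currentTimeInFrames` stands in for the value returned by ma_engine_get_time(), since the point here is only the math:

```c
#include <stdint.h>

/* Convert a relative delay in milliseconds to an absolute time in PCM
   frames on the engine's global clock. currentTimeInFrames stands in
   for the value returned by ma_engine_get_time(). */
static uint64_t absolute_start_time(uint64_t currentTimeInFrames,
                                    uint64_t delayInMilliseconds,
                                    uint32_t sampleRate)
{
    /* Milliseconds -> frames using a standard sample-rate conversion. */
    uint64_t delayInFrames = (delayInMilliseconds * sampleRate) / 1000;

    /* Relative -> absolute by adding the current global time. */
    return currentTimeInFrames + delayInFrames;
}
```

The same arithmetic applies to stop times.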
These changes are in preparation for fixing some issues relating to
retrieval of channel counts from data sources. The problem relates to
the asynchronous nature of the resource manager: a data source may
still be in the middle of loading when a sound is being initialized,
in which case its channel count is not yet available. The channel count
is necessary in order for the engine to be able to convert the data
source to the channel count of the final output.
This is needed for scheduling frame-exact starting and stopping of
nodes, in addition to any kind of time-based effects required by custom
nodes, such as fading.