Usage: simple_mixing [input file 0] [input file 1] ... [input file n]
diff --git a/docs/examples/simple_spatialization.html b/docs/examples/simple_spatialization.html
index 79d7a015..714dd4ec 100644
--- a/docs/examples/simple_spatialization.html
+++ b/docs/examples/simple_spatialization.html
@@ -326,7 +326,7 @@ orbiting effect. Terminate the program with Ctrl+C.
/* Rotate the listener on the spot to create an orbiting effect. */
for (;;) {
listenerAngle += 0.01f;
- ma_engine_listener_set_direction(&engine, 0, sinf(listenerAngle), 0, cosf(listenerAngle));
+ ma_engine_listener_set_direction(&engine, 0, (float)sin(listenerAngle), 0, (float)cos(listenerAngle));
ma_sleep(1);
}
diff --git a/docs/manual/index.html b/docs/manual/index.html
index a767842c..be40c986 100644
--- a/docs/manual/index.html
+++ b/docs/manual/index.html
@@ -250,30 +250,15 @@ a.doc-navigation-l4 {
|
-
-
-
1. Introduction
-To use miniaudio, include "miniaudio.h":
+To use miniaudio, just include "miniaudio.h" like any other header and add "miniaudio.c" to your
+source tree. If you don't want to add it to your source tree you can compile and link to it like
+any other library. Note that ABI compatibility is not guaranteed between versions, even with bug
+fix releases, so take care if compiling as a shared object.
-
-
-#include "miniaudio.h"
-
-
-The implementation is contained in "miniaudio.c". Just compile this like any other source file. You
-can include miniaudio.c if you want to compile your project as a single translation unit:
-
-
-
-
-
-#include "miniaudio.c"
-
-
miniaudio includes both low level and high level APIs. The low level API is good for those who want
to do all of their mixing themselves and only require a lightweight interface to the underlying
audio device. The high level API is good for those who have complex mixing and effect requirements.
@@ -295,12 +280,6 @@ object and pass that into the initialization routine. The advantage to this syst
config object can be initialized with logical defaults and new properties added to it without
breaking the API. The config object can be allocated on the stack and does not need to be
maintained after initialization of the corresponding object.
-
-
-
-
-
-
1.1. Low Level API
@@ -327,9 +306,6 @@ Once you have the device configuration set up you can initialize the device. Whe
device you need to allocate memory for the device object beforehand. This gives the application
complete control over how the memory is allocated. In the example below we initialize a playback
device on the stack, but you could allocate it on the heap if that suits your situation better.
-
-
-
void data_callback(ma_device* pDevice, void* pOutput, const void* pInput, ma_uint32 frameCount)
@@ -385,9 +361,6 @@ uses a fairly simple and standard device configuration. The config.playback.format member
sets the sample format which can be one of the following (all formats are native-endian):
-
-
-
@@ -486,9 +459,6 @@ Note that it's important to never stop or start the device from inside the c
result in a deadlock. Instead you set a variable or signal an event indicating that the device
needs to stop and handle it in a different thread. The following APIs must never be called inside
the callback:
-
-
-
ma_device_init()
@@ -508,9 +478,6 @@ thing, but rather a real-time processing thing which is beyond the scope of this
The example above demonstrates the initialization of a playback device, but it works exactly the
same for capture. All you need to do is change the device type from ma_device_type_playback to
ma_device_type_capture when setting up the config, like so:
-
-
-
ma_device_config config = ma_device_config_init(ma_device_type_capture);
@@ -525,9 +492,6 @@ the output buffer alone (it will be set to NULL when the device type is set to
These are the available device types and how you should handle the buffers in the callback:
-
-
-
@@ -583,9 +547,6 @@ explained later.
The example above did not specify a physical device to connect to which means it will use the
operating system's default device. If you have multiple physical devices connected and you want to
use a specific one you will need to specify the device ID in the configuration, like so:
-
-
-
config.playback.pDeviceID = pMyPlaybackDeviceID; // Only if requesting a playback or duplex device.
@@ -598,9 +559,6 @@ There is one context to many devices. The purpose of the context is to represent
more global level and to perform operations outside the scope of an individual device. Mainly it is
used for performing run-time linking against backend libraries, initializing backends and
enumerating devices. The example below shows how to enumerate devices.
-
-
-
ma_context context;
@@ -674,12 +632,6 @@ context for you, which you don't want to do since you've already created
internally the context is only tracked by its pointer which means you must not change the location
of the ma_context object. If this is an issue, consider using malloc() to allocate memory for
the context.
-
-
-
-
-
-
1.2. High Level API
@@ -715,9 +667,6 @@ this manual.
The code below shows how you can initialize an engine using its default configuration.
-
-
-
ma_result result;
@@ -743,9 +692,6 @@ to be mindful of how you declare them. In the example above we are declaring it
this will result in the struct being invalidated once the function encapsulating it returns. If
allocating the engine on the heap is more appropriate, you can easily do so with a standard call
to malloc() or whatever heap allocation routine you like:
-
-
-
ma_engine* pEngine = malloc(sizeof(*pEngine));
@@ -754,9 +700,6 @@ ma_engine* pEngine = malloc(sizeof(*pEngine))
The ma_engine API uses the same config/init pattern used all throughout miniaudio. To configure
an engine, you can fill out a ma_engine_config object and pass it into the first parameter of
ma_engine_init():
-
-
-
ma_result result;
@@ -784,9 +727,6 @@ The engine must be uninitialized with ma_en
By default the engine will be started, but nothing will be playing because no sounds have been
initialized. The easiest but least flexible way of playing a sound is like so:
-
-
-
ma_engine_play_sound(&engine, "my_sound.wav", NULL);
@@ -797,9 +737,6 @@ internal sound up for recycling. The last parameter is used to specify which sou
should be associated with which will be explained later. This particular way of playing a sound is
simple, but lacks flexibility and features. A more flexible way of playing a sound is to first
initialize a sound:
-
-
-
ma_result result;
@@ -828,9 +765,6 @@ Sounds are not started by default. Start a sound with ma_sound_start() and stop it with ma_sound_stop(). Use ma_sound_seek_to_pcm_frame(&sound, 0) to seek back to the start of a sound. By default, starting
and stopping sounds happens immediately, but sometimes it might be convenient to schedule the sound
to be started and/or stopped at a specific time. This can be done with the following functions:
-
-
-
ma_sound_set_start_time_in_pcm_frames()
@@ -844,9 +778,6 @@ engine. The current global time in PCM frames can be retrieved with
ma_engine_get_time_in_pcm_frames(). The engine's global time can be changed with
ma_engine_set_time_in_pcm_frames() for synchronization purposes if required. Note that scheduling
a start time still requires an explicit call to ma_sound_start() before anything will play:
-
-
-
ma_sound_set_start_time_in_pcm_frames(&sound, ma_engine_get_time_in_pcm_frames(&engine) + (ma_engine_get_sample_rate(&engine) * 2));
@@ -917,15 +848,6 @@ Sounds can be faded in and out with ma_soun
To check if a sound is currently playing, you can use ma_sound_is_playing(). To check if a sound
is at the end, use ma_sound_at_end(). Looping of a sound can be controlled with
ma_sound_set_looping(). Use ma_sound_is_looping() to check whether or not the sound is looping.
-
-
-
-
-
-
-
-
-
2. Building
@@ -934,18 +856,19 @@ dependencies. See below for platform-specific details.
+This library has been designed to be added directly to your source tree which is the preferred way
+of using it, but you can compile it as a normal library if that's your preference. Be careful if
+compiling as a shared object because miniaudio is not ABI compatible between releases, including
+bug fix releases. It's recommended you link statically.
+
+
+
Note that GCC and Clang require -msse2, -mavx2, etc. for SIMD optimizations.
If you get errors about undefined references to __sync_val_compare_and_swap_8, __atomic_load_8,
etc. you need to link with -latomic.
-
-
-
-
-
-
2.1. Windows
@@ -956,12 +879,6 @@ include paths nor link to any libraries.
The UWP build may require linking to mmdevapi.lib if you get errors about an unresolved external
symbol for ActivateAudioInterfaceAsync().
-
-
-
-
-
-
2.2. macOS and iOS
@@ -979,9 +896,6 @@ notarization process. To fix this there are two options. The first is to compile
AudioToolbox, try with -framework AudioUnit instead. You may get this when using older versions
of iOS. Alternatively, if you would rather keep using runtime linking you can add the following to
your entitlements.xcent file:
-
-
-
<key>com.apple.security.cs.allow-dyld-environment-variables</key>
@@ -991,35 +905,17 @@ your entitlements.xcent file:
See this discussion for more info: https://github.com/mackron/miniaudio/issues/203.
-
-
-
-
-
-
2.3. Linux
The Linux build only requires linking to -ldl, -lpthread and -lm. You do not need any
development packages. You may need to link with -latomic if you're compiling for 32-bit ARM.
-
-
-
-
-
-
2.4. BSD
The BSD build only requires linking to -lpthread and -lm. NetBSD uses audio(4), OpenBSD uses
sndio and FreeBSD uses OSS. You may need to link with -latomic if you're compiling for 32-bit
ARM.
-
-
-
-
-
-
2.5. Android
@@ -1032,12 +928,6 @@ versions will fall back to OpenSL|ES which requires API level 16+.
There have been reports that the OpenSL|ES backend fails to initialize on some Android based
devices due to dlopen() failing to open "libOpenSLES.so". If this happens on your platform
you'll need to disable run-time linking with MA_NO_RUNTIME_LINKING and link with -lOpenSLES.
-
-
-
-
-
-
2.6. Emscripten
@@ -1068,22 +958,10 @@ To run locally, you'll need to use emrun:
emrun bin/program.html
-
-
-
-
-
-
-
-
-
2.7. Build Options
#define these options before including miniaudio.c, or pass them as compiler flags:
-
-
-
@@ -1416,7 +1294,6 @@ Disables the built-in MP3 decoder.
|
MA_NO_DEVICE_IO
-
|
@@ -1428,33 +1305,11 @@ miniaudio's data conversion and/or decoding APIs.
|
MA_NO_RESOURCE_MANAGER
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
|
Disables the resource manager. When using the engine this will
also disable the following functions:
-
ma_sound_init_from_file()
@@ -1463,6 +1318,7 @@ ma_sound_init_copy()
ma_engine_play_sound_ex()
ma_engine_play_sound()
+
The only way to initialize a ma_sound object is to initialize it
from a data source.
|
@@ -1488,19 +1344,6 @@ Disables the engine API.
MA_NO_THREADING
-
-
-
-
-
-
-
-
-
-
-
-
-
|
@@ -1509,7 +1352,6 @@ Disables the ma_thread,
MA_NO_DEVICE_IO
@@ -1550,18 +1392,6 @@ Disables NEON optimizations.
MA_NO_RUNTIME_LINKING
-
-
-
-
-
-
-
-
-
-
-
-
|
@@ -1570,6 +1400,8 @@ notarization process. When enabling this, you may need to avoid
using -std=c89 or -std=c99 on Linux builds or else you may end
up with compilation errors due to conflicts with timespec and
timeval data types.
+
+
You may need to enable this if your target platform does not allow
runtime linking via dlopen().
@@ -1606,10 +1438,6 @@ Windows only. The value to pass to internal calls to
|
MA_FORCE_UWP
-
-
-
-
|
@@ -1622,7 +1450,6 @@ needed to be used explicitly, but can be useful for debugging.
|
MA_ON_THREAD_ENTRY
-
|
@@ -1634,7 +1461,6 @@ to be executed by the thread entry point.
|
MA_ON_THREAD_EXIT
-
|
@@ -1661,25 +1487,16 @@ MA_API
Controls how public APIs should be decorated. Default is extern.
|
- |
-
-
-3. Definitions
+ |
3. Definitions
This section defines common terms used throughout miniaudio. Unfortunately there is often ambiguity
in the use of terms throughout the audio space, so this section is intended to clarify how miniaudio
uses each term.
-
-
-
3.1. Sample
A sample is a single unit of audio data. If the sample format is f32, then one sample is one 32-bit
floating point number.
-
-
-
3.2. Frame / PCM Frame
@@ -1688,9 +1505,6 @@ samples, a mono frame is 1 sample, a 5.1 surround sound frame is 6 samples, etc.
and "PCM frame" are the same thing in miniaudio. Note that this is different to a compressed frame.
If ever miniaudio needs to refer to a compressed frame, such as a FLAC frame, it will always
clarify what it's referring to with something like "FLAC frame".
-
-
-
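To make the arithmetic concrete: a frame's size in bytes is simply bytes-per-sample multiplied by the channel count. The helper below is a hypothetical illustration of that definition, not a miniaudio API:

```c
#include <stddef.h>

/* Hypothetical helper: size in bytes of one PCM frame.
   A stereo f32 frame is 4 * 2 = 8 bytes; a 5.1 s16 frame is 2 * 6 = 12 bytes. */
static size_t frame_size_in_bytes(size_t bytesPerSample, size_t channels)
{
    return bytesPerSample * channels;
}
```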
3.3. Channel
@@ -1700,24 +1514,15 @@ left channel, and a right channel), a 5.1 surround sound system has 6 channels,
systems refer to a channel as a complex audio stream that's mixed with other channels to produce
the final mix - this is completely different to miniaudio's use of the term "channel" and should
not be confused.
-
-
-
3.4. Sample Rate
The sample rate in miniaudio is always expressed in Hz, such as 44100, 48000, etc. It's the number
of PCM frames that are processed per second.
-
-
-
Throughout miniaudio you will see references to different sample formats:
-
-
-
@@ -1787,15 +1592,6 @@ ma_format_u8
|
All formats are native-endian.
-
-
-
-
-
-
-
-
-
4. Data Sources
@@ -1811,9 +1607,6 @@ implements the data source interface can be plugged into any
ma_result result;
@@ -1835,9 +1628,6 @@ read is 0.
When calling any data source function, with the exception of ma_data_source_init() and
ma_data_source_uninit(), you can pass in any object that implements a data source. For example,
you could plug in a decoder like so:
-
-
-
ma_result result;
@@ -1856,9 +1646,6 @@ can use ma_data_source_seek_pcm_frames()
To seek to a specific PCM frame:
-
-
-
result = ma_data_source_seek_to_pcm_frame(pDataSource, frameIndex);
@@ -1870,9 +1657,6 @@ result = ma_data_source_seek_to_pcm_frame(pDataSource, frameIndex);
You can retrieve the total length of a data source in PCM frames, but note that some data sources
may not have the notion of a length, such as noise and waveforms, and others may just not have a
way of determining the length such as some decoders. To retrieve the length:
-
-
-
ma_uint64 length;
@@ -1890,9 +1674,6 @@ broadcast. If you do this, ma_data_source_g
The current position of the cursor in PCM frames can also be retrieved:
-
-
-
ma_uint64 cursor;
@@ -1905,9 +1686,6 @@ result = ma_data_source_get_cursor_in_pcm_frames(pDataSource, &cursor);
You will often need to know the data format that will be returned after reading. This can be
retrieved like so:
-
-
-
ma_format format;
@@ -1927,9 +1705,6 @@ If you do not need a specific data format property, just pass in NULL to the res
There may be cases where you want to implement something like a sound bank where you only want to
read data within a certain range of the underlying data. To do this you can use a range:
-
-
-
result = ma_data_source_set_range_in_pcm_frames(pDataSource, rangeBegInFrames, rangeEndInFrames);
@@ -1948,9 +1723,6 @@ the range. When the range is set, any previously defined loop point will be rese
Custom loop points can also be used with data sources. By default, data sources will loop after
they reach the end of the data source, but if you need to loop at a specific location, you can do
the following:
-
-
-
result = ma_data_source_set_loop_point_in_pcm_frames(pDataSource, loopBegInFrames, loopEndInFrames);
@@ -1965,9 +1737,6 @@ The loop point is relative to the current range.
It's sometimes useful to chain data sources together so that a seamless transition can be achieved.
To do this, you can use chaining:
-
-
-
ma_decoder decoder1;
@@ -1995,9 +1764,6 @@ gaps.
Note that when looping is enabled, only the current data source will be looped. You can loop the
entire chain by linking in a loop like so:
-
-
-
ma_data_source_set_next(&decoder1, &decoder2); // decoder1 -> decoder2
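The chaining behaviour described above — when one source is exhausted, reading continues seamlessly from its next pointer — can be sketched with a toy structure. This is purely illustrative and not the miniaudio implementation:

```c
#include <stddef.h>

/* Toy chain of "data sources" in the spirit of ma_data_source_set_next():
   reading continues from pNext once a source is exhausted. */
typedef struct toy_source {
    const float*       pData;
    size_t             length;   /* in samples */
    size_t             cursor;
    struct toy_source* pNext;
} toy_source;

static size_t toy_chain_read(toy_source* pSource, float* pOut, size_t count)
{
    size_t total = 0;
    while (pSource != NULL && total < count) {
        while (pSource->cursor < pSource->length && total < count) {
            pOut[total++] = pSource->pData[pSource->cursor++];
        }
        if (pSource->cursor == pSource->length) {
            pSource = pSource->pNext;  /* seamless transition to the next source */
        }
    }
    return total;
}
```

Linking the last source back to the first is what makes the whole chain loop, as the text notes.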
@@ -2013,20 +1779,11 @@ Do not use ma_decoder_seek_to_pcm_frame()
instances of the same sound simultaneously. This can be extremely inefficient depending on the type
of data source and can result in glitching due to subtle changes to the state of internal filters.
Instead, initialize multiple data sources for each instance.
-
-
-
-
-
-
4.1. Custom Data Sources
You can implement a custom data source by implementing the functions in ma_data_source_vtable.
Your custom object must have ma_data_source_base as its first member:
-
-
-
struct my_data_source
@@ -2038,9 +1795,6 @@ Your custom object must have ma_data_source
In your initialization routine, you need to call ma_data_source_init() in order to set up the
base object (ma_data_source_base):
-
-
-
static ma_result my_data_source_read(ma_data_source* pDataSource, void* pFramesOut, ma_uint64 frameCount, ma_uint64* pFramesRead)
@@ -2107,15 +1861,6 @@ base object (ma_data_source_base):
Note that ma_data_source_init() and ma_data_source_uninit() are never called directly outside
of the custom data source. It's up to the custom data source itself to call these within its own
init/uninit functions.
-
-
-
-
-
-
-
-
-
5. Engine
@@ -2139,9 +1884,6 @@ configured via the engine config.
The most basic way to initialize the engine is with a default config, like so:
-
-
-
ma_result result;
@@ -2156,9 +1898,6 @@ result = ma_engine_init(NULL, &engine);
This will result in the engine initializing a playback device using the operating system's default
device. This will be sufficient for many use cases, but if you need more flexibility you'll want to
configure the engine with an engine config:
-
-
-
ma_result result;
@@ -2177,9 +1916,6 @@ result = ma_engine_init(&engineConfig, &engine);
In the example above we're passing in a pre-initialized device. Since the caller is the one in
control of the device's data callback, it's their responsibility to manually call
ma_engine_read_pcm_frames() from inside their data callback:
-
-
-
void playback_data_callback(ma_device* pDevice, void* pOutput, const void* pInput, ma_uint32 frameCount)
@@ -2189,9 +1925,6 @@ control of the device's data callback, it's their responsibility to manu
You can also use the engine independent of a device entirely:
-
-
-
ma_result result;
@@ -2219,9 +1952,6 @@ processing or want to use a different audio system for playback such as SDL.
When a sound is loaded it goes through a resource manager. By default the engine will initialize a
resource manager internally, but you can also specify a pre-initialized resource manager:
-
-
-
ma_result result;
@@ -2248,9 +1978,6 @@ is using their own set of headphones.
By default an engine will be in a started state. To make it so the engine is not automatically
started you can configure it as such:
-
-
-
engineConfig.noAutoStart = MA_TRUE;
@@ -2276,9 +2003,6 @@ prefer decibel based volume control, use ma
When a sound is spatialized, it is done so relative to a listener. An engine can be configured to
have multiple listeners which can be configured via the config:
-
-
-
engineConfig.listenerCount = 2;
@@ -2290,9 +2014,6 @@ to a specific listener which will be explained later. Listeners have a position, direction
and velocity (for doppler effect). A listener is referenced by an index, the meaning of which is up
to the caller (the index is 0 based and cannot go beyond the listener count, minus 1). The
position, direction and velocity are all specified in absolute terms:
-
-
-
ma_engine_listener_set_position(&engine, listenerIndex, worldPosX, worldPosY, worldPosZ);
@@ -2300,9 +2021,6 @@ ma_engine_listener_set_position(&engine, listenerIndex, worldPosX, worldPosY
The direction of the listener represents its forward vector. The listener's up vector can also be
specified and defaults to +1 on the Y axis.
-
-
-
ma_engine_listener_set_direction(&engine, listenerIndex, forwardX, forwardY, forwardZ);
@@ -2312,9 +2030,6 @@ ma_engine_listener_set_world_up(&engine, listenerIndex, 0, 1, 0);
The engine supports directional attenuation. The listener can have a cone that controls how sound is
attenuated based on the listener's direction. When a sound is between the inner and outer cones, it
will be attenuated between 1 and the cone's outer gain:
-
-
-
ma_engine_listener_set_cone(&engine, listenerIndex, innerAngleInRadians, outerAngleInRadians, outerGain);
@@ -2333,9 +2048,6 @@ positive Y points up and negative Z points forward.
The simplest and least flexible way to play a sound is like so:
-
-
-
ma_engine_play_sound(&engine, "my_sound.wav", pGroup);
@@ -2344,9 +2056,6 @@ ma_engine_play_sound(&engine, "my_sound.wav
This is a "fire and forget" style of function. The engine will manage the ma_sound object
internally. When the sound finishes playing, it'll be put up for recycling. For more flexibility
you'll want to initialize a sound object:
-
-
-
ma_sound sound;
@@ -2364,9 +2073,6 @@ Sounds need to be uninitialized with ma_sou
The example above loads a sound from a file. If the resource manager has been disabled you will not
be able to use this function and instead you'll need to initialize a sound directly from a data
source:
-
-
-
ma_sound sound;
@@ -2384,9 +2090,6 @@ sound multiple times at the same time, you need to initialize a separate ma_sound_init_ex(). This uses miniaudio's
standard config/init pattern:
-
-
-
ma_sound sound;
@@ -2419,9 +2122,6 @@ memory in exactly the same format as how it's stored on the file system. The
allocate a block of memory and then load the file directly into it. When reading audio data, it
will be decoded dynamically on the fly. In order to save processing time on the audio thread, it
might be beneficial to pre-decode the sound. You can do this with the MA_SOUND_FLAG_DECODE flag:
-
-
-
ma_sound_init_from_file(&engine, "my_sound.wav", MA_SOUND_FLAG_DECODE, pGroup, NULL, &sound);
@@ -2430,9 +2130,6 @@ ma_sound_init_from_file(&engine, "my_sound.
By default, sounds will be loaded synchronously, meaning ma_sound_init_*() will not return until
the sound has been fully loaded. If this is prohibitive you can instead load sounds asynchronously
by specifying the MA_SOUND_FLAG_ASYNC flag:
-
-
-
ma_sound_init_from_file(&engine, "my_sound.wav", MA_SOUND_FLAG_DECODE | MA_SOUND_FLAG_ASYNC, pGroup, NULL, &sound);
@@ -2448,9 +2145,6 @@ is specified.
If you need to wait for an asynchronously loaded sound to be fully loaded, you can use a fence. A
fence in miniaudio is a simple synchronization mechanism which blocks until its internal
counter hits zero. You can specify a fence like so:
-
-
-
ma_result result;
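The fence concept — a counter that is incremented when work is queued, decremented when it completes, with waiters blocked until it reaches zero — can be sketched with a mutex and condition variable. This is an illustrative toy, not miniaudio's actual ma_fence implementation:

```c
#include <pthread.h>

/* Toy fence: acquire increments the counter, release decrements it,
   wait blocks until it reaches zero. Illustrative only. */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    int             counter;
} toy_fence;

static void toy_fence_init(toy_fence* f) {
    pthread_mutex_init(&f->lock, NULL);
    pthread_cond_init(&f->cond, NULL);
    f->counter = 0;
}

static void toy_fence_acquire(toy_fence* f) {
    pthread_mutex_lock(&f->lock);
    f->counter += 1;
    pthread_mutex_unlock(&f->lock);
}

static void toy_fence_release(toy_fence* f) {
    pthread_mutex_lock(&f->lock);
    f->counter -= 1;
    if (f->counter == 0) {
        pthread_cond_broadcast(&f->cond);  /* wake all waiters */
    }
    pthread_mutex_unlock(&f->lock);
}

static void toy_fence_wait(toy_fence* f) {
    pthread_mutex_lock(&f->lock);
    while (f->counter > 0) {
        pthread_cond_wait(&f->cond, &f->lock);
    }
    pthread_mutex_unlock(&f->lock);
}
```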
@@ -2475,9 +2169,6 @@ ma_fence_wait(&fence);
If loading the entire sound into memory is prohibitive, you can also configure the engine to stream
the audio data:
-
-
-
ma_sound_init_from_file(&engine, "my_sound.wav", MA_SOUND_FLAG_STREAM, pGroup, NULL, &sound);
@@ -2533,9 +2224,6 @@ value will result in a higher pitch. The pitch must be greater than 0.
The engine supports 3D spatialization of sounds. By default sounds will have spatialization
enabled, but if a sound does not need to be spatialized it's best to disable it. There are two ways
to disable spatialization of a sound:
-
-
-
// Disable spatialization at initialization time via a flag:
@@ -2547,9 +2235,6 @@ ma_sound_set_spatialization_enabled(&sound, isSpatializationEnabled);
By default sounds will be spatialized based on the closest listener. If a sound should always be
spatialized relative to a specific listener it can be pinned to one:
-
-
-
ma_sound_set_pinned_listener_index(&sound, listenerIndex);
@@ -2557,9 +2242,6 @@ ma_sound_set_pinned_listener_index(&sound, listenerIndex);
Like listeners, sounds have a position. By default, the position of a sound is in absolute space,
but it can be changed to be relative to a listener:
-
-
-
ma_sound_set_positioning(&sound, ma_positioning_relative);
@@ -2567,18 +2249,12 @@ ma_sound_set_positioning(&sound, ma_positioning_relative);
Note that relative positioning of a sound only makes sense if there is either only one listener, or
the sound is pinned to a specific listener. To set the position of a sound:
-
-
-
ma_sound_set_position(&sound, posX, posY, posZ);
The direction works the same way as a listener and represents the sound's forward direction:
-
-
-
ma_sound_set_direction(&sound, forwardX, forwardY, forwardZ);
@@ -2586,18 +2262,12 @@ ma_sound_set_direction(&sound, forwardX, forwardY, forwardZ);
Sounds also have a cone for controlling directional attenuation. This works exactly the same as
listeners:
-
-
-
ma_sound_set_cone(&sound, innerAngleInRadians, outerAngleInRadians, outerGain);
The velocity of a sound is used for doppler effect and can be set as such:
-
-
-
ma_sound_set_velocity(&sound, velocityX, velocityY, velocityZ);
@@ -2606,18 +2276,12 @@ ma_sound_set_velocity(&sound, velocityX, velocityY, velocityZ);
The engine supports different attenuation models which can be configured on a per-sound basis. By
default the attenuation model is set to ma_attenuation_model_inverse which is the equivalent to
OpenAL's AL_INVERSE_DISTANCE_CLAMPED. Configure the attenuation model like so:
-
-
-
ma_sound_set_attenuation_model(&sound, ma_attenuation_model_inverse);
The supported attenuation models include the following:
-
-
-
@@ -2654,18 +2318,12 @@ Exponential attenuation.
|
To control how quickly a sound rolls off as it moves away from the listener, you need to configure
the rolloff:
-
-
-
ma_sound_set_rolloff(&sound, rolloff);
You can control the minimum and maximum gain to apply from spatialization:
-
-
-
ma_sound_set_min_gain(&sound, minGain);
@@ -2676,9 +2334,6 @@ Likewise, in the calculation of attenuation, you can control the minimum and max
the attenuation calculation. This is useful if you want to ensure sounds don't drop below a certain
volume after the listener moves further away, and to have sounds play at maximum volume when the
listener is within a certain distance:
-
-
-
ma_sound_set_min_distance(&sound, minDistance);
@@ -2687,9 +2342,6 @@ ma_sound_set_max_distance(&sound, maxDistance);
The engine's spatialization system supports doppler effect. The doppler factor can be configured on
a per-sound basis like so:
-
-
-
ma_sound_set_doppler_factor(&sound, dopplerFactor);
@@ -2698,9 +2350,6 @@ ma_sound_set_doppler_factor(&sound, dopplerFactor);
You can fade sounds in and out with ma_sound_set_fade_in_pcm_frames() and
ma_sound_set_fade_in_milliseconds(). Set the volume to -1 to use the current volume as the
starting volume:
-
-
-
// Fade in over 1 second.
@@ -2714,9 +2363,6 @@ ma_sound_set_fade_in_milliseconds(&sound, -1, 0, 1000);
By default sounds will start immediately, but sometimes for timing and synchronization purposes it
can be useful to schedule a sound to start or stop:
-
-
-
// Start the sound in 1 second from now.
@@ -2754,9 +2400,6 @@ when the sound reaches the end. Note that the callback is fired from the audio t
you must not uninitialize the sound from the callback. To set the callback you can use
ma_sound_set_end_callback(). Alternatively, if you're using ma_sound_init_ex(), you can pass it
into the config like so:
-
-
-
soundConfig.endCallback = my_end_callback;
@@ -2764,9 +2407,6 @@ soundConfig.pEndCallbackUserData = pMyEndCallbackUserData;
The end callback is declared like so:
-
-
-
void my_end_callback(void* pUserData, ma_sound* pSound)
@@ -2777,9 +2417,6 @@ The end callback is declared like so:
Internally a sound wraps around a data source. Some APIs exist to control the underlying data
source, mainly for convenience:
-
-
-
ma_sound_seek_to_pcm_frame(&sound, frameIndex);
@@ -2798,12 +2435,6 @@ file formats that have built-in support in miniaudio. You can extend this to sup
file format through the use of custom decoders. To do this you'll need to use a self-managed
resource manager and configure it appropriately. See the "Resource Management" section below for
details on how to set this up.
-
-
-
-
-
-
6. Resource Management
@@ -2832,9 +2463,6 @@ the data to be loaded asynchronously.
The example below shows how you can initialize a resource manager using its default configuration:
-
-
-
ma_resource_manager_config config;
@@ -2854,9 +2482,6 @@ will use the file's native data format, but you can configure it to use a co
is useful for offloading the cost of data conversion to load time rather than dynamically
converting at mixing time. To do this, you configure the decoded format, channels and sample rate
like the code below:
-
-
-
config = ma_resource_manager_config_init();
@@ -2877,9 +2502,6 @@ Internally the resource manager uses the ma
only supports decoders that are built into miniaudio. It's possible to support additional encoding
formats through the use of custom decoders. To do so, pass in your ma_decoding_backend_vtable
vtables into the resource manager config:
-
-
-
ma_decoding_backend_vtable* pCustomBackendVTables[] =
@@ -2904,9 +2526,6 @@ via libopus and libopusfile and Vorbis via libvorbis and libvorbisfile.
Asynchronicity is achieved via a job system. When an operation needs to be performed, such as the
decoding of a page, a job will be posted to a queue which will then be processed by a job thread.
By default there will be only one job thread running, but this can be configured, like so:
-
-
-
config = ma_resource_manager_config_init();
@@ -2919,9 +2538,6 @@ existing job infrastructure, or if you simply don't like the way the resourc
do this, just set the job thread count to 0 and process jobs manually. To process jobs, you first
need to retrieve a job using ma_resource_manager_next_job() and then process it using
ma_job_process():
-
-
-
config = ma_resource_manager_config_init();
@@ -2966,9 +2582,6 @@ is to give every thread the opportunity to catch the event and terminate natural
When loading a file, it's sometimes convenient to be able to customize how files are opened and
read instead of using standard fopen(), fclose(), etc. which is what miniaudio will use by
default. This can be done by setting the pVFS member of the resource manager's config:
-
-
-
// Initialize your custom VFS object. See documentation for VFS for information on how to do this.
@@ -2989,9 +2602,6 @@ loading a sound you need to specify the file path and options for how the sounds
By default a sound will be loaded synchronously. The returned data source is owned by the caller
which means the caller is responsible for the allocation and freeing of the data source. Below is
an example for initializing a data source:
-
-
-
ma_resource_manager_data_source dataSource;
@@ -3016,9 +2626,6 @@ ma_resource_manager_data_source_uninit(&dataSource);
The flags parameter specifies how you want to perform loading of the sound file. It can be a
combination of the following flags:
-
-
-
MA_RESOURCE_MANAGER_DATA_SOURCE_FLAG_STREAM
@@ -3079,12 +2686,6 @@ caller to ensure the pointer stays valid for its lifetime. Use
ma_resource_manager_register_file() and ma_resource_manager_unregister_file() to register and
unregister a file. It does not make sense to use the MA_RESOURCE_MANAGER_DATA_SOURCE_FLAG_STREAM
flag with a self-managed data pointer.
-
-
-
-
-
-
6.1. Asynchronous Loading and Synchronization
@@ -3118,9 +2719,6 @@ sample rate of the file.
The example below shows how you could use a fence when loading a number of sounds:
-
-
-
// This fence will be released when all sounds are finished loading entirely.
@@ -3145,9 +2743,6 @@ ma_fence_wait(&fence);
In the example above we used a fence for waiting until the entire file has been fully decoded. If
you only need to wait for the initialization of the internal decoder to complete, you can use the
init member of the ma_resource_manager_pipeline_notifications object:
-
-
-
notifications.init.pFence = &fence;
@@ -3155,9 +2750,6 @@ notifications.init.pFence = &fence;
If a fence is not appropriate for your situation, you can instead use a callback that is fired on
an individual sound basis. This is done in a very similar way to fences:
-
-
-
typedef struct
@@ -3190,15 +2782,6 @@ instantiation into the ma_resource_manager_
the fence, except we set pNotification instead of pFence. You can set both of these at the same
time and they should both work as expected. If using the pNotification system, you need to ensure
your ma_async_notification_callbacks object stays valid.
6.2. Resource Manager Implementation Details
@@ -3245,12 +2828,6 @@ to process jobs in the queue so in heavy load situations there will still be som
determine if a data source is ready to have some frames read, use
ma_resource_manager_data_source_get_available_frames(). This will return the number of frames
available starting from the current position.
6.2.1. Job Queue
@@ -3286,9 +2863,6 @@ need to use more than one job thread. There are plans to remove this lock in a f
In addition, posting a job will release a semaphore, which on Win32 is implemented with
ReleaseSemaphore and on POSIX platforms via a condition variable:
pthread_mutex_lock(&pSemaphore->lock);
@@ -3303,15 +2877,6 @@ Again, this is relevant for those with strict lock-free requirements in the audi
this, you can use non-blocking mode (via the MA_JOB_QUEUE_FLAG_NON_BLOCKING
flag) and implement your own job processing routine (see the "Resource Manager" section above for
details on how to do this).
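To make the locking concrete, here is a minimal sketch of a counting semaphore built from a mutex and condition variable, in the spirit of what the text describes for POSIX platforms. The my_semaphore type and function names are invented for illustration; this is not miniaudio's actual implementation, but it shows why posting a job involves taking a lock.

```c
/* Hypothetical counting semaphore on top of pthreads. Illustrative only. */
#include <pthread.h>

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t cond;
    int value;
} my_semaphore;

void my_semaphore_init(my_semaphore* pSem, int initialValue)
{
    pthread_mutex_init(&pSem->lock, NULL);
    pthread_cond_init(&pSem->cond, NULL);
    pSem->value = initialValue;
}

void my_semaphore_release(my_semaphore* pSem)
{
    pthread_mutex_lock(&pSem->lock);   /* This lock is why posting is not lock-free. */
    pSem->value += 1;
    pthread_cond_signal(&pSem->cond);  /* Wake one waiting job thread. */
    pthread_mutex_unlock(&pSem->lock);
}

void my_semaphore_wait(my_semaphore* pSem)
{
    pthread_mutex_lock(&pSem->lock);
    while (pSem->value == 0) {
        pthread_cond_wait(&pSem->cond, &pSem->lock);
    }
    pSem->value -= 1;
    pthread_mutex_unlock(&pSem->lock);
}
```

If the audio thread must never block, this is exactly the operation you would avoid by using non-blocking mode as described above.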
6.2.2. Data Buffers
@@ -3323,9 +2888,6 @@ is uninitialized, the reference counter will be decremented. If the counter hits
will be unloaded. This is a detail to keep in mind because it could result in excessive loading and
unloading of a sound. For example, the following sequence will result in a file being loaded twice,
one load after the other:
ma_resource_manager_data_source_init(pResourceManager, "my_file", ..., &myDataBuffer0); // Refcount = 1. Initial load.
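To illustrate the reference counting behaviour described above, here is a hypothetical sketch of a reference-counted cache. The my_cache_* names are invented and this is not how the resource manager is actually implemented, but it shows why an uninit that drops the count to zero followed by another init triggers a second load.

```c
/* Hypothetical reference-counted cache. When the count hits zero the
 * resource is unloaded, so the next acquire must load it again. */
#include <stddef.h>

typedef struct {
    const char* name;
    int refCount;
    int loadCount; /* How many times the underlying file was loaded. */
} my_cached_sound;

void my_cache_acquire(my_cached_sound* s, const char* name)
{
    if (s->refCount == 0) {
        s->name = name;
        s->loadCount += 1; /* Simulate loading the file from disk. */
    }
    s->refCount += 1;
}

void my_cache_release(my_cached_sound* s)
{
    s->refCount -= 1;
    if (s->refCount == 0) {
        s->name = NULL; /* Simulate unloading. */
    }
}
```

Acquiring the second reference before releasing the first keeps the count above zero and avoids the reload.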
@@ -3385,12 +2947,6 @@ decode, the job will post another MA_JOB_TY
keep on happening until the sound has been fully decoded. For sounds of an unknown length, each
page will be linked together as a linked list. Internally this is implemented via the
ma_paged_audio_buffer object.
6.2.3. Data Streams
@@ -3435,15 +2991,6 @@ Note that when a new page needs to be loaded, a job will be posted to the resour
thread from the audio thread. You must keep in mind the details mentioned in the "Job Queue"
section above regarding locking when posting an event if you require a strictly lock-free audio
thread.
7. Node Graph
@@ -3461,9 +3008,6 @@ Each node has a number of input buses and a number of output buses. An output bu
attached to an input bus of another. Multiple nodes can connect their output buses to another
node's input bus, in which case their outputs will be mixed before processing by the node. Below is
a diagram that illustrates a hypothetical node graph setup:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Data flows left to right >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
@@ -3500,9 +3044,6 @@ To use a node graph, you first need to initialize a ma_node_graph object is required for some thread-safety
issues which will be explained later. A ma_node_graph object is initialized using miniaudio's
standard config/init system:
ma_node_graph_config nodeGraphConfig = ma_node_graph_config_init(myChannelCount);
@@ -3519,9 +3060,6 @@ same channel count, which is specified in the config. Any nodes that connect dir
endpoint must be configured such that their output buses have the same channel count. When you read
audio data from the node graph, it'll have the channel count you specified in the config. To read
data from the graph:
ma_uint32 framesRead;
@@ -3547,9 +3085,6 @@ The ma_node API is designed to allow
miniaudio includes a few stock nodes for common functionality. This is how you would initialize a
node which reads directly from a data source (ma_data_source_node), which is one of the stock
nodes that comes with miniaudio:
ma_data_source_node_config config = ma_data_source_node_config_init(pMyDataSource);
@@ -3569,9 +3104,6 @@ returned from ma_data_source_node_init()
By default the node will not be attached to the graph. To do so, use ma_node_attach_output_bus():
result = ma_node_attach_output_bus(&dataSourceNode, 0, ma_node_graph_get_endpoint(&nodeGraph), 0);
@@ -3597,9 +3129,6 @@ Less frequently you may want to create a specialized node. This will be a node w
your own processing callback to apply a custom effect of some kind. This is similar to initializing
one of the stock node types, only this time you need to specify a pointer to a vtable containing a
pointer to the processing function and the number of input and output buses. Example:
static void my_custom_node_process_pcm_frames(ma_node* pNode, const float** ppFramesIn, ma_uint32* pFrameCountIn, float** ppFramesOut, ma_uint32* pFrameCountOut)
@@ -3658,9 +3187,6 @@ When initializing a custom node, as in the code above, you'll normally just
static space. The number of input and output buses are specified as part of the vtable. If you need
a variable number of buses on a per-node basis, the vtable should have the relevant bus count set
to MA_NODE_BUS_COUNT_UNKNOWN. In this case, the bus count should be set in the node config:
static ma_node_vtable my_custom_node_vtable =
@@ -3689,9 +3215,6 @@ set if the vtable specifies MA_NODE_BUS_COUNT_UNKNOWN in the relevant bus count.
Most often you'll want to create a structure to encapsulate your node with some extra data. You
need to make sure the ma_node_base object is the first member of the structure:
typedef struct
@@ -3717,9 +3240,6 @@ return an error in it's initialization routine.
Custom nodes can be assigned some flags to describe their behaviour. These are set via the vtable
and include the following:
@@ -3732,18 +3252,6 @@ Description
|
MA_NODE_FLAG_PASSTHROUGH
|
@@ -3760,31 +3268,6 @@ counts.
|
MA_NODE_FLAG_CONTINUOUS_PROCESSING
|
@@ -3808,15 +3291,6 @@ are no inputs attached.
|
MA_NODE_FLAG_ALLOW_NULL_INPUT
|
@@ -3832,10 +3306,6 @@ processing callback.
|
MA_NODE_FLAG_DIFFERENT_PROCESSING_RATES
|
@@ -3848,18 +3318,6 @@ resampling.
|
MA_NODE_FLAG_SILENT_OUTPUT
|
@@ -3878,9 +3336,6 @@ callback because miniaudio will ignore it anyway.
If you need to make a copy of an audio stream for effect processing you can use a splitter node
called ma_splitter_node. This has 1 input bus and splits the stream into 2 output buses.
You can use it like this:
ma_splitter_node_config splitterNodeConfig = ma_splitter_node_config_init(channels);
@@ -3897,9 +3352,6 @@ ma_node_attach_output_bus(&splitterNode, 1, &myEffectNode,
The volume of an output bus can be configured on a per-bus basis:
ma_node_set_output_bus_volume(&splitterNode, 0, 0.5f);
@@ -3912,9 +3364,6 @@ copied streams.
You can start and stop a node with the following:
ma_node_set_state(&splitterNode, ma_node_state_started); // The default state.
@@ -3929,9 +3378,6 @@ atomically.
You can configure the initial state of a node in its config:
nodeConfig.initialState = ma_node_state_stopped;
@@ -3940,9 +3386,6 @@ nodeConfig.initialState = ma_node_state_stopped;
Note that for the stock specialized nodes, all of their configs will have a nodeConfig member
which is the config to use with the base node. This is where the initial state can be configured
for specialized nodes:
dataSourceNodeConfig.nodeConfig.initialState = ma_node_state_stopped;
@@ -3950,12 +3393,6 @@ dataSourceNodeConfig.nodeConfig.initialState = ma_node_state_stopped;
When using a specialized node like ma_data_source_node or ma_splitter_node, be sure not to
modify the vtable member of the nodeConfig object.
7.1. Timing
@@ -3987,9 +3424,6 @@ start and one stop at a time. This is mainly intended for putting nodes into a s
state in a frame-exact manner. Without this mechanism, starting and stopping of a node is limited
to the resolution of a call to ma_node_graph_read_pcm_frames() which would typically be in blocks
of several milliseconds. The following APIs can be used for scheduling node states:
ma_node_set_state_time()
@@ -3997,9 +3431,6 @@ ma_node_get_state_time()
The time is absolute and must be based on the global clock. An example is below:
ma_node_set_state_time(&myNode, ma_node_state_started, sampleRate*1); // Delay starting to 1 second.
@@ -4007,9 +3438,6 @@ ma_node_set_state_time(&myNode, ma_node_state_stopped, sampleRate*5);
Below is an example of changing the state using a relative time:
ma_node_set_state_time(&myNode, ma_node_state_started, sampleRate*1 + ma_node_graph_get_time(&myNodeGraph));
@@ -4020,15 +3448,6 @@ Note that due to the nature of multi-threading the times may not be 100% exact.
issue, consider scheduling state changes from within a processing callback. An idea might be to
have some kind of passthrough trigger node that is used specifically for tracking time and handling
events.
7.2. Thread Safety and Locking
@@ -4122,23 +3541,11 @@ hasn't yet been set, from the perspective of iteration it's been attache
only be happening in a forward direction which means the "previous" pointer won't actually ever get
used. The same general process applies to detachment. See ma_node_attach_output_bus() and
ma_node_detach_output_bus() for the implementation of this mechanism.
8. Decoding
The ma_decoder API is used for reading audio files. Decoders are completely decoupled from
devices and can be used independently. Built-in support is included for the following formats:
@@ -4163,9 +3570,6 @@ FLAC
|
You can disable the built-in decoders by specifying one or more of the following options before the
miniaudio implementation:
#define MA_NO_WAV
@@ -4181,9 +3585,6 @@ to use custom decoders.
A decoder can be initialized from a file with ma_decoder_init_file(), a block of memory with
ma_decoder_init_memory(), or from data delivered via callbacks with ma_decoder_init(). Here is
an example for loading a decoder from a file:
ma_decoder decoder;
@@ -4200,9 +3601,6 @@ ma_decoder_uninit(&decoder);
When initializing a decoder, you can optionally pass in a pointer to a ma_decoder_config object
(the NULL argument in the example above) which allows you to configure the output format, channel
count, sample rate and channel map:
ma_decoder_config config = ma_decoder_config_init(ma_format_f32, 2, 48000);
@@ -4216,9 +3614,6 @@ same as that defined by the decoding backend.
Data is read from the decoder as PCM frames. This will output the number of PCM frames actually
read. If this is less than the requested number of PCM frames it means you've reached the end. The
return value will be MA_AT_END if no samples have been read and the end has been reached.
ma_result result = ma_decoder_read_pcm_frames(pDecoder, pFrames, framesToRead, &framesRead);
@@ -4228,9 +3623,6 @@ return value will be MA_AT_END if no
You can also seek to a specific frame like so:
ma_result result = ma_decoder_seek_to_pcm_frame(pDecoder, targetFrame);
@@ -4240,9 +3632,6 @@ You can also seek to a specific frame like so:
If you want to loop back to the start, you can simply seek back to the first PCM frame:
ma_decoder_seek_to_pcm_frame(pDecoder, 0);
@@ -4252,9 +3641,6 @@ When loading a decoder, miniaudio uses a trial and error technique to find the a
backend. This can be unnecessarily inefficient if the type is already known. In this case you can
use the encodingFormat variable in the decoder config to specify a specific encoding format you want
to decode:
decoderConfig.encodingFormat = ma_encoding_format_wav;
@@ -4266,12 +3652,6 @@ See the ma_encoding_format enum for
The ma_decoder_init_file() API will try using the file extension to determine which decoding
backend to prefer.
8.1. Custom Decoders
@@ -4286,9 +3666,6 @@ Opus decoder in the "extras" folder of the miniaudio repository which
A custom decoder must implement a data source. A vtable called ma_decoding_backend_vtable needs
to be implemented which is then passed into the decoder config:
ma_decoding_backend_vtable* pCustomBackendVTables[] =
@@ -4306,9 +3683,6 @@ decoderConfig.customBackendCount = sizeof
The ma_decoding_backend_vtable vtable has the following functions:
onInit
@@ -4359,23 +3733,11 @@ initialization routine is clean.
When a decoder is uninitialized, the onUninit callback will be fired which will give you an
opportunity to clean up any internal data.
9. Encoding
The ma_encoding API is used for writing audio files. The only supported output format is WAV.
This can be disabled by specifying the following option before the implementation of miniaudio:
#define MA_NO_WAV
@@ -4384,9 +3746,6 @@ This can be disabled by specifying the following option before the implementatio
An encoder can be initialized to write to a file with ma_encoder_init_file() or from data
delivered via callbacks with ma_encoder_init(). Below is an example for initializing an encoder
to output to a file.
ma_encoder_config config = ma_encoder_config_init(ma_encoding_format_wav, FORMAT, CHANNELS, SAMPLE_RATE);
@@ -4404,9 +3763,6 @@ ma_encoder_uninit(&encoder);
When initializing an encoder you must specify a config which is initialized with
ma_encoder_config_init(). Here you must specify the file type, the output sample format, output
channel count and output sample rate. The following file types are supported:
@@ -4429,9 +3785,6 @@ If the format, channel count or sample rate is not supported by the output file
be returned. The encoder will not perform data conversion so you will need to convert it before
outputting any audio data. To output audio data, use ma_encoder_write_pcm_frames(), like in the
example below:
ma_uint64 framesWritten;
@@ -4447,27 +3800,12 @@ is optionally and you can pass in NULL
Encoders must be uninitialized with ma_encoder_uninit().
10. Data Conversion
A data conversion API is included with miniaudio which supports the majority of data conversion
requirements. This supports conversion between sample formats, channel counts (with channel
mapping) and sample rates.
@@ -4476,12 +3814,6 @@ Conversion between sample formats is achieved with the ma_pcm_convert() to convert based on a ma_format variable. Use
ma_convert_pcm_frames_format() to convert PCM frames where you want to specify the frame count
and channel count as a variable instead of the total sample count.
10.1.1. Dithering
@@ -4490,9 +3822,6 @@ Dithering can be set using the ditherMode parameter.
The different dithering modes include the following, in order of efficiency:
@@ -4530,9 +3859,6 @@ ma_dither_mode_triangle
Note that even if the dither mode is set to something other than ma_dither_mode_none, it will be
ignored for conversions where dithering is not needed. Dithering is available for the following
conversions:
s16 -> u8
@@ -4546,24 +3872,12 @@ f32 -> s16
Note that it is not an error to pass something other than ma_dither_mode_none for conversions where
dither is not used. It will just be ignored.
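As an illustration of what dithering does, the sketch below applies triangular (TPDF) noise of one LSB when quantizing f32 samples to s16. This is a simplified, hypothetical implementation with invented names, not miniaudio's internal code.

```c
/* Illustrative f32 -> s16 quantization with triangular dither. */
#include <stdlib.h>

static float tpdf_noise(void)  /* Triangular PDF in [-1, 1], in LSB units. */
{
    float r0 = (float)rand() / (float)RAND_MAX;
    float r1 = (float)rand() / (float)RAND_MAX;
    return r0 - r1;  /* Sum of two uniform variables gives a triangular PDF. */
}

short dither_f32_to_s16(float x)
{
    float scaled = x * 32767.0f + tpdf_noise(); /* Noise amplitude: 1 LSB. */
    if (scaled >  32767.0f) scaled =  32767.0f; /* Clamp to the s16 range. */
    if (scaled < -32768.0f) scaled = -32768.0f;
    return (short)scaled;
}
```

The added noise randomizes the rounding error, trading deterministic quantization distortion for a low, constant noise floor.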
10.2. Channel Conversion
Channel conversion is used for channel rearrangement and conversion from one channel count to
another. The ma_channel_converter API is used for channel conversion. Below is an example of
initializing a simple channel converter which converts from mono to stereo.
ma_channel_converter_config config = ma_channel_converter_config_init(
@@ -4581,9 +3895,6 @@ result = ma_channel_converter_init(&config, NULL, &converter);
To perform the conversion simply call ma_channel_converter_process_pcm_frames() like so:
ma_result result = ma_channel_converter_process_pcm_frames(&converter, pFramesOut, pFramesIn, frameCount);
@@ -4598,12 +3909,6 @@ frames.
Input and output PCM frames are always interleaved. Deinterleaved layouts are not supported.
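For reference, interleaved mono-to-stereo expansion, the conversion performed in the example above, can be sketched as a standalone function. This is illustrative only and not part of the miniaudio API.

```c
/* Expand interleaved mono f32 frames to interleaved stereo by duplicating
 * each sample into the left and right channels. */
#include <stddef.h>

void mono_to_stereo_f32(float* pOut, const float* pIn, size_t frameCount)
{
    size_t iFrame;
    for (iFrame = 0; iFrame < frameCount; iFrame += 1) {
        pOut[iFrame*2 + 0] = pIn[iFrame]; /* Left. */
        pOut[iFrame*2 + 1] = pIn[iFrame]; /* Right. */
    }
}
```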
10.2.1. Channel Mapping
@@ -4646,9 +3951,6 @@ weights. Custom weights can be passed in as the last parameter of
Predefined channel maps can be retrieved with ma_channel_map_init_standard(). This takes a
ma_standard_channel_map enum as its first parameter, which can be one of the following:
@@ -4732,9 +4034,6 @@ ma_standard_channel_map_webaudio
|
Below are the channel maps used by default in miniaudio (ma_standard_channel_map_default):
@@ -4765,7 +4064,6 @@ Mapping
|
3
|
@@ -4777,10 +4075,6 @@ Mapping
|
4 (Surround)
|
@@ -4793,11 +4087,6 @@ Mapping
|
5
|
@@ -4811,14 +4100,6 @@ Mapping
|
6 (5.1)
|
@@ -4833,15 +4114,6 @@ Mapping
|
7
|
@@ -4857,18 +4129,6 @@ Mapping
|
8 (7.1)
|
@@ -4885,7 +4145,6 @@ Mapping
|
Other
|
@@ -4894,19 +4153,10 @@ is equivalent to the same
mapping as the device.
|
-10.3. Resampling
+ |
10.3. Resampling
Resampling is achieved with the ma_resampler object. To create a resampler object, do something
like the following:
ma_resampler_config config = ma_resampler_config_init(
@@ -4924,18 +4174,12 @@ like the following:
Do the following to uninitialize the resampler:
ma_resampler_uninit(&resampler);
The following example shows how data can be processed:
ma_uint64 frameCountIn = 1000;
@@ -4973,9 +4217,6 @@ changed after initialization.
The miniaudio resampler has built-in support for the following algorithms:
@@ -5033,22 +4274,10 @@ number of input frames. You can do this with ma_resampler_get_input_latency() and ma_resampler_get_output_latency().
10.3.1. Resampling Algorithms
The choice of resampling algorithm depends on your situation and requirements.
10.3.1.1. Linear Resampling
@@ -5073,20 +4302,11 @@ the input and output sample rates (Nyquist Frequency).
The API for the linear resampler is the same as the main resampler API, only it's called
ma_linear_resampler.
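The core of linear resampling can be sketched as follows: read the input at a fractional position and interpolate between the two neighbouring samples. This is a simplified illustration with invented names; a real implementation such as ma_linear_resampler also applies the low-pass filtering discussed above, which is omitted here.

```c
/* Linear interpolation at a fractional read position. Returns the number of
 * output frames actually produced (mono, f32). */
#include <stddef.h>

size_t resample_linear_f32(float* pOut, size_t outCount,
                           const float* pIn, size_t inCount,
                           double ratioInOverOut) /* e.g. 44100.0 / 48000.0 */
{
    size_t iOut;
    for (iOut = 0; iOut < outCount; iOut += 1) {
        double pos = iOut * ratioInOverOut;  /* Fractional input position. */
        size_t i0  = (size_t)pos;
        size_t i1  = i0 + 1;
        double t   = pos - (double)i0;       /* Interpolation weight. */
        if (i1 >= inCount) {
            return iOut;                     /* Ran out of input frames. */
        }
        pOut[iOut] = (float)(pIn[i0]*(1.0 - t) + pIn[i1]*t);
    }
    return outCount;
}
```

Note how the number of output frames produced depends on the available input, which is the relationship the latency and frame-count query functions describe.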
10.3.2. Custom Resamplers
You can implement a custom resampler by using the ma_resample_algorithm_custom resampling
algorithm and setting a vtable in the resampler config:
ma_resampler_config config = ma_resampler_config_init(..., ma_resample_algorithm_custom);
@@ -5132,15 +4352,6 @@ frames are required to be available to produce the given number of output frames
onGetExpectedOutputFrameCount callback is used to determine how many output frames will be
produced given the specified number of input frames. miniaudio will use these as a hint, but they
are optional and can be set to NULL if you're unable to implement them.
10.4. General Data Conversion
@@ -5149,9 +4360,6 @@ resampling into one operation. This is what miniaudio uses internally to convert
requested when the device was initialized and the format of the backend's native device. The API
for general data conversion is very similar to the resampling API. Create a ma_data_converter
object like this:
ma_data_converter_config config = ma_data_converter_config_init(
@@ -5173,9 +4381,6 @@ object like this:
In the example above we use ma_data_converter_config_init() to initialize the config, however
there's many more properties that can be configured, such as channel maps and resampling quality.
Something like the following may be more suitable depending on your requirements:
ma_data_converter_config config = ma_data_converter_config_init_default();
@@ -5190,18 +4395,12 @@ config.resampling.linear.lpfOrder = MA_MAX_FILTER_ORDER;
Do the following to uninitialize the data converter:
ma_data_converter_uninit(&converter, NULL);
The following example shows how data can be processed:
ma_uint64 frameCountIn = 1000;
@@ -5251,26 +4450,11 @@ number of input frames. You can do this with ma_data_converter_get_input_latency() and ma_data_converter_get_output_latency().
11. Filtering
11.1. Biquad Filtering
Biquad filtering is achieved with the ma_biquad API. Example:
ma_biquad_config config = ma_biquad_config_init(ma_format_f32, channels, b0, b1, b2, a0, a1, a2);
@@ -5302,9 +4486,6 @@ Input and output frames are always interleaved.
Filtering can be applied in-place by passing in the same pointer for both the input and output
buffers, like so:
ma_biquad_process_pcm_frames(&biquad, pMyData, pMyData, frameCount);
@@ -5316,19 +4497,10 @@ filter while keeping the values of registers valid to avoid glitching. Do not us
ma_biquad_init() for this as it will do a full initialization which involves clearing the
registers to 0. Note that changing the format or channel count after initialization is invalid and
will result in an error.
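For reference, the per-sample computation a biquad performs can be sketched as a transposed direct form 2 filter, which keeps its state in two registers — the registers that ma_biquad_reinit() is careful to preserve. This is an illustrative sketch with invented names, not miniaudio's implementation; coefficients are assumed to be pre-normalized by a0.

```c
/* Transposed direct form 2 biquad. Coefficients are pre-divided by a0. */
typedef struct {
    float b0, b1, b2, a1, a2; /* Normalized filter coefficients. */
    float r1, r2;             /* State registers carried between samples. */
} my_biquad;

float my_biquad_process_sample(my_biquad* bq, float x)
{
    float y = bq->b0*x + bq->r1;
    bq->r1  = bq->b1*x - bq->a1*y + bq->r2;
    bq->r2  = bq->b2*x - bq->a2*y;
    return y;
}
```

Because the registers carry history across samples, swapping coefficients while leaving r1 and r2 intact lets the filter change smoothly instead of glitching, which is what reinitialization preserves.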
11.2. Low-Pass Filtering
Low-pass filtering is achieved with the following APIs:
@@ -5364,9 +4536,6 @@ High order low-pass filter (Butterworth)
|
Low-pass filter example:
ma_lpf_config config = ma_lpf_config_init(ma_format_f32, channels, sampleRate, cutoffFrequency, order);
@@ -5387,9 +4556,6 @@ you need to convert it yourself beforehand. Input and output frames are always i
Filtering can be applied in-place by passing in the same pointer for both the input and output
buffers, like so:
ma_lpf_process_pcm_frames(&lpf, pMyData, pMyData, frameCount);
@@ -5397,9 +4563,6 @@ ma_lpf_process_pcm_frames(&lpf, pMyData, pMyData, frameCount);
The maximum filter order is limited to MA_MAX_FILTER_ORDER which is set to 8. If you need more,
you can chain first and second order filters together.
for (iFilter = 0; iFilter < filterCount; iFilter += 1) {
@@ -5423,19 +4586,10 @@ may want to consider using ma_lpf1.
If an even filter order is specified, a series of second order filters will be processed in a
chain. If an odd filter order is specified, a first order filter will be applied, followed by a
series of second order filters in a chain.
11.3. High-Pass Filtering
High-pass filtering is achieved with the following APIs:
@@ -5472,19 +4626,10 @@ High order high-pass filter (Butterworth)
|
High-pass filters work exactly the same as low-pass filters, only the APIs are called ma_hpf1,
ma_hpf2 and ma_hpf. See example code for low-pass filters for example usage.
11.4. Band-Pass Filtering
Band-pass filtering is achieved with the following APIs:
@@ -5515,19 +4660,10 @@ Band-pass filters work exactly the same as low-pass filters, only the APIs are c
ma_hpf. See example code for low-pass filters for example usage. Note that the order for
band-pass filters must be an even number which means there is no first order band-pass filter,
unlike low-pass and high-pass filters.
11.5. Notch Filtering
Notch filtering is achieved with the following APIs:
@@ -5545,15 +4681,9 @@ ma_notch2
Second order notching filter
|
-11.6. Peaking EQ Filtering
+ |
11.6. Peaking EQ Filtering
Peaking filtering is achieved with the following APIs:
@@ -5571,15 +4701,9 @@ ma_peak2
Second order peaking filter
|
-11.7. Low Shelf Filtering
+ |
11.7. Low Shelf Filtering
Low shelf filtering is achieved with the following APIs:
@@ -5600,19 +4724,10 @@ Second order low shelf filter
|
Where a high-pass filter is used to eliminate lower frequencies, a low shelf filter can be used to
just turn them down rather than eliminate them entirely.
11.8. High Shelf Filtering
High shelf filtering is achieved with the following APIs:
@@ -5634,30 +4749,12 @@ Second order high shelf filter
The high shelf filter has the same API as the low shelf filter, only you would use ma_hishelf
instead of ma_loshelf. Where a low shelf filter is used to adjust the volume of low frequencies,
the high shelf filter does the same thing for high frequencies.
miniaudio supports generation of sine, square, triangle and sawtooth waveforms. This is achieved
with the ma_waveform API. Example:
ma_waveform_config config = ma_waveform_config_init(
@@ -5691,9 +4788,6 @@ control whether or not a sawtooth has a positive or negative ramp, for example.
Below are the supported waveform types:
@@ -5720,18 +4814,9 @@ ma_waveform_type_triangle
ma_waveform_type_sawtooth
|
-12.2. Noise
+ |
12.2. Noise
miniaudio supports generation of white, pink and Brownian noise via the ma_noise API. Example:
ma_noise_config config = ma_noise_config_init(
@@ -5766,18 +4851,12 @@ The amplitude and seed can be changed dynamically with duplicateChannels member of the noise config to true, like so:
config.duplicateChannels = MA_TRUE;
Below are the supported noise types.
@@ -5799,13 +4878,7 @@ ma_noise_type_pink
ma_noise_type_brownian
|
-13. Audio Buffers
+ |
13. Audio Buffers
miniaudio supports reading from a buffer of raw audio data via the ma_audio_buffer API. This can
read from memory that's managed by the application, but can also handle the memory management for
@@ -5814,9 +4887,6 @@ you internally. Memory management is flexible and should support most use cases.
Audio buffers are initialized using the standard configuration system used everywhere in miniaudio:
ma_audio_buffer_config config = ma_audio_buffer_config_init(
@@ -5847,9 +4917,6 @@ Sometimes it can be convenient to allocate the memory for the ma_audio_buffer structure. To do this, use
ma_audio_buffer_alloc_and_init():
ma_audio_buffer_config config = ma_audio_buffer_config_init(
@@ -5884,9 +4951,6 @@ frames requested it means the end has been reached. This should never happen if
parameter is set to true. If you want to manually loop back to the start, you can do so with
ma_audio_buffer_seek_to_pcm_frame(pAudioBuffer, 0). Below is an example for reading data from an
audio buffer.
ma_uint64 framesRead = ma_audio_buffer_read_pcm_frames(pAudioBuffer, pFramesOut, desiredFrameCount, isLooping);
@@ -5897,9 +4961,6 @@ audio buffer.
Sometimes you may want to avoid the cost of data movement between the internal buffer and the
output buffer. Instead you can use memory mapping to retrieve a pointer to a segment of data:
void* pMappedFrames;
@@ -5922,15 +4983,6 @@ that it does not handle looping for you. You can determine if the buffer is at t
purpose of looping with ma_audio_buffer_at_end() or by inspecting the return value of
ma_audio_buffer_unmap() and checking if it equals MA_AT_END. You should not treat MA_AT_END
as an error when returned by ma_audio_buffer_unmap().
14. Ring Buffers
@@ -5950,9 +5002,6 @@ you.
The examples below use the PCM frame variant of the ring buffer since that's most likely the one
you will want to use. To initialize a ring buffer, do something like the following:
ma_pcm_rb rb;
@@ -6019,15 +5068,6 @@ aligned to MA_SIMD_ALIGNMENT.
Note that the ring buffer is only thread safe when used by a single consumer thread and single
producer thread.
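The reason a single-producer, single-consumer arrangement is safe can be sketched with a minimal lock-free ring buffer: each index is only ever advanced by one thread, so no locks are needed. This is an illustrative C11 sketch with invented names, not miniaudio's ma_rb implementation.

```c
/* Minimal SPSC ring buffer. Capacity must be a power of two so that
 * wrapping can be done with a mask. */
#include <stdatomic.h>
#include <stddef.h>

#define RB_CAP 8

typedef struct {
    float buffer[RB_CAP];
    atomic_size_t readPos;   /* Advanced only by the consumer thread. */
    atomic_size_t writePos;  /* Advanced only by the producer thread. */
} my_spsc_rb;

int my_rb_write(my_spsc_rb* rb, float value)  /* Producer thread only. */
{
    size_t w = atomic_load_explicit(&rb->writePos, memory_order_relaxed);
    size_t r = atomic_load_explicit(&rb->readPos, memory_order_acquire);
    if (w - r == RB_CAP) {
        return 0; /* Full. */
    }
    rb->buffer[w & (RB_CAP - 1)] = value;
    atomic_store_explicit(&rb->writePos, w + 1, memory_order_release);
    return 1;
}

int my_rb_read(my_spsc_rb* rb, float* pValue)  /* Consumer thread only. */
{
    size_t r = atomic_load_explicit(&rb->readPos, memory_order_relaxed);
    size_t w = atomic_load_explicit(&rb->writePos, memory_order_acquire);
    if (r == w) {
        return 0; /* Empty. */
    }
    *pValue = rb->buffer[r & (RB_CAP - 1)];
    atomic_store_explicit(&rb->readPos, r + 1, memory_order_release);
    return 1;
}
```

With a second producer or consumer, two threads could race on the same index, which is why the guarantee is limited to one of each.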
15. Backends
@@ -6039,9 +5079,6 @@ each of these backends in the order listed in the table below.
Note that backends that are not usable by the build target will not be included in the build. For
example, ALSA, which is specific to Linux, will not be included in the Windows build.
@@ -6221,9 +5258,6 @@ Cross Platform (not used on Web)
|
Some backends have nuances you may want to be aware of.
15.1. WASAPI
@@ -6235,9 +5269,6 @@ Low-latency shared mode will be disabled when using an application-defined sampl
will result in miniaudio's internal resampler being used instead which will in turn enable the
use of low-latency shared mode.
15.2. PulseAudio
15.3. Android
@@ -6266,9 +5294,6 @@ The backend API will perform resampling where possible. The reason for this as o
miniaudio's built-in resampler is to take advantage of any potential device-specific
optimizations the driver may implement.
BSD
@@ -6277,9 +5302,6 @@ The sndio backend is currently only enabled on OpenBSD builds.
The audio(4) backend is supported on OpenBSD, but you may need to disable sndiod before you can
use it.
15.4. UWP
@@ -6287,9 +5309,6 @@ UWP only supports default playback and capture devices.
UWP requires the Microphone capability to be enabled in the application's manifest (Package.appxmanifest):
<Package ...>
...
@@ -6297,10 +5316,7 @@ UWP requires the Microphone capability to be enabled in the application's ma
<DeviceCapability Name="microphone" />
</Capabilities>
</Package>
- 15.5. Web Audio / Emscripten
+ 15.5. Web Audio / Emscripten
You cannot use -std=c* compiler flags, nor -ansi. This only applies to the Emscripten build.
@@ -6316,21 +5332,9 @@ Google has implemented a policy in their browsers that prevent automatic media o
https://developers.google.com/web/updates/2017/09/autoplay-policy-changes. Starting the device
may fail if you try to start playback without first handling some kind of user input.
16. Optimization Tips
See below for some tips on improving performance.
16.1. Low Level API
@@ -6343,9 +5347,6 @@ By default, miniaudio will pre-silence the data callback's output buffer. If
will always write valid data to the output buffer you can disable pre-silencing by setting the
noPreSilence config option in the device config to true.
16.2. High Level API
@@ -6361,15 +5362,6 @@ If you know all of your sounds will always be the same sample rate, set the engi
consider setting the decoded sample rate to match your sounds. By configuring everything to
use a consistent sample rate, sample rate conversion can be avoided.
17. Miscellaneous Notes