diff --git a/miniaudio.h b/miniaudio.h
index aafb6c04..eb61063c 100644
--- a/miniaudio.h
+++ b/miniaudio.h
@@ -12,7 +12,8 @@ GitHub: https://github.com/mackron/miniaudio
/*
1. Introduction
===============
-miniaudio is a single file library for audio playback and capture. To use it, do the following in one .c file:
+miniaudio is a single file library for audio playback and capture. To use it, do the following in
+one .c file:
```c
#define MINIAUDIO_IMPLEMENTATION
@@ -21,16 +22,19 @@ miniaudio is a single file library for audio playback and capture. To use it, do
You can do `#include "miniaudio.h"` in other parts of the program just like any other header.
-miniaudio uses the concept of a "device" as the abstraction for physical devices. The idea is that you choose a physical device to emit or capture audio from,
-and then move data to/from the device when miniaudio tells you to. Data is delivered to and from devices asynchronously via a callback which you specify when
-initializing the device.
+miniaudio uses the concept of a "device" as the abstraction for physical devices. The idea is that
+you choose a physical device to emit or capture audio from, and then move data to/from the device
+when miniaudio tells you to. Data is delivered to and from devices asynchronously via a callback
+which you specify when initializing the device.
-When initializing the device you first need to configure it. The device configuration allows you to specify things like the format of the data delivered via
-the callback, the size of the internal buffer and the ID of the device you want to emit or capture audio from.
+When initializing the device you first need to configure it. The device configuration allows you to
+specify things like the format of the data delivered via the callback, the size of the internal
+buffer and the ID of the device you want to emit or capture audio from.
-Once you have the device configuration set up you can initialize the device. When initializing a device you need to allocate memory for the device object
-beforehand. This gives the application complete control over how the memory is allocated. In the example below we initialize a playback device on the stack,
-but you could allocate it on the heap if that suits your situation better.
+Once you have the device configuration set up you can initialize the device. When initializing a
+device you need to allocate memory for the device object beforehand. This gives the application
+complete control over how the memory is allocated. In the example below we initialize a playback
+device on the stack, but you could allocate it on the heap if that suits your situation better.
```c
void data_callback(ma_device* pDevice, void* pOutput, const void* pInput, ma_uint32 frameCount)
@@ -63,20 +67,27 @@ but you could allocate it on the heap if that suits your situation better.
}
```
-In the example above, `data_callback()` is where audio data is written and read from the device. The idea is in playback mode you cause sound to be emitted
-from the speakers by writing audio data to the output buffer (`pOutput` in the example). In capture mode you read data from the input buffer (`pInput`) to
-extract sound captured by the microphone. The `frameCount` parameter tells you how many frames can be written to the output buffer and read from the input
-buffer. A "frame" is one sample for each channel. For example, in a stereo stream (2 channels), one frame is 2 samples: one for the left, one for the right.
-The channel count is defined by the device config. The size in bytes of an individual sample is defined by the sample format which is also specified in the
-device config. Multi-channel audio data is always interleaved, which means the samples for each frame are stored next to each other in memory. For example, in
-a stereo stream the first pair of samples will be the left and right samples for the first frame, the second pair of samples will be the left and right samples
-for the second frame, etc.
+In the example above, `data_callback()` is where audio data is written to and read from the device.
+The idea is that in playback mode you cause sound to be emitted from the speakers by writing audio
+data to the output buffer (`pOutput` in the example). In capture mode you read data from the input
+buffer (`pInput`) to extract sound captured by the microphone. The `frameCount` parameter tells you
+how many frames can be written to the output buffer and read from the input buffer. A "frame" is
+one sample for each channel. For example, in a stereo stream (2 channels), one frame is 2
+samples: one for the left, one for the right. The channel count is defined by the device config.
+The size in bytes of an individual sample is defined by the sample format which is also specified
+in the device config. Multi-channel audio data is always interleaved, which means the samples for
+each frame are stored next to each other in memory. For example, in a stereo stream the first pair
+of samples will be the left and right samples for the first frame, the second pair of samples will
+be the left and right samples for the second frame, etc.
-The configuration of the device is defined by the `ma_device_config` structure. The config object is always initialized with `ma_device_config_init()`. It's
-important to always initialize the config with this function as it initializes it with logical defaults and ensures your program doesn't break when new members
-are added to the `ma_device_config` structure. The example above uses a fairly simple and standard device configuration. The call to `ma_device_config_init()`
-takes a single parameter, which is whether or not the device is a playback, capture, duplex or loopback device (loopback devices are not supported on all
-backends). The `config.playback.format` member sets the sample format which can be one of the following (all formats are native-endian):
+The configuration of the device is defined by the `ma_device_config` structure. The config object
+is always initialized with `ma_device_config_init()`. It's important to always initialize the
+config with this function as it initializes it with logical defaults and ensures your program
+doesn't break when new members are added to the `ma_device_config` structure. The example above
+uses a fairly simple and standard device configuration. The call to `ma_device_config_init()` takes
+a single parameter, which is whether or not the device is a playback, capture, duplex or loopback
+device (loopback devices are not supported on all backends). The `config.playback.format` member
+sets the sample format which can be one of the following (all formats are native-endian):
+---------------+----------------------------------------+---------------------------+
| Symbol | Description | Range |
@@ -88,22 +99,30 @@ backends). The `config.playback.format` member sets the sample format which can
| ma_format_u8 | 8-bit unsigned integer | [0, 255] |
+---------------+----------------------------------------+---------------------------+
-The `config.playback.channels` member sets the number of channels to use with the device. The channel count cannot exceed MA_MAX_CHANNELS. The
-`config.sampleRate` member sets the sample rate (which must be the same for both playback and capture in full-duplex configurations). This is usually set to
-44100 or 48000, but can be set to anything. It's recommended to keep this between 8000 and 384000, however.
+The `config.playback.channels` member sets the number of channels to use with the device. The
+channel count cannot exceed MA_MAX_CHANNELS. The `config.sampleRate` member sets the sample rate
+(which must be the same for both playback and capture in full-duplex configurations). This is
+usually set to 44100 or 48000, but can be set to anything. It's recommended to keep this between
+8000 and 384000, however.
-Note that leaving the format, channel count and/or sample rate at their default values will result in the internal device's native configuration being used
-which is useful if you want to avoid the overhead of miniaudio's automatic data conversion.
+Note that leaving the format, channel count and/or sample rate at their default values will result
+in the internal device's native configuration being used, which is useful if you want to avoid the
+overhead of miniaudio's automatic data conversion.
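A configuration that opts into the native format might look like the following sketch. The zero/unset values left by `ma_device_config_init()` are what select the native configuration:

```c
ma_device_config config = ma_device_config_init(ma_device_type_playback);
/* Leave config.playback.format, config.playback.channels and config.sampleRate
   unset so the backend's native format, channel count and sample rate are used. */
config.dataCallback = data_callback;
```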
-In addition to the sample format, channel count and sample rate, the data callback and user data pointer are also set via the config. The user data pointer is
-not passed into the callback as a parameter, but is instead set to the `pUserData` member of `ma_device` which you can access directly since all miniaudio
-structures are transparent.
+In addition to the sample format, channel count and sample rate, the data callback and user data
+pointer are also set via the config. The user data pointer is not passed into the callback as a
+parameter, but is instead set to the `pUserData` member of `ma_device` which you can access
+directly since all miniaudio structures are transparent.
-Initializing the device is done with `ma_device_init()`. This will return a result code telling you what went wrong, if anything. On success it will return
-`MA_SUCCESS`. After initialization is complete the device will be in a stopped state. To start it, use `ma_device_start()`. Uninitializing the device will stop
-it, which is what the example above does, but you can also stop the device with `ma_device_stop()`. To resume the device simply call `ma_device_start()` again.
-Note that it's important to never stop or start the device from inside the callback. This will result in a deadlock. Instead you set a variable or signal an
-event indicating that the device needs to stop and handle it in a different thread. The following APIs must never be called inside the callback:
+Initializing the device is done with `ma_device_init()`. This will return a result code telling you
+what went wrong, if anything. On success it will return `MA_SUCCESS`. After initialization is
+complete the device will be in a stopped state. To start it, use `ma_device_start()`.
+Uninitializing the device will stop it, which is what the example above does, but you can also stop
+the device with `ma_device_stop()`. To resume the device simply call `ma_device_start()` again.
+Note that it's important to never stop or start the device from inside the callback. This will
+result in a deadlock. Instead you set a variable or signal an event indicating that the device
+needs to stop and handle it in a different thread. The following APIs must never be called inside
+the callback:
```c
ma_device_init()
@@ -113,12 +132,14 @@ event indicating that the device needs to stop and handle it in a different thre
ma_device_stop()
```
-You must never try uninitializing and reinitializing a device inside the callback. You must also never try to stop and start it from inside the callback. There
-are a few other things you shouldn't do in the callback depending on your requirements, however this isn't so much a thread-safety thing, but rather a
-real-time processing thing which is beyond the scope of this introduction.
+You must never try uninitializing and reinitializing a device inside the callback. You must also
+never try to stop and start it from inside the callback. There are a few other things you shouldn't
+do in the callback depending on your requirements, however this isn't so much a thread-safety
+thing as a real-time processing thing, which is beyond the scope of this introduction.
-The example above demonstrates the initialization of a playback device, but it works exactly the same for capture. All you need to do is change the device type
-from `ma_device_type_playback` to `ma_device_type_capture` when setting up the config, like so:
+The example above demonstrates the initialization of a playback device, but it works exactly the
+same for capture. All you need to do is change the device type from `ma_device_type_playback` to
+`ma_device_type_capture` when setting up the config, like so:
```c
ma_device_config config = ma_device_config_init(ma_device_type_capture);
@@ -126,8 +147,9 @@ from `ma_device_type_playback` to `ma_device_type_capture` when setting up the c
config.capture.channels = MY_CHANNEL_COUNT;
```
-In the data callback you just read from the input buffer (`pInput` in the example above) and leave the output buffer alone (it will be set to NULL when the
-device type is set to `ma_device_type_capture`).
+In the data callback you just read from the input buffer (`pInput` in the example above) and leave
+the output buffer alone (it will be set to NULL when the device type is set to
+`ma_device_type_capture`).
These are the available device types and how you should handle the buffers in the callback:
@@ -140,23 +162,29 @@ These are the available device types and how you should handle the buffers in th
| ma_device_type_loopback | Read from input buffer, leave output buffer untouched. |
+-------------------------+--------------------------------------------------------+
-You will notice in the example above that the sample format and channel count is specified separately for playback and capture. This is to support different
-data formats between the playback and capture devices in a full-duplex system. An example may be that you want to capture audio data as a monaural stream (one
-channel), but output sound to a stereo speaker system. Note that if you use different formats between playback and capture in a full-duplex configuration you
-will need to convert the data yourself. There are functions available to help you do this which will be explained later.
+You will notice in the example above that the sample format and channel count are specified
+separately for playback and capture. This is to support different data formats between the playback
+and capture devices in a full-duplex system. An example may be that you want to capture audio data
+as a monaural stream (one channel), but output sound to a stereo speaker system. Note that if you
+use different formats between playback and capture in a full-duplex configuration you will need to
+convert the data yourself. There are functions available to help you do this which will be
+explained later.
-The example above did not specify a physical device to connect to which means it will use the operating system's default device. If you have multiple physical
-devices connected and you want to use a specific one you will need to specify the device ID in the configuration, like so:
+The example above did not specify a physical device to connect to which means it will use the
+operating system's default device. If you have multiple physical devices connected and you want to
+use a specific one you will need to specify the device ID in the configuration, like so:
```c
config.playback.pDeviceID = pMyPlaybackDeviceID; // Only if requesting a playback or duplex device.
config.capture.pDeviceID = pMyCaptureDeviceID; // Only if requesting a capture, duplex or loopback device.
```
-To retrieve the device ID you will need to perform device enumeration, however this requires the use of a new concept called the "context". Conceptually
-speaking the context sits above the device. There is one context to many devices. The purpose of the context is to represent the backend at a more global level
-and to perform operations outside the scope of an individual device. Mainly it is used for performing run-time linking against backend libraries, initializing
-backends and enumerating devices. The example below shows how to enumerate devices.
+To retrieve the device ID you will need to perform device enumeration, however this requires the
+use of a new concept called the "context". Conceptually speaking the context sits above the device.
+There is one context to many devices. The purpose of the context is to represent the backend at a
+more global level and to perform operations outside the scope of an individual device. Mainly it is
+used for performing run-time linking against backend libraries, initializing backends and
+enumerating devices. The example below shows how to enumerate devices.
```c
ma_context context;
@@ -197,44 +225,57 @@ backends and enumerating devices. The example below shows how to enumerate devic
ma_context_uninit(&context);
```
-The first thing we do in this example is initialize a `ma_context` object with `ma_context_init()`. The first parameter is a pointer to a list of `ma_backend`
-values which are used to override the default backend priorities. When this is NULL, as in this example, miniaudio's default priorities are used. The second
-parameter is the number of backends listed in the array pointed to by the first parameter. The third parameter is a pointer to a `ma_context_config` object
-which can be NULL, in which case defaults are used. The context configuration is used for setting the logging callback, custom memory allocation callbacks,
-user-defined data and some backend-specific configurations.
+The first thing we do in this example is initialize a `ma_context` object with `ma_context_init()`.
+The first parameter is a pointer to a list of `ma_backend` values which are used to override the
+default backend priorities. When this is NULL, as in this example, miniaudio's default priorities
+are used. The second parameter is the number of backends listed in the array pointed to by the
+first parameter. The third parameter is a pointer to a `ma_context_config` object which can be
+NULL, in which case defaults are used. The context configuration is used for setting the logging
+callback, custom memory allocation callbacks, user-defined data and some backend-specific
+configurations.
-Once the context has been initialized you can enumerate devices. In the example above we use the simpler `ma_context_get_devices()`, however you can also use a
-callback for handling devices by using `ma_context_enumerate_devices()`. When using `ma_context_get_devices()` you provide a pointer to a pointer that will,
-upon output, be set to a pointer to a buffer containing a list of `ma_device_info` structures. You also provide a pointer to an unsigned integer that will
-receive the number of items in the returned buffer. Do not free the returned buffers as their memory is managed internally by miniaudio.
+Once the context has been initialized you can enumerate devices. In the example above we use the
+simpler `ma_context_get_devices()`, however you can also use a callback for handling devices by
+using `ma_context_enumerate_devices()`. When using `ma_context_get_devices()` you provide a pointer
+to a pointer that will, upon output, be set to a pointer to a buffer containing a list of
+`ma_device_info` structures. You also provide a pointer to an unsigned integer that will receive
+the number of items in the returned buffer. Do not free the returned buffers as their memory is
+managed internally by miniaudio.
-The `ma_device_info` structure contains an `id` member which is the ID you pass to the device config. It also contains the name of the device which is useful
-for presenting a list of devices to the user via the UI.
+The `ma_device_info` structure contains an `id` member which is the ID you pass to the device
+config. It also contains the name of the device which is useful for presenting a list of devices
+to the user via the UI.
-When creating your own context you will want to pass it to `ma_device_init()` when initializing the device. Passing in NULL, like we do in the first example,
-will result in miniaudio creating the context for you, which you don't want to do since you've already created a context. Note that internally the context is
-only tracked by it's pointer which means you must not change the location of the `ma_context` object. If this is an issue, consider using `malloc()` to
-allocate memory for the context.
+When creating your own context you will want to pass it to `ma_device_init()` when initializing the
+device. Passing in NULL, like we do in the first example, will result in miniaudio creating the
+context for you, which you don't want to do since you've already created a context. Note that
+internally the context is only tracked by its pointer which means you must not change the location
+of the `ma_context` object. If this is an issue, consider using `malloc()` to allocate memory for
+the context.
2. Building
===========
-miniaudio should work cleanly out of the box without the need to download or install any dependencies. See below for platform-specific details.
+miniaudio should work cleanly out of the box without the need to download or install any
+dependencies. See below for platform-specific details.
2.1. Windows
------------
-The Windows build should compile cleanly on all popular compilers without the need to configure any include paths nor link to any libraries.
+The Windows build should compile cleanly on all popular compilers without the need to configure any
+include paths nor link to any libraries.
2.2. macOS and iOS
------------------
-The macOS build should compile cleanly without the need to download any dependencies nor link to any libraries or frameworks. The iOS build needs to be
-compiled as Objective-C and will need to link the relevant frameworks but should compile cleanly out of the box with Xcode. Compiling through the command line
-requires linking to `-lpthread` and `-lm`.
+The macOS build should compile cleanly without the need to download any dependencies nor link to
+any libraries or frameworks. The iOS build needs to be compiled as Objective-C and will need to
+link the relevant frameworks but should compile cleanly out of the box with Xcode. Compiling
+through the command line requires linking to `-lpthread` and `-lm`.
-Due to the way miniaudio links to frameworks at runtime, your application may not pass Apple's notarization process. To fix this there are two options. The
-first is to use the `MA_NO_RUNTIME_LINKING` option, like so:
+Due to the way miniaudio links to frameworks at runtime, your application may not pass Apple's
+notarization process. To fix this there are two options. The first is to use the
+`MA_NO_RUNTIME_LINKING` option, like so:
```c
#ifdef __APPLE__
@@ -244,8 +285,9 @@ first is to use the `MA_NO_RUNTIME_LINKING` option, like so:
#include "miniaudio.h"
```
-This will require linking with `-framework CoreFoundation -framework CoreAudio -framework AudioUnit`. Alternatively, if you would rather keep using runtime
-linking you can add the following to your entitlements.xcent file:
+This will require linking with `-framework CoreFoundation -framework CoreAudio -framework AudioUnit`.
+Alternatively, if you would rather keep using runtime linking you can add the following to your
+entitlements.xcent file:
```
com.apple.security.cs.allow-dyld-environment-variables
@@ -257,23 +299,28 @@ linking you can add the following to your entitlements.xcent file:
2.3. Linux
----------
-The Linux build only requires linking to `-ldl`, `-lpthread` and `-lm`. You do not need any development packages.
+The Linux build only requires linking to `-ldl`, `-lpthread` and `-lm`. You do not need any
+development packages.
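A typical build command might therefore look like the following (the file names are hypothetical):

```
cc main.c -o my_program -ldl -lpthread -lm
```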
2.4. BSD
--------
-The BSD build only requires linking to `-lpthread` and `-lm`. NetBSD uses audio(4), OpenBSD uses sndio and FreeBSD uses OSS.
+The BSD build only requires linking to `-lpthread` and `-lm`. NetBSD uses audio(4), OpenBSD uses
+sndio and FreeBSD uses OSS.
2.5. Android
------------
-AAudio is the highest priority backend on Android. This should work out of the box without needing any kind of compiler configuration. Support for AAudio
-starts with Android 8 which means older versions will fall back to OpenSL|ES which requires API level 16+.
+AAudio is the highest priority backend on Android. This should work out of the box without needing
+any kind of compiler configuration. Support for AAudio starts with Android 8 which means older
+versions will fall back to OpenSL|ES which requires API level 16+.
-There have been reports that the OpenSL|ES backend fails to initialize on some Android based devices due to `dlopen()` failing to open "libOpenSLES.so". If
-this happens on your platform you'll need to disable run-time linking with `MA_NO_RUNTIME_LINKING` and link with -lOpenSLES.
+There have been reports that the OpenSL|ES backend fails to initialize on some Android-based
+devices due to `dlopen()` failing to open "libOpenSLES.so". If this happens on your platform
+you'll need to disable run-time linking with `MA_NO_RUNTIME_LINKING` and link with -lOpenSLES.
2.6. Emscripten
---------------
-The Emscripten build emits Web Audio JavaScript directly and should compile cleanly out of the box. You cannot use -std=c* compiler flags, nor -ansi.
+The Emscripten build emits Web Audio JavaScript directly and should compile cleanly out of the box.
+You cannot use -std=c* compiler flags, nor -ansi.
2.7. Build Options
@@ -415,29 +462,36 @@ The Emscripten build emits Web Audio JavaScript directly and should compile clea
3. Definitions
==============
-This section defines common terms used throughout miniaudio. Unfortunately there is often ambiguity in the use of terms throughout the audio space, so this
-section is intended to clarify how miniaudio uses each term.
+This section defines common terms used throughout miniaudio. Unfortunately there is often ambiguity
+in the use of terms throughout the audio space, so this section is intended to clarify how miniaudio
+uses each term.
3.1. Sample
-----------
-A sample is a single unit of audio data. If the sample format is f32, then one sample is one 32-bit floating point number.
+A sample is a single unit of audio data. If the sample format is f32, then one sample is one 32-bit
+floating point number.
3.2. Frame / PCM Frame
----------------------
-A frame is a group of samples equal to the number of channels. For a stereo stream a frame is 2 samples, a mono frame is 1 sample, a 5.1 surround sound frame
-is 6 samples, etc. The terms "frame" and "PCM frame" are the same thing in miniaudio. Note that this is different to a compressed frame. If ever miniaudio
-needs to refer to a compressed frame, such as a FLAC frame, it will always clarify what it's referring to with something like "FLAC frame".
+A frame is a group of samples equal to the number of channels. For a stereo stream a frame is 2
+samples, a mono frame is 1 sample, a 5.1 surround sound frame is 6 samples, etc. The terms "frame"
+and "PCM frame" are the same thing in miniaudio. Note that this is different to a compressed frame.
+If ever miniaudio needs to refer to a compressed frame, such as a FLAC frame, it will always
+clarify what it's referring to with something like "FLAC frame".
3.3. Channel
------------
-A stream of monaural audio that is emitted from an individual speaker in a speaker system, or received from an individual microphone in a microphone system. A
-stereo stream has two channels (a left channel, and a right channel), a 5.1 surround sound system has 6 channels, etc. Some audio systems refer to a channel as
-a complex audio stream that's mixed with other channels to produce the final mix - this is completely different to miniaudio's use of the term "channel" and
-should not be confused.
+A stream of monaural audio that is emitted from an individual speaker in a speaker system, or
+received from an individual microphone in a microphone system. A stereo stream has two channels (a
+left channel, and a right channel), a 5.1 surround sound system has 6 channels, etc. Some audio
+systems refer to a channel as a complex audio stream that's mixed with other channels to produce
+the final mix - this is completely different to miniaudio's use of the term "channel" and should
+not be confused.
3.4. Sample Rate
----------------
-The sample rate in miniaudio is always expressed in Hz, such as 44100, 48000, etc. It's the number of PCM frames that are processed per second.
+The sample rate in miniaudio is always expressed in Hz, such as 44100, 48000, etc. It's the number
+of PCM frames that are processed per second.
3.5. Formats
------------
@@ -459,8 +513,8 @@ All formats are native-endian.
4. Decoding
===========
-The `ma_decoder` API is used for reading audio files. Decoders are completely decoupled from devices and can be used independently. The following formats are
-supported:
+The `ma_decoder` API is used for reading audio files. Decoders are completely decoupled from
+devices and can be used independently. The following formats are supported:
+---------+------------------+----------+
| Format | Decoding Backend | Built-In |
@@ -471,7 +525,8 @@ supported:
| Vorbis | stb_vorbis | No |
+---------+------------------+----------+
-Vorbis is supported via stb_vorbis which can be enabled by including the header section before the implementation of miniaudio, like the following:
+Vorbis is supported via stb_vorbis which can be enabled by including the header section before the
+implementation of miniaudio, like the following:
```c
#define STB_VORBIS_HEADER_ONLY
@@ -487,8 +542,9 @@ Vorbis is supported via stb_vorbis which can be enabled by including the header
A copy of stb_vorbis is included in the "extras" folder in the miniaudio repository (https://github.com/mackron/miniaudio).
-Built-in decoders are amalgamated into the implementation section of miniaudio. You can disable the built-in decoders by specifying one or more of the
-following options before the miniaudio implementation:
+Built-in decoders are amalgamated into the implementation section of miniaudio. You can disable the
+built-in decoders by specifying one or more of the following options before the miniaudio
+implementation:
```c
#define MA_NO_WAV
@@ -496,10 +552,12 @@ following options before the miniaudio implementation:
#define MA_NO_FLAC
```
-Disabling built-in decoding libraries is useful if you use these libraries independantly of the `ma_decoder` API.
+Disabling built-in decoding libraries is useful if you use these libraries independently of the
+`ma_decoder` API.
-A decoder can be initialized from a file with `ma_decoder_init_file()`, a block of memory with `ma_decoder_init_memory()`, or from data delivered via callbacks
-with `ma_decoder_init()`. Here is an example for loading a decoder from a file:
+A decoder can be initialized from a file with `ma_decoder_init_file()`, a block of memory with
+`ma_decoder_init_memory()`, or from data delivered via callbacks with `ma_decoder_init()`. Here is
+an example for loading a decoder from a file:
```c
ma_decoder decoder;
@@ -513,17 +571,20 @@ with `ma_decoder_init()`. Here is an example for loading a decoder from a file:
ma_decoder_uninit(&decoder);
```
-When initializing a decoder, you can optionally pass in a pointer to a `ma_decoder_config` object (the `NULL` argument in the example above) which allows you
-to configure the output format, channel count, sample rate and channel map:
+When initializing a decoder, you can optionally pass in a pointer to a `ma_decoder_config` object
+(the `NULL` argument in the example above) which allows you to configure the output format, channel
+count, sample rate and channel map:
```c
ma_decoder_config config = ma_decoder_config_init(ma_format_f32, 2, 48000);
```
-When passing in `NULL` for decoder config in `ma_decoder_init*()`, the output format will be the same as that defined by the decoding backend.
+When passing in `NULL` for decoder config in `ma_decoder_init*()`, the output format will be the
+same as that defined by the decoding backend.
-Data is read from the decoder as PCM frames. This will return the number of PCM frames actually read. If the return value is less than the requested number of
-PCM frames it means you've reached the end:
+Data is read from the decoder as PCM frames. This will return the number of PCM frames actually
+read. If the return value is less than the requested number of PCM frames it means you've reached
+the end:
```c
ma_uint64 framesRead = ma_decoder_read_pcm_frames(pDecoder, pFrames, framesToRead);
@@ -547,8 +608,10 @@ If you want to loop back to the start, you can simply seek back to the first PCM
ma_decoder_seek_to_pcm_frame(pDecoder, 0);
```
-When loading a decoder, miniaudio uses a trial and error technique to find the appropriate decoding backend. This can be unnecessarily inefficient if the type
-is already known. In this case you can use `encodingFormat` variable in the device config to specify a specific encoding format you want to decode:
+When loading a decoder, miniaudio uses a trial and error technique to find the appropriate decoding
+backend. This can be unnecessarily inefficient if the type is already known. In this case you can
+set the `encodingFormat` variable in the decoder config to specify the encoding format you want to
+decode:
```c
decoderConfig.encodingFormat = ma_encoding_format_wav;
@@ -556,21 +619,24 @@ is already known. In this case you can use `encodingFormat` variable in the devi
See the `ma_encoding_format` enum for possible encoding formats.
-The `ma_decoder_init_file()` API will try using the file extension to determine which decoding backend to prefer.
+The `ma_decoder_init_file()` API will try using the file extension to determine which decoding
+backend to prefer.
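
As a sketch of the read loop described above (the buffer size and `CHANNELS` are illustrative
assumptions, and the decoder is assumed to have been initialized as in the earlier example):

```c
/* Sketch: drain an initialized ma_decoder in fixed-size chunks. The f32 format
   and CHANNELS are assumptions carried over from the config example above. */
float buffer[4096];
ma_uint64 framesPerChunk = 4096 / CHANNELS;
for (;;) {
    ma_uint64 framesRead = ma_decoder_read_pcm_frames(&decoder, buffer, framesPerChunk);
    /* ... consume buffer[0 .. framesRead*CHANNELS) ... */
    if (framesRead < framesPerChunk) {
        break;  /* Fewer frames than requested means the end was reached. */
    }
}
```
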
5. Encoding
===========
-The `ma_encoding` API is used for writing audio files. The only supported output format is WAV which is achieved via dr_wav which is amalgamated into the
-implementation section of miniaudio. This can be disabled by specifying the following option before the implementation of miniaudio:
+The `ma_encoding` API is used for writing audio files. The only supported output format is WAV,
+which is handled by dr_wav, amalgamated into the implementation section of miniaudio. This can be
+disabled by specifying the following option before the implementation of miniaudio:
```c
#define MA_NO_WAV
```
-An encoder can be initialized to write to a file with `ma_encoder_init_file()` or from data delivered via callbacks with `ma_encoder_init()`. Below is an
-example for initializing an encoder to output to a file.
+An encoder can be initialized to write to a file with `ma_encoder_init_file()` or from data
+delivered via callbacks with `ma_encoder_init()`. Below is an example of initializing an encoder
+to output to a file.
```c
ma_encoder_config config = ma_encoder_config_init(ma_encoding_format_wav, FORMAT, CHANNELS, SAMPLE_RATE);
@@ -585,8 +651,9 @@ example for initializing an encoder to output to a file.
ma_encoder_uninit(&encoder);
```
-When initializing an encoder you must specify a config which is initialized with `ma_encoder_config_init()`. Here you must specify the file type, the output
-sample format, output channel count and output sample rate. The following file types are supported:
+When initializing an encoder you must specify a config which is initialized with
+`ma_encoder_config_init()`. Here you must specify the file type, the output sample format, output
+channel count and output sample rate. The following file types are supported:
+------------------------+-------------+
| Enum | Description |
@@ -594,8 +661,10 @@ sample format, output channel count and output sample rate. The following file t
| ma_encoding_format_wav | WAV |
+------------------------+-------------+
-If the format, channel count or sample rate is not supported by the output file type an error will be returned. The encoder will not perform data conversion so
-you will need to convert it before outputting any audio data. To output audio data, use `ma_encoder_write_pcm_frames()`, like in the example below:
+If the format, channel count or sample rate is not supported by the output file type, an error will
+be returned. The encoder will not perform data conversion, so you will need to convert your data
+yourself beforehand. To output audio data, use `ma_encoder_write_pcm_frames()`, like in the example
+below:
```c
framesWritten = ma_encoder_write_pcm_frames(&encoder, pPCMFramesToWrite, framesToWrite);
@@ -606,15 +675,18 @@ Encoders must be uninitialized with `ma_encoder_uninit()`.
6. Data Conversion
==================
-A data conversion API is included with miniaudio which supports the majority of data conversion requirements. This supports conversion between sample formats,
-channel counts (with channel mapping) and sample rates.
+A data conversion API is included with miniaudio which supports the majority of data conversion
+requirements. This supports conversion between sample formats, channel counts (with channel
+mapping) and sample rates.
6.1. Sample Format Conversion
-----------------------------
-Conversion between sample formats is achieved with the `ma_pcm_*_to_*()`, `ma_pcm_convert()` and `ma_convert_pcm_frames_format()` APIs. Use `ma_pcm_*_to_*()`
-to convert between two specific formats. Use `ma_pcm_convert()` to convert based on a `ma_format` variable. Use `ma_convert_pcm_frames_format()` to convert
-PCM frames where you want to specify the frame count and channel count as a variable instead of the total sample count.
+Conversion between sample formats is achieved with the `ma_pcm_*_to_*()`, `ma_pcm_convert()` and
+`ma_convert_pcm_frames_format()` APIs. Use `ma_pcm_*_to_*()` to convert between two specific
+formats. Use `ma_pcm_convert()` to convert based on a `ma_format` variable. Use
+`ma_convert_pcm_frames_format()` to convert PCM frames where you want to specify the frame count
+and channel count as a variable instead of the total sample count.
6.1.1. Dithering
@@ -631,8 +703,9 @@ The different dithering modes include the following, in order of efficiency:
| Triangle | ma_dither_mode_triangle |
+-----------+--------------------------+
-Note that even if the dither mode is set to something other than `ma_dither_mode_none`, it will be ignored for conversions where dithering is not needed.
-Dithering is available for the following conversions:
+Note that even if the dither mode is set to something other than `ma_dither_mode_none`, it will be
+ignored for conversions where dithering is not needed. Dithering is available for the following
+conversions:
```
s16 -> u8
@@ -644,14 +717,16 @@ Dithering is available for the following conversions:
f32 -> s16
```
-Note that it is not an error to pass something other than ma_dither_mode_none for conversions where dither is not used. It will just be ignored.
+Note that it is not an error to pass something other than `ma_dither_mode_none` for conversions
+where dither is not used. It will just be ignored.
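
For example, an f32 to s16 conversion with triangular dithering can be performed with
`ma_convert_pcm_frames_format()` (the buffer names here are placeholders):

```c
/* f32 -> s16 with triangular dithering. pFramesOutS16 and pFramesInF32 are placeholders. */
ma_convert_pcm_frames_format(
    pFramesOutS16, ma_format_s16,
    pFramesInF32,  ma_format_f32,
    frameCount, channels, ma_dither_mode_triangle);
```
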
6.2. Channel Conversion
-----------------------
-Channel conversion is used for channel rearrangement and conversion from one channel count to another. The `ma_channel_converter` API is used for channel
-conversion. Below is an example of initializing a simple channel converter which converts from mono to stereo.
+Channel conversion is used for channel rearrangement and conversion from one channel count to
+another. The `ma_channel_converter` API is used for channel conversion. Below is an example of
+initializing a simple channel converter which converts from mono to stereo.
```c
ma_channel_converter_config config = ma_channel_converter_config_init(
@@ -677,34 +752,43 @@ To perform the conversion simply call `ma_channel_converter_process_pcm_frames()
}
```
-It is up to the caller to ensure the output buffer is large enough to accomodate the new PCM frames.
+It is up to the caller to ensure the output buffer is large enough to accommodate the new PCM
+frames.
Input and output PCM frames are always interleaved. Deinterleaved layouts are not supported.
6.2.1. Channel Mapping
----------------------
-In addition to converting from one channel count to another, like the example above, the channel converter can also be used to rearrange channels. When
-initializing the channel converter, you can optionally pass in channel maps for both the input and output frames. If the channel counts are the same, and each
-channel map contains the same channel positions with the exception that they're in a different order, a simple shuffling of the channels will be performed. If,
-however, there is not a 1:1 mapping of channel positions, or the channel counts differ, the input channels will be mixed based on a mixing mode which is
-specified when initializing the `ma_channel_converter_config` object.
+In addition to converting from one channel count to another, like the example above, the channel
+converter can also be used to rearrange channels. When initializing the channel converter, you can
+optionally pass in channel maps for both the input and output frames. If the channel counts are the
+same, and each channel map contains the same channel positions with the exception that they're in
+a different order, a simple shuffling of the channels will be performed. If, however, there is not
+a 1:1 mapping of channel positions, or the channel counts differ, the input channels will be mixed
+based on a mixing mode which is specified when initializing the `ma_channel_converter_config`
+object.
-When converting from mono to multi-channel, the mono channel is simply copied to each output channel. When going the other way around, the audio of each output
-channel is simply averaged and copied to the mono channel.
+When converting from mono to multi-channel, the mono channel is simply copied to each output
+channel. When going the other way around, the audio of each output channel is simply averaged and
+copied to the mono channel.
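
As a hypothetical illustration of the averaging behaviour described above (plain C, not a
miniaudio API), a stereo to mono downmix might look like this:

```c
#include <assert.h>

/* Hypothetical helper (not part of miniaudio): average interleaved stereo
   frames down to mono, as described for the multi-channel to mono case. */
static void stereo_to_mono_f32(float* pMono, const float* pStereo, int frameCount)
{
    int i;
    for (i = 0; i < frameCount; i += 1) {
        pMono[i] = (pStereo[i*2 + 0] + pStereo[i*2 + 1]) * 0.5f;
    }
}
```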
-In more complicated cases blending is used. The `ma_channel_mix_mode_simple` mode will drop excess channels and silence extra channels. For example, converting
-from 4 to 2 channels, the 3rd and 4th channels will be dropped, whereas converting from 2 to 4 channels will put silence into the 3rd and 4th channels.
+In more complicated cases blending is used. The `ma_channel_mix_mode_simple` mode will drop excess
+channels and silence extra channels. For example, converting from 4 to 2 channels, the 3rd and 4th
+channels will be dropped, whereas converting from 2 to 4 channels will put silence into the 3rd and
+4th channels.
-The `ma_channel_mix_mode_rectangle` mode uses spacial locality based on a rectangle to compute a simple distribution between input and output. Imagine sitting
-in the middle of a room, with speakers on the walls representing channel positions. The MA_CHANNEL_FRONT_LEFT position can be thought of as being in the corner
-of the front and left walls.
+The `ma_channel_mix_mode_rectangle` mode uses spatial locality based on a rectangle to compute a
+simple distribution between input and output. Imagine sitting in the middle of a room, with
+speakers on the walls representing channel positions. The `MA_CHANNEL_FRONT_LEFT` position can be
+thought of as being in the corner of the front and left walls.
-Finally, the `ma_channel_mix_mode_custom_weights` mode can be used to use custom user-defined weights. Custom weights can be passed in as the last parameter of
+Finally, the `ma_channel_mix_mode_custom_weights` mode can be used to apply custom user-defined
+weights. Custom weights can be passed in as the last parameter of
`ma_channel_converter_config_init()`.
-Predefined channel maps can be retrieved with `ma_get_standard_channel_map()`. This takes a `ma_standard_channel_map` enum as it's first parameter, which can
-be one of the following:
+Predefined channel maps can be retrieved with `ma_get_standard_channel_map()`. This takes a
+`ma_standard_channel_map` enum as its first parameter, which can be one of the following:
+-----------------------------------+-----------------------------------------------------------+
| Name | Description |
@@ -778,7 +862,8 @@ Below are the channel maps used by default in miniaudio (`ma_standard_channel_ma
6.3. Resampling
---------------
-Resampling is achieved with the `ma_resampler` object. To create a resampler object, do something like the following:
+Resampling is achieved with the `ma_resampler` object. To create a resampler object, do something
+like the following:
```c
ma_resampler_config config = ma_resampler_config_init(
@@ -815,16 +900,20 @@ The following example shows how data can be processed
// number of output frames written.
```
-To initialize the resampler you first need to set up a config (`ma_resampler_config`) with `ma_resampler_config_init()`. You need to specify the sample format
-you want to use, the number of channels, the input and output sample rate, and the algorithm.
+To initialize the resampler you first need to set up a config (`ma_resampler_config`) with
+`ma_resampler_config_init()`. You need to specify the sample format you want to use, the number of
+channels, the input and output sample rate, and the algorithm.
-The sample format can be either `ma_format_s16` or `ma_format_f32`. If you need a different format you will need to perform pre- and post-conversions yourself
-where necessary. Note that the format is the same for both input and output. The format cannot be changed after initialization.
+The sample format can be either `ma_format_s16` or `ma_format_f32`. If you need a different format
+you will need to perform pre- and post-conversions yourself where necessary. Note that the format
+is the same for both input and output. The format cannot be changed after initialization.
-The resampler supports multiple channels and is always interleaved (both input and output). The channel count cannot be changed after initialization.
+The resampler supports multiple channels and is always interleaved (both input and output). The
+channel count cannot be changed after initialization.
-The sample rates can be anything other than zero, and are always specified in hertz. They should be set to something like 44100, etc. The sample rate is the
-only configuration property that can be changed after initialization.
+The sample rates can be anything other than zero, and are always specified in hertz. They should be
+set to something like 44100, etc. The sample rate is the only configuration property that can be
+changed after initialization.
The miniaudio resampler has built-in support for the following algorithms:
@@ -836,21 +925,27 @@ The miniaudio resampler has built-in support for the following algorithms:
The algorithm cannot be changed after initialization.
-Processing always happens on a per PCM frame basis and always assumes interleaved input and output. De-interleaved processing is not supported. To process
-frames, use `ma_resampler_process_pcm_frames()`. On input, this function takes the number of output frames you can fit in the output buffer and the number of
-input frames contained in the input buffer. On output these variables contain the number of output frames that were written to the output buffer and the
-number of input frames that were consumed in the process. You can pass in NULL for the input buffer in which case it will be treated as an infinitely large
-buffer of zeros. The output buffer can also be NULL, in which case the processing will be treated as seek.
+Processing always happens on a per PCM frame basis and always assumes interleaved input and output.
+De-interleaved processing is not supported. To process frames, use
+`ma_resampler_process_pcm_frames()`. On input, this function takes the number of output frames you
+can fit in the output buffer and the number of input frames contained in the input buffer. On
+output these variables contain the number of output frames that were written to the output buffer
+and the number of input frames that were consumed in the process. You can pass in NULL for the
+input buffer in which case it will be treated as an infinitely large buffer of zeros. The output
+buffer can also be NULL, in which case the processing will be treated as a seek.
-The sample rate can be changed dynamically on the fly. You can change this with explicit sample rates with `ma_resampler_set_rate()` and also with a decimal
-ratio with `ma_resampler_set_rate_ratio()`. The ratio is in/out.
+The sample rate can be changed dynamically on the fly. You can change this with explicit sample
+rates with `ma_resampler_set_rate()` and also with a decimal ratio with
+`ma_resampler_set_rate_ratio()`. The ratio is in/out.
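
For example (assuming an initialized `resampler` object):

```c
/* Change the rate with explicit input and output sample rates... */
ma_resampler_set_rate(&resampler, 44100, 48000);

/* ...or equivalently with a ratio, which is expressed as in/out. */
ma_resampler_set_rate_ratio(&resampler, 44100.0f / 48000.0f);
```
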
-Sometimes it's useful to know exactly how many input frames will be required to output a specific number of frames. You can calculate this with
-`ma_resampler_get_required_input_frame_count()`. Likewise, it's sometimes useful to know exactly how many frames would be output given a certain number of
-input frames. You can do this with `ma_resampler_get_expected_output_frame_count()`.
+Sometimes it's useful to know exactly how many input frames will be required to output a specific
+number of frames. You can calculate this with `ma_resampler_get_required_input_frame_count()`.
+Likewise, it's sometimes useful to know exactly how many frames would be output given a certain
+number of input frames. You can do this with `ma_resampler_get_expected_output_frame_count()`.
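
For example (assuming an initialized `resampler`; the frame count of 1024 is illustrative):

```c
/* How many input frames are needed to produce 1024 output frames? */
ma_uint64 requiredInputFrames = ma_resampler_get_required_input_frame_count(&resampler, 1024);

/* How many output frames would 1024 input frames produce? */
ma_uint64 expectedOutputFrames = ma_resampler_get_expected_output_frame_count(&resampler, 1024);
```
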
-Due to the nature of how resampling works, the resampler introduces some latency. This can be retrieved in terms of both the input rate and the output rate
-with `ma_resampler_get_input_latency()` and `ma_resampler_get_output_latency()`.
+Due to the nature of how resampling works, the resampler introduces some latency. This can be
+retrieved in terms of both the input rate and the output rate with
+`ma_resampler_get_input_latency()` and `ma_resampler_get_output_latency()`.
6.3.1. Resampling Algorithms
@@ -860,28 +955,31 @@ The choice of resampling algorithm depends on your situation and requirements.
6.3.1.1. Linear Resampling
--------------------------
-The linear resampler is the fastest, but comes at the expense of poorer quality. There is, however, some control over the quality of the linear resampler which
-may make it a suitable option depending on your requirements.
+The linear resampler is the fastest, but comes at the expense of poorer quality. There is, however,
+some control over the quality of the linear resampler which may make it a suitable option depending
+on your requirements.
-The linear resampler performs low-pass filtering before or after downsampling or upsampling, depending on the sample rates you're converting between. When
-decreasing the sample rate, the low-pass filter will be applied before downsampling. When increasing the rate it will be performed after upsampling. By default
-a fourth order low-pass filter will be applied. This can be configured via the `lpfOrder` configuration variable. Setting this to 0 will disable filtering.
+The linear resampler performs low-pass filtering before or after downsampling or upsampling,
+depending on the sample rates you're converting between. When decreasing the sample rate, the
+low-pass filter will be applied before downsampling. When increasing the rate it will be performed
+after upsampling. By default a fourth order low-pass filter will be applied. This can be configured
+via the `lpfOrder` configuration variable. Setting this to 0 will disable filtering.
-The low-pass filter has a cutoff frequency which defaults to half the sample rate of the lowest of the input and output sample rates (Nyquist Frequency). This
-can be controlled with the `lpfNyquistFactor` config variable. This defaults to 1, and should be in the range of 0..1, although a value of 0 does not make
-sense and should be avoided. A value of 1 will use the Nyquist Frequency as the cutoff. A value of 0.5 will use half the Nyquist Frequency as the cutoff, etc.
-Values less than 1 will result in more washed out sound due to more of the higher frequencies being removed. This config variable has no impact on performance
-and is a purely perceptual configuration.
+The low-pass filter has a cutoff frequency which defaults to half the sample rate of the lowest of
+the input and output sample rates (Nyquist Frequency).
-The API for the linear resampler is the same as the main resampler API, only it's called `ma_linear_resampler`.
+The API for the linear resampler is the same as the main resampler API, only it's called
+`ma_linear_resampler`.
6.4. General Data Conversion
----------------------------
-The `ma_data_converter` API can be used to wrap sample format conversion, channel conversion and resampling into one operation. This is what miniaudio uses
-internally to convert between the format requested when the device was initialized and the format of the backend's native device. The API for general data
-conversion is very similar to the resampling API. Create a `ma_data_converter` object like this:
+The `ma_data_converter` API can be used to wrap sample format conversion, channel conversion and
+resampling into one operation. This is what miniaudio uses internally to convert between the format
+requested when the device was initialized and the format of the backend's native device. The API
+for general data conversion is very similar to the resampling API. Create a `ma_data_converter`
+object like this:
```c
ma_data_converter_config config = ma_data_converter_config_init(
@@ -900,8 +998,9 @@ conversion is very similar to the resampling API. Create a `ma_data_converter` o
}
```
-In the example above we use `ma_data_converter_config_init()` to initialize the config, however there's many more properties that can be configured, such as
-channel maps and resampling quality. Something like the following may be more suitable depending on your requirements:
+In the example above we use `ma_data_converter_config_init()` to initialize the config; however,
+there are many more properties that can be configured, such as channel maps and resampling quality.
+Something like the following may be more suitable depending on your requirements:
```c
ma_data_converter_config config = ma_data_converter_config_init_default();
@@ -935,25 +1034,34 @@ The following example shows how data can be processed
// of output frames written.
```
-The data converter supports multiple channels and is always interleaved (both input and output). The channel count cannot be changed after initialization.
+The data converter supports multiple channels and is always interleaved (both input and output).
+The channel count cannot be changed after initialization.
-Sample rates can be anything other than zero, and are always specified in hertz. They should be set to something like 44100, etc. The sample rate is the only
-configuration property that can be changed after initialization, but only if the `resampling.allowDynamicSampleRate` member of `ma_data_converter_config` is
-set to `MA_TRUE`. To change the sample rate, use `ma_data_converter_set_rate()` or `ma_data_converter_set_rate_ratio()`. The ratio must be in/out. The
-resampling algorithm cannot be changed after initialization.
+Sample rates can be anything other than zero, and are always specified in hertz. They should be set
+to something like 44100, etc. The sample rate is the only configuration property that can be
+changed after initialization, but only if the `resampling.allowDynamicSampleRate` member of
+`ma_data_converter_config` is set to `MA_TRUE`. To change the sample rate, use
+`ma_data_converter_set_rate()` or `ma_data_converter_set_rate_ratio()`. The ratio must be in/out.
+The resampling algorithm cannot be changed after initialization.
-Processing always happens on a per PCM frame basis and always assumes interleaved input and output. De-interleaved processing is not supported. To process
-frames, use `ma_data_converter_process_pcm_frames()`. On input, this function takes the number of output frames you can fit in the output buffer and the number
-of input frames contained in the input buffer. On output these variables contain the number of output frames that were written to the output buffer and the
-number of input frames that were consumed in the process. You can pass in NULL for the input buffer in which case it will be treated as an infinitely large
-buffer of zeros. The output buffer can also be NULL, in which case the processing will be treated as seek.
+Processing always happens on a per PCM frame basis and always assumes interleaved input and output.
+De-interleaved processing is not supported. To process frames, use
+`ma_data_converter_process_pcm_frames()`. On input, this function takes the number of output frames
+you can fit in the output buffer and the number of input frames contained in the input buffer. On
+output these variables contain the number of output frames that were written to the output buffer
+and the number of input frames that were consumed in the process. You can pass in NULL for the
+input buffer in which case it will be treated as an infinitely large buffer of zeros. The output
+buffer can also be NULL, in which case the processing will be treated as a seek.
-Sometimes it's useful to know exactly how many input frames will be required to output a specific number of frames. You can calculate this with
-`ma_data_converter_get_required_input_frame_count()`. Likewise, it's sometimes useful to know exactly how many frames would be output given a certain number of
-input frames. You can do this with `ma_data_converter_get_expected_output_frame_count()`.
+Sometimes it's useful to know exactly how many input frames will be required to output a specific
+number of frames. You can calculate this with `ma_data_converter_get_required_input_frame_count()`.
+Likewise, it's sometimes useful to know exactly how many frames would be output given a certain
+number of input frames. You can do this with `ma_data_converter_get_expected_output_frame_count()`.
-Due to the nature of how resampling works, the data converter introduces some latency if resampling is required. This can be retrieved in terms of both the
-input rate and the output rate with `ma_data_converter_get_input_latency()` and `ma_data_converter_get_output_latency()`.
+Due to the nature of how resampling works, the data converter introduces some latency if resampling
+is required. This can be retrieved in terms of both the input rate and the output rate with
+`ma_data_converter_get_input_latency()` and `ma_data_converter_get_output_latency()`.
@@ -976,24 +1084,29 @@ Biquad filtering is achieved with the `ma_biquad` API. Example:
ma_biquad_process_pcm_frames(&biquad, pFramesOut, pFramesIn, frameCount);
```
-Biquad filtering is implemented using transposed direct form 2. The numerator coefficients are b0, b1 and b2, and the denominator coefficients are a0, a1 and
-a2. The a0 coefficient is required and coefficients must not be pre-normalized.
+Biquad filtering is implemented using transposed direct form 2. The numerator coefficients are b0,
+b1 and b2, and the denominator coefficients are a0, a1 and a2. The a0 coefficient is required and
+coefficients must not be pre-normalized.
-Supported formats are `ma_format_s16` and `ma_format_f32`. If you need to use a different format you need to convert it yourself beforehand. When using
-`ma_format_s16` the biquad filter will use fixed point arithmetic. When using `ma_format_f32`, floating point arithmetic will be used.
+Supported formats are `ma_format_s16` and `ma_format_f32`. If you need to use a different format
+you need to convert it yourself beforehand. When using `ma_format_s16` the biquad filter will use
+fixed point arithmetic. When using `ma_format_f32`, floating point arithmetic will be used.
Input and output frames are always interleaved.
-Filtering can be applied in-place by passing in the same pointer for both the input and output buffers, like so:
+Filtering can be applied in-place by passing in the same pointer for both the input and output
+buffers, like so:
```c
ma_biquad_process_pcm_frames(&biquad, pMyData, pMyData, frameCount);
```
-If you need to change the values of the coefficients, but maintain the values in the registers you can do so with `ma_biquad_reinit()`. This is useful if you
-need to change the properties of the filter while keeping the values of registers valid to avoid glitching. Do not use `ma_biquad_init()` for this as it will
-do a full initialization which involves clearing the registers to 0. Note that changing the format or channel count after initialization is invalid and will
-result in an error.
+If you need to change the values of the coefficients, but maintain the values in the registers you
+can do so with `ma_biquad_reinit()`. This is useful if you need to change the properties of the
+filter while keeping the values of registers valid to avoid glitching. Do not use
+`ma_biquad_init()` for this as it will do a full initialization which involves clearing the
+registers to 0. Note that changing the format or channel count after initialization is invalid and
+will result in an error.
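
For example, updating the coefficients while preserving register state (the coefficient values
here are placeholders):

```c
/* New coefficients; b0..b2 and a0..a2 are placeholders for your new values. */
ma_biquad_config newConfig = ma_biquad_config_init(format, channels, b0, b1, b2, a0, a1, a2);
ma_biquad_reinit(&newConfig, &biquad);  /* Keeps register state, unlike ma_biquad_init(). */
```
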
7.2. Low-Pass Filtering
@@ -1022,16 +1135,18 @@ Low-pass filter example:
ma_lpf_process_pcm_frames(&lpf, pFramesOut, pFramesIn, frameCount);
```
-Supported formats are `ma_format_s16` and` ma_format_f32`. If you need to use a different format you need to convert it yourself beforehand. Input and output
-frames are always interleaved.
+Supported formats are `ma_format_s16` and `ma_format_f32`. If you need to use a different format
+you need to convert it yourself beforehand. Input and output frames are always interleaved.
-Filtering can be applied in-place by passing in the same pointer for both the input and output buffers, like so:
+Filtering can be applied in-place by passing in the same pointer for both the input and output
+buffers, like so:
```c
ma_lpf_process_pcm_frames(&lpf, pMyData, pMyData, frameCount);
```
-The maximum filter order is limited to `MA_MAX_FILTER_ORDER` which is set to 8. If you need more, you can chain first and second order filters together.
+The maximum filter order is limited to `MA_MAX_FILTER_ORDER` which is set to 8. If you need more,
+you can chain first and second order filters together.
```c
for (iFilter = 0; iFilter < filterCount; iFilter += 1) {
@@ -1039,15 +1154,18 @@ The maximum filter order is limited to `MA_MAX_FILTER_ORDER` which is set to 8.
}
```
-If you need to change the configuration of the filter, but need to maintain the state of internal registers you can do so with `ma_lpf_reinit()`. This may be
-useful if you need to change the sample rate and/or cutoff frequency dynamically while maintaing smooth transitions. Note that changing the format or channel
-count after initialization is invalid and will result in an error.
+If you need to change the configuration of the filter, but need to maintain the state of internal
+registers you can do so with `ma_lpf_reinit()`. This may be useful if you need to change the sample
+rate and/or cutoff frequency dynamically while maintaining smooth transitions. Note that changing
+the format or channel count after initialization is invalid and will result in an error.
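
For example, changing the cutoff frequency on the fly (variable names are placeholders):

```c
/* Rebuild the config with the new cutoff, then reinitialize in place. */
ma_lpf_config config = ma_lpf_config_init(format, channels, sampleRate, newCutoffFrequency, order);
ma_lpf_reinit(&config, &lpf);  /* Preserves internal register state for a smooth transition. */
```
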
-The `ma_lpf` object supports a configurable order, but if you only need a first order filter you may want to consider using `ma_lpf1`. Likewise, if you only
-need a second order filter you can use `ma_lpf2`. The advantage of this is that they're lighter weight and a bit more efficient.
+The `ma_lpf` object supports a configurable order, but if you only need a first order filter you
+may want to consider using `ma_lpf1`. Likewise, if you only need a second order filter you can use
+`ma_lpf2`. The advantage of this is that they're lighter weight and a bit more efficient.
-If an even filter order is specified, a series of second order filters will be processed in a chain. If an odd filter order is specified, a first order filter
-will be applied, followed by a series of second order filters in a chain.
+If an even filter order is specified, a series of second order filters will be processed in a
+chain. If an odd filter order is specified, a first order filter will be applied, followed by a
+series of second order filters in a chain.
7.3. High-Pass Filtering
@@ -1062,8 +1180,8 @@ High-pass filtering is achieved with the following APIs:
| ma_hpf | High order high-pass filter (Butterworth) |
+---------+-------------------------------------------+
-High-pass filters work exactly the same as low-pass filters, only the APIs are called `ma_hpf1`, `ma_hpf2` and `ma_hpf`. See example code for low-pass filters
-for example usage.
+High-pass filters work exactly the same as low-pass filters, only the APIs are called `ma_hpf1`,
+`ma_hpf2` and `ma_hpf`. See example code for low-pass filters for example usage.
7.4. Band-Pass Filtering
@@ -1077,9 +1195,10 @@ Band-pass filtering is achieved with the following APIs:
| ma_bpf | High order band-pass filter |
+---------+-------------------------------+
-Band-pass filters work exactly the same as low-pass filters, only the APIs are called `ma_bpf2` and `ma_hpf`. See example code for low-pass filters for example
-usage. Note that the order for band-pass filters must be an even number which means there is no first order band-pass filter, unlike low-pass and high-pass
-filters.
+Band-pass filters work exactly the same as low-pass filters, only the APIs are called `ma_bpf2` and
+`ma_bpf`. See example code for low-pass filters for example usage. Note that the order for
+band-pass filters must be an even number, which means there is no first order band-pass filter,
+unlike low-pass and high-pass filters.
7.5. Notch Filtering
@@ -1114,7 +1233,8 @@ Low shelf filtering is achieved with the following APIs:
| ma_loshelf2 | Second order low shelf filter |
+-------------+------------------------------------------+
-Where a high-pass filter is used to eliminate lower frequencies, a low shelf filter can be used to just turn them down rather than eliminate them entirely.
+Where a high-pass filter is used to eliminate lower frequencies, a low shelf filter can be used to
+just turn them down rather than eliminate them entirely.
7.8. High Shelf Filtering
@@ -1127,8 +1247,9 @@ High shelf filtering is achieved with the following APIs:
| ma_hishelf2 | Second order high shelf filter |
+-------------+------------------------------------------+
-The high shelf filter has the same API as the low shelf filter, only you would use `ma_hishelf` instead of `ma_loshelf`. Where a low shelf filter is used to
-adjust the volume of low frequencies, the high shelf filter does the same thing for high frequencies.
+The high shelf filter has the same API as the low shelf filter, only you would use `ma_hishelf`
+instead of `ma_loshelf`. Where a low shelf filter is used to adjust the volume of low frequencies,
+the high shelf filter does the same thing for high frequencies.
@@ -1138,7 +1259,8 @@ adjust the volume of low frequencies, the high shelf filter does the same thing
8.1. Waveforms
--------------
-miniaudio supports generation of sine, square, triangle and sawtooth waveforms. This is achieved with the `ma_waveform` API. Example:
+miniaudio supports generation of sine, square, triangle and sawtooth waveforms. This is achieved
+with the `ma_waveform` API. Example:
```c
ma_waveform_config config = ma_waveform_config_init(
@@ -1160,11 +1282,12 @@ miniaudio supports generation of sine, square, triangle and sawtooth waveforms.
ma_waveform_read_pcm_frames(&waveform, pOutput, frameCount);
```
-The amplitude, frequency, type, and sample rate can be changed dynamically with `ma_waveform_set_amplitude()`, `ma_waveform_set_frequency()`,
-`ma_waveform_set_type()`, and `ma_waveform_set_sample_rate()` respectively.
+The amplitude, frequency, type, and sample rate can be changed dynamically with
+`ma_waveform_set_amplitude()`, `ma_waveform_set_frequency()`, `ma_waveform_set_type()`, and
+`ma_waveform_set_sample_rate()` respectively.
-You can invert the waveform by setting the amplitude to a negative value. You can use this to control whether or not a sawtooth has a positive or negative
-ramp, for example.
+You can invert the waveform by setting the amplitude to a negative value. You can use this to
+control whether or not a sawtooth has a positive or negative ramp, for example.
Below are the supported waveform types:
@@ -1202,13 +1325,16 @@ miniaudio supports generation of white, pink and Brownian noise via the `ma_nois
ma_noise_read_pcm_frames(&noise, pOutput, frameCount);
```
-The noise API uses simple LCG random number generation. It supports a custom seed which is useful for things like automated testing requiring reproducibility.
-Setting the seed to zero will default to `MA_DEFAULT_LCG_SEED`.
+The noise API uses simple LCG random number generation. It supports a custom seed which is useful
+for things like automated testing requiring reproducibility. Setting the seed to zero will default
+to `MA_DEFAULT_LCG_SEED`.
-The amplitude, seed, and type can be changed dynamically with `ma_noise_set_amplitude()`, `ma_noise_set_seed()`, and `ma_noise_set_type()` respectively.
+The amplitude, seed, and type can be changed dynamically with `ma_noise_set_amplitude()`,
+`ma_noise_set_seed()`, and `ma_noise_set_type()` respectively.
-By default, the noise API will use different values for different channels. So, for example, the left side in a stereo stream will be different to the right
-side. To instead have each channel use the same random value, set the `duplicateChannels` member of the noise config to true, like so:
+By default, the noise API will use different values for different channels. So, for example, the
+left side in a stereo stream will be different to the right side. To instead have each channel use
+the same random value, set the `duplicateChannels` member of the noise config to true, like so:
```c
config.duplicateChannels = MA_TRUE;
@@ -1228,8 +1354,9 @@ Below are the supported noise types.
9. Audio Buffers
================
-miniaudio supports reading from a buffer of raw audio data via the `ma_audio_buffer` API. This can read from memory that's managed by the application, but
-can also handle the memory management for you internally. Memory management is flexible and should support most use cases.
+miniaudio supports reading from a buffer of raw audio data via the `ma_audio_buffer` API. This can
+read from memory that's managed by the application, but can also handle the memory management for
+you internally. Memory management is flexible and should support most use cases.
Audio buffers are initialised using the standard configuration system used everywhere in miniaudio:
@@ -1252,11 +1379,14 @@ Audio buffers are initialised using the standard configuration system used every
ma_audio_buffer_uninit(&buffer);
```
-In the example above, the memory pointed to by `pExistingData` will *not* be copied and is how an application can do self-managed memory allocation. If you
-would rather make a copy of the data, use `ma_audio_buffer_init_copy()`. To uninitialize the buffer, use `ma_audio_buffer_uninit()`.
+In the example above, the memory pointed to by `pExistingData` will *not* be copied and is how an
+application can do self-managed memory allocation. If you would rather make a copy of the data, use
+`ma_audio_buffer_init_copy()`. To uninitialize the buffer, use `ma_audio_buffer_uninit()`.
-Sometimes it can be convenient to allocate the memory for the `ma_audio_buffer` structure and the raw audio data in a contiguous block of memory. That is,
-the raw audio data will be located immediately after the `ma_audio_buffer` structure. To do this, use `ma_audio_buffer_alloc_and_init()`:
+Sometimes it can be convenient to allocate the memory for the `ma_audio_buffer` structure and the
+raw audio data in a contiguous block of memory. That is, the raw audio data will be located
+immediately after the `ma_audio_buffer` structure. To do this, use
+`ma_audio_buffer_alloc_and_init()`:
```c
ma_audio_buffer_config config = ma_audio_buffer_config_init(
@@ -1277,13 +1407,18 @@ the raw audio data will be located immediately after the `ma_audio_buffer` struc
ma_audio_buffer_uninit_and_free(&buffer);
```
-If you initialize the buffer with `ma_audio_buffer_alloc_and_init()` you should uninitialize it with `ma_audio_buffer_uninit_and_free()`. In the example above,
-the memory pointed to by `pExistingData` will be copied into the buffer, which is contrary to the behavior of `ma_audio_buffer_init()`.
+If you initialize the buffer with `ma_audio_buffer_alloc_and_init()` you should uninitialize it
+with `ma_audio_buffer_uninit_and_free()`. In the example above, the memory pointed to by
+`pExistingData` will be copied into the buffer, which is contrary to the behavior of
+`ma_audio_buffer_init()`.
-An audio buffer has a playback cursor just like a decoder. As you read frames from the buffer, the cursor moves forward. The last parameter (`loop`) can be
-used to determine if the buffer should loop. The return value is the number of frames actually read. If this is less than the number of frames requested it
-means the end has been reached. This should never happen if the `loop` parameter is set to true. If you want to manually loop back to the start, you can do so
-with with `ma_audio_buffer_seek_to_pcm_frame(pAudioBuffer, 0)`. Below is an example for reading data from an audio buffer.
+An audio buffer has a playback cursor just like a decoder. As you read frames from the buffer, the
+cursor moves forward. The last parameter (`loop`) can be used to determine if the buffer should
+loop. The return value is the number of frames actually read. If this is less than the number of
+frames requested it means the end has been reached. This should never happen if the `loop`
+parameter is set to true. If you want to manually loop back to the start, you can do so with
+`ma_audio_buffer_seek_to_pcm_frame(pAudioBuffer, 0)`. Below is an example for reading data from an
+audio buffer.
```c
ma_uint64 framesRead = ma_audio_buffer_read_pcm_frames(pAudioBuffer, pFramesOut, desiredFrameCount, isLooping);
@@ -1292,8 +1427,8 @@ with with `ma_audio_buffer_seek_to_pcm_frame(pAudioBuffer, 0)`. Below is an exam
}
```
-Sometimes you may want to avoid the cost of data movement between the internal buffer and the output buffer. Instead you can use memory mapping to retrieve a
-pointer to a segment of data:
+Sometimes you may want to avoid the cost of data movement between the internal buffer and the
+output buffer. Instead you can use memory mapping to retrieve a pointer to a segment of data:
```c
void* pMappedFrames;
@@ -1309,23 +1444,30 @@ pointer to a segment of data:
}
```
-When you use memory mapping, the read cursor is increment by the frame count passed in to `ma_audio_buffer_unmap()`. If you decide not to process every frame
-you can pass in a value smaller than the value returned by `ma_audio_buffer_map()`. The disadvantage to using memory mapping is that it does not handle looping
-for you. You can determine if the buffer is at the end for the purpose of looping with `ma_audio_buffer_at_end()` or by inspecting the return value of
-`ma_audio_buffer_unmap()` and checking if it equals `MA_AT_END`. You should not treat `MA_AT_END` as an error when returned by `ma_audio_buffer_unmap()`.
+When you use memory mapping, the read cursor is incremented by the frame count passed in to
+`ma_audio_buffer_unmap()`. If you decide not to process every frame you can pass in a value smaller
+than the value returned by `ma_audio_buffer_map()`. The disadvantage to using memory mapping is
+that it does not handle looping for you. You can determine if the buffer is at the end for the
+purpose of looping with `ma_audio_buffer_at_end()` or by inspecting the return value of
+`ma_audio_buffer_unmap()` and checking if it equals `MA_AT_END`. You should not treat `MA_AT_END`
+as an error when returned by `ma_audio_buffer_unmap()`.
10. Ring Buffers
================
-miniaudio supports lock free (single producer, single consumer) ring buffers which are exposed via the `ma_rb` and `ma_pcm_rb` APIs. The `ma_rb` API operates
-on bytes, whereas the `ma_pcm_rb` operates on PCM frames. They are otherwise identical as `ma_pcm_rb` is just a wrapper around `ma_rb`.
+miniaudio supports lock-free (single producer, single consumer) ring buffers which are exposed via
+the `ma_rb` and `ma_pcm_rb` APIs. The `ma_rb` API operates on bytes, whereas the `ma_pcm_rb`
+operates on PCM frames. They are otherwise identical as `ma_pcm_rb` is just a wrapper around
+`ma_rb`.
-Unlike most other APIs in miniaudio, ring buffers support both interleaved and deinterleaved streams. The caller can also allocate their own backing memory for
-the ring buffer to use internally for added flexibility. Otherwise the ring buffer will manage it's internal memory for you.
+Unlike most other APIs in miniaudio, ring buffers support both interleaved and deinterleaved
+streams. The caller can also allocate their own backing memory for the ring buffer to use
+internally for added flexibility. Otherwise the ring buffer will manage its internal memory for
+you.
-The examples below use the PCM frame variant of the ring buffer since that's most likely the one you will want to use. To initialize a ring buffer, do
-something like the following:
+The examples below use the PCM frame variant of the ring buffer since that's most likely the one
+you will want to use. To initialize a ring buffer, do something like the following:
```c
ma_pcm_rb rb;
@@ -1335,35 +1477,49 @@ something like the following:
}
```
-The `ma_pcm_rb_init()` function takes the sample format and channel count as parameters because it's the PCM varient of the ring buffer API. For the regular
-ring buffer that operates on bytes you would call `ma_rb_init()` which leaves these out and just takes the size of the buffer in bytes instead of frames. The
-fourth parameter is an optional pre-allocated buffer and the fifth parameter is a pointer to a `ma_allocation_callbacks` structure for custom memory allocation
-routines. Passing in `NULL` for this results in `MA_MALLOC()` and `MA_FREE()` being used.
+The `ma_pcm_rb_init()` function takes the sample format and channel count as parameters because
+it's the PCM variant of the ring buffer API. For the regular ring buffer that operates on bytes you
+would call `ma_rb_init()` which leaves these out and just takes the size of the buffer in bytes
+instead of frames. The fourth parameter is an optional pre-allocated buffer and the fifth parameter
+is a pointer to a `ma_allocation_callbacks` structure for custom memory allocation routines.
+Passing in `NULL` for this results in `MA_MALLOC()` and `MA_FREE()` being used.
-Use `ma_pcm_rb_init_ex()` if you need a deinterleaved buffer. The data for each sub-buffer is offset from each other based on the stride. To manage your
-sub-buffers you can use `ma_pcm_rb_get_subbuffer_stride()`, `ma_pcm_rb_get_subbuffer_offset()` and `ma_pcm_rb_get_subbuffer_ptr()`.
+Use `ma_pcm_rb_init_ex()` if you need a deinterleaved buffer. The data for each sub-buffer is
+offset from each other based on the stride. To manage your sub-buffers you can use
+`ma_pcm_rb_get_subbuffer_stride()`, `ma_pcm_rb_get_subbuffer_offset()` and
+`ma_pcm_rb_get_subbuffer_ptr()`.
-Use `ma_pcm_rb_acquire_read()` and `ma_pcm_rb_acquire_write()` to retrieve a pointer to a section of the ring buffer. You specify the number of frames you
-need, and on output it will set to what was actually acquired. If the read or write pointer is positioned such that the number of frames requested will require
-a loop, it will be clamped to the end of the buffer. Therefore, the number of frames you're given may be less than the number you requested.
+Use `ma_pcm_rb_acquire_read()` and `ma_pcm_rb_acquire_write()` to retrieve a pointer to a section
+of the ring buffer. You specify the number of frames you need, and on output it will be set to what
+was actually acquired. If the read or write pointer is positioned such that the number of frames
+requested will require a loop, it will be clamped to the end of the buffer. Therefore, the number
+of frames you're given may be less than the number you requested.
-After calling `ma_pcm_rb_acquire_read()` or `ma_pcm_rb_acquire_write()`, you do your work on the buffer and then "commit" it with `ma_pcm_rb_commit_read()` or
-`ma_pcm_rb_commit_write()`. This is where the read/write pointers are updated. When you commit you need to pass in the buffer that was returned by the earlier
-call to `ma_pcm_rb_acquire_read()` or `ma_pcm_rb_acquire_write()` and is only used for validation. The number of frames passed to `ma_pcm_rb_commit_read()` and
-`ma_pcm_rb_commit_write()` is what's used to increment the pointers, and can be less that what was originally requested.
+After calling `ma_pcm_rb_acquire_read()` or `ma_pcm_rb_acquire_write()`, you do your work on the
+buffer and then "commit" it with `ma_pcm_rb_commit_read()` or `ma_pcm_rb_commit_write()`. This is
+where the read/write pointers are updated. When you commit you need to pass in the buffer that was
+returned by the earlier call to `ma_pcm_rb_acquire_read()` or `ma_pcm_rb_acquire_write()`, which is
+only used for validation. The number of frames passed to `ma_pcm_rb_commit_read()` and
+`ma_pcm_rb_commit_write()` is what's used to increment the pointers, and can be less than what was
+originally requested.
-If you want to correct for drift between the write pointer and the read pointer you can use a combination of `ma_pcm_rb_pointer_distance()`,
-`ma_pcm_rb_seek_read()` and `ma_pcm_rb_seek_write()`. Note that you can only move the pointers forward, and you should only move the read pointer forward via
-the consumer thread, and the write pointer forward by the producer thread. If there is too much space between the pointers, move the read pointer forward. If
+If you want to correct for drift between the write pointer and the read pointer you can use a
+combination of `ma_pcm_rb_pointer_distance()`, `ma_pcm_rb_seek_read()` and
+`ma_pcm_rb_seek_write()`. Note that you can only move the pointers forward, and you should only
+move the read pointer forward via the consumer thread, and the write pointer forward by the
+producer thread. If there is too much space between the pointers, move the read pointer forward. If
there is too little space between the pointers, move the write pointer forward.
-You can use a ring buffer at the byte level instead of the PCM frame level by using the `ma_rb` API. This is exactly the same, only you will use the `ma_rb`
-functions instead of `ma_pcm_rb` and instead of frame counts you will pass around byte counts.
+You can use a ring buffer at the byte level instead of the PCM frame level by using the `ma_rb`
+API. This is exactly the same, only you will use the `ma_rb` functions instead of `ma_pcm_rb` and
+instead of frame counts you will pass around byte counts.
-The maximum size of the buffer in bytes is `0x7FFFFFFF-(MA_SIMD_ALIGNMENT-1)` due to the most significant bit being used to encode a loop flag and the internally
-managed buffers always being aligned to MA_SIMD_ALIGNMENT.
+The maximum size of the buffer in bytes is `0x7FFFFFFF-(MA_SIMD_ALIGNMENT-1)` due to the most
+significant bit being used to encode a loop flag and the internally managed buffers always being
+aligned to `MA_SIMD_ALIGNMENT`.
-Note that the ring buffer is only thread safe when used by a single consumer thread and single producer thread.
+Note that the ring buffer is only thread safe when used by a single consumer thread and single
+producer thread.
@@ -1395,24 +1551,32 @@ Some backends have some nuance details you may want to be aware of.
11.1. WASAPI
------------
-- Low-latency shared mode will be disabled when using an application-defined sample rate which is different to the device's native sample rate. To work around
- this, set `wasapi.noAutoConvertSRC` to true in the device config. This is due to IAudioClient3_InitializeSharedAudioStream() failing when the
- `AUDCLNT_STREAMFLAGS_AUTOCONVERTPCM` flag is specified. Setting wasapi.noAutoConvertSRC will result in miniaudio's internal resampler being used instead
- which will in turn enable the use of low-latency shared mode.
+- Low-latency shared mode will be disabled when using an application-defined sample rate which is
+ different to the device's native sample rate. To work around this, set `wasapi.noAutoConvertSRC`
+  to true in the device config. This is due to `IAudioClient3_InitializeSharedAudioStream()` failing
+  when the `AUDCLNT_STREAMFLAGS_AUTOCONVERTPCM` flag is specified. Setting `wasapi.noAutoConvertSRC`
+ will result in miniaudio's internal resampler being used instead which will in turn enable the
+ use of low-latency shared mode.
11.2. PulseAudio
----------------
- If you experience bad glitching/noise on Arch Linux, consider this fix from the Arch wiki:
- https://wiki.archlinux.org/index.php/PulseAudio/Troubleshooting#Glitches,_skips_or_crackling. Alternatively, consider using a different backend such as ALSA.
+ https://wiki.archlinux.org/index.php/PulseAudio/Troubleshooting#Glitches,_skips_or_crackling.
+ Alternatively, consider using a different backend such as ALSA.
11.3. Android
-------------
-- To capture audio on Android, remember to add the RECORD_AUDIO permission to your manifest: ``
-- With OpenSL|ES, only a single ma_context can be active at any given time. This is due to a limitation with OpenSL|ES.
-- With AAudio, only default devices are enumerated. This is due to AAudio not having an enumeration API (devices are enumerated through Java). You can however
- perform your own device enumeration through Java and then set the ID in the ma_device_id structure (ma_device_id.aaudio) and pass it to ma_device_init().
-- The backend API will perform resampling where possible. The reason for this as opposed to using miniaudio's built-in resampler is to take advantage of any
- potential device-specific optimizations the driver may implement.
+- To capture audio on Android, remember to add the RECORD_AUDIO permission to your manifest:
+  `<uses-permission android:name="android.permission.RECORD_AUDIO" />`
+- With OpenSL|ES, only a single `ma_context` can be active at any given time. This is due to a
+  limitation with OpenSL|ES.
+- With AAudio, only default devices are enumerated. This is due to AAudio not having an enumeration
+  API (devices are enumerated through Java). You can, however, perform your own device enumeration
+  through Java and then set the ID in the `ma_device_id` structure (`ma_device_id.aaudio`) and pass
+  it to `ma_device_init()`.
+- The backend API will perform resampling where possible. The reason for this as opposed to using
+ miniaudio's built-in resampler is to take advantage of any potential device-specific
+ optimizations the driver may implement.
11.4. UWP
---------
@@ -1431,26 +1595,34 @@ Some backends have some nuance details you may want to be aware of.
11.5. Web Audio / Emscripten
----------------------------
- You cannot use `-std=c*` compiler flags, nor `-ansi`. This only applies to the Emscripten build.
-- The first time a context is initialized it will create a global object called "miniaudio" whose primary purpose is to act as a factory for device objects.
-- Currently the Web Audio backend uses ScriptProcessorNode's, but this may need to change later as they've been deprecated.
-- Google has implemented a policy in their browsers that prevent automatic media output without first receiving some kind of user input. The following web page
- has additional details: https://developers.google.com/web/updates/2017/09/autoplay-policy-changes. Starting the device may fail if you try to start playback
- without first handling some kind of user input.
+- The first time a context is initialized it will create a global object called "miniaudio" whose
+ primary purpose is to act as a factory for device objects.
+- Currently the Web Audio backend uses ScriptProcessorNodes, but this may need to change later as
+ they've been deprecated.
+- Google has implemented a policy in their browsers that prevents automatic media output without
+ first receiving some kind of user input. The following web page has additional details:
+ https://developers.google.com/web/updates/2017/09/autoplay-policy-changes. Starting the device
+ may fail if you try to start playback without first handling some kind of user input.
12. Miscellaneous Notes
=======================
-- Automatic stream routing is enabled on a per-backend basis. Support is explicitly enabled for WASAPI and Core Audio, however other backends such as
- PulseAudio may naturally support it, though not all have been tested.
-- The contents of the output buffer passed into the data callback will always be pre-initialized to silence unless the `noPreZeroedOutputBuffer` config variable
- in `ma_device_config` is set to true, in which case it'll be undefined which will require you to write something to the entire buffer.
-- By default miniaudio will automatically clip samples. This only applies when the playback sample format is configured as `ma_format_f32`. If you are doing
- clipping yourself, you can disable this overhead by setting `noClip` to true in the device config.
+- Automatic stream routing is enabled on a per-backend basis. Support is explicitly enabled for
+  WASAPI and Core Audio; however, other backends such as PulseAudio may naturally support it,
+  though not all have been tested.
+- The contents of the output buffer passed into the data callback will always be pre-initialized to
+ silence unless the `noPreZeroedOutputBuffer` config variable in `ma_device_config` is set to true,
+  in which case it'll be undefined, which will require you to write something to the entire buffer.
+- By default miniaudio will automatically clip samples. This only applies when the playback sample
+ format is configured as `ma_format_f32`. If you are doing clipping yourself, you can disable this
+ overhead by setting `noClip` to true in the device config.
- The sndio backend is currently only enabled on OpenBSD builds.
-- The audio(4) backend is supported on OpenBSD, but you may need to disable sndiod before you can use it.
+- The audio(4) backend is supported on OpenBSD, but you may need to disable sndiod before you can
+ use it.
- Note that GCC and Clang requires `-msse2`, `-mavx2`, etc. for SIMD optimizations.
-- When compiling with VC6 and earlier, decoding is restricted to files less than 2GB in size. This is due to 64-bit file APIs not being available.
+- When compiling with VC6 and earlier, decoding is restricted to files less than 2GB in size. This
+ is due to 64-bit file APIs not being available.
*/
#ifndef miniaudio_h