125 Commits

Author SHA1 Message Date
David Reid 3b50a854ec Version 0.11.20 2023-11-10 07:44:19 +10:00
David Reid da0572e6b8 Fix a compilation error with iOS.
Public issue https://github.com/mackron/miniaudio/issues/770
2023-11-09 15:01:03 +10:00
David Reid eb0ce6f1a5 Fix an error when dynamically linking when forcing the UWP build.
This also fixes a possible crash during initialization due to leaving a
thread running after early termination of the initialization routine.
2023-11-04 08:43:16 +10:00
David Reid b19cc09fd0 Version 0.11.19 2023-11-04 07:05:24 +10:00
David Reid 863565a8ed Update dr_libs. 2023-11-04 06:57:53 +10:00
cobyj33 ef0f2e5b16 Fix ma_device_config.noClip docs 2023-11-02 15:13:35 +10:00
David Reid 287881b815 Web Audio: Don't attempt to unlock audio on the touchstart event.
Public issue https://github.com/mackron/miniaudio/issues/759
2023-10-29 08:06:56 +10:00
David Reid 2730775e79 Fix a documentation error. 2023-10-27 17:05:56 +10:00
David Reid 1d17b09e41 Fix an error in ma_sound_get_cursor_in_pcm_frames(). 2023-10-27 16:07:14 +10:00
David Reid 39a7cc444b Fix a crash when uninitialising a device without stopping it first. 2023-10-27 16:06:19 +10:00
a.borisov fe5f17ecf3 fix misspells
occured -> occurred
aquired -> acquired
accomodate -> accommodate
seperate -> separate
etc.
2023-10-22 07:06:12 +10:00
David Reid 0bb56819a8 Address an issue in ma_sound_get_cursor_*().
When seeking, the seek does not immediately get applied to the data
source. Instead it is delayed. This results in a situation where the
sound can be seeked, but a small window will exist where querying the
seek point will not be consistent with the previous seek request.
2023-10-21 07:56:06 +10:00
David Reid 105ffd8b05 Update revision history. 2023-10-21 07:31:05 +10:00
David Reid 9bf256dcf3 Fix a documentation error. 2023-10-21 07:29:24 +10:00
David Reid 2f5c661bb7 Update revision history. 2023-10-21 07:24:23 +10:00
David Reid ecf2a8b917 Fix a crash when using a node with more than 2 outputs. 2023-10-21 07:18:58 +10:00
Marek Maškarinec d282fba0fe Fix AudioContext creation fail if sampleRate is 0 (#745) 2023-10-18 16:38:19 +10:00
Christian Alonso-Daubney a185b99f12 Added pSound check to ma_sound_get_cone() 2023-10-15 07:34:09 +10:00
Christian Alonso-Daubney aef76e251c Added pEngine and listenerIndex check to ma_engine_listener_get_cone() 2023-10-15 07:34:09 +10:00
David Reid b792ccd483 Try making runtime linking more robust on Apple platforms.
Public issue https://github.com/mackron/miniaudio/issues/750
2023-10-14 11:56:16 +10:00
David Reid a3cabad692 Fix some warnings with the Emscripten build. 2023-09-21 08:43:50 +10:00
my1e5 1a9acbad95 fix-sample-rate : fixed value of 11025 Hz sample rate in ma_standard_sample_rate enum 2023-09-21 08:38:43 +10:00
David Reid 381a035fdd Fix some unused function warnings.
Public issue https://github.com/mackron/miniaudio/issues/732
2023-09-11 08:30:30 +10:00
David Reid e1a0f523d0 Update dr_wav. 2023-09-11 08:16:53 +10:00
David Reid 03e36da814 Try fixing a strange error when initializing a POSIX mutex.
https://github.com/mackron/miniaudio/issues/733
2023-09-11 07:55:08 +10:00
David Reid a47f065a4a Don't stop the device in ma_device_uninit().
If the state on the miniaudio side of the device does not match the
actual state of the backend side, such as when the device is stopped but
the backend doesn't post a notification, attempting to stop the device
might result in a deadlock.

This is just a quick workaround hack for the moment while a more
robust solution is figured out.

https://github.com/mackron/miniaudio/issues/717
2023-09-10 07:33:59 +10:00
David Reid bdf9a5554b Update the deviceio test. 2023-09-10 07:26:09 +10:00
David Reid 537c4ca36c Update the simple_playback_sine example. 2023-09-05 06:48:49 +10:00
David Reid 2922859ea9 Remove cosmo windows.h header. 2023-08-31 18:30:42 +10:00
David Reid 6e6823d9e4 Update deviceio test. 2023-08-31 18:30:04 +10:00
Oldes 568e0ae9e9 FIX: Only white noise type is generated
resolves: https://github.com/mackron/miniaudio/issues/723

Signed-off-by: Oldes <oldes.huhuman@gmail.com>
2023-08-31 07:12:15 +10:00
nmlgc 70bf42392d Fix SSE2 sample swapping in mono expansion.
The SSE2 code paths for mono expansion introduced in Version 0.11.15
mixed up the parameters of `_mm_shuffle_ps()`, which in turn caused
adjacent PCM frames to be swapped in the channel-expanded output.
2023-08-30 08:37:12 +10:00
David Reid 9d461f6d5d Minor changes to osaudio. 2023-08-27 15:26:46 +10:00
David Reid 6539b67163 Remove an incorrect comment. 2023-08-27 15:25:08 +10:00
David Reid 78c6fcb370 Fix some parenthesis errors. 2023-08-27 15:24:39 +10:00
David Reid 1f8c86d9ca Fix a copy/paste error. 2023-08-22 21:20:35 +10:00
David Reid 90342e5f67 Update revision history. 2023-08-19 09:24:02 +10:00
David Reid 46eaf1fa5e Fix a bug where ma_decoder_init_file can incorrectly return successfully. 2023-08-19 09:22:39 +10:00
David Reid 509bd6d4f2 Fix some typos. 2023-08-17 20:56:42 +10:00
David Reid a81f09d93c Add osaudio to the extras folder.
This is just a small project to experiment with a few API ideas. This
is not a replacement for miniaudio or anything so don't panic.
2023-08-17 19:45:15 +10:00
David Reid f2ea656297 Correctly mark some functions as static. 2023-08-17 19:32:06 +10:00
David Reid c24141c5ae Remove the use of some deprecated functions. 2023-08-07 12:13:36 +10:00
David Reid 3898fff8ed Version 0.11.18 2023-08-07 11:05:14 +10:00
David Reid 9eca9ce0cd Update dr_libs. 2023-08-07 10:00:49 +10:00
David Reid d4fd8411c4 Update Emscripten test. 2023-08-06 15:39:54 +10:00
David Reid efa9e7d727 Web Audio: Memory usage optimization to the Audio Worklet path.
This applies only to duplex devices.
2023-08-06 15:31:50 +10:00
David Reid c24829cbb9 Web Audio: Refactoring to the ScriptProcessorNode path.
This unifies, as much as possible, the ScriptProcessorNode path with
the AudioWorklets path to avoid some code duplication, and to also make
the two paths more similar to each other to ease maintenance.
2023-08-06 15:06:37 +10:00
David Reid fde7d20414 More improvements to the AudioWorklets Web Audio backend.
* Duplex mode now only creates a single AudioContext and AudioWorklet
* Devices should now automatically start in response to a gesture
2023-08-06 10:42:48 +10:00
David Reid c36b391cc5 Update changes. 2023-08-05 17:58:01 +10:00
David Reid 4d23c1c5ab Update build instructions for Emscripten. 2023-08-05 17:53:58 +10:00
David Reid 810cdc2380 Improvements to Audio Worklets support for Web Audio.
Public issue https://github.com/mackron/miniaudio/issues/597
2023-08-05 17:02:26 +10:00
David Reid 53907863c7 Add support for stopping a sound and fading out.
This adds the following APIs:

  * ma_sound_stop_with_fade_in_pcm_frames()
  * ma_sound_stop_with_fade_in_milliseconds()
  * ma_sound_set_stop_time_with_fade_in_pcm_frames()
  * ma_sound_set_stop_time_with_fade_in_milliseconds()

These functions will overwrite any existing fades. For the
set_stop_time variants, you specify the time at which the sound will be
put into its stopped state. The fade will start at stopTime - fadeLength.
If the fade length is greater than the stop time, the fade length will
be clamped.

Public issue https://github.com/mackron/miniaudio/issues/669
2023-08-05 10:04:59 +10:00
David Reid f9f542b2fb Fix a fading bug introduced with an earlier commit. 2023-08-05 09:19:17 +10:00
David Reid e43457fcce Add initial implementation for scheduled fades.
This adds the following APIs:

  * ma_sound_set_fade_start_in_pcm_frames()
  * ma_sound_set_fade_start_in_milliseconds()

Public issue https://github.com/mackron/miniaudio/issues/669
2023-08-04 20:28:56 +10:00
David Reid 356eb3252e Set up some infrastructure for starting fades with an offset. 2023-08-04 19:26:55 +10:00
David Reid 3429769623 Update change history. 2023-08-04 12:14:18 +10:00
David Reid ca7284fde5 ALSA: Fix an error where restarting a device can fail. 2023-08-03 17:03:33 +10:00
David Reid 2212965267 Fix errors with the C89 build. 2023-08-03 09:16:59 +10:00
David Reid 320245606a Fix C89 build. 2023-08-02 19:41:06 +10:00
David Reid 18e4756be3 Decouple MA_API and MA_STATIC defines. 2023-08-02 08:39:56 +10:00
David Reid 8df02809b5 Remove stale comment. 2023-07-30 08:19:03 +10:00
Taiko2k 1696031633 Tweak pulseaudio stream flags 2023-07-30 08:17:21 +10:00
David Reid 90bdda29ae Fix a typo. 2023-07-30 08:14:38 +10:00
David Reid 69bc820ae8 Fix an error when loading WAV files.
The sample format of a WAV file is not always being set, which results
in get_data_format() returning ma_format_unknown.
2023-07-22 16:39:17 +10:00
David Reid 98a39ded77 Fix compilation error with previous commit. 2023-07-21 18:19:49 +10:00
David Reid 4c7e3218e3 Improvements to decoder initialization.
This change makes use of the onInitFile, onInitFileW and onInitMemory
backend callbacks which enables decoding backends to have optimized
implementations for reading from a file or a block of memory without
having to go through an abstraction layer on the miniaudio side.

Public issue https://github.com/mackron/miniaudio/issues/696
2023-07-21 18:18:15 +10:00
David Reid b2ed26cf76 Fix an error with setting of the cursor when seeking a Vorbis file.
Public issue https://github.com/mackron/miniaudio/issues/707
2023-07-21 07:39:07 +10:00
David Reid 7f0a92a08f Don't call CoUninitialize() when CoInitializeEx() fails. 2023-07-09 09:49:01 +10:00
David Reid ab87375257 Add ma_engine_get_volume().
Public issue https://github.com/mackron/miniaudio/issues/700
2023-07-08 09:18:41 +10:00
David Reid 0eadb0f30e Make ma_linear_resampler_set_rate_ratio() more accurate. 2023-07-07 16:47:24 +10:00
David Reid a6eb7d6a6f Update change history. 2023-06-17 08:06:22 +10:00
Jay Baird e9ba163490 Fix issue where duty cycle of a pulsewave was not correctly set at init time 2023-06-10 08:42:27 +10:00
David Reid f9076ef327 Update dr_wav with more AIFF improvements. 2023-06-08 09:10:07 +10:00
David Reid eabc776898 Fix erroneous output with the resampler when in/out rates are the same. 2023-06-08 08:34:04 +10:00
David Reid 4c49c49596 Update change history. 2023-06-07 21:14:58 +10:00
David Reid 34b40bdc17 Update dr_wav with improved AIFF compatibility. 2023-06-07 13:58:46 +10:00
David Reid e1bfeb212a AAudio: Reverse some incorrect logic when setting up streams. 2023-06-05 15:44:27 +10:00
David Reid db8e77cad4 Fix a compilation error with the C++ build. 2023-06-05 15:19:28 +10:00
David Reid 1177997599 Add support for supplying a custom device data callback to ma_engine.
When this is used, the data callback should at some point call
ma_engine_read_pcm_frames() in order to do some processing.
2023-06-05 09:01:30 +10:00
David Reid 5f32336a34 Use float* instead of void* for the engine processing callback. 2023-06-03 16:27:39 +10:00
David Reid a0b952eea6 Add support for setting a processing callback for ma_engine.
This is optional and is fired at the end of each call to
ma_engine_read_process_pcm_frames(). The callback is passed the
processed audio data so the application can do its own processing, such
as outputting to a file.

The callback is configured via the engine config.
2023-06-03 16:20:16 +10:00
David Reid e7912fa242 Add ma_sound_get_time_in_milliseconds(). 2023-06-03 13:38:55 +10:00
David Reid 0c1c4c7ddc Update dr_wav. 2023-05-29 08:33:31 +10:00
David Reid d76b9a1ac4 Version 0.11.17 2023-05-27 12:49:48 +10:00
David Reid e9b6559be1 Very minor code reorganisation. 2023-05-26 13:41:41 +10:00
Jay Baird 1bd7713e85 swap parameters for better compatibility with ma_data_source 2023-05-26 13:38:45 +10:00
Jay Baird e7e666d827 Add ma_pulsewave generator type 2023-05-26 13:38:45 +10:00
David Reid 8c59e9b736 Update change history. 2023-05-23 19:10:59 +10:00
David Reid a2698a0048 Fix compilation error relating to dlopen() and family. 2023-05-23 14:04:40 +10:00
David Reid ea42e16a79 Fix the C++ build. 2023-05-22 18:27:38 +10:00
David Reid 14be2bd394 Fix some long out of date tests. 2023-05-22 18:20:21 +10:00
David Reid a8f3cb857e Fix compilation errors with MA_NO_DEVICE_IO. 2023-05-22 18:09:04 +10:00
David Reid 563e1c52cb Update change history. 2023-05-22 17:51:08 +10:00
David Reid 4520faa1d2 Update dr_flac amalgamation again to remove redundant error codes. 2023-05-22 17:50:41 +10:00
David Reid 8dec4e0b9b Update amalgamation of dr_flac. 2023-05-22 17:43:27 +10:00
David Reid 69f4a19ef5 Fix a copy/paste error. 2023-05-22 17:41:20 +10:00
David Reid 9374f5e8d2 Update dr_mp3 amalgamation. 2023-05-22 17:25:01 +10:00
David Reid b98acd2422 Update amalgamation of dr_wav.
With this change, dr_wav is now namespaced with "ma" which means dr_wav
can now be used alongside miniaudio.

In addition, some duplicate code has been removed, such as sized types,
result codes, allocation callbacks, etc. which reduces the size of the
file slightly.

This should address the following public issue:
  https://github.com/mackron/miniaudio/issues/673
2023-05-22 16:52:16 +10:00
David Reid 5c099791ee Clean up decoding documentation.
miniaudio is updating its amalgamation of dr_wav, etc. so that it's
all namespaced with "ma" which will make the amalgamated versions of
dr_libs entirely independent. There's no longer any need to mention
the decoding backends.

Documentation regarding stb_vorbis is removed so as to discourage
new users from using it. Support will not be removed until a
replacement Vorbis decoder can be amalgamated, but new users should
instead be guided to the libvorbis custom decoder in the extras folder.
2023-05-22 16:06:13 +10:00
David Reid 773d97a95c Fix a compilation error with VC6 and VS2003.
These compilers do not support noinline.
2023-05-22 14:48:55 +10:00
David Reid fa7cd81027 Improvements to c89atomic amalgamation.
* Sized types will now use miniaudio's types.
* Architecture macros now use miniaudio's macros.
* The c89atomic namespace has been renamed to ma_atomic, which makes
  it so c89atomic can be used alongside miniaudio without naming
  conflicts.

Public issue https://github.com/mackron/miniaudio/issues/673
2023-05-21 09:41:49 +10:00
David Reid af46c1fcc0 Minor changes to architecture detection.
This is in preparation for some amalgamation improvements.
2023-05-21 08:25:14 +10:00
David Reid 65574f44e3 Update change history. 2023-05-21 07:57:32 +10:00
David Reid f05bb5306d Try fixing Windows 95/98 build.
This commit makes it so SetFilePointer/Ex() are dynamically loaded at
runtime which allows miniaudio to branch dynamically based on available
support.

This is necessary because versions of Windows prior to XP do not
support the Ex version which results in an error when trying to run the
program.

Public issue https://github.com/mackron/miniaudio/issues/672
2023-05-18 20:44:46 +10:00
David Reid 6eeea700f0 Silence a very minor linting warning in VS2022. 2023-05-17 18:22:24 +10:00
David Reid 04a6fe6eea Work around some bad code generation by Clang. 2023-05-17 18:20:15 +10:00
David Reid ea205fb7b0 Version 0.11.16 2023-05-15 09:36:42 +10:00
David Reid 26c11a7771 Process jobs on the calling thread when WAIT_INIT is used.
Since the calling thread is waiting anyway, it's better to just do the
processing on the calling thread rather than posting it to the job
queue and waiting. This ensures the calling thread stays busy which
will improve performance, but it also makes it so the calling thread
doesn't get stalled while already-queued jobs are getting processed.
2023-05-12 09:11:29 +10:00
David Reid 870ac8a22c Don't link to advapi32.dll for the GDK build. 2023-05-06 08:59:13 +10:00
David Reid a1ea4438ee Fix ma_dlopen() on the GDK build. 2023-05-06 08:50:55 +10:00
David Reid 902c19d6ab WASAPI: Another fix for the GDK build. 2023-05-06 08:41:08 +10:00
David Reid 64f14070a7 WASAPI: Revert an experimental change and try fixing GDK build. 2023-05-06 08:31:32 +10:00
David Reid 6d20ccb701 WASAPI: Experimental change for rerouting. 2023-05-05 10:26:39 +10:00
David Reid 96ac03f184 WASAPI: Log error codes when a device fails to start. 2023-05-05 09:34:31 +10:00
David Reid e913a6d1aa Silence a warning. 2023-05-05 09:24:46 +10:00
David Reid de706d44b8 Experimental fix for better handling of AUDCLNT_E_DEVICE_INVALIDATED. 2023-05-05 09:18:18 +10:00
David Reid 2bf7e03777 WASAPI: Relax validation checks when doing device reroutes. 2023-05-05 08:55:23 +10:00
David Reid ae25dbcdac Fix a memory leak in ma_sound_init_copy().
Public issue https://github.com/mackron/miniaudio/issues/667
2023-05-03 08:02:42 +10:00
David Reid 4326fad97a Update links in readme. 2023-04-30 12:15:38 +10:00
David Reid 937cd9c16c Another update to the readme to make it less wordy. 2023-04-30 12:03:26 +10:00
David Reid d1f3715a08 Playing around with a restructure to the readme. 2023-04-30 11:20:53 +10:00
David Reid 9fe0970e20 Remove the previous experimental change to the readme. Doesn't work. 2023-04-30 08:56:43 +10:00
David Reid a74c2c78d9 Try fixing some HTML formatting. 2023-04-30 08:54:26 +10:00
David Reid 189beb67fa Experiment with a change to the readme. 2023-04-30 08:53:14 +10:00
David Reid 7384bde372 Update change history. 2023-04-30 08:39:53 +10:00
31 changed files with 23535 additions and 23383 deletions
@@ -1,8 +1,65 @@
v0.11.20 - 2023-11-10
=====================
* Fix a compilation error with iOS.
* Fix an error when dynamically linking libraries when forcing the UWP build on desktop.
v0.11.19 - 2023-11-04
=====================
* Fix a bug where `ma_decoder_init_file()` can incorrectly return successfully.
* Fix a crash when using a node with more than 2 outputs.
* Fix a bug where `ma_standard_sample_rate_11025` uses the incorrect rate.
* Fix a bug in `ma_noise` where only white noise would be generated even when specifying pink or Brownian.
* Fix an SSE related bug when converting from mono streams.
* Documentation fixes.
* Remove the use of some deprecated functions.
* Improvements to runtime linking on Apple platforms.
* Web / Emscripten: Audio will no longer attempt to unlock in response to the "touchstart" event. This addresses an issue with iOS and Safari. This results in a change of behavior if you were previously depending on starting audio when the user's finger first touches the screen. Audio will now only unlock when the user's finger is lifted. See this discussion for details: https://github.com/mackron/miniaudio/issues/759
* Web / Emscripten: Fix an error when using a sample rate of 0 in the device config.
v0.11.18 - 2023-08-07
=====================
* Fix some AIFF compatibility issues.
* Fix an error where the cursor of a Vorbis stream is incorrectly incremented.
* Add support for setting a callback on an `ma_engine` object that gets fired after it processes a chunk of audio. This allows applications to do things such as apply a post-processing effect or output the audio to a file.
* Add `ma_engine_get_volume()`.
* Add `ma_sound_get_time_in_milliseconds()`.
* Decouple `MA_API` and `MA_PRIVATE`. This means applications no longer need to define both of them if they only want to redefine one.
* Decoding backends will now have their onInitFile/W and onInitMemory initialization routines used where appropriate if they're defined.
* Increase the accuracy of the linear resampler when setting the ratio with `ma_linear_resampler_set_rate_ratio()`.
* Fix erroneous output with the linear resampler when in/out rates are the same.
* AAudio: Fix an error where the buffer size is not configured correctly which sometimes results in excessively high latency.
* ALSA: Fix a possible error when stopping and restarting a device.
* PulseAudio: Minor changes to stream flags.
* Win32: Fix an error where `CoUninitialize()` is being called when the corresponding `CoInitializeEx()` fails.
* Web / Emscripten: Add support for AudioWorklets. This is opt-in and can be enabled by defining `MA_ENABLE_AUDIO_WORKLETS`. You must compile with `-sAUDIO_WORKLET=1 -sWASM_WORKERS=1 -sASYNCIFY` for this to work. Requires at least Emscripten v3.1.32.
v0.11.17 - 2023-05-27
=====================
* Fix compilation errors with MA_USE_STDINT.
* Fix a possible runtime error with Windows 95/98.
* Fix a very minor linting warning in VS2022.
* Add support for AIFF/AIFC decoding.
* Add support for RIFX decoding.
* Work around some bad code generation by Clang.
* Amalgamations of dr_wav, dr_flac, dr_mp3 and c89atomic have been updated so that they're now fully namespaced. This allows each of these libraries to be able to be used alongside miniaudio without any conflicts. In addition, some duplicate code, such as sized type declarations, result codes, etc. has been removed.
v0.11.16 - 2023-05-15
=====================
* Fix a memory leak with `ma_sound_init_copy()`.
* Improve performance of `ma_sound_init_*()` when using the `ASYNC | DECODE` flag combination.
v0.11.15 - 2023-04-30
=====================
* Fix a bug where initialization of a duplex device fails on some backends.
* Fix a bug in ma_gainer where smoothing isn't applied correctly thus resulting in glitching.
* Add support for volume smoothing to sounds when changing the volume with `ma_sound_set_volume()`. To use this, you must configure it via the `volumeSmoothTimeInPCMFrames` member of `ma_sound_config` and use `ma_sound_init_ex()` to initialize your sound. Smoothing is disabled by default.
* WASAPI: Fix a possible buffer overrun when initializing a device.
* WASAPI: Make device initialization more robust by improving the handling of the querying of the internal data format.
v0.11.14 - 2023-03-29
@@ -3,7 +3,7 @@
<br>
</h1>
<h4 align="center">An audio playback and capture library in a single source file.</h4>
<p align="center">
<a href="https://discord.gg/9vpqbjU"><img src="https://img.shields.io/discord/712952679415939085?label=discord&logo=discord&style=flat-square" alt="discord"></a>
@@ -12,14 +12,39 @@
</p>
<p align="center">
<a href="#features">Features</a> -
<a href="#examples">Examples</a> -
<a href="#building">Building</a> -
<a href="#documentation">Documentation</a> -
<a href="#supported-platforms">Supported Platforms</a> -
<a href="#backends">Backends</a> -
<a href="#major-features">Major Features</a> -
<a href="#building">Building</a>
<a href="#license">License</a>
</p>
miniaudio is written in C with no dependencies except the standard library and should compile clean on all major
compilers without the need to install any additional development packages. All major desktop and mobile platforms
are supported.
Features
========
- Simple build system with no external dependencies.
- Simple and flexible API.
- Low-level API for direct access to raw audio data.
- High-level API for sound management, mixing, effects and optional 3D spatialization.
- Flexible node graph system for advanced mixing and effect processing.
- Resource management for loading sound files.
- Decoding, with built-in support for WAV, FLAC and MP3, in addition to being able to plug in custom decoders.
- Encoding (WAV only).
- Data conversion.
- Resampling, including custom resamplers.
- Channel mapping.
- Basic generation of waveforms and noise.
- Basic effects and filters.
Refer to the [Programming Manual](https://miniaud.io/docs/manual/) for a more complete description of
available features in miniaudio.
Examples
========
@@ -27,27 +52,21 @@ This example shows one way to play a sound using the high level API.
```c
#define MINIAUDIO_IMPLEMENTATION
#include "miniaudio.h"
#include <stdio.h>
int main()
{
ma_result result;
ma_engine engine;
result = ma_engine_init(NULL, &engine);
if (result != MA_SUCCESS) {
printf("Failed to initialize audio engine.");
return -1;
}
ma_engine_play_sound(&engine, "sound.wav", NULL);
printf("Press Enter to quit...");
getchar();
@@ -62,7 +81,7 @@ This example shows how to decode and play a sound using the low level API.
```c
#define MINIAUDIO_IMPLEMENTATION
#include "miniaudio.h"
#include <stdio.h>
@@ -128,6 +147,34 @@ int main(int argc, char** argv)
More examples can be found in the [examples](examples) folder or online here: https://miniaud.io/docs/examples/
Building
========
Do the following in one source file:
```c
#define MINIAUDIO_IMPLEMENTATION
#include "miniaudio.h"
```
Then just compile. There's no need to install any dependencies. On Windows and macOS there's no need to link
to anything. On Linux just link to `-lpthread`, `-lm` and `-ldl`. On BSD just link to `-lpthread` and `-lm`.
On iOS you need to compile as Objective-C.
If you get errors about undefined references to `__sync_val_compare_and_swap_8`, `__atomic_load_8`, etc. you
need to link with `-latomic`.
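As a concrete example, a minimal Linux build of a program using the amalgamated header might look like this (assuming your source file is named main.c):

```shell
# Linux: link pthreads, libm and libdl. Add -latomic only if the linker
# reports undefined __atomic_* / __sync_* symbols.
cc main.c -o my_app -lpthread -lm -ldl
```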
If you prefer separate .h and .c files, you can find a split version of miniaudio in the extras/miniaudio_split
folder. From here you can use miniaudio as a traditional .c and .h library - just add miniaudio.c to your source
tree like any other source file and include miniaudio.h like a normal header. If you prefer compiling as a
single translation unit (AKA unity builds), you can just #include the .c file in your main source file:
```c
#include "miniaudio.c"
```
Note that the split version is auto-generated using a tool and is based on the main file in the root directory.
If you want to contribute, please make the change in the main file.
ABI compatibility is not guaranteed between versions so take care if compiling as a DLL/SO. The suggested way
to integrate miniaudio is by adding it directly to your source tree.
Documentation
=============
Online documentation can be found here: https://miniaud.io/docs/
@@ -139,17 +186,20 @@ documentation is generated from this in-code documentation.
Supported Platforms
===================
- Windows
- macOS, iOS
- Linux
- FreeBSD / OpenBSD / NetBSD
- Android
- Raspberry Pi
- Emscripten / HTML5
miniaudio should compile clean on other platforms, but it will not include any support for playback or capture
by default. To support that, you would need to implement a custom backend. You can do this without needing to
modify the miniaudio source code. See the [custom_backend](examples/custom_backend.c) example.
Backends
--------
- WASAPI
- DirectSound
- WinMM
@@ -167,80 +217,6 @@ Backends
- Custom
Major Features
==============
- Your choice of either public domain or [MIT No Attribution](https://github.com/aws/mit-0).
- Entirely contained within a single file for easy integration into your source tree.
- No external dependencies except for the C standard library and backend libraries.
- Written in C and compilable as C++, enabling miniaudio to work on almost all compilers.
- Supports all major desktop and mobile platforms, with multiple backends for maximum compatibility.
- A low level API with direct access to the raw audio data.
- A high level API with sound management and effects, including 3D spatialization.
- Supports playback, capture, full-duplex and loopback (WASAPI only).
- Device enumeration for connecting to specific devices, not just defaults.
- Connect to multiple devices at once.
- Shared and exclusive mode on supported backends.
- Resource management for loading and streaming sounds.
- A node graph system for advanced mixing and effect processing.
- Data conversion (sample format conversion, channel conversion and resampling).
- Filters.
- Biquads
- Low-pass (first, second and high order)
- High-pass (first, second and high order)
- Band-pass (second and high order)
- Effects.
- Delay/Echo
- Spatializer
- Stereo Pan
- Waveform generation (sine, square, triangle, sawtooth).
- Noise generation (white, pink, Brownian).
- Decoding
- WAV
- FLAC
- MP3
- Vorbis via stb_vorbis (not built in - must be included separately).
- Custom
- Encoding
- WAV
Vorbis Decoding
---------------
Vorbis decoding is enabled via stb_vorbis. To use it, you need to include the header section of stb_vorbis
before the implementation of miniaudio. You can enable Vorbis by doing the following:
```c
#define STB_VORBIS_HEADER_ONLY
#include "extras/stb_vorbis.c" /* Enables Vorbis decoding. */
#define MINIAUDIO_IMPLEMENTATION
#include "miniaudio.h"
/* stb_vorbis implementation must come after the implementation of miniaudio. */
#undef STB_VORBIS_HEADER_ONLY
#include "extras/stb_vorbis.c"
```
License
=======
Your choice of either public domain or [MIT No Attribution](https://github.com/aws/mit-0).
@@ -11,6 +11,10 @@ path like "C:\emsdk\emsdk_env.bat". Note that PowerShell doesn't work for me for
emcc ../simple_playback_sine.c -o bin/simple_playback_sine.html -s WASM=0 -Wall -Wextra
To compile with support for Audio Worklets:
emcc ../simple_playback_sine.c -o bin/simple_playback_sine.html -DMA_ENABLE_AUDIO_WORKLETS -sAUDIO_WORKLET=1 -sWASM_WORKERS=1 -sASYNCIFY
If you output WASM it may not work when running the web page locally. To test you can run with something
like this:
@@ -521,7 +521,7 @@ static ma_result ma_context_uninit__sdl(ma_context* pContext)
((MA_PFN_SDL_QuitSubSystem)pContextEx->sdl.SDL_QuitSubSystem)(MA_SDL_INIT_AUDIO);
/* Close the handle to the SDL shared object last. */
ma_dlclose(ma_context_get_log(pContext), pContextEx->sdl.hSDL);
pContextEx->sdl.hSDL = NULL;
return MA_SUCCESS;
@@ -551,7 +551,7 @@ static ma_result ma_context_init__sdl(ma_context* pContext, const ma_context_con
/* Check if we have SDL2 installed somewhere. If not it's not usable and we need to abort. */
for (iName = 0; iName < ma_countof(pSDLNames); iName += 1) {
pContextEx->sdl.hSDL = ma_dlopen(ma_context_get_log(pContext), pSDLNames[iName]);
if (pContextEx->sdl.hSDL != NULL) {
break;
}
@@ -562,13 +562,13 @@ static ma_result ma_context_init__sdl(ma_context* pContext, const ma_context_con
}
/* Now that we have the handle to the shared object we can go ahead and load some function pointers. */
pContextEx->sdl.SDL_InitSubSystem = ma_dlsym(ma_context_get_log(pContext), pContextEx->sdl.hSDL, "SDL_InitSubSystem");
pContextEx->sdl.SDL_QuitSubSystem = ma_dlsym(ma_context_get_log(pContext), pContextEx->sdl.hSDL, "SDL_QuitSubSystem");
pContextEx->sdl.SDL_GetNumAudioDevices = ma_dlsym(ma_context_get_log(pContext), pContextEx->sdl.hSDL, "SDL_GetNumAudioDevices");
pContextEx->sdl.SDL_GetAudioDeviceName = ma_dlsym(ma_context_get_log(pContext), pContextEx->sdl.hSDL, "SDL_GetAudioDeviceName");
pContextEx->sdl.SDL_CloseAudioDevice = ma_dlsym(ma_context_get_log(pContext), pContextEx->sdl.hSDL, "SDL_CloseAudioDevice");
pContextEx->sdl.SDL_OpenAudioDevice = ma_dlsym(ma_context_get_log(pContext), pContextEx->sdl.hSDL, "SDL_OpenAudioDevice");
pContextEx->sdl.SDL_PauseAudioDevice = ma_dlsym(ma_context_get_log(pContext), pContextEx->sdl.hSDL, "SDL_PauseAudioDevice");
#else
pContextEx->sdl.SDL_InitSubSystem = (ma_proc)SDL_InitSubSystem;
pContextEx->sdl.SDL_QuitSubSystem = (ma_proc)SDL_QuitSubSystem;
@@ -581,7 +581,7 @@ static ma_result ma_context_init__sdl(ma_context* pContext, const ma_context_con
resultSDL = ((MA_PFN_SDL_InitSubSystem)pContextEx->sdl.SDL_InitSubSystem)(MA_SDL_INIT_AUDIO);
if (resultSDL != 0) {
ma_dlclose(pContext, pContextEx->sdl.hSDL);
ma_dlclose(ma_context_get_log(pContext), pContextEx->sdl.hSDL);
return MA_ERROR;
}
@@ -83,6 +83,7 @@ int main(int argc, char** argv)
#endif
ma_device_uninit(&device);
ma_waveform_uninit(&sineWave); /* Uninitialize the waveform after the device so we don't pull it from under the device while it's being reference in the data callback. */
(void)argc;
(void)argv;
@@ -1,104 +0,0 @@
/*
IMPORTANT NOTE: Cosmopolitan is not officially supported by miniaudio. This file was added just as
a way to play around and experiment with Cosmopolitan as a proof of concept and to test the viability
of supporting such a compiler. If you get compilation or runtime errors you're on your own.
---------------------------------------------------------------------------------------------------
This is a version of windows.h for compiling with Cosmopolitan. It's not complete. It's intended to
define some missing items from cosmopolitan.h. Hopefully as the project develops we can eventually
eliminate all of the content in this file.
*/
#ifndef _WINDOWS_
#define _WINDOWS_
#define WINAPI
#define STDMETHODCALLTYPE
#define CALLBACK
typedef uint64_t HWND;
typedef uint64_t HANDLE;
typedef uint64_t HKEY;
typedef uint64_t HWAVEIN;
typedef uint64_t HWAVEOUT;
typedef uint32_t HRESULT;
typedef uint8_t BYTE;
typedef uint16_t WORD;
typedef uint32_t DWORD;
typedef uint64_t DWORDLONG;
typedef int32_t BOOL;
typedef int32_t LONG; /* `long` is always 32-bit on Windows. */
typedef int64_t LONGLONG;
typedef uint32_t ULONG; /* `long` is always 32-bit on Windows. */
typedef uint64_t ULONGLONG;
typedef char16_t WCHAR;
typedef unsigned int UINT;
typedef char CHAR;
typedef uint64_t ULONG_PTR; /* Everything is 64-bit with Cosmopolitan. */
typedef ULONG_PTR DWORD_PTR;
#define TRUE 1
#define FALSE 0
#define WAIT_OBJECT_0 0
#define INFINITE 0xFFFFFFFF
#define CP_UTF8 65001
#define FAILED(hr) ((hr) < 0)
#define SUCCEEDED(hr) ((hr) >= 0)
#define NOERROR 0
#define S_OK 0
#define S_FALSE 1
#define E_POINTER ((HRESULT)0x80004003)
#define E_UNEXPECTED ((HRESULT)0x8000FFFF)
#define E_NOTIMPL ((HRESULT)0x80004001)
#define E_OUTOFMEMORY ((HRESULT)0x8007000E)
#define E_INVALIDARG ((HRESULT)0x80070057)
#define E_NOINTERFACE ((HRESULT)0x80004002)
#define E_HANDLE ((HRESULT)0x80070006)
#define E_ABORT ((HRESULT)0x80004004)
#define E_FAIL ((HRESULT)0x80004005)
#define E_ACCESSDENIED ((HRESULT)0x80070005)
#define ERROR_SUCCESS 0
#define ERROR_FILE_NOT_FOUND 2
#define ERROR_PATH_NOT_FOUND 3
#define ERROR_TOO_MANY_OPEN_FILES 4
#define ERROR_ACCESS_DENIED 5
#define ERROR_NOT_ENOUGH_MEMORY 8
#define ERROR_HANDLE_EOF 38
#define ERROR_INVALID_PARAMETER 87
#define ERROR_DISK_FULL 112
#define ERROR_SEM_TIMEOUT 121
#define ERROR_NEGATIVE_SEEK 131
typedef struct
{
unsigned long Data1;
unsigned short Data2;
unsigned short Data3;
unsigned char Data4[8];
} GUID, IID;
typedef int64_t LARGE_INTEGER;
#define HKEY_LOCAL_MACHINE ((HKEY)(ULONG_PTR)(0x80000002))
#define KEY_READ 0x00020019
static HANDLE CreateEventA(struct NtSecurityAttributes* lpEventAttributes, bool32 bManualReset, bool32 bInitialState, const char* lpName)
{
assert(lpName == NULL); /* If this is ever triggered we'll need to do a ANSI-to-Unicode conversion. */
return (HANDLE)CreateEvent(lpEventAttributes, bManualReset, bInitialState, (const char16_t*)lpName);
}
static BOOL IsEqualGUID(const GUID* a, const GUID* b)
{
return memcmp(a, b, sizeof(GUID)) == 0;
}
#endif /* _WINDOWS_ */
File diff suppressed because it is too large.
@@ -1,6 +1,6 @@
/*
Audio playback and capture library. Choice of public domain or MIT-0. See license statements at the end of this file.
miniaudio - v0.11.15 - 2023-04-30
miniaudio - v0.11.20 - 2023-11-10
David Reid - mackron@gmail.com
@@ -20,7 +20,7 @@ extern "C" {
#define MA_VERSION_MAJOR 0
#define MA_VERSION_MINOR 11
#define MA_VERSION_REVISION 15
#define MA_VERSION_REVISION 20
#define MA_VERSION_STRING MA_XSTRINGIFY(MA_VERSION_MAJOR) "." MA_XSTRINGIFY(MA_VERSION_MINOR) "." MA_XSTRINGIFY(MA_VERSION_REVISION)
#if defined(_MSC_VER) && !defined(__clang__)
@@ -212,6 +212,13 @@ typedef ma_uint16 wchar_t;
#ifdef _MSC_VER
#define MA_INLINE __forceinline
/* noinline was introduced in Visual Studio 2005. */
#if _MSC_VER >= 1400
#define MA_NO_INLINE __declspec(noinline)
#else
#define MA_NO_INLINE
#endif
#elif defined(__GNUC__)
/*
I've had a bug report where GCC is emitting warnings about functions possibly not being inlineable. This warning happens when
@@ -228,16 +235,20 @@ typedef ma_uint16 wchar_t;
#if (__GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 2)) || defined(__clang__)
#define MA_INLINE MA_GNUC_INLINE_HINT __attribute__((always_inline))
#define MA_NO_INLINE __attribute__((noinline))
#else
#define MA_INLINE MA_GNUC_INLINE_HINT
#define MA_NO_INLINE __attribute__((noinline))
#endif
#elif defined(__WATCOMC__)
#define MA_INLINE __inline
#define MA_NO_INLINE
#else
#define MA_INLINE
#define MA_NO_INLINE
#endif
#if !defined(MA_API)
/* MA_DLL is not officially supported. You're on your own if you want to use this. */
#if defined(MA_DLL)
#if defined(_WIN32)
#define MA_DLL_IMPORT __declspec(dllimport)
@@ -254,19 +265,29 @@ typedef ma_uint16 wchar_t;
#define MA_DLL_PRIVATE static
#endif
#endif
#endif
#if !defined(MA_API)
#if defined(MA_DLL)
#if defined(MINIAUDIO_IMPLEMENTATION) || defined(MA_IMPLEMENTATION)
#define MA_API MA_DLL_EXPORT
#else
#define MA_API MA_DLL_IMPORT
#endif
#define MA_PRIVATE MA_DLL_PRIVATE
#else
#define MA_API extern
#endif
#endif
#if !defined(MA_STATIC)
#if defined(MA_DLL)
#define MA_PRIVATE MA_DLL_PRIVATE
#else
#define MA_PRIVATE static
#endif
#endif
/* SIMD alignment in bytes. Currently set to 32 bytes in preparation for future AVX optimizations. */
#define MA_SIMD_ALIGNMENT 32
@@ -464,28 +485,31 @@ typedef enum
MA_CANCELLED = -51,
MA_MEMORY_ALREADY_MAPPED = -52,
/* General non-standard errors. */
MA_CRC_MISMATCH = -100,
/* General miniaudio-specific errors. */
MA_FORMAT_NOT_SUPPORTED = -100,
MA_DEVICE_TYPE_NOT_SUPPORTED = -101,
MA_SHARE_MODE_NOT_SUPPORTED = -102,
MA_NO_BACKEND = -103,
MA_NO_DEVICE = -104,
MA_API_NOT_FOUND = -105,
MA_INVALID_DEVICE_CONFIG = -106,
MA_LOOP = -107,
MA_BACKEND_NOT_ENABLED = -108,
MA_FORMAT_NOT_SUPPORTED = -200,
MA_DEVICE_TYPE_NOT_SUPPORTED = -201,
MA_SHARE_MODE_NOT_SUPPORTED = -202,
MA_NO_BACKEND = -203,
MA_NO_DEVICE = -204,
MA_API_NOT_FOUND = -205,
MA_INVALID_DEVICE_CONFIG = -206,
MA_LOOP = -207,
MA_BACKEND_NOT_ENABLED = -208,
/* State errors. */
MA_DEVICE_NOT_INITIALIZED = -200,
MA_DEVICE_ALREADY_INITIALIZED = -201,
MA_DEVICE_NOT_STARTED = -202,
MA_DEVICE_NOT_STOPPED = -203,
MA_DEVICE_NOT_INITIALIZED = -300,
MA_DEVICE_ALREADY_INITIALIZED = -301,
MA_DEVICE_NOT_STARTED = -302,
MA_DEVICE_NOT_STOPPED = -303,
/* Operation errors. */
MA_FAILED_TO_INIT_BACKEND = -300,
MA_FAILED_TO_OPEN_BACKEND_DEVICE = -301,
MA_FAILED_TO_START_BACKEND_DEVICE = -302,
MA_FAILED_TO_STOP_BACKEND_DEVICE = -303
MA_FAILED_TO_INIT_BACKEND = -400,
MA_FAILED_TO_OPEN_BACKEND_DEVICE = -401,
MA_FAILED_TO_START_BACKEND_DEVICE = -402,
MA_FAILED_TO_STOP_BACKEND_DEVICE = -403
} ma_result;
@@ -547,7 +571,7 @@ typedef enum
ma_standard_sample_rate_192000 = 192000,
ma_standard_sample_rate_16000 = 16000, /* Extreme lows */
ma_standard_sample_rate_11025 = 11250,
ma_standard_sample_rate_11025 = 11025,
ma_standard_sample_rate_8000 = 8000,
ma_standard_sample_rate_352800 = 352800, /* Extreme highs */
@@ -1337,13 +1361,14 @@ typedef struct
float volumeBeg; /* If volumeBeg and volumeEnd is equal to 1, no fading happens (ma_fader_process_pcm_frames() will run as a passthrough). */
float volumeEnd;
ma_uint64 lengthInFrames; /* The total length of the fade. */
ma_uint64 cursorInFrames; /* The current time in frames. Incremented by ma_fader_process_pcm_frames(). */
ma_int64 cursorInFrames; /* The current time in frames. Incremented by ma_fader_process_pcm_frames(). Signed because it'll be offset by startOffsetInFrames in set_fade_ex(). */
} ma_fader;
MA_API ma_result ma_fader_init(const ma_fader_config* pConfig, ma_fader* pFader);
MA_API ma_result ma_fader_process_pcm_frames(ma_fader* pFader, void* pFramesOut, const void* pFramesIn, ma_uint64 frameCount);
MA_API void ma_fader_get_data_format(const ma_fader* pFader, ma_format* pFormat, ma_uint32* pChannels, ma_uint32* pSampleRate);
MA_API void ma_fader_set_fade(ma_fader* pFader, float volumeBeg, float volumeEnd, ma_uint64 lengthInFrames);
MA_API void ma_fader_set_fade_ex(ma_fader* pFader, float volumeBeg, float volumeEnd, ma_uint64 lengthInFrames, ma_int64 startOffsetInFrames);
MA_API float ma_fader_get_current_volume(const ma_fader* pFader);
@@ -1669,7 +1694,7 @@ MA_API void ma_resampler_uninit(ma_resampler* pResampler, const ma_allocation_ca
/*
Converts the given input data.
Both the input and output frames must be in the format specified in the config when the resampler was initilized.
Both the input and output frames must be in the format specified in the config when the resampler was initialized.
On input, [pFrameCountOut] contains the number of output frames to process. On output it contains the number of output frames that
were actually processed, which may be less than the requested amount which will happen if there's not enough input data. You can use
@@ -3314,7 +3339,7 @@ struct ma_device_config
ma_uint32 periods;
ma_performance_profile performanceProfile;
ma_bool8 noPreSilencedOutputBuffer; /* When set to true, the contents of the output buffer passed into the data callback will be left undefined rather than initialized to silence. */
ma_bool8 noClip; /* When set to true, the contents of the output buffer passed into the data callback will be clipped after returning. Only applies when the playback sample format is f32. */
ma_bool8 noClip; /* When set to true, the contents of the output buffer passed into the data callback will not be clipped after returning. Only applies when the playback sample format is f32. */
ma_bool8 noDisableDenormals; /* Do not disable denormals when firing the data callback. */
ma_bool8 noFixedSizedCallback; /* Disables strict fixed-sized data callbacks. Setting this to true will result in the period size being treated only as a hint to the backend. This is an optimization for those who don't need fixed sized callbacks. */
ma_device_data_proc dataCallback;
@@ -3388,7 +3413,7 @@ struct ma_device_config
/*
The callback for handling device enumeration. This is fired from `ma_context_enumerated_devices()`.
The callback for handling device enumeration. This is fired from `ma_context_enumerate_devices()`.
Parameters
@@ -3962,6 +3987,8 @@ struct ma_context
ma_proc RegOpenKeyExA;
ma_proc RegCloseKey;
ma_proc RegQueryValueExA;
/*HRESULT*/ long CoInitializeResult;
} win32;
#endif
#ifdef MA_POSIX
@@ -4241,21 +4268,12 @@ struct ma_device
struct
{
/* AudioWorklets path. */
/* EMSCRIPTEN_WEBAUDIO_T */ int audioContextPlayback;
/* EMSCRIPTEN_WEBAUDIO_T */ int audioContextCapture;
/* EMSCRIPTEN_AUDIO_WORKLET_NODE_T */ int workletNodePlayback;
/* EMSCRIPTEN_AUDIO_WORKLET_NODE_T */ int workletNodeCapture;
size_t intermediaryBufferSizeInFramesPlayback;
size_t intermediaryBufferSizeInFramesCapture;
float* pIntermediaryBufferPlayback;
float* pIntermediaryBufferCapture;
void* pStackBufferPlayback;
void* pStackBufferCapture;
ma_bool32 isInitialized;
/* ScriptProcessorNode path. */
int indexPlayback; /* We use a factory on the JavaScript side to manage devices and use an index for JS/C interop. */
int indexCapture;
/* EMSCRIPTEN_WEBAUDIO_T */ int audioContext;
/* EMSCRIPTEN_WEBAUDIO_T */ int audioWorklet;
float* pIntermediaryBuffer;
void* pStackBuffer;
ma_result initResult; /* Set to MA_BUSY while initialization is in progress. */
int deviceIndex; /* We store the device in a list on the JavaScript side. This is used to map our C object to the JS object. */
} webaudio;
#endif
#ifdef MA_SUPPORT_NULL
@@ -4905,8 +4923,8 @@ then be set directly on the structure. Below are the members of the `ma_device_c
callback will write to every sample in the output buffer, or if you are doing your own clearing.
noClip
When set to true, the contents of the output buffer passed into the data callback will be clipped after returning. When set to false (default), the
contents of the output buffer are left alone after returning and it will be left up to the backend itself to decide whether or not the clip. This only
When set to true, the contents of the output buffer are left alone after returning and it will be left up to the backend itself to decide whether or
not to clip. When set to false (default), the contents of the output buffer passed into the data callback will be clipped after returning. This only
applies when the playback sample format is f32.
noDisableDenormals
@@ -5419,8 +5437,6 @@ speakers or received from the microphone which can in turn result in de-syncs.
Do not call this in any callback.
This will be called implicitly by `ma_device_uninit()`.
See Also
--------
@@ -6343,7 +6359,7 @@ struct ma_encoder
ma_encoder_uninit_proc onUninit;
ma_encoder_write_pcm_frames_proc onWritePCMFrames;
void* pUserData;
void* pInternalEncoder; /* <-- The drwav/drflac/stb_vorbis/etc. objects. */
void* pInternalEncoder;
union
{
struct
@@ -6408,6 +6424,33 @@ MA_API ma_result ma_waveform_set_frequency(ma_waveform* pWaveform, double freque
MA_API ma_result ma_waveform_set_type(ma_waveform* pWaveform, ma_waveform_type type);
MA_API ma_result ma_waveform_set_sample_rate(ma_waveform* pWaveform, ma_uint32 sampleRate);
typedef struct
{
ma_format format;
ma_uint32 channels;
ma_uint32 sampleRate;
double dutyCycle;
double amplitude;
double frequency;
} ma_pulsewave_config;
MA_API ma_pulsewave_config ma_pulsewave_config_init(ma_format format, ma_uint32 channels, ma_uint32 sampleRate, double dutyCycle, double amplitude, double frequency);
typedef struct
{
ma_waveform waveform;
ma_pulsewave_config config;
} ma_pulsewave;
MA_API ma_result ma_pulsewave_init(const ma_pulsewave_config* pConfig, ma_pulsewave* pWaveform);
MA_API void ma_pulsewave_uninit(ma_pulsewave* pWaveform);
MA_API ma_result ma_pulsewave_read_pcm_frames(ma_pulsewave* pWaveform, void* pFramesOut, ma_uint64 frameCount, ma_uint64* pFramesRead);
MA_API ma_result ma_pulsewave_seek_to_pcm_frame(ma_pulsewave* pWaveform, ma_uint64 frameIndex);
MA_API ma_result ma_pulsewave_set_amplitude(ma_pulsewave* pWaveform, double amplitude);
MA_API ma_result ma_pulsewave_set_frequency(ma_pulsewave* pWaveform, double frequency);
MA_API ma_result ma_pulsewave_set_sample_rate(ma_pulsewave* pWaveform, ma_uint32 sampleRate);
MA_API ma_result ma_pulsewave_set_duty_cycle(ma_pulsewave* pWaveform, double dutyCycle);
typedef enum
{
ma_noise_type_white,
@@ -6430,7 +6473,7 @@ MA_API ma_noise_config ma_noise_config_init(ma_format format, ma_uint32 channels
typedef struct
{
ma_data_source_vtable ds;
ma_data_source_base ds;
ma_noise_config config;
ma_lcg lcg;
union
@@ -6828,7 +6871,7 @@ typedef struct
/*
Extended processing callback. This callback is used for effects that process input and output
at different rates (i.e. they perform resampling). This is similar to the simple version, only
they take two seperate frame counts: one for input, and one for output.
they take two separate frame counts: one for input, and one for output.
On input, `pFrameCountOut` is equal to the capacity of the output buffer for each bus, whereas
`pFrameCountIn` will be equal to the number of PCM frames in each of the buffers in `ppFramesIn`.
@@ -7333,6 +7376,15 @@ typedef struct
MA_ATOMIC(4, ma_bool32) isSpatializationDisabled; /* Set to false by default. When set to false, will not have spatialisation applied. */
MA_ATOMIC(4, ma_uint32) pinnedListenerIndex; /* The index of the listener this node should always use for spatialization. If set to MA_LISTENER_INDEX_CLOSEST the engine will use the closest listener. */
/* When setting a fade, it's not done immediately in ma_sound_set_fade(). It's deferred to the audio thread which means we need to store the settings here. */
struct
{
ma_atomic_float volumeBeg;
ma_atomic_float volumeEnd;
ma_atomic_uint64 fadeLengthInFrames; /* <-- Defaults to (~(ma_uint64)0) which is used to indicate that no fade should be applied. */
ma_atomic_uint64 absoluteGlobalTimeInFrames; /* <-- The time to start the fade. */
} fadeSettings;
/* Memory management. */
ma_bool8 _ownsHeap;
void* _pHeap;
@@ -7413,6 +7465,8 @@ typedef ma_sound ma_sound_group;
MA_API ma_sound_group_config ma_sound_group_config_init(void); /* Deprecated. Will be removed in version 0.12. Use ma_sound_config_2() instead. */
MA_API ma_sound_group_config ma_sound_group_config_init_2(ma_engine* pEngine); /* Will be renamed to ma_sound_config_init() in version 0.12. */
typedef void (* ma_engine_process_proc)(void* pUserData, float* pFramesOut, ma_uint64 frameCount);
typedef struct
{
#if !defined(MA_NO_RESOURCE_MANAGER)
@@ -7422,6 +7476,7 @@ typedef struct
ma_context* pContext;
ma_device* pDevice; /* If set, the caller is responsible for calling ma_engine_data_callback() in the device's data callback. */
ma_device_id* pPlaybackDeviceID; /* The ID of the playback device to use with the default listener. */
ma_device_data_proc dataCallback; /* Can be null. Can be used to provide a custom device data callback. */
ma_device_notification_proc notificationCallback;
#endif
ma_log* pLog; /* When set to NULL, will use the context's log. */
@@ -7438,6 +7493,8 @@ typedef struct
ma_bool32 noDevice; /* When set to true, don't create a default device. ma_engine_read_pcm_frames() can be called manually to read data. */
ma_mono_expansion_mode monoExpansionMode; /* Controls how the mono channel should be expanded to other channels when spatialization is disabled on a sound. */
ma_vfs* pResourceManagerVFS; /* A pointer to a pre-allocated VFS object to use with the resource manager. This is ignored if pResourceManager is not NULL. */
ma_engine_process_proc onProcess; /* Fired at the end of each call to ma_engine_read_pcm_frames(). For engine's that manage their own internal device (the default configuration), this will be fired from the audio thread, and you do not need to call ma_engine_read_pcm_frames() manually in order to trigger this. */
void* pProcessUserData; /* User data that's passed into onProcess. */
} ma_engine_config;
MA_API ma_engine_config ma_engine_config_init(void);
@@ -7465,6 +7522,8 @@ struct ma_engine
ma_uint32 gainSmoothTimeInFrames; /* The number of frames to interpolate the gain of spatialized sounds across. */
ma_uint32 defaultVolumeSmoothTimeInPCMFrames;
ma_mono_expansion_mode monoExpansionMode;
ma_engine_process_proc onProcess;
void* pProcessUserData;
};
MA_API ma_result ma_engine_init(const ma_engine_config* pConfig, ma_engine* pEngine);
@@ -7489,7 +7548,9 @@ MA_API ma_uint32 ma_engine_get_sample_rate(const ma_engine* pEngine);
MA_API ma_result ma_engine_start(ma_engine* pEngine);
MA_API ma_result ma_engine_stop(ma_engine* pEngine);
MA_API ma_result ma_engine_set_volume(ma_engine* pEngine, float volume);
MA_API float ma_engine_get_volume(ma_engine* pEngine);
MA_API ma_result ma_engine_set_gain_db(ma_engine* pEngine, float gainDB);
MA_API float ma_engine_get_gain_db(ma_engine* pEngine);
MA_API ma_uint32 ma_engine_get_listener_count(const ma_engine* pEngine);
MA_API ma_uint32 ma_engine_find_closest_listener(const ma_engine* pEngine, float absolutePosX, float absolutePosY, float absolutePosZ);
@@ -7523,6 +7584,8 @@ MA_API ma_engine* ma_sound_get_engine(const ma_sound* pSound);
MA_API ma_data_source* ma_sound_get_data_source(const ma_sound* pSound);
MA_API ma_result ma_sound_start(ma_sound* pSound);
MA_API ma_result ma_sound_stop(ma_sound* pSound);
MA_API ma_result ma_sound_stop_with_fade_in_pcm_frames(ma_sound* pSound, ma_uint64 fadeLengthInFrames); /* Will overwrite any scheduled stop and fade. */
MA_API ma_result ma_sound_stop_with_fade_in_milliseconds(ma_sound* pSound, ma_uint64 fadeLengthInFrames); /* Will overwrite any scheduled stop and fade. */
MA_API void ma_sound_set_volume(ma_sound* pSound, float volume);
MA_API float ma_sound_get_volume(const ma_sound* pSound);
MA_API void ma_sound_set_pan(ma_sound* pSound, float pan);
@@ -7565,13 +7628,18 @@ MA_API void ma_sound_set_directional_attenuation_factor(ma_sound* pSound, float
MA_API float ma_sound_get_directional_attenuation_factor(const ma_sound* pSound);
MA_API void ma_sound_set_fade_in_pcm_frames(ma_sound* pSound, float volumeBeg, float volumeEnd, ma_uint64 fadeLengthInFrames);
MA_API void ma_sound_set_fade_in_milliseconds(ma_sound* pSound, float volumeBeg, float volumeEnd, ma_uint64 fadeLengthInMilliseconds);
MA_API void ma_sound_set_fade_start_in_pcm_frames(ma_sound* pSound, float volumeBeg, float volumeEnd, ma_uint64 fadeLengthInFrames, ma_uint64 absoluteGlobalTimeInFrames);
MA_API void ma_sound_set_fade_start_in_milliseconds(ma_sound* pSound, float volumeBeg, float volumeEnd, ma_uint64 fadeLengthInMilliseconds, ma_uint64 absoluteGlobalTimeInMilliseconds);
MA_API float ma_sound_get_current_fade_volume(const ma_sound* pSound);
MA_API void ma_sound_set_start_time_in_pcm_frames(ma_sound* pSound, ma_uint64 absoluteGlobalTimeInFrames);
MA_API void ma_sound_set_start_time_in_milliseconds(ma_sound* pSound, ma_uint64 absoluteGlobalTimeInMilliseconds);
MA_API void ma_sound_set_stop_time_in_pcm_frames(ma_sound* pSound, ma_uint64 absoluteGlobalTimeInFrames);
MA_API void ma_sound_set_stop_time_in_milliseconds(ma_sound* pSound, ma_uint64 absoluteGlobalTimeInMilliseconds);
MA_API void ma_sound_set_stop_time_with_fade_in_pcm_frames(ma_sound* pSound, ma_uint64 stopAbsoluteGlobalTimeInFrames, ma_uint64 fadeLengthInFrames);
MA_API void ma_sound_set_stop_time_with_fade_in_milliseconds(ma_sound* pSound, ma_uint64 stopAbsoluteGlobalTimeInMilliseconds, ma_uint64 fadeLengthInMilliseconds);
MA_API ma_bool32 ma_sound_is_playing(const ma_sound* pSound);
MA_API ma_uint64 ma_sound_get_time_in_pcm_frames(const ma_sound* pSound);
MA_API ma_uint64 ma_sound_get_time_in_milliseconds(const ma_sound* pSound);
MA_API void ma_sound_set_looping(ma_sound* pSound, ma_bool32 isLooping);
MA_API ma_bool32 ma_sound_is_looping(const ma_sound* pSound);
MA_API ma_bool32 ma_sound_at_end(const ma_sound* pSound);
@@ -9,7 +9,7 @@ extern "C" {
typedef struct
{
ma_node_config nodeConfig;
ma_uint32 channels; /* The number of channels of the source, which will be the same as the output. Must be 1 or 2. The excite bus must always have one channel. */
ma_uint32 channels;
} ma_channel_combiner_node_config;
MA_API ma_channel_combiner_node_config ma_channel_combiner_node_config_init(ma_uint32 channels);
@@ -9,7 +9,7 @@ extern "C" {
typedef struct
{
ma_node_config nodeConfig;
ma_uint32 channels; /* The number of channels of the source, which will be the same as the output. Must be 1 or 2. The excite bus must always have one channel. */
ma_uint32 channels;
} ma_channel_separator_node_config;
MA_API ma_channel_separator_node_config ma_channel_separator_node_config_init(ma_uint32 channels);
@@ -0,0 +1,49 @@
This is just a little experiment to explore some ideas for the kind of API that I would build if I
was building my own operating system. The name "osaudio" means Operating System Audio. Or maybe you
can think of it as Open Source Audio. It's whatever you want it to be.
The idea behind this project came about after considering the absurd complexity of audio APIs on
various platforms after years of working on miniaudio. This project aims to disprove the idea that
complete, flexible audio solutions and simple APIs are mutually exclusive, and to show that it's
possible to have both. I challenge anybody to prove me wrong.
In addition to the above, I also wanted to explore some ideas for a different API design to
miniaudio. miniaudio uses a callback model for data transfer, whereas osaudio uses a blocking
read/write model.
This project is essentially just a header file with a reference implementation that uses miniaudio
under the hood. You can compile this very easily - just compile osaudio_miniaudio.c, and use
osaudio.h just like any other header. There are no dependencies for the header, and the miniaudio
implementation obviously requires miniaudio. Adjust the include path in osaudio_miniaudio.c if need
be.
See osaudio.h for full documentation. Below is an example to get you started:
```c
#include "osaudio.h"
...
osaudio_t audio;
osaudio_config_t config;
osaudio_config_init(&config, OSAUDIO_OUTPUT);
config.format = OSAUDIO_FORMAT_F32;
config.channels = 2;
config.rate = 48000;
osaudio_open(&audio, &config);
osaudio_write(audio, myAudioData, frameCount); // <-- This will block until all of the data has been sent to the device.
osaudio_close(audio);
```
Compare the code above with the likes of other APIs like Core Audio and PipeWire. I challenge
anybody to argue their APIs are cleaner and easier to use than this when it comes to simple audio
playback.
If you have any feedback on this I'd be interested to hear it. In particular, I'd really like to
hear from people who believe the likes of Core Audio (Apple), PipeWire, PulseAudio or any other
audio API actually have good APIs (they don't!) and what makes theirs better and/or worse than
this project.
@@ -0,0 +1,604 @@
/*
This is a simple API for low-level audio playback and capture. A reference implementation using
miniaudio is provided in osaudio.c which can be found alongside this file. Consider all code
public domain.
The idea behind this project came about after considering the absurd complexity of audio APIs on
various platforms after years of working on miniaudio. This project aims to disprove the idea that
complete, flexible audio solutions and simple APIs are mutually exclusive, and to show that it's
possible to have both. The idea of reliability through simplicity is the first and foremost goal of this
project. The difference between this project and miniaudio is that this project is designed around
the idea of what I would build if I was building an audio API for an operating system, such as at
the level of WASAPI or ALSA. A cross-platform and cross-backend library like miniaudio is
necessarily different in design, but there are indeed things that I would have done differently if
given my time again, some of which I'm expressing in this project.
---
The concept of low-level audio is simple - you have a device, such as a speaker system or a
microphone system, and then you write or read audio data to/from it. So in the case of playback, you
need only write your raw audio data to the device which then emits it from the speakers when it's
ready. Likewise, for capture you simply read audio data from the device which is filled with data
by the microphone.
A complete low-level audio solution requires the following:
1) The ability to enumerate devices that are connected to the system.
2) The ability to open and close a connection to a device.
3) The ability to start and stop the device.
4) The ability to write and read audio data to/from the device.
5) The ability to query the device for its data configuration.
6) The ability to notify the application when certain events occur, such as the device being
stopped, or rerouted.
The API presented here aims to meet all of the above requirements. It uses a single-threaded
blocking read/write model for data delivery instead of a callback model. This makes it a bit more
flexible since it gives the application full control over the audio thread. It might also make it
more feasible to use this API on single-threaded systems.
Device enumeration is achieved with a single function: osaudio_enumerate(). This function returns
an array of osaudio_info_t structures which contain information about each device. The array is
allocated by the implementation and must be freed with free(). Contained within the osaudio_info_t struct is, most
importantly, the device ID, which is used to open a connection to the device, and the name of the
device which can be used to display to the user. For advanced users, it also includes information
about the device's native data configuration.
Opening and closing a connection to a device is achieved with osaudio_open() and osaudio_close().
An important concept is that of the ability to configure the device. This is achieved with the
osaudio_config_t structure which is passed to osaudio_open(). In addition to the ID of the device,
this structure includes information about the desired format, channel count and sample rate. You
can also configure the latency of the device, or the buffer size, which is specified in frames. A
flags member is used for specifying additional options, such as whether or not to disable automatic
rerouting. Finally, a callback can be specified for notifications. When osaudio_open() returns, the
config structure will be filled with the device's actual configuration. You can inspect the channel
map from this structure to know how to arrange the channels in your audio data.
This API uses a blocking write/read model for pushing and pulling data to/from the device. This
is done with the osaudio_write() and osaudio_read() functions. These functions will block until
the requested number of frames have been processed or the device is drained or flushed with
osaudio_drain() or osaudio_flush() respectively. It is from these functions that the device is
started. As soon as you start writing data with osaudio_write() or reading data with
osaudio_read(), the device will start. When the device is drained or flushed with osaudio_drain()
or osaudio_flush(), the device will be stopped. osaudio_drain() will block until the device has
been drained, whereas osaudio_flush() will stop playback immediately and return. You can also pause
and resume the device with osaudio_pause() and osaudio_resume(). Since reading and writing is
blocking, it can be useful to know how many frames can be written/read without blocking. This is
achieved with osaudio_get_avail().
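As a sketch of the blocking model described above, a capture loop built on the documented functions might look like the following. This is a fragment against the hypothetical osaudio API, not a complete program; `audio` is assumed to be a device opened with osaudio_open() for OSAUDIO_INPUT:

```c
/* Hypothetical capture loop against the osaudio API described above. */
float buffer[4800 * 2]; /* 100ms of stereo f32 at 48kHz */
int framesAvail;

for (;;) {
    /* osaudio_read() blocks until 4800 frames have been captured; the
       first call also starts the device. */
    osaudio_read(audio, buffer, 4800);

    /* osaudio_get_avail() reports how many frames could be read right
       now without blocking, which is useful for pacing. */
    framesAvail = osaudio_get_avail(audio);

    /* ... process the captured frames in `buffer` ... */
}
```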
Querying the device's configuration is achieved with osaudio_get_info(). This function will return
a pointer to a osaudio_info_t structure which contains information about the device, most
importantly its name and data configuration. The name is important for displaying on a UI, and
the data configuration is important for knowing how to format your audio data. The osaudio_info_t
structure will contain an array of osaudio_config_t structures. This array will have a single
entry holding the exact information that was returned in the config structure that was passed to
osaudio_open().
A common requirement is to open a device that represents the operating system's default device.
This is done easily by simply passing in NULL for the device ID. Below is an example for opening a
default device:
int result;
osaudio_t audio;
osaudio_config_t config;
osaudio_config_init(&config, OSAUDIO_OUTPUT);
config.format = OSAUDIO_FORMAT_F32;
config.channels = 2;
config.rate = 48000;
result = osaudio_open(&audio, &config);
if (result != OSAUDIO_SUCCESS) {
printf("Failed to open device.\n");
return -1;
}
...
osaudio_close(audio);
In the above example, the default device is opened for playback (OSAUDIO_OUTPUT). The format is
set to 32-bit floating point (OSAUDIO_FORMAT_F32), the channel count is set to stereo (2), and the
sample rate is set to 48kHz. The device is then closed when we're done with it.
If instead we wanted to open a specific device, we can do that by passing in the device ID. Below
is an example for how to do this:
int result;
osaudio_t audio;
osaudio_config_t config;
unsigned int infoCount;
osaudio_info_t* info;
result = osaudio_enumerate(&infoCount, &info);
if (result != OSAUDIO_SUCCESS) {
printf("Failed to enumerate devices.\n");
return -1;
}
// ... Iterate over the `info` array and find the device you want to open. Use the `direction` member to discriminate between input and output ...
osaudio_config_init(&config, OSAUDIO_OUTPUT);
config.id = &info[indexOfYourChosenDevice].id;
config.format = OSAUDIO_FORMAT_F32;
config.channels = 2;
config.rate = 48000;
osaudio_open(&audio, &config);
...
osaudio_close(audio);
free(info); // The pointer returned by osaudio_enumerate() must be freed with free().
The id structure is just a 256 byte array that uniquely identifies the device. Implementations may
have different representations for device IDs, and a 256 byte array should accommodate all of
them. Implementations are required to zero-fill unused bytes. The osaudio_id_t structure can be
copied with a plain byte copy, which makes it suitable for serialization and deserialization when
you want to save the device ID to permanent storage, such as a config file.
Implementations need to do their own data conversion between the device's native data configuration
and the requested configuration. When the format, channels and rate are explicitly specified in
the config, they should be unchanged when osaudio_open() returns. If this is not possible,
osaudio_open() will return OSAUDIO_FORMAT_NOT_SUPPORTED. However, there are cases where it's useful
for a program to use the device's native configuration instead of some fixed configuration. This is
achieved by setting the format, channels and rate to 0. Below is an example:
int result;
osaudio_t audio;
osaudio_config_t config;
osaudio_config_init(&config, OSAUDIO_OUTPUT);
result = osaudio_open(&audio, &config);
if (result != OSAUDIO_SUCCESS) {
printf("Failed to open device.\n");
return -1;
}
// ... `config` will have been updated by osaudio_open() to contain the *actual* format/channels/rate ...
osaudio_close(audio);
In addition to the code above, you can explicitly call `osaudio_get_info()` to retrieve the format
configuration. If you need to know the native configuration before opening the device, you can use
enumeration. The format, channels and rate will be contained in the first item in the configs array.
The examples above all use playback, but the same applies for capture. The only difference is that
the direction is set to OSAUDIO_INPUT instead of OSAUDIO_OUTPUT.
To output audio from the speakers you need to call osaudio_write(). Likewise, to capture audio from
a microphone you need to call osaudio_read(). These functions will block until the requested number
of frames have been written or read. The device will start automatically. Below is an example for
writing some data to a device:
int result = osaudio_write(audio, myAudioData, myAudioDataFrameCount);
if (result == OSAUDIO_SUCCESS) {
printf("Successfully wrote %d frames of audio data.\n", myAudioDataFrameCount);
} else {
printf("Failed to write audio data.\n");
}
osaudio_write() and osaudio_read() will return OSAUDIO_SUCCESS if the requested number of frames
were written or read. You cannot call osaudio_close() while a write or read operation is in
progress.
If you want to write or read audio data without blocking, you can use osaudio_get_avail() to
determine how many frames are available for writing or reading. Below is an example:
unsigned int framesAvailable = osaudio_get_avail(audio);
if (framesAvailable > 0) {
printf("There are %d frames available for writing.\n", framesAvailable);
} else {
printf("There are no frames available for writing.\n");
}
If you want to abort a blocking write or read, you can use osaudio_flush(). This will result in any
pending write or read operation being aborted.
There are several ways of pausing a device. The first is to just drain or flush the device and
simply don't do any more read/write operations. A drain and flush will put the device into a
stopped state until the next call to either read or write, depending on the device's direction.
If, however, this does not suit your requirements, you can use osaudio_pause() and
osaudio_resume(). Take note, however, that calling osaudio_drain() on a paused device will never
return, because the device will be in a stopped state, which in turn means the buffer will never
be read and therefore never drained.
Everything is thread safe with a few minor exceptions which have no practical impact on the client:
* You cannot call any function while osaudio_open() is still in progress.
* You cannot call osaudio_close() while any other function is still in progress.
* You can only call osaudio_write() and osaudio_read() from one thread at a time.
None of these issues should be a problem for the client in practice. You won't have a valid
osaudio_t object until osaudio_open() has returned. For osaudio_close(), it makes no sense to
destroy the object while it's still in use, and doing so would mean the client is using very poor
form. For osaudio_write() and osaudio_read(), you wouldn't ever want to call this simultaneously
across multiple threads anyway because otherwise you'd end up with garbage audio.
The rules above only apply when working with a single osaudio_t object. You can have multiple
osaudio_t objects open at the same time, and you can call any function on different osaudio_t
objects simultaneously from different threads.
---
# Feedback
I'm looking for feedback on the following:
* Are the supported formats enough? If not, what other formats are needed, and what is the
justification for including it? Just because it's the native format on one particular
piece of hardware is not enough. Big-endian and little-endian will never be supported. All
formats are native-endian.
* Are the available channel positions enough? What other positions are needed?
* Just some general criticism would be appreciated.
*/
#ifndef osaudio_h
#define osaudio_h
#ifdef __cplusplus
extern "C" {
#endif
/*
Support far pointers on relevant platforms (DOS, in particular). The version of this file
distributed with an operating system wouldn't need this because they would just have an
OS-specific version of this file, but as a reference it's useful to use far pointers here.
*/
#if defined(__MSDOS__) || defined(_MSDOS) || defined(__DOS__)
#define OSAUDIO_FAR far
#else
#define OSAUDIO_FAR
#endif
typedef struct _osaudio_t* osaudio_t;
typedef struct osaudio_config_t osaudio_config_t;
typedef struct osaudio_id_t osaudio_id_t;
typedef struct osaudio_info_t osaudio_info_t;
typedef struct osaudio_notification_t osaudio_notification_t;
/* Result codes. */
typedef int osaudio_result_t;
#define OSAUDIO_SUCCESS 0
#define OSAUDIO_ERROR -1
#define OSAUDIO_INVALID_ARGS -2
#define OSAUDIO_INVALID_OPERATION -3
#define OSAUDIO_OUT_OF_MEMORY -4
#define OSAUDIO_FORMAT_NOT_SUPPORTED -101 /* The requested format is not supported. */
#define OSAUDIO_XRUN -102 /* An underrun or overrun occurred. Can be returned by osaudio_read() or osaudio_write(). */
#define OSAUDIO_DEVICE_STOPPED -103 /* The device is stopped. Can be returned by osaudio_drain(). It is invalid to call osaudio_drain() on a device that is not running because otherwise it'll get stuck. */
/* Directions. Cannot be combined. Use separate osaudio_t objects for bidirectional setups. */
typedef int osaudio_direction_t;
#define OSAUDIO_INPUT 1
#define OSAUDIO_OUTPUT 2
/* All formats are native endian and interleaved. */
typedef int osaudio_format_t;
#define OSAUDIO_FORMAT_UNKNOWN 0
#define OSAUDIO_FORMAT_F32 1
#define OSAUDIO_FORMAT_U8 2
#define OSAUDIO_FORMAT_S16 3
#define OSAUDIO_FORMAT_S24 4 /* Tightly packed. */
#define OSAUDIO_FORMAT_S32 5
/* Channel positions. */
typedef unsigned char osaudio_channel_t;
#define OSAUDIO_CHANNEL_NONE 0
#define OSAUDIO_CHANNEL_MONO 1
#define OSAUDIO_CHANNEL_FL 2
#define OSAUDIO_CHANNEL_FR 3
#define OSAUDIO_CHANNEL_FC 4
#define OSAUDIO_CHANNEL_LFE 5
#define OSAUDIO_CHANNEL_BL 6
#define OSAUDIO_CHANNEL_BR 7
#define OSAUDIO_CHANNEL_FLC 8
#define OSAUDIO_CHANNEL_FRC 9
#define OSAUDIO_CHANNEL_BC 10
#define OSAUDIO_CHANNEL_SL 11
#define OSAUDIO_CHANNEL_SR 12
#define OSAUDIO_CHANNEL_TC 13
#define OSAUDIO_CHANNEL_TFL 14
#define OSAUDIO_CHANNEL_TFC 15
#define OSAUDIO_CHANNEL_TFR 16
#define OSAUDIO_CHANNEL_TBL 17
#define OSAUDIO_CHANNEL_TBC 18
#define OSAUDIO_CHANNEL_TBR 19
#define OSAUDIO_CHANNEL_AUX0 20
#define OSAUDIO_CHANNEL_AUX1 21
#define OSAUDIO_CHANNEL_AUX2 22
#define OSAUDIO_CHANNEL_AUX3 23
#define OSAUDIO_CHANNEL_AUX4 24
#define OSAUDIO_CHANNEL_AUX5 25
#define OSAUDIO_CHANNEL_AUX6 26
#define OSAUDIO_CHANNEL_AUX7 27
#define OSAUDIO_CHANNEL_AUX8 28
#define OSAUDIO_CHANNEL_AUX9 29
#define OSAUDIO_CHANNEL_AUX10 30
#define OSAUDIO_CHANNEL_AUX11 31
#define OSAUDIO_CHANNEL_AUX12 32
#define OSAUDIO_CHANNEL_AUX13 33
#define OSAUDIO_CHANNEL_AUX14 34
#define OSAUDIO_CHANNEL_AUX15 35
#define OSAUDIO_CHANNEL_AUX16 36
#define OSAUDIO_CHANNEL_AUX17 37
#define OSAUDIO_CHANNEL_AUX18 38
#define OSAUDIO_CHANNEL_AUX19 39
#define OSAUDIO_CHANNEL_AUX20 40
#define OSAUDIO_CHANNEL_AUX21 41
#define OSAUDIO_CHANNEL_AUX22 42
#define OSAUDIO_CHANNEL_AUX23 43
#define OSAUDIO_CHANNEL_AUX24 44
#define OSAUDIO_CHANNEL_AUX25 45
#define OSAUDIO_CHANNEL_AUX26 46
#define OSAUDIO_CHANNEL_AUX27 47
#define OSAUDIO_CHANNEL_AUX28 48
#define OSAUDIO_CHANNEL_AUX29 49
#define OSAUDIO_CHANNEL_AUX30 50
#define OSAUDIO_CHANNEL_AUX31 51
/* The maximum number of channels supported. */
#define OSAUDIO_MAX_CHANNELS 64
/* Notification types. */
typedef int osaudio_notification_type_t;
#define OSAUDIO_NOTIFICATION_STARTED 0 /* The device was started in response to a call to osaudio_write() or osaudio_read(). */
#define OSAUDIO_NOTIFICATION_STOPPED 1 /* The device was stopped in response to a call to osaudio_drain() or osaudio_flush(). */
#define OSAUDIO_NOTIFICATION_REROUTED 2 /* The device was rerouted. Not all implementations need to support rerouting. */
#define OSAUDIO_NOTIFICATION_INTERRUPTION_BEGIN 3 /* The device was interrupted due to something like a phone call. */
#define OSAUDIO_NOTIFICATION_INTERRUPTION_END 4 /* The interruption has been ended. */
/* Flags. */
#define OSAUDIO_FLAG_NO_REROUTING 1 /* When set, will tell the implementation to disable automatic rerouting if possible. This is a hint and may be ignored by the implementation. */
#define OSAUDIO_FLAG_REPORT_XRUN 2 /* When set, will tell the implementation to report underruns and overruns via osaudio_write() and osaudio_read() by aborting and returning OSAUDIO_XRUN. */
struct osaudio_notification_t
{
osaudio_notification_type_t type; /* OSAUDIO_NOTIFICATION_* */
union
{
struct
{
int _unused;
} started;
struct
{
int _unused;
} stopped;
struct
{
int _unused;
} rerouted;
struct
{
int _unused;
} interruption;
} data;
};
struct osaudio_id_t
{
char data[256];
};
struct osaudio_config_t
{
osaudio_id_t* device_id; /* Set to NULL to use default device. When non-null, automatic routing will be disabled. */
osaudio_direction_t direction; /* OSAUDIO_INPUT or OSAUDIO_OUTPUT. Cannot be combined. Use separate osaudio_t objects for bidirectional setups. */
osaudio_format_t format; /* OSAUDIO_FORMAT_* */
unsigned int channels; /* Number of channels. */
unsigned int rate; /* Sample rate in frames per second (Hz). */
osaudio_channel_t channel_map[OSAUDIO_MAX_CHANNELS]; /* Leave all items set to 0 for defaults. */
unsigned int buffer_size; /* In frames. Set to 0 to use the system default. */
unsigned int flags; /* A combination of OSAUDIO_FLAG_* */
void (* notification)(void* user_data, const osaudio_notification_t* notification); /* Called when some kind of event occurs, such as a device being closed. Never called from the audio thread. */
void* user_data; /* Passed to notification(). */
};
struct osaudio_info_t
{
osaudio_id_t id;
char name[256];
osaudio_direction_t direction; /* OSAUDIO_INPUT or OSAUDIO_OUTPUT. */
unsigned int config_count;
osaudio_config_t* configs;
};
/*
Enumerates the available devices.
On output, `count` will contain the number of items in the `info` array. The array must be freed
with free() when it's no longer needed.
Use the `direction` member to discriminate between input and output devices. Below is an example:
unsigned int count;
osaudio_info_t* info;
osaudio_enumerate(&count, &info);
for (unsigned int i = 0; i < count; ++i) {
if (info[i].direction == OSAUDIO_OUTPUT) {
printf("Output device: %s\n", info[i].name);
} else {
printf("Input device: %s\n", info[i].name);
}
}
You can use the `id` member to open a specific device with osaudio_open(). You do not need to do
device enumeration if you only want to open the default device.
*/
osaudio_result_t osaudio_enumerate(unsigned int* count, osaudio_info_t** info);
/*
Initializes a default config.
The config object will be cleared to zero, with the direction set to `direction`. This will result
in a configuration that uses the device's native format, channels and rate.
osaudio_config_t is a transparent struct. Just set the relevant fields to the desired values after
calling this function. Example:
osaudio_config_t config;
osaudio_config_init(&config, OSAUDIO_OUTPUT);
config.format = OSAUDIO_FORMAT_F32;
config.channels = 2;
config.rate = 48000;
*/
void osaudio_config_init(osaudio_config_t* config, osaudio_direction_t direction);
/*
Opens a connection to a device.
On input, config must be filled with the desired configuration. On output, it will be filled with
the actual configuration.
Initialize the config with osaudio_config_init() and then fill in the desired configuration. Below
is an example:
osaudio_config_t config;
osaudio_config_init(&config, OSAUDIO_OUTPUT);
config.format = OSAUDIO_FORMAT_F32;
config.channels = 2;
config.rate = 48000;
When the format, channels or rate are left at their default values, or set to 0 (or
OSAUDIO_FORMAT_UNKNOWN for the format), the device's native format, channels or rate will be
used:
osaudio_config_t config;
osaudio_config_init(&config, OSAUDIO_OUTPUT);
config.format = OSAUDIO_FORMAT_UNKNOWN;
config.channels = 0;
config.rate = 0;
The code above is equivalent to this:
osaudio_config_t config;
osaudio_config_init(&config, OSAUDIO_OUTPUT);
On output the config will be filled with the actual configuration. The implementation will perform
any necessary data conversion between the requested data configuration and the device's native
configuration. If it cannot, the function will return an OSAUDIO_FORMAT_NOT_SUPPORTED error. In this
case the caller can decide to reinitialize the device to use its native configuration and do its
own data conversion, or abort if it cannot do so. Use the channel map to determine the ordering of
your channels. Automatic channel map conversion is not performed - that must be done manually by
the caller when transferring data to/from the device.
Close the device with osaudio_close().
Returns 0 on success, any other error code on failure.
*/
osaudio_result_t osaudio_open(osaudio_t* audio, osaudio_config_t* config);
/*
Closes a connection to a device.
As soon as this function is called, the device should be considered invalid and unusable. Do not
attempt to use the audio object once this function has been called.
It's invalid to call this while any other function is still running. You can use osaudio_flush() to
quickly abort any pending writes or reads. You can also use osaudio_drain() to wait for all pending
writes or reads to complete.
Returns 0 on success, < 0 on failure.
*/
osaudio_result_t osaudio_close(osaudio_t audio);
/*
Writes audio data to the device.
This will block until all data has been written or the device is closed.
You can only write from a single thread at any given time. If you want to write from multiple
threads, you need to use your own synchronization mechanism.
This will automatically start the device if frame_count is > 0 and it's not in a paused state.
Use osaudio_get_avail() to determine how much data can be written without blocking.
Returns 0 on success, < 0 on failure.
*/
osaudio_result_t osaudio_write(osaudio_t audio, const void OSAUDIO_FAR* data, unsigned int frame_count);
/*
Reads audio data from the device.
This will block until the requested number of frames has been read or the device is closed.
You can only read from a single thread at any given time. If you want to read from multiple
threads, you need to use your own synchronization mechanism.
This will automatically start the device if frame_count is > 0 and it's not in a paused state.
Use osaudio_get_avail() to determine how much data can be read without blocking.
Returns 0 on success, < 0 on failure.
*/
osaudio_result_t osaudio_read(osaudio_t audio, void OSAUDIO_FAR* data, unsigned int frame_count);
/*
Drains the device.
This will block until all pending reads or writes have completed.
If after calling this function another call to osaudio_write() or osaudio_read() is made, the
device will be resumed like normal.
It is invalid to call this while the device is paused.
Returns 0 on success, < 0 on failure.
*/
osaudio_result_t osaudio_drain(osaudio_t audio);
/*
Flushes the device.
This will immediately flush any pending reads or writes. It will not block. Any in-progress reads
or writes will return immediately.
If after calling this function another thread starts reading or writing, the device will be resumed
like normal.
Returns 0 on success, < 0 on failure.
*/
osaudio_result_t osaudio_flush(osaudio_t audio);
/*
Pauses or resumes the device.
Pausing a device will trigger an OSAUDIO_NOTIFICATION_STOPPED notification. Resuming a device will
trigger an OSAUDIO_NOTIFICATION_STARTED notification.
Returns 0 on success, < 0 on failure.
*/
osaudio_result_t osaudio_pause(osaudio_t audio);
/*
Resumes the device.
Returns 0 on success, < 0 on failure.
*/
osaudio_result_t osaudio_resume(osaudio_t audio);
/*
Returns the number of frames that can be read or written without blocking.
*/
unsigned int osaudio_get_avail(osaudio_t audio);
/*
Gets information about the device.
There will be one item in the configs array which will contain the device's current configuration,
the contents of which will match that of the config that was returned by osaudio_open().
Returns NULL on failure. Do not free the returned pointer. It's up to the implementation to manage
the memory of this object.
*/
const osaudio_info_t* osaudio_get_info(osaudio_t audio);
#ifdef __cplusplus
}
#endif
#endif /* osaudio_h */
/*
Consider this a reference implementation of osaudio. It uses miniaudio under the hood. You can add
this file directly to your source tree, but you may need to update the miniaudio path.
This will use a mutex in osaudio_read() and osaudio_write(). It's a low-contention lock that's only
used for the purpose of osaudio_drain(), but it's still a lock nonetheless. I'm not worrying about
this too much right now because this is just an example implementation, but I might improve on this
at a later date.
*/
#ifndef osaudio_miniaudio_c
#define osaudio_miniaudio_c
#include "osaudio.h"
/*
If you would rather define your own implementation of miniaudio, define OSAUDIO_NO_MINIAUDIO_IMPLEMENTATION. If you do this,
you need to make sure you include the implementation before osaudio.c. This would only really be useful if you want
to do a unity build which uses other parts of miniaudio that this file is currently excluding.
*/
#ifndef OSAUDIO_NO_MINIAUDIO_IMPLEMENTATION
#define MA_API static
#define MA_NO_DECODING
#define MA_NO_ENCODING
#define MA_NO_RESOURCE_MANAGER
#define MA_NO_NODE_GRAPH
#define MA_NO_ENGINE
#define MA_NO_GENERATION
#define MINIAUDIO_IMPLEMENTATION
#include "../../miniaudio.h"
#endif
struct _osaudio_t
{
ma_device device;
osaudio_info_t info;
osaudio_config_t config; /* info.configs will point to this. */
ma_pcm_rb buffer;
ma_semaphore bufferSemaphore; /* The semaphore for controlling access to the buffer. The audio thread will release the semaphore. The read and write functions will wait on it. */
ma_atomic_bool32 isActive; /* Starts off as false. Set to true when config.buffer_size data has been written in the case of playback, or as soon as osaudio_read() is called in the case of capture. */
ma_atomic_bool32 isPaused;
ma_atomic_bool32 isFlushed; /* When set, activation of the device will flush any data that's currently in the buffer. Defaults to false, and will be set to true in osaudio_drain() and osaudio_flush(). */
ma_atomic_bool32 xrunDetected; /* Used for detecting when an xrun has occurred and returning from osaudio_read/write() when OSAUDIO_FLAG_REPORT_XRUN is enabled. */
ma_spinlock activateLock; /* Used for starting and stopping the device. Needed because two variables control this - isActive and isPaused. */
ma_mutex drainLock; /* Used for osaudio_drain(). For mutual exclusion between drain() and read()/write(). Technically results in a lock in read()/write(), but not overthinking that since this is just a reference for now. */
};
static ma_bool32 osaudio_g_is_backend_known = MA_FALSE;
static ma_backend osaudio_g_backend = ma_backend_wasapi;
static ma_context osaudio_g_context;
static ma_mutex osaudio_g_context_lock; /* Only used for device enumeration. Created and destroyed with our context. */
static ma_uint32 osaudio_g_refcount = 0;
static ma_spinlock osaudio_g_lock = 0;
static osaudio_result_t osaudio_result_from_miniaudio(ma_result result)
{
switch (result)
{
case MA_SUCCESS: return OSAUDIO_SUCCESS;
case MA_INVALID_ARGS: return OSAUDIO_INVALID_ARGS;
case MA_INVALID_OPERATION: return OSAUDIO_INVALID_OPERATION;
case MA_OUT_OF_MEMORY: return OSAUDIO_OUT_OF_MEMORY;
default: return OSAUDIO_ERROR;
}
}
static ma_format osaudio_format_to_miniaudio(osaudio_format_t format)
{
switch (format)
{
case OSAUDIO_FORMAT_F32: return ma_format_f32;
case OSAUDIO_FORMAT_U8: return ma_format_u8;
case OSAUDIO_FORMAT_S16: return ma_format_s16;
case OSAUDIO_FORMAT_S24: return ma_format_s24;
case OSAUDIO_FORMAT_S32: return ma_format_s32;
default: return ma_format_unknown;
}
}
static osaudio_format_t osaudio_format_from_miniaudio(ma_format format)
{
switch (format)
{
case ma_format_f32: return OSAUDIO_FORMAT_F32;
case ma_format_u8: return OSAUDIO_FORMAT_U8;
case ma_format_s16: return OSAUDIO_FORMAT_S16;
case ma_format_s24: return OSAUDIO_FORMAT_S24;
case ma_format_s32: return OSAUDIO_FORMAT_S32;
default: return OSAUDIO_FORMAT_UNKNOWN;
}
}
static osaudio_channel_t osaudio_channel_from_miniaudio(ma_channel channel)
{
/* Channel positions between here and miniaudio will remain in sync. */
return (osaudio_channel_t)channel;
}
static ma_channel osaudio_channel_to_miniaudio(osaudio_channel_t channel)
{
/* Channel positions between here and miniaudio will remain in sync. */
return (ma_channel)channel;
}
static void osaudio_dummy_data_callback(ma_device* pDevice, void* pOutput, const void* pInput, ma_uint32 frameCount)
{
(void)pDevice;
(void)pOutput;
(void)pInput;
(void)frameCount;
}
static osaudio_result_t osaudio_determine_miniaudio_backend(ma_backend* pBackend, ma_device* pDummyDevice)
{
ma_device dummyDevice;
ma_device_config dummyDeviceConfig;
ma_result result;
/*
To do this we initialize a dummy device. We allow the caller to make use of this device as an optimization. This is
only used by osaudio_enumerate() because that can make use of the context from the dummy device rather than
having to create its own. pDummyDevice can be null.
*/
if (pDummyDevice == NULL) {
pDummyDevice = &dummyDevice;
}
dummyDeviceConfig = ma_device_config_init(ma_device_type_playback);
dummyDeviceConfig.dataCallback = osaudio_dummy_data_callback;
result = ma_device_init(NULL, &dummyDeviceConfig, pDummyDevice);
if (result != MA_SUCCESS || pDummyDevice->pContext->backend == ma_backend_null) {
/* Failed to open a default playback device. Try capture. */
if (result == MA_SUCCESS) {
/* This means we successfully initialized a device, but its backend is null. It could be that there are no playback devices attached. Try capture. */
ma_device_uninit(pDummyDevice);
}
dummyDeviceConfig = ma_device_config_init(ma_device_type_capture);
result = ma_device_init(NULL, &dummyDeviceConfig, pDummyDevice);
}
if (result != MA_SUCCESS) {
return osaudio_result_from_miniaudio(result);
}
*pBackend = pDummyDevice->pContext->backend;
/* We're done. */
if (pDummyDevice == &dummyDevice) {
ma_device_uninit(&dummyDevice);
}
return OSAUDIO_SUCCESS;
}
static osaudio_result_t osaudio_ref_context_nolock()
{
/* Initialize the global context if necessary. */
if (osaudio_g_refcount == 0) {
osaudio_result_t result;
/* If we haven't got a known context, we'll need to determine it here. */
if (osaudio_g_is_backend_known == MA_FALSE) {
result = osaudio_determine_miniaudio_backend(&osaudio_g_backend, NULL);
if (result != OSAUDIO_SUCCESS) {
return result;
}
}
result = osaudio_result_from_miniaudio(ma_context_init(&osaudio_g_backend, 1, NULL, &osaudio_g_context));
if (result != OSAUDIO_SUCCESS) {
return result;
}
/* Need a mutex for device enumeration. */
ma_mutex_init(&osaudio_g_context_lock);
}
osaudio_g_refcount += 1;
return OSAUDIO_SUCCESS;
}
static osaudio_result_t osaudio_unref_context_nolock()
{
if (osaudio_g_refcount == 0) {
return OSAUDIO_INVALID_OPERATION;
}
osaudio_g_refcount -= 1;
/* Uninitialize the context if we don't have any more references. */
if (osaudio_g_refcount == 0) {
ma_context_uninit(&osaudio_g_context);
ma_mutex_uninit(&osaudio_g_context_lock);
}
return OSAUDIO_SUCCESS;
}
static ma_context* osaudio_ref_context()
{
osaudio_result_t result;
ma_spinlock_lock(&osaudio_g_lock);
{
result = osaudio_ref_context_nolock();
}
ma_spinlock_unlock(&osaudio_g_lock);
if (result != OSAUDIO_SUCCESS) {
return NULL;
}
return &osaudio_g_context;
}
static osaudio_result_t osaudio_unref_context()
{
osaudio_result_t result;
ma_spinlock_lock(&osaudio_g_lock);
{
result = osaudio_unref_context_nolock();
}
ma_spinlock_unlock(&osaudio_g_lock);
return result;
}
static void osaudio_info_from_miniaudio(osaudio_info_t* info, const ma_device_info* infoMA)
{
unsigned int iNativeConfig;
/* It just so happens, by absolutely total coincidence, that the size of the ID and name are the same between here and miniaudio. What are the odds?! */
memcpy(info->id.data, &infoMA->id, sizeof(info->id.data));
memcpy(info->name, infoMA->name, sizeof(info->name));
info->config_count = (unsigned int)infoMA->nativeDataFormatCount;
for (iNativeConfig = 0; iNativeConfig < info->config_count; iNativeConfig += 1) {
unsigned int iChannel;
info->configs[iNativeConfig].device_id = &info->id;
info->configs[iNativeConfig].direction = info->direction;
info->configs[iNativeConfig].format = osaudio_format_from_miniaudio(infoMA->nativeDataFormats[iNativeConfig].format);
info->configs[iNativeConfig].channels = (unsigned int)infoMA->nativeDataFormats[iNativeConfig].channels;
info->configs[iNativeConfig].rate = (unsigned int)infoMA->nativeDataFormats[iNativeConfig].sampleRate;
/* miniaudio does not report channel positions from device enumeration, so the channel map is left blank. */
for (iChannel = 0; iChannel < info->configs[iNativeConfig].channels; iChannel += 1) {
info->configs[iNativeConfig].channel_map[iChannel] = OSAUDIO_CHANNEL_NONE;
}
}
}
static osaudio_result_t osaudio_enumerate_nolock(unsigned int* count, osaudio_info_t** info, ma_context* pContext)
{
osaudio_result_t result;
ma_device_info* pPlaybackInfos;
ma_uint32 playbackCount;
ma_device_info* pCaptureInfos;
ma_uint32 captureCount;
ma_uint32 iInfo;
size_t allocSize;
osaudio_info_t* pRunningInfo;
osaudio_config_t* pRunningConfig;
/* We now need to retrieve the device information from miniaudio. */
result = osaudio_result_from_miniaudio(ma_context_get_devices(pContext, &pPlaybackInfos, &playbackCount, &pCaptureInfos, &captureCount));
if (result != OSAUDIO_SUCCESS) {
osaudio_unref_context();
return result;
}
/*
Because the caller needs to free the returned pointer it's important that we keep it all in one allocation. Because there can be
a variable number of native configs we'll have to compute the size of the allocation first, and then do a second pass to fill
out the data.
*/
allocSize = ((size_t)playbackCount + (size_t)captureCount) * sizeof(osaudio_info_t);
/* Now we need to iterate over each playback and capture device and add up the number of native configs. */
for (iInfo = 0; iInfo < playbackCount; iInfo += 1) {
ma_context_get_device_info(pContext, ma_device_type_playback, &pPlaybackInfos[iInfo].id, &pPlaybackInfos[iInfo]);
allocSize += pPlaybackInfos[iInfo].nativeDataFormatCount * sizeof(osaudio_config_t);
}
for (iInfo = 0; iInfo < captureCount; iInfo += 1) {
ma_context_get_device_info(pContext, ma_device_type_capture, &pCaptureInfos[iInfo].id, &pCaptureInfos[iInfo]);
allocSize += pCaptureInfos[iInfo].nativeDataFormatCount * sizeof(osaudio_config_t);
}
/* Now that we know the size of the allocation we can allocate it. */
*info = (osaudio_info_t*)calloc(1, allocSize);
if (*info == NULL) {
osaudio_unref_context();
return OSAUDIO_OUT_OF_MEMORY;
}
pRunningInfo = *info;
pRunningConfig = (osaudio_config_t*)(((unsigned char*)*info) + (((size_t)playbackCount + (size_t)captureCount) * sizeof(osaudio_info_t)));
for (iInfo = 0; iInfo < playbackCount; iInfo += 1) {
pRunningInfo->direction = OSAUDIO_OUTPUT;
pRunningInfo->configs = pRunningConfig;
osaudio_info_from_miniaudio(pRunningInfo, &pPlaybackInfos[iInfo]);
pRunningConfig += pRunningInfo->config_count;
pRunningInfo += 1;
}
for (iInfo = 0; iInfo < captureCount; iInfo += 1) {
pRunningInfo->direction = OSAUDIO_INPUT;
pRunningInfo->configs = pRunningConfig;
osaudio_info_from_miniaudio(pRunningInfo, &pCaptureInfos[iInfo]);
pRunningConfig += pRunningInfo->config_count;
pRunningInfo += 1;
}
*count = (unsigned int)(playbackCount + captureCount);
return OSAUDIO_SUCCESS;
}
osaudio_result_t osaudio_enumerate(unsigned int* count, osaudio_info_t** info)
{
osaudio_result_t result;
ma_context* pContext = NULL;
if (count != NULL) {
*count = 0;
}
if (info != NULL) {
*info = NULL;
}
if (count == NULL || info == NULL) {
return OSAUDIO_INVALID_ARGS;
}
pContext = osaudio_ref_context();
if (pContext == NULL) {
return OSAUDIO_ERROR;
}
ma_mutex_lock(&osaudio_g_context_lock);
{
result = osaudio_enumerate_nolock(count, info, pContext);
}
ma_mutex_unlock(&osaudio_g_context_lock);
/* We're done. We can now return. */
osaudio_unref_context();
return result;
}
void osaudio_config_init(osaudio_config_t* config, osaudio_direction_t direction)
{
if (config == NULL) {
return;
}
memset(config, 0, sizeof(*config));
config->direction = direction;
}
static void osaudio_data_callback_playback(osaudio_t audio, void* pOutput, ma_uint32 frameCount)
{
/*
If there's content in the buffer, read from it and release the semaphore. There needs to be a whole frameCount chunk
in the buffer so we can keep everything in nice clean chunks. When we read from the buffer, we release a semaphore
which will allow the main thread to write more data to the buffer.
*/
ma_uint32 framesToRead;
ma_uint32 framesProcessed;
void* pBuffer;
framesToRead = ma_pcm_rb_available_read(&audio->buffer);
if (framesToRead > frameCount) {
framesToRead = frameCount;
}
framesProcessed = framesToRead;
/* For robustness we should run this in a loop in case the buffer wraps around. */
while (frameCount > 0) {
framesToRead = frameCount;
ma_pcm_rb_acquire_read(&audio->buffer, &framesToRead, &pBuffer);
if (framesToRead == 0) {
break;
}
memcpy(pOutput, pBuffer, framesToRead * ma_get_bytes_per_frame(audio->device.playback.format, audio->device.playback.channels));
ma_pcm_rb_commit_read(&audio->buffer, framesToRead);
frameCount -= framesToRead;
pOutput = ((unsigned char*)pOutput) + (framesToRead * ma_get_bytes_per_frame(audio->device.playback.format, audio->device.playback.channels));
}
/* Make sure we release the semaphore if we ended up reading anything. */
if (framesProcessed > 0) {
ma_semaphore_release(&audio->bufferSemaphore);
}
if (frameCount > 0) {
/* Underrun. Pad with silence. */
ma_silence_pcm_frames(pOutput, frameCount, audio->device.playback.format, audio->device.playback.channels);
ma_atomic_bool32_set(&audio->xrunDetected, MA_TRUE);
}
}
static void osaudio_data_callback_capture(osaudio_t audio, const void* pInput, ma_uint32 frameCount)
{
/* If there's space in the buffer, write to it and release the semaphore. The semaphore is only released on full-chunk boundaries. */
ma_uint32 framesToWrite;
ma_uint32 framesProcessed;
void* pBuffer;
framesToWrite = ma_pcm_rb_available_write(&audio->buffer);
if (framesToWrite > frameCount) {
framesToWrite = frameCount;
}
framesProcessed = framesToWrite;
while (frameCount > 0) {
framesToWrite = frameCount;
ma_pcm_rb_acquire_write(&audio->buffer, &framesToWrite, &pBuffer);
if (framesToWrite == 0) {
break;
}
memcpy(pBuffer, pInput, framesToWrite * ma_get_bytes_per_frame(audio->device.capture.format, audio->device.capture.channels));
ma_pcm_rb_commit_write(&audio->buffer, framesToWrite);
frameCount -= framesToWrite;
pInput = ((unsigned char*)pInput) + (framesToWrite * ma_get_bytes_per_frame(audio->device.capture.format, audio->device.capture.channels));
}
/* Make sure we release the semaphore if we ended up writing anything. */
if (framesProcessed > 0) {
ma_semaphore_release(&audio->bufferSemaphore);
}
if (frameCount > 0) {
/* Overrun. Not enough room to move our input data into the buffer. */
ma_atomic_bool32_set(&audio->xrunDetected, MA_TRUE);
}
}
static void osaudio_nofication_callback(const ma_device_notification* pNotification)
{
osaudio_t audio = (osaudio_t)pNotification->pDevice->pUserData;
if (audio->config.notification != NULL) {
osaudio_notification_t notification;
switch (pNotification->type)
{
case ma_device_notification_type_started:
{
notification.type = OSAUDIO_NOTIFICATION_STARTED;
} break;
case ma_device_notification_type_stopped:
{
notification.type = OSAUDIO_NOTIFICATION_STOPPED;
} break;
case ma_device_notification_type_rerouted:
{
notification.type = OSAUDIO_NOTIFICATION_REROUTED;
} break;
case ma_device_notification_type_interruption_began:
{
notification.type = OSAUDIO_NOTIFICATION_INTERRUPTION_BEGIN;
} break;
case ma_device_notification_type_interruption_ended:
{
notification.type = OSAUDIO_NOTIFICATION_INTERRUPTION_END;
} break;
}
audio->config.notification(audio->config.user_data, &notification);
}
}
static void osaudio_data_callback(ma_device* pDevice, void* pOutput, const void* pInput, ma_uint32 frameCount)
{
osaudio_t audio = (osaudio_t)pDevice->pUserData;
if (audio->info.direction == OSAUDIO_OUTPUT) {
osaudio_data_callback_playback(audio, pOutput, frameCount);
} else {
osaudio_data_callback_capture(audio, pInput, frameCount);
}
}
osaudio_result_t osaudio_open(osaudio_t* audio, osaudio_config_t* config)
{
osaudio_result_t result;
ma_context* pContext = NULL;
ma_device_config deviceConfig;
ma_device_info deviceInfo;
int periodCount = 2;
unsigned int iChannel;
if (audio != NULL) {
*audio = NULL; /* Safety. */
}
if (audio == NULL || config == NULL) {
return OSAUDIO_INVALID_ARGS;
}
pContext = osaudio_ref_context(); /* Will be unreferenced in osaudio_close(). */
if (pContext == NULL) {
return OSAUDIO_ERROR;
}
*audio = (osaudio_t)calloc(1, sizeof(**audio));
if (*audio == NULL) {
osaudio_unref_context();
return OSAUDIO_OUT_OF_MEMORY;
}
if (config->direction == OSAUDIO_OUTPUT) {
deviceConfig = ma_device_config_init(ma_device_type_playback);
deviceConfig.playback.format = osaudio_format_to_miniaudio(config->format);
deviceConfig.playback.channels = (ma_uint32)config->channels;
if (config->channel_map[0] != OSAUDIO_CHANNEL_NONE) {
for (iChannel = 0; iChannel < config->channels; iChannel += 1) {
deviceConfig.playback.pChannelMap[iChannel] = osaudio_channel_to_miniaudio(config->channel_map[iChannel]);
}
}
} else {
deviceConfig = ma_device_config_init(ma_device_type_capture);
deviceConfig.capture.format = osaudio_format_to_miniaudio(config->format);
deviceConfig.capture.channels = (ma_uint32)config->channels;
if (config->channel_map[0] != OSAUDIO_CHANNEL_NONE) {
for (iChannel = 0; iChannel < config->channels; iChannel += 1) {
deviceConfig.capture.pChannelMap[iChannel] = osaudio_channel_to_miniaudio(config->channel_map[iChannel]);
}
}
}
deviceConfig.sampleRate = (ma_uint32)config->rate;
/* If the buffer size is 0, we'll default to 10ms. */
deviceConfig.periodSizeInFrames = (ma_uint32)config->buffer_size;
if (deviceConfig.periodSizeInFrames == 0) {
deviceConfig.periodSizeInMilliseconds = 10;
}
deviceConfig.dataCallback = osaudio_data_callback;
deviceConfig.pUserData = *audio;
if ((config->flags & OSAUDIO_FLAG_NO_REROUTING) != 0) {
deviceConfig.wasapi.noAutoStreamRouting = MA_TRUE;
}
if (config->notification != NULL) {
deviceConfig.notificationCallback = osaudio_nofication_callback;
}
result = osaudio_result_from_miniaudio(ma_device_init(pContext, &deviceConfig, &((*audio)->device)));
if (result != OSAUDIO_SUCCESS) {
free(*audio);
osaudio_unref_context();
return result;
}
/* The input config needs to be updated with actual values. */
if (config->direction == OSAUDIO_OUTPUT) {
config->format = osaudio_format_from_miniaudio((*audio)->device.playback.format);
config->channels = (unsigned int)(*audio)->device.playback.channels;
for (iChannel = 0; iChannel < config->channels; iChannel += 1) {
config->channel_map[iChannel] = osaudio_channel_from_miniaudio((*audio)->device.playback.channelMap[iChannel]);
}
} else {
config->format = osaudio_format_from_miniaudio((*audio)->device.capture.format);
config->channels = (unsigned int)(*audio)->device.capture.channels;
for (iChannel = 0; iChannel < config->channels; iChannel += 1) {
config->channel_map[iChannel] = osaudio_channel_from_miniaudio((*audio)->device.capture.channelMap[iChannel]);
}
}
config->rate = (unsigned int)(*audio)->device.sampleRate;
if (deviceConfig.periodSizeInFrames == 0) {
if (config->direction == OSAUDIO_OUTPUT) {
config->buffer_size = (int)(*audio)->device.playback.internalPeriodSizeInFrames;
} else {
config->buffer_size = (int)(*audio)->device.capture.internalPeriodSizeInFrames;
}
}
/* The device object needs to have its local info built. We can get the ID and name from miniaudio. */
result = osaudio_result_from_miniaudio(ma_device_get_info(&(*audio)->device, (*audio)->device.type, &deviceInfo));
if (result == OSAUDIO_SUCCESS) {
memcpy((*audio)->info.id.data, &deviceInfo.id, sizeof((*audio)->info.id.data));
memcpy((*audio)->info.name, deviceInfo.name, sizeof((*audio)->info.name));
}
(*audio)->info.direction = config->direction;
(*audio)->info.config_count = 1;
(*audio)->info.configs = &(*audio)->config;
(*audio)->config = *config;
(*audio)->config.device_id = &(*audio)->info.id;
/* We need a ring buffer. */
result = osaudio_result_from_miniaudio(ma_pcm_rb_init(osaudio_format_to_miniaudio(config->format), (ma_uint32)config->channels, (ma_uint32)config->buffer_size * periodCount, NULL, NULL, &(*audio)->buffer));
if (result != OSAUDIO_SUCCESS) {
ma_device_uninit(&(*audio)->device);
free(*audio);
osaudio_unref_context();
return result;
}
/* Now we need a semaphore to control access to the ring buffer and to block reads/writes when necessary. */
result = osaudio_result_from_miniaudio(ma_semaphore_init((config->direction == OSAUDIO_OUTPUT) ? periodCount : 0, &(*audio)->bufferSemaphore));
if (result != OSAUDIO_SUCCESS) {
ma_pcm_rb_uninit(&(*audio)->buffer);
ma_device_uninit(&(*audio)->device);
free(*audio);
osaudio_unref_context();
return result;
}
return OSAUDIO_SUCCESS;
}
osaudio_result_t osaudio_close(osaudio_t audio)
{
if (audio == NULL) {
return OSAUDIO_INVALID_ARGS;
}
ma_device_uninit(&audio->device);
osaudio_unref_context();
return OSAUDIO_SUCCESS;
}
static void osaudio_activate(osaudio_t audio)
{
ma_spinlock_lock(&audio->activateLock);
{
if (ma_atomic_bool32_get(&audio->isActive) == MA_FALSE) {
ma_atomic_bool32_set(&audio->isActive, MA_TRUE);
/* If we need to flush, do so now before starting the device. */
if (ma_atomic_bool32_get(&audio->isFlushed) == MA_TRUE) {
ma_pcm_rb_reset(&audio->buffer);
ma_atomic_bool32_set(&audio->isFlushed, MA_FALSE);
}
/* If we're not paused, start the device. */
if (ma_atomic_bool32_get(&audio->isPaused) == MA_FALSE) {
ma_device_start(&audio->device);
}
}
}
ma_spinlock_unlock(&audio->activateLock);
}
osaudio_result_t osaudio_write(osaudio_t audio, const void* data, unsigned int frame_count)
{
if (audio == NULL) {
return OSAUDIO_INVALID_ARGS;
}
ma_mutex_lock(&audio->drainLock);
{
/* Don't return until everything has been written. */
while (frame_count > 0) {
ma_uint32 framesToWrite = frame_count;
ma_uint32 framesAvailableInBuffer;
/* There should be enough data available in the buffer now, but check anyway. */
framesAvailableInBuffer = ma_pcm_rb_available_write(&audio->buffer);
if (framesAvailableInBuffer > 0) {
void* pBuffer;
if (framesToWrite > framesAvailableInBuffer) {
framesToWrite = framesAvailableInBuffer;
}
ma_pcm_rb_acquire_write(&audio->buffer, &framesToWrite, &pBuffer);
{
ma_copy_pcm_frames(pBuffer, data, framesToWrite, audio->device.playback.format, audio->device.playback.channels);
}
ma_pcm_rb_commit_write(&audio->buffer, framesToWrite);
frame_count -= (unsigned int)framesToWrite;
data = (const void*)((const unsigned char*)data + (framesToWrite * ma_get_bytes_per_frame(audio->device.playback.format, audio->device.playback.channels)));
if (framesToWrite > 0) {
osaudio_activate(audio);
}
} else {
/* If we get here it means there's not enough data available in the buffer. We need to wait for more. */
ma_semaphore_wait(&audio->bufferSemaphore);
/* If we're not active it probably means we've flushed. This write needs to be aborted. */
if (ma_atomic_bool32_get(&audio->isActive) == MA_FALSE) {
break;
}
}
}
}
ma_mutex_unlock(&audio->drainLock);
if ((audio->config.flags & OSAUDIO_FLAG_REPORT_XRUN) != 0) {
if (ma_atomic_bool32_get(&audio->xrunDetected)) {
ma_atomic_bool32_set(&audio->xrunDetected, MA_FALSE);
return OSAUDIO_XRUN;
}
}
return OSAUDIO_SUCCESS;
}
osaudio_result_t osaudio_read(osaudio_t audio, void* data, unsigned int frame_count)
{
if (audio == NULL) {
return OSAUDIO_INVALID_ARGS;
}
ma_mutex_lock(&audio->drainLock);
{
while (frame_count > 0) {
ma_uint32 framesToRead = frame_count;
ma_uint32 framesAvailableInBuffer;
/* There should be enough data available in the buffer now, but check anyway. */
framesAvailableInBuffer = ma_pcm_rb_available_read(&audio->buffer);
if (framesAvailableInBuffer > 0) {
void* pBuffer;
if (framesToRead > framesAvailableInBuffer) {
framesToRead = framesAvailableInBuffer;
}
ma_pcm_rb_acquire_read(&audio->buffer, &framesToRead, &pBuffer);
{
ma_copy_pcm_frames(data, pBuffer, framesToRead, audio->device.capture.format, audio->device.capture.channels);
}
ma_pcm_rb_commit_read(&audio->buffer, framesToRead);
frame_count -= (unsigned int)framesToRead;
data = (void*)((unsigned char*)data + (framesToRead * ma_get_bytes_per_frame(audio->device.capture.format, audio->device.capture.channels)));
} else {
/* Activate the device from the get-go, otherwise we'll never end up capturing anything. */
osaudio_activate(audio);
/* If we get here it means there's not enough data available in the buffer. We need to wait for more. */
ma_semaphore_wait(&audio->bufferSemaphore);
/* If we're not active it probably means we've flushed. This read needs to be aborted. */
if (ma_atomic_bool32_get(&audio->isActive) == MA_FALSE) {
break;
}
}
}
}
ma_mutex_unlock(&audio->drainLock);
if ((audio->config.flags & OSAUDIO_FLAG_REPORT_XRUN) != 0) {
if (ma_atomic_bool32_get(&audio->xrunDetected)) {
ma_atomic_bool32_set(&audio->xrunDetected, MA_FALSE);
return OSAUDIO_XRUN;
}
}
return OSAUDIO_SUCCESS;
}
osaudio_result_t osaudio_drain(osaudio_t audio)
{
if (audio == NULL) {
return OSAUDIO_INVALID_ARGS;
}
/* This cannot be called while the device is in a paused state. */
if (ma_atomic_bool32_get(&audio->isPaused)) {
return OSAUDIO_DEVICE_STOPPED;
}
/* For capture we want to stop the device immediately or else we won't ever drain the buffer because miniaudio will be constantly filling it. */
if (audio->info.direction == OSAUDIO_INPUT) {
ma_device_stop(&audio->device);
}
/*
Mark the device as inactive *before* releasing the semaphore. When read/write completes waiting
on the semaphore, they'll check this flag and abort.
*/
ma_atomic_bool32_set(&audio->isActive, MA_FALSE);
/*
Again, in capture mode we need to release the semaphore before waiting for the drain lock because
there's a chance read() will be waiting on the semaphore and will need to be woken up in order
for it to be given a chance to return.
*/
if (audio->info.direction == OSAUDIO_INPUT) {
ma_semaphore_release(&audio->bufferSemaphore);
}
/* Now we need to wait for any pending reads or writes to complete. */
ma_mutex_lock(&audio->drainLock);
{
/* No processing should be happening on the buffer at this point. Wait for miniaudio to consume the buffer. */
while (ma_pcm_rb_available_read(&audio->buffer) > 0) {
ma_sleep(1);
}
/*
At this point the buffer should be empty, and we shouldn't be in any read or write calls. If
it's a playback device, we'll want to stop the device. There's no need to release the semaphore.
*/
if (audio->info.direction == OSAUDIO_OUTPUT) {
ma_device_stop(&audio->device);
}
}
ma_mutex_unlock(&audio->drainLock);
return OSAUDIO_SUCCESS;
}
osaudio_result_t osaudio_flush(osaudio_t audio)
{
if (audio == NULL) {
return OSAUDIO_INVALID_ARGS;
}
/*
First stop the device. This ensures the miniaudio background thread doesn't try modifying the
buffer from under us while we're trying to flush it.
*/
ma_device_stop(&audio->device);
/*
Mark the device as inactive *before* releasing the semaphore. When read/write completes waiting
on the semaphore, they'll check this flag and abort.
*/
ma_atomic_bool32_set(&audio->isActive, MA_FALSE);
/*
Release the semaphore after marking the device as inactive. It needs to be released in order
to wake up osaudio_read() and osaudio_write().
*/
ma_semaphore_release(&audio->bufferSemaphore);
/*
The buffer should only be modified by osaudio_read() or osaudio_write(), or the miniaudio
background thread. Therefore, we don't actually clear the buffer here. Instead we'll clear it
in osaudio_activate(), depending on whether or not the below flag is set.
*/
ma_atomic_bool32_set(&audio->isFlushed, MA_TRUE);
return OSAUDIO_SUCCESS;
}
osaudio_result_t osaudio_pause(osaudio_t audio)
{
osaudio_result_t result = OSAUDIO_SUCCESS;
if (audio == NULL) {
return OSAUDIO_INVALID_ARGS;
}
ma_spinlock_lock(&audio->activateLock);
{
if (ma_atomic_bool32_get(&audio->isPaused) == MA_FALSE) {
ma_atomic_bool32_set(&audio->isPaused, MA_TRUE);
/* No need to stop the device if it's not active. */
if (ma_atomic_bool32_get(&audio->isActive)) {
result = osaudio_result_from_miniaudio(ma_device_stop(&audio->device));
}
}
}
ma_spinlock_unlock(&audio->activateLock);
return result;
}
osaudio_result_t osaudio_resume(osaudio_t audio)
{
osaudio_result_t result = OSAUDIO_SUCCESS;
if (audio == NULL) {
return OSAUDIO_INVALID_ARGS;
}
ma_spinlock_lock(&audio->activateLock);
{
if (ma_atomic_bool32_get(&audio->isPaused)) {
ma_atomic_bool32_set(&audio->isPaused, MA_FALSE);
/* Don't start the device unless it's active. */
if (ma_atomic_bool32_get(&audio->isActive)) {
result = osaudio_result_from_miniaudio(ma_device_start(&audio->device));
}
}
}
ma_spinlock_unlock(&audio->activateLock);
return result;
}
unsigned int osaudio_get_avail(osaudio_t audio)
{
if (audio == NULL) {
return 0;
}
if (audio->info.direction == OSAUDIO_OUTPUT) {
return ma_pcm_rb_available_write(&audio->buffer);
} else {
return ma_pcm_rb_available_read(&audio->buffer);
}
}
const osaudio_info_t* osaudio_get_info(osaudio_t audio)
{
if (audio == NULL) {
return NULL;
}
return &audio->info;
}
#endif /* osaudio_miniaudio_c */
@@ -0,0 +1,196 @@
#include "../osaudio.h"
/* This example uses miniaudio for decoding audio files. */
#define MINIAUDIO_IMPLEMENTATION
#include "../../../miniaudio.h"
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#define MODE_PLAYBACK 0
#define MODE_CAPTURE 1
#define MODE_DUPLEX 2
void enumerate_devices(void)
{
int result;
unsigned int iDevice;
unsigned int count;
osaudio_info_t* pDeviceInfos;
result = osaudio_enumerate(&count, &pDeviceInfos);
if (result != OSAUDIO_SUCCESS) {
printf("Failed to enumerate audio devices.\n");
return;
}
for (iDevice = 0; iDevice < count; iDevice += 1) {
printf("(%s) %s\n", (pDeviceInfos[iDevice].direction == OSAUDIO_OUTPUT) ? "Playback" : "Capture", pDeviceInfos[iDevice].name);
}
free(pDeviceInfos);
}
osaudio_t open_device(osaudio_direction_t direction)
{
int result;
osaudio_t audio;
osaudio_config_t config;
osaudio_config_init(&config, direction);
config.format = OSAUDIO_FORMAT_F32;
config.channels = 2;
config.rate = 48000;
config.flags = OSAUDIO_FLAG_REPORT_XRUN;
result = osaudio_open(&audio, &config);
if (result != OSAUDIO_SUCCESS) {
printf("Failed to open audio device.\n");
return NULL;
}
return audio;
}
void do_playback(int argc, char** argv)
{
int result;
osaudio_t audio;
const osaudio_config_t* config;
const char* pFilePath = NULL;
ma_result resultMA;
ma_decoder_config decoderConfig;
ma_decoder decoder;
audio = open_device(OSAUDIO_OUTPUT);
if (audio == NULL) {
printf("Failed to open audio device.\n");
return;
}
config = &osaudio_get_info(audio)->configs[0];
/* We want to always use f32. */
if (config->format == OSAUDIO_FORMAT_F32) {
if (argc > 1) {
pFilePath = argv[1];
decoderConfig = ma_decoder_config_init(ma_format_f32, (ma_uint32)config->channels, (ma_uint32)config->rate);
resultMA = ma_decoder_init_file(pFilePath, &decoderConfig, &decoder);
if (resultMA == MA_SUCCESS) {
/* Now just keep looping over each sample until we get to the end. */
for (;;) {
float frames[1024];
ma_uint64 frameCount;
resultMA = ma_decoder_read_pcm_frames(&decoder, frames, ma_countof(frames) / config->channels, &frameCount);
if (resultMA != MA_SUCCESS) {
break;
}
result = osaudio_write(audio, frames, (unsigned int)frameCount); /* Safe cast. */
if (result != OSAUDIO_SUCCESS && result != OSAUDIO_XRUN) {
printf("Error writing to audio device.");
break;
}
if (result == OSAUDIO_XRUN) {
printf("WARNING: An xrun occurred while writing to the playback device.\n");
}
}
} else {
printf("Failed to open file: %s\n", pFilePath);
}
} else {
printf("No input file.\n");
}
} else {
printf("Unsupported device format.\n");
}
/* Getting here means we're done and we can tear down. */
osaudio_close(audio);
}
void do_duplex(void)
{
int result;
osaudio_t capture;
osaudio_t playback;
capture = open_device(OSAUDIO_INPUT);
if (capture == NULL) {
printf("Failed to open capture device.\n");
return;
}
playback = open_device(OSAUDIO_OUTPUT);
if (playback == NULL) {
osaudio_close(capture);
printf("Failed to open playback device.\n");
return;
}
for (;;) {
float frames[1024];
unsigned int frameCount;
frameCount = ma_countof(frames) / osaudio_get_info(capture)->configs[0].channels;
/* Capture. */
result = osaudio_read(capture, frames, frameCount);
if (result != OSAUDIO_SUCCESS && result != OSAUDIO_XRUN) {
printf("Error reading from capture device.\n");
break;
}
if (result == OSAUDIO_XRUN) {
printf("WARNING: An xrun occurred while reading from the capture device.\n");
}
/* Playback. */
result = osaudio_write(playback, frames, frameCount);
if (result != OSAUDIO_SUCCESS && result != OSAUDIO_XRUN) {
printf("Error writing to playback device.\n");
break;
}
if (result == OSAUDIO_XRUN) {
printf("WARNING: An xrun occurred while writing to the playback device.\n");
}
}
osaudio_close(capture);
osaudio_close(playback);
}
int main(int argc, char** argv)
{
int mode = MODE_PLAYBACK;
int iarg;
enumerate_devices();
for (iarg = 0; iarg < argc; iarg += 1) {
if (strcmp(argv[iarg], "capture") == 0) {
mode = MODE_CAPTURE;
} else if (strcmp(argv[iarg], "duplex") == 0) {
mode = MODE_DUPLEX;
}
}
switch (mode)
{
case MODE_PLAYBACK: do_playback(argc, argv); break;
case MODE_CAPTURE: break;
case MODE_DUPLEX: do_duplex(); break;
}
return 0;
}
@@ -7,7 +7,7 @@ ma_result init_data_converter(ma_uint32 rateIn, ma_uint32 rateOut, ma_resample_a
config = ma_data_converter_config_init(ma_format_s16, ma_format_s16, 1, 1, rateIn, rateOut);
config.resampling.algorithm = algorithm;
result = ma_data_converter_init(&config, pDataConverter);
result = ma_data_converter_init(&config, NULL, pDataConverter);
if (result != MA_SUCCESS) {
return result;
}
@@ -52,7 +52,7 @@ ma_result test_data_converter__resampling_expected_output_fixed_interval(ma_data
ma_uint64 expectedOutputFrameCount;
/* We retrieve the required number of input frames for the specified number of output frames, and then compare with what we actually get when reading. */
expectedOutputFrameCount = ma_data_converter_get_expected_output_frame_count(pDataConverter, frameCountPerIteration);
ma_data_converter_get_expected_output_frame_count(pDataConverter, frameCountPerIteration, &expectedOutputFrameCount);
outputFrameCount = ma_countof(output);
inputFrameCount = frameCountPerIteration;
@@ -90,7 +90,7 @@ ma_result test_data_converter__resampling_expected_output_by_algorithm_and_rate_
result = test_data_converter__resampling_expected_output_fixed_interval(&converter, frameCountPerIteration);
ma_data_converter_uninit(&converter);
ma_data_converter_uninit(&converter, NULL);
if (hasError) {
return MA_ERROR;
@@ -170,12 +170,6 @@ ma_result test_data_converter__resampling_expected_output()
hasError = MA_TRUE;
}
printf("Speex\n");
result = test_data_converter__resampling_expected_output_by_algorithm(ma_resample_algorithm_speex);
if (result != 0) {
hasError = MA_TRUE;
}
if (hasError) {
return MA_ERROR;
} else {
@@ -205,7 +199,7 @@ ma_result test_data_converter__resampling_required_input_fixed_interval(ma_data_
ma_uint64 requiredInputFrameCount;
/* We retrieve the required number of input frames for the specified number of output frames, and then compare with what we actually get when reading. */
requiredInputFrameCount = ma_data_converter_get_required_input_frame_count(pDataConverter, frameCountPerIteration);
ma_data_converter_get_required_input_frame_count(pDataConverter, frameCountPerIteration, &requiredInputFrameCount);
outputFrameCount = frameCountPerIteration;
inputFrameCount = ma_countof(input);
@@ -243,7 +237,7 @@ ma_result test_data_converter__resampling_required_input_by_algorithm_and_rate_f
result = test_data_converter__resampling_required_input_fixed_interval(&converter, frameCountPerIteration);
ma_data_converter_uninit(&converter);
ma_data_converter_uninit(&converter, NULL);
if (hasError) {
return MA_ERROR;
@@ -323,12 +317,6 @@ ma_result test_data_converter__resampling_required_input()
hasError = MA_TRUE;
}
printf("Speex\n");
result = test_data_converter__resampling_required_input_by_algorithm(ma_resample_algorithm_speex);
if (result != MA_SUCCESS) {
hasError = MA_TRUE;
}
if (hasError) {
return MA_ERROR;
} else {
@@ -37,20 +37,3 @@ ma_result ma_register_test(const char* pName, ma_test_entry_proc onEntry)
return MA_SUCCESS;
}
drwav_data_format drwav_data_format_from_minaudio_format(ma_format format, ma_uint32 channels, ma_uint32 sampleRate)
{
drwav_data_format wavFormat;
wavFormat.container = drwav_container_riff;
wavFormat.channels = channels;
wavFormat.sampleRate = sampleRate;
wavFormat.bitsPerSample = ma_get_bytes_per_sample(format) * 8;
if (format == ma_format_f32) {
wavFormat.format = DR_WAVE_FORMAT_IEEE_FLOAT;
} else {
wavFormat.format = DR_WAVE_FORMAT_PCM;
}
return wavFormat;
}
@@ -41,6 +41,7 @@ will receive the captured audio.
If multiple backends are specified, the priority will be based on the order in which you specify them. If multiple waveform or noise types
are specified the last one on the command line will have priority.
*/
#define MA_FORCE_UWP
#include "../test_common/ma_test_common.c"
typedef enum
@@ -344,13 +345,15 @@ void on_data(ma_device* pDevice, void* pFramesOut, const void* pFramesIn, ma_uin
{
case ma_device_type_playback:
{
/* In the playback case we just read from our input source. */
/*
In the playback case we just read from our input source. We're going to use ma_data_source_read_pcm_frames() for this
to ensure the data source abstraction is working properly for each type. */
if (g_State.sourceType == source_type_decoder) {
ma_decoder_read_pcm_frames(&g_State.decoder, pFramesOut, frameCount, NULL);
ma_data_source_read_pcm_frames(&g_State.decoder, pFramesOut, frameCount, NULL);
} else if (g_State.sourceType == source_type_waveform) {
ma_waveform_read_pcm_frames(&g_State.waveform, pFramesOut, frameCount, NULL);
ma_data_source_read_pcm_frames(&g_State.waveform, pFramesOut, frameCount, NULL);
} else if (g_State.sourceType == source_type_noise) {
ma_noise_read_pcm_frames(&g_State.noise, pFramesOut, frameCount, NULL);
ma_data_source_read_pcm_frames(&g_State.noise, pFramesOut, frameCount, NULL);
}
} break;
@@ -596,9 +599,15 @@ int main(int argc, char** argv)
}
if (c == 'p' || c == 'P') {
if (ma_device_is_started(&g_State.device)) {
ma_device_stop(&g_State.device);
result = ma_device_stop(&g_State.device);
if (result != MA_SUCCESS) {
printf("ERROR: Error when stopping the device: %s\n", ma_result_description(result));
}
} else {
ma_device_start(&g_State.device);
result = ma_device_start(&g_State.device);
if (result != MA_SUCCESS) {
printf("ERROR: Error when starting the device: %s\n", ma_result_description(result));
}
}
}
}
@@ -1,3 +1,4 @@
#define MA_DEBUG_OUTPUT
#define MA_NO_DECODING
#define MA_NO_ENCODING
#define MINIAUDIO_IMPLEMENTATION
@@ -73,12 +74,13 @@ static void do_duplex()
deviceConfig = ma_device_config_init(ma_device_type_duplex);
deviceConfig.capture.pDeviceID = NULL;
deviceConfig.capture.format = ma_format_s16;
deviceConfig.capture.format = DEVICE_FORMAT;
deviceConfig.capture.channels = 2;
deviceConfig.capture.shareMode = ma_share_mode_shared;
deviceConfig.playback.pDeviceID = NULL;
deviceConfig.playback.format = ma_format_s16;
deviceConfig.playback.format = DEVICE_FORMAT;
deviceConfig.playback.channels = 2;
deviceConfig.sampleRate = DEVICE_SAMPLE_RATE;
deviceConfig.dataCallback = data_callback_duplex;
result = ma_device_init(NULL, &deviceConfig, &device);
if (result != MA_SUCCESS) {
@@ -102,6 +104,12 @@ static EM_BOOL on_canvas_click(int eventType, const EmscriptenMouseEvent* pMouse
}
isRunning = MA_TRUE;
} else {
if (ma_device_get_state(&device) == ma_device_state_started) {
ma_device_stop(&device);
} else {
ma_device_start(&device);
}
}
(void)eventType;
@@ -12,7 +12,7 @@ ma_result filtering_init_decoder_and_encoder(const char* pInputFilePath, const c
return result;
}
encoderConfig = ma_encoder_config_init(ma_resource_format_wav, pDecoder->outputFormat, pDecoder->outputChannels, pDecoder->outputSampleRate);
encoderConfig = ma_encoder_config_init(ma_encoding_format_wav, pDecoder->outputFormat, pDecoder->outputChannels, pDecoder->outputSampleRate);
result = ma_encoder_init_file(pOutputFilePath, &encoderConfig, pEncoder);
if (result != MA_SUCCESS) {
ma_decoder_uninit(pDecoder);
@@ -20,7 +20,7 @@ ma_result test_bpf2__by_format(const char* pInputFilePath, const char* pOutputFi
}
bpfConfig = ma_bpf2_config_init(decoder.outputFormat, decoder.outputChannels, decoder.outputSampleRate, 2000, 0);
result = ma_bpf2_init(&bpfConfig, &bpf);
result = ma_bpf2_init(&bpfConfig, NULL, &bpf);
if (result != MA_SUCCESS) {
ma_decoder_uninit(&decoder);
ma_encoder_uninit(&encoder);
@@ -36,13 +36,13 @@ ma_result test_bpf2__by_format(const char* pInputFilePath, const char* pOutputFi
ma_uint64 framesJustRead;
framesToRead = ma_min(tempCapIn, tempCapOut);
framesJustRead = ma_decoder_read_pcm_frames(&decoder, tempIn, framesToRead);
ma_decoder_read_pcm_frames(&decoder, tempIn, framesToRead, &framesJustRead);
/* Filter */
ma_bpf2_process_pcm_frames(&bpf, tempOut, tempIn, framesJustRead);
/* Write to the WAV file. */
ma_encoder_write_pcm_frames(&encoder, tempOut, framesJustRead);
ma_encoder_write_pcm_frames(&encoder, tempOut, framesJustRead, NULL);
if (framesJustRead < framesToRead) {
break;
@@ -81,7 +81,7 @@ ma_result test_bpf4__by_format(const char* pInputFilePath, const char* pOutputFi
}
bpfConfig = ma_bpf_config_init(decoder.outputFormat, decoder.outputChannels, decoder.outputSampleRate, 2000, 4);
result = ma_bpf_init(&bpfConfig, &bpf);
result = ma_bpf_init(&bpfConfig, NULL, &bpf);
if (result != MA_SUCCESS) {
ma_decoder_uninit(&decoder);
ma_encoder_uninit(&encoder);
@@ -97,13 +97,13 @@ ma_result test_bpf4__by_format(const char* pInputFilePath, const char* pOutputFi
ma_uint64 framesJustRead;
framesToRead = ma_min(tempCapIn, tempCapOut);
framesJustRead = ma_decoder_read_pcm_frames(&decoder, tempIn, framesToRead);
ma_decoder_read_pcm_frames(&decoder, tempIn, framesToRead, &framesJustRead);
/* Filter */
ma_bpf_process_pcm_frames(&bpf, tempOut, tempIn, framesJustRead);
/* Write to the WAV file. */
ma_encoder_write_pcm_frames(&encoder, tempOut, framesJustRead);
ma_encoder_write_pcm_frames(&encoder, tempOut, framesJustRead, NULL);
if (framesJustRead < framesToRead) {
break;
@@ -14,7 +14,7 @@ ma_result test_dithering__u8(const char* pInputFilePath)
return result;
}
encoderConfig = ma_encoder_config_init(ma_resource_format_wav, ma_format_u8, decoder.outputChannels, decoder.outputSampleRate);
encoderConfig = ma_encoder_config_init(ma_encoding_format_wav, ma_format_u8, decoder.outputChannels, decoder.outputSampleRate);
result = ma_encoder_init_file(pOutputFilePath, &encoderConfig, &encoder);
if (result != MA_SUCCESS) {
ma_decoder_uninit(&decoder);
@@ -30,13 +30,13 @@ ma_result test_dithering__u8(const char* pInputFilePath)
ma_uint64 framesJustRead;
framesToRead = ma_min(tempCapIn, tempCapOut);
framesJustRead = ma_decoder_read_pcm_frames(&decoder, tempIn, framesToRead);
ma_decoder_read_pcm_frames(&decoder, tempIn, framesToRead, &framesJustRead);
/* Convert, with dithering. */
ma_convert_pcm_frames_format(tempOut, ma_format_u8, tempIn, decoder.outputFormat, framesJustRead, decoder.outputChannels, ma_dither_mode_triangle);
/* Write to the WAV file. */
ma_encoder_write_pcm_frames(&encoder, tempOut, framesJustRead);
ma_encoder_write_pcm_frames(&encoder, tempOut, framesJustRead, NULL);
if (framesJustRead < framesToRead) {
break;
@@ -20,7 +20,7 @@ ma_result test_hishelf2__by_format(const char* pInputFilePath, const char* pOutp
}
hishelfConfig = ma_hishelf2_config_init(decoder.outputFormat, decoder.outputChannels, decoder.outputSampleRate, 18, 1, 16000);
result = ma_hishelf2_init(&hishelfConfig, &hishelf);
result = ma_hishelf2_init(&hishelfConfig, NULL, &hishelf);
if (result != MA_SUCCESS) {
ma_decoder_uninit(&decoder);
ma_encoder_uninit(&encoder);
@@ -36,13 +36,13 @@ ma_result test_hishelf2__by_format(const char* pInputFilePath, const char* pOutp
ma_uint64 framesJustRead;
framesToRead = ma_min(tempCapIn, tempCapOut);
framesJustRead = ma_decoder_read_pcm_frames(&decoder, tempIn, framesToRead);
ma_decoder_read_pcm_frames(&decoder, tempIn, framesToRead, &framesJustRead);
/* Filter */
ma_hishelf2_process_pcm_frames(&hishelf, tempOut, tempIn, framesJustRead);
/* Write to the WAV file. */
ma_encoder_write_pcm_frames(&encoder, tempOut, framesJustRead);
ma_encoder_write_pcm_frames(&encoder, tempOut, framesJustRead, NULL);
if (framesJustRead < framesToRead) {
break;
@@ -20,7 +20,7 @@ ma_result test_hpf1__by_format(const char* pInputFilePath, const char* pOutputFi
 }
 hpfConfig = ma_hpf1_config_init(decoder.outputFormat, decoder.outputChannels, decoder.outputSampleRate, 2000);
-result = ma_hpf1_init(&hpfConfig, &hpf);
+result = ma_hpf1_init(&hpfConfig, NULL, &hpf);
 if (result != MA_SUCCESS) {
 ma_decoder_uninit(&decoder);
 ma_encoder_uninit(&encoder);
@@ -36,13 +36,13 @@ ma_result test_hpf1__by_format(const char* pInputFilePath, const char* pOutputFi
 ma_uint64 framesJustRead;
 framesToRead = ma_min(tempCapIn, tempCapOut);
-framesJustRead = ma_decoder_read_pcm_frames(&decoder, tempIn, framesToRead);
+ma_decoder_read_pcm_frames(&decoder, tempIn, framesToRead, &framesJustRead);
 /* Filter */
 ma_hpf1_process_pcm_frames(&hpf, tempOut, tempIn, framesJustRead);
 /* Write to the WAV file. */
-ma_encoder_write_pcm_frames(&encoder, tempOut, framesJustRead);
+ma_encoder_write_pcm_frames(&encoder, tempOut, framesJustRead, NULL);
 if (framesJustRead < framesToRead) {
 break;
@@ -81,7 +81,7 @@ ma_result test_hpf2__by_format(const char* pInputFilePath, const char* pOutputFi
 }
 hpfConfig = ma_hpf2_config_init(decoder.outputFormat, decoder.outputChannels, decoder.outputSampleRate, 2000, 0);
-result = ma_hpf2_init(&hpfConfig, &hpf);
+result = ma_hpf2_init(&hpfConfig, NULL, &hpf);
 if (result != MA_SUCCESS) {
 ma_decoder_uninit(&decoder);
 ma_encoder_uninit(&encoder);
@@ -97,13 +97,13 @@ ma_result test_hpf2__by_format(const char* pInputFilePath, const char* pOutputFi
 ma_uint64 framesJustRead;
 framesToRead = ma_min(tempCapIn, tempCapOut);
-framesJustRead = ma_decoder_read_pcm_frames(&decoder, tempIn, framesToRead);
+ma_decoder_read_pcm_frames(&decoder, tempIn, framesToRead, &framesJustRead);
 /* Filter */
 ma_hpf2_process_pcm_frames(&hpf, tempOut, tempIn, framesJustRead);
 /* Write to the WAV file. */
-ma_encoder_write_pcm_frames(&encoder, tempOut, framesJustRead);
+ma_encoder_write_pcm_frames(&encoder, tempOut, framesJustRead, NULL);
 if (framesJustRead < framesToRead) {
 break;
@@ -142,7 +142,7 @@ ma_result test_hpf3__by_format(const char* pInputFilePath, const char* pOutputFi
 }
 hpfConfig = ma_hpf_config_init(decoder.outputFormat, decoder.outputChannels, decoder.outputSampleRate, 2000, 3);
-result = ma_hpf_init(&hpfConfig, &hpf);
+result = ma_hpf_init(&hpfConfig, NULL, &hpf);
 if (result != MA_SUCCESS) {
 ma_decoder_uninit(&decoder);
 ma_encoder_uninit(&encoder);
@@ -158,13 +158,13 @@ ma_result test_hpf3__by_format(const char* pInputFilePath, const char* pOutputFi
 ma_uint64 framesJustRead;
 framesToRead = ma_min(tempCapIn, tempCapOut);
-framesJustRead = ma_decoder_read_pcm_frames(&decoder, tempIn, framesToRead);
+ma_decoder_read_pcm_frames(&decoder, tempIn, framesToRead, &framesJustRead);
 /* Filter */
 ma_hpf_process_pcm_frames(&hpf, tempOut, tempIn, framesJustRead);
 /* Write to the WAV file. */
-ma_encoder_write_pcm_frames(&encoder, tempOut, framesJustRead);
+ma_encoder_write_pcm_frames(&encoder, tempOut, framesJustRead, NULL);
 if (framesJustRead < framesToRead) {
 break;
@@ -20,7 +20,7 @@ ma_result test_loshelf2__by_format(const char* pInputFilePath, const char* pOutp
 }
 loshelfConfig = ma_loshelf2_config_init(decoder.outputFormat, decoder.outputChannels, decoder.outputSampleRate, 6, 1, 200);
-result = ma_loshelf2_init(&loshelfConfig, &loshelf);
+result = ma_loshelf2_init(&loshelfConfig, NULL, &loshelf);
 if (result != MA_SUCCESS) {
 ma_decoder_uninit(&decoder);
 ma_encoder_uninit(&encoder);
@@ -36,13 +36,13 @@ ma_result test_loshelf2__by_format(const char* pInputFilePath, const char* pOutp
 ma_uint64 framesJustRead;
 framesToRead = ma_min(tempCapIn, tempCapOut);
-framesJustRead = ma_decoder_read_pcm_frames(&decoder, tempIn, framesToRead);
+ma_decoder_read_pcm_frames(&decoder, tempIn, framesToRead, &framesJustRead);
 /* Filter */
 ma_loshelf2_process_pcm_frames(&loshelf, tempOut, tempIn, framesJustRead);
 /* Write to the WAV file. */
-ma_encoder_write_pcm_frames(&encoder, tempOut, framesJustRead);
+ma_encoder_write_pcm_frames(&encoder, tempOut, framesJustRead, NULL);
 if (framesJustRead < framesToRead) {
 break;
@@ -20,7 +20,7 @@ ma_result test_lpf1__by_format(const char* pInputFilePath, const char* pOutputFi
 }
 lpfConfig = ma_lpf1_config_init(decoder.outputFormat, decoder.outputChannels, decoder.outputSampleRate, 2000);
-result = ma_lpf1_init(&lpfConfig, &lpf);
+result = ma_lpf1_init(&lpfConfig, NULL, &lpf);
 if (result != MA_SUCCESS) {
 ma_decoder_uninit(&decoder);
 ma_encoder_uninit(&encoder);
@@ -36,13 +36,13 @@ ma_result test_lpf1__by_format(const char* pInputFilePath, const char* pOutputFi
 ma_uint64 framesJustRead;
 framesToRead = ma_min(tempCapIn, tempCapOut);
-framesJustRead = ma_decoder_read_pcm_frames(&decoder, tempIn, framesToRead);
+ma_decoder_read_pcm_frames(&decoder, tempIn, framesToRead, &framesJustRead);
 /* Filter */
 ma_lpf1_process_pcm_frames(&lpf, tempOut, tempIn, framesJustRead);
 /* Write to the WAV file. */
-ma_encoder_write_pcm_frames(&encoder, tempOut, framesJustRead);
+ma_encoder_write_pcm_frames(&encoder, tempOut, framesJustRead, NULL);
 if (framesJustRead < framesToRead) {
 break;
@@ -81,7 +81,7 @@ ma_result test_lpf2__by_format(const char* pInputFilePath, const char* pOutputFi
 }
 lpfConfig = ma_lpf2_config_init(decoder.outputFormat, decoder.outputChannels, decoder.outputSampleRate, 2000, 0);
-result = ma_lpf2_init(&lpfConfig, &lpf);
+result = ma_lpf2_init(&lpfConfig, NULL, &lpf);
 if (result != MA_SUCCESS) {
 ma_decoder_uninit(&decoder);
 ma_encoder_uninit(&encoder);
@@ -97,13 +97,13 @@ ma_result test_lpf2__by_format(const char* pInputFilePath, const char* pOutputFi
 ma_uint64 framesJustRead;
 framesToRead = ma_min(tempCapIn, tempCapOut);
-framesJustRead = ma_decoder_read_pcm_frames(&decoder, tempIn, framesToRead);
+ma_decoder_read_pcm_frames(&decoder, tempIn, framesToRead, &framesJustRead);
 /* Filter */
 ma_lpf2_process_pcm_frames(&lpf, tempOut, tempIn, framesJustRead);
 /* Write to the WAV file. */
-ma_encoder_write_pcm_frames(&encoder, tempOut, framesJustRead);
+ma_encoder_write_pcm_frames(&encoder, tempOut, framesJustRead, NULL);
 if (framesJustRead < framesToRead) {
 break;
@@ -143,7 +143,7 @@ ma_result test_lpf3__by_format(const char* pInputFilePath, const char* pOutputFi
 }
 lpfConfig = ma_lpf_config_init(decoder.outputFormat, decoder.outputChannels, decoder.outputSampleRate, 2000, /*poles*/3);
-result = ma_lpf_init(&lpfConfig, &lpf);
+result = ma_lpf_init(&lpfConfig, NULL, &lpf);
 if (result != MA_SUCCESS) {
 ma_decoder_uninit(&decoder);
 ma_encoder_uninit(&encoder);
@@ -159,13 +159,13 @@ ma_result test_lpf3__by_format(const char* pInputFilePath, const char* pOutputFi
 ma_uint64 framesJustRead;
 framesToRead = ma_min(tempCapIn, tempCapOut);
-framesJustRead = ma_decoder_read_pcm_frames(&decoder, tempIn, framesToRead);
+ma_decoder_read_pcm_frames(&decoder, tempIn, framesToRead, &framesJustRead);
 /* Filter */
 ma_lpf_process_pcm_frames(&lpf, tempOut, tempIn, framesJustRead);
 /* Write to the WAV file. */
-ma_encoder_write_pcm_frames(&encoder, tempOut, framesJustRead);
+ma_encoder_write_pcm_frames(&encoder, tempOut, framesJustRead, NULL);
 if (framesJustRead < framesToRead) {
 break;
@@ -20,7 +20,7 @@ ma_result test_notch2__by_format(const char* pInputFilePath, const char* pOutput
 }
 notchConfig = ma_notch2_config_init(decoder.outputFormat, decoder.outputChannels, decoder.outputSampleRate, 1, 60);
-result = ma_notch2_init(&notchConfig, &notch);
+result = ma_notch2_init(&notchConfig, NULL, &notch);
 if (result != MA_SUCCESS) {
 ma_decoder_uninit(&decoder);
 ma_encoder_uninit(&encoder);
@@ -36,13 +36,13 @@ ma_result test_notch2__by_format(const char* pInputFilePath, const char* pOutput
 ma_uint64 framesJustRead;
 framesToRead = ma_min(tempCapIn, tempCapOut);
-framesJustRead = ma_decoder_read_pcm_frames(&decoder, tempIn, framesToRead);
+ma_decoder_read_pcm_frames(&decoder, tempIn, framesToRead, &framesJustRead);
 /* Filter */
 ma_notch2_process_pcm_frames(&notch, tempOut, tempIn, framesJustRead);
 /* Write to the WAV file. */
-ma_encoder_write_pcm_frames(&encoder, tempOut, framesJustRead);
+ma_encoder_write_pcm_frames(&encoder, tempOut, framesJustRead, NULL);
 if (framesJustRead < framesToRead) {
 break;
@@ -20,7 +20,7 @@ ma_result test_peak2__by_format(const char* pInputFilePath, const char* pOutputF
 }
 peakConfig = ma_peak2_config_init(decoder.outputFormat, decoder.outputChannels, decoder.outputSampleRate, 24, 0, 16000);
-result = ma_peak2_init(&peakConfig, &peak);
+result = ma_peak2_init(&peakConfig, NULL, &peak);
 if (result != MA_SUCCESS) {
 ma_decoder_uninit(&decoder);
 ma_encoder_uninit(&encoder);
@@ -36,13 +36,13 @@ ma_result test_peak2__by_format(const char* pInputFilePath, const char* pOutputF
 ma_uint64 framesJustRead;
 framesToRead = ma_min(tempCapIn, tempCapOut);
-framesJustRead = ma_decoder_read_pcm_frames(&decoder, tempIn, framesToRead);
+ma_decoder_read_pcm_frames(&decoder, tempIn, framesToRead, &framesJustRead);
 /* Filter */
 ma_peak2_process_pcm_frames(&peak, tempOut, tempIn, framesJustRead);
 /* Write to the WAV file. */
-ma_encoder_write_pcm_frames(&encoder, tempOut, framesJustRead);
+ma_encoder_write_pcm_frames(&encoder, tempOut, framesJustRead, NULL);
 if (framesJustRead < framesToRead) {
 break;
@@ -11,12 +11,12 @@ ma_result test_noise__by_format_and_type(ma_format format, ma_noise_type type, c
 printf(" %s\n", pFileName);
 noiseConfig = ma_noise_config_init(format, 1, type, 0, 0.1);
-result = ma_noise_init(&noiseConfig, &noise);
+result = ma_noise_init(&noiseConfig, NULL, &noise);
 if (result != MA_SUCCESS) {
 return result;
 }
-encoderConfig = ma_encoder_config_init(ma_resource_format_wav, format, noiseConfig.channels, 48000);
+encoderConfig = ma_encoder_config_init(ma_encoding_format_wav, format, noiseConfig.channels, 48000);
 result = ma_encoder_init_file(pFileName, &encoderConfig, &encoder);
 if (result != MA_SUCCESS) {
 return result;
@@ -25,8 +25,8 @@ ma_result test_noise__by_format_and_type(ma_format format, ma_noise_type type, c
 /* We'll do a few seconds of data. */
 for (iFrame = 0; iFrame < encoder.config.sampleRate * 10; iFrame += 1) {
 ma_uint8 temp[1024];
-ma_noise_read_pcm_frames(&noise, temp, 1);
-ma_encoder_write_pcm_frames(&encoder, temp, 1);
+ma_noise_read_pcm_frames(&noise, temp, 1, NULL);
+ma_encoder_write_pcm_frames(&encoder, temp, 1, NULL);
 }
 ma_encoder_uninit(&encoder);
@@ -1,18 +1,11 @@
-static drwav_data_format drwav_data_format_from_waveform_config(const ma_waveform_config* pWaveformConfig)
-{
-MA_ASSERT(pWaveformConfig != NULL);
-return drwav_data_format_from_minaudio_format(pWaveformConfig->format, pWaveformConfig->channels, pWaveformConfig->sampleRate);
-}
 ma_result test_waveform__by_format_and_type(ma_format format, ma_waveform_type type, double amplitude, const char* pFileName)
 {
 ma_result result;
 ma_waveform_config waveformConfig;
 ma_waveform waveform;
-drwav_data_format wavFormat;
-drwav wav;
+ma_encoder_config encoderConfig;
+ma_encoder encoder;
 ma_uint32 iFrame;
 printf(" %s\n", pFileName);
@@ -23,19 +16,22 @@ ma_result test_waveform__by_format_and_type(ma_format format, ma_waveform_type t
 return result;
 }
-wavFormat = drwav_data_format_from_waveform_config(&waveformConfig);
-if (!drwav_init_file_write(&wav, pFileName, &wavFormat, NULL)) {
-return MA_ERROR; /* Could not open file for writing. */
+encoderConfig = ma_encoder_config_init(ma_encoding_format_wav, waveformConfig.format, waveformConfig.channels, waveformConfig.sampleRate);
+result = ma_encoder_init_file(pFileName, &encoderConfig, &encoder);
+if (result != MA_SUCCESS) {
+return result; /* Failed to initialize encoder. */
 }
 /* We'll do a few seconds of data. */
-for (iFrame = 0; iFrame < wavFormat.sampleRate * 10; iFrame += 1) {
+for (iFrame = 0; iFrame < waveformConfig.sampleRate * 10; iFrame += 1) {
 float temp[MA_MAX_CHANNELS];
-ma_waveform_read_pcm_frames(&waveform, temp, 1);
-drwav_write_pcm_frames(&wav, 1, temp);
+ma_waveform_read_pcm_frames(&waveform, temp, 1, NULL);
+ma_encoder_write_pcm_frames(&encoder, temp, 1, NULL);
 }
-drwav_uninit(&wav);
+ma_encoder_uninit(&encoder);
 ma_waveform_uninit(&waveform);
 return MA_SUCCESS;
 }
@@ -41,7 +41,7 @@ ma_result do_conversion(ma_decoder* pDecoder, ma_encoder* pEncoder)
 MA_ASSERT(pEncoder != NULL);
 /*
-All we do is read from the decoder and then write straight to the encoder. All of the neccessary data conversion
+All we do is read from the decoder and then write straight to the encoder. All of the necessary data conversion
 will happen internally.
 */
 for (;;) {