Changes from 11 commits
21 changes: 21 additions & 0 deletions .idea/deployment.xml


5 changes: 5 additions & 0 deletions .vscode/settings.json
@@ -0,0 +1,5 @@
{
"rust-analyzer.cargo.features": [
"pipewire"
]
}
34 changes: 21 additions & 13 deletions examples/duplex.rs
@@ -1,20 +1,24 @@
use crate::util::sine::SineWave;
use anyhow::Result;
use interflow::duplex::AudioDuplexCallback;
use interflow::prelude::*;

mod util;

//noinspection RsUnwrap
fn main() -> Result<()> {
env_logger::init();
let input = default_input_device();
let output = default_output_device();
let mut input_config = input.default_input_config().unwrap();
input_config.buffer_size_range = (Some(128), Some(512));
let mut output_config = output.default_output_config().unwrap();
output_config.buffer_size_range = (Some(128), Some(512));
let duplex_config = DuplexStreamConfig::new(input_config, output_config);
let stream =
duplex::create_duplex_stream(input, output, RingMod::new(), duplex_config).unwrap();
log::info!("Opening input: {}", input.name());
log::info!("Opening output: {}", output.name());
let config = StreamConfig {
buffer_size_range: (Some(128), Some(512)),
input_channels: 2,
output_channels: 2,
Comment on lines +16 to +17
@Be-ing Be-ing Dec 12, 2025

Merely specifying the number of channels cannot adequately take advantage of all the features of some APIs. Pipewire and JACK both give names to ports. This API would require all the ports to share a common name, differentiated only by number like client:port_1, client:port_2, client:port_3. PortAudio does this and it has always frustrated JACK users who expect meaningful names for ports so they can route them as they please in a separate patchbay application.


@Be-ing Be-ing Dec 12, 2025


Also along these lines, putting all channels for input/output into a single buffer would be cumbersome for developers of applications that expose lots of named ports. With the JACK API, the application creates structs for each port which get accessed in the callback. See https://github.com/RustAudio/rust-jack/blob/main/examples/playback_capture.rs. I recommend creating an API along these lines, as I find the JACK API quite intuitive to use (at least with the Rust bindings). For backends that do not support naming ports, creating a port struct would simply increment the number of channels, and the name string would be ignored.

As an example of what a pain it would be for application developers to not be able to access ports by name, here's qpwgraph with Ardour open with a very simple project that only has a single track:
[screenshot: qpwgraph showing Ardour's named ports]

Owner Author


As a JACK and Pipewire user, I fully understand the pain, and I have been thinking of ways to let users specify port names. The problem is that interflow (like PortAudio) is an abstraction library, so it cannot fully map all platform-specific features. But I don't want interflow to be a "lowest common denominator" library either, and the whole design with "drivers as structs" lets people target one backend specifically and get concrete types, enabling backend-specific configuration where it exists, while still remaining generic with "good enough" defaults.

I could also make it available as part of the stream configuration type, but that would significantly increase the configuration complexity for an option that, I would argue, most people wouldn't use in the first place (either out of ignorance or because they don't even develop against backends that support port naming).

The other problem is that if you don't know that you can name your ports, you aren't going to do it. A lot of cross-platform software won't bother doing it even with the feature available; it's not only a library problem but also a user problem.

Finally, JACK and Pipewire are fundamentally different from other platforms because the user creates the node and its topology, whereas on WASAPI or CoreAudio the platform dictates what you can connect to. It's hard to blend the two together in the first place, and so the latter use case wins by virtue of being the more common one.

All in all, the current solution in my head is to keep the generic path light on configuration, and instead allow people who want to deal with backend-specific configuration to do so directly. GUI applications (as an example) will also require specific UI design to reflect specific configuration details, so I'm not even sure it makes sense to abstract much of that away in the first place, except for providing an "easy mode".
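The "drivers as structs" idea described above could look roughly like this — a minimal sketch where all names (`AudioDriver`, `WasapiDriver`, `exclusive`) are illustrative stand-ins, not interflow's actual API:

```rust
// Hypothetical sketch of "drivers as structs": the generic path goes through
// a trait with good-enough defaults, while holding a concrete driver type
// exposes backend-specific configuration directly.
trait AudioDriver {
    fn open(&self) -> String;
}

struct WasapiDriver {
    // Backend-specific option, only visible on the concrete type.
    pub exclusive: bool,
}

impl AudioDriver for WasapiDriver {
    fn open(&self) -> String {
        format!("wasapi(exclusive={})", self.exclusive)
    }
}

// Generic code sees only the trait, with no backend-specific knobs.
fn open_generic(driver: &dyn AudioDriver) -> String {
    driver.open()
}

fn main() {
    let driver = WasapiDriver { exclusive: true };
    // Concrete type: backend-specific configuration is reachable...
    assert!(driver.exclusive);
    // ...while the same value still works through the generic path.
    println!("{}", open_generic(&driver));
}
```

The point of the design is that choosing the concrete driver type is opt-in; code that stays generic never pays the configuration complexity.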

Owner Author


Re. the use of named ports: the order of declaration would be preserved in the order of the audio buffers in the audio callback, so you could set up an enum to do the naming and cast to usize when indexing the audio buffers; I am going to explicitly disallow relying on string types to index ports because it's error-prone and has very poor type safety.
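A minimal sketch of that enum idea (all names here are hypothetical, not interflow's actual API):

```rust
// Hypothetical sketch: ports are named by an enum whose declaration order
// matches the order of the audio buffers, so indexing casts to usize
// instead of relying on error-prone string lookups.
#[derive(Copy, Clone, Debug)]
enum OutPort {
    MainL,   // buffer 0
    MainR,   // buffer 1
    Monitor, // buffer 2
}

impl OutPort {
    // Name a backend with port-naming support (JACK, PipeWire) would use;
    // backends without it would just count the variants as channels.
    fn name(self) -> &'static str {
        match self {
            OutPort::MainL => "main_l",
            OutPort::MainR => "main_r",
            OutPort::Monitor => "monitor",
        }
    }
}

fn main() {
    // Stand-in for the per-channel buffers handed to an audio callback.
    let buffers: Vec<Vec<f32>> = vec![vec![0.0; 64]; 3];
    // Declaration order == buffer order, so the cast is a type-safe index.
    let monitor = &buffers[OutPort::Monitor as usize];
    println!("{} has {} frames", OutPort::Monitor.name(), monitor.len());
}
```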



set up an enum to do the naming and cast to usize when indexing the audio buffers

That could work.

Another big difference between Pipewire and JACK versus APIs that directly connect to hardware is that JACK ports can exist independently of whether they are in active use. It's common for JACK applications to create various ports without programmatically connecting them to any other nodes and leaving that to the user or a session manager application. So as an application developer, I might want to create 4 input and 20 output ports, but by default only connect 2 of those output ports to a hardware output. Not sure how to account for this in a cross platform API, but I hope such considerations can be accounted for early in the API design before they get excluded.
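One way a cross-platform API could model this is purely a sketch along these lines (every name here is hypothetical): declare all ports up front, but mark only a subset for default connection, leaving the rest to the user or a session manager on backends that support free-standing ports.

```rust
// Hypothetical sketch: an application declares every port it exposes, but
// only some are connected to hardware by default. Backends without
// free-standing ports could ignore the flag and open only the
// auto-connected channels.
struct PortDecl {
    name: &'static str,
    auto_connect: bool,
}

fn default_connections(ports: &[PortDecl]) -> Vec<&'static str> {
    ports
        .iter()
        .filter(|p| p.auto_connect)
        .map(|p| p.name)
        .collect()
}

fn main() {
    // Four output ports exist in the graph, but only the main pair is
    // wired to hardware by default; "cue" stays free for the patchbay.
    let outputs = [
        PortDecl { name: "main_l", auto_connect: true },
        PortDecl { name: "main_r", auto_connect: true },
        PortDecl { name: "cue_l", auto_connect: false },
        PortDecl { name: "cue_r", auto_connect: false },
    ];
    println!("connected by default: {:?}", default_connections(&outputs));
}
```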

Owner Author


It's hard to make such specific situations part of the generic use case, where the assumption is that the audio callback being opened is connected to a device, rather than going through an additional nodal abstraction. Again, it's not just an issue of technology but also of expected usage; most people are not on Linux and don't know the inner workings of JACK/Pipewire, so they will expect the traditional workflow. I don't want to make that use case weaker in interflow for the sole reason that it might make it easier to target JACK while staying in that workflow.

In all cases, if you want to use interflow for the library design but still directly control the JACK node interface, you'll be able to do that eventually by using the JACK driver explicitly.


@Be-ing Be-ing Dec 12, 2025


In all cases, if you want to use interflow for the library design but still directly control the JACK node interface, you'll be able to do that eventually by using the JACK driver explicitly.

Interesting idea; we'll see if such an approach can really expose all the features of each API. What I want to avoid is situations like the discussion on that old Mixxx bug, where developers are caught between choosing an oversimplified cross-platform abstraction that doesn't really meet users' needs or forgoing the cross-platform abstraction and using a platform-specific API directly. I would lean towards a graceful degradation approach of designing the abstractions around the most featureful backends with sensible fallbacks for backends that don't support all features; it seems you're leaning towards a progressive enhancement approach instead.

it's not just an issue of technology but also of expected usage; most people not being on Linux and specifically not knowing about the inner workings of JACK/Pipewire means they will expect the traditional workflow

Indeed, that's why I'd argue for graceful degradation over progressive enhancement. Read through that old discussion on the Mixxx bug and see how hard it was for users to communicate to unfamiliar developers what the behavior should be, or that there even was a problem to be solved. That's why I'd favor designing the API around the most featureful backends, so that developers who are not so familiar with them don't neglect them, for example by failing to provide port names.

Owner Author


I have begun the work on an extension-based API, allowing user-defined traits and objects to be queried at runtime, and in a dyn-safe way. For example, here are additional traits for "named channels" and config enumeration; trait implementations can be registered, and user code can query those traits from the context of a type-erased device (i.e., through dyn DeviceProxy).

This would allow the JACK and PipeWire backends to implement custom "dynamic port" and "port name" extensions, and user code that would like to use those, would be able to query for the extension at runtime.
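The runtime-query mechanism could be sketched with std::any machinery; note that `ExtensionRegistry`, `NamedPorts`, and `PipewireNamedPorts` are invented for illustration and are not interflow's actual types:

```rust
use std::any::{Any, TypeId};
use std::collections::HashMap;

// Hypothetical extension trait a backend with port naming could implement.
trait NamedPorts {
    fn port_names(&self) -> Vec<String>;
}

struct PipewireNamedPorts;

impl NamedPorts for PipewireNamedPorts {
    fn port_names(&self) -> Vec<String> {
        vec!["capture_FL".into(), "capture_FR".into()]
    }
}

// Dyn-safe registry: backends register extension objects keyed by type,
// and user code queries for them at runtime.
#[derive(Default)]
struct ExtensionRegistry {
    extensions: HashMap<TypeId, Box<dyn Any>>,
}

impl ExtensionRegistry {
    fn register<T: Any>(&mut self, ext: T) {
        self.extensions.insert(TypeId::of::<T>(), Box::new(ext));
    }
    fn query<T: Any>(&self) -> Option<&T> {
        self.extensions
            .get(&TypeId::of::<T>())
            .and_then(|boxed| boxed.downcast_ref::<T>())
    }
}

fn main() {
    let mut registry = ExtensionRegistry::default();
    registry.register(PipewireNamedPorts);
    // A backend that never registered the extension simply yields None here.
    if let Some(named) = registry.query::<PipewireNamedPorts>() {
        println!("ports: {:?}", named.port_names());
    }
}
```

The appeal of this shape is that backends that lack a capability pay nothing, and callers degrade gracefully by handling the `None` case.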



Oh interesting idea. The link to the diff in your comment isn't working though.

Owner Author


It does for me, but in any case you can look at lib.rs in interflow-core in that branch; the additional traits are extension points. There is also an example of platform-specific extensions in the interflow-wasapi crate: device.rs implements and registers a DefaultForRole extension that allows specifying a WASAPI Role and getting the associated default device ID.

..output.default_config().unwrap()
};
let duplex_config = DuplexStreamConfig::new(config);
let stream = create_duplex_stream(input, output, RingMod::new(), duplex_config).unwrap();
println!("Press Enter to stop");
std::io::stdin().read_line(&mut String::new())?;
stream.eject().unwrap();
Expand All @@ -33,17 +37,21 @@ impl RingMod {
}
}

impl AudioDuplexCallback for RingMod {
fn on_audio_data(
impl AudioCallback for RingMod {
fn prepare(&mut self, context: AudioCallbackContext) {
self.carrier.prepare(context);
}

fn process_audio(
&mut self,
context: AudioCallbackContext,
input: AudioInput<f32>,
mut output: AudioOutput<f32>,
) {
let sr = context.stream_config.samplerate as f32;
for i in 0..output.buffer.num_samples() {
let sr = context.stream_config.sample_rate as f32;
for i in 0..output.buffer.num_frames() {
let inp = input.buffer.get_frame(i)[0];
let c = self.carrier.next_sample(sr);
let c = self.carrier.next_sample();
output.buffer.set_mono(i, inp * c);
}
}
Expand Down
21 changes: 16 additions & 5 deletions examples/input.rs
Expand Up @@ -12,7 +12,7 @@ fn main() -> Result<()> {
let device = default_input_device();
let value = Arc::new(AtomicF32::new(0.));
let stream = device
.default_input_stream(RmsMeter::new(value.clone()))
.default_stream(DeviceType::INPUT, RmsMeter::new(value.clone()))
.unwrap();
util::display_peakmeter(value)?;
stream.eject().unwrap();
Expand All @@ -30,12 +30,23 @@ impl RmsMeter {
}
}

impl AudioInputCallback for RmsMeter {
fn on_input_data(&mut self, context: AudioCallbackContext, input: AudioInput<f32>) {
impl AudioCallback for RmsMeter {
fn prepare(&mut self, context: AudioCallbackContext) {
let meter = self
.meter
.get_or_insert_with(|| PeakMeter::new(context.stream_config.samplerate as f32, 15.0));
meter.set_samplerate(context.stream_config.samplerate as f32);
.get_or_insert_with(|| PeakMeter::new(context.stream_config.sample_rate as f32, 15.0));
meter.set_samplerate(context.stream_config.sample_rate as f32);
}
fn process_audio(
&mut self,
_: AudioCallbackContext,
input: AudioInput<f32>,
_output: AudioOutput<f32>,
) {
let meter = self
.meter
.as_mut()
.expect("Peak meter not constructed, prepare not called");
meter.process_buffer(input.buffer.as_ref());
self.value
.store(meter.value(), std::sync::atomic::Ordering::Relaxed);
Expand Down
23 changes: 13 additions & 10 deletions examples/loopback.rs
Expand Up @@ -10,14 +10,16 @@ fn main() -> Result<()> {

let input = default_input_device();
let output = default_output_device();
let mut input_config = input.default_input_config().unwrap();
input_config.buffer_size_range = (Some(128), Some(512));
let mut output_config = output.default_output_config().unwrap();
output_config.buffer_size_range = (Some(128), Some(512));
input_config.channels = 0b01;
output_config.channels = 0b11;
log::info!("Opening input : {}", input.name());
log::info!("Opening output: {}", output.name());
let config = StreamConfig {
buffer_size_range: (Some(128), Some(512)),
input_channels: 1,
output_channels: 1,
..output.default_config().unwrap()
};
let value = Arc::new(AtomicF32::new(0.));
let config = DuplexStreamConfig::new(input_config, output_config);
let config = DuplexStreamConfig::new(config);
let stream =
create_duplex_stream(input, output, Loopback::new(44100., value.clone()), config).unwrap();
util::display_peakmeter(value)?;
Expand All @@ -39,15 +41,16 @@ impl Loopback {
}
}

impl AudioDuplexCallback for Loopback {
fn on_audio_data(
impl AudioCallback for Loopback {
fn prepare(&mut self, _context: AudioCallbackContext) {}
fn process_audio(
&mut self,
context: AudioCallbackContext,
input: AudioInput<f32>,
mut output: AudioOutput<f32>,
) {
self.meter
.set_samplerate(context.stream_config.samplerate as f32);
.set_samplerate(context.stream_config.sample_rate as f32);
let rms = self.meter.process_buffer(input.buffer.as_ref());
self.value.store(rms, std::sync::atomic::Ordering::Relaxed);
output.buffer.as_interleaved_mut().fill(0.0);
Expand Down
21 changes: 13 additions & 8 deletions examples/set_buffer_size.rs
Expand Up @@ -6,7 +6,7 @@ mod util;
#[cfg(os_coreaudio)]
fn main() -> anyhow::Result<()> {
use interflow::backends::coreaudio::CoreAudioDriver;
use interflow::channel_map::{ChannelMap32, CreateBitset};
use interflow::channel_map::CreateBitset;
use interflow::prelude::*;
use std::sync::{
atomic::{AtomicBool, Ordering},
Expand All @@ -19,19 +19,23 @@ fn main() -> anyhow::Result<()> {
sine_wave: SineWave,
}

impl AudioOutputCallback for MyCallback {
fn on_output_data(&mut self, context: AudioCallbackContext, mut output: AudioOutput<f32>) {
impl AudioCallback for MyCallback {
fn prepare(&mut self, context: AudioCallbackContext) {
self.sine_wave.prepare(context);
}

fn process_audio(&mut self, _: AudioCallbackContext, _: AudioInput<f32>, mut output: AudioOutput<f32>) {
if self.first_callback.swap(false, Ordering::SeqCst) {
println!(
"Actual buffer size granted by OS: {}",
output.buffer.num_samples()
output.buffer.num_frames()
);
}

for mut frame in output.buffer.as_interleaved_mut().rows_mut() {
let sample = self
.sine_wave
.next_sample(context.stream_config.samplerate as f32);
.next_sample();
for channel_sample in &mut frame {
*channel_sample = sample;
}
Expand Down Expand Up @@ -61,8 +65,9 @@ fn main() -> anyhow::Result<()> {
println!("Requesting buffer size: {}", requested_buffer_size);

let stream_config = StreamConfig {
samplerate: 48000.0,
channels: ChannelMap32::from_indices([0, 1]),
sample_rate: 48000.0,
input_channels: 0,
output_channels: 2,
buffer_size_range: (Some(requested_buffer_size), Some(requested_buffer_size)),
exclusive: false,
};
Expand All @@ -72,7 +77,7 @@ fn main() -> anyhow::Result<()> {
sine_wave: SineWave::new(440.0),
};

let stream = device.create_output_stream(stream_config, callback)?;
let stream = device.create_stream(stream_config, callback)?;

println!("Playing sine wave... Press enter to stop.");
std::io::stdin().read_line(&mut String::new())?;
Expand Down
2 changes: 1 addition & 1 deletion examples/sine_wave.rs
Expand Up @@ -9,7 +9,7 @@ fn main() -> Result<()> {

let device = default_output_device();
println!("Using device {}", device.name());
let stream = device.default_output_stream(SineWave::new(440.0)).unwrap();
let stream = device.default_stream(DeviceType::OUTPUT, SineWave::new(440.0)).unwrap();
println!("Press Enter to stop");
std::io::stdin().read_line(&mut String::new())?;
stream.eject().unwrap();
Expand Down
2 changes: 1 addition & 1 deletion examples/util/meter.rs
Expand Up @@ -42,7 +42,7 @@ impl PeakMeter {
}

pub fn process_buffer(&mut self, buffer: AudioRef<f32>) -> f32 {
let buffer_duration = buffer.num_samples() as f32 * self.dt;
let buffer_duration = buffer.num_frames() as f32 * self.dt;
let peak_lin = buffer
.channels()
.flat_map(|ch| ch.iter().copied().max_by(f32::total_cmp))
Expand Down
24 changes: 17 additions & 7 deletions examples/util/sine.rs
@@ -1,20 +1,29 @@
use interflow::{AudioCallbackContext, AudioOutput, AudioOutputCallback};
use interflow::{AudioCallback, AudioCallbackContext, AudioInput, AudioOutput};
use std::f32::consts::TAU;

pub struct SineWave {
pub frequency: f32,
pub phase: f32,
step_frequency_scaling: f32,
}

impl AudioOutputCallback for SineWave {
fn on_output_data(&mut self, context: AudioCallbackContext, mut output: AudioOutput<f32>) {
impl AudioCallback for SineWave {
fn prepare(&mut self, context: AudioCallbackContext) {
self.step_frequency_scaling = context.stream_config.sample_rate.recip() as f32;
}
fn process_audio(
&mut self,
context: AudioCallbackContext,
_input: AudioInput<f32>,
mut output: AudioOutput<f32>,
) {
eprintln!(
"Callback called, timestamp: {:2.3} s",
context.timestamp.as_seconds()
);
let sr = context.timestamp.samplerate as f32;
for i in 0..output.buffer.num_samples() {
output.buffer.set_mono(i, self.next_sample(sr));
for i in 0..output.buffer.num_frames() {
output.buffer.set_mono(i, self.next_sample());
}
// Reduce amplitude to not blow up speakers and ears
output.buffer.change_amplitude(0.125);
Expand All @@ -26,11 +35,12 @@ impl SineWave {
Self {
frequency,
phase: 0.0,
step_frequency_scaling: 0.0,
}
}

pub fn next_sample(&mut self, samplerate: f32) -> f32 {
let step = samplerate.recip() * self.frequency;
pub fn next_sample(&mut self) -> f32 {
let step = self.step_frequency_scaling * self.frequency;
let y = (TAU * self.phase).sin();
self.phase += step;
if self.phase > 1. {
Expand Down