Completes task 16.2: Web Audio mixer graph

Creates a dedicated mixer module (mixer.ts) that owns the whole Web Audio graph:
- AudioContext with a master GainNode and a master AnalyserNode
- Per-channel signal chain: MediaStreamSource → AnalyserNode → GainNode → MasterGain → destination
- An AnalyserNode per channel provides peak/RMS level data for the VU meters
- API for gain control (per channel and master), mute/unmute, and level readout
- livekit.ts delegates all audio routing to mixer.ts

The architecture is ready for future phases: effect chains can be inserted
between source and gain, sound pads can add channels, and SpacetimeDB
can synchronize mixer state.
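The peak/RMS level readout described above can be sketched as a pure function over a time-domain sample buffer, mirroring the loop in `readAnalyserLevels()` in mixer.ts (a sketch for illustration, not the module's exported API):

```typescript
// Sketch of the per-channel level computation behind the VU meters.
// Input: time-domain samples in the range -1.0..1.0, as returned by
// AnalyserNode.getFloatTimeDomainData().
function computeLevels(samples: Float32Array): { peak: number; rms: number } {
  let peak = 0;
  let sumSquares = 0;
  for (let i = 0; i < samples.length; i++) {
    const abs = Math.abs(samples[i]);
    if (abs > peak) peak = abs;          // track the largest excursion
    sumSquares += samples[i] * samples[i]; // accumulate energy for RMS
  }
  const rms = samples.length ? Math.sqrt(sumSquares / samples.length) : 0;
  return { peak: Math.min(1, peak), rms: Math.min(1, rms) };
}

// A half-scale square wave has both peak and RMS at 0.5:
console.log(computeLevels(new Float32Array([0.5, -0.5, 0.5, -0.5])));
// → { peak: 0.5, rms: 0.5 }
```

RMS tracks perceived loudness better than peak, which is why the module reports both.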

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
This commit is contained in:
vegard 2026-03-18 04:53:13 +00:00
parent 4630820876
commit 9fb1dcf93b
3 changed files with 297 additions and 53 deletions


@@ -3,7 +3,7 @@
*
* Handles room connection, participant tracking, and Web Audio routing.
* LiveKit's auto-attach of <audio> elements is disabled; all audio is
* routed through the Web Audio API so the mixer (Fase 16) can process it.
* routed through the Web Audio API via the mixer module (mixer.ts).
*/
import {
@@ -17,6 +17,8 @@ import {
type Participant,
} from 'livekit-client';
import { addChannel, removeChannel, destroyMixer, ensureAudioContext } from './mixer';
// ─── Types ──────────────────────────────────────────────────────────────────
export interface LiveKitParticipant {
@@ -32,12 +34,6 @@ export type RoomStatus = 'disconnected' | 'connecting' | 'connected' | 'reconnec
// ─── State ──────────────────────────────────────────────────────────────────
let room: Room | null = null;
let audioContext: AudioContext | null = null;
/** Map from participant identity → their Web Audio source node */
const audioSources = new Map<string, MediaStreamAudioSourceNode>();
/** Map from participant identity → their GainNode (for future mixer control) */
const gainNodes = new Map<string, GainNode>();
// Reactive state via callbacks
type StateListener = () => void;
@@ -109,17 +105,10 @@ function refreshParticipants() {
notify();
}
// ─── Web Audio routing ─────────────────────────────────────────────────────
function ensureAudioContext(): AudioContext {
if (!audioContext || audioContext.state === 'closed') {
audioContext = new AudioContext();
}
return audioContext;
}
// ─── Web Audio routing (delegated to mixer.ts) ────────────────────────────
/**
* Route a remote participant's audio track through Web Audio API
* Route a remote participant's audio track through the mixer graph
* instead of letting LiveKit auto-attach an <audio> element.
*/
function attachTrackToWebAudio(track: RemoteTrack, participant: RemoteParticipant) {
@@ -128,33 +117,12 @@ function attachTrackToWebAudio(track: RemoteTrack, participant: RemoteParticipan
const mediaStream = track.mediaStream;
if (!mediaStream) return;
const ctx = ensureAudioContext();
// Clean up previous source for this participant
detachParticipantAudio(participant.identity);
const source = ctx.createMediaStreamSource(mediaStream);
const gain = ctx.createGain();
gain.gain.value = 1.0;
source.connect(gain);
gain.connect(ctx.destination);
audioSources.set(participant.identity, source);
gainNodes.set(participant.identity, gain);
ensureAudioContext();
addChannel(participant.identity, mediaStream);
}
function detachParticipantAudio(identity: string) {
const source = audioSources.get(identity);
if (source) {
source.disconnect();
audioSources.delete(identity);
}
const gain = gainNodes.get(identity);
if (gain) {
gain.disconnect();
gainNodes.delete(identity);
}
removeChannel(identity);
}
// ─── Room connection ────────────────────────────────────────────────────────
@@ -246,13 +214,7 @@ export async function disconnect(): Promise<void> {
}
function cleanupAudio() {
for (const [identity] of audioSources) {
detachParticipantAudio(identity);
}
if (audioContext && audioContext.state !== 'closed') {
audioContext.close();
audioContext = null;
}
destroyMixer();
}
/** Toggle local microphone mute */
@@ -264,10 +226,8 @@ export async function toggleMute(): Promise<boolean> {
return !enabled;
}
/** Get the GainNode for a participant (for future mixer integration) */
export function getParticipantGain(identity: string): GainNode | undefined {
return gainNodes.get(identity);
}
// Mixer controls are now exported from mixer.ts directly.
// Use: import { getChannel, setChannelGain, ... } from './mixer';
export function isConnected(): boolean {
return room?.state === ConnectionState.Connected;

frontend/src/lib/mixer.ts Normal file

@@ -0,0 +1,285 @@
/**
* Web Audio mixer graph for Synops.
*
* Manages the audio processing graph:
* MediaStreamSource (per channel) → AnalyserNode → GainNode → MasterGain → destination
*
* Each remote participant and the local microphone gets a channel.
* AnalyserNodes provide real-time level data for VU meters.
* Future phases will insert effect chains between source and gain.
*/
// ─── Types ──────────────────────────────────────────────────────────────────
export interface MixerChannel {
identity: string;
source: MediaStreamAudioSourceNode;
analyser: AnalyserNode;
gain: GainNode;
}
export interface ChannelLevels {
identity: string;
peak: number; // 0.0–1.0, peak amplitude
rms: number; // 0.0–1.0, RMS level (closer to perceived loudness)
}
// ─── State ──────────────────────────────────────────────────────────────────
let audioContext: AudioContext | null = null;
let masterGain: GainNode | null = null;
let masterAnalyser: AnalyserNode | null = null;
const channels = new Map<string, MixerChannel>();
// Reusable buffer for analyser readings (allocated once per context)
let analyserBuffer: Float32Array | null = null;
// ─── AudioContext lifecycle ─────────────────────────────────────────────────
/**
* Get or create the AudioContext. Must be called from a user gesture
* the first time (browser autoplay policy).
*/
export function ensureAudioContext(): AudioContext {
if (!audioContext || audioContext.state === 'closed') {
audioContext = new AudioContext();
// Create master gain and analyser
masterGain = audioContext.createGain();
masterGain.gain.value = 1.0;
masterAnalyser = audioContext.createAnalyser();
masterAnalyser.fftSize = 256;
masterAnalyser.smoothingTimeConstant = 0.3;
// Master chain: masterGain → masterAnalyser → destination
masterGain.connect(masterAnalyser);
masterAnalyser.connect(audioContext.destination);
analyserBuffer = null; // will be allocated on first use
}
// Resume if suspended (happens after tab goes inactive)
if (audioContext.state === 'suspended') {
audioContext.resume();
}
return audioContext;
}
export function getAudioContext(): AudioContext | null {
return audioContext;
}
// ─── Channel management ────────────────────────────────────────────────────
/**
* Add a channel for a participant's audio track.
* Creates: MediaStreamSource → AnalyserNode → GainNode → MasterGain
*/
export function addChannel(identity: string, mediaStream: MediaStream): MixerChannel {
const ctx = ensureAudioContext();
// Remove existing channel for this identity first
removeChannel(identity);
const source = ctx.createMediaStreamSource(mediaStream);
const analyser = ctx.createAnalyser();
analyser.fftSize = 256;
analyser.smoothingTimeConstant = 0.3;
const gain = ctx.createGain();
gain.gain.value = 1.0;
// Signal chain: source → analyser → gain → masterGain
source.connect(analyser);
analyser.connect(gain);
gain.connect(masterGain!);
const channel: MixerChannel = { identity, source, analyser, gain };
channels.set(identity, channel);
return channel;
}
/**
* Remove a channel and disconnect all its nodes.
*/
export function removeChannel(identity: string): void {
const channel = channels.get(identity);
if (!channel) return;
channel.source.disconnect();
channel.analyser.disconnect();
channel.gain.disconnect();
channels.delete(identity);
}
/**
* Get a channel by participant identity.
*/
export function getChannel(identity: string): MixerChannel | undefined {
return channels.get(identity);
}
/**
* Get all active channel identities.
*/
export function getChannelIdentities(): string[] {
return Array.from(channels.keys());
}
// ─── Gain control ──────────────────────────────────────────────────────────
/**
* Set the gain for a channel (0.0–1.5, default 1.0).
*/
export function setChannelGain(identity: string, value: number): void {
const channel = channels.get(identity);
if (!channel) return;
channel.gain.gain.value = Math.max(0, Math.min(1.5, value));
}
/**
* Get the current gain value for a channel.
*/
export function getChannelGain(identity: string): number {
const channel = channels.get(identity);
return channel ? channel.gain.gain.value : 1.0;
}
/**
* Mute a channel by setting gain to 0 with immediate scheduling.
*/
export function muteChannel(identity: string): void {
const channel = channels.get(identity);
if (!channel || !audioContext) return;
channel.gain.gain.setValueAtTime(0, audioContext.currentTime);
}
/**
* Unmute a channel by restoring gain to a value (default 1.0).
*/
export function unmuteChannel(identity: string, value: number = 1.0): void {
const channel = channels.get(identity);
if (!channel || !audioContext) return;
channel.gain.gain.setValueAtTime(Math.max(0, Math.min(1.5, value)), audioContext.currentTime);
}
/**
* Set master gain (0.0–1.5, default 1.0).
*/
export function setMasterGain(value: number): void {
if (!masterGain) return;
masterGain.gain.value = Math.max(0, Math.min(1.5, value));
}
/**
* Get current master gain value.
*/
export function getMasterGain(): number {
return masterGain ? masterGain.gain.value : 1.0;
}
/**
* Mute master output.
*/
export function muteMaster(): void {
if (!masterGain || !audioContext) return;
masterGain.gain.setValueAtTime(0, audioContext.currentTime);
}
/**
* Unmute master output.
*/
export function unmuteMaster(value: number = 1.0): void {
if (!masterGain || !audioContext) return;
masterGain.gain.setValueAtTime(Math.max(0, Math.min(1.5, value)), audioContext.currentTime);
}
// ─── VU meter levels ───────────────────────────────────────────────────────
/**
* Read current levels from a channel's AnalyserNode.
* Returns peak and RMS values normalized to 0.0–1.0.
*/
export function getChannelLevels(identity: string): ChannelLevels | null {
const channel = channels.get(identity);
if (!channel) return null;
return readAnalyserLevels(identity, channel.analyser);
}
/**
* Read master output levels.
*/
export function getMasterLevels(): ChannelLevels | null {
if (!masterAnalyser) return null;
return readAnalyserLevels('master', masterAnalyser);
}
/**
* Read levels from all channels at once (efficient for UI rendering).
*/
export function getAllLevels(): ChannelLevels[] {
const levels: ChannelLevels[] = [];
for (const [identity, channel] of channels) {
const l = readAnalyserLevels(identity, channel.analyser);
if (l) levels.push(l);
}
return levels;
}
function readAnalyserLevels(identity: string, analyser: AnalyserNode): ChannelLevels {
const bufferLength = analyser.fftSize;
// Allocate or resize the shared buffer
if (!analyserBuffer || analyserBuffer.length < bufferLength) {
analyserBuffer = new Float32Array(bufferLength);
}
analyser.getFloatTimeDomainData(analyserBuffer);
let peak = 0;
let sumSquares = 0;
for (let i = 0; i < bufferLength; i++) {
const sample = analyserBuffer[i];
const abs = Math.abs(sample);
if (abs > peak) peak = abs;
sumSquares += sample * sample;
}
const rms = Math.sqrt(sumSquares / bufferLength);
return {
identity,
peak: Math.min(1.0, peak),
rms: Math.min(1.0, rms),
};
}
// ─── Cleanup ───────────────────────────────────────────────────────────────
/**
* Remove all channels and close the AudioContext.
*/
export function destroyMixer(): void {
for (const [identity] of channels) {
removeChannel(identity);
}
if (masterAnalyser) {
masterAnalyser.disconnect();
masterAnalyser = null;
}
if (masterGain) {
masterGain.disconnect();
masterGain = null;
}
if (audioContext && audioContext.state !== 'closed') {
audioContext.close();
audioContext = null;
}
analyserBuffer = null;
}
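The levels returned by `getChannelLevels()` and `getMasterLevels()` are linear amplitudes in 0.0–1.0. A UI that wants dB markings on its VU meters needs a conversion; a hypothetical helper (not part of mixer.ts, the name and floor value are assumptions) might look like:

```typescript
// Hypothetical helper: convert a linear 0.0–1.0 level to dBFS for
// meter labels. Levels at or below zero are clamped to a display floor.
function linearToDb(level: number, floorDb: number = -60): number {
  if (level <= 0) return floorDb;
  return Math.max(floorDb, 20 * Math.log10(level));
}

console.log(linearToDb(1.0)); // → 0 (full scale)
console.log(linearToDb(0.5)); // ≈ -6.02 (half amplitude)
```

Keeping this in the UI layer, rather than in mixer.ts, keeps the mixer's API in plain linear values that map directly onto `GainNode.gain`.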


@@ -178,8 +178,7 @@ Independent phases can still be picked up.
Ref: `docs/features/lydmixer.md`
- [x] 16.1 LiveKit client in the frontend: install `livekit-client`, connect to a room, show the participant list. Disable LiveKit's auto-attach of `<audio>` elements; audio is routed through the Web Audio API instead.
- [~] 16.2 Web Audio mixer graph: create `AudioContext`, a `MediaStreamSourceNode` per remote track → per-channel `GainNode` → master `GainNode` → `destination`. An `AnalyserNode` per channel for VU meters.
> Started: 2026-03-18T04:50
- [x] 16.2 Web Audio mixer graph: create `AudioContext`, a `MediaStreamSourceNode` per remote track → per-channel `GainNode` → master `GainNode` → `destination`. An `AnalyserNode` per channel for VU meters.
- [ ] 16.3 Mixer UI (MixerTrait component): a channel strip per participant with a volume slider (0–150%), an emergency mute button (large, red), a VU meter (canvas/CSS), and a name label. Master fader and master mute. Responsive design (mobile: compact fader mode).
- [ ] 16.4 Shared mixer control via SpacetimeDB: a `MixerChannel` table + reducers (`set_gain`, `set_mute`, `toggle_effect`). The frontend subscribes and updates the Web Audio graph when other participants make changes. Visual feedback (sliders move in real time). Access control: owner/admin can put a participant in viewer mode.
- [ ] 16.5 Sound pads: pad grid UI (4×2), preload audio files from CAS into `AudioBuffer`s. Playback on press (`AudioBufferSourceNode`). Pad config in `metadata.mixer.pads` (label, color, cas_hash). Synchronized playback via LiveKit Data Message.
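The pad config in `metadata.mixer.pads` (16.5) could be typed roughly like this; a sketch only, since the task lists just three fields (label, color, cas_hash) and no schema is implemented yet:

```typescript
// Sketch of a pad entry for metadata.mixer.pads (task 16.5).
// Only label, color, and cas_hash come from the task description;
// everything else here is an assumption.
interface SoundPad {
  label: string;    // text shown on the pad button
  color: string;    // pad color, e.g. a CSS color string
  cas_hash: string; // CAS hash of the preloaded audio file
}

// A 4×2 grid holds up to 8 pads; values below are placeholders.
const pads: SoundPad[] = [
  { label: 'Applause', color: '#e63946', cas_hash: 'sha256:...' },
];
```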