
WebAudioAPI

Types

analyserNode

A node able to provide real-time frequency and time-domain analysis information. It is an AudioNode that passes the audio stream unchanged from the input to the output, but allows you to take the generated data, process it, and create audio visualizations. See AnalyserNode on MDN

type analyserNode = {
context: baseAudioContext,
numberOfInputs: int,
numberOfOutputs: int,
mutable channelCount: int,
mutable channelCountMode: channelCountMode,
mutable channelInterpretation: channelInterpretation,
mutable fftSize: int,
frequencyBinCount: int,
mutable minDecibels: float,
mutable maxDecibels: float,
mutable smoothingTimeConstant: float,
}

Record fields

numberOfInputs : int
numberOfOutputs : int
channelCount : int
channelInterpretation : channelInterpretation
fftSize : int
frequencyBinCount : int
minDecibels : float
maxDecibels : float
smoothingTimeConstant : float

Module

There are methods and helpers defined in AnalyserNode.

analyserOptions

type analyserOptions = {
mutable channelCount?: int,
mutable channelCountMode?: channelCountMode,
mutable channelInterpretation?: channelInterpretation,
mutable fftSize?: int,
mutable maxDecibels?: float,
mutable minDecibels?: float,
mutable smoothingTimeConstant?: float,
}

Record fields

channelCount : option< int >
channelCountMode : option< channelCountMode >
channelInterpretation : option< channelInterpretation >
fftSize : option< int >
maxDecibels : option< float >
minDecibels : option< float >
smoothingTimeConstant : option< float >
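As a usage sketch of how these types fit together (the helper names `AudioContext.make`, `createAnalyser`, and `getByteFrequencyData` are assumptions that mirror the DOM API; check the AnalyserNode module for the actual bindings):

```rescript
// Hypothetical sketch - helper names are assumed to mirror the DOM API.
let ctx = AudioContext.make()
let analyser = ctx->AudioContext.createAnalyser

// fftSize is a mutable field, so it can be assigned directly.
analyser.fftSize = 2048

// frequencyBinCount is read-only and always equals fftSize / 2.
let bins = Uint8Array.fromLength(analyser.frequencyBinCount)
analyser->AnalyserNode.getByteFrequencyData(bins)
```

After the call, `bins` holds the current frequency snapshot and can be drawn to a canvas on each animation frame.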

audioBuffer

A short audio asset residing in memory, created from an audio file using the AudioContext.decodeAudioData() method, or from raw data using AudioContext.createBuffer(). Once put into an AudioBuffer, the audio can then be played by being passed into an AudioBufferSourceNode. See AudioBuffer on MDN

type audioBuffer = {
sampleRate: float,
length: int,
duration: float,
numberOfChannels: int,
}

Record fields

sampleRate : float
length : int
duration : float
numberOfChannels : int

Module

There are methods and helpers defined in AudioBuffer.

audioBufferOptions

type audioBufferOptions = {
mutable numberOfChannels?: int,
mutable length: int,
mutable sampleRate: float,
}

Record fields

numberOfChannels : option< int >
length : int
sampleRate : float
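A sketch of building an empty one-second buffer (assuming a `createBuffer` helper with labeled arguments matching `audioBufferOptions`; the exact signature may differ):

```rescript
// Hypothetical sketch - createBuffer's signature is assumed.
let ctx = AudioContext.make()
let oneSecond = ctx->AudioContext.createBuffer(
  ~numberOfChannels=2,
  ~length=Float.toInt(ctx.sampleRate), // frames for one second
  ~sampleRate=ctx.sampleRate,
)
// duration is derived from length / sampleRate.
Console.log(oneSecond.duration)
```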

audioBufferSourceNode

An AudioScheduledSourceNode which represents an audio source consisting of in-memory audio data stored in an AudioBuffer. It's especially useful for playing back audio with stringent timing-accuracy requirements, such as sounds that must match a specific rhythm and can be kept in memory rather than being played from disk or the network. See AudioBufferSourceNode on MDN

type audioBufferSourceNode = {
context: baseAudioContext,
numberOfInputs: int,
numberOfOutputs: int,
mutable channelCount: int,
mutable channelCountMode: channelCountMode,
mutable channelInterpretation: channelInterpretation,
mutable buffer: Null.t<audioBuffer>,
playbackRate: audioParam,
detune: audioParam,
mutable loop: bool,
mutable loopStart: float,
mutable loopEnd: float,
}

Record fields

numberOfInputs : int
numberOfOutputs : int
channelCount : int
channelInterpretation : channelInterpretation
buffer : Null.t< audioBuffer >
loop : bool
loopStart : float
loopEnd : float

Module

There are methods and helpers defined in AudioBufferSourceNode.

audioBufferSourceOptions

type audioBufferSourceOptions = {
mutable buffer?: Null.t<audioBuffer>,
mutable detune?: float,
mutable loop?: bool,
mutable loopEnd?: float,
mutable loopStart?: float,
mutable playbackRate?: float,
}

Record fields

buffer : option< Null.t< audioBuffer > >
detune : option< float >
loop : option< bool >
loopEnd : option< float >
loopStart : option< float >
playbackRate : option< float >
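Putting the two types together, a looping-playback sketch could look like this (the `createBufferSource`, `connect`, and `start` helpers are assumptions based on the DOM API):

```rescript
// Hypothetical sketch - helper names and signatures are assumed.
let playLooped = (ctx: audioContext, buffer: audioBuffer) => {
  let source = ctx->AudioContext.createBufferSource
  source.buffer = Null.make(buffer) // buffer is a Null.t<audioBuffer>
  source.loop = true
  source.loopStart = 0.0
  source.loopEnd = buffer.duration
  source->AudioBufferSourceNode.connect(~destination=ctx.destination)->ignore
  source->AudioBufferSourceNode.start
}
```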

audioContext

An audio-processing graph built from audio modules linked together, each represented by an AudioNode. See AudioContext on MDN

type audioContext = {
destination: audioDestinationNode,
sampleRate: float,
currentTime: float,
listener: audioListener,
state: audioContextState,
audioWorklet: audioWorklet,
baseLatency: float,
outputLatency: float,
}

Record fields

sampleRate : float
currentTime : float
baseLatency : float
outputLatency : float

Module

There are methods and helpers defined in AudioContext.

audioContextOptions

type audioContextOptions = {
mutable latencyHint?: unknown,
mutable sampleRate?: float,
}

Record fields

latencyHint : option< unknown >
sampleRate : option< float >

audioContextState

type audioContextState =
| @as("closed") Closed
| @as("running") Running
| @as("suspended") Suspended
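Because the constructors carry `@as` attributes, they compile to the spec's string values and can be pattern matched directly:

```rescript
// Exhaustive match over the three context states.
let describe = (state: audioContextState) =>
  switch state {
  | Closed => "the context has been closed"
  | Running => "audio is being processed"
  | Suspended => "processing is suspended"
  }
```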

audioDestinationNode

AudioDestinationNode has no output (as it is the output, no more AudioNode can be linked after it in the audio graph) and one input. The number of channels in the input must be between 0 and the maxChannelCount value or an exception is raised. See AudioDestinationNode on MDN

type audioDestinationNode = {
context: baseAudioContext,
numberOfInputs: int,
numberOfOutputs: int,
mutable channelCount: int,
mutable channelCountMode: channelCountMode,
mutable channelInterpretation: channelInterpretation,
maxChannelCount: int,
}

Record fields

numberOfInputs : int
numberOfOutputs : int
channelCount : int
channelInterpretation : channelInterpretation
maxChannelCount : int

audioListener

The position and orientation of the unique person listening to the audio scene, and is used in audio spatialization. All PannerNodes spatialize in relation to the AudioListener stored in the BaseAudioContext.listener attribute. See AudioListener on MDN

type audioListener = {
positionX: audioParam,
positionY: audioParam,
positionZ: audioParam,
forwardX: audioParam,
forwardY: audioParam,
forwardZ: audioParam,
upX: audioParam,
upY: audioParam,
upZ: audioParam,
}

audioNode

A generic interface for representing an audio processing module; sources, filters, and destinations are all examples. See AudioNode on MDN

type audioNode = {
context: baseAudioContext,
numberOfInputs: int,
numberOfOutputs: int,
mutable channelCount: int,
mutable channelCountMode: channelCountMode,
mutable channelInterpretation: channelInterpretation,
}

Record fields

numberOfInputs : int
numberOfOutputs : int
channelCount : int
channelInterpretation : channelInterpretation

Module

There are methods and helpers defined in AudioNode.

audioNodeOptions

type audioNodeOptions = {
mutable channelCount?: int,
mutable channelCountMode?: channelCountMode,
mutable channelInterpretation?: channelInterpretation,
}

Record fields

channelCount : option< int >
channelCountMode : option< channelCountMode >
channelInterpretation : option< channelInterpretation >

audioParam

The Web Audio API's AudioParam interface represents an audio-related parameter, usually a parameter of an AudioNode (such as GainNode.gain). See AudioParam on MDN

type audioParam = {
mutable value: float,
defaultValue: float,
minValue: float,
maxValue: float,
}

Record fields

value : float
defaultValue : float
minValue : float
maxValue : float

Module

There are methods and helpers defined in AudioParam.
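Note that the record exposes only the value fields; automation lives in the module. A fade-out sketch, assuming `setValueAtTime` and `linearRampToValueAtTime` helpers mirroring the DOM API:

```rescript
// Hypothetical sketch - the scheduling helper names are assumed.
let fadeOut = (ctx: audioContext, gain: audioParam, seconds: float) => {
  let now = ctx.currentTime
  // Anchor the current value, then ramp linearly down to silence.
  gain->AudioParam.setValueAtTime(~value=gain.value, ~startTime=now)->ignore
  gain->AudioParam.linearRampToValueAtTime(~value=0.0, ~endTime=now +. seconds)->ignore
}
```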

audioParamMap

type audioParamMap = {}

audioProcessingEvent

The Web Audio API events that occur when a ScriptProcessorNode input buffer is ready to be processed. See AudioProcessingEvent on MDN

type audioProcessingEvent = {
type_: WebAPI.EventAPI.eventType,
target: Null.t<WebAPI.EventAPI.eventTarget>,
currentTarget: Null.t<WebAPI.EventAPI.eventTarget>,
eventPhase: int,
bubbles: bool,
cancelable: bool,
defaultPrevented: bool,
composed: bool,
isTrusted: bool,
timeStamp: float,
}

Record fields

type_ : WebAPI.EventAPI.eventType

Returns the type of event, e.g. "click", "hashchange", or "submit". Read more on MDN

target : Null.t< WebAPI.EventAPI.eventTarget >

Returns the object to which event is dispatched (its target). Read more on MDN

currentTarget : Null.t< WebAPI.EventAPI.eventTarget >

Returns the object whose event listener's callback is currently being invoked. Read more on MDN

eventPhase : int

Returns the event's phase, which is one of NONE, CAPTURING_PHASE, AT_TARGET, and BUBBLING_PHASE. Read more on MDN

bubbles : bool

Returns true or false depending on how event was initialized. True if event goes through its target's ancestors in reverse tree order, and false otherwise. Read more on MDN

cancelable : bool

Returns true or false depending on how event was initialized. Its return value does not always carry meaning, but true can indicate that part of the operation during which event was dispatched can be canceled by invoking the preventDefault() method. Read more on MDN

defaultPrevented : bool

Returns true if preventDefault() was invoked successfully to indicate cancelation, and false otherwise. Read more on MDN

composed : bool

Returns true or false depending on how event was initialized. True if event invokes listeners past a ShadowRoot node that is the root of its target, and false otherwise. Read more on MDN

isTrusted : bool

Returns true if event was dispatched by the user agent, and false otherwise. Read more on MDN

timeStamp : float

Returns the event's timestamp as the number of milliseconds measured relative to the time origin. Read more on MDN

Module

There are methods and helpers defined in AudioProcessingEvent.

audioProcessingEventInit

type audioProcessingEventInit = {
mutable bubbles?: bool,
mutable cancelable?: bool,
mutable composed?: bool,
mutable playbackTime: float,
mutable inputBuffer: audioBuffer,
mutable outputBuffer: audioBuffer,
}

Record fields

bubbles : option< bool >
cancelable : option< bool >
composed : option< bool >
playbackTime : float
inputBuffer : audioBuffer
outputBuffer : audioBuffer

audioScheduledSourceNode

type audioScheduledSourceNode = {
context: baseAudioContext,
numberOfInputs: int,
numberOfOutputs: int,
mutable channelCount: int,
mutable channelCountMode: channelCountMode,
mutable channelInterpretation: channelInterpretation,
}

Record fields

numberOfInputs : int
numberOfOutputs : int
channelCount : int
channelInterpretation : channelInterpretation

Module

There are methods and helpers defined in AudioScheduledSourceNode.

audioTimestamp

type audioTimestamp = {
mutable contextTime?: float,
mutable performanceTime?: float,
}

Record fields

contextTime : option< float >
performanceTime : option< float >

audioWorklet

type audioWorklet = {}

audioWorkletNode

type audioWorkletNode = {
context: baseAudioContext,
numberOfInputs: int,
numberOfOutputs: int,
mutable channelCount: int,
mutable channelCountMode: channelCountMode,
mutable channelInterpretation: channelInterpretation,
parameters: audioParamMap,
port: WebAPI.ChannelMessagingAPI.messagePort,
}

Record fields

numberOfInputs : int
numberOfOutputs : int
channelCount : int
channelInterpretation : channelInterpretation

Module

There are methods and helpers defined in AudioWorkletNode.

audioWorkletNodeOptions

type audioWorkletNodeOptions = {
mutable channelCount?: int,
mutable channelCountMode?: channelCountMode,
mutable channelInterpretation?: channelInterpretation,
mutable numberOfInputs?: int,
mutable numberOfOutputs?: int,
mutable outputChannelCount?: array<int>,
mutable parameterData?: WebAPI.Prelude.any,
mutable processorOptions?: Dict.t<string>,
}

Record fields

channelCount : option< int >
channelCountMode : option< channelCountMode >
channelInterpretation : option< channelInterpretation >
numberOfInputs : option< int >
numberOfOutputs : option< int >
outputChannelCount : option< array< int > >
parameterData : option< WebAPI.Prelude.any >
processorOptions : option< Dict.t< string > >

baseAudioContext

type baseAudioContext = {
destination: audioDestinationNode,
sampleRate: float,
currentTime: float,
listener: audioListener,
state: audioContextState,
audioWorklet: audioWorklet,
}

Record fields

sampleRate : float
currentTime : float

Module

There are methods and helpers defined in BaseAudioContext.

biquadFilterNode

A simple low-order filter, and is created using the AudioContext.createBiquadFilter() method. It is an AudioNode that can represent different kinds of filters, tone control devices, and graphic equalizers. See BiquadFilterNode on MDN

type biquadFilterNode = {
context: baseAudioContext,
numberOfInputs: int,
numberOfOutputs: int,
mutable channelCount: int,
mutable channelCountMode: channelCountMode,
mutable channelInterpretation: channelInterpretation,
mutable type_: biquadFilterType,
frequency: audioParam,
detune: audioParam,
q: audioParam,
gain: audioParam,
}

Record fields

numberOfInputs : int
numberOfOutputs : int
channelCount : int
channelInterpretation : channelInterpretation

Module

There are methods and helpers defined in BiquadFilterNode.

biquadFilterOptions

type biquadFilterOptions = {
mutable channelCount?: int,
mutable channelCountMode?: channelCountMode,
mutable channelInterpretation?: channelInterpretation,
mutable type_?: biquadFilterType,
mutable q?: float,
mutable detune?: float,
mutable frequency?: float,
mutable gain?: float,
}

Record fields

channelCount : option< int >
channelCountMode : option< channelCountMode >
channelInterpretation : option< channelInterpretation >
type_ : option< biquadFilterType >
q : option< float >
detune : option< float >
frequency : option< float >
gain : option< float >
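Since every field is optional, an options record only needs the fields being overridden; for example, a basic lowpass configuration:

```rescript
// Only the overridden fields need to be supplied.
let lowpass: biquadFilterOptions = {
  type_: Lowpass,
  frequency: 800.0, // cutoff in Hz
  q: 0.7,
}
```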

biquadFilterType

type biquadFilterType =
| @as("allpass") Allpass
| @as("bandpass") Bandpass
| @as("highpass") Highpass
| @as("highshelf") Highshelf
| @as("lowpass") Lowpass
| @as("lowshelf") Lowshelf
| @as("notch") Notch
| @as("peaking") Peaking

channelCountMode

type channelCountMode =
| @as("clamped-max") ClampedMax
| @as("explicit") Explicit
| @as("max") Max

channelInterpretation

type channelInterpretation =
| @as("discrete") Discrete
| @as("speakers") Speakers

channelMergerNode

The ChannelMergerNode interface, often used in conjunction with its opposite, ChannelSplitterNode, reunites different mono inputs into a single output. Each input is used to fill a channel of the output. This is useful for accessing each channel separately, e.g. for performing channel mixing where gain must be separately controlled on each channel. See ChannelMergerNode on MDN

type channelMergerNode = {
context: baseAudioContext,
numberOfInputs: int,
numberOfOutputs: int,
mutable channelCount: int,
mutable channelCountMode: channelCountMode,
mutable channelInterpretation: channelInterpretation,
}

Record fields

numberOfInputs : int
numberOfOutputs : int
channelCount : int
channelInterpretation : channelInterpretation

Module

There are methods and helpers defined in ChannelMergerNode.

channelMergerOptions

type channelMergerOptions = {
mutable channelCount?: int,
mutable channelCountMode?: channelCountMode,
mutable channelInterpretation?: channelInterpretation,
mutable numberOfInputs?: int,
}

Record fields

channelCount : option< int >
channelCountMode : option< channelCountMode >
channelInterpretation : option< channelInterpretation >
numberOfInputs : option< int >

channelSplitterNode

The ChannelSplitterNode interface, often used in conjunction with its opposite, ChannelMergerNode, separates the different channels of an audio source into a set of mono outputs. This is useful for accessing each channel separately, e.g. for performing channel mixing where gain must be separately controlled on each channel. See ChannelSplitterNode on MDN

type channelSplitterNode = {
context: baseAudioContext,
numberOfInputs: int,
numberOfOutputs: int,
mutable channelCount: int,
mutable channelCountMode: channelCountMode,
mutable channelInterpretation: channelInterpretation,
}

Record fields

numberOfInputs : int
numberOfOutputs : int
channelCount : int
channelInterpretation : channelInterpretation

Module

There are methods and helpers defined in ChannelSplitterNode.

channelSplitterOptions

type channelSplitterOptions = {
mutable channelCount?: int,
mutable channelCountMode?: channelCountMode,
mutable channelInterpretation?: channelInterpretation,
mutable numberOfOutputs?: int,
}

Record fields

channelCount : option< int >
channelCountMode : option< channelCountMode >
channelInterpretation : option< channelInterpretation >
numberOfOutputs : option< int >

constantSourceNode

type constantSourceNode = {
context: baseAudioContext,
numberOfInputs: int,
numberOfOutputs: int,
mutable channelCount: int,
mutable channelCountMode: channelCountMode,
mutable channelInterpretation: channelInterpretation,
offset: audioParam,
}

Record fields

numberOfInputs : int
numberOfOutputs : int
channelCount : int
channelInterpretation : channelInterpretation

Module

There are methods and helpers defined in ConstantSourceNode.

constantSourceOptions

type constantSourceOptions = {mutable offset?: float}

Record fields

offset : option< float >

convolverNode

An AudioNode that performs a Linear Convolution on a given AudioBuffer, often used to achieve a reverb effect. A ConvolverNode always has exactly one input and one output. See ConvolverNode on MDN

type convolverNode = {
context: baseAudioContext,
numberOfInputs: int,
numberOfOutputs: int,
mutable channelCount: int,
mutable channelCountMode: channelCountMode,
mutable channelInterpretation: channelInterpretation,
mutable buffer: Null.t<audioBuffer>,
mutable normalize: bool,
}

Record fields

numberOfInputs : int
numberOfOutputs : int
channelCount : int
channelInterpretation : channelInterpretation
buffer : Null.t< audioBuffer >
normalize : bool

Module

There are methods and helpers defined in ConvolverNode.

convolverOptions

type convolverOptions = {
mutable channelCount?: int,
mutable channelCountMode?: channelCountMode,
mutable channelInterpretation?: channelInterpretation,
mutable buffer?: Null.t<audioBuffer>,
mutable disableNormalization?: bool,
}

Record fields

channelCount : option< int >
channelCountMode : option< channelCountMode >
channelInterpretation : option< channelInterpretation >
buffer : option< Null.t< audioBuffer > >
disableNormalization : option< bool >

decodeErrorCallback

type decodeErrorCallback = WebAPI.Prelude.domException => unit

decodeSuccessCallback

type decodeSuccessCallback = audioBuffer => unit
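These two callback types are what the decodeAudioData binding consumes; they can be implemented as plain functions:

```rescript
// Success callback: receives the decoded audioBuffer.
let onDecoded: decodeSuccessCallback = buffer =>
  Console.log2("decoded seconds:", buffer.duration)

// Error callback: receives a domException describing the failure.
let onError: decodeErrorCallback = err =>
  Console.error(err)
```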

delayNode

A delay-line; an AudioNode audio-processing module that causes a delay between the arrival of input data and its propagation to the output. See DelayNode on MDN

type delayNode = {
context: baseAudioContext,
numberOfInputs: int,
numberOfOutputs: int,
mutable channelCount: int,
mutable channelCountMode: channelCountMode,
mutable channelInterpretation: channelInterpretation,
delayTime: audioParam,
}

Record fields

numberOfInputs : int
numberOfOutputs : int
channelCount : int
channelInterpretation : channelInterpretation

Module

There are methods and helpers defined in DelayNode.

delayOptions

type delayOptions = {
mutable channelCount?: int,
mutable channelCountMode?: channelCountMode,
mutable channelInterpretation?: channelInterpretation,
mutable maxDelayTime?: float,
mutable delayTime?: float,
}

Record fields

channelCount : option< int >
channelCountMode : option< channelCountMode >
channelInterpretation : option< channelInterpretation >
maxDelayTime : option< float >
delayTime : option< float >

distanceModelType

type distanceModelType =
| @as("exponential") Exponential
| @as("inverse") Inverse
| @as("linear") Linear

doubleRange

type doubleRange = {
mutable max?: float,
mutable min?: float,
}

Record fields

max : option< float >
min : option< float >

dynamicsCompressorNode

Inherits properties from its parent, AudioNode. See DynamicsCompressorNode on MDN

type dynamicsCompressorNode = {
context: baseAudioContext,
numberOfInputs: int,
numberOfOutputs: int,
mutable channelCount: int,
mutable channelCountMode: channelCountMode,
mutable channelInterpretation: channelInterpretation,
threshold: audioParam,
knee: audioParam,
ratio: audioParam,
reduction: float,
attack: audioParam,
release: audioParam,
}

Record fields

numberOfInputs : int
numberOfOutputs : int
channelCount : int
channelInterpretation : channelInterpretation
reduction : float

Module

There are methods and helpers defined in DynamicsCompressorNode.

dynamicsCompressorOptions

type dynamicsCompressorOptions = {
mutable channelCount?: int,
mutable channelCountMode?: channelCountMode,
mutable channelInterpretation?: channelInterpretation,
mutable attack?: float,
mutable knee?: float,
mutable ratio?: float,
mutable release?: float,
mutable threshold?: float,
}

Record fields

channelCount : option< int >
channelCountMode : option< channelCountMode >
channelInterpretation : option< channelInterpretation >
attack : option< float >
knee : option< float >
ratio : option< float >
release : option< float >
threshold : option< float >

gainNode

A change in volume. It is an AudioNode audio-processing module that causes a given gain to be applied to the input data before its propagation to the output. A GainNode always has exactly one input and one output, both with the same number of channels. See GainNode on MDN

type gainNode = {
context: baseAudioContext,
numberOfInputs: int,
numberOfOutputs: int,
mutable channelCount: int,
mutable channelCountMode: channelCountMode,
mutable channelInterpretation: channelInterpretation,
gain: audioParam,
}

Record fields

numberOfInputs : int
numberOfOutputs : int
channelCount : int
channelInterpretation : channelInterpretation

Module

There are methods and helpers defined in GainNode.

gainOptions

type gainOptions = {
mutable channelCount?: int,
mutable channelCountMode?: channelCountMode,
mutable channelInterpretation?: channelInterpretation,
mutable gain?: float,
}

Record fields

channelCount : option< int >
channelCountMode : option< channelCountMode >
channelInterpretation : option< channelInterpretation >
gain : option< float >
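A volume-control sketch (the `createGain` and `connect` helpers are assumptions mirroring the DOM API):

```rescript
// Hypothetical sketch - helper names are assumed.
let attachVolume = (ctx: audioContext, volume: float) => {
  let gainNode = ctx->AudioContext.createGain
  // gain is an audioParam; its value field is mutable.
  gainNode.gain.value = volume
  gainNode->GainNode.connect(~destination=ctx.destination)->ignore
  gainNode
}
```

Sources can then be connected into the returned node instead of directly into the destination.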

iirFilterNode

The IIRFilterNode interface of the Web Audio API is an AudioNode processor which implements a general infinite impulse response (IIR) filter; this type of filter can be used to implement tone control devices and graphic equalizers. It lets the parameters of the filter response be specified, so that it can be tuned as needed. See IIRFilterNode on MDN

type iirFilterNode = {
context: baseAudioContext,
numberOfInputs: int,
numberOfOutputs: int,
mutable channelCount: int,
mutable channelCountMode: channelCountMode,
mutable channelInterpretation: channelInterpretation,
}

Record fields

numberOfInputs : int
numberOfOutputs : int
channelCount : int
channelInterpretation : channelInterpretation

iirFilterOptions

type iirFilterOptions = {
mutable channelCount?: int,
mutable channelCountMode?: channelCountMode,
mutable channelInterpretation?: channelInterpretation,
mutable feedforward: array<float>,
mutable feedback: array<float>,
}

Record fields

channelCount : option< int >
channelCountMode : option< channelCountMode >
channelInterpretation : option< channelInterpretation >
feedforward : array< float >
feedback : array< float >

mediaElementAudioSourceNode

A MediaElementSourceNode has no inputs and exactly one output, and is created using the AudioContext.createMediaElementSource method. The number of channels in the output equals the number of channels of the audio referenced by the HTMLMediaElement used in the creation of the node, or is 1 if the HTMLMediaElement has no audio. See MediaElementAudioSourceNode on MDN

type mediaElementAudioSourceNode = {
context: baseAudioContext,
numberOfInputs: int,
numberOfOutputs: int,
mutable channelCount: int,
mutable channelCountMode: channelCountMode,
mutable channelInterpretation: channelInterpretation,
mediaElement: WebAPI.DOMAPI.htmlMediaElement,
}

Record fields

numberOfInputs : int
numberOfOutputs : int
channelCount : int
channelInterpretation : channelInterpretation

Module

There are methods and helpers defined in MediaElementAudioSourceNode.

mediaElementAudioSourceOptions

type mediaElementAudioSourceOptions = {
mutable mediaElement: WebAPI.DOMAPI.htmlMediaElement,
}

mediaStreamAudioDestinationNode

type mediaStreamAudioDestinationNode = {
context: baseAudioContext,
numberOfInputs: int,
numberOfOutputs: int,
mutable channelCount: int,
mutable channelCountMode: channelCountMode,
mutable channelInterpretation: channelInterpretation,
stream: WebAPI.MediaCaptureAndStreamsAPI.mediaStream,
}

Record fields

numberOfInputs : int
numberOfOutputs : int
channelCount : int
channelInterpretation : channelInterpretation

Module

There are methods and helpers defined in MediaStreamAudioDestinationNode.

mediaStreamAudioSourceNode

A type of AudioNode which operates as an audio source whose media is received from a MediaStream obtained using the WebRTC or Media Capture and Streams APIs. See MediaStreamAudioSourceNode on MDN

type mediaStreamAudioSourceNode = {
context: baseAudioContext,
numberOfInputs: int,
numberOfOutputs: int,
mutable channelCount: int,
mutable channelCountMode: channelCountMode,
mutable channelInterpretation: channelInterpretation,
mediaStream: WebAPI.MediaCaptureAndStreamsAPI.mediaStream,
}

Record fields

numberOfInputs : int
numberOfOutputs : int
channelCount : int
channelInterpretation : channelInterpretation

Module

There are methods and helpers defined in MediaStreamAudioSourceNode.

mediaStreamAudioSourceOptions

type mediaStreamAudioSourceOptions = {
mutable mediaStream: WebAPI.MediaCaptureAndStreamsAPI.mediaStream,
}

mediaTrackCapabilities

type mediaTrackCapabilities = {
mutable width?: uLongRange,
mutable height?: uLongRange,
mutable aspectRatio?: doubleRange,
mutable frameRate?: doubleRange,
mutable facingMode?: array<string>,
mutable sampleRate?: uLongRange,
mutable sampleSize?: uLongRange,
mutable echoCancellation?: array<bool>,
mutable autoGainControl?: array<bool>,
mutable noiseSuppression?: array<bool>,
mutable channelCount?: uLongRange,
mutable deviceId?: string,
mutable groupId?: string,
mutable backgroundBlur?: array<bool>,
mutable displaySurface?: string,
}

Record fields

width : option< uLongRange >
height : option< uLongRange >
aspectRatio : option< doubleRange >
frameRate : option< doubleRange >
facingMode : option< array< string > >
sampleRate : option< uLongRange >
sampleSize : option< uLongRange >
echoCancellation : option< array< bool > >
autoGainControl : option< array< bool > >
noiseSuppression : option< array< bool > >
channelCount : option< uLongRange >
deviceId : option< string >
groupId : option< string >
backgroundBlur : option< array< bool > >
displaySurface : option< string >

mediaTrackConstraints

type mediaTrackConstraints = {
mutable width?: int,
mutable height?: int,
mutable aspectRatio?: float,
mutable frameRate?: float,
mutable facingMode?: string,
mutable sampleRate?: int,
mutable sampleSize?: int,
mutable echoCancellation?: bool,
mutable autoGainControl?: bool,
mutable noiseSuppression?: bool,
mutable channelCount?: int,
mutable deviceId?: string,
mutable groupId?: string,
mutable backgroundBlur?: bool,
mutable displaySurface?: string,
mutable advanced?: array<mediaTrackConstraintSet>,
}

Record fields

width : option< int >
height : option< int >
aspectRatio : option< float >
frameRate : option< float >
facingMode : option< string >
sampleRate : option< int >
sampleSize : option< int >
echoCancellation : option< bool >
autoGainControl : option< bool >
noiseSuppression : option< bool >
channelCount : option< int >
deviceId : option< string >
groupId : option< string >
backgroundBlur : option< bool >
displaySurface : option< string >
advanced : option< array< mediaTrackConstraintSet > >

mediaTrackConstraintSet

type mediaTrackConstraintSet = {
mutable width?: int,
mutable height?: int,
mutable aspectRatio?: float,
mutable frameRate?: float,
mutable facingMode?: string,
mutable sampleRate?: int,
mutable sampleSize?: int,
mutable echoCancellation?: bool,
mutable autoGainControl?: bool,
mutable noiseSuppression?: bool,
mutable channelCount?: int,
mutable deviceId?: string,
mutable groupId?: string,
mutable backgroundBlur?: bool,
mutable displaySurface?: string,
}

Record fields

width : option< int >
height : option< int >
aspectRatio : option< float >
frameRate : option< float >
facingMode : option< string >
sampleRate : option< int >
sampleSize : option< int >
echoCancellation : option< bool >
autoGainControl : option< bool >
noiseSuppression : option< bool >
channelCount : option< int >
deviceId : option< string >
groupId : option< string >
backgroundBlur : option< bool >
displaySurface : option< string >

mediaTrackSettings

type mediaTrackSettings = {
mutable width?: int,
mutable height?: int,
mutable aspectRatio?: float,
mutable frameRate?: float,
mutable facingMode?: string,
mutable sampleRate?: int,
mutable sampleSize?: int,
mutable echoCancellation?: bool,
mutable autoGainControl?: bool,
mutable noiseSuppression?: bool,
mutable channelCount?: int,
mutable deviceId?: string,
mutable groupId?: string,
mutable backgroundBlur?: bool,
mutable displaySurface?: string,
}

Record fields

width : option< int >
height : option< int >
aspectRatio : option< float >
frameRate : option< float >
facingMode : option< string >
sampleRate : option< int >
sampleSize : option< int >
echoCancellation : option< bool >
autoGainControl : option< bool >
noiseSuppression : option< bool >
channelCount : option< int >
deviceId : option< string >
groupId : option< string >
backgroundBlur : option< bool >
displaySurface : option< string >

offlineAudioCompletionEvent

The Web Audio API OfflineAudioCompletionEvent interface represents events that occur when the processing of an OfflineAudioContext is terminated. The complete event implements this interface. See OfflineAudioCompletionEvent on MDN

type offlineAudioCompletionEvent = {
type_: WebAPI.EventAPI.eventType,
target: Null.t<WebAPI.EventAPI.eventTarget>,
currentTarget: Null.t<WebAPI.EventAPI.eventTarget>,
eventPhase: int,
bubbles: bool,
cancelable: bool,
defaultPrevented: bool,
composed: bool,
isTrusted: bool,
timeStamp: float,
renderedBuffer: audioBuffer,
}

Record fields

type_ : WebAPI.EventAPI.eventType

Returns the type of event, e.g. "click", "hashchange", or "submit". Read more on MDN

target : Null.t< WebAPI.EventAPI.eventTarget >

Returns the object to which event is dispatched (its target). Read more on MDN

currentTarget : Null.t< WebAPI.EventAPI.eventTarget >

Returns the object whose event listener's callback is currently being invoked. Read more on MDN

eventPhase : int

Returns the event's phase, which is one of NONE, CAPTURING_PHASE, AT_TARGET, and BUBBLING_PHASE. Read more on MDN

bubbles : bool

Returns true or false depending on how event was initialized. True if event goes through its target's ancestors in reverse tree order, and false otherwise. Read more on MDN

cancelable : bool

Returns true or false depending on how event was initialized. Its return value does not always carry meaning, but true can indicate that part of the operation during which event was dispatched can be canceled by invoking the preventDefault() method. Read more on MDN

defaultPrevented : bool

Returns true if preventDefault() was invoked successfully to indicate cancelation, and false otherwise. Read more on MDN

composed : bool

Returns true or false depending on how event was initialized. True if event invokes listeners past a ShadowRoot node that is the root of its target, and false otherwise. Read more on MDN

isTrusted : bool

Returns true if event was dispatched by the user agent, and false otherwise. Read more on MDN

timeStamp : float

Returns the event's timestamp as the number of milliseconds measured relative to the time origin. Read more on MDN

renderedBuffer : audioBuffer

Module

There are methods and helpers defined in OfflineAudioCompletionEvent.

offlineAudioCompletionEventInit

type offlineAudioCompletionEventInit = {
mutable bubbles?: bool,
mutable cancelable?: bool,
mutable composed?: bool,
mutable renderedBuffer: audioBuffer,
}

Record fields

bubbles : option< bool >
cancelable : option< bool >
composed : option< bool >
renderedBuffer : audioBuffer

offlineAudioContext

An AudioContext interface representing an audio-processing graph built from AudioNodes linked together. In contrast with a standard AudioContext, an OfflineAudioContext doesn't render the audio to the device hardware; instead, it generates it, as fast as it can, and outputs the result to an AudioBuffer. See OfflineAudioContext on MDN

type offlineAudioContext = {
destination: audioDestinationNode,
sampleRate: float,
currentTime: float,
listener: audioListener,
state: audioContextState,
audioWorklet: audioWorklet,
length: int,
}

Record fields

sampleRate : float
currentTime : float
length : int

Module

There are methods and helpers defined in OfflineAudioContext.

offlineAudioContextOptions

type offlineAudioContextOptions = {
mutable numberOfChannels?: int,
mutable length: int,
mutable sampleRate: float,
}

Record fields

numberOfChannels : option< int >
length : int
sampleRate : float
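A render sketch, assuming `OfflineAudioContext.make` accepts an `offlineAudioContextOptions` record and a `startRendering` helper returns a promise of the rendered buffer (both assumptions; check the OfflineAudioContext module):

```rescript
// Hypothetical sketch - constructor and startRendering are assumed.
let render = async () => {
  let ctx = OfflineAudioContext.make({
    numberOfChannels: 2,
    length: 44100, // one second of frames
    sampleRate: 44100.0,
  })
  // ...build a node graph ending at ctx.destination here...
  let rendered = await ctx->OfflineAudioContext.startRendering
  Console.log(rendered.duration)
}
```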

oscillatorNode

The OscillatorNode interface represents a periodic waveform, such as a sine wave. It is an AudioScheduledSourceNode audio-processing module that causes a specified frequency of a given wave to be created—in effect, a constant tone. See OscillatorNode on MDN

type oscillatorNode = {
context: baseAudioContext,
numberOfInputs: int,
numberOfOutputs: int,
mutable channelCount: int,
mutable channelCountMode: channelCountMode,
mutable channelInterpretation: channelInterpretation,
mutable type_: oscillatorType,
frequency: audioParam,
detune: audioParam,
}

Record fields

numberOfInputs
int
numberOfOutputs
int
channelCount
int
channelInterpretation
channelInterpretation

Module

There are methods and helpers defined in OscillatorNode.
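A minimal tone, assuming constructor, `connect`, `start`, and `stop` bindings that mirror the DOM API (the exact names live in the OscillatorNode module):

```rescript
// Hypothetical binding names mirroring the DOM API.
let ctx = AudioContext.make()
let osc = OscillatorNode.make(ctx, ~options={type_: Square, frequency: 440.0})
osc->OscillatorNode.connect(ctx.destination)->ignore
osc->OscillatorNode.start                    // constant 440 Hz square wave
osc->OscillatorNode.stop(ctx.currentTime +. 2.0) // stop two seconds from now
```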

oscillatorOptions

type oscillatorOptions = {
  mutable channelCount?: int,
  mutable channelCountMode?: channelCountMode,
  mutable channelInterpretation?: channelInterpretation,
  mutable type_?: oscillatorType,
  mutable frequency?: float,
  mutable detune?: float,
  mutable periodicWave?: periodicWave,
}

Record fields

channelCount
option< int >
channelCountMode
option< channelCountMode >
channelInterpretation
option< channelInterpretation >
type_
option< oscillatorType >
frequency
option< float >
detune
option< float >
periodicWave
option< periodicWave >

oscillatorType

type oscillatorType =
  | @as("custom") Custom
  | @as("sawtooth") Sawtooth
  | @as("sine") Sine
  | @as("square") Square
  | @as("triangle") Triangle

overSampleType

type overSampleType =
  | @as("2x") V2x
  | @as("4x") V4x
  | @as("none") None

pannerNode

A node that represents the position and behavior of an audio source signal in 3D space. A PannerNode always has exactly one input and one output: the input can be mono or stereo but the output is always stereo (2 channels); you can't have panning effects without at least two audio channels! See PannerNode on MDN

type pannerNode = {
  context: baseAudioContext,
  numberOfInputs: int,
  numberOfOutputs: int,
  mutable channelCount: int,
  mutable channelCountMode: channelCountMode,
  mutable channelInterpretation: channelInterpretation,
  mutable panningModel: panningModelType,
  positionX: audioParam,
  positionY: audioParam,
  positionZ: audioParam,
  orientationX: audioParam,
  orientationY: audioParam,
  orientationZ: audioParam,
  mutable distanceModel: distanceModelType,
  mutable refDistance: float,
  mutable maxDistance: float,
  mutable rolloffFactor: float,
  mutable coneInnerAngle: float,
  mutable coneOuterAngle: float,
  mutable coneOuterGain: float,
}

Record fields

numberOfInputs
int
numberOfOutputs
int
channelCount
int
channelInterpretation
channelInterpretation
refDistance
float
maxDistance
float
rolloffFactor
float
coneInnerAngle
float
coneOuterAngle
float
coneOuterGain
float

Module

There are methods and helpers defined in PannerNode.
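Positioning a source in 3D space, sketched with hypothetical binding names; `Inverse` is an assumed distanceModelType variant for the DOM's "inverse" model:

```rescript
// Hypothetical binding names mirroring the DOM API.
let ctx = AudioContext.make()
let source = OscillatorNode.make(ctx, ~options={frequency: 220.0})

let panner = PannerNode.make(ctx, ~options={
  panningModel: HRTF,     // higher-quality spatialization than Equalpower
  distanceModel: Inverse, // assumed variant for the DOM "inverse" model
  positionX: 2.0,         // two units to the listener's right
  positionY: 0.0,
  positionZ: -1.0,        // slightly in front of the listener
  refDistance: 1.0,
})

// Route mono/stereo input through the panner to the stereo output.
source->OscillatorNode.connect(panner)->ignore
panner->PannerNode.connect(ctx.destination)->ignore
```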

pannerOptions

type pannerOptions = {
  mutable channelCount?: int,
  mutable channelCountMode?: channelCountMode,
  mutable channelInterpretation?: channelInterpretation,
  mutable panningModel?: panningModelType,
  mutable distanceModel?: distanceModelType,
  mutable positionX?: float,
  mutable positionY?: float,
  mutable positionZ?: float,
  mutable orientationX?: float,
  mutable orientationY?: float,
  mutable orientationZ?: float,
  mutable refDistance?: float,
  mutable maxDistance?: float,
  mutable rolloffFactor?: float,
  mutable coneInnerAngle?: float,
  mutable coneOuterAngle?: float,
  mutable coneOuterGain?: float,
}

Record fields

channelCount
option< int >
channelCountMode
option< channelCountMode >
channelInterpretation
option< channelInterpretation >
panningModel
option< panningModelType >
distanceModel
option< distanceModelType >
positionX
option< float >
positionY
option< float >
positionZ
option< float >
orientationX
option< float >
orientationY
option< float >
orientationZ
option< float >
refDistance
option< float >
maxDistance
option< float >
rolloffFactor
option< float >
coneInnerAngle
option< float >
coneOuterAngle
option< float >
coneOuterGain
option< float >

panningModelType

type panningModelType = HRTF | @as("equalpower") Equalpower

periodicWave

PeriodicWave has no inputs or outputs; it is used to define custom oscillators when calling OscillatorNode.setPeriodicWave(). The PeriodicWave itself is created/returned by AudioContext.createPeriodicWave(). See PeriodicWave on MDN

type periodicWave = {}

Module

There are methods and helpers defined in PeriodicWave.
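Defining a custom oscillator tone, assuming `createPeriodicWave` and `setPeriodicWave` bindings that mirror the DOM methods. In the DOM API, `real` holds cosine terms and `imag` sine terms; index 0 is the DC offset and index 1 the fundamental:

```rescript
// Hypothetical binding names mirroring the DOM API.
let ctx = AudioContext.make()

// Fundamental plus a quieter second harmonic (sine terms only).
let wave = BaseAudioContext.createPeriodicWave(
  ctx,
  ~real=[0.0, 0.0, 0.0],
  ~imag=[0.0, 1.0, 0.5],
)

let osc = OscillatorNode.make(ctx, ~options={frequency: 220.0})
osc->OscillatorNode.setPeriodicWave(wave) // type_ becomes Custom
```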

periodicWaveConstraints

type periodicWaveConstraints = {
  mutable disableNormalization?: bool,
}

Record fields

disableNormalization
option< bool >

periodicWaveOptions

type periodicWaveOptions = {
  mutable disableNormalization?: bool,
  mutable real?: array<float>,
  mutable imag?: array<float>,
}

Record fields

disableNormalization
option< bool >
real
option< array< float > >
imag
option< array< float > >

requestCredentials

type requestCredentials =
  | @as("include") Include
  | @as("omit") Omit
  | @as("same-origin") SameOrigin

stereoPannerNode

The pan property takes a unitless value between -1 (full left pan) and 1 (full right pan). This interface was introduced as a simpler way to apply a basic left/right panning effect than a full PannerNode. See StereoPannerNode on MDN

type stereoPannerNode = {
  context: baseAudioContext,
  numberOfInputs: int,
  numberOfOutputs: int,
  mutable channelCount: int,
  mutable channelCountMode: channelCountMode,
  mutable channelInterpretation: channelInterpretation,
  pan: audioParam,
}

Record fields

numberOfInputs
int
numberOfOutputs
int
channelCount
int
channelInterpretation
channelInterpretation

Module

There are methods and helpers defined in StereoPannerNode.
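Panning and automating it, with hypothetical binding names; since `pan` is an audioParam, the usual AudioParam automation methods of the DOM API apply:

```rescript
// Hypothetical binding names mirroring the DOM API.
let ctx = AudioContext.make()
let panner = StereoPannerNode.make(ctx, ~options={pan: -0.5}) // halfway left

// Sweep from the initial pan to full right over three seconds.
panner.pan
->AudioParam.linearRampToValueAtTime(1.0, ctx.currentTime +. 3.0)
->ignore
```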

stereoPannerOptions

type stereoPannerOptions = {
  mutable channelCount?: int,
  mutable channelCountMode?: channelCountMode,
  mutable channelInterpretation?: channelInterpretation,
  mutable pan?: float,
}

Record fields

channelCount
option< int >
channelCountMode
option< channelCountMode >
channelInterpretation
option< channelInterpretation >
pan
option< float >

uLongRange

type uLongRange = {mutable max?: int, mutable min?: int}

Record fields

max
option< int >
min
option< int >

waveShaperNode

A node that applies non-linear distortion to the signal via a shaping curve. A WaveShaperNode always has exactly one input and one output. See WaveShaperNode on MDN

type waveShaperNode = {
  context: baseAudioContext,
  numberOfInputs: int,
  numberOfOutputs: int,
  mutable channelCount: int,
  mutable channelCountMode: channelCountMode,
  mutable channelInterpretation: channelInterpretation,
  mutable curve: Null.t<array<float>>,
  mutable oversample: overSampleType,
}

Record fields

numberOfInputs
int
numberOfOutputs
int
channelCount
int
channelInterpretation
channelInterpretation
curve
Null.t< array< float > >

Module

There are methods and helpers defined in WaveShaperNode.
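A soft-clipping distortion, sketched with hypothetical binding names; the curve maps input samples over [-1, 1] through tanh:

```rescript
// Hypothetical binding names mirroring the DOM API.
let ctx = AudioContext.make()

// 257-point tanh curve over the input range [-1, 1].
let curve = Array.fromInitializer(~length=257, i => {
  let x = Int.toFloat(i) /. 128.0 -. 1.0
  Js.Math.tanh(x *. 3.0)
})

let shaper = WaveShaperNode.make(ctx, ~options={curve, oversample: V4x})
```

`V4x` oversampling reduces the aliasing that the non-linear curve would otherwise introduce, at extra processing cost.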

waveShaperOptions

type waveShaperOptions = {
  mutable channelCount?: int,
  mutable channelCountMode?: channelCountMode,
  mutable channelInterpretation?: channelInterpretation,
  mutable curve?: array<float>,
  mutable oversample?: overSampleType,
}

Record fields

channelCount
option< int >
channelCountMode
option< channelCountMode >
channelInterpretation
option< channelInterpretation >
curve
option< array< float > >
oversample
option< overSampleType >

worklet

type worklet = {}

Module

There are methods and helpers defined in Worklet.

workletOptions

type workletOptions = {
  mutable credentials?: requestCredentials,
}

Record fields

credentials
option< requestCredentials >