---
title: "GPUCompilationMessage: lineNum property"
short-title: lineNum
slug: Web/API/GPUCompilationMessage/lineNum
page-type: web-api-instance-property
status:
- experimental
browser-compat: api.GPUCompilationMessage.lineNum
---
{{APIRef("WebGPU API")}}{{SeeCompatTable}}{{SecureContext_Header}}
The **`lineNum`** read-only property of the
{{domxref("GPUCompilationMessage")}} interface is a number representing the line number in the shader code that the message corresponds to.
## Value
A number.
Note that:
- If the message corresponds to a substring, `lineNum` refers to the line number that the substring begins on.
- If the message does not correspond to a specific line of code (perhaps it refers to the whole of the shader code), `lineNum` will be 0.
- Values are one-based — a value of 1 refers to the first line of code.
- Lines are delimited by line breaks. In WGSL, a [specific list of characters](https://gpuweb.github.io/gpuweb/wgsl/#line-break) is defined as line breaks.
## Examples
```js
// ...
const shaderModule = device.createShaderModule({
  code: shaders,
});

const shaderInfo = await shaderModule.getCompilationInfo();
const firstMessage = shaderInfo.messages[0];

console.log(firstMessage.lineNum);
// ...
```
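Since the value is one-based and `0` means the message applies to no particular line, reporting code should treat zero as a special case. A minimal sketch, reusing `shaderModule` from the snippet above:
```js
const shaderInfo = await shaderModule.getCompilationInfo();
for (const msg of shaderInfo.messages) {
  // lineNum is 1-based; 0 means there is no specific line.
  const where = msg.lineNum === 0 ? "whole shader" : `line ${msg.lineNum}`;
  console.log(`(${where}) ${msg.message}`);
}
```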
See the main [`GPUCompilationInfo` page](/en-US/docs/Web/API/GPUCompilationInfo#examples) for a more detailed example.
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- The [WebGPU API](/en-US/docs/Web/API/WebGPU_API)
---
title: "GPUCompilationMessage: linePos property"
short-title: linePos
slug: Web/API/GPUCompilationMessage/linePos
page-type: web-api-instance-property
status:
- experimental
browser-compat: api.GPUCompilationMessage.linePos
---
{{APIRef("WebGPU API")}}{{SeeCompatTable}}{{SecureContext_Header}}
The **`linePos`** read-only property of the
{{domxref("GPUCompilationMessage")}} interface is a number representing the position in the code line that the message corresponds to. This could be an exact point, or the start of the relevant substring.
## Value
A number.
To be precise, `linePos` is the number of UTF-16 code units from the beginning of the line to the exact point or start of the relevant substring that the message corresponds to.
Note that:
- If the message corresponds to a substring, `linePos` refers to the first UTF-16 code unit of the substring.
- If the message does not correspond to a specific code position (perhaps it refers to the whole of the shader code), `linePos` will be 0.
- Values are one-based — a value of 1 refers to the first code unit of the line.
## Examples
```js
// ...
const shaderModule = device.createShaderModule({
  code: shaders,
});

const shaderInfo = await shaderModule.getCompilationInfo();
const firstMessage = shaderInfo.messages[0];

console.log(firstMessage.linePos);
// ...
```
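Because `linePos` counts UTF-16 code units and is one-based, it can be combined with `lineNum` to point at the reported position. A sketch, assuming the `shaders` source string from the snippet above (splitting on `\n` is a simplification; WGSL allows other line-break characters):
```js
if (firstMessage.lineNum > 0 && firstMessage.linePos > 0) {
  const line = shaders.split("\n")[firstMessage.lineNum - 1];
  console.log(line);
  // Both values are 1-based, so subtract 1 when indexing.
  console.log(`${" ".repeat(firstMessage.linePos - 1)}^`);
}
```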
See the main [`GPUCompilationInfo` page](/en-US/docs/Web/API/GPUCompilationInfo#examples) for a more detailed example.
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- The [WebGPU API](/en-US/docs/Web/API/WebGPU_API)
---
title: "GPUCompilationMessage: length property"
short-title: length
slug: Web/API/GPUCompilationMessage/length
page-type: web-api-instance-property
status:
- experimental
browser-compat: api.GPUCompilationMessage.length
---
{{APIRef("WebGPU API")}}{{SeeCompatTable}}{{SecureContext_Header}}
The **`length`** read-only property of the
{{domxref("GPUCompilationMessage")}} interface is a number representing the length of the substring that the message corresponds to.
## Value
A number.
To be precise, `length` is the number of UTF-16 code units in the shader code substring that the message corresponds to. If the message corresponds to a single point rather than a substring, `length` will be 0.
## Examples
```js
// ...
const shaderModule = device.createShaderModule({
  code: shaders,
});

const shaderInfo = await shaderModule.getCompilationInfo();
const firstMessage = shaderInfo.messages[0];

console.log(firstMessage.length);
// ...
```
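Combined with {{domxref("GPUCompilationMessage.offset", "offset")}}, `length` can be used to extract the exact source text a message refers to. A sketch, assuming the `shaders` string from the snippet above (JavaScript string indices are also UTF-16 code units, so the two line up):
```js
if (firstMessage.length > 0) {
  const snippet = shaders.slice(
    firstMessage.offset,
    firstMessage.offset + firstMessage.length,
  );
  console.log(`Message refers to: "${snippet}"`);
}
```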
See the main [`GPUCompilationInfo` page](/en-US/docs/Web/API/GPUCompilationInfo#examples) for a more detailed example.
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- The [WebGPU API](/en-US/docs/Web/API/WebGPU_API)
---
title: "GPUCompilationMessage: message property"
short-title: message
slug: Web/API/GPUCompilationMessage/message
page-type: web-api-instance-property
status:
- experimental
browser-compat: api.GPUCompilationMessage.message
---
{{APIRef("WebGPU API")}}{{SeeCompatTable}}{{SecureContext_Header}}
The **`message`** read-only property of the
{{domxref("GPUCompilationMessage")}} interface is a string representing human-readable message text.
## Value
A string.
## Examples
```js
// ...
const shaderModule = device.createShaderModule({
  code: shaders,
});

const shaderInfo = await shaderModule.getCompilationInfo();
const firstMessage = shaderInfo.messages[0];

console.log(firstMessage.message);
// ...
```
See the main [`GPUCompilationInfo` page](/en-US/docs/Web/API/GPUCompilationInfo#examples) for a more detailed example.
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- The [WebGPU API](/en-US/docs/Web/API/WebGPU_API)
---
title: "GPUCompilationMessage: type property"
short-title: type
slug: Web/API/GPUCompilationMessage/type
page-type: web-api-instance-property
status:
- experimental
browser-compat: api.GPUCompilationMessage.type
---
{{APIRef("WebGPU API")}}{{SeeCompatTable}}{{SecureContext_Header}}
The **`type`** read-only property of the
{{domxref("GPUCompilationMessage")}} interface is an enumerated value representing the type of the message. Each type represents a different severity level.
## Value
An enumerated value. Possible values are:
- `"error"`
- : A shader-creation error, which stops successful compilation.
- `"info"`
- : A purely informative message, which is low severity.
- `"warning"`
- : A warning about an issue that will not stop successful compilation, but merits the developer's attention. An example is usage of deprecated functions or syntax.
## Examples
```js
// ...
const shaderModule = device.createShaderModule({
  code: shaders,
});

const shaderInfo = await shaderModule.getCompilationInfo();
const firstMessage = shaderInfo.messages[0];

console.log(firstMessage.type);
// ...
```
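A common pattern is to filter messages on `type`, for example treating any `"error"` message as fatal while merely logging warnings. A minimal sketch:
```js
const shaderInfo = await shaderModule.getCompilationInfo();
const errors = shaderInfo.messages.filter((msg) => msg.type === "error");
for (const err of errors) {
  console.error(`${err.lineNum}:${err.linePos} ${err.message}`);
}
if (errors.length > 0) {
  throw new Error("Shader failed to compile");
}
```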
See the main [`GPUCompilationInfo` page](/en-US/docs/Web/API/GPUCompilationInfo#examples) for a more detailed example.
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- The [WebGPU API](/en-US/docs/Web/API/WebGPU_API)
---
title: "GPUCompilationMessage: offset property"
short-title: offset
slug: Web/API/GPUCompilationMessage/offset
page-type: web-api-instance-property
status:
- experimental
browser-compat: api.GPUCompilationMessage.offset
---
{{APIRef("WebGPU API")}}{{SeeCompatTable}}{{SecureContext_Header}}
The **`offset`** read-only property of the
{{domxref("GPUCompilationMessage")}} interface is a number representing the offset from the start of the shader code to the exact point, or the start of the relevant substring, that the message corresponds to.
## Value
A number.
To be precise, `offset` is the number of UTF-16 code units from the beginning of the shader code to the exact point or start of the relevant substring that the message corresponds to.
If the message does not correspond to a specific code position (perhaps it refers to the whole of the shader code), `offset` will be 0.
## Examples
```js
// ...
const shaderModule = device.createShaderModule({
  code: shaders,
});

const shaderInfo = await shaderModule.getCompilationInfo();
const firstMessage = shaderInfo.messages[0];

console.log(firstMessage.offset);
// ...
```
See the main [`GPUCompilationInfo` page](/en-US/docs/Web/API/GPUCompilationInfo#examples) for a more detailed example.
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- The [WebGPU API](/en-US/docs/Web/API/WebGPU_API)
---
title: FetchEvent
slug: Web/API/FetchEvent
page-type: web-api-interface
browser-compat: api.FetchEvent
---
{{APIRef("Service Workers API")}}
This is the event type for `fetch` events dispatched on the {{domxref("ServiceWorkerGlobalScope", "service worker global scope", "", 1)}}. It contains information about the fetch, including the request and how the receiver will treat the response. It provides the {{domxref("FetchEvent.respondWith", "event.respondWith()")}} method, which allows us to provide a response to this fetch.
{{InheritanceDiagram}}
## Constructor
- {{domxref("FetchEvent.FetchEvent()", "FetchEvent()")}}
- : Creates a new `FetchEvent` object. This constructor is not typically used. The browser creates these objects and provides them to `fetch` event callbacks.
## Instance properties
_Inherits properties from its ancestor, {{domxref("Event")}}_.
- {{domxref("FetchEvent.clientId")}} {{ReadOnlyInline}}
- : The {{domxref("Client.id", "id")}} of the same-origin {{domxref("Client", "client")}} that initiated the fetch.
- {{domxref("FetchEvent.handled")}} {{ReadOnlyInline}}
- : A promise that is pending while the event has not been handled, and fulfilled once it has.
- {{domxref("FetchEvent.preloadResponse")}} {{ReadOnlyInline}}
- : A {{jsxref("Promise")}} for a {{domxref("Response")}}, or `undefined` if this fetch is not a navigation, or [navigation preload](/en-US/docs/Web/API/NavigationPreloadManager) is not enabled.
- {{domxref("FetchEvent.replacesClientId")}} {{ReadOnlyInline}}
- : The {{domxref("Client.id", "id")}} of the {{domxref("Client", "client")}} that is being replaced during a page navigation.
- {{domxref("FetchEvent.resultingClientId")}} {{ReadOnlyInline}}
- : The {{domxref("Client.id", "id")}} of the {{domxref("Client", "client")}} that replaces the previous client during a page navigation.
- {{domxref("FetchEvent.request")}} {{ReadOnlyInline}}
- : The {{domxref("Request")}} the browser intends to make.
## Instance methods
_Inherits methods from its parent, {{domxref("ExtendableEvent")}}_.
- {{domxref("FetchEvent.respondWith()")}}
- : Prevent the browser's default fetch handling, and provide (a promise for) a response yourself.
- {{domxref("ExtendableEvent.waitUntil()")}}
- : Extends the lifetime of the event. Used to notify the browser of tasks that extend beyond the returning of a response, such as streaming and caching.
## Examples
This fetch event uses the browser default for non-GET requests.
For GET requests it tries to return a match in the cache, and falls back to the network. If it finds a match in the cache, it asynchronously updates the cache for next time.
```js
self.addEventListener("fetch", (event) => {
  // Let the browser do its default thing
  // for non-GET requests.
  if (event.request.method !== "GET") return;

  // Prevent the default, and handle the request ourselves.
  event.respondWith(
    (async () => {
      // Try to get the response from a cache.
      const cache = await caches.open("dynamic-v1");
      const cachedResponse = await cache.match(event.request);

      if (cachedResponse) {
        // If we found a match in the cache, return it, but also
        // update the entry in the cache in the background.
        event.waitUntil(cache.add(event.request));
        return cachedResponse;
      }

      // If we didn't find a match in the cache, use the network.
      return fetch(event.request);
    })(),
  );
});
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [`fetch` event](/en-US/docs/Web/API/ServiceWorkerGlobalScope/fetch_event)
- {{jsxref("Promise")}}
- [Fetch API](/en-US/docs/Web/API/Fetch_API)
---
title: "FetchEvent: clientId property"
short-title: clientId
slug: Web/API/FetchEvent/clientId
page-type: web-api-instance-property
browser-compat: api.FetchEvent.clientId
---
{{APIRef("Service Workers API")}}
The **`clientId`** read-only property of the
{{domxref("FetchEvent")}} interface returns the id of the {{domxref("Client")}} that the
current service worker is controlling.
The {{domxref("Clients.get()")}} method could then be passed this ID to retrieve the
associated client.
## Value
A string that represents the client ID.
## Examples
```js
self.addEventListener("fetch", (event) => {
  console.log(event.clientId);
});
```
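As noted above, the ID can be passed to {{domxref("Clients.get()")}} to retrieve the client itself, for example to message it. A sketch (the message payload is illustrative):
```js
self.addEventListener("fetch", (event) => {
  event.waitUntil(
    (async () => {
      // clientId is "" for requests that aren't associated with a client.
      if (!event.clientId) return;
      const client = await self.clients.get(event.clientId);
      // The client may have gone away in the meantime.
      client?.postMessage({ fetchedUrl: event.request.url });
    })(),
  );
});
```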
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Using Service Workers](/en-US/docs/Web/API/Service_Worker_API/Using_Service_Workers)
- [Service workers basic code example](https://github.com/mdn/dom-examples/tree/main/service-worker/simple-service-worker)
- [Using web workers](/en-US/docs/Web/API/Web_Workers_API/Using_web_workers)
---
title: "FetchEvent: preloadResponse property"
short-title: preloadResponse
slug: Web/API/FetchEvent/preloadResponse
page-type: web-api-instance-property
browser-compat: api.FetchEvent.preloadResponse
---
{{APIRef("Service Workers API")}}
The **`preloadResponse`** read-only property of the {{domxref("FetchEvent")}} interface returns a {{jsxref("Promise")}} that resolves to the navigation preload {{domxref("Response")}} if [navigation preload](/en-US/docs/Web/API/NavigationPreloadManager) was triggered, or `undefined` otherwise.
Navigation preload is triggered if [navigation preload is enabled](/en-US/docs/Web/API/NavigationPreloadManager/enable), the request is a `GET` request, and the request is a navigation request (generated by the browser when loading pages and iframes).
A service worker can wait on this promise in its fetch event handler in order to track completion of a fetch request made during service-worker boot.
## Value
A {{jsxref("Promise")}} that resolves to a {{domxref("Response")}} or otherwise to `undefined`.
## Examples
This code snippet is from [Speed up Service Worker with Navigation Preloads](https://developer.chrome.com/blog/navigation-preload/).
The {{domxref("ServiceWorkerGlobalScope.fetch_event", "onfetch")}} event handler listens for the `fetch` event.
When fired, the handler calls {{domxref("FetchEvent.respondWith", "FetchEvent.respondWith()")}} to pass a promise back to the controlled page.
This promise will resolve with the requested resource.
If there is a matching URL request in the {{domxref("Cache")}} object, then the code returns a promise for fetching the response from the cache. If no match is found in the cache, the code returns the promise in `preloadResponse`. If there is neither a cached nor a preloaded response, the code fetches the response from the network and returns the associated promise.
```js
addEventListener("fetch", (event) => {
  event.respondWith(
    (async () => {
      // Respond from the cache if we can
      const cachedResponse = await caches.match(event.request);
      if (cachedResponse) return cachedResponse;

      // Else, use the preloaded response, if it's there
      const response = await event.preloadResponse;
      if (response) return response;

      // Else try the network.
      return fetch(event.request);
    })(),
  );
});
```
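For `preloadResponse` to ever resolve with a response, navigation preload must first be enabled, typically during the `activate` event. A minimal sketch:
```js
self.addEventListener("activate", (event) => {
  event.waitUntil(
    (async () => {
      if (self.registration.navigationPreload) {
        // Opt in to navigation preload once the worker is active.
        await self.registration.navigationPreload.enable();
      }
    })(),
  );
});
```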
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Speed up Service Worker with Navigation Preloads](https://developer.chrome.com/blog/navigation-preload/)
- [Using Service Workers](/en-US/docs/Web/API/Service_Worker_API/Using_Service_Workers)
- [Service workers basic code example](https://github.com/mdn/dom-examples/tree/main/service-worker/simple-service-worker)
- [Using web workers](/en-US/docs/Web/API/Web_Workers_API/Using_web_workers)
---
title: "FetchEvent: resultingClientId property"
short-title: resultingClientId
slug: Web/API/FetchEvent/resultingClientId
page-type: web-api-instance-property
browser-compat: api.FetchEvent.resultingClientId
---
{{APIRef("Service Workers API")}}
The **`resultingClientId`** read-only property of the
{{domxref("FetchEvent")}} interface is the {{domxref("Client.id", "id")}} of the
{{domxref("Client", "client")}} that replaces the previous client during a page
navigation.
For example, when navigating from page A to page B, `resultingClientId` is the ID of the client associated with page B.
If the fetch request is a subresource request or the request's
[`destination`](/en-US/docs/Web/API/Request/destination) is
`report`, `resultingClientId` will be an empty string.
## Value
A string.
## Examples
```js
self.addEventListener("fetch", (event) => {
  console.log(event.resultingClientId);
});
```
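Since the value is empty for subresource requests, it is mainly informative for navigations. A sketch:
```js
self.addEventListener("fetch", (event) => {
  if (event.request.mode === "navigate") {
    // The client being created for the destination page.
    console.log(`Resulting client: ${event.resultingClientId}`);
  }
});
```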
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Using Service Workers](/en-US/docs/Web/API/Service_Worker_API/Using_Service_Workers)
- [Service workers basic code example](https://github.com/mdn/dom-examples/tree/main/service-worker/simple-service-worker)
- [Using web workers](/en-US/docs/Web/API/Web_Workers_API/Using_web_workers)
---
title: "FetchEvent: handled property"
short-title: handled
slug: Web/API/FetchEvent/handled
page-type: web-api-instance-property
browser-compat: api.FetchEvent.handled
---
{{APIRef("Service Workers API")}}
The **`handled`** property of the {{DOMxRef("FetchEvent")}} interface returns a promise indicating if the event has been handled by the fetch algorithm or not. This property allows executing code after the browser has consumed a response, and is usually used together with the {{DOMxRef("ExtendableEvent.waitUntil", "waitUntil()")}} method.
## Value
A {{jsxref("Promise")}} that is pending while the event has not been handled, and fulfilled once it has.
## Examples
```js
addEventListener("fetch", (event) => {
  event.respondWith(
    (async function () {
      const response = await doCalculateAResponse(event.request);

      event.waitUntil(
        (async function () {
          await doSomeAsyncStuff(); // optional

          // Wait for the event to be consumed by the browser
          await event.handled;

          return doFinalStuff(); // Finalize AFTER the event has been consumed
        })(),
      );

      return response;
    })(),
  );
});
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- {{DOMxRef("ExtendableEvent.waitUntil()")}}
---
title: "FetchEvent: replacesClientId property"
short-title: replacesClientId
slug: Web/API/FetchEvent/replacesClientId
page-type: web-api-instance-property
browser-compat: api.FetchEvent.replacesClientId
---
{{APIRef("Service Workers API")}}
The **`replacesClientId`** read-only property of the
{{domxref("FetchEvent")}} interface is the {{domxref("Client.id", "id")}} of the
{{domxref("Client", "client")}} that is being replaced during a page navigation.
For example, when navigating from page A to page B, `replacesClientId` is the ID of the client associated with page A. It can be an empty string when navigating from `about:blank` to another page, as `about:blank`'s client will be reused, rather than be replaced.
Additionally, if the fetch isn't a navigation, `replacesClientId` will be an
empty string. This could be used to access/communicate with a client that will
imminently be replaced, right before a navigation.
## Value
A string.
## Examples
```js
self.addEventListener("fetch", (event) => {
  console.log(event.replacesClientId);
});
```
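A sketch of the use case mentioned above: reaching the outgoing client just before it is replaced (the message shape is illustrative):
```js
self.addEventListener("fetch", (event) => {
  event.waitUntil(
    (async () => {
      if (!event.replacesClientId) return;
      const outgoing = await self.clients.get(event.replacesClientId);
      // Tell the page it is about to be navigated away from.
      outgoing?.postMessage({ type: "navigating-away" });
    })(),
  );
});
```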
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Using Service Workers](/en-US/docs/Web/API/Service_Worker_API/Using_Service_Workers)
- [Service workers basic code example](https://github.com/mdn/dom-examples/tree/main/service-worker/simple-service-worker)
- [Using web workers](/en-US/docs/Web/API/Web_Workers_API/Using_web_workers)
---
title: "FetchEvent: isReload property"
short-title: isReload
slug: Web/API/FetchEvent/isReload
page-type: web-api-instance-property
status:
- deprecated
- non-standard
browser-compat: api.FetchEvent.isReload
---
{{APIRef("Service Workers API")}}{{deprecated_header}}{{Non-standard_header}}
The **`isReload`** read-only property of the
{{domxref("FetchEvent")}} interface returns `true` if the event was
dispatched by the user attempting to reload the page, and `false` otherwise.
Pressing the refresh button is a reload, while clicking a link or pressing the back button is not.
## Value
A boolean value.
## Examples
```js
self.addEventListener("fetch", (event) => {
  event.respondWith(
    (async () => {
      if (event.isReload) {
        // Return something
      } else {
        // Return something else
      }
    })(),
  );
});
```
## Browser compatibility
{{Compat}}
## See also
- [Using Service Workers](/en-US/docs/Web/API/Service_Worker_API/Using_Service_Workers)
- [Service workers basic code example](https://github.com/mdn/dom-examples/tree/main/service-worker/simple-service-worker)
- [Using web workers](/en-US/docs/Web/API/Web_Workers_API/Using_web_workers)
---
title: "FetchEvent: request property"
short-title: request
slug: Web/API/FetchEvent/request
page-type: web-api-instance-property
browser-compat: api.FetchEvent.request
---
{{APIRef("Service Workers API")}}
The **`request`** read-only property of the
{{domxref("FetchEvent")}} interface returns the {{domxref("Request")}} that triggered
the event handler.
This property is non-nullable (since version 46, in the case of Firefox). If a request is not provided by some other means, the constructor `options` object must contain a request (see {{domxref("FetchEvent.FetchEvent", "FetchEvent()")}}).
## Value
A {{domxref("Request")}} object.
## Examples
This code snippet is from the [service worker fetch sample](https://github.com/GoogleChrome/samples/blob/gh-pages/service-worker/prefetch/service-worker.js) ([run the fetch sample live](https://googlechrome.github.io/samples/service-worker/prefetch/)). The {{domxref("ServiceWorkerGlobalScope.fetch_event", "onfetch")}} event handler listens for the `fetch` event. When fired, the handler passes a promise back to the controlled page via {{domxref("FetchEvent.respondWith", "FetchEvent.respondWith()")}}. This promise resolves to the first matching URL request in the {{domxref("Cache")}} object. If no match is found, the code fetches a response from the network.
The code also handles exceptions thrown from the
{{domxref("fetch()")}} operation. Note that an HTTP error
response (e.g., 404) will not trigger an exception. It will return a normal response
object that has the appropriate error code set.
```js
self.addEventListener("fetch", (event) => {
  console.log("Handling fetch event for", event.request.url);

  event.respondWith(
    caches.match(event.request).then((response) => {
      if (response) {
        console.log("Found response in cache:", response);
        return response;
      }
      console.log("No response found in cache. About to fetch from network…");

      return fetch(event.request)
        .then((response) => {
          console.log("Response from network is:", response);
          return response;
        })
        .catch((error) => {
          console.error("Fetching failed:", error);
          throw error;
        });
    }),
  );
});
```
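To make the point about HTTP errors concrete, here is a minimal sketch that distinguishes them from network failures:
```js
self.addEventListener("fetch", (event) => {
  event.respondWith(
    fetch(event.request).then((response) => {
      if (!response.ok) {
        // An HTTP error (e.g., 404) is a fulfilled fetch, not an
        // exception, so it has to be checked for explicitly.
        console.warn(`HTTP ${response.status} for ${event.request.url}`);
      }
      return response;
    }),
  );
});
```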
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Using Service Workers](/en-US/docs/Web/API/Service_Worker_API/Using_Service_Workers)
- [Service workers basic code example](https://github.com/mdn/dom-examples/tree/main/service-worker/simple-service-worker)
- [Using web workers](/en-US/docs/Web/API/Web_Workers_API/Using_web_workers)
---
title: "FetchEvent: respondWith() method"
short-title: respondWith()
slug: Web/API/FetchEvent/respondWith
page-type: web-api-instance-method
browser-compat: api.FetchEvent.respondWith
---
{{APIRef("Service Workers API")}}
The **`respondWith()`** method of
{{domxref("FetchEvent")}} prevents the browser's default fetch handling, and
allows you to provide a promise for a {{domxref("Response")}} yourself.
In most cases you can provide any response that the receiver understands. For example,
if an {{HTMLElement('img')}} initiates the request, the response body needs to be
image data. For security reasons, there are a few global rules:
- You can only return {{domxref("Response")}} objects of {{domxref("Response.type",
"type")}} "`opaque`" if the {{domxref("fetchEvent.request")}} object's
{{domxref("request.mode", "mode")}} is "`no-cors`". This prevents the
leaking of private data.
- You can only return {{domxref("Response")}} objects of {{domxref("Response.type",
"type")}} "`opaqueredirect`" if the {{domxref("fetchEvent.request")}}
object's {{domxref("request.mode", "mode")}} is "`manual`".
- You cannot return {{domxref("Response")}} objects of {{domxref("Response.type",
"type")}} "`cors`" if the {{domxref("fetchEvent.request")}} object's
{{domxref("request.mode", "mode")}} is "`same-origin`".
### Specifying the final URL of a resource
From Firefox 59 onwards, when a service worker provides a {{domxref("Response")}} to
{{domxref("FetchEvent.respondWith()")}}, the {{domxref("Response.url")}} value will be
propagated to the intercepted network request as the final resolved URL. If the
{{domxref("Response.url")}} value is the empty string, then the
{{domxref("Request.url","FetchEvent.request.url")}} is used as the final URL.
In the past the {{domxref("Request.url","FetchEvent.request.url")}} was used as the
final URL in all cases. The provided {{domxref("Response.url")}} was effectively
ignored.
This means, for example, if a service worker intercepts a stylesheet or worker script,
then the provided {{domxref("Response.url")}} will be used to resolve any relative
{{cssxref("@import")}} or
{{domxref("WorkerGlobalScope.importScripts()","importScripts()")}} subresource loads
([Firefox bug 1222008](https://bugzil.la/1222008)).
For most types of network request this change has no impact because you can't observe
the final URL. There are a few, though, where it does matter:
- If a {{domxref("fetch()")}} is intercepted,
then you can observe the final URL on the result's {{domxref("Response.url")}}.
- If a [worker](/en-US/docs/Web/API/Web_Workers_API) script is
intercepted, then the final URL is used to set
[`self.location`](/en-US/docs/Web/API/WorkerGlobalScope/location)
and used as the base URL for relative URLs in the worker script.
- If a stylesheet is intercepted, then the final URL is used as the base URL for
resolving relative {{cssxref("@import")}} loads.
Note that navigation requests for {{domxref("Window","Windows")}} and
{{domxref("HTMLIFrameElement","iframes")}} do NOT use the final URL. The way the HTML
specification handles redirects for navigations ends up using the request URL for the
resulting {{domxref("Window.location")}}. This means sites can still provide an
"alternate" view of a web page when offline without changing the user-visible URL.
## Syntax
```js-nolint
respondWith(response)
```
### Parameters
- `response`
- : A {{domxref("Response")}} or a {{jsxref("Promise")}} that resolves to a
`Response`. Otherwise, a network error is returned to Fetch.
### Return value
None ({{jsxref("undefined")}}).
### Exceptions
- `NetworkError` {{domxref("DOMException")}}
- : Returned if a network error is triggered on certain combinations of
{{domxref("Request.mode","FetchEvent.request.mode")}} and
{{domxref("Response.type")}} values, as hinted at in the "global rules"
listed above.
- `InvalidStateError` {{domxref("DOMException")}}
- : Returned if the event has not been dispatched or `respondWith()` has
already been invoked.
## Examples
This fetch event tries to return a response from the cache API, falling back to the
network otherwise.
```js
addEventListener("fetch", (event) => {
  // Prevent the default, and handle the request ourselves.
  event.respondWith(
    (async () => {
      // Try to get the response from a cache.
      const cachedResponse = await caches.match(event.request);
      // Return it if we found one.
      if (cachedResponse) return cachedResponse;
      // If we didn't find a match in the cache, use the network.
      return fetch(event.request);
    })(),
  );
});
```
> **Note:** {{domxref("CacheStorage.match()", "caches.match()")}} is a
> convenience method. Equivalent functionality is to call
> {{domxref("cache.match()")}} on each cache (in the order returned by
> {{domxref("CacheStorage.keys()", "caches.keys()")}}) until a
> {{domxref("Response")}} is returned.
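`respondWith()` also accepts a {{domxref("Response")}} you construct yourself, which is useful as an offline fallback when the network is unavailable. A sketch:
```js
addEventListener("fetch", (event) => {
  event.respondWith(
    fetch(event.request).catch(
      () =>
        // fetch() rejected, likely because the network is down,
        // so synthesize a response instead.
        new Response("You appear to be offline.", {
          status: 503,
          headers: { "Content-Type": "text/plain" },
        }),
    ),
  );
});
```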
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Using Service Workers](/en-US/docs/Web/API/Service_Worker_API/Using_Service_Workers)
- [Fetch API](/en-US/docs/Web/API/Fetch_API)
---
title: "FetchEvent: FetchEvent() constructor"
short-title: FetchEvent()
slug: Web/API/FetchEvent/FetchEvent
page-type: web-api-constructor
browser-compat: api.FetchEvent.FetchEvent
---
{{APIRef("Service Workers API")}}
The **`FetchEvent()`** constructor creates a new {{domxref("FetchEvent")}} object.
## Syntax
```js-nolint
new FetchEvent(type, options)
```
### Parameters
- `type`
- : A string with the name of the event.
It is case-sensitive and browsers always set it to `fetch`.
- `options`
- : An object that, _in addition to the properties defined in {{domxref("ExtendableEvent/ExtendableEvent", "ExtendableEvent()")}}_, can have the following properties:
- `request`
- : The {{domxref("Request")}} object that would have triggered the event handler.
- `preloadResponse`
- : A {{jsxref("Promise")}} which returns a previously-loaded response to the client.
- `clientId` {{optional_inline}}
- : A string containing the {{domxref("Client.id", "id")}} of the {{domxref("Client")}} that the current service worker is controlling. It defaults to `""`.
- `isReload` {{deprecated_inline}} {{optional_inline}}
- : A boolean value that signifies whether the page was reloaded or not when
the event was dispatched. `true` if yes, and `false` if not.
Typically, pressing the refresh button in a browser is a reload, while clicking a
link and pressing the back button is not. If not present, it defaults to
`false`.
- `replacesClientId` {{optional_inline}}
- : A string which identifies the client which is being replaced by `resultingClientId`. It defaults to `""`.
- `resultingClientId` {{optional_inline}}
- : A string containing the new `clientId` if the client changes as a result of the page load. It defaults to `""`.
- `handled`
- : A _pending_ promise that will be fulfilled once the event has been handled.
## Return value
A new {{domxref("FetchEvent")}} object.
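## Examples
Since browsers create these events themselves, the constructor is mainly useful in tests. A minimal sketch, runnable in a service worker scope (the URL is illustrative):
```js
const event = new FetchEvent("fetch", {
  request: new Request("https://example.com/"),
});
console.log(event.request.url); // "https://example.com/"
```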
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- {{jsxref("Promise")}}
- [Fetch API](/en-US/docs/Web/API/Fetch_API)
---
title: indexedDB global property
short-title: indexedDB
slug: Web/API/indexedDB
page-type: web-api-global-property
browser-compat: api.indexedDB
---
{{APIRef("IndexedDB")}}{{AvailableInWorkers}}
The global **`indexedDB`** read-only property provides a mechanism for applications to
asynchronously access the capabilities of indexed databases.
## Value
An {{domxref("IDBFactory")}} object.
## Examples
The following code creates a request for a database to be opened asynchronously, after
which the database is opened when the request's `onsuccess` handler is fired:
```js
let db;

function openDB() {
  const DBOpenRequest = window.indexedDB.open("toDoList");

  DBOpenRequest.onsuccess = (e) => {
    db = DBOpenRequest.result;
  };
}
```
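In practice the open request should also handle first-time setup and failure. A sketch extending the snippet above (the object store name and key path are illustrative):
```js
let db;

function openDB() {
  const DBOpenRequest = window.indexedDB.open("toDoList", 1);

  // Fires when the database doesn't exist yet or needs upgrading.
  DBOpenRequest.onupgradeneeded = () => {
    DBOpenRequest.result.createObjectStore("toDoList", {
      keyPath: "taskTitle",
    });
  };

  DBOpenRequest.onerror = () => {
    console.error("Failed to open database:", DBOpenRequest.error);
  };

  DBOpenRequest.onsuccess = () => {
    db = DBOpenRequest.result;
  };
}
```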
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Using IndexedDB](/en-US/docs/Web/API/IndexedDB_API/Using_IndexedDB)
- Starting transactions: {{domxref("IDBDatabase")}}
- Using transactions: {{domxref("IDBTransaction")}}
- Setting a range of keys: {{domxref("IDBKeyRange")}}
- Retrieving and making changes to your data: {{domxref("IDBObjectStore")}}
- Using cursors: {{domxref("IDBCursor")}}
- Reference example: [To-do Notifications](https://github.com/mdn/dom-examples/tree/main/to-do-notifications) ([View the example live](https://mdn.github.io/dom-examples/to-do-notifications/)).
---
title: MediaTrackSupportedConstraints
slug: Web/API/MediaTrackSupportedConstraints
page-type: web-api-interface
browser-compat: api.MediaTrackSupportedConstraints
---
{{APIRef("Media Capture and Streams")}}
The **`MediaTrackSupportedConstraints`** dictionary establishes the list of constrainable properties recognized by the {{Glossary("user agent")}} or browser in its implementation of the {{domxref("MediaStreamTrack")}} object. An object conforming to `MediaTrackSupportedConstraints` is returned by {{domxref("MediaDevices.getSupportedConstraints()")}}.
Because of the way interface definitions in WebIDL work, if a constraint is requested but not supported, no error will occur. Instead, the specified constraints will be applied, with any unrecognized constraints stripped from the request. That can lead to confusing and hard-to-debug errors, so be sure to use `getSupportedConstraints()` to retrieve this information before attempting to establish constraints if you need to know the difference between silently ignoring a constraint and a constraint being accepted.
An actual constraint set is described using an object based on the {{domxref("MediaTrackConstraints")}} dictionary.
To learn more about how constraints work, see [Capabilities, constraints, and settings](/en-US/docs/Web/API/Media_Capture_and_Streams_API/Constraints).
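For example, a constraint that matters to the application can be checked for explicitly before it is requested, instead of being silently dropped. A sketch (the chosen constraint is illustrative):
```js
async function getNoiseSuppressedAudio() {
  const supported = navigator.mediaDevices.getSupportedConstraints();
  if (!supported.noiseSuppression) {
    // getUserMedia() would otherwise silently ignore the constraint.
    throw new Error("noiseSuppression is not supported in this browser");
  }
  return navigator.mediaDevices.getUserMedia({
    audio: { noiseSuppression: true },
  });
}
```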
## Instance properties
Some combination—but not necessarily all—of the following properties will exist on the object.
- {{domxref("MediaTrackSupportedConstraints.autoGainControl", "autoGainControl")}}
- : A Boolean whose value is `true` if the [`autoGainControl`](/en-US/docs/Web/API/MediaTrackConstraints#autogaincontrol) constraint is supported in the current environment.
- {{domxref("MediaTrackSupportedConstraints.width", "width")}}
- : A Boolean value whose value is `true` if the [`width`](/en-US/docs/Web/API/MediaTrackConstraints#width) constraint is supported in the current environment.
- {{domxref("MediaTrackSupportedConstraints.height", "height")}}
- : A Boolean value whose value is `true` if the [`height`](/en-US/docs/Web/API/MediaTrackConstraints#height) constraint is supported in the current environment.
- {{domxref("MediaTrackSupportedConstraints.aspectRatio", "aspectRatio")}}
- : A Boolean value whose value is `true` if the [`aspectRatio`](/en-US/docs/Web/API/MediaTrackConstraints#aspectratio) constraint is supported in the current environment.
- {{domxref("MediaTrackSupportedConstraints.frameRate", "frameRate")}}
- : A Boolean value whose value is `true` if the [`frameRate`](/en-US/docs/Web/API/MediaTrackConstraints#framerate) constraint is supported in the current environment.
- {{domxref("MediaTrackSupportedConstraints.facingMode", "facingMode")}}
- : A Boolean value whose value is `true` if the [`facingMode`](/en-US/docs/Web/API/MediaTrackConstraints#facingmode) constraint is supported in the current environment.
- {{domxref("MediaTrackSupportedConstraints.resizeMode", "resizeMode")}}
- : A Boolean value whose value is `true` if the [`resizeMode`](/en-US/docs/Web/API/MediaTrackConstraints#resizemode) constraint is supported in the current environment.
- {{domxref("MediaTrackSupportedConstraints.volume", "volume")}} {{Deprecated_Inline}} {{Non-standard_Inline}}
- : A Boolean value whose value is `true` if the [`volume`](/en-US/docs/Web/API/MediaTrackConstraints#volume) constraint is supported in the current environment.
- {{domxref("MediaTrackSupportedConstraints.sampleRate", "sampleRate")}}
- : A Boolean value whose value is `true` if the [`sampleRate`](/en-US/docs/Web/API/MediaTrackConstraints#samplerate) constraint is supported in the current environment.
- {{domxref("MediaTrackSupportedConstraints.sampleSize", "sampleSize")}}
- : A Boolean value whose value is `true` if the [`sampleSize`](/en-US/docs/Web/API/MediaTrackConstraints#samplesize) constraint is supported in the current environment.
- {{domxref("MediaTrackSupportedConstraints.echoCancellation", "echoCancellation")}}
- : A Boolean value whose value is `true` if the [`echoCancellation`](/en-US/docs/Web/API/MediaTrackConstraints#echocancellation) constraint is supported in the current environment.
- {{domxref("MediaTrackSupportedConstraints.latency", "latency")}}
- : A Boolean value whose value is `true` if the [`latency`](/en-US/docs/Web/API/MediaTrackConstraints#latency) constraint is supported in the current environment.
- {{domxref("MediaTrackSupportedConstraints.noiseSuppression", "noiseSuppression")}}
- : A Boolean whose value is `true` if the [`noiseSuppression`](/en-US/docs/Web/API/MediaTrackConstraints#noisesuppression) constraint is supported in the current environment.
- {{domxref("MediaTrackSupportedConstraints.channelCount", "channelCount")}}
- : A Boolean value whose value is `true` if the [`channelCount`](/en-US/docs/Web/API/MediaTrackConstraints#channelcount) constraint is supported in the current environment.
- {{domxref("MediaTrackSupportedConstraints.deviceId", "deviceId")}}
- : A Boolean value whose value is `true` if the [`deviceId`](/en-US/docs/Web/API/MediaTrackConstraints#deviceid) constraint is supported in the current environment.
- {{domxref("MediaTrackSupportedConstraints.groupId", "groupId")}}
- : A Boolean value whose value is `true` if the [`groupId`](/en-US/docs/Web/API/MediaTrackConstraints#groupid) constraint is supported in the current environment.
### Instance properties specific to shared screen tracks
For tracks containing video sources from the user's screen contents, the following additional properties may be included in addition to those available for video tracks.
- {{domxref("MediaTrackSupportedConstraints.displaySurface", "displaySurface")}}
- : A Boolean value which is `true` if the {{domxref("MediaTrackConstraints.displaySurface", "displaySurface")}} constraint is supported in the current environment.
- {{domxref("MediaTrackSupportedConstraints.logicalSurface", "logicalSurface")}}
- : A Boolean value which is `true` if the {{domxref("MediaTrackConstraints.logicalSurface", "logicalSurface")}} constraint is supported in the current environment.
## Browser compatibility
{{Compat}}
## See also
- [Media Capture and Streams API](/en-US/docs/Web/API/Media_Capture_and_Streams_API)
- [Capabilities, constraints, and settings](/en-US/docs/Web/API/Media_Capture_and_Streams_API/Constraints)
- [Screen Capture API](/en-US/docs/Web/API/Screen_Capture_API)
- [Using the Screen Capture API](/en-US/docs/Web/API/Screen_Capture_API/Using_Screen_Capture)
- {{domxref("MediaTrackConstraints")}}
- {{domxref("MediaDevices.getUserMedia()")}}
- {{domxref("MediaStreamTrack.getConstraints()")}}
- {{domxref("MediaStreamTrack.applyConstraints()")}}
- {{domxref("MediaStreamTrack.getSettings()")}}
---
title: "MediaTrackSupportedConstraints: noiseSuppression property"
short-title: noiseSuppression
slug: Web/API/MediaTrackSupportedConstraints/noiseSuppression
page-type: web-api-instance-property
browser-compat: api.MediaTrackSupportedConstraints.noiseSuppression
---
{{APIRef("Media Capture and Streams")}}
The {{domxref("MediaTrackSupportedConstraints")}} dictionary's
**`noiseSuppression`** property is a read-only Boolean value
which is present (and set to `true`) in the object returned by
{{domxref("MediaDevices.getSupportedConstraints()")}} if and only if the
{{Glossary("user agent")}} supports the **`noiseSuppression`**
constraint. If the constraint isn't supported, it's not included in the list, so this
value will never be `false`.
You can access the supported constraints dictionary by calling
`navigator.mediaDevices.getSupportedConstraints()`.
The `noiseSuppression` constraint indicates whether or not the browser offers the ability to automatically suppress background noise on audio tracks; this obviously is contingent on whether or not the individual device supports noise suppression as well.
## Value
This property is present in the dictionary (and its value is always `true`) if the user agent supports the `noiseSuppression` constraint (and therefore supports noise suppression on audio tracks). If the constraint isn't supported, the property is missing from the supported constraints dictionary, and you'll get {{jsxref("undefined")}} if you try to look at its value.
## Examples
This example displays whether or not your browser supports the
`noiseSuppression` constraint.
```html hidden
<div id="result"></div>
```
```css hidden
#result {
  font:
    14px "Arial",
    sans-serif;
}
```
```js
const result = document.getElementById("result");
const supported =
  navigator.mediaDevices.getSupportedConstraints().noiseSuppression;
result.textContent = supported ? "Supported!" : "Not supported!";
```
### Result
{{ EmbedLiveSample('Examples', 600, 80) }}
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Media Capture and Streams API](/en-US/docs/Web/API/Media_Capture_and_Streams_API)
- {{domxref("MediaDevices.getSupportedConstraints()")}}
- {{domxref("MediaTrackSupportedConstraints")}}
- {{domxref("MediaStreamTrack")}}
---
title: "MediaTrackSupportedConstraints: sampleSize property"
short-title: sampleSize
slug: Web/API/MediaTrackSupportedConstraints/sampleSize
page-type: web-api-instance-property
browser-compat: api.MediaTrackSupportedConstraints.sampleSize
---
{{APIRef("Media Capture and Streams")}}
The {{domxref("MediaTrackSupportedConstraints")}} dictionary's
**`sampleSize`** property is a read-only Boolean value which is
present (and set to `true`) in the object returned by
{{domxref("MediaDevices.getSupportedConstraints()")}} if and only if the
{{Glossary("user agent")}} supports the `sampleSize` constraint. If the
constraint isn't supported, it's not included in the list, so this value will never be
`false`.
You can access the supported constraints dictionary by calling
`navigator.mediaDevices.getSupportedConstraints()`.
## Value
This property is present in the dictionary (and its value is always `true`) if the user agent supports the `sampleSize` constraint. If the constraint isn't supported, the property is missing from the supported constraints dictionary, and you'll get {{jsxref("undefined")}} if you try to look at its value.
## Examples
```html hidden
<div id="result"></div>
```
```css hidden
#result {
  font:
    14px "Arial",
    sans-serif;
}
```
```js
const result = document.getElementById("result");
const supported = navigator.mediaDevices.getSupportedConstraints().sampleSize;
result.textContent = supported ? "Supported!" : "Not supported!";
```
### Result
{{ EmbedLiveSample('Examples', 600, 80) }}
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Media Capture and Streams API](/en-US/docs/Web/API/Media_Capture_and_Streams_API)
- {{domxref("MediaDevices.getSupportedConstraints()")}}
- {{domxref("MediaTrackSupportedConstraints")}}
- {{domxref("MediaStreamTrack")}}
---
title: "MediaTrackSupportedConstraints: echoCancellation property"
short-title: echoCancellation
slug: Web/API/MediaTrackSupportedConstraints/echoCancellation
page-type: web-api-instance-property
browser-compat: api.MediaTrackSupportedConstraints.echoCancellation
---
{{APIRef("Media Capture and Streams")}}
The {{domxref("MediaTrackSupportedConstraints")}} dictionary's
**`echoCancellation`** property is a read-only Boolean value
which is present (and set to `true`) in the object returned by
{{domxref("MediaDevices.getSupportedConstraints()")}} if and only if the
{{Glossary("user agent")}} supports the `echoCancellation` constraint. If the
constraint isn't supported, it's not included in the list, so this value will never be
`false`.
You can access the supported constraints dictionary by calling
`navigator.mediaDevices.getSupportedConstraints()`.
## Value
This property is present in the dictionary (and its value is always `true`) if the user agent supports the `echoCancellation` constraint. If the constraint isn't supported, the property is missing from the supported constraints dictionary, and you'll get {{jsxref("undefined")}} if you try to look at its value.
## Examples
```html hidden
<div id="result"></div>
```
```css hidden
#result {
  font:
    14px "Arial",
    sans-serif;
}
```
```js
const result = document.getElementById("result");
const supported =
  navigator.mediaDevices.getSupportedConstraints().echoCancellation;
result.textContent = supported ? "Supported!" : "Not supported!";
```
### Result
{{ EmbedLiveSample('Examples', 600, 80) }}
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Media Capture and Streams API](/en-US/docs/Web/API/Media_Capture_and_Streams_API)
- {{domxref("MediaDevices.getSupportedConstraints()")}}
- {{domxref("MediaTrackSupportedConstraints")}}
- {{domxref("MediaStreamTrack")}}
---
title: "MediaTrackSupportedConstraints: volume property"
short-title: volume
slug: Web/API/MediaTrackSupportedConstraints/volume
page-type: web-api-instance-property
status:
- deprecated
- non-standard
browser-compat: api.MediaTrackSupportedConstraints.volume
---
{{APIRef("Media Capture and Streams")}}{{Deprecated_Header}}{{Non-standard_Header}}
The {{domxref("MediaTrackSupportedConstraints")}} dictionary's
**`volume`** property is a read-only Boolean value which is
present (and set to `true`) in the object returned by
{{domxref("MediaDevices.getSupportedConstraints()")}} if and only if the
{{Glossary("user agent")}} supports the `volume` constraint. If the
constraint isn't supported, it's not included in the list, so this value will never be
`false`.
You can access the supported constraints dictionary by calling
`navigator.mediaDevices.getSupportedConstraints()`.
## Value
This property is present in the dictionary (and its value is always `true`) if the user agent supports the `volume` constraint. If the constraint isn't supported, the property is missing from the supported constraints dictionary, and you'll get {{jsxref("undefined")}} if you try to look at its value.
## Examples
```html hidden
<div id="result"></div>
```
```css hidden
#result {
  font:
    14px "Arial",
    sans-serif;
}
```
```js
const result = document.getElementById("result");
const supported = navigator.mediaDevices.getSupportedConstraints().volume;
result.textContent = supported ? "Supported!" : "Not supported!";
```
### Result
{{EmbedLiveSample('Examples', 600, 80)}}
## Browser compatibility
{{Compat}}
## See also
- [Media Capture and Streams API](/en-US/docs/Web/API/Media_Capture_and_Streams_API)
- {{domxref("MediaDevices.getSupportedConstraints()")}}
- {{domxref("MediaTrackSupportedConstraints")}}
- {{domxref("MediaStreamTrack")}}
---
title: "MediaTrackSupportedConstraints: logicalSurface property"
short-title: logicalSurface
slug: Web/API/MediaTrackSupportedConstraints/logicalSurface
page-type: web-api-instance-property
browser-compat: api.MediaTrackSupportedConstraints.logicalSurface
---
{{APIRef("Media Capture and Streams")}}
The {{domxref("MediaTrackSupportedConstraints")}} dictionary's **`logicalSurface`** property indicates whether or not the {{domxref("MediaTrackConstraints.logicalSurface", "logicalSurface")}} constraint is supported by the user agent and the device on which the content is being used.
The supported constraints list is obtained by calling {{domxref("MediaDevices.getSupportedConstraints","navigator.mediaDevices.getSupportedConstraints()")}}.
## Value
A boolean value which is `true` if the {{domxref("MediaTrackConstraints.logicalSurface", "logicalSurface")}} constraint is supported by the device and user agent.
## Examples
The function below sets up the constraints object specifying the options for the call to
{{domxref("MediaDevices.getDisplayMedia", "getDisplayMedia()")}}. It adds the
`logicalSurface` constraint (requesting that only logical display
surfaces—those which may not be entirely visible onscreen—be included among the options
available to the user) only if it is known to be supported by the browser. Capturing is
then started by calling `getDisplayMedia()` and attaching the returned stream
to the video element referenced by the variable `videoElem`.
```js
async function capture() {
  const supportedConstraints = navigator.mediaDevices.getSupportedConstraints();
  const displayMediaOptions = {
    video: {},
    audio: false,
  };

  if (supportedConstraints.logicalSurface) {
    displayMediaOptions.video.logicalSurface = "monitor";
  }

  try {
    videoElem.srcObject =
      await navigator.mediaDevices.getDisplayMedia(displayMediaOptions);
  } catch (err) {
    /* handle the error */
  }
}
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Screen Capture API](/en-US/docs/Web/API/Screen_Capture_API)
- [Using the screen capture API](/en-US/docs/Web/API/Screen_Capture_API/Using_Screen_Capture)
- [Capabilities, constraints, and settings](/en-US/docs/Web/API/Media_Capture_and_Streams_API/Constraints)
- {{domxref("MediaDevices.getDisplayMedia()")}}
- {{domxref("MediaStreamTrack.getConstraints()")}}
- {{domxref("MediaStreamTrack.applyConstraints()")}}
- {{domxref("MediaStreamTrack.getSettings()")}}
---
title: "MediaTrackSupportedConstraints: frameRate property"
short-title: frameRate
slug: Web/API/MediaTrackSupportedConstraints/frameRate
page-type: web-api-instance-property
browser-compat: api.MediaTrackSupportedConstraints.frameRate
---
{{APIRef("Media Capture and Streams")}}
The {{domxref("MediaTrackSupportedConstraints")}} dictionary's **`frameRate`** property is a read-only Boolean value which is
present (and set to `true`) in the object returned by {{domxref("MediaDevices.getSupportedConstraints()")}} if and only if the {{Glossary("user agent")}} supports the {{domxref("MediaTrackConstraints.frameRate","frameRate")}} constraint.
If the constraint isn't supported, it's not included in the list, so this value will never be `false`.
The `frameRate` constraint can be used to establish acceptable upper and lower bounds on the video frame rate for a new video track, or to specify an exact frame rate that must be provided for the request to succeed.
Checking the value of this property lets you determine if the user agent allows constraining the video track configuration by frame rate. See the [example](#examples) to see how this can be used.
## Value
This property is present in the dictionary if the user agent supports the `frameRate` constraint.
If the property isn't present, the user agent doesn't allow specifying limits on the frame rate for video tracks.
> **Note:** If this property is present, its value is always `true`.
## Examples
This simple example looks to see if your browser supports constraining the frame rate when requesting video tracks.
### JavaScript
```js
const result = document.getElementById("result");
const supported = navigator.mediaDevices.getSupportedConstraints().frameRate;
result.textContent = supported ? "Supported!" : "Not supported!";
```
### HTML
```html
<div id="result"></div>
```
### CSS
```css
#result {
  font:
    14px "Arial",
    sans-serif;
}
```
### Result
The output, showing if your browser supports the `frameRate` constraint, is:
{{ EmbedLiveSample('Examples', 600, 80) }}
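When support is confirmed, the constraint can be applied while requesting a video track. A sketch (the bounds are illustrative):
```js
async function getCappedVideo() {
  const supported = navigator.mediaDevices.getSupportedConstraints();
  const video = supported.frameRate
    ? { frameRate: { ideal: 30, max: 60 } }
    : true;
  return navigator.mediaDevices.getUserMedia({ video });
}
```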
While this example is trivial, you can replace the simple output of "Supported" vs.
"Not supported" with code to provide alternative methods for presenting the audiovisual information you want to share with the user or otherwise work with.
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Media Capture and Streams API](/en-US/docs/Web/API/Media_Capture_and_Streams_API)
- {{domxref("MediaDevices.getSupportedConstraints()")}}
- {{domxref("MediaTrackSupportedConstraints")}}
- {{domxref("MediaStreamTrack")}}
---
title: "MediaTrackSupportedConstraints: autoGainControl property"
short-title: autoGainControl
slug: Web/API/MediaTrackSupportedConstraints/autoGainControl
page-type: web-api-instance-property
browser-compat: api.MediaTrackSupportedConstraints.autoGainControl
---
{{APIRef("Media Capture and Streams")}}
The {{domxref("MediaTrackSupportedConstraints")}} dictionary's **`autoGainControl`** property is a read-only Boolean value which is present (and set to `true`) in the object returned by {{domxref("MediaDevices.getSupportedConstraints()")}} if and only if the {{Glossary("user agent")}} supports the **`autoGainControl`** constraint.
If the constraint isn't supported, it's not included in the list, so this value will never be `false`.
You can access the supported constraints dictionary by calling `navigator.mediaDevices.getSupportedConstraints()`.
The `autoGainControl` constraint indicates whether or not the browser offers the ability to automatically control the gain (volume) on media tracks; this obviously is contingent on whether or not the individual device supports automatic gain control as well; it's typically a feature provided by microphones.
## Value
This property is present in the dictionary (and its value is always `true`) if the user agent supports the `autoGainControl` constraint.
If the constraint isn't supported, the property is missing from the supported constraints dictionary, and you'll get {{jsxref("undefined")}} if you try to look at its value.
## Examples
This example displays whether or not your browser supports the `autoGainControl` constraint.
```html hidden
<div id="result"></div>
```
```css hidden
#result {
  font:
    14px "Arial",
    sans-serif;
}
```
```js
const result = document.getElementById("result");
const supported =
  navigator.mediaDevices.getSupportedConstraints().autoGainControl;
result.textContent = supported ? "Supported!" : "Not supported!";
```
### Result
{{ EmbedLiveSample('Examples', 600, 80) }}
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Media Capture and Streams API](/en-US/docs/Web/API/Media_Capture_and_Streams_API)
- {{domxref("MediaDevices.getSupportedConstraints()")}}
- {{domxref("MediaTrackSupportedConstraints")}}
- {{domxref("MediaStreamTrack")}}
---
title: "MediaTrackSupportedConstraints: suppressLocalAudioPlayback property"
short-title: suppressLocalAudioPlayback
slug: Web/API/MediaTrackSupportedConstraints/suppressLocalAudioPlayback
page-type: web-api-instance-property
status:
- experimental
browser-compat: api.MediaTrackSupportedConstraints.suppressLocalAudioPlayback
---
{{APIRef("Media Capture and Streams")}}{{SeeCompatTable}}
The {{domxref("MediaTrackSupportedConstraints")}} dictionary's **`suppressLocalAudioPlayback`** property indicates whether or not the {{domxref("MediaTrackConstraints.suppressLocalAudioPlayback", "suppressLocalAudioPlayback")}} constraint is supported by the user agent and the device on which the content is being used.
The supported constraints list is obtained by calling {{domxref("MediaDevices.getSupportedConstraints","navigator.mediaDevices.getSupportedConstraints()")}}.
## Value
A boolean value which is `true` if the {{domxref("MediaTrackConstraints.suppressLocalAudioPlayback", "suppressLocalAudioPlayback")}} constraint is supported by the device and user agent.
## Examples
The function below sets up the options object for the call to {{domxref("MediaDevices.getDisplayMedia", "getDisplayMedia()")}}. It adds the `suppressLocalAudioPlayback` constraint (requesting that captured audio is not played out of the user's local speakers) only if it is known to be supported by the browser. Capturing is then started by calling `getDisplayMedia()` and attaching the returned stream to the video element referenced by the variable `videoElem`.
```js
async function capture() {
const supportedConstraints = navigator.mediaDevices.getSupportedConstraints();
const displayMediaOptions = {
audio: {},
};
if (supportedConstraints.suppressLocalAudioPlayback) {
displayMediaOptions.audio.suppressLocalAudioPlayback = true;
}
try {
videoElem.srcObject =
await navigator.mediaDevices.getDisplayMedia(displayMediaOptions);
} catch (err) {
/* handle the error */
}
}
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Screen Capture API](/en-US/docs/Web/API/Screen_Capture_API)
- [Using the screen capture API](/en-US/docs/Web/API/Screen_Capture_API/Using_Screen_Capture)
- [Capabilities, constraints, and settings](/en-US/docs/Web/API/Media_Capture_and_Streams_API/Constraints)
- {{domxref("MediaDevices.getDisplayMedia()")}}
- {{domxref("MediaStreamTrack.getConstraints()")}}
- {{domxref("MediaStreamTrack.applyConstraints()")}}
- {{domxref("MediaStreamTrack.getSettings()")}}
| 0 |
data/mdn-content/files/en-us/web/api/mediatracksupportedconstraints | data/mdn-content/files/en-us/web/api/mediatracksupportedconstraints/latency/index.md | ---
title: "MediaTrackSupportedConstraints: latency property"
short-title: latency
slug: Web/API/MediaTrackSupportedConstraints/latency
page-type: web-api-instance-property
browser-compat: api.MediaTrackSupportedConstraints.latency
---
{{APIRef("Media Capture and Streams")}}
The {{domxref("MediaTrackSupportedConstraints")}} dictionary's **`latency`** property is a read-only Boolean value which is present (and set to `true`) in the object returned by {{domxref("MediaDevices.getSupportedConstraints()")}} if and only if the {{Glossary("user agent")}} supports the `latency` constraint.
If the constraint isn't supported, it's not included in the list, so this value will never be `false`.
You can access the supported constraints dictionary by calling `navigator.mediaDevices.getSupportedConstraints()`.
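For example, when the constraint is supported you might request a low audio latency (a sketch; the value is in seconds and the `0.01` figure is illustrative):

```js
// Prefer low audio latency when the latency constraint is supported.
async function startLowLatencyAudio() {
  const supported = navigator.mediaDevices.getSupportedConstraints();
  const audio = supported.latency ? { latency: { ideal: 0.01 } } : true;
  return navigator.mediaDevices.getUserMedia({ audio });
}
```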
## Value
This property is present in the dictionary (and its value is always `true`) if the user agent supports the `latency` constraint.
If the constraint isn't supported, this property is missing from the supported constraints dictionary, and you'll get {{jsxref("undefined")}} if you try to look at its value.
## Examples
```html hidden
<div id="result"></div>
```
```css hidden
#result {
font:
14px "Arial",
sans-serif;
}
```
```js
const result = document.getElementById("result");
const supported = navigator.mediaDevices.getSupportedConstraints().latency;
result.textContent = supported ? "Supported!" : "Not supported!";
```
### Result
{{ EmbedLiveSample('Examples', 600, 80) }}
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Media Capture and Streams API](/en-US/docs/Web/API/Media_Capture_and_Streams_API)
- {{domxref("MediaDevices.getSupportedConstraints()")}}
- {{domxref("MediaTrackSupportedConstraints")}}
- {{domxref("MediaStreamTrack")}}
| 0 |
data/mdn-content/files/en-us/web/api/mediatracksupportedconstraints | data/mdn-content/files/en-us/web/api/mediatracksupportedconstraints/channelcount/index.md | ---
title: "MediaTrackSupportedConstraints: channelCount property"
short-title: channelCount
slug: Web/API/MediaTrackSupportedConstraints/channelCount
page-type: web-api-instance-property
browser-compat: api.MediaTrackSupportedConstraints.channelCount
---
{{APIRef("Media Capture and Streams")}}
The {{domxref("MediaTrackSupportedConstraints")}} dictionary's
**`channelCount`** property is a read-only Boolean value which
is present (and set to `true`) in the object returned by
{{domxref("MediaDevices.getSupportedConstraints()")}} if and only if the
{{Glossary("user agent")}} supports the `channelCount` constraint. If the
constraint isn't supported, it's not included in the list, so this value will never be
`false`.
You can access the supported constraints dictionary by calling
`navigator.mediaDevices.getSupportedConstraints()`.
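For example, you might ask for stereo audio only when the constraint is supported (a minimal sketch; the function name is illustrative):

```js
// Prefer two-channel (stereo) audio when the constraint is supported.
async function startStereoCapture() {
  const supported = navigator.mediaDevices.getSupportedConstraints();
  const audio = supported.channelCount ? { channelCount: { ideal: 2 } } : true;
  return navigator.mediaDevices.getUserMedia({ audio });
}
```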
## Value
This property is present in the dictionary (and its value is always `true`)
if the user agent supports the `channelCount` constraint. If the constraint
isn't supported, this property is missing from the supported constraints dictionary, and
you'll get {{jsxref("undefined")}} if you try to look at its value.
## Examples
```html hidden
<div id="result"></div>
```
```css hidden
#result {
font:
14px "Arial",
sans-serif;
}
```
```js
const result = document.getElementById("result");
const supported = navigator.mediaDevices.getSupportedConstraints().channelCount;
result.textContent = supported ? "Supported!" : "Not supported!";
```
### Result
{{ EmbedLiveSample('Examples', 600, 80) }}
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Media Capture and Streams API](/en-US/docs/Web/API/Media_Capture_and_Streams_API)
- {{domxref("MediaDevices.getSupportedConstraints()")}}
- {{domxref("MediaTrackSupportedConstraints")}}
- {{domxref("MediaStreamTrack")}}
| 0 |
data/mdn-content/files/en-us/web/api/mediatracksupportedconstraints | data/mdn-content/files/en-us/web/api/mediatracksupportedconstraints/facingmode/index.md | ---
title: "MediaTrackSupportedConstraints: facingMode property"
short-title: facingMode
slug: Web/API/MediaTrackSupportedConstraints/facingMode
page-type: web-api-instance-property
browser-compat: api.MediaTrackSupportedConstraints.facingMode
---
{{APIRef("Media Capture and Streams")}}
The {{domxref("MediaTrackSupportedConstraints")}} dictionary's
**`facingMode`** property is a read-only Boolean value which is
present (and set to `true`) in the object returned by
{{domxref("MediaDevices.getSupportedConstraints()")}} if and only if the
{{Glossary("user agent")}} supports the `facingMode` constraint. If the
constraint isn't supported, it's not included in the list, so this value will never be
`false`.
You can access the supported constraints dictionary by calling
`navigator.mediaDevices.getSupportedConstraints()`.
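For example, you might prefer the rear-facing camera only when the constraint is supported (a minimal sketch):

```js
// Prefer the rear ("environment") camera when the constraint is supported.
async function startRearCameraCapture() {
  const supported = navigator.mediaDevices.getSupportedConstraints();
  const video = supported.facingMode ? { facingMode: "environment" } : true;
  return navigator.mediaDevices.getUserMedia({ video });
}
```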
## Value
This property is present in the dictionary (and its value is always `true`)
if the user agent supports the `facingMode` constraint. If the constraint isn't
supported, this property is missing from the supported constraints dictionary, and you'll
get {{jsxref("undefined")}} if you try to look at its value.
## Examples
```html hidden
<div id="result"></div>
```
```css hidden
#result {
font:
14px "Arial",
sans-serif;
}
```
```js
const result = document.getElementById("result");
const supported = navigator.mediaDevices.getSupportedConstraints().facingMode;
result.textContent = supported ? "Supported!" : "Not supported!";
```
### Result
{{ EmbedLiveSample('Examples', 600, 80) }}
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Media Capture and Streams API](/en-US/docs/Web/API/Media_Capture_and_Streams_API)
- {{domxref("MediaDevices.getSupportedConstraints()")}}
- {{domxref("MediaTrackSupportedConstraints")}}
- {{domxref("MediaStreamTrack")}}
| 0 |
data/mdn-content/files/en-us/web/api/mediatracksupportedconstraints | data/mdn-content/files/en-us/web/api/mediatracksupportedconstraints/width/index.md | ---
title: "MediaTrackSupportedConstraints: width property"
short-title: width
slug: Web/API/MediaTrackSupportedConstraints/width
page-type: web-api-instance-property
browser-compat: api.MediaTrackSupportedConstraints.width
---
{{APIRef("Media Capture and Streams")}}
The {{domxref("MediaTrackSupportedConstraints")}} dictionary's
**`width`** property is a read-only Boolean value which is
present (and set to `true`) in the object returned by
{{domxref("MediaDevices.getSupportedConstraints()")}} if and only if the
{{Glossary("user agent")}} supports the `width` constraint. If the constraint
isn't supported, it's not included in the list, so this value will never be
`false`.
You can access the supported constraints dictionary by calling
`navigator.mediaDevices.getSupportedConstraints()`.
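For example, you might ask for roughly 1280x720 video when the `width` and `height` constraints are supported (a minimal sketch; the resolution is illustrative):

```js
// Ask for roughly 720p video when the width and height constraints
// are both supported.
async function startHdCapture() {
  const supported = navigator.mediaDevices.getSupportedConstraints();
  const video = {};
  if (supported.width && supported.height) {
    video.width = { ideal: 1280 };
    video.height = { ideal: 720 };
  }
  return navigator.mediaDevices.getUserMedia({ video });
}
```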
## Value
This property is present in the dictionary (and its value is always `true`)
if the user agent supports the `width` constraint. If the constraint isn't
supported, this property is missing from the supported constraints dictionary, and you'll
get {{jsxref("undefined")}} if you try to look at its value.
## Examples
```html hidden
<div id="result"></div>
```
```css hidden
#result {
font:
14px "Arial",
sans-serif;
}
```
```js
const result = document.getElementById("result");
const supported = navigator.mediaDevices.getSupportedConstraints().width;
result.textContent = supported ? "Supported!" : "Not supported!";
```
### Result
{{ EmbedLiveSample('Examples', 600, 80) }}
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Media Capture and Streams API](/en-US/docs/Web/API/Media_Capture_and_Streams_API)
- {{domxref("MediaDevices.getSupportedConstraints()")}}
- {{domxref("MediaTrackSupportedConstraints")}}
- {{domxref("MediaStreamTrack")}}
| 0 |
data/mdn-content/files/en-us/web/api/mediatracksupportedconstraints | data/mdn-content/files/en-us/web/api/mediatracksupportedconstraints/height/index.md | ---
title: "MediaTrackSupportedConstraints: height property"
short-title: height
slug: Web/API/MediaTrackSupportedConstraints/height
page-type: web-api-instance-property
browser-compat: api.MediaTrackSupportedConstraints.height
---
{{APIRef("Media Capture and Streams")}}
The {{domxref("MediaTrackSupportedConstraints")}} dictionary's
**`height`** property is a read-only Boolean value which is
present (and set to `true`) in the object returned by
{{domxref("MediaDevices.getSupportedConstraints()")}} if and only if the
{{Glossary("user agent")}} supports the `height` constraint. If the
constraint isn't supported, it's not included in the list, so this value will never be
`false`.
You can access the supported constraints dictionary by calling
`navigator.mediaDevices.getSupportedConstraints()`.
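For example, you might require a minimum number of lines of video when the constraint is supported (a minimal sketch; the values are illustrative):

```js
// Require at least 480 lines of video, preferring 720, when supported.
async function startTallCapture() {
  const supported = navigator.mediaDevices.getSupportedConstraints();
  const video = supported.height ? { height: { min: 480, ideal: 720 } } : true;
  return navigator.mediaDevices.getUserMedia({ video });
}
```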
## Value
This property is present in the dictionary (and its value is always `true`)
if the user agent supports the `height` constraint. If the constraint isn't
supported, this property is missing from the supported constraints dictionary, and you'll
get {{jsxref("undefined")}} if you try to look at its value.
## Examples
```html hidden
<div id="result"></div>
```
```css hidden
#result {
font:
14px "Arial",
sans-serif;
}
```
```js
const result = document.getElementById("result");
const supported = navigator.mediaDevices.getSupportedConstraints().height;
result.textContent = supported ? "Supported!" : "Not supported!";
```
### Result
{{ EmbedLiveSample('Examples', 600, 80) }}
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Media Capture and Streams API](/en-US/docs/Web/API/Media_Capture_and_Streams_API)
- {{domxref("MediaDevices.getSupportedConstraints()")}}
- {{domxref("MediaTrackSupportedConstraints")}}
- {{domxref("MediaStreamTrack")}}
| 0 |
data/mdn-content/files/en-us/web/api/mediatracksupportedconstraints | data/mdn-content/files/en-us/web/api/mediatracksupportedconstraints/deviceid/index.md | ---
title: "MediaTrackSupportedConstraints: deviceId property"
short-title: deviceId
slug: Web/API/MediaTrackSupportedConstraints/deviceId
page-type: web-api-instance-property
browser-compat: api.MediaTrackSupportedConstraints.deviceId
---
{{APIRef("Media Capture and Streams")}}
The {{domxref("MediaTrackSupportedConstraints")}} dictionary's
**`deviceId`** property is a read-only Boolean value which is
present (and set to `true`) in the object returned by
{{domxref("MediaDevices.getSupportedConstraints()")}} if and only if the
{{Glossary("user agent")}} supports the `deviceId` constraint. If the
constraint isn't supported, it's not included in the list, so this value will never be
`false`.
You can access the supported constraints dictionary by calling
`navigator.mediaDevices.getSupportedConstraints()`.
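For example, you might capture from a specific camera chosen from {{domxref("MediaDevices.enumerateDevices", "enumerateDevices()")}} (a sketch; note that device labels are only exposed once the user has granted media permission):

```js
// Capture video from the camera whose label matches the given string.
async function captureFromCamera(label) {
  const devices = await navigator.mediaDevices.enumerateDevices();
  const camera = devices.find(
    (d) => d.kind === "videoinput" && d.label === label,
  );
  if (!camera) {
    throw new Error(`No camera labeled "${label}" was found`);
  }
  return navigator.mediaDevices.getUserMedia({
    video: { deviceId: { exact: camera.deviceId } },
  });
}
```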
## Value
This property is present in the dictionary (and its value is always `true`)
if the user agent supports the `deviceId` constraint. If the constraint isn't
supported, this property is missing from the supported constraints dictionary, and you'll
get {{jsxref("undefined")}} if you try to look at its value.
## Examples
```html hidden
<div id="result"></div>
```
```css hidden
#result {
font:
14px "Arial",
sans-serif;
}
```
```js
const result = document.getElementById("result");
const supported = navigator.mediaDevices.getSupportedConstraints().deviceId;
result.textContent = supported ? "Supported!" : "Not supported!";
```
### Result
{{ EmbedLiveSample('Examples', 600, 80) }}
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Media Capture and Streams API](/en-US/docs/Web/API/Media_Capture_and_Streams_API)
- {{domxref("MediaDevices.getSupportedConstraints()")}}
- {{domxref("MediaTrackSupportedConstraints")}}
- {{domxref("MediaStreamTrack")}}
| 0 |
data/mdn-content/files/en-us/web/api/mediatracksupportedconstraints | data/mdn-content/files/en-us/web/api/mediatracksupportedconstraints/displaysurface/index.md | ---
title: "MediaTrackSupportedConstraints: displaySurface property"
short-title: displaySurface
slug: Web/API/MediaTrackSupportedConstraints/displaySurface
page-type: web-api-instance-property
browser-compat: api.MediaTrackSupportedConstraints.displaySurface
---
{{APIRef("Media Capture and Streams")}}
The {{domxref("MediaTrackSupportedConstraints")}} dictionary's **`displaySurface`** property indicates whether or not the {{domxref("MediaTrackConstraints.displaySurface", "displaySurface")}} constraint is supported by the user agent and the device on which the content is being used.
The supported constraints list is obtained by calling {{domxref("MediaDevices.getSupportedConstraints","navigator.mediaDevices.getSupportedConstraints()")}}.
## Value
A Boolean value which is `true` if the {{domxref("MediaTrackConstraints.displaySurface", "displaySurface")}} constraint is supported by the device and user agent.
## Examples
The function below sets up the options object specifying the constraints for the call to
{{domxref("MediaDevices.getDisplayMedia", "getDisplayMedia()")}}. It adds the
`displaySurface` constraint (requesting that a full monitor be
shared) only if it is known to be supported by the browser. Capturing is then started
by calling `getDisplayMedia()` and attaching the returned stream to the video
element referenced by the variable `videoElem`.
```js
async function capture() {
  const supportedConstraints = navigator.mediaDevices.getSupportedConstraints();
  const displayMediaOptions = {
video: {},
audio: false,
};
if (supportedConstraints.displaySurface) {
displayMediaOptions.video.displaySurface = "monitor";
}
try {
videoElem.srcObject =
await navigator.mediaDevices.getDisplayMedia(displayMediaOptions);
} catch (err) {
/* handle the error */
}
}
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Screen Capture API](/en-US/docs/Web/API/Screen_Capture_API)
- [Using the screen capture API](/en-US/docs/Web/API/Screen_Capture_API/Using_Screen_Capture)
- [Capabilities, constraints, and settings](/en-US/docs/Web/API/Media_Capture_and_Streams_API/Constraints)
- {{domxref("MediaDevices.getDisplayMedia()")}}
- {{domxref("MediaStreamTrack.getConstraints()")}}
- {{domxref("MediaStreamTrack.applyConstraints()")}}
- {{domxref("MediaStreamTrack.getSettings()")}}
| 0 |
data/mdn-content/files/en-us/web/api/mediatracksupportedconstraints | data/mdn-content/files/en-us/web/api/mediatracksupportedconstraints/groupid/index.md | ---
title: "MediaTrackSupportedConstraints: groupId property"
short-title: groupId
slug: Web/API/MediaTrackSupportedConstraints/groupId
page-type: web-api-instance-property
browser-compat: api.MediaTrackSupportedConstraints.groupId
---
{{APIRef("Media Capture and Streams")}}
The {{domxref("MediaTrackSupportedConstraints")}} dictionary's
**`groupId`** property is a read-only Boolean value which is
present (and set to `true`) in the object returned by
{{domxref("MediaDevices.getSupportedConstraints()")}} if and only if the
{{Glossary("user agent")}} supports the `groupId` constraint. If the
constraint isn't supported, it's not included in the list, so this value will never be
`false`.
You can access the supported constraints dictionary by calling
`navigator.mediaDevices.getSupportedConstraints()`.
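For example, you might capture audio from a device in a known group, such as the microphone built into the same webcam as a previously selected camera (a minimal sketch):

```js
// Capture audio from a device belonging to the given group, when supported.
async function captureAudioFromGroup(groupId) {
  const supported = navigator.mediaDevices.getSupportedConstraints();
  const audio = supported.groupId ? { groupId: { exact: groupId } } : true;
  return navigator.mediaDevices.getUserMedia({ audio });
}
```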
## Value
This property is present in the dictionary (and its value is always `true`)
if the user agent supports the `groupId` constraint. If the constraint isn't
supported, this property is missing from the supported constraints dictionary, and you'll
get {{jsxref("undefined")}} if you try to look at its value.
## Examples
```html hidden
<div id="result"></div>
```
```css hidden
#result {
font:
14px "Arial",
sans-serif;
}
```
```js
const result = document.getElementById("result");
const supported = navigator.mediaDevices.getSupportedConstraints().groupId;
result.textContent = supported ? "Supported!" : "Not supported!";
```
### Result
{{ EmbedLiveSample('Examples', 600, 80) }}
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Media Capture and Streams API](/en-US/docs/Web/API/Media_Capture_and_Streams_API)
- {{domxref("MediaDevices.getSupportedConstraints()")}}
- {{domxref("MediaTrackSupportedConstraints")}}
- {{domxref("MediaStreamTrack")}}
| 0 |
data/mdn-content/files/en-us/web/api/mediatracksupportedconstraints | data/mdn-content/files/en-us/web/api/mediatracksupportedconstraints/samplerate/index.md | ---
title: "MediaTrackSupportedConstraints: sampleRate property"
short-title: sampleRate
slug: Web/API/MediaTrackSupportedConstraints/sampleRate
page-type: web-api-instance-property
browser-compat: api.MediaTrackSupportedConstraints.sampleRate
---
{{APIRef("Media Capture and Streams")}}
The {{domxref("MediaTrackSupportedConstraints")}} dictionary's
**`sampleRate`** property is a read-only Boolean value which is
present (and set to `true`) in the object returned by
{{domxref("MediaDevices.getSupportedConstraints()")}} if and only if the
{{Glossary("user agent")}} supports the `sampleRate` constraint. If the
constraint isn't supported, it's not included in the list, so this value will never be
`false`.
You can access the supported constraints dictionary by calling
`navigator.mediaDevices.getSupportedConstraints()`.
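For example, you might prefer a 48 kHz sample rate only when the constraint is supported (a minimal sketch; the rate is illustrative):

```js
// Prefer a 48 kHz sample rate when the constraint is supported.
async function startHighRateCapture() {
  const supported = navigator.mediaDevices.getSupportedConstraints();
  const audio = supported.sampleRate ? { sampleRate: { ideal: 48000 } } : true;
  return navigator.mediaDevices.getUserMedia({ audio });
}
```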
## Value
This property is present in the dictionary (and its value is always `true`)
if the user agent supports the `sampleRate` constraint. If the constraint isn't
supported, this property is missing from the supported constraints dictionary, and you'll
get {{jsxref("undefined")}} if you try to look at its value.
## Examples
```html hidden
<div id="result"></div>
```
```css hidden
#result {
font:
14px "Arial",
sans-serif;
}
```
```js
const result = document.getElementById("result");
const supported = navigator.mediaDevices.getSupportedConstraints().sampleRate;
result.textContent = supported ? "Supported!" : "Not supported!";
```
### Result
{{ EmbedLiveSample('Examples', 600, 80) }}
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Media Capture and Streams API](/en-US/docs/Web/API/Media_Capture_and_Streams_API)
- {{domxref("MediaDevices.getSupportedConstraints()")}}
- {{domxref("MediaTrackSupportedConstraints")}}
- {{domxref("MediaStreamTrack")}}
| 0 |
data/mdn-content/files/en-us/web/api/mediatracksupportedconstraints | data/mdn-content/files/en-us/web/api/mediatracksupportedconstraints/aspectratio/index.md | ---
title: "MediaTrackSupportedConstraints: aspectRatio property"
short-title: aspectRatio
slug: Web/API/MediaTrackSupportedConstraints/aspectRatio
page-type: web-api-instance-property
browser-compat: api.MediaTrackSupportedConstraints.aspectRatio
---
{{APIRef("Media Capture and Streams")}}
The {{domxref("MediaTrackSupportedConstraints")}} dictionary's **`aspectRatio`** property is a read-only Boolean value which is present (and set to `true`) in the object returned by {{domxref("MediaDevices.getSupportedConstraints()")}} if and only if the
{{Glossary("user agent")}} supports the `aspectRatio` constraint.
If the constraint isn't supported, it's not included in the list, so this value will never be `false`.
You can access the supported constraints dictionary by calling `navigator.mediaDevices.getSupportedConstraints()`.
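For example, you might prefer widescreen video only when the constraint is supported (a minimal sketch):

```js
// Prefer widescreen (16:9) video when the constraint is supported.
async function startWidescreenCapture() {
  const supported = navigator.mediaDevices.getSupportedConstraints();
  const video = supported.aspectRatio
    ? { aspectRatio: { ideal: 16 / 9 } }
    : true;
  return navigator.mediaDevices.getUserMedia({ video });
}
```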
## Value
This property is present in the dictionary (and its value is always `true`)
if the user agent supports the `aspectRatio` constraint. If the constraint
isn't supported, this property is missing from the supported constraints dictionary, and
you'll get {{jsxref("undefined")}} if you try to look at its value.
## Examples
```html hidden
<div id="result"></div>
```
```css hidden
#result {
font:
14px "Arial",
sans-serif;
}
```
```js
const result = document.getElementById("result");
const supported = navigator.mediaDevices.getSupportedConstraints().aspectRatio;
result.textContent = supported ? "Supported!" : "Not supported!";
```
### Result
{{ EmbedLiveSample('Examples', 600, 80) }}
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Media Capture and Streams API](/en-US/docs/Web/API/Media_Capture_and_Streams_API)
- {{domxref("MediaDevices.getSupportedConstraints()")}}
- {{domxref("MediaTrackSupportedConstraints")}}
- {{domxref("MediaStreamTrack")}}
| 0 |
data/mdn-content/files/en-us/web/api | data/mdn-content/files/en-us/web/api/xpathresult/index.md | ---
title: XPathResult
slug: Web/API/XPathResult
page-type: web-api-interface
browser-compat: api.XPathResult
---
{{APIRef}}
The **`XPathResult`** interface represents the results generated by evaluating an XPath expression within the context of a given node.
Since XPath expressions can result in a variety of result types, this interface makes it possible to determine and handle the type and value of the result.
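For example (a minimal sketch; the expression is illustrative), you can evaluate with `ANY_TYPE` and then inspect {{domxref("XPathResult.resultType", "resultType")}} to decide how to read the value:

```js
const result = document.evaluate(
  "count(//p)",
  document,
  null,
  XPathResult.ANY_TYPE,
  null,
);

switch (result.resultType) {
  case XPathResult.NUMBER_TYPE:
    console.log(`Number result: ${result.numberValue}`);
    break;
  case XPathResult.STRING_TYPE:
    console.log(`String result: ${result.stringValue}`);
    break;
  case XPathResult.BOOLEAN_TYPE:
    console.log(`Boolean result: ${result.booleanValue}`);
    break;
  default: {
    // Node-set results are consumed with iterateNext()
    // (or snapshotItem() for snapshot types).
    let node = result.iterateNext();
    while (node) {
      console.log(node.localName);
      node = result.iterateNext();
    }
  }
}
```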
## Instance properties
- {{domxref("XPathResult.booleanValue")}} {{ReadOnlyInline}}
- : A `boolean` representing the value of the result if `resultType` is `BOOLEAN_TYPE`.
- {{domxref("XPathResult.invalidIteratorState")}} {{ReadOnlyInline}}
- : Signifies that the iterator has become invalid. It is `true` if `resultType` is `UNORDERED_NODE_ITERATOR_TYPE` or `ORDERED_NODE_ITERATOR_TYPE` and the document has been modified since this result was returned.
- {{domxref("XPathResult.numberValue")}} {{ReadOnlyInline}}
- : A `number` representing the value of the result if `resultType` is `NUMBER_TYPE`.
- {{domxref("XPathResult.resultType")}} {{ReadOnlyInline}}
- : A `number` code representing the type of the result, as defined by the type constants.
- {{domxref("XPathResult.singleNodeValue")}} {{ReadOnlyInline}}
- : A {{domxref("Node")}} representing the value of the single node result, which may be `null`.
- {{domxref("XPathResult.snapshotLength")}} {{ReadOnlyInline}}
- : The number of nodes in the result snapshot.
- {{domxref("XPathResult.stringValue")}} {{ReadOnlyInline}}
- : A string representing the value of the result if `resultType` is `STRING_TYPE`.
## Instance methods
- {{domxref("XPathResult.iterateNext()")}}
- : If the result is a node set, this method iterates over it and returns the next node from it or `null` if there are no more nodes.
- {{domxref("XPathResult.snapshotItem()")}}
- : Returns an item of the snapshot collection or `null` in case the index is not within the range of nodes. Unlike the iterator result, the snapshot does not become invalid, but may not correspond to the current document if it is mutated.
## Constants
<table class="no-markdown">
<thead>
<tr>
<th>Result Type Defined Constant</th>
<th>Value</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>ANY_TYPE</code></td>
<td><code>0</code></td>
<td>
A result set containing whatever type naturally results from evaluation
of the expression. Note that if the result is a node-set then
<code>UNORDERED_NODE_ITERATOR_TYPE</code> is always the resulting type.
</td>
</tr>
<tr>
<td><code>NUMBER_TYPE</code></td>
<td><code>1</code></td>
<td>
A result containing a single number. This is useful, for example, in an
XPath expression using the <code>count()</code> function.
</td>
</tr>
<tr>
<td><code>STRING_TYPE</code></td>
<td><code>2</code></td>
<td>A result containing a single string.</td>
</tr>
<tr>
<td><code>BOOLEAN_TYPE</code></td>
<td><code>3</code></td>
<td>
A result containing a single boolean value. This is useful, for example,
in an XPath expression using the <code>not()</code> function.
</td>
</tr>
<tr>
<td><code>UNORDERED_NODE_ITERATOR_TYPE</code></td>
<td><code>4</code></td>
<td>
A result node-set containing all the nodes matching the expression. The
nodes may not necessarily be in the same order that they appear in the
document.
</td>
</tr>
<tr>
<td><code>ORDERED_NODE_ITERATOR_TYPE</code></td>
<td><code>5</code></td>
<td>
A result node-set containing all the nodes matching the expression. The
nodes in the result set are in the same order that they appear in the
document.
</td>
</tr>
<tr>
<td><code>UNORDERED_NODE_SNAPSHOT_TYPE</code></td>
<td><code>6</code></td>
<td>
A result node-set containing snapshots of all the nodes matching the
expression. The nodes may not necessarily be in the same order that they
appear in the document.
</td>
</tr>
<tr>
<td><code>ORDERED_NODE_SNAPSHOT_TYPE</code></td>
<td><code>7</code></td>
<td>
A result node-set containing snapshots of all the nodes matching the
expression. The nodes in the result set are in the same order that they
appear in the document.
</td>
</tr>
<tr>
<td><code>ANY_UNORDERED_NODE_TYPE</code></td>
<td><code>8</code></td>
<td>
A result node-set containing any single node that matches the
expression. The node is not necessarily the first node in the document
that matches the expression.
</td>
</tr>
<tr>
<td><code>FIRST_ORDERED_NODE_TYPE</code></td>
<td><code>9</code></td>
<td>
A result node-set containing the first node in the document that matches
the expression.
</td>
</tr>
</tbody>
</table>
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- {{domxref("Document.evaluate()")}}
- {{domxref("XPathExpression")}}
- [Dottoro Web Reference - XPathResult object](http://help.dottoro.com/ljagksjc.php)
| 0 |
data/mdn-content/files/en-us/web/api/xpathresult | data/mdn-content/files/en-us/web/api/xpathresult/resulttype/index.md | ---
title: "XPathResult: resultType property"
short-title: resultType
slug: Web/API/XPathResult/resultType
page-type: web-api-instance-property
browser-compat: api.XPathResult.resultType
---
{{APIRef("DOM XPath")}}
The read-only **`resultType`** property of the
{{domxref("XPathResult")}} interface represents the type of the result, as defined by
the type constants.
{{AvailableInWorkers}}
## Value
An integer value representing the type of the result, as defined by the type constants.
## Constants
<table class="no-markdown">
<thead>
<tr>
<th>Result Type Defined Constant</th>
<th>Value</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>ANY_TYPE</code></td>
<td><code>0</code></td>
<td>
A result set containing whatever type naturally results from evaluation
of the expression. Note that if the result is a node-set then
<code>UNORDERED_NODE_ITERATOR_TYPE</code> is always the resulting type.
</td>
</tr>
<tr>
<td><code>NUMBER_TYPE</code></td>
<td><code>1</code></td>
<td>
A result containing a single number. This is useful, for example, in an
XPath expression using the <code>count()</code> function.
</td>
</tr>
<tr>
<td><code>STRING_TYPE</code></td>
<td><code>2</code></td>
<td>A result containing a single string.</td>
</tr>
<tr>
<td><code>BOOLEAN_TYPE</code></td>
<td><code>3</code></td>
<td>
A result containing a single boolean value. This is useful, for example,
in an XPath expression using the <code>not()</code> function.
</td>
</tr>
<tr>
<td><code>UNORDERED_NODE_ITERATOR_TYPE</code></td>
<td><code>4</code></td>
<td>
A result node-set containing all the nodes matching the expression. The
nodes may not necessarily be in the same order that they appear in the
document.
</td>
</tr>
<tr>
<td><code>ORDERED_NODE_ITERATOR_TYPE</code></td>
<td><code>5</code></td>
<td>
A result node-set containing all the nodes matching the expression. The
nodes in the result set are in the same order that they appear in the
document.
</td>
</tr>
<tr>
<td><code>UNORDERED_NODE_SNAPSHOT_TYPE</code></td>
<td><code>6</code></td>
<td>
A result node-set containing snapshots of all the nodes matching the
expression. The nodes may not necessarily be in the same order that they
appear in the document.
</td>
</tr>
<tr>
<td><code>ORDERED_NODE_SNAPSHOT_TYPE</code></td>
<td><code>7</code></td>
<td>
A result node-set containing snapshots of all the nodes matching the
expression. The nodes in the result set are in the same order that they
appear in the document.
</td>
</tr>
<tr>
<td><code>ANY_UNORDERED_NODE_TYPE</code></td>
<td><code>8</code></td>
<td>
A result node-set containing any single node that matches the
expression. The node is not necessarily the first node in the document
that matches the expression.
</td>
</tr>
<tr>
<td><code>FIRST_ORDERED_NODE_TYPE</code></td>
<td><code>9</code></td>
<td>
A result node-set containing the first node in the document that matches
the expression.
</td>
</tr>
</tbody>
</table>
## Examples
The following example shows the use of the `resultType` property.
### HTML
```html
<div>XPath example</div>
<div>Is XPath result a node set: <output></output></div>
```
### JavaScript
```js
const xpath = "//div";
const result = document.evaluate(
xpath,
document,
null,
XPathResult.ANY_TYPE,
null,
);
document.querySelector("output").textContent =
result.resultType >= XPathResult.UNORDERED_NODE_ITERATOR_TYPE &&
result.resultType <= XPathResult.FIRST_ORDERED_NODE_TYPE;
```
### Result
{{EmbedLiveSample('Examples', 400, 70)}}
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api/xpathresult | data/mdn-content/files/en-us/web/api/xpathresult/invaliditeratorstate/index.md | ---
title: "XPathResult: invalidIteratorState property"
short-title: invalidIteratorState
slug: Web/API/XPathResult/invalidIteratorState
page-type: web-api-instance-property
browser-compat: api.XPathResult.invalidIteratorState
---
{{APIRef("DOM XPath")}}
The read-only **`invalidIteratorState`** property of the
{{domxref("XPathResult")}} interface signifies that the iterator has become invalid. It
is `true` if {{domxref("XPathResult.resultType")}} is
`UNORDERED_NODE_ITERATOR_TYPE` or `ORDERED_NODE_ITERATOR_TYPE` and
the document has been modified since this result was returned.
{{AvailableInWorkers}}
## Value
A boolean value indicating whether the iterator has become invalid.
## Examples
The following example shows the use of the `invalidIteratorState` property.
### HTML
```html
<div>XPath example</div>
<p>Iterator state: <output></output></p>
```
### JavaScript
```js
const xpath = "//div";
const result = document.evaluate(
xpath,
document,
null,
XPathResult.ANY_TYPE,
null,
);
// Invalidates the iterator state
document.querySelector("div").remove();
document.querySelector("output").textContent = result.invalidIteratorState
? "invalid"
: "valid";
```
### Result
{{EmbedLiveSample('Examples', 400, 70)}}
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api/xpathresult | data/mdn-content/files/en-us/web/api/xpathresult/singlenodevalue/index.md | ---
title: "XPathResult: singleNodeValue property"
short-title: singleNodeValue
slug: Web/API/XPathResult/singleNodeValue
page-type: web-api-instance-property
browser-compat: api.XPathResult.singleNodeValue
---
{{APIRef("DOM XPath")}}
The read-only **`singleNodeValue`** property of the
{{domxref("XPathResult")}} interface returns a {{domxref("Node")}}, or
`null` if no node was matched, for a result whose
{{domxref("XPathResult.resultType")}} is `ANY_UNORDERED_NODE_TYPE` or
`FIRST_ORDERED_NODE_TYPE`.
## Value
The return value is the {{domxref("Node")}} value of the `XPathResult`
returned by {{domxref("Document.evaluate()")}}.
### Exceptions
#### TYPE_ERR
In case {{domxref("XPathResult.resultType")}} is not
`ANY_UNORDERED_NODE_TYPE` or `FIRST_ORDERED_NODE_TYPE`, an
{{domxref("XPathException")}} of type `TYPE_ERR` is thrown.
## Examples
The following example shows the use of the `singleNodeValue` property.
### HTML
```html
<div>XPath example</div>
<div>
Tag name of the element having the text content 'XPath example':
<output></output>
</div>
```
### JavaScript
```js
const xpath = "//*[text()='XPath example']";
const result = document.evaluate(
xpath,
document,
null,
XPathResult.FIRST_ORDERED_NODE_TYPE,
null,
);
document.querySelector("output").textContent = result.singleNodeValue.localName;
```
### Result
{{EmbedLiveSample('Examples', 400, 70)}}
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api/xpathresult | data/mdn-content/files/en-us/web/api/xpathresult/iteratenext/index.md | ---
title: "XPathResult: iterateNext() method"
short-title: iterateNext()
slug: Web/API/XPathResult/iterateNext
page-type: web-api-instance-method
browser-compat: api.XPathResult.iterateNext
---
{{APIRef("DOM XPath")}}
The **`iterateNext()`** method of the
{{domxref("XPathResult")}} interface iterates over a node set result and returns the
next node from it or `null` if there are no more nodes.
## Syntax
```js-nolint
iterateNext()
```
### Parameters
None.
### Return value
The next {{domxref("Node")}} within the node set of the `XPathResult`.
### Exceptions
#### TYPE_ERR
In case {{domxref("XPathResult.resultType")}} is not
`UNORDERED_NODE_ITERATOR_TYPE` or `ORDERED_NODE_ITERATOR_TYPE`, an
{{domxref("XPathException")}} of type `TYPE_ERR` is thrown.
#### INVALID_STATE_ERR
If the document is mutated since the result was returned, an
{{domxref("XPathException")}} of type `INVALID_STATE_ERR` is thrown.
## Examples
The following example shows the use of the `iterateNext()` method.
### HTML
```html
<div>XPath example</div>
<div>Tag names of the matched nodes: <output></output></div>
```
### JavaScript
```js
const xpath = "//div";
const result = document.evaluate(
xpath,
document,
null,
XPathResult.ANY_TYPE,
null,
);
let node = null;
const tagNames = [];
while ((node = result.iterateNext())) {
tagNames.push(node.localName);
}
document.querySelector("output").textContent = tagNames.join(", ");
```
### Result
{{EmbedLiveSample('Examples')}}
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api/xpathresult | data/mdn-content/files/en-us/web/api/xpathresult/snapshotlength/index.md | ---
title: "XPathResult: snapshotLength property"
short-title: snapshotLength
slug: Web/API/XPathResult/snapshotLength
page-type: web-api-instance-property
browser-compat: api.XPathResult.snapshotLength
---
{{APIRef("DOM XPath")}}
The read-only **`snapshotLength`** property of the
{{domxref("XPathResult")}} interface represents the number of nodes in the result
snapshot.
{{AvailableInWorkers}}
## Value
An integer value representing the number of nodes in the result snapshot.
### Exceptions
#### TYPE_ERR
In case {{domxref("XPathResult.resultType")}} is not
`UNORDERED_NODE_SNAPSHOT_TYPE` or `ORDERED_NODE_SNAPSHOT_TYPE`, an
{{domxref("XPathException")}} of type `TYPE_ERR` is thrown.
## Examples
The following example shows the use of the `snapshotLength` property.
### HTML
```html
<div>XPath example</div>
<div>Number of matched nodes: <output></output></div>
```
### JavaScript
```js
const xpath = "//div";
const result = document.evaluate(
xpath,
document,
null,
XPathResult.ORDERED_NODE_SNAPSHOT_TYPE,
null,
);
document.querySelector("output").textContent = result.snapshotLength;
```
### Result
{{EmbedLiveSample('Examples', 400, 70)}}
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api/xpathresult | data/mdn-content/files/en-us/web/api/xpathresult/booleanvalue/index.md | ---
title: "XPathResult: booleanValue property"
short-title: booleanValue
slug: Web/API/XPathResult/booleanValue
page-type: web-api-instance-property
browser-compat: api.XPathResult.booleanValue
---
{{APIRef("DOM XPath")}}
The read-only **`booleanValue`** property of the
{{domxref("XPathResult")}} interface returns the boolean value of a result with
{{domxref("XPathResult.resultType")}} being `BOOLEAN_TYPE`.
{{AvailableInWorkers}}
## Value
The return value is the boolean value of the `XPathResult` returned by
{{domxref("Document.evaluate()")}}.
### Exceptions
#### TYPE_ERR
In case {{domxref("XPathResult.resultType")}} is not `BOOLEAN_TYPE`, an
{{domxref("XPathException")}} of type `TYPE_ERR` is thrown.
## Examples
The following example shows the use of the `booleanValue` property.
### HTML
```html
<div>XPath example</div>
<p>Text is 'XPath example': <output></output></p>
```
### JavaScript
```js
const xpath = "//div/text() = 'XPath example'";
const result = document.evaluate(
xpath,
document,
null,
XPathResult.BOOLEAN_TYPE,
null,
);
document.querySelector("output").textContent = result.booleanValue;
```
### Result
{{EmbedLiveSample('Examples', 400, 70)}}
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api/xpathresult | data/mdn-content/files/en-us/web/api/xpathresult/numbervalue/index.md | ---
title: "XPathResult: numberValue property"
short-title: numberValue
slug: Web/API/XPathResult/numberValue
page-type: web-api-instance-property
browser-compat: api.XPathResult.numberValue
---
{{APIRef("DOM XPath")}}
The read-only **`numberValue`** property of the
{{domxref("XPathResult")}} interface returns the numeric value of a result with
{{domxref("XPathResult.resultType")}} being `NUMBER_TYPE`.
{{AvailableInWorkers}}
## Value
The return value is the numeric value of the `XPathResult` returned by
{{domxref("Document.evaluate()")}}.
### Exceptions
#### TYPE_ERR
In case {{domxref("XPathResult.resultType")}} is not `NUMBER_TYPE`, an
{{domxref("XPathException")}} of type `TYPE_ERR` is thrown.
## Examples
The following example shows the use of the `numberValue` property.
### HTML
```html
<div>XPath example</div>
<div>Number of &lt;div&gt;s: <output></output></div>
```
### JavaScript
```js
const xpath = "count(//div)";
const result = document.evaluate(
xpath,
document,
null,
XPathResult.NUMBER_TYPE,
null,
);
document.querySelector("output").textContent = result.numberValue;
```
### Result
{{EmbedLiveSample('Examples', 400, 70)}}
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api/xpathresult | data/mdn-content/files/en-us/web/api/xpathresult/stringvalue/index.md | ---
title: "XPathResult: stringValue property"
short-title: stringValue
slug: Web/API/XPathResult/stringValue
page-type: web-api-instance-property
browser-compat: api.XPathResult.stringValue
---
{{APIRef("DOM XPath")}}
The read-only **`stringValue`** property of the
{{domxref("XPathResult")}} interface returns the string value of a result with
{{domxref("XPathResult.resultType")}} being `STRING_TYPE`.
{{AvailableInWorkers}}
## Value
The return value is the string value of the `XPathResult` returned by
{{domxref("Document.evaluate()")}}.
### Exceptions
#### TYPE_ERR
In case {{domxref("XPathResult.resultType")}} is not `STRING_TYPE`, an
{{domxref("XPathException")}} of type `TYPE_ERR` is thrown.
## Examples
The following example shows the use of the `stringValue` property.
### HTML
```html
<div>XPath example</div>
<div>Text content of the &lt;div&gt; above: <output></output></div>
```
### JavaScript
```js
const xpath = "//div/text()";
const result = document.evaluate(
xpath,
document,
null,
XPathResult.STRING_TYPE,
null,
);
document.querySelector("output").textContent = result.stringValue;
```
### Result
{{EmbedLiveSample('Examples', 400, 70)}}
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api/xpathresult | data/mdn-content/files/en-us/web/api/xpathresult/snapshotitem/index.md | ---
title: "XPathResult: snapshotItem() method"
short-title: snapshotItem()
slug: Web/API/XPathResult/snapshotItem
page-type: web-api-instance-method
browser-compat: api.XPathResult.snapshotItem
---
{{APIRef("DOM XPath")}}
The **`snapshotItem()`** method of the
{{domxref("XPathResult")}} interface returns an item of the snapshot collection or
`null` in case the index is not within the range of nodes. Unlike the
iterator result, the snapshot does not become invalid, but may not correspond to the
current document if it is mutated.
## Syntax
```js-nolint
snapshotItem(i)
```
### Parameters
- `i`
- : A number, the index of the item.
### Return value
The {{domxref("Node")}} at the given index within the node set of the
`XPathResult`.
### Exceptions
#### TYPE_ERR
In case {{domxref("XPathResult.resultType")}} is not
`UNORDERED_NODE_SNAPSHOT_TYPE` or `ORDERED_NODE_SNAPSHOT_TYPE`, an
{{domxref("XPathException")}} of type `TYPE_ERR` is thrown.
## Examples
The following example shows the use of the `snapshotItem()` method.
### HTML
```html
<div>XPath example</div>
<div>Tag names of the matched nodes: <output></output></div>
```
### JavaScript
```js
const xpath = "//div";
const result = document.evaluate(
xpath,
document,
null,
XPathResult.ORDERED_NODE_SNAPSHOT_TYPE,
null,
);
let node = null;
const tagNames = [];
for (let i = 0; i < result.snapshotLength; i++) {
node = result.snapshotItem(i);
tagNames.push(node.localName);
}
document.querySelector("output").textContent = tagNames.join(", ");
```
### Result
{{EmbedLiveSample('Examples')}}
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api | data/mdn-content/files/en-us/web/api/nodelist/index.md | ---
title: NodeList
slug: Web/API/NodeList
page-type: web-api-interface
browser-compat: api.NodeList
---
{{APIRef("DOM")}}
**`NodeList`** objects are collections of [nodes](/en-US/docs/Web/API/Node), usually returned by properties such as {{domxref("Node.childNodes")}} and methods such as {{domxref("document.querySelectorAll()")}}.
> **Note:** Although `NodeList` is not an `Array`, it is possible to iterate over it with `forEach()`. It can also be converted to a real `Array` using {{jsxref("Array.from()")}}.
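For example (a short sketch; the selector and class name are illustrative), a `NodeList` can be iterated directly, or converted first when array methods such as `map()` are needed:

```js
// Iterate the NodeList directly…
const divs = document.querySelectorAll("div");
divs.forEach((div) => div.classList.add("highlighted"));

// …or convert it to a true Array to use map(), filter(), and so on.
const ids = Array.from(divs).map((div) => div.id);
```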
## Live vs. Static NodeLists
Although they are both considered `NodeList` objects, there are 2 varieties of NodeList: _live_ and _static_.
### Live NodeLists
In some cases, the `NodeList` is _live_, which means that changes in the DOM automatically update the collection.
For example, {{domxref("Node.childNodes")}} is live:
```js
const parent = document.getElementById("parent");
let childNodes = parent.childNodes;
console.log(childNodes.length); // let's assume "2"
parent.appendChild(document.createElement("div"));
console.log(childNodes.length); // outputs "3"
```
### Static NodeLists
In other cases, the `NodeList` is _static_, where any changes in the DOM do not affect the content of the collection. The ubiquitous {{domxref("document.querySelectorAll()")}} method returns a _static_ `NodeList`.
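For example (a sketch mirroring the live example above), the length of a static `NodeList` does not change when the DOM does:

```js
const paragraphs = document.querySelectorAll("p"); // static
console.log(paragraphs.length); // let's assume "2"
document.body.appendChild(document.createElement("p"));
console.log(paragraphs.length); // still outputs "2"
```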
It's good to keep this distinction in mind when you choose how to iterate over the items in the `NodeList`, and whether you should cache the list's `length`.
## Instance properties
- {{domxref("NodeList.length")}} {{ReadOnlyInline}}
- : The number of nodes in the `NodeList`.
## Instance methods
- {{domxref("NodeList.item()")}}
- : Returns an item in the list by its index, or `null` if the index is out-of-bounds.
An alternative to accessing `nodeList[i]` (which instead returns `undefined` when `i` is out-of-bounds). This is mostly useful for non-JavaScript DOM implementations.
- {{domxref("NodeList.entries()")}}
- : Returns an {{jsxref("Iteration_protocols","iterator")}}, allowing code to go through all key/value pairs contained in the collection. (In this case, the keys are integers starting from `0` and the values are nodes.)
- {{domxref("NodeList.forEach()")}}
- : Executes a provided function once per `NodeList` element, passing the element as an argument to the function.
- {{domxref("NodeList.keys()")}}
- : Returns an {{jsxref("Iteration_protocols", "iterator")}}, allowing code to go through all the keys of the key/value pairs contained in the collection. (In this case, the keys are integers starting from `0`.)
- {{domxref("NodeList.values()")}}
- : Returns an {{jsxref("Iteration_protocols", "iterator")}} allowing code to go through all values (nodes) of the key/value pairs contained in the collection.
## Example
It's possible to loop over the items in a `NodeList` using a [for](/en-US/docs/Web/JavaScript/Reference/Statements/for) loop:
```js
for (let i = 0; i < myNodeList.length; i++) {
let item = myNodeList[i];
}
```
**Don't use [`for...in`](/en-US/docs/Web/JavaScript/Reference/Statements/for...in) to enumerate the items in `NodeList`s**, since they will _also_ enumerate its `length` and `item` properties and cause errors if your script assumes it only has to deal with {{domxref("element")}} objects. Also, `for...in` is not guaranteed to visit the properties in any particular order.
[`for...of`](/en-US/docs/Web/JavaScript/Reference/Statements/for...of) loops loop over `NodeList` objects correctly:
```js
const list = document.querySelectorAll("input[type=checkbox]");
for (const checkbox of list) {
checkbox.checked = true;
}
```
Browsers also support the iterator method ({{domxref("NodeList.forEach()", "forEach()")}}) as well as {{domxref("NodeList.entries()", "entries()")}}, {{domxref("NodeList.values()", "values()")}}, and {{domxref("NodeList.keys()", "keys()")}}.
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api/nodelist | data/mdn-content/files/en-us/web/api/nodelist/length/index.md | ---
title: "NodeList: length property"
short-title: length
slug: Web/API/NodeList/length
page-type: web-api-instance-property
browser-compat: api.NodeList.length
---
{{APIRef("DOM")}}
The **`NodeList.length`** property returns the number of items
in a {{domxref("NodeList")}}.
## Value
An integer value representing the number of items in a `NodeList`.
## Examples
The `length` property is often useful in DOM programming. It's often used to
test the length of a list, to see if there are any items in it at all. It's also commonly used as the
iterator in a `for` loop, as in this example.
```js
// All the paragraphs in the document
const items = document.getElementsByTagName("p");
// For each item in the list,
// append the entire element as a string of HTML
let gross = "";
for (let i = 0; i < items.length; i++) {
gross += items[i].innerHTML;
}
// gross is now all the HTML for the paragraphs
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api/nodelist | data/mdn-content/files/en-us/web/api/nodelist/keys/index.md | ---
title: "NodeList: keys() method"
short-title: keys()
slug: Web/API/NodeList/keys
page-type: web-api-instance-method
browser-compat: api.NodeList.keys
---
{{APIRef("DOM")}}
The **`NodeList.keys()`** method returns an
{{jsxref("Iteration_protocols",'iterator')}} allowing code to go through all keys contained
in this object. The keys are unsigned integers.
## Syntax
```js-nolint
keys()
```
### Return value
Returns an {{jsxref("Iteration_protocols","iterator")}}.
## Example
```js
const node = document.createElement("div");
const kid1 = document.createElement("p");
const kid2 = document.createTextNode("hey");
const kid3 = document.createElement("span");
node.appendChild(kid1);
node.appendChild(kid2);
node.appendChild(kid3);
let list = node.childNodes;
// Using for...of
for (const key of list.keys()) {
console.log(key);
}
```
The result is:
```plain
0
1
2
```
## Browser compatibility
{{Compat}}
## See also
- [Polyfill of `NodeList.prototype.keys` in `core-js`](https://github.com/zloirock/core-js#iterable-dom-collections)
- {{domxref("Node")}}
- {{domxref("NodeList")}}
| 0 |
data/mdn-content/files/en-us/web/api/nodelist | data/mdn-content/files/en-us/web/api/nodelist/item/index.md | ---
title: "NodeList: item() method"
short-title: item()
slug: Web/API/NodeList/item
page-type: web-api-instance-method
browser-compat: api.NodeList.item
---
{{APIRef("DOM")}}
Returns a node from a [`NodeList`](/en-US/docs/Web/API/NodeList) by index. This method
doesn't throw exceptions as long as you provide an argument. A value of `null`
is returned if the index is out of range, and a {{jsxref("TypeError")}} is thrown if no
argument is provided.
## Syntax
```js-nolint
item(index)
```
JavaScript also offers an array-like bracketed syntax for obtaining an item from a
NodeList by index:
```js
nodeItem = nodeList[index];
```
### Parameters
- `index`
  - : The index of the node to be fetched. The index is zero-based.
### Return value
The `index`th node in the `nodeList` returned by the `item` method.
### Exceptions
- {{jsxref("TypeError")}}
- : Thrown if no argument is provided.
## Examples
```js
const tables = document.getElementsByTagName("table");
const firstTable = tables.item(1); // or tables[1] - returns the second table in the DOM
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api/nodelist | data/mdn-content/files/en-us/web/api/nodelist/values/index.md | ---
title: "NodeList: values() method"
short-title: values()
slug: Web/API/NodeList/values
page-type: web-api-instance-method
browser-compat: api.NodeList.values
---
{{APIRef("DOM")}}
The **`NodeList.values()`** method returns an
{{jsxref("Iteration_protocols",'iterator')}} allowing code to go through all values contained
in this object. The values are {{domxref("Node")}} objects.
## Syntax
```js-nolint
values()
```
### Return value
Returns an {{jsxref("Iteration_protocols","iterator")}}.
## Example
```js
const node = document.createElement("div");
const kid1 = document.createElement("p");
const kid2 = document.createTextNode("hey");
const kid3 = document.createElement("span");
node.appendChild(kid1);
node.appendChild(kid2);
node.appendChild(kid3);
const list = node.childNodes;
// Using for...of
for (const value of list.values()) {
console.log(value);
}
```
The result is:
```plain
<p>
#text "hey"
<span>
```
## Browser compatibility
{{Compat}}
## See also
- [Polyfill of `NodeList.prototype.values` in `core-js`](https://github.com/zloirock/core-js#iterable-dom-collections)
- {{domxref("Node")}}
- {{domxref("NodeList")}}
| 0 |
data/mdn-content/files/en-us/web/api/nodelist | data/mdn-content/files/en-us/web/api/nodelist/foreach/index.md | ---
title: "NodeList: forEach() method"
short-title: forEach()
slug: Web/API/NodeList/forEach
page-type: web-api-instance-method
browser-compat: api.NodeList.forEach
---
{{APIRef("DOM")}}
The **`forEach()`** method of the {{domxref("NodeList")}}
interface calls the callback given as a parameter once for each value in the list, in
insertion order.
## Syntax
```js-nolint
forEach(callback)
forEach(callback, thisArg)
```
### Parameters
- `callback`
- : A function to execute on each element of `someNodeList`. It
accepts 3 parameters:
- `currentValue`
- : The current element being processed in `someNodeList`.
- `currentIndex` {{Optional_inline}}
- : The index of the `currentValue` being processed in
`someNodeList`.
- `listObj` {{Optional_inline}}
- : The `someNodeList` that `forEach()` is being
applied to.
- `thisArg` {{Optional_inline}}
- : Value to use as
[`this`](/en-US/docs/Web/JavaScript/Reference/Operators/this)
when executing `callback`.
### Return value
{{jsxref('undefined')}}.
## Example
```js
const node = document.createElement("div");
const kid1 = document.createElement("p");
const kid2 = document.createTextNode("hey");
const kid3 = document.createElement("span");
node.appendChild(kid1);
node.appendChild(kid2);
node.appendChild(kid3);
const list = node.childNodes;
list.forEach(function (currentValue, currentIndex, listObj) {
console.log(`${currentValue}, ${currentIndex}, ${this}`);
}, "myThisArg");
```
The above code results in the following:
```plain
[object HTMLParagraphElement], 0, myThisArg
[object Text], 1, myThisArg
[object HTMLSpanElement], 2, myThisArg
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Polyfill of `NodeList.prototype.forEach` in `core-js`](https://github.com/zloirock/core-js#iterable-dom-collections)
- {{domxref("Node")}}
- {{domxref("NodeList")}}
| 0 |
data/mdn-content/files/en-us/web/api/nodelist | data/mdn-content/files/en-us/web/api/nodelist/entries/index.md | ---
title: "NodeList: entries() method"
short-title: entries()
slug: Web/API/NodeList/entries
page-type: web-api-instance-method
browser-compat: api.NodeList.entries
---
{{APIRef("DOM")}}
The **`NodeList.entries()`** method returns an
{{jsxref("Iteration_protocols",'iterator')}} allowing code to go through all key/value pairs
contained in this object. The values are {{domxref("Node")}} objects.
## Syntax
```js-nolint
entries()
```
### Return value
Returns an {{jsxref("Iteration_protocols","iterator")}}.
## Example
```js
const node = document.createElement("div");
const kid1 = document.createElement("p");
const kid2 = document.createTextNode("hey");
const kid3 = document.createElement("span");
node.appendChild(kid1);
node.appendChild(kid2);
node.appendChild(kid3);
const list = node.childNodes;
// Using for...of
for (const entry of list.entries()) {
console.log(entry);
}
```
results in:
```plain
Array [ 0, <p> ]
Array [ 1, #text "hey" ]
Array [ 2, <span> ]
```
## Browser compatibility
{{Compat}}
## See also
- [Polyfill of `NodeList.prototype.entries` in `core-js`](https://github.com/zloirock/core-js#iterable-dom-collections)
- {{domxref("Node")}}
- {{domxref("NodeList")}}
| 0 |
data/mdn-content/files/en-us/web/api | data/mdn-content/files/en-us/web/api/event/index.md | ---
title: Event
slug: Web/API/Event
page-type: web-api-interface
browser-compat: api.Event
---
{{APIRef("DOM")}}
The **`Event`** interface represents an event which takes place in the DOM.
An event can be triggered by a user action, e.g., clicking the mouse button or tapping the keyboard, or generated by APIs to represent the progress of an asynchronous task. It can also be triggered programmatically, such as by calling the [`HTMLElement.click()`](/en-US/docs/Web/API/HTMLElement/click) method of an element, or by defining the event, then sending it to a specified target using [`EventTarget.dispatchEvent()`](/en-US/docs/Web/API/EventTarget/dispatchEvent).
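For example (a minimal sketch; the `awesome` event name and `#status` selector are illustrative, and the element is assumed to exist):

```js
// Listen for a synthetic event, then construct and dispatch it.
const elem = document.querySelector("#status");
elem.addEventListener("awesome", (e) => console.log(`Received: ${e.type}`));
elem.dispatchEvent(new Event("awesome"));
```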
There are many types of events, some of which use other interfaces based on the main `Event` interface. `Event` itself contains the properties and methods which are common to all events.
Many DOM elements can be set up to accept (or "listen" for) these events, and execute code in response to process (or "handle") them. Event-handlers are usually connected (or "attached") to various [HTML elements](/en-US/docs/Web/HTML/Element) (such as `<button>`, `<div>`, `<span>`, etc.) using [`EventTarget.addEventListener()`](/en-US/docs/Web/API/EventTarget/addEventListener), and this generally replaces using the old HTML [event handler attributes](/en-US/docs/Web/HTML/Global_attributes). Further, when properly added, such handlers can also be disconnected if needed using [`removeEventListener()`](/en-US/docs/Web/API/EventTarget/removeEventListener).
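For example (a sketch assuming a `<button>` is present in the page; note that removing a listener requires the same function reference that was attached):

```js
// Attach a named handler so it can be removed later.
function handleClick(event) {
  console.log(`Clicked: ${event.currentTarget.tagName}`);
}

const button = document.querySelector("button");
button.addEventListener("click", handleClick);

// Later, when the handler is no longer needed:
button.removeEventListener("click", handleClick);
```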
> **Note:** One element can have several such handlers, even for the exact same event—particularly if separate, independent code modules attach them, each for its own independent purposes. (For example, a webpage with an advertising-module and statistics-module both monitoring video-watching.)
When there are many nested elements, each with its own handler(s), event processing can become very complicated—especially where a parent element receives the very same event as its child elements because "spatially" they overlap so the event technically occurs in both, and the processing order of such events depends on the [Event bubbling and capture](/en-US/docs/Learn/JavaScript/Building_blocks/Events#event_bubbling_and_capture) settings of each handler triggered.
## Interfaces based on Event
Below is a list of interfaces which are based on the main `Event` interface, with links to their respective documentation in the MDN API reference.
Note that all event interfaces have names which end in "Event".
- {{domxref("AnimationEvent")}}
- {{domxref("AudioProcessingEvent")}} {{Deprecated_Inline}}
- {{domxref("BeforeUnloadEvent")}}
- {{domxref("BlobEvent")}}
- {{domxref("ClipboardEvent")}}
- {{domxref("CloseEvent")}}
- {{domxref("CompositionEvent")}}
- {{domxref("CustomEvent")}}
- {{domxref("DeviceMotionEvent")}}
- {{domxref("DeviceOrientationEvent")}}
- {{domxref("DragEvent")}}
- {{domxref("ErrorEvent")}}
- {{domxref("FetchEvent")}}
- {{domxref("FocusEvent")}}
- {{domxref("FontFaceSetLoadEvent")}}
- {{domxref("FormDataEvent")}}
- {{domxref("GamepadEvent")}}
- {{domxref("HashChangeEvent")}}
- {{domxref("HIDInputReportEvent")}}
- {{domxref("IDBVersionChangeEvent")}}
- {{domxref("InputEvent")}}
- {{domxref("KeyboardEvent")}}
- {{domxref("MediaStreamEvent")}} {{Deprecated_Inline}}
- {{domxref("MessageEvent")}}
- {{domxref("MouseEvent")}}
- {{domxref("MutationEvent")}} {{Deprecated_Inline}}
- {{domxref("OfflineAudioCompletionEvent")}}
- {{domxref("PageTransitionEvent")}}
- {{domxref("PaymentRequestUpdateEvent")}}
- {{domxref("PointerEvent")}}
- {{domxref("PopStateEvent")}}
- {{domxref("ProgressEvent")}}
- {{domxref("RTCDataChannelEvent")}}
- {{domxref("RTCPeerConnectionIceEvent")}}
- {{domxref("StorageEvent")}}
- {{domxref("SubmitEvent")}}
- {{domxref("SVGEvent")}} {{Deprecated_Inline}}
- {{domxref("TimeEvent")}}
- {{domxref("TouchEvent")}}
- {{domxref("TrackEvent")}}
- {{domxref("TransitionEvent")}}
- {{domxref("UIEvent")}}
- {{domxref("WebGLContextEvent")}}
- {{domxref("WheelEvent")}}
## Constructor
- {{domxref("Event.Event", "Event()")}}
- : Creates an `Event` object, returning it to the caller.
## Instance properties
- {{domxref("Event.bubbles")}} {{ReadOnlyInline}}
- : A boolean value indicating whether or not the event bubbles up through the DOM.
- {{domxref("Event.cancelable")}} {{ReadOnlyInline}}
- : A boolean value indicating whether the event is cancelable.
- {{domxref("Event.composed")}} {{ReadOnlyInline}}
- : A boolean indicating whether or not the event can bubble across the boundary between the shadow DOM and the regular DOM.
- {{domxref("Event.currentTarget")}} {{ReadOnlyInline}}
- : A reference to the currently registered target for the event. This is the object to which the event is currently slated to be sent. It's possible this has been changed along the way through _retargeting_.
- {{domxref("Event.defaultPrevented")}} {{ReadOnlyInline}}
- : Indicates whether or not the call to {{domxref("event.preventDefault()")}} canceled the event.
- {{domxref("Event.eventPhase")}} {{ReadOnlyInline}}
- : Indicates which phase of the event flow is being processed. It is one of the following numbers: `NONE`, `CAPTURING_PHASE`, `AT_TARGET`, `BUBBLING_PHASE`.
- {{domxref("Event.isTrusted")}} {{ReadOnlyInline}}
- : Indicates whether or not the event was initiated by the browser (after a user click, for instance) or by a script (using an event creation method, for example).
- {{domxref("Event.target")}} {{ReadOnlyInline}}
- : A reference to the object to which the event was originally dispatched.
- {{domxref("Event.timeStamp")}} {{ReadOnlyInline}}
- : The time at which the event was created (in milliseconds). By specification, this value is time since epoch—but in reality, browsers' definitions vary. In addition, work is underway to change this to be a {{domxref("DOMHighResTimeStamp")}} instead.
- {{domxref("Event.type")}} {{ReadOnlyInline}}
- : The name identifying the type of the event.
### Legacy and non-standard properties
- {{domxref("Event.cancelBubble")}} {{deprecated_inline}}
- : A historical alias to {{domxref("Event.stopPropagation()")}} that should be used instead. Setting its value to `true` before returning from an event handler prevents propagation of the event.
- {{domxref("Event.explicitOriginalTarget")}} {{non-standard_inline}} {{ReadOnlyInline}}
- : The explicit original target of the event.
- {{domxref("Event.originalTarget")}} {{non-standard_inline}} {{ReadOnlyInline}}
- : The original target of the event, before any retargetings.
- {{domxref("Event.returnValue")}} {{deprecated_inline}}
- : A historical property still supported in order to ensure existing sites continue to work. Use {{domxref("Event.preventDefault()")}} and {{domxref("Event.defaultPrevented")}} instead.
- {{domxref("Event.composed", "Event.scoped")}} {{ReadOnlyInline}} {{deprecated_inline}}
- : A boolean value indicating whether the given event will bubble across through the shadow root into the standard DOM. Use {{domxref("Event.composed", "composed")}} instead.
## Instance methods
- {{domxref("Event.composedPath()")}}
- : Returns the event's path (an array of objects on which listeners will be invoked). This does not include nodes in shadow trees if the shadow root was created with its {{domxref("ShadowRoot.mode")}} closed.
- {{domxref("Event.preventDefault()")}}
- : Cancels the event (if it is cancelable).
- {{domxref("Event.stopImmediatePropagation()")}}
- : For this particular event, prevent all other listeners from being called. This includes listeners attached to the same element as well as those attached to elements that will be traversed later (during the capture phase, for instance).
- {{domxref("Event.stopPropagation()")}}
- : Stops the propagation of events further along in the DOM.
### Deprecated methods
- {{domxref("Event.initEvent()")}} {{deprecated_inline}}
  - : Initializes the value of an Event created. If the event has already been dispatched, this method does nothing. Use the constructor {{domxref("Event.Event", "Event()")}} instead.
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- Types of events available: [Event reference](/en-US/docs/Web/Events)
- [Comparison of Event Targets](/en-US/docs/Web/API/Event/Comparison_of_Event_Targets) (`target` vs. `currentTarget` vs. `relatedTarget` vs. `originalTarget`)
- [Creating and triggering custom events](/en-US/docs/Web/Events/Creating_and_triggering_events)
| 0 |
data/mdn-content/files/en-us/web/api/event | data/mdn-content/files/en-us/web/api/event/preventdefault/index.md | ---
title: "Event: preventDefault() method"
short-title: preventDefault()
slug: Web/API/Event/preventDefault
page-type: web-api-instance-method
browser-compat: api.Event.preventDefault
---
{{apiref("DOM")}}
The **`preventDefault()`** method of the {{domxref("Event")}} interface tells the {{Glossary("user agent")}} that if the event does not get explicitly handled, its default action should not be taken as it normally would be.
The event continues to propagate as usual,
unless one of its event listeners calls
{{domxref("Event.stopPropagation", "stopPropagation()")}}
or {{domxref("Event.stopImmediatePropagation", "stopImmediatePropagation()")}},
either of which terminates propagation at once.
As noted below, calling **`preventDefault()`** for a
non-cancelable event, such as one dispatched via
{{domxref("EventTarget.dispatchEvent()")}}, without specifying
`cancelable: true` has no effect.
## Syntax
```js-nolint
event.preventDefault()
```
## Examples
### Blocking default click handling
Toggling a checkbox is the default action of clicking on a checkbox. This example
demonstrates how to prevent that from happening:
#### JavaScript
```js
const checkbox = document.querySelector("#id-checkbox");
checkbox.addEventListener("click", checkboxClick, false);
function checkboxClick(event) {
let warn = "preventDefault() won't let you check this!<br>";
document.getElementById("output-box").innerHTML += warn;
event.preventDefault();
}
```
#### HTML
```html
<p>Please click on the checkbox control.</p>
<form>
<label for="id-checkbox">Checkbox:</label>
<input type="checkbox" id="id-checkbox" />
</form>
<div id="output-box"></div>
```
#### Result
{{EmbedLiveSample("Blocking_default_click_handling")}}
### Stopping keystrokes from reaching an edit field
The following example demonstrates how invalid text input can be stopped from reaching
the input field with `preventDefault()`. Nowadays, you should usually use [native HTML form validation](/en-US/docs/Learn/Forms/Form_validation)
instead.
#### HTML
The HTML form below captures user input.
Since we're only interested in keystrokes, we're disabling `autocomplete` to prevent the browser from filling in the input field with cached values.
```html
<div class="container">
<p>Please enter your name using lowercase letters only.</p>
<form>
<input type="text" id="my-textbox" autocomplete="off" />
</form>
</div>
```
#### CSS
We use a little bit of CSS for the warning box we'll draw when the user presses an
invalid key:
```css
.warning {
border: 2px solid #f39389;
border-radius: 2px;
padding: 10px;
position: absolute;
background-color: #fbd8d4;
color: #3b3c40;
}
```
#### JavaScript
And here's the JavaScript code that does the job. First, listen for
{{domxref("Element/keydown_event", "keydown")}} events:
```js
const myTextbox = document.getElementById("my-textbox");
myTextbox.addEventListener("keydown", checkName, false);
```
The `checkName()` function, which looks at the pressed key and decides
whether to allow it:
```js
function checkName(evt) {
const key = evt.key;
const lowerCaseAlphabet = "abcdefghijklmnopqrstuvwxyz";
if (!lowerCaseAlphabet.includes(key)) {
evt.preventDefault();
displayWarning(
"Please use lowercase letters only.\n" + `Key pressed: ${key}\n`,
);
}
}
```
The `displayWarning()` function presents a notification of a problem. It's
not an elegant function but does the job for the purposes of this example:
```js
let warningTimeout;
const warningBox = document.createElement("div");
warningBox.className = "warning";
function displayWarning(msg) {
warningBox.innerHTML = msg;
if (document.body.contains(warningBox)) {
clearTimeout(warningTimeout);
} else {
// insert warningBox after myTextbox
myTextbox.parentNode.insertBefore(warningBox, myTextbox.nextSibling);
}
warningTimeout = setTimeout(() => {
warningBox.parentNode.removeChild(warningBox);
warningTimeout = -1;
}, 2000);
}
```
#### Result
{{ EmbedLiveSample('Stopping_keystrokes_from_reaching_an_edit_field', 600, 200) }}
## Notes
Calling `preventDefault()` during any stage of event flow cancels the event,
meaning that any default action normally taken by the implementation as a result of the
event will not occur.
You can use {{domxref("Event.cancelable")}} to check if the event is cancelable.
Calling `preventDefault()` for a non-cancelable event has no effect.
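As a minimal sketch with synthetic events (the event name `"example"` is arbitrary):
```js
// A cancelable synthetic event: preventDefault() takes effect
const cancelableEvent = new Event("example", { cancelable: true });
cancelableEvent.preventDefault();
console.log(cancelableEvent.defaultPrevented); // true
// A non-cancelable synthetic event: preventDefault() is silently ignored
const plainEvent = new Event("example");
plainEvent.preventDefault();
console.log(plainEvent.defaultPrevented); // false
```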
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api/event | data/mdn-content/files/en-us/web/api/event/explicitoriginaltarget/index.md | ---
title: "Event: explicitOriginalTarget property"
short-title: explicitOriginalTarget
slug: Web/API/Event/explicitOriginalTarget
page-type: web-api-instance-property
status:
- non-standard
browser-compat: api.Event.explicitOriginalTarget
---
{{APIRef("DOM")}}{{Non-standard_Header}}
The read-only **`explicitOriginalTarget`** property of the {{domxref("Event")}} interface returns the non-anonymous original target of the event.
If the event was retargeted for some reason other than an anonymous boundary crossing, this will be set to the target before the retargeting occurs.
For example, mouse events are retargeted to their parent node when they happen over text nodes (see [Firefox bug 185889](https://bugzil.la/185889)), and in that case [`target`](/en-US/docs/Web/API/Event/target) will show the parent while this property will show the text node.
This property also differs from [`originalTarget`](/en-US/docs/Web/API/Event/originalTarget) in that it will never contain anonymous content.
## Value
Returns the {{domxref("EventTarget")}} object, or null if there isn't one.
## Example
This property can be used with `<command>` to get the event details of the original object calling the command.
```js
function myCommand(ev) {
alert(ev.explicitOriginalTarget.nodeName); // returns 'menuitem'
}
```
```xml
<xul:command id="my-cmd-anAction" oncommand="myCommand(event);"/>
<xul:menulist>
<xul:menupopup>
<xul:menuitem label="Get my element name!" command="my-cmd-anAction"/>
</xul:menupopup>
</xul:menulist>
```
## Specifications
_This is a Mozilla-specific property and is not part of any current specification. It is not on track to become a standard._
## Browser compatibility
{{Compat}}
## See also
- [Comparison of Event Targets](/en-US/docs/Web/API/Event/Comparison_of_Event_Targets)
| 0 |
data/mdn-content/files/en-us/web/api/event | data/mdn-content/files/en-us/web/api/event/comparison_of_event_targets/index.md | ---
title: Comparison of Event Targets
slug: Web/API/Event/Comparison_of_Event_Targets
page-type: guide
---
{{ ApiRef() }}
It's easy to get confused about which event target to examine when writing an event handler. This article should clarify the use of the target properties.
There are five targets to consider:
<table class="no-markdown">
<thead>
<tr>
<th>Property</th>
<th>Defined in</th>
<th>Purpose</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<code><a href="/en-US/docs/Web/API/Event/target">event.target</a></code>
</td>
<td>
<a href="https://www.w3.org/TR/DOM-Level-2/events.html#Events-interface"
>DOM Event Interface</a
>
</td>
<td>
The DOM element on the left-hand side of the call that triggered this
event.
</td>
</tr>
<tr>
<td>
<code
><a href="/en-US/docs/Web/API/Event/currentTarget"
>event.currentTarget</a
></code
>
</td>
<td>
<a href="https://www.w3.org/TR/DOM-Level-2/events.html#Events-interface"
>DOM Event Interface</a
>
</td>
<td>
The
<a
href="https://www.w3.org/TR/DOM-Level-2/events.html#Events-EventTarget"
><code>EventTarget</code></a
>
whose
<a
href="https://www.w3.org/TR/DOM-Level-2/events.html#Events-EventListener"
><code>EventListeners</code></a
>
are currently being processed. As the event capturing and bubbling
occurs, this value changes.
</td>
</tr>
<tr>
<td>
<code
><a href="/en-US/docs/Web/API/MouseEvent/relatedTarget"
>event.relatedTarget</a
></code
>
</td>
<td>
<a
href="https://www.w3.org/TR/DOM-Level-2/events.html#Events-MouseEvent"
>DOM MouseEvent Interface</a
>
</td>
<td>Identifies a secondary target for the event.</td>
</tr>
<tr>
<td>
<code
><a href="/en-US/docs/Web/API/Event/explicitOriginalTarget"
>event.explicitOriginalTarget</a
></code
>
</td>
<td><a href="https://dxr.mozilla.org/mozilla-central/source/dom/webidl/Event.webidl">Event.webidl</a>
</td>
<td>
{{ Non-standard_inline() }} If the event was retargeted for
some reason other than an anonymous boundary crossing, this will be set
to the target before the retargeting occurs. For example, mouse events
are retargeted to their parent node when they happen over text nodes
([Firefox bug 185889](https://bugzil.la/185889)), and in that case <code>.target</code> will
show the parent and <code>.explicitOriginalTarget</code> will show the
text node.<br />Unlike <code>.originalTarget</code>,
<code>.explicitOriginalTarget</code> will never contain anonymous
content.
</td>
</tr>
<tr>
<td>
<code
><a href="/en-US/docs/Web/API/Event/originalTarget"
>event.originalTarget</a
></code
>
</td>
<td>
<a href="https://dxr.mozilla.org/mozilla-central/source/dom/webidl/Event.webidl">Event.webidl</a>
</td>
<td>
{{ Non-standard_inline() }} The original target of the event,
before any retargetings. See
<a
href="/en-US/docs/XBL/XBL_1.0_Reference/Anonymous_Content#Event_Flow_and_Targeting"
>Anonymous Content#Event_Flow_and_Targeting</a
>
for details.
</td>
</tr>
<tr>
<td>event.composedTarget</td>
<td>
<a href="https://dxr.mozilla.org/mozilla-central/source/dom/webidl/Event.webidl">Event.webidl</a>
</td>
<td>
{{ Non-standard_inline() }} The original non-native target of
the event before composition from Shadow DOM.
</td>
</tr>
</tbody>
</table>
### Use of `explicitOriginalTarget` and `originalTarget`
> **Note:** These properties are only available in Mozilla-based browsers.
### Examples
```html
<!doctype html>
<html lang="en-US">
<head>
<meta charset="utf-8" />
<meta http-equiv="X-UA-Compatible" content="IE=edge" />
<title>Comparison of Event Targets</title>
<style>
table {
border-collapse: collapse;
height: 150px;
width: 100%;
}
td {
border: 1px solid #ccc;
font-weight: bold;
padding: 5px;
min-height: 30px;
}
.standard {
background-color: #99ff99;
}
.non-standard {
background-color: #902d37;
}
</style>
</head>
<body>
<table>
<thead>
<tr>
<td class="standard">
Original target dispatching the event <small>event.target</small>
</td>
<td class="standard">
        Target whose event listener is being processed
<small>event.currentTarget</small>
</td>
<td class="standard">
Identify other element (if any) involved in the event
<small>event.relatedTarget</small>
</td>
<td class="non-standard">
If there was a retargeting of the event for some reason
<small> event.explicitOriginalTarget</small> contains the target
before retargeting (never contains anonymous targets)
</td>
<td class="non-standard">
If there was a retargeting of the event for some reason
<small> event.originalTarget</small> contains the target before
retargeting (may contain anonymous targets)
</td>
</tr>
</thead>
<tr>
<td id="target"></td>
<td id="currentTarget"></td>
<td id="relatedTarget"></td>
<td id="explicitOriginalTarget"></td>
<td id="originalTarget"></td>
</tr>
</table>
<p>
Clicking on the text will show the difference between
explicitOriginalTarget, originalTarget, and target
</p>
<script>
function handleClicks(e) {
document.getElementById("target").innerHTML = e.target;
document.getElementById("currentTarget").innerHTML = e.currentTarget;
document.getElementById("relatedTarget").innerHTML = e.relatedTarget;
document.getElementById("explicitOriginalTarget").innerHTML =
e.explicitOriginalTarget;
document.getElementById("originalTarget").innerHTML = e.originalTarget;
}
function handleMouseover(e) {
document.getElementById("target").innerHTML = e.target;
document.getElementById("relatedTarget").innerHTML = e.relatedTarget;
}
document.addEventListener("click", handleClicks, false);
document.addEventListener("mouseover", handleMouseover, false);
</script>
</body>
</html>
```
### Use of `target` and `relatedTarget`
The `relatedTarget` property for the `mouseover` event holds the node that the mouse was previously over. For the `mouseout` event, it holds the node that the mouse moved to.
| Event type | [event.target](/en-US/docs/Web/API/Event/target) | [event.relatedTarget](/en-US/docs/Web/API/MouseEvent/relatedTarget) |
| ----------- | ------------------------------------------------- | ------------------------------------------------------------------- |
| `mouseover` | the EventTarget which the pointing device entered | the EventTarget which the pointing device exited |
| `mouseout` | the EventTarget which the pointing device exited | the EventTarget which the pointing device entered |
#### Example
```xml
<hbox id="outer">
<hbox id="inner"
onmouseover="dump('mouseover ' + event.relatedTarget.id + ' > ' + event.target.id + '\n');"
onmouseout="dump('mouseout ' + event.target.id + ' > ' + event.relatedTarget.id + '\n');"
style="margin: 100px; border: 10px solid black; width: 100px; height: 100px;" />
</hbox>
```
| 0 |
data/mdn-content/files/en-us/web/api/event | data/mdn-content/files/en-us/web/api/event/originaltarget/index.md | ---
title: "Event: originalTarget property"
short-title: originalTarget
slug: Web/API/Event/originalTarget
page-type: web-api-instance-property
status:
- non-standard
browser-compat: api.Event.originalTarget
---
{{ ApiRef("DOM") }} {{Non-standard_header}}
The read-only **`originalTarget`** property of the {{domxref("Event")}} interface returns the original target of the event before any retargetings. Unlike {{domxref("Event.explicitOriginalTarget")}}, it can also be native anonymous content.
See also [Comparison of Event Targets](/en-US/docs/Web/API/Event/Comparison_of_Event_Targets).
## Specifications
_This is a Mozilla-specific property and is not part of any current specification. It is not on track to become a standard._
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api/event | data/mdn-content/files/en-us/web/api/event/bubbles/index.md | ---
title: "Event: bubbles property"
short-title: bubbles
slug: Web/API/Event/bubbles
page-type: web-api-instance-property
browser-compat: api.Event.bubbles
---
{{ ApiRef("DOM") }}
The **`bubbles`** read-only property of the {{domxref("Event")}} interface indicates whether the event bubbles up through the DOM tree or not.
> **Note:** See [Event bubbling and capture](/en-US/docs/Learn/JavaScript/Building_blocks/Events#event_bubbling) for more information on bubbling.
## Value
A boolean value, which is `true` if the event bubbles up through the DOM tree.
## Example
```js
function handleInput(e) {
  // If the event doesn't bubble on its own, pass it along manually
if (!e.bubbles) {
passItOn(e);
}
// Already bubbling
doOutput(e);
}
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- {{domxref("Event.stopPropagation", "stopPropagation()")}} to prevent further propagation of the current event in the capturing and bubbling phases
- {{domxref("Event.stopImmediatePropagation", "stopImmediatePropagation()")}} to not call any further listeners for the same event at the same level in the DOM
- {{domxref("Event.preventDefault", "preventDefault()")}} to allow propagation to continue but to disallow the browser to perform its default action should no listeners handle the event
| 0 |
data/mdn-content/files/en-us/web/api/event | data/mdn-content/files/en-us/web/api/event/istrusted/index.md | ---
title: "Event: isTrusted property"
short-title: isTrusted
slug: Web/API/Event/isTrusted
page-type: web-api-instance-property
browser-compat: api.Event.isTrusted
---
{{APIRef("DOM")}}
The **`isTrusted`** read-only property of the
{{domxref("Event")}} interface is a boolean value that is `true`
when the event was generated by a user action, and `false` when the event was
created or modified by a script or dispatched via
{{domxref("EventTarget.dispatchEvent()")}}.
## Value
A boolean value.
## Example
```js
if (e.isTrusted) {
/* The event is trusted */
} else {
/* The event is not trusted */
}
```
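As a further sketch (the button element is hypothetical), the same listener can distinguish a real user click from one dispatched by a script:
```js
const button = document.querySelector("button");
button.addEventListener("click", (e) => {
  // true for a real user click; false for the event dispatched below
  console.log(`isTrusted: ${e.isTrusted}`);
});
// A synthetic click: the listener runs with isTrusted === false
button.dispatchEvent(new Event("click"));
```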
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api/event | data/mdn-content/files/en-us/web/api/event/composedpath/index.md | ---
title: "Event: composedPath() method"
short-title: composedPath()
slug: Web/API/Event/composedPath
page-type: web-api-instance-method
browser-compat: api.Event.composedPath
---
{{APIRef("Shadow DOM")}}
The **`composedPath()`** method of the {{domxref("Event")}}
interface returns the event's path which is an array of the objects on which listeners
will be invoked. This does not include nodes in shadow trees if the shadow root was
created with its {{domxref("ShadowRoot.mode")}} closed.
## Syntax
```js-nolint
event.composedPath()
```
### Parameters
None.
### Return value
An array of {{domxref("EventTarget")}} objects representing the objects on which an
event listener will be invoked.
## Examples
In the following example, which you can try out at [https://mdn.github.io/web-components-examples/composed-composed-path/](https://mdn.github.io/web-components-examples/composed-composed-path/), we define two trivial custom
elements, `<open-shadow>` and `<closed-shadow>`, both
of which take the contents of their text attribute and insert them into the element's
shadow DOM as the text content of a `<p>` element. The only difference
between the two is that their shadow roots are attached with their modes set to
`open` and `closed` respectively.
```js
customElements.define(
"open-shadow",
class extends HTMLElement {
constructor() {
super();
const pElem = document.createElement("p");
pElem.textContent = this.getAttribute("text");
const shadowRoot = this.attachShadow({ mode: "open" });
shadowRoot.appendChild(pElem);
}
},
);
customElements.define(
"closed-shadow",
class extends HTMLElement {
constructor() {
super();
const pElem = document.createElement("p");
pElem.textContent = this.getAttribute("text");
const shadowRoot = this.attachShadow({ mode: "closed" });
shadowRoot.appendChild(pElem);
}
},
);
```
We then insert one of each element into our page:
```html
<open-shadow text="I have an open shadow root"></open-shadow>
<closed-shadow text="I have a closed shadow root"></closed-shadow>
```
Then include a click event listener on the `<html>` element:
```js
document.querySelector("html").addEventListener("click", (e) => {
console.log(e.composed);
console.log(e.composedPath());
});
```
When you click on the `<open-shadow>` element and then the
`<closed-shadow>` element, you'll notice two things. First, the
`composed` property returns `true` because the `click`
event is always able to propagate across shadow boundaries. Second, you'll notice a
difference in the value of `composedPath` for the two elements. The
`<open-shadow>` element's composed path is this:
```plain
Array [ p, ShadowRoot, open-shadow, body, html, HTMLDocument https://mdn.github.io/web-components-examples/composed-composed-path/, Window ]
```
Whereas the `<closed-shadow>` element's composed path is as follows:
```plain
Array [ closed-shadow, body, html, HTMLDocument https://mdn.github.io/web-components-examples/composed-composed-path/, Window ]
```
In the second case, the event listeners only propagate as far as the
`<closed-shadow>` element itself, but not to the nodes inside the
shadow boundary.
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api/event | data/mdn-content/files/en-us/web/api/event/target/index.md | ---
title: "Event: target property"
short-title: target
slug: Web/API/Event/target
page-type: web-api-instance-property
browser-compat: api.Event.target
---
{{ApiRef("DOM")}}
The read-only **`target`** property of the
{{domxref("Event")}} interface is a reference to the object onto which the event was
dispatched. It is different from {{domxref("Event.currentTarget")}} when the event
handler is called during the bubbling or capturing phase of the event.
## Value
The associated {{domxref("EventTarget")}}.
## Example
The `event.target` property can be used in order to implement **event
delegation**.
```js
// Make a list
const ul = document.createElement("ul");
document.body.appendChild(ul);
const li1 = document.createElement("li");
const li2 = document.createElement("li");
ul.appendChild(li1);
ul.appendChild(li2);
function hide(evt) {
// evt.target refers to the clicked <li> element
// This is different than evt.currentTarget, which would refer to the parent <ul> in this context
evt.target.style.visibility = "hidden";
}
// Attach the listener to the list
// It will fire when each <li> is clicked
ul.addEventListener("click", hide, false);
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Comparison of Event Targets](/en-US/docs/Web/API/Event/Comparison_of_Event_Targets)
| 0 |
data/mdn-content/files/en-us/web/api/event | data/mdn-content/files/en-us/web/api/event/stopimmediatepropagation/index.md | ---
title: "Event: stopImmediatePropagation() method"
short-title: stopImmediatePropagation()
slug: Web/API/Event/stopImmediatePropagation
page-type: web-api-instance-method
browser-compat: api.Event.stopImmediatePropagation
---
{{APIRef("DOM")}}
The **`stopImmediatePropagation()`** method of the
{{domxref("Event")}} interface prevents other listeners of the same event from being called.
If several listeners are attached to the same element for the same event type, they are called in the order in which they were added. If `stopImmediatePropagation()` is invoked during one such call, no remaining listeners will be called, either on that element or any other element.
## Syntax
```js-nolint
event.stopImmediatePropagation()
```
## Examples
### Comparing event-stopping functions
The example below has three buttons inside three nested divs. Each button has three event listeners registered for click events, and each div has an event listener, also registered for click events.
- The top button allows normal event propagation.
- The middle button calls `stopPropagation()` in its first event handler.
- The bottom button calls `stopImmediatePropagation()` in its first event handler.
#### HTML
```html
<h2>Click on the buttons</h2>
<div>
outer div<br />
<div>
middle div<br />
<div>
inner div<br />
<button>allow propagation</button><br />
<button id="stopPropagation">stop propagation</button><br />
<button id="stopImmediatePropagation">immediate stop propagation</button>
</div>
</div>
</div>
<pre></pre>
```
#### CSS
```css
div {
display: inline-block;
padding: 10px;
background-color: #fff;
border: 2px solid #000;
margin: 10px;
}
button {
width: 100px;
color: #008;
padding: 5px;
background-color: #fff;
border: 2px solid #000;
border-radius: 30px;
margin: 5px;
}
```
#### JavaScript
```js
const outElem = document.querySelector("pre");
/* Clear the output */
document.addEventListener(
"click",
() => {
outElem.textContent = "";
},
true,
);
/* Set event listeners for the buttons */
document.querySelectorAll("button").forEach((elem) => {
for (let i = 1; i <= 3; i++) {
elem.addEventListener("click", (evt) => {
/* Do any propagation stopping in first event handler */
if (i === 1 && elem.id) {
evt[elem.id]();
outElem.textContent += `Event handler for event 1 calling ${elem.id}()\n`;
}
outElem.textContent += `Click event ${i} processed on "${elem.textContent}" button\n`;
});
}
});
/* Set event listeners for the divs */
document
.querySelectorAll("div")
.forEach((elem) =>
elem.addEventListener(
"click",
(evt) =>
(outElem.textContent += `Click event processed on "${elem.firstChild.data.trim()}"\n`),
),
);
```
#### Result
Each click-event handler displays a status message when it is called. If you press the middle button, you will see that `stopPropagation()` allows all of the event handlers registered for clicks on that button to execute but prevents execution of the click-event handlers for the divs, which would normally follow. However, if you press the bottom button, `stopImmediatePropagation()` stops all propagation after the event that called it.
{{ EmbedLiveSample("Comparing event-stopping functions", 500, 550) }}
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api/event | data/mdn-content/files/en-us/web/api/event/srcelement/index.md | ---
title: "Event: srcElement property"
short-title: srcElement
slug: Web/API/Event/srcElement
page-type: web-api-instance-property
status:
- deprecated
browser-compat: api.Event.srcElement
---
{{ApiRef("DOM")}}{{deprecated_header}}
The deprecated **`Event.srcElement`** is an alias for the {{domxref("Event.target")}} property. Use {{domxref("Event.target")}} instead.
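## Examples
A minimal sketch showing that the alias and {{domxref("Event.target")}} refer to the same object:
```js
document.addEventListener("click", (event) => {
  // srcElement is only a legacy alias for target
  console.log(event.srcElement === event.target); // true
});
```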
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- {{domxref("Window.event")}}
| 0 |
data/mdn-content/files/en-us/web/api/event | data/mdn-content/files/en-us/web/api/event/returnvalue/index.md | ---
title: "Event: returnValue property"
short-title: returnValue
slug: Web/API/Event/returnValue
page-type: web-api-instance-property
status:
- deprecated
browser-compat: api.Event.returnValue
---
{{APIRef("DOM")}}{{Deprecated_Header}}
The {{domxref("Event")}} property
**`returnValue`** indicates whether the default action for
this event has been prevented or not.
It is set to `true` by
default, allowing the default action to occur. Setting this property to
`false` prevents the default action.
> **Note:** While `returnValue` has been adopted into the DOM
> standard, it is present primarily to support existing code. Use
> {{DOMxRef("Event.preventDefault", "preventDefault()")}}, and
> {{domxref("Event.defaultPrevented", "defaultPrevented")}} instead of this historical
> property.
## Value
A boolean value which is `true` if the event has not been
canceled; otherwise, if the event has been canceled or the default has been prevented,
the value is `false`.
The value of `returnValue` is the opposite of the value returned by
{{domxref("Event.defaultPrevented", "defaultPrevented")}}.
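## Example
A minimal sketch (the link element is hypothetical) showing the legacy property alongside its modern counterpart:
```js
// A hypothetical link, for illustration only
const link = document.querySelector("a");
link.addEventListener("click", (event) => {
  // Legacy style: cancel the default navigation…
  event.returnValue = false;
  // …which is reflected by the modern defaultPrevented property
  console.log(event.defaultPrevented); // true
});
```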
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- {{domxref("HTMLDialogElement.returnValue")}}: the return value for the {{HTMLElement("dialog")}}.
| 0 |
data/mdn-content/files/en-us/web/api/event | data/mdn-content/files/en-us/web/api/event/cancelbubble/index.md | ---
title: "Event: cancelBubble property"
short-title: cancelBubble
slug: Web/API/Event/cancelBubble
page-type: web-api-instance-property
status:
- deprecated
browser-compat: api.Event.cancelBubble
---
{{APIRef("DOM")}} {{Deprecated_Header}}
The **`cancelBubble`** property of the {{domxref("Event")}}
interface is deprecated. Use {{domxref("Event.stopPropagation()")}} instead.
Setting its value to `true` before returning from an event handler prevents propagation
of the event. In later implementations, setting this to `false` does nothing.
See [Browser compatibility](#browser_compatibility) for details.
## Value
A boolean value. The value `true` means that the event must not be propagated further.
## Example
```js
elem.onclick = (event) => {
// Do cool things here
event.cancelBubble = true;
};
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api/event | data/mdn-content/files/en-us/web/api/event/stoppropagation/index.md | ---
title: "Event: stopPropagation() method"
short-title: stopPropagation()
slug: Web/API/Event/stopPropagation
page-type: web-api-instance-method
browser-compat: api.Event.stopPropagation
---
{{APIRef("DOM")}}
The **`stopPropagation()`** method of the {{domxref("Event")}}
interface prevents further propagation of the current event in the capturing and
bubbling phases. It does not, however, prevent any default behaviors from occurring; for
instance, clicks on links are still processed. If you want to stop those behaviors, see
the {{domxref("Event.preventDefault", "preventDefault()")}} method. It also does not
prevent propagation to other event-handlers of the current element. If you want to stop those,
see {{domxref("Event.stopImmediatePropagation", "stopImmediatePropagation()")}}.
## Syntax
```js-nolint
event.stopPropagation()
```
### Parameters
None.
### Return value
None.
## Examples
See [Event Propagation](/en-US/docs/Web/API/Document_Object_Model/Examples#example_5_event_propagation).
Also see the example at {{domxref("Event.stopImmediatePropagation", "stopImmediatePropagation()")}}
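As a minimal inline sketch (the element IDs are hypothetical), a click on the child element is handled there but never reaches the parent's listener:
```js
const parent = document.querySelector("#parent");
const child = document.querySelector("#child");
parent.addEventListener("click", () => {
  // Never runs for clicks on the child, because propagation stops there
  console.log("parent clicked");
});
child.addEventListener("click", (event) => {
  console.log("child clicked");
  event.stopPropagation(); // prevents the event from bubbling to the parent
});
```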
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api/event | data/mdn-content/files/en-us/web/api/event/currenttarget/index.md | ---
title: "Event: currentTarget property"
short-title: currentTarget
slug: Web/API/Event/currentTarget
page-type: web-api-instance-property
browser-compat: api.Event.currentTarget
---
{{APIRef("DOM")}}
The **`currentTarget`** read-only property of the {{domxref("Event")}} interface identifies the element to which the event handler has been attached.
This will not always be the same as the element on which the event was fired, because the event may have fired on a descendant of the element with the handler, and then [bubbled](/en-US/docs/Learn/JavaScript/Building_blocks/Events#event_bubbling) up to the element with the handler. The element on which the event was fired is given by {{domxref("Event.target")}}.
## Value
An {{domxref("EventTarget")}} representing the object to which the current event handler is attached.
## Examples
### currentTarget versus target
This example illustrates the difference between `currentTarget` and `target`.
#### HTML
The page has a "parent" {{htmlelement("div")}} containing a "child" `<div>`.
```html
<div id="parent">
Click parent
<div id="child">Click child</div>
</div>
<button id="reset">Reset</button>
<pre id="output"></pre>
```
```css hidden
button,
div,
pre {
margin: 0.5rem;
}
div {
padding: 1rem;
border: 1px solid black;
}
```
#### JavaScript
The event handler is attached to the parent. It logs the value of `event.currentTarget` and `event.target`.
We also have a "Reset" button that just reloads the example.
```js
const output = document.querySelector("#output");
const parent = document.querySelector("#parent");
parent.addEventListener("click", (event) => {
const currentTarget = event.currentTarget.getAttribute("id");
const target = event.target.getAttribute("id");
output.textContent = `Current target: ${currentTarget}\n`;
output.textContent += `Target: ${target}`;
});
const reset = document.querySelector("#reset");
reset.addEventListener("click", () => document.location.reload());
```
#### Result
If you click inside the child `<div>`, then `target` identifies the child. If you click inside the parent `<div>`, then `target` identifies the parent.
In both cases, `currentTarget` identifies the parent, because that's the element that the handler is attached to.
{{EmbedLiveSample("currentTarget versus target", 100, 250)}}
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Comparison of Event Targets](/en-US/docs/Web/API/Event/Comparison_of_Event_Targets)
- [Event bubbling](/en-US/docs/Learn/JavaScript/Building_blocks/Events#event_bubbling)
| 0 |
data/mdn-content/files/en-us/web/api/event | data/mdn-content/files/en-us/web/api/event/composed/index.md | ---
title: "Event: composed property"
short-title: composed
slug: Web/API/Event/composed
page-type: web-api-instance-property
browser-compat: api.Event.composed
---
{{APIRef("Shadow DOM")}}
The read-only **`composed`** property of the
{{domxref("Event")}} interface returns a boolean value which indicates whether
or not the event will propagate across the shadow DOM boundary into the standard DOM.
All UA-dispatched UI events are composed (click/touch/mouseover/copy/paste, etc.). Most
other types of events are not composed, and so will return `false`. For
example, this includes synthetic events that are created without their
`composed` option set to `true`.
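For example, a minimal sketch (the event name is arbitrary) of constructing a synthetic event that can cross a shadow boundary:
```js
const event = new CustomEvent("item-selected", {
  bubbles: true, // a composed event only propagates upward if it also bubbles
  composed: true, // allow the event to escape the shadow tree
});
console.log(event.composed); // true
```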
Propagation only occurs if the {{domxref("Event.bubbles", "bubbles")}} property is also
`true`. However, capturing-only composed events are also handled at the host as
if they were in the `AT_TARGET` phase. You can determine the path the event will
follow through the shadow root to the DOM root by calling
{{domxref("Event.composedPath", "composedPath()")}}.
## Value
A boolean value which is `true` if the event will cross from the
shadow DOM into the standard DOM after reaching the shadow root. (That is, the first
node in the shadow DOM in which the event began to propagate.)
If this value is `false`, the shadow root will be the last node to be
offered the event.
## Examples
In this [example](https://mdn.github.io/web-components-examples/composed-composed-path/), we define two trivial custom elements, `<open-shadow>` and `<closed-shadow>`,
both of which take the contents of their text attribute and insert them into the element's
shadow DOM as the text content of a `<p>` element. The only difference
between the two is that their shadow roots are attached with their modes set to
`open` and `closed` respectively.
The two definitions look like this:
```js
customElements.define(
"open-shadow",
class extends HTMLElement {
constructor() {
super();
const pElem = document.createElement("p");
pElem.textContent = this.getAttribute("text");
const shadowRoot = this.attachShadow({
mode: "open",
});
shadowRoot.appendChild(pElem);
}
},
);
customElements.define(
"closed-shadow",
class extends HTMLElement {
constructor() {
super();
const pElem = document.createElement("p");
pElem.textContent = this.getAttribute("text");
const shadowRoot = this.attachShadow({
mode: "closed",
});
shadowRoot.appendChild(pElem);
}
},
);
```
We then insert one of each element into our page:
```html
<open-shadow text="I have an open shadow root"></open-shadow>
<closed-shadow text="I have a closed shadow root"></closed-shadow>
```
Then include a click event listener on the `<html>` element:
```js
document.querySelector("html").addEventListener("click", (e) => {
console.log(e.composed);
console.log(e.composedPath());
});
```
When you click on the `<open-shadow>` element and then the
`<closed-shadow>` element, you'll notice two things.
1. The `composed` property returns `true` because the
`click` event is always able to propagate across shadow boundaries.
2. A difference in the value of `composedPath` for the two
elements.
The `<open-shadow>` element's composed path is this:
```plain
Array [ p, ShadowRoot, open-shadow, body, html, HTMLDocument https://mdn.github.io/web-components-examples/composed-composed-path/, Window ]
```
Whereas the `<closed-shadow>` element's composed path is as follows:
```plain
Array [ closed-shadow, body, html, HTMLDocument https://mdn.github.io/web-components-examples/composed-composed-path/, Window ]
```
In the second case, the event listeners only propagate as far as the
`<closed-shadow>` element itself, but not to the nodes inside the
shadow boundary.
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api/event | data/mdn-content/files/en-us/web/api/event/type/index.md | ---
title: "Event: type property"
short-title: type
slug: Web/API/Event/type
page-type: web-api-instance-property
browser-compat: api.Event.type
---
{{APIRef}}
The **`type`** read-only property of the {{domxref("Event")}}
interface returns a string containing the event's type. It is set when the event is
constructed and is the name commonly used to refer to the specific event, such as
`click`, `load`, or `error`.
## Value
A string containing the type of {{domxref("Event")}}.
## Example
This example logs the event type whenever you press a keyboard key or click a mouse
button.
### HTML
```html
<p>Press any key or click the mouse to get the event type.</p>
<p id="log"></p>
```
### JavaScript
```js
function getEventType(event) {
const log = document.getElementById("log");
log.innerText = `${event.type}\n${log.innerText}`;
}
// Keyboard events
document.addEventListener("keydown", getEventType, false); // first
document.addEventListener("keypress", getEventType, false); // second
document.addEventListener("keyup", getEventType, false); // third
// Mouse events
document.addEventListener("mousedown", getEventType, false); // first
document.addEventListener("mouseup", getEventType, false); // second
document.addEventListener("click", getEventType, false); // third
```
### Result
{{EmbedLiveSample('Example')}}
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- {{ domxref("EventTarget.addEventListener()") }}
- {{ domxref("EventTarget.removeEventListener()") }}
| 0 |
data/mdn-content/files/en-us/web/api/event | data/mdn-content/files/en-us/web/api/event/event/index.md | ---
title: "Event: Event() constructor"
short-title: Event()
slug: Web/API/Event/Event
page-type: web-api-constructor
browser-compat: api.Event.Event
---
{{APIRef("DOM")}}
The **`Event()`** constructor creates a new {{domxref("Event")}} object. An event created in this way is called a _synthetic event_, as opposed to an event fired by the browser, and can be [dispatched](/en-US/docs/Web/Events/Creating_and_triggering_events) by a script.
## Syntax
```js-nolint
new Event(type)
new Event(type, options)
```
### Parameters
- `type`
- : A string with the name of the event.
- `options` {{optional_inline}}
- : An object with the following properties:
- `bubbles` {{optional_inline}}
- : A boolean value indicating whether the event bubbles. The default is
`false`.
- `cancelable` {{optional_inline}}
- : A boolean value indicating whether the event can be cancelled. The
default is `false`.
- `composed` {{optional_inline}}
- : A boolean value indicating whether the event will trigger listeners
outside of a shadow root (see {{domxref("Event.composed")}} for more details). The
default is `false`.
### Return value
A new {{domxref("Event")}} object.
## Example
```js
// create a look event that bubbles up and cannot be canceled
const evt = new Event("look", { bubbles: true, cancelable: false });
document.dispatchEvent(evt);
// event can be dispatched from any element, not only the document
myDiv.dispatchEvent(evt);
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- {{domxref("Event")}}
- {{domxref("EventTarget.dispatchEvent()")}}
- [Creating and triggering events](/en-US/docs/Web/Events/Creating_and_triggering_events)
| 0 |
data/mdn-content/files/en-us/web/api/event | data/mdn-content/files/en-us/web/api/event/timestamp/index.md | ---
title: "Event: timeStamp property"
short-title: timeStamp
slug: Web/API/Event/timeStamp
page-type: web-api-instance-property
browser-compat: api.Event.timeStamp
---
{{APIRef("DOM")}}
The **`timeStamp`** read-only property of the {{domxref("Event")}} interface returns the time (in milliseconds) at which the event was created.
## Value
This value is the number of milliseconds elapsed from the beginning of the time origin until the event was created.
If the global object is {{domxref("Window")}}, the time origin is the moment the user clicked on the link, or the script that initiated the loading of the document.
In a worker, the time origin is the moment of creation of the worker.
The value is a {{domxref("DOMHighResTimeStamp")}} accurate to 5 microseconds (0.005 ms), but the [precision is reduced](#reduced_time_precision) to prevent [fingerprinting](/en-US/docs/Glossary/Fingerprinting).
## Example
### HTML
```html
<p>
Focus this iframe and press any key to get the current timestamp for the
keypress event.
</p>
<p>timeStamp: <span id="time">-</span></p>
```
### JavaScript
```js
function getTime(event) {
const time = document.getElementById("time");
time.firstChild.nodeValue = event.timeStamp;
}
document.body.addEventListener("keypress", getTime);
```
### Result
{{EmbedLiveSample("Example", "100%", 100)}}
## Reduced time precision
To offer protection against timing attacks and fingerprinting, the precision of `Event.timeStamp` might get rounded depending on browser settings.
In Firefox, the `privacy.reduceTimerPrecision` preference is enabled by default and defaults to 2ms.
```js
// reduced time precision in Firefox (default: 2ms)
event.timeStamp;
// 9934
// 10362
// 11670
// …
```
In Firefox, if you also enable `privacy.resistFingerprinting`, the precision will be 100ms or the value of `privacy.resistFingerprinting.reduceTimerPrecision.microseconds`, whichever is larger.
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api/event | data/mdn-content/files/en-us/web/api/event/eventphase/index.md | ---
title: "Event: eventPhase property"
short-title: eventPhase
slug: Web/API/Event/eventPhase
page-type: web-api-instance-property
browser-compat: api.Event.eventPhase
---
{{ApiRef("DOM")}}
The **`eventPhase`** read-only property of the
{{domxref("Event")}} interface indicates which phase of the event flow is currently
being evaluated.
## Value
Returns an integer value which specifies the current evaluation phase of the event
flow. Possible values are:
- `Event.NONE (0)`
- : The event is not being processed at this time.
- `Event.CAPTURING_PHASE (1)`
- : The event is being propagated through the target's ancestor objects.
This process starts with the {{domxref("Window")}}, then {{domxref("Document")}},
then the {{domxref("HTMLHtmlElement")}}, and so on through the elements
until the target's parent is reached.
{{domxref("EventTarget/addEventListener", "Event listeners", "", 1)}}
registered for capture mode when {{domxref("EventTarget.addEventListener()")}} was
called are triggered during this phase.
- `Event.AT_TARGET (2)`
- : The event has arrived at
{{domxref("EventTarget", "the event's target", "",
1)}}.
Event listeners registered for this phase are called at this time. If
{{domxref("Event.bubbles")}} is `false`, processing
the event is finished after this phase is complete.
- `Event.BUBBLING_PHASE (3)`
- : The event is propagating back up through the target's ancestors in reverse order,
starting with the parent, and eventually reaching the containing {{domxref("Window")}}.
This is known as _bubbling_, and occurs only if {{domxref("Event.bubbles")}} is
`true`. {{domxref("EventTarget/addEventListener", "Event listeners", "", 1)}} registered for this phase are triggered during this process.
## Example
### HTML
```html
<h4>Event Propagation Chain</h4>
<ul>
<li>Click 'd1'</li>
<li>Analyze event propagation chain</li>
<li>Click next div and repeat the experience</li>
<li>Change Capturing mode</li>
<li>Repeat the experience</li>
</ul>
<input type="checkbox" id="chCapture" />
<label for="chCapture">Use Capturing</label>
<div id="d1">
d1
<div id="d2">
d2
<div id="d3">
d3
<div id="d4">d4</div>
</div>
</div>
</div>
<div id="divInfo"></div>
```
### CSS
```css
div {
margin: 20px;
padding: 4px;
border: thin black solid;
}
#divInfo {
margin: 18px;
padding: 8px;
background-color: white;
font-size: 80%;
}
```
### JavaScript
```js
let clear = false;
let divInfo = null;
let divs = null;
let chCapture = null;
window.onload = () => {
divInfo = document.getElementById("divInfo");
divs = document.getElementsByTagName("div");
chCapture = document.getElementById("chCapture");
chCapture.onclick = () => {
removeListeners();
addListeners();
clearDivs();
};
clearDivs();
addListeners();
};
function removeListeners() {
for (const div of divs) {
if (div.id !== "divInfo") {
div.removeEventListener("click", onDivClick, true);
div.removeEventListener("click", onDivClick, false);
}
}
}
function addListeners() {
for (const div of divs) {
if (div.id !== "divInfo") {
if (chCapture.checked) {
div.addEventListener("click", onDivClick, true);
} else {
div.addEventListener("click", onDivClick, false);
div.onmousemove = () => {
clear = true;
};
}
}
}
}
function onDivClick(e) {
if (clear) {
clearDivs();
clear = false;
}
if (e.eventPhase === 2) {
e.currentTarget.style.backgroundColor = "red";
}
const level =
["none", "capturing", "target", "bubbling"][e.eventPhase] ?? "error";
const para = document.createElement("p");
para.textContent = `${e.currentTarget.id}; eventPhase: ${level}`;
divInfo.appendChild(para);
}
function clearDivs() {
for (let i = 0; i < divs.length; i++) {
if (divs[i].id !== "divInfo") {
divs[i].style.backgroundColor = i % 2 !== 0 ? "#f6eedb" : "#cceeff";
}
}
divInfo.textContent = "";
}
```
### Result
{{ EmbedLiveSample('Example', '', '700') }}
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api/event | data/mdn-content/files/en-us/web/api/event/cancelable/index.md | ---
title: "Event: cancelable property"
short-title: cancelable
slug: Web/API/Event/cancelable
page-type: web-api-instance-property
browser-compat: api.Event.cancelable
---
{{ ApiRef("DOM") }}
The **`cancelable`** read-only property of the {{domxref("Event")}} interface indicates whether the event
can be canceled, and therefore prevented as if the event never happened.
If the event is _not_ cancelable, then its `cancelable` property will be
`false` and the event listener cannot stop the event from occurring.
Most browser-native events that can be canceled are the ones that result from the user
interacting with the page. Canceling the {{domxref("Element/click_event", "click")}},
{{domxref("Element/wheel_event", "wheel")}}, or
{{domxref("Window/beforeunload_event", "beforeunload")}} events would prevent the user
from clicking on something, scrolling the page with the mouse wheel, or
navigating away from the page, respectively.
[Synthetic events](/en-US/docs/Web/API/Event/Event) created by other JavaScript
code define if they can be canceled when they are created.
To cancel an event, call the {{domxref("event.preventDefault", "preventDefault()")}}
method on the event. This keeps the implementation from executing the default action
that is associated with the event.
Event listeners that handle multiple kinds of events may want to check
`cancelable` before invoking their {{domxref("event.preventDefault",
"preventDefault()")}} methods.
## Value
A boolean value, which is `true` if the event can be
canceled.
## Example
For example, browser vendors are proposing that the {{domxref("Element/wheel_event",
"wheel")}} event can only be canceled [the first time the listener is called](https://github.com/WICG/interventions/issues/33) — any following `wheel` events cannot be
canceled.
```js
function preventScrollWheel(event) {
if (typeof event.cancelable !== "boolean" || event.cancelable) {
// The event can be canceled, so we do so.
event.preventDefault();
} else {
// The event cannot be canceled, so it is not safe
// to call preventDefault() on it.
console.warn(`The following event couldn't be canceled:`);
console.dir(event);
}
}
document.addEventListener("wheel", preventScrollWheel);
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api/event | data/mdn-content/files/en-us/web/api/event/initevent/index.md | ---
title: "Event: initEvent() method"
short-title: initEvent()
slug: Web/API/Event/initEvent
page-type: web-api-instance-method
status:
- deprecated
browser-compat: api.Event.initEvent
---
{{ ApiRef("DOM") }}{{deprecated_header}}
The **`Event.initEvent()`** method is used to initialize the
value of an {{ domxref("event") }} created using {{domxref("Document.createEvent()")}}.
Events initialized in this way must have been created with the
{{domxref("Document.createEvent()") }} method.
This method must be called to set the event's values
before it is dispatched using {{ domxref("EventTarget.dispatchEvent()") }}.
Once the event has been dispatched, calling it again does nothing.
> **Note:** _Do not use this method anymore as it is deprecated._
> Instead use specific event constructors, like {{domxref("Event.Event", "Event()")}}.
> The page on [Creating and triggering events](/en-US/docs/Web/Events/Creating_and_triggering_events) gives more information about the way to use these.
## Syntax
```js-nolint
event.initEvent(type, bubbles, cancelable)
```
### Parameters
- `type`
- : A string defining the type of event.
- `bubbles`
- : A boolean value deciding whether the event should bubble up through the
event chain or not. Once set, the read-only property {{ domxref("Event.bubbles") }}
will give its value.
- `cancelable`
- : A boolean value defining whether the event can be canceled. Once set, the
read-only property {{ domxref("Event.cancelable") }} will give its value.
### Return value
None.
## Example
```js
// Create the event.
const event = document.createEvent("Event");
// Create a click event that bubbles up and
// cannot be canceled
event.initEvent("click", true, false);
// Listen for the event.
elem.addEventListener(
"click",
(e) => {
// e.target matches elem
},
false,
);
elem.dispatchEvent(event);
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- The constructor to use instead of this deprecated method:
{{domxref("Event.Event", "Event()")}}. To create more specific event interfaces than `Event`, use the constructor defined for the desired event interface.
| 0 |
data/mdn-content/files/en-us/web/api/event | data/mdn-content/files/en-us/web/api/event/defaultprevented/index.md | ---
title: "Event: defaultPrevented property"
short-title: defaultPrevented
slug: Web/API/Event/defaultPrevented
page-type: web-api-instance-property
browser-compat: api.Event.defaultPrevented
---
{{ APIRef("DOM") }}
The **`defaultPrevented`** read-only property of the {{domxref("Event")}} interface returns a boolean value indicating whether or not the call to {{ domxref("Event.preventDefault()") }} canceled the event.
## Value
A boolean value, where `true` indicates that the default {{glossary("user agent")}} action was prevented, and `false` indicates that it was not.
## Example
This example logs attempts to visit links from two {{htmlElement("a")}} elements. JavaScript is used to prevent the second link from working.
### HTML
```html
<p><a id="link1" href="#link1">Visit link 1</a></p>
<p><a id="link2" href="#link2">Try to visit link 2</a> (you can't)</p>
<p id="log"></p>
```
### JavaScript
```js
function stopLink(event) {
event.preventDefault();
}
function logClick(event) {
const log = document.getElementById("log");
if (event.target.tagName === "A") {
log.innerText = event.defaultPrevented
? `Sorry, but you cannot visit this link!\n${log.innerText}`
: `Visiting link…\n${log.innerText}`;
}
}
const a = document.getElementById("link2");
a.addEventListener("click", stopLink);
document.addEventListener("click", logClick);
```
### Result
{{EmbedLiveSample("Example")}}
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api | data/mdn-content/files/en-us/web/api/eyedropper_api/index.md | ---
title: EyeDropper API
slug: Web/API/EyeDropper_API
page-type: web-api-overview
status:
- experimental
browser-compat: api.EyeDropper
---
{{securecontext_header}}{{DefaultAPISidebar("EyeDropper API")}}{{SeeCompatTable}}
The **EyeDropper API** provides a mechanism for creating an eyedropper tool. Using this tool, users can sample colors from their screens, including outside of the browser window.
## Concept
Creative applications often allow users to sample colors from drawings or shapes in the application to reuse. Web applications can use the **EyeDropper API** to provide a similar eyedropper mode, provided by the browser.
Using the API, a web application can start the eyedropper mode. Once started, the cursor changes to indicate to the user that the mode is active. The user can then either select a color from anywhere on the screen, or dismiss the eyedropper mode by pressing <kbd>Escape</kbd>.
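For illustration, here is a minimal sketch of starting the eyedropper mode from a button click (the `#eyedropper-button` element is a hypothetical trigger; `open()` must be called in response to a user action):

```js
document.querySelector("#eyedropper-button").addEventListener("click", () => {
  const eyeDropper = new EyeDropper();
  eyeDropper
    .open()
    .then((result) => {
      // result.sRGBHex is the selected color, e.g. "#664acc"
      console.log(result.sRGBHex);
    })
    .catch((error) => {
      // The user dismissed the eyedropper mode, e.g. by pressing Escape
      console.log(error);
    });
});
```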
## Security and privacy measures
To prevent malicious websites from getting pixel data from a user's screen without them realizing, the **EyeDropper API** implements the following measures:
- The API doesn't let the eyedropper mode start without user intent. The {{domxref("EyeDropper.open()")}} method can only be called in response to a user action (such as a button click).
- No pixel information can be retrieved without user intent. The promise returned by {{domxref("EyeDropper.open()")}} only resolves to a color value in response to a user action (clicking on a pixel). So the eyedropper cannot be used in the background without the user noticing it.
- To help users notice the eyedropper mode more easily, browsers make it obvious. The normal mouse cursor disappears after a short delay and a magnifying glass appears instead. There is also a delay between when the eyedropper mode starts and when the user can select a pixel, to ensure the user has had time to see the magnifying glass.
- Users are also able to cancel the eyedropper mode at any time (by pressing the <kbd>Escape</kbd> key).
## Interfaces
- {{DOMxRef("EyeDropper")}} {{Experimental_Inline}}
- : The **`EyeDropper`** interface represents an instance of an eyedropper tool that can be opened and used by the user to select colors from the screen.
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Picking colors of any pixel on the screen with the EyeDropper API](https://developer.chrome.com/docs/capabilities/web-apis/eyedropper)
- [The EyeDropper API W3C/SMPTE Joint Workshop](https://www.w3.org/2021/03/media-production-workshop/talks/patrick-brosset-eyedropper-api.html)
| 0 |
data/mdn-content/files/en-us/web/api | data/mdn-content/files/en-us/web/api/presentationconnectioncloseevent/index.md | ---
title: PresentationConnectionCloseEvent
slug: Web/API/PresentationConnectionCloseEvent
page-type: web-api-interface
status:
- experimental
browser-compat: api.PresentationConnectionCloseEvent
---
{{SeeCompatTable}}{{securecontext_header}}{{APIRef("Presentation API")}}
The **`PresentationConnectionCloseEvent`** interface of the [Presentation API](/en-US/docs/Web/API/Presentation_API) is fired on a {{domxref("PresentationConnection")}} when it is closed.
{{InheritanceDiagram}}
## Constructor
- {{domxref("PresentationConnectionCloseEvent.PresentationConnectionCloseEvent", "PresentationConnectionCloseEvent()")}} {{Experimental_Inline}}
- : Creates a new PresentationConnectionCloseEvent.
## Instance properties
- {{DOMxRef("PresentationConnectionCloseEvent.message")}} {{ReadOnlyInline}} {{Experimental_Inline}}
- : A human-readable message that provides more information about why the connection was closed.
- {{DOMxRef("PresentationConnectionCloseEvent.reason")}} {{ReadOnlyInline}} {{Experimental_Inline}}
- : Indicates why the connection was closed. This property takes one of the following values: `error`, `closed`, or `wentaway`.
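## Examples

A minimal sketch of handling the event (here, `connection` is assumed to be an existing {{domxref("PresentationConnection")}}):

```js
connection.addEventListener("close", (event) => {
  console.log(`Connection closed (${event.reason}): ${event.message}`);
});
```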
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api | data/mdn-content/files/en-us/web/api/svgradialgradientelement/index.md | ---
title: SVGRadialGradientElement
slug: Web/API/SVGRadialGradientElement
page-type: web-api-interface
browser-compat: api.SVGRadialGradientElement
---
{{APIRef("SVG")}}
The **`SVGRadialGradientElement`** interface corresponds to the {{SVGElement("radialGradient")}} element.
{{InheritanceDiagram}}
## Instance properties
_This interface also inherits properties from its parent, {{domxref("SVGGradientElement")}}._
- {{domxref("SVGRadialGradientElement.cx")}} {{ReadOnlyInline}}
- : An {{domxref("SVGAnimatedLength")}} corresponding to the {{SVGAttr("cx")}} attribute of the given {{SVGElement("radialGradient")}} element.
- {{domxref("SVGRadialGradientElement.cy")}} {{ReadOnlyInline}}
- : An {{domxref("SVGAnimatedLength")}} corresponding to the {{SVGAttr("cy")}} attribute of the given {{SVGElement("radialGradient")}} element.
- {{domxref("SVGRadialGradientElement.r")}} {{ReadOnlyInline}}
- : An {{domxref("SVGAnimatedLength")}} corresponding to the {{SVGAttr("r")}} attribute of the given {{SVGElement("radialGradient")}} element.
- {{domxref("SVGRadialGradientElement.fx")}} {{ReadOnlyInline}}
- : An {{domxref("SVGAnimatedLength")}} corresponding to the {{SVGAttr("fx")}} attribute of the given {{SVGElement("radialGradient")}} element.
- {{domxref("SVGRadialGradientElement.fy")}} {{ReadOnlyInline}}
- : An {{domxref("SVGAnimatedLength")}} corresponding to the {{SVGAttr("fy")}} attribute of the given {{SVGElement("radialGradient")}} element.
## Instance methods
_This interface doesn't implement any specific methods, but inherits methods from its parent interface, {{domxref("SVGGradientElement")}}._
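## Examples

Assuming an SVG document that contains a `<radialGradient>` element, a minimal sketch of reading one of its geometry properties:

```js
const radialGradient = document.querySelector("radialGradient");

// Log the base (non-animated) value of the cx attribute, in user units
console.log(radialGradient.cx.baseVal.value);
```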
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api | data/mdn-content/files/en-us/web/api/bluetoothuuid/index.md | ---
title: BluetoothUUID
slug: Web/API/BluetoothUUID
page-type: web-api-interface
browser-compat: api.BluetoothUUID
---
{{APIRef("Bluetooth API")}}
The **`BluetoothUUID`** interface of the {{domxref('Web Bluetooth API')}} provides a way to look up Universally Unique Identifier (UUID) values by name in the
[registry](https://www.bluetooth.com/specifications/assigned-numbers/) maintained by the Bluetooth SIG.
## Description
A UUID string is a 128-bit UUID, for example `00001818-0000-1000-8000-00805f9b34fb`.
The Bluetooth registry contains lists of descriptors, services, and characteristics identified by these UUIDs in addition to a 16- or 32-bit alias, and a name.
The `BluetoothUUID` interface provides methods to retrieve these 128-bit UUIDs.
## Static methods
- [`BluetoothUUID.canonicalUUID()`](/en-US/docs/Web/API/BluetoothUUID/canonicalUUID_static) {{Experimental_Inline}}
- : Returns the 128-bit UUID when passed the 16- or 32-bit UUID alias.
- [`BluetoothUUID.getCharacteristic()`](/en-US/docs/Web/API/BluetoothUUID/getCharacteristic_static) {{Experimental_Inline}}
- : Returns the 128-bit UUID representing a registered characteristic when passed a name or the 16- or 32-bit UUID alias.
- [`BluetoothUUID.getDescriptor()`](/en-US/docs/Web/API/BluetoothUUID/getDescriptor_static) {{Experimental_Inline}}
- : Returns a UUID representing a registered descriptor when passed a name or the 16- or 32-bit UUID alias.
- [`BluetoothUUID.getService()`](/en-US/docs/Web/API/BluetoothUUID/getService_static) {{Experimental_Inline}}
- : Returns a UUID representing a registered service when passed a name or the 16- or 32-bit UUID alias.
## Examples
In the following example the UUID representing the service named `device_information` is returned and printed to the console.
```js
let result = BluetoothUUID.getService("device_information");
console.log(result); // "0000180a-0000-1000-8000-00805f9b34fb"
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api/bluetoothuuid | data/mdn-content/files/en-us/web/api/bluetoothuuid/getdescriptor_static/index.md | ---
title: "BluetoothUUID: getDescriptor() static method"
short-title: getDescriptor()
slug: Web/API/BluetoothUUID/getDescriptor_static
page-type: web-api-static-method
status:
- experimental
browser-compat: api.BluetoothUUID.getDescriptor_static
---
{{APIRef("Bluetooth API")}}{{SeeCompatTable}}
The **`getDescriptor()`** static method of the {{domxref("BluetoothUUID")}} interface returns a UUID representing a registered descriptor when passed a name or the 16- or 32-bit UUID alias.
## Syntax
```js-nolint
BluetoothUUID.getDescriptor(name)
```
### Parameters
- `name`
- : A string containing the name of the descriptor.
### Return value
A 128-bit UUID.
### Exceptions
- {{jsxref("TypeError")}}
- : Thrown if `name` does not appear in the registry.
## Examples
In the following example the UUID representing the descriptor named `time_trigger_setting` is returned and printed to the console.
```js
let result = BluetoothUUID.getDescriptor("time_trigger_setting");
console.log(result); // "0000290e-0000-1000-8000-00805f9b34fb"
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api/bluetoothuuid | data/mdn-content/files/en-us/web/api/bluetoothuuid/getservice_static/index.md | ---
title: "BluetoothUUID: getService() static method"
short-title: getService()
slug: Web/API/BluetoothUUID/getService_static
page-type: web-api-static-method
status:
- experimental
browser-compat: api.BluetoothUUID.getService_static
---
{{APIRef("Bluetooth API")}}{{SeeCompatTable}}
The **`getService()`** static method of the {{domxref("BluetoothUUID")}} interface returns a UUID representing a registered service when passed a name or the 16- or 32-bit UUID alias.
## Syntax
```js-nolint
BluetoothUUID.getService(name)
```
### Parameters
- `name`
- : A string containing the name of the service.
### Return value
A 128-bit UUID.
### Exceptions
- {{jsxref("TypeError")}}
- : Thrown if `name` does not appear in the registry.
## Examples
In the following example the UUID representing the service named `device_information` is returned and printed to the console.
```js
let result = BluetoothUUID.getService("device_information");
console.log(result); // "0000180a-0000-1000-8000-00805f9b34fb"
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api/bluetoothuuid | data/mdn-content/files/en-us/web/api/bluetoothuuid/canonicaluuid_static/index.md | ---
title: "BluetoothUUID: canonicalUUID() static method"
short-title: canonicalUUID()
slug: Web/API/BluetoothUUID/canonicalUUID_static
page-type: web-api-static-method
status:
- experimental
browser-compat: api.BluetoothUUID.canonicalUUID_static
---
{{APIRef("Bluetooth API")}}{{SeeCompatTable}}
The **`canonicalUUID()`** static method of the {{domxref("BluetoothUUID")}} interface returns the 128-bit UUID when passed a 16- or 32-bit UUID alias.
## Syntax
```js-nolint
BluetoothUUID.canonicalUUID(alias)
```
### Parameters
- `alias`
- : A string containing a 16-bit or 32-bit UUID alias.
### Return value
A 128-bit UUID.
## Examples
In the following example the UUID represented by the alias `0x110A` is returned and printed to the console.
```js
let result = BluetoothUUID.canonicalUUID("0x110A");
console.log(result); // "0000110a-0000-1000-8000-00805f9b34fb"
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api/bluetoothuuid | data/mdn-content/files/en-us/web/api/bluetoothuuid/getcharacteristic_static/index.md | ---
title: "BluetoothUUID: getCharacteristic() static method"
short-title: getCharacteristic()
slug: Web/API/BluetoothUUID/getCharacteristic_static
page-type: web-api-static-method
status:
- experimental
browser-compat: api.BluetoothUUID.getCharacteristic_static
---
{{APIRef("Bluetooth API")}}{{SeeCompatTable}}
The **`getCharacteristic()`** static method of the {{domxref("BluetoothUUID")}} interface returns a UUID representing a registered characteristic when passed a name or the 16- or 32-bit UUID alias.
## Syntax
```js-nolint
BluetoothUUID.getCharacteristic(name)
```
### Parameters
- `name`
- : A string containing the name of the characteristic.
### Return value
A 128-bit UUID.
### Exceptions
- {{jsxref("TypeError")}}
- : Thrown if `name` does not appear in the registry.
## Examples
In the following example the UUID representing the characteristic named `apparent_wind_direction` is returned and printed to the console.
```js
let result = BluetoothUUID.getCharacteristic("apparent_wind_direction");
console.log(result); // "00002a73-0000-1000-8000-00805f9b34fb"
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api | data/mdn-content/files/en-us/web/api/pictureinpictureevent/index.md | ---
title: PictureInPictureEvent
slug: Web/API/PictureInPictureEvent
page-type: web-api-interface
browser-compat: api.PictureInPictureEvent
---
{{APIRef("Picture-in-Picture API")}}
The **`PictureInPictureEvent`** interface represents picture-in-picture-related events, including {{domxref("HTMLVideoElement/enterpictureinpicture_event", "enterpictureinpicture")}}, {{domxref("HTMLVideoElement/leavepictureinpicture_event", "leavepictureinpicture")}}, and {{domxref("PictureInPictureWindow/resize_event", "resize")}}.
{{InheritanceDiagram}}
## Constructor
- {{domxref("PictureInPictureEvent.PictureInPictureEvent", "PictureInPictureEvent()")}}
- : Creates a `PictureInPictureEvent` event with the given parameters.
## Instance properties
_This interface also inherits properties from its parent {{domxref("Event")}}_.
- {{domxref("PictureInPictureEvent.pictureInPictureWindow")}}
- : Returns the {{domxref("PictureInPictureWindow")}} the event relates to.
## Instance methods
_This interface also inherits methods from its parent {{domxref("Event")}}_.
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- The {{domxref("Event")}} base interface
| 0 |
data/mdn-content/files/en-us/web/api/pictureinpictureevent | data/mdn-content/files/en-us/web/api/pictureinpictureevent/pictureinpicturewindow/index.md | ---
title: "PictureInPictureEvent: pictureInPictureWindow property"
short-title: pictureInPictureWindow
slug: Web/API/PictureInPictureEvent/pictureInPictureWindow
page-type: web-api-instance-property
browser-compat: api.PictureInPictureEvent.pictureInPictureWindow
---
{{APIRef("Picture-in-Picture API")}}
The read-only **`pictureInPictureWindow`** property of the {{domxref("PictureInPictureEvent")}} interface returns the {{domxref("PictureInPictureWindow")}} the event relates to.
## Value
A {{domxref("PictureInPictureWindow")}}.
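## Examples

A minimal sketch that logs the size of the Picture-in-Picture window once a video enters Picture-in-Picture mode:

```js
const video = document.querySelector("video");

video.addEventListener("enterpictureinpicture", (event) => {
  const pipWindow = event.pictureInPictureWindow;
  console.log(`PiP window size: ${pipWindow.width} x ${pipWindow.height}`);
});
```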
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Picture-in-Picture API](/en-US/docs/Web/API/Picture-in-Picture_API)
| 0 |
data/mdn-content/files/en-us/web/api/pictureinpictureevent | data/mdn-content/files/en-us/web/api/pictureinpictureevent/pictureinpictureevent/index.md | ---
title: "PictureInPictureEvent: PictureInPictureEvent() constructor"
short-title: PictureInPictureEvent()
slug: Web/API/PictureInPictureEvent/PictureInPictureEvent
page-type: web-api-constructor
browser-compat: api.PictureInPictureEvent.PictureInPictureEvent
---
{{APIRef("Picture-in-Picture API")}}
The **`PictureInPictureEvent()`** constructor returns a new {{domxref("PictureInPictureEvent")}} object.
## Syntax
```js-nolint
new PictureInPictureEvent(type, options)
```
### Parameters
- `type`
- : A string representing the name of the event. It is case-sensitive and browsers set it to `enterpictureinpicture`, `leavepictureinpicture`, or `resize`.
- `options`
- : An object that, _in addition to the properties defined in {{domxref("Event/Event", "Event()")}}_, can have the following properties:
- `pictureInPictureWindow`
- : A {{domxref("PictureInPictureWindow")}}.
### Return value
A new {{domxref("PictureInPictureEvent")}} object.
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- The {{domxref("PictureInPictureEvent")}} interface it belongs to.
| 0 |
data/mdn-content/files/en-us/web/api | data/mdn-content/files/en-us/web/api/web_audio_api/index.md | ---
title: Web Audio API
slug: Web/API/Web_Audio_API
page-type: web-api-overview
browser-compat: api.AudioContext
---
{{DefaultAPISidebar("Web Audio API")}}
The Web Audio API provides a powerful and versatile system for controlling audio on the Web, allowing developers to choose audio sources, add effects to audio, create audio visualizations, apply spatial effects (such as panning) and much more.
## Web audio concepts and usage
The Web Audio API involves handling audio operations inside an **audio context**, and has been designed to allow **modular routing**. Basic audio operations are performed with **audio nodes**, which are linked together to form an **audio routing graph**. Several sources — with different types of channel layout — are supported even within a single context. This modular design provides the flexibility to create complex audio functions with dynamic effects.
Audio nodes are linked into chains and simple webs by their inputs and outputs. They typically start with one or more sources. Sources provide arrays of sound intensities (samples) at very small timeslices, often tens of thousands of them per second. These can either be computed mathematically (as with an {{domxref("OscillatorNode")}}), or they can be recordings from sound/video files (like {{domxref("AudioBufferSourceNode")}} and {{domxref("MediaElementAudioSourceNode")}}) or audio streams ({{domxref("MediaStreamAudioSourceNode")}}). In fact, sound files are just recordings of sound intensities themselves, which come in from microphones or electric instruments, and get mixed down into a single, complicated wave.
Outputs of these nodes could be linked to inputs of others, which mix or modify these streams of sound samples into different streams. A common modification is multiplying the samples by a value to make them louder or quieter (as is the case with {{domxref("GainNode")}}). Once the sound has been sufficiently processed for the intended effect, it can be linked to the input of a destination ({{domxref("BaseAudioContext.destination")}}), which sends the sound to the speakers or headphones. This last connection is only necessary if the user is supposed to hear the audio.
A simple, typical workflow for web audio would look something like this:
1. Create audio context
2. Inside the context, create sources — such as `<audio>`, oscillator, stream
3. Create effects nodes, such as reverb, biquad filter, panner, compressor
4. Choose final destination of audio, for example your system speakers
5. Connect the sources up to the effects, and the effects to the destination.

Timing is controlled with high precision and low latency, allowing developers to write code that responds accurately to events and is able to target specific samples, even at a high sample rate. So applications such as drum machines and sequencers are well within reach.
The Web Audio API also allows us to control how audio is _spatialized_. Using a system based on a _source-listener model_, it allows control of the _panning model_ and deals with _distance-induced attenuation_ induced by a moving source (or moving listener).
> **Note:** You can read about the theory of the Web Audio API in a lot more detail in our article [Basic concepts behind Web Audio API](/en-US/docs/Web/API/Web_Audio_API/Basic_concepts_behind_Web_Audio_API).
## Web Audio API target audience
The Web Audio API can seem intimidating to those who aren't familiar with audio or music terminology, and because it incorporates a great deal of functionality, it can prove difficult to get started with if you are a developer.
It can be used to incorporate audio into your website or application, by [providing atmosphere like futurelibrary.no](https://www.futurelibrary.no/), or [auditory feedback on forms](https://css-tricks.com/form-validation-web-audio/). However, it can also be used to create _advanced_ interactive instruments. With that in mind, it is suitable for developers and musicians alike.
We have a [simple introductory tutorial](/en-US/docs/Web/API/Web_Audio_API/Using_Web_Audio_API) for those that are familiar with programming but need a good introduction to some of the terms and structure of the API.
There's also a [Basic Concepts Behind Web Audio API](/en-US/docs/Web/API/Web_Audio_API/Basic_concepts_behind_Web_Audio_API) article, to help you understand the way digital audio works, specifically in the realm of the API. This also includes a good introduction to some of the concepts the API is built upon.
Learning coding is like playing cards — you learn the rules, then you play, then you go back and learn the rules again, then you play again. So if some of the theory doesn't quite fit after the first tutorial and article, there's an [advanced tutorial](/en-US/docs/Web/API/Web_Audio_API/Advanced_techniques) which extends the first one to help you practice what you've learnt, and apply some more advanced techniques to build up a step sequencer.
We also have other tutorials and comprehensive reference material available that covers all features of the API. See the sidebar on this page for more.
If you are more familiar with the musical side of things, are familiar with music theory concepts, and want to start building instruments, then you can go ahead and start building things with the advanced tutorial and others as a guide (the above-linked tutorial covers scheduling notes, creating bespoke oscillators and envelopes, as well as an LFO, among other things).
If you aren't familiar with the programming basics, you might want to consult some beginner's JavaScript tutorials first and then come back here — see our [Beginner's JavaScript learning module](/en-US/docs/Learn/JavaScript) for a great place to begin.
## Web Audio API interfaces
The Web Audio API has a number of interfaces and associated events, which we have split up into nine categories of functionality.
### General audio graph definition
General containers and definitions that shape audio graphs in Web Audio API usage.
- {{domxref("AudioContext")}}
- : The **`AudioContext`** interface represents an audio-processing graph built from audio modules linked together, each represented by an {{domxref("AudioNode")}}. An audio context controls the creation of the nodes it contains and the execution of the audio processing, or decoding. You need to create an `AudioContext` before you do anything else, as everything happens inside a context.
- {{domxref("AudioNode")}}
- : The **`AudioNode`** interface represents an audio-processing module like an _audio source_ (e.g. an HTML {{HTMLElement("audio")}} or {{HTMLElement("video")}} element), _audio destination_, _intermediate processing module_ (e.g. a filter like {{domxref("BiquadFilterNode")}}, or _volume control_ like {{domxref("GainNode")}}).
- {{domxref("AudioParam")}}
- : The **`AudioParam`** interface represents an audio-related parameter, like one of an {{domxref("AudioNode")}}. It can be set to a specific value or a change in value, and can be scheduled to happen at a specific time and following a specific pattern.
- {{domxref("AudioParamMap")}}
- : Provides a map-like interface to a group of {{domxref("AudioParam")}} interfaces, which means it provides the methods `forEach()`, `get()`, `has()`, `keys()`, and `values()`, as well as a `size` property.
- {{domxref("BaseAudioContext")}}
- : The **`BaseAudioContext`** interface acts as a base definition for online and offline audio-processing graphs, as represented by {{domxref("AudioContext")}} and {{domxref("OfflineAudioContext")}} respectively. You wouldn't use `BaseAudioContext` directly — you'd use its features via one of these two inheriting interfaces.
- The {{domxref("AudioScheduledSourceNode/ended_event", "ended")}} event
- : The `ended` event is fired when playback has stopped because the end of the media was reached.
### Defining audio sources
Interfaces that define audio sources for use in the Web Audio API.
- {{domxref("AudioScheduledSourceNode")}}
- : The **`AudioScheduledSourceNode`** is a parent interface for several types of audio source node interfaces. It is an {{domxref("AudioNode")}}.
- {{domxref("OscillatorNode")}}
- : The **`OscillatorNode`** interface represents a periodic waveform, such as a sine or triangle wave. It is an {{domxref("AudioNode")}} audio-processing module that causes a given _frequency_ of wave to be created.
- {{domxref("AudioBuffer")}}
- : The **`AudioBuffer`** interface represents a short audio asset residing in memory, created from an audio file using the {{ domxref("BaseAudioContext.decodeAudioData") }} method, or created with raw data using {{ domxref("BaseAudioContext.createBuffer") }}. Once decoded into this form, the audio can then be put into an {{ domxref("AudioBufferSourceNode") }}.
- {{domxref("AudioBufferSourceNode")}}
- : The **`AudioBufferSourceNode`** interface represents an audio source consisting of in-memory audio data, stored in an {{domxref("AudioBuffer")}}. It is an {{domxref("AudioNode")}} that acts as an audio source.
- {{domxref("MediaElementAudioSourceNode")}}
- : The **`MediaElementAudioSourceNode`** interface represents an audio source consisting of an HTML {{ htmlelement("audio") }} or {{ htmlelement("video") }} element. It is an {{domxref("AudioNode")}} that acts as an audio source.
- {{domxref("MediaStreamAudioSourceNode")}}
- : The **`MediaStreamAudioSourceNode`** interface represents an audio source consisting of a {{domxref("MediaStream")}} (such as a webcam, microphone, or a stream being sent from a remote computer). If multiple audio tracks are present on the stream, the track whose {{domxref("MediaStreamTrack.id", "id")}} comes first lexicographically (alphabetically) is used. It is an {{domxref("AudioNode")}} that acts as an audio source.
- {{domxref("MediaStreamTrackAudioSourceNode")}}
- : A node of type {{domxref("MediaStreamTrackAudioSourceNode")}} represents an audio source whose data comes from a {{domxref("MediaStreamTrack")}}. When creating the node using the {{domxref("AudioContext.createMediaStreamTrackSource", "createMediaStreamTrackSource()")}} method, you specify which track to use. This provides more control than `MediaStreamAudioSourceNode`.
### Defining audio effects filters
Interfaces for defining effects that you want to apply to your audio sources.
- {{domxref("BiquadFilterNode")}}
- : The **`BiquadFilterNode`** interface represents a simple low-order filter. It is an {{domxref("AudioNode")}} that can represent different kinds of filters, tone control devices, or graphic equalizers. A `BiquadFilterNode` always has exactly one input and one output.
- {{domxref("ConvolverNode")}}
- : The **`ConvolverNode`** interface is an {{domxref("AudioNode")}} that performs a Linear Convolution on a given {{domxref("AudioBuffer")}}, and is often used to achieve a reverb effect.
- {{domxref("DelayNode")}}
- : The **`DelayNode`** interface represents a [delay-line](https://en.wikipedia.org/wiki/Digital_delay_line); an {{domxref("AudioNode")}} audio-processing module that causes a delay between the arrival of an input data and its propagation to the output.
- {{domxref("DynamicsCompressorNode")}}
- : The **`DynamicsCompressorNode`** interface provides a compression effect, which lowers the volume of the loudest parts of the signal in order to help prevent clipping and distortion that can occur when multiple sounds are played and multiplexed together at once.
- {{domxref("GainNode")}}
- : The **`GainNode`** interface represents a change in volume. It is an {{domxref("AudioNode")}} audio-processing module that causes a given _gain_ to be applied to the input data before its propagation to the output.
- {{domxref("WaveShaperNode")}}
- : The **`WaveShaperNode`** interface represents a non-linear distorter. It is an {{domxref("AudioNode")}} that uses a curve to apply a waveshaping distortion to the signal. Besides obvious distortion effects, it is often used to add a warm feeling to the signal.
- {{domxref("PeriodicWave")}}
- : Describes a periodic waveform that can be used to shape the output of an {{ domxref("OscillatorNode") }}.
- {{domxref("IIRFilterNode")}}
- : Implements a general [infinite impulse response](https://en.wikipedia.org/wiki/Infinite_impulse_response) (IIR) filter; this type of filter can be used to implement tone-control devices and graphic equalizers as well.
### Defining audio destinations
Once you are done processing your audio, these interfaces define where to output it.
- {{domxref("AudioDestinationNode")}}
- : The **`AudioDestinationNode`** interface represents the end destination of an audio source in a given context — usually the speakers of your device.
- {{domxref("MediaStreamAudioDestinationNode")}}
- : The **`MediaStreamAudioDestinationNode`** interface represents an audio destination consisting of a [WebRTC](/en-US/docs/Web/API/WebRTC_API) {{domxref("MediaStream")}} with a single audio {{domxref("MediaStreamTrack")}}, which can be used in a similar way to a {{domxref("MediaStream")}} obtained from {{ domxref("MediaDevices.getUserMedia", "getUserMedia()") }}. It is an {{domxref("AudioNode")}} that acts as an audio destination.
### Data analysis and visualization
If you want to extract time, frequency, and other data from your audio, the `AnalyserNode` is what you need.
- {{domxref("AnalyserNode")}}
- : The **`AnalyserNode`** interface represents a node able to provide real-time frequency and time-domain analysis information, for the purposes of data analysis and visualization.
### Splitting and merging audio channels
To split and merge audio channels, you'll use these interfaces.
- {{domxref("ChannelSplitterNode")}}
- : The **`ChannelSplitterNode`** interface separates the different channels of an audio source out into a set of _mono_ outputs.
- {{domxref("ChannelMergerNode")}}
- : The **`ChannelMergerNode`** interface reunites different mono inputs into a single output. Each input will be used to fill a channel of the output.
### Audio spatialization
These interfaces allow you to add audio spatialization panning effects to your audio sources.
- {{domxref("AudioListener")}}
- : The **`AudioListener`** interface represents the position and orientation of the unique person listening to the audio scene used in audio spatialization.
- {{domxref("PannerNode")}}
- : The **`PannerNode`** interface represents the position and behavior of an audio source signal in 3D space, allowing you to create complex panning effects.
- {{domxref("StereoPannerNode")}}
- : The **`StereoPannerNode`** interface represents a simple stereo panner node that can be used to pan an audio stream left or right.
### Audio processing in JavaScript
Using audio worklets, you can define custom audio nodes written in JavaScript or [WebAssembly](/en-US/docs/WebAssembly). Audio worklets implement the {{domxref("Worklet")}} interface, a lightweight version of the {{domxref("Worker")}} interface.
- {{domxref("AudioWorklet")}}
- : The `AudioWorklet` interface is available through the {{domxref("AudioContext")}} object's {{domxref("BaseAudioContext.audioWorklet", "audioWorklet")}}, and lets you add modules to the audio worklet to be executed off the main thread.
- {{domxref("AudioWorkletNode")}}
- : The `AudioWorkletNode` interface represents an {{domxref("AudioNode")}} that is embedded into an audio graph and can pass messages to the corresponding `AudioWorkletProcessor`.
- {{domxref("AudioWorkletProcessor")}}
- : The `AudioWorkletProcessor` interface represents audio processing code running in an `AudioWorkletGlobalScope` that generates, processes, or analyzes audio directly, and can pass messages to the corresponding `AudioWorkletNode`.
- {{domxref("AudioWorkletGlobalScope")}}
- : The `AudioWorkletGlobalScope` interface is a `WorkletGlobalScope`-derived object representing a worker context in which an audio processing script is run; it is designed to enable the generation, processing, and analysis of audio data directly using JavaScript in a worklet thread rather than on the main thread.
#### Obsolete: script processor nodes
Before audio worklets were defined, the Web Audio API used the `ScriptProcessorNode` for JavaScript-based audio processing. Because its code runs on the main thread, it performs poorly. The `ScriptProcessorNode` is kept for historic reasons but is marked as deprecated.
- {{domxref("ScriptProcessorNode")}} {{deprecated_inline}}
- : The **`ScriptProcessorNode`** interface allows the generation, processing, or analyzing of audio using JavaScript. It is an {{domxref("AudioNode")}} audio-processing module that is linked to two buffers, one containing the current input, one containing the output. An event, implementing the {{domxref("AudioProcessingEvent")}} interface, is sent to the object each time the input buffer contains new data, and the event handler terminates when it has filled the output buffer with data.
- {{domxref("ScriptProcessorNode.audioprocess_event", "audioprocess")}} (event) {{deprecated_inline}}
- : The `audioprocess` event is fired when an input buffer of a Web Audio API {{domxref("ScriptProcessorNode")}} is ready to be processed.
- {{domxref("AudioProcessingEvent")}} {{deprecated_inline}}
- : The `AudioProcessingEvent` represents events that occur when a {{domxref("ScriptProcessorNode")}} input buffer is ready to be processed.
### Offline/background audio processing
It is possible to process/render an audio graph very quickly in the background — rendering it to an {{domxref("AudioBuffer")}} rather than to the device's speakers — with the following.
- {{domxref("OfflineAudioContext")}}
- : The **`OfflineAudioContext`** interface is an {{domxref("AudioContext")}} interface representing an audio-processing graph built from {{domxref("AudioNode")}}s linked together. In contrast with a standard `AudioContext`, an `OfflineAudioContext` doesn't render the audio to the device hardware; instead, it generates it, _as fast as it can_, into a buffer (see the sketch after this list).
- {{domxref("OfflineAudioContext/complete_event", "complete")}} (event)
- : The `complete` event is fired when the rendering of an {{domxref("OfflineAudioContext")}} is terminated.
- {{domxref("OfflineAudioCompletionEvent")}}
- : The `OfflineAudioCompletionEvent` represents events that occur when the processing of an {{domxref("OfflineAudioContext")}} is terminated. The {{domxref("OfflineAudioContext/complete_event", "complete")}} event uses this interface.
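For example, a minimal sketch of offline rendering:

```js
// Render two seconds of a 440 Hz tone into an AudioBuffer
// instead of playing it through the speakers
const offlineCtx = new OfflineAudioContext(2, 44100 * 2, 44100);
const osc = offlineCtx.createOscillator();
osc.connect(offlineCtx.destination);
osc.start();
offlineCtx.startRendering().then((renderedBuffer) => {
  console.log(`Rendered ${renderedBuffer.duration} seconds of audio`);
});
```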
## Guides and tutorials
{{LandingPageListSubpages}}
## Examples
You can find a number of examples at our [webaudio-example repo](https://github.com/mdn/webaudio-examples/) on GitHub.
## Specifications
{{Specifications}}
## Browser compatibility
### AudioContext
{{Compat}}
## See also
### Tutorials/guides
- [Basic concepts behind Web Audio API](/en-US/docs/Web/API/Web_Audio_API/Basic_concepts_behind_Web_Audio_API)
- [Using the Web Audio API](/en-US/docs/Web/API/Web_Audio_API/Using_Web_Audio_API)
- [Advanced techniques: creating sound, sequencing, timing, scheduling](/en-US/docs/Web/API/Web_Audio_API/Advanced_techniques)
- [Autoplay guide for media and Web Audio APIs](/en-US/docs/Web/Media/Autoplay_guide)
- [Using IIR filters](/en-US/docs/Web/API/Web_Audio_API/Using_IIR_filters)
- [Visualizations with Web Audio API](/en-US/docs/Web/API/Web_Audio_API/Visualizations_with_Web_Audio_API)
- [Web audio spatialization basics](/en-US/docs/Web/API/Web_Audio_API/Web_audio_spatialization_basics)
- [Controlling multiple parameters with ConstantSourceNode](/en-US/docs/Web/API/Web_Audio_API/Controlling_multiple_parameters_with_ConstantSourceNode)
- [Mixing Positional Audio and WebGL (2012)](https://web.dev/articles/webaudio-positional-audio)
- [Developing Game Audio with the Web Audio API (2012)](https://web.dev/articles/webaudio-games)
### Libraries
- [Tones](https://github.com/bit101/tones): a simple library for playing specific tones/notes using the Web Audio API.
- [Tone.js](https://tonejs.github.io/): a framework for creating interactive music in the browser.
- [howler.js](https://github.com/goldfire/howler.js/): a JS audio library that defaults to [Web Audio API](https://webaudio.github.io/web-audio-api/) and falls back to [HTML Audio](https://html.spec.whatwg.org/multipage/media.html#the-audio-element), as well as providing other useful features.
- [Mooog](https://github.com/mattlima/mooog): jQuery-style chaining of AudioNodes, mixer-style sends/returns, and more.
- [XSound](https://xsound.jp/): Web Audio API Library for Synthesizer, Effects, Visualization, Recording, etc.
- [OpenLang](https://github.com/chrisjohndigital/OpenLang): HTML video language lab web application using the Web Audio API to record and combine video and audio from different sources into a single file ([source on GitHub](https://github.com/chrisjohndigital/OpenLang))
- [Pts.js](https://ptsjs.org/): Simplifies web audio visualization ([guide](https://ptsjs.org/guide/sound-0800))
### Related topics
- [Web media technologies](/en-US/docs/Web/Media)
- [Guide to media types and formats on the web](/en-US/docs/Web/Media/Formats)
| 0 |
data/mdn-content/files/en-us/web/api/web_audio_api | data/mdn-content/files/en-us/web/api/web_audio_api/using_iir_filters/index.md | ---
title: Using IIR filters
slug: Web/API/Web_Audio_API/Using_IIR_filters
page-type: guide
---
{{DefaultAPISidebar("Web Audio API")}}
The **`IIRFilterNode`** interface of the [Web Audio API](/en-US/docs/Web/API/Web_Audio_API) is an {{domxref("AudioNode")}} processor that implements a general [infinite impulse response](https://en.wikipedia.org/wiki/Infinite_impulse_response) (IIR) filter; this type of filter can be used to implement tone control devices and graphic equalizers, and the filter response parameters can be specified, so that it can be tuned as needed. This article looks at how to implement one, and use it in a simple example.
## Demo
Our simple example for this guide provides a play/pause button that starts and pauses audio play, and a toggle that turns an IIR filter on and off, altering the tone of the sound. It also provides a canvas on which is drawn the frequency response of the audio, so you can see what effect the IIR filter has.

You can check out the [full demo here on Codepen](https://codepen.io/Rumyra/pen/oPxvYB/). Also see the [source code on GitHub](https://github.com/mdn/webaudio-examples/tree/main/iirfilter-node). It includes some different coefficient values for different lowpass frequencies — you can change the value of the `filterNumber` constant to a value between 0 and 3 to check out the different available effects.
## Browser support
[IIR filters](/en-US/docs/Web/API/IIRFilterNode) are supported well across modern browsers, although they have been implemented more recently than some of the more longstanding Web Audio API features, like [Biquad filters](/en-US/docs/Web/API/BiquadFilterNode).
## The IIRFilterNode
The Web Audio API now comes with an {{domxref("IIRFilterNode")}} interface. But what is this and how does it differ from the {{domxref("BiquadFilterNode")}} we have already?
An IIR filter is an **infinite impulse response** filter. It's one of two primary types of filters used in audio and digital signal processing. The other type is FIR — a **finite impulse response** filter. There's a really good overview of [IIR filters and FIR filters here](https://dspguru.com/dsp/faqs/iir/basics/).
A [biquad filter](https://www.mathworks.com/help/dsphdl/ref/biquadfilter.html) is actually a _specific type_ of infinite impulse response filter. It's a commonly-used type and we already have it as a node in the Web Audio API. If you choose this node the hard work is done for you. For instance, if you want to filter lower frequencies from your sound, you can set the [type](/en-US/docs/Web/API/BiquadFilterNode/type) to `highpass` and then set which frequency to filter from (or cut off).
When you are using an {{domxref("IIRFilterNode")}} instead of a {{domxref("BiquadFilterNode")}} you are creating the filter yourself, rather than just choosing a pre-programmed type. So you can create a highpass filter, or a lowpass filter, or a more bespoke one. And this is where the IIR filter node is useful — you can create your own if none of the already available settings is right for what you want. As well as this, if your audio graph needed a highpass and a bandpass filter within it, you could just use one IIR filter node in place of the two biquad filter nodes you would otherwise need for this.
With the IIRFilter node it's up to you to set what `feedforward` and `feedback` values the filter needs — this determines the characteristics of the filter. The downside is that this involves some complex maths.
If you are looking to learn more there's some [information about the maths behind IIR filters here](https://ece.uccs.edu/~mwickert/ece2610/lecture_notes/ece2610_chap8.pdf). This enters the realms of signal processing theory — don't worry if you look at it and feel like it's not for you.
If you want to play with the IIR filter node and need some values to help along the way, there's [a table of already calculated values here](https://www.dspguide.com/CH20.PDF); on pages 4 & 5 of the linked PDF the `an` values refer to the `feedForward` values and the `bn` values refer to the `feedback`. [musicdsp.org](https://www.musicdsp.org/en/latest/) is also a great resource if you want to read more about different filters and how they are implemented digitally.
With that all in mind, let's take a look at the code to create an IIR filter with the Web Audio API.
## Setting our IIRFilter coefficients
When creating an IIR filter, we pass in the `feedforward` and `feedback` coefficients as options (coefficients is how we describe the values). Both of these parameters are arrays, neither of which can be larger than 20 items.
When setting our coefficients, the `feedforward` values can't all be set to zero, otherwise nothing would be sent to the filter. Something like this is acceptable:
```js
const feedForward = [0.00020298, 0.0004059599, 0.00020298];
```
Our `feedback` values cannot start with zero, otherwise on the first pass nothing would be sent back:
```js
const feedBack = [1.0126964558, -1.9991880801, 0.9873035442];
```
> **Note:** These values are calculated based on the lowpass filter specified in the [filter characteristics of the Web Audio API specification](https://webaudio.github.io/web-audio-api/#filters-characteristics). As this filter node gains more popularity we should be able to collate more coefficient values.
## Using an IIRFilter in an audio graph
Let's create our context and our filter node:
```js
const audioCtx = new AudioContext();
const iirFilter = audioCtx.createIIRFilter(feedForward, feedBack);
```
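Equivalently, assuming the same coefficient arrays as above, the filter can be created with the {{domxref("IIRFilterNode.IIRFilterNode", "IIRFilterNode()")}} constructor:

```js
const iirFilter = new IIRFilterNode(audioCtx, {
  feedforward: feedForward,
  feedback: feedBack,
});
```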
We need a sound source to play. We set this up using a custom function, `playSourceNode()`, which [creates a buffer source](/en-US/docs/Web/API/BaseAudioContext/createBufferSource) from an existing {{domxref("AudioBuffer")}}, attaches it to the default destination, starts it playing, and returns it:
```js
function playSourceNode(audioContext, audioBuffer) {
const soundSource = audioContext.createBufferSource();
soundSource.buffer = audioBuffer;
soundSource.connect(audioContext.destination);
soundSource.start();
return soundSource;
}
```
This function is called when the play button is pressed. The play button HTML looks like this:
```html
<button
class="button-play"
role="switch"
data-playing="false"
aria-pressed="false">
Play
</button>
```
And the `click` event listener starts like so:
```js
playButton.addEventListener(
"click",
() => {
if (playButton.dataset.playing === "false") {
srcNode = playSourceNode(audioCtx, sample);
// …
}
},
false,
);
```
The toggle that turns the IIR filter on and off is set up in a similar way. First, the HTML:
```html
<button
class="button-filter"
role="switch"
data-filteron="false"
aria-pressed="false"
aria-describedby="label"
disabled></button>
```
The filter button's `click` handler then connects the `IIRFilter` up to the graph, between the source and the destination:
```js
filterButton.addEventListener(
"click",
() => {
if (filterButton.dataset.filteron === "false") {
srcNode.disconnect(audioCtx.destination);
srcNode.connect(iirFilter).connect(audioCtx.destination);
// …
}
},
false,
);
```
### Frequency response
We only have one method available on {{domxref("IIRFilterNode")}} instances: `getFrequencyResponse()`. This allows us to see what is happening to the frequencies of the audio being passed into the filter.
Let's draw a frequency plot of the filter we've created with the data we get back from this method.
We need to create three arrays: one containing the frequency values for which we want to receive the magnitude and phase responses, and two empty arrays to receive the data. All three of these have to be of type [`Float32Array`](/en-US/docs/Web/JavaScript/Reference/Global_Objects/Float32Array) and must all be the same size.
```js
// arrays for our frequency response
const totalArrayItems = 30;
let myFrequencyArray = new Float32Array(totalArrayItems);
const magResponseOutput = new Float32Array(totalArrayItems);
const phaseResponseOutput = new Float32Array(totalArrayItems);
```
Let's fill our first array with frequency values we want data to be returned on:
```js
myFrequencyArray = myFrequencyArray.map((item, index) => 1.4 ** index);
```
We could have gone for a linear approach, but it's far better when working with frequencies to take a log approach, so we fill our array with frequency values that get larger further on in the array items.
Now let's get our response data:
```js
iirFilter.getFrequencyResponse(
myFrequencyArray,
magResponseOutput,
phaseResponseOutput,
);
```
We can use this data to draw a filter frequency plot. We'll do so on a 2d canvas context.
```js
// Create a canvas element and append it to our DOM
const canvasContainer = document.querySelector(".filter-graph");
const canvasEl = document.createElement("canvas");
canvasContainer.appendChild(canvasEl);
// Set 2d context and set dimensions
const canvasCtx = canvasEl.getContext("2d");
const width = canvasContainer.offsetWidth;
const height = canvasContainer.offsetHeight;
canvasEl.width = width;
canvasEl.height = height;
// Set background fill
canvasCtx.fillStyle = "white";
canvasCtx.fillRect(0, 0, width, height);
// Set up some spacing based on size
const spacing = width / 16;
const fontSize = Math.floor(spacing / 1.5);
// Draw our axis
canvasCtx.lineWidth = 2;
canvasCtx.strokeStyle = "grey";
canvasCtx.beginPath();
canvasCtx.moveTo(spacing, spacing);
canvasCtx.lineTo(spacing, height - spacing);
canvasCtx.lineTo(width - spacing, height - spacing);
canvasCtx.stroke();
// Axis is gain by frequency -> make labels
canvasCtx.font = `${fontSize}px sans-serif`;
canvasCtx.fillStyle = "grey";
canvasCtx.fillText("1", spacing - fontSize, spacing + fontSize);
canvasCtx.fillText("g", spacing - fontSize, (height - spacing + fontSize) / 2);
canvasCtx.fillText("0", spacing - fontSize, height - spacing + fontSize);
canvasCtx.fillText("Hz", width / 2, height - spacing + fontSize);
canvasCtx.fillText("20k", width - spacing, height - spacing + fontSize);
// Loop over our magnitude response data and plot our filter
canvasCtx.beginPath();
magResponseOutput.forEach((magResponseData, i) => {
if (i === 0) {
canvasCtx.moveTo(spacing, height - magResponseData * 100 - spacing);
} else {
canvasCtx.lineTo(
(width / totalArrayItems) * i,
height - magResponseData * 100 - spacing,
);
}
});
canvasCtx.stroke();
```
## Summary
That's it for our IIRFilter demo. This should have shown you how to use the basics, and helped you to understand what it's useful for and how it works.
| 0 |
data/mdn-content/files/en-us/web/api/web_audio_api | data/mdn-content/files/en-us/web/api/web_audio_api/using_audioworklet/index.md | ---
title: Background audio processing using AudioWorklet
slug: Web/API/Web_Audio_API/Using_AudioWorklet
page-type: guide
---
{{DefaultAPISidebar("Web Audio API")}}
This article explains how to create an audio worklet processor and use it in a Web Audio application.
When the Web Audio API was first introduced to browsers, it included the ability to use JavaScript code (via the `ScriptProcessorNode` interface) to create custom audio processors that would be invoked to perform real-time audio manipulations. The drawback to `ScriptProcessorNode` was that it ran on the main thread, thus blocking everything else going on until it completed execution. This was far less than ideal, especially for something that can be as computationally expensive as audio processing.
Enter {{domxref("AudioWorklet")}}. An audio context's audio worklet is a {{domxref("Worklet")}} which runs off the main thread, executing audio processing code added to it by calling the context's {{domxref("Worklet.addModule", "audioWorklet.addModule()")}} method. Calling `addModule()` loads the specified JavaScript file, which should contain the implementation of the audio processor. With the processor registered, you can create a new {{domxref("AudioWorkletNode")}} which passes the audio through the processor's code when the node is linked into the chain of audio nodes along with any other audio nodes.
It's worth noting that because audio processing can often involve substantial computation, your processor may benefit greatly from being built using [WebAssembly](/en-US/docs/WebAssembly), which brings near-native or fully native performance to web apps. Implementing your audio processing algorithm using WebAssembly can make it perform markedly better.
## High level overview
Before we start looking at the use of AudioWorklet on a step-by-step basis, let's start with a brief high-level overview of what's involved.
1. Create a module that defines an audio worklet processor class, based on {{domxref("AudioWorkletProcessor")}}, which takes audio from one or more incoming sources, performs its operation on the data, and outputs the resulting audio data.
2. Access the audio context's {{domxref("AudioWorklet")}} through its {{domxref("BaseAudioContext.audioWorklet", "audioWorklet")}} property, and call the audio worklet's {{domxref("Worklet.addModule", "addModule()")}} method to install the audio worklet processor module.
3. As needed, create audio processing nodes by passing the processor's name (which is defined by the module) to the {{domxref("AudioWorkletNode.AudioWorkletNode", "AudioWorkletNode()")}} constructor.
4. Set up any audio parameters the {{domxref("AudioWorkletNode")}} needs, or that you wish to configure. These are defined in the audio worklet processor module.
5. Connect the created `AudioWorkletNode`s into your audio processing pipeline as you would any other node, then use your audio pipeline as usual.
Throughout the remainder of this article, we'll look at these steps in more detail, with examples (including working examples you can try out on your own).
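As a quick orientation before we dig in, here is a minimal sketch of steps 2 to 5 (the module URL `my-audio-processor.js` and the processor name `my-audio-processor` are hypothetical):

```js
async function setUpWorklet() {
  const audioCtx = new AudioContext();
  // Install the processor module (step 2)
  await audioCtx.audioWorklet.addModule("my-audio-processor.js");
  // Create a node backed by the registered processor (steps 3 and 4)
  const workletNode = new AudioWorkletNode(audioCtx, "my-audio-processor");
  // Connect it into the audio graph (step 5)
  workletNode.connect(audioCtx.destination);
  return workletNode;
}
```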
The example code found on this page is derived from [this working example](https://mdn.github.io/webaudio-examples/audioworklet/) which is part of MDN's [GitHub repository of Web Audio examples](https://github.com/mdn/webaudio-examples/). The example creates an oscillator node and adds white noise to it using an {{domxref("AudioWorkletNode")}} before playing the resulting sound out. Slider controls are available to allow controlling the gain of both the oscillator and the audio worklet's output.
[**See the code**](https://github.com/mdn/webaudio-examples/tree/main/audioworklet)
[**Try it live**](https://mdn.github.io/webaudio-examples/audioworklet/)
## Creating an audio worklet processor
Fundamentally, an audio worklet processor (which we'll refer to usually as either an "audio processor" or as a "processor" because otherwise this article will be about twice as long) is implemented using a JavaScript module that defines and installs the custom audio processor class.
### Structure of an audio worklet processor
An audio worklet processor is a JavaScript module which consists of the following:
- A JavaScript class which defines the audio processor. This class extends the {{domxref("AudioWorkletProcessor")}} class.
- The audio processor class must implement a {{domxref("AudioWorkletProcessor.process", "process()")}} method, which receives incoming audio data and writes back out the data as manipulated by the processor.
- The module installs the new audio worklet processor class by calling {{domxref("AudioWorkletGlobalScope.registerProcessor", "registerProcessor()")}}, specifying a name for the audio processor and the class that defines the processor.
A single audio worklet processor module may define multiple processor classes, registering each of them with individual calls to `registerProcessor()`. As long as each has its own unique name, this will work just fine. It's also more efficient than loading multiple modules from over the network or even the user's local disk.
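For example, a sketch of a single hypothetical module registering two processors:

```js
class NoiseProcessor extends AudioWorkletProcessor {
  process(inputList, outputList, parameters) {
    // Fill every channel of the first output with white noise
    for (const channel of outputList[0]) {
      for (let i = 0; i < channel.length; i++) {
        channel[i] = Math.random() * 2 - 1;
      }
    }
    return true;
  }
}

class SilenceProcessor extends AudioWorkletProcessor {
  process(inputList, outputList, parameters) {
    // Output buffers are zero-filled (silent) by default, so do nothing
    return true;
  }
}

registerProcessor("noise-processor", NoiseProcessor);
registerProcessor("silence-processor", SilenceProcessor);
```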
### Basic code framework
The barest framework of an audio processor class looks like this:
```js
class MyAudioProcessor extends AudioWorkletProcessor {
constructor() {
super();
}
process(inputList, outputList, parameters) {
// Using the inputs (or not, as needed),
// write the output into each of the outputs
// …
return true;
}
}
registerProcessor("my-audio-processor", MyAudioProcessor);
```
After the implementation of the processor comes a call to the global function {{domxref("AudioWorkletGlobalScope.registerProcessor", "registerProcessor()")}}, which is only available within the scope of the audio context's {{domxref("AudioWorklet")}}, which is the invoker of the processor script as a result of your call to {{domxref("Worklet.addModule", "audioWorklet.addModule()")}}. This call to `registerProcessor()` registers your class as the basis for any {{domxref("AudioWorkletProcessor")}}s created when {{domxref("AudioWorkletNode")}}s are set up.
This is the barest framework and actually has no effect until code is added into `process()` to do something with those inputs and outputs. Which brings us to talking about those inputs and outputs.
### The input and output lists
The lists of inputs and outputs can be a little confusing at first, even though they're actually very simple once you realize what's going on.
Let's start at the inside and work our way out. Fundamentally, the audio for a single audio channel (such as the left speaker or the subwoofer, for example) is represented as a [`Float32Array`](/en-US/docs/Web/JavaScript/Reference/Global_Objects/Float32Array) whose values are the individual audio samples. By specification, each block of audio your `process()` function receives contains 128 frames (that is, 128 samples for each channel), but it is planned that _this value will change in the future_, and may in fact vary depending on circumstances, so you should _always_ check the array's [`length`](/en-US/docs/Web/JavaScript/Reference/Global_Objects/TypedArray/length) rather than assuming a particular size. It is, however, guaranteed that the inputs and outputs will have the same block length.
Each input can have a number of channels. A mono input has a single channel; stereo input has two channels. Surround sound might have six or more channels. So each input is, in turn, an array of channels. That is, an array of `Float32Array` objects.
Then, there can be multiple inputs, so the `inputList` is an array of arrays of `Float32Array` objects. Each input may have a different number of channels, and each channel has its own array of samples.
Thus, given the input list `inputList`:
```js
const numberOfInputs = inputList.length;
const firstInput = inputList[0];
const firstInputChannelCount = firstInput.length;
const firstInputFirstChannel = firstInput[0]; // (or inputList[0][0])
const firstChannelSampleCount = firstInputFirstChannel.length;
const firstSampleOfFirstChannel = firstInputFirstChannel[0]; // (or inputList[0][0][0])
```
The output list is structured in exactly the same way; it's an array of outputs, each of which is an array of channels, each of which is a `Float32Array` object, which contains the samples for that channel.
How you use the inputs and how you generate the outputs depends very much on your processor. If your processor is just a generator, it can ignore the inputs and just replace the contents of the outputs with the generated data. Or you can process each input independently, applying an algorithm to the incoming data on each channel of each input and writing the results into the corresponding outputs' channels (keeping in mind that the number of inputs and outputs may differ, and the channel counts on those inputs and outputs may also differ). Or you can take all the inputs and perform mixing or other computations that result in a single output being filled with data (or all the outputs being filled with the same data).
It's entirely up to you. This is a very powerful tool in your audio programming toolkit.
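For example, here is a minimal sketch of the generator case: it ignores `inputList` entirely and fills every channel of every output with white noise. (This is an illustrative sketch, not part of the demo code in this article.)

```js
process(inputList, outputList, parameters) {
  // A pure generator: ignore the inputs and fill the outputs
  for (const output of outputList) {
    for (const channel of output) {
      for (let i = 0; i < channel.length; i++) {
        // Random values between -1.0 and 1.0 produce white noise
        channel[i] = Math.random() * 2 - 1;
      }
    }
  }
  return true;
}
```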
### Processing multiple inputs
Let's take a look at an implementation of `process()` that can process multiple inputs, with each input being used to generate the corresponding output. Any excess inputs are ignored.
```js
process(inputList, outputList, parameters) {
const sourceLimit = Math.min(inputList.length, outputList.length);
for (let inputNum = 0; inputNum < sourceLimit; inputNum++) {
const input = inputList[inputNum];
const output = outputList[inputNum];
const channelCount = Math.min(input.length, output.length);
for (let channelNum = 0; channelNum < channelCount; channelNum++) {
input[channelNum].forEach((sample, i) => {
// Manipulate the sample
output[channelNum][i] = sample;
});
}
  }
return true;
}
```
Note that when determining the number of sources to process and send through to the corresponding outputs, we use [`Math.min()`](/en-US/docs/Web/JavaScript/Reference/Global_Objects/Math/min) to ensure that we only process as many inputs as there are outputs to receive them. The same check is performed when determining how many channels to process in the current input; we only process as many as the destination output has room for. This avoids errors due to overrunning these arrays.
### Mixing inputs
Many nodes perform **mixing** operations, where the inputs are combined in some way into a single output. This is demonstrated in the following example.
```js
process(inputList, outputList, parameters) {
const sourceLimit = Math.min(inputList.length, outputList.length);
for (let inputNum = 0; inputNum < sourceLimit; inputNum++) {
let input = inputList[inputNum];
let output = outputList[0];
let channelCount = Math.min(input.length, output.length);
for (let channelNum = 0; channelNum < channelCount; channelNum++) {
for (let i = 0; i < input[channelNum].length; i++) {
let sample = output[channelNum][i] + input[channelNum][i];
if (sample > 1.0) {
sample = 1.0;
} else if (sample < -1.0) {
sample = -1.0;
}
output[channelNum][i] = sample;
}
}
  }
return true;
}
```
This code is similar to the previous sample in many ways, but only the first output—`outputList[0]`—is altered. Each sample is added to the corresponding sample in the output buffer, with a simple check in place to prevent the samples from exceeding the legal range of -1.0 to 1.0 by capping the values; there are other ways to avoid clipping that are perhaps less prone to distortion, but this simple approach is better than nothing.
## Lifetime of an audio worklet processor
The only means by which you can influence the lifespan of your audio worklet processor is through the value returned by `process()`, which should be a Boolean value indicating whether or not to override the {{Glossary("user agent")}}'s decision-making as to whether or not your node is still in use.
In general, the lifetime policy of any audio node is simple: if the node is still considered to be actively processing audio, it will continue to be used. In the case of an {{domxref("AudioWorkletNode")}}, the node is considered to be active if its `process()` function returns `true` _and_ the node is either generating content as a source for audio data, or is receiving data from one or more inputs.
Specifying a value of `true` as the result from your `process()` function in essence tells the Web Audio API that your processor needs to keep being called even if the API doesn't think there's anything left for you to do. In other words, `true` overrides the API's logic and gives you control over your processor's lifetime policy, keeping the processor's owning {{domxref("AudioWorkletNode")}} running even when it would otherwise decide to shut down the node.
Returning `false` from the `process()` method tells the API that it should follow its normal logic and shut down your processor node if it deems it appropriate to do so. If the API determines that your node is no longer needed, `process()` will not be called again.
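As a sketch of how you might use this, a processor could keep itself alive only while it still has work to do. Here, `tailFrames` is a hypothetical instance property (not part of the API) tracking how much of a decay tail remains to be rendered:

```js
process(inputList, outputList, parameters) {
  // ... fill the outputs as usual ...

  if (this.tailFrames > 0) {
    // Still rendering our decay tail; ask to be kept alive
    this.tailFrames -= outputList[0][0].length;
    return true;
  }

  // Nothing left to do; let the user agent apply its normal
  // lifetime policy and shut the node down when appropriate
  return false;
}
```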
> **Note:** At this time, unfortunately, Chrome does not implement this algorithm in a manner that matches the current standard. Instead, it keeps the node alive if you return `true` and shuts it down if you return `false`. Thus for compatibility reasons you must always return `true` from `process()`, at least on Chrome. However, once [this Chrome issue](https://crbug.com/921354) is fixed, you will want to change this behavior if possible as it may have a slight negative impact on performance.
## Creating an audio processor worklet node
To create an audio node that pumps blocks of audio data through an {{domxref("AudioWorkletProcessor")}}, you need to follow these simple steps:
1. Load and install the audio processor module.
2. Create an {{domxref("AudioWorkletNode")}}, specifying the audio processor module to use by its name.
3. Connect inputs to the `AudioWorkletNode` and its outputs to appropriate destinations (either other nodes or to the {{domxref("AudioContext")}} object's {{domxref("BaseAudioContext/destination", "destination")}} property).
To use an audio worklet processor, you can use code similar to the following:
```js
let audioContext = null;
async function createMyAudioProcessor() {
if (!audioContext) {
try {
audioContext = new AudioContext();
await audioContext.resume();
await audioContext.audioWorklet.addModule("module-url/module.js");
} catch (e) {
return null;
}
}
return new AudioWorkletNode(audioContext, "processor-name");
}
```
This `createMyAudioProcessor()` function creates and returns a new instance of {{domxref("AudioWorkletNode")}} configured to use your audio processor. To ensure the context is usable, it first creates the audio context if it hasn't already been done, resumes it, and adds the processor's module to the worklet. Once that's done, it instantiates and returns a new `AudioWorkletNode`. With that in hand, you connect it to other nodes and otherwise use it just like any other node.
You can then create a new audio processor node by doing this:
```js
let newProcessorNode = await createMyAudioProcessor();
```
If the returned value, `newProcessorNode`, is non-`null`, we have a valid audio context with its hiss processor node in place and ready to use.
## Supporting audio parameters
Just like any other Web Audio node, {{domxref("AudioWorkletNode")}} supports parameters, which are shared with the {{domxref("AudioWorkletProcessor")}} that does the actual work.
### Adding parameter support to the processor
To add parameters to an {{domxref("AudioWorkletNode")}}, you need to define them within your {{domxref("AudioWorkletProcessor")}}-based processor class in your module. This is done by adding the static getter {{domxref("AudioWorkletProcessor.parameterDescriptors", "parameterDescriptors")}} to your class. This getter should return an array of parameter descriptor objects, one for each parameter supported by the processor.
In the following implementation of `parameterDescriptors()`, the returned array describes two parameters. The first defines `gain` as a value between 0 and 1, with a default value of 0.5. The second parameter is named `frequency` and defaults to 440.0, with a range from 27.5 to 4186.009, inclusive.
```js
static get parameterDescriptors() {
return [
{
name: "gain",
defaultValue: 0.5,
minValue: 0,
maxValue: 1
},
{
name: "frequency",
defaultValue: 440.0,
minValue: 27.5,
maxValue: 4186.009
}
];
}
```
Accessing your processor node's parameters is as simple as looking them up in the `parameters` object passed into your implementation of {{domxref("AudioWorkletProcessor.process", "process()")}}. Within the `parameters` object are arrays of values, one for each of your parameters, sharing the same names as your parameters.
- A-rate parameters
- : For a-rate parameters—parameters whose values automatically change over time—the parameter's entry in the `parameters` object is an array of floating-point values, one for each frame in the block being processed. These values are to be applied to the corresponding frames.
- K-rate parameters
- : K-rate parameters, on the other hand, can only change once per block, so the parameter's array has only a single entry. Use that value for every frame in the block.
In the code below, we see a `process()` function that handles a `gain` parameter which can be used as either an a-rate or k-rate parameter. Our node only supports one input, so it just takes the first input in the list, applies the gain to it, and writes the resulting data to the first output's buffer.
```js
process(inputList, outputList, parameters) {
const input = inputList[0];
const output = outputList[0];
const gain = parameters.gain;
for (let channelNum = 0; channelNum < input.length; channelNum++) {
const inputChannel = input[channelNum];
const outputChannel = output[channelNum];
// If gain.length is 1, it's a k-rate parameter, so apply
// the first entry to every frame. Otherwise, apply each
// entry to the corresponding frame.
if (gain.length === 1) {
for (let i = 0; i < inputChannel.length; i++) {
outputChannel[i] = inputChannel[i] * gain[0];
}
} else {
for (let i = 0; i < inputChannel.length; i++) {
outputChannel[i] = inputChannel[i] * gain[i];
}
}
}
return true;
}
```
Here, if `gain.length` indicates that there's only a single value in the `gain` parameter's array of values, the first entry in the array is applied to every frame in the block. Otherwise, for each frame in the block, the corresponding entry in `gain[]` is applied.
### Accessing parameters from the main thread script
Your main thread script can access the parameters just like it can any other node. To do so, first you need to get a reference to the parameter by calling the {{domxref("AudioWorkletNode")}}'s {{domxref("AudioWorkletNode.parameters", "parameters")}} property's {{domxref("AudioParamMap.get", "get()")}} method:
```js
let gainParam = myAudioWorkletNode.parameters.get("gain");
```
The value returned and stored in `gainParam` is the {{domxref("AudioParam")}} used to store the `gain` parameter. You can then change its value effective at a given time using the {{domxref("AudioParam")}} method {{domxref("AudioParam.setValueAtTime", "setValueAtTime()")}}.
Here, for example, we set the value to `newValue`, effective immediately.
```js
gainParam.setValueAtTime(newValue, audioContext.currentTime);
```
You can similarly use any of the other methods in the {{domxref("AudioParam")}} interface to apply changes over time, to cancel scheduled changes, and so forth.
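For instance, a short sketch (reusing `gainParam` and `audioContext` from the snippets above) that fades the gain out, schedules it back up, and then cancels any automation still pending:

```js
const now = audioContext.currentTime;

// Anchor the automation at the current value, then fade out over 2 seconds
gainParam.setValueAtTime(gainParam.value, now);
gainParam.linearRampToValueAtTime(0, now + 2);

// Jump back to full volume at the 4-second mark
gainParam.setValueAtTime(1, now + 4);

// Later, to abandon any automation that hasn't happened yet:
gainParam.cancelScheduledValues(audioContext.currentTime);
```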
Reading the value of a parameter is as simple as looking at its {{domxref("AudioParam.value", "value")}} property:
```js
let currentGain = gainParam.value;
```
## See also
- [Web Audio API](/en-US/docs/Web/API/Web_Audio_API)
- [Enter Audio Worklet](https://developer.chrome.com/blog/audio-worklet/) (Chrome Developers blog)
---
title: Web Audio API best practices
slug: Web/API/Web_Audio_API/Best_practices
page-type: guide
---
{{DefaultAPISidebar("Web Audio API")}}
There's no strict right or wrong way when writing creative code. As long as you consider security, performance, and accessibility, you can adapt to your own style. In this article, we'll share a number of _best practices_ — guidelines, tips, and tricks for working with the Web Audio API.
## Loading sounds/files
There are four main ways to load sound with the Web Audio API, and it can be a little confusing to decide which one you should use.
When working with files, you are looking at either grabbing the file from an {{domxref("HTMLMediaElement")}} (i.e. an {{htmlelement("audio")}} or {{htmlelement("video")}} element), or fetching the file and decoding it into a buffer. Both are legitimate ways of working; however, it's more common to use the former when you are working with full-length tracks, and the latter when working with shorter, more sample-like tracks.
Media elements have streaming support out of the box. The audio will start playing when the browser determines it can load the rest of the file before playing finishes. You can see an example of how to use this with the Web Audio API in the [Using the Web Audio API tutorial](/en-US/docs/Web/API/Web_Audio_API/Using_Web_Audio_API).
You will, however, have more control if you use a buffer node. You have to request the file and wait for it to load ([this section of our advanced article](/en-US/docs/Web/API/Web_Audio_API/Advanced_techniques#Dial_up_%E2%80%94_loading_a_sound_sample) shows a good way to do it), but then you have access to the data directly, which means more precision and more fine-grained manipulation.
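As a rough sketch of the fetch-and-decode approach (the URL is a placeholder, and `audioCtx` is assumed to be an existing {{domxref("AudioContext")}}):

```js
async function playSample(audioCtx, url) {
  const response = await fetch(url);
  const arrayBuffer = await response.arrayBuffer();
  // decodeAudioData() turns the raw file data into an AudioBuffer
  const buffer = await audioCtx.decodeAudioData(arrayBuffer);

  const source = new AudioBufferSourceNode(audioCtx, { buffer });
  source.connect(audioCtx.destination);
  source.start();
}

playSample(audioCtx, "sample.mp3");
```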
If you're looking to work with audio from the user's camera or microphone you can access it via the [Media Capture and Streams API](/en-US/docs/Web/API/Media_Capture_and_Streams_API) and the {{domxref("MediaStreamAudioSourceNode")}} interface. This is good for WebRTC and situations where you might want to record or possibly analyze audio.
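A minimal sketch of that approach might look like this (assuming the user grants the permission prompt, and again an existing `audioCtx`):

```js
async function captureMicrophone(audioCtx) {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const micSource = new MediaStreamAudioSourceNode(audioCtx, {
    mediaStream: stream,
  });
  // Connect to an analyser rather than the speakers, to avoid feedback
  const analyser = new AnalyserNode(audioCtx);
  micSource.connect(analyser);
  return analyser;
}
```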
The last way is to generate your own sound, which can be done with either an {{domxref("OscillatorNode")}} or by creating a buffer and populating it with your own data. Check out the [tutorial here for creating your own instrument](/en-US/docs/Web/API/Web_Audio_API/Advanced_techniques) for information on creating sounds with oscillators and buffers.
## Cross browser & legacy support
The Web Audio API specification is constantly evolving and like most things on the web, there are some issues with it working consistently across browsers. Here we'll look at options for getting around cross-browser problems.
There's the [`standardized-audio-context`](https://github.com/chrisguttandin/standardized-audio-context) npm package, which creates API functionality consistently across browsers, filling holes as they are found. It's constantly in development and endeavors to keep up with the current specification.
There is also the option of libraries, of which there are a few depending on your use case. For a good all-rounder, [howler.js](https://howlerjs.com/) is a good choice. It has cross-browser support and provides a useful subset of functionality. Although it doesn't harness the full gamut of filters and other effects the Web Audio API comes with, you can do most of what you'd want to do.
If you are looking for sound creation or a more instrument-based option, [tone.js](https://tonejs.github.io/) is a great library. It provides advanced scheduling capabilities, synths, and effects, and intuitive musical abstractions built on top of the Web Audio API.
[R-audio](https://github.com/bbc/r-audio), from the [BBC's Research & Development department](https://medium.com/bbc-design-engineering/r-audio-declarative-reactive-and-flexible-web-audio-graphs-in-react-102c44a1c69c), is a library of React components aiming to provide a "more intuitive, declarative interface to Web Audio". If you're used to writing JSX it might be worth looking at.
## Autoplay policy
Browsers have started to implement an autoplay policy, which in general can be summed up as:
> "Create or resume context from inside a user gesture".
But what does that mean in practice? A user gesture has been interpreted to mean a user-initiated event, normally a `click` event. Browser vendors decided that Web Audio contexts should not be allowed to automatically play audio; they should instead be started by a user. This is because autoplaying audio can be really annoying and obtrusive. But how do we handle this?
When you create an audio context (either offline or online) it is created with a `state`, which can be `suspended`, `running`, or `closed`.
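You can check this at any time via the context's `state` property, and listen for the `statechange` event to be notified when it changes. For example (assuming an `audioCtx` like the ones created in the snippets below):

```js
console.log(audioCtx.state); // "suspended", "running", or "closed"

audioCtx.addEventListener("statechange", () => {
  console.log(`The context state is now ${audioCtx.state}`);
});
```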
When working with an {{domxref("AudioContext")}}, if you create the audio context from inside a `click` event the state should automatically be set to `running`. Here is a simple example of creating the context from inside a `click` event:
```js
const button = document.querySelector("button");
button.addEventListener(
"click",
() => {
const audioCtx = new AudioContext();
// Do something with the audio context
},
false,
);
```
If, however, you create the context outside of a user gesture, its state will be set to `suspended`, and it will need to be started after user interaction. We can use the same click event example here: we test the state of the context and, if it is suspended, start it using the [`resume()`](/en-US/docs/Web/API/AudioContext/resume) method.
```js
const audioCtx = new AudioContext();
const button = document.querySelector("button");
button.addEventListener(
"click",
() => {
// check if context is in suspended state (autoplay policy)
if (audioCtx.state === "suspended") {
audioCtx.resume();
}
},
false,
);
```
You might instead be working with an {{domxref("OfflineAudioContext")}}, in which case you can resume the suspended audio context with the [`startRendering()`](/en-US/docs/Web/API/OfflineAudioContext/startRendering) method.
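For example, here is a sketch of kicking off offline rendering from within a click handler; the graph-building step is elided, the context parameters are arbitrary, and the `#render` button is assumed:

```js
const offlineCtx = new OfflineAudioContext(2, 44100 * 10, 44100);
// ... build your audio graph against offlineCtx here ...

const renderButton = document.querySelector("#render");
renderButton.addEventListener("click", async () => {
  // startRendering() starts the context and resolves with
  // an AudioBuffer containing the rendered audio
  const renderedBuffer = await offlineCtx.startRendering();
  console.log(`Rendered ${renderedBuffer.duration} seconds of audio`);
});
```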
## User control
If your website or application contains sound, you should allow the user control over it; otherwise, again, it will become annoying. This can be achieved by play/stop and volume/mute controls. The [Using the Web Audio API](/en-US/docs/Web/API/Web_Audio_API/Using_Web_Audio_API) tutorial goes over how to do this.
If you have buttons that switch audio on and off, using the ARIA [`role="switch"`](/en-US/docs/Web/Accessibility/ARIA/Roles/switch_role) attribute on them is a good option for signaling to assistive technology what the button's exact purpose is, and therefore making the app more accessible. There's a [demo of how to use it here](https://codepen.io/Wilto/pen/ZoGoQm?editors=1100).
You'll work with a lot of changing values within the Web Audio API and will want to provide users with control over these; the [`range input`](/en-US/docs/Web/HTML/Element/input/range) is often a good choice of control to use, as you can set minimum and maximum values, as well as increments with the [`step`](/en-US/docs/Web/HTML/Element/input#step) attribute.
## Setting AudioParam values
There are two ways to manipulate {{domxref("AudioNode")}} values, which are themselves objects of the {{domxref("AudioParam")}} type. The first is to set the value directly via the property. So, for instance, if we want to change the `gain` value of a {{domxref("GainNode")}}, we would do so like this:
```js
gainNode.gain.value = 0.5;
```
This will set our volume to half. However, if you're using any of the `AudioParam`'s defined methods to set these values, they will take precedence over the above property setting. If, for example, you want the `gain` value to be raised to 1 two seconds from now, you can do this:
```js
gainNode.gain.setValueAtTime(1, audioCtx.currentTime + 2);
```
The scheduled change will override the direct property setting from the previous example (as it should), even if that setting comes later in your code.
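In other words, even code like this leaves the scheduled change in effect:

```js
gainNode.gain.setValueAtTime(1, audioCtx.currentTime + 2);
// Although this direct assignment runs afterwards, the scheduled
// automation event above still takes precedence
gainNode.gain.value = 0.5;
```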
Bearing this in mind, if your website or application requires timing and scheduling, it's best to stick with the {{domxref("AudioParam")}} methods for setting values. If you're sure it doesn't, setting it with the `value` property is fine.
---
title: Web audio spatialization basics
slug: Web/API/Web_Audio_API/Web_audio_spatialization_basics
page-type: guide
---
{{DefaultAPISidebar("Web Audio API")}}
As if its extensive variety of sound processing (and other) options wasn't enough, the Web Audio API also includes facilities to allow you to emulate the difference in sound as a listener moves around a sound source, for example panning as you move around a sound source inside a 3D game.
The official term for this is **spatialization**, and this article will cover the basics of how to implement such a system.
## Basics of spatialization
In Web Audio, complex 3D spatializations are created using the {{domxref("PannerNode")}}, which in layman's terms is basically a whole lotta cool maths to make audio appear in 3D space.
Think sounds flying over you, creeping up behind you, moving across in front of you.
That sort of thing.
It's really useful for WebXR and gaming.
In 3D spaces, it's the only way to achieve realistic audio. Libraries like [three.js](https://threejs.org/) and [A-frame](https://aframe.io/) harness its potential when dealing with sound.
It's worth noting that you don't _have_ to move sound within a full 3D space either — you could stick with just a 2D plane, so if you were planning a 2D game, this would still be the node you were looking for.
> **Note:** There's also a {{domxref("StereoPannerNode")}} designed to deal with the common use case of creating simple left and right stereo panning effects.
> This is much simpler to use, but obviously nowhere near as versatile.
> If you just want a simple stereo panning effect, our [StereoPannerNode example](https://mdn.github.io/webaudio-examples/stereo-panner-node/) ([see source code](https://github.com/mdn/webaudio-examples/tree/main/stereo-panner-node)) should give you everything you need.
## 3D boombox demo
To demonstrate 3D spatialization we've created a modified version of the boombox demo we created in our basic [Using the Web Audio API](/en-US/docs/Web/API/Web_Audio_API/Using_Web_Audio_API) guide.
See the [3D spatialization demo live](https://mdn.github.io/webaudio-examples/spatialization/) (and see the [source code](https://github.com/mdn/webaudio-examples/tree/main/spatialization) also).

The boombox sits inside a room (defined by the edges of the browser viewport), and in this demo, we can move and rotate it with the provided controls.
When we move the boombox, the sound it produces changes accordingly, panning as it moves to the left or right of the room, or becoming quieter as it is moved away from the user or is rotated so the speakers are facing away from them, etc.
This is done by setting the different properties of the `PannerNode` object instance in relation to that movement, to emulate spatialization.
> **Note:** The experience is much better if you use headphones, or have some kind of surround sound system to plug your computer into.
## Creating an audio listener
So let's begin! The {{domxref("BaseAudioContext")}} (the interface the {{domxref("AudioContext")}} is extended from) has a [`listener`](/en-US/docs/Web/API/BaseAudioContext/listener) property that returns an {{domxref("AudioListener")}} object.
This represents the listener of the scene, usually your user.
You can define where they are in space and in which direction they are facing.
In our demo, the listener remains static; the {{domxref("PannerNode")}} can then calculate the sound's position relative to the position of the listener.
Let's create our context and listener and set the listener's position to emulate a person looking into our room:
```js
const AudioContext = window.AudioContext || window.webkitAudioContext;
const audioCtx = new AudioContext();
const listener = audioCtx.listener;
const posX = window.innerWidth / 2;
const posY = window.innerHeight / 2;
const posZ = 300;
listener.positionX.value = posX;
listener.positionY.value = posY;
listener.positionZ.value = posZ - 5;
```
We could move the listener left or right using `positionX`, up or down using `positionY`, or in or out of the room using `positionZ`. Here we are setting the listener to be in the middle of the viewport and slightly in front of our boombox. We can also set the direction the listener is facing. The default values for these work well:
```js
listener.forwardX.value = 0;
listener.forwardY.value = 0;
listener.forwardZ.value = -1;
listener.upX.value = 0;
listener.upY.value = 1;
listener.upZ.value = 0;
```
The forward properties represent the direction the listener is facing (a vector in 3D space), while the up properties represent the direction of the top of the listener's head.
Together, these two vectors set the listener's orientation.
## Creating a panner node
Let's create our {{domxref("PannerNode")}}. This has a whole bunch of properties associated with it. Let's take a look at each of them:
To start, we can set the [`panningModel`](/en-US/docs/Web/API/PannerNode/panningModel).
This is the spatialization algorithm that's used to position the audio in 3D space. We can set this to:

- `equalpower` — The default, and the general way panning is figured out
- `HRTF` — This stands for 'Head-related transfer function' and looks to take into account the human head when figuring out where the sound is.

Pretty clever stuff. Let's use the `HRTF` model!
```js
const panningModel = "HRTF";
```
The [`coneInnerAngle`](/en-US/docs/Web/API/PannerNode/coneInnerAngle) and [`coneOuterAngle`](/en-US/docs/Web/API/PannerNode/coneOuterAngle) properties specify where the volume emanates from.
By default, both are 360 degrees.
Our boombox speakers will have smaller cones, which we can define.
The inner cone is where gain (volume) is always emulated at a maximum and the outer cone is where the gain starts to drop away.
The gain is reduced by the value of the [`coneOuterGain`](/en-US/docs/Web/API/PannerNode/coneOuterGain).
Let's create constants that store the values we'll use for these parameters later on:
```js
const innerCone = 60;
const outerCone = 90;
const outerGain = 0.3;
```
The next parameter is [`distanceModel`](/en-US/docs/Web/API/PannerNode/distanceModel) — this can only be set to `linear`, `inverse`, or `exponential`. These are different algorithms, which are used to reduce the volume of the audio source as it moves away from the listener. We'll use `linear`, as it is simple:
```js
const distanceModel = "linear";
```
We can set a maximum distance ([`maxDistance`](/en-US/docs/Web/API/PannerNode/maxDistance)) between the source and the listener — the volume will not be reduced anymore if the source moves further away from this point.
This can be useful, as you may find you want to emulate distance, but don't want the volume to drop out entirely as the source gets far away.
By default, it's 10,000 (a unitless relative value). We can keep it as this:
```js
const maxDistance = 10000;
```
There's also a reference distance ([`refDistance`](/en-US/docs/Web/API/PannerNode/refDistance)), which is used by the distance models.
We can keep that at the default value of `1` as well:
```js
const refDistance = 1;
```
Then there's the roll-off factor ([`rolloffFactor`](/en-US/docs/Web/API/PannerNode/rolloffFactor)) — how quickly the volume is reduced as the panner moves away from the listener.
The default value is 1; let's make that a bit bigger to exaggerate our movements.
```js
const rollOff = 10;
```
Now we can set the position and orientation of our boombox.
This is a lot like how we did it with our listener.
These are also the parameters we're going to change when the controls on our interface are used.
```js
const positionX = posX;
const positionY = posY;
const positionZ = posZ;
const orientationX = 0.0;
const orientationY = 0.0;
const orientationZ = -1.0;
```
Note the negative value on our z orientation — this sets the boombox to face us.
A positive value would set the sound source facing away from us.
Let's use the relevant constructor for creating our panner node and pass in all those parameters we set above:
```js
const panner = new PannerNode(audioCtx, {
panningModel,
distanceModel,
positionX,
positionY,
positionZ,
orientationX,
orientationY,
orientationZ,
refDistance,
maxDistance,
rolloffFactor: rollOff,
coneInnerAngle: innerCone,
coneOuterAngle: outerCone,
coneOuterGain: outerGain,
});
```
## Moving the boombox
Now we're going to move our boombox around our 'room'. We've got some controls set up to do this.
We can move it left and right, up and down, and back and forth; we can also rotate it.
The sound direction is coming from the boombox speaker at the front, so when we rotate it, we can alter the sound's direction — i.e. make it project to the back when the boombox is rotated 180 degrees and facing away from us.
We need to set up a few things for the interface.
First, we'll get references to the elements we want to move, then we'll store references to the values we'll change when we set up [CSS transforms](/en-US/docs/Web/CSS/CSS_transforms) to actually do the movement.
Finally, we'll set some bounds so our boombox doesn't move too far in any direction:
```js
const moveControls = document
.querySelector("#move-controls")
.querySelectorAll("button");
const boombox = document.querySelector(".boombox-body");
// the values for our CSS transforms
const transform = {
xAxis: 0,
yAxis: 0,
zAxis: 0.8,
rotateX: 0,
rotateY: 0,
};
// set our bounds
const topBound = -posY;
const bottomBound = posY;
const rightBound = posX;
const leftBound = -posX;
const innerBound = 0.1;
const outerBound = 1.5;
```
Let's create a function that takes the direction we want to move as a parameter, and both modifies the CSS transform and updates the position and orientation values of our panner node properties to change the sound as appropriate.
To start with let's take a look at our left, right, up and down values as these are pretty straightforward.
We'll move the boombox along these axes and update the appropriate position.
```js
function moveBoombox(direction) {
switch (direction) {
case "left":
if (transform.xAxis > leftBound) {
transform.xAxis -= 5;
panner.positionX.value -= 0.1;
}
break;
case "up":
if (transform.yAxis > topBound) {
transform.yAxis -= 5;
panner.positionY.value -= 0.3;
}
break;
case "right":
if (transform.xAxis < rightBound) {
transform.xAxis += 5;
panner.positionX.value += 0.1;
}
break;
case "down":
if (transform.yAxis < bottomBound) {
transform.yAxis += 5;
panner.positionY.value += 0.3;
}
break;
}
}
```
It's a similar story for our move in and out values too:
```js
case "back":
  if (transform.zAxis > innerBound) {
    transform.zAxis -= 0.01;
    panner.positionZ.value += 40;
  }
  break;
case "forward":
  if (transform.zAxis < outerBound) {
    transform.zAxis += 0.01;
    panner.positionZ.value -= 40;
  }
  break;
```
Our rotation values are a little more involved, however, as we need to move the sound _around_.
Not only do we have to update two axis values (e.g. if you rotate an object around the x-axis, you update the y and z coordinates for that object), but we also need to do some more maths for this.
The rotation is a circle and we need [`Math.sin`](/en-US/docs/Web/JavaScript/Reference/Global_Objects/Math/sin) and [`Math.cos`](/en-US/docs/Web/JavaScript/Reference/Global_Objects/Math/cos) to help us draw that circle.
Let's set up a rotation rate, which we'll convert into a radian range value for use in `Math.sin` and `Math.cos` later, when we want to figure out the new coordinates when we're rotating our boombox:
```js
// set up rotation constants
const rotationRate = 60; // bigger number equals slower sound rotation
const q = Math.PI / rotationRate; // rotation increment in radians
```
We can also use this to work out degrees rotated, which will help with the CSS transforms we will have to create (note we need both an x and y-axis for the CSS transforms):
```js
// get degrees for CSS
const degreesX = (q * 180) / Math.PI;
const degreesY = (q * 180) / Math.PI;
```
Let's take a look at our left rotation as an example. We need to change the x orientation and the z orientation of the panner coordinates, to move around the y-axis for our left rotation:
```js
case "rotate-left":
  transform.rotateY -= degreesY;
  // 'left' is rotation about y-axis with negative angle increment
  z =
    panner.orientationZ.value * Math.cos(q) -
    panner.orientationX.value * Math.sin(q);
  x =
    panner.orientationZ.value * Math.sin(q) +
    panner.orientationX.value * Math.cos(q);
  y = panner.orientationY.value;
  panner.orientationX.value = x;
  panner.orientationY.value = y;
  panner.orientationZ.value = z;
  break;
```
This _is_ a little confusing, but what we're doing is using sin and cos to help us work out the circular motion the coordinates need for the rotation of the boombox.
We can do this for all the axes. We just need to choose the right orientations to update and whether we want a positive or negative increment.
```js
case "rotate-right":
  transform.rotateY += degreesY;
  // 'right' is rotation about y-axis with positive angle increment
  z =
    panner.orientationZ.value * Math.cos(-q) -
    panner.orientationX.value * Math.sin(-q);
  x =
    panner.orientationZ.value * Math.sin(-q) +
    panner.orientationX.value * Math.cos(-q);
  y = panner.orientationY.value;
  panner.orientationX.value = x;
  panner.orientationY.value = y;
  panner.orientationZ.value = z;
  break;
case "rotate-up":
  transform.rotateX += degreesX;
  // 'up' is rotation about x-axis with negative angle increment
  z =
    panner.orientationZ.value * Math.cos(-q) -
    panner.orientationY.value * Math.sin(-q);
  y =
    panner.orientationZ.value * Math.sin(-q) +
    panner.orientationY.value * Math.cos(-q);
  x = panner.orientationX.value;
  panner.orientationX.value = x;
  panner.orientationY.value = y;
  panner.orientationZ.value = z;
  break;
case "rotate-down":
  transform.rotateX -= degreesX;
  // 'down' is rotation about x-axis with positive angle increment
  z =
    panner.orientationZ.value * Math.cos(q) -
    panner.orientationY.value * Math.sin(q);
  y =
    panner.orientationZ.value * Math.sin(q) +
    panner.orientationY.value * Math.cos(q);
  x = panner.orientationX.value;
  panner.orientationX.value = x;
  panner.orientationY.value = y;
  panner.orientationZ.value = z;
  break;
```
One last thing — we need to update the CSS and keep a reference to the last move for the mouse event.
Here's the final `moveBoombox` function.
```js
function moveBoombox(direction, prevMove) {
switch (direction) {
case "left":
if (transform.xAxis > leftBound) {
transform.xAxis -= 5;
panner.positionX.value -= 0.1;
}
break;
case "up":
if (transform.yAxis > topBound) {
transform.yAxis -= 5;
panner.positionY.value -= 0.3;
}
break;
case "right":
if (transform.xAxis < rightBound) {
transform.xAxis += 5;
panner.positionX.value += 0.1;
}
break;
case "down":
if (transform.yAxis < bottomBound) {
transform.yAxis += 5;
panner.positionY.value += 0.3;
}
break;
case "back":
if (transform.zAxis > innerBound) {
transform.zAxis -= 0.01;
panner.positionZ.value += 40;
}
break;
case "forward":
if (transform.zAxis < outerBound) {
transform.zAxis += 0.01;
panner.positionZ.value -= 40;
}
break;
case "rotate-left":
transform.rotateY -= degreesY;
// 'left' is rotation about y-axis with negative angle increment
z =
panner.orientationZ.value * Math.cos(q) -
panner.orientationX.value * Math.sin(q);
x =
panner.orientationZ.value * Math.sin(q) +
panner.orientationX.value * Math.cos(q);
y = panner.orientationY.value;
panner.orientationX.value = x;
panner.orientationY.value = y;
panner.orientationZ.value = z;
break;
case "rotate-right":
transform.rotateY += degreesY;
// 'right' is rotation about y-axis with positive angle increment
z =
panner.orientationZ.value * Math.cos(-q) -
panner.orientationX.value * Math.sin(-q);
x =
panner.orientationZ.value * Math.sin(-q) +
panner.orientationX.value * Math.cos(-q);
y = panner.orientationY.value;
panner.orientationX.value = x;
panner.orientationY.value = y;
panner.orientationZ.value = z;
break;
case "rotate-up":
transform.rotateX += degreesX;
// 'up' is rotation about x-axis with negative angle increment
z =
panner.orientationZ.value * Math.cos(-q) -
panner.orientationY.value * Math.sin(-q);
y =
panner.orientationZ.value * Math.sin(-q) +
panner.orientationY.value * Math.cos(-q);
x = panner.orientationX.value;
panner.orientationX.value = x;
panner.orientationY.value = y;
panner.orientationZ.value = z;
break;
case "rotate-down":
transform.rotateX -= degreesX;
// 'down' is rotation about x-axis with positive angle increment
z =
panner.orientationZ.value * Math.cos(q) -
panner.orientationY.value * Math.sin(q);
y =
panner.orientationZ.value * Math.sin(q) +
panner.orientationY.value * Math.cos(q);
x = panner.orientationX.value;
panner.orientationX.value = x;
panner.orientationY.value = y;
panner.orientationZ.value = z;
break;
}
boombox.style.transform =
`translateX(${transform.xAxis}px) ` +
`translateY(${transform.yAxis}px) ` +
`scale(${transform.zAxis}) ` +
`rotateY(${transform.rotateY}deg) ` +
`rotateX(${transform.rotateX}deg)`;
const move = prevMove || {};
move.frameId = requestAnimationFrame(() => moveBoombox(direction, move));
return move;
}
```
## Wiring up our controls
Wiring up our control buttons is comparatively simple — we can listen for a mouse event on each control and run this function, as well as stop it when the mouse is released:
```js
// for each of our controls, move the boombox and change the position values
moveControls.forEach((el) => {
let moving;
el.addEventListener(
"mousedown",
() => {
      const direction = el.dataset.control;
if (moving && moving.frameId) {
cancelAnimationFrame(moving.frameId);
}
moving = moveBoombox(direction);
},
false,
);
window.addEventListener(
"mouseup",
() => {
if (moving && moving.frameId) {
cancelAnimationFrame(moving.frameId);
}
},
false,
);
});
```
## Connecting our graph
Our HTML contains the audio element we want to be affected by the panner node.
```html
<audio src="myCoolTrack.mp3"></audio>
```
We need to grab the source from that element and pipe it into the Web Audio API using the {{domxref('AudioContext.createMediaElementSource')}} method.
```js
// get the audio element
const audioElement = document.querySelector("audio");
// pass it into the audio context
const track = audioCtx.createMediaElementSource(audioElement);
```
Next we have to connect our audio graph. We connect our input (the track) to our modification node (the panner) to our destination (in this case the speakers).
```js
track.connect(panner).connect(audioCtx.destination);
```
Let's create a play button that, when clicked, will play or pause the audio depending on the current state.
```html
<button data-playing="false" role="switch">Play/Pause</button>
```
```js
// Select our play button
const playButton = document.querySelector("button");
playButton.addEventListener(
"click",
() => {
// Check if context is in suspended state (autoplay policy)
    if (audioCtx.state === "suspended") {
      audioCtx.resume();
}
// Play or pause track depending on state
if (playButton.dataset.playing === "false") {
audioElement.play();
playButton.dataset.playing = "true";
} else if (playButton.dataset.playing === "true") {
audioElement.pause();
playButton.dataset.playing = "false";
}
},
false,
);
```
For a more in-depth look at playing/controlling audio and audio graphs, check out [Using the Web Audio API](/en-US/docs/Web/API/Web_Audio_API/Using_Web_Audio_API).
## Summary
Hopefully, this article has given you an insight into how Web Audio spatialization works, and what each of the {{domxref("PannerNode")}} properties do (there are quite a few of them).
The values can sometimes be hard to manipulate, and depending on your use case, it can take some time to get them right.
> **Note:** There are slight differences in the way the audio spatialization sounds across different browsers.
> The panner node does some very involved maths under the hood;
> there are a [number of tests here](https://wpt.fyi/results/webaudio/the-audio-api/the-pannernode-interface?label=stable&aligned=true) so you can keep track of the status of the inner workings of this node across different platforms.
Again, you can [check out the final demo here](https://mdn.github.io/webaudio-examples/spatialization/), and the [final source code is here](https://github.com/mdn/webaudio-examples/tree/main/spatialization).
There is also a [Codepen demo](https://codepen.io/Rumyra/pen/MqayoK?editors=0100).
If you are working with 3D games and/or WebXR it's a good idea to harness a 3D library to create such functionality, rather than trying to do this all yourself from first principles.
We rolled our own in this article to give you an idea of how it works, but you'll save a lot of time by taking advantage of work others have done before you.
---
title: Using the Web Audio API
slug: Web/API/Web_Audio_API/Using_Web_Audio_API
page-type: guide
---
{{DefaultAPISidebar("Web Audio API")}}
Let's take a look at getting started with the [Web Audio API](/en-US/docs/Web/API/Web_Audio_API). We'll briefly look at some concepts, then study a simple boombox example that allows us to load an audio track, play and pause it, and change its volume and stereo panning.
The Web Audio API does not replace the {{HTMLElement("audio")}} media element, but rather complements it, just like {{HTMLElement("canvas")}} coexists alongside the {{HTMLElement("img")}} element. Your use case will determine what tools you use to implement audio. If you want to control playback of an audio track, the `<audio>` media element provides a better, quicker solution than the Web Audio API. If you want to carry out more complex audio processing, as well as playback, the Web Audio API provides much more power and control.
A powerful feature of the Web Audio API is that it does not have a strict "sound call limitation". For example, there is no ceiling of 32 or 64 sound calls at one time. Some processors may be capable of playing more than 1,000 simultaneous sounds without stuttering.
## Example code
Our boombox looks like this:

Note the retro cassette deck with a play button, and vol and pan sliders to allow you to alter the volume and stereo panning. We could make this a lot more complex, but this is ideal for simple learning at this stage.
[Check out the final demo here on Codepen](https://codepen.io/Rumyra/pen/qyMzqN/), or see the [source code on GitHub](https://github.com/mdn/webaudio-examples/tree/main/audio-basics).
## Browser support
Modern browsers have good support for most features of the Web Audio API. There are a lot of features of the API, so for more exact information, you'll have to check the browser compatibility tables at the bottom of each reference page.
## Audio graphs
Everything within the Web Audio API is based around the concept of an audio graph, which is made up of nodes.
The Web Audio API handles audio operations inside an **audio context**, and has been designed to allow **modular routing**. Basic audio operations are performed with **audio nodes**, which are linked together to form an **audio routing graph**. You have input nodes, which are the source of the sounds you are manipulating, modification nodes that change those sounds as desired, and output nodes (destinations), which allow you to save or hear those sounds.
Several audio sources with different channel layouts are supported, even within a single context. Because of this modular design, you can create complex audio functions with dynamic effects.
## Audio context
To be able to do anything with the Web Audio API, we need to create an instance of the audio context. This then gives us access to all the features and functionality of the API.
```js
// for legacy browsers
const AudioContext = window.AudioContext || window.webkitAudioContext;
const audioContext = new AudioContext();
```
So what's going on when we do this? A {{domxref("BaseAudioContext")}} is created for us automatically and extended to an online audio context. We'll want this because we're looking to play live sound.
> **Note:** If you just want to process audio data, for instance, buffer and stream it but not play it, you might want to look into creating an {{domxref("OfflineAudioContext")}}.
## Loading sound
Now, the audio context we've created needs some sound to play through it. There are a few ways to do this with the API. Let's begin with a simple method — as we have a boombox, we most likely want to play a full song track. Also, for accessibility, it's nice to expose that track in the DOM. We'll expose the song on the page using an {{htmlelement("audio")}} element.
```html
<audio src="myCoolTrack.mp3"></audio>
```
> **Note:** If the sound file you're loading is held on a different domain you will need to use the `crossorigin` attribute; see [Cross Origin Resource Sharing (CORS)](/en-US/docs/Web/HTTP/CORS) for more information.
To use all the nice things we get with the Web Audio API, we need to grab the source from this element and _pipe_ it into the context we have created. Lucky for us there's a method that allows us to do just that — {{domxref("AudioContext.createMediaElementSource")}}:
```js
// get the audio element
const audioElement = document.querySelector("audio");
// pass it into the audio context
const track = audioContext.createMediaElementSource(audioElement);
```
> **Note:** The `<audio>` element above is represented in the DOM by an object of type {{domxref("HTMLMediaElement")}}, which comes with its own set of functionality. All of this has stayed intact; we are merely allowing the sound to be available to the Web Audio API.
## Controlling sound
When playing sound on the web, it's important to allow the user to control it. Depending on the use case, there's a myriad of options, but we'll provide functionality to play/pause the sound, alter the track's volume, and pan it from left to right.
Controlling sound programmatically from JavaScript code is covered by browsers' autoplay support policies, and as such is likely to be blocked without permission being granted by the user (or an allowlist). Autoplay policies typically require either explicit permission or a user engagement with the page before scripts can trigger audio to play.
These special requirements are in place essentially because unexpected sounds can be annoying and intrusive, and can cause accessibility problems. You can learn more about this in our article [Autoplay guide for media and Web Audio APIs](/en-US/docs/Web/Media/Autoplay_guide).
Since our scripts are playing audio in response to a user input event (a click on a play button, for instance), we're in good shape and should have no problems from autoplay blocking. So, let's start by taking a look at our play and pause functionality. We have a play button that changes to a pause button when the track is playing:
```html
<button data-playing="false" role="switch" aria-checked="false">
<span>Play/Pause</span>
</button>
```
Before we can play our track we need to connect our audio graph from the audio source/input node to the destination.
We've already created an input node by passing our audio element into the API. For the most part, you don't need to create an output node; you can just connect your other nodes to {{domxref("BaseAudioContext.destination")}}, which handles the situation for you:
```js
track.connect(audioContext.destination);
```
A good way to understand these nodes is by drawing an audio graph to visualize it. This is what our current audio graph looks like:

Now we can add the play and pause functionality.
```js
// Select our play button
const playButton = document.querySelector("button");
playButton.addEventListener(
"click",
() => {
// Check if context is in suspended state (autoplay policy)
if (audioContext.state === "suspended") {
audioContext.resume();
}
// Play or pause track depending on state
if (playButton.dataset.playing === "false") {
audioElement.play();
playButton.dataset.playing = "true";
} else if (playButton.dataset.playing === "true") {
audioElement.pause();
playButton.dataset.playing = "false";
}
},
false,
);
```
We also need to take into account what to do when the track finishes playing. Our `HTMLMediaElement` fires an `ended` event once it's finished playing, so we can listen for that and run code accordingly:
```js
audioElement.addEventListener(
"ended",
() => {
playButton.dataset.playing = "false";
},
false,
);
```
## Modifying sound
Let's delve into some basic modification nodes, to change the sound that we have. This is where the Web Audio API really starts to come in handy. First of all, let's change the volume. This can be done using a {{domxref("GainNode")}}, which represents how big our sound wave is.
There are two ways you can create nodes with the Web Audio API. You can use the factory method on the context itself (e.g. `audioContext.createGain()`) or via a constructor of the node (e.g. `new GainNode()`). We'll use the factory method in our code:
```js
const gainNode = audioContext.createGain();
```
Now we have to update our audio graph from before, so the input is connected to the gain, then the gain node is connected to the destination:
```js
track.connect(gainNode).connect(audioContext.destination);
```
This will make our audio graph look like this:

The default value for gain is 1; this keeps the current volume the same. Gain can be set to a minimum of about -3.4028235E38 and a max of about 3.4028235E38 (the positive and negative range of a single-precision floating-point value). Here we'll allow the boombox to move the gain up to 2 (double the original volume) and down to 0 (this will effectively mute our sound).
Let's give the user control to do this — we'll use a [range input](/en-US/docs/Web/HTML/Element/input/range):
```html
<input type="range" id="volume" min="0" max="2" value="1" step="0.01" />
```
> **Note:** Range inputs are a really handy input type for updating values on audio nodes. You can specify a range's values and use them directly with the audio node's parameters.
So let's grab this input's value and update the gain value when the input has its value changed by the user:
```js
const volumeControl = document.querySelector("#volume");
volumeControl.addEventListener(
"input",
() => {
gainNode.gain.value = volumeControl.value;
},
false,
);
```
> **Note:** The values of node objects (e.g. `GainNode.gain`) are not simple values; they are actually objects of type {{domxref("AudioParam")}} — these are called parameters. This is why we have to set `GainNode.gain`'s `value` property, rather than just setting the value on `gain` directly. This enables them to be much more flexible, allowing for passing the parameter a specific set of values to change between over a set period of time, for example.
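As a sketch of that flexibility, you could hand the gain parameter a whole curve of values to sweep through over a set duration:

```js
// Sweep the gain through an arbitrary curve of values over 3 seconds
const curve = new Float32Array([1, 0.4, 0.8, 0.2, 1]);
gainNode.gain.setValueCurveAtTime(curve, audioContext.currentTime, 3);
```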
Great, now the user can update the track's volume! The gain node is the perfect node to use if you want to add mute functionality.
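For instance, a minimal mute toggle might look like this (the `muteButton` element is assumed; it isn't part of the demo):

```js
let previousGain = 1;

muteButton.addEventListener("click", () => {
  if (gainNode.gain.value > 0) {
    // Remember the current level so we can restore it
    previousGain = gainNode.gain.value;
    gainNode.gain.value = 0;
  } else {
    gainNode.gain.value = previousGain;
  }
});
```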
## Adding stereo panning to our app
Let's add another modification node to practice what we've just learned.
There's a {{domxref("StereoPannerNode")}} node, which changes the balance of the sound between the left and right speakers, if the user has stereo capabilities.
> **Note:** The `StereoPannerNode` is for simple cases in which you just want stereo panning from left to right.
> There is also a {{domxref("PannerNode")}}, which allows for a great deal of control over 3D space, or sound _spatialization_, for creating more complex effects.
> This is used in games and 3D apps to create birds flying overhead, or sound coming from behind the user for instance.
To visualize it, we will be making our audio graph look like this:

Let's use the constructor method of creating a node this time. When we do it this way, we have to pass in the context and any options that the particular node may take:
```js
const pannerOptions = { pan: 0 };
const panner = new StereoPannerNode(audioContext, pannerOptions);
```
> **Note:** The constructor method of creating nodes is not supported by all browsers at this time. The older factory methods are supported more widely.
Here our values range from -1 (far left) to 1 (far right). Again let's use a range type input to vary this parameter:
```html
<input type="range" id="panner" min="-1" max="1" value="0" step="0.01" />
```
We use the values from that input to adjust our panner values in the same way as we did before:
```js
const pannerControl = document.querySelector("#panner");
pannerControl.addEventListener(
"input",
() => {
panner.pan.value = pannerControl.value;
},
false,
);
```
Let's adjust our audio graph again, to connect all the nodes together:
```js
track.connect(gainNode).connect(panner).connect(audioContext.destination);
```
The only thing left to do is give the app a try: [Check out the final demo here on Codepen](https://codepen.io/Rumyra/pen/qyMzqN/).
## Summary
Great! We have a boombox that plays our 'tape', and we can adjust the volume and stereo panning, giving us a fairly basic working audio graph.
This makes up quite a few basics that you would need to start to add audio to your website or web app. There's a lot more functionality to the Web Audio API, but once you've grasped the concept of nodes and putting your audio graph together, we can move on to looking at more complex functionality.
## More examples
There are other examples available to learn more about the Web Audio API.
The [Voice-change-O-matic](https://github.com/mdn/webaudio-examples/tree/main/voice-change-o-matic) is a fun voice manipulator and sound visualization web app that allows you to choose different effects and visualizations. The application is fairly rudimentary, but it demonstrates the simultaneous use of multiple Web Audio API features. ([run the Voice-change-O-matic live](https://mdn.github.io/webaudio-examples/voice-change-o-matic/)).

Another application developed specifically to demonstrate the Web Audio API is the [Violent Theremin](https://mdn.github.io/webaudio-examples/violent-theremin/), a simple web application that allows you to change pitch and volume by moving your mouse pointer. It also provides a psychedelic lightshow ([see Violent Theremin source code](https://github.com/mdn/webaudio-examples/tree/main/violent-theremin)).

Also see our [webaudio-examples repo](https://github.com/mdn/webaudio-examples) for more examples.
---
title: Tools for analyzing Web Audio usage
slug: Web/API/Web_Audio_API/Tools
page-type: guide
---
{{DefaultAPISidebar("Web Audio API")}}
While working on your Web Audio API code, you may find that you need tools to analyze the graph of nodes you create or to otherwise debug your work. This article discusses tools available to help you do that.
## Chrome
A handy web audio inspector can be found in the [Chrome Web Store](https://chrome.google.com/webstore/detail/audion/cmhomipkklckpomafalojobppmmidlgl).
## Edge
_Add information for developers using Microsoft Edge._
## Firefox
Firefox offers a native [Web Audio Editor](https://firefox-source-docs.mozilla.org/devtools-user/web_audio_editor/index.html).
## Safari
_Add information for developers working in Safari._
## See also
- [Web Audio API](/en-US/docs/Web/API/Web_Audio_API)
- [Using the Web Audio API](/en-US/docs/Web/API/Web_Audio_API/Using_Web_Audio_API)
---
title: "Advanced techniques: Creating and sequencing audio"
slug: Web/API/Web_Audio_API/Advanced_techniques
page-type: guide
---
{{DefaultAPISidebar("Web Audio API")}}
In this tutorial, we're going to cover sound creation and modification, as well as timing and scheduling. We will introduce sample loading, envelopes, filters, wavetables, and frequency modulation. If you're familiar with these terms and looking for an introduction to their application with the Web Audio API, you've come to the right place.
> **Note:** You can find the source code for the demo below on GitHub in the [step-sequencer](https://github.com/mdn/webaudio-examples/tree/main/step-sequencer) subdirectory of the MDN [webaudio-examples](https://github.com/mdn/webaudio-examples) repo. You can also see the [live demo](https://mdn.github.io/webaudio-examples/step-sequencer/).
## Demo
We're going to be looking at a very simple step sequencer:

In practice, this is easier to do with a library — the Web Audio API was built to be built upon. If you are about to embark on building something more complex, [tone.js](https://tonejs.github.io/) would be an excellent place to start. However, we want to demonstrate how to create such a demo from first principles as a learning exercise.
The interface consists of master controls, which allow us to play/stop the sequencer, and adjust the BPM (beats per minute) to speed up or slow down the "music".
Four different sounds, or voices, can be played. Each voice has four buttons, one for each beat in one bar of music. When they are enabled, the note will sound. When the instrument plays, it will move across this set of beats and loop the bar.
Each voice also has local controls, allowing you to manipulate the effects or parameters particular to each technique we use to create those voices. The methods we are using are:
<table class="no-markdown">
<thead>
<tr>
<th scope="col">Name of voice</th>
<th scope="col">Technique</th>
<th scope="col">Associated Web Audio API feature</th>
</tr>
</thead>
<tbody>
<tr>
<td>"Sweep"</td>
<td>Oscillator, periodic wave</td>
<td>
{{domxref("OscillatorNode")}},
{{domxref("PeriodicWave")}}
</td>
</tr>
<tr>
<td>"Pulse"</td>
<td>Multiple oscillators</td>
<td>{{domxref("OscillatorNode")}}</td>
</tr>
<tr>
<td>"Noise"</td>
<td>Random noise buffer, Biquad filter</td>
<td>
{{domxref("AudioBuffer")}},
{{domxref("AudioBufferSourceNode")}},
{{domxref("BiquadFilterNode")}}
</td>
</tr>
<tr>
<td>"Dial up"</td>
<td>Loading a sound sample to play</td>
<td>
{{domxref("BaseAudioContext/decodeAudioData")}},
{{domxref("AudioBufferSourceNode")}}
</td>
</tr>
</tbody>
</table>
> **Note:** We didn't create this instrument to sound good but to provide demonstration code. This demonstration represents a _very_ simplified version of such an instrument. The sounds are based on a dial-up modem. If you are unaware of how such a device sounds, you can [listen to one here](https://soundcloud.com/john-pemberton/modem-dialup).
## Creating an audio context
As you should be used to by now, each Web Audio API app starts with an audio context:
```js
const audioCtx = new AudioContext();
```
## The "sweep" — oscillators, periodic waves, and envelopes
For what we will call the "sweep" sound, that first noise you hear when you dial up, we're going to create an oscillator to generate the sound.
The {{domxref("OscillatorNode")}} comes with basic waveforms out of the box — sine, square, triangle, or sawtooth. However, instead of using the standard waves that come by default, we're going to create our own using the {{domxref("PeriodicWave")}} interface and values set in a wavetable. We can use the {{domxref("PeriodicWave/PeriodicWave", "PeriodicWave()")}} constructor to use this custom wave with an oscillator.
### The periodic wave
First of all, we'll create our periodic wave. To do so, we need to pass real and imaginary values into the {{domxref("PeriodicWave/PeriodicWave", "PeriodicWave()")}} constructor:
```js
const wave = new PeriodicWave(audioCtx, {
real: wavetable.real,
imag: wavetable.imag,
});
```
> **Note:** In our example, the wavetable is held in a separate JavaScript file (`wavetable.js`) because there are _so_ many values. We took it from a [repository of wavetables](https://github.com/GoogleChromeLabs/web-audio-samples/tree/main/src/demos/wavetable-synth/wave-tables), found in the [Web Audio API examples from Google Chrome Labs](https://github.com/GoogleChromeLabs/web-audio-samples/).
### The Oscillator
Now we can create an {{domxref("OscillatorNode")}} and set its wave to the one we've created:
```js
function playSweep(time) {
const osc = new OscillatorNode(audioCtx, {
frequency: 380,
type: "custom",
periodicWave: wave,
});
osc.connect(audioCtx.destination);
osc.start(time);
osc.stop(time + 1);
}
```
We pass a time parameter to the function here, which we'll use later to schedule the sweep.
### Controlling amplitude
This is great, but wouldn't it be nice if we had an amplitude envelope to go with it? Let's create a simple one, so we get used to the methods we need to create an envelope with the Web Audio API.
Let's say our envelope has attack and release. We can allow the user to control these using [range inputs](/en-US/docs/Web/HTML/Element/input/range) on the interface:
```html
<label for="attack">Attack</label>
<input
name="attack"
id="attack"
type="range"
min="0"
max="1"
value="0.2"
step="0.1" />
<label for="release">Release</label>
<input
name="release"
id="release"
type="range"
min="0"
max="1"
value="0.5"
step="0.1" />
```
Now we can create some variables over in JavaScript and have them change when the input values are updated:
```js
let attackTime = 0.2;
const attackControl = document.querySelector("#attack");
attackControl.addEventListener(
"input",
(ev) => {
attackTime = parseFloat(ev.target.value);
},
false,
);
let releaseTime = 0.5;
const releaseControl = document.querySelector("#release");
releaseControl.addEventListener(
"input",
(ev) => {
releaseTime = parseFloat(ev.target.value);
},
false,
);
```
### The final playSweep() function
Now we can expand our `playSweep()` function. We need to add a {{domxref("GainNode")}} and connect that through our audio graph to apply amplitude variations to our sound. The gain node has one property: `gain`, which is of type {{domxref("AudioParam")}}.
This is useful — now we can start to harness the power of the audio param methods on the gain value. We can set a value at a certain time, or we can change it _over_ time with methods such as {{domxref("AudioParam.linearRampToValueAtTime")}}.
As mentioned above, we'll use the `linearRampToValueAtTime` method for our attack and release. It takes two parameters — the value you want the parameter you are changing (in this case, the gain) to reach, and when you want it to get there. In our case, _when_ is controlled by our inputs. So, in the example below, the gain increases to 1 at a linear rate over the time defined by the attack range input. Similarly, for our release, the gain ramps back down to 0 at a linear rate over the time set by the release input.
```js
const sweepLength = 2;
function playSweep(time) {
const osc = new OscillatorNode(audioCtx, {
frequency: 380,
type: "custom",
periodicWave: wave,
});
const sweepEnv = new GainNode(audioCtx);
sweepEnv.gain.cancelScheduledValues(time);
sweepEnv.gain.setValueAtTime(0, time);
sweepEnv.gain.linearRampToValueAtTime(1, time + attackTime);
sweepEnv.gain.linearRampToValueAtTime(0, time + sweepLength - releaseTime);
osc.connect(sweepEnv).connect(audioCtx.destination);
osc.start(time);
osc.stop(time + sweepLength);
}
```
## The "pulse" — low-frequency oscillator modulation
Great, now we've got our sweep! Let's move on and take a look at that nice pulse sound. We can achieve this with a basic oscillator, modulated with a second oscillator.
### Initial oscillator
We'll set up our first {{domxref("OscillatorNode")}} the same way as our sweep sound, except we won't use a wavetable to set a bespoke wave — we'll just use the default `sine` wave:
```js
const osc = new OscillatorNode(audioCtx, {
type: "sine",
frequency: pulseHz,
});
```
Now we're going to create a {{domxref("GainNode")}}, as it's the `gain` value that we will oscillate with our second, low-frequency oscillator:
```js
const amp = new GainNode(audioCtx, {
gain: 1,
});
```
### Creating the second, low-frequency oscillator
We'll now create a second — `square` — wave (or pulse) oscillator to alter the amplification of our first sine wave:
```js
const lfo = new OscillatorNode(audioCtx, {
type: "square",
frequency: 30,
});
```
### Connecting the graph
The key here is connecting the graph correctly and also starting both oscillators:
```js
lfo.connect(amp.gain);
osc.connect(amp).connect(audioCtx.destination);
lfo.start();
osc.start(time);
osc.stop(time + pulseTime);
```
> **Note:** We also don't have to use the default wave types for either of these oscillators we're creating — we could use a wavetable and the periodic wave method as we did before. There is a multitude of possibilities with just a minimum of nodes.
### Pulse user controls
For the UI controls, let's expose both frequencies of our oscillators, allowing them to be controlled via range inputs. One will change the tone, and the other will change how the pulse modulates the first wave:
```html
<label for="hz">Hz</label>
<input
name="hz"
id="hz"
type="range"
min="660"
max="1320"
value="880"
step="1" />
<label for="lfo">LFO</label>
<input name="lfo" id="lfo" type="range" min="20" max="40" value="30" step="1" />
```
As before, we'll vary the parameters when the user changes the range values.
```js
let pulseHz = 880;
const hzControl = document.querySelector("#hz");
hzControl.addEventListener(
"input",
(ev) => {
pulseHz = parseInt(ev.target.value, 10);
},
false,
);
let lfoHz = 30;
const lfoControl = document.querySelector("#lfo");
lfoControl.addEventListener(
"input",
(ev) => {
lfoHz = parseInt(ev.target.value, 10);
},
false,
);
```
### The final playPulse() function
Here's the entire `playPulse()` function:
```js
const pulseTime = 1;
function playPulse(time) {
const osc = new OscillatorNode(audioCtx, {
type: "sine",
frequency: pulseHz,
});
const amp = new GainNode(audioCtx, {
gain: 1,
});
const lfo = new OscillatorNode(audioCtx, {
type: "square",
frequency: lfoHz,
});
lfo.connect(amp.gain);
osc.connect(amp).connect(audioCtx.destination);
lfo.start();
osc.start(time);
osc.stop(time + pulseTime);
}
```
## The "noise" — random noise buffer with a biquad filter
Now we need to make some noise! All modems have noise. In terms of audio data, noise is just random numbers, so it's a relatively straightforward thing to create with code.
### Creating an audio buffer
We need to create an empty container to put these numbers into, one that the Web Audio API understands. This is where {{domxref("AudioBuffer")}} objects come in. You can fetch a file and decode it into a buffer (we'll get to that later in the tutorial), or you can create an empty buffer and fill it with your data.
For noise, let's do the latter. We first need to calculate the size of our buffer to create it. We can use the {{domxref("BaseAudioContext.sampleRate")}} property for this:
```js
const bufferSize = audioCtx.sampleRate * noiseDuration;
// Create an empty buffer
const noiseBuffer = new AudioBuffer({
length: bufferSize,
sampleRate: audioCtx.sampleRate,
});
```
Now we can fill it with random numbers between -1 and 1:
```js
// Fill the buffer with noise
const data = noiseBuffer.getChannelData(0);
for (let i = 0; i < bufferSize; i++) {
data[i] = Math.random() * 2 - 1;
}
```
> **Note:** Why -1 to 1? When outputting sound to a file or speakers, we need a number representing 0 dB full scale — the numerical limit of the fixed point media or DAC. In floating point audio, 1 is a convenient number to map to "full scale" for mathematical operations on signals, so oscillators, noise generators, and other sound sources typically output bipolar signals in the range -1 to 1. A browser will clamp values outside this range.
### Creating a buffer source
Now that we have the audio buffer filled with data, we need a node to add to our graph that can use the buffer as a source. We'll create an {{domxref("AudioBufferSourceNode")}} for this, and pass in the data we've created:
```js
// Create a buffer source for our created data
const noise = new AudioBufferSourceNode(audioCtx, {
buffer: noiseBuffer,
});
```
When we connect this through our audio graph and play it:
```js
noise.connect(audioCtx.destination);
noise.start();
```
You'll notice that it's pretty hissy or tinny. We've created white noise; that's how it should be. Our values are spread from -1 to 1, meaning we have peaks of all frequencies, which are actually quite dramatic and piercing. We _could_ modify the function to spread values only from -0.5 to 0.5 or similar to take the peaks off and reduce the discomfort; however, where's the fun in that?
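If you did want to tame it that way, the change would be a one-liner inside the fill loop above (a sketch we won't actually use here):
```js
// Halve the spread, producing values between -0.5 and 0.5
data[i] = (Math.random() * 2 - 1) * 0.5;
```
Instead, let's route the noise we've created through a filter.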
### Adding a biquad filter to the mix
We want something in the range of pink or brown noise. We want to cut off those high frequencies and possibly some lower ones. Let's pick a bandpass biquad filter for the job.
> **Note:** The Web Audio API comes with two types of filter nodes: {{domxref("BiquadFilterNode")}} and {{domxref("IIRFilterNode")}}. For the most part, a biquad filter will be good enough — it comes with different types such as lowpass, highpass, and bandpass. If you're looking to do something more bespoke, however, the IIR filter might be a good option — see [Using IIR filters](/en-US/docs/Web/API/Web_Audio_API/Using_IIR_filters) for more information.
Wiring this up is the same as we've seen before. We create the {{domxref("BiquadFilterNode")}}, configure the properties we want for it, and connect it through our graph. Different types of biquad filters have different properties — for instance, setting the frequency on a bandpass type adjusts the middle frequency, whereas on a lowpass type it sets the cutoff frequency.
```js
// Filter the output
const bandpass = new BiquadFilterNode(audioCtx, {
type: "bandpass",
frequency: bandHz,
});
// Connect our graph
noise.connect(bandpass).connect(audioCtx.destination);
```
### Noise user controls
On the UI, we'll expose the noise duration and the frequency we want to band, allowing the user to adjust them via range inputs and event handlers just like in previous sections:
```html
<label for="duration">Duration</label>
<input
name="duration"
id="duration"
type="range"
min="0"
max="2"
value="1"
step="0.1" />
<label for="band">Band</label>
<input
name="band"
id="band"
type="range"
min="400"
max="1200"
value="1000"
step="5" />
```
```js
let noiseDuration = 1;
const durControl = document.querySelector("#duration");
durControl.addEventListener(
"input",
(ev) => {
noiseDuration = parseFloat(ev.target.value);
},
false,
);
let bandHz = 1000;
const bandControl = document.querySelector("#band");
bandControl.addEventListener(
"input",
(ev) => {
bandHz = parseInt(ev.target.value, 10);
},
false,
);
```
### The final playNoise() function
Here's the entire `playNoise()` function:
```js
function playNoise(time) {
const bufferSize = audioCtx.sampleRate * noiseDuration; // the length of the noise, in sample frames
// Create an empty buffer
const noiseBuffer = new AudioBuffer({
length: bufferSize,
sampleRate: audioCtx.sampleRate,
});
// Fill the buffer with noise
const data = noiseBuffer.getChannelData(0);
for (let i = 0; i < bufferSize; i++) {
data[i] = Math.random() * 2 - 1;
}
// Create a buffer source for our created data
const noise = new AudioBufferSourceNode(audioCtx, {
buffer: noiseBuffer,
});
// Filter the output
const bandpass = new BiquadFilterNode(audioCtx, {
type: "bandpass",
frequency: bandHz,
});
// Connect our graph
noise.connect(bandpass).connect(audioCtx.destination);
noise.start(time);
}
```
## "Dial-up" — loading a sound sample
It's straightforward enough to emulate phone dialing (DTMF) sounds by playing a couple of oscillators together using the methods we've already covered. Instead, we'll load a sample file in this section to look at what's involved.
### Loading the sample
We want to make sure our file has loaded and been decoded into a buffer before we use it, so let's create an [`async`](/en-US/docs/Web/JavaScript/Reference/Statements/async_function) function to allow us to do this:
```js
async function getFile(audioContext, filepath) {
const response = await fetch(filepath);
const arrayBuffer = await response.arrayBuffer();
const audioBuffer = await audioContext.decodeAudioData(arrayBuffer);
return audioBuffer;
}
```
We can then use the [`await`](/en-US/docs/Web/JavaScript/Reference/Operators/await) operator when calling this function, which ensures that we can only run subsequent code when it has finished executing.
Let's create another `async` function to set up the sample — we can combine the two async functions in a lovely promise pattern to perform further actions when this file is loaded and buffered:
```js
async function setupSample() {
const filePath = "dtmf.mp3";
const sample = await getFile(audioCtx, filePath);
return sample;
}
```
> **Note:** You can easily modify the above function to take an array of files and loop over them to load more than one sample. This technique would be convenient for more complex instruments or gaming.
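For instance, a minimal sketch of such a multi-file loader might look like this (the second file name is a made-up example):
```js
async function setupSamples(paths) {
  // Fetch and decode every file in parallel, reusing the getFile()
  // helper defined above
  const samples = await Promise.all(
    paths.map((path) => getFile(audioCtx, path)),
  );
  return samples;
}

// Usage:
// setupSamples(["dtmf.mp3", "extra-tone.mp3"]).then((samples) => {
//   // samples is an array of decoded AudioBuffers
// });
```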
We can now use `setupSample()` like so:
```js
setupSample().then((sample) => {
// sample is our buffered file
// …
});
```
When the sample is ready to play, the program sets up the UI, so it is ready to go.
### Playing the sample
Let's create a `playSample()` function similarly to how we did with the other sounds. This time we will create an {{domxref("AudioBufferSourceNode")}}, put the buffer data we've fetched and decoded into it, and play it:
```js
function playSample(audioContext, audioBuffer, time) {
const sampleSource = new AudioBufferSourceNode(audioContext, {
buffer: audioBuffer,
playbackRate,
});
sampleSource.connect(audioContext.destination);
sampleSource.start(time);
return sampleSource;
}
```
> **Note:** We can call `stop()` on an {{domxref("AudioBufferSourceNode")}}; however, the node stops automatically when the sample has finished playing.
### Dial-up user controls
The {{domxref("AudioBufferSourceNode")}} comes with a [`playbackRate`](/en-US/docs/Web/API/AudioBufferSourceNode/playbackRate) property. Let's expose that to our UI so that we can speed up and slow down our sample. We'll do that in the same sort of way as before:
```html
<label for="rate">Rate</label>
<input
name="rate"
id="rate"
type="range"
min="0.1"
max="2"
value="1"
step="0.1" />
```
```js
let playbackRate = 1;
const rateControl = document.querySelector("#rate");
rateControl.addEventListener(
"input",
(ev) => {
playbackRate = parseFloat(ev.target.value);
},
false,
);
```
### The final playSample() function
We'll then add a line to our `playSample()` function to update the `playbackRate` property. The final version looks like this:
```js
function playSample(audioContext, audioBuffer, time) {
const sampleSource = new AudioBufferSourceNode(audioContext, {
buffer: audioBuffer,
playbackRate,
});
sampleSource.connect(audioContext.destination);
sampleSource.start(time);
return sampleSource;
}
```
> **Note:** The sound file was [sourced from soundbible.com](https://soundbible.com/1573-DTMF-Tones.html).
## Playing the audio in time
A common problem with digital audio applications is getting the sounds to play in time so that the beat remains consistent and things do not slip out of time.
We could schedule our voices to play within a `for` loop; however, the biggest problem with this approach is that it can't react to changes while the music is playing, and we've already implemented UI controls to make such changes. Also, it would be really nice to consider an instrument-wide BPM control. The best way to get our voices to play on the beat is to create a scheduling system, whereby we look ahead at when the notes will play and push them into a queue. We can start them at a precise time with the `currentTime` property, taking any changes into account.
> **Note:** This is a much stripped down version of [Chris Wilson's A Tale Of Two Clocks (2013)](https://web.dev/articles/audio-scheduling) article, which goes into this method with much more detail. There's no point repeating it all here, but we highly recommend reading this article and using this method. Much of the code here is taken from his [metronome example](https://github.com/cwilso/metronome/blob/master/js/metronome.js), which he references in the article.
Let's start by setting up our default BPM (beats per minute), which will also be user-controllable via — you guessed it — another range input.
```js
let tempo = 60.0;
const bpmControl = document.querySelector("#bpm");
bpmControl.addEventListener(
"input",
(ev) => {
tempo = parseInt(ev.target.value, 10);
},
false,
);
```
Then we'll create variables to define how far ahead we want to look and how far ahead we want to schedule:
```js
const lookahead = 25.0; // How frequently to call scheduling function (in milliseconds)
const scheduleAheadTime = 0.1; // How far ahead to schedule audio (sec)
```
Let's create a function that moves the note forwards by one beat and loops back to the first when it reaches the 4th (last) one:
```js
let currentNote = 0;
let nextNoteTime = 0.0; // when the next note is due.
function nextNote() {
const secondsPerBeat = 60.0 / tempo;
nextNoteTime += secondsPerBeat; // Add beat length to last beat time
// Advance the beat number, wrap to zero when reaching 4
currentNote = (currentNote + 1) % 4;
}
```
We want to create a reference queue for the notes that are to be played, and the functionality to play them using the functions we've previously created:
```js
const notesInQueue = [];
function scheduleNote(beatNumber, time) {
// Push the note on the queue, even if we're not playing.
notesInQueue.push({ note: beatNumber, time });
if (pads[0].querySelectorAll("input")[beatNumber].checked) {
playSweep(time);
}
if (pads[1].querySelectorAll("input")[beatNumber].checked) {
playPulse(time);
}
if (pads[2].querySelectorAll("input")[beatNumber].checked) {
playNoise(time);
}
if (pads[3].querySelectorAll("input")[beatNumber].checked) {
playSample(audioCtx, dtmf, time);
}
}
```
Next comes the scheduler, which looks at the current time and compares it to the time of the next note; when the next note falls within the look-ahead window, it calls the two previous functions to schedule the note and advance the beat.
{{domxref("AudioContext")}} object instances have a [`currentTime`](/en-US/docs/Web/API/BaseAudioContext/currentTime) property, which allows us to retrieve the number of seconds after we first created the context. We will use it for timing within our step sequencer. It's extremely accurate, returning a float value accurate to about 15 decimal places.
```js
let timerID;
function scheduler() {
// While there are notes that will need to play before the next interval,
// schedule them and advance the pointer.
while (nextNoteTime < audioCtx.currentTime + scheduleAheadTime) {
scheduleNote(currentNote, nextNoteTime);
nextNote();
}
timerID = setTimeout(scheduler, lookahead);
}
```
We also need a `draw()` function to update the UI, so we can see when the beat progresses.
```js
let lastNoteDrawn = 3;
function draw() {
let drawNote = lastNoteDrawn;
const currentTime = audioCtx.currentTime;
while (notesInQueue.length && notesInQueue[0].time < currentTime) {
drawNote = notesInQueue[0].note;
notesInQueue.shift(); // Remove note from queue
}
// We only need to draw if the note has moved.
if (lastNoteDrawn !== drawNote) {
pads.forEach((pad) => {
pad.children[lastNoteDrawn * 2].style.borderColor = "var(--black)";
pad.children[drawNote * 2].style.borderColor = "var(--yellow)";
});
lastNoteDrawn = drawNote;
}
// Set up to draw again
requestAnimationFrame(draw);
}
```
## Putting it all together
Now all that's left to do is make sure we've loaded the sample before we can _play_ the instrument. We'll add a loading screen that disappears when the file has been fetched and decoded. Then we can allow the scheduler to start using the play button click event.
```js
// When the sample has loaded, allow play
const loadingEl = document.querySelector(".loading");
const playButton = document.querySelector("#playBtn");
let isPlaying = false;
let dtmf; // will hold the decoded sample
setupSample().then((sample) => {
loadingEl.style.display = "none";
dtmf = sample; // to be used in our playSample function
playButton.addEventListener("click", (ev) => {
isPlaying = !isPlaying;
if (isPlaying) {
// Start playing
// Check if context is in suspended state (autoplay policy)
if (audioCtx.state === "suspended") {
audioCtx.resume();
}
currentNote = 0;
nextNoteTime = audioCtx.currentTime;
scheduler(); // kick off scheduling
requestAnimationFrame(draw); // start the drawing loop.
ev.target.dataset.playing = "true";
} else {
clearTimeout(timerID);
ev.target.dataset.playing = "false";
}
});
});
```
## Summary
We've now got an instrument inside our browser! Keep playing and experimenting — you can expand on any of these techniques to create something much more elaborate.
| 0 |
data/mdn-content/files/en-us/web/api/web_audio_api | data/mdn-content/files/en-us/web/api/web_audio_api/visualizations_with_web_audio_api/index.md | ---
title: Visualizations with Web Audio API
slug: Web/API/Web_Audio_API/Visualizations_with_Web_Audio_API
page-type: guide
---
{{DefaultAPISidebar("Web Audio API")}}
One of the most interesting features of the Web Audio API is the ability to extract frequency, waveform, and other data from your audio source, which can then be used to create visualizations. This article explains how, and provides a couple of basic use cases.
> **Note:** You can find working examples of all the code snippets in our [Voice-change-O-matic](https://mdn.github.io/webaudio-examples/voice-change-o-matic/) demo.
## Basic concepts
To extract data from your audio source, you need an {{ domxref("AnalyserNode") }}, which is created using the {{ domxref("BaseAudioContext.createAnalyser") }} method, for example:
```js
const audioCtx = new AudioContext();
const analyser = audioCtx.createAnalyser();
```
This node is then connected to your audio source at some point between your source and your destination, for example:
```js
const source = audioCtx.createMediaStreamSource(stream);
source.connect(analyser);
analyser.connect(distortion);
distortion.connect(audioCtx.destination);
```
> **Note:** You don't need to connect the analyser's output to another node for it to work, as long as its input is connected to the source, either directly or via another node.
The analyser node will then capture audio data using a Fast Fourier Transform (fft) in a certain frequency domain, depending on what you specify as the {{ domxref("AnalyserNode.fftSize") }} property value (if no value is specified, the default is 2048.)
> **Note:** You can also specify a minimum and maximum power value for the fft data scaling range, using {{ domxref("AnalyserNode.minDecibels") }} and {{ domxref("AnalyserNode.maxDecibels") }}, and different data averaging constants using {{ domxref("AnalyserNode.smoothingTimeConstant") }}. Read those pages to get more information on how to use them.
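For instance, a quick sketch of tuning those properties might look like this; the specific values are illustrative, not recommendations:
```js
analyser.fftSize = 1024; // a power of 2, between 32 and 32768
analyser.minDecibels = -90; // floor of the scaling range for byte data
analyser.maxDecibels = -10; // ceiling of the scaling range for byte data
analyser.smoothingTimeConstant = 0.85; // 0 = no averaging, up to 1
```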
To capture data, you need to use the methods {{ domxref("AnalyserNode.getFloatFrequencyData()") }} and {{ domxref("AnalyserNode.getByteFrequencyData()") }} to capture frequency data, and {{ domxref("AnalyserNode.getByteTimeDomainData()") }} and {{ domxref("AnalyserNode.getFloatTimeDomainData()") }} to capture waveform data.
These methods copy data into a specified array, so you need to create a new array to receive the data before invoking one. The float methods produce 32-bit floating point numbers, and the byte methods produce 8-bit unsigned integers, therefore a standard JavaScript array won't do — you need to use a {{jsxref("Float32Array")}} or {{jsxref("Uint8Array")}} array, depending on what data you are handling.
So, for example, say we are dealing with an fft size of 2048. We retrieve the {{ domxref("AnalyserNode.frequencyBinCount") }} value, which is half the fft size, then call `Uint8Array()` with the `frequencyBinCount` as its length argument — this is how many data points we will be collecting for that fft size.
```js
analyser.fftSize = 2048;
const bufferLength = analyser.frequencyBinCount;
const dataArray = new Uint8Array(bufferLength);
```
To actually retrieve the data and copy it into our array, we then call the data collection method we want, with the array passed as its argument. For example:
```js
analyser.getByteTimeDomainData(dataArray);
```
We now have the audio data for that moment in time captured in our array, and can proceed to visualize it however we like, for example by plotting it onto an HTML {{ htmlelement("canvas") }}.
Let's go on to look at some specific examples.
## Creating a waveform/oscilloscope
To create the oscilloscope visualization (hat tip to [Soledad Penadés](https://soledadpenades.com/) for the original code in [Voice-change-O-matic](https://github.com/mdn/webaudio-examples/blob/main/voice-change-o-matic/scripts/app.js#L142)), we first follow the standard pattern described in the previous section to set up the buffer:
```js
analyser.fftSize = 2048;
const bufferLength = analyser.frequencyBinCount;
const dataArray = new Uint8Array(bufferLength);
```
Next, we clear the canvas of what had been drawn on it before to get ready for the new visualization display:
```js
canvasCtx.clearRect(0, 0, WIDTH, HEIGHT);
```
We now define the `draw()` function:
```js
function draw() {
```
In here, we use `requestAnimationFrame()` to keep looping the drawing function once it has been started:
```js
const drawVisual = requestAnimationFrame(draw);
```
Next, we grab the time domain data and copy it into our array
```js
analyser.getByteTimeDomainData(dataArray);
```
Next, fill the canvas with a solid color to start
```js
canvasCtx.fillStyle = "rgb(200 200 200)";
canvasCtx.fillRect(0, 0, WIDTH, HEIGHT);
```
Set a line width and stroke color for the wave we will draw, then begin drawing a path
```js
canvasCtx.lineWidth = 2;
canvasCtx.strokeStyle = "rgb(0 0 0)";
canvasCtx.beginPath();
```
Determine the width of each segment of the line to be drawn by dividing the canvas width by the array length (equal to the `frequencyBinCount` defined earlier), then define an `x` variable to track the position for drawing each segment of the line.
```js
const sliceWidth = WIDTH / bufferLength;
let x = 0;
```
Now we run through a loop, defining the position of a small segment of the wave for each point in the buffer at a certain height based on the data point value from the array, then moving the line across to the place where the next wave segment should be drawn:
```js
for (let i = 0; i < bufferLength; i++) {
const v = dataArray[i] / 128.0;
const y = v * (HEIGHT / 2);
if (i === 0) {
canvasCtx.moveTo(x, y);
} else {
canvasCtx.lineTo(x, y);
}
x += sliceWidth;
}
```
Finally, we finish the line in the middle of the right-hand side of the canvas, then draw the stroke we've defined:
```js
canvasCtx.lineTo(WIDTH, HEIGHT / 2);
canvasCtx.stroke();
}
```
At the end of this section of code, we invoke the `draw()` function to start off the whole process:
```js
draw();
```
This gives us a nice waveform display that updates several times a second:

## Creating a frequency bar graph
Another nice little sound visualization to create is one of those Winamp-style frequency bar graphs. We have one available in Voice-change-O-matic; let's look at how it's done.
First, we again set up our analyser and data array, then clear the current canvas display with `clearRect()`. The only difference from before is that we have set the fft size to be much smaller; this is so that each bar in the graph is big enough to actually look like a bar rather than a thin strand.
```js
analyser.fftSize = 256;
const bufferLength = analyser.frequencyBinCount;
const dataArray = new Uint8Array(bufferLength);
canvasCtx.clearRect(0, 0, WIDTH, HEIGHT);
```
Next, we start our `draw()` function off, again setting up a loop with `requestAnimationFrame()` so that the displayed data keeps updating, and clearing the display with each animation frame.
```js
function draw() {
drawVisual = requestAnimationFrame(draw);
analyser.getByteFrequencyData(dataArray);
canvasCtx.fillStyle = "rgb(0 0 0)";
canvasCtx.fillRect(0, 0, WIDTH, HEIGHT);
```
Now we set our `barWidth` to be equal to the canvas width divided by the number of bars (the buffer length). However, we are also multiplying that width by 2.5, because most of the frequencies will come back as having no audio in them, as most of the sounds we hear every day are in a certain lower frequency range. We don't want to display loads of empty bars, so we widen the bars that regularly reach a noticeable height so that they fill the canvas display.
We also set a `barHeight` variable, and an `x` variable to record how far across the screen to draw the current bar.
```js
const barWidth = (WIDTH / bufferLength) * 2.5;
let barHeight;
let x = 0;
```
As before, we now start a for loop and cycle through each value in the `dataArray`. For each one, we make the `barHeight` equal to the array value, set a fill color based on the `barHeight` (taller bars are brighter), and draw a bar at `x` pixels across the canvas, which is `barWidth` wide and `barHeight / 2` tall (we eventually decided to cut each bar in half so they would all fit on the canvas better.)
The one value that needs explaining is the vertical offset position we are drawing each bar at: `HEIGHT - barHeight / 2`. We do this because we want each bar to stick up from the bottom of the canvas, not down from the top, as it would if we set the vertical position to 0. Therefore, we instead set the vertical position each time to the height of the canvas minus `barHeight / 2`, so each bar is drawn from partway down the canvas, down to the bottom.
```js
for (let i = 0; i < bufferLength; i++) {
barHeight = dataArray[i] / 2;
canvasCtx.fillStyle = `rgb(${barHeight + 100} 50 50)`;
canvasCtx.fillRect(x, HEIGHT - barHeight / 2, barWidth, barHeight);
x += barWidth + 1;
}
```
Again, at the end of the code we invoke the `draw()` function to set the whole process in motion.
```js
draw();
```
This code gives us a result like the following:

> **Note:** The examples listed in this article have shown usage of {{ domxref("AnalyserNode.getByteFrequencyData()") }} and {{ domxref("AnalyserNode.getByteTimeDomainData()") }}. For working examples showing {{ domxref("AnalyserNode.getFloatFrequencyData()") }} and {{ domxref("AnalyserNode.getFloatTimeDomainData()") }}, refer to our [Voice-change-O-matic-float-data](https://mdn.github.io/voice-change-o-matic-float-data/) demo — this is exactly the same as the original [Voice-change-O-matic](https://mdn.github.io/webaudio-examples/voice-change-o-matic/), except that it uses Float data, not unsigned byte data. See [this section](https://github.com/mdn/webaudio-examples/blob/main/voice-change-o-matic/scripts/app.js#L155) of the source code for details.
| 0 |
data/mdn-content/files/en-us/web/api/web_audio_api | data/mdn-content/files/en-us/web/api/web_audio_api/basic_concepts_behind_web_audio_api/index.md | ---
title: Basic concepts behind Web Audio API
slug: Web/API/Web_Audio_API/Basic_concepts_behind_Web_Audio_API
page-type: guide
---
{{DefaultAPISidebar("Web Audio API")}}
This article explains some of the audio theory behind how the features of the Web Audio API work to help you make informed decisions while designing how your app routes audio. If you are not already a sound engineer, it will give you enough background to understand why the Web Audio API works as it does.
## Audio graphs
The [Web Audio API](/en-US/docs/Web/API/Web_Audio_API) involves handling audio operations inside an [audio context](/en-US/docs/Web/API/AudioContext), and has been designed to allow _modular routing_. Each [audio node](/en-US/docs/Web/API/AudioNode) performs a basic audio operation and is linked with one or more other audio nodes to form an [audio routing graph](/en-US/docs/Web/API/AudioNode#the_audio_routing_graph). Several sources with different channel layouts are supported, even within a single context. This modular design provides the flexibility to create complex audio functions with dynamic effects.
Audio nodes are linked via their inputs and outputs, forming a chain that starts with one or more sources, goes through one or more nodes, then ends up at a destination (although you don't have to provide a destination if you only want to visualize some audio data). A simple, typical workflow for web audio would look something like this:
1. Create the audio context.
2. Create audio sources inside the context (such as {{HTMLElement("audio")}}, an oscillator, or stream).
3. Create audio effects (such as the reverb, biquad filter, panner, or compressor nodes).
4. Choose the final destination for the audio (such as the user's computer speakers).
5. Connect the source nodes to zero or more effect nodes and then to the chosen destination.
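As a concrete illustration, here is a minimal sketch of that workflow; the oscillator and gain node stand in for any source and any effect:
```js
// 1. Create the audio context
const audioCtx = new AudioContext();

// 2. Create a source inside the context
const osc = new OscillatorNode(audioCtx, { frequency: 440 });

// 3. Create an effect (a gain node standing in for reverb, filters, etc.)
const gain = new GainNode(audioCtx, { gain: 0.5 });

// 4. and 5. Connect source → effect → destination, then start the source
osc.connect(gain).connect(audioCtx.destination);
osc.start();
```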
> **Note:** The [channel notation](https://en.wikipedia.org/wiki/Surround_sound#Channel_notation) is a numeric value, such as _2.0_ or _5.1_, representing the number of audio channels available on a signal. The first number is the number of full frequency range audio channels the signal includes. The number appearing after the period indicates the number of those channels reserved for low-frequency effect (LFE) outputs; these are often called **subwoofers**.

Each input or output is composed of one or more audio **channels**, which together represent a specific audio layout. Any discrete channel structure is supported, including _mono_, _stereo_, _quad_, _5.1_, and so on.

You have several ways to obtain audio:
- Sound can be generated directly in JavaScript by an audio node (such as an oscillator).
- It can be created from raw [PCM](https://en.wikipedia.org/wiki/Pulse-code_modulation) data (such as .WAV files or other formats supported by {{domxref("BaseAudioContext/decodeAudioData", "decodeAudioData()")}}).
- It can be generated from HTML media elements, such as {{HTMLElement("video")}} or {{HTMLElement("audio")}}.
- It can be obtained from a [WebRTC](/en-US/docs/Web/API/WebRTC_API) {{domxref("MediaStream")}}, such as a webcam or microphone.
## Audio data: what's in a sample
When an audio signal is processed, sampling happens. **Sampling** is the conversion of a [continuous signal](https://en.wikipedia.org/wiki/Continuous_signal) to a [discrete signal](https://en.wikipedia.org/wiki/Discrete_signal). Put another way, a continuous sound wave, such as a band playing live, is converted into a sequence of digital samples (a discrete-time signal) that allows a computer to handle the audio in distinct blocks.
You'll find more information on the Wikipedia page [_Sampling (signal processing)_](https://en.wikipedia.org/wiki/Sampling_%28signal_processing%29).
## Audio buffers: frames, samples, and channels
An {{domxref("AudioBuffer")}} is defined with three parameters:
- the number of channels (1 for mono, 2 for stereo, etc.),
- its length, meaning the number of sample frames inside the buffer,
- and the sample rate, the number of sample frames played per second.
A _sample_ is a single 32-bit floating point value representing the value of the audio stream at each specific moment in time within a particular channel (left or right, in the case of stereo). A _frame_, or _sample frame_, is the set of all values for all channels that will play at a specific moment in time: all the samples of all the channels that play at the same time (two for a stereo sound, six for 5.1, etc.).
The _sample rate_ is the quantity of those samples (or frames, since all samples of a frame play at the same time) that will play in one second, measured in Hz. The higher the sample rate, the better the sound quality.
Let's look at a _mono_ and a _stereo_ audio buffer, each one second long at a rate of 44100Hz:
- The _mono_ buffer will have 44,100 samples and 44,100 frames. The `length` property will be 44,100.
- The _stereo_ buffer will have 88,200 samples but still 44,100 frames. The `length` property will still be 44,100 since it equals the number of frames.

When a buffer plays, you will first hear the leftmost sample frame, then the one right next to it, then the next, _and so on_, until the end of the buffer. In the case of stereo, you will hear both channels simultaneously. Sample frames are handy because they are independent of the number of channels and represent time in an ideal way for precise audio manipulation.
> **Note:** To get a time in seconds from a frame count, divide the number of frames by the sample rate. To get the number of frames from the number of samples, you only need to divide the latter value by the channel count.
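In code, those two conversions look like this (the numbers are just examples):
```js
const sampleRate = 44100; // frames per second
const frameCount = 22050;
const seconds = frameCount / sampleRate; // 0.5 seconds

const sampleCount = 88200; // total samples in a stereo buffer
const channelCount = 2;
const frames = sampleCount / channelCount; // 44100 frames
```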
Here are a couple of simple examples:
```js
const context = new AudioContext();
const buffer = new AudioBuffer(context, {
numberOfChannels: 2,
length: 22050,
sampleRate: 44100,
});
```
> **Note:** In [digital audio](https://en.wikipedia.org/wiki/Digital_audio), **44,100 [Hz](https://en.wikipedia.org/wiki/Hertz)** (alternately represented as **44.1 kHz**) is a common [sampling frequency](https://en.wikipedia.org/wiki/Sampling_frequency). Why 44.1 kHz?
>
> Firstly, because the [hearing range](https://en.wikipedia.org/wiki/Hearing_range) of human ears is roughly 20 Hz to 20,000 Hz. Via the [Nyquist–Shannon sampling theorem](https://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_sampling_theorem), the sampling frequency must be greater than twice the maximum frequency one wishes to reproduce. Therefore, the sampling rate has to be _greater_ than 40,000 Hz.
>
> Secondly, signals must be [low-pass filtered](https://en.wikipedia.org/wiki/Low-pass_filter) before sampling, otherwise [aliasing](https://en.wikipedia.org/wiki/Aliasing) occurs. While an ideal low-pass filter would perfectly pass frequencies below 20 kHz (without attenuating them) and perfectly cut off frequencies above 20 kHz, in practice, a [transition band](https://en.wikipedia.org/wiki/Transition_band) is necessary, where frequencies are partly attenuated. The wider this transition band is, the easier and more economical it is to make an [anti-aliasing filter](https://en.wikipedia.org/wiki/Anti-aliasing_filter). The 44.1 kHz sampling frequency allows for a 2.05 kHz transition band.
If you use the call above, you will get a stereo buffer with two channels that, when played back on an {{domxref("AudioContext")}} running at 44,100 Hz (very common; most normal sound cards run at this rate), will last for 0.5 seconds: 22,050 frames / 44,100 Hz = 0.5 seconds.
```js
const context = new AudioContext();
const buffer = new AudioBuffer(context, {
numberOfChannels: 1,
length: 22050,
sampleRate: 22050,
});
```
If you use this call, you will get a mono buffer (single-channel buffer) that, when played back on an {{domxref("AudioContext")}} running at 44,100 Hz, will be automatically _resampled_ to 44,100 Hz (and therefore yield 44,100 frames), and last for 1.0 second: 44,100 frames/44,100 Hz = 1 second.
> **Note:** Audio resampling is very similar to image resizing. Say you've got a 16 x 16 image but want it to fill a 32 x 32 area. You resize (or resample) it. The result has less quality (it can be blurry or edgy, depending on the resizing algorithm), but it works, with the resized image taking up less space. Resampled audio is the same: you save space, but, in practice, you cannot correctly reproduce high-frequency content or treble sound.
### Planar versus interleaved buffers
The Web Audio API uses a planar buffer format. The left and right channels are stored like this:
```plain
LLLLLLLLLLLLLLLLRRRRRRRRRRRRRRRR (for a buffer of 16 frames)
```
This structure is widespread in audio processing, making it easy to process each channel independently.
The alternative is to use an interleaved buffer format:
```plain
LRLRLRLRLRLRLRLRLRLRLRLRLRLRLRLR (for a buffer of 16 frames)
```
This format is prevalent for storing and playing back audio without much processing, for example: .WAV files or a decoded MP3 stream.
Because the Web Audio API is designed for processing, it exposes _only_ planar buffers. It uses the planar format but converts the audio to the interleaved format when it sends it to the sound card for playback. Conversely, when the API decodes an MP3, it starts with the interleaved format and converts it to the planar format for processing.
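To make the difference concrete, here is a minimal sketch of interleaving two planar channels by hand (a conversion the API performs internally, so you would rarely write it yourself):
```js
// left and right are planar Float32Arrays of equal length
function interleave(left, right) {
  const interleaved = new Float32Array(left.length * 2);
  for (let i = 0; i < left.length; i++) {
    interleaved[2 * i] = left[i]; // L sample of frame i
    interleaved[2 * i + 1] = right[i]; // R sample of frame i
  }
  return interleaved;
}
```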
## Audio channels
Each audio buffer may contain different numbers of channels. Most modern audio devices use the basic _mono_ (only one channel) and _stereo_ (left and right channels) settings. Some more complex sets support _surround sound_ settings (like _quad_ and _5.1_), which can lead to a richer sound experience thanks to their high channel count. We usually represent the channels with the standard abbreviations detailed in the table below:
| Name | Channels |
| -------- | -------------------------------------------------------------------------------------------------- |
| _Mono_ | `0: M: mono` |
| _Stereo_ | `0: L: left 1: R: right` |
| _Quad_ | `0: L: left 1: R: right 2: SL: surround left 3: SR: surround right` |
| _5.1_ | `0: L: left 1: R: right 2: C: center 3: LFE: subwoofer 4: SL: surround left 5: SR: surround right` |
### Up-mixing and down-mixing
When the input and the output don't have the same number of channels, up-mixing or down-mixing must be done. The following rules, controlled by setting the {{domxref("AudioNode.channelInterpretation")}} property to `speakers` or `discrete`, apply:
<table class="standard-table">
<thead>
<tr>
<th scope="row">Interpretation</th>
<th scope="col">Input channels</th>
<th scope="col">Output channels</th>
<th scope="col">Mixing rules</th>
</tr>
</thead>
<tbody>
<tr>
<th rowspan="13" scope="row"><code>speakers</code></th>
<td><code>1</code> <em>(Mono)</em></td>
<td><code>2</code> <em>(Stereo)</em></td>
<td>
<em>Up-mix from mono to stereo</em>.<br />The <code>M</code> input
channel is used for both output channels (<code>L</code> and
<code>R</code>).<br /><code
>output.L = input.M<br />output.R = input.M</code
>
</td>
</tr>
<tr>
<td><code>1</code> <em>(Mono)</em></td>
<td><code>4</code> <em>(Quad)</em></td>
<td>
<em>Up-mix from mono to quad.</em><br />The <code>M</code> input channel
is used for non-surround output channels (<code>L</code> and
<code>R</code>). Surround output channels (<code>SL</code> and
<code>SR</code>) are silent.<br /><code
>output.L = input.M<br />output.R = input.M<br />output.SL = 0<br />output.SR
= 0</code
>
</td>
</tr>
<tr>
<td><code>1</code> <em>(Mono)</em></td>
<td><code>6</code> <em>(5.1)</em></td>
<td>
<em>Up-mix from mono to 5.1.</em><br />The <code>M</code> input channel
is used for the center output channel (<code>C</code>). All the others
(<code>L</code>, <code>R</code>, <code>LFE</code>, <code>SL</code>, and
<code>SR</code>) are silent.<br /><code
>output.L = 0<br />output.R = 0</code
><br /><code
>output.C = input.M<br />output.LFE = 0<br />output.SL = 0<br />output.SR
= 0</code
>
</td>
</tr>
<tr>
<td><code>2</code> <em>(Stereo)</em></td>
<td><code>1</code> <em>(Mono)</em></td>
<td>
<em>Down-mix from stereo to mono</em>.<br />Both input channels (<code
>L</code
>
and <code>R</code>) are equally combined to produce the unique output
channel (<code>M</code>).<br /><code
>output.M = 0.5 * (input.L + input.R)</code
>
</td>
</tr>
<tr>
<td><code>2</code> <em>(Stereo)</em></td>
<td><code>4</code> <em>(Quad)</em></td>
<td>
<em>Up-mix from stereo to quad.</em><br />The <code>L</code> and
<code>R </code>input channels are used for their non-surround respective
output channels (<code>L</code> and <code>R</code>). Surround output
channels (<code>SL</code> and <code>SR</code>) are silent.<br /><code
>output.L = input.L<br />output.R = input.R<br />output.SL = 0<br />output.SR
= 0</code
>
</td>
</tr>
<tr>
<td><code>2</code> <em>(Stereo)</em></td>
<td><code>6</code> <em>(5.1)</em></td>
<td>
<em>Up-mix from stereo to 5.1.</em><br />The <code>L</code> and
<code>R </code>input channels are used for their non-surround respective
output channels (<code>L</code> and <code>R</code>). Surround output
channels (<code>SL</code> and <code>SR</code>), as well as the center
(<code>C</code>) and subwoofer (<code>LFE</code>) channels, are left
silent.<br /><code
>output.L = input.L<br />output.R = input.R<br />output.C = 0<br />output.LFE
= 0<br />output.SL = 0<br />output.SR = 0</code
>
</td>
</tr>
<tr>
<td><code>4</code> <em>(Quad)</em></td>
<td><code>1</code> <em>(Mono)</em></td>
<td>
<em>Down-mix from quad to mono</em>.<br />All four input channels
(<code>L</code>, <code>R</code>, <code>SL</code>, and <code>SR</code>)
are equally combined to produce the unique output channel
(<code>M</code>).<br /><code
>output.M = 0.25 * (input.L + input.R + </code
><code>input.SL + input.SR</code><code>)</code>
</td>
</tr>
<tr>
<td><code>4</code> <em>(Quad)</em></td>
<td><code>2</code> <em>(Stereo)</em></td>
<td>
<em>Down-mix from quad to stereo</em>.<br />Both left input channels
(<code>L</code> and <code>SL</code>) are equally combined to produce the
unique left output channel (<code>L</code>). And similarly, both right
input channels (<code>R</code> and <code>SR</code>) are equally combined
to produce the unique right output channel (<code>R</code>).<br /><code
>output.L = 0.5 * (input.L + input.SL</code
><code>)</code><br /><code>output.R = 0.5 * (input.R + input.SR</code
><code>)</code>
</td>
</tr>
<tr>
<td><code>4</code> <em>(Quad)</em></td>
<td><code>6</code> <em>(5.1)</em></td>
<td>
<em>Up-mix from quad to 5.1.</em><br />The <code>L</code>,
<code>R</code>, <code>SL</code>, and <code>SR</code> input channels are
used for their respective output channels (<code>L</code> and
<code>R</code>). Center (<code>C</code>) and subwoofer
(<code>LFE</code>) channels are left silent.<br /><code
>output.L = input.L<br />output.R = input.R<br />output.C = 0<br />output.LFE
= 0<br />output.SL = input.SL<br />output.SR = input.SR</code
>
</td>
</tr>
<tr>
<td><code>6</code> <em>(5.1)</em></td>
<td><code>1</code> <em>(Mono)</em></td>
<td>
<em>Down-mix from 5.1 to mono.</em><br />The left (<code>L</code> and
<code>SL</code>), right (<code>R</code> and <code>SR</code>) and central
channels are all mixed together. The surround channels are slightly
attenuated, and the regular lateral channels are power-compensated to
make them count as a single channel by multiplying by <code>√2/2</code>.
The subwoofer (<code>LFE</code>) channel is lost.<br /><code
>output.M = 0.7071 * (input.L + input.R) + input.C + 0.5 * (input.SL +
input.SR)</code
>
</td>
</tr>
<tr>
<td><code>6</code> <em>(5.1)</em></td>
<td><code>2</code> <em>(Stereo)</em></td>
<td>
<em>Down-mix from 5.1 to stereo.</em><br />The central channel
(<code>C</code>) is summed with each lateral surround channel (<code
>SL</code
>
or <code>SR</code>) and mixed to each lateral channel. As it is mixed
down to two channels, it is mixed at a lower power: in each case, it is
multiplied by <code>√2/2</code>. The subwoofer (<code>LFE</code>)
channel is lost.<br /><code
>output.L = input.L + 0.7071 * (input.C + input.SL)<br />output.R =
input.R </code
><code>+ 0.7071 * (input.C + input.SR)</code>
</td>
</tr>
<tr>
<td><code>6</code> <em>(5.1)</em></td>
<td><code>4</code> <em>(Quad)</em></td>
<td>
<em>Down-mix from 5.1 to quad.</em><br />The central (<code>C</code>) is
mixed with the lateral non-surround channels (<code>L</code> and
<code>R</code>). As it is mixed down to two channels, it is mixed at a
lower power: in each case, it is multiplied by <code>√2/2</code>. The
surround channels are passed unchanged. The subwoofer (<code>LFE</code>)
channel is lost.<br /><code
>output.L = input.L + 0.7071 * input.C<br />output.R = input.R +
0.7071 * input.C<br />output.SL = input.SL<br />output.SR =
input.SR</code
>
</td>
</tr>
<tr>
<td colspan="2">Other, non-standard layouts</td>
<td>
Non-standard channel layouts behave as if
<code>channelInterpretation</code> is set to
<code>discrete</code>.<br />The specification explicitly allows the future definition of new speaker layouts. Therefore, this fallback is not future-proof as the behavior of the browsers for a specific number of channels may change in the future.
</td>
</tr>
<tr>
<th rowspan="2" scope="row"><code>discrete</code></th>
<td>any (<code>x</code>)</td>
<td>any (<code>y</code>) where <code>x<y</code></td>
<td>
<em>Up-mix discrete channels.</em><br />Fill each output channel with
its input counterpart — that is, the input channel with the same index.
Channels with no corresponding input channels are left silent.
</td>
</tr>
<tr>
<td>any (<code>x</code>)</td>
<td>any (<code>y</code>) where <code>x>y</code></td>
<td>
<em>Down-mix discrete channels.</em><br />Fill each output channel with
its input counterpart — that is, the input channel with the same index.
Input channels with no corresponding output channels are dropped.
</td>
</tr>
</tbody>
</table>
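In code, switching between these two behaviors is a single property assignment; the gain node below is just an example node:
```js
const audioCtx = new AudioContext();
const gainNode = new GainNode(audioCtx);

// Apply the speaker mixing rules from the table above (the default)
gainNode.channelInterpretation = "speakers";

// Or match channels strictly by index instead
gainNode.channelInterpretation = "discrete";
```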
## Visualizations
In general, we get the output over time to produce audio visualizations, usually reading its gain or frequency data. Then, using a graphical tool, we turn the obtained data into a visual representation, such as a graph. The Web Audio API has an {{domxref("AnalyserNode")}} available that doesn't alter the audio signal passing through it. Additionally, it outputs the audio data, allowing us to process it via a technology such as {{htmlelement("canvas")}}.

You can grab data using the following methods:
- {{domxref("AnalyserNode.getFloatFrequencyData()")}}
- : Copies the current frequency data into a {{jsxref("Float32Array")}} array passed into it.
- {{domxref("AnalyserNode.getByteFrequencyData()")}}
- : Copies the current frequency data into a {{jsxref("Uint8Array")}} (unsigned byte array) passed into it.
- {{domxref("AnalyserNode.getFloatTimeDomainData()")}}
- : Copies the current waveform, or time-domain, data into a {{jsxref("Float32Array")}} array passed into it.
- {{domxref("AnalyserNode.getByteTimeDomainData()")}}
- : Copies the current waveform, or time-domain, data into a {{jsxref("Uint8Array")}} (unsigned byte array) passed into it.
> **Note:** For more information, see our [Visualizations with Web Audio API](/en-US/docs/Web/API/Web_Audio_API/Visualizations_with_Web_Audio_API) article.
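As a brief sketch, reading the float-precision frequency data looks like this, assuming an {{domxref("AnalyserNode")}} named `analyser` is already connected in a graph:
```js
const data = new Float32Array(analyser.frequencyBinCount);
analyser.getFloatFrequencyData(data); // fills `data` with values in dB
```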
## Spatializations
Audio spatialization allows us to model the position and behavior of an audio signal at a certain point in physical space, simulating the listener hearing that audio. In the Web Audio API, spatialization is handled by the {{domxref("PannerNode")}} and the {{domxref("AudioListener")}}.
The panner uses right-hand Cartesian coordinates to describe the audio source's _position_ as a vector and its _orientation_ as a 3D directional cone. The cone can be pretty large, for example, for omnidirectional sources.

Similarly, the Web Audio API describes the listener using right-hand Cartesian coordinates: their _position_ as one vector and their _orientation_ as two direction vectors, _up_ and _front_. These vectors define the direction of the top of the listener's head and the direction the listener's nose is pointing. The vectors are perpendicular to one another.

> **Note:** For more information, see our [Web audio spatialization basics](/en-US/docs/Web/API/Web_Audio_API/Web_audio_spatialization_basics) article.
## Fan-in and Fan-out
In audio terms, **fan-in** describes the process by which a {{domxref("ChannelMergerNode")}} takes a series of _mono_ input sources and outputs a single multi-channel signal:

**Fan-out** describes the opposite process, whereby a {{domxref("ChannelSplitterNode")}} takes a multi-channel input source and outputs multiple _mono_ output signals:

| 0 |
data/mdn-content/files/en-us/web/api/web_audio_api | data/mdn-content/files/en-us/web/api/web_audio_api/basic_concepts_behind_web_audio_api/webaudioapi_en.svg | <svg xmlns="http://www.w3.org/2000/svg" style="background-color:#fff" width="643" height="143"><path fill="#fff" stroke="#000" pointer-events="none" d="M1.5 1.5h640v140H1.5z"/><text x="41" y="12" text-anchor="middle" font-size="12" font-family="Helvetica" transform="translate(279.5 17.5)">Audio context</text><path d="M149.5 85.5h92.56" fill="none" stroke="#000" stroke-width="8" stroke-miterlimit="10" pointer-events="none"/><path d="M252.56 85.5l-14 7 3.5-7-3.5-7z" stroke="#000" stroke-width="8" stroke-miterlimit="10" pointer-events="none"/><path fill="#fff" stroke="#000" pointer-events="none" d="M29.5 55.5h120v60h-120z"/><text x="18" y="12" text-anchor="middle" font-size="12" font-family="Helvetica" transform="translate(74.5 79.5)">Inputs</text><path fill="#fff" stroke="#000" pointer-events="none" d="M261.5 55.5h120v60h-120z"/><text x="15" y="12" text-anchor="middle" font-size="12" font-family="Helvetica" transform="translate(302.5 79.5)">Effects</text><path fill="#fff" stroke="#000" pointer-events="none" d="M491.5 56.5h120v60h-120z"/><text x="16" y="12" text-anchor="middle" font-size="12" font-family="Helvetica" transform="translate(533.5 80.5)">Destination</text><path d="M381.5 85.5l90.56.82" fill="none" stroke="#000" stroke-width="8" stroke-miterlimit="10" pointer-events="none"/><path d="M482.56 86.42l-14.07 6.87 3.57-6.97-3.44-7.03z" stroke="#000" stroke-width="8" stroke-miterlimit="10" pointer-events="none"/></svg> | 0 |
data/mdn-content/files/en-us/web/api/web_audio_api | data/mdn-content/files/en-us/web/api/web_audio_api/basic_concepts_behind_web_audio_api/fanin.svg | <svg xmlns="http://www.w3.org/2000/svg" width="325" height="257.5" viewBox="129.5 102.5 325 257.5"><path fill="#709ED0" stroke="#000" d="M196.167 120.5h196.834v216H196.167z"/><text transform="translate(240.834 147.073)" fill="#231F20" font-family="'ArialMT'" font-size="11">ChannelMergerNode</text><path d="M410.009 235.233l16.84 5.834-16.84 5.5z"/><path fill="none" stroke="#000" stroke-miterlimit="10" d="M341.792 241.32h68.217"/><path d="M341.5 181.5H167.834m173.666 24H167.834m173.666 24H167.834m173.666 22H167.834m173.666 24H167.834m173.666 24H167.834m173.958.484V180.979" fill="none" stroke="#000"/><path pointer-events="none" fill="#FFF" stroke="#000" d="M284.5 173.5h24v16h-24z"/><text transform="translate(294.275 184.5)" font-family="'Helvetica'" font-size="8">L</text><path pointer-events="none" fill="#FFF" stroke="#000" d="M284.5 196.5h24v16h-24z"/><text transform="translate(293.611 207.5)" font-family="'Helvetica'" font-size="8">R</text><path pointer-events="none" fill="#FFF" stroke="#000" d="M284.5 220.5h24v16h-24z"/><text transform="translate(291.607 231.5)" font-family="'Helvetica'" font-size="8">SL</text><path pointer-events="none" fill="#FFF" stroke="#000" d="M284.5 243.5h24v16h-24z"/><text transform="translate(289.943 254.5)" font-family="'Helvetica'" font-size="8">SR</text><path pointer-events="none" fill="#FFF" stroke="#000" d="M284.5 267.5h24v16h-24z"/><text transform="translate(293.611 278.5)" font-family="'Helvetica'" font-size="8">C</text><path pointer-events="none" fill="#FFF" stroke="#000" d="M284.5 291.5h24v16h-24z"/><text transform="translate(289.164 302.5)" font-family="'Helvetica'" font-size="8">LFE</text></svg> | 0 |
data/mdn-content/files/en-us/web/api/web_audio_api | data/mdn-content/files/en-us/web/api/web_audio_api/basic_concepts_behind_web_audio_api/fanout.svg | <svg xmlns="http://www.w3.org/2000/svg" width="325" height="257.5" viewBox="129.5 102.5 325 257.5"><path fill="#709ED0" stroke="#000" d="M188.007 119.5h196.834v216H188.007z"/><text transform="translate(231.674 146.073)" fill="#231F20" font-family="'ArialMT'" font-size="11">ChannelSplitterNode</text><path d="M403.34 174.312l16.84 5.834-16.84 5.5zm0 23.521l16.84 5.834-16.84 5.5zm0 25l16.84 5.834-16.84 5.5zm0 21.739l16.84 5.834-16.84 5.5zm0 23.521l16.84 5.834-16.84 5.5zm0 25l16.84 5.834-16.84 5.5z"/><path fill="none" stroke="#000" stroke-miterlimit="10" d="M161.457 240.32h68.217"/><path d="M403.34 180.5H229.674m173.666 24H229.674m173.666 24H229.674m173.666 22H229.674m173.666 24H229.674m173.666 24H229.674m0 .484V179.979" fill="none" stroke="#000"/><path pointer-events="none" fill="#FFF" stroke="#000" d="M276.34 172.5h24v16h-24z"/><text transform="translate(286.115 183.5)" font-family="'Helvetica'" font-size="8">L</text><path pointer-events="none" fill="#FFF" stroke="#000" d="M276.34 195.5h24v16h-24z"/><text transform="translate(285.45 206.5)" font-family="'Helvetica'" font-size="8">R</text><path pointer-events="none" fill="#FFF" stroke="#000" d="M276.34 219.5h24v16h-24z"/><text transform="translate(283.447 230.5)" font-family="'Helvetica'" font-size="8">SL</text><path pointer-events="none" fill="#FFF" stroke="#000" d="M276.34 242.5h24v16h-24z"/><text transform="translate(281.783 253.5)" font-family="'Helvetica'" font-size="8">SR</text><path pointer-events="none" fill="#FFF" stroke="#000" d="M276.34 266.5h24v16h-24z"/><text transform="translate(285.45 277.5)" font-family="'Helvetica'" font-size="8">C</text><path pointer-events="none" fill="#FFF" stroke="#000" d="M276.34 290.5h24v16h-24z"/><text transform="translate(281.004 301.5)" font-family="'Helvetica'" font-size="8">LFE</text></svg> | 0 |