---
title: "FontFaceSet: keys() method"
short-title: keys()
slug: Web/API/FontFaceSet/keys
page-type: web-api-instance-method
browser-compat: api.FontFaceSet.keys
---
{{APIRef("CSS Font Loading API")}}
The **`keys()`** method of the {{domxref("FontFaceSet")}} interface is an alias for {{domxref("FontFaceSet.values")}}.
## Syntax
```js-nolint
keys()
```
### Parameters
None.
### Return value
A new iterator object containing the values for each element in the given `FontFaceSet`, in insertion order.
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
---
title: "FontFaceSet: status property"
short-title: status
slug: Web/API/FontFaceSet/status
page-type: web-api-instance-property
browser-compat: api.FontFaceSet.status
---
{{APIRef("CSS Font Loading API")}}
The **`status`** read-only property of the {{domxref("FontFaceSet")}} interface returns the loading state of the fonts in the set.
## Value
One of:
- `"loading"`: at least one font in the set is still loading.
- `"loaded"`: all font loading operations in the set have completed.
## Examples
In the following example the `status` of the `FontFaceSet` is printed to the console.
```js
console.log(document.fonts.status);
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
---
title: "FontFaceSet: loadingdone event"
short-title: loadingdone
slug: Web/API/FontFaceSet/loadingdone_event
page-type: web-api-event
browser-compat: api.FontFaceSet.loadingdone_event
---
{{APIRef("CSS Font Loading API")}}
The `loadingdone` event fires when the document has loaded all fonts.
## Syntax
Use the event name in methods like {{domxref("EventTarget.addEventListener", "addEventListener()")}}, or set an event handler property.
```js
addEventListener("loadingdone", (event) => {});
onloadingdone = (event) => {};
```
## Example
In the following example, when the font `Ephesis` has finished loading, "Font loading complete" is printed to the console.
```js
document.fonts.onloadingdone = () => {
console.log("Font loading complete");
};
(async () => {
await document.fonts.load("16px Ephesis");
})();
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
---
title: "FontFaceSet: clear() method"
short-title: clear()
slug: Web/API/FontFaceSet/clear
page-type: web-api-instance-method
browser-compat: api.FontFaceSet.clear
---
{{APIRef("CSS Font Loading API")}}
The **`clear()`** method of the {{domxref("FontFaceSet")}} interface removes all fonts added via this interface. Fonts added with the {{cssxref("@font-face")}} rule are not removed.
## Syntax
```js-nolint
clear()
```
### Parameters
None.
### Return value
None ({{jsxref("undefined")}}).
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
---
title: "FontFaceSet: size property"
short-title: size
slug: Web/API/FontFaceSet/size
page-type: web-api-instance-property
browser-compat: api.FontFaceSet.size
---
{{APIRef("CSS Font Loading API")}}
The **`size`** property of the {{domxref("FontFaceSet")}} interface returns the number of items in the `FontFaceSet`.
## Value
An integer indicating the number of items in the `FontFaceSet`.
## Examples
In the following example the `size` of the `FontFaceSet` is printed to the console.
```js
console.log(document.fonts.size);
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
---
title: "FontFaceSet: values() method"
short-title: values()
slug: Web/API/FontFaceSet/values
page-type: web-api-instance-method
browser-compat: api.FontFaceSet.values
---
{{APIRef("CSS Font Loading API")}}
The **`values()`** method of the {{domxref("FontFaceSet")}} interface returns a new iterator object that yields the values for each element in the `FontFaceSet` object in insertion order.
## Syntax
```js-nolint
values()
```
### Parameters
None.
### Return value
A new iterator object containing the values for each element in the given `FontFaceSet`, in insertion order.
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
---
title: "FontFaceSet: delete() method"
short-title: delete()
slug: Web/API/FontFaceSet/delete
page-type: web-api-instance-method
browser-compat: api.FontFaceSet.delete
---
{{APIRef("CSS Font Loading API")}}
The **`delete()`** method of the {{domxref("FontFaceSet")}} interface removes a font from the set.
Font faces that were added to the set using the CSS {{cssxref("@font-face")}} rule remain connected to the corresponding CSS, and cannot be deleted.
## Syntax
```js-nolint
delete(font)
```
### Parameters
- `font`
- : A {{domxref("FontFace")}} to be removed from the set.
### Return value
A boolean value which is `true` if the deletion was successful, and `false` otherwise.
## Examples
In the following example a new {{domxref("FontFace")}} object is created, added to the {{domxref("FontFaceSet")}}, and then deleted from it.
```js
const font = new FontFace("MyFont", "url(myFont.woff2)");
document.fonts.add(font);
document.fonts.delete(font); // true: the font was found and removed
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
---
title: "FontFaceSet: load() method"
short-title: load()
slug: Web/API/FontFaceSet/load
page-type: web-api-instance-method
browser-compat: api.FontFaceSet.load
---
{{APIRef("CSS Font Loading API")}}
The **`load()`** method of the {{domxref("FontFaceSet")}} interface forces all the fonts matching the given font specification to be loaded.
## Syntax
```js-nolint
load(font)
load(font, text)
```
### Parameters
- `font`
- : A font specification using the syntax for the CSS [`font`](/en-US/docs/Web/CSS/font) property, for example `"italic bold 16px Roboto"`.
- `text`
- : A string limiting the font faces to those whose Unicode range contains at least one of the characters in `text`. This [does not check for individual glyph coverage](https://lists.w3.org/Archives/Public/www-style/2015Aug/0330.html).
### Return value
A {{jsxref("Promise")}} fulfilled with an {{jsxref("Array")}} of loaded {{domxref("FontFace")}} objects. The
promise is fulfilled when all the fonts are loaded; it is rejected if one of the fonts
failed to load.
## Examples
The following example returns a promise that will be fulfilled or rejected according to the success of loading "MyFont". The code in `then()` can assume the availability of that font.
```js
document.fonts.load("12px MyFont", "ß").then(/* ... */);
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
---
title: "FontFaceSet: loading event"
short-title: loading
slug: Web/API/FontFaceSet/loading_event
page-type: web-api-event
browser-compat: api.FontFaceSet.loading_event
---
{{APIRef("CSS Font Loading API")}}
The `loading` event fires when the document begins loading fonts.
## Syntax
Use the event name in methods like {{domxref("EventTarget.addEventListener", "addEventListener()")}}, or set an event handler property.
```js
addEventListener("loading", (event) => {});
onloading = (event) => {};
```
## Example
In the following example, when the font `Ephesis` starts to load, "Font is loading" is printed to the console.
```js
document.fonts.onloading = () => {
console.log("Font is loading");
};
(async () => {
await document.fonts.load("16px Ephesis");
})();
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
---
title: "FontFaceSet: ready property"
short-title: ready
slug: Web/API/FontFaceSet/ready
page-type: web-api-instance-property
browser-compat: api.FontFaceSet.ready
---
{{APIRef("CSS Font Loading API")}}
The **`ready`** read-only property of the {{domxref("FontFaceSet")}} interface returns a {{jsxref("Promise")}} that resolves to the given {{domxref("FontFaceSet")}}.
The promise will only resolve once the document has completed loading fonts, layout operations are completed, and no further font loads are needed.
## Value
A {{jsxref("Promise")}} that resolves to the given {{domxref("FontFaceSet")}}.
## Examples
In the following example the value of `ready` is printed to the console once the promise has resolved.
```js
async function isReady() {
let ready = await document.fonts.ready;
console.log(ready);
}
isReady();
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
---
title: "FontFaceSet: check() method"
short-title: check()
slug: Web/API/FontFaceSet/check
page-type: web-api-instance-method
browser-compat: api.FontFaceSet.check
---
{{APIRef("CSS Font Loading API")}}
The **`check()`** method of the {{domxref("FontFaceSet")}} interface returns `true` if you can render some text using the given font specification without attempting to use any fonts in this `FontFaceSet` that are not yet fully loaded. This means you can use the font specification without causing a [font swap](/en-US/docs/Web/CSS/@font-face/font-display#the_font_display_timeline).
## Syntax
```js-nolint
check(font)
check(font, text)
```
### Parameters
- `font`
- : A font specification using the syntax for the CSS [`font`](/en-US/docs/Web/CSS/font) property, for example `"italic bold 16px Roboto"`.
- `text`
- : A string limiting the font faces to those whose Unicode range contains at least one of the characters in `text`. This [does not check for individual glyph coverage](https://lists.w3.org/Archives/Public/www-style/2015Aug/0330.html).
### Return value
A {{jsxref("Boolean")}} value that is `true` if rendering text with the given font specification will not attempt to use any fonts in this `FontFaceSet` that are not yet fully loaded.
This means that all fonts in this `FontFaceSet` that are matched by the given font specification have a [`status`](/en-US/docs/Web/API/FontFace/status) property set to `"loaded"`.
Otherwise, this function returns `false`.
## Examples
In the following example, we create a new `FontFace` and add it to the `FontFaceSet`:
```js
const font = new FontFace(
"molot",
"url(https://interactive-examples.mdn.mozilla.net/media/fonts/molot.woff2)",
{
style: "normal",
weight: "400",
stretch: "condensed",
},
);
document.fonts.add(font);
```
### Unloaded fonts
The font is not yet loaded, so `check("12px molot")` returns `false`, indicating that if we try to use the given font specification, we will trigger a font load:
```js
console.log(document.fonts.check("12px molot"));
// false: the matching font is in the set, but is not yet loaded
```
### System fonts
If we specify only a system font in the argument to `check()`, it returns `true`, because we can use the system font without loading any fonts from the set:
```js
console.log(document.fonts.check("12px Courier"));
// true: the matching font is a system font
```
### Nonexistent fonts
If we specify a font that is not in the `FontFaceSet` and is not a system font, `check()` returns `true`, because in this situation we will not rely on any fonts from the set:
```js
console.log(document.fonts.check("12px i-dont-exist"));
// true: the matching font is a nonexistent font
```
> **Note:** In this situation Chrome incorrectly returns `false`. This can make [fingerprinting](/en-US/docs/Glossary/Fingerprinting) easier, because an attacker can easily test which system fonts the browser has.
### System and unloaded fonts
If we specify both a system font and a font in the set that is not yet loaded, then `check()` returns `false`:
```js
console.log(document.fonts.check("12px molot, Courier"));
// false: `molot` is in the set but not yet loaded
```
### Fonts that are loading
If we specify a font from the set that is still loading, `check()` returns `false`:
```js
function check() {
font.load();
console.log(document.fonts.check("12px molot"));
// false: font is still loading
console.log(font.status);
// "loading"
}
check();
```
### Fonts that have loaded
If we specify a font from the set that has loaded, `check()` returns `true`:
```js
async function check() {
await font.load();
console.log(document.fonts.check("12px molot"));
// true: font has finished loading
console.log(font.status);
// "loaded"
}
check();
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
---
title: "FontFaceSet: forEach() method"
short-title: forEach()
slug: Web/API/FontFaceSet/forEach
page-type: web-api-instance-method
browser-compat: api.FontFaceSet.forEach
---
{{APIRef("CSS Font Loading API")}}
The **`forEach()`** method of the {{domxref("FontFaceSet")}} interface executes a provided function for each value in the `FontFaceSet` object.
## Syntax
```js-nolint
forEach(callbackFn)
forEach(callbackFn, thisArg)
```
### Parameters
- `callbackFn`
- : Function to execute for each element, taking three arguments:
- `value`, `key`
- : The current element being processed in the `FontFaceSet`. As there are no keys in a `FontFaceSet`, the value is passed for both arguments.
- `set`
- : The `FontFaceSet` which `forEach()` was called on.
- `thisArg`
- : Value to use as [`this`](/en-US/docs/Web/JavaScript/Reference/Operators/this) when executing `callbackFn`.
### Return value
None ({{jsxref("undefined")}}).
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
---
title: "FontFaceSet: loadingerror event"
short-title: loadingerror
slug: Web/API/FontFaceSet/loadingerror_event
page-type: web-api-event
browser-compat: api.FontFaceSet.loadingerror_event
---
{{APIRef("CSS Font Loading API")}}
The `loadingerror` event fires when fonts have finished loading, but some or all fonts have failed to load.
## Syntax
Use the event name in methods like {{domxref("EventTarget.addEventListener", "addEventListener()")}}, or set an event handler property.
```js
addEventListener("loadingerror", (event) => {});
onloadingerror = (event) => {};
```
## Example
In the following example, if the font `Ephesis` fails to load, "Font loading error" is printed to the console.
```js
document.fonts.onloadingerror = () => {
console.log("Font loading error");
};
(async () => {
await document.fonts.load("16px Ephesis");
})();
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
---
title: "FontFaceSet: entries() method"
short-title: entries()
slug: Web/API/FontFaceSet/entries
page-type: web-api-instance-method
browser-compat: api.FontFaceSet.entries
---
{{APIRef("CSS Font Loading API")}}
The **`entries()`** method of the {{domxref("FontFaceSet")}} interface returns a new {{jsxref("Iterator")}} object, containing an array of `[value, value]` for each element in the `FontFaceSet`.
## Syntax
```js-nolint
entries()
```
### Parameters
None.
### Return value
A new iterator object that contains an array of `[value, value]` for each element in the given `FontFaceSet`, in insertion order.
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
---
title: HIDConnectionEvent
slug: Web/API/HIDConnectionEvent
page-type: web-api-interface
status:
- experimental
browser-compat: api.HIDConnectionEvent
---
{{securecontext_header}}{{APIRef("WebHID API")}}{{SeeCompatTable}}
The **`HIDConnectionEvent`** interface of the {{domxref('WebHID API')}} represents HID connection events, and is the event type passed to {{domxref("HID/connect_event", "connect")}} and {{domxref("HID/disconnect_event", "disconnect")}} event handlers when a device is connected or disconnected.
{{InheritanceDiagram}}
## Constructor
- {{domxref("HIDConnectionEvent.HIDConnectionEvent", "HIDConnectionEvent()")}} {{Experimental_Inline}}
- : Returns a new `HIDConnectionEvent` object. Typically this constructor is not used, as these events are created by the browser when a device is connected or disconnected.
## Instance properties
_This interface also inherits properties from {{domxref("Event")}}._
- {{domxref("HIDConnectionEvent.device")}} {{ReadOnlyInline}} {{Experimental_Inline}}
- : Returns the {{domxref("HIDDevice")}} instance representing the device associated with the connection event.
## Examples
The following example registers event listeners for `connect` and `disconnect` events, then prints the {{domxref("HIDDevice.productName")}} to the console.
```js
navigator.hid.addEventListener("connect", ({ device }) => {
console.log(`HID connected: ${device.productName}`);
});
navigator.hid.addEventListener("disconnect", ({ device }) => {
console.log(`HID disconnected: ${device.productName}`);
});
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
---
title: "HIDConnectionEvent: HIDConnectionEvent() constructor"
short-title: HIDConnectionEvent()
slug: Web/API/HIDConnectionEvent/HIDConnectionEvent
page-type: web-api-constructor
status:
- experimental
browser-compat: api.HIDConnectionEvent.HIDConnectionEvent
---
{{securecontext_header}}{{APIRef("WebHID API")}}{{SeeCompatTable}}
The **`HIDConnectionEvent()`** constructor creates a new {{domxref("HIDConnectionEvent")}} object. Typically this constructor is not used, as these events are created by the browser when a device is connected or disconnected.
## Syntax
```js-nolint
new HIDConnectionEvent(type, options)
```
### Parameters
- `type`
- : A string with the name of the event.
It is case-sensitive and browsers set it to `connect` or `disconnect`.
- `options`
- : An object that, _in addition of the properties defined in {{domxref("Event/Event", "Event()")}}_, can have the following properties:
- `device`
- : The {{domxref("HIDDevice")}} instance representing the device that was connected or disconnected.
### Return value
A new {{domxref("HIDConnectionEvent")}} object.
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
---
title: "HIDConnectionEvent: device property"
short-title: device
slug: Web/API/HIDConnectionEvent/device
page-type: web-api-instance-property
status:
- experimental
browser-compat: api.HIDConnectionEvent.device
---
{{securecontext_header}}{{APIRef("WebHID API")}}{{SeeCompatTable}}
The **`device`** read-only property of the {{domxref("HIDConnectionEvent")}} interface returns the {{domxref("HIDDevice")}} associated with this connection event.
## Value
A {{domxref("HIDDevice")}}.
## Examples
The following example registers event listeners for `connect` and `disconnect` events, then prints the {{domxref("HIDDevice.productName")}} to the console.
```js
navigator.hid.addEventListener("connect", ({ device }) => {
console.log(`HID connected: ${device.productName}`);
});
navigator.hid.addEventListener("disconnect", ({ device }) => {
console.log(`HID disconnected: ${device.productName}`);
});
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
---
title: SpeechSynthesis
slug: Web/API/SpeechSynthesis
page-type: web-api-interface
browser-compat: api.SpeechSynthesis
---
{{APIRef("Web Speech API")}}
The **`SpeechSynthesis`** interface of the [Web Speech API](/en-US/docs/Web/API/Web_Speech_API) is the controller interface for the speech service; this can be used to retrieve information about the synthesis voices available on the device, start and pause speech, and other commands besides.
{{InheritanceDiagram}}
## Instance properties
_`SpeechSynthesis` also inherits properties from its parent interface, {{domxref("EventTarget")}}._
- {{domxref("SpeechSynthesis.paused")}} {{ReadOnlyInline}}
- : A boolean value that returns `true` if the `SpeechSynthesis` object is in a paused state.
- {{domxref("SpeechSynthesis.pending")}} {{ReadOnlyInline}}
- : A boolean value that returns `true` if the utterance queue contains as-yet-unspoken utterances.
- {{domxref("SpeechSynthesis.speaking")}} {{ReadOnlyInline}}
- : A boolean value that returns `true` if an utterance is currently in the process of being spoken — even if `SpeechSynthesis` is in a paused state.
## Instance methods
_`SpeechSynthesis` also inherits methods from its parent interface, {{domxref("EventTarget")}}._
- {{domxref("SpeechSynthesis.cancel()")}}
- : Removes all utterances from the utterance queue.
- {{domxref("SpeechSynthesis.getVoices()")}}
- : Returns a list of {{domxref("SpeechSynthesisVoice")}} objects representing all the available voices on the current device.
- {{domxref("SpeechSynthesis.pause()")}}
- : Puts the `SpeechSynthesis` object into a paused state.
- {{domxref("SpeechSynthesis.resume()")}}
- : Puts the `SpeechSynthesis` object into a non-paused state: resumes it if it was already paused.
- {{domxref("SpeechSynthesis.speak()")}}
- : Adds an {{domxref("SpeechSynthesisUtterance", "utterance")}} to the utterance queue; it will be spoken when any other utterances queued before it have been spoken.
## Events
Listen to this event using [`addEventListener()`](/en-US/docs/Web/API/EventTarget/addEventListener) or by assigning an event listener to the `oneventname` property of this interface.
- [`voiceschanged`](/en-US/docs/Web/API/SpeechSynthesis/voiceschanged_event)
- : Fired when the list of {{domxref("SpeechSynthesisVoice")}} objects that would be returned by the {{domxref("SpeechSynthesis.getVoices()")}} method has changed.
Also available via the `onvoiceschanged` property.
## Examples
First, a simple example:
```js
let utterance = new SpeechSynthesisUtterance("Hello world!");
speechSynthesis.speak(utterance);
```
Now we'll look at a more fully-fledged example. In our [Speech synthesizer demo](https://github.com/mdn/dom-examples/tree/main/web-speech-api/speak-easy-synthesis), we first grab a reference to the SpeechSynthesis controller using `window.speechSynthesis`. After defining some necessary variables, we retrieve a list of the voices available using {{domxref("SpeechSynthesis.getVoices()")}} and populate a select menu with them so the user can choose what voice they want.
Inside the `inputForm.onsubmit` handler, we stop the form submitting with [preventDefault()](/en-US/docs/Web/API/Event/preventDefault), create a new {{domxref("SpeechSynthesisUtterance")}} instance containing the text from the text {{htmlelement("input")}}, set the utterance's voice to the voice selected in the {{htmlelement("select")}} element, and start the utterance speaking via the {{domxref("SpeechSynthesis.speak()")}} method.
```js
const synth = window.speechSynthesis;
const inputForm = document.querySelector("form");
const inputTxt = document.querySelector(".txt");
const voiceSelect = document.querySelector("select");
const pitch = document.querySelector("#pitch");
const pitchValue = document.querySelector(".pitch-value");
const rate = document.querySelector("#rate");
const rateValue = document.querySelector(".rate-value");
let voices = [];
function populateVoiceList() {
voices = synth.getVoices();
for (let i = 0; i < voices.length; i++) {
const option = document.createElement("option");
option.textContent = `${voices[i].name} (${voices[i].lang})`;
if (voices[i].default) {
option.textContent += " — DEFAULT";
}
option.setAttribute("data-lang", voices[i].lang);
option.setAttribute("data-name", voices[i].name);
voiceSelect.appendChild(option);
}
}
populateVoiceList();
if (speechSynthesis.onvoiceschanged !== undefined) {
speechSynthesis.onvoiceschanged = populateVoiceList;
}
inputForm.onsubmit = (event) => {
event.preventDefault();
const utterThis = new SpeechSynthesisUtterance(inputTxt.value);
const selectedOption =
voiceSelect.selectedOptions[0].getAttribute("data-name");
for (let i = 0; i < voices.length; i++) {
if (voices[i].name === selectedOption) {
utterThis.voice = voices[i];
}
}
utterThis.pitch = pitch.value;
utterThis.rate = rate.value;
synth.speak(utterThis);
inputTxt.blur();
};
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Web Speech API](/en-US/docs/Web/API/Web_Speech_API)
---
title: "SpeechSynthesis: resume() method"
short-title: resume()
slug: Web/API/SpeechSynthesis/resume
page-type: web-api-instance-method
browser-compat: api.SpeechSynthesis.resume
---
{{APIRef("Web Speech API")}}
The **`resume()`** method of the {{domxref("SpeechSynthesis")}}
interface puts the `SpeechSynthesis` object into a non-paused state:
resumes it if it was already paused.
## Syntax
```js-nolint
resume()
```
### Parameters
None.
### Return value
None ({{jsxref("undefined")}}).
## Examples
```js
let synth = window.speechSynthesis;
let utterance1 = new SpeechSynthesisUtterance(
"How about we say this now? This is quite a long sentence to say.",
);
let utterance2 = new SpeechSynthesisUtterance(
"We should say another sentence too, just to be on the safe side.",
);
synth.speak(utterance1);
synth.speak(utterance2);
synth.pause(); // pauses utterances being spoken
synth.resume(); // resumes speaking
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Web Speech API](/en-US/docs/Web/API/Web_Speech_API)
---
title: "SpeechSynthesis: speak() method"
short-title: speak()
slug: Web/API/SpeechSynthesis/speak
page-type: web-api-instance-method
browser-compat: api.SpeechSynthesis.speak
---
{{APIRef("Web Speech API")}}
The **`speak()`** method of the {{domxref("SpeechSynthesis")}}
interface adds an {{domxref("SpeechSynthesisUtterance", "utterance")}} to the utterance
queue; it will be spoken when any other utterances queued before it have been spoken.
## Syntax
```js-nolint
speak(utterance)
```
### Parameters
- `utterance`
- : A {{domxref("SpeechSynthesisUtterance")}} object.
### Return value
None ({{jsxref("undefined")}}).
## Examples
This snippet is excerpted from our [Speech synthesizer demo](https://github.com/mdn/dom-examples/blob/main/web-speech-api/speak-easy-synthesis/script.js) ([see it live](https://mdn.github.io/dom-examples/web-speech-api/speak-easy-synthesis/)). When a form containing the text we want to speak is submitted,
we (amongst other things) create a new utterance containing this text, then speak it by
passing it into `speak()` as a parameter.
```js
const synth = window.speechSynthesis;
// ...
inputForm.onsubmit = (event) => {
event.preventDefault();
const utterThis = new SpeechSynthesisUtterance(inputTxt.value);
const selectedOption =
voiceSelect.selectedOptions[0].getAttribute("data-name");
for (let i = 0; i < voices.length; i++) {
if (voices[i].name === selectedOption) {
utterThis.voice = voices[i];
}
}
synth.speak(utterThis);
inputTxt.blur();
};
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Web Speech API](/en-US/docs/Web/API/Web_Speech_API)
---
title: "SpeechSynthesis: voiceschanged event"
short-title: voiceschanged
slug: Web/API/SpeechSynthesis/voiceschanged_event
page-type: web-api-event
browser-compat: api.SpeechSynthesis.voiceschanged_event
---
{{APIRef("Web Speech API")}}
The **`voiceschanged`** event of the [Web Speech API](/en-US/docs/Web/API/Web_Speech_API) is fired when the list of {{domxref("SpeechSynthesisVoice")}} objects that would be returned by the {{domxref("SpeechSynthesis.getVoices()")}} method has changed.
## Syntax
Use the event name in methods like {{domxref("EventTarget.addEventListener", "addEventListener()")}}, or set an event handler property.
```js
addEventListener("voiceschanged", (event) => {});
onvoiceschanged = (event) => {};
```
## Event type
A generic {{DOMxRef("Event")}} with no added properties.
## Examples
This could be used to repopulate a list of voices that the user can choose between when the event fires. You can use the `voiceschanged` event in an [`addEventListener`](/en-US/docs/Web/API/EventTarget/addEventListener) method:
```js
const synth = window.speechSynthesis;
synth.addEventListener("voiceschanged", () => {
const voices = synth.getVoices();
for (let i = 0; i < voices.length; i++) {
const option = document.createElement("option");
option.textContent = `${voices[i].name} (${voices[i].lang})`;
option.setAttribute("data-lang", voices[i].lang);
option.setAttribute("data-name", voices[i].name);
voiceSelect.appendChild(option);
}
});
```
Or use the `onvoiceschanged` event handler property:
```js
const synth = window.speechSynthesis;
synth.onvoiceschanged = () => {
const voices = synth.getVoices();
for (let i = 0; i < voices.length; i++) {
const option = document.createElement("option");
option.textContent = `${voices[i].name} (${voices[i].lang})`;
option.setAttribute("data-lang", voices[i].lang);
option.setAttribute("data-name", voices[i].name);
voiceSelect.appendChild(option);
}
};
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Web Speech API](/en-US/docs/Web/API/Web_Speech_API)
---
title: "SpeechSynthesis: getVoices() method"
short-title: getVoices()
slug: Web/API/SpeechSynthesis/getVoices
page-type: web-api-instance-method
browser-compat: api.SpeechSynthesis.getVoices
---
{{APIRef("Web Speech API")}}
The **`getVoices()`** method of the
{{domxref("SpeechSynthesis")}} interface returns a list of
{{domxref("SpeechSynthesisVoice")}} objects representing all the available voices on the
current device.
## Syntax
```js-nolint
getVoices()
```
### Parameters
None.
### Return value
A list (array) of {{domxref("SpeechSynthesisVoice")}} objects.
## Examples
### JavaScript
```js
function populateVoiceList() {
if (typeof speechSynthesis === "undefined") {
return;
}
const voices = speechSynthesis.getVoices();
for (let i = 0; i < voices.length; i++) {
const option = document.createElement("option");
option.textContent = `${voices[i].name} (${voices[i].lang})`;
if (voices[i].default) {
option.textContent += " — DEFAULT";
}
option.setAttribute("data-lang", voices[i].lang);
option.setAttribute("data-name", voices[i].name);
document.getElementById("voiceSelect").appendChild(option);
}
}
populateVoiceList();
if (
typeof speechSynthesis !== "undefined" &&
speechSynthesis.onvoiceschanged !== undefined
) {
speechSynthesis.onvoiceschanged = populateVoiceList;
}
```
### HTML
```html
<select id="voiceSelect"></select>
```
{{EmbedLiveSample("Examples", 400, 25)}}
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Web Speech API](/en-US/docs/Web/API/Web_Speech_API)
---
title: "SpeechSynthesis: pause() method"
short-title: pause()
slug: Web/API/SpeechSynthesis/pause
page-type: web-api-instance-method
browser-compat: api.SpeechSynthesis.pause
---
{{APIRef("Web Speech API")}}
The **`pause()`** method of the {{domxref("SpeechSynthesis")}}
interface puts the `SpeechSynthesis` object into a paused state.
## Syntax
```js-nolint
pause()
```
### Parameters
None.
### Return value
None ({{jsxref("undefined")}}).
## Examples
```js
const synth = window.speechSynthesis;
const utterance1 = new SpeechSynthesisUtterance(
"How about we say this now? This is quite a long sentence to say.",
);
const utterance2 = new SpeechSynthesisUtterance(
"We should say another sentence too, just to be on the safe side.",
);
synth.speak(utterance1);
synth.speak(utterance2);
synth.pause(); // pauses utterances being spoken
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Web Speech API](/en-US/docs/Web/API/Web_Speech_API)
---
title: "SpeechSynthesis: speaking property"
short-title: speaking
slug: Web/API/SpeechSynthesis/speaking
page-type: web-api-instance-property
browser-compat: api.SpeechSynthesis.speaking
---
{{APIRef("Web Speech API")}}
The **`speaking`** read-only property of the
{{domxref("SpeechSynthesis")}} interface is a boolean value that returns
`true` if an utterance is currently in the process of being spoken — even
if `SpeechSynthesis` is in a
{{domxref("SpeechSynthesis/pause()","paused")}} state.
## Value
A boolean value.
## Examples
```js
const synth = window.speechSynthesis;
const utterance1 = new SpeechSynthesisUtterance(
"How about we say this now? This is quite a long sentence to say.",
);
const utterance2 = new SpeechSynthesisUtterance(
"We should say another sentence too, just to be on the safe side.",
);
synth.speak(utterance1);
synth.speak(utterance2);
const amISpeaking = synth.speaking; // will return true if utterance 1 or utterance 2 are currently being spoken
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Web Speech API](/en-US/docs/Web/API/Web_Speech_API)
---
title: "SpeechSynthesis: cancel() method"
short-title: cancel()
slug: Web/API/SpeechSynthesis/cancel
page-type: web-api-instance-method
browser-compat: api.SpeechSynthesis.cancel
---
{{APIRef("Web Speech API")}}
The **`cancel()`** method of the {{domxref("SpeechSynthesis")}}
interface removes all utterances from the utterance queue.
If an utterance is currently being spoken, speaking will stop immediately.
## Syntax
```js-nolint
cancel()
```
### Parameters
None.
### Return value
None ({{jsxref("undefined")}}).
## Examples
```js
const synth = window.speechSynthesis;
const utterance1 = new SpeechSynthesisUtterance(
"How about we say this now? This is quite a long sentence to say.",
);
const utterance2 = new SpeechSynthesisUtterance(
"We should say another sentence too, just to be on the safe side.",
);
synth.speak(utterance1);
synth.speak(utterance2);
synth.cancel(); // utterance1 stops being spoken immediately, and both are removed from the queue
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Web Speech API](/en-US/docs/Web/API/Web_Speech_API)
---
title: "SpeechSynthesis: pending property"
short-title: pending
slug: Web/API/SpeechSynthesis/pending
page-type: web-api-instance-property
browser-compat: api.SpeechSynthesis.pending
---
{{APIRef("Web Speech API")}}
The **`pending`** read-only property of the
{{domxref("SpeechSynthesis")}} interface is a boolean value that returns
`true` if the utterance queue contains as-yet-unspoken utterances.
## Value
A boolean value.
## Examples
```js
const synth = window.speechSynthesis;
const utterance1 = new SpeechSynthesisUtterance(
"How about we say this now? This is quite a long sentence to say.",
);
const utterance2 = new SpeechSynthesisUtterance(
"We should say another sentence too, just to be on the safe side.",
);
synth.speak(utterance1);
synth.speak(utterance2);
const amIPending = synth.pending; // will return true if utterance 1 is still being spoken and utterance 2 is in the queue
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Web Speech API](/en-US/docs/Web/API/Web_Speech_API)
---
title: "SpeechSynthesis: paused property"
short-title: paused
slug: Web/API/SpeechSynthesis/paused
page-type: web-api-instance-property
browser-compat: api.SpeechSynthesis.paused
---
{{APIRef("Web Speech API")}}
The **`paused`** read-only property of the
{{domxref("SpeechSynthesis")}} interface is a boolean value that returns
`true` if the `SpeechSynthesis` object is in a paused state, or `false` if not.
It can be set to {{domxref("SpeechSynthesis.pause()", "paused")}} even if nothing is
currently being spoken through it. If
{{domxref("SpeechSynthesisUtterance","utterances")}} are then added to the utterance
queue, they will not be spoken until the `SpeechSynthesis` object is
unpaused, using {{domxref("SpeechSynthesis.resume()")}}.
## Value
A boolean value.
## Examples
```js
const synth = window.speechSynthesis;
synth.pause();
const amIPaused = synth.paused; // will return true
```
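As described above, utterances queued while the object is paused are held back until {{domxref("SpeechSynthesis.resume()")}} is called. A minimal sketch of that flow (browser-only; assumes the user agent supports speech synthesis):

```js
const synth = window.speechSynthesis;

synth.pause(); // paused is now true, even though nothing is being spoken

// This utterance is queued, but will not be spoken while paused.
synth.speak(new SpeechSynthesisUtterance("Queued while paused."));

console.log(synth.paused); // true

synth.resume(); // unpauses; the queued utterance is now spoken
console.log(synth.paused); // false
```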
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Web Speech API](/en-US/docs/Web/API/Web_Speech_API)
---
title: XRProjectionLayer
slug: Web/API/XRProjectionLayer
page-type: web-api-interface
status:
- experimental
browser-compat: api.XRProjectionLayer
---
{{securecontext_header}}{{APIRef("WebXR Device API")}}{{SeeCompatTable}}
The **`XRProjectionLayer`** interface of the [WebXR Device API](/en-US/docs/Web/API/WebXR_Device_API) is a layer that fills the entire view of the observer and is refreshed close to the device's native frame rate.
`XRProjectionLayer` is supported by all {{domxref("XRSession")}} objects (no `layers` feature descriptor is needed).
To create a new `XRProjectionLayer`, call {{domxref("XRWebGLBinding.createProjectionLayer()")}}.
To present layers to the XR device, add them to the `layers` render state using {{domxref("XRSession.updateRenderState()")}}.
`XRProjectionLayer` objects don't have an associated {{domxref("XRSpace")}}, because they render to the full frame.
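For example (a sketch, assuming an active immersive `session` and a WebGL 2 context `gl` have already been obtained elsewhere):

```js
// Create a projection layer and present it via the session's render state.
const binding = new XRWebGLBinding(session, gl);
const layer = binding.createProjectionLayer();
session.updateRenderState({ layers: [layer] });
```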
{{InheritanceDiagram}}
## Instance properties
_Inherits properties from its parent, {{domxref("XRCompositionLayer")}} and {{domxref("EventTarget")}}._
- {{domxref("XRProjectionLayer.fixedFoveation")}} {{Experimental_Inline}}
- : A number indicating the amount of foveation used by the XR compositor for the layer. Fixed Foveated Rendering (FFR) renders the edges of the eye textures at a lower resolution than the center and reduces the GPU load.
- {{domxref("XRProjectionLayer.ignoreDepthValues")}} {{ReadOnlyInline}} {{Experimental_Inline}}
- : A boolean indicating that the XR compositor is not making use of depth buffer values when rendering the layer.
- {{domxref("XRProjectionLayer.textureArrayLength")}} {{ReadOnlyInline}} {{Experimental_Inline}}
- : The layer's layer count for array textures when using `texture-array` as the `textureType`.
- {{domxref("XRProjectionLayer.textureHeight")}} {{ReadOnlyInline}} {{Experimental_Inline}}
- : The height in pixels of the color textures of this layer.
- {{domxref("XRProjectionLayer.textureWidth")}} {{ReadOnlyInline}} {{Experimental_Inline}}
- : The width in pixels of the color textures of this layer.
## Instance methods
_Inherits methods from its parents, {{domxref("XRCompositionLayer")}} and {{domxref("EventTarget")}}_.
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- {{domxref("XRLayer")}}
- {{domxref("EventTarget")}}
- {{domxref("XRCompositionLayer")}}
- {{domxref("XREquirectLayer")}}
- {{domxref("XRCubeLayer")}}
- {{domxref("XRCylinderLayer")}}
- {{domxref("XRQuadLayer")}}
---
title: "XRProjectionLayer: ignoreDepthValues property"
short-title: ignoreDepthValues
slug: Web/API/XRProjectionLayer/ignoreDepthValues
page-type: web-api-instance-property
status:
- experimental
browser-compat: api.XRProjectionLayer.ignoreDepthValues
---
{{APIRef("WebXR Device API")}}{{SeeCompatTable}}
The read-only **`ignoreDepthValues`** property of the {{domxref("XRProjectionLayer")}} interface is a boolean indicating whether the XR compositor is ignoring depth buffer values when rendering the layer.
## Value
A boolean. `true` indicates the XR compositor doesn't make use of depth buffer values; `false` indicates the content of the depth buffer will be used when rendering the layer.
## Examples
### Ignoring depth values
If the `depthFormat` option is `0` when creating a projection layer, the `ignoreDepthValues` property will be `true`. See also {{domxref("XRWebGLBinding.createProjectionLayer()")}}.
```js
let glProjectionLayer = xrGLBinding.createProjectionLayer({
depthFormat: 0,
});
glProjectionLayer.ignoreDepthValues; // true
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- {{domxref("XRWebGLBinding.createProjectionLayer()")}}
---
title: "XRProjectionLayer: textureHeight property"
short-title: textureHeight
slug: Web/API/XRProjectionLayer/textureHeight
page-type: web-api-instance-property
status:
- experimental
browser-compat: api.XRProjectionLayer.textureHeight
---
{{APIRef("WebXR Device API")}}{{SeeCompatTable}}
The read-only **`textureHeight`** property of the {{domxref("XRProjectionLayer")}} interface indicates the height in pixels of the color textures of this layer.
The projection layer's texture height is determined by the user agent or the device. It is reported in the {{domxref("XRSubImage")}}, which can only be accessed inside the frame loop. If you want to manage your own depth buffers and don't want to wait for the first frame after layer creation to determine the required dimensions for those buffers, the `textureHeight` property allows access to the layer's texture height outside the frame loop. Allocation of these buffers can happen directly after layer creation.
## Value
A number indicating the height in pixels.
## Examples
### Using `textureHeight`
The `textureHeight` of a layer is useful when creating render buffers for a layer. See also {{domxref("WebGL2RenderingContext.renderbufferStorageMultisample()")}}.
```js
let glLayer = xrGLBinding.createProjectionLayer();
let color_rb = gl.createRenderbuffer();
gl.bindRenderbuffer(gl.RENDERBUFFER, color_rb);
gl.renderbufferStorageMultisample(
gl.RENDERBUFFER,
samples,
gl.RGBA8,
glLayer.textureWidth,
glLayer.textureHeight,
);
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- {{domxref("XRWebGLBinding.createProjectionLayer()")}}
- {{domxref("WebGL2RenderingContext.renderbufferStorageMultisample()")}}
- {{domxref("XRSubImage")}}
---
title: "XRProjectionLayer: textureArrayLength property"
short-title: textureArrayLength
slug: Web/API/XRProjectionLayer/textureArrayLength
page-type: web-api-instance-property
status:
- experimental
browser-compat: api.XRProjectionLayer.textureArrayLength
---
{{APIRef("WebXR Device API")}}{{SeeCompatTable}}
The read-only **`textureArrayLength`** property of the {{domxref("XRProjectionLayer")}} interface indicates the layer's layer count for array textures when using `texture-array` as the `textureType`.
The projection layer's layer count for array textures is determined by the user agent or the device. It is reported in the {{domxref("XRSubImage")}}, which can only be accessed inside the frame loop. If you want to manage your own depth buffers and don't want to wait for the first frame after layer creation to determine the required dimensions for those buffers, the `textureArrayLength` property allows access to the layer count for array textures outside the frame loop. Allocation of these buffers can happen directly after layer creation.
## Value
A number indicating the number of layers of the color textures when using `texture-array` as the `textureType`. Otherwise it will be `1`.
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- {{domxref("XRSubImage")}}
---
title: "XRProjectionLayer: textureWidth property"
short-title: textureWidth
slug: Web/API/XRProjectionLayer/textureWidth
page-type: web-api-instance-property
status:
- experimental
browser-compat: api.XRProjectionLayer.textureWidth
---
{{APIRef("WebXR Device API")}}{{SeeCompatTable}}
The read-only **`textureWidth`** property of the {{domxref("XRProjectionLayer")}} interface indicates the width in pixels of the color textures of this layer.
The projection layer's texture width is determined by the user agent or the device. It is reported in the {{domxref("XRSubImage")}}, which can only be accessed inside the frame loop. If you want to manage your own depth buffers and don't want to wait for the first frame after layer creation to determine the required dimensions for those buffers, the `textureWidth` property allows access to the layer's texture width outside the frame loop. Allocation of these buffers can happen directly after layer creation.
## Value
A number indicating the width in pixels.
## Examples
### Using `textureWidth`
The `textureWidth` of a layer is useful when creating render buffers for a layer. See also {{domxref("WebGL2RenderingContext.renderbufferStorageMultisample()")}}.
```js
let glLayer = xrGLBinding.createProjectionLayer();
let color_rb = gl.createRenderbuffer();
gl.bindRenderbuffer(gl.RENDERBUFFER, color_rb);
gl.renderbufferStorageMultisample(
gl.RENDERBUFFER,
samples,
gl.RGBA8,
glLayer.textureWidth,
glLayer.textureHeight,
);
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- {{domxref("XRWebGLBinding.createProjectionLayer()")}}
- {{domxref("WebGL2RenderingContext.renderbufferStorageMultisample()")}}
- {{domxref("XRSubImage")}}
---
title: "XRProjectionLayer: fixedFoveation property"
short-title: fixedFoveation
slug: Web/API/XRProjectionLayer/fixedFoveation
page-type: web-api-instance-property
status:
- experimental
browser-compat: api.XRProjectionLayer.fixedFoveation
---
{{APIRef("WebXR Device API")}}{{SeeCompatTable}}
The **`fixedFoveation`** property of the {{domxref("XRProjectionLayer")}} interface is a number indicating the amount of foveation used by the XR compositor for the layer. Fixed Foveated Rendering (FFR) renders the edges of the eye textures at a lower resolution than the center and reduces the GPU load.
It is most useful for low-contrast textures such as background images, but less for high-contrast ones such as text or detailed images. Authors can adjust the level on a per-frame basis to achieve the best tradeoff between performance and visual quality.
## Value
A number between 0 and 1.
- The minimum amount of foveation is indicated by 0 (full resolution).
- The maximum amount of foveation is indicated by 1 (the edges render at lower resolution).
It's up to the user agent how to interpret the numbers in this range. When changing the foveation level, the effect will be visible in the next {{domxref("XRFrame")}}.
Note that some user agents might implement only certain levels of foveation, so you might need to adjust the foveation level in large increments to see an effect. Example levels:
- `0`: no foveation
- `1/3`: low foveation
- `2/3`: medium foveation
- `1.0`: maximum foveation
Some devices don't support foveated rendering. In that case `fixedFoveation` is [`null`](/en-US/docs/Web/JavaScript/Reference/Operators/null) and setting it will not do anything.
## Examples
### Dynamically setting the level of fixed foveation rendering
The `fixedFoveation` property allows you to set the level of foveation at runtime and for each frame. To set the maximum foveation for a given {{domxref("XRProjectionLayer")}}, use a value of `1`.
```js
let glProjectionLayer = glBinding.createProjectionLayer(/* … */);
glProjectionLayer.fixedFoveation = 1; // maximum foveation
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Foveated rendering](https://en.wikipedia.org/wiki/Foveated_rendering)
---
title: FragmentDirective
slug: Web/API/FragmentDirective
page-type: web-api-interface
status:
- experimental
browser-compat: api.FragmentDirective
---
{{SeeCompatTable}}
The **`FragmentDirective`** interface is an object exposed for feature detection, that is, so that code can check whether or not a browser supports text fragments.
It is accessed via the {{domxref("Document.fragmentDirective")}} property.
## Instance properties
None.
## Instance methods
None.
## Examples
Try running the following in a supporting browser's devtools, in a tab with one or more matched text fragments:
```js
document.fragmentDirective;
// returns an empty FragmentDirective object, if supported
// undefined otherwise
```
This functionality is mainly intended for feature detection at present. In the future, the `FragmentDirective` object could include additional information.
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Text fragments](/en-US/docs/Web/Text_fragments)
- {{cssxref("::target-text")}}
---
title: Web Video Text Tracks Format (WebVTT)
slug: Web/API/WebVTT_API
page-type: web-api-overview
browser-compat:
- api.VTTCue
- api.TextTrack
- api.VTTRegion
---
{{DefaultAPISidebar("WebVTT")}}
**Web Video Text Tracks Format** (**WebVTT**) is a format for displaying timed text tracks (such as subtitles or captions) using the {{HTMLElement("track")}} element. The primary purpose of WebVTT files is to add text overlays to a {{HTMLElement("video")}}. WebVTT is a text-based format, which must be encoded using {{Glossary("UTF-8")}}. Where you can use spaces you can also use tabs. There is also a small API available to represent and manage these tracks and the data needed to perform the playback of the text at the correct times.
## WebVTT files
The MIME type of WebVTT is `text/vtt`.
A WebVTT file (`.vtt`) contains cues, which can be either a single line or multiple lines, as shown below:
```plain
WEBVTT
00:01.000 --> 00:04.000
- Never drink liquid nitrogen.
00:05.000 --> 00:09.000
- It will perforate your stomach.
- You could die.
```
## WebVTT body
The structure of a WebVTT file consists of the following components, some of them optional, in this order:
- An optional byte order mark (BOM).
- The string "`WEBVTT`".
- An optional text header to the right of `WEBVTT`.
- There must be at least one space after `WEBVTT`.
- You could use this to add a description to the file.
- You may use anything in the text header except newlines or the string "`-->`".
- A blank line, which is equivalent to two consecutive newlines.
- Zero or more cues or comments.
- Zero or more blank lines.
### Examples
- Simplest possible WebVTT file
```plain
WEBVTT
```
- Very simple WebVTT file with a text header
```plain
WEBVTT - This file has no cues.
```
- Common WebVTT example with a header and cues
```plain
WEBVTT - This file has cues.
14
00:01:14.815 --> 00:01:18.114
- What?
- Where are we now?
15
00:01:18.171 --> 00:01:20.991
- This is big bat country.
16
00:01:21.058 --> 00:01:23.868
- [ Bats Screeching ]
- They won't get in your hair. They're after the bugs.
```
### Inner structure of a WebVTT file
Let's re-examine one of our previous examples, and look at the cue structure in a bit more detail.
```plain
WEBVTT
00:01.000 --> 00:04.000
- Never drink liquid nitrogen.
00:05.000 --> 00:09.000
- It will perforate your stomach.
- You could die.
```
In the case of each cue:
- The first line starts with a time, which is the starting time for showing the text that appears underneath.
- On the same line, we then have a string of "`-->`".
- We finish the first line with a second time, which is the ending time for showing the associated text.
- We can then have one or more lines of cue text, each containing part of the text track to be shown (in this example, each line starts with a hyphen).
We can also place comments in our `.vtt` file, to help us remember important information about the parts of our file. These should be on separate lines, starting with the string `NOTE`. You'll find more about these in the section below.
It is important to not use "extra" blank lines within a cue, for example between the timings line and the cue payload. WebVTT is line based; a blank line will close the cue.
## WebVTT comments
Comments are an optional component that can be used to add information to a WebVTT file. Comments are intended for those reading the file and are not seen by users. Comments may contain newlines but cannot contain a blank line, which is equivalent to two consecutive newlines. A blank line signifies the end of a comment.
A comment cannot contain the string `-->`, the ampersand character (`&`), or the less-than sign (`<`). If you wish to use such characters, you need to escape them, using for example `&amp;` for ampersand and `&lt;` for less-than. It is also recommended that you use the greater-than escape sequence (`&gt;`) instead of the greater-than character (`>`) to avoid confusion with tags.
A comment consists of three parts:
- The string `NOTE`.
- A space or a newline.
- Zero or more characters other than those noted above.
### Examples
- Common WebVTT example
```plain
NOTE This is a comment
```
- Multi-line comment
```plain
NOTE
One comment that is spanning
more than one line.
NOTE You can also make a comment
across more than one line this way.
```
- Common comment usage
```plain
WEBVTT - Translation of that film I like
NOTE
This translation was done by Kyle so that
some friends can watch it with their parents.
1
00:02:15.000 --> 00:02:20.000
- Ta en kopp varmt te.
- Det är inte varmt.
2
00:02:20.000 --> 00:02:25.000
- Har en kopp te.
- Det smakar som te.
NOTE This last line may not translate well.
3
00:02:25.000 --> 00:02:30.000
- Ta en kopp
```
## Styling WebVTT cues
You can style WebVTT cues by looking for elements which match the {{cssxref("::cue")}} pseudo-element.
### Within site CSS
```css
video::cue {
background-image: linear-gradient(to bottom, dimgray, lightgray);
color: papayawhip;
}
video::cue(b) {
color: peachpuff;
}
```
Here, all video elements are styled to use a gray linear gradient as their backgrounds, with a foreground color of `"papayawhip"`. In addition, text boldfaced using the {{HTMLElement("b")}} element are colored `"peachpuff"`.
The HTML snippet below actually handles displaying the media itself.
```html
<video controls autoplay src="video.webm">
<track default src="track.vtt" />
</video>
```
### Within the WebVTT file itself
You can also define the style directly in the WebVTT file. In this case, you insert your CSS rules into the file with each rule preceded by the string `"STYLE"` all by itself on a line of text, as shown below:
```plain
WEBVTT
STYLE
::cue {
background-image: linear-gradient(to bottom, dimgray, lightgray);
color: papayawhip;
}
/* Style blocks cannot use blank lines nor "dash dash greater than" */
NOTE comment blocks can be used between style blocks.
STYLE
::cue(b) {
color: peachpuff;
}
00:00:00.000 --> 00:00:10.000
- Hello <b>world</b>.
NOTE style blocks cannot appear after the first cue.
```
We can also use identifiers inside the WebVTT file to define a new style for particular cues. For example, if we want the transcription credit to be highlighted in red while the other cue gets its own color, we can target the cue identifiers from CSS. Note that the CSS uses escape sequences the way they are used in HTML pages:
```plain
WEBVTT
1
00:00.000 --> 00:02.000
That's an, an, that's an L!
crédit de transcription
00:04.000 --> 00:05.000
Transcrit par Célestes™
```
```css
::cue(#\31) {
color: lime;
}
::cue(#crédit\ de\ transcription) {
color: red;
}
```
Positioning of text tracks is also supported, by including positioning information after the timings in a cue, as seen below (see [Cue settings](#cue_settings) for more information):
```plain
WEBVTT
00:00:00.000 --> 00:00:04.000 position:10%,line-left align:left size:35%
Where did he go?
00:00:03.000 --> 00:00:06.500 position:90% align:right size:35%
I think he went down this lane.
00:00:04.000 --> 00:00:06.500 position:45%,line-right align:center size:35%
What are you waiting for?
```
## WebVTT cues
A cue is a single subtitle block that has a single start time, end time, and textual payload. A cue consists of five components:
- An optional cue identifier followed by a newline.
- Cue timings.
- Optional cue settings with at least one space before the first and between each setting.
- A single newline.
- The cue payload text.
Here is an example of a cue:
```plain
1 - Title Crawl
00:00:05.000 --> 00:00:10.000 line:0 position:20% size:60% align:start
Some time ago in a place rather distant....
```
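Scripts that generate WebVTT can assemble a cue from these components. A small serializer sketch (`formatCue` is a hypothetical helper, not part of any API; the identifier and settings are optional):

```js
// Build a cue block string from its components.
function formatCue({ id, start, end, settings = "", payload }) {
  // Timings line, with optional settings appended after a space.
  const timings = [`${start} --> ${end}`, settings].filter(Boolean).join(" ");
  // Optional identifier line, then timings, then the payload text.
  return [id, timings, payload].filter(Boolean).join("\n");
}

formatCue({
  id: "1 - Title Crawl",
  start: "00:00:05.000",
  end: "00:00:10.000",
  settings: "line:0 position:20% size:60% align:start",
  payload: "Some time ago in a place rather distant....",
});
```

Joining the resulting cue blocks with blank lines, after a `WEBVTT` header line, yields a complete file body.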
### Cue identifier
The identifier is a name that identifies the cue. It can be used to reference the cue from a script. It must not contain a newline and cannot contain the string "`-->`". It must end with a single newline. Identifiers do not have to be unique, although it is common to number them (e.g., 1, 2, 3).
Here are a few examples:
- A basic cue identifier
```plain
1 - Title Crawl
```
- Common usage of identifiers
```plain
WEBVTT
1
00:00:22.230 --> 00:00:24.606
This is the first subtitle.
2
00:00:30.739 --> 00:00:34.074
This is the second.
3
00:00:34.159 --> 00:00:35.743
Third
```
### Cue timings
A cue timing indicates when the cue is shown. It has a start and end time which are represented by timestamps. The end time must be greater than the start time, and the start time must be greater than or equal to all previous start times. Cues may have overlapping timings.
If the WebVTT file is being used for chapters ({{HTMLElement("track")}} [`kind`](/en-US/docs/Web/HTML/Element/track#kind) is `chapters`) then the file cannot have overlapping timings.
Each cue timing contains five components:
- Timestamp for start time.
- At least one space.
- The string "`-->`".
- At least one space.
- Timestamp for end time, which must be greater than the start time.
The timestamps must be in one of two formats:
- `mm:ss.ttt`
- `hh:mm:ss.ttt`
Where the components are defined as follows:
- `hh`
- : Represents hours and must be at least two digits. It can be greater than two digits (e.g., `9999:00:00.000`).
- `mm`
- : Represents minutes and must be between 00 and 59, inclusive.
- `ss`
- : Represents seconds and must be between 00 and 59, inclusive.
- `ttt`
- : Represents milliseconds and must be between 000 and 999, inclusive.
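These two formats can be validated and converted to seconds with a small parser. A sketch (`parseVttTimestamp` is a hypothetical helper, not part of any API):

```js
// Convert "mm:ss.ttt" or "hh:mm:ss.ttt" to seconds; throws on invalid input.
function parseVttTimestamp(ts) {
  // Optional hours (two or more digits), minutes and seconds 00–59,
  // and exactly three millisecond digits.
  const m = /^(?:(\d{2,}):)?([0-5]\d):([0-5]\d)\.(\d{3})$/.exec(ts);
  if (!m) {
    throw new Error(`Invalid WebVTT timestamp: ${ts}`);
  }
  const [, hh, mm, ss, ttt] = m;
  return (hh ? Number(hh) * 3600 : 0) + Number(mm) * 60 + Number(ss) + Number(ttt) / 1000;
}

parseVttTimestamp("01:04.000"); // 64
```

Comparing the parsed start and end values is one way to check that a cue's end time is greater than its start time.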
Here are a few cue timing examples:
- Basic cue timing examples
```plain
00:00:22.230 --> 00:00:24.606
00:00:30.739 --> 00:00:34.074
00:00:34.159 --> 00:00:35.743
00:00:35.827 --> 00:00:40.122
```
- Overlapping cue timing examples
```plain
00:00:00.000 --> 00:00:10.000
00:00:05.000 --> 00:01:00.000
00:00:30.000 --> 00:00:50.000
```
- Non-overlapping cue timing examples
```plain
00:00:00.000 --> 00:00:10.000
00:00:10.000 --> 00:01:00.581
00:01:00.581 --> 00:02:00.100
00:02:01.000 --> 00:02:01.000
```
### Cue settings
Cue settings are optional components used to position where the cue payload text will be displayed over the video. This includes whether the text is displayed horizontally or vertically. There can be zero or more of them, and they can be used in any order so long as each setting is used no more than once.
The cue settings are added to the right of the cue timings. There must be one or more spaces between the cue timing and the first setting and between each setting. A setting's name and value are separated by a colon. The settings are case sensitive so use lower case as shown. There are five cue settings:
- `vertical`
- : Indicates that the text will be displayed vertically rather than horizontally, such as in some Asian languages. There are two possible values:
- `rl`
- : The writing direction is right to left
- `lr`
- : The writing direction is left to right
- `line`
- : If vertical is not set, specifies where the text appears vertically. If vertical is set, line specifies where text appears horizontally. Its value can be:
- a line number
- : The number is the height of the first line of the cue as it appears on the video. Positive numbers indicate top down and negative numbers indicate bottom up.
- a percentage
- : It must be an integer (i.e., no decimals) between 0 and 100 inclusive and must be followed by a percent sign (%).
| Line | `vertical` omitted | `vertical:rl` | `vertical:lr` |
| ----------- | ------------------ | ------------- | ------------- |
| `line:0` | top | right | left |
| `line:-1` | bottom | left | right |
| `line:0%` | top | right | left |
| `line:100%` | bottom | left | right |
- `position`
- : Specifies where the text will appear horizontally. If vertical is set, position specifies where the text will appear vertically. The value is a percentage, that is an integer (no decimals) between 0 and 100 inclusive followed by a percent sign (%).
| Position | `vertical` omitted | `vertical:rl` | `vertical:lr` |
| --------------- | ------------------ | ------------- | ------------- |
| `position:0%` | left | top | top |
| `position:100%` | right | bottom | bottom |
- `size`
- : Specifies the width of the text area. If vertical is set, size specifies the height of the text area. The value is a percentage, that is an integer (no decimals) between 0 and 100 inclusive followed by a percent sign (%).
| Size | `vertical` omitted | `vertical:rl` | `vertical:lr` |
| ----------- | ------------------ | ------------- | ------------- |
| `size:100%` | full width | full height | full height |
| `size:50%` | half width | half height | half height |
- `align`
- : Specifies the alignment of the text. Text is aligned within the space given by the size cue setting if it is set.
| Align | `vertical` omitted | `vertical:rl` | `vertical:lr` |
| -------------- | --------------------- | ------------------- | ------------------- |
| `align:start` | left | top | top |
| `align:center` | centered horizontally | centered vertically | centered vertically |
| `align:end` | right | bottom | bottom |
Let's study an example of cue setting.
The first line demonstrates no settings. The second line might be used to overlay text on a sign or label. The third line might be used for a title. The last line might be used for an Asian language.
```plain
00:00:05.000 --> 00:00:10.000
00:00:05.000 --> 00:00:10.000 line:63% position:72% align:start
00:00:05.000 --> 00:00:10.000 line:0 position:20% size:60% align:start
00:00:05.000 --> 00:00:10.000 vertical:rl line:-1 align:end
```
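Cue settings are plain `name:value` pairs separated by spaces, so they are straightforward to pull apart in script. The following sketch is for illustration only; `parseCueSettings` is not part of any API:

```js
// Illustrative helper (not part of the WebVTT API): parse the settings
// portion of a cue timing line into an object of name/value pairs.
function parseCueSettings(settings) {
  const result = {};
  for (const pair of settings.trim().split(/\s+/)) {
    const [name, value] = pair.split(":");
    if (name && value !== undefined) {
      result[name] = value;
    }
  }
  return result;
}

console.log(parseCueSettings("line:63% position:72% align:start"));
// { line: "63%", position: "72%", align: "start" }
```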
### Cue payload
The payload is where the main information or content is located. In normal usage the payload contains the subtitles to be displayed. The payload text may contain newlines but it cannot contain a blank line, which is equivalent to two consecutive newlines. A blank line signifies the end of a cue.
A cue text payload cannot contain the string `-->`, the ampersand character (`&`), or the less-than sign (`<`). Instead use the escape sequence `&` for ampersand and `<` for less-than. It is also recommended that you use the greater-than escape sequence `>` instead of the greater-than character (`>`) to avoid confusion with tags. If you are using the WebVTT file for metadata these restrictions do not apply.
In addition to the three escape sequences mentioned above, there are four others. They are listed in the table below.
| Name | Character | Escape sequence |
| ------------------ | --------- | --------------- |
| Ampersand | `&` | `&` |
| Less-than | `<` | `<` |
| Greater-than | `>` | `>` |
| Left-to-right mark | _none_ | `‎` |
| Right-to-left mark | _none_ | `‏` |
| Non-breaking space | | ` ` |
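When generating cue text from arbitrary strings, the escape sequences can be applied with a small helper. This is a sketch and not part of any API; the ampersand must be replaced first so that the replacements for `<` and `>` are not escaped a second time:

```js
// Illustrative helper (not part of the WebVTT API): escape the characters
// that may not appear literally in a cue payload.
// "&" must be replaced first, or the "&" inside "&lt;" and "&gt;"
// would itself be escaped.
function escapeCueText(text) {
  return text
    .replaceAll("&", "&amp;")
    .replaceAll("<", "&lt;")
    .replaceAll(">", "&gt;");
}

console.log(escapeCueText("Tom & Jerry <3")); // "Tom &amp; Jerry &lt;3"
```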
### Cue payload text tags
There are a number of tags, such as `<b>`, that can be used. However, if the WebVTT file is used in a {{HTMLElement("track")}} element where the attribute [`kind`](/en-US/docs/Web/HTML/Element/track#kind) is `chapters` then you cannot use tags.
- Timestamp tag
- : The timestamp must be greater than the cue's start timestamp, greater than any previous timestamp in the cue payload, and less than the cue's end timestamp. The _active text_ is the text between the timestamp and the next timestamp, or to the end of the payload if there is no other timestamp in the payload. Any text before the _active text_ in the payload is _previous text_. Any text beyond the _active text_ is _future text_. This enables karaoke style captions.
```plain
1
00:16.500 --> 00:18.500
When the moon <00:17.500>hits your eye
1
00:00:18.500 --> 00:00:20.500
Like a <00:19.000>big-a <00:19.500>pizza <00:20.000>pie
1
00:00:20.500 --> 00:00:21.500
That's <00:00:21.000>amore
```
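A renderer that supports karaoke-style captions needs to split the payload at each timestamp tag. The sketch below shows one way to do that; `splitByTimestamps` is illustrative and not part of any API:

```js
// Illustrative helper (not part of the WebVTT API): split a karaoke-style
// payload into segments, each tagged with the timestamp (if any) that
// precedes it. Matches both mm:ss.ttt and hh:mm:ss.ttt timestamps.
function splitByTimestamps(payload) {
  const segments = [];
  const re = /<(\d{2}:)?\d{2}:\d{2}\.\d{3}>/g;
  let lastIndex = 0;
  let lastTime = null;
  for (const match of payload.matchAll(re)) {
    const text = payload.slice(lastIndex, match.index);
    if (text) segments.push({ time: lastTime, text });
    lastTime = match[0].slice(1, -1); // strip the angle brackets
    lastIndex = match.index + match[0].length;
  }
  const tail = payload.slice(lastIndex);
  if (tail) segments.push({ time: lastTime, text: tail });
  return segments;
}

console.log(splitByTimestamps("When the moon <00:17.500>hits your eye"));
// [
//   { time: null, text: "When the moon " },
//   { time: "00:17.500", text: "hits your eye" },
// ]
```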
The following tags are the HTML tags allowed in a cue and require opening and closing tags (e.g., `<b>text</b>`).
- Class tag (`<c></c>`)
- : Style the contained text using a CSS class.
```xml
<c.classname>text</c>
```
- Italics tag (`<i></i>`)
- : Italicize the contained text.
```xml
<i>text</i>
```
- Bold tag (`<b></b>`)
- : Bold the contained text.
```xml
<b>text</b>
```
- Underline tag (`<u></u>`)
- : Underline the contained text.
```xml
<u>text</u>
```
- Ruby tag (`<ruby></ruby>`)
- : Used with ruby text tags to display [ruby characters](https://en.wikipedia.org/wiki/Ruby_character) (i.e., small annotative characters above other characters).
```xml
<ruby>WWW<rt>World Wide Web</rt>oui<rt>yes</rt></ruby>
```
- Ruby text tag (`<rt></rt>`)
- : Used with ruby tags to display [ruby characters](https://en.wikipedia.org/wiki/Ruby_character) (i.e., small annotative characters above other characters).
```xml
<ruby>WWW<rt>World Wide Web</rt>oui<rt>yes</rt></ruby>
```
- Voice tag (`<v></v>`)
- : Indicates the voice of the speaker (for example, the speaker's name). Like the class tag, it can also be used to style the contained text using CSS.
```xml
<v Bob>text</v>
```
## Instance methods and properties
The methods used in WebVTT are those that alter the cue or the region, as the attributes of the two interfaces differ. We can categorize them by the interface each belongs to:
### VTTCue
The methods which are available in the {{domxref("VTTCue")}} interface are:
- {{domxref("VTTCue.getCueAsHTML", "getCueAsHTML()")}} to get the HTML of that cue.
- A constructor, {{domxref("VTTCue.VTTCue", "VTTCue()")}} for creating new instances of this interface.
Various properties that let you read and set the characteristics of the cue, like its position, alignment, or size, are also available. Check {{domxref("VTTCue")}} for a complete list.
### VTTRegion
The {{domxref("VTTRegion")}} interface provides the methods and properties used to work with regions. In particular, it lets you adjust the scrolling setting of all nodes present in a given region.
## Tutorial on how to write a WebVTT file
There are a few steps you can follow to write a simple WebVTT file. Before starting, note that you can use a plain-text editor and then save the file with a `.vtt` extension. The steps are given below:
- Open a plain-text editor.
- The first line of a WebVTT file is standardized, in a similar way to how some other languages require you to put a header at the start of the file to indicate the file type. On the very first line you have to write:
```plain
WEBVTT
```
- Leave the second line blank, and on the third line specify the time range for the first cue. For example, a first cue starting at 1 second and ending at 5 seconds is written as:
```plain
00:01.000 --> 00:05.000
```
- On the next line you can write the caption for this cue, which will run from the first second to the fifth second, inclusive.
- By following the same steps, you can build a complete WebVTT file for a specific video or audio file.
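The steps above can also be automated. The following sketch builds the text of a minimal WebVTT file from an array of cue objects; `buildWebVTT` and the shape of its input are assumptions made for illustration, not part of any API:

```js
// Illustrative helper (not part of the WebVTT API): serialize cue objects
// into the text of a minimal WebVTT file.
function buildWebVTT(cues) {
  // Each cue block: a timing line followed by the caption text.
  const blocks = cues.map((cue) => `${cue.start} --> ${cue.end}\n${cue.text}`);
  // The WEBVTT header, a blank line, then blank-line-separated cue blocks.
  return `WEBVTT\n\n${blocks.join("\n\n")}\n`;
}

console.log(
  buildWebVTT([{ start: "00:01.000", end: "00:05.000", text: "Hello world!" }]),
);
// WEBVTT
//
// 00:01.000 --> 00:05.000
// Hello world!
```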
## CSS pseudo-classes
CSS pseudo-classes allow us to classify the type of object that we want to differentiate from other types of objects. They work in WebVTT files in a similar manner to the way they work in HTML files.
One good feature supported by WebVTT is localization and the use of class elements, which can be used in the same way they are used in HTML and CSS to classify the style for a particular type of object, except that here they are used for styling and classifying cues, as shown below:
```plain
WEBVTT
04:02.500 --> 04:05.000
J'ai commencé le basket à l'âge de 13, 14 ans
04:05.001 --> 04:07.800
Sur les <i.foreignphrase><lang en>playground</lang></i>, ici à Montpellier
```
In the above example, you can see that we can use an identifier and a pseudo-class name to define the language of a caption, where the `<i>` tag is for italics.
The type of pseudo-class is determined by the selector it is using, and it works in a similar manner to how it works in HTML. The following CSS pseudo-classes can be used:
- `lang` (Language): e.g., `p:lang(it)`.
- `link`: e.g., `a:link`.
- `nth-last-child`: e.g., `p:nth-last-child(2)`.
- `nth-child(n)`: e.g., `p:nth-child(2)`.
Here `p` and `a` are the tags used in HTML for a paragraph and a link, respectively; they can be replaced by the identifiers that are used for cues in a WebVTT file.
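Styles for cues are typically applied from the page's stylesheet through the CSS [`::cue` and `::cue()`](/en-US/docs/Web/CSS/::cue) pseudo-elements. The following sketch shows how the class and language annotations from the example above might be styled; the selectors are assumptions matching that example:

```css
/* Style all cue text rendered by the video element */
video::cue {
  background-color: black;
  color: yellow;
}

/* Style cue text wrapped in a class tag, such as <i.foreignphrase> */
video::cue(.foreignphrase) {
  font-style: italic;
  color: cyan;
}

/* Style cue text marked with <lang en> */
video::cue(:lang(en)) {
  text-decoration: underline;
}
```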
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
### Notes
Prior to Firefox 50, the `AlignSetting` enum (representing possible values for {{domxref("VTTCue.align")}}) incorrectly included the value `"middle"` instead of `"center"`. This has been corrected.
WebVTT was implemented in Firefox 24 behind the preference `media.webvtt.enabled`, which is disabled by default; you can enable it by setting this preference to `true`. WebVTT is enabled by default starting in Firefox 31 and can be disabled by setting the preference to `false`.
Prior to Firefox 58, the `REGION` keyword was creating {{domxref("VTTRegion")}} objects, but they were not being used. Firefox 58 now fully supports `VTTRegion` and its use; however, this feature is disabled by default behind the preference `media.webvtt.regions.enabled`; set it to `true` to enable region support in Firefox 58. Regions are enabled by default starting in Firefox 59 (see bugs [Firefox bug 1338030](https://bugzil.la/1338030) and [Firefox bug 1415805](https://bugzil.la/1415805)).
## See also
- The CSS [`::cue` and `::cue()`](/en-US/docs/Web/CSS/::cue) pseudo-elements
---
title: CSSFontPaletteValuesRule
slug: Web/API/CSSFontPaletteValuesRule
page-type: web-api-interface
browser-compat: api.CSSFontPaletteValuesRule
---
{{APIRef("CSSOM")}}
The **`CSSFontPaletteValuesRule`** interface represents an {{cssxref("@font-palette-values")}} [at-rule](/en-US/docs/Web/CSS/At-rule).
{{InheritanceDiagram}}
## Instance properties
_Inherits properties from its ancestor {{domxref("CSSRule")}}._
- {{domxref("CSSFontPaletteValuesRule.name")}} {{ReadOnlyInline}}
- : A string with the name of the font palette.
- {{domxref("CSSFontPaletteValuesRule.fontFamily")}} {{ReadOnlyInline}}
- : A string indicating the font families on which the rule has to be applied.
- {{domxref("CSSFontPaletteValuesRule.basePalette")}} {{ReadOnlyInline}}
- : A string indicating the base palette associated with the rule.
- {{domxref("CSSFontPaletteValuesRule.overrideColors")}} {{ReadOnlyInline}}
- : A string indicating the colors of the base palette that are overwritten and the new colors.
## Instance methods
_Inherits methods from its ancestor {{domxref("CSSRule")}}._
## Examples
### Read associated font family using CSSOM
This example first defines an {{cssxref("@import")}} and an {{cssxref("@font-palette-values")}} at-rule. Then it reads the {{cssxref("@font-palette-values")}} rule and displays the font families it is associated with. As these rules live in the last stylesheet added to the document, the palette will be the second {{domxref("CSSRule")}} returned by the last stylesheet in the document (`document.styleSheets[document.styleSheets.length-1].cssRules`). So, `rules[1]` returns a {{domxref("CSSFontPaletteValuesRule")}} object, from which we can access `fontFamily`.
#### HTML
```html
<pre id="log">The @font-palette-values at-rule font families:</pre>
```
#### CSS
```css
@import url(https://fonts.googleapis.com/css2?family=Bungee+Spice);
@font-palette-values --Alternate {
font-family: "Bungee Spice";
override-colors:
0 #00ffbb,
1 #007744;
}
.alternate {
font-palette: --Alternate;
}
```
#### JavaScript
```js
const log = document.getElementById("log");
const rules = document.styleSheets[document.styleSheets.length - 1].cssRules;
const fontPaletteValuesRule = rules[1]; // A CSSFontPaletteValuesRule interface
log.textContent += ` ${fontPaletteValuesRule.fontFamily}`;
```
#### Result
{{EmbedLiveSample("Read associated font family using CSSOM", "100", "40")}}
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- {{cssxref("@font-palette-values")}}
---
title: "CSSFontPaletteValuesRule: name property"
short-title: name
slug: Web/API/CSSFontPaletteValuesRule/name
page-type: web-api-instance-property
browser-compat: api.CSSFontPaletteValuesRule.name
---
{{APIRef("CSSOM")}}
The read-only **`name`** property of the {{domxref("CSSFontPaletteValuesRule")}} interface represents the name identifying the associated {{CSSxRef("@font-palette-values")}} at-rule. A valid name always starts with two dashes, such as `--Alternate`.
## Value
A string beginning with two dashes.
## Examples
### Read the at-rule's name
This example first defines an {{cssxref("@import")}} and an {{cssxref("@font-palette-values")}} at-rule. Then it reads the {{cssxref("@font-palette-values")}} rule and displays its name. As these rules live in the last stylesheet added to the document, the palette will be the second {{domxref("CSSRule")}} returned by the last stylesheet in the document (`document.styleSheets[document.styleSheets.length-1].cssRules`). So, `rules[1]` returns a {{domxref("CSSFontPaletteValuesRule")}} object, from which we can access `name`.
#### HTML
```html
<pre id="log">The @font-palette-values at-rule's name:</pre>
```
#### CSS
```css
@import url(https://fonts.googleapis.com/css2?family=Bungee+Spice);
@font-palette-values --Alternate {
font-family: "Bungee Spice";
override-colors:
0 #00ffbb,
1 #007744;
}
.alternate {
font-palette: --Alternate;
}
```
#### JavaScript
```js
const log = document.getElementById("log");
const rules = document.styleSheets[document.styleSheets.length - 1].cssRules;
const fontPaletteValuesRule = rules[1]; // a CSSFontPaletteValuesRule interface
log.textContent += ` ${fontPaletteValuesRule.name}`;
```
#### Result
{{EmbedLiveSample("Read the at-rule's name", "100", "40")}}
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- {{cssxref("@font-palette-values")}} at-rule
---
title: "CSSFontPaletteValuesRule: fontFamily property"
short-title: fontFamily
slug: Web/API/CSSFontPaletteValuesRule/fontFamily
page-type: web-api-instance-property
browser-compat: api.CSSFontPaletteValuesRule.fontFamily
---
{{APIRef("CSSOM")}}
The read-only **`fontFamily`** property of the {{domxref("CSSFontPaletteValuesRule")}} interface lists the font families the rule can be applied to. The font families must be _named_ families; _generic_ families like `courier` are not valid.
## Value
A string containing a space-separated list of the font families on which the rule can be applied
## Examples
### Read the associated font family
This example first defines an {{cssxref("@import")}} and an {{cssxref("@font-palette-values")}} at-rule. Then it reads the {{cssxref("@font-palette-values")}} rule and displays the font families it applies to. As these rules live in the last stylesheet added to the document, the palette will be the second {{domxref("CSSRule")}} returned by the last stylesheet in the document (`document.styleSheets[document.styleSheets.length-1].cssRules`). So, `rules[1]` returns a {{domxref("CSSFontPaletteValuesRule")}} object, from which we can access `fontFamily`.
#### HTML
```html
<pre id="log">The @font-palette-values at-rule applies to the font families:</pre>
```
#### CSS
```css
@import url(https://fonts.googleapis.com/css2?family=Bungee+Spice);
@font-palette-values --Alternate {
font-family: "Bungee Spice";
override-colors:
0 #00ffbb,
1 #007744;
}
.alternate {
font-palette: --Alternate;
}
```
#### JavaScript
```js
const log = document.getElementById("log");
const rules = document.styleSheets[document.styleSheets.length - 1].cssRules;
const fontPaletteValuesRule = rules[1]; // a CSSFontPaletteValuesRule interface
log.textContent += ` ${fontPaletteValuesRule.fontFamily}`;
```
#### Result
{{EmbedLiveSample("Read the associated font family", "100", "40")}}
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- {{cssxref("@font-palette-values")}} at-rule
- {{cssxref("@font-palette-values/font-family", "font-family")}} descriptor
---
title: "CSSFontPaletteValuesRule: overrideColors property"
short-title: overrideColors
slug: Web/API/CSSFontPaletteValuesRule/overrideColors
page-type: web-api-instance-property
browser-compat: api.CSSFontPaletteValuesRule.overrideColors
---
{{APIRef("CSSOM")}}
The read-only **`overrideColors`** property of the {{domxref("CSSFontPaletteValuesRule")}} interface is a string containing a list of color index and color pairs that are to be used instead of the corresponding colors of the base palette. It is specified in the same format as the corresponding {{cssxref("@font-palette-values/override-colors", "override-colors")}} descriptor.
## Value
A string containing a comma-separated list of color index and color pairs.
## Examples
### Read the overridden colors
This example first defines a few at-rules, among them two {{cssxref("@font-palette-values")}}. As these rules live in the last stylesheet added to the document, the palette will be the second {{domxref("CSSRule")}} returned by the last stylesheet in the document (`document.styleSheets[document.styleSheets.length-1].cssRules`).
#### HTML
```html
<div class="hat">
<div class="emoji colored-hat">🎩</div>
</div>
<button>Toggle color</button>
<pre id="log"></pre>
```
#### CSS
```css
@font-face {
font-family: "Noto Color Emoji";
font-style: normal;
font-weight: 400;
src: url(https://fonts.gstatic.com/l/font?kit=Yq6P-KqIXTD0t4D9z1ESnKM3-HpFabts6diywYkdG3gjD0U&skey=a373f7129eaba270&v=v24)
format("woff2");
}
.emoji {
font-family: "Noto Color Emoji";
font-size: 3rem;
}
@font-palette-values --blue {
font-family: "Noto Color Emoji";
override-colors:
3 rgb(1 28 193),
4 rgb(60 124 230);
}
@font-palette-values --green {
font-family: "Noto Color Emoji";
override-colors:
3 rgb(28 193 1),
4 rgb(34 230 1);
}
.colored-hat {
font-palette: --blue;
}
```
#### JavaScript
```js
const log = document.getElementById("log");
const button = document.querySelector("button");
const hat = document.querySelector(".colored-hat");
const rules = document.styleSheets[document.styleSheets.length - 1].cssRules;
const greenFontPaletteValuesRule = rules[3];
const blueFontPaletteValuesRule = rules[2];
log.textContent = `Overridden colors: ${blueFontPaletteValuesRule.overrideColors}`;
button.addEventListener("click", (event) => {
if (hat.style.fontPalette !== "--green") {
hat.style.fontPalette = "--green";
log.textContent = `Overridden colors: ${greenFontPaletteValuesRule.overrideColors}`;
} else {
hat.style.fontPalette = "--blue";
log.textContent = `Overridden colors: ${blueFontPaletteValuesRule.overrideColors}`;
}
});
```
#### Result
{{EmbedLiveSample("Read the overridden colors", "100", "125")}}
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- {{cssxref("@font-palette-values")}} at-rule
- {{cssxref("@font-palette-values/override-colors", "override-colors")}} descriptor
---
title: "CSSFontPaletteValuesRule: basePalette property"
short-title: basePalette
slug: Web/API/CSSFontPaletteValuesRule/basePalette
page-type: web-api-instance-property
browser-compat: api.CSSFontPaletteValuesRule.basePalette
---
{{APIRef("CSSOM")}}
The read-only **`basePalette`** property of the {{domxref("CSSFontPaletteValuesRule")}} interface indicates the base palette associated with the rule.
## Value
A string that can be one of the following values:
- `light`
- : Matches the first palette in the font file that is marked as applicable to a light background, that is, _close to white_. If there is no palette in the font or if no palette has the required metadata, the value is equivalent to `"0"`, that is, the first palette in the font.
- `dark`
- : Matches the first palette in the font file that is marked as applicable to a dark background, that is, _close to black_. If there is no palette in the font or if no palette has the required metadata, the value is equivalent to `"0"`, that is, the first palette in the font.
- a string containing an index (like `"0"`, `"1"`, …)
- : Matches the palette corresponding to the index. The first palette corresponds to `"0"`.
## Examples
### Read the associated base palette
This example adds rules in an extra stylesheet added to the document, returned as the last stylesheet in the document (`document.styleSheets[document.styleSheets.length-1].cssRules`). So, `rules[2]` returns the first {{domxref("CSSFontPaletteValuesRule")}} object, and `rules[3]` the second one.
#### HTML
```html
<h2>default base-palette</h2>
<h2 class="two">base-palette at index 2</h2>
<h2 class="five">base-palette at index 5</h2>
<pre id="log"></pre>
```
#### CSS
```css
@import url("https://fonts.googleapis.com/css2?family=Nabla&display=swap");
h2 {
font-family: "Nabla";
}
@font-palette-values --two {
font-family: "Nabla";
base-palette: 2;
}
@font-palette-values --five {
font-family: "Nabla";
base-palette: 5;
}
.two {
font-palette: --two;
}
.five {
font-palette: --five;
}
```
#### JavaScript
```js
const log = document.getElementById("log");
const rules = document.styleSheets[document.styleSheets.length - 1].cssRules;
const twoRule = rules[2]; // A CSSFontPaletteValuesRule interface
const fiveRule = rules[3]; // A CSSFontPaletteValuesRule interface
log.textContent = `The ${twoRule.name} @font-palette-values base palette is: ${twoRule.basePalette}\n`;
log.textContent += `The ${fiveRule.name} @font-palette-values base palette is: ${fiveRule.basePalette}`;
```
#### Result
{{EmbedLiveSample("Read the associated base palette", "100", "255")}}
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- {{cssxref("@font-palette-values")}} at-rule
- {{cssxref("@font-palette-values/base-palette", "base-palette")}} descriptor
---
title: AudioContext
slug: Web/API/AudioContext
page-type: web-api-interface
browser-compat: api.AudioContext
---
{{APIRef("Web Audio API")}}
The `AudioContext` interface represents an audio-processing graph built from audio modules linked together, each represented by an {{domxref("AudioNode")}}.
An audio context controls both the creation of the nodes it contains and the execution of the audio processing, or decoding. You need to create an `AudioContext` before you do anything else, as everything happens inside a context. It's recommended to create one `AudioContext` and reuse it instead of initializing a new one each time, and it's OK to use a single `AudioContext` for several different audio sources and pipelines concurrently.
{{InheritanceDiagram}}
## Constructor
- {{domxref("AudioContext.AudioContext", "AudioContext()")}}
- : Creates and returns a new `AudioContext` object.
## Instance properties
_Also inherits properties from its parent interface, {{domxref("BaseAudioContext")}}._
- {{domxref("AudioContext.baseLatency")}} {{ReadOnlyInline}}
- : Returns the number of seconds of processing latency incurred by the {{domxref("AudioContext")}} passing the audio from the {{domxref("AudioDestinationNode")}} to the audio subsystem.
- {{domxref("AudioContext.outputLatency")}} {{ReadOnlyInline}}
- : Returns an estimation of the output latency of the current audio context.
- {{domxref("AudioContext.sinkId")}} {{ReadOnlyInline}} {{Experimental_Inline}}
- : Returns the sink ID of the current output audio device.
## Instance methods
_Also inherits methods from its parent interface, {{domxref("BaseAudioContext")}}._
- {{domxref("AudioContext.close()")}}
- : Closes the audio context, releasing any system audio resources that it uses.
- {{domxref("AudioContext.createMediaElementSource()")}}
- : Creates a {{domxref("MediaElementAudioSourceNode")}} associated with an {{domxref("HTMLMediaElement")}}. This can be used to play and manipulate audio from {{HTMLElement("video")}} or {{HTMLElement("audio")}} elements.
- {{domxref("AudioContext.createMediaStreamSource()")}}
- : Creates a {{domxref("MediaStreamAudioSourceNode")}} associated with a {{domxref("MediaStream")}} representing an audio stream which may come from the local computer microphone or other sources.
- {{domxref("AudioContext.createMediaStreamDestination()")}}
- : Creates a {{domxref("MediaStreamAudioDestinationNode")}} associated with a {{domxref("MediaStream")}} representing an audio stream which may be stored in a local file or sent to another computer.
- {{domxref("AudioContext.createMediaStreamTrackSource()")}}
- : Creates a {{domxref("MediaStreamTrackAudioSourceNode")}} associated with a {{domxref("MediaStream")}} representing a media stream track.
- {{domxref("AudioContext.getOutputTimestamp()")}}
- : Returns a new `AudioTimestamp` object containing two audio timestamp values relating to the current audio context.
- {{domxref("AudioContext.resume()")}}
- : Resumes the progression of time in an audio context that has previously been suspended/paused.
- {{domxref("AudioContext.setSinkId()")}} {{Experimental_Inline}}
- : Sets the output audio device for the `AudioContext`.
- {{domxref("AudioContext.suspend()")}}
- : Suspends the progression of time in the audio context, temporarily halting audio hardware access and reducing CPU/battery usage in the process.
## Events
- {{domxref("AudioContext/sinkchange_event", "sinkchange")}} {{Experimental_Inline}}
- : Fired when the output audio device (and therefore, the {{domxref("AudioContext.sinkId")}}) has changed.
## Examples
Basic audio context declaration:
```js
const audioCtx = new AudioContext();
const oscillatorNode = audioCtx.createOscillator();
const gainNode = audioCtx.createGain();
const finish = audioCtx.destination;
// etc.
```
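Because reusing a single context is recommended, the context is often created lazily and shared. Below is a minimal sketch of that pattern; `makeContextGetter` is an assumption made for illustration, written so the constructor is passed in rather than hard-coded:

```js
// Illustrative helper (not part of the Web Audio API): lazily create a
// single shared context and hand back the same instance on every call.
// The constructor is passed in only to keep this sketch self-contained;
// in a page you would pass the global AudioContext.
function makeContextGetter(ContextCtor) {
  let ctx = null;
  return () => {
    ctx ??= new ContextCtor(); // create on first use, then reuse
    return ctx;
  };
}

// In a browser:
// const getAudioContext = makeContextGetter(AudioContext);
// const audioCtx = getAudioContext(); // same instance every time
```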
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Using the Web Audio API](/en-US/docs/Web/API/Web_Audio_API/Using_Web_Audio_API)
- {{domxref("OfflineAudioContext")}}
---
title: "AudioContext: createMediaElementSource() method"
short-title: createMediaElementSource()
slug: Web/API/AudioContext/createMediaElementSource
page-type: web-api-instance-method
browser-compat: api.AudioContext.createMediaElementSource
---
{{ APIRef("Web Audio API") }}
The `createMediaElementSource()` method of the {{ domxref("AudioContext") }} Interface is used to create a new {{ domxref("MediaElementAudioSourceNode") }} object, given an existing HTML {{htmlelement("audio")}} or {{htmlelement("video")}} element, the audio from which can then be played and manipulated.
For more details about media element audio source nodes, check out the {{ domxref("MediaElementAudioSourceNode") }} reference page.
## Syntax
```js-nolint
createMediaElementSource(myMediaElement)
```
### Parameters
- `myMediaElement`
- : An {{domxref("HTMLMediaElement")}} object that you want to feed into an audio processing graph to manipulate.
### Return value
A {{domxref("MediaElementAudioSourceNode")}}.
## Examples
This simple example creates a source from an {{htmlelement("audio") }} element using `createMediaElementSource()`, then passes the audio through a {{ domxref("GainNode") }} before feeding it into the {{ domxref("AudioDestinationNode") }} for playback. When the mouse pointer is moved, the `updatePage()` function is invoked, which calculates the current gain as a ratio of mouse Y position divided by overall window height. You can therefore increase and decrease the volume of the playing music by moving the mouse pointer up and down.
> **Note:** You can also [view this example running live](https://mdn.github.io/webaudio-examples/media-source-buffer/), or [view the source](https://github.com/mdn/webaudio-examples/tree/main/media-source-buffer).
```js
const audioCtx = new AudioContext();
const myAudio = document.querySelector("audio");
// Create a MediaElementAudioSourceNode
// Feed the HTMLMediaElement into it
const source = audioCtx.createMediaElementSource(myAudio);
// Create a gain node
const gainNode = audioCtx.createGain();
// Create variables to store mouse pointer Y coordinate
// and HEIGHT of screen
let curY;
const HEIGHT = window.innerHeight;
// Get new mouse pointer coordinates when mouse is moved
// then set new gain value
document.onmousemove = updatePage;
function updatePage(e) {
curY = e.pageY;
gainNode.gain.value = curY / HEIGHT;
}
// Connect the AudioBufferSourceNode to the gainNode
// and the gainNode to the destination, so we can play the
// music and adjust the volume using the mouse cursor
source.connect(gainNode);
gainNode.connect(audioCtx.destination);
```
> **Note:** As a consequence of calling `createMediaElementSource()`, audio playback from the {{domxref("HTMLMediaElement")}} will be re-routed into the processing graph of the AudioContext. So playing/pausing the media can still be done through the media element API and the player controls.
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Using the Web Audio API](/en-US/docs/Web/API/Web_Audio_API/Using_Web_Audio_API)
---
title: "AudioContext: baseLatency property"
short-title: baseLatency
slug: Web/API/AudioContext/baseLatency
page-type: web-api-instance-property
browser-compat: api.AudioContext.baseLatency
---
{{APIRef("Web Audio API")}}
The **`baseLatency`** read-only property of the
{{domxref("AudioContext")}} interface returns a double that represents the number of
seconds of processing latency incurred by the `AudioContext` passing an audio
buffer from the {{domxref("AudioDestinationNode")}} — i.e. the end of the audio graph —
into the host system's audio subsystem ready for playing.
> **Note:** You can request a certain latency during
> {{domxref("AudioContext.AudioContext()", "construction time", "", "true")}} with the
> `latencyHint` option, but the browser may ignore the option.
## Value
A double representing the base latency in seconds.
## Examples
```js
// default latency ("interactive")
const audioCtx1 = new AudioContext();
console.log(audioCtx1.baseLatency); // 0.00
// higher latency ("playback")
const audioCtx2 = new AudioContext({ latencyHint: "playback" });
console.log(audioCtx2.baseLatency); // 0.15
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Using the Web Audio API](/en-US/docs/Web/API/Web_Audio_API/Using_Web_Audio_API)
- [Web Audio API](/en-US/docs/Web/API/Web_Audio_API)
---
title: "AudioContext: resume() method"
short-title: resume()
slug: Web/API/AudioContext/resume
page-type: web-api-instance-method
browser-compat: api.AudioContext.resume
---
{{ APIRef("Web Audio API") }}
The **`resume()`** method of the {{ domxref("AudioContext") }}
interface resumes the progression of time in an audio context that has previously been
suspended.
This method will cause an `INVALID_STATE_ERR` exception to be thrown if
called on an {{domxref("OfflineAudioContext")}}.
## Syntax
```js-nolint
resume()
```
### Parameters
None.
### Return value
A {{jsxref("Promise")}} that resolves when the context has resumed. The promise is
rejected if the context has already been closed.
## Examples
The following snippet is taken from our [AudioContext states demo](https://github.com/mdn/webaudio-examples/tree/main/audiocontext-states) ([see it running live](https://mdn.github.io/webaudio-examples/audiocontext-states/).) When the suspend/resume button is clicked, the
{{domxref("BaseAudioContext/state", "AudioContext.state")}} is queried — if it is `running`,
{{domxref("AudioContext.suspend()", "suspend()")}} is called; if it is
`suspended`, `resume()` is called. In each case, the text label of
the button is updated as appropriate once the promise resolves.
```js
susresBtn.onclick = () => {
if (audioCtx.state === "running") {
audioCtx.suspend().then(() => {
susresBtn.textContent = "Resume context";
});
} else if (audioCtx.state === "suspended") {
audioCtx.resume().then(() => {
susresBtn.textContent = "Suspend context";
});
}
};
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Using the Web Audio API](/en-US/docs/Web/API/Web_Audio_API/Using_Web_Audio_API)
- [Web Audio API](/en-US/docs/Web/API/Web_Audio_API)
---
title: "AudioContext: getOutputTimestamp() method"
short-title: getOutputTimestamp()
slug: Web/API/AudioContext/getOutputTimestamp
page-type: web-api-instance-method
browser-compat: api.AudioContext.getOutputTimestamp
---
{{APIRef("Web Audio API")}}
The
**`getOutputTimestamp()`** method of the
{{domxref("AudioContext")}} interface returns a new `AudioTimestamp` object
containing two audio timestamp values relating to the current audio context.
The two values are as follows:
- `AudioTimestamp.contextTime`: The time of the sample frame currently
being rendered by the audio output device (i.e., output audio stream position), in the
same units and origin as the context's {{domxref("BaseAudioContext/currentTime", "AudioContext.currentTime")}}.
Basically, this is the time after the audio context was first created.
- `AudioTimestamp.performanceTime`: An estimation of the moment when the
sample frame corresponding to the stored `contextTime` value was rendered
by the audio output device, in the same units and origin as
{{domxref("performance.now()")}}. This is the time after the document containing the
audio context was first rendered.
## Syntax
```js-nolint
getOutputTimestamp()
```
### Parameters
None.
### Return value
An `AudioTimestamp` object, which has the following properties.
- `contextTime`: A point in the time coordinate system of the
{{domxref("BaseAudioContext/currentTime","currentTime")}} for the
`BaseAudioContext`; the time after the audio context was first created.
- `performanceTime`: A point in the time coordinate system of a
`Performance` interface; the time after the document containing the audio
context was first rendered.
## Examples
In the following code we start to play an audio file after a play button is clicked,
and start off a `requestAnimationFrame` loop running, which constantly
outputs the `contextTime` and `performanceTime`.
You can see full code of this [example at output-timestamp](https://github.com/mdn/webaudio-examples/blob/main/output-timestamp/index.html) ([see it live also](https://mdn.github.io/webaudio-examples/output-timestamp/)).
```js
// Press the play button
playBtn.addEventListener("click", () => {
  // We can create the audioCtx as there has been some user action
  if (!audioCtx) {
    audioCtx = new AudioContext();
  }

  source = new AudioBufferSourceNode(audioCtx);
  getData();
  source.start(0);
  playBtn.disabled = true;
  stopBtn.disabled = false;
  rAF = requestAnimationFrame(outputTimestamps);
});

// Press the stop button
stopBtn.addEventListener("click", () => {
  source.stop(0);
  playBtn.disabled = false;
  stopBtn.disabled = true;
  cancelAnimationFrame(rAF);
});

// Helper function to output timestamps
function outputTimestamps() {
  const ts = audioCtx.getOutputTimestamp();
  output.textContent = `Context time: ${ts.contextTime} | Performance time: ${ts.performanceTime}`;
  rAF = requestAnimationFrame(outputTimestamps); // Reregister itself
}
```
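The two timestamps can also be related to {{domxref("performance.now()")}} directly — for example, with a small helper like the following (a sketch, not part of the API; the `secondsSinceOutput` name is ours) that estimates how long ago the reported sample frame actually reached the output device:

```js
// Hypothetical helper: given an AudioTimestamp and a performance.now()
// reading (in milliseconds), return how many seconds have elapsed since
// the reported sample frame was rendered by the output device.
function secondsSinceOutput(timestamp, nowMs) {
  return (nowMs - timestamp.performanceTime) / 1000;
}

// Usage (browser only):
// secondsSinceOutput(audioCtx.getOutputTimestamp(), performance.now());
```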
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
---
title: "AudioContext: outputLatency property"
short-title: outputLatency
slug: Web/API/AudioContext/outputLatency
page-type: web-api-instance-property
browser-compat: api.AudioContext.outputLatency
---
{{APIRef("Web Audio API")}}
The **`outputLatency`** read-only property of
the {{domxref("AudioContext")}} Interface provides an estimation of the output latency
of the current audio context.
This is the time, in seconds, between the browser passing an audio buffer out of an
audio graph over to the host system's audio subsystem to play, and the time at which the
first sample in the buffer is actually processed by the audio output device.
It varies depending on the platform and the available hardware.
## Value
A double representing the output latency in seconds.
## Examples
```js
const audioCtx = new AudioContext();
console.log(audioCtx.outputLatency);
```
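One practical use is latency compensation when audio must line up with visuals: start a source slightly early so the sound is heard at the intended time. A minimal sketch (the `startHeardAt` helper is ours, not part of the API):

```js
// Hypothetical helper: schedule `node` so its output is heard at
// context time `when`, by starting it outputLatency seconds earlier.
function startHeardAt(audioCtx, node, when) {
  const startTime = Math.max(0, when - audioCtx.outputLatency);
  node.start(startTime);
  return startTime;
}
```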
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Using the Web Audio API](/en-US/docs/Web/API/Web_Audio_API/Using_Web_Audio_API)
- [Web Audio API](/en-US/docs/Web/API/Web_Audio_API)
---
title: "AudioContext: createMediaStreamSource() method"
short-title: createMediaStreamSource()
slug: Web/API/AudioContext/createMediaStreamSource
page-type: web-api-instance-method
browser-compat: api.AudioContext.createMediaStreamSource
---
{{ APIRef("Web Audio API") }}
The `createMediaStreamSource()` method of the {{ domxref("AudioContext") }}
Interface is used to create a new {{ domxref("MediaStreamAudioSourceNode") }}
object, given a media stream (say, from a {{ domxref("MediaDevices.getUserMedia") }}
instance), the audio from which can then be played and manipulated.
For more details about media stream audio source nodes, check out the {{
domxref("MediaStreamAudioSourceNode") }} reference page.
## Syntax
```js-nolint
createMediaStreamSource(stream)
```
### Parameters
- `stream`
- : A {{domxref("MediaStream")}} to serve as an audio source to be fed into an audio
processing graph for use and manipulation.
### Return value
A new {{domxref("MediaStreamAudioSourceNode")}} object representing the audio node
whose media is obtained from the specified source stream.
## Examples
In this example, we grab a media (audio + video) stream from {{domxref("MediaDevices.getUserMedia", "navigator.mediaDevices.getUserMedia")}}, feed the media into a {{ htmlelement("video") }} element to play it, and mute the audio — but we also feed the audio into a {{ domxref("MediaStreamAudioSourceNode") }}. Next, we feed this source audio into a lowshelf {{ domxref("BiquadFilterNode") }} (which effectively serves as a bass booster), then into an {{domxref("AudioDestinationNode") }}.

The range slider below the {{ htmlelement("video") }} element controls the amount of
gain given to the lowshelf filter — increase the value of the slider to make the audio
sound more bass heavy!
> **Note:** You can see this [example running live](https://mdn.github.io/webaudio-examples/stream-source-buffer/), or [view the source](https://github.com/mdn/webaudio-examples/tree/main/stream-source-buffer).
```js
const pre = document.querySelector("pre");
const video = document.querySelector("video");
const myScript = document.querySelector("script");
const range = document.querySelector("input");

// getUserMedia block - grab stream
// put it into a MediaStreamAudioSourceNode
// also output the visuals into a video element
if (navigator.mediaDevices) {
  console.log("getUserMedia supported.");
  navigator.mediaDevices
    .getUserMedia({ audio: true, video: true })
    .then((stream) => {
      video.srcObject = stream;
      video.onloadedmetadata = (e) => {
        video.play();
        video.muted = true;
      };

      // Create a MediaStreamAudioSourceNode
      // Feed the HTMLMediaElement into it
      const audioCtx = new AudioContext();
      const source = audioCtx.createMediaStreamSource(stream);

      // Create a biquadfilter
      const biquadFilter = audioCtx.createBiquadFilter();
      biquadFilter.type = "lowshelf";
      biquadFilter.frequency.value = 1000;
      biquadFilter.gain.value = range.value;

      // connect the AudioBufferSourceNode to the gainNode
      // and the gainNode to the destination, so we can play the
      // music and adjust the volume using the mouse cursor
      source.connect(biquadFilter);
      biquadFilter.connect(audioCtx.destination);

      // Get new mouse pointer coordinates when mouse is moved
      // then set new gain value
      range.oninput = () => {
        biquadFilter.gain.value = range.value;
      };
    })
    .catch((err) => {
      console.log(`The following gUM error occurred: ${err}`);
    });
} else {
  console.log("getUserMedia not supported on your browser!");
}

// dump script to pre element
pre.innerHTML = myScript.innerHTML;
```
> **Note:** As a consequence of calling
> `createMediaStreamSource()`, audio playback from the media stream will
> be re-routed into the processing graph of the {{domxref("AudioContext")}}. So
> playing/pausing the stream can still be done through the media element API and the
> player controls.
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Using the Web Audio API](/en-US/docs/Web/API/Web_Audio_API/Using_Web_Audio_API)
---
title: "AudioContext: setSinkId() method"
short-title: setSinkId()
slug: Web/API/AudioContext/setSinkId
page-type: web-api-instance-method
status:
- experimental
browser-compat: api.AudioContext.setSinkId
---
{{APIRef("Web Audio API")}}{{SeeCompatTable}}{{SecureContext_Header}}
The **`setSinkId()`** method of the {{domxref("AudioContext")}} interface sets the output audio device for the `AudioContext`. If a sink ID is not explicitly set, the default system audio output device will be used.
To set the audio output to a device other than the default, the developer needs permission to access audio devices. If necessary, the user can be prompted to grant this permission via a {{domxref("MediaDevices.getUserMedia()")}} call.
In addition, this feature may be blocked by a [`speaker-selection`](/en-US/docs/Web/HTTP/Headers/Permissions-Policy/speaker-selection) [Permissions Policy](/en-US/docs/Web/HTTP/Permissions_Policy).
## Syntax
```js-nolint
setSinkId(sinkId)
```
### Parameters
- `sinkId`
- : The sink ID of the device you want to set as the output audio device. This can take one of the following value types:
- String
- : A string representing the sink ID, retrieved for example via the `deviceId` property of the {{domxref("MediaDeviceInfo")}} objects returned by {{domxref("MediaDevices.enumerateDevices()")}}.
- `AudioSinkOptions`
- : An object representing different options for a sink ID. Currently this takes a single property, `type`, with a value of `none`. Setting this parameter causes the audio to be processed without being played through any audio output device. This is a useful option to minimize power consumption when you don't need playback along with processing.
### Return value
A {{jsxref("Promise")}} that fulfills with a value of `undefined`.
Attempting to set the sink ID to its existing value (i.e., the value returned by {{domxref("AudioContext.sinkId")}}) throws no errors, but aborts the process immediately.
### Exceptions
- `InvalidAccessError` {{domxref("DOMException")}}
- : Thrown if accessing the selected audio output device failed.
- `NotAllowedError` {{domxref("DOMException")}}
- : Thrown if the browser does not have permission to access audio devices.
- `NotFoundError` {{domxref("DOMException")}}
- : Thrown if the passed `sinkId` does not match any audio device found on the system.
## Examples
In our [SetSinkId test example](https://set-sink-id.glitch.me/) (check out the [source code](https://glitch.com/edit/#!/set-sink-id)), we create an audio graph that generates a three-second burst of white noise via an {{domxref("AudioBufferSourceNode")}}, which we also run through a {{domxref("GainNode")}} to quiet things down a bit.
We also provide the user with a dropdown menu to allow them to change the audio output device on the fly. To do this, we:
1. Provide a button to populate the dropdown menu. We first invoke {{domxref("MediaDevices.getUserMedia()")}} to trigger the permissions prompt we need to allow device enumeration, then use {{domxref("MediaDevices.enumerateDevices()")}} to get all the available devices. We loop through the different devices and make each one available as an option in a {{htmlelement("select")}} element. We also create a "None" option for the case where you don't want to play your audio in any output.
```js
mediaDeviceBtn.addEventListener('click', async () => {
  if ("setSinkId" in AudioContext.prototype) {
    selectDiv.innerHTML = '';
    const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
    const devices = await navigator.mediaDevices.enumerateDevices();

    // Most of the DOM scripting to generate the dropdown cut out for brevity

    const audioOutputs = devices.filter(
      (device) => device.kind === 'audiooutput' && device.deviceId !== 'default'
    );

    audioOutputs.forEach((device) => {
      const option = document.createElement('option');
      option.value = device.deviceId;
      option.textContent = device.label;
      select.appendChild(option);
    });

    const option = document.createElement('option');
    option.value = 'none';
    option.textContent = 'None';
    select.appendChild(option);

    // ...
```
2. Add a {{domxref("HTMLElement/change_event", "change")}} event listener to the {{htmlelement("select")}} element to change the sink ID and therefore the audio output device when a new value is selected. If "None" is selected in the dropdown, we invoke `setSinkId()` with the `{ type: 'none' }` object parameter to select no audio device; otherwise we run it with the audio device ID contained in the `<select>` element's `value` attribute as the parameter.
```js
    // ...

    select.addEventListener('change', async () => {
      if (select.value === 'none') {
        await audioCtx.setSinkId({ type: 'none' });
      } else {
        await audioCtx.setSinkId(select.value);
      }
    });
  }
});
```
The output device can be changed during audio playback, as well as before, or between plays.
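The documented exceptions can be handled explicitly when switching devices. The following is a minimal sketch (the `switchOutput` helper is ours; it assumes an existing `AudioContext` and a device ID obtained from {{domxref("MediaDevices.enumerateDevices()")}}):

```js
// Hypothetical helper: try to switch the output device, falling back to
// the current output if the requested device no longer exists.
async function switchOutput(audioCtx, deviceId) {
  try {
    await audioCtx.setSinkId(deviceId);
    return true;
  } catch (err) {
    if (err.name === "NotFoundError") {
      console.warn("Device not found; keeping current output.");
      return false;
    }
    throw err; // NotAllowedError, InvalidAccessError, …
  }
}
```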
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [SetSinkId test example](https://set-sink-id.glitch.me/)
- [Change the destination output device in Web Audio](https://developer.chrome.com/blog/audiocontext-setsinkid/)
- {{domxref("AudioContext.sinkId")}}
- {{domxref("AudioContext/sinkchange_event", "sinkchange")}}
---
title: "AudioContext: createMediaStreamTrackSource() method"
short-title: createMediaStreamTrackSource()
slug: Web/API/AudioContext/createMediaStreamTrackSource
page-type: web-api-instance-method
browser-compat: api.AudioContext.createMediaStreamTrackSource
---
{{ APIRef("Web Audio API") }}
The **`createMediaStreamTrackSource()`** method of the {{
domxref("AudioContext") }} interface creates and returns a
{{domxref("MediaStreamTrackAudioSourceNode")}} which represents an audio source whose
data comes from the specified {{domxref("MediaStreamTrack")}}.
This differs from {{domxref("AudioContext.createMediaStreamSource",
"createMediaStreamSource()")}}, which creates a
{{domxref("MediaStreamAudioSourceNode")}} whose audio comes from the audio track in a
specified {{domxref("MediaStream")}} whose {{domxref("MediaStreamTrack.id", "id")}} is
first, lexicographically (alphabetically).
## Syntax
```js-nolint
createMediaStreamTrackSource(track)
```
### Parameters
- `track`
- : The {{domxref("MediaStreamTrack")}} to use as the source of all audio data for the
new node.
### Return value
A {{domxref("MediaStreamTrackAudioSourceNode")}} object which acts as a source for
audio data found in the specified audio track.
## Examples
In this example, {{domxref("MediaDevices.getUserMedia", "getUserMedia()")}} is used to
request access to the user's microphone. Once that access is attained, an audio context
is established and a {{domxref("MediaStreamTrackAudioSourceNode")}} is created using
`createMediaStreamTrackSource()`, taking its audio from the first audio track
in the stream returned by `getUserMedia()`.
Then a {{domxref("BiquadFilterNode")}} is created using
{{domxref("BaseAudioContext/createBiquadFilter", "createBiquadFilter()")}}, and it's
configured as desired to perform a lowshelf filter on the audio coming from the source.
The output from the microphone is then routed into the new biquad filter, and the
filter's output is in turn routed to the audio context's
{{domxref("BaseAudioContext/destination", "destination")}}.
```js
navigator.mediaDevices
  .getUserMedia({ audio: true, video: false })
  .then((stream) => {
    audio.srcObject = stream;
    audio.onloadedmetadata = (e) => {
      audio.play();
      audio.muted = true;
    };

    const audioCtx = new AudioContext();
    const audioTracks = stream.getAudioTracks();
    const source = audioCtx.createMediaStreamTrackSource(audioTracks[0]);

    const biquadFilter = audioCtx.createBiquadFilter();
    biquadFilter.type = "lowshelf";
    biquadFilter.frequency.value = 3000;
    biquadFilter.gain.value = 20;

    source.connect(biquadFilter);
    biquadFilter.connect(audioCtx.destination);
  })
  .catch((err) => {
    // Handle getUserMedia() error
  });
```
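If the stream contains more than one audio track, you can choose which one to pass to `createMediaStreamTrackSource()` rather than always taking the first. A small sketch (the `pickAudioTrack` helper and the "USB" preference are purely illustrative assumptions):

```js
// Hypothetical helper: prefer a track whose label mentions "USB",
// otherwise fall back to the stream's first audio track.
function pickAudioTrack(stream) {
  const tracks = stream.getAudioTracks();
  return tracks.find((track) => /usb/i.test(track.label)) ?? tracks[0];
}

// Usage (browser only):
// const source = audioCtx.createMediaStreamTrackSource(pickAudioTrack(stream));
```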
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- Web Audio API
- [Using the Web Audio API](/en-US/docs/Web/API/Web_Audio_API/Using_Web_Audio_API)
- {{domxref("MediaStreamTrackAudioSourceNode")}}
---
title: "AudioContext: AudioContext() constructor"
short-title: AudioContext()
slug: Web/API/AudioContext/AudioContext
page-type: web-api-constructor
browser-compat: api.AudioContext.AudioContext
---
{{APIRef("Web Audio API")}}
The **`AudioContext()`** constructor
creates a new {{domxref("AudioContext")}} object which represents an audio-processing
graph, built from audio modules linked together, each represented by an
{{domxref("AudioNode")}}.
## Syntax
```js-nolint
new AudioContext()
new AudioContext(options)
```
### Parameters
- `options` {{optional_inline}}
- : An object used to configure the context. The available properties are:
- `latencyHint` {{optional_inline}}
- : The type of playback that the context will be used for, as a predefined string (`"balanced"`, `"interactive"` or `"playback"`)
or a double-precision floating-point value indicating the preferred maximum latency of the context in seconds.
The user agent may or may not choose to meet this request;
check the value of {{domxref("AudioContext.baseLatency")}} to determine the true latency after creating the context.
- `"balanced"`: The browser balances audio output latency and power consumption when selecting a latency value.
- `"interactive"` (default value): The audio is involved in interactive elements,
such as responding to user actions or needing to coincide with visual cues such as a video or game action.
The browser selects the lowest possible latency that doesn't cause glitches in the audio. This is likely to require increased power consumption.
- `"playback"`: The browser selects a latency that will maximize playback time by minimizing power consumption at the expense of latency.
Useful for non-interactive playback, such as playing music.
- `sampleRate` {{optional_inline}}
- : Indicates the sample rate to use for the new context. The value must be a floating-point value indicating the sample rate,
in samples per second, for which to configure the new context;
additionally, the value must be one which is supported by {{domxref("AudioBuffer.sampleRate")}}.
The value will typically be between 8,000 Hz and 96,000 Hz; the default will vary depending on the output device, but the sample rate 44,100 Hz is the most common.
If the `sampleRate` property is not included in the options, or the options are not specified when creating the audio context,
the new context's output device's preferred sample rate is used by default.
- `sinkId` {{optional_inline}} {{Experimental_Inline}}
- : Specifies the sink ID of the audio output device to use for the `AudioContext`. This can take one of the following value types:
- A string representing the sink ID, retrieved for example via the `deviceId` property of the {{domxref("MediaDeviceInfo")}} objects returned by {{domxref("MediaDevices.enumerateDevices()")}}.
- An object representing different options for a sink ID. Currently, this takes a single property, `type`, with a value of `none`. Setting this parameter causes the audio to be processed without being played through any audio output device.
### Return value
A new {{domxref("AudioContext")}} instance.
### Exceptions
- `NotSupportedError` {{domxref("DOMException")}}
- : Thrown if the specified `sampleRate` isn't supported by the context.
## Usage notes
The specification doesn't go into a lot of detail about things like how many audio
contexts a user agent should support, or minimum or maximum latency requirements (if
any), so these details can vary from browser to browser. Be sure to check the values if
they matter to you.
In particular, the specification doesn't indicate a maximum or minimum number of audio
contexts that must be able to be open at the same time, so this is left up to the
browser implementations to decide.
### Google Chrome
#### Per-tab audio context limitation in Chrome
Prior to version 66, Google Chrome only supported up to six audio contexts _per
tab_ at a time.
#### Non-standard exceptions in Chrome
If the value of the `latencyHint` property isn't valid,
Chrome throws a {{jsxref("TypeError")}} exception with the message
"The provided value '...' is not a valid enum value of type
AudioContextLatencyCategory".
## Example
This example creates a new {{domxref("AudioContext")}} for interactive audio
(optimizing for latency), with a sample rate of 44.1kHz and a specific audio output.
```js
const audioCtx = new AudioContext({
  latencyHint: "interactive",
  sampleRate: 44100,
  sinkId: "bb04fea9a8318c96de0bd...", // truncated for brevity
});
```
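The numeric `latencyHint` form works similarly — a sketch, assuming a browser environment (the hint is advisory; the value actually chosen is reported by {{domxref("AudioContext.baseLatency")}}):

```js
// Sketch: request roughly 20 ms of output latency. Wrapped in a function
// because AudioContext is only available in a browser.
function makeLowLatencyContext() {
  const ctx = new AudioContext({ latencyHint: 0.02 });
  console.log(`Requested 0.02 s, got a base latency of ${ctx.baseLatency} s`);
  return ctx;
}
```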
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- {{domxref("OfflineAudioContext.OfflineAudioContext()", "new OfflineAudioContext()")}} constructor
---
title: "AudioContext: sinkchange event"
short-title: sinkchange
slug: Web/API/AudioContext/sinkchange_event
page-type: web-api-event
status:
- experimental
browser-compat: api.AudioContext.sinkchange_event
---
{{APIRef("Web Audio API")}}{{SeeCompatTable}}
The **`sinkchange`** event of the {{domxref("AudioContext")}} interface is fired when the output audio device (and therefore, the {{domxref("AudioContext.sinkId")}}) has changed.
## Syntax
Use the event name in methods like {{domxref("EventTarget.addEventListener", "addEventListener()")}}, or set an event handler property.
```js
addEventListener("sinkchange", (event) => {});
onsinkchange = (event) => {};
```
## Event type
{{domxref("Event")}}.
{{InheritanceDiagram("Event")}}
## Examples
A `sinkchange` event listener can be used to report a change of audio output device. Note that if {{domxref("AudioContext.sinkId", "sinkId")}} contains an {{domxref("AudioSinkInfo")}} object, it indicates that the audio has been changed to not play on any output device.
```js
audioCtx.addEventListener("sinkchange", () => {
  if (typeof audioCtx.sinkId === "object" && audioCtx.sinkId.type === "none") {
    console.log("Audio changed to not play on any device");
  } else {
    console.log(`Audio output device changed to ${audioCtx.sinkId}`);
  }
});
```
See our [SetSinkId test example](https://set-sink-id.glitch.me/) for working code.
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [SetSinkId test example](https://set-sink-id.glitch.me/)
- [Change the destination output device in Web Audio](https://developer.chrome.com/blog/audiocontext-setsinkid/)
- {{domxref("AudioContext.setSinkId()")}}
- {{domxref("AudioContext.sinkId")}}
---
title: "AudioContext: createMediaStreamDestination() method"
short-title: createMediaStreamDestination()
slug: Web/API/AudioContext/createMediaStreamDestination
page-type: web-api-instance-method
browser-compat: api.AudioContext.createMediaStreamDestination
---
{{ APIRef("Web Audio API") }}
The `createMediaStreamDestination()` method of the {{ domxref("AudioContext") }} Interface is used to create a new {{domxref("MediaStreamAudioDestinationNode")}} object associated with a [WebRTC](/en-US/docs/Web/API/WebRTC_API) {{domxref("MediaStream")}} representing an audio stream, which may be stored in a local file or sent to another computer.
The {{domxref("MediaStream")}} is created when the node is created and is accessible via the {{domxref("MediaStreamAudioDestinationNode")}}'s `stream` attribute. This stream can be used in a similar way as a `MediaStream` obtained via {{domxref("MediaDevices.getUserMedia", "navigator.mediaDevices.getUserMedia()")}} — for example, its tracks can be sent to a remote peer using the `addTrack()` method of `RTCPeerConnection` (the older `addStream()` method is deprecated).
For more details about media stream destination nodes, check out the {{domxref("MediaStreamAudioDestinationNode")}} reference page.
## Syntax
```js-nolint
createMediaStreamDestination()
```
### Parameters
None.
### Return value
A {{domxref("MediaStreamAudioDestinationNode")}}.
## Examples
In the following simple example, we create a {{domxref("MediaStreamAudioDestinationNode")}}, an {{ domxref("OscillatorNode") }} and a {{ domxref("MediaRecorder") }} (the example will therefore only work in Firefox and Chrome at this time.) The `MediaRecorder` is set up to record information from the `MediaStreamDestinationNode`.

When the button is clicked, the oscillator starts, and the `MediaRecorder` is started. When the button is clicked again, the oscillator and `MediaRecorder` both stop. Stopping the `MediaRecorder` causes the `dataavailable` event to fire, and the event data is pushed into the `chunks` array. After that, the `stop` event fires, a new blob of type Opus is made from the data in the `chunks` array, and the {{htmlelement("audio")}} element's source is set to an object URL created from the blob.

From here, you can play and save the Opus file.
```html
<!doctype html>
<html lang="en-US">
  <head>
    <meta charset="UTF-8" />
    <title>createMediaStreamDestination() demo</title>
  </head>
  <body>
    <h1>createMediaStreamDestination() demo</h1>
    <p>Encoding a pure sine wave to an Opus file</p>
    <button>Make sine wave</button>
    <audio controls></audio>
    <script>
      const b = document.querySelector("button");
      let clicked = false;
      const chunks = [];
      const ac = new AudioContext();
      const osc = ac.createOscillator();
      const dest = ac.createMediaStreamDestination();
      const mediaRecorder = new MediaRecorder(dest.stream);
      osc.connect(dest);

      b.addEventListener("click", (e) => {
        if (!clicked) {
          mediaRecorder.start();
          osc.start(0);
          e.target.textContent = "Stop recording";
          clicked = true;
        } else {
          mediaRecorder.stop();
          osc.stop(0);
          e.target.disabled = true;
        }
      });

      mediaRecorder.ondataavailable = (evt) => {
        // Push each chunk (blobs) in an array
        chunks.push(evt.data);
      };

      mediaRecorder.onstop = (evt) => {
        // Make blob out of our blobs, and open it.
        const blob = new Blob(chunks, { type: "audio/ogg; codecs=opus" });
        document.querySelector("audio").src = URL.createObjectURL(blob);
      };
    </script>
  </body>
</html>
```
> **Note:** You can [view this example live](https://mdn.github.io/webaudio-examples/create-media-stream-destination/index.html), or [study the source code](https://github.com/mdn/webaudio-examples/blob/main/create-media-stream-destination/index.html), on GitHub.
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Using the Web Audio API](/en-US/docs/Web/API/Web_Audio_API/Using_Web_Audio_API)
---
title: "AudioContext: suspend() method"
short-title: suspend()
slug: Web/API/AudioContext/suspend
page-type: web-api-instance-method
browser-compat: api.AudioContext.suspend
---
{{ APIRef("Web Audio API") }}
The `suspend()` method of the {{ domxref("AudioContext") }} Interface suspends the progression of time in the audio context, temporarily halting audio hardware access and reducing CPU/battery usage in the process — this is useful if you want an application to power down the audio hardware when it will not be using an audio context for a while.
This method will cause an `INVALID_STATE_ERR` exception to be thrown if called on an {{domxref("OfflineAudioContext")}}.
## Syntax
```js-nolint
suspend()
```
### Parameters
None.
### Return value
A {{jsxref("Promise")}} that resolves with {{jsxref('undefined')}}. The promise is rejected if the context has already been closed.
## Examples
The following snippet is taken from our [AudioContext states demo](https://github.com/mdn/webaudio-examples/blob/main/audiocontext-states/index.html) ([see it running live](https://mdn.github.io/webaudio-examples/audiocontext-states/).) When the suspend/resume button is clicked, the {{domxref("BaseAudioContext/state", "AudioContext.state")}} is queried — if it is `running`, `suspend()` is called; if it is `suspended`, {{domxref("AudioContext/resume", "resume()")}} is called. In each case, the text label of the button is updated as appropriate once the promise resolves.
```js
susresBtn.onclick = () => {
  if (audioCtx.state === "running") {
    audioCtx.suspend().then(() => {
      susresBtn.textContent = "Resume context";
    });
  } else if (audioCtx.state === "suspended") {
    audioCtx.resume().then(() => {
      susresBtn.textContent = "Suspend context";
    });
  }
};
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Using the Web Audio API](/en-US/docs/Web/API/Web_Audio_API/Using_Web_Audio_API)
- [Web Audio API](/en-US/docs/Web/API/Web_Audio_API)
---
title: "AudioContext: sinkId property"
short-title: sinkId
slug: Web/API/AudioContext/sinkId
page-type: web-api-instance-property
status:
- experimental
browser-compat: api.AudioContext.sinkId
---
{{APIRef("Web Audio API")}}{{SeeCompatTable}}{{SecureContext_Header}}
The **`sinkId`** read-only property of the
{{domxref("AudioContext")}} interface returns the sink ID of the current output audio device.
## Value
This property returns one of the following values, depending on how the sink ID was set:
- An empty string
- : If a sink ID has not explicitly been set, the default system audio output device will be used, and `sinkId` will return an empty string.
- A string
- : If the sink ID is set as a string value (using {{domxref("AudioContext.setSinkId", "setSinkId()")}}, or the `sinkId` {{domxref("AudioContext.AudioContext", "AudioContext()")}} constructor option), `sinkId` will return that same string value.
- An {{domxref("AudioSinkInfo")}} object
- : If the sink ID is set as an options object (using {{domxref("AudioContext.setSinkId", "setSinkId()")}}, or the `sinkId` {{domxref("AudioContext.AudioContext", "AudioContext()")}} constructor option), `sinkId` will return an {{domxref("AudioSinkInfo")}} object reflecting the same values set in the initial options object.
## Examples
In our [SetSinkId test example](https://set-sink-id.glitch.me/), we create an audio graph that generates a three-second burst of white noise via an {{domxref("AudioBufferSourceNode")}}, which we also run through a {{domxref("GainNode")}} to quiet things down a bit. We also provide the user with a dropdown menu to allow them to change the audio output device.
When the Play button is clicked, we assemble the audio graph and start it playing, and we also log information about the current device to the console based on the value of `sinkId`:
- An empty string means the default device is still being used.
- If the value is an object, the audio will not be playing on any device because we set an options object containing `type: 'none'`.
- Otherwise the value will be a sink ID string, so we log that.
```js
playBtn.addEventListener("click", () => {
  const source = audioCtx.createBufferSource();
  source.buffer = myArrayBuffer;
  source.connect(gain);
  gain.connect(audioCtx.destination);
  source.start();

  if (audioCtx.sinkId === "") {
    console.log("Audio playing on default device");
  } else if (
    typeof audioCtx.sinkId === "object" &&
    audioCtx.sinkId.type === "none"
  ) {
    console.log("Audio not playing on any device");
  } else {
    console.log(`Audio playing on device ${audioCtx.sinkId}`);
  }
});
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [SetSinkId test example](https://set-sink-id.glitch.me/)
- [Change the destination output device in Web Audio](https://developer.chrome.com/blog/audiocontext-setsinkid/)
- {{domxref("AudioContext.setSinkId()")}}
- {{domxref("AudioContext/sinkchange_event", "sinkchange")}}
---
title: "AudioContext: close() method"
short-title: close()
slug: Web/API/AudioContext/close
page-type: web-api-instance-method
browser-compat: api.AudioContext.close
---
{{ APIRef("Web Audio API") }}
The `close()` method of the {{ domxref("AudioContext") }} Interface closes the audio context, releasing any system audio resources that it uses.
This function does not automatically release all `AudioContext`-created objects, unless other references have been released as well; however, it will forcibly release any system audio resources that might prevent additional `AudioContexts` from being created and used, suspend the progression of audio time in the audio context, and stop processing audio data. The returned {{jsxref("Promise")}} resolves when all `AudioContext`-creation-blocking resources have been released. This method throws an `INVALID_STATE_ERR` exception if called on an {{domxref("OfflineAudioContext")}}.
## Syntax
```js-nolint
close()
```
### Parameters
None.
### Return value
A {{jsxref("Promise")}} that resolves with {{jsxref('undefined')}}.
## Examples
The following snippet is taken from our [AudioContext states demo](https://github.com/mdn/webaudio-examples/blob/main/audiocontext-states/index.html) ([see it running live](https://mdn.github.io/webaudio-examples/audiocontext-states/)). When the stop button is clicked, `close()` is called. When the promise resolves, the example is reset to its beginning state.
```js
stopBtn.onclick = () => {
audioCtx.close().then(() => {
startBtn.removeAttribute("disabled");
susresBtn.setAttribute("disabled", "disabled");
stopBtn.setAttribute("disabled", "disabled");
});
};
```
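Because `close()` returns a promise, the same teardown can also be written with `async`/`await`. The following is a minimal sketch, not part of the demo above; the `resetControls` callback is a hypothetical stand-in for the button updates shown in the snippet:

```js
// Close the context, then run any UI cleanup once the promise
// resolves. Errors (for example, calling close() on an already
// closed context) surface as a rejected promise.
async function stopAudio(audioCtx, resetControls) {
  try {
    await audioCtx.close();
    resetControls();
  } catch (err) {
    console.error(`Could not close the AudioContext: ${err.message}`);
  }
}
```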
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Using the Web Audio API](/en-US/docs/Web/API/Web_Audio_API/Using_Web_Audio_API)
- [Web Audio API](/en-US/docs/Web/API/Web_Audio_API)
---
title: SVGForeignObjectElement
slug: Web/API/SVGForeignObjectElement
page-type: web-api-interface
browser-compat: api.SVGForeignObjectElement
---
{{APIRef("SVG")}}
The **`SVGForeignObjectElement`** interface provides access to the properties of {{SVGElement("foreignObject")}} elements, as well as methods to manipulate them.
{{InheritanceDiagram}}
## Instance properties
_This interface also inherits properties from its parent, {{domxref("SVGGraphicsElement")}}._
- {{domxref("SVGForeignObjectElement.x")}} {{ReadOnlyInline}}
- : An {{domxref("SVGAnimatedLength")}} corresponding to the {{SVGAttr("x")}} attribute of the given {{SVGElement("foreignObject")}} element.
- {{domxref("SVGForeignObjectElement.y")}} {{ReadOnlyInline}}
- : An {{domxref("SVGAnimatedLength")}} corresponding to the {{SVGAttr("y")}} attribute of the given {{SVGElement("foreignObject")}} element.
- {{domxref("SVGForeignObjectElement.width")}} {{ReadOnlyInline}}
- : An {{domxref("SVGAnimatedLength")}} corresponding to the {{SVGAttr("width")}} attribute of the given {{SVGElement("foreignObject")}} element.
- {{domxref("SVGForeignObjectElement.height")}} {{ReadOnlyInline}}
- : An {{domxref("SVGAnimatedLength")}} corresponding to the {{SVGAttr("height")}} attribute of the given {{SVGElement("foreignObject")}} element.
## Instance methods
_This interface has no methods but inherits methods from its parent, {{domxref("SVGGraphicsElement")}}._
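## Examples

Each of the four geometry properties is an {{domxref("SVGAnimatedLength")}}, so the current value is read through its `baseVal.value`. The helper below is an illustrative sketch, not part of the API; it assumes `fo` is a `<foreignObject>` element obtained from the DOM:

```js
// Collect the foreignObject's layout rectangle from its four
// SVGAnimatedLength properties (x, y, width, height).
function readFORect(fo) {
  return {
    x: fo.x.baseVal.value,
    y: fo.y.baseVal.value,
    width: fo.width.baseVal.value,
    height: fo.height.baseVal.value,
  };
}

// In a document containing
// <foreignObject x="10" y="20" width="160" height="90">…</foreignObject>:
// const fo = document.querySelector("foreignObject");
// readFORect(fo);
```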
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- {{SVGElement("foreignObject")}}
---
title: CSSValueList
slug: Web/API/CSSValueList
page-type: web-api-interface
status:
- deprecated
browser-compat: api.CSSValueList
---
{{APIRef("CSSOM")}}{{Deprecated_Header}}
The **`CSSValueList`** interface derives from the {{DOMxRef("CSSValue")}} interface and provides the abstraction of an ordered collection of CSS values.
> **Note:** This interface was part of an attempt to create a typed CSS Object Model. This attempt has been abandoned, and most browsers do
> not implement it.
>
> To achieve your purpose, you can use:
>
> - the untyped [CSS Object Model](/en-US/docs/Web/API/CSS_Object_Model), widely supported, or
> - the modern [CSS Typed Object Model API](/en-US/docs/Web/API/CSS_Typed_OM_API), less supported and considered experimental.
Some properties allow an empty list in their syntax. In that case, these properties take the `none` identifier. So, an empty list means that the property has the value `none`.
The items in the `CSSValueList` are accessible via an integral index, starting from 0.
{{InheritanceDiagram}}
## Instance properties
_Inherits properties from its parent, {{DOMxRef("CSSValue")}}_.
- {{DOMxRef("CSSValueList.length")}} {{ReadOnlyInline}} {{Deprecated_Inline}}
- : An `unsigned long` representing the number of `CSSValues` in the list.
## Instance methods
- {{DOMxRef("CSSValueList.item()")}} {{Deprecated_Inline}}
- : This method is used to retrieve a {{DOMxRef("CSSValue")}} by ordinal index. The order in this collection represents the order of the values in the CSS style property. If index is greater than or equal to the number of values in the list, this returns `null`.
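## Examples

Because this interface is deprecated and most browsers never implemented it, the following is only a sketch of how a `CSSValueList` would be consumed: a plain indexed loop over `length`, using `item()` for access. The `collectValues` helper is hypothetical, and `list` only needs the shape this interface defines (a numeric `length` and an `item(index)` method):

```js
// Walk a CSSValueList-like collection in order and gather
// every value into a plain array.
function collectValues(list) {
  const values = [];
  for (let i = 0; i < list.length; i++) {
    values.push(list.item(i));
  }
  return values;
}
```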
## Specifications
This feature was originally defined in the [DOM Style Level 2](https://www.w3.org/TR/DOM-Level-2-Style/) specification, but has been dropped from any
standardization effort since then.
It has been superseded by a modern, but incompatible, [CSS Typed Object Model API](/en-US/docs/Web/API/CSS_Typed_OM_API) that is now on the standard track.
## Browser compatibility
{{Compat}}
## See also
- {{DOMxRef("CSSPrimitiveValue")}}
- {{DOMxRef("CSSValue")}}
---
title: "CSSValueList: length property"
short-title: length
slug: Web/API/CSSValueList/length
page-type: web-api-instance-property
status:
- deprecated
browser-compat: api.CSSValueList.length
---
{{APIRef("CSSOM")}}{{Deprecated_header}}
The **`length`** read-only property of the
{{domxref("CSSValueList")}} interface represents the number of {{domxref("CSSValue")}}s
in the list. The range of valid values of the indices is `0` to
`length-1` inclusive.
> **Note:** This property was part of an attempt to create a typed CSS Object Model. This attempt has been abandoned, and most browsers do
> not implement it.
>
> To achieve your purpose, you can use:
>
> - the untyped [CSS Object Model](/en-US/docs/Web/API/CSS_Object_Model), widely supported, or
> - the modern [CSS Typed Object Model API](/en-US/docs/Web/API/CSS_Typed_OM_API), less supported and considered experimental.
## Value
An `unsigned long` representing the number of {{domxref("CSSValue")}}s.
## Specifications
This feature was originally defined in the [DOM Style Level 2](https://www.w3.org/TR/DOM-Level-2-Style/) specification, but has been dropped from any
standardization effort since then.
It has been superseded by a modern, but incompatible, [CSS Typed Object Model API](/en-US/docs/Web/API/CSS_Typed_OM_API) that is now on the standard track.
## Browser compatibility
{{Compat}}
---
title: "CSSValueList: item() method"
short-title: item()
slug: Web/API/CSSValueList/item
page-type: web-api-instance-method
status:
- deprecated
browser-compat: api.CSSValueList.item
---
{{APIRef("CSSOM")}}{{Deprecated_header}}
The **`item()`** method of the {{domxref("CSSValueList")}}
interface is used to retrieve a {{domxref("CSSValue")}} by ordinal index.
The order in this collection represents the order of the values in the CSS style
property. If the index is greater than or equal to the number of values in the list,
this method returns `null`.
> **Note:** This method was part of an attempt to create a typed CSS Object Model. This attempt has been abandoned, and most browsers do
> not implement it.
>
> To achieve your purpose, you can use:
>
> - the untyped [CSS Object Model](/en-US/docs/Web/API/CSS_Object_Model), widely supported, or
> - the modern [CSS Typed Object Model API](/en-US/docs/Web/API/CSS_Typed_OM_API), less supported and considered experimental.
## Syntax
```js-nolint
item(index)
```
### Parameters
- `index`
- : An `unsigned long` representing the index of the CSS value within the
collection.
### Return value
A {{domxref("CSSValue")}} object at the `index` position in the
`CSSValueList`, or `null` if that is not a valid index.
## Specifications
This feature was originally defined in the [DOM Style Level 2](https://www.w3.org/TR/DOM-Level-2-Style/) specification, but has been dropped from any
standardization effort since then.
It has been superseded by a modern, but incompatible, [CSS Typed Object Model API](/en-US/docs/Web/API/CSS_Typed_OM_API) that is now on the standard track.
## Browser compatibility
{{Compat}}
---
title: PerformanceTiming
slug: Web/API/PerformanceTiming
page-type: web-api-interface
status:
- deprecated
browser-compat: api.PerformanceTiming
---
{{APIRef("Performance API")}}{{Deprecated_Header}}
> **Warning:** This interface is deprecated in the [Navigation Timing Level 2 specification](https://w3c.github.io/navigation-timing/#obsolete). Please use the {{domxref("PerformanceNavigationTiming")}} interface instead.
The **`PerformanceTiming`** interface is a legacy interface kept for backwards compatibility and contains properties that offer performance timing information for various events which occur during the loading and use of the current page. You get a `PerformanceTiming` object describing your page using the {{domxref("Performance.timing", "window.performance.timing")}} property.
## Instance properties
_The `PerformanceTiming` interface doesn't inherit any properties._
These properties each describe the time at which a particular point in the page loading process was reached. Some correspond to DOM events; others describe the time at which internal browser operations of interest took place.
Each time is provided as a number representing the moment, in milliseconds since the UNIX epoch.
These properties are listed in the order in which they occur during the navigation process.
- {{domxref("PerformanceTiming.navigationStart")}} {{ReadOnlyInline}} {{Deprecated_Inline}}
- : When the prompt for unload terminates on the previous document in the same browsing context. If there is no previous document, this value will be the same as `PerformanceTiming.fetchStart`.
- {{domxref("PerformanceTiming.unloadEventStart")}} {{ReadOnlyInline}} {{Deprecated_Inline}}
- : When the {{domxref("Window/unload_event", "unload")}} event has been thrown, indicating the time at which the previous document in the window began to unload. If there is no previous document, or if the previous document or one of the needed redirects is not of the same origin, the value returned is `0`.
- {{domxref("PerformanceTiming.unloadEventEnd")}} {{ReadOnlyInline}} {{Deprecated_Inline}}
- : When the {{domxref("Window/unload_event", "unload")}} event handler finishes. If there is no previous document, or if the previous document, or one of the needed redirects, is not of the same origin, the value returned is `0`.
- {{domxref("PerformanceTiming.redirectStart")}} {{ReadOnlyInline}} {{Deprecated_Inline}}
- : When the first HTTP redirect starts. If there is no redirect, or if one of the redirects is not of the same origin, the value returned is `0`.
- {{domxref("PerformanceTiming.redirectEnd")}} {{ReadOnlyInline}} {{Deprecated_Inline}}
- : When the last HTTP redirect is completed, that is when the last byte of the HTTP response has been received. If there is no redirect, or if one of the redirects is not of the same origin, the value returned is `0`.
- {{domxref("PerformanceTiming.fetchStart")}} {{ReadOnlyInline}} {{Deprecated_Inline}}
- : When the browser is ready to fetch the document using an HTTP request. This moment is _before_ the check to any application cache.
- {{domxref("PerformanceTiming.domainLookupStart")}} {{ReadOnlyInline}} {{Deprecated_Inline}}
- : When the domain lookup starts. If a persistent connection is used, or the information is stored in a cache or a local resource, the value will be the same as `PerformanceTiming.fetchStart`.
- {{domxref("PerformanceTiming.domainLookupEnd")}} {{ReadOnlyInline}} {{Deprecated_Inline}}
- : When the domain lookup is finished. If a persistent connection is used, or the information is stored in a cache or a local resource, the value will be the same as `PerformanceTiming.fetchStart`.
- {{domxref("PerformanceTiming.connectStart")}} {{ReadOnlyInline}} {{Deprecated_Inline}}
- : When the request to open a connection is sent to the network. If the transport layer reports an error and the connection establishment is started again, the last connection establishment start time is given. If a persistent connection is used, the value will be the same as `PerformanceTiming.fetchStart`.
- {{domxref("PerformanceTiming.connectEnd")}} {{ReadOnlyInline}} {{Deprecated_Inline}}
- : When the connection is opened over the network. If the transport layer reports an error and the connection establishment is started again, the last connection establishment end time is given. If a persistent connection is used, the value will be the same as `PerformanceTiming.fetchStart`. A connection is considered opened once any secure connection handshake or SOCKS authentication has completed.
- {{domxref("PerformanceTiming.secureConnectionStart")}} {{ReadOnlyInline}} {{Deprecated_Inline}}
- : When the secure connection handshake starts. If no such connection is requested, it returns `0`.
- {{domxref("PerformanceTiming.requestStart")}} {{ReadOnlyInline}} {{Deprecated_Inline}}
- : When the browser sent the request to obtain the actual document, from the server or from a cache. If the transport layer fails after the start of the request and the connection is reopened, this property will be set to the time corresponding to the new request.
- {{domxref("PerformanceTiming.responseStart")}} {{ReadOnlyInline}} {{Deprecated_Inline}}
- : When the browser received the first byte of the response, from the server, from a cache, or from a local resource.
- {{domxref("PerformanceTiming.responseEnd")}} {{ReadOnlyInline}} {{Deprecated_Inline}}
- : When the browser received the last byte of the response, or when the connection is closed if this happened first, from the server, the cache, or from a local resource.
- {{domxref("PerformanceTiming.domLoading")}} {{ReadOnlyInline}} {{Deprecated_Inline}}
- : When the parser started its work, that is when its {{domxref("Document.readyState")}} changes to `'loading'` and the corresponding {{domxref("Document/readystatechange_event", "readystatechange")}} event is thrown.
- {{domxref("PerformanceTiming.domInteractive")}} {{ReadOnlyInline}} {{Deprecated_Inline}}
- : When the parser finished its work on the main document, that is when its {{domxref("Document.readyState")}} changes to `'interactive'` and the corresponding {{domxref("Document/readystatechange_event", "readystatechange")}} event is thrown.
- {{domxref("PerformanceTiming.domContentLoadedEventStart")}} {{ReadOnlyInline}} {{Deprecated_Inline}}
- : Right before the parser sent the {{domxref("Document/DOMContentLoaded_event", "DOMContentLoaded")}} event, that is right after all the scripts that need to be executed right after parsing have been executed.
- {{domxref("PerformanceTiming.domContentLoadedEventEnd")}} {{ReadOnlyInline}} {{Deprecated_Inline}}
- : Right after all the scripts that need to be executed as soon as possible, in order or not, have been executed.
- {{domxref("PerformanceTiming.domComplete")}} {{ReadOnlyInline}} {{Deprecated_Inline}}
- : When the parser finished its work on the main document, that is when its {{domxref("Document.readyState")}} changes to `'complete'` and the corresponding {{domxref("Document/readystatechange_event", "readystatechange")}} event is thrown.
- {{domxref("PerformanceTiming.loadEventStart")}} {{ReadOnlyInline}} {{Deprecated_Inline}}
- : When the {{domxref("Window/load_event", "load")}} event was sent for the current document. If this event has not yet been sent, it returns `0`.
- {{domxref("PerformanceTiming.loadEventEnd")}} {{ReadOnlyInline}} {{Deprecated_Inline}}
- : When the {{domxref("Window/load_event", "load")}} event handler terminated, that is when the load event is completed. If this event has not yet been sent, or is not yet completed, it returns `0`.
## Instance methods
_The `PerformanceTiming`_ _interface doesn't inherit any methods._
- {{domxref("PerformanceTiming.toJSON()")}} {{Deprecated_Inline}}
- : Returns a [JSON object](/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON) representing this `PerformanceTiming` object.
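## Examples

These millisecond timestamps are most useful as differences between one another. The helper below is a sketch, not part of the API; it derives some common durations from any object carrying these properties — for example `window.performance.timing` in a browser that still implements this interface:

```js
// Derive common page-load durations (in milliseconds) from a
// PerformanceTiming-like object. A metric is null when one of its
// endpoints is still 0 or missing (the event has not fired yet).
function pageLoadMetrics(t) {
  const diff = (end, start) => (end && start ? end - start : null);
  return {
    dns: diff(t.domainLookupEnd, t.domainLookupStart),
    tcp: diff(t.connectEnd, t.connectStart),
    request: diff(t.responseEnd, t.requestStart),
    domParsing: diff(t.domInteractive, t.domLoading),
    total: diff(t.loadEventEnd, t.navigationStart),
  };
}
```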
## Specifications
This feature is no longer on track to become a standard, as the [Navigation Timing specification](https://w3c.github.io/navigation-timing/#obsolete) has marked it as deprecated.
Use the {{domxref("PerformanceNavigationTiming")}} interface instead.
## Browser compatibility
{{Compat}}
## See also
- The {{domxref("Performance.timing")}} property that creates such an object.
- {{domxref("PerformanceNavigationTiming")}} (part of Navigation Timing Level 2) that has superseded this API.
---
title: "PerformanceTiming: responseEnd property"
short-title: responseEnd
slug: Web/API/PerformanceTiming/responseEnd
page-type: web-api-instance-property
status:
- deprecated
browser-compat: api.PerformanceTiming.responseEnd
---
{{APIRef("Performance API")}}{{Deprecated_Header}}
> **Warning:** This property is deprecated in the [Navigation Timing Level 2 specification](https://w3c.github.io/navigation-timing/#obsolete). Please use the {{domxref("PerformanceNavigationTiming")}}
> interface instead.
The legacy
**`PerformanceTiming.responseEnd`**
read-only property returns an `unsigned long long` representing the moment,
in milliseconds since the UNIX epoch, when the browser received the last byte of the
response, or when the connection is closed if that happened first, from the server,
from a cache, or from a local resource.
## Value
An `unsigned long long`.
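## Examples

A typical use is subtracting {{domxref("PerformanceTiming.requestStart")}} from this value to estimate how long the request/response exchange took. The `responseDuration` helper below is an illustrative sketch, assuming the legacy `performance.timing` object is available:

```js
// Time from sending the request to receiving the last byte of the
// response, in milliseconds. Returns null while responseEnd is
// still 0 (the response has not finished arriving).
function responseDuration(timing) {
  const { requestStart, responseEnd } = timing;
  return responseEnd ? responseEnd - requestStart : null;
}

// In a supporting browser:
// responseDuration(performance.timing);
```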
## Specifications
This feature is no longer on track to become a standard, as the [Navigation Timing specification](https://w3c.github.io/navigation-timing/#obsolete) has marked it as deprecated.
Use the {{domxref("PerformanceNavigationTiming")}} interface instead.
## Browser compatibility
{{Compat}}
## See also
- The {{domxref("PerformanceTiming")}} interface it belongs to.
---
title: "PerformanceTiming: navigationStart property"
short-title: navigationStart
slug: Web/API/PerformanceTiming/navigationStart
page-type: web-api-instance-property
status:
- deprecated
browser-compat: api.PerformanceTiming.navigationStart
---
{{APIRef("Performance API")}}{{Deprecated_Header}}
> **Warning:** This property is deprecated in the [Navigation Timing Level 2 specification](https://w3c.github.io/navigation-timing/#obsolete).
> Please use the {{domxref("PerformanceNavigationTiming")}} interface instead.
The legacy
**`PerformanceTiming.navigationStart`**
read-only property returns an `unsigned long long` representing the moment,
in milliseconds since the UNIX epoch, right after the prompt for unload terminates on
the previous document in the same browsing context. If there is no previous document,
this value will be the same as {{domxref("PerformanceTiming.fetchStart")}}.
## Value
An `unsigned long long`.
## Specifications
This feature is no longer on track to become a standard, as the [Navigation Timing specification](https://w3c.github.io/navigation-timing/#obsolete) has marked it as deprecated.
Use the {{domxref("PerformanceNavigationTiming")}} interface instead.
## Browser compatibility
{{Compat}}
## See also
- The {{domxref("PerformanceTiming")}} interface it belongs to.
---
title: "PerformanceTiming: loadEventStart property"
short-title: loadEventStart
slug: Web/API/PerformanceTiming/loadEventStart
page-type: web-api-instance-property
status:
- deprecated
browser-compat: api.PerformanceTiming.loadEventStart
---
{{APIRef("Performance API")}}{{Deprecated_Header}}
> **Warning:** This property is deprecated in the [Navigation Timing Level 2 specification](https://w3c.github.io/navigation-timing/#obsolete). Please use the {{domxref("PerformanceNavigationTiming")}}
> interface's {{domxref("PerformanceNavigationTiming.loadEventStart")}} read-only property instead.
The legacy
**`PerformanceTiming.loadEventStart`**
read-only property returns an `unsigned long long` representing the moment,
in milliseconds since the UNIX epoch, when the {{domxref("Window/load_event", "load")}} event was sent for the
current document. If this event has not yet been sent, it returns `0`.
## Value
An `unsigned long long`.
## Specifications
This feature is no longer on track to become a standard, as the [Navigation Timing specification](https://w3c.github.io/navigation-timing/#obsolete) has marked it as deprecated.
Use the {{domxref("PerformanceNavigationTiming")}} interface instead.
## Browser compatibility
{{Compat}}
## See also
- The {{domxref("PerformanceTiming")}} interface it belongs to.
---
title: "PerformanceTiming: unloadEventStart property"
short-title: unloadEventStart
slug: Web/API/PerformanceTiming/unloadEventStart
page-type: web-api-instance-property
status:
- deprecated
browser-compat: api.PerformanceTiming.unloadEventStart
---
{{APIRef("Performance API")}}{{Deprecated_Header}}
> **Warning:** This property is deprecated in the [Navigation Timing Level 2 specification](https://w3c.github.io/navigation-timing/#obsolete). Please use the {{domxref("PerformanceNavigationTiming")}}
> interface instead.
The legacy
**`PerformanceTiming.unloadEventStart`**
read-only property returns an `unsigned long long` representing the moment,
in milliseconds since the UNIX epoch, the {{domxref("Window/unload_event", "unload")}} event has been thrown. If
there is no previous document, or if the previous document, or one of the needed
redirects, is not of the same origin, the value returned is `0`.
## Value
An `unsigned long long`.
## Specifications
This feature is no longer on track to become a standard, as the [Navigation Timing specification](https://w3c.github.io/navigation-timing/#obsolete) has marked it as deprecated.
Use the {{domxref("PerformanceNavigationTiming")}} interface instead.
## Browser compatibility
{{Compat}}
## See also
- The {{domxref("PerformanceTiming")}} interface it belongs to.
---
title: "PerformanceTiming: connectStart property"
short-title: connectStart
slug: Web/API/PerformanceTiming/connectStart
page-type: web-api-instance-property
status:
- deprecated
browser-compat: api.PerformanceTiming.connectStart
---
{{APIRef("Performance API")}}{{Deprecated_Header}}
> **Warning:** This property is deprecated in the [Navigation Timing Level 2 specification](https://w3c.github.io/navigation-timing/#obsolete). Please use the {{domxref("PerformanceNavigationTiming")}}
> interface instead.
The legacy
**`PerformanceTiming.connectStart`**
read-only property returns an `unsigned long long` representing the moment,
in milliseconds since the UNIX epoch, when the request to open a connection is sent to
the network. If the transport layer reports an error and the connection establishment is
started again, the last connection establishment start time is given. If a persistent
connection is used, the value will be the same as
{{domxref("PerformanceTiming.fetchStart")}}.
## Value
An `unsigned long long`.
## Specifications
This feature is no longer on track to become a standard, as the [Navigation Timing specification](https://w3c.github.io/navigation-timing/#obsolete) has marked it as deprecated.
Use the {{domxref("PerformanceNavigationTiming")}} interface instead.
## Browser compatibility
{{Compat}}
## See also
- The {{domxref("PerformanceTiming")}} interface it belongs to.
---
title: "PerformanceTiming: fetchStart property"
short-title: fetchStart
slug: Web/API/PerformanceTiming/fetchStart
page-type: web-api-instance-property
status:
- deprecated
browser-compat: api.PerformanceTiming.fetchStart
---
{{APIRef("Performance API")}}{{Deprecated_Header}}
> **Warning:** This property is deprecated in the [Navigation Timing Level 2 specification](https://w3c.github.io/navigation-timing/#obsolete). Please use the {{domxref("PerformanceNavigationTiming")}}
> interface instead.
The legacy
**`PerformanceTiming.fetchStart`**
read-only property returns an `unsigned long long` representing the moment,
in milliseconds since the UNIX epoch, the browser is ready to fetch the document using
an HTTP request. This moment is _before_ the check to any application cache.
## Value
An `unsigned long long`.
## Specifications
This feature is no longer on track to become a standard, as the [Navigation Timing specification](https://w3c.github.io/navigation-timing/#obsolete) has marked it as deprecated.
Use the {{domxref("PerformanceNavigationTiming")}} interface instead.
## Browser compatibility
{{Compat}}
## See also
- The {{domxref("PerformanceTiming")}} interface it belongs to.
---
title: "PerformanceTiming: domContentLoadedEventEnd property"
short-title: domContentLoadedEventEnd
slug: Web/API/PerformanceTiming/domContentLoadedEventEnd
page-type: web-api-instance-property
status:
- deprecated
browser-compat: api.PerformanceTiming.domContentLoadedEventEnd
---
{{APIRef("Performance API")}}{{Deprecated_Header}}
> **Warning:** This property is deprecated in the [Navigation Timing Level 2 specification](https://w3c.github.io/navigation-timing/#obsolete). Please use the {{domxref("PerformanceNavigationTiming")}}
> interface instead.
The legacy
**`PerformanceTiming.domContentLoadedEventEnd`**
read-only property returns an `unsigned long long` representing the moment,
in milliseconds since the UNIX epoch, right after all the scripts that need to be
executed as soon as possible, in order or not, have been executed.
## Value
An `unsigned long long`.
## Specifications
This feature is no longer on track to become a standard, as the [Navigation Timing specification](https://w3c.github.io/navigation-timing/#obsolete) has marked it as deprecated.
Use the {{domxref("PerformanceNavigationTiming")}} interface instead.
## Browser compatibility
{{Compat}}
## See also
- The {{domxref("PerformanceTiming")}} interface it belongs to.
---
title: "PerformanceTiming: unloadEventEnd property"
short-title: unloadEventEnd
slug: Web/API/PerformanceTiming/unloadEventEnd
page-type: web-api-instance-property
status:
- deprecated
browser-compat: api.PerformanceTiming.unloadEventEnd
---
{{APIRef("Performance API")}}{{Deprecated_Header}}
> **Warning:** This property is deprecated in the [Navigation Timing Level 2 specification](https://w3c.github.io/navigation-timing/#obsolete). Please use the {{domxref("PerformanceNavigationTiming")}}
> interface instead.
The legacy
**`PerformanceTiming.unloadEventEnd`**
read-only property returns an `unsigned long long` representing the moment,
in milliseconds since the UNIX epoch, the {{domxref("Window/unload_event", "unload")}} event handler finishes. If
there is no previous document, or if the previous document, or one of the needed
redirects, is not of the same origin, the value returned is `0`.
## Value
An `unsigned long long`.
## Specifications
This feature is no longer on track to become a standard, as the [Navigation Timing specification](https://w3c.github.io/navigation-timing/#obsolete) has marked it as deprecated.
Use the {{domxref("PerformanceNavigationTiming")}} interface instead.
## Browser compatibility
{{Compat}}
## See also
- The {{domxref("PerformanceTiming")}} interface it belongs to.
---
title: "PerformanceTiming: domComplete property"
short-title: domComplete
slug: Web/API/PerformanceTiming/domComplete
page-type: web-api-instance-property
status:
- deprecated
browser-compat: api.PerformanceTiming.domComplete
---
{{APIRef("Performance API")}}{{Deprecated_Header}}
> **Warning:** This property is deprecated in the [Navigation Timing Level 2 specification](https://w3c.github.io/navigation-timing/#obsolete). Please use the {{domxref("PerformanceNavigationTiming")}}
> interface instead.
The legacy
**`PerformanceTiming.domComplete`**
read-only property returns an `unsigned long long` representing the moment,
in milliseconds since the UNIX epoch, when the parser finished its work on the main
document, that is when its {{domxref("Document.readyState")}} changes to
`'complete'` and the corresponding {{domxref("Document/readystatechange_event", "readystatechange")}} event is
thrown.
## Value
An `unsigned long long`.
## Specifications
This feature is no longer on track to become a standard, as the [Navigation Timing specification](https://w3c.github.io/navigation-timing/#obsolete) has marked it as deprecated.
Use the {{domxref("PerformanceNavigationTiming")}} interface instead.
## Browser compatibility
{{Compat}}
## See also
- The {{domxref("PerformanceTiming")}} interface it belongs to.
---
title: "PerformanceTiming: domainLookupStart property"
short-title: domainLookupStart
slug: Web/API/PerformanceTiming/domainLookupStart
page-type: web-api-instance-property
status:
- deprecated
browser-compat: api.PerformanceTiming.domainLookupStart
---
{{APIRef("Performance API")}}{{Deprecated_Header}}
> **Warning:** This property is deprecated in the [Navigation Timing Level 2 specification](https://w3c.github.io/navigation-timing/#obsolete). Please use the {{domxref("PerformanceNavigationTiming")}}
> interface instead.
The legacy
**`PerformanceTiming.domainLookupStart`**
read-only property returns an `unsigned long long` representing the moment,
in milliseconds since the UNIX epoch, when the domain lookup starts. If a persistent
connection is used, or the information is stored in a cache or a local resource, the
value will be the same as {{domxref("PerformanceTiming.fetchStart")}}.
## Value
An `unsigned long long`.
## Specifications
This feature is no longer on track to become a standard, as the [Navigation Timing specification](https://w3c.github.io/navigation-timing/#obsolete) has marked it as deprecated.
Use the {{domxref("PerformanceNavigationTiming")}} interface instead.
## Browser compatibility
{{Compat}}
## See also
- The {{domxref("PerformanceTiming")}} interface it belongs to.
| 0 |
data/mdn-content/files/en-us/web/api/performancetiming | data/mdn-content/files/en-us/web/api/performancetiming/domloading/index.md | ---
title: "PerformanceTiming: domLoading property"
short-title: domLoading
slug: Web/API/PerformanceTiming/domLoading
page-type: web-api-instance-property
status:
- deprecated
browser-compat: api.PerformanceTiming.domLoading
---
{{APIRef("Performance API")}}{{Deprecated_Header}}
> **Warning:** This property is deprecated in the [Navigation Timing Level 2 specification](https://w3c.github.io/navigation-timing/#obsolete). Please use the {{domxref("PerformanceNavigationTiming")}}
> interface instead.
The legacy
**`PerformanceTiming.domLoading`**
read-only property returns an `unsigned long long` representing the moment,
in milliseconds since the UNIX epoch, when the parser started its work, that is when its
{{domxref("Document.readyState")}} changes to `'loading'` and the
corresponding {{domxref("Document/readystatechange_event", "readystatechange")}} event is thrown.
## Value
An `unsigned long long`.
## Specifications
This feature is no longer on track to become a standard, as the [Navigation Timing specification](https://w3c.github.io/navigation-timing/#obsolete) has marked it as deprecated.
Use the {{domxref("PerformanceNavigationTiming")}} interface instead.
## Browser compatibility
{{Compat}}
## See also
- The {{domxref("PerformanceTiming")}} interface it belongs to.
| 0 |
data/mdn-content/files/en-us/web/api/performancetiming | data/mdn-content/files/en-us/web/api/performancetiming/dominteractive/index.md | ---
title: "PerformanceTiming: domInteractive property"
short-title: domInteractive
slug: Web/API/PerformanceTiming/domInteractive
page-type: web-api-instance-property
status:
- deprecated
browser-compat: api.PerformanceTiming.domInteractive
---
{{APIRef("Performance API")}}{{Deprecated_Header}}
> **Warning:** This property is deprecated in the [Navigation Timing Level 2 specification](https://w3c.github.io/navigation-timing/#obsolete). Please use the {{domxref("PerformanceNavigationTiming")}}
> interface instead.
The legacy
**`PerformanceTiming.domInteractive`**
read-only property returns an `unsigned long long` representing the moment,
in milliseconds since the UNIX epoch, when the parser finished its work on the main
document, that is when its {{domxref("Document.readyState")}} changes to
`'interactive'` and the corresponding {{domxref("Document/readystatechange_event", "readystatechange")}} event is
thrown.
This property can be used to measure the loading speed of a website as users
_perceive_ it. Nevertheless, there are a few caveats, which occur if scripts
block rendering and are not loaded asynchronously, or if custom Web fonts are used. [Check if you are in one of these cases](https://www.stevesouders.com/blog/2015/08/07/dominteractive-is-it-really/) before using this property as a proxy for the
user experience of a website's loading speed.
## Value
An `unsigned long long`.
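## Examples

As a rough sketch of the "time to interactive" measurement discussed above, the hypothetical helper below (not part of the API) computes the delay between the start of navigation and the document becoming interactive:

```js
// Hypothetical helper: takes a PerformanceTiming-like object and returns
// the time from navigation start to the document becoming interactive,
// in milliseconds.
function getTimeToInteractive(timing) {
  return timing.domInteractive - timing.navigationStart;
}

// In a browser that still implements the deprecated API:
// console.log(getTimeToInteractive(performance.timing));
```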
## Specifications
This feature is no longer on track to become a standard, as the [Navigation Timing specification](https://w3c.github.io/navigation-timing/#obsolete) has marked it as deprecated.
Use the {{domxref("PerformanceNavigationTiming")}} interface instead.
## Browser compatibility
{{Compat}}
## See also
- The {{domxref("PerformanceTiming")}} interface it belongs to.
- The article "[domInteractive: is it? really?](https://www.stevesouders.com/blog/2015/08/07/dominteractive-is-it-really/)" explaining when you can use this property as a proxy for the
user experience of loading a website.
| 0 |
data/mdn-content/files/en-us/web/api/performancetiming | data/mdn-content/files/en-us/web/api/performancetiming/loadeventend/index.md | ---
title: "PerformanceTiming: loadEventEnd property"
short-title: loadEventEnd
slug: Web/API/PerformanceTiming/loadEventEnd
page-type: web-api-instance-property
status:
- deprecated
browser-compat: api.PerformanceTiming.loadEventEnd
---
{{APIRef("Performance API")}}{{Deprecated_Header}}
> **Warning:** This property is deprecated in the [Navigation Timing Level 2 specification](https://w3c.github.io/navigation-timing/#obsolete). Please use the {{domxref("PerformanceNavigationTiming")}}
> interface's {{domxref("PerformanceNavigationTiming.loadEventEnd")}} read-only property instead.
The legacy
**`PerformanceTiming.loadEventEnd`**
read-only property returns an `unsigned long long` representing the moment,
in milliseconds since the UNIX epoch, when the {{domxref("Window/load_event", "load")}} event handler
terminated, that is when the load event is completed. If this event has not yet been
sent, or is not yet completed, it returns `0`.
## Value
An `unsigned long long`.
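## Examples

Because `loadEventEnd` is `0` until the load event has completed, a total page-load measurement should guard against that case. The helper name below is hypothetical, not part of the API:

```js
// Hypothetical helper: returns the total page load time in milliseconds,
// or null while the load event has not yet completed.
function getPageLoadTime(timing) {
  if (timing.loadEventEnd === 0) {
    return null;
  }
  return timing.loadEventEnd - timing.navigationStart;
}

// In a browser that still implements the deprecated API:
// console.log(getPageLoadTime(performance.timing));
```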
## Specifications
This feature is no longer on track to become a standard, as the [Navigation Timing specification](https://w3c.github.io/navigation-timing/#obsolete) has marked it as deprecated.
Use the {{domxref("PerformanceNavigationTiming")}} interface instead.
## Browser compatibility
{{Compat}}
## See also
- The {{domxref("PerformanceTiming")}} interface it belongs to.
| 0 |
data/mdn-content/files/en-us/web/api/performancetiming | data/mdn-content/files/en-us/web/api/performancetiming/domcontentloadedeventstart/index.md | ---
title: "PerformanceTiming: domContentLoadedEventStart property"
short-title: domContentLoadedEventStart
slug: Web/API/PerformanceTiming/domContentLoadedEventStart
page-type: web-api-instance-property
status:
- deprecated
browser-compat: api.PerformanceTiming.domContentLoadedEventStart
---
{{APIRef("Performance API")}}{{Deprecated_Header}}
> **Warning:** This property is deprecated in the [Navigation Timing Level 2 specification](https://w3c.github.io/navigation-timing/#obsolete). Please use the {{domxref("PerformanceNavigationTiming")}}
> interface instead.
The legacy
**`PerformanceTiming.domContentLoadedEventStart`**
read-only property returns an `unsigned long long` representing the moment,
in milliseconds since the UNIX epoch, right before the parser sent the
{{domxref("Document/DOMContentLoaded_event", "DOMContentLoaded")}} event, that is right after all the scripts that need to be
executed right after parsing has been executed.
## Value
An `unsigned long long`.
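## Examples

A sketch of measuring how long it took from the start of navigation until the `DOMContentLoaded` event was about to fire (the helper name is hypothetical, not part of the API):

```js
// Hypothetical helper: time from navigation start until just before the
// DOMContentLoaded event is sent, in milliseconds.
function getTimeToDomContentLoaded(timing) {
  return timing.domContentLoadedEventStart - timing.navigationStart;
}

// In a browser that still implements the deprecated API:
// console.log(getTimeToDomContentLoaded(performance.timing));
```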
## Specifications
This feature is no longer on track to become a standard, as the [Navigation Timing specification](https://w3c.github.io/navigation-timing/#obsolete) has marked it as deprecated.
Use the {{domxref("PerformanceNavigationTiming")}} interface instead.
## Browser compatibility
{{Compat}}
## See also
- The {{domxref("PerformanceTiming")}} interface it belongs to.
| 0 |
data/mdn-content/files/en-us/web/api/performancetiming | data/mdn-content/files/en-us/web/api/performancetiming/redirectstart/index.md | ---
title: "PerformanceTiming: redirectStart property"
short-title: redirectStart
slug: Web/API/PerformanceTiming/redirectStart
page-type: web-api-instance-property
status:
- deprecated
browser-compat: api.PerformanceTiming.redirectStart
---
{{APIRef("Performance API")}}{{Deprecated_Header}}
> **Warning:** This property is deprecated in the [Navigation Timing Level 2 specification](https://w3c.github.io/navigation-timing/#obsolete). Please use the {{domxref("PerformanceNavigationTiming")}}
> interface instead.
The legacy
**`PerformanceTiming.redirectStart`**
read-only property returns an `unsigned long long` representing the moment,
in milliseconds since the UNIX epoch, the first HTTP redirect starts. If there is no
redirect, or if one of the redirects is not of the same origin, the value returned is
`0`.
## Value
An `unsigned long long`.
## Specifications
This feature is no longer on track to become a standard, as the [Navigation Timing specification](https://w3c.github.io/navigation-timing/#obsolete) has marked it as deprecated.
Use the {{domxref("PerformanceNavigationTiming")}} interface instead.
## Browser compatibility
{{Compat}}
## See also
- The {{domxref("PerformanceTiming")}} interface it belongs to.
| 0 |
data/mdn-content/files/en-us/web/api/performancetiming | data/mdn-content/files/en-us/web/api/performancetiming/msfirstpaint/index.md | ---
title: "PerformanceTiming: msFirstPaint property"
short-title: msFirstPaint
slug: Web/API/PerformanceTiming/MsFirstPaint
page-type: web-api-instance-property
status:
- non-standard
---
{{APIRef("Performance API API")}}{{Non-standard_header}}
**`msFirstPaint`** is a read-only property which gets the time
when the document loaded by the window object began to be displayed to the user.
Put another way, `msFirstPaint` utilizes the browser to measure when the
first content completes being painted in the window. It is available from JavaScript and
can be reported from the field.
This proprietary property is specific to Internet Explorer and Microsoft Edge.
## Syntax
```js-nolint
p = object.msFirstPaint
```
## Value
An Integer value that represents the time when the document began to be displayed or 0
if the document could not be loaded.
The numerical value reported represents the number of milliseconds between the recorded
time and midnight January 1, 1970 (UTC).
This property is supported only for documents displayed in IE9 Standards mode.
## Example
The following example shows how to calculate the time that is required to request the
document before the document begins to display for the user.
```js
const oTiming = window.performance.timing;
const iTimeMS = oTiming.msFirstPaint - oTiming.navigationStart;
```
| 0 |
data/mdn-content/files/en-us/web/api/performancetiming | data/mdn-content/files/en-us/web/api/performancetiming/domainlookupend/index.md | ---
title: "PerformanceTiming: domainLookupEnd property"
short-title: domainLookupEnd
slug: Web/API/PerformanceTiming/domainLookupEnd
page-type: web-api-instance-property
status:
- deprecated
browser-compat: api.PerformanceTiming.domainLookupEnd
---
{{APIRef("Performance API")}}{{Deprecated_Header}}
> **Warning:** This property is deprecated in the [Navigation Timing Level 2 specification](https://w3c.github.io/navigation-timing/#obsolete). Please use the {{domxref("PerformanceNavigationTiming")}}
> interface instead.
The legacy
**`PerformanceTiming.domainLookupEnd`**
read-only property returns an `unsigned long long` representing the moment,
in milliseconds since the UNIX epoch, where the domain lookup is finished. If a
persistent connection is used, or the information is stored in a cache or a local
resource, the value will be the same as {{domxref("PerformanceTiming.fetchStart")}}.
## Value
An `unsigned long long`.
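## Examples

Together with {{domxref("PerformanceTiming.domainLookupStart")}}, this property can be used to approximate the DNS lookup duration. The helper name below is hypothetical, not part of the API; note that the result is `0` when the lookup was cached or a persistent connection was reused, since both values then equal `fetchStart`:

```js
// Hypothetical helper: DNS lookup duration in milliseconds (0 when the
// lookup was cached or the connection was reused).
function getDnsLookupTime(timing) {
  return timing.domainLookupEnd - timing.domainLookupStart;
}

// In a browser that still implements the deprecated API:
// console.log(getDnsLookupTime(performance.timing));
```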
## Specifications
This feature is no longer on track to become a standard, as the [Navigation Timing specification](https://w3c.github.io/navigation-timing/#obsolete) has marked it as deprecated.
Use the {{domxref("PerformanceNavigationTiming")}} interface instead.
## Browser compatibility
{{Compat}}
## See also
- The {{domxref("PerformanceTiming")}} interface it belongs to.
| 0 |
data/mdn-content/files/en-us/web/api/performancetiming | data/mdn-content/files/en-us/web/api/performancetiming/secureconnectionstart/index.md | ---
title: "PerformanceTiming: secureConnectionStart property"
short-title: secureConnectionStart
slug: Web/API/PerformanceTiming/secureConnectionStart
page-type: web-api-instance-property
status:
- deprecated
browser-compat: api.PerformanceTiming.secureConnectionStart
---
{{APIRef("Performance API")}}{{Deprecated_Header}}
> **Warning:** This property is deprecated in the [Navigation Timing Level 2 specification](https://w3c.github.io/navigation-timing/#obsolete). Please use the {{domxref("PerformanceNavigationTiming")}} interface instead.
The legacy
**`PerformanceTiming.secureConnectionStart`**
read-only property returns an `unsigned long long` representing the moment,
in milliseconds since the UNIX epoch, where the secure connection handshake starts. If
no such connection is requested, it returns `0`.
## Value
An `unsigned long long`.
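## Examples

Because `secureConnectionStart` is `0` when no secure connection was requested, a TLS handshake measurement should guard against that case. The helper below is a hypothetical sketch (not part of the API) combining this property with {{domxref("PerformanceTiming.connectEnd")}}:

```js
// Hypothetical helper: TLS handshake duration in milliseconds, or null
// when no secure connection was requested.
function getTlsHandshakeTime(timing) {
  if (timing.secureConnectionStart === 0) {
    return null;
  }
  return timing.connectEnd - timing.secureConnectionStart;
}

// In a browser that still implements the deprecated API:
// console.log(getTlsHandshakeTime(performance.timing));
```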
## Specifications
This feature is no longer on track to become a standard, as the [Navigation Timing specification](https://w3c.github.io/navigation-timing/#obsolete) has marked it as deprecated.
Use the {{domxref("PerformanceNavigationTiming")}} interface instead.
## Browser compatibility
{{Compat}}
## See also
- The {{domxref("PerformanceTiming")}} interface it belongs to.
| 0 |
data/mdn-content/files/en-us/web/api/performancetiming | data/mdn-content/files/en-us/web/api/performancetiming/requeststart/index.md | ---
title: "PerformanceTiming: requestStart property"
short-title: requestStart
slug: Web/API/PerformanceTiming/requestStart
page-type: web-api-instance-property
status:
- deprecated
browser-compat: api.PerformanceTiming.requestStart
---
{{ APIRef("PerformanceTiming") }} {{Deprecated_Header}}
> **Warning:** This property is deprecated in the [Navigation Timing Level 2 specification](https://w3c.github.io/navigation-timing/#obsolete). Please use the {{domxref("PerformanceNavigationTiming")}}
> interface instead.
The legacy
**`PerformanceTiming.requestStart`**
read-only property returns an `unsigned long long` representing the moment,
in milliseconds since the UNIX epoch, when the browser sent the request to obtain the
actual document, from the server or from a cache. If the transport layer fails after the
start of the request and the connection is reopened, this property will be set to the
time corresponding to the new request.
## Value
An `unsigned long long`.
## Specifications
This feature is no longer on track to become a standard, as the [Navigation Timing specification](https://w3c.github.io/navigation-timing/#obsolete) has marked it as deprecated.
Use the {{domxref("PerformanceNavigationTiming")}} interface instead.
## Browser compatibility
{{Compat}}
## See also
- The {{domxref("PerformanceTiming")}} interface it belongs to.
| 0 |
data/mdn-content/files/en-us/web/api/performancetiming | data/mdn-content/files/en-us/web/api/performancetiming/responsestart/index.md | ---
title: "PerformanceTiming: responseStart property"
short-title: responseStart
slug: Web/API/PerformanceTiming/responseStart
page-type: web-api-instance-property
status:
- deprecated
browser-compat: api.PerformanceTiming.responseStart
---
{{APIRef("Performance API")}}{{Deprecated_Header}}
> **Warning:** This property is deprecated in the [Navigation Timing Level 2 specification](https://w3c.github.io/navigation-timing/#obsolete). Please use the {{domxref("PerformanceNavigationTiming")}}
> interface instead.
The legacy
**`PerformanceTiming.responseStart`**
read-only property returns an `unsigned long long` representing the moment in
time (in milliseconds since the UNIX epoch) when the browser received the first byte of
the response from the server, cache, or local resource.
## Value
An `unsigned long long`.
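## Examples

Combined with {{domxref("PerformanceTiming.requestStart")}}, this property can be used to sketch a "time to first byte" measurement. The helper name below is hypothetical, not part of the API:

```js
// Hypothetical helper: time from sending the request to receiving the
// first byte of the response, in milliseconds.
function getTimeToFirstByte(timing) {
  return timing.responseStart - timing.requestStart;
}

// In a browser that still implements the deprecated API:
// console.log(getTimeToFirstByte(performance.timing));
```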
## Specifications
This feature is no longer on track to become a standard, as the [Navigation Timing specification](https://w3c.github.io/navigation-timing/#obsolete) has marked it as deprecated.
Use the {{domxref("PerformanceNavigationTiming")}} interface instead.
## Browser compatibility
{{Compat}}
## See also
- The {{domxref("PerformanceTiming")}} interface it belongs to.
| 0 |
data/mdn-content/files/en-us/web/api/performancetiming | data/mdn-content/files/en-us/web/api/performancetiming/redirectend/index.md | ---
title: "PerformanceTiming: redirectEnd property"
short-title: redirectEnd
slug: Web/API/PerformanceTiming/redirectEnd
page-type: web-api-instance-property
status:
- deprecated
browser-compat: api.PerformanceTiming.redirectEnd
---
{{APIRef("Performance API")}}{{Deprecated_Header}}
> **Warning:** This property is deprecated in the [Navigation Timing Level 2 specification](https://w3c.github.io/navigation-timing/#obsolete). Please use the {{domxref("PerformanceNavigationTiming")}}
> interface instead.
The legacy
**`PerformanceTiming.redirectEnd`**
read-only property returns an `unsigned long long` representing the moment,
in milliseconds since the UNIX epoch, the last HTTP redirect is completed, that is when
the last byte of the HTTP response has been received. If there is no redirect, or if one
of the redirects is not of the same origin, the value returned is `0`.
## Value
An `unsigned long long`.
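## Examples

Together with {{domxref("PerformanceTiming.redirectStart")}}, this property can be used to approximate the time spent in same-origin redirects. The helper name below is hypothetical, not part of the API; the result is `0` when there was no redirect, since both values are then `0`:

```js
// Hypothetical helper: total time spent in same-origin HTTP redirects,
// in milliseconds (0 when there was no measurable redirect).
function getRedirectTime(timing) {
  return timing.redirectEnd - timing.redirectStart;
}

// In a browser that still implements the deprecated API:
// console.log(getRedirectTime(performance.timing));
```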
## Specifications
This feature is no longer on track to become a standard, as the [Navigation Timing specification](https://w3c.github.io/navigation-timing/#obsolete) has marked it as deprecated.
Use the {{domxref("PerformanceNavigationTiming")}} interface instead.
## Browser compatibility
{{Compat}}
## See also
- The {{domxref("Performance")}} interface it belongs to.
| 0 |
data/mdn-content/files/en-us/web/api/performancetiming | data/mdn-content/files/en-us/web/api/performancetiming/tojson/index.md | ---
title: "PerformanceTiming: toJSON() method"
short-title: toJSON()
slug: Web/API/PerformanceTiming/toJSON
page-type: web-api-instance-method
status:
- deprecated
browser-compat: api.PerformanceTiming.toJSON
---
{{APIRef("Performance API")}}{{deprecated_header}}
> **Warning:** This method is deprecated in the [Navigation Timing Level 2 specification](https://w3c.github.io/navigation-timing/#obsolete). Please use the {{domxref("PerformanceNavigationTiming")}}
> interface instead.
The legacy **`toJSON()`** method of the {{domxref("PerformanceTiming")}} interface is a {{Glossary("Serialization","serializer")}}; it returns a JSON representation of the {{domxref("PerformanceTiming")}} object.
## Syntax
```js-nolint
toJSON()
```
### Parameters
None.
### Return value
A {{jsxref("JSON")}} object that is the serialization of the {{domxref("PerformanceTiming")}} object.
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- {{jsxref("JSON")}}
| 0 |
data/mdn-content/files/en-us/web/api/performancetiming | data/mdn-content/files/en-us/web/api/performancetiming/connectend/index.md | ---
title: "PerformanceTiming: connectEnd property"
short-title: connectEnd
slug: Web/API/PerformanceTiming/connectEnd
page-type: web-api-instance-property
status:
- deprecated
browser-compat: api.PerformanceTiming.connectEnd
---
{{APIRef("Performance API")}}{{Deprecated_Header}}
> **Warning:** This property is deprecated in the [Navigation Timing Level 2 specification](https://w3c.github.io/navigation-timing/#obsolete). Please use the {{domxref("PerformanceNavigationTiming")}}
> interface instead.
The legacy
**`PerformanceTiming.connectEnd`**
read-only property returns an `unsigned long long` representing the moment,
in milliseconds since the UNIX epoch, when the network connection is opened. If the
transport layer reports an error and the connection establishment is started again, the
last connection establishment end time is given. If a persistent connection is used, the
value will be the same as {{domxref("PerformanceTiming.fetchStart")}}. A connection is
considered opened when any secure connection handshake, or SOCKS authentication, has
terminated.
## Value
An `unsigned long long`.
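## Examples

Combined with `connectStart`, this property can be used to sketch a connection-establishment measurement. The helper name below is hypothetical, not part of the API; note that the result is `0` when a persistent connection was reused:

```js
// Hypothetical helper: time taken to establish the connection, in
// milliseconds (0 when a persistent connection was reused).
function getConnectionTime(timing) {
  return timing.connectEnd - timing.connectStart;
}

// In a browser that still implements the deprecated API:
// console.log(getConnectionTime(performance.timing));
```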
## Specifications
This feature is no longer on track to become a standard, as the [Navigation Timing specification](https://w3c.github.io/navigation-timing/#obsolete) has marked it as deprecated.
Use the {{domxref("PerformanceNavigationTiming")}} interface instead.
## Browser compatibility
{{Compat}}
## See also
- The {{domxref("PerformanceTiming")}} interface it belongs to.
| 0 |
data/mdn-content/files/en-us/web/api | data/mdn-content/files/en-us/web/api/rtcencodedaudioframe/index.md | ---
title: RTCEncodedAudioFrame
slug: Web/API/RTCEncodedAudioFrame
page-type: web-api-interface
browser-compat: api.RTCEncodedAudioFrame
---
{{APIRef("WebRTC")}}
The **`RTCEncodedAudioFrame`** interface of the [WebRTC API](/en-US/docs/Web/API/WebRTC_API) represents an encoded audio frame in the WebRTC receiver or sender pipeline, which may be modified using a [WebRTC Encoded Transform](/en-US/docs/Web/API/WebRTC_API/Using_Encoded_Transforms).
The interface provides methods and properties to get metadata about the frame, allowing its format and order in the sequence of frames to be determined.
The `data` property gives access to the encoded frame data as a buffer, which might be encrypted, or otherwise modified by a transform.
> **Note:** This feature is available in [_Dedicated_ Web Workers](/en-US/docs/Web/API/Web_Workers_API#worker_types).
## Instance properties
- {{domxref("RTCEncodedAudioFrame.timestamp")}} {{ReadOnlyInline}}
- : Returns the timestamp at which sampling of the frame started.
- {{domxref("RTCEncodedAudioFrame.data")}}
- : Returns a buffer containing the encoded frame data.
## Instance methods
- {{DOMxRef("RTCEncodedAudioFrame.getMetadata()")}}
- : Returns the metadata associated with the frame.
## Examples
This code snippet shows a handler for the `rtctransform` event in a {{domxref("Worker")}} that implements a {{domxref("TransformStream")}}, and pipes encoded frames through it from the `event.transformer.readable` to `event.transformer.writable` (`event.transformer` is a {{domxref("RTCRtpScriptTransformer")}}, the worker-side counterpart of {{domxref("RTCRtpScriptTransform")}}).
If the transformer is inserted into an audio stream, the `transform()` method is called with an `RTCEncodedAudioFrame` whenever a new frame is enqueued on `event.transformer.readable`.
The `transform()` method shows how this might be read, modified using a fictional encryption function, and then enqueued on the controller (this ultimately pipes it through to the `event.transformer.writable`, and then back into the WebRTC pipeline).
```js
addEventListener("rtctransform", (event) => {
  const transform = new TransformStream({
async transform(encodedFrame, controller) {
// Reconstruct the original frame.
const view = new DataView(encodedFrame.data);
// Construct a new buffer
const newData = new ArrayBuffer(encodedFrame.data.byteLength);
const newView = new DataView(newData);
      // Encrypt frame bytes using the encryptFunction() method (not shown)
for (let i = 0; i < encodedFrame.data.byteLength; ++i) {
const encryptedByte = encryptFunction(~view.getInt8(i));
newView.setInt8(i, encryptedByte);
}
encodedFrame.data = newData;
controller.enqueue(encodedFrame);
},
});
event.transformer.readable
.pipeThrough(transform)
.pipeTo(event.transformer.writable);
});
```
Note that more complete examples are provided in [Using WebRTC Encoded Transforms](/en-US/docs/Web/API/WebRTC_API/Using_Encoded_Transforms).
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Using WebRTC Encoded Transforms](/en-US/docs/Web/API/WebRTC_API/Using_Encoded_Transforms)
- {{domxref("TransformStream")}}
- {{DOMxRef("RTCRtpScriptTransformer")}}
- {{DOMxRef("RTCEncodedVideoFrame")}}
| 0 |
data/mdn-content/files/en-us/web/api/rtcencodedaudioframe | data/mdn-content/files/en-us/web/api/rtcencodedaudioframe/data/index.md | ---
title: "RTCEncodedAudioFrame: data property"
short-title: data
slug: Web/API/RTCEncodedAudioFrame/data
page-type: web-api-instance-property
browser-compat: api.RTCEncodedAudioFrame.data
---
{{APIRef("WebRTC")}}
The **`data`** property of the {{domxref("RTCEncodedAudioFrame")}} interface returns a buffer containing the data for an encoded frame.
## Value
An {{jsxref("ArrayBuffer")}}.
## Examples
This example [WebRTC encoded transform](/en-US/docs/Web/API/WebRTC_API/Using_Encoded_Transforms) shows how you might get the frame data in a {{domxref("TransformStream")}} `transform()` function and modify the bits.
The `transform()` function constructs a {{jsxref("DataView")}} on the buffer in the frame `data` property, and also creates a view on a new {{jsxref("ArrayBuffer")}}.
It then writes the negated bytes in the original data to the new buffer, assigns the buffer to the encoded frame `data` property, and enqueues the modified frame on the stream.
```js
addEventListener("rtctransform", (event) => {
const transform = new TransformStream({
async transform(encodedFrame, controller) {
// Reconstruct the original frame.
const view = new DataView(encodedFrame.data);
// Construct a new buffer
const newData = new ArrayBuffer(encodedFrame.data.byteLength);
const newView = new DataView(newData);
// Negate all bits in the incoming frame
for (let i = 0; i < encodedFrame.data.byteLength; ++i) {
newView.setInt8(i, ~view.getInt8(i));
}
encodedFrame.data = newData;
controller.enqueue(encodedFrame);
},
});
event.transformer.readable
.pipeThrough(transform)
.pipeTo(event.transformer.writable);
});
```
Note that the surrounding code shown here is described in [Using WebRTC Encoded Transforms](/en-US/docs/Web/API/WebRTC_API/Using_Encoded_Transforms).
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Using WebRTC Encoded Transforms](/en-US/docs/Web/API/WebRTC_API/Using_Encoded_Transforms)
| 0 |
data/mdn-content/files/en-us/web/api/rtcencodedaudioframe | data/mdn-content/files/en-us/web/api/rtcencodedaudioframe/getmetadata/index.md | ---
title: "RTCEncodedAudioFrame: getMetadata() method"
short-title: getMetadata()
slug: Web/API/RTCEncodedAudioFrame/getMetadata
page-type: web-api-instance-method
browser-compat: api.RTCEncodedAudioFrame.getMetadata
---
{{APIRef("WebRTC")}}
The **`getMetadata()`** method of the {{domxref("RTCEncodedAudioFrame")}} interface returns an object containing the metadata associated with the frame.
This includes information about the frame, including the audio encoding used, the synchronization source and contributing sources, and the sequence number (for incoming frames).
## Syntax
```js-nolint
getMetadata()
```
### Parameters
None.
### Return value
An object with the following properties:
- `synchronizationSource`
- : A positive integer value indicating synchronization source ("ssrc") of the stream of RTP packets that are described by this frame.
A source might be something like a microphone, or a mixer application that combines multiple sources.
All packets from the same source share the same time source and sequence space, and so can be ordered relative to each other.
Note that two frames with the same value refer to the same source.
- `payloadType`
- : A positive integer value in the range from 0 to 127 that describes the format of the RTP payload.
    The mapping of values to formats is defined in RFC3550 and, more specifically, in [Section 6: Payload Type Definitions](https://www.rfc-editor.org/rfc/rfc3551#section-6) of RFC3551.
- `contributingSources`
- : An {{jsxref("Array")}} of sources (ssrc) that have contributed to the frame.
Consider the case of a conferencing application that combines audio from multiple users.
The `synchronizationSource` would include the ssrc of the application, while `contributingSources` would include the ssrc values of all the individual audio sources.
- `sequenceNumber`
- : The sequence number of an incoming audio frame (not used for outgoing frames) that can be used for reconstructing the original send-order of frames.
    This is a number between 0 and 32767.
Note that while numbers are allocated sequentially when sent, they will overflow at 32767 and restart back at 0.
    Therefore, to determine whether one frame's sequence number comes after another's, you must compare them using [serial number arithmetic](https://en.wikipedia.org/wiki/Serial_number_arithmetic). <!-- [RFC1982] -->
## Examples
This example [WebRTC encoded transform](/en-US/docs/Web/API/WebRTC_API/Using_Encoded_Transforms) implementation shows how you might get the frame metadata in a `transform()` function and log it.
```js
addEventListener("rtctransform", (event) => {
  const transform = new TransformStream({
async transform(encodedFrame, controller) {
// Get the metadata and log
const frameMetaData = encodedFrame.getMetadata();
      console.log(frameMetaData);
// Enqueue the frame without modifying
controller.enqueue(encodedFrame);
},
});
event.transformer.readable
.pipeThrough(transform)
.pipeTo(event.transformer.writable);
});
```
The resulting object from a local microphone might look like the one shown below.
Note that there are no contributing sources because there is just one source, and no `sequenceNumber` because this is an outgoing frame.
```js
{
"payloadType": 109,
"synchronizationSource": 1876443470
}
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Using WebRTC Encoded Transforms](/en-US/docs/Web/API/WebRTC_API/Using_Encoded_Transforms)
| 0 |
data/mdn-content/files/en-us/web/api/rtcencodedaudioframe | data/mdn-content/files/en-us/web/api/rtcencodedaudioframe/timestamp/index.md | ---
title: "RTCEncodedAudioFrame: timestamp property"
short-title: timestamp
slug: Web/API/RTCEncodedAudioFrame/timestamp
page-type: web-api-instance-property
browser-compat: api.RTCEncodedAudioFrame.timestamp
---
{{APIRef("WebRTC")}}
The read-only **`timestamp`** property of the {{domxref("RTCEncodedAudioFrame")}} interface indicates the time at which frame sampling started.
## Value
A positive integer containing the sampling instant of the first byte in this frame, in microseconds.
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Using WebRTC Encoded Transforms](/en-US/docs/Web/API/WebRTC_API/Using_Encoded_Transforms)
| 0 |
data/mdn-content/files/en-us/web/api | data/mdn-content/files/en-us/web/api/gpusampler/index.md | ---
title: GPUSampler
slug: Web/API/GPUSampler
page-type: web-api-interface
status:
- experimental
browser-compat: api.GPUSampler
---
{{APIRef("WebGPU API")}}{{SeeCompatTable}}{{SecureContext_Header}}
The **`GPUSampler`** interface of the {{domxref("WebGPU API", "WebGPU API", "", "nocode")}} represents an object that can control how shaders transform and filter texture resource data.
A `GPUSampler` object instance is created using the {{domxref("GPUDevice.createSampler()")}} method.
{{InheritanceDiagram}}
## Instance properties
- {{domxref("GPUSampler.label", "label")}} {{Experimental_Inline}}
- : A string providing a label that can be used to identify the object, for example in {{domxref("GPUError")}} messages or console warnings.
## Examples
The following snippet creates a `GPUSampler` that does trilinear filtering and repeats texture coordinates:
```js
// ...
const sampler = device.createSampler({
addressModeU: "repeat",
addressModeV: "repeat",
magFilter: "linear",
minFilter: "linear",
mipmapFilter: "linear",
});
```
The WebGPU samples [Shadow Mapping sample](https://webgpu.github.io/webgpu-samples/samples/shadowMapping) uses comparison samplers to sample from a depth texture to render shadows.
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- The [WebGPU API](/en-US/docs/Web/API/WebGPU_API)
| 0 |
data/mdn-content/files/en-us/web/api/gpusampler | data/mdn-content/files/en-us/web/api/gpusampler/label/index.md | ---
title: "GPUSampler: label property"
short-title: label
slug: Web/API/GPUSampler/label
page-type: web-api-instance-property
status:
- experimental
browser-compat: api.GPUSampler.label
---
{{APIRef("WebGPU API")}}{{SeeCompatTable}}{{SecureContext_Header}}
The **`label`** property of the
{{domxref("GPUSampler")}} interface provides a label that can be used to identify the object, for example in {{domxref("GPUError")}} messages or console warnings.
This can be set by providing a `label` property in the descriptor object passed into the originating {{domxref("GPUDevice.createSampler()")}} call, or you can get and set it directly on the `GPUSampler` object.
## Value
A string. If this has not been previously set as described above, it will be an empty string.
## Examples
Setting and getting a label via `GPUSampler.label`:
```js
// ...
const sampler = device.createSampler({
compare: "less",
});
sampler.label = "mysampler";
console.log(sampler.label); // "mysampler"
```
Setting a label via the originating {{domxref("GPUDevice.createSampler()")}} call, and then getting it via `GPUSampler.label`:
```js
// ...
const sampler = device.createSampler({
compare: "less",
label: "mysampler",
});
console.log(sampler.label); // "mysampler"
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- The [WebGPU API](/en-US/docs/Web/API/WebGPU_API)
| 0 |
data/mdn-content/files/en-us/web/api | data/mdn-content/files/en-us/web/api/imagedata/index.md | ---
title: ImageData
slug: Web/API/ImageData
page-type: web-api-interface
browser-compat: api.ImageData
---
{{APIRef("Canvas API")}}
The **`ImageData`** interface represents the underlying pixel data of an area of a {{HTMLElement("canvas")}} element.
It is created using the {{domxref("ImageData.ImageData", "ImageData()")}} constructor or creator methods on the {{domxref("CanvasRenderingContext2D")}} object associated with a canvas: {{domxref("CanvasRenderingContext2D.createImageData", "createImageData()")}} and {{domxref("CanvasRenderingContext2D.getImageData", "getImageData()")}}. It can also be used to set a part of the canvas by using {{domxref("CanvasRenderingContext2D.putImageData", "putImageData()")}}.
{{AvailableInWorkers}}
## Constructors
- {{domxref("ImageData.ImageData", "ImageData()")}}
- : Creates an `ImageData` object from a given {{jsxref("Uint8ClampedArray")}} and the size of the image it contains. If no array is given, it creates an image of a transparent black rectangle. Note that this is the most common way to create such an object in workers as {{domxref("CanvasRenderingContext2D.createImageData", "createImageData()")}} is not available there.
## Instance properties
- {{domxref("ImageData.data")}} {{ReadOnlyInline}}
- : A {{jsxref("Uint8ClampedArray")}} representing a one-dimensional array containing the data in the RGBA order, with integer values between `0` and `255` (inclusive). The order goes by rows from the top-left pixel to the bottom-right.
- {{domxref("ImageData.colorSpace")}} {{ReadOnlyInline}}
- : A string indicating the color space of the image data.
- {{domxref("ImageData.height")}} {{ReadOnlyInline}}
- : An `unsigned long` representing the actual height, in pixels, of the `ImageData`.
- {{domxref("ImageData.width")}} {{ReadOnlyInline}}
- : An `unsigned long` representing the actual width, in pixels, of the `ImageData`.
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- {{domxref("CanvasRenderingContext2D")}}
- The {{HTMLElement("canvas")}} element and its associated interface, {{domxref("HTMLCanvasElement")}}.
| 0 |
data/mdn-content/files/en-us/web/api/imagedata | data/mdn-content/files/en-us/web/api/imagedata/data/index.md | ---
title: "ImageData: data property"
short-title: data
slug: Web/API/ImageData/data
page-type: web-api-instance-property
browser-compat: api.ImageData.data
---
{{APIRef("Canvas API")}}
The read-only **`ImageData.data`** property returns a
{{jsxref("Uint8ClampedArray")}} that contains the {{domxref("ImageData")}} object's
pixel data. Data is stored as a one-dimensional array in the RGBA order, with integer
values between `0` and `255` (inclusive).
## Value
A {{jsxref("Uint8ClampedArray")}}.
## Examples
### Getting an ImageData object's pixel data
This example creates an `ImageData` object that is 100 pixels wide and 100
pixels tall, making 10,000 pixels in all. The `data` array stores four values
for each pixel, making 4 x 10,000, or 40,000 values in all.
```js
let imageData = new ImageData(100, 100);
console.log(imageData.data); // Uint8ClampedArray[40000]
console.log(imageData.data.length); // 40000
```
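Because the array is flat, reading or writing a specific pixel means computing its offset by hand. The helper below follows directly from the row-major RGBA layout described above (the function name is ours, not part of the API):

```js
// Offset of pixel (x, y) in a flat RGBA array with the given width.
// Each row holds `width * 4` values; each pixel holds 4 (R, G, B, A).
function pixelOffset(x, y, width) {
  return (y * width + x) * 4;
}

// For a 100-pixel-wide image, pixel (2, 1) starts at (1 * 100 + 2) * 4 = 408,
// so its red component is data[408] and its alpha component is data[411].
```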
### Filling a blank ImageData object
This example creates and fills a new `ImageData` object with colorful
pixels.
#### HTML
```html
<canvas id="canvas"></canvas>
```
#### JavaScript
Since each pixel consists of four values within the `data` array, the
`for` loop iterates by multiples of four. The values associated with each
pixel are R (red), G (green), B (blue), and A (alpha), in that order.
```js
const canvas = document.getElementById("canvas");
const ctx = canvas.getContext("2d");
const imageData = ctx.createImageData(100, 100);
// Fill the array with RGBA values
for (let i = 0; i < imageData.data.length; i += 4) {
// Percentage in the x direction, times 255
let x = ((i % 400) / 400) * 255;
// Percentage in the y direction, times 255
let y = (Math.ceil(i / 400) / 100) * 255;
// Modify pixel data
imageData.data[i + 0] = x; // R value
imageData.data[i + 1] = y; // G value
imageData.data[i + 2] = 255 - x; // B value
imageData.data[i + 3] = 255; // A value
}
// Draw image data to the canvas
ctx.putImageData(imageData, 20, 20);
```
#### Result
{{EmbedLiveSample("Filling_a_blank_ImageData_object", 700, 180)}}
### More examples
For more examples using `ImageData.data`, see [Pixel manipulation with canvas](/en-US/docs/Web/API/Canvas_API/Tutorial/Pixel_manipulation_with_canvas),
{{domxref("CanvasRenderingContext2D.createImageData()")}}, and
{{domxref("CanvasRenderingContext2D.putImageData()")}}.
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- {{domxref("ImageData.height")}}
- {{domxref("ImageData.width")}}
- {{domxref("ImageData")}}
- {{domxref("CanvasRenderingContext2D.createImageData()")}}
- {{domxref("CanvasRenderingContext2D.putImageData()")}}
- [Pixel manipulation with canvas](/en-US/docs/Web/API/Canvas_API/Tutorial/Pixel_manipulation_with_canvas)
| 0 |
data/mdn-content/files/en-us/web/api/imagedata | data/mdn-content/files/en-us/web/api/imagedata/imagedata/index.md | ---
title: "ImageData: ImageData() constructor"
short-title: ImageData()
slug: Web/API/ImageData/ImageData
page-type: web-api-constructor
browser-compat: api.ImageData.ImageData
---
{{APIRef("Canvas API")}}
The **`ImageData()`** constructor returns a newly instantiated
{{domxref('ImageData')}} object built from the typed array given and having the
specified width and height.
This constructor is the preferred way of creating such an object in a
{{domxref('Worker')}}.
## Syntax
```js-nolint
new ImageData(width, height)
new ImageData(width, height, settings)
new ImageData(dataArray, width)
new ImageData(dataArray, width, height)
new ImageData(dataArray, width, height, settings)
```
### Parameters
- `width`
- : An unsigned long representing the width of the image.
- `height`
- : An unsigned long representing the height of the image. This value is optional if an
array is given: the height will be inferred from the array's size and the given width.
- `settings` {{optional_inline}}
- : An object with the following properties:
- `colorSpace`: Specifies the color space of the image data. Can be set to `"srgb"` for the [sRGB color space](https://en.wikipedia.org/wiki/SRGB) or `"display-p3"` for the [display-p3 color space](https://en.wikipedia.org/wiki/DCI-P3).
- `dataArray`
- : A {{jsxref("Uint8ClampedArray")}} containing the underlying pixel representation of the image. If no such array is given, an image with a transparent black rectangle of the specified `width` and `height` will be created.
### Return value
A new {{domxref('ImageData')}} object.
### Errors thrown
- `IndexSizeError` {{domxref("DOMException")}}
- : Thrown if `array` is specified, but its length is not a multiple of `(4 * width)` or `(4 * width * height)`.
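The length rule can be mirrored in plain JavaScript. The sketch below is our own helper showing how the constructor infers the height from an array and when it would reject the input; it is an illustration of the validation described above, not part of the API:

```js
// Mirrors the ImageData(dataArray, width[, height]) length checks:
// the array length must be a multiple of 4 * width, and if a height
// is given, the length must equal exactly 4 * width * height.
function inferImageDataHeight(arrayLength, width, height) {
  if (arrayLength % (4 * width) !== 0) {
    throw new RangeError("length is not a multiple of (4 * width)");
  }
  const inferred = arrayLength / (4 * width);
  if (height !== undefined && height !== inferred) {
    throw new RangeError("length is not equal to (4 * width * height)");
  }
  return inferred;
}

// A 40,000-entry array with width 200 yields an inferred height of 50.
```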
## Examples
### Creating a blank ImageData object
This example creates an `ImageData` object that is 200 pixels wide and 100
pixels tall, containing a total of 20,000 pixels.
```js
let imageData = new ImageData(200, 100);
// ImageData { width: 200, height: 100, data: Uint8ClampedArray[80000] }
```
### ImageData using the display-p3 color space
This example creates an `ImageData` object with the [display-p3 color space](https://en.wikipedia.org/wiki/DCI-P3).
```js
let imageData = new ImageData(200, 100, { colorSpace: "display-p3" });
```
### Initializing ImageData with an array
This example instantiates an `ImageData` object with pixel colors defined by
an array.
#### HTML
```html
<canvas id="canvas"></canvas>
```
#### JavaScript
The array (`arr`) has a length of `40000`: it consists of 10,000
pixels, each of which is defined by 4 values. The `ImageData` constructor
specifies a `width` of `200` for the new object, so its
`height` defaults to 10,000 divided by 200, which is `50`.
```js
const canvas = document.getElementById("canvas");
const ctx = canvas.getContext("2d");
const arr = new Uint8ClampedArray(40_000);
// Fill the array with the same RGBA values
for (let i = 0; i < arr.length; i += 4) {
arr[i + 0] = 0; // R value
arr[i + 1] = 190; // G value
arr[i + 2] = 0; // B value
arr[i + 3] = 255; // A value
}
// Initialize a new ImageData object
let imageData = new ImageData(arr, 200);
// Draw image data to the canvas
ctx.putImageData(imageData, 20, 20);
```
#### Result
{{EmbedLiveSample('Initializing_ImageData_with_an_array', 700, 180)}}
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- {{domxref("CanvasRenderingContext2D.createImageData()")}}, the creator method that
can be used outside workers.
| 0 |
data/mdn-content/files/en-us/web/api/imagedata | data/mdn-content/files/en-us/web/api/imagedata/width/index.md | ---
title: "ImageData: width property"
short-title: width
slug: Web/API/ImageData/width
page-type: web-api-instance-property
browser-compat: api.ImageData.width
---
{{APIRef("Canvas API")}}
The read-only **`ImageData.width`** property returns the number
of pixels per row in the {{domxref("ImageData")}} object.
## Value
A number.
## Examples
This example creates an `ImageData` object that is 200 pixels wide and 100
pixels tall. Thus, the `width` property is `200`.
```js
let imageData = new ImageData(200, 100);
console.log(imageData.width); // 200
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- {{domxref("ImageData.height")}}
- {{domxref("ImageData")}}
| 0 |
data/mdn-content/files/en-us/web/api/imagedata | data/mdn-content/files/en-us/web/api/imagedata/height/index.md | ---
title: "ImageData: height property"
short-title: height
slug: Web/API/ImageData/height
page-type: web-api-instance-property
browser-compat: api.ImageData.height
---
{{APIRef("Canvas API")}}
The read-only **`ImageData.height`** property returns the number
of rows in the {{domxref("ImageData")}} object.
## Value
A number.
## Examples
This example creates an `ImageData` object that is 200 pixels wide and 100
pixels tall. Thus, the `height` property is `100`.
```js
let imageData = new ImageData(200, 100);
console.log(imageData.height); // 100
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- {{domxref("ImageData.width")}}
- {{domxref("ImageData")}}
| 0 |
data/mdn-content/files/en-us/web/api/imagedata | data/mdn-content/files/en-us/web/api/imagedata/colorspace/index.md | ---
title: "ImageData: colorSpace property"
short-title: colorSpace
slug: Web/API/ImageData/colorSpace
page-type: web-api-instance-property
browser-compat: api.ImageData.colorSpace
---
{{APIRef("Canvas API")}}
The read-only **`ImageData.colorSpace`** property is a string indicating the color space of the image data.
The color space can be set during `ImageData` initialization using either the [`ImageData()`](/en-US/docs/Web/API/ImageData/ImageData) constructor or the [`createImageData()`](/en-US/docs/Web/API/CanvasRenderingContext2D/createImageData) method.
## Value
This property can have the following values:
- `"srgb"` representing the [sRGB color space](https://en.wikipedia.org/wiki/SRGB).
- `"display-p3"` representing the [display-p3 color space](https://en.wikipedia.org/wiki/DCI-P3).
## Examples
### Getting the color space of canvas image data
The [`getImageData()`](/en-US/docs/Web/API/CanvasRenderingContext2D/getImageData) method allows you to explicitly request a color space. If it doesn't match the color space the canvas was initialized with, a conversion will be performed.
Use the `colorSpace` property to know which color space your `ImageData` object is in.
```js
const context = canvas.getContext("2d", { colorSpace: "display-p3" });
context.fillStyle = "color(display-p3 0.5 0 0)";
context.fillRect(0, 0, 10, 10);
const p3ImageData = context.getImageData(0, 0, 1, 1);
console.log(p3ImageData.colorSpace); // "display-p3"
const srgbImageData = context.getImageData(0, 0, 1, 1, { colorSpace: "srgb" });
console.log(srgbImageData.colorSpace); // "srgb"
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [`CanvasRenderingContext2D.createImageData()`](/en-US/docs/Web/API/CanvasRenderingContext2D/createImageData)
- [`CanvasRenderingContext2D.getImageData()`](/en-US/docs/Web/API/CanvasRenderingContext2D/getImageData)
- [`colorSpace` setting in `canvas.getContext()`](/en-US/docs/Web/API/HTMLCanvasElement/getContext#colorspace)
- Setting WebGL color spaces:
- [`WebGLRenderingContext.drawingBufferColorSpace`](/en-US/docs/Web/API/WebGLRenderingContext/drawingBufferColorSpace)
- [`WebGLRenderingContext.unpackColorSpace`](/en-US/docs/Web/API/WebGLRenderingContext/unpackColorSpace)
| 0 |
data/mdn-content/files/en-us/web/api | data/mdn-content/files/en-us/web/api/otpcredential/index.md | ---
title: OTPCredential
slug: Web/API/OTPCredential
page-type: web-api-interface
status:
- experimental
browser-compat: api.OTPCredential
---
{{APIRef("WebOTP API")}}{{SecureContext_Header}}{{SeeCompatTable}}
The **`OTPCredential`** interface of the {{domxref('WebOTP API','','',' ')}} is returned when a WebOTP {{domxref("CredentialsContainer.get", "navigator.credentials.get()")}} call (i.e. invoked with an `otp` option) fulfills. It includes a `code` property that contains the retrieved one-time password (OTP).
{{InheritanceDiagram}}
## Instance properties
_This interface also inherits properties from {{domxref("Credential")}}._
- {{domxref("OTPCredential.code")}} {{Experimental_Inline}}
- : The OTP.
- {{domxref("Credential.id", "OTPCredential.id")}}
- : Inherited from {{domxref("Credential")}}. A string identifying the credential.
- {{domxref("Credential.type", "OTPCredential.type")}}
- : Inherited from {{domxref("Credential")}}. Always set to `otp` for `OTPCredential` instances.
## Instance methods
None.
## Examples
The below code triggers the browser's permission flow when an SMS message arrives. If permission is granted, then the promise resolves with an `OTPCredential` object. The contained `code` value is then set as the value of an {{htmlelement("input")}} form element, which is then submitted.
```js
// `ac` lets the pending request be cancelled, e.g. after a timeout
const ac = new AbortController();

navigator.credentials
.get({
otp: { transport: ["sms"] },
signal: ac.signal,
})
.then((otp) => {
input.value = otp.code;
if (form) form.submit();
})
.catch((err) => {
console.error(err);
});
```
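A common companion to this snippet is wiring the `AbortController` (the `ac` above) to a timeout, so the request doesn't hang indefinitely if no SMS arrives. A minimal sketch — the 60-second duration is an arbitrary illustrative choice:

```js
// Abort the pending credentials.get() call if no SMS arrives in time.
const ac = new AbortController();
const timeout = setTimeout(() => ac.abort(), 60_000);

// Once the OTP is received (or the user submits the form manually),
// cancel the timer with clearTimeout(timeout). Passing ac.signal into
// navigator.credentials.get() makes the returned promise reject with
// an AbortError when abort() is called.
```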
> **Note:** For a full explanation of the code, see the {{domxref('WebOTP API','','',' ')}} landing page. You can also [see this code as part of a full working demo](https://web-otp.glitch.me/).
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api/otpcredential | data/mdn-content/files/en-us/web/api/otpcredential/code/index.md | ---
title: "OTPCredential: code property"
short-title: code
slug: Web/API/OTPCredential/code
page-type: web-api-instance-property
status:
- experimental
browser-compat: api.OTPCredential.code
---
{{SecureContext_Header}}{{APIRef("WebOTP API")}}{{SeeCompatTable}}
The **`code`** property of the {{domxref("OTPCredential")}} interface contains the one-time password (OTP).
## Value
A string containing the OTP.
## Examples
The below code triggers the browser's permission flow when an SMS message arrives. If permission is granted, then the promise resolves with an `OTPCredential` object. The contained `code` value is then set as the value of an {{htmlelement("input")}} form element, which is then submitted.
```js
// `ac` lets the pending request be cancelled, e.g. after a timeout
const ac = new AbortController();

navigator.credentials
.get({
otp: { transport: ["sms"] },
signal: ac.signal,
})
.then((otp) => {
input.value = otp.code;
if (form) form.submit();
})
.catch((err) => {
console.error(err);
});
```
> **Note:** For a full explanation of the code, see the {{domxref('WebOTP API','','',' ')}} landing page. You can also [see this code as part of a full working demo](https://web-otp.glitch.me/).
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api | data/mdn-content/files/en-us/web/api/gpu/index.md | ---
title: GPU
slug: Web/API/GPU
page-type: web-api-interface
status:
- experimental
browser-compat: api.GPU
---
{{APIRef("WebGPU API")}}{{SeeCompatTable}}{{SecureContext_Header}}
The **`GPU`** interface of the {{domxref("WebGPU API", "WebGPU API", "", "nocode")}} is the starting point for using WebGPU. It can be used to return a {{domxref("GPUAdapter")}} from which you can request devices, configure features and limits, and more.
The `GPU` object for the current context is accessed via the {{domxref("Navigator.gpu")}} or {{domxref("WorkerNavigator.gpu")}} properties.
{{InheritanceDiagram}}
## Instance properties
- {{domxref("GPU.wgslLanguageFeatures", "wgslLanguageFeatures")}} {{Experimental_Inline}} {{ReadOnlyInline}}
- : A {{domxref("WGSLLanguageFeatures")}} object that reports the [WGSL language extensions](https://gpuweb.github.io/gpuweb/wgsl/#language-extension) supported by the WebGPU implementation.
## Instance methods
- {{domxref("GPU.requestAdapter", "requestAdapter()")}} {{Experimental_Inline}}
- : Returns a {{jsxref("Promise")}} that fulfills with a {{domxref("GPUAdapter")}} object instance. From this you can request a {{domxref("GPUDevice")}}, which is the primary interface for using WebGPU functionality.
- {{domxref("GPU.getPreferredCanvasFormat", "getPreferredCanvasFormat()")}} {{Experimental_Inline}}
- : Returns the optimal canvas texture format for displaying 8-bit depth, standard dynamic range content on the current system.
## Examples
### Requesting an adapter and a device
```js
async function init() {
if (!navigator.gpu) {
throw Error("WebGPU not supported.");
}
const adapter = await navigator.gpu.requestAdapter();
if (!adapter) {
throw Error("Couldn't request WebGPU adapter.");
}
const device = await adapter.requestDevice();
//...
}
```
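`requestAdapter()` also accepts an options object. A `powerPreference` hint asks the browser to prefer a low-power or high-performance adapter; whether the hint is honored is up to the implementation. The option shape below is standard, while the variable name is our own:

```js
// Standard GPURequestAdapterOptions shape; "high-performance" asks the
// browser to prefer a discrete GPU where one is available. The other
// allowed value is "low-power".
const adapterOptions = { powerPreference: "high-performance" };

// In a WebGPU context:
// const adapter = await navigator.gpu.requestAdapter(adapterOptions);
```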
### Configuring a GPUCanvasContext with the optimal texture format
```js
const canvas = document.querySelector("#gpuCanvas");
const context = canvas.getContext("webgpu");
context.configure({
device: device,
format: navigator.gpu.getPreferredCanvasFormat(),
alphaMode: "premultiplied",
});
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- The [WebGPU API](/en-US/docs/Web/API/WebGPU_API)
| 0 |
data/mdn-content/files/en-us/web/api/gpu | data/mdn-content/files/en-us/web/api/gpu/wgsllanguagefeatures/index.md | ---
title: "GPU: wgslLanguageFeatures property"
short-title: wgslLanguageFeatures
slug: Web/API/GPU/wgslLanguageFeatures
page-type: web-api-instance-property
status:
- experimental
browser-compat: api.GPU.wgslLanguageFeatures
---
{{APIRef("WebGPU API")}}{{SeeCompatTable}}{{SecureContext_Header}}
The **`wgslLanguageFeatures`** read-only property of the
{{domxref("GPU")}} interface returns a {{domxref("WGSLLanguageFeatures")}} object that reports the [WGSL language extensions](https://gpuweb.github.io/gpuweb/wgsl/#language-extension) supported by the WebGPU implementation.
> **Note:** Not all WGSL language extensions are available to WebGPU in all browsers that support the API. We recommend you thoroughly test any extensions you choose to use.
## Value
A {{domxref("WGSLLanguageFeatures")}} object instance. This is a [setlike](/en-US/docs/Web/JavaScript/Reference/Global_Objects/Set) object.
## Examples
```js
if (!navigator.gpu) {
throw Error("WebGPU not supported.");
}
const wgslFeatures = navigator.gpu.wgslLanguageFeatures;
// Return the size of the set
console.log(wgslFeatures.size);
// Iterate through all the set values using values()
const valueIterator = wgslFeatures.values();
for (const value of valueIterator) {
console.log(value);
}
// ...
```
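Because the returned object is setlike, membership can also be tested with `has()`. The sketch below feature-detects a named extension before relying on it; a plain `Set` stands in for the setlike object here so the logic can be shown outside a browser, and the feature name is one example from the WGSL registry:

```js
// A plain Set stands in for the setlike WGSLLanguageFeatures object;
// in a browser you would pass navigator.gpu.wgslLanguageFeatures directly.
function supportsWgslFeature(features, name) {
  return features.has(name);
}

const mockFeatures = new Set(["readonly_and_readwrite_storage_textures"]);
console.log(
  supportsWgslFeature(mockFeatures, "readonly_and_readwrite_storage_textures"),
); // true
```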
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [WebGPU API](/en-US/docs/Web/API/WebGPU_API)
| 0 |
data/mdn-content/files/en-us/web/api/gpu | data/mdn-content/files/en-us/web/api/gpu/getpreferredcanvasformat/index.md | ---
title: "GPU: getPreferredCanvasFormat() method"
short-title: getPreferredCanvasFormat()
slug: Web/API/GPU/getPreferredCanvasFormat
page-type: web-api-instance-method
status:
- experimental
browser-compat: api.GPU.getPreferredCanvasFormat
---
{{APIRef("WebGPU API")}}{{SeeCompatTable}}{{SecureContext_Header}}
The **`getPreferredCanvasFormat()`** method of the
{{domxref("GPU")}} interface returns the optimal canvas texture format for displaying 8-bit depth, standard dynamic range content on the current system.
This is commonly used to provide a {{domxref("GPUCanvasContext.configure()")}} call with the optimal `format` value for the current system. This is recommended — if you don't use the preferred format when configuring the canvas context, you may incur additional overhead, such as additional texture copies, depending on the platform.
## Syntax
```js-nolint
getPreferredCanvasFormat()
```
### Parameters
None.
### Return value
A string indicating a canvas texture format. The value can be `rgba8unorm` or `bgra8unorm`.
### Exceptions
None.
## Examples
```js
const canvas = document.querySelector("#gpuCanvas");
const context = canvas.getContext("webgpu");
context.configure({
device: device,
format: navigator.gpu.getPreferredCanvasFormat(),
alphaMode: "premultiplied",
});
```
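Both possible return values describe four 8-bit channels and differ only in channel order, which matters if you pack pixel bytes manually. The helper below (our own, for illustration) orders an RGBA quadruple to match a given format:

```js
// Order an RGBA quadruple to match a canvas texture format's byte layout.
// rgba8unorm stores bytes as R, G, B, A; bgra8unorm stores B, G, R, A.
function orderChannels(format, r, g, b, a) {
  if (format === "bgra8unorm") return [b, g, r, a];
  return [r, g, b, a]; // rgba8unorm
}

console.log(orderChannels("bgra8unorm", 255, 128, 0, 255)); // [0, 128, 255, 255]
```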
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- The [WebGPU API](/en-US/docs/Web/API/WebGPU_API)
| 0 |