---
title: XMLSerializer
slug: Web/API/XMLSerializer
page-type: web-api-interface
browser-compat: api.XMLSerializer
---
{{APIRef("XMLSerializer")}}
The `XMLSerializer` interface provides the {{domxref("XMLSerializer.serializeToString", "serializeToString()")}} method to construct an XML string representing a {{Glossary("DOM")}} tree.
## Constructor
- {{domxref("XMLSerializer.XMLSerializer", "XMLSerializer()")}}
- : Creates a new `XMLSerializer` object.
## Instance methods
- {{domxref("XMLSerializer.serializeToString", "serializeToString()")}}
- : Returns the serialized subtree as a string.
## Examples
### Serializing XML into a string
This example serializes an entire document into a string containing XML.
```js
const s = new XMLSerializer();
const str = s.serializeToString(document);
saveXML(str);
```
This involves creating a new `XMLSerializer` object, then passing the {{domxref("Document")}} to be serialized into {{domxref("XMLSerializer.serializeToString", "serializeToString()")}}, which returns the XML equivalent of the document. `saveXML()` represents a function that would then save the serialized string.
### Inserting nodes into a DOM based on XML
This example uses the {{domxref("Element.insertAdjacentHTML()")}} method to insert a new DOM {{domxref("Node")}} into the body of the {{domxref("Document")}}, based on XML created by serializing an {{domxref("Element")}} object.
> **Note:** In the real world, you should usually instead call the {{domxref("Document.importNode", "importNode()")}} method to import the new node into the DOM, then call one of the following methods to add the node to the DOM tree:
>
> - The {{domxref("Element.append()")}}/{{domxref("Element.prepend()")}} and {{domxref("Document.append()")}}/{{domxref("Document.prepend()")}} methods.
> - The {{domxref("Element.replaceWith")}} method (to replace an existing node with the new one)
> - The {{domxref("Element.insertAdjacentElement()")}} method.
Because `insertAdjacentHTML()` accepts a string and not a `Node` as its second parameter, `XMLSerializer` is used to first convert the node into a string.
```js
const inp = document.createElement("input");
const XMLS = new XMLSerializer();
const inp_xmls = XMLS.serializeToString(inp); // First convert DOM node into a string
// Insert the newly created node into the document's body
document.body.insertAdjacentHTML("afterbegin", inp_xmls);
```
The code creates a new {{HTMLElement("input")}} element by calling {{domxref("Document.createElement()")}}, then serializes it into XML using {{domxref("XMLSerializer.serializeToString", "serializeToString()")}}.
Once that's done, `insertAdjacentHTML()` is used to insert the `<input>` element into the DOM.
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Parsing and serializing XML](/en-US/docs/Web/XML/Parsing_and_serializing_XML)
- {{domxref("DOMParser")}}
---
title: "XMLSerializer: serializeToString() method"
short-title: serializeToString()
slug: Web/API/XMLSerializer/serializeToString
page-type: web-api-instance-method
browser-compat: api.XMLSerializer.serializeToString
---
{{APIRef("DOM Parsing")}}
The {{domxref("XMLSerializer")}} method
**`serializeToString()`** constructs a string representing the
specified {{Glossary("DOM")}} tree in {{Glossary("XML")}} form.
## Syntax
```js-nolint
serializeToString(rootNode)
```
### Parameters
- `rootNode`
- : The {{domxref("Node")}} to use as the root of the DOM tree or subtree for which to
construct an XML representation.
### Return value
A string containing the XML representation of the specified DOM tree.
### Exceptions
- {{jsxref("TypeError")}}
- : Thrown if the specified `rootNode` is not a compatible node type. The root node
must be either a {{domxref("Node")}} or an {{domxref("Attr")}} object.
- `InvalidStateError` {{domxref("DOMException")}}
- : Thrown if the tree could not be successfully serialized, probably due to issues with the
content's compatibility with XML serialization.
- `SyntaxError` {{domxref("DOMException")}}
- : Thrown if a serialization of HTML was requested but could not succeed due to the content not
being well-formed.
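For example, a minimal sketch of defensive serialization might look like the following (the `content` element id is hypothetical):
```js
const serializer = new XMLSerializer();

try {
  // Throws a TypeError if the argument is not a serializable node type
  // (for example, if getElementById() returned null)
  const xml = serializer.serializeToString(document.getElementById("content"));
  console.log(xml);
} catch (e) {
  console.error(`Serialization failed: ${e.name}: ${e.message}`);
}
```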
## Usage notes
### Compatible node types
The specified root node—and all of its descendants—must be compatible with the XML
serialization algorithm. The root node itself must be either a {{domxref("Node")}} or
{{domxref("Attr")}} object.
The following types are also permitted as descendants of the root node, in addition to
`Node` and `Attr`:
- {{domxref("DocumentType")}}
- {{domxref("Document")}}
- {{domxref("DocumentFragment")}}
- {{domxref("Element")}}
- {{domxref("Comment")}}
- {{domxref("Text")}}
- {{domxref("ProcessingInstruction")}}
- {{domxref("Attr")}}
If any other type is encountered, a {{jsxref("TypeError")}} exception is thrown.
### Notes on the resulting XML
There are some things worth noting about the XML output by
`serializeToString()`:
- For XML serializations, `Element` and `Attr` nodes are always
serialized with their {{domxref("Element.namespaceURI", "namespaceURI")}} intact. This
means that a previously-specified {{domxref("Element.prefix", "prefix")}} or default
namespace may be dropped or altered.
- The resulting XML is compatible with the HTML parser.
- Elements in the HTML namespace that have no child nodes (thereby representing empty
tags) are serialized with both begin and end tags
(`"<someelement></someelement>"`) instead of using the
empty-element tag (`"<someelement/>"`).
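Both of the last two points can be observed with a short sketch:
```js
const serializer = new XMLSerializer();
const div = document.createElement("div"); // an empty element in the HTML namespace

// The namespace is serialized explicitly, and the empty element gets
// separate begin and end tags rather than an empty-element tag:
console.log(serializer.serializeToString(div));
// => '<div xmlns="http://www.w3.org/1999/xhtml"></div>'
```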
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Parsing and serializing XML](/en-US/docs/Web/XML/Parsing_and_serializing_XML)
- Serializing to HTML: {{domxref("Element.innerHTML")}} and
{{domxref("Element.outerHTML")}}
- Parsing HTML or XML to create a DOM tree: {{domxref("DOMParser")}}
---
title: "XMLSerializer: XMLSerializer() constructor"
short-title: XMLSerializer()
slug: Web/API/XMLSerializer/XMLSerializer
page-type: web-api-constructor
browser-compat: api.XMLSerializer.XMLSerializer
---
{{APIRef('XMLSerializer')}}
The **`XMLSerializer()`** constructor creates a new {{domxref("XMLSerializer")}}.
## Syntax
```js-nolint
new XMLSerializer()
```
### Parameters
None.
### Return value
A new {{domxref("XMLSerializer")}} object.
## Examples
### Serializing XML into a string
This example serializes an entire document into a string containing XML.
```js
const s = new XMLSerializer();
const d = document;
const str = s.serializeToString(d);
saveXML(str);
```
This involves creating a new `XMLSerializer` object, then passing the {{domxref("Document")}} to be serialized into {{domxref("XMLSerializer.serializeToString", "serializeToString()")}}, which returns the XML equivalent of the document. `saveXML()` represents a function that would then save the serialized string.
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
---
title: WEBGL_color_buffer_float extension
short-title: WEBGL_color_buffer_float
slug: Web/API/WEBGL_color_buffer_float
page-type: webgl-extension
browser-compat: api.WEBGL_color_buffer_float
---
{{APIRef("WebGL")}}
The **`WEBGL_color_buffer_float`** extension is part of the [WebGL API](/en-US/docs/Web/API/WebGL_API) and adds the ability to render to 32-bit floating-point color buffers.
WebGL extensions are available using the {{domxref("WebGLRenderingContext.getExtension()")}} method. For more information, see also [Using Extensions](/en-US/docs/Web/API/WebGL_API/Using_Extensions) in the [WebGL tutorial](/en-US/docs/Web/API/WebGL_API/Tutorial).
> **Note:** This extension is available to {{domxref("WebGLRenderingContext", "WebGL 1", "", 1)}} contexts only. For {{domxref("WebGL2RenderingContext", "WebGL 2", "", 1)}}, use the {{domxref("EXT_color_buffer_float")}} extension.
>
> The {{domxref("OES_texture_float")}} extension implicitly enables this extension.
## Constants
- `ext.RGBA32F_EXT`
- : RGBA 32-bit floating-point color-renderable format.
- `ext.RGB32F_EXT` ({{deprecated_inline}})
- : RGB 32-bit floating-point color-renderable format.
- `ext.FRAMEBUFFER_ATTACHMENT_COMPONENT_TYPE_EXT`
- : A parameter accepted by {{domxref("WebGLRenderingContext.getFramebufferAttachmentParameter()")}}, used to query the data type of the components of a framebuffer attachment.
- `ext.UNSIGNED_NORMALIZED_EXT`
- : A possible return value of the `FRAMEBUFFER_ATTACHMENT_COMPONENT_TYPE_EXT` query, indicating unsigned normalized components.
## Extended methods
This extension extends {{domxref("WebGLRenderingContext.renderbufferStorage()")}}:
- The `internalformat` parameter now accepts `ext.RGBA32F_EXT` and `ext.RGB32F_EXT` ({{deprecated_inline}}).
## Examples
```js
const ext = gl.getExtension("WEBGL_color_buffer_float");
gl.renderbufferStorage(gl.RENDERBUFFER, ext.RGBA32F_EXT, 256, 256);
```
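Assuming the extension is available, the component type of a framebuffer attachment can then be queried with the new constant. This is a sketch continuing from the code above, where `gl` is an existing WebGL 1 context with a framebuffer bound and a color attachment set:
```js
const componentType = gl.getFramebufferAttachmentParameter(
  gl.FRAMEBUFFER,
  gl.COLOR_ATTACHMENT0,
  ext.FRAMEBUFFER_ATTACHMENT_COMPONENT_TYPE_EXT,
);

if (componentType === gl.FLOAT) {
  // The attachment stores floating-point components
}
```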
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- {{domxref("WebGLRenderingContext.getExtension()")}}
- {{domxref("WebGLRenderingContext.renderbufferStorage()")}}
- {{domxref("OES_texture_float")}}
---
title: SVGAnimatedTransformList
slug: Web/API/SVGAnimatedTransformList
page-type: web-api-interface
browser-compat: api.SVGAnimatedTransformList
---
{{APIRef("SVG")}}
## SVG animated transform list interface
The `SVGAnimatedTransformList` interface is used for attributes which take a list of transform values and which can be animated.
### Interface overview
<table class="no-markdown">
<tbody>
<tr>
<th scope="row">Also implement</th>
<td><em>None</em></td>
</tr>
<tr>
<th scope="row">Methods</th>
<td><em>None</em></td>
</tr>
<tr>
<th scope="row">Properties</th>
<td>
<ul>
<li>
readonly {{ domxref("SVGTransformList") }}
<code>baseVal</code>
</li>
<li>
readonly {{ domxref("SVGTransformList") }}
<code>animVal</code>
</li>
</ul>
</td>
</tr>
<tr>
<th scope="row">Normative document</th>
<td>
<a
href="https://www.w3.org/TR/SVG/coords.html#InterfaceSVGAnimatedTransformList"
>SVG 1.1 (2nd Edition)</a
>
</td>
</tr>
</tbody>
</table>
## Instance properties
<table class="no-markdown">
<thead>
<tr>
<th>Name</th>
<th>Type</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>baseVal</code></td>
<td>{{ domxref("SVGTransformList") }}</td>
<td>
The base value of the given attribute before applying any animations.
</td>
</tr>
<tr>
<td><code>animVal</code></td>
<td>{{ domxref("SVGTransformList") }}</td>
<td>
A read only {{ domxref("SVGTransformList") }} representing
the current animated value of the given attribute. If the given
attribute is not currently being animated, then the
{{ domxref("SVGTransformList") }} will have the same contents
as <code>baseVal</code>. The object referenced by
<code>animVal</code> will always be distinct from the one referenced by
<code>baseVal</code>, even when the attribute is not animated.
</td>
</tr>
</tbody>
</table>
## Instance methods
The `SVGAnimatedTransformList` interface doesn't provide any specific methods.
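For example, given an SVG `<rect>` element whose `transform` attribute may be animated, the two values can be compared. This is a sketch assuming such an element exists in the document:
```js
const rect = document.querySelector("rect");
const transforms = rect.transform; // an SVGAnimatedTransformList

// The static value from the transform attribute
console.log(transforms.baseVal.numberOfItems);

// The current animated value (same contents as baseVal when not animating)
console.log(transforms.animVal.numberOfItems);
```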
## Browser compatibility
{{Compat}}
---
title: FileSystemSync
slug: Web/API/FileSystemSync
page-type: web-api-interface
status:
- deprecated
- non-standard
browser-compat: api.FileSystemSync
---
{{APIRef("File and Directory Entries API")}}{{Non-standard_Header}}{{Deprecated_Header}}
In the [File and Directory Entries API](/en-US/docs/Web/API/File_and_Directory_Entries_API/Introduction), a `FileSystemSync` object represents a file system. It has two properties.
> **Warning:** This interface is deprecated and is no longer on the standards track.
> _Do not use it anymore._ Use the [File and Directory Entries API](/en-US/docs/Web/API/File_and_Directory_Entries_API) instead.
## Basic concepts
The `FileSystemSync` object is your gateway to the entire API and you will use it a lot. So once you have a reference, cache the object in a global variable or class property.
## Instance properties
- `name` {{ReadOnlyInline}} {{Non-standard_Inline}} {{Deprecated_Inline}}
- : A string that represents the name of the file system. The name must be unique across the list of exposed file systems.
- `root` {{ReadOnlyInline}} {{Non-standard_Inline}} {{Deprecated_Inline}}
- : A `DirectoryEntry` that is the root directory of the file system.
## Specifications
This feature is not part of any specification anymore. It is no longer on track to become a standard.
## Browser compatibility
{{Compat}}
## See also
- [File and Directory Entries API](/en-US/docs/Web/API/File_and_Directory_Entries_API)
- [Introduction to the File and Directory Entries API](/en-US/docs/Web/API/File_and_Directory_Entries_API/Introduction)
---
title: WebGLFramebuffer
slug: Web/API/WebGLFramebuffer
page-type: web-api-interface
browser-compat: api.WebGLFramebuffer
---
{{APIRef("WebGL")}}
The **`WebGLFramebuffer`** interface is part of the [WebGL API](/en-US/docs/Web/API/WebGL_API) and represents a collection of buffers that serve as a rendering destination.
{{InheritanceDiagram}}
## Description
The `WebGLFramebuffer` object does not define any methods or properties of its own and its content is not directly accessible. When working with `WebGLFramebuffer` objects, the following methods of the {{domxref("WebGLRenderingContext")}} are useful:
- {{domxref("WebGLRenderingContext.bindFramebuffer()")}}
- {{domxref("WebGLRenderingContext.createFramebuffer()")}}
- {{domxref("WebGLRenderingContext.deleteFramebuffer()")}}
- {{domxref("WebGLRenderingContext.isFramebuffer()")}}
## Examples
### Creating a frame buffer
```js
const canvas = document.getElementById("canvas");
const gl = canvas.getContext("webgl");
const buffer = gl.createFramebuffer();
```
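To actually render into the framebuffer, it must be bound and given at least one attachment. A minimal sketch continuing from the code above might look like this:
```js
// Create a texture to serve as the color attachment
const texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 256, 256, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);

// Bind the framebuffer and attach the texture as its color buffer
gl.bindFramebuffer(gl.FRAMEBUFFER, buffer);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, texture, 0);
```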
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- {{domxref("WebGLRenderingContext.bindFramebuffer()")}}
- {{domxref("WebGLRenderingContext.createFramebuffer()")}}
- {{domxref("WebGLRenderingContext.deleteFramebuffer()")}}
- {{domxref("WebGLRenderingContext.isFramebuffer()")}}
- Other buffers: {{domxref("WebGLBuffer")}}, {{domxref("WebGLRenderbuffer")}}
---
title: HTML Drag and Drop API
slug: Web/API/HTML_Drag_and_Drop_API
page-type: web-api-overview
spec-urls: https://html.spec.whatwg.org/multipage/dnd.html
---
{{DefaultAPISidebar("HTML Drag and Drop API")}}
**HTML Drag and Drop** interfaces enable applications to use drag-and-drop features in browsers.
The user may select _draggable_ elements with a mouse, drag those elements to a _droppable_ element, and drop them by releasing the mouse button. A translucent representation of the _draggable_ elements follows the pointer during the drag operation.
You can customize which elements can become _draggable_, the type of feedback the _draggable_ elements produce, and the _droppable_ elements.
This overview of HTML Drag and Drop includes a description of the interfaces, basic steps to add drag-and-drop support to an application, and an interoperability summary of the interfaces.
## Concepts and usage
### Drag Events
HTML drag-and-drop uses the {{domxref("Event","DOM event model")}} and _{{domxref("DragEvent","drag events")}}_ inherited from {{domxref("MouseEvent","mouse events")}}. A typical _drag operation_ begins when a user selects a _draggable_ element, continues when the user drags the element to a _droppable_ element, and then ends when the user releases the dragged element.
During drag operations, several event types are fired, and some events might fire many times, such as the {{domxref('HTMLElement/drag_event', 'drag')}} and {{domxref('HTMLElement/dragover_event', 'dragover')}} events.
Each [drag event type](/en-US/docs/Web/API/DragEvent#event_types) has an associated event handler:
| Event | Fires when... |
| ------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| {{domxref('HTMLElement/drag_event', 'drag')}} | ...a _dragged item_ (element or text selection) is dragged. |
| {{domxref('HTMLElement/dragend_event', 'dragend')}} | ...a drag operation ends (such as releasing a mouse button or hitting the Esc key; see [Finishing a Drag](/en-US/docs/Web/API/HTML_Drag_and_Drop_API/Drag_operations#dragend).) |
| {{domxref('HTMLElement/dragenter_event', 'dragenter')}} | ...a dragged item enters a valid drop target. (See [Specifying Drop Targets](/en-US/docs/Web/API/HTML_Drag_and_Drop_API/Drag_operations#specifying_drop_targets).) |
| {{domxref('HTMLElement/dragleave_event', 'dragleave')}} | ...a dragged item leaves a valid drop target. |
| {{domxref('HTMLElement/dragover_event', 'dragover')}} | ...a dragged item is being dragged over a valid drop target, every few hundred milliseconds. |
| {{domxref('HTMLElement/dragstart_event', 'dragstart')}} | ...the user starts dragging an item. (See [Starting a Drag Operation](/en-US/docs/Web/API/HTML_Drag_and_Drop_API/Drag_operations#dragstart).) |
| {{domxref('HTMLElement/drop_event', 'drop')}} | ...an item is dropped on a valid drop target. (See [Performing a Drop](/en-US/docs/Web/API/HTML_Drag_and_Drop_API/Drag_operations#drop).) |
> **Note:** Neither `dragstart` nor `dragend` events are fired when dragging a file into the browser from the OS.
### The basics
This section is a summary of the basic steps to add drag-and-drop functionality to an application.
#### Identify what is draggable
Making an element _draggable_ requires adding the [`draggable`](/en-US/docs/Web/HTML/Global_attributes#draggable) attribute and the {{domxref("HTMLElement.dragstart_event","dragstart")}} event handler, as shown in the following code sample:
```html
<script>
function dragstartHandler(ev) {
// Add the target element's id to the data transfer object
ev.dataTransfer.setData("text/plain", ev.target.id);
}
window.addEventListener("DOMContentLoaded", () => {
// Get the element by id
const element = document.getElementById("p1");
// Add the ondragstart event listener
element.addEventListener("dragstart", dragstartHandler);
});
</script>
<p id="p1" draggable="true">This element is draggable.</p>
```
For more information, see:
- [Draggable attribute reference](/en-US/docs/Web/HTML/Global_attributes/draggable)
- [Drag operations guide](/en-US/docs/Web/API/HTML_Drag_and_Drop_API/Drag_operations#draggableattribute)
#### Define the drag's data
The application is free to include any number of data items in a drag operation. Each data item is a string of a particular `type` — typically a MIME type such as `text/html`.
Each {{domxref("DragEvent","drag event")}} has a {{domxref("DragEvent.dataTransfer","dataTransfer")}} property that _holds_ the event's data. This property (which is a {{domxref("DataTransfer")}} object) also has methods to _manage_ drag data. The {{domxref("DataTransfer.setData","setData()")}} method is used to add an item to the drag data, as shown in the following example.
```js
function dragstartHandler(ev) {
// Add different types of drag data
ev.dataTransfer.setData("text/plain", ev.target.innerText);
ev.dataTransfer.setData("text/html", ev.target.outerHTML);
ev.dataTransfer.setData(
"text/uri-list",
ev.target.ownerDocument.location.href,
);
}
```
- For a list of common data types used in drag-and-drop (such as text, HTML, links, and files), see [Recommended Drag Types](/en-US/docs/Web/API/HTML_Drag_and_Drop_API/Recommended_drag_types).
- For more information about drag data, see [Drag Data](/en-US/docs/Web/API/HTML_Drag_and_Drop_API/Drag_operations#drag_data).
#### Define the drag image
By default, the browser supplies an image that appears beside the pointer during a drag operation. However, an application may define a custom image with the {{domxref("DataTransfer.setDragImage","setDragImage()")}} method, as shown in the following example.
```js
// Create an image and then use it for the drag image.
// NOTE: change "example.gif" to a real image URL or the image
// will not be created and the default drag image will be used.
let img = new Image();
img.src = "example.gif";
function dragstartHandler(ev) {
ev.dataTransfer.setDragImage(img, 10, 10);
}
```
Learn more about drag feedback images in:
- [Setting the Drag Feedback Image](/en-US/docs/Web/API/HTML_Drag_and_Drop_API/Drag_operations#setting_the_drag_feedback_image)
#### Define the drop effect
The {{domxref("DataTransfer.dropEffect","dropEffect")}} property is used to control the feedback the user is given during a drag-and-drop operation. It typically affects which cursor the browser displays while dragging. For example, when the user hovers over a drop target, the browser's cursor may indicate the type of operation that will occur.
Three effects may be defined:
1. **`copy`** indicates that the dragged data will be copied from its present location to the drop location.
2. **`move`** indicates that the dragged data will be moved from its present location to the drop location.
3. **`link`** indicates that some form of relationship or connection will be created between the source and drop locations.
During the drag operation, drag effects may be modified to indicate that certain effects are allowed at certain locations.
The following example shows how to use this property.
```js
function dragstartHandler(ev) {
ev.dataTransfer.dropEffect = "copy";
}
```
For more details, see:
- [Drag Effects](/en-US/docs/Web/API/HTML_Drag_and_Drop_API/Drag_operations#drag_effects)
#### Define a drop zone
By default, the browser prevents anything from happening when dropping something onto most HTML elements. To change that behavior so that an element becomes a _drop zone_ or is _droppable_, the element must have both {{domxref("HTMLElement.dragover_event","ondragover")}} and {{domxref("HTMLElement.drop_event","ondrop")}} event handler attributes.
The following example shows how to use those attributes, and includes basic event handlers for each attribute.
```html
<script>
function dragoverHandler(ev) {
ev.preventDefault();
ev.dataTransfer.dropEffect = "move";
}
function dropHandler(ev) {
ev.preventDefault();
// Get the id of the target and add the moved element to the target's DOM
const data = ev.dataTransfer.getData("text/plain");
ev.target.appendChild(document.getElementById(data));
}
</script>
<p id="target" ondrop="dropHandler(event)" ondragover="dragoverHandler(event)">
Drop Zone
</p>
```
Note that each handler calls {{domxref("Event.preventDefault","preventDefault()")}} to prevent additional event processing for this event (such as [touch events](/en-US/docs/Web/API/Touch_events) or [pointer events](/en-US/docs/Web/API/Pointer_events)).
For more information, see:
- [Specifying Drop Targets](/en-US/docs/Web/API/HTML_Drag_and_Drop_API/Drag_operations#specifying_drop_targets)
#### Handle the drop effect
The handler for the {{domxref("HTMLElement/drop_event", "drop")}} event is free to process the drag data in an application-specific way.
Typically, an application uses the {{domxref("DataTransfer.getData","getData()")}} method to retrieve drag items and then process them accordingly. Additionally, application semantics may differ depending on the value of the {{domxref("DataTransfer.dropEffect","dropEffect")}} and/or the state of modifier keys.
The following example shows a drop handler getting the source element's `id` from the drag data, and then using the `id` to move the source element to the drop element:
```html
<script>
function dragstartHandler(ev) {
// Add the target element's id to the data transfer object
ev.dataTransfer.setData("application/my-app", ev.target.id);
ev.dataTransfer.effectAllowed = "move";
}
function dragoverHandler(ev) {
ev.preventDefault();
ev.dataTransfer.dropEffect = "move";
}
function dropHandler(ev) {
ev.preventDefault();
// Get the id of the target and add the moved element to the target's DOM
const data = ev.dataTransfer.getData("application/my-app");
ev.target.appendChild(document.getElementById(data));
}
</script>
<p id="p1" draggable="true" ondragstart="dragstartHandler(event)">
This element is draggable.
</p>
<div
id="target"
ondrop="dropHandler(event)"
ondragover="dragoverHandler(event)">
Drop Zone
</div>
```
For more information, see:
- [Performing a Drop](/en-US/docs/Web/API/HTML_Drag_and_Drop_API/Drag_operations#performing_a_drop)
#### Drag end
At the end of a drag operation, the {{domxref("HTMLElement/dragend_event", "dragend")}} event fires at the _source_ element — the element that was the target of the drag start.
This event fires regardless of whether the drag completed or was canceled. The {{domxref("HTMLElement/dragend_event", "dragend")}} event handler can check the value of the {{domxref("DataTransfer.dropEffect","dropEffect")}} property to determine if the drag operation succeeded or not.
For more information about handling the end of a drag operation, see:
- [Finishing a Drag](/en-US/docs/Web/API/HTML_Drag_and_Drop_API/Drag_operations#finishing_a_drag)
## Interfaces
The HTML Drag and Drop interfaces are {{domxref("DragEvent")}}, {{domxref("DataTransfer")}}, {{domxref("DataTransferItem")}} and {{domxref("DataTransferItemList")}}.
The {{domxref("DragEvent")}} interface has a constructor and one {{domxref("DragEvent.dataTransfer","dataTransfer")}} property, which is a {{domxref("DataTransfer")}} object.
{{domxref("DataTransfer")}} objects include the drag event's state, such as the type of drag being done (like `copy` or `move`), the drag's data (one or more items), and the MIME type of each _drag item_. {{domxref("DataTransfer")}} objects also have methods to add or remove items to the drag's data.
The {{domxref("DragEvent")}} and {{domxref("DataTransfer")}} interfaces should be the only ones needed to add HTML Drag and Drop capabilities to an application. (Firefox supports some [Gecko-specific extensions](#gecko_specific_interfaces) to the {{domxref("DataTransfer")}} object, but those extensions will only work on Firefox.)
Each {{domxref("DataTransfer")}} object contains an {{domxref("DataTransfer.items","items")}} property, which is a {{domxref("DataTransferItemList","list")}} of {{domxref("DataTransferItem")}} objects. A {{domxref("DataTransferItem")}} object represents a single _drag item_, each with a {{domxref("DataTransferItem.kind","kind")}} property (either `string` or `file`) and a {{domxref("DataTransferItem.type","type")}} property for the data item's MIME type. The {{domxref("DataTransferItem")}} object also has methods to get the drag item's data.
The {{domxref("DataTransferItemList")}} object is a list of {{domxref("DataTransferItem")}} objects. The list object has methods to add a drag item to the list, remove a drag item from the list, and clear the list of all drag items.
A key difference between the {{domxref("DataTransfer")}} and {{domxref("DataTransferItem")}} interfaces is that the former uses the synchronous {{domxref("DataTransfer.getData","getData()")}} method to access a drag item's data, but the latter instead uses the asynchronous {{domxref("DataTransferItem.getAsString","getAsString()")}} method.
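The difference is visible in a short sketch: with `DataTransferItem`, string data arrives through a callback rather than as a return value.
```js
function dropHandler(event) {
  event.preventDefault();

  // Synchronous: DataTransfer.getData() returns the string directly
  const plainText = event.dataTransfer.getData("text/plain");

  // Asynchronous: DataTransferItem.getAsString() delivers it via a callback
  for (const item of event.dataTransfer.items) {
    if (item.kind === "string" && item.type === "text/plain") {
      item.getAsString((text) => console.log(`Dropped: ${text}`));
    }
  }
}
```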
> **Note:** {{domxref("DragEvent")}} and {{domxref("DataTransfer")}} are broadly supported on desktop browsers. However, the {{domxref("DataTransferItem")}} and {{domxref("DataTransferItemList")}} interfaces have limited browser support. See [Interoperability](#interoperability) for more information about drag-and-drop interoperability.
## Examples
- [Copying and moving elements with the `DataTransfer` interface](https://mdn.github.io/dom-examples/drag-and-drop/copy-move-DataTransfer.html)
- [Copying and moving elements with the `DataTransferItemList` interface](https://mdn.github.io/dom-examples/drag-and-drop/copy-move-DataTransferItemList.html)
- Dragging and dropping files (Firefox only): <https://jsfiddle.net/9C2EF/>
- Dragging and dropping files (All browsers): [https://jsbin.com/hiqasek/](https://jsbin.com/hiqasek/edit?html,js,output)
- A parking project using the Drag and Drop API: <https://park.glitch.me/> (You can edit [here](https://glitch.com/edit/#!/park))
## Specifications
{{Specifications}}
## See also
- [Drag Operations](/en-US/docs/Web/API/HTML_Drag_and_Drop_API/Drag_operations)
- [Recommended Drag Types](/en-US/docs/Web/API/HTML_Drag_and_Drop_API/Recommended_drag_types)
- [HTML Living Standard: Drag and Drop](https://html.spec.whatwg.org/multipage/interaction.html#dnd)
- [Drag and Drop interoperability data from CanIUse](https://caniuse.com/#search=draganddrop)
---
title: Drag operations
slug: Web/API/HTML_Drag_and_Drop_API/Drag_operations
page-type: guide
---
{{DefaultAPISidebar("HTML Drag and Drop API")}}
The following describes the steps that occur during a drag and drop operation.
The drag operations described in this document use the {{domxref("DataTransfer")}} interface. This document does _not_ use the {{domxref("DataTransferItem")}} interface nor the {{domxref("DataTransferItemList")}} interface.
## The draggable attribute
In a web page, there are certain cases where a default drag behavior is used. These include text selections, images, and links. When an image or link is dragged, the URL of the image or link is set as the drag data, and a drag begins. For other elements, they must be part of a selection for a default drag to occur. To see this in effect, select an area of a webpage, and then click and hold the mouse and drag the selection. An OS-specific rendering of the selection will appear and follow the mouse pointer as the drag occurs. However, this is only the default drag behavior; it applies when no listeners adjust the data to be dragged.
In HTML, apart from the default behavior for images, links, and selections, no other elements are draggable by default.
To make other HTML elements draggable, three things must be done:
1. Set the [`draggable`](/en-US/docs/Web/HTML/Global_attributes#draggable) attribute to `"true"` on the element that you wish to make draggable.
2. Add a listener for the {{domxref("HTMLElement/dragstart_event", "dragstart")}} event.
3. [Set the drag data](/en-US/docs/Web/API/DataTransfer/setData) in the above listener.
Here is an example which allows a section of content to be dragged.
```html
<p draggable="true">This text <strong>may</strong> be dragged.</p>
```
```js
const draggableElement = document.querySelector('p[draggable="true"]');
draggableElement.addEventListener("dragstart", (event) =>
event.dataTransfer.setData("text/plain", "This text may be dragged"),
);
```
The [`draggable`](/en-US/docs/Web/HTML/Global_attributes#draggable) attribute is set to `"true"`, so this element becomes draggable. If this attribute were omitted or set to `"false"`, the element would not be dragged, and instead the text would be selected.
The [`draggable`](/en-US/docs/Web/HTML/Global_attributes#draggable) attribute may be used on any element, including images and links. However, for these last two, the default value is `true`, so you would only use the [`draggable`](/en-US/docs/Web/HTML/Global_attributes#draggable) attribute with a value of `false` to disable dragging of these elements.
> **Note:** When an element is made draggable, text or other elements within it can no longer be selected in the normal way by clicking and dragging with the mouse. Instead, the user must hold down the <kbd>Alt</kbd> key to select text with the mouse, or use the keyboard.
## Starting a drag operation
In this example, we add a listener for the {{domxref("HTMLElement/dragstart_event", "dragstart")}} event by using the `addEventListener()` method.
```html
<p draggable="true">This text <strong>may</strong> be dragged.</p>
```
```js
const draggableElement = document.querySelector('p[draggable="true"]');
draggableElement.addEventListener("dragstart", (event) =>
event.dataTransfer.setData("text/plain", "This text may be dragged"),
);
```
When a user begins to drag, the {{domxref("HTMLElement/dragstart_event", "dragstart")}} event is fired.
In this example, the {{domxref("HTMLElement/dragstart_event", "dragstart")}} listener is added to the draggable element itself. However, you could also add the listener to a higher ancestor, since drag events bubble up as most other events do.
Within the {{domxref("HTMLElement/dragstart_event", "dragstart")}} event, you can specify the **drag data**, the **feedback image**, and the **drag effects**, all of which are described below. However, only the **drag data** is required. (The default image and drag effects are suitable in most situations.)
## Drag data
All {{domxref("DragEvent","drag events")}} have a property called {{domxref("DragEvent.dataTransfer","dataTransfer")}} which holds the drag data (`dataTransfer` is a {{domxref("DataTransfer")}} object).
When a drag occurs, data must be associated with the drag which identifies _what_ is being dragged. For example, when dragging the selected text within a textbox, the data associated with the _drag data item_ is the text itself. Similarly, when dragging a link on a web page, the drag data item is the link's URL.
The {{domxref("DataTransfer","drag data")}} contains two pieces of information, the **type** (or format) of the data, and the data's **value**. The format is a type string (such as [`text/plain`](/en-US/docs/Web/API/HTML_Drag_and_Drop_API/Recommended_drag_types#text) for text data), and the value is a string of text. When the drag begins, you add data by providing a type and the data. During the drag, in an event listener for the {{domxref("HTMLElement/dragenter_event", "dragenter")}} and {{domxref("HTMLElement/dragover_event", "dragover")}} events, you use the data types of the data being dragged to check whether a drop is allowed. For instance, a drop target that accepts links would check for the type [`text/uri-list`](/en-US/docs/Web/API/HTML_Drag_and_Drop_API/Recommended_drag_types#link). During a drop event, a listener would retrieve the data being dragged and insert it at the drop location.
The {{domxref("DataTransfer","drag data's")}} {{domxref("DataTransfer.types","types")}} property returns a list of MIME-type like strings, such as [`text/plain`](/en-US/docs/Web/API/HTML_Drag_and_Drop_API/Recommended_drag_types#text) or [`image/jpeg`](/en-US/docs/Web/API/HTML_Drag_and_Drop_API/Recommended_drag_types#image). You can also create your own types. The most commonly used types are listed in the article [Recommended Drag Types](/en-US/docs/Web/API/HTML_Drag_and_Drop_API/Recommended_drag_types).
A drag may include data items of several different types. This allows data to be provided in more specific types, often custom types, yet still provide fallback data for drop targets that do not support more specific types. It is usually the case that the least specific type will be normal text data using the type [`text/plain`](/en-US/docs/Web/API/HTML_Drag_and_Drop_API/Recommended_drag_types#text). This data will be a simple textual representation.
To set a drag data item within the {{domxref("DragEvent.dataTransfer","dataTransfer")}}, use the {{domxref("DataTransfer.setData","setData()")}} method. It takes two arguments: the type of data and the data value. For example:
```js
event.dataTransfer.setData("text/plain", "Text to drag");
```
In this case, the data value is "Text to drag" and is of the format [`text/plain`](/en-US/docs/Web/API/HTML_Drag_and_Drop_API/Recommended_drag_types#text).
You can provide data in multiple formats. To do this, call the {{domxref("DataTransfer.setData","setData()")}} method multiple times with different formats. You should call it with formats in order from most specific to least specific.
```js
const dt = event.dataTransfer;
dt.setData("application/x.bookmark", bookmarkString);
dt.setData("text/uri-list", "https://www.mozilla.org");
dt.setData("text/plain", "https://www.mozilla.org");
```
Here, data is added in three different types. The first type, `application/x.bookmark`, is a custom type. Other applications won't support this type, but you can use a custom type for drags between areas of the same site or application.
By providing data in other types as well, we can also support drags to other applications in less specific forms. The `application/x.bookmark` type can provide data with more details for use within the application whereas the other types can include just a single URL or text version.
Note that both the [`text/uri-list`](/en-US/docs/Web/API/HTML_Drag_and_Drop_API/Recommended_drag_types#link) and [`text/plain`](/en-US/docs/Web/API/HTML_Drag_and_Drop_API/Recommended_drag_types#text) contain the same data in this example. This will often be true, but doesn't need to be the case.
If you attempt to add data twice with the same format, the new data will replace the old data, but in the same position within the list of types as the old data.
You can clear the data using the {{domxref("DataTransfer.clearData","clearData()")}} method, which takes one argument: the type of the data to remove.
```js
event.dataTransfer.clearData("text/uri-list");
```
The `type` argument to the {{domxref("DataTransfer.clearData","clearData()")}} method is optional. If the `type` is not specified, the data associated with all types is removed. If the drag contains no drag data items, or all of the items have been subsequently cleared, then no drag will occur.
## Setting the drag feedback image
When a drag occurs, a translucent image is generated from the drag target (the element the "{{domxref("HTMLElement/dragstart_event", "dragstart")}}" event is fired at), and follows the user's pointer during the drag. This image is created automatically, so you do not need to create it yourself. However, you can use {{domxref("DataTransfer.setDragImage","setDragImage()")}} to specify a custom drag feedback image.
```js
event.dataTransfer.setDragImage(image, xOffset, yOffset);
```
Three arguments are necessary. The first is a reference to an image. This reference will typically be to an `<img>` element, but it can also be to a `<canvas>` or any other element. The feedback image will be generated from whatever the image looks like on screen, although for images, they will be drawn at their original size. The second and third arguments to the {{domxref("DataTransfer.setDragImage","setDragImage()")}} method are offsets where the image should appear relative to the mouse pointer.
It is also possible to use images and canvases that are not in a document. This technique is useful when drawing custom drag images using the canvas element, as in the following example:
```js
function dragWithCustomImage(event) {
const canvas = document.createElement("canvas");
canvas.width = canvas.height = 50;
const ctx = canvas.getContext("2d");
ctx.lineWidth = 4;
ctx.moveTo(0, 0);
ctx.lineTo(50, 50);
ctx.moveTo(0, 50);
ctx.lineTo(50, 0);
ctx.stroke();
const dt = event.dataTransfer;
dt.setData("text/plain", "Data to Drag");
dt.setDragImage(canvas, 25, 25);
}
```
In this example, we make one canvas the drag image. As the canvas is `50`×`50` pixels, we use offsets of half of this (`25`) so that the image appears centered on the mouse pointer.
## Drag effects
When dragging, there are several operations that may be performed. The `copy` operation is used to indicate that the data being dragged will be copied from its present location to the drop location. The `move` operation is used to indicate that the data being dragged will be moved, and the `link` operation is used to indicate that some form of relationship or connection will be created between the source and drop locations.
You can specify which of the three operations are allowed for a drag source by setting the {{domxref("DataTransfer.effectAllowed","effectAllowed")}} property within a {{domxref("HTMLElement/dragstart_event", "dragstart")}} event listener.
```js
event.dataTransfer.effectAllowed = "copy";
```
In this example, only a **copy** is allowed.
You can combine the values in various ways:
- `none`
- : no operation is permitted
- `copy`
- : `copy` only
- `move`
- : `move` only
- `link`
- : `link` only
- `copyMove`
- : `copy` or `move` only
- `copyLink`
- : `copy` or `link` only
- `linkMove`
- : `link` or `move` only
- `all`
- : `copy`, `move`, or `link`
- uninitialized
- : The default value is `all`.
Note that these values must be used exactly as listed above. For example, setting the {{domxref("DataTransfer.effectAllowed","effectAllowed")}} property to `copyMove` allows a copy or move operation but prevents the user from performing a link operation. If you don't change the {{domxref("DataTransfer.effectAllowed","effectAllowed")}} property, then any operation is allowed, just like with the '`all`' value. So you don't need to adjust this property unless you want to exclude specific types.
During a drag operation, a listener for the {{domxref("HTMLElement/dragenter_event", "dragenter")}} or {{domxref("HTMLElement/dragover_event", "dragover")}} events can check the {{domxref("DataTransfer.effectAllowed","effectAllowed")}} property to see which operations are permitted. A related property, {{domxref("DataTransfer.dropEffect","dropEffect")}}, should be set within one of these events to specify which single operation should be performed. Valid values for {{domxref("DataTransfer.dropEffect","dropEffect")}} are `none`, `copy`, `move`, or `link`. The combination values are not used for this property.
With the {{domxref("HTMLElement/dragenter_event", "dragenter")}} and {{domxref("HTMLElement/dragover_event", "dragover")}} event, the {{domxref("DataTransfer.dropEffect","dropEffect")}} property is initialized to the effect that the user is requesting. The user can modify the desired effect by pressing modifier keys. Although the exact keys used vary by platform, typically the <kbd>Shift</kbd> and <kbd>Control</kbd> keys would be used to switch between copying, moving, and linking. The mouse pointer will change to indicate which operation is desired. For instance, for a `copy`, the cursor might appear with a plus sign next to it.
You can modify the {{domxref("DataTransfer.dropEffect","dropEffect")}} property during the {{domxref("HTMLElement/dragenter_event", "dragenter")}} or {{domxref("HTMLElement/dragover_event", "dragover")}} events, if for example, a particular drop target only supports certain operations. You can modify the {{domxref("DataTransfer.dropEffect","dropEffect")}} property to override the user effect, and enforce a specific drop operation to occur. Note that this effect must be one listed within the {{domxref("DataTransfer.effectAllowed","effectAllowed")}} property. Otherwise, it will be set to an alternate value that is allowed.
```js
event.dataTransfer.dropEffect = "copy";
```
In this example, copy is the effect that is performed.
You can use the value `none` to indicate that no drop is allowed at this location, although it is preferred not to cancel the event in this case.
Within the {{domxref("HTMLElement/drop_event", "drop")}} and {{domxref("HTMLElement/dragend_event", "dragend")}} events, you can check the {{domxref("DataTransfer.dropEffect","dropEffect")}} property to determine which effect was ultimately chosen. If the chosen effect were "`move`", then the original data should be removed from the source of the drag within the {{domxref("HTMLElement/dragend_event", "dragend")}} event.
## Specifying drop targets
Listeners for the {{domxref("HTMLElement/dragenter_event", "dragenter")}} and {{domxref("HTMLElement/dragover_event", "dragover")}} events are used to indicate valid drop targets, that is, places where dragged items may be dropped. Most areas of a web page or application are not valid places to drop data. Thus, the default handling of these events is not to allow a drop.
If you want to allow a drop, you must prevent the default behavior by cancelling both the `dragenter` and `dragover` events. You can do this by calling their {{domxref("Event.preventDefault","preventDefault()")}} methods:
```html
<div id="drop-target">You can drag and then drop a draggable item here</div>
```
```js
const dropElement = document.getElementById("drop-target");
dropElement.addEventListener("dragenter", (event) => {
event.preventDefault();
});
dropElement.addEventListener("dragover", (event) => {
event.preventDefault();
});
```
Calling the {{domxref("Event.preventDefault","preventDefault()")}} method during both a {{domxref("HTMLElement/dragenter_event", "dragenter")}} and {{domxref("HTMLElement/dragover_event", "dragover")}} event will indicate that a drop is allowed at that location. However, you will commonly wish to call the {{domxref("Event.preventDefault","preventDefault()")}} method only in certain situations (for example, only if a link is being dragged).
To do this, call a function which checks a condition and only cancels the event when the condition is met. If the condition is not met, don't cancel the event, and a drop will not occur there if the user releases the mouse button.
It is most common to accept or reject a drop based on the type of drag data in the data transfer — for instance, allowing images, or links, or both. To do this, you can check the {{domxref("DataTransfer.types","types")}} property of the event's {{domxref("DragEvent.dataTransfer","dataTransfer")}}. The {{domxref("DataTransfer.types","types")}} property returns an array of the string types that were added when the drag began, in the order from most significant to least significant.
```js
function doDragOver(event) {
const isLink = event.dataTransfer.types.includes("text/uri-list");
if (isLink) {
event.preventDefault();
}
}
```
In this example, we use the `includes` method to check if the type [`text/uri-list`](/en-US/docs/Web/API/HTML_Drag_and_Drop_API/Recommended_drag_types#link) is present in the list of types. If it is, we will cancel the event so that a drop may be allowed. If the drag data does not contain a link, the event will not be cancelled, and a drop cannot occur at that location.
You may also wish to set the {{domxref("DataTransfer.effectAllowed","effectAllowed")}} property, the {{domxref("DataTransfer.dropEffect","dropEffect")}} property, or both at the same time, if you wish to be more specific about the type of operation that will be performed. Naturally, changing either property has no effect if you do not cancel the event as well.
## Drop feedback
There are several ways in which you can indicate to the user that a drop is allowed at a certain location. The mouse pointer will update as necessary depending on the value of the {{domxref("DataTransfer.dropEffect","dropEffect")}} property.
Although the exact appearance depends on the user's platform, typically a plus sign icon will appear for a '`copy`' for example, and a 'cannot drop here' icon will appear when a drop is not allowed. This mouse pointer feedback is sufficient in many cases.
For more complex visual effects, you can perform other operations during the {{domxref("HTMLElement/dragenter_event", "dragenter")}} event. For example, by inserting an element at the location where the drop will occur. This might be an insertion marker, or an element that represents the dragged element in its new location. To do this, you could create an [`<img>`](/en-US/docs/Web/HTML/Element/img) element and insert it into the document during the {{domxref("HTMLElement/dragenter_event", "dragenter")}} event.
The {{domxref("HTMLElement/dragover_event", "dragover")}} event will fire at the element the mouse is pointing at. Naturally, you may need to move the insertion marker around a {{domxref("HTMLElement/dragover_event", "dragover")}} event as well. You can use the event's {{domxref("MouseEvent.clientX","clientX")}} and {{domxref("MouseEvent.clientY","clientY")}} properties as with other mouse events to determine the location of the mouse pointer.
Finally, the {{domxref("HTMLElement/dragleave_event", "dragleave")}} event will fire at an element when the drag leaves the element. This is the time when you should remove any insertion markers or highlighting. You do not need to cancel this event. The {{domxref("HTMLElement/dragleave_event", "dragleave")}} event will always fire, even if the drag is cancelled, so you can always ensure that any insertion point cleanup can be done during this event.
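A minimal sketch of this pattern follows; the element id and the marker's styling are hypothetical:
```js
const dropZone = document.getElementById("drop-zone");
let marker = null;

dropZone.addEventListener("dragenter", (event) => {
  event.preventDefault();
  if (!marker) {
    // Insert a visual marker where the drop would occur
    marker = document.createElement("div");
    marker.className = "insertion-marker"; // styled elsewhere
    dropZone.appendChild(marker);
  }
});

dropZone.addEventListener("dragleave", () => {
  // Clean up the marker when the drag leaves the element
  marker?.remove();
  marker = null;
});
```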
## Performing a drop
When the user releases the mouse, the drag and drop operation ends.
If the mouse is released over an element that is a valid drop target, that is, one that cancelled the last {{domxref("HTMLElement/dragenter_event", "dragenter")}} or {{domxref("HTMLElement/dragover_event", "dragover")}} event, then the drop will be successful, and a {{domxref("HTMLElement/drop_event", "drop")}} event will fire at the target. Otherwise, the drag operation is cancelled, and no {{domxref("HTMLElement/drop_event", "drop")}} event is fired.
During the {{domxref("HTMLElement/drop_event", "drop")}} event, you should retrieve the data that was dropped from the event and insert it at the drop location. You can use the {{domxref("DataTransfer.dropEffect","dropEffect")}} property to determine which drag operation was desired.
As with all drag-related events, the event's {{domxref("DataTransfer","dataTransfer")}} property will hold the data that is being dragged. The {{domxref("DataTransfer.getData","getData()")}} method may be used to retrieve the data again.
```js
function onDrop(event) {
const data = event.dataTransfer.getData("text/plain");
event.target.textContent = data;
event.preventDefault();
}
```
The {{domxref("DataTransfer.getData","getData()")}} method takes one argument, the type of data to retrieve. It will return the string value that was set when {{domxref("DataTransfer.setData","setData()")}} was called at the beginning of the drag operation. An empty string will be returned if data of that type does not exist. (Naturally, though, you would likely know that the right type of data was available, as it was previously checked during a {{domxref("HTMLElement/dragover_event", "dragover")}} event.)
In the example here, once the data has been retrieved, we insert the string as the textual content of the target. This has the effect of inserting the dragged text where it was dropped, assuming that the drop target is an area of text such as a `p` or `div` element.
In a web page, you should call the {{domxref("Event.preventDefault","preventDefault()")}} method of the event if you have accepted the drop, so that the browser's default handling is not triggered by the dropped data as well. For example, when a link is dragged to a web page, Firefox will open the link. By cancelling the event, this behavior will be prevented.
You can retrieve other types of data as well. If the data is a link, it should have the type [`text/uri-list`](/en-US/docs/Web/API/HTML_Drag_and_Drop_API/Recommended_drag_types#link). You could then insert a link into the content.
```js
function doDrop(event) {
const lines = event.dataTransfer.getData("text/uri-list").split("\n");
lines
.filter((line) => !line.startsWith("#"))
.forEach((line) => {
const link = document.createElement("a");
link.href = line;
link.textContent = line;
event.target.appendChild(link);
});
event.preventDefault();
}
```
This example inserts a link from the dragged data. As the name implies, the [`text/uri-list`](/en-US/docs/Web/API/HTML_Drag_and_Drop_API/Recommended_drag_types#link) type actually may contain a list of URLs, each on a separate line. The above code uses [`split`](/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/split) to break the string into lines, then iterates over the list of lines, and inserts each as a link into the document. (Note also that links starting with a number sign (`#`) are skipped, as these are comments.)
For simple cases, you can use the special type `URL` just to retrieve the first valid URL in the list. For example:
```js
const link = event.dataTransfer.getData("URL");
```
This eliminates the need to check for comments or iterate through lines yourself. However, it is limited to only the first URL in the list.
The `URL` type is a special type. It is used only as a shorthand, and it does not appear within the list of types specified in the {{domxref("DataTransfer.types","types")}} property.
Sometimes you may support several different formats, and you want to retrieve the most specific data type that is supported. In the following example, a drop target supports three formats and retrieves the data associated with the most specific supported one:
```js
function doDrop(event) {
const supportedTypes = [
"application/x-moz-file",
"text/uri-list",
"text/plain",
];
const types = event.dataTransfer.types.filter((type) =>
supportedTypes.includes(type),
);
if (types.length) {
const data = event.dataTransfer.getData(types[0]);
// Use this type of data…
}
event.preventDefault();
}
```
## Finishing a drag
Once the drag is complete, a {{domxref("HTMLElement/dragend_event", "dragend")}} event is fired at the source of the drag (the same element that received the {{domxref("HTMLElement/dragstart_event", "dragstart")}} event). This event will fire if the drag was successful or if it was cancelled. However, you can use the {{domxref("DataTransfer.dropEffect","dropEffect")}} property to determine which drop operation occurred.
If the {{domxref("DataTransfer.dropEffect","dropEffect")}} property has the value `none` during a {{domxref("HTMLElement/dragend_event", "dragend")}}, then the drag was cancelled. Otherwise, the effect specifies which operation was performed. The source can use this information after a `move` operation to remove the dragged item from the old location.
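For example, a {{domxref("HTMLElement/dragend_event", "dragend")}} listener on the source element might remove it after a successful move. This is a sketch, where `draggableElement` is the element that started the drag:
```js
draggableElement.addEventListener("dragend", (event) => {
  // A dropEffect of "none" would mean the drag was cancelled
  if (event.dataTransfer.dropEffect === "move") {
    // The data was moved, so remove the original element
    event.target.remove();
  }
});
```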
A drop can occur inside the same window or over another application. The {{domxref("HTMLElement/dragend_event", "dragend")}} event will always fire regardless. The event's {{domxref("MouseEvent.screenX","screenX")}} and {{domxref("MouseEvent.screenY","screenY")}} properties will be set to the screen coordinates where the drop occurred.
After the {{domxref("HTMLElement/dragend_event", "dragend")}} event has finished propagating, the drag and drop operation is complete.
## See also
- [HTML Drag and Drop API (Overview)](/en-US/docs/Web/API/HTML_Drag_and_Drop_API)
- [Recommended Drag Types](/en-US/docs/Web/API/HTML_Drag_and_Drop_API/Recommended_drag_types)
- [HTML Living Standard: Drag and Drop](https://html.spec.whatwg.org/multipage/interaction.html#dnd)
---
title: File drag and drop
slug: Web/API/HTML_Drag_and_Drop_API/File_drag_and_drop
page-type: guide
---
{{DefaultAPISidebar("HTML Drag and Drop API")}}
HTML Drag and Drop interfaces enable web applications to drag and drop files on a web page. This document describes how an application can accept one or more files that are dragged from the underlying platform's _file manager_ and dropped on a web page.
The main steps to drag and drop are to define a _drop zone_ (i.e. a target element for the file drop) and to define event handlers for the {{domxref("HTMLElement/drop_event", "drop")}} and {{domxref("HTMLElement/dragover_event", "dragover")}} events. These steps are described below, including example code snippets. The full source code is available in [MDN's drag-and-drop repository](https://github.com/mdn/dom-examples/tree/main/drag-and-drop) (pull requests and/or issues are welcome).
Note that {{domxref("HTML_Drag_and_Drop_API","HTML drag and drop")}} defines two different APIs to support dragging and dropping files. One API is the {{domxref("DataTransfer")}} interface and the second API is the {{domxref("DataTransferItem")}} and {{domxref("DataTransferItemList")}} interfaces. This example illustrates the use of both APIs (and does not use any Gecko specific interfaces).
## Define the drop zone
The _target element_ of the {{domxref("HTMLElement/drop_event", "drop")}} event needs an `ondrop` event handler. The following code snippet shows how this is done with a {{HTMLelement("div")}} element:
```html
<div id="drop_zone" ondrop="dropHandler(event);">
<p>Drag one or more files to this <i>drop zone</i>.</p>
</div>
```
Typically, an application will include a {{domxref("HTMLElement/dragover_event", "dragover")}} event handler on the drop target element and that handler will turn off the browser's default drag behavior. To add this handler, you need to include a {{domxref("HTMLElement.dragover_event","ondragover")}} event handler:
```html
<div
id="drop_zone"
ondrop="dropHandler(event);"
ondragover="dragOverHandler(event);">
<p>Drag one or more files to this <i>drop zone</i>.</p>
</div>
```
Lastly, an application may want to style the drop target element to visually indicate the element is a drop zone. In this example, the drop target element uses the following styling:
```css
#drop_zone {
border: 5px solid blue;
width: 200px;
height: 100px;
}
```
> **Note:** {{domxref("HTMLElement/dragstart_event", "dragstart")}} and {{domxref("HTMLElement/dragend_event", "dragend")}} events are not fired when dragging a file into the browser from the OS. To detect when OS files are dragged into the browser, use {{domxref("HTMLElement/dragenter_event", "dragenter")}} and {{domxref("HTMLElement/dragleave_event", "dragleave")}}.
> This means that it is not possible to use {{domxref("DataTransfer.setDragImage","setDragImage()")}} to apply a custom drag image/cursor overlay when dragging files from the OS — because the drag data store can only be modified in the {{domxref("HTMLElement/dragstart_event", "dragstart")}} event. This also applies to {{domxref("DataTransfer.setData","setData()")}}.
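One possible way to apply that styling dynamically is to toggle a class using those events; a minimal sketch follows (the `highlight` class is an assumption):

```js
const dropZone = document.getElementById("drop_zone");

dropZone.addEventListener("dragenter", () => {
  dropZone.classList.add("highlight"); // assumed CSS class for visual feedback
});

dropZone.addEventListener("dragleave", () => {
  dropZone.classList.remove("highlight");
});
```

Note that `dragenter` and `dragleave` also fire as the pointer moves over child elements, so a real implementation may need to track enter/leave counts to avoid flicker.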
## Process the drop
The {{domxref("HTMLElement/drop_event", "drop")}} event is fired when the user drops the file(s). In the following drop handler, if the browser supports the {{domxref("DataTransferItemList")}} interface, the {{domxref("DataTransferItem.getAsFile","getAsFile()")}} method is used to access each file; otherwise the {{domxref("DataTransfer")}} interface's {{domxref("DataTransfer.files","files")}} property is used to access each file.
This example shows how to write the name of each dragged file to the console. In a _real_ application, you may want to process a file using the {{domxref("File","File API")}}.
Note that in this example, any drag item that is not a file is ignored.
```js
function dropHandler(ev) {
console.log("File(s) dropped");
// Prevent default behavior (Prevent file from being opened)
ev.preventDefault();
if (ev.dataTransfer.items) {
// Use DataTransferItemList interface to access the file(s)
[...ev.dataTransfer.items].forEach((item, i) => {
// If dropped items aren't files, reject them
if (item.kind === "file") {
const file = item.getAsFile();
console.log(`… file[${i}].name = ${file.name}`);
}
});
} else {
// Use DataTransfer interface to access the file(s)
[...ev.dataTransfer.files].forEach((file, i) => {
console.log(`… file[${i}].name = ${file.name}`);
});
}
}
```
## Prevent the browser's default drag behavior
The following {{domxref("HTMLElement/dragover_event", "dragover")}} event handler calls {{domxref("Event.preventDefault","preventDefault()")}} to turn off the browser's default drag and drop handler.
```js
function dragOverHandler(ev) {
console.log("File(s) in drop zone");
// Prevent default behavior (Prevent file from being opened)
ev.preventDefault();
}
```
## See also
- [HTML Drag and Drop API](/en-US/docs/Web/API/HTML_Drag_and_Drop_API)
- [Drag Operations](/en-US/docs/Web/API/HTML_Drag_and_Drop_API/Drag_operations)
- [HTML Living Standard: Drag and Drop](https://html.spec.whatwg.org/multipage/interaction.html#dnd)
---
title: Recommended Drag Types
slug: Web/API/HTML_Drag_and_Drop_API/Recommended_drag_types
page-type: guide
---
{{DefaultAPISidebar("HTML Drag and Drop API")}}
The HTML Drag and Drop API supports dragging various types of data, including plain text, URLs, HTML code, files, etc. This document describes best practices for common draggable data types.
## Dragging Text
For dragging text, use the `text/plain` type. The second data parameter should be the dragged string. For example:
```js
event.dataTransfer.setData("text/plain", "This is text to drag");
```
Dragging text in textboxes and selections on web pages is done automatically by the browser, so you do not need to handle it yourself.
It is recommended to always add data of the `text/plain` type as a fallback for applications or drop targets that do not support other types, unless there is no logical text alternative. Always add this `text/plain` type last, as it is the least specific and shouldn't be preferred.
> **Note:** In older code, you may find the `text/unicode` or `Text` types. These are equivalent to `text/plain`, and will store and retrieve plain text data.
## Dragging Links
Dragged hyperlinks should include data of two types: `text/uri-list`, and `text/plain`. _Both_ types should use the link's URL for their data. For example:
```js
const dt = event.dataTransfer;
dt.setData("text/uri-list", "https://www.mozilla.org");
dt.setData("text/plain", "https://www.mozilla.org");
```
As usual, set the `text/plain` type last, as a fallback for the `text/uri-list` type.
> **Note:** The URL type is `uri-list` with an _I_, not an _L_.
To drag multiple links, separate each link inside the `text/uri-list` data with a CRLF linebreak. Lines that begin with a number sign (`#`) are comments, and should not be considered URLs. You can use comments to indicate the purpose of a URL, the title associated with a URL, or other data.
> **Warning:** The `text/plain` fallback for multiple links should include all URLs, but no comments.
For example, this sample `text/uri-list` data contains two links and a comment:
```plain
http://www.mozilla.org
#A second link
http://www.example.com
```
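A sketch of how such data might be set (the URLs are illustrative):

```js
const dt = event.dataTransfer;
const urls = ["http://www.mozilla.org", "http://www.example.com"];

// Links in text/uri-list are separated by CRLF; lines starting with "#" are comments
dt.setData("text/uri-list", `${urls[0]}\r\n#A second link\r\n${urls[1]}`);

// The text/plain fallback contains all URLs, but no comments
dt.setData("text/plain", urls.join("\r\n"));
```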
When retrieving a dropped link, ensure you handle the case where multiple links were dragged, including any comments. For convenience, the special type `URL` may be used to refer to the first valid link within data for the `text/uri-list` type.
> **Warning:** Do not add data with the `URL` type — attempting to do so will set the value of the `text/uri-list` type instead.
```js
const url = event.dataTransfer.getData("URL");
```
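To handle multiple links yourself, you might parse the `text/uri-list` data directly, skipping comment lines; a minimal sketch:

```js
const uriList = event.dataTransfer.getData("text/uri-list");

// Split on CRLF (or LF), drop empty lines and "#" comment lines
const urls = uriList
  .split(/\r?\n/)
  .filter((line) => line.trim() !== "" && !line.startsWith("#"));
```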
You may also see data with the Mozilla-specific type `text/x-moz-url`. If it appears, it should appear before the `text/uri-list` type. It holds the URLs of links followed by their titles, separated by a linebreak. For example:
```plain
http://www.mozilla.org
Mozilla
http://www.example.com
Example
```
## Dragging HTML and XML
HTML content may use the `text/html` type. The data for this type should be serialized HTML source code. For example, it would be suitable to set its data to the value of the {{domxref("Element.innerHTML","innerHTML")}} property of an element.
XML content may use the `text/xml` type, but ensure that the data is well-formed XML.
You may also include a plain text representation of the HTML or XML data using the `text/plain` type. The data should be just the text without any of the source tags or attributes. For instance:
```js
const dt = event.dataTransfer;
dt.setData("text/html", "Hello there, <strong>stranger</strong>");
dt.setData("text/plain", "Hello there, stranger");
```
### Updates to DataTransfer.types
The latest spec dictates that {{domxref("DataTransfer.types")}} should return a frozen array of strings rather than a {{domxref("DOMStringList")}} (this is supported in Firefox 52 and above).
As a result, the [contains](/en-US/docs/Web/API/Node/contains) method no longer works; the [includes](/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/includes) method should be used instead to check if a specific type of data is provided, using code like the following:
```js
if ([...event.dataTransfer.types].includes("text/html")) {
// Do something
}
```
You could use feature detection to determine which method is supported on `types`, then run code as appropriate.
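For example, a hedged sketch of such feature detection:

```js
const types = event.dataTransfer.types;

// Frozen arrays have includes(); the legacy DOMStringList has contains()
const hasHTML =
  typeof types.includes === "function"
    ? types.includes("text/html")
    : types.contains("text/html");
```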
## Dragging Images
Direct image dragging is not common. In fact, Mozilla does not support direct image dragging on Mac or Linux. Instead, images are usually dragged only by their URLs. To do this, use the `text/uri-list` type as with other URLs. The data should be the URL of the image, or a [`data:` URL](/en-US/docs/Web/HTTP/Basics_of_HTTP/Data_URLs) if the image is not stored on a website or disk.
As with links, the data for the `text/plain` type should also contain the URL. However, a `data:` URL is not usually useful in a text context, so you may wish to exclude the `text/plain` data in this situation.
In chrome or other privileged code, you may also use the `image/jpeg`, `image/png` or `image/gif` types, depending on the type of image. The data should be an object which implements the `nsIInputStream` interface. When this stream is read, it should provide the data bits for the image, as if the image was a file of that type.
You should also include the `application/x-moz-file` type if the image is located on disk. In fact, this is a common way in which image files are dragged.
It is important to set the data in the right order, from most-specific to least-specific. The standard image type, such as `image/jpeg`, should come first, followed by the `application/x-moz-file` type. Next, you should set the `text/uri-list` data, and finally the `text/plain` data. For example:
```js
const dt = event.dataTransfer;
dt.setData("text/uri-list", imageurl);
dt.setData("text/plain", imageurl);
```
## Dragging Nodes
Nodes and elements in a document may be dragged using the `application/x-moz-node` type. The data for the type should be a DOM node. This allows the drop target to receive the actual node where the drag was started from. Note that callers from a different domain will not be able to access the node even when it has been dropped.
You should always include a `text/plain` alternative for the node.
## Dragging Custom Data
You can also use other types that you invent for custom purposes. Strive to always include a `text/plain` alternative, unless the dragged object is specific to a particular site or application. In this case, the custom type ensures that the data cannot be dropped elsewhere.
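A sketch using a made-up custom type (the type name and payload are purely illustrative):

```js
const dt = event.dataTransfer;

// App-specific type; other sites and applications won't know how to read it
dt.setData("application/x-example-item", JSON.stringify({ id: 42 }));

// Optional text/plain fallback so the data can still be dropped elsewhere
dt.setData("text/plain", "Example item 42");
```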
## Dragging files to an operating system folder
You may want to add a file to an existing drag event session, and you may also want to write the file to disk when the drop operation happens over a folder in the operating system, once your code receives notification of the target folder's location. This only works in extensions (or other privileged code), and the type `application/x-moz-file-promise` should be used. The following sample offers an overview of this advanced case:
```js
// currentEvent is an existing drag operation event
currentEvent.dataTransfer.setData("text/x-moz-url", URL);
currentEvent.dataTransfer.setData("application/x-moz-file-promise-url", URL);
currentEvent.dataTransfer.setData(
"application/x-moz-file-promise-dest-filename",
leafName,
);
function dataProvider() {}
dataProvider.prototype = {
QueryInterface(iid) {
if (
iid.equals(Components.interfaces.nsIFlavorDataProvider) ||
iid.equals(Components.interfaces.nsISupports)
)
return this;
throw Components.results.NS_NOINTERFACE;
},
getFlavorData(aTransferable, aFlavor, aData, aDataLen) {
if (aFlavor === "application/x-moz-file-promise") {
const urlPrimitive = {};
const dataSize = {};
aTransferable.getTransferData(
"application/x-moz-file-promise-url",
urlPrimitive,
dataSize,
);
const url = urlPrimitive.value.QueryInterface(
Components.interfaces.nsISupportsString,
).data;
console.log(`URL file original is = ${url}`);
const namePrimitive = {};
aTransferable.getTransferData(
"application/x-moz-file-promise-dest-filename",
namePrimitive,
dataSize,
);
const name = namePrimitive.value.QueryInterface(
Components.interfaces.nsISupportsString,
).data;
console.log(`target filename is = ${name}`);
const dirPrimitive = {};
aTransferable.getTransferData(
"application/x-moz-file-promise-dir",
dirPrimitive,
dataSize,
);
const dir = dirPrimitive.value.QueryInterface(
Components.interfaces.nsILocalFile,
);
console.log(`target folder is = ${dir.path}`);
const file = Cc["@mozilla.org/file/local;1"].createInstance(
Components.interfaces.nsILocalFile,
);
file.initWithPath(dir.path);
file.appendRelativePath(name);
console.log(`output final path is = ${file.path}`);
// now you can write or copy the file yourself…
}
},
};
```
## See also
- [HTML Drag and Drop API (Overview)](/en-US/docs/Web/API/HTML_Drag_and_Drop_API)
- [Drag Operations](/en-US/docs/Web/API/HTML_Drag_and_Drop_API/Drag_operations)
- [HTML Living Standard: Drag and Drop](https://html.spec.whatwg.org/multipage/interaction.html#dnd)
---
title: NodeIterator
slug: Web/API/NodeIterator
page-type: web-api-interface
browser-compat: api.NodeIterator
---
{{APIRef("DOM")}}
The **`NodeIterator`** interface represents an iterator to traverse nodes of a DOM subtree in document order.
## Syntax
A `NodeIterator` can be created using the {{domxref("Document.createNodeIterator()")}} method, as follows:
```js
const nodeIterator = document.createNodeIterator(root, whatToShow, filter);
```
## Instance properties
_This interface doesn't inherit any property._
- {{domxref("NodeIterator.root")}} {{ReadOnlyInline}}
- : Returns a {{domxref("Node")}} representing the root node, as specified when the
`NodeIterator` was created.
- {{domxref("NodeIterator.whatToShow")}} {{ReadOnlyInline}}
- : Returns an `unsigned long` bitmask that describes the types of {{domxref("Node")}}
to be matched. Non-matching nodes are skipped, but relevant child nodes may be included.
The possible bitmask values are constants from the `NodeFilter` interface:
| Constant | Numerical value | Description |
| -------------------------------------------------------- | ------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `NodeFilter.SHOW_ALL` | `4294967295` (that is the max value of `unsigned long`) | Shows all nodes. |
| `NodeFilter.SHOW_ATTRIBUTE` {{deprecated_inline}} | `2` | Shows attribute {{ domxref("Attr") }} nodes. This is meaningful only when creating a {{ domxref("NodeIterator") }} with an {{ domxref("Attr") }} node as its root; in this case, it means that the attribute node will appear in the first position of the iteration or traversal. Since attributes are never children of other nodes, they do not appear when traversing over the document tree. |
| `NodeFilter.SHOW_CDATA_SECTION` {{deprecated_inline}} | `8` | Shows {{ domxref("CDATASection") }} nodes. |
| `NodeFilter.SHOW_COMMENT` | `128` | Shows {{ domxref("Comment") }} nodes. |
| `NodeFilter.SHOW_DOCUMENT` | `256` | Shows {{ domxref("Document") }} nodes. |
| `NodeFilter.SHOW_DOCUMENT_FRAGMENT` | `1024` | Shows {{ domxref("DocumentFragment") }} nodes. |
| `NodeFilter.SHOW_DOCUMENT_TYPE` | `512` | Shows {{ domxref("DocumentType") }} nodes. |
| `NodeFilter.SHOW_ELEMENT` | `1` | Shows {{ domxref("Element") }} nodes. |
| `NodeFilter.SHOW_ENTITY` {{deprecated_inline}} | `32` | Legacy, no longer used. |
| `NodeFilter.SHOW_ENTITY_REFERENCE` {{deprecated_inline}} | `16` | Legacy, no longer used. |
| `NodeFilter.SHOW_NOTATION` {{deprecated_inline}} | `2048` | Legacy, no longer used. |
| `NodeFilter.SHOW_PROCESSING_INSTRUCTION` | `64` | Shows {{domxref("ProcessingInstruction")}} nodes. |
| `NodeFilter.SHOW_TEXT` | `4` | Shows {{domxref("Text") }} nodes. |
- {{domxref("NodeIterator.filter")}} {{ReadOnlyInline}}
- : Returns a `NodeFilter` used to select the relevant nodes.
- {{domxref("NodeIterator.referenceNode")}} {{ReadOnlyInline}}
{{Experimental_Inline}}
- : Returns the {{domxref("Node")}} to which the iterator is anchored.
- {{domxref("NodeIterator.pointerBeforeReferenceNode")}} {{ReadOnlyInline}}
- : Returns a boolean indicating whether or not the {{domxref("NodeIterator")}} is anchored _before_ the {{domxref("NodeIterator.referenceNode")}}. If `false`, it indicates that the iterator is anchored _after_ the reference node.
## Instance methods
_This interface doesn't inherit any method._
- {{domxref("NodeIterator.detach()")}} {{deprecated_inline}}
- : This is a legacy method, and no longer has any effect. Previously it served to mark a
`NodeIterator` as disposed, so it could be reclaimed by garbage collection.
- {{domxref("NodeIterator.previousNode()")}}
- : Returns the previous {{domxref("Node")}} in the document, or `null` if there are none.
- {{domxref("NodeIterator.nextNode()")}}
- : Returns the next {{domxref("Node")}} in the document, or `null` if there are none.
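## Examples

A minimal sketch of typical usage, visiting every element under `document.body` in document order:

```js
const nodeIterator = document.createNodeIterator(
  document.body,
  NodeFilter.SHOW_ELEMENT,
);

let currentNode;
while ((currentNode = nodeIterator.nextNode())) {
  console.log(currentNode.nodeName);
}
```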
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- The creator method: {{domxref("Document.createNodeIterator()")}}.
- Related interface: {{domxref("TreeWalker")}}
---
title: "NodeIterator: detach() method"
short-title: detach()
slug: Web/API/NodeIterator/detach
page-type: web-api-instance-method
status:
- deprecated
browser-compat: api.NodeIterator.detach
---
{{APIRef("DOM")}}{{deprecated_header}}
The **`NodeIterator.detach()`** method is a no-op, kept for
backward compatibility only.
Originally, it detached the {{domxref("NodeIterator")}} from the set over which it
iterates, releasing any resources used by the set and setting the iterator's state to
`INVALID`. Once this method had been called, calls to other methods on
`NodeIterator` would raise the `INVALID_STATE_ERR` exception.
## Syntax
```js-nolint
detach()
```
### Parameters
None.
### Return value
None ({{jsxref("undefined")}}).
## Examples
```js
const nodeIterator = document.createNodeIterator(
document.body,
NodeFilter.SHOW_ELEMENT,
{
acceptNode(node) {
return NodeFilter.FILTER_ACCEPT;
},
},
);
nodeIterator.detach(); // detaches the iterator
nodeIterator.nextNode(); // throws an INVALID_STATE_ERR exception
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- The interface it belongs to: {{domxref("NodeIterator")}}.
---
title: "NodeIterator: previousNode() method"
short-title: previousNode()
slug: Web/API/NodeIterator/previousNode
page-type: web-api-instance-method
browser-compat: api.NodeIterator.previousNode
---
{{APIRef("DOM")}}
The **`NodeIterator.previousNode()`** method returns the
previous node in the set represented by the {{domxref("NodeIterator")}} and moves the
position of the iterator backwards within the set.
This method returns `null` when the current node is the first node in the
set.
In old browsers, as specified in old versions of the specifications, the method may
throw the `INVALID_STATE_ERR` {{domxref("DOMException")}} if it
is called after the {{domxref("NodeIterator.detach()")}} method. Recent browsers never
throw.
## Syntax
```js-nolint
previousNode()
```
### Parameters
None.
### Return value
A {{domxref("Node")}} representing the node before the current node in the set represented by this `NodeIterator`, or `null` if the current node is the first node in the set.
## Examples
```js
const nodeIterator = document.createNodeIterator(
document.body,
NodeFilter.SHOW_ELEMENT,
{
acceptNode(node) {
return NodeFilter.FILTER_ACCEPT;
},
},
);
currentNode = nodeIterator.nextNode(); // returns the next node
previousNode = nodeIterator.previousNode(); // same result, since we backtracked to the previous node
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- The interface it belongs to: {{domxref("NodeIterator")}}.
---
title: "NodeIterator: whatToShow property"
short-title: whatToShow
slug: Web/API/NodeIterator/whatToShow
page-type: web-api-instance-property
browser-compat: api.NodeIterator.whatToShow
---
{{APIRef("DOM")}}
The **`NodeIterator.whatToShow`** read-only property returns
an `unsigned integer` bitmask signifying what types of nodes
should be returned by the {{domxref("NodeIterator")}}.
## Value
An `unsigned integer`.
The values that can be combined to form the bitmask are:
<table class="no-markdown">
<thead>
<tr>
<th>Constant</th>
<th>Numerical value</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>NodeFilter.SHOW_ALL</code></td>
<td>
<code>4294967295</code> (that is the max value of <code>unsigned long</code>)
</td>
<td>Shows all nodes.</td>
</tr>
<tr>
<td>
<code>NodeFilter.SHOW_ATTRIBUTE</code> {{deprecated_inline}}
</td>
<td><code>2</code></td>
<td>
Shows attribute {{ domxref("Attr") }} nodes. This is meaningful
only when creating a {{ domxref("NodeIterator") }} or
{{ domxref("TreeWalker") }} with an
{{ domxref("Attr") }} node as its root; in this case, it means
that the attribute node will appear in the first position of the
iteration or traversal. Since attributes are never children of other
nodes, they do not appear when traversing over the document tree.
</td>
</tr>
<tr>
<td>
<code>NodeFilter.SHOW_CDATA_SECTION</code> {{deprecated_inline}}
</td>
<td><code>8</code></td>
<td>Shows {{ domxref("CDATASection") }} nodes.</td>
</tr>
<tr>
<td><code>NodeFilter.SHOW_COMMENT</code></td>
<td><code>128</code></td>
<td>Shows {{ domxref("Comment") }} nodes.</td>
</tr>
<tr>
<td><code>NodeFilter.SHOW_DOCUMENT</code></td>
<td><code>256</code></td>
<td>Shows {{ domxref("Document") }} nodes.</td>
</tr>
<tr>
<td><code>NodeFilter.SHOW_DOCUMENT_FRAGMENT</code></td>
<td><code>1024</code></td>
<td>Shows {{ domxref("DocumentFragment") }} nodes.</td>
</tr>
<tr>
<td><code>NodeFilter.SHOW_DOCUMENT_TYPE</code></td>
<td><code>512</code></td>
<td>Shows {{ domxref("DocumentType") }} nodes.</td>
</tr>
<tr>
<td><code>NodeFilter.SHOW_ELEMENT</code></td>
<td><code>1</code></td>
<td>Shows {{ domxref("Element") }} nodes.</td>
</tr>
<tr>
<td><code>NodeFilter.SHOW_ENTITY</code> {{deprecated_inline}}</td>
<td><code>32</code></td>
<td>Legacy, no longer used.</td>
</tr>
<tr>
<td>
<code>NodeFilter.SHOW_ENTITY_REFERENCE</code>
{{deprecated_inline}}
</td>
<td><code>16</code></td>
<td>Legacy, no longer used.</td>
</tr>
<tr>
<td>
<code>NodeFilter.SHOW_NOTATION</code> {{deprecated_inline}}
</td>
<td><code>2048</code></td>
<td>Legacy, no longer used.</td>
</tr>
<tr>
<td><code>NodeFilter.SHOW_PROCESSING_INSTRUCTION</code></td>
<td><code>64</code></td>
<td>Shows {{ domxref("ProcessingInstruction") }} nodes.</td>
</tr>
<tr>
<td><code>NodeFilter.SHOW_TEXT</code></td>
<td><code>4</code></td>
<td>Shows {{ domxref("Text") }} nodes.</td>
</tr>
</tbody>
</table>
## Examples
```js
const nodeIterator = document.createNodeIterator(
document.body,
NodeFilter.SHOW_ELEMENT | NodeFilter.SHOW_COMMENT | NodeFilter.SHOW_TEXT,
{ acceptNode: (node) => NodeFilter.FILTER_ACCEPT },
);
if (
nodeIterator.whatToShow & NodeFilter.SHOW_ALL ||
nodeIterator.whatToShow & NodeFilter.SHOW_COMMENT
) {
// nodeIterator will show comments
}
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- The interface this property belongs to: {{domxref("NodeIterator")}}.
---
title: "NodeIterator: referenceNode property"
short-title: referenceNode
slug: Web/API/NodeIterator/referenceNode
page-type: web-api-instance-property
browser-compat: api.NodeIterator.referenceNode
---
{{APIRef("DOM")}}
The **`NodeIterator.referenceNode`** read-only property returns the
{{domxref("Node")}} to which the iterator is anchored; as new nodes are inserted, the
iterator remains anchored to the reference node as specified by this property.
## Value
A {{domxref("Node")}}.
## Examples
```js
const nodeIterator = document.createNodeIterator(
document.body,
NodeFilter.SHOW_ELEMENT,
{
acceptNode(node) {
return NodeFilter.FILTER_ACCEPT;
},
},
);
node = nodeIterator.referenceNode;
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- The interface it belongs to: {{domxref("NodeIterator")}}
---
title: "NodeIterator: pointerBeforeReferenceNode property"
short-title: pointerBeforeReferenceNode
slug: Web/API/NodeIterator/pointerBeforeReferenceNode
page-type: web-api-instance-property
browser-compat: api.NodeIterator.pointerBeforeReferenceNode
---
{{APIRef("DOM")}}
The **`NodeIterator.pointerBeforeReferenceNode`** read-only
property returns a boolean flag that indicates whether the
{{domxref("NodeIterator")}} is anchored before (if this value is `true`) or
after (if this value is `false`) the anchor node indicated by the
{{domxref("NodeIterator.referenceNode")}} property.
## Value
A boolean.
## Examples
```js
const nodeIterator = document.createNodeIterator(
document.body,
NodeFilter.SHOW_ELEMENT,
{
acceptNode(node) {
return NodeFilter.FILTER_ACCEPT;
},
},
);
flag = nodeIterator.pointerBeforeReferenceNode;
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- The interface it belongs to: {{domxref("NodeIterator")}}
---
title: "NodeIterator: nextNode() method"
short-title: nextNode()
slug: Web/API/NodeIterator/nextNode
page-type: web-api-instance-method
browser-compat: api.NodeIterator.nextNode
---
{{APIRef("DOM")}}
The **`NodeIterator.nextNode()`** method returns the next node
in the set represented by the {{domxref("NodeIterator")}} and advances the position of
the iterator within the set. The first call to `nextNode()` returns the
first node in the set.
This method returns `null` when there are no nodes left in the set.
In old browsers, as specified in old versions of the specifications, the method may
throw the `INVALID_STATE_ERR` {{domxref("DOMException")}} if it
is called after the {{domxref("NodeIterator.detach()")}} method. Recent browsers never
throw.
## Syntax
```js-nolint
nextNode()
```
### Parameters
None.
### Return value
A {{domxref("Node")}} representing the node after the current node in the set represented by this `NodeIterator`, or `null` if the current node is the last node in the set.
## Examples
```js
const nodeIterator = document.createNodeIterator(
document.body,
NodeFilter.SHOW_ELEMENT,
{
acceptNode(node) {
return NodeFilter.FILTER_ACCEPT;
},
},
);
currentNode = nodeIterator.nextNode(); // returns the next node
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- The interface it belongs to: {{domxref("NodeIterator")}}.
---
title: "NodeIterator: root property"
short-title: root
slug: Web/API/NodeIterator/root
page-type: web-api-instance-property
browser-compat: api.NodeIterator.root
---
{{APIRef("DOM")}}
The **`NodeIterator.root`** read-only property represents the
{{DOMxref("Node")}} that is the root of what the {{DOMxref("NodeIterator")}}
traverses.
## Value
A {{DOMxref("Node")}}.
## Examples
```js
const nodeIterator = document.createNodeIterator(
document.body,
NodeFilter.SHOW_ELEMENT,
{
acceptNode(node) {
return NodeFilter.FILTER_ACCEPT;
},
},
);
root = nodeIterator.root; // document.body in this case
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- The interface it belongs to: {{domxref("NodeIterator")}}.
---
title: "NodeIterator: filter property"
short-title: filter
slug: Web/API/NodeIterator/filter
page-type: web-api-instance-property
browser-compat: api.NodeIterator.filter
---
{{APIRef("DOM")}}
The **`NodeIterator.filter`** read-only property returns a
`NodeFilter` object, that is, an object which implements an
`acceptNode(node)` method, used to screen nodes.
When creating the {{domxref("NodeIterator")}}, the filter object is passed in as the
third parameter, and the object method `acceptNode(node)` is
called on every single node to determine whether or not to accept it. This function
should return the constant `NodeFilter.FILTER_ACCEPT` for cases when the
node should be accepted and `NodeFilter.FILTER_REJECT` for cases when the
node should be rejected.
## Value
A `NodeFilter` object.
## Examples
```js
const nodeIterator = document.createNodeIterator(
document.body,
NodeFilter.SHOW_ELEMENT,
{
acceptNode(node) {
return NodeFilter.FILTER_ACCEPT;
},
},
);
nodeFilter = nodeIterator.filter;
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- The interface this property belongs to: {{domxref("NodeIterator")}}.
---
title: WebRTC API
slug: Web/API/WebRTC_API
page-type: web-api-overview
spec-urls:
- https://w3c.github.io/webrtc-pc/
- https://w3c.github.io/mediacapture-main/
- https://w3c.github.io/mediacapture-fromelement/
---
{{DefaultAPISidebar("WebRTC")}}
**WebRTC** (Web Real-Time Communication) is a technology that enables Web applications and sites to capture and optionally stream audio and/or video media, as well as to exchange arbitrary data between browsers without requiring an intermediary. The set of standards that comprise WebRTC makes it possible to share data and perform teleconferencing peer-to-peer, without requiring that the user install plug-ins or any other third-party software.
WebRTC consists of several interrelated APIs and protocols which work together to achieve this. The documentation you'll find here will help you understand the fundamentals of WebRTC, how to set up and use both data and media connections, and more.
## WebRTC concepts and usage
WebRTC serves multiple purposes; together with the [Media Capture and Streams API](/en-US/docs/Web/API/Media_Capture_and_Streams_API), they provide powerful multimedia capabilities to the Web, including support for audio and video conferencing, file exchange, screen sharing, identity management, and interfacing with legacy telephone systems including support for sending {{Glossary("DTMF")}} (touch-tone dialing) signals. Connections between peers can be made without requiring any special drivers or plug-ins, and can often be made without any intermediary servers.
Connections between two peers are represented by the {{DOMxRef("RTCPeerConnection")}} interface. Once a connection has been established and opened using `RTCPeerConnection`, media streams ({{DOMxRef("MediaStream")}}s) and/or data channels ({{DOMxRef("RTCDataChannel")}}s) can be added to the connection.
Media streams can consist of any number of tracks of media information; tracks, which are represented by objects based on the {{DOMxRef("MediaStreamTrack")}} interface, may contain one of a number of types of media data, including audio, video, and text (such as subtitles or even chapter names). Most streams consist of at least one audio track and likely also a video track, and can be used to send and receive both live media or stored media information (such as a streamed movie).
You can also use the connection between two peers to exchange arbitrary binary data using the {{DOMxRef("RTCDataChannel")}} interface. This can be used for back-channel information, metadata exchange, game status packets, file transfers, or even as a primary channel for data transfer.
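For example, a minimal sketch of adding both to a connection (it assumes `stream` is a {{DOMxRef("MediaStream")}} obtained elsewhere, such as from `getUserMedia()`):

```js
const pc = new RTCPeerConnection();

// A data channel for arbitrary application data
const channel = pc.createDataChannel("chat");

// Add each track of an existing media stream to the connection
stream.getTracks().forEach((track) => pc.addTrack(track, stream));
```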
### Interoperability
WebRTC is in general well supported in modern browsers, but some incompatibilities remain. The [adapter.js](https://github.com/webrtcHacks/adapter) library is a shim to insulate apps from these incompatibilities.
## WebRTC reference
Because WebRTC provides interfaces that work together to accomplish a variety of tasks, we have divided up the reference by category. Please see the sidebar for an alphabetical list.
### Connection setup and management
These interfaces, dictionaries, and types are used to set up, open, and manage WebRTC connections. Included are interfaces representing peer media connections, data channels, and interfaces used when exchanging information on the capabilities of each peer in order to select the best possible configuration for a two-way media connection.
#### Interfaces
- {{DOMxRef("RTCPeerConnection")}}
- : Represents a WebRTC connection between the local computer and a remote peer. It is used to handle efficient streaming of data between the two peers.
- {{DOMxRef("RTCDataChannel")}}
- : Represents a bi-directional data channel between two peers of a connection.
- {{DOMxRef("RTCDataChannelEvent")}}
- : Represents events that occur while attaching a {{DOMxRef("RTCDataChannel")}} to a {{DOMxRef("RTCPeerConnection")}}. The only event sent with this interface is {{domxref("RTCPeerConnection.datachannel_event", "datachannel")}}.
- {{DOMxRef("RTCSessionDescription")}}
- : Represents the parameters of a session. Each `RTCSessionDescription` consists of a description {{DOMxRef("RTCSessionDescription.type", "type")}} indicating which part of the offer/answer negotiation process it describes and of the {{Glossary("SDP")}} descriptor of the session.
- {{DOMxRef("RTCStatsReport")}}
- : Provides information detailing statistics for a connection or for an individual track on the connection; the report can be obtained by calling {{DOMxRef("RTCPeerConnection.getStats()")}}.
- {{DOMxRef("RTCIceCandidate")}}
- : Represents a candidate Interactive Connectivity Establishment ({{Glossary("ICE")}}) server for establishing an {{DOMxRef("RTCPeerConnection")}}.
- {{DOMxRef("RTCIceTransport")}}
- : Represents information about an {{Glossary("ICE")}} transport.
- {{DOMxRef("RTCPeerConnectionIceEvent")}}
- : Represents events that occur in relation to ICE candidates with the target, usually an {{DOMxRef("RTCPeerConnection")}}. Only one event is of this type: {{domxref("RTCPeerConnection.icecandidate_event", "icecandidate")}}.
- {{DOMxRef("RTCRtpSender")}}
- : Manages the encoding and transmission of data for a {{DOMxRef("MediaStreamTrack")}} on an {{DOMxRef("RTCPeerConnection")}}.
- {{DOMxRef("RTCRtpReceiver")}}
- : Manages the reception and decoding of data for a {{DOMxRef("MediaStreamTrack")}} on an {{DOMxRef("RTCPeerConnection")}}.
- {{DOMxRef("RTCTrackEvent")}}
- : The interface used to represent a {{domxref("RTCPeerConnection.track_event", "track")}} event, which indicates that an {{DOMxRef("RTCRtpReceiver")}} object was added to the {{DOMxRef("RTCPeerConnection")}} object, indicating that a new incoming {{DOMxRef("MediaStreamTrack")}} was created and added to the `RTCPeerConnection`.
- {{DOMxRef("RTCSctpTransport")}}
- : Provides information which describes a Stream Control Transmission Protocol (**{{Glossary("SCTP")}}**) transport and also provides a way to access the underlying Datagram Transport Layer Security (**{{Glossary("DTLS")}}**) transport over which SCTP packets for all of an [`RTCPeerConnection`](/en-US/docs/Web/API/RTCPeerConnection)'s data channels are sent and received.
#### Events
- {{domxref("RTCDataChannel.bufferedamountlow_event", "bufferedamountlow")}}
- : The amount of data currently buffered by the data channel—as indicated by its {{domxref("RTCDataChannel.bufferedAmount", "bufferedAmount")}} property—has decreased to be at or below the channel's minimum buffered data size, as specified by {{domxref("RTCDataChannel.bufferedAmountLowThreshold", "bufferedAmountLowThreshold")}}.
- {{domxref("RTCDataChannel.close_event", "close")}}
- : The data channel has completed the closing process and is now in the `closed` state. Its underlying data transport is completely closed at this point. You can be notified _before_ closing completes by watching for the `closing` event instead.
- {{domxref("RTCDataChannel.closing_event", "closing")}}
- : The `RTCDataChannel` has transitioned to the `closing` state, indicating that it will be closed soon. You can detect the completion of the closing process by watching for the `close` event.
- {{domxref("RTCPeerConnection.connectionstatechange_event", "connectionstatechange")}}
- : The connection's state, which can be accessed in {{domxref("RTCPeerConnection.connectionState", "connectionState")}}, has changed.
- {{domxref("RTCPeerConnection.datachannel_event", "datachannel")}}
- : A new {{domxref("RTCDataChannel")}} is available following the remote peer opening a new data channel. This event's type is {{domxref("RTCDataChannelEvent")}}.
- {{domxref("RTCDataChannel.error_event", "error")}}
- : An {{domxref("RTCErrorEvent")}} indicating that an error occurred on the data channel.
- {{domxref("RTCDtlsTransport.error_event", "error")}}
- : An {{domxref("RTCErrorEvent")}} indicating that an error occurred on the {{domxref("RTCDtlsTransport")}}. This error will be either `dtls-failure` or `fingerprint-failure`.
- {{domxref("RTCIceTransport.gatheringstatechange_event", "gatheringstatechange")}}
- : The {{domxref("RTCIceTransport")}}'s gathering state has changed.
- {{domxref("RTCPeerConnection.icecandidate_event", "icecandidate")}}
- : An {{domxref("RTCPeerConnectionIceEvent")}} which is sent whenever the local device has identified a new ICE candidate which needs to be added to the local peer by calling {{domxref("RTCPeerConnection.setLocalDescription", "setLocalDescription()")}}.
- {{domxref("RTCPeerConnection.icecandidateerror_event", "icecandidateerror")}}
- : An {{domxref("RTCPeerConnectionIceErrorEvent")}} indicating that an error has occurred while gathering ICE candidates.
- {{domxref("RTCPeerConnection.iceconnectionstatechange_event", "iceconnectionstatechange")}}
- : Sent to an {{domxref("RTCPeerConnection")}} when its ICE connection's state—found in the {{domxref("RTCPeerConnection.iceConnectionState", "iceConnectionState")}} property—changes.
- {{domxref("RTCPeerConnection.icegatheringstatechange_event", "icegatheringstatechange")}}
- : Sent to an {{domxref("RTCPeerConnection")}} when its ICE gathering state—found in the {{domxref("RTCPeerConnection.iceGatheringState", "iceGatheringState")}} property—changes.
- {{domxref("RTCDataChannel.message_event", "message")}}
- : A message has been received on the data channel. The event is of type {{domxref("MessageEvent")}}.
- {{domxref("RTCPeerConnection.negotiationneeded_event", "negotiationneeded")}}
- : Informs the `RTCPeerConnection` that it needs to perform session negotiation by calling {{domxref("RTCPeerConnection.createOffer", "createOffer()")}} followed by {{domxref("RTCPeerConnection.setLocalDescription", "setLocalDescription()")}}.
- {{domxref("RTCDataChannel.open_event", "open")}}
- : The underlying data transport for the `RTCDataChannel` has been successfully opened or re-opened.
- {{domxref("RTCIceTransport.selectedcandidatepairchange_event", "selectedcandidatepairchange")}}
- : The currently-selected pair of ICE candidates has changed for the `RTCIceTransport` on which the event is fired.
- {{domxref("RTCPeerConnection.track_event", "track")}}
- : The `track` event, of type {{domxref("RTCTrackEvent")}}, is sent to an {{domxref("RTCPeerConnection")}} when a new track is added to the connection following the successful negotiation of the media's streaming.
- {{domxref("RTCPeerConnection.signalingstatechange_event", "signalingstatechange")}}
- : Sent to the peer connection when its {{domxref("RTCPeerConnection.signalingState", "signalingState")}} has changed. This happens as a result of a call to either {{domxref("RTCPeerConnection.setLocalDescription", "setLocalDescription()")}} or {{domxref("RTCPeerConnection.setRemoteDescription", "setRemoteDescription()")}}.
- {{domxref("RTCDtlsTransport.statechange_event", "statechange")}}
- : The state of the `RTCDtlsTransport` has changed.
- {{domxref("RTCIceTransport.statechange_event", "statechange")}}
- : The state of the `RTCIceTransport` has changed.
- {{domxref("RTCSctpTransport.statechange_event", "statechange")}}
- : The state of the `RTCSctpTransport` has changed.
- {{DOMxRef("DedicatedWorkerGlobalScope.rtctransform_event", "rtctransform")}}
- : An encoded video or audio frame is ready to process using a transform stream in a worker.
#### Types
- {{DOMxRef("RTCSctpTransport.state")}}
- : Indicates the state of an {{DOMxRef("RTCSctpTransport")}} instance.
### Identity and security
These APIs are used to manage user identity and security, in order to authenticate the user for a connection.
- {{DOMxRef("RTCIdentityProvider")}}
- : Enables a user agent to request that an identity assertion be generated or validated.
- {{DOMxRef("RTCIdentityAssertion")}}
- : Represents the identity of the remote peer of the current connection. If no peer has yet been set and verified, this interface returns `null`. Once set, it can't be changed.
- {{DOMxRef("RTCIdentityProviderRegistrar")}}
- : Registers an identity provider (IdP).
- {{DOMxRef("RTCCertificate")}}
- : Represents a certificate that an {{DOMxRef("RTCPeerConnection")}} uses to authenticate.
### Telephony
These interfaces and events are related to interactivity with Public-Switched Telephone Networks (PSTNs). They're primarily used to send tone dialing sounds—or packets representing those tones—across the network to the remote peer.
#### Interfaces
- {{DOMxRef("RTCDTMFSender")}}
- : Manages the encoding and transmission of Dual-Tone Multi-Frequency ({{Glossary("DTMF")}}) signaling for an {{DOMxRef("RTCPeerConnection")}}.
- {{DOMxRef("RTCDTMFToneChangeEvent")}}
- : Used by the {{domxref("RTCDTMFSender.tonechange_event", "tonechange")}} event to indicate that a DTMF tone has either begun or ended. This event does not bubble and is not cancelable.
#### Events
- {{domxref("RTCDTMFSender.tonechange_event", "tonechange")}}
- : Either a new {{Glossary("DTMF")}} tone has begun to play over the connection, or the last tone in the `RTCDTMFSender`'s {{domxref("RTCDTMFSender.toneBuffer", "toneBuffer")}} has been sent and the buffer is now empty. The event's type is {{domxref("RTCDTMFToneChangeEvent")}}.
### Encoded Transforms
These interfaces and events are used to process incoming and outgoing encoded video and audio frames using a transform stream running in a worker.
#### Interfaces
- {{DOMxRef("RTCRtpScriptTransform")}}
- : An interface for inserting transform stream(s) running in a worker into the RTC pipeline.
- {{DOMxRef("RTCRtpScriptTransformer")}}
- : The worker-side counterpart of an `RTCRtpScriptTransform` that passes options from the main thread, along with a readable stream and writable stream that can be used to pipe encoded frames through a {{DOMxRef("TransformStream")}}.
- {{DOMxRef("RTCEncodedVideoFrame")}}
- : Represents an encoded video frame to be transformed in the RTC pipeline.
- {{DOMxRef("RTCEncodedAudioFrame")}}
- : Represents an encoded audio frame to be transformed in the RTC pipeline.
#### Properties
- {{DOMxRef("RTCRtpReceiver.transform")}}
- : A property used to insert a transform stream into the receiver pipeline for incoming encoded video and audio frames.
- {{DOMxRef("RTCRtpSender.transform")}}
- : A property used to insert a transform stream into the sender pipeline for outgoing encoded video and audio frames.
#### Events
- {{DOMxRef("DedicatedWorkerGlobalScope.rtctransform_event", "rtctransform")}}
- : An RTC transform is ready to run in the worker, or an encoded video or audio frame is ready to process.
## Guides
- [Introduction to WebRTC protocols](/en-US/docs/Web/API/WebRTC_API/Protocols)
- : This article introduces the protocols on top of which the WebRTC API is built.
- [WebRTC connectivity](/en-US/docs/Web/API/WebRTC_API/Connectivity)
- : A guide to how WebRTC connections work and how the various protocols and interfaces can be used together to build powerful communication apps.
- [Lifetime of a WebRTC session](/en-US/docs/Web/API/WebRTC_API/Session_lifetime)
- : WebRTC lets you build peer-to-peer communication of arbitrary data, audio, or video—or any combination thereof—into a browser application. In this article, we'll look at the lifetime of a WebRTC session, from establishing the connection all the way through closing the connection when it's no longer needed.
- [Establishing a connection: The perfect negotiation pattern](/en-US/docs/Web/API/WebRTC_API/Perfect_negotiation)
- : **Perfect negotiation** is a design pattern which is recommended for your signaling process to follow, which provides transparency in negotiation while allowing both sides to be either the offerer or the answerer, without significant coding needed to differentiate the two.
- [Signaling and two-way video calling](/en-US/docs/Web/API/WebRTC_API/Signaling_and_video_calling)
- : A tutorial and example which turns a WebSocket-based chat system created for a previous example and adds support for opening video calls among participants. The chat server's WebSocket connection is used for WebRTC signaling.
- [Codecs used by WebRTC](/en-US/docs/Web/Media/Formats/WebRTC_codecs)
- : A guide to the codecs which WebRTC requires browsers to support as well as the optional ones supported by various popular browsers. Included is a guide to help you choose the best codecs for your needs.
- [Using WebRTC data channels](/en-US/docs/Web/API/WebRTC_API/Using_data_channels)
- : This guide covers how you can use a peer connection and an associated {{DOMxRef("RTCDataChannel")}} to exchange arbitrary data between two peers.
- [Using DTMF with WebRTC](/en-US/docs/Web/API/WebRTC_API/Using_DTMF)
- : WebRTC's support for interacting with gateways that link to old-school telephone systems includes support for sending DTMF tones using the {{DOMxRef("RTCDTMFSender")}} interface. This guide shows how to do so.
- [Using WebRTC Encoded Transforms](/en-US/docs/Web/API/WebRTC_API/Using_Encoded_Transforms)
- : This guide shows how a web application can modify incoming and outgoing WebRTC encoded video and audio frames, using a {{DOMxRef("TransformStream")}} running into a worker.
## Tutorials
- [Improving compatibility using WebRTC adapter.js](#interoperability)
- : The WebRTC organization [provides on GitHub the WebRTC adapter](https://github.com/webrtc/adapter/) to work around compatibility issues in different browsers' WebRTC implementations. The adapter is a JavaScript shim which lets your code be written to the specification so that it will "just work" in all browsers with WebRTC support.
- [A simple RTCDataChannel sample](/en-US/docs/Web/API/WebRTC_API/Simple_RTCDataChannel_sample)
- : The {{DOMxRef("RTCDataChannel")}} interface is a feature which lets you open a channel between two peers over which you may send and receive arbitrary data. The API is intentionally similar to the [WebSocket API](/en-US/docs/Web/API/WebSockets_API), so that the same programming model can be used for each.
- [Building an internet connected phone with Peer.js](/en-US/docs/Web/API/WebRTC_API/Build_a_phone_with_peerjs)
- : This tutorial is a step-by-step guide on how to build a phone using Peer.js.
## Specifications
{{Specifications}}
### WebRTC-proper protocols
- [Application Layer Protocol Negotiation for Web Real-Time Communications](https://datatracker.ietf.org/doc/rfc8833/)
- [WebRTC Audio Codec and Processing Requirements](https://datatracker.ietf.org/doc/rfc7874/)
- [RTCWeb Data Channels](https://datatracker.ietf.org/doc/rfc8831/)
- [RTCWeb Data Channel Protocol](https://datatracker.ietf.org/doc/rfc8832/)
- [Web Real-Time Communication (WebRTC): Media Transport and Use of RTP](https://datatracker.ietf.org/doc/rfc8834/)
- [WebRTC Security Architecture](https://datatracker.ietf.org/doc/rfc8827/)
- [Transports for RTCWEB](https://datatracker.ietf.org/doc/rfc8835/)
### Related supporting protocols
- [Interactive Connectivity Establishment (ICE): A Protocol for Network Address Translator (NAT) Traversal for Offer/Answer Protocol](https://datatracker.ietf.org/doc/html/rfc5245)
- [Session Traversal Utilities for NAT (STUN)](https://datatracker.ietf.org/doc/html/rfc5389)
- [URI Scheme for the Session Traversal Utilities for NAT (STUN) Protocol](https://datatracker.ietf.org/doc/html/rfc7064)
- [Traversal Using Relays around NAT (TURN) Uniform Resource Identifiers](https://datatracker.ietf.org/doc/html/rfc7065)
- [An Offer/Answer Model with Session Description Protocol (SDP)](https://datatracker.ietf.org/doc/html/rfc3264)
- [Session Traversal Utilities for NAT (STUN) Extension for Third Party Authorization](https://datatracker.ietf.org/doc/rfc7635/)
## See also
- {{DOMxRef("MediaDevices")}}
- {{DOMxRef("MediaStreamEvent")}}
- {{DOMxRef("MediaStreamTrack")}}
- {{DOMxRef("MessageEvent")}}
- {{DOMxRef("MediaStream")}}
- [Media Capture and Streams API](/en-US/docs/Web/API/Media_Capture_and_Streams_API)
- [Firefox multistream and renegotiation for Jitsi Videobridge](https://hacks.mozilla.org/2015/06/firefox-multistream-and-renegotiation-for-jitsi-videobridge/)
- [Peering Through the WebRTC Fog with SocketPeer](https://hacks.mozilla.org/2015/04/peering-through-the-webrtc-fog-with-socketpeer/)
- [Inside the Party Bus: Building a Web App with Multiple Live Video Streams + Interactive Graphics](https://hacks.mozilla.org/2014/04/inside-the-party-bus-building-a-web-app-with-multiple-live-video-streams-interactive-graphics/)
- [Web media technologies](/en-US/docs/Web/Media)
---
title: Introduction to WebRTC protocols
slug: Web/API/WebRTC_API/Protocols
page-type: guide
---
{{DefaultAPISidebar("WebRTC")}}
This article introduces the protocols on top of which the WebRTC API is built.
## ICE
[Interactive Connectivity Establishment (ICE)](https://en.wikipedia.org/wiki/Interactive_Connectivity_Establishment) is a framework to allow your web browser to connect with peers. There are many reasons why a straight-up connection from Peer A to Peer B won't work. It needs to bypass firewalls that would prevent opening connections, give you a unique address if, as in most situations, your device doesn't have a public IP address, and relay data through a server if your router doesn't allow you to connect directly with peers. ICE uses STUN and/or TURN servers to accomplish this, as described below.
## STUN
[Session Traversal Utilities for NAT (STUN)](https://en.wikipedia.org/wiki/STUN) is a protocol to discover your public address and determine any restrictions in your router that would prevent a direct connection with a peer.
The client will send a request to a STUN server on the Internet, which will reply with the client's public address and whether or not the client is accessible behind the router's NAT.

## NAT
[Network Address Translation (NAT)](https://en.wikipedia.org/wiki/Network_address_translation) is used to give your device a public IP address. A router will have a public IP address and every device connected to the router will have a private IP address. Requests will be translated from the device's private IP to the router's public IP with a unique port. That way you don't need a unique public IP for each device but can still be discovered on the Internet.
Some routers will have restrictions on who can connect to devices on the network. This can mean that even though we have the public IP address found by the STUN server, not just anyone can create a connection. In this situation we need to use TURN.
## TURN
Some routers using NAT employ a restriction called 'Symmetric NAT'. This means the router will only accept connections from peers you've previously connected to.
[Traversal Using Relays around NAT (TURN)](https://en.wikipedia.org/wiki/TURN) is meant to bypass the Symmetric NAT restriction by opening a connection with a TURN server and relaying all information through that server. You would create a connection with a TURN server and tell all peers to send packets to the server which will then be forwarded to you. This obviously comes with some overhead so it is only used if there are no other alternatives.

## SDP
[Session Description Protocol (SDP)](https://en.wikipedia.org/wiki/Session_Description_Protocol) is a standard for describing the multimedia content of the connection such as resolution, formats, codecs, encryption, etc. so that both peers can understand each other once the data is transferring. This is, in essence, the metadata describing the content and not the media content itself.
Technically, then, SDP is not truly a protocol, but a data format used to describe a connection that shares media between devices.
Documenting SDP is well outside the scope of this documentation; however, there are a few things worth noting here.
### Structure
SDP consists of one or more lines of UTF-8 text, each beginning with a one-character type, followed by an equals sign (`"="`), followed by structured text comprising a value or description, whose format depends on the type. The lines of text that begin with a given letter are generally referred to as "_letter_-lines". For example, lines providing media descriptions have the type `"m"`, so those lines are referred to as "m-lines."
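For illustration only, here is a fragment of what SDP text can look like (the values are not from a real session):

```plain
v=0
o=- 4611731400430051336 2 IN IP4 127.0.0.1
s=-
t=0 0
m=audio 9 UDP/TLS/RTP/SAVPF 111
a=rtpmap:111 opus/48000/2
```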
### For more information
To learn more about SDP, see the following useful resources:
- Specification: {{RFC(8866, "SDP: Session Description Protocol")}}
- [IANA registry of SDP parameters](https://www.iana.org/assignments/sip-parameters/sip-parameters.xhtml)
---
title: A simple RTCDataChannel sample
slug: Web/API/WebRTC_API/Simple_RTCDataChannel_sample
page-type: guide
---
{{DefaultAPISidebar("WebRTC")}}
The {{domxref("RTCDataChannel")}} interface is a feature of the [WebRTC API](/en-US/docs/Web/API/WebRTC_API) which lets you open a channel between two peers over which you may send and receive arbitrary data. The API is intentionally similar to the [WebSocket API](/en-US/docs/Web/API/WebSockets_API), so that the same programming model can be used for each.
In this example, we will open an {{domxref("RTCDataChannel")}} connection linking two elements on the same page. While that's obviously a contrived scenario, it's useful for demonstrating the flow of connecting two peers. We'll cover the mechanics of accomplishing the connection and transmitting and receiving data, but we will save the bits about locating and linking to a remote computer for another example.
## The HTML
First, let's take a quick look at the [HTML that's needed](https://github.com/mdn/samples-server/blob/master/s/webrtc-simple-datachannel/index.html). There's nothing incredibly complicated here. First, we have a couple of buttons for establishing and closing the connection:
```html
<button id="connectButton" name="connectButton" class="buttonleft">
Connect
</button>
<button
id="disconnectButton"
name="disconnectButton"
class="buttonright"
disabled>
Disconnect
</button>
```
Then there's a box which contains the text input box into which the user can type a message to transmit, with a button to send the entered text. This {{HTMLElement("div")}} will be the first peer in the channel.
```html
<div class="messagebox">
<label for="message"
>Enter a message:
<input
type="text"
name="message"
id="message"
placeholder="Message text"
inputmode="latin"
size="60"
maxlength="120"
disabled />
</label>
<button id="sendButton" name="sendButton" class="buttonright" disabled>
Send
</button>
</div>
```
Finally, there's the little box into which we'll insert the messages. This {{HTMLElement("div")}} block will be the second peer.
```html
<div class="messagebox" id="receivebox">
<p>Messages received:</p>
</div>
```
## The JavaScript code
While you can just [look at the code itself on GitHub](https://github.com/mdn/samples-server/blob/master/s/webrtc-simple-datachannel/main.js), below we'll review the parts of the code that do the heavy lifting.
### Starting up
When the script is run, we set up a {{domxref("Window/load_event", "load")}} event listener, so that once the page is fully loaded, our `startup()` function is called.
```js
let connectButton = null;
let disconnectButton = null;
let sendButton = null;
let messageInputBox = null;
let receiveBox = null;
let localConnection = null; // RTCPeerConnection for our "local" connection
let remoteConnection = null; // RTCPeerConnection for the "remote"
let sendChannel = null; // RTCDataChannel for the local (sender)
let receiveChannel = null; // RTCDataChannel for the remote (receiver)
function startup() {
connectButton = document.getElementById("connectButton");
disconnectButton = document.getElementById("disconnectButton");
sendButton = document.getElementById("sendButton");
messageInputBox = document.getElementById("message");
receiveBox = document.getElementById("receivebox");
// Set event listeners for user interface widgets
connectButton.addEventListener("click", connectPeers, false);
disconnectButton.addEventListener("click", disconnectPeers, false);
sendButton.addEventListener("click", sendMessage, false);
}

window.addEventListener("load", startup, false);
```
This is quite straightforward. We declare variables and grab references to all the page elements we'll need to access, then set {{domxref("EventTarget/addEventListener", "event listeners")}} on the three buttons.
### Establishing a connection
When the user clicks the "Connect" button, the `connectPeers()` method is called. We're going to break this up and look at it a bit at a time, for clarity.
> **Note:** Even though both ends of our connection will be on the same page, we're going to refer to the one that starts the connection as the "local" one, and to the other as the "remote" end.
#### Set up the local peer
```js
localConnection = new RTCPeerConnection();
sendChannel = localConnection.createDataChannel("sendChannel");
sendChannel.onopen = handleSendChannelStatusChange;
sendChannel.onclose = handleSendChannelStatusChange;
```
The first step is to create the "local" end of the connection. This is the peer that will send out the connection request. The next step is to create the {{domxref("RTCDataChannel")}} by calling {{domxref("RTCPeerConnection.createDataChannel()")}} and set up event listeners to monitor the channel so that we know when it's opened and closed (that is, when the channel is connected or disconnected within that peer connection).
It's important to keep in mind that each end of the channel has its own {{domxref("RTCDataChannel")}} object.
#### Set up the remote peer
```js
remoteConnection = new RTCPeerConnection();
remoteConnection.ondatachannel = receiveChannelCallback;
```
The remote end is set up similarly, except that we don't need to explicitly create an {{domxref("RTCDataChannel")}} ourselves, since we're going to be connected through the channel established above. Instead, we set up a {{domxref("RTCPeerConnection.datachannel_event", "datachannel")}} event handler, which will be called when the data channel is opened and will receive an `RTCDataChannel` object; you'll see this below.
#### Set up the ICE candidates
The next step is to set up each connection with ICE candidate listeners; these will be called when there's a new ICE candidate to tell the other side about.
> **Note:** In a real-world scenario in which the two peers aren't running in the same context, the process is a bit more involved; each side provides, one at a time, a suggested way to connect (for example, UDP, UDP with a relay, TCP, etc.) by calling {{domxref("RTCPeerConnection.addIceCandidate()")}}, and they go back and forth until agreement is reached. But here, we just accept the first offer on each side, since there's no actual networking involved.
```js
localConnection.onicecandidate = (e) =>
!e.candidate ||
remoteConnection.addIceCandidate(e.candidate).catch(handleAddCandidateError);
remoteConnection.onicecandidate = (e) =>
!e.candidate ||
localConnection.addIceCandidate(e.candidate).catch(handleAddCandidateError);
```
We configure each {{domxref("RTCPeerConnection")}} to have an event handler for the {{domxref("RTCPeerConnection.icecandidate_event", "icecandidate")}} event.
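For contrast, in a real-world app where the peers are on separate machines, an `icecandidate` handler wouldn't call `addIceCandidate()` on the other connection directly; it would forward each candidate over the signaling channel instead. A minimal sketch, assuming a hypothetical `signalingChannel` object with a `send()` method (for example, a wrapper around a WebSocket):

```js
localConnection.onicecandidate = (e) => {
  if (e.candidate) {
    // Forward the candidate to the remote peer through our
    // (hypothetical) signaling channel; on receipt, the remote side
    // would pass it to RTCPeerConnection.addIceCandidate().
    signalingChannel.send(
      JSON.stringify({ type: "new-ice-candidate", candidate: e.candidate }),
    );
  }
};
```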
#### Start the connection attempt
The last thing we need to do in order to begin connecting our peers is to create a connection offer.
```js
localConnection
.createOffer()
.then((offer) => localConnection.setLocalDescription(offer))
.then(() =>
remoteConnection.setRemoteDescription(localConnection.localDescription),
)
.then(() => remoteConnection.createAnswer())
.then((answer) => remoteConnection.setLocalDescription(answer))
.then(() =>
localConnection.setRemoteDescription(remoteConnection.localDescription),
)
.catch(handleCreateDescriptionError);
```
Let's go through this line by line and decipher what it means.
1. First, we call the {{domxref("RTCPeerConnection.createOffer()")}} method to create an {{Glossary("SDP")}} (Session Description Protocol) blob describing the connection we want to make. This method optionally accepts an object with constraints the connection is required to meet, such as whether the connection should support audio, video, or both. In our simple example, we don't have any constraints.
2. If the offer is created successfully, we pass the blob along to the local connection's {{domxref("RTCPeerConnection.setLocalDescription()")}} method. This configures the local end of the connection.
3. The next step is to connect the local peer to the remote by telling the remote peer about it. This is done by calling `remoteConnection.`{{domxref("RTCPeerConnection.setRemoteDescription()")}}. Now the `remoteConnection` knows about the connection that's being built. In a real application, this would require a signaling server to exchange the description object.
4. That means it's time for the remote peer to reply. It does so by calling its {{domxref("RTCPeerConnection.createAnswer", "createAnswer()")}} method. This generates a blob of SDP which describes the connection the remote peer is willing and able to establish. This configuration lies somewhere in the union of options that both peers can support.
5. Once the answer has been created, it's passed into the `remoteConnection` by calling {{domxref("RTCPeerConnection.setLocalDescription()")}}. That establishes the remote's end of the connection (which, to the remote peer, is its local end. This stuff can be confusing, but you get used to it). Again, this would normally be exchanged through a signaling server.
6. Finally, the local connection's remote description is set to refer to the remote peer by calling `localConnection`'s {{domxref("RTCPeerConnection.setRemoteDescription()")}}.
7. The `catch()` calls a routine that handles any errors that occur.
> **Note:** Once again, this process is not a real-world implementation; in normal usage, there are two chunks of code running on two machines, interacting and negotiating the connection. A side channel, commonly called a "signaling server," is usually used to exchange the description (which is in **application/sdp** form) between the two peers.
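The chain above uses explicit promises. If you prefer `async`/`await` syntax, the same negotiation can be written as follows; the `negotiate()` wrapper name is just for illustration:

```js
async function negotiate() {
  try {
    // Offer/answer exchange, identical in effect to the promise chain above.
    const offer = await localConnection.createOffer();
    await localConnection.setLocalDescription(offer);
    await remoteConnection.setRemoteDescription(
      localConnection.localDescription,
    );

    const answer = await remoteConnection.createAnswer();
    await remoteConnection.setLocalDescription(answer);
    await localConnection.setRemoteDescription(
      remoteConnection.localDescription,
    );
  } catch (err) {
    handleCreateDescriptionError(err);
  }
}
```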
#### Handling successful peer connection
As each side of the peer-to-peer connection gets linked up, the corresponding success and failure handlers for creating the offer and adding ICE candidates are called. These handlers can do whatever's needed, but in this example, all we need to do is update the user interface:
```js
function handleCreateDescriptionError(error) {
console.log(`Unable to create an offer: ${error.toString()}`);
}
function handleLocalAddCandidateSuccess() {
connectButton.disabled = true;
}
function handleRemoteAddCandidateSuccess() {
disconnectButton.disabled = false;
}
function handleAddCandidateError() {
console.log("Oh noes! addICECandidate failed!");
}
```
The only thing we do here is disable the "Connect" button when the local peer is connected and enable the "Disconnect" button when the remote peer connects.
#### Connecting the data channel
Once the {{domxref("RTCPeerConnection")}} is open, the {{domxref("RTCPeerConnection.datachannel_event", "datachannel")}} event is sent to the remote peer to complete the process of opening the data channel; this invokes our `receiveChannelCallback()` method, which looks like this:
```js
function receiveChannelCallback(event) {
receiveChannel = event.channel;
receiveChannel.onmessage = handleReceiveMessage;
receiveChannel.onopen = handleReceiveChannelStatusChange;
receiveChannel.onclose = handleReceiveChannelStatusChange;
}
```
The {{domxref("RTCPeerConnection.datachannel_event", "datachannel")}} event includes, in its `channel` property, a reference to an {{domxref("RTCDataChannel")}} representing the remote peer's end of the channel. This is saved, and we set up event listeners on the channel for the events we want to handle. Once this is done, our `handleReceiveMessage()` method will be called each time data is received by the remote peer, and the `handleReceiveChannelStatusChange()` method will be called any time the channel's connection state changes, so we can react when the channel is fully opened and when it's closed.
### Handling channel status changes
Both our local and remote peers use a single method to handle events indicating a change in the status of the channel's connection.
When the local peer experiences an open or close event, the `handleSendChannelStatusChange()` method is called:
```js
function handleSendChannelStatusChange(event) {
if (sendChannel) {
const state = sendChannel.readyState;
if (state === "open") {
messageInputBox.disabled = false;
messageInputBox.focus();
sendButton.disabled = false;
disconnectButton.disabled = false;
connectButton.disabled = true;
} else {
messageInputBox.disabled = true;
sendButton.disabled = true;
connectButton.disabled = false;
disconnectButton.disabled = true;
}
}
}
```
If the channel's state has changed to "open", that indicates that we have finished establishing the link between the two peers. The user interface is updated correspondingly: the text input box for the message is enabled and focused so that the user can immediately begin to type, the "Send" and "Disconnect" buttons are enabled now that they're usable, and the "Connect" button is disabled since it's not needed while the connection is open.
If the state has changed to "closed", the opposite set of actions occurs: the input box and "Send" button are disabled, the "Connect" button is enabled so that the user can open a new connection if they wish to do so, and the "Disconnect" button is disabled, since it's not useful when no connection exists.
Our example's remote peer, on the other hand, ignores the status change events, except for logging the event to the console:
```js
function handleReceiveChannelStatusChange(event) {
if (receiveChannel) {
console.log(
`Receive channel's status has changed to ${receiveChannel.readyState}`,
);
}
}
```
The `handleReceiveChannelStatusChange()` method receives as an input parameter the event which occurred; for the `open` and `close` events, this is a generic {{domxref("Event")}} object.
### Sending messages
When the user presses the "Send" button, the `sendMessage()` method we've established as the handler for the button's {{domxref("Element/click_event", "click")}} event is called. That method is simple enough:
```js
function sendMessage() {
const message = messageInputBox.value;
sendChannel.send(message);
messageInputBox.value = "";
messageInputBox.focus();
}
```
First, the text of the message is fetched from the input box's [`value`](/en-US/docs/Web/HTML/Element/input#value) attribute. This is then sent to the remote peer by calling {{domxref("RTCDataChannel.send", "sendChannel.send()")}}. That's all there is to it! The rest of this method is just some user experience sugar — the input box is emptied and re-focused so the user can immediately begin typing another message.
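Because {{domxref("RTCDataChannel.send", "send()")}} accepts strings (as well as binary types), you can also transmit structured data by serializing it first. A minimal sketch follows; the `sendStructuredMessage()` helper is a hypothetical addition, not part of this example's code:

```js
function sendStructuredMessage(text) {
  // Objects must be serialized before sending; the receiving side
  // would call JSON.parse(event.data) in its message handler.
  sendChannel.send(
    JSON.stringify({ type: "chat", text, timestamp: Date.now() }),
  );
}
```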
### Receiving messages
When a "message" event occurs on the remote channel, our `handleReceiveMessage()` method is called as the event handler.
```js
function handleReceiveMessage(event) {
const el = document.createElement("p");
const txtNode = document.createTextNode(event.data);
el.appendChild(txtNode);
receiveBox.appendChild(el);
}
```
This method performs some basic {{Glossary("DOM")}} injection; it creates a new {{HTMLElement("p")}} (paragraph) element, then creates a new {{domxref("Text")}} node containing the message text, which is received in the event's `data` property. This text node is appended as a child of the new element, which is then inserted into the `receiveBox` block, thereby causing it to draw in the browser window.
### Disconnecting the peers
When the user clicks the "Disconnect" button, the `disconnectPeers()` method previously set as that button's handler is called.
```js
function disconnectPeers() {
// Close the RTCDataChannels if they're open.
sendChannel.close();
receiveChannel.close();
// Close the RTCPeerConnections
localConnection.close();
remoteConnection.close();
sendChannel = null;
receiveChannel = null;
localConnection = null;
remoteConnection = null;
// Update user interface elements
connectButton.disabled = false;
disconnectButton.disabled = true;
sendButton.disabled = true;
messageInputBox.value = "";
messageInputBox.disabled = true;
}
```
This starts by closing each peer's {{domxref("RTCDataChannel")}}, then, similarly, each {{domxref("RTCPeerConnection")}}. Then all the saved references to these objects are set to `null` to avoid accidental reuse, and the user interface is updated to reflect the fact that the connection has been closed.
## Next steps
Take a look at the [webrtc-simple-datachannel](https://github.com/mdn/samples-server/tree/master/s/webrtc-simple-datachannel) source code, available on GitHub.
## See also
- [Signaling and Video Calling](/en-US/docs/Web/API/WebRTC_API/Signaling_and_video_calling).
- The [Perfect Negotiation](/en-US/docs/Web/API/WebRTC_API/Perfect_negotiation) pattern.
| 0 |
data/mdn-content/files/en-us/web/api/webrtc_api | data/mdn-content/files/en-us/web/api/webrtc_api/build_a_phone_with_peerjs/index.md | ---
title: Building an Internet-Connected Phone with PeerJS
slug: Web/API/WebRTC_API/Build_a_phone_with_peerjs
page-type: guide
---
{{DefaultAPISidebar("WebRTC")}}
{{NextMenu("Web/API/WebRTC_API/Build_a_phone_with_peerjs/Setup")}}
One of WebRTC's main issues is that it is pretty complicated to use and develop with — handling the signalling service and knowing when to call the right endpoint can get confusing. But there is some good news; [PeerJS](https://peerjs.com/) is a WebRTC framework that abstracts away all of the ICE and signalling logic so that you can focus on the functionality of your application. There are two parts to PeerJS: the client-side framework and the server.
In this series of articles we will create a simple phone application using PeerJS. We'll be using both the server and the client-side framework, but most of our work will involve handling the client-side code.
### Prerequisites
This is an intermediate level tutorial; before attempting it you should already be comfortable with:
- [Vanilla JavaScript](/en-US/docs/Web/JavaScript)
- [Node](https://nodejs.org/en/docs/)
- [Express](/en-US/docs/Learn/Server-side/Express_Nodejs)
- [HTML](/en-US/docs/Web/HTML)
Before you get started, you'll want to make sure you've [installed node](https://nodejs.org/en/download/) and [Yarn](https://classic.yarnpkg.com/en/docs/install) (the instructions in later articles assume Yarn, but you can feel free to use [npm](https://docs.npmjs.com/getting-started/) or another manager if you'd prefer).
> **Note:** If you learn better by following step-by-step code, we've also provided this [tutorial in code](https://github.com/SamsungInternet/WebPhone/tree/master/tutorial), which you can use instead.
### Table of Contents
1. [Setup](/en-US/docs/Web/API/WebRTC_API/Build_a_phone_with_peerjs/Setup)
2. [Connect Peers](/en-US/docs/Web/API/WebRTC_API/Build_a_phone_with_peerjs/Connect_peers)
1. [Get Microphone Permission](/en-US/docs/Web/API/WebRTC_API/Build_a_phone_with_peerjs/Connect_peers/Get_microphone_permission)
2. [Showing and hiding HTML](/en-US/docs/Web/API/WebRTC_API/Build_a_phone_with_peerjs/Connect_peers/Show_hide_html)
3. [Create a Peer Connection](/en-US/docs/Web/API/WebRTC_API/Build_a_phone_with_peerjs/Connect_peers/Create_a_peer_connection)
4. [Creating a Call](/en-US/docs/Web/API/WebRTC_API/Build_a_phone_with_peerjs/Connect_peers/Creating_a_call)
5. [Answer a Call](/en-US/docs/Web/API/WebRTC_API/Build_a_phone_with_peerjs/Connect_peers/Answer_a_call)
6. [End a Call](/en-US/docs/Web/API/WebRTC_API/build_a_phone_with_peerjs/connect_peers/End_a_call)
3. [Deployment and Further Reading](/en-US/docs/Web/API/WebRTC_API/Build_a_phone_with_peerjs/Deployment_and_further_reading)
{{NextMenu("Web/API/WebRTC_API/Build_a_phone_with_peerjs/Setup")}}
| 0 |
data/mdn-content/files/en-us/web/api/webrtc_api/build_a_phone_with_peerjs | data/mdn-content/files/en-us/web/api/webrtc_api/build_a_phone_with_peerjs/build_the_server/index.md | ---
title: Building the server
slug: Web/API/WebRTC_API/Build_a_phone_with_peerjs/Build_the_server
page-type: guide
---
{{DefaultAPISidebar("WebRTC")}}
{{PreviousMenuNext("Web/API/WebRTC_API/Build_a_phone_with_peerjs/Setup", "Web/API/WebRTC_API/Build_a_phone_with_peerjs/Connect_peers")}}
In this article we'll set up the server for our phone app. The server file will look like a regular Express server file, with one difference: the Peer server.
1. First of all, create a file called `server.js` in the same location as the HTML and CSS files you created previously. This is the entry point of our app, as defined in our `package.json` file.
2. You'll need to start your code by requiring the peer server at the top of the `server.js` file, to ensure that we have access to the peer server:
```js
const { ExpressPeerServer } = require("peer");
```
3. You then need to actually create the peer server. Add the following code below your previous line:
```js
const peerServer = ExpressPeerServer(server, {
proxied: true,
debug: true,
path: "/myapp",
ssl: {},
});
```
We use the `ExpressPeerServer` function to create the peer server, passing it some options in the process. The peer server will handle the WebRTC signalling for us, so we don't have to worry about STUN/TURN servers or other protocols.
4. Finally, you'll need to tell your app to use the `peerServer` by calling `app.use(peerServer)`. Your finished `server.js` should include the other necessary dependencies you'd include in a server file, as well as serving the `index.html` file to the root path.
Update `server.js` so that it looks like this:
```js
const express = require("express");
const http = require("http");
const path = require("path");
const app = express();
const server = http.createServer(app);
const { ExpressPeerServer } = require("peer");
const port = process.env.PORT || "8000";
const peerServer = ExpressPeerServer(server, {
proxied: true,
debug: true,
path: "/myapp",
ssl: {},
});
app.use(peerServer);
app.use(express.static(path.join(__dirname)));
app.get("/", (request, response) => {
response.sendFile(`${__dirname}/index.html`);
});
server.listen(port);
console.log(`Listening on: ${port}`);
```
5. You should be able to connect to your app via `localhost` (in our `server.js` we're using port 8000 (defined on line 7) but you may be using another port number). Run `yarn start` (where `start` refers to the script you declared in `package.json` on the previous page) in your terminal. Visit `localhost:8000` in your browser and you should see a page that looks like this:

If you want to learn more about Peer.js, check out the [Peer.js Server repo on GitHub](https://github.com/peers/peerjs-server).
{{PreviousMenuNext("Web/API/WebRTC_API/Build_a_phone_with_peerjs/Setup", "Web/API/WebRTC_API/Build_a_phone_with_peerjs/Connect_peers")}}
| 0 |
data/mdn-content/files/en-us/web/api/webrtc_api/build_a_phone_with_peerjs | data/mdn-content/files/en-us/web/api/webrtc_api/build_a_phone_with_peerjs/deployment_and_further_reading/index.md | ---
title: Deployment and further reading
slug: Web/API/WebRTC_API/Build_a_phone_with_peerjs/Deployment_and_further_reading
page-type: guide
---
{{DefaultAPISidebar("WebRTC")}}
{{PreviousMenu("Web/API/WebRTC_API/Build_a_phone_with_peerjs/Connect_peers/End_a_call")}}
## Deployment
The easiest place to deploy this app would be [Glitch](https://glitch.com/), since you don't have to fiddle with configuring the port for the peer server.
## Further Reading
- [PeerJS](https://peerjs.com/)
- [WebRTC](/en-US/docs/Web/API/WebRTC_API)
- [PeerJS Server](https://github.com/peers/peerjs-server)
- [A similar video tutorial with video](https://www.youtube.com/watch?v=OOrBcpwelPY)
- [The code tutorial](https://github.com/SamsungInternet/WebPhone/tree/master/tutorial)
{{PreviousMenu("Web/API/WebRTC_API/Build_a_phone_with_peerjs/Connect_peers/End_a_call")}}
| 0 |
data/mdn-content/files/en-us/web/api/webrtc_api/build_a_phone_with_peerjs | data/mdn-content/files/en-us/web/api/webrtc_api/build_a_phone_with_peerjs/connect_peers/index.md | ---
title: Connecting the peers
slug: Web/API/WebRTC_API/Build_a_phone_with_peerjs/Connect_peers
page-type: guide
---
{{DefaultAPISidebar("WebRTC")}}
{{PreviousMenuNext("Web/API/WebRTC_API/Build_a_phone_with_peerjs/Build_the_server", "Web/API/WebRTC_API/Build_a_phone_with_peerjs/Connect_peers/Get_microphone_permission")}}
In the last article we set up our server, but it doesn't do anything yet because we are not serving anything. This is the part you've been waiting for — actually creating the client-side peer connection and call logic. This is going to be an involved process, but we've split it into numerous subsections so you can tackle the different parts in easy bite-sized chunks.
1. First up, create a `script.js` file in the same location as the others — this is where all your logic will live.
2. We need to create a peer object with an ID. The ID will be used to connect two peers together and if you don't create one, one will be assigned to the peer. Add the following to `script.js`:
```js
const peer = new Peer(
`${Math.floor(Math.random() * 2 ** 18)
.toString(36)
.padStart(4, 0)}`,
{
host: location.hostname,
debug: 1,
path: "/myapp",
},
);
```
3. You'll then need to attach the peer to the window so that it's accessible. Add the following line below your previous code:
```js
window.peer = peer;
```
4. In another terminal window, start the peer server by running the following command inside the root of your phone app directory:
```bash
peerjs --port 443 --key peerjs --path /myapp
```
This command starts a peer server very similar to the one we created in the last article; this is the server our client-side code will connect to while developing. In order for the browser to connect to the running peer server, we need to tell it how; that's what the options passed to the `Peer` constructor in step 2 do.
{{PreviousMenuNext("Web/API/WebRTC_API/Build_a_phone_with_peerjs/Build_the_server", "Web/API/WebRTC_API/Build_a_phone_with_peerjs/Connect_peers/Get_microphone_permission")}}
| 0 |
data/mdn-content/files/en-us/web/api/webrtc_api/build_a_phone_with_peerjs/connect_peers | data/mdn-content/files/en-us/web/api/webrtc_api/build_a_phone_with_peerjs/connect_peers/end_a_call/index.md | ---
title: Ending a call
slug: Web/API/WebRTC_API/build_a_phone_with_peerjs/connect_peers/End_a_call
page-type: guide
---
{{DefaultAPISidebar("WebRTC")}}
{{PreviousMenuNext("Web/API/WebRTC_API/Build_a_phone_with_peerjs/Connect_peers/Answer_a_call", "Web/API/WebRTC_API/Build_a_phone_with_peerjs/Deployment_and_further_reading")}}
You've nearly finished! The last thing you want to do is ensure your callers have a way of ending a call. The most graceful way of doing this is to close the connection using the `close()` function, which you can do in an event listener for the hang up button.
1. Add the following to the bottom of your `script.js` file:
```js
const hangUpBtn = document.querySelector(".hangup-btn");
hangUpBtn.addEventListener("click", () => {
conn.close();
showCallContent();
});
```
2. When the connection has been closed, you also want to display the correct HTML content, so you can just call your `showCallContent()` function. Within the `call` event, you also want to ensure the remote browser is updated. To achieve this, add another event listener within the `peer.on("call", (call) => { ... })` event listener, inside the conditional block.
```js
conn.on("close", () => {
showCallContent();
});
```
This way, if the person who initiated the call clicks "Hang up" first, both browsers are still updated with the new state. (The complete handler is sketched after this list.)
3. Test out your app again, and try closing a call.
> **Note:** The `on('close')` event that is called on the `conn` variable isn't available in Firefox yet; this just means that in Firefox each caller will have to hang up individually.
> **Warning:** The way we've currently coded things means that when a connection is closed, both browsers will be updated **only** if the person who started the call presses "Hang up" first. If the person who answered the call clicks "Hang up" first, the other caller will also have to click "Hang up" to see the correct HTML.
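For reference, here's a sketch of the complete `call` handler from the "Answering a Call" article with the `close` listener from step 2 in place:

```js
peer.on("call", (call) => {
  const answerCall = confirm("Do you want to answer?");
  if (answerCall) {
    call.answer(window.localStream);
    showConnectedContent();
    call.on("stream", (stream) => {
      window.remoteAudio.srcObject = stream;
      window.remoteAudio.autoplay = true;
      window.peerStream = stream;
    });
    // Update this browser's UI when the other side closes the connection
    conn.on("close", () => {
      showCallContent();
    });
  } else {
    console.log("call denied");
  }
});
```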
{{PreviousMenuNext("Web/API/WebRTC_API/Build_a_phone_with_peerjs/Connect_peers/Answer_a_call", "Web/API/WebRTC_API/Build_a_phone_with_peerjs/Deployment_and_further_reading")}}
| 0 |
data/mdn-content/files/en-us/web/api/webrtc_api/build_a_phone_with_peerjs/connect_peers | data/mdn-content/files/en-us/web/api/webrtc_api/build_a_phone_with_peerjs/connect_peers/get_microphone_permission/index.md | ---
title: Getting browser microphone permission
slug: Web/API/WebRTC_API/Build_a_phone_with_peerjs/Connect_peers/Get_microphone_permission
page-type: guide
---
{{DefaultAPISidebar("WebRTC")}}
{{PreviousMenuNext("Web/API/WebRTC_API/build_a_phone_with_peerjs/connect_peers", "Web/API/WebRTC_API/Build_a_phone_with_peerjs/Connect_peers/Show_hide_html")}}
After you've created the peer, you'll want to get the browser's permission to access the microphone. We'll be using the [`getUserMedia()`](/en-US/docs/Web/API/MediaDevices/getUserMedia) method on the [`navigator.mediaDevices`](/en-US/docs/Web/API/Navigator/mediaDevices) object.
The `getUserMedia()` method takes a `constraints` object that specifies which media types are needed. `getUserMedia()` returns a promise which, when successfully fulfilled, resolves to a [`MediaStream`](/en-US/docs/Web/API/MediaStream) object. In our case this is going to contain the audio from our stream. If the promise isn't successfully resolved, you'll want to catch and display the error.
1. Add the following code to the bottom of your `script.js` file:
```js
function getLocalStream() {
navigator.mediaDevices
.getUserMedia({ video: false, audio: true })
.then((stream) => {
window.localStream = stream; // A
window.localAudio.srcObject = stream; // B
window.localAudio.autoplay = true; // C
})
.catch((err) => {
console.error(`you got an error: ${err}`);
});
}
```
Let's explain the most important lines:
- `window.localStream = stream` attaches the `MediaStream` object (received as `stream` when the promise is fulfilled) to the window as the `localStream`.
- `window.localAudio.srcObject = stream` sets the `srcObject` property of the [`<audio>` element](/en-US/docs/Web/HTML/Element/audio) with the ID `localAudio` to the `MediaStream` returned by the promise, so that it will play our stream.
- `window.localAudio.autoplay = true` sets the `autoplay` attribute of the `<audio>` element to true, so that the audio plays automatically.
> **Warning:** If you've done some sleuthing online, you may have come across [`navigator.getUserMedia`](/en-US/docs/Web/API/Navigator/getUserMedia) and assumed you can use that instead of `navigator.mediaDevices.getUserMedia`. You'd be wrong. The former is a deprecated method, which requires callbacks as well as constraints as arguments. The latter uses a promise so you don't need to use callbacks.
2. Try calling your `getLocalStream` function by adding the following line at the bottom of your code:
```js
getLocalStream();
```
3. Refresh your app, which should still be running at `localhost:8000`; you should see the following permission pop up:

4. Plug in some headphones before you allow the microphone usage, so that when you unmute yourself later you don't get any feedback. If you didn't see the permission prompt, open the inspector to see if you have any errors. Make sure your JavaScript file is correctly linked to your `index.html` too.
This is what it should all look like together:
```js
/* global Peer */
/**
* Gets the local audio stream of the current caller
* @returns {void}
*/
function getLocalStream() {
navigator.mediaDevices
.getUserMedia({ video: false, audio: true })
.then((stream) => {
window.localStream = stream;
window.localAudio.srcObject = stream;
window.localAudio.autoplay = true;
})
.catch((err) => {
console.error(`you got an error: ${err}`);
});
}
getLocalStream();
```
{{PreviousMenuNext("Web/API/WebRTC_API/Build_a_phone_with_peerjs/Connect_peers", "Web/API/WebRTC_API/Build_a_phone_with_peerjs/Connect_peers/Show_hide_html")}}
| 0 |
data/mdn-content/files/en-us/web/api/webrtc_api/build_a_phone_with_peerjs/connect_peers | data/mdn-content/files/en-us/web/api/webrtc_api/build_a_phone_with_peerjs/connect_peers/show_hide_html/index.md | ---
title: Showing and hiding HTML
slug: Web/API/WebRTC_API/Build_a_phone_with_peerjs/Connect_peers/Show_hide_html
page-type: guide
---
{{DefaultAPISidebar("WebRTC")}}
{{PreviousMenuNext("Web/API/WebRTC_API/Build_a_phone_with_peerjs/Connect_peers/Get_microphone_permission", "Web/API/WebRTC_API/Build_a_phone_with_peerjs/Connect_peers/Create_a_peer_connection")}}
Alright, so you've got the microphone permissions set up. The next step is to make sure each user knows what their peer ID is so that they can make connections. The PeerJS framework gives us a number of events we can listen for on the peer we created earlier.
1. Let's use the `open` event to create a listener that displays the peer's ID when it is open. Add the following code to the bottom of `script.js`:
```js
peer.on("open", () => {
window.caststatus.textContent = `Your device ID is: ${peer.id}`;
});
```
Here you're replacing the text in the HTML element with the ID `caststatus`.
2. Try reloading the app in your browser. Instead of `connecting...`, you should see `Your device ID is: <peer ID>`.

3. While you're here, you may as well create some functions to display and hide various content, which you'll use later. There are two functions you should create, `showCallContent()` and `showConnectedContent()`; the first will be responsible for showing the call button and peer ID, and the second for showing the hang up button and audio elements, when appropriate.
```js
const audioContainer = document.querySelector(".call-container");
// Displays the call button and peer ID
function showCallContent() {
window.caststatus.textContent = `Your device ID is: ${peer.id}`;
callBtn.hidden = false;
audioContainer.hidden = true;
}
// Displays the audio controls and correct copy
function showConnectedContent() {
window.caststatus.textContent = "You're connected";
callBtn.hidden = true;
audioContainer.hidden = false;
}
```
{{PreviousMenuNext("Web/API/WebRTC_API/Build_a_phone_with_peerjs/Connect_peers/Get_microphone_permission", "Web/API/WebRTC_API/Build_a_phone_with_peerjs/Connect_peers/Create_a_peer_connection")}}
| 0 |
data/mdn-content/files/en-us/web/api/webrtc_api/build_a_phone_with_peerjs/connect_peers | data/mdn-content/files/en-us/web/api/webrtc_api/build_a_phone_with_peerjs/connect_peers/create_a_peer_connection/index.md | ---
title: Creating a peer connection
slug: Web/API/WebRTC_API/Build_a_phone_with_peerjs/Connect_peers/Create_a_peer_connection
page-type: guide
---
{{DefaultAPISidebar("WebRTC")}}
{{PreviousMenuNext("Web/API/WebRTC_API/Build_a_phone_with_peerjs/Connect_peers/Show_hide_html", "Web/API/WebRTC_API/Build_a_phone_with_peerjs/Connect_peers/Creating_a_call")}}
Next, you want to ensure your users have a way of connecting with their peers. In order to connect two peers, you'll need the peer ID for one of them.
1. Let's create a variable to contain the ID, and a function to request that the user enters it that we'll call later. Add this to the bottom of `script.js`:
```js
let code;
function getStreamCode() {
code = window.prompt("Please enter the sharing code");
}
```
The [`window.prompt()`](/en-US/docs/Web/API/Window/prompt) method provides a convenient way of getting the relevant peer ID — you can use this when you want to collect the peer ID needed to create the connection.
2. Using the PeerJS framework, you'll want to connect the local peer to the remote peer. PeerJS gives us the `connect()` function, which takes the peer ID of the peer you want to connect to. Add this block below your previous code:
```js
let conn;
function connectPeers() {
conn = peer.connect(code);
}
```
3. When a connection is created, let's use the PeerJS framework's `on('connection')` event to save the remote peer's connection. The listener's callback accepts a `connection` object, which is an instance of `DataConnection` (a wrapper around WebRTC's [`RTCDataChannel`](/en-US/docs/Web/API/RTCDataChannel)); within this callback you'll want to assign it to the `conn` variable you declared outside of the function in the previous step, so that you can use it later. Add the following below your previous code:
```js
peer.on("connection", (connection) => {
conn = connection;
});
```
{{PreviousMenuNext("Web/API/WebRTC_API/Build_a_phone_with_peerjs/Connect_peers/Show_hide_html", "Web/API/WebRTC_API/Build_a_phone_with_peerjs/Connect_peers/Creating_a_call")}}
| 0 |
data/mdn-content/files/en-us/web/api/webrtc_api/build_a_phone_with_peerjs/connect_peers | data/mdn-content/files/en-us/web/api/webrtc_api/build_a_phone_with_peerjs/connect_peers/answer_a_call/index.md | ---
title: Answering a Call
slug: Web/API/WebRTC_API/Build_a_phone_with_peerjs/Connect_peers/Answer_a_call
page-type: guide
---
{{DefaultAPISidebar("WebRTC")}}
{{PreviousMenuNext("Web/API/WebRTC_API/Build_a_phone_with_peerjs/Connect_peers/Creating_a_call", "Web/API/WebRTC_API/Build_a_phone_with_peerjs/Connect_peers/End_a_call")}}
Now our users can make a call, but they can't answer one. Let's add the next piece of the puzzle so that users can answer calls made to them.
1. The PeerJS framework makes the `on('call')` event available, so let's use it here. Add this to the bottom of `script.js`:
```js
peer.on("call", (call) => {
const answerCall = confirm("Do you want to answer?");
});
```
First, we prompt the user to answer with a confirm prompt. This will show a window on the screen (as shown in the image) from which the user can select "OK" or "Cancel" — this maps to a returned boolean value. When you press "Call" in your browser, the following prompt should appear:

> **Warning:** Since we're using a `confirm` prompt to ask the user if they want to answer the call, it's important that the browser and tab that's being called is "active", which means the window shouldn't be minimized, and the tab should be on screen and have the mouse's focus somewhere inside it. Ideally, in a production version of this app you'd create your own modal window in HTML which wouldn't have these limitations.
2. Let's flesh out this event listener. Update it as follows:
```js
peer.on("call", (call) => {
const answerCall = confirm("Do you want to answer?");
if (answerCall) {
call.answer(window.localStream); // A
showConnectedContent(); // B
call.on("stream", (stream) => {
// C
window.remoteAudio.srcObject = stream;
window.remoteAudio.autoplay = true;
window.peerStream = stream;
});
} else {
console.log("call denied"); // D
}
});
```
Let's walk through the most important parts of this code:
- `call.answer(window.localStream)`: if `answerCall` is `true`, you'll want to call PeerJS's `answer()` function on the call to create an answer, passing it the local stream.
- `showConnectedContent()`: Similar to what you did in the call button event listener, you want to ensure the person being called sees the correct HTML content.
- Everything in the `call.on("stream", (stream) => { ... })` block is exactly the same as it is in the call button's event listener. The reason you need to add it here too is so that the browser is also updated for the person answering the call.
- If the person denies the call, we're just going to log a message to the console.
3. The code you have now is enough for you to create a call and answer it. Refresh your browsers and test it out. You'll want to make sure that both browsers have the console open or else you won't get the prompt to answer the call. Click call, submit the peer ID for the other browser and then answer the call. The final page should look like this:

{{PreviousMenuNext("Web/API/WebRTC_API/Build_a_phone_with_peerjs/Connect_peers/Creating_a_call", "Web/API/WebRTC_API/Build_a_phone_with_peerjs/Connect_peers/End_a_call")}}
| 0 |
data/mdn-content/files/en-us/web/api/webrtc_api/build_a_phone_with_peerjs/connect_peers | data/mdn-content/files/en-us/web/api/webrtc_api/build_a_phone_with_peerjs/connect_peers/creating_a_call/index.md | ---
title: Creating a Call
slug: Web/API/WebRTC_API/Build_a_phone_with_peerjs/Connect_peers/Creating_a_call
page-type: guide
---
{{DefaultAPISidebar("WebRTC")}}
{{PreviousMenuNext("Web/API/WebRTC_API/Build_a_phone_with_peerjs/Connect_peers/Create_a_peer_connection", "Web/API/WebRTC_API/Build_a_phone_with_peerjs/Connect_peers/Answer_a_call")}}
Exciting times — now you're going to give your users the ability to create calls.
1. First of all, get a reference to the "Call" button that's defined in the HTML, by adding the following to the bottom of `script.js`:
```js
const callBtn = document.querySelector(".call-btn");
```
2. When a caller clicks "Call" you'll want to ask them for the peer ID of the peer they want to call (which we will store in the `code` variable in `getStreamCode()`) and then you'll want to create a connection with that code. Add the following below your previous code:
```js
callBtn.addEventListener("click", () => {
getStreamCode();
connectPeers();
const call = peer.call(code, window.localStream); // A
call.on("stream", (stream) => {
// B
window.remoteAudio.srcObject = stream; // C
window.remoteAudio.autoplay = true; // D
    window.peerStream = stream; // E
    showConnectedContent(); // F
});
});
```
Let's walk through this code:
- `const call = peer.call(code, window.localStream)`: This will create a call with the `code` and `window.localStream` we've previously assigned. Note that the `localStream` will be the user's own stream; so for caller A it'll be A's stream, and for caller B, B's own stream.
- `call.on('stream', (stream) => {`: PeerJS gives us a `stream` event which you can use on the `call` that you've created. When a call starts streaming, you need to ensure that the remote stream coming from the call is assigned to the correct HTML elements and window; this is where you'll do that.
- The anonymous function takes a `MediaStream` object as an argument, which you then have to set to your window's HTML like you've done before. Here we get your remote `<audio>` element and assign the stream passed to the function to the `srcObject` property.
- Ensure the element's `autoplay` attribute is also set to `true`.
- Ensure that the window's `peerStream` is set to the stream passed to the function.
- Finally you want to show the correct content, so call the `showConnectedContent()` function you created earlier.
3. To test this out, open `localhost:8000` in two browser windows and click Call inside one of them. You should see this:

If you submit the other peer's ID, the call will be connected!
This is all working so far, but we need to give the other browser the chance to answer or decline the call. We'll do that next.
{{PreviousMenuNext("Web/API/WebRTC_API/Build_a_phone_with_peerjs/Connect_peers/Create_a_peer_connection", "Web/API/WebRTC_API/Build_a_phone_with_peerjs/Connect_peers/Answer_a_call")}}
| 0 |
data/mdn-content/files/en-us/web/api/webrtc_api/build_a_phone_with_peerjs | data/mdn-content/files/en-us/web/api/webrtc_api/build_a_phone_with_peerjs/setup/index.md | ---
title: Setup
slug: Web/API/WebRTC_API/Build_a_phone_with_peerjs/Setup
page-type: guide
---
{{DefaultAPISidebar("WebRTC")}}
{{PreviousMenuNext("Web/API/WebRTC_API/Build_a_phone_with_peerjs", "Web/API/WebRTC_API/Build_a_phone_with_peerjs/Build_the_server")}}
So let's get started by setting up the basis for our WebRTC-powered phone app.
1. First find a sensible place on your local file structure and run `mkdir audio_app` and then `cd audio_app` to create a directory to contain your app and enter into it.
2. Next, create a new app by running `yarn init`. Follow the prompts, providing a name, version, description, etc. to your project.
3. Next, install the required dependencies using the following commands:
- [Express](https://expressjs.com/): `yarn add express`
- [PeerJS](https://peerjs.com/docs/): `yarn add peerjs`
- [Peer](https://github.com/peers/peerjs-server): `yarn add peer`
Peer will be used for the peer server and PeerJS will be used to access the PeerJS API and framework. Your `package.json` should look something like this when you've finished installing the dependencies:
```json
{
"name": "audio_app",
"version": "1.0.0",
"description": "An audio app using WebRTC",
"scripts": {
"start": "node server.js",
"test": "echo \"Error: no test specified\" && exit 1"
},
"keywords": [],
"author": "Lola Odelola",
"license": "MIT",
"dependencies": {
"express": "^4.17.1",
"peer": "^0.5.3",
"peerjs": "^1.3.1"
}
}
```
4. To finish the setup, you'll want to copy the following [HTML](https://gist.github.com/lolaodelola/578d692e4700dfe2c9d239c80bbdbabc) and [CSS](https://gist.github.com/lolaodelola/b4498288b86ddce995603546a64abb29) files into the root of your project folder. You can name both files 'index', so the HTML file will be 'index.html' and the CSS file will be 'index.css'. You won't need to modify these much in the articles that follow (a minimal sketch of the markup is shown after this list).
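If you'd like a picture of what those files provide before downloading them, here is a minimal, illustrative skeleton of the markup; the element IDs and classes (`caststatus`, `localAudio`, `remoteAudio`, `call-btn`, `hangup-btn`, `call-container`) are the ones the tutorial's scripts reference, while the real gist files are more complete:

```html
<!doctype html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <title>Web Phone</title>
    <link rel="stylesheet" href="index.css" />
  </head>
  <body>
    <p id="caststatus">connecting...</p>
    <button class="call-btn">Call</button>
    <div class="call-container" hidden>
      <audio id="localAudio" controls></audio>
      <audio id="remoteAudio" controls></audio>
      <button class="hangup-btn">Hang up</button>
    </div>
    <!-- The CDN URL below is an assumption; serve or bundle PeerJS however you prefer -->
    <script src="https://unpkg.com/peerjs@1.3.1/dist/peerjs.min.js"></script>
    <script src="script.js"></script>
  </body>
</html>
```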
{{PreviousMenuNext("Web/API/WebRTC_API/Build_a_phone_with_peerjs", "Web/API/WebRTC_API/Build_a_phone_with_peerjs/Build_the_server")}}
| 0 |
data/mdn-content/files/en-us/web/api/webrtc_api | data/mdn-content/files/en-us/web/api/webrtc_api/signaling_and_video_calling/webrtc_-_ice_candidate_exchange.svg | <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 918 774" width="1224" height="1032"><defs><marker orient="auto" overflow="visible" markerUnits="strokeWidth" id="a" viewBox="-1 -4 10 8" markerWidth="10" markerHeight="8" color="#000"><path d="M8 0L0-3v6z" fill="currentColor" stroke="currentColor"/></marker></defs><g fill="none"><path fill="#fff" d="M0 0h918v774H0z"/><path fill="#7465dc" fill-opacity=".5" d="M9 81h360v657H9z"/><path stroke="#000" stroke-linecap="round" stroke-linejoin="round" d="M9 81h360v657H9z"/><path fill="#62d6ac" fill-opacity=".5" d="M549 81h360v657H549z"/><path stroke="#000" stroke-linecap="round" stroke-linejoin="round" d="M549 81h360v657H549z"/><path fill="#be844a" fill-opacity=".5" d="M369 81h180v657H369z"/><path stroke="#000" stroke-linecap="round" stroke-linejoin="round" d="M369 81h180v657H369z"/><path fill="#62d6ac" fill-opacity=".5" d="M549 9h360v72H549z"/><path stroke="#000" stroke-linecap="round" stroke-linejoin="round" d="M549 9h360v72H549z"/><text transform="translate(554 31.5)" fill="#000"><tspan font-family="Open Sans" font-size="20" font-weight="bold" x="109.692" y="21" textLength="130.615">Priya (Callee)</tspan></text><path fill="#be844a" fill-opacity=".5" d="M369 9h180v72H369z"/><path stroke="#000" stroke-linecap="round" stroke-linejoin="round" d="M369 9h180v72H369z"/><text transform="translate(374 31.5)" fill="#000"><tspan font-family="Open Sans" font-size="20" font-weight="bold" x="5.161" y="21" textLength="159.678">Signaling Server</tspan></text><path fill="#7465dc" fill-opacity=".5" d="M9 9h360v72H9z"/><path stroke="#000" stroke-linecap="round" stroke-linejoin="round" d="M9 9h360v72H9z"/><text transform="translate(14 31.5)" fill="#000"><tspan font-family="Open Sans" font-size="20" font-weight="bold" x="103.428" y="21" textLength="143.145">Naomi (Caller)</tspan></text><path fill="#fff" fill-opacity=".503" d="M189 81h180v657H189z"/><path stroke="#000" stroke-linecap="round" stroke-linejoin="round" d="M189 81h180v657H189z"/><path fill="#fff" fill-opacity=".503" d="M549 81h180v657H549z"/><path stroke="#000" stroke-linecap="round" stroke-linejoin="round" d="M549 81h180v657H549z"/><path fill="#c9ffff" d="M27 280.68h144v80H27z"/><path stroke="#000" stroke-linecap="round" stroke-linejoin="round" d="M27 280.68h144v80H27z"/><text transform="translate(32 285.68)" fill="#000"><tspan font-family="Open Sans" font-size="10" font-weight="500" x="0" y="11" textLength="129.526">Receives the candidate and </tspan><tspan font-family="Open Sans" font-size="10" font-weight="500" x="0" y="25" textLength="74.072">sends it to Priya</tspan><tspan font-family="Open Sans" font-size="10" font-weight="500" x="73.975" y="25" textLength="1.699">'</tspan><tspan font-family="Open Sans" font-size="10" font-weight="500" x="75.376" y="25" textLength="35.063">s client </tspan><tspan font-family="Open Sans" font-size="10" font-weight="500" x="0" y="39" textLength="13.75">thr</tspan><tspan font-family="Open Sans" font-size="10" font-weight="500" x="13.55" y="39" textLength="119.78">ough the signaling server </tspan><tspan font-family="Open Sans" font-size="10" font-weight="500" x="0" y="53" textLength="21.089">as a </tspan><tspan font-family="Courier" font-size="10" font-weight="500" x="21.089" y="53" textLength="54.009">“new-ice-</tspan><tspan font-family="Courier" font-size="10" font-weight="500" x="0" y="67" 
textLength="60.01">candidate”</tspan><tspan font-family="Open Sans" font-size="10" font-weight="500" x="60.01" y="67" textLength="43.701"> message</tspan></text><path fill="#fff5c8" d="M27 261h144v19.68H27z"/><path stroke="#000" stroke-linecap="round" stroke-linejoin="round" d="M27 261h144v19.68H27z"/><text transform="translate(32 265.84)" fill="#000"><tspan font-family="Courier" font-size="8" font-weight="500" x="6.99" y="8" textLength="120.02">handleICECandidateEvent()</tspan></text><path fill="#c9ffff" d="M207 126h144v52H207z"/><path stroke="#000" stroke-linecap="round" stroke-linejoin="round" d="M207 126h144v52H207z"/><text transform="translate(212 131)" fill="#000"><tspan font-family="Open Sans" font-size="10" font-weight="500" x="0" y="11" textLength="28.721">Gener</tspan><tspan font-family="Open Sans" font-size="10" font-weight="500" x="28.521" y="11" textLength="97.397">ate an ICE candidate </tspan><tspan font-family="Open Sans" font-size="10" font-weight="500" x="0" y="25" textLength="4.082">r</tspan><tspan font-family="Open Sans" font-size="10" font-weight="500" x="3.882" y="25" textLength="15.82">epr</tspan><tspan font-family="Open Sans" font-size="10" font-weight="500" x="19.502" y="25" textLength="46.123">esented b</tspan><tspan font-family="Open Sans" font-size="10" font-weight="500" x="65.425" y="25" textLength="43.33">y an SDP </tspan><tspan font-family="Open Sans" font-size="10" font-weight="500" x="0" y="39" textLength="26.528">string</tspan></text><path marker-end="url(#a)" stroke="#000" stroke-linecap="round" stroke-linejoin="round" stroke-dasharray="1,4" d="M207 178l-100.15 76.967"/><text transform="translate(59 194)" fill="#000"><tspan font-family="Open Sans" font-size="10" font-weight="500" x="0" y="11" textLength="31.108">Event: </tspan><tspan font-family="Courier" font-size="10" font-weight="500" x="31.108" y="11" textLength="72.012">icecandidate</tspan></text><path fill="#c9ffff" d="M387 279h144v66H387z"/><path stroke="#000" stroke-linecap="round" stroke-linejoin="round" d="M387 279h144v66H387z"/><text transform="translate(392 284)" fill="#000"><tspan font-family="Open Sans" font-size="10" font-weight="500" x="0" y="11" textLength="35.313">Receive</tspan><tspan font-family="Courier" font-size="10" font-weight="500" x="35.313" y="11" textLength="60.01"> “new-ice-</tspan><tspan font-family="Courier" font-size="10" font-weight="500" x="0" y="25" textLength="60.01">candidate”</tspan><tspan font-family="Open Sans" font-size="10" font-weight="500" x="60.01" y="25" textLength="66.724"> message and </tspan><tspan font-family="Open Sans" font-size="10" font-weight="500" x="0" y="39" textLength="30.933">forwar</tspan><tspan font-family="Open Sans" font-size="10" font-weight="500" x="30.732" y="39" textLength="52.783">d it to Priya</tspan></text><path fill="#fff5c8" d="M387 261h144v18H387z"/><path stroke="#000" stroke-linecap="round" stroke-linejoin="round" d="M387 261h144v18H387z"/><text transform="translate(392 264)" fill="#000"><tspan font-family="Courier" font-size="10" font-weight="500" x="30.994" y="10" textLength="72.012">on.message()</tspan></text><path marker-end="url(#a)" stroke="#000" stroke-linecap="round" stroke-linejoin="round" stroke-dasharray="4,4" d="M171 270.84l206.1-.802"/><text transform="translate(197.427 268.103)" fill="#000"><tspan font-family="Open Sans" font-size="10" font-weight="500" x=".446" y="11" textLength="46.089">Message: </tspan><tspan font-family="Courier" font-size="10" font-weight="500" x="46.535" y="11" 
textLength="114.019">“new-ice-candidate”</tspan></text><path marker-end="url(#a)" stroke="#000" stroke-linecap="round" stroke-linejoin="round" stroke-dasharray="4,4" d="M531 270h206.1"/><text transform="translate(560 267.56)" fill="#000"><tspan font-family="Open Sans" font-size="10" font-weight="500" x=".196" y="11" textLength="46.089">Message: </tspan><tspan font-family="Courier" font-size="10" font-weight="500" x="46.285" y="11" textLength="108.018">“new-ice-candidate</tspan><tspan font-family="Open Sans" font-size="10" font-weight="500" x="154.303" y="11" textLength="3.501">”</tspan></text><path fill="#c9ffff" d="M747 279h144v144H747z"/><path stroke="#000" stroke-linecap="round" stroke-linejoin="round" d="M747 279h144v144H747z"/><text transform="translate(752 284)" fill="#000"><tspan font-family="Open Sans" font-size="10" font-weight="500" x="1" y="11" textLength="8.379">1.</tspan><tspan font-family="Open Sans" font-size="10" font-weight="500" x="11" y="11" textLength="10.391">Cr</tspan><tspan font-family="Open Sans" font-size="10" font-weight="500" x="21.19" y="11" textLength="37.207">eate an </tspan><tspan font-family="Courier" font-size="10" font-weight="500" x="10.95" y="25" textLength="90.015">RTCIceCandidate</tspan><tspan font-family="Open Sans" font-size="10" font-weight="500" x="100.965" y="25" textLength="33.794"> object </tspan><tspan font-family="Open Sans" font-size="10" font-weight="500" x="10.95" y="39" textLength="77.134">using the SDP pr</tspan><tspan font-family="Open Sans" font-size="10" font-weight="500" x="87.884" y="39" textLength="6.04">o</tspan><tspan font-family="Open Sans" font-size="10" font-weight="500" x="93.723" y="39" textLength="39.268">vided in </tspan><tspan font-family="Open Sans" font-size="10" font-weight="500" x="10.95" y="53" textLength="66.484">the candidate.</tspan><tspan font-family="Open Sans" font-size="10" font-weight="500" x="1" y="67" textLength="8.379">2.</tspan><tspan font-family="Open Sans" font-size="10" font-weight="500" x="11" y="67" textLength="113.848">Deliver the candidate to </tspan><tspan font-family="Open Sans" font-size="10" font-weight="500" x="10.95" y="81" textLength="23.232">Priya</tspan><tspan font-family="Open Sans" font-size="10" font-weight="500" x="34.085" y="81" textLength="1.699">'</tspan><tspan font-family="Open Sans" font-size="10" font-weight="500" x="35.486" y="81" textLength="56.172">s ICE layer b</tspan><tspan font-family="Open Sans" font-size="10" font-weight="500" x="91.458" y="81" textLength="7.637">y </tspan><tspan font-family="Open Sans" font-size="10" font-weight="500" x="10.95" y="95" textLength="58.799">passing it to </tspan><tspan font-family="Courier" font-size="10" font-weight="500" x="10.95" y="108" textLength="120.02">RTCPeerConnection.ad</tspan><tspan font-family="Courier" font-size="10" font-weight="500" x="10.95" y="120" textLength="90.015">dIceCandidate()</tspan></text><path fill="#fff5c8" d="M747 261h144v18H747z"/><path stroke="#000" stroke-linecap="round" stroke-linejoin="round" d="M747 261h144v18H747z"/><text transform="translate(752 265)" fill="#000"><tspan font-family="Courier" font-size="8" font-weight="500" x="4.59" y="8" textLength="124.82">handleNewIceCandidateMsg()</tspan></text><path fill="#c9ffff" d="M747 586.68h144v80H747z"/><path stroke="#000" stroke-linecap="round" stroke-linejoin="round" d="M747 586.68h144v80H747z"/><text transform="translate(752 591.68)" fill="#000"><tspan font-family="Open Sans" font-size="10" font-weight="500" x="0" y="11" textLength="129.526">Receives the 
[SVG diagram, truncated: the ICE candidate exchange process. Each peer's `handleICECandidateEvent()` handler receives `icecandidate` events and sends each candidate, represented by an SDP string, to the other peer through the signaling server as a "new-ice-candidate" message; the recipient's `handleNewIceCandidateMsg()` creates an `RTCIceCandidate` from the SDP in the message and delivers it to the local ICE layer via `RTCPeerConnection.addIceCandidate()`. The process repeats until both ICE layers agree on a candidate.]
| 0 |
data/mdn-content/files/en-us/web/api/webrtc_api | data/mdn-content/files/en-us/web/api/webrtc_api/signaling_and_video_calling/index.md | ---
title: Signaling and video calling
slug: Web/API/WebRTC_API/Signaling_and_video_calling
page-type: guide
---
{{DefaultAPISidebar("WebRTC")}}
[WebRTC](/en-US/docs/Web/API/WebRTC_API) allows real-time, peer-to-peer media exchange between two devices. A connection is established through a discovery and negotiation process called **signaling**. This tutorial will guide you through building a two-way video call.
[WebRTC](/en-US/docs/Web/API/WebRTC_API) is a fully peer-to-peer technology for the real-time exchange of audio, video, and data, with one central caveat. A form of discovery and media format negotiation must take place, [as discussed elsewhere](/en-US/docs/Web/API/WebRTC_API/Session_lifetime#establishing_a_connection), in order for two devices on different networks to locate one another. This process is called **signaling** and involves both devices connecting to a third, mutually agreed-upon server. Through this third server, the two devices can locate one another, and exchange negotiation messages.
In this article, we will further enhance the [WebSocket chat](https://webrtc-from-chat.glitch.me/) first created as part of our WebSocket documentation (this article link is forthcoming; it isn't actually online yet) to support opening a two-way video call between users. You can [try out this example on Glitch](https://webrtc-from-chat.glitch.me/), and you can [remix the example](https://glitch.com/edit/#!/remix/webrtc-from-chat) to experiment with it as well. You can also [look at the full project](https://github.com/mdn/samples-server/tree/master/s/webrtc-from-chat) on GitHub.
> **Note:** If you try out the example on Glitch, please note that any changes made to the code will immediately reset any connections. In addition, there is a short timeout period; the Glitch instance is for quick experiments and testing only.
## The signaling server
Establishing a WebRTC connection between two devices requires the use of a **signaling server** to resolve how to connect them over the internet. A signaling server's job is to serve as an intermediary to let two peers find and establish a connection while minimizing exposure of potentially private information as much as possible. How do we create this server and how does the signaling process actually work?
First we need the signaling server itself. WebRTC doesn't specify a transport mechanism for the signaling information. You can use anything you like, from [WebSocket](/en-US/docs/Web/API/WebSockets_API) to {{domxref("fetch()")}} to carrier pigeons to exchange the signaling information between the two peers.
It's important to note that the server doesn't need to understand or interpret the signaling data content. Although it's {{Glossary("SDP")}}, even this doesn't matter so much: the content of the message going through the signaling server is, in effect, a black box. What does matter is when the {{Glossary("ICE")}} subsystem instructs you to send signaling data to the other peer, you do so, and the other peer knows how to receive this information and deliver it to its own ICE subsystem. All you have to do is channel the information back and forth. The contents don't matter at all to the signaling server.
### Readying the chat server for signaling
Our [chat server](https://github.com/mdn/samples-server/tree/master/s/websocket-chat) uses the [WebSocket API](/en-US/docs/Web/API/WebSockets_API) to send information as {{Glossary("JSON")}} strings between each client and the server. The server supports several message types to handle tasks, such as registering new users, setting usernames, and sending public chat messages.
To allow the server to support signaling and ICE negotiation, we need to update the code. We'll have to allow directing messages to one specific user instead of broadcasting to all connected users, and ensure unrecognized message types are passed through and delivered, without the server needing to know what they are. This lets us send signaling messages using this same server, instead of needing a separate server.
Let's take a look at changes we need to make to the chat server to support WebRTC signaling. This is in the file [`chatserver.js`](https://github.com/mdn/samples-server/blob/master/s/webrtc-from-chat/chatserver.js).
First up is the addition of the function `sendToOneUser()`. As the name suggests, this sends a stringified JSON message to a particular username.
```js
function sendToOneUser(target, msgString) {
connectionArray.find((conn) => conn.username === target).send(msgString);
}
```
This function iterates over the list of connected users until it finds one matching the specified username, then sends the message to that user. The parameter `msgString` is a stringified JSON object. We could have made it receive our original message object, but in this example it's more efficient this way. Since the message has already been stringified, we can send it with no further processing. Each entry in `connectionArray` is a {{domxref("WebSocket")}} object, so we can just call its {{domxref("WebSocket.send", "send()")}} method directly.
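Note that if no connected user matches `target`, `find()` returns `undefined` and the call to `send()` will throw. A slightly more defensive variant (a sketch, not part of the actual sample) might guard against that case:

```js
function sendToOneUser(target, msgString) {
  const conn = connectionArray.find((conn) => conn.username === target);

  // If the target user has disconnected, there's no one to deliver to;
  // silently dropping the message is one reasonable policy here.
  if (conn) {
    conn.send(msgString);
  }
}
```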
Our original chat demo didn't support sending messages to a specific user. The next task is to update the main WebSocket message handler to support doing so. This involves a change near the end of the `"connection"` message handler:
```js
if (sendToClients) {
const msgString = JSON.stringify(msg);
if (msg.target && msg.target.length !== 0) {
sendToOneUser(msg.target, msgString);
} else {
for (const connection of connectionArray) {
connection.send(msgString);
}
}
}
```
This code now looks at the pending message to see if it has a `target` property. If that property is present, it specifies the username of the client to which the message is to be sent, and we call `sendToOneUser()` to send the message to them. Otherwise, the message is broadcast to all users by iterating over the connection list, sending the message to each user.
As the existing code allows the sending of arbitrary message types, no additional changes are required. Our clients can now send messages of unknown types to any specific user, letting them send signaling messages back and forth as desired.
That's all we need to change on the server side of the equation. Now let's consider the signaling protocol we will implement.
### Designing the signaling protocol
Now that we've built a mechanism for exchanging messages, we need a protocol defining how those messages will look. This can be done in a number of ways; what's demonstrated here is just one possible way to structure signaling messages.
This example's server uses stringified JSON objects to communicate with its clients. This means our signaling messages will be in JSON format, with contents which specify what kind of messages they are as well as any additional information needed in order to handle the messages properly.
#### Exchanging session descriptions
When starting the signaling process, an **offer** is created by the user initiating the call. This offer includes a session description, in {{Glossary("SDP")}} format, and needs to be delivered to the receiving user, which we'll call the **callee**. The callee responds to the offer with an **answer** message, also containing an SDP description. Our signaling server will use WebSocket to transmit offer messages with the type `"video-offer"`, and answer messages with the type `"video-answer"`. These messages have the following fields:
- `type`
- : The message type; either `"video-offer"` or `"video-answer"`.
- `name`
- : The sender's username.
- `target`
- : The username of the person to receive the description (if the caller is sending the message, this specifies the callee, and vice versa).
- `sdp`
- : The SDP (Session Description Protocol) string describing the local end of the connection from the perspective of the sender (or the remote end of the connection from the receiver's point of view).
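Put together, a `"video-offer"` message as built by the caller might look something like the sketch below; the user names come from this article's example scenario, and the SDP string is abbreviated (real session descriptions run to many lines):

```js
// A hypothetical "video-offer" message; the sdp value is abbreviated.
const offerMessage = {
  type: "video-offer",
  name: "naomi",
  target: "priya",
  sdp: "v=0\r\no=- 4611731400430051336 2 IN IP4 127.0.0.1\r\n…",
};
```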
At this point, the two participants know which [codecs](/en-US/docs/Web/Media/Formats/WebRTC_codecs) and [codec parameters](/en-US/docs/Web/Media/Formats/codecs_parameter) are to be used for this call. They still don't know how to transmit the media data itself though. This is where {{Glossary('ICE', 'Interactive Connectivity Establishment (ICE)')}} comes in.
### Exchanging ICE candidates
Two peers need to exchange ICE candidates to negotiate the actual connection between them. Every ICE candidate describes a method that the sending peer is able to use to communicate. Each peer sends candidates in the order they're discovered, and keeps sending candidates until it runs out of suggestions, even if media has already started streaming.
Once a local description has been added using `pc.setLocalDescription(offer)`, an {{domxref("RTCPeerConnection.icecandidate_event", "icecandidate")}} event is sent to the {{domxref("RTCPeerConnection")}} for each candidate the ICE layer gathers; handling these events is how your code transmits candidates to the other peer.
Once the two peers agree upon a mutually-compatible candidate, that candidate's SDP is used by each peer to construct and open a connection, through which media then begins to flow. If they later agree on a better (usually higher-performance) candidate, the stream may change formats as needed.
Though not currently supported, a candidate received after media is already flowing could theoretically also be used to downgrade to a lower-bandwidth connection if needed.
Each ICE candidate is sent to the other peer through the signaling server, using a JSON message of type `"new-ice-candidate"`. Each candidate message includes these fields:
- `type`
- : The message type: `"new-ice-candidate"`.
- `target`
- : The username of the person with whom negotiation is underway; the server will direct the message to this user only.
- `candidate`
- : The SDP candidate string, describing the proposed connection method. You typically don't need to look at the contents of this string. All your code needs to do is route it through to the remote peer using the signaling server.
Each ICE message suggests a communication protocol (TCP or UDP), IP address, port number, and connection type (for example, whether the specified IP is the peer itself or a relay server), along with any other information needed to link the two computers together, including through NATs or other networking complexity.
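For illustration, a `"new-ice-candidate"` message might look something like the sketch below. The candidate string shown is an illustrative host candidate in the standard `candidate:` attribute format, not output captured from a real session:

```js
// Hypothetical example only; real candidate strings come from the ICE layer.
sendToServer({
  type: "new-ice-candidate",
  target: "priya",
  candidate: "candidate:1 1 udp 2122260223 192.168.1.4 52938 typ host",
});
```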
> **Note:** The important thing to note is this: the only thing your code is responsible for during ICE negotiation is accepting outgoing candidates from the ICE layer and sending them across the signaling connection to the other peer when your {{domxref("RTCPeerConnection.icecandidate_event", "onicecandidate")}} handler is executed, and receiving ICE candidate messages from the signaling server (when the `"new-ice-candidate"` message is received) and delivering them to your ICE layer by calling {{domxref("RTCPeerConnection.addIceCandidate()")}}. That's it.
>
> The contents of the SDP are irrelevant to you in essentially all cases. Avoid the temptation to try to make it more complicated than that until you really know what you're doing. That way lies madness.
All your signaling server now needs to do is send the messages it's asked to. Your workflow may also demand login/authentication functionality, but such details will vary.
> **Note:** The {{domxref("RTCPeerConnection.icecandidate_event", "onicecandidate")}} event handler and the promise returned by {{domxref("RTCPeerConnection.createAnswer", "createAnswer()")}} are both asynchronous and are handled separately. Be sure that your signaling doesn't change their order! For example, {{domxref("RTCPeerConnection.addIceCandidate", "addIceCandidate()")}} with the remote peer's ICE candidates must be called after setting the answer with {{domxref("RTCPeerConnection.setRemoteDescription", "setRemoteDescription()")}}.
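One common way to guard against such ordering problems, which this sample doesn't need but larger applications often do, is to buffer incoming candidates until the remote description has been applied. A minimal sketch, using hypothetical helper names your message-handling code would call:

```js
const pendingCandidates = [];
let remoteDescriptionSet = false;

// Called by your "new-ice-candidate" message handler.
function queueOrAddCandidate(candidate) {
  if (remoteDescriptionSet) {
    myPeerConnection.addIceCandidate(candidate).catch(reportError);
  } else {
    // Hold the candidate until setRemoteDescription() has completed.
    pendingCandidates.push(candidate);
  }
}

// Called once the promise from setRemoteDescription() has been fulfilled.
function flushPendingCandidates() {
  remoteDescriptionSet = true;
  pendingCandidates.forEach((candidate) => {
    myPeerConnection.addIceCandidate(candidate).catch(reportError);
  });
  pendingCandidates.length = 0;
}
```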
### Signaling transaction flow
The signaling process involves an exchange of messages between two peers using an intermediary, the signaling server. The exact process will vary, of course, but in general signaling messages pass among a number of points:
- Each user's client running within a web browser
- Each user's web browser
- The signaling server
- The web server hosting the chat service
Imagine that Naomi and Priya are engaged in a discussion using the chat software, and Naomi decides to open a video call between the two. Here's the expected sequence of events:
[](/en-US/docs/Web/API/WebRTC_API/Signaling_and_video_calling/webrtc_-_signaling_diagram.svg)
We'll see this detailed more over the course of this article.
### ICE candidate exchange process
When each peer's ICE layer begins to send candidates, it enters into an exchange among the various points in the chain that looks like this:
[](webrtc_-_ice_candidate_exchange.svg)
Each side sends candidates to the other as it receives them from their local ICE layer; there is no taking turns or batching of candidates. As soon as the two peers agree upon one candidate that they can both use to exchange the media, media begins to flow. Each peer continues to send candidates until it runs out of options, even after the media has already begun to flow. This is done in hopes of identifying even better options than the one initially selected.
If conditions change (for example, the network connection deteriorates), one or both peers might suggest switching to a lower-bandwidth media resolution, or to an alternative codec. That triggers a new exchange of candidates, after which another media format and/or codec change may take place. In the guide [Codecs used by WebRTC](/en-US/docs/Web/Media/Formats/WebRTC_codecs) you can learn more about the codecs which WebRTC requires browsers to support, which additional codecs are supported by which browsers, and how to choose the best codecs to use.
Optionally, see {{RFC(8445, "Interactive Connectivity Establishment")}}, [section 2.3 ("Nominating Candidate Pairs and Concluding ICE")](https://datatracker.ietf.org/doc/html/rfc8445#section-2.3) if you want a deeper understanding of how this process is completed inside the ICE layer. You should note that candidates are exchanged and media starts to flow as soon as the ICE layer is satisfied. This is all taken care of behind the scenes. Our role is to send the candidates, back and forth, through the signaling server.
## The client application
The core of any signaling process is its message handling. It's not necessary to use WebSockets for signaling, but they are a common solution. You should, of course, select a mechanism for exchanging signaling information that is appropriate for your application.
Let's update the chat client to support video calling.
### Updating the HTML
The HTML for our client needs a location for video to be presented. This requires video elements, and a button to hang up the call:
```html
<div class="flexChild" id="camera-container">
<div class="camera-box">
<video id="received_video" autoplay></video>
<video id="local_video" autoplay muted></video>
<button id="hangup-button" onclick="hangUpCall();" disabled>Hang Up</button>
</div>
</div>
```
The page structure defined here is using {{HTMLElement("div")}} elements, giving us full control over the page layout by enabling the use of CSS. We'll skip layout detail in this guide, but [take a look at the CSS](https://github.com/mdn/samples-server/blob/master/s/webrtc-from-chat/chat.css) on GitHub to see how we handled it. Take note of the two {{HTMLElement("video")}} elements, one for your self-view, one for the connection, and the {{HTMLElement("button")}} element.
The `<video>` element with the `id` "`received_video`" will present video received from the connected user. We specify the `autoplay` attribute so that the video begins playing as soon as it starts arriving, removing any need to explicitly handle playback in our code. The "`local_video`" `<video>` element presents a preview of the user's camera; it specifies the `muted` attribute, since we don't need to hear local audio in this preview panel.
Finally, the "`hangup-button`" {{HTMLElement("button")}}, used to disconnect from a call, is defined and configured to start out disabled (our default when no call is connected) and to call the `hangUpCall()` function when clicked. That function's role is to close the call and to send the other peer a notification, through the signaling server, requesting that it also close.
### The JavaScript code
We'll divide this code into functional areas to more easily describe how it works. The main body of this code is found in the `connect()` function: it opens a {{domxref("WebSocket")}} connection to the server on port 6503, and establishes a handler to receive messages as stringified JSON objects. This code generally handles text chat messages as it did previously.
#### Sending messages to the signaling server
Throughout our code, we call `sendToServer()` in order to send messages to the signaling server. This function uses the [WebSocket](/en-US/docs/Web/API/WebSockets_API) connection to do its work:
```js
function sendToServer(msg) {
const msgJSON = JSON.stringify(msg);
connection.send(msgJSON);
}
```
The message object passed into this function is converted into a JSON string by calling {{jsxref("JSON.stringify()")}}, then we call the WebSocket connection's {{domxref("WebSocket.send", "send()")}} function to transmit the message to the server.
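Because `sendToServer()` accepts any object, every message type in this article's protocol travels through it. For example, sending a hypothetical public chat message (the exact field names depend on your chat protocol, not on anything WebRTC requires) might look like this:

```js
// Hypothetical chat message; the field names are illustrative.
sendToServer({
  type: "message",
  name: myUsername,
  text: "Hi Priya!",
});
```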
#### UI to start a call
The code which handles the `"userlist"` message calls `handleUserlistMsg()`. Here we set up the handler for each connected user in the user list displayed to the left of the chat panel. This function receives a message object whose `users` property is an array of strings specifying the user names of every connected user.
```js
function handleUserlistMsg(msg) {
const listElem = document.querySelector(".userlistbox");
while (listElem.firstChild) {
listElem.removeChild(listElem.firstChild);
}
msg.users.forEach((username) => {
const item = document.createElement("li");
item.appendChild(document.createTextNode(username));
item.addEventListener("click", invite, false);
listElem.appendChild(item);
});
}
```
After getting a reference to the {{HTMLElement("ul")}} which contains the list of user names into the variable `listElem`, we empty the list by removing each of its child elements.
> **Note:** Obviously, it would be more efficient to update the list by adding and removing individual users instead of rebuilding the whole list every time it changes, but this is good enough for the purposes of this example.
Then we iterate over the array of user names using {{jsxref("Array.forEach", "forEach()")}}. For each name, we create a new {{HTMLElement("li")}} element, then create a new text node containing the user name using {{domxref("Document.createTextNode", "createTextNode()")}}. That text node is added as a child of the `<li>` element. Next, we set a handler for the {{domxref("Element/click_event", "click")}} event on the list item, so that clicking on a user name calls our `invite()` function, which we'll look at in the next section.
Finally, we append the new item to the `<ul>` that contains all of the user names.
#### Starting a call
When the user clicks on a username they want to call, the `invite()` function is invoked as the event handler for that {{domxref("Element/click_event", "click")}} event:
```js
const mediaConstraints = {
audio: true, // We want an audio track
video: true, // And we want a video track
};
function invite(evt) {
if (myPeerConnection) {
alert("You can't start a call because you already have one open!");
} else {
const clickedUsername = evt.target.textContent;
if (clickedUsername === myUsername) {
alert(
"I'm afraid I can't let you talk to yourself. That would be weird.",
);
return;
}
targetUsername = clickedUsername;
createPeerConnection();
navigator.mediaDevices
.getUserMedia(mediaConstraints)
.then((localStream) => {
document.getElementById("local_video").srcObject = localStream;
localStream
.getTracks()
.forEach((track) => myPeerConnection.addTrack(track, localStream));
})
.catch(handleGetUserMediaError);
}
}
```
This begins with a basic sanity check: is the user already connected? If there's already an {{domxref("RTCPeerConnection")}}, they obviously can't make a call. Then the name of the user that was clicked upon is obtained from the event target's {{domxref("Node.textContent", "textContent")}} property, and we check to be sure that it's not the same user that's trying to start the call.
Then we copy the name of the user we're calling into the variable `targetUsername` and call `createPeerConnection()`, a function which will create and do basic configuration of the {{domxref("RTCPeerConnection")}}.
Once the `RTCPeerConnection` has been created, we request access to the user's camera and microphone by calling {{domxref("MediaDevices.getUserMedia", "navigator.mediaDevices.getUserMedia()")}}. When this succeeds, fulfilling the returned promise, our `then` handler is executed. It receives, as input, a {{domxref("MediaStream")}} object representing the stream with audio from the user's microphone and video from their webcam.
> **Note:** We could restrict the set of permitted media inputs to a specific device or set of devices by calling {{domxref("MediaDevices.enumerateDevices", "navigator.mediaDevices.enumerateDevices()")}} to get a list of devices, filtering the resulting list based on our desired criteria, then using the selected devices' {{domxref("MediaTrackConstraints.deviceId", "deviceId")}} values in the `deviceId` field of the `mediaConstraints` object passed into `getUserMedia()`. In practice, this is rarely if ever necessary, since most of that work is done for you by `getUserMedia()`.
We attach the incoming stream to the local preview {{HTMLElement("video")}} element by setting the element's {{domxref("HTMLMediaElement.srcObject", "srcObject")}} property. Since the element is configured to automatically play incoming video, the stream begins playing in our local preview box.
We then iterate over the tracks in the stream, calling {{domxref("RTCPeerConnection.addTrack", "addTrack()")}} to add each track to the `RTCPeerConnection`. Even though the connection is not fully established yet, you can begin sending data when you feel it's appropriate to do so. Media received before the ICE negotiation is completed may be used to help ICE decide upon the best connectivity approach to take, thus aiding in the negotiation process.
Note that for native apps, such as a phone application, you should not begin sending until the connection has been accepted at both ends, at a minimum, to avoid inadvertently sending video and/or audio data when the user isn't prepared for it.
As soon as media is attached to the `RTCPeerConnection`, a {{domxref("RTCPeerConnection.negotiationneeded_event", "negotiationneeded")}} event is triggered at the connection, so that ICE negotiation can be started.
If an error occurs while trying to get the local media stream, our catch clause calls `handleGetUserMediaError()`, which displays an appropriate error to the user as required.
#### Handling getUserMedia() errors
If the promise returned by `getUserMedia()` is rejected, our `handleGetUserMediaError()` function is called.
```js
function handleGetUserMediaError(e) {
switch (e.name) {
case "NotFoundError":
alert(
"Unable to open your call because no camera and/or microphone" +
"were found.",
);
break;
case "SecurityError":
case "PermissionDeniedError":
// Do nothing; this is the same as the user canceling the call.
break;
default:
alert(`Error opening your camera and/or microphone: ${e.message}`);
break;
}
closeVideoCall();
}
```
An error message is displayed in all cases but one. In this example, we ignore `"SecurityError"` and `"PermissionDeniedError"` results, treating refusal to grant permission to use the media hardware the same as the user canceling the call.
Regardless of why an attempt to get the stream fails, we call our `closeVideoCall()` function to shut down the {{domxref("RTCPeerConnection")}}, and release any resources already allocated by the process of attempting the call. This code is designed to safely handle partially-started calls.
#### Creating the peer connection
The `createPeerConnection()` function is used by both the caller and the callee to construct their {{domxref("RTCPeerConnection")}} objects, their respective ends of the WebRTC connection. It's invoked by `invite()` when the caller tries to start a call, and by `handleVideoOfferMsg()` when the callee receives an offer message from the caller.
```js
function createPeerConnection() {
myPeerConnection = new RTCPeerConnection({
iceServers: [
// Information about ICE servers - Use your own!
{
urls: "stun:stun.stunprotocol.org",
},
],
});
myPeerConnection.onicecandidate = handleICECandidateEvent;
myPeerConnection.ontrack = handleTrackEvent;
myPeerConnection.onnegotiationneeded = handleNegotiationNeededEvent;
myPeerConnection.onremovetrack = handleRemoveTrackEvent;
myPeerConnection.oniceconnectionstatechange =
handleICEConnectionStateChangeEvent;
myPeerConnection.onicegatheringstatechange =
handleICEGatheringStateChangeEvent;
myPeerConnection.onsignalingstatechange = handleSignalingStateChangeEvent;
}
```
When using the {{domxref("RTCPeerConnection.RTCPeerConnection", "RTCPeerConnection()")}} constructor, we will specify an object providing configuration parameters for the connection. We use only one of these in this example: `iceServers`. This is an array of objects describing STUN and/or TURN servers for the {{Glossary("ICE")}} layer to use when attempting to establish a route between the caller and the callee. These servers are used to determine the best route and protocols to use when communicating between the peers, even if they're behind a firewall or using {{Glossary("NAT")}}.
> **Note:** You should always use STUN/TURN servers which you own, or which you have specific authorization to use. This example is using a known public STUN server but abusing these is bad form.
Each object in `iceServers` contains at least a `urls` field providing URLs at which the specified server can be reached. It may also provide `username` and `credential` values to allow authentication to take place, if needed.
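For example, a configuration that lists both a STUN server and an authenticated TURN server might look like the sketch below; the hostnames and credentials are placeholders, not working servers:

```js
// Placeholder servers and credentials; substitute your own.
const peerConnectionConfig = {
  iceServers: [
    { urls: "stun:stun.example.com" },
    {
      urls: "turn:turn.example.com",
      username: "webrtc-user",
      credential: "turn-password",
    },
  ],
};
```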
After creating the {{domxref("RTCPeerConnection")}}, we set up handlers for the events that matter to us.
The first three of these event handlers are required; you have to handle them to do anything involving streamed media with WebRTC. The rest aren't strictly required but can be useful, and we'll explore them. There are a few other events available that we're not using in this example, as well. Here's a summary of each of the event handlers we will be implementing:
- {{domxref("RTCPeerConnection.icecandidate_event", "onicecandidate")}}
- : The local ICE layer calls your {{domxref("RTCPeerConnection.icecandidate_event", "icecandidate")}} event handler, when it needs you to transmit an ICE candidate to the other peer, through your signaling server. See [Sending ICE candidates](#sending_ice_candidates) for more information and to see the code for this example.
- {{domxref("RTCPeerConnection.track_event", "ontrack")}}
- : This handler for the {{domxref("RTCPeerConnection.track_event", "track")}} event is called by the local WebRTC layer when a track is added to the connection. This lets you connect the incoming media to an element to display it, for example. See [Receiving new streams](#receiving_new_streams) for details.
- {{domxref("RTCPeerConnection.negotiationneeded_event", "onnegotiationneeded")}}
- : This function is called whenever the WebRTC infrastructure needs you to start the session negotiation process anew. Its job is to create and send an offer, to the callee, asking it to connect with us. See [Starting negotiation](#starting_negotiation) to see how we handle this.
- {{domxref("RTCPeerConnection.removetrack_event", "onremovetrack")}}
- : This counterpart to `ontrack` is called to handle the {{domxref("MediaStream/removetrack_event", "removetrack")}} event; it's sent to the `RTCPeerConnection` when the remote peer removes a track from the media being sent. See [Handling the removal of tracks](#handling_the_removal_of_tracks).
- {{domxref("RTCPeerConnection.iceconnectionstatechange_event", "oniceconnectionstatechange")}}
- : The {{domxref("RTCPeerConnection.iceconnectionstatechange_event", "iceconnectionstatechange")}} event is sent by the ICE layer to let you know about changes to the state of the ICE connection. This can help you know when the connection has failed, or been lost. We'll look at the code for this example in [ICE connection state](#ice_connection_state) below.
- {{domxref("RTCPeerConnection.icegatheringstatechange_event", "onicegatheringstatechange")}}
- : The ICE layer sends you the {{domxref("RTCPeerConnection.icegatheringstatechange_event", "icegatheringstatechange")}} event, when the ICE agent's process of collecting candidates shifts, from one state to another (such as starting to gather candidates or completing negotiation). See [ICE gathering state](#ice_gathering_state) below.
- {{domxref("RTCPeerConnection.signalingstatechange_event", "onsignalingstatechange")}}
- : The WebRTC infrastructure sends you the {{domxref("RTCPeerConnection.signalingstatechange_event", "signalingstatechange")}} message when the state of the signaling process changes (or if the connection to the signaling server changes). See [Signaling state](#signaling_state) to see our code.
#### Starting negotiation
Once the caller has created its {{domxref("RTCPeerConnection")}}, created a media stream, and added its tracks to the connection as shown in [Starting a call](#starting_a_call), the browser will deliver a {{domxref("RTCPeerConnection.negotiationneeded_event", "negotiationneeded")}} event to the {{domxref("RTCPeerConnection")}} to indicate that it's ready to begin negotiation with the other peer. Here's our code for handling the {{domxref("RTCPeerConnection.negotiationneeded_event", "negotiationneeded")}} event:
```js
function handleNegotiationNeededEvent() {
myPeerConnection
.createOffer()
.then((offer) => myPeerConnection.setLocalDescription(offer))
.then(() => {
sendToServer({
name: myUsername,
target: targetUsername,
type: "video-offer",
sdp: myPeerConnection.localDescription,
});
})
.catch(reportError);
}
```
To start the negotiation process, we need to create and send an SDP offer to the peer we want to connect to. This offer includes a list of supported configurations for the connection, including information about the media stream we've added to the connection locally (that is, the video we want to send to the other end of the call), and any ICE candidates gathered by the ICE layer already. We create this offer by calling {{domxref("RTCPeerConnection.createOffer", "myPeerConnection.createOffer()")}}.
When `createOffer()` succeeds (fulfilling the promise), we pass the created offer information into {{domxref("RTCPeerConnection.setLocalDescription", "myPeerConnection.setLocalDescription()")}}, which configures the connection and media configuration state for the caller's end of the connection.
> **Note:** Technically speaking, the SDP in the description returned by `createOffer()` is an {{RFC(3264)}} offer.
We know the description is valid, and has been set, when the promise returned by `setLocalDescription()` is fulfilled. This is when we send our offer to the other peer by creating a new `"video-offer"` message containing the local description (now the same as the offer), then sending it through our signaling server to the callee. The offer has the following members:
- `type`
- : The message type: `"video-offer"`.
- `name`
- : The caller's username.
- `target`
- : The name of the user we wish to call.
- `sdp`
- : The SDP string describing the offer.
If an error occurs, either in the initial `createOffer()` or in any of the fulfillment handlers that follow, an error is reported by invoking our `reportError()` function.
Once `setLocalDescription()`'s fulfillment handler has run, the ICE agent begins sending {{domxref("RTCPeerConnection.icecandidate_event", "icecandidate")}} events to the {{domxref("RTCPeerConnection")}}, one for each potential configuration it discovers. Our handler for the `icecandidate` event is responsible for transmitting the candidates to the other peer.
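For comparison, the same negotiation handler can be written using `async`/`await`; this sketch is equivalent to the promise chain shown above:

```js
async function handleNegotiationNeededEvent() {
  try {
    const offer = await myPeerConnection.createOffer();
    await myPeerConnection.setLocalDescription(offer);

    sendToServer({
      name: myUsername,
      target: targetUsername,
      type: "video-offer",
      sdp: myPeerConnection.localDescription,
    });
  } catch (err) {
    reportError(err);
  }
}
```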
#### Session negotiation
Now that we've started negotiation with the other peer and have transmitted an offer, let's look at what happens on the callee's side of the connection for a while. The callee receives the offer and calls the `handleVideoOfferMsg()` function to process it. Let's see how the callee handles the `"video-offer"` message.
##### Handling the invitation
When the offer arrives, the callee's `handleVideoOfferMsg()` function is called with the `"video-offer"` message that was received. This function needs to do two things. First, it needs to create its own {{domxref("RTCPeerConnection")}} and add the tracks containing the audio and video from its microphone and webcam to that. Second, it needs to process the received offer, constructing and sending its answer.
```js
function handleVideoOfferMsg(msg) {
let localStream = null;
targetUsername = msg.name;
createPeerConnection();
const desc = new RTCSessionDescription(msg.sdp);
myPeerConnection
.setRemoteDescription(desc)
.then(() => navigator.mediaDevices.getUserMedia(mediaConstraints))
.then((stream) => {
localStream = stream;
document.getElementById("local_video").srcObject = localStream;
localStream
.getTracks()
.forEach((track) => myPeerConnection.addTrack(track, localStream));
})
.then(() => myPeerConnection.createAnswer())
.then((answer) => myPeerConnection.setLocalDescription(answer))
.then(() => {
const msg = {
name: myUsername,
target: targetUsername,
type: "video-answer",
sdp: myPeerConnection.localDescription,
};
sendToServer(msg);
})
.catch(handleGetUserMediaError);
}
```
This code is very similar to what we did in the `invite()` function back in [Starting a call](#starting_a_call). It starts by creating and configuring an {{domxref("RTCPeerConnection")}} using our `createPeerConnection()` function. Then it takes the SDP offer from the received `"video-offer"` message and uses it to create a new {{domxref("RTCSessionDescription")}} object representing the caller's session description.
That session description is then passed into {{domxref("RTCPeerConnection.setRemoteDescription", "myPeerConnection.setRemoteDescription()")}}. This establishes the received offer as the description of the remote (caller's) end of the connection. If this is successful, the promise fulfillment handler (in the `then()` clause) starts the process of getting access to the callee's camera and microphone using {{domxref("MediaDevices.getUserMedia", "getUserMedia()")}}, adding the tracks to the connection, and so forth, as we saw previously in `invite()`.
Once the answer has been created using {{domxref("RTCPeerConnection.createAnswer", "myPeerConnection.createAnswer()")}}, the description of the local end of the connection is set to the answer's SDP by calling {{domxref("RTCPeerConnection.setLocalDescription", "myPeerConnection.setLocalDescription()")}}, then the answer is transmitted through the signaling server to the caller to let them know what the answer is.
Any errors are caught and passed to `handleGetUserMediaError()`, described in [Handling getUserMedia() errors](#handling_getusermedia_errors).
> **Note:** As is the case with the caller, once the `setLocalDescription()` fulfillment handler has run, the browser begins firing {{domxref("RTCPeerConnection.icecandidate_event", "icecandidate")}} events that the callee must handle, one for each candidate that needs to be transmitted to the remote peer.
Finally, the caller handles the answer message it received by creating a new {{domxref("RTCSessionDescription")}} object representing the callee's session description and passing it into
{{domxref("RTCPeerConnection.setRemoteDescription", "myPeerConnection.setRemoteDescription()")}}.
```js
function handleVideoAnswerMsg(msg) {
const desc = new RTCSessionDescription(msg.sdp);
myPeerConnection.setRemoteDescription(desc).catch(reportError);
}
```
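Note that `setRemoteDescription()` also accepts a plain object with `type` and `sdp` properties, and the `sdp` field of our messages (being a serialized session description) already has that shape. In modern browsers the explicit {{domxref("RTCSessionDescription")}} construction can therefore be omitted; a sketch:

```js
function handleVideoAnswerMsg(msg) {
  // msg.sdp already has the { type, sdp } shape the method expects.
  myPeerConnection.setRemoteDescription(msg.sdp).catch(reportError);
}
```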
##### Sending ICE candidates
The ICE negotiation process involves each peer sending candidates to the other, repeatedly, until it runs out of potential ways it can support the `RTCPeerConnection`'s media transport needs. Since ICE doesn't know about your signaling server, your code handles transmission of each candidate in your handler for the {{domxref("RTCPeerConnection.icecandidate_event", "icecandidate")}} event.
Your {{domxref("RTCPeerConnection.icecandidate_event", "onicecandidate")}} handler receives an event whose `candidate` property is the SDP describing the candidate (or is `null` to indicate that the ICE layer has run out of potential configurations to suggest). The contents of `candidate` are what you need to transmit using your signaling server. Here's our example's implementation:
```js
function handleICECandidateEvent(event) {
if (event.candidate) {
sendToServer({
type: "new-ice-candidate",
target: targetUsername,
candidate: event.candidate,
});
}
}
```
This builds an object containing the candidate, then sends it to the other peer using the `sendToServer()` function previously described in [Sending messages to the signaling server](#sending_messages_to_the_signaling_server). The message's properties are:
- `type`
- : The message type: `"new-ice-candidate"`.
- `target`
- : The username the ICE candidate needs to be delivered to. This lets the signaling server route the message.
- `candidate`
- : The SDP representing the candidate the ICE layer wants to transmit to the other peer.
The format of this message (as is the case with everything you do when handling signaling) is entirely up to you, depending on your needs; you can provide other information as required.
> **Note:** It's important to keep in mind that the {{domxref("RTCPeerConnection.icecandidate_event", "icecandidate")}} event is **not** sent when ICE candidates arrive from the other end of the call. Instead, they're sent by your own end of the call so that you can take on the job of transmitting the data over whatever channel you choose. This can be confusing when you're new to WebRTC.
##### Receiving ICE candidates
The signaling server delivers each ICE candidate to the destination peer using whatever method it chooses; in our example this is as JSON objects, with a `type` property containing the string `"new-ice-candidate"`. Our `handleNewICECandidateMsg()` function is called by our main [WebSocket](/en-US/docs/Web/API/WebSockets_API) incoming message code to handle these messages:
```js
function handleNewICECandidateMsg(msg) {
const candidate = new RTCIceCandidate(msg.candidate);
myPeerConnection.addIceCandidate(candidate).catch(reportError);
}
```
This function constructs an {{domxref("RTCIceCandidate")}} object by passing the received SDP into its constructor, then delivers the candidate to the ICE layer by passing it into {{domxref("RTCPeerConnection.addIceCandidate", "myPeerConnection.addIceCandidate()")}}. This hands the fresh ICE candidate to the local ICE layer, and finally, our role in the process of handling this candidate is complete.
Each peer sends to the other peer a candidate for each possible transport configuration that it believes might be viable for the media being exchanged. At some point, the two peers agree that a given candidate is a good choice and they open the connection and begin to share media. It's important to note, however, that ICE negotiation does _not_ stop once media is flowing. Instead, candidates may still keep being exchanged after the conversation has begun, either while trying to find a better connection method, or because they were already in transport when the peers successfully established their connection.
In addition, if something happens to cause a change in the streaming scenario, negotiation will begin again, with the {{domxref("RTCPeerConnection.negotiationneeded_event", "negotiationneeded")}} event being sent to the {{domxref("RTCPeerConnection")}}, and the entire process starts again as described before. This can happen in a variety of situations, including:
- Changes in the network status, such as a bandwidth change, transitioning from Wi-Fi to cellular connectivity, or the like.
- Switching between the front and rear cameras on a phone.
- A change to the configuration of the stream, such as its resolution or frame rate.
##### Receiving new streams
When new tracks are added to the `RTCPeerConnection`, either by calling its {{domxref("RTCPeerConnection.addTrack", "addTrack()")}} method or because of renegotiation of the stream's format, a {{domxref("RTCPeerConnection.track_event", "track")}} event is sent to the `RTCPeerConnection` for each track added to the connection. Making use of newly added media requires implementing a handler for the `track` event. A common need is to attach the incoming media to an appropriate HTML element. In our example, we add the track's stream to the {{HTMLElement("video")}} element that displays the incoming video:
```js
function handleTrackEvent(event) {
document.getElementById("received_video").srcObject = event.streams[0];
document.getElementById("hangup-button").disabled = false;
}
```
The incoming stream is attached to the `"received_video"` {{HTMLElement("video")}} element, and the "Hang Up" {{HTMLElement("button")}} element is enabled so the user can hang up the call.
Once this code has completed, finally the video being sent by the other peer is displayed in the local browser window!
##### Handling the removal of tracks
Your code receives a {{domxref("MediaStream/removetrack_event", "removetrack")}} event when the remote peer removes a track from the connection by calling {{domxref("RTCPeerConnection.removeTrack()")}}. Our handler for `"removetrack"` is:
```js
function handleRemoveTrackEvent(event) {
const stream = document.getElementById("received_video").srcObject;
const trackList = stream.getTracks();
if (trackList.length === 0) {
closeVideoCall();
}
}
```
This code fetches the incoming video {{domxref("MediaStream")}} from the `"received_video"` {{HTMLElement("video")}} element's [`srcObject`](/en-US/docs/Web/HTML/Element/video#srcobject) attribute, then calls the stream's {{domxref("MediaStream.getTracks", "getTracks()")}} method to get an array of the stream's tracks.
If the array's length is zero, meaning there are no tracks left in the stream, we end the call by calling `closeVideoCall()`. This cleanly restores our app to a state in which it's ready to start or receive another call. See [Ending the call](#ending_the_call) to learn how `closeVideoCall()` works.
#### Ending the call
There are many reasons why calls may end. A call might have completed, with one or both sides having hung up. Perhaps a network failure has occurred, or one user might have quit their browser, or had a system crash. In any case, all good things must come to an end.
##### Hanging up
When the user clicks the "Hang Up" button to end the call, the `hangUpCall()` function is called:
```js
function hangUpCall() {
closeVideoCall();
sendToServer({
name: myUsername,
target: targetUsername,
type: "hang-up",
});
}
```
`hangUpCall()` executes `closeVideoCall()` to shut down and reset the connection and release resources. It then builds a `"hang-up"` message and sends it to the other end of the call to tell the other peer to neatly shut itself down.
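On the receiving side, handling the `"hang-up"` message only requires performing the same local cleanup. A minimal sketch, with a function name chosen for illustration, which your WebSocket message dispatcher would call:

```js
function handleHangUpMsg(msg) {
  // The other peer has hung up; shut down our end of the call too.
  closeVideoCall();
}
```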
##### Ending the call
The `closeVideoCall()` function, shown below, is responsible for stopping the streams, cleaning up, and disposing of the {{domxref("RTCPeerConnection")}} object:
```js
function closeVideoCall() {
const remoteVideo = document.getElementById("received_video");
const localVideo = document.getElementById("local_video");
if (myPeerConnection) {
myPeerConnection.ontrack = null;
myPeerConnection.onremovetrack = null;
myPeerConnection.onremovestream = null;
myPeerConnection.onicecandidate = null;
myPeerConnection.oniceconnectionstatechange = null;
myPeerConnection.onsignalingstatechange = null;
myPeerConnection.onicegatheringstatechange = null;
myPeerConnection.onnegotiationneeded = null;
if (remoteVideo.srcObject) {
remoteVideo.srcObject.getTracks().forEach((track) => track.stop());
}
if (localVideo.srcObject) {
localVideo.srcObject.getTracks().forEach((track) => track.stop());
}
myPeerConnection.close();
myPeerConnection = null;
}
remoteVideo.removeAttribute("src");
remoteVideo.removeAttribute("srcObject");
localVideo.removeAttribute("src");
localVideo.removeAttribute("srcObject");
document.getElementById("hangup-button").disabled = true;
targetUsername = null;
}
```
After pulling references to the two {{HTMLElement("video")}} elements, we check if a WebRTC connection exists; if it does, we proceed to disconnect and close the call:
1. All of the event handlers are removed. This prevents stray event handlers from being triggered while the connection is in the process of closing, potentially causing errors.
2. For both remote and local video streams, we iterate over each track, calling the {{domxref("MediaStreamTrack.stop()")}} method to close each one.
3. Close the {{domxref("RTCPeerConnection")}} by calling {{domxref("RTCPeerConnection.close", "myPeerConnection.close()")}}.
4. Set `myPeerConnection` to `null`, so the rest of our code knows there's no ongoing call; this is useful when the user clicks a name in the user list.
Then for both the incoming and outgoing {{HTMLElement("video")}} elements, we remove their [`src`](/en-US/docs/Web/HTML/Element/video#src) and [`srcObject`](/en-US/docs/Web/HTML/Element/video#srcobject) attributes using their {{domxref("Element.removeAttribute", "removeAttribute()")}} methods. This completes the disassociation of the streams from the video elements.
Finally, we set the {{domxref("HTMLButtonElement.disabled", "disabled")}} property to `true` on the "Hang Up" button, making it unclickable while there is no call underway; then we set `targetUsername` to `null` since we're no longer talking to anyone. This allows the user to call another user, or to receive an incoming call.
#### Dealing with state changes
There are a number of additional events for which you can set listeners to notify your code of a variety of state changes. We use three of them: {{domxref("RTCPeerConnection.iceconnectionstatechange_event", "iceconnectionstatechange")}}, {{domxref("RTCPeerConnection.icegatheringstatechange_event", "icegatheringstatechange")}}, and {{domxref("RTCPeerConnection.signalingstatechange_event", "signalingstatechange")}}.
##### ICE connection state
{{domxref("RTCPeerConnection.iceconnectionstatechange_event", "iceconnectionstatechange")}} events are sent to the {{domxref("RTCPeerConnection")}} by the ICE layer when the connection state changes (such as when the call is terminated from the other end).
```js
function handleICEConnectionStateChangeEvent(event) {
switch (myPeerConnection.iceConnectionState) {
case "closed":
case "failed":
closeVideoCall();
break;
}
}
```
Here, we call our `closeVideoCall()` function when the ICE connection state changes to `"closed"` or `"failed"`. This shuts down our end of the connection so that we're ready to start or accept a call once again.
> **Note:** We don't watch the `disconnected` signaling state here as it can indicate temporary issues and may go back to a `connected` state after some time. Watching it would close the video call on any temporary network issue.
##### ICE signaling state
Similarly, we watch for {{domxref("RTCPeerConnection.signalingstatechange_event", "signalingstatechange")}} events. If the signaling state changes to `closed`, we likewise close the call out.
```js
function handleSignalingStateChangeEvent(event) {
switch (myPeerConnection.signalingState) {
case "closed":
closeVideoCall();
break;
}
}
```
> **Note:** The `closed` signaling state has been deprecated in favor of the `closed` {{domxref("RTCPeerConnection.iceConnectionState", "iceConnectionState")}}. We are watching for it here to add a bit of backward compatibility.
##### ICE gathering state
{{domxref("RTCPeerConnection.icegatheringstatechange_event", "icegatheringstatechange")}} events are used to let you know when the ICE candidate gathering process state changes. Our example doesn't use this for anything, but it can be useful to watch these events for debugging purposes, as well as to detect when candidate collection has finished.
```js
function handleICEGatheringStateChangeEvent(event) {
// Our sample just logs information to console here,
// but you can do whatever you need.
}
```
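If you do want to watch the gathering process, logging the current state is enough to see the progression from `"new"` through `"gathering"` to `"complete"` in the console; a sketch:

```js
function handleICEGatheringStateChangeEvent(event) {
  // Logs "new", "gathering", or "complete" as candidate gathering progresses.
  console.log(`ICE gathering state: ${myPeerConnection.iceGatheringState}`);
}
```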
## Next steps
You can now [try out this example on Glitch](https://webrtc-from-chat.glitch.me/) to see it in action. Open the Web console on both devices and look at the logged output—although you don't see it in the code as shown above, the code on the server (and on [GitHub](https://github.com/mdn/samples-server/tree/master/s/webrtc-from-chat)) has a lot of console output so you can see the signaling and connection processes at work.
One obvious improvement would be to add a "ringing" feature, so that instead of just asking the user for permission to use the camera and microphone, a "User X is calling. Would you like to answer?" prompt appears first.
## See also
- [WebRTC API](/en-US/docs/Web/API/WebRTC_API)
- [Web media technologies](/en-US/docs/Web/Media)
- [Guide to media types and formats on the web](/en-US/docs/Web/Media/Formats)
- [Media Capture and Streams API](/en-US/docs/Web/API/Media_Capture_and_Streams_API)
- [Media Capabilities API](/en-US/docs/Web/API/Media_Capabilities_API)
- [MediaStream Recording API](/en-US/docs/Web/API/MediaStream_Recording_API)
- The [Perfect Negotiation](/en-US/docs/Web/API/WebRTC_API/Perfect_negotiation) pattern
| 0 |
data/mdn-content/files/en-us/web/api/webrtc_api | data/mdn-content/files/en-us/web/api/webrtc_api/signaling_and_video_calling/webrtc_-_signaling_diagram.svg | [SVG diagram, truncated: the signaling sequence between Naomi (caller), the signaling server, and Priya (callee). Naomi's `invite()` creates an `RTCPeerConnection`, calls `getUserMedia()`, and adds the local stream's tracks with `RTCPeerConnection.addTrack()`; `handleNegotiationNeededEvent()` creates an SDP offer with `RTCPeerConnection.createOffer()`, sets it with `setLocalDescription()`, and sends it through the signaling server as a "video-offer" message; Priya's side creates its own `RTCPeerConnection` and an `RTCSessionDescription` from the received SDP, calls `setRemoteDescription()` and `getUserMedia()`, adds its tracks, then creates an SDP answer with `createAnswer()` and sets it with `setLocalDescription()`.]
Promise fulfilled: send the </tspan> <tspan font-family="Open Sans" font-size="10" font-weight="400" x="10.95" y="443">SDP answer through the </tspan> <tspan font-family="Open Sans" font-size="10" font-weight="400" x="10.95" y="457">signaling server to Naomi </tspan> <tspan font-family="Open Sans" font-size="10" font-weight="400" x="10.95" y="471">in a message of type </tspan> <tspan font-family="Courier" font-size="10" font-weight="400" x="10.95" y="484">“video-answer”</tspan></text><path fill="#fff6d2" d="M756 495h144v18H756z"/><path stroke="#000" stroke-linecap="round" stroke-linejoin="round" d="M756 495h144v18H756z"/><text transform="translate(761 498)" fill="#000"><tspan font-family="Courier" font-size="10" font-weight="400" x="3.99" y="10">handleVideoOfferMsg()</tspan></text><path marker-end="url(#a)" stroke="#000" stroke-linecap="round" stroke-linejoin="round" stroke-dasharray="4.0,4.0" d="M540 504h206.1"/><text transform="translate(585.5 501.56)" fill="#000"><tspan font-family="Open Sans" font-size="10" font-weight="400" x=".449" y="11">Message: </tspan> <tspan font-family="Courier" font-size="10" font-weight="400" y="11">“video-offer”</tspan></text><path marker-end="url(#a)" stroke="#000" stroke-linecap="round" stroke-linejoin="round" stroke-dasharray="4.0,4.0" d="M756 761l-206.103-4.771"/><text transform="rotate(1.326 -32309.396 25547.84)" fill="#000"><tspan font-family="Open Sans" font-size="10" font-weight="400" x=".449" y="11">Message: </tspan> <tspan font-family="Courier" font-size="10" font-weight="400" y="11">“video-answer”</tspan></text><path d="M594.773 1058.8c-23.273-5.8-13.992-54.632 23.134-46.3 3.444-16.242 46.617-13.606 46.335 0 27.07-17.402 61.665 17.297 38.46 34.7 27.845 8.436-.35 53.893-23.202 46.3-1.829 12.657-42.68 17.086-46.266 0-23.132 18.247-71.366-9.809-38.46-34.7z" fill="#887ee3"/><path d="M594.773 1058.8c-23.273-5.8-13.992-54.632 23.134-46.3 3.444-16.242 46.617-13.606 46.335 0 27.07-17.402 61.665 17.297 38.46 34.7 27.845 8.436-.35 53.893-23.202 46.3-1.829 12.657-42.68 17.086-46.266 0-23.132 18.247-71.366-9.809-38.46-34.7z" stroke="#000" stroke-linecap="round" stroke-linejoin="round"/><text transform="translate(602.6 1032)" fill="#000"><tspan font-family="Open Sans" font-size="10" font-weight="400" x="10.94" y="11">ICE layer starts </tspan> <tspan font-family="Open Sans" font-size="10" font-weight="400" x=".346" y="25">sending candidates </tspan> <tspan font-family="Open Sans" font-size="10" font-weight="400" x="23.83" y="39">to Naomi</tspan></text><path marker-end="url(#a)" stroke="#000" stroke-linecap="round" stroke-linejoin="round" d="M756 1009l-37.442 37.038"/><path fill="#d2ffff" d="M36 765h144v188H36z"/><path stroke="#000" stroke-linecap="round" stroke-linejoin="round" d="M36 765h144v188H36z"/><text transform="translate(41 770)" fill="#000"><tspan font-family="Open Sans" font-size="10" font-weight="400" x="1" y="11">1. Create an </tspan> <tspan font-family="Courier" font-size="10" font-weight="400" x="10.95" y="24">RTCSessionDescriptio</tspan> <tspan font-family="Courier" font-size="10" font-weight="400" x="10.95" y="37">n</tspan> <tspan font-family="Open Sans" font-size="10" font-weight="400" y="37"> using the received SDP </tspan> <tspan font-family="Open Sans" font-size="10" font-weight="400" x="10.95" y="51">answer</tspan> <tspan font-family="Open Sans" font-size="10" font-weight="400" x="1" y="65">2. 
Pass the session </tspan> <tspan font-family="Open Sans" font-size="10" font-weight="400" x="10.95" y="79">description to </tspan> <tspan font-family="Courier" font-size="10" font-weight="400" x="10.95" y="92">RTCPeerConnection.se</tspan> <tspan font-family="Courier" font-size="10" font-weight="400" x="10.95" y="105">tRemoteDescription()</tspan> <tspan font-family="Open Sans" font-size="10" font-weight="400" y="105"> </tspan> <tspan font-family="Open Sans" font-size="10" font-weight="400" x="10.95" y="119">to configure Naomi's </tspan> <tspan font-family="Open Sans" font-size="10" font-weight="400" x="10.95" y="133">WebRTC layer to know </tspan> <tspan font-family="Open Sans" font-size="10" font-weight="400" x="10.95" y="147">how Priya's end of the </tspan> <tspan font-family="Open Sans" font-size="10" font-weight="400" x="10.95" y="161">connection is configured</tspan></text><path fill="#fff6d2" d="M36 747h144v18H36z"/><path stroke="#000" stroke-linecap="round" stroke-linejoin="round" d="M36 747h144v18H36z"/><text transform="translate(41 750)" fill="#000"><tspan font-family="Courier" font-size="10" font-weight="400" x=".989" y="10">handleVideoAnswerMsg()</tspan></text><path marker-end="url(#a)" stroke="#000" stroke-linecap="round" stroke-linejoin="round" stroke-dasharray="4.0,4.0" d="M396 756H189.9"/><text transform="translate(224 753.56)" fill="#000"><tspan font-family="Open Sans" font-size="10" font-weight="400" x=".199" y="11">Message: </tspan> <tspan font-family="Courier" font-size="10" font-weight="400" y="11">“video-answer</tspan> <tspan font-family="Open Sans" font-size="10" font-weight="400" y="11">”</tspan></text><path fill="#d2ffff" d="M396 513h144v66H396z"/><path stroke="#000" stroke-linecap="round" stroke-linejoin="round" d="M396 513h144v66H396z"/><text transform="translate(401 518)" fill="#000"><tspan font-family="Open Sans" font-size="10" font-weight="400" x="0" y="11">Receive</tspan> <tspan font-family="Courier" font-size="10" font-weight="400" y="11"> “video-offer” </tspan> <tspan font-family="Open Sans" font-size="10" font-weight="400" x="0" y="25">message and forward it to </tspan> <tspan font-family="Open Sans" font-size="10" font-weight="400" x="0" y="39">Priya</tspan></text><path fill="#fff6d2" d="M396 495h144v18H396z"/><path stroke="#000" stroke-linecap="round" stroke-linejoin="round" d="M396 495h144v18H396z"/><text transform="translate(401 498)" fill="#000"><tspan font-family="Courier" font-size="10" font-weight="400" x="30.994" y="10">on.message()</tspan></text><path fill="#d2ffff" d="M396 765h144v66H396z"/><path stroke="#000" stroke-linecap="round" stroke-linejoin="round" d="M396 765h144v66H396z"/><text transform="translate(401 770)" fill="#000"><tspan font-family="Open Sans" font-size="10" font-weight="400" x="0" y="11">Receive </tspan> <tspan font-family="Courier" font-size="10" font-weight="400" y="11">“video-answer”</tspan> <tspan font-family="Open Sans" font-size="10" font-weight="400" y="11"> </tspan> <tspan font-family="Open Sans" font-size="10" font-weight="400" x="0" y="25">message and forward it to </tspan> <tspan font-family="Open Sans" font-size="10" font-weight="400" x="0" y="39">Naomi</tspan></text><path fill="#fff6d2" d="M396 747h144v18H396z"/><path stroke="#000" stroke-linecap="round" stroke-linejoin="round" d="M396 747h144v18H396z"/><text transform="translate(401 750)" fill="#000"><tspan font-family="Courier" font-size="10" font-weight="400" x="30.994" y="10">on.message()</tspan></text><path marker-end="url(#a)" stroke="#000" 
stroke-linecap="round" stroke-linejoin="round" stroke-dasharray="1.0,4.0" d="M288 303.202L187.423 391.95"/><text transform="rotate(-41.425 606.24 -55.815)" fill="#000"><tspan font-family="Helvetica Neue" font-size="10" font-weight="400" x=".117" y="10">Event: </tspan> <tspan font-family="Courier" font-size="10" font-weight="400" y="10">negotiationneeded</tspan></text><path d="M234.773 257.8c-23.273-5.8-13.992-54.632 23.134-46.3 3.444-16.242 46.617-13.606 46.335 0 27.07-17.402 61.665 17.297 38.46 34.7 27.845 8.436-.35 53.893-23.202 46.3-1.829 12.657-42.68 17.086-46.266 0-23.132 18.247-71.366-9.809-38.46-34.7z" fill="#71dcba"/><path d="M234.773 257.8c-23.273-5.8-13.992-54.632 23.134-46.3 3.444-16.242 46.617-13.606 46.335 0 27.07-17.402 61.665 17.297 38.46 34.7 27.845 8.436-.35 53.893-23.202 46.3-1.829 12.657-42.68 17.086-46.266 0-23.132 18.247-71.366-9.809-38.46-34.7z" stroke="#000" stroke-linecap="round" stroke-linejoin="round"/><text transform="translate(242.6 231)" fill="#000"><tspan font-family="Open Sans" font-size="10" font-weight="400" x=".517" y="11">Ready to negotiate, </tspan> <tspan font-family="Open Sans" font-size="10" font-weight="400" x="2.048" y="25">so ask the caller to </tspan> <tspan font-family="Open Sans" font-size="10" font-weight="400" x="13.503" y="39">start doing so</tspan></text></g></svg>
| 0 |
data/mdn-content/files/en-us/web/api/webrtc_api | data/mdn-content/files/en-us/web/api/webrtc_api/using_encoded_transforms/index.md | ---
title: Using WebRTC Encoded Transforms
slug: Web/API/WebRTC_API/Using_Encoded_Transforms
page-type: guide
browser-compat: api.RTCRtpReceiver.transform
---
{{DefaultAPISidebar("WebRTC")}}
WebRTC Encoded Transforms provide a mechanism to inject a high-performance [Stream API](/en-US/docs/Web/API/Streams_API) for modifying encoded video and audio frames into the incoming and outgoing WebRTC pipelines.
This enables use cases such as end-to-end encryption of encoded frames by third-party code.
The API defines both main thread and worker side objects.
The main-thread interface is a {{domxref("RTCRtpScriptTransform")}} instance, which on construction specifies the {{domxref("Worker")}} that is to implement the transformer code.
The transform running in the worker is inserted into the incoming or outgoing WebRTC pipeline by adding the `RTCRtpScriptTransform` to {{domxref("RTCRtpReceiver.transform")}} or {{domxref("RTCRtpSender.transform")}}, respectively.
A counterpart {{domxref("RTCRtpScriptTransformer")}} object is created in the worker thread, which has a {{domxref("ReadableStream")}} `readable` property, a {{domxref("WritableStream")}} `writable` property, and an `options` object passed from the associated {{domxref("RTCRtpScriptTransform")}} constructor.
Encoded video frames ({{domxref("RTCEncodedVideoFrame")}}) or audio frames ({{domxref("RTCEncodedAudioFrame")}}) from the WebRTC pipeline are enqueued on `readable` for processing.
The `RTCRtpScriptTransformer` is made available to code as the `transformer` property of the {{domxref("DedicatedWorkerGlobalScope/rtctransform_event", "rtctransform")}} event, which is fired at the worker global scope whenever an encoded frame is enqueued for processing (and initially on construction of the corresponding {{domxref("RTCRtpScriptTransform")}}).
The worker code must implement a handler for the event that reads encoded frames from `transformer.readable`, modifies them as needed, and writes them to `transformer.writable` in the same order and without any duplication.
While the interface doesn't place any other restrictions on the implementation, a natural way to transform the frames is to create a [pipe chain](/en-US/docs/Web/API/Streams_API/Concepts#pipe_chains) that sends frames enqueued on the `event.transformer.readable` stream through an {{DOMxRef("TransformStream")}} to the `event.transformer.writable` stream.
We can use the `event.transformer.options` property to configure any transform code that depends on whether the transform is enqueuing incoming frames from the packetizer or outgoing frames from a codec.
The {{domxref("RTCRtpScriptTransformer")}} interface also provides methods that can be used when sending encoded video to get the codec to generate a "key" frame, and when receiving video to request that a new key frame be sent.
These may be useful to allow a recipient to start viewing the video more quickly, if (for example) they join a conference call when delta frames are being sent.
The following examples provide more specific examples of how to use the framework using a {{DOMxRef("TransformStream")}} based implementation.
## Test if encoded transforms are supported
Test if [encoded transforms are supported](#browser_compatibility) by checking for the existence of {{domxref("RTCRtpSender.transform")}} (or {{domxref("RTCRtpReceiver.transform")}}):
```js
const supportsEncodedTransforms =
window.RTCRtpSender && "transform" in RTCRtpSender.prototype;
```
## Adding a transform for outgoing frames
A transform running in a worker is inserted into the outgoing WebRTC pipeline by assigning its corresponding `RTCRtpScriptTransform` to the {{domxref("RTCRtpSender.transform")}} for an outgoing track.
This example shows how you might stream video from a user's webcam over WebRTC, adding a WebRTC encoded transform to modify the outgoing streams.
The code assumes that there is an {{domxref("RTCPeerConnection")}} called `peerConnection` that is already connected to a remote peer.
First we get a {{domxref("MediaStreamTrack")}}, using {{domxref("MediaDevices/getUserMedia", "getUserMedia()")}} to get a video {{domxref("MediaStream")}} from a media device, and then the {{domxref("MediaStream.getTracks()")}} method to get the first {{domxref("MediaStreamTrack")}} in the stream.
The track is added to the peer connection using {{domxref("RTCPeerConnection/addTrack()", "addTrack()")}}, which starts streaming it to the remote peer.
The `addTrack()` method returns the {{domxref("RTCRtpSender")}} that is being used to send the track.
```js
// Get Video stream and MediaTrack
const stream = await navigator.mediaDevices.getUserMedia({ video: true });
const [track] = stream.getTracks();
const videoSender = peerConnection.addTrack(track, stream);
```
An `RTCRtpScriptTransform` is then constructed taking a worker script, which defines the transform, and an optional object that can be used to pass arbitrary messages to the worker (in this case we've used a `name` property with value "senderTransform" to tell the worker that this transform will be added to the outbound stream).
We add the transform to the outgoing pipeline by assigning it to the {{domxref("RTCRtpSender.transform")}} property.
```js
// Create a worker containing a TransformStream
const worker = new Worker("worker.js");
videoSender.transform = new RTCRtpScriptTransform(worker, {
name: "senderTransform",
});
```
The [Using separate sender and receiver transforms](#using_separate_sender_and_receiver_transforms) section below shows how the `name` might be used in a worker.
Note that you can add the transform at any time, but by adding it immediately after calling `addTrack()` the transform will get the first encoded frame that is sent.
## Adding a transform for incoming frames
A transform running in a worker is inserted into the incoming WebRTC pipeline by assigning its corresponding `RTCRtpScriptTransform` to the {{domxref("RTCRtpReceiver.transform")}} for an incoming track.
This example shows how you add a transform to modify an incoming stream.
The code assumes that there is an {{domxref("RTCPeerConnection")}} called `peerConnection` that is already connected to a remote peer.
First we add an `RTCPeerConnection` [`track` event](/en-US/docs/Web/API/RTCPeerConnection/track_event) handler to catch the event when the peer starts receiving a new track.
Within the handler we construct an `RTCRtpScriptTransform` and add it to `event.receiver.transform` (`event.receiver` is a {{domxref("RTCRtpReceiver")}}).
As in the previous section, the constructor takes an object with a `name` property, but here we use `receiverTransform` as the value to tell the worker that frames are incoming.
```js
peerConnection.ontrack = (event) => {
const worker = new Worker("worker.js");
event.receiver.transform = new RTCRtpScriptTransform(worker, {
name: "receiverTransform",
});
received_video.srcObject = event.streams[0];
};
```
Note again that you can add the transform stream at any time.
However, adding it in the `track` event handler ensures that the transform stream will get the first encoded frame for the track.
## Worker implementation
The worker script must implement a handler for the {{domxref("DedicatedWorkerGlobalScope/rtctransform_event", "rtctransform")}} event, creating a [pipe chain](/en-US/docs/Web/API/Streams_API/Concepts#pipe_chains) that pipes the `event.transformer.readable` ({{DOMxRef("ReadableStream")}}) stream through a {{DOMxRef("TransformStream")}} to the `event.transformer.writable` ({{DOMxRef("WritableStream")}}) stream.
A worker might support transforming incoming or outgoing encoded frames, or both, and the transform might be hard coded, or configured at run-time using information passed from the web application.
### Basic WebRTC Encoded Transform
The example below shows a basic WebRTC Encoded transform, which negates all bits in queued frames.
It does not use or need options passed in from the main thread because the same algorithm can be used in the sender pipeline to negate the bits and in the receiver pipeline to restore them.
The code implements an event handler for the `rtctransform` event.
This constructs a {{DOMxRef("TransformStream")}}, then pipes through it using {{domxref("ReadableStream.pipeThrough()")}}, and finally pipes to `event.transformer.writable` using {{domxref("ReadableStream.pipeTo()")}}.
```js
addEventListener("rtctransform", (event) => {
const transform = new TransformStream({
start() {}, // Called on startup.
flush() {}, // Called when the stream is about to be closed.
async transform(encodedFrame, controller) {
      // Create a view over the incoming frame's data
      const view = new DataView(encodedFrame.data);
// Construct a new buffer
const newData = new ArrayBuffer(encodedFrame.data.byteLength);
const newView = new DataView(newData);
// Negate all bits in the incoming frame
for (let i = 0; i < encodedFrame.data.byteLength; ++i) {
newView.setInt8(i, ~view.getInt8(i));
}
encodedFrame.data = newData;
controller.enqueue(encodedFrame);
},
});
event.transformer.readable
.pipeThrough(transform)
.pipeTo(event.transformer.writable);
});
```
The implementation of the WebRTC encoded transform is similar to a "generic" {{DOMxRef("TransformStream")}}, but with some important differences.
Like the generic stream, its [constructor](/en-US/docs/Web/API/TransformStream/TransformStream#parameters) takes an object that defines an _optional_ [`start()`](/en-US/docs/Web/API/TransformStream/TransformStream#startcontroller) method, which is called on construction, [`flush()`](/en-US/docs/Web/API/TransformStream/TransformStream#flushcontroller) method, which is called as the stream is about to be closed, and [`transform()`](/en-US/docs/Web/API/TransformStream/TransformStream#transformchunk_controller) method, which is called every time there is a chunk to be processed.
Unlike the generic constructor, any `writableStrategy` or `readableStrategy` properties that are passed in the constructor object are ignored, and the queuing strategy is entirely managed by the user agent.
The `transform()` method also differs in that it is passed either an {{domxref("RTCEncodedVideoFrame")}} or {{domxref("RTCEncodedAudioFrame")}} rather than a generic "chunk".
The actual code shown here for the method isn't otherwise notable: it demonstrates how to convert the frame into a form that you can modify, and how to enqueue the modified frame on the stream afterward.
### Using separate sender and receiver transforms
The previous example works if the transform function is the same when sending and receiving, but in many cases the algorithms will be different.
You could use separate worker scripts for the sender and receiver, or handle both cases in one worker as shown below.
If the worker is used for both sender and receiver, it needs to know whether the current encoded frame is outgoing from a codec, or incoming from the packetizer.
This information can be specified using the second option in the [`RTCRtpScriptTransform` constructor](/en-US/docs/Web/API/RTCRtpScriptTransform/RTCRtpScriptTransform).
For example, we can define a separate `RTCRtpScriptTransform` for the sender and receiver, passing the same worker, and an options object with property `name` that indicates whether the transform is used in the sender or receiver (as shown in previous sections above).
The information is then available in the worker in `event.transformer.options`.
In this example we implement the `onrtctransform` event handler on the global dedicated worker scope object.
The value of the `name` property is used to determine which `TransformStream` to construct (the actual constructor methods are not shown).
```js
// Code to instantiate transform and attach them to sender/receiver pipelines.
onrtctransform = (event) => {
let transform;
  if (event.transformer.options.name === "senderTransform") {
    transform = createSenderTransform(); // returns a TransformStream
  } else if (event.transformer.options.name === "receiverTransform") {
    transform = createReceiverTransform(); // returns a TransformStream
  } else {
    return;
  }
event.transformer.readable
.pipeThrough(transform)
.pipeTo(event.transformer.writable);
};
```
Note that the code to create the pipe chain is the same as in the previous example.
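
The factory functions themselves can return any {{DOMxRef("TransformStream")}}. As a purely illustrative sketch, assuming (as in the basic example above) that the sender negates every bit and the receiver reverses that, they might look like this:

```js
// Hypothetical factories: any functions returning a TransformStream work here
function createSenderTransform() {
  return new TransformStream({
    async transform(encodedFrame, controller) {
      // Negate every bit in the frame before it is sent
      const view = new DataView(encodedFrame.data);
      const newData = new ArrayBuffer(encodedFrame.data.byteLength);
      const newView = new DataView(newData);
      for (let i = 0; i < encodedFrame.data.byteLength; ++i) {
        newView.setInt8(i, ~view.getInt8(i));
      }
      encodedFrame.data = newData;
      controller.enqueue(encodedFrame);
    },
  });
}

function createReceiverTransform() {
  // Bitwise negation is symmetric, so the same transform restores the data
  return createSenderTransform();
}
```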
### Runtime communication with the transform
The [`RTCRtpScriptTransform` constructor](/en-US/docs/Web/API/RTCRtpScriptTransform/RTCRtpScriptTransform) allows you to pass options and transfer objects to the worker.
In the previous example we passed static information, but sometimes you might want to modify the transform algorithm in the worker at runtime, or get information back from the worker.
For example, a WebRTC conference call that supports encryption might need to add a new key to the algorithm used by the transform.
While it is possible to share information between the worker running the transform code and the main thread using {{domxref("Worker.postMessage()")}}, it is generally easier to share a {{domxref("MessageChannel")}} as an [`RTCRtpScriptTransform` constructor](/en-US/docs/Web/API/RTCRtpScriptTransform/RTCRtpScriptTransform) option, because then the channel context is directly available in `event.transformer.options` when you are handling a new encoded frame.
The code below creates a {{domxref("MessageChannel")}} and [transfers](/en-US/docs/Web/API/Web_Workers_API/Transferable_objects) its second port to the worker.
The main thread and transform can subsequently communicate using the first and second ports.
```js
// Create a worker containing a TransformStream
const worker = new Worker("worker.js");
// Create a channel
// Pass channel.port2 to the transform as a constructor option
// and also transfer it to the worker
const channel = new MessageChannel();
const transform = new RTCRtpScriptTransform(
worker,
{ purpose: "encrypt", port: channel.port2 },
[channel.port2],
);
// Use port1 to send a string.
// (we can send and transfer basic types/objects).
channel.port1.postMessage("A message for the worker");
channel.port1.start();
```
In the worker the port is available as `event.transformer.options.port`.
The code below shows how you might listen on the port's `message` event to get messages from the main thread.
You can also use the port to send messages back to the main thread.
```js
event.transformer.options.port.onmessage = (event) => {
// The message payload is in 'event.data';
console.log(event.data);
};
```
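
The channel is bidirectional, so the worker can also post messages back on the same port. For example, the worker might send an acknowledgement (the message contents here are arbitrary examples):

```js
// In the worker: reply on the transferred port
event.transformer.options.port.postMessage("Ready to transform frames");
```

The main thread receives these messages by listening on the paired port:

```js
// In the main thread: listen on the other end of the channel
channel.port1.onmessage = ({ data }) => {
  console.log(`Message from worker: ${data}`);
};
```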
### Triggering a key frame
Raw video is rarely sent or stored because it consumes a lot of space and bandwidth to represent each frame as a complete image.
Instead, codecs periodically generate a "key frame" that contains enough information to construct a full image, and between key frames send "delta frames" that just include the changes since the last delta frame.
While this is far more efficient than sending raw video, it means that in order to display the image associated with a particular delta frame, you need the last key frame and all subsequent delta frames.
This can cause a delay for new users joining a WebRTC conference application, because they can't display video until they have received their first key frame.
Similarly, if an encoded transform was used to encrypt frames, the recipient would not be able to display video until they get the first key frame encrypted with their key.
In order to ensure that a new key frame can be sent as early as possible when needed, the {{domxref("RTCRtpScriptTransformer")}} object in `event.transformer` has two methods: {{domxref("RTCRtpScriptTransformer.generateKeyFrame()")}}, which causes the codec to generate a key frame, and {{domxref("RTCRtpScriptTransformer.sendKeyFrameRequest()")}}, which a receiver can use to request a key frame from the sender.
The example below shows how the main thread might pass an encryption key to a sender transform, and trigger the codec to generate a key frame.
Note that the main thread doesn't have direct access to the {{domxref("RTCRtpScriptTransformer")}} object, so it needs to pass the key and restriction identifier ("rid") to the worker (the "rid" is a stream ID, which indicates the encoder that must generate the key frame).
Here we do that with a `MessageChannel`, using the same pattern as in the previous section.
The code assumes there is already a peer connection, and that `videoSender` is an {{domxref("RTCRtpSender")}}.
```js
const worker = new Worker("worker.js");
const channel = new MessageChannel();
videoSender.transform = new RTCRtpScriptTransform(
worker,
{ name: "senderTransform", port: channel.port2 },
[channel.port2],
);
// Post rid and new key to the sender
channel.port1.start();
channel.port1.postMessage({
rid: "1",
key: "93ae0927a4f8e527f1gce6d10bc6ab6c",
});
```
The {{domxref("DedicatedWorkerGlobalScope/rtctransform_event", "rtctransform")}} event handler in the worker gets the port and uses it to listen for `message` events from the main thread.
If an event is received, the handler gets the `rid` and `key`, and then calls `generateKeyFrame()`.
```js
event.transformer.options.port.onmessage = (event) => {
const { rid, key } = event.data;
// key is used by the transformer to encrypt frames (not shown)
// Get codec to generate a new key frame using the rid
// Here 'rcevent' is the rtctransform event.
rcevent.transformer.generateKeyFrame(rid);
};
```
The code for a receiver to request a new key frame would be almost identical, except that "rid" isn't specified.
Here is the code for just the port message handler:
```js
event.transformer.options.port.onmessage = (event) => {
const { key } = event.data;
// key is used by the transformer to decrypt frames (not shown)
  // Request the sender to emit a key frame
  // ('transformer' is the transformer property of the rtctransform event)
  transformer.sendKeyFrameRequest();
};
```
## Browser compatibility
{{Compat}}
## See also
- {{domxref("RTCRtpScriptTransform")}}
- {{domxref("RTCRtpReceiver.transform")}}
- {{domxref("RTCRtpSender.transform")}}
- {{domxref("DedicatedWorkerGlobalScope.rtctransform_event")}}
- {{domxref("RTCTransformEvent")}}
- {{domxref("RTCRtpScriptTransformer")}}
- {{domxref("RTCEncodedVideoFrame")}}
- {{domxref("RTCEncodedAudioFrame")}}
| 0 |
data/mdn-content/files/en-us/web/api/webrtc_api | data/mdn-content/files/en-us/web/api/webrtc_api/session_lifetime/index.md | ---
title: Lifetime of a WebRTC session
slug: Web/API/WebRTC_API/Session_lifetime
page-type: guide
---
{{DefaultAPISidebar("WebRTC")}}
WebRTC lets you build peer-to-peer communication of arbitrary data, audio, or video—or any combination thereof—into a browser application. In this article, we'll look at the lifetime of a WebRTC session, from establishing the connection all the way through closing the connection when it's no longer needed.
This article doesn't get into details of the actual APIs involved in establishing and handling a WebRTC connection; it reviews the process in general with some information about why each step is required. See [Signaling and video calling](/en-US/docs/Web/API/WebRTC_API/Signaling_and_video_calling) for an actual example with a step-by-step explanation of what the code does.
> **Note:** This page is currently under construction, and some of the content will move to other pages as the WebRTC guide material is built out. Pardon our dust!
## Establishing the connection
The internet is big. Really big. It's so big that years ago, smart people saw how big it was, how fast it was growing, and the [limitations](https://en.wikipedia.org/wiki/IPv4_address_exhaustion) of the 32-bit IP addressing system, and realized that something had to be done before we ran out of addresses to use, so they started working on designing a new 128-bit addressing system. But they realized that it would take longer to complete the transition than 32-bit addresses would last, so other smart people came up with a way to let multiple computers share the same 32-bit IP address. Network Address Translation ({{Glossary("NAT")}}) is a standard which supports this address sharing by handling routing of data inbound and outbound to and from devices on a LAN, all of which are sharing a single WAN (global) IP address.
The problem for users is that each individual computer on the internet no longer necessarily has a unique IP address, and, in fact, each device's IP address may change not only if they move from one network to another, but if their network's address is changed by {{Glossary("NAT")}} and/or [DHCP](https://en.wikipedia.org/wiki/DHCP). For developers trying to do peer-to-peer networking, this introduces a conundrum: without a unique identifier for every user device, it's not possible to instantly and automatically know how to connect to a specific device on the internet. Even though you know who you want to talk to, you don't necessarily know how to reach them or even what their address is.
This is like trying to mail a package to your friend Michelle by labeling it "Michelle" and dropping it in a mailbox when you don't know her address. You need to look up her address and include it on the package, or she'll wind up wondering why you forgot her birthday again.
This is where signaling comes in.
### Signaling
Signaling is the process of sending control information between two devices to determine the communication protocols, channels, media codecs and formats, and method of data transfer, as well as any required routing information. The most important thing to know about the signaling process for WebRTC: **it is not defined in the specification**.
Why, you may wonder, is something fundamental to the process of establishing a WebRTC connection left out of the specification? The answer is simple: since the two devices have no way to directly contact each other, and the specification can't predict every possible use case for WebRTC, it makes more sense to let the developer select an appropriate networking technology and messaging protocol.
In particular, if a developer already has a method in place for connecting two devices, it doesn't make sense for them to have to use another one, defined by the specification, just for WebRTC. Since WebRTC doesn't live in a vacuum, there is likely other connectivity in play, so it makes sense to avoid having to add additional connection channels for signaling if an existing one can be used.
In order to exchange signaling information, you can choose to send JSON objects back and forth over a WebSocket connection, or you could use XMPP or SIP over an appropriate channel, or you could use {{domxref("fetch()")}} over {{Glossary("HTTPS")}} with polling, or any other combination of technologies you can come up with. You could even use email as the signaling channel.
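
For example, sending an SDP offer as JSON over a WebSocket might look like the sketch below. The `signalingChannel` (an open WebSocket), the `peerConnection` variable, and the message's `type`, `sender`, and `target` fields are all conventions invented for this example; WebRTC itself defines none of them:

```js
// Send the local description to the other peer over an app-defined channel
signalingChannel.send(
  JSON.stringify({
    type: "video-offer", // the message vocabulary is up to your application
    sender: "naomi",
    target: "priya",
    sdp: peerConnection.localDescription,
  }),
);
```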
It's also worth noting that the channel for performing signaling doesn't even need to be over the network. One peer can output a data object that can be printed out, physically carried (on foot or by carrier pigeon) to another device, entered into that device, and a response then output by that device to be returned on foot, and so forth, until the WebRTC peer connection is open. It'd be very high latency but it could be done.
#### Information exchanged during signaling
There are three basic types of information that need to be exchanged during signaling:
- Control messages used to set up, open, and close the communication channel, and to handle errors.
- Information needed in order to set up the connection: the IP addressing and port information needed for the peers to be able to talk to one another.
- Media capability negotiation: what codecs and media data formats can the peers understand? These need to be agreed upon before the WebRTC session can begin.
Only once signaling has been successfully completed can the true process of opening the WebRTC peer connection begin.
It's worth noting that the signaling server does not actually need to understand or do anything with the data being exchanged through it by the two peers during signaling. The signaling server is, in essence, a relay: a common point which both sides connect to knowing that their signaling data can be transferred through it. The server doesn't need to react to this information in any way.
#### The signaling process
There's a sequence of things that have to happen in order to make it possible to begin a WebRTC session; a code sketch of the first few steps follows the list:
1. Each peer creates an {{domxref("RTCPeerConnection")}} object representing their end of the WebRTC session.
2. Each peer establishes a handler for {{domxref("RTCPeerConnection/icecandidate_event", "icecandidate")}} events, which handles sending those candidates to the other peer over the signaling channel.
3. Each peer establishes a handler for the {{domxref("RTCPeerConnection.track_event", "track")}} event, which is received when the remote peer adds a track to the stream. This code should connect each track to its consumer, such as a {{HTMLElement("video")}} element.
4. The caller creates and shares with the receiving peer a unique identifier or token of some kind so that the call between them can be identified by the code on the signaling server. The exact contents and form of this identifier are up to you.
5. Each peer connects to an agreed-upon signaling server, such as a WebSocket server they both know how to exchange messages with.
6. Each peer tells the signaling server that they want to join the same WebRTC session (identified by the token established in step 4).
7. **_descriptions, candidates, etc. — more coming up_**
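
As a rough sketch of steps 1 through 3, the setup on one peer might look like the following, where `sendToSignalingServer()` and the `received_video` element are stand-ins invented for this example:

```js
// Step 1: create the peer connection
const pc = new RTCPeerConnection();

// Step 2: relay ICE candidates to the other peer over the signaling channel
pc.onicecandidate = ({ candidate }) => {
  if (candidate) {
    sendToSignalingServer({ type: "new-ice-candidate", candidate });
  }
};

// Step 3: connect incoming tracks to a consumer, such as a <video> element
pc.ontrack = ({ streams }) => {
  document.querySelector("#received_video").srcObject = streams[0];
};
```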
## ICE restart
Sometimes, during the lifetime of a WebRTC session, network conditions change. One of the users might transition from a cellular to a Wi-Fi network, or the network might become congested, for example. When this happens, the ICE agent may choose to perform **ICE restart**. This is a process by which the network connection is renegotiated, exactly the same way the initial ICE negotiation is performed, with one exception: media continues to flow across the original network connection until the new one is up and running. Then media shifts to the new network connection and the old one is closed.
> **Note:** Different browsers support ICE restart under different sets of conditions. Not all browsers will perform ICE restart due to network congestion, for example.
If you need to change the configuration of the connection in some way (such as changing to a different set of ICE servers), you can do so by calling {{domxref("RTCPeerConnection.setConfiguration()")}} with an updated configuration object before restarting ICE.
To explicitly trigger ICE restart, start the renegotiation process by calling {{domxref("RTCPeerConnection.createOffer()")}}, specifying the `iceRestart` option with a value of `true`. Then handle the connection process from then on just like you normally would. This generates new values for the ICE username fragment (ufrag) and password, which will be used by the renegotiation process and the resulting connection.
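
A sketch of the offerer's side, assuming `pc` is the existing {{domxref("RTCPeerConnection")}} and `sendToSignalingServer()` stands in for your own signaling code:

```js
async function restartIce(pc) {
  // iceRestart: true causes new ICE credentials (ufrag and password)
  // to be generated for the renegotiation
  const offer = await pc.createOffer({ iceRestart: true });
  await pc.setLocalDescription(offer);

  // Deliver the new offer to the remote peer as usual
  sendToSignalingServer({ type: "video-offer", sdp: pc.localDescription });
}
```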
The answerer side of the connection will automatically begin ICE restart when new values are detected for the ICE ufrag and ICE password.
## Transmission
## Reception
| 0 |
data/mdn-content/files/en-us/web/api/webrtc_api | data/mdn-content/files/en-us/web/api/webrtc_api/intro_to_rtp/index.md | ---
title: Introduction to the Real-time Transport Protocol (RTP)
slug: Web/API/WebRTC_API/Intro_to_RTP
page-type: guide
---
{{DefaultAPISidebar("WebRTC")}}
The **Real-time Transport Protocol** (**RTP**), defined in {{RFC(3550)}}, is an IETF standard protocol to enable real-time connectivity for exchanging data that needs real-time priority. This article provides an overview of what RTP is and how it functions in the context of WebRTC.
> **Note:** WebRTC actually uses **SRTP** (Secure Real-time Transport Protocol) to ensure that the exchanged data is secure and authenticated as appropriate.
Keeping latency to a minimum is especially important for WebRTC, since face-to-face communication needs to be performed with as little {{Glossary("latency")}} as possible. The more time lag there is between one user saying something and another hearing it, the more likely there is to be episodes of cross-talking and other forms of confusion.
## Key features of RTP
Before examining RTP's use in WebRTC contexts, it's useful to have a general idea of what RTP does and does not offer. RTP is a data transport protocol, whose mission is to move data between two endpoints as efficiently as possible under current conditions. Those conditions may be affected by everything from the underlying layers of the network stack to the physical network connection, the intervening networks, the performance of the remote endpoint, noise levels, traffic levels, and so forth.
Since RTP is a data transport, it is augmented by the closely-related **RTP Control Protocol** (**RTCP**), which is defined in {{RFC(3550, "", 6)}}. RTCP adds features including **Quality of Service** (**QoS**) monitoring, participant information sharing, and the like. It isn't adequate for the purposes of fully managing users, memberships, permissions, and so forth, but provides the basics needed for an unrestricted multi-user communication session.
The very fact that RTCP is defined in the same RFC as RTP is a clue as to just how closely-interrelated these two protocols are.
### Capabilities of RTP
RTP's primary benefits in terms of WebRTC include:
- Generally low latency.
- Packets are sequence-numbered and timestamped for reassembly if they arrive out of order. This lets data sent using RTP be delivered on transports that don't guarantee ordering or even guarantee delivery at all.
- This means RTP can be — but is not required to be — used atop {{Glossary("UDP")}} for its performance as well as its multiplexing and checksum features.
- RTP supports multicast; while this isn't yet important for WebRTC, it's likely to matter in the future, when WebRTC is (hopefully) enhanced to support multi-user conversations.
- RTP isn't limited to use in audiovisual communication. It can be used for any form of continuous or active data transfer, including data streaming, active badges or status display updates, or control and measurement information transport.
### Things RTP doesn't do
RTP itself doesn't provide every possible feature, which is why other protocols are also used by WebRTC. Some of the more noteworthy things RTP doesn't include:
- RTP does _not_ guarantee **[quality-of-service](https://en.wikipedia.org/wiki/Quality-of-service)** (**QoS**).
- While RTP is intended for use in latency-critical scenarios, it doesn't inherently offer any features that ensure QoS. Instead, it only offers the information necessary to allow QoS to be implemented elsewhere in the stack.
- RTP doesn't handle allocation or reservation of resources that may be needed.
Where it matters for WebRTC purposes, these are dealt with in a variety of places within the WebRTC infrastructure. For example, RTCP handles QoS monitoring.
## RTCPeerConnection and RTP
Each {{domxref("RTCPeerConnection")}} has methods which provide access to the list of RTP transports that service the peer connection. These correspond to the following three types of transport supported by `RTCPeerConnection`:
- {{domxref("RTCRtpSender")}}
- : `RTCRtpSender`s handle the encoding and transmission of {{domxref("MediaStreamTrack")}} data to a remote peer. The senders for a given connection can be obtained by calling {{domxref("RTCPeerConnection.getSenders()")}}.
- {{domxref("RTCRtpReceiver")}}
- : `RTCRtpReceiver`s provide the ability to inspect and obtain information about incoming `MediaStreamTrack` data. A connection's receivers can be obtained by calling {{domxref("RTCPeerConnection.getReceivers()")}}.
- {{domxref("RTCRtpTransceiver")}}
- : An `RTCRtpTransceiver` is a pair of one RTP sender and one RTP receiver which share an SDP `mid` attribute, which means they share the same SDP media m-line (representing a bidirectional SRTP stream). These are returned by the {{domxref("RTCPeerConnection.getTransceivers()")}} method, and each `mid` and transceiver share a one-to-one relationship, with the `mid` being unique for each `RTCPeerConnection`.
### Leveraging RTP to implement a "hold" feature
Because the streams for an `RTCPeerConnection` are implemented using RTP and the interfaces [above](#rtcpeerconnection_and_rtp), you can take advantage of the access this gives you to the internals of streams to make adjustments. Among the simplest things you can do is to implement a "hold" feature, wherein a participant in a call can click a button and turn off their microphone, begin sending music to the other peer instead, and stop accepting incoming audio.
> **Note:** This example makes use of modern JavaScript features including [async functions](/en-US/docs/Web/JavaScript/Reference/Statements/async_function) and the [`await`](/en-US/docs/Web/JavaScript/Reference/Operators/await) expression. This enormously simplifies and makes far more readable the code dealing with the promises returned by WebRTC methods.
In the examples below, we'll refer to the peer which is turning "hold" mode on and off as the local peer and the user being placed on hold as the remote peer.
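
The examples also use an `audioTransceiver` variable without showing how it was obtained. One possibility, sketched here on the assumption that `peerConnection` is the active {{domxref("RTCPeerConnection")}}, is to search the connection's transceivers for the one whose sender carries an audio track:

```js
// Find the transceiver handling audio on this connection
const audioTransceiver = peerConnection
  .getTransceivers()
  .find((transceiver) => transceiver.sender.track?.kind === "audio");
```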
#### Activating hold mode
##### Local peer
When the local user decides to enable hold mode, the `enableHold()` method below is called. It accepts as input a {{domxref("MediaStream")}} containing the audio to play while the call is on hold.
```js
async function enableHold(audioStream) {
try {
await audioTransceiver.sender.replaceTrack(audioStream.getAudioTracks()[0]);
audioTransceiver.receiver.track.enabled = false;
audioTransceiver.direction = "sendonly";
} catch (err) {
/* handle the error */
}
}
```
The three lines of code within the [`try`](/en-US/docs/Web/JavaScript/Reference/Statements/try...catch) block perform the following steps:
1. Replace their outgoing audio track with a {{domxref("MediaStreamTrack")}} containing hold music.
2. Disable the incoming audio track.
3. Switch the audio transceiver into send-only mode.
This triggers renegotiation of the `RTCPeerConnection` by sending it a {{domxref("RTCPeerConnection.negotiationneeded_event", "negotiationneeded")}} event, which your code responds to by generating an SDP offer using {{domxref("RTCPeerConnection.createOffer")}} and sending it through the signaling server to the remote peer.
The `audioStream`, containing the audio to play instead of the local peer's microphone audio, can come from anywhere. One possibility is to have a hidden {{HTMLElement("audio")}} element and use {{domxref("HTMLMediaElement.captureStream", "HTMLAudioElement.captureStream()")}} to get its audio stream.
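
A sketch of that approach, assuming a hidden `<audio>` element with the invented ID `hold-music` whose source is a music file, running inside an async function:

```js
// Capture a MediaStream from a hidden, looping <audio> element
const holdMusic = document.querySelector("#hold-music");
holdMusic.loop = true;
await holdMusic.play();
await enableHold(holdMusic.captureStream());
```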
##### Remote peer
On the remote peer, when we receive an SDP offer with the directionality set to `"sendonly"`, we handle it using the `holdRequested()` method, which accepts as input an SDP offer string.
```js
async function holdRequested(offer) {
try {
await peerConnection.setRemoteDescription(offer);
await audioTransceiver.sender.replaceTrack(null);
audioTransceiver.direction = "recvonly";
await sendAnswer();
} catch (err) {
/* handle the error */
}
}
```
The steps taken here are:
1. Set the remote description to the specified `offer` by calling {{domxref("RTCPeerConnection.setRemoteDescription()")}}.
2. Replace the audio transceiver's {{domxref("RTCRtpSender")}}'s track with `null`, meaning no track. This stops sending audio on the transceiver.
3. Set the audio transceiver's {{domxref("RTCRtpTransceiver.direction", "direction")}} property to `"recvonly"`, instructing the transceiver to only accept audio and not to send any.
4. The SDP answer is generated and sent using a method called `sendAnswer()`, which generates the answer using {{domxref("RTCPeerConnection.createAnswer", "createAnswer()")}} then sends the resulting SDP to the other peer over the signaling service.
#### Deactivating hold mode
##### Local peer
When the local user clicks the interface widget to disable hold mode, the `disableHold()` method is called to begin the process of restoring normal functionality.
```js
async function disableHold(micStream) {
await audioTransceiver.sender.replaceTrack(micStream.getAudioTracks()[0]);
audioTransceiver.receiver.track.enabled = true;
audioTransceiver.direction = "sendrecv";
}
```
This reverses the steps taken in `enableHold()` as follows:
1. The audio transceiver's `RTCRtpSender`'s track is replaced with the specified stream's first audio track.
2. The transceiver's incoming audio track is re-enabled.
3. The audio transceiver's direction is set to `"sendrecv"`, indicating that it should return to both sending and receiving streamed audio, instead of only sending.
Just like when hold was engaged, this triggers negotiation again, resulting in your code sending a new offer to the remote peer.
##### Remote peer
When the `"sendrecv"` offer is received by the remote peer, it calls its `holdEnded()` method:
```js
async function holdEnded(offer, micStream) {
try {
await peerConnection.setRemoteDescription(offer);
await audioTransceiver.sender.replaceTrack(micStream.getAudioTracks()[0]);
audioTransceiver.direction = "sendrecv";
await sendAnswer();
} catch (err) {
/* handle the error */
}
}
```
The steps taken inside the `try` block here are:
1. The received offer is stored as the remote description by calling `setRemoteDescription()`.
2. The audio transceiver's `RTCRtpSender`'s {{domxref("RTCRtpSender.replaceTrack", "replaceTrack()")}} method is used to set the outgoing audio track to the first track of the microphone's audio stream.
3. The transceiver's direction is set to `"sendrecv"`, indicating that it should resume both sending and receiving audio.
From this point on, the microphone is re-engaged and the remote user is once again able to hear the local user, as well as speak to them.
## See also
- [WebRTC connectivity](/en-US/docs/Web/API/WebRTC_API/Connectivity)
- [Introduction to WebRTC protocols](/en-US/docs/Web/API/WebRTC_API/Protocols)
- [Lifetime of a WebRTC session](/en-US/docs/Web/API/WebRTC_API/Session_lifetime)
| 0 |
data/mdn-content/files/en-us/web/api/webrtc_api | data/mdn-content/files/en-us/web/api/webrtc_api/using_dtmf/index.md | ---
title: Using DTMF with WebRTC
slug: Web/API/WebRTC_API/Using_DTMF
page-type: guide
---
{{DefaultAPISidebar("WebRTC")}}
In order to more fully support audio/video conferencing, [WebRTC](/en-US/docs/Web/API/WebRTC_API) supports sending {{Glossary("DTMF")}} to the remote peer on an {{domxref("RTCPeerConnection")}}. This article offers a brief high-level overview of how DTMF works over WebRTC, then provides a guide for everyday developers about how to send DTMF over an `RTCPeerConnection`. The DTMF system is often referred to as "touch tone," after an old trade name for the system.
WebRTC doesn't send DTMF codes as audio data. Instead, they're sent out-of-band, as RTP payloads. Note, however, that although it's possible to _send_ DTMF using WebRTC, there is currently no way to detect or receive _incoming_ DTMF. WebRTC currently ignores these payloads; this is because WebRTC's DTMF support is primarily intended for use with legacy telephone services that rely on DTMF tones to perform tasks such as:
- Teleconferencing systems
- Menu systems
- Voicemail systems
- Entry of credit card or other payment information
- Passcode entry
> **Note:** While the DTMF is not sent to the remote peer as audio, browsers may choose to play the corresponding tone to the local user as part of their user experience, since users are typically used to hearing their phone play the tones audibly.
## Sending DTMF on an RTCPeerConnection
A given {{domxref("RTCPeerConnection")}} can have multiple media tracks sent or received on it. When you wish to transmit DTMF signals, you first need to decide which track to send them on, since DTMF is sent as a series of out-of-band payloads on the {{domxref("RTCRtpSender")}} responsible for transmitting that track's data to the other peer.
Once the track is selected, you can obtain from its `RTCRtpSender` the {{domxref("RTCDTMFSender")}} object you'll use for sending DTMF. From there, you can call {{domxref("RTCDTMFSender.insertDTMF()")}} to enqueue DTMF signals to be sent on the track to the other peer. The `RTCRtpSender` will then send the tones to the other peer as packets alongside the track's audio data.
Each time a tone is sent, the {{domxref("RTCDTMFSender")}} receives a [`tonechange`](/en-US/docs/Web/API/RTCDTMFSender/tonechange_event) event with a {{domxref("RTCDTMFToneChangeEvent.tone", "tone")}} property specifying which tone finished playing, which is an opportunity to update interface elements, for example. When the tone buffer is empty, indicating that all the tones have been sent, a `tonechange` event with its `tone` property set to "" (an empty string) is delivered to the `RTCDTMFSender` object.
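
Putting that together, a minimal sketch, assuming `audioSender` is the {{domxref("RTCRtpSender")}} for the chosen audio track, might look like this:

```js
const dtmfSender = audioSender.dtmf;

if (dtmfSender) {
  // An empty tone string means the queue has been fully sent
  dtmfSender.addEventListener("tonechange", ({ tone }) => {
    console.log(tone ? `Tone played: ${tone}` : "All tones sent");
  });

  // Queue the tones; they are sent as out-of-band RTP payloads
  dtmfSender.insertDTMF("1199#");
}
```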
If you'd like to know more about how this works, read {{RFC(3550, "RTP: A Transport Protocol for Real-Time Applications")}} and {{RFC(4733, "RTP Payload for DTMF Digits, Telephony Tones, and Telephony Signals")}}. The details of how DTMF payloads are handled on RTP are beyond the scope of this article. Instead, we'll focus on how to use DTMF within the context of an {{domxref("RTCPeerConnection")}} by studying how an example works.
## Simple example
This simple example constructs two {{domxref("RTCPeerConnection")}}s, establishes a connection between them, then waits for the user to click a "Dial" button. When the button is clicked, a DTMF string is sent over the connection using {{domxref("RTCDTMFSender.insertDTMF()")}}. Once the tones finish transmitting, the connection is closed.
> **Note:** This example is obviously somewhat contrived, since normally the two `RTCPeerConnection` objects would exist on different devices, and signaling would be done over the network instead of it all being linked up inline as it is here.
### HTML
The HTML for this example is very basic; there are only three elements of importance:
- An {{HTMLElement("audio")}} element to play the audio received by the `RTCPeerConnection` being "called."
- A {{HTMLElement("button")}} element to trigger creating and connecting the two `RTCPeerConnection` objects, then sending the DTMF tones.
- A {{HTMLElement("div")}} to receive and display log text to show status information.
```html
<p>
This example demonstrates the use of DTMF in WebRTC. Note that this example is
"cheating" by generating both peers in one code stream, rather than having
each be a truly separate entity.
</p>
<audio id="audio" autoplay controls></audio><br />
<button name="dial" id="dial">Dial</button>
<div class="log"></div>
```
### JavaScript
Let's take a look at the JavaScript code next. Keep in mind that the process of establishing the connection is somewhat contrived here; you normally don't build both ends of the connection in the same document.
#### Global variables
First, we establish global variables.
```js
let dialString = "12024561111";
let callerPC = null;
let receiverPC = null;
let dtmfSender = null;
let hasAddTrack = false;
let mediaConstraints = {
audio: true,
video: false,
};
let offerOptions = {
offerToReceiveAudio: 1,
offerToReceiveVideo: 0,
};
let dialButton = null;
let logElement = null;
```
These are, in order:
- `dialString`
- : The DTMF string the caller will send when the "Dial" button is clicked.
- `callerPC` and `receiverPC`
- : The {{domxref("RTCPeerConnection")}} objects representing the caller and the receiver, respectively. These will be initialized when the call starts up, in our `connectAndDial()` function, as shown in [Starting the connection process](#starting_the_connection_process) below.
- `dtmfSender`
- : The {{domxref("RTCDTMFSender")}} object for the connection. This will be obtained while setting up the connection, in the `gotStream()` function shown in [Adding the audio to the connection](#adding_the_audio_to_the_connection).
- `hasAddTrack`
- : Some browsers have not yet implemented {{domxref("RTCPeerConnection.addTrack()")}}, requiring the use of the obsolete {{domxref("RTCPeerConnection.addStream", "addStream()")}} method instead, so we use this Boolean to determine whether or not the user agent supports `addTrack()`; if it doesn't, we'll fall back to `addStream()`. This gets figured out in `connectAndDial()`, as shown in [Starting the connection process](#starting_the_connection_process).
- `mediaConstraints`
- : An object specifying the constraints to use when starting the connection. We want an audio-only connection, so `video` is `false`, while `audio` is `true`.
- `offerOptions`
- : An object providing options to specify when calling {{domxref("RTCPeerConnection.createOffer()")}}. In this case, we state that we want to receive audio but not video.
- `dialButton` and `logElement`
- : These variables will be used to store references to the dial button and the {{HTMLElement("div")}} into which logging information will be written. They'll get set up when the page is first loaded. See [Initialization](#initialization) below.
#### Initialization
When the page loads, we do some basic setup: we fetch references to the dial button and the log output box elements, and we use {{domxref("EventTarget.addEventListener", "addEventListener()")}} to add an event listener to the dial button so that clicking it calls the `connectAndDial()` function to begin the connection process.
```js
window.addEventListener("load", () => {
logElement = document.querySelector(".log");
dialButton = document.querySelector("#dial");
dialButton.addEventListener("click", connectAndDial, false);
});
```
#### Starting the connection process
When the dial button is clicked, `connectAndDial()` is called. This starts building the WebRTC connection in preparation for sending the DTMF codes.
```js
function connectAndDial() {
callerPC = new RTCPeerConnection();
hasAddTrack = callerPC.addTrack !== undefined;
callerPC.onicecandidate = handleCallerIceEvent;
callerPC.onnegotiationneeded = handleCallerNegotiationNeeded;
callerPC.oniceconnectionstatechange = handleCallerIceConnectionStateChange;
callerPC.onsignalingstatechange = handleCallerSignalingStateChangeEvent;
callerPC.onicegatheringstatechange = handleCallerGatheringStateChangeEvent;
receiverPC = new RTCPeerConnection();
receiverPC.onicecandidate = handleReceiverIceEvent;
if (hasAddTrack) {
receiverPC.ontrack = handleReceiverTrackEvent;
} else {
receiverPC.onaddstream = handleReceiverAddStreamEvent;
}
navigator.mediaDevices
.getUserMedia(mediaConstraints)
.then(gotStream)
.catch((err) => log(err.message));
}
```
After creating the `RTCPeerConnection` for the caller (`callerPC`), we look to see if it has an {{domxref("RTCPeerConnection.addTrack", "addTrack()")}} method. If it does, we set `hasAddTrack` to `true`; otherwise, we set it to `false`. This variable will let the example operate even on browsers not yet implementing the newer `addTrack()` method; we'll do so by falling back to the older {{domxref("RTCPeerConnection.addStream", "addStream()")}} method.
Next, the event handlers for the caller are established. We'll cover these in detail later.
Then a second `RTCPeerConnection`, this one representing the receiving end of the call, is created and stored in `receiverPC`; its `onicecandidate` event handler is set up too.
If `addTrack()` is supported, we set up the receiver's `ontrack` event handler; otherwise, we set up `onaddstream`. The {{domxref("RTCPeerConnection.track_event", "track")}} and {{domxref("RTCPeerConnection/addstream_event", "addstream")}} events are sent when media is added to the connection.
Finally, we call {{domxref("MediaDevices.getUserMedia", "getUserMedia()")}} to obtain access to the caller's microphone. If successful, the function `gotStream()` is called, otherwise we log the error because calling has failed.
#### Adding the audio to the connection
As mentioned above, when the audio input from the microphone is obtained, `gotStream()` is called. Its job is to build the stream being sent to the receiver, so the actual process of starting to transmit can begin. It also gets access to the `RTCDTMFSender` we'll use to issue DTMF on the connection.
```js
function gotStream(stream) {
log("Got access to the microphone.");
let audioTracks = stream.getAudioTracks();
if (hasAddTrack) {
if (audioTracks.length > 0) {
audioTracks.forEach((track) => callerPC.addTrack(track, stream));
}
} else {
log(
"Your browser doesn't support RTCPeerConnection.addTrack(). Falling " +
"back to the <strong>deprecated</strong> addStream() method…",
);
callerPC.addStream(stream);
}
if (callerPC.getSenders) {
dtmfSender = callerPC.getSenders()[0].dtmf;
} else {
log(
"Your browser doesn't support RTCPeerConnection.getSenders(), so " +
"falling back to use <strong>deprecated</strong> createDTMFSender() " +
"instead.",
);
dtmfSender = callerPC.createDTMFSender(audioTracks[0]);
}
dtmfSender.ontonechange = handleToneChangeEvent;
}
```
After setting `audioTracks` to be a list of the audio tracks on the stream from the user's microphone, it's time to add the media to the caller's `RTCPeerConnection`. If `addTrack()` is available on the `RTCPeerConnection`, we add each of the stream's audio tracks, one by one, to the connection using {{domxref("RTCPeerConnection.addTrack()")}}. Otherwise we call {{domxref("RTCPeerConnection.addStream()")}} to add the stream to the call as a single unit.
Next we look to see if the {{domxref("RTCPeerConnection.getSenders()")}} method is implemented. If it is, we call it on `callerPC` and get the first entry in the returned list of senders; this is the {{domxref("RTCRtpSender")}} responsible for transmitting data for the first audio track on the call (which is the track we'll send DTMF over). We then obtain the `RTCRtpSender`'s {{domxref("RTCRtpSender.dtmf", "dtmf")}} property, which is an {{domxref("RTCDTMFSender")}} object that can send DTMF on the connection, from the caller to the receiver.
If `getSenders()` isn't available, we instead call {{domxref("RTCPeerConnection.createDTMFSender()")}} to get the `RTCDTMFSender` object. Although this method is obsolete, this example supports it as a fallback to let older browsers (and those not yet updated to support the current WebRTC DTMF API) run the example.
Finally, we set the DTMF sender's {{domxref("RTCDTMFSender.tonechange_event", "ontonechange")}} event handler so we get notified each time a DTMF tone finishes playing.
The `log()` function used throughout this example is shown in [Logging](#logging) at the end.
#### When a tone finishes playing
Each time a DTMF tone finishes playing, a [`tonechange`](/en-US/docs/Web/API/RTCDTMFSender/tonechange_event) event is delivered to the {{domxref("RTCDTMFSender")}} stored in `dtmfSender`. The event listener for these is implemented as the `handleToneChangeEvent()` function.
```js
function handleToneChangeEvent(event) {
if (event.tone !== "") {
log(`Tone played: ${event.tone}`);
} else {
log("All tones have played. Disconnecting.");
callerPC.getLocalStreams().forEach((stream) => {
stream.getTracks().forEach((track) => {
track.stop();
});
});
receiverPC.getLocalStreams().forEach((stream) => {
stream.getTracks().forEach((track) => {
track.stop();
});
});
audio.pause();
audio.srcObject = null;
receiverPC.close();
callerPC.close();
}
}
```
The [`tonechange`](/en-US/docs/Web/API/RTCDTMFSender/tonechange_event) event is used both to indicate when an individual tone has played and when all tones have finished playing. The event's {{domxref("RTCDTMFToneChangeEvent.tone", "tone")}} property is a string indicating which tone just finished playing. If all tones have finished playing, `tone` is an empty string; when that's the case, {{domxref("RTCDTMFSender.toneBuffer")}} is empty.
In this example, we log to the screen which tone just finished playing. In a more advanced application, you might update the user interface, for example, to indicate which note is currently playing.
On the other hand, if the tone buffer is empty, our example is designed to disconnect the call. This is done by stopping each stream on both the caller and the receiver: we iterate over each `RTCPeerConnection`'s local streams (as returned by its `getLocalStreams()` method), then over each stream's tracks (via {{domxref("MediaStream.getTracks", "getTracks()")}}), calling each track's {{domxref("MediaStreamTrack.stop", "stop()")}} method.
Once both the caller's and the receiver's media tracks are all stopped, we pause the {{HTMLElement("audio")}} element and set its {{domxref("HTMLMediaElement.srcObject", "srcObject")}} to `null`. This detaches the audio stream from the {{HTMLElement("audio")}} element.
Then, finally, each `RTCPeerConnection` is closed by calling its {{domxref("RTCPeerConnection.close", "close()")}} method.
#### Adding candidates to the caller
When the caller's `RTCPeerConnection` ICE layer comes up with a new candidate to propose, it issues an {{domxref("RTCPeerConnection.icecandidate_event", "icecandidate")}} event to `callerPC`. The `icecandidate` event handler's job is to transmit the candidate to the receiver. In our example, we are directly controlling both the caller and the receiver, so we can just directly add the candidate to the receiver by calling its {{domxref("RTCPeerConnection.addIceCandidate", "addIceCandidate()")}} method. That's handled by `handleCallerIceEvent()`:
```js
function handleCallerIceEvent(event) {
if (event.candidate) {
log(`Adding candidate to receiver: ${event.candidate.candidate}`);
receiverPC
.addIceCandidate(new RTCIceCandidate(event.candidate))
.catch((err) => log(`Error adding candidate to receiver: ${err}`));
} else {
log("Caller is out of candidates.");
}
}
```
If the {{domxref("RTCPeerConnection.icecandidate_event", "icecandidate")}} event has a non-`null` `candidate` property, we create a new {{domxref("RTCIceCandidate")}} object from the `event.candidate` string and "transmit" it to the receiver by calling `receiverPC.addIceCandidate()`, providing the new `RTCIceCandidate` as its input. If `addIceCandidate()` fails, the `catch()` clause outputs the error to our log box.
If `event.candidate` is `null`, that indicates that there are no more candidates available, and we log that information.
#### Dialing once the connection is open
Our design requires that when the connection is established, we immediately send the DTMF string. To accomplish that, we watch for the caller to receive an {{domxref("RTCPeerConnection.iceconnectionstatechange_event", "iceconnectionstatechange")}} event. This event is sent when one of a number of changes occurs to the state of the ICE connection process, including the successful establishment of a connection.
```js
function handleCallerIceConnectionStateChange() {
log(`Caller's connection state changed to ${callerPC.iceConnectionState}`);
if (callerPC.iceConnectionState === "connected") {
log(`Sending DTMF: "${dialString}"`);
dtmfSender.insertDTMF(dialString, 400, 50);
}
}
```
The `iceconnectionstatechange` event doesn't actually include within it the new state, so we get the connection process's current state from `callerPC`'s {{domxref("RTCPeerConnection.iceConnectionState")}} property. After logging the new state, we look to see if the state is `"connected"`. If so, we log the fact that we're about to send the DTMF, then we call {{domxref("RTCDTMFSender.insertDTMF", "insertDTMF()")}} on the `RTCDTMFSender` object we [previously stored](#adding_the_audio_to_the_connection) in `dtmfSender`, which sends the DTMF on the same track as the audio data.
Our call to `insertDTMF()` specifies not only the DTMF to send (`dialString`), but also the length of each tone in milliseconds (400 ms) and the amount of time between tones (50 ms).
#### Negotiating the connection
When the calling {{domxref("RTCPeerConnection")}} begins to receive media (after the microphone's stream is added to it), a {{domxref("RTCPeerConnection.negotiationneeded_event", "negotiationneeded")}} event is delivered to the caller, letting it know that it's time to start negotiating the connection with the receiver. As previously mentioned, our example is simplified somewhat because we control both the caller and the receiver, so `handleCallerNegotiationNeeded()` is able to quickly construct the connection by chaining the required calls together for both the caller and receiver, as shown below.
```js
function handleCallerNegotiationNeeded() {
log("Negotiating…");
callerPC
.createOffer(offerOptions)
.then((offer) => {
log(`Setting caller's local description: ${offer.sdp}`);
return callerPC.setLocalDescription(offer);
})
.then(() => {
log(
"Setting receiver's remote description to the same as caller's local",
);
return receiverPC.setRemoteDescription(callerPC.localDescription);
})
.then(() => {
log("Creating answer");
return receiverPC.createAnswer();
})
.then((answer) => {
log(`Setting receiver's local description to ${answer.sdp}`);
return receiverPC.setLocalDescription(answer);
})
.then(() => {
log("Setting caller's remote description to match");
return callerPC.setRemoteDescription(receiverPC.localDescription);
})
.catch((err) => log(`Error during negotiation: ${err.message}`));
}
```
Since the various methods involved in negotiating the connection return {{jsxref("Promise")}}s, we can chain them together like this:
1. Call {{domxref("RTCPeerConnection.createOffer", "callerPC.createOffer()")}} to get an offer.
2. Then take that offer and set the caller's local description to match by calling {{domxref("RTCPeerConnection.setLocalDescription", "callerPC.setLocalDescription()")}}.
3. Then "transmit" the offer to the receiver by calling {{domxref("RTCPeerConnection.setRemoteDescription", "receiverPC.setRemoteDescription()")}}. This configures the receiver so that it knows how the caller is configured.
4. Then the receiver creates an answer by calling {{domxref("RTCPeerConnection.createAnswer", "receiverPC.createAnswer()")}}.
5. Then the receiver sets its local description to match the newly-created answer by calling {{domxref("RTCPeerConnection.setLocalDescription", "receiverPC.setLocalDescription()")}}.
6. Then the answer is "transmitted" to the caller by calling {{domxref("RTCPeerConnection.setRemoteDescription", "callerPC.setRemoteDescription()")}}. This lets the caller know what the receiver's configuration is.
7. If at any time an error occurs, the `catch()` clause outputs an error message to the log.
#### Tracking other state changes
We can also watch for changes to the signaling state (by accepting {{domxref("RTCPeerConnection.signalingstatechange_event", "signalingstatechange")}} events) and the ICE gathering state (by accepting {{domxref("RTCPeerConnection.icegatheringstatechange_event", "icegatheringstatechange")}} events). We aren't using these for anything, so all we do is log them; we could have omitted these event listeners entirely.
```js
function handleCallerSignalingStateChangeEvent() {
log(`Caller's signaling state changed to ${callerPC.signalingState}`);
}
function handleCallerGatheringStateChangeEvent() {
log(`Caller's ICE gathering state changed to ${callerPC.iceGatheringState}`);
}
```
#### Adding candidates to the receiver
When the receiver's `RTCPeerConnection` ICE layer comes up with a new candidate to propose, it issues an {{domxref("RTCPeerConnection.icecandidate_event", "icecandidate")}} event to `receiverPC`. The `icecandidate` event handler's job is to transmit the candidate to the caller. In our example, we are directly controlling both the caller and the receiver, so we can just directly add the candidate to the caller by calling its {{domxref("RTCPeerConnection.addIceCandidate", "addIceCandidate()")}} method. That's handled by `handleReceiverIceEvent()`.
This code is analogous to the `icecandidate` event handler for the caller, seen in [Adding candidates to the caller](#adding_candidates_to_the_caller) above.
```js
function handleReceiverIceEvent(event) {
if (event.candidate) {
log(`Adding candidate to caller: ${event.candidate.candidate}`);
callerPC
.addIceCandidate(new RTCIceCandidate(event.candidate))
.catch((err) => log(`Error adding candidate to caller: ${err}`));
} else {
log("Receiver is out of candidates.");
}
}
```
If the {{domxref("RTCPeerConnection.icecandidate_event", "icecandidate")}} event has a non-`null` `candidate` property, we create a new {{domxref("RTCIceCandidate")}} object from the `event.candidate` string and deliver it to the caller by passing that into `callerPC.addIceCandidate()`. If `addIceCandidate()` fails, the `catch()` clause outputs the error to our log box.
If `event.candidate` is `null`, that indicates that there are no more candidates available, and we log that information.
#### Adding media to the receiver
When the receiver begins to receive media, an event is delivered to the receiver's {{domxref("RTCPeerConnection")}}, `receiverPC`. As explained in [Starting the connection process](#starting_the_connection_process), the current WebRTC specification uses the {{domxref("RTCPeerConnection.track_event", "track")}} event for this. Since some browsers haven't been updated to support this yet, we also need to handle the {{domxref("RTCPeerConnection/addstream_event", "addstream")}} event. This is demonstrated in the `handleReceiverTrackEvent()` and `handleReceiverAddStreamEvent()` methods below.
```js
function handleReceiverTrackEvent(event) {
audio.srcObject = event.streams[0];
}
function handleReceiverAddStreamEvent(event) {
audio.srcObject = event.stream;
}
```
The `track` event includes a {{domxref("RTCTrackEvent.streams", "streams")}} property containing an array of the streams the track is a member of (one track can be part of many streams). We take the first stream and attach it to the {{HTMLElement("audio")}} element.
The `addstream` event includes a {{domxref("MediaStreamEvent.stream", "stream")}} property specifying a single stream added to the track. We attach it to the `<audio>` element.
#### Logging
A simple `log()` function is used throughout the code to append HTML to a {{HTMLElement("div")}} box for displaying status and errors to the user.
```js
function log(msg) {
logElement.innerHTML += `${msg}<br/>`;
}
```
### Result
You can try this example below. When you click the "Dial" button, you should see a series of logging messages output; then the dialing will begin. If your browser plays the tones audibly as part of its user experience, you should hear them as they're transmitted.
{{ EmbedLiveSample('Simple_example', 600, 500, "", "", "", "microphone") }}
Once transmission of the tones is complete, the connection is closed. You can click "Dial" again to reconnect and send the tones.
## See also
- [WebRTC API](/en-US/docs/Web/API/WebRTC_API)
- [Lifetime of a WebRTC session](/en-US/docs/Web/API/WebRTC_API/Session_lifetime)
- [Signaling and video calling](/en-US/docs/Web/API/WebRTC_API/Signaling_and_video_calling) (a tutorial and example which explains the signaling process in more detail)
- [Introduction to WebRTC protocols](/en-US/docs/Web/API/WebRTC_API/Protocols)
---
title: WebRTC connectivity
slug: Web/API/WebRTC_API/Connectivity
page-type: guide
---
{{DefaultAPISidebar("WebRTC")}}
This article describes how the various WebRTC-related protocols interact with one another in order to create a connection and transfer data and/or media among peers.
> **Note:** This page needs heavy rewriting for structural integrity and content completeness. Lots of info here is good but the organization is a mess since this is sort of a dumping ground right now.
## Signaling
Unfortunately, WebRTC can't create connections without some sort of server in the middle. We call this the **signal channel** or **signaling service**. It's any sort of channel of communication to exchange information before setting up a connection, whether by email, postcard, or a carrier pigeon. It's up to you.
The information we need to exchange is the Offer and Answer which just contains the {{Glossary("SDP")}} mentioned below.
Peer A, who will be the initiator of the connection, will create an Offer. They will then send this offer to Peer B using the chosen signal channel. Peer B will receive the Offer from the signal channel and create an Answer. They will then send this back to Peer A along the signal channel.
### Session descriptions
The configuration of an endpoint on a WebRTC connection is called a **session description**. The description includes information about the kind of media being sent, its format, the transfer protocol being used, the endpoint's IP address and port, and other information needed to describe a media transfer endpoint. This information is exchanged and stored using **Session Description Protocol** ({{Glossary("SDP")}}); if you want details on the format of SDP data, you can find it in {{RFC(8866)}}.
When a user starts a WebRTC call to another user, a special description is created called an **offer**. This description includes all the information about the caller's proposed configuration for the call. The recipient then responds with an **answer**, which is a description of their end of the call. In this way, both devices share with one another the information needed in order to exchange media data. This exchange is handled using Interactive Connectivity Establishment ({{Glossary("ICE")}}), a protocol which lets two devices use an intermediary to exchange offers and answers even if the two devices are separated by Network Address Translation ({{Glossary("NAT")}}).
Each peer, then, keeps two descriptions on hand: the **local description**, describing itself, and the **remote description**, describing the other end of the call.
The offer/answer process is performed both when a call is first established and any time the call's format or other configuration needs to change. Regardless of whether it's a new call or reconfiguring an existing one, these are the basic steps which must occur to exchange the offer and answer, leaving out the ICE layer for the moment (a code sketch of the caller's side follows the list):
1. The caller captures local media via {{domxref("MediaDevices.getUserMedia")}}.
2. The caller creates an `RTCPeerConnection` and calls {{domxref("RTCPeerConnection.addTrack()")}} for each track (since `addStream()` is deprecated).
3. The caller calls {{domxref("RTCPeerConnection.createOffer()")}} to create an offer.
4. The caller calls {{domxref("RTCPeerConnection.setLocalDescription()")}} to set that offer as the _local description_ (that is, the description of the local end of the connection).
5. After `setLocalDescription()` is called, the caller's ICE agent begins gathering candidates, querying STUN/TURN servers as configured.
6. The caller uses the signaling server to transmit the offer to the intended receiver of the call.
7. The recipient receives the offer and calls {{domxref("RTCPeerConnection.setRemoteDescription()")}} to record it as the _remote description_ (the description of the other end of the connection).
8. The recipient does any setup it needs to do for its end of the call: capture its local media, and attach each media track to the peer connection via {{domxref("RTCPeerConnection.addTrack()")}}.
9. The recipient then creates an answer by calling {{domxref("RTCPeerConnection.createAnswer()")}}.
10. The recipient calls {{domxref("RTCPeerConnection.setLocalDescription()")}}, passing in the created answer, to set the answer as its local description. The recipient now knows the configuration of both ends of the connection.
11. The recipient uses the signaling server to send the answer to the caller.
12. The caller receives the answer.
13. The caller calls {{domxref("RTCPeerConnection.setRemoteDescription()")}} to set the answer as the remote description for its end of the call. It now knows the configuration of both peers. Media begins to flow as configured.
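Here's a hedged sketch of the caller's side of that exchange; `signaling` stands in for whatever channel you use (it isn't part of WebRTC), and error handling is omitted:

```js
// A sketch of the caller's side; `signaling` is a hypothetical wrapper
// around your chosen signaling channel, not part of the WebRTC API.
const pc = new RTCPeerConnection({
  iceServers: [{ urls: "stun:stun.example.com" }], // placeholder server
});

async function startCall() {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true }); // step 1
  for (const track of stream.getTracks()) {
    pc.addTrack(track, stream); // step 2
  }

  const offer = await pc.createOffer(); // step 3
  await pc.setLocalDescription(offer); // step 4; gathering begins (step 5)
  signaling.send({ description: pc.localDescription }); // step 6
}

// Steps 12–13: apply the recipient's answer when it arrives.
signaling.onmessage = async ({ description }) => {
  if (description?.type === "answer") {
    await pc.setRemoteDescription(description);
  }
};
```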
### Pending and current descriptions
Taking one step deeper into the process, we find that `localDescription` and `remoteDescription`, the properties which return these two descriptions, aren't as simple as they look. Because an offer might be rejected during renegotiation if it proposes an incompatible format, each endpoint needs the ability to propose a new format without actually switching to it until it's accepted by the other peer. For that reason, WebRTC uses _pending_ and _current_ descriptions.
The **current description** (which is returned by the {{domxref("RTCPeerConnection.currentLocalDescription")}} and {{domxref("RTCPeerConnection.currentRemoteDescription")}} properties) represents the description currently in actual use by the connection. This is the most recent description that both sides have fully agreed to use.
The **pending description** (returned by {{domxref("RTCPeerConnection.pendingLocalDescription")}} and {{domxref("RTCPeerConnection.pendingRemoteDescription")}}) indicates a description which is currently under consideration following a call to `setLocalDescription()` or `setRemoteDescription()`, respectively.
When reading the description (returned by {{domxref("RTCPeerConnection.localDescription")}} and {{domxref("RTCPeerConnection.remoteDescription")}}), the returned value is the value of `pendingLocalDescription`/`pendingRemoteDescription` if there's a pending description (that is, the pending description isn't `null`); otherwise, the current description (`currentLocalDescription`/`currentRemoteDescription`) is returned.
When changing the description by calling `setLocalDescription()` or `setRemoteDescription()`, the specified description is set as the pending description, and the WebRTC layer begins to evaluate whether or not it's acceptable. Once the proposed description has been agreed upon, the value of `currentLocalDescription` or `currentRemoteDescription` is changed to the pending description, and the pending description is set to null again, indicating that there isn't a pending description.
> **Note:** The `pendingLocalDescription` contains not just the offer or answer under consideration, but any local ICE candidates which have already been gathered since the offer or answer was created. Similarly, `pendingRemoteDescription` includes any remote ICE candidates which have been provided by calls to {{domxref("RTCPeerConnection.addIceCandidate()")}}.
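As an illustration, a small helper like this (a sketch, not part of the API) reports whether a local proposal is still under consideration:

```js
// A sketch: report whether a local proposal is still under consideration.
function describeLocalState(pc) {
  const pending = pc.pendingLocalDescription;
  if (pending) {
    console.log(`Proposal in flight: ${pending.type}`);
  } else {
    console.log(`Settled on: ${pc.currentLocalDescription?.type ?? "nothing yet"}`);
  }
}
```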
See the individual articles on these properties and methods for more specifics, and [Codecs used by WebRTC](/en-US/docs/Web/Media/Formats/WebRTC_codecs) for information about codecs supported by WebRTC and which are compatible with which browsers. The codecs guide also offers guidance to help you choose the best codecs for your needs.
## ICE candidates
As well as exchanging information about the media (discussed above in Offer/Answer and SDP), peers must exchange information about the network connection. This is known as an **ICE candidate**, and it details the methods by which the peer is able to communicate (directly or through a TURN server). Typically, each peer proposes its best candidates first, working down the line toward its worst candidates. Ideally, candidates are UDP (since it's faster, and media streams are able to recover from interruptions relatively easily), but the ICE standard does allow TCP candidates as well.
> **Note:** Generally, ICE candidates using TCP are only going to be used when UDP is not available or is restricted in ways that make it not suitable for media streaming. Not all browsers support ICE over TCP, however.
ICE allows candidates to represent connections over either {{Glossary("TCP")}} or {{Glossary("UDP")}}, with UDP generally being preferred (and being more widely supported). Each protocol supports a few types of candidate, with the candidate types defining how the data makes its way from peer to peer.
### UDP candidate types
UDP candidates (candidates with their {{domxref("RTCIceCandidate.protocol", "protocol")}} set to `udp`) can be one of these types:
- `host`
- : A host candidate is one whose {{domxref("RTCIceCandidate/address", "ip")}} address is the actual, direct IP address of the remote peer.
- `prflx`
- : A peer reflexive candidate is one whose IP address comes from a symmetric NAT between the two peers, usually as an additional candidate during trickle ICE (that is, additional candidate exchanges that occur after primary signaling but before the connection verification phase is finished).
- `srflx`
- : A server reflexive candidate is generated with the help of a {{Glossary("STUN")}} server. The peer sends a binding request to the STUN server; as the request passes through the peer's NAT, the NAT maps the packet's source address to a public address and port. The STUN server replies with that public address, which the peer then advertises as a server reflexive candidate — an address it couldn't have discovered on its own.
- `relay`
- : A relay candidate is generated just like a server reflexive candidate (`"srflx"`), but using {{Glossary("TURN")}} instead of {{Glossary("STUN")}}.
### TCP candidate types
TCP candidates (that is, candidates whose {{domxref("RTCIceCandidate.protocol", "protocol")}} is `tcp`) can be of these types (a short sketch showing how to inspect gathered candidates follows the list):
- `active`
- : The transport will try to open an outbound connection but won't receive incoming connection requests. This is the most common type, and the only one that most user agents will gather.
- `passive`
- : The transport will receive incoming connection attempts but won't attempt a connection itself.
- `so`
- : The transport will try to simultaneously open a connection with its peer.
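For example, if you're curious which candidates your browser gathers, a sketch like the following logs each local candidate's protocol and type as it's found (`pc` being your {{domxref("RTCPeerConnection")}}):

```js
// A sketch: log the protocol and type of each local candidate gathered.
pc.addEventListener("icecandidate", ({ candidate }) => {
  if (candidate && candidate.candidate !== "") {
    console.log(`${candidate.protocol} ${candidate.type}: ${candidate.address}`);
  }
});
```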
### Choosing a candidate pair
The ICE layer selects one of the two peers to serve as the **controlling agent**. This is the ICE agent which will make the final decision as to which candidate pair to use for the connection. The other peer is called the **controlled agent**. You can identify which one your end of the connection is by examining the value of {{domxref("RTCIceTransport.role", "RTCIceCandidate.transport.role")}}, although in general it doesn't matter which is which.
The controlling agent not only takes responsibility for making the final decision as to which candidate pair to use, but also for signaling that selection to the controlled agent by using STUN and an updated offer, if necessary. The controlled agent just waits to be told which candidate pair to use.
It's important to keep in mind that a single ICE session may result in the controlling agent choosing more than one candidate pair. Each time it does so and shares that information with the controlled agent, the two peers reconfigure their connection to use the new configuration described by the new candidate pair.
Once the ICE session is complete, the configuration that's currently in effect is the final one, unless an ICE reset occurs.
At the end of each generation of candidates, an end-of-candidates notification is sent in the form of an {{domxref("RTCIceCandidate")}} whose {{domxref("RTCIceCandidate.candidate", "candidate")}} property is an empty string. This candidate should still be added to the connection using the {{domxref("RTCPeerConnection.addIceCandidate", "addIceCandidate()")}} method, as usual, in order to deliver that notification to the remote peer.
When there are no more candidates at all to be expected during the current negotiation exchange, an end-of-candidates notification is sent by delivering an {{domxref("RTCIceCandidate")}} whose {{domxref("RTCIceCandidate.candidate", "candidate")}} property is `null`. This message does _not_ need to be sent to the remote peer. It's a legacy notification of a state which can be detected instead by watching for the {{domxref("RTCPeerConnection.iceGatheringState", "iceGatheringState")}} to change to `complete`, via the {{domxref("RTCPeerConnection.icegatheringstatechange_event", "icegatheringstatechange")}} event.
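Putting both notifications together, an `icecandidate` handler might look like this sketch, with `signaling` again standing in for your own channel:

```js
// A sketch distinguishing the two end-of-candidates notifications.
pc.onicecandidate = ({ candidate }) => {
  if (candidate === null) {
    // Legacy "no more candidates at all" signal; nothing to transmit.
    return;
  }
  // An empty candidate string marks the end of a generation of candidates
  // and should be relayed to the remote peer like any other candidate.
  signaling.send({ candidate });
};
```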
## When things go wrong
During negotiation, there will be times when things just don't work out. For example, when renegotiating a connection—say, to adapt to changing hardware or network configurations—negotiation could reach a dead end, or some form of error might occur that prevents negotiation entirely. There may be permissions issues or other problems as well, for that matter.
### ICE rollbacks
When renegotiating a connection that's already active and a situation arises in which the negotiation fails, you don't really want to kill the already-running call. After all, you were most likely just trying to upgrade or downgrade the connection, or to otherwise make adaptations to an ongoing session. Aborting the call would be an excessive reaction in that situation.
Instead, you can initiate an **ICE rollback**. A rollback restores the SDP offer (and the connection configuration by extension) to the configuration it had the last time the connection's {{domxref("RTCPeerConnection.signalingState", "signalingState")}} was `stable`.
To programmatically initiate a rollback, send a description whose {{domxref("RTCSessionDescription.type", "type")}} is `rollback`. Any other properties in the description object are ignored.
In addition, the ICE agent will automatically initiate a rollback when a peer that had previously created an offer receives an offer from the remote peer. In other words, if the local peer is in the state `have-local-offer`, indicating that the local peer had previously _sent_ an offer, calling `setRemoteDescription()` with a _received_ offer triggers rollback so that the negotiation switches from the remote peer being the caller to the local peer being the caller.
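If you need to trigger a rollback yourself, it's a single call (a sketch):

```js
// A sketch: roll this end of the connection back to its last stable
// configuration; any other properties of the description are ignored.
await pc.setLocalDescription({ type: "rollback" });
```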
### ICE restarts
Learn about the [ICE restart](/en-US/docs/Web/API/WebRTC_API/Session_lifetime#ice_restart) process.
## The entire exchange in a complicated diagram
[](https://hacks.mozilla.org/2013/07/webrtc-and-the-ocean-of-acronyms/)
---
title: Using WebRTC data channels
slug: Web/API/WebRTC_API/Using_data_channels
page-type: guide
---
{{DefaultAPISidebar("WebRTC")}}
In this guide, we'll examine how to add a data channel to a peer connection, which can then be used to securely exchange arbitrary data; that is, any kind of data we wish, in any format we choose.
> **Note:** Since all WebRTC components are required to use encryption, any data transmitted on an `RTCDataChannel` is automatically secured using Datagram Transport Layer Security (**DTLS**). See [Security](#security) below for more information.
## Creating a data channel
The underlying data transport used by the {{domxref("RTCDataChannel")}} can be created in one of two ways:
- Let WebRTC create the transport and announce it to the remote peer for you (by causing it to receive a {{domxref("RTCPeerConnection.datachannel_event", "datachannel")}} event). This is the easy way, and works for a wide variety of use cases, but may not be flexible enough for your needs.
- Write your own code to negotiate the data transport and write your own code to signal to the other peer that it needs to connect to the new channel.
Let's look at each of these cases, starting with the first, which is the most common.
### Automatic negotiation
Often, you can allow the peer connection to handle negotiating the {{domxref("RTCDataChannel")}} connection for you. To do this, call
{{domxref("RTCPeerConnection.createDataChannel", "createDataChannel()")}} without specifying a value for the `negotiated` property, or specifying the property with a value of `false`. This will automatically trigger the `RTCPeerConnection` to handle the negotiations for you, causing the remote peer to create a data channel and linking the two together across the network.
The `RTCDataChannel` object is returned immediately by `createDataChannel()`; you can tell when the connection has been made successfully by watching for the {{domxref("RTCDataChannel.open_event", "open")}} event to be sent to the `RTCDataChannel`.
```js
let dataChannel = pc.createDataChannel("MyApp Channel");
dataChannel.addEventListener("open", (event) => {
beginTransmission(dataChannel);
});
```
### Manual negotiation
To manually negotiate the data channel connection, you need to first create a new {{domxref("RTCDataChannel")}} object using the {{domxref("RTCPeerConnection.createDataChannel", "createDataChannel()")}} method on the {{domxref("RTCPeerConnection")}}, specifying in the options a `negotiated` property set to `true`. This signals to the peer connection to not attempt to negotiate the channel on your behalf.
Then negotiate the connection out-of-band, using a web server or other means. This process should signal to the remote peer that it should create its own `RTCDataChannel` with the `negotiated` property also set to `true`, using the same {{domxref("RTCDataChannel.id", "id")}}. This will link the two objects across the `RTCPeerConnection`.
```js
let dataChannel = pc.createDataChannel("MyApp Channel", {
negotiated: true,
});
dataChannel.addEventListener("open", (event) => {
beginTransmission(dataChannel);
});
requestRemoteChannel(dataChannel.id);
```
In this code snippet, the channel is created with `negotiated` set to `true`, then a function called `requestRemoteChannel()` is used to trigger negotiation, to create a remote channel with the same ID as the local channel.
Doing this lets you create data channels with each peer using different properties, and to create channels declaratively by using the same value for `id`.
## Buffering
WebRTC data channels support buffering of outbound data. This is handled automatically. While there's no way to control the size of the buffer, you can learn how much data is currently buffered, and you can choose to be notified by an event when the buffer starts to run low on queued data. This makes it easy to write efficient routines that make sure there's always data ready to send without over-using memory or swamping the channel completely.
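For example, a send loop applying backpressure might look like this sketch; `getNextChunk()` is a hypothetical source of outgoing data, returning `null` when exhausted:

```js
// A sketch of backpressure: pause when too much is queued, resume on
// "bufferedamountlow". getNextChunk() is a hypothetical data source.
const channel = pc.createDataChannel("file-transfer");
channel.bufferedAmountLowThreshold = 65536; // fire event below 64 KiB queued

function sendChunks() {
  let chunk;
  while ((chunk = getNextChunk()) !== null) {
    channel.send(chunk);
    if (channel.bufferedAmount > 1_048_576) {
      // Over 1 MiB queued; wait for the buffer to drain before continuing.
      channel.addEventListener("bufferedamountlow", sendChunks, { once: true });
      return;
    }
  }
}
```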
## Understanding message size limits
For any data being transmitted over a network, there are size restrictions. At a fundamental level, the individual network packets can't be larger than a certain value (the exact number depends on the network and the transport layer being used). At the application level—that is, within the {{Glossary("user agent", "user agent's")}} implementation of WebRTC on which your code is running—the WebRTC implementation implements features to support messages that are larger than the maximum packet size on the network's transport layer.
This can complicate things, since you don't necessarily know what the size limits are for various user agents, and how they respond when a larger message is sent or received. Even when user agents share the same underlying library for handling Stream Control Transmission Protocol (SCTP) data, there can still be variations due to how the library is used. For example, both Firefox and Google Chrome use the [`usrsctp`](https://github.com/sctplab/usrsctp) library to implement SCTP, but there are still situations in which data transfer on an `RTCDataChannel` can fail due to differences in how they call the library and react to errors it returns.
When two users running Firefox are communicating on a data channel, the message size limit is much larger than when Firefox and Chrome are communicating because Firefox implements a now deprecated technique for sending large messages in multiple SCTP messages, which Chrome does not. Chrome will instead see a series of messages that it believes are complete, and will deliver them to the receiving `RTCDataChannel` as multiple messages.
Messages smaller than 16 KiB can be sent without concern, as all major user agents handle them the same way. Beyond that, things get more complicated.
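Until the situation improves, a conservative approach is to split larger payloads into 16 KiB chunks yourself. Here's a sketch; framing and reassembly on the receiving side are left to the application:

```js
// A sketch: split an ArrayBuffer into 16 KiB chunks for cross-browser
// safety; the receiver must reassemble them (framing not shown).
const CHUNK_SIZE = 16 * 1024;

function sendInChunks(channel, buffer) {
  for (let offset = 0; offset < buffer.byteLength; offset += CHUNK_SIZE) {
    channel.send(buffer.slice(offset, offset + CHUNK_SIZE));
  }
}
```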
### Concerns with large messages
Currently, it's not practical to use `RTCDataChannel` for messages larger than 64 KiB (16 KiB if you want to support cross-browser exchange of data). The problem arises from the fact that SCTP—the protocol used for sending and receiving data on an `RTCDataChannel`—was originally designed for use as a signaling protocol. It was expected that messages would be relatively small. Support for messages larger than the network layer's [MTU](https://en.wikipedia.org/wiki/Maximum_transmission_unit) was added almost as an afterthought, in case signaling messages needed to be larger than the MTU. This feature requires that each piece of the message have consecutive sequence numbers, so they have to be transmitted one after another, without any other data interleaved between them.
This eventually became a problem. Over time, various applications (including those implementing WebRTC) began to use SCTP to transmit larger and larger messages. Eventually it was realized that when the messages become too large, it's possible for the transmission of a large message to block all other data transfers on that data channel—including critical signaling messages.
This will become an issue when browsers properly support the current standard for supporting larger messages—the end-of-record (EOR) flag that indicates when a message is the last one in a series that should be treated as a single payload. This is implemented in Firefox 57, but is not yet implemented in Chrome (see [Chromium Bug 7774](https://bugs.chromium.org/p/webrtc/issues/detail?id=7774)). With EOR support in place, `RTCDataChannel` payloads can be much larger (officially up to 256 KiB, but Firefox's implementation caps them at a whopping 1 GiB). Even at 256 KiB, that's large enough to cause noticeable delays in handling urgent traffic. If you go even larger, the delays can become untenable unless you are certain of your operational conditions.
In order to resolve this issue, a new system of **stream schedulers** (usually referred to as the "SCTP ndata specification") has been designed to make it possible to interleave messages sent on different streams, including streams used to implement WebRTC data channels. This [proposal](https://datatracker.ietf.org/doc/html/draft-ietf-tsvwg-sctp-ndata) is still in IETF draft form, but once implemented, it will make it possible to send messages with essentially no size limitations, since the SCTP layer will automatically interleave the underlying sub-messages to ensure that every channel's data has the opportunity to get through.
Firefox support for ndata is in the process of being implemented; see [Firefox bug 1381145](https://bugzil.la/1381145) to track it becoming available for general use. The Chrome team is tracking their implementation of ndata support in [Chrome Bug 5696](https://bugs.chromium.org/p/webrtc/issues/detail?id=5696).
> **Note:** Much of the information in this section is based in part on the blog post [Demystifying WebRTC's Data Channel Message Size Limitations](https://lgrahl.de/articles/demystifying-webrtc-dc-size-limit.html), written by Lennart Grahl. He goes into a bit more detail there, but as browsers have been updated since then some of it may be out-of-date. In addition, as time goes by, it will become more so, especially once EOR and ndata support are fully integrated in the major browsers.
## Security
All data transferred using WebRTC is encrypted. In the case of `RTCDataChannel`, the encryption used is Datagram Transport Layer Security (DTLS), which is based on [Transport Layer Security](/en-US/docs/Web/Security/Transport_Layer_Security) (TLS). Since TLS is used to secure every HTTPS connection, any data you send on a data channel is as secure as any other data sent or received by the user's browser.
More fundamentally, since WebRTC is a peer-to-peer connection between two user agents, the data never passes through the web or application server. This reduces opportunities to have the data intercepted.
---
title: "Establishing a connection: The WebRTC perfect negotiation pattern"
slug: Web/API/WebRTC_API/Perfect_negotiation
page-type: guide
---
{{DefaultAPISidebar("WebRTC")}}
This article introduces WebRTC **perfect negotiation**, describing how it works and why it's the recommended way to negotiate a WebRTC connection between peers, and provides sample code to demonstrate the technique.
Because [WebRTC](/en-US/docs/Web/API/WebRTC_API) doesn't mandate a specific transport mechanism for signaling during the negotiation of a new peer connection, it's highly flexible. However, despite that flexibility in transport and communication of signaling messages, there's still a recommended design pattern you should follow when possible, known as perfect negotiation.
After the first deployments of WebRTC-capable browsers, it was realized that parts of the negotiation process were more complicated than they needed to be for typical use cases. This was due to a small number of issues with the API and some potential race conditions that needed to be prevented. These issues have since been addressed, letting us simplify our WebRTC negotiation significantly. The perfect negotiation pattern is an example of the ways in which negotiation has improved since the early days of WebRTC.
## Perfect negotiation concepts
Perfect negotiation makes it possible to seamlessly and completely separate the negotiation process from the rest of your application's logic. Negotiation is an inherently asymmetric operation: one side needs to serve as the "caller" while the other peer is the "callee." The perfect negotiation pattern smooths this difference away by separating that difference out into independent negotiation logic, so that your application doesn't need to care which end of the connection it is. As far as your application is concerned, it makes no difference whether you're calling out or receiving a call.
The best thing about perfect negotiation is that the same code is used for both the caller and the callee, so there's no repetition or otherwise added levels of negotiation code to write.
Perfect negotiation works by assigning each of the two peers a role to play in the negotiation process that's entirely separate from the WebRTC connection state:
- A **polite** peer, which uses ICE rollback to prevent collisions with incoming offers. A polite peer, essentially, is one which may send out offers, but then responds if an offer arrives from the other peer with "Okay, never mind, drop my offer and I'll consider yours instead."
- An **impolite** peer, which always ignores incoming offers that collide with its own offers. It never apologizes or gives up anything to the polite peer. Any time a collision occurs, the impolite peer wins.
This way, both peers know exactly what should happen if there are collisions between offers that have been sent. Responses to error conditions become far more predictable.
How you determine which peer is polite and which is impolite is generally up to you. It could be as simple as assigning the polite role to the first peer to connect to the signaling server, or you could do something more elaborate like having the peers exchange random numbers and assigning the polite role to the winner. However you make the determination, once these roles are assigned to the two peers, they can work together to manage signaling in a way that doesn't deadlock and doesn't require a lot of extra code.
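For example, the role could be delivered by the signaling server when a peer joins. The sketch below assumes an application-defined `onjoin` handler and message shape — neither is part of any standard:

```js
// A sketch: the signaling server assigns roles when peers join. The
// `onjoin` handler and its message shape are application-defined here.
let polite = false;

signaler.onjoin = ({ polite: assignedRole }) => {
  polite = assignedRole; // e.g. true for the first peer to connect
};
```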
An important thing to keep in mind is this: the roles of caller and callee can switch during perfect negotiation. If the polite peer is the caller and it sends an offer but there's a collision with the impolite peer, the polite peer drops its offer and instead replies to the offer it has received from the impolite peer. By doing so, the polite peer has switched from being the caller to the callee!
## Implementing perfect negotiation
Let's take a look at an example that implements the perfect negotiation pattern. The code assumes that there's a `SignalingChannel` class defined that is used to communicate with the signaling server. Your own code, of course, can use any signaling technique you like.
Note that this code is identical for both peers involved in the connection.
### Create the signaling and peer connections
First, the signaling channel needs to be opened and the {{domxref("RTCPeerConnection")}} needs to be created. The {{Glossary("STUN")}} server listed here is obviously not a real one; you'll need to replace `stun.mystunserver.tld` with the address of a real STUN server.
```js
const config = {
iceServers: [{ urls: "stun:stun.mystunserver.tld" }],
};
const signaler = new SignalingChannel();
const pc = new RTCPeerConnection(config);
```
The code in the next snippet also gets the {{HTMLElement("video")}} elements using the classes "selfview" and "remoteview"; these will contain, respectively, the local user's self-view and the view of the incoming stream from the remote peer.
### Connecting to a remote peer
```js
const constraints = { audio: true, video: true };
const selfVideo = document.querySelector("video.selfview");
const remoteVideo = document.querySelector("video.remoteview");
async function start() {
try {
const stream = await navigator.mediaDevices.getUserMedia(constraints);
for (const track of stream.getTracks()) {
pc.addTrack(track, stream);
}
selfVideo.srcObject = stream;
} catch (err) {
console.error(err);
}
}
```
The `start()` function shown above can be called by either of the two endpoints that want to talk to one another. It doesn't matter who does it first; the negotiation will just work.
This isn't appreciably different from older WebRTC connection establishment code. The user's camera and microphone are obtained by calling {{domxref("MediaDevices.getUserMedia", "getUserMedia()")}}. The resulting media tracks are then added to the {{domxref("RTCPeerConnection")}} by passing them into {{domxref("RTCPeerConnection.addTrack", "addTrack()")}}. Then, finally, the media source for the self-view {{HTMLElement("video")}} element indicated by the `selfVideo` constant is set to the camera and microphone stream, allowing the local user to see what the other peer sees.
### Handling incoming tracks
We next need to set up a handler for {{domxref("RTCPeerConnection.track_event", "track")}} events to handle inbound video and audio tracks that have been negotiated to be received by this peer connection. To do this, we implement the {{domxref("RTCPeerConnection")}}'s {{domxref("RTCPeerConnection.track_event", "ontrack")}} event handler.
```js
pc.ontrack = ({ track, streams }) => {
track.onunmute = () => {
if (remoteVideo.srcObject) {
return;
}
remoteVideo.srcObject = streams[0];
};
};
```
When the `track` event occurs, this handler executes. Using [destructuring](/en-US/docs/Web/JavaScript/Reference/Operators/Destructuring_assignment), the {{domxref("RTCTrackEvent")}}'s {{domxref("RTCTrackEvent.track", "track")}} and {{domxref("RTCTrackEvent.streams", "streams")}} properties are extracted. The former is either the video track or the audio track being received. The latter is an array of {{domxref("MediaStream")}} objects, each representing a stream containing this track (a track may in rare cases belong to multiple streams at once). In our case, this will always contain one stream, at index 0, because we passed one stream into `addTrack()` earlier.
We add an unmute event handler to the track, because the track will become unmuted once it starts receiving packets. We put the remainder of our reception code in there.
If we already have video coming in from the remote peer (which we can see if the remote view's `<video>` element's {{domxref("HTMLMediaElement.srcObject", "srcObject")}} property already has a value), we do nothing. Otherwise, we set `srcObject` to the stream at index 0 in the `streams` array.
### The perfect negotiation logic
Now we get into the true perfect negotiation logic, which functions entirely independently from the rest of the application.
#### Handling the negotiationneeded event
First, we implement the {{domxref("RTCPeerConnection")}} event handler {{domxref("RTCPeerConnection.negotiationneeded_event", "onnegotiationneeded")}} to get a local description and send it using the signaling channel to the remote peer.
```js
let makingOffer = false;
pc.onnegotiationneeded = async () => {
try {
makingOffer = true;
await pc.setLocalDescription();
signaler.send({ description: pc.localDescription });
} catch (err) {
console.error(err);
} finally {
makingOffer = false;
}
};
```
Note that `setLocalDescription()` without arguments automatically creates and sets the appropriate description based on the current {{domxref("RTCPeerConnection.signalingState", "signalingState")}}. The set description is either an answer to the most recent offer from the remote peer _or_ a freshly-created offer if there's no negotiation underway. Here, it will always be an `offer`, because the `negotiationneeded` event is only fired in the `stable` state.
We set a Boolean variable, `makingOffer` to `true` to mark that we're preparing an offer. To avoid races, we'll use this value later instead of the signaling state to determine whether or not an offer is being processed because the value of {{domxref("RTCPeerConnection.signalingState", "signalingState")}} changes asynchronously, introducing a glare opportunity.
Once the offer has been created, set and sent (or an error occurs), `makingOffer` gets set back to `false`.
#### Handling incoming ICE candidates
Next, we need to handle the `RTCPeerConnection` event {{domxref("RTCPeerConnection.icecandidate_event", "icecandidate")}}, which is how the local ICE layer passes candidates to us for delivery to the remote peer over the signaling channel.
```js
pc.onicecandidate = ({ candidate }) => signaler.send({ candidate });
```
This takes the `candidate` member of this ICE event and passes it through to the signaling channel's `send()` method to be sent over the signaling server to the remote peer.
#### Handling incoming messages on the signaling channel
The last piece of the puzzle is code to handle incoming messages from the signaling server. That's implemented here as an `onmessage` event handler on the signaling channel object. This method is invoked each time a message arrives from the signaling server.
```js
let ignoreOffer = false;
signaler.onmessage = async ({ data: { description, candidate } }) => {
try {
if (description) {
const offerCollision =
description.type === "offer" &&
(makingOffer || pc.signalingState !== "stable");
ignoreOffer = !polite && offerCollision;
if (ignoreOffer) {
return;
}
await pc.setRemoteDescription(description);
if (description.type === "offer") {
await pc.setLocalDescription();
signaler.send({ description: pc.localDescription });
}
} else if (candidate) {
try {
await pc.addIceCandidate(candidate);
} catch (err) {
if (!ignoreOffer) {
throw err;
}
}
}
} catch (err) {
console.error(err);
}
};
```
Upon receiving an incoming message from the `SignalingChannel` through its `onmessage` event handler, the received JSON object is destructured to obtain the `description` or `candidate` found within. If the incoming message has a `description`, it's either an offer or an answer sent by the other peer.
If, on the other hand, the message has a `candidate`, it's an ICE candidate received from the remote peer as part of [trickle ICE](/en-US/docs/Web/API/RTCPeerConnection/canTrickleIceCandidates). The candidate is destined to be delivered to the local ICE layer by passing it into {{domxref("RTCPeerConnection.addIceCandidate", "addIceCandidate()")}}.
##### On receiving a description
If we received a `description`, we prepare to respond to the incoming offer or answer. First, we check to make sure we're in a state in which we can accept an offer. If the connection's signaling state isn't `stable` or if our end of the connection has started the process of making its own offer, then we need to look out for offer collision.
If we're the impolite peer, and we're receiving a colliding offer, we return without setting the description, and instead set `ignoreOffer` to `true` to ensure we also ignore all candidates the other side may be sending us on the signaling channel belonging to this offer. Doing so avoids error noise since we never informed our side about this offer.
If we're the polite peer, and we're receiving a colliding offer, we don't need to do anything special, because our existing offer will automatically be rolled back in the next step.
Having ensured that we want to accept the offer, we set the remote description to the incoming offer by calling {{domxref("RTCPeerConnection.setRemoteDescription", "setRemoteDescription()")}}. This lets WebRTC know what the proposed configuration of the other peer is. If we're the polite peer, we will drop our offer and accept the new one.
If the newly-set remote description is an offer, we ask WebRTC to select an appropriate local configuration by calling the {{domxref("RTCPeerConnection")}} method {{domxref("RTCPeerConnection.setLocalDescription", "setLocalDescription()")}} without parameters. This causes `setLocalDescription()` to automatically generate an appropriate answer in response to the received offer. Then we send the answer through the signaling channel back to the first peer.
##### On receiving an ICE candidate
On the other hand, if the received message contains an ICE candidate, we deliver it to the local {{Glossary("ICE")}} layer by calling the {{domxref("RTCPeerConnection")}} method {{domxref("RTCPeerConnection.addIceCandidate", "addIceCandidate()")}}. If an error occurs and we've ignored the most recent offer, we also ignore any error that may occur when trying to add the candidate.
## Making negotiation perfect
If you're curious what makes perfect negotiation _so perfect_, this section is for you. Here, we'll look at each change made to the WebRTC API and to best practice recommendations to make perfect negotiation possible.
### Glare-free setLocalDescription()
In the past, the {{domxref("RTCPeerConnection.negotiationneeded_event", "negotiationneeded")}} event was easily handled in a way that was susceptible to glare—that is, it was prone to collisions, where both peers could wind up attempting to make an offer at the same time, leading to one or the other peers getting an error and aborting the connection attempt.
#### The old way
Consider this {{domxref("RTCPeerConnection.negotiationneeded_event", "onnegotiationneeded")}} event handler:
```js example-bad
pc.onnegotiationneeded = async () => {
try {
await pc.setLocalDescription(await pc.createOffer());
signaler.send({ description: pc.localDescription });
} catch (err) {
console.error(err);
}
};
```
Because the {{domxref("RTCPeerConnection.createOffer", "createOffer()")}} method is asynchronous and takes some time to complete, there's time in which the remote peer might attempt to send an offer of its own, causing us to leave the `stable` state and enter the `have-remote-offer` state, which means we are now waiting for a response to the offer. But once the remote peer receives the offer we just sent, it too is waiting for a response. This leaves both peers in a state in which the connection attempt cannot be completed.
#### Perfect negotiation with the updated API
As shown in the section [Implementing perfect negotiation](#implementing_perfect_negotiation), we can eliminate this problem by introducing a variable (here called `makingOffer`) which we use to indicate that we are in the process of sending an offer, and making use of the updated `setLocalDescription()` method:
```js example-good
let makingOffer = false;
pc.onnegotiationneeded = async () => {
try {
makingOffer = true;
await pc.setLocalDescription();
signaler.send({ description: pc.localDescription });
} catch (err) {
console.error(err);
} finally {
makingOffer = false;
}
};
```
We set `makingOffer` immediately before calling `setLocalDescription()` in order to lock against interfering with sending this offer, and we don't clear it back to `false` until the offer has been sent to the signaling server (or an error has occurred, preventing the offer from being made). This way, we avoid the risk of offers colliding.
### Automatic rollback in setRemoteDescription()
A key component to perfect negotiation is the concept of the polite peer, which always rolls itself back if it receives an offer while itself waiting for an answer to an offer. Previously, triggering rollback involved manually checking for rollback conditions and triggering the rollback manually, by setting the local description to one with the type `rollback`, like this:
```js
await pc.setLocalDescription({ type: "rollback" });
```
Doing so returns the local peer to the `stable` {{domxref("RTCPeerConnection.signalingState", "signalingState")}} from whichever state it had previously been in. Since a peer can only accept offers when in the `stable` state, the peer has thus rescinded its offer and is ready to receive the offer from the remote (impolite) peer. However, as we'll see in a moment, there are problems with this approach.
#### Perfect negotiation with the old API
Using the previous API to implement incoming negotiation messages during perfect negotiation would look something like this:
```js example-bad
signaler.onmessage = async ({ data: { description, candidate } }) => {
try {
if (description) {
if (description.type === "offer" && pc.signalingState !== "stable") {
if (!polite) {
return;
}
await Promise.all([
pc.setLocalDescription({ type: "rollback" }),
pc.setRemoteDescription(description),
]);
} else {
await pc.setRemoteDescription(description);
}
if (description.type === "offer") {
await pc.setLocalDescription(await pc.createAnswer());
signaler.send({ description: pc.localDescription });
}
} else if (candidate) {
try {
await pc.addIceCandidate(candidate);
} catch (err) {
if (!ignoreOffer) {
throw err;
}
}
}
} catch (err) {
console.error(err);
}
};
```
Since rollback works by postponing changes until the next negotiation (which will begin immediately after the current one is finished), the polite peer needs to know when it needs to throw away a received offer if it's currently waiting for a reply to an offer it's already sent.
The code checks to see if the message is an offer, and if so, if the local signaling state isn't `stable`. If it's not stable, _and_ the local peer is the polite one, we need to trigger rollback so we can replace the outgoing offer with the new incoming one. And these must both be completed before we can proceed with handling the received offer.
Since there isn't a single "roll back and use this offer instead" operation, performing this change on the polite peer requires two steps, executed in the context of [`Promise.all()`](/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise/all), which is used to ensure that both statements execute completely before continuing to handle the received offer. The first statement triggers rollback and the second sets the remote description to the received one, thus completing the process of replacing the previously _sent_ offer with the newly _received_ offer. The impolite peer has now become the callee instead of the caller.
All other descriptions received from the impolite peer are processed as normal, by passing them into {{domxref("RTCPeerConnection.setRemoteDescription", "setRemoteDescription()")}}.
Finally, we process a received offer by calling `setLocalDescription()` to set our local description to the one returned by {{domxref("RTCPeerConnection.createAnswer", "createAnswer()")}}. Then that gets sent to the polite peer using the signaling channel.
If the incoming message is an ICE candidate rather than an SDP description, it's delivered to the ICE layer by passing it into the {{domxref("RTCPeerConnection")}} method {{domxref("RTCPeerConnection.addIceCandidate", "addIceCandidate()")}}. If an error occurs here and we didn't just discard an offer due to being the impolite peer during a collision, we [`throw`](/en-US/docs/Web/JavaScript/Reference/Statements/throw) the error so the caller can handle it. Otherwise, we drop the error, ignoring it, since it doesn't matter in this context.
#### Perfect negotiation with the updated API
The updated code takes advantage of the fact that you can now call {{domxref("RTCPeerConnection.setLocalDescription", "setLocalDescription()")}} with no parameters so it just does the right thing for you, as well as the fact that `setRemoteDescription()` automatically rolls back if necessary. This lets us get rid of the need to use a [`Promise`](/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise) to keep the timing in order, since the rollback becomes an essentially atomic part of the `setRemoteDescription()` call.
```js example-good
let ignoreOffer = false;
signaler.onmessage = async ({ data: { description, candidate } }) => {
try {
if (description) {
const offerCollision =
description.type === "offer" &&
(makingOffer || pc.signalingState !== "stable");
ignoreOffer = !polite && offerCollision;
if (ignoreOffer) {
return;
}
await pc.setRemoteDescription(description);
if (description.type === "offer") {
await pc.setLocalDescription();
signaler.send({ description: pc.localDescription });
}
} else if (candidate) {
try {
await pc.addIceCandidate(candidate);
} catch (err) {
if (!ignoreOffer) {
throw err;
}
}
}
} catch (err) {
console.error(err);
}
};
```
While the difference in code size is minor, and the complexity isn't reduced much either, the code is much, much more reliable. Let's take a dive into the code to see how it works now.
##### On receiving a description
In the revised code, if the received message is an SDP `description`, we check to see if it arrived while we're attempting to transmit an offer. If the received message is an `offer` _and_ the local peer is the impolite peer, _and_ a collision is occurring, we ignore the offer because we want to continue to try to use the offer that's already in the process of being sent. That's the impolite peer in action.
In any other case, we'll try instead to handle the incoming message. This begins by setting the remote description to the received `description` by passing it into {{domxref("RTCPeerConnection.setRemoteDescription", "setRemoteDescription()")}}. This works regardless of whether we're handling an offer or an answer since rollback will be performed automatically as needed.
At that point, if the received message is an `offer`, we use `setLocalDescription()` to create and set an appropriate local description, then we send it to the remote peer over the signaling server.
##### On receiving an ICE candidate
On the other hand, if the received message is an ICE candidate—indicated by the JSON object containing a `candidate` member—we deliver it to the local ICE layer by calling the {{domxref("RTCPeerConnection")}} method {{domxref("RTCPeerConnection.addIceCandidate", "addIceCandidate()")}}. Errors are, as before, ignored if we have just discarded an offer.
### Explicit restartIce() method added
The techniques previously used to trigger an [ICE restart](/en-US/docs/Web/API/WebRTC_API/Session_lifetime#ice_restart) while handling the event {{domxref("RTCPeerConnection.negotiationneeded_event", "negotiationneeded")}} have significant flaws. These flaws have made it difficult to safely and reliably trigger a restart during negotiation. The perfect negotiation improvements have fixed this by adding a new {{domxref("RTCPeerConnection.restartIce", "restartIce()")}} method to `RTCPeerConnection`.
#### The old way
In the past, if you encountered an ICE error and needed to restart negotiation, you might have done something like this:
```js example-bad
pc.onnegotiationneeded = async (options) => {
await pc.setLocalDescription(await pc.createOffer(options));
signaler.send({ description: pc.localDescription });
};
pc.oniceconnectionstatechange = () => {
if (pc.iceConnectionState === "failed") {
pc.onnegotiationneeded({ iceRestart: true });
}
};
```
This has a number of reliability issues and outright bugs (such as failing if the {{domxref("RTCPeerConnection.iceconnectionstatechange_event", "iceconnectionstatechange")}} event fires when the signaling state isn't `stable`), but there was no way you could actually request an ICE restart other than by creating and sending an offer with the `iceRestart` option set to `true`. Sending the restart request thus required directly invoking the `negotiationneeded` event's handler. Getting it right was tricky at best, and was so easy to get wrong that bugs were common.
#### Using restartIce()
Now, you can use `restartIce()` to do this much more cleanly:
```js example-good
let makingOffer = false;
pc.onnegotiationneeded = async () => {
try {
makingOffer = true;
await pc.setLocalDescription();
signaler.send({ description: pc.localDescription });
} catch (err) {
console.error(err);
} finally {
makingOffer = false;
}
};
pc.oniceconnectionstatechange = () => {
if (pc.iceConnectionState === "failed") {
pc.restartIce();
}
};
```
With this improved technique, instead of directly calling `onnegotiationneeded` with options to trigger ICE restart, the `failed` [ICE connection state](/en-US/docs/Web/API/RTCPeerConnection/iceConnectionState) calls {{domxref("RTCPeerConnection.restartIce", "restartIce()")}}. `restartIce()` tells the ICE layer to automatically add the `iceRestart` flag to the next ICE message sent. Problem solved!
### Rollback no longer supported in the pranswer state
The last of the API changes that stand out is that you can no longer roll back when in either of the `have-remote-pranswer` or the `have-local-pranswer` states. Fortunately, when using perfect negotiation there's no need to do this anyway, since the situations that cause this are caught and prevented before rolling these back ever becomes necessary.
Thus, attempting to trigger rollback while in one of the two `pranswer` states will now throw an `InvalidStateError`.
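For example, the following sketch (run in an async context, and assuming `pc` is currently in one of the two `pranswer` states) would now fail:

```js
try {
  // Assumes pc.signalingState is currently "have-local-pranswer"
  // or "have-remote-pranswer"
  await pc.setLocalDescription({ type: "rollback" });
} catch (err) {
  console.error(err.name); // "InvalidStateError"
}
```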
## See also
- [WebRTC API](/en-US/docs/Web/API/WebRTC_API)
- [Lifetime of a WebRTC session](/en-US/docs/Web/API/WebRTC_API/Session_lifetime)
| 0 |
data/mdn-content/files/en-us/web/api | data/mdn-content/files/en-us/web/api/resizeobserver/index.md | ---
title: ResizeObserver
slug: Web/API/ResizeObserver
page-type: web-api-interface
browser-compat: api.ResizeObserver
---
{{APIRef("Resize Observer API")}}
The **`ResizeObserver`** interface reports changes to the dimensions of an {{domxref('Element')}}'s content or border box, or the bounding box of an {{domxref('SVGElement')}}.
> **Note:** The content box is the box in which content can be placed, meaning the border box minus the padding and border width. The border box encompasses the content, padding, and border. See [The box model](/en-US/docs/Learn/CSS/Building_blocks/The_box_model) for further explanation.
## Constructor
- {{domxref("ResizeObserver.ResizeObserver", "ResizeObserver()")}}
- : Creates and returns a new `ResizeObserver` object.
## Instance properties
None.
## Instance methods
- {{domxref('ResizeObserver.disconnect()')}}
- : Unobserves all observed {{domxref('Element')}} targets of a particular observer.
- {{domxref('ResizeObserver.observe()')}}
- : Initiates the observing of a specified {{domxref('Element')}}.
- {{domxref('ResizeObserver.unobserve()')}}
- : Ends the observing of a specified {{domxref('Element')}}.
## Examples
In the [resize-observer-text.html](https://mdn.github.io/dom-examples/resize-observer/resize-observer-text.html) ([see source](https://github.com/mdn/dom-examples/blob/main/resize-observer/resize-observer-text.html)) example, we use the resize observer to change the {{cssxref("font-size")}} of a header and paragraph as a slider's value is changed, causing the containing `<div>` to change width. This shows that you can respond to changes in an element's size, even if they have nothing to do with the viewport.
We also provide a checkbox to turn the observer off and on. If it is turned off, the text will not change in response to the `<div>`'s width changing.
The JavaScript looks like so:
```js
const h1Elem = document.querySelector("h1");
const pElem = document.querySelector("p");
const divElem = document.querySelector("body > div");
const slider = document.querySelector('input[type="range"]');
const checkbox = document.querySelector('input[type="checkbox"]');
divElem.style.width = "600px";
slider.addEventListener("input", () => {
divElem.style.width = `${slider.value}px`;
});
const resizeObserver = new ResizeObserver((entries) => {
for (const entry of entries) {
if (entry.contentBoxSize) {
const contentBoxSize = entry.contentBoxSize[0];
h1Elem.style.fontSize = `${Math.max(
1.5,
contentBoxSize.inlineSize / 200,
)}rem`;
pElem.style.fontSize = `${Math.max(
1,
contentBoxSize.inlineSize / 600,
)}rem`;
} else {
h1Elem.style.fontSize = `${Math.max(
1.5,
entry.contentRect.width / 200,
)}rem`;
pElem.style.fontSize = `${Math.max(1, entry.contentRect.width / 600)}rem`;
}
}
console.log("Size changed");
});
resizeObserver.observe(divElem);
checkbox.addEventListener("change", () => {
if (checkbox.checked) {
resizeObserver.observe(divElem);
} else {
resizeObserver.unobserve(divElem);
}
});
```
## Observation Errors
Implementations following the specification invoke resize events before paint (that is, before the frame is presented to the user). If there was any resize event, style and layout are re-evaluated — which in turn may trigger more resize events. Infinite loops from cyclic dependencies are addressed by only processing elements deeper in the DOM during each iteration. Resize events that don't meet that condition are deferred to the next paint, and an error event is fired on the {{domxref('Window')}} object, with the well-defined message string:
**ResizeObserver loop completed with undelivered notifications.**
Note that this only prevents user-agent lockup, not the infinite loop itself. For example, the following code will cause the width of `divElem` to grow indefinitely, with the above error message in the console repeating every frame:
```js
const divElem = document.querySelector("body > div");
const resizeObserver = new ResizeObserver((entries) => {
for (const entry of entries) {
entry.target.style.width = entry.contentBoxSize[0].inlineSize + 10 + "px";
}
});

resizeObserver.observe(divElem);
window.addEventListener("error", function (e) {
console.error(e.message);
});
```
As long as the error event does not fire indefinitely, resize observer will settle and produce a stable, likely correct, layout. However, visitors may see a flash of broken layout, as a sequence of changes expected to happen in a single frame is instead happening over multiple frames.
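A sketch of one way to make such code settle is to grow toward a fixed target instead of unconditionally adding 10 pixels each time (the 600-pixel target here is an arbitrary assumption):

```js
const divElem = document.querySelector("body > div");

const resizeObserver = new ResizeObserver((entries) => {
  for (const entry of entries) {
    const current = entry.contentBoxSize[0].inlineSize;
    // Stop writing once the target size is reached, so the observer
    // settles instead of looping forever
    const target = Math.min(current + 10, 600);
    if (target !== current) {
      entry.target.style.width = `${target}px`;
    }
  }
});

resizeObserver.observe(divElem);
```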
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [The box model](/en-US/docs/Learn/CSS/Building_blocks/The_box_model)
- {{domxref('PerformanceObserver')}}
- {{domxref('IntersectionObserver')}} (part of the [Intersection Observer API](/en-US/docs/Web/API/Intersection_Observer_API))
- Upcoming [container queries](/en-US/docs/Web/CSS/CSS_containment/Container_queries) may be a viable alternative for implementing responsive design.
| 0 |
data/mdn-content/files/en-us/web/api/resizeobserver | data/mdn-content/files/en-us/web/api/resizeobserver/observe/index.md | ---
title: "ResizeObserver: observe() method"
short-title: observe()
slug: Web/API/ResizeObserver/observe
page-type: web-api-instance-method
browser-compat: api.ResizeObserver.observe
---
{{APIRef("Resize Observer API")}}
The **`observe()`** method of the
{{domxref("ResizeObserver")}} interface starts observing the specified
{{domxref('Element')}} or {{domxref('SVGElement')}}.
## Syntax
```js-nolint
observe(target)
observe(target, options)
```
### Parameters
- `target`
- : A reference to an {{domxref('Element')}} or {{domxref('SVGElement')}} to be
observed.
- `options` {{optional_inline}}
- : An options object allowing you to set options for the observation. Currently this
only has one possible option that can be set:
- `box`
- : Sets which box model the observer will observe changes to. Possible values are:
- `content-box` (the default)
- : Size of the content area as defined in CSS.
- `border-box`
- : Size of the box border area as defined in CSS.
- `device-pixel-content-box`
- : The size of the content area as defined in CSS, in device pixels, before applying any CSS transforms on the element or its ancestors.
### Return value
None ({{jsxref("undefined")}}).
### Exceptions
None.
## Examples
The following snippet is taken from the [resize-observer-text.html](https://mdn.github.io/dom-examples/resize-observer/resize-observer-text.html)
([see source](https://github.com/mdn/dom-examples/blob/main/resize-observer/resize-observer-text.html)) example:
```js
const resizeObserver = new ResizeObserver((entries) => {
for (const entry of entries) {
if (entry.contentBoxSize) {
// Checking for chrome as using a non-standard array
if (entry.contentBoxSize[0]) {
h1Elem.style.fontSize = `${Math.max(
1.5,
entry.contentBoxSize[0].inlineSize / 200,
)}rem`;
pElem.style.fontSize = `${Math.max(
1,
entry.contentBoxSize[0].inlineSize / 600,
)}rem`;
} else {
h1Elem.style.fontSize = `${Math.max(
1.5,
entry.contentBoxSize.inlineSize / 200,
)}rem`;
pElem.style.fontSize = `${Math.max(
1,
entry.contentBoxSize.inlineSize / 600,
)}rem`;
}
} else {
h1Elem.style.fontSize = `${Math.max(
1.5,
entry.contentRect.width / 200,
)}rem`;
pElem.style.fontSize = `${Math.max(1, entry.contentRect.width / 600)}rem`;
}
}
console.log("Size changed");
});
resizeObserver.observe(divElem);
```
An `observe()` call with an options object would look like so:
```js
resizeObserver.observe(divElem, { box: "border-box" });
```
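A common use of the `device-pixel-content-box` option is keeping a canvas's pixel buffer in sync with the physical pixels it covers. The following is a sketch, assuming a `<canvas>` element is already in the document:

```js
const canvas = document.querySelector("canvas");

const canvasObserver = new ResizeObserver((entries) => {
  for (const entry of entries) {
    // devicePixelContentBoxSize reports the content box in device pixels
    const size = entry.devicePixelContentBoxSize[0];
    canvas.width = size.inlineSize;
    canvas.height = size.blockSize;
  }
});

canvasObserver.observe(canvas, { box: "device-pixel-content-box" });
```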
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api/resizeobserver | data/mdn-content/files/en-us/web/api/resizeobserver/resizeobserver/index.md | ---
title: "ResizeObserver: ResizeObserver() constructor"
short-title: ResizeObserver()
slug: Web/API/ResizeObserver/ResizeObserver
page-type: web-api-constructor
browser-compat: api.ResizeObserver.ResizeObserver
---
{{APIRef("Resize Observer API")}}
The **`ResizeObserver`** constructor creates a
new {{domxref("ResizeObserver")}} object, which can be used to report changes to the
content or border box of an {{domxref('Element')}} or the bounding box of an
{{domxref('SVGElement')}}.
## Syntax
```js-nolint
new ResizeObserver(callback)
```
### Parameters
- `callback`
- : The function called whenever an observed resize occurs. The function is called with
two parameters:
- `entries`
- : An array of {{domxref('ResizeObserverEntry')}} objects that can be used to
access the new dimensions of the element after each change.
- `observer`
- : A reference to the `ResizeObserver` itself, so it will definitely be
accessible from inside the callback, should you need it. This could be used for
example to automatically unobserve the observer when a certain condition is
reached, but you can omit it if you don't need it.
The callback will generally follow a pattern along the lines of:
```js
function callback(entries, observer) {
for (const entry of entries) {
// Do something to each entry
// and possibly something to the observer itself
}
}
```
## Examples
The following snippet is taken from the [resize-observer-text.html](https://mdn.github.io/dom-examples/resize-observer/resize-observer-text.html)
([see source](https://github.com/mdn/dom-examples/blob/main/resize-observer/resize-observer-text.html)) example:
```js
const resizeObserver = new ResizeObserver((entries) => {
for (const entry of entries) {
if (entry.contentBoxSize) {
if (entry.contentBoxSize[0]) {
h1Elem.style.fontSize = `${Math.max(
1.5,
entry.contentBoxSize[0].inlineSize / 200,
)}rem`;
pElem.style.fontSize = `${Math.max(
1,
entry.contentBoxSize[0].inlineSize / 600,
)}rem`;
} else {
// legacy path
h1Elem.style.fontSize = `${Math.max(
1.5,
entry.contentBoxSize.inlineSize / 200,
)}rem`;
pElem.style.fontSize = `${Math.max(
1,
entry.contentBoxSize.inlineSize / 600,
)}rem`;
}
} else {
h1Elem.style.fontSize = `${Math.max(
1.5,
entry.contentRect.width / 200,
)}rem`;
pElem.style.fontSize = `${Math.max(1, entry.contentRect.width / 600)}rem`;
}
}
console.log("Size changed");
});
resizeObserver.observe(divElem);
```
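The `observer` argument can also be used from inside the callback, for example to stop observing once a condition is met. This is a sketch; it reuses `divElem` from the snippet above, and the 300-pixel threshold is an arbitrary assumption:

```js
const growthObserver = new ResizeObserver((entries, observer) => {
  for (const entry of entries) {
    if (entry.contentBoxSize[0].inlineSize >= 300) {
      // Stop watching this element once it reaches the target size
      observer.unobserve(entry.target);
    }
  }
});

growthObserver.observe(divElem);
```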
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api/resizeobserver | data/mdn-content/files/en-us/web/api/resizeobserver/disconnect/index.md | ---
title: "ResizeObserver: disconnect() method"
short-title: disconnect()
slug: Web/API/ResizeObserver/disconnect
page-type: web-api-instance-method
browser-compat: api.ResizeObserver.disconnect
---
{{APIRef("Resize Observer API")}}
The **`disconnect()`** method of the
{{domxref("ResizeObserver")}} interface unobserves all observed {{domxref('Element')}}
or {{domxref('SVGElement')}} targets.
## Syntax
```js-nolint
disconnect()
```
### Parameters
None.
### Return value
None ({{jsxref("undefined")}}).
### Exceptions
None.
## Examples
```js
btn.addEventListener("click", () => {
resizeObserver.disconnect();
});
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api/resizeobserver | data/mdn-content/files/en-us/web/api/resizeobserver/unobserve/index.md | ---
title: "ResizeObserver: unobserve() method"
short-title: unobserve()
slug: Web/API/ResizeObserver/unobserve
page-type: web-api-instance-method
browser-compat: api.ResizeObserver.unobserve
---
{{APIRef("Resize Observer API")}}
The **`unobserve()`** method of the
{{domxref("ResizeObserver")}} interface ends the observing of a specified
{{domxref('Element')}} or {{domxref('SVGElement')}}.
## Syntax
```js-nolint
unobserve(target)
```
### Parameters
- `target`
- : A reference to an {{domxref('Element')}} or {{domxref('SVGElement')}} to be unobserved.
### Return value
None ({{jsxref("undefined")}}).
### Exceptions
None.
## Examples
The following snippet is taken from the [resize-observer-text.html](https://mdn.github.io/dom-examples/resize-observer/resize-observer-text.html)
([see source](https://github.com/mdn/dom-examples/blob/main/resize-observer/resize-observer-text.html)) example:
```js
const resizeObserver = new ResizeObserver((entries) => {
for (const entry of entries) {
if (entry.contentBoxSize) {
// Checking for chrome as using a non-standard array
if (entry.contentBoxSize[0]) {
h1Elem.style.fontSize = `${Math.max(
1.5,
entry.contentBoxSize[0].inlineSize / 200,
)}rem`;
pElem.style.fontSize = `${Math.max(
1,
entry.contentBoxSize[0].inlineSize / 600,
)}rem`;
} else {
h1Elem.style.fontSize = `${Math.max(
1.5,
entry.contentBoxSize.inlineSize / 200,
)}rem`;
pElem.style.fontSize = `${Math.max(
1,
entry.contentBoxSize.inlineSize / 600,
)}rem`;
}
} else {
h1Elem.style.fontSize = `${Math.max(
1.5,
entry.contentRect.width / 200,
)}rem`;
pElem.style.fontSize = `${Math.max(1, entry.contentRect.width / 600)}rem`;
}
}
console.log("Size changed");
});
resizeObserver.observe(divElem);
checkbox.addEventListener("change", () => {
if (checkbox.checked) {
resizeObserver.observe(divElem);
} else {
resizeObserver.unobserve(divElem);
}
});
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api | data/mdn-content/files/en-us/web/api/svgcircleelement/index.md | ---
title: SVGCircleElement
slug: Web/API/SVGCircleElement
page-type: web-api-interface
browser-compat: api.SVGCircleElement
---
{{APIRef("SVG")}}
The **`SVGCircleElement`** interface is an interface for the {{SVGElement("circle")}} element.
{{InheritanceDiagram}}
## Instance properties
_This interface also inherits properties from its parent, {{domxref("SVGGeometryElement")}}._
- {{domxref("SVGCircleElement.cx")}} {{ReadOnlyInline}}
- : This property defines the x-coordinate of the center of the {{SVGElement("circle")}} element. It is denoted by the {{SVGAttr("cx")}} attribute of the element.
- {{domxref("SVGCircleElement.cy")}} {{ReadOnlyInline}}
- : This property defines the y-coordinate of the center of the `<circle>` element. It is denoted by the {{SVGAttr("cy")}} attribute of the element.
- {{domxref("SVGCircleElement.r")}} {{ReadOnlyInline}}
- : This property defines the radius of the `<circle>` element. It is denoted by the {{SVGAttr("r")}} attribute of the element.
## Instance methods
_This interface has no methods but inherits methods from its parent, {{domxref("SVGGeometryElement")}}._
## Examples
### Resizing a circle
In this example we draw a circle and randomly increase or decrease its radius when you click it.
#### HTML
```html
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 250 250"
width="250"
height="250">
<circle
cx="100"
cy="100"
r="50"
fill="gold"
id="circle"
onclick="clickCircle();" />
</svg>
```
#### JavaScript
```js
function clickCircle() {
const circle = document.getElementById("circle");
// Randomly determine if the circle radius will increase or decrease
const change = Math.random() > 0.5 ? 10 : -10;
// Clamp the circle radius to a minimum of 10 and a maximum of 250,
// so it won't disappear or get bigger than the viewport
const newValue = Math.min(Math.max(circle.r.baseVal.value + change, 10), 250);
circle.setAttribute("r", newValue);
}
```
{{EmbedLiveSample('Resizing a circle', '', '300')}}
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- {{SVGElement("circle")}} SVG element
| 0 |
data/mdn-content/files/en-us/web/api/svgcircleelement | data/mdn-content/files/en-us/web/api/svgcircleelement/r/index.md | ---
title: "SVGCircleElement: r property"
short-title: r
slug: Web/API/SVGCircleElement/r
page-type: web-api-instance-property
browser-compat: api.SVGCircleElement.r
---
{{APIRef("SVG")}}
The **`r`** read-only property of the {{domxref("SVGCircleElement")}} interface reflects the {{SVGAttr("r")}} attribute of a {{SVGElement("circle")}} element and by that defines the radius of the circle.
If unspecified, the effect is as if the value is set to `0`.
## Value
An {{domxref("SVGAnimatedLength")}} representing the radius of the circle.
## Examples
### SVG
```html
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 100 100"
width="200"
height="200">
  <circle cx="50" cy="50" r="50" fill="gold" id="circle" />
</svg>
```
### JavaScript
```js
const circle = document.getElementById("circle");
console.log(circle.r);
```
{{EmbedLiveSample("Examples", "200", "200")}}
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- {{domxref("SVGCircleElement.cx")}}
- {{domxref("SVGCircleElement.cy")}}
| 0 |
data/mdn-content/files/en-us/web/api/svgcircleelement | data/mdn-content/files/en-us/web/api/svgcircleelement/cy/index.md | ---
title: "SVGCircleElement: cy property"
short-title: cy
slug: Web/API/SVGCircleElement/cy
page-type: web-api-instance-property
browser-compat: api.SVGCircleElement.cy
---
{{APIRef("SVG")}}
The **`cy`** read-only property of the {{domxref("SVGCircleElement")}} interface reflects the {{SVGAttr("cy")}} attribute of a {{SVGElement("circle")}} element and by that defines the y-coordinate of the circle's center.
If unspecified, the effect is as if the value is set to `0`.
## Value
An {{domxref("SVGAnimatedLength")}} representing the y-coordinate of the circle's center.
## Examples
### SVG
```html
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 100 100"
width="200"
height="200">
  <circle cx="50" cy="50" r="50" fill="gold" id="circle" />
</svg>
```
### JavaScript
```js
const circle = document.getElementById("circle");
console.log(circle.cy);
```
{{EmbedLiveSample("Examples", "200", "200")}}
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- {{domxref("SVGCircleElement.cx")}}
- {{domxref("SVGCircleElement.r")}}
| 0 |
data/mdn-content/files/en-us/web/api/svgcircleelement | data/mdn-content/files/en-us/web/api/svgcircleelement/cx/index.md | ---
title: "SVGCircleElement: cx property"
short-title: cx
slug: Web/API/SVGCircleElement/cx
page-type: web-api-instance-property
browser-compat: api.SVGCircleElement.cx
---
{{APIRef("SVG")}}
The **`cx`** read-only property of the {{domxref("SVGCircleElement")}} interface reflects the {{SVGAttr("cx")}} attribute of a {{SVGElement("circle")}} element and by that defines the x-coordinate of the circle's center.
If unspecified, the effect is as if the value is set to `0`.
## Value
An {{domxref("SVGAnimatedLength")}} representing the x-coordinate of the circle's center.
## Examples
### SVG
```html
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 100 100"
width="200"
height="200">
<circle cx="50" cy="50" r="50" fill="gold" id="circle" />
</svg>
```
### JavaScript
```js
const circle = document.getElementById("circle");
console.log(circle.cx);
```
{{EmbedLiveSample("Examples", "200", "200")}}
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- {{domxref("SVGCircleElement.cy")}}
- {{domxref("SVGCircleElement.r")}}
| 0 |
data/mdn-content/files/en-us/web/api | data/mdn-content/files/en-us/web/api/textdecoder/index.md | ---
title: TextDecoder
slug: Web/API/TextDecoder
page-type: web-api-interface
browser-compat: api.TextDecoder
---
{{APIRef("Encoding API")}}
The **`TextDecoder`** interface represents a decoder for a specific text encoding, such as `UTF-8`, `ISO-8859-2`, `KOI8-R`, `GBK`, etc. A decoder takes a stream of bytes as input and emits a stream of code points.
{{AvailableInWorkers}}
## Constructor
- {{DOMxRef("TextDecoder.TextDecoder", "TextDecoder()")}}
- : Returns a newly constructed `TextDecoder` that will generate a code point stream with the decoding method specified in parameters.
## Instance properties
_The `TextDecoder` interface doesn't inherit any properties._
- {{DOMxRef("TextDecoder.encoding")}} {{ReadOnlyInline}}
- : A string containing the name of the decoder, which is a string describing the method the `TextDecoder` will use.
- {{DOMxRef("TextDecoder.fatal")}} {{ReadOnlyInline}}
- : A {{jsxref('Boolean')}} indicating whether the error mode is fatal.
- {{DOMxRef("TextDecoder.ignoreBOM")}} {{ReadOnlyInline}}
- : A {{jsxref('Boolean')}} indicating whether the [byte order mark](https://www.w3.org/International/questions/qa-byte-order-mark) is ignored.
## Instance methods
_The `TextDecoder` interface doesn't inherit any methods_.
- {{DOMxRef("TextDecoder.decode()")}}
- : Returns a string containing the text decoded with the method of the specific `TextDecoder` object.
## Examples
### Representing text with typed arrays
This example shows how to decode a Chinese/Japanese character 𠮷, as represented by five different typed arrays: {{jsxref("Uint8Array")}}, {{jsxref("Int8Array")}}, {{jsxref("Uint16Array")}}, {{jsxref("Int16Array")}}, and {{jsxref("Int32Array")}}.
```js
let utf8decoder = new TextDecoder(); // default 'utf-8' or 'utf8'
let u8arr = new Uint8Array([240, 160, 174, 183]);
let i8arr = new Int8Array([-16, -96, -82, -73]);
let u16arr = new Uint16Array([41200, 47022]);
let i16arr = new Int16Array([-24336, -18514]);
let i32arr = new Int32Array([-1213292304]);
console.log(utf8decoder.decode(u8arr));
console.log(utf8decoder.decode(i8arr));
console.log(utf8decoder.decode(u16arr));
console.log(utf8decoder.decode(i16arr));
console.log(utf8decoder.decode(i32arr));
```
### Handling non-UTF8 text
In this example, we decode the Russian text "Привет, мир!", which means "Hello, world." In our {{domxref("TextDecoder/TextDecoder", "TextDecoder()")}} constructor, we specify the Windows-1251 character encoding, which is appropriate for Cyrillic script.
```js
const win1251decoder = new TextDecoder("windows-1251");
const bytes = new Uint8Array([
207, 240, 232, 226, 229, 242, 44, 32, 236, 232, 240, 33,
]);
console.log(win1251decoder.decode(bytes)); // Привет, мир!
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- The {{DOMxRef("TextEncoder")}} interface describing the inverse operation.
- A [shim](https://github.com/inexorabletash/text-encoding) allowing to use this interface in browsers that do not support it.
- [Node.js supports global export from v11.0.0](https://nodejs.org/api/util.html#util_class_util_textdecoder)
| 0 |
data/mdn-content/files/en-us/web/api/textdecoder | data/mdn-content/files/en-us/web/api/textdecoder/fatal/index.md | ---
title: "TextDecoder: fatal property"
short-title: fatal
slug: Web/API/TextDecoder/fatal
page-type: web-api-instance-property
browser-compat: api.TextDecoder.fatal
---
{{APIRef("Encoding API")}}
The **`fatal`** read-only property of the {{domxref("TextDecoder")}} interface is a {{jsxref('Boolean')}} indicating whether the error mode is fatal.
If the property is `true`, then a decoder will throw a {{jsxref("TypeError")}} if it encounters malformed data while decoding.
If `false`, the decoder will substitute the invalid data with the replacement character `U+FFFD` (�).
The value of the property is set in the [`TextDecoder()` constructor](/en-US/docs/Web/API/TextDecoder/TextDecoder).
## Value
A boolean which will be `true` if the error mode is set to `fatal`.
Otherwise, it will be `false`, indicating that the error mode is `replacement`.
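## Examples

The difference between the two error modes can be seen by decoding a byte that is never valid in UTF-8 (this standalone sketch uses the byte `0xFF`):

```js
// In fatal mode, malformed input throws a TypeError
const fatalDecoder = new TextDecoder("utf-8", { fatal: true });
try {
  fatalDecoder.decode(new Uint8Array([0xff]));
} catch (e) {
  console.log(e instanceof TypeError); // true
}

// In the default replacement mode, malformed input becomes U+FFFD
const replacingDecoder = new TextDecoder("utf-8");
console.log(replacingDecoder.decode(new Uint8Array([0xff]))); // "�"
```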
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api/textdecoder | data/mdn-content/files/en-us/web/api/textdecoder/encoding/index.md | ---
title: "TextDecoder: encoding property"
short-title: encoding
slug: Web/API/TextDecoder/encoding
page-type: web-api-instance-property
browser-compat: api.TextDecoder.encoding
---
{{APIRef("Encoding API")}}
The **`TextDecoder.encoding`** read-only property returns a string containing the name of the decoding algorithm used by the specific decoder object.
The encoding is set by the [constructor](/en-US/docs/Web/API/TextDecoder/TextDecoder) `label` parameter, and defaults to `utf-8`.
## Value
A lower-cased ASCII string, which can be one of the following values:
- The recommended encoding for the Web: `'utf-8'`.
- The legacy single-byte encodings:
['ibm866'](https://en.wikipedia.org/wiki/Code_page_866),
['iso-8859-2'](https://en.wikipedia.org/wiki/ISO/IEC_8859-2),
['iso-8859-3'](https://en.wikipedia.org/wiki/ISO/IEC_8859-3),
['iso-8859-4'](https://en.wikipedia.org/wiki/ISO/IEC_8859-4),
['iso-8859-5'](https://en.wikipedia.org/wiki/ISO/IEC_8859-5),
['iso-8859-6'](https://en.wikipedia.org/wiki/ISO/IEC_8859-6),
['iso-8859-7'](https://en.wikipedia.org/wiki/ISO/IEC_8859-7),
  ['iso-8859-8'](https://en.wikipedia.org/wiki/ISO/IEC_8859-8),
['iso-8859-8i'](https://en.wikipedia.org/wiki/ISO-8859-8-I),
['iso-8859-10'](https://en.wikipedia.org/wiki/ISO/IEC_8859-10),
['iso-8859-13'](https://en.wikipedia.org/wiki/ISO/IEC_8859-13),
['iso-8859-14'](https://en.wikipedia.org/wiki/ISO/IEC_8859-14),
['iso-8859-15'](https://en.wikipedia.org/wiki/ISO/IEC_8859-15),
['iso-8859-16'](https://en.wikipedia.org/wiki/ISO/IEC_8859-16),
['koi8-r'](https://en.wikipedia.org/wiki/KOI8-R),
['koi8-u'](https://en.wikipedia.org/wiki/KOI8-U),
['macintosh'](https://en.wikipedia.org/wiki/Mac_OS_Roman),
['windows-874'](https://en.wikipedia.org/wiki/Windows-874),
['windows-1250'](https://en.wikipedia.org/wiki/Windows-1250),
['windows-1251'](https://en.wikipedia.org/wiki/Windows-1251),
['windows-1252'](https://en.wikipedia.org/wiki/Windows-1252),
['windows-1253'](https://en.wikipedia.org/wiki/Windows-1253),
['windows-1254'](https://en.wikipedia.org/wiki/Windows-1254),
['windows-1255'](https://en.wikipedia.org/wiki/Windows-1255),
['windows-1256'](https://en.wikipedia.org/wiki/Windows-1256),
['windows-1257'](https://en.wikipedia.org/wiki/Windows-1257),
['windows-1258'](https://en.wikipedia.org/wiki/Windows-1258), or
['x-mac-cyrillic'](https://en.wikipedia.org/wiki/Macintosh_Cyrillic_encoding).
- The legacy multi-byte Chinese (simplified) encodings:
['gbk'](https://en.wikipedia.org/wiki/GBK),
['gb18030'](https://en.wikipedia.org/wiki/GB_18030).
- The legacy multi-byte Chinese (traditional) encoding:
['big5'](https://en.wikipedia.org/wiki/Big5).
- The legacy multi-byte Japanese encodings:
['euc-jp'](https://en.wikipedia.org/wiki/Extended_Unix_Code#EUC-JP),
['iso-2022-jp'](https://en.wikipedia.org/wiki/ISO/IEC_2022#ISO-2022-JP),
['shift-jis'](https://en.wikipedia.org/wiki/Shift_JIS).
- The legacy multi-byte Korean encodings:
['euc-kr'](https://en.wikipedia.org/wiki/Extended_Unix_Code#EUC-KR).
- The legacy miscellaneous encodings:
['utf-16be'](https://en.wikipedia.org/wiki/UTF-16#Byte_order_encoding_schemes),
['utf-16le'](https://en.wikipedia.org/wiki/UTF-16#Byte_order_encoding_schemes),
`'x-user-defined'`.
- A special encoding, `'replacement'`.
This decodes empty input into empty output and any other arbitrary-length input into a single replacement character.
It is used to prevent attacks that mismatch encodings between the client and server.
The following encodings also map to the replacement encoding: `ISO-2022-CN`, `ISO-2022-CN-ext`, ['iso-2022-kr'](https://en.wikipedia.org/wiki/ISO/IEC_2022#ISO-2022-KR), and ['hz-gb-2312'](<https://en.wikipedia.org/wiki/HZ_(character_encoding)>).
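## Examples

Labels are case-insensitive and are normalized to the canonical encoding name; for example, the legacy label `latin1` resolves to `windows-1252`:

```js
console.log(new TextDecoder().encoding); // "utf-8" (the default)
console.log(new TextDecoder("LATIN1").encoding); // "windows-1252"
```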
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- The {{DOMxRef("TextDecoder")}} interface it belongs to.
| 0 |
data/mdn-content/files/en-us/web/api/textdecoder | data/mdn-content/files/en-us/web/api/textdecoder/decode/index.md | ---
title: "TextDecoder: decode() method"
short-title: decode()
slug: Web/API/TextDecoder/decode
page-type: web-api-instance-method
browser-compat: api.TextDecoder.decode
---
{{APIRef("Encoding API")}}
The **`TextDecoder.decode()`** method returns a string containing text decoded from the buffer passed as a parameter.
The decoding method is defined in the current {{domxref("TextDecoder")}} object.
This includes the expected encoding of the data, and how decoding errors are handled.
## Syntax
```js-nolint
decode()
decode(buffer)
decode(buffer, options)
```
### Parameters
- `buffer` {{Optional_Inline}}
- : An [`ArrayBuffer`](/en-US/docs/Web/JavaScript/Reference/Global_Objects/ArrayBuffer), a {{jsxref("TypedArray")}}, or a {{jsxref("DataView")}} object containing the encoded text to decode.
- `options` {{Optional_Inline}}
- : An object with the property:
- `stream`
- : A boolean flag indicating whether additional data will follow in subsequent calls to `decode()`.
Set to `true` if processing the data in chunks, and `false` for the final chunk or if the data is not chunked.
It defaults to `false`.
### Exceptions
- {{jsxref("TypeError")}}
- : Thrown if there is a decoding error when the property {{DOMxRef("TextDecoder.fatal")}} is `true`.
### Return value
A string.
## Examples
This example encodes and decodes the euro symbol, €.
### HTML
```html
<p>Encoded value: <span id="encoded-value"></span></p>
<p>Decoded value: <span id="decoded-value"></span></p>
```
### JavaScript
```js
const encoder = new TextEncoder();
const array = encoder.encode("€"); // Uint8Array(3) [226, 130, 172]
document.getElementById("encoded-value").textContent = array;
const decoder = new TextDecoder();
const str = decoder.decode(array); // String "€"
document.getElementById("decoded-value").textContent = str;
```
### Result
{{EmbedLiveSample("Examples")}}
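### Decoding in chunks

When data arrives in chunks, a multi-byte sequence can be split across a chunk boundary. Passing `{ stream: true }` makes the decoder buffer the incomplete sequence until the next call. A small sketch, splitting the two UTF-8 bytes of `é` across two calls:

```js
const decoder = new TextDecoder();

let text = "";
text += decoder.decode(new Uint8Array([0xc3]), { stream: true }); // "" — incomplete sequence is buffered
text += decoder.decode(new Uint8Array([0xa9])); // final call flushes the buffer

console.log(text); // "é"
```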
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- The {{DOMxRef("TextDecoder")}} interface it belongs to.
| 0 |
data/mdn-content/files/en-us/web/api/textdecoder | data/mdn-content/files/en-us/web/api/textdecoder/textdecoder/index.md | ---
title: "TextDecoder: TextDecoder() constructor"
short-title: TextDecoder()
slug: Web/API/TextDecoder/TextDecoder
page-type: web-api-constructor
browser-compat: api.TextDecoder.TextDecoder
---
{{APIRef("Encoding API")}}
The **`TextDecoder()`** constructor returns a newly created {{DOMxRef("TextDecoder")}} object for the encoding specified in parameter.
## Syntax
```js-nolint
new TextDecoder()
new TextDecoder(label)
new TextDecoder(label, options)
```
### Parameters
- `label` {{optional_inline}}
- : A string, defaulting to `"utf-8"`.
This may be [any valid label](/en-US/docs/Web/API/Encoding_API/Encodings).
- `options` {{optional_inline}}
- : An object with the following properties:
- `fatal` {{optional_inline}}
- : A boolean value indicating if the {{DOMxRef("TextDecoder.decode()")}} method must throw a {{jsxref("TypeError")}} when decoding invalid data.
It defaults to `false`, which means that the decoder will substitute malformed data with a replacement character.
- `ignoreBOM` {{optional_inline}}
- : A boolean value indicating whether the [byte order mark](https://www.w3.org/International/questions/qa-byte-order-mark) will be included in the output or skipped over.
It defaults to `false`, which means that the byte order mark will be skipped over when decoding and will not be included in the decoded text.
### Exceptions
- {{jsxref("RangeError")}}
- : Thrown if the value of `label` is unknown, or is one of the values leading to a `'replacement'` decoding algorithm (`"iso-2022-cn"` or `"iso-2022-cn-ext"`).
## Examples
```js
const textDecoder1 = new TextDecoder("iso-8859-2");
const textDecoder2 = new TextDecoder();
const textDecoder3 = new TextDecoder("csiso2022kr", { fatal: true }); // Allows TypeError exception to be thrown.
const textDecoder4 = new TextDecoder("iso-2022-cn"); // Throws a RangeError exception.
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- The {{DOMxRef("TextDecoder")}} interface it belongs to.
| 0 |
data/mdn-content/files/en-us/web/api/textdecoder | data/mdn-content/files/en-us/web/api/textdecoder/ignorebom/index.md | ---
title: "TextDecoder: ignoreBOM property"
short-title: ignoreBOM
slug: Web/API/TextDecoder/ignoreBOM
page-type: web-api-instance-property
browser-compat: api.TextDecoder.ignoreBOM
---
{{APIRef("Encoding API")}}
The **`ignoreBOM`** read-only property of the {{domxref("TextDecoder")}} interface is a {{jsxref('Boolean')}} indicating whether the [byte order mark](https://www.w3.org/International/questions/qa-byte-order-mark) will be included in the output or skipped over.
## Value
`true` if the [byte order mark](https://www.w3.org/International/questions/qa-byte-order-mark) will be included in the decoded text; `false` if it will be skipped over when decoding and omitted.
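## Examples

This example shows the effect of the flag when decoding a UTF-8 buffer that starts with a byte order mark (`0xEF 0xBB 0xBF`):

```js
const bytes = new Uint8Array([0xef, 0xbb, 0xbf, 0x61, 0x62, 0x63]); // BOM + "abc"

// By default the byte order mark is skipped over
console.log(new TextDecoder().decode(bytes)); // "abc"

// With ignoreBOM: true it is included in the output as U+FEFF
console.log(new TextDecoder("utf-8", { ignoreBOM: true }).decode(bytes)); // "\uFEFFabc"
```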
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api | data/mdn-content/files/en-us/web/api/oes_vertex_array_object/index.md | ---
title: OES_vertex_array_object extension
short-title: OES_vertex_array_object
slug: Web/API/OES_vertex_array_object
page-type: webgl-extension
browser-compat: api.OES_vertex_array_object
---
{{APIRef("WebGL")}}
The **OES_vertex_array_object** extension is part of the [WebGL API](/en-US/docs/Web/API/WebGL_API) and provides vertex array objects (VAOs) which encapsulate vertex array states. These objects keep pointers to vertex data and provide names for different sets of vertex data.
WebGL extensions are available using the {{domxref("WebGLRenderingContext.getExtension()")}} method. For more information, see also [Using Extensions](/en-US/docs/Web/API/WebGL_API/Using_Extensions) in the [WebGL tutorial](/en-US/docs/Web/API/WebGL_API/Tutorial).
> **Note:** This extension is only available to {{domxref("WebGLRenderingContext", "WebGL1", "", 1)}} contexts. In {{domxref("WebGL2RenderingContext", "WebGL2", "", 1)}}, the functionality of this extension is available on the WebGL2 context by default and the constants and methods are available without the "`OES`" suffix.
## Constants
This extension exposes one new constant, which can be used in the {{domxref("WebGLRenderingContext.getParameter()", "gl.getParameter()")}} method:
- `ext.VERTEX_ARRAY_BINDING_OES`
- : Returns a {{domxref("WebGLVertexArrayObject")}} object when used in the {{domxref("WebGLRenderingContext.getParameter()", "gl.getParameter()")}} method as the `pname` parameter.
## Instance methods
This extension exposes four new methods.
- {{domxref("OES_vertex_array_object.createVertexArrayOES()", "ext.createVertexArrayOES()")}}
- : Creates a new {{domxref("WebGLVertexArrayObject")}}.
- {{domxref("OES_vertex_array_object.deleteVertexArrayOES()", "ext.deleteVertexArrayOES()")}}
- : Deletes a given {{domxref("WebGLVertexArrayObject")}}.
- {{domxref("OES_vertex_array_object.isVertexArrayOES()", "ext.isVertexArrayOES()")}}
- : Returns `true` if a given object is a {{domxref("WebGLVertexArrayObject")}}.
- {{domxref("OES_vertex_array_object.bindVertexArrayOES()", "ext.bindVertexArrayOES()")}}
- : Binds a given {{domxref("WebGLVertexArrayObject")}} to the buffer.
## Examples
```js
const oes_vao_ext = gl.getExtension("OES_vertex_array_object");
const vao = oes_vao_ext.createVertexArrayOES();
oes_vao_ext.bindVertexArrayOES(vao);
// …
// calls to bindBuffer or vertexAttribPointer
// which will be "recorded" in the VAO
// …
```
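Once state has been recorded in the VAO, rebinding it restores that state for drawing. A sketch of what rendering might then look like (the `vertexCount` variable is an assumption):

```js
// Later, at render time: rebind the recorded state, draw, then unbind
oes_vao_ext.bindVertexArrayOES(vao);
gl.drawArrays(gl.TRIANGLES, 0, vertexCount);
oes_vao_ext.bindVertexArrayOES(null);
```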
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- {{domxref("WebGLRenderingContext.getExtension()")}}
- {{domxref("WebGLRenderingContext.vertexAttribPointer()")}}
- WebGL2 equivalent methods:
- {{domxref("WebGL2RenderingContext.createVertexArray()")}}
- {{domxref("WebGL2RenderingContext.deleteVertexArray()")}}
- {{domxref("WebGL2RenderingContext.isVertexArray()")}}
- {{domxref("WebGL2RenderingContext.bindVertexArray()")}}
| 0 |
data/mdn-content/files/en-us/web/api/oes_vertex_array_object | data/mdn-content/files/en-us/web/api/oes_vertex_array_object/isvertexarrayoes/index.md | ---
title: "OES_vertex_array_object: isVertexArrayOES() method"
short-title: isVertexArrayOES()
slug: Web/API/OES_vertex_array_object/isVertexArrayOES
page-type: webgl-extension-method
browser-compat: api.OES_vertex_array_object.isVertexArrayOES
---
{{APIRef("WebGL")}}
The **`OES_vertex_array_object.isVertexArrayOES()`** method of
the [WebGL API](/en-US/docs/Web/API/WebGL_API) returns `true` if
the passed object is a {{domxref("WebGLVertexArrayObject")}} object.
## Syntax
```js-nolint
isVertexArrayOES(arrayObject)
```
### Parameters
- `arrayObject`
- : A {{domxref("WebGLVertexArrayObject")}} (VAO) object to test.
### Return value
A {{domxref("WebGL_API.Types")}} indicating whether the given object is a
{{domxref("WebGLVertexArrayObject")}} object (`true`) or not
(`false`).
## Examples
```js
const ext = gl.getExtension("OES_vertex_array_object");
const vao = ext.createVertexArrayOES();
ext.bindVertexArrayOES(vao);
// …
ext.isVertexArrayOES(vao);
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- {{domxref("WebGLRenderingContext.getExtension()")}}
- {{domxref("WebGLRenderingContext.vertexAttribPointer()")}}
- WebGL2 equivalent: {{domxref("WebGL2RenderingContext.isVertexArray()")}}
| 0 |
data/mdn-content/files/en-us/web/api/oes_vertex_array_object | data/mdn-content/files/en-us/web/api/oes_vertex_array_object/deletevertexarrayoes/index.md | ---
title: "OES_vertex_array_object: deleteVertexArrayOES() method"
short-title: deleteVertexArrayOES()
slug: Web/API/OES_vertex_array_object/deleteVertexArrayOES
page-type: webgl-extension-method
browser-compat: api.OES_vertex_array_object.deleteVertexArrayOES
---
{{APIRef("WebGL")}}
The **`OES_vertex_array_object.deleteVertexArrayOES()`** method
of the [WebGL API](/en-US/docs/Web/API/WebGL_API) deletes a given
{{domxref("WebGLVertexArrayObject")}} object.
## Syntax
```js-nolint
deleteVertexArrayOES(arrayObject)
```
### Parameters
- `arrayObject`
- : A {{domxref("WebGLVertexArrayObject")}} (VAO) object to delete.
### Return value
None ({{jsxref("undefined")}}).
## Examples
```js
const ext = gl.getExtension("OES_vertex_array_object");
const vao = ext.createVertexArrayOES();
ext.bindVertexArrayOES(vao);
// …
ext.deleteVertexArrayOES(vao);
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- {{domxref("WebGLRenderingContext.getExtension()")}}
- {{domxref("WebGLRenderingContext.vertexAttribPointer()")}}
- WebGL2 equivalent: {{domxref("WebGL2RenderingContext.deleteVertexArray()")}}
| 0 |
data/mdn-content/files/en-us/web/api/oes_vertex_array_object | data/mdn-content/files/en-us/web/api/oes_vertex_array_object/bindvertexarrayoes/index.md | ---
title: "OES_vertex_array_object: bindVertexArrayOES() method"
short-title: bindVertexArrayOES()
slug: Web/API/OES_vertex_array_object/bindVertexArrayOES
page-type: webgl-extension-method
browser-compat: api.OES_vertex_array_object.bindVertexArrayOES
---
{{APIRef("WebGL")}}
The **`OES_vertex_array_object.bindVertexArrayOES()`** method
of the [WebGL API](/en-US/docs/Web/API/WebGL_API) binds a
passed {{domxref("WebGLVertexArrayObject")}} object to the buffer.
## Syntax
```js-nolint
bindVertexArrayOES(arrayObject)
```
### Parameters
- `arrayObject`
- : A {{domxref("WebGLVertexArrayObject")}} (VAO) object to bind.
### Return value
None ({{jsxref("undefined")}}).
## Examples
```js
const ext = gl.getExtension("OES_vertex_array_object");
const vao = ext.createVertexArrayOES();
ext.bindVertexArrayOES(vao);
// …
// calls to bindBuffer or vertexAttribPointer
// which will be "recorded" in the VAO
// …
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- {{domxref("WebGLRenderingContext.getExtension()")}}
- {{domxref("WebGLRenderingContext.vertexAttribPointer()")}}
- WebGL2 equivalent: {{domxref("WebGL2RenderingContext.bindVertexArray()")}}
| 0 |
data/mdn-content/files/en-us/web/api/oes_vertex_array_object | data/mdn-content/files/en-us/web/api/oes_vertex_array_object/createvertexarrayoes/index.md | ---
title: "OES_vertex_array_object: createVertexArrayOES() method"
short-title: createVertexArrayOES()
slug: Web/API/OES_vertex_array_object/createVertexArrayOES
page-type: webgl-extension-method
browser-compat: api.OES_vertex_array_object.createVertexArrayOES
---
{{APIRef("WebGL")}}
The **`OES_vertex_array_object.createVertexArrayOES()`** method
of the [WebGL API](/en-US/docs/Web/API/WebGL_API) creates and initializes a
{{domxref("WebGLVertexArrayObject")}} object that represents a vertex array object (VAO)
pointing to vertex array data and which provides names for different sets of vertex
data.
## Syntax
```js-nolint
createVertexArrayOES()
```
### Parameters
None.
### Return value
A {{domxref("WebGLVertexArrayObject")}} representing a vertex array object (VAO) which
points to vertex array data.
## Examples
```js
const ext = gl.getExtension("OES_vertex_array_object");
const vao = ext.createVertexArrayOES();
ext.bindVertexArrayOES(vao);
// …
// calls to bindBuffer or vertexAttribPointer
// which will be "recorded" in the VAO
// …
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- {{domxref("WebGLRenderingContext.getExtension()")}}
- {{domxref("WebGLRenderingContext.vertexAttribPointer()")}}
- WebGL2 equivalent: {{domxref("WebGL2RenderingContext.createVertexArray()")}}
| 0 |
data/mdn-content/files/en-us/web/api | data/mdn-content/files/en-us/web/api/xrinputsourceschangeevent/index.md | ---
title: XRInputSourcesChangeEvent
slug: Web/API/XRInputSourcesChangeEvent
page-type: web-api-interface
browser-compat: api.XRInputSourcesChangeEvent
---
{{APIRef("WebXR Device API")}} {{SecureContext_Header}}
The WebXR Device API interface **`XRInputSourcesChangeEvent`** is used to represent the {{domxref("XRSession.inputsourceschange_event", "inputsourceschange")}} event sent to an {{domxref("XRSession")}} when the set of available WebXR input controllers changes.
{{InheritanceDiagram}}
## Constructor
- {{domxref("XRInputSourcesChangeEvent.XRInputSourcesChangeEvent", "XRInputSourcesChangeEvent()")}}
- : Creates and returns a new `XRInputSourcesChangeEvent` object. The specified type must be `inputsourceschange`, which is the only event that uses this interface.
## Instance properties
- {{domxref("XRInputSourcesChangeEvent.added", "added")}} {{ReadOnlyInline}}
- : An array of zero or more {{domxref("XRInputSource")}} objects, each representing an input device which has been newly connected or enabled for use.
- {{domxref("XRInputSourcesChangeEvent.removed", "removed")}} {{ReadOnlyInline}}
- : An array of zero or more {{domxref("XRInputSource")}} objects, each representing an input device which has been removed or disabled.
- {{domxref("XRInputSourcesChangeEvent.session", "session")}} {{ReadOnlyInline}}
- : The {{domxref("XRSession")}} to which this input source change event is being directed.
## Instance methods
_While `XRInputSourcesChangeEvent` defines no methods of its own, it inherits methods from its parent interface, {{domxref("Event")}}._
## Event types
- {{domxref("XRSession.inputsourceschange_event", "inputsourceschange")}}
- : Delivered to the {{domxref("XRSession")}} when the set of input devices available to it changes.
## Examples
The following example shows how to set up an event handler which uses `inputsourceschange` events to detect newly-available pointing devices and to load their models in preparation to display them in the next animation frame.
```js
xrSession.addEventListener("inputsourceschange", onInputSourcesChange);
function onInputSourcesChange(event) {
for (const input of event.added) {
if (input.targetRayMode === "tracked-pointer") {
loadControllerMesh(input);
}
}
}
```
You can also add a handler for `inputsourceschange` events by setting the `oninputsourceschange` event handler:
```js
xrSession.oninputsourceschange = onInputSourcesChange;
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api/xrinputsourceschangeevent | data/mdn-content/files/en-us/web/api/xrinputsourceschangeevent/removed/index.md | ---
title: "XRInputSourcesChangeEvent: removed property"
short-title: removed
slug: Web/API/XRInputSourcesChangeEvent/removed
page-type: web-api-instance-property
browser-compat: api.XRInputSourcesChangeEvent.removed
---
{{APIRef("WebXR Device API")}}{{SecureContext_Header}}
The read-only {{domxref("XRInputSourcesChangeEvent")}} property {{domxref("XRInputSourcesChangeEvent.removed", "removed")}} is an array of
zero or more {{domxref("XRInputSource")}} objects representing the input sources that have been removed from the {{domxref("XRSession")}}.
## Value
An {{jsxref("Array")}} of zero or more {{domxref("XRInputSource")}} objects, each representing one input device removed from the XR system.
## Examples
See [`XRInputSourcesChangeEvent.added`](/en-US/docs/Web/API/XRInputSourcesChangeEvent/added#examples) for example code.
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api/xrinputsourceschangeevent | data/mdn-content/files/en-us/web/api/xrinputsourceschangeevent/session/index.md | ---
title: "XRInputSourcesChangeEvent: session property"
short-title: session
slug: Web/API/XRInputSourcesChangeEvent/session
page-type: web-api-instance-property
browser-compat: api.XRInputSourcesChangeEvent.session
---
{{APIRef("WebXR Device API")}}{{SecureContext_Header}}
The {{domxref("XRInputSourcesChangeEvent")}} property
{{domxref("XRInputSourcesChangeEvent.session", "session")}} specifies the
{{domxref("XRSession")}} to which the input source list change event applies.
## Value
An {{domxref("XRSession")}} indicating the WebXR session to which the input source list
change applies.
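## Examples

The snippet below is a minimal sketch of an `inputsourceschange` handler that uses the event's `session` property to refresh an input source list; `updateInputSourceList()` is an illustrative helper, not part of the API.

```js
xrSession.addEventListener("inputsourceschange", (event) => {
  // event.session is the XRSession whose input source list changed.
  if (event.session === xrSession) {
    updateInputSourceList(event.session.inputSources);
  }
});
```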
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api/xrinputsourceschangeevent | data/mdn-content/files/en-us/web/api/xrinputsourceschangeevent/added/index.md | ---
title: "XRInputSourcesChangeEvent: added property"
short-title: added
slug: Web/API/XRInputSourcesChangeEvent/added
page-type: web-api-instance-property
browser-compat: api.XRInputSourcesChangeEvent.added
---
{{APIRef("WebXR Device API")}}{{SecureContext_Header}}
The read-only {{domxref("XRInputSourcesChangeEvent")}}
property {{domxref("XRInputSourcesChangeEvent.added", "added")}} is a list of zero or
more input sources, each identified using an {{domxref("XRInputSource")}} object,
which have been newly made available for use.
## Value
An {{jsxref("Array")}} of zero or more {{domxref("XRInputSource")}} objects, each
representing one input device added to the XR system.
## Examples
The example below creates a handler for the
{{domxref("XRSession.inputsourceschange_event", "inputsourceschange")}} event that
processes the lists of input sources added to and removed from the WebXR system. It looks for new and
removed devices whose {{domxref("XRInputSource.targetRayMode", "targetRayMode")}} is
`tracked-pointer`.
```js
xrSession.oninputsourceschange = (event) => {
for (const input of event.added) {
if (input.targetRayMode === "tracked-pointer") {
addedPointerDevice(input);
}
}
for (const input of event.removed) {
if (input.targetRayMode === "tracked-pointer") {
removedPointerDevice(input);
}
}
};
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api/xrinputsourceschangeevent | data/mdn-content/files/en-us/web/api/xrinputsourceschangeevent/xrinputsourceschangeevent/index.md | ---
title: "XRInputSourcesChangeEvent: XRInputSourcesChangeEvent() constructor"
short-title: XRInputSourcesChangeEvent()
slug: Web/API/XRInputSourcesChangeEvent/XRInputSourcesChangeEvent
page-type: web-api-constructor
browser-compat: api.XRInputSourcesChangeEvent.XRInputSourcesChangeEvent
---
{{APIRef("WebXR Device API")}}{{SecureContext_Header}}
The **`XRInputSourcesChangeEvent()`**
constructor creates and returns a new {{domxref("XRInputSourcesChangeEvent")}} object,
representing an update to the list of available [WebXR](/en-US/docs/Web/API/WebXR_Device_API) input devices. You
won't typically call this constructor yourself, as these events are created and sent to
you by the WebXR system.
## Syntax
```js-nolint
new XRInputSourcesChangeEvent(type, options)
```
### Parameters
- `type`
- : A string with the name of the event.
It is case-sensitive and browsers always set it to `inputsourceschange`.
- `options`
- : An object that, _in addition to the properties defined in {{domxref("Event/Event", "Event()")}}_, can have the following properties:
- `added`
- : An array of zero or more {{domxref("XRInputSource")}} objects, each representing one input device which is newly available to use.
- `removed`
- : An array of zero or more {{domxref("XRInputSource")}} objects representing the input devices which are no longer available.
- `session`
- : The {{domxref("XRSession")}} to which the event applies.
### Return value
A new {{domxref("XRInputSourcesChangeEvent")}} object configured based upon
the input parameters provided.
## Examples
The following snippet of code creates a new `XRInputSourcesChangeEvent`
object indicating that a single new input source, described by an
{{domxref("XRInputSource")}} object named `newInputSource`, has been added to
the system.
```js
let iscEvent = new XRInputSourcesChangeEvent("inputsourceschange", {
session: xrSession,
added: [newInputSource],
removed: [],
});
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api | data/mdn-content/files/en-us/web/api/xrview/index.md | ---
title: XRView
slug: Web/API/XRView
page-type: web-api-interface
status:
- experimental
browser-compat: api.XRView
---
{{APIRef("WebXR Device API")}}{{SecureContext_Header}}{{SeeCompatTable}}
The [WebXR Device API](/en-US/docs/Web/API/WebXR_Device_API)'s **`XRView`** interface describes a single view into the XR scene for a specific frame, providing orientation and position information for the viewpoint. You can think of it as a description of a specific eye or camera and how it views the world. A 3D frame will involve two views, one for each eye, separated by an appropriate distance which approximates the distance between the viewer's eyes. This allows the two views, when projected in isolation into the appropriate eyes, to simulate a 3D world.
## Instance properties
- {{domxref("XRView.eye", "eye")}} {{ReadOnlyInline}} {{Experimental_Inline}}
- : Which of the two eyes (`left` or `right`) this `XRView` represents the perspective of. This value is used to ensure that any content which is pre-rendered for presenting to a specific eye is distributed or positioned correctly. The value can also be `none` if the `XRView` is presenting monoscopic data (such as a 2D image, a fullscreen view of text, or a close-up view of something that doesn't need to appear in 3D).
- {{domxref("XRView.isFirstPersonObserver", "isFirstPersonObserver")}} {{ReadOnlyInline}} {{Experimental_Inline}}
- : Returns a boolean indicating if the `XRView` is a first-person observer view.
- {{domxref("XRView.projectionMatrix", "projectionMatrix")}} {{ReadOnlyInline}} {{Experimental_Inline}}
- : The projection matrix that will transform the scene to appear correctly given the point-of-view indicated by `eye`. This matrix should be used directly in order to avoid presentation distortions that may lead to potentially serious user discomfort.
- {{domxref("XRView.recommendedViewportScale", "recommendedViewportScale")}} {{ReadOnlyInline}} {{Experimental_Inline}}
- : The recommended viewport scale value that you can use for `requestViewportScale()` if the user agent has such a recommendation; [`null`](/en-US/docs/Web/JavaScript/Reference/Operators/null) otherwise.
- {{domxref("XRView.transform", "transform")}} {{ReadOnlyInline}} {{Experimental_Inline}}
- : An {{domxref("XRRigidTransform")}} which describes the current position and orientation of the viewpoint in relation to the {{domxref("XRReferenceSpace")}} specified when {{domxref("XRFrame.getViewerPose", "getViewerPose()")}} was called on the {{domxref("XRFrame")}} being rendered.
## Instance methods
- {{domxref("XRView.requestViewportScale", "requestViewportScale()")}} {{ReadOnlyInline}} {{Experimental_Inline}}
- : Requests that the user agent should set the requested viewport scale for this viewport to the requested value.
## Usage notes
### Positions and number of XRViews per frame
While rendering a scene, the set of views that are used to render the scene for the viewer as of the current frame are obtained by calling the {{domxref("XRFrame")}} object's {{domxref("XRFrame.getViewerPose", "getViewerPose()")}} method to get the {{domxref("XRViewerPose")}} representing (in essence) the position of the viewer's head. That object's {{domxref("XRViewerPose.views", "views")}} property is a list of all of the `XRView` objects representing the viewpoints which can be used to construct the scene for presentation to the user.
It's possible to have `XRView` objects which represent overlapping regions as well as entirely disparate regions; in a game, you might have views that can be presented to observe a remote site using a security camera or other device, for example. In other words, don't assume there are exactly two views on a given viewer; there can be as few as one (such as when rendering the scene in `inline` mode), and potentially many (especially if the field of view is very large). There might also be views representing observers watching the action, or other viewpoints not directly associated with a player's eye.
In addition, the number of views can change at any time, depending on the needs of the moment, so you should process the view list every frame without making assumptions based on previous frames.
All positions and orientations within the views for a given {{domxref("XRViewerPose")}} are specified in the reference space that was passed to {{domxref("XRFrame.getViewerPose()")}}; this is called the **viewer reference space**. The {{domxref("XRView.transform", "transform")}} property describes the position and orientation of the eye or camera represented by the `XRView`, given in that reference space.
### The destination rendering layer
To render a frame, you iterate over the `XRViewerPose`'s views, rendering each of them into the appropriate viewport within the frame's {{domxref("XRWebGLLayer")}}. Currently, the specification (and therefore all current implementations of WebXR) is designed around rendering every `XRView` into a single `XRWebGLLayer`, which is then presented on the XR device with half used for the left eye and half for the right eye. The {{domxref("XRViewport")}} for each view is used to position the rendering into the correct half of the layer.
If in the future it becomes possible for each view to render into a different layer, there would have to be changes made to the API, so it's safe for now to assume that all views will render into the same layer.
## Examples
### Preparing to render every view for a pose
To draw everything the user sees, each frame requires iterating over the list of views returned by the {{domxref("XRViewerPose")}} object's {{domxref("XRViewerPose.views", "views")}} list:
```js
for (const view of pose.views) {
const viewport = glLayer.getViewport(view);
gl.viewport(viewport.x, viewport.y, viewport.width, viewport.height);
// Draw the scene; the eye being drawn is identified
// by view.eye.
}
```
### Special view transforms
There are a few special transforms that are used on the view while rendering and lighting a scene.
#### Model view matrix
The **model view matrix** is a matrix which defines the position of an object relative to the space in which it's located: If `objectMatrix` is a transform applied to the object to provide its basic position and rotation, then the model view matrix can be computed by multiplying the object's matrix by the inverse of the view transform matrix, like this:
```js
mat4.multiply(modelViewMatrix, view.transform.inverse.matrix, objectMatrix);
```
#### Normal matrix
The model view's **normal matrix** is used when lighting the scene, in order to transform each surface's normal vectors to ensure that the light is reflected in the correct direction given the orientation and position of the surface relative to the light source or sources. It's computed by inverting then transposing the model view matrix:
```js
mat4.invert(normalMatrix, modelViewMatrix);
mat4.transpose(normalMatrix, normalMatrix);
```
### Teleporting an object
To programmatically move and/or rotate (often referred to as **teleporting**) an object, you need to create a new reference space for that object which applies a transform that encapsulates the desired changes. The `applyMouseMovement()` function below returns a new reference space which moves and rotates an object, whose current situation is described by the reference space `refSpace`, to a new position and orientation computed from previously recorded mouse and keyboard input data that has generated offsets for yaw, pitch, and position along all three axes.
```js
function applyMouseMovement(refSpace) {
if (
!mouseYaw &&
!mousePitch &&
!axialDistance &&
!transverseDistance &&
!verticalDistance
) {
return refSpace;
}
// Compute the quaternion used to rotate the image based
// on the pitch and yaw.
quat.identity(inverseOrientation);
quat.rotateX(inverseOrientation, inverseOrientation, -mousePitch);
quat.rotateY(inverseOrientation, inverseOrientation, -mouseYaw);
// Compute the true "up" vector for our object.
vec3.cross(vecX, vecY, cubeOrientation);
vec3.cross(vecY, cubeOrientation, vecX);
// Now compute the transform that teleports the object to the
// specified point and save a copy of it to display to the user
// later; otherwise we probably wouldn't need to save mouseMatrix
// at all.
let newTransform = new XRRigidTransform(
{ x: transverseDistance, y: verticalDistance, z: axialDistance },
{
x: inverseOrientation[0],
y: inverseOrientation[1],
z: inverseOrientation[2],
w: inverseOrientation[3],
},
);
mat4.copy(mouseMatrix, newTransform.matrix);
// Create a new reference space that transforms the object to the new
// position and orientation, returning the new reference space.
return refSpace.getOffsetReferenceSpace(newTransform);
}
```
This code is broken into four sections. In the first, the quaternion `inverseOrientation` is computed. This represents the rotation of the object given the values of `mousePitch` (rotation around the X axis of the object's reference space) and `mouseYaw` (rotation around its Y axis).
The second section computes the "up" vector for the object. This vector indicates the direction which is "up" in the scene overall, but in the object's reference space.
The third section creates the new {{domxref("XRRigidTransform")}}, specifying a point providing the offsets along the three axes as the first parameter, and the orientation quaternion as the second parameter. The returned object's {{domxref("XRRigidTransform.matrix", "matrix")}} property is the actual matrix that transforms points from the scene's reference space to the object's new position.
Finally, a new reference space is created to describe the relationship between the two reference spaces fully. That reference space is returned to the caller.
To use this function, we pass the returned reference space into {{domxref("XRFrame.getPose()")}} or {{domxref("XRFrame.getViewerPose", "getViewerPose()")}}, as appropriate for your needs. The returned {{domxref("XRPose")}} will then be used to render the scene for the current frame.
You can find a more extensive and complete example in our article [Movement, orientation, and motion](/en-US/docs/Web/API/WebXR_Device_API/Movement_and_motion).
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api/xrview | data/mdn-content/files/en-us/web/api/xrview/requestviewportscale/index.md | ---
title: "XRView: requestViewportScale() method"
short-title: requestViewportScale()
slug: Web/API/XRView/requestViewportScale
page-type: web-api-instance-method
status:
- experimental
browser-compat: api.XRView.requestViewportScale
---
{{APIRef("WebXR Device API")}}{{SeeCompatTable}}{{SecureContext_Header}}
The **`requestViewportScale()`** method of the {{domxref("XRView")}} interface requests that the user agent sets the requested viewport scale for this viewport to the given value. This is used for dynamic viewport scaling which allows rendering to a subset of the WebXR viewport using a scale factor that can be changed every animation frame.
## Syntax
```js-nolint
requestViewportScale(scale)
```
### Parameters
- `scale`
- : A number greater than 0.0 and less than or equal to 1.0 representing the scale factor.
### Return value
None ({{jsxref("undefined")}}).
## Dynamic viewport scaling
Dynamic viewport scaling allows applications to only use a subset of the available {{domxref("XRWebGLLayer.framebuffer", "framebuffer")}}. The feature may not be available on all systems since it depends on driver support, so you might want to ensure that `requestViewportScale()` exists before calling it.
The `scale` parameter can be a number greater than 0.0 and less than or equal to 1.0.
Alternatively, you can use the {{domxref("XRView.recommendedViewportScale")}} property which contains the user agent's recommended value based on internal heuristics. If the user agent doesn't provide a recommended viewport scale, its value is `null` and the call to `requestViewportScale()` is ignored.
## Examples
The following example shows how to request and apply a new viewport scale. The call to {{domxref("XRWebGLLayer.getViewport()")}} applies the change and returns the updated viewport.
```js
for (const view of pose.views) {
if (view.requestViewportScale) {
view.requestViewportScale(0.8);
// or use view.requestViewportScale(view.recommendedViewportScale);
}
const viewport = glLayer.getViewport(view);
}
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- {{domxref("XRView.recommendedViewportScale")}}
| 0 |
data/mdn-content/files/en-us/web/api/xrview | data/mdn-content/files/en-us/web/api/xrview/transform/index.md | ---
title: "XRView: transform property"
short-title: transform
slug: Web/API/XRView/transform
page-type: web-api-instance-property
status:
- experimental
browser-compat: api.XRView.transform
---
{{APIRef("WebXR Device API")}}{{SeeCompatTable}}{{SecureContext_Header}}
The read-only **`transform`** property of the
{{domxref("XRView")}} interface is an {{domxref("XRRigidTransform")}} object which
provides the position and orientation of the viewpoint relative to the
{{domxref("XRReferenceSpace")}} specified when the
{{domxref("XRFrame.getViewerPose()")}} method was called to obtain the view object.
With the `transform`, you can then position the view as a camera within the
3D scene. If you instead need the more traditional view matrix, you can get it using
`view.transform.inverse.matrix`; this gets the underlying
{{domxref("XRRigidTransform.matrix", "matrix")}} of the transform's
{{domxref("XRRigidTransform.inverse", "inverse")}}.
## Value
A {{domxref("XRRigidTransform")}} object specifying the position and orientation of the
viewpoint represented by the `XRView`.
## Examples
For each view making up the presented scene, the view's `transform`
represents the position and orientation of the viewer or camera relative to the
reference space's origin. You can then use the inverse of this matrix to transform the
objects in your scene to adjust their placement and orientation to simulate the viewer's
movement through space.
In this example, we see an outline of a code fragment used while rendering an
{{domxref("XRFrame")}}, which makes use of the view transform to place objects in the
world during rendering.
```js
const modelViewMatrix = mat4.create();
const normalMatrix = mat4.create();
for (const view of pose.views) {
const viewport = glLayer.getViewport(view);
gl.viewport(viewport.x, viewport.y, viewport.width, viewport.height);
for (const obj of world.objects) {
mat4.multiply(modelViewMatrix, view.transform.inverse.matrix, obj.matrix);
mat4.invert(normalMatrix, modelViewMatrix);
mat4.transpose(normalMatrix, normalMatrix);
obj.render(modelViewMatrix, normalMatrix);
}
}
```
Two matrices are created outside the rendering loop; this avoids repeatedly allocating
and deallocating the matrices, and generally reduces overhead by reusing the same matrix
for each object rendered.
Then we iterate over each {{domxref("XRView")}} found in the
{{domxref("XRViewerPose")}}'s list of {{domxref("XRViewerPose.views", "views")}}. There
will usually be two: one for the left eye and one for the right, but there may be only
one if in monoscopic mode. Currently, WebXR doesn't support more than two views per
pose, although room has been left to extend the specification to support that in the
future with some additions to the API.
For each view, we obtain its viewport and pass that to WebGL using
{{domxref("WebGLRenderingContext.viewport", "gl.viewport()")}}. For the left eye, this
will be the left half of the canvas, while the right eye will use the right half.
Then we iterate over each object that makes up the scene. Each object's model view
matrix is computed by multiplying its own matrix which describes the object's own
position and orientation by the additional position and orientation adjustments needed
to match the camera's movement. To convert the "camera focused" transform matrix into an
"object focused" transform, we use the transform's inverse, thus taking the matrix
returned by {{domxref("XRRigidTransform.matrix", "view.transform.inverse.matrix")}}. The
resulting model view matrix will apply all the transforms needed to move and rotate the
object based on the relative positions of the object and the camera. This will simulate
the movement of the camera even though we're actually moving the object.
We then compute the normal matrix by inverting the model view matrix, then transposing
it.
Finally, we call the object's `render()` routine, passing along the
`modelViewMatrix` and `normalMatrix` so the renderer can place and
light the object properly.
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api/xrview | data/mdn-content/files/en-us/web/api/xrview/projectionmatrix/index.md | ---
title: "XRView: projectionMatrix property"
short-title: projectionMatrix
slug: Web/API/XRView/projectionMatrix
page-type: web-api-instance-property
status:
- experimental
browser-compat: api.XRView.projectionMatrix
---
{{APIRef("WebXR Device API")}}{{SeeCompatTable}}{{SecureContext_Header}}
The {{domxref("XRView")}} interface's read-only
**`projectionMatrix`** property specifies the projection matrix
to apply to the underlying view. This should be used to add perspective to
everything in the scene, in order to ensure the result is consistent with what the eye
expects to see.
> **Note:** Failure to apply proper perspective, or inconsistencies
> in perspective, may result in possibly serious user discomfort or distress.
## Value
A {{jsxref("Float32Array")}} object representing the projection matrix for the view.
The projection matrix for each eye's view is used to ensure that the correct area of the
scene is presented to each eye in order to create a believable 3D scene without
introducing discomfort for the user.
## Examples
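The projection matrix is typically passed straight into your renderer each frame. The following is a minimal sketch assuming a WebGL context `gl`, a linked shader `program` with a `mat4` uniform named `uProjectionMatrix`, and an `XRView` named `view`; all of those names are illustrative.

```js
// Upload the view's projection matrix, unmodified, to the shader.
const projectionLoc = gl.getUniformLocation(program, "uProjectionMatrix");
gl.useProgram(program);
gl.uniformMatrix4fv(projectionLoc, false, view.projectionMatrix);
```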
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api/xrview | data/mdn-content/files/en-us/web/api/xrview/isfirstpersonobserver/index.md | ---
title: "XRView: isFirstPersonObserver property"
short-title: isFirstPersonObserver
slug: Web/API/XRView/isFirstPersonObserver
page-type: web-api-instance-property
status:
- experimental
browser-compat: api.XRView.isFirstPersonObserver
---
{{APIRef("WebXR Device API")}}{{SeeCompatTable}}{{SecureContext_Header}}
The {{domxref("XRView")}} interface's read-only **`isFirstPersonObserver`** property is a boolean indicating if the `XRView` is a first-person observer view.
To create video recordings of AR device cameras, you can't simply use one of the rendered eyes, as there often will be a physical offset. Some devices expose a secondary view, the first-person observer view, which has an `eye` of `none`.
To receive a first-person observer view, you need to enable the "secondary-views" feature descriptor explicitly (typically as an optional feature). See {{domxref("XRSystem.requestSession()")}} for details.
The `isFirstPersonObserver` property then allows you to check which secondary view is a first-person observer view.
## Examples
### Checking for first-person observer views
```js
// Make sure to enable "secondary-views"
navigator.xr
.requestSession("immersive-ar", {
optionalFeatures: ["secondary-views"],
})
.then((session) => {
// …
session.requestAnimationFrame((frame) => {
const viewerPose = frame.getViewerPose(space);
// Make sure to iterate over all views
for (const view of viewerPose.views) {
if (view.isFirstPersonObserver) {
renderFPO();
} else {
render();
}
}
});
});
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api/xrview | data/mdn-content/files/en-us/web/api/xrview/eye/index.md | ---
title: "XRView: eye property"
short-title: eye
slug: Web/API/XRView/eye
page-type: web-api-instance-property
status:
- experimental
browser-compat: api.XRView.eye
---
{{APIRef("WebXR Device API")}}{{SeeCompatTable}}{{SecureContext_Header}}
The {{domxref("XRView")}} interface's read-only **`eye`**
property is a string indicating which eye's viewpoint the `XRView` represents: `left` or
`right`. For views which represent neither eye, such as monoscopic views,
this property's value is `none`.
## Value
A string that can be one of the following values:
- `left`
- : The {{domxref("XRView")}} represents the point-of-view of the viewer's left eye.
- `right`
- : The view represents the viewer's right eye.
- `none`
- : The `XRView` describes a monoscopic view, or the view otherwise doesn't represent a particular eye's point-of-view.
## Usage notes
The primary purpose of this property is to allow the correct area of any pre-rendered
stereo content to be presented to the correct eye. For dynamically-rendered 3D content,
you can usually ignore this and render each of the viewer's views, one after another.
## Examples
This code, from the viewer pose's renderer, iterates over the pose's views and renders
them. _However_, we have flags which, if `true`, indicate that a
particular eye has been injured during gameplay. When rendering that eye, if the flag is
`true`, that view is skipped instead of being rendered.
```js
glLayer = xrSession.renderState.baseLayer;
gl.bindFramebuffer(gl.FRAMEBUFFER, glLayer.framebuffer);
gl.clearColor(0, 0, 0, 1.0);
gl.clearDepth(1.0);
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
for (const view of xrPose.views) {
let skipView = false;
if (view.eye === "left" && body.leftEye.injured) {
skipView = updateInjury(body.leftEye);
} else if (view.eye === "right" && body.rightEye.injured) {
skipView = updateInjury(body.rightEye);
}
if (!skipView) {
let viewport = glLayer.getViewport(view);
gl.viewport(viewport.x, viewport.y, viewport.width, viewport.height);
renderScene(gl, view);
}
}
```
For each of the views, the value of `eye` is checked and if it's either
`left` or `right`, we check to see if the
`body.leftEye.injured` or `body.rightEye.injured` property is
`true`; if so, we call a function `updateInjury()` on that eye to
do things such as allow a bit of healing to occur, track the progress of a poison
effect, or the like, as appropriate for the game's needs.
`updateInjury()` returns `true` if the eye is still injured or
`false` if the eye has been restored to health by the function. If the
result is `false`, indicating that the eye is now healthy, we render the
scene for that eye. Otherwise, we don't.
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api/xrview | data/mdn-content/files/en-us/web/api/xrview/recommendedviewportscale/index.md | ---
title: "XRView: recommendedViewportScale property"
short-title: recommendedViewportScale
slug: Web/API/XRView/recommendedViewportScale
page-type: web-api-instance-property
status:
- experimental
browser-compat: api.XRView.recommendedViewportScale
---
{{APIRef("WebXR Device API")}}{{SeeCompatTable}}{{SecureContext_Header}}
The read-only **`recommendedViewportScale`** property of the {{domxref("XRView")}} interface is the recommended viewport scale value that you can use for {{domxref("XRView.requestViewportScale()")}} if the user agent has such a recommendation; [`null`](/en-US/docs/Web/JavaScript/Reference/Operators/null) otherwise.
## Value
A number greater than 0.0 and less than or equal to 1.0; or [`null`](/en-US/docs/Web/JavaScript/Reference/Operators/null) if the user agent does not provide a recommended scale.
## Examples
### Dynamic viewport scaling
Dynamic viewport scaling allows applications to only use a subset of the available {{domxref("XRWebGLLayer.framebuffer", "framebuffer")}}. The feature may not be available on all systems since it depends on driver support, so you might want to ensure that {{domxref("XRView.requestViewportScale")}} exists before calling it.
```js
for (const view of pose.views) {
if (view.requestViewportScale) {
view.requestViewportScale(view.recommendedViewportScale);
}
const viewport = glLayer.getViewport(view);
}
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- {{domxref("XRView.requestViewportScale()")}}
| 0 |
data/mdn-content/files/en-us/web/api | data/mdn-content/files/en-us/web/api/csspagerule/index.md | ---
title: CSSPageRule
slug: Web/API/CSSPageRule
page-type: web-api-interface
browser-compat: api.CSSPageRule
---
{{APIRef("CSSOM")}}
**`CSSPageRule`** represents a single CSS {{cssxref("@page")}} rule.
{{InheritanceDiagram}}
## Instance properties
_Inherits properties from its ancestor {{domxref("CSSRule")}}._
- {{domxref("CSSPageRule.selectorText")}}
- : Represents the text of the page selector associated with the at-rule.
- {{domxref("CSSPageRule.style")}} {{ReadOnlyInline}}
- : Returns the [declaration block](/en-US/docs/Web/API/CSS_Object_Model/CSS_Declaration_Block) associated with the at-rule.
## Instance methods
_Inherits methods from its ancestor {{domxref("CSSRule")}}._
## Examples
The stylesheet includes a single {{cssxref("@page")}} rule, so the first (and only) rule returned will be a `CSSPageRule`.
```css
@page {
margin: 1cm;
}
```
```js
let myRules = document.styleSheets[0].cssRules;
console.log(myRules[0]); // a CSSPageRule
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api/csspagerule | data/mdn-content/files/en-us/web/api/csspagerule/style/index.md | ---
title: "CSSPageRule: style property"
short-title: style
slug: Web/API/CSSPageRule/style
page-type: web-api-instance-property
browser-compat: api.CSSPageRule.style
---
{{APIRef("CSSOM")}}
The **`style`** read-only property of the {{domxref("CSSPageRule")}} interface returns a {{domxref("CSSStyleDeclaration")}} object. This represents an object that is a [CSS declaration block](/en-US/docs/Web/API/CSS_Object_Model/CSS_Declaration_Block), and exposes style information and various style-related methods and properties.
## Value
A {{domxref("CSSStyleDeclaration")}} object, which represents a [CSS declaration block](/en-US/docs/Web/API/CSS_Object_Model/CSS_Declaration_Block) with the following properties:
- computed flag
- : Unset.
- declarations
- : The declared declarations in the rule, in the order they were specified, shorthand properties expanded to longhands.
- parent CSS rule
- : The context object, which is an alias for [this](https://heycam.github.io/webidl/#this).
- owner node
- : Null.
## Examples
The stylesheet includes a {{cssxref("@page")}} rule. Getting a list of rules, then returning the value of the style property will return a {{domxref("CSSStyleDeclaration")}} object.
```css
@page {
margin: 1cm;
}
```
```js
let myRules = document.styleSheets[0].cssRules;
console.log(myRules[0].style); // returns a CSSStyleDeclaration object
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api/csspagerule | data/mdn-content/files/en-us/web/api/csspagerule/selectortext/index.md | ---
title: "CSSPageRule: selectorText property"
short-title: selectorText
slug: Web/API/CSSPageRule/selectorText
page-type: web-api-instance-property
browser-compat: api.CSSPageRule.selectorText
---
{{APIRef("CSSOM")}}
The **`selectorText`** property of the {{domxref("CSSPageRule")}} interface gets and sets the selectors associated with the `CSSPageRule`.
## Value
A string.
## Examples
The stylesheet includes two {{cssxref("@page")}} rules. The `selectorText` property will return the literal selector text of `:first` as a string.
```css
@page {
margin: 1cm;
}
@page :first {
margin: 2cm;
}
```
```js
let myRules = document.styleSheets[0].cssRules; // returns two rules
console.log(myRules[1].selectorText); // returns the string ":first"
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api | data/mdn-content/files/en-us/web/api/permissions_api/index.md | ---
title: Permissions API
slug: Web/API/Permissions_API
page-type: web-api-overview
browser-compat:
- api.Permissions
- api.Navigator.permissions
- api.WorkerNavigator.permissions
spec-urls: https://w3c.github.io/permissions/
---
{{DefaultAPISidebar("Permissions API")}}
The **Permissions API** provides a consistent programmatic way to query the status of API permissions attributed to the current context. For example, the Permissions API can be used to determine if permission to access a particular API has been granted or denied, or requires specific user permission.
Note that the permissions from this API effectively aggregate all security restrictions for the context, including any requirement for an API to be used in a secure context, [Permissions-Policy](/en-US/docs/Web/HTTP/Headers/Permissions-Policy) restrictions applied to the document, and user prompts.
So, for example, if an API is restricted by permissions policy, the returned permission would be `denied` and the user would not be prompted for access.
{{AvailableInWorkers}}
## Concepts and usage
Historically, different APIs handled their own permissions inconsistently — for example, the [Notifications API](/en-US/docs/Web/API/Notifications_API) provided its own methods for requesting permissions and checking permission status, whereas the [Geolocation API](/en-US/docs/Web/API/Geolocation) did not.
The Permissions API provides the tools to allow developers to implement a consistent and better user experience for working with permissions.
The `permissions` property has been made available on the {{domxref("Navigator")}} object, both in the standard browsing context and the worker context ({{domxref("WorkerNavigator")}} — so permission checks are available inside workers), and returns a {{domxref("Permissions")}} object that provides access to the Permissions API functionality.
Once you have this object you can then use the {{domxref("Permissions.query()")}} method to return a promise that resolves with the {{domxref("PermissionStatus")}} for a specific API.
Note that if the status is `prompt` the user must acknowledge a prompt before accessing the feature, and that the mechanism for launching this prompt will depend on the specific API — it is not defined as part of the Permissions API.
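For example, a minimal query for the geolocation permission looks like this:

```js
navigator.permissions.query({ name: "geolocation" }).then((result) => {
  // result.state is "granted", "denied", or "prompt"
  console.log(`Geolocation permission state is ${result.state}`);
});
```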
### Permission-aware APIs
Not all APIs' permission statuses can be queried using the Permissions API.
A non-exhaustive list of permission-aware APIs includes:
- [Background Synchronization API](/en-US/docs/Web/API/Background_Synchronization_API): `background-sync` (should always be granted)
- [Geolocation API](/en-US/docs/Web/API/Geolocation_API): `geolocation`
- [Local Font Access API](/en-US/docs/Web/API/Local_Font_Access_API): `local-fonts`
- [Media Capture and Streams API](/en-US/docs/Web/API/Media_Capture_and_Streams_API): `microphone`, `camera`
- [Notifications API](/en-US/docs/Web/API/Notifications_API): `notifications`
- [Payment Handler API](/en-US/docs/Web/API/Payment_Handler_API): `payment-handler`
- [Push API](/en-US/docs/Web/API/Push_API): `push`
- [Screen Wake Lock API](/en-US/docs/Web/API/Screen_Wake_Lock_API): `screen-wake-lock`
- [Sensor APIs](/en-US/docs/Web/API/Sensor_APIs): `accelerometer`, `gyroscope`, `magnetometer`, `ambient-light-sensor`
- [Storage Access API](/en-US/docs/Web/API/Storage_Access_API): `storage-access`, `top-level-storage-access`
- [Storage API](/en-US/docs/Web/API/Storage_API): `persistent-storage`
- [Web MIDI API](/en-US/docs/Web/API/Web_MIDI_API): `midi`
- [Window Management API](/en-US/docs/Web/API/Window_Management_API): `window-management`
## Examples
We have created a simple example called Location Finder. You can [run the example live](https://chrisdavidmills.github.io/location-finder-permissions-api/), or [view the source code on GitHub](https://github.com/chrisdavidmills/location-finder-permissions-api/tree/gh-pages).
Read more about how it works in our article [Using the Permissions API](/en-US/docs/Web/API/Permissions_API/Using_the_Permissions_API).
## Interfaces
- {{domxref("Navigator.permissions")}} and {{domxref("WorkerNavigator.permissions")}} {{ReadOnlyInline}}
- : Provides access to the {{domxref("Permissions")}} object from the main context and worker context respectively.
- {{domxref("Permissions")}}
- : Provides the core Permission API functionality, such as methods for querying and revoking permissions.
- {{domxref("PermissionStatus")}}
- : Provides access to the current status of a permission, and an event handler to respond to changes in permission status.
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Using the Permissions API](/en-US/docs/Web/API/Permissions_API/Using_the_Permissions_API)
- [Using the Permissions API to Detect How Often Users Allow or Deny Camera Access](https://blog.addpipe.com/using-permissions-api-to-detect-getusermedia-responses/)
- {{DOMxref("Notification.permission_static", "Notification.permission")}}
- [Privacy, permissions, and information security](/en-US/docs/Web/Privacy)
| 0 |
data/mdn-content/files/en-us/web/api/permissions_api | data/mdn-content/files/en-us/web/api/permissions_api/using_the_permissions_api/index.md | ---
title: Using the Permissions API
slug: Web/API/Permissions_API/Using_the_Permissions_API
page-type: guide
status:
- experimental
---
{{DefaultAPISidebar("Permissions API")}}
This article provides a basic guide to using the W3C [Permissions API](/en-US/docs/Web/API/Permissions_API), which provides a programmatic way to query the status of API permissions attributed to the current context.
## The trouble with asking for permission…
Permissions on the Web are a necessary evil, but they are not much fun to deal with as developers.
Historically, different APIs handled their own permissions inconsistently — for example, the [Notifications API](/en-US/docs/Web/API/Notifications_API) had its own methods for checking the permission status and requesting permission, whereas the [Geolocation API](/en-US/docs/Web/API/Geolocation_API) did not.
The [Permissions API](/en-US/docs/Web/API/Permissions_API) provides a consistent approach for developers, and allows them to implement a better user experience as far as permissions are concerned.
Specifically, developers can use {{domxref("Permissions.query()")}} to check whether permission to use a particular API in the current context is granted, denied, or requires specific user permission via a prompt.
Querying permissions in the main thread is [broadly supported](/en-US/docs/Web/API/Permissions_API#api.navigator.permissions), and also in [Workers](/en-US/docs/Web/API/Permissions_API#api.workernavigator.permissions) (with a notable exception).
Many APIs now enable permission querying, such as the [Clipboard API](/en-US/docs/Web/API/Clipboard_API), [Notifications API](/en-US/docs/Web/API/Notifications_API), [Push API](/en-US/docs/Web/API/Push_API), and [Web MIDI API](/en-US/docs/Web/API/Web_MIDI_API).
A list of many permission-aware APIs is provided in the [API Overview](/en-US/docs/Web/API/Permissions_API#permission-aware_apis), and you can get a sense of browser support in the [compatibility table](/en-US/docs/Web/API/Permissions_API#api.permissions).
{{domxref("Permissions")}} has other methods to specifically request permission to use an API, and to revoke permission, but these are deprecated (non-standard, and/or not broadly supported).
## A simple example
For this article, we have put together a simple demo called Location Finder. It uses Geolocation to query the user's current location and plot it out on a Google Map:

You can [run the example live](https://chrisdavidmills.github.io/location-finder-permissions-api/), or [view the source code on GitHub](https://github.com/chrisdavidmills/location-finder-permissions-api/tree/gh-pages). Most of the code is simple and unremarkable — below we'll just be walking through the Permissions API-related code, so check the code yourself if you want to study any of the other parts.
### Accessing the Permissions API
The {{domxref("Navigator.permissions")}} property has been added to the browser to allow access to the global {{domxref("Permissions")}} object. This object will eventually include methods for querying, requesting, and revoking permissions, although currently it only contains {{domxref("Permissions.query()")}}; see below.
### Querying permission state
In our example, the Permissions functionality is handled by one function — `handlePermission()`. This starts off by querying the permission status using {{domxref("Permissions.query()")}}. Depending on the value of the {{domxref("PermissionStatus.state", "state")}} property of the {{domxref("PermissionStatus")}} object returned when the promise resolves, it reacts differently:
- `"granted"`
- : The "Enable Geolocation" button is hidden, as it isn't needed if Geolocation is already active.
- `"prompt"`
- : The "Enable Geolocation" button is hidden, as it isn't needed if the user will be prompted to grant permission for Geolocation. The {{domxref("Geolocation.getCurrentPosition()")}} function is then run, which prompts the user for permission; it runs the `revealPosition()` function if permission is granted (which shows the map), or the `positionDenied()` function if permission is denied (which makes the "Enable Geolocation" button appear).
- `"denied"`
- : The "Enable Geolocation" button is revealed (this code needs to be here too, in case the permission state is already set to denied for this origin when the page is first loaded).
```js
function handlePermission() {
navigator.permissions.query({ name: "geolocation" }).then((result) => {
if (result.state === "granted") {
report(result.state);
geoBtn.style.display = "none";
} else if (result.state === "prompt") {
report(result.state);
geoBtn.style.display = "none";
navigator.geolocation.getCurrentPosition(
revealPosition,
positionDenied,
geoSettings,
);
} else if (result.state === "denied") {
report(result.state);
geoBtn.style.display = "inline";
}
result.addEventListener("change", () => {
report(result.state);
});
});
}
function report(state) {
console.log(`Permission ${state}`);
}
handlePermission();
```
### Permission descriptors
The {{domxref("Permissions.query()")}} method takes a `PermissionDescriptor` dictionary as a parameter — this contains the name of the API you are interested in. Some APIs have more complex `PermissionDescriptor`s containing additional information, which inherit from the default `PermissionDescriptor`. For example, the `PushPermissionDescriptor` should also contain a Boolean that specifies if [`userVisibleOnly`](/en-US/docs/Web/API/PushManager/subscribe#parameters) is `true` or `false`.
### Revoking permissions
Starting in Firefox 47, you can now revoke existing permissions, using the {{domxref("Permissions.revoke()")}} method. This works in exactly the same way as the {{domxref("Permissions.query()")}} method, except that it causes an existing permission to be reverted back to its default state when the promise successfully resolves (which is usually `prompt`). See the following code in our demo:
```js
const revokeBtn = document.querySelector(".revoke");
// …
revokeBtn.onclick = () => {
revokePermission();
};
// …
function revokePermission() {
navigator.permissions.revoke({ name: "geolocation" }).then((result) => {
report(result.state);
});
}
```
> **Note:** The `revoke()` function has been disabled by default starting in Firefox 51, since its design has been brought into question in the [Web Applications Security Working Group](https://www.w3.org/2011/webappsec/). It can be re-enabled by setting the preference `dom.permissions.revoke.enable` to `true`.
### Responding to permission state changes
You'll notice that we're listening to the {{domxref("PermissionStatus.change_event", "change")}} event in the code above, attached to the {{domxref("PermissionStatus")}} object — this allows us to respond to any changes in the permission status for the API we are interested in. At the moment we are just reporting the change in state.
## Conclusion and future work
At the moment this doesn't offer much more than what we had already. If we choose to never share our location from the permission prompt (deny permission), then we can't get back to the permission prompt without using the browser menu options:
- **Firefox**: _Tools > Page Info > Permissions > Access Your Location_. Select _Always Ask_.
- **Chrome**: _Hamburger Menu > Settings > Show advanced settings_. In the _Privacy_ section, click _Content Settings_. In the resulting dialog, find the _Location_ section and select _Ask when a site tries to…_. Finally, click _Manage Exceptions_ and remove the permissions you granted to the sites you are interested in.
However, future additions to browser functionality should provide the `request()` method, which will allow us to programmatically request permissions, any time we like. These should hopefully be available soon.
| 0 |
data/mdn-content/files/en-us/web/api | data/mdn-content/files/en-us/web/api/xrpose/index.md | ---
title: XRPose
slug: Web/API/XRPose
page-type: web-api-interface
browser-compat: api.XRPose
---
{{APIRef("WebXR Device API")}}{{securecontext_header}}
`XRPose` is a [WebXR API](/en-US/docs/Web/API/WebXR_Device_API) interface representing a position and orientation in the 3D space, relative to the {{domxref("XRSpace")}} within which it resides. The `XRSpace`—which is either an {{domxref("XRReferenceSpace")}} or an {{domxref("XRBoundedReferenceSpace")}}—defines the coordinate system used for the pose and, in the case of an {{domxref("XRViewerPose")}}, its underlying views.
To obtain the `XRPose` for the `XRSpace` used as the local coordinate system of an object, call {{domxref("XRFrame.getPose()")}}, specifying that local `XRSpace` and the space to which you wish to convert:
```js
thePose = xrFrame.getPose(localSpace, baseSpace);
```
The pose for a viewer (or camera) is represented by the {{domxref("XRViewerPose")}} subclass of `XRPose`. This is obtained using {{domxref("XRFrame.getViewerPose()")}} instead of `getPose()`, specifying a reference space which has been adjusted to position and orient the node to provide the desired viewing position and angle:
```js
viewerPose = xrFrame.getViewerPose(adjReferenceSpace);
```
Here, `adjReferenceSpace` is a reference space which has been updated using the base frame of reference for the frame and any adjustments needed to position the viewer based on movement or rotation which is being supplied from a source other than the XR device, such as keyboard or mouse inputs.
See the article [Movement, orientation, and motion](/en-US/docs/Web/API/WebXR_Device_API/Movement_and_motion) for further details and an example with thorough explanations of what's going on.
## Instance properties
- {{DOMxRef("XRPose.angularVelocity")}} {{ReadOnlyInline}}
- : A {{DOMxRef("DOMPointReadOnly")}} describing the angular velocity in radians per second relative to the base {{DOMxRef("XRSpace")}}.
- {{DOMxRef("XRPose.emulatedPosition")}} {{ReadOnlyInline}}
- : A Boolean value which is `false` if the position and orientation given by {{DOMxRef("XRPose.transform", "transform")}} are obtained directly from a full six degree of freedom (6DoF) XR device (that is, a device which tracks not only the pitch, yaw, and roll of the head but also the forward, backward, and side-to-side motion of the viewer). If any component of the `transform` is computed or created artificially (such as by using mouse or keyboard controls to move through space), this value is instead `true`, indicating that the `transform` is in part emulated in software.
- {{DOMxRef("XRPose.linearVelocity")}} {{ReadOnlyInline}}
- : A {{DOMxRef("DOMPointReadOnly")}} describing the linear velocity in meters per second relative to the base {{DOMxRef("XRSpace")}}.
- {{DOMxRef("XRPose.transform")}} {{ReadOnlyInline}}
- : A {{DOMxRef("XRRigidTransform")}} which provides the position and orientation of the pose relative to the base {{DOMxRef("XRSpace")}}.
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [WebXR Device API](/en-US/docs/Web/API/WebXR_Device_API)
- {{DOMxRef("XRFrame.getPose()")}}
- {{DOMxRef("XRViewerPose")}}
| 0 |
data/mdn-content/files/en-us/web/api/xrpose | data/mdn-content/files/en-us/web/api/xrpose/emulatedposition/index.md | ---
title: "XRPose: emulatedPosition property"
short-title: emulatedPosition
slug: Web/API/XRPose/emulatedPosition
page-type: web-api-instance-property
browser-compat: api.XRPose.emulatedPosition
---
{{APIRef}}{{SecureContext_Header}}
The `emulatedPosition` read-only attribute of the
{{DOMxRef("XRPose")}} interface is a Boolean value indicating whether the
{{domxref("XRRigidTransform.position", "position")}} component of the pose's
{{domxref("XRPose.transform", "transform")}} is taken directly from the XR device, or
whether it's simulated or computed based on other sources.
## Value
A Boolean which is `true` if the pose's position is computed based on
estimates or is derived from sources other than direct sensor data. If the position is
precisely based on direct sensor inputs, the value is `false`.
## Usage notes
There are two basic categories of XR tracking systems. A basic XR headset provides
three degrees of freedom (3DoF), tracking the pitch, yaw, and roll of the user's head.
No information is available about movement forward, backward, or to the sides. Any such
data is taken from other sources, such as keyboard or mouse inputs or game controllers.
As such, the position is considered to be emulated, so the `emulatedPosition`
property is `true`.
Contrariwise, XR devices which can also track movement forward and backward as well as
laterally—six degree of freedom (6DoF) devices—don't require any information from other
sources to determine the user's position, so the value of `emulatedPosition`
is `false`.
The same notion applies not just to the user's head, but to any object. A hand
controller that can directly report its position would have a value of
`false` for this property as well. If its position is computed as an offset
from another object (such as by basing it off the model representing the user's body),
then this value is `true`.
This information is important because devices whose position is emulated are prone to
their offset drifting relative to the real world space over time. This is because
emulating a position based on accelerometer inputs and models tends to introduce minor
errors which accumulate over time.
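## Examples

The sketch below checks a viewer pose's `emulatedPosition` flag and, if the position is inferred rather than directly sensed, surfaces a hint to the user; `showRecenterHint()` is an illustrative helper, not part of the API.

```js
const viewerPose = frame.getViewerPose(xrReferenceSpace);
if (viewerPose?.emulatedPosition) {
  // The position may drift over time, so offer the user a way to recenter.
  showRecenterHint();
}
```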
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api/xrpose | data/mdn-content/files/en-us/web/api/xrpose/transform/index.md | ---
title: "XRPose: transform property"
short-title: transform
slug: Web/API/XRPose/transform
page-type: web-api-instance-property
browser-compat: api.XRPose.transform
---
{{APIRef("WebXR Device API")}}{{SecureContext_header}}
The `transform` read-only attribute of the
{{DOMxRef("XRPose")}} interface is a {{DOMxRef("XRRigidTransform")}} object providing
the position and orientation of the pose relative to the base {{DOMxRef("XRSpace")}}
as specified when the pose was obtained by calling
{{domxref("XRFrame.getPose()")}}.
## Value
An {{domxref("XRRigidTransform")}} which provides the position and orientation of the
{{domxref("XRPose")}} relative to the {{domxref("XRFrame")}} to which this
`XRPose` is aligned. This is the same pose that's returned by the frame's
{{domxref("XRFrame.getPose", "getPose()")}} method.
## Examples
This handler for the {{domxref("XRSession")}} event {{domxref("XRSession.select_event",
"select")}} handles events for tracked pointers. It determines the targeted object by
passing the event frame's pose into a function called `findTargetUsingRay()`,
then dispatches the event differently depending on the user's handedness; this is done
by comparing the value of the {{domxref("XRInputSource")}} property
{{domxref("XRInputSource.handedness", "handedness")}} to a value in the variable
`user.handedness`. If the source is a controller in the user's primary hand,
a function is called on the targeted object called `primaryAction()`;
otherwise, it calls the targeted object's `offHandAction()` function.
```js
xrSession.addEventListener("select", (event) => {
let source = event.inputSource;
let frame = event.frame;
let targetRayPose = frame.getPose(source.targetRaySpace, myRefSpace);
let targetObject = findTargetUsingRay(targetRayPose.transform.matrix);
if (source.targetRayMode === "tracked-pointer") {
if (source.handedness === user.handedness) {
targetObject.primaryAction();
} else {
targetObject.offHandAction();
}
}
});
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api/xrpose | data/mdn-content/files/en-us/web/api/xrpose/angularvelocity/index.md | ---
title: "XRPose: angularVelocity property"
short-title: angularVelocity
slug: Web/API/XRPose/angularVelocity
page-type: web-api-instance-property
browser-compat: api.XRPose.angularVelocity
---
{{APIRef}}{{SecureContext_Header}}
The `angularVelocity` read-only property of the
{{DOMxRef("XRPose")}} interface is a {{DOMxRef("DOMPointReadOnly")}} describing
the angular velocity in radians per second relative to the base
{{DOMxRef("XRSpace")}}.
## Value
A {{DOMxRef("DOMPointReadOnly")}} describing the angular velocity in radians
per second relative to the base {{DOMxRef("XRSpace")}}. Returns [`null`](/en-US/docs/Web/JavaScript/Reference/Operators/null)
if the user agent can't populate this value.
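## Examples

The following sketch reads the angular velocity of a tracked controller each frame, guarding against the `null` case; the input source and reference space names are assumptions for illustration.

```js
const gripPose = frame.getPose(inputSource.gripSpace, xrReferenceSpace);
if (gripPose?.angularVelocity) {
  const { x, y, z } = gripPose.angularVelocity;
  console.log(`Rotation rate: ${x}, ${y}, ${z} rad/s`);
}
```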
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- {{domxref("XRPose.linearVelocity")}}
| 0 |
data/mdn-content/files/en-us/web/api/xrpose | data/mdn-content/files/en-us/web/api/xrpose/linearvelocity/index.md | ---
title: "XRPose: linearVelocity property"
short-title: linearVelocity
slug: Web/API/XRPose/linearVelocity
page-type: web-api-instance-property
browser-compat: api.XRPose.linearVelocity
---
{{APIRef}}{{SecureContext_Header}}
The `linearVelocity` read-only property of the
{{DOMxRef("XRPose")}} interface is a {{DOMxRef("DOMPointReadOnly")}} describing
the linear velocity in meters per second relative to the base
{{DOMxRef("XRSpace")}}.
## Value
A {{DOMxRef("DOMPointReadOnly")}} describing the linear velocity in meters
per second relative to the base {{DOMxRef("XRSpace")}}. Returns [`null`](/en-US/docs/Web/JavaScript/Reference/Operators/null)
if the user agent can't populate this value.
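## Examples

This sketch estimates how fast a controller is moving, which could be used, for example, to detect a throwing gesture. The `frame`, `inputSource`, and `xrRefSpace` values are assumed to come from the surrounding session setup:

```js
function getControllerSpeed(frame, inputSource, xrRefSpace) {
  if (!inputSource.gripSpace) {
    return null; // This input source has no trackable grip space
  }
  const pose = frame.getPose(inputSource.gripSpace, xrRefSpace);
  if (!pose?.linearVelocity) {
    return null; // The user agent can't populate the velocity
  }
  const { x, y, z } = pose.linearVelocity;
  return Math.hypot(x, y, z); // meters per second
}
```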
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- {{domxref("XRPose.linearVelocity")}}
| 0 |
data/mdn-content/files/en-us/web/api | data/mdn-content/files/en-us/web/api/rtcrtptransceiver/index.md | ---
title: RTCRtpTransceiver
slug: Web/API/RTCRtpTransceiver
page-type: web-api-interface
browser-compat: api.RTCRtpTransceiver
---
{{APIRef("WebRTC")}}
The WebRTC interface **`RTCRtpTransceiver`** describes a permanent pairing of an {{domxref("RTCRtpSender")}} and an {{domxref("RTCRtpReceiver")}}, along with some shared state.
Each {{Glossary("SDP")}} media section describes one bidirectional SRTP ("Secure Real Time Protocol") stream (excepting the media section for {{domxref("RTCDataChannel")}}, if present).
This pairing of send and receive SRTP streams is significant for some applications, so `RTCRtpTransceiver` is used to represent this pairing, along with other important state from the media section.
Each non-disabled SRTP media section is always represented by exactly one transceiver.
A transceiver is uniquely identified using its {{domxref("RTCRtpTransceiver.mid", "mid")}} property, which is the same as the media ID (`mid`) of its corresponding m-line. An `RTCRtpTransceiver` is **associated** with an m-line if its `mid` is non-null; otherwise it's considered disassociated.
## Instance properties
- {{domxref("RTCRtpTransceiver.currentDirection", "currentDirection")}} {{ReadOnlyInline}}
- : A read-only string which indicates the transceiver's current negotiated directionality, or `null` if the transceiver has never participated in an exchange of offers and answers.
To change the transceiver's directionality, set the value of the {{domxref("RTCRtpTransceiver.direction", "direction")}} property.
- {{domxref("RTCRtpTransceiver.direction", "direction")}}
- : A string which is used to set the transceiver's desired direction.
- {{domxref("RTCRtpTransceiver.mid", "mid")}} {{ReadOnlyInline}}
- : The media ID of the m-line associated with this transceiver. This association is established, when possible, whenever either a local or remote description is applied. This field is `null` if neither a local nor a remote description has been applied, or if its associated m-line is rejected by either a remote offer or any answer.
- {{domxref("RTCRtpTransceiver.receiver", "receiver")}} {{ReadOnlyInline}}
- : The {{domxref("RTCRtpReceiver")}} object that handles receiving and decoding incoming media.
- {{domxref("RTCRtpTransceiver.sender", "sender")}} {{ReadOnlyInline}}
- : The {{domxref("RTCRtpSender")}} object responsible for encoding and sending data to the remote peer.
- {{domxref("RTCRtpTransceiver.stopped", "stopped")}} {{Deprecated_Inline}}
- : Indicates whether or not sending and receiving using the paired `RTCRtpSender` and `RTCRtpReceiver` has been permanently disabled, either due to SDP offer/answer, or due to a call to {{domxref("RTCRtpTransceiver.stop", "stop()")}}.
## Instance methods
- {{domxref("RTCRtpTransceiver.setCodecPreferences", "setCodecPreferences()")}}
- : A list of {{domxref("RTCRtpCodecParameters")}} objects which override the default preferences used by the {{Glossary("user agent")}} for the transceiver's codecs.
- {{domxref("RTCRtpTransceiver.stop", "stop()")}}
- : Permanently stops the `RTCRtpTransceiver`.
The associated sender stops sending data, and the associated receiver likewise stops receiving and decoding incoming data.
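## Examples

The following sketch finds the transceiver handling a connection's audio by inspecting each receiver's track; `pc` is assumed to be an existing {{domxref("RTCPeerConnection")}}:

```js
const audioTransceiver = pc
  .getTransceivers()
  .find((transceiver) => transceiver.receiver.track.kind === "audio");

if (audioTransceiver) {
  console.log(`Audio transceiver media ID: ${audioTransceiver.mid}`);
}
```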
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [WebRTC API](/en-US/docs/Web/API/WebRTC_API)
- [Introduction to the Real-time Transport Protocol (RTP)](/en-US/docs/Web/API/WebRTC_API/Intro_to_RTP)
- {{domxref("RTCPeerConnection.addTrack()")}} and {{domxref("RTCPeerConnection.addTransceiver()")}} both create transceivers
- {{domxref("RTCRtpReceiver")}} and {{domxref("RTCRtpSender")}}
| 0 |
data/mdn-content/files/en-us/web/api/rtcrtptransceiver | data/mdn-content/files/en-us/web/api/rtcrtptransceiver/stop/index.md | ---
title: "RTCRtpTransceiver: stop() method"
short-title: stop()
slug: Web/API/RTCRtpTransceiver/stop
page-type: web-api-instance-method
browser-compat: api.RTCRtpTransceiver.stop
---
{{APIRef("WebRTC")}}
The **`stop()`** method in the {{domxref("RTCRtpTransceiver")}} interface permanently stops the transceiver by stopping both the associated {{domxref("RTCRtpSender")}} and
{{domxref("RTCRtpReceiver")}}.
## Syntax
```js-nolint
stop()
```
### Parameters
None.
### Return value
None ({{jsxref("undefined")}}).
### Exceptions
- `InvalidStateError` {{domxref("DOMException")}}
- : Thrown if the `RTCPeerConnection`, of which the transceiver is a member, is closed.
## Description
When you call `stop()` on a transceiver, the sender immediately stops sending media and each of its RTP streams is closed using the {{Glossary("RTCP")}} `"BYE"` message.
The receiver then stops receiving media; the receiver's {{domxref("RTCRtpReceiver.track", "track")}} is stopped, and the transceiver's {{domxref("RTCRtpTransceiver.direction", "direction")}} is changed to `stopped`.
Renegotiation is triggered by sending a {{domxref("RTCPeerConnection.negotiationneeded_event", "negotiationneeded")}} event to the transceiver's {{domxref("RTCPeerConnection")}}, so that the connection can adapt to the change.
The method does nothing if the transceiver is already stopped.
You can check whether it has stopped by comparing {{domxref("RTCRtpTransceiver.currentDirection", "currentDirection")}} to `"stopped"`.
> **Note:** Earlier versions of the specification used the deprecated {{domxref("RTCRtpTransceiver.stopped", "stopped")}} {{deprecated_inline}} property to indicate if the transceiver has stopped.
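## Examples

This sketch permanently stops every transceiver on a connection before closing it. It assumes `pc` is an open {{domxref("RTCPeerConnection")}}; note that `stop()` must be called _before_ `pc.close()`, since it throws an `InvalidStateError` once the connection is closed:

```js
// Stop all senders and receivers, then close the connection
for (const transceiver of pc.getTransceivers()) {
  transceiver.stop();
}
pc.close();
```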
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [WebRTC API](/en-US/docs/Web/API/WebRTC_API)
- [Introduction to the Real-time Transport Protocol (RTP)](/en-US/docs/Web/API/WebRTC_API/Intro_to_RTP)
- {{domxref("MediaStreamTrack")}}
| 0 |
data/mdn-content/files/en-us/web/api/rtcrtptransceiver | data/mdn-content/files/en-us/web/api/rtcrtptransceiver/mid/index.md | ---
title: "RTCRtpTransceiver: mid property"
short-title: mid
slug: Web/API/RTCRtpTransceiver/mid
page-type: web-api-instance-property
browser-compat: api.RTCRtpTransceiver.mid
---
{{APIRef("WebRTC")}}
The read-only {{domxref("RTCRtpTransceiver")}} interface's
**`mid`** property specifies the negotiated media ID
(`mid`) which the local and remote peers have agreed upon to uniquely
identify the stream's pairing of sender and receiver.
## Value
A string which uniquely identifies the pairing of source and
destination of the transceiver's stream. Its value is taken from the media ID of the SDP
m-line. This value is `null` if negotiation has not completed.
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Introduction to the Real-time Transport Protocol (RTP)](/en-US/docs/Web/API/WebRTC_API/Intro_to_RTP)
| 0 |
data/mdn-content/files/en-us/web/api/rtcrtptransceiver | data/mdn-content/files/en-us/web/api/rtcrtptransceiver/setcodecpreferences/index.md | ---
title: "RTCRtpTransceiver: setCodecPreferences() method"
short-title: setCodecPreferences()
slug: Web/API/RTCRtpTransceiver/setCodecPreferences
page-type: web-api-instance-method
browser-compat: api.RTCRtpTransceiver.setCodecPreferences
---
{{APIRef("WebRTC")}}
The {{domxref("RTCRtpTransceiver")}} method **`setCodecPreferences()`** configures the transceiver's preferred list of codecs.
The specified set of codecs will be used for all future connections that include this transceiver until this method is called again.
When preparing to open an {{domxref("RTCPeerConnection")}}, you can change the codec parameters from the {{Glossary("user agent", "user agent's")}} default configuration by calling `setCodecPreferences()` _before_ calling either {{domxref("RTCPeerConnection.createOffer()")}} or {{domxref("RTCPeerConnection.createAnswer", "createAnswer()")}}.
A guide to codecs supported by WebRTC—and each codec's positive and negative characteristics—can be found in [Codecs used by WebRTC](/en-US/docs/Web/Media/Formats/WebRTC_codecs).
## Syntax
```js-nolint
setCodecPreferences(codecs)
```
### Parameters
- `codecs`
- : An array of objects, each providing the parameters for one of the transceiver's supported [media codecs](/en-US/docs/Web/Media/Formats/WebRTC_codecs), ordered by preference.
If `codecs` is empty, the codec configurations are all returned to the user agent's defaults.
> **Note:** Any codecs not included in `codecs` will not be considered during the process of negotiating a connection.
> This lets you prevent the use of codecs you don't wish to use.
Each codec object in the array has the following properties:
- `channels` {{optional_inline}}
- : A positive integer value indicating the maximum number of channels supported by the codec; for example, a codec that supports only mono sound would have a value of 1; stereo codecs would have a value of 2, and so on.
- `clockRate`
- : A positive integer specifying the codec's clock rate in Hertz (Hz).
The IANA maintains a [list of codecs and their parameters](https://www.iana.org/assignments/rtp-parameters/rtp-parameters.xhtml#rtp-parameters-1), including their clock rates.
- `mimeType`
- : A string indicating the codec's MIME media type and subtype.
The MIME type strings used by RTP differ from those used elsewhere.
See {{RFC(3555, "", 4)}} for the complete IANA registry of these types.
Also see [Codecs used by WebRTC](/en-US/docs/Web/Media/Formats/WebRTC_codecs) for details about potential codecs that might be referenced here.
- `sdpFmtpLine` {{optional_inline}}
- : A string giving the format specific parameters field from the `a=fmtp` line in the SDP which corresponds to the codec, if such a line exists.
If there is no parameters field, this property is left out.
### Return value
None ({{jsxref("undefined")}}).
### Exceptions
- `InvalidAccessError` {{domxref("DOMException")}}
- : The `codecs` list includes one or more codecs which are not supported by the transceiver.
## Usage notes
### Getting a list of supported codecs
You can only include in the `codecs` list codecs which the transceiver actually supports.
That means that either the associated {{domxref("RTCRtpSender")}} or the {{domxref("RTCRtpReceiver")}} needs to support every codec in the list.
If any unsupported codecs are listed, the browser will throw an `InvalidAccessError` exception when you call this method.
A good approach to setting codec preferences is to first get the list of codecs that are actually supported, then modify that list to match what you want.
Pass the altered list into `setCodecPreferences()` to specify your preferences.
To determine which codecs are supported by the transceiver, call the sender's {{domxref("RTCRtpSender.getCapabilities_static", "getCapabilities()")}} and the receiver's {{domxref("RTCRtpReceiver.getCapabilities_static", "getCapabilities()")}} methods and get the `codecs` array from the results of each.
The following code snippet demonstrates how to get both the list of codecs supported by the transceiver's {{domxref("RTCRtpSender")}} and {{domxref("RTCRtpReceiver")}}.
```js
const availSendCodecs = transceiver.sender.getCapabilities("video").codecs;
const availReceiveCodecs = transceiver.receiver.getCapabilities("video").codecs;
```
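### Reordering the codec list

Once you have the supported codecs, you can reorder the list and pass the result back in. The following sketch prefers VP8 for a video transceiver by moving it to the front of the receiver's capability list; the `transceiver` variable is assumed to come from the surrounding code:

```js
const capabilities = RTCRtpReceiver.getCapabilities("video");
if (capabilities) {
  const preferred = capabilities.codecs.filter(
    (codec) => codec.mimeType === "video/VP8",
  );
  const remaining = capabilities.codecs.filter(
    (codec) => codec.mimeType !== "video/VP8",
  );
  // Preferred codec(s) first; the rest keep their default order
  transceiver.setCodecPreferences([...preferred, ...remaining]);
}
```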
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [WebRTC API](/en-US/docs/Web/API/WebRTC_API)
- [Codecs used by WebRTC](/en-US/docs/Web/Media/Formats/WebRTC_codecs)
- [Introduction to the Real-time Transport Protocol (RTP)](/en-US/docs/Web/API/WebRTC_API/Intro_to_RTP)
- [Web media technologies](/en-US/docs/Web/Media)
| 0 |
data/mdn-content/files/en-us/web/api/rtcrtptransceiver | data/mdn-content/files/en-us/web/api/rtcrtptransceiver/sender/index.md | ---
title: "RTCRtpTransceiver: sender property"
short-title: sender
slug: Web/API/RTCRtpTransceiver/sender
page-type: web-api-instance-property
browser-compat: api.RTCRtpTransceiver.sender
---
{{APIRef("WebRTC")}}
The read-only **`sender`** property
of WebRTC's {{domxref("RTCRtpTransceiver")}} interface indicates the
{{domxref("RTCRtpSender")}} responsible for encoding and sending outgoing media data
for the transceiver's stream.
## Value
An {{domxref("RTCRtpSender")}} object used to encode and send media whose media ID
matches the current value of {{domxref("RTCRtpTransceiver.mid", "mid")}}.
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Introduction to the Real-time Transport Protocol (RTP)](/en-US/docs/Web/API/WebRTC_API/Intro_to_RTP)
- {{domxref("RTCRtpSender")}}
| 0 |
data/mdn-content/files/en-us/web/api/rtcrtptransceiver | data/mdn-content/files/en-us/web/api/rtcrtptransceiver/receiver/index.md | ---
title: "RTCRtpTransceiver: receiver property"
short-title: receiver
slug: Web/API/RTCRtpTransceiver/receiver
page-type: web-api-instance-property
browser-compat: api.RTCRtpTransceiver.receiver
---
{{APIRef("WebRTC")}}
The read-only **`receiver`** property
of WebRTC's {{domxref("RTCRtpTransceiver")}} interface indicates the
{{domxref("RTCRtpReceiver")}} responsible for receiving and decoding incoming media
data for the transceiver's stream.
## Value
An {{domxref("RTCRtpReceiver")}} object which is responsible for receiving and decoding
incoming media data whose media ID is the same as the current value of
{{domxref("RTCRtpTransceiver.mid", "mid")}}.
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Introduction to the Real-time Transport Protocol (RTP)](/en-US/docs/Web/API/WebRTC_API/Intro_to_RTP)
- {{domxref("RTCRtpReceiver")}}
| 0 |
data/mdn-content/files/en-us/web/api/rtcrtptransceiver | data/mdn-content/files/en-us/web/api/rtcrtptransceiver/direction/index.md | ---
title: "RTCRtpTransceiver: direction property"
short-title: direction
slug: Web/API/RTCRtpTransceiver/direction
page-type: web-api-instance-property
browser-compat: api.RTCRtpTransceiver.direction
---
{{APIRef("WebRTC")}}
The {{domxref("RTCRtpTransceiver")}} property **`direction`** is a string that indicates the transceiver's _preferred_ directionality.
The directionality indicates whether the transceiver will offer to send and/or receive {{Glossary("RTP")}} data, or whether it is inactive or stopped (terminated).
When setting the transceiver's direction, the value is not applied immediately.
The _current_ direction is indicated by the {{domxref("RTCRtpTransceiver.currentDirection", "currentDirection")}} property.
## Value
A string with one of the following values:
- `"sendrecv"`
- : Transceiver offers to send and receive RTP data:
- `RTCRtpSender`: Offers to send RTP data, and will do so if the remote peer accepts the connection and at least one of the sender's encodings is active.
- `RTCRtpReceiver`: Offers to receive RTP data, and does so if the remote peer accepts.
- `"sendonly"`
- : Transceiver offers to send but not receive RTP data:
- `RTCRtpSender`: Offers to send RTP data, and will do so if the remote peer accepts the connection and at least one of the sender's encodings is active.
- `RTCRtpReceiver`: Does _not_ offer to receive RTP data and will not do so.
- `"recvonly"`
- : Transceiver offers to receive but not send RTP data:
- `RTCRtpSender`: Does _not_ offer to send RTP data, and will not do so.
- `RTCRtpReceiver`: Offers to receive RTP data, and will do so if the remote peer offers.
- `"inactive"`
- : Transceiver is inactive:
- `RTCRtpSender`: Does _not_ offer to send RTP data, and will not do so.
- `RTCRtpReceiver`: Does _not_ offer to receive RTP data and will not do so.
- `"stopped"`
- : This is the terminal state of the transceiver.
The transceiver is stopped and will not send or receive RTP data or offer to do so.
Setting this value when the transceiver is not already stopped raises a `TypeError`.
- `RTCRtpSender`: Does _not_ offer to send RTP data, and will not do so.
- `RTCRtpReceiver`: Does _not_ offer to receive RTP data and will not do so.
### Exceptions
When setting the value of `direction`, the following exception can occur:
- `InvalidStateError` {{domxref("DOMException")}}
- : The receiver's {{domxref("RTCPeerConnection")}} is closed or the {{domxref("RTCRtpReceiver")}} is stopped.
- `TypeError`
- : The value is being set to `stopped` when the current value is anything other than `stopped`.
## Description
The **`direction`** property can be used to set or get the transceiver's _preferred_ directionality.
Updating the directionality does not take effect immediately.
If the new value of `direction` is different from the existing value, renegotiation of the connection is required, so a {{domxref("RTCPeerConnection.negotiationneeded_event", "negotiationneeded")}} event is sent to the {{domxref("RTCPeerConnection")}}.
A `direction` value (other than `stopped`) is then used by {{domxref("RTCPeerConnection.createOffer()")}} or {{domxref("RTCPeerConnection.createAnswer()")}} to generate the {{glossary("SDP")}} message created by those methods.
For example, if the `direction` is specified as `"sendrecv"`, the corresponding SDP a-line indicates the directionality:
```plain
a=sendrecv
```
The new directionality takes effect once the negotiation process is completed and the new session description is successfully applied.
The transceiver's _current_ direction is indicated by the {{domxref("RTCRtpTransceiver.currentDirection", "currentDirection")}} property.
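For example, the following sketch temporarily pauses sending on a transceiver while continuing to receive, then restores two-way media later. It assumes `transceiver` came from the surrounding code and that the application handles the resulting `negotiationneeded` events elsewhere:

```js
// Stop sending, keep receiving; triggers renegotiation
transceiver.direction = "recvonly";

// …later, resume two-way media; triggers renegotiation again
transceiver.direction = "sendrecv";
```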
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- {{domxref("RTCRtpTransceiver.currentDirection")}}
| 0 |
data/mdn-content/files/en-us/web/api/rtcrtptransceiver | data/mdn-content/files/en-us/web/api/rtcrtptransceiver/currentdirection/index.md | ---
title: "RTCRtpTransceiver: currentDirection property"
short-title: currentDirection
slug: Web/API/RTCRtpTransceiver/currentDirection
page-type: web-api-instance-property
browser-compat: api.RTCRtpTransceiver.currentDirection
---
{{APIRef("WebRTC")}}
The read-only {{domxref("RTCRtpTransceiver")}} property **`currentDirection`** is a string which indicates the current negotiated directionality of the transceiver.
The directionality indicates whether the transceiver will offer to send and/or receive {{Glossary("RTP")}} data, or whether it is inactive or stopped and won't send or receive data.
The transceiver's preferred directionality can be set and read using the {{domxref("RTCRtpTransceiver.direction", "direction")}} property.
Changing the `direction` triggers a renegotiation, which may eventually result in the `currentDirection` also changing.
## Value
The value is initially `null`, prior to negotiation using an offer/answer.
After negotiation the value is a string with one of the following values:
- `"sendrecv"`
- : Transceiver offers to send and receive RTP data:
- `RTCRtpSender`: Offers to send RTP data, and will do so if the remote peer accepts the connection and at least one of the sender's encodings is active.
- `RTCRtpReceiver`: Offers to receive RTP data, and does so if the remote peer accepts.
- `"sendonly"`
- : Transceiver offers to send but not receive RTP data:
- `RTCRtpSender`: Offers to send RTP data, and will do so if the remote peer accepts the connection and at least one of the sender's encodings is active.
- `RTCRtpReceiver`: Does _not_ offer to receive RTP data and will not do so.
- `"recvonly"`
- : Transceiver offers to receive but not send RTP data:
- `RTCRtpSender`: Does _not_ offer to send RTP data, and will not do so.
- `RTCRtpReceiver`: Offers to receive RTP data, and will do so if the remote peer offers.
- `"inactive"`
- : Transceiver is inactive:
- `RTCRtpSender`: Does _not_ offer to send RTP data, and will not do so.
- `RTCRtpReceiver`: Does _not_ offer to receive RTP data and will not do so.
- `"stopped"`
- : This is the terminal state of the transceiver.
The transceiver is stopped and will not send or receive RTP data or offer to do so.
- `RTCRtpSender`: Does _not_ offer to send RTP data, and will not do so.
- `RTCRtpReceiver`: Does _not_ offer to receive RTP data and will not do so.
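## Examples

The following sketch checks whether negotiation has completed and whether the transceiver has been terminated, assuming `transceiver` is an {{domxref("RTCRtpTransceiver")}} obtained from a connection:

```js
if (transceiver.currentDirection === null) {
  console.log("Directionality has not been negotiated yet.");
} else if (transceiver.currentDirection === "stopped") {
  console.log("This transceiver has been permanently stopped.");
} else {
  console.log(`Negotiated direction: ${transceiver.currentDirection}`);
}
```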
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- {{domxref("RTCRtpTransceiver.direction")}}
| 0 |
data/mdn-content/files/en-us/web/api/rtcrtptransceiver | data/mdn-content/files/en-us/web/api/rtcrtptransceiver/stopped/index.md | ---
title: "RTCRtpTransceiver: stopped property"
short-title: stopped
slug: Web/API/RTCRtpTransceiver/stopped
page-type: web-api-instance-property
status:
- deprecated
browser-compat: api.RTCRtpTransceiver.stopped
---
{{APIRef("WebRTC")}}{{deprecated_header}}
> **Note:** Instead of using this deprecated property, compare {{domxref("RTCRtpTransceiver.currentDirection", "currentDirection")}} to `"stopped"`.
The read-only **`stopped`** property on the {{domxref("RTCRtpTransceiver")}} interface indicates whether or not the transceiver's associated sender and receiver have both been stopped.
The transceiver is stopped if the {{domxref("RTCRtpTransceiver.stop", "stop()")}} method has been called or if a change to either the local or the remote description has caused the transceiver to be stopped for some reason.
## Value
A Boolean value which is `true` if the transceiver's
{{domxref("RTCRtpTransceiver.sender", "sender")}} will no longer send data, and its
{{domxref("RTCRtpTransceiver.receiver", "receiver")}} will no longer receive data. If
either or both are still at work, the result is `false`.
## Specifications
This feature is not part of any current specification. It is no longer on track to become a standard.
## Browser compatibility
{{Compat}}
## See also
- [WebRTC API](/en-US/docs/Web/API/WebRTC_API)
- [Introduction to the Real-time Transport Protocol (RTP)](/en-US/docs/Web/API/WebRTC_API/Intro_to_RTP)
| 0 |
data/mdn-content/files/en-us/web/api | data/mdn-content/files/en-us/web/api/structuredclone/index.md | ---
title: structuredClone() global function
short-title: structuredClone()
slug: Web/API/structuredClone
page-type: web-api-global-function
browser-compat: api.structuredClone
---
{{APIRef("HTML DOM")}}{{AvailableInWorkers}}
The global **`structuredClone()`** method creates a [deep clone](/en-US/docs/Glossary/Deep_copy) of a given value using the [structured clone algorithm](/en-US/docs/Web/API/Web_Workers_API/Structured_clone_algorithm).
The method also allows [transferable objects](/en-US/docs/Web/API/Web_Workers_API/Transferable_objects) in the original value to be _transferred_ rather than cloned to the new object.
Transferred objects are detached from the original object and attached to the new object; they are no longer accessible in the original object.
## Syntax
```js-nolint
structuredClone(value)
structuredClone(value, options)
```
### Parameters
- `value`
- : The object to be cloned.
This can be any [structured-cloneable type](/en-US/docs/Web/API/Web_Workers_API/Structured_clone_algorithm#supported_types).
- `options` {{optional_inline}}
- : An object with the following properties:
- `transfer`
- : An array of [transferable objects](/en-US/docs/Web/API/Web_Workers_API/Transferable_objects) that will be moved rather than cloned to the returned object.
### Return value
The returned value is a [deep copy](/en-US/docs/Glossary/Deep_copy) of the original `value`.
### Exceptions
- `DataCloneError` {{domxref("DOMException")}}
- : Thrown if any part of the input value is not serializable.
## Description
This function can be used to [deep copy](/en-US/docs/Glossary/Deep_copy) JavaScript values.
It also supports circular references, as shown below:
```js
// Create an object with a value and a circular reference to itself.
const original = { name: "MDN" };
original.itself = original;
// Clone it
const clone = structuredClone(original);
console.assert(clone !== original); // the objects are not the same (not same identity)
console.assert(clone.name === "MDN"); // they do have the same values
console.assert(clone.itself === clone); // and the circular reference is preserved
```
### Transferring values
[Transferable objects](/en-US/docs/Web/API/Web_Workers_API/Transferable_objects) (only) can be transferred rather than duplicated in the cloned object, using the `transfer` property of the `options` parameter. Transferring makes the original object unusable.
> **Note:** A scenario where this might be useful is when asynchronously validating some data in a buffer before saving it.
> To avoid the buffer being modified before the data is saved, you can clone the buffer and validate that data.
> If you also _transfer_ the data, any attempts to modify the original buffer will fail, preventing its accidental misuse.
The following code shows how to clone an array and transfer its underlying resources to the new object.
On return, the original `uInt8Array.buffer` will be cleared.
```js
// 16MB = 1024 * 1024 * 16
const uInt8Array = Uint8Array.from({ length: 1024 * 1024 * 16 }, (v, i) => i);
const transferred = structuredClone(uInt8Array, {
transfer: [uInt8Array.buffer],
});
console.log(uInt8Array.byteLength); // 0
```
You can clone any number of objects and transfer any subset of those objects.
For example, the code below would transfer `arrayBuffer1` from the passed in value, but not `arrayBuffer2`.
```js
const transferred = structuredClone(
{ x: { y: { z: arrayBuffer1, w: arrayBuffer2 } } },
{ transfer: [arrayBuffer1] },
);
```
## Examples
### Cloning an object
In this example, we clone an object with one member, which is an array. After cloning, changes to each object do not affect the other object.
```js
const mushrooms1 = {
amanita: ["muscaria", "virosa"],
};
const mushrooms2 = structuredClone(mushrooms1);
mushrooms2.amanita.push("pantherina");
mushrooms1.amanita.pop();
console.log(mushrooms2.amanita); // ["muscaria", "virosa", "pantherina"]
console.log(mushrooms1.amanita); // ["muscaria"]
```
### Transferring an object
In this example we create an {{jsxref("ArrayBuffer")}} and then clone the object it is a member of, transferring the buffer. We can use the buffer in the cloned object, but if we try to use the original buffer we will get an exception.
```js
// Create an ArrayBuffer with a size in bytes
const buffer1 = new ArrayBuffer(16);
const object1 = {
buffer: buffer1,
};
// Clone the object containing the buffer, and transfer it
const object2 = structuredClone(object1, { transfer: [buffer1] });
// Create an array from the cloned buffer
const int32View2 = new Int32Array(object2.buffer);
int32View2[0] = 42;
console.log(int32View2[0]);
// Creating an array from the original buffer throws a TypeError
const int32View1 = new Int32Array(object1.buffer);
```
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [A polyfill of `structuredClone`](https://github.com/zloirock/core-js#structuredclone) is available in [`core-js`](https://github.com/zloirock/core-js)
- [Structured clone algorithm](/en-US/docs/Web/API/Web_Workers_API/Structured_clone_algorithm)
- [Structured clone polyfill](https://github.com/ungap/structured-clone)
| 0 |
data/mdn-content/files/en-us/web/api | data/mdn-content/files/en-us/web/api/midioutputmap/index.md | ---
title: MIDIOutputMap
slug: Web/API/MIDIOutputMap
page-type: web-api-interface
browser-compat: api.MIDIOutputMap
---
{{APIRef("Web MIDI API")}}{{SecureContext_Header}}
The **`MIDIOutputMap`** read-only interface of the [Web MIDI API](/en-US/docs/Web/API/Web_MIDI_API) provides the set of MIDI output ports that are currently available.
A `MIDIOutputMap` instance is a read-only [`Map`-like object](/en-US/docs/Web/JavaScript/Reference/Global_Objects/Map#map-like_browser_apis), in which each key is the ID string of a MIDI output, and the associated value is the corresponding {{domxref("MIDIOutput")}} object.
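## Examples

A `MIDIOutputMap` is obtained from the {{domxref("MIDIAccess.outputs")}} property. The following sketch lists each available output port; it assumes the user grants MIDI access when prompted:

```js
navigator.requestMIDIAccess().then((access) => {
  // Iterate the Map-like object of [id, MIDIOutput] entries
  for (const [id, output] of access.outputs) {
    console.log(`Output port ${id}: ${output.name}`);
  }
});
```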
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
| 0 |
data/mdn-content/files/en-us/web/api | data/mdn-content/files/en-us/web/api/media_capture_and_streams_api/index.md | ---
title: Media Capture and Streams API (Media Stream)
slug: Web/API/Media_Capture_and_Streams_API
page-type: web-api-overview
browser-compat:
- api.MediaStream
- api.MediaStreamTrack
- api.MediaDevices
- api.MediaDeviceInfo
- api.InputDeviceInfo
- api.CanvasCaptureMediaStreamTrack
spec-urls:
- https://w3c.github.io/mediacapture-main/
- https://w3c.github.io/mediacapture-fromelement/
---
{{DefaultAPISidebar("Media Capture and Streams")}}
The **Media Capture and Streams** API, often called the **Media Streams API** or **MediaStream API**, is an API related to [WebRTC](/en-US/docs/Web/API/WebRTC_API) which provides support for streaming audio and video data.
It provides the interfaces and methods for working with the streams and their constituent tracks, the constraints associated with data formats, the success and error callbacks when using the data asynchronously, and the events that are fired during the process.
## Concepts and usage
The API is based on the manipulation of a {{domxref("MediaStream")}} object representing a flux of audio- or video-related data. See an example in [Get the media stream](/en-US/docs/Web/API/Media_Capture_and_Streams_API/Taking_still_photos#the_startup_function).
A `MediaStream` consists of zero or more {{domxref("MediaStreamTrack")}} objects, representing various audio or video **tracks**. Each `MediaStreamTrack` may have one or more **channels**. The channel represents the smallest unit of a media stream, such as an audio signal associated with a given speaker, like _left_ or _right_ in a stereo audio track.
`MediaStream` objects have a single **input** and a single **output**. A `MediaStream` object generated by {{domxref("MediaDevices.getUserMedia", "getUserMedia()")}} is called _local_, and has as its source input one of the user's cameras or microphones. A non-local `MediaStream` may represent a media element, like {{HTMLElement("video")}} or {{HTMLElement("audio")}}, a stream originating over the network and obtained via the WebRTC {{domxref("RTCPeerConnection")}} API, or a stream created using the [Web Audio API](/en-US/docs/Web/API/Web_Audio_API) {{domxref("MediaStreamAudioDestinationNode")}}.
The output of the `MediaStream` object is linked to a **consumer**. It can be a media element, like {{HTMLElement("audio")}} or {{HTMLElement("video")}}, the WebRTC {{domxref("RTCPeerConnection")}} API or a [Web Audio API](/en-US/docs/Web/API/Web_Audio_API) {{domxref("MediaStreamAudioSourceNode")}}.
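For example, the following sketch links a local stream (whose input is the user's camera and microphone) to a {{HTMLElement("video")}} element acting as the consumer; it assumes the page contains a `<video>` element:

```js
navigator.mediaDevices
  .getUserMedia({ video: true, audio: true })
  .then((stream) => {
    const video = document.querySelector("video");
    video.srcObject = stream; // The <video> element consumes the stream
    video.play();
  })
  .catch((err) => {
    console.error(`Could not get a media stream: ${err}`);
  });
```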
## Interfaces
In these reference articles, you'll find the fundamental information you'll need to know about each of the interfaces that make up the Media Capture and Streams API.
- {{domxref("CanvasCaptureMediaStreamTrack")}}
- {{domxref("InputDeviceInfo")}}
- {{domxref("MediaDeviceInfo")}}
- {{domxref("MediaDevices")}}
- {{domxref("MediaStream")}}
- {{domxref("MediaStreamTrack")}}
- {{domxref("MediaStreamTrackEvent")}}
- {{domxref("MediaTrackConstraints")}}
- {{domxref("MediaTrackSettings")}}
- {{domxref("MediaTrackSupportedConstraints")}}
- {{domxref("OverconstrainedError")}}
## Events
- {{domxref("MediaStream/addtrack_event", "addtrack")}}
- {{domxref("MediaStreamTrack/ended_event", "ended")}}
- {{domxref("MediaStreamTrack/mute_event", "mute")}}
- {{domxref("MediaStream/removetrack_event", "removetrack")}}
- {{domxref("MediaStreamTrack/unmute_event", "unmute")}}
## Guides and tutorials
The [Capabilities, constraints, and settings](/en-US/docs/Web/API/Media_Capture_and_Streams_API/Constraints) article discusses the concepts of **constraints** and **capabilities**, as well as media settings, and includes a [Constraint Exerciser](/en-US/docs/Web/API/Media_Capture_and_Streams_API/Constraints#example_constraint_exerciser) that lets you experiment with the results of different constraint sets being applied to the audio and video tracks coming from the computer's A/V input devices (such as its webcam and microphone).
The [Taking still photos with getUserMedia()](/en-US/docs/Web/API/Media_Capture_and_Streams_API/Taking_still_photos) article shows how to use [`getUserMedia()`](/en-US/docs/Web/API/MediaDevices/getUserMedia) to access the camera on a computer or mobile phone with `getUserMedia()` support and take a photo with it.
## Browser compatibility
{{Compat}}
## See also
- [WebRTC](/en-US/docs/Web/API/WebRTC_API) - the introductory page to the API
- [Taking still photos with WebRTC](/en-US/docs/Web/API/Media_Capture_and_Streams_API/Taking_still_photos): a demonstration and tutorial about using `getUserMedia()`.
| 0 |
data/mdn-content/files/en-us/web/api/media_capture_and_streams_api | data/mdn-content/files/en-us/web/api/media_capture_and_streams_api/taking_still_photos/index.md | ---
title: Taking still photos with getUserMedia()
slug: Web/API/Media_Capture_and_Streams_API/Taking_still_photos
page-type: guide
---
{{DefaultAPISidebar("Media Capture and Streams")}}
This article shows how to use [`navigator.mediaDevices.getUserMedia()`](/en-US/docs/Web/API/MediaDevices/getUserMedia) to access the camera on a computer or mobile phone with `getUserMedia()` support and take a photo with it.

You can also jump straight to the [Demo](#demo) if you like.
## The HTML markup
[Our HTML interface](#html) has two main operational sections: the stream and capture panel and the presentation panel. Each of these is presented side-by-side in its own {{HTMLElement("div")}} to facilitate styling and control.
The first panel on the left contains two components: a {{HTMLElement("video")}} element, which will receive the stream from `navigator.mediaDevices.getUserMedia()`, and a {{HTMLElement("button")}} the user clicks to capture a video frame.
```html
<div class="camera">
<video id="video">Video stream not available.</video>
<button id="startbutton">Take photo</button>
</div>
```
This is straightforward, and we'll see how it ties together when we get into the JavaScript code.
Next, we have a {{HTMLElement("canvas")}} element into which the captured frames are stored, potentially manipulated in some way, and then converted into an output image file. This canvas is kept hidden by styling the canvas with {{cssxref("display")}}`:none`, to avoid cluttering up the screen — the user does not need to see this intermediate stage.
We also have an {{HTMLElement("img")}} element into which we will draw the image — this is the final display shown to the user.
```html
<canvas id="canvas"> </canvas>
<div class="output">
<img id="photo" alt="The screen capture will appear in this box." />
</div>
```
That's all of the relevant HTML. The rest is just some page layout fluff and a bit of text offering a link back to this page.
## The JavaScript code
Now let's take a look at the [JavaScript code](#javascript). We'll break it up into a few bite-sized pieces to make it easier to explain.
### Initialization
We start by wrapping the whole script in an anonymous function to avoid global variables, then setting up various variables we'll be using.
```js
(() => {
const width = 320; // We will scale the photo width to this
  let height = 0; // This will be computed based on the input stream
  let streaming = false;
let video = null;
let canvas = null;
let photo = null;
let startbutton = null;
```
Those variables are:
- `width`
- : Whatever size the incoming video is, we're going to scale the resulting image to be 320 pixels wide.
- `height`
- : The output height of the image will be computed given the `width` and the aspect ratio of the stream.
- `streaming`
- : Indicates whether or not there is currently an active stream of video running.
- `video`
- : This will be a reference to the {{HTMLElement("video")}} element after the page is done loading.
- `canvas`
- : This will be a reference to the {{HTMLElement("canvas")}} element after the page is done loading.
- `photo`
- : This will be a reference to the {{HTMLElement("img")}} element after the page is done loading.
- `startbutton`
- : This will be a reference to the {{HTMLElement("button")}} element that's used to trigger capture. We'll get that after the page is done loading.
### The startup() function
The `startup()` function is run when the page has finished loading, courtesy of {{domxref("EventTarget.addEventListener")}}. This function's job is to request access to the user's webcam, initialize the output {{HTMLElement("img")}} to a default state, and to establish the event listeners needed to receive each frame of video from the camera and react when the button is clicked to capture an image.
#### Getting element references
First, we grab references to the major elements we need to be able to access.
```js
function startup() {
video = document.getElementById('video');
canvas = document.getElementById('canvas');
photo = document.getElementById('photo');
startbutton = document.getElementById('startbutton');
```
#### Get the media stream
The next task is to get the media stream:
```js
navigator.mediaDevices
.getUserMedia({ video: true, audio: false })
.then((stream) => {
video.srcObject = stream;
video.play();
})
.catch((err) => {
console.error(`An error occurred: ${err}`);
});
```
Here, we're calling {{domxref("MediaDevices.getUserMedia()")}} and requesting a video stream (without audio). It returns a promise which we attach success and failure callbacks to.
The success callback receives a `stream` object as input. We make it the source of the {{HTMLElement("video")}} element by assigning the stream to the element's `srcObject` property.
Once the stream is linked to the `<video>` element, we start it playing by calling [`HTMLMediaElement.play()`](/en-US/docs/Web/API/HTMLMediaElement#play).
The error callback is called if opening the stream doesn't work. This will happen for example if there's no compatible camera connected, or the user denied access.
#### Listen for the video to start playing
After calling [`HTMLMediaElement.play()`](/en-US/docs/Web/API/HTMLMediaElement#play) on the {{HTMLElement("video")}}, there's a (hopefully brief) period of time that elapses before the stream of video begins to flow. To avoid blocking until that happens, we add an event listener to `video` for the {{domxref("HTMLMediaElement/canplay_event", "canplay")}} event, which is delivered when the video playback actually begins. At that point, all the properties in the `video` object have been configured based on the stream's format.
```js
video.addEventListener(
"canplay",
(ev) => {
if (!streaming) {
height = (video.videoHeight / video.videoWidth) * width;
video.setAttribute("width", width);
video.setAttribute("height", height);
canvas.setAttribute("width", width);
canvas.setAttribute("height", height);
streaming = true;
}
},
false,
);
```
This callback does nothing unless it's the first time it's been called; this is tested by looking at the value of our `streaming` variable, which is `false` the first time this method is run.
If this is indeed the first run, we compute the video's height from its actual aspect ratio (`video.videoHeight / video.videoWidth`), scaled to the width at which we're going to render it, `width`.
Next, the `width` and `height` of both the video and the canvas are set to match each other by calling {{domxref("Element.setAttribute()")}} with the appropriate values on each of the two elements. Finally, we set the `streaming` variable to `true` to prevent us from inadvertently running this setup code again.
#### Handle clicks on the button
To capture a still photo each time the user clicks the `startbutton`, we need to add an event listener to the button, to be called when the {{domxref("Element/click_event", "click")}} event is issued:
```js
startbutton.addEventListener(
"click",
(ev) => {
takepicture();
ev.preventDefault();
},
false,
);
```
This method is simple enough: it just calls our `takepicture()` function, defined below in the section [Capturing a frame from the stream](#capturing_a_frame_from_the_stream), then calls {{domxref("Event.preventDefault()")}} on the received event to prevent the click from being handled more than once.
#### Wrapping up the startup() method
There are only two more lines of code in the `startup()` method:
```js
clearphoto();
}
```
This is where we call the `clearphoto()` method we'll describe below in the section [Clearing the photo box](#clearing_the_photo_box).
### Clearing the photo box
Clearing the photo box involves creating an image, then converting it into a format usable by the {{HTMLElement("img")}} element that displays the most recently captured frame. That code looks like this:
```js
function clearphoto() {
const context = canvas.getContext("2d");
context.fillStyle = "#AAA";
context.fillRect(0, 0, canvas.width, canvas.height);
const data = canvas.toDataURL("image/png");
photo.setAttribute("src", data);
}
```
We start by getting a reference to the hidden {{HTMLElement("canvas")}} element that we use for offscreen rendering. Next we set the `fillStyle` to `#AAA` (a fairly light grey), and fill the entire canvas with that color by calling {{domxref("CanvasRenderingContext2D.fillRect()","fillRect()")}}.
Last in this function, we convert the canvas into a PNG image and call {{domxref("Element.setAttribute", "photo.setAttribute()")}} to make our captured still box display the image.
### Capturing a frame from the stream
There's one last function to define, and it's the point to the entire exercise: the `takepicture()` function, whose job it is to capture the currently displayed video frame, convert it into a PNG file, and display it in the captured frame box. The code looks like this:
```js
function takepicture() {
const context = canvas.getContext("2d");
if (width && height) {
canvas.width = width;
canvas.height = height;
context.drawImage(video, 0, 0, width, height);
const data = canvas.toDataURL("image/png");
photo.setAttribute("src", data);
} else {
clearphoto();
}
}
```
As is the case any time we need to work with the contents of a canvas, we start by getting the {{domxref("CanvasRenderingContext2D","2D drawing context")}} for the hidden canvas.
Then, if the width and height are both non-zero (meaning that there's at least potentially valid image data), we set the width and height of the canvas to match that of the captured frame, then call {{domxref("CanvasRenderingContext2D.drawImage()", "drawImage()")}} to draw the current frame of the video into the context, filling the entire canvas with the frame image.
> **Note:** This takes advantage of the fact that the {{domxref("HTMLVideoElement")}} interface looks like an {{domxref("HTMLImageElement")}} to any API that accepts an `HTMLImageElement` as a parameter, with the video's current frame presented as the image's contents.
Once the canvas contains the captured image, we convert it to PNG format by calling {{domxref("HTMLCanvasElement.toDataURL()")}} on it; finally, we call {{domxref("Element.setAttribute", "photo.setAttribute()")}} to make our captured still box display the image.
If there isn't a valid image available (that is, the `width` and `height` are both 0), we clear the contents of the captured frame box by calling `clearphoto()`.
## Demo
### HTML
```html
<div class="contentarea">
<h1>MDN - navigator.mediaDevices.getUserMedia(): Still photo capture demo</h1>
<p>
This example demonstrates how to set up a media stream using your built-in
webcam, fetch an image from that stream, and create a PNG using that image.
</p>
<div class="camera">
<video id="video">Video stream not available.</video>
<button id="startbutton">Take photo</button>
</div>
<canvas id="canvas"> </canvas>
<div class="output">
<img id="photo" alt="The screen capture will appear in this box." />
</div>
<p>
Visit our article
<a
href="https://developer.mozilla.org/en-US/docs/Web/API/Media_Capture_and_Streams_API/Taking_still_photos">
Taking still photos with WebRTC</a
>
to learn more about the technologies used here.
</p>
</div>
```
### CSS
```css
#video {
border: 1px solid black;
box-shadow: 2px 2px 3px black;
width: 320px;
height: 240px;
}
#photo {
border: 1px solid black;
box-shadow: 2px 2px 3px black;
width: 320px;
height: 240px;
}
#canvas {
display: none;
}
.camera {
width: 340px;
display: inline-block;
}
.output {
width: 340px;
display: inline-block;
vertical-align: top;
}
#startbutton {
display: block;
position: relative;
margin-left: auto;
margin-right: auto;
bottom: 32px;
background-color: rgb(0 150 0 / 50%);
border: 1px solid rgb(255 255 255 / 70%);
box-shadow: 0px 0px 1px 2px rgb(0 0 0 / 20%);
font-size: 14px;
font-family: "Lucida Grande", "Arial", sans-serif;
color: rgb(255 255 255 / 100%);
}
.contentarea {
font-size: 16px;
font-family: "Lucida Grande", "Arial", sans-serif;
width: 760px;
}
```
### JavaScript
```js
(() => {
// The width and height of the captured photo. We will set the
// width to the value defined here, but the height will be
// calculated based on the aspect ratio of the input stream.
const width = 320; // We will scale the photo width to this
let height = 0; // This will be computed based on the input stream
// |streaming| indicates whether or not we're currently streaming
// video from the camera. Obviously, we start at false.
let streaming = false;
// The various HTML elements we need to configure or control. These
// will be set by the startup() function.
let video = null;
let canvas = null;
let photo = null;
let startbutton = null;
function showViewLiveResultButton() {
if (window.self !== window.top) {
// Ensure that if our document is in a frame, we get the user
// to first open it in its own tab or window. Otherwise, it
// won't be able to request permission for camera access.
document.querySelector(".contentarea").remove();
const button = document.createElement("button");
button.textContent = "View live result of the example code above";
document.body.append(button);
button.addEventListener("click", () => window.open(location.href));
return true;
}
return false;
}
function startup() {
if (showViewLiveResultButton()) {
return;
}
video = document.getElementById("video");
canvas = document.getElementById("canvas");
photo = document.getElementById("photo");
startbutton = document.getElementById("startbutton");
navigator.mediaDevices
.getUserMedia({ video: true, audio: false })
.then((stream) => {
video.srcObject = stream;
video.play();
})
.catch((err) => {
console.error(`An error occurred: ${err}`);
});
video.addEventListener(
"canplay",
(ev) => {
if (!streaming) {
height = video.videoHeight / (video.videoWidth / width);
// Firefox currently has a bug where the height can't be read from
// the video, so we will make assumptions if this happens.
if (isNaN(height)) {
height = width / (4 / 3);
}
video.setAttribute("width", width);
video.setAttribute("height", height);
canvas.setAttribute("width", width);
canvas.setAttribute("height", height);
streaming = true;
}
},
false,
);
startbutton.addEventListener(
"click",
(ev) => {
takepicture();
ev.preventDefault();
},
false,
);
clearphoto();
}
// Fill the photo with an indication that none has been
// captured.
function clearphoto() {
const context = canvas.getContext("2d");
context.fillStyle = "#AAA";
context.fillRect(0, 0, canvas.width, canvas.height);
const data = canvas.toDataURL("image/png");
photo.setAttribute("src", data);
}
// Capture a photo by fetching the current contents of the video
// and drawing it into a canvas, then converting that to a PNG
// format data URL. By drawing it on an offscreen canvas and then
// drawing that to the screen, we can change its size and/or apply
// other changes before drawing it.
function takepicture() {
const context = canvas.getContext("2d");
if (width && height) {
canvas.width = width;
canvas.height = height;
context.drawImage(video, 0, 0, width, height);
const data = canvas.toDataURL("image/png");
photo.setAttribute("src", data);
} else {
clearphoto();
}
}
// Set up our event listener to run the startup process
// once loading is complete.
window.addEventListener("load", startup, false);
})();
```
### Result
{{EmbedLiveSample('Demo', '100%', 30)}}
## Fun with filters
Since we're capturing images from the user's webcam by grabbing frames from a {{HTMLElement("video")}} element, we can very easily apply filters and fun effects to the video. As it turns out, any CSS filters you apply to the element using the {{cssxref("filter")}} property affect the captured photo. These filters can range from the simple (making the image black and white) to the extreme (gaussian blurs and hue rotation).
You can play with this effect using, for example, the Firefox developer tools' [style editor](https://firefox-source-docs.mozilla.org/devtools-user/style_editor/index.html); see [Edit CSS filters](https://firefox-source-docs.mozilla.org/devtools-user/page_inspector/how_to/edit_css_filters/index.html) for details on how to do so.
## Using specific devices
You can, if needed, restrict the set of permitted video sources to a specific device or set of devices. To do so, call {{domxref("MediaDevices.enumerateDevices")}}. When the promise is fulfilled with an array of {{domxref("MediaDeviceInfo")}} objects describing the available devices, find the ones that you want to allow and specify the corresponding {{domxref("MediaTrackConstraints.deviceId", "deviceId")}} or `deviceId`s in the {{domxref("MediaTrackConstraints")}} object passed into {{domxref("MediaDevices.getUserMedia", "getUserMedia()")}}.
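The following sketch illustrates the idea: it picks the first available camera and passes its ID as an `exact` constraint. The selection logic here is only a placeholder; a real application would match devices against its own criteria. Note that device labels are only populated once the user has granted media permission:

```js
async function getStreamFromFirstCamera() {
  const devices = await navigator.mediaDevices.enumerateDevices();
  const camera = devices.find((device) => device.kind === "videoinput");

  // Fall back to any camera if none was found by the filter above
  return navigator.mediaDevices.getUserMedia({
    video: camera ? { deviceId: { exact: camera.deviceId } } : true,
  });
}
```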
## See also
- [Sample code on GitHub](https://github.com/mdn/samples-server/tree/master/s/webrtc-capturestill)
- {{domxref("MediaDevices.getUserMedia")}}
- [Using frames from a video](/en-US/docs/Web/API/Canvas_API/Tutorial/Using_images#using_frames_from_a_video) in the Canvas tutorial
- {{domxref("CanvasRenderingContext2D.drawImage()")}}
| 0 |
data/mdn-content/files/en-us/web/api/media_capture_and_streams_api | data/mdn-content/files/en-us/web/api/media_capture_and_streams_api/constraints/index.md | ---
title: Capabilities, constraints, and settings
slug: Web/API/Media_Capture_and_Streams_API/Constraints
page-type: guide
browser-compat: api.MediaDevices.getSupportedConstraints
---
{{APIRef("Media Capture and Streams")}}
This article discusses the twin concepts of **constraints** and **capabilities**, as well as media settings, and includes an example we call the [Constraint Exerciser](#example_constraint_exerciser). The Constraint Exerciser lets you experiment with the results of different constraint sets being applied to the audio and video tracks coming from the computer's A/V input devices (such as its webcam and microphone).
Historically, writing scripts for the Web that work intimately with Web APIs has had a well-known challenge: often, your code needs to know whether or not an API exists and if so, what its limitations are on the {{Glossary("user agent")}} it's running on. Figuring this out has often been difficult, and has usually involved looking at some combination of which {{Glossary("user agent")}} (or browser) you're running on, which version it is, looking to see if certain objects exist, trying to see whether various things work or not and determining what errors occur, and so forth. The result has been a lot of very fragile code, or a reliance on libraries which figure this stuff out for you, then implement {{Glossary("polyfill", "polyfills")}} to patch the holes in the implementation on your behalf.
Capabilities and constraints let the browser and website or app exchange information about what **constrainable properties** the browser's implementation supports and what values it supports for each.
## Overview
The process works like this (using {{domxref("MediaStreamTrack")}} as an example):
1. If needed, call {{domxref("MediaDevices.getSupportedConstraints()")}} to get the list of **supported constraints**, which tells you what constrainable properties the browser knows about. This isn't always necessary, since any that aren't known will be ignored when you specify them—but if you have any that you can't get by without, you can start by checking to be sure they're on the list.
2. Once the script knows whether the property or properties it wishes to use are supported, it can then check the **capabilities** of the API and its implementation by examining the object returned by the track's `getCapabilities()` method; this object lists each supported constraint and the values or range of values which are supported.
3. Finally, the track's `applyConstraints()` method is called to configure the API as desired by specifying the values or ranges of values it wishes to use for any of the constrainable properties about which it has a preference.
4. The track's `getConstraints()` method returns the set of constraints passed into the most recent call to `applyConstraints()`. This may not represent the actual current state of the track, due to properties whose requested values had to be adjusted and because platform default values aren't represented. For a complete representation of the track's current configuration, use `getSettings()`.
In the Media Capture and Streams API, both {{domxref("MediaStream")}} and {{domxref("MediaStreamTrack")}} have constrainable properties.
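Put together, the flow looks something like the following sketch, which assumes `track` is a video {{domxref("MediaStreamTrack")}} and that the code runs inside an `async` function:

```js
const supported = navigator.mediaDevices.getSupportedConstraints();

if (supported.frameRate) {
  const capabilities = track.getCapabilities();
  // Fall back to 30 fps if the track doesn't report a frameRate range
  const maxRate = capabilities.frameRate?.max ?? 30;
  await track.applyConstraints({
    frameRate: { ideal: Math.min(30, maxRate) },
  });

  console.log(track.getConstraints()); // The constraints we asked for
  console.log(track.getSettings()); // The configuration actually in effect
}
```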
## Determining if a constraint is supported
If you need to know whether or not a given constraint is supported by the user agent, you can find out by calling {{domxref("MediaDevices.getSupportedConstraints", "navigator.mediaDevices.getSupportedConstraints()")}} to get a list of the constrainable properties which the browser knows, like this:
```js
const supported = navigator.mediaDevices.getSupportedConstraints();
document.getElementById("frameRateSlider").disabled = !supported["frameRate"];
```
In this example, the supported constraints are fetched, and a control that lets the user configure the frame rate is disabled if the `frameRate` constraint isn't supported.
## How constraints are defined
A single constraint is an object whose name matches the constrainable property whose desired value or range of values is being specified. This object contains zero or more individual constraints, as well as an optional sub-object named `advanced`, which contains another set of zero or more constraints which the user agent must satisfy if at all possible. The user agent attempts to satisfy constraints in the order specified in the constraint set.
The most important thing to understand is that most constraints aren't requirements; instead, they're requests. There are exceptions, and we'll get to those shortly.
### Requesting a specific value for a setting
Most simply, each constraint may be a specific value indicating a desired value for the setting. For example:
```js
const constraints = {
width: 1920,
height: 1080,
aspectRatio: 1.777777778,
};
myTrack.applyConstraints(constraints);
```
In this case, the constraints indicate that any values are fine for nearly all properties, but that a standard high definition (HD) video size is desired, with the standard 16:9 aspect ratio. There's no guarantee that the resulting track will match any of these, but the user agent should do its best to match as many as possible.
The prioritization of the properties is simple: if two properties' requested values are mutually exclusive, then the one listed first in the constraint set will be used. As an example, if the browser running the code above couldn't provide a 1920x1080 track but could do 1920x900, then that's what would be provided.
Simple constraints like these, specifying a single value, are always treated as non-required. The user agent will try to provide what you request but will not guarantee that what you get will match. However, if you use simple values for properties when calling {{domxref("MediaStreamTrack.applyConstraints()")}}, the request will always succeed, because these values will be considered a request, not a requirement.
### Specifying a range of values
Sometimes, any value within a range is acceptable for a property's value. You can specify ranges with either or both minimum and maximum values, and you can even specify an ideal value within the range, if you choose. If you provide an ideal value, the browser will try to get as close as possible to matching that value, given the other constraints specified.
```js
const supports = navigator.mediaDevices.getSupportedConstraints();
if (
!supports["width"] ||
!supports["height"] ||
!supports["frameRate"] ||
!supports["facingMode"]
) {
// We're missing needed properties, so handle that error.
} else {
const constraints = {
width: { min: 640, ideal: 1920, max: 1920 },
height: { min: 400, ideal: 1080 },
aspectRatio: 1.777777778,
frameRate: { max: 30 },
facingMode: { exact: "user" },
};
myTrack
.applyConstraints(constraints)
.then(() => {
/* do stuff if constraints applied successfully */
})
.catch((reason) => {
/* failed to apply constraints; reason is why */
});
}
```
Here, after ensuring that the constrainable properties for which matches must be found are supported (`width`, `height`, `frameRate`, and `facingMode`), we set up constraints which request a width no smaller than 640 and no larger than 1920 (but preferably 1920), a height no smaller than 400 (but ideally 1080), an aspect ratio of 16:9 (1.777777778), and a frame rate no greater than 30 frames per second. In addition, the only acceptable input device is a camera facing the user (a "selfie cam"). If the `width`, `height`, `frameRate`, or `facingMode` constraints can't be met, the promise returned by `applyConstraints()` will be rejected.
> **Note:** Constraints which are specified using any or all of `max`, `min`, or `exact` are always treated as mandatory. If any constraint which uses one or more of those can't be met when calling `applyConstraints()`, the promise will be rejected.
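For instance, this sketch (again assuming `myTrack` is an existing video track) shows how an unsatisfiable mandatory constraint causes rejection, and how to find out which constraint was at fault:

```js
myTrack
  .applyConstraints({ frameRate: { exact: 10000 } }) // almost certainly unsatisfiable
  .catch((reason) => {
    // A failed mandatory constraint rejects with an OverconstrainedError;
    // its "constraint" property names the offending constraint.
    console.error(`Can't satisfy "${reason.constraint}": ${reason.message}`);
  });
```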
### Advanced constraints
So-called advanced constraints are created by adding an `advanced` property to the constraint set; this property's value is an array of additional constraint sets which are considered optional. There are few if any use cases for this feature, and there is some interest in removing it from the specification, so it will not be discussed here. If you wish to learn more, see [section 11 of the Media Capture and Streams specification](https://www.w3.org/TR/mediacapture-streams/#idl-def-Constraints), past example 2.
## Checking capabilities
You can call {{domxref("MediaStreamTrack.getCapabilities()")}} to get a list of all of the supported capabilities and the values or ranges of values which each one accepts on the current platform and user agent. This function returns an object which lists each constrainable property supported by the browser and a value or range of values which are supported for each one of those properties.
> **Note:** `getCapabilities()` hasn't been implemented yet by all major browsers. For the time being, you'll have to try to get what you need, and if you can't, decide what to do at that point. See [Firefox bug 1179084](https://bugzil.la/1179084), for example.
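Given that uneven support, a defensive sketch like the following (assuming `stream` is an existing {{domxref("MediaStream")}} with a video track) feature-detects the method before relying on its result:

```js
const videoTrack = stream.getVideoTracks()[0];

if (typeof videoTrack.getCapabilities === "function") {
  const capabilities = videoTrack.getCapabilities();
  // Range-valued properties report their supported bounds; for example,
  // capabilities.width might look like { min: 1, max: 1920 }.
  console.log("Width range:", capabilities.width);
  console.log("Facing modes:", capabilities.facingMode); // e.g. ["user"]
} else {
  // No getCapabilities(); apply constraints optimistically and inspect
  // getSettings() afterward to see what the browser actually chose.
}
```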
## Applying constraints
The first and most common way to use constraints is to specify them when you call {{domxref("MediaDevices.getUserMedia", "getUserMedia()")}}:
```js
navigator.mediaDevices
.getUserMedia({
video: {
width: { min: 640, ideal: 1920 },
height: { min: 400, ideal: 1080 },
aspectRatio: { ideal: 1.7777777778 },
},
audio: {
sampleSize: 16,
channelCount: 2,
},
})
.then((stream) => {
videoElement.srcObject = stream;
})
.catch(handleError);
```
In this example, constraints are applied at `getUserMedia()` time, asking for an ideal set of options with fallbacks for the video.
> **Note:** You can specify one or more media input device IDs to establish restrictions on which input sources are allowed. To collect a list of the available devices, you can call {{domxref("MediaDevices.enumerateDevices", "navigator.mediaDevices.enumerateDevices()")}}, then for each device which meets the desired criteria, add its `deviceId` to the `MediaConstraints` object that eventually gets passed into `getUserMedia()`.
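As a sketch of that approach (reusing the `videoElement` and `handleError` names from examples elsewhere in this article), the following restricts the request to known camera inputs:

```js
navigator.mediaDevices
  .enumerateDevices()
  .then((devices) => {
    // Keep only camera inputs; refine this filter to suit your own criteria.
    const cameraIds = devices
      .filter((device) => device.kind === "videoinput")
      .map((device) => device.deviceId);

    // deviceId accepts an array of strings; any listed device may be used.
    return navigator.mediaDevices.getUserMedia({
      video: { deviceId: cameraIds },
    });
  })
  .then((stream) => {
    videoElement.srcObject = stream;
  })
  .catch(handleError);
```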
You can also change the constraints of an existing {{domxref("MediaStreamTrack")}} on the fly, by calling the track's {{domxref("MediaStreamTrack.applyConstraints", "applyConstraints()")}} method, passing into it an object representing the constraints you wish to apply to the track:
```js
videoTrack.applyConstraints({
width: 1920,
height: 1080,
});
```
In this snippet, the video track referenced by `videoTrack` is updated so that its resolution matches 1920x1080 pixels (1080p high definition) as closely as possible.
## Retrieving current constraints and settings
It's important to remember the difference between **constraints** and **settings**. Constraints are a way to specify what values you need, want, and are willing to accept for the various constrainable properties (as described in the documentation for {{domxref("MediaTrackConstraints")}}), while settings are the actual values of each constrainable property at the current time.
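A quick sketch of the distinction, assuming `track` is an active video {{domxref("MediaStreamTrack")}} (the logged values shown in the comments are hypothetical):

```js
// Constraints echo back what was requested…
console.log(track.getConstraints()); // e.g. { width: { ideal: 1920 } }

// …while settings report what the browser actually delivered.
console.log(track.getSettings()); // e.g. { width: 1280, height: 720, … }
```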
### Getting the constraints in effect
If at any time you need to fetch the set of constraints that are currently applied to the media, you can get that information by calling {{domxref("MediaStreamTrack.getConstraints()")}}, as shown in the example below.
```js
function switchCameras(track, camera) {
const constraints = track.getConstraints();
constraints.facingMode = camera;
track.applyConstraints(constraints);
}
```
This function accepts a {{domxref("MediaStreamTrack")}} and a string indicating the camera facing mode to use, fetches the current constraints, sets the value of the {{domxref("MediaTrackConstraints.facingMode")}} to the specified value, then applies the updated constraint set.
### Getting the current settings for a track
Unless you only use exact constraints (which is pretty restrictive, so be sure you mean it!), there's no guarantee exactly what you're going to actually get after the constraints are applied. The values of the constrainable properties as they actually are in the resulting media are referred to as the settings. If you need to know the true format and other properties of the media, you can obtain those settings by calling {{domxref("MediaStreamTrack.getSettings()")}}. This returns an object based on the dictionary {{domxref("MediaTrackSettings")}}. For example:
```js
function whichCamera(track) {
return track.getSettings().facingMode;
}
```
This function uses `getSettings()` to obtain the track's currently in-use values for the constrainable properties and returns the value of {{domxref("MediaTrackSettings.facingMode", "facingMode")}}.
## Example: Constraint exerciser
In this example, we create an exerciser which lets you experiment with media constraints by editing the source code describing the constraint sets for audio and video tracks. You can then apply those changes and see the result, including both what the stream looks like and what the actual media settings are set to after applying the new constraints.
The HTML and CSS for this example are pretty simple, and aren't shown here. You can look at the complete example by {{LiveSampleLink("Example_Constraint_exerciser", "clicking here")}}.
```html hidden
<p>
Experiment with media constraints! Edit the constraint sets for the video and
audio tracks in the edit boxes on the left, then click the "Apply Constraints"
button to try them out. The actual settings the browser selected and is using
are shown in the boxes on the right. Below all of that, you'll see the video
itself.
</p>
<p>Click the "Start" button to begin.</p>
<h3>Constrainable properties available:</h3>
<ul id="supportedConstraints"></ul>
<div id="startButton" class="button">Start</div>
<div class="wrapper">
<div class="trackrow">
<div class="leftside">
<h3>Requested video constraints:</h3>
<textarea id="videoConstraintEditor" cols="32" rows="8"></textarea>
</div>
<div class="rightside">
<h3>Actual video settings:</h3>
<textarea id="videoSettingsText" cols="32" rows="8" disabled></textarea>
</div>
</div>
<div class="trackrow">
<div class="leftside">
<h3>Requested audio constraints:</h3>
<textarea id="audioConstraintEditor" cols="32" rows="8"></textarea>
</div>
<div class="rightside">
<h3>Actual audio settings:</h3>
<textarea id="audioSettingsText" cols="32" rows="8" disabled></textarea>
</div>
</div>
<div class="button" id="applyButton">Apply Constraints</div>
</div>
<video id="video" autoplay></video>
<div class="button" id="stopButton">Stop Video</div>
<div id="log"></div>
```
```css hidden
body {
font:
14px "Open Sans",
"Arial",
sans-serif;
}
video {
margin-top: 20px;
border: 1px solid black;
}
.button {
cursor: pointer;
width: 150px;
border: 1px solid black;
font-size: 16px;
text-align: center;
padding-top: 2px;
padding-bottom: 4px;
color: white;
background-color: darkgreen;
}
.wrapper {
margin-bottom: 10px;
width: 600px;
}
.trackrow {
height: 200px;
}
.leftside {
float: left;
width: calc(calc(100% / 2) - 10px);
}
.rightside {
float: right;
width: calc(calc(100% / 2) - 10px);
}
textarea {
padding: 8px;
}
h3 {
margin-bottom: 3px;
}
#supportedConstraints {
column-count: 2;
}
#log {
padding-top: 10px;
}
```
### Defaults and variables
First we have the default constraint sets, as strings. These strings are presented in editable {{HTMLElement("textarea")}}s and provide the initial configuration of the stream.
```js
const videoDefaultConstraintString =
'{\n "width": 320,\n "height": 240,\n "frameRate": 30\n}';
const audioDefaultConstraintString =
'{\n "sampleSize": 16,\n "channelCount": 2,\n "echoCancellation": false\n}';
```
These defaults ask for a pretty common camera configuration, but don't insist on any property being of special importance. The browser should do its best to match these settings but will settle for anything it considers a close match.
Then we initialize the variables which will hold the {{domxref("MediaTrackConstraints")}} objects for the video and audio tracks, as well as the variables which will hold references to the video and audio tracks themselves, to `null`.
```js
let videoConstraints = null;
let audioConstraints = null;
let audioTrack = null;
let videoTrack = null;
```
And we get references to all of the elements we'll need to access.
```js
const videoElement = document.getElementById("video");
const logElement = document.getElementById("log");
const supportedConstraintList = document.getElementById("supportedConstraints");
const videoConstraintEditor = document.getElementById("videoConstraintEditor");
const audioConstraintEditor = document.getElementById("audioConstraintEditor");
const videoSettingsText = document.getElementById("videoSettingsText");
const audioSettingsText = document.getElementById("audioSettingsText");
```
These elements are:
- `videoElement`
- : The {{HTMLElement("video")}} element that will show the stream.
- `logElement`
- : A {{HTMLElement("div")}} into which any error messages or other log-type output will be written.
- `supportedConstraintList`
- : A {{HTMLElement("ul")}} (unordered list) into which we programmatically add the names of each of the constrainable properties supported by the user's browser.
- `videoConstraintEditor`
- : A {{HTMLElement("textarea")}} element that lets the user edit the code for the video track's constraint set.
- `audioConstraintEditor`
- : A {{HTMLElement("textarea")}} element that lets the user edit the code for the audio track's constraint set.
- `videoSettingsText`
- : A {{HTMLElement("textarea")}} (which is always disabled) that displays the current settings for the video track's constrainable properties.
- `audioSettingsText`
- : A {{HTMLElement("textarea")}} (which is always disabled) that displays the current settings for the audio track's constrainable properties.
Finally, we set the current contents of the two constraint set editor elements to the defaults.
```js
videoConstraintEditor.value = videoDefaultConstraintString;
audioConstraintEditor.value = audioDefaultConstraintString;
```
### Updating the settings display
To the right of each of the constraint set editors is a second text box which we use to display the current configuration of the track's configurable properties. This display is updated by the function `getCurrentSettings()`, which gets the current settings for the audio and video tracks and inserts the corresponding code into the tracks' settings display boxes by setting their [`value`](/en-US/docs/Web/HTML/Element/textarea#value).
```js
function getCurrentSettings() {
if (videoTrack) {
videoSettingsText.value = JSON.stringify(videoTrack.getSettings(), null, 2);
}
if (audioTrack) {
audioSettingsText.value = JSON.stringify(audioTrack.getSettings(), null, 2);
}
}
```
This gets called after the stream first starts up, as well as any time we've applied updated constraints, as you'll see below.
### Building the track constraint set objects
The `buildConstraints()` function builds the {{domxref("MediaTrackConstraints")}} objects for the audio and video tracks using the code in the two tracks' constraint set edit boxes.
```js
function buildConstraints() {
try {
videoConstraints = JSON.parse(videoConstraintEditor.value);
audioConstraints = JSON.parse(audioConstraintEditor.value);
} catch (error) {
handleError(error);
}
}
```
This uses {{jsxref("JSON.parse()")}} to parse the code in each editor into an object. If either call to `JSON.parse()` throws an exception, `handleError()` is called to output the error message to the log.
### Configuring and starting the stream
The `startVideo()` method handles setting up and starting the video stream.
```js
function startVideo() {
buildConstraints();
navigator.mediaDevices
.getUserMedia({
video: videoConstraints,
audio: audioConstraints,
})
.then((stream) => {
const audioTracks = stream.getAudioTracks();
const videoTracks = stream.getVideoTracks();
videoElement.srcObject = stream;
if (audioTracks.length > 0) {
audioTrack = audioTracks[0];
}
if (videoTracks.length > 0) {
videoTrack = videoTracks[0];
}
})
.then(() => {
return new Promise((resolve) => {
videoElement.onloadedmetadata = resolve;
});
})
.then(() => {
getCurrentSettings();
})
.catch(handleError);
}
```
There are several steps here:
1. It calls `buildConstraints()` to create the {{domxref("MediaTrackConstraints")}} objects for the two tracks from the code in the edit boxes.
2. It calls {{domxref("MediaDevices.getUserMedia", "navigator.mediaDevices.getUserMedia()")}}, passing in the constraints objects for the video and audio tracks. This returns a {{domxref("MediaStream")}} with the audio and video from a source matching the inputs (typically a webcam, although if you provide the right constraints you can get media from other sources).
3. When the stream is obtained, it's attached to the {{HTMLElement("video")}} element so that it's visible on screen, and we grab the audio track and video track into the variables `audioTrack` and `videoTrack`.
4. Then we set up a promise which resolves when the {{domxref("HTMLMediaElement/loadedmetadata_event", "loadedmetadata")}} event occurs on the video element.
5. When that happens, we know the video has started playing, so we call our `getCurrentSettings()` function (described above) to display the actual settings that the browser decided upon after considering our constraints and the capabilities of the hardware.
6. If an error occurs, we log it using the `handleError()` method that we'll look at farther down in this article.
We also need to set up an event listener to watch for the "Start Video" button to be clicked:
```js
document.getElementById("startButton").addEventListener(
"click",
() => {
startVideo();
},
false,
);
```
### Applying constraint set updates
Next, we set up an event listener for the "Apply Constraints" button. If it's clicked and there's not already media in use, we call `startVideo()`, and let that function handle starting the stream with the specified settings in place. Otherwise, we follow these steps to apply the updated constraints to the already-active stream:
1. `buildConstraints()` is called to construct updated {{domxref("MediaTrackConstraints")}} objects for the audio track (`audioConstraints`) and the video track (`videoConstraints`).
2. {{domxref("MediaStreamTrack.applyConstraints()")}} is called on the video track (if there is one) to apply the new `videoConstraints`. If this succeeds, the contents of the video track's current settings box are updated based on the result of calling its {{domxref("MediaStreamTrack.getSettings", "getSettings()")}} method.
3. Once that's done, `applyConstraints()` is called on the audio track (if there is one) to apply the new audio constraints. If this succeeds, the contents of the audio track's current settings box are updated based on the result of calling its {{domxref("MediaStreamTrack.getSettings", "getSettings()")}} method.
4. If an error occurs applying either set of constraints, `handleError()` is used to output a message into the log.
```js
document.getElementById("applyButton").addEventListener(
"click",
() => {
if (!videoTrack && !audioTrack) {
startVideo();
} else {
buildConstraints();
const prettyJson = (obj) => JSON.stringify(obj, null, 2);
if (videoTrack) {
videoTrack
.applyConstraints(videoConstraints)
.then(() => {
videoSettingsText.value = prettyJson(videoTrack.getSettings());
})
.catch(handleError);
}
if (audioTrack) {
audioTrack
.applyConstraints(audioConstraints)
.then(() => {
audioSettingsText.value = prettyJson(audioTrack.getSettings());
})
.catch(handleError);
}
}
},
false,
);
```
### Handling the stop button
Then we set up the handler for the stop button.
```js
document.getElementById("stopButton").addEventListener("click", () => {
if (videoTrack) {
videoTrack.stop();
}
if (audioTrack) {
audioTrack.stop();
}
videoTrack = audioTrack = null;
videoElement.srcObject = null;
});
```
This stops the active tracks, sets the `videoTrack` and `audioTrack` variables to `null` so we know they're gone, and removes the stream from the {{HTMLElement("video")}} element by setting {{domxref("HTMLMediaElement.srcObject")}} to `null`.
### Simple tab support in the editor
This code adds simple support for tabs to the {{HTMLElement("textarea")}} elements by making the tab key insert two space characters when either constraint edit box is focused.
```js
function keyDownHandler(event) {
if (event.key === "Tab") {
const elem = event.target;
const str = elem.value;
const position = elem.selectionStart;
const beforeTab = str.substring(0, position);
const afterTab = str.substring(position, str.length);
const newStr = `${beforeTab} ${afterTab}`;
elem.value = newStr;
elem.selectionStart = elem.selectionEnd = position + 2;
event.preventDefault();
}
}
videoConstraintEditor.addEventListener("keydown", keyDownHandler, false);
audioConstraintEditor.addEventListener("keydown", keyDownHandler, false);
```
### Show constrainable properties the browser supports
The last significant piece of the puzzle: code that displays, for the user's reference, a list of the constrainable properties which their browser supports. Each property is a link to its documentation on MDN for the user's convenience. See the [`MediaDevices.getSupportedConstraints()` examples](/en-US/docs/Web/API/MediaDevices/getSupportedConstraints#examples) for details on how this code works.
> **Note:** Of course, there may be non-standard properties in this list, in which case you probably will find that the documentation link doesn't help much.
```js
const supportedConstraints = navigator.mediaDevices.getSupportedConstraints();
for (const constraint in supportedConstraints) {
if (Object.hasOwn(supportedConstraints, constraint)) {
const elem = document.createElement("li");
elem.innerHTML = `<code><a href='https://developer.mozilla.org/docs/Web/API/MediaTrackSupportedConstraints/${constraint}' target='_blank'>${constraint}</a></code>`;
supportedConstraintList.appendChild(elem);
}
}
```
### Error handling
We also have some simple error handling code; `handleError()` is called to handle promises which fail, and the `log()` function appends the error message to a special logging {{HTMLElement("div")}} box under the video.
```js
function log(msg) {
logElement.innerHTML += `${msg}<br>`;
}
function handleError(reason) {
log(
`Error <code>${reason.name}</code> in constraint <code>${reason.constraint}</code>: ${reason.message}`,
);
}
```
### Result
Here you can see the complete example in action.
{{EmbedLiveSample("Example_Constraint_exerciser", 650, 800)}}
## Specifications
{{Specifications}}
## Browser compatibility
{{Compat}}
## See also
- [Media Capture and Streams API](/en-US/docs/Web/API/Media_Capture_and_Streams_API)
- {{domxref("MediaTrackConstraints")}}
- {{domxref("MediaTrackSettings")}}
- {{domxref("MediaDevices.getSupportedConstraints()")}}
- {{domxref("MediaStreamTrack.applyConstraints()")}}
- {{domxref("MediaStreamTrack.getSettings()")}}