10.5.1.2.1 IOPS group call announcement
Table 10.5.1.2.1-1 describes the information flow for the IOPS group call announcement from one MCPTT client to other MCPTT clients. The packet(s) carrying the IOPS group call announcement are transmitted from the originating MCPTT client to the IOPS MC connectivity function for distribution to the target MCPTT clients.

Table 10.5.1.2.1-1: IOPS group call announcement

Information element | Status | Description
IOPS MCPTT ID | M | The identity of the calling party
IOPS MCPTT group ID | M | The IOPS MCPTT group ID on which the call is to be conducted
SDP offer | M | Media parameters of the MCPTT client
Announcement period | M | Period of the group call announcement
Encryption parameters | O | Encryption parameters to be used for the call, if the call is to be encrypted
Confirm mode indicator | O | Indicates whether the MCPTT group call is to be confirmed
Emergency indicator | O | Indicates that the MCPTT group call is an MCPTT emergency call
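For illustration, the information elements above can be modelled as a simple message structure. The following Python sketch is not part of the specification; the class and field names, and the assumption that the announcement period is carried in seconds, are illustrative only. Mandatory (M) elements are required fields and optional (O) elements default to None:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class IopsGroupCallAnnouncement:
    """IOPS group call announcement (table 10.5.1.2.1-1), illustrative only."""
    iops_mcptt_id: str             # M: identity of the calling party
    iops_mcptt_group_id: str       # M: group on which the call is to be conducted
    sdp_offer: str                 # M: media parameters of the MCPTT client
    announcement_period_s: float   # M: period of the announcement (seconds assumed)
    encryption_parameters: Optional[bytes] = None  # O: present if the call is encrypted
    confirm_mode_indicator: Optional[bool] = None  # O: whether the call is to be confirmed
    emergency_indicator: Optional[bool] = None     # O: marks an MCPTT emergency call
```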
10.5.1.2.2 IOPS group call response
Table 10.5.1.2.2-1 describes the information flow for the IOPS group call response from one MCPTT client to other MCPTT clients. The packet(s) carrying the IOPS group call response are transmitted from the called MCPTT client to the IOPS MC connectivity function for distribution to the target MCPTT clients.

Table 10.5.1.2.2-1: IOPS group call response

Information element | Status | Description
IOPS MCPTT ID | M | The identity of the called party
IOPS MCPTT group ID | M | The IOPS MCPTT group ID of the group on which the call is requested
SDP answer | M | Media parameters selected
Result | M | Result of the group call announcement (success or failure)
10.5.1.2.3 IOPS emergency group call upgrade
Table 10.5.1.2.3-1 describes the information flow for the IOPS emergency group call upgrade from one MCPTT client to other MCPTT clients. The packet(s) carrying the IOPS emergency group call upgrade are transmitted from the originating MCPTT client to the IOPS MC connectivity function for distribution to the target MCPTT clients.

Table 10.5.1.2.3-1: IOPS emergency group call upgrade

Information element | Status | Description
IOPS MCPTT ID | M | The identity of the upgrading MC user
IOPS MCPTT group ID | M | The IOPS MCPTT group ID on which the call is to be upgraded to emergency call
10.5.1.2.4 IOPS emergency group call state cancel
Table 10.5.1.2.4-1 describes the information flow for the IOPS emergency group call state cancel from one MCPTT client to other MCPTT clients. The packet(s) carrying the IOPS emergency group call state cancel are transmitted from the originating MCPTT client to the IOPS MC connectivity function for distribution to the target MCPTT clients.

Table 10.5.1.2.4-1: IOPS emergency group call state cancel

Information element | Status | Description
IOPS MCPTT ID | M | The identity of the cancelling MC user
IOPS MCPTT group ID | M | The IOPS MCPTT group ID on which the emergency call state is to be cancelled
10.5.1.3 IOPS group call setup
Figure 10.5.1.3-1 illustrates the procedure for establishing an IOPS MCPTT group call based on the IP connectivity functionality. The procedure describes how an MCPTT client initiates and establishes an IOPS MCPTT group call with other MCPTT clients.

Pre-conditions:
- The MCPTT user profile used for the IOPS mode of operation is pre-provisioned in the MCPTT UEs;
- The IOPS MCPTT group ID and its associated IOPS group IP multicast address are pre-configured in the MCPTT clients;
- MCPTT users have an active PDN connection or PDU session to the IOPS MC connectivity function for the communication based on the IP connectivity functionality;
- MCPTT users affiliated to the target IOPS MCPTT group are discovered by the IOPS MC connectivity function supporting the IP connectivity functionality;
- The IOPS MC connectivity function may have established a broadcast/multicast session and announced it to the MCPTT clients;
- MCPTT client 1 may have retrieved group connectivity information related to the target IOPS MCPTT group from the IOPS connectivity client;
- MCPTT clients 1, 2 … n are configured within the same IOPS MCPTT group.

Figure 10.5.1.3-1: IOPS group call setup based on the IP connectivity functionality

1. The MCPTT user at MCPTT client 1 would like to initiate an IOPS group call with a specific IOPS MCPTT group based on the IP connectivity functionality.
2. The MCPTT client 1 sends an IOPS group call announcement to the target IOPS MCPTT group. The MCPTT client 1 transmits the group session packets carrying the IOPS group call announcement to the IOPS MC connectivity function for distribution to the corresponding IOPS group IP multicast address.
3. The IOPS MC connectivity function determines that the received packets correspond to a group session targeting a specific IOPS MCPTT group. The IOPS MC connectivity function decides to distribute the received group session packets to the target MCPTT clients over broadcast/multicast and/or unicast transmissions.
4. The IOPS MC connectivity function distributes the group session packets carrying the IOPS group call announcement to the MCPTT clients of the target IOPS MCPTT group.
5. The MCPTT clients receiving the IOPS group call announcement join the IOPS group call and notify the target MCPTT users about the IOPS group call.
6. If the confirm mode indicator is included in the IOPS group call announcement, the receiving MCPTT clients respond to the IOPS MCPTT group indicating the result of the establishment of the announced IOPS group call. The receiving MCPTT clients transmit the group session packets carrying the IOPS group call response to the IOPS MC connectivity function for distribution to the corresponding IOPS group IP multicast address.

NOTE 1: Step 6 can also occur prior to step 5.

7. The IOPS MC connectivity function determines that the received packets correspond to a group session targeting a specific IOPS MCPTT group. The IOPS MC connectivity function decides to distribute the received group session packets to the target MCPTT clients over broadcast/multicast and/or unicast transmissions.
8. The IOPS MC connectivity function distributes the group session packets carrying the IOPS group call response to the MCPTT clients of the target IOPS MCPTT group. The MCPTT clients recognize the IOPS group call originator through the IOPS group call announcement and can check the participants of the IOPS group call through the received response messages.
9. The MCPTT clients have successfully established the IOPS group call with floor control based on the IP connectivity functionality.

NOTE 2: Due to the movement of the participants (in and out of the IOPS 3GPP system coverage) during the IOPS group call, the IOPS group call announcement is periodically sent by the MCPTT client 1.

NOTE 3: The participating MCPTT clients do not need to respond to the periodic IOPS group call announcement.
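NOTE 2 implies that the originating client re-sends the announcement on a timer for as long as the call lasts. A minimal sketch of that behaviour, assuming an asyncio-based client and a hypothetical send_to_group() helper that hands packets to the IOPS MC connectivity function for multicast distribution:

```python
import asyncio

async def announce_periodically(announcement, send_to_group, call_ended):
    """Re-send the IOPS group call announcement every announcement period so
    that UEs entering (or re-entering) IOPS coverage can join the call."""
    while not call_ended.is_set():
        # send_to_group() is a hypothetical helper; it forwards the packet to
        # the IOPS MC connectivity function for multicast distribution.
        await send_to_group(announcement.iops_mcptt_group_id, announcement)
        try:
            # Sleep one announcement period, but wake immediately on call release.
            await asyncio.wait_for(call_ended.wait(),
                                   timeout=announcement.announcement_period_s)
        except asyncio.TimeoutError:
            continue  # period elapsed with the call still active: announce again
```

Receivers do not respond to these periodic announcements (NOTE 3), so the loop involves no per-period signalling from the other participants.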
10.5.1.4 IOPS emergency group call
Figure 10.5.1.4-1 illustrates the procedure for establishing an IOPS MCPTT emergency group call based on the IP connectivity functionality. The IOPS emergency group call is a special case of the IOPS group call setup procedure described in clause 10.5.1.3, wherein the IOPS group call announcement contains an indication that the IOPS group call is an IOPS emergency group call. The group call participants can become aware of the IOPS MCPTT group's in-progress emergency state based on the emergency indicator.

When an MCPTT client intends to initiate an IOPS emergency group call, the MCPTT client can request higher priority from the IOPS MC connectivity function via the IOPS discovery request.

An IOPS group call in progress can be upgraded to an IOPS emergency group call by including the emergency indicator within the periodic IOPS group call announcement. An IOPS group call in progress can also be upgraded by a participating MCPTT client by sending an IOPS emergency group call upgrade to the IOPS group. The MCPTT user who initiated the IOPS emergency group call or upgraded the IOPS group call to an emergency group call, or an authorized user, can cancel the emergency state of the group call by sending an IOPS emergency group call state cancel to the IOPS MCPTT group. The emergency state of the IOPS group call remains active until the emergency group call ends or the in-progress emergency state is cancelled.

Pre-conditions:
- The MCPTT user profile used for the IOPS mode of operation is pre-provisioned in the MCPTT UEs;
- The IOPS MCPTT group ID and its associated IOPS group IP multicast address are pre-configured in the MCPTT clients;
- MCPTT users have an active PDN connection or PDU session to the IOPS MC connectivity function for the communication based on the IP connectivity functionality;
- MCPTT users affiliated to the target IOPS MCPTT group are discovered by the IOPS MC connectivity function supporting the IP connectivity functionality;
- The IOPS MC connectivity function may have established a broadcast/multicast session and announced it to the MCPTT clients;
- MCPTT client 1 may have retrieved group connectivity information related to the target IOPS MCPTT group from the IOPS connectivity client;
- MCPTT clients 1, 2 … n are configured within the same IOPS MCPTT group.

Figure 10.5.1.4-1: IOPS emergency group call setup based on the IP connectivity functionality

1. The MCPTT user at MCPTT client 1 would like to initiate an IOPS emergency group call with a specific IOPS MCPTT group based on the IP connectivity functionality.

NOTE 1: The MCPTT client 1 may have previously requested higher priority from the IOPS MC connectivity function using the IOPS discovery request.

2. The MCPTT client 1 sends an IOPS group call announcement to the target IOPS MCPTT group. The announcement contains an indication that the call is an IOPS emergency group call. The MCPTT client 1 transmits the group session packets carrying the IOPS group call announcement to the IOPS MC connectivity function for distribution to the corresponding IOPS group IP multicast address.
3. The IOPS MC connectivity function determines that the received packets correspond to a group session targeting a specific IOPS MCPTT group. The IOPS MC connectivity function decides to distribute the received group session packets to the target MCPTT clients over broadcast/multicast and/or unicast transmissions. If the MCPTT client 1 requested a priority state from the IOPS MC connectivity function, the IOPS MC connectivity function distributes the group session packets with higher priority.
4. The IOPS MC connectivity function distributes the group session packets carrying the IOPS group call announcement to the MCPTT clients of the target IOPS MCPTT group.
5. The MCPTT clients receiving the IOPS group call announcement with an emergency indicator join the IOPS emergency group call and notify the target MCPTT users about the IOPS emergency group call. The IOPS MCPTT group's in-progress emergency state is then established.

NOTE 2: Whilst the emergency state of the IOPS group call remains active, other participating MCPTT clients of the group call may also request higher priority from the IOPS MC connectivity function using the IOPS discovery request.

6. If the confirm mode indicator is included in the IOPS group call announcement, the receiving MCPTT clients respond to the IOPS MCPTT group indicating the result of the establishment of the announced IOPS emergency group call. The receiving MCPTT clients transmit the group session packets carrying the IOPS group call response to the IOPS MC connectivity function for distribution to the corresponding IOPS group IP multicast address.

NOTE 3: Step 6 can also occur prior to step 5.

7. The IOPS MC connectivity function determines that the received packets correspond to a group session targeting a specific IOPS MCPTT group. The IOPS MC connectivity function decides to distribute the received group session packets to the target MCPTT clients over broadcast/multicast and/or unicast transmissions. If any participating MCPTT client of the group call requested a priority state from the IOPS MC connectivity function, the IOPS MC connectivity function distributes the group session packets with higher priority.
8. The IOPS MC connectivity function distributes the group session packets carrying the IOPS group call response to the MCPTT clients of the target IOPS MCPTT group. The MCPTT clients recognize the IOPS emergency group call originator through the IOPS group call announcement and can check the participants of the IOPS group call through the received response messages.
9. The MCPTT clients have successfully established the IOPS emergency group call based on the IP connectivity functionality.

NOTE 4: Due to the movement of the participants (in and out of the IOPS 3GPP system coverage) during the IOPS emergency group call, the IOPS group call announcement is periodically sent by the MCPTT client 1.

NOTE 5: The participating MCPTT clients do not need to respond to the periodic IOPS group call announcement.
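The emergency-state rules above (set by the originator or any participant, cancelled only by the user who set the state or by an authorized user) amount to a small piece of per-group state. A hedged sketch, with identities reduced to plain strings:

```python
class IopsGroupEmergencyState:
    """In-progress emergency state of one IOPS MCPTT group, illustrative only."""

    def __init__(self):
        self.active = False
        self.set_by = None  # IOPS MCPTT ID that set the emergency state

    def upgrade(self, iops_mcptt_id):
        # Triggered by an announcement carrying the emergency indicator or by
        # an IOPS emergency group call upgrade from a participant.
        if not self.active:
            self.active = True
            self.set_by = iops_mcptt_id

    def cancel(self, iops_mcptt_id, is_authorized_user=False):
        # Only the user who set the state, or an authorized user, may cancel it.
        if self.active and (iops_mcptt_id == self.set_by or is_authorized_user):
            self.active = False
            self.set_by = None
            return True
        return False
```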
10.5.1.5 IOPS group call release
Each MCPTT client may release itself from an ongoing IOPS group call without the transmission of any signalling if the call has been inactive for a specific duration.

NOTE: Inactivity time can be set according to the policy of the MCPTT service provider.
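Because the release is purely local, a client-side inactivity timer is sufficient. A minimal sketch, assuming the client refreshes the timer on every observed call activity (media or floor control):

```python
import time

class InactivityRelease:
    """Local, signalling-free release from an IOPS group call after a period
    of inactivity; the limit follows MCPTT service provider policy."""

    def __init__(self, inactivity_limit_s):
        self.inactivity_limit_s = inactivity_limit_s
        self.last_activity = time.monotonic()

    def on_activity(self):
        # Call for any packet sent or received on the group call.
        self.last_activity = time.monotonic()

    def should_release(self):
        return time.monotonic() - self.last_activity >= self.inactivity_limit_s
```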
10.5.2 IOPS private call (IP connectivity functionality)
10.5.2.1 General
The support of MCPTT private calls based on the IP connectivity functionality in the IOPS mode of operation enables the service to be provided by the MCPTT clients over the IOPS MC connectivity function. The IOPS MC connectivity function provides IP connectivity for the communication among MCPTT users.

When an MCPTT user wants to communicate with a specific target MCPTT user based on the IP connectivity functionality, the MCPTT client retrieves the connectivity information of the target MCPTT user (i.e. the MCPTT UE's IP address) from the IOPS connectivity client. The MCPTT clients can then establish the IOPS private call over the IOPS MC connectivity function. The related session packets of the IOPS private call, i.e. signalling and media, are transmitted to the IOPS MC connectivity function addressing the corresponding target MCPTT UE's IP address.

NOTE: The IOPS connectivity client can only provide connectivity information of the target MCPTT user if it is already available (see clause 10.3 on IOPS subscription and notification procedures).

The IOPS MC connectivity function distributes the received session packets over unicast transmissions to the target MCPTT client.

IOPS private calls can be set up in two different commencement modes: automatic commencement mode and manual commencement mode.

The following clauses specify the IOPS private call procedures and information flows for the IP connectivity functionality in the IOPS mode of operation.
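The forwarding rule at the IOPS MC connectivity function is simple: a private-call session packet is relayed only if its destination IP address belongs to a discovered MC user. A hedged sketch, where the packet attribute, the discovered_users mapping, and the forward_unicast() helper are all assumptions:

```python
def relay_private_call_packet(packet, discovered_users, forward_unicast):
    """Relay one IOPS private call session packet (signalling or media).

    packet.dst_ip    -- target MC UE's IP address (assumed attribute)
    discovered_users -- dict mapping IP address -> discovered MC user context
    forward_unicast  -- hypothetical transport helper
    """
    if packet.dst_ip not in discovered_users:
        return False  # destination is not a discovered MC user: do not relay
    forward_unicast(packet.dst_ip, packet)  # deliver over a unicast transmission
    return True
```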
10.5.2.2 Information flows
10.5.2.2.1 IOPS call setup request
Table 10.5.2.2.1-1 describes the information flow for the IOPS call setup request from one MCPTT client to another MCPTT client. The packet(s) carrying the IOPS call setup request are transmitted from the calling MCPTT client to the IOPS MC connectivity function for distribution to the called MCPTT client.

Table 10.5.2.2.1-1: IOPS call setup request

Information element | Status | Description
IOPS MCPTT ID | M | The identity of the calling party
IOPS MCPTT ID | M | The identity of the called party
SDP offer for the IOPS private call | M | SDP with media information offered by the calling client
Location information | O | Location of the calling party
Requested commencement mode | O | An indication that is included if the user is requesting a particular commencement mode
Implicit floor request | O | An indication that the user is also requesting the floor
Emergency indicator | O | Indicates that the MCPTT private call is an MCPTT emergency call
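As with the group call announcement, the request can be modelled as a message structure; the following sketch is illustrative only (the names and the commencement-mode enumeration are assumptions, not normative definitions):

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class CommencementMode(Enum):
    AUTOMATIC = "automatic"
    MANUAL = "manual"

@dataclass
class IopsCallSetupRequest:
    """IOPS call setup request (table 10.5.2.2.1-1), illustrative only."""
    calling_iops_mcptt_id: str                  # M: identity of the calling party
    called_iops_mcptt_id: str                   # M: identity of the called party
    sdp_offer: str                              # M: offered media information
    location_information: Optional[str] = None  # O: location of the calling party
    requested_commencement_mode: Optional[CommencementMode] = None  # O
    implicit_floor_request: bool = False        # O: caller also requests the floor
    emergency_indicator: bool = False           # O: marks an MCPTT emergency call
```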
10.5.2.2.2 IOPS call setup response
Table 10.5.2.2.2-1 describes the information flow for the IOPS call setup response from one MCPTT client to another MCPTT client. The packet(s) carrying the IOPS call setup response are transmitted from the called MCPTT client to the IOPS MC connectivity function for distribution to the calling MCPTT client.

Table 10.5.2.2.2-1: IOPS call setup response

Information element | Status | Description
IOPS MCPTT ID | M | The identity of the calling party
IOPS MCPTT ID | M | The identity of the called party
SDP answer for private call | M | SDP with media parameters selected
Acceptance confirmation | O | An indication whether the user has positively accepted the call
10.5.2.2.3 IOPS MCPTT ringing
Table 10.5.2.2.3-1 describes the information flow for the IOPS MCPTT ringing from one MCPTT client to another MCPTT client. The packet(s) carrying the IOPS MCPTT ringing are transmitted from the called MCPTT client to the IOPS MC connectivity function for distribution to the calling MCPTT client.

Table 10.5.2.2.3-1: IOPS MCPTT ringing information elements

Information element | Status | Description
IOPS MCPTT ID | M | The MCPTT ID of the calling party
IOPS MCPTT ID | M | The MCPTT ID of the called party
Ringing indication | O | Indication to the caller
10.5.2.2.4 IOPS call release request
Table 10.5.2.2.4-1 describes the information flow for the IOPS call release request from one MCPTT client to another MCPTT client. The packet(s) carrying the IOPS call release request are transmitted from one MCPTT client to the IOPS MC connectivity function for distribution to the other MCPTT client.

Table 10.5.2.2.4-1: IOPS call release request

Information element | Status | Description
IOPS MCPTT ID | M | The identity of the calling party
IOPS MCPTT ID | M | The identity of the called party
MCPTT private call release reason | O | This element indicates the reason for the private call release, e.g. originating client requested
10.5.2.2.5 IOPS call release response
Table 10.5.2.2.5-1 describes the information flow for the IOPS call release response from one MCPTT client to another MCPTT client. The packet(s) carrying the IOPS call release response are transmitted from one MCPTT client to the IOPS MC connectivity function for distribution to the other MCPTT client.

Table 10.5.2.2.5-1: IOPS call release response

Information element | Status | Description
IOPS MCPTT ID | M | The identity of the calling party
IOPS MCPTT ID | M | The identity of the called party
10.5.2.2.6 IOPS emergency private call upgrade
Table 10.5.2.2.6-1 describes the information flow for the IOPS emergency private call upgrade from one MCPTT client to another MCPTT client. The packet(s) carrying the IOPS emergency private call upgrade are transmitted from the originating MCPTT client to the IOPS MC connectivity function for distribution to the target MCPTT client.

Table 10.5.2.2.6-1: IOPS emergency private call upgrade

Information element | Status | Description
IOPS MCPTT ID | M | The identity of the calling party
IOPS MCPTT ID | M | The identity of the called party
10.5.2.3 IOPS private call setup in automatic commencement mode procedure
The procedure in figure 10.5.2.3-1 is the basic procedure for an MCPTT client initiating the establishment of an IOPS MCPTT private call with a target MCPTT client based on the IP connectivity functionality. The procedure focuses on the case of an IOPS MCPTT private call using the automatic commencement mode.

Pre-conditions:
- The MCPTT user profile used for the IOPS mode of operation is pre-provisioned in the MCPTT UEs;
- MCPTT users have an active PDN connection to the IOPS MC connectivity function for the communication based on the IP connectivity functionality;
- The MCPTT users are discovered by the IOPS MC connectivity function supporting the IP connectivity functionality;
- MCPTT clients have retrieved the connectivity information of the target MCPTT users.

Figure 10.5.2.3-1: IOPS private call setup in automatic commencement mode based on the IP connectivity functionality

1. The MCPTT user at MCPTT client 1 would like to initiate an IOPS private call with the MCPTT user at MCPTT client 2 based on the IP connectivity functionality.
2. The MCPTT client 1 retrieves the connectivity information of the target MCPTT user from the IOPS connectivity client 1 (not shown in the figure) and sends an IOPS call setup request towards the MCPTT client 2. The MCPTT client 1 transmits the session packets carrying the IOPS call setup request to the IOPS MC connectivity function for distribution to the corresponding target MCPTT UE 2's IP address. The IOPS call setup request contains an SDP offer, an automatic commencement mode indication, and an element indicating that MCPTT client 1 is requesting the floor. The request may include location information.
3. The IOPS MC connectivity function receives the session packets addressing the MCPTT UE 2's IP address. The IOPS MC connectivity function checks whether the MCPTT UE 2's IP address corresponds to a discovered MC user in order to distribute the received session packets. If it does, the IOPS MC connectivity function distributes the received session packets to the target MCPTT client over unicast transmissions.
4. The IOPS MC connectivity function distributes the session packets carrying the IOPS call setup request to the MCPTT client 2.
5. The MCPTT client 2 notifies the target MCPTT user about the incoming IOPS private call.
6. The receiving MCPTT client 2 accepts the IOPS private call automatically, and an IOPS call setup response indicating the successful call establishment is sent to the MCPTT client 1. The MCPTT client 2 transmits the session packet(s) carrying the IOPS call setup response to the IOPS MC connectivity function for distribution to the corresponding target MCPTT UE 1's IP address. If the MCPTT client 2 rejected the incoming call, the MCPTT client 2 sends an IOPS call setup response indicating the failure reason to the MCPTT client 1.

NOTE: Step 6 can also occur prior to step 5.

7. The IOPS MC connectivity function receives the session packets addressing the MCPTT UE 1's IP address. The IOPS MC connectivity function checks whether the MCPTT UE 1's IP address corresponds to a discovered MC user in order to distribute the received session packets. If it does, the IOPS MC connectivity function distributes the received session packets to the target MCPTT client over unicast transmissions.
8. The IOPS MC connectivity function distributes the session packets carrying the IOPS call setup response to the MCPTT client 1.
9. The MCPTT client 1 and the MCPTT client 2 have successfully established the IOPS private call with floor control based on the IP connectivity functionality. The MCPTT client 1 is automatically granted the floor.
10.5.2.4 IOPS private call setup in manual commencement mode procedure
The procedure in figure 10.5.2.4-1 focuses on the case where an MCPTT user initiates an IOPS MCPTT private call for communicating with another MCPTT user using the manual commencement mode. The IOPS MCPTT private call is based on the IP connectivity functionality.

Pre-conditions:
- The MCPTT user profile used for the IOPS mode of operation is pre-provisioned in the MCPTT UEs;
- MCPTT users have an active PDN connection to the IOPS MC connectivity function for the communication based on the IP connectivity functionality;
- The MCPTT users are discovered by the IOPS MC connectivity function supporting the IP connectivity functionality;
- MCPTT clients have retrieved the connectivity information of the target MCPTT users.

Figure 10.5.2.4-1: IOPS private call setup in manual commencement mode based on the IP connectivity functionality

1. The MCPTT user at MCPTT client 1 would like to initiate an IOPS MCPTT private call with the MCPTT user at MCPTT client 2 based on the IP connectivity functionality.
2. The MCPTT client 1 retrieves the connectivity information of the target MCPTT user from the IOPS connectivity client 1 (not shown in the figure) and sends an IOPS call setup request towards the MCPTT client 2. The MCPTT client 1 transmits the session packets carrying the IOPS call setup request to the IOPS MC connectivity function for distribution to the corresponding target MCPTT UE 2's IP address. The IOPS call setup request contains an SDP offer, a manual commencement mode indication, and an element indicating that MCPTT client 1 is requesting the floor. The request may include location information.
3. The IOPS MC connectivity function receives the session packets addressing the MCPTT UE 2's IP address. The IOPS MC connectivity function checks whether the MCPTT UE 2's IP address corresponds to a discovered MC user in order to distribute the received session packets. If it does, the IOPS MC connectivity function distributes the received session packets to the MCPTT client 2 over unicast transmissions.
4. The IOPS MC connectivity function distributes the session packets carrying the IOPS call setup request to the MCPTT client 2.
5. The MCPTT client 2 notifies the target MCPTT user about the incoming IOPS private call.
6. The MCPTT client 2 sends an IOPS MCPTT ringing message to the MCPTT client 1. The MCPTT client 2 transmits the session packet(s) carrying the IOPS MCPTT ringing to the IOPS MC connectivity function for distribution to the corresponding target MCPTT UE 1's IP address.

NOTE 1: Step 6 can also occur prior to step 5.

7. The IOPS MC connectivity function receives the session packets addressing the MCPTT UE 1's IP address. The IOPS MC connectivity function checks whether the MCPTT UE 1's IP address corresponds to a discovered MC user in order to distribute the received session packets. If it does, the IOPS MC connectivity function distributes the received session packets to the MCPTT client 1 over unicast transmissions.
8. The IOPS MC connectivity function distributes the session packets carrying the IOPS MCPTT ringing to the MCPTT client 1.
9. The MCPTT user at the MCPTT client 2 has accepted the call using the manual commencement mode (i.e. the user has taken some action to accept it via the user interface). The MCPTT user may also reject or fail to answer the incoming call.

NOTE 2: Step 9 can also occur at any time between steps 6 and 8.

10. The MCPTT client 2 sends an IOPS call setup response indicating the successful call establishment to the MCPTT client 1. If the MCPTT client 2 rejected the call or the MCPTT user 2 rejected or failed to answer the incoming call, the MCPTT client 2 sends an IOPS call setup response indicating the failure reason to the MCPTT client 1. The MCPTT client 2 transmits the session packet(s) carrying the IOPS call setup response to the IOPS MC connectivity function for distribution to the corresponding target MCPTT UE 1's IP address.
11. The IOPS MC connectivity function receives the session packets addressing the MCPTT UE 1's IP address. The IOPS MC connectivity function checks whether the MCPTT UE 1's IP address corresponds to a discovered MC user in order to distribute the received session packets. If it does, the IOPS MC connectivity function distributes the received session packets to the MCPTT client 1 over unicast transmissions.
12. The IOPS MC connectivity function distributes the session packets carrying the IOPS call setup response to the MCPTT client 1.
13. The MCPTT client 1 and the MCPTT client 2 have successfully established the IOPS private call with floor control based on the IP connectivity functionality. The MCPTT client 1 is automatically granted the floor.
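The difference between the two commencement modes is confined to how the called client reacts to the incoming IOPS call setup request. A condensed, illustrative comparison (message names are taken from the information flows above; everything else is an assumption):

```python
from enum import Enum

class CommencementMode(Enum):
    AUTOMATIC = "automatic"
    MANUAL = "manual"

def called_client_messages(requested_mode, user_accepts):
    """Messages the called client emits, in order, for an incoming IOPS call
    setup request. user_accepts is the manual-mode answer (True/False/None).
    Illustrative only."""
    if requested_mode is CommencementMode.AUTOMATIC:
        # Automatic mode (clause 10.5.2.3): accept without user action.
        return [("IOPS call setup response", "success")]
    # Manual mode (clause 10.5.2.4): indicate ringing, then report the user's
    # decision; rejection or no answer yields a failure reason.
    result = "success" if user_accepts else "failure"
    return [("IOPS MCPTT ringing", None), ("IOPS call setup response", result)]

# Example: manual commencement mode where the user accepts the call.
print(called_client_messages(CommencementMode.MANUAL, True))
```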
10.5.2.5 IOPS private call release
The procedure in figure 10.5.2.5-1 focuses on the case where an MCPTT client is requesting to release an ongoing IOPS MCPTT private call based on the IP connectivity functionality. Either MCPTT client can initiate the call release procedure.

Pre-conditions:
- Two MCPTT users are currently engaged in an IOPS MCPTT private call based on the IP connectivity functionality.

Figure 10.5.2.5-1: IOPS private call release based on the IP connectivity functionality

1. The MCPTT user at MCPTT client 1 would like to initiate an IOPS MCPTT private call release to the MCPTT user at MCPTT client 2 based on the IP connectivity functionality.
2. The MCPTT client 1 retrieves the connectivity information of the target MCPTT user from the IOPS connectivity client 1 (not shown in the figure) and sends an IOPS call release request towards the MCPTT client 2. The MCPTT client 1 transmits the session packets carrying the IOPS call release request to the IOPS MC connectivity function for distribution to the corresponding target MCPTT UE 2's IP address.
3. The IOPS MC connectivity function receives the session packets addressing the MCPTT UE 2's IP address. The IOPS MC connectivity function checks if the MCPTT UE 2's IP address corresponds to a discovered MC user in order to distribute the received session packets. If it does, the IOPS MC connectivity function distributes the received session packets to the MCPTT client 2 over unicast transmissions.
4. The IOPS MC connectivity function distributes the session packets carrying the IOPS call release request to the MCPTT client 2.
5. The MCPTT client 2 notifies the MCPTT user about the IOPS private call release.
6. The MCPTT client 2 sends an IOPS call release response indicating the successful call release to the MCPTT client 1. The MCPTT client 2 transmits the session packet(s) carrying the IOPS call release response to the IOPS MC connectivity function for distribution to the corresponding target MCPTT UE 1's IP address.

NOTE: Step 6 can also occur prior to step 5.

7. The IOPS MC connectivity function receives the session packets addressing the MCPTT UE 1's IP address. The IOPS MC connectivity function checks if the MCPTT UE 1's IP address corresponds to a discovered MC user in order to distribute the received session packets. If it does, the IOPS MC connectivity function distributes the received session packets to the MCPTT client 1 over unicast transmissions.
8. The IOPS MC connectivity function distributes the session packets carrying the IOPS call release response to the MCPTT client 1.
9. The MCPTT client 1 and the MCPTT client 2 release all associated call resources from the private call communication based on the IP connectivity functionality.
10.5.2.6 IOPS emergency private call
The procedure in figure 10.5.2.6-1 is the basic procedure for an MCPTT client initiating the establishment of an IOPS emergency private call with a target MCPTT client based on the IP connectivity functionality. The IOPS emergency private call is a special case of the IOPS private call setup procedures described in clause 10.5.2.3, wherein the IOPS call setup request contains an indication that the IOPS private call is an IOPS emergency private call. The called MCPTT user can become aware of the emergency state of the calling MCPTT user based on the emergency indicator.

When an MCPTT client intends to initiate an IOPS emergency private call, the MCPTT client can request higher priority from the IOPS MC connectivity function via the IOPS discovery request. For the case of an IOPS private call in progress, either call participant can upgrade the call to an IOPS emergency private call by sending an IOPS emergency private call upgrade. The emergency state of the call remains active until the emergency call ends.

Pre-conditions:
- The MCPTT user profile used for the IOPS mode of operation is pre-provisioned in the MCPTT UEs;
- MCPTT users have an active PDN connection to the IOPS MC connectivity function for the communication based on the IP connectivity functionality;
- The MCPTT users are discovered by the IOPS MC connectivity function supporting the IP connectivity functionality;
- MCPTT clients have retrieved the connectivity information of the target MCPTT users.

Figure 10.5.2.6-1: IOPS emergency private call setup based on the IP connectivity functionality

1. The MCPTT user at MCPTT client 1 would like to initiate an IOPS emergency private call with the MCPTT user at MCPTT client 2 based on the IP connectivity functionality.

NOTE 1: The MCPTT client 1 may have previously requested higher priority from the IOPS MC connectivity function using the IOPS discovery request.

2. The MCPTT client 1 retrieves the connectivity information of the target MCPTT user from the IOPS connectivity client 1 (not shown in the figure) and sends an IOPS call setup request towards the MCPTT client 2. The request contains an indication that the call is an IOPS emergency private call. The MCPTT client 1 transmits the session packets carrying the IOPS call setup request to the IOPS MC connectivity function for distribution to the corresponding target MCPTT UE 2's IP address.
3. The IOPS MC connectivity function receives the session packets addressing the MCPTT UE 2's IP address. The IOPS MC connectivity function checks whether the MCPTT UE 2's IP address corresponds to a discovered MC user in order to distribute the received session packets. If it does, the IOPS MC connectivity function distributes the received session packets to the target MCPTT client over unicast transmissions. If the MCPTT client 1 requested a priority state from the IOPS MC connectivity function, the IOPS MC connectivity function distributes the session packets with higher priority.
4. The IOPS MC connectivity function distributes the session packets carrying the IOPS call setup request to the MCPTT client 2.
5. The MCPTT client 2 notifies the target MCPTT user about the incoming IOPS emergency private call.
6. The receiving MCPTT client 2 accepts the IOPS emergency private call, and an IOPS call setup response indicating the successful call establishment is sent to the MCPTT client 1. The MCPTT client 2 transmits the session packet(s) carrying the IOPS call setup response to the IOPS MC connectivity function for distribution to the corresponding target MCPTT UE 1's IP address.

NOTE 2: Whilst the IOPS emergency private call is in progress, the MCPTT client 2 may also request higher priority from the IOPS MC connectivity function using the IOPS discovery request.

NOTE 3: Step 6 can also occur prior to step 5.

7. The IOPS MC connectivity function receives the session packets addressing the MCPTT UE 1's IP address. The IOPS MC connectivity function checks whether the MCPTT UE 1's IP address corresponds to a discovered MC user in order to distribute the received session packets. If it does, the IOPS MC connectivity function distributes the received session packets to the target MCPTT client over unicast transmissions. If any participating MCPTT client of the call requested a priority state from the IOPS MC connectivity function, the IOPS MC connectivity function distributes the session packets with higher priority.
8. The IOPS MC connectivity function distributes the session packets carrying the IOPS call setup response to the MCPTT client 1.
9. The MCPTT client 1 and the MCPTT client 2 have successfully established the IOPS emergency private call based on the IP connectivity functionality.
10.5.3 IOPS floor control (IP connectivity functionality)
10.5.3.1 General
For MCPTT calls based on the IP connectivity functionality in the IOPS mode of operation, floor control is performed by exchanging floor control messages among the MCPTT clients without a centralized MCPTT server. The MCPTT client can transmit voice packets over the IOPS MC connectivity function once it is granted the right to speak, either locally in the UE or by the reception of a floor granted message from another MCPTT client.

Since there is no centralized MCPTT floor control server, the MCPTT client currently speaking acts as the temporary floor arbitrator while speaking. The floor arbitrator controls the floor whether or not queueing is supported, and also when the floor is requested with override. If queueing is supported, the MCPTT client acting as floor arbitrator grants the right to speak to the next speaker and transfers the floor arbitrator role after completing the voice transfer and releasing the floor. For IOPS group calls, the floor arbitrator also transfers the floor control queue when granting the floor. The next MCPTT client receiving the right to speak becomes the new floor arbitrator and, for IOPS group calls, takes over the floor control queue.

For IOPS group calls, the group session packets carrying the floor control messages can be transmitted by the IOPS MC connectivity function over broadcast/multicast transmissions and can be monitored by all the members of the target IOPS MCPTT group.

The following clauses specify the floor control procedures and information flows for IOPS private calls and IOPS group calls based on the IP connectivity functionality in the IOPS mode of operation.
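The arbitrator role and, for group calls, the floor control queue travel together from speaker to speaker. A minimal sketch of that hand-over, ignoring floor priorities and override for brevity (the class and method names are assumptions):

```python
from collections import deque

class FloorArbitrator:
    """Temporary floor arbitrator role held by the currently speaking client.
    Illustrative only; real floor control also handles priority and override."""

    def __init__(self, queue=()):
        # For IOPS group calls the floor control queue moves with the role.
        self.queue = deque(queue)

    def on_floor_request(self, requester_id):
        # With queueing supported, park requests received while speaking.
        self.queue.append(requester_id)

    def release_floor(self):
        """Finish speaking: grant the floor to the next queued requester and
        hand over the arbitrator role together with the remaining queue."""
        if not self.queue:
            return None  # the floor becomes idle; no arbitrator exists
        next_speaker = self.queue.popleft()
        return next_speaker, FloorArbitrator(self.queue)
```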
10.5.3.2 Information flows
10.5.3.2.1 IOPS floor request
Table 10.5.3.2.1-1 describes the information flow for the IOPS floor request, from the floor participant to another floor participant, which is used to request the floor for media transfer. The packet(s) carrying the IOPS floor request are transmitted from the requesting MCPTT client to the IOPS MC connectivity function for distribution to the target MCPTT client.

Table 10.5.3.2.1-1: IOPS floor request

Information element | Status | Description
IOPS MCPTT ID | M | Requester identity
Floor priority | M | Priority of the request
10.5.3.2.2 IOPS floor taken
Table 10.5.3.2.2-1 describes the information flow for the IOPS floor taken, from one floor participant to another floor participant, which is used to indicate that the floor is granted to an MCPTT user. The packet(s) carrying the IOPS floor taken are transmitted from the originating MCPTT client to the IOPS MC connectivity function for distribution to the target MCPTT client.

Table 10.5.3.2.2-1: IOPS floor taken

Information element | Status | Description
IOPS MCPTT ID | M | Identity of the granted party
Acknowledgement required | O | Indicates if acknowledgement from the floor participant is required
10.5.3.3 IOPS floor control during silence
When no floor arbitrator exists, figure 10.5.3.3-1 shows the successful high-level floor control procedure during periods in which there is no detectable talker in an IOPS group call based on the IP connectivity functionality.

NOTE 1: The description also applies to IOPS private calls.

Pre-conditions:
- The MCPTT user profile used for the IOPS mode of operation is pre-provisioned in the MCPTT UEs;
- MCPTT users have an active PDN connection or PDU session to the IOPS MC connectivity function for the communication based on the IP connectivity functionality;
- The IOPS MCPTT group ID and its associated IOPS group IP multicast address are pre-configured in the MCPTT clients (for the case of an IOPS group call);
- The IOPS MC connectivity function may have established a broadcast/multicast session and announced it to the MCPTT clients;
- The MCPTT users are discovered by the IOPS MC connectivity function supporting the IP connectivity functionality;
- MCPTT clients have retrieved connectivity information of the target MCPTT user (for the case of an IOPS private call);
- An IOPS private call or IOPS group call based on the IP connectivity functionality has been established. No participant is currently talking (i.e. the floor is idle) and no floor arbitrator is identified.

Figure 10.5.3.3-1: Successful floor taken flow in an IOPS group call based on the IP connectivity functionality (no floor contention)

1. The MCPTT client 1 sends the IOPS floor request message to the target IOPS MCPTT group. The MCPTT client 1 transmits the group session packets carrying the IOPS floor request message to the IOPS MC connectivity function for distribution to the corresponding IOPS group IP multicast address.
2. The IOPS MC connectivity function determines that the received packets correspond to a group session targeting a specific IOPS MCPTT group. The IOPS MC connectivity function decides to distribute the received group session packets to the target MCPTT clients over broadcast/multicast and/or unicast transmissions.
3. The IOPS MC connectivity function distributes the group session packets carrying the IOPS floor request to the MCPTT clients of the target IOPS MCPTT group.
4. The MCPTT client 1 does not detect any floor contention. Floor contention occurs when multiple floor requests exist simultaneously.

NOTE 2: The mechanism for detecting floor contention in the IOPS mode of operation is out of scope of the present document.

5. The MCPTT client 1 sends the IOPS floor taken message to the IOPS MCPTT group. The MCPTT client 1 transmits the group session packets carrying the IOPS floor taken message to the IOPS MC connectivity function for distribution to the corresponding IOPS group IP multicast address.
6. The IOPS MC connectivity function determines that the received packets correspond to a group session targeting a specific IOPS MCPTT group. The IOPS MC connectivity function decides to distribute the received group session packets to the target MCPTT clients over broadcast/multicast and/or unicast transmissions.
7. The IOPS MC connectivity function distributes the group session packets carrying the IOPS floor taken message to the MCPTT clients of the target IOPS MCPTT group.
8. The MC user at MCPTT client 1 gets a notification that the IOPS floor request was successful (the floor has been granted).

NOTE 3: Step 8 can also occur prior to steps 6 and 7.

9. The MCPTT client 1 begins voice transmission with the target IOPS MCPTT group based on the IP connectivity functionality.
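Since the contention-detection mechanism is out of scope (NOTE 2), a client implementation can only be sketched around a pluggable check. The following is a speculative illustration of steps 1-9, with send_to_group() and detect_contention() as assumed helpers:

```python
import asyncio

async def take_floor_during_silence(send_to_group, detect_contention,
                                    contention_window_s=0.2):
    """Floor taking when no arbitrator exists (figure 10.5.3.3-1), illustrative.

    send_to_group     -- hypothetical async helper multicasting a floor message
    detect_contention -- hypothetical coroutine implementing the (out-of-scope)
                         contention detection over a listening window
    """
    await send_to_group("IOPS floor request")          # steps 1-3
    if await detect_contention(contention_window_s):   # step 4
        return False  # contention detected: back off and retry later
    await send_to_group("IOPS floor taken")            # steps 5-7
    return True  # floor granted locally (step 8); start talking (step 9)
```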
10.6 MCData service
10.6.1 IOPS short data service (IP connectivity functionality)
10.6.1.1 General
The support of the MCData short data service (SDS) based on the IP connectivity functionality in the IOPS mode of operation enables the service to be provided by the MCData clients over the IOPS MC connectivity function. The IOPS MC connectivity function provides IP connectivity for the communication among MCData users.
10.6.1.2 Information flows
10.6.1.2.1 IOPS MCData standalone data request
Table 10.6.1.2.1-1 describes the information flow for the IOPS MCData standalone data request from one MCData client to another MCData client. The packet(s) carrying the IOPS MCData standalone data request are transmitted from the sending MCData client to the IOPS MC connectivity function for distribution to the target MCData client.

Table 10.6.1.2.1-1: IOPS MCData standalone data request

Information element | Status | Description
IOPS MCData ID | M | The identity of the MCData user sending data
IOPS MCData ID | M | The identity of the MCData user towards which the data is sent
Conversation Identifier | M | Identifies the conversation
Transaction Identifier | M | Identifies the MCData transaction
Reply Identifier | O | Identifies the original MCData transaction to which the current transaction is a reply
Disposition Type | O | Indicates the disposition type expected from the receiver (i.e. delivered, read, or both)
Payload Destination Type | M | Indicates whether the payload is for application consumption or MCData user consumption
Application identifier (see NOTE) | O | Identifies the application for which the payload is intended (e.g. text string, port address, URI)
Payload | M | SDS content

NOTE: The application identifier shall be included only if the payload destination type indicates that the payload is for application consumption.
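The NOTE attached to the table is a conditional-presence rule that an implementation can enforce when building the message. A hedged sketch (field names and the string values of the payload destination type are assumptions):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class IopsMcdataStandaloneDataRequest:
    """IOPS MCData standalone data request (table 10.6.1.2.1-1), illustrative."""
    sender_iops_mcdata_id: str     # M: MCData user sending the data
    recipient_iops_mcdata_id: str  # M: MCData user towards which data is sent
    conversation_identifier: str   # M: identifies the conversation
    transaction_identifier: str    # M: identifies the MCData transaction
    payload_destination_type: str  # M: "application" or "user" (values assumed)
    payload: bytes                 # M: SDS content
    reply_identifier: Optional[str] = None     # O: transaction being replied to
    disposition_type: Optional[str] = None     # O: "delivered", "read" or "both"
    application_identifier: Optional[str] = None  # O: see NOTE in the table

    def __post_init__(self):
        # NOTE: the application identifier is included only when the payload
        # destination type indicates application consumption.
        if self.application_identifier is not None \
                and self.payload_destination_type != "application":
            raise ValueError("application identifier requires an "
                             "application-consumption payload")
```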
10.6.1.2.2 IOPS MCData data disposition notification
Table 10.6.1.2.2-1 describes the information flow for the IOPS MCData data disposition notification from one MCData client to another MCData client. The packet(s) carrying the IOPS MCData data disposition notification are transmitted from the sending MCData client to the IOPS MC connectivity function for distribution to the target MCData client.

Table 10.6.1.2.2-1: IOPS MCData data disposition notification

Information element | Status | Description
IOPS MCData ID | M | The identity of the MCData user towards which the notification is sent
IOPS MCData ID | M | The identity of the MCData user sending the notification
Conversation Identifier | M | Identifies the conversation
Disposition association | M | Identity of the original MCData transaction
Disposition | M | Disposition, which is delivered, read, or both
10.6.1.2.3 IOPS MCData group standalone data request
Table 10.6.1.2.3-1 describes the information flow for the IOPS MCData group standalone data request from one MCData client to other MCData clients. The packet(s) carrying the IOPS MCData group standalone data request are transmitted from the sending MCData client to the IOPS MC connectivity function for distribution to the target MCData clients.

Table 10.6.1.2.3-1: IOPS MCData group standalone data request

Information element | Status | Description
IOPS MCData ID | M | The identity of the MCData user sending data
IOPS MCData group ID | M | The IOPS MCData group ID to which the data is to be sent
Conversation Identifier | M | Identifies the conversation
Transaction Identifier | M | Identifies the MCData transaction
Reply Identifier | O | Identifies the original MCData transaction to which the current transaction is a reply
Disposition Type | O | Indicates the disposition type expected from the receiver (i.e. delivered, read, or both)
Payload Destination Type | M | Indicates whether the payload is for application consumption or MCData user consumption
Application identifier (see NOTE) | O | Identifies the application for which the payload is intended (e.g. text string, port address, URI)
Payload | M | SDS content

NOTE: The application identifier shall be included only if the payload destination type indicates that the payload is for application consumption.
10.6.1.3 IOPS one-to-one standalone SDS using signalling control plane
10.6.1.3.1 General
When an MCData user initiates an IOPS standalone SDS data transfer with another MCData user using the signalling control plane based on the IP connectivity functionality, the MCData client retrieves the connectivity information of the target MCData user (i.e. the MCData UE's IP address) from the IOPS connectivity client. The MCData client then performs the IOPS SDS data transfer over the IOPS MC connectivity function. The related session packets, i.e. signalling messages, carrying the data are transmitted to the IOPS MC connectivity function addressing the corresponding target MCData UE's IP address.

NOTE: The IOPS connectivity client can only provide connectivity information of the target MCData user if it is already available (see clause 10.3 on IOPS subscription and notification procedures).

The IOPS MC connectivity function distributes the received session packets over unicast transmissions to the target MCData client.
10.6.1.3.2 Procedure
The procedure in figure 10.6.1.3.2-1 describes the case where an MCData user initiates an IOPS one-to-one MCData communication for sending standalone SDS data over the signalling control plane to another MCData user, with or without a disposition request. Standalone refers to sending unidirectional data in one transaction.

Pre-conditions:
- The MCData user profile used for the IOPS mode of operation is pre-provisioned in the MCData UEs;
- MCData users have an active PDN connection or PDU session to the IOPS MC connectivity function for the communication based on the IP connectivity functionality;
- The MCData users are discovered by the IOPS MC connectivity function supporting the IP connectivity functionality;
- MCData clients have retrieved the connectivity information of the target MCData users.

Figure 10.6.1.3.2-1: IOPS one-to-one standalone SDS using signalling control plane based on the IP connectivity functionality

1. The MCData user at MCData client 1 would like to initiate an IOPS SDS data transfer with the MCData user at MCData client 2 based on the IP connectivity functionality. The MCData client 1 checks whether the MCData user 1 is authorized to send an IOPS MCData standalone data request.
2. The MCData client 1 retrieves the connectivity information of the target MCData user from the IOPS connectivity client 1 (not shown in the figure) and sends an IOPS MCData standalone data request towards the MCData client 2. The MCData client 1 transmits the session packets carrying the IOPS MCData standalone data request to the IOPS MC connectivity function for distribution to the corresponding target MCData UE 2's IP address. The IOPS MCData standalone data request contains the data payload, i.e. the SDS content. The request also contains a conversation identifier for message thread indication and may contain a disposition request if indicated by the user at MCData client 1.
3. The IOPS MC connectivity function receives the session packets addressing the MCData UE 2's IP address. The IOPS MC connectivity function checks whether the MCData UE 2's IP address corresponds to a discovered MC user in order to distribute the received session packets. If it does, the IOPS MC connectivity function distributes the received session packets to the target MCData client over unicast transmissions.
4. The IOPS MC connectivity function distributes the session packets carrying the IOPS MCData standalone data request to the MCData client 2.
5. Upon receipt of the IOPS MCData standalone data request, the MCData client 2 checks whether any policy is to be asserted to limit certain types of messages or content to certain members due to, for example, location or user privilege. If the policy assertion is positive and the payload is for MCData user consumption (e.g. it is not application data or command instructions), then the MCData client 2 notifies the target MCData user. The actions taken when the payload contains application data or command instructions are based on the payload content. Payload content received by the MCData client 2 which is addressed to a known local non-MCData application that is not yet running shall cause the MCData client 2 to start the local non-MCData application (i.e. remote start application) and shall pass the payload content to the just started application.

NOTE: If the policy assertion was negative, the MCData client 2 sends an appropriate notification to the MCData client 1.

6. If MCData data disposition was indicated (for delivery, read or both) within the request sent by the MCData client 1, the receiving MCData client 2 initiates the corresponding IOPS MCData data disposition notification(s) towards the MCData client 1, i.e. addressing the MCData UE 1's IP address.
7. The IOPS MC connectivity function receives the session packets addressing the MCData UE 1's IP address. The IOPS MC connectivity function checks whether the MCData UE 1's IP address corresponds to a discovered MC user in order to distribute the received session packets. If it does, the IOPS MC connectivity function distributes the received session packets to the target MCData client over unicast transmissions.
8. The IOPS MC connectivity function distributes the session packets carrying the IOPS MCData data disposition notification to the MCData client 1.
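Step 6 turns the disposition type requested by the sender into zero or more notifications from the receiver. A small sketch of that mapping, reusing the request structure sketched in clause 10.6.1.2.1 (the message shapes are assumptions):

```python
def disposition_notifications(request, delivered, read):
    """IOPS MCData data disposition notifications owed to the sender, based on
    the disposition type carried in the request. Illustrative only."""
    wanted = request.disposition_type  # "delivered", "read", "both", or None
    notifications = []
    if wanted in ("delivered", "both") and delivered:
        notifications.append({"disposition": "delivered",
                              "disposition_association": request.transaction_identifier})
    if wanted in ("read", "both") and read:
        notifications.append({"disposition": "read",
                              "disposition_association": request.transaction_identifier})
    return notifications
```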
10.6.1.4 IOPS group standalone SDS using signalling control plane
10.6.1.4.1 General
IOPS group standalone SDS using the signalling control plane based on the IP connectivity functionality can use pre-configured information provided to the MCData clients prior to initiating the data service. When an MCData client initiates an IOPS group standalone SDS based on the IP connectivity functionality, it uses the pre-configured IOPS group IP multicast address associated with the target IOPS MCData group ID. The related group session packets, i.e. signalling messages, carrying the data are transmitted to the IOPS MC connectivity function for distribution to the corresponding discovered MC users of the target IOPS MCData group.

The IOPS MC connectivity function can distribute the group session packets to the discovered MC users over broadcast/multicast sessions as described in clause 10.4.5. The IOPS MC connectivity function can also replicate and distribute the group session packets over unicast transmissions to the MCData UEs associated with the target IOPS MCData group. The MCData UEs receiving the group session packets are associated with discovered MC users that included the target IOPS MCData group ID within the IOPS discovery request, as described in clause 10.5.2.3.
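The distribution decision described above (broadcast/multicast where a session exists, unicast replication otherwise) can be sketched as follows; the two send helpers, the session object and its joined attribute, and both mappings are assumptions:

```python
def distribute_group_packet(packet, group_id, multicast_sessions,
                            discovered_members, send_multicast, send_unicast):
    """IOPS MC connectivity function: distribute one group session packet to
    the discovered members of an IOPS MCData group. Illustrative only.

    multicast_sessions -- dict: group ID -> announced broadcast/multicast session
    discovered_members -- dict: group ID -> iterable of member UE IP addresses
                          (users that included the group ID in their IOPS
                          discovery request)
    """
    session = multicast_sessions.get(group_id)
    if session is not None:
        # A broadcast/multicast session was established and announced.
        send_multicast(session, packet)
    # Replicate over unicast towards members not covered by the session.
    for ue_ip in discovered_members.get(group_id, ()):
        if session is None or ue_ip not in session.joined:
            send_unicast(ue_ip, packet)
```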
10.6.1.4.2 Procedure
The procedure in figure 10.6.1.4.2-1 describes the case where an MCData user initiates an IOPS group MCData communication for sending standalone SDS data over the signalling control plane to an IOPS MCData group, with or without a disposition request. Standalone refers to sending unidirectional data in one transaction.

Pre-conditions:
- The MCData user profile used for the IOPS mode of operation is pre-provisioned in the MCData UEs;
- The IOPS MCData group ID and its associated IOPS group IP multicast address are pre-configured in the MCData clients;
- MCData users have an active PDN connection or PDU session to the IOPS MC connectivity function for the communication based on the IP connectivity functionality;
- MCData users affiliated to the target IOPS MCData group are discovered by the IOPS MC connectivity function supporting the IP connectivity functionality;
- The IOPS MC connectivity function may have established a broadcast/multicast session and announced it to the MCData clients;
- MCData client 1 may have retrieved group connectivity information related to the target IOPS MCData group from the IOPS connectivity client;
- MCData clients 1, 2 … n are configured within the same IOPS MCData group.

Figure 10.6.1.4.2-1: IOPS group standalone SDS using signalling control plane based on the IP connectivity functionality

1. The MCData user at MCData client 1 would like to initiate an IOPS SDS data transfer with a specific IOPS MCData group based on the IP connectivity functionality. The MCData client 1 checks whether the MCData user 1 is authorized to send an IOPS MCData group standalone data request.
2. The MCData client 1 sends an IOPS MCData group standalone data request to the target IOPS MCData group. The MCData client 1 transmits the group session packets carrying the IOPS MCData group standalone data request to the IOPS MC connectivity function for distribution to the corresponding IOPS group IP multicast address. The IOPS MCData group standalone data request contains the data payload, i.e. the SDS content. The request also contains a conversation identifier for message thread indication and may contain a disposition request if indicated by the user at MCData client 1.
3. The IOPS MC connectivity function determines that the received packets correspond to a group session targeting a specific IOPS MCData group. The IOPS MC connectivity function decides to distribute the received group session packets to the target MCData clients over broadcast/multicast sessions and/or unicast transmissions.
4. The IOPS MC connectivity function distributes the group session packets carrying the IOPS MCData group standalone data request to the discovered MCData clients of the target IOPS MCData group.
5. The MCData clients receiving the IOPS MCData group standalone data request check whether any policy is to be asserted to limit certain types of messages or content to certain members due to, for example, location or user privilege. If the policy assertion is positive and the payload is for MCData user consumption (e.g. it is not application data or command instructions), then the MCData clients notify the target MCData users. The actions taken when the payload contains application data or command instructions are based on the payload content. Payload content received by an MCData client which is addressed to a known local non-MCData application that is not yet running shall cause the MCData client to start the local non-MCData application (i.e. remote start application) and shall pass the payload content to the just started application.

NOTE: If the policy assertion was negative, the corresponding MCData client sends an appropriate notification to the MCData client 1.

6. If MCData data disposition was indicated (for delivery, read or both) within the request sent by the MCData client 1, the receiving MCData clients initiate the corresponding IOPS MCData data disposition notification(s) towards the MCData client 1, i.e. addressing the MCData UE 1's IP address.
7. The IOPS MC connectivity function receives the session packets addressing the MCData UE 1's IP address. The IOPS MC connectivity function checks whether the MCData UE 1's IP address corresponds to a discovered MC user in order to distribute the received session packets. If it does, the IOPS MC connectivity function distributes the received session packets to the target MCData client over unicast transmissions.
8. The IOPS MC connectivity function distributes the session packets carrying the IOPS MCData data disposition notification to the MCData client 1.
17ba42c440e050d74c1fc2ffcb25f033
23.180
10.7 MC IOPS notification
17ba42c440e050d74c1fc2ffcb25f033
23.180
10.7.1 General
In the IOPS mode of operation, it is assumed that the IOPS MC system does not have connectivity to the primary MC system due to the backhaul failure. Therefore, the primary MC system cannot be aware of the initiation of the IOPS operation and the corresponding activation of an IOPS MC connectivity function within the primary MC system coverage. When an IOPS MC system is active, MC service UEs can move around and may enter and leave the IOPS MC system coverage, i.e. the MC service users may switch from the active IOPS MC connectivity function to the MC service server of the primary MC system, and vice versa.

In order to notify the primary MC service server about the active IOPS MC connectivity function, when MC service users register to the primary MC service server after having recently been registered to the IOPS MC connectivity function, the MC service users can provide information to the primary MC service server about the active IOPS MC connectivity function and optionally include associated dynamic information. The primary MC service server uses the provided information to become aware of the active IOPS MC connectivity function.

NOTE: Dynamic information can be, e.g., information about other available MC service users or active MC service groups that the MC service user identified while on the IOPS MC connectivity function. The primary MC service server can use the dynamic information to determine that affiliated MC service users might be registered on the active IOPS MC connectivity function and might not be reachable on the system.

Upon receipt of the notification, the primary MC service server may notify other MC service users in the proximity of the IOPS MC system coverage about the corresponding active IOPS MC connectivity function. This information can be used by the MC service users to optimize the user experience, e.g. to improve the switching time between systems and to obtain information about the potential availability of other registered MC service users or active MC service groups on the IOPS MC connectivity function.
17ba42c440e050d74c1fc2ffcb25f033
23.180
10.7.2 Information flows
17ba42c440e050d74c1fc2ffcb25f033
23.180
10.7.2.1 MC IOPS notification
Table 10.7.2.1-1 describes the information flow MC IOPS notification from the MC service client to the primary MC service server.

Table 10.7.2.1-1: MC IOPS notification
Information element | Status | Description
MC service ID | M | The identity of the MC service user providing the notification
IOPS MC system information | M | Information related to the identified active IOPS MC connectivity function (see NOTE)
List of MC service group IDs | O | The list of groups identified by the MC service user as active on the IOPS MC connectivity function
List of MC service IDs | O | The list of other users identified by the MC service user as available on the IOPS MC connectivity function
NOTE: The IOPS MC system information consists of the following elements: IOPS PLMN ID, server URI of the IOPS MC connectivity function, and location information (set of coordinates including altitude, longitude and latitude, and time of measurement and optional accuracy) related to the MC service user registration on the IOPS MC connectivity function.

Table 10.7.2.1-2 describes the information flow MC IOPS notification from the primary MC service server to the MC service client.

Table 10.7.2.1-2: MC IOPS notification
Information element | Status | Description
MC service ID (see NOTE 1) | M | The identity of the MC service user receiving the notification
IOPS MC system information | M | Information related to the identified active IOPS MC connectivity function (see NOTE 2)
List of MC service group IDs (see NOTE 3) | O | The list of MC service groups identified as active on the IOPS MC connectivity function
List of MC service IDs (see NOTE 3) | O | The list of MC service users identified as available on the IOPS MC connectivity function
NOTE 1: This information element is not included if the notification is transmitted over a broadcast/multicast session.
NOTE 2: The IOPS MC system information consists of the following elements: server URI of the IOPS MC connectivity function, and location information (set of coordinates including altitude, longitude and latitude) where the IOPS MC connectivity function is identified as active.
NOTE 3: The MC service server may provide information about identified active MC service groups or available MC service users on the IOPS MC connectivity function. This information is only included if the MC service user receiving the notification is authorized, e.g. if the MC service user is a member of the corresponding MC service groups. This information element is not included if the notification is transmitted over a broadcast/multicast session.
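As an illustration only (not part of this specification), the following Python sketch models the client-originated MC IOPS notification of table 10.7.2.1-1 and checks its mandatory elements. The field names, types and the validation helper are assumptions for this example.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class IopsMcSystemInfo:
        iops_plmn_id: str          # mandatory in the client-to-server flow
        server_uri: str            # server URI of the IOPS MC connectivity function
        altitude: float            # location information related to the registration
        longitude: float
        latitude: float
        time_of_measurement: str   # e.g. an ISO 8601 timestamp
        accuracy_m: Optional[float] = None   # optional accuracy

    @dataclass
    class McIopsNotification:
        mc_service_id: str                   # user providing the notification (M)
        iops_system_info: IopsMcSystemInfo   # IOPS MC system information (M)
        group_ids: List[str] = field(default_factory=list)        # optional
        other_user_ids: List[str] = field(default_factory=list)   # optional

    def validate(n: McIopsNotification) -> None:
        """Reject notifications missing the mandatory elements of table 10.7.2.1-1."""
        if not n.mc_service_id:
            raise ValueError("MC service ID is mandatory")
        info = n.iops_system_info
        if not (info.iops_plmn_id and info.server_uri):
            raise ValueError("IOPS MC system information is incomplete")

    n = McIopsNotification(
        mc_service_id="user1@example.org",
        iops_system_info=IopsMcSystemInfo("00101", "http://iops.example.org",
                                          520.0, 11.5, 48.1, "2024-12-01T10:00:00Z"),
    )
    validate(n)   # raises ValueError if a mandatory element is missing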
17ba42c440e050d74c1fc2ffcb25f033
23.180
10.7.3 MC IOPS notification procedure
Figure 10.7.3-1 describes the MC IOPS notification procedure when an MC service user has left an active IOPS MC system and enters the primary MC system.

Pre-conditions:
- There is an active IOPS MC connectivity function and the neighbouring cells of the IOPS MC system are part of the primary MC system.
- The MC service user 1 is initially registered to the IOPS MC connectivity function for the support of MC services in the IOPS mode of operation. The MC service user 1 is authorized to provide MC IOPS notifications to the primary MC service server.
- The MC service user 2 is registered to the primary MC service server and is in the proximity of the IOPS MC system coverage.
- MC service users 1 and 2 are members of the same MC service group or are authorized to have a one-to-one MC service communication.

Figure 10.7.3-1: MC IOPS notification procedure

1. An IOPS mode of operation is active, and the MC service communication is handled by the IOPS MC connectivity function. The MC service client 1 is registered to the IOPS MC connectivity function.
2. The MC service client 1 moves out of the coverage of the IOPS MC system and registers to the primary MC service server.
3. The MC service client 1 sends an MC IOPS notification to the primary MC service server to provide information about an active IOPS MC connectivity function in the area. This notification includes information about the active IOPS MC connectivity function such as server URI, associated IOPS PLMN ID, and location information. Also, the MC service client 1 may indicate which MC service groups and MC service users were identified as active and available on the IOPS MC connectivity function.
4. The primary MC service server becomes aware of the active IOPS MC connectivity function and can use the received information to determine that affiliated MC service users might be registered on the active IOPS MC connectivity function and might not be reachable on the system. If the primary MC service server determines that the IOPS MC connectivity function is active and identifies that affiliated MC service users are in the proximity of the IOPS MC system, the primary MC service server may notify the corresponding MC service users about the IOPS MC connectivity function.

NOTE 1: The primary MC service server can use information received from different MC IOPS notifications (e.g. location information including the time of measurement) and the information obtained from location information subscriptions (as described in 3GPP TS 23.280 [3]) to determine whether the IOPS MC connectivity function might still be active. For instance, if information received from location information subscriptions indicates that MC service users are located within the notified active IOPS MC system coverage, the primary MC service server can determine that the IOPS MC connectivity function is no longer active.

5. If the primary MC service server determines that the IOPS MC connectivity function is active, it sends an MC IOPS notification to the MC service client 2 in proximity of the active IOPS MC system coverage. This information can be used by the MC service user to become aware of the active IOPS MC connectivity function. Hence, the MC service user might decide not to move into the IOPS MC system coverage, or might benefit from an improved switching time between the systems. Also, the MC service user can be aware of the potential availability of MC service users or active MC service groups on the IOPS MC connectivity function.
NOTE 2: The MC IOPS notification can be sent on a broadcast/multicast session configured within the proximity of the active IOPS MC system to target multiple MC service users. In this case, information about the active MC service IDs and MC service group IDs is not included in the MC IOPS notification.

Annex A (normative): Configuration data for the support of MC services in the IOPS mode of operation

A.1 General

This Annex provides information about the static configuration data needed for the support of MC services in the IOPS mode of operation. The configuration data belong to one of the following categories:
- MC service UE configuration data (see subclause A.2);
- MC service user profile configuration data (see subclause A.3);
- MC service group configuration data (see subclause A.4);
- MC service configuration data (see subclause A.5); and
- Location user profile configuration data (see subclause A.7).

The configuration data in each configuration category corresponds to a single instance of the category type, i.e. the MC service UE, MC service group, MC service user and MC service configuration data refer to the information that will be stored against each MC service UE, MC service group, MC service user and MC service.

NOTE: The configuration data described in this Annex together with the corresponding configuration data provided in 3GPP TS 23.280 [3], 3GPP TS 23.289 [12], 3GPP TS 23.379 [5] and 3GPP TS 23.282 [6] represent the complete set of data for each configuration data category element.

The columns in the tables have the following meanings:
- Reference: the reference of the corresponding requirement in 3GPP TS 22.346 [9] and 3GPP TS 22.280 [11], or the corresponding clause from either the present document or the referenced document.
- Parameter description: a short definition of the semantics of the corresponding item of data, including denotation of the level of the parameter in the configuration hierarchy.
- When it is not clear to which functional entities the parameter is configured, then one or more columns indicating this are provided, where the following nomenclature is used:
  - "Y" to denote "Yes", i.e. the parameter denoted for the row needs to be configured to the functional entity denoted for the column.
  - "N" to denote "No", i.e. the parameter denoted for the row does not need to be configured to the functional entity denoted for the column.

Parameters within a set of configuration data have a level within a hierarchy that pertains only to that configuration data. The level of a parameter within the hierarchy of the configuration data is denoted by use of the character ">" in the parameter description field within each table, one per level. Parameters that are at the top‑most level within the hierarchy have no ">" character. Parameters that have one or more ">" characters are child parameters of the first parameter above them that has one less ">" character. Parent parameters are parameters that have one or more child parameters. Parent parameters act solely as a "grouping" of their child parameters and therefore do not contain an actual value themselves, i.e. they are just containers for their child parameters.

Each parameter that can be configured online shall only be configured through one online reference point. Each parameter that can be configured offline shall only be configured through one offline reference point.
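As an illustration only of the ">" level convention described above (not part of this specification), the following Python sketch turns a sequence of parameter descriptions into a parent/child tree. The helper name and the dictionary representation are assumptions for this example.

    def parse_hierarchy(descriptions):
        """Each description becomes a child of the nearest description above it
        that has one less leading '>' character."""
        root = {"name": "(root)", "children": []}
        stack = [(-1, root)]   # (level, node)
        for description in descriptions:
            level = len(description) - len(description.lstrip(">"))
            node = {"name": description.lstrip("> ").strip(), "children": []}
            while stack and stack[-1][0] >= level:
                stack.pop()
            stack[-1][1]["children"].append(node)
            stack.append((level, node))
        return root

    # Example rows taken from table A.3-1:
    tree = parse_hierarchy([
        "List of IOPS MC service groups for use by an MC service user",
        "> IOPS MC service Group ID",
        "> Application plane server identity information of group management server where group is defined",
        ">> Server URI",
    ])
    print(tree["children"][0]["children"][1]["children"][0]["name"])   # Server URI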
The most recent configuration data made available to the MC service UE shall always overwrite previous configuration data, irrespective of whether the configuration data was provided via the online or offline mechanism.

A.2 MC service UE configuration data

MC service UE configuration data has to be known by an MC service UE after MC service authorization. The CSC-4 reference point, specified in 3GPP TS 23.280 [3], is used for configuration between the configuration management server and the configuration management client on the MC service UE when the MC service UE is on-network. MC service UE configuration data can be configured offline using the CSC-11 reference point specified in 3GPP TS 23.280 [3] and 3GPP TS 23.289 [12]. Within each MC service, the MC service UE configuration data can be the same or different across MC service UEs.

The MCPTT UE configuration data specified in table A.2-1 in 3GPP TS 23.379 [5] is also used, as needed, in the IOPS mode of operation for the MCPTT service. The MCData UE configuration data specified in table A.2-1 in 3GPP TS 23.282 [6] is also used, as needed, in the IOPS mode of operation for the MCData service.

A.3 MC service user profile configuration data

The MC service user profile configuration data is stored in the MC service user database. The configuration management server is used to configure the MC service user profile configuration data to the MC service user database (CSC-13) and the MC service UE (CSC-4), as specified in 3GPP TS 23.280 [3]. MC service user profile configuration data can be configured offline using the CSC-11 reference point specified in 3GPP TS 23.280 [3].

For the MCPTT service, the MCPTT user profile configuration data specified in table A.3-1 in 3GPP TS 23.379 [5] is also used, as needed, in the IOPS mode of operation, wherein the IOPS MCPTT user identity (IOPS MCPTT ID) can be the MCPTT user identity (MCPTT ID) or a specific ID configured for the IOPS mode of operation. For the MCData service, the MCData user profile configuration data specified in table A.3-1 in 3GPP TS 23.282 [6] is also used, as needed, in the IOPS mode of operation, wherein the IOPS MCData user identity (IOPS MCData ID) can be the MCData user identity (MCData ID) or a specific ID configured for the IOPS mode of operation.

Table A.3-1 below contains the additional MC service user profile configuration data required to support MC services in the IOPS mode of operation.
Table A.3-1: MC service user profile data (IOPS)
Reference | Parameter description | MC service UE | Configuration management server | MC service user database
[R-10-001] of 3GPP TS 22.280 [11] | List of IOPS MC service groups for use by an MC service user | Y | Y | Y
 | > IOPS MC service Group ID | | |
 | > Application plane server identity information of group management server where group is defined | | |
 | >> Server URI | Y | Y | Y
 | > Application plane server identity information of identity management server which provides authorization for group (see NOTE 1) | | |
 | >> Server URI | Y | Y | Y
[R-10-001] of 3GPP TS 22.280 [11] | Authorization for participant to change an IOPS group call in-progress to IOPS emergency group call (see NOTE 2) | Y | Y | Y
[R-10-001] of 3GPP TS 22.280 [11] | Authorization for MC services in the IOPS mode of operation | Y | Y | Y
Clause 10.2.2.3 | Authorization for participant to indicate availability of connectivity information | Y | Y | Y
Clause 10.2.2.3 | Authorization for participant to request priority state | Y | Y | Y
NOTE 1: If this parameter is not configured, authorization to use the group shall be obtained from the identity management server identified in the initial MC service UE configuration data configured in 3GPP TS 23.280 [3].
NOTE 2: This parameter only applies for the MCPTT service.

A.4 Group configuration data

As specified in 3GPP TS 23.280 [3], the group configuration data is stored in the group management server. The group management server is used to configure the group configuration data to the MC service UE (CSC-2). The group configuration data can be configured offline using the CSC-12 reference point. The common group configuration data specified in table A.4-1 in 3GPP TS 23.280 [3] is also used, as needed, in the IOPS mode of operation.

Table A.4-1 below contains the additional group configuration data required to support MC services in the IOPS mode of operation.

Table A.4-1: Group configuration data (IOPS)
Reference | Parameter description | MC service UE | Group management server
[R-10-001] of 3GPP TS 22.280 [11] | List of IOPS MC service groups | Y | Y
 | > IOPS MCPTT Group ID | |
Clause 8.1.3 | >> IOPS group IP multicast address | Y | Y
 | >> Preferred voice codecs for IOPS MCPTT group | Y | Y
 | >> Indication whether emergency group call is permitted on the IOPS MCPTT group | Y | Y
 | > IOPS MCData Group ID | |
Clause 8.1.3 | >> IOPS group IP multicast address | Y | Y
 | >> MCData sub-services and features enabled for the group | |
 | >>> Short data service enabled | Y | Y
 | >>> Whether MCData user is permitted to transmit data in the group | Y | Y
 | >>> Maximum amount of data that the MCData user can transmit in a single request during group communication | Y | Y
 | >>> Maximum amount of time that the MCData user can transmit in a single request during group communication | Y | Y

A.5 MC service configuration data

As specified in 3GPP TS 23.280 [3], the configuration management server is used to configure the MC service configuration data to the MC service UE (CSC-4). The MC service configuration data can be configured offline using the CSC-11 reference point.

Tables A.5-1 and A.5-2 describe the configuration data required to support, in IOPS, the use of the MCPTT service and the MCData service, respectively.
Table A.5-1: MCPTT service configuration data (IOPS)
Reference | Parameter description | MCPTT UE | Configuration management server
[R-10-001] of 3GPP TS 22.280 [11] | Max IOPS private call (with floor control) duration | Y | Y
[R-10-001] of 3GPP TS 22.280 [11] | Hang timer for private calls in IOPS | Y | Y
[R-10-001] of 3GPP TS 22.280 [11] | Priority hierarchy for floor control override in IOPS | Y | Y
[R-10-001] of 3GPP TS 22.280 [11] | Transmit time limit from a single request to transmit in a group or private call | Y | Y
[R-10-001] of 3GPP TS 22.280 [11] | Configuration of warning time before time limit of transmission is reached in an IOPS call | Y | Y
[R-10-001] of 3GPP TS 22.280 [11] | Configuration of warning time before hang time is reached in an IOPS call | Y | Y
[R-10-001] of 3GPP TS 22.280 [11] | Configuration of metadata to log | Y | Y

Table A.5-2: MCData service configuration data (IOPS)
Reference | Parameter description | MCData UE | Configuration management server
[R-10-001] of 3GPP TS 22.280 [11] | Transmission and reception control | |
 | > Time limit for the temporarily stored data waiting to be delivered to a receiving user | Y | Y
 | > Timer for periodic announcement with the list of available recently invited data group communications | Y | Y

A.6 Initial MC service UE configuration data

Initial MC service UE configuration data is essential to the MC service UE to successfully connect to the MC system, as described in 3GPP TS 23.280 [3] and 3GPP TS 23.289 [12]. The configuration data defined in table A.6-1 is additionally provided to the MC service UE's clients to successfully connect to the IOPS MC system in the IOPS mode of operation. The MC service UE's clients (e.g. MC service client, IOPS connectivity client) obtain the data during the bootstrap process (described in clause 10.1.1 in 3GPP TS 23.280 [3]), and the data can be configured on the MC service UE offline using the CSC-11 reference point or via other means, e.g. as part of the MC service client's provisioning on the UE, using a device management procedure.

Table A.6-1: Initial MC service UE configuration data (IOPS)
Reference | Parameter description
Clause 5.4.3 | PDN connectivity information in IOPS (see NOTE 1)
 | > IOPS HPLMN ID and optionally IOPS VPLMN ID to which the data pertains
 | > MC services PDN in IOPS
 | >> APN
 | >> PDN access credentials
 | PDU session information in IOPS (see NOTE 2)
 | > IOPS HPLMN ID and optionally IOPS VPLMN ID to which the data pertains
 | > MC service PDU in IOPS
 | >> DNN
 | >> PDU access credentials
 | > Default configured slice(s) S-NSSAI(s)
 | Application plane server identity information
 | > Indication of whether the UE shall use IPv4 or IPv6 for the support of MC services in IOPS
 | > IOPS MC connectivity function
 | >> Server URI
NOTE 1: These configurations shall only be used for access via the IOPS EPS system.
NOTE 2: These configurations shall only be used for access via the IOPS 5GS system.

A.7 Location user profile configuration data

The location user profile configuration data defined in annex A.8 of 3GPP TS 23.280 [3] is applicable, as needed, in the IOPS mode of operation.

Annex B (informative): Change history

Change history
Date | Meeting | TDoc | CR | Rev | Cat | Subject/Comment | New version
2019-09 | SA6#33 | | | | | TS skeleton | 0.0.0
2019-09 | SA6#33 | | | | | Implementation of the following pCRs approved by SA6: S6-191814, S6-191821, S6-191815, S6-191816, S6-191817, S6-191818, S6-191851, S6-191820, S6-191822. Editorial changes by the rapporteur. | 0.1.0
2019-11 | SA6#34 | | | | | Implementation of the following pCRs approved by SA6: S6-192226, S6-192227, S6-192228, S6-192229, S6-192230, S6-192231, S6-192233, S6-192349, S6-192350. Editorial changes by the rapporteur. | 0.2.0
2020-01 | SA6#35 | | | | | Implementation of the following pCRs approved by SA6: S6-200280, S6-200185, S6-200186, S6-200187, S6-200097, S6-200098. Editorial changes by the rapporteur. | 0.3.0
2020-04 | SA6#36 BIS-e | | | | | Implementation of the following pCRs approved by SA6: S6-200588, S6-200589, S6-200590, S6-200591, S6-200561, S6-200562, S6-200563. Editorial changes by the rapporteur. | 0.4.0
2020-05 | SA6#37-e | | | | | Implementation of the following pCRs approved by SA6: S6-200928, S6-200933. Editorial changes by the rapporteur. | 0.5.0
2020-06 | SA#88-e | SP-200334 | | | | Presentation for information at SA#88-e | 1.0.0
2020-07 | SA6#38-e | | | | | Implementation of the following pCRs approved by SA6: S6-201084, S6-201085, S6-201087, S6-201088, S6-201108, S6-201109, S6-201110. Editorial changes by the rapporteur. | 1.1.0
2020-09 | SA6#39-e | | | | | Implementation of the following pCRs approved by SA6: S6-201432, S6-201433, S6-201434, S6-201435, S6-201532. Editorial changes by the rapporteur. | 1.2.0
2020-09 | SA#89-e | SP-200826 | | | | Presentation for approval at SA#89-e | 2.0.0
2020-09 | SA#89-e | SP-200826 | | | | MCC Editorial update for publication after TSG SA approval (SA#89) | 17.0.0
2024-05 | | | | | | Update to Rel-18 version (MCC) | 18.0.0
2024-12 | SA#106 | SP-241726 | 0001 | 1 | B | Updates to sections 10.6.1.4 and 10.7 to support generic IOPS | 19.0.0
2024-12 | SA#106 | SP-241726 | 0002 | 2 | B | Updates to sections 10.5.1.3, 10.5.1.4, 10.5.3.1, and 10.5.3.3 to support generic IOPS | 19.0.0
2024-12 | SA#106 | SP-241726 | 0003 | | B | Clause 4 update | 19.0.0
2024-12 | SA#106 | SP-241726 | 0004 | | B | Clause 5 update | 19.0.0
2024-12 | SA#106 | SP-241726 | 0005 | | B | Clause 6 update | 19.0.0
2024-12 | SA#106 | SP-241726 | 0006 | 1 | B | Clause 7 update | 19.0.0
2024-12 | SA#106 | SP-241726 | 0007 | | B | Clauses 10.2.3, 10.3.3 and 10.6.1.3.2 update | 19.0.0
2024-12 | SA#106 | SP-241726 | 0008 | 1 | B | Clause A.1, A.2 and A.6 update | 19.0.0
2024-12 | SA#106 | SP-241726 | 0010 | 1 | B | Updates to clauses 2 and 3 (References, Definitions, Abbreviations) to support access agnostic IOPS | 19.0.0
2024-12 | SA#106 | SP-241726 | 0011 | 1 | B | Updates to clause 1 (Scope) to support access generic IOPS | 19.0.0
2024-12 | SA#106 | SP-241726 | 0012 | 1 | B | Update the IOPS network deployment | 19.0.0
2024-12 | SA#106 | SP-241726 | 0013 | 1 | B | Update MBMS transmissions aspect | 19.0.0
2024-12 | SA#106 | SP-241726 | 0014 | 1 | B | Updates to clause 10.2.2.3 (IOPS discovery request) to support access generic IOPS | 19.0.0
5817f2cb61f7e615e7879961cd761fa6
23.203
1 Scope
The present document specifies the overall stage 2 level functionality for Policy and Charging Control that encompasses the following high level functions for IP‑CANs (e.g. GPRS, Fixed Broadband, EPC, etc.):
- Flow Based Charging for network usage, including charging control and online credit control, for service data flows and application traffic;
- Policy control (e.g. gating control, QoS control, QoS signalling, etc.).

The present document specifies the Policy and Charging Control functionality for the Evolved 3GPP Packet Switched domain, including both 3GPP accesses (GERAN/UTRAN/E-UTRAN) and Non-3GPP accesses, according to TS 23.401 [17] and TS 23.402 [18].

The present document specifies functionality for unicast bearers. Broadcast and multicast bearers, such as MBMS contexts for GPRS, are out of scope of the present document.

NOTE: For E-UTRAN access, the usage of functionalities covered in this specification for features such as MBMS, CIoT and V2X is described in TS 23.246 [6], TS 23.682 [42] and TS 23.285 [48], respectively.
5817f2cb61f7e615e7879961cd761fa6
23.203
2 References
The following documents contain provisions which, through reference in this text, constitute provisions of the present document.
- References are either specific (identified by date of publication, edition number, version number, etc.) or non‑specific.
- For a specific reference, subsequent revisions do not apply.
- For a non-specific reference, the latest version applies. In the case of a reference to a 3GPP document (including a GSM document), a non-specific reference implicitly refers to the latest version of that document in the same Release as the present document.

[1] 3GPP TS 41.101: "Technical Specifications and Technical Reports for a GERAN-based 3GPP system".
[2] Void.
[3] 3GPP TS 32.240: "Telecommunication management; Charging management; Charging architecture and principles".
[4] IETF RFC 4006: "Diameter Credit-Control Application".
[5] 3GPP TS 23.207: "End-to-end Quality of Service (QoS) concept and architecture".
[6] 3GPP TS 23.246: "Multimedia Broadcast/Multicast Service (MBMS); Architecture and functional description".
[7] 3GPP TS 23.125: "Overall high level functionality and architecture impacts of flow based charging; Stage 2".
[8] 3GPP TR 21.905: "Vocabulary for 3GPP Specifications".
[9] 3GPP TS 32.251: "Telecommunication management; Charging management; Packet Switched (PS) domain charging".
[10] 3GPP TS 29.061: "Interworking between the Public Land Mobile Network (PLMN) supporting packet based services and Packet Data Networks (PDN)".
[11] 3GPP TR 33.919: "3G Security; Generic Authentication Architecture (GAA); System description".
[12] 3GPP TS 23.060: "General Packet Radio Service (GPRS); Service description; Stage 2".
[13] Void.
[14] 3GPP TS 23.107: "Quality of Service (QoS) concept and architecture".
[15] "WiMAX End-to-End Network Systems Architecture" (http://www.wimaxforum.org/technology/documents).
[16] 3GPP TS 23.003: "Numbering, addressing and identification".
[17] 3GPP TS 23.401: "General Packet Radio Service (GPRS) enhancements for Evolved Universal Terrestrial Radio Access Network (E-UTRAN) access".
[18] 3GPP TS 23.402: "Architecture Enhancements for non-3GPP accesses".
[19] 3GPP TS 36.300: "Evolved Universal Terrestrial Radio Access (E-UTRA) and Evolved Universal Terrestrial Radio Access Network (E-UTRAN); Overall description; Stage 2".
[20] 3GPP2 X.S0057-B v2.0: "E UTRAN - HRPD Connectivity and Interworking: Core Network Aspects", July 2014.
[21] 3GPP TS 23.167: "IP Multimedia Subsystem (IMS) emergency sessions".
[22] 3GPP TS 29.213: "Policy and Charging Control signalling flows and QoS parameter mapping".
[23] 3GPP TS 23.261: "IP Flow Mobility and seamless WLAN offload; Stage 2".
[24] 3GPP TS 23.198: "Open Service Access (OSA); Stage 2".
[25] 3GPP TS 23.335: "User Data Convergence (UDC); Technical realization and information flows; Stage 2".
[26] 3GPP TS 29.335: "User Data Convergence (UDC); User Data Repository Access Protocol over the Ud interface; Stage 3".
[27] 3GPP TS 22.115: "Service aspects; Charging and billing".
[28] 3GPP TS 23.216: "Single Radio Voice Call Continuity (SRVCC); Stage 2".
[29] 3GPP TS 23.139: "3GPP-Fixed Broadband Access Interworking".
[30] Broadband Forum TR-203: "Interworking between Next Generation Fixed and 3GPP Wireless Access" (work in progress).
[31] Broadband Forum TR-134: "Policy Control Framework" (work in progress).
[32] 3GPP TS 25.467: "UTRAN architecture for 3G Home Node B (HNB); Stage 2".
[33] Broadband Forum TR-291: "Nodal Requirements for Interworking between Next Generation Fixed and 3GPP Wireless Access" (work in progress).
[34a] Broadband Forum TR-124 issue 2: "Functional Requirements for Broadband Residential Gateway Devices".
[34b] Broadband Forum TR-124 issue 3: "Functional Requirements for Broadband Residential Gateway Devices".
[35] Broadband Forum TR-101: "Migration to Ethernet-Based Broadband Aggregation".
[36] Broadband Forum TR-146: "Internet Protocol (IP) Sessions".
[37] Broadband Forum TR-300: "Nodal Requirements for Converged Policy Management".
[38] 3GPP TS 22.278: "Service requirements for the Evolved Packet System (EPS)".
[39] 3GPP TS 23.228: "IP Multimedia Subsystem (IMS); Stage 2".
[40] Broadband Forum TR-092: "Broadband Remote Access Server (BRAS) Requirements Document".
[41] Broadband Forum TR-134: "Broadband Policy Control Framework (BPCF)".
[42] 3GPP TS 23.682: "Architecture enhancements to facilitate communications with packet data networks and applications".
[43] 3GPP TS 23.161: "Network-based IP flow mobility and Wireless Local Area Network (WLAN) offload; Stage 2".
[44] 3GPP TS 23.303: "Proximity-based services (ProSe); Stage 2".
[45] 3GPP TS 26.114: "Multimedia telephony over IP Multimedia Subsystem (IMS); Multimedia telephony; media handling and interaction".
[46] 3GPP TS 23.179: "Functional architecture and information flows to support mission-critical communication service; Stage 2".
[47] IETF RFC 6066: "Transport Layer Security (TLS) Extensions: Extension Definitions".
[48] 3GPP TS 23.285: "Architecture enhancements for V2X services".
[49] 3GPP TS 22.011: "Service accessibility".
[50] 3GPP TS 24.008: "Mobile radio interface Layer 3 specification; Core network protocols; Stage 3".
[51] 3GPP TS 22.261: "Service requirements for the 5G system; Stage 1".
[52] 3GPP TS 23.272: "Circuit Switched (CS) fallback in Evolved Packet System (EPS); Stage 2".
[53] 3GPP TS 26.238: "Uplink streaming".
[54] 3GPP TR 26.939: "Guidelines on the Framework for Live Uplink Streaming (FLUS)".
[55] 3GPP TS 23.221: "3rd Generation Partnership Project; Technical Specification Group Services and Systems Aspects; Architectural Requirements".
[56] 3GPP TS 23.204: "Support of Short Message Service (SMS) over generic Internet Protocol (IP) access; Stage 2".
5817f2cb61f7e615e7879961cd761fa6
23.203
3 Definitions, symbols and abbreviations
5817f2cb61f7e615e7879961cd761fa6
23.203
3.1 Definitions
For the purposes of the present document, the terms and definitions given in TR 21.905 [8] and the following apply. A term defined in the present document takes precedence over the definition of the same term, if any, in TR 21.905 [8].

application detection filter: A logic used to detect packets generated by an application based on extended inspection of these packets, e.g. header and/or payload information, as well as dynamics of packet flows. The logic is entirely internal to a TDF or a PCEF enhanced with ADC and is out of scope of this specification.
application identifier: An identifier referring to a specific application detection filter.
application service provider: A business entity responsible for the application that is being / will be used by a UE, which may be either an AF operator or has an association with the AF operator.
ADC decision: A decision consists of references to ADC rules, associated enforcement actions (for dynamic ADC rules) and TDF session attributes and is provided by the PCRF to the TDF for application detection and control.
ADC rule: A set of information enabling the detection of application traffic and associated enforcement actions. ADC rules are directly provisioned into the TDF and referenced by the PCRF.
authorised QoS: The maximum QoS that is authorised for a service data flow. In case of an aggregation of multiple service data flows within one IP‑CAN bearer (e.g. for GPRS a PDP context), the combination of the "Authorised QoS" information of the individual service data flows is the "Authorised QoS" for the IP‑CAN bearer. It contains the QoS class identifier and the data rate.
binding: The association between a service data flow and the IP‑CAN bearer (for GPRS the PDP context) transporting that service data flow.
binding mechanism: The method for creating, modifying and deleting bindings.
charging control: The process of associating packets, belonging to a service data flow, to a charging key and applying online charging and/or offline charging, as appropriate.
charging key: Information used by the online and offline charging system for rating purposes.
detected application traffic: An aggregate set of packet flows that are generated by a given application and detected by an application detection filter.
dynamic ADC Rule: An ADC rule, for which the PCRF can provide and modify some parameters via the Sd reference point.
dynamic PCC Rule: A PCC rule, for which the definition is provided to the PCEF via the Gx reference point.
event report: A notification, possibly containing additional information, of an event which occurs that corresponds with an event trigger. Also, an event report is a report from the PCRF to the AF concerning transmission resources or requesting additional information.
event trigger: A rule specifying the event reporting behaviour of a PCEF or BBERF or TDF. Also, it is a trigger for credit management events.
gating control: The process of blocking or allowing packets, belonging to a service data flow / detected application's traffic, to pass through to the desired endpoint.
Gateway Control Session: An association between a BBERF and a PCRF (when GTP is not used in the EPC), used for transferring access specific parameters, BBERF events and QoS rules between PCRF and BBERF.
GBR bearer: An IP‑CAN bearer with reserved (guaranteed) bitrate resources.
GPRS IP‑CAN: This IP‑CAN incorporates GPRS over GERAN and UTRAN, see TS 23.060 [12].
IP‑CAN bearer: An IP transmission path of defined capacity, delay and bit error rate, etc. See TR 21.905 [8] for the definition of bearer.
IP‑CAN session: The association between a UE and an IP network. The association is identified by one IPv4 and/or an IPv6 prefix together with UE identity information, if available, and a PDN represented by a PDN ID (e.g. an APN). An IP‑CAN session incorporates one or more IP‑CAN bearers. Support for multiple IP‑CAN bearers per IP‑CAN session is IP‑CAN specific. An IP‑CAN session exists as long as UE IP addresses/prefix are established and announced to the IP network.
non-GBR bearer: An IP‑CAN bearer with no reserved (guaranteed) bitrate resources.
operator-controlled service: A service for which complete PCC rule information, including service data flow filter information, is available in the PCRF through configuration and/or dynamic interaction with an AF.
packet flow: A specific user data flow from and/or to the UE.
Presence Reporting Area: An area defined within 3GPP Packet Domain for the purposes of reporting of UE presence within that area due to policy control and/or charging reasons. There are two types of Presence Reporting Areas: "UE-dedicated Presence Reporting Areas" and "Core Network pre-configured Presence Reporting Areas". They are further defined in TS 23.401 [17].
PCC decision: A decision consists of PCC rules and IP‑CAN bearer attributes and is provided by the PCRF to the PCEF for policy and charging control and, for PCEF enhanced with ADC, application detection and control.
PCC rule: A set of information enabling the detection of a service data flow and providing parameters for policy control and/or charging control and, for PCEF enhanced with ADC, for application detection and control.
PCEF enhanced with ADC: PCEF, enhanced with application detection and control feature.
policy control: The process whereby the PCRF indicates to the PCEF how to control the IP‑CAN bearer. Policy control includes QoS control and/or gating control.
predefined PCC Rule: A PCC rule that has been provisioned directly into the PCEF by the operator.
policy counter: A mechanism within the OCS to track spending applicable to a subscriber.
policy counter identifier: A reference to a policy counter in the OCS for a subscriber.
policy counter status: A label whose values are not standardized and that is associated with a policy counter's value relative to the spending limit(s) (the number of possible policy counter status values for a policy counter is one greater than the number of thresholds associated with that policy counter, i.e. policy counter status values describe the status around the thresholds). This is used to convey information relating to subscriber spending from OCS to PCRF. Specific labels are configured jointly in OCS and PCRF.
Packet Flow Description (PFD): A set of information enabling the detection of application traffic provided by a 3rd party service provider. A PFD is further defined in TS 23.682 [42].
QoS class identifier (QCI): A scalar that is used as a reference to a specific packet forwarding behaviour (e.g. packet loss rate, packet delay budget) to be provided to a SDF. This may be implemented in the access network by the QCI referencing node specific parameters that control packet forwarding treatment (e.g. scheduling weights, admission thresholds, queue management thresholds, link layer protocol configuration, etc.), that have been pre-configured by the operator at a specific node(s) (e.g. eNodeB).
QoS rule: A set of information enabling the detection of a service data flow and defining its associated QoS parameters.
Monitoring key: Information used by the PCEF, TDF and PCRF for usage monitoring control purposes as a reference to a given set of service data flows or application(s), that all share a common allowed usage on a per UE and APN basis.
RAN user plane congestion: RAN user plane congestion occurs when the demand for RAN resources exceeds the available RAN capacity to deliver the user data for a prolonged period of time.
NOTE 1: Short-duration traffic bursts are a normal condition at any traffic load level and are not considered to be RAN user plane congestion. Likewise, a high level of utilization of RAN resources (based on operator configuration) is considered a normal mode of operation and might not be RAN user plane congestion.
Redirection: Redirecting the detected service traffic to an application server (e.g. redirect to a top-up / service provisioning page).
service data flow: An aggregate set of packet flows carried through the PCEF that matches a service data flow template.
service data flow filter: A set of packet flow header parameter values/ranges used to identify one or more of the packet flows. The possible service data flow filters are defined in clause 6.2.2.2.
service data flow filter identifier: A scalar that is unique for a specific service data flow (SDF) filter (used on Gx and Gxx) within an IP‑CAN session.
service data flow template: The set of service data flow filters in a PCC Rule, or an application identifier in a PCC rule referring to an application detection filter, required for defining a service data flow.
service identifier: An identifier for a service. The service identifier provides the most detailed identification, specified for flow based charging, of a service data flow. A concrete instance of a service may be identified if additional AF information is available (further details to be found in clause 6.3.1).
session based service: An end user service requiring application level signalling, which is separated from service rendering.
spending limit: A spending limit is the usage limit of a policy counter (e.g. monetary, volume, duration) that a subscriber is allowed to consume.
spending limit report: A notification, containing the current policy counter status, generated from the OCS to the PCRF via the Sy reference point.
subscribed guaranteed bandwidth QoS: The per subscriber, authorized cumulative guaranteed bandwidth QoS which is provided by the SPR/UDR to the PCRF.
subscriber category: A means to group subscribers into different classes, e.g. the gold user, the silver user and the bronze user.
(S)Gi-LAN: The network infrastructure connected to the 3GPP network over the SGi or Gi reference point that provides various IP-based services.
(S)Gi-LAN service function: A function located in the (S)Gi-LAN that provides value-added IP-based services, e.g. NAT, anti-malware, parental control, DDoS protection.
TDF session: An association between an IP-CAN session and the assigned TDF for the purpose of application detection and control.
uplink bearer binding verification: The network enforcement of terminal compliance with the negotiated uplink traffic mapping to bearers.

For the purposes of the present document, the following terms and definitions given in TS 23.401 [17] apply:
Narrowband-IoT: See TS 23.401 [17].
WB-E-UTRAN: See TS 23.401 [17].
5817f2cb61f7e615e7879961cd761fa6
23.203
3.2 Abbreviations
For the purposes of the present document, the abbreviations given in TR 21.905 [8] and the following apply. An abbreviation defined in the present document takes precedence over the definition of the same abbreviation, if any, in TR 21.905 [8].

ADC Application Detection and Control
AF Application Function
AMBR Aggregated Maximum Bitrate
ARP Allocation and Retention Priority
ASP Application Service Provider
BBERF Bearer Binding and Event Reporting Function
BBF Bearer Binding Function
BBF AAA Broadband Forum AAA
BNG Broadband Network Gateway
BPCF Broadband Policy Control Function
BRAS Broadband Remote Access Server
CSG Closed Subscriber Group
CSG ID Closed Subscriber Group Identity
DRA Diameter Routing Agent
E-UTRAN Evolved Universal Terrestrial Radio Access Network
H-PCEF A PCEF in the HPLMN
H-PCRF A PCRF in the HPLMN
HRPD High Rate Packet Data
HSGW HRPD Serving Gateway
IP‑CAN IP Connectivity Access Network
MPS Multimedia Priority Service
NB-IoT Narrowband IoT
NBIFOM Network-based IP flow mobility
NSWO Non-Seamless WLAN Offload
OAM Operation Administration and Maintenance
OFCS Offline Charging System
OCS Online Charging System
PCC Policy and Charging Control
PCEF Policy and Charging Enforcement Function
PCRF Policy and Charging Rules Function
PFDF Packet Flow Description Function
PRA Presence Reporting Area
QCI QoS Class Identifier
RAN Radio Access Network
RCAF RAN Congestion Awareness Function
RLOS Restricted Local Operator Services
RUCI RAN User Plane Congestion Information
RG Residential Gateway
SCEF Service Capability Exposure Function
vSRVCC video Single Radio Voice Call Continuity
SPR Subscription Profile Repository
TDF Traffic Detection Function
TSSF Traffic Steering Support Function
UDC User Data Convergence
UDR User Data Repository
V-PCEF A PCEF in the VPLMN
V-PCRF A PCRF in the VPLMN
WB-E-UTRAN Wide Band E-UTRAN
5817f2cb61f7e615e7879961cd761fa6
23.203
4 High level requirements
5817f2cb61f7e615e7879961cd761fa6
23.203
4.1 General requirements
It shall be possible for the PCC architecture to base decisions upon subscription information.

It shall be possible to apply policy and charging control to any kind of 3GPP IP‑CAN and any non-3GPP accesses connected via EPC complying with TS 23.402 [18]. Applicability of PCC to other IP‑CANs is not restricted. However, it shall be possible for the PCC architecture to base decisions upon the type of IP‑CAN used (e.g. GPRS, etc.).

The policy and charging control shall be possible in the roaming and local breakout scenarios defined in TS 23.401 [17] and TS 23.402 [18].

The PCC architecture shall discard packets that do not match any service data flow of the active PCC rules. It shall also be possible for the operator to define PCC rules, with wild-carded service data flow filters, to allow for the passage of, and charging for, packets that do not match any service data flow template of any other active PCC rules.

The PCC architecture shall allow the charging control to be applied on a per service data flow and on a per application basis, independent of the policy control.

The PCC architecture shall have a binding method that allows the unique association between service data flows and their IP‑CAN bearer. A single service data flow detection shall suffice for the purpose of both policy control and flow based charging.

A PCC rule may be predefined or dynamically provisioned at establishment and during the lifetime of an IP‑CAN session. The latter is referred to as a dynamic PCC rule.

The number of real-time PCC interactions shall be minimized, without significantly increasing the overall system reaction time. This requires optimized interfaces between the PCC nodes.

It shall be possible to take a PCC rule into service and out of service at a specific time of day, without any PCC interaction at that point in time.

It shall be possible to take APN-related policy information into service and out of service, once validity conditions specified as part of the APN-related policy information are fulfilled or no longer fulfilled, respectively, without any PCC interaction at that point in time.

PCC shall be enabled on a per PDN basis (represented by an access point and the configured range of IP addresses) at the PCEF. It shall be possible for the operator to configure the PCC architecture to perform charging control, policy control or both for a PDN access.

PCC shall support roaming users.

The PCC architecture shall allow the resolution of conflicts which would otherwise cause a subscriber's Subscribed Guaranteed Bandwidth QoS to be exceeded.

The PCC architecture shall support topology hiding.

It should be possible to use the PCC architecture for handling IMS-based emergency service and Restricted Local Operator Services.

It shall be possible with the PCC architecture, in real-time, to monitor the overall amount of resources that are consumed by a user and to control usage independently from charging mechanisms, the so-called usage monitoring control.

It shall be possible for the PCC architecture to provide application awareness even when there is no explicit service level signalling.

The PCC architecture shall support making policy decisions based on subscriber spending limits.

The PCC architecture shall support making policy decisions based on RAN user plane congestion status.

The PCC architecture shall support making policy decisions for the multi-access IP flow mobility solution described in TS 23.161 [43].

The PCC architecture shall support making policy decisions for (S)Gi-LAN traffic steering.
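As an informal illustration only (not part of this specification), the following simplified Python sketch shows the discard-by-default behaviour together with an operator-defined rule using wild-carded filters. The rule names, the two-field filter and the precedence attribute are assumptions for this example.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class SdfFilter:
        # None acts as a wildcard for that field
        remote_ip: Optional[str] = None
        remote_port: Optional[int] = None

        def matches(self, ip: str, port: int) -> bool:
            return ((self.remote_ip is None or self.remote_ip == ip) and
                    (self.remote_port is None or self.remote_port == port))

    @dataclass
    class PccRule:
        name: str
        precedence: int            # lower value is evaluated first
        filters: List[SdfFilter]

    def classify(ip: str, port: int, active_rules: List[PccRule]) -> Optional[str]:
        """Return the name of the first matching rule, or None (packet is discarded)."""
        for rule in sorted(active_rules, key=lambda r: r.precedence):
            if any(f.matches(ip, port) for f in rule.filters):
                return rule.name
        return None   # no match: the PCC architecture discards the packet

    rules = [
        PccRule("web", 10, [SdfFilter(remote_port=80)]),
        PccRule("catch-all", 1000, [SdfFilter()]),   # wild-carded filters
    ]
    assert classify("198.51.100.7", 80, rules) == "web"
    assert classify("198.51.100.7", 8443, rules) == "catch-all"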
5817f2cb61f7e615e7879961cd761fa6
23.203
4.2 Charging related requirements
5817f2cb61f7e615e7879961cd761fa6
23.203
4.2.1 General
In order to allow for charging control on a service data flow, the information in the PCC rule identifies the service data flow and specifies the parameters for charging control. The PCC rule information may depend on subscription data.

In order to allow for charging control on detected application traffic identified by an ADC Rule for the TDF, the information in the ADC rule contains the application identifier and specifies the parameters for charging control. The ADC rule information may depend on subscription data.

For the purpose of charging correlation between application level (e.g. IMS) and service data flow level, applicable charging identifiers shall be passed along within the PCC architecture, if such identifiers are available.

For the purpose of charging correlation between service data flow level and application level (e.g. IMS) as well as on-line charging support at the application level, applicable charging identifiers and IP‑CAN type identifiers shall be passed from the PCRF to the AF, if such identifiers are available.
5817f2cb61f7e615e7879961cd761fa6
23.203
4.2.2 Charging models
The PCC charging shall support the following charging models, both for charging performed by the PCEF and charging performed by the TDF:
- Volume based charging;
- Time based charging;
- Volume and time based charging;
- Event based charging;
- No charging.

NOTE 1: The charging model "No charging" implies that charging control is not applicable.

Shared revenue services shall be supported. In this case settlement for all parties shall be supported, including the third parties that may have been involved in providing the services.

NOTE 2: When developing a charging solution, the PCC charging models may be combined to form the solution. How to achieve a specific solution is however not within the scope of this TS.

NOTE 3: The Event based charging defined in this specification applies only to session based charging as defined by the charging specifications.

4.2.2a Charging requirements

The requirements in this clause apply to both PCC rules based charging and ADC rules based charging unless exceptions are explicitly mentioned.

It shall be possible to apply different rates and charging models when a user is identified to be roaming from when the user is in the home network. Furthermore, it shall be possible to apply different rates and charging models based on the location of a user, beyond the granularity of roaming.

It shall be possible to apply different rates and charging models when a user consumes network services via a CSG cell or a hybrid cell, according to the user CSG information. User CSG information includes CSG ID, access mode and CSG membership indication.

It shall be possible to apply a separate rate to a specific service, e.g. allow the user to download a certain volume of data, reserved for the purpose of one service, for free and then continue with a rate causing a charge.

It shall be possible to change the rate based on the time of day.

It shall be possible to enforce per-service (identified by PCC Rule) / per-application (identified by ADC Rule) usage limits for a service data flow using online charging on a per user basis (may apply to prepaid and post-paid users).

It shall be possible to apply different rates depending on the access used to carry a Service Data Flow. This also applies to a PDN connection supporting NBIFOM.

It shall be possible for the online charging system to set and send the thresholds (time and/or volume based) for the amount of remaining credit to the PCEF or TDF for monitoring. In case the PCEF or TDF detects that any of the time based or volume based credit falls below the threshold, the PCEF or TDF shall send a request for credit re-authorization to the OCS with the remaining credit (time and/or volume based).

It shall be possible for the charging system to select the applicable rate based on:
- home/visited IP‑CAN;
- User CSG information;
- IP‑CAN bearer characteristics (e.g. QoS);
- QoS provided for the service;
- time of day;
- IP‑CAN specific parameters according to Annex A.

IP-CAN bearer characteristics are not applicable to charging performed in the TDF.

NOTE 1: The same IP-CAN parameters related to access network/subscription/location information as reported for service data flow based charging may need to be reported for the application based charging at the beginning of the session and following any of the relevant re-authorization triggers.

The charging system maintains the tariff information, determining the rate based on the above input.
Thus the rate may change, e.g. as a result of an IP‑CAN session modification changing the bearer characteristics provided for a service data flow.

The charging rate or charging model applicable to a service data flow / detected application traffic may change as a result of events in the service (e.g. insertion of a paid advertisement within a user requested media stream).

The charging model applicable to a service data flow / detected application traffic may change as a result of events identified by the OCS (e.g. after having spent a certain amount of time and/or volume, the user gets to use some services for free).

The charging rate or charging model applicable to a service data flow / detected application traffic may change as a result of having used the service data flow / detected application traffic for a certain amount of time and/or volume.

For online charging, it shall be possible to apply an online charging action upon PCEF or TDF events (e.g. re-authorization upon QoS change). It shall be possible to apply an online charging action for a detected application upon Application Start/Stop events.

It shall be possible to indicate to the PCEF or TDF that interactions with the charging systems are not required for a PCC or ADC rule, i.e. to perform neither accounting nor credit control for the service data flow / detected application traffic, in which case no offline charging information is generated.

This specification supports charging and enforcement being done in either the PCEF or the TDF for a certain IP-CAN session, but not both for the same IP-CAN session (this applies to all IP-CAN sessions belonging to the same APN).

NOTE 2: The above requirement is to ensure that there is no double charging in both the TDF and the PCEF, or over-charging in case of packets discarded at the PCEF or TDF.
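As an illustration only (not part of this specification), the following Python sketch models the remaining-credit threshold behaviour described above: the OCS grants credit with a threshold, and the PCEF/TDF requests re-authorization when the remaining credit falls below it. The class and message shapes are assumptions for this example; only volume-based credit is shown, time-based credit would be handled analogously.

    class CreditMonitor:
        """PCEF/TDF-side tracking of granted volume credit against an
        OCS-provided threshold (illustrative)."""

        def __init__(self, granted_bytes: int, threshold_bytes: int):
            self.remaining = granted_bytes
            self.threshold = threshold_bytes
            self.reauth_sent = False

        def consume(self, used_bytes: int):
            self.remaining = max(0, self.remaining - used_bytes)
            if self.remaining < self.threshold and not self.reauth_sent:
                self.reauth_sent = True
                return self.request_reauthorization()
            return None

        def request_reauthorization(self):
            # A real PCEF/TDF would send a credit re-authorization request
            # to the OCS, reporting the remaining credit.
            return {"event": "credit-reauthorization", "remaining": self.remaining}

    monitor = CreditMonitor(granted_bytes=10_000_000, threshold_bytes=1_000_000)
    monitor.consume(8_500_000)          # still above the threshold: no action
    print(monitor.consume(600_000))     # falls below: re-authorization request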
5817f2cb61f7e615e7879961cd761fa6
23.203
4.2.3 Examples of Service Data Flow Charging
There are many different services that may be used within a network, including both user-user and user-network services. Service data flows from these services may be identified and charged in many different ways. A number of examples of configuring PCC rules for different service data flows are described below.

EXAMPLE 1: A network server provides an FTP service. The FTP server supports both the active (separate ports for control and data) and passive modes of operation. A PCC rule is configured for the service data flows associated with the FTP server for the user. The PCC rule uses a filter specification for the uplink that identifies packets sent to port 20 or 21 of the IP address of the server, and the origination information is wildcarded. In the downlink direction, the filter specification identifies packets sent from port 20 or 21 of the IP address of the server.

EXAMPLE 2: A network server provides a "web" service. A PCC rule is configured for the service data flows associated with the HTTP server for the user. The PCC rule uses a filter specification for the uplink that identifies packets sent to port 80 of the IP address of the server, and the origination information is wildcarded. In the downlink direction, the filter specification identifies packets sent from port 80 of the IP address of the server.

EXAMPLE 3: The same server provides a WAP service. The server has multiple IP addresses, and the IP address of the WAP server is different from the IP address of the web server. The PCC rule uses the same filter specification as for the web server, but with the IP addresses for the WAP server only.

EXAMPLE 4: An operator offers a zero rating for a network provided DNS service. A PCC rule is established setting all DNS traffic to/from the operator's DNS servers as offline charged. The data flow filter identifies the DNS port number and the source/destination address within the subnet range allocated to the operator's network nodes.

EXAMPLE 5: An operator has a specific charging rate for user-user VoIP traffic over the IMS. A PCC rule is established for this service data flow. The filter information to identify the specific service data flow for the user-user traffic is provided by the P‑CSCF (AF).

EXAMPLE 6: An operator is implementing UICC based authentication mechanisms for HTTP based services utilizing the GAA Framework as defined in TR 33.919 [11], e.g. by using the Authentication Proxy. The Authentication Proxy may appear as an AF and provide information to the PCRF for the purpose of selecting an appropriate PCC Rule.
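As a further illustration only (not part of this specification), the following Python sketch renders the filter specifications of EXAMPLE 1 and EXAMPLE 2 as data. The dictionary representation and the "*" wildcard notation are assumptions made for this example.

    def ftp_rule_filters(server_ip: str):
        """EXAMPLE 1: uplink packets to ports 20/21 of the server (origination
        wildcarded); downlink packets from ports 20/21 of the server."""
        uplink = [{"direction": "uplink", "dst_ip": server_ip, "dst_port": p,
                   "src_ip": "*", "src_port": "*"} for p in (20, 21)]
        downlink = [{"direction": "downlink", "src_ip": server_ip, "src_port": p,
                     "dst_ip": "*", "dst_port": "*"} for p in (20, 21)]
        return uplink + downlink

    def http_rule_filters(server_ip: str):
        """EXAMPLE 2: the same pattern with port 80 only."""
        return [
            {"direction": "uplink", "dst_ip": server_ip, "dst_port": 80,
             "src_ip": "*", "src_port": "*"},
            {"direction": "downlink", "src_ip": server_ip, "src_port": 80,
             "dst_ip": "*", "dst_port": "*"},
        ]

    for f in ftp_rule_filters("203.0.113.10"):
        print(f)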
5817f2cb61f7e615e7879961cd761fa6
23.203
4.3 Policy control requirements
5817f2cb61f7e615e7879961cd761fa6
23.203
4.3.1 General
The policy control features comprise gating control and QoS control. The concept of the QoS class identifier and the associated bitrates specifies the QoS information for service data flows and bearers on the Gx and Gxx reference points.
5817f2cb61f7e615e7879961cd761fa6
23.203
4.3.2 Gating control
Gating control shall be applied by the PCEF on a per service data flow basis. To enable PCRF gating control decisions, the AF shall report session events (e.g. session termination, modification) to the PCRF. For example, in gating control, session termination may trigger the blocking of packets, i.e. "closing the gate".
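As an illustration only (not part of this specification), the following Python sketch collapses the PCRF decision and the PCEF enforcement into a single per-flow gate driven by AF session events. The event names are assumptions for this example.

    class Gate:
        """Per-service-data-flow gate (illustrative; the PCRF decision and
        PCEF enforcement roles are merged here for brevity)."""

        def __init__(self):
            self.open = False

        def on_af_event(self, event: str):
            # The PCRF translates AF session events into gating decisions.
            if event == "session-established":
                self.open = True
            elif event == "session-terminated":
                self.open = False   # "closing the gate"

        def allow(self, packet: bytes) -> bool:
            return self.open

    g = Gate()
    g.on_af_event("session-established")
    assert g.allow(b"rtp-payload")
    g.on_af_event("session-terminated")
    assert not g.allow(b"rtp-payload")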
5817f2cb61f7e615e7879961cd761fa6
23.203
4.3.3 QoS control
5817f2cb61f7e615e7879961cd761fa6
23.203
4.3.3.1 QoS control at service data flow level
It shall be possible to apply QoS control on a per service data flow basis in the PCEF.

QoS control per service data flow allows the PCC architecture to provide the PCEF with the authorized QoS to be enforced for each specific service data flow. Criteria such as the QoS subscription information may be used together with policy rules such as service-based, subscription-based, or predefined PCRF internal policies to derive the authorized QoS to be enforced for a service data flow.

It shall be possible to apply multiple PCC rules, without application provided information, using different authorised QoS within a single IP‑CAN session and within the limits of the Subscribed QoS profile.
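As an illustration only (not part of this specification), the following Python sketch derives an authorized QoS for one service data flow from a deliberately simplified subscription model, reduced here to a maximum guaranteed bitrate and a set of allowed QCIs. Both simplifications are assumptions for this example.

    def authorize_sdf_qos(requested_gbr_kbps: int, requested_qci: int,
                          subscribed_max_gbr_kbps: int, allowed_qcis: set):
        """Derive the authorized QoS for one service data flow from the request
        and the QoS subscription information (simplified internal policy)."""
        if requested_qci not in allowed_qcis:
            return None   # reject: QCI not permitted for this subscription
        return {
            "qci": requested_qci,
            # never authorize more than the subscription allows
            "gbr_kbps": min(requested_gbr_kbps, subscribed_max_gbr_kbps),
        }

    # Request for 2000 kbps on QCI 1 is capped to the subscribed 1500 kbps:
    print(authorize_sdf_qos(2000, 1, subscribed_max_gbr_kbps=1500, allowed_qcis={1, 2}))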
4.3.3.2 QoS control at IP‑CAN bearer level
It shall be possible for the PCC architecture to support control of QoS reservation procedures (UE-initiated or network-initiated) for IP‑CANs that support such procedures for their IP‑CAN bearers, in the PCEF or the BBERF, if applicable. It shall be possible to determine the QoS to be applied in QoS reservation procedures (QoS control) based on the authorised QoS of the service data flows that are applicable to the IP‑CAN bearer and on criteria such as the QoS subscription information, service-based policies and/or predefined PCRF internal policies. Details of QoS reservation procedures are IP‑CAN specific and therefore the control of these procedures is described in Annex A and Annex D.

It shall be possible for the PCC architecture to support control of QoS for the packet traffic of IP‑CANs.

The PCC architecture shall be able to provide policy control in the presence of NAT devices. This may be accomplished by providing appropriate address and port information to the PCRF.

The enforcement of the control for QoS reservation procedures for an IP‑CAN bearer shall allow for a downgrading or an upgrading of the requested QoS as part of a UE-initiated IP‑CAN bearer establishment and modification. The PCC architecture shall be able to provide a mechanism to initiate IP‑CAN bearer establishment and modification (for IP‑CANs that support such procedures for their bearers) as part of the QoS control. The IP‑CAN shall prevent cyclic QoS upgrade attempts due to failed QoS upgrades.

NOTE: These measures are IP‑CAN specific.

The PCC architecture shall be able to handle IP‑CAN bearers that require a guaranteed bitrate (GBR bearers) and IP‑CAN bearers for which there is no guaranteed bitrate (non-GBR bearers).
4.3.3.3 QoS Conflict Handling
It shall be possible for the PCC architecture to support conflict resolution in the PCRF when the authorized bandwidth associated with multiple PCC rules exceeds the Subscribed Guaranteed bandwidth QoS.
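A toy check of this condition follows, under the assumption that the conflict is detected by comparing the sum of the authorized guaranteed bitrates against the subscribed guarantee; the names and units are illustrative.

def gbr_conflict(authorized_gbr_kbps: list, subscribed_gbr_kbps: int) -> bool:
    """True when the bandwidth authorized across multiple PCC rules
    exceeds the Subscribed Guaranteed bandwidth QoS."""
    return sum(authorized_gbr_kbps) > subscribed_gbr_kbps

# Two rules at 3 Mbit/s each conflict with a 5 Mbit/s subscribed guarantee:
assert gbr_conflict([3_000, 3_000], 5_000)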
4.3.3.4 QoS control at APN level
It shall be possible for the PCRF to authorize the APN-AMBR to be enforced by the PCEF as defined in TS 23.401 [17]. The APN-AMBR applies to all IP‑CAN sessions of a UE to the same APN and has separate values for the uplink and downlink direction.

It shall be possible for the PCRF to provide the authorized APN-AMBR values unconditionally or conditionally, i.e. per IP-CAN type and/or RAT type. It shall be possible for the PCRF to request a change of the unconditional or conditional authorized APN-AMBR value(s) at a specific point in time. The details are specified in clause 6.4b.
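The conditional authorization can be pictured as a lookup keyed by IP-CAN type and RAT type with an unconditional fallback. The sketch below is illustrative only; the keys and values are assumptions.

# (ip_can_type, rat_type) -> (uplink_kbps, downlink_kbps); the None key holds
# the unconditional value used when no conditional entry applies.
AUTHORIZED_APN_AMBR = {
    ("3GPP-EPS", "EUTRAN"): (50_000, 100_000),
    ("3GPP-EPS", "UTRAN"): (10_000, 20_000),
    None: (5_000, 10_000),
}

def apn_ambr_for(ip_can_type: str, rat_type: str):
    return AUTHORIZED_APN_AMBR.get((ip_can_type, rat_type),
                                   AUTHORIZED_APN_AMBR[None])

assert apn_ambr_for("3GPP-EPS", "EUTRAN") == (50_000, 100_000)
assert apn_ambr_for("3GPP-EPS", "GERAN") == (5_000, 10_000)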
4.3.4 Subscriber Spending Limits
It shall be possible to enforce policies based on subscriber spending limits as per TS 22.115 [27]. The OCS shall maintain policy counter(s) to track spending for a subscription. These policy counters must be available in the OCS prior to their use over the Sy interface.

NOTE 1: The mechanism for provisioning the policy counters in the OCS is out of scope of this document.

NOTE 2: A policy counter in the OCS can represent the spending for one or more services, one or more devices, one or more subscribers, etc. The representation is operator dependent. There is no explicit relationship between Charging-Key and policy counter.

The PCRF shall request information regarding the subscriber's spending from the OCS, to be used as input for dynamic policy decisions for the subscriber, using subscriptions to spending limit reports. The OCS shall make information regarding the subscriber's spending available to the PCRF using spending limit reports.
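The subscribe/notify pattern can be sketched as below; the counter identifier, the status strings and the callback shape are assumptions for illustration, not the protocol-level encoding of the Sy reference point.

class OcsStub:
    """Toy OCS holding policy counters and notifying PCRF subscribers."""
    def __init__(self):
        self.counters = {"monthly-spend": "below-limit"}
        self.subscriptions = []

    def subscribe(self, counter_id, notify):
        """PCRF subscribes to spending limit reports for a policy counter."""
        self.subscriptions.append((counter_id, notify))
        notify(counter_id, self.counters[counter_id])  # initial status report

    def record_spending(self, counter_id, new_status):
        """A status change triggers a spending limit report to subscribers."""
        self.counters[counter_id] = new_status
        for cid, notify in self.subscriptions:
            if cid == counter_id:
                notify(cid, new_status)

def pcrf_on_report(counter_id, status):
    # The PCRF uses the reported status as input to dynamic policy decisions.
    print(f"PCRF policy input: {counter_id} = {status}")

ocs = OcsStub()
ocs.subscribe("monthly-spend", pcrf_on_report)
ocs.record_spending("monthly-spend", "limit-reached")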
4.4 Usage Monitoring Control
It shall be possible to apply usage monitoring for the accumulated usage of network resources on a per IP-CAN session and user basis. This capability is required for enforcing dynamic policy decisions based on the total network usage in real-time.

The PCRF that uses usage monitoring for making dynamic policy decisions shall set and send the applicable thresholds to the PCEF or TDF for monitoring. The usage monitoring thresholds shall be based either on time or on volume. The PCRF may send both thresholds to the PCEF or TDF. The PCEF or TDF shall notify the PCRF when a threshold is reached and report the accumulated usage since the last report for usage monitoring. If both time and volume thresholds were provided to the PCEF or TDF, the accumulated usage since the last report shall be reported when either the time or the volume threshold is reached.

NOTE: There are reasons other than reaching a threshold that may cause the PCEF/TDF to report accumulated usage to the PCRF, as defined in clauses 6.2.2.3 and 6.6.2.

The usage monitoring capability shall be possible for an individual service data flow, a group of service data flow(s), or all traffic of an IP-CAN session in the PCEF. When usage monitoring for all traffic of an IP-CAN session is enabled, it shall be possible to exclude an individual SDF or a group of service data flow(s) from the usage monitoring for all traffic of this IP-CAN session. It shall be possible to activate usage monitoring both for service data flows associated with predefined PCC rules and for dynamic PCC rules, including rules with deferred activation and/or deactivation times while those rules are active.

The usage monitoring capability shall be possible for an individual detected application, a group of detected application(s), or all detected traffic belonging to a specific TDF session. When usage monitoring for all traffic of a TDF session is enabled, it shall be possible to exclude an individual application or a group of detected application(s) from the usage monitoring for all traffic belonging to this TDF session. It shall be possible to activate usage monitoring both for predefined ADC rules and for dynamic ADC rules, including rules with deferred activation and/or deactivation times while those rules are active.

If service data flow(s)/application(s) need to be excluded from IP-CAN/TDF session level usage monitoring and IP‑CAN/TDF session level usage monitoring is enabled, the PCRF shall be able to provide an indication of exclusion from session level monitoring associated with the respective PCC/ADC rule(s).

It shall be possible to apply different usage monitoring depending on the access used to carry a Service Data Flow. This also applies to a PDN connection supporting NBIFOM. IP-CAN session level usage monitoring is not dependent on the access used to carry a Service Data Flow.
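The threshold-and-report behaviour can be sketched as follows; the units, field names and the reset-on-report behaviour are illustrative assumptions, not a normative accounting model.

class UsageMonitor:
    """PCEF/TDF-side accounting against PCRF-provided thresholds."""
    def __init__(self, volume_threshold_bytes=None, time_threshold_s=None):
        self.volume_threshold = volume_threshold_bytes  # may be None
        self.time_threshold = time_threshold_s          # may be None
        self.volume = 0
        self.time = 0

    def account(self, n_bytes: int, seconds: int):
        """Accumulate usage; return a report when either threshold is reached."""
        self.volume += n_bytes
        self.time += seconds
        volume_hit = (self.volume_threshold is not None
                      and self.volume >= self.volume_threshold)
        time_hit = (self.time_threshold is not None
                    and self.time >= self.time_threshold)
        if volume_hit or time_hit:
            report = {"volume": self.volume, "time": self.time}
            self.volume = self.time = 0   # usage counted since the last report
            return report                 # notified to the PCRF
        return None

monitor = UsageMonitor(volume_threshold_bytes=1_000_000, time_threshold_s=3600)
assert monitor.account(600_000, 10) is None
assert monitor.account(500_000, 10) == {"volume": 1_100_000, "time": 20}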
4.5 Application Detection and Control
The application detection and control feature comprises the request to detect the specified application traffic, report to the PCRF on the start or stop of application traffic and apply the specified enforcement and charging actions. The application detection and control shall be implemented either by the TDF or by the PCEF enhanced with ADC.

Two models may be applied, depending on operator requirements: solicited and unsolicited application reporting. The unsolicited application reporting is only supported by the TDF.

Solicited application reporting: The PCRF shall instruct the TDF, or the PCEF enhanced with ADC, on which applications to detect and whether to report start or stop events to the PCRF by activating the appropriate ADC/PCC rules in the TDF/PCEF enhanced with ADC. Reporting notifications of start and stop of application detection to the PCRF may, in addition, be muted per specific ADC/PCC rule. The PCRF may, in a dynamic ADC/PCC rule, instruct the TDF or PCEF enhanced with ADC what enforcement actions to apply to the detected application traffic. The PCRF may activate application detection only if the user profile configuration allows this.

Unsolicited application reporting: The TDF is pre-configured on which applications to detect and report. The PCRF may enable enforcement in the PCEF based on the service data flow description provided to the PCRF by the TDF. It is assumed that a user profile configuration indicating whether application detection and control can be enabled is not required.

The report to the PCRF shall include the same information for solicited and unsolicited application reporting, that is: whether the report is for start or stop, the detected application identifier and, if deducible, the service data flow descriptions for the detected application traffic. For the application types where service data flow descriptions are deducible, the Start and Stop of the application may be indicated multiple times, including the application instance identifier, to inform the PCRF about the service data flow descriptions belonging to that application instance. The application instance identifier is dynamically assigned by the TDF or by the PCEF enhanced with ADC in order to allow correlation of application Start and Stop events to the specific service data flow description.

NOTE 1: The reporting to the PCRF on the start or stop of application traffic does not depend on any enforcement action of the ADC/PCC rule. Unless the PCRF muted the reporting for the ADC/PCC rule, every detected start or stop event is reported even if the application traffic is discarded due to enforcement actions of the ADC/PCC rule.

For the TDF operating in the solicited application reporting model:

- When the TDF cannot provide to the PCRF the service data flow description for the detected applications, the TDF shall perform charging, gating, redirection and bandwidth limitation for the detected applications, as defined in the ADC rule. The existing PCEF functionality remains unchanged.

NOTE 2: Redirection may not be possible for all types of detected application traffic (e.g. this may only be performed on specific HTTP-based flows).

- When the TDF provides to the PCRF the service data flow description, the PCRF may take control over the actions resulting from application detection, by applying the charging and policy enforcement per service data flow as defined in this document, or the TDF may perform charging, gating, redirection and bandwidth limitation as described above.
It is the PCRF's responsibility to coordinate the PCC rules with ADC rules in order to ensure consistent service delivery.

Usage monitoring, as described in clause 4.4, may be activated in conjunction with application detection and control. The usage monitoring functionality is only applicable to the solicited application reporting model.

For TDF, ADC rule based charging is applicable. ADC rule based charging, as described in clause 4.2.2a, may be activated in conjunction with application detection and control. The charging functionality is only applicable to the solicited application reporting model.

In order to avoid charging for the same traffic in both the TDF and the PCEF, this specification supports charging and enforcement implemented in either the PCEF or the TDF for a certain IP-CAN session, but not in both for the same IP-CAN session. The ADC rules are used to determine the online and offline characteristics for charging. For offline charging, usage reporting over the Gzn interface shall be used. For online charging, credit management and reporting over the Gyn interface shall be used. The PCEF is in this case not used for charging and enforcement (based on active PCC rules and APN-AMBR configuration), but shall still perform bearer binding based on the active PCC rules. In order to avoid having traffic that is charged in the TDF later discarded by the policing function in the PCEF, the assumption is that no GBR bearers are required when the TDF is the charging and policy enforcement point. In addition, the DL APN-AMBR in the PCEF shall be configured with such high values that it does not result in discarded packets.

NOTE 3: An example of applicability is that the IMS APN, which would require dynamic PCC rules, would be configured such that PCEF based charging and enforcement is employed, while for a regular internet access APN the network would be configured such that the TDF performs both charging and enforcement.

NOTE 4: An operator may also apply this solution with both PCEF and TDF performing enforcement and charging for a single IP-CAN session as long as the network is configured in such a way that the traffic charged and enforced in the PCEF does not overlap with the traffic charged and enforced by the TDF.

NOTE 5: The PCEF may still do enforcement actions on uplink traffic without impacting the accuracy of the charging information produced by the TDF.

If only charging for a service data flow identified by a PCC Rule is required for the corresponding IP-CAN session, the PCEF performs charging and policy enforcement for the IP-CAN session. The TDF may be used for application detection and reporting of application start/stop and for enforcement actions on downlink traffic.
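The start/stop reporting with application instance identifiers can be sketched as follows; the class and field names are assumptions chosen to mirror the description above, not a defined message format.

from dataclasses import dataclass, field
import itertools

_instance_ids = itertools.count(1)  # dynamically assigned by the TDF/PCEF

@dataclass
class ApplicationReport:
    event: str                 # "start" or "stop"
    application_id: str        # detected application identifier
    instance_id: int           # correlates start and stop of one instance
    sdf_descriptions: list = field(default_factory=list)  # if deducible

def report_start(application_id: str, flows: list) -> ApplicationReport:
    """Report detection start; a new instance id per detected instance."""
    return ApplicationReport("start", application_id, next(_instance_ids), flows)

def report_stop(start_report: ApplicationReport) -> ApplicationReport:
    """Report detection stop, correlated via the same instance id."""
    return ApplicationReport("stop", start_report.application_id,
                             start_report.instance_id)

started = report_start("app-x", [("tcp", "192.0.2.7", 443)])
stopped = report_stop(started)
assert started.instance_id == stopped.instance_id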
4.6 RAN user plane congestion detection, reporting and mitigation
It shall be possible to transfer RAN user plane congestion information from the RAN to the Core Network in order to mitigate the congestion by measures selected by the PCRF and applied by the PCEF/TDF/AF. The detailed description of this functionality can be found in TS 23.401 [17] and TS 23.060 [12].
4.7 Support for service capability exposure
It shall be possible to transfer information related to service capability exposure between the PCRF and the AF via an SCEF (see TS 23.682 [42]).
4.8 Traffic Steering Control
Traffic Steering Control refers to the capability to activate/deactivate traffic steering policies from the PCRF in the PCEF, the TDF or the TSSF for the purpose of steering the subscriber's traffic to appropriate operator or 3rd party service functions (e.g. NAT, antimalware, parental control, DDoS protection) in the (S)Gi-LAN. The traffic steering control is supported in non-roaming and home-routed scenarios only.
4.9 Management of Packet Flow Descriptions in the PCEF/TDF using the PFDF
Management of Packet Flow Descriptions in the PCEF/TDF using the PFDF refers to the capability to create, update or remove PFDs in the PFDF via the SCEF (as described in TS 23.682 [42]) and to distribute them from the PFDF to the PCEF or the TDF or both. This feature may be used when the PCEF or the TDF is configured to detect a particular application provided by an ASP.

NOTE 1: A possible scenario for the management of PFDs in the PCEF/TDF is when an application, identified by an application detection filter in the PCEF/TDF, deploys a new server or a reconfiguration at the ASP network occurs which impacts the application detection filters of that particular application.

NOTE 2: The management of application detection filters in the PCEF/TDF can still be performed by using operation and maintenance procedures.

NOTE 3: This feature aims both to enable accurate application detection at the PCEF and at the TDF and to minimize storage requirements for the PCEF and the TDF.

The management of Packet Flow Descriptions is supported in non-roaming and home-routed scenarios for those ASPs that have a business relation with the home operator.
5 Architecture model and reference points
5.1 Reference architecture
The PCC functionality comprises the functions of the Policy and Charging Enforcement Function (PCEF), the Bearer Binding and Event Reporting Function (BBERF), the Policy and Charging Rules Function (PCRF), the Application Function (AF), the Traffic Detection Function (TDF), the Traffic Steering Support Function (TSSF), the Online Charging System (OCS), the Offline Charging System (OFCS) and the Subscription Profile Repository (SPR) or the User Data Repository (UDR).

The UDR replaces the SPR when the UDC architecture as defined in TS 23.335 [25] is applied to store PCC related subscription data. In this deployment scenario the Ud interface between the PCRF and the UDR is used to access subscription data in the UDR.

NOTE 1: When the UDC architecture is used, SPR and Sp, whenever mentioned in this document, can be replaced by UDR and Ud.

The PCRF can receive RAN User Plane Congestion Information from the RAN Congestion Awareness Function (RCAF).

The PCC architecture extends the architecture of an IP‑CAN, where the Policy and Charging Enforcement Function is a functional entity in the Gateway node implementing the IP access to the PDN. The allocation of the Bearer Binding and Event Reporting Function is specific to each IP‑CAN type and specified in the corresponding Annex. The non-3GPP network relation to the PLMN is the same as defined in TS 23.402 [18].

Figure 5.1-1: Overall PCC logical architecture (non-roaming) when SPR is used

Figure 5.1-2: Overall PCC logical architecture (non-roaming) when UDR is used

Figure 5.1-3: Overall PCC architecture (roaming with home routed access) when SPR is used

Figure 5.1-4: Overall PCC architecture for roaming with PCEF in visited network (local breakout) when SPR is used

NOTE 2: Similar figures for the roaming cases apply when UDR is used instead of SPR and Ud instead of Sp.

NOTE 3: The PCEF may be enhanced with the application detection and control feature.

NOTE 4: In general, Gy and Gyn do not apply for the same IP-CAN session, and Gz and Gzn also do not apply for the same IP-CAN session. For the description of the case where simultaneous reports apply, refer to clause 4.5.

NOTE 5: The RCAF also supports the Nq/Nq' interfaces for E-UTRAN and UTRAN as specified in TS 23.401 [17] and TS 23.060 [12], respectively.

NOTE 6: Use of the TSSF in roaming scenarios is in this release only specified for the home routed access case.

NOTE 7: The SCEF acts as an AF (using Rx) in some service capability exposure use cases as described in TS 23.682 [42].

NOTE 8: The Gw and Gwn interfaces are not supported in roaming scenarios with the PCEF/TDF in the visited network.
5.2 Reference points
5.2.1 Rx reference point
The Rx reference point resides between the AF and the PCRF.

NOTE 1: The AF may be a third party application server.

This reference point enables transport of application level session information from the AF to the PCRF. Such information includes, but is not limited to:

- IP filter information to identify the service data flow for policy control and/or differentiated charging;

- Media/application bandwidth requirements for QoS control;

- In addition, for sponsored data connectivity:

- the sponsor's identification;

- optionally, a usage threshold and whether the PCRF reports these events to the AF;

- information identifying the application service provider and application (e.g. SDFs, application identifier, etc.).

The Rx reference point enables the AF subscription to notifications on IP‑CAN bearer level events (e.g. signalling path status of the AF session) in the IP‑CAN.

In order to mitigate RAN user plane congestion, the Rx reference point enables transport of the following information from the PCRF to the AF:

- Re-try interval, which indicates when service delivery may be retried on Rx.

NOTE 2: Additionally, existing bandwidth limitation parameters on the Rx interface during the Rx session establishment are available in order to mitigate RAN user plane congestion.
5.2.2 Gx reference point
The Gx reference point resides between the PCEF and the PCRF. The Gx reference point enables the PCRF to have dynamic control over the PCC behaviour at a PCEF. The Gx reference point enables the signalling of PCC decisions, which govern the PCC behaviour, and it supports the following functions:

- Establishment of a Gx session (corresponding to an IP‑CAN session) by the PCEF;

- Request for PCC decision from the PCEF to the PCRF;

- Provision of IP flow mobility routing information from the PCEF to the PCRF; this applies only when IP flow mobility as defined in TS 23.261 [23] is supported;

- Provision of PCC decision from the PCRF to the PCEF;

- Reporting of the start and the stop of detected applications and transfer of service data flow descriptions and application instance identifiers for detected applications from the PCEF to the PCRF;

- Reporting of the accumulated usage of network resources on a per IP-CAN session basis from the PCEF to the PCRF;

- Delivery of IP‑CAN session specific parameters from the PCEF to the PCRF or, if Gxx is deployed, from the PCRF to the PCEF per corresponding request;

- Negotiation of IP‑CAN bearer establishment mode (UE-only or UE/NW);

- Termination of a Gx session (corresponding to an IP‑CAN session) by the PCEF or the PCRF.

NOTE: The PCRF decision to terminate a Gx session is based on operator policies. It should only occur in rare situations (e.g. the removal of a UE subscription) to avoid service interruption due to the termination of the IP‑CAN session.

The information contained in a PCC rule is defined in clause 6.3.
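As a rough picture of what a provisioned PCC rule carries (clause 6.3 defines the authoritative content), the sketch below models a reduced rule; the chosen fields, defaults and the example filter string are assumptions for illustration.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PccRule:
    """Reduced model of a dynamic PCC rule provisioned over Gx (illustrative)."""
    rule_name: str
    precedence: int                       # order of SDF filter evaluation
    sdf_filters: list = field(default_factory=list)
    qci: int = 9                          # QoS class identifier
    arp: int = 15                         # allocation and retention priority
    mbr_kbps: Optional[int] = None        # maximum bitrate
    gbr_kbps: Optional[int] = None        # guaranteed bitrate (GBR QCIs only)
    charging_key: Optional[int] = None    # rating group for charging
    gate_open: bool = True                # gating control status

voip_rule = PccRule(rule_name="ims-voice", precedence=10,
                    sdf_filters=["permit in 17 from any to 192.0.2.1 5060"],
                    qci=1, arp=2, gbr_kbps=64, mbr_kbps=64, charging_key=42)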
5.2.3 Reference points to subscriber databases
5.2.3.1 Sp reference point
The Sp reference point lies between the SPR and the PCRF.

The Sp reference point allows the PCRF to request subscription information related to the IP‑CAN transport level policies from the SPR based on a subscriber ID, a PDN identifier and possible further IP‑CAN session attributes, see Annex A and Annex D. For example, the subscriber ID can be the IMSI. The reference point allows the SPR to notify the PCRF when the subscription information has been changed, if the PCRF has requested such notifications. The SPR shall stop sending the updated subscription information when a cancellation notification request has been received from the PCRF.

NOTE: The details associated with the Sp reference point are not specified in this Release.
5.2.3.2 Ud reference point
The Ud reference point resides between the UDR and the PCRF. It is used by the PCRF, acting as an Application Front End as defined in TS 23.335 [25], to access PCC related subscription data when stored in the UDR. The details for this reference point are described in TS 23.335 [25] and TS 29.335 [26].
5.2.4 Gy reference point
The Gy reference point resides between the OCS and the PCEF. The Gy reference point allows online credit control for service data flow based charging. The functionalities required across the Gy reference point are defined in TS 32.251 [9] and are based on RFC 4006 [4].
5.2.5 Gz reference point
The Gz reference point resides between the PCEF and the OFCS. The Gz reference point enables transport of service data flow based offline charging information. The Gz interface is specified in TS 32.240 [3].
5.2.6 S9 reference point
The S9 reference point resides between a PCRF in the HPLMN (H‑PCRF) and a PCRF in the VPLMN (V‑PCRF).

For roaming with a visited access (PCEF and, if applicable, BBERF in the visited network), the S9 reference point enables the H‑PCRF to (via the V‑PCRF):

- have dynamic PCC control, including the PCEF and, if applicable, BBERF and, if applicable, TDF, in the VPLMN;

- deliver or receive IP‑CAN-specific parameters from both the PCEF and, if applicable, BBERF, in the VPLMN;

- serve Rx authorizations and event subscriptions from an AF in the VPLMN;

- receive reports of the application identifier, service data flow descriptions, if available, application instance identifiers, if available, and application detection start/stop event triggers.

For roaming with a home routed access, the S9 reference point enables the H‑PCRF to provide dynamic QoS control policies from the HPLMN, via a V‑PCRF, to a BBERF in the VPLMN.
5.2.7 Gxx reference point
The Gxx reference point resides between the PCRF and the BBERF. This reference point corresponds to Gxa and Gxc, as defined in TS 23.402 [18] and further detailed in the annexes. The Gxx reference point enables a PCRF to have dynamic control over the BBERF behaviour. The Gxx reference point enables the signalling of QoS control decisions and it supports the following functions:

- Establishment of a Gxx session by the BBERF;

- Termination of a Gxx session by the BBERF or the PCRF;

- Establishment of a Gateway Control Session by the BBERF;

- Termination of a Gateway Control Session by the BBERF or the PCRF;

- Request for QoS decision from the BBERF to the PCRF;

- Provision of QoS decision from the PCRF to the BBERF;

- Delivery of IP‑CAN-specific parameters from the PCRF to the BBERF or from the BBERF to the PCRF;

- Negotiation of IP‑CAN bearer establishment mode (UE-only and UE/NW).

A QoS control decision consists of zero or more QoS rule(s) and IP‑CAN attributes. The information contained in a QoS rule is defined in clause 6.5.

NOTE: The Gxx session serves as a channel for communication between the BBERF and the PCRF. A Gateway Control Session utilizes the Gxx session and operates as defined in TS 23.402 [18], which includes both the alternatives as defined by cases 2a and 2b in clause 7.1.
5.2.8 Sd reference point
The Sd reference point resides between the PCRF and the TDF. The Sd reference point enables a PCRF to have dynamic control over the application detection and control behaviour at a TDF. The Sd reference point enables the signalling of ADC decisions, which govern the ADC behaviour, and it supports the following functions:

1. Establishment of an Sd session between the PCRF and the TDF;

2. Termination of an Sd session between the PCRF and the TDF;

3. Provision of ADC decisions from the PCRF for the purpose of application traffic detection, enforcement and charging at the TDF;

4. Request for ADC decisions from the TDF to the PCRF;

5. Reporting of the start and the stop of detected applications and transfer of service data flow descriptions and application instance identifiers for detected applications from the TDF to the PCRF;

6. Reporting of the accumulated usage of network resources on a per TDF session basis from the TDF to the PCRF;

7. Request and delivery of IP‑CAN session specific parameters between the PCRF and the TDF.

While functions 1-7 are relevant for solicited application reporting, only functions 1, 2 and 5 are relevant for unsolicited application reporting.

When Sd is used for traffic steering control only, the following function is supported:

- Provision of ADC rules from the PCRF for the purpose of application traffic detection and traffic steering control.

The information contained in an ADC rule is defined in clause 6.8.
5.2.9 Sy reference point
The Sy reference point resides between the PCRF and the OCS. The Sy reference point enables transfer of policy counter status information relating to subscriber spending from the OCS to the PCRF and supports the following functions:

- Request for reporting of policy counter status information from the PCRF to the OCS and subscription to or unsubscription from spending limit reports (i.e. notifications of policy counter status changes);

- Report of policy counter status information upon a PCRF request from the OCS to the PCRF;

- Notification of spending limit reports from the OCS to the PCRF;

- Cancellation of spending limit reporting from the PCRF to the OCS.

Since the Sy reference point resides between the PCRF and the OCS in the HPLMN, roaming with home routed or visited access as well as non-roaming scenarios are supported in the same manner.
5.2.10 Gyn reference point
The Gyn reference point resides between the OCS and the TDF. The Gyn reference point allows online credit control for charging in the case of ADC rule based charging in the TDF. The functionalities required across the Gyn reference point are defined in TS 32.251 [9] and are based on RFC 4006 [4].
5.2.11 Gzn reference point
The Gzn reference point resides between the TDF and the OFCS. The Gzn reference point enables transport of offline charging information in case of ADC rule based charging in TDF. The Gzn interface is specified in TS 32.240 [3].
5.2.12 Np reference point
The Np reference point resides between the RCAF and the PCRF. The Np reference point enables transport of RAN User Plane Congestion Information (RUCI) sent from the RCAF to the PCRF for all or selected subscribers, depending on the operator's congestion mitigation policy. The Np reference point supports the following functions:

- Reporting of RUCI from the RCAF to the PCRF;

- Sending, updating and removal of the reporting restrictions from the PCRF to the RCAF as defined in clause 6.1.15.2.
5.2.13 Nt reference point
The Nt reference point enables the negotiation between the SCEF and the PCRF about the recommended time window(s) and the related conditions for future background data transfer. The SCEF is triggered by an SCS/AS (as described in TS 23.682 [42]) which requests this negotiation and provides the necessary information to the SCEF. The SCEF forwards the information received from the SCS/AS to the PCRF as well as the information received from the PCRF to the SCS/AS. Whenever the SCEF contacts the PCRF, the PCRF shall use the information provided by the SCS/AS via the SCEF to determine the policies belonging to the application service provider (ASP).

NOTE: This interaction between the SCEF and the PCRF over the Nt reference point is not related to any IP-CAN session.
5.2.14 St reference point
The St reference point resides between the TSSF and the PCRF. The St reference point enables the PCRF to provide traffic steering control information to the TSSF. The St reference point supports the following function:

- Provision, modification and removal of traffic steering control information from the PCRF to the TSSF.
5.2.15 Nu reference point
The Nu reference point resides between the SCEF and the PFDF and enables the 3rd party service provider to manage PFDs in the PFDF as specified in TS 23.682 [42].
5.2.16 Gw reference point
The Gw reference point resides between the PFDF and the PCEF. The Gw reference point enables transport of PFDs from the PFDF to the PCEF for a particular Application Identifier or for a set of Application Identifiers. The Gw reference point supports the following functions:

- Creation, updating and removal of individual PFDs or the whole set of PFDs from the PFDF to the PCEF;

- Confirmation of creation, updating and removal of PFDs from the PCEF to the PFDF.

NOTE: The interaction between the PCEF and the PFDF is not related to any IP-CAN session.
5.2.17 Gwn reference point
The Gwn reference point resides between the PFDF and the TDF. The Gwn reference point enables transport of PFDs from the PFDF to the TDF for a particular Application Identifier or for a set of Application Identifiers. The Gwn reference point supports the following functions:

- Creation, updating and removal of individual PFDs or the whole set of PFDs from the PFDF to the TDF;

- Confirmation of creation, updating and removal of PFDs from the TDF to the PFDF.

NOTE: The interaction between the PFDF and the TDF is not related to any IP-CAN session.
6 Functional description
6.1 Overall description
6.1.0 General
The PCC architecture works on a service data flow level. The PCC architecture provides the functions for policy and charging control as well as event reporting for service data flows.
6.1.1 Binding mechanism
6.1.1.1 General
The binding mechanism is the procedure that associates a service data flow (defined in a PCC and a QoS rule, if applicable, by means of the SDF template) to the IP‑CAN bearer deemed to transport the service data flow. For service data flows belonging to AF sessions, the binding mechanism shall also associate the AF session information with the IP‑CAN bearer that is selected to carry the service data flow.

NOTE 1: The relation between AF sessions and rules depends only on the operator configuration. An AF session can be covered by one or more PCC and QoS rules, if applicable (e.g. one rule per media component of an IMS session). Alternatively, a rule could comprise multiple AF sessions.

NOTE 2: The PCRF may authorize dynamic PCC rules for service data flows without a corresponding AF session. Such PCC rules may be statically configured at the PCRF or dynamically filled with the UE provided traffic mapping information.

NOTE 3: For PCC rules with an application identifier and for certain IP-CAN types, uplink traffic may be received on other/additional IP-CAN bearers than the one determined by the binding mechanism (further details are provided in clause 6.2.2.2 and the IP-CAN specific annexes).

The binding mechanism creates bindings. The algorithm employed by the binding mechanism may contain elements specific to the kind of IP‑CAN. The binding mechanism includes three steps:

1. Session binding;

2. PCC rule authorization and QoS rule generation, if applicable;

3. Bearer binding.
6.1.1.2 Session binding
Session binding is the association of the AF session information to one and only one IP‑CAN session. The PCRF shall perform the session binding, which shall take the following IP‑CAN parameters into account:

a) the UE IPv4 address and/or IPv6 network prefix;

b) the UE identity (of the same kind), if present;

NOTE 1: In case the UE identity in the IP‑CAN and the application level identity for the user are of different kinds, the PCRF needs to maintain, or have access to, the mapping between the identities. Such mapping is not subject to specification within this TS.

c) the information about the packet data network (PDN) the user is accessing, if present.

For an IP-CAN session to the dedicated APN for UE-to-Network Relay connectivity (as defined in TS 23.303 [44]) and using IPv6 prefix delegation (i.e. the assigned IPv6 network prefix is shorter than 64), the PCRF shall perform session binding based on the IPv6 network prefix only. A successful session binding occurs whenever a longer prefix received from an AF matches the prefix value of the IP-CAN session. The PCRF shall not use the UE identity for session binding for this IP-CAN session.

NOTE 2: For UE-to-Network Relay connectivity, the UE identity that the PCEF has provided (i.e. the UE-to-Network Relay UE identity) and a UE identity provided by the AF (i.e. the Remote UE identity) can be different, while the binding with the IP-CAN session is valid.

NOTE 3: In this Release of the specification the support for policy control of Remote UEs behind a ProSe UE-to-Network Relay using IPv4 is not available.

The PCRF shall identify the PCC rules affected by the AF session information, including new rules to be installed and existing rules to be modified or removed.
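For the IPv6-prefix-only binding described above, a toy matcher might look as follows; the session structure and the addresses are assumptions for illustration.

import ipaddress

def session_binding_by_prefix(af_prefix: str, ip_can_sessions: list):
    """Bind AF session information to the one IP-CAN session whose delegated
    IPv6 network prefix covers the (longer) prefix reported by the AF."""
    af_net = ipaddress.ip_network(af_prefix)
    for session in ip_can_sessions:
        session_net = ipaddress.ip_network(session["ipv6_prefix"])
        if af_net.subnet_of(session_net):
            return session
    return None  # no binding: AF session information cannot be associated

sessions = [{"session_id": 1, "ipv6_prefix": "2001:db8:ab00::/56"}]
assert session_binding_by_prefix("2001:db8:ab00:7::/64",
                                 sessions)["session_id"] == 1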
6.1.1.3 PCC rule authorization and QoS rule generation
PCC rule authorization is the selection of the QoS parameters (QCI, ARP, GBR, MBR, etc.) for the PCC rules. The PCRF shall perform the PCC rule authorization for complete dynamic PCC rules belonging to AF sessions that have been selected in step 1, as described in clause 6.1.1.2, as well as for PCC rules without corresponding AF sessions. Based on AF instructions (as described in clause 6.1.5), dynamic PCC rules can be authorized even if they are not complete (e.g. due to missing service information regarding QoS or traffic filter parameters).

The PCC rule authorization depends on the IP‑CAN bearer establishment mode of the IP‑CAN session and the mode (UE or NW) of the PCC rule:

- In UE/NW bearer establishment mode, the PCRF shall perform the authorization for all PCC rules that are to be handled in NW mode.

- In UE/NW bearer establishment mode, for PCC rules that are to be handled in UE mode, or when in UE-only bearer establishment mode, the PCRF shall first identify the PCC rules that correspond to a UE resource request and authorize only these.

The PCRF shall compare the traffic mapping information of the UE resource request with the service data flow filter information of the services that are allowed for the user. Each part of the traffic mapping information shall be evaluated separately in the order of its related precedence. Any matching service data flow filter leads to an authorization of the corresponding PCC rule for the UE resource request, unless the PCC rule is already authorized for a more specific traffic mapping information or the PCC rule cannot be authorized for the QCI that is related to the UE resource request (the details are described in the next paragraph). Since a PCC rule can contain multiple service data flow filters, the PCRF shall ensure that a service data flow is only authorized for a single UE resource request.

NOTE 1: For example, a PCC rule containing multiple service data flow filters that match traffic mapping information of different UE resource requests could be segmented by the PCRF according to the different matching traffic mapping information. Afterwards, the PCRF can authorize the different PCC rules individually.

The PCRF knows whether a PCC rule can be authorized for a single QCI only or for a set of QCIs (based on SPR information or local configuration). If the processing of the traffic mapping information would lead to an authorization of a PCC rule, the PCRF shall also check whether the PCC rule can be authorized for the QCI that is related to the UE resource request containing the traffic mapping information. If the PCC rule cannot be authorized for this QCI, the PCRF shall reject the traffic mapping information unless otherwise stated in an access-specific Annex.

If there is any traffic mapping information not matching any service data flow filter known to the PCRF and the UE is allowed to request enhanced QoS for traffic not belonging to operator-controlled services, the PCRF shall authorize this traffic mapping information by adding the respective service data flow filter to a new or existing PCC rule. If the PCRF received an SDF filter identifier together with this traffic mapping information, the PCRF shall modify the existing PCC rule if the PCC rule is authorized for a GBR QCI.

NOTE 2: If the PCC rule is authorized for a non-GBR QCI, the PCRF may either create a new PCC rule or modify the existing PCC rule.
The PCC rule that needs to be modified can be identified by the service data flow filter the SDF filter identifier refers to. The requested QoS shall be checked against the subscription limitations for traffic not belonging to operator-controlled services.

If the PCRF needs to perform the authorization based on incomplete service information and thus cannot associate a PCC rule with a single IP‑CAN bearer, then the PCRF shall generate for the affected service data flow an individual PCC rule per IP‑CAN bearer that could carry that service data flow. Once the PCRF receives the complete service information, the PCC rule on the IP‑CAN bearer with the matching traffic mapping information shall be updated according to the service information. Any other PCC rule(s) previously generated for the same service data flow shall be removed by the PCRF.

NOTE 3: This is required to enable the successful activation or modification of IP‑CAN bearers before knowing the intended use of the IP‑CAN bearers to carry the service data flow(s).

For an IP‑CAN where the PCRF gains no information about the uplink IP flows (i.e. the UE provided traffic mapping information contains no information about the uplink IP flows), the binding mechanism shall assume that, for bi-directional service data flows, both downlink and uplink packets travel on the same IP‑CAN bearer.

Whenever the service data flow template or the UE provided traffic mapping information changes, the existing authorizations shall be re-evaluated, i.e. the authorization procedure specified in this clause is performed. The re-evaluation may, for a service data flow, require a new authorization for a different UE provided traffic mapping information.

Based on PCRF configuration or AF instructions (as described in clause 6.1.5), dynamic PCC rules may have to be first authorized for the default QCI/default bearer (i.e. the bearer without UE provided traffic mapping information) until a corresponding UE resource request occurs.

NOTE 4: This is required to enable services that start before dedicated resources are allocated.

A PCC rule for a service data flow that is a candidate for vSRVCC according to TS 23.216 [28] shall have the PS to CS session continuity indicator set.

For the authorization of a PCC rule the PCRF shall take into account the IP‑CAN specific restrictions and other information available to the PCRF. Each PCC rule receives a set of QoS parameters that can be supported by the IP‑CAN. The authorization of a PCC rule associated with an emergency service or Restricted Local Operator Services shall be supported without subscription information (e.g. information stored in the SPR). The PCRF shall apply policies configured for the emergency service and Restricted Local Operator Services.

When both a Gx and associated Gxx interface(s) exist for an IP‑CAN session, the PCRF shall generate QoS rules for all the authorized PCC rules in this step. The PCRF shall ensure consistency between the QoS rules and PCC rules authorized for the same service data flow when QoS rules are derived from corresponding PCC rules.

When flow mobility applies for the IP-CAN session, one IP‑CAN session may be associated with multiple Gateway Control Sessions with separate BBERFs. In this case, the PCRF shall provision QoS rules only to the appropriate BBERF based on IP flow mobility routing rules received from the PCEF.
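A toy rendering of the precedence-ordered matching of UE-provided traffic mapping information against service data flow filters follows; the filter model, the port-only matching and the data shapes are assumptions, not the normative algorithm.

def matches(tft_filter: dict, rule: dict) -> bool:
    # Illustrative match on the destination port only; real SDF filters
    # compare complete packet filter contents.
    return tft_filter["dst_port"] in rule["dst_ports"]

def authorize_for_ue_request(traffic_mapping: list, pcc_rules: list) -> list:
    """Evaluate each part of the traffic mapping information separately,
    in the order of its precedence, and authorize the matching PCC rule
    (each service data flow authorized for a single request only)."""
    authorized = []
    for tft_filter in sorted(traffic_mapping, key=lambda f: f["precedence"]):
        for rule in pcc_rules:
            if rule not in authorized and matches(tft_filter, rule):
                authorized.append(rule)
                break
    return authorized

rules = [{"name": "voice", "dst_ports": {5060}},
         {"name": "video", "dst_ports": {8554}}]
tfts = [{"precedence": 1, "dst_port": 5060}]
assert [r["name"] for r in authorize_for_ue_request(tfts, rules)] == ["voice"]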
6.1.1.4 Bearer Binding
Bearer binding is the association of the PCC rule and the QoS rule (if applicable) to an IP‑CAN bearer within that IP‑CAN session. This function resides in the Bearer Binding Function (BBF).

The Bearer Binding Function is located either at the BBERF or at the PCEF, depending on the architecture (see clause 5.1). The BBF is located at the PCEF if GTP is used as the mobility protocol towards the PCEF; otherwise, the BBF is located at the BBERF. The Bearer Binding Function may also be located in the PCRF as specified in Annex A and Annex D (e.g. for GPRS running the UE-only IP‑CAN bearer establishment mode).

NOTE 1: For an IP‑CAN limited to a single IP‑CAN bearer per IP‑CAN session, the bearer is implicit, so finding the IP‑CAN session is sufficient for successful binding.

For an IP‑CAN which allows for multiple IP‑CAN bearers for each IP‑CAN session, the binding mechanism shall use the QoS parameters of the existing IP‑CAN bearers to create the bearer binding for a rule, in addition to the PCC rule and the QoS rule (if applicable) authorized in the previous step. The set of QoS parameters assigned in step 2, as described in clause 6.1.1.3, to the service data flow is the main input for bearer binding. The BBF should not use the same bearer for rules with different settings of the PS to CS session continuity indicator.

NOTE 2: When NBIFOM applies for the IP-CAN session, additional information has to be taken into account as described in clause 6.1.18.1.

The BBF shall evaluate whether it is possible to use one of the existing IP‑CAN bearers and, if applicable, whether to initiate IP‑CAN bearer modification. If none of the existing bearers can be used, the BBF should initiate the establishment of a suitable IP‑CAN bearer. The binding is created between service data flow(s) and the IP‑CAN bearer which have the same QoS class identifier and ARP.

NOTE 3: The handling of a rule with MBR>GBR is up to operator policy (e.g. an independent IP‑CAN bearer may be maintained for that SDF to prevent unfairness between competing SDFs).

Requirements specific to each type of IP‑CAN are defined in the IP‑CAN specific Annex.

Whenever the QoS authorization of a PCC/QoS rule changes, the existing bindings shall be re-evaluated, i.e. the bearer binding procedure specified in this clause is performed. The re-evaluation may, for a service data flow, require a new binding with another IP‑CAN bearer. The BBF should, if the PCRF requests the same change to the ARP/QCI value for all PCC/QoS rules with the bearer binding to the same bearer, modify the bearer ARP/QCI value as requested.

NOTE 4: A QoS change of the default EPS bearer causes the bearer binding for PCC/QoS rules previously bound to the default EPS bearer to be re-evaluated. At the end of the re-evaluation of the PCC/QoS rules of the IP-CAN session, there needs to be at least one PCC rule that successfully binds with the default bearer.
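The QCI/ARP-keyed binding decision can be sketched as below; the bearer representation and the create-on-miss behaviour are assumptions consistent with the description above, not an implementation of any particular node.

def bearer_binding(rule: dict, bearers: list) -> dict:
    """Bind a PCC/QoS rule to an existing IP-CAN bearer with the same QCI
    and ARP; otherwise initiate establishment of a suitable new bearer."""
    for bearer in bearers:
        if bearer["qci"] == rule["qci"] and bearer["arp"] == rule["arp"]:
            bearer["rules"].append(rule["name"])
            return bearer
    new_bearer = {"qci": rule["qci"], "arp": rule["arp"],
                  "rules": [rule["name"]]}   # IP-CAN bearer establishment
    bearers.append(new_bearer)
    return new_bearer

bearers = [{"qci": 9, "arp": 15, "rules": ["default"]}]
voice = {"name": "ims-voice", "qci": 1, "arp": 2}
bound = bearer_binding(voice, bearers)
assert bound["qci"] == 1 and len(bearers) == 2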