https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/admin-project.html?context=cdpaas&locale=en
Administering a project
# Administering a project #

If you have the **Admin** role in a project, you can perform administrative tasks for the project.

* [Manage collaborators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html)
* [Mark data assets in project as sensitive](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/mark-sensitive.html)
* [Stop all active runtimes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.html#stop-active-runtimes)
* [Export a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/export-project.html)
* [Manage project access tokens](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/token.html)
* [Remove assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-assets.html#remove-asset)
* [Edit a locked asset](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-assets.html#editassets)
* [Delete the project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/admin-project.html?context=cdpaas&locale=en#delete-project)
* [Copy a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/admin-project.html?context=cdpaas&locale=en#copy-project)
* [Switch the platform for a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/switch-platform.html)

Note: In the activity log, the user ID for some activities might display `icp4d-dev` instead of `admin`.

## Delete a project ##

If you have the **Admin** role in a project, you can delete it. All project assets, the associated files in the project's storage, and the storage itself are also deleted. Data in a remote data source that is accessed through a connection is not affected.

To delete a project, choose **Project > View All Projects** and then choose **Delete** from the **ACTIONS** menu next to the project name.

## Copy a project ##

You can copy an existing project by [exporting it](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/export-project.html), and then [importing it](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/import-project.html) with a different name.

**Parent topic:** [Projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html)
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-amazon-s3.html?context=cdpaas&locale=en
Amazon S3 connection
# Amazon S3 connection #

To access your data in Amazon S3, create a connection asset for it.

Amazon S3 (Amazon Simple Storage Service) is an Amazon Web Services (AWS) service that provides object storage through a web service interface. For other types of S3-compliant connections, you can use the [Generic S3 connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-generics3.html).

## Create a connection to Amazon S3 ##

To create the connection asset, you need these connection details:

* **Bucket**: Bucket name that contains the files. If your AWS credentials have permissions to list buckets and access all buckets, then you only need to supply the credentials. If your credentials don't have the privilege to list buckets and can access only a particular bucket, then you need to specify the bucket.
* **Endpoint URL**: Use for an AWS GovCloud instance. Include the region code. For example, `https://s3.<region-code>.amazonaws.com`. For the list of region codes, see [AWS service endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html#regional-endpoints).
* **Region**: Amazon Web Services (AWS) region. If you specify an Endpoint URL that is not for the AWS default region (us-west-2), then you should also enter a value for Region.

Select **Server proxy** to access the Amazon S3 data source through a proxy server. Depending on its setup, a proxy server can provide load balancing, increased security, and privacy. The proxy server settings are independent of the authentication credentials and the personal or shared credentials selection.

* **Proxy host**: The proxy URL. For example, `https://proxy.example.com`.
* **Proxy port number**: The port number to connect to the proxy server. For example, `8080` or `8443`.
* The **Proxy username** and **Proxy password** fields are optional.

### Credentials ###

The combination of **Access key** and **Secret key** is the minimum set of credentials.

If the Amazon S3 account owner has set up temporary credentials or a Role ARN (Amazon Resource Name), enter the values provided by the account owner for the applicable authentication combination:

* **Access key**, **Secret key**, and **Session token**
* **Access key**, **Secret key**, **Role ARN**, **Role session name**, and optional **Duration seconds**
* **Access key**, **Secret key**, **Role ARN**, **Role session name**, **External ID**, and optional **Duration seconds**

For setup instructions for the Amazon S3 account owner, see [Setting up temporary credentials or a Role ARN for Amazon S3](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-az3-tempcreds.html).

### Choose the method for creating a connection based on where you are in the platform ###

**In a project**: Click **Assets > New asset > Connect to a data source**. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html).

**In a deployment space**: Click **Add to space > Connection**. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).

**In the Platform assets catalog**: Click **New connection**. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html).

### Next step: Add data assets from the connection ###

* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html).

## Where you can use this connection ##

You can use Amazon S3 connections in the following workspaces and tools:

**Projects**

* Data Refinery
* Decision Optimization
* Notebooks. Click **Read data** on the **Code snippets** pane to get the connection credentials and load the data into a data structure. See [Load data from data source connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html#conns).
* SPSS Modeler
* Synthetic Data Generator

**Catalogs**

* Platform assets catalog

## Amazon S3 setup ##

See the [Amazon Simple Storage Service User Guide](https://docs.aws.amazon.com/AmazonS3/latest/userguide/setting-up-s3.html) for the setup steps.

## Restriction ##

Folders cannot be named with the slash symbol (`/`) because the slash symbol is a delimiter in the file structure.

## Supported file types ##

The Amazon S3 connection supports these file types: Avro, CSV, Delimited text, Excel, JSON, ORC, Parquet, SAS, SAV, SHP, and XML.

## Learn more ##

[Amazon S3 documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html)

**Related connection**: [Generic S3 connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-generics3.html)

**Parent topic**: [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
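As a minimal sketch of how the endpoint and proxy fields above fit together, the following Python helpers assemble the values described in the connection details. The helper names and the example region and proxy values are illustrative, not part of the platform:

```python
# Sketch: assemble the Amazon S3 connection details described above.
# The region code, proxy host, and port values are illustrative examples.

def s3_endpoint_url(region_code: str) -> str:
    """Build an endpoint URL of the form https://s3.<region-code>.amazonaws.com."""
    return f"https://s3.{region_code}.amazonaws.com"

def proxy_settings(proxy_host: str, proxy_port: int) -> dict:
    """Server proxy settings; Proxy username and Proxy password are optional and omitted here."""
    return {"https": f"{proxy_host}:{proxy_port}"}

# Example values (assumptions for illustration):
endpoint = s3_endpoint_url("us-gov-west-1")                # a GovCloud region code
proxies = proxy_settings("https://proxy.example.com", 8080)
```

With an S3 client library such as boto3, values of this shape would typically be passed as the endpoint URL and proxy configuration when the client is created; the sketch only shows how the pieces described above fit together.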
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-az3-tempcreds.html?context=cdpaas&locale=en
Setting up temporary credentials or a Role ARN for Amazon S3
# Setting up temporary credentials or a Role ARN for Amazon S3 # Instead of adding another IAM user to your Amazon S3 account, you can grant them access with temporary security credentials and a Session token\. Or, you can create a Role ARN (Amazon Resource Name) and then grant permission to that role to access the account\. The trusted user can then use the role\. You can assign role policies to the temporary credentials to limit the permissions\. For example, you can assign read\-only access or access to a particular S3 bucket\. **Prerequisite**: You must be the IAM owner of the Amazon S3 account\. You can set up one of the following authentication combinations: <!-- <ul> --> * **Access key**, **Secret key**, and **Session token** * **Access key**, **Secret key**, **Role ARN**, **Role session name**, and optional **Duration seconds** * **Access key**, **Secret key**, **Role ARN**, **Role session name**, **External ID**, and optional **Duration seconds** <!-- </ul> --> ## Access key, Secret key, and Session token ## Use the AWS Security Token Service (AWS STS) operations in the AWS API to obtain temporary security credentials\. These credentials consist of an Access key, a Secret key, and a Session token that expires within a configurable amount of time\. For instructions, see the AWS documentation: [Requesting temporary security credentials](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_request.html)\. ## Access key, Secret key, Role ARN, Role session name, and optional Duration seconds ## If someone else has their own S3 account, you can create a temporary role for that person to access your S3 account\. Create the role either with the AWS Management Console or the AWS CLI\. See [Creating a role to delegate permissions to an IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user.html) The **Role ARN** is the Amazon Resource Name for connection's role\. 
The **Role session name** identifies the session to S3 administrators\. For example, your IAM username\. The **Duration seconds** parameter is optional\. The minimum is 15 minutes\. The maximum is 36 hours, the default is 1 hour\. The duration seconds timer starts every time that the connection is established\. You then provide values for the **Access key**, **Secret key**, **Role ARN**, **Role session name**, and optional **Duration seconds** to the user who will create the connection\. ## Access key, Secret key, Role ARN, Role session name, External ID, and optional Duration seconds ## If someone else has their own S3 account, you can create a temporary role for that person to access your S3 account\. With this combination, the **External ID** is a unique string that you specify and that the user must enter for extra security\. First, create the role either with the AWS Management Console or the AWS CLI\. See [Creating a role to delegate permissions to an IAM user](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user.html)\. To create the External ID, see [How to use an external ID when granting access to your AWS resources to a third party](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user_externalid.html)\. You then provide the values for the **Access key**, **Secret key**, **Role ARN**, **Role session name**, **External ID**, and optional **Duration seconds** to the user who will create the connection\. ## Learn more ## [Amazon Resource Names (ARNs)](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) **Parent topic:**[Amazon S3 connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-amazon-s3.html) <!-- </article "role="article" "> -->
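As a sketch, the Role ARN parameters and the Duration seconds bounds stated above (minimum 15 minutes, maximum 36 hours, default 1 hour) can be expressed in Python. The validation helper is illustrative, not platform code; with boto3, a comparable call is `sts.assume_role`, which enforces its own service-side limits:

```python
# Duration seconds bounds as stated in the text above.
MIN_DURATION = 15 * 60        # 15 minutes = 900 seconds
MAX_DURATION = 36 * 60 * 60   # 36 hours = 129600 seconds
DEFAULT_DURATION = 60 * 60    # 1 hour = 3600 seconds

def assume_role_params(role_arn, session_name, external_id=None, duration_seconds=None):
    """Collect the Role ARN authentication combination described above.

    external_id is only used for the third combination (extra security
    when a third party accesses the account).
    """
    duration = DEFAULT_DURATION if duration_seconds is None else duration_seconds
    if not MIN_DURATION <= duration <= MAX_DURATION:
        raise ValueError(f"Duration seconds must be between {MIN_DURATION} and {MAX_DURATION}")
    params = {
        "RoleArn": role_arn,
        "RoleSessionName": session_name,   # e.g. your IAM username
        "DurationSeconds": duration,
    }
    if external_id is not None:
        params["ExternalId"] = external_id
    return params
```

The returned dictionary matches the keyword shape that `boto3.client("sts").assume_role(...)` accepts; the temporary Access key, Secret key, and Session token come back in the response.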
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azrds-mysql.html?context=cdpaas&locale=en
Amazon RDS for MySQL connection
# Amazon RDS for MySQL connection #

To access your data in Amazon RDS for MySQL, create a connection asset for it.

Amazon RDS for MySQL is a MySQL relational database that runs on the Amazon Relational Database Service (RDS).

## Supported versions ##

MySQL database versions 5.6 through 8.0

## Create a connection to Amazon RDS for MySQL ##

To create the connection asset, you need these connection details:

* Database name
* Hostname or IP address
* Port number
* Username and password
* SSL certificate (if required by the database server)

### Choose the method for creating a connection based on where you are in the platform ###

**In a project**: Click **Assets > New asset > Connect to a data source**. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html).

**In a deployment space**: Click **Add to space > Connection**. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).

**In the Platform assets catalog**: Click **New connection**. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html).

### Next step: Add data assets from the connection ###

* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html).

## Where you can use this connection ##

You can use Amazon RDS for MySQL connections in the following workspaces and tools:

**Projects**

* Data Refinery
* Notebooks. Click **Read data** on the **Code snippets** pane to get the connection credentials and load the data into a data structure. See [Load data from data source connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html#conns).
* Decision Optimization
* SPSS Modeler
* Synthetic Data Generator

**Catalogs**

* Platform assets catalog

## Amazon RDS for MySQL setup ##

For setup instructions, see these topics:

* [Creating an Amazon RDS DB Instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CreateDBInstance.html)
* [Connecting to a DB Instance Running the MySQL Database Engine](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ConnectToInstance.html)

### Running SQL statements ###

To ensure that your SQL statements run correctly, refer to the [Amazon RDS for MySQL documentation](https://aws.amazon.com/rds/mysql) for the correct syntax.

## Learn more ##

[Amazon RDS for MySQL](https://aws.amazon.com/rds/mysql)

**Parent topic:** [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
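The connection details listed above map directly onto the parameters that most MySQL client libraries accept. A minimal sketch in Python, with illustrative names and values (the helper is not platform code):

```python
# Sketch: collect the Amazon RDS for MySQL connection details described above.
# Host, user, and password values below are illustrative examples.

def mysql_connection_params(database, host, port=3306, user=None, password=None, ssl_ca=None):
    """Build a parameter dictionary from the connection details.

    3306 is the conventional MySQL port; ssl_ca is a path to the SSL
    certificate, needed only if the database server requires it.
    """
    params = {"database": database, "host": host, "port": port,
              "user": user, "password": password}
    if ssl_ca is not None:
        params["ssl_ca"] = ssl_ca
    return params

params = mysql_connection_params("sales", "mydb.abc123.us-east-1.rds.amazonaws.com",
                                 user="admin", password="secret")
```

With a driver such as `mysql-connector-python`, a dictionary of this shape is typically unpacked into `mysql.connector.connect(**params)`.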
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azrds-oracle.html?context=cdpaas&locale=en
Amazon RDS for Oracle connection
# Amazon RDS for Oracle connection #

To access your data in Amazon RDS for Oracle, create a connection asset for it.

Amazon RDS for Oracle is an Oracle relational database that runs on the Amazon Relational Database Service (RDS).

## Supported Oracle versions and editions ##

* Oracle Database 19c (19.0.0.0)
* Oracle Database 12c Release 2 (12.2.0.1)
* Oracle Database 12c Release 1 (12.1.0.2)

## Create a connection to Amazon RDS for Oracle ##

To create the connection asset, you need these connection details:

* Either the Oracle Service name or the Oracle System ID (SID) for the database
* Hostname or IP address of the database
* Port number of the database (default is `1521`)
* SSL certificate (if required by the database server)

### Choose the method for creating a connection based on where you are in the platform ###

**In a project**: Click **Assets > New asset > Connect to a data source**. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html).

**In a deployment space**: Click **Add to space > Connection**. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).

**In the Platform assets catalog**: Click **New connection**. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html).

### Next step: Add data assets from the connection ###

* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html).

## Where you can use this connection ##

You can use Amazon RDS for Oracle connections in the following workspaces and tools:

**Projects**

* Data Refinery
* Decision Optimization

**Catalogs**

* Platform assets catalog

## Amazon RDS for Oracle setup ##

To set up the Oracle database on Amazon, see these topics:

* [Creating an Amazon RDS DB Instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CreateDBInstance.html)
* [Creating an Oracle DB instance and connecting to a database on an Oracle DB instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_GettingStarted.CreatingConnecting.Oracle.html)
* [Connecting to your Oracle DB instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ConnectToOracleInstance.html)

## Learn more ##

[Amazon RDS for Oracle](https://aws.amazon.com/rds/oracle/)

**Parent topic:** [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
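The connection form asks for either a Service name or a System ID (SID), and Oracle client URLs make the same distinction explicit. As an illustration only (the platform builds the connection from the form for you; the host and names below are hypothetical), a JDBC thin URL encodes the two cases with different syntax:

```python
from typing import Optional

def oracle_jdbc_url(host: str, port: int = 1521,
                    service_name: Optional[str] = None,
                    sid: Optional[str] = None) -> str:
    """Build a JDBC thin URL for an Oracle database.

    Supply exactly one of service_name or sid, matching the field
    you fill in on the connection form.
    """
    if (service_name is None) == (sid is None):
        raise ValueError("Provide exactly one of service_name or sid")
    if service_name is not None:
        # Service-name form: note the double slash before the host
        return f"jdbc:oracle:thin:@//{host}:{port}/{service_name}"
    # SID form: colon-separated, no slashes
    return f"jdbc:oracle:thin:@{host}:{port}:{sid}"

# Hypothetical examples:
# oracle_jdbc_url("db.example.com", service_name="ORCL")
# oracle_jdbc_url("db.example.com", sid="ORCL")
```

The service-name form uses `@//host:port/service`, while the SID form uses the older `@host:port:SID` syntax, which is why the form asks for exactly one of the two.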
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azrds-postresql.html?context=cdpaas&locale=en
# Amazon RDS for PostgreSQL connection #

To access your data in Amazon RDS for PostgreSQL, create a connection asset for it.

Amazon RDS for PostgreSQL is a PostgreSQL relational database that runs on the Amazon Relational Database Service (RDS).

## Supported versions ##

PostgreSQL database versions 9.4, 9.5, 9.6, 10, 11, and 12

## Create a connection to Amazon RDS for PostgreSQL ##

To create the connection asset, you need these connection details:

* Database name
* Hostname or IP address
* Port number
* Username and password
* SSL certificate (if required by the database server)

### Choose the method for creating a connection based on where you are in the platform ###

**In a project**: Click **Assets > New asset > Connect to a data source**. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html).

**In a deployment space**: Click **Add to space > Connection**. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).

**In the Platform assets catalog**: Click **New connection**. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html).

### Next step: Add data assets from the connection ###

* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html).

## Where you can use this connection ##

You can use Amazon RDS for PostgreSQL connections in the following workspaces and tools:

**Projects**

* Data Refinery
* Decision Optimization
* Notebooks. Click **Read data** on the **Code snippets** pane to get the connection credentials and load the data into a data structure. See [Load data from data source connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html#conns).
* SPSS Modeler
* Synthetic Data Generator

**Catalogs**

* Platform assets catalog

## Amazon RDS for PostgreSQL setup ##

For setup instructions, see these topics:

* [Creating an Amazon RDS DB Instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_CreateDBInstance.html)
* [Connecting to a DB Instance Running the PostgreSQL Database Engine](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ConnectToPostgreSQLInstance.html)

### Running SQL statements ###

To ensure that your SQL statements run correctly, refer to the [Amazon RDS for PostgreSQL documentation](https://aws.amazon.com/rds/postgresql/) for the correct syntax.

## Learn more ##

[Amazon RDS for PostgreSQL](https://aws.amazon.com/rds/postgresql/)

**Parent topic:** [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
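The connection details listed above map directly onto a standard PostgreSQL connection URI. A minimal sketch (the helper and all values are hypothetical, not part of the platform), showing why credentials should be percent-encoded before they go into the URI:

```python
from urllib.parse import quote

def postgres_uri(database: str, host: str, port: int,
                 user: str, password: str, require_ssl: bool = False) -> str:
    """Assemble a PostgreSQL connection URI from the same details the
    connection form asks for. Credentials are percent-encoded so that
    characters such as '@' or '/' do not break the URI."""
    auth = f"{quote(user, safe='')}:{quote(password, safe='')}"
    uri = f"postgresql://{auth}@{host}:{port}/{database}"
    if require_ssl:
        # libpq convention for requiring an SSL-encrypted connection
        uri += "?sslmode=require"
    return uri
```

For example, a password containing `@` is encoded as `%40`, so the host part of the URI stays unambiguous.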
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azure-sql.html?context=cdpaas&locale=en
# Microsoft Azure SQL Database connection #

To access your data in a Microsoft Azure SQL Database, create a connection asset for it.

Microsoft Azure SQL Database is a managed cloud database provided as part of Microsoft Azure.

## Create a connection to Microsoft Azure SQL Database ##

To create the connection asset, you need these connection details:

* Database name
* Hostname or IP address
* Port number
* Select **Use Active Directory** if the server is set up to use Azure Active Directory authentication (Azure AD). Enter your Azure AD user and password.
* Username and password
* SSL certificate (if required by the database server)

For **Private connectivity**, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html).

### Choose the method for creating a connection based on where you are in the platform ###

**In a project**: Click **Assets > New asset > Connect to a data source**. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html).

**In a deployment space**: Click **Add to space > Connection**. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).

**In the Platform assets catalog**: Click **New connection**. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html).

### Next step: Add data assets from the connection ###

* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html).

## Where you can use this connection ##

You can use Microsoft Azure SQL Database connections in the following workspaces and tools:

**Projects**

* Decision Optimization
* SPSS Modeler
* Synthetic Data Generator

**Catalogs**

* Platform assets catalog

### Running SQL statements ###

To ensure that your SQL statements run correctly, refer to the [Azure SQL Database documentation](https://docs.microsoft.com/en-ca/azure/azure-sql/database/connect-query-content-reference-guide) for the correct syntax.

## Microsoft Azure SQL Database setup ##

[Getting started with single databases in Azure SQL Database](https://docs.microsoft.com/en-ca/azure/azure-sql/database/quickstart-content-reference-guide)

## Learn more ##

[Azure SQL Database documentation](https://docs.microsoft.com/en-ca/azure/azure-sql/database/)

**Parent topic:** [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
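Outside the platform, the same fields typically end up in an ODBC connection string, with the **Use Active Directory** choice selecting a different authentication keyword. A hedged sketch (the driver name and the `Authentication=ActiveDirectoryPassword` keyword are based on Microsoft's ODBC driver for SQL Server; the platform's own connector may assemble this differently, and all values are hypothetical):

```python
def azure_sql_odbc(server: str, database: str,
                   user: str, password: str,
                   use_active_directory: bool = False) -> str:
    """Build an ODBC-style connection string from the connection
    form's fields. With use_active_directory, an Authentication
    keyword is appended for Azure AD password sign-in."""
    parts = [
        "Driver={ODBC Driver 18 for SQL Server}",  # assumed driver name
        f"Server={server},1433",   # default Azure SQL port
        f"Database={database}",
        f"Uid={user}",
        f"Pwd={password}",
        "Encrypt=yes",             # Azure SQL requires encryption
    ]
    if use_active_directory:
        parts.append("Authentication=ActiveDirectoryPassword")
    return ";".join(parts)
```

With SQL authentication the `Authentication` keyword is simply omitted and `Uid`/`Pwd` carry the database credentials.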
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azureblob.html?context=cdpaas&locale=en
# Microsoft Azure Blob Storage connection #

To access your data in Microsoft Azure Blob Storage, create a connection asset for it.

Azure Blob Storage is used for storing large amounts of data in the cloud.

## Create a connection to Microsoft Azure Blob Storage ##

To create the connection asset, you need these connection details:

Connection string: Authentication is managed by the Azure portal access keys.

### Choose the method for creating a connection based on where you are in the platform ###

**In a project**: Click **Assets > New asset > Connect to a data source**. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html).

**In a deployment space**: Click **Add to space > Connection**. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).

**In the Platform assets catalog**: Click **New connection**. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html).

### Next step: Add data assets from the connection ###

* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html).

## Where you can use this connection ##

You can use Azure Blob Storage connections in the following workspaces and tools:

**Projects**

* Data Refinery
* SPSS Modeler
* Synthetic Data Generator

**Catalogs**

* Platform assets catalog

## Azure Blob Storage connection string setup ##

Set up blob storage and access keys on the Microsoft Azure portal.

For instructions, see:

* [Quickstart: Upload, download, and list blobs with the Azure portal](https://docs.microsoft.com/en-us/azure/storage/blobs/storage-quickstart-blobs-portal)
* [Manage storage account access keys](https://docs.microsoft.com/en-us/azure/storage/common/storage-account-keys-manage)

Example connection string, which you can find in the **ApiKeys** section of the container:

`DefaultEndpointsProtocol=https;AccountName=sampleaccount;AccountKey=samplekey;EndpointSuffix=core.windows.net`

## Supported file types ##

The Azure Blob Storage connection supports these file types: Avro, CSV, Delimited text, Excel, JSON, ORC, Parquet, SAS, SAV, SHP, and XML.

## Learn more ##

[Microsoft Azure](https://azure.microsoft.com)

**Parent topic:** [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
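The example connection string above is a semicolon-separated list of `key=value` fields. A small sketch of how such a string breaks down, using the doc's sample values (`parse_connection_string` is a hypothetical helper for illustration, not part of any Azure SDK):

```python
def parse_connection_string(conn_str: str) -> dict:
    """Split an Azure storage connection string (semicolon-separated
    key=value pairs) into a dict. Account keys can contain '=',
    so split each pair on the first '=' only."""
    fields = {}
    for part in conn_str.split(";"):
        if part:
            key, _, value = part.partition("=")
            fields[key] = value
    return fields

# Sample string from the documentation above
sample = ("DefaultEndpointsProtocol=https;AccountName=sampleaccount;"
          "AccountKey=samplekey;EndpointSuffix=core.windows.net")
```

Parsing `sample` yields the account name, key, protocol, and endpoint suffix as separate fields, which is exactly the information the connection asset needs to authenticate.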
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azuredls.html?context=cdpaas&locale=en
# Microsoft Azure Data Lake Storage connection #

To access your data in Microsoft Azure Data Lake Storage, create a connection asset for it.

Azure Data Lake Storage (ADLS) is a scalable data storage and analytics service that is hosted in Azure, Microsoft's public cloud. The Microsoft Azure Data Lake Storage connection supports access to both Gen1 and Gen2 Azure Data Lake Storage repositories.

## Create a connection to Microsoft Azure Data Lake Storage ##

To create the connection asset, you need these connection details:

* WebHDFS URL: The WebHDFS URL for accessing HDFS. To connect to a Gen2 ADLS, use this format: `https://<account-name>.dfs.core.windows.net/<file-system>`, where `<account-name>` is the name you used when you created the ADLS instance and `<file-system>` is the name of the container you created. For more information, see the [Microsoft Data Lake Storage Gen2 documentation](https://docs.microsoft.com/en-us/rest/api/storageservices/datalakestoragegen2/path/read).
* Tenant ID: The Azure Active Directory tenant ID
* Client ID: The client ID for authorizing access to Microsoft Azure Data Lake Storage
* Client secret: The authentication key that is associated with the client ID for authorizing access to Microsoft Azure Data Lake Storage

Select **Server proxy** to access the Azure Data Lake Storage data source through a proxy server. Depending on its setup, a proxy server can provide load balancing, increased security, and privacy. The proxy server settings are independent of the authentication credentials and the personal or shared credentials selection.

* **Proxy host**: The proxy URL. For example, `https://proxy.example.com`.
* **Proxy port number**: The port number to connect to the proxy server. For example, `8080` or `8443`.
* The **Proxy protocol** selection for HTTP or HTTPS is optional.

For **Private connectivity**, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html).

### Choose the method for creating a connection based on where you are in the platform ###

**In a project**: Click **Assets > New asset > Connect to a data source**. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html).

**In a deployment space**: Click **Add to space > Connection**. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).

**In the Platform assets catalog**: Click **New connection**. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html).

### Next step: Add data assets from the connection ###

* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html).

## Where you can use this connection ##

You can use Microsoft Azure Data Lake Storage connections in the following workspaces and tools:

**Projects**

* Decision Optimization
* SPSS Modeler
* Synthetic Data Generator

**Catalogs**

* Platform assets catalog

## Azure Data Lake Storage authentication setup ##

To set up authentication, you need a tenant ID, client (or application) ID, and client secret.

* Gen1:
    1. Create an Azure Active Directory (Azure AD) web application, and get an application ID, authentication key, and tenant ID.
    2. Assign the Azure AD application to the Azure Data Lake Storage account file or folder. Follow Steps 1, 2, and 3 at [Service-to-service authentication with Azure Data Lake Storage using Azure Active Directory](https://docs.microsoft.com/en-us/azure/data-lake-store/data-lake-store-service-to-service-authenticate-using-active-directory).
* Gen2:
    1. Follow the instructions in [Acquire a token from Azure AD for authorizing requests from a client application](https://docs.microsoft.com/en-us/azure/storage/common/storage-auth-aad-app). These steps create a new identity. After you create the identity, set permissions to grant the application access to your ADLS. The Microsoft Azure Data Lake Storage connection uses the associated Client ID, Client secret, and Tenant ID for the application.
    2. Give the Azure app access to the storage container by using Storage Explorer. For instructions, see [Use Azure Storage Explorer to manage directories and files in Azure Data Lake Storage Gen2](https://docs.microsoft.com/en-us/azure/storage/blobs/data-lake-storage-explorer#managing-access).

## Supported file types ##

The Microsoft Azure Data Lake Storage connection supports these file types: Avro, CSV, Delimited text, Excel, JSON, ORC, Parquet, SAS, SAV, SHP, and XML.

## Learn more ##

[Azure Data Lake](https://azure.microsoft.com/en-us/solutions/data-lake)

**Parent topic:** [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
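The Gen2 WebHDFS URL format described above is assembled from just the storage account name and the container (file system) name. A minimal sketch with hypothetical names (the helper is for illustration only):

```python
def adls_gen2_url(account_name: str, file_system: str) -> str:
    """Build the WebHDFS URL for a Gen2 ADLS connection, following
    the documented format:
    https://<account-name>.dfs.core.windows.net/<file-system>
    """
    return f"https://{account_name}.dfs.core.windows.net/{file_system}"

# Hypothetical example:
# adls_gen2_url("mystorageacct", "mycontainer")
```

The account name is the one you chose when creating the ADLS instance, and the file system is the container you created inside it.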
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azurefs.html?context=cdpaas&locale=en
# Microsoft Azure File Storage connection

To access your data in Microsoft Azure File Storage, create a connection asset for it.

Azure Files is Microsoft's cloud file system. Azure file shares are managed file shares that are accessible through the Server Message Block (SMB) protocol or the Network File System (NFS) protocol.

## Create a connection to Microsoft Azure File Storage

To create the connection asset, you need the connection string. Authentication is managed by the Azure portal access keys.

### Choose the method for creating a connection based on where you are in the platform

**In a project**: Click **Assets > New asset > Connect to a data source**. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html).

**In a deployment space**: Click **Add to space > Connection**. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).

**In the Platform assets catalog**: Click **New connection**. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html).

### Next step: Add data assets from the connection

- See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html).

## Where you can use this connection

You can use Microsoft Azure File Storage connections in the following workspaces and tools:

**Projects**

- Decision Optimization
- SPSS Modeler
- Synthetic Data Generator

**Catalogs**

- Platform assets catalog

## Azure File Storage setup

Set up storage and access keys on the Microsoft Azure portal. For instructions, see [Manage storage account access keys](https://docs.microsoft.com/en-us/azure/storage/common/storage-account-keys-manage).

Example connection string, which you can find in the **ApiKeys** section of the container:

`DefaultEndpointsProtocol=https;AccountName=sampleaccount;AccountKey=samplekey;EndpointSuffix=core.windows.net`

Choose the method to create and manage your Azure Files:

- [Quickstart: Create and manage Azure file shares with Windows virtual machines](https://docs.microsoft.com/en-us/azure/storage/files/storage-files-quick-create-use-windows)
- [Quickstart: Create and manage Azure file shares with the Azure portal](https://docs.microsoft.com/en-us/azure/storage/files/storage-how-to-use-files-portal)
- [Quickstart: Create and manage an Azure file share with Azure PowerShell](https://docs.microsoft.com/en-us/azure/storage/files/storage-how-to-use-files-powershell)
- [Quickstart: Create and manage Azure file shares using Azure CLI](https://docs.microsoft.com/en-us/azure/storage/files/storage-how-to-use-files-cli)
- [Quickstart: Create and manage Azure file shares with Azure Storage Explorer](https://docs.microsoft.com/en-us/azure/storage/files/storage-how-to-use-files-storage-explorer)

## Restriction

Microsoft Azure's maximum file size is 1 TB.

## Supported file types

The Azure File Storage connection supports these file types: Avro, CSV, Delimited text, Excel, JSON, ORC, Parquet, SAS, SAV, SHP, and XML.

## Known issue

During the upload, the data is appended in portions to a temporary blob and then converted into the file. Depending on the size of the streamed content, there might be a delay in creating the file. Wait until all the data is uploaded.

## Learn more

[Azure Files](https://azure.microsoft.com/en-us/services/storage/files/)

**Parent topic:** [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
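The example connection string in the setup section above is a semicolon-separated list of `Key=Value` pairs. A minimal Python sketch (illustrative only, not part of the IBM tooling) that splits such a string into a dictionary, for example to check that the expected fields are present before pasting it into the connection form:

```python
# Parse an Azure storage connection string into its Key=Value parts.
# The sample values below match the example in this topic.
conn_str = (
    "DefaultEndpointsProtocol=https;AccountName=sampleaccount;"
    "AccountKey=samplekey;EndpointSuffix=core.windows.net"
)

def parse_connection_string(s: str) -> dict:
    """Split a semicolon-separated connection string into a dict."""
    parts = (p for p in s.split(";") if p)       # ignore empty segments
    return dict(p.split("=", 1) for p in parts)  # split on the first '=' only (keys can contain '=')

settings = parse_connection_string(conn_str)
print(settings["AccountName"])    # -> sampleaccount
print(settings["EndpointSuffix"]) # -> core.windows.net
```

Splitting only on the first `=` matters because real account keys are base64-encoded and can end with `=` padding characters.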
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-bigquery.html?context=cdpaas&locale=en
Google BigQuery connection
# Google BigQuery connection

To access your data in Google BigQuery, create a connection asset for it.

Google BigQuery is a fully managed, serverless data warehouse that enables scalable analysis over petabytes of data.

## Create a connection to Google BigQuery

To create the connection asset, choose an authentication method. Choices include authentication with or without workload identity federation.

**Without workload identity federation**

- **Credentials**: The contents of the Google service account key JSON file
- **Client ID**, **Client secret**, **Access token**, and **Refresh token**

**With workload identity federation**

You use an external identity provider (IdP) for authentication. An external identity provider uses Identity and Access Management (IAM) instead of service account keys. IAM provides increased security and centralized management. You can use workload identity federation authentication with an access token or with a token URL.

You can configure a Google BigQuery connection for workload identity federation with any identity provider that complies with the OpenID Connect (OIDC) specification and that satisfies the Google Cloud requirements that are described in [Prepare your external IdP](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-providers#oidc). The requirements include:

- The identity provider must support OpenID Connect 1.0.
- The identity provider's OIDC metadata and JWKS endpoints must be publicly accessible over the internet. Google Cloud uses these endpoints to download your identity provider's key set and uses that key set to validate tokens.
- The identity provider is configured so that your workload can obtain ID tokens that meet these criteria:
  - Tokens are signed with the RS256 or ES256 algorithm.
  - Tokens contain an `aud` claim.

For examples of the workload identity federation configuration steps and the Google BigQuery connection details for Amazon Web Services (AWS) and Microsoft Azure, see [Workload identity federation examples](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/wif-examples.html).

#### Workload identity federation with access token connection details

- **Access token**: An access token from the identity provider to connect to BigQuery.
- **Security Token Service audience**: The security token service audience that contains the project ID, pool ID, and provider ID. Use this format:

  `//iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL_ID/providers/PROVIDER_ID`

  For more information, see [Authenticate a workload using the REST API](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-clouds#rest).
- **Service account email**: The email address of the Google service account to be impersonated. For more information, see [Create a service account for the external workload](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-clouds#create_a_service_account_for_the_external_workload).
- **Service account token lifetime** (optional): The lifetime in seconds of the service account access token. The default lifetime of a service account access token is one hour. For more information, see [URL-sourced credentials](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-providers#url-sourced-credentials).
- **Token format**: Text, or JSON with the **Token field name** for the name of the field in the JSON response that contains the token.
- **Token field name**: The name of the field in the JSON response that contains the token. This field appears only when the **Token format** is JSON.
- **Token type**: AWS Signature Version 4 request, Google OAuth 2.0 access token, ID token, JSON Web Token (JWT), or SAML 2.0.

#### Workload identity federation with token URL connection details

- **Security Token Service audience**: The security token service audience that contains the project ID, pool ID, and provider ID. Use this format:

  `//iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL_ID/providers/PROVIDER_ID`

  For more information, see [Authenticate a workload using the REST API](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-clouds#rest).
- **Service account email**: The email address of the Google service account to be impersonated. For more information, see [Create a service account for the external workload](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-clouds#create_a_service_account_for_the_external_workload).
- **Service account token lifetime** (optional): The lifetime in seconds of the service account access token. The default lifetime of a service account access token is one hour. For more information, see [URL-sourced credentials](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-providers#url-sourced-credentials).
- **Token URL**: The URL to retrieve a token.
- **HTTP method**: The HTTP method to use for the token URL request: GET, POST, or PUT.
- **Request body** (for POST or PUT methods): The body of the HTTP request to retrieve a token.
- **HTTP headers**: HTTP headers for the token URL request in JSON or as a JSON body. Use the format: `"Key1"="Value1","Key2"="Value2"`.
- **Token format**: Text, or JSON with the **Token field name** for the name of the field in the JSON response that contains the token.
- **Token field name**: The name of the field in the JSON response that contains the token. This field appears only when the **Token format** is JSON.
- **Token type**: AWS Signature Version 4 request, Google OAuth 2.0 access token, ID token, JSON Web Token (JWT), or SAML 2.0.
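The Security Token Service audience used by both federation methods follows a fixed URI pattern. A small Python sketch, using hypothetical project, pool, and provider values, that assembles the audience string in the documented format:

```python
def sts_audience(project_number: str, pool_id: str, provider_id: str) -> str:
    """Build a Security Token Service audience in the format this connection expects."""
    return (
        f"//iam.googleapis.com/projects/{project_number}"
        f"/locations/global/workloadIdentityPools/{pool_id}"
        f"/providers/{provider_id}"
    )

# Hypothetical values for illustration only.
audience = sts_audience("123456789", "my-pool", "my-provider")
print(audience)
# -> //iam.googleapis.com/projects/123456789/locations/global/workloadIdentityPools/my-pool/providers/my-provider
```

Note that the audience takes the numeric project *number*, not the project ID string, and that the value starts with `//` with no scheme.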
### Other properties

- **Project ID** (optional)
- **Output JSON string format**: The JSON string format for output values that are complex data types (for example, nested or repeated).
  - **Pretty**: Values are formatted before they are sent to output. Use this option to visually read a few rows.
  - **Raw**: (Default) No formatting. Use this option for the best performance.

### Permissions

The connection to Google BigQuery requires the following BigQuery permissions:

- `bigquery.job.create`
- `bigquery.tables.get`
- `bigquery.tables.getData`

Use one of three ways to gain these permissions:

- Use the predefined BigQuery Cloud IAM role `bigquery.admin`, which includes these permissions.
- Use a combination of two roles, one from each column in the following table.
- Create a custom role. See [Create and manage custom roles](https://cloud.google.com/iam/docs/creating-custom-roles).

| First role            | Second role        |
| --------------------- | ------------------ |
| `bigquery.dataEditor` | `bigquery.jobUser` |
| `bigquery.dataOwner`  | `bigquery.user`    |
| `bigquery.dataViewer` |                    |

For more information about permissions and roles in Google BigQuery, see [Predefined roles and permissions](https://cloud.google.com/bigquery/docs/access-control).

### Choose the method for creating a connection based on where you are in the platform

**In a project**: Click **Assets > New asset > Connect to a data source**. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html).

**In a deployment space**: Click **Add to space > Connection**. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).

**In the Platform assets catalog**: Click **New connection**. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html).

### Next step: Add data assets from the connection

- See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html).

## Where you can use this connection

You can use Google BigQuery connections in the following workspaces and tools:

**Projects**

- Data Refinery
- SPSS Modeler
- Synthetic Data Generator

**Catalogs**

- Platform assets catalog

## Google BigQuery setup

[Quickstart by using the Cloud Console](https://cloud.google.com/bigquery/docs/quickstarts/quickstart-web-ui)

## Learn more

- [Google BigQuery documentation](https://cloud.google.com/bigquery/docs)
- [Google BigQuery workload identity federation examples](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/wif-examples.html)

**Parent topic:** [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-box.html?context=cdpaas&locale=en
Box connection
# Box connection

To access your data in Box, create a connection asset for it.

The Box platform is a cloud content management and file sharing service.

## Prerequisite: Create a custom app in Box

Before you create a connection to Box, you create a custom app in the Box Developer Console. You can create an app for application-level access that users can use to share files, or you can create an app for enterprise-wide access to all user accounts. With enterprise-wide access, users do not need to share files and folders with the application.

1. Go to the [Box Developer Console](https://app.box.com/developers/console), and follow the wizard to create a **Custom App**. For the **Authentication Method**, select `OAuth 2.0 with JWT (Server Authentication)`.
2. Make the following selections on the **Configuration** page. Otherwise, keep the default settings.
   1. Select one of two choices for **App Access Level**:
      - Keep the default **App Access Only** selection to allow access where users share files.
      - Select **App + Enterprise Access** to create an app with enterprise-wide access to all user accounts.
   2. Under **Add and Manage Public Keys**, click **Generate a Public/Private Keypair**. This selection requires that two-factor authentication is enabled on the Box account, but you can disable it afterward. The generated key pair produces a config (`*_config.json`) file for you to download. You will need the information in this file to create the connection in your project.
3. If you selected **App + Enterprise Access**, under **Advanced Features**, select both of these check boxes:
   - **Make API calls using the as-user header**
   - **Generate user access tokens**
4. Submit the app client ID to the Box enterprise administrator for authorization: Go to your application in the [Box Developer Console](https://app.box.com/developers/console), select the **General** link in the left sidebar, and scroll down to the **App Authorization** section.

### Choose the method for creating a connection based on where you are in the platform

**In a project**: Click **Assets > New asset > Connect to a data source**. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html).

**In a deployment space**: Click **Add to space > Connection**. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).

**In the Platform assets catalog**: Click **New connection**. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html).

### Create the Box connection

Enter the values from the downloaded config file for these settings:

- **Client ID**
- **Client Secret**
- **Enterprise ID**
- **Private Key** (Replace each `\n` with a newline)
- **Private Key Password** (The `passphrase` value in the config file)
- **Public Key** (The `publicKeyID` value in the config file)

### Enterprise-wide app

If you configured an enterprise-wide access app, enter the username of the Box user account in the **Username** field.

### Application-level app

Users must explicitly share their files with the app's email address for the app to access the files.

1. Make a REST call to the connection to find out the app email address. For example:

   `PUT https://api.dataplatform.cloud.ibm.com/v2/connections/{connection_id}/actions/get_user_info?project_id={project_id}`

   Request body: `{}`

   Returns: `{ "login_name": "[email protected]" }`

2. Share the files and folders in Box that you want accessible from Watson Studio with the login name that was returned by the REST call.

### Next step: Add data assets from the connection

- See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html).

## Where you can use this connection

You can use the Box connection in the following workspaces and tools:

**Projects**

- Data Refinery
- Synthetic Data Generator

**Catalogs**

- Platform assets catalog

## Limitation

If you have thousands of files in a Box folder, the connection might not be able to retrieve the files before a time-out. Jobs or profiling that use the Box files might not work.

**Workaround**: Reorganize the file hierarchy in Box so that there are fewer files in the same folder.

## Supported file types

The Box connection supports these file types: Avro, CSV, Delimited text, Excel, JSON, ORC, Parquet, SAS, SAV, SHP, and XML.

## Learn more

[Managing custom apps](https://support.box.com/hc/articles/360044196653-Managing-custom-apps)

**Parent topic:** [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
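The **Private Key** value in the downloaded config file is stored as a single JSON string in which line breaks appear as literal two-character `\n` sequences. A minimal Python sketch of the replacement step described in the Create the Box connection section, using a shortened hypothetical key:

```python
# The config file stores the PEM key as one JSON string with literal "\n" sequences.
# This key body is a hypothetical placeholder, not a real key.
raw_key = "-----BEGIN ENCRYPTED PRIVATE KEY-----\\nMIIBplaceholder\\n-----END ENCRYPTED PRIVATE KEY-----\\n"

# Replace each two-character "\n" sequence with a real newline before
# pasting the value into the Private Key field.
pem_key = raw_key.replace("\\n", "\n")

print(pem_key.splitlines()[0])  # -> -----BEGIN ENCRYPTED PRIVATE KEY-----
```

If you copy the value straight out of the JSON file, a JSON parser performs this unescaping for you; the manual replacement is only needed when you paste the raw string.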
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cassandra.html?context=cdpaas&locale=en
Apache Cassandra connection
# Apache Cassandra connection # To access your data in Apache Cassandra, create a connection asset for it\. Apache Cassandra is an open source, distributed, NoSQL database\. ## Supported versions ## Apache Cassandra 2\.0 or later ## Create a connection to Apache Cassandra ## To create the connection asset, you need these connection details: <!-- <ul> --> * Hostname or IP address * Port number * Keyspace * Username and password * SSL certificate (if required by the database server) <!-- </ul> --> ### Choose the method for creating a connection based on where you are in the platform ### **In a project** : Click **Assets > New asset > Connect to a data source**\. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html)\. **In a deployment space** : Click **Add to space > Connection**\. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html)\. **In the Platform assets catalog** : Click **New connection**\. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html)\. ### Next step: Add data assets from the connection ### <!-- <ul> --> * See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html)\. <!-- </ul> --> ## Where you can use this connection ## You can use Apache Cassandra connections in the following workspaces and tools: <!-- <ul> --> * Data Refinery * Decision Optimization * Notebooks\. Click **Read data** on the **Code snippets** pane to get the connection credentials and load the data into a data structure\. See [Load data from data source connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html#conns)\. 
* SPSS Modeler * Synthetic Data Generator <!-- </ul> --> **Catalogs** <!-- <ul> --> * Platform assets catalog <!-- </ul> --> ## Apache Cassandra setup ## <!-- <ul> --> * [Installing Cassandra](https://cassandra.apache.org/doc/latest/getting_started/installing.html) * [Configuring Cassandra](https://cassandra.apache.org/doc/latest/getting_started/configuring.html) * [CREATE KEYSPACE](https://cassandra.apache.org/doc/latest/cql/ddl.html#create-keyspace) <!-- </ul> --> ## Learn more ## <!-- <ul> --> * [cassandra\.apache\.org](https://cassandra.apache.org/) * [Cassandra Documentation](https://cassandra.apache.org/doc/latest/architecture/overview.html) <!-- </ul> --> **Parent topic:**[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) <!-- </article "role="article" "> -->
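The connection details above map directly onto what a CQL client needs. As a hedged, stdlib-only sketch (the hostname, keyspace, and credentials are invented examples, and the open-source `cassandra-driver` usage appears only in comments), you might collect the fields and check that the node's CQL port is reachable before creating the asset:

```python
import socket

# Invented example values for the connection details listed above.
details = {
    "host": "cassandra.example.com",  # Hostname or IP address
    "port": 9042,                     # default CQL native-transport port
    "keyspace": "sales",
    "username": "dbuser",
    "password": "secret",
    "use_ssl": False,                 # set True and supply a certificate if required
}

def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# With the open-source cassandra-driver package, the same details would be used as:
#   from cassandra.cluster import Cluster
#   from cassandra.auth import PlainTextAuthProvider
#   auth = PlainTextAuthProvider(details["username"], details["password"])
#   cluster = Cluster([details["host"]], port=details["port"], auth_provider=auth)
#   session = cluster.connect(details["keyspace"])
```

Running the reachability check before creating the asset catches firewall or hostname problems early.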
D5D7BD00BD17EFE339C848F46345F2192FDA5C11
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cloud-storage.html?context=cdpaas&locale=en
Google Cloud Storage connection
Google Cloud Storage connection To access your data in Google Cloud Storage, create a connection asset for it. Google Cloud Storage is an online file storage web service for storing and accessing data on Google Cloud Platform Infrastructure. Create a connection to Google Cloud Storage To create the connection asset, you need these connection details: * Project ID * Credentials: The contents of the Google service account key JSON file * Client ID and Client secret * Access token * Refresh token Choose the method for creating a connection based on where you are in the platform In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). Next step: Add data assets from the connection * See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). Where you can use this connection You can use Google Cloud Storage connections in the following workspaces and tools: Projects * Data Refinery * Decision Optimization * SPSS Modeler * Synthetic Data Generator Catalogs * Platform assets catalog Supported file types The Google Cloud Storage connection supports these file types: Avro, CSV, Delimited text, Excel, JSON, ORC, Parquet, SAS, SAV, SHP, and XML. Learn more [Google Cloud Storage documentation](https://cloud.google.com/storage/docs/introduction) Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
# Google Cloud Storage connection # To access your data in Google Cloud Storage, create a connection asset for it\. Google Cloud Storage is an online file storage web service for storing and accessing data on Google Cloud Platform Infrastructure\. ## Create a connection to Google Cloud Storage ## To create the connection asset, you need these connection details: <!-- <ul> --> * Project ID * Credentials: The contents of the Google service account key JSON file * Client ID and Client secret * Access token * Refresh token <!-- </ul> --> ### Choose the method for creating a connection based on where you are in the platform ### **In a project** : Click **Assets > New asset > Connect to a data source**\. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html)\. **In a deployment space** : Click **Add to space > Connection**\. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html)\. **In the Platform assets catalog** : Click **New connection**\. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html)\. ### Next step: Add data assets from the connection ### <!-- <ul> --> * See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html)\. <!-- </ul> --> ## Where you can use this connection ## You can use Google Cloud Storage connections in the following workspaces and tools: **Projects** <!-- <ul> --> * Data Refinery * Decision Optimization * SPSS Modeler * Synthetic Data Generator <!-- </ul> --> **Catalogs** <!-- <ul> --> * Platform assets catalog <!-- </ul> --> ## Supported file types ## The Google Cloud Storage connection supports these file types: Avro, CSV, Delimited text, Excel, JSON, ORC, Parquet, SAS, SAV, SHP, and XML\. 
## Learn more ## [Google Cloud Storage documentation](https://cloud.google.com/storage/docs/introduction) **Parent topic:**[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) <!-- </article "role="article" "> -->
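The Credentials field takes the full contents of a Google service account key JSON file. This hedged, stdlib-only sketch (the key below is fabricated and non-functional) shows the fields that kind of file carries, with the `google-cloud-storage` client equivalent left in comments:

```python
import json

# Fabricated, non-functional service account key for illustration only.
key_json = """{
  "type": "service_account",
  "project_id": "my-project",
  "client_email": "loader@my-project.iam.gserviceaccount.com",
  "private_key": "-----BEGIN PRIVATE KEY-----\\nREDACTED\\n-----END PRIVATE KEY-----\\n"
}"""

key = json.loads(key_json)
project_id = key["project_id"]  # corresponds to the connection's Project ID field

# With the google-cloud-storage client library, the same file would be used as:
#   from google.cloud import storage
#   client = storage.Client.from_service_account_json("key.json")
#   buckets = list(client.list_buckets())
```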
1D393A73EBC623578DD6DA2C09C20E97FAE074D4
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cloudant.html?context=cdpaas&locale=en
IBM Cloudant connection
IBM Cloudant connection To access your data in IBM Cloudant, create a connection asset for it. Cloudant is a JSON document database available in IBM Cloud. Create a connection to Cloudant To create the connection asset, you need these connection details: * URL to the Cloudant database * Database name * Username and password Choose the method for creating a connection based on where you are in the platform In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). Next step: Add data assets from the connection * See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). Where you can use this connection You can use Cloudant connections in the following workspaces and tools: Projects * Data Refinery * Decision Optimization * SPSS Modeler * Synthetic Data Generator Catalogs * Platform assets catalog Cloudant setup To set up the Cloudant database on IBM Cloud, see [Getting started with IBM Cloudant](https://cloud.ibm.com/docs/Cloudant?topic=Cloudant-getting-started-with-cloudant). When you create your Cloudant service, for Authentication method, select IAM and legacy credentials. Restriction IBM Cloud Query (CQ) is not supported. Learn more [IBM Cloudant docs](https://cloud.ibm.com/docs/Cloudant) Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
# IBM Cloudant connection # To access your data in IBM Cloudant, create a connection asset for it\. Cloudant is a JSON document database available in IBM Cloud\. ## Create a connection to Cloudant ## To create the connection asset, you need these connection details: <!-- <ul> --> * URL to the Cloudant database * Database name * Username and password <!-- </ul> --> ### Choose the method for creating a connection based on where you are in the platform ### **In a project** : Click **Assets > New asset > Connect to a data source**\. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html)\. **In a deployment space** : Click **Add to space > Connection**\. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html)\. **In the Platform assets catalog** : Click **New connection**\. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html)\. ### Next step: Add data assets from the connection ### <!-- <ul> --> * See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html)\. <!-- </ul> --> ## Where you can use this connection ## You can use Cloudant connections in the following workspaces and tools: **Projects** <!-- <ul> --> * Data Refinery * Decision Optimization * SPSS Modeler * Synthetic Data Generator <!-- </ul> --> **Catalogs** <!-- <ul> --> * Platform assets catalog <!-- </ul> --> ## Cloudant setup ## To set up the Cloudant database on IBM Cloud, see [Getting started with IBM Cloudant](https://cloud.ibm.com/docs/Cloudant?topic=Cloudant-getting-started-with-cloudant)\. When you create your Cloudant service, for **Authentication method**, select **IAM and legacy credentials**\. ## Restriction ## IBM Cloud Query (CQ) is not supported\. 
## Learn more ## [IBM Cloudant docs](https://cloud.ibm.com/docs/Cloudant) **Parent topic:**[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) <!-- </article "role="article" "> -->
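Cloudant speaks the CouchDB-style HTTP API, so the three details above (URL, database name, username and password) are enough to form a request. A hedged, stdlib-only sketch with invented values; the `urlopen` call that would contact a live service is left commented out:

```python
import base64
import urllib.request

# Invented example values for the connection details listed above.
CLOUDANT_URL = "https://example.cloudant.com"
DB_NAME = "sales"
USERNAME = "apiuser"
PASSWORD = "secret"

def basic_auth_header(user: str, password: str) -> str:
    """Build the HTTP Basic Authorization header used by legacy credentials."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"

# GET /{db} returns database metadata such as the document count.
request = urllib.request.Request(
    f"{CLOUDANT_URL}/{DB_NAME}",
    headers={"Authorization": basic_auth_header(USERNAME, PASSWORD)},
)
# response = urllib.request.urlopen(request)  # would contact the live service
```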
DEDC196B72E081228279B523BA3585AED93C2370
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cloudera.html?context=cdpaas&locale=en
Cloudera Impala connection
Cloudera Impala connection To access your data in Cloudera Impala, create a connection asset for it. Cloudera Impala provides SQL queries directly on your Apache Hadoop data stored in HDFS or HBase. Supported versions Cloudera Impala 1.3+ Create a connection to Cloudera Impala To create the connection asset, you need these connection details: * Database name * Hostname or IP address * Port number * Username and password * SSL certificate (if required by the database server) Choose the method for creating a connection based on where you are in the platform In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). Next step: Add data assets from the connection * See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). Where you can use this connection You can use Cloudera Impala connections in the following workspaces and tools: Projects * Data Refinery * SPSS Modeler * Synthetic Data Generator Catalogs * Platform assets catalog Cloudera Impala setup [Cloudera Impala installation](https://docs.cloudera.com/documentation/enterprise/5-14-x/topics/cm_ig_install_impala.html) Restriction You can use this connection only for source data. You cannot write to data or export data with this connection. 
Running SQL statements To ensure that your SQL statements run correctly, refer to the [Impala SQL Language Reference](https://docs.cloudera.com/documentation/enterprise/5-5-x/topics/impala_langref.html) for the correct syntax. Learn more [Cloudera Impala documentation](https://docs.cloudera.com/documentation/enterprise/5-5-x/topics/impala.html) Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
# Cloudera Impala connection # To access your data in Cloudera Impala, create a connection asset for it\. Cloudera Impala provides SQL queries directly on your Apache Hadoop data stored in HDFS or HBase\. ## Supported versions ## Cloudera Impala 1\.3\+ ## Create a connection to Cloudera Impala ## To create the connection asset, you need these connection details: <!-- <ul> --> * Database name * Hostname or IP address * Port number * Username and password * SSL certificate (if required by the database server) <!-- </ul> --> ### Choose the method for creating a connection based on where you are in the platform ### **In a project** : Click **Assets > New asset > Connect to a data source**\. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html)\. **In a deployment space** : Click **Add to space > Connection**\. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html)\. **In the Platform assets catalog** : Click **New connection**\. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html)\. ### Next step: Add data assets from the connection ### <!-- <ul> --> * See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html)\. <!-- </ul> --> ## Where you can use this connection ## You can use Cloudera Impala connections in the following workspaces and tools: **Projects** <!-- <ul> --> * Data Refinery * SPSS Modeler * Synthetic Data Generator <!-- </ul> --> **Catalogs** <!-- <ul> --> * Platform assets catalog <!-- </ul> --> ## Cloudera Impala setup ## [Cloudera Impala installation](https://docs.cloudera.com/documentation/enterprise/5-14-x/topics/cm_ig_install_impala.html) ## Restriction ## You can use this connection only for source data\. You cannot write to data or export data with this connection\. 
### Running SQL statements ### To ensure that your SQL statements run correctly, refer to the [Impala SQL Language Reference](https://docs.cloudera.com/documentation/enterprise/5-5-x/topics/impala_langref.html) for the correct syntax\. ## Learn more ## [Cloudera Impala documentation](https://docs.cloudera.com/documentation/enterprise/5-5-x/topics/impala.html) **Parent topic:**[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) <!-- </article "role="article" "> -->
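Because this connection is source-only, any SQL sent through it must be a query rather than a write. A hedged sketch: the guard below is an invented helper (not part of the product), and the `impyla` DB-API calls that would reach a live cluster are shown only as comments with fabricated host details:

```python
# impyla (open-source Impala client, DB-API compliant) would use the same
# details listed above; illustrative only, with invented values:
#   from impala.dbapi import connect
#   conn = connect(host="impala.example.com", port=21050,
#                  database="analytics", user="dbuser", password="secret")
#   cur = conn.cursor()
#   cur.execute("SELECT * FROM web_logs LIMIT 10")

def is_read_only(sql: str) -> bool:
    """Crude guard: True when the statement starts as a read (SELECT/WITH)."""
    stripped = sql.lstrip()
    first = stripped.split(None, 1)[0].upper() if stripped else ""
    return first in {"SELECT", "WITH"}

# Since the connection cannot write or export data, reject anything else:
assert is_read_only("SELECT * FROM web_logs LIMIT 10")
assert not is_read_only("INSERT INTO web_logs VALUES (1)")
```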
981BCFC7F5817524CB8E0C9FE04E3F267A268926
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cognos.html?context=cdpaas&locale=en
IBM Cognos Analytics connection
IBM Cognos Analytics connection To access your data in Cognos Analytics, create a connection asset for it. Cognos Analytics is an AI-fueled business intelligence platform that supports the entire analytics cycle, from discovery to operationalization. Supported versions IBM Cognos Analytics 11 Supported content types * Report (except Reports that require prompts) * Query Create a connection to Cognos Analytics To create the connection asset, you need these connection details: * Gateway URL * SSL certificate (if required by the database server) Credentials For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). Choose the method for creating a connection based on where you are in the platform In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). Next step: Add data assets from the connection * See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). Where you can use this connection You can use Cognos Analytics connections in the following workspaces and tools: * Data Refinery * Notebooks. Click Read data on the Code snippets pane to get the connection credentials and load the data into a data structure. 
See [Load data from data source connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html#conns). * SPSS Modeler * Synthetic Data Generator Catalogs * Platform assets catalog Cognos Analytics setup Instructions for setting up Cognos Analytics: [Getting started in Cognos Analytics](https://www.ibm.com/support/knowledgecenter/SSEP7J_11.1.0/com.ibm.swg.ba.cognos.ca_gtstd.doc/c_gtstd_ica_overview.html). Restrictions * You can use this connection only for source data. You cannot write to data or export data with this connection. * Notebooks: Self-signed certificates are not supported for notebooks. The SSL certificate that is imported into the Cognos Analytics server must be signed by a trusted root authority. To confirm that the certificate is signed by a trusted root authority, enter the Cognos Analytics URL into a browser and verify that there is a padlock to the left of the URL. If the certificate is self-signed, the Cognos Analytics server administrator must replace it with a trusted TLS certificate. Running SQL statements To ensure that your SQL statements run correctly, refer to [Working with Queries in SQL](https://www.ibm.com/docs/SSEP7J_11.2.0/com.ibm.swg.ba.cognos.ug_cr_rptstd.doc/c_cr_rptstd_wrkdat_working_with_sql_mdx_rel.html) in the Cognos Analytics documentation for the correct syntax. Learn more [Cognos Analytics documentation](https://www.ibm.com/docs/cognos-analytics/11.0.0) Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
# IBM Cognos Analytics connection # To access your data in Cognos Analytics, create a connection asset for it\. Cognos Analytics is an AI\-fueled business intelligence platform that supports the entire analytics cycle, from discovery to operationalization\. ## Supported versions ## IBM Cognos Analytics 11 ## Supported content types ## <!-- <ul> --> * Report (except Reports that require prompts) * Query <!-- </ul> --> ## Create a connection to Cognos Analytics ## To create the connection asset, you need these connection details: <!-- <ul> --> * Gateway URL * SSL certificate (if required by the database server) <!-- </ul> --> ### Credentials ### For **Private connectivity**, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html)\. ### Choose the method for creating a connection based on where you are in the platform ### **In a project** : Click **Assets > New asset > Connect to a data source**\. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html)\. **In a deployment space** : Click **Add to space > Connection**\. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html)\. **In the Platform assets catalog** : Click **New connection**\. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html)\. ### Next step: Add data assets from the connection ### <!-- <ul> --> * See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html)\. <!-- </ul> --> ## Where you can use this connection ## You can use Cognos Analytics connections in the following workspaces and tools: <!-- <ul> --> * Data Refinery * Notebooks\. 
Click **Read data** on the **Code snippets** pane to get the connection credentials and load the data into a data structure\. See [Load data from data source connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html#conns)\. * SPSS Modeler * Synthetic Data Generator <!-- </ul> --> **Catalogs** <!-- <ul> --> * Platform assets catalog <!-- </ul> --> ## Cognos Analytics setup ## Instructions for setting up Cognos Analytics: [Getting started in Cognos Analytics](https://www.ibm.com/support/knowledgecenter/SSEP7J_11.1.0/com.ibm.swg.ba.cognos.ca_gtstd.doc/c_gtstd_ica_overview.html)\. ## Restrictions ## <!-- <ul> --> * You can use this connection only for source data\. You cannot write to data or export data with this connection\. * Notebooks: Self\-signed certificates are not supported for notebooks\. The SSL certificate that is imported into the Cognos Analytics server must be signed by a trusted root authority\. To confirm that the certificate is signed by a trusted root authority, enter the Cognos Analytics URL into a browser and verify that there is a padlock to the left of the URL\. If the certificate is self\-signed, the Cognos Analytics server administrator must replace it with a trusted TLS certificate\. <!-- </ul> --> ### Running SQL statements ### To ensure that your SQL statements run correctly, refer to [Working with Queries in SQL](https://www.ibm.com/docs/SSEP7J_11.2.0/com.ibm.swg.ba.cognos.ug_cr_rptstd.doc/c_cr_rptstd_wrkdat_working_with_sql_mdx_rel.html) in the Cognos Analytics documentation for the correct syntax\. ## Learn more ## [Cognos Analytics documentation](https://www.ibm.com/docs/cognos-analytics/11.0.0) **Parent topic:**[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) <!-- </article "role="article" "> -->
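The notebook restriction above (no self-signed certificates) can be checked without a browser. A hedged, stdlib-only sketch, not an official tool: a verified TLS handshake against the gateway host fails when the certificate does not chain to a trusted root:

```python
import socket
import ssl

def has_trusted_certificate(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """True if a TLS handshake with full chain and hostname verification succeeds."""
    context = ssl.create_default_context()  # verifies the chain and the hostname
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with context.wrap_socket(sock, server_hostname=host):
                return True
    except (ssl.SSLError, OSError):
        return False

# Example call (would contact the network; hostname invented):
#   has_trusted_certificate("cognos.example.com")
```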
C119B8D62C156451A8B8665E8969422803527DF3
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-compose-mysql.html?context=cdpaas&locale=en
IBM Cloud Databases for MySQL connection
IBM Cloud Databases for MySQL connection To access your data in IBM Cloud Databases for MySQL, create a connection asset for it. IBM Cloud Databases for MySQL extends the capabilities of MySQL by offering an auto-scaling deployment system managed on IBM Cloud that delivers high availability, redundancy, and automated backups. IBM Cloud Databases for MySQL was formerly known as IBM Cloud Compose for MySQL. Create a connection to IBM Cloud Databases for MySQL To create the connection asset, you need these connection details: * Database name * Hostname or IP address * Port number * Username and password Choose the method for creating a connection based on where you are in the platform In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). Next step: Add data assets from the connection * See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). Where you can use this connection You can use IBM Cloud Databases for MySQL connections in the following workspaces and tools: Projects * SPSS Modeler * Synthetic Data Generator Catalogs * Platform assets catalog IBM Cloud Databases for MySQL setup [IBM Cloud Databases for MySQL](https://cloud.ibm.com/catalog/services/compose-for-mysql) Restriction For SPSS Modeler, you can use this connection only to import data. You cannot export data to this connection or to an IBM Cloud Databases for MySQL connected data asset. 
Learn more [IBM Cloud Databases for MySQL Help](https://help.compose.com/docs/mysql-compose-for-mysql#section-compose-for-mysql-for-all) Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
# IBM Cloud Databases for MySQL connection # To access your data in IBM Cloud Databases for MySQL, create a connection asset for it\. IBM Cloud Databases for MySQL extends the capabilities of MySQL by offering an auto\-scaling deployment system managed on IBM Cloud that delivers high availability, redundancy, and automated backups\. IBM Cloud Databases for MySQL was formerly known as *IBM Cloud Compose for MySQL*\. ## Create a connection to IBM Cloud Databases for MySQL ## To create the connection asset, you need these connection details: <!-- <ul> --> * Database name * Hostname or IP address * Port number * Username and password <!-- </ul> --> ### Choose the method for creating a connection based on where you are in the platform ### **In a project** : Click **Assets > New asset > Connect to a data source**\. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html)\. **In a deployment space** : Click **Add to space > Connection**\. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html)\. **In the Platform assets catalog** : Click **New connection**\. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html)\. ### Next step: Add data assets from the connection ### <!-- <ul> --> * See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html)\. 
<!-- </ul> --> ## Where you can use this connection ## You can use IBM Cloud Databases for MySQL connections in the following workspaces and tools: **Projects** <!-- <ul> --> * SPSS Modeler * Synthetic Data Generator <!-- </ul> --> **Catalogs** <!-- <ul> --> * Platform assets catalog <!-- </ul> --> ## IBM Cloud Databases for MySQL setup ## [IBM Cloud Databases for MySQL](https://cloud.ibm.com/catalog/services/compose-for-mysql) ## Restriction ## For SPSS Modeler, you can use this connection only to import data\. You cannot export data to this connection or to an IBM Cloud Databases for MySQL connected data asset\. ## Learn more ## [IBM Cloud Databases for MySQL Help](https://help.compose.com/docs/mysql-compose-for-mysql#section-compose-for-mysql-for-all) **Parent topic:**[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) <!-- </article "role="article" "> -->
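The four details above are the same ones any MySQL client takes. A hedged sketch with invented values: the helper builds the equivalent `mysql://` URL (percent-encoding the credentials), and the `mysql-connector-python` call is left as a comment:

```python
from urllib.parse import quote

# Invented example values for the connection details listed above.
details = {
    "database": "sales",
    "host": "mysql.example.com",
    "port": 3306,             # default MySQL port
    "user": "dbuser",
    "password": "p@ss:word",  # characters like @ and : must be percent-encoded
}

def mysql_url(d: dict) -> str:
    """Build a mysql:// connection URL from the connection details."""
    user = quote(d["user"], safe="")
    password = quote(d["password"], safe="")
    return f"mysql://{user}:{password}@{d['host']}:{d['port']}/{d['database']}"

# Driver equivalent (requires the mysql-connector-python package):
#   import mysql.connector
#   conn = mysql.connector.connect(**details)
```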
CD1C58AD8E180AB922890FA8182FF7F51C589962
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos-infra.html?context=cdpaas&locale=en
IBM Cloud Object Storage (infrastructure) connection
IBM Cloud Object Storage (infrastructure) connection To access your data in IBM Cloud Object Storage (infrastructure), create a connection asset for it. The Cloud Object Storage (infrastructure) connection is for object storage that was formerly on SoftLayer. SoftLayer was replaced by IBM Cloud. You cannot provision a new instance for Cloud Object Storage (infrastructure). This connection is for users who set up an earlier instance on SoftLayer. Create a connection to Cloud Object Storage (infrastructure) To create the connection asset, you need this information. Required connection values The Login URL is required, plus one of the following values for authentication: * Access Key and Secret Key * Credentials If you plan to use the S3 API, you must enter an Access Key. Connection values in the Cloud Object Storage Resource list The values for these fields are found in the Cloud Object Storage Resource list. To find the Login URL: 1. Go to the Cloud Object Storage Resource list at [https://cloud.ibm.com/resources](https://cloud.ibm.com/resources). 2. Expand the Storage resource. 3. Click the Cloud Object Storage service. From the menu, select Endpoints. 4. Copy the value of the public endpoint that is in the same region as the bucket that you want to use. To find the values for Access key and the Secret Key: 1. Go to the Cloud Object Storage Resource list at [https://cloud.ibm.com/resources](https://cloud.ibm.com/resources). 2. Expand the Storage resource. 3. Click the Cloud Object Storage service, and then click the Service credentials tab. 4. Expand the Key name that you want to use. Copy the values without the quotation marks: 5. Access Key: access_key_id 6. Secret Key: secret_access_key Note: Alternatively, you can use the contents of the JSON file in Credentials to copy the values for the Access Key and Secret Key. To find the Credentials: 1. Go to the Cloud Object Storage Resource list at [https://cloud.ibm.com/resources](https://cloud.ibm.com/resources). 2. 
Expand the Storage resource. 3. Click the Cloud Object Storage service, and then click the Service credentials tab. 4. Expand the Key name that you want to use. 5. Copy the entire JSON file. Include the opening and closing braces { } symbols. For Certificates (Optional) Enter the self-signed SSL certificate that was created by a tool such as OpenSSL. Choose the method for creating a connection based on where you are in the platform In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). Next step: Add data assets from the connection * See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). Where you can use this connection Projects * Decision Optimization * SPSS Modeler * Synthetic Data Generator Catalogs * Platform assets catalog Supported file types The Cloud Object Storage (infrastructure) connection supports these file types: Avro, CSV, Delimited text, Excel, JSON, ORC, Parquet, SAS, SAV, SHP, and XML. Learn more Related connection: [IBM Cloud Object Storage connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html) Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html?context=cdpaas&locale=en
IBM Cloud Object Storage connection
To access your data in IBM Cloud Object Storage (COS), create a connection asset for it.

IBM Cloud Object Storage on IBM Cloud provides unstructured data storage for cloud applications. Cloud Object Storage offers S3 API and application binding with regional and cross-regional resiliency.

Create a connection to IBM Cloud Object Storage

To create the connection asset, you need these connection details:

* Bucket name. (Optional. If you do not enter the bucket name, then the credentials must have permission to list all the buckets.)
* Login URL. To find the Login URL:
  1. Go to the Cloud Object Storage Resource list at [https://cloud.ibm.com/resources](https://cloud.ibm.com/resources).
  2. Expand the Storage resource.
  3. Click the Cloud Object Storage service. From the menu, select Endpoints.
  4. Optional: Use the Select resiliency and Select location menus to filter the choices.
  5. Copy the value of the public endpoint that is in the same region as the bucket that you want to use.
* SSL certificate (Optional): A self-signed certificate that was created by a tool such as OpenSSL.

Credentials

Use one of the following combinations of values for authentication:

* Service credentials
* Resource instance ID and API key
* Resource instance ID, API key, Access key, and Secret key (In this combination, the Resource instance ID and API key are used for authentication. The Access key and Secret key are stored.)
* Access key and Secret key

To find the value for Service credentials:

1. Go to the Cloud Object Storage Resource list at [https://cloud.ibm.com/resources](https://cloud.ibm.com/resources).
2. Expand the Storage resource.
3. Click the Cloud Object Storage service, and then click the Service credentials tab.
4. Expand the Key name that you want to use.
5. Copy the entire JSON file. Include the opening and closing braces { } symbols.

To find the values for the API key, Access key, Secret key, and the Resource instance ID:

1. Go to the Cloud Object Storage Resource list at [https://cloud.ibm.com/resources](https://cloud.ibm.com/resources).
2. Expand the Storage resource.
3. Click the Cloud Object Storage service, and then click the Service credentials tab.
4. Expand the Key name that you want to use. Copy the values without the quotation marks:
   * API key: apikey
   * Access key: access_key_id
   * Secret key: secret_access_key
   * Resource instance ID: resource_instance_id

Choose the method for creating a connection based on where you are in the platform

In a project: Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html).

In a deployment space: Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).

In the Platform assets catalog: Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html).

Next step: Add data assets from the connection

* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html).

Where you can use this connection

You can use IBM Cloud Object Storage connections in the following workspaces and tools:

Projects
* AutoAI
* Data Refinery
* Decision Optimization
* SPSS Modeler
* Synthetic Data Generator

Catalogs
* Platform assets catalog

Connecting to the Cloud Object Storage service with the S3 API

To connect to Cloud Object Storage with the S3 API, you need the Login URL, an Access key, and a Secret key. The API key is a token that is used to call the Watson IoT Platform HTTP APIs. Users are assigned roles and they can generate an API key that they can use to authorize calls to API endpoints. For more information, see the [IBM Cloud Object Storage S3 API documentation](https://cloud.ibm.com/docs/cloud-object-storage/api-reference?topic=cloud-object-storage-compatibility-api).

IBM Cloud Object Storage setup

[Getting started with IBM Cloud Object Storage](https://cloud.ibm.com/docs/cloud-object-storage)

Supported file types

The IBM Cloud Object Storage connection supports these file types: Avro, CSV, Delimited text, Excel, JSON, ORC, Parquet, SAS, SAV, SHP, and XML.

Learn more

[Controlling access to COS buckets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/cos_buckets.html)

Parent topic: [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
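The four credential values named in the steps above (apikey, access_key_id, secret_access_key, resource_instance_id) can be read out of the service-credentials JSON programmatically instead of copied by hand. A sketch, assuming the HMAC keys are nested under a "cos_hmac_keys" object (as in credentials created with HMAC enabled; adjust if your JSON has them at the top level, and note the sample values are hypothetical):

```python
import json

def connection_fields(service_credentials: str) -> dict:
    """Map a pasted COS service-credentials JSON to the four
    connection fields named in this topic. Assumes HMAC keys are
    nested under "cos_hmac_keys"; None is returned for missing keys."""
    creds = json.loads(service_credentials)
    hmac_keys = creds.get("cos_hmac_keys", {})
    return {
        "API key": creds["apikey"],
        "Access key": hmac_keys.get("access_key_id"),
        "Secret key": hmac_keys.get("secret_access_key"),
        "Resource instance ID": creds["resource_instance_id"],
    }

# Hypothetical service credentials (not a real key).
sample = json.dumps({
    "apikey": "example-apikey",
    "cos_hmac_keys": {
        "access_key_id": "example-access",
        "secret_access_key": "example-secret",
    },
    "resource_instance_id": "crn:v1:bluemix:public:cloud-object-storage:::example::",
})
print(connection_fields(sample))
```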
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cosmosdb.html?context=cdpaas&locale=en
Microsoft Azure Cosmos DB connection
To access your data in Microsoft Azure Cosmos DB, create a connection asset for it.

Azure Cosmos DB is a fully managed NoSQL database service.

Create a connection to Microsoft Azure Cosmos DB

To create the connection asset, you need these connection details:

* Hostname
* Port number
* Master key: The Azure Cosmos Database primary read-write key

For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html).

Choose the method for creating a connection based on where you are in the platform

In a project: Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html).

In a deployment space: Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).

In the Platform assets catalog: Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html).

Next step: Add data assets from the connection

* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html).

Where you can use this connection

You can use Microsoft Azure Cosmos DB connections in the following workspaces and tools:

Projects
* Data Refinery
* Decision Optimization
* Notebooks. Click Read data on the Code snippets pane to get the connection credentials and load the data into a data structure. See [Load data from data source connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html#conns).
* SPSS Modeler
* Synthetic Data Generator

Catalogs
* Platform assets catalog

Azure Cosmos DB setup

* Set up Azure Cosmos DB: [Azure portal](https://docs.microsoft.com/en-us/azure/cosmos-db/create-cosmosdb-resources-portal)
* Secure access to data in Azure Cosmos DB: [Master keys](https://docs.microsoft.com/en-us/azure/cosmos-db/secure-access-to-data#master-keys)

Restrictions

Only the Core (SQL) API is supported.

Running SQL statements

To ensure that your SQL statements run correctly, refer to the [Azure Cosmos DB documentation](https://docs.microsoft.com/azure/cosmos-db/) for the correct syntax.

Learn more

[Azure Cosmos DB](https://azure.microsoft.com/en-us/services/cosmos-db/)

Parent topic: [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
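The Master key named above is the account's primary read-write key; under the hood, Cosmos DB's REST API authenticates each request by signing it with this key (an HMAC-SHA256 over the verb, resource type, resource link, and date, per Microsoft's documented master-key auth scheme). A sketch of that signature — the key and resource names below are hypothetical, and in a real call the date must match the x-ms-date header:

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def cosmos_auth_token(master_key: str, verb: str, resource_type: str,
                      resource_link: str, date_rfc1123: str) -> str:
    """Build the Authorization header value for a Cosmos DB REST call,
    following the documented master-key signature scheme."""
    key = base64.b64decode(master_key)
    payload = (f"{verb.lower()}\n{resource_type.lower()}\n"
               f"{resource_link}\n{date_rfc1123.lower()}\n\n")
    signature = base64.b64encode(
        hmac.new(key, payload.encode("utf-8"), hashlib.sha256).digest()
    ).decode()
    return quote(f"type=master&ver=1.0&sig={signature}", safe="")

# Hypothetical key and resource link (not a real master key).
token = cosmos_auth_token(
    master_key=base64.b64encode(b"not-a-real-master-key").decode(),
    verb="GET",
    resource_type="docs",
    resource_link="dbs/ToDoList/colls/Items",
    date_rfc1123="Thu, 27 Apr 2017 00:51:12 GMT",
)
print(token)
```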
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-data-virtual.html?context=cdpaas&locale=en
IBM Watson Query connection
To access your data in Watson Query, create a connection asset for it.

A Watson Query connection is created automatically in a catalog or project when you publish a virtual object to a catalog or assign it to a project. The Watson Query connection was formerly named the Data Virtualization connection.

Watson Query integrates data sources across multiple types and locations and turns all this data into one logical data view. This virtual data view makes the job of getting value out of your data easy.

Create a Watson Query connection

To create the connection asset, you need these connection details:

* Database name
* Hostname or IP address of the database
* Port number
* Instance ID
* [Credentials information](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-data-virtual.html?context=cdpaas&locale=en#creds)
* Application name (optional): The name of the application that is currently using the connection.
* Client accounting information (optional): The value of the accounting string from the client information that is specified for the connection.
* Client hostname (optional): The hostname of the machine on which the application that is using the connection is running.
* Client user (optional): The name of the user on whose behalf the application that is using the connection is running.
* SSL certificate (if required by the database server)

For information about the optional client properties, see [Client info properties support by the IBM Data Server Driver for JDBC and SQLJ](https://www.ibm.com/docs/SSEPGG_11.5.0/Javadocs/src/tpc/imjcc_r0052001.html).

Credentials

Connecting to a Watson Query instance in IBM Cloud

* API key: Enter an IAM API key. Prerequisites:
  1. Add the user ID as an IAM user or as a service ID to your IBM Cloud account. For instructions, see the Console user experience section of the [Identity and access management (IAM) on IBM Cloud](https://cloud.ibm.com/docs/Db2whc?topic=Db2whc-iam#console-ux) topic.
  2. The Watson Query Manager of the Watson Query instance must add IAM users by selecting Data > Watson Query > Administration > User management from the IBM watsonx navigation menu.

Connecting to a Watson Query instance in Cloud Pak for Data (on-prem)

* User credentials: Enter your Cloud Pak for Data username and password.
* API key: Enter an API key value with your Cloud Pak for Data username and a Cloud Pak for Data API key. Use this syntax: user_name:api_key

For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html).

Choose the method for creating a connection based on where you are in the platform

In a project: Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html).

In a deployment space: Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).

In the Platform assets catalog: Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html).

Next step: Add data assets from the connection

* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html).

Where you can use this connection

You can use Watson Query connections in the following workspaces and tools:

Projects
* Data Refinery
* SPSS Modeler
* Synthetic Data Generator

Catalogs
* Platform assets catalog

Setup

[Getting started with Watson Query](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-virtualize.html)

Restriction

You can use this connection only for source data. You cannot write to data with this connection.

Parent topic: [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
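The Cloud Pak for Data API-key option above expects a single value in user_name:api_key form. A trivial helper that assembles and sanity-checks that value (the sample username and key are hypothetical):

```python
def watson_query_api_key_value(user_name: str, api_key: str) -> str:
    """Combine a Cloud Pak for Data username and API key into the
    user_name:api_key value that the connection form expects."""
    if not user_name or not api_key:
        raise ValueError("Both user_name and api_key are required")
    if ":" in user_name:
        raise ValueError("user_name must not contain ':'")
    return f"{user_name}:{api_key}"

print(watson_query_api_key_value("jane.doe", "a1b2c3"))  # jane.doe:a1b2c3
```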
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-datastax.html?context=cdpaas&locale=en
IBM Cloud Databases for DataStax connection
To access your data in IBM Cloud Databases for DataStax, create a connection asset for it.

Important: The IBM Cloud Databases for DataStax connector is deprecated and will be discontinued in a future release.

IBM Cloud Databases for DataStax is a scale-out NoSQL database in IBM Cloud that is built on Apache Cassandra. It's designed to power real-time applications with high availability and massive scalability.

Create a connection to IBM Cloud Databases for DataStax

To create the connection asset, you need these connection details:

* Host name
* Port number
* Username and password
* Keyspace
* SSL certificate (if required by the database server)

The recommended values for the SSL certificate, Key certificate, and Private key fields are in secure-connect-bundle.zip, which you can download from the Overview tab of the Databases for DataStax instance. After you download and unzip the secure connect bundle, use the following file contents:

* SSL certificate property: contents of ca.crt
* Key certificate property: contents of cert
* Private key property: contents of key. To paste the contents of key into the Private key property, collapse the file to a single line, for example by running tr -d '\n' < key in a console and copying the output.

For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html).

Choose the method for creating a connection based on where you are in the platform

In a project: Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html).

In a deployment space: Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).

In the Platform assets catalog: Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html).

Next step: Add data assets from the connection

* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html).

Where you can use this connection

Projects
* Data Refinery
* Decision Optimization
* SPSS Modeler
* Synthetic Data Generator

Catalogs
* Platform assets catalog

IBM Cloud Databases for DataStax setup

[Getting Started with IBM Cloud Databases for DataStax](https://cloud.ibm.com/docs/databases-for-cassandra)

Learn more

[IBM Cloud Databases for DataStax Documentation](https://cloud.ibm.com/databases/databases-for-cassandra/create)

Parent topic: [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
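The one-lining step above (tr -d '\n' < key) can also be done portably; a small sketch using only Python's standard library (the key file name comes from the unzipped bundle, and the example writes a throwaway file to stand in for it):

```python
import os
import tempfile
from pathlib import Path

def one_line_private_key(key_path: str) -> str:
    """Read a private-key file and remove newline characters (and any
    carriage returns), like tr -d '\\n' < key, so the result can be
    pasted into the Private key connection property."""
    return Path(key_path).read_text().replace("\r", "").replace("\n", "")

# Example with a temporary file standing in for the bundle's key file.
with tempfile.NamedTemporaryFile("w", suffix=".key", delete=False) as f:
    f.write("-----BEGIN PRIVATE KEY-----\nabc\ndef\n-----END PRIVATE KEY-----\n")
print(one_line_private_key(f.name))
# -----BEGIN PRIVATE KEY-----abcdef-----END PRIVATE KEY-----
os.unlink(f.name)
```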
# IBM Cloud Databases for DataStax connection #

To access your data in IBM Cloud Databases for DataStax, create a connection asset for it.

Important: The IBM Cloud Databases for DataStax connector is deprecated and will be discontinued in a future release.

IBM Cloud Databases for DataStax is a scale-out NoSQL database in IBM Cloud that is built on Apache Cassandra. It is designed to power real-time applications with high availability and massive scalability.

## Create a connection to IBM Cloud Databases for DataStax ##

To create the connection asset, you need these connection details:

* Host name
* Port number
* Username and password
* Keyspace
* SSL certificate (if required by the database server)

The values for the **SSL certificate**, **Key certificate**, and **Private key** fields are in the `secure-connect-bundle.zip` file, which you can download from the **Overview** tab of the Databases for DataStax instance. After you download and unzip the bundle, use these files:

* SSL certificate property: contents of `ca.crt`
* Key certificate property: contents of `cert`
* Private key property: contents of `key`

Before you paste the contents of `key` into the **Private key** field, convert it to a single line, for example by running `tr -d '\n' < key` in a console and pasting the command's output.

For **Private connectivity**, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html).

### Choose the method for creating a connection based on where you are in the platform ###

**In a project**: Click **Assets > New asset > Connect to a data source**. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html).

**In a deployment space**: Click **Add to space > Connection**. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).

**In the Platform assets catalog**: Click **New connection**. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html).

### Next step: Add data assets from the connection ###

* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html).

## Where you can use this connection ##

**Projects**

* Data Refinery
* Decision Optimization
* SPSS Modeler
* Synthetic Data Generator

**Catalogs**

* Platform assets catalog

## IBM Cloud Databases for DataStax setup ##

[Getting Started with IBM Cloud Databases for DataStax](https://cloud.ibm.com/docs/databases-for-cassandra)

## Learn more ##

[IBM Cloud Databases for DataStax Documentation](https://cloud.ibm.com/databases/databases-for-cassandra/create)

**Parent topic:** [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
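The bundle preparation described in this topic (unzip `secure-connect-bundle.zip`, copy `ca.crt` and `cert`, and flatten `key` to one line as `tr -d '\n' < key` does) can be sketched in Python. The file names come from the bundle layout above; the destination paths are illustrative:

```python
import zipfile
from pathlib import Path

def extract_bundle(bundle_path, dest_dir):
    # Unzip the secure-connect bundle downloaded from the instance's Overview tab.
    with zipfile.ZipFile(bundle_path) as zf:
        zf.extractall(dest_dir)

def one_line(path):
    # Strip newlines so the file contents fit a single-line form field;
    # this is the Python equivalent of: tr -d '\n' < key
    return Path(path).read_text().replace("\n", "")

# After extracting secure-connect-bundle.zip into ./bundle:
#   SSL certificate field -> Path("bundle/ca.crt").read_text()
#   Key certificate field -> Path("bundle/cert").read_text()
#   Private key field     -> one_line("bundle/key")
```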
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-datavirt-z.html?context=cdpaas&locale=en
# IBM Data Virtualization Manager for z/OS connection #

To access your data in Data Virtualization Manager for z/OS, create a connection asset for it.

Use the Data Virtualization Manager for z/OS connection to access data in your z/OS mainframe environment.

## Supported versions ##

IBM Data Virtualization Manager for z/OS 1.1.0

## Create a connection to Data Virtualization Manager for z/OS ##

To create the connection asset, you need these connection details:

* Hostname or IP address
* Port number
* Username and password
* SSL certificate (if required by the database server)

For **Private connectivity**, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html).

### Choose the method for creating a connection based on where you are in the platform ###

**In a project**: Click **Assets > New asset > Connect to a data source**. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html).

**In a deployment space**: Click **Add to space > Connection**. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).

**In the Platform assets catalog**: Click **New connection**. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html).

### Next step: Add data assets from the connection ###

* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html).

## Where you can use this connection ##

You can use Data Virtualization Manager for z/OS connections in the following workspaces and tools:

**Projects**

* Data Refinery
* SPSS Modeler
* Synthetic Data Generator

**Catalogs**

* Platform assets catalog

**Parent topic:** [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2-bigsql.html?context=cdpaas&locale=en
# IBM Db2 Big SQL connection #

To access your data in IBM Db2 Big SQL, create a connection asset for it.

IBM Db2 Big SQL is a high-performance, massively parallel processing (MPP) SQL engine for Hadoop that makes querying enterprise data from across the organization an easy and secure experience.

## Supported versions ##

Db2 Big SQL Version 4.1 and later

## Create a connection to IBM Db2 Big SQL ##

To create the connection asset, you need these connection details:

* Database name
* Hostname or IP address
* Port number
* Username and password
* SSL certificate (if required by the database server)

For **Private connectivity**, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html).

### Choose the method for creating a connection based on where you are in the platform ###

**In a project**: Click **Assets > New asset > Connect to a data source**. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html).

**In a deployment space**: Click **Add to space > Connection**. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).

**In the Platform assets catalog**: Click **New connection**. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html).

### Next step: Add data assets from the connection ###

* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html).

## Where you can use this connection ##

You can use IBM Db2 Big SQL connections in the following workspaces and tools:

**Projects**

* Data Refinery
* Decision Optimization
* SPSS Modeler
* Synthetic Data Generator

**Catalogs**

* Platform assets catalog

## IBM Db2 Big SQL setup ##

[Installing IBM Db2 Big SQL](https://www.ibm.com/docs/SSCRJT_5.0.3/com.ibm.swg.im.bigsql.doc/doc/hdp_bigsql_versions.html)

### Running SQL statements ###

To ensure that your SQL statements run correctly, refer to the [IBM Db2 Big SQL documentation](https://www.ibm.com/docs/SSCRJT_5.0.4/com.ibm.swg.im.bigsql.doc/doc/bi_sql_access.html) for the correct syntax.

## Learn more ##

[Db2 Big SQL documentation](https://www.ibm.com/docs/en/db2-big-sql/5.0.3)

**Parent topic:** [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2-cloud.html?context=cdpaas&locale=en
# IBM Db2 on Cloud connection #

To access your data in IBM Db2 on Cloud, you must create a connection asset for it.

Db2 on Cloud is an SQL database that is managed by IBM Cloud and is provisioned for you in the cloud.

## Create a connection to Db2 on Cloud ##

To create the connection asset, you need the following connection details:

* Database name
* Hostname or IP address
* Port number
* Username and password

For **Private connectivity**, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html).

### Choose the method for creating a connection based on where you are in the platform ###

**In a project**: Click **Assets > New asset > Connect to a data source**. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html).

**In a deployment space**: Click **Add to space > Connection**. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).

**In the Platform assets catalog**: Click **New connection**. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html).

### Next step: Add data assets from the connection ###

* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html).

## Where you can use this connection ##

You can use Db2 on Cloud connections in the following workspaces and tools:

**Projects**

* Data Refinery
* Decision Optimization
* Notebooks. Click **Read data** on the **Code snippets** pane to get the connection credentials and load the data into a data structure. See [Load data from data source connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html#conns).
* SPSS Modeler
* Synthetic Data Generator

**Catalogs**

* Platform assets catalog

## Db2 on Cloud setup ##

[Getting started with Db2 on Cloud](https://cloud.ibm.com/docs/Db2onCloud?topic=Db2onCloud-getting-started)

### Running SQL statements ###

To ensure that your SQL statements run correctly, refer to the [Structured Query Language (SQL)](https://www.ibm.com/docs/SSFMBX/com.ibm.swg.im.dashdb.sql.ref.doc/doc/c0004100.html) topic in the Db2 on Cloud documentation for the correct syntax.

## Learn more ##

* [Db2 on Cloud documentation](https://cloud.ibm.com/docs/Db2onCloud?topic=Db2onCloud-about)
* [SSL connectivity](https://cloud.ibm.com/docs/Db2onCloud?topic=Db2onCloud-ssl_support)

**Parent topic:** [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
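In a notebook, the **Read data** snippet generator produces driver-specific code; a minimal hand-written sketch of the same idea follows. It only assembles the keyword DSN that the `ibm_db` driver accepts — the host name and port are placeholders, and actually connecting requires a live Db2 on Cloud instance:

```python
def db2_dsn(database, hostname, port, username, password, ssl=True):
    # Semicolon-separated keyword=value DSN understood by the ibm_db driver.
    # SECURITY=SSL selects the TLS endpoint that Db2 on Cloud instances expose.
    parts = [
        f"DATABASE={database}",
        f"HOSTNAME={hostname}",
        f"PORT={port}",
        "PROTOCOL=TCPIP",
        f"UID={username}",
        f"PWD={password}",
    ]
    if ssl:
        parts.append("SECURITY=SSL")
    return ";".join(parts) + ";"

# With a live instance, the DSN would be used roughly like this:
#   import ibm_db, ibm_db_dbi, pandas as pd
#   conn = ibm_db_dbi.Connection(ibm_db.connect(db2_dsn(...), "", ""))
#   df = pd.read_sql("SELECT * FROM MYSCHEMA.MYTABLE", conn)
```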
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2-wh.html?context=cdpaas&locale=en
# IBM Db2 Warehouse connection #

To access your data in IBM Db2 Warehouse, create a connection asset for it.

IBM Db2 Warehouse is an analytics data warehouse that gives you a high level of control over your data and applications. You can use the IBM Db2 Warehouse connection to connect to a database in these products:

* IBM Db2 Warehouse in IBM Cloud
* IBM Db2 Warehouse on-prem

## Create a connection to Db2 Warehouse ##

To create the connection asset, you need these connection details:

* **Database name**
* **Hostname or IP address** of the database server
* **Port number**
* **API key** or **Username** and **password**
* **Application name** (optional): The name of the application that is currently using the connection. For information, see [Client info properties support by the IBM Data Server Driver for JDBC and SQLJ](https://www.ibm.com/docs/SSEPGG_11.5.0/Javadocs/src/tpc/imjcc_r0052001.html).
* **Client accounting information** (optional): The value of the accounting string from the client information that is specified for the connection. For information, see [Client info properties support by the IBM Data Server Driver for JDBC and SQLJ](https://www.ibm.com/docs/SSEPGG_11.5.0/Javadocs/src/tpc/imjcc_r0052001.html).
* **Client hostname** (optional): The hostname of the machine on which the application that is using the connection is running. For information, see [Client info properties support by the IBM Data Server Driver for JDBC and SQLJ](https://www.ibm.com/docs/SSEPGG_11.5.0/Javadocs/src/tpc/imjcc_r0052001.html).
* **Client user** (optional): The name of the user on whose behalf the application that is using the connection is running. For information, see [Client info properties support by the IBM Data Server Driver for JDBC and SQLJ](https://www.ibm.com/docs/SSEPGG_11.5.0/Javadocs/src/tpc/imjcc_r0052001.html).
* **SSL certificate** (if required by the database server)

### Credentials ###

For **Credentials**, you can enter either an API key or a username and password.

#### Authenticating with an API key ####

You can use an API key to authenticate to Db2 Warehouse in IBM Cloud.

**Db2 Warehouse in IBM Cloud**

First, add the user ID as an IAM user or as a service ID. For instructions, see the **Console user experience** section of the [Identity and access management (IAM) on IBM Cloud](https://cloud.ibm.com/docs/Db2whc?topic=Db2whc-iam#console-ux) topic.

If users want to authenticate to Db2 Warehouse with an IAM API key, the administrator of the Db2 Warehouse instance can add the IAM users by using the User management console, and then the users can each create an API key for themselves by using the IAM access management console.

For **Private connectivity**, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html).

### Choose the method for creating a connection based on where you are in the platform ###

**In a project**: Click **Assets > New asset > Connect to a data source**. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html).

**In a deployment space**: Click **Add to space > Connection**. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).

**In the Platform assets catalog**: Click **New connection**. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html).

### Next step: Add data assets from the connection ###

* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html).

## Where you can use this connection ##

You can use Db2 Warehouse connections in the following workspaces and tools:

**Projects**

* Data Refinery
* Decision Optimization
* Notebooks. Click **Read data** on the **Code snippets** pane to get the connection credentials and load the data into a data structure. See [Load data from data source connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html#conns).
* SPSS Modeler
* Synthetic Data Generator

**Catalogs**

* Platform assets catalog

## Db2 Warehouse setup ##

* IBM Db2 Warehouse on Cloud: [Getting started with Db2 Warehouse on Cloud](https://cloud.ibm.com/docs/Db2whc?topic=Db2whc-getting-started)
* IBM Db2 Warehouse on-prem: [Setting up Db2 Warehouse](https://www.ibm.com/docs/SSCJDQ/com.ibm.swg.im.dashdb.doc/admin/local_setup.html)

### Running SQL statements ###

To ensure that your SQL statements run correctly, refer to the product documentation in [Learn more](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2-wh.html?context=cdpaas&locale=en#lm) for the correct syntax.

## Known issue ##

On Data Refinery, system-level schemas aren't filtered out.

## Learn more ##

* IBM Db2 Warehouse on Cloud [product documentation](https://cloud.ibm.com/docs/Db2whc) (IBM Cloud)
* IBM Db2 Warehouse on-prem [product documentation](https://www.ibm.com/docs/SSCJDQ/com.ibm.swg.im.dashdb.doc/local_overview.html)

**Parent topic:** [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
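The Credentials choice described in this topic (an API key or a username and password) can be reflected in a small DSN builder. This is a sketch: the `DATABASE`/`HOSTNAME`/`PORT`/`UID`/`PWD` keywords are the usual Db2 driver keywords, but the IAM keywords shown for API-key authentication are an assumption — verify them against your driver's documentation before use:

```python
def db2wh_dsn(database, hostname, port, api_key=None, username=None, password=None):
    # Common part of the keyword DSN; Db2 Warehouse in IBM Cloud uses TLS.
    base = (f"DATABASE={database};HOSTNAME={hostname};PORT={port};"
            "PROTOCOL=TCPIP;SECURITY=SSL;")
    if api_key is not None:
        # IAM API-key keywords below are an assumption; check the driver docs.
        return base + f"AUTHENTICATION=GSSPLUGIN;PLUGINNAME=IBMIAMauth;APIKEY={api_key};"
    if username is None or password is None:
        raise ValueError("provide either api_key or both username and password")
    return base + f"UID={username};PWD={password};"
```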
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2.html?context=cdpaas&locale=en
# IBM Db2 connection #

To access your data in an IBM Db2 database, you must create a connection asset for it.

IBM Db2 is a database that contains relational data.

## Supported versions ##

IBM Db2 10.1 and later

## Create a connection to Db2 ##

To create the connection asset, you need the following connection details:

* **Database**
* **Hostname or IP address**
* **Username and password**. See [Credentials](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2.html?context=cdpaas&locale=en#creds).
* **Port**
* **Application name** (optional): The name of the application that is currently using the connection. For information, see [Client info properties support by the IBM Data Server Driver for JDBC and SQLJ](https://www.ibm.com/docs/SSEPGG_11.5.0/Javadocs/src/tpc/imjcc_r0052001.html).
* **Client accounting information** (optional): The value of the accounting string from the client information that is specified for the connection. For information, see [Client info properties support by the IBM Data Server Driver for JDBC and SQLJ](https://www.ibm.com/docs/SSEPGG_11.5.0/Javadocs/src/tpc/imjcc_r0052001.html).
* **Client hostname** (optional): The hostname of the machine on which the application that is using the connection is running. For information, see [Client info properties support by the IBM Data Server Driver for JDBC and SQLJ](https://www.ibm.com/docs/SSEPGG_11.5.0/Javadocs/src/tpc/imjcc_r0052001.html).
* **Client user** (optional): The name of the user on whose behalf the application that is using the connection is running. For information, see [Client info properties support by the IBM Data Server Driver for JDBC and SQLJ](https://www.ibm.com/docs/SSEPGG_11.5.0/Javadocs/src/tpc/imjcc_r0052001.html).
* **SSL certificate** (if required by your database server)

For **Private connectivity**, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html).

### Choose the method for creating a connection based on where you are in the platform ###

**In a project**: Click **Assets > New asset > Connect to a data source**. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html).

**In a deployment space**: Click **Add to space > Connection**. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).

**In the Platform assets catalog**: Click **New connection**. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html).

### Next step: Add data assets from the connection ###

* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html).

## Where you can use this connection ##

You can use Db2 connections in the following workspaces and tools:

**Projects**

* Data Refinery
* Decision Optimization
* Notebooks. Click **Read data** on the **Code snippets** pane to get the connection credentials and load the data into a data structure. See [Load data from data source connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html#conns).
* SPSS Modeler
* Synthetic Data Generator

**Catalogs**

* Platform assets catalog

### Running SQL statements ###

To ensure that your SQL statements run correctly, refer to the [Structured Query Language (SQL)](https://www.ibm.com/docs/SSEPGG_11.5.0/com.ibm.db2.luw.sql.ref.doc/doc/c0004100.html) topic in the IBM Db2 product documentation for the correct syntax.

## Learn more ##

[IBM Db2 product documentation](https://www.ibm.com/docs/db2)

**Parent topic:** [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
2A95F85CD3F7250FCBA52D8FDD82D1502FC617D6
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2i.html?context=cdpaas&locale=en
IBM Db2 for i connection
IBM Db2 for i connection To access your data in IBM Db2 for i, create a connection asset for it. Db2 for i is the relational database manager that is fully integrated on your system. Because it is integrated on the system, Db2 for i is easy to use and manage. Supported versions IBM DB2 for i 7.2+ Prerequisites Obtain the certificate file A certificate file on the Db2 for i server is required to use this connection. To obtain an IBM Db2 Connect Unlimited Edition license certificate file, go to [IBM Db2 Connect: Pricing](https://www.ibm.com/products/db2-connect/pricing) and [Installing the IBM Data Server Driver for JDBC and SQLJ](https://www.ibm.com/docs/SSEPGG_11.5.0/com.ibm.db2.luw.apdv.gs.doc/doc/t0010264.html). For installation instructions, see [Activating the license certificate file for Db2 Connect Unlimited Edition](https://www.ibm.com/docs/SSEPGG_11.5.0/com.ibm.db2.luw.licensing.doc/doc/t0057375.html). Run the bind command Run the following commands from the Db2 client that is configured to access the Db2 for i server. You need to run the bind command only once per remote database per Db2 client version. db2 connect to DBALIAS user USERID using PASSWORD db2 bind path@ddcs400.lst blocking all sqlerror continue messages ddcs400.msg grant public db2 connect reset For information about bind commands, see [Binding applications and utilities](https://www.ibm.com/docs/SSEPGG_11.5.0/com.ibm.db2.luw.qb.dbconn.doc/doc/c0005595.html). Run catalog commands Run the following catalog commands from the Db2 client that is configured to access the Db2 for i server: 1. db2 catalog tcpip node node_name remote hostname_or_address server port_no_or_service_name Example: db2 catalog tcpip node db2i123 remote 192.0.2.0 server 446 2. db2 catalog dcs database local_name as real_db_name Example: db2 catalog dcs database db2i123 as db2i123 3.
db2 catalog database local_name as alias at node node_name authentication server Example: db2 catalog database db2i123 as db2i123 at node db2i123 authentication server For information about catalog commands, see [CATALOG TCPIP NODE](https://www.ibm.com/docs/SSEPGG_11.5.0/com.ibm.db2.luw.admin.cmd.doc/doc/r0001944.html) and [CATALOG DCS DATABASE](https://www.ibm.com/docs/SSEPGG_11.5.0/com.ibm.db2.luw.admin.cmd.doc/doc/r0001937.html). Create a connection to Db2 for i To create the connection asset, you need these connection details: * Hostname or IP address * Port number * Location: The unique name of the Db2 location you want to access * Username and password * SSL certificate (if required by the database server) For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). Choose the method for creating a connection based on where you are in the platform In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). Next step: Add data assets from the connection * See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). 
Where you can use this connection You can use Db2 for i connections in the following workspaces and tools: Projects * SPSS Modeler * Synthetic Data Generator Catalogs * Platform assets catalog Restriction For SPSS Modeler, you can use this connection only to import data. You cannot export data to this connection or to a Db2 for i connected data asset. Running SQL statements To ensure that your SQL statements run correctly, refer to the [Db2 for i SQL reference](https://www.ibm.com/docs/ssw_ibm_i_72/db2/rbafzintro.htm) for the correct syntax. Learn more [IBM Db2 for i documentation](https://www.ibm.com/docs/i) Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
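The three client-side catalog steps above follow a fixed template, so they can be generated from the node, host, port, and database names. A minimal sketch that, as in the doc's db2i123 example, uses the same name for the DCS database, the local alias, and the database:

```python
def db2i_catalog_commands(node, host, port, db_name):
    """Render the one-time Db2 client catalog commands for a Db2 for i
    server, using db_name for both the local name and the alias."""
    return [
        f"db2 catalog tcpip node {node} remote {host} server {port}",
        f"db2 catalog dcs database {db_name} as {db_name}",
        f"db2 catalog database {db_name} as {db_name} at node {node} authentication server",
    ]

# Values taken from the documented example.
cmds = db2i_catalog_commands("db2i123", "192.0.2.0", 446, "db2i123")
```

In general the local name, real database name, and alias can differ; this sketch collapses them for brevity.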
# IBM Db2 for i connection # To access your data in IBM Db2 for i, create a connection asset for it\. Db2 for i is the relational database manager that is fully integrated on your system\. Because it is integrated on the system, Db2 for i is easy to use and manage\. ## Supported versions ## IBM DB2 for i 7\.2\+ ## Prerequisites ## ### Obtain the certificate file ### A certificate file on the Db2 for i server is required to use this connection\. To obtain an IBM Db2 Connect Unlimited Edition license certificate file, go to [IBM Db2 Connect: Pricing](https://www.ibm.com/products/db2-connect/pricing) and [Installing the IBM Data Server Driver for JDBC and SQLJ](https://www.ibm.com/docs/SSEPGG_11.5.0/com.ibm.db2.luw.apdv.gs.doc/doc/t0010264.html)\. For installation instructions, see [Activating the license certificate file for Db2 Connect Unlimited Edition](https://www.ibm.com/docs/SSEPGG_11.5.0/com.ibm.db2.luw.licensing.doc/doc/t0057375.html)\. ### Run the bind command ### Run the following commands from the Db2 client that is configured to access the Db2 for i server\. You need to run the bind command only once per remote database per Db2 client version\. db2 connect to DBALIAS user USERID using PASSWORD db2 bind path@ddcs400.lst blocking all sqlerror continue messages ddcs400.msg grant public db2 connect reset For information about bind commands, see [Binding applications and utilities](https://www.ibm.com/docs/SSEPGG_11.5.0/com.ibm.db2.luw.qb.dbconn.doc/doc/c0005595.html)\. ### Run catalog commands ### Run the following catalog commands from the Db2 client that is configured to access the Db2 for i server: <!-- <ol> --> 1. db2 catalog tcpip node node_name remote hostname_or_address server port_no_or_service_name Example: `db2 catalog tcpip node db2i123 remote 192.0.2.0 server 446` 2. db2 catalog dcs database local_name as real_db_name Example: `db2 catalog dcs database db2i123 as db2i123` 3.
db2 catalog database local_name as alias at node node_name authentication server Example: `db2 catalog database db2i123 as db2i123 at node db2i123 authentication server` <!-- </ol> --> For information about catalog commands, see [CATALOG TCPIP NODE](https://www.ibm.com/docs/SSEPGG_11.5.0/com.ibm.db2.luw.admin.cmd.doc/doc/r0001944.html) and [CATALOG DCS DATABASE](https://www.ibm.com/docs/SSEPGG_11.5.0/com.ibm.db2.luw.admin.cmd.doc/doc/r0001937.html)\. ## Create a connection to Db2 for i ## To create the connection asset, you need these connection details: <!-- <ul> --> * Hostname or IP address * Port number * Location: The unique name of the Db2 location you want to access * Username and password * SSL certificate (if required by the database server) <!-- </ul> --> For **Private connectivity**, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html)\. ### Choose the method for creating a connection based on where you are in the platform ### **In a project** : Click **Assets > New asset > Connect to a data source**\. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html)\. **In a deployment space** : Click **Add to space > Connection**\. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html)\. **In the Platform assets catalog** : Click **New connection**\. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html)\. ### Next step: Add data assets from the connection ### <!-- <ul> --> * See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html)\. 
<!-- </ul> --> ## Where you can use this connection ## You can use Db2 for i connections in the following workspaces and tools: **Projects** <!-- <ul> --> * SPSS Modeler * Synthetic Data Generator <!-- </ul> --> **Catalogs** <!-- <ul> --> * Platform assets catalog <!-- </ul> --> ## Restriction ## For SPSS Modeler, you can use this connection only to import data\. You cannot export data to this connection or to a Db2 for i connected data asset\. ### Running SQL statements ### To ensure that your SQL statements run correctly, refer to the [Db2 for i SQL reference](https://www.ibm.com/docs/ssw_ibm_i_72/db2/rbafzintro.htm) for the correct syntax\. ## Learn more ## [IBM Db2 for i documentation](https://www.ibm.com/docs/i) **Parent topic:**[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) <!-- </article "role="article" "> -->
BE7F45C3E17998A50B8414D623007ED668B37C04
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2zos.html?context=cdpaas&locale=en
IBM Db2 for z/OS connection
IBM Db2 for z/OS connection To access your data in IBM Db2 for z/OS, create a connection asset for it. Db2 for z/OS is an enterprise data server for IBM Z. It manages core business data across an enterprise and supports key business applications. Supported versions IBM Db2 for z/OS version 11 and later Prerequisites Obtain the certificate file A certificate file on the Db2 for z/OS server is required to use this connection. These steps must be done on the Db2 for z/OS server: Obtain an IBM Db2 Connect Unlimited Edition license certificate file from [IBM Db2 Connect: Pricing](https://www.ibm.com/products/db2-connect/pricing) and [Installing the IBM Data Server Driver for JDBC and SQLJ](https://www.ibm.com/docs/en/db2/11.5?topic=apis-installing-data-server-driver-jdbc-sqlj). For installation instructions, see [Activating the license certificate file for Db2 Connect Unlimited Edition](https://www.ibm.com/docs/en/db2/11.5?topic=li-activating-license-certificate-file-db2-connect-unlimited-edition). Run the bind command Run the following commands from the Db2 client that is configured to access the Db2 for z/OS server. You need to run the bind command only once per remote database per Db2 client version. db2 connect to DBALIAS user USERID using PASSWORD db2 bind path@ddcsmvs.lst blocking all sqlerror continue messages ddcsmvs.msg grant public db2 connect reset For information about bind commands, see [Binding applications and utilities (Db2 Connect Server)](https://www.ibm.com/docs/SSEPGG_11.5.0/com.ibm.db2.luw.qb.dbconn.doc/doc/c0005595.html?pos=2). Run catalog commands Run the following catalog commands from the Db2 client that is configured to access the Db2 for z/OS server: 1. db2 catalog tcpip node node_name remote hostname_or_address server port_no_or_service_name Example: db2 catalog tcpip node db2z123 remote 192.0.2.0 server 446 2. db2 catalog dcs database local_name as real_db_name Example: db2 catalog dcs database db2z123 as db2z123 3.
db2 catalog database local_name as alias at node node_name authentication server Example: db2 catalog database db2z123 as db2z123 at node db2z123 authentication server For information about catalog commands, see [CATALOG TCPIP NODE](https://www.ibm.com/docs/SSEPGG_11.5.0/com.ibm.db2.luw.admin.cmd.doc/doc/r0001944.html) and [CATALOG DCS DATABASE](https://www.ibm.com/docs/SSEPGG_11.5.0/com.ibm.db2.luw.admin.cmd.doc/doc/r0001937.html). Create a connection to Db2 for z/OS To create the connection asset, you need these connection details: * Hostname or IP address * Port number * Collection ID: The ID of the collections of packages to use * Location: The unique name of the Db2 location you want to access * Username and password * Application name (optional): The name of the application that is currently using the connection. For information, see [Client info properties support by the IBM Data Server Driver for JDBC and SQLJ](https://www.ibm.com/docs/SSEPGG_11.5.0/Javadocs/src/tpc/imjcc_r0052001.html). * Client accounting information (optional): The value of the accounting string from the client information that is specified for the connection. For information, see [Client info properties support by the IBM Data Server Driver for JDBC and SQLJ](https://www.ibm.com/docs/SSEPGG_11.5.0/Javadocs/src/tpc/imjcc_r0052001.html). * Client hostname (optional): The hostname of the machine on which the application that is using the connection is running. For information, see [Client info properties support by the IBM Data Server Driver for JDBC and SQLJ](https://www.ibm.com/docs/SSEPGG_11.5.0/Javadocs/src/tpc/imjcc_r0052001.html). * Client user (optional): The name of the user on whose behalf the application that is using the connection is running. For information, see [Client info properties support by the IBM Data Server Driver for JDBC and SQLJ](https://www.ibm.com/docs/SSEPGG_11.5.0/Javadocs/src/tpc/imjcc_r0052001.html). 
* SSL certificate (if required by the database server) For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). Choose the method for creating a connection based on where you are in the platform In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). Next step: Add data assets from the connection * See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). Where you can use this connection You can use Db2 for z/OS connections in the following workspaces and tools: Projects * Notebooks. Click Read data on the Code snippets pane to get the connection credentials and load the data into a data structure. See [Load data from data source connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html#conns). * Decision Optimization * SPSS Modeler * Synthetic Data Generator Catalogs * Platform assets catalog Restriction For SPSS Modeler, you can use this connection only to import data. You cannot export data to this connection or to a Db2 for z/OS connected data asset. Running SQL statements To ensure that your SQL statements run correctly, refer to the [Db2 for z/OS and SQL concepts](https://www.ibm.com/docs/db2-for-zos/12?topic=zos-db2-sql-concepts) for the correct syntax.
Learn more [IBM Db2 for z/OS documentation](https://www.ibm.com/docs/db2-for-zos) Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
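In a notebook, the generated code snippet typically connects through a keyword connection string in which the Location value above plays the role of the database name. The following is a hedged sketch of assembling such a string (the keyword names follow the common ibm_db driver convention; the host, location, and credentials are hypothetical):

```python
def db2z_dsn(host, port, location, user, password, use_ssl=False):
    """Assemble a keyword connection string; for Db2 for z/OS the
    DATABASE keyword carries the Db2 Location name."""
    parts = [
        f"DATABASE={location}",
        f"HOSTNAME={host}",
        f"PORT={port}",
        "PROTOCOL=TCPIP",
        f"UID={user}",
        f"PWD={password}",
    ]
    if use_ssl:
        # The server's SSL certificate must also be supplied to the driver.
        parts.append("SECURITY=SSL")
    return ";".join(parts) + ";"

dsn = db2z_dsn("zos.example.com", 446, "STLEC1", "jdoe", "secret", use_ssl=True)
```

The sketch only formats the string; connecting additionally requires the bind and catalog prerequisites described above.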
# IBM Db2 for z/OS connection # To access your data in IBM Db2 for z/OS, create a connection asset for it\. Db2 for z/OS is an enterprise data server for IBM Z\. It manages core business data across an enterprise and supports key business applications\. ## Supported versions ## IBM Db2 for z/OS version 11 and later ## Prerequisites ## ### Obtain the certificate file ### A certificate file on the Db2 for z/OS server is required to use this connection\. **These steps must be done on the Db2 for z/OS server**: Obtain an IBM Db2 Connect Unlimited Edition license certificate file from [IBM Db2 Connect: Pricing](https://www.ibm.com/products/db2-connect/pricing) and [Installing the IBM Data Server Driver for JDBC and SQLJ](https://www.ibm.com/docs/en/db2/11.5?topic=apis-installing-data-server-driver-jdbc-sqlj)\. For installation instructions, see [Activating the license certificate file for Db2 Connect Unlimited Edition](https://www.ibm.com/docs/en/db2/11.5?topic=li-activating-license-certificate-file-db2-connect-unlimited-edition)\. ### Run the bind command ### Run the following commands from the Db2 client that is configured to access the Db2 for z/OS server\. You need to run the bind command only once per remote database per Db2 client version\. db2 connect to DBALIAS user USERID using PASSWORD db2 bind path@ddcsmvs.lst blocking all sqlerror continue messages ddcsmvs.msg grant public db2 connect reset For information about bind commands, see [Binding applications and utilities (Db2 Connect Server)](https://www.ibm.com/docs/SSEPGG_11.5.0/com.ibm.db2.luw.qb.dbconn.doc/doc/c0005595.html?pos=2)\. ### Run catalog commands ### Run the following catalog commands from the Db2 client that is configured to access the Db2 for z/OS server: <!-- <ol> --> 1. db2 catalog tcpip node node_name remote hostname_or_address server port_no_or_service_name Example: `db2 catalog tcpip node db2z123 remote 192.0.2.0 server 446` 2.
db2 catalog dcs database local_name as real_db_name Example: `db2 catalog dcs database db2z123 as db2z123` 3. db2 catalog database local_name as alias at node node_name authentication server Example: `db2 catalog database db2z123 as db2z123 at node db2z123 authentication server` <!-- </ol> --> For information about catalog commands, see [CATALOG TCPIP NODE](https://www.ibm.com/docs/SSEPGG_11.5.0/com.ibm.db2.luw.admin.cmd.doc/doc/r0001944.html) and [CATALOG DCS DATABASE](https://www.ibm.com/docs/SSEPGG_11.5.0/com.ibm.db2.luw.admin.cmd.doc/doc/r0001937.html)\. ## Create a connection to Db2 for z/OS ## To create the connection asset, you need these connection details: <!-- <ul> --> * **Hostname or IP address** * **Port number** * **Collection ID**: The ID of the collections of packages to use * **Location:** The unique name of the Db2 location you want to access * **Username** and **password** * **Application name** (optional): The name of the application that is currently using the connection\. For information, see [Client info properties support by the IBM Data Server Driver for JDBC and SQLJ](https://www.ibm.com/docs/SSEPGG_11.5.0/Javadocs/src/tpc/imjcc_r0052001.html)\. * **Client accounting information** (optional): The value of the accounting string from the client information that is specified for the connection\. For information, see [Client info properties support by the IBM Data Server Driver for JDBC and SQLJ](https://www.ibm.com/docs/SSEPGG_11.5.0/Javadocs/src/tpc/imjcc_r0052001.html)\. * **Client hostname** (optional): The hostname of the machine on which the application that is using the connection is running\. For information, see [Client info properties support by the IBM Data Server Driver for JDBC and SQLJ](https://www.ibm.com/docs/SSEPGG_11.5.0/Javadocs/src/tpc/imjcc_r0052001.html)\. * **Client user** (optional): The name of the user on whose behalf the application that is using the connection is running\. 
For information, see [Client info properties support by the IBM Data Server Driver for JDBC and SQLJ](https://www.ibm.com/docs/SSEPGG_11.5.0/Javadocs/src/tpc/imjcc_r0052001.html)\. * **SSL certificate** (if required by the database server) <!-- </ul> --> For **Private connectivity**, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html)\. ### Choose the method for creating a connection based on where you are in the platform ### **In a project** : Click **Assets > New asset > Connect to a data source**\. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html)\. **In a deployment space** : Click **Add to space > Connection**\. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html)\. **In the Platform assets catalog** : Click **New connection**\. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html)\. ### Next step: Add data assets from the connection ### <!-- <ul> --> * See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html)\. <!-- </ul> --> ## Where you can use this connection ## You can use Db2 for z/OS connections in the following workspaces and tools: **Projects** <!-- <ul> --> * Notebooks\. Click **Read data** on the **Code snippets** pane to get the connection credentials and load the data into a data structure\. See [Load data from data source connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html#conns)\. 
* Decision Optimization * SPSS Modeler * Synthetic Data Generator <!-- </ul> --> **Catalogs** <!-- <ul> --> * Platform assets catalog <!-- </ul> --> ## Restriction ## For SPSS Modeler, you can use this connection only to import data\. You cannot export data to this connection or to a Db2 for z/OS connected data asset\. ### Running SQL statements ### To ensure that your SQL statements run correctly, refer to the [Db2 for z/OS and SQL concepts](https://www.ibm.com/docs/db2-for-zos/12?topic=zos-db2-sql-concepts) for the correct syntax\. ## Learn more ## [IBM Db2 for z/OS documentation](https://www.ibm.com/docs/db2-for-zos) **Parent topic:**[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) <!-- </article "role="article" "> -->
92C74E7245DFE20BF93F1D73039ED4DF95375C6F
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-dbase-postgresql.html?context=cdpaas&locale=en
IBM Cloud Databases for PostgreSQL connection
IBM Cloud Databases for PostgreSQL connection To access your data in IBM Cloud Databases for PostgreSQL, you must create a connection asset for it. IBM Cloud Databases for PostgreSQL is an open source object-relational database that is highly customizable. It’s a feature-rich enterprise database with JSON support. Create a connection to IBM Cloud Databases for PostgreSQL To create the connection asset, you need the following connection details: * Database name * Hostname or IP address of the database * Port number * Username and password * SSL certificate (if required by the database server) For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). Choose the method for creating a connection based on where you are in the platform In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). Next step: Add data assets from the connection * See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). Where you can use this connection You can use IBM Cloud Databases for PostgreSQL connections in the following workspaces and tools: Projects * Notebooks. Click Read data on the Code snippets pane to get the connection credentials and load the data into a data structure. 
See [Load data from data source connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html#conns). * SPSS Modeler * Synthetic Data Generator Catalogs * Platform assets catalog IBM Cloud Databases for PostgreSQL setup [IBM Cloud Databases for PostgreSQL setup](https://cloud.ibm.com/catalog/services/databases-for-postgresql) Restriction For SPSS Modeler, you can use this connection only to import data. You cannot export data to this connection or to an IBM Cloud Databases for PostgreSQL connected data asset. Running SQL statements To ensure that your SQL statements run correctly, refer to the [IBM Cloud Databases for PostgreSQL documentation](https://www.postgresql.org/docs/9.1/ecpg-commands.html) for the correct syntax. Learn more [IBM Cloud Databases for PostgreSQL documentation](https://cloud.ibm.com/catalog/services/databases-for-postgresql) Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
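When you connect from code instead of through a connection asset (for example, from a notebook), the same connection details combine into a standard libpq connection URI. A minimal sketch with hypothetical values; percent-encoding keeps special characters in the credentials safe:

```python
from urllib.parse import quote

def postgres_uri(host, port, dbname, user, password, sslmode=None):
    """Build a postgresql:// connection URI from the connection details."""
    # Percent-encode credentials so characters like @ or / do not
    # break the URI structure.
    auth = f"{quote(user, safe='')}:{quote(password, safe='')}"
    uri = f"postgresql://{auth}@{host}:{port}/{dbname}"
    if sslmode:
        uri += f"?sslmode={sslmode}"
    return uri

uri = postgres_uri("db.example.com", 31234, "ibmclouddb",
                   "admin", "p@ss/word", sslmode="verify-full")
```

With `sslmode=verify-full`, the client also needs the server's SSL certificate (for example, via `sslrootcert`), matching the SSL certificate requirement listed above.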
# IBM Cloud Databases for PostgreSQL connection # To access your data in IBM Cloud Databases for PostgreSQL, you must create a connection asset for it\. IBM Cloud Databases for PostgreSQL is an open source object\-relational database that is highly customizable\. It’s a feature\-rich enterprise database with JSON support\. ## Create a connection to IBM Cloud Databases for PostgreSQL ## To create the connection asset, you need the following connection details: <!-- <ul> --> * Database name * Hostname or IP address of the database * Port number * Username and password * SSL certificate (if required by the database server) <!-- </ul> --> For **Private connectivity**, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html)\. ### Choose the method for creating a connection based on where you are in the platform ### **In a project** : Click **Assets > New asset > Connect to a data source**\. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html)\. **In a deployment space** : Click **Add to space > Connection**\. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html)\. **In the Platform assets catalog** : Click **New connection**\. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html)\. ### Next step: Add data assets from the connection ### <!-- <ul> --> * See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html)\. <!-- </ul> --> ## Where you can use this connection ## You can use IBM Cloud Databases for PostgreSQL connections in the following workspaces and tools: **Projects** <!-- <ul> --> * Notebooks\. 
Click **Read data** on the **Code snippets** pane to get the connection credentials and load the data into a data structure\. See [Load data from data source connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html#conns)\. * SPSS Modeler * Synthetic Data Generator <!-- </ul> --> **Catalogs** <!-- <ul> --> * Platform assets catalog <!-- </ul> --> ## IBM Cloud Databases for PostgreSQL setup ## [IBM Cloud Databases for PostgreSQL setup](https://cloud.ibm.com/catalog/services/databases-for-postgresql) ## Restriction ## For SPSS Modeler, you can use this connection only to import data\. You cannot export data to this connection or to an IBM Cloud Databases for PostgreSQL connected data asset\. ### Running SQL statements ### To ensure that your SQL statements run correctly, refer to the [IBM Cloud Databases for PostgreSQL documentation](https://www.postgresql.org/docs/9.1/ecpg-commands.html) for the correct syntax\. ## Learn more ## [IBM Cloud Databases for PostgreSQL documentation](https://cloud.ibm.com/catalog/services/databases-for-postgresql) **Parent topic:**[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) <!-- </article "role="article" "> -->
7551CB2BCD77B26AE0E05154A3E8CC51C070D707
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-derby.html?context=cdpaas&locale=en
Apache Derby connection
Apache Derby connection To access your data in Apache Derby, create a connection asset for it. Apache Derby is a relational database management system developed by the Apache Software Foundation. Create a connection to Apache Derby To create the connection asset, you need these connection details: * Database name * Hostname or IP address * Port number * Username and password * SSL certificate (if required by the database server) For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). Choose the method for creating a connection based on where you are in the platform In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). Next step: Add data assets from the connection * See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). 
Where you can use this connection You can use Apache Derby connections in the following workspaces and tools: Projects * Data Refinery * SPSS Modeler * Synthetic Data Generator Catalogs * Platform assets catalog Apache Derby setup [Apache Derby installation](https://db.apache.org/derby/papers/DerbyTut/install_software.html#derby_download) Running SQL statements To ensure that your SQL statements run correctly, refer to the [Apache Derby documentation](https://db.apache.org/derby/docs/10.8/ref/index.html) for the correct syntax. Learn more [Apache Derby documentation](https://db.apache.org/derby/papers/DerbyTut/install_software.html#derby) Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
# Apache Derby connection # To access your data in Apache Derby, create a connection asset for it\. Apache Derby is a relational database management system developed by the Apache Software Foundation\. ## Create a connection to Apache Derby ## To create the connection asset, you need these connection details: <!-- <ul> --> * Database name * Hostname or IP address * Port number * Username and password * SSL certificate (if required by the database server) <!-- </ul> --> For **Private connectivity**, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html)\. ### Choose the method for creating a connection based on where you are in the platform ### **In a project** : Click **Assets > New asset > Connect to a data source**\. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html)\. **In a deployment space** : Click **Add to space > Connection**\. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html)\. **In the Platform assets catalog** : Click **New connection**\. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html)\. ### Next step: Add data assets from the connection ### <!-- <ul> --> * See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html)\. 
<!-- </ul> --> ## Where you can use this connection ## You can use Apache Derby connections in the following workspaces and tools: **Projects** <!-- <ul> --> * Data Refinery * SPSS Modeler * Synthetic Data Generator <!-- </ul> --> **Catalogs** <!-- <ul> --> * Platform assets catalog <!-- </ul> --> ## Apache Derby setup ## [Apache Derby installation](https://db.apache.org/derby/papers/DerbyTut/install_software.html#derby_download) ### Running SQL statements ### To ensure that your SQL statements run correctly, refer to the [Apache Derby documentation](https://db.apache.org/derby/docs/10.8/ref/index.html) for the correct syntax\. ## Learn more ## [Apache Derby documentation](https://db.apache.org/derby/papers/DerbyTut/install_software.html#derby) **Parent topic:**[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) <!-- </article "role="article" "> -->
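The Derby connection details above (hostname, port, database name, credentials, optional SSL) map onto a JDBC URL for the Derby network client. The following sketch assembles one; the attribute names follow the Derby client driver's connection attributes, and the `ssl=peerAuthentication` value is an assumption for the certificate-based case:

```python
def derby_jdbc_url(host, port, database, user=None, password=None, ssl=False):
    """Build a JDBC URL for the Apache Derby network client.

    Connection attributes are appended after the database name,
    separated by semicolons.
    """
    url = f"jdbc:derby://{host}:{port}/{database}"
    if user:
        url += f";user={user}"
    if password:
        url += f";password={password}"
    if ssl:
        # Assumed attribute for servers that require an SSL certificate.
        url += ";ssl=peerAuthentication"
    return url

# Hypothetical host and credentials for illustration.
print(derby_jdbc_url("derby.example.com", 1527, "salesdb", user="dbadmin", password="secret"))
```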
E700C898EC2EFE96C76C2CAC042E063529B23D3B
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-dremio.html?context=cdpaas&locale=en
Dremio connection
Dremio connection To access your data in Dremio, create a connection asset for it. Dremio is an open data lake platform. It supports all the major third-party data sources. Create a connection to Dremio To create the connection asset, you need these connection details: * Username and password * Hostname: You can create a Dremio Cloud instance only in the European Union (EU) or the United States (US). Use sql.dremio.cloud for the US, and use sql.eu.dremio.cloud for the EU. Dremio Software can be hosted anywhere. * Port number: The default port for Dremio Cloud instances is 443 and for Dremio Software it is 31010. * Dremio Cloud Project ID: See [Obtaining the ID of a Project](https://docs.dremio.com/cloud/cloud-entities/projects/#obtaining-the-id-of-a-project). * SSL certificate: * Select Port is SSL-enabled if you provided Dremio Cloud Project ID. * Select Port is SSL-enabled and provide SSL Certificate if you want to connect to Dremio Software with SSL. For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). Choose the method for creating a connection based on where you are in the platform In a project Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). In a deployment space Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). 
Next step: Add data assets from the connection * See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). Where you can use this connection You can use the Dremio connection in the following workspaces and tools: Projects * Data Refinery Catalogs * Platform assets catalog Dremio setup Dremio can be set up in various deployments, see [Dremio Cluster Deployment](https://docs.dremio.com/current/get-started/cluster-deployments/). To set up Dremio Cloud, see [Dremio Cloud](https://docs.dremio.com/cloud/). Restrictions You can use this connection only for reading data. You cannot write data or export data with this connection. Running SQL statements To ensure that your SQL statements run correctly, refer to the [Dremio SQL Reference](https://docs.dremio.com/software/sql-reference/) for the correct syntax. Learn more * [Dremio Software documentation](https://docs.dremio.com/current/) * [Dremio Cloud documentation](https://docs.dremio.com/cloud/) Parent topic: [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
# Dremio connection # To access your data in Dremio, create a connection asset for it\. Dremio is an open data lake platform\. It supports all the major third\-party data sources\. ## Create a connection to Dremio ## To create the connection asset, you need these connection details: <!-- <ul> --> * Username and password * Hostname: You can create a Dremio Cloud instance only in the European Union (EU) or the United States (US)\. Use `sql.dremio.cloud` for the US, and use `sql.eu.dremio.cloud` for the EU\. Dremio Software can be hosted anywhere\. * Port number: The default port for Dremio Cloud instances is `443` and for Dremio Software it is `31010`\. * Dremio Cloud Project ID: See [Obtaining the ID of a Project](https://docs.dremio.com/cloud/cloud-entities/projects/#obtaining-the-id-of-a-project)\. * SSL certificate: <!-- <ul> --> * Select `Port is SSL-enabled` if you provided `Dremio Cloud Project ID`. * Select `Port is SSL-enabled` and provide `SSL Certificate` if you want to connect to Dremio Software with SSL. <!-- </ul> --> <!-- </ul> --> For **Private connectivity**, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html)\. ### Choose the method for creating a connection based on where you are in the platform ### **In a project** Click **Assets > New asset > Connect to a data source**\. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html)\. **In a deployment space** Click **Add to space > Connection**\. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html)\. **In the Platform assets catalog** : Click **New connection**\. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html)\. 
### Next step: Add data assets from the connection ### <!-- <ul> --> * See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html)\. <!-- </ul> --> ## Where you can use this connection ## You can use the Dremio connection in the following workspaces and tools: **Projects** <!-- <ul> --> * Data Refinery <!-- </ul> --> **Catalogs** <!-- <ul> --> * Platform assets catalog <!-- </ul> --> ## Dremio setup ## Dremio can be set up in various deployments, see [Dremio Cluster Deployment](https://docs.dremio.com/current/get-started/cluster-deployments/)\. To set up Dremio Cloud, see [Dremio Cloud](https://docs.dremio.com/cloud/)\. ## Restrictions ## You can use this connection only for reading data\. You cannot write data or export data with this connection\. ### Running SQL statements ### To ensure that your SQL statements run correctly, refer to the [Dremio SQL Reference](https://docs.dremio.com/software/sql-reference/) for the correct syntax\. ## Learn more ## <!-- <ul> --> * [Dremio Software documentation](https://docs.dremio.com/current/) * [Dremio Cloud documentation](https://docs.dremio.com/cloud/) <!-- </ul> --> **Parent topic**: [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) <!-- </article "role="article" "> -->
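The hostname and port rules above (US and EU hostnames and port 443 for Dremio Cloud, a self-chosen hostname and default port 31010 for Dremio Software) can be sketched as a small helper. This is an illustration of the selection logic only, not platform code:

```python
def dremio_endpoint(deployment, region=None):
    """Return (hostname, default_port) for a Dremio connection.

    Dremio Cloud is hosted only in the US and EU regions and listens on
    port 443; Dremio Software can be hosted anywhere (the caller supplies
    its own hostname) and defaults to port 31010.
    """
    if deployment == "cloud":
        hosts = {"us": "sql.dremio.cloud", "eu": "sql.eu.dremio.cloud"}
        if region not in hosts:
            raise ValueError("Dremio Cloud is available only in 'us' or 'eu'")
        return hosts[region], 443
    # Dremio Software: hostname is deployment-specific.
    return None, 31010

print(dremio_endpoint("cloud", "eu"))
```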
1C42359BDF665EBCA2BAE4B9B3D5108F8097BF34
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-dropbox.html?context=cdpaas&locale=en
Dropbox connection
Dropbox connection To access your data in Dropbox, create a connection asset for it. Dropbox is a cloud storage service that lets you host and synchronize files on your devices. Create a connection to Dropbox To create the connection asset, you need an Access token. Choose the method for creating a connection based on where you are in the platform In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). Next step: Add data assets from the connection * See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). Where you can use this connection You can use Dropbox connections in the following workspaces and tools: Projects * Data Refinery * SPSS Modeler * Synthetic Data Generator (Synthetic Data Generator service) Catalogs * Platform assets catalog Dropbox setup [Dropbox plans](https://www.dropbox.com/plans) Supported file types The Dropbox connection supports these file types: Avro, CSV, Delimited text, Excel, JSON, ORC, Parquet, SAS, SAV, SHP, and XML. Learn more [Dropbox quick start guides](https://help.dropbox.com/guide) Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
# Dropbox connection # To access your data in Dropbox, create a connection asset for it\. Dropbox is a cloud storage service that lets you host and synchronize files on your devices\. ## Create a connection to Dropbox ## To create the connection asset, you need an Access token\. ### Choose the method for creating a connection based on where you are in the platform ### **In a project** : Click **Assets > New asset > Connect to a data source**\. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html)\. **In a deployment space** : Click **Add to space > Connection**\. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html)\. **In the Platform assets catalog** : Click **New connection**\. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html)\. ### Next step: Add data assets from the connection ### <!-- <ul> --> * See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html)\. <!-- </ul> --> ## Where you can use this connection ## You can use Dropbox connections in the following workspaces and tools: **Projects** <!-- <ul> --> * Data Refinery * SPSS Modeler * Synthetic Data Generator (Synthetic Data Generator service) <!-- </ul> --> **Catalogs** <!-- <ul> --> * Platform assets catalog <!-- </ul> --> ## Dropbox setup ## [Dropbox plans](https://www.dropbox.com/plans) ## Supported file types ## The Dropbox connection supports these file types: Avro, CSV, Delimited text, Excel, JSON, ORC, Parquet, SAS, SAV, SHP, and XML\. ## Learn more ## [Dropbox quick start guides](https://help.dropbox.com/guide) **Parent topic:**[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) <!-- </article "role="article" "> -->
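When picking files from Dropbox to add as data assets, only the file types listed above can be read. As a hedged sketch, the filter below checks a filename's extension against that list; the mapping from type names to concrete extensions (for example, SAS to `.sas7bdat`) is an assumption for illustration:

```python
# Assumed extensions for the supported types: Avro, CSV, delimited text,
# Excel, JSON, ORC, Parquet, SAS, SAV, SHP, and XML.
SUPPORTED_EXTENSIONS = {
    "avro", "csv", "txt", "xls", "xlsx", "json", "orc",
    "parquet", "sas7bdat", "sav", "shp", "xml",
}

def is_supported(filename):
    """Return True if the file's extension is one the connection can read."""
    ext = filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    return ext in SUPPORTED_EXTENSIONS

print([f for f in ["sales.csv", "notes.docx", "model.parquet"] if is_supported(f)])
```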
36D7217BC75C917100AE7DF27DC14FDB919D1609
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-elastic.html?context=cdpaas&locale=en
Elasticsearch connection
Elasticsearch connection To access your data in Elasticsearch, create a connection asset for it. Elasticsearch is a distributed, open source search and analytics engine. Use the Elasticsearch connection to access JSON documents in Elasticsearch indexes. Supported versions Elasticsearch version 6.0 or later Create a connection to Elasticsearch To create the connection asset, you need these connection details: * Username and password, or anonymous access (optional) * SSL certificate (if required by the database server) For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). Choose the method for creating a connection based on where you are in the platform In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). Next step: Add data assets from the connection * See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). Where you can use this connection You can use Elasticsearch connections in the following workspaces and tools: Projects * Data Refinery * SPSS Modeler Catalogs * Platform assets catalog Elasticsearch setup [Set up Elasticsearch](https://www.elastic.co/guide/en/elasticsearch/reference/current/setup.html) Restrictions * For Elasticsearch versions earlier than version 7, read is limited to 10,000 rows. 
* For Data Refinery, the only supported action on the target file is to append all the rows of the Data Refinery flow output to the existing data set. Running SQL statements To ensure that your SQL statements run correctly, refer to the [Elasticsearch Guide for SQL](https://www.elastic.co/guide/en/elasticsearch/reference/current/xpack-sql.html) for the correct syntax. Learn more * [Elasticsearch](https://www.elastic.co/elasticsearch/) * [Elastic Docs](https://www.elastic.co/guide/en/elastic-stack/current/overview.html) Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
# Elasticsearch connection # To access your data in Elasticsearch, create a connection asset for it\. Elasticsearch is a distributed, open source search and analytics engine\. Use the Elasticsearch connection to access JSON documents in Elasticsearch indexes\. ## Supported versions ## Elasticsearch version 6\.0 or later ## Create a connection to Elasticsearch ## To create the connection asset, you need these connection details: <!-- <ul> --> * Username and password, or anonymous access (optional) * SSL certificate (if required by the database server) <!-- </ul> --> For **Private connectivity**, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html)\. ### Choose the method for creating a connection based on where you are in the platform ### **In a project** : Click **Assets > New asset > Connect to a data source**\. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html)\. **In a deployment space** : Click **Add to space > Connection**\. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html)\. **In the Platform assets catalog** : Click **New connection**\. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html)\. ### Next step: Add data assets from the connection ### <!-- <ul> --> * See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html)\. 
<!-- </ul> --> ## Where you can use this connection ## You can use Elasticsearch connections in the following workspaces and tools: **Projects** <!-- <ul> --> * Data Refinery * SPSS Modeler <!-- </ul> --> **Catalogs** <!-- <ul> --> * Platform assets catalog <!-- </ul> --> ## Elasticsearch setup ## [Set up Elasticsearch](https://www.elastic.co/guide/en/elasticsearch/reference/current/setup.html) ## Restrictions ## <!-- <ul> --> * For Elasticsearch versions earlier than version 7, read is limited to 10,000 rows\. * For Data Refinery, the only supported action on the target file is to append all the rows of the Data Refinery flow output to the existing data set\. <!-- </ul> --> ### Running SQL statements ### To ensure that your SQL statements run correctly, refer to the [Elasticsearch Guide for SQL](https://www.elastic.co/guide/en/elasticsearch/reference/current/xpack-sql.html) for the correct syntax\. ## Learn more ## <!-- <ul> --> * [Elasticsearch](https://www.elastic.co/elasticsearch/) * [Elastic Docs](https://www.elastic.co/guide/en/elastic-stack/current/overview.html) <!-- </ul> --> **Parent topic:**[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) <!-- </article "role="article" "> -->
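The pre-version-7 read restriction above can be expressed as a small guard when planning a read. This is an illustrative sketch of the rule, not connector code:

```python
def max_read_rows(es_major_version, requested_rows):
    """Cap a read request at 10,000 rows for Elasticsearch versions
    earlier than 7, per the connection's documented restriction.
    """
    READ_LIMIT_PRE_V7 = 10_000
    if es_major_version < 7:
        return min(requested_rows, READ_LIMIT_PRE_V7)
    return requested_rows

print(max_read_rows(6, 50_000), max_read_rows(7, 50_000))
```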
8A69DD7BEAD4ADB82C87A200309CD63ECBD625D8
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-ftp.html?context=cdpaas&locale=en
FTP (remote file system) connection
FTP (remote file system) connection To access your data with the FTP protocol, create a connection asset for it. FTP is a standard communication protocol that is used to transfer files from a server to a client on a computer network. Create an FTP connection To create the connection asset, you need these connection details: * Connection mode: The connection method configured on the FTP server: * Anonymous * Basic authentication (with username and password) * SFTP Tectia: Transfer data sets that are in Multiple Virtual Storage (MVS) format to or from an IBM z/OS mainframe computer. MVS data sets use a period (.) to separate the qualifiers in the data set names. To write to an MVS data set, select Access MVS Dataset and enter the file transfer advice (FTADV) strings in key-value pairs separated by commas. For information, see the [Tectia documentation](https://info.ssh.com/hubfs/2021%20Support%20manuals%20documents/TectiaServer_zOS_UserManual.pdf). * SSH: File transfer over a secure channel that uses the Secure Shell protocol. Also requires username and password. * SSL: File transfer that uses File Transport Protocol (FTP), which supports secure transmission via SSL (sslTLSv2) protocol. Also requires username and password. * Hostname or IP address * Port number of the FTP server * SSH mode: Private key and Key passphrase * Authentication method: * Username and password * Username, password, private key. If you use an encrypted private key, you will need a key passphrase. * Username and private key. If you use an encrypted private key, you will need a key passphrase. If you use a private key, make sure that the key is an RSA private key that is generated by the ssh-keygen tool. The private key must be in the PEM format. For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). 
This selection is available for the SSH connection mode only. Choose the method for creating a connection based on where you are in the platform In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). Next step: Add data assets from the connection * See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). Where you can use this connection You can use FTP connections in the following workspaces and tools: Projects * Data Refinery * SPSS Modeler * Synthetic Data Generator Catalogs * Platform assets catalog Supported file types The FTP connection supports these file types: Avro, CSV, Delimited text, Excel, JSON, ORC, Parquet, SAS, SAV, SHP, and XML. Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
# FTP (remote file system) connection # To access your data with the FTP protocol, create a connection asset for it\. FTP is a standard communication protocol that is used to transfer files from a server to a client on a computer network\. ## Create an FTP connection ## To create the connection asset, you need these connection details: <!-- <ul> --> * Connection mode: The connection method configured on the FTP server: <!-- <ul> --> * Anonymous * Basic authentication (with username and password) * SFTP Tectia: Transfer data sets that are in Multiple Virtual Storage (MVS) format to or from an IBM z/OS mainframe computer. MVS data sets use a period (`.`) to separate the qualifiers in the data set names. To write to an MVS data set, select **Access MVS Dataset** and enter the file transfer advice (FTADV) strings in key-value pairs separated by commas. For information, see the [Tectia documentation](https://info.ssh.com/hubfs/2021%20Support%20manuals%20documents/TectiaServer_zOS_UserManual.pdf). * SSH: File transfer over a secure channel that uses the Secure Shell protocol. Also requires username and password. * SSL: File transfer that uses File Transport Protocol (FTP), which supports secure transmission via SSL (sslTLSv2) protocol. Also requires username and password. <!-- </ul> --> * Hostname or IP address * Port number of the FTP server * SSH mode: Private key and Key passphrase <!-- </ul> --> <!-- <ul> --> * Authentication method: <!-- <ul> --> * Username and password * Username, password, private key. If you use an encrypted private key, you will need a key passphrase. * Username and private key. If you use an encrypted private key, you will need a key passphrase. <!-- </ul> --> <!-- </ul> --> If you use a private key, make sure that the key is an RSA private key that is generated by the **ssh\-keygen** tool\. The private key must be in the PEM format\. 
For **Private connectivity**, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html)\. This selection is available for the SSH connection mode only\. ### Choose the method for creating a connection based on where you are in the platform ### **In a project** : Click **Assets > New asset > Connect to a data source**\. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html)\. **In a deployment space** : Click **Add to space > Connection**\. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html)\. **In the Platform assets catalog** : Click **New connection**\. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html)\. ### Next step: Add data assets from the connection ### <!-- <ul> --> * See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html)\. <!-- </ul> --> ## Where you can use this connection ## You can use FTP connections in the following workspaces and tools: **Projects** <!-- <ul> --> * Data Refinery * SPSS Modeler * Synthetic Data Generator <!-- </ul> --> **Catalogs** <!-- <ul> --> * Platform assets catalog <!-- </ul> --> ## Supported file types ## The FTP connection supports these file types: Avro, CSV, Delimited text, Excel, JSON, ORC, Parquet, SAS, SAV, SHP, and XML\. **Parent topic:**[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) <!-- </article "role="article" "> -->
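For the SFTP Tectia mode above, FTADV strings are entered as key-value pairs separated by commas. As a hedged sketch of that format, the helper below joins a mapping into the expected string; the specific keys are site-dependent, and the ones shown (record length and format attributes) are hypothetical examples:

```python
def ftadv_string(pairs):
    """Join FTADV key-value pairs into the comma-separated form that the
    MVS data set option expects (KEY=VALUE,KEY=VALUE,...).
    """
    return ",".join(f"{key}={value}" for key, value in pairs.items())

# Hypothetical FTADV keys for illustration only.
print(ftadv_string({"LRECL": "80", "RECFM": "FB"}))
```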
631559E8401C52C3AAC6D64E1F9DA0F765FC4846
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-generics3.html?context=cdpaas&locale=en
Generic S3 connection
Generic S3 connection To access your data from a storage service that is compatible with the Amazon S3 API, create a connection asset for it. Create a Generic S3 connection To create the connection asset, you need these connection details: * Endpoint URL: The endpoint URL to access S3 * Bucket (optional): The name of the bucket that contains the files * Region (optional): S3 region. Specify a region that matches the regional endpoint. * Access key: The access key (username) that authorizes access to S3 * Secret key: The password associated with the Access key ID that authorizes access to S3 * The SSL certificate of the trusted host. The certificate is required when the host certificate is not signed by a known certificate authority. * Disable chunked encoding: Select if the storage does not support chunked encoding. * Enable global bucket access: Consult the documentation for your S3 data source for whether to select this property. * Enable path style access: Consult the documentation for your S3 data source for whether to select this property. Choose the method for creating a connection based on where you are in the platform In a project Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). In a deployment space Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). Next step: Add data assets from the connection * See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). 
Where you can use this connection You can use the Generic S3 connection in the following workspaces and tools: Projects * Data Refinery Catalogs * Platform assets catalog Generic S3 connection setup For setup information, consult the documentation of the S3-compatible data source that you are connecting to. Supported file types The Generic S3 connection supports these file types: Avro, CSV, delimited text, Excel, JSON, ORC, Parquet, SAS, SAV, SHP, and XML. Related connection: [Amazon S3 connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-amazon-s3.html) Parent topic: [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
# Generic S3 connection # To access your data from a storage service that is compatible with the Amazon S3 API, create a connection asset for it\. ## Create a Generic S3 connection ## To create the connection asset, you need these connection details: <!-- <ul> --> * **Endpoint URL**: The endpoint URL to access S3 * **Bucket** (optional): The name of the bucket that contains the files * **Region** (optional): S3 region\. Specify a region that matches the regional endpoint\. * **Access key**: The access key (username) that authorizes access to S3 * **Secret key**: The password associated with the Access key ID that authorizes access to S3 * The SSL certificate of the trusted host\. The certificate is required when the host certificate is not signed by a known certificate authority\. * **Disable chunked encoding**: Select if the storage does not support chunked encoding\. * **Enable global bucket access**: Consult the documentation for your S3 data source for whether to select this property\. * **Enable path style access**: Consult the documentation for your S3 data source for whether to select this property\. <!-- </ul> --> ### Choose the method for creating a connection based on where you are in the platform ### **In a project** Click **Assets > New asset > Connect to a data source**\. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html)\. **In a deployment space** Click **Add to space > Connection**\. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html)\. **In the Platform assets catalog** : Click **New connection**\. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html)\. 
### Next step: Add data assets from the connection ### <!-- <ul> --> * See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html)\. <!-- </ul> --> ## Where you can use this connection ## You can use the Generic S3 connection in the following workspaces and tools: **Projects** <!-- <ul> --> * Data Refinery <!-- </ul> --> **Catalogs** <!-- <ul> --> * Platform assets catalog <!-- </ul> --> ## Generic S3 connection setup ## For setup information, consult the documentation of the S3\-compatible data source that you are connecting to\. ## Supported file types ## The Generic S3 connection supports these file types: Avro, CSV, delimited text, Excel, JSON, ORC, Parquet, SAS, SAV, SHP, and XML\. **Related connection**: [Amazon S3 connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-amazon-s3.html) **Parent topic**: [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) <!-- </article "role="article" "> -->
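The Generic S3 connection fields above (endpoint URL, region, access and secret keys, path-style access) correspond to the options an S3 client library takes. The sketch below collects them into keyword arguments shaped after boto3's conventions; the exact parameter names a given library expects may differ, and the endpoint and keys shown are hypothetical:

```python
def s3_client_options(endpoint_url, access_key, secret_key,
                      region=None, path_style=False):
    """Collect Generic S3 connection details into client options
    (parameter names follow boto3-style conventions as an assumption).
    """
    options = {
        "endpoint_url": endpoint_url,
        "aws_access_key_id": access_key,
        "aws_secret_access_key": secret_key,
    }
    if region:
        options["region_name"] = region
    if path_style:
        # Path-style addressing: https://host/bucket/key rather than
        # https://bucket.host/key. Check your S3 data source's docs.
        options["config"] = {"s3": {"addressing_style": "path"}}
    return options

opts = s3_client_options("https://s3.example.com", "ACCESSKEY", "SECRET", path_style=True)
print(sorted(opts))
```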
908121C9993CBEDB4916C19D84605A9728FD27E5
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-greenplum.html?context=cdpaas&locale=en
Greenplum connection
# Greenplum connection #

To access your data in Greenplum, you must create a connection asset for it.

Greenplum is a massively parallel processing (MPP) database server that supports next-generation data warehousing and large-scale analytics processing.

## Supported versions ##

Greenplum 3.2+

## Create a connection to Greenplum ##

To create the connection asset, you need the following connection details:

* Username and password
* SSL certificate (if required by the database server)

For **Private connectivity**, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html).

### Choose the method for creating a connection based on where you are in the platform ###

**In a project** : Click **Assets > New asset > Connect to a data source**. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html).

**In a deployment space** : Click **Add to space > Connection**. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).

**In the Platform assets catalog** : Click **New connection**. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html).

### Next step: Add data assets from the connection ###

* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html).

## Where you can use this connection ##

You can use Greenplum connections in the following workspaces and tools:

**Projects**

* SPSS Modeler
* Synthetic Data Generator

**Catalogs**

* Platform assets catalog

## Greenplum setup ##

[Greenplum Database Installation Guide](https://docs.vmware.com/en/VMware-Greenplum/5/greenplum-database/install_guide-install_guide.html)

## Restriction ##

For SPSS Modeler, you can use this connection only to import data. You cannot export data to this connection or to a Greenplum connected data asset.

## Learn more ##

* [Greenplum database](https://greenplum.org/)
* [Greenplum documentation](https://docs.Greenplum.org/6-8/common/gpdb-features.html)

**Parent topic**: [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
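Greenplum is built on PostgreSQL and accepts libpq-style connection URLs, which is one common way to combine the credential fields above. A sketch (the host, database, and credentials are hypothetical, and the URL form assumes a PostgreSQL-compatible client):

```python
from urllib.parse import quote

def greenplum_url(host, port, database, user, password, sslmode="require"):
    """Assemble a PostgreSQL-style connection URL for Greenplum.

    The username and password are percent-encoded so that reserved
    characters (such as @ or /) survive inside the URL.
    """
    return (f"postgresql://{quote(user, safe='')}:{quote(password, safe='')}"
            f"@{host}:{port}/{database}?sslmode={sslmode}")

print(greenplum_url("gp.example.com", 5432, "analytics", "gpadmin", "p@ss/word"))
# -> postgresql://gpadmin:p%40ss%2Fword@gp.example.com:5432/analytics?sslmode=require
```

`sslmode=require` corresponds to supplying the SSL certificate field; drop it only for servers that do not require TLS.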
049DC7FC73042985F3258EF2CF3BB05114F7F175
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-hdfs.html?context=cdpaas&locale=en
Apache HDFS connection
# Apache HDFS connection #

To access your data in Apache HDFS, create a connection asset for it.

Apache Hadoop Distributed File System (HDFS) is a distributed file system that is designed to run on commodity hardware. Apache HDFS was formerly Hortonworks HDFS.

## Create a connection to Apache HDFS ##

To create the connection asset, you need these connection details. The WebHDFS URL is required. The available properties in the connection form depend on whether you select **Connect to Apache Hive** so that you can write tables to the Hive data source.

* WebHDFS URL to access HDFS.
* Hive host: Hostname or IP address of the Apache Hive server.
* Hive database: The database in Apache Hive.
* Hive port number: The port number of the Apache Hive server. The default value is `10000`.
* Hive HTTP path: The path of the endpoint, such as `gateway/default/hive`, when the server is configured for HTTP transport mode.
* SSL certificate (if required by the Apache Hive server).

For **Private connectivity**, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html).

### Choose the method for creating a connection based on where you are in the platform ###

**In a project** : Click **Assets > New asset > Connect to a data source**. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html).

**In a deployment space** : Click **Add to space > Connection**. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).

**In the Platform assets catalog** : Click **New connection**. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html).

### Next step: Add data assets from the connection ###

* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html).

## Where you can use this connection ##

You can use Apache HDFS connections in the following workspaces and tools:

**Projects**

* Data Refinery
* SPSS Modeler
* Synthetic Data Generator

**Catalogs**

* Platform assets catalog

## Apache HDFS setup ##

[Install and set up a Hadoop cluster](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html#Prerequisites)

## Supported file types ##

The Apache HDFS connection supports these file types: Avro, CSV, Delimited text, Excel, JSON, ORC, Parquet, SAS, SAV, SHP, and XML.

## Learn more ##

[Apache HDFS Users Guide](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html)

**Parent topic**: [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
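The WebHDFS URL from the connection form is the base address for REST calls against HDFS: every file operation is expressed as `<base>/webhdfs/v1<path>?op=...`. A sketch of how a read request is addressed, assuming the base URL is just `scheme://host:port` (the namenode host, port, and file path are illustrative):

```python
from urllib.parse import quote, urlencode

def webhdfs_url(base_url, path, operation, **params):
    """Build a WebHDFS REST URL from a base WebHDFS address.

    The HDFS file path is appended under the fixed /webhdfs/v1 prefix,
    and the operation (plus any extra parameters) goes in the query string.
    """
    query = urlencode({"op": operation, **params})
    return f"{base_url.rstrip('/')}/webhdfs/v1{quote(path)}?{query}"

# OPEN is the WebHDFS read operation.
print(webhdfs_url("http://namenode.example.com:9870", "/data/sales.csv", "OPEN"))
# -> http://namenode.example.com:9870/webhdfs/v1/data/sales.csv?op=OPEN
```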
4451C208E0350C4C480F50929BD6735588B6F2BC
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-hive.html?context=cdpaas&locale=en
Apache Hive connection
# Apache Hive connection #

To access your data in Apache Hive, you must create a connection asset for it.

Apache Hive is a data warehouse software project that provides data query and analysis and is built on top of Apache Hadoop.

## Supported versions ##

Apache Hive 1.0.x, 1.1.x, 1.2.x, 2.0.x, 2.1.x, 3.0.x, 3.1.x

## Create a connection to Apache Hive ##

To create the connection asset, you need the following connection details:

* Database name
* Hostname or IP address
* Port number
* HTTP path (optional): The path of the endpoint, such as `gateway/default/hive`, if the server is configured for the HTTP transport mode.
* If required by the database server, the SSL certificate

For **Private connectivity**, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html).

### Choose the method for creating a connection based on where you are in the platform ###

**In a project** : Click **Assets > New asset > Connect to a data source**. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html).

**In a deployment space** : Click **Add to space > Connection**. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).

**In the Platform assets catalog** : Click **New connection**. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html).

### Next step: Add data assets from the connection ###

* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html).

## Where you can use this connection ##

You can use the Apache Hive connection in the following workspaces and tools:

**Projects**

* Data Refinery
* SPSS Modeler
* Synthetic Data Generator

**Catalogs**

* Platform assets catalog

## Apache Hive setup ##

[Apache Hive installation and configuration](https://cwiki.apache.org/confluence/display/Hive/GettingStarted#GettingStarted-InstallationandConfiguration)

## Restriction ##

### Running SQL statements ###

To ensure that your SQL statements run correctly, refer to [SQL Operations](https://cwiki.apache.org/confluence/display/Hive/GettingStarted#GettingStarted-SQLOperations) in the Apache Hive documentation for the correct syntax.

## Learn more ##

[Apache Hive documentation](https://cwiki.apache.org/confluence/display/Hive/GettingStarted)

**Parent topic**: [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
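For JDBC clients, the host, port, database, and HTTP path fields combine into a HiveServer2 URL; the HTTP path applies only when the server runs in HTTP transport mode. A sketch with hypothetical host and port values:

```python
def hive_jdbc_url(host, port, database, http_path=None):
    """Compose a HiveServer2 JDBC URL from the connection fields.

    When an HTTP path is supplied, the URL switches the client to
    HTTP transport mode; otherwise the default binary transport is used.
    """
    url = f"jdbc:hive2://{host}:{port}/{database}"
    if http_path:
        url += f";transportMode=http;httpPath={http_path}"
    return url

# Binary transport on the default port 10000.
print(hive_jdbc_url("hive.example.com", 10000, "default"))
# -> jdbc:hive2://hive.example.com:10000/default

# HTTP transport mode through a gateway path.
print(hive_jdbc_url("hive.example.com", 10001, "default", http_path="gateway/default/hive"))
# -> jdbc:hive2://hive.example.com:10001/default;transportMode=http;httpPath=gateway/default/hive
```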
2245768521F36E6F2DF594E1BF3111DD63CC824A
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-http.html?context=cdpaas&locale=en
HTTP connection
# HTTP connection #

To access your data from a URL, create an HTTP connection asset for it.

## Supported file ##

Use the full path in the URL to the file that you want to read. You cannot browse for files.

## Certificates ##

Enter the SSL certificate of the host to be trusted. The SSL certificate is needed only when the host certificate is not signed by a known certificate authority.

For **Private connectivity**, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html).

### Choose the method for creating a connection based on where you are in the platform ###

**In a project** : Click **Assets > New asset > Connect to a data source**. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html).

**In a deployment space** : Click **Add to space > Connection**. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).

**In the Platform assets catalog** : Click **New connection**. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html).

### Next step: Add data assets from the connection ###

* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html).

## Where you can use this connection ##

You can use HTTP connections in the following workspaces and tools:

**Projects**

* Data Refinery
* Notebooks. Click **Read data** on the **Code snippets** pane to get the connection credentials and load the data into a data structure. See [Load data from data source connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html#conns).
* SPSS Modeler
* Synthetic Data Generator

**Catalogs**

* Platform assets catalog

## Restriction ##

You can use this connection only for source data. You cannot write to data or export data with this connection.

## Supported file types ##

The HTTP connection supports these file types: Avro, CSV, Delimited text, Excel, JSON, ORC, Parquet, SAS, SAV, SHP, and XML.

**Parent topic**: [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
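Because the URL must point directly at a single file, a rough pre-check of whether a URL's extension falls in the supported list can save a failed import. This is an illustration only: the extension-to-type mapping below is an assumption, not the connection's actual detection logic, and the host is hypothetical.

```python
from urllib.parse import urlparse
import posixpath

# Assumed extension mapping for the supported file types listed above.
SUPPORTED_EXTENSIONS = {".avro", ".csv", ".txt", ".xls", ".xlsx", ".json",
                        ".orc", ".parquet", ".sas7bdat", ".sav", ".shp", ".xml"}

def has_supported_extension(url):
    """Check whether the URL path ends in an extension from the list above."""
    ext = posixpath.splitext(urlparse(url).path)[1].lower()
    return ext in SUPPORTED_EXTENSIONS

print(has_supported_extension("https://files.example.com/data/report.csv"))   # True
print(has_supported_extension("https://files.example.com/data/archive.zip"))  # False
```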
FA15A8A5795BAEC1D8933A768407294110203E03
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-informix.html?context=cdpaas&locale=en
IBM Informix connection
# IBM Informix connection #

To access your data in an IBM Informix database, create a connection asset for it.

IBM Informix is a database that contains relational, object-relational, or dimensional data. You can use the Informix connection to access data from an on-prem Informix database server or from IBM Informix on Cloud.

## Supported Informix versions (on-prem) ##

* Informix 14.10 and later. This version does not support the Progress DataDirect JDBC driver, which is used by the Informix connection. The Informix connection supports Informix 14.10 features that are comparable to previous Informix versions, but not the new features. Issues related to DataDirect's JDBC driver are not supported.
* Informix 12.10 and later
* Informix 11.0 and later
* Informix 10.0 and later
* Informix 9.2 and later

## Create a connection to Informix ##

To create the connection asset, you need these connection details:

* Name of the database server
* Name of the database
* Hostname or IP address of the database
* Port number (Default is `1526`)
* Username and password

On-prem Informix database servers: For **Private connectivity**, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html).

### Choose the method for creating a connection based on where you are in the platform ###

**In a project** : Click **Assets > New asset > Connect to a data source**. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html).

**In a deployment space** : Click **Add to space > Connection**. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).

**In the Platform assets catalog** : Click **New connection**. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html).

### Next step: Add data assets from the connection ###

* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html).

## Where you can use this connection ##

You can use Informix connections in the following workspaces and tools:

**Projects**

* SPSS Modeler
* Synthetic Data Generator

**Catalogs**

* Platform assets catalog

## Informix setup ##

To set up Informix, see these topics:

* Informix on-prem: [Creating a database server after installation](https://www.ibm.com/docs/SSGU8G_14.1.0/com.ibm.inst.doc/ids_inst_023.htm)
* Informix on Cloud: [Getting started with Informix on Cloud](https://cloud.ibm.com/docs/InformixOnCloud/InformixOnCloud.html)

### Running SQL statements ###

To ensure that your SQL statements run correctly, refer to the [Guide to SQL: Syntax](https://www.ibm.com/docs/SSGU8G_14.1.0/com.ibm.sqls.doc/sqls.htm) in the product documentation for the correct syntax.

## Learn more ##

* [Informix product documentation](https://www.ibm.com/docs/informix-servers/14.10) (on-prem)
* [IBM Informix on Cloud](https://www.ibm.com/cloud/informix)
* [IBM Informix on Cloud FAQ](https://www.ibm.com/cloud/informix/faq)

**Parent topic**: [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
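Unlike most databases, Informix needs the database server name in addition to the host and database, which is why the form asks for both. For JDBC clients, the fields combine into a URL where `INFORMIXSERVER` carries the server name. A sketch with hypothetical values:

```python
def informix_jdbc_url(host, port, database, server, user, password):
    """Compose an Informix JDBC URL from the connection fields.

    INFORMIXSERVER is the name of the database server instance,
    which Informix requires alongside the host and database.
    """
    return (f"jdbc:informix-sqli://{host}:{port}/{database}:"
            f"INFORMIXSERVER={server};user={user};password={password}")

print(informix_jdbc_url("ifx.example.com", 1526, "stores", "ol_informix", "informix", "secret"))
# -> jdbc:informix-sqli://ifx.example.com:1526/stores:INFORMIXSERVER=ol_informix;user=informix;password=secret
```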
102D3D188E3EDC4A4AA55F731966EBB22C827822
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-looker.html?context=cdpaas&locale=en
Looker connection
# Looker connection # To access your data in Looker, create a connection asset for it\. Looker is a business intelligence software and big data analytics platform that helps you explore, analyze and share real\-time business analytics\. ## Create a connection to Looker ## To create the connection asset, you need these connection details: <!-- <ul> --> * Hostname or IP address * Port number of the Looker server * Client ID and Client secret <!-- </ul> --> Before you configure the connection, set up API3 credentials for your Looker instance\. For details, see [Looker API Authentication](https://www.ibm.com/links?url=https%3A%2F%2Fdocs.looker.com%2Freference%2Fapi-and-integration%2Fapi-auth)\. ### Choose the method for creating a connection based on where you are in the platform ### **In a project** : Click **Assets > New asset > Connect to a data source**\. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html)\. **In a deployment space** : Click **Add to space > Connection**\. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html)\. **In the Platform assets catalog** : Click **New connection**\. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html)\. ### Next step: Add data assets from the connection ### <!-- <ul> --> * See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html)\. 
<!-- </ul> --> ## Where you can use this connection ## You can use Looker connections in the following workspaces and tools: **Projects** <!-- <ul> --> * SPSS Modeler * Synthetic Data Generator <!-- </ul> --> **Catalogs** <!-- <ul> --> * Platform assets catalog <!-- </ul> --> ## Looker setup ## [Set up and administer Looker](https://docs.looker.com/admin-options) ## Restriction ## You can use this connection only for source data\. You cannot write to data or export data with this connection\. ### Running SQL statements ### To ensure that your SQL statements run correctly, refer to the Looker documentation, [Using SQL Runner](https://docs.looker.com/data-modeling/learning-lookml/sql-runner-create-queries), for the correct syntax\. ## Supported file types ## The Looker connection supports these file types: CSV, Delimited text, Excel, JSON\. ## Learn more ## [Looker documentation](https://docs.looker.com/) **Parent topic:**[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) <!-- </article "role="article" "> -->
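The Client ID and Client secret above are API3 credentials that Looker exchanges for a short-lived access token at its login endpoint. A minimal sketch of assembling that request (the `/api/3.1/login` path and form fields follow Looker's API documentation; the host and credentials are hypothetical placeholders):

```python
def looker_login_request(base_url: str, client_id: str, client_secret: str) -> dict:
    """Describe the POST request that exchanges API3 credentials for a token."""
    return {
        "method": "POST",
        "url": f"{base_url.rstrip('/')}/api/3.1/login",
        "data": {"client_id": client_id, "client_secret": client_secret},
    }

req = looker_login_request("https://example.looker.com:19999", "my_client_id", "my_secret")
print(req["url"])  # https://example.looker.com:19999/api/3.1/login
```

The access token returned by this call is then sent on subsequent Looker API requests.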
FECB1C7B603627E1CF1386AD0EBDFE57FA485F93
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-mariadb.html?context=cdpaas&locale=en
MariaDB connection
MariaDB connection To access your data in MariaDB, create a connection asset for it. MariaDB is an open source relational database. You can use the MariaDB connection to connect to either a MariaDB server or to a Microsoft Azure Database for MariaDB service in the cloud. Supported versions * MariaDB server: 10.5.5 * Microsoft Azure Database for MariaDB: 10.3 Create a connection to MariaDB To create the connection asset, you need these connection details: * Database name * Hostname or IP address * Port number * Username and password * SSL certificate (if required by the database server) For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). Choose the method for creating a connection based on where you are in the platform In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). Next step: Add data assets from the connection * See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). 
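As an illustration of how the connection details above fit together, here is a sketch that assembles them into a MariaDB connection URI (the hostname and credentials are placeholders, and the exact SSL parameter name varies by client library):

```python
from urllib.parse import quote

def mariadb_uri(host: str, port: int, database: str,
                user: str, password: str, ssl: bool = False) -> str:
    """Combine the required MariaDB connection details into a single URI."""
    uri = f"mysql://{quote(user, safe='')}:{quote(password, safe='')}@{host}:{port}/{database}"
    if ssl:
        uri += "?ssl=true"  # parameter name depends on the driver
    return uri

print(mariadb_uri("db.example.com", 3306, "sales", "app_user", "s3cret", ssl=True))
# mysql://app_user:s3cret@db.example.com:3306/sales?ssl=true
```

Percent-encoding the user name and password keeps reserved characters such as `@` or `:` from breaking the URI.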
Where you can use this connection You can use MariaDB connections in the following workspaces and tools: Projects * Data Refinery * Decision Optimization * SPSS Modeler * Synthetic Data Generator Catalogs * Platform assets catalog MariaDB setup Setup depends on whether you are connecting from a local MariaDB server or a Microsoft Azure Database for MariaDB database service in the cloud. * MariaDB server: [MariaDB Administration](https://mariadb.com/kb/en/mariadb-administration/) * Microsoft Azure Database for MariaDB: [Quickstart: Create an Azure Database for MariaDB server by using the Azure portal](https://docs.microsoft.com/en-us/azure/mariadb/quickstart-create-mariadb-server-database-using-azure-portal) Learn more * [MariaDB Foundation](https://mariadb.org/) * [Microsoft Azure Database for MariaDB](https://azure.microsoft.com/en-us/services/mariadb/) Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
# MariaDB connection # To access your data in MariaDB, create a connection asset for it\. MariaDB is an open source relational database\. You can use the MariaDB connection to connect to either a MariaDB server or to a Microsoft Azure Database for MariaDB service in the cloud\. ## Supported versions ## <!-- <ul> --> * MariaDB server: 10\.5\.5 * Microsoft Azure Database for MariaDB: 10\.3 <!-- </ul> --> ## Create a connection to MariaDB ## To create the connection asset, you need these connection details: <!-- <ul> --> * Database name * Hostname or IP address * Port number * Username and password * SSL certificate (if required by the database server) <!-- </ul> --> For **Private connectivity**, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html)\. ### Choose the method for creating a connection based on where you are in the platform ### **In a project** : Click **Assets > New asset > Connect to a data source**\. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html)\. **In a deployment space** : Click **Add to space > Connection**\. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html)\. **In the Platform assets catalog** : Click **New connection**\. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html)\. ### Next step: Add data assets from the connection ### <!-- <ul> --> * See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html)\. 
<!-- </ul> --> ## Where you can use this connection ## You can use MariaDB connections in the following workspaces and tools: **Projects** <!-- <ul> --> * Data Refinery * Decision Optimization * SPSS Modeler * Synthetic Data Generator <!-- </ul> --> **Catalogs** <!-- <ul> --> * Platform assets catalog <!-- </ul> --> ## MariaDB setup ## Setup depends on whether you are connecting from a local MariaDB server or a Microsoft Azure Database for MariaDB database service in the cloud\. <!-- <ul> --> * MariaDB server: [MariaDB Administration](https://mariadb.com/kb/en/mariadb-administration/) * Microsoft Azure Database for MariaDB: [Quickstart: Create an Azure Database for MariaDB server by using the Azure portal](https://docs.microsoft.com/en-us/azure/mariadb/quickstart-create-mariadb-server-database-using-azure-portal) <!-- </ul> --> ## Learn more ## <!-- <ul> --> * [MariaDB Foundation](https://mariadb.org/) * [Microsoft Azure Database for MariaDB](https://azure.microsoft.com/en-us/services/mariadb/) <!-- </ul> --> **Parent topic:**[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) <!-- </article "role="article" "> -->
FD0BA49BF07FCC9CAF384A50B3012F98E4F1D81E
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-mongo.html?context=cdpaas&locale=en
MongoDB connection
MongoDB connection To access your data in MongoDB, create a connection asset for it. MongoDB is a distributed database that stores data in JSON-like documents. Supported editions and versions MongoDB editions * MongoDB Community * IBM Cloud Databases for MongoDB. See [IBM Cloud Databases for MongoDB connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-mongodb.html) for this data source. * MongoDB Atlas * WiredTiger Storage Engine MongoDB versions * MongoDB 3.6 and later, 4.x, 5.x, and 6.x * Microsoft Azure Cosmos DB for MongoDB 3.6 and later, 4.x Create a connection to MongoDB To create the connection asset, you need these connection details: * Database name * Hostname or IP address * Port number * Authentication database: The name of the database in which the user was created. * Username and password * SSL certificate (if required by the database server) For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). Choose the method for creating a connection based on where you are in the platform In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). Next step: Add data assets from the connection * See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). 
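The "Authentication database" detail above maps to the standard `authSource` option in a MongoDB connection string. A minimal sketch, with a placeholder host and credentials:

```python
from urllib.parse import quote

def mongodb_uri(host: str, port: int, database: str,
                user: str, password: str, auth_db: str = "admin") -> str:
    """Build a MongoDB connection string; authSource names the database
    in which the user was created."""
    return (f"mongodb://{quote(user, safe='')}:{quote(password, safe='')}"
            f"@{host}:{port}/{database}?authSource={auth_db}")

print(mongodb_uri("mongo.example.com", 27017, "orders", "reader", "pw"))
# mongodb://reader:pw@mongo.example.com:27017/orders?authSource=admin
```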
Where you can use this connection You can use MongoDB connections in the following workspaces and tools: Projects * Data Refinery * Decision Optimization * SPSS Modeler * Synthetic Data Generator Catalogs * Platform assets catalog MongoDB setup [MongoDB installation](https://docs.mongodb.com/manual/installation/) Restrictions * You can only use this connection for source data. You cannot write to data or export data with this connection. * MongoDB Query Language (MQL) is not supported. Learn more * [MongoDB tutorials](https://docs.mongodb.com/manual/tutorial/) * [mongodb.com](https://www.mongodb.com/) Related connection: [IBM Cloud Databases for MongoDB connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-mongodb.html) Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
# MongoDB connection # To access your data in MongoDB, create a connection asset for it\. MongoDB is a distributed database that stores data in JSON\-like documents\. ## Supported editions and versions ## ### MongoDB editions ### <!-- <ul> --> * MongoDB Community * IBM Cloud Databases for MongoDB\. See [IBM Cloud Databases for MongoDB connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-mongodb.html) for this data source\. * MongoDB Atlas * WiredTiger Storage Engine <!-- </ul> --> ### MongoDB versions ### <!-- <ul> --> * MongoDB 3\.6 and later, 4\.x, 5\.x, and 6\.x * Microsoft Azure Cosmos DB for MongoDB 3\.6 and later, 4\.x <!-- </ul> --> ## Create a connection to MongoDB ## To create the connection asset, you need these connection details: <!-- <ul> --> * Database name * Hostname or IP address * Port number * Authentication database: The name of the database in which the user was created\. * Username and password * SSL certificate (if required by the database server) <!-- </ul> --> For **Private connectivity**, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html)\. ### Choose the method for creating a connection based on where you are in the platform ### **In a project** : Click **Assets > New asset > Connect to a data source**\. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html)\. **In a deployment space** : Click **Add to space > Connection**\. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html)\. **In the Platform assets catalog** : Click **New connection**\. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html)\. 
### Next step: Add data assets from the connection ### <!-- <ul> --> * See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html)\. <!-- </ul> --> ## Where you can use this connection ## You can use MongoDB connections in the following workspaces and tools: **Projects** <!-- <ul> --> * Data Refinery * Decision Optimization * SPSS Modeler * Synthetic Data Generator <!-- </ul> --> **Catalogs** <!-- <ul> --> * Platform assets catalog <!-- </ul> --> ## MongoDB setup ## [MongoDB installation](https://docs.mongodb.com/manual/installation/) ## Restrictions ## <!-- <ul> --> * You can only use this connection for source data\. You cannot write to data or export data with this connection\. * MongoDB Query Language (MQL) is not supported\. <!-- </ul> --> ## Learn more ## <!-- <ul> --> * [MongoDB tutorials](https://docs.mongodb.com/manual/tutorial/) * [mongodb\.com](https://www.mongodb.com/) <!-- </ul> --> **Related connection**: [IBM Cloud Databases for MongoDB connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-mongodb.html) **Parent topic:**[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) <!-- </article "role="article" "> -->
3721517369CF4EA8476BCBB39040542BA2A212D8
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-mongodb.html?context=cdpaas&locale=en
IBM Cloud Databases for MongoDB connection
IBM Cloud Databases for MongoDB connection To access your data in IBM Cloud Databases for MongoDB, create a connection asset for it. IBM Cloud Databases for MongoDB is a MongoDB database that is managed by IBM Cloud. It uses a JSON document store with a rich query and aggregation framework. Supported editions * MongoDB Community Edition * MongoDB Enterprise Edition Create a connection to IBM Cloud Databases for MongoDB To create the connection asset, you need these connection details: * Database name * Hostname or IP address * Port number * Authentication database: The name of the database in which the user was created. * Username and password * SSL certificate (if required by the database server) Choose the method for creating a connection based on where you are in the platform In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). Next step: Add data assets from the connection * See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). 
Where you can use this connection You can use IBM Cloud Databases for MongoDB connections in the following workspaces and tools: Projects * Data Refinery * Decision Optimization * SPSS Modeler * Synthetic Data Generator Catalogs * Platform assets catalog IBM Cloud Databases for MongoDB setup [Getting Started Tutorial](https://cloud.ibm.com/docs/databases-for-mongodb?topic=databases-for-mongodb-getting-started-tutorial) Restrictions * You can only use this connection for source data. You cannot write to data or export data with this connection. * MongoDB Query Language (MQL) is not supported. Related connection: [MongoDB connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-mongo.html) Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
# IBM Cloud Databases for MongoDB connection # To access your data in IBM Cloud Databases for MongoDB, create a connection asset for it\. IBM Cloud Databases for MongoDB is a MongoDB database that is managed by IBM Cloud\. It uses a JSON document store with a rich query and aggregation framework\. ## Supported editions ## <!-- <ul> --> * MongoDB Community Edition * MongoDB Enterprise Edition <!-- </ul> --> ## Create a connection to IBM Cloud Databases for MongoDB ## To create the connection asset, you need these connection details: <!-- <ul> --> * Database name * Hostname or IP address * Port number * Authentication database: The name of the database in which the user was created\. * Username and password * SSL certificate (if required by the database server) <!-- </ul> --> ### Choose the method for creating a connection based on where you are in the platform ### **In a project** : Click **Assets > New asset > Connect to a data source**\. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html)\. **In a deployment space** : Click **Add to space > Connection**\. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html)\. **In the Platform assets catalog** : Click **New connection**\. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html)\. ### Next step: Add data assets from the connection ### <!-- <ul> --> * See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html)\. 
<!-- </ul> --> ## Where you can use this connection ## You can use IBM Cloud Databases for MongoDB connections in the following workspaces and tools: **Projects** <!-- <ul> --> * Data Refinery * Decision Optimization * SPSS Modeler * Synthetic Data Generator <!-- </ul> --> **Catalogs** <!-- <ul> --> * Platform assets catalog <!-- </ul> --> ## IBM Cloud Databases for MongoDB setup ## [Getting Started Tutorial](https://cloud.ibm.com/docs/databases-for-mongodb?topic=databases-for-mongodb-getting-started-tutorial) ## Restrictions ## <!-- <ul> --> * You can only use this connection for source data\. You cannot write to data or export data with this connection\. * MongoDB Query Language (MQL) is not supported\. <!-- </ul> --> **Related connection**: [MongoDB connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-mongo.html) **Parent topic:**[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) <!-- </article "role="article" "> -->
BA78048BAD0EE0F455762254F562704769EA4149
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-mysql.html?context=cdpaas&locale=en
MySQL connection
MySQL connection To access your data in MySQL, create a connection asset for it. MySQL is an open-source relational database management system. Supported versions * MySQL Enterprise Edition 5.0+ * MySQL Community Edition 4.1, 5.0, 5.1, 5.5, 5.6, 5.7 Create a connection to MySQL To create the connection asset, you need these connection details: * Database name * Hostname or IP Address * Port number * Character Encoding * Username and password * SSL certificate (if required by the database server) For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). Choose the method for creating a connection based on where you are in the platform In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). Next step: Add data assets from the connection * See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). Where you can use this connection You can use MySQL connections in the following workspaces and tools: Projects * Decision Optimization * SPSS Modeler * Synthetic Data Generator Catalogs * Platform assets catalog Running SQL statements To ensure that your SQL statements run correctly, refer to the [MySQL documentation](https://dev.mysql.com/doc/) for the correct syntax. 
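The connection details listed above, including the character encoding, correspond to the keyword arguments most Python MySQL drivers accept. A sketch of collecting them (parameter names vary slightly between drivers, so treat these as illustrative):

```python
def mysql_conn_params(host: str, port: int, database: str, user: str,
                      password: str, charset: str = "utf8mb4") -> dict:
    """Collect the required MySQL connection details in keyword form."""
    return {
        "host": host,
        "port": port,
        "database": database,
        "user": user,
        "password": password,
        "charset": charset,  # the 'Character Encoding' connection detail
    }

params = mysql_conn_params("mysql.example.com", 3306, "inventory", "app", "pw")
print(params["charset"])  # utf8mb4
```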
MySQL setup [MySQL Installation](https://dev.mysql.com/doc/mysql-getting-started/en/) Learn more [MySQL documentation](https://dev.mysql.com/doc/) Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
# MySQL connection # To access your data in MySQL, create a connection asset for it\. MySQL is an open\-source relational database management system\. ## Supported versions ## <!-- <ul> --> * MySQL Enterprise Edition 5\.0\+ * MySQL Community Edition 4\.1, 5\.0, 5\.1, 5\.5, 5\.6, 5\.7 <!-- </ul> --> ## Create a connection to MySQL ## To create the connection asset, you need these connection details: <!-- <ul> --> * Database name * Hostname or IP Address * Port number * Character Encoding * Username and password * SSL certificate (if required by the database server) <!-- </ul> --> For **Private connectivity**, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html)\. ### Choose the method for creating a connection based on where you are in the platform ### **In a project** : Click **Assets > New asset > Connect to a data source**\. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html)\. **In a deployment space** : Click **Add to space > Connection**\. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html)\. **In the Platform assets catalog** : Click **New connection**\. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html)\. ### Next step: Add data assets from the connection ### <!-- <ul> --> * See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html)\. 
<!-- </ul> --> ## Where you can use this connection ## You can use MySQL connections in the following workspaces and tools: **Projects** <!-- <ul> --> * Decision Optimization * SPSS Modeler * Synthetic Data Generator <!-- </ul> --> **Catalogs** <!-- <ul> --> * Platform assets catalog <!-- </ul> --> ### Running SQL statements ### To ensure that your SQL statements run correctly, refer to the [MySQL documentation](https://dev.mysql.com/doc/) for the correct syntax\. ## MySQL setup ## [MySQL Installation ](https://dev.mysql.com/doc/mysql-getting-started/en/) ## Learn more ## [MySQL documentation](https://dev.mysql.com/doc/) **Parent topic:**[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) <!-- </article "role="article" "> -->
4CA6000DC674CB2486D905F8531FC19BC88F887A
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-odata.html?context=cdpaas&locale=en
OData connection
OData connection To access your data in OData, create a connection asset for it. The OData (Open Data) protocol is a REST-based data access protocol. The OData connection reads data from a data source that uses the OData protocol. Supported versions The OData connection is supported on OData protocol version 2 or version 4. Create a connection to OData To create the connection asset, you need these connection details: Credentials type: * API Key * Basic * None Encryption: SSL certificate (if required by the database server) For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). Choose the method for creating a connection based on where you are in the platform In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). Next step: Add data assets from the connection * See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). 
Where you can use this connection You can use the OData connection in the following workspaces and tools: Projects * Data Refinery * SPSS Modeler * Synthetic Data Generator Catalogs * Platform assets catalog OData setup To set up the OData service, see [How to Use Web API OData to Build an OData V4 Service without Entity Framework](https://www.odata.org/blog/how-to-use-web-api-odata-to-build-an-odata-v4-service-without-entity-framework/). Restrictions * For Data Refinery, you can use this connection only as a source. You cannot use this connection as a target connection or as a target connected data asset. * For SPSS Modeler, you cannot create new entity sets. Learn more [www.odata.org](https://www.odata.org/) Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
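Because OData is REST-based, reading from an OData source ultimately reduces to query URLs built from the protocol's system query options. A sketch using the standard `$select`, `$filter`, and `$top` options (the service root and entity set names are hypothetical):

```python
from urllib.parse import urlencode, quote

def odata_query(service_root, entity_set, select=None, filter_=None, top=None):
    """Build an OData query URL from the protocol's system query options."""
    params = {}
    if select:
        params["$select"] = ",".join(select)
    if filter_:
        params["$filter"] = filter_
    if top is not None:
        params["$top"] = str(top)
    url = f"{service_root.rstrip('/')}/{entity_set}"
    if params:
        # keep '$' and ',' literal; encode spaces as %20
        url += "?" + urlencode(params, safe="$,", quote_via=quote)
    return url

print(odata_query("https://services.example.com/v4/Orders.svc", "Orders",
                  select=["ID", "Total"], filter_="Total gt 100", top=5))
# https://services.example.com/v4/Orders.svc/Orders?$select=ID,Total&$filter=Total%20gt%20100&$top=5
```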
# OData connection # To access your data in OData, create a connection asset for it\. The OData (Open Data) protocol is a REST\-based data access protocol\. The OData connection reads data from a data source that uses the OData protocol\. ## Supported versions ## The OData connection is supported on OData protocol version 2 or version 4\. ## Create a connection to OData ## To create the connection asset, you need these connection details: Credentials type: <!-- <ul> --> * API Key * Basic * None <!-- </ul> --> Encryption: SSL certificate (if required by the database server) For **Private connectivity**, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html)\. ### Choose the method for creating a connection based on where you are in the platform ### **In a project** : Click **Assets > New asset > Connect to a data source**\. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html)\. **In a deployment space** : Click **Add to space > Connection**\. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html)\. **In the Platform assets catalog** : Click **New connection**\. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html)\. ### Next step: Add data assets from the connection ### <!-- <ul> --> * See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html)\. 
<!-- </ul> --> ## Where you can use this connection ## You can use the OData connection in the following workspaces and tools: **Projects** <!-- <ul> --> * Data Refinery * SPSS Modeler * Synthetic Data Generator <!-- </ul> --> **Catalogs** <!-- <ul> --> * Platform assets catalog <!-- </ul> --> ## OData setup ## To set up the OData service, see [How to Use Web API OData to Build an OData V4 Service without Entity Framework](https://www.odata.org/blog/how-to-use-web-api-odata-to-build-an-odata-v4-service-without-entity-framework/)\. ## Restrictions ## <!-- <ul> --> * For Data Refinery, you can use this connection only as a source\. You cannot use this connection as a target connection or as a target connected data asset\. * For SPSS Modeler, you cannot create new entity sets\. <!-- </ul> --> ## Learn more ## [www\.odata\.org](https://www.odata.org/) **Parent topic:**[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) <!-- </article "role="article" "> -->
96B0DF7161FF334810F77FE93235BD4D548164A7
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-oracle.html?context=cdpaas&locale=en
Oracle connection
Oracle connection To access your data in Oracle, you must create a connection asset for it. Oracle is a multi-model database management system. Supported versions * Oracle 19c and 21c Create a connection to Oracle To create the connection asset, you need the following connection details: * Service name or Database (SID) * Hostname or IP address * Port number * SSL certificate (if required by the database server) * Alternate servers: A list of alternate database servers to use for failover for new or lost connections. Syntax: (servername1[:port1][;property=value[;...]][,servername2[:port2][;property=value[;...]]]...) The server name (servername1, servername2, and so on) is required for each alternate server entry. The port number (port1, port2, and so on) and the connection properties (property=value) are optional for each alternate server entry. If the port is unspecified, the port number of the primary server is used. If the port number of the primary server is not specified, the default port number 1521 is used. The optional connection properties are the ServiceName and SID. * Metadata discovery: The setting determines whether comments on columns (remarks) and aliases for schema objects such as tables or views (synonyms) are retrieved when assets are added by using this connection. For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). Choose the method for creating a connection based on where you are in the platform In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). 
In the Platform assets catalog : Click New connection. For more information, see [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). Next step: Add data assets from the connection * See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). Where you can use this connection You can use Oracle connections in the following workspaces and tools: Projects * Data Refinery * SPSS Modeler * Synthetic Data Generator Catalogs * Platform assets catalog Oracle setup [Oracle installation](https://docs.oracle.com/cd/E11882_01/server.112/e10897/install.htm#ADMQS002) Running SQL statements To ensure that your SQL statements run correctly, refer to the [Oracle Supported SQL Syntax and Functions](https://docs.oracle.com/en/database/oracle/oracle-database/21/gmswn/database-gateway-sqlserver-supported-sql-syntax-functions.html) for the correct syntax. Learn more [Oracle product documentation](https://docs.oracle.com/en/database/) Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
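The alternate-servers value described above is a formatted string. A small sketch of producing it from (hostname, port) pairs, following the syntax in this section (the optional per-server connection properties are omitted for brevity, and the hostnames are placeholders):

```python
def alternate_servers(servers) -> str:
    """Format the Oracle alternate-servers failover list. Each entry is
    a (hostname, port) pair; pass None for port to inherit the primary
    server's port."""
    entries = [host if port is None else f"{host}:{port}"
               for host, port in servers]
    return "(" + ",".join(entries) + ")"

print(alternate_servers([("db2.example.com", 1521), ("db3.example.com", None)]))
# (db2.example.com:1521,db3.example.com)
```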
# Oracle connection # To access your data in Oracle, you must create a connection asset for it\. Oracle is a multi\-model database management system\. ## Supported versions ## <!-- <ul> --> * Oracle 19c and 21c <!-- </ul> --> ## Create a connection to Oracle ## To create the connection asset, you need the following connection details: <!-- <ul> --> * Service name or Database (SID) * Hostname or IP address * Port number * SSL certificate (if required by the database server) * Alternate servers: A list of alternate database servers to use for failover for new or lost connections\. Syntax: `(servername1[:port1][;property=value[;...]][,servername2[:port2][;property=value[;...]]]...)` The server name (`servername1`, `servername2`, and so on) is required for each alternate server entry. The port number (`port1`, `port2`, and so on) and the connection properties (`property=value`) are optional for each alternate server entry. If the port is unspecified, the port number of the primary server is used. If the port number of the primary server is not specified, the default port number `1521` is used. The optional connection properties are the `ServiceName` and `SID`. * Metadata discovery: The setting determines whether comments on columns (remarks) and aliases for schema objects such as tables or views (synonyms) are retrieved when assets are added by using this connection\. <!-- </ul> --> For **Private connectivity**, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html)\. ### Choose the method for creating a connection based on where you are in the platform ### **In a project** : Click **Assets > New asset > Connect to a data source**\. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html)\. **In a deployment space** : Click **Add to space > Connection**\. 
See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html)\. **In the Platform assets catalog** : Click **New connection**\. For more information, see [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html)\. ### Next step: Add data assets from the connection ### <!-- <ul> --> * See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html)\. <!-- </ul> --> ## Where you can use this connection ## You can use Oracle connections in the following workspaces and tools: **Projects** <!-- <ul> --> * Data Refinery * SPSS Modeler * Synthetic Data Generator <!-- </ul> --> **Catalogs** <!-- <ul> --> * Platform assets catalog <!-- </ul> --> ## Oracle setup ## [Oracle installation](https://docs.oracle.com/cd/E11882_01/server.112/e10897/install.htm#ADMQS002) ### Running SQL statements ### To ensure that your SQL statements run correctly, refer to the [Oracle Supported SQL Syntax and Functions](https://docs.oracle.com/en/database/oracle/oracle-database/21/gmswn/database-gateway-sqlserver-supported-sql-syntax-functions.html) for the correct syntax\. ## Learn more ## [Oracle product documentation](https://docs.oracle.com/en/database/) **Parent topic:**[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) <!-- </article "role="article" "> -->
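The alternate-servers value can be assembled programmatically. A minimal Python sketch, assuming the syntax of comma-separated server entries inside parentheses with semicolon-separated optional properties; the function name and example hostnames are illustrative, not part of the product:

```python
def alternate_servers(entries):
    """Build an Oracle alternate-servers value.

    Each entry is (servername, port, properties); port and properties
    may be None. Entries are comma-separated inside parentheses, and
    optional properties (ServiceName, SID) are appended with
    semicolons, following the syntax described in this topic.
    """
    parts = []
    for server, port, props in entries:
        part = server
        if port is not None:
            part += f":{port}"
        if props:
            part += "".join(f";{k}={v}" for k, v in props.items())
        parts.append(part)
    return "(" + ",".join(parts) + ")"

# Two alternates: one with an explicit port and a ServiceName property,
# one that falls back to the primary server's port.
value = alternate_servers([
    ("ora-standby1.example.com", 1521, {"ServiceName": "orclpdb1"}),
    ("ora-standby2.example.com", None, None),
])
print(value)
# (ora-standby1.example.com:1521;ServiceName=orclpdb1,ora-standby2.example.com)
```

Omitting the port for the second entry lets the server-side default described above (the primary server's port, or 1521) apply.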
4517071B8CDD91311A13DECDD9D0A7FD761AA616
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-plananalytics.html?context=cdpaas&locale=en
IBM Planning Analytics connection
# IBM Planning Analytics connection #

To access your data in Planning Analytics, create a connection asset for it.

Planning Analytics (formerly known as "TM1") is an enterprise performance management database that stores data in in-memory multidimensional OLAP cubes.

## Supported versions ##

IBM Planning Analytics, version 2.0.5 or later

## Create a connection to Planning Analytics ##

To create the connection asset, you need these connection details:

* TM1 server API root URL
* Authentication type (Basic or CAM Credentials)
* Username and password
* SSL certificate (if required by the database server)

For authentication setup information, see [Authenticating and managing sessions](https://www.ibm.com/docs/SSD29G_2.0.0/com.ibm.swg.ba.cognos.tm1_rest_api.2.0.0.doc/dg_tm1_odata_auth.html).

### Choose the method for creating a connection based on where you are in the platform ###

**In a project**: Click **Assets > New asset > Connect to a data source**. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html).

**In a deployment space**: Click **Add to space > Connection**. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).

**In the Platform assets catalog**: Click **New connection**. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html).

### Next step: Add data assets from the connection ###

* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html).

## Where you can use this connection ##

You can use Planning Analytics connections in the following workspaces and tools:

**Projects**

* Data Refinery
* Decision Optimization experiments
* Notebooks. Click **Read data** on the **Code snippets** pane to get the connection credentials and load the data into a data structure. See [Load data from data source connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html#conns).
* SPSS Modeler
* Synthetic Data Generator

**Catalogs**

* Platform assets catalog

## Planning Analytics setup ##

Enable the TM1 REST APIs on the TM1 server. See the TM1 REST API [Installation and configuration](https://www.ibm.com/docs/SSD29G_2.0.0/com.ibm.swg.ba.cognos.tm1_rest_api.2.0.0.doc/dg_tm1_odata_install.html) documentation.

## Cube dimension order ##

**Versions earlier than TM1 11.4**

For best performance, do not combine string and numeric data in a single cube. However, if the cube does include both string and numeric data, the string elements must be in the last dimension when the cube is created. Reordering dimensions later is ignored.

**Version TM1 11.4 or later**

The default setting in Planning Analytics for cube creation is `current`. This setting might cause errors or unexpected results when you use the Planning Analytics connection. Instead, set the interaction property `use_creation_order` value to `true`.

## Restriction ##

For Data Refinery, you can use this connection only as a source. You cannot use this connection as a target connection or as a target connected data asset.

## Learn more ##

[Planning Analytics product documentation](https://www.ibm.com/docs/planning-analytics/2.0.0)

**Parent topic:** [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
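As a rough illustration of the "TM1 server API root URL" connection detail, the sketch below assembles a URL of the common `https://<host>:<port>/api/v1` form. The `/api/v1` path is an assumption based on the TM1 REST API, and the helper name and hostnames are illustrative; confirm the root URL against your server's configuration:

```python
from urllib.parse import urlunsplit

def tm1_api_root(host, http_port, use_ssl=True):
    """Assemble a TM1 server API root URL of the form
    https://<host>:<port>/api/v1. The /api/v1 path is an assumption
    based on the TM1 REST API; verify it for your deployment."""
    scheme = "https" if use_ssl else "http"
    return urlunsplit((scheme, f"{host}:{http_port}", "/api/v1", "", ""))

print(tm1_api_root("planning.example.com", 12354))
# https://planning.example.com:12354/api/v1
```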
1BED610A414085E625BD32AAE5FFAC81B41F97E0
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-postgresql.html?context=cdpaas&locale=en
PostgreSQL connection
# PostgreSQL connection #

To access your data in PostgreSQL, you must create a connection asset for it.

PostgreSQL is an open source and customizable object-relational database.

## Supported versions ##

* PostgreSQL 15.0 and later
* PostgreSQL 14.0 and later
* PostgreSQL 13.0 and later
* PostgreSQL 12.0 and later
* PostgreSQL 11.0 and later
* PostgreSQL 10.1 and later
* PostgreSQL 9.6 and later

## Create a connection to PostgreSQL ##

To create the connection asset, you need the following connection details:

* Database name
* Hostname or IP address
* Port number
* Username and password
* SSL certificate (if required by the database server)

For **Private connectivity**, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html).

### Choose the method for creating a connection based on where you are in the platform ###

**In a project**: Click **Assets > New asset > Connect to a data source**. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html).

**In a deployment space**: Click **Add to space > Connection**. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).

**In the Platform assets catalog**: Click **New connection**. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html).

### Next step: Add data assets from the connection ###

* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html).

## Where you can use this connection ##

You can use PostgreSQL connections in the following workspaces and tools:

**Projects**

* Data Refinery
* Decision Optimization
* SPSS Modeler
* Synthetic Data Generator

**Catalogs**

* Platform assets catalog

## PostgreSQL setup ##

[PostgreSQL installation](https://www.pgadmin.org/docs/pgadmin4/latest/getting_started.html)

### Running SQL statements ###

To ensure that your SQL statements run correctly, refer to the [SQL Syntax](https://www.postgresql.org/docs/current/sql-syntax.html) section in the PostgreSQL documentation.

## Learn more ##

[PostgreSQL documentation](https://www.postgresql.org/docs/)

**Parent topic:** [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
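The connection details listed above map directly onto a libpq-style keyword/value connection string, which most PostgreSQL clients accept. A minimal sketch; the helper name and sample values are illustrative, and values are assumed to need no quoting:

```python
def postgres_dsn(dbname, host, port, user, password, sslmode="require"):
    """Build a libpq-style keyword/value connection string from the
    details listed in this topic. Keys follow libpq conventions;
    values here are assumed to contain no spaces or quotes."""
    parts = {
        "host": host,
        "port": port,
        "dbname": dbname,
        "user": user,
        "password": password,
        "sslmode": sslmode,
    }
    return " ".join(f"{k}={v}" for k, v in parts.items())

print(postgres_dsn("sales", "pg.example.com", 5432, "analyst", "s3cret"))
# host=pg.example.com port=5432 dbname=sales user=analyst password=s3cret sslmode=require
```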
2353FFC21B5B117F70EEEC0E71C9D6FB2F6DC022
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-presto.html?context=cdpaas&locale=en
Presto connection
# Presto connection #

To access your data in Presto, create a connection asset for it.

Presto is a fast and reliable SQL engine for Data Analytics and the Open Lakehouse.

## Supported versions ##

* Version 0.279 and earlier

## Create a connection to Presto ##

To create the connection asset, you need these connection details:

* Hostname or IP address
* Port
* Username
* Password (required if you connect to Presto with SSL enabled)
* SSL certificate (if required by the Presto server)

### Connecting to Presto within IBM watsonx.data ###

To connect to a Presto server within watsonx.data on IBM Cloud, use these connection details:

* Username: `ibmlhapikey`
* Password (for SSL-enabled, which is the default): An IBM Cloud API key. For more information, see [Connecting to Presto server](https://cloud.ibm.com/docs/watsonxdata?topic=watsonxdata-con-presto-serv).

To connect to a Presto server within watsonx.data on Cloud Pak for Data or stand-alone watsonx.data, use the username and password that you use for the watsonx.data console.

### Choose the method for creating a connection based on where you are in the platform ###

**In a project**: Click **Assets > New asset > Connect to a data source**. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html).

**In a deployment space**: Click **Add to space > Connection**. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).

**In the Platform assets catalog**: Click **New connection**. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html).

### Next step: Add data assets from the connection ###

* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html).

## Where you can use this connection ##

You can use the Presto connection in the following workspaces and tools:

**Projects**

* Data Refinery
* SPSS Modeler
* Synthetic Data Generator

**Catalogs**

* Platform assets catalog

## Presto setup ##

To set up Presto, see [Presto installation](https://prestodb.io/docs/current/installation.html).

## Restriction ##

You can use this connection only for source data. You cannot write to data or export data with this connection.

## Limitation ##

The Presto connection does not support the Apache Cassandra Time data type.

### Running SQL statements ###

To ensure that your SQL statements run correctly, refer to the [SQL Statement Syntax](https://prestodb.io/docs/current/sql.html) for the correct syntax.

## Learn more ##

[Presto documentation](https://prestodb.io/docs/current/index.html)

**Parent topic:** [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
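For the SSL-enabled watsonx.data case above, the credentials are ordinary HTTP Basic authentication material: the fixed username `ibmlhapikey` and an IBM Cloud API key as the password. A minimal sketch of building the corresponding Authorization header; the helper name and placeholder API key are illustrative:

```python
import base64

def presto_basic_auth(api_key, username="ibmlhapikey"):
    """Standard HTTP Basic Authorization header built from the
    watsonx.data on IBM Cloud credentials described in this topic:
    username ibmlhapikey, password = an IBM Cloud API key."""
    token = base64.b64encode(f"{username}:{api_key}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

# "MY_API_KEY" is a placeholder for a real IBM Cloud API key.
headers = presto_basic_auth("MY_API_KEY")
print(headers["Authorization"])
```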
2B5427922D2D03F0FDDFE73BBDE7E8B8DCDA6A60
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-puredata.html?context=cdpaas&locale=en
IBM Netezza Performance Server connection
# IBM Netezza Performance Server connection #

To access your data in IBM Netezza Performance Server, you must create a connection asset for it.

Netezza Performance Server is a platform for high-performance data warehousing and analytics.

## Supported versions ##

* IBM Netezza Performance Server 11.x
* IBM Netezza appliance software 7.0.x, 7.1.x, 7.2.x

## Create a connection to Netezza Performance Server ##

To create the connection asset, you need the following connection details:

* Database name
* Hostname or IP address
* Port number
* Username and password
* SSL certificate (if required by the database server)

For **Private connectivity**, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html).

### Choose the method for creating a connection based on where you are in the platform ###

**In a project**: Click **Assets > New asset > Connect to a data source**. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html).

**In a deployment space**: Click **Add to space > Connection**. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).

**In the Platform assets catalog**: Click **New connection**. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html).

### Next step: Add data assets from the connection ###

* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html).

## Where you can use this connection ##

You can use Netezza Performance Server connections in the following workspaces and tools:

**Projects**

* SPSS Modeler
* Synthetic Data Generator

**Catalogs**

* Platform assets catalog

## Netezza Performance Server setup ##

* [Netezza Performance Server Getting started](https://www.ibm.com/docs/SSTNZ3/get-started/get_strt.html)
* [PureData System for Analytics Initial system setup](https://www.ibm.com/docs/psfa/7.2.1?topic=overview-initial-system-setup-information)

### Running SQL statements ###

To ensure that your SQL statements run correctly, refer to the product documentation:

* [Netezza Performance Server SQL command reference](https://www.ibm.com/docs/SSTNZ3/nps-cpds-20X/dbuser/r_dbuser_ntz_sql_command_reference.html)
* [PureData System for Analytics IBM Netezza SQL Extensions toolkit](https://www.ibm.com/docs/en/psfa/7.2.1?topic=netezza-sql-extensions-toolkit)

## Learn more ##

* [IBM Netezza Performance Server documentation](https://www.ibm.com/docs/netezza)
* [IBM PureData System for Analytics documentation](https://www.ibm.com/docs/psfa)

**Parent topic:** [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
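Clients commonly combine the details above into a JDBC URL. A minimal sketch, assuming the `jdbc:netezza://host:port/database` scheme used by Netezza JDBC drivers; the helper name and example values are illustrative, so verify the scheme against your driver's documentation:

```python
def netezza_jdbc_url(host, port, database):
    """JDBC-style URL assembled from this topic's connection details.
    The jdbc:netezza:// prefix is an assumption based on common
    Netezza driver conventions."""
    return f"jdbc:netezza://{host}:{port}/{database}"

print(netezza_jdbc_url("nps.example.com", 5480, "warehouse"))
# jdbc:netezza://nps.example.com:5480/warehouse
```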
85DFC4B40DA36A5D66892B5B231C9743C67D7E71
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-redshift.html?context=cdpaas&locale=en
Amazon Redshift connection
# Amazon Redshift connection #

To access your data in Amazon Redshift, create a connection asset for it.

Amazon Redshift is a data warehouse product that forms part of the larger cloud-computing platform Amazon Web Services (AWS).

## Create a connection to Amazon Redshift ##

To create the connection asset, you need these connection details:

* Database name
* Hostname or IP address
* Port number
* Username and password
* SSL certificate (if required by the database server)

### Choose the method for creating a connection based on where you are in the platform ###

**In a project**: Click **Assets > New asset > Connect to a data source**. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html).

**In a deployment space**: Click **Add to space > Connection**. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).

**In the Platform assets catalog**: Click **New connection**. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html).

### Next step: Add data assets from the connection ###

* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html).

## Where you can use this connection ##

You can use Amazon Redshift connections in the following workspaces and tools:

**Projects**

* Data Refinery
* SPSS Modeler
* Synthetic Data Generator

**Catalogs**

* Platform assets catalog

## Amazon Redshift setup ##

See [Amazon Redshift setup prerequisites](https://docs.aws.amazon.com/redshift/latest/gsg/rs-gsg-prereq.html) for setup information.

### Running SQL statements ###

To ensure that your SQL statements run correctly, refer to the [Amazon Redshift documentation](https://docs.aws.amazon.com/redshift/latest/dg/cm_chap_SQLCommandRef.html) for the correct syntax.

## Learn more ##

[Amazon Redshift documentation](https://docs.aws.amazon.com/redshift/index.html)

**Parent topic:** [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
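The details above are typically combined into a JDBC URL when connecting from client tooling. A minimal sketch, assuming the common `jdbc:redshift://host:port/database` form and the conventional default Redshift port 5439; the helper name and example endpoint are illustrative, so confirm both against your driver:

```python
def redshift_jdbc_url(host, database, port=5439):
    """JDBC-style URL from this topic's connection details.
    jdbc:redshift:// and the 5439 default port follow common
    Redshift driver conventions; verify for your driver."""
    return f"jdbc:redshift://{host}:{port}/{database}"

print(redshift_jdbc_url("examplecluster.abc123.us-west-2.redshift.amazonaws.com", "dev"))
# jdbc:redshift://examplecluster.abc123.us-west-2.redshift.amazonaws.com:5439/dev
```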
02326495F914D005BDE7360F0826E8C140816613
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-salesforce.html?context=cdpaas&locale=en
Salesforce.com connection
Salesforce.com connection To access your data in Salesforce.com, create a connection asset for it. Salesforce.com is a cloud-based software company which provides customer relationship management (CRM). The Salesforce.com connection supports the standard SQL query language to select, insert, update, and delete data from Salesforce.com products and other supported products that use the Salesforce API. Other supported products that use the Salesforce API * Salesforce AppExchange * FinancialForce * Service Cloud * ServiceMax * Veeva CRM Create a connection to Salesforce.com To create the connection asset, you need these connection details: * The username to access the Salesforce.com server. * The password and security token to access the Salesforce.com server. In the Password field, append your security token to the end of your password. For example, MypasswordMyAccessToken. For information about access tokens, see [Reset Your Security Token](https://help.salesforce.com/articleView?id=sf.user_security_token.htm&type=5). * The Salesforce.com server name. The default is login.salesforce.com. Choose the method for creating a connection based on where you are in the platform In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). Next step: Add data assets from the connection * See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). 
Where you can use this connection You can use Salesforce.com connections in the following workspaces and tools: Projects * SPSS Modeler * Synthetic Data Generator Catalogs * Platform assets catalog Restriction You can only use this connection for source data. You cannot write to data or export data with this connection. Known issue The following objects in the SFORCE schema are not supported: APPTABMEMBER, CONTENTDOCUMENTLINK, CONTENTFOLDERITEM, CONTENTFOLDERMEMBER, DATACLOUDADDRESS, DATACLOUDCOMPANY, DATACLOUDCONTACT, DATACLOUDANDBCOMPANY, DATASTATISTICS, ENTITYPARTICLE, EVENTBUSSUBSCRIBER, FIELDDEFINITION, FLEXQUEUEITEM, ICONDEFINITION, IDEACOMMENT, LISTVIEWCHARINSTANCE, LOGINEVENT, OUTGOINGEMAIL, OUTGOINGEMAILRELATION, OWNERCHANGEOPTIONINFO, PICKLISTVALUEINFO, PLATFORMACTION, RECORDACTIONHISTORY, RELATIONSHIPDOMAIN, RELATIONSHIPINFO, SEARCHLAYOUT, SITEDETAIL, USERAPPMENUITEM, USERENTITYACCESS, USERFIELDACCESS, USERRECORDACCESS, VOTE. Learn more * [Get Started with Salesforce](https://help.salesforce.com/s/articleView?id=sf.basics_welcome_salesforce_users.htm&type=5) * [Salesforce editions with API access](https://help.salesforce.com/s/articleView?id=000326486&type=1) Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
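The password-plus-token rule above is a common stumbling block, so here is a minimal sketch of the concatenation (the helper name `password_field_value` is illustrative, not part of the platform):

```python
def password_field_value(password: str, security_token: str) -> str:
    """Value to enter in the connection's Password field: the account
    password with the Salesforce security token appended directly,
    with no separator between them."""
    return password + security_token

# Matches the documented example: password "Mypassword", token "MyAccessToken"
print(password_field_value("Mypassword", "MyAccessToken"))  # MypasswordMyAccessToken
```

Note that Salesforce issues a new security token when the account password is reset, so the concatenated value in the Password field must be updated as well.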
9288D1E76019A9D1F08873E515B9798906CC1C4B
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sap-ase.html?context=cdpaas&locale=en
SAP ASE connection
SAP ASE connection To access your data in SAP ASE, create a connection asset for it. SAP ASE is a relational model database server. SAP ASE was formerly Sybase. Supported versions SAP Sybase ASE 11.5+, 16.0+ Create a connection to SAP ASE To create the connection asset, you need these connection details: * Database name * Hostname or IP address * Port number * Username and password * SSL certificate (if required by the database server) For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). Choose the method for creating a connection based on where you are in the platform In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). Next step: Add data assets from the connection * See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). 
Where you can use this connection You can use SAP ASE connections in the following workspaces and tools: Projects * SPSS Modeler * Synthetic Data Generator Catalogs * Platform assets catalog SAP ASE setup [Get Started with SAP ASE](https://www.sap.com/canada/products/sybase-ase/get-started.html) Running SQL statements To ensure that your SQL statements run correctly, refer to the [SAP ASE documentation](https://help.sap.com/viewer/product/SAP_ASE/16.0.4.1/en-US?task=whats_new_task) for the correct syntax. Learn more [SAP ASE technical information](https://www.sap.com/canada/products/sybase-ase/technical-information.html) Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
BC1AD7048032258F29E1D4081A2EEC98B36D13CF
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sap-iq.html?context=cdpaas&locale=en
SAP IQ connection
SAP IQ connection To access your data in SAP IQ, create a connection asset for it. SAP IQ is a column-based, petabyte scale, relational database software system used for business intelligence, data warehousing, and data marts. SAP IQ was formerly Sybase IQ. Create a connection to SAP IQ To create the connection asset, you need these connection details: * Database name * Hostname or IP address * Port number * Username and password * SSL certificate (if required by the database server) For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). Choose the method for creating a connection based on where you are in the platform In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). Next step: Add data assets from the connection * See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). Where you can use this connection You can use SAP IQ connections in the following workspaces and tools: Projects * SPSS Modeler * Synthetic Data Generator Catalogs * Platform assets catalog SAP IQ setup [Get Started with SAP IQ](https://www.sap.com/canada/products/sybase-iq-big-data-management/get-started.html) Restriction You can use this connection only for source data. You cannot write to data or export data with this connection. 
Running SQL statements To ensure that your SQL statements run correctly, refer to the [SAP IQ SQL Reference](https://help.sap.com/docs/SAP_IQ/a898e08b84f21015969fa437e89860c8/7b5bd4e8cdcb4593aba6f2895572b0a9.html) for the correct syntax. Learn more [SAP IQ technical information](https://www.sap.com/canada/products/sybase-iq-big-data-management/technical-information.html) Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
6CD7B46E0C35165BFE21BE4967B68481E9BE840F
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sapodata.html?context=cdpaas&locale=en
SAP OData connection
SAP OData connection To access your data in SAP OData, create a connection asset for it. Use the SAP OData connection to extract data from an SAP system through its exposed OData services. Supported SAP OData products The SAP OData connection is supported on SAP products that support the OData protocol version 2. Example products are S/4HANA (on premises or cloud), ERP, and CRM. Create a connection to SAP OData To create the connection asset, you need these connection details: Credentials type: * API Key * Basic * None Encryption: SSL certificate (if required by the database server) For Private connectivity, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html). Choose the method for creating a connection based on where you are in the platform In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). Next step: Add data assets from the connection * See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). 
Where you can use this connection You can use the SAP OData connection in the following workspaces and tools: Projects * Data Refinery * SPSS Modeler * Synthetic Data Generator Catalogs * Platform assets catalog SAP OData setup See [Prerequisites for using the SAP ODATA Connector](https://www.ibm.com/support/pages/node/886655) for the SAP Gateway setup instructions. Restrictions * For Data Refinery, you can use this connection only as a source. You cannot use this connection as a target connection or as a target connected data asset. * For SPSS Modeler, you cannot create new entity sets. Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
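Because the connection works through exposed OData services, it can help to see how an OData version 2 request URL is assembled. A minimal sketch under stated assumptions: the service root and entity set below are hypothetical, while `$top` and `$format` are standard OData system query options:

```python
from urllib.parse import urlencode

def odata_query_url(service_root: str, entity_set: str, **options) -> str:
    """Build an OData v2 query URL from a service root, an entity set,
    and system query options (top, select, format, ...), which OData
    spells with a leading '$'."""
    params = {f"${name}": value for name, value in options.items()}
    return f"{service_root.rstrip('/')}/{entity_set}?{urlencode(params, safe='$')}"

# Hypothetical SAP Gateway service root and entity set, for illustration only
url = odata_query_url("https://myhost/sap/opu/odata/sap/ZDEMO_SRV",
                      "Products", top=5, format="json")
print(url)  # https://myhost/sap/opu/odata/sap/ZDEMO_SRV/Products?$top=5&$format=json
```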
8B9532ADFC4FE3D9213BBE56DA3323C759426287
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-singlestore.html?context=cdpaas&locale=en
SingleStoreDB connection
SingleStoreDB connection To access your data in SingleStoreDB, create a connection asset for it. SingleStoreDB is a fast, distributed, and highly scalable cloud-based SQL database. You can use SingleStoreDB to power real-time and data-intensive applications. Use SingleStoreDB and watsonx.ai for generative AI applications. Benefits include semantic search, fast ingest, and low-latency response times for foundation models and traditional machine learning. Create a connection to SingleStoreDB To create the connection asset, you need these connection details: * Database name * Hostname or IP address * Port number * Username and password * SSL certificate (if required by the database server) Choose the method for creating a connection based on where you are in the platform In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). Next step: Add data assets from the connection * See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). Where you can use this connection You can use the SingleStoreDB connection in the following workspaces and tools: Projects * Data Refinery * Decision Optimization * SPSS Modeler Catalogs * Platform assets catalog SingleStoreDB setup To set up SingleStoreDB, see [Getting Started with SingleStoreDB Cloud](https://docs.singlestore.com/cloud/getting-started-with-singlestoredb-cloud/). 
Running SQL statements To ensure that your SQL statements run correctly, refer to the SingleStore Docs [SQL Reference](https://docs.singlestore.com/db/v8.1/reference/sql-reference/) for the correct syntax. Learn more * [SingleStoreDB Cloud](https://docs.singlestore.com/) * [SingleStoreDB with IBM](https://www.ibm.com/products/singlestore) for information about the IBM partnership with SingleStoreDB that provides a single source of procurement, support, and security. Parent topic: [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
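The page names semantic search as a use case; as an illustration of the kind of SQL involved, here is a sketch that composes (but does not execute) a similarity query. It assumes SingleStoreDB's `DOT_PRODUCT` and `JSON_ARRAY_PACK` functions and a hypothetical `embeddings` table; verify the exact syntax for your version against the SQL Reference linked above:

```python
import json

def similarity_query(table: str, vector_column: str,
                     query_vector: list, top_k: int = 5) -> str:
    """Compose a SingleStoreDB-style similarity search: rank rows by the
    dot product between a stored vector column and a query embedding.
    This only builds the SQL string; nothing is executed here."""
    vec_json = json.dumps(query_vector)
    return (f"SELECT *, DOT_PRODUCT({vector_column}, "
            f"JSON_ARRAY_PACK('{vec_json}')) AS score "
            f"FROM {table} ORDER BY score DESC LIMIT {top_k};")

# Hypothetical table "embeddings" with a vector column "vec"
print(similarity_query("embeddings", "vec", [0.1, 0.2], top_k=3))
```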
0B5C4D75EA0A1CD2ADE09184EE2B159E23033CAE
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-snowflake.html?context=cdpaas&locale=en
Snowflake connection
Snowflake connection To access your data in Snowflake, you must create a connection asset for it. Snowflake is a cloud-based data storage and analytics service. Create a connection to Snowflake To create the connection asset, you need the following connection details: * Account name: The full name of your account * Database name * Role: The default access control role to use in the Snowflake session * Warehouse: The virtual warehouse Credentials Authentication method: * Username and password * Key-Pair: Enter the contents of the private key and the key passphrase (if configured). These properties must be set up by the Snowflake administrator. For information, see [Key Pair Authentication & Key Pair Rotation](https://docs.snowflake.com/en/user-guide/key-pair-auth) in the Snowflake documentation. * Okta URL endpoint: If your company uses native Okta SSO authentication, enter the Okta URL endpoint for your Okta account. Example: https://<okta_account_name>.okta.com. Leave this field blank to use Snowflake's default authentication. For information about federated authentication provided by Okta, see [Native SSO](https://docs.snowflake.com/en/user-guide/admin-security-fed-auth-use.html#native-sso-okta-only). Choose the method for creating a connection based on where you are in the platform In a project : Click Assets > New asset > Connect to a data source. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). In a deployment space : Click Add to space > Connection. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html). In the Platform assets catalog : Click New connection. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html). 
Next step: Add data assets from the connection * See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). Where you can use this connection You can use Snowflake connections in the following workspaces and tools: Projects * Data Refinery * Decision Optimization * SPSS Modeler * Synthetic Data Generator Catalogs * Platform assets catalog Snowflake setup [General Configuration ](https://docs.snowflake.com/en/user-guide/gen-conn-config.html) Running SQL statements To ensure that your SQL statements run correctly, refer to the [Snowflake SQL Command Reference](https://docs.snowflake.com/en/sql-reference-commands.html) for the correct syntax. Learn more [Snowflake in 20 Minutes](https://docs.snowflake.com/en/user-guide/getting-started-tutorial.html) Parent topic:[Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
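As a sketch of how the connection fields above fit together, the following assembles (but does not open) a connection-properties mapping. The key names follow the common Snowflake client convention of passing an Okta URL as the authenticator; treat them as an assumption and map them to your tool's actual fields:

```python
from typing import Optional

def snowflake_properties(account: str, database: str, warehouse: str,
                         role: Optional[str] = None,
                         okta_url: Optional[str] = None) -> dict:
    """Collect the Snowflake connection fields into one mapping.
    Illustrative only: nothing here contacts Snowflake."""
    props = {"account": account, "database": database, "warehouse": warehouse}
    if role:
        props["role"] = role
    if okta_url:
        # Native Okta SSO endpoints look like https://<okta_account_name>.okta.com
        if not (okta_url.startswith("https://") and okta_url.endswith(".okta.com")):
            raise ValueError("Expected an endpoint like https://<name>.okta.com")
        props["authenticator"] = okta_url  # assumed key name; see your tool's docs
    return props

# Hypothetical account, database, warehouse, role, and Okta endpoint
print(snowflake_properties("myorg-myaccount", "SALES", "COMPUTE_WH",
                           role="ANALYST", okta_url="https://myco.okta.com"))
```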
7946DCF2F69A7420490A7B5CA677C2273DE5764B
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sql-server.html?context=cdpaas&locale=en
Microsoft SQL Server connection
# Microsoft SQL Server connection #

You can create a connection asset for Microsoft SQL Server.

Microsoft SQL Server is a relational database management system.

## Supported versions ##

* Microsoft SQL Server 2000+
* Microsoft SQL Server 2000 Desktop Engine (MSDE 2000)
* Microsoft SQL Server 7.0

## Create a connection to Microsoft SQL Server ##

To create the connection asset, you need the following connection details:

* Database name
* Hostname or IP address
* Either the Port number or the Instance name. If the server is configured for dynamic ports, use the Instance name.
* Username and password
* Select **Use Active Directory** if the Microsoft SQL Server has been set up in a domain that uses NTLM (New Technology LAN Manager) authentication. Then enter the name of the domain that is associated with the username and password.
* SSL certificate (if required by the database server)

For **Private connectivity**, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html).

### Choose the method for creating a connection based on where you are in the platform ###

**In a project**: Click **Assets > New asset > Connect to a data source**. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html).

**In a deployment space**: Click **Add to space > Connection**. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).

**In the Platform assets catalog**: Click **New connection**.

### Next step: Add data assets from the connection ###

* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html).

## Where you can use this connection ##

You can use Microsoft SQL Server connections in the following workspaces and tools:

**Projects**

* Data Refinery
* Decision Optimization
* Notebooks. Click **Read data** on the **Code snippets** pane to get the connection credentials and load the data into a data structure. See [Load data from data source connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html#conns).
* SPSS Modeler
* Synthetic Data Generator

**Catalogs**

* Platform assets catalog

## Microsoft SQL Server setup ##

[Microsoft SQL Server installation](https://docs.microsoft.com/en-us/sql/database-engine/install-windows/install-sql-server?view=sql-server-ver15)

## Restriction ##

Except for NTLM authentication, Windows Authentication is not supported.

### Running SQL statements ###

To ensure that your SQL statements run correctly, refer to the [Transact-SQL Reference](https://docs.microsoft.com/en-us/sql/t-sql/language-reference?view=sql-server-ver15) for the correct syntax.

## Learn more ##

[Microsoft SQL Server documentation](https://docs.microsoft.com/en-us/sql/sql-server/?view=sql-server-ver15)

**Parent topic:** [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
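The port-versus-instance choice described above appears the same way in a raw ODBC connection string: a fixed port is appended with a comma, while a named instance (needed for dynamic ports) is appended with a backslash. A minimal sketch using the widely used `pyodbc` package — the hostnames and credentials are placeholders, and the driver name assumes Microsoft's ODBC Driver 18 is installed:

```python
def build_sqlserver_conn_str(host, database, user, password, port=None, instance=None):
    """Build an ODBC connection string from the connection details.

    Pass either a port number or an instance name; if the server is
    configured for dynamic ports, pass the instance name instead.
    """
    if instance:
        server = f"{host}\\{instance}"     # named instance, e.g. host\SQLEXPRESS
    else:
        server = f"{host},{port or 1433}"  # explicit port (1433 is the common default)
    return (
        "DRIVER={ODBC Driver 18 for SQL Server};"
        f"SERVER={server};DATABASE={database};UID={user};PWD={password};"
    )

def connect(conn_str):
    import pyodbc  # third-party: pip install pyodbc
    return pyodbc.connect(conn_str)

conn_str = build_sqlserver_conn_str("db.example.com", "Northwind", "sa", "********", port=1433)
```

With NTLM/Active Directory set up, the domain is typically folded into the user name (for example `DOMAIN\user`) rather than passed as a separate keyword.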
F64414C7F435B4B5E5A681A5F561C07780037836
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sqlquery.html?context=cdpaas&locale=en
IBM Cloud Data Engine connection
# IBM Cloud Data Engine connection #

To access your data in IBM Cloud Data Engine, create a connection asset for it.

IBM Cloud Data Engine is a service on IBM Cloud that you use to build, manage, and consume data lakes and their table assets in IBM Cloud Object Storage (COS). IBM Cloud Data Engine provides functions to load, prepare, and query big data that is stored in various formats. It also includes a metastore with table definitions. IBM Cloud Data Engine was formerly named "IBM Cloud SQL Query."

## Prerequisites ##

## Create a connection to IBM Cloud Data Engine ##

To create the connection asset, you need these connection details:

* The Cloud Resource Name (CRN) of the IBM Cloud Data Engine instance. Go to the IBM Cloud Data Engine service instance in your resources list in your IBM Cloud dashboard and copy the value of the CRN from the deployment details.
* Target Cloud Object Storage: A default location where IBM Cloud Data Engine stores query results. You can specify any Cloud Object Storage bucket that you have access to. You can also select the default Cloud Object Storage bucket that is created when you open the IBM Cloud Data Engine web console for the first time from the IBM Cloud dashboard. See the **Target location** field in the IBM Cloud Data Engine web console.
* IBM Cloud API key: An API key for a user or service ID that has access to your IBM Cloud Data Engine and Cloud Object Storage services (for both the Cloud Object Storage data that you want to query and the default target Cloud Object Storage location).

You can create a new API key for your own user:

1. In the IBM Cloud console, go to **Manage > Access (IAM)**.
2. In the left navigation, select **API keys**.
3. Select **Create an IBM Cloud API Key**.

### Credentials ###

IBM Cloud Data Engine uses the SSO credentials that are specified as a single API key, which authenticates a user or service ID.

The API key must have the following properties:

* Manage permission for the IBM Cloud Data Engine instance
* Read access to all Cloud Object Storage locations that you want to read from
* Write access to the default Cloud Object Storage target location
* Write access to the IBM Cloud Data Engine instance

### Choose the method for creating a connection based on where you are in the platform ###

**In a project**: Click **Assets > New asset > Connect to a data source**. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html).

**In a deployment space**: Click **Add to space > Connection**. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).

**In the Platform assets catalog**: Click **New connection**. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html).

### Next step: Add data assets from the connection ###

* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html).

## Where you can use this connection ##

You can use IBM Cloud Data Engine connections in the following workspaces and tools:

**Projects**

* Data Refinery
* Notebooks. See the Notebook [tutorial](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/e82c765fd1165439caccfc4ce8579a25) for using the IBM Cloud Data Engine (SQL Query) API to run SQL statements.
* SPSS Modeler
* Synthetic Data Generator

**Catalogs**

* Platform assets catalog

## Restrictions ##

You can only use this connection for source data. You cannot write to data or export data with this connection.

## IBM Cloud Data Engine setup ##

To set up IBM Cloud Data Engine on IBM Cloud Object Storage, see [Getting started with IBM Cloud Data Engine](https://cloud.ibm.com/docs/sql-query/sql-query.html?cm_sp=Cloud-Product-_-OnPageNavLink-IBMCloudPlatform_IBMCloudObjectStorage-_-COSsql_LearnMore#overview).

## Supported encryption ##

By default, all objects that are stored in IBM Cloud Object Storage are encrypted by using randomly generated keys and an all-or-nothing-transform (AONT). For details, see [Encrypting your data](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-encryption). Additionally, you can use managed keys to encrypt the SQL query texts and error messages that are stored in the job information. See [Encrypting SQL queries with Key Protect](https://cloud.ibm.com/docs/sql-query?topic=sql-query-keyprotect).

### Running SQL statements ###

[Video to learn how you can get started to run a basic query](https://cloud.ibm.com/docs/sql-query?topic=sql-query-overview#running)

## Learn more ##

* [IBM Cloud Data Engine](https://www.ibm.com/cloud/sql-query)
* [Connecting to a Cloud Data Lake with IBM Cloud Pak for Data](https://www.ibm.com/cloud/blog/connecting-to-a-cloud-data-lake-with-ibm-cloud-pak-for-data)

**Parent topic:** [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
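The three details above (instance CRN, target COS location, API key) are also exactly what IBM's `ibmcloudsql` Python client takes. The sketch below assumes that package's `SQLQuery` interface, whose exact signature may vary by version; all values are placeholders.

```python
# Hypothetical values for the three details the connection asset needs.
INSTANCE_CRN = "crn:v1:bluemix:public:sql-query:us-south:a/ACCOUNT_ID:INSTANCE_ID::"
API_KEY = "MY_IBM_CLOUD_API_KEY"          # key with Manage permission on the instance
TARGET_COS_URL = "cos://us-south/my-results-bucket/"  # default target for query results

def run_query(sql):
    """Submit a SQL statement through the Data Engine (SQL Query) service.

    Assumes the ibmcloudsql package's SQLQuery(api_key, crn, target) shape;
    check the package documentation for your installed version.
    """
    import ibmcloudsql  # third-party: pip install ibmcloudsql
    client = ibmcloudsql.SQLQuery(API_KEY, INSTANCE_CRN, TARGET_COS_URL)
    client.logon()
    return client.run_sql(sql)  # result is fetched from the target COS location
```

Because the connection is source-only in this platform, writes still go through Data Engine itself, never through the connection asset.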
4B8EBFDF4EA9E571D53720FE09A2CE610AECBCB9
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-tableau.html?context=cdpaas&locale=en
Tableau connection
# Tableau connection #

To access your data in Tableau, you must create a connection asset for it.

Tableau is an interactive data visualization platform.

## Supported products ##

Tableau Server 2020.3.3 and Tableau Cloud

## Create a connection to Tableau ##

To create the connection asset, you need the following connection details:

* Hostname or IP address
* Port number
* Site: The name of the Tableau site to use
* For **Authentication method**, you need either a username and password or an Access token (with Access token name and Access token secret).
* SSL certificate (if required by the database server)

### Choose the method for creating a connection based on where you are in the platform ###

**In a project**: Click **Assets > New asset > Connect to a data source**. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html).

**In a deployment space**: Click **Add to space > Connection**. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).

**In the Platform assets catalog**: Click **New connection**. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html).

### Next step: Add data assets from the connection ###

* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html).

## Where you can use this connection ##

You can use Tableau connections in the following workspaces and tools:

**Projects**

* SPSS Modeler
* Synthetic Data Generator

**Catalogs**

* Platform assets catalog

## Tableau setup ##

* [Get Started with Tableau Server on Linux](https://help.tableau.com/current/server-linux/en-us/get_started_server.htm)
* [Get Started with Tableau Server on Windows](https://help.tableau.com/current/server/en-us/get_started_server.htm)
* [Get Started with Tableau Cloud](https://help.tableau.com/current/online/en-us/to_get_started.htm)

## Restriction ##

You can use this connection only for source data. You cannot write to data or export data with this connection.

### Running SQL statements ###

To ensure that your SQL statements run correctly, refer to [Run Initial SQL](https://help.tableau.com/current/online/en-us/connect_basic_initialsql.htm) for the correct syntax.

## Learn more ##

* [Tableau](https://www.tableau.com/)
* [SSL for Tableau Server on Linux](https://help.tableau.com/current/server-linux/en-us/ssl.htm)
* [SSL for Tableau Server on Windows](https://help.tableau.com/current/server/en-us/ssl.htm)
* [Security in Tableau Cloud](https://help.tableau.com/current/online/en-us/to_security.htm)

**Parent topic:** [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
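The two authentication choices listed above (username/password versus access token name and secret) mirror the two auth classes in Tableau's official `tableauserverclient` Python library. A hedged sketch, with placeholder server and site values, showing how either credential style signs in:

```python
# Hypothetical server details; replace with your own Tableau Server or Cloud values.
SERVER_URL = "https://tableau.example.com"  # hostname from the connection details
SITE = "analytics"                          # the name of the Tableau site to use

def sign_in(server_url, site, token_name=None, token_secret=None,
            username=None, password=None):
    """Sign in with a personal access token if given, else username/password."""
    import tableauserverclient as TSC  # third-party: pip install tableauserverclient
    if token_name and token_secret:
        auth = TSC.PersonalAccessTokenAuth(token_name, token_secret, site_id=site)
    else:
        auth = TSC.TableauAuth(username, password, site_id=site)
    server = TSC.Server(server_url, use_server_version=True)
    server.auth.sign_in(auth)
    return server
```

Access tokens are generally preferable for automation because they can be revoked independently of the user's password.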
71A4244C07321F32F283E49CFD6D6AFA19639744
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-teradata.html?context=cdpaas&locale=en
Teradata connection
# Teradata connection #

To access your data in Teradata, you must create a connection asset for it.

Teradata provides database and analytics-related services and products.

## Supported versions ##

Teradata databases 15.10, 16.10, 17.00, 17.10, and 17.20

## Create a connection to Teradata ##

To create the connection asset, you need the following connection details:

* Database name
* Hostname or IP address
* Port number
* Client character set: **IMPORTANT**: Do not enter a value unless you are instructed by IBM support. The character set value overrides the Teradata JDBC Driver's normal mapping of the Teradata session character sets. Data corruption can occur if you specify the wrong character set. If no value is specified, UTF16 is used.
* Authentication method: Select the security mechanism to use to authenticate the user:
    * **TD2 (Teradata Method 2)**: Use the Teradata security mechanism.
    * **LDAP**: Use an LDAP security mechanism for external authentication.
* Username and password
* SSL certificate (if required by the database server)

For **Private connectivity**, to connect to a database that is not externalized to the internet (for example, behind a firewall), you must set up a [secure connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html).

### Choose the method for creating a connection based on where you are in the platform ###

**In a project**: Click **Assets > New asset > Connect to a data source**. See [Adding a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html).

**In a deployment space**: Click **Add to space > Connection**. See [Adding connections to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).

**In the Platform assets catalog**: Click **New connection**. See [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html).

### Next step: Add data assets from the connection ###

* See [Add data from a connection in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html).

## Where you can use this connection ##

You can use Teradata connections in the following workspaces and tools:

**Projects**

* Decision Optimization
* SPSS Modeler
* Synthetic Data Generator

**Catalogs**

* Platform assets catalog

### Running SQL statements ###

To ensure that your SQL statements run correctly, refer to the [Teradata SQL documentation](https://docs.teradata.com/reader/eWpPpcMoLGQcZEoyt5AjEg/9iudpbZXGZ_rAb7c6PL54g) for the correct syntax.

## Learn more ##

* [Teradata documentation](https://docs.teradata.com/)
* [Teradata Community](https://support.teradata.com/community)

**Parent topic:** [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)

*Teradata JDBC Driver 17.00.00.03 Copyright (C) 2023 by Teradata. All rights reserved. IBM provides embedded usage of the Teradata JDBC Driver under license from Teradata solely for use as part of the IBM Watson service offering.*
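The TD2-versus-LDAP choice above corresponds to the `logmech` parameter of Teradata's official `teradatasql` Python driver. A minimal sketch with placeholder values — the parameter names are standard `teradatasql` usage, not part of the platform connection asset:

```python
def build_teradata_params(host, user, password, database, logmech="TD2"):
    """Assemble keyword arguments for teradatasql.connect().

    logmech selects the security mechanism: "TD2" for the native Teradata
    method, "LDAP" for external LDAP authentication.
    """
    if logmech not in ("TD2", "LDAP"):
        raise ValueError("logmech must be 'TD2' or 'LDAP'")
    return {"host": host, "user": user, "password": password,
            "database": database, "logmech": logmech}

def connect(params):
    import teradatasql  # third-party: pip install teradatasql
    return teradatasql.connect(**params)

# Hypothetical connection details.
params = build_teradata_params("td.example.com", "dbc", "********", "SALES", logmech="LDAP")
```

Leave the client character set alone in code for the same reason as in the connection asset: the driver's default session-character-set mapping is almost always correct.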
53EE442D78ABE20AAA100DDA3FF139E566842C2E
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html?context=cdpaas&locale=en
Connectors
Connectors You can add connections to a broad array of data sources in projects. Source connections can be used to read data; target connections can be used to load (save) data. When you create a target connection, be sure to use credentials that have Write permission or you won't be able to save data to the target. From a project, you must [create a connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) to a data source before you can read data from it or load data to it. * [IBM services](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html?context=cdpaas&locale=enibm) * [Third-party services](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html?context=cdpaas&locale=enthird) * [Supported connectors by tool](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html?context=cdpaas&locale=enst) IBM services * [IBM Cloud Data Engine](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sqlquery.html). Supports source connections only. * [IBM Cloud Databases for DataStax](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-datastax.html) * [IBM Cloud Databases for MongoDB](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-mongodb.html). Supports source connections only. 
* [IBM Cloud Databases for MySQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-compose-mysql.html) * [IBM Cloud Databases for PostgreSQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-dbase-postgresql.html) * [IBM Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html) * [IBM Cloud Object Storage (infrastructure)](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos-infra.html) * [IBM Cloudant](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cloudant.html) * [IBM Cognos Analytics](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cognos.html). Supports source connections only. * [IBM Data Virtualization Manager for z/OS](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-datavirt-z.html) * [IBM Db2](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2.html) * [IBM Db2 Big SQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2-bigsql.html) * [IBM Db2 for i](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2i.html) * [IBM Db2 for z/OS](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2zos.html) * [IBM Db2 on Cloud](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2-cloud.html) * [IBM Db2 Warehouse](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2-wh.html) * [IBM Informix](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-informix.html) * [IBM Netezza Performance Server](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-puredata.html) * [IBM Planning Analytics](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-plananalytics.html) * [IBM Watson Query](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-data-virtual.html). Supports source connections only. 
## Third-party services

* [Amazon RDS for MySQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azrds-mysql.html)
* [Amazon RDS for Oracle](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azrds-oracle.html)
* [Amazon RDS for PostgreSQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azrds-postresql.html)
* [Amazon Redshift](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-redshift.html)
* [Amazon S3](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-amazon-s3.html)
* [Apache Cassandra](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cassandra.html)
* [Apache Derby](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-derby.html)
* [Apache HDFS](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-hdfs.html)
* [Apache Hive](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-hive.html). *Supports source connections only.*
* [Box](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-box.html)
* [Cloudera Impala](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cloudera.html). *Supports source connections only.*
* [Dremio](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-dremio.html). *Supports source connections only.*
* [Dropbox](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-dropbox.html)
* [Elasticsearch](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-elastic.html)
* [FTP](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-ftp.html)
* [Generic S3](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-generics3.html)
* [Google BigQuery](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-bigquery.html)
* [Google Cloud Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cloud-storage.html)
* [Greenplum](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-greenplum.html)
* [HTTP](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-http.html). *Supports source connections only.*
* [Looker](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-looker.html). *Supports source connections only.*
* [MariaDB](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-mariadb.html)
* [Microsoft Azure Blob Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azureblob.html)
* [Microsoft Azure Cosmos DB](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cosmosdb.html)
* [Microsoft Azure Data Lake Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azuredls.html)
* [Microsoft Azure File Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azurefs.html)
* [Microsoft Azure SQL Database](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azure-sql.html)
* [Microsoft SQL Server](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sql-server.html)
* [MongoDB](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-mongo.html). *Supports source connections only.*
* [MySQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-mysql.html)
* [OData](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-odata.html)
* [Oracle](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-oracle.html)
* [PostgreSQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-postgresql.html)
* [Presto](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-presto.html). *Supports source connections only.*
* [Salesforce.com](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-salesforce.html). *Supports source connections only.*
* [SAP ASE](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sap-ase.html)
* [SAP IQ](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sap-iq.html). *Supports source connections only.*
* [SAP OData](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sapodata.html)
* [SingleStoreDB](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-singlestore.html)
* [Snowflake](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-snowflake.html)
* [Tableau](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-tableau.html). *Supports source connections only.*
* [Teradata](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-teradata.html). *Teradata JDBC Driver 17.00.00.03 Copyright (C) 2023 by Teradata. All rights reserved. IBM provides embedded usage of the Teradata JDBC Driver under license from Teradata solely for use as part of the IBM Watson service offering.*
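The source/target distinction in the lists above comes down to whether the credentials behind a connection permit writes. The following sketch illustrates that behavior with SQLite standing in for a remote data source (a local stand-in chosen only so the example is self-contained, not a supported connector API): a connection opened read-only behaves like a source-only connector, and writes through it fail.

```python
import sqlite3

# A local SQLite file stands in for a remote data source.
sqlite3.connect("connector_demo.db").executescript(
    "CREATE TABLE IF NOT EXISTS orders (id INTEGER);"
    "DELETE FROM orders;"
    "INSERT INTO orders VALUES (1), (2);"
)

# "Source" connection: read-only credentials -- reads work.
source = sqlite3.connect("file:connector_demo.db?mode=ro", uri=True)
rows = source.execute("SELECT id FROM orders ORDER BY id").fetchall()

# "Target" connection: write-capable credentials -- loading (saving) data succeeds.
target = sqlite3.connect("file:connector_demo.db?mode=rw", uri=True)
target.execute("INSERT INTO orders VALUES (3)")
target.commit()

# Writing through the read-only connection is rejected, just as saving
# to a target fails when the credentials lack Write permission.
try:
    source.execute("INSERT INTO orders VALUES (99)")
except sqlite3.OperationalError as err:
    write_error = str(err)  # "attempt to write a readonly database"
```

The same division applies to the connectors marked *Supports source connections only*: they can feed data into tools but cannot serve as a load target.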
## Supported connectors by tool

The following tools support connections:

* [AutoAI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html)
* [Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refinery-datasources.html)
* [Decision Optimization](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/DOconnections.html)
* [Notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-load-support.html)
* [SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsd/spss-connections.html)
* [Synthetic Data Generator](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/import_data_sd.html)

## Learn more

* [Asset previews](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html)
* [Profiles of assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/profile.html)
* [Troubleshooting connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/troubleshoot-conn.html)

**Parent topic:** [Preparing data](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/get-data.html)
# Adding data from a connection to a project

A *connected data asset* is a pointer to data that is accessed through a connection to an external data source. You create a connected data asset by specifying a connection, any intermediate structures or paths, and a relational table or view, a set of partitioned data files, or a file. When you access a connected data asset, the data is dynamically retrieved from the data source.

You can also add a connected folder asset that is accessed through a connection in the same way. See [Add a connected folder asset to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/folder-asset.html).

Partitioned data assets have previews and profiles like relational tables. However, you cannot yet shape and cleanse partitioned data assets with the Data Refinery tool.

To add a data asset from a connection to a project:

1. From the project page, click the **Assets** tab, and then click **Import assets > Connected data**.
2. Select an existing connection asset as the source of the data. If you don't have any connection assets, cancel and go to **New asset > Connect to a data source**, and [create a connection asset](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html).
3. Select the data you want. You can select multiple connected data assets from the same connection. Click **Import**. For partitioned data, select the folder that contains the files. If the files are recognized as partitioned data, you see the message `This folder contains a partitioned data set.`
4. Type a name and description.
5. Click **Create**. The asset appears on the project **Assets** page.
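The "pointer" semantics of a connected data asset can be sketched in a few lines of Python. This is a conceptual model only, not the platform's actual asset API: the `ConnectedAsset` class, its fields, and the `_fetch` callback are hypothetical names used for illustration.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class ConnectedAsset:
    """Conceptual model of a connected data asset: a pointer, not a copy.

    Only the coordinates of the data are stored in the project; rows are
    retrieved from the external source each time the asset is read.
    (Hypothetical class for illustration -- not the platform API.)
    """
    connection: str  # name of an existing connection asset
    path: str        # intermediate structure, e.g. a schema
    table: str       # relational table, view, or file
    _fetch: Callable[[str, str, str], Any]  # performs the remote read

    def read(self):
        # Dynamic retrieval: nothing is cached in the project.
        return self._fetch(self.connection, self.path, self.table)

# A stand-in data source: the "remote" table lives outside the project.
remote = {("Db2 Warehouse", "SALES", "ORDERS"): [{"id": 1}, {"id": 2}]}

asset = ConnectedAsset(
    connection="Db2 Warehouse",
    path="SALES",
    table="ORDERS",
    _fetch=lambda conn, path, table: remote[(conn, path, table)],
)
rows = asset.read()  # fetched from the source at access time
```

Because nothing is copied into the project, changes in the data source show up the next time the asset is read.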
When you click the asset name, you can see this information about connected assets:

* The asset name and description
* The tags for the asset
* The name of the person who created the asset
* The size of the data
* The date when the asset was added to the project
* The date when the asset was last modified
* A [preview](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html) of relational data
* A [profile](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/profile.html) of relational data

Watch this video to see how to create a connection and add connected data to a project.

Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform. This video provides a visual method to learn the concepts and tasks in this documentation.

Transcript:

| Time | Transcript |
| ----- | ---------- |
| 00:00 | This video shows you how to set up a connection to a data source and add connected data to a Watson Studio project. |
| 00:08 | If you have data stored in a data source, you can set up a connection to that data source from any project. |
| 00:16 | From here, you can add different elements to the project. |
| 00:20 | In this case, you want to add a connection. |
| 00:24 | You can create a new connection to an IBM service, such as IBM Db2 and Cloud Object Storage, or to a service from third parties, such as Amazon, Microsoft or Apache. |
| 00:39 | And you can filter the list based on compatible services. |
| 00:45 | You can also add a connection that was created at the platform level, which can be used across projects and catalogs. |
| 00:54 | Or you can create a connection to one of your provisioned IBM Cloud services. |
| 00:59 | In this case, select the provisioned IBM Cloud service for Db2 Warehouse on Cloud. |
| 01:08 | If the credentials are not prepopulated, you can get the credentials for the instance from the IBM Cloud service launch page. |
| 01:17 | First, test the connection and then create the connection. |
| 01:25 | The new connection now displays in the list of data assets. |
| 01:30 | Next, add connected data assets to this project. |
| 01:37 | Select the source - in this case, it's the Db2 Warehouse on Cloud connection just created. |
| 01:43 | Then select the schema and table. |
| 01:50 | You can see that this will add a reference to the data within this connection and include it in the target project. |
| 01:58 | Provide a name and a description and click "Create". |
| 02:06 | The data now displays in the list of data assets. |
| 02:09 | Open the data set to get a preview; and from here you can move directly into refining the data. |
| 02:17 | Find more videos in the Cloud Pak for Data as a Service documentation. |

## Next steps

* [Refine the data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html)
* [Analyze the data or build models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html)

## Learn more

* [Connected folder assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/folder-asset.html)
* [Connection assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html)

**Parent topic:** [Adding data to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html)
# Controlling access to Cloud Object Storage buckets

A bucket is a logical abstraction that provides a container for data. Buckets in Cloud Object Storage are created in IBM Cloud. Within a Cloud Object Storage instance, you can use policies to restrict users' access to buckets. Here's how it works:

![A Cloud Object Storage instance with two buckets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/COSInstanceAndBuckets.svg)

In this illustration, two credentials are associated with a Cloud Object Storage instance. Each of the credentials references an IAM service ID in which policies are defined to control which buckets that service ID can access. By using a specific credential when you add a Cloud Object Storage connection to a project, only the buckets accessible to the service ID associated with that credential are visible.

To create connections that restrict users' access to buckets, follow these steps.

First, in IBM Cloud:

1. [Create a Cloud Object Storage instance and several buckets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/cos_buckets.html?context=cdpaas&locale=en#createbucket)
2. [Create a service credential and service ID for each combination of buckets that you want users to be able to access](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/cos_buckets.html?context=cdpaas&locale=en#credentials)
3. [Verify that the service IDs were created](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/cos_buckets.html?context=cdpaas&locale=en#verify)
4. [Edit the policies of each service ID to provide access to the appropriate buckets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/cos_buckets.html?context=cdpaas&locale=en#policy)
5. [Copy values from each of the service credentials that you created](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/cos_buckets.html?context=cdpaas&locale=en#copy)
6. [Copy the endpoint](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/cos_buckets.html?context=cdpaas&locale=en#endpoint)

Then, in your project:

7. [Add Cloud Object Storage connections that use the service credentials that you created](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/cos_buckets.html?context=cdpaas&locale=en#add)
8. [Test users' access to buckets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/cos_buckets.html?context=cdpaas&locale=en#test)

## Step 1: Create a Cloud Object Storage instance and several buckets

1. From the [IBM Cloud catalog](https://cloud.ibm.com/catalog#services), search for Object Storage, then create a Cloud Object Storage instance.
2. Select **Buckets** in the navigation pane.
3. Create as many buckets as you need. For example, create three buckets: dept1-bucket, dept2-bucket, and dept3-bucket.

![Buckets page](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/BucketsAndObjectsPage.png)

## Step 2: Create a service credential and service ID for each combination of buckets that you want users to be able to access

1. Select **Service credentials** in the navigation pane.
2. Click **New Credential**.
3. In the Add new credential dialog, provide a name for the credential and select the appropriate access role.
4. Within the Select Service ID field, click **Create New Service ID**.
5. Enter a name for the new service ID. Use the same or a similar name as the credential for easy identification.
6. Click **Add**.
7. Repeat steps 2 to 6 for each credential that you want to create. For example, create three credentials: cos-all-access, dept1-dept2-buckets-only, and dept2-dept3-buckets-only.

![Service credentials page](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/ServiceCredentialsPage.png)

## Step 3: Verify that the service IDs were created

1. In the IBM Cloud page header, click **Manage > Access (IAM)**.
2. Select **Service IDs** in the navigation pane.
3. Confirm that the service IDs that you created in step 2 are visible.

![Service IDs page](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/ServiceIDsPage.png)

## Step 4: Edit the policies of each service ID to provide access to the appropriate buckets

1. Open each service ID in turn.
2. On the Access policies tab, select **Edit** from the Actions menu to view the policy.
3. If necessary, edit the policy to provide access to the appropriate buckets.
4. If needed, create one or more new policies:
    1. Remove the existing, default policy, which provides access to all of the buckets in the Cloud Object Storage instance.
    2. Click **Assign access**.
    3. For Resource type, specify `bucket`.
    4. For Resource ID, specify a bucket name.
    5. In the Select roles section, select **Viewer** from the "Assign platform access roles" list and select **Writer** from the "Assign service access roles" list.

### Example 1

By default, the policy for the cos-all-access service ID provides Writer access to the Cloud Object Storage instance.

![Access policies tab for the cos-all-access service ID](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/AccessPoliciesPageForcosallaccess.png)

Because you want this service ID and the corresponding credential to provide users with access to all of the buckets, no edits are required.

![Edit policy page for the cos-all-access service ID](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/cosallaccessServiceIDPolicy.png)

### Example 2

By default, the policy for the dept1-dept2-buckets-only service ID provides Writer access to the Cloud Object Storage instance. Because you want this service ID and the corresponding credential to provide users with access only to the dept1-bucket and dept2-bucket buckets, remove the default policy and create two access policies, one for dept1-bucket and one for dept2-bucket.
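As a rough mental model of what Step 4 sets up, the sketch below evaluates bucket access the way the scoped policies do. It is illustrative only: real IAM evaluation happens inside IBM Cloud, and the dict layout here is an invented stand-in, not the IAM policy schema.

```python
# Conceptual stand-in for the service-ID policies created in Step 4.
# Not the real IAM policy schema -- just the access model it produces.
policies = {
    "cos-all-access": {"resource_type": "instance"},  # default: whole instance
    "dept1-dept2-buckets-only": {
        "resource_type": "bucket",
        "buckets": {"dept1-bucket", "dept2-bucket"},
    },
    "dept2-dept3-buckets-only": {
        "resource_type": "bucket",
        "buckets": {"dept2-bucket", "dept3-bucket"},
    },
}

def can_access(service_id: str, bucket: str) -> bool:
    """Return True if the service ID's policies reach the bucket."""
    policy = policies[service_id]
    if policy["resource_type"] == "instance":
        return True                      # unscoped default policy
    return bucket in policy["buckets"]   # bucket-scoped policies
```

A connection created with the dept1-dept2-buckets-only credential would therefore list only those two buckets, while cos-all-access would list all three.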
![Access policies tab for the dept1-dept2-buckets-only service ID](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/AccessPoliciesPageFordept1dept2bucketsonly.png) ![Edit Policy page for the dept1-bucket-only service ID](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/SelectRolesSection_dept1.png) ![Edit Policy page for the dept2-bucket-only service ID](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/SelectRolesSection_dept2.png) Step 5: Copy values from each of the service credentials that you created 1. Return to your IBM Cloud Dashboard and select Cloud Object Storage from the Storage list. 2. Select Service credentials in the navigation pane. 3. Click the View credentials action for one of the service IDs that you created in step 2. 4. Copy the "apikey" value and the "resource_instance_id" value to a temporary location, such as a desktop note. ![cos-all-access credential](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/ViewCredentials_apikey.png) ![cos-all-access credential](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/ViewCredentials_resourceinstanceid.png) 5. Repeat steps 3 and 4 for each credential. Step 6: Copy the Endpoint 1. Select Endpoint in the navigation pane. 2. Copy the URL of the endpoint that you want to connect to. Save the value to a temporary location, such as a desktop note. Step 7: Add Cloud Object Storage connections that use the service credentials that you created 1. Return to your project on the Assets tab, and click New asset > Connect to a data source.. 2. On the New connection page, click Cloud Object Storage. 3. Name the new connection and enter the login URL (from the Endpoints page) as well as the "apikey" and "resource_instance_id" values that you copied in step 5 from one of the service credentials. 4. Repeat steps 3 to 5 for each service credential. 
The connections will be visible in the Data assets section of the project. Test users' access to buckets Going forward, when you add a data asset from a Cloud Object Storage connection to a project, you'll see only the buckets that the policies allow you to access. To test this: 1. From a project, click New asset > Connected data. Or from a catalog, click Add to project > Connected data. 2. In the Connection source section, click Select source. On the Select connection source page, you can see the Cloud Object Storage connections that you created. 3. Select one of the Cloud Object Storage connections to see that only the buckets accessible to the service ID associated with that bucket's credential are visible. Parent topic:[Adding connections to projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html)
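Rather than retyping the "apikey" and "resource_instance_id" values from step 5 by hand, you can pull them out of a downloaded service-credential JSON. A minimal sketch, assuming a hypothetical credential pasted inline (the values are placeholders, but the two field names match what the Cloud Object Storage connection form asks for):

```python
import json

# A service credential as downloaded from the IBM Cloud "Service credentials"
# page (shortened, with placeholder values).
credential_json = """
{
  "apikey": "EXAMPLE-apikey-not-real",
  "iam_apikey_name": "cos-all-access",
  "resource_instance_id": "crn:v1:bluemix:public:cloud-object-storage:global:a/EXAMPLEACCOUNT:EXAMPLE-instance-id::"
}
"""

def connection_values(raw: str) -> dict:
    """Pick out only the values that the connection form needs."""
    cred = json.loads(raw)
    return {key: cred[key] for key in ("apikey", "resource_instance_id")}

values = connection_values(credential_json)
# The resource_instance_id is a CRN that identifies the storage instance.
print(values["resource_instance_id"].startswith("crn:v1:bluemix:public:cloud-object-storage"))  # True
```

Create one Cloud Object Storage connection per credential file, and the bucket restrictions defined by each service ID's policies carry over automatically.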
Adding connections to projects

You need to create a connection asset for a data source before you can access or load data to or from it. A connection asset contains the information necessary to establish a connection to a data source. Create connections to multiple types of data sources, including IBM Cloud services, other cloud services, on-prem databases, and more. See [Connectors](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) for the list of data sources.

To create a new connection in a project:

1. Go to the project page, and click the Assets tab.
2. Click New asset > Connect to a data source.
3. Choose the kind of connection:
   * Select New connection (the default) to create a new connection in the project.
   * Select Platform connections to select a connection that has already been created at the platform level.
   * Select Deployed services to connect to a data source from a cloud service that is integrated with IBM watsonx.
4. Choose a data source.
5. Enter the connection information that is required for the data source. Typically, you need to provide information like the hostname, port number, username, and password.
6. If prompted, specify whether you want to use personal or shared credentials. You cannot change this option after you create the connection. The credentials type for the connection, either Personal or Shared, is set by the account owner on the [Account page](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.html). The default setting is Shared.
   * Personal: With personal credentials, each user must specify their own credentials to access the connection. Each user's credentials are saved but are not shared with any other users. Use personal credentials instead of shared credentials to protect credentials. For example, if you use personal credentials and another user changes the connection properties (such as the hostname or port number), the credentials are invalidated to prevent malicious redirection.
   * Shared: With shared credentials, all users access the connection with the credentials that you provide. Shared credentials can potentially be retrieved by a user who has access to the connection asset. Because the credentials are shared, it is difficult to audit access to the connection, to identify the source of data loss, or to identify the source of a security breach.
7. For Private connectivity: To connect to a database that is not externalized to the internet (for example, behind a firewall), see [Securing connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html).
8. If available, click Test connection.
9. Click Create. The connection appears on the Assets page. You can edit the connection by clicking the connection name on the Assets page.
10. Add tables, files, or other types of data from the connection by [creating a connected data asset](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html).

Connections with personal credentials are marked with a key icon (![the key symbol for private connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/privatekey.png)) on the Assets page and are locked. If you are authorized to access the connection, you can unlock it by entering your credentials the first time you select it. This is a one-time step that permanently unlocks the connection for you. After you unlock the connection, the key icon is no longer displayed. Connections with personal credentials are already unlocked if you created the connections yourself.

Watch this video to see how to create a connection and add connected data to a project.

Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform.
This video provides a visual method to learn the concepts and tasks in this documentation.

Transcript

| Time | Transcript |
| ----- | ----- |
| 00:00 | This video shows you how to set up a connection to a data source and add connected data to a Watson Studio project. |
| 00:08 | If you have data stored in a data source, you can set up a connection to that data source from any project. |
| 00:16 | From here, you can add different elements to the project. |
| 00:20 | In this case, you want to add a connection. |
| 00:24 | You can create a new connection to an IBM service, such as IBM Db2 and Cloud Object Storage, or to a service from third parties, such as Amazon, Microsoft or Apache. |
| 00:39 | And you can filter the list based on compatible services. |
| 00:45 | You can also add a connection that was created at the platform level, which can be used across projects and catalogs. |
| 00:54 | Or you can create a connection to one of your provisioned IBM Cloud services. |
| 00:59 | In this case, select the provisioned IBM Cloud service for Db2 Warehouse on Cloud. |
| 01:08 | If the credentials are not prepopulated, you can get the credentials for the instance from the IBM Cloud service launch page. |
| 01:17 | First, test the connection and then create the connection. |
| 01:25 | The new connection now displays in the list of data assets. |
| 01:30 | Next, add connected data assets to this project. |
| 01:37 | Select the source - in this case, it's the Db2 Warehouse on Cloud connection just created. |
| 01:43 | Then select the schema and table. |
| 01:50 | You can see that this will add a reference to the data within this connection and include it in the target project. |
| 01:58 | Provide a name and a description and click "Create". |
| 02:06 | The data now displays in the list of data assets. |
| 02:09 | Open the data set to get a preview; and from here you can move directly into refining the data. |
| 02:17 | Find more videos in the Cloud Pak for Data as a Service documentation. |
Next step

Go to [Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/asset_browser.html), and select the connection. Drill down to a schema, and then to a table or view.

Learn more

* [Adding platform connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html)
* [Integrations with other cloud platforms](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/int-cloud.html)
* [Controlling access to Cloud Object Storage buckets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/cos_buckets.html)

Parent topic: [Adding data to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html)
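The Test connection button verifies the properties you enter in step 5. Outside the UI, a quick reachability check for the same hostname and port can be sketched with Python's standard library; the hostname and port in the usage comment are placeholders, not real servers:

```python
import socket

def reachable(hostname: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to hostname:port succeeds within timeout."""
    try:
        with socket.create_connection((hostname, port), timeout=timeout):
            return True
    except OSError:  # covers DNS failure, refusal, and timeout
        return False

# Example (substitute the hostname and port from your connection form):
#   reachable("db2.example.com", 50000)
```

A True result only confirms network reachability; the Test connection button in the UI additionally validates the credentials against the data source.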
Creating jobs in Data Refinery

You can create a job to run a Data Refinery flow directly in Data Refinery.

To create a Data Refinery flow job:

1. In Data Refinery, click the Jobs icon ![the jobs icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/Run-schedule_Blue.png) from the Data Refinery toolbar and select Save and create a job.
2. Define the job details by entering a name and a description (optional).
3. On the Configure page, select an environment runtime for the job, and optionally modify the job retention settings.
4. On the Schedule page, you can optionally add a one-time or repeating schedule. If you define a start day and time without selecting Repeat, the job runs exactly one time at the specified day and time. If you define a start date and time and you select Repeat, the job runs for the first time at the timestamp indicated in the Repeat section. You can't change the time zone; the schedule uses your web browser's time zone setting. If you exclude certain weekdays, the job might not run as you would expect. The reason might be a discrepancy between the time zone of the user who creates the schedule and the time zone of the compute node where the job runs.
5. Optional: Set up notifications for the job. You can select the type of alerts to receive.
6. Review the job settings. Then, create the job and run it immediately, or create the job and run it later.

The Data Refinery flow job is listed under Jobs in your project.

Learn more

* [Compute resource options for Data Refinery in projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spark-dr-envs.html)
* [Viewing job details](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.html#view-job-details)
* [Refining data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html)

Parent topic: [Jobs](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.html)
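The time-zone caveat for scheduled jobs is easy to see with Python's standard library: a run scheduled for a given weekday in the browser's time zone can already fall on the next weekday on a compute node that evaluates the schedule in another zone. The zone names and date below are illustrative:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# A run scheduled for Monday 23:30 in the scheduler's browser time zone...
scheduled = datetime(2024, 7, 1, 23, 30, tzinfo=ZoneInfo("America/New_York"))

# ...is already Tuesday on a compute node that works in UTC, so a schedule
# that excludes Tuesdays would unexpectedly skip this run.
on_node = scheduled.astimezone(ZoneInfo("UTC"))

print(scheduled.strftime("%A %H:%M"))  # Monday 23:30
print(on_node.strftime("%A %H:%M"))    # Tuesday 03:30
```

If excluded weekdays matter for your schedule, pick a start time far from midnight in either zone to avoid this boundary.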
Creating jobs in the Notebook editor You can create a job to run a notebook directly in the Notebook editor. To create a notebook job: 1. In the Notebook editor, click ![the jobs icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/Run-schedule_Blue.png) from the menu bar and select Create a job. 2. Define the job details by entering a name and a description (optional). 3. On the Configure page, select: * A notebook version. The most recently saved version of the notebook is used by default. If no version of the notebook exists, you must create a version by clicking ![the versions icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/images/versions.png) from the notebook action bar. * A runtime. By default, the job uses the same environment template that was selected for the notebook. * Advanced configuration to add environment variables and select the job run retention settings. * The environment variables that are passed to the notebook when the job is started and affect the execution of the notebook. Each variable declaration must be made for a single variable in the following format VAR_NAME=foo and appear on its own line. For example, to determine which data source to access if the same notebook is used in different jobs, you can set the variable DATA_SOURCE to DATA_SOURCE=jdbc:db2//db2.server.com:1521/testdata in the notebook job that trains a model and to DATA_SOURCE=jdbc:db2//db2.server.com:1521/productiondata in the job where the model runs on real data. In another example, the variables BATCH_SIZE, NUM_CLASSES and EPOCHS that are required for a Keras model can be passed to the same notebook with different values in separate jobs. * Select the job run result output. You can select: * Log & notebook to store the output files of specific runs, the log file, and the resulting notebook. This is the default that is set for all new jobs. Select: * To compare the results of different job runs, not just by viewing the log file. 
By keeping the output files of specific job runs, you can compare the results of job runs to fine-tune your code. For example, by configuring different environment variables when the job is started, you can change the way the code in the notebook behaves and then compare these differences (including graphics) step by step between runs. Note: * The job run retention value is set to 5 by default to avoid creating too many run output files. This means that the last 5 job run output files will be retained. You need to adjust this value if you want to compare more run output files. * You cannot use the results of a specific job run to create a URL to enable "Share by URL". If you want to use a specific job run result as the source of what is shown via "Share by URL", you must create a new job and select Log & updated version. * To view the logs. * Log only to store the log file only. The resulting notebook is discarded. Select: * To view the logs. * Log & updated version to store the log file and update the output cells of the version you used as input to this task. Select: * To view the logs. * To share the result of a job run via "Share by URL". * Retention configuration to set how long to retain finished job runs and job run artifacts like logs or notebook results. You can either select the number of days to retain the job runs or the last number of job runs to keep. The retention value is set to 5 by default (the last 5 job run output files are retained). Be mindful when changing the default as too many job run files can quickly use up project storage. 4. On the Schedule page, you can optionally add a one-time or repeating schedule. If you define a start day and time without selecting Repeat, the job will run exactly one time at the specified day and time. If you define a start date and time and you select Repeat, the job will run for the first time at the timestamp indicated in the Repeat section. 
You can't change the time zone; the schedule uses your web browser's time zone setting. If you exclude certain weekdays, the job might not run as you would expect. The reason might be due to a discrepancy between the time zone of the user who creates the schedule, and the time zone of the compute node where the job runs. An API key is generated when you create a scheduled job, and future runs will use this API key. If you didn't create a scheduled job but choose to modify one, an API key is generated for you when you modify the job and future runs will use this API key. 5. Optionally turn on notifications for the job. You can select the type of alerts to receive. 6. Review the job settings. Then create the job and run it immediately, or create the job and run it later. All notebook code cells are run and all output cells are updated. The notebook job is listed under Jobs in your project. To view the notebook run output, click the job and then Run result on the Job run details page. Learn more * [Viewing job details](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.html#view-job-details) * [Coding and running notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/code-run-notebooks.html) * [Environments for the Notebook editor](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html) Parent topic:[Creating and managing jobs](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.html)
# Creating jobs in the Notebook editor # You can create a job to run a notebook directly in the Notebook editor\. To create a notebook job: <!-- <ol> --> 1. In the Notebook editor, click ![the jobs icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/Run-schedule_Blue.png) from the menu bar and select **Create a job**\. 2. Define the job details by entering a name and a description (optional)\. 3. On the Configure page, select: <!-- <ul> --> * A notebook version. The most recently saved version of the notebook is used by default. If no version of the notebook exists, you must create a version by clicking ![the versions icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/images/versions.png) from the notebook action bar. * A runtime. By default, the job uses the same environment template that was selected for the notebook. * **Advanced configuration** to add environment variables and select the job run retention settings. <!-- <ul> --> * The environment variables that are passed to the notebook when the job is started and affect the execution of the notebook. Each variable declaration must be made for a single variable in the following format `VAR_NAME=foo` and appear on its own line. For example, to determine which data source to access if the same notebook is used in different jobs, you can set the variable `DATA_SOURCE` to `DATA_SOURCE=jdbc:db2//db2.server.com:1521/testdata` in the notebook job that trains a model and to `DATA_SOURCE=jdbc:db2//db2.server.com:1521/productiondata` in the job where the model runs on real data. In another example, the variables `BATCH_SIZE`, `NUM_CLASSES` and `EPOCHS` that are required for a Keras model can be passed to the same notebook with different values in separate jobs. * Select the job run result output. You can select: <!-- <ul> --> * **Log & notebook** to store the output files of specific runs, the log file, and the resulting notebook. This is the default that is set for all new jobs. 
Select: <!-- <ul> --> * To compare the results of different job runs, not just by viewing the log file. By keeping the output files of specific job runs, you can compare the results of job runs to fine-tune your code. For example, by configuring different environment variables when the job is started, you can change the way the code in the notebook behaves and then compare these differences (including graphics) step by step between runs. Note: <!-- <ul> --> * The job run retention value is set to 5 by default to avoid creating too many run output files. This means that the last 5 job run output files will be retained. You need to adjust this value if you want to compare more run output files. * You cannot use the results of a specific job run to create a URL to enable "Share by URL". If you want to use a specific job run result as the source of what is shown via "Share by URL", you must create a new job and select **Log & updated version**. <!-- </ul> --> * To view the logs. <!-- </ul> --> * **Log only** to store the log file only. The resulting notebook is discarded. Select: <!-- <ul> --> * To view the logs. <!-- </ul> --> * **Log & updated version** to store the log file and update the output cells of the version you used as input to this task. Select: <!-- <ul> --> * To view the logs. * To share the result of a job run via "Share by URL". <!-- </ul> --> <!-- </ul> --> <!-- </ul> --> * **Retention configuration** to set how long to retain finished job runs and job run artifacts like logs or notebook results. You can either select the number of days to retain the job runs or the last number of job runs to keep. The retention value is set to 5 by default (the last 5 job run output files are retained). Be mindful when changing the default as too many job run files can quickly use up project storage. <!-- </ul> --> 4. On the Schedule page, you can optionally add a one\-time or repeating schedule\. 
If you define a start day and time without selecting **Repeat**, the job will run exactly one time at the specified day and time. If you define a start date and time and you select **Repeat**, the job will run for the first time at the timestamp indicated in the Repeat section. You can't change the time zone; the schedule uses your web browser's time zone setting. If you exclude certain weekdays, the job might not run as you would expect. The reason might be due to a discrepancy between the time zone of the user who creates the schedule, and the time zone of the compute node where the job runs. An API key is generated when you create a scheduled job, and future runs will use this API key. If you didn't create a scheduled job but choose to modify one, an API key is generated for you when you modify the job and future runs will use this API key. 5. Optionally turn on notifications for the job\. You can select the type of alerts to receive\. 6. Review the job settings\. Then create the job and run it immediately, or create the job and run it later\. All notebook code cells are run and all output cells are updated\. The notebook job is listed under **Jobs** in your project. To view the notebook run output, click the job and then **Run result** on the Job run details page. <!-- </ol> --> ## Learn more ## <!-- <ul> --> * [Viewing job details](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.html#view-job-details) * [Coding and running notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/code-run-notebooks.html) * [Environments for the Notebook editor](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html) <!-- </ul> --> **Parent topic:**[Creating and managing jobs](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.html) <!-- </article "role="article" "> -->
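The environment variables that a job passes in this way can be read inside the notebook at run time with standard Python. A minimal sketch, using the variable names from the examples above (the fallback defaults are hypothetical, for interactive runs outside a job):

```python
import os

# DATA_SOURCE is set per job in the advanced configuration, for example
# DATA_SOURCE=jdbc:db2//db2.server.com:1521/testdata in a training job.
# The fallback values below are hypothetical defaults for interactive runs
# where no job has set the variables.
data_source = os.environ.get("DATA_SOURCE", "jdbc:db2//db2.server.com:1521/testdata")

# Environment variables always arrive as strings, so numeric settings such
# as the Keras examples on this page must be converted explicitly.
batch_size = int(os.environ.get("BATCH_SIZE", "32"))
num_classes = int(os.environ.get("NUM_CLASSES", "10"))
epochs = int(os.environ.get("EPOCHS", "5"))

print(f"data source: {data_source}")
print(f"batch_size={batch_size}, num_classes={num_classes}, epochs={epochs}")
```

Two jobs that declare different `DATA_SOURCE` values can then drive the same notebook against test and production data without any code change.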
1C863B2624AB2712318442337C917143C19E7DDD
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-pipelines.html?context=cdpaas&locale=en
Creating jobs for Pipelines
Creating jobs for Pipelines You can create jobs for Pipelines. To create a Pipelines job: 1. Open your Pipelines asset from the project. 2. Click Run pipeline > Create a job. 3. On the Create a job page, you can choose the asset version that you'd like to run. The most recently saved version of the pipeline is used by default. 4. Give a name and optional description for your job. Click Next. 5. Define your IAM API key. The most recently used API key is used by default. If you'd like to use a new API key, click Generate new API key. Click Next. 6. You can schedule your job by toggling Schedule off to Schedule to run. You can choose either or both options: * Start on: Choose a date for your scheduled job to run. The time zone is GMT-0400 (Eastern Daylight Time). If you do not choose a start date, the job will never run automatically and must be started manually. * Repeat: You can choose the repeat frequency (every minute to every month), exclude running the job on certain days, and choose an end date. If you do not choose to repeat the job, it runs one time if a start date is given, or does not run. 7. Review your job settings and click Create. The Pipelines job is listed under Jobs in your project. Learn more * [Viewing jobs across projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/job-views-projs.html) Parent topic:[Creating and managing jobs](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.html)
# Creating jobs for Pipelines # You can create jobs for Pipelines\. To create a Pipelines job: <!-- <ol> --> 1. Open your Pipelines asset from the project\. 2. Click **Run pipeline > Create a job**\. 3. On the **Create a job** page, you can choose the asset version that you'd like to run\. The most recently saved version of the pipeline is used by default\. 4. Give a name and optional description for your job\. Click **Next**\. 5. Define your IAM API key\. The most recently used API key is used by default\. If you'd like to use a new API key, click **Generate new API key**\. Click **Next**\. 6. You can schedule your job by toggling *Schedule off* to *Schedule to run*\. You can choose either or both options: <!-- <ul> --> * *Start on*: Choose a date for your scheduled job to run. The time zone is GMT-0400 (Eastern Daylight Time). If you do not choose a start date, the job will never run automatically and must be started manually. * *Repeat*: You can choose the repeat frequency (every minute to every month), exclude running the job on certain days, and choose an end date. If you do not choose to repeat the job, it runs one time if a start date is given, or does not run. <!-- </ul> --> 7. Review your job settings and click **Create**\. The Pipelines job is listed under **Jobs** in your project\. <!-- </ol> --> ## Learn more ## <!-- <ul> --> * [Viewing jobs across projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/job-views-projs.html) <!-- </ul> --> **Parent topic:**[Creating and managing jobs](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.html) <!-- </article "role="article" "> -->
27FCAB0041FEB8B819E329A319B12D2F4167318A
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-spss.html?context=cdpaas&locale=en
Creating SPSS Modeler jobs
Creating SPSS Modeler jobs You can create a job to run an SPSS Modeler flow. To create an SPSS Modeler job: 1. In SPSS Modeler, click the Create a job icon ![the jobs icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/Run-schedule_Blue.png) from the toolbar and select Create a job. A wizard will appear. Click Next to proceed through each page of the wizard as described here. 2. Define the job details by entering a name and a description (optional). If desired, you can also specify retention settings for the job. Select Job run retention settings to set how long to retain finished job runs and job run artifacts such as logs. You can select one of the following retention methods. Be mindful when changing the default as too many job run files can quickly use up project storage. * By duration (days). Specify the number of days to retain job runs and job artifacts. The retention value is set to 7 days by default (the last 7 days of job runs retained). * By amount. Specify the last number of finished job runs and job artifacts to keep. The retention value is set to 200 jobs by default. 3. On the Flow parameters page, you can set values for flow parameters if any exist for the flow. They are, in effect, user-defined variables that are saved and persisted with the flow. Parameters are often used in scripting to control the behavior of the script by providing information about fields and values that don't need to be hard coded in the script. See [Setting properties for flows](https://dataplatform.cloud.ibm.com/docs/content/wsd/flow_properties.html) for more information. For example, your flow might contain a parameter called age_param that you choose to set to 40 here, and a parameter called bp_param you might set to HIGH. 4. On the Configuration page, you can choose whether the job will run the entire flow or one or more branches of the flow. 5. On the Schedule page, you can optionally add a one-time or repeating schedule. 
If you define a start day and time without selecting Repeat, the job will run exactly one time at the specified day and time. If you define a start date and time and you select Repeat, the job will run for the first time at the timestamp indicated in the Repeat section. You can't change the time zone; the schedule uses your web browser's time zone setting. If you exclude certain weekdays, the job might not run as you would expect. The reason might be due to a discrepancy between the time zone of the user who creates the schedule, and the time zone of the compute node where the job runs. 6. Optionally turn on notifications for the job. You can select the type of alerts to receive. 7. Review the job settings. Click Save to create the job. The SPSS Modeler job is listed under Jobs in your project. Learn more * [Viewing job details](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.html#view-job-details) * [SPSS Modeler documentation](https://dataplatform.cloud.ibm.com/docs/content/wsd/spss-modeler.html) Parent topic: [Jobs](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.html)
# Creating SPSS Modeler jobs # You can create a job to run an SPSS Modeler flow\. To create an SPSS Modeler job: <!-- <ol> --> 1. In SPSS Modeler, click the **Create a job** icon ![the jobs icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/Run-schedule_Blue.png) from the toolbar and select **Create a job**\. A wizard will appear\. Click **Next** to proceed through each page of the wizard as described here\. 2. Define the job details by entering a name and a description (optional)\. If desired, you can also specify retention settings for the job\. Select **Job run retention settings** to set how long to retain finished job runs and job run artifacts such as logs\. You can select one of the following retention methods\. Be mindful when changing the default as too many job run files can quickly use up project storage\. <!-- <ul> --> * **By duration (days)**. Specify the number of days to retain job runs and job artifacts. The retention value is set to 7 days by default (the last 7 days of job runs retained). * **By amount**. Specify the last number of finished job runs and job artifacts to keep. The retention value is set to 200 jobs by default. <!-- </ul> --> 3. On the Flow parameters page, you can set values for flow parameters if any exist for the flow\. They are, in effect, user\-defined variables that are saved and persisted with the flow\. Parameters are often used in scripting to control the behavior of the script by providing information about fields and values that don't need to be hard coded in the script\. See [Setting properties for flows](https://dataplatform.cloud.ibm.com/docs/content/wsd/flow_properties.html) for more information\. For example, your flow might contain a parameter called **age\_param** that you choose to set to **40** here, and a parameter called **bp\_param** you might set to **HIGH**. 4. On the Configuration page, you can choose whether the job will run the entire flow or one or more branches of the flow\. 5. 
On the Schedule page, you can optionally add a one\-time or repeating schedule\. If you define a start day and time without selecting **Repeat**, the job will run exactly one time at the specified day and time. If you define a start date and time and you select **Repeat**, the job will run for the first time at the timestamp indicated in the Repeat section. You can't change the time zone; the schedule uses your web browser's time zone setting. If you exclude certain weekdays, the job might not run as you would expect. The reason might be due to a discrepancy between the time zone of the user who creates the schedule, and the time zone of the compute node where the job runs. 6. Optionally turn on notifications for the job\. You can select the type of alerts to receive\. 7. Review the job settings\. Click **Save** to create the job\. The SPSS Modeler job is listed under **Jobs** in your project. <!-- </ol> --> ## Learn more ## <!-- <ul> --> * [Viewing job details](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.html#view-job-details) * [SPSS Modeler documentation](https://dataplatform.cloud.ibm.com/docs/content/wsd/spss-modeler.html) <!-- </ul> --> **Parent topic**: [Jobs](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.html) <!-- </article "role="article" "> -->
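The two retention methods described above (by duration in days, or by amount of runs) amount to simple pruning rules. The following is a hypothetical sketch, not part of any product API, with sample timestamps standing in for finished job runs:

```python
from datetime import datetime, timedelta

def prune_by_amount(runs, keep_last):
    """Keep only the most recent keep_last finished job runs."""
    return sorted(runs)[-keep_last:]

def prune_by_duration(runs, days, now=None):
    """Keep only runs that finished within the last `days` days."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=days)
    return [run for run in runs if run > cutoff]

# Hypothetical run timestamps: one finished run per day over the last 10 days.
now = datetime(2024, 1, 11)
runs = [now - timedelta(days=d) for d in range(10)]

print(len(prune_by_amount(runs, 5)))         # → 5 runs kept
print(len(prune_by_duration(runs, 7, now)))  # → 7 runs kept
```

With the defaults mentioned above, pruning by amount would keep the last 200 runs, and pruning by duration the runs from the last 7 days.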
9370EEEF3D5414148EFC5CC390B4EFBE3020F23D
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/download.html?context=cdpaas&locale=en
Downloading data assets from a project
Downloading data assets from a project You can download data assets from a project to your local system. Important: Take care when you download assets. Collaborators can upload any type of file into a project, including files that have malware or other types of malicious code. Required permissions : You must have the Editor or Admin role in the project to download an asset. Download a data asset To download a data asset that is in the project's storage, select Download from the ACTION menu next to the asset name. For an alternate method of downloading data assets for a project, select Files in the Data side panel. Select the checkbox next to the data asset and choose Download from the ACTION menu in the side panel. Download a connected data asset To download a connected data asset, use Data Refinery to run a job that saves the file as the output of a Data Refinery flow. The output of the Data Refinery flow is a new CSV file in the project’s storage. 1. Click the asset name to open it. 2. Click Prepare data. 3. From the Jobs menu, click Save and create a job. Enter a job name and click Create and Run. 4. Go back to the Assets page. Refresh the page. By default, the downloadable asset is named table-name_shaped.csv. 5. Choose Download from the ACTION menu next to the asset name. Learn more * [Download a data asset from a catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/catalog/download.html) * [Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html) Parent topic:[Projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html)
# Downloading data assets from a project # You can download data assets from a project to your local system\. Important: Take care when you download assets\. Collaborators can upload any type of file into a project, including files that have malware or other types of malicious code\. **Required permissions** : You must have the **Editor** or **Admin** role in the project to download an asset\. ## Download a data asset ## To download a data asset that is in the project's storage, select **Download** from the ACTION menu next to the asset name\. For an alternate method of downloading data assets for a project, select **Files** in the Data side panel\. Select the checkbox next to the data asset and choose **Download** from the ACTION menu in the side panel\. ## Download a connected data asset ## To download a *connected data asset*, use Data Refinery to run a job that saves the file as the output of a Data Refinery flow\. The output of the Data Refinery flow is a new CSV file in the project’s storage\. <!-- <ol> --> 1. Click the asset name to open it\. 2. Click **Prepare data**\. 3. From the **Jobs** menu, click **Save and create a job**\. Enter a job name and click **Create and Run**\. 4. Go back to the **Assets** page\. Refresh the page\. By default, the downloadable asset is named *table\-name*\_shaped\.csv\. 5. Choose **Download** from the ACTION menu next to the asset name\. <!-- </ol> --> ## Learn more ## <!-- <ul> --> * [Download a data asset from a catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/catalog/download.html) * [Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html) <!-- </ul> --> **Parent topic:**[Projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html) <!-- </article "role="article" "> -->
EEE6714A8CF20E18EA398651B41E0278071EE42B
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/export-project.html?context=cdpaas&locale=en
Exporting a project
Exporting a project You can share assets in a project with others and copy a project by exporting them as a ZIP file to your desktop. The project readme file is added to the exported ZIP file by default. Requirements and restrictions Required role : You need Admin or Editor role in the project to export a project. Restrictions : - You cannot export assets larger than 500 MB : - If your project is marked as sensitive, you can't export data assets, connections or connected data from the project. : - Be mindful when selecting assets to always also include the dependencies of those assets, for example the data assets or connections for a data flow, a notebook, connected data, or jobs. There is no check for dependencies. If you don't include the dependencies, subsequent project imports do not work. : - You can only export and share assets across projects created in watsonx.ai. You can't export a project from Cloud Pak for Data as a Service and import it into watsonx.ai, or the other way around. You can however, move projects between Cloud Pak for Data as a Service and watsonx.ai. See [Switching the platform for a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/switch-platform.html). : - Exporting a project from one region and importing the assets to a project or space in another region can result in an error creating the assets. The error message An unexpected response was returned when creating asset is a symptom of this restriction. : - Exporting a project is not available for all Watson Studio plans. See [Watson Studio plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/ws-plans.html). Exporting a project to desktop Exporting a project packs the project assets that you select into a single ZIP file that can be shared like any other file. To export project assets to desktop: 1. Open the project you want to export assets from. 2. 
Check whether the assets that you include in your export, for example notebooks or connections, don't contain credentials or other sensitive information that you don't want to share. You should remove this information before you begin the export. Only private connection credentials are removed. 3. Optional. Add information to the readme on the Overview page of your project about the assets that you include in the export. For example, you can give a brief description of the analytics use case of the added assets and the data analysis methods that are used. 4. Click ![the Export to desktop icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/export-proj.png) from the project toolbar. 5. Select the assets to add. You can filter by asset type or customize the project export settings by selecting preferences (the settings icon to the right of the window title) which are applied each time you export the project. 6. Optional: Change the name of the project export file. 7. Supply a password if you want to export connections that have shared credentials. Note that this password must be provided to decrypt these credentials on project import. 8. Click Export. Do not leave the page while the export is running. When you export to desktop, the file is saved to the Downloads folder by default. If a ZIP file with the same name already exists, the existing file isn't overwritten. Ensure that your browser settings download the ZIP file to the desktop as a .zip file and not as a folder. Compressing this folder to enable project import leads to an error. Note also that you cannot manually add other assets to an exported project ZIP file on your desktop. The status of a project export is tracked on the project's Overview page. 
Learn more * [Administering a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/admin-project.html) * [Importing a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/import-project.html) Parent topic:[Administering projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/admin-project.html)
# Exporting a project # You can share assets in a project with others and copy a project by exporting them as a ZIP file to your desktop\. The project readme file is added to the exported ZIP file by default\. ## Requirements and restrictions ## **Required role** : You need **Admin** or **Editor** role in the project to export a project\. **Restrictions** : \- You cannot export assets larger than 500 MB : \- If your project is marked as sensitive, you can't export data assets, connections or connected data from the project\. : \- Be mindful when selecting assets to always also include the dependencies of those assets, for example the data assets or connections for a data flow, a notebook, connected data, or jobs\. There is no check for dependencies\. If you don't include the dependencies, subsequent project imports do not work\. : \- You can only export and share assets across projects created in watsonx\.ai\. You can't export a project from Cloud Pak for Data as a Service and import it into watsonx\.ai, or the other way around\. You can however, move projects between Cloud Pak for Data as a Service and watsonx\.ai\. See [Switching the platform for a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/switch-platform.html)\. : \- Exporting a project from one region and importing the assets to a project or space in another region can result in an error creating the assets\. The error message `An unexpected response was returned when creating asset` is a symptom of this restriction\. : \- Exporting a project is not available for all Watson Studio plans\. See [Watson Studio plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/ws-plans.html)\. ## Exporting a project to desktop ## Exporting a project packs the project assets that you select into a single ZIP file that can be shared like any other file\. To export project assets to desktop: <!-- <ol> --> 1. Open the project you want to export assets from\. 2. 
Check whether the assets that you include in your export, for example notebooks or connections, don't contain credentials or other sensitive information that you don't want to share\. You should remove this information before you begin the export\. Only private connection credentials are removed\. 3. Optional\. Add information to the readme on the **Overview** page of your project about the assets that you include in the export\. For example, you can give a brief description of the analytics use case of the added assets and the data analysis methods that are used\. 4. Click ![the Export to desktop icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/export-proj.png) from the project toolbar\. 5. Select the assets to add\. You can filter by asset type or customize the project export settings by selecting preferences (the settings icon to the right of the window title) which are applied each time you export the project\. 6. Optional: Change the name of the project export file\. 7. Supply a password if you want to export connections that have shared credentials\. Note that this password must be provided to decrypt these credentials on project import\. 8. Click **Export**\. Do not leave the page while the export is running\. When you export to desktop, the file is saved to the `Downloads` folder by default. If a ZIP file with the same name already exists, the existing file isn't overwritten. Ensure that your browser settings download the ZIP file to the desktop as a .zip file and not as a folder. Compressing this folder to enable project import leads to an error. Note also that you cannot manually add other assets to an exported project ZIP file on your desktop. The status of a project export is tracked on the project's **Overview** page. 
<!-- </ol> --> ## Learn more ## <!-- <ul> --> * [Administering a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/admin-project.html) * [Importing a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/import-project.html) <!-- </ul> --> **Parent topic:**[Administering projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/admin-project.html) <!-- </article "role="article" "> -->
A5C7CF086B303923D48F8AD63CF85A6BCCBBE3F5
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/feature-group.html?context=cdpaas&locale=en
Managing feature groups (beta)
Managing feature groups (beta) Create a feature group to preserve a set of columns of a data asset along with associated metadata for use with Machine Learning models. Required service : You must have these services. - Watson Studio (for projects) Required permissions : To view this page, you can have any role in a project. : To edit or update information on this page, you must have the Editor or Admin role in the project. Workspaces : You can view the asset feature group in these workspaces: : Projects Types of assets : These types of assets can have a feature group: : Tabular: CSV, TSV, Parquet, xls, xslx, avro, text, json files : [Connected data types](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) that are structured and supported in Watson Studio. Data size : No limit Feature groups (beta) Create a feature group to preserve a set of columns of a particular data asset along with the metadata used for Machine Learning. For example, if you have a set of features for a credit approval model, you can preserve the features used to train the model, as well as some metadata, including which column is used as the prediction target, and which columns are used for bias detection. Feature groups make it simple to preserve the metadata for the features used to train a machine learning model so other data scientists can use the same features. You can see the feature group tab when you preview a particular asset. 
* [Creating a feature group](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/feature-group.html?context=cdpaas&locale=en#create-featuregrp) * [Editing a feature group](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/feature-group.html?context=cdpaas&locale=en#edit-featuregrp) * [Removing features or a feature group](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/feature-group.html?context=cdpaas&locale=en#remove-featuregrp) * [Using the Python API for feature groups](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/feature-group.html?context=cdpaas&locale=en#api-featuregrp) Creating a feature group in a project Before you begin If you create a [profile](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/profile.html) for the data asset before creating a feature group, you can select profile metadata to add values to the feature. Create a feature group You can select particular columns of data assets to form a feature group. 1. In the project Assets tab, click the name of the relevant asset to open the preview and select the Feature group tab. Here you can create a feature group or view and edit an existing one. An asset can have only one feature group. Click New feature group. ![Create a feature group](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/create-feature-group3.png) 2. Select the columns that you want to be used in the feature group. Select the Name checkbox to include all the columns as features. ![Select the feature group columns](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/create-feature-group-columns1.png) Editing a feature group When you have selected the columns of the data asset to be used in the feature group, you can then view each feature and edit it to specify the role it will have in Machine Learning models. 
![View feature group](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/feature-group-example3.png) 1. Click a feature name and click Edit this feature. A window opens displaying the following tabs: * Details - provide the following information about the feature. ![Details](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/feature-group-details.png) Select a Role to be assigned to the feature: * Input: the feature can be used as input for training a Machine Learning model. * Target: the feature to be used as the prediction target when the data is used to train a Machine Learning model. * Identifier: the primary key, such as customer ID, used to identify the input data. Enter a Description, Recipe (any method or formula used to create values for the feature) and any Tags. * Value descriptions ![Value descriptions](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/feature-group-values.png) Value descriptions allow you to clarify the meaning of specific values. For example, consider a column "credit evaluation" with the values -1, 0 and 1. You can use value descriptions to provide meaning for these values. For example, -1 might mean "evaluation rejected". You can enter descriptions for particular values. For numerical values, you can also specify a range. To specify a range of numerical values, enter the following text [n,m] where n is the start and m is the end of the range, surrounded by brackets, and click Add. For example, to describe all age values between 18 and 24 as "millennials", enter [18,24] as the value and millennials as the description. If you have a [profile](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/profile.html) defined, the profile values are displayed in the value descriptions list. From here you can select one value or multiple values. 
* Fairness information ![Fairness information](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/feature-group-fairness.png) You can define Monitor or Reference groups of values for monitoring bias. The values that are more at risk of biased outcomes can be placed in the Monitor group. These values are then compared to values in the Reference group. To specify a range of numerical values, enter the following text [n,m] where n is the start and m is the end of the range, surrounded by brackets. For example, to monitor all age values between 18 and 35, enter [18,35]. Then select Monitor or Reference and click Add. You can also specify Favorable outcomes. See [Fairness in AutoAI experiments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-fairness.html) for more information about fairness. 2. When you have edited the feature, click Save. You can now see your changes in the Feature Details window. Close this window to return to the feature group. Removing features from a group To remove a feature from a group: 1. Preview the asset in the project and select the Feature group tab. 2. In the Features table that is displayed, select the feature (or features) that you want to remove. 3. In the toolbar that appears, select Remove from group. ![Removing features](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/feature-group-remove3.png) The feature, or feature group if you selected all the features, is removed. Searching for a feature group You can [search for assets or columns across all projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/search-assets.html#filter). To filter your search results to find assets with a feature group, select Data to see the filter options, and select Feature group. Assets containing a feature group will then be listed in the search results. 
Using the Python API to create and use feature groups You can also use the [assetframe-lib Python library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html) in notebooks to create and edit feature groups. This library also allows you to use feature metadata, such as fairness information, when creating machine learning models. Learn more For examples of how to create and use feature groups in notebooks: * [Creating and using feature store data](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/e756adfa2855bdfc20f588f9c1986382) sample project in the Samples
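The metadata that a feature group preserves (a feature's role, value descriptions with [n,m] range specs, and fairness monitor/reference groups) can be pictured as a small data structure. The sketch below is purely illustrative: the `Feature` class and `parse_range` helper are hypothetical names for this example, not the assetframe-lib API.

```python
from dataclasses import dataclass, field

def parse_range(spec):
    """Parse a '[n,m]' range spec (as entered in the UI) into (start, end) floats."""
    inner = spec.strip()
    if not (inner.startswith("[") and inner.endswith("]")):
        raise ValueError(f"not a range spec: {spec!r}")
    start, end = (float(p) for p in inner[1:-1].split(","))
    return start, end

@dataclass
class Feature:
    """Illustrative (hypothetical) model of one feature's metadata."""
    name: str
    role: str = "Input"                                # Input | Target | Identifier
    descriptions: dict = field(default_factory=dict)   # value or '[n,m]' -> text
    monitor: list = field(default_factory=list)        # at-risk value ranges
    reference: list = field(default_factory=list)      # comparison value ranges

    def describe(self, value):
        """Return the description matching a literal value or a range spec."""
        for key, text in self.descriptions.items():
            if key.startswith("["):
                lo, hi = parse_range(key)
                if isinstance(value, (int, float)) and lo <= value <= hi:
                    return text
            elif str(key) == str(value):
                return text
        return None

# Mirror the documentation example: ages 18-24 described as "millennials",
# ages 18-35 placed in the fairness Monitor group.
age = Feature("AGE", descriptions={"[18,24]": "millennials"}, monitor=["[18,35]"])
print(age.describe(21))   # -> millennials
```

A real feature group would be the collection of such records for the selected columns, with exactly one feature carrying the `Target` role.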
# Managing feature groups (beta) # Create a feature group to preserve a set of columns of a data asset along with associated metadata for use with Machine Learning models\. **Required service** : You must have this service\. - Watson Studio (for projects) **Required permissions** : To view this page, you can have any role in a project\. : To edit or update information on this page, you must have the **Editor** or **Admin** role in the project\. **Workspaces** : You can view the asset feature group in these workspaces: : Projects **Types of assets** : These types of assets can have a feature group: : Tabular: CSV, TSV, Parquet, xls, xlsx, avro, text, json files : [Connected data types](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) that are structured and supported in Watson Studio\. **Data size** : No limit ## Feature groups (beta) ## Create a **feature group** to preserve a set of columns of a particular data asset along with the metadata used for Machine Learning\. For example, if you have a set of features for a credit approval model, you can preserve the features used to train the model, as well as some metadata, including which column is used as the prediction target, and which columns are used for bias detection\. Feature groups make it simple to preserve the metadata for the features used to train a machine learning model so other data scientists can use the same features\. You can see the feature group tab when you preview a particular asset\. 
<!-- <ul> --> * [Creating a feature group](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/feature-group.html?context=cdpaas&locale=en#create-featuregrp) * [Editing a feature group](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/feature-group.html?context=cdpaas&locale=en#edit-featuregrp) * [Removing features or a feature group](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/feature-group.html?context=cdpaas&locale=en#remove-featuregrp) * [Using the Python API for feature groups](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/feature-group.html?context=cdpaas&locale=en#api-featuregrp) <!-- </ul> --> ### Creating a feature group in a project ### #### Before you begin #### If you create a [profile](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/profile.html) for the data asset before creating a feature group you can select profile metadata to add values to the feature\. #### Create a feature group #### You can select particular columns of data assets to form a feature group\. <!-- <ol> --> 1. In the project **Assets** tab, click the name of the relevant asset to open the preview and select the **Feature group** tab\. Here you can create a feature group or view and edit an existing one\. An asset can have only one feature group\. Click **New feature group**\. ![Create a feature group](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/create-feature-group3.png) 2. Select the columns that you want to be used in the feature group\. Select the **Name** checkbox to include all the columns as features\. 
![Select the feature group columns](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/create-feature-group-columns1.png) <!-- </ol> --> ### Editing a feature group ### When you have selected the columns of the data asset to be used in the feature group, you can then view each feature and edit it to specify the role it will have in Machine Learning models\. ![View feature group](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/feature-group-example3.png) <!-- <ol> --> 1. Click a feature name and click **Edit this feature**\. A window opens displaying the following tabs: <!-- <ul> --> * **Details** - provide the following information about the feature. ![Details](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/feature-group-details.png) Select a **Role** to be assigned to the feature: <!-- <ul> --> * `Input`: the feature can be used as input for training a Machine Learning model. * `Target`: the feature to be used as the prediction target when the data is used to train a Machine Learning model. * `Identifier`: the primary key, such as customer ID, used to identify the input data. Enter a **Description**, **Recipe** (any method or formula used to create values for the feature) and any **Tags**. <!-- </ul> --> * **Value descriptions** ![Value descriptions](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/feature-group-values.png) Value descriptions allow you to clarify the meaning of specific values. For example, consider a column "credit evaluation" with the values *-1*, *0* and *1*. You can use value descriptions to provide meaning for these values. For example, *-1* might mean "evaluation rejected". You can enter descriptions for particular values. For numerical values, you can also specify a range. To specify a range of numerical values, enter the following text *\[n,m\]* where *n* is the start and *m* is the end of the range, surrounded by brackets, and click **Add**. 
For example, to describe all age values between 18 and 24 as "millennials", enter *\[18,24\]* as the value and *millennials* as the description. If you have a [profile](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/profile.html) defined, the profile values are displayed in the value descriptions list. From here you can select one value or multiple values. * **Fairness information** ![Fairness information](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/feature-group-fairness.png) You can define `Monitor` or `Reference` groups of values for monitoring bias. The values that are more at risk of biased outcomes can be placed in the Monitor group. These values are then compared to values in the Reference group. To specify a range of numerical values, enter the following text *\[n,m\]* where *n* is the start and *m* is the end of the range, surrounded by brackets. For example, to monitor all age values between 18 and 35, enter *\[18,35\]*. Then select Monitor or Reference and click **Add**. You can also specify **Favorable outcomes**. See [Fairness in AutoAI experiments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-fairness.html) for more information about fairness. <!-- </ul> --> 2. When you have edited the feature, click **Save**\. You can now see your changes in the **Feature Details** window\. Close this window to return to the feature group\. <!-- </ol> --> ### Removing features from a group ### To remove a feature from a group: <!-- <ol> --> 1. Preview the asset in the project and select the **Feature group** tab\. 2. In the **Features** table that is displayed, select the feature (or features) that you want to remove\. 3. In the toolbar that appears, select **Remove from group**\. ![Removing features](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/feature-group-remove3.png) <!-- </ol> --> The feature, or feature group if you selected all the features, is removed\. 
### Searching for a feature group ### You can [search for assets or columns across all projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/search-assets.html#filter)\. To filter your search results to find assets with a feature group, select **Data** to see the filter options, and select **Feature group**\. Assets containing a feature group will then be listed in the search results\. ### Using the Python API to create and use feature groups ### You can also use the [assetframe\-lib Python library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html) in notebooks to create and edit feature groups\. This library also allows you to use feature metadata, such as fairness information, when creating machine learning models\. ## Learn more ## For examples of how to create and use feature groups in notebooks: <!-- <ul> --> * [Creating and using feature store data](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/e756adfa2855bdfc20f588f9c1986382) sample project in the Samples <!-- </ul> --> <!-- </article "role="article" "> -->
5F398F2A5F6A2E75B9376B755C3ECF4B7F18B149
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/folder-asset.html?context=cdpaas&locale=en
Adding a connected folder asset to a project
Adding a connected folder asset to a project You can create a connected folder asset based on a path within an IBM Cloud Object Storage system that is accessed through a connection. You can view the files and subfolders that share the path with the connected folder asset. The files that you can view within the connected folder asset are not themselves data assets. For example, you can create a connected folder asset for a path that contains news feeds that are continuously updated. Required permissions : You must have the Admin or Editor role in the project to add a connected folder asset. Watch this video to see how to add a connected folder asset in a project, then follow the steps below the video. Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform. This video provides a visual method to learn the concepts and tasks in this documentation. To add a connected folder asset from a connection to a project: 1. If necessary, [create a connection asset](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html). Include an Access Key and a Secret Key to your IBM Cloud Object Storage connection to enable the downloading of files within the connected folder asset. If you're using an existing IBM Cloud Object Storage connection asset that doesn't have an Access Key and Secret Key, edit the connection asset and add them. 2. Click Import assets > Connected data. 3. Select an existing connection asset as the source of the data. 4. Select the folder you want and click Import. 5. Type a name and description. 6. Click Create. The connected folder asset appears on the project Assets page in the Data assets category. Click the connected folder asset name to view the contents of the connected folder asset. 
Click the eye (![eye icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/images/visibility-on.svg)) icon next to a file name to view the contents of the files within the folder that have these formats: * CSV * JSON * Parquet You can refine the files within a connected folder asset and then save the result as a data asset. While viewing the connected folder asset, select a file and then click Prepare data. You can view the files within the connected folder asset if the IBM Cloud Object Storage connection asset that's associated with the connected folder asset has an Access Key and a Secret Key (also known as HMAC credentials). For more information about HMAC credentials, see [IBM Cloud Object Storage Service credentials](https://console.bluemix.net/docs/services/cloud-object-storage/iam/service-credentials.html#service-credentials). Next steps * [Refining a file within the folder](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html) Parent topic:[Adding data to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html)
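Because HMAC credentials make IBM Cloud Object Storage S3-compatible, you can also list the files that share a connected folder's path programmatically. This is a hedged sketch using the standard boto3 S3 client; the endpoint URL, bucket name, prefix, and credential values shown are placeholders for this example, not values from the documentation.

```python
# Supported preview formats for files inside a connected folder asset.
PREVIEWABLE = {".csv", ".json", ".parquet"}

def is_previewable(key: str) -> bool:
    """True if the object key has a format the asset preview supports."""
    dot = key.rfind(".")
    return dot != -1 and key[dot:].lower() in PREVIEWABLE

def list_folder(access_key, secret_key, bucket, prefix,
                endpoint="https://s3.us-south.cloud-object-storage.appdomain.cloud"):
    """List object keys that share the connected folder's path (prefix).

    The endpoint above is a placeholder; use the endpoint for your
    COS instance's region.
    """
    import boto3  # the S3-compatible client accepts COS HMAC credentials
    cos = boto3.client(
        "s3",
        endpoint_url=endpoint,
        aws_access_key_id=access_key,      # COS HMAC Access Key
        aws_secret_access_key=secret_key,  # COS HMAC Secret Key
    )
    resp = cos.list_objects_v2(Bucket=bucket, Prefix=prefix)
    return [obj["Key"] for obj in resp.get("Contents", [])]

# Example usage (placeholder credentials and names):
# keys = list_folder("MY_ACCESS_KEY", "MY_SECRET_KEY", "news-bucket", "feeds/")
# previews = [k for k in keys if is_previewable(k)]
```

The prefix plays the same role as the connected folder asset's path: every object whose key starts with it appears as a file or subfolder of the asset.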
# Adding a connected folder asset to a project # You can create a connected folder asset based on a path within an IBM Cloud Object Storage system that is accessed through a connection\. You can view the files and subfolders that share the path with the connected folder asset\. The files that you can view within the connected folder asset are not themselves data assets\. For example, you can create a connected folder asset for a path that contains news feeds that are continuously updated\. **Required permissions** : You must have the **Admin** or **Editor** role in the project to add a connected folder asset\. Watch this video to see how to add a connected folder asset in a project, then follow the steps below the video\. Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform\. This video provides a visual method to learn the concepts and tasks in this documentation\. To add a connected folder asset from a connection to a project: <!-- <ol> --> 1. If necessary, [create a connection asset](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html)\. Include an Access Key and a Secret Key to your IBM Cloud Object Storage connection to enable the downloading of files within the connected folder asset\. If you're using an existing IBM Cloud Object Storage connection asset that doesn't have an Access Key and Secret Key, edit the connection asset and add them\. 2. Click **Import assets > Connected data**\. 3. Select an existing connection asset as the source of the data\. 4. Select the folder you want and click **Import**\. 5. Type a name and description\. 6. Click **Create**\. The connected folder asset appears on the project **Assets** page in the **Data assets** category\. <!-- </ol> --> Click the connected folder asset name to view the contents of the connected folder asset\. 
Click the eye (![eye icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/images/visibility-on.svg)) icon next to a file name to view the contents of the files within the folder that have these formats: <!-- <ul> --> * CSV * JSON * Parquet <!-- </ul> --> You can refine the files *within* a connected folder asset and then save the result as a data asset\. While viewing the connected folder asset, select a file and then click **Prepare data**\. You can view the files within the connected folder asset if the IBM Cloud Object Storage connection asset that's associated with the connected folder asset has an Access Key and a Secret Key (also known as HMAC credentials)\. For more information about HMAC credentials, see [IBM Cloud Object Storage Service credentials](https://console.bluemix.net/docs/services/cloud-object-storage/iam/service-credentials.html#service-credentials)\. ## Next steps ## <!-- <ul> --> * [Refining a file within the folder](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html) <!-- </ul> --> **Parent topic:**[Adding data to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html) <!-- </article "role="article" "> -->
E4ECA1E5E22F94051D1C8E115D9D874658B5697A
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/get-data.html?context=cdpaas&locale=en
Getting and preparing data in a project
Getting and preparing data in a project After you create a project, or join one, the next step is to add data to the project and prepare the data for analysis. Required permissions : You must have the Admin or Editor role in a project to add or prepare data. You can add data assets from your local system, from a catalog, from the Samples, or from connections to data sources. You can add these types of data assets to a project: * Data assets from files from your local system, including structured data, unstructured data, and images. The files are stored in the project's IBM Cloud Object Storage bucket. * Connection assets that contain information for connecting to data sources. You can add connections to IBM or third-party data sources. See [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html). * Connected data assets that specify a table, view, or file that is accessed through a connection to a data source. * Connected folder assets that specify a path in IBM Cloud Object Storage. To get started quickly, take a tutorial. See [Quick start tutorials](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html). To [refine data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html) by cleansing and shaping it, you can: * Select the Prepare data tile on your watsonx home page. * Add the data to the project, then open the data asset and click Prepare data. To [manage feature groups](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/feature-group.html) for a data asset, open the data asset and go to its Feature group page. To create [synthetic data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/synthetic_data_overview_sd.html), you can: * Select the Prepare data tile on your watsonx home page. * Select the Generate synthetic tabular data tile. 
Learn more * [Create a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html) * [Adding data to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html) * [Refining data with Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html) * [Adding connections to the Platform assets catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html) * [Manage feature groups (beta)](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/feature-group.html) * [Creating synthetic data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/synthetic_data_overview_sd.html)
# Getting and preparing data in a project # After you create a project, or join one, the next step is to add data to the project and prepare the data for analysis\. **Required permissions** : You must have the **Admin** or **Editor** role in a project to add or prepare data\. You can add data assets from your local system, from a catalog, from the Samples, or from connections to data sources\. You can add these types of data assets to a project: <!-- <ul> --> * Data assets from files from your local system, including structured data, unstructured data, and images\. The files are stored in the project's IBM Cloud Object Storage bucket\. * Connection assets that contain information for connecting to data sources\. You can add connections to IBM or third\-party data sources\. See [Supported connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)\. * Connected data assets that specify a table, view, or file that is accessed through a connection to a data source\. * Connected folder assets that specify a path in IBM Cloud Object Storage\. <!-- </ul> --> To get started quickly, take a tutorial\. See [Quick start tutorials](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/quickstart-tutorials.html)\. To [refine data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html) by cleansing and shaping it, you can: <!-- <ul> --> * Select the **Prepare data** tile on your watsonx home page\. * Add the data to the project, then open the data asset and click **Prepare data**\. <!-- </ul> --> To [manage feature groups](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/feature-group.html) for a data asset, open the data asset and go to its **Feature group** page\. To create [synthetic data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/synthetic_data_overview_sd.html), you can: <!-- <ul> --> * Select the **Prepare data** tile on your watsonx home page\. 
* Select the **Generate synthetic tabular data** tile\. <!-- </ul> --> ## Learn more ## <!-- <ul> --> * [Create a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html) * [Adding data to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html) * [Refining data with Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html) * [Adding connections to the Platform assets catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html) * [Manage feature groups (beta)](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/feature-group.html) * [Creating synthetic data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/synthetic_data_overview_sd.html) <!-- </ul> --> <!-- </article "role="article" "> -->
A4CEA84825E512A73C509437103B89B6EF363D5B
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/import-project.html?context=cdpaas&locale=en
Importing a project
Importing a project You can create a project that is preloaded with assets by importing the project. Requirements A local file of a previously exported project : Importing a project from a local file is a method of copying a project. You can import a project from a file on your local system only if the ZIP file that you select was exported from an IBM watsonx project as a compressed file. You can import only projects that you exported from watsonx.ai. You cannot import a compressed file that was exported from a Cloud Pak for Data as a Service project. : If the exported file that you select to import was encrypted, you must enter the password that was used for encryption to enable decrypting sensitive connection properties. A sample project from Samples : You can create a project [from a project sample](https://dataplatform.cloud.ibm.com/samples?context=wx) to learn how to work with data in tools, such as notebooks to prepare data, analyze data, build and train models, and visualize analysis results. : The sample projects show how to accomplish goals, for example, to load and explore data, to create and train machine learning models for predictive analysis. Each project includes the required assets, such as notebooks, and all the data sets that you need to complete the example use case. Importing a project from a local file or sample To import a project: 1. Click New project on the home page or on your Projects page. 2. Choose whether to create a project based on an exported project file or a sample project. 3. Upload a project file or select a sample project. 4. On the New project screen, add a name and optional description for the project. 5. If the project file that you select to import is encrypted, you must enter the password that was used for encryption to enable decrypting sensitive connection properties. If you enter an incorrect password, the project file imports successfully, but sensitive connection properties are decrypted incorrectly. 6. 
Select the Restrict who can be a collaborator checkbox to restrict collaborators to members of your organization. You can't change this setting after you create the project. 7. Choose an existing [object storage service instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/storage-options.html) or create a new one. 8. Click Create. You can start adding resources if your project is empty, or begin working with the resources you imported. Learn more * [Administering a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/admin-project.html) * [Exporting project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/export-project.html) Parent topic:[Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html)
# Importing a project # You can create a project that is preloaded with assets by importing the project\. ## Requirements ## **A local file of a previously exported project** : Importing a project from a local file is a method of copying a project\. You can import a project from a file on your local system only if the ZIP file that you select was exported from an IBM watsonx project as a compressed file\. You can import only projects that you exported from watsonx\.ai\. You cannot import a compressed file that was exported from a Cloud Pak for Data as a Service project\. : If the exported file that you select to import was encrypted, you must enter the password that was used for encryption to enable decrypting sensitive connection properties\. **A sample project from Samples** : You can create a project [from a project sample](https://dataplatform.cloud.ibm.com/samples?context=wx) to learn how to work with data in tools, such as notebooks to prepare data, analyze data, build and train models, and visualize analysis results\. : The sample projects show how to accomplish goals, for example, to load and explore data, to create and train machine learning models for predictive analysis\. Each project includes the required assets, such as notebooks, and all the data sets that you need to complete the example use case\. ## Importing a project from a local file or sample ## To import a project: <!-- <ol> --> 1. Click **New project** on the home page or on your **Projects** page\. 2. Choose whether to create a project based on an exported project file or a sample project\. 3. Upload a project file or select a sample project\. 4. On the **New project** screen, add a name and optional description for the project\. 5. If the project file that you select to import is encrypted, you must enter the password that was used for encryption to enable decrypting sensitive connection properties\. 
If you enter an incorrect password, the project file imports successfully, but sensitive connection properties are decrypted incorrectly\. 6. Select the **Restrict who can be a collaborator** checkbox to restrict collaborators to members of your organization\. You can't change this setting after you create the project\. 7. Choose an existing [object storage service instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/storage-options.html) or create a new one\. 8. Click **Create**\. You can start adding resources if your project is empty, or begin working with the resources you imported\. <!-- </ol> --> ## Learn more ## <!-- <ul> --> * [Administering a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/admin-project.html) * [Exporting a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/export-project.html) <!-- </ul> --> **Parent topic:**[Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html) <!-- </article "role="article" "> -->
97B722619AFC616F13BEB20CD7A8FBC29CFF50D1
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/job-views-projs.html?context=cdpaas&locale=en
Viewing jobs across projects
Viewing jobs across projects You can view the jobs that exist across projects for assets that run in tools, such as notebooks, Data Refinery flows, and SPSS Modeler flows. To view the status of jobs or job runs in projects: 1. From the navigation menu, select Projects > Jobs. 2. Select a view scope: * Jobs with finished runs: all jobs that contain finished runs * Finished runs: all job runs that have finished * Jobs with active runs: all jobs that contain active runs * Active runs: all job runs that are still active 3. Click ![the Filters icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/edit-filters.png) from the table toolbar to further narrow down the returned search results for the view scope you selected. The filter options vary depending on the view scope selection. For example, for jobs with active runs, you can filter by run state, job type, and project, whereas for finished runs, you can filter by time, run state, whether the runs were started manually or by a schedule, job type, run duration, and project. Parent topic:[Jobs](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.html)
# Viewing jobs across projects # You can view the jobs that exist across projects for assets that run in tools, such as notebooks, Data Refinery flows, and SPSS Modeler flows\. To view the status of jobs or job runs in projects: <!-- <ol> --> 1. From the navigation menu, select **Projects > Jobs**\. 2. Select a view scope: <!-- <ul> --> * **Jobs with finished runs**: all jobs that contain finished runs * **Finished runs**: all job runs that have finished * **Jobs with active runs**: all jobs that contain active runs * **Active runs**: all job runs that are still active <!-- </ul> --> 3. Click ![the Filters icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/edit-filters.png) from the table toolbar to further narrow down the returned search results for the view scope you selected\. The filter options vary depending on the view scope selection\. For example, for jobs with active runs, you can filter by run state, job type, and project, whereas for finished runs, you can filter by time, run state, whether the runs were started manually or by a schedule, job type, run duration, and project\. <!-- </ol> --> **Parent topic:**[Jobs](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.html) <!-- </article "role="article" "> -->
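The filter criteria listed in step 3 amount to a set of predicates combined with AND. This is an illustrative sketch in plain Python, not a watsonx API; the field names and run records are hypothetical:

```python
# Hypothetical "Finished runs" records with the filterable fields described
# above: run state, manual vs. scheduled start, run duration, and project.
finished_runs = [
    {"state": "Completed", "scheduled": True,  "duration_s": 420, "project": "sales"},
    {"state": "Failed",    "scheduled": False, "duration_s": 35,  "project": "sales"},
    {"state": "Completed", "scheduled": False, "duration_s": 90,  "project": "ops"},
]

def filter_runs(runs, state=None, scheduled=None, project=None, max_duration_s=None):
    """Keep only runs that match every filter that is set (None = don't filter)."""
    def keep(r):
        return ((state is None or r["state"] == state)
                and (scheduled is None or r["scheduled"] == scheduled)
                and (project is None or r["project"] == project)
                and (max_duration_s is None or r["duration_s"] <= max_duration_s))
    return [r for r in runs if keep(r)]

print(len(filter_runs(finished_runs, state="Completed")))                 # 2
print(len(filter_runs(finished_runs, scheduled=False, project="sales")))  # 1
```

Leaving every argument unset returns all runs, which matches the behavior of an unfiltered view scope.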
28C4D682B46E9723F538988BB2BDB1EB65618E5E
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.html?context=cdpaas&locale=en
Creating and managing jobs in a project
Creating and managing jobs in a project You create jobs to run assets or files in tools, such as Data Refinery flows, SPSS Modeler flows, Notebooks, and scripts, in a project. When you create a job, you define the properties for the job, such as the name, definition, environment runtime, schedule, and notification specifications on different pages. You can run a job immediately or wait for the job to run at the next scheduled interval. Each time a job is started, a job run is created, which you can monitor and use to compare with the job run history of previous runs. You can view detailed information about each job run, job state changes, and job failures in the job run log. How you create a job depends on the asset or file. Job creation options for assets or files Asset or file Create job in tool Create job from the Assets page More information Data Refinery flow ✓ ✓ [Creating jobs in Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-dr.html) SPSS Modeler flow ✓ ✓ [Creating jobs in SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-spss.html) Notebook created in the Notebook editor ✓ ✓ [Creating jobs in the Notebook editor](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-nb-editor.html) Pipelines ✓ [Creating jobs for Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-pipelines.html) Creating jobs from the Assets page You can create a job to run an asset from the project's Assets page. Required permissions : You must have an Editor or Admin role in the project. Restriction: You cannot run a job by using an API key from a [service ID](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-authentication.html). To create jobs for a listed asset from the Assets page of a project: 1.
Select the asset from the section for your asset type and choose New job from the menu icon with the list of options (![actions icon three vertical dots](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/actions.png)) at the end of the table row. 2. Define the job details by entering a name and a description (optional). 3. If you can select Setting, specify the settings that you want for the job. 4. If you can select Configure, choose an environment runtime for the job. Depending on the asset type, you can optionally configure more settings, for example environment variables or script arguments. To avoid accumulating too many finished job runs and job run artifacts, set how long to retain finished job runs and job run artifacts like logs or notebook results. You can either select the number of days to retain the job runs or the last number of job runs to keep. 5. On the Schedule page, you can optionally add a one-time or repeating schedule. If you select the Repeat option and unit of Minutes with the value of n, the job runs at the start of the hour, and then at every multiple of n. For example, if you specify a value of 11, it will run at 0, 11, 22, 33, 44, and 55 minutes of each hour. If you also select the Start of Schedule option, the job starts to run at the first multiple of n of the hour that occurs after the time that you provide in the Start Time field. For example, if you enter 10:24 for the Start Time value, and you select Repeat and set the job to repeat every 14 minutes, then your job will run at 10:42, 10:56, 11:00, 11:14, 11:28, 11:42, 11:56, and so on. You can't change the time zone; the schedule uses your web browser's time zone setting. If you exclude certain weekdays, the job might not run as you would expect. The reason might be due to a discrepancy between the time zone of the user who creates the schedule, and the time zone of the compute node where the job runs.
An API key is generated when you create a scheduled job, and future runs will use this API key. If you didn't create a scheduled job but choose to modify one, an API key is generated for you when you modify the job and future runs will use this API key. 6. (Optional): Select to see notifications for the job. You can select the type of alerts to receive. 7. Review the job settings. Then, create the job and run it immediately, or create the job and run it later. Managing jobs You can view all of the jobs that exist for your project from the project's Jobs page. With Admin or Editor role for the project, you can view and edit the job details. You can run jobs manually and you can delete jobs. With Viewer role for the project, you can only view the job details. You can't run or delete jobs with Viewer role. To view the details of a specific job, click the job. From the job's details page, you can: * View the runs for that job and the status of each run. If a run failed, you can select the run and view the log tail or download the entire log file to help you troubleshoot the run. A failed run might be related to a temporary connection or environment problem. Try running the job again. If the job still fails, you can send the log to Customer Support. * Edit job settings by clicking Edit job, for example to change schedule settings or to pick another environment template. * Run the job manually by clicking ![the run icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/run-job.png) from the job's action bar. You can start a scheduled job based on the schedule and on demand. * Delete the job by clicking ![the bin icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/delete-job.png) from the job's action bar. 
Viewing and editing jobs in a tool You can view and edit job settings associated with an asset directly in the following tools: * Data Refinery * DataStage * Match 360 * Notebook editor or viewer * Pipelines Viewing and editing jobs in Data Refinery, Notebooks, and Pipelines 1. In the tool, click the Jobs icon ![the jobs icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/Run-schedule_Blue.png) from the toolbar and select Save and view jobs. This action lists the jobs that exist for the asset. 2. Select a job to see its details. You can change job settings by clicking Edit job. Learn more * [Creating jobs in Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-dr.html) * [Creating jobs in the Notebook editor or Notebook viewer](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-nb-editor.html) * [Creating jobs for Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-pipelines.html) Parent topic:[Working in projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html)
# Creating and managing jobs in a project # You create jobs to run assets or files in tools, such as Data Refinery flows, SPSS Modeler flows, Notebooks, and scripts, in a project\. When you create a job, you define the properties for the job, such as the name, definition, environment runtime, schedule, and notification specifications on different pages\. You can run a job immediately or wait for the job to run at the next scheduled interval\. Each time a job is started, a job run is created, which you can monitor and use to compare with the job run history of previous runs\. You can view detailed information about each job run, job state changes, and job failures in the job run log\. How you create a job depends on the asset or file\. <!-- <table> --> Job creation options for assets or files | Asset or file | Create job in tool | Create job from the Assets page | More information | | --------------------------------------- | ------------------ | ------------------------------- | ---------------------------------------- | | Data Refinery flow | ✓ | ✓ | [Creating jobs in Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-dr.html) | | SPSS Modeler flow | ✓ | ✓ | [Creating jobs in SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-spss.html) | | Notebook created in the Notebook editor | ✓ | ✓ | [Creating jobs in the Notebook editor](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-nb-editor.html) | | Pipelines | ✓ | | [Creating jobs for Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-pipelines.html) | <!-- </table ""> --> ## Creating jobs from the Assets page ## You can create a job to run an asset from the project's **Assets** page\. **Required permissions** : You must have an **Editor** or **Admin** role in the project\.
Restriction: You cannot run a job by using an API key from a [service ID](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-authentication.html)\. To create jobs for a listed asset from the **Assets** page of a project: <!-- <ol> --> 1. Select the asset from the section for your asset type and choose **New job** from the menu icon with the list of options (![actions icon three vertical dots](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/actions.png)) at the end of the table row\. 2. Define the job details by entering a name and a description (optional)\. 3. If you can select **Setting**, specify the settings that you want for the job\. 4. If you can select **Configure**, choose an environment runtime for the job\. Depending on the asset type, you can optionally configure more settings, for example environment variables or script arguments\. To avoid accumulating too many finished job runs and job run artifacts, set how long to retain finished job runs and job run artifacts like logs or notebook results. You can either select the number of days to retain the job runs or the last number of job runs to keep. 5. On the **Schedule** page, you can optionally add a one\-time or repeating schedule\. If you select the **Repeat** option and unit of **Minutes** with the value of *n*, the job runs at the start of the hour, and then at every multiple of *n*. For example, if you specify a value of 11, it will run at 0, 11, 22, 33, 44, and 55 minutes of each hour. If you also select the **Start of Schedule** option, the job starts to run at the first multiple of *n* of the hour that occurs after the time that you provide in the **Start Time** field. For example, if you enter 10:24 for the **Start Time** value, and you select **Repeat** and set the job to repeat every 14 minutes, then your job will run at 10:42, 10:56, 11:00, 11:14, 11:28, 11:42, 11:56, and so on.
You can't change the time zone; the schedule uses your web browser's time zone setting. If you exclude certain weekdays, the job might not run as you would expect. The reason might be due to a discrepancy between the time zone of the user who creates the schedule, and the time zone of the compute node where the job runs. An API key is generated when you create a scheduled job, and future runs will use this API key. If you didn't create a scheduled job but choose to modify one, an API key is generated for you when you modify the job and future runs will use this API key. 6. (Optional): Select to see notifications for the job\. You can select the type of alerts to receive\. 7. Review the job settings\. Then, create the job and run it immediately, or create the job and run it later\. <!-- </ol> --> ## Managing jobs ## You can view all of the jobs that exist for your project from the project's **Jobs** page\. With **Admin** or **Editor** role for the project, you can view and edit the job details\. You can run jobs manually and you can delete jobs\. With **Viewer** role for the project, you can only view the job details\. You can't run or delete jobs with Viewer role\. To view the details of a specific job, click the job\. From the job's details page, you can: <!-- <ul> --> * *View the runs* for that job and the status of each run\. If a run failed, you can select the run and view the log tail or download the entire log file to help you troubleshoot the run\. A failed run might be related to a temporary connection or environment problem\. Try running the job again\. If the job still fails, you can send the log to Customer Support\. * *Edit job settings* by clicking **Edit job**, for example to change schedule settings or to pick another environment template\. * *Run the job manually* by clicking ![the run icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/run-job.png) from the job's action bar\. 
You can start a scheduled job based on the schedule and on demand\. * *Delete* the job by clicking ![the bin icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/delete-job.png) from the job's action bar\. <!-- </ul> --> ### Viewing and editing jobs in a tool ### You can view and edit job settings associated with an asset directly in the following tools: <!-- <ul> --> * Data Refinery * DataStage * Match 360 * Notebook editor or viewer * Pipelines <!-- </ul> --> #### Viewing and editing jobs in Data Refinery, Notebooks, and Pipelines #### <!-- <ol> --> 1. In the tool, click the Jobs icon ![the jobs icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/Run-schedule_Blue.png) from the toolbar and select **Save and view jobs**\. This action lists the jobs that exist for the asset\. 2. Select a job to see its details\. You can change job settings by clicking **Edit job**\. <!-- </ol> --> ## Learn more ## <!-- <ul> --> * [Creating jobs in Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-dr.html) * [Creating jobs in the Notebook editor or Notebook viewer](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-nb-editor.html) * [Creating jobs for Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-pipelines.html) <!-- </ul> --> **Parent topic:**[Working in projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html) <!-- </article "role="article" "> -->
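The repeat-every-n-minutes rule in step 5 ("the job runs at the start of the hour, and then at every multiple of n") can be sketched in a few lines. This is an illustration of the stated rule only; the actual scheduler's behavior around the documented Start Time example may differ:

```python
# Minutes past each hour at which a "repeat every n minutes" job runs,
# per the rule stated in step 5: the start of the hour plus every multiple of n.
def run_minutes(n: int):
    return [m for m in range(60) if m % n == 0]

print(run_minutes(11))  # [0, 11, 22, 33, 44, 55] -- matches the documented example
print(run_minutes(14))  # [0, 14, 28, 42, 56]
```

For n = 14 this reproduces the :00, :14, :28, :42, :56 pattern seen in the 10:24 Start Time example.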
83198577CAC405AD1A9BF68BE7A5CAEB020D57D4
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/leave-project.html?context=cdpaas&locale=en
Leaving a project
Leaving a project You can leave a project from within the project or from the Projects page. Restrictions If you are the only collaborator in the project with the Admin role, you must [assign the Admin role](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html) to another collaborator before you can leave the project. Leaving a project from within the project To leave a project from within the project: 1. Open the project. 2. On the Manage tab, go to the General page. 3. In the Danger zone section, click Leave project. 4. Click Leave. Leaving multiple projects To leave one or more projects from the Projects page: 1. Select View all projects from the navigation menu. 2. Select one or more projects to leave. 3. Click Leave. 4. Click Leave to confirm. Parent topic:[Projects ](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html)
# Leaving a project # You can leave a project from within the project or from the **Projects** page\. ## Restrictions ## If you are the only collaborator in the project with the **Admin** role, you must [assign the Admin role](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html) to another collaborator before you can leave the project\. ## Leaving a project from within the project ## To leave a project from within the project: <!-- <ol> --> 1. Open the project\. 2. On the **Manage** tab, go to the **General** page\. 3. In the **Danger zone** section, click **Leave project**\. 4. Click **Leave**\. <!-- </ol> --> ## Leaving multiple projects ## To leave one or more projects from the **Projects** page: <!-- <ol> --> 1. Select **View all projects** from the navigation menu\. 2. Select one or more projects to leave\. 3. Click **Leave**\. 4. Click **Leave** to confirm\. <!-- </ol> --> **Parent topic:**[Projects ](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html) <!-- </article "role="article" "> -->
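The restriction above (the sole collaborator with the Admin role must assign Admin to someone else before leaving) can be encoded as a small check. The function name and data shape are hypothetical, for illustration only:

```python
# Sketch of the documented rule: a collaborator cannot leave a project if
# they are the only collaborator with the Admin role.
def can_leave(user: str, collaborator_roles: dict) -> bool:
    admins = [u for u, role in collaborator_roles.items() if role == "Admin"]
    return not (collaborator_roles.get(user) == "Admin" and admins == [user])

roles = {"ann": "Admin", "bob": "Editor"}
print(can_leave("ann", roles))   # ann is the sole Admin, so she cannot leave yet

roles["bob"] = "Admin"           # assign the Admin role to another collaborator
print(can_leave("ann", roles))   # now ann can leave
```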
784686DA695F28F867BC35C4416CB8D767D58B7A
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-assets.html?context=cdpaas&locale=en
Managing assets in projects
Managing assets in projects You can manage assets in a project by adding them, editing them, or deleting them. * [Add data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html) * You can add other types of assets by clicking New asset or Import assets on the project's Assets page. * [Edit assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-assets.html?context=cdpaas&locale=en#editassets) * [Download assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/download.html) * [Delete assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-assets.html?context=cdpaas&locale=en#remove-asset) * [Search for assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/search-assets.html) Edit assets You can edit the properties of all types of assets, such as the asset name, description, and tags. See [Asset types and properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html). The role you need to edit an asset depends on the asset type. See [Project collaborator roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborator-permissions.html). Data assets from files, connected data assets, or imported data assets : - Click the data asset name to open the asset. For some types of data, you can see an [asset preview](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html). : - To edit the data asset properties, such as its name, tags, and description, click the corresponding edit icon (![edit icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/images/edit.svg)) on the information pane. : - To create or update a [profile](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/profile.html) of relational data, click the Profile tab. : - To cleanse and shape relational data, click Prepare data to open the data asset in Data Refinery.
: When you change the name of data assets with file attachments that you uploaded into the project, the file attachments are also renamed. You must update any references to the data asset in code-based assets, like notebooks, to the new data asset name, otherwise, the code-based asset won't run. Connection assets : Click the connection asset name to edit the connection properties, such as the name, description, and connection details. Assets that you create with tools : Click the name of the asset on the Assets page to open it in its tool. On the Assets page of a project, the lock icon (![Lock icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/lockicon-new.png)) indicates that another collaborator is editing the asset or locked the asset to prevent editing by other users. * Enabled lock: You can unlock the asset if you locked it or if you have the Admin role in the project. * Disabled lock: You can't unlock a locked asset if you didn't lock it and you have the Editor or Viewer role in the project. When you unlock an asset that another collaborator is editing, you take control of the asset. The other collaborator is not notified and any changes made by that collaborator are overwritten by your edits. Delete an asset from a project Required permissions : You must have the Admin or Editor role to delete assets from the project. To delete an asset from a project, choose the Delete or the Remove option from the action menu next to the asset on the project Assets page. When you delete an asset, its associated file, if it has one, is also deleted. However, when you delete a connected data asset, the data in the associated data source is not affected. Depending on the type of asset, other related assets might also be deleted. 
Learn more * [Asset types and properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html) Parent topic:[Projects ](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html)
# Managing assets in projects # You can manage assets in a project by adding them, editing them, or deleting them\. <!-- <ul> --> * [Add data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html) * You can add other types of assets by clicking **New asset** or **Import assets** on the project's *Assets* page\. * [Edit assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-assets.html?context=cdpaas&locale=en#editassets) * [Download assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/download.html) * [Delete assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-assets.html?context=cdpaas&locale=en#remove-asset) * [Search for assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/search-assets.html) <!-- </ul> --> ## Edit assets ## You can edit the properties of all types of assets, such as the asset name, description, and tags\. See [Asset types and properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html)\. The role you need to edit an asset depends on the asset type\. See [Project collaborator roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborator-permissions.html)\. **Data assets from files, connected data assets, or imported data assets** : \- Click the data asset name to open the asset\. For some types of data, you can see an [asset preview](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/previews.html)\. : \- To edit the data asset properties, such as its name, tags, and description, click the corresponding edit icon (![edit icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/images/edit.svg)) on the information pane\. : \- To create or update a [profile](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/profile.html) of relational data, click the **Profile** tab\. 
: \- To cleanse and shape relational data, click **Prepare data** to open the data asset in Data Refinery\. : When you change the name of data assets with file attachments that you uploaded into the project, the file attachments are also renamed\. You must update any references to the data asset in code\-based assets, like notebooks, to the new data asset name, otherwise, the code\-based asset won't run\. **Connection assets** : Click the connection asset name to edit the connection properties, such as the name, description, and connection details\. **Assets that you create with tools** : Click the name of the asset on the **Assets** page to open it in its tool\. On the **Assets** page of a project, the lock icon (![Lock icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/lockicon-new.png)) indicates that another collaborator is editing the asset or locked the asset to prevent editing by other users\. <!-- <ul> --> * Enabled lock: You can unlock the asset if you locked it or if you have the **Admin** role in the project\. * Disabled lock: You can't unlock a locked asset if you didn't lock it and you have the **Editor** or **Viewer** role in the project\. <!-- </ul> --> When you unlock an asset that another collaborator is editing, you take control of the asset\. The other collaborator is not notified and any changes made by that collaborator are overwritten by your edits\. ## Delete an asset from a project ## **Required permissions** : You must have the **Admin** or **Editor** role to delete assets from the project\. To delete an asset from a project, choose the **Delete** or the **Remove** option from the action menu next to the asset on the project **Assets** page\. When you delete an asset, its associated file, if it has one, is also deleted\. However, when you delete a connected data asset, the data in the associated data source is not affected\. Depending on the type of asset, other related assets might also be deleted\. 
## Learn more ## <!-- <ul> --> * [Asset types and properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html) <!-- </ul> --> **Parent topic:**[Projects ](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html) <!-- </article "role="article" "> -->
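The lock rules above reduce to a simple predicate: you can unlock an asset if you locked it yourself or if you have the Admin role in the project; with the Editor or Viewer role you cannot unlock another collaborator's lock. A minimal sketch, with a hypothetical helper (not a watsonx API):

```python
# Sketch of the documented unlock rules for a locked asset.
def can_unlock(role: str, user: str, locked_by: str) -> bool:
    # Enabled lock: the user who set the lock, or any project Admin.
    # Disabled lock: Editors and Viewers cannot unlock others' locks.
    return user == locked_by or role == "Admin"

print(can_unlock("Editor", "bob", locked_by="bob"))    # own lock: enabled
print(can_unlock("Editor", "bob", locked_by="ann"))    # someone else's: disabled
print(can_unlock("Admin", "carol", locked_by="ann"))   # Admin can always unlock
```

Note that, as the document warns, an Admin unlocking an asset that another collaborator is still editing takes control of it and can overwrite that collaborator's changes.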
7935B285F3E7DBBDCDA940A70CF92DC0E2ED6512
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html?context=cdpaas&locale=en
Working in projects
Working in projects A project is a collaborative workspace where you work with data and other assets to accomplish a particular goal. By default, your [sandbox project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/sandbox.html) is created automatically when you sign up for watsonx.ai. Your project can include these types of resources: * [Collaborators](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html?context=cdpaas&locale=en#collaboration) are the people who you work with in your project. * [Data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html?context=cdpaas&locale=en#data) are what you work with. Data assets often consist of raw data that you refine. * [Tools and their associated assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html?context=cdpaas&locale=en#tools) are how you work with data. * [Environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html?context=cdpaas&locale=en#env) are how you configure compute resources for running assets in tools. * [Jobs](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html?context=cdpaas&locale=en#jobs) are how you manage and schedule the running of assets in tools. * [Project documentation](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html?context=cdpaas&locale=en#docs) and notifications are how you stay informed about what's happening in the project. * [Asset storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html?context=cdpaas&locale=en#storage) is where project information and files are stored. * [Integrations](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html?context=cdpaas&locale=en#integ) are how you incorporate external tools. You can customize projects to suit your goals.
# Working in projects #

A project is a collaborative workspace where you work with data and other assets to accomplish a particular goal. By default, your [sandbox project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/sandbox.html) is created automatically when you sign up for watsonx.ai.

Your project can include these types of resources:

* [Collaborators](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html?context=cdpaas&locale=en#collaboration) are the people who you work with in your project.
* [Data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html?context=cdpaas&locale=en#data) are what you work with. Data assets often consist of raw data that you refine.
* [Tools and their associated assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html?context=cdpaas&locale=en#tools) are how you work with data.
* [Environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html?context=cdpaas&locale=en#env) are how you configure compute resources for running assets in tools.
* [Jobs](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html?context=cdpaas&locale=en#jobs) are how you manage and schedule the running of assets in tools.
* [Project documentation](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html?context=cdpaas&locale=en#docs) and notifications are how you stay informed about what's happening in the project.
* [Asset storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html?context=cdpaas&locale=en#storage) is where project information and files are stored.
* [Integrations](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html?context=cdpaas&locale=en#integ) are how you incorporate external tools.
You can customize projects to suit your goals. You can change the contents of your project and almost all of its properties at any time. However, you must make this choice when you create the project because you can't change it later:

* The instance of IBM Cloud Object Storage to use for project storage.

You can view projects that you create and collaborate in by selecting **Projects > View all projects** in the navigation menu, or by viewing the **Projects** pane on the main page.

## Collaboration in projects ##

As a project creator, you can add other collaborators and assign them roles that control which actions they can take. You automatically have the **Admin** role in the project, and if you give other collaborators the **Admin** role, they can add collaborators too. See [Adding collaborators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html) and [Project collaborator roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborator-permissions.html).

Tip: If appropriate, add at least one other user as a project administrator to ensure that someone can manage the project if you are unavailable.

### Collaboration on assets ###

All collaborators work with the same copy of each asset. Only one collaborator can edit an asset at a time. While a collaborator is editing an asset in a tool, that asset is locked. Other collaborators can view a locked asset, but not edit it. See [Managing assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-assets.html).
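The asset-locking behavior can be sketched as a simple lock model: one editor at a time, with viewing always allowed. This is purely illustrative; the class and method names are hypothetical and not part of any Watson Studio API.

```python
# Minimal sketch of the locking rule: one collaborator edits at a time,
# everyone else can only view the locked asset. Names are illustrative.

class Asset:
    def __init__(self, name):
        self.name = name
        self.locked_by = None  # user currently editing, or None

    def open_for_edit(self, user):
        """Lock the asset for this user, or refuse if another user holds it."""
        if self.locked_by is not None and self.locked_by != user:
            return False  # locked by someone else: view-only
        self.locked_by = user
        return True

    def can_view(self, user):
        return True  # viewing a locked asset is always allowed

    def close(self, user):
        """Release the lock when the editing collaborator is done."""
        if self.locked_by == user:
            self.locked_by = None

notebook = Asset("churn-analysis")
assert notebook.open_for_edit("alice")    # Alice starts editing
assert not notebook.open_for_edit("bob")  # Bob cannot edit while it is locked
assert notebook.can_view("bob")           # but Bob can still view it
notebook.close("alice")
assert notebook.open_for_edit("bob")      # now Bob can edit
```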
## Data assets ##

You can add these types of data assets to projects:

* Data assets from local files or the Samples
* Connections to cloud and on-premises data sources
* Connected data assets from an existing connection asset that provide read-only access to a table or file in an external data source
* Folder data assets to view the files within a folder in a file system

Learn more about data assets:

* [Connectors](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
* [Adding data to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html)
* [Data asset types and their properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html#data)
* [Searching for assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/search-assets.html)

## Tools and their associated assets ##

When you run a tool, you create an asset that contains the information for a specific goal. For example, when you run the Data Refinery tool, you create a Data Refinery flow asset that defines the set of ordered operations to run on a specific data asset. Each tool has one or more types of associated assets that run in the tool. For a mapping of assets to the tools that you use to create them, see [Asset types and properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html).

## Environments ##

Environments control your compute resources. An environment template specifies hardware and software resources to instantiate the environment runtimes that run your assets in tools. Some tools have an automatically selected environment template. However, for other tools, you can choose between multiple environments. When you create an asset in a tool, you assign an environment to it. You can change the environment for an asset when you run it.
Watson Studio includes a set of default environment templates that vary by coding language, tool, and compute engine type. You can also create custom environment templates or add services that provide environment templates.

The compute resources that you consume in a project are tracked. Depending on your offering plan, you either have a limit on your monthly compute resources or you pay for all compute resources. See [Environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html).

## Jobs ##

A job is a single run of an asset in a tool with a specified environment runtime. You can schedule one-time or repeating jobs, and monitor, edit, stop, or cancel jobs. See [Jobs](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.html).

## Asset storage ##

Each project has a dedicated, secure storage bucket that contains:

* Data assets that you upload to the project as files.
* Data assets from files that you copy from another workspace.
* Files that you save to the project with a tool.
* Files for assets that run in tools, such as notebooks.
* Saved models.
* The project readme file and internal project files.

When you create a project, you must select an instance of IBM Cloud Object Storage or create a new instance. You cannot change the IBM Cloud Object Storage instance after you create the workspace. See [Object storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/storage-options.html). When you delete a project, its storage bucket is also deleted.

## Integrations with external tools ##

Integrations provide a method to interact with tools that are external to the project. You can integrate with a Git repository to [publish notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/github-integration.html).
## Project documentation and notifications ##

When you create a project, you can add a short description to document the purpose or goal of the project. You can edit the description later, on the project's **Settings** page. You can also mark the project as sensitive. When users open a project that is marked as sensitive, a notification is displayed stating that no data assets can be downloaded or exported from the project.

The **Overview** page of a project contains a readme file where you can document the status or results of the project. The readme file uses standard [Markdown formatting](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/markd-jupyter.html). Collaborators with the **Admin** or **Editor** role can edit the readme file.

You can view recent asset activity in the **Assets** pane on the **Overview** page, and filter the assets by selecting **By you** or **By all** in the dropdown. **By you** lists assets that you edited, ordered by most recent. **By all** lists assets that are edited by others and also by you, ordered by most recent.

All collaborators in a project are notified when a collaborator changes an asset.
## Learn more ##

* [Your sandbox project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/sandbox.html)
* [Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html)
* [Administering a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/admin-project.html)
* [Adding collaborators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html)
* [Managing assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-assets.html)
* [Downloading data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/download.html)
* [Environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html)
* [Jobs](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.html)
* [Adding data](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html)
* [Marking a project as sensitive](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/mark-sensitive.html)
C324305E8F756140B7B96492D73D35BB32794119
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/mark-sensitive.html?context=cdpaas&locale=en
Marking a project as sensitive
# Marking a project as sensitive #

When you create a project, you can mark the project as sensitive to prevent project collaborators from moving sensitive data out of the project. Marking a project as sensitive prevents collaborators of a project, including administrators, from downloading or exporting data assets, connections, or connected data from the project. These sensitive assets cannot be added to a catalog or promoted to a space either. Project collaborators with the **Admin** or **Editor** role can export assets like notebooks or models from the project.

When users open a project that is marked as sensitive, a notification is displayed stating that no data assets can be downloaded or exported from the project.

## Restrictions ##

* You cannot mark a project as sensitive after the project is created.
* You cannot mark projects that use Git integration as sensitive.

**Parent topic:** [Administering projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/admin-project.html)
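The export rules for a sensitive project reduce to a simple decision: data assets, connections, and connected data are blocked for everyone (including administrators), while other assets such as notebooks and models follow the normal role check. A rough sketch of that rule, with illustrative (not official) asset-type and role names:

```python
# Sketch of the sensitive-project export rule described above.
# The type and role strings are hypothetical labels, not a real API.

DATA_ASSET_TYPES = {"data_asset", "connection", "connected_data"}

def can_export(asset_type, role, project_is_sensitive):
    """Return True if a collaborator with this role may export the asset."""
    if project_is_sensitive and asset_type in DATA_ASSET_TYPES:
        return False  # blocked for every collaborator, including Admins
    # Notebooks, models, and similar assets: Admin or Editor may export.
    return role in {"admin", "editor"}

assert not can_export("connection", "admin", project_is_sensitive=True)
assert can_export("notebook", "editor", project_is_sensitive=True)
assert can_export("data_asset", "editor", project_is_sensitive=False)
```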
A6D3281CF9382FA606CF60727452A304A5CCDFA5
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/platform-conn.html?context=cdpaas&locale=en
Adding platform connections
# Adding platform connections #

You can add connections to the Platform assets catalog to share them across your organization. All collaborators in the Platform assets catalog can see the connections in the catalog. However, only users with the credentials for the data source can use a platform connection in a project to create a connected data asset.

**Required permissions**: To create a platform connection, you must be a collaborator in the Platform assets catalog with one of these roles:

* **Editor**
* **Admin**

If you're not a collaborator in the Platform assets catalog, ask someone who is a collaborator to add you or tell you who has the **Admin** role in the catalog.

You can create connections to these types of data sources:

* IBM Cloud services
* Other cloud services
* On-premises databases

See [Connectors](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) for a full list of data sources.

Watch this video to see how to add platform connections. Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform. This video provides a visual method to learn the concepts and tasks in this documentation.

To create a platform connection:

1. From the main menu, choose **Data > Platform connections**.
2. Click **New connection**.
3. Choose a data source.
4. If necessary, enter the connection information required for your data source. Typically, you need to provide information like the host, port number, username, and password.
5. If prompted, specify whether you want to use personal or shared credentials. You cannot change this option after you create the connection. The credentials type for the connection, either Personal or Shared, is set by the account owner on the [Account page](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.html). The default setting is **Shared**.
   * **Personal**: With personal credentials, each user must specify their own credentials to access the connection. Each user's credentials are saved but are not shared with any other users. Use personal credentials instead of shared credentials to protect credentials. For example, if you use personal credentials and another user changes the connection properties (such as the hostname or port number), the credentials are invalidated to prevent malicious redirection.
   * **Shared**: With shared credentials, all users access the connection with the credentials that you provide.
6. To connect to a database that is not externalized to the internet (for example, behind a firewall), see [Connecting to data behind a firewall](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html).
7. Click **Create**. The connection appears on the **Connections** page. You can edit the connection by clicking the connection name.

Alternatively, you can create a connection in a project and then publish it to the Platform assets catalog. To publish a connection from a project to the Platform assets catalog:

1. Locate the connection in the project's **Assets** tab in the **Data assets** section.
2. From the Actions menu (![Actions icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/images/actions.png)), select **Publish to catalog**.
3. Select **Platform assets catalog** and click **Publish**.
## Next step ##

* [Add a connection to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html)

## Learn more ##

* [Connectors](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html)
* [Creating the Platform assets catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/catalog/platform-assets.html)
* [Set the credentials for connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.html#set-the-credentials-for-connections)

**Parent topic:** [Preparing data](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/get-data.html)
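The personal-versus-shared credentials behavior, including the invalidation of personal credentials when connection properties change, can be modeled with a small sketch. All class, method, and host names here are hypothetical illustrations of the rules above, not the platform's actual connection API.

```python
# Illustrative model of shared vs. personal connection credentials.
# Hypothetical names; not the Watson Data connection API.

class Connection:
    def __init__(self, host, port, credentials_type, shared_credentials=None):
        self.host = host
        self.port = port
        self.credentials_type = credentials_type  # "shared" or "personal"
        self.shared_credentials = shared_credentials
        self.personal_credentials = {}  # per-user, never shared with others

    def credentials_for(self, user):
        """Shared: everyone uses the creator's credentials.
        Personal: each user must have supplied their own."""
        if self.credentials_type == "shared":
            return self.shared_credentials
        return self.personal_credentials.get(user)

    def set_personal_credentials(self, user, creds):
        self.personal_credentials[user] = creds

    def update_properties(self, host=None, port=None):
        """Changing host or port invalidates personal credentials to
        prevent malicious redirection, per the rule described above."""
        if host:
            self.host = host
        if port:
            self.port = port
        if self.credentials_type == "personal":
            self.personal_credentials.clear()

conn = Connection("db.example.com", 50000, "personal")
conn.set_personal_credentials("alice", ("alice", "secret"))
assert conn.credentials_for("alice") == ("alice", "secret")
assert conn.credentials_for("bob") is None        # Bob must supply his own
conn.update_properties(host="other.example.com")
assert conn.credentials_for("alice") is None      # invalidated after change
```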
E70109F320A53829D66F6E07EE0A9B79B59AEE13
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/sandbox.html?context=cdpaas&locale=en
Your sandbox project
# Your sandbox project #

A project is where you work with data and models by using tools. When you sign up for watsonx.ai, your sandbox project is created automatically, and you can start working in it immediately.

Initially, your sandbox project is empty. To start working, click a task tile on the home page, or go to the **Assets** page in your project, click **New asset**, and select a task. Each task can result in an asset that is saved in the project. Many tasks include samples that you can use. You can find sample prompts, notebooks, data sets, and other assets in the Samples from the home page.

You can share your work by [adding collaborators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html) to your project. If you need to work with data, you can [add data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html) to your project.

If your sandbox project is your only project, then any task that you select occurs in the context of your sandbox project. When you have multiple projects, you can change the default project by selecting a project from the **Open in** list on the home page.

Other projects that you create have the same functionality as your sandbox project, except that your Watson Machine Learning service instance is automatically associated with your sandbox project. You must manually associate your Watson Machine Learning service instance with other projects.

## Manually creating a sandbox project ##

If you switch from Cloud Pak for Data as a Service to watsonx, you can create a sandbox project from the watsonx home page when the following conditions are met:

* You have one or more instances of the Watson Machine Learning service.
* You have exactly one instance of the IBM Cloud Object Storage service.

To manually create a sandbox project, click **Create sandbox** in the **Projects** section.
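The two conditions can be expressed as a single check. This is only a sketch of the documented rule with made-up function and instance names, not a real platform API.

```python
# Sketch of the "Create sandbox" availability rule described above:
# at least one Watson Machine Learning instance, and exactly one
# IBM Cloud Object Storage instance. Names are illustrative.

def can_create_sandbox(wml_instances, cos_instances):
    return len(wml_instances) >= 1 and len(cos_instances) == 1

assert can_create_sandbox(["wml-1"], ["cos-1"])
assert not can_create_sandbox([], ["cos-1"])                   # no WML instance
assert not can_create_sandbox(["wml-1"], ["cos-1", "cos-2"])   # more than one COS instance
```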
Otherwise, you can create a different project. See [Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html). You are guided through associating a Watson Machine Learning service with the project when you open certain tools.

You can switch an existing project from Cloud Pak for Data as a Service to watsonx. See [Switching the platform for a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/switch-platform.html).

## Learn more ##

* [Asset types and properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html)
* [Adding collaborators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html)
* [Add data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html)
* [Manage assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-assets.html)
* [Adding associated services to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assoc-services.html)
* [Object storage for workspaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/storage-options.html)

**Parent topic:** [Projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html)
977C81385F7825613F1EDBD3C0DBF44C259BA8D7
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/search-assets.html?context=cdpaas&locale=en
Searching for assets and artifacts across the platform
Searching for assets and artifacts across the platform Searching for assets across the platform You can use the global search bar to search for assets across all the projects and deployment spaces to which you have access. * [Requirements and restrictions](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/search-assets.html?context=cdpaas&locale=enrestrictions) * [Searching for assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/search-assets.html?context=cdpaas&locale=ensearch) * [Selecting results](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/search-assets.html?context=cdpaas&locale=enresult) Requirements and restrictions You can find assets and artifacts under the following circumstances. * Required permissions You can have any role in projects or deployment spaces to find assets. * Workspaces * You can search for assets that are in these workspaces: * Projects * Deployment spaces * Types of assets You can search for all types of assets. * Restrictions * Your search results include only assets in workspaces that you belong to. Searching for assets To search for an asset, you can enter one or more words in the global search field. 
The search results are matches from these properties of assets: * Name * Description * Tags * Table name You can customize your searches with these techniques: * [Searching for the start of a word](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/search-assets.html?context=cdpaas&locale=en#start) * [Searching for a part of a word](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/search-assets.html?context=cdpaas&locale=en#part) * [Searching for a phrase](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/search-assets.html?context=cdpaas&locale=en#phrase) * [Searching for multiple alternative words](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/search-assets.html?context=cdpaas&locale=en#multiple) Searching for the start of a word To search for words starting with a letter or letters, enter the first 1-3 letters of the word. If you enter only one letter, words starting with that letter are returned. If you enter two or three letters, words starting with those letters are prioritized over words that merely contain those letters. For example, if you search for i, you get results like initial and infinite, but not definite. If you search for in, you also get results that contain definite, ranked lower in the results list. Searching for a part of a word To search for partial word matches, enter more than three letters. For example, if you search for conn, you might get results like connection and disconnect. Only the first 12 characters in a word are used in the search. Any search terms longer than 12 characters are truncated to the first 12 characters. Searches for partial words don't work in the description fields. Searching for a phrase To search for a specific phrase, surround the phrase with double quotation marks. For example, if you search for "payment plan prediction", your results contain exactly that phrase.
You can include a quoted phrase within a longer search string. For example, if you search for credit card "payment plan prediction", you might get results that contain credit card, credit, card, and payment plan prediction. When you search for a phrase in English, natural language analysis optimizes the search results in the following ways: * Words that are not important to the search intent are removed from the search query. * Phrases in the search string that are common in English are automatically ranked higher than results for individual words. For example, if you search for find credit card interest in United States, you might get the following results: * Matches for credit card interest and United States are prioritized. * Matches for credit, card, interest, United, and States are returned. * Matches for in are not returned. Searching for multiple alternative words To find results that contain any of your search terms, enter multiple words. For example, if you search for machine learning, the results contain the word machine, the word learning, or both words. Selecting results To select the best result, look at which property of the asset or artifact matches your search string. The matching text is highlighted. The highest scoring results are matches to the name of the asset. Multiple assets can have the same name. However, the name of the project or deployment space is shown underneath the asset name so you can determine which result is the one you want. Click an asset name to view it in its project or deployment space. Results are prioritized in this order: 1. Matches of quoted phrases or common phrases (for English only) 2. Exact matches of complete words 3. Partial matches of complete words 4. Fuzzy matches From the search results, you can click Preview to view more information in the side panel.
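The prioritization rules above can be pictured in code. The following is a minimal toy sketch, not the platform's actual search implementation; the scoring tiers, helper names, and sample asset names are assumptions for illustration only (fuzzy matching is omitted for brevity):

```python
import re

def score(query: str, name: str) -> int:
    """Toy scoring that mimics the documented priority order:
    quoted phrase (3) > exact word (2) > partial word (1) > no match (0)."""
    text = name.lower()
    phrases = re.findall(r'"([^"]+)"', query)            # quoted phrases in the query
    words = re.sub(r'"[^"]+"', " ", query).lower().split()  # remaining single words
    if any(p.lower() in text for p in phrases):
        return 3                                          # quoted-phrase match
    tokens = set(re.split(r"\W+", text))
    if any(w in tokens for w in words):
        return 2                                          # exact match of a complete word
    if any(w[:12] in text for w in words):                # terms truncated to 12 characters
        return 1                                          # partial match
    return 0

# Hypothetical asset names, ranked for the query from the example above.
assets = ["Payment plan prediction model", "Credit risk table", "Connections list"]
q = 'credit card "payment plan prediction"'
ranked = sorted(assets, key=lambda a: score(q, a), reverse=True)
```

Running this ranks the phrase match first, the exact-word match second, and the non-match last, mirroring the documented order.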
Filtering and sorting results You can filter search results by these properties: * Type of asset * Tags * Owners (for some types of assets) * The user who modified the asset * The time period when the asset was last modified * Projects (assets only) * Workspaces * Schema * Table * Contains: Feature group You can sort results by relevance or by the last modified date. Parent topic:[Asset types and properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html)
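The filter properties listed above behave like optional predicates over each result's metadata; an omitted filter matches everything. A small sketch of that model (the record field names and sample data are illustrative assumptions, not the platform's schema):

```python
from datetime import datetime

# Hypothetical search-result records; field names are illustrative only.
results = [
    {"name": "Sales data", "type": "data_asset", "tags": ["sales"],
     "modified": datetime(2024, 5, 1), "workspace": "Project A"},
    {"name": "Churn model", "type": "model", "tags": ["ml"],
     "modified": datetime(2024, 6, 15), "workspace": "Space B"},
]

def filter_results(results, asset_type=None, tag=None, modified_after=None):
    """Keep results that satisfy every supplied filter; omitted filters match all."""
    out = []
    for r in results:
        if asset_type and r["type"] != asset_type:
            continue
        if tag and tag not in r["tags"]:
            continue
        if modified_after and r["modified"] < modified_after:
            continue
        out.append(r)
    return out

# Sort the filtered results by last modified date, newest first.
recent = sorted(filter_results(results, modified_after=datetime(2024, 1, 1)),
                key=lambda r: r["modified"], reverse=True)
```

Sorting by relevance would instead use a match score like the one in the ranking rules; sorting by last modified date is a plain sort on the timestamp, as shown.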
# Searching for assets and artifacts across the platform # # Searching for assets across the platform # You can use the global search bar to search for assets across all the projects and deployment spaces to which you have access\. <!-- <ul> --> * [Requirements and restrictions](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/search-assets.html?context=cdpaas&locale=en#restrictions) * [Searching for assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/search-assets.html?context=cdpaas&locale=en#search) * [Selecting results](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/search-assets.html?context=cdpaas&locale=en#result) <!-- </ul> --> ## Requirements and restrictions ## You can find assets and artifacts under the following circumstances\. <!-- <ul> --> * **Required permissions** You can have any role in projects or deployment spaces to find assets. <!-- </ul> --> <!-- <ul> --> * **Workspaces** <!-- <ul> --> * You can search for assets that are in these workspaces: <!-- <ul> --> * Projects * Deployment spaces <!-- </ul> --> <!-- </ul> --> * **Types of assets** You can search for all types of assets. * **Restrictions** <!-- <ul> --> * Your search results include only assets in workspaces that you belong to. <!-- </ul> --> <!-- </ul> --> ## Searching for assets ## To search for an asset, you can enter one or more words in the global search field\. 
The search results are matches from these properties of assets: <!-- <ul> --> * Name * Description * Tags * Table name <!-- </ul> --> You can customize your searches with these techniques: <!-- <ul> --> * [Searching for the start of a word](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/search-assets.html?context=cdpaas&locale=en#start) * [Searching for a part of a word](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/search-assets.html?context=cdpaas&locale=en#part) * [Searching for a phrase](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/search-assets.html?context=cdpaas&locale=en#phrase) * [Searching for multiple alternative words](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/search-assets.html?context=cdpaas&locale=en#multiple) <!-- </ul> --> ### Searching for the start of a word ### To search for words starting with a letter or letters, enter the first 1\-3 letters of the word\. If you enter only one letter, words starting with that letter are returned\. If you enter two or three letters, words starting with those letters are prioritized over words that merely contain those letters\. For example, if you search for `i`, you get results like `initial` and `infinite`, but not `definite`\. If you search for `in`, you also get results that contain `definite`, ranked lower in the results list\. ### Searching for a part of a word ### To search for partial word matches, enter more than three letters\. For example, if you search for `conn`, you might get results like `connection` and `disconnect`\. Only the first 12 characters in a word are used in the search\. Any search terms longer than 12 characters are truncated to the first 12 characters\. Searches for partial words don't work in the description fields\. ### Searching for a phrase ### To search for a specific phrase, surround the phrase with double quotation marks\. 
For example, if you search for `"payment plan prediction"`, your results contain exactly that phrase\. You can include a quoted phrase within a longer search string\. For example, if you search for `credit card "payment plan prediction"`, you might get results that contain `credit card`, `credit`, `card`, and `payment plan prediction`\. When you search for a phrase in English, natural language analysis optimizes the search results in the following ways: <!-- <ul> --> * Words that are not important to the search intent are removed from the search query\. * Phrases in the search string that are common in English are automatically ranked higher than results for individual words\. <!-- </ul> --> For example, if you search for `find credit card interest in United States`, you might get the following results: <!-- <ul> --> * Matches for `credit card interest` and `United States` are prioritized\. * Matches for `credit`, `card`, `interest`, `United`, and `States` are returned\. * Matches for `in` are not returned\. <!-- </ul> --> ### Searching for multiple alternative words ### To find results that contain *any* of your search terms, enter multiple words\. For example, if you search for `machine learning`, the results contain the word `machine`, the word `learning`, or both words\. ## Selecting results ## To select the best result, look at which property of the asset or artifact matches your search string\. The matching text is highlighted\. The highest scoring results are matches to the name of the asset\. Multiple assets can have the same name\. However, the name of the project or deployment space is shown underneath the asset name so you can determine which result is the one you want\. Click an asset name to view it in its project or deployment space\. Results are prioritized in this order: <!-- <ol> --> 1. Matches of quoted phrases or common phrases (for English only) 2. Exact matches of complete words 3. Partial matches of complete words 4. 
Fuzzy matches <!-- </ol> --> From the search results, you can click **Preview** to view more information in the side panel\. ### Filtering and sorting results ### You can filter search results by these properties: <!-- <ul> --> * Type of asset * Tags * Owners (for some types of assets) * The user who modified the asset * The time period when the asset was last modified * Projects (assets only) * Workspaces * Schema * Table * Contains: Feature group <!-- </ul> --> You can sort results by the most relevant or the last modified date\. **Parent topic:**[Asset types and properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html) <!-- </article "role="article" "> -->
44BA508199B214448CB22B7658127E16DD4E7ABF
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html?context=cdpaas&locale=en
Connecting to data behind a firewall
Connecting to data behind a firewall To connect to a database that is not accessible via the internet (for example, behind a firewall), you must set up a secure communication path between your on-premises data source and IBM Cloud. Use a Satellite Connector, a Satellite location, or a Secure Gateway instance for the secure communication path. * [Set up a Satellite Connector](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html?context=cdpaas&locale=en#satctr): Satellite Connector is the replacement for Secure Gateway. Satellite Connector uses lightweight, Docker-based agents to create secure and auditable communication from your on-prem, cloud, or edge environment back to IBM Cloud. Your infrastructure needs only a container host, such as Docker. For more information, see [Satellite Connector overview](https://cloud.ibm.com/docs/satellite?topic=satellite-understand-connectors&interface=ui). * [Set up a Satellite location](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html?context=cdpaas&locale=en#sl): A Satellite location provides the same secure communications to IBM Cloud as a Satellite Connector but adds high availability access by default plus the ability to communicate from IBM Cloud to your on-prem location. It supports managed cloud services on premises, such as Managed OpenShift and Managed Databases, supported remotely by IBM Cloud PaaS SRE resources. A Satellite location requires at least three x86 hosts in your infrastructure for the HA control plane. A Satellite location is a superset of the capabilities of a Satellite Connector. If you need only client data communication, set up a Satellite Connector. * [Configure a Secure Gateway](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html?context=cdpaas&locale=en#gateway): Secure Gateway is IBM Cloud's former solution for communication between IBM Cloud and on-prem or third-party cloud environments.
Secure Gateway is now [deprecated by IBM Cloud](https://cloud.ibm.com/docs/SecureGateway?topic=SecureGateway-dep-overview). For a new connection, set up a Satellite Connector instead. Set up a Satellite Connector To set up a Satellite Connector, you create the Connector in your IBM Cloud account. Next, you configure agents to run in your local Docker host platform on-premises. Finally, you create the endpoints for your data source that IBM watsonx uses to access the data source from IBM Cloud. Requirements for a Satellite Connector Required permissions : You must have Administrator access to the Satellite service in IAM access policies to do the steps in IBM Cloud. Required host systems : Minimum one x86 Docker host in your own infrastructure to run the Connector container. See [Minimum requirements](https://cloud.ibm.com/docs/satellite?topic=satellite-understand-connectors&interface=ui#min-requirements). Setting up a Satellite Connector Note: Not all connections support Satellite. If the connection supports Satellite, the IBM Cloud Satellite tile will be available in the Private Connectivity section of the Create connection form. Alternatively, you can filter all the connections that support Satellite in the New connection page. 1. Access the Create connector page in IBM Cloud from one of these places: * Log in to the [Connectors](https://cloud.ibm.com/satellite/connectors) page in IBM Cloud. * In IBM watsonx: 1. Go to the project page. Click the Assets tab. 2. Click New asset > Connect to a data source. 3. Select the IBM watsonx connector. 4. In the Create connection page, scroll down to the Private connectivity section, and click the IBM Cloud Satellite tile. 5. Click Configure Satellite and then log in to IBM Cloud. 6. Click Create connector. 2. Follow the steps for [Creating a Connector](https://cloud.ibm.com/docs/satellite?topic=satellite-create-connector). 3. Set up the Connector agent containers in your local Docker host environment.
For high availability, use three agents per connector that are deployed on separate Docker hosts. It is best to use a separate infrastructure and network connectivity for each agent. Follow the steps for [Running a Connector agent](https://cloud.ibm.com/docs/satellite?topic=satellite-run-agent-locally). The agents will appear in the Active Agents list for the connector. 4. In IBM watsonx, go back to the Create connection page. In the Private connectivity section, click Reload, and then select the Satellite Connector that you created. In the [Satellite Connectors dashboard](https://cloud.ibm.com/satellite/connectors) in IBM Cloud, for each connection that you create, a user endpoint is added in the Satellite Connector. Set up a Satellite location Use the Satellite location feature of IBM Cloud Satellite to securely connect to a Satellite location that you configure for your IBM Cloud account. Requirements for a Satellite location Required permissions : You must be the Admin in the IBM Cloud account to do the tasks in IBM Cloud. Required host systems : You need at least three computers or virtual machines in your own infrastructure to act as Satellite hosts. Confirm the [host system requirements](https://cloud.ibm.com/docs/satellite?topic=satellite-host-reqs). (The IBM Cloud docs instructions for additional features such as Red Hat OpenShift clusters and Kubernetes are not required for a connection in IBM watsonx.) Note: Not all connections support Satellite. If the connection supports Satellite, the IBM Cloud Satellite tile will be available in the Private Connectivity section of the Create connection form. Alternatively, you can filter all the connections that support Satellite in the New connection page. Setting up a Satellite location Configure the Satellite location in IBM Cloud. 
* [Task 1: Create a Satellite location](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html?context=cdpaas&locale=en#task1) * [Task 2: Attach the hosts to the Satellite location](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html?context=cdpaas&locale=en#task2) * [Task 3: Assign the hosts to the control plane](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html?context=cdpaas&locale=en#task3) * [Task 4: Create the connection secured with a Satellite location](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html?context=cdpaas&locale=en#task4) * [Maintaining the Satellite location](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html?context=cdpaas&locale=en#maintain) Task 1: Create a Satellite location A Satellite location is a representation of an environment in your infrastructure provider, such as an on-prem data center or cloud. To connect to data sources in IBM watsonx, you need three computers or virtual machines. To create the Satellite location: 1. Access the Create a Satellite location setup page in IBM Cloud from one of these places: * Log in to [IBM Cloud](https://cloud.ibm.com/satellite/overview), and select Create location. * In IBM watsonx: 1. Go to the project page. Click the Assets tab. 2. Click New asset > Connect to a data source. 3. Select the connector. 4. In the Create connection page, scroll down to the Private connectivity section, and click the IBM Cloud Satellite tile. 5. Click Configure Satellite and then log in to IBM Cloud. 6. Click Create location. These instructions follow the On-premises & edge template. Depending on your infrastructure, you can select a different template. Refer to the template instructions and the information at [Understanding Satellite location and hosts](https://cloud.ibm.com/docs/satellite?topic=satellite-location-host) in the IBM Cloud docs. 2. 
Click Edit to modify the Satellite location information: * Name: You can use this field to differentiate between different networks such as my US East network or my Japan network. * The Tags and Description fields are optional. * Managed from: Select the IBM Cloud region that is closest to where your host machines physically reside. * Resource group: is set to default by default. * Zones: IBM automatically spreads the control plane instances across three zones within the same IBM Cloud multizone metro. For example, if you manage your location from the wdc metro in the US East region, your Satellite location control plane instances are spread across the us-east-1, us-east-2, and us-east-3 zones. This zonal spread ensures that your control plane is available, even if one zone becomes unavailable. * Red Hat CoreOS: Do not select this option. Leave it cleared or as No. * Object storage: Click Edit to enter the exact name of an existing IBM Cloud Object Storage bucket that you want to use to back up Satellite location control plane data. Otherwise, a new bucket is automatically created in an Object Storage instance in your account. 3. Review your order details, and then click Create location. A location control plane is deployed to one of the zones that are located in the IBM Cloud region that you selected. The control plane is ready for you to attach hosts to it. Task 2: Attach the hosts to the Satellite location Attach three hosts that conform to the [host requirements](https://cloud.ibm.com/docs/satellite?topic=satellite-host-reqs) to the Satellite location. Important considerations for Satellite location hosts * Satellite hosts are dedicated servers and cannot be shared with other applications. You cannot log in to a host with SSH. The root password will be changed. * You need only three hosts for IBM watsonx connections. * Worker nodes are not required. Only control plane hosts are needed for IBM watsonx connections. 
* The Red Hat OpenShift Container Platform (OCP) is not needed for IBM watsonx connections. * CoreOS Container Linux is not needed for IBM watsonx connections. * Hosts connect to IBM Cloud with the TLS 1.3 protocol. To attach the hosts to the Satellite location: 1. From the [Satellite Locations dashboard](https://cloud.ibm.com/satellite/locations), click the name of your location. 2. Click Attach Hosts to generate and download a script. 3. Run the script on all the hosts to be attached to the Satellite location. 4. Save the attach script in case you attach more hosts to the location in the future. The token in the attach script is an API key, which must be treated and protected as sensitive information. See [Maintaining the Satellite location](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html?context=cdpaas&locale=en#maintain). Task 3: Assign the hosts to the control plane To assign the hosts: 1. From the [Satellite Locations dashboard](https://cloud.ibm.com/satellite/locations), click the name of your location. 2. For each host, click the overflow menu (![Overflow menu](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/actions.png)) and then select Assign. Assign one host to each zone. Task 4: Create the connection secured with a Satellite location To create the secure connection: 1. In IBM watsonx, go to the project page. Click the Assets tab. 2. Click New asset > Connect to a data source. 3. Select the connector. 4. In the Create connection form, complete the connection details. The hostname or IP address and the port of the data source must be available from each host that is attached to the Satellite location. 5. Click Reload, and then select the Satellite location that you created.
In the [Satellite Locations dashboard](https://cloud.ibm.com/satellite/locations) in IBM Cloud, for each connection that you create, a link endpoint is created with Destination type Location, and Created by Connectivity in the Satellite location. Maintaining the Satellite location * The host attach script expires one year from the creation date. To make sure that the hosts don't have authentication problems, download a new copy of the host attach script at least once per year. * Save the attach script in case you attach more hosts to the location in the future. If you generate a new host attach script, it detaches all the existing hosts. * Hosts can be reclaimed by detaching them from the Satellite location and reloading the operating system in the infrastructure provider. Configure a Secure Gateway The IBM Cloud Secure Gateway service provides a remote client to create a secure connection to a database that is not externalized to the internet. You can provision a Secure Gateway service in one service region and use it in service instances that you provisioned in other regions. After you create an instance of the Secure Gateway service, you add a Secure Gateway. Important: Secure Gateway is deprecated by IBM Cloud. For information, see [Secure Gateway deprecation overview and timeline](https://cloud.ibm.com/docs/SecureGateway?topic=SecureGateway-dep-overview). Prerequisite When you log in to IBM watsonx, select Enable Cloud Foundry access. Note: Not all connections support Secure Gateway. If the connection supports Secure Gateway, the IBM Cloud Secure Gateway tile will be available in the Private Connectivity section of the Create connection form. Alternatively, you can filter all the connections that support Secure Gateway in the New connection page. To configure a secure gateway: 1. Configure a secure gateway from the Create connection screen: 1. Click the IBM Cloud Secure Gateway tile. 2. 
Click New Secure Gateway and then Create Secure Gateway. Otherwise, from the main menu in IBM watsonx, choose Administration > Services > Services catalog and then select Secure Gateway. 2. Select a service plan and click Create. 3. On the Services instances page, find the Secure Gateway service and click its name. 4. Follow the instructions to add a gateway [Adding a gateway](https://cloud.ibm.com/docs/SecureGateway?topic=SecureGateway-add-sg-gw). To maintain security for the connection, make sure that you configure the Secure Gateway to require a security token. Make sure you copy your Gateway ID and security token. 5. From within your new gateway, on the Clients tab, click Connect Client to open the Connect Client pane. 6. Select the client download for your operating system. 7. Follow the instructions for [installing the Client](https://cloud.ibm.com/docs/SecureGateway?topic=SecureGateway-client-install). 8. Depending on the resource authentication protocol that you specify, you might need to upload a certificate. A destination is created when the connection is first established. 9. In IBM watsonx, go to the project page. Click the Assets tab. In the Private connectivity section, click Reload, and then select the secure gateway that you created. Learn more * [Getting started with IBM Cloud Satellite](https://cloud.ibm.com/docs/satellite?topic=satellite-getting-started) * [Secure Gateway deprecation](https://cloud.ibm.com/docs/SecureGateway) Parent topic: [Adding data to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html)
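Whichever secure path you choose above, the data source's hostname and port must be reachable from the machines on your side of the firewall (each Satellite host, Connector agent host, or Secure Gateway client machine). Before creating the connection, you can verify that with a plain TCP check. This is a generic sketch, not an IBM tool; the hostname and port below are placeholders for your own data source:

```python
import socket

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers DNS failure, refusal, and timeout
        return False

# Run this on each host or agent machine, substituting your data source's
# hostname and port (the values below are illustrative placeholders).
for host, port in [("db.internal.example.com", 50000)]:
    state = "reachable" if is_reachable(host, port) else "NOT reachable"
    print(f"{host}:{port} is {state}")
```

If a host reports the data source as not reachable, fix routing or firewall rules on that host before creating the connection in IBM watsonx.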
# Connecting to data behind a firewall # To connect to a database that is not accessible via the internet (for example, behind a firewall), you must set up a secure communication path between your on\-premises data source and IBM Cloud\. Use a Satellite Connector, a Satellite location, or a Secure Gateway instance for the secure communication path\. <!-- <ul> --> * [Set up a Satellite Connector](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html?context=cdpaas&locale=en#satctr): Satellite Connector is the replacement for Secure Gateway\. Satellite Connector uses lightweight, Docker\-based agents to create secure and auditable communication from your on\-prem, cloud, or edge environment back to IBM Cloud\. Your infrastructure needs only a container host, such as Docker\. For more information, see [Satellite Connector overview](https://cloud.ibm.com/docs/satellite?topic=satellite-understand-connectors&interface=ui)\. <!-- </ul> --> <!-- <ul> --> * [Set up a Satellite location](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html?context=cdpaas&locale=en#sl): A Satellite location provides the same secure communications to IBM Cloud as a Satellite Connector but adds high availability access by default plus the ability to communicate from IBM Cloud to your on\-prem location\. It supports managed cloud services on premises, such as Managed OpenShift and Managed Databases, supported remotely by IBM Cloud PaaS SRE resources\. A Satellite location requires at least three x86 hosts in your infrastructure for the HA control plane\. A Satellite location is a superset of the capabilities of a Satellite Connector\. If you need only client data communication, set up a Satellite Connector\. 
<!-- </ul> --> <!-- <ul> --> * [Configure a Secure Gateway](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html?context=cdpaas&locale=en#gateway): Secure Gateway is IBM Cloud's former solution for communication between on\-prem or third\-party cloud environments\. Secure Gateway is now [deprecated by IBM Cloud](https://cloud.ibm.com/docs/SecureGateway?topic=SecureGateway-dep-overview)\. For a new connection, set up a Satellite Connector instead\. <!-- </ul> --> ## Set up a Satellite Connector ## To set up a Satellite Connector, you create the Connector in your IBM Cloud account\. Next, you configure agents to run in your local Docker host platform on\-premises\. Finally, you create the endpoints for your data source that IBM watsonx uses to access the data source from IBM Cloud\. ### Requirements for a Satellite Connector ### **Required permissions** : You must have Administrator access to the Satellite service in IAM access policies to do the steps in IBM Cloud\. **Required host systems** : Minimum one x86 Docker host in your own infrastructure to run the Connector container\. See [Minimum requirements](https://cloud.ibm.com/docs/satellite?topic=satellite-understand-connectors&interface=ui#min-requirements)\. ### Setting up a Satellite Connector ### Note: Not all connections support Satellite\. If the connection supports Satellite, the **IBM Cloud Satellite** tile will be available in the **Private Connectivity** section of the **Create connection** form\. Alternatively, you can filter all the connections that support **Satellite** in the **New connection** page\. <!-- <ol> --> 1. Access the **Create connector** page in IBM Cloud from one of these places: <!-- <ul> --> * Log in to the [Connectors](https://cloud.ibm.com/satellite/connectors) page in IBM Cloud. * In IBM watsonx: <!-- <ol> --> 1. Go to the project page. Click the **Assets** tab. 2. Click **New asset > Connect to a data source**. 3. Select the IBM watsonx connector. 4. 
In the **Create connection** page, scroll down to the **Private connectivity** section, and click the **IBM Cloud Satellite** tile. 5. Click **Configure Satellite** and then log in to IBM Cloud. 6. Click **Create connector**. <!-- </ol> --> <!-- </ul> --> 2. Follow the steps for [Creating a Connector](https://cloud.ibm.com/docs/satellite?topic=satellite-create-connector)\. 3. Set up the Connector agent containers in your local Docker host environment\. For high availability, use three agents per connector that are deployed on separate Docker hosts\. It is best to use a separate infrastructure and network connectivity for each agent\. Follow the steps for [Running a Connector agent](https://cloud.ibm.com/docs/satellite?topic=satellite-run-agent-locally)\. The agents will appear in the **Active Agents** list for the connector. 4. In IBM watsonx, go back to the **Create connection** page\. In the **Private connectivity** section, click **Reload**, and then select the Satellite Connector that you created\. <!-- </ol> --> In the [Satellite Connectors dashboard](https://cloud.ibm.com/satellite/connectors) in IBM Cloud, for each connection that you create, a user endpoint is added in the Satellite Connector\. ## Set up a Satellite location ## Use the Satellite location feature of IBM Cloud Satellite to securely connect to a Satellite location that you configure for your IBM Cloud account\. ### Requirements for a Satellite location ### **Required permissions** : You must be the Admin in the IBM Cloud account to do the tasks in IBM Cloud\. **Required host systems** : You need at least three computers or virtual machines in your own infrastructure to act as Satellite hosts\. Confirm the [host system requirements](https://cloud.ibm.com/docs/satellite?topic=satellite-host-reqs)\. (The IBM Cloud docs instructions for additional features such as Red Hat OpenShift clusters and Kubernetes are not required for a connection in IBM watsonx\.) 
Note: Not all connections support Satellite\. If the connection supports Satellite, the **IBM Cloud Satellite** tile will be available in the **Private Connectivity** section of the **Create connection** form\. Alternatively, you can filter all the connections that support **Satellite** in the **New connection** page\. ### Setting up a Satellite location ### Configure the Satellite location in IBM Cloud\. <!-- <ul> --> * [Task 1: Create a Satellite location](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html?context=cdpaas&locale=en#task1) * [Task 2: Attach the hosts to the Satellite location](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html?context=cdpaas&locale=en#task2) * [Task 3: Assign the hosts to the control plane](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html?context=cdpaas&locale=en#task3) * [Task 4: Create the connection secured with a Satellite location](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html?context=cdpaas&locale=en#task4) * [Maintaining the Satellite location](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html?context=cdpaas&locale=en#maintain) <!-- </ul> --> ### Task 1: Create a Satellite location ### A Satellite location is a representation of an environment in your infrastructure provider, such as an on\-prem data center or cloud\. To connect to data sources in IBM watsonx, you need three computers or virtual machines\. To create the Satellite location: <!-- <ol> --> 1. Access the **Create a Satellite location** setup page in IBM Cloud from one of these places: <!-- <ul> --> * Log in to [IBM Cloud](https://cloud.ibm.com/satellite/overview), and select **Create location**. * In IBM watsonx: <!-- <ol> --> 1. Go to the project page. Click the **Assets** tab. 2. Click **New asset > Connect to a data source**. 3. Select the connector. 4. 
In the **Create connection** page, scroll down to the **Private connectivity** section, and click the **IBM Cloud Satellite** tile.
5. Click **Configure Satellite** and then log in to IBM Cloud.
6. Click **Create location**.

These instructions follow the **On-premises & edge** template. Depending on your infrastructure, you can select a different template. Refer to the template instructions and the information at [Understanding Satellite location and hosts](https://cloud.ibm.com/docs/satellite?topic=satellite-location-host) in the IBM Cloud docs.

2. Click **Edit** to modify the Satellite location information:

* **Name**: You can use this field to differentiate between networks, such as `my US East network` or `my Japan network`.
* The **Tags** and **Description** fields are optional.
* **Managed from**: Select the IBM Cloud region that is closest to where your host machines physically reside.
* **Resource group**: Set to `default` by default.
* **Zones**: IBM automatically spreads the control plane instances across three zones within the same IBM Cloud multizone metro. For example, if you manage your location from the **wdc** metro in the US East region, your Satellite location control plane instances are spread across the us-east-1, us-east-2, and us-east-3 zones. This zonal spread ensures that your control plane is available, even if one zone becomes unavailable.
* **Red Hat CoreOS**: Do not select this option. Leave it cleared or as **No**.
* **Object storage**: Click **Edit** to enter the exact name of an existing IBM Cloud Object Storage bucket that you want to use to back up Satellite location control plane data. Otherwise, a new bucket is automatically created in an Object Storage instance in your account.

3. Review your order details, and then click **Create location**. A location control plane is deployed to one of the zones in the IBM Cloud region that you selected. 
The control plane is ready for you to attach hosts to it.

### Task 2: Attach the hosts to the Satellite location ###

Attach three hosts that conform to the [host requirements](https://cloud.ibm.com/docs/satellite?topic=satellite-host-reqs) to the Satellite location.

#### Important considerations for Satellite location hosts ####

* Satellite hosts are dedicated servers and cannot be shared with other applications. You cannot log in to a host with SSH. The root password is changed.
* You need only three hosts for IBM watsonx connections.
* Worker nodes are not required. Only control plane hosts are needed for IBM watsonx connections.
* The Red Hat OpenShift Container Platform (OCP) is not needed for IBM watsonx connections.
* Red Hat CoreOS is not needed for IBM watsonx connections.
* Hosts connect to IBM Cloud with the TLS 1.3 protocol.

To attach the hosts to the Satellite location:

1. From the [Satellite Locations dashboard](https://cloud.ibm.com/satellite/locations), click the name of your location.
2. Click **Attach Hosts** to generate and download a script.
3. Run the script on all the hosts to be attached to the Satellite location.
4. Save the attach script in case you attach more hosts to the location in the future. The token in the attach script is an API key, which must be treated and protected as sensitive information. See [Maintaining the Satellite location](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html?context=cdpaas&locale=en#maintain).

### Task 3: Assign the hosts to the control plane ###

To assign the hosts:

1. From the [Satellite Locations dashboard](https://cloud.ibm.com/satellite/locations), click the name of your location.
2. 
For each host, click the overflow menu (![Overflow menu](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/actions.png)) and then select **Assign**. Assign one host to each zone.

### Task 4: Create the connection secured with a Satellite location ###

To create the secure connection:

1. In IBM watsonx, go to the project page. Click the **Assets** tab.
2. Click **New asset > Connect to a data source**.
3. Select the connector.
4. In the **Create connection** form, complete the connection details. The hostname or IP address and the port of the data source must be available from each host that is attached to the Satellite location.
5. Click **Reload**, and then select the Satellite location that you created.

In the [Satellite Locations dashboard](https://cloud.ibm.com/satellite/locations) in IBM Cloud, for each connection that you create, a link endpoint is created with **Destination type** `Location` and **Created by** `Connectivity` in the Satellite location.

### Maintaining the Satellite location ###

* The host attach script expires one year from the creation date. To make sure that the hosts don't have authentication problems, download a new copy of the host attach script at least once per year.
* Save the attach script in case you attach more hosts to the location in the future. If you generate a new host attach script, it detaches all the existing hosts.
* Hosts can be reclaimed by detaching them from the Satellite location and reloading the operating system in the infrastructure provider.

## Configure a Secure Gateway ##

The IBM Cloud Secure Gateway service provides a remote client to create a secure connection to a database that is not externalized to the internet. You can provision a Secure Gateway service in one service region and use it in service instances that you provisioned in other regions. 
After you create an instance of the Secure Gateway service, you add a Secure Gateway.

Important: Secure Gateway is deprecated by IBM Cloud. For information, see [Secure Gateway deprecation overview and timeline](https://cloud.ibm.com/docs/SecureGateway?topic=SecureGateway-dep-overview#q082720___AMQ_SSL_ALLOW_DEFAULT_CERT__title__1).

### Prerequisite ###

When you log in to IBM watsonx, select **Enable Cloud Foundry access**.

Note: Not all connections support Secure Gateway. If the connection supports Secure Gateway, the **IBM Cloud Secure Gateway** tile is available in the **Private connectivity** section of the **Create connection** form. Alternatively, you can filter for the connections that support **Secure Gateway** on the **New connection** page.

To configure a secure gateway:

1. Configure a secure gateway from the **Create connection** screen:

1. Click the **IBM Cloud Secure Gateway** tile.
2. Click **New Secure Gateway** and then **Create Secure Gateway**.

Otherwise, from the main menu in IBM watsonx, choose **Administration > Services > Services catalog** and then select **Secure Gateway**.

2. Select a service plan and click **Create**.
3. On the **Services instances** page, find the Secure Gateway service and click its name.
4. Follow the instructions in [Adding a gateway](https://cloud.ibm.com/docs/SecureGateway?topic=SecureGateway-add-sg-gw). To maintain security for the connection, make sure that you configure the Secure Gateway to require a security token. Make sure you copy your Gateway ID and security token.
5. From within your new gateway, on the **Clients** tab, click **Connect Client** to open the **Connect Client** pane.
6. Select the client download for your operating system.
7. Follow the instructions for [installing the Client](https://cloud.ibm.com/docs/SecureGateway?topic=SecureGateway-client-install).
8. 
Depending on the resource authentication protocol that you specify, you might need to upload a certificate. A destination is created when the connection is first established.
9. In IBM watsonx, go to the project page. Click the **Assets** tab. In the **Private connectivity** section, click **Reload**, and then select the secure gateway that you created.

## Learn more ##

* [Getting started with IBM Cloud Satellite](https://cloud.ibm.com/docs/satellite?topic=satellite-getting-started)
* [Secure Gateway deprecation](https://cloud.ibm.com/docs/SecureGateway)

**Parent topic**: [Adding data to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html)
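Task 4 above requires that the data source's hostname and port be available from every host that is attached to the Satellite location. The following is a minimal sketch of a TCP connect test you could run from a host to sanity-check that requirement before creating the connection. The hostname `db.internal.example.com`, the port `5432`, and the helper name are placeholders for illustration, and the check uses bash's `/dev/tcp` pseudo-device; it is not a substitute for the connection test that the service performs.

```shell
# Hypothetical helper: returns success if a TCP connection to host $1,
# port $2 can be opened within 5 seconds (uses bash's /dev/tcp device).
check_reachable() {
  timeout 5 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# Placeholder data source host and port; substitute your own values.
if check_reachable db.internal.example.com 5432; then
  echo "reachable"
else
  echo "unreachable"
fi
```

Run this on each attached host; every host must report the data source as reachable for the connection to work.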
C122739764B1EC75B64E1B740F493BAD8616A9DB
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/store-large-objs-in-cos.html?context=cdpaas&locale=en
Adding very large objects to a project's Cloud Object Storage
# Adding very large objects to a project's Cloud Object Storage #

The amount of data you can load to a project's Cloud Object Storage at any one time depends on where you load the data from. If you load the data in the product UI, the limit is 5 GB. To add larger objects to a project's Cloud Object Storage, you can use an API or an FTP client.

* [The Cloud Object Storage API](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/store-large-objs-in-cos.html?context=cdpaas&locale=en#api)
* An FTP client
* [The IBM Cloud Object Storage Python SDK](https://github.com/IBM/ibm-cos-sdk-python) (in case you can't use an FTP client)

## Load data in multiple parts by using the Cloud Object Storage API ##

With the Cloud Object Storage API, you can load data objects as large as 5 GB in a single PUT, and objects as large as 5 TB by loading the data into object storage as a set of parts, which can be loaded independently, in any order, and in parallel. After all of the parts are loaded, they are presented as a single object in Cloud Object Storage.

You can load files with these formats and MIME types in multiple parts:

* application/xml
* application/pdf
* text/plain; charset=utf-8

To load a data object in multiple parts:

1. Initiate a [multipart load](https://cloud.ibm.com/docs/services/cloud-object-storage/basics?topic=cloud-object-storage-store-very-large-objects#initiate-a-multipart-upload):

curl -X "POST" "https://(endpoint)/(bucket-name)/(object-name)?uploads" -H "Authorization: bearer (token)"

The values for `bucket-name` and `token` are on the project's **General** page on the **Manage** tab. Click **Manage in IBM Cloud** in Watson Studio to find the endpoint value.

2. Load the parts by specifying arbitrary sequential part numbers and an UploadId for the object:

curl -X "PUT" "https://(endpoint)/(bucket-name)/(object-name)?partNumber=(sequential-integer)&uploadId=(upload-id)" -H "Authorization: bearer (token)" -H "Content-Type: (content-type)"

Replace `content-type` with `application/xml`, `application/pdf`, or `text/plain; charset=utf-8`.

3. Complete the multipart load:

curl -X "POST" "https://(endpoint)/(bucket-name)/(object-name)?uploadId=(upload-id)" -H "Authorization: bearer (token)" -H "Content-Type: text/plain; charset=utf-8" -d $'<CompleteMultipartUpload> <Part> <PartNumber>1</PartNumber> <ETag>(etag)</ETag> </Part> <Part> <PartNumber>2</PartNumber> <ETag>(etag)</ETag> </Part> </CompleteMultipartUpload>'

4. Add your file to the project as an asset. From the **Assets** page of your project, click the **Upload asset to project** icon. Then, from the **Files** pane, click the action menu and select **Add as data set**.

## Next steps ##

* [Refining the data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html)
* [Analyzing the data and building models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html)

## Learn more ##

* [Storing very large objects in Cloud Object Storage](https://cloud.ibm.com/docs/services/cloud-object-storage/basics?topic=cloud-object-storage-store-very-large-objects#store-very-large-objects)
* [Using curl to store very large objects](https://cloud.ibm.com/docs/services/cloud-object-storage/cli?topic=cloud-object-storage-using-curl-#using-curl-)

**Parent topic:** [Adding connections to projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html)
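The multipart steps above can be scripted. The sketch below is illustrative only: the endpoint, bucket, object, and token values are placeholders that you must replace with the values from your project's **Manage** page, the real curl calls are shown commented out, and the `build_complete_xml` helper is a hypothetical convenience for assembling the `CompleteMultipartUpload` body from the ETag values that each part PUT returns.

```shell
# Placeholders only: take these values from the project's Manage tab.
ENDPOINT="(endpoint)"
BUCKET="(bucket-name)"
OBJECT="(object-name)"
TOKEN="(token)"

# Hypothetical helper: build the CompleteMultipartUpload body from
# partNumber:etag pairs returned by the part PUT requests.
build_complete_xml() {
  printf '<CompleteMultipartUpload>'
  for pair in "$@"; do
    printf '<Part><PartNumber>%s</PartNumber><ETag>%s</ETag></Part>' \
      "${pair%%:*}" "${pair#*:}"
  done
  printf '</CompleteMultipartUpload>'
}

# Step 1: initiate the multipart load (the response contains the UploadId).
# curl -X POST "https://$ENDPOINT/$BUCKET/$OBJECT?uploads" \
#   -H "Authorization: bearer $TOKEN"

# Step 2: PUT each part with partNumber=1,2,... and the UploadId,
# recording the ETag from each response.

# Step 3: complete the load, sending the assembled XML body.
build_complete_xml 1:etag1 2:etag2
# curl -X POST "https://$ENDPOINT/$BUCKET/$OBJECT?uploadId=(upload-id)" \
#   -H "Authorization: bearer $TOKEN" \
#   -H "Content-Type: text/plain; charset=utf-8" \
#   -d "$(build_complete_xml 1:etag1 2:etag2)"
```

The helper simply concatenates one `<Part>` element per `partNumber:etag` argument, which matches the body shape shown in step 3 above.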
3E791F66AD3D5FD3DA45D85F27D6A1A7621A4CD3
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/switch-platform.html?context=cdpaas&locale=en
Switching the platform for a project
# Switching the platform for a project #

You can switch some projects between the Cloud Pak for Data as a Service and watsonx platforms. When you switch the platform for a project, you can use the tools that are specific to that platform. For example, you might switch an existing Cloud Pak for Data as a Service project to watsonx so that you can use the Prompt Lab tool and create prompt and prompt session assets. See [Comparison between watsonx and Cloud Pak for Data as a Service](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/compare-platforms.html).

Important: Foundation model inferencing with the Prompt Lab is available in the Dallas and Frankfurt regions. Your Watson Studio and Watson Machine Learning service instances are shared between watsonx and Cloud Pak for Data as a Service. If your Watson Studio and Watson Machine Learning service instances are provisioned in another region, you can't use foundation model inferencing or the Prompt Lab.

* [Requirements](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/switch-platform.html?context=cdpaas&locale=en#requirements)
* [Restrictions](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/switch-platform.html?context=cdpaas&locale=en#restrictions)
* [What happens when you switch a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/switch-platform.html?context=cdpaas&locale=en#consequences)
* [Switch the platform for a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/switch-platform.html?context=cdpaas&locale=en#move-one)
* [Switching multiple projects to watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/switch-platform.html?context=cdpaas&locale=en#move-many)

## Requirements ##

You can switch a project from one platform to the other if you have the required accounts and permissions.

**Required accounts** : You must be signed up for both Cloud Pak for Data as a Service and watsonx.

**Required permissions** : You must have the **Admin** role in the project that you want to switch.

**Required services** : The current account that you are working in must have both of these services provisioned:

- Watson Studio
- Watson Machine Learning

**Project settings** : The project must have the **Restrict who can be a collaborator** setting enabled. On Cloud Pak for Data as a Service, you can enable this setting during project creation. On watsonx, this setting is automatic.

## Restrictions ##

To switch a project from Cloud Pak for Data as a Service to watsonx, all the assets in the project must have asset types that are supported by both platforms. Projects that contain any of the following asset types, but no other types of assets, are eligible to switch from Cloud Pak for Data as a Service to watsonx:

* AutoAI experiment
* COBOL copybook
* Connected data asset
* Connection
* Data asset from a file
* Data Refinery flow
* Decision Optimization experiment
* Federated Learning experiment
* Folder asset
* Jupyter notebook
* Model
* Python function
* Script
* SPSS Modeler flow
* Visualization

You can't switch a project that contains assets that are specific to Cloud Pak for Data as a Service. If you add any assets that you created with services other than Watson Studio and Watson Machine Learning to a project, you can't switch that project to watsonx. Although Pipelines assets are supported in both Cloud Pak for Data as a Service and watsonx projects, you can't switch a project that contains pipeline assets because pipelines can reference unsupported assets.

You can switch a project that contains assets from watsonx to Cloud Pak for Data as a Service. However, assets that are supported only in watsonx are not available on Cloud Pak for Data as a Service. These assets include:

* Prompt Lab assets
* Synthetic data flows

For more information about asset types, see [Asset types and properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html).

## What happens when you switch the platform for a project ##

Switching a project between platforms has the following effects:

**Collaborators** : Collaborators in the project receive notifications of the switch on the original platform. If any collaborators do not have accounts for the destination platform, those collaborators can no longer access the project.

**Jobs** : Scheduled jobs are switched. Any jobs that are running at the time of the switch continue until completion on the original platform. Any jobs that are scheduled for times after the switch are run on the destination platform. Job history is not retained.

**Environments** : Custom environment templates are retained.

**Project history** : Recent activity and asset activities are not retained.

**Resource usage** : Resource usage is cumulative because you continue to use the same service instances.

**Storage** : The project's IBM Cloud Object Storage bucket remains the same.

## Switch the platform for a project ##

You can switch the platform for a project from within the project on the original platform. You can switch in either direction between Cloud Pak for Data as a Service and watsonx.

To switch the platform for a project:

1. On the original platform, go to the project's **Manage** tab, select the **General** page, and in the **Controls** section, click **Switch platform**. If you don't see a **Switch platform** button or the button is not active, you can't switch the project.
2. Select the destination platform and click **Switch platform**.

## Switching multiple projects to watsonx ##

You can switch one or more eligible projects from Cloud Pak for Data as a Service to watsonx from the watsonx home page.

1. On the watsonx home page, click the **Switch projects** icon (![Switch projects icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/images/move-projects-icon.svg)).
2. Select the projects that you want to switch. Only the projects that meet the requirements are listed.
3. Optional: View the projects that contain unsupported asset types and the projects for which you don't have the **Admin** role.
4. Click the **Switch projects** icon.

## Learn more ##

* [Comparison between watsonx and Cloud Pak for Data as a Service](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/compare-platforms.html)
* [Asset types and properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html)
* [Switching the platform for a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/switch-platform-space.html)

**Parent topic:** [Administering projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/admin-project.html)
6922B4D2CB89EB1EF4AA112AF8B7922327062B95
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/task-credentials.html?context=cdpaas&locale=en
Adding task credentials
# Adding task credentials #

A task credential is a form of user authentication that some services require to perform operations in projects and spaces, for example, to run certain tasks in a service or to enable long operations such as scheduled jobs to run without interruption.

In IBM watsonx, IBM Cloud API keys are used as task credentials. You can either provide an existing IBM Cloud API key or generate a new key. Only one task credential can be stored per user, per IBM Cloud account, and it is stored securely in a vault. You can generate and rotate API keys in **Profile and settings > User API key**.

Any user with an IBM Cloud account can create an API key. An API key is similar to a user name and password: it enables access to resources in your IBM Cloud account and should never be shared. If your service requires a task credential to perform an operation, you are prompted to provide it in the form of an existing or newly generated API key.

Service administrators are responsible for defining a strategy to revoke task credentials when they are no longer required.

## Learn more ##

* [Managing the user API key](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-apikeys.html)
* [Understanding API keys](https://cloud.ibm.com/docs/account?topic=account-manapikey&interface=ui)

**Parent topic:** [Administering projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html)
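At runtime, an API key used as a task credential is exchanged for an IAM bearer token. The endpoint and grant type below are from the IBM Cloud IAM documentation; the helper function and the `IBMCLOUD_API_KEY` variable are assumptions added for illustration, not part of the product flow.

```shell
# Sketch of the IAM token exchange that turns an IBM Cloud API key into a
# bearer token. The helper only builds the form-encoded request body; the
# actual HTTP call is shown commented out and needs a real key.
iam_token_body() {
  # Body for POST https://iam.cloud.ibm.com/identity/token
  printf 'grant_type=urn:ibm:params:oauth:grant-type:apikey&apikey=%s' "$1"
}

# Real request (requires a valid key in $IBMCLOUD_API_KEY):
# curl -X POST 'https://iam.cloud.ibm.com/identity/token' \
#   -H 'Content-Type: application/x-www-form-urlencoded' \
#   --data "$(iam_token_body "$IBMCLOUD_API_KEY")"

iam_token_body "example-key"
```

Because the key grants access to account resources, treat the request body (and any script that embeds it) with the same care as the key itself.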
92B00BEE2E48F01962BBBBAC49CF87587710F35F
https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/wif-examples.html?context=cdpaas&locale=en
Workload identity federation examples
Workload identity federation examples Workload identity federation for the Google BigQuery connection is supported by any identity provider that supports OpenID Connect (OIDC) or SAML 2.0. These examples are for [AWS with Amazon Cognito](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/wif-examples.html?context=cdpaas&locale=en#aws) and for [Microsoft Azure](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/wif-examples.html?context=cdpaas&locale=en#azure). AWS Configure workload identity federation in Amazon Cognito 1. Create an OIDC identity provider (IdP) with Cognito by following the instructions in the Amazon documentation: * [Step 1. Create a user pool](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pool-as-user-directory.html) * [Step 2. Add an app client and set up the hosted UI](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-pools-configuring-app-integration.html) For more information, see [Getting started with Amazon Cognito](https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-getting-started.html). 2. Create a group and user in the IdP with the AWS console. Or you can use the AWS CLI: CLIENT_ID=YourClientId ISSUER_URL=https://cognito-idp.YourRegion.amazonaws.com/YourPoolId POOL_ID=YourPoolId USERNAME=YourUsername PASSWORD=YourPassword GROUPNAME=YourGroupName aws cognito-idp admin-create-user --user-pool-id $POOL_ID --username $USERNAME --temporary-password Temp-Pass1 aws cognito-idp admin-set-user-password --user-pool-id $POOL_ID --username $USERNAME --password $PASSWORD --permanent aws cognito-idp create-group --group-name $GROUPNAME --user-pool-id $POOL_ID aws cognito-idp admin-add-user-to-group --user-pool-id $POOL_ID --username $USERNAME --group-name $GROUPNAME 3. From the AWS console, click View Hosted UI and log in to the IdP UI in a browser to ensure that any new password challenge is resolved. 4. 
Get an IdToken with the AWS CLI: aws cognito-idp admin-initiate-auth --auth-flow ADMIN_USER_PASSWORD_AUTH --client-id $CLIENT_ID --auth-parameters USERNAME=$USERNAME,PASSWORD=$PASSWORD --user-pool-id $POOL_ID For more information on the Amazon Cognito User Pools authentication flow, see [AdminInitiateAuth](https://docs.aws.amazon.com/cognito-user-identity-pools/latest/APIReference/API_AdminInitiateAuth.html). Configure Google Cloud for Amazon Cognito When you create the provider in Google Cloud, use these settings: * Set Issuer (URL) to https://cognito-idp.YourRegion.amazonaws.com/YourPoolId. * Set Allowed Audience to your client ID. * Under Attribute Mapping, map google.subject to assertion.sub. Create the Google BigQuery connection with Amazon Cognito workload identity federation 1. Choose the Workload Identity Federation with access token authentication method. 2. For the Security Token Service audience field, use this format: //iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL_ID/providers/PROVIDER_ID 3. For the Service account e-mail, enter the email address of the Google service account to be impersonated. For more information, see [Create a service account for the external workload](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-clouds#create_a_service_account_for_the_external_workload). 4. (Optional) Specify a value for the Service account token lifetime in seconds. The default lifetime of a service account access token is one hour. For more information, see [URL-sourced credentials](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-providers#url-sourced-credentials). 5. Set Token format to Text. 6. Set Token type to ID token. Azure Configure workload identity federation in Azure 1. 
[Create an Azure AD application and service principal](https://learn.microsoft.com/en-au/azure/active-directory/develop/howto-create-service-principal-portal#register-an-application-with-azure-ad-and-create-a-service-principal). 2. Set an Application ID URI for the application. You can use the default Application ID URI (api://APPID) or specify a custom URI. You can skip the instructions on creating a managed identity. 3. Follow the instructions to [create a new application secret](https://learn.microsoft.com/en-au/azure/active-directory/develop/howto-create-service-principal-portal#option-2-create-a-new-application-secret) to get an access token with the REST API. For more information, see [Configure workload identity federation with Azure](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-clouds#azure). Configure Google Cloud for Azure 1. Follow the instructions: [Configure workload identity federation](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-clouds#configure). 2. Follow the instructions: [Create the workload identity pool and provider](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-clouds#create_the_workload_identity_pool_and_provider). When you configure the provider, use these settings: * Set Issuer (URL) to https://sts.windows.net/TENANTID/, where TENANTID is the tenant ID that you received when you set up Azure Active Directory. * Set the Allowed audience to the client ID that you received when you set up the app registration. Or specify another Application ID URI that you used when you set up the application identity in Azure. * Under Attribute Mapping, map google.subject to assertion.sub. Create the Google BigQuery connection with Azure workload identity federation 1. Choose one of these authentication methods: * Workload Identity Federation with access token * Workload Identity Federation with token URL 2. 
For the Security Token Service audience field, use the format that is described in [Authenticate a workload that uses the REST API](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-clouds#azure_7). For example: //iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/POOL_ID/providers/PROVIDER_ID 3. For the Service account e-mail, enter the email address of the Google service account to be impersonated. For more information, see [Create a service account for the external workload](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-clouds#create_a_service_account_for_the_external_workload). 4. (Optional) Specify a value for the Service account token lifetime in seconds. The default lifetime of a service account access token is one hour. For more information, see [URL-sourced credentials](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-providers#url-sourced-credentials). 5. If you specified Workload Identity Federation with token URL, use these values: * Token URL: https://login.microsoftonline.com/TENANT_ID/oauth2/v2.0/token. This URL will fetch a token from Azure. * HTTP method: POST * HTTP headers: "Content-Type"="application/x-www-form-urlencoded;charset=UTF-8","Accept"="application/json" * Request body: grant_type=client_credentials&client_id=CLIENT_ID&client_secret=CLIENT_SECRET&scope=APPLICATION_ID_URI/.default 6. For Token type, select ID token for an identity provider that complies with the OpenID Connect (OIDC) specification. For information, see [Token types](https://cloud.google.com/docs/authentication/token-types). 7. The Token format option depends on the authentication method that you selected: * Workload Identity Federation with access token: Select Text if you supplied the raw token value in the Access token field. 
* Workload Identity Federation with token URL: For a response from the token URL in JSON format with the access token that is returned in a field named access_token, use these settings: * Token format: JSON * Token field name: access_token Learn more * [Workload identity federation (Google Cloud)](https://cloud.google.com/iam/docs/workload-identity-federation) * [Configure workload identity federation on the identity provider (Google Cloud)](https://cloud.google.com/iam/docs/workload-identity-federation-with-other-clouds) * [Generate a credentials configuration file (Google Cloud)](https://github.com/googleapis/google-auth-library-java#workforce-identity-federation) Parent topic:[Google BigQuery connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-bigquery.html)
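The token-URL settings above describe a standard Microsoft identity platform client-credentials call. A minimal sketch of building that request follows; the function name is illustrative, and the tenant, client, and scope values are placeholders you must replace with your own:

```python
# Sketch of the token-URL flow: POST the client-credentials body to the
# Microsoft identity platform token endpoint, then read the access token from
# the JSON field named access_token (the Token format / Token field name
# settings). Values here are placeholders, not real credentials.

def build_azure_token_request(tenant_id: str, client_id: str,
                              client_secret: str, application_id_uri: str):
    """Return the (url, headers, form_body) for the Azure client-credentials call."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    headers = {
        "Content-Type": "application/x-www-form-urlencoded;charset=UTF-8",
        "Accept": "application/json",
    }
    body = {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": f"{application_id_uri}/.default",
    }
    return url, headers, body

# Usage (requires the `requests` package and real Azure values):
#   import requests
#   url, headers, body = build_azure_token_request("TENANT_ID", "CLIENT_ID",
#                                                  "SECRET", "api://APPID")
#   access_token = requests.post(url, headers=headers, data=body).json()["access_token"]
```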
777F72F32FD20E96C4A5F0CCA461FE9A79334E96
https://dataplatform.cloud.ibm.com/docs/content/wsj/model/getting-started.html?context=cdpaas&locale=en
Evaluating AI models with Watson OpenScale
Evaluating AI models with Watson OpenScale IBM Watson OpenScale tracks and measures outcomes from your AI models, and helps ensure that they remain fair, explainable, and compliant no matter where your models were built or are running. Watson OpenScale also detects and helps correct the drift in accuracy when an AI model is in production. Required service : Watson Machine Learning Training data format : Relational: Tables in relational data sources : Tabular: Excel files (.xls or .xlsx), CSV files : Textual: In the supported relational tables or files Connected data : Cloud Object Storage (infrastructure) : Db2 Data size : Any Enterprises use model evaluation as part of an AI governance strategy to make sure that models in development and production meet established compliance standards. This approach ensures that AI models are free from bias, can be easily explained and understood by business users, and are auditable in business transactions. You can evaluate models regardless of the tools and frameworks that you use to build and run models. Watch this short video to learn more about Watson OpenScale. The video provides a visual method to learn the concepts and tasks in this documentation. Trustworthy AI in action To learn more about model evaluation in action, see [How AI picks the highlights from Wimbledon fairly and fast](https://www.ibm.com/blog/how-ai-picks-the-highlights-from-wimbledon-fairly-and-fast/). Components of Watson OpenScale Watson OpenScale has four main areas: * Insights: The Insights page displays the models that you are monitoring and provides status on the results of model evaluations. * Explain a transaction: The Explanations page describes how the model determined a prediction. You can understand and be confident in the model by viewing some of the most important factors that led to its predictions. 
* Configuration: The Configuration page can be used to select a database, set up a machine learning provider, and optionally add integrated services. * Support: The Support page provides you with resources to get the help you need with Watson OpenScale. Access product documentation or connect with IBM Community on Stack Overflow. To create a service ticket with the IBM Support team, click Manage tickets. Evaluations Evaluations validate your deployments against specified metrics. Configure alerts that indicate when a threshold is crossed for a metric. Watson OpenScale evaluates your deployments based on three default monitors: * Quality describes the model’s ability to provide correct outcomes based on labeled test data called Feedback data. * Fairness describes how evenly the model delivers favorable outcomes between groups. The Fairness monitor looks for biased outcomes in your model. * Drift warns you of a drop in accuracy or data consistency. Note: You can also create Custom evaluations for your deployment. Parent topic:[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/wmls/wmls-deploy-overview.html)
AC97CE4D4DF6402240F1A6A67DFB9462BC1FAFAC
https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-driftv2-config.html?context=cdpaas&locale=en
Configuring drift v2 evaluations in watsonx.governance
Configuring drift v2 evaluations in watsonx.governance You can configure drift v2 evaluations with watsonx.governance to measure changes in your data over time to ensure consistent outcomes for your model. Use drift v2 evaluations to identify changes in your model output, the accuracy of your predictions, and the distribution of your input data. The following sections describe the steps that you must complete to configure drift v2 evaluations with watsonx.governance: Set sample size watsonx.governance uses sample sizes to determine how many transactions are processed during evaluations. You must set a minimum sample size to indicate the lowest number of transactions that you want watsonx.governance to evaluate. You can also set a maximum sample size to indicate the maximum number of transactions that you want watsonx.governance to evaluate. Configure baseline data watsonx.governance uses payload records to establish the baseline for drift v2 calculations. You must configure the number of records that you want to calculate as your baseline data. Set drift thresholds You must set threshold values for each metric so that watsonx.governance can identify issues with your evaluation results. The values that you set create alerts on the evaluation summary page that appear when metric scores violate your thresholds. You must set values in the range 0 to 1. The metric scores must be lower than the threshold values to avoid violations. Supported drift v2 metrics When you enable drift v2 evaluations, you can view a summary of evaluation results with metrics for the type of model that you're evaluating. The following drift v2 metrics are supported by watsonx.governance: * Output drift watsonx.governance calculates output drift by measuring the change in the model confidence distribution. - How it works: watsonx.governance measures how much your model output changes from the time that you train the model. 
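The threshold rule under Set drift thresholds can be sketched in a few lines. The function and metric names here are illustrative, not part of the watsonx.governance API; the logic simply flags any metric whose score is not lower than its threshold:

```python
# Sketch of the drift v2 alerting rule: thresholds lie in [0, 1], and a
# metric violates its threshold when its score is not lower than it.

def find_threshold_violations(scores: dict, thresholds: dict) -> list:
    """Return the names of metrics whose score reached or exceeded the threshold."""
    violations = []
    for metric, score in scores.items():
        threshold = thresholds.get(metric)
        if threshold is None:
            continue  # no threshold configured for this metric
        if not 0.0 <= threshold <= 1.0:
            raise ValueError(f"threshold for {metric} must be between 0 and 1")
        if score >= threshold:
            violations.append(metric)
    return violations
```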
To evaluate prompt templates, watsonx.governance calculates output drift by measuring the change in distribution of prediction probabilities. The prediction probability is calculated by aggregating the log probabilities of the tokens from the model output. When you upload payload data with CSV files, you must include prediction_probability values or output drift cannot be calculated. For regression models, watsonx.governance calculates output drift by measuring the change in distribution of predictions on the training and payload data. For classification models, watsonx.governance calculates output drift for each class probability by measuring the change in distribution for class probabilities on the training and payload data. For multi-classification models, watsonx.governance also aggregates output drift for each class probability by measuring a weighted average. - Do the math: watsonx.governance uses the following formulas to calculate output drift: - [Total variation distance](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-driftv2-config.html?context=cdpaas&locale=en#total-variation-distance) - [Overlap coefficient](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-driftv2-config.html?context=cdpaas&locale=en#overlap-coefficient) - Applies to prompt template evaluations: Yes - Task types: - Text summarization - Text classification - Content generation - Entity extraction - Question answering * Model quality drift watsonx.governance calculates model quality drift by comparing the estimated runtime accuracy to the training accuracy to measure the drop in accuracy. - How it works: When you configure drift v2 evaluations, watsonx.governance builds its own drift detection model that processes your payload data to predict, without the ground truth, whether your model generates accurate predictions. The drift detection model uses the input features and class probabilities from your model to create its own input features. 
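The prediction probability for prompt templates is described above as an aggregation of token log probabilities. One common aggregation, sketched here as an assumption rather than the documented formula, is the exponential of the mean token log probability (a length-normalized sequence probability):

```python
import math

# Illustrative only: one plausible aggregation of per-token log probabilities
# into a single prediction probability. The exact aggregation used by
# watsonx.governance is not specified in this section.

def prediction_probability(token_logprobs: list) -> float:
    """Length-normalized sequence probability from per-token log probabilities."""
    if not token_logprobs:
        raise ValueError("no token log probabilities supplied")
    return math.exp(sum(token_logprobs) / len(token_logprobs))
```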
- Do the math: watsonx.governance uses the following formula to calculate model quality drift: ![model quality score](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-model-quality-score.svg) watsonx.governance calculates the accuracy of your model as the base_accuracy by measuring the fraction of correctly predicted transactions in your training data. During evaluations, your transactions are scored against the drift detection model to measure the amount of transactions that are likely predicted correctly by your model. These transactions are compared to the total number of transactions that watsonx.governance processes to calculate the predicted_accuracy. If the predicted_accuracy is less than the base_accuracy, watsonx.governance generates a model quality drift score. - Applies to prompt template evaluations: No * Feature drift watsonx.governance calculates feature drift by measuring the change in value distribution for important features. - How it works: watsonx.governance calculates drift for categorical and numeric features by measuring the probability distribution of continuous and discrete values. To identify discrete values for numeric features, watsonx.governance uses a binary logarithm to compare the number of distinct values of each feature to the total number of values of each feature. watsonx.governance uses the following binary logarithm formula to identify discrete numeric features: ![Binary logarithm formula is displayed](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-feature-drift-equation.svg) If the distinct_values_count is less than the binary logarithm of the total_count, the feature is identified as discrete. 
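The discrete/continuous test described above is simple to state in code: a numeric feature is treated as discrete when its count of distinct values is less than the binary logarithm of its total value count. A minimal sketch:

```python
import math

# A numeric feature is identified as discrete when
# distinct_values_count < log2(total_count), per the binary logarithm rule
# described in the section above.

def is_discrete(values: list) -> bool:
    if not values:
        raise ValueError("values must not be empty")
    total_count = len(values)
    distinct_values_count = len(set(values))
    return distinct_values_count < math.log2(total_count)
```

For example, a column of 100 values that only ever takes 2 distinct values is discrete (2 < log2(100) ≈ 6.64), while a column of 100 unique values is not.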
- Do the math: watsonx.governance uses the following formulas to calculate feature drift: - [Jensen Shannon distance](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-driftv2-config.html?context=cdpaas&locale=en#jensen-shannon-distance) - [Total variation distance](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-driftv2-config.html?context=cdpaas&locale=en#total-variation-distance) - [Overlap coefficient](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-driftv2-config.html?context=cdpaas&locale=en#overlap-coefficient) - Applies to prompt template evaluations: No * Prediction drift Prediction drift measures the change in distribution of the LLM predicted classes. - Do the math: watsonx.governance uses the [Jensen Shannon distance](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-driftv2-config.html?context=cdpaas&locale=en#jensen-shannon-distance) formula to calculate prediction drift. - Applies to prompt template evaluations: Yes - Task types: Text classification * Input metadata drift Input metadata drift measures the change in distribution of the LLM input text metadata. - How it works: watsonx.governance calculates the following metadata with the LLM input text: Character count: Total number of characters in the input text Word count: Total number of words in the input text Sentence count: Total number of sentences in the input text Average word length: Average length of words in the input text Average sentence length: Average length of the sentences in the input text watsonx.governance calculates input metadata drift by measuring the change in distribution of the metadata columns. The input token count column, if present in the payload, is also used to compute the input metadata drift. You can also choose to specify any meta fields while adding records to the payload table. These meta fields are also used to compute the input metadata drift. 
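The five metadata features listed above can be sketched as follows. Sentence splitting here is a naive regex on `.`, `!`, and `?`; the tokenization that watsonx.governance actually applies is not specified in this section:

```python
import re

# Sketch of the text metadata features described above: character, word, and
# sentence counts plus average word and sentence lengths. Sentence detection
# is a deliberate simplification (split on ., !, ?).

def text_metadata(text: str) -> dict:
    words = text.split()
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "character_count": len(text),
        "word_count": len(words),
        "sentence_count": len(sentences),
        "average_word_length": sum(len(w) for w in words) / len(words) if words else 0.0,
        "average_sentence_length": len(words) / len(sentences) if sentences else 0.0,
    }
```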
To identify discrete numeric input metadata columns, watsonx.governance uses the following binary logarithm formula: ![Binary logarithm formula is displayed](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-feature-drift-equation.svg) If the distinct_values_count is less than the binary logarithm of the total_count, the feature is identified as discrete. For discrete input metadata columns, watsonx.governance uses the [Jensen Shannon distance](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-driftv2-config.html?context=cdpaas&locale=en#jensen-shannon-distance) formula to calculate input metadata drift. For continuous input metadata columns, watsonx.governance uses the [total variation distance](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-driftv2-config.html?context=cdpaas&locale=en#total-variation-distance) and [overlap coefficient](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-driftv2-config.html?context=cdpaas&locale=en#overlap-coefficient) formulas to calculate input metadata drift. - Applies to prompt template evaluations: Yes - Task types: - Text summarization - Text classification - Content generation - Entity extraction - Question answering * Output metadata drift Output metadata drift measures the change in distribution of the LLM output text metadata. - How it works: watsonx.governance calculates the following metadata with the LLM output text: Character count: Total number of characters in the output text Word count: Total number of words in the output text Sentence count: Total number of sentences in the output text Average word length: Average length of words in the output text Average sentence length: Average length of the sentences in the output text watsonx.governance calculates output metadata drift by measuring the change in distribution of the metadata columns. The output token count column, if present in the payload, is also used to compute the output metadata drift. 
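The Jensen Shannon distance used for discrete columns can be computed directly from two probability vectors. This sketch uses base-2 logarithms, which bound the distance to [0, 1]; it assumes both vectors are normalized and aligned over the same categories:

```python
import math

# Jensen-Shannon distance between two discrete probability distributions:
# the square root of the Jensen-Shannon divergence, where the divergence is
# the mean KL divergence of each distribution from their midpoint m.

def jensen_shannon_distance(p: list, q: list) -> float:
    def kl(a, b):
        # Kullback-Leibler divergence in bits; 0*log(0) terms are skipped.
        return sum(x * math.log2(x / y) for x, y in zip(a, b) if x > 0)
    m = [(x + y) / 2 for x, y in zip(p, q)]
    return math.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))
```

Identical distributions give a distance of 0; completely disjoint distributions give 1.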
You can also choose to specify any meta fields while adding records to the payload table. These meta fields are also used to compute the output metadata drift. To identify discrete numeric output metadata columns, watsonx.governance uses the following binary logarithm formula: ![Binary logarithm formula is displayed](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-feature-drift-equation.svg) If the distinct_values_count is less than the binary logarithm of the total_count, the feature is identified as discrete. For discrete output metadata columns, watsonx.governance uses the [Jensen Shannon distance](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-driftv2-config.html?context=cdpaas&locale=enjensen-shannon-distance) formula to calculate input metadata drift. For continuous output metadata columns, watsonx.governance uses the [total variation distance](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-driftv2-config.html?context=cdpaas&locale=entotal-variation-distance) and [overlap coefficient](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-driftv2-config.html?context=cdpaas&locale=enoverlap-coefficient) formulas to calculate output metadata drift: - Applies to prompt template evaluations: Yes - Task types: - Text summarization - Text classification - Content generation - Question answering watsonx.governance uses the following formulas to calculate drift v2 evaluation metrics: Total variation distance Total variation distance measures the maximum difference between the probabilities that two probability distributions, baseline (B) and production (P), assign to the same transaction as shown in the following formula: ![Probability distribution formula is displayed](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-total-variation-distance-0.svg) If the two distributions are equal, the total variation distance between them becomes 0. 
watsonx.governance uses the following formula to calculate total variation distance: ![Total variation distance formula is displayed](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-total-variation-distance.svg) * 푥 is a series of equidistant samples that span the domain of ![circumflex f is displayed](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-f-symbol.svg) that range from the combined miniumum of the baseline and production data to the combined maximum of the baseline and production data. * ![d(x) symbol is displayed](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-d-x.svg) is the difference between two consecutive 푥 samples. * ![explanation of formula](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-total-variation-formula-2.svg) is the value of the density function for production data at a 푥 sample. * ![explanation of formula](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-total-variation-formula-3.svg) is the value of the density function for baseline data for at a 푥 sample. The ![explanation of formula](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-total-variation-formula-4.svg) denominator represents the total area under the density function plots for production and baseline data. These summations are an approximation of the integrations over the domain space and both these terms should be 1 and total should be 2. Overlap coefficient watsonx.governance calculates the overlap coefficient by measuring the total area of the intersection between two probability distributions. To measure dissimilarity between distributions, the intersection or the overlap area is subtracted from 1 to calculate the amount of drift. 
watsonx.governance uses the following formula to calculate the overlap coefficient: ![Overlap coefficient formula is displayed](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-total-variation-formula-5.svg) * 푥 is a series of equidistant samples that span the domain of ![circumflex f is displayed](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-f-symbol.svg) that range from the combined miniumum of the baseline and production data to the combined maximum of the baseline and production data. * ![d(x) symbol is displayed](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-d-x.svg) is the difference between two consecutive 푥 samples. * ![explanation of formula](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-total-variation-formula-2.svg) is the value of the density function for production data at a 푥 sample. * ![explanation of formula](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-total-variation-formula-3.svg) is the value of the density function for baseline data for at a 푥 sample. Jensen Shannon distance Jensen Shannon Distance is the normalized form of Kullback-Liebler (KL) Divergence that measures how much one probability distribution differs from the second probabillity distribution. Jensen Shannon Distance is a symmetrical score and always has a finite value. watsonx.governance uses the following formula to calculate the Jensen Shannon distance for two probability distributions, baseline (B) and production (P): ![Jensen Shannon distance formula is displayed](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-jensen-shannon-distance.svg) ![KL Divergence is displayed](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-KL-divergence.svg) is the KL Divergence. Parent topic:[Configuring model evaluations](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-monitors-overview.html)
# Configuring drift v2 evaluations in watsonx\.governance # You can configure drift v2 evaluations with watsonx\.governance to measure changes in your data over time to ensure consistent outcomes for your model\. Use drift v2 evaluations to identify changes in your model output, the accuracy of your predictions, and the distribution of your input data\. The following sections describe the steps that you must complete to configure drift v2 evaluations with watsonx\.governance: ## Set sample size ## watsonx\.governance uses sample sizes to determine how many transactions to process during evaluations\. You must set a minimum sample size to indicate the lowest number of transactions that you want watsonx\.governance to evaluate\. You can also set a maximum sample size to indicate the maximum number of transactions that you want watsonx\.governance to evaluate\. ## Configure baseline data ## watsonx\.governance uses payload records to establish the baseline for drift v2 calculations\. You must configure the number of records that you want to calculate as your baseline data\. ## Set drift thresholds ## You must set threshold values for each metric to enable watsonx\.governance to understand how to identify issues with your evaluation results\. The values that you set create alerts on the evaluation summary page that appear when metric scores violate your thresholds\. You must set values in the range of 0 to 1\. The metric scores must be lower than the threshold values to avoid violations\. ## Supported drift v2 metrics ## When you enable drift v2 evaluations, you can view a summary of evaluation results with metrics for the type of model that you're evaluating\. The following drift v2 metrics are supported by watsonx\.governance: <!-- <ul> --> * Output drift watsonx.governance calculates output drift by measuring the change in the model confidence distribution. 
- **How it works**: watsonx.governance measures how much your model output changes from the time that you train the model. To evaluate prompt templates, watsonx.governance calculates output drift by measuring the change in distribution of prediction probabilities. The prediction probability is calculated by aggregating the log probabilities of the tokens from the model output. When you upload payload data with CSV files, you must include `prediction_probability` values or output drift cannot be calculated. For regression models, watsonx.governance calculates output drift by measuring the change in distribution of predictions on the training and payload data. For classification models, watsonx.governance calculates output drift for each class probability by measuring the change in distribution for class probabilities on the training and payload data. For multi-classification models, watsonx.governance also aggregates output drift for each class probability by measuring a weighted average. - **Do the math**: watsonx.governance uses the following formulas to calculate output drift: - [Total variation distance](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-driftv2-config.html?context=cdpaas&locale=en#total-variation-distance) - [Overlap coefficient](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-driftv2-config.html?context=cdpaas&locale=en#overlap-coefficient) - **Applies to prompt template evaluations**: Yes - **Task types**: - Text summarization - Text classification - Content generation - Entity extraction - Question answering <!-- </ul> --> <!-- <ul> --> * Model quality drift watsonx.governance calculates model quality drift by comparing the estimated runtime accuracy to the training accuracy to measure the drop in accuracy. 
- **How it works**: watsonx.governance builds its own drift detection model that processes your payload data when you configure drift v2 evaluations to predict whether your model generates accurate predictions without the ground truth. The drift detection model uses the input features and class probabilities from your model to create its own input features. - **Do the math**: watsonx.governance uses the following formula to calculate model quality drift: ![model quality score](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-model-quality-score.svg) watsonx.governance calculates the accuracy of your model as the `base_accuracy` by measuring the fraction of correctly predicted transactions in your training data. During evaluations, your transactions are scored against the drift detection model to measure the amount of transactions that are likely predicted correctly by your model. These transactions are compared to the total number of transactions that watsonx.governance processes to calculate the `predicted_accuracy`. If the `predicted_accuracy` is less than the `base_accuracy`, watsonx.governance generates a model quality drift score. - **Applies to prompt template evaluations**: No <!-- </ul> --> <!-- <ul> --> * Feature drift watsonx.governance calculates feature drift by measuring the change in value distribution for important features. - **How it works**: watsonx.governance calculates drift for categorical and numeric features by measuring the probability distribution of continuous and discrete values. To identify discrete values for numeric features, watsonx.governance uses a binary logarithm to compare the number of distinct values of each feature to the total number of values of each feature. 
watsonx.governance uses the following binary logarithm formula to identify discrete numeric features: ![Binary logarithm formula is displayed](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-feature-drift-equation.svg) If the `distinct_values_count` is less than the binary logarithm of the `total_count`, the feature is identified as discrete. - **Do the math**: watsonx.governance uses the following formulas to calculate feature drift: - [Jensen Shannon distance](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-driftv2-config.html?context=cdpaas&locale=en#jensen-shannon-distance) - [Total variation distance](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-driftv2-config.html?context=cdpaas&locale=en#total-variation-distance) - [Overlap coefficient](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-driftv2-config.html?context=cdpaas&locale=en#overlap-coefficient) - **Applies to prompt template evaluations**: No <!-- </ul> --> <!-- <ul> --> * Prediction drift Prediction drift measures the change in distribution of the LLM predicted classes. - **Do the math**: watsonx.governance uses the [Jensen Shannon distance](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-driftv2-config.html?context=cdpaas&locale=en#jensen-shannon-distance) formula to calculate prediction drift. - **Applies to prompt template evaluations**: Yes - **Task types**: Text classification <!-- </ul> --> <!-- <ul> --> * Input metadata drift Input metadata drift measures the change in distribution of the LLM input text metadata. 
- **How it works**: watsonx.governance calculates the following metadata with the LLM input text: **Character count**: Total number of characters in the input text **Word count**: Total number of words in the input text **Sentence count**: Total number of sentences in the input text **Average word length**: Average length of words in the input text **Average sentence length**: Average length of the sentences in the input text watsonx.governance calculates input metadata drift by measuring the change in distribution of the metadata columns. The input token count column, if present in the payload, is also used to compute the input metadata drift. You can also choose to specify any meta fields while adding records to the payload table. These meta fields are also used to compute the input metadata drift. To identify discrete numeric input metadata columns, watsonx.governance uses the following binary logarithm formula: ![Binary logarithm formula is displayed](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-feature-drift-equation.svg) If the `distinct_values_count` is less than the binary logarithm of the `total_count`, the feature is identified as discrete. For discrete input metadata columns, watsonx.governance uses the [Jensen Shannon distance](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-driftv2-config.html?context=cdpaas&locale=en#jensen-shannon-distance) formula to calculate input metadata drift. For continuous input metadata columns, watsonx.governance uses the [total variation distance](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-driftv2-config.html?context=cdpaas&locale=en#total-variation-distance) and [overlap coefficient](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-driftv2-config.html?context=cdpaas&locale=en#overlap-coefficient) formulas to calculate input metadata drift. 
- **Applies to prompt template evaluations**: Yes - **Task types**: - Text summarization - Text classification - Content generation - Entity extraction - Question answering <!-- </ul> --> <!-- <ul> --> * Output metadata drift Output metadata drift measures the change in distribution of the LLM output text metadata. - **How it works**: watsonx.governance calculates the following metadata with the LLM output text: **Character count**: Total number of characters in the output text **Word count**: Total number of words in the output text **Sentence count**: Total number of sentences in the output text **Average word length**: Average length of words in the output text **Average sentence length**: Average length of the sentences in the output text watsonx.governance calculates output metadata drift by measuring the change in distribution of the metadata columns. The output token count column, if present in the payload, is also used to compute the output metadata drift. You can also choose to specify any meta fields while adding records to the payload table. These meta fields are also used to compute the output metadata drift. To identify discrete numeric output metadata columns, watsonx.governance uses the following binary logarithm formula: ![Binary logarithm formula is displayed](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-feature-drift-equation.svg) If the `distinct_values_count` is less than the binary logarithm of the `total_count`, the feature is identified as discrete. For discrete output metadata columns, watsonx.governance uses the [Jensen Shannon distance](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-driftv2-config.html?context=cdpaas&locale=en#jensen-shannon-distance) formula to calculate output metadata drift. 
For continuous output metadata columns, watsonx.governance uses the [total variation distance](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-driftv2-config.html?context=cdpaas&locale=en#total-variation-distance) and [overlap coefficient](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-driftv2-config.html?context=cdpaas&locale=en#overlap-coefficient) formulas to calculate output metadata drift. - **Applies to prompt template evaluations**: Yes - **Task types**: - Text summarization - Text classification - Content generation - Question answering <!-- </ul> --> watsonx\.governance uses the following formulas to calculate drift v2 evaluation metrics: ### Total variation distance ### Total variation distance measures the maximum difference between the probabilities that two probability distributions, baseline (B) and production (P), assign to the same transaction as shown in the following formula: ![Probability distribution formula is displayed](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-total-variation-distance-0.svg) If the two distributions are equal, the total variation distance between them becomes 0\. watsonx\.governance uses the following formula to calculate total variation distance: ![Total variation distance formula is displayed](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-total-variation-distance.svg) <!-- <ul> --> * 푥 is a series of equidistant samples that span the domain of ![circumflex f is displayed](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-f-symbol.svg) that range from the combined minimum of the baseline and production data to the combined maximum of the baseline and production data\. * ![d(x) symbol is displayed](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-d-x.svg) is the difference between two consecutive 푥 samples\. 
* ![explanation of formula](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-total-variation-formula-2.svg) is the value of the density function for production data at a 푥 sample\. * ![explanation of formula](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-total-variation-formula-3.svg) is the value of the density function for baseline data at a 푥 sample\. <!-- </ul> --> The ![explanation of formula](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-total-variation-formula-4.svg) denominator represents the total area under the density function plots for production and baseline data\. These summations approximate the integrations over the domain space; each of these terms should be 1, so the total should be 2\. ### Overlap coefficient ### watsonx\.governance calculates the overlap coefficient by measuring the total area of the intersection between two probability distributions\. To measure dissimilarity between distributions, the intersection or the overlap area is subtracted from 1 to calculate the amount of drift\. watsonx\.governance uses the following formula to calculate the overlap coefficient: ![Overlap coefficient formula is displayed](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-total-variation-formula-5.svg) <!-- <ul> --> * 푥 is a series of equidistant samples that span the domain of ![circumflex f is displayed](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-f-symbol.svg) that range from the combined minimum of the baseline and production data to the combined maximum of the baseline and production data\. * ![d(x) symbol is displayed](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-d-x.svg) is the difference between two consecutive 푥 samples\. 
* ![explanation of formula](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-total-variation-formula-2.svg) is the value of the density function for production data at a 푥 sample\. * ![explanation of formula](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-total-variation-formula-3.svg) is the value of the density function for baseline data at a 푥 sample\. <!-- </ul> --> ### Jensen Shannon distance ### Jensen Shannon Distance is the normalized form of Kullback\-Leibler (KL) Divergence that measures how much one probability distribution differs from a second probability distribution\. Jensen Shannon Distance is a symmetrical score and always has a finite value\. watsonx\.governance uses the following formula to calculate the Jensen Shannon distance for two probability distributions, baseline (B) and production (P): ![Jensen Shannon distance formula is displayed](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-jensen-shannon-distance.svg) ![KL Divergence is displayed](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-KL-divergence.svg) is the KL Divergence\. **Parent topic:**[Configuring model evaluations](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-monitors-overview.html) <!-- </article "role="article" "> -->
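The distance measures above can be sketched in code for the simplified case of discrete (histogram) distributions; this is a minimal illustration of the formulas, not watsonx.governance's implementation, which evaluates density functions over equidistant samples. The `is_discrete` helper mirrors the binary logarithm rule described for feature and metadata columns.

```python
from math import log2, sqrt

def is_discrete(distinct_values_count, total_count):
    """Binary logarithm rule: a numeric column is treated as discrete
    when its number of distinct values is below log2 of the row count."""
    return distinct_values_count < log2(total_count)

def _normalize(counts):
    """Turn raw histogram counts into a probability distribution."""
    total = float(sum(counts))
    return [c / total for c in counts]

def jensen_shannon_distance(baseline, production):
    """Square root of the JS divergence (log base 2), so the score is in [0, 1]."""
    b, p = _normalize(baseline), _normalize(production)
    m = [0.5 * (x + y) for x, y in zip(b, p)]
    def kl(a, ref):  # Kullback-Leibler divergence, skipping zero-probability bins
        return sum(ai * log2(ai / ri) for ai, ri in zip(a, ref) if ai > 0)
    return sqrt(0.5 * kl(b, m) + 0.5 * kl(p, m))

def total_variation_distance(baseline, production):
    """Half the summed absolute difference between the two distributions."""
    b, p = _normalize(baseline), _normalize(production)
    return 0.5 * sum(abs(x - y) for x, y in zip(b, p))

def overlap_drift(baseline, production):
    """1 minus the overlap coefficient: the non-overlapping probability mass."""
    b, p = _normalize(baseline), _normalize(production)
    return 1.0 - sum(min(x, y) for x, y in zip(b, p))
```

For identical distributions all three scores are 0; for fully disjoint distributions they are 1. In this discrete simplification the total variation distance and the overlap-based drift coincide; on continuous density estimates, as the formulas above show, they are computed separately.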
F8EBEE6125BEE635DD8AF7BF2F6340D58CE99958
https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-eval-prompt-spaces.html?context=cdpaas&locale=en
Evaluating prompt templates in deployment spaces
Evaluating prompt templates in deployment spaces You can evaluate prompt templates in deployment spaces to measure the performance of foundation model tasks and understand how your model generates responses. With watsonx.governance, you can evaluate prompt templates in deployment spaces to measure how effectively your foundation models generate responses for the following task types: * [Classification](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html#classification) * [Summarization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html#summarization) * [Generation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html#generation) * [Question answering](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html#qa) * [Entity extraction](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html#extraction) Prompt templates are saved prompt inputs for foundation models. You can evaluate prompt template deployments in pre-production and production spaces. Before you begin You must have access to a watsonx.governance deployment space to evaluate prompt templates. For more information, see [Setting up watsonx.governance](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-setup-wos.html). To run evaluations, you must log in and [switch](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.html#account) to a watsonx account that has watsonx.governance and watsonx.ai instances installed, and open a deployment space. You must be assigned the Admin or Editor roles for the account to open deployment spaces. 
In your project, you must also [create and save a prompt template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html#creating-and-running-a-prompt) and [promote a prompt template to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/prompt-template-deploy.html). You must specify at least one variable when you create prompt templates to enable evaluations. The following sections describe how to evaluate prompt templates in deployment spaces and review your evaluation results: * [Evaluating prompt templates in pre-production spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-eval-prompt-spaces.html?context=cdpaas&locale=en#prompt-eval-pre-prod) * [Evaluating prompt templates in production spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-eval-prompt-spaces.html?context=cdpaas&locale=en#prompt-eval-prod) Evaluating prompt templates in pre-production spaces Activate evaluation To run prompt template evaluations, open a deployment and click Activate on the Evaluations tab to open the Evaluate prompt template wizard. You can run evaluations only if you are assigned the Admin or Editor roles for your deployment space. ![Run prompt template evaluation](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-activate-prompt-eval.png) Select dimensions The Evaluate prompt template wizard displays the dimensions that are available to evaluate for the task type that is associated with your prompt. You can expand the dimensions to view the list of metrics that are used to evaluate the dimensions that you select. ![Select dimensions to evaluate](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-select-dimension-preprod-spaces.png) watsonx.governance automatically configures evaluations for each dimension with default settings. 
To [configure evaluations](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-monitors-overview.html) with different settings, you can select Advanced settings to set minimum sample sizes and threshold values for each metric as shown in the following example: ![Configure evaluations](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-config-eval-settings.png) Select test data You must upload a CSV file that contains test data with reference columns and columns for each prompt variable. When the upload completes, you must also map [prompt variables](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-variables.html#creating-prompt-variables) to the associated columns from your test data. ![Select test data to upload](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-select-test-data-preprod-spaces.png) Review and evaluate You can review the selections for the prompt task type, the uploaded test data, and the type of evaluation that runs. You must select Evaluate to run the evaluation. ![Review and evaluate prompt template evaluation settings](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-review-evaluate-preprod-spaces.png) Reviewing evaluation results When your evaluation finishes, you can review a summary of your evaluation results on the Evaluations tab in watsonx.governance to gain insights about your model performance. The summary provides an overview of metric scores and violations of default score thresholds for your prompt template evaluations. To analyze results, you can click the arrow ![navigation arrow](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-nav-arrow.png) next to your prompt template evaluation to view data visualizations of your results over time. You can also analyze results from the model health evaluation that is run by default during prompt template evaluations to understand how efficiently your model processes your data. 
The Actions menu also provides the following options to help you analyze your results: * Evaluate now: Run an evaluation with a different test data set. * All evaluations: Display a history of your evaluations to understand how your results change over time. * Configure monitors: Configure evaluation thresholds and sample sizes. * View model information: View details about your model to understand how your deployment environment is set up. ![Analyze prompt template evaluation results](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-review-results-preprod.png) If you [track your prompt templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-track-prompt-temp.html), you can review evaluation results to gain insights about your model performance throughout the AI lifecycle. Evaluating prompt templates in production spaces Activate evaluation To run prompt template evaluations, open a deployment and click Activate on the Evaluations tab to open the Evaluate prompt template wizard. You can run evaluations only if you are assigned the Admin or Editor roles for your deployment space. ![Run prompt template evaluation](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-activate-prompt-eval.png) Select dimensions The Evaluate prompt template wizard displays the dimensions that are available to evaluate for the task type that is associated with your prompt. You can provide a label column name for the reference output that you specify in your feedback data. You can also expand the dimensions to view the list of metrics that are used to evaluate the dimensions that you select. ![Select dimensions to evaluate](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-select-dimensions-pre-prod-spaces.png) watsonx.governance automatically configures evaluations for each dimension with default settings. 
To [configure evaluations](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-monitors-overview.html) with different settings, you can select Advanced settings to set minimum sample sizes and threshold values for each metric as shown in the following example: ![Configure evaluations](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-config-eval-settings.png) Review and evaluate You can review the selections for the prompt task type and the type of evaluation that runs. You can also select View payload schema or View feedback schema to validate that your column names match the prompt variable names in the prompt template. You must select Activate to run the evaluation. ![Review and evaluate selections](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-review-evaluate-prod-spaces.png) When the evaluation summary page displays, select Evaluate now in the Actions menu to open the Import test data window and generate evaluation results. ![Select evaluate now](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-evaluate-now-prod-space.png) Import test data In the Import test data window, you can select Upload payload data or Upload feedback data to upload a CSV file that contains labeled columns that match the columns in your payload and feedback schemas. ![Import test data](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-import-test-data-prod-space.png) When your upload completes successfully, you can select Evaluate now to run your evaluation. Reviewing evaluation results When your evaluation finishes, you can review a summary of your evaluation results on the Evaluations tab in watsonx.governance to gain insights about your model performance. The summary provides an overview of metric scores and violations of default score thresholds for your prompt template evaluations. 
To analyze results, you can click the arrow ![navigation arrow](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-nav-arrow.png) next to your prompt template evaluation to view data visualizations of your results over time. You can also analyze results from the model health evaluation that is run by default during prompt template evaluations to understand how efficiently your model processes your data. The Actions menu also provides the following options to help you analyze your results: * Evaluate now: Run an evaluation with a different test data set. * Configure monitors: Configure evaluation thresholds and sample sizes. * View model information: View details about your model to understand how your deployment environment is set up. ![Analyze prompt template evaluation results](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-eval-results-prod-spaces.png) If you [track your prompt templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-track-prompt-temp.html), you can review evaluation results to gain insights about your model performance throughout the AI lifecycle.
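As a concrete illustration of the test data format described above, the following sketch builds a small CSV file with one column per prompt variable plus a reference column. The column names (`text`, `reference_summary`) are hypothetical examples for a summarization prompt template; in practice they must match your own prompt variable names and the label column that you configure.

```python
import csv
import io

# Hypothetical test data for a summarization prompt template with one
# prompt variable ("text") and a reference column ("reference_summary").
# These column names are illustrative -- use your own prompt variable names.
rows = [
    {"text": "The quarterly report shows revenue grew 12 percent.",
     "reference_summary": "Revenue grew 12 percent."},
    {"text": "Support tickets dropped after the self-service portal launched.",
     "reference_summary": "Ticket volume fell after the portal launch."},
]

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["text", "reference_summary"])
writer.writeheader()
writer.writerows(rows)
csv_content = buffer.getvalue()  # save this as the CSV file that you upload
```

During test data selection, each prompt variable is then mapped to its matching column, and the reference column supplies the expected output that the evaluation scores against.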
# Evaluating prompt templates in deployment spaces #

You can evaluate prompt templates in deployment spaces to measure the performance of foundation model tasks and understand how your model generates responses.

With watsonx.governance, you can evaluate prompt templates in deployment spaces to measure how effectively your foundation models generate responses for the following task types:

* [Classification](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html#classification)
* [Summarization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html#summarization)
* [Generation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html#generation)
* [Question answering](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html#qa)
* [Entity extraction](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html#extraction)

Prompt templates are saved prompt inputs for foundation models. You can evaluate prompt template deployments in pre-production and production spaces.

## Before you begin ##

You must have access to a watsonx.governance deployment space to evaluate prompt templates. For more information, see [Setting up watsonx.governance](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-setup-wos.html).

To run evaluations, you must log in and [switch](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.html#account) to a watsonx account that has watsonx.governance and watsonx.ai instances installed, and then open a deployment space. You must be assigned the **Admin** or **Editor** role for the account to open deployment spaces.
In your project, you must also [create and save a prompt template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html#creating-and-running-a-prompt) and [promote the prompt template to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/prompt-template-deploy.html). You must specify at least one variable when you create prompt templates to enable evaluations.

The following sections describe how to evaluate prompt templates in deployment spaces and review your evaluation results:

* [Evaluating prompt templates in pre-production spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-eval-prompt-spaces.html?context=cdpaas&locale=en#prompt-eval-pre-prod)
* [Evaluating prompt templates in production spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-eval-prompt-spaces.html?context=cdpaas&locale=en#prompt-eval-prod)

## Evaluating prompt templates in pre-production spaces ##

### Activate evaluation ###

To run prompt template evaluations, click **Activate** on the **Evaluations** tab when you open a deployment to open the **Evaluate prompt template** wizard. You can run evaluations only if you are assigned the **Admin** or **Editor** role for your deployment space.

![Run prompt template evaluation](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-activate-prompt-eval.png)

### Select dimensions ###

The **Evaluate prompt template** wizard displays the dimensions that are available to evaluate for the task type that is associated with your prompt. You can expand the dimensions to view the list of metrics that are used to evaluate the dimensions that you select.

![Select dimensions to evaluate](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-select-dimension-preprod-spaces.png)

watsonx.governance automatically configures evaluations for each dimension with default settings.
To [configure evaluations](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-monitors-overview.html) with different settings, select **Advanced settings** to set minimum sample sizes and threshold values for each metric, as shown in the following example:

![Configure evaluations](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-config-eval-settings.png)

### Select test data ###

You must upload a CSV file that contains test data with reference columns and columns for each prompt variable. When the upload completes, you must also map [prompt variables](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-variables.html#creating-prompt-variables) to the associated columns from your test data.

![Select test data to upload](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-select-test-data-preprod-spaces.png)

### Review and evaluate ###

You can review the selections for the prompt task type, the uploaded test data, and the type of evaluation that runs. Select **Evaluate** to run the evaluation.

![Review and evaluate prompt template evaluation settings](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-review-evaluate-preprod-spaces.png)

### Reviewing evaluation results ###

When your evaluation finishes, you can review a summary of your evaluation results on the **Evaluations** tab in watsonx.governance to gain insights about your model performance. The summary provides an overview of metric scores and violations of default score thresholds for your prompt template evaluations.

To analyze results, click the arrow ![navigation arrow](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-nav-arrow.png) next to your prompt template evaluation to view data visualizations of your results over time.
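A test data file like the one described in the Select test data step can be sketched in Python. The prompt variable name `text`, the reference column name `reference_output`, and the sample rows below are assumptions for a summarization template, not values taken from the product; your columns must match your own prompt variable names.

```python
import csv
import io

# Hypothetical test data for a summarization prompt template with one
# prompt variable named "text". Column names are illustrative only.
rows = [
    {"text": "The meeting covered Q3 revenue and hiring plans.",
     "reference_output": "Q3 revenue and hiring were discussed."},
    {"text": "The patch fixes a memory leak in the scheduler.",
     "reference_output": "A scheduler memory leak was fixed."},
]

# Write the rows as CSV: one header row, then one row per test example.
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["text", "reference_output"])
writer.writeheader()
writer.writerows(rows)
csv_content = buffer.getvalue()

print(csv_content.splitlines()[0])  # → text,reference_output
```

During upload, each CSV column is then mapped to the matching prompt variable or reference field in the wizard.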
You can also analyze results from the model health evaluation that runs by default during prompt template evaluations to understand how efficiently your model processes your data.

The **Actions** menu also provides the following options to help you analyze your results:

* **Evaluate now**: Run an evaluation with a different test data set.
* **All evaluations**: Display a history of your evaluations to understand how your results change over time.
* **Configure monitors**: Configure evaluation thresholds and sample sizes.
* **View model information**: View details about your model to understand how your deployment environment is set up.

![Analyze prompt template evaluation results](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-review-results-preprod.png)

If you [track your prompt templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-track-prompt-temp.html), you can review evaluation results to gain insights about your model performance throughout the AI lifecycle.

## Evaluating prompt templates in production spaces ##

### Activate evaluation ###

To run prompt template evaluations, click **Activate** on the **Evaluations** tab when you open a deployment to open the **Evaluate prompt template** wizard. You can run evaluations only if you are assigned the **Admin** or **Editor** role for your deployment space.

![Run prompt template evaluation](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-activate-prompt-eval.png)

### Select dimensions ###

The **Evaluate prompt template** wizard displays the dimensions that are available to evaluate for the task type that is associated with your prompt. You can provide a label column name for the reference output that you specify in your feedback data. You can also expand the dimensions to view the list of metrics that are used to evaluate the dimensions that you select.
![Select dimensions to evaluate](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-select-dimensions-pre-prod-spaces.png)

watsonx.governance automatically configures evaluations for each dimension with default settings.

To [configure evaluations](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-monitors-overview.html) with different settings, select **Advanced settings** to set minimum sample sizes and threshold values for each metric, as shown in the following example:

![Configure evaluations](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-config-eval-settings.png)

### Review and evaluate ###

You can review the selections for the prompt task type and the type of evaluation that runs. You can also select **View payload schema** or **View feedback schema** to validate that your column names match the prompt variable names in the prompt template. Select **Activate** to run the evaluation.

![Review and evaluate selections](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-review-evaluate-prod-spaces.png)

To generate evaluation results, select **Evaluate now** in the **Actions** menu to open the **Import test data** window when the evaluation summary page displays.

![Select evaluate now](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-evaluate-now-prod-space.png)

### Import test data ###

In the **Import test data** window, you can select **Upload payload data** or **Upload feedback data** to upload a CSV file that contains labeled columns that match the columns in your payload and feedback schemas.

![Import test data](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-import-test-data-prod-space.png)

When your upload completes successfully, select **Evaluate now** to run your evaluation.
### Reviewing evaluation results ###

When your evaluation finishes, you can review a summary of your evaluation results on the **Evaluations** tab in watsonx.governance to gain insights about your model performance. The summary provides an overview of metric scores and violations of default score thresholds for your prompt template evaluations.

To analyze results, click the arrow ![navigation arrow](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-nav-arrow.png) next to your prompt template evaluation to view data visualizations of your results over time.

You can also analyze results from the model health evaluation that runs by default during prompt template evaluations to understand how efficiently your model processes your data.

The **Actions** menu also provides the following options to help you analyze your results:

* **Evaluate now**: Run an evaluation with a different test data set.
* **Configure monitors**: Configure evaluation thresholds and sample sizes.
* **View model information**: View details about your model to understand how your deployment environment is set up.

![Analyze prompt template evaluation results](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-eval-results-prod-spaces.png)

If you [track your prompt templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-track-prompt-temp.html), you can review evaluation results to gain insights about your model performance throughout the AI lifecycle.
B8581C38346F1FE8900D18DB8FCEF8145F5965BC
https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-eval-prompt.html?context=cdpaas&locale=en
Evaluating prompt templates in projects
# Evaluating prompt templates in projects #

You can evaluate prompt templates in projects to measure the performance of foundation model tasks and understand how your model generates responses.

With watsonx.governance, you can evaluate prompt templates in projects to measure how effectively your foundation models generate responses for the following task types:

* [Classification](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html#classification)
* [Summarization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html#summarization)
* [Generation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html#generation)
* [Question answering](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html#qa)
* [Entity extraction](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html#extraction)

## Before you begin ##

You must have access to a watsonx.governance project to evaluate prompt templates. For more information, see [Setting up Watson OpenScale](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-setup-wos.html).

To run evaluations, you must log in and [switch](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.html#account) to a watsonx account that has watsonx.governance and watsonx.ai instances installed, and then open a project. You must be assigned the **Admin** or **Editor** role for the account to open projects.

In your project, you must use the watsonx.ai [Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html) to create and save a prompt template. You must specify variables when you create prompt templates to enable evaluations. The **Try** section in the Prompt Lab must contain at least one variable.

Watch this video to see how to evaluate a prompt template in a project.
This video provides a visual method to learn the concepts and tasks in this documentation.

The following sections describe how to evaluate prompt templates in projects and review your evaluation results.

## Running evaluations ##

To run prompt template evaluations, click **Evaluate** when you open a saved prompt template on the **Assets** tab in watsonx.governance to open the **Evaluate prompt template** wizard. You can run evaluations only if you are assigned the **Admin** or **Editor** role for your project.

![Run prompt template evaluation](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-run-eval-prompt.png)

### Select dimensions ###

The **Evaluate prompt template** wizard displays the dimensions that are available to evaluate for the task type that is associated with your prompt. You can expand the dimensions to view the list of metrics that are used to evaluate the dimensions that you select.

![Select dimensions to evaluate](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-select-dimension-preprod-spaces.png)

watsonx.governance automatically configures evaluations for each dimension with default settings.

To [configure evaluations](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-monitors-overview.html) with different settings, select **Advanced settings** to set minimum sample sizes and threshold values for each metric, as shown in the following example:

![Configure evaluations](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-config-eval-settings.png)

### Select test data ###

You must upload a CSV file that contains test data with reference columns and columns for each prompt variable. When the upload completes, you must also map [prompt variables](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-variables.html#creating-prompt-variables) to the associated columns from your test data.
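A test data file of the shape described in the Select test data step can be sketched in Python. The prompt variable name `question`, the reference column `reference_output`, and the rows below are assumptions for a question-answering template, not values from the product; your columns must match your own prompt variables.

```python
import csv
import io

# Hypothetical test data for a question-answering prompt template with one
# prompt variable named "question". Column names are illustrative only.
header = ["question", "reference_output"]
rows = [
    ["What is the capital of France?", "Paris"],
    ["How many sides does a hexagon have?", "Six"],
]

# Write a header row followed by one row per test example.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(header)
writer.writerows(rows)
test_data_csv = buf.getvalue()
```

In the wizard, the `question` column would then be mapped to the matching prompt variable, and `reference_output` would serve as the reference column.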
![Select test data to upload](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-select-test-data.png)

### Review and evaluate ###

Before you run your prompt template evaluation, you can review the selections for the prompt task type, the uploaded test data, and the type of evaluation that runs.

![Review and evaluate prompt template evaluation settings](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-review-prompt-eval-select.png)

## Reviewing evaluation results ##

When your evaluation completes, you can review a summary of your evaluation results on the **Evaluate** tab in watsonx.governance to gain insights about your model performance. The summary provides an overview of metric scores and violations of default score thresholds for your prompt template evaluations.

If you are assigned the **Viewer** role for your project, you can select **Evaluate** from the asset list on the **Assets** tab to view evaluation results.

![Run prompt template evaluation from asset list](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-run-eval-asset.png)

To analyze results, click the arrow ![navigation arrow](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-nav-arrow.png) next to your prompt template evaluation to view data visualizations of your results over time.

You can also analyze results from the model health evaluation that runs by default during prompt template evaluations to understand how efficiently your model processes your data.

The **Actions** menu also provides the following options to help you analyze your results:

* **Evaluate now**: Run an evaluation with a different test data set.
* **All evaluations**: Display a history of your evaluations to understand how your results change over time.
* **Configure monitors**: Configure evaluation thresholds and sample sizes.
* **View model information**: View details about your model to understand how your deployment environment is set up.

![Analyze prompt template evaluation results](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-analyze-prompt-eval-results.png)

If you [track prompt templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-track-prompt-temp.html), you can review evaluation results to gain insights about your model performance throughout the AI lifecycle.

**Parent topic:** [Evaluating AI models with Watson OpenScale](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/getting-started.html)
259E5A974F6170CBFDF7B0014CC1A0A0111423DE
https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-feedback-logging.html?context=cdpaas&locale=en
Feedback logging in watsonx.governance
# Feedback logging in watsonx.governance #

You can enable feedback logging in watsonx.governance to configure model evaluations.

To [manage feedback data](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-manage-feedback-data.html) for configuring quality and generative AI quality evaluations, watsonx.governance must log your feedback data in the feedback logging table.

Generative AI quality evaluations use feedback data to generate results for the following task types when you evaluate prompt templates:

* Text summarization
* Content generation
* Question answering
* Entity extraction

Quality evaluations use feedback data to generate results for text classification tasks.
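For a text classification task, each feedback record pairs an input with its known class label. The sketch below is illustrative only: the prompt variable name `text`, the `reference_output` column, and the sample labels are assumptions, not values from the product.

```python
import collections

# Hypothetical feedback rows for a text classification prompt template.
# "text" is an assumed prompt variable; "reference_output" holds the known
# class label that quality evaluations compare against model predictions.
feedback_rows = [
    {"text": "My card was charged twice.", "reference_output": "billing"},
    {"text": "The app crashes on startup.", "reference_output": "technical"},
    {"text": "I was double-billed last month.", "reference_output": "billing"},
]

# A quick sanity check on label balance before uploading the feedback data.
label_counts = collections.Counter(row["reference_output"] for row in feedback_rows)
```

Checking label balance up front helps catch skewed feedback sets that would make accuracy metrics misleading.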
DAC8A5E350D74E41C1738F4E2A02258FECF9D20D
https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-manage-data.html?context=cdpaas&locale=en
Managing data for model evaluations in watsonx.governance
# Managing data for model evaluations in watsonx.governance #

To enable model evaluations in watsonx.governance, you must prepare your data for logging to generate insights.

You must provide your model data to watsonx.governance in a format that it supports to enable model evaluations. watsonx.governance processes your model transactions and logs the data in the watsonx.governance data mart. The data mart is the logging database that stores the data that is used for model evaluations.

The following sections describe the different types of data that watsonx.governance logs for model evaluations:

## Payload data ##

Payload data contains the input and output transactions for your deployment. To configure explainability, fairness, and drift evaluations, watsonx.governance must receive payload data from your model, which it stores in a payload logging table.

The payload logging table contains the feature and prediction columns that exist in your training data and a prediction probability column that contains the model's confidence in the prediction that it provides. The table also includes timestamp and ID columns to identify each scoring request that you send to watsonx.governance, as shown in the following example:

![Python SDK sample output of payload logging table](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-ntbok.png)

You must send scoring requests to provide watsonx.governance with a log of your model transactions. For more information, see [Managing payload data](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-payload-logging.html).

## Feedback data ##

Feedback data is labeled data that matches the structure of training data and includes known model outcomes that are compared to your model predictions to measure the accuracy of your model. watsonx.governance uses feedback data to enable you to configure quality evaluations.
You must upload feedback data regularly to watsonx.governance to continuously measure the accuracy of your model predictions. For more information, see [Managing feedback data](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-manage-feedback-data.html).

## Learn more ##

[Sending model transactions](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-send-model-transactions.html)
FB7F7B9A220C66F7E3407CA9553D974CD4A14402
https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-manage-feedback-data.html?context=cdpaas&locale=en
Managing feedback data for watsonx.governance
# Managing feedback data for watsonx\.governance # You must provide feedback data to watsonx\.governance to configure quality and generative AI quality evaluations and to detect changes in your model predictions\. When you provide feedback data to watsonx\.governance, you can regularly evaluate the accuracy of your model predictions\. ## Feedback logging ## watsonx\.governance stores the feedback data that you provide as records in a feedback logging table\. The feedback logging table contains the following columns when you evaluate prompt templates: * **Required columns**: * Prompt variable(s): Contains the values for the variables that are created for prompt templates * `reference_output`: Contains the ground truth value * **Optional columns**: * `_original_prediction`: Contains the output that's generated by the foundation model ## Uploading feedback data ## You can use a feedback logging endpoint to upload data for quality evaluations\. You can also upload feedback data with a CSV file\. For more information, see [Sending model transactions](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-send-model-transactions.html)\. ## Learn more ## [Sending model transactions](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-send-model-transactions.html) **Parent topic:**[Managing data for model evaluations in Watson OpenScale](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-manage-data.html)
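As a minimal sketch, a feedback CSV with the required columns could be assembled like this. The `question` column stands in for a prompt variable; the actual column names come from the variables defined in your own prompt template, and the example values are illustrative only:

```python
import csv
import io

# Hypothetical feedback rows for a prompt-template evaluation.
# "question" is a stand-in prompt variable; "reference_output" holds
# the ground truth value, as the feedback logging table requires.
rows = [
    {"question": "What is the capital of France?", "reference_output": "Paris"},
    {"question": "How many days are in a leap year?", "reference_output": "366"},
]

# Write the rows to an in-memory CSV with a header line.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["question", "reference_output"])
writer.writeheader()
writer.writerows(rows)
feedback_csv = buf.getvalue()
print(feedback_csv)
```

A file with this shape could then be uploaded through the feedback data upload described above; the endpoint details themselves are covered in [Sending model transactions](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-send-model-transactions.html).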
D2F4F71189D7F5C92DDC2CCB38F2BCE1EFD4BC65
https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-manage-payload-data.html?context=cdpaas&locale=en
Managing payload data for watsonx.governance
Managing payload data for watsonx.governance You must provide payload data to configure drift v2 and generative AI quality evaluations in watsonx.governance. Payload data contains all of your model transactions. You can log payload data with watsonx.governance to enable evaluations. To log payload data, watsonx.governance must receive scoring requests. Logging payload data When you send a scoring request, watsonx.governance processes your model transactions to enable model evaluations. watsonx.governance scores the data and stores it as records in a payload logging table within the watsonx.governance data mart. The payload logging table contains the following columns when you evaluate prompt templates: * Required columns: * Prompt variable(s): Contains the values for the variables that are created for prompt templates * generated_text: Contains the output that's generated by the foundation model * Optional columns: * input_token_count: Contains the number of tokens in the input text * generated_token_count: Contains the number of tokens in the generated text * prediction_probability: Contains the aggregate value of log probabilities of generated tokens that represent the winning output The table can also include timestamp and ID columns to store your data as scoring records. You can view your payload logging table by accessing the database that you specified for the data mart or by using the [Watson OpenScale Python SDK](https://client-docs.aiopenscale.cloud.ibm.com/html/index.html) as shown in the following example: ![Python SDK sample output of payload logging table](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-ntbok.png) Sending payload data If you are using IBM Watson Machine Learning as your machine learning provider, watsonx.governance automatically logs payload data when your model is scored. After you configure evaluations, you can also use a payload logging endpoint to send scoring requests to run on-demand evaluations. 
For production models, you can also upload payload data with a CSV file to send scoring requests. For more information, see [Sending model transactions](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-send-model-transactions.html). Parent topic:[Managing data for model evaluations in Watson OpenScale](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-manage-data.html)
# Managing payload data for watsonx\.governance # You must provide payload data to configure drift v2 and generative AI quality evaluations in watsonx\.governance\. Payload data contains all of your model transactions\. You can log payload data with watsonx\.governance to enable evaluations\. To log payload data, watsonx\.governance must receive scoring requests\. ## Logging payload data ## When you send a scoring request, watsonx\.governance processes your model transactions to enable model evaluations\. watsonx\.governance scores the data and stores it as records in a payload logging table within the watsonx\.governance data mart\. The payload logging table contains the following columns when you evaluate prompt templates: * **Required columns**: * Prompt variable(s): Contains the values for the variables that are created for prompt templates * `generated_text`: Contains the output that's generated by the foundation model * **Optional columns**: * `input_token_count`: Contains the number of tokens in the input text * `generated_token_count`: Contains the number of tokens in the generated text * `prediction_probability`: Contains the aggregate value of log probabilities of generated tokens that represent the winning output The table can also include timestamp and ID columns to store your data as scoring records\. You can view your payload logging table by accessing the database that you specified for the data mart or by using the [Watson OpenScale Python SDK](https://client-docs.aiopenscale.cloud.ibm.com/html/index.html) as shown in the following example: ![Python SDK sample output of payload logging table](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/images/wos-ntbok.png) ## Sending payload data ## If you are using IBM Watson Machine Learning as your machine learning provider, watsonx\.governance automatically logs payload data when your model is scored\. 
After you configure evaluations, you can also use a payload logging endpoint to send scoring requests to run on\-demand evaluations\. For production models, you can also upload payload data with a CSV file to send scoring requests\. For more information, see [Sending model transactions](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-send-model-transactions.html)\. **Parent topic:**[Managing data for model evaluations in Watson OpenScale](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-manage-data.html)
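To make the payload column contract above concrete, here is a small sketch that checks a hand-built record against the required and optional columns. The prompt variable name (`question`), the helper function, and all values are illustrative assumptions, not part of the watsonx.governance API:

```python
# Required and optional payload-logging columns for a prompt-template
# evaluation, as listed above. Prompt variables are supplied per template.
REQUIRED = ["generated_text"]
OPTIONAL = ["input_token_count", "generated_token_count", "prediction_probability"]

def missing_columns(record, prompt_variables):
    """Return the required columns (prompt variables + generated_text)
    that are absent from a candidate payload record."""
    return [col for col in list(prompt_variables) + REQUIRED if col not in record]

# Hypothetical record; "question" stands in for a prompt variable.
record = {
    "question": "Summarize the quarterly results.",       # prompt variable
    "generated_text": "Revenue grew 8% over last quarter.",
    "input_token_count": 9,
    "generated_token_count": 8,
    "prediction_probability": -3.72,  # aggregate log probability of the output
}

print(missing_columns(record, ["question"]))
```

Validating records locally like this before sending scoring requests can catch missing prompt variables or a missing `generated_text` column before they surface as evaluation failures.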