https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_classify.html?context=cdpaas&locale=en
Classifying telecommunications customers (SPSS Modeler)
# Classifying telecommunications customers

Logistic regression is a statistical technique for classifying records based on the values of input fields. It is analogous to linear regression, but takes a categorical target field instead of a numeric one.

For example, suppose a telecommunications provider has segmented its customer base by service usage patterns, categorizing the customers into four groups. If demographic data can be used to predict group membership, you can customize offers for individual prospective customers.

This example uses the flow named Classifying Telecommunications Customers, available in the example project. The data file is telco.csv.

The example focuses on using demographic data to predict usage patterns. The target field `custcat` has four possible values that correspond to the four customer groups:

Table 1. Possible values for the target field

| Value | Label         |
| ----- | ------------- |
| 1     | Basic Service |
| 2     | E-Service     |
| 3     | Plus Service  |
| 4     | Total Service |

Because the target has multiple categories, a multinomial model is used. For a target with two distinct categories, such as yes/no, true/false, or churn/don't churn, a binomial model could be created instead. See [Telecommunications churn](https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_churn.html#tut_churn) for more information.
https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_classify_build.html?context=cdpaas&locale=en
Building the flow (SPSS Modeler)
# Building the flow

Figure 1. Example flow to classify customers using multinomial logistic regression
![Example flow to classify customers using multinomial logistic regression](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_classify.png)

1. Add a Data Asset node that points to telco.csv.
2. Add a Type node, double-click it to open its properties, and click Read Values. Make sure all measurement levels are set correctly. For example, most fields with values of `0.0` and `1.0` can be regarded as flags.

   Figure 2. Measurement levels
   ![Measurement levels](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_classify_measurement.png)

   Notice that `gender` is more correctly considered as a field with a set of two values, rather than a flag, so leave its measurement level as Nominal.
3. Set the role for the `custcat` field to Target. Leave the role for all other fields set to Input.
4. Because this example focuses on demographics, use a Filter node to include only the relevant fields: `region`, `age`, `marital`, `address`, `income`, `ed`, `employ`, `retire`, `gender`, `reside`, and `custcat`. The other fields are excluded for the purpose of this analysis. To filter them out, in the Filter node properties, click Add Columns and select the fields to exclude.

   Figure 3. Filtering on demographic fields
   ![Filtering on demographic fields](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_classify_filter.png)

   (Alternatively, you could change the role to None for these fields rather than excluding them, or select the fields you want to use in the modeling node.)
5. In the Logistic node properties, under MODEL SETTINGS, select the Stepwise method. Also select Multinomial, Main Effects, and Include constant in equation.

   Figure 4. Logistic node model settings
   ![Logistic node model settings](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_classify_logistic.png)
6. Under EXPERT OPTIONS, select Expert mode, expand the Output section, and select Classification table.

   Figure 5. Logistic node expert output options
   ![Logistic node expert output options](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_classify_output.png)
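If you want to approximate these steps outside of SPSS Modeler, here is a minimal Python sketch of the same preparation and model fit, assuming telco.csv has been downloaded locally and uses the column names listed above. scikit-learn's `LogisticRegression` fits a multinomial model for a multi-category target by default; it does not replicate the Stepwise field selection.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("telco.csv")

# Filter node analogue: keep only the demographic fields plus the target.
fields = ["region", "age", "marital", "address", "income", "ed",
          "employ", "retire", "gender", "reside", "custcat"]
df = df[fields]

# Type node analogue: custcat is the target; everything else is an input.
X = pd.get_dummies(df.drop(columns="custcat"))  # encodes any string-valued fields
y = df["custcat"]

model = LogisticRegression(max_iter=1000).fit(X, y)
print(model.score(X, y))  # training-set accuracy, akin to the classification table
```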
https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_classify_model.html?context=cdpaas&locale=en
Browsing the model (SPSS Modeler)
# Browsing the model

* Run the Logistic node to generate the model. Right-click the model nugget and select View Model.

Figure 1. Browsing the model results
![Browsing the model results](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_classify_model.png)

You can then explore the model information, feature (predictor) importance, and parameter estimates. Note that these results are based on the training data only. To assess how well the model generalizes to other data in the real world, you can use a Partition node to hold out a subset of records for testing and validation, as in the sketch below.
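A minimal Python sketch of the Partition node's holdout idea, under the same assumptions as the earlier sketch (a local telco.csv with the demographic columns listed previously):

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("telco.csv")
X = pd.get_dummies(df.drop(columns="custcat"))
y = df["custcat"]

# Partition node analogue: hold out 30% of records for testing.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("train accuracy:", model.score(X_train, y_train))
print("test accuracy:", model.score(X_test, y_test))  # the honest estimate
```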
https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_condition.html?context=cdpaas&locale=en
Condition monitoring (SPSS Modeler)
# Condition monitoring

This example concerns monitoring status information from a machine and the problem of recognizing and predicting fault states. The data is created from a fictitious simulation and consists of a number of concatenated series measured over time. Each record is a snapshot report on the machine in terms of the following:

* `Time`. An integer.
* `Power`. An integer.
* `Temperature`. An integer.
* `Pressure`. `0` if normal, `1` for a momentary pressure warning.
* `Uptime`. Time since last serviced.
* `Status`. Normally `0`, changes to an error code if an error occurs (`101`, `202`, or `303`).
* `Outcome`. The error code that appears in this time series, or `0` if no error occurs. (These codes are available only with the benefit of hindsight.)

This example uses the flow named Condition Monitoring, available in the example project. The data files are cond1n.csv and cond2n.csv.

For each time series, there's a series of records from a period of normal operation followed by a period leading to the fault, as shown in the following table:

| Time | Power | Temperature | Pressure | Uptime | Status | Outcome |
| ---- | ----- | ----------- | -------- | ------ | ------ | ------- |
| 0    | 1059  | 259         | 0        | 404    | 0      | 0       |
| 1    | 1059  | 259         | 0        | 404    | 0      | 0       |
| ...  |       |             |          |        |        |         |
| 51   | 1059  | 259         | 0        | 404    | 0      | 0       |
| 52   | 1059  | 259         | 0        | 404    | 0      | 0       |
| 53   | 1007  | 259         | 0        | 404    | 0      | 303     |
| 54   | 998   | 259         | 0        | 404    | 0      | 303     |
| ...  |       |             |          |        |        |         |
| 89   | 839   | 259         | 0        | 404    | 0      | 303     |
| 90   | 834   | 259         | 0        | 404    | 303    | 303     |
| 0    | 965   | 251         | 0        | 209    | 0      | 0       |
| 1    | 965   | 251         | 0        | 209    | 0      | 0       |
| ...  |       |             |          |        |        |         |
| 51   | 965   | 251         | 0        | 209    | 0      | 0       |
| 52   | 965   | 251         | 0        | 209    | 0      | 0       |
| 53   | 938   | 251         | 0        | 209    | 0      | 101     |
| 54   | 936   | 251         | 0        | 209    | 0      | 101     |
| ...  |       |             |          |        |        |         |
| 208  | 644   | 251         | 0        | 209    | 0      | 101     |
| 209  | 640   | 251         | 0        | 209    | 101    | 101     |

The following process is common to most data mining projects:

* Examine the data to determine which attributes may be relevant to the prediction or recognition of the states of interest.
* Retain those attributes (if already present), or derive and add them to the data, if necessary.
* Use the resultant data to train rules and neural nets.
* Test the trained systems using independent test data.
https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_condition_data.html?context=cdpaas&locale=en
Examining the data (SPSS Modeler)
# Examining the data

For the first part of the process, imagine you have a flow that plots a number of graphs. If the time series of temperature or power contains visible patterns, you could differentiate between impending error conditions, or possibly predict their occurrence. For both temperature and power, the flow plots the time series associated with the three different error codes on separate graphs, yielding six graphs. Select nodes separate the data associated with the different error codes.

The graphs clearly display patterns distinguishing 202 errors from 101 and 303 errors. The 202 errors show rising temperature and fluctuating power over time; the other errors don't. However, patterns distinguishing 101 from 303 errors are less clear. Both errors show even temperature and a drop in power, but the drop in power seems steeper for 303 errors.

Based on these graphs, it appears that the presence and rate of change for both temperature and power, as well as the presence and degree of fluctuation, are relevant to predicting and distinguishing faults. These attributes should therefore be added to the data before applying the learning systems.
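A minimal Python sketch of the same exploration, assuming cond1n.csv is available locally with the columns listed earlier; it plays the role of the Select and graph nodes by plotting temperature and power separately for each error code:

```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("cond1n.csv")

fig, axes = plt.subplots(2, 3, figsize=(12, 6), sharex=True)
for col, code in enumerate([101, 202, 303]):
    subset = df[df["Outcome"] == code]  # Select node analogue
    axes[0, col].plot(subset["Time"], subset["Temperature"], ".")
    axes[0, col].set_title(f"Temperature, error {code}")
    axes[1, col].plot(subset["Time"], subset["Power"], ".")
    axes[1, col].set_title(f"Power, error {code}")
axes[1, 0].set_xlabel("Time")
plt.tight_layout()
plt.show()
```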
https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_condition_learn.html?context=cdpaas&locale=en
Learning (SPSS Modeler)
# Learning

Running the flow trains the C5.0 rule and the neural network (net). The network may take some time to train, but training can be interrupted early to save a net that produces reasonable results. After the learning is complete, model nuggets are generated: one represents the neural net and one represents the rule.

Figure 1. Generated model nuggets
![Generated model nuggets](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_condition_nuggets.png)

These model nuggets enable us to test the system or export the results of the model. In this example, we will test the results of the model.
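As a rough Python analogue (scikit-learn does not include C5.0, so a CART decision tree and a small neural network stand in for the rule and the net), assuming a hypothetical file `cond1n_prepared.csv` containing the derived predictors plus the `Outcome` target, such as the table built in the data-preparation step:

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

df = pd.read_csv("cond1n_prepared.csv")  # hypothetical export of the prepared data

# Status would leak the answer at fault time, so exclude it along with the target.
X = pd.get_dummies(df.drop(columns=["Outcome", "Status"]))
y = df["Outcome"]

rule_model = DecisionTreeClassifier(min_samples_leaf=20).fit(X, y)  # C5.0-style analogue
net_model = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000).fit(X, y)  # net analogue
```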
https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_condition_prep.html?context=cdpaas&locale=en
Data preparation (SPSS Modeler)
# Data preparation

Based on the results of exploring the data, the following flow derives the relevant data and learns to predict faults. This example uses the flow named Condition Monitoring, available in the example project installed with the product. The data files are cond1n.csv and cond2n.csv.

1. On the My Projects screen, click Example Project.
2. Scroll down to the Modeler flows section, click View all, and select the Condition Monitoring flow.

Figure 1. Condition Monitoring example flow
![Condition Monitoring example flow](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_condition.png)

The flow uses a number of Derive nodes to prepare the data for modeling:

* Data Asset import node. Reads the data file cond1n.csv.
* Pressure Warnings (Derive). Counts the number of momentary pressure warnings. Reset when `Time` returns to 0.
* TempInc (Derive). Calculates the momentary rate of temperature change using `@DIFF1`.
* PowerInc (Derive). Calculates the momentary rate of power change using `@DIFF1`.
* PowerFlux (Derive). A flag, true if power varied in opposite directions in the last record and this one; that is, for a power peak or trough.
* PowerState (Derive). A state that starts as `Stable` and switches to `Fluctuating` when two successive power fluxes are detected. Switches back to `Stable` only when there hasn't been a power flux for five time intervals or when `Time` is reset.
* PowerChange (Derive). Average of `PowerInc` over the last five time intervals.
* TempChange (Derive). Average of `TempInc` over the last five time intervals.
* Discard Initial (Select). Discards the first record of each time series to avoid large (incorrect) jumps in `Power` and `Temperature` at series boundaries.
* Discard fields (Filter). Cuts the records down to `Uptime`, `Status`, `Outcome`, `Pressure Warnings`, `PowerState`, `PowerChange`, and `TempChange`.
* Type. Defines the role of `Outcome` as Target (the field to predict). Also defines the measurement level of `Outcome` as Nominal, `Pressure Warnings` as Continuous, and `PowerState` as Flag.
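A minimal pandas sketch of the same derivations, assuming each time series restarts at `Time == 0` (so series boundaries can be recovered by grouping); the stateful `PowerState` switching is omitted for brevity:

```python
import pandas as pd

df = pd.read_csv("cond1n.csv")

# Recover series boundaries: a new series starts whenever Time resets to 0.
df["series"] = (df["Time"] == 0).cumsum()

# Pressure Warnings analogue: running count of warnings, reset per series.
df["PressureWarnings"] = df.groupby("series")["Pressure"].cumsum()

# TempInc / PowerInc analogues: @DIFF1 is the difference from the previous record.
df["TempInc"] = df.groupby("series")["Temperature"].diff()
df["PowerInc"] = df.groupby("series")["Power"].diff()

# PowerFlux analogue: the last two power changes moved in opposite directions.
prev_inc = df.groupby("series")["PowerInc"].shift(1)
df["PowerFlux"] = (df["PowerInc"] * prev_inc) < 0

# PowerChange / TempChange analogues: mean change over the last five intervals.
df["PowerChange"] = df.groupby("series")["PowerInc"].transform(lambda s: s.rolling(5).mean())
df["TempChange"] = df.groupby("series")["TempInc"].transform(lambda s: s.rolling(5).mean())

# Discard Initial analogue: drop each series' first record, whose diffs are undefined.
df = df[df["Time"] != 0]
```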
https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_condition_test.html?context=cdpaas&locale=en
Testing (SPSS Modeler)
# Testing

Both of the generated model nuggets are connected to the Type node.

1. Reposition the nuggets as shown, so that the Type node connects to the neural net nugget, which connects to the C5.0 nugget.
2. Attach an Analysis node to the C5.0 nugget.
3. Edit the Data Asset node to use the file cond2n.csv (instead of cond1n.csv), which contains unseen test data.
4. Right-click the Analysis node and select Run. Doing so yields figures reflecting the accuracy of the trained network and rule.

Figure 1. Testing the trained network
![Testing the trained network](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_condition_analysis.png)
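The same idea in Python, building on the earlier sketches: `prepare()` is a hypothetical helper wrapping the pandas derivations from the data-preparation section, and `rule_model` and `net_model` are the classifiers fit in the learning section:

```python
from sklearn.metrics import accuracy_score

# prepare() is hypothetical: it should apply the same derivations used for training.
test = prepare("cond2n.csv")
X_test = test.drop(columns=["Outcome", "Status"])
y_test = test["Outcome"]

# Analysis node analogue: accuracy of each trained model on unseen data.
print("rule accuracy:", accuracy_score(y_test, rule_model.predict(X_test)))
print("net accuracy:", accuracy_score(y_test, net_model.predict(X_test)))
```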
https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_drug.html?context=cdpaas&locale=en
Drug treatment - exploratory graphs (SPSS Modeler)
# Drug treatment - exploratory graphs

In this example, imagine you're a medical researcher compiling data for a study. You've collected data about a set of patients, all of whom suffered from the same illness. During their course of treatment, each patient responded to one of five medications. Part of your job is to use data mining to find out which drug might be appropriate for a future patient with the same illness.

This example uses the flow named Drug Treatment - Exploratory Graphs, available in the example project. The data file is drug1n.csv.

Figure 1. Drug treatment example flow
![Drug treatment example flow](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_data.png)

The data fields used in this example are:

| Data field    | Description                                    |
| ------------- | ---------------------------------------------- |
| `Age`         | Age of patient (number)                        |
| `Sex`         | `M` or `F`                                     |
| `BP`          | Blood pressure: `HIGH`, `NORMAL`, or `LOW`     |
| `Cholesterol` | Blood cholesterol: `NORMAL` or `HIGH`          |
| `Na`          | Blood sodium concentration                     |
| `K`           | Blood potassium concentration                  |
| `Drug`        | Prescription drug to which a patient responded |
https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_drug_analysis.html?context=cdpaas&locale=en
Using an Analysis node (SPSS Modeler)
# Using an Analysis node

You can assess the accuracy of the model using an Analysis node. From the Palette, under Outputs, place an Analysis node on the canvas and attach it to the C5.0 model nugget. Then right-click the Analysis node and select Run.

Figure 1. Analysis node
![Analysis node](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_drug_analysis.png)

The Analysis node output shows that, with this artificial dataset, the model correctly predicted the choice of drug for every record in the dataset. With a real dataset you are unlikely to see 100% accuracy, but you can use the Analysis node to help determine whether the model is acceptably accurate for your particular application.

Figure 2. Analysis node output
![Analysis node output](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_drug_analysis_output.png)
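A rough Python analogue of the Analysis node, assuming a local drug1n.csv with the fields listed earlier; a CART decision tree stands in for the C5.0 nugget, and the derived sodium-to-potassium ratio replaces the raw `Na` and `K` fields as in the model-building section:

```python
import pandas as pd
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("drug1n.csv")
df["Na_to_K"] = df["Na"] / df["K"]          # derived ratio from the exploration
X = pd.get_dummies(df.drop(columns=["Drug", "Na", "K"]))
y = df["Drug"]

clf = DecisionTreeClassifier().fit(X, y)    # stand-in for the C5.0 model

# Analysis node analogue: compare predicted vs. actual drug on the same records.
pred = clf.predict(X)
print("accuracy:", accuracy_score(y, pred))
print(confusion_matrix(y, pred))            # coincidence matrix
```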
https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_drug_browse.html?context=cdpaas&locale=en
Browsing a model (SPSS Modeler)
# Browsing the model

When the C5.0 node runs, its model nugget is added to the flow. To browse the model, right-click the model nugget and choose View Model. The Tree Diagram displays the set of rules generated by the C5.0 node in a tree format. Now you can see the missing pieces of the puzzle.

For people with an `Na-to-K` ratio less than `14.829` and high blood pressure, age determines the choice of drug. For people with low blood pressure, cholesterol level seems to be the best predictor.

Figure 1. Tree diagram
![Tree diagram](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_drug_browse_tree.png)

You can hover over the nodes in the tree to see more details, such as the number of cases for each blood pressure category and the confidence percentage of cases.
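Continuing the sketch from the Analysis node section, a fitted scikit-learn tree can be printed as text rules, a rough analogue of browsing the tree diagram (`clf` and `X` are from that earlier sketch):

```python
from sklearn.tree import export_text

# Print the fitted tree's splits as nested rules.
print(export_text(clf, feature_names=list(X.columns)))
```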
https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_drug_build.html?context=cdpaas&locale=en
Building a model (SPSS Modeler)
# Building a model

By exploring and manipulating the data, you have been able to form some hypotheses. The ratio of sodium to potassium in the blood seems to affect the choice of drug, as does blood pressure. But you cannot fully explain all of the relationships yet. This is where modeling will likely provide some answers. In this case, you will try to fit the data using a rule-building model called C5.0.

Since you're using a derived field, `Na_to_K`, you can filter out the original fields, `Na` and `K`, so they're not used twice in the modeling algorithm. You can do this by using a Filter node.

1. Place a Filter node on the canvas and connect it to the Derive node.

   Figure 1. Filter node
   ![Filter node](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_drug_build_flow.png)
2. Double-click the Filter node to edit its properties. Name it Discard Fields.
3. For Mode, make sure Filter the selected fields is selected. Then select the `K` and `Na` fields. Click Save.
4. Place a Type node on the canvas and connect it to the Filter node. With the Type node, you can indicate the types of fields you're using and how they're used to predict the outcomes.

   Figure 2. Type node
   ![Type node](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_drug_build_flow2.png)
5. Double-click the Type node to edit its properties. Name it Define Types.
6. Set the role for the `Drug` field to Target, indicating that `Drug` is the field you want to predict. Leave the role for the other fields set to Input so they'll be used as predictors. Click Save.
7. To estimate the model, place a C5.0 node on the canvas and attach it to the end of the flow. Then click the Run button on the toolbar to run the flow.
https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_drug_data.html?context=cdpaas&locale=en
Reading in text data (SPSS Modeler)
# Reading in text data

1. You can read in delimited text data using a Data Asset import node. From the Palette, under Import, add a Data Asset node to your flow.
2. Double-click the node to display its properties and select the data file drug1n.csv.
3. Now that you've added the data file, you may want to glance at the values for some of the records. One way to do this is by building a flow that includes a Table node. An easier way is to simply right-click the Data Asset node you just added and select Preview.
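The pandas equivalent is a one-liner, assuming the file has been downloaded locally:

```python
import pandas as pd

# Data Asset + Preview analogue: read the delimited file and glance at records.
df = pd.read_csv("drug1n.csv")
print(df.head(10))
print(df.dtypes)
```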
https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_drug_distribution.html?context=cdpaas&locale=en
Creating a distribution chart (SPSS Modeler)
# Creating a distribution chart

During data mining, it is often useful to explore the data by creating visual summaries. Watson Studio offers many different types of charts to choose from, depending on the kind of data you want to summarize. For example, to find out what proportion of the patients responded to each drug, use a Distribution node.

Figure 1. Distribution node
![Distribution node](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_drug_distribution.png)

1. Under Graphs on the Palette, add a Distribution node to the flow and connect it to the drug1n.csv Data Asset node. Then double-click the node to edit its options.
2. Select Drug as the target field whose distribution you want to show. Then click Save, right-click the Distribution node, and select Run. A distribution chart is added to the Outputs panel.

The chart helps you see the shape of the data. It shows that patients responded to drug `Y` most often and to drugs `B` and `C` least often.

Alternatively, you can attach and run a Data Audit node for a quick glance at distributions and histograms for all fields at once. The Data Audit node is available under Outputs on the Palette.
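A minimal Python sketch of the same summary, assuming a local drug1n.csv:

```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("drug1n.csv")

# Distribution node analogue: proportion of patients responding to each drug.
counts = df["Drug"].value_counts(normalize=True)
counts.plot(kind="bar")
plt.ylabel("Proportion of patients")
plt.show()
```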
https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_drug_newfield.html?context=cdpaas&locale=en
Deriving a new field (SPSS Modeler)
# Deriving a new field

Figure 1. Scatterplot of drug distribution
![Scatterplot of drug distribution](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_drug_scatterplot.png)

Since the ratio of sodium to potassium seems to predict when to use drug `Y`, you can derive a field that contains the value of this ratio for each record. This field might be useful later when you build a model to predict when to use each of the five drugs.

1. To simplify your flow layout, start by deleting all the nodes except the drug1n.csv Data Asset node.
2. Place a Derive node on the canvas and connect it to the drug1n.csv Data Asset node.

   Figure 2. Derive node
   ![Derive node](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_drug_newfield_flow.png)
3. Double-click the Derive node to edit its properties.
4. Name the new field Na_to_K. Since you obtain the new field by dividing the sodium value by the potassium value, enter Na/K for the expression. You can also create an expression by clicking the calculator icon. This opens the Expression Builder, a way to interactively create expressions using built-in lists of functions, operands, and fields and their values.
5. You can check the distribution of your new field by attaching a Histogram node to the Derive node. In the Histogram node properties, specify `Na_to_K` as the field to be plotted and `Drug` as the color overlay field.

   Figure 3. Histogram node
   ![Histogram node](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_drug_newfield_histogram_flow.png)
6. Right-click the Histogram node and select Run. A histogram chart is added to the Outputs pane. Based on the chart, you can conclude that when the `Na_to_K` value is around 15 or more, drug `Y` is the drug of choice.

   Figure 4. Histogram chart output
   ![Histogram chart output](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_drug_histogram.png)
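The Derive and Histogram steps map directly onto pandas and matplotlib; a minimal sketch, assuming a local drug1n.csv (overlaid per-drug histograms approximate the color overlay):

```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("drug1n.csv")

# Derive node analogue: the sodium-to-potassium ratio.
df["Na_to_K"] = df["Na"] / df["K"]

# Histogram node analogue: Na_to_K distribution with Drug as a color overlay.
for drug, group in df.groupby("Drug"):
    plt.hist(group["Na_to_K"], bins=20, alpha=0.5, label=drug)
plt.xlabel("Na_to_K")
plt.legend(title="Drug")
plt.show()
```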
https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_drug_scatterplot.html?context=cdpaas&locale=en
Creating a scatterplot (SPSS Modeler)
# Creating a scatterplot

Now let's take a look at what factors might influence `Drug`, the target variable. As a researcher, you know that the concentrations of sodium and potassium in the blood are important factors. Since these are both numeric values, you can create a scatterplot of sodium versus potassium, using the drug categories as a color overlay.

Figure 1. Plot node
![Plot node](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_drug_scatterplot_flow.png)

1. Place a Plot node on the canvas and connect it to the drug1n.csv Data Asset node. Then double-click the Plot node to edit its properties.
2. Select `Na` as the X field, `K` as the Y field, and `Drug` as the Color (overlay) field. Click Save, then right-click the Plot node and select Run. A plot chart is added to the Outputs pane.

The plot clearly shows a threshold above which the correct drug is always drug `Y` and below which the correct drug is never drug `Y`. This threshold is a ratio: the ratio of sodium (`Na`) to potassium (`K`).

Figure 2. Scatterplot of drug distribution
![Scatterplot of drug distribution](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_drug_scatterplot.png)
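A minimal matplotlib sketch of the same chart, assuming a local drug1n.csv:

```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("drug1n.csv")

# Plot node analogue: Na on X, K on Y, one color per drug category.
for drug, group in df.groupby("Drug"):
    plt.scatter(group["Na"], group["K"], label=drug, s=12)
plt.xlabel("Na")
plt.ylabel("K")
plt.legend(title="Drug")
plt.show()
```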
https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_drug_viz.html?context=cdpaas&locale=en
Creating advanced visualizations (SPSS Modeler)
# Creating advanced visualizations

The previous three sections use different types of graph nodes. Another way to explore data is with the advanced visualizations feature. You can use the Charts node to launch the chart builder and create advanced charts to explore your data from different perspectives and identify patterns, connections, and relationships within your data.

Figure 1. Advanced visualizations
![Advanced visualizations](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_drug_viz.png)
https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_drug_web.html?context=cdpaas&locale=en
Creating a web chart (SPSS Modeler)
# Creating a web chart

Since many of the data fields are categorical, you can also try plotting a web chart, which maps associations between different categories.

Figure 1. Web node
![Web node](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_drug_web_flow.png)

1. Place a Web node on the canvas and connect it to the drug1n.csv Data Asset node. Then double-click the Web node to edit its properties.
2. Select the fields `BP` (for blood pressure) and `Drug`. Click Save, then right-click the Web node and select Run. A web chart is added to the Outputs pane.

Figure 2. Web graph of drugs vs. blood pressure
![Web graph of drugs vs. blood pressure](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_drug_web.png)

From the plot, it appears that drug `Y` is associated with all three levels of blood pressure. This is no surprise; you have already determined the situation in which drug `Y` is best. But if you ignore drug `Y` and focus on the other drugs, you can see that drugs `A` and `B` are associated with high blood pressure, drugs `C` and `X` are associated with low blood pressure, and normal blood pressure is associated with drug `X`. At this point, though, you still don't know how to choose between drugs `A` and `B`, or between drugs `C` and `X`, for a given patient. This is where modeling can help.
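A cross-tabulation gives a tabular approximation of the same associations; a minimal sketch, assuming a local drug1n.csv and that the drug Y category is coded as `drugY` (the exact label is an assumption here):

```python
import pandas as pd

df = pd.read_csv("drug1n.csv")

# Web node analogue: how often each BP level co-occurs with each drug.
print(pd.crosstab(df["BP"], df["Drug"]))

# Ignoring drug Y makes the remaining associations easier to read.
other = df[df["Drug"] != "drugY"]  # "drugY" is an assumed category label
print(pd.crosstab(other["BP"], other["Drug"]))
```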
https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_forecast.html?context=cdpaas&locale=en
Forecasting catalog sales (SPSS Modeler)
# Forecasting catalog sales

A catalog company is interested in forecasting monthly sales of its men's clothing line, based on 10 years of its sales data.

This example uses the flow Forecasting Catalog Sales, available in the example project. The data file is catalog_seasfac.csv.

We've seen in an earlier tutorial how you can let the Expert Modeler decide which is the most appropriate model for your time series. Now it's time to take a closer look at the two methods that are available when choosing a model yourself: exponential smoothing and ARIMA.

To help you decide on an appropriate model, it's a good idea to plot the time series first. Visual inspection of a time series can often be a powerful guide in helping you choose. In particular, you need to ask yourself:

* Does the series have an overall trend? If so, does the trend appear constant, or does it appear to be dying out with time?
* Does the series show seasonality? If so, do the seasonal fluctuations seem to grow with time, or do they appear constant over successive periods?
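One way to answer these questions in code is a classical seasonal decomposition, which separates the trend and seasonal components for visual inspection. A minimal sketch, assuming catalog_seasfac.csv is local and contains a monthly `date` column and the `men` sales series:

```python
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.tsa.seasonal import seasonal_decompose

df = pd.read_csv("catalog_seasfac.csv", parse_dates=["date"])
series = df.set_index("date")["men"]

# model="multiplicative" suits seasonal swings that grow with the trend.
result = seasonal_decompose(series, model="multiplicative", period=12)
result.plot()
plt.show()
```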
https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_forecast_arima.html?context=cdpaas&locale=en
ARIMA (SPSS Modeler)
# ARIMA # With the ARIMA procedure, you can create an autoregressive integrated moving\-average (ARIMA) model that is suitable for finely tuned modeling of time series\. ARIMA models provide more sophisticated methods for modeling trend and seasonal components than do exponential smoothing models, and they have the added benefit of being able to include predictor variables in the model\. Continuing the example of the catalog company that wants to develop a forecasting model, we have seen how the company has collected data on monthly sales of men's clothing along with several series that might be used to explain some of the variation in sales\. Possible predictors include the number of catalogs mailed and the number of pages in the catalog, the number of phone lines open for ordering, the amount spent on print advertising, and the number of customer service representatives\. Are any of these predictors useful for forecasting? Is a model with predictors really better than one without? Using the ARIMA procedure, we can create a forecasting model with predictors, and see if there's a significant difference in predictive ability over the exponential smoothing model with no predictors\. With the ARIMA method, you can fine\-tune the model by specifying orders of autoregression, differencing, and moving average, as well as seasonal counterparts to these components\. Determining the best values for these components manually can be a time\-consuming process involving a good deal of trial and error so, for this example, we'll let the Expert Modeler choose an ARIMA model for us\. We'll try to build a better model by treating some of the other variables in the dataset as predictor variables\. The ones that seem most useful to include as predictors are the number of catalogs mailed (`mail`), the number of pages in the catalog (`page`), the number of phone lines open for ordering (`phone`), the amount spent on print advertising (`print`), and the number of customer service representatives (`service`)\. <!-- <ol> --> 1. Double\-click the Type node to open its properties\. 2. Set the role for `mail`, `page`, `phone`, `print`, and `service` to Input\. 3. Ensure that the role for `men` is set to Target and that all the remaining fields are set to None\. 4. Click Save\. 5. Double\-click the Time Series node\. 6. Under BUILD OPTIONS \- GENERAL, select Expert Modeler for the method\. 7. Select the options ARIMA models only and Expert Modeler considers seasonal models\. Figure 1. Choosing only ARIMA models ![Choosing only ARIMA models](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_forecast_arima.png) 8. Click Save and run the flow\. 9. Right\-click the model nugget and select View Model\. Click men and then click Model information\. Notice how the Expert Modeler has chosen only two of the five specified predictors as being significant to the model\. Figure 2. Expert Modeler chooses two predictors ![Expert Modeler chooses two predictors](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_forecast_arima_predictors.png) 10. Open the latest chart output\. Figure 3. ARIMA model with predictors specified ![ARIMA model with predictors specified](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_forecast_arima_chart.png) This model improves on the previous one by capturing the large downward spike as well, making it the best fit so far. We could try refining the model even further, but any improvements from this point on are likely to be minimal. 
We've established that the ARIMA model with predictors is preferable, so let's use the model we have just built. For the purposes of this example, we'll forecast sales for the coming year. 11. Double\-click the Time Series node\. 12. Under MODEL OPTIONS, select the option Extend records into the future and set its value to 12\. 13. Select the Compute future values of inputs option\. 14. Click Save and run the flow\. The forecast looks good\. As expected, there's a return to normal sales levels following the December peak, and a steady upward trend in the second half of the year, with sales in general better than those for the previous year\. Figure 4. Sales forecast extended by 12 months ![Sales forecast extended by 12 months](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_forecast_arima_finalchart.png) <!-- </ol> --> <!-- </article "role="article" "> -->
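If you want to reproduce the idea behind this flow outside of SPSS Modeler, the sketch below uses the `SARIMAX` class from `statsmodels`, which implements ARIMA with exogenous predictors. The ARIMA order and the use of `mail` and `phone` as the two surviving predictors are assumptions for illustration only; in the tutorial, the Expert Modeler chooses both automatically.

```python
# A minimal ARIMA-with-predictors sketch, assuming a monthly
# catalog_seasfac.csv with the column names used in this tutorial.
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

df = pd.read_csv("catalog_seasfac.csv")
target = df["men"]
exog = df[["mail", "phone"]]  # assumed stand-ins for the two chosen predictors

# An illustrative seasonal order; the Expert Modeler selects its own.
fit = SARIMAX(target, exog=exog,
              order=(1, 1, 1), seasonal_order=(0, 1, 1, 12)).fit(disp=False)

# Extending 12 records into the future needs future predictor values;
# repeating the last observed year roughly mirrors Modeler's
# "Compute future values of inputs" option.
future_exog = exog.tail(12).reset_index(drop=True)
print(fit.forecast(steps=12, exog=future_exog))
```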
05F38627C9EC286CA7C379A31AA27392A65411AB
https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_forecast_data.html?context=cdpaas&locale=en
Examining the data (SPSS Modeler)
Examining the data The series shows a general upward trend; that is, the series values tend to increase over time. The upward trend is seemingly constant, which indicates a linear trend. Figure 1. Actual sales of men's clothing ![Actual sales of men's clothing](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_forecast_series.png) The series also has a distinct seasonal pattern with annual highs in December, as indicated by the vertical lines on the graph. The seasonal variations appear to grow with the upward series trend, which suggests multiplicative rather than additive seasonality. Now that you've identified the characteristics of the series, you're ready to try modeling it. The exponential smoothing method is useful for forecasting series that exhibit trend, seasonality, or both. As we've seen, this data exhibits both characteristics.
# Examining the data # The series shows a general upward trend; that is, the series values tend to increase over time\. The upward trend is seemingly constant, which indicates a linear trend\. Figure 1\. Actual sales of men's clothing ![Actual sales of men's clothing](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_forecast_series.png) The series also has a distinct seasonal pattern with annual highs in December, as indicated by the vertical lines on the graph\. The seasonal variations appear to grow with the upward series trend, which suggests multiplicative rather than additive seasonality\. Now that you've identified the characteristics of the series, you're ready to try modeling it\. The exponential smoothing method is useful for forecasting series that exhibit trend, seasonality, or both\. As we've seen, this data exhibits both characteristics\. <!-- </article "role="article" "> -->
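If you prefer to check the multiplicative-seasonality hypothesis numerically rather than by eye, a classical decomposition is a quick test. This is a hypothetical companion sketch, assuming the tutorial's catalog_seasfac.csv has a parseable `date` column:

```python
# Decompose the series both ways; with genuinely multiplicative
# seasonality, the multiplicative residual should no longer grow
# along with the trend.
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

df = pd.read_csv("catalog_seasfac.csv", parse_dates=["date"], index_col="date")
series = df["men"]

additive = seasonal_decompose(series, model="additive", period=12)
multiplicative = seasonal_decompose(series, model="multiplicative", period=12)
multiplicative.plot()  # inspect the trend, seasonal, and residual panels
```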
2ED4D7860687B2EF6F85FF81B6AF4CFD2C6EA839
https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_forecast_flow.html?context=cdpaas&locale=en
Creating the flow (SPSS Modeler)
Creating the flow 1. Create a new flow and add a Data Asset node that points to catalog_seasfac.csv. 2. Connect a Type node to the Data Asset node and double-click it to open its properties. 3. Click Read Values. For the men field, set the role to Target. Figure 1. Specifying the target field ![Specifying the target field](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_forecast_fields.png) 4. Set the role for all other fields to None and click Save. 5. Attach a Time Plot graph node to the Type node and double-click it. Figure 2. Plotting the time series ![Plotting the time series](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_forecast_plot.png) 6. For the Plot, add the field men to the Series list. 7. Select Use custom x axis field label and select date. 8. Deselect the Normalize option and click Save. 9. Run the flow.
# Creating the flow # <!-- <ol> --> 1. Create a new flow and add a Data Asset node that points to catalog\_seasfac\.csv\. 2. Connect a Type node to the Data Asset node and double\-click it to open its properties\. 3. Click Read Values\. For the `men` field, set the role to Target\. Figure 1. Specifying the target field ![Specifying the target field](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_forecast_fields.png) 4. Set the role for all other fields to None and click Save\. 5. Attach a Time Plot graph node to the Type node and double\-click it\. Figure 2. Plotting the time series ![Plotting the time series](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_forecast_plot.png) 6. For the Plot, add the field `men` to the Series list\. 7. Select Use custom x axis field label and select `date`\. 8. Deselect the Normalize option and click Save\. 9. Run the flow\. <!-- </ol> --> <!-- </article "role="article" "> -->
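As a rough stand-in for the Time Plot node's output (a sketch only, assuming the `date` column in catalog_seasfac.csv parses as a date), the same chart can be drawn with pandas:

```python
import pandas as pd

df = pd.read_csv("catalog_seasfac.csv", parse_dates=["date"])
ax = df.plot(x="date", y="men", title="Actual sales of men's clothing")
ax.set_ylabel("men")  # raw values, matching the deselected Normalize option
```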
7394B97DA7B0846274940F439675051521A7DD7C
https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_forecast_smoothing.html?context=cdpaas&locale=en
Exponential smoothing (SPSS Modeler)
Exponential smoothing Building a best-fit exponential smoothing model involves determining the model type (whether the model needs to include trend, seasonality, or both) and then obtaining the best-fit parameters for the chosen model. The plot of men's clothing sales over time suggested a model with both a linear trend component and a multiplicative seasonality component. This implies a Winters' model. First, however, we will explore a simple model (no trend and no seasonality) and then a Holt's model (incorporates linear trend but no seasonality). This will give you practice in identifying when a model is not a good fit to the data, an essential skill in successful model building. We'll start with a simple exponential smoothing model. 1. Add a Time Series node and attach it to the Type node. Double-click the node to edit its properties. 2. Under OBSERVATIONS AND TIME INTERVAL, select date as the time/date field. 3. Select Months as the time interval. Figure 1. Setting the time interval ![Setting the time interval](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_forecast_timedate.png) 4. Under BUILD OPTIONS - GENERAL, select Exponential Smoothing for the Method. 5. Set Model Type to Simple. Click Save. Figure 2. Setting the method ![Setting the method](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_forecast_smoothing.png) 6. Run the flow to create the model nugget. 7. Attach a Time Plot node to the model nugget. 8. Under Plot, add the fields men and $TS-men to the Series list. 9. Select the option Use custom x axis field label and select the date field. 10. Deselect the Display series in separate panel and Normalize options. Click Save. Figure 3. Setting the plot options ![Setting the plot options](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_forecast_smoothing_plot.png) 11. Run the flow and then open the output. The men plot represents the actual data, while $TS-men denotes the time series model. Figure 4. Simple exponential smoothing model ![Simple exponential smoothing model](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_forecast_smoothing_chart.png) Although the simple model does, in fact, exhibit a gradual (and rather ponderous) upward trend, it takes no account of seasonality. You can safely reject this model. Now let's try a Holt's linear model. This should at least model the trend better than the simple model, although it too is unlikely to capture the seasonality. 12. Double-click the Time Series node. Under BUILD OPTIONS - GENERAL, with Exponential Smoothing still selected as the method, select HoltsLinearTrend as the model type. 13. Click Save and run the flow again to regenerate the model nugget. Open the output. Figure 5. Holt's linear trend model ![Holt's linear trend model](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_forecast_smoothing_holtchart.png) Holt's model displays a smoother upward trend than the simple model, but it still takes no account of the seasonality, so you can disregard this one too. You may recall that the initial plot of men's clothing sales over time suggested a model incorporating a linear trend and multiplicative seasonality. A more suitable candidate, therefore, might be Winters' model. 14. Double-click the Time Series node again to edit its properties. 15. Under BUILD OPTIONS - GENERAL, with Exponential Smoothing still selected as the method, select WintersMultiplicative as the model type. 16. Run the flow. Figure 6. 
Winters' multiplicative model ![Winters' multiplicative model](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_forecast_smoothing_winterschart.png) This looks better. The model reflects both the trend and the seasonality of the data. The dataset covers a period of 10 years and includes 10 seasonal peaks occurring in December of each year. The 10 peaks present in the predicted results match up well with the 10 annual peaks in the real data. However, the results also underscore the limitations of the Exponential Smoothing procedure. Looking at both the upward and downward spikes, there is significant structure that's not accounted for. If you're primarily interested in modeling a long-term trend with seasonal variation, then exponential smoothing may be a good choice. To model a more complex structure such as this one, we need to consider using the ARIMA procedure.
# Exponential smoothing # Building a best\-fit exponential smoothing model involves determining the model type (whether the model needs to include trend, seasonality, or both) and then obtaining the best\-fit parameters for the chosen model\. The plot of men's clothing sales over time suggested a model with both a linear trend component and a multiplicative seasonality component\. This implies a Winters' model\. First, however, we will explore a simple model (no trend and no seasonality) and then a Holt's model (incorporates linear trend but no seasonality)\. This will give you practice in identifying when a model is not a good fit to the data, an essential skill in successful model building\. We'll start with a simple exponential smoothing model\. <!-- <ol> --> 1. Add a Time Series node and attach it to the Type node\. Double\-click the node to edit its properties\. 2. Under OBSERVATIONS AND TIME INTERVAL, select `date` as the time/date field\. 3. Select Months as the time interval\. Figure 1. Setting the time interval ![Setting the time interval](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_forecast_timedate.png) 4. Under BUILD OPTIONS \- GENERAL, select Exponential Smoothing for the Method\. 5. Set Model Type to Simple\. Click Save\. Figure 2. Setting the method ![Setting the method](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_forecast_smoothing.png) 6. Run the flow to create the model nugget\. 7. Attach a Time Plot node to the model nugget\. 8. Under Plot, add the fields `men` and `$TS-men` to the Series list\. 9. Select the option Use custom x axis field label and select the `date` field\. 10. Deselect the Display series in separate panel and Normalize options\. Click Save\. Figure 3. Setting the plot options ![Setting the plot options](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_forecast_smoothing_plot.png) 11. Run the flow and then open the output\. The men plot represents the actual data, while $TS\-men denotes the time series model\. Figure 4. Simple exponential smoothing model ![Simple exponential smoothing model](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_forecast_smoothing_chart.png) Although the simple model does, in fact, exhibit a gradual (and rather ponderous) upward trend, it takes no account of seasonality. You can safely reject this model. Now let's try a Holt's linear model. This should at least model the trend better than the simple model, although it too is unlikely to capture the seasonality. 12. Double\-click the Time Series node\. Under BUILD OPTIONS \- GENERAL, with Exponential Smoothing still selected as the method, select HoltsLinearTrend as the model type\. 13. Click Save and run the flow again to regenerate the model nugget\. Open the output\. Figure 5. Holt's linear trend model ![Holt's linear trend model](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_forecast_smoothing_holtchart.png) Holt's model displays a smoother upward trend than the simple model, but it still takes no account of the seasonality, so you can disregard this one too. You may recall that the initial plot of men's clothing sales over time suggested a model incorporating a linear trend and multiplicative seasonality. A more suitable candidate, therefore, might be Winters' model. 14. Double\-click the Time Series node again to edit its properties\. 15. Under BUILD OPTIONS \- GENERAL, with Exponential Smoothing still selected as the method, select WintersMultiplicative as the model type\. 16. Run the flow\. 
Figure 6. Winters' multiplicative model ![Winters' multiplicative model](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_forecast_smoothing_winterschart.png) This looks better. The model reflects both the trend and the seasonality of the data. The dataset covers a period of 10 years and includes 10 seasonal peaks occurring in December of each year. The 10 peaks present in the predicted results match up well with the 10 annual peaks in the real data. However, the results also underscore the limitations of the Exponential Smoothing procedure. Looking at both the upward and downward spikes, there is significant structure that's not accounted for. If you're primarily interested in modeling a long-term trend with seasonal variation, then exponential smoothing may be a good choice. To model a more complex structure such as this one, we need to consider using the ARIMA procedure. <!-- </ol> --> <!-- </article "role="article" "> -->
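The three candidates tried in this section map directly onto the Holt-Winters family in `statsmodels`. The following sketch is illustrative only and assumes the same monthly series; comparing in-sample error should single out the Winters' multiplicative variant, mirroring the visual comparison above.

```python
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

df = pd.read_csv("catalog_seasfac.csv", parse_dates=["date"], index_col="date")
series = df["men"]

candidates = {
    "simple": ExponentialSmoothing(series).fit(),
    "holt": ExponentialSmoothing(series, trend="add").fit(),
    "winters": ExponentialSmoothing(series, trend="add", seasonal="mul",
                                    seasonal_periods=12).fit(),
}
for name, fit in candidates.items():
    print(f"{name}: SSE = {fit.sse:.0f}")  # expect Winters' to fit best
```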
5AE2F0D8BD974C7393BC5FFA773B90FD0A2229B0
https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_forecast_summary.html?context=cdpaas&locale=en
Summary (SPSS Modeler)
Summary You've successfully modeled a complex time series, incorporating not only an upward trend but also seasonal and other variations. You've also seen how, through trial and error, you can get closer and closer to an accurate model, which you can then use to forecast future sales. In practice, you would need to reapply the model as your actual sales data are updated—for example, every month or every quarter—and produce updated forecasts.
# Summary # You've successfully modeled a complex time series, incorporating not only an upward trend but also seasonal and other variations\. You've also seen how, through trial and error, you can get closer and closer to an accurate model, which you can then use to forecast future sales\. In practice, you would need to reapply the model as your actual sales data are updated—for example, every month or every quarter—and produce updated forecasts\. <!-- </article "role="article" "> -->
02244F39BE9A15FA55C94C9F2775606247969A61
https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_intro.html?context=cdpaas&locale=en
Introduction to modeling (SPSS Modeler)
Introduction to modeling A model is a set of rules, formulas, or equations that can be used to predict an outcome based on a set of input fields or variables. For example, a financial institution might use a model to predict whether loan applicants are likely to be good or bad risks, based on information that is already known about past applicants. Video disclaimer: Some minor steps and graphical elements in these videos might differ from your platform. [https://video.ibm.com/embed/recorded/131116287](https://video.ibm.com/embed/recorded/131116287) The ability to predict an outcome is the central goal of predictive analytics, and understanding the modeling process is the key to using flows in Watson Studio. Figure 1. A decision tree model ![A decision tree model](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/spss-tree-diagram-Jun2023.png) This example uses a decision tree model, which classifies records (and predicts a response) using a series of decision rules. For example: IF income = Medium AND cards < 5 THEN -> 'Good' While this example uses a CHAID (Chi-squared Automatic Interaction Detection) model, it is intended as a general introduction, and most of the concepts apply broadly to other modeling types in Watson Studio. To understand any model, you first need to understand the data that goes into it. The data in this example contains information about the customers of a bank. The following fields are used: Field name Description Credit_rating Credit rating: 0=Bad, 1=Good, 9=missing values Age Age in years Income Income level: 1=Low, 2=Medium, 3=High Credit_cards Number of credit cards held: 1=Less than five, 2=Five or more Education Level of education: 1=High school, 2=College Car_loans Number of car loans taken out: 1=None or one, 2=More than two The bank maintains a database of historical information on customers who have taken out loans with the bank, including whether or not they repaid the loans (Credit rating = Good) or defaulted (Credit rating = Bad). Using this existing data, the bank wants to build a model that will enable them to predict how likely future loan applicants are to default on the loan. Using a decision tree model, you can analyze the characteristics of the two groups of customers and predict the likelihood of loan defaults. This example uses the flow named Introduction to Modeling, available in the example project. The data file is tree_credit.csv. Let's take a look at the flow. 1. Open the Example Project. 2. Scroll down to the Modeler flows section, click View all, and select the Introduction to Modeling flow.
# Introduction to modeling # A model is a set of rules, formulas, or equations that can be used to predict an outcome based on a set of input fields or variables\. For example, a financial institution might use a model to predict whether loan applicants are likely to be good or bad risks, based on information that is already known about past applicants\. Video disclaimer: Some minor steps and graphical elements in these videos might differ from your platform\. [https://video\.ibm\.com/embed/recorded/131116287](https://video.ibm.com/embed/recorded/131116287) The ability to predict an outcome is the central goal of predictive analytics, and understanding the modeling process is the key to using flows in Watson Studio\. Figure 1\. A decision tree model ![A decision tree model](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/spss-tree-diagram-Jun2023.png) This example uses a decision tree model, which classifies records (and predicts a response) using a series of decision rules\. For example: IF income = Medium AND cards < 5 THEN -> 'Good' While this example uses a CHAID (Chi\-squared Automatic Interaction Detection) model, it is intended as a general introduction, and most of the concepts apply broadly to other modeling types in Watson Studio\. To understand any model, you first need to understand the data that goes into it\. The data in this example contains information about the customers of a bank\. The following fields are used: <!-- <table "summary="" class="defaultstyle" "> --> | Field name | Description | | -------------- | ------------------------------------------------------------- | | Credit\_rating | Credit rating: 0=Bad, 1=Good, 9=missing values | | Age | Age in years | | Income | Income level: 1=Low, 2=Medium, 3=High | | Credit\_cards | Number of credit cards held: 1=Less than five, 2=Five or more | | Education | Level of education: 1=High school, 2=College | | Car\_loans | Number of car loans taken out: 1=None or one, 2=More than two | <!-- </table "summary="" class="defaultstyle" "> --> The bank maintains a database of historical information on customers who have taken out loans with the bank, including whether or not they repaid the loans (Credit rating = Good) or defaulted (Credit rating = Bad)\. Using this existing data, the bank wants to build a model that will enable them to predict how likely future loan applicants are to default on the loan\. Using a decision tree model, you can analyze the characteristics of the two groups of customers and predict the likelihood of loan defaults\. This example uses the flow named Introduction to Modeling, available in the example project\. The data file is tree\_credit\.csv\. Let's take a look at the flow\. <!-- <ol> --> 1. Open the Example Project\. 2. Scroll down to the Modeler flows section, click View all, and select the Introduction to Modeling flow\. <!-- </ol> --> <!-- </article "role="article" "> -->
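To make the quoted rule concrete, here it is written out as code. This is purely illustrative: a full CHAID model is a set of many such rules, and the key names are placeholders rather than the exact identifiers the rule set uses.

```python
# One decision rule from a rule set, applied to a single record.
def apply_rule(record):
    if record["income"] == "Medium" and record["cards"] < 5:
        return "Good"
    return None  # no match; the record falls through to the next rule

print(apply_rule({"income": "Medium", "cards": 3}))  # -> Good
```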
A3022FF9DB2732F0AB3091884B428763D3879FD2
https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_intro_build.html?context=cdpaas&locale=en
Building the flow (SPSS Modeler)
Building the flow Figure 1. Modeling flow ![Modeling flow](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_intro_build_flow.png) To build a flow that will create a model, we need at least three elements: * A Data Asset node that reads in data from an external source, in this case a .csv data file * An Import or Type node that specifies field properties, such as measurement level (the type of data that the field contains), and the role of each field as a target or input in modeling * A modeling node that generates a model nugget when the flow runs In this example, we're using a CHAID modeling node. CHAID, or Chi-squared Automatic Interaction Detection, is a classification method that builds decision trees by using chi-square statistics to work out the best places to make the splits in the decision tree. If measurement levels are specified in the source node, the separate Type node can be eliminated. Functionally, the result is the same. This flow also has Table and Analysis nodes that will be used to view the scoring results after the model nugget has been created and added to the flow. The Data Asset import node reads data in from the sample tree_credit.csv data file. The Type node specifies the measurement level for each field. The measurement level is a category that indicates the type of data in the field. Our source data file uses three different measurement levels: A Continuous field (such as the Age field) contains continuous numeric values, while a Nominal field (such as the Credit rating field) has two or more distinct values, for example Bad, Good, or No credit history. An Ordinal field (such as the Income level field) describes data with multiple distinct values that have an inherent order—in this case Low, Medium, and High. Figure 2. Setting the target and input fields with the Type node ![Setting the target and input fields with the Type node](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/intro-build.jpg) For each field, the Type node also specifies a role to indicate the part that each field plays in modeling. The role is set to Target for the field Credit rating, which is the field that indicates whether or not a given customer defaulted on the loan. This is the target, or the field for which we want to predict the value. Role is set to Input for the other fields. Input fields are sometimes known as predictors, or fields whose values are used by the modeling algorithm to predict the value of the target field. The CHAID modeling node generates the model. In the node's properties, under FIELDS, the option Use custom field roles is available. We could select this option and change the field roles, but for this example we'll use the default targets and inputs as specified in the Type node. 1. Double-click the CHAID node (named Creditrating). The node properties are displayed. Figure 3. CHAID modeling node properties ![CHAID modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/spss-fields.png) Here there are several options where we could specify the kind of model we want to build. We want a brand-new model, so under OBJECTIVES we'll use the default option Build new model. We also just want a single, standard decision tree model without any enhancements, so we'll also use the default objective option Create a standard model. Figure 4. 
CHAID modeling node objectives ![CHAID modeling node objectives](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/spss-objectives.png) For this example, we want to keep the tree fairly simple, so we'll limit the tree growth by raising the minimum number of cases for parent and child nodes. 2. Under STOPPING RULES, select Use absolute value. 3. Set Minimum records in parent branch to 400. 4. Set Minimum records in child branch to 200. Figure 5. Setting the stopping criteria for decision tree building ![Setting the stopping criteria for decision tree building](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_intro_stopping.png) We can use all the other default options for this example, so click Save and then click the Run button on the toolbar to create the model. (Alternatively, right-click the CHAID node and choose Run from the context menu.)
# Building the flow # Figure 1\. Modeling flow ![Modeling flow](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_intro_build_flow.png) To build a flow that will create a model, we need at least three elements: <!-- <ul> --> * A Data Asset node that reads in data from an external source, in this case a \.csv data file * An Import or Type node that specifies field properties, such as measurement level (the type of data that the field contains), and the role of each field as a target or input in modeling * A modeling node that generates a model nugget when the flow runs <!-- </ul> --> In this example, we're using a CHAID modeling node\. CHAID, or Chi\-squared Automatic Interaction Detection, is a classification method that builds decision trees by using chi\-square statistics to work out the best places to make the splits in the decision tree\. If measurement levels are specified in the source node, the separate Type node can be eliminated\. Functionally, the result is the same\. This flow also has Table and Analysis nodes that will be used to view the scoring results after the model nugget has been created and added to the flow\. The Data Asset import node reads data in from the sample tree\_credit\.csv data file\. The Type node specifies the measurement level for each field\. The measurement level is a category that indicates the type of data in the field\. Our source data file uses three different measurement levels: A Continuous field (such as the `Age` field) contains continuous numeric values, while a Nominal field (such as the `Credit rating` field) has two or more distinct values, for example `Bad`, `Good`, or `No credit history`\. An Ordinal field (such as the `Income level` field) describes data with multiple distinct values that have an inherent order—in this case `Low`, `Medium`, and `High`\. Figure 2\. Setting the target and input fields with the Type node ![Setting the target and input fields with the Type node](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/intro-build.jpg) For each field, the Type node also specifies a role to indicate the part that each field plays in modeling\. The role is set to `Target` for the field `Credit rating`, which is the field that indicates whether or not a given customer defaulted on the loan\. This is the `target`, or the field for which we want to predict the value\. Role is set to `Input` for the other fields\. Input fields are sometimes known as `predictors`, or fields whose values are used by the modeling algorithm to predict the value of the target field\. The CHAID modeling node generates the model\. In the node's properties, under FIELDS, the option Use custom field roles is available\. We could select this option and change the field roles, but for this example we'll use the default targets and inputs as specified in the Type node\. <!-- <ol> --> 1. Double\-click the CHAID node (named Creditrating)\. The node properties are displayed\. Figure 3. CHAID modeling node properties ![CHAID modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/spss-fields.png) Here there are several options where we could specify the kind of model we want to build. We want a brand-new model, so under OBJECTIVES we'll use the default option Build new model. We also just want a single, standard decision tree model without any enhancements, so we'll also use the default objective option Create a standard model. Figure 4. 
CHAID modeling node objectives ![CHAID modeling node objectives](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/spss-objectives.png) For this example, we want to keep the tree fairly simple, so we'll limit the tree growth by raising the minimum number of cases for parent and child nodes. 2. Under STOPPING RULES, select Use absolute value\. 3. Set Minimum records in parent branch to 400\. 4. Set Minimum records in child branch to 200\. <!-- </ol> --> Figure 5\. Setting the stopping criteria for decision tree building ![Setting the stopping criteria for decision tree building](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_intro_stopping.png) We can use all the other default options for this example, so click Save and then click the Run button on the toolbar to create the model\. (Alternatively, right\-click the CHAID node and choose Run from the context menu\.) <!-- </article "role="article" "> -->
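For readers curious about the statistic itself, the sketch below shows a simplified version of what CHAID's split selection does, leaving aside its category merging and Bonferroni adjustment. Column names are assumed from the field table in the introduction.

```python
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.read_csv("tree_credit.csv")

# Cross-tabulate each candidate input against the target; the input
# with the strongest association becomes the split. In this tutorial,
# that turns out to be income level.
for field in ["Income", "Credit_cards", "Education", "Car_loans"]:
    table = pd.crosstab(df[field], df["Credit_rating"])
    chi2, p, dof, _ = chi2_contingency(table)
    print(f"{field}: chi-square = {chi2:.1f}, p = {p:.3g}")
```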
9DEAC0E5B403BAEDEABE9C76A295651289E6416C
https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_intro_evaluate.html?context=cdpaas&locale=en
Evaluating the model (SPSS Modeler)
Evaluating the model We've been browsing the model to understand how scoring works. But to evaluate how accurately it works, we need to score some records and compare the responses predicted by the model to the actual results. We're going to score the same records that were used to estimate the model, allowing us to compare the observed and predicted responses. Figure 1. Attaching the model nugget to output nodes for model evaluation ![Attaching the model nugget to output nodes for model evaluation](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_intro_attach.png) 1. To see the scores or predictions, attach the Table node to the model nugget and then right-click the Table node and select Run. A table will be generated and added to the Outputs panel. Double-click it to open it. The table displays the predicted scores in a field named $R-Credit rating, which was created by the model. We can compare these values to the original Credit rating field that contains the actual responses. By convention, the names of the fields generated during scoring are based on the target field, but with a standard prefix. Prefixes $G and $GE are generated by the Generalized Linear Model, $R is the prefix used for the prediction generated by the CHAID model in this case, $RC is for confidence values, $X is typically generated by using an ensemble, and $XR, $XS, and $XF are used as prefixes in cases where the target field is a Continuous, Set (categorical), or Flag field, respectively. Different model types use different sets of prefixes. A confidence value is the model's own estimation, on a scale from 0.0 to 1.0, of how accurate each predicted value is. Figure 2. Table showing generated scores and confidence values ![Table showing generated scores and confidence values](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/spss-eval-table.png) As expected, the predicted value matches the actual responses for many records but not all. The reason for this is that each CHAID terminal node has a mix of responses. The prediction matches the most common one, but will be wrong for all the others in that node. (Recall the 18% minority of low-income customers who did not default.) To avoid this, we could continue splitting the tree into smaller and smaller branches, until every node was 100% pure—all Good or Bad with no mixed responses. But such a model would be extremely complicated and would probably not generalize well to other datasets. To find out exactly how many predictions are correct, we could read through the table and tally the number of records where the value of the predicted field $R-Credit rating matches the value of Credit rating. Fortunately, there's a much easier way; we can use an Analysis node, which does this automatically. 2. Connect the model nugget to the Analysis node. 3. Right-click the Analysis node and select Run. An Analysis entry will be added to the Outputs panel. Double-click it to open it. Figure 3. Attaching an Analysis node ![Attaching an Analysis node](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_intro_attach.png) The analysis shows that for 1960 out of 2464 records—over 79%—the value predicted by the model matched the actual response. Figure 4. 
Analysis results comparing observed and predicted responses ![Analysis results comparing observed and predicted responses](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_intro_analysis.png) This result is limited by the fact that the records being scored are the same ones used to estimate the model. In a real situation, you could use a Partition node to split the data into separate samples for training and evaluation. By using one sample partition to generate the model and another sample to test it, you can get a much better indication of how well it will generalize to other datasets. The Analysis node allows us to test the model against records for which we already know the actual result. The next stage illustrates how we can use the model to score records for which we don't know the outcome. For example, this might include people who are not currently customers of the bank, but who are prospective targets for a promotional mailing.
# Evaluating the model # We've been browsing the model to understand how scoring works\. But to evaluate how accurately it works, we need to score some records and compare the responses predicted by the model to the actual results\. We're going to score the same records that were used to estimate the model, allowing us to compare the observed and predicted responses\. Figure 1\. Attaching the model nugget to output nodes for model evaluation ![Attaching the model nugget to output nodes for model evaluation](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_intro_attach.png) <!-- <ol> --> 1. To see the scores or predictions, attach the Table node to the model nugget and then right\-click the Table node and select Run\. A table will be generated and added to the Outputs panel\. Double\-click it to open it\. The table displays the predicted scores in a field named `$R-Credit rating`, which was created by the model. We can compare these values to the original `Credit rating` field that contains the actual responses. By convention, the names of the fields generated during scoring are based on the target field, but with a standard prefix. Prefixes `$G` and `$GE` are generated by the Generalized Linear Model, `$R` is the prefix used for the prediction generated by the CHAID model in this case, `$RC` is for confidence values, `$X` is typically generated by using an ensemble, and `$XR`, `$XS`, and `$XF` are used as prefixes in cases where the target field is a Continuous, Set (categorical), or Flag field, respectively. Different model types use different sets of prefixes. A confidence value is the model's own estimation, on a scale from 0.0 to 1.0, of how accurate each predicted value is. Figure 2. Table showing generated scores and confidence values ![Table showing generated scores and confidence values](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/spss-eval-table.png) As expected, the predicted value matches the actual responses for many records but not all. The reason for this is that each CHAID terminal node has a mix of responses. The prediction matches the most common one, but will be wrong for all the others in that node. (Recall the 18% minority of low-income customers who did not default.) To avoid this, we could continue splitting the tree into smaller and smaller branches, until every node was 100% pure—all Good or Bad with no mixed responses. But such a model would be extremely complicated and would probably not generalize well to other datasets. To find out exactly how many predictions are correct, we could read through the table and tally the number of records where the value of the predicted field `$R-Credit rating` matches the value of `Credit rating`. Fortunately, there's a much easier way; we can use an Analysis node, which does this automatically. 2. Connect the model nugget to the Analysis node\. 3. Right\-click the Analysis node and select Run\. An Analysis entry will be added to the Outputs panel\. Double\-click it to open it\. <!-- </ol> --> Figure 3\. Attaching an Analysis node ![Attaching an Analysis node](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_intro_attach.png) The analysis shows that for 1960 out of 2464 records—over 79%—the value predicted by the model matched the actual response\. Figure 4\. 
Analysis results comparing observed and predicted responses ![Analysis results comparing observed and predicted responses](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_intro_analysis.png) This result is limited by the fact that the records being scored are the same ones used to estimate the model\. In a real situation, you could use a Partition node to split the data into separate samples for training and evaluation\. By using one sample partition to generate the model and another sample to test it, you can get a much better indication of how well it will generalize to other datasets\. The Analysis node allows us to test the model against records for which we already know the actual result\. The next stage illustrates how we can use the model to score records for which we don't know the outcome\. For example, this might include people who are not currently customers of the bank, but who are prospective targets for a promotional mailing\. <!-- </article "role="article" "> -->
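Outside Modeler, the Analysis node's headline number is a one-liner. The sketch below assumes the Table node's output has been exported to a hypothetical scored_output.csv; the split at the end approximates what a Partition node would do.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

scored = pd.read_csv("scored_output.csv")  # hypothetical export of the Table node
correct = (scored["$R-Credit rating"] == scored["Credit rating"]).sum()
print(f"{correct} / {len(scored)} correct ({100 * correct / len(scored):.1f}%)")

# A Partition-node-style split: estimate on one sample, evaluate on the other.
train, test = train_test_split(scored, test_size=0.5, random_state=1)
```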
A62A258BB486FBE7E7FC91C611DC2BC400E32308
https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_intro_nugget.html?context=cdpaas&locale=en
Browsing the model (SPSS Modeler)
Browsing the model After running a flow, an orange model nugget is added to the canvas with a link to the modeling node from which it was created. To view the model details, right-click the model nugget and choose View Model. Figure 1. Model nugget ![Model nugget](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_intro_nugget.png) In the case of the CHAID nugget, the CHAID Tree Model screen includes pages for Model Information, Feature Importance, Top Decision Rules, Tree Diagram, Build Settings, and Training Summary. For example, you can see details in the form of a rule set—essentially a series of rules that can be used to assign individual records to child nodes based on the values of different input fields. Figure 2. CHAID model nugget, rule set ![CHAID model nugget, rule set](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_intro_rules.png) For each decision tree terminal node (that is, those tree nodes that are not split further), a prediction of Good or Bad is returned. In each case, the prediction is determined by the mode, or most common response, for records that fall within that node. The Feature Importance chart shows the relative importance of each predictor in estimating the model. From this, we can see that Income level is easily the most significant in this case, with Number of credit cards being the next most significant factor. Figure 3. Feature Importance chart ![Feature Importance chart](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/feature-importance.jpg) The Tree Diagram page displays the same model in the form of a tree, with a node at each decision point. Hover over branches and nodes to explore details. Figure 4. Tree diagram in the model nugget ![Tree diagram in the model nugget](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tree-diagram.jpg) Looking at the start of the tree, the first node (node 0) gives us a summary for all the records in the data set. Just over 40% of the cases in the data set are classified as a bad risk. This is quite a high proportion, so let's see if the tree can give us any clues as to what factors might be responsible. We can see that the first split is by Income level. Records where the income level is in the Low category are assigned to node 2, and it's no surprise to see that this category contains the highest percentage of loan defaulters. Clearly, lending to customers in this category carries a high risk. However, almost 18% of the customers in this category actually didn’t default, so the prediction won't always be correct. No model can feasibly predict every response, but a good model should allow us to predict the most likely response for each record based on the available data. In the same way, if we look at the high income customers (node 1), we see that the vast majority (over 88%) are a good risk. But more than 1 in 10 of these customers has also defaulted. Can we refine our lending criteria to minimize the risk here? Notice how the model has divided these customers into two sub-categories (nodes 4 and 5), based on the number of credit cards held. For high-income customers, if we lend only to those with fewer than five credit cards, we can increase our success rate from 88% to almost 97%—an even more satisfactory outcome. Figure 5. 
High-income customers with fewer than five credit cards ![High-income customers with fewer than five credit cards](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_intro_node5.png) But what about those customers in the Medium income category (node 3)? They’re much more evenly divided between Good and Bad ratings. Again, the sub-categories (nodes 6 and 7 in this case) can help us. This time, lending only to those medium-income customers with fewer than five credit cards increases the percentage of Good ratings from 58% to 86%, a significant improvement. Figure 6. Tree view of medium-income customers ![Tree view of medium-income customers](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_intro_node7.png) So, we’ve learned that every record that is input to this model will be assigned to a specific node, and assigned a prediction of Good or Bad based on the most common response for that node. This process of assigning predictions to individual records is known as scoring. By scoring the same records used to estimate the model, we can evaluate how accurately it performs on the training data—the data for which we know the outcome. Let's examine how to do this.
# Browsing the model # After running a flow, an orange model nugget is added to the canvas with a link to the modeling node from which it was created\. To view the model details, right\-click the model nugget and choose View Model\. Figure 1\. Model nugget ![Model nugget](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_intro_nugget.png) In the case of the CHAID nugget, the CHAID Tree Model screen includes pages for Model Information, Feature Importance, Top Decision Rules, Tree Diagram, Build Settings, and Training Summary\. For example, you can see details in the form of a rule set—essentially a series of rules that can be used to assign individual records to child nodes based on the values of different input fields\. Figure 2\. CHAID model nugget, rule set ![CHAID model nugget, rule set](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_intro_rules.png) For each decision tree terminal node (that is, those tree nodes that are not split further), a prediction of Good or Bad is returned\. In each case, the prediction is determined by the mode, or most common response, for records that fall within that node\. The Feature Importance chart shows the relative importance of each predictor in estimating the model\. From this, we can see that Income level is easily the most significant in this case, with Number of credit cards being the next most significant factor\. Figure 3\. Feature Importance chart ![Feature Importance chart](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/feature-importance.jpg) The Tree Diagram page displays the same model in the form of a tree, with a node at each decision point\. Hover over branches and nodes to explore details\. Figure 4\. Tree diagram in the model nugget ![Tree diagram in the model nugget](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tree-diagram.jpg) Looking at the start of the tree, the first node (node 0) gives us a summary for all the records in the data set\. Just over 40% of the cases in the data set are classified as a bad risk\. This is quite a high proportion, so let's see if the tree can give us any clues as to what factors might be responsible\. We can see that the first split is by Income level\. Records where the income level is in the Low category are assigned to node 2, and it's no surprise to see that this category contains the highest percentage of loan defaulters\. Clearly, lending to customers in this category carries a high risk\. However, almost 18% of the customers in this category actually didn’t default, so the prediction won't always be correct\. No model can feasibly predict every response, but a good model should allow us to predict the most likely response for each record based on the available data\. In the same way, if we look at the high income customers (node 1), we see that the vast majority (over 88%) are a good risk\. But more than 1 in 10 of these customers has also defaulted\. Can we refine our lending criteria to minimize the risk here? Notice how the model has divided these customers into two sub\-categories (nodes 4 and 5), based on the number of credit cards held\. For high\-income customers, if we lend only to those with fewer than five credit cards, we can increase our success rate from 88% to almost 97%—an even more satisfactory outcome\. Figure 5\. 
High\-income customers with fewer than five credit cards ![High\-income customers with fewer than five credit cards](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_intro_node5.png) But what about those customers in the Medium income category (node 3)? They’re much more evenly divided between Good and Bad ratings\. Again, the sub\-categories (nodes 6 and 7 in this case) can help us\. This time, lending only to those medium\-income customers with fewer than five credit cards increases the percentage of Good ratings from 58% to 86%, a significant improvement\. Figure 6\. Tree view of medium\-income customers ![Tree view of medium\-income customers](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_intro_node7.png) So, we’ve learned that every record that is input to this model will be assigned to a specific node, and assigned a prediction of Good or Bad based on the most common response for that node\. This process of assigning predictions to individual records is known as scoring\. By scoring the same records used to estimate the model, we can evaluate how accurately it performs on the training data—the data for which we know the outcome\. Let's examine how to do this\. <!-- </article "role="article" "> -->
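The scoring rule described here, where each terminal node predicts its most common response, can be demonstrated in miniature. The node assignments below are invented for illustration:

```python
import pandas as pd

df = pd.DataFrame({
    "node": [5, 5, 5, 7, 7, 7, 7],
    "Credit rating": ["Good", "Good", "Bad", "Good", "Good", "Good", "Bad"],
})
# Each terminal node predicts its modal (most common) target value.
predictions = df.groupby("node")["Credit rating"].agg(lambda s: s.mode()[0])
print(predictions)  # node 5 -> Good, node 7 -> Good
```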
3CF77633A489E42B01086588D6613D65BFD51F7F
https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_intro_score.html?context=cdpaas&locale=en
Scoring records (SPSS Modeler)
Scoring records Earlier, we scored the same records used to estimate the model so we could evaluate how accurate the model was. Now we'll score a different set of records from the ones used to create the model. This is the goal of modeling with a target field: Study records for which you know the outcome, to identify patterns that will allow you to predict outcomes you don't yet know. Figure 1. Attaching new data for scoring ![Attaching new data for scoring](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_intro_score.png) You could update the data asset Import node to point to a different data file, or you could add a new Import node that reads in the data you want to score. Either way, the new dataset must contain the same input fields used by the model (Age, Income level, Education and so on), but not the target field Credit rating. Alternatively, you could add the model nugget to any flow that includes the expected input fields. Whether read from a file or a database, the source type doesn't matter as long as the field names and types match those used by the model.
# Scoring records # Earlier, we scored the same records used to estimate the model so we could evaluate how accurate the model was\. Now we'll score a different set of records from the ones used to create the model\. This is the goal of modeling with a target field: Study records for which you know the outcome, to identify patterns that will allow you to predict outcomes you don't yet know\. Figure 1\. Attaching new data for scoring ![Attaching new data for scoring](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_intro_score.png) You could update the data asset Import node to point to a different data file, or you could add a new Import node that reads in the data you want to score\. Either way, the new dataset must contain the same input fields used by the model (`Age`, `Income level`, `Education` and so on), but not the target field `Credit rating`\. Alternatively, you could add the model nugget to any flow that includes the expected input fields\. Whether read from a file or a database, the source type doesn't matter as long as the field names and types match those used by the model\. <!-- </article "role="article" "> -->
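Before scoring, it's worth confirming that the new dataset really carries the model's input fields. A defensive sketch, assuming a hypothetical prospects.csv and the field names from this tutorial's data:

```python
import pandas as pd

required_inputs = {"Age", "Income", "Credit_cards", "Education", "Car_loans"}
new_data = pd.read_csv("prospects.csv")  # hypothetical scoring file

missing = required_inputs - set(new_data.columns)
if missing:
    raise ValueError(f"scoring data is missing input fields: {missing}")
# The target field may legitimately be absent; it's what we want to predict.
```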
F140F179614D126E483732933A5CA8DCF0A32876
https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_intro_summary.html?context=cdpaas&locale=en
Summary (SPSS Modeler)
Summary This example Introduction to Modeling flow demonstrates the basic steps for creating, evaluating, and scoring a model. * The modeling node estimates the model by studying records for which the outcome is known, and creates a model nugget. This is sometimes referred to as training the model. * The model nugget can be added to any flow with the expected fields to score records. By scoring the records for which you already know the outcome (such as existing customers), you can evaluate how well it performs. * After you're satisfied that the model performs acceptably well, you can score new data (such as prospective customers) to predict how they will respond. * The data used to train or estimate the model may be referred to as the analytical or historical data; the scoring data may also be referred to as the operational data.
# Summary # This example Introduction to Modeling flow demonstrates the basic steps for creating, evaluating, and scoring a model\. <!-- <ul> --> * The modeling node estimates the model by studying records for which the outcome is known, and creates a model nugget\. This is sometimes referred to as training the model\. * The model nugget can be added to any flow with the expected fields to score records\. By scoring the records for which you already know the outcome (such as existing customers), you can evaluate how well it performs\. * After you're satisfied that the model performs acceptably well, you can score new data (such as prospective customers) to predict how they will respond\. * The data used to train or estimate the model may be referred to as the analytical or historical data; the scoring data may also be referred to as the operational data\. <!-- </ul> --> <!-- </article "role="article" "> -->
2828FD5943ABBA08AA260F1080B850C90FC4EFBE
https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_reducing.html?context=cdpaas&locale=en
Reducing input data string length (SPSS Modeler)
Reducing input data string length For binomial logistic regression and auto classifier models that include a binomial logistic regression model, string fields are limited to a maximum of eight characters. Where strings are more than eight characters, you can recode them using a Reclassify node. This example uses the flow named Reducing Input Data String Length, available in the example project. The data file is drug_long_name.csv. This example focuses on a small part of a flow to show the type of errors that may be generated with overlong strings, and explains how to use the Reclassify node to change the string details to an acceptable length. Although the example uses a binomial Logistic Regression node, it is equally applicable when using the Auto Classifier node to generate a binomial Logistic Regression model.
# Reducing input data string length # For binomial logistic regression and auto classifier models that include a binomial logistic regression model, string fields are limited to a maximum of eight characters\. Where strings are more than eight characters, you can recode them using a Reclassify node\. This example uses the flow named Reducing Input Data String Length, available in the example project\. The data file is drug\_long\_name\.csv\. This example focuses on a small part of a flow to show the type of errors that may be generated with overlong strings, and explains how to use the Reclassify node to change the string details to an acceptable length\. Although the example uses a binomial Logistic Regression node, it is equally applicable when using the Auto Classifier node to generate a binomial Logistic Regression model\. <!-- </article "role="article" "> -->
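You can also spot the problem before a modeling node complains. A sketch that scans drug_long_name.csv for string fields whose values exceed the eight-character limit:

```python
import pandas as pd

df = pd.read_csv("drug_long_name.csv")
for col in df.select_dtypes(include="object"):
    longest = df[col].astype(str).str.len().max()
    if longest > 8:
        print(f"{col}: longest value is {longest} characters")
```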
85381B4DF6F42B35CA5097709523038ABDCDC555
https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_reducing_reclassify.html?context=cdpaas&locale=en
Reclassifying the data (SPSS Modeler)
Reclassifying the data Figure 1. Example flow showing string reclassification for binomial logistic regression ![Example flow showing string reclassification for binomial logistic regression](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_reducing.png) 1. Add a Data Asset node that points to drug_long_name.csv. 2. Add a Type node after the Data Asset node. Double-click the Type node to open its properties, and select Cholesterol_long as the target. 3. Add a Logistic Regression node after the Type node. Double-click the node and select the Binomial procedure (instead of the default Multinomial procedure). 4. Right-click the Logistic Regression node and run it. An error message warns you that the Cholesterol_long string values are too long. When you encounter this type of message, follow the procedure described in the rest of this example to modify your data. Figure 2. Error message displayed when running the binomial logistic regression node ![Error message displayed when running the binomial logistic regression node](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_reducing_error.png) 5. Add a Reclassify node after the Type node and double-click it to open its properties. 6. For the Reclassify Field, select Cholesterol_long and type Cholesterol for the new field name. 7. Click Get values to add the Cholesterol_long values to the original value column. 8. In the new value column, type High next to the original value of High level of cholesterol and Normal next to the original value of Normal level of cholesterol. Figure 3. Reclassifying long strings ![Reclassifying long strings](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_reducing_reclassify.png) 9. Add a Filter node after the Reclassify node. Double-click the node, choose Filter the selected fields, and select the Cholesterol_long field. Figure 4. Filtering the "Cholesterol_long" field from the data ![Filtering the "Cholesterol_long" field from the data](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_reducing_filter.png) 10. Add a Type node after the Filter node. Double-click the node and select Cholesterol as the target. Figure 5. Short string details in the "Cholesterol" field ![Short string details in the "Cholesterol" field](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_reducing_type.png) 11. Add a Logistic node after the Type node. Double-click the node and select the Binomial procedure. You can now run the binomial Logistic node and generate a model without encountering the error as you did before. This example only shows part of a flow. For more information about the types of flows in which you might need to reclassify long strings, see the following example: * Auto Classifier node. See [Automated modeling for a flag target](https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_autoflag.html).
# Reclassifying the data # Figure 1\. Example flow showing string reclassification for binomial logistic regression ![Example flow showing string reclassification for binomial logistic regression](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_reducing.png) <!-- <ol> --> 1. Add a Data Asset node that points to drug\_long\_name\.csv\. 2. Add a Type node after the Data Asset node\. Double\-click the Type node to open its properties, and select `Cholesterol_long` as the target\. 3. Add a Logistic Regression node after the Type node\. Double\-click the node and select the Binomial procedure (instead of the default Multinomial procedure)\. 4. Right\-click the Logistic Regression node and run it\. An error message warns you that the `Cholesterol_long` string values are too long\. When you encounter this type of message, follow the procedure described in the rest of this example to modify your data\. Figure 2. Error message displayed when running the binomial logistic regression node ![Error message displayed when running the binomial logistic regression node](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_reducing_error.png) 5. Add a Reclassify node after the Type node and double\-click it to open its properties\. 6. For the Reclassify Field, select `Cholesterol_long` and type Cholesterol for the new field name\. 7. Click Get values to add the `Cholesterol_long` values to the original value column\. 8. In the new value column, type High next to the original value of `High level of cholesterol` and Normal next to the original value of `Normal level of cholesterol`\. Figure 3. Reclassifying long strings ![Reclassifying long strings](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_reducing_reclassify.png) 9. Add a Filter node after the Reclassify node\. Double\-click the node, choose Filter the selected fields, and select the `Cholesterol_long` field\. Figure 4. Filtering the "Cholesterol\_long" field from the data ![Filtering the "Cholesterol\_long" field from the data](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_reducing_filter.png) 10. Add a Type node after the Filter node\. Double\-click the node and select `Cholesterol` as the target\. Figure 5. Short string details in the "Cholesterol" field ![Short string details in the "Cholesterol" field](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_reducing_type.png) 11. Add a Logistic node after the Type node\. Double\-click the node and select the Binomial procedure\. <!-- </ol> --> You can now run the binomial Logistic node and generate a model without encountering the error as you did before\. This example only shows part of a flow\. For more information about the types of flows in which you might need to reclassify long strings, see the following example: <!-- <ul> --> * Auto Classifier node\. See [Automated modeling for a flag target](https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_autoflag.html)\. <!-- </ul> --> <!-- </article "role="article" "> -->
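The same recode-and-filter pattern is easy to check outside the flow. Below is a minimal pandas sketch (an analogy, not the Reclassify node itself) that maps the two overlong `Cholesterol_long` strings to short values and verifies the eight-character limit; the inline sample data is a stand-in for drug_long_name.csv.

```python
import pandas as pd

# Small stand-in for drug_long_name.csv (assumption: the real file has more fields)
df = pd.DataFrame({"Cholesterol_long": ["High level of cholesterol",
                                        "Normal level of cholesterol"]})

# Reclassify: map each overlong string to a short replacement value
mapping = {"High level of cholesterol": "High",
           "Normal level of cholesterol": "Normal"}
df["Cholesterol"] = df["Cholesterol_long"].map(mapping)

# Filter: drop the original field so only the short strings reach the model
df = df.drop(columns=["Cholesterol_long"])

# Binomial logistic regression requires strings of eight characters or fewer
assert df["Cholesterol"].str.len().max() <= 8
```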
75891659AB1DF929D219741C3F2D69384A01835C
https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_retail.html?context=cdpaas&locale=en
Retail sales promotion (SPSS Modeler)
Retail sales promotion This example deals with fictitious data that describes retail product lines and the effects of promotion on sales. Your goal in this example is to predict the effects of future sales promotions. Similar to the [condition monitoring example](https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_condition.html), the data mining process consists of the exploration, data preparation, training, and test phases. This example uses the flow named Retail Sales Promotion, available in the example project . The data files are goods1n.csv and goods2n.csv. 1. Open the Example Project. 2. Scroll down to the Modeler flows section, click View all, and select the Retail Sales Promotion flow.
# Retail sales promotion # This example deals with fictitious data that describes retail product lines and the effects of promotion on sales\. Your goal in this example is to predict the effects of future sales promotions\. Similar to the [condition monitoring example](https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_condition.html), the data mining process consists of the exploration, data preparation, training, and test phases\. This example uses the flow named Retail Sales Promotion, available in the example project \. The data files are goods1n\.csv and goods2n\.csv\. <!-- <ol> --> 1. Open the Example Project\. 2. Scroll down to the Modeler flows section, click View all, and select the Retail Sales Promotion flow\. <!-- </ol> --> <!-- </article "role="article" "> -->
6F55360D336A77A06F2C4235B286A869CFF0986C
https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_retail_data.html?context=cdpaas&locale=en
Examining the data (SPSS Modeler)
Examining the data Each record contains: * Class. Product type. * Cost. Unit price. * Promotion. Index of amount spent on a particular promotion. * Before. Revenue before promotion. * After. Revenue after promotion. The flow is simple. It displays the data in a table. The two revenue fields (Before and After) are expressed in absolute terms. However, it seems likely that the increase in revenue after the promotion (and presumably as a result of it) would be a more useful figure. Figure 1. Effects of promotion on product sales ![Effects of promotion on product sales](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_retail_data_effects.png) The flow also contains a node to derive this value, expressed as a percentage of the revenue before the promotion, in a field called Increase. A table shows this field. Figure 2. Increase in revenue after promotion ![Increase in revenue after promotion](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_retail_data_increase.png) For each class of product, an almost linear relationship exists between the increase in revenue and the cost of the promotion. Therefore, it seems likely that a decision tree or neural network could predict, with reasonable accuracy, the increase in revenue from the other available fields.
# Examining the data # Each record contains: <!-- <ul> --> * `Class`\. Product type\. * `Cost`\. Unit price\. * `Promotion`\. Index of amount spent on a particular promotion\. * `Before`\. Revenue before promotion\. * `After`\. Revenue after promotion\. <!-- </ul> --> The flow is simple\. It displays the data in a table\. The two revenue fields (`Before` and `After`) are expressed in absolute terms\. However, it seems likely that the increase in revenue after the promotion (and presumably as a result of it) would be a more useful figure\. Figure 1\. Effects of promotion on product sales ![Effects of promotion on product sales](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_retail_data_effects.png) The flow also contains a node to derive this value, expressed as a percentage of the revenue before the promotion, in a field called `Increase`\. A table shows this field\. Figure 2\. Increase in revenue after promotion ![Increase in revenue after promotion](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_retail_data_increase.png) For each class of product, an almost linear relationship exists between the increase in revenue and the cost of the promotion\. Therefore, it seems likely that a decision tree or neural network could predict, with reasonable accuracy, the increase in revenue from the other available fields\. <!-- </article "role="article" "> -->
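For reference, the Derive node's calculation amounts to a one-line formula: Increase = 100 * (After - Before) / Before. The pandas sketch below reproduces it, assuming goods1n.csv is available locally with the fields listed above.

```python
import pandas as pd

df = pd.read_csv("goods1n.csv")  # fields: Class, Cost, Promotion, Before, After

# Increase: revenue growth after the promotion, as a percentage of Before
df["Increase"] = 100 * (df["After"] - df["Before"]) / df["Before"]
print(df[["Class", "Promotion", "Increase"]].head())
```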
1399CD9C09634E30C0F099C0FAE66A756153DAB1
https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_retail_learn.html?context=cdpaas&locale=en
Learning and testing (SPSS Modeler)
Learning and testing The flow trains a neural network and a decision tree to make this prediction of revenue increase. Figure 1. Retail Sales Promotion example flow ![Retail Sales Promotion example flow](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_retail.png) After you run the flow to generate the model nuggets, you can test the results of the learning process. You do this by connecting the decision tree and network in series between the Type node and a new Analysis node, changing the Data Asset import node to point to goods2n.csv, and running the Analysis node. From the output of this node, in particular from the linear correlation between the predicted increase and the correct answer, you will find that the trained systems predict the increase in revenue with a high degree of success. Further exploration might focus on the cases where the trained systems make relatively large errors. These could be identified by plotting the predicted increase in revenue against the actual increase. Outliers on this graph could be selected using the interactive graphics within SPSS Modeler, and from their properties, it might be possible to tune the data description or learning process to improve accuracy.
# Learning and testing # The flow trains a neural network and a decision tree to make this prediction of revenue increase\. Figure 1\. Retail Sales Promotion example flow ![Retail Sales Promotion example flow](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_retail.png) After you run the flow to generate the model nuggets, you can test the results of the learning process\. You do this by connecting the decision tree and network in series between the Type node and a new Analysis node, changing the Data Asset import node to point to goods2n\.csv, and running the Analysis node\. From the output of this node, in particular from the linear correlation between the predicted increase and the correct answer, you will find that the trained systems predict the increase in revenue with a high degree of success\. Further exploration might focus on the cases where the trained systems make relatively large errors\. These could be identified by plotting the predicted increase in revenue against the actual increase\. Outliers on this graph could be selected using the interactive graphics within SPSS Modeler, and from their properties, it might be possible to tune the data description or learning process to improve accuracy\. <!-- </article "role="article" "> -->
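If you want to reproduce the Analysis node's linear correlation check by hand, a sketch like the following works on any scored table. The column name `$N-Increase` follows Modeler's convention of prefixing neural network predictions with `$N-`; the numbers shown here are invented for illustration.

```python
import pandas as pd

# Hypothetical scored output: actual Increase plus the model's prediction
scored = pd.DataFrame({"Increase":    [45.2, 60.1, 33.7, 80.4],
                       "$N-Increase": [44.0, 58.9, 41.0, 79.5]})

# Linear correlation between the predicted increase and the correct answer
print(scored["Increase"].corr(scored["$N-Increase"]))

# Flag the records with the largest absolute errors for closer inspection
scored["error"] = (scored["$N-Increase"] - scored["Increase"]).abs()
print(scored.nlargest(2, "error"))
```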
420946CA7E893CC5A2B3D1A8F47A7A2C7059D7F6
https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_screening.html?context=cdpaas&locale=en
Screening predictors (SPSS Modeler)
Screening predictors The Feature Selection node helps you identify the fields that are most important in predicting a certain outcome. From a set of hundreds or even thousands of predictors, the Feature Selection node screens, ranks, and selects the predictors that may be most important. Ultimately, you may end up with a quicker, more efficient model—one that uses fewer predictors, runs more quickly, and may be easier to understand. The data used in this example represents a data warehouse for a hypothetical telephone company and contains information about responses to a special promotion by 5,000 of the company's customers. The data includes many fields that contain customers' age, employment, income, and telephone usage statistics. Three "target" fields show whether or not the customer responded to each of three offers. The company wants to use this data to help predict which customers are most likely to respond to similar offers in the future. This example uses the flow named Screening Predictors, available in the example project . The data file is customer_dbase.csv. This example focuses on only one of the offers as a target. It uses the CHAID tree-building node to develop a model to describe which customers are most likely to respond to the promotion. It contrasts two approaches: * Without feature selection. All predictor fields in the dataset are used as inputs to the CHAID tree. * With feature selection. The Feature Selection node is used to select the best 10 predictors. These are then input into the CHAID tree. By comparing the two resulting tree models, we can see how feature selection can produce effective results.
# Screening predictors # The Feature Selection node helps you identify the fields that are most important in predicting a certain outcome\. From a set of hundreds or even thousands of predictors, the Feature Selection node screens, ranks, and selects the predictors that may be most important\. Ultimately, you may end up with a quicker, more efficient model—one that uses fewer predictors, runs more quickly, and may be easier to understand\. The data used in this example represents a data warehouse for a hypothetical telephone company and contains information about responses to a special promotion by 5,000 of the company's customers\. The data includes many fields that contain customers' age, employment, income, and telephone usage statistics\. Three "target" fields show whether or not the customer responded to each of three offers\. The company wants to use this data to help predict which customers are most likely to respond to similar offers in the future\. This example uses the flow named Screening Predictors, available in the example project \. The data file is customer\_dbase\.csv\. This example focuses on only one of the offers as a target\. It uses the CHAID tree\-building node to develop a model to describe which customers are most likely to respond to the promotion\. It contrasts two approaches: <!-- <ul> --> * Without feature selection\. All predictor fields in the dataset are used as inputs to the CHAID tree\. * With feature selection\. The Feature Selection node is used to select the best 10 predictors\. These are then input into the CHAID tree\. <!-- </ul> --> By comparing the two resulting tree models, we can see how feature selection can produce effective results\. <!-- </article "role="article" "> -->
5A328CF6319859F041C48974E44046BCFCEA3B87
https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_screening_flow.html?context=cdpaas&locale=en
Building the flow (SPSS Modeler)
Building the flow Figure 1. Feature Selection example flow ![Feature Selection example flow](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_screening.png) 1. Add a Data Asset node that points to customer_dbase.csv. 2. Add a Type node after the Data Asset node. 3. Double-click the Type node to open its properties, and change the role for response_01 to Target. Change the role to None for the other response fields (response_02 and response_03) and for the customer ID (custid) field. Leave the role set to Input for all other fields. Figure 2. Adding a Type node ![Adding a Type node](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_screening_target.png) 4. Click Read Values and then click Save. 5. Add a Feature Selection modeling node after the Type node. In the node properties, the rules and criteria used for screening or disqualifying fields are defined. Figure 3. Adding a Feature Selection node ![Adding a Feature Selection node](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_screening_criteria.png) 6. Run the flow to generate the Feature Selection model nugget. 7. To look at the results, right-click the model nugget and choose View Model. The results show the fields found to be useful in the prediction, ranked by importance. By examining these fields, you can decide which ones to use in subsequent modeling sessions. 8. To compare results with and without feature selection, add two CHAID modeling nodes to the flow: one that uses feature selection and one that doesn't. Add two CHAID nodes, one connected to the Type node and the other connected to the Feature Selection model nugget, as shown in the example flow at the beginning of this section. 9. Double-click each CHAID node to open its properties. Under Objectives, make sure that Build new model and Create a standard model are selected. Under Tree depth, select Custom and set it to 5.
# Building the flow # Figure 1\. Feature Selection example flow ![Feature Selection example flow](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_screening.png) <!-- <ol> --> 1. Add a Data Asset node that points to customer\_dbase\.csv\. 2. Add a Type node after the Data Asset node\. 3. Double\-click the Type node to open its properties, and change the role for `response_01` to Target\. Change the role to None for the other response fields (`response_02` and `response_03`) and for the customer ID (`custid`) field\. Leave the role set to Input for all other fields\. Figure 2. Adding a Type node ![Adding a Type node](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_screening_target.png) 4. Click Read Values and then click Save\. 5. Add a Feature Selection modeling node after the Type node\. In the node properties, the rules and criteria used for screening or disqualifying fields are defined\. Figure 3. Adding a Feature Selection node ![Adding a Feature Selection node](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_screening_criteria.png) 6. Run the flow to generate the Feature Selection model nugget\. 7. To look at the results, right\-click the model nugget and choose View Model\. The results show the fields found to be useful in the prediction, ranked by importance\. By examining these fields, you can decide which ones to use in subsequent modeling sessions\. 8. To compare results with and without feature selection, add two CHAID modeling nodes to the flow: one that uses feature selection and one that doesn't\. Add two CHAID nodes, one connected to the Type node and the other connected to the Feature Selection model nugget, as shown in the example flow at the beginning of this section\. 9. Double\-click each CHAID node to open its properties\. Under Objectives, make sure that Build new model and Create a standard model are selected\. Under Tree depth, select Custom and set it to 5\. <!-- </ol> --> <!-- </article "role="article" "> -->
9B120FF1F8482EB617E16738D5160C966C6EDF3D
https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_screening_model.html?context=cdpaas&locale=en
Building the models (SPSS Modeler)
Building the models 1. Run the CHAID node that uses all the predictors in the dataset (the one connected to the Type node). As it runs, notice how long it takes to finish. 2. Right-click the generated model nugget, select View Model, and look at the tree diagram. 3. Now run the other CHAID model, which uses fewer predictors. Again, look at its tree diagram. The difference may be hard to notice, but the second model ran faster than the first. Because this dataset is relatively small, the difference in run times is probably only a few seconds; but for larger real-world datasets, the difference might be very noticeable—minutes or even hours. Using feature selection may speed up your processing times dramatically. The second tree also contains fewer tree nodes than the first. It's easier to comprehend. Using fewer predictors is less expensive. It means that you have less data to collect, process, and feed into your models. Computing time is improved. In this example, even with the extra feature selection step, model building was faster with the smaller set of predictors. With a larger real-world dataset, the time savings should be greatly amplified. Using fewer predictors results in simpler scoring. For example, you might identify only four profiles of customers who are likely to respond to the promotion. Note that with larger numbers of predictors, you run the risk of overfitting your model. The simpler model may generalize better to other datasets (although you would need to test this to be sure). You could instead use a tree-building algorithm to do the feature selection work, allowing the tree to identify the most important predictors for you. In fact, the CHAID algorithm is often used for this purpose, and it's even possible to grow the tree level-by-level to control its depth and complexity. However, the Feature Selection node is faster and easier to use. It ranks all of the predictors in one fast step, allowing you to identify the most important fields quickly.
# Building the models # <!-- <ol> --> 1. Run the CHAID node that uses all the predictors in the dataset (the one connected to the Type node)\. As it runs, notice how long it takes to finish\. 2. Right\-click the generated model nugget, select View Model, and look at the tree diagram\. 3. Now run the other CHAID model, which uses fewer predictors\. Again, look at its tree diagram\. The difference may be hard to notice, but the second model ran faster than the first. Because this dataset is relatively small, the difference in run times is probably only a few seconds; but for larger real-world datasets, the difference might be very noticeable—minutes or even hours. Using feature selection may speed up your processing times dramatically. The second tree also contains fewer tree nodes than the first. It's easier to comprehend. Using fewer predictors is less expensive. It means that you have less data to collect, process, and feed into your models. Computing time is improved. In this example, even with the extra feature selection step, model building was faster with the smaller set of predictors. With a larger real-world dataset, the time savings should be greatly amplified. Using fewer predictors results in simpler scoring. For example, you might identify only four profiles of customers who are likely to respond to the promotion. Note that with larger numbers of predictors, you run the risk of overfitting your model. The simpler model may generalize better to other datasets (although you would need to test this to be sure). You could instead use a tree-building algorithm to do the feature selection work, allowing the tree to identify the most important predictors for you. In fact, the CHAID algorithm is often used for this purpose, and it's even possible to grow the tree level-by-level to control its depth and complexity. However, the Feature Selection node is faster and easier to use. It ranks all of the predictors in one fast step, allowing you to identify the most important fields quickly. <!-- </ol> --> <!-- </article "role="article" "> -->
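The screen-then-model pattern is not specific to the Feature Selection and CHAID nodes. As a rough analogy only (scikit-learn grows CART trees, not CHAID trees, and its univariate F-test differs from the Feature Selection node's criteria), the sketch below ranks predictors, keeps the best 10, and fits a depth-limited tree on the reduced set.

```python
import pandas as pd
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("customer_dbase.csv")
y = df["response_01"]
X = df.drop(columns=["custid", "response_01", "response_02", "response_03"])
X = pd.get_dummies(X)  # encode any categorical predictors numerically

# Screen: keep only the 10 most predictive fields before building the tree
selector = SelectKBest(f_classif, k=10).fit(X, y)
top10 = X.columns[selector.get_support()]
print("Selected predictors:", list(top10))

# Model: a depth-limited tree on the reduced predictor set (compare depth 5 above)
tree = DecisionTreeClassifier(max_depth=5).fit(X[top10], y)
```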
C41C78F27BB2F48542141EA85EDA7AD333E3FD0B
https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_selflearn.html?context=cdpaas&locale=en
Making offers to customers (SPSS Modeler)
Making offers to customers (self-learning) The Self-Learning Response Model (SLRM) node generates and enables the updating of a model that allows you to predict which offers are most appropriate for customers and the probability of the offers being accepted. These sorts of models are most beneficial in customer relationship management, such as marketing applications or call centers. This example is based on a fictional banking company. The marketing department wants to achieve more profitable results in future campaigns by matching the appropriate offer of financial services to each customer. Specifically, the example uses a Self-Learning Response model to identify the characteristics of customers who are most likely to respond favorably based on previous offers and responses and to promote the best current offer based on the results. This example uses the flow named Making Offers to Customers - Self-Learning, available in the example project . The data files are pm_customer_train1.csv, pm_customer_train2.csv, and pm_customer_train3.csv. 1. Open the Example Project. 2. Scroll down to the Modeler flows section, click View all, and select the Making Offers to Customers - Self-Learning flow.
# Making offers to customers (self\-learning) # The Self\-Learning Response Model (SLRM) node generates and enables the updating of a model that allows you to predict which offers are most appropriate for customers and the probability of the offers being accepted\. These sorts of models are most beneficial in customer relationship management, such as marketing applications or call centers\. This example is based on a fictional banking company\. The marketing department wants to achieve more profitable results in future campaigns by matching the appropriate offer of financial services to each customer\. Specifically, the example uses a Self\-Learning Response model to identify the characteristics of customers who are most likely to respond favorably based on previous offers and responses and to promote the best current offer based on the results\. This example uses the flow named Making Offers to Customers \- Self\-Learning, available in the example project \. The data files are pm\_customer\_train1\.csv, pm\_customer\_train2\.csv, and pm\_customer\_train3\.csv\. <!-- <ol> --> 1. Open the Example Project\. 2. Scroll down to the Modeler flows section, click View all, and select the Making Offers to Customers \- Self\-Learning flow\. <!-- </ol> --> <!-- </article "role="article" "> -->
7CEF749C4ED4703D00346FCDEF795D0431BC7C26
https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_selflearn_build.html?context=cdpaas&locale=en
Building the flow (SPSS Modeler)
Building the flow 1. Add a Data Asset node that points to pm_customer_train1.csv. Figure 1. SLRM example flow ![SLRM example flow](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_selflearn_slrm.png) 2. Attach a Filler node to the Data Asset node. Double-click the node to open its properties and, under Fill in fields, select campaign. 3. Select a Replace type of Always. 4. In the Replace with text box, enter to_string(campaign) and click Save. Figure 2. Derive a campaign field ![Derive a campaign field](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_selflearn_derive.png) 5. Add a Type node and set the Role to None for the following fields: * customer_id * response_date * purchase_date * product_id * Rowid * X_random 6. Set the Role to Target for the campaign and response fields. These are the fields on which you want to base your predictions. Set the Measurement to Flag for the response field. 7. Click Read Values then click Save. Because the campaign field data shows as a list of numbers (1, 2, 3, and 4), you can reclassify the fields to have more meaningful titles. 8. Add a Reclassify node after the Type node and open its properties. 9. Under Reclassify Into, select Existing field. 10. Under Reclassify Field, select campaign. 11. Click Get values. The campaign values are added to the ORIGINAL VALUE column. 12. In the NEW VALUE column, enter the following campaign names in the first four rows: * Mortgage * Car loan * Savings * Pension 13. Click Save. Figure 3. Reclassify the campaign names ![Reclassify the campaign names](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_selflearn_reclassify.png) 14. Attach an SLRM modeling node to the Reclassify node. Select campaign for the Target field, and response for the Target response field. 15. Under MODEL OPTIONS, for Maximum number of predictions per record, reduce the number to 2. This means that for each customer there will be two offers identified that have the highest probability of being accepted. 16. Make sure Take account of model reliability is selected, then click Save and run the flow.
# Building the flow # <!-- <ol> --> 1. Add a Data Asset node that points to pm\_customer\_train1\.csv\. Figure 1. SLRM example flow ![SLRM example flow](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_selflearn_slrm.png) 2. Attach a Filler node to the Data Asset node\. Double\-click the node to open its properties and, under Fill in fields, select `campaign`\. 3. Select a Replace type of Always\. 4. In the Replace with text box, enter to\_string(campaign) and click Save\. Figure 2. Derive a campaign field ![Derive a campaign field](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_selflearn_derive.png) 5. Add a Type node and set the Role to `None` for the following fields: <!-- <ul> --> * `customer_id` * `response_date` * `purchase_date` * `product_id` * `Rowid` * `X_random` <!-- </ul> --> 6. Set the Role to `Target` for the `campaign` and `response` fields\. These are the fields on which you want to base your predictions\. Set the Measurement to `Flag` for the `response` field\. 7. Click Read Values then click Save\. Because the campaign field data shows as a list of numbers (1, 2, 3, and 4), you can reclassify the fields to have more meaningful titles\. 8. Add a Reclassify node after the Type node and open its properties\. 9. Under Reclassify Into, select Existing field\. 10. Under Reclassify Field, select `campaign`\. 11. Click Get values\. The campaign values are added to the `ORIGINAL VALUE` column\. 12. In the `NEW VALUE` column, enter the following campaign names in the first four rows: <!-- <ul> --> * Mortgage * Car loan * Savings * Pension <!-- </ul> --> 13. Click Save\. Figure 3. Reclassify the campaign names ![Reclassify the campaign names](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_selflearn_reclassify.png) 14. Attach an SLRM modeling node to the Reclassify node\. Select `campaign` for the Target field, and `response` for the Target response field\. 15. Under MODEL OPTIONS, for Maximum number of predictions per record, reduce the number to 2\. This means that for each customer there will be two offers identified that have the highest probability of being accepted\. 16. Make sure Take account of model reliability is selected, then click Save and run the flow\. <!-- </ol> --> <!-- </article "role="article" "> -->
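Steps 2 through 13 amount to converting the numeric campaign codes to strings and giving them readable names. A minimal pandas equivalent is shown below; it assumes the `campaign` field in pm_customer_train1.csv is read as the integer codes 1 through 4 described above.

```python
import pandas as pd

df = pd.read_csv("pm_customer_train1.csv")

# Filler + Reclassify in one step: replace codes 1-4 with campaign names
names = {1: "Mortgage", 2: "Car loan", 3: "Savings", 4: "Pension"}
df["campaign"] = df["campaign"].map(names)
print(df["campaign"].value_counts())
```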
AA1C79D72D6A7D37CE1E72735B6DCFCA2B546DCE
https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_selflearn_model.html?context=cdpaas&locale=en
Browsing the model (SPSS Modeler)
Browsing the model 1. Right-click the model nugget and select View Model. The initial view shows the estimated accuracy of the predictions for each offer. You can also click Predictor Importance to see the relative importance of each predictor in estimating the model, or click Association With Response to show the correlation of each predictor with the target variable. 2. To switch between each of the four offers for which there are predictions, use the View drop-down. Figure 1. SLRM model nugget ![SLRM model nugget](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_selflearn_nugget.png) 3. Return to the flow. 4. Disconnect the Data Asset node that points to pm_customer_train1.csv. 5. Add a new Data Asset node that points to pm_customer_train2.csv and connect it to the Filler node. 6. Double-click the SLRM node and select Continue training existing model (under BUILD OPTIONS). Click Save. 7. Run the flow to regenerate the model nugget. Then right-click it and select View Model. The model now shows the revised estimates of accuracy of the predictions for each offer. 8. Add a new Data Asset node that points to pm_customer_train3.csv and connect it to the Filler node. 9. Run the flow again, then right-click the model nugget and select View Model. The model now shows the final estimated accuracy of the predictions for each offer. As you can see, the average accuracy fell slightly as you added the additional data sources. However, this fluctuation is minimal and may be attributed to slight anomalies within the available data. 10. Attach a Table node to the generated model nugget, then right-click the Table node and run it. In the Outputs pane, open the table output that was just generated. The predictions in the table show which offers a customer is most likely to accept and the confidence that they'll accept, depending on each customer's details. For example, in the first row, there's only a 13.2% confidence rating (denoted by the value 0.132 in the $SC-campaign-1 column) that a customer who previously took out a car loan will accept a pension if offered one. However, the second and third lines show two more customers who also took out a car loan; in their cases, there is a 95.7% confidence that they, and other customers with similar histories, would open a savings account if offered one, and over 80% confidence that they would accept a pension. Figure 2. Model output - predicted offers and confidences ![Model output - predicted offers and confidences](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_selflearn_table.png) Explanations of the mathematical foundations of the modeling methods used in SPSS Modeler are available in the [SPSS Modeler Algorithms Guide](http://public.dhe.ibm.com/software/analytics/spss/documentation/modeler/new/AlgorithmsGuide.pdf). Note that these results are based on the training data only. To assess how well the model generalizes to other data in the real world, you would use a Partition node to hold out a subset of records for purposes of testing and validation.
# Browsing the model # <!-- <ol> --> 1. Right\-click the model nugget and select View Model\. The initial view shows the estimated accuracy of the predictions for each offer\. You can also click Predictor Importance to see the relative importance of each predictor in estimating the model, or click Association With Response to show the correlation of each predictor with the target variable\. 2. To switch between each of the four offers for which there are predictions, use the View drop\-down\. Figure 1. SLRM model nugget ![SLRM model nugget](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_selflearn_nugget.png) 3. Return to the flow\. 4. Disconnect the Data Asset node that points to pm\_customer\_train1\.csv\. 5. Add a new Data Asset node that points to pm\_customer\_train2\.csv and connect it to the Filler node\. 6. Double\-click the SLRM node and select Continue training existing model (under BUILD OPTIONS)\. Click Save\. 7. Run the flow to regenerate the model nugget\. Then right\-click it and select View Model\. The model now shows the revised estimates of accuracy of the predictions for each offer\. 8. Add a new Data Asset node that points to pm\_customer\_train3\.csv and connect it to the Filler node\. 9. Run the flow again, then right\-click the model nugget and select View Model\. The model now shows the final estimated accuracy of the predictions for each offer. As you can see, the average accuracy fell slightly as you added the additional data sources. However, this fluctuation is minimal and may be attributed to slight anomalies within the available data. 10. Attach a Table node to the generated model nugget, then right\-click the Table node and run it\. In the Outputs pane, open the table output that was just generated\. The predictions in the table show which offers a customer is most likely to accept and the confidence that they'll accept, depending on each customer's details\. For example, in the first row, there's only a 13\.2% confidence rating (denoted by the value `0.132` in the `$SC-campaign-1` column) that a customer who previously took out a car loan will accept a pension if offered one\. However, the second and third lines show two more customers who also took out a car loan; in their cases, there is a 95\.7% confidence that they, and other customers with similar histories, would open a savings account if offered one, and over 80% confidence that they would accept a pension\. Figure 2. Model output - predicted offers and confidences ![Model output - predicted offers and confidences](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_selflearn_table.png) Explanations of the mathematical foundations of the modeling methods used in SPSS Modeler are available in the [SPSS Modeler Algorithms Guide](http://public.dhe.ibm.com/software/analytics/spss/documentation/modeler/new/AlgorithmsGuide.pdf). Note that these results are based on the training data only. To assess how well the model generalizes to other data in the real world, you would use a Partition node to hold out a subset of records for purposes of testing and validation. <!-- </ol> --> <!-- </article "role="article" "> -->
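To post-process the scored table outside of Modeler, you can filter on the confidence columns directly. In this sketch the `$S-`/`$SC-` column names follow Modeler's prediction and confidence naming convention for the SLRM nugget, and the values are invented to mirror the rows described above.

```python
import pandas as pd

# Hypothetical scored rows: best and second-best offer with confidences
scored = pd.DataFrame({
    "$S-campaign-1":  ["Pension", "Savings", "Savings"],
    "$SC-campaign-1": [0.132, 0.957, 0.957],
    "$S-campaign-2":  ["Savings", "Pension", "Pension"],
    "$SC-campaign-2": [0.101, 0.811, 0.802],
})

# Keep only customers whose best offer clears a confidence threshold
likely = scored[scored["$SC-campaign-1"] > 0.5]
print(likely[["$S-campaign-1", "$SC-campaign-1"]])
```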
BBB6FC842A370135B8488D9A2E09FCF17341954B
https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_ta_hotel.html?context=cdpaas&locale=en
Hotel satisfaction example for Text Analytics (SPSS Modeler)
Hotel satisfaction example for Text Analytics SPSS Modeler offers nodes that are specialized for handling text. In this example, a hotel manager is interested in learning what customers think about the hotel. Figure 1. Chart of positive opinions ![Chart of positive opinions](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_positive.png) Figure 2. Chart of negative opinions ![Chart of negative opinions](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_negative.png) This example uses the flow named Hotel Satisfaction, available in the example project . The data files are hotelSatisfaction.csv and hotelSatisfaction.xlsx. The flow uses Text Analytics nodes to analyze fictional text data about hotel personnel, comfort, cleanliness, price, etc. This flow illustrates two ways of analyzing data with a Text Mining node and a Text Link Analysis node. It also illustrates how you can deploy a text model and score current or new data. Let's take a look at the flow. 1. Open the Example Project. 2. Scroll down to the Modeler flows section and select the Hotel Satisfaction flow. Figure 3. Completed flow ![Completed flow](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel.png)
# Hotel satisfaction example for Text Analytics # SPSS Modeler offers nodes that are specialized for handling text\. In this example, a hotel manager is interested in learning what customers think about the hotel\. Figure 1\. Chart of positive opinions ![Chart of positive opinions](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_positive.png) Figure 2\. Chart of negative opinions ![Chart of negative opinions](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_negative.png) This example uses the flow named Hotel Satisfaction, available in the example project \. The data files are hotelSatisfaction\.csv and hotelSatisfaction\.xlsx\. The flow uses Text Analytics nodes to analyze fictional text data about hotel personnel, comfort, cleanliness, price, etc\. This flow illustrates two ways of analyzing data with a Text Mining node and a Text Link Analysis node\. It also illustrates how you can deploy a text model and score current or new data\. Let's take a look at the flow\. <!-- <ol> --> 1. Open the Example Project\. 2. Scroll down to the Modeler flows section and select the Hotel Satisfaction flow\. Figure 3. Completed flow ![Completed flow](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel.png) <!-- </ol> --> <!-- </article "role="article" "> -->
1924AE74643C2D9D416204693C9BB84D5212E3B0
https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_ta_hotel_build.html?context=cdpaas&locale=en
Building and deploying the model (SPSS Modeler)
Building and deploying the model 1. When your model is ready, click Generate a model to generate a text nugget. Figure 1. Generate a new model ![Generate a new model](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_build.png) Figure 2. Build a category model ![Build a category model](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_buildcat.png) 2. Alternatively, if you want to save the Text Analytics Workbench session, click Return to flow and then Save and exit. Figure 3. Saving your session ![Saving your session](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_build_save.png) The generated text nugget appears on your flow canvas. Figure 4. Generated text nugget ![Generated text nugget](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_build_nugget.png) After the category model has been validated and generated in the Text Analytics Workbench, you can deploy it in your flow and score the same data set or score a new one. Figure 5. Example flow with two modes for scoring ![Example flow with two modes for scoring](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_build_ex.png) This example flow illustrates the two modes for scoring: * Categories as fields. With this option, there are just as many output records as there were in the input. However, each record now contains one new field for every category that was selected on the Model tab. For each field, enter a flag value for true and for false, such as True/False, or 1/0. In this flow, values are set to 1 and 0 to aggregate results and count the number of positive, negative, mixed (both positive and negative), or no score (no opinion) answers. Figure 6. Model results - categories as fields ![Model results - categories as fields](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_build_excats.png) * Categories as records. With this option, a new record is created for each category, document pair. Typically, there are more records in the output than there were in the input. Along with the input fields, new fields are also added to the data depending on what kind of model it is. Figure 7. Model results - categories as records ![Model results - categories as records](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_build_exrecs.png) 3. You can add a Select node after the DeriveSentiment SuperNode, set it to include records where Sentiments=Pos, and add a Charts node to gain quick insight about what guests appreciate about the hotel: Figure 8. Chart of positive opinions ![Chart of positive opinions](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_positive.png)
# Building and deploying the model # <!-- <ol> --> 1. When your model is ready, click Generate a model to generate a text nugget\. Figure 1. Generate a new model ![Generate a new model](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_build.png) Figure 2. Build a category model ![Build a category model](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_buildcat.png) 2. Alternatively, if you want to save the Text Analytics Workbench session, click Return to flow and then Save and exit\. Figure 3. Saving your session ![Saving your session](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_build_save.png) The generated text nugget appears on your flow canvas. Figure 4. Generated text nugget ![Generated text nugget](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_build_nugget.png) After the category model has been validated and generated in the Text Analytics Workbench, you can deploy it in your flow and score the same data set or score a new one. Figure 5. Example flow with two modes for scoring ![Example flow with two modes for scoring](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_build_ex.png) This example flow illustrates the two modes for scoring: <!-- <ul> --> * Categories as fields. With this option, there are just as many output records as there were in the input. However, each record now contains one new field for every category that was selected on the Model tab. For each field, enter a flag value for true and for false, such as `True/False`, or `1/0`. In this flow, values are set to `1` and `0` to aggregate results and count the number of positive, negative, mixed (both positive and negative), or no score (no opinion) answers. Figure 6. Model results - categories as fields ![Model results - categories as fields](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_build_excats.png) * Categories as records. With this option, a new record is created for each `category, document` pair. Typically, there are more records in the output than there were in the input. Along with the input fields, new fields are also added to the data depending on what kind of model it is. Figure 7. Model results - categories as records ![Model results - categories as records](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_build_exrecs.png) <!-- </ul> --> 3. You can add a Select node after the DeriveSentiment SuperNode, set it to include records where `Sentiments=Pos`, and add a Charts node to gain quick insight about what guests appreciate about the hotel: Figure 8. Chart of positive opinions ![Chart of positive opinions](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_positive.png) <!-- </ol> --> <!-- </article "role="article" "> -->
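The difference between the two scoring modes is essentially a reshape. The sketch below converts a tiny, made-up categories-as-fields table (one 1/0 flag column per category) into the categories-as-records layout (one row per document and category pair).

```python
import pandas as pd

# Categories as fields: one 1/0 flag column per category (made-up data)
fields = pd.DataFrame({"id": [1, 2],
                       "Positive": [1, 0],
                       "Negative": [0, 1]})

# Categories as records: one row per (document, category) pair
records = fields.melt(id_vars="id", var_name="Category", value_name="Flag")
records = records[records["Flag"] == 1].drop(columns="Flag")
print(records.sort_values("id"))
```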
5E4D2166BB8C2B95E515591E014E7CA00B87BCA2
https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_ta_hotel_iwb.html?context=cdpaas&locale=en
Using the Text Analytics Workbench (SPSS Modeler)
Using the Text Analytics Workbench The Text Analytics Workbench contains the extraction results and the category model from the text analytics package.
# Using the Text Analytics Workbench # The Text Analytics Workbench contains the extraction results and the category model from the text analytics package\. <!-- </article "role="article" "> -->
F161A94239C1DC6696DBB583EC46BC64F3AA8906
https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_ta_hotel_tla.html?context=cdpaas&locale=en
Text Link Analysis node (SPSS Modeler)
Text Link Analysis node In some cases, you may not need to create a category model to score. The Text Link Analysis (TLA) node adds a pattern-matching technology to text mining's concept extraction. This identifies relationships between the concepts in the text data based on known patterns. These relationships can describe how a customer feels about a product, which companies are doing business together, or even the relationships between genes or pharmaceutical agents. Figure 1. Text Link Analysis node ![Text Link Analysis node](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_tla.png) 1. Add a Text Link Analysis node to your canvas and connect it to the Data Asset node that points to hotelSatisfaction.csv. Double-click the node to open its properties. 2. Select id for the ID field and Comments for the Text field. Note that only the Text field is required. 3. For Copy resources from, select the Hotel Satisfaction (English) template. Figure 2. Text Link Analysis node FIELD properties ![Text Link Analysis node FIELD properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_tlaprops.png) 4. Under Expert, select Accommodate spelling for a minimum word character length of. Figure 3. Text Link Analysis node Expert properties ![Text Link Analysis node Expert properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_tlaexpert.png)The resulting output is a table (or the result of an Export node). Figure 4. Raw TLA output ![Raw TLA output](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_tlaraw.png) Figure 5. Counting sentiments on a TLA node ![Counting sentiments on a TLA node](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_tlacount.png)
# Text Link Analysis node # In some cases, you may not need to create a category model to score\. The Text Link Analysis (TLA) node adds a pattern\-matching technology to text mining's concept extraction\. This identifies relationships between the concepts in the text data based on known patterns\. These relationships can describe how a customer feels about a product, which companies are doing business together, or even the relationships between genes or pharmaceutical agents\. Figure 1\. Text Link Analysis node ![Text Link Analysis node](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_tla.png) <!-- <ol> --> 1. Add a Text Link Analysis node to your canvas and connect it to the Data Asset node that points to hotelSatisfaction\.csv\. Double\-click the node to open its properties\. 2. Select `id` for the ID field and `Comments` for the Text field\. Note that only the Text field is required\. 3. For Copy resources from, select the Hotel Satisfaction (English) template\. Figure 2. Text Link Analysis node FIELD properties ![Text Link Analysis node FIELD properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_tlaprops.png) 4. Under Expert, select Accommodate spelling for a minimum word character length of\. Figure 3. Text Link Analysis node Expert properties ![Text Link Analysis node Expert properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_tlaexpert.png)The resulting output is a table (or the result of an Export node). Figure 4. Raw TLA output ![Raw TLA output](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_tlaraw.png) Figure 5. Counting sentiments on a TLA node ![Counting sentiments on a TLA node](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_tlacount.png) <!-- </ol> --> <!-- </article "role="article" "> -->
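Because the TLA output is an ordinary table of concept/opinion links, counting sentiments per topic (as in Figure 5) reduces to a group-by. The sketch below uses invented link rows with assumed column names, since the exact output schema depends on the template.

```python
import pandas as pd

# Hypothetical TLA output: one row per extracted concept/opinion link
tla = pd.DataFrame({"Concept": ["staff", "room", "breakfast", "staff"],
                    "Type":    ["Positive", "Negative", "Positive", "Positive"]})

# Count positive and negative links per topic
print(tla.groupby(["Concept", "Type"]).size())
```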
E7FAC7868F0D237EFBFCC625D8C265AEEBCA3E7D
https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_ta_hotel_tm.html?context=cdpaas&locale=en
Text Mining node (SPSS Modeler)
Text Mining node Figure 1. Text Mining node to analyze comments from hotel guests ![Text Mining node to analyze comments from hotel guests](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_tm.png) 1. Add a Data Asset node that points to hotelSatisfaction.csv. 2. From the Text Analytics category on the node palette, add a Text Mining node, connect it to the Data Asset node you added in the previous step, and double-click it to open its properties. 3. Under Fields, select Comments for the Text field and select id for the ID field. Note that only the Text field is required. Figure 2. Text Mining node properties ![Text Mining node build properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_tm_props1.png) 4. Under Copy resources from, select Text analysis package, click Select Resources, and then load Hotel Satisfaction (English).tap (with Current category set(s) = Topic + Opinion).A text analysis package (TAP) is a predefined set of libraries and advanced linguistic and nonlinguistic resources bundled with one or more sets of predefined categories. If no text analysis package is relevant for your application, you can instead start by selecting Resource template under Copy resources from. A resource template is a predefined set of libraries and advanced linguistic and nonlinguistic resources that have been fine-tuned for a particular domain or usage. Figure 3. Text Mining node properties ![Text Mining node build properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_tm_props2.png) 5. Under Build models, make sure Build interactively (category model nugget) is selected. Later when you run the node, this option will launch an interactive interface (known as the Text Analytics Workbench) in which you can extract concepts and patterns, explore and fine-tune the extracted results, build and refine categories, and build category model nuggets. 6. Under Begin session by, select Extracting concepts and text links. The option Extracting concepts extracts only concepts, whereas TLA extraction outputs both concepts and text links that are connections between topics (service, personnel, food, etc.) and opinions. 7. Under Expert, select Accommodate spelling for a minimum word character length of. This option applies a fuzzy grouping technique that helps group commonly misspelled words or closely spelled words under one concept. The fuzzy grouping algorithm temporarily strips all vowels (except the first one) and strips double/triple consonants from extracted words and then compares them to see if they're the same (so, for example, location and locatoin are grouped together). Figure 4. Text Mining node properties ![Text Mining node expert properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_tm_props3.png) 8. Click Save. Right-click the Text Mining node and run it to open the Text Analytics Workbench and proceed to the next section of this tutorial.
# Text Mining node # Figure 1\. Text Mining node to analyze comments from hotel guests ![Text Mining node to analyze comments from hotel guests](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_tm.png) <!-- <ol> --> 1. Add a Data Asset node that points to hotelSatisfaction\.csv\. 2. From the Text Analytics category on the node palette, add a Text Mining node, connect it to the Data Asset node you added in the previous step, and double\-click it to open its properties\. 3. Under Fields, select `Comments` for the Text field and select `id` for the ID field\. Note that only the Text field is required\. Figure 2. Text Mining node properties ![Text Mining node build properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_tm_props1.png) 4. Under Copy resources from, select Text analysis package, click Select Resources, and then load Hotel Satisfaction (English)\.tap (with `Current category set(s) = Topic + Opinion`)\.A text analysis package (TAP) is a predefined set of libraries and advanced linguistic and nonlinguistic resources bundled with one or more sets of predefined categories\. If no text analysis package is relevant for your application, you can instead start by selecting Resource template under Copy resources from\. A resource template is a predefined set of libraries and advanced linguistic and nonlinguistic resources that have been fine\-tuned for a particular domain or usage\. Figure 3. Text Mining node properties ![Text Mining node build properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_tm_props2.png) 5. Under Build models, make sure Build interactively (category model nugget) is selected\. Later when you run the node, this option will launch an interactive interface (known as the Text Analytics Workbench) in which you can extract concepts and patterns, explore and fine\-tune the extracted results, build and refine categories, and build category model nuggets\. 6. Under Begin session by, select Extracting concepts and text links\. The option Extracting concepts extracts only concepts, whereas TLA extraction outputs both concepts and text links that are connections between topics (service, personnel, food, etc\.) and opinions\. 7. Under Expert, select Accommodate spelling for a minimum word character length of\. This option applies a fuzzy grouping technique that helps group commonly misspelled words or closely spelled words under one concept\. The fuzzy grouping algorithm temporarily strips all vowels (except the first one) and strips double/triple consonants from extracted words and then compares them to see if they're the same (so, for example, `location` and `locatoin` are grouped together)\. Figure 4. Text Mining node properties ![Text Mining node expert properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/images/tut_ta_hotel_tm_props3.png) 8. Click Save\. Right\-click the Text Mining node and run it to open the Text Analytics Workbench and proceed to the next section of this tutorial\. <!-- </ol> --> <!-- </article "role="article" "> -->
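The fuzzy grouping behavior described in step 7 can be approximated in a few lines. This is only a sketch of the stated idea (keep the first vowel, drop the rest, collapse doubled letters), not Text Analytics' actual implementation.

```python
import re

def fuzzy_key(word: str) -> str:
    """Approximate fuzzy-grouping key: keep the first vowel, drop the
    rest, then collapse runs of repeated letters."""
    out, seen_vowel = [], False
    for ch in word.lower():
        if ch in "aeiou":
            if not seen_vowel:
                out.append(ch)
                seen_vowel = True
        else:
            out.append(ch)
    return re.sub(r"(.)\1+", r"\1", "".join(out))

# Misspellings map to the same key, so they group under one concept
print(fuzzy_key("location") == fuzzy_key("locatoin"))  # True
```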
ED7AFE85422B1DB8EAED166840D275DDDB63CAFA
https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.html?context=cdpaas&locale=en
Managing your account settings
Managing your account settings From the Account window you can view information about your IBM Cloud account and set the Resource scope, Credentials for connections, and Regional project storage settings for IBM watsonx. * [View account information](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.html?context=cdpaas&locale=enview-account-information) * [Set the scope for resources](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.html?context=cdpaas&locale=enset-the-scope-for-resources) * [Set the type of credentials for connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.html?context=cdpaas&locale=enset-the-credentials-for-connections) * [Set the login session expiration](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.html?context=cdpaas&locale=enset-expiration) You must be the IBM Cloud account owner or administrator to manage the account settings. View account information You can see the account name, ID and type. 1. Select Administration > Account and billing > Account to open the account window. 2. If you need to manage your Cloud account, click the Manage in IBM Cloud link to navigate to the Account page on IBM Cloud. Set the scope for resources By default, account users see resources based on membership. You can restrict the resource scope to the current account to control access. By setting the resource scope to the current account, users cannot access resources outside of their account, regardless of membership. The scope applies to projects, catalogs, and spaces. To restrict resources to current account: 1. Select Administration > Account and billing > Account to open the account settings window. 2. Set Resource scope to On. Access is updated immediately to be restricted to the current account. Set the credentials for connections The credentials for connections setting determines the type of credentials users must specify when creating a new connection. This setting applies only when new connections are created; existing connections are not affected. Either personal or shared credentials You can allow users the ability to specify personal or shared credentials when creating a new connection. Radio buttons will appear on the new connection form, allowing the user to select personal or shared. To allow the credential type to be chosen on the new connection form: 1. Select Administration > Account and billing > Account to open the account settings window. 2. Set both Shared credentials and Personal credentials to Enabled. Personal credentials When personal credentials are specified, each user enters their own credentials when creating a new connection or when using a connection to access data. To require personal credentials for all new connections: 1. Select Administration > Account and billing > Account to open the account settings window. 2. Set Personal credentials to Enabled. 3. Set Shared credentials to Disabled. Shared credentials With shared credentials, the credentials that were entered by the creator of the connection are made available to all other users when accessing data with the connection. To require shared credentials for all new connections: 1. Select Administration > Account and billing > Account to open the account settings window. 2. Set Shared credentials to Enabled. 3. Set Personal credentials to Disabled. Set the login session expiration Active and inactive session durations are managed through IBM Cloud. 
You are notified of a session expiration 5 minutes before the session expires. Unless your service supports autosaving, your work is not saved when your session expires. You can change the default durations for active and inactive sessions. For more information on required permissions and duration limits, see [Setting limits for login sessions](https://cloud.ibm.com/docs/account?topic=account-iam-work-sessions&interface=ui). To change the default durations: 1. From the watsonx navigation menu, select Administration > Access (IAM). 2. In IBM Cloud, select Manage > Access (IAM) > Settings. 3. Select the Login session tab. 4. For each expiration time that you want to change, edit the time and click Save. The inactivity duration cannot be longer than the maximum session duration, and the token lifetime cannot be longer than the inactivity duration. IBM Cloud prevents you from inputting an invalid combination of settings. Learn more * [Managing all projects in the account](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-manage-projects.html) * [Adding connections to projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) Parent topic:[Managing IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html)
# Managing your account settings # From the Account window you can view information about your IBM Cloud account and set the **Resource scope**, **Credentials for connections**, and **login session expiration** settings for IBM watsonx\. <!-- <ul> --> * [View account information](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.html?context=cdpaas&locale=en#view-account-information) * [Set the scope for resources](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.html?context=cdpaas&locale=en#set-the-scope-for-resources) * [Set the type of credentials for connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.html?context=cdpaas&locale=en#set-the-credentials-for-connections) * [Set the login session expiration](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.html?context=cdpaas&locale=en#set-expiration) <!-- </ul> --> You must be the IBM Cloud account owner or administrator to manage the account settings\. ## View account information ## You can see the account name, ID, and type\. <!-- <ol> --> 1. Select **Administration > Account and billing > Account** to open the account window\. 2. If you need to manage your Cloud account, click the **Manage in IBM Cloud** link to navigate to the Account page on IBM Cloud\. <!-- </ol> --> ## Set the scope for resources ## By default, account users see resources based on membership\. You can restrict the resource scope to the current account to control access\. By setting the resource scope to the current account, users cannot access resources outside of their account, regardless of membership\. The scope applies to projects, catalogs, and spaces\. To restrict resources to the current account: <!-- <ol> --> 1. Select **Administration > Account and billing > Account** to open the account settings window\. 2. Set **Resource scope** to **On**\. Access is updated immediately to be restricted to the current account\. <!-- </ol> --> ## Set the credentials for connections ## The credentials for connections setting determines the type of credentials users must specify when creating a new connection\. This setting applies only when new connections are created; existing connections are not affected\. ### Either personal or shared credentials ### You can allow users to choose personal or shared credentials when creating a new connection\. Radio buttons appear on the new connection form so that the user can select personal or shared\. To allow the credential type to be chosen on the new connection form: <!-- <ol> --> 1. Select **Administration > Account and billing > Account** to open the account settings window\. 2. Set both **Shared credentials** and **Personal credentials** to **Enabled**\. <!-- </ol> --> ### Personal credentials ### When personal credentials are specified, each user enters their own credentials when creating a new connection or when using a connection to access data\. To require personal credentials for all new connections: <!-- <ol> --> 1. Select **Administration > Account and billing > Account** to open the account settings window\. 2. Set **Personal credentials** to **Enabled**\. 3. Set **Shared credentials** to **Disabled**\. <!-- </ol> --> ### Shared credentials ### With shared credentials, the credentials that were entered by the creator of the connection are made available to all other users when accessing data with the connection\. To require shared credentials for all new connections: <!-- <ol> --> 1. 
Select **Administration > Account and billing > Account** to open the account settings window\. 2. Set **Shared credentials** to **Enabled**\. 3. Set **Personal credentials** to **Disabled**\. <!-- </ol> --> ## Set the login session expiration ## Active and inactive session durations are managed through IBM Cloud\. You are notified of a session expiration 5 minutes before the session expires\. Unless your service supports autosaving, your work is not saved when your session expires\. You can change the default durations for active and inactive sessions\. For more information on required permissions and duration limits, see [Setting limits for login sessions](https://cloud.ibm.com/docs/account?topic=account-iam-work-sessions&interface=ui)\. To change the default durations: <!-- <ol> --> 1. From the watsonx navigation menu, select **Administration > Access (IAM)**\. 2. In IBM Cloud, select **Manage > Access (IAM) > Settings**\. 3. Select the **Login session** tab\. 4. For each expiration time that you want to change, edit the time and click **Save**\. <!-- </ol> --> The inactivity duration cannot be longer than the maximum session duration, and the token lifetime cannot be longer than the inactivity duration\. IBM Cloud prevents you from inputting an invalid combination of settings\. ### Learn more ### <!-- <ul> --> * [Managing all projects in the account](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-manage-projects.html) * [Adding connections to projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) <!-- </ul> --> **Parent topic:**[Managing IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html) <!-- </article "role="article" "> -->
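A note on the duration constraints described above: the token lifetime must not exceed the inactivity duration, which in turn must not exceed the maximum session duration. The following sketch is illustrative only; the function and parameter names are hypothetical stand-ins for the values you enter on the Login session tab, and durations are assumed to be in seconds.

```python
def valid_session_settings(max_session_sec: int,
                           inactivity_sec: int,
                           token_lifetime_sec: int) -> bool:
    """Illustrative check of the ordering that IBM Cloud enforces:
    token lifetime <= inactivity duration <= maximum session duration.
    Parameter names are hypothetical, not actual IAM setting identifiers."""
    return token_lifetime_sec <= inactivity_sec <= max_session_sec

# Example: a 24-hour session, 2-hour inactivity timeout, 20-minute token.
print(valid_session_settings(86400, 7200, 1200))   # True
# A token lifetime that exceeds the inactivity duration is invalid.
print(valid_session_settings(86400, 7200, 14400))  # False
```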
88BAC0DA2CCB09C93C0013A209147CC5A5DCEE68
https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-apikeys.html?context=cdpaas&locale=en
Managing the user API key
Managing the user API key Certain operations in IBM watsonx require an API key for secure authorization. You can generate and rotate a user API key as needed to help ensure your operations run smoothly. User API key overview Operations running within services in IBM watsonx require credentials for secure authorization. These operations use an API key for authorization. A valid API key is required for many long-running tasks, including the following: * Model training in Watson Machine Learning * Problem solving with Decision Optimization * Data transformation with DataStage flows * Other runtime services (for example, Data Refinery and Pipelines) that accept API key references Both scheduled and ad hoc jobs require an API key for authorization. An API key is used for jobs when: * Creating a job schedule with a predefined key * Updating the API key for a scheduled job * Providing an API key for an ad hoc job User API keys give control to the account owner to secure and renew credentials, thus helping to ensure operations run without interruption. Keys are unique to the IBMid and account. If you change the account you are working in, you must generate a new key. Active and Phased out keys When you create an API key, it is placed in Active state. The Active key is used for authorization for operations in IBM watsonx. When you rotate a key, a new key is created in Active state and the existing key is changed to Phased out state. A Phased out key is not used for authorization and can be deleted. Viewing the current API key Click your avatar and select Profile and settings to open your account profile. Select User API key to view the Active and Phased out keys. Creating an API key If you do not have an API key, you can create a key by clicking Create a key. A new key is created in Active state. The key automatically authorizes operations that require a secure credential. The key is stored in both IBM Cloud and IBM watsonx. You can view the API keys for your IBM Cloud account at [API keys](https://cloud.ibm.com/iam/apikeys). User API Keys take the form cpd-apikey-{username}-{timeStamp}, where username is the IBMid of the account owner and timestamp indicates when the key was created. Rotating an API key If the API key becomes stale or invalid, you can generate a new Active key for use by all operations. To rotate a key, click Rotate. A new key is created to replace the current key. The rotated key is placed in Phased out status. A Phased out key is not available for use. Deleting a phased out API key When you are certain the phased out key is no longer needed for operations, click the minus sign to delete it. Deleting keys might cause running operations to fail. Deleting all API keys Delete all keys (both Active and Phased out) by clicking the trash can. Deleting keys might cause running operations to fail. Learn more * [Creating and managing jobs in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.html) * [Adding task credentials](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/task-credentials.html) * [Understanding API keys](https://cloud.ibm.com/docs/account?topic=account-manapikey&interface=ui) Parent topic:[Administering your accounts and services](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/administer-accounts.html)
# Managing the user API key # Certain operations in IBM watsonx require an API key for secure authorization\. You can generate and rotate a user API key as needed to help ensure your operations run smoothly\. ## User API key overview ## Operations running within services in IBM watsonx require credentials for secure authorization\. These operations use an API key for authorization\. A valid API key is required for many long\-running tasks, including the following: <!-- <ul> --> * Model training in Watson Machine Learning * Problem solving with Decision Optimization * Data transformation with DataStage flows * Other runtime services (for example, Data Refinery and Pipelines) that accept API key references <!-- </ul> --> Both scheduled and ad hoc jobs require an API key for authorization\. An API key is used for jobs when: <!-- <ul> --> * Creating a job schedule with a predefined key * Updating the API key for a scheduled job * Providing an API key for an ad hoc job <!-- </ul> --> User API keys give control to the account owner to secure and renew credentials, thus helping to ensure operations run without interruption\. Keys are unique to the IBMid and account\. If you change the account you are working in, you must generate a new key\. ### Active and Phased out keys ### When you create an API key, it is placed in **Active** state\. The **Active key** is used for authorization for operations in IBM watsonx\. When you rotate a key, a new key is created in **Active** state and the existing key is changed to **Phased out** state\. A Phased out key is not used for authorization and can be deleted\. ## Viewing the current API key ## Click your avatar and select **Profile and settings** to open your account profile\. Select **User API key** to view the **Active** and **Phased out** keys\. ## Creating an API key ## If you do not have an API key, you can create a key by clicking **Create a key**\. A new key is created in **Active** state\. The key automatically authorizes operations that require a secure credential\. The key is stored in both IBM Cloud and IBM watsonx\. You can view the API keys for your IBM Cloud account at [API keys](https://cloud.ibm.com/iam/apikeys)\. User API Keys take the form **cpd\-apikey\-\{username\}\-\{timeStamp\}**, where username is the IBMid of the account owner and timestamp indicates when the key was created\. ## Rotating an API key ## If the API key becomes stale or invalid, you can generate a new **Active** key for use by all operations\. To rotate a key, click **Rotate**\. A new key is created to replace the current key\. The rotated key is placed in **Phased out** status\. A **Phased out** key is not available for use\. ## Deleting a phased out API key ## When you are certain the phased out key is no longer needed for operations, click the minus sign to delete it\. Deleting keys might cause running operations to fail\. ## Deleting all API keys ## Delete all keys (both **Active** and **Phased out**) by clicking the trash can\. Deleting keys might cause running operations to fail\. 
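## Example: exchanging the API key for an IAM token ## Long-running operations use the Active key behind the scenes, but if you script against IBM Cloud APIs yourself, you typically exchange the API key for a short-lived IAM bearer token rather than sending the key on each request. The sketch below uses the Python requests library against the standard IBM Cloud IAM token endpoint; `MY_API_KEY` is a hypothetical environment variable holding your key.

```python
import os
import requests

# Exchange an IBM Cloud API key for a short-lived IAM access token.
# MY_API_KEY is a hypothetical environment variable holding a key such as
# cpd-apikey-{username}-{timeStamp}; update it whenever the key is rotated.
response = requests.post(
    "https://iam.cloud.ibm.com/identity/token",
    headers={"Content-Type": "application/x-www-form-urlencoded"},
    data={
        "grant_type": "urn:ibm:params:oauth:grant-type:apikey",
        "apikey": os.environ["MY_API_KEY"],
    },
)
response.raise_for_status()
token = response.json()["access_token"]

# Pass the token as a bearer Authorization header on subsequent API calls.
headers = {"Authorization": f"Bearer {token}"}
```

Because the token is short-lived, a rotated or deleted API key only breaks new token requests, which is why rotation keeps running operations from failing abruptly.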
## Learn more ## <!-- <ul> --> * [Creating and managing jobs in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.html) * [Adding task credentials](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/task-credentials.html) * [Understanding API keys](https://cloud.ibm.com/docs/account?topic=account-manapikey&interface=ui) <!-- </ul> --> **Parent topic:**[Administering your accounts and services](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/administer-accounts.html) <!-- </article "role="article" "> -->
A10DE0E026BA0CF397108621D5927E16436ACF58
https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-appid-tips.html?context=cdpaas&locale=en
Configuring App ID with your identity provider
Configuring App ID with your identity provider To use App ID for user authentication for IBM watsonx, you configure App ID as a service on IBM Cloud. You configure an identity provider (IdP) such as Azure Active Directory. You then configure App ID and the identity provider to communicate with each other to grant access to authorized users. To configure App ID and your identity provider to work together, follow these steps: * [Configure your identity provider to communicate with IBM Cloud](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-appid-tips.html?context=cdpaas&locale=encfg_idp) * [Configure App ID to communicate with your identity provider](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-appid-tips.html?context=cdpaas&locale=encfg_appid) * [Configure IAM to enable login through your identity provider](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-appid-tips.html?context=cdpaas&locale=encfg_iam) Configuring your identity provider To configure your identity provider to communicate with IBM Cloud, you enter the entityID and Location into your SAML configuration for your identity provider. An overview of the steps for configuring Azure Active Directory is provided as an example. Refer to the documentation for your identity provider for detailed instructions for your platform. The prerequisites for configuring App ID with an identity provider are: * An IBM Cloud account * An App ID instance * An identity provider, for example, Azure Active Directory To configure your identity provider for SAML-based single sign-on: 1. Download the SAML metadata file from App ID to find the values for entityID and Location. These values are entered into the identity provider configuration screen to establish communication with App ID on IBM Cloud. (The corresponding values from the identity provider, plus the primary certificate, are entered in App ID. See [Configuring App ID](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-appid-tips.html?context=cdpaas&locale=encfg_appid)). * In App ID, choose Identity providers > SAML 2.0 federation. * Download the appid-metadata.xml file. * Find the values for entityID and Location. 2. Copy the values for entityID and Location from the SAML metadata file and paste them into the corresponding fields on your identity provider. For Azure Active Directory, the fields are located in Section 1: Basic SAML Configuration in the Enterprise applications configuration screen. App ID value Active Directory field Example entityID Identifier (Entity ID) urn:ibm:cloud:services:appid:value Location Reply URL (Assertion Consumer Service URL) https://us-south.appid.cloud.ibm.com/saml2/v1/value/login-acs 3. In Section 2: Attributes & Claims for Azure Active Directory, you map the username parameter to user.mail to identify the users by their unique email address. IBM watsonx requires that you set username to the user.mail attribute. For other identity providers, a similar field that uniquely identifies users must be mapped to user.mail. Configuring App ID You establish communication between App ID and your identity provider by entering the SAML values from the identity provider into the corresponding App ID fields. An example is provided for configuring App ID to communicate with an Active Directory Enterprise Application. 1. Choose Identity providers > SAML 2.0 federation and complete the Provide metadata from SAML IdP section. 2. 
Download the Base64 certificate from Section 3: SAML Certificates in Active Directory (or your identity provider) and paste it into the Primary certificate field. 3. Copy the values from Section 4: Set up your-enterprise-application in Active Directory into the corresponding fields in Provide metadata from SAML IdP in IBM App ID. App ID field Value from Active Directory Entity ID Azure AD Identifier Sign in URL Login URL Primary certificate Certificate (Base64) 4. Click Test on the App ID page to test that App ID can connect to the identity provider. The happy face response indicates that App ID can communicate with the identity provider. ![Successful test](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/images/appid_good_job.png) Configuring IAM You must assign the appropriate role to the users in IBM Cloud IAM and also configure your identity provider in IAM. Users require at least the Viewer role for All Identity and IAM enabled services. Create an identity provider reference in IBM Cloud IAM Create an identity provider reference to connect your external repository to your IBM Cloud account. 1. Navigate to Manage > Access(IAM) > Identity providers. 2. For the type, choose IBM Cloud App ID. 3. Click Create. 4. Enter a name for the identity provider. 5. Select the App ID service instance. 6. Select how to onboard users. Static adds users when they log in for the first time. 7. Enable the identity provider for logging in by checking the Enable for account login? box. 8. If you have more than one identity provider, set this identity provider as the default by checking the box. 9. Click Create. Change the App ID login alias A login alias is generated for App ID. Users enter the alias when logging on to IBM Cloud. You can change the default alias string to be easier to remember. 1. Navigate to Manage > Access(IAM) > Identity providers. 2. Select IBM Cloud App ID as the type. 3. Edit the Default IdP URL to make it simpler. For example, https://cloud.ibm.com/authorize/540f5scc241a24a70513961 can be changed to https://cloud.ibm.com/authorize/my-company. Users log in with the alias my-company instead of 540f5scc241a24a70513961. Learn more * [IBM Cloud docs: Managing authentication](https://cloud.ibm.com/docs/appid?topic=appid-managing-idp) * [IBM Cloud docs: Configuring federated identity providers: SAML](https://cloud.ibm.com/docs/appid?topic=appid-enterpriseenterprise) * [IBM Cloud SAML Federation Guide](https://www.ibm.com/cloud/blog/ibm-cloud-saml-federation-guide) * [Setting up IBM Cloud App ID with your Azure Active Directory](https://www.ibm.com/cloud/blog/setting-ibm-cloud-app-id-azure-active-directory) * [Reusing Existing Red Hat SSO and Keycloak for Applications That Run on IBM Cloud with App ID](https://www.ibm.com/cloud/blog/reusing-existing-red-hat-sso-and-keycloak-for-applications-that-run-on-ibm-cloud-with-app-id) Parent topic:[Setting up IBM Cloud App ID (beta)](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-appid.html)
# Configuring App ID with your identity provider # To use App ID for user authentication for IBM watsonx, you configure App ID as a service on IBM Cloud\. You configure an identity provider (IdP) such as Azure Active Directory\. You then configure App ID and the identity provider to communicate with each other to grant access to authorized users\. To configure App ID and your identity provider to work together, follow these steps: <!-- <ul> --> * [Configure your identity provider to communicate with IBM Cloud](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-appid-tips.html?context=cdpaas&locale=en#cfg_idp) * [Configure App ID to communicate with your identity provider](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-appid-tips.html?context=cdpaas&locale=en#cfg_appid) * [Configure IAM to enable login through your identity provider](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-appid-tips.html?context=cdpaas&locale=en#cfg_iam) <!-- </ul> --> ## Configuring your identity provider ## To configure your identity provider to communicate with IBM Cloud, you enter the **entityID** and **Location** into your SAML configuration for your identity provider\. An overview of the steps for configuring Azure Active Directory is provided as an example\. Refer to the documentation for your identity provider for detailed instructions for your platform\. The prerequisites for configuring App ID with an identity provider are: <!-- <ul> --> * An IBM Cloud account * An App ID instance * An identity provider, for example, Azure Active Directory <!-- </ul> --> To configure your identity provider for SAML\-based single sign\-on: 1\. Download the SAML metadata file from App ID to find the values for **entityID** and **Location**\. These values are entered into the identity provider configuration screen to establish communication with App ID on IBM Cloud\. (The corresponding values from the identity provider, plus the primary certificate, are entered in App ID\. See [Configuring App ID](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-appid-tips.html?context=cdpaas&locale=en#cfg_appid))\. <!-- <ul> --> * In App ID, choose **Identity providers > SAML 2\.0 federation**\. * Download the **appid\-metadata\.xml** file\. * Find the values for **entityID** and **Location**\. <!-- </ul> --> 2\. Copy the values for **entityID** and **Location** from the SAML metadata file and paste them into the corresponding fields on your identity provider\. For Azure Active Directory, the fields are located in **Section 1: Basic SAML Configuration** in the Enterprise applications configuration screen\. <!-- <table> --> | App ID value | Active Directory field | Example | | ------------ | ------------------------------------------ | --------------------------------------------------------------- | | entityID | Identifier (Entity ID) | urn:ibm:cloud:services:appid:value | | Location | Reply URL (Assertion Consumer Service URL) | `https://us-south.appid.cloud.ibm.com/saml2/v1/value/login-acs` | <!-- </table ""> --> 3\. In **Section 2: Attributes & Claims** for Azure Active Directory, you map the username parameter to **user\.mail** to identify the users by their unique email address\. IBM watsonx requires that you set username to the **user\.mail** attribute\. For other identity providers, a similar field that uniquely identifies users must be mapped to **user\.mail**\. 
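If you want to extract the **entityID** and **Location** values from the downloaded metadata file programmatically rather than by inspection, a short sketch follows. It assumes that **appid\-metadata\.xml** follows the standard SAML 2\.0 metadata schema (an `EntityDescriptor` root containing an `AssertionConsumerService` element); verify the extracted values against the file before pasting them into your identity provider.

```python
import xml.etree.ElementTree as ET

# Namespace used by standard SAML 2.0 metadata documents.
MD = "{urn:oasis:names:tc:SAML:2.0:metadata}"

tree = ET.parse("appid-metadata.xml")  # file downloaded from App ID
root = tree.getroot()

# entityID is an attribute on the EntityDescriptor root element.
entity_id = root.attrib.get("entityID")

# Location is an attribute on the AssertionConsumerService element.
acs = root.find(f".//{MD}AssertionConsumerService")
location = acs.attrib.get("Location") if acs is not None else None

print("entityID:", entity_id)  # e.g. urn:ibm:cloud:services:appid:...
print("Location:", location)   # e.g. https://...appid.cloud.ibm.com/saml2/v1/.../login-acs
```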
## Configuring App ID ## You establish communication between App ID and your identity provider by entering the SAML values from the identity provider into the corresponding App ID fields\. An example is provided for configuring App ID to communicate with an Active Directory Enterprise Application\. 1\. Choose **Identity providers > SAML 2\.0 federation** and complete the **Provide metadata from SAML IdP** section\. 2\. Download the Base64 certificate from **Section 3: SAML Certificates** in Active Directory (or your identity provider) and paste it into the **Primary certificate** field\. 3\. Copy the values from **Section 4: Set up your\-enterprise\-application** in Active Directory into the corresponding fields in **Provide metadata from SAML IdP** in IBM App ID\. <!-- <table> --> | App ID field | Value from Active Directory | | ------------------- | --------------------------- | | Entity ID | Azure AD Identifier | | Sign in URL | Login URL | | Primary certificate | Certificate (Base64) | <!-- </table ""> --> 4\. Click **Test** on the App ID page to test that App ID can connect to the identity provider\. The happy face response indicates that App ID can communicate with the identity provider\. ![Successful test](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/images/appid_good_job.png) ## Configuring IAM ## You must assign the appropriate role to the users in IBM Cloud IAM and also configure your identity provider in IAM\. Users require at least the **Viewer** role for **All Identity and IAM enabled services**\. ### Create an identity provider reference in IBM Cloud IAM ### Create an identity provider reference to connect your external repository to your IBM Cloud account\. <!-- <ol> --> 1. Navigate to **Manage > Access(IAM) > Identity providers**\. 2. For the type, choose **IBM Cloud App ID**\. 3. Click **Create**\. 4. Enter a name for the identity provider\. 5. Select the App ID service instance\. 6. Select how to onboard users\. Static adds users when they log in for the first time\. 7. Enable the identity provider for logging in by checking the **Enable for account login?** box\. 8. If you have more than one identity provider, set this identity provider as the default by checking the box\. 9. Click **Create**\. <!-- </ol> --> ### Change the App ID login alias ### A login alias is generated for App ID\. Users enter the alias when logging on to IBM Cloud\. You can change the default alias string to be easier to remember\. <!-- <ol> --> 1. Navigate to **Manage > Access(IAM) > Identity providers**\. 2. Select **IBM Cloud App ID** as the type\. 3. Edit the **Default IdP URL** to make it simpler\. For example, **`https://cloud.ibm.com/authorize/540f5scc241a24a70513961`** can be changed to **`https://cloud.ibm.com/authorize/my-company`**\. Users log in with the alias **my\-company** instead of **540f5scc241a24a70513961**\. 
<!-- </ol> --> ## Learn more ## <!-- <ul> --> * [IBM Cloud docs: Managing authentication](https://cloud.ibm.com/docs/appid?topic=appid-managing-idp) * [IBM Cloud docs: Configuring federated identity providers: SAML](https://cloud.ibm.com/docs/appid?topic=appid-enterprise#enterprise) * [IBM Cloud SAML Federation Guide](https://www.ibm.com/cloud/blog/ibm-cloud-saml-federation-guide) * [Setting up IBM Cloud App ID with your Azure Active Directory](https://www.ibm.com/cloud/blog/setting-ibm-cloud-app-id-azure-active-directory) * [Reusing Existing Red Hat SSO and Keycloak for Applications That Run on IBM Cloud with App ID](https://www.ibm.com/cloud/blog/reusing-existing-red-hat-sso-and-keycloak-for-applications-that-run-on-ibm-cloud-with-app-id) <!-- </ul> --> **Parent topic:**[Setting up IBM Cloud App ID (beta)](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-appid.html) <!-- </article "role="article" "> -->
77393F760A3A3F834809ACA1078BDF229331C2FD
https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-appid.html?context=cdpaas&locale=en
Overview for setting up IBM Cloud App ID (beta)
Overview for setting up IBM Cloud App ID (beta) IBM watsonx supports IBM Cloud App ID to integrate customer registries for user authentication. You configure App ID on IBM Cloud to communicate with an identity provider. You then provide an alias to the people in your organization to log in to IBM watsonx. Required roles : To configure identity providers for App ID, you must have one of the following roles in the IBM Cloud account: : - Account owner : - Operator or higher on the App ID instance : - Operator or Administrator role on the IAM Identity Service App ID is configured entirely on IBM Cloud. An identity provider, for example, Active Directory, must also be configured separately to communicate with App ID. For more information on configuring App ID to work with an identity provider, see [Configuring App ID with your identity provider](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-appid-tips.html). Configuring the log on alias The App ID instance is configured as the default identity provider for the account. For instructions on configuring an identity provider, refer to [IBM Cloud docs: Enabling authentication from an external identity provider](https://cloud.ibm.com/docs/account?topic=account-idp-integration). Each App ID instance requires a unique alias. There is one alias per account. All users in an account log in with the same alias. When the identity provider is configured, the alias is initially set to the account ID. You can [change the initial alias](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-appid-tips.htmlcfg_alias) to be easier to type and remember. Logging in with App ID (beta) Users choose App ID (beta) as the login method on the IBM watsonx login page and enter the alias. Then, they are redirected to their company's login page to enter their company credentials. Upon logging in successfully to their company, they are redirected to IBM watsonx. To verify that the alias is correctly configured, go to the User profile and settings page. Verify that the username in the profile is the email from your company’s registry. The alias is correct if the correct email is shown in the profile, as it indicates that the mapping was successful. You cannot switch accounts when logging in through App ID. Limitations The following limitations apply to this beta release: * You must map the name/username/sub SAML profile properties to the email property in the user registry. If the mapping is absent or incorrect, a default opaque user ID is used, which is not supported in this beta release. * The IBM Cloud login page does not support an App ID alias. Users log in to IBM Cloud with a custom URL, following this form: https://cloud.ibm.com/authorize/{app_id_alias}. * If you are using the Cloud Directory included with App ID as your user registry, you must select Username and password as the option for Manage authentication > Cloud Directory > Settings > Allow users to sign-up and sign-in using. 
Learn more * [Logging in to watsonx.ai through IBM App ID (beta)](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/signup-wx.htmlappid) * [Configuring App ID with your identity provider](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-appid-tips.html) * [IBM Cloud docs: Getting started with App ID](https://cloud.ibm.com/docs/appid?topic=appid-getting-started) * [IBM Cloud docs: Enabling authentication from an external identity provider](https://cloud.ibm.com/docs/account?topic=account-idp-integration) Parent topic:[Managing IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html)
# Overview for setting up IBM Cloud App ID (beta) # IBM watsonx supports IBM Cloud App ID to integrate customer registries for user authentication\. You configure App ID on IBM Cloud to communicate with an identity provider\. You then provide an alias to the people in your organization to log in to IBM watsonx\. **Required roles** : To configure identity providers for App ID, you must have one of the following roles in the IBM Cloud account: : \- **Account owner** : \- **Operator** or higher on the App ID instance : \- **Operator** or **Administrator** role on the IAM Identity Service App ID is configured entirely on IBM Cloud\. An identity provider, for example, Active Directory, must also be configured separately to communicate with App ID\. For more information on configuring App ID to work with an identity provider, see [Configuring App ID with your identity provider](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-appid-tips.html)\. ## Configuring the log on alias ## The App ID instance is configured as the default identity provider for the account\. For instructions on configuring an identity provider, refer to [IBM Cloud docs: Enabling authentication from an external identity provider](https://cloud.ibm.com/docs/account?topic=account-idp-integration)\. Each App ID instance requires a unique alias\. There is one alias per account\. All users in an account log in with the same alias\. When the identity provider is configured, the alias is initially set to the account ID\. You can [change the initial alias](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-appid-tips.html#cfg_alias) to be easier to type and remember\. ## Logging in with App ID (beta) ## Users choose **App ID (beta)** as the login method on the IBM watsonx login page and enter the alias\. Then, they are redirected to their company's login page to enter their company credentials\. Upon logging in successfully to their company, they are redirected to IBM watsonx\. To verify that the alias is correctly configured, go to the **User profile and settings** page\. Verify that the username in the profile is the email from your company’s registry\. The alias is correct if the correct email is shown in the profile, as it indicates that the mapping was successful\. You cannot switch accounts when logging in through App ID\. ## Limitations ## The following limitations apply to this beta release: <!-- <ul> --> * You must map the **name/username/sub** SAML profile properties to the email property in the user registry\. If the mapping is absent or incorrect, a default opaque user ID is used, which is not supported in this beta release\. * The IBM Cloud login page does not support an App ID alias\. Users log in to IBM Cloud with a custom URL, following this form: `https://cloud.ibm.com/authorize/{app_id_alias}`\. <!-- </ul> --> <!-- <ul> --> * If you are using the Cloud Directory included with App ID as your user registry, you must select **Username and password** as the option for **Manage authentication > Cloud Directory > Settings > Allow users to sign\-up and sign\-in using**\. 
<!-- </ul> --> ## Learn more ## <!-- <ul> --> * [Logging in to watsonx\.ai through IBM App ID (beta)](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/signup-wx.html#appid) * [Configuring App ID with your identity provider](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-appid-tips.html) * [IBM Cloud docs: Getting started with App ID](https://cloud.ibm.com/docs/appid?topic=appid-getting-started) * [IBM Cloud docs: Enabling authentication from an external identity provider](https://cloud.ibm.com/docs/account?topic=account-idp-integration) <!-- </ul> --> **Parent topic:**[Managing IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html) <!-- </article "role="article" "> -->
78A4D6515FAA2766FEB3A03CA6A378846CF33D83
https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-manage-projects.html?context=cdpaas&locale=en
Managing all projects in the account
Managing all projects in the account If you have the required permission, you can view and manage all projects in your IBM Cloud account. You can add yourself to a project so that you can delete it or change its collaborators. Requirements To manage all projects in the account, you must: * Restrict resources to the current account. See steps to [set the scope for resources](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.htmlset-the-scope-for-resources). * Have the Manage projects permission that is provided by the IAM Manager role for the IBM Cloud Pak for Data service. Assigning the Manage projects permission To grant the Manage projects permission to a user who is already in your IBM Cloud account: 1. From the navigation menu, choose Administration > Access (IAM) to open the Manage access and users page in your IBM Cloud account. 2. Select the user on the Users page. 3. Click the Access tab and then choose Assign access+. 4. Select Access policy. 5. For Service, choose IBM Cloud Pak for Data. 6. For Service access, select the Manager role. 7. For Platform access, assign the Editor role. 8. Click Add and Assign to assign the policy to the user. Managing projects You can add yourself to a project when you need to delete the project, delete collaborators, or assign the Admin role to a collaborator in the project. To manage projects: * View all active projects on the Projects page in IBM watsonx by clicking the drop-down menu next to the search field and selecting All active projects. * Join any project as Admin by clicking Join as admin in the Your role column. * Filter projects to identify which projects you are not a collaborator in, by clicking the filter icon ![Filter icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/filter.svg) and selecting Your role > No membership. For more details on managing projects, see [Administering projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/admin-project.html). Learn more * [Administering projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/admin-project.html) Parent topic:[Managing IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html)
# Managing all projects in the account # If you have the required permission, you can view and manage all projects in your IBM Cloud account\. You can add yourself to a project so that you can delete it or change its collaborators\. ## Requirements ## To manage all projects in the account, you must: <!-- <ul> --> * Restrict resources to the current account\. See steps to [set the scope for resources](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.html#set-the-scope-for-resources)\. * Have the **Manage projects** permission that is provided by the IAM **Manager** role for the IBM Cloud Pak for Data service\. <!-- </ul> --> ## Assigning the Manage projects permission ## To grant the **Manage projects** permission to a user who is already in your IBM Cloud account: <!-- <ol> --> 1. From the navigation menu, choose **Administration > Access (IAM)** to open the **Manage access and users** page in your IBM Cloud account\. 2. Select the user on the **Users** page\. 3. Click the **Access** tab and then choose **Assign access\+**\. 4. Select **Access policy**\. 5. For **Service**, choose **IBM Cloud Pak for Data**\. 6. For **Service access**, select the **Manager** role\. 7. For **Platform access**, assign the **Editor** role\. 8. Click **Add** and **Assign** to assign the policy to the user\. <!-- </ol> --> ## Managing projects ## You can add yourself to a project when you need to delete the project, delete collaborators, or assign the **Admin** role to a collaborator in the project\. To manage projects: <!-- <ul> --> * View all active projects on the **Projects** page in IBM watsonx by clicking the drop\-down menu next to the search field and selecting **All active projects**\. * Join any project as **Admin** by clicking **Join as admin** in the **Your role** column\. * Filter projects to identify which projects you are not a collaborator in, by clicking the filter icon ![Filter icon](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/images/filter.svg) and selecting **Your role > No membership**\. <!-- </ul> --> For more details on managing projects, see [Administering projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/admin-project.html)\. ## Learn more ## <!-- <ul> --> * [Administering projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/admin-project.html) <!-- </ul> --> **Parent topic:**[Managing IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html) <!-- </article "role="article" "> -->
96C0566DA4EB3450616C3F358C32837BFD4DE6C8
https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-removeusers.html?context=cdpaas&locale=en
Removing users from the account or from the workspace
Removing users from the account or from the workspace The IBM Cloud account administrator or owner can remove users from the IBM Cloud account. Any user with the Admin role can remove users from a workspace. Removing users from the IBM Cloud account You can remove a user from an IBM Cloud account, so that the user can no longer log in to the console, switch to your account, or access account resources. Required roles : To remove a user from an IBM Cloud account, you must have one of the following roles for your IBM Cloud account: : - Owner : - Administrator : - Editor To remove a user from the IBM Cloud account: 1. From the IBM watsonx navigation menu, click Administration > Access (IAM). 2. Click Users and find the name of the user that you want to remove. 3. Choose Remove user from the action menu and confirm the removal. Removing a user from an account doesn't delete the IBMid for the user. Any resources such as projects or catalogs that were created by the user remain in the account, but the user no longer has access to work with those resources. The account owner, or an administrator for the service instance, can assign other users to work with the projects and catalogs or delete them entirely. For more information, see [IBM Cloud docs: Removing users from an account](https://cloud.ibm.com/docs/account?topic=account-remove). Removing users from a workspace You can remove collaborators from a workspace, such as a project or space, so that the user can no longer access the workspace or any of its contents. Required role : To remove a user from a workspace, you must have the Admin collaborator role for the workspace that you are editing. To remove a collaborator, select one or more users (or user groups) on the Access control page of the workspace and click Remove. The user is still a member of the IBM Cloud account and can be added as a collaborator to other workspaces as needed. Learn more * [Stop using IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/stopapps.html) * [Project collaborators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html) * [IBM Cloud docs: Removing users from an account](https://cloud.ibm.com/docs/account?topic=account-remove) Parent topic:[Managing the platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html)
# Removing users from the account or from the workspace # The IBM Cloud account administrator or owner can remove users from the IBM Cloud account\. Any user with the **Admin** role can remove users from a workspace\. ## Removing users from the IBM Cloud account ## You can remove a user from an IBM Cloud account, so that the user can no longer log in to the console, switch to your account, or access account resources\. ### Required roles ### : To remove a user from an IBM Cloud account, you must have one of the following roles for your IBM Cloud account: : \- **Owner** : \- **Administrator** : \- **Editor** To remove a user from the IBM Cloud account: <!-- <ol> --> 1. From the IBM watsonx navigation menu, click **Administration > Access (IAM)**\. 2. Click **Users** and find the name of the user that you want to remove\. 3. Choose **Remove user** from the action menu and confirm the removal\. <!-- </ol> --> Removing a user from an account doesn't delete the IBMid for the user\. Any resources such as projects or catalogs that were created by the user remain in the account, but the user no longer has access to work with those resources\. The account owner, or an administrator for the service instance, can assign other users to work with the projects and catalogs or delete them entirely\. For more information, see [IBM Cloud docs: Removing users from an account](https://cloud.ibm.com/docs/account?topic=account-remove)\. ## Removing users from a workspace ## You can remove collaborators from a workspace, such as a project or space, so that the user can no longer access the workspace or any of its contents\. **Required role** : To remove a user from a workspace, you must have the **Admin** collaborator role for the workspace that you are editing\. To remove a collaborator, select one or more users (or user groups) on the **Access control** page of the workspace and click **Remove**\. The user is still a member of the IBM Cloud account and can be added as a collaborator to other workspaces as needed\. ## Learn more ## <!-- <ul> --> * [Stop using IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/stopapps.html) * [Project collaborators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html) * [IBM Cloud docs: Removing users from an account](https://cloud.ibm.com/docs/account?topic=account-remove) <!-- </ul> --> **Parent topic:**[Managing the platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html) <!-- </article "role="article" "> -->
28F15AC17715506BB29327874DE7F76CB9FB2908
https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/administer-accounts.html?context=cdpaas&locale=en
Administering your accounts and services
Administering your accounts and services For most administration tasks, you must be the IBM Cloud account owner or administrator. If you log in to your own account, you are the account owner. If you log in to someone else's account or an enterprise account, you might not be the account owner or administrator. Tasks for all users: * [Managing your personal settings](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.html) * [Determining your roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/your-roles.html) * [Understanding accessibility features](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/accessibility.html) Tasks for IBM Cloud account owners or administrators in IBM watsonx and in IBM Cloud: * [Managing IBM watsonx services](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html) * [Securing IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html) * [Managing your IBM Cloud account](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/manage-account.html) * [Adding and managing IBM Cloud services](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/create-services.html) * [Reading notices](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/notices.html)
# Administering your accounts and services # For most administration tasks, you must be the IBM Cloud account owner or administrator\. If you log in to your own account, you are the account owner\. If you log in to someone else's account or an enterprise account, you might not be the account owner or administrator\. Tasks for all users: <!-- <ul> --> * [Managing your personal settings](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.html) * [Determining your roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/your-roles.html) * [Understanding accessibility features](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/accessibility.html) <!-- </ul> --> Tasks for IBM Cloud account owners or administrators in IBM watsonx and in IBM Cloud: <!-- <ul> --> * [Managing IBM watsonx services](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html) * [Securing IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html) * [Managing your IBM Cloud account](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/manage-account.html) * [Adding and managing IBM Cloud services](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/create-services.html) * [Reading notices](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/notices.html) <!-- </ul> --> <!-- </article "role="article" "> -->
6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8
https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/at-events.html?context=cdpaas&locale=en
Activity Tracker events
Activity Tracker events You can see events for actions on your provisioned services in IBM Cloud Activity Tracker. You can use the information that is registered through the IBM Cloud Activity Tracker service to identify security incidents, detect unauthorized access, and comply with regulatory and internal auditing requirements. To get started, provision an instance of the IBM Cloud Activity Tracker service. See [IBM Cloud Activity Tracker](https://cloud.ibm.com/docs/activity-tracker?topic=activity-tracker-getting-started). View events in the Activity Tracker in the same IBM Cloud region where you provisioned your services. To view the account and user management events and other global platform events, you must provision an instance of the IBM Cloud Activity Tracker service in the Frankfurt (eu-de) region. See [Platform services](https://cloud.ibm.com/docs/activity-tracker?topic=activity-tracker-cloud_services_locationscloud_services_locations_core_integrated). * [Events for account and user management](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/at-events.html?context=cdpaas&locale=enacct) * [Events for Watson Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/at-events.html?context=cdpaas&locale=enws) * [Events for Watson Machine Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/at-events.html?context=cdpaas&locale=enwml) * [Events for model evaluation (Watson OpenScale)](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/at-events.html?context=cdpaas&locale=enwos) Events for account and user management You can audit account and user management events in Activity Tracker, including: * Billing events * Global catalog events * IAM and user management events For the complete list of account and user management events, see [IBM Cloud docs: Auditing events for account management](https://cloud.ibm.com/docs/activity-tracker?topic=activity-tracker-at_events_acc_mgt). Events for Watson Studio Events in Activity Tracker for Watson Studio Action Description data-science-experience.project.create Create a project. data-science-experience.project.delete Delete a project. data-science-experience.notebook.create Create a Notebook. data-science-experience.notebook.delete Delete a Notebook. data-science-experience.notebook.update Change the runtime service of a Notebook by selecting another one. data-science-experience.rstudio.start Open RStudio. data-science-experience.rstudio.stop RStudio session timed out. 
Events for Decision Optimization Events in Activity Tracker for Decision Optimization Action Description domodel.decision.create Create experiments domodel.decision.update Update experiments domodel.decision.delete Delete experiments domodel.container.create Create scenarios domodel.container.update Update scenarios domodel.container.delete Delete scenarios domodel.notebook.import Update a scenario from a notebook domodel.notebook.export Generate a model notebook from a scenario domodel.wml.export Generate Watson Machine Learning models from a scenario domodel.solve.start Solve a scenario domodel.solve.stop Cancel a solve Events for feature groups Events in Activity Tracker for feature groups (Watson Studio) Action Description data_science_experience.feature-group.retrieve Retrieve a feature group data_science_experience.feature-group.create Create a feature group data_science_experience.feature-group.update Update a feature group data_science_experience.feature-group.delete Delete a feature group Events for asset management Events in Activity Tracker for asset management in Watson Studio Action Description datacatalog.asset.clone Copy an asset. datacatalog.asset.create Create an asset. datacatalog.data-asset.create Create a data asset. datacatalog.folder-asset.create Create a folder asset. datacatalog.type.create Create an asset type. datacatalog.asset.purge Delete an asset from the trash. datacatalog.asset.restore Restore an asset from the trash. datacatalog.asset.trash Send an asset to the trash. datacatalog.asset.update Update an asset. datacatalog.promoted-asset.create Create a project asset in a space. datacatalog.promoted-asset.update Update a space asset that started in a project. datacatalog.asset.promote Promote an asset from project to space. Events for asset attachments Events in Activity Tracker for attachments Action Description datacatalog.attachment.create Create an attachment. datacatalog.attachment.delete Delete an attachment. datacatalog.attachment-resources.increase Increase resources for an attachment. datacatalog.complete.transfer Mark an attachment as transfer complete. datacatalog.attachment.update Update attachment metadata. Events for asset attributes Events in Activity Tracker for attributes Action Description datacatalog.attribute.create Create an attribute. datacatalog.attribute.delete Delete an attribute. datacatalog.attribute.update Update an attribute. Events for connections Events in Activity Tracker for connections Action Description wdp-connect-connection.connection.read Read a connection. wdp-connect-connection.connection.get Retrieve a connection. wdp-connect-connection.connection.get.list Get a list of connections. wdp-connect-connection.connection.create Create a connection. wdp-connect-connection.connection.delete Delete a connection. Events for scheduling Events in Activity Tracker for scheduling Action Description wdp.scheduling.schedule.update.failed An update to a schedule failed. wdp.scheduling.schedule.create.failed The creation of a schedule failed. wdp.scheduling.schedule.read Read a schedule. wdp.scheduling.schedule.update Update a schedule. wdp.scheduling.schedule.delete.multiple Delete multiple schedules. wdp.scheduling.schedule.list List all schedules. wdp.scheduling.schedule.create Create a schedule. 
# Activity Tracker events # You can see the events for actions for your provisioned services in the IBM Cloud Activity Tracker\. You can use the information that is registered through the IBM Cloud Activity Tracker service to identify security incidents, detect unauthorized access, and comply with regulatory and internal auditing requirements\. To get started, provision an instance of the IBM Cloud Activity Tracker service\. See [IBM Cloud Activity Tracker](https://cloud.ibm.com/docs/activity-tracker?topic=activity-tracker-getting-started)\. View events in the Activity Tracker in the same IBM Cloud region where you provisioned your services\. To view the account and user management events and other global platform events, you must provision an instance of the IBM Cloud Activity Tracker service in the **Frankfurt (eu\-de)** region\. See [Platform services](https://cloud.ibm.com/docs/activity-tracker?topic=activity-tracker-cloud_services_locations#cloud_services_locations_core_integrated)\. <!-- <ul> --> * [Events for account and user management](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/at-events.html?context=cdpaas&locale=en#acct) * [Events for Watson Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/at-events.html?context=cdpaas&locale=en#ws) * [Events for Watson Machine Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/at-events.html?context=cdpaas&locale=en#wml) * [Events for model evaluation (Watson OpenScale)](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/at-events.html?context=cdpaas&locale=en#wos) <!-- </ul> --> ## Events for account and user management ## You can audit account and user management events in Activity Tracker, including: <!-- <ul> --> * Billing events * Global catalog events * IAM and user management events <!-- </ul> --> For the complete list of account and user management events, see [IBM Cloud docs: Auditing events for account management](https://cloud.ibm.com/docs/activity-tracker?topic=activity-tracker-at_events_acc_mgt)\. ## Events for Watson Studio ## <!-- <table> --> Events in Activity Tracker for Watson Studio | Action | Description | | ------------------------------------------- | ------------------------------------------------------------------- | | data\-science\-experience\.project\.create | Create a project\. | | data\-science\-experience\.project\.delete | Delete a project\. | | data\-science\-experience\.notebook\.create | Create a Notebook\. | | data\-science\-experience\.notebook\.delete | Delete a Notebook\. | | data\-science\-experience\.notebook\.update | Change the runtime service of a Notebook by selecting another one\. | | data\-science\-experience\.rstudio\.start | Open RStudio\. | | data\-science\-experience\.rstudio\.stop | RStudio session timed out\. 
| <!-- </table ""> --> ### Events for Decision Optimization ### <!-- <table> --> Events in Activity Tracker for Decision Optimization | Action | Description | | -------------------------- | ------------------------------------------------------- | | domodel\.decision\.create | Create experiments | | domodel\.decision\.update | Update experiments | | domodel\.decision\.delete | Delete experiments | | domodel\.container\.create | Create scenarios | | domodel\.container\.update | Update scenarios | | domodel\.container\.delete | Delete scenarios | | domodel\.notebook\.import | Update a scenario from a notebook | | domodel\.notebook\.export | Generate a model notebook from a scenario | | domodel\.wml\.export | Generate Watson Machine Learning models from a scenario | | domodel\.solve\.start | Solve a scenario | | domodel\.solve\.stop | Cancel a solve | <!-- </table ""> --> ### Events for feature groups ### <!-- <table> --> Events in Activity Tracker for feature groups (Watson Studio) | Action | Description | | --------------------------------------------------- | ------------------------ | | data\_science\_experience\.feature\-group\.retrieve | Retrieve a feature group | | data\_science\_experience\.feature\-group\.create | Create a feature group | | data\_science\_experience\.feature\-group\.update | Update a feature group | | data\_science\_experience\.feature\-group\.delete | Delete a feature group | <!-- </table ""> --> ### Events for asset management ### <!-- <table> --> Events in Activity Tracker for asset management in Watson Studio | Action | Description | | ------------------------------------ | ------------------------------------------------ | | datacatalog\.asset\.clone | Copy an asset\. | | datacatalog\.asset\.create | Create an asset\. | | datacatalog\.data\-asset\.create | Create a data asset\. | | datacatalog\.folder\-asset\.create | Create a folder asset\. | | datacatalog\.type\.create | Create an asset type\. | | datacatalog\.asset\.purge | Delete an asset from the trash\. | | datacatalog\.asset\.restore | Restore an asset from the trash\. | | datacatalog\.asset\.trash | Send an asset to the trash\. | | datacatalog\.asset\.update | Update an asset\. | | datacatalog\.promoted\-asset\.create | Create a project asset in a space\. | | datacatalog\.promoted\-asset\.update | Update a space asset that started in a project\. | | datacatalog\.asset\.promote | Promote an asset from project to space\. | <!-- </table ""> --> ### Events for asset attachments ### <!-- <table> --> Events in Activity Tracker for attachments | Action | Description | | -------------------------------------------- | ----------------------------------------- | | datacatalog\.attachment\.create | Create an attachment\. | | datacatalog\.attachment\.delete | Delete an attachment\. | | datacatalog\.attachment\-resources\.increase | Increase resources for an attachment\. | | datacatalog\.complete\.transfer | Mark an attachment as transfer complete\. | | datacatalog\.attachment\.update | Update attachment metadata\. | <!-- </table ""> --> ### Events for asset attributes ### <!-- <table> --> Events in Activity Tracker for attributes | Action | Description | | ------------------------------ | --------------------- | | datacatalog\.attribute\.create | Create an attribute\. | | datacatalog\.attribute\.delete | Delete an attribute\. | | datacatalog\.attribute\.update | Update an attribute\. 
| <!-- </table ""> --> ### Events for connections ### <!-- <table> --> Events in Activity Tracker for connections | Action | Description | | ----------------------------------------------- | --------------------------- | | wdp\-connect\-connection\.connection\.read | Read a connection\. | | wdp\-connect\-connection\.connection\.get | Retrieve a connection\. | | wdp\-connect\-connection\.connection\.get\.list | Get a list of connections\. | | wdp\-connect\-connection\.connection\.create | Create a connection\. | | wdp\-connect\-connection\.connection\.delete | Delete a connection\. | <!-- </table ""> --> ### Events for scheduling ### <!-- <table> --> Events in Activity Tracker for scheduling | Action | Description | | ------------------------------------------- | ----------------------------------- | | wdp\.scheduling\.schedule\.update\.failed | An update to a schedule failed\. | | wdp\.scheduling\.schedule\.create\.failed | The creation of a schedule failed\. | | wdp\.scheduling\.schedule\.read | Read a schedule\. | | wdp\.scheduling\.schedule\.update | Update a schedule\. | | wdp\.scheduling\.schedule\.delete\.multiple | Delete multiple schedules\. | | wdp\.scheduling\.schedule\.list | List all schedules\. | | wdp\.scheduling\.schedule\.create | Create a schedule\. | <!-- </table ""> --> ### Events for Data Refinery flows ### <!-- <table> --> Events in Activity Tracker for Data Refinery flows | Action | Description | | ------------------------------------------------------------------ | -------------------------------------- | | data\-science\-experience\.datarefinery\-flow\.read | Read a Data Refinery flow | | data\-science\-experience\.datarefinery\-flow\.create | Create a Data Refinery flow | | data\-science\-experience\.datarefinery\-flow\.delete | Delete a Data Refinery flow | | data\-science\-experience\.datarefinery\-flow\.update | Update (save) a Data Refinery flow | | data\-science\-experience\.datarefinery\-flow\.backup | Clone (duplicate) a Data Refinery flow | | data\-science\-experience\.datarefinery\-flowrun\.create | Create a Data Refinery flow job run | | data\-science\-experience\.datarefinery\-flowrun\-complete\.update | Complete a Data Refinery flow job run | | data\-science\-experience\.datarefinery\-flowrun\-cancel\.update | Cancel a Data Refinery flow job run | <!-- </table ""> --> ### Events for profiling ### <!-- <table> --> Events in Activity Tracker for profiling | Action | Description | | --------------------------------------------------------------- | ----------------------------------------------------------- | | wdp\-profiling\.profile\.start | Initiate profiling\. | | wdp\-profiling\.profile\.create | Create a profile\. | | wdp\-profiling\.profile\.delete | Delete a profile\. | | wdp\-profiling\.profile\.read | Read a profile\. | | wdp\-profiling\.profile\.list | List the profiles of a data asset\. | | wdp\-profiling\.profile\.update | Update a profile\. | | wdp\-profiling\.profile\.asset\-classification\.update | Update the asset classification of a profile\. | | wdp\-profiling\.profile\.column\-classification\.update | Update the column classification of a profile\. | | wdp\-profiling\.profile\.create\.failed | Profile could not be created\. | | wdp\-profiling\.profile\.delete\.failed | Profile could not be deleted\. | | wdp\-profiling\.profile\.read\.failed | Profile could not be read\. | | wdp\-profiling\.profile\.list\.failed | Profiles could not be listed\. | | wdp\-profiling\.profile\.update\.failed | Profile could not be updated\. 
| | wdp\-profiling\.profile\.asset\-classification\.update\.failed | Asset classification of the profile could not be updated\. | | wdp\-profiling\.profile\.column\-classification\.update\.failed | Column classification of the profile could not be updated\. | <!-- </table ""> --> ### Events for profiling options ### <!-- <table> --> Events in Activity Tracker for profiling options | Action | Description | | ------------------------------------------------ | ---------------------------------------- | | wdp\-profiling\.profile\_options\.create | Create profiling options\. | | wdp\-profiling\.profile\_options\.read | Read profiling options\. | | wdp\-profiling\.profile\_options\.update | Update profiling options\. | | wdp\-profiling\.profile\_options\.delete | Delete profiling options | | wdp\-profiling\.profile\_options\.create\.failed | Profiling options could not be created\. | | wdp\-profiling\.profile\_options\.read\.failed | Profiling options could not be read\. | | wdp\-profiling\.profile\_options\.update\.failed | Profiling options could not be updated\. | | wdp\-profiling\.profile\_options\.delete\.failed | Profiling options could not be deleted\. | <!-- </table ""> --> ### Events for feature groups ### <!-- <table> --> Events in Activity Tracker for feature groups (IBM Knowledge Catalog) | Action | Description | | --------------------------------------- | ------------------------ | | data\_catalog\.feature\-group\.retrieve | Retrieve a feature group | | data\_catalog\.feature\-group\.create | Create a feature group | | data\_catalog\.feature\-group\.update | Update a feature group | | data\_catalog\.feature\-group\.delete | Delete a feature group | <!-- </table ""> --> ## Events for Watson Machine Learning ## ### Event for Prompt Lab ### <!-- <table> --> Event in Activity Tracker for Prompt Lab | Action | Description | | ------------------------------- | ------------------------------------------------------------------------------- | | pm\-20\.foundation\-model\.send | Send a prompt to a foundation model or tuned foundation model for inferencing\. | <!-- </table ""> --> ### Events for Watson Machine Learning deployments ### <!-- <table> --> Events in Activity Tracker for Watson Machine Learning deployments | Action | Description | | ------------------------------- | ------------------------------------------------- | | pm\-20\.deployment\.create | Create a Watson Machine Learning deployment\. | | pm\-20\.deployment\.read | Get a Watson Machine Learning deployment\. | | pm\-20\.deployment\.update | Update a Watson Machine Learning deployment\. | | pm\-20\.deployment\.delete | Delete a Watson Machine Learning deployment\. | | pm\-20\.deployment\_job\.create | Create a Watson Machine Learning deployment job\. | | pm\-20\.deployment\_job\.read | Get a Watson Machine Learning deployment job\. | | pm\-20\.deployment\_job\.delete | Delete a Watson Machine Learning deployment job\. | <!-- </table ""> --> ### Events for SPSS Modeler flows ### <!-- <table> --> Events in Activity Tracker for SPSS Modeler flows | Action | Description | | --------------------------------------------------------- | --------------------------------------------------------- | | data\-science\-experience\.modeler\-session\.create | Create a new SPSS Modeler session\. | | data\-science\-experience\.modeler\-flow\.send | Store the current SPSS Modeler flow\. | | data\-science\-experience\.modeler\-flows\-user\.receive | Get the current user information\. 
| | data\-science\-experience\.modeler\-flow\-preview\.create | Preview a node in an SPSS Modeler flow\. |
| data\-science\-experience\.modeler\-examples\.receive | Get the list of example SPSS Modeler flows\. |
| data\-science\-experience\.modeler\-runtimes\.receive | Get the list of available SPSS Modeler runtimes\. |
| data\-science\-experience\.lock\-modeler\-flow\.enable | Allocate the lock for the SPSS Modeler flow to the user\. |
| data\-science\-experience\.project\-name\.receive | Get the name of the project\. |

<!-- </table ""> -->

### Event for model visualizations ###

<!-- <table> -->

Event in Activity Tracker for modeler visualizations

| Action | Description |
| ------ | ----------- |
| pm\-20\.model\.visualize | Visualize model output\. The model output can have a single model, ensemble models, or a time\-series model\. The visualization type can be single, auto, or time\-series\. The visualization type is in the requestedData section\. |

<!-- </table ""> -->

### Events for Watson Machine Learning training assets ###

<!-- <table> -->

Event in Activity Tracker for Watson Machine Learning training assets

| Action | Description |
| ------ | ----------- |
| pm\-20\.training\.authenticate | Authenticate the user\. |
| pm\-20\.training\.authorize | Authorize the user\. |
| pm\-20\.training\.list | List all trainings\. |
| pm\-20\.training\.get | Get one training\. |
| pm\-20\.training\.create | Start a training\. |
| pm\-20\.training\.delete | Stop a training\. |

<!-- </table ""> -->

### Events for Watson Machine Learning repository assets ###

The deployment events are tracked for these Watson Machine Learning repository assets:

<!-- <table> -->

Event in Activity Tracker for Watson Machine Learning repository assets

| Asset type | Description |
| ---------- | ----------- |
| wml\_model | Represents a machine learning model asset\. |
| wml\_model\_definition | Represents the code that is used to train one or more models\. |
| wml\_pipeline | Represents a hybrid pipeline, a SparkML pipeline, or a sklearn pipeline that is represented as a JSON document and is used to train one or more models\. |
| wml\_experiment | Represents the assets that capture a set of wml\_pipeline or wml\_model\_definition assets that are trained at the same time on the same data set\. |
| wml\_function | Represents a Python function (code is packaged in a compressed file) that is deployed as an online deployment in Watson Machine Learning\. This code must contain a score(\.\.\.) Python function\. |
| wml\_training\_definition | Represents the training metadata that is necessary to start a training job\. |
| wml\_deployment\_job\_definition | Represents the deployment metadata information to create a batch job in WML\. This asset type contains the same metadata that is used by the /ml/v4/deployment\_jobs endpoint\.
When you submit batch deployment jobs, you can either provide the job definition inline or reference a job definition in a query parameter\. | <!-- </table ""> --> These activities are tracked for each asset type: <!-- <table> --> Event in Activity Tracker for Watson Machine Learning repository assets | Action | Description | | ------------------------------------ | ----------------------------------------- | | pm\-20\.`<asset_type>`\.list | List all of the specified asset type\. | | pm\-20\.`<asset_type>`\.create | Create one of the specified asset types\. | | pm\-20\.`<asset_type>`\.delete | Delete one of the specified asset types\. | | pm\-20\.`<asset_type>`\.update | Update a specified asset type\. | | pm\-20\.`<asset_type>`\.read | View a specified asset type\. | | pm\-20\.`<asset_type>`\.add | Add a specified asset type\. | <!-- </table ""> --> ## Events for model evaluation (Watson OpenScale) ## ### Events for public APIs ### <!-- <table> --> Events in Activity Tracker for Watson OpenScale public APIs | Action | Description | | ---------------------------- | --------------------------------------------- | | aiopenscale\.metrics\.create | Store metric in the Watson OpenScale instance | | aiopenscale\.payload\.create | Log payload in the Watson OpenScale instance | <!-- </table ""> --> ### Events for private APIs ### <!-- <table> --> Events in Activity Tracker for Watson OpenScale private APIs | Action | Description | | --------------------------------- | --------------------------------------------------------- | | aiopenscale\.datamart\.configure | Configure the Watson OpenScale instance | | aiopenscale\.datamart\.delete | Delete the Watson OpenScale instance | | aiopenscale\.binding\.create | Add service binding to the Watson OpenScale instance | | aiopenscale\.binding\.delete | Delete service binding from the Watson OpenScale instance | | aiopenscale\.subscription\.create | Add subscription to the Watson OpenScale instance | | aiopenscale\.subscription\.delete | Delete subscription from the Watson OpenScale instance | <!-- </table ""> --> **Parent topic:**[Administration](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/administer-accounts.html) <!-- </article "role="article" "> -->
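As an illustration of how these action names can be used, the following minimal sketch filters exported Activity Tracker events by action prefix. It assumes events are exported as JSON Lines with CADF-style `action` and `eventTime` fields; the file name and field names are illustrative, so verify the exact event schema against your own Activity Tracker instance.

```python
import json

# Minimal sketch: scan exported Activity Tracker events (one JSON object
# per line) and keep only Watson Machine Learning deployment events.
# The "action" and "eventTime" field names follow the CADF-style event
# format; check your Activity Tracker instance for the exact schema.
def filter_events(path, action_prefix="pm-20.deployment."):
    matches = []
    with open(path) as f:
        for line in f:
            event = json.loads(line)
            if event.get("action", "").startswith(action_prefix):
                matches.append((event.get("eventTime"), event["action"]))
    return matches

# "events.jsonl" is a hypothetical export file name.
for event_time, action in filter_events("events.jsonl"):
    print(event_time, action)
```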
2C6B0F77C4CA0CAFB52E1FE3E10800D56015CADF
https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/create-services.html?context=cdpaas&locale=en
Creating and managing IBM Cloud services
# Creating and managing IBM Cloud services #

You can create IBM Cloud service instances within IBM watsonx from the Services catalog\.

**Prerequisite**: You must be [signed up for watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/signup-wx.html)\.

**Required permissions**: To create or manage a service instance, you must have the **Administrator** or **Editor** platform access role in the IBM Cloud account for IBM watsonx\. If you signed up for IBM watsonx with your own IBM Cloud account, you are the owner of the account\. Otherwise, you can [check your IBM Cloud account roles and permissions](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/your-roles.html)\.

## Creating a service ##

To view the Services catalog, select **Administration > Services > Services catalog** from the main menu\. For a description of each service, see [Services](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/cloud-services.html)\.

To check which service instances you have, select **Administration > Services > Service instances** from the main menu\. You can filter which services you see by resource group, organization, and region\.

To create a service:

<!-- <ol> -->

1. Log in to IBM watsonx\.

2. Select **Administration > Services > Services catalog** from the main menu\.

3. Click the service you want to create\.

4. Specify the IBM Cloud service region\.

5. Select a plan\.

6. If necessary, select the resource group or organization\.

7. Click **Create**\.

<!-- </ol> -->

## Managing services ##

To manage a service:

<!-- <ol> -->

1. Select **Administration > Services > Service instances** from the main menu\.

2. Click the Action menu next to the service name and select **Manage in IBM Cloud**\. The service page in IBM Cloud opens in a separate browser tab\.

3. To change pricing plans, select **Plan** and choose the desired plan\.

<!-- </ol> -->

## Learn more ##

<!-- <ul> -->

* [Associate a service with a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assoc-services.html)
* [Managing the platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html)

<!-- </ul> -->

**Parent topic:**[IBM Cloud services](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/cloud-services.html)

<!-- </article "role="article" "> -->
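As an alternative to browsing the Service instances page, you can list your service instances programmatically. A minimal sketch, assuming an IAM bearer token is available in the `IAM_TOKEN` environment variable; the endpoint shown is the IBM Cloud Resource Controller REST API, but confirm the path and response fields against the current IBM Cloud API documentation.

```python
import os
import requests

# Minimal sketch: list the service instances in an IBM Cloud account with
# the Resource Controller REST API. The IAM_TOKEN environment variable is
# an assumption; obtain a bearer token with your preferred IAM flow.
resp = requests.get(
    "https://resource-controller.cloud.ibm.com/v2/resource_instances",
    headers={"Authorization": f"Bearer {os.environ['IAM_TOKEN']}"},
)
resp.raise_for_status()
for instance in resp.json().get("resources", []):
    print(instance.get("name"), instance.get("region_id"))
```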
A392BDDEAD4F42155DC83FBA8512775DB313FC53
https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/endpoints-vrf.html?context=cdpaas&locale=en
Securing connections to services with private service endpoints
# Securing connections to services with private service endpoints # You can configure isolated connectivity to your cloud\-based services for production workloads with IBM Cloud service endpoints\. When you enable IBM Cloud service endpoints in your account, you can expose a private network endpoint when you create a resource\. You then connect directly to this endpoint over the IBM Cloud private network rather than the public network\. Because resources that use private network endpoints don't have an internet\-routable IP address, connections to these resources are more secure\. To use service endpoints: <!-- <ol> --> 1. Enable virtual routing and forwarding (VRF) in your account, if necessary, and enable the use of service endpoints\. 2. Create services that support VRF and service endpoints\. <!-- </ol> --> See [Enabling VRF and service endpoints](https://cloud.ibm.com/docs/account?topic=account-vrf-service-endpoint)\. ## Learn more ## <!-- <ul> --> * [Secure access to services using service endpoints](https://cloud.ibm.com/docs/account?topic=account-service-endpoints-overview) * [Enabling VRF and service endpoints](https://cloud.ibm.com/docs/account?topic=account-vrf-service-endpoint) * [List of services that support service endpoints](https://cloud.ibm.com/docs/account?topic=account-vrf-service-endpoint#use-service-endpoint) <!-- </ul> --> **Parent topic:**[Security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html) <!-- </article "role="article" "> -->
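The practical effect of a private network endpoint is a different hostname for the same service. A minimal sketch follows, assuming Cloud Object Storage-style endpoint names for us-south; these hostnames are illustrative, and each service lists its own endpoints in its service credentials or documentation.

```python
# Minimal sketch: choose a private rather than public endpoint when your
# workload runs inside the IBM Cloud network. The hostnames follow the
# IBM Cloud Object Storage endpoint pattern for us-south and are shown
# for illustration only.
PUBLIC_ENDPOINT = "https://s3.us-south.cloud-object-storage.appdomain.cloud"
PRIVATE_ENDPOINT = "https://s3.private.us-south.cloud-object-storage.appdomain.cloud"

def pick_endpoint(on_ibm_private_network: bool) -> str:
    # Traffic to the private endpoint stays on the IBM Cloud private
    # network and is not routable from the public internet.
    return PRIVATE_ENDPOINT if on_ibm_private_network else PUBLIC_ENDPOINT

print(pick_endpoint(on_ibm_private_network=True))
```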
71C98B1AE9BB65177C030CF1DE6760D41B7D7DF5
https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall-cfg-private-cos.html?context=cdpaas&locale=en
Firewall access for Cloud Object Storage
# Firewall access for Cloud Object Storage #

Private IP addresses are required when IBM watsonx and Cloud Object Storage are located on the same network\.

When creating a connection to a Cloud Object Storage bucket that is protected by a firewall on the same network as IBM watsonx, the connector automatically maps to private IP addresses for IBM watsonx\. The private IP addresses must be added to a **Bucket access policy** to allow inbound connections from IBM watsonx\.

Follow these steps to look up the private IP addresses for the IBM watsonx cluster and add them to the **Bucket access policy**:

<!-- <ol> -->

1. Go to the **Administration > Cloud integrations** page\.

2. Click the **Firewall configuration** link to view the list of IP ranges used by IBM watsonx\.

3. Choose **Include private IPs** to view the private IP addresses for the IBM watsonx cluster\.

    ![A list of private IP addresses for the IBM watsonx cluster](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/images/ip-ranges-private.png)

4. From your IBM Cloud Object Storage instance on IBM Cloud, open the **Buckets** list and choose the bucket for the connection\.

5. Copy each of the private IP ranges listed and paste them into the **Buckets > Permissions > IP address** field on IBM Cloud\.

    ![A list of permitted private IP addresses for the IBM watsonx cluster](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/images/bucket-ips.png)

<!-- </ol> -->

## Learn more ##

<!-- <ul> -->

* [IBM Cloud Object Storage connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html)
* [IBM Cloud docs: Setting a firewall](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-setting-a-firewall#firewall)

<!-- </ul> -->

**Parent topic:**[Configuring firewall access](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall_ovrvw.html)

<!-- </article "role="article" "> -->
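If you manage many buckets, you can apply the same allow list with the Cloud Object Storage resource configuration API instead of the console. A minimal sketch, assuming an IAM token in the `IAM_TOKEN` environment variable; the bucket name and CIDR range are placeholders, and you should confirm the endpoint and the `firewall.allowed_ip` field against the current COS API documentation.

```python
import os
import requests

# Minimal sketch: add copied private IP ranges to a bucket firewall with
# the COS resource configuration API. The bucket name, token variable,
# and CIDR range below are placeholders; use the ranges shown on your
# own Firewall configuration page.
BUCKET = "my-watsonx-bucket"      # hypothetical bucket name
ALLOWED_IPS = ["10.184.0.0/16"]   # placeholder; paste your real ranges

resp = requests.patch(
    f"https://config.cloud-object-storage.cloud.ibm.com/v1/b/{BUCKET}",
    headers={"Authorization": f"Bearer {os.environ['IAM_TOKEN']}"},
    json={"firewall": {"allowed_ip": ALLOWED_IPS}},
)
resp.raise_for_status()
```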
3D1B3C707202F30F8995025F356F82ABBE685B93
https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall-dsx.html?context=cdpaas&locale=en
Firewall access for Watson Studio
# Firewall access for Watson Studio # Inbound firewall access is granted to the Watson Studio service by allowing the IP addresses for IBM watsonx on IBM Cloud\. If Watson Studio is installed behind a firewall, you must add the WebSocket connection for your region to the firewall settings\. Enabling the WebSocket connection is required for notebooks and RStudio\. Following are the WebSocket settings for each region: <!-- <table> --> Table 1\. Regional WebSockets | Location | Region | WebSocket | | ----------------------- | --------- | -------------------------------------------- | | United States (Dallas) | us\-south | wss://dataplatform\.cloud\.ibm\.com | | Europe (Frankfurt) | eu\-de | wss://eu\-de\.dataplatform\.cloud\.ibm\.com | | United Kingdom (London) | eu\-gb | wss://eu\-gb\.dataplatform\.cloud\.ibm\.com | | Asia Pacific (Tokyo) | jp\-tok | wss://jp\-tok\.dataplatform\.cloud\.ibm\.com | <!-- </table ""> --> Follow these steps to look up the IP addresses for IBM watsonx and allow them on IBM Cloud: <!-- <ol> --> 1. From the main menu, choose **Administration > Cloud integrations**\. 2. Click **Firewall configuration** to display the IP addresses for the current region\. Use CIDR notation\. 3. Copy each CIDR range into the **IP address restrictions** for either a user or an account\. You must also enter the allowed individual client IP addresses\. Enter the IP addresses as a comma\-separated list\. Then, click **Apply**\. 4. Repeat for each region to allow access for Watson Studio\. <!-- </ol> --> When you configure the allowed IP addresses for Watson Studio, you include the CIDR ranges for the Watson Studio cluster\. You can also allow individual client system IP addresses\. For step\-by\-step instructions for both user and account restrictions, see [IBM Cloud docs: Allowing specific IP addresses](https://cloud.ibm.com/docs/account?topic=account-ips) **Parent topic:**[Configuring firewall access](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall_ovrvw.html) <!-- </article "role="article" "> -->
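To verify that your firewall permits the WebSocket transport, you can probe the regional host on port 443. A minimal sketch using only the Python standard library; it checks TCP/TLS reachability for the hosts in the table above but does not perform the WebSocket upgrade itself.

```python
import socket
import ssl

# Minimal sketch: verify that the firewall allows a TLS connection to the
# regional WebSocket host before testing notebooks or RStudio. The hosts
# are taken from the regional WebSocket table above.
WSS_HOSTS = {
    "us-south": "dataplatform.cloud.ibm.com",
    "eu-de": "eu-de.dataplatform.cloud.ibm.com",
    "eu-gb": "eu-gb.dataplatform.cloud.ibm.com",
    "jp-tok": "jp-tok.dataplatform.cloud.ibm.com",
}

context = ssl.create_default_context()
for region, host in WSS_HOSTS.items():
    with socket.create_connection((host, 443), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            print(region, "reachable, TLS version:", tls.version())
```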
34974DEE293BA190CFA1B3383EB2417D0FD4B601
https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall-redshift.html?context=cdpaas&locale=en
Firewall access for AWS Redshift
# Firewall access for AWS Redshift # Inbound firewall access allows IBM watsonx to connect to Redshift on AWS through the firewall\. You need inbound firewall access to work with your data stored in Redshift\. To connect to Redshift from IBM watsonx, you configure inbound access through the Redshift firewall by entering the IP ranges for IBM watsonx into the inbound firewall rules (also called ingress rules)\. Inbound access through the firewall is configurable if Redshift resides on a public subnet\. If Redshift resides on a private subnet, then no access is possible\. Follow these steps to configure inbound firewall access to AWS Redshift: <!-- <ol> --> 1. Go to your provisioned Amazon Redshift cluster\. 2. Select **Properties** and then scroll down to **Network and security settings**\. 3. Click the **VPC security group**\. ![AWS VPC security group](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/images/int-aws-active.png) 4. Edit the active/default security group\. ![AWS active security group](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/images/int-aws-vpc.png) 5. Under **Inbound rules**, change the port range to 5439 to specify the Redshift port\. Then select **Edit inbound rules > Add rule**\. ![Edit inbound rules](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/images/int-aws-IPs.png) 6. From IBM watsonx, go to the **Administration > Cloud integrations** page\. 7. Click the **Firewall configuration** link to view the list of IP ranges used by IBM watsonx\. IP addresses can be viewed in either CIDR notation or as Start and End addresses\. 8. Copy each of the IP ranges listed and paste them into the **Source** field for inbound firewall rules\. <!-- </ol> --> ## Learn more ## <!-- <ul> --> * [Working with Redshift\-managed VPC endpoints in Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/mgmt/managing-cluster-cross-vpc.html) <!-- </ul> --> <!-- </article "role="article" "> -->
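You can also add the inbound rules from step 5 programmatically with the AWS SDK. A minimal sketch using boto3; the security group ID and CIDR range are placeholders, so substitute your own group and the ranges copied from the Firewall configuration page. Port 5439 is the Redshift port specified in step 5.

```python
import boto3

# Minimal sketch: add one inbound rule per IBM watsonx CIDR range to the
# Redshift cluster's VPC security group.
ec2 = boto3.client("ec2")

SECURITY_GROUP_ID = "sg-0123456789abcdef0"   # hypothetical group ID
WATSONX_RANGES = ["169.60.0.0/16"]           # placeholder CIDR ranges

for cidr in WATSONX_RANGES:
    ec2.authorize_security_group_ingress(
        GroupId=SECURITY_GROUP_ID,
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 5439,   # default Redshift port
            "ToPort": 5439,
            "IpRanges": [{"CidrIp": cidr, "Description": "IBM watsonx"}],
        }],
    )
```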
648122BED05213950C23287CB4845FA56660232B
https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall-spark.html?context=cdpaas&locale=en
Firewall access for Spark
# Firewall access for Spark # To allow Spark to access data that is located behind a firewall, you add the appropriate IP addresses for your region to the inbound rules for your firewall\. ## Dallas (us\-south) ## <!-- <ul> --> * dal12 \- 169\.61\.173\.96/27, 169\.63\.15\.128/26, 150\.239\.143\.0/25, 169\.61\.133\.240/28, 169\.63\.56\.0/24 * dal13 \- 169\.61\.57\.48/28, 169\.62\.200\.96/27, 169\.62\.235\.64/26 * dal10 \- 169\.60\.246\.160/27, 169\.61\.194\.0/26, 169\.46\.22\.128/26, 52\.118\.59\.0/25 <!-- </ul> --> **Parent topic:**[Configuring firewall access](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall_ovrvw.html) <!-- </article "role="article" "> -->
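To check whether an address observed at your firewall belongs to one of these ranges, you can test CIDR membership with the Python standard library. A minimal sketch using the Dallas ranges listed above:

```python
import ipaddress

# Minimal sketch: confirm that an address observed at your firewall falls
# inside the published Spark ranges for Dallas. The CIDR ranges are
# copied from the dal12/dal13/dal10 lists above.
SPARK_DALLAS_RANGES = [
    "169.61.173.96/27", "169.63.15.128/26", "150.239.143.0/25",
    "169.61.133.240/28", "169.63.56.0/24", "169.61.57.48/28",
    "169.62.200.96/27", "169.62.235.64/26", "169.60.246.160/27",
    "169.61.194.0/26", "169.46.22.128/26", "52.118.59.0/25",
]

def is_spark_address(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr in ipaddress.ip_network(cidr) for cidr in SPARK_DALLAS_RANGES)

print(is_spark_address("169.63.56.10"))   # True: inside 169.63.56.0/24
```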
E732DFB3C4F38ABECBA99DA31750FB6291560DB5
https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall-wml.html?context=cdpaas&locale=en
Firewall access for Watson Machine Learning
# Firewall access for Watson Machine Learning # To allow Watson Machine Learning to access data that is located behind a firewall, you add the appropriate IP addresses for your region to the inbound rules for your firewall\. ## Dallas (us\-south) ## <!-- <ul> --> * dal10 \- 169\.60\.39\.152/29 * dal12 \- 169\.48\.198\.96/29 * dal13 \- 169\.61\.47\.128/29,169\.62\.162\.88/29 <!-- </ul> --> **Parent topic:**[Configuring firewall access](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall_ovrvw.html) <!-- </article "role="article" "> -->
E176531BA95036356A7E5DCA50A8DF728C78CE79
https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall_cfg.html?context=cdpaas&locale=en
Firewall access for the platform
# Firewall access for the platform #

If a data source resides behind a firewall, then IBM watsonx requires inbound access through the firewall in order to make a connection\. Inbound firewall access is required whether the data source resides on a third\-party cloud provider or in a data center\.

The method for configuring inbound access varies for different vendors' firewalls\. In general, you configure inbound access rules by entering the IP addresses for the IBM watsonx cluster to allow access by IBM watsonx\.

You can enter the IP addresses using the starting and ending addresses for a range or by using CIDR notation\. Classless Inter\-Domain Routing (CIDR) notation is a compact representation of an IP address and its associated network mask\. For start and end addresses, copy each address and enter them in the inbound rules for your firewall\. Alternatively, copy the addresses in CIDR notation\.

The IBM watsonx IP addresses vary by region\. The user interface lists the IP addresses for the current region\. The IP addresses apply to the base infrastructure for IBM watsonx\.

Follow these steps to look up the IP addresses for the IBM watsonx cluster:

<!-- <ol> -->

1. Go to the **Administration > Cloud integrations** page\.

2. Click the **Firewall configuration** link to view the list of IP ranges used by IBM watsonx in your region\.

3. View the IP ranges for the IBM watsonx cluster in either CIDR notation or as Start and End addresses\.

4. Choose **Include private IPs** to view the private IP addresses\. The private IP addresses allow connections to IBM Cloud Object Storage buckets that are behind a firewall\. See [Firewall access for Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall-cfg-private-cos.html)\.

5. Copy each of the IP ranges listed and paste them into the appropriate security configuration or inbound firewall rules area for your cloud provider\.

<!-- </ol> -->

For example, if your data source resides on AWS, open the **Create Security Group** dialog for your AWS Management Console\. Paste the IP ranges into the **Inbound** section for the security group rules\.

**Parent topic:**[Configuring firewall access](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall_ovrvw.html)

<!-- </article "role="article" "> -->
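If your firewall expects start and end addresses rather than CIDR notation, you can convert between the two forms with the Python standard library. A minimal sketch; the example range is the Watson Machine Learning dal10 range listed earlier.

```python
import ipaddress

# Minimal sketch: convert a CIDR range copied from the Firewall
# configuration page into the start and end addresses that some
# firewall UIs expect.
net = ipaddress.ip_network("169.60.39.152/29")
print("start:", net[0])    # 169.60.39.152
print("end:", net[-1])     # 169.60.39.159

# And the reverse: summarize a start/end pair back into CIDR blocks.
for block in ipaddress.summarize_address_range(net[0], net[-1]):
    print("cidr:", block)
```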
E7B64045AF2C3FF02183FB1CCC036327CEE5E971
https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall_ovrvw.html?context=cdpaas&locale=en
Configuring firewall access
# Configuring firewall access # Firewalls protect valuable data from public access\. If your data sources reside behind a firewall for protection, and you are not using a Satellite Connector or Satellite location, then you must configure the firewall to allow the IP addresses for IBM watsonx and also for individual services\. Otherwise, IBM watsonx is denied access to the data sources\. To allow IBM watsonx access to private data sources, you configure inbound firewall rules using the security mechanisms for your firewall\. Inbound firewall rules are not required for connections that use a Satellite Connector or Satellite location, which establishes a link by performing an outbound connection\. For more information, see [Connecting to data behind a firewall](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html)\. All services in IBM watsonx actively use WebSockets for the proper functioning of the user interface and APIs\. Any firewall between the user and the IBM watsonx domain must allow **HTTPUpgrade**\. If IBM watsonx is installed behind a firewall, traffic for the **wss://** protocol must be enabled\. ## Configuring inbound access rules for firewalls ## If data sources reside behind a firewall, then inbound access rules are required for IBM watsonx\. Inbound firewall rules protect the network against incoming traffic from the internet\. The following scenarios require inbound access rules through a firewall: <!-- <ul> --> * [Firewall access for IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall_cfg.html) * [Firewall access for Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall-cfg-private-cos.html) * [Firewall access for AWS Redshift](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall-redshift.html) * [Firewall access for Watson Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall-dsx.html) * [Firewall access for Watson Machine Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall-wml.html) * [Firewall access for Spark](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall-spark.html) <!-- </ul> --> ## Learn more ## <!-- <ul> --> * [Connecting to data behind a firewall](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html) <!-- </ul> --> **Parent topic:**[Setting up the platform for administrators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-platform.html) <!-- </article "role="article" "> -->
9E71F112F9AF39E61A59914D87689B4B8DB13F50
https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/int-aws.html?context=cdpaas&locale=en
Integrating with AWS
# Integrating with AWS # You can configure an integration with the Amazon Web Services (AWS) platform to allow IBM watsonx users access to data sources from AWS\. Before proceeding, make sure you have proper permissions\. For example, you'll need to be able to create services and credentials in the AWS account\. After you configure an integration, you'll see it under **Service instances**\. You'll see a new **AWS** tab that lists your instances of Redshift and S3\. To configure an integration with AWS: <!-- <ol> --> 1. Log on to the [AWS Console](https://aws.amazon.com/console/)\. 2. From the account drop\-down at the upper right, select **My Security Credentials**\. 3. Under **Access keys (access key ID and secret access key)**, click **Create New Access Key**\. 4. Copy the key ID and secret\. Important: Write down your key ID and secret and store them in a safe place. 5. In IBM watsonx, under **Administration > Cloud integrations**, go to the **AWS** tab, enable integration, and then paste the access key ID and access key secret into the appropriate fields\. 6. If you need to access Redshift, [configure firewall access](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall-redshift.html)\. 7. Confirm that you can see your AWS services\. From the main menu, choose **Administration > Services > Services instances**\. Click the **AWS** tab to see those services\. <!-- </ol> --> Now users who have credentials to your AWS services can [create connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) to them by selecting them on the **Add connection** page\. Then they can access data from those connections by [creating connected data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html)\. ## Next steps ## <!-- <ul> --> * [Set up a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html) * [Create connections in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) <!-- </ul> --> **Parent topic:**[Integrations with other cloud platforms](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/int-cloud.html) <!-- </article "role="article" "> -->
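Before pasting the key pair into IBM watsonx, you can sanity-check it outside the platform. The following is a minimal sketch, assuming the `boto3` package (`pip install boto3`); the key values are placeholders for the access key ID and secret that you created in the steps above.

```python
# A successful list_buckets call proves the key pair authenticates against AWS.
import boto3

s3 = boto3.client(
    "s3",
    aws_access_key_id="AKIA...",          # placeholder: your access key ID
    aws_secret_access_key="wJalrXUt...",  # placeholder: your secret access key
)

for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])
```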
496C8703EBA5C4C6BCD6D65EE60D3E768F1BF071
https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/int-azure.html?context=cdpaas&locale=en
Integrating with Microsoft Azure
Integrating with Microsoft Azure You can configure an integration with the Microsoft Azure platform to allow IBM watsonx users access to data sources from Microsoft Azure. Before proceeding, make sure you have proper permissions. For example, you'll need permission in your subscription to create an application integration in Azure Active Directory. After you configure an integration, you'll see it under Service instances. You'll see a new Azure tab that lists your instances of Data Lake Storage Gen1 and SQL Database. To configure an integration with Microsoft Azure: 

1. Log on to your Microsoft Azure account at [https://portal.azure.com](https://portal.azure.com). 
2. Navigate to the Subscriptions panel and copy your subscription ID. 
3. In IBM watsonx, go to Administration > Cloud integrations and click the Azure tab. Paste the subscription ID you copied in the previous step into the Subscription ID field. 
4. In Microsoft Azure Active Directory, navigate to Manage > App registrations and click New registration to register an application. Give it a name such as IBM integration and select the desired option for supported account types. 
5. Copy the Application (client) ID and the Tenant ID and paste them into the appropriate fields on the IBM watsonx Integrations page, as you did with the subscription ID. 
6. In Microsoft Azure, navigate to Certificates & secrets > New client secret to create a new secret. 

Important! 
* Write down your secret and store it in a safe place. After you leave this page, you won't be able to retrieve the secret again. You'd need to delete the secret and create a new one. 
* If you ever need to revoke the secret for some reason, you can simply delete it from this page. 
* Pay attention to the expiration date. When the secret expires, integration will stop working. 

7. Copy the secret from Microsoft Azure and paste it into the appropriate field on the Integrations page as you did with the subscription ID and client ID. 
8. Configure [firewall access](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/int-azure.html?context=cdpaas&locale=en#firewall). 
9. Confirm that you can see your Azure services. From the main menu, choose Administration > Services > Services instances. Click the Azure tab to see those services. 

Now users who have credentials to your Azure services can [create connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) to them by selecting them on the Add connection page. Then they can access data from those connections by [creating connected data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). 

Configuring firewall access You must also configure access so IBM watsonx can access data through the firewall. For Microsoft Azure SQL Database firewall: 

1. Open the database instance in Microsoft Azure. 
2. From the top list of actions, select Set server firewall. 
3. Set Deny public network access to No. 
4. In a separate tab or window, open IBM watsonx and go to Administration > Cloud integrations. In the Firewall configuration panel, for each firewall IP range, copy the start and end address values into the list of rules in the Microsoft Azure SQL Database firewall. 

For Microsoft Azure Data Lake Storage Gen1 firewall: 

1. Open the Data Lake instance. 
2. Go to Settings > Firewall and virtual networks. 
3. In a separate tab or window, open IBM watsonx and go to Administration > Cloud integrations. 
In the Firewall configuration panel, for each firewall IP range, copy the start and end address values into the list of rules under Firewall in the Data Lake instance. You can now create connections, preview data from Microsoft Azure data sources, and access Microsoft Azure data in Notebooks, Data Refinery, SPSS Modeler, and other tools in projects and in catalogs. You can see your Microsoft Azure instances under Services > Service instances. Next steps * [Set up a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html) * [Create connections in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) Parent topic:[Integrations with other cloud platforms](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/int-cloud.html)
# Integrating with Microsoft Azure #

You can configure an integration with the Microsoft Azure platform to allow IBM watsonx users access to data sources from Microsoft Azure\.

Before proceeding, make sure you have proper permissions\. For example, you'll need permission in your subscription to create an application integration in Azure Active Directory\.

After you configure an integration, you'll see it under **Service instances**\. You'll see a new **Azure** tab that lists your instances of Data Lake Storage Gen1 and SQL Database\.

To configure an integration with Microsoft Azure:

<!-- <ol> -->

1. Log on to your Microsoft Azure account at [https://portal\.azure\.com](https://portal.azure.com)\.
2. Navigate to the **Subscriptions** panel and copy your subscription ID\.
3. In IBM watsonx, go to **Administration > Cloud integrations** and click the **Azure** tab\. Paste the subscription ID you copied in the previous step into the **Subscription ID** field\.
4. In Microsoft Azure Active Directory, navigate to **Manage > App registrations** and click **New registration** to register an application\. Give it a name such as *IBM integration* and select the desired option for supported account types\.
5. Copy the **Application (client) ID** and the **Tenant ID** and paste them into the appropriate fields on the IBM watsonx **Integrations** page, as you did with the subscription ID\.
6. In Microsoft Azure, navigate to **Certificates & secrets > New client secret** to create a new secret\.

    **Important\!**

<!-- <ul> -->

     * Write down your secret and store it in a safe place. After you leave this page, you won't be able to retrieve the secret again. You'd need to delete the secret and create a new one.
     * If you ever need to revoke the secret for some reason, you can simply delete it from this page.
     * Pay attention to the expiration date. When the secret expires, integration will stop working.

<!-- </ul> -->

7. Copy the secret from Microsoft Azure and paste it into the appropriate field on the **Integrations** page as you did with the subscription ID and client ID\.
8. Configure [firewall access](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/int-azure.html?context=cdpaas&locale=en#firewall)\.
9. Confirm that you can see your Azure services\. From the main menu, choose **Administration > Services > Services instances**\. Click the **Azure** tab to see those services\.

<!-- </ol> -->

Now users who have credentials to your Azure services can [create connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) to them by selecting them on the **Add connection** page\. Then they can access data from those connections by [creating connected data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html)\.

## Configuring firewall access ##

You must also configure access so IBM watsonx can access data through the firewall\.

For Microsoft Azure SQL Database firewall:

<!-- <ol> -->

1. Open the database instance in Microsoft Azure\.
2. From the top list of actions, select **Set server firewall**\.
3. Set **Deny public network access** to **No**\.
4. In a separate tab or window, open IBM watsonx and go to **Administration > Cloud integrations**\. In the **Firewall configuration** panel, for each firewall IP range, copy the start and end address values into the list of rules in the Microsoft Azure SQL Database firewall\. 
<!-- </ol> --> For Microsoft Azure Data Lake Storage Gen1 firewall: <!-- <ol> --> 1. Open the Data Lake instance\. 2. Go to **Settings > Firewall and virtual networks**\. 3. In a separate tab or window, open IBM watsonx and go to **Administration > Cloud integrations**\. In the **Firewall configuration** panel, for each firewall IP range, copy the start and end address values into the list of rules under **Firewall** in the Data Lake instance\. <!-- </ol> --> You can now create connections, preview data from Microsoft Azure data sources, and access Microsoft Azure data in Notebooks, Data Refinery, SPSS Modeler, and other tools in projects and in catalogs\. You can see your Microsoft Azure instances under **Services > Service instances**\. ## Next steps ## <!-- <ul> --> * [Set up a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html) * [Create connections in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) <!-- </ul> --> **Parent topic:**[Integrations with other cloud platforms](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/int-cloud.html) <!-- </article "role="article" "> -->
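Before pasting the values into IBM watsonx, you can confirm that the tenant ID, application (client) ID, and client secret form a working triple. The following is a minimal sketch, assuming the `msal` package (`pip install msal`); all three values shown are placeholders, and this check is not part of the IBM watsonx setup itself.

```python
# Runs the OAuth2 client-credentials flow against Azure AD; a token proves
# the tenant/client/secret triple is valid.
import msal

TENANT_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
CLIENT_ID = "11111111-1111-1111-1111-111111111111"  # placeholder
CLIENT_SECRET = "your-client-secret"                # placeholder

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)

result = app.acquire_token_for_client(scopes=["https://management.azure.com/.default"])
if "access_token" in result:
    print("Credentials are valid; an access token was issued.")
else:
    print("Token request failed:", result.get("error_description"))
```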
72B9EC702C95AC86DE08E0FB8F8C3404B1228B5F
https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/int-cloud.html?context=cdpaas&locale=en
Integrations with other cloud platforms
Integrations with other cloud platforms You can integrate IBM watsonx with other cloud platforms to configure access to the data source services on that platform. Then, users can easily create connections to those data source services and access the data in those data sources. You need to be the Account Owner or Administrator for the IBM Cloud account to configure integrations with other cloud platforms. You must have the proper permissions in your cloud platform subscription before you can configure an integration. If you are using Amazon Web Services (AWS) Redshift (or other AWS data sources) or Microsoft Azure, you must also configure firewall access to allow IBM watsonx to access data. After you configure integration and firewall access with another cloud platform, you can access and connect to the services on that platform: * The service instances for that platform are shown on the Service instances page. From the main menu, choose Administration > Services > Services instances. Each cloud platform that you integrate with has its own page. * The data source services in that platform are shown when you create a connection. Start adding a connection in a project, catalog, or other workspace. When the Add connection page appears, click the To service tab. The services are listed by cloud platform. You can configure integrations with these cloud platforms: * [Amazon Web Services (AWS)](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/int-aws.html) * [Microsoft Azure](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/int-azure.html) * [Google Cloud Platform (GCP)](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/int-google.html) Parent topic:[Services and integrations](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/svc-int.html)
# Integrations with other cloud platforms # You can integrate IBM watsonx with other cloud platforms to configure access to the data source services on that platform\. Then, users can easily create connections to those data source services and access the data in those data sources\. You need to be the Account Owner or Administrator for the IBM Cloud account to configure integrations with other cloud platforms\. You must have the proper permissions in your cloud platform subscription before you can configure an integration\. If you are using Amazon Web Services (AWS) Redshift (or other AWS data sources) or Microsoft Azure, you must also configure firewall access to allow IBM watsonx to access data\. After you configure integration and firewall access with another cloud platform, you can access and connect to the services on that platform: <!-- <ul> --> * The service instances for that platform are shown on the **Service instances** page\. From the main menu, choose **Administration > Services > Services instances**\. Each cloud platform that you integrate with has its own page\. * The data source services in that platform are shown when you create a connection\. Start adding a connection in a project, catalog, or other workspace\. When the **Add connection** page appears, click the **To service** tab\. The services are listed by cloud platform\. <!-- </ul> --> You can configure integrations with these cloud platforms: <!-- <ul> --> * [Amazon Web Services (AWS)](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/int-aws.html) * [Microsoft Azure](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/int-azure.html) * [Google Cloud Platform (GCP)](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/int-google.html) <!-- </ul> --> **Parent topic:**[Services and integrations](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/svc-int.html) <!-- </article "role="article" "> -->
CB81643BE8EE3B3DC2F6BCCDB77BD2CEC32C8926
https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/int-google.html?context=cdpaas&locale=en
Integrating with Google Cloud Platform
Integrating with Google Cloud Platform You can configure an integration with the Google Cloud Platform (GCP) to allow IBM watsonx users to access data sources from GCP. Before proceeding, make sure you have proper permissions. After you configure an integration, you'll see it under Service instances. For example, you'll see a new GCP tab that lists your BigQuery data sets and Storage buckets. To configure an integration with GCP: 

1. Log on to the Google Cloud Platform at [https://console.cloud.google.com](https://console.cloud.google.com). 
2. Go to IAM & Admin > Service Accounts. 
3. Open your project and then click CREATE SERVICE ACCOUNT. 
4. Specify a name and description for the new service account and click CREATE. Specify other options as desired and click DONE. 
5. Click the actions menu next to the service instance and select Create key. For key type, select JSON and then click CREATE. The JSON key file will be downloaded to your machine. Important: Store the key file in a secure location. 
6. In IBM watsonx, under Administration > Cloud integrations, go to the GCP tab, enable integration, and then paste the contents from the JSON key file into the text field. Only certain properties from the JSON will be stored, and the private_key property will be encrypted. 
7. Go back to Google Cloud Platform and edit the service account you created previously. Add the following roles: 
8. Confirm that you can see your GCP services. From the main menu, choose Administration > Services > Services instances. Click the GCP tab to see those services, for example, BigQuery data sets and Storage buckets. 

Now users who have credentials to your GCP services can [create connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) to them by selecting them on the Add connection page. Then they can access data from those connections by [creating connected data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html). 

Next steps 

* [Set up a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html) 
* [Create connections in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) 

Parent topic:[Integrations with other cloud platforms](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/int-cloud.html)
# Integrating with Google Cloud Platform #

You can configure an integration with the Google Cloud Platform (GCP) to allow IBM watsonx users to access data sources from GCP\.

Before proceeding, make sure you have proper permissions\.

After you configure an integration, you'll see it under **Service instances**\. For example, you'll see a new **GCP** tab that lists your BigQuery data sets and Storage buckets\.

To configure an integration with GCP:

<!-- <ol> -->

1. Log on to the Google Cloud Platform at [https://console\.cloud\.google\.com](https://console.cloud.google.com)\.
2. Go to **IAM & Admin > Service Accounts**\.
3. Open your project and then click **CREATE SERVICE ACCOUNT**\.
4. Specify a name and description for the new service account and click **CREATE**\. Specify other options as desired and click **DONE**\.
5. Click the actions menu next to the service instance and select **Create key**\. For key type, select **JSON** and then click **CREATE**\. The JSON key file will be downloaded to your machine\. Important: Store the key file in a secure location.
6. In IBM watsonx, under **Administration > Cloud integrations**, go to the **GCP** tab, enable integration, and then paste the contents from the JSON key file into the text field\. Only certain properties from the JSON will be stored, and the `private_key` property will be encrypted\.
7. Go back to Google Cloud Platform and edit the service account you created previously\. Add the following roles:
8. Confirm that you can see your GCP services\. From the main menu, choose **Administration > Services > Services instances**\. Click the **GCP** tab to see those services, for example, BigQuery data sets and Storage buckets\.

<!-- </ol> -->

Now users who have credentials to your GCP services can [create connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) to them by selecting them on the **Add connection** page\. Then they can access data from those connections by [creating connected data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html)\.

## Next steps ##

<!-- <ul> -->

* [Set up a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html)
* [Create connections in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html)

<!-- </ul> -->

**Parent topic:**[Integrations with other cloud platforms](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/int-cloud.html)

<!-- </article "role="article" "> -->
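Because only certain properties of the JSON key are stored and the `private_key` property is encrypted, it is worth confirming that the downloaded key file is complete before pasting it in. The following is a standard-library sketch; the file name is a placeholder for your downloaded key.

```python
# Checks that the downloaded service-account key file contains the fields a
# GCP integration typically relies on before you paste its contents.
import json

KEY_FILE = "my-service-account-key.json"  # placeholder path to the downloaded key

with open(KEY_FILE) as f:
    key = json.load(f)

for field in ("type", "project_id", "client_email", "private_key"):
    status = "present" if key.get(field) else "MISSING"
    print(f"{field}: {status}")
```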
E6A30655CBD3745ACBCBF18E79B4C3979CA6B35B
https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/manage-account.html?context=cdpaas&locale=en
Managing your IBM Cloud account
Managing your IBM Cloud account You can manage your IBM Cloud account to view billing and usage, manage account users, and manage services. Required permissions : You must be the IBM Cloud account owner or administrator. To manage your IBM Cloud account, choose Administration > Account and billing > Account > Manage in IBM Cloud from IBM watsonx. Then from the IBM Cloud console, choose an option from the Manage menu. 

* Account: See [Adding orgs and spaces](https://cloud.ibm.com/docs/account?topic=account-orgsspacesusers#orgsspacesusers) and [Managing resource groups](https://cloud.ibm.com/docs/account?topic=account-rgs). 
* Billing and Usage: See [How you're charged](https://cloud.ibm.com/docs/billing-usage?topic=billing-usage-charges#charges). 
* Access (IAM): See [Inviting users](https://cloud.ibm.com/docs/account?topic=account-access-getstarted). 

Learn more 

* [Activity Tracker events](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/at-events.html) 
* [Manage your settings](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.html) 
* [Set up IBM watsonx for your organization](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-platform.html) 
* [Manage users and access](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-access.html) 
* [IBM Cloud SAML Federation Guide](https://www.ibm.com/cloud/blog/ibm-cloud-saml-federation-guide) 
* [Delete your IBM Cloud account](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html#deletecloud) 
* [Check the status of IBM Cloud services](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/service-status.html) 
* [Configure private service endpoints](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/endpoints-vrf.html) 

Parent topic:[Administration](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/administer-accounts.html)
# Managing your IBM Cloud account # You can manage your IBM Cloud account to view billing and usage, manage account users, and manage services\. **Required permissions** : You must be the IBM Cloud account owner or administrator\. To manage your IBM Cloud account, choose **Administration > Account and billing > Account > Manage in IBM Cloud** from IBM watsonx\. Then from the IBM Cloud console, choose an option from the **Manage** menu\. <!-- <ul> --> * Account: See [Adding orgs and spaces](https://cloud.ibm.com/docs/account?topic=account-orgsspacesusers#orgsspacesusers) and [Managing resource groups](https://cloud.ibm.com/docs/account?topic=account-rgs)\. * Billing and Usage: See [How you're charged](https://cloud.ibm.com/docs/billing-usage?topic=billing-usage-charges#charges)\. * Access (IAM): See [Inviting users](https://cloud.ibm.com/docs/account?topic=account-access-getstarted)\. <!-- </ul> --> ## Learn more ## <!-- <ul> --> * [Activity Tracker events](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/at-events.html) * [Manage your settings](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.html) * [Set up IBM watsonx for your organization](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-platform.html) * [Manage users and access](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-access.html) * [IBM Cloud SAML Federation Guide](https://www.ibm.com/cloud/blog/ibm-cloud-saml-federation-guide) * [Delete your IBM Cloud account](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html#deletecloud) * [Check the status of IBM Cloud services](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/service-status.html) * [Configure private service endpoints](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/endpoints-vrf.html) <!-- </ul> --> **Parent topic:**[Administration](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/administer-accounts.html) <!-- </article "role="article" "> -->
BEDA84E76E7F8FA5594F63E640DC17B4F6CB3E5E
https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html?context=cdpaas&locale=en
Monitoring account resource usage
Monitoring account resource usage Some service plans charge for compute usage and other types of resource usage. If you are the IBM Cloud account owner or administrator, you can monitor resource usage to ensure the limits are not exceeded. For Lite plans, you cannot exceed the limits of the plan. You must wait until the start of your next billing month to use resources that are calculated monthly. Alternatively, you can [upgrade to a paid plan](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/upgrade.html). For most paid plans, you pay for the resources that the tools and processes that are provided by the service consume each month. To see the costs of your plan, log in to IBM Cloud, open your service instance from your IBM Cloud dashboard, and click Plan. 

* [Capacity unit hours (CUH) for compute usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html?context=cdpaas&locale=en#compute) 
* [Resource units for foundation model inferencing](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html?context=cdpaas&locale=en#rus) 
* [Monitor monthly billing](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html?context=cdpaas&locale=en#billing) 

Capacity unit hours (CUH) for compute usage Many tools consume compute usage that is measured in capacity unit hours (CUH). A capacity unit hour is a specific amount of compute capability with a set cost. How compute usage is calculated Different types of processes and different levels of compute power are billed at different rates of capacity units per hour. For example, the hourly rate for a data profiling process is 6 capacity units. Compute usage for Watson Studio is charged by the minute, with a minimum charge of 10 minutes (0.16 hours). Compute usage for Watson Machine Learning is charged by the minute with a minimum charge of one minute. Compute usage is calculated by adding the minimum number of minutes billed for each process to the number of minutes the process runs beyond the minimum, then multiplying the total by the capacity unit rate for the process. The following table shows examples of how the billed CUH is calculated. Rate Usage time Calculation Total CUH billed 1 CUH/hour 1 hour 1 hour * 1 CUH/hour 1 CUH 2 CUH/hour 45 minutes 0.75 hours * 2 CUH/hour 1.5 CUH 6 CUH/hour 5 minutes 0.16 hours * 6 CUH/hour 0.96 CUH. The minimum charge for Watson Studio applies. 6 CUH/hour 30 minutes 0.5 hours * 6 CUH/hour 3 CUH 6 CUH/hour 1 hour 1 hour * 6 CUH/hour 6 CUH Processes that consume capacity unit hours Some types of processes, such as AutoAI and Federated Learning, have a single compute rate for the runtime. However, with many tools you have a choice of compute resources for the runtime. The notebook editor, Data Refinery, SPSS Modeler, and other tools have different rates that reflect the memory and compute power for the environment. Environments with more memory and compute power consume capacity unit hours at a higher rate. This table shows each process that consumes CUH, where it runs, which service the CUH is billed against, and whether you can choose from more than one environment. Follow the links to view the available CUH rates for each process. Tool or Process Workspace Service that provides CUH Multiple CUH rates? 
[Notebook editor](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html) Project Watson Studio, Analytics Engine (Spark) Multiple rates [Invoking the machine learning API from a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html#wml) Project Watson Machine Learning Multiple rates [Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spark-dr-envs.html) Project Watson Studio Multiple rates [SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-envs.html) Project Watson Studio Multiple rates [RStudio IDE](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-envs.html) Project Watson Studio Multiple rates [AutoAI experiments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-autoai.html) Project Watson Machine Learning Multiple rates [Decision Optimization experiments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-decisionopt.html) Spaces Watson Machine Learning Multiple rates [Running deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-cuh-deploy-spaces.html) Spaces Watson Machine Learning Multiple rates [Profiling](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html#profiling) Project Watson Studio One rate [Synthetic Data Generator](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/synthetic-envs.html) Project Watson Studio One rate Monitoring compute usage You can monitor compute usage for all services at the account level. To view the monthly CUH usage for a service, open the service instance from your IBM Cloud dashboard and click Plan. You can also monitor compute usage in a project on the Environments page on the Manage tab. To see the total amount of capacity unit hours that are used and that are remaining for Watson Studio and Watson Machine Learning, look at the Environment Runtimes page. From the navigation menu, select Administration > Environment runtimes. The Environment Runtimes page shows details of the [CUH used by environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/track-runtime-usage.html#track-account). You can calculate the amount of CUH you use for data flows and profiling by subtracting the amount used by environments from the total amount used. Resource units for foundation model inferencing Calling a foundation model to generate output in response to a prompt is known as inferencing. Foundation model inferencing is measured in resource units (RU). Each RU equals 1,000 tokens. A token is a basic unit of text (typically 4 characters or 0.75 words) used in the input or output for a foundation model prompt. For details on tokens, see [Tokens](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tokens.html). Resource unit billing is based on the rate of the foundation model class multiplied by the number of tokens. Foundation models are classified into three classes. See [Resource unit metering](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html#ru-metering). Note: You do not consume tokens when you use the generative AI search and answer app for this documentation site. Monitoring token usage for foundation model inferencing You can monitor foundation model token usage in a project on the Environments page on the Manage tab. 
Monitor monthly billing You must be an IBM Cloud account owner or administrator to see resource usage information. To view a summary of your monthly billing, from the navigation menu, choose Administration > Account and billing > Billing and usage. The IBM Cloud usage dashboard opens. To view the usage for each service, in the Usage summary section, click View usage. 

Learn more 

* [Choosing compute resources for running tools in projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html) 
* [Upgrade services](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/upgrade.html) 
* [Environments compute usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/track-runtime-usage.html#track-account) 
* [Watson Studio offering plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/ws-plans.html) 
* [Watson Machine Learning plans and compute usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html) 

Parent topic:[Managing the platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html)
# Monitoring account resource usage #

Some service plans charge for compute usage and other types of resource usage\. If you are the IBM Cloud account owner or administrator, you can monitor resource usage to ensure the limits are not exceeded\.

For Lite plans, you cannot exceed the limits of the plan\. You must wait until the start of your next billing month to use resources that are calculated monthly\. Alternatively, you can [upgrade to a paid plan](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/upgrade.html)\.

For most paid plans, you pay for the resources that the tools and processes that are provided by the service consume each month\. To see the costs of your plan, log in to IBM Cloud, open your service instance from your IBM Cloud dashboard, and click **Plan**\.

<!-- <ul> -->

* [Capacity unit hours (CUH) for compute usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html?context=cdpaas&locale=en#compute)
* [Resource units for foundation model inferencing](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html?context=cdpaas&locale=en#rus)
* [Monitor monthly billing](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html?context=cdpaas&locale=en#billing)

<!-- </ul> -->

## Capacity unit hours (CUH) for compute usage ##

Many tools consume compute usage that is measured in capacity unit hours (CUH)\. A capacity unit hour is a specific amount of compute capability with a set cost\.

### How compute usage is calculated ###

Different types of processes and different levels of compute power are billed at different rates of capacity units per hour\. For example, the hourly rate for a data profiling process is 6 capacity units\. Compute usage for Watson Studio is charged by the minute, with a minimum charge of 10 minutes (0\.16 hours)\. Compute usage for Watson Machine Learning is charged by the minute with a minimum charge of one minute\.

Compute usage is calculated by adding the minimum number of minutes billed for each process to the number of minutes the process runs beyond the minimum, then multiplying the total by the capacity unit rate for the process\. The following table shows examples of how the billed CUH is calculated\.

<!-- <table> -->

| Rate       | Usage time | Calculation               | Total CUH billed                                           |
| ---------- | ---------- | ------------------------- | ---------------------------------------------------------- |
| 1 CUH/hour | 1 hour     | 1 hour \* 1 CUH/hour      | 1 CUH                                                      |
| 2 CUH/hour | 45 minutes | 0\.75 hours \* 2 CUH/hour | 1\.5 CUH                                                   |
| 6 CUH/hour | 5 minutes  | 0\.16 hours \* 6 CUH/hour | 0\.96 CUH\. The minimum charge for Watson Studio applies\. |
| 6 CUH/hour | 30 minutes | 0\.5 hours \* 6 CUH/hour  | 3 CUH                                                      |
| 6 CUH/hour | 1 hour     | 1 hour \* 6 CUH/hour      | 6 CUH                                                      |

<!-- </table ""> -->

### Processes that consume capacity unit hours ###

Some types of processes, such as AutoAI and Federated Learning, have a single compute rate for the runtime\. However, with many tools you have a choice of compute resources for the runtime\. The notebook editor, Data Refinery, SPSS Modeler, and other tools have different rates that reflect the memory and compute power for the environment\. Environments with more memory and compute power consume capacity unit hours at a higher rate\.

This table shows each process that consumes CUH, where it runs, which service the CUH is billed against, and whether you can choose from more than one environment\. 
Follow the links to view the available CUH rates for each process\.

<!-- <table> -->

| Tool or Process | Workspace | Service that provides CUH | Multiple CUH rates? |
| ----------------------------------------------------- | --------- | --------------------------------------- | ------------------- |
| [Notebook editor](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html) | Project | Watson Studio, Analytics Engine (Spark) | Multiple rates |
| [Invoking the machine learning API from a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html#wml) | Project | Watson Machine Learning | Multiple rates |
| [Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spark-dr-envs.html) | Project | Watson Studio | Multiple rates |
| [SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-envs.html) | Project | Watson Studio | Multiple rates |
| [RStudio IDE](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-envs.html) | Project | Watson Studio | Multiple rates |
| [AutoAI experiments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-autoai.html) | Project | Watson Machine Learning | Multiple rates |
| [Decision Optimization experiments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-decisionopt.html) | Spaces | Watson Machine Learning | Multiple rates |
| [Running deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-cuh-deploy-spaces.html) | Spaces | Watson Machine Learning | Multiple rates |
| [Profiling](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html#profiling) | Project | Watson Studio | One rate |
| [Synthetic Data Generator](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/synthetic-envs.html) | Project | Watson Studio | One rate |

<!-- </table ""> -->

### Monitoring compute usage ###

You can monitor compute usage for all services at the account level\. To view the monthly CUH usage for a service, open the service instance from your IBM Cloud dashboard and click **Plan**\.

You can also monitor compute usage in a project on the **Environments** page on the **Manage** tab\.

To see the total amount of capacity unit hours that are used and that are remaining for Watson Studio and Watson Machine Learning, look at the **Environment Runtimes** page\. From the navigation menu, select **Administration > Environment runtimes**\. The **Environment Runtimes** page shows details of the [CUH used by environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/track-runtime-usage.html#track-account)\. You can calculate the amount of CUH you use for data flows and profiling by subtracting the amount used by environments from the total amount used\.

## Resource units for foundation model inferencing ##

Calling a foundation model to generate output in response to a prompt is known as inferencing\. Foundation model inferencing is measured in resource units (RU)\. Each RU equals 1,000 tokens\. A token is a basic unit of text (typically 4 characters or 0\.75 words) used in the input or output for a foundation model prompt\. For details on tokens, see [Tokens](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tokens.html)\.

Resource unit billing is based on the rate of the foundation model class multiplied by the number of tokens\. Foundation models are classified into three classes\. 
See [Resource unit metering](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html#ru-metering)\. Note: You do not consume tokens when you use the generative AI search and answer app for this documentation site\. ### Monitoring token usage for foundation model inferencing ### You can monitor foundation model token usage in a project on the **Environments** page on the **Manage** tab\. ## Monitor monthly billing ## You must be an IBM Cloud account owner or administrator to see resource usage information\. To view a summary of your monthly billing, from the navigation menu, choose **Administration > Account and billing > Billing and usage**\. The IBM Cloud usage dashboard opens\. To view the usage for each service, in the **Usage summary** section, click **View usage**\. ## Learn more ## <!-- <ul> --> * [Choosing compute resources for running tools in projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html) * [Upgrade services](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/upgrade.html) * [Environments compute usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/track-runtime-usage.html#track-account) * [Watson Studio offering plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/ws-plans.html) * [Watson Machine Learning plans and compute usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html) <!-- </ul> --> **Parent topic:**[Managing the platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html) <!-- </article "role="article" "> -->
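The CUH and resource unit calculations above are simple arithmetic. The following sketch reproduces the billing table rows and the RU conversion; the per-RU class rate is a hypothetical placeholder, since actual rates depend on the model class (see Resource unit metering), and the truncation of hours to two decimal places mirrors the 10-minute example in the table.

```python
# Worked examples of the two billing formulas described in this topic.
import math

def billed_cuh(minutes_used: float, rate_cuh_per_hour: float, minimum_minutes: float) -> float:
    """CUH = billable hours x rate, where usage below the minimum is rounded up."""
    billable_minutes = max(minutes_used, minimum_minutes)
    # The table above truncates hours to two decimal places (10 min -> 0.16 h).
    hours = math.floor(billable_minutes / 60 * 100) / 100
    return hours * rate_cuh_per_hour

# Reproduce the table rows; Watson Studio has a 10-minute minimum.
print(billed_cuh(5, 6, minimum_minutes=10))   # 0.96 CUH (minimum charge applies)
print(billed_cuh(45, 2, minimum_minutes=10))  # 1.5 CUH
print(billed_cuh(60, 1, minimum_minutes=10))  # 1.0 CUH

def resource_units(tokens: int) -> float:
    """Each resource unit (RU) equals 1,000 input or output tokens."""
    return tokens / 1000

# Billing multiplies RUs by the rate for the model's class.
CLASS_RATE_PER_RU = 0.0018  # hypothetical rate; see Resource unit metering
print(resource_units(25_000) * CLASS_RATE_PER_RU)
```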
C4E83640891EA5D02EAE76027D05FDEFE2C4EFFE
https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.html?context=cdpaas&locale=en
Managing your settings
Managing your settings You can manage your profile, services, integrations, and notifications while logged in to IBM watsonx. 

* [Manage your profile](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.html?context=cdpaas&locale=en#profile) 
* [Manage user API keys](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-apikeys.html) 
* [Switch accounts](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.html?context=cdpaas&locale=en#account) 
* [Manage your services](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html) 
* [Manage your integrations](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.html?context=cdpaas&locale=en#integrations) 
* [Manage your notification settings](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.html?context=cdpaas&locale=en#bell) 
* [View and personalize your project summary](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.html?context=cdpaas&locale=en#project-summary) 

Manage your profile You can manage your profile on the Profile page by clicking your avatar in the banner and then clicking Profile and settings. You can make these changes to your profile: 

* Add or change your avatar photo. 
* Change your IBMid or password. Do not change your IBMid (email address) after you register with the IBM watsonx platform. The IBMid (email address) uniquely identifies users in the platform and also authorizes access to various IBM watsonx resources, including projects, spaces, models, and catalogs. If you change your IBMid (email address) in your IBM Cloud profile after you have registered with IBM watsonx, you will lose access to the platform and associated resources. 
* Set your service locations filters by resource group and location. The filters apply throughout the platform. For example, the Service instances page that you access through the Services menu shows only the filtered services. Ensure you have selected the region where Watson Studio is located, for example, Dallas, as well as the Global location. Global is required to provide access to your IBM Cloud Object Storage instance. 
* Access your IBM Cloud account. 
* [Leave IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/stopapps.html#deactivate). 

Switch accounts If you are added to a shared IBM Cloud account that is different from your individual account, you can switch your account by selecting a different account from the account list in the menu bar, next to your avatar. Manage your integrations To set up or modify an integration to GitHub: 

1. Click your avatar in the banner. 
2. Click Profile and settings. 
3. Click the Git integrations tab. 

See [Publish notebooks on GitHub](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/github-integration.html). Manage your notification settings To see your notification settings, click the notification bell icon and then click the settings icon. You can make these changes to your notification settings: 

* Specify to receive push notifications that appear briefly on screen. If you select Do not disturb, you continue to see notifications on the home page and the number of notifications on the bell. 
* Specify to receive notifications by email. 
* Specify for which projects or spaces you receive notifications. 

View and personalize your project summary Use the Overview page of a project to view a summary of what's happening in your project. 
You can jump back into your most recent work and keep up to date with alerts, tasks, project history, and compute usage. View recent asset activity in the Assets pane on the Overview page, and filter the assets by selecting By you or By all using the dropdown. Selecting By you lists assets edited by you, ordered by most recent at the top. Selecting By all lists assets edited by others and also by you, ordered by most recent at the top. You can use the readme file on the Overview page to document the status or results of the project. The readme file uses standard [Markdown formatting](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/markd-jupyter.html). Collaborators with the Admin or Editor role can edit the readme file. 

Learn more 

* [Managing your IBM Cloud account](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/manage-account.html) 
* [Managing your services](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/create-services.html#manage) 

Parent topic:[Administration](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/administer-accounts.html)
# Managing your settings # You can manage your profile, services, integrations, and notifications while logged in to IBM watsonx\. <!-- <ul> --> * [Manage your profile](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.html?context=cdpaas&locale=en#profile) * [Manage user API keys](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-apikeys.html) * [Switch accounts](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.html?context=cdpaas&locale=en#account) * [Manage your services](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html) * [Manage your integrations](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.html?context=cdpaas&locale=en#integrations) * [Manage your notification settings](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.html?context=cdpaas&locale=en#bell) * [View and personalize your project summary](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.html?context=cdpaas&locale=en#project-summary) <!-- </ul> --> ## Manage your profile ## You can manage your profile on the **Profile** page by clicking your avatar in the banner and then clicking **Profile and settings**\. You can make these changes to your profile: <!-- <ul> --> * Add or change your avatar photo\. * Change your IBMid or password\. Do not change your IBMid (email address) after you register with the IBM watsonx platform\. The IBMid (email address) uniquely identifies users in the platform and also authorizes access to various IBM watsonx resources, including projects, spaces, models, and catalogs\. If you change your IBMid (email address) in your IBM Cloud profile after you have registered with IBM watsonx, you will lose access to the platform and associated resources\. * Set your service locations filters by resource group and location\. The filters apply throughout the platform\. For example, the **Service instances** page that you access through the **Services** menu shows only the filtered services\. Ensure you have selected the region where Watson Studio is located, for example, **Dallas**, as well as the **Global** location\. **Global** is required to provide access to your IBM Cloud Object Storage instance\. * Access your IBM Cloud account\. * [Leave IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/stopapps.html#deactivate)\. <!-- </ul> --> ## Switch accounts ## If you are added to a shared IBM Cloud account that is different from your individual account, you can switch your account by selecting a different account from the account list in the menu bar, next to your avatar\. ## Manage your integrations ## To set up or modify an integration to GitHub: <!-- <ol> --> 1. Click your avatar in the banner\. 2. Click **Profile and settings**\. 3. Click the **Git integrations** tab\. <!-- </ol> --> See [Publish notebooks on GitHub](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/github-integration.html)\. ## Manage your notification settings ## To see your notification settings, click the notification bell icon and then click the settings icon\. You can make these changes to your notification settings: <!-- <ul> --> * Specify to receive push notifications that appear briefly on screen\. If you select **Do not disturb**, you continue to see notifications on the home page and the number of notifications on the bell\. * Specify to receive notifications by email\. 
* Specify for which projects or spaces you receive notifications\. <!-- </ul> --> ## View and personalize your project summary ## Use the **Overview** page of a project to view a summary of what's happening in your project\. You can jump back into your most recent work and keep up to date with alerts, tasks, project history, and compute usage\. View recent asset activity in the **Assets** pane on the **Overview** page, and filter the assets by selecting **By you** or **By all** using the dropdown\. Selecting **By you** lists assets edited by you, ordered by most recent at the top\. Selecting **By all** lists assets edited by others and also by you, ordered by most recent at the top\. You can use the readme file on the **Overview** page to document the status or results of the project\. The readme file uses standard [Markdown formatting](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/markd-jupyter.html)\. Collaborators with the **Admin** or **Editor** role can edit the readme file\. ## Learn more ## <!-- <ul> --> * [Managing your IBM Cloud account](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/manage-account.html) * [Managing your services](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/create-services.html#manage) <!-- </ul> --> **Parent topic:**[Administration](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/administer-accounts.html) <!-- </article "role="article" "> -->
FA4ACDC5DB590992630C704D00DEFB142F2F0489
https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/storage-options.html?context=cdpaas&locale=en
Object storage for workspaces
Object storage for workspaces You must choose an IBM Cloud Object Storage instance when you create a project, catalog, or deployment space workspace. Information that is stored in IBM Cloud Object Storage is encrypted and resilient. Each workspace has its own dedicated bucket. You can encrypt the Cloud Object Storage instance that you use for workspaces with your own key. See [Encrypt IBM Cloud Object Storage with your own key](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html#byok). The Locations in each user's Profile must include the Global location to allow access to Cloud Object Storage. When you create a workspace, the Cloud Object Storage bucket defaults to Regional resiliency. Regional buckets distribute data across several data centers that are within the same metropolitan area. If one of these data centers suffers an outage or destruction, availability and performance are not affected. If you are the account owner or administrator, you administer Cloud Object Storage from the Resource list > Storage page on the IBM Cloud dashboard. For example, you can upload and download assets, manage buckets, and configure credentials and other security settings for the Cloud Object Storage instance. Follow these steps to manage the Cloud Object Storage instance on IBM Cloud: 

1. Select a project from the Project list. 
2. Click the Manage tab. 
3. On the General page, locate the Storage section that displays the bucket name for the project. 
4. Select Manage in IBM Cloud to open the Cloud Object Storage Buckets list. 
5. Select the bucket name for the project to display a list of assets. 
6. Checkmark an asset to download it or perform other tasks as needed. 

Watch this video to see how to manage an object storage instance. Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform. This video provides a visual method to learn the concepts and tasks in this documentation. 

* Transcript 

Synchronize transcript with video 

Time Transcript 
00:00 This video shows how to manage an IBM Cloud Object Storage instance. 
00:06 When you create a Watson Studio project, an IBM Cloud Object Storage instance is associated with the project. 
00:15 On the Manage tab, you'll see the associated object storage instance and have the option to manage it in IBM Cloud. 
00:24 IBM Cloud Object Storage uses buckets to organize your data. 
00:30 You can see that this instance contains a bucket with the "jupyternotebooks" prefix, which was created when the "Jupyter Notebooks" project was created. 
00:41 If you open that bucket, you'll see all of the files that you added to that project. 
00:47 From here, you can download an object or delete it from the bucket. 
00:53 You can also view the object SQL URL to access that object from your application. 
01:00 You can add objects to the bucket from here. 
01:03 Just browse to select the file and wait for it to upload to storage. 
01:10 And then that file will be available in the Files slide-out panel in the project. 
01:16 Let's create a bucket. 
01:20 You can create a Standard or Archive bucket, based on predefined settings, or create a custom bucket. 
01:28 Provide a bucket name, which must be unique across the IBM Cloud Object Storage system. 
01:35 Select a resiliency. 
01:38 Cross Region provides higher availability and durability and Regional provides higher performance. 
01:45 The Single Site option will only distribute data across devices within a single site. 
01:52 Then select the location based on workload proximity. 
01:57 Next, select a storage class, which defines the cost of storing data based on frequency of access. 02:05 Smart Tier provides automatic cost optimization for your storage. 02:11 Standard indicates frequent access. 02:14 Vault is for less frequent access. 02:18 And Cold Vault is for rare access. 02:21 There are other, optional settings to add rules, keys, and services. 02:27 Refer to the documentation for more details on these options. 02:32 When you're ready, create the bucket. 02:35 And, from here, you could add files to that bucket. 02:40 On the Access policies panel, you can manage access to buckets using IAM policies - that's Identity and Access Management. 02:50 On the Configuration panel, you'll find information about Key Protect encryption keys, as well as the bucket instance CRN and endpoints to access the data in the buckets from your application. 03:01 You can also find some of the same information on the Endpoints panel. 03:06 On the Service credentials panel, you'll find the API and access keys to authenticate with your instance from your application. 03:15 You can also connect the object storage to a Cloud Foundry application, check usage details, and view your plan details. 03:26 Find more videos in the Cloud Pak for Data as a Service documentation. Learn more * [Setting up IBM Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html) * [IBM Cloud docs: Getting started with IBM Cloud Object Storage](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-getting-started-cloud-object-storage) * [IBM Cloud docs: Endpoints and storage locations](https://cloud.ibm.com/docs/cloud-object-storage/basics?topic=cloud-object-storage-endpoints) * [Troubleshooting Cloud Object Storage for projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/troubleshoot-cos.html) Parent topic:[Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html)
# Object storage for workspaces # You must choose an IBM Cloud Object Storage instance when you create a project, catalog, or deployment space workspace\. Information that is stored in IBM Cloud Object Storage is encrypted and resilient\. Each workspace has its own dedicated bucket\. You can encrypt the Cloud Object Storage instance that you use for workspaces with your own key\. See [Encrypt IBM Cloud Object Storage with your own key](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html#byok)\. The Locations in each user's Profile must include the **Global** location to allow access to Cloud Object Storage\. When you create a workspace, the Cloud Object Storage bucket defaults to Regional resiliency\. Regional buckets distribute data across several data centers that are within the same metropolitan area\. If one of these data centers suffers an outage or destruction, availability and performance are not affected\. If you are the account owner or administrator, you administer Cloud Object Storage from the **Resource list > Storage** page on the IBM Cloud dashboard\. For example, you can upload and download assets, manage buckets, and configure credentials and other security settings for the Cloud Object Storage instance\. Follow these steps to manage the Cloud Object Storage instance on IBM Cloud: <!-- <ol> --> 1. Select a project from the Project list\. 2. Click the **Manage** tab\. 3. On the **General** page, locate the **Storage** section that displays the bucket name for the project\. 4. Select **Manage in IBM Cloud** to open the Cloud Object Storage **Buckets** list\. 5. Select the bucket name for the project to display a list of assets\. 6. Checkmark an asset to download it or perform other tasks as needed\. <!-- </ol> --> Watch this video to see how to manage an object storage instance\. Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform\. This video provides a visual method to learn the concepts and tasks in this documentation\. <!-- <ul> --> * Transcript Synchronize transcript with video <!-- <table "class="bx--data-table bx--data-table--zebra" style="border-collapse: collapse; border: none;" "> --> | Time | Transcript | | ----- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | 00:00 | This video shows how to manage an IBM Cloud Object Storage instance. | | 00:06 | When you create a Watson Studio project, an IBM Cloud Object Storage instance is associated with the project. | | 00:15 | On the Manage tab, you'll see the associated object storage instance and have the option to manage it in IBM Cloud. | | 00:24 | IBM Cloud Object Storage uses buckets to organize your data. | | 00:30 | You can see that this instance contains a bucket with the "jupyternotebooks" prefix, which was created when the "Jupyter Notebooks" project was created. | | 00:41 | If you open that bucket, you'll see all of the files that you added to that project. | | 00:47 | From here, you can download an object or delete it from the bucket. | | 00:53 | You can also view the object SQL URL to access that object from your application. | | 01:00 | You can add objects to the bucket from here. | | 01:03 | Just browse to select the file and wait for it to upload to storage. | | 01:10 | And then that file will be available in the Files slide-out panel in the project. 
| | 01:16 | Let's create a bucket. | | 01:20 | You can create a Standard or Archive bucket, based on predefined settings, or create a custom bucket. | | 01:28 | Provide a bucket name, which must be unique across the IBM Cloud Object Storage system. | | 01:35 | Select a resiliency. | | 01:38 | Cross Region provides higher availability and durability and Regional provides higher performance. | | 01:45 | The Single Site option will only distribute data across devices within a single site. | | 01:52 | Then select the location based on workload proximity. | | 01:57 | Next, select a storage class, which defines the cost of storing data based on frequency of access. | | 02:05 | Smart Tier provides automatic cost optimization for your storage. | | 02:11 | Standard indicates frequent access. | | 02:14 | Vault is for less frequent access. | | 02:18 | And Cold Vault is for rare access. | | 02:21 | There are other, optional settings to add rules, keys, and services. | | 02:27 | Refer to the documentation for more details on these options. | | 02:32 | When you're ready, create the bucket. | | 02:35 | And, from here, you could add files to that bucket. | | 02:40 | On the Access policies panel, you can manage access to buckets using IAM policies - that's Identity and Access Management. | | 02:50 | On the Configuration panel, you'll find information about Key Protect encryption keys, as well as the bucket instance CRN and endpoints to access the data in the buckets from your application. | | 03:01 | You can also find some of the same information on the Endpoints panel. | | 03:06 | On the Service credentials panel, you'll find the API and access keys to authenticate with your instance from your application. | | 03:15 | You can also connect the object storage to a Cloud Foundry application, check usage details, and view your plan details. | | 03:26 | Find more videos in the Cloud Pak for Data as a Service documentation. | <!-- </table "class="bx--data-table bx--data-table--zebra" style="border-collapse: collapse; border: none;" "> --> <!-- </ul> --> ## Learn more ## <!-- <ul> --> * [Setting up IBM Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html) * [IBM Cloud docs: Getting started with IBM Cloud Object Storage](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-getting-started-cloud-object-storage) * [IBM Cloud docs: Endpoints and storage locations](https://cloud.ibm.com/docs/cloud-object-storage/basics?topic=cloud-object-storage-endpoints) * [Troubleshooting Cloud Object Storage for projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/troubleshoot-cos.html) <!-- </ul> --> **Parent topic:**[Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html) <!-- </article "role="article" "> -->
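For illustration, here is a minimal sketch of how an application might use the values from the Service credentials and Endpoints panels described above. It assumes the `ibm-cos-sdk` Python package; the API key, instance CRN, endpoint, and bucket name are placeholders to replace with your own values, and the exact calls should be checked against the IBM Cloud Object Storage SDK documentation.

```python
import ibm_boto3
from ibm_botocore.client import Config

# Placeholders: take these from the Service credentials panel (apikey,
# resource_instance_id) and the Endpoints panel (endpoint URL).
COS_API_KEY = "<apikey>"
COS_INSTANCE_CRN = "<resource_instance_id>"
COS_ENDPOINT = "https://s3.us-south.cloud-object-storage.appdomain.cloud"

cos = ibm_boto3.client(
    "s3",
    ibm_api_key_id=COS_API_KEY,
    ibm_service_instance_id=COS_INSTANCE_CRN,
    config=Config(signature_version="oauth"),
    endpoint_url=COS_ENDPOINT,
)

# Create a Regional bucket with the Smart Tier storage class. The class
# is encoded in LocationConstraint as "<location>-<class>", for example
# us-south-smart, us-south-standard, us-south-vault, or us-south-cold.
cos.create_bucket(
    Bucket="my-project-bucket-example",  # hypothetical name; must be unique
    CreateBucketConfiguration={"LocationConstraint": "us-south-smart"},
)

# Add a file to the bucket, then list its contents.
cos.upload_file("data.csv", "my-project-bucket-example", "data.csv")
objects = cos.list_objects_v2(Bucket="my-project-bucket-example")
for obj in objects.get("Contents", []):
    print(obj["Key"], obj["Size"])
```

The resiliency and storage class chosen in the console map onto the `LocationConstraint` value here, which is why the same two choices from the video (location, then class) appear as a single combined string in code.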
26FB8B86499454EFD078384D70B02917D1C7DAE1
https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/svc-int.html?context=cdpaas&locale=en
Services and integrations
Services and integrations You can extend the functionality of the platform by provisioning other services and components, and integrating with other cloud platforms. * [Provision instances of services and components from the Services catalog](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/cloud-services.html). Add service instances and components to the IBM Cloud account to add functionality to the platform. You must be the owner or be assigned the Administrator or Editor role in the IBM Cloud account for IBM watsonx to provision service instances. * [Integrate with other cloud platforms](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/int-cloud.html). Allow users to easily create connections to data sources on those cloud platforms. You must have the required roles or permissions on the other cloud platform accounts. * [View regional availability and limitations](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/regional-datactr.html). Get information about where services are available by region. * To integrate with data sources, you can [create many types of connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) to work with a broad array of data sources. Refer to [Connecting to data behind a firewall](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html) to create secure connections for data sources that are not externalized to the internet.
# Services and integrations # You can extend the functionality of the platform by provisioning other services and components, and integrating with other cloud platforms\. <!-- <ul> --> * [Provision instances of services and components from the Services catalog](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/cloud-services.html)\. Add service instances and components to the IBM Cloud account to add functionality to the platform\. You must be the owner or be assigned the **Administrator** or **Editor** role in the IBM Cloud account for IBM watsonx to provision service instances\. * [Integrate with other cloud platforms](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/int-cloud.html)\. Allow users to easily create connections to data sources on those cloud platforms\. You must have the required roles or permissions on the other cloud platform accounts\. * [View regional availability and limitations](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/regional-datactr.html)\. Get information about where services are available by region\. * To integrate with data sources, you can [create many types of connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html) to work with a broad array of data sources\. Refer to [Connecting to data behind a firewall](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html) to create secure connections for data sources that are not externalized to the internet\. <!-- </ul> --> <!-- </article "role="article" "> -->
0B4E34A88C9328EC6E0CC8A690B466F441E5EFC6
https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/upgrade.html?context=cdpaas&locale=en
Upgrading services on IBM watsonx
Upgrading services on IBM watsonx When you're ready to upgrade services, you can upgrade in place without losing any of your work or data. Each service has its own plan and is independent of other plans. Required permissions : You must have an IBM Cloud IAM access policy with the Editor or Administrator role on all account management services. Step 1: Update your IBM Cloud account You can skip this step if your IBM Cloud account has billing information with a Pay-As-You-Go or a subscription plan. You must update your IBM Cloud account in the following circumstances: * You have a Trial account from signing up for watsonx. * You have a Trial account that you [registered through an academic institution](https://ibm.biz/academic). * You have a [Lite account](https://cloud.ibm.com/docs/account?topic=account-accounts#liteaccount) that you created before 25 October 2021. * You want to change a Pay-As-You-Go plan to a subscription plan. For instructions on updating your IBM Cloud account, see [Update your IBM Cloud account](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-account.html#paid-account). Step 2: Upgrade your service plans You can upgrade the service plans for services. To upgrade service plans, you must have an IBM Cloud access policy with either the Editor or Administrator platform role for the services. To upgrade a service plan: 1. Click Upgrade on the header or choose Administration > Account and billing > Upgrade service plans from the main menu to open the Upgrade service plans page. 2. Select one or more services to change the service plans. 3. Click Select plan for each service in the Pricing summary pane. Select the plan from the Services catalog page for the service. 4. Agree to the terms, then click Buy. Your service plans are instantly updated. After the upgrade, the additional features and capacity for the new plan are automatically available. For the following services, the difference between plans can be significant: * [Watson Studio offering plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/ws-plans.html) * [Watson Machine Learning plans and compute usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html) Learn more * [IBM Cloud docs: Account types](https://cloud.ibm.com/docs/account?topic=account-accounts) * [IBM Cloud docs: Upgrading your account](https://cloud.ibm.com/docs/account?topic=account-upgrading-account) * [Setting up the IBM Cloud account](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-account.html) * [Monitoring account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html) * [Find your account administrators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html#accountadmin) Parent topic:[Managing IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html)
# Upgrading services on IBM watsonx # When you're ready to upgrade services, you can upgrade in place without losing any of your work or data\. Each service has its own plan and is independent of other plans\. **Required permissions** : You must have an IBM Cloud IAM access policy with the **Editor** or **Administrator** role on all account management services\. ## Step 1: Update your IBM Cloud account ## You can skip this step if your IBM Cloud account has billing information with a Pay\-As\-You\-Go or a subscription plan\. You must update your IBM Cloud account in the following circumstances: <!-- <ul> --> * You have a Trial account from signing up for watsonx\. * You have a Trial account that you [registered through an academic institution](https://ibm.biz/academic)\. * You have a [Lite account](https://cloud.ibm.com/docs/account?topic=account-accounts#liteaccount) that you created before 25 October 2021\. * You want to change a Pay\-As\-You\-Go plan to a subscription plan\. <!-- </ul> --> For instructions on updating your IBM Cloud account, see [Update your IBM Cloud account](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-account.html#paid-account)\. ## Step 2: Upgrade your service plans ## You can upgrade the service plans for services\. To upgrade service plans, you must have an IBM Cloud access policy with either the **Editor** or **Administrator** platform role for the services\. To upgrade a service plan: <!-- <ol> --> 1. Click **Upgrade** on the header or choose **Administration > Account and billing > Upgrade service plans** from the main menu to open the **Upgrade service plans** page\. 2. Select one or more services to change the service plans\. 3. Click **Select plan** for each service in the **Pricing summary** pane\. Select the plan from the **Services catalog** page for the service\. 4. Agree to the terms, then click **Buy**\. Your service plans are instantly updated\. <!-- </ol> --> After the upgrade, the additional features and capacity for the new plan are automatically available\. For the following services, the difference between plans can be significant: <!-- <ul> --> * [Watson Studio offering plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/ws-plans.html) * [Watson Machine Learning plans and compute usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html) <!-- </ul> --> ## Learn more ## <!-- <ul> --> * [IBM Cloud docs: Account types](https://cloud.ibm.com/docs/account?topic=account-accounts) * [IBM Cloud docs: Upgrading your account](https://cloud.ibm.com/docs/account?topic=account-upgrading-account) * [Setting up the IBM Cloud account](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-account.html) * [Monitoring account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html) * [Find your account administrators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html#accountadmin) <!-- </ul> --> **Parent topic:**[Managing IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html) <!-- </article "role="article" "> -->
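The steps above use the web console. As a rough sketch of the equivalent API call, the following assumes the `ibm-platform-services` Python SDK and its Resource Controller client; the instance GUID and plan ID are placeholders, and the argument names should be verified against the current Resource Controller API reference.

```python
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_platform_services import ResourceControllerV2

# Authenticate with an IBM Cloud API key (placeholder value).
controller = ResourceControllerV2(
    authenticator=IAMAuthenticator("<IBM Cloud API key>")
)

# Both IDs are placeholders: the instance GUID identifies the service
# instance to upgrade, and the plan ID identifies the new plan in the
# IBM Cloud catalog.
response = controller.update_resource_instance(
    id="<service instance GUID>",
    resource_plan_id="<new plan ID>",
).get_result()

print(response["name"], "->", response["resource_plan_id"])
```

As with the console flow, the plan change takes effect as soon as the request succeeds.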
4BFEB479BCB6BBD28DD18EC423FDC5FB9C39B4B6
https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/your-roles.html?context=cdpaas&locale=en
Determining your roles and permissions
Determining your roles and permissions You have multiple roles within IBM Cloud and IBM watsonx that provide permissions. You can determine what each of your roles are, and, when necessary, who can change your roles. Projects and catalogs roles To determine your role in a project or deployment space, look at the Access Control page on the Manage tab. Your role is listed next to your name or the service ID you use to log in. The permissions that are associated with each role are specific to the type of workspace: * [Project collaborator permissions](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborator-permissions.html) * [Deployment space collaborator permissions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/collaborator-permissions-wml.html) If you want a different role, ask someone who has the Admin role on the Access Control page to change your role. IBM Cloud IAM account and service access roles You can see your IAM account and service access roles in IBM Cloud. To see your IAM account and service access roles in IBM Cloud: 1. From the IBM watsonx main menu, click Administration > Access (IAM). 2. Click Users, then click your name. 3. Click the Access policies tab. You might have multiple entries: * The All resources in account (including future IAM enabled services) entry shows your general roles for all services in the account. * Other entries might show your roles for individual services. If you want the IBM Cloud account administrator role or another role, ask an IBM Cloud account owner or administrator to assign it to you. You can [find your account administrators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html#accountadmin) on your Access (IAM) > Users page in IBM Cloud. Learn more * [Roles in IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/roles.html) * [Find your IBM Cloud account owner or administrator](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html#accountadmin) Parent topic:[Administration](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/administer-accounts.html)
# Determining your roles and permissions # You have multiple roles within IBM Cloud and IBM watsonx that provide permissions\. You can determine what each of your roles are, and, when necessary, who can change your roles\. ## Projects and catalogs roles ## To determine your role in a project or deployment space, look at the **Access Control** page on the **Manage** tab\. Your role is listed next to your name or the service ID you use to log in\. The permissions that are associated with each role are specific to the type of workspace: <!-- <ul> --> * [Project collaborator permissions](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborator-permissions.html) * [Deployment space collaborator permissions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/collaborator-permissions-wml.html) <!-- </ul> --> If you want a different role, ask someone who has the **Admin** role on the **Access Control** page to change your role\. ## IBM Cloud IAM account and service access roles ## You can see your IAM account and service access roles in IBM Cloud\. To see your IAM account and service access roles in IBM Cloud: <!-- <ol> --> 1. From the IBM watsonx main menu, click **Administration > Access (IAM)**\. 2. Click **Users**, then click your name\. 3. Click the **Access policies** tab\. You might have multiple entries: <!-- <ul> --> * The **All resources in account (including future IAM enabled services)** entry shows your general roles for all services in the account. * Other entries might show your roles for individual services. <!-- </ul> --> <!-- </ol> --> If you want the IBM Cloud account administrator role or another role, ask an IBM Cloud account owner or administrator to assign it to you\. You can [find your account administrators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html#accountadmin) on your **Access (IAM) > Users** page in IBM Cloud\. ## Learn more ## <!-- <ul> --> * [Roles in IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/roles.html) * [Find your IBM Cloud account owner or administrator](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.html#accountadmin) <!-- </ul> --> **Parent topic:**[Administration](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/administer-accounts.html) <!-- </article "role="article" "> -->
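To check the same policies programmatically rather than through the console, here is a minimal sketch using the IAM Policy Management API via the `ibm-platform-services` Python SDK. The account ID and IAM ID are placeholders, and the response field names should be confirmed against the current API reference.

```python
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_platform_services import IamPolicyManagementV1

iam_policies = IamPolicyManagementV1(
    authenticator=IAMAuthenticator("<IBM Cloud API key>")
)

# Placeholders: both values are visible on the Access (IAM) > Users page.
result = iam_policies.list_policies(
    account_id="<account ID>",
    iam_id="<your IAM ID, for example IBMid-...>",
).get_result()

# Print the roles granted by each access policy, mirroring what the
# Access policies tab shows in the console.
for policy in result.get("policies", []):
    roles = [role.get("display_name") for role in policy.get("roles", [])]
    print(policy.get("type"), roles)
```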
8B34DED3493E5181B1D19F6D14A9598CFEAA5997
https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html?context=cdpaas&locale=en
AI risk atlas
AI risk atlas

Explore this atlas to understand some of the risks of working with generative AI, foundation models, and machine learning models.

Risks associated with input

Training and tuning phase

* Fairness: [Data bias](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/bias.html) (Amplified)
* Robustness: [Data poisoning](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/data-poisoning.html) (Traditional)
* Value alignment: [Data curation](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/data-curation.html) (Amplified), [Downstream retraining](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/downstream-retraining.html) (New)
* Data laws: [Data transfer](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/data-transfer.html) (Traditional), [Data usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/data-usage.html) (Traditional), [Data acquisition](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/data-aquisition.html) (Traditional)
* Intellectual property: [Data usage rights](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/data-usage-rights.html) (Amplified), [Confidential data disclosure](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/confidential-data-disclosure.html) (Traditional)
* Transparency: [Data transparency](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/data-transparency.html) (Amplified), [Data provenance](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/data-provenance.html) (Amplified)
* Privacy: [Personal information in data](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/personal-information-in-data.html) (Traditional), [Reidentification](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/reidentification.html) (Traditional), [Data privacy rights](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/data-privacy-rights.html) (Amplified)

Inference phase

* Privacy: [Personal information in prompt](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/personal-information-in-prompt.html) (New), [Membership inference attack](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/membership-inference-attack.html) (Traditional), [Attribute inference attack](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/attribute-inference-attack.html) (Amplified)
* Intellectual property: [Confidential data in prompt](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/confidential-data-in-prompt.html) (New)
* Robustness: [Evasion attack](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/evasion-attack.html) (Amplified), [Extraction attack](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/extraction-attack.html) (Amplified), [Prompt injection](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/prompt-injection.html) (New), [Prompt leaking](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/prompt-leaking.html) (Amplified)
* Multi-category: [Prompt priming](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/prompt-priming.html) (Amplified), [Jailbreaking](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/jailbreaking.html) (Amplified)

Risks associated with output

* Fairness: [Output bias](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/output-bias.html) (New), [Decision bias](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/decision-bias.html) (New)
* Intellectual property: [Copyright infringement](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/copyright-infringement.html) (New)
* Value alignment: [Hallucination](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/hallucination.html) (New), [Toxic output](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/toxic-output.html) (New), [Trust calibration](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/trust-calibration.html) (New), [Physical harm](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/physical-harm.html) (New), [Benign advice](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/benign-advice.html) (New), [Improper usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/improper-usage.html) (New)
* Misuse: [Spreading disinformation](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/spreading-disinformation.html) (Amplified), [Toxicity](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/toxicity.html) (New), [Nonconsensual use](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/nonconsensual-use.html) (Amplified), [Dangerous use](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/dangerous-use.html) (New), [Non-disclosure](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/non-disclosure.html) (New)
* Harmful code generation: [Harmful code generation](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/harmful-code-generation.html) (New)
* Privacy: [Personal information in output](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/personal-information-in-output.html) (New)
* Explainability: [Explaining output](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/explaining-output.html) (Amplified), [Unreliable source attribution](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/unreliable-source-attribution.html) (Amplified), [Inaccessible training data](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/inaccessible-training-data.html) (Amplified), [Untraceable attribution](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/untraceable-attribution.html) (Amplified)
# AI risk atlas # Explore this atlas to understand some of the risks of working with generative AI, foundation models, and machine learning models\. ### Risks associated with input ### #### Training and tuning phase #### ![icon for fairness risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-fairness.svg) #### Fairness #### [Data bias](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/bias.html)Amplified![icon for robustness risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-robustness.svg) #### Robustness #### [Data poisoning](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/data-poisoning.html)Traditional![icon for value alignment risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-value-alignment.svg) #### Value alignment #### [Data curation](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/data-curation.html)Amplified [Downstream retraining](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/downstream-retraining.html)New![icon for data laws risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-data-laws.svg) #### Data laws #### [Data transfer](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/data-transfer.html)Traditional [Data usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/data-usage.html)Traditional [Data acquisition](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/data-aquisition.html)Traditional![icon for intellectual property risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-intellectual-property.svg) #### Intellectual property #### [Data usage rights](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/data-usage-rights.html)Amplified [Confidential data disclosure](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/confidential-data-disclosure.html)Traditional![icon for transparency risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-transparency.svg) #### Transparency #### [Data transparency](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/data-transparency.html)Amplified [Data provenance](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/data-provenance.html)Amplified![icon for privacy risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-privacy.svg) #### Privacy #### [Personal information in data](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/personal-information-in-data.html)Traditional [Reidentification](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/reidentification.html)Traditional [Data privacy rights](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/data-privacy-rights.html)Amplified #### Inference phase #### ![icon for privacy risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-privacy.svg) #### Privacy #### [Personal information in prompt](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/personal-information-in-prompt.html)New[Membership inference attack](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/membership-inference-attack.html)Traditional[Attribute inference attack](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/attribute-inference-attack.html)Amplified![icon for intellectual property 
risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-intellectual-property.svg) #### Intellectual property #### [Confidential data in prompt](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/confidential-data-in-prompt.html)New![icon for robustness risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-robustness.svg) #### Robustness #### [Evasion attack](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/evasion-attack.html)Amplified[Extraction attack](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/extraction-attack.html)Amplified[Prompt injection](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/prompt-injection.html)New[Prompt leaking](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/prompt-leaking.html)Amplified![icon for multi\-category risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-multi-category.svg) #### Multi\-category #### [Prompt priming](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/prompt-priming.html)Amplified[Jailbreaking](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/jailbreaking.html)Amplified ### Risks associated with output ### ![icon for fairness risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-fairness.svg) #### Fairness #### [Output bias](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/output-bias.html)New[Decision bias](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/decision-bias.html)New![icon for intellectual property risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-intellectual-property.svg) #### Intellectual property #### [Copyright infringement](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/copyright-infringement.html)New![icon for value alignment risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-value-alignment.svg) #### Value alignment #### [Hallucination](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/hallucination.html)New[Toxic output](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/toxic-output.html)New[Trust calibration](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/trust-calibration.html)New[Physical harm](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/physical-harm.html)New[Benign advice](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/benign-advice.html)New[Improper usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/improper-usage.html)New![icon for misuse risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-misuse.svg) #### Misuse #### [Spreading disinformation](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/spreading-disinformation.html)Amplified[Toxicity](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/toxicity.html)New[Nonconsensual use](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/nonconsensual-use.html)Amplified[Dangerous use](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/dangerous-use.html)New[Non\-disclosure](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/non-disclosure.html)New![icon for harmful code generation risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-harmful-code-generation.svg) #### 
Harmful code generation #### [Harmful code generation](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/harmful-code-generation.html)New![icon for privacy risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-privacy.svg) #### Privacy #### [Personal information in output](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/personal-information-in-output.html)New![icon for explainability risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-explainability.svg) #### Explainability #### [Explaining output](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/explaining-output.html)Amplified[Unreliable source attribution](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/unreliable-source-attribution.html)Amplified[Inaccessible training data](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/inaccessible-training-data.html)Amplified[Untraceable attribution](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/untraceable-attribution.html)Amplified <!-- </article "role="article" "> -->
A304B9E82543C150236ECAD30F1594E1B832B8B1
https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/attribute-inference-attack.html?context=cdpaas&locale=en
Attribute inference attack
Attribute inference attack ![icon for privacy risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-privacy.svg)Risks associated with inputInferencePrivacyAmplified Description An attribute inference attack is used to detect whether certain sensitive features can be inferred about individuals who participated in training a model. These attacks occur when an adversary has some prior knowledge about the training data and uses that knowledge to infer the sensitive data. Why is attribute inference attack a concern for foundation models? With a successful attack, the attacker can gain valuable information such as sensitive personal information or intellectual property. Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
# Attribute inference attack # ![icon for privacy risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-privacy.svg)Risks associated with inputInferencePrivacyAmplified ### Description ### An attribute inference attack is used to detect whether certain sensitive features can be inferred about individuals who participated in training a model\. These attacks occur when an adversary has some prior knowledge about the training data and uses that knowledge to infer the sensitive data\. ### Why is attribute inference attack a concern for foundation models? ### With a successful attack, the attacker can gain valuable information such as sensitive personal information or intellectual property\. **Parent topic:**[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) <!-- </article "role="article" "> -->
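To make the mechanism concrete, here is a self-contained toy sketch (not IBM code) in the style of model-inversion attribute inference: the adversary knows a record's ordinary features and its true label (their prior knowledge), can query the target model, and keeps the sensitive value under which the observed label is most likely. scikit-learn is assumed, and all data and models are synthetic illustrations.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training data: two ordinary features plus one sensitive
# binary feature that influences the label the target model learns.
n = 2000
sensitive = rng.integers(0, 2, n)
ordinary = rng.normal(size=(n, 2))
labels = (ordinary[:, 0] + 1.5 * sensitive
          + rng.normal(scale=0.5, size=n) > 0.75).astype(int)

target_model = LogisticRegression().fit(
    np.column_stack([ordinary, sensitive]), labels
)

def infer_sensitive(ordinary_rows, true_labels):
    # For each candidate sensitive value, ask the target model how
    # probable the known true label would be, then keep the candidate
    # that makes the observed label most likely.
    likelihoods = []
    for guess in (0, 1):
        X = np.column_stack(
            [ordinary_rows, np.full(len(ordinary_rows), guess)]
        )
        proba = target_model.predict_proba(X)
        likelihoods.append(proba[np.arange(len(X)), true_labels])
    return np.argmax(np.column_stack(likelihoods), axis=1)

guesses = infer_sensitive(ordinary, labels)
print("attack accuracy:", (guesses == sensitive).mean())
print("majority-class baseline:",
      max(sensitive.mean(), 1 - sensitive.mean()))
```

Because the sensitive feature shapes the model's decisions, the model's own confidences leak enough signal for the attack to beat the baseline; the stronger the dependence, the larger the leak.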
857C8C3489AC9B2891ED1AE9C81EA881CF1CED80
https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/benign-advice.html?context=cdpaas&locale=en
Benign advice
Benign advice ![icon for value alignment risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-value-alignment.svg)Risks associated with outputValue alignmentNew Description When a model generates information that is factually correct but not specific enough for the current context, the benign advice can be potentially harmful. For example, a model might provide medical, financial, and legal advice or recommendations for a specific problem that the end user may act on even when they should not. Why is benign advice a concern for foundation models? A person might act on incomplete advice or worry about a situation that is not applicable to them due to the overgeneralized nature of the content generated. Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
# Benign advice # ![icon for value alignment risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-value-alignment.svg)Risks associated with outputValue alignmentNew ### Description ### When a model generates information that is factually correct but not specific enough for the current context, the benign advice can be potentially harmful\. For example, a model might provide medical, financial, and legal advice or recommendations for a specific problem that the end user may act on even when they should not\. ### Why is benign advice a concern for foundation models? ### A person might act on incomplete advice or worry about a situation that is not applicable to them due to the overgeneralized nature of the content generated\. **Parent topic:**[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) <!-- </article "role="article" "> -->
BC8E9394D23A5320BFCE0EBE7F208CA18CB6B65C
https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/bias.html?context=cdpaas&locale=en
Data bias
Data bias ![icon for fairness risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-fairness.svg)Risks associated with inputTraining and tuning phaseFairnessAmplified Description Historical, representational, and societal biases present in the data used to train and fine tune the model can adversely affect model behavior. Why is data bias a concern for foundation models? Training an AI system on data with bias, such as historical or representational bias, could lead to biased or skewed outputs that may unfairly represent or otherwise discriminate against certain groups or individuals. In addition to negative societal impacts, business entities could face legal consequences or reputational harms from biased model outcomes. Example Healthcare Bias Research on reinforcing disparities in medicine highlights that using data and AI to transform how people receive healthcare is only as strong as the data behind it, meaning use of training data with poor minority representation can lead to growing health inequalities. Sources: [Science, September 2022](https://www.science.org/doi/10.1126/science.abo2788) [Forbes, December 2022](https://www.forbes.com/sites/adigaskell/2022/12/02/minority-patients-often-left-behind-by-health-ai/?sh=31d28a225b41) Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
# Data bias # ![icon for fairness risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-fairness.svg)Risks associated with inputTraining and tuning phaseFairnessAmplified ### Description ### Historical, representational, and societal biases present in the data used to train and fine tune the model can adversely affect model behavior\. ### Why is data bias a concern for foundation models? ### Training an AI system on data with bias, such as historical or representational bias, could lead to biased or skewed outputs that may unfairly represent or otherwise discriminate against certain groups or individuals\. In addition to negative societal impacts, business entities could face legal consequences or reputational harms from biased model outcomes\. Example #### Healthcare Bias #### Research on reinforcing disparities in medicine highlights that using data and AI to transform how people receive healthcare is only as strong as the data behind it, meaning use of training data with poor minority representation can lead to growing health inequalities\. Sources: [Science, September 2022](https://www.science.org/doi/10.1126/science.abo2788) [Forbes, December 2022](https://www.forbes.com/sites/adigaskell/2022/12/02/minority-patients-often-left-behind-by-health-ai/?sh=31d28a225b41) **Parent topic:**[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) <!-- </article "role="article" "> -->
F6CC81E55C6AAD12849A56837F14538576F5A42C
https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/confidential-data-disclosure.html?context=cdpaas&locale=en
Confidential data disclosure
Confidential data disclosure ![icon for intellectual property risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-intellectual-property.svg)Risks associated with inputTraining and tuning phaseIntellectual propertyTraditional Description Models might be trained or fine-tuned using confidential data or the company’s intellectual property, which could result in unwanted disclosure of that information. Why is confidential data disclosure a concern for foundation models? If not developed in accordance with data protection rules and regulations, the model might expose confidential information or IP in the generated output or through an adversarial attack. Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
# Confidential data disclosure # ![icon for intellectual property risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-intellectual-property.svg)Risks associated with inputTraining and tuning phaseIntellectual propertyTraditional ### Description ### Models might be trained or fine\-tuned using confidential data or the company’s intellectual property, which could result in unwanted disclosure of that information\. ### Why is confidential data disclosure a concern for foundation models? ### If not developed in accordance with data protection rules and regulations, the model might expose confidential information or IP in the generated output or through an adversarial attack\. **Parent topic:**[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) <!-- </article "role="article" "> -->
2D3A398B8394671D9383F214FF5E69A00391BB22
https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/confidential-data-in-prompt.html?context=cdpaas&locale=en
Confidential data in prompt
Confidential data in prompt ![icon for intellectual property risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-intellectual-property.svg)Risks associated with inputInferenceIntellectual propertyNew Description Inclusion of confidential data as a part of a generative model's prompt, either through the system prompt design or through the inclusion of end user input, might later result in unintended reuse or disclosure of that information. Why is confidential data in prompt a concern for foundation models? If not properly developed to secure confidential data, the model might expose confidential information or IP in the generated output. Additionally, end users' confidential information might be unintentionally collected and stored. Example Disclosure of Confidential Information As per the source article, employees of Samsung disclosed confidential information to OpenAI through their use of ChatGPT. In one instance, an employee pasted confidential source code to check for errors. In another, an employee shared code with ChatGPT and "requested code optimization." A third shared a recording of a meeting to convert into notes for a presentation. Samsung has limited internal ChatGPT usage in response to these incidents, but it is unlikely that they will be able to recall any of their data. Additionally, that article highlighted that in response to the risk of leaking confidential information and other sensitive information, companies like Apple, JPMorgan Chase, Deutsche Bank, Verizon, Walmart, Samsung, Amazon, and Accenture have placed several restrictions on the usage of ChatGPT. Sources: [Business Insider, February 2023](https://www.businessinsider.com/walmart-warns-workers-dont-share-sensitive-information-chatgpt-generative-ai-2023-2) Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
# Confidential data in prompt # ![icon for intellectual property risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-intellectual-property.svg)Risks associated with inputInferenceIntellectual propertyNew ### Description ### Inclusion of confidential data as a part of a generative model's prompt, either through the system prompt design or through the inclusion of end user input, might later result in unintended reuse or disclosure of that information\. ### Why is confidential data in prompt a concern for foundation models? ### If not properly developed to secure confidential data, the model might expose confidential information or IP in the generated output\. Additionally, end users' confidential information might be unintentionally collected and stored\. Example #### Disclosure of Confidential Information #### As per the source article, employees of Samsung disclosed confidential information to OpenAI through their use of ChatGPT\. In one instance, an employee pasted confidential source code to check for errors\. In another, an employee shared code with ChatGPT and "requested code optimization\." A third shared a recording of a meeting to convert into notes for a presentation\. Samsung has limited internal ChatGPT usage in response to these incidents, but it is unlikely that they will be able to recall any of their data\. Additionally, that article highlighted that in response to the risk of leaking confidential information and other sensitive information, companies like Apple, JPMorgan Chase, Deutsche Bank, Verizon, Walmart, Samsung, Amazon, and Accenture have placed several restrictions on the usage of ChatGPT\. Sources: [Business Insider, February 2023](https://www.businessinsider.com/walmart-warns-workers-dont-share-sensitive-information-chatgpt-generative-ai-2023-2) **Parent topic:**[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) <!-- </article "role="article" "> -->
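One common mitigation for this risk is to scrub prompts before they leave the organization. The sketch below is illustrative only: the patterns and the internal hostname are hypothetical, and a production system would use a dedicated PII and secret-detection service rather than a handful of regular expressions.

```python
import re

# Hypothetical patterns for a few kinds of confidential content.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "API_KEY": re.compile(r"\b(?:sk|key|tok)[-_][A-Za-z0-9]{16,}\b"),
    "INTERNAL_HOST": re.compile(r"\b[\w-]+\.internal\.example\.com\b"),
}

def scrub_prompt(prompt: str) -> str:
    """Replace matches of each confidential pattern with a labeled
    placeholder before the prompt is sent to an external model."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

raw = ("Optimize this query against billing-db.internal.example.com; "
       "auth uses key sk-3f9a8b7c6d5e4f3a2b1c0d9e")
print(scrub_prompt(raw))
```

Scrubbing at the boundary reduces, but does not eliminate, the risk: free-text confidential content that matches no pattern still passes through, which is why organizations in the example above also rely on policy restrictions and user education.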
C9FB652C433A0A0BC419CBFE4ECC3680252D2FE3
https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/copyright-infringement.html?context=cdpaas&locale=en
Copyright infringement
Copyright infringement ![icon for intellectual property risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-intellectual-property.svg)Risks associated with outputIntellectual propertyNew Description Generative AI output that is too similar or identical to existing work risks claims of copyright infringement. Uncertainty and variability around the ownership, copyrightability, and patentability of output generated by AI increases the risk of copyright infringement problems. Why is copyright infringement a concern for foundation models? Laws and regulations concerning the use of content that looks the same or closely similar to other copyrighted data are largely unsettled and can vary from country to country, providing challenges in determining and implementing compliance. Business entities could face fines, reputational harms, and other legal consequences. Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
# Copyright infringement # ![icon for intellectual property risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-intellectual-property.svg)Risks associated with outputIntellectual propertyNew ### Description ### Generative AI output that is too similar or identical to existing work risks claims of copyright infringement\. Uncertainty and variability around the ownership, copyrightability, and patentability of output generated by AI increases the risk of copyright infringement problems\. ### Why is copyright infringement a concern for foundation models? ### Laws and regulations concerning the use of content that looks the same or closely similar to other copyrighted data are largely unsettled and can vary from country to country, providing challenges in determining and implementing compliance\. Business entities could face fines, reputational harms, and other legal consequences\. **Parent topic:**[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) <!-- </article "role="article" "> -->
6BFF9C4DB2BB43376A2A2CD681714ED3273E991E
https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/dangerous-use.html?context=cdpaas&locale=en
Dangerous use
Dangerous use ![icon for misuse risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-misuse.svg)Risks associated with outputMisuseNew Description The possibility that a model could be misused for dangerous purposes such as creating plans to develop weapons, malware, or causing harm to others is the risk of dangerous use. Why is dangerous use a concern for foundation models? Enabling people to harm others is unethical and can be illegal. A model that has this potential must be properly governed. Otherwise, business entities could face fines, reputational harms, and other legal consequences. Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
# Dangerous use # ![icon for misuse risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-misuse.svg)Risks associated with outputMisuseNew ### Description ### The possibility that a model could be misused for dangerous purposes such as creating plans to develop weapons, malware, or causing harm to others is the risk of dangerous use\. ### Why is dangerous use a concern for foundation models? ### Enabling people to harm others is unethical and can be illegal\. A model that has this potential must be properly governed\. Otherwise, business entities could face fines, reputational harms, and other legal consequences\. **Parent topic:**[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) <!-- </article "role="article" "> -->
C2F1FF7794524DB18EE5FADFAA7232D0A94F8B4C
https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/data-aquisition.html?context=cdpaas&locale=en
Data acquisition
Data acquisition ![icon for data laws risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-data-laws.svg)Risks associated with inputTraining and tuning phaseData lawsTraditional Description Laws and other regulations might limit the collection of certain types of data for specific AI use cases. Why is data acquisition a concern for foundation models? Failing to comply with data usage laws might result in fines and other legal consequences. Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
# Data acquisition # ![icon for data laws risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-data-laws.svg)Risks associated with inputTraining and tuning phaseData lawsTraditional ### Description ### Laws and other regulations might limit the collection of certain types of data for specific AI use cases\. ### Why is data acquisition a concern for foundation models? ### Failing to comply with data usage laws might result in fines and other legal consequences\. **Parent topic:**[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) <!-- </article "role="article" "> -->
6BC478053FEFD091742C6775DFAC9EB5B8C4923F
https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/data-curation.html?context=cdpaas&locale=en
Data curation
Data curation ![icon for value alignment risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-value-alignment.svg)Risks associated with inputTraining and tuning phaseValue alignmentAmplified Description When training or tuning data is improperly collected or prepared, the result can be a misalignment of a model's desired values or intent and the actual outcome. Why is data curation a concern for foundation models? Improper data curation can adversely affect how a model is trained, resulting in a model that does not behave in accordance with the intended values. Correcting problems after the model is trained and deployed might be insufficient for guaranteeing proper behavior. Improper model behavior can result in business entities facing legal consequences or reputational harms. Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
# Data curation # ![icon for value alignment risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-value-alignment.svg)Risks associated with inputTraining and tuning phaseValue alignmentAmplified ### Description ### When training or tuning data is improperly collected or prepared, the result can be a misalignment of a model's desired values or intent and the actual outcome\. ### Why is data curation a concern for foundation models? ### Improper data curation can adversely affect how a model is trained, resulting in a model that does not behave in accordance with the intended values\. Correcting problems after the model is trained and deployed might be insufficient for guaranteeing proper behavior\. Improper model behavior can result in business entities facing legal consequences or reputational harms\. **Parent topic:**[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) <!-- </article "role="article" "> -->
C471B8B14614C985391115EC1ED53E0B56D2E27E
https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/data-poisoning.html?context=cdpaas&locale=en
Data poisoning
Data poisoning ![icon for robustness risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-robustness.svg)Risks associated with inputTraining and tuning phaseRobustnessTraditional Description Data poisoning is a type of adversarial attack where an adversary or malicious insider injects intentionally corrupted, false, misleading, or incorrect samples into the training or fine-tuning dataset. Why is data poisoning a concern for foundation models? Poisoning data can make the model sensitive to a malicious data pattern and produce the adversary’s desired output. It can create a security risk where adversaries can force model behavior for their own benefit. In addition to producing unintended and potentially malicious results, a model misalignment from data poisoning can result in business entities facing legal consequences or reputational harms. Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
# Data poisoning # ![icon for robustness risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-robustness.svg)Risks associated with inputTraining and tuning phaseRobustnessTraditional ### Description ### Data poisoning is a type of adversarial attack where an adversary or malicious insider injects intentionally corrupted, false, misleading, or incorrect samples into the training or fine\-tuning dataset\. ### Why is data poisoning a concern for foundation models? ### Poisoning data can make the model sensitive to a malicious data pattern and produce the adversary’s desired output\. It can create a security risk where adversaries can force model behavior for their own benefit\. In addition to producing unintended and potentially malicious results, a model misalignment from data poisoning can result in business entities facing legal consequences or reputational harms\. **Parent topic:**[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html) <!-- </article "role="article" "> -->
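To make the attack concrete, the toy sketch below (not tied to any particular product) applies a crude label-flipping poisoning attack to a synthetic training set and measures the effect on held-out accuracy; scikit-learn is assumed, and the data is invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Clean synthetic data with a known linear decision rule.
X = rng.normal(size=(3000, 5))
y = (X @ np.array([1.0, -2.0, 0.5, 0.0, 1.5]) > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_with_poison(flip_fraction: float) -> float:
    """Flip the labels of a random fraction of the training set (a crude
    label-flipping poisoning attack) and report test accuracy."""
    y_poisoned = y_train.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]
    model = LogisticRegression().fit(X_train, y_poisoned)
    return model.score(X_test, y_test)

for frac in (0.0, 0.1, 0.3, 0.45):
    print(f"poisoned fraction {frac:.2f}: "
          f"test accuracy {accuracy_with_poison(frac):.3f}")
```

Random flipping is the bluntest form of poisoning; an adversary who concentrates corrupted samples near the decision boundary, or who implants a trigger pattern, can shift model behavior with far fewer samples, which is part of what makes the attack hard to detect through accuracy monitoring alone.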
773F81DD69D3ADBBE1998FF5974CA83347EFFC76
https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/data-privacy-rights.html?context=cdpaas&locale=en
Data privacy rights
Data privacy rights ![icon for privacy risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-privacy.svg)Risks associated with inputTraining and tuning phasePrivacyAmplified Description In some countries, privacy laws give individuals the right to access, correct, verify, or remove certain types of information that companies hold or process about them. Tracking the usage of an individual’s personal information in training a model and providing appropriate rights to comply with such laws can be a complex endeavor. Why is data privacy rights a concern for foundation models? The identification or improper usage of data could lead to violation of privacy laws. Improper usage or a request for data removal could force organizations to retrain the model, which is expensive. In addition, business entities could face fines, reputational harms, and other legal consequences if they fail to comply with data privacy rules and regulations. Example Right to Be Forgotten (RTBF) As stated in the article, laws in multiple locales, including Europe (GDPR); Canada (CPPA); and Japan (APPI), grant users’ rights for their personal data to be “forgotten” by technology (Right To Be Forgotten). However, the emerging and increasingly popular AI (LLMs) services present new challenges for the right to be forgotten (RTBF). According to Data61’s research, the only way for users to identify usage of their personal information in an LLM is “by either inspecting the original training dataset or perhaps prompting the model.” However, training data is either not public or companies do not disclose it, citing safety and other concerns, and guardrails may prevent users from accessing the information via prompting. Due to these barriers, users cannot initiate RTBF procedures and companies deploying LLMs may be unable to meet RTBF laws. Sources: [Zhang et al., Sept 2023](https://arxiv.org/pdf/2307.03941.pdf) Example Lawsuit About LLM Unlearning According to the report, a lawsuit was filed against Google that alleges the use of copyright material and personal information as training data for its AI systems, which includes its Bard chatbot. Opt-out and deletion rights are guaranteed rights for California residents under the CCPA and children in the United States below 13 under the COPPA. The plaintiffs allege that there is no way for Bard to “unlearn” or fully remove all the scraped PI it has been fed. The plaintiffs note that Bard’s privacy notice states that Bard conversations cannot be deleted by the user once they have been reviewed and annotated by the company and may be kept up to 3 years, which plaintiffs allege further contributes to non-compliance with these laws. Sources: [Reuters, July 2023](https://www.reuters.com/legal/litigation/google-hit-with-class-action-lawsuit-over-ai-data-scraping-2023-07-11/) [J.L. v. Alphabet Inc., July 2023](https://fingfx.thomsonreuters.com/gfx/legaldocs/myvmodloqvr/GOOGLE%20AI%20LAWSUIT%20complaint.pdf) Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
# Data privacy rights #

![icon for privacy risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-privacy.svg)Risks associated with input | Training and tuning phase | Privacy | Amplified

### Description ###

In some countries, privacy laws give individuals the right to access, correct, verify, or remove certain types of information that companies hold or process about them\. Tracking the usage of an individual’s personal information in training a model and providing appropriate rights to comply with such laws can be a complex endeavor\.

### Why is data privacy rights a concern for foundation models? ###

The identification or improper usage of data could lead to violation of privacy laws\. Improper usage or a request for data removal could force organizations to retrain the model, which is expensive\. In addition, business entities could face fines, reputational harms, and other legal consequences if they fail to comply with data privacy rules and regulations\.

Example

#### Right to Be Forgotten (RTBF) ####

As stated in the article, laws in multiple locales, including Europe (GDPR), Canada (CPPA), and Japan (APPI), grant users the right for their personal data to be “forgotten” by technology (Right To Be Forgotten)\. However, the emerging and increasingly popular AI (LLM) services present new challenges for the right to be forgotten (RTBF)\. According to Data61’s research, the only way for users to identify usage of their personal information in an LLM is “by either inspecting the original training dataset or perhaps prompting the model\.” However, training data is either not public or companies do not disclose it, citing safety and other concerns, and guardrails may prevent users from accessing the information via prompting\. Due to these barriers, users cannot initiate RTBF procedures and companies deploying LLMs may be unable to meet RTBF laws\.

Sources:

[Zhang et al\., Sept 2023](https://arxiv.org/pdf/2307.03941.pdf)

Example

#### Lawsuit About LLM Unlearning ####

According to the report, a lawsuit was filed against Google that alleges the use of copyright material and personal information as training data for its AI systems, which include its Bard chatbot\. Opt\-out and deletion rights are guaranteed for California residents under the CCPA and for children in the United States below 13 under the COPPA\. The plaintiffs allege that Google cannot comply with these laws because there is no way for Bard to “unlearn” or fully remove all the scraped PI it has been fed\. The plaintiffs note that Bard’s privacy notice states that Bard conversations cannot be deleted by the user once they have been reviewed and annotated by the company and may be kept up to 3 years, which plaintiffs allege further contributes to non\-compliance with these laws\.

Sources:

[Reuters, July 2023](https://www.reuters.com/legal/litigation/google-hit-with-class-action-lawsuit-over-ai-data-scraping-2023-07-11/)

[J\.L\. v\. Alphabet Inc\., July 2023](https://fingfx.thomsonreuters.com/gfx/legaldocs/myvmodloqvr/GOOGLE%20AI%20LAWSUIT%20complaint.pdf)

**Parent topic:**[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)

<!-- </article "role="article" "> -->
40D279ADA16512E67B7FB78FDAC4ADA9CFE5C645
https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/data-provenance.html?context=cdpaas&locale=en
Data provenance
Data provenance

![icon for transparency risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-transparency.svg)Risks associated with input | Training and tuning phase | Transparency | Amplified

Description

Without standardized and established methods for verifying where data came from, there are no guarantees that available data is what it claims to be.

Why is data provenance a concern for foundation models?

Not all data sources are trustworthy. Data might have been unethically collected, manipulated, or falsified. Using such data can result in undesirable behaviors in the model. Business entities could face fines, reputational harms, and other legal consequences.

Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
# Data provenance #

![icon for transparency risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-transparency.svg)Risks associated with input | Training and tuning phase | Transparency | Amplified

### Description ###

Without standardized and established methods for verifying where data came from, there are no guarantees that available data is what it claims to be\.

### Why is data provenance a concern for foundation models? ###

Not all data sources are trustworthy\. Data might have been unethically collected, manipulated, or falsified\. Using such data can result in undesirable behaviors in the model\. Business entities could face fines, reputational harms, and other legal consequences\.

**Parent topic:**[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)

<!-- </article "role="article" "> -->
C8816BF425EF039884DBF6A7282F8D7ADB7C5D04
https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/data-transfer.html?context=cdpaas&locale=en
Data transfer
Data transfer

![icon for data laws risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-data-laws.svg)Risks associated with input | Training and tuning phase | Data laws | Traditional

Description

Laws and other restrictions that apply to the transfer of data can limit or prohibit transferring or repurposing data from one country to another. Repurposing data can be further restricted within countries or with local regulations.

Why is data transfer a concern for foundation models?

Data transfer restrictions can impact the availability of the data required for training an AI model and can lead to poorly represented data. Failing to comply with data transfer laws might result in fines and other legal consequences.

Example

Data Restriction Laws

As stated in the research article, data localization measures, which restrict the ability to move data globally, will reduce the capacity to develop tailored AI capacities. They will affect AI directly by providing less training data and indirectly by undercutting the building blocks on which AI is built. Examples include [China's data localization laws](https://iapp.org/resources/article/demystifying-data-localization-in-china-a-practical-guide/), GDPR restrictions on the processing and use of personal data, and [Singapore's bilateral data sharing](https://www.imda.gov.sg/how-we-can-help/data-innovation/trusted-data-sharing-framework).

Sources:

[Brookings, December 2018](https://www.brookings.edu/articles/the-impact-of-artificial-intelligence-on-international-trade)

Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
# Data transfer #

![icon for data laws risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-data-laws.svg)Risks associated with input | Training and tuning phase | Data laws | Traditional

### Description ###

Laws and other restrictions that apply to the transfer of data can limit or prohibit transferring or repurposing data from one country to another\. Repurposing data can be further restricted within countries or with local regulations\.

### Why is data transfer a concern for foundation models? ###

Data transfer restrictions can impact the availability of the data required for training an AI model and can lead to poorly represented data\. Failing to comply with data transfer laws might result in fines and other legal consequences\.

Example

#### Data Restriction Laws ####

As stated in the research article, data localization measures, which restrict the ability to move data globally, will reduce the capacity to develop tailored AI capacities\. They will affect AI directly by providing less training data and indirectly by undercutting the building blocks on which AI is built\. Examples include [China's data localization laws](https://iapp.org/resources/article/demystifying-data-localization-in-china-a-practical-guide/), GDPR restrictions on the processing and use of personal data, and [Singapore's bilateral data sharing](https://www.imda.gov.sg/how-we-can-help/data-innovation/trusted-data-sharing-framework)\.

Sources:

[Brookings, December 2018](https://www.brookings.edu/articles/the-impact-of-artificial-intelligence-on-international-trade)

**Parent topic:**[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)

<!-- </article "role="article" "> -->
221F46D0A3C2C3D3A623BE815B45E8B90AF61340
https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/data-transparency.html?context=cdpaas&locale=en
Data transparency
Data transparency

![icon for transparency risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-transparency.svg)Risks associated with input | Training and tuning phase | Transparency | Amplified

Description

Without accurate documentation on how a model's data was collected, curated, and used to train a model, it might be harder to satisfactorily explain the behavior of the model with respect to the data.

Why is data transparency a concern for foundation models?

Data transparency is important for legal compliance and AI ethics. Missing information limits the ability to evaluate risks associated with the data. The lack of standardized requirements might limit disclosure as organizations protect trade secrets and try to limit others from copying their models.

Example

Data and Model Metadata Disclosure

OpenAI's technical report is an example of the dichotomy around disclosing data and model metadata. While many model developers see value in enabling transparency for consumers, disclosure poses real safety issues and could increase the ability to misuse the models. In the GPT-4 technical report, they state: "Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar."

Sources:

[OpenAI, March 2023](https://cdn.openai.com/papers/gpt-4.pdf)

Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
# Data transparency #

![icon for transparency risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-transparency.svg)Risks associated with input | Training and tuning phase | Transparency | Amplified

### Description ###

Without accurate documentation on how a model's data was collected, curated, and used to train a model, it might be harder to satisfactorily explain the behavior of the model with respect to the data\.

### Why is data transparency a concern for foundation models? ###

Data transparency is important for legal compliance and AI ethics\. Missing information limits the ability to evaluate risks associated with the data\. The lack of standardized requirements might limit disclosure as organizations protect trade secrets and try to limit others from copying their models\.

Example

#### Data and Model Metadata Disclosure ####

OpenAI's technical report is an example of the dichotomy around disclosing data and model metadata\. While many model developers see value in enabling transparency for consumers, disclosure poses real safety issues and could increase the ability to misuse the models\. In the GPT\-4 technical report, they state: "Given both the competitive landscape and the safety implications of large\-scale models like GPT\-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar\."

Sources:

[OpenAI, March 2023](https://cdn.openai.com/papers/gpt-4.pdf)

**Parent topic:**[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)

<!-- </article "role="article" "> -->
34FFE04319CE15E4451729B183C35F288A58A1B7
https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/data-usage-rights.html?context=cdpaas&locale=en
Data usage rights
Data usage rights

![icon for intellectual property risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-intellectual-property.svg)Risks associated with input | Training and tuning phase | Intellectual property | Amplified

Description

Terms of service, copyright laws, or other rules restrict the ability to use certain data for building models.

Why is data usage rights a concern for foundation models?

Laws and regulations concerning the use of data to train AI are unsettled and can vary from country to country, which creates challenges in the development of models. If data usage violates rules or restrictions, business entities might face fines, reputational harms, and other legal consequences.

Example

Text Copyright Infringement Claims

According to the source article, bestselling novelists Sarah Silverman, Richard Kadrey, and Christopher Golden have sued Meta and OpenAI for copyright infringement. The article further stated that the authors had alleged the two tech companies had “ingested” text from their books into generative AI software (LLMs) and failed to give them credit or compensation.

Sources:

[Los Angeles Times, July 2023](https://www.latimes.com/entertainment-arts/books/story/2023-07-10/sarah-silverman-authors-sue-meta-openai-chatgpt-copyright-infringement)

Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
# Data usage rights #

![icon for intellectual property risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-intellectual-property.svg)Risks associated with input | Training and tuning phase | Intellectual property | Amplified

### Description ###

Terms of service, copyright laws, or other rules restrict the ability to use certain data for building models\.

### Why is data usage rights a concern for foundation models? ###

Laws and regulations concerning the use of data to train AI are unsettled and can vary from country to country, which creates challenges in the development of models\. If data usage violates rules or restrictions, business entities might face fines, reputational harms, and other legal consequences\.

Example

#### Text Copyright Infringement Claims ####

According to the source article, bestselling novelists Sarah Silverman, Richard Kadrey, and Christopher Golden have sued Meta and OpenAI for copyright infringement\. The article further stated that the authors had alleged the two tech companies had “ingested” text from their books into generative AI software (LLMs) and failed to give them credit or compensation\.

Sources:

[Los Angeles Times, July 2023](https://www.latimes.com/entertainment-arts/books/story/2023-07-10/sarah-silverman-authors-sue-meta-openai-chatgpt-copyright-infringement)

**Parent topic:**[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)

<!-- </article "role="article" "> -->
B00BEB80E522D712DC9062F835AD10E787B8C5FC
https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/data-usage.html?context=cdpaas&locale=en
Data usage
Data usage

![icon for data laws risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-data-laws.svg)Risks associated with input | Training and tuning phase | Data laws | Traditional

Description

Laws and other restrictions can limit or prohibit the use of some data for specific AI use cases.

Why is data usage a concern for foundation models?

Failing to comply with data usage laws might result in fines and other legal consequences.

Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
# Data usage #

![icon for data laws risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-data-laws.svg)Risks associated with input | Training and tuning phase | Data laws | Traditional

### Description ###

Laws and other restrictions can limit or prohibit the use of some data for specific AI use cases\.

### Why is data usage a concern for foundation models? ###

Failing to comply with data usage laws might result in fines and other legal consequences\.

**Parent topic:**[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)

<!-- </article "role="article" "> -->
DD88591C39C90F2CF211C3EE3330B7E7939C3472
https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/decision-bias.html?context=cdpaas&locale=en
Decision bias
Decision bias

![icon for fairness risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-fairness.svg)Risks associated with output | Fairness | New

Description

Decision bias occurs when one group is unfairly advantaged over another due to decisions of the model. This bias can result from bias in the training data or as an unintended consequence of how the model was trained.

Why is decision bias a concern for foundation models?

Bias can harm persons affected by the decisions of the model. Business entities could face fines, reputational harms, and other legal consequences.

Example

Unfair health risk assignment for black patients

A study on racial bias in health algorithms estimated that racial bias reduces the number of black patients identified for extra care by more than half. The study found that bias occurred because the algorithm used health costs as a proxy for health needs. Less money is spent on black patients who have the same level of need, and the algorithm thus falsely concludes that black patients are healthier than equally sick white patients.

Sources:

[Science, October 2019](https://www.science.org/doi/10.1126/science.aax2342)

[American Civil Liberties Union, 2022](https://www.aclu.org/news/privacy-technology/algorithms-in-health-care-may-worsen-medical-racism#:~:text=In%202019%2C%20a%20bombshell%20study,recommended%20for%20the%20same%20care)

Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
# Decision bias #

![icon for fairness risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-fairness.svg)Risks associated with output | Fairness | New

### Description ###

Decision bias occurs when one group is unfairly advantaged over another due to decisions of the model\. This bias can result from bias in the training data or as an unintended consequence of how the model was trained\.

### Why is decision bias a concern for foundation models? ###

Bias can harm persons affected by the decisions of the model\. Business entities could face fines, reputational harms, and other legal consequences\.

Example

#### Unfair health risk assignment for black patients ####

A study on racial bias in health algorithms estimated that racial bias reduces the number of black patients identified for extra care by more than half\. The study found that bias occurred because the algorithm used health costs as a proxy for health needs\. Less money is spent on black patients who have the same level of need, and the algorithm thus falsely concludes that black patients are healthier than equally sick white patients\.

Sources:

[Science, October 2019](https://www.science.org/doi/10.1126/science.aax2342)

[American Civil Liberties Union, 2022](https://www.aclu.org/news/privacy-technology/algorithms-in-health-care-may-worsen-medical-racism#:~:text=In%202019%2C%20a%20bombshell%20study,recommended%20for%20the%20same%20care)

**Parent topic:**[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)

<!-- </article "role="article" "> -->
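To make the proxy-variable mechanism in this example concrete, here is a small synthetic simulation, assuming Python with NumPy. The group labels, effect sizes, and selection threshold are invented for illustration and do not reproduce the cited study's data.

```python
# Synthetic illustration of proxy bias: selecting patients by a cost proxy
# instead of true need under-serves the group that historically received
# less spending at equal need. All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
group = rng.integers(0, 2, n)              # 0 = group A, 1 = group B
need = rng.normal(0.0, 1.0, n)             # true health need, identical across groups
cost = need - 0.5 * group + rng.normal(0.0, 0.3, n)  # group B gets less spending

selected = cost > np.quantile(cost, 0.9)   # "model" picks top 10% by the cost proxy
high_need = need > np.quantile(need, 0.9)

for g in (0, 1):
    mask = (group == g) & high_need
    print(f"group {g}: share of high-need patients selected = {selected[mask].mean():.2f}")
```

Because both groups have the same distribution of true need, any gap between the two selection rates is produced entirely by optimizing the proxy.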
C7049B7393149EDC2256A9D4EDB1D6E5A6E24B72
https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/downstream-retraining.html?context=cdpaas&locale=en
Downstream retraining
Downstream retraining

![icon for value alignment risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-value-alignment.svg)Risks associated with input | Training and tuning phase | Value alignment | New

Description

Using data from user-generated content or AI-generated content from downstream applications for retraining a model can result in misalignment, undesirable output, and inaccurate or inappropriate model behavior.

Why is downstream retraining a concern for foundation models?

Repurposing downstream output for retraining a model without implementing proper human vetting increases the chances of undesirable outputs being incorporated into the training or tuning data of the model, resulting in an echo chamber effect. Improper model behavior can result in business entities facing legal consequences or reputational harms.

Example

Model collapse due to training using AI-generated content

As stated in the source article, a group of researchers from the UK and Canada have investigated the problem of using AI-generated content for training instead of human-generated content. They found that using model-generated content in training causes irreversible defects in the resulting models and that learning from data produced by other models causes [model collapse](https://arxiv.org/pdf/2305.17493v2.pdf).

Sources:

[VentureBeat, June 2023](https://venturebeat.com/ai/the-ai-feedback-loop-researchers-warn-of-model-collapse-as-ai-trains-on-ai-generated-content/)

Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
# Downstream retraining #

![icon for value alignment risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-value-alignment.svg)Risks associated with input | Training and tuning phase | Value alignment | New

### Description ###

Using data from user\-generated content or AI\-generated content from downstream applications for retraining a model can result in misalignment, undesirable output, and inaccurate or inappropriate model behavior\.

### Why is downstream retraining a concern for foundation models? ###

Repurposing downstream output for retraining a model without implementing proper human vetting increases the chances of undesirable outputs being incorporated into the training or tuning data of the model, resulting in an echo chamber effect\. Improper model behavior can result in business entities facing legal consequences or reputational harms\.

Example

#### Model collapse due to training using AI\-generated content ####

As stated in the source article, a group of researchers from the UK and Canada have investigated the problem of using AI\-generated content for training instead of human\-generated content\. They found that using model\-generated content in training causes irreversible defects in the resulting models and that learning from data produced by other models causes [model collapse](https://arxiv.org/pdf/2305.17493v2.pdf)\.

Sources:

[VentureBeat, June 2023](https://venturebeat.com/ai/the-ai-feedback-loop-researchers-warn-of-model-collapse-as-ai-trains-on-ai-generated-content/)

**Parent topic:**[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)

<!-- </article "role="article" "> -->
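A toy version of the model-collapse effect can be sketched as follows, assuming Python with NumPy. The "model" here is just a fitted Gaussian, a drastic simplification of the cited research, included only to show how sampling noise compounds when each generation trains solely on the previous generation's output.

```python
# Toy model-collapse loop: each generation is "trained" only on the previous
# generation's samples; no human data re-enters the loop, so finite-sample
# noise compounds and the learned spread tends to drift and shrink over time.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=100)   # "human-generated" data
mu, sigma = data.mean(), data.std()

for generation in range(1, 501):
    data = rng.normal(mu, sigma, size=100)        # retrain on model output only
    mu, sigma = data.mean(), data.std()
    if generation % 100 == 0:
        print(f"generation {generation:3d}: mu={mu:+.3f}, sigma={sigma:.3f}")
```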
E3E5FA98908EEE308D960761E9F29CF7A8AAD690
https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/evasion-attack.html?context=cdpaas&locale=en
Evasion attack
Evasion attack

![icon for robustness risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-robustness.svg)Risks associated with input | Inference | Robustness | Amplified

Description

Evasion attacks attempt to make a model output incorrect results by perturbing the data sent to the trained model.

Why is evasion attack a concern for foundation models?

Evasion attacks alter model behavior, usually to benefit the attacker. If not properly accounted for, business entities could face fines, reputational harms, and other legal consequences.

Example

Adversarial attacks on autonomous vehicles' AI components

A report from the European Union Agency for Cybersecurity (ENISA) found that autonomous vehicles are “highly vulnerable to a wide range of attacks” that could be dangerous for passengers, pedestrians, and people in other vehicles. The report states that an adversarial attack might be used to make the AI ‘blind’ to pedestrians by manipulating the image recognition component to misclassify pedestrians. This attack could lead to havoc on the streets, as autonomous cars may hit pedestrians on the road or crosswalks.

Other studies have demonstrated potential adversarial attacks on autonomous vehicles:

* Fooling machine learning algorithms by making minor changes to street sign graphics, such as adding stickers.
* Security researchers from Tencent demonstrated how adding three small stickers in an intersection could cause Tesla's autopilot system to swerve into the wrong lane.
* Two McAfee researchers demonstrated how using only black electrical tape could trick a 2016 Tesla into a dangerous burst of acceleration by changing a speed limit sign from 35 mph to 85 mph.

Sources:

[Venture Beat, February 2021](https://venturebeat.com/business/eu-report-warns-that-ai-makes-autonomous-vehicles-highly-vulnerable-to-attack/)

[IEEE, August 2017](https://spectrum.ieee.org/slight-street-sign-modifications-can-fool-machine-learning-algorithms)

[IEEE, April 2019](https://spectrum.ieee.org/three-small-stickers-on-road-can-steer-tesla-autopilot-into-oncoming-lane)

[Market Watch, February 2020](https://www.marketwatch.com/story/85-in-a-35-hackers-show-how-easy-it-is-to-manipulate-a-self-driving-tesla-2020-02-19)

Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
# Evasion attack #

![icon for robustness risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-robustness.svg)Risks associated with input | Inference | Robustness | Amplified

### Description ###

Evasion attacks attempt to make a model output incorrect results by perturbing the data sent to the trained model\.

### Why is evasion attack a concern for foundation models? ###

Evasion attacks alter model behavior, usually to benefit the attacker\. If not properly accounted for, business entities could face fines, reputational harms, and other legal consequences\.

Example

#### Adversarial attacks on autonomous vehicles' AI components ####

A report from the European Union Agency for Cybersecurity (ENISA) found that autonomous vehicles are “highly vulnerable to a wide range of attacks” that could be dangerous for passengers, pedestrians, and people in other vehicles\. The report states that an adversarial attack might be used to make the AI ‘blind’ to pedestrians by manipulating the image recognition component to misclassify pedestrians\. This attack could lead to havoc on the streets, as autonomous cars may hit pedestrians on the road or crosswalks\.

Other studies have demonstrated potential adversarial attacks on autonomous vehicles:

<!-- <ul> -->

* Fooling machine learning algorithms by making minor changes to street sign graphics, such as adding stickers\.
* Security researchers from Tencent demonstrated how adding three small stickers in an intersection could cause Tesla's autopilot system to swerve into the wrong lane\.
* Two McAfee researchers demonstrated how using only black electrical tape could trick a 2016 Tesla into a dangerous burst of acceleration by changing a speed limit sign from 35 mph to 85 mph\.

<!-- </ul> -->

Sources:

[Venture Beat, February 2021](https://venturebeat.com/business/eu-report-warns-that-ai-makes-autonomous-vehicles-highly-vulnerable-to-attack/)

[IEEE, August 2017](https://spectrum.ieee.org/slight-street-sign-modifications-can-fool-machine-learning-algorithms)

[IEEE, April 2019](https://spectrum.ieee.org/three-small-stickers-on-road-can-steer-tesla-autopilot-into-oncoming-lane)

[Market Watch, February 2020](https://www.marketwatch.com/story/85-in-a-35-hackers-show-how-easy-it-is-to-manipulate-a-self-driving-tesla-2020-02-19)

**Parent topic:**[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)

<!-- </article "role="article" "> -->
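The perturbation idea can be shown in a few lines with a minimal FGSM-style sketch, assuming Python with scikit-learn and NumPy; the linear model and the perturbation budget `eps` are illustrative assumptions, not a method taken from the report above.

```python
# Minimal FGSM-style evasion on logistic regression: for a linear model the
# loss gradient w.r.t. the input is proportional to the weight vector, so the
# worst-case L-infinity perturbation steps along sign(w). Illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

x, true_label = X[0], y[0]
w = model.coef_[0]
print("original prediction:   ", model.predict(x.reshape(1, -1))[0])

# Step in the direction that increases the loss for the true class; a large
# enough eps typically flips the model's prediction.
eps = 0.5
direction = np.sign(w) if true_label == 0 else -np.sign(w)
x_adv = x + eps * direction
print("adversarial prediction:", model.predict(x_adv.reshape(1, -1))[0])
```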
6B6F04AA6BBD6BE14B11EA62AA0D844979BDFCDF
https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/explaining-output.html?context=cdpaas&locale=en
Explaining output
Explaining output

![icon for explainability risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-explainability.svg)Risks associated with output | Explainability | Amplified

Description

Explanations for model output decisions might be difficult, imprecise, or not possible to obtain.

Why is explaining output a concern for foundation models?

Foundation models are based on complex deep learning architectures, making explanations for their outputs difficult. Without clear explanations for model output, it is difficult for users, model validators, and auditors to understand and trust the model. Lack of transparency might carry legal consequences in highly regulated domains. Wrong explanations might lead to over-trust.

Example

Unexplainable accuracy in race prediction

According to the source article, researchers analyzing multiple machine learning models using patient medical images were able to confirm the models’ ability to predict race with high accuracy from images. They were stumped as to what exactly was enabling the systems to consistently guess correctly. The researchers found that even factors like disease and physical build were not strong predictors of race—in other words, the algorithmic systems don’t seem to be using any particular aspect of the images to make their determinations.

Sources:

[Banerjee et al., July 2021](https://arxiv.org/abs/2107.10356)

Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
# Explaining output #

![icon for explainability risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-explainability.svg)Risks associated with output | Explainability | Amplified

### Description ###

Explanations for model output decisions might be difficult, imprecise, or not possible to obtain\.

### Why is explaining output a concern for foundation models? ###

Foundation models are based on complex deep learning architectures, making explanations for their outputs difficult\. Without clear explanations for model output, it is difficult for users, model validators, and auditors to understand and trust the model\. Lack of transparency might carry legal consequences in highly regulated domains\. Wrong explanations might lead to over\-trust\.

Example

#### Unexplainable accuracy in race prediction ####

According to the source article, researchers analyzing multiple machine learning models using patient medical images were able to confirm the models’ ability to predict race with high accuracy from images\. They were stumped as to what exactly was enabling the systems to consistently guess correctly\. The researchers found that even factors like disease and physical build were not strong predictors of race—in other words, the algorithmic systems don’t seem to be using any particular aspect of the images to make their determinations\.

Sources:

[Banerjee et al\., July 2021](https://arxiv.org/abs/2107.10356)

**Parent topic:**[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)

<!-- </article "role="article" "> -->
EAEF856F725CD9A9605000F3AE98CBE61A9F50F0
https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/extraction-attack.html?context=cdpaas&locale=en
Extraction attack
Extraction attack

![icon for robustness risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-robustness.svg)Risks associated with input | Inference | Robustness | Amplified

Description

An attack that attempts to copy or steal the AI model by appropriately sampling the input space, observing outputs, and building a surrogate model is known as an extraction attack.

Why is extraction attack a concern for foundation models?

A successful attack mimics the model, enabling the attacker to repurpose it for their benefit, such as eliminating a competitive advantage or causing reputational harm.

Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
# Extraction attack #

![icon for robustness risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-robustness.svg)Risks associated with input | Inference | Robustness | Amplified

### Description ###

An attack that attempts to copy or steal the AI model by appropriately sampling the input space, observing outputs, and building a surrogate model is known as an extraction attack\.

### Why is extraction attack a concern for foundation models? ###

A successful attack mimics the model, enabling the attacker to repurpose it for their benefit, such as eliminating a competitive advantage or causing reputational harm\.

**Parent topic:**[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)

<!-- </article "role="article" "> -->
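A minimal sketch of this attack pattern follows, assuming Python with scikit-learn: the attacker can only call the victim model's prediction API, samples the input space, and distills a surrogate from the recorded outputs. The victim and surrogate model choices and the query distribution are arbitrary illustrations.

```python
# Basic model-extraction sketch: sample the input space, record the black-box
# victim's outputs, and train a surrogate that mimics it. Illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
victim = RandomForestClassifier(random_state=0).fit(X, y)  # attacker sees only predict()

# Attacker samples random queries and records the victim's labels.
rng = np.random.default_rng(0)
queries = rng.normal(0.0, 2.0, size=(5000, 10))
stolen_labels = victim.predict(queries)

surrogate = DecisionTreeClassifier(random_state=0).fit(queries, stolen_labels)

# Extraction fidelity: how often the surrogate agrees with the victim on fresh inputs.
probe = rng.normal(0.0, 2.0, size=(2000, 10))
agreement = (surrogate.predict(probe) == victim.predict(probe)).mean()
print(f"surrogate/victim agreement: {agreement:.2%}")
```

Agreement on fresh inputs is the usual measure of how completely the model has been copied.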
339C9129C24AAB66EEAF55A9F003F6501F72B81B
https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/hallucination.html?context=cdpaas&locale=en
Hallucination
Hallucination

![icon for value alignment risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-value-alignment.svg)Risks associated with output | Value alignment | New

Description

Hallucinations occur when models produce factually inaccurate or untruthful information. Often, hallucinatory output is presented in a plausible or convincing manner, making detection by end users difficult.

Why is hallucination a concern for foundation models?

False output can mislead users and be incorporated into downstream artifacts, further spreading misinformation. This can harm both owners and users of the AI models. Business entities could face fines, reputational harms, and other legal consequences.

Example

Fake Legal Cases

According to the source article, a lawyer cited fake cases and quotes generated by ChatGPT in a legal brief filed in federal court. The lawyers consulted ChatGPT to supplement their legal research for an aviation injury claim. The lawyer subsequently asked ChatGPT if the cases provided were fake. The chatbot responded that they were real and “can be found on legal research databases such as Westlaw and LexisNexis.”

Sources:

[AP News, June 2023](https://apnews.com/article/artificial-intelligence-chatgpt-fake-case-lawyers-d6ae9fa79d0542db9e1455397aef381c)

Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
# Hallucination #

![icon for value alignment risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-value-alignment.svg)Risks associated with output | Value alignment | New

### Description ###

Hallucinations occur when models produce factually inaccurate or untruthful information\. Often, hallucinatory output is presented in a plausible or convincing manner, making detection by end users difficult\.

### Why is hallucination a concern for foundation models? ###

False output can mislead users and be incorporated into downstream artifacts, further spreading misinformation\. This can harm both owners and users of the AI models\. Business entities could face fines, reputational harms, and other legal consequences\.

Example

#### Fake Legal Cases ####

According to the source article, a lawyer cited fake cases and quotes generated by ChatGPT in a legal brief filed in federal court\. The lawyers consulted ChatGPT to supplement their legal research for an aviation injury claim\. The lawyer subsequently asked ChatGPT if the cases provided were fake\. The chatbot responded that they were real and “can be found on legal research databases such as Westlaw and LexisNexis\.”

Sources:

[AP News, June 2023](https://apnews.com/article/artificial-intelligence-chatgpt-fake-case-lawyers-d6ae9fa79d0542db9e1455397aef381c)

**Parent topic:**[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)

<!-- </article "role="article" "> -->
658967520625FAC8039485004A1E80C32992077E
https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/harmful-code-generation.html?context=cdpaas&locale=en
Harmful code generation
Harmful code generation

![icon for harmful code generation risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-harmful-code-generation.svg)Risks associated with output | Harmful code generation | New

Description

Models might generate code that causes harm or unintentionally affects other systems.

Why is harmful code generation a concern for foundation models?

Without human review and testing of generated code, its use might cause unintentional behavior and open new system vulnerabilities. Business entities could face fines, reputational harms, and other legal consequences.

Example

Insecure Code from AI Assistants

According to their paper, researchers at Stanford University have investigated the impact of code-generation tools on code quality and found that programmers tend to include more bugs in their final code when using AI assistants. These bugs could increase the code's security vulnerabilities, yet the programmers believed their code to be more secure.

Sources:

[Neil Perry, Megha Srivastava, Deepak Kumar, and Dan Boneh. 2023. Do Users Write More Insecure Code with AI Assistants? In Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security (CCS '23), November 26-30, 2023, Copenhagen, Denmark. ACM, New York, NY, USA, 15 pages.](https://dl.acm.org/doi/10.1145/3576915.3623157)

Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
# Harmful code generation #

![icon for harmful code generation risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-harmful-code-generation.svg)Risks associated with output | Harmful code generation | New

### Description ###

Models might generate code that causes harm or unintentionally affects other systems\.

### Why is harmful code generation a concern for foundation models? ###

Without human review and testing of generated code, its use might cause unintentional behavior and open new system vulnerabilities\. Business entities could face fines, reputational harms, and other legal consequences\.

Example

#### Insecure Code from AI Assistants ####

According to their paper, researchers at Stanford University have investigated the impact of code\-generation tools on code quality and found that programmers tend to include *more* bugs in their final code when using AI assistants\. These bugs could increase the code's security vulnerabilities, yet the programmers believed their code to be *more* secure\.

Sources:

[Neil Perry, Megha Srivastava, Deepak Kumar, and Dan Boneh\. 2023\. Do Users Write More Insecure Code with AI Assistants? In Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security (CCS '23), November 26\-30, 2023, Copenhagen, Denmark\. ACM, New York, NY, USA, 15 pages\.](https://dl.acm.org/doi/10.1145/3576915.3623157)

**Parent topic:**[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)

<!-- </article "role="article" "> -->
E5E1D00DC75181EDE4FC66BDC17BF3C07EB314EC
https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/improper-usage.html?context=cdpaas&locale=en
Improper usage
Improper usage

![icon for value alignment risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-value-alignment.svg)Risks associated with output | Value alignment | New

Description

Using a model for a purpose the model was not designed for might result in inaccurate or undesired behavior. Without proper documentation of the model purpose and constraints, models can be used or repurposed for tasks for which they are not suited.

Why is improper usage a concern for foundation models?

Reusing a model without understanding its original data, design intent, and goals might result in unexpected and unwanted model behaviors.

Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
# Improper usage #

![icon for value alignment risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-value-alignment.svg)Risks associated with output | Value alignment | New

### Description ###

Using a model for a purpose the model was not designed for might result in inaccurate or undesired behavior\. Without proper documentation of the model purpose and constraints, models can be used or repurposed for tasks for which they are not suited\.

### Why is improper usage a concern for foundation models? ###

Reusing a model without understanding its original data, design intent, and goals might result in unexpected and unwanted model behaviors\.

**Parent topic:**[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)

<!-- </article "role="article" "> -->
D778DF3DC8EF2D3AB4EC511B8D20D35778794B93
https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/inaccessible-training-data.html?context=cdpaas&locale=en
Inaccessible training data
Inaccessible training data

![icon for explainability risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-explainability.svg)Risks associated with output | Explainability | Amplified

Description

Without access to the training data, the types of explanations a model can provide are limited and more likely to be incorrect.

Why is inaccessible training data a concern for foundation models?

Low-quality explanations without source data make it difficult for users, model validators, and auditors to understand and trust the model.

Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
# Inaccessible training data #

![icon for explainability risk](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/images/risk-explainability.svg)Risks associated with output | Explainability | Amplified

### Description ###

Without access to the training data, the types of explanations a model can provide are limited and more likely to be incorrect\.

### Why is inaccessible training data a concern for foundation models? ###

Low\-quality explanations without source data make it difficult for users, model validators, and auditors to understand and trust the model\.

**Parent topic:**[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)

<!-- </article "role="article" "> -->