doc_id | url | title | document |
---|---|---|---|
82546B72EDBFB76F571CFD06A7009E01615FA054 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/simeval.html?context=cdpaas&locale=en | Sim Eval node (SPSS Modeler) | Sim Eval node
The Simulation Evaluation (Sim Eval) node is a terminal node that evaluates a specified field, provides a distribution of the field, and produces charts of distributions and correlations.
This node is primarily used to evaluate continuous fields. It therefore complements the evaluation chart, which is generated by an Evaluation node and is useful for evaluating discrete fields. Another difference is that the Sim Eval node evaluates a single prediction across several iterations, whereas the Evaluation node evaluates multiple predictions, each with a single iteration. Iterations are generated when more than one value is specified for a distribution parameter in the Sim Gen node.
The Sim Eval node is designed to be used with data obtained from the Sim Fit and Sim Gen nodes. It can, however, be used with data from any other node. Any number of processing steps can be placed between the Sim Gen node and the Sim Eval node.
Important: The Sim Eval node requires a minimum of 1000 records with valid values for the target field.
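As an illustration of the kind of evaluation this node performs, the following Python sketch summarizes the distribution of a simulated target field and its correlations with the inputs. The field names and the profit formula are invented for the example; they are not part of the node.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 10_000  # well above the 1000-record minimum

inputs = pd.DataFrame({
    "demand": rng.normal(loc=15_000, scale=3_000, size=n),
    "unit_cost": rng.triangular(left=0.50, mode=0.80, right=1.20, size=n),
})
profit = inputs["demand"] * (1.50 - inputs["unit_cost"])  # the target field being evaluated

# distribution of the target: summary statistics and selected percentiles
print(profit.describe(percentiles=[0.05, 0.25, 0.50, 0.75, 0.95]))
# tail probability, for example the chance that profit falls below a threshold
print("P(profit < 5000):", float((profit < 5_000).mean()))
# correlations between the target and the simulated inputs
print(inputs.assign(profit=profit).corr())
```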
|
51389B2D808C1F7D81DF9EC75F053528AE1BC128 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/simfit.html?context=cdpaas&locale=en | Sim Fit node (SPSS Modeler) | Sim Fit node
The Simulation Fitting node fits a set of candidate statistical distributions to each field in the data. The fit of each distribution to a field is assessed using a goodness of fit criterion. When a Simulation Fitting node runs, a Simulation Generate node is built (or an existing node is updated). Each field is assigned its best fitting distribution. The Simulation Generate node can then be used to generate simulated data for each field.
Although the Simulation Fitting node is a terminal node, it does not add output to the Outputs panel or export data.
Note: If the historical data is sparse (that is, there are many missing values), it may be difficult for the fitting component to find enough valid values to fit distributions to the data. If the data is sparse, before fitting you should either remove the sparse fields (if they are not required) or impute the missing values. Using the QUALITY options in the Data Audit node, you can view the number of complete records, identify which fields are sparse, and select an imputation method. If there is an insufficient number of records for distribution fitting, you can use a Balance node to increase the number of records.
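Conceptually, the node's job resembles the following SciPy sketch, which fits a few candidate distributions to one field and ranks them by a Kolmogorov-Smirnov statistic. The candidate list, the sample field, and the use of the KS test here are illustrative assumptions, not the node's actual fitting criteria.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
field = rng.gamma(shape=2.0, scale=3.0, size=5_000)  # stand-in for one historical field

candidates = {
    "normal": stats.norm,
    "lognormal": stats.lognorm,
    "gamma": stats.gamma,
    "weibull": stats.weibull_min,
}

results = {}
for label, dist in candidates.items():
    params = dist.fit(field)                                           # maximum-likelihood fit
    results[label] = stats.kstest(field, dist.name, args=params).statistic  # goodness of fit

best = min(results, key=results.get)  # smallest KS statistic = closest fit in this sketch
print("best-fitting distribution:", best, results)
```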
|
EC10AC085BA8A12BA0D8AF2DC66ADFBE759B3183 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/simgen.html?context=cdpaas&locale=en | Sim Gen node (SPSS Modeler) | Sim Gen node
The Simulation Generate node provides an easy way to generate simulated data, either from user-specified statistical distributions when no historical data is available, or automatically from the distributions obtained by running a Simulation Fitting node on existing historical data. Generating simulated data is useful when you want to evaluate the outcome of a predictive model in the presence of uncertainty in the model inputs.
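For illustration only, generating simulated inputs from user-specified distributions might look like the following NumPy sketch; the field names and distribution parameters are made up for the example.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 100_000  # number of simulated cases to generate

simulated = pd.DataFrame({
    "unit_cost":   rng.triangular(left=0.50, mode=0.80, right=1.20, size=n),
    "demand":      rng.normal(loc=15_000, scale=3_000, size=n),
    "defect_rate": rng.beta(a=2, b=50, size=n),
})
print(simulated.describe())
```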
|
EFAE4449CEB6F88AA4545F33BD886EC3080171B4 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/slrm.html?context=cdpaas&locale=en | SLRM node (SPSS Modeler) | SLRM node
Use the Self-Learning Response Model (SLRM) node to build a model that you can continually update, or reestimate, as a dataset grows without having to rebuild the model every time using the complete dataset. For example, this is useful when you have several products and you want to identify which one a customer is most likely to buy if you offer it to them. This model allows you to predict which offers are most appropriate for customers and the probability of the offers being accepted.
Initially, you can build the model using a small dataset with randomly made offers and the responses to those offers. As the dataset grows, the model can be updated and therefore becomes more able to predict the most suitable offers for customers and the probability of their acceptance based upon other input fields such as age, gender, job, and income. You can change the offers available by adding or removing them from within the node, instead of having to change the target field of the dataset.
Before running an SLRM node, you must specify both the target and target response fields in the node properties. The target field must have string storage, not numeric. The target response field must be a flag. The true value of the flag indicates offer acceptance and the false value indicates offer refusal.
Example. A financial institution wants to achieve more profitable results by matching the offer that is most likely to be accepted to each customer. You can use a self-learning model to identify the characteristics of customers most likely to respond favorably based on previous promotions and to update the model in real time based on the latest customer responses.
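The SLRM algorithm itself is not exposed as a Python API, but the core idea of updating a model incrementally as new responses arrive, rather than rebuilding it from the full dataset, can be sketched with scikit-learn's partial_fit interface. The offers, features, and classifier choice below are illustrative assumptions only, not the SLRM method.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

offers = ["offer_A", "offer_B", "offer_C"]            # hypothetical offers (string target)
model = SGDClassifier(loss="log_loss", random_state=0)

rng = np.random.default_rng(0)
# initial build on a small batch of randomly made offers and the accepted responses
X0 = rng.normal(size=(200, 4))                        # stand-ins for age, income, and so on
y0 = rng.choice(offers, size=200)                     # offer each customer accepted
model.partial_fit(X0, y0, classes=offers)

# later: update with the latest responses, without retraining on the full history
X_new, y_new = rng.normal(size=(50, 4)), rng.choice(offers, size=50)
model.partial_fit(X_new, y_new)

# score one customer: estimated probability of acceptance for each offer
print(dict(zip(model.classes_, model.predict_proba(X_new[:1])[0].round(3))))
```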
|
F837935A2FEFED20E2CAC93656E376F9868CC515 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/smote.html?context=cdpaas&locale=en | SMOTE node (SPSS Modeler) | SMOTE node
The Synthetic Minority Over-sampling Technique (SMOTE) node provides an over-sampling algorithm to deal with imbalanced data sets. It provides an advanced method for balancing data. The SMOTE node in watsonx.ai is implemented in Python and requires the imbalanced-learn© Python library.
For details about the imbalanced-learn library, see [imbalanced-learn documentation](https://imbalanced-learn.org/stable/index.html)^1^.
The Modeling tab on the nodes palette contains the SMOTE node and other Python nodes.
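Because the node is backed by the imbalanced-learn library, its core behavior can be shown directly with that library; the toy data below is invented for the sketch.

```python
import numpy as np
from collections import Counter
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(0)
X = rng.normal(size=(1_000, 5))                               # 5 predictor fields, toy values
y = np.r_[np.zeros(950, dtype=int), np.ones(50, dtype=int)]   # 95% / 5% class imbalance

X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print("before:", Counter(y), " after:", Counter(y_res))       # minority class is over-sampled
```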
^1^Lemaître, Nogueira, Aridas. "Imbalanced-learn: A Python Toolbox to Tackle the Curse of Imbalanced Datasets in Machine Learning." Journal of Machine Learning Research, vol. 18, no. 17, 2017, pp. 1-5. (http://jmlr.org/papers/v18/16-365.html)
|
8F64225936D78B691574900D641C0CB7C3CE78EF | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/sort.html?context=cdpaas&locale=en | Sort node (SPSS Modeler) | Sort node
You can use Sort nodes to sort records into ascending or descending order based on the values of one or more fields. For example, Sort nodes are frequently used to view and select records with the most common data values. Typically, you would first aggregate the data using the Aggregate node and then use the Sort node to sort the aggregated data into descending order of record counts. Displaying these results in a table will allow you to explore the data and to make decisions, such as selecting the records of the 10 best customers.
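For comparison, the aggregate-then-sort workflow described above looks roughly like this in pandas; the file and column names are placeholders.

```python
import pandas as pd

df = pd.read_csv("transactions.csv")                 # placeholder input data

# Aggregate node equivalent: count records per customer
counts = (df.groupby("customer_id", as_index=False)
            .size()
            .rename(columns={"size": "Record_Count"}))

# Sort node equivalent: descending record count, then keep the 10 best customers
top10 = counts.sort_values("Record_Count", ascending=False).head(10)
print(top10)
```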
The following settings are available for the Sort node:
Sort by. All fields selected to use as sort keys are displayed in a table. A key field works best for sorting when it is numeric.
* Add fields to this list using the Field Chooser button.
* Select an order by clicking the Ascending or Descending arrow in the table's Order column.
* Delete fields using the red delete button.
* Reorder the sort directives using the arrow buttons.
Default sort order. Select either Ascending or Descending to use as the default sort order when new fields are added.
Note: The Sort node is not applied if there is a Distinct node down the model flow. For information about the Distinct node, see [Distinct node](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/distinct.html#distinct).
|
C81BEEA067CCC7FED12806F3FF0F20519092F2E4 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/statistics.html?context=cdpaas&locale=en | Statistics (SPSS Modeler) | Statistics node
The Statistics node gives you basic summary information about numeric fields. You can get summary statistics for individual fields and correlations between fields.
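As a rough pandas analogy (not the node itself), summary statistics and correlations for numeric fields could be computed like this, with a placeholder file name:

```python
import pandas as pd

df = pd.read_csv("customers.csv")          # placeholder input data
numeric = df.select_dtypes("number")

print(numeric.describe())                  # count, mean, std dev, min/max, quartiles per field
print(numeric.corr())                      # pairwise Pearson correlations between fields
```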
|
2E2A2BE1CB20EF0C663E591532D71CFB5637E57F | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/streamingtcm.html?context=cdpaas&locale=en | Streaming TCM node (SPSS Modeler) | Streaming TCM node
You can use this node to build and score temporal causal models in one step.
After adding a Streaming TCM node to your flow canvas, double-click it to open the node properties. To see information about the properties, hover over the tool-tip icons. For more information about temporal causal modeling, see [TCM node](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/tcm.html).
|
84D42E162FEFC977AE807AF123CEDFDF400E403A | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/supernodes.html?context=cdpaas&locale=en | SuperNodes (SPSS Modeler) | SuperNodes
One of the reasons the SPSS Modeler visual interface is so easy to learn is that each node has a clearly defined function. However, for complex processing, a long sequence of nodes may be necessary. Eventually, this may clutter your flow canvas and make it difficult to follow flow diagrams.
There are two ways to avoid the clutter of a long and complex flow:
* You can split a processing sequence into several flows. The first flow, for example, creates a data file that the second uses as input. The second creates a file that the third uses as input, and so on. However, this requires you to manage multiple flows.
* You can create a SuperNode as a more streamlined alternative when working with complex flow processes. SuperNodes group multiple nodes into a single node by encapsulating sections of flow. This provides benefits to the data miner:
* Grouping nodes results in a neater and more manageable flow.
* Nodes can be combined into a business-specific SuperNode.
To group nodes into a SuperNode:
1. Ctrl + click to select the nodes you want to group.
2. Right-click and select Create supernode. The nodes are grouped into a single SuperNode with a special star icon.
Figure 1. SuperNode icon

|
8ED36D5E1CCDFB0139D9D3DB3AEA2B90AE1B405E | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/svm.html?context=cdpaas&locale=en | SVM node (SPSS Modeler) | SVM node
The SVM node uses a support vector machine to classify data. SVM is particularly suited for use with wide datasets, that is, those with a large number of predictor fields. You can use the default settings on the node to produce a basic model relatively quickly, or you can use the Expert settings to experiment with different types of SVM models.
After the model is built, you can:
* Browse the model nugget to display the relative importance of the input fields in building the model.
* Append a Table node to the model nugget to view the model output.
Example. A medical researcher has obtained a dataset containing characteristics of a number of human cell samples extracted from patients who were believed to be at risk of developing cancer. Analysis of the original data showed that many of the characteristics differed significantly between benign and malignant samples. The researcher wants to develop an SVM model that can use the values of similar cell characteristics in samples from other patients to give an early indication of whether their samples might be benign or malignant.
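The scenario in the example maps closely onto scikit-learn's bundled breast-cancer dataset, so a minimal stand-alone sketch of training a support vector classifier on wide data might look as follows. This illustrates the technique, not the node's implementation; the kernel and parameter choices are assumptions.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)            # 30 predictor fields, benign vs. malignant
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# a "default settings" style model; kernel, C, and gamma are the usual expert knobs
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
model.fit(X_tr, y_tr)
print("holdout accuracy:", round(model.score(X_te, y_te), 3))
```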
|
7434988303BF295C1586C5EE42100E8AF244859C | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/ta_reuse_category.html?context=cdpaas&locale=en | Reusing category sets in Text Analytics Workbench (SPSS Modeler) | Reusing custom category sets
You can customize a category set in Text Analytics Workbench and then download it to use in other SPSS Modeler flows.
Procedure
1. Optional: Customize the category set.
1. Select a category to customize.
2. To add descriptors, click the Descriptors tab, then drag descriptors from that tab into categories to add them.
2. Download the customized category set.
1. From the Text Analytics Workbench, go to the Categories tab.
2. Click the Options icon and select Download category set.
3. Give the category set a name and click Download.
3. Add the category set to another Text Mining node.
1. In a different flow session, go to the Categories tab in the Text Analytics Workbench.
2. Click the Options icon and select Add category set.
3. Browse to or drag-and-drop your category set.
4. Choose whether to replace the existing category set in the Text Mining node or to append your category set to the existing one. You can preview the final category set based on your choices.
5. Click Create.
|
E6A2EF28A33AA6A8C8B2321133A8816257CD1612 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/ta_save_resource_editor.html?context=cdpaas&locale=en | Reusing a project asset in Resource editor | Reusing a project asset in Resource editor
From the Text Analytics Workbench, you can save a template or library as a project asset. You can then use the template or library in other Text Mining nodes by loading it in the Resource editor.
Procedure
1. Save a library or template in Text Analytics Workbench.
1. On the Resource Editor tab, select the template or library to save.
2. Click the Options icon and select Save as project asset.
3. Enter details about the asset, and click Submit.
2. Load a library or template in a different Text Analytics Workbench.
1. On the Resource Editor tab, open the toolbar menu for your current template or library.
2. Click the Options icon and select Load library or Change template.
3. Find your library or template and select it.
4. Click Apply.
|
0F58073F0D5B237C3241126E98851A9E0C912792 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/ta_upload_tap-template_TMnode.html?context=cdpaas&locale=en | Uploading a text analysis package (TAP) in a Text Mining node (SPSS Modeler) | Uploading a custom asset in a Text Mining node
You can add a custom text analysis package (TAP) or template directly in the Text Mining node. When your SPSS Modeler flow runs, it will use your custom asset.
Procedure
1. If you want to download a TAP, save it locally.
1. Click Text analysis package while in the Text Analytics Workbench.
2. Enter details about the asset, and then click Submit. The text analysis package is saved locally as a .tap file.
2. If you want to download a template, see [Linguistic resources](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/tmwb-linguistic-resource.html#tmwb-templates-intro__DownloadAssetsSteps).
3. Add the TAP or template file to another Text Mining node.
1. In the Text Mining node, click Select resources.
2. Click the Text analysis package or Resource template tab depending on the asset you want.
3. Click Import, and then browse to or drag-and-drop your TAP or template.
4. Enter details about the asset, and then click Add. You can now see the uploaded TAP in the list of resources. It is also saved to your project as a project asset.
5. Click OK.
|
8654D0CBB99EE82483F99972EF5247401EB8E8D9 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/table.html?context=cdpaas&locale=en | Table (SPSS Modeler) | Table node
The Table node creates a table that lists the values in your data. All fields and all values in the stream are included, making this an easy way to inspect your data values or export them in an easily readable form. Optionally, you can highlight records that meet a certain condition.
Note: Unless you are working with small datasets, we recommend that you select a subset of the data to pass into the Table node. The Table node cannot display properly when the number of records surpasses a size that can be contained in the display structure (for example, 100 million rows).
|
6B6D315FFD086296183DE20086EE752A6A2B88C8 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/tcm.html?context=cdpaas&locale=en | TCM node (SPSS Modeler) | TCM node
Use this node to create a temporal causal model (TCM).
Temporal causal modeling attempts to discover key causal relationships in time series data. In temporal causal modeling, you specify a set of target series and a set of candidate inputs to those targets. The procedure then builds an autoregressive time series model for each target and includes only those inputs that have a causal relationship with the target. This approach differs from traditional time series modeling where you must explicitly specify the predictors for a target series. Since temporal causal modeling typically involves building models for multiple related time series, the result is referred to as a model system.
In the context of temporal causal modeling, the term causal refers to Granger causality. A time series X is said to "Granger cause" another time series Y if regressing for Y in terms of past values of both X and Y results in a better model for Y than regressing only on past values of Y.
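The Granger test itself is available outside SPSS Modeler; for example, the statsmodels sketch below checks whether a synthetic series X Granger-causes Y. The simulated data and lag choice are assumptions for illustration.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
x = rng.normal(size=300)
y = np.zeros(300)
for t in range(2, 300):                       # y is driven by its own past and by lagged x
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 2] + rng.normal(scale=0.1)

# column order matters: the test asks whether the 2nd column Granger-causes the 1st
data = pd.DataFrame({"Y": y, "X": x})
res = grangercausalitytests(data[["Y", "X"]], maxlag=3, verbose=False)
print({lag: round(r[0]["ssr_ftest"][1], 4) for lag, r in res.items()})   # p-value per lag
```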
Note: To build a temporal causal model, you need enough data points. The product uses the constraint:
m>(L + KL + 1)
where m is the number of data points, L is the number of lags, and K is the number of predictors. Make sure your data set is big enough so that the number of data points (m) satisfies the condition.
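A quick way to sanity-check the requirement is a one-line calculation; for example, 5 predictors with 12 lags need more than 12 + 5×12 + 1 = 73 data points per series.

```python
def enough_data_points(m: int, num_lags: int, num_predictors: int) -> bool:
    """Check the data requirement m > (L + K*L + 1)."""
    L, K = num_lags, num_predictors
    return m > (L + K * L + 1)

print(enough_data_points(m=60, num_lags=12, num_predictors=5))   # False: 60 <= 73
print(enough_data_points(m=100, num_lags=12, num_predictors=5))  # True: 100 > 73
```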
|
6153F9F311CD2BB2DF31C6A4A1CB76D64E36BFE6 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/textlinkanalysis.html?context=cdpaas&locale=en | Mining for text links (SPSS Modeler) | Mining for text links
The Text Link Analysis (TLA) node adds pattern-matching technology to text mining's concept extraction in order to identify relationships between the concepts in the text data based on known patterns. These relationships can describe how a customer feels about a product, which companies are doing business together, or even the relationships between genes or pharmaceutical agents.

For example, extracting your competitor’s product name may not be interesting enough to you. Using this node, you could also learn how people feel about this product, if such opinions exist in the data. The relationships and associations are identified and extracted by matching known patterns to your text data.
You can use the TLA pattern rules inside certain resource templates shipped with Text Analytics or create/edit your own. Pattern rules are made up of macros, word lists, and word gaps to form a Boolean query, or rule, that is compared to your input text. Whenever a TLA pattern rule matches text, this text can be extracted as a TLA result and restructured as output data.
The Text Link Analysis node offers a more direct way to identify and extract TLA pattern results from your text and then add the results to the dataset in the flow. But the Text Link Analysis node is not the only way in which you can perform text link analysis. You can also use a Text Analytics Workbench session in the Text Mining modeling node.
In the Text Analytics Workbench, you can explore the TLA pattern results and use them as category descriptors and/or to learn more about the results using drill-down and graphs. In fact, using the Text Mining node to extract TLA results is a great way to explore and fine-tune templates to your data for later use directly in the TLA node.
The output can be represented in up to 6 slots, or parts.
You can find this node under the Text Analytics section of the node palette.
Requirements. The Text Link Analysis node accepts text data read into a field using an Import node.
Strengths. The Text Link Analysis node goes beyond basic concept extraction to provide information about the relationships between concepts, as well as related opinions or qualifiers that may be revealed in the data.
|
0FAF8791603EB1A93ADC49EA8F9E5859D1E3360F | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/timeintervals.html?context=cdpaas&locale=en | Time Intervals node (SPSS Modeler) | Time Intervals node
Use the Time Intervals node to specify intervals and derive a new time field for estimating or forecasting. A full range of time intervals is supported, from seconds to years.
Use the node to derive a new time field. The new field has the same storage type as the input time field you chose. The node generates the following items:
* The field specified in the node properties as the Time Field, along with the chosen prefix/suffix. By default the prefix is $TI_.
* The fields specified in the node properties as the Dimension fields.
* The fields specified in the node properties as the Fields to aggregate.
You can also generate a number of extra fields, depending on the selected interval or period (such as the minute or second within which a measurement falls).
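As a loose pandas analogy of deriving a new time field and aggregating by it (the column names, the daily interval, and the use of the `$TI_` prefix are illustrative only):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "timestamp": pd.date_range("2024-01-01", periods=500, freq="h"),  # made-up hourly data
    "sales": rng.poisson(20, size=500),
})

# derive a daily time field (the "$TI_" prefix mirrors the node's default) and aggregate by it
df["$TI_timestamp"] = df["timestamp"].dt.floor("D")
daily = df.groupby("$TI_timestamp", as_index=False)["sales"].sum()
print(daily.head())
```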
|
99675D0DDD35D743F2F0BECF008D9CBED68C0534 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/timeplot.html?context=cdpaas&locale=en | Time Plot node (SPSS Modeler) | Time Plot node
Time Plot nodes allow you to view one or more time series plotted over time. The series you plot must contain numeric values and are assumed to occur over a range of time in which the periods are uniform.
Figure 1. Plotting sales of men's and women's clothing and jewelry over time

|
AC040F5709AB00AB3ED8275862FA2328D20842B2 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/tla_expert.html?context=cdpaas&locale=en | Expert options (SPSS Modeler) | Expert options
With the Text Link Analysis (TLA) node, the extraction of text link analysis pattern results is automatically enabled. In the node's properties, the expert options include additional parameters that impact how text is extracted and handled. The expert parameters control the basic behavior, as well as a few advanced behaviors, of the extraction process. A number of linguistic resources and options, controlled by the resource template you select, also impact the extraction results.
Limit extraction to concepts with a global frequency of at least [n]. This option specifies the minimum number of times a word or phrase must occur in the text in order for it to be extracted. In this way, a value of 5 limits the extraction to those words or phrases that occur at least five times in the entire set of records or documents.
In some cases, changing this limit can make a big difference in the resulting extraction results, and consequently, your categories. Let's say that you're working with some restaurant data and you don't increase the limit beyond 1 for this option. In this case, you might find pizza (1), thin pizza (2), spinach pizza (2), and favorite pizza (2) in your extraction results. However, if you were to limit the extraction to a global frequency of 5 or more and re-extract, you would no longer get three of these concepts. Instead you would get pizza (7), since pizza is the simplest form and this word already existed as a possible candidate. Depending on the rest of your text, the frequency might be even higher than seven if other phrases containing pizza appear elsewhere. Additionally, if spinach pizza was already a category descriptor, you might need to add pizza as a descriptor instead to capture all of the records. For this reason, change this limit with care whenever categories have already been created.
Note that this is an extraction-only feature; if your template contains terms (they usually do), and a term for the template is found in the text, then the term will be indexed regardless of its frequency.
For example, suppose you use a Basic Resources template that includes "los angeles" under the <Location> type in the Core library; if your document contains Los Angeles only once, then Los Angeles will be part of the list of concepts. To prevent this, you'll need to set a filter to display concepts occurring at least the same number of times as the value entered in the Limit extraction to concepts with a global frequency of at least [n] field.
Accommodate punctuation errors. This option temporarily normalizes text containing punctuation errors (for example, improper usage) during extraction to improve the extractability of concepts. This option is extremely useful when text is short and of poor quality (as, for example, in open-ended survey responses, e-mail, and CRM data), or when the text contains many abbreviations.
Accommodate spelling for a minimum word character length of [n]. This option applies a fuzzy grouping technique that helps group commonly misspelled words or closely spelled words under one concept. The fuzzy grouping algorithm temporarily strips all vowels (except the first one) and strips double/triple consonants from extracted words and then compares them to see if they're the same so that modeling and modelling would be grouped together. However, if each term is assigned to a different type, excluding the <Unknown> type, the fuzzy grouping technique won't be applied.
You can also define the minimum number of root characters required before fuzzy grouping is used. The number of root characters in a term is calculated by totaling all of the characters and subtracting any characters that form inflection suffixes and, in the case of compound-word terms, determiners and prepositions. For example, the term exercises is counted as 8 root characters in the form "exercise," since the letter s at the end of the word is an inflection (plural form). Similarly, apple sauce counts as 10 root characters ("apple sauce") and manufacturing of cars counts as 16 root characters (“manufacturing car”). This method of counting is only used to check whether the fuzzy grouping should be applied but doesn't influence how the words are matched.
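A rough sketch of the vowel-stripping idea described above follows; it is not the actual extraction engine, and the minimum-length check here counts plain characters rather than true root characters.

```python
import re
from collections import defaultdict

def fuzzy_key(word: str) -> str:
    """Collapse double/triple consonants, then drop every vowel except the first character."""
    w = word.lower()
    w = re.sub(r"([bcdfghjklmnpqrstvwxyz])\1{1,2}", r"\1", w)
    return w[0] + re.sub(r"[aeiou]", "", w[1:])

groups = defaultdict(list)
for term in ["modeling", "modelling", "color", "colour", "pizza"]:
    if len(term) >= 5:                       # stand-in for the minimum root-character length
        groups[fuzzy_key(term)].append(term)
print(dict(groups))                          # modeling/modelling and color/colour share a key
```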
Note: If you find that certain words are later grouped incorrectly, you can exclude word pairs from this technique by explicitly declaring them in the Fuzzy Grouping: Exceptions section under the Advanced Resources properties.
Extract uniterms. This option extracts single words (uniterms) as long as the word isn't already part of a compound word and if it's either a noun or an unrecognized part of speech.
Extract nonlinguistic entities. This option extracts nonlinguistic entities, such as phone numbers, social security numbers, times, dates, currencies, digits, percentages, e-mail addresses, and HTTP addresses. You can include or exclude certain types of nonlinguistic entities in the Nonlinguistic Entities: Configuration section under the Advanced Resources properties. By disabling any unnecessary entities, the extraction engine won't waste processing time.
Uppercase algorithm. This option extracts simple and compound terms that aren't in the built-in dictionaries as long as the first letter of the term is in uppercase. This option offers a good way to extract most proper nouns.
Group partial and full person names together when possible. This option groups names that appear differently in the text together. This feature is helpful since names are often referred to in their full form at the beginning of the text and then only by a shorter version. This option attempts to match any uniterm with the <Unknown> type to the last word of any of the compound terms that is typed as <Person>. For example, if doe is found and initially typed as <Unknown>, the extraction engine checks to see if any compound terms in the <Person> type include doe as the last word, such as john doe. This option doesn't apply to first names since most are never extracted as uniterms.
Maximum nonfunction word permutation. This option specifies the maximum number of nonfunction words that can be present when applying the permutation technique. This permutation technique groups similar phrases that differ from each other only by the nonfunction words (for example, of and the) contained, regardless of inflection. For example, let's say that you set this value to—at most—two words, and both company officials and officials of the company were extracted. In this case, both extracted terms would be grouped together in the final concept list since both terms are deemed to be the same when of the is ignored.
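A simplified sketch of that grouping idea is shown below; the word list is illustrative only, and inflection handling is omitted.

```python
NONFUNCTION_WORDS = {"of", "the", "a", "an", "in", "for", "to"}   # illustrative list only

def permutation_key(phrase: str, max_ignored: int = 2) -> str:
    words = phrase.lower().split()
    ignored = [w for w in words if w in NONFUNCTION_WORDS]
    if len(ignored) > max_ignored:
        return phrase.lower()                 # too many ignorable words: leave the phrase as-is
    content = sorted(w for w in words if w not in NONFUNCTION_WORDS)
    return " ".join(content)

# "company officials" and "officials of the company" collapse to the same key
print(permutation_key("company officials") == permutation_key("officials of the company"))
```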
Use derivation when grouping multiterms. When processing Big Data, select this option to group multiterms by using derivation rules.
|
EFD36F1BF92225311B684D6AA0D05A597F00D707 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/tla_output.html?context=cdpaas&locale=en | TLA node output (SPSS Modeler) | TLA node output
After running a Text Link Analysis node, the data is restructured. It's important to understand the way text mining restructures your data.
If you desire a different structure for data mining, you can use nodes on the Field Operations palette to accomplish this. For example, if you're working with data in which each row represents a text record, then one row is created for each pattern uncovered in the source text data. For each row in the output, there are 15 fields:
* Six fields (Concept#, such as Concept1, Concept2, ..., and Concept6) represent any concepts found in the pattern match
* Six fields (Type#, such as Type1, Type2, ..., and Type6) represent the type for each concept
* Rule Name represents the name of the text link rule used to match the text and produce the output
* A field using the name of the ID field you specified in the node and representing the record or document ID as it was in the input data
* Matched Text represents the portion of the text data in the original record or document that was matched to the TLA pattern
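For reference, the 15-field layout can be written out explicitly; the ID field name below is a placeholder for whichever field you chose in the node.

```python
id_field = "customer_id"                                  # placeholder for the chosen ID field
columns = (
    [f"Concept{i}" for i in range(1, 7)]                  # Concept1 .. Concept6
    + [f"Type{i}" for i in range(1, 7)]                   # Type1 .. Type6
    + ["Rule Name", id_field, "Matched Text"]
)
print(len(columns), columns)                              # 15 fields per output row
```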
|
B2250C2A2E20F6F123C6D1091BFD635DC74EE4FE | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/tmwb-linguistic-resource.html?context=cdpaas&locale=en | Linguistic resources used in Text Analytics (SPSS Modeler) | Linguistic resources
SPSS Modeler uses an extraction process that relies on linguistic resources. These resources serve as the basis for how to process the text data and extract information to get the concepts, types, and sometimes patterns.
The linguistic resources can be divided into different types:
Category sets
: Categories are a group of closely related ideas and patterns that the text data is assigned to through a scoring process.
Libraries
: Libraries are used as building blocks for both TAPs and templates. Each library is made up of several dictionaries, which are used to define and manage terms, synonyms, and exclude lists. While libraries are also delivered individually, they are prepackaged together in templates and TAPs.
Templates
: Templates are made up of a set of libraries and some advanced linguistic and nonlinguistic resources. These resources form a specialized set that is adapted to a particular domain or context, such as product opinions.
Text analysis packages (TAP)
: A text analysis package is a predefined template that is bundled with one or more sets of predefined category sets. TAPs bundle together these resources so that the categories and the resources that were used to generate them are both stored together and reusable.
Note: During extraction, some compiled internal resources are also used. These compiled resources contain many definitions that complement the types in the Core library. These compiled resources cannot be edited.
| # Linguistic resources #
SPSS Modeler uses an extraction process that relies on linguistic resources\. These resources serve as the basis for how to process the text data and extract information to get the concepts, types, and sometimes patterns\.
The linguistic resources can be divided into different types:
Category sets
: Categories are a group of closely related ideas and patterns that the text data is assigned to through a scoring process\.
Libraries
: Libraries are used as building blocks for both TAPs and templates\. Each library is made up of several dictionaries, which are used to define and manage terms, synonyms, and exclude lists\. While libraries are also delivered individually, they are prepackaged together in templates and TAPs\.
Templates
: Templates are made up of a set of libraries and some advanced linguistic and nonlinguistic resources\. These resources form a specialized set that is adapted to a particular domain or context, such as product opinions\.
Text analysis packages (TAP)
: A text analysis package is a predefined template that is bundled with one or more sets of predefined category sets\. TAPs bundle together these resources so that the categories and the resources that were used to generate them are both stored together and reusable\.
Note: During extraction, some compiled internal resources are also used\. These compiled resources contain many definitions that complement the types in the Core library\. These compiled resources cannot be edited\.
<!-- </article "role="article" "> -->
|
05275F4EC521878B13AD7DCE825E167B2FC7EF93 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/tmwb_advanced_frequencies.html?context=cdpaas&locale=en | Advanced frequency settings (SPSS Modeler) | Advanced frequency settings
You can build categories based on a straightforward and mechanical frequency technique. With this technique, you can build one category for each item (type, concept, or pattern) that was found to be higher than a given record or document count. Additionally, you can build a single category for all of the less frequently occurring items. By count, we refer to the number of records or documents containing the extracted concept (and any of its synonyms), type, or pattern in question as opposed to the total number of occurrences in the entire text.
Grouping frequently occurring items can yield interesting results, since it may indicate a common or significant response. The technique is very useful on the unused extraction results after other techniques have been applied. Another application is to run this technique immediately after extraction when no other categories exist, edit the results to delete uninteresting categories, and then extend those categories so that they match even more records or documents.
Instead of using this technique, you could sort the concepts or concept patterns by descending number of records or documents in the extraction results pane and then drag-and-drop the ones with the most records into the categories pane to create the corresponding categories.
The following advanced settings are available for the Use frequencies to build categories option in the category settings.
Generate category descriptors at. Select the kind of input for descriptors.
* Concepts level. Selecting this option means that concept or concept pattern frequencies will be used. Concepts are used if types were selected as input for category building, and concept patterns are used if type patterns were selected. In general, applying this technique to the concept level will produce more specific results, since concepts and concept patterns represent a lower level of measurement.
* Types level. Selecting this option means that type or type pattern frequencies will be used. Types are used if types were selected as input for category building, and type patterns are used if type patterns were selected. By applying this technique to the type level, you can get a quick view of the kind of information given.
Minimum record/doc. count for items to have their own category. With this option, you can build categories from frequently occurring items. This option restricts the output to only those categories containing a descriptor that occurred in at least X number of records or documents, where X is the value to enter for this option.
Group all remaining items into a category called. Use this option if you want to group all concepts or types occurring infrequently into a single catch-all category with the name of your choice. By default, this category is named Other.
Category input. Select the group to which to apply the techniques:
* Unused extraction results. This option enables categories to be built from extraction results that aren't used in any existing categories. This minimizes the tendency for records to match multiple categories and limits the number of categories produced.
* All extraction results. This option enables categories to be built using any of the extraction results. This is most useful when no or few categories already exist.
Resolve duplicate category names by. Select how to handle any new categories or subcategories whose names would be the same as existing categories. You can either merge the new ones (and their descriptors) with the existing categories with the same name, or you can choose to skip the creation of any categories if a duplicate name is found in the existing categories.
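As a rough illustration of the mechanics described above, and not the product's implementation, the following Python sketch counts the records containing each concept, gives every concept that meets a minimum record count its own category, and pools the rest into a catch-all category; the sample records and threshold are invented.

```python
from collections import Counter

# Each record lists the concepts extracted from it (synonyms assumed already merged).
records = [
    {"price", "delivery"},
    {"price", "quality"},
    {"price"},
    {"delivery"},
    {"packaging"},
]
min_record_count = 2          # "Minimum record/doc. count for items to have their own category"
other_name = "Other"          # "Group all remaining items into a category called"

# Count records (not total occurrences) that contain each concept.
record_counts = Counter(concept for record in records for concept in record)

categories = {}
for concept, count in record_counts.items():
    if count >= min_record_count:
        categories[concept] = [concept]                         # frequent item gets its own category
    else:
        categories.setdefault(other_name, []).append(concept)   # infrequent items are pooled

print(categories)
# e.g. {'price': ['price'], 'delivery': ['delivery'], 'Other': ['quality', 'packaging']}
```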
| # Advanced frequency settings #
You can build categories based on a straightforward and mechanical frequency technique\. With this technique, you can build one category for each item (type, concept, or pattern) that was found to be higher than a given record or document count\. Additionally, you can build a single category for all of the less frequently occurring items\. By count, we refer to the number of records or documents containing the extracted concept (and any of its synonyms), type, or pattern in question as opposed to the total number of occurrences in the entire text\.
Grouping frequently occurring items can yield interesting results, since it may indicate a common or significant response\. The technique is very useful on the unused extraction results after other techniques have been applied\. Another application is to run this technique immediately after extraction when no other categories exist, edit the results to delete uninteresting categories, and then extend those categories so that they match even more records or documents\.
Instead of using this technique, you could sort the concepts or concept patterns by descending number of records or documents in the extraction results pane and then drag\-and\-drop the ones with the most records into the categories pane to create the corresponding categories\.
The following advanced settings are available for the Use frequencies to build categories option in the category settings\.
Generate category descriptors at\. Select the kind of input for descriptors\.
<!-- <ul> -->
* Concepts level\. Selecting this option means that concept or concept pattern frequencies will be used\. Concepts are used if types were selected as input for category building, and concept patterns are used if type patterns were selected\. In general, applying this technique to the concept level will produce more specific results, since concepts and concept patterns represent a lower level of measurement\.
* Types level\. Selecting this option means that type or type pattern frequencies will be used\. Types are used if types were selected as input for category building, and type patterns are used if type patterns were selected\. By applying this technique to the type level, you can get a quick view of the kind of information given\.
<!-- </ul> -->
Minimum record/doc\. count for items to have their own category\. With this option, you can build categories from frequently occurring items\. This option restricts the output to only those categories containing a descriptor that occurred in at least X number of records or documents, where X is the value to enter for this option\.
Group all remaining items into a category called\. Use this option if you want to group all concepts or types occurring infrequently into a single catch\-all category with the name of your choice\. By default, this category is named Other\.
Category input\. Select the group to which to apply the techniques:
<!-- <ul> -->
* Unused extraction results\. This option enables categories to be built from extraction results that aren't used in any existing categories\. This minimizes the tendency for records to match multiple categories and limits the number of categories produced\.
* All extraction results\. This option enables categories to be built using any of the extraction results\. This is most useful when no or few categories already exist\.
<!-- </ul> -->
Resolve duplicate category names by\. Select how to handle any new categories or subcategories whose names would be the same as existing categories\. You can either merge the new ones (and their descriptors) with the existing categories with the same name, or you can choose to skip the creation of any categories if a duplicate name is found in the existing categories\.
<!-- </article "role="article" "> -->
|
A1365CD1E2ACBEE6E9BF025DD493FEB17A0D428F | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/tmwb_advanced_linguistic.html?context=cdpaas&locale=en | Advanced linguistic settings (SPSS Modeler) | Advanced linguistic settings
When you build categories, you can select from a number of advanced linguistic category building techniques such as concept inclusion and semantic networks (English text only). These techniques can be used individually or in combination with each other to create categories.
Keep in mind that because every dataset is unique, the number of methods and the order in which you apply them may change over time. Since your text mining goals may be different from one set of data to the next, you may need to experiment with the different techniques to see which one produces the best results for the given text data. None of the automatic techniques will perfectly categorize your data; therefore we recommend finding and applying one or more automatic techniques that work well with your data.
The following advanced settings are available for the Use linguistic techniques to build categories option in the category settings.
| # Advanced linguistic settings #
When you build categories, you can select from a number of advanced linguistic category building techniques such as concept inclusion and semantic networks (English text only)\. These techniques can be used individually or in combination with each other to create categories\.
Keep in mind that because every dataset is unique, the number of methods and the order in which you apply them may change over time\. Since your text mining goals may be different from one set of data to the next, you may need to experiment with the different techniques to see which one produces the best results for the given text data\. None of the automatic techniques will perfectly categorize your data; therefore we recommend finding and applying one or more automatic techniques that work well with your data\.
The following advanced settings are available for the Use linguistic techniques to build categories option in the category settings\.
<!-- </article "role="article" "> -->
|
D171FCF10D8A1699FD8AC67E44053BBF6405631C | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/tmwb_conceptstab.html?context=cdpaas&locale=en | The Concepts tab (SPSS Modeler) | The Concepts tab
In the Text Analytics Workbench, you can use the Concepts tab to create and explore concepts as well as explore and tweak the extraction results.
Concepts are the most basic level of extraction results available to use as building blocks, called descriptors, for your categories. Categories are a group of closely related ideas and patterns to which documents and records are assigned through a scoring process.
Text mining is an iterative process in which extraction results are reviewed according to the context of the text data, fine-tuned to produce new results, and then reevaluated. Extraction results can be refined by modifying the linguistic resources. To simplify the process of fine-tuning your linguistic resources, you can perform common dictionary tasks directly from the Concepts tab. You can fine-tune other linguistic resources directly from the Resource editor tab.
Figure 1. Concepts tab

| # The Concepts tab #
In the Text Analytics Workbench, you can use the Concepts tab to create and explore concepts as well as explore and tweak the extraction results\.
Concepts are the most basic level of extraction results available to use as building blocks, called descriptors, for your categories\. Categories are a group of closely related ideas and patterns to which documents and records are assigned through a scoring process\.
Text mining is an iterative process in which extraction results are reviewed according to the context of the text data, fine\-tuned to produce new results, and then reevaluated\. Extraction results can be refined by modifying the linguistic resources\. To simplify the process of fine\-tuning your linguistic resources, you can perform common dictionary tasks directly from the Concepts tab\. You can fine\-tune other linguistic resources directly from the Resource editor tab\.
Figure 1\. Concepts tab

<!-- </article "role="article" "> -->
|
6068B2555E5014D386397335D0ED56B430082FF7 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/tmwb_dewindow.html?context=cdpaas&locale=en | The Resource editor tab (SPSS Modeler) | The Resource editor tab
Text Analytics rapidly and accurately captures key concepts from text data by using an extraction process. This process relies on linguistic resources to dictate how large amounts of unstructured, textual data should be analyzed and interpreted.
You can use the Resource editor tab to view the linguistic resources used in the extraction process. These resources are stored in the form of templates and libraries, which are used to extract concepts, group them under types, discover patterns in the text data, and other processes. Text Analytics offers several preconfigured resource templates, and in some languages, you can also use the resources in text analysis packages.
Figure 1. Resource editor tab

| # The Resource editor tab #
Text Analytics rapidly and accurately captures key concepts from text data by using an extraction process\. This process relies on linguistic resources to dictate how large amounts of unstructured, textual data should be analyzed and interpreted\.
You can use the Resource editor tab to view the linguistic resources used in the extraction process\. These resources are stored in the form of templates and libraries, which are used to extract concepts, group them under types, discover patterns in the text data, and other processes\. Text Analytics offers several preconfigured resource templates, and in some languages, you can also use the resources in text analysis packages\.
Figure 1\. Resource editor tab

<!-- </article "role="article" "> -->
|
342AD3ABFEECA87987ED595047CC869E15F148BF | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/tmwb_generate_model.html?context=cdpaas&locale=en | Generating a model nugget (SPSS Modeler) | Generating a model nugget
When you're working in the Text Analytics Workbench, you may want to use the work you've done to generate a category model nugget.
A model generated from a Text Analytics Workbench session is a category model nugget. You must first have at least one category before you can generate a category model nugget.
| # Generating a model nugget #
When you're working in the Text Analytics Workbench, you may want to use the work you've done to generate a category model nugget\.
A model generated from a Text Analytics Workbench session is a category model nugget\. You must first have at least one category before you can generate a category model nugget\.
<!-- </article "role="article" "> -->
|
7FE671DB2B6972A1CFB04E0902F8D82DC979D42A | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/tmwb_intro.html?context=cdpaas&locale=en | Text Analytics Workbench (SPSS Modeler) | Text Analytics Workbench
From a Text Mining modeling node, you can choose to launch an interactive Text Analytics Workbench session when your flow runs. In this workbench, you can extract key concepts from your text data, build categories, explore patterns in text link analysis, and generate category models.
You can use the Text Analytics Workbench to explore the results and tune the configuration for the node.
Concepts
: Concepts are the key words and phrases identified and extracted from your text data, also referred to as extraction results. These concepts are grouped into types. You can use these concepts to explore your data and create your categories. You can manage the concepts on the Concepts tab.
Text links
: If you have text link analysis (TLA) pattern rules in your linguistic resources or are using a resource template that already has some TLA rules, you can extract patterns from your text data. These patterns can help you uncover interesting relationships between concepts in your data. You can also use these patterns as descriptors in your categories. You can manage these on the Text links tab.
Categories
: Using descriptors (such as extraction results, patterns, and rules) as a definition, you can manually or automatically create a set of categories. Documents and records are assigned to these categories based on whether or not they contain a part of the category definition. You can manage categories on the Categories tab.
Resources
: The extraction process relies on a set of parameters and definitions from linguistic resources to govern how text is extracted and handled. These are managed in the form of templates and libraries on the Resource editor tab.
Figure 1. Text Analytics Workbench

| # Text Analytics Workbench #
From a Text Mining modeling node, you can choose to launch an interactive Text Analytics Workbench session when your flow runs\. In this workbench, you can extract key concepts from your text data, build categories, explore patterns in text link analysis, and generate category models\.
You can use the Text Analytics Workbench to explore the results and tune the configuration for the node\.
Concepts
: Concepts are the key words and phrases identified and extracted from your text data, also referred to as extraction results\. These concepts are grouped into types\. You can use these concepts to explore your data and create your categories\. You can manage the concepts on the Concepts tab\.
Text links
: If you have text link analysis (TLA) pattern rules in your linguistic resources or are using a resource template that already has some TLA rules, you can extract patterns from your text data\. These patterns can help you uncover interesting relationships between concepts in your data\. You can also use these patterns as descriptors in your categories\. You can manage these on the Text links tab\.
Categories
: Using descriptors (such as extraction results, patterns, and rules) as a definition, you can manually or automatically create a set of categories\. Documents and records are assigned to these categories based on whether or not they contain a part of the category definition\. You can manage categories on the Categories tab\.
Resources
: The extraction process relies on a set of parameters and definitions from linguistic resources to govern how text is extracted and handled\. These are managed in the form of templates and libraries on the Resource editor tab\.
Figure 1\. Text Analytics Workbench

<!-- </article "role="article" "> -->
|
925108D09CFC6F2B5193D0D7414BFC83748111A9 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/tmwb_intro_options.html?context=cdpaas&locale=en | Setting options (SPSS Modeler) | Setting options
You can access settings in various panes of the Text Analytics Workbench, such as extraction settings for concepts.
On the Concepts, Text links, and Categories tabs, categories are built from descriptors derived from either types or type patterns. In the table, you can select the individual types or patterns to include in the category building process. A description of all settings on each tab follows.
| # Setting options #
You can access settings in various panes of the Text Analytics Workbench, such as extraction settings for concepts\.
On the Concepts, Text links, and Categories tabs, categories are built from descriptors derived from either types or type patterns\. In the table, you can select the individual types or patterns to include in the category building process\. A description of all settings on each tab follows\.
<!-- </article "role="article" "> -->
|
31A670D6B3F0D7AB4EAD7DAE3795589F161249DE | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/tmwb_tawindow.html?context=cdpaas&locale=en | The Categories tab (SPSS Modeler) | The Categories tab
In the Text Analytics Workbench, you can use the Categories tab to create and explore categories as well as tweak the extraction results.
Extraction results can be refined by modifying the linguistic resources, which you can do directly from the Categories tab.
Figure 1. Categories tab

| # The Categories tab #
In the Text Analytics Workbench, you can use the Categories tab to create and explore categories as well as tweak the extraction results\.
Extraction results can be refined by modifying the linguistic resources, which you can do directly from the Categories tab\.
Figure 1\. Categories tab

<!-- </article "role="article" "> -->
|
799CE322C90ECAD9CC4BACAD45F9749EC21E912E | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/tmwb_view_tla.html?context=cdpaas&locale=en | The Text links tab (SPSS Modeler) | The Text links tab
On the Text links tab, you can build and explore text link analysis patterns found in your text data. Text link analysis (TLA) is a pattern-matching technology that enables you to define TLA rules and compare them to actual extracted concepts and relationships found in your text.
Patterns are most useful when you are attempting to discover relationships between concepts or opinions about a particular subject. Some examples include wanting to extract opinions on products from survey data, genomic relationships from within medical research papers, or relationships between people or places from intelligence data.
After you've extracted some TLA patterns, you can explore them and even add them to categories. To extract TLA results, there must be some TLA rules defined in the resource template or libraries you're using.
With no type patterns selected, you can click the Settings icon to change the extraction settings. For details, see [Setting options](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/tmwb_intro_options.html). You can also click the Filter icon to filter the type patterns that are displayed.
Figure 1. Text links view

| # The Text links tab #
On the Text links tab, you can build and explore text link analysis patterns found in your text data\. Text link analysis (TLA) is a pattern\-matching technology that enables you to define TLA rules and compare them to actual extracted concepts and relationships found in your text\.
Patterns are most useful when you are attempting to discover relationships between concepts or opinions about a particular subject\. Some examples include wanting to extract opinions on products from survey data, genomic relationships from within medical research papers, or relationships between people or places from intelligence data\.
After you've extracted some TLA patterns, you can explore them and even add them to categories\. To extract TLA results, there must be some TLA rules defined in the resource template or libraries you're using\.
With no type patterns selected, you can click the Settings icon to change the extraction settings\. For details, see [Setting options](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/tmwb_intro_options.html)\. You can also click the Filter icon to filter the type patterns that are displayed\.
Figure 1\. Text links view

<!-- </article "role="article" "> -->
|
BE6A4C0BB6BCC7166FF88D60FD433C220962730D | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/transform.html?context=cdpaas&locale=en | Transform node (SPSS Modeler) | Transform node
Normalizing input fields is an important step before using traditional scoring techniques such as regression, logistic regression, and discriminant analysis. These techniques carry assumptions about normal distributions of data that may not be true for many raw data files. One approach to dealing with real-world data is to apply transformations that move a raw data element toward a more normal distribution. In addition, normalized fields can easily be compared with each other—for example, income and age are on totally different scales in a raw data file but, when normalized, the relative impact of each can be easily interpreted.
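For context, the general idea of nudging a skewed field toward normality and putting fields on a comparable scale can be sketched in Python with pandas and NumPy; this is a generic illustration on invented data, not necessarily the specific transformations the Transform node offers.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "income": rng.lognormal(mean=10, sigma=0.8, size=1000),   # right-skewed raw field
    "age": rng.integers(18, 80, size=1000).astype(float),
})

# A log transform pulls the long right tail of income toward a more normal shape.
df["income_log"] = np.log(df["income"])

# Z-scoring puts income and age on the same scale so their relative impact is comparable.
for col in ["income_log", "age"]:
    df[col + "_z"] = (df[col] - df[col].mean()) / df[col].std()

print(df[["income", "income_log", "income_log_z", "age_z"]].describe().round(2))
```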
The Transform node provides an output viewer that enables you to perform a rapid visual assessment of the best transformation to use. You can see at a glance whether variables are normally distributed and, if necessary, choose the transformation you want and apply it. You can pick multiple fields and perform one transformation per field.
After selecting the preferred transformations for the fields, you can generate Derive or Filler nodes that perform the transformations and attach these nodes to the flow. The Derive node creates new fields, while the Filler node transforms the existing ones.
| # Transform node #
Normalizing input fields is an important step before using traditional scoring techniques such as regression, logistic regression, and discriminant analysis\. These techniques carry assumptions about normal distributions of data that may not be true for many raw data files\. One approach to dealing with real\-world data is to apply transformations that move a raw data element toward a more normal distribution\. In addition, normalized fields can easily be compared with each other—for example, income and age are on totally different scales in a raw data file but, when normalized, the relative impact of each can be easily interpreted\.
The Transform node provides an output viewer that enables you to perform a rapid visual assessment of the best transformation to use\. You can see at a glance whether variables are normally distributed and, if necessary, choose the transformation you want and apply it\. You can pick multiple fields and perform one transformation per field\.
After selecting the preferred transformations for the fields, you can generate Derive or Filler nodes that perform the transformations and attach these nodes to the flow\. The Derive node creates new fields, while the Filler node transforms the existing ones\.
<!-- </article "role="article" "> -->
|
22B8136F68AC74838B9C2B9EAF3996CCFAA14921 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/transpose.html?context=cdpaas&locale=en | Transpose node (SPSS Modeler) | Transpose node
By default, columns are fields and rows are records or observations. If necessary, you can use a Transpose node to swap the data in rows and columns so that fields become records and records become fields.
For example, if you have time series data where each series is a row rather than a column, you can transpose the data prior to analysis.
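The equivalent operation in pandas is a plain transpose; the sketch below uses invented data in which each row is a series and each remaining column is a time period.

```python
import pandas as pd

# Each row is one series, each remaining column is a time period.
wide = pd.DataFrame({
    "series": ["sales", "returns"],
    "2023-01": [120, 8],
    "2023-02": [135, 11],
    "2023-03": [128, 9],
}).set_index("series")

# Transpose so that each series becomes a column and each time period becomes a row.
long = wide.T
print(long)
#          sales  returns
# 2023-01    120        8
# 2023-02    135       11
# 2023-03    128        9
```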
| # Transpose node #
By default, columns are fields and rows are records or observations\. If necessary, you can use a Transpose node to swap the data in rows and columns so that fields become records and records become fields\.
For example, if you have time series data where each series is a row rather than a column, you can transpose the data prior to analysis\.
<!-- </article "role="article" "> -->
|
015755C65C274F262396747D3F32A59AE74C08D7 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/treeas.html?context=cdpaas&locale=en | Tree-AS node (SPSS Modeler) | Tree-AS node
The Tree-AS node can be used with data in a distributed environment. With this node, you can choose to build decision trees using either a CHAID or Exhaustive CHAID model.
CHAID, or Chi-squared Automatic Interaction Detection, is a classification method for building decision trees by using chi-square statistics to identify optimal splits.
CHAID first examines the crosstabulations between each of the input fields and the outcome, and tests for significance using a chi-square independence test. If more than one of these relations is statistically significant, CHAID will select the input field that is the most significant (smallest p value). If an input has more than two categories, these are compared, and categories that show no differences in the outcome are collapsed together. This is done by successively joining the pair of categories showing the least significant difference. This category-merging process stops when all remaining categories differ at the specified testing level. For nominal input fields, any categories can be merged; for an ordinal set, only contiguous categories can be merged.
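The selection step, picking the input whose chi-square test of independence against the target gives the smallest p value, can be sketched in Python with SciPy; this illustrates only the test CHAID relies on, not the node's category merging or tree growing, and the data is invented.

```python
import pandas as pd
from scipy.stats import chi2_contingency

data = pd.DataFrame({
    "region":  ["north", "south", "north", "east", "south", "east", "north", "south"],
    "gender":  ["f", "m", "f", "m", "f", "m", "m", "f"],
    "churned": ["yes", "no", "yes", "no", "no", "no", "yes", "no"],
})

# Chi-square independence test of each categorical input against the target.
p_values = {}
for predictor in ["region", "gender"]:
    table = pd.crosstab(data[predictor], data["churned"])
    _, p, _, _ = chi2_contingency(table)
    p_values[predictor] = p

best = min(p_values, key=p_values.get)
print(p_values, "-> first split candidate:", best)
```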
Exhaustive CHAID is a modification of CHAID that does a more thorough job of examining all possible splits for each predictor but takes longer to compute.
Requirements. Target and input fields can be continuous or categorical; nodes can be split into two or more subgroups at each level. Any ordinal fields used in the model must have numeric storage (not string). If necessary, use the Reclassify node to convert them.
Strengths. CHAID can generate nonbinary trees, meaning that some splits have more than two branches. It therefore tends to create a wider tree than the binary growing methods. CHAID works for all types of inputs, and it accepts both case weights and frequency variables.
| # Tree\-AS node #
The Tree\-AS node can be used with data in a distributed environment\. With this node, you can choose to build decision trees using either a CHAID or Exhaustive CHAID model\.
CHAID, or Chi\-squared Automatic Interaction Detection, is a classification method for building decision trees by using chi\-square statistics to identify optimal splits\.
CHAID first examines the crosstabulations between each of the input fields and the outcome, and tests for significance using a chi\-square independence test\. If more than one of these relations is statistically significant, CHAID will select the input field that is the most significant (smallest `p` value)\. If an input has more than two categories, these are compared, and categories that show no differences in the outcome are collapsed together\. This is done by successively joining the pair of categories showing the least significant difference\. This category\-merging process stops when all remaining categories differ at the specified testing level\. For nominal input fields, any categories can be merged; for an ordinal set, only contiguous categories can be merged\.
Exhaustive CHAID is a modification of CHAID that does a more thorough job of examining all possible splits for each predictor but takes longer to compute\.
Requirements\. Target and input fields can be continuous or categorical; nodes can be split into two or more subgroups at each level\. Any ordinal fields used in the model must have numeric storage (not string)\. If necessary, use the Reclassify node to convert them\.
Strengths\. CHAID can generate nonbinary trees, meaning that some splits have more than two branches\. It therefore tends to create a wider tree than the binary growing methods\. CHAID works for all types of inputs, and it accepts both case weights and frequency variables\.
<!-- </article "role="article" "> -->
|
18C44D2A29B576F708BC515CEDE91227B6B4FC4E | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/ts.html?context=cdpaas&locale=en | Time Series node (SPSS Modeler) | Time Series node
The Time Series node can be used with data in either a local or distributed environment. With this node, you can choose to estimate and build exponential smoothing, univariate Autoregressive Integrated Moving Average (ARIMA), or multivariate ARIMA (or transfer function) models for time series, and produce forecasts based on the time series data.
Exponential smoothing is a method of forecasting that uses weighted values of previous series observations to predict future values. As such, exponential smoothing is not based on a theoretical understanding of the data. It forecasts one point at a time, adjusting its forecasts as new data come in. The technique is useful for forecasting series that exhibit trend, seasonality, or both. You can choose from various exponential smoothing models that differ in their treatment of trend and seasonality.
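As a minimal sketch of the idea, the function below implements simple exponential smoothing only (no trend or seasonality) with an arbitrary smoothing weight; the node itself estimates an appropriate model for you.

```python
def simple_exponential_smoothing(series, alpha=0.3):
    """Return one-step-ahead forecasts: each forecast blends the latest
    observation with the previous smoothed value."""
    smoothed = series[0]
    forecasts = [smoothed]            # forecast made after each observation
    for value in series[1:]:
        smoothed = alpha * value + (1 - alpha) * smoothed
        forecasts.append(smoothed)
    return forecasts

demand = [112, 118, 132, 129, 121, 135, 148, 148]
print(simple_exponential_smoothing(demand)[-1])   # forecast for the next period
```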
ARIMA models provide more sophisticated methods for modeling trend and seasonal components than do exponential smoothing models, and, in particular, they allow the added benefit of including independent (predictor) variables in the model. This involves explicitly specifying autoregressive and moving average orders as well as the degree of differencing. You can include predictor variables and define transfer functions for any or all of them, as well as specify automatic detection of outliers or an explicit set of outliers.
Note: In practical terms, ARIMA models are most useful if you want to include predictors that might help to explain the behavior of the series that is being forecast, such as the number of catalogs that are mailed or the number of hits to a company web page. Exponential smoothing models describe the behavior of the time series without attempting to understand why it behaves as it does. For example, a series that historically peaks every 12 months will probably continue to do so even if you don't know why.
An Expert Modeler option is also available, which attempts to automatically identify and estimate the best-fitting ARIMA or exponential smoothing model for one or more target variables, thus eliminating the need to identify an appropriate model through trial and error. If in doubt, use the Expert Modeler option.
If predictor variables are specified, the Expert Modeler selects those variables that have a statistically significant relationship with the dependent series for inclusion in ARIMA models. Model variables are transformed where appropriate using differencing and/or a square root or natural log transformation. By default, the Expert Modeler considers all exponential smoothing models and all ARIMA models and picks the best model among them for each target field. You can, however, limit the Expert Modeler only to pick the best of the exponential smoothing models or only to pick the best of the ARIMA models. You can also specify automatic detection of outliers.
| # Time Series node #
The Time Series node can be used with data in either a local or distributed environment\. With this node, you can choose to estimate and build exponential smoothing, univariate Autoregressive Integrated Moving Average (ARIMA), or multivariate ARIMA (or transfer function) models for time series, and produce forecasts based on the time series data\.
Exponential smoothing is a method of forecasting that uses weighted values of previous series observations to predict future values\. As such, exponential smoothing is not based on a theoretical understanding of the data\. It forecasts one point at a time, adjusting its forecasts as new data come in\. The technique is useful for forecasting series that exhibit trend, seasonality, or both\. You can choose from various exponential smoothing models that differ in their treatment of trend and seasonality\.
ARIMA models provide more sophisticated methods for modeling trend and seasonal components than do exponential smoothing models, and, in particular, they allow the added benefit of including independent (predictor) variables in the model\. This involves explicitly specifying autoregressive and moving average orders as well as the degree of differencing\. You can include predictor variables and define transfer functions for any or all of them, as well as specify automatic detection of outliers or an explicit set of outliers\.
Note: In practical terms, ARIMA models are most useful if you want to include predictors that might help to explain the behavior of the series that is being forecast, such as the number of catalogs that are mailed or the number of hits to a company web page\. Exponential smoothing models describe the behavior of the time series without attempting to understand why it behaves as it does\. For example, a series that historically peaks every 12 months will probably continue to do so even if you don't know why\.
An Expert Modeler option is also available, which attempts to automatically identify and estimate the best\-fitting ARIMA or exponential smoothing model for one or more target variables, thus eliminating the need to identify an appropriate model through trial and error\. If in doubt, use the Expert Modeler option\.
If predictor variables are specified, the Expert Modeler selects those variables that have a statistically significant relationship with the dependent series for inclusion in ARIMA models\. Model variables are transformed where appropriate using differencing and/or a square root or natural log transformation\. By default, the Expert Modeler considers all exponential smoothing models and all ARIMA models and picks the best model among them for each target field\. You can, however, limit the Expert Modeler only to pick the best of the exponential smoothing models or only to pick the best of the ARIMA models\. You can also specify automatic detection of outliers\.
<!-- </article "role="article" "> -->
|
A5D736B45EC8EC0B906E183DE5DAA8BFA4C1F2D6 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/ts_streaming.html?context=cdpaas&locale=en | Streaming Time Series node (SPSS Modeler) | Streaming Time Series node
You use the Streaming Time Series node to build and score time series models in one step. A separate time series model is built for each target field; however, model nuggets are not added to the generated models palette, and the model information cannot be browsed.
Methods for modeling time series data require a uniform interval between each measurement, with any missing values indicated by empty rows. If your data does not already meet this requirement, you need to transform values as needed.
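One way to impose a uniform interval before the data reaches the node, sketched here with pandas on invented daily data, is to reindex onto a complete date range so that missing periods show up as empty (NaN) rows.

```python
import pandas as pd

# Irregular observations: the 3rd of January is missing entirely.
ts = pd.DataFrame(
    {"value": [10.0, 12.0, 11.5]},
    index=pd.to_datetime(["2024-01-01", "2024-01-02", "2024-01-04"]),
)

# Reindex to a uniform daily interval; the missing day becomes an empty (NaN) row.
uniform = ts.reindex(pd.date_range(ts.index.min(), ts.index.max(), freq="D"))
print(uniform)
```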
Other points of interest regarding Time Series nodes:
* Fields must be numeric.
* Date fields cannot be used as inputs.
* Partitions are ignored.
The Streaming Time Series node estimates exponential smoothing, univariate Autoregressive Integrated Moving Average (ARIMA), and multivariate ARIMA (or transfer function) models for time series and produces forecasts based on the time series data. Also available is an Expert Modeler, which attempts to automatically identify and estimate the best-fitting ARIMA or exponential smoothing model for one or more target fields.
| # Streaming Time Series node #
You use the Streaming Time Series node to build and score time series models in one step\. A separate time series model is built for each target field; however, model nuggets are not added to the generated models palette, and the model information cannot be browsed\.
Methods for modeling time series data require a uniform interval between each measurement, with any missing values indicated by empty rows\. If your data does not already meet this requirement, you need to transform values as needed\.
Other points of interest regarding Time Series nodes:
<!-- <ul> -->
* Fields must be numeric\.
* Date fields cannot be used as inputs\.
* Partitions are ignored\.
<!-- </ul> -->
The Streaming Time Series node estimates exponential smoothing, univariate Autoregressive Integrated Moving Average (ARIMA), and multivariate ARIMA (or transfer function) models for time series and produces forecasts based on the time series data\. Also available is an Expert Modeler, which attempts to automatically identify and estimate the best\-fitting ARIMA or exponential smoothing model for one or more target fields\.
<!-- </article "role="article" "> -->
|
94FE9993A8201BDBD9D383CC4CC4CA4F2DDDB47D | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/twostep.html?context=cdpaas&locale=en | TwoStep cluster node (SPSS Modeler) | TwoStep cluster node
The TwoStep Cluster node provides a form of cluster analysis. It can be used to cluster the dataset into distinct groups when you don't know what those groups are at the beginning. As with Kohonen nodes and K-Means nodes, TwoStep Cluster models do not use a target field. Instead of trying to predict an outcome, TwoStep Cluster tries to uncover patterns in the set of input fields. Records are grouped so that records within a group or cluster tend to be similar to each other, but records in different groups are dissimilar.
TwoStep Cluster is a two-step clustering method. The first step makes a single pass through the data, during which it compresses the raw input data into a manageable set of subclusters. The second step uses a hierarchical clustering method to progressively merge the subclusters into larger and larger clusters, without requiring another pass through the data. Hierarchical clustering has the advantage of not requiring the number of clusters to be selected ahead of time. Many hierarchical clustering methods start with individual records as starting clusters and merge them recursively to produce ever larger clusters. Though such approaches often break down with large amounts of data, TwoStep's initial preclustering makes hierarchical clustering fast even for large datasets.
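The two-step idea can be sketched with scikit-learn as an analogy (it is not the node's algorithm): BIRCH compresses the records into subclusters by building a CF tree, and hierarchical clustering then merges the subcluster centers.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering, Birch

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(200, 2)) for c in (0, 3, 6)])

# Step 1: compress the raw records into a manageable set of subclusters (CF tree).
birch = Birch(threshold=0.5, n_clusters=None).fit(X)
centers = birch.subcluster_centers_

# Step 2: hierarchically merge the subcluster centers into final clusters.
agglo = AgglomerativeClustering(n_clusters=3).fit(centers)

# Map each record to its subcluster, then to that subcluster's final cluster.
labels = agglo.labels_[birch.predict(X)]
print(len(centers), "subclusters merged into", len(set(labels)), "clusters")
```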
Note: The resulting model depends to a certain extent on the order of the training data. Reordering the data and rebuilding the model may lead to a different final cluster model.
Requirements. To train a TwoStep Cluster model, you need one or more fields with the role set to Input. Fields with the role set to Target, Both, or None are ignored. The TwoStep Cluster algorithm does not handle missing values. Records with blanks for any of the input fields will be ignored when building the model.
Strengths. TwoStep Cluster can handle mixed field types and is able to handle large datasets efficiently. It also has the ability to test several cluster solutions and choose the best, so you don't need to know how many clusters to ask for at the outset. TwoStep Cluster can be set to automatically exclude outliers, or extremely unusual cases that can contaminate your results.
| # TwoStep cluster node #
The TwoStep Cluster node provides a form of cluster analysis\. It can be used to cluster the dataset into distinct groups when you don't know what those groups are at the beginning\. As with Kohonen nodes and K\-Means nodes, TwoStep Cluster models do *not* use a target field\. Instead of trying to predict an outcome, TwoStep Cluster tries to uncover patterns in the set of input fields\. Records are grouped so that records within a group or cluster tend to be similar to each other, but records in different groups are dissimilar\.
TwoStep Cluster is a two\-step clustering method\. The first step makes a single pass through the data, during which it compresses the raw input data into a manageable set of subclusters\. The second step uses a hierarchical clustering method to progressively merge the subclusters into larger and larger clusters, without requiring another pass through the data\. Hierarchical clustering has the advantage of not requiring the number of clusters to be selected ahead of time\. Many hierarchical clustering methods start with individual records as starting clusters and merge them recursively to produce ever larger clusters\. Though such approaches often break down with large amounts of data, TwoStep's initial preclustering makes hierarchical clustering fast even for large datasets\.
Note: The resulting model depends to a certain extent on the order of the training data\. Reordering the data and rebuilding the model may lead to a different final cluster model\.
Requirements\. To train a TwoStep Cluster model, you need one or more fields with the role set to `Input`\. Fields with the role set to `Target`, `Both`, or `None` are ignored\. The TwoStep Cluster algorithm does not handle missing values\. Records with blanks for any of the input fields will be ignored when building the model\.
Strengths\. TwoStep Cluster can handle mixed field types and is able to handle large datasets efficiently\. It also has the ability to test several cluster solutions and choose the best, so you don't need to know how many clusters to ask for at the outset\. TwoStep Cluster can be set to automatically exclude outliers, or extremely unusual cases that can contaminate your results\.
<!-- </article "role="article" "> -->
|
B7E56BEBF29F9AA59A9ABC9E299F19613E5859DA | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/twostepAS.html?context=cdpaas&locale=en | TwoStep-AS cluster node (SPSS Modeler) | TwoStep-AS cluster node
TwoStep Cluster is an exploratory tool that is designed to reveal natural groupings (or clusters) within a data set that would otherwise not be apparent. The algorithm that is employed by this procedure has several desirable features that differentiate it from traditional clustering techniques.
* Handling of categorical and continuous variables. By assuming variables to be independent, a joint multinomial-normal distribution can be placed on categorical and continuous variables.
* Automatic selection of number of clusters. By comparing the values of a model-choice criterion across different clustering solutions, the procedure can automatically determine the optimal number of clusters.
* Scalability. By constructing a cluster feature (CF) tree that summarizes the records, the TwoStep algorithm can analyze large data files.
For example, retail and consumer product companies regularly apply clustering techniques to information that describes their customers' buying habits, gender, age, income level, and other attributes. These companies tailor their marketing and product development strategies to each consumer group to increase sales and build brand loyalty.
| # TwoStep\-AS cluster node #
TwoStep Cluster is an exploratory tool that is designed to reveal natural groupings (or clusters) within a data set that would otherwise not be apparent\. The algorithm that is employed by this procedure has several desirable features that differentiate it from traditional clustering techniques\.
<!-- <ul> -->
* Handling of categorical and continuous variables\. By assuming variables to be independent, a joint multinomial\-normal distribution can be placed on categorical and continuous variables\.
* Automatic selection of number of clusters\. By comparing the values of a model\-choice criterion across different clustering solutions, the procedure can automatically determine the optimal number of clusters\.
* Scalability\. By constructing a cluster feature (CF) tree that summarizes the records, the TwoStep algorithm can analyze large data files\.
<!-- </ul> -->
For example, retail and consumer product companies regularly apply clustering techniques to information that describes their customers' buying habits, gender, age, income level, and other attributes\. These companies tailor their marketing and product development strategies to each consumer group to increase sales and build brand loyalty\.
<!-- </article "role="article" "> -->
|
A967430DA16338281405CF73A802C233911B6A13 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/type.html?context=cdpaas&locale=en | Type node (SPSS Modeler) | Type node
You can specify field properties in a Type node.
The following main properties are available.
* Field. Specify value and field labels for data in watsonx.ai. For example, field metadata imported from a data asset can be viewed or modified here. Similarly, you can create new labels for fields and their values.
* Measure. This is the measurement level, used to describe characteristics of the data in a given field. If all the details of a field are known, it's called fully instantiated. Note: The measurement level of a field is different from its storage type, which indicates whether the data is stored as strings, integers, real numbers, dates, times, timestamps, or lists.
* Role. Used to tell modeling nodes whether fields will be Input (predictor fields) or Target (predicted fields) for a machine-learning process. Both and None are also available roles, along with Partition, which indicates a field used to partition records into separate samples for training, testing, and validation. The value Split specifies that separate models will be built for each possible value of the field.
* Value mode. Use this column to specify options for reading data values from the dataset, or use the Specify option to specify measurement levels and values.
* Values. With this column, you can specify options for reading data values from the data set, or specify measurement levels and values separately. You can also choose to pass fields without reading their values. You can't amend the cell in this column if the corresponding Field entry contains a list.
* Check. With this column, you can set options to ensure that field values conform to the specified values or ranges. You can't amend the cell in this column if the corresponding Field entry contains a list.
Click the Edit (gear) icon next to each row to open additional options.
Tip: Icons in the Type node properties quickly indicate the data type of each field, such as string, date, double integer, or hashtag.
Figure 1. New Type node icons

| # Type node #
You can specify field properties in a Type node\.
The following main properties are available\.
<!-- <ul> -->
* Field\. Specify value and field labels for data in watsonx\.ai\. For example, field metadata imported from a data asset can be viewed or modified here\. Similarly, you can create new labels for fields and their values\.
* Measure\. This is the measurement level, used to describe characteristics of the data in a given field\. If all the details of a field are known, it's called fully instantiated\. Note: The measurement level of a field is different from its storage type, which indicates whether the data is stored as strings, integers, real numbers, dates, times, timestamps, or lists\.
* Role\. Used to tell modeling nodes whether fields will be Input (predictor fields) or Target (predicted fields) for a machine\-learning process\. Both and None are also available roles, along with Partition, which indicates a field used to partition records into separate samples for training, testing, and validation\. The value Split specifies that separate models will be built for each possible value of the field\.
* Value mode\. Use this column to specify options for reading data values from the dataset, or use the Specify option to specify measurement levels and values\.
* Values\. With this column, you can specify options for reading data values from the data set, or specify measurement levels and values separately\. You can also choose to pass fields without reading their values\. You can't amend the cell in this column if the corresponding Field entry contains a list\.
* Check\. With this column, you can set options to ensure that field values conform to the specified values or ranges\. You can't amend the cell in this column if the corresponding Field entry contains a list\.
<!-- </ul> -->
Click the Edit (gear) icon next to each row to open additional options\.
Tip: Icons in the Type node properties quickly indicate the data type of each field, such as string, date, double integer, or hashtag\.
Figure 1\. New Type node icons

<!-- </article "role="article" "> -->
|
5F584AEED890D6EFB4C9FAF133A26BD9F9E4F219 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/type_checktype.html?context=cdpaas&locale=en | Checking type values (SPSS Modeler) | Checking type values
Turning on the Check option for each field examines all values in that field to determine whether they comply with the current type settings or the values that you've specified. This is useful for cleaning up datasets and reducing the size of a dataset within a single operation.
The Check column in the Type node determines what happens when a value outside of the type limits is discovered. To change the check settings for a field, use the drop-down list for that field in the Check column. To set the check settings for all fields, select the check box for the top-level Field column heading. Then use the top-level drop-down above the Check column.
The following check options are available:
None. Values will be passed through without checking. This is the default setting.
Nullify. Change values outside of the limits to the system null ($null$).
Coerce. Fields whose measurement levels are fully instantiated will be checked for values that fall outside the specified ranges. Unspecified values will be converted to a legal value for that measurement level using the following rules:
* For flags, any value other than the true and false value is converted to the false value
* For sets (nominal or ordinal), any unknown value is converted to the first member of the set's values
* Numbers greater than the upper limit of a range are replaced by the upper limit
* Numbers less than the lower limit of a range are replaced by the lower limit
* Null values in a range are given the midpoint value for that range
Discard. When illegal values are found, the entire record is discarded.
Warn. The number of illegal items is counted and reported in the flow properties dialog when all of the data has been read.
Abort. The first illegal value encountered terminates the running of the flow. The error is reported in the flow properties dialog.
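The Coerce rules listed above can be paraphrased as a small Python function; this is a sketch of the documented rules for illustration, with an invented field-specification format, not the product's code.

```python
def coerce(value, spec):
    """Coerce a single value into the legal range or set described by `spec`."""
    if spec["measure"] == "flag":
        # Anything other than the true value becomes the false value.
        return value if value == spec["true"] else spec["false"]
    if spec["measure"] in ("nominal", "ordinal"):
        # Unknown members are replaced by the first member of the set.
        return value if value in spec["values"] else spec["values"][0]
    if spec["measure"] == "continuous":
        low, high = spec["range"]
        if value is None:
            return (low + high) / 2           # nulls get the midpoint of the range
        return min(max(value, low), high)     # clamp to the lower/upper limits
    return value

age_spec = {"measure": "continuous", "range": (18, 65)}
print(coerce(70, age_spec), coerce(None, age_spec))   # 65 41.5
```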
| # Checking type values #
Turning on the Check option for each field examines all values in that field to determine whether they comply with the current type settings or the values that you've specified\. This is useful for cleaning up datasets and reducing the size of a dataset within a single operation\.
The Check column in the Type node determines what happens when a value outside of the type limits is discovered\. To change the check settings for a field, use the drop\-down list for that field in the Check column\. To set the check settings for all fields, select the check box for the top\-level Field column heading\. Then use the top\-level drop\-down above the Check column\.
The following check options are available:
None\. Values will be passed through without checking\. This is the default setting\.
Nullify\. Change values outside of the limits to the system null (`$null$`)\.
Coerce\. Fields whose measurement levels are fully instantiated will be checked for values that fall outside the specified ranges\. Unspecified values will be converted to a legal value for that measurement level using the following rules:
<!-- <ul> -->
* For flags, any value other than the true and false value is converted to the false value
* For sets (nominal or ordinal), any unknown value is converted to the first member of the set's values
* Numbers greater than the upper limit of a range are replaced by the upper limit
* Numbers less than the lower limit of a range are replaced by the lower limit
* Null values in a range are given the midpoint value for that range
<!-- </ul> -->
Discard\. When illegal values are found, the entire record is discarded\.
Warn\. The number of illegal items is counted and reported in the flow properties dialog when all of the data has been read\.
Abort\. The first illegal value encountered terminates the running of the flow\. The error is reported in the flow properties dialog\.
<!-- </article "role="article" "> -->
|
916C197A1A18FBE44382A30782B1FF7C13DBFEEC | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/type_convert.html?context=cdpaas&locale=en | Converting continuous data (SPSS Modeler) | Converting continuous data
Treating categorical data as continuous can have a serious impact on the quality of a model, especially if it's the target field (for example, producing a regression model rather than a binary model). To prevent this, you can convert integer ranges to categorical types such as Ordinal or Flag.
1. Double-click a Type node to open its properties. Expand the Type Operations section.
2. Specify a value for Set continuous integer field to ordinal if range less than or equal to.
3. Click Apply to convert the affected ranges.
4. If desired, you can also specify a value for Set categorical fields to None if they exceed this many values to automatically ignore large sets with many members.
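As a conceptual sketch only, the rule applied in step 2 might look like the following; the threshold semantics and the threshold value are assumptions for illustration, not the product's implementation.

```python
def should_be_ordinal(values, max_range=10):
    # Treat an integer field as ordinal if its range (max - min) is small enough.
    ints = [v for v in values if isinstance(v, int) and not isinstance(v, bool)]
    if not ints or len(ints) != len(values):
        return False  # not an integer field
    return (max(ints) - min(ints)) <= max_range

print(should_be_ordinal([1, 2, 3, 4, 5]))      # True  -> convert to Ordinal
print(should_be_ordinal(list(range(5000))))    # False -> keep as Continuous
```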
| # Converting continuous data #
Treating categorical data as continuous can have a serious impact on the quality of a model, especially if it's the target field (for example, producing a regression model rather than a binary model)\. To prevent this, you can convert integer ranges to categorical types such as `Ordinal` or `Flag`\.
<!-- <ol> -->
1. Double\-click a Type node to open its properties\. Expand the Type Operations section\.
2. Specify a value for Set continuous integer field to ordinal if range less than or equal to\.
3. Click Apply to convert the affected ranges\.
4. If desired, you can also specify a value for Set categorical fields to None if they exceed this many values to automatically ignore large sets with many members\.
<!-- </ol> -->
<!-- </article "role="article" "> -->
|
8F5EA4DC23CAEE3B6887B07AE9D319BFE5E39CA8 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/type_field_format.html?context=cdpaas&locale=en | Setting field format options (SPSS Modeler) | Setting field format options
With the FORMAT settings in the Type and Table nodes you can specify formatting options for current or unused fields.
Under each formatting type, click Add Columns and add one or more fields. The field name and format setting will be displayed for each field you select. Then click the gear icon to specify formatting options.
The following formatting options are available on a per-field basis:
Date format. Select a date format to use for date storage fields or when strings are interpreted as dates by CLEM date functions.
Time format. Select a time format to use for time storage fields or when strings are interpreted as times by CLEM time functions.
Number format. You can choose from standard (####.###), scientific (#.###E+##), or currency display formats ($###.##).
Decimal symbol. Select either a comma (,) or period (.) as the decimal separator.
Number grouping symbol. For number display formats, select the symbol used to group values (for example, the comma in 3,000.00). Options include none, period, comma, space, and locale-defined (in which case the default for the current locale is used).
Decimal places (standard, scientific, currency, export). For number display formats, specify the number of decimal places to use when displaying real numbers. This option is specified separately for each display format. Note that the Export decimal places setting only applies to flat file exports. The number of decimal places exported by the XML Export node is always 6.
Justify. Specifies how the values should be justified within the column. The default setting is Auto, which left-justifies symbolic values and right-justifies numeric values. You can override the default by selecting left, right, or center.
Column width. By default, column widths are automatically calculated based on the values of the field. You can specify a custom width, if needed.
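As a rough illustration of how the standard, scientific, and currency display formats combine decimal places and a grouping symbol, the following sketch uses ordinary Python string formatting. It is not SPSS Modeler code, and the exact output depends on the options you choose.

```python
value = 3000.0

standard   = f"{value:,.2f}"   # standard format, comma grouping, 2 decimal places -> "3,000.00"
scientific = f"{value:.3E}"    # scientific format, 3 decimal places               -> "3.000E+03"
currency   = f"${value:,.2f}"  # currency format                                   -> "$3,000.00"

# A period as grouping symbol and a comma as decimal symbol (used in some locales)
# can be mimicked by swapping the characters afterwards:
swapped = standard.replace(",", "_").replace(".", ",").replace("_", ".")
print(standard, scientific, currency, swapped)  # 3,000.00 3.000E+03 $3,000.00 3.000,00
```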
| # Setting field format options #
With the FORMAT settings in the Type and Table nodes you can specify formatting options for current or unused fields\.
Under each formatting type, click Add Columns and add one or more fields\. The field name and format setting will be displayed for each field you select\. Then click the gear icon to specify formatting options\.
The following formatting options are available on a per\-field basis:
Date format\. Select a date format to use for date storage fields or when strings are interpreted as dates by CLEM date functions\.
Time format\. Select a time format to use for time storage fields or when strings are interpreted as times by CLEM time functions\.
Number format\. You can choose from standard (`####.###`), scientific (`#.###E+##`), or currency display formats (`$###.##`)\.
Decimal symbol\. Select either a comma (`,`) or period (`.`) as the decimal separator\.
Number grouping symbol\. For number display formats, select the symbol used to group values (for example, the comma in `3,000.00`)\. Options include none, period, comma, space, and locale\-defined (in which case the default for the current locale is used)\.
Decimal places (standard, scientific, currency, export)\. For number display formats, specify the number of decimal places to use when displaying real numbers\. This option is specified separately for each display format\. Note that the Export decimal places setting only applies to flat file exports\. The number of decimal places exported by the XML Export node is always 6\.
Justify\. Specifies how the values should be justified within the column\. The default setting is Auto, which left\-justifies symbolic values and right\-justifies numeric values\. You can override the default by selecting left, right, or center\.
Column width\. By default, column widths are automatically calculated based on the values of the field\. You can specify a custom width, if needed\.
<!-- </article "role="article" "> -->
|
7292DE7C0036B9064A85D1DA77A860BD989EA638 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/type_fieldrole.html?context=cdpaas&locale=en | Setting the field role (SPSS Modeler) | Setting the field role
A field's role controls how it's used in model building—for example, whether a field is an input or target (the thing being predicted).
Note: The Partition, Frequency, and Record ID roles can each be applied to a single field only.
The following roles are available:
Input. The field is used as an input to machine learning (a predictor field).
Target. The field is used as an output or target for machine learning (one of the fields that the model will try to predict).
Both. The field is used as both an input and an output by the Apriori node. All other modeling nodes will ignore the field.
None. The field is ignored by machine learning. Fields whose measurement level is set to Typeless are automatically set to None in the Role column.
Partition. Indicates a field used to partition the data into separate samples for training, testing, and (optional) validation purposes. The field must be an instantiated set type with two or three possible values (as defined in the advanced settings by clicking the gear icon). The first value represents the training sample, the second represents the testing sample, and the third (if present) represents the validation sample. Any additional values are ignored, and flag fields can't be used. Note that to use the partition in an analysis, partitioning must be enabled in the node settings of the appropriate model-building or analysis node. Records with null values for the partition field are excluded from the analysis when partitioning is enabled. If you defined multiple partition fields in the flow, you must specify a single partition field in the node settings for each applicable modeling node. If a suitable field doesn't already exist in your data, you can create one using a Partition node or Derive node. See [Partition node](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/partition.html) for more information.
Split. (Nominal, ordinal, and flag fields only.) Specifies that a model is built for each possible value of the field.
Frequency. (Numeric fields only.) Setting this role enables the field value to be used as a frequency weighting factor for the record. This feature is supported by C&R Tree, CHAID, QUEST, and Linear nodes only; all other nodes ignore this role. Frequency weighting is enabled by means of the Use frequency weight option in the node settings of those modeling nodes that support the feature.
Record ID. The field is used as the unique record identifier. This feature is ignored by most nodes; however, it's supported by Linear models.
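To see what such a partition field amounts to conceptually, the following sketch assigns records to training, testing, and validation samples at random. The field name, sample labels, and proportions are assumptions for illustration, not product defaults.

```python
import random

random.seed(42)  # reproducible assignment

def assign_partition(weights=(0.6, 0.2, 0.2)):
    # First value = training, second = testing, third (optional) = validation.
    labels = ("1_Training", "2_Testing", "3_Validation")[:len(weights)]
    return random.choices(labels, weights=weights, k=1)[0]

records = [{"id": i} for i in range(10)]
for rec in records:
    rec["Partition"] = assign_partition()  # hypothetical partition field

print([r["Partition"] for r in records])
```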
| # Setting the field role #
A field's role controls how it's used in model building—for example, whether a field is an input or target (the thing being predicted)\.
Note: The Partition, Frequency, and Record ID roles can each be applied to a single field only\.
The following roles are available:
Input\. The field is used as an input to machine learning (a predictor field)\.
Target\. The field is used as an output or target for machine learning (one of the fields that the model will try to predict)\.
Both\. The field is used as both an input and an output by the Apriori node\. All other modeling nodes will ignore the field\.
None\. The field is ignored by machine learning\. Fields whose measurement level is set to Typeless are automatically set to None in the Role column\.
Partition\. Indicates a field used to partition the data into separate samples for training, testing, and (optional) validation purposes\. The field must be an instantiated set type with two or three possible values (as defined in the advanced settings by clicking the gear icon)\. The first value represents the training sample, the second represents the testing sample, and the third (if present) represents the validation sample\. Any additional values are ignored, and flag fields can't be used\. Note that to use the partition in an analysis, partitioning must be enabled in the node settings of the appropriate model\-building or analysis node\. Records with null values for the partition field are excluded from the analysis when partitioning is enabled\. If you defined multiple partition fields in the flow, you must specify a single partition field in the node settings for each applicable modeling node\. If a suitable field doesn't already exist in your data, you can create one using a Partition node or Derive node\. See [Partition node](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/partition.html) for more information\.
Split\. (Nominal, ordinal, and flag fields only\.) Specifies that a model is built for each possible value of the field\.
Frequency\. (Numeric fields only\.) Setting this role enables the field value to be used as a frequency weighting factor for the record\. This feature is supported by C&R Tree, CHAID, QUEST, and Linear nodes only; all other nodes ignore this role\. Frequency weighting is enabled by means of the Use frequency weight option in the node settings of those modeling nodes that support the feature\.
Record ID\. The field is used as the unique record identifier\. This feature is ignored by most nodes; however, it's supported by Linear models\.
<!-- </article "role="article" "> -->
|
B8C3B95FC688C347D679F81711781B29578CFC19 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/type_info.html?context=cdpaas&locale=en | Viewing and setting information about types (SPSS Modeler) | Viewing and setting information about types
From the Type node, you can specify field metadata and properties that are invaluable to modeling and other work.
These properties include:
* Specifying a usage type, such as range, set, ordered set, or flag, for each field in your data
* Setting options for handling missing values and system nulls
* Setting the role of a field for modeling purposes
* Specifying values for a field and options used to automatically read values from your data
* Specifying value labels
| # Viewing and setting information about types #
From the Type node, you can specify field metadata and properties that are invaluable to modeling and other work\.
These properties include:
<!-- <ul> -->
* Specifying a usage type, such as range, set, ordered set, or flag, for each field in your data
* Setting options for handling missing values and system nulls
* Setting the role of a field for modeling purposes
* Specifying values for a field and options used to automatically read values from your data
* Specifying value labels
<!-- </ul> -->
<!-- </article "role="article" "> -->
|
9F878A46B28C19B951157A5F31BB7A1A9920A89E | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/type_instantiation.html?context=cdpaas&locale=en | What is instantiation? (SPSS Modeler) | What is instantiation?
Instantiation is the process of reading or specifying information, such as storage type and values for a data field. To optimize system resources, instantiating is a user-directed process—you tell the software to read values by running data through a Type node.
* Data with unknown types is also referred to as uninstantiated. Data whose storage type and values are unknown is displayed in the Measure column of the Type node settings as Typeless.
* When you have some information about a field's storage, such as string or numeric, the data is called partially instantiated. Categorical or Continuous are partially instantiated measurement levels. For example, Categorical specifies that the field is symbolic, but you don't know whether it's nominal, ordinal, or flag.
* When all of the details about a type are known, including the values, a fully instantiated measurement level—nominal, ordinal, flag, or continuous—is displayed in this column. Note that the continuous type is used for both partially instantiated and fully instantiated data fields. Continuous data can be either integers or real numbers.
When a data flow with a Type node runs, uninstantiated types immediately become partially instantiated, based on the initial data values. After all of the data passes through the node, all data becomes fully instantiated unless values were set to Pass. If the flow run is interrupted, the data will remain partially instantiated. After the Types settings are instantiated, the values of a field are static at that point in the flow. This means that any upstream changes will not affect the values of a particular field, even if you rerun the flow. To change or update the values based on new data or added manipulations, you need to edit them in the Types settings or set the value for a field to Read or Extend.
| # What is instantiation? #
Instantiation is the process of reading or specifying information, such as storage type and values for a data field\. To optimize system resources, instantiating is a user\-directed process—you tell the software to read values by running data through a Type node\.
<!-- <ul> -->
* Data with unknown types is also referred to as uninstantiated\. Data whose storage type and values are unknown is displayed in the Measure column of the Type node settings as Typeless\.
* When you have some information about a field's storage, such as string or numeric, the data is called partially instantiated\. Categorical or Continuous are partially instantiated measurement levels\. For example, Categorical specifies that the field is symbolic, but you don't know whether it's nominal, ordinal, or flag\.
* When all of the details about a type are known, including the values, a fully instantiated measurement level—nominal, ordinal, flag, or continuous—is displayed in this column\. Note that the continuous type is used for both partially instantiated and fully instantiated data fields\. Continuous data can be either integers or real numbers\.
<!-- </ul> -->
When a data flow with a Type node runs, uninstantiated types immediately become partially instantiated, based on the initial data values\. After all of the data passes through the node, all data becomes fully instantiated unless values were set to Pass\. If the flow run is interrupted, the data will remain partially instantiated\. After the Types settings are instantiated, the values of a field are static at that point in the flow\. This means that any upstream changes will not affect the values of a particular field, even if you rerun the flow\. To change or update the values based on new data or added manipulations, you need to edit them in the Types settings or set the value for a field to Read or Extend\.
<!-- </article "role="article" "> -->
|
21DB0146B79B8256259507C62876E01ADA143BD6 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/type_levels.html?context=cdpaas&locale=en | Measurement levels (SPSS Modeler) | Measurement levels
The measure, also referred to as measurement level, describes the usage of data fields in SPSS Modeler.
You can specify the Measure in the node properties of an import node or a Type node. For example, you may want to set the measure for an integer field with values of 1 and 0 to Flag. This usually indicates that 1 = True and 0 = False.
Storage versus measurement. Note that the measurement level of a field is different from its storage type, which indicates whether data is stored as a string, integer, real number, date, time, or timestamp. While you can modify data types at any point in a flow by using a Type node, storage must be determined at the source when reading data in (although you can subsequently change it using a conversion function).
The following measurement levels are available:
* Default. Data whose storage type and values are unknown (for example, because they haven't yet been read) are displayed as Default.
* Continuous. Used to describe numeric values, such as a range of 0–100 or 0.75–1.25. A continuous value can be an integer, real number, or date/time.
* Categorical. Used for string values when an exact number of distinct values is unknown. This is an uninstantiated data type, meaning that all possible information about the storage and usage of the data is not yet known. After data is read, the measurement level will be Flag, Nominal, or Typeless, depending on the maximum number of members for nominal fields specified.
* Flag. Used for data with two distinct values that indicate the presence or absence of a trait, such as true and false, Yes and No, or 0 and 1. The values used may vary, but one must always be designated as the "true" value, and the other as the "false" value. Data may be represented as text, integer, real number, date, time, or timestamp.
* Nominal. Used to describe data with multiple distinct values, each treated as a member of a set, such as small/medium/large. Nominal data can have any storage—numeric, string, or date/time. Note that setting the measurement level to Nominal doesn't automatically change the values to string storage.
* Ordinal. Used to describe data with multiple distinct values that have an inherent order. For example, salary categories or satisfaction rankings can be typed as ordinal data. The order is defined by the natural sort order of the data elements. For example, 1, 3, 5 is the default sort order for a set of integers, while HIGH, LOW, NORMAL (ascending alphabetically) is the order for a set of strings. The ordinal measurement level enables you to define a set of categorical data as ordinal data for the purposes of visualization, model building, and export to other applications (such as IBM SPSS Statistics) that recognize ordinal data as a distinct type. You can use an ordinal field anywhere that a nominal field can be used. Additionally, fields of any storage type (real, integer, string, date, time, and so on) can be defined as ordinal.
* Typeless. Used for data that doesn't conform to any of the Default, Continuous, Categorical, Flag, Nominal, or Ordinal types, for fields with a single value, or for nominal data where the set has more members than the defined maximum. Typeless is also useful for cases in which the measurement level would otherwise be a set with many members (such as an account number). When you select Typeless for a field, the role is automatically set to None, with Record ID as the only alternative. The default maximum size for sets is 250 unique values.
* Collection. Used to identify non-geospatial data that is recorded in a list. A collection is effectively a list field of zero depth, where the elements in that list have one of the other measurement levels.
* Geospatial. Used with the List storage type to identify geospatial data. Lists can be either List of Integer or List of Real fields with a list depth that's between zero and two, inclusive.
You can manually specify measurement levels, or you can allow the software to read the data and determine the measurement level based on the values it reads. Alternatively, where you have several continuous data fields that should be treated as categorical data, you can choose an option to convert them. See [Converting continuous data](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/type_convert.html).
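The sketch below is a conceptual illustration only of how a measurement level might be inferred from the values that are read; it is not the actual auto-typing algorithm. The 250-value ceiling echoes the default maximum set size mentioned above.

```python
def infer_measure(values, max_set_size=250):
    """Conceptual illustration only; not the actual SPSS Modeler auto-typing logic."""
    distinct = set(v for v in values if v is not None)
    numeric = all(isinstance(v, (int, float)) for v in distinct)
    if len(distinct) <= 1:
        return "Typeless"
    if len(distinct) == 2:
        return "Flag"
    if numeric:
        return "Continuous"
    if len(distinct) <= max_set_size:
        return "Nominal"
    return "Typeless"

print(infer_measure([0, 1, 1, 0]))                  # Flag
print(infer_measure([1.2, 3.4, 5.6, 7.8]))          # Continuous
print(infer_measure(["small", "medium", "large"]))  # Nominal
```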
| # Measurement levels #
The measure, also referred to as measurement level, describes the usage of data fields in SPSS Modeler\.
You can specify the Measure in the node properties of an import node or a Type node\. For example, you may want to set the measure for an integer field with values of `1` and `0` to Flag\. This usually indicates that `1 = True` and `0 = False`\.
Storage versus measurement\. Note that the measurement level of a field is different from its storage type, which indicates whether data is stored as a string, integer, real number, date, time, or timestamp\. While you can modify data types at any point in a flow by using a Type node, storage must be determined at the source when reading data in (although you can subsequently change it using a conversion function)\.
The following measurement levels are available:
<!-- <ul> -->
* Default\. Data whose storage type and values are unknown (for example, because they haven't yet been read) are displayed as Default\.
* Continuous\. Used to describe numeric values, such as a range of 0–100 or 0\.75–1\.25\. A continuous value can be an integer, real number, or date/time\.
* Categorical\. Used for string values when an exact number of distinct values is unknown\. This is an uninstantiated data type, meaning that all possible information about the storage and usage of the data is not yet known\. After data is read, the measurement level will be Flag, Nominal, or Typeless, depending on the maximum number of members for nominal fields specified\.
* Flag\. Used for data with two distinct values that indicate the presence or absence of a trait, such as `true` and `false`, `Yes` and `No`, or `0` and `1`\. The values used may vary, but one must always be designated as the "true" value, and the other as the "false" value\. Data may be represented as text, integer, real number, date, time, or timestamp\.
* Nominal\. Used to describe data with multiple distinct values, each treated as a member of a set, such as `small/medium/large`\. Nominal data can have any storage—numeric, string, or date/time\. Note that setting the measurement level to Nominal doesn't automatically change the values to string storage\.
* Ordinal\. Used to describe data with multiple distinct values that have an inherent order\. For example, salary categories or satisfaction rankings can be typed as ordinal data\. The order is defined by the natural sort order of the data elements\. For example, `1, 3, 5` is the default sort order for a set of integers, while `HIGH, LOW, NORMAL` (ascending alphabetically) is the order for a set of strings\. The ordinal measurement level enables you to define a set of categorical data as ordinal data for the purposes of visualization, model building, and export to other applications (such as IBM SPSS Statistics) that recognize ordinal data as a distinct type\. You can use an ordinal field anywhere that a nominal field can be used\. Additionally, fields of any storage type (real, integer, string, date, time, and so on) can be defined as ordinal\.
* Typeless\. Used for data that doesn't conform to any of the Default, Continuous, Categorical, Flag, Nominal, or Ordinal types, for fields with a single value, or for nominal data where the set has more members than the defined maximum\. Typeless is also useful for cases in which the measurement level would otherwise be a set with many members (such as an account number)\. When you select Typeless for a field, the role is automatically set to None, with Record ID as the only alternative\. The default maximum size for sets is 250 unique values\.
* Collection\. Used to identify non\-geospatial data that is recorded in a list\. A collection is effectively a list field of zero depth, where the elements in that list have one of the other measurement levels\.
* Geospatial\. Used with the List storage type to identify geospatial data\. Lists can be either List of Integer or List of Real fields with a list depth that's between zero and two, inclusive\.
<!-- </ul> -->
You can manually specify measurement levels, or you can allow the software to read the data and determine the measurement level based on the values it reads\. Alternatively, where you have several continuous data fields that should be treated as categorical data, you can choose an option to convert them\. See [Converting continuous data](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/type_convert.html)\.
<!-- </article "role="article" "> -->
|
E0F6FBCA52D2EE44AC2E0795FA11FB53E3054C47 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/type_levels_geo.html?context=cdpaas&locale=en | Geospatial measurement sublevels (SPSS Modeler) | Geospatial measurement sublevels
The Geospatial measurement level, which is used with the List storage type, has six sublevels that are used to identify different types of geospatial data.
* Point. Identifies a specific location (for example, the center of a city).
* Polygon. A series of points that identifies the single boundary of a region and its location (for example, a county).
* LineString. Also referred to as a Polyline or just a Line, a LineString is a series of points that identifies the route of a line. For example, a LineString might be a fixed item, such as a road, river, or railway; or the track of something that moves, such as an aircraft's flight path or a ship's voyage.
* MultiPoint. Used when each row in your data contains multiple points per region. For example, if each row represents a city street, the multiple points for each street can be used to identify every street lamp.
* MultiPolygon. Used when each row in your data contains several polygons. For example, if each row represents the outline of a country, the US can be recorded as several polygons to identify the different areas such as the mainland, Alaska, and Hawaii.
* MultiLineString. Used when each row in your data contains several lines. Because lines cannot branch, you can use a MultiLineString to identify a group of lines (for example, data such as the navigable waterways or the railway network in each country).
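The sketch below shows what each sublevel might look like as plain coordinate lists of the corresponding depth; the coordinates are invented purely for illustration.

```python
# Invented coordinates, shown only to illustrate the list depths involved.
point           = [13.4, 52.5]                                        # depth 0: a single [x, y] pair
linestring      = [[13.4, 52.5], [13.5, 52.6], [13.6, 52.7]]          # depth 1: a list of points
polygon         = [[0.0, 0.0], [4.0, 0.0], [4.0, 4.0], [0.0, 0.0]]    # depth 1: boundary points
multipoint      = [[1.0, 1.0], [2.0, 2.0], [3.0, 1.5]]                # depth 1: several points per row
multipolygon    = [polygon,
                   [[10.0, 10.0], [12.0, 10.0], [11.0, 12.0], [10.0, 10.0]]]  # depth 2
multilinestring = [linestring, [[20.0, 20.0], [21.0, 21.0]]]          # depth 2: several lines per row

for name, shape in [("Point", point), ("Polygon", polygon), ("MultiPolygon", multipolygon)]:
    print(name, shape)
```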
| # Geospatial measurement sublevels #
The Geospatial measurement level, which is used with the List storage type, has six sublevels that are used to identify different types of geospatial data\.
<!-- <ul> -->
* Point\. Identifies a specific location (for example, the center of a city)\.
* Polygon\. A series of points that identifies the single boundary of a region and its location (for example, a county)\.
* LineString\. Also referred to as a Polyline or just a Line, a LineString is a series of points that identifies the route of a line\. For example, a LineString might be a fixed item, such as a road, river, or railway; or the track of something that moves, such as an aircraft's flight path or a ship's voyage\.
* MultiPoint\. Used when each row in your data contains multiple points per region\. For example, if each row represents a city street, the multiple points for each street can be used to identify every street lamp\.
* MultiPolygon\. Used when each row in your data contains several polygons\. For example, if each row represents the outline of a country, the US can be recorded as several polygons to identify the different areas such as the mainland, Alaska, and Hawaii\.
* MultiLineString\. Used when each row in your data contains several lines\. Because lines cannot branch, you can use a MultiLineString to identify a group of lines (for example, data such as the navigable waterways or the railway network in each country)\.
<!-- </ul> -->
<!-- </article "role="article" "> -->
|
FD903F9A58632DF14BE5C98EEDA32E1FC2F46F4B | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/type_missing.html?context=cdpaas&locale=en | Defining missing values (SPSS Modeler) | Defining missing values
In the Type node settings, select the desired field in the table and then click the gear icon at the end of its row. Missing values settings are available in the window that appears.
Select Define missing values to define missing value handling for this field. Here you can define explicit values to be considered as missing values for this field; alternatively, this can be accomplished by means of a downstream Filler node.
| # Defining missing values #
In the Type node settings, select the desired field in the table and then click the gear icon at the end of its row\. Missing values settings are available in the window that appears\.
Select Define missing values to define missing value handling for this field\. Here you can define explicit values to be considered as missing values for this field; alternatively, this can be accomplished by means of a downstream Filler node\.
<!-- </article "role="article" "> -->
|
063D5E4C6E2094F964752D376B5FF49FFD47433B | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/type_values.html?context=cdpaas&locale=en | Data values (SPSS Modeler) | Data values
Using the Value mode column in the Type node settings, you can read values automatically from the data, or you can specify measures and values.
The options available in the Value mode drop-down provide instructions for auto-typing, as shown in the following table.
Table 1. Instructions for auto-typing
Option    Function
Read      Data is read when the node runs.
Extend    Data is read and appended to the current data (if any exists).
Pass      No data is read.
Current   Keep current data values.
Specify   You can click the gear icon at the end of the row to specify values.
Running a Type node or clicking Read Values auto-types and reads values from your data source based on your selection. You can also specify these values manually by using the Specify option and clicking the gear icon at the end of a row.
After you make changes for fields in the Type node, you can reset value information using the following buttons:
* Using the Clear all values button, you can clear changes to field values made in this node (non-inherited values) and reread values from upstream operations. This option is useful for resetting changes that you may have made for specific fields upstream.
* Using the Clear values button, you can reset values for all fields read into the node. This option effectively sets the Value mode column to Read for all fields. This option is useful for resetting values for all fields and rereading values and measurement levels from upstream operations.
| # Data values #
Using the Value mode column in the Type node settings, you can read values automatically from the data, or you can specify measures and values\.
The options available in the Value mode drop\-down provide instructions for auto\-typing, as shown in the following table\.
<!-- <table "summary="" class="defaultstyle" "> -->
Table 1\. Instructions for auto\-typing
| Option | Function |
| --------- | --------------------------------------------------------------------- |
| `Read` | Data is read when the node runs\. |
| `Extend` | Data is read and appended to the current data (if any exists)\. |
| `Pass` | No data is read\. |
| `Current` | Keep current data values\. |
| `Specify` | You can click the gear icon at the end of the row to specify values\. |
<!-- </table "summary="" class="defaultstyle" "> -->
Running a Type node or clicking Read Values auto\-types and reads values from your data source based on your selection\. You can also specify these values manually by using the Specify option and clicking the gear icon at the end of a row\.
After you make changes for fields in the Type node, you can reset value information using the following buttons:
<!-- <ul> -->
* Using the Clear all values button, you can clear changes to field values made in this node (non\-inherited values) and reread values from upstream operations\. This option is useful for resetting changes that you may have made for specific fields upstream\.
* Using the Clear values button, you can reset values for all fields read into the node\. This option effectively sets the Value mode column to `Read` for all fields\. This option is useful for resetting values for all fields and rereading values and measurement levels from upstream operations\.
<!-- </ul> -->
<!-- </article "role="article" "> -->
|
98AC4398E3EA902007D99E5BDB0686AEF04A4DAA | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/type_values_collection.html?context=cdpaas&locale=en | Specifying values for collection data (SPSS Modeler) | Specifying values for collection data
Collection fields display non-geospatial data that's in a list.
The only item you can set for the Collection measurement level is the List measure. By default, this measure is set to Typeless, but you can select another value to set the measurement level of the elements within the list. You can choose one of the following options:
* Typeless
* Continuous
* Nominal
* Ordinal
* Flag
| # Specifying values for collection data #
Collection fields display non\-geospatial data that's in a list\.
The only item you can set for the Collection measurement level is the List measure\. By default, this measure is set to Typeless, but you can select another value to set the measurement level of the elements within the list\. You can choose one of the following options:
<!-- <ul> -->
* Typeless
* Continuous
* Nominal
* Ordinal
* Flag
<!-- </ul> -->
<!-- </article "role="article" "> -->
|
A82CB1ABABCF08E9FD361F13050D47850AF8768A | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/type_values_continuous.html?context=cdpaas&locale=en | Specifying values and labels for continuous data (SPSS Modeler) | Specifying values and labels for continuous data
The Continuous measurement level is for numeric fields.
There are three storage types for continuous data:
* Real
* Integer
* Date/Time
The same settings are used to edit all continuous fields. The storage type is displayed for reference only. Select the desired field in the Type node settings and then click the gear icon at the end of its row.
| # Specifying values and labels for continuous data #
The Continuous measurement level is for numeric fields\.
There are three storage types for continuous data:
<!-- <ul> -->
* Real
* Integer
* Date/Time
<!-- </ul> -->
The same settings are used to edit all continuous fields\. The storage type is displayed for reference only\. Select the desired field in the Type node settings and then click the gear icon at the end of its row\.
<!-- </article "role="article" "> -->
|
077AFC6B667F6747FF066182E2F04AF486C13368 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/type_values_flag.html?context=cdpaas&locale=en | Specifying values for a flag (SPSS Modeler) | Specifying values for a flag
Use flag fields to display data that has two distinct values. The storage types for flags can be string, integer, real number, or date/time.
True. Specify a flag value for the field when the condition is met.
False. Specify a flag value for the field when the condition is not met.
Labels. Specify labels for each value in the flag field. These labels appear in a variety of locations, such as graphs, tables, output, and model browsers.
| # Specifying values for a flag #
Use flag fields to display data that has two distinct values\. The storage types for flags can be string, integer, real number, or date/time\.
True\. Specify a flag value for the field when the condition is met\.
False\. Specify a flag value for the field when the condition is not met\.
Labels\. Specify labels for each value in the flag field\. These labels appear in a variety of locations, such as graphs, tables, output, and model browsers\.
<!-- </article "role="article" "> -->
|
24D2987869B1C8C34EFA1204903A7A8F3E35D459 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/type_values_geo.html?context=cdpaas&locale=en | Specifying values for geospatial data (SPSS Modeler) | Specifying values for geospatial data
Geospatial fields display geospatial data that's in a list. For the Geospatial measurement level, you can use various options to set the measurement level of the elements within the list.
Type. Select the measurement sublevel of the geospatial field. The available sublevels are determined by the depth of the list field. The defaults are: Point (zero depth), LineString (depth of one), and Polygon (depth of one).
For more information about sublevels, see [Geospatial measurement sublevels](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/type_levels_geo.html).
Coordinate system. This option is only available if you changed the measurement level to Geospatial from a non-geospatial level. To apply a coordinate system to your geospatial data, select this option. To use a different coordinate system, click Change.
| # Specifying values for geospatial data #
Geospatial fields display geospatial data that's in a list\. For the Geospatial measurement level, you can use various options to set the measurement level of the elements within the list\.
Type\. Select the measurement sublevel of the geospatial field\. The available sublevels are determined by the depth of the list field\. The defaults are: Point (zero depth), LineString (depth of one), and Polygon (depth of one)\.
For more information about sublevels, see [Geospatial measurement sublevels](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/type_levels_geo.html)\.
Coordinate system\. This option is only available if you changed the measurement level to Geospatial from a non\-geospatial level\. To apply a coordinate system to your geospatial data, select this option\. To use a different coordinate system, click Change\.
<!-- </article "role="article" "> -->
|
2C991135B30B24A268FC9D847E3F43522543A96B | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/type_values_nominal.html?context=cdpaas&locale=en | Specifying values and labels for nominal and ordinal data (SPSS Modeler) | Specifying values and labels for nominal and ordinal data
Nominal (set) and ordinal (ordered set) measurement levels indicate that the data values are used discretely as a member of the set. The storage types for a set can be string, integer, real number, or date/time.
The following controls are unique to nominal and ordinal fields. You can use them to specify values and labels. Select the desired field in the Type node settings and then click the gear icon at the end of its row.
Values and Labels. You can specify values based on your knowledge of the current field. You can enter expected values for the field and check the dataset's conformity to these values using the Check options. You can also specify labels for each value in the set. These labels appear in a variety of locations, such as graphs, tables, output, and model browsers.
| # Specifying values and labels for nominal and ordinal data #
Nominal (set) and ordinal (ordered set) measurement levels indicate that the data values are used discretely as a member of the set\. The storage types for a set can be string, integer, real number, or date/time\.
The following controls are unique to nominal and ordinal fields\. You can use them to specify values and labels\. Select the desired field in the Type node settings and then click the gear icon at the end of its row\.
Values and Labels\. You can specify values based on your knowledge of the current field\. You can enter expected values for the field and check the dataset's conformity to these values using the Check options\. You can also specify labels for each value in the set\. These labels appear in a variety of locations, such as graphs, tables, output, and model browsers\.
<!-- </article "role="article" "> -->
|
C9857AFEF4C7E7C2AD0B764277B90A2BCE51ADC8 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/type_values_using.html?context=cdpaas&locale=en | Setting options for values (SPSS Modeler) | Setting options for values
The Value mode column under the Type node settings displays a drop-down list of predefined values. Choosing the Specify option on this list and then clicking the gear icon opens a new screen where you can set options for reading, specifying, labeling, and handling values for the selected field.
Many of the controls are common to all types of data. These common controls are discussed here.
Measure. Displays the currently selected measurement level. You can change this setting to reflect the way that you intend to use data. For instance, if a field called day_of_week contains numbers that represent individual days, you might want to change this to nominal data in order to create a distribution node that examines each category individually.
Role. Used to tell modeling nodes whether fields will be Input (predictor fields) or Target (predicted fields) for a machine-learning process. Other roles are also available, such as Both, None, Partition, Split, Frequency, or Record ID.
Value mode. Select a mode to determine values for the selected field. Choices for reading values include the following:
* Read. Select to read values when the node runs.
* Pass. Select not to read data for the current field.
* Specify. Options here are used to specify values and labels for the selected field. Used with value checking, use this option to specify values that are based on your knowledge of the current field. This option activates unique controls for each type of field. You can't specify values or labels for a field whose measurement level is Typeless.
* Extend. Select to append the current data with the values that you enter here. For example, if field_1 has a range from (0,10) and you enter a range of values from (8,16), the range is extended by adding the 16 without removing the original minimum. The new range would be (0,16).
* Current. Select to keep the current data values.
Value Labels (Add/Edit Labels). In this section you can enter custom labels for each value of the selected field.
Max list length. Only available for data with a measurement level of either Geospatial or Collection. Set the maximum length of the list by specifying the number of elements the list can contain.
Max string length. Only available for typeless data. Use this field when you're generating SQL to create a table. Enter the value of the largest string in your data; this generates a column in the table that's big enough for the string. If the string length value is not available, a default string size is used that may not be appropriate for the data (for example, if the value is too small, errors can occur when writing data to the table; too large a value could adversely affect performance).
Check. Select a method of coercing values to conform to the specified continuous, flag, or nominal values. This option corresponds to the Check column in the main Type node settings, and a selection made here will override those in the main settings. Used with the options for specifying values and labels, value checking allows you to conform values in the data with expected values. For example, if you specify values as 1, 0 and then use the Discard option here, you can discard all records with values other than 1 or 0.
Define missing values. Select to activate the following controls you can use to declare missing values or blanks in your data.
* Missing values. Use this field to define specific values (such as 99 or 0) as blanks. The value should be appropriate for the storage type of the field.
* Range. Used to specify a range of missing values (such as ages 1–17 or greater than 65). If a bound value is blank, then the range is unbounded. For example, if you specify a lower bound of 100 with no upper bound, then all values greater than or equal to 100 are defined as missing. The bound values are inclusive. For example, a range with a lower bound of 5 and an upper bound of 10 includes 5 and 10 in the range definition. You can define a missing value range for any storage type, including date/time and string (in which case the alphabetic sort order is used to determine whether a value is within the range).
* Null/White space. You can also specify system nulls (displayed in the data as $null$) and white space (string values with no visible characters) as blanks. Note that the Type node also treats empty strings as white space for purposes of analysis, although they are stored differently internally and may be handled differently in certain cases.
Note: To code blanks as undefined or $null$, use the Filler node.
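As a conceptual sketch only (not SPSS Modeler code), the checks described above, covering explicit blank values, an inclusive and possibly unbounded range, system null, and white space, could be combined like this:

```python
def is_missing(value, blanks=(99, 0), low=100, high=None,
               treat_null=True, treat_whitespace=True):
    # Explicit values declared as blanks (for example, 99 or 0).
    if value in blanks:
        return True
    # System null ($null$), represented here as Python None.
    if treat_null and value is None:
        return True
    # White space: string values with no visible characters.
    if treat_whitespace and isinstance(value, str) and value.strip() == "":
        return True
    # Inclusive range; a blank bound leaves that side unbounded
    # (low=100, high=None means "100 or greater" is missing).
    if isinstance(value, (int, float)) and not isinstance(value, bool):
        if (low is None or value >= low) and (high is None or value <= high):
            return True
    return False

print(is_missing(99))     # True  (explicit blank value)
print(is_missing("   "))  # True  (white space)
print(is_missing(120))    # True  (within the unbounded range >= 100)
print(is_missing(42))     # False
```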
| # Setting options for values #
The Value mode column under the Type node settings displays a drop\-down list of predefined values\. Choosing the Specify option on this list and then clicking the gear icon opens a new screen where you can set options for reading, specifying, labeling, and handling values for the selected field\.
Many of the controls are common to all types of data\. These common controls are discussed here\.
Measure\. Displays the currently selected measurement level\. You can change this setting to reflect the way that you intend to use data\. For instance, if a field called `day_of_week` contains numbers that represent individual days, you might want to change this to nominal data in order to create a distribution node that examines each category individually\.
Role\. Used to tell modeling nodes whether fields will be Input (predictor fields) or Target (predicted fields) for a machine\-learning process\. Other roles are also available, such as Both, None, Partition, Split, Frequency, or Record ID\.
Value mode\. Select a mode to determine values for the selected field\. Choices for reading values include the following:
<!-- <ul> -->
* Read\. Select to read values when the node runs\.
* Pass\. Select not to read data for the current field\.
* Specify\. Options here are used to specify values and labels for the selected field\. Used with value checking, use this option to specify values that are based on your knowledge of the current field\. This option activates unique controls for each type of field\. You can't specify values or labels for a field whose measurement level is Typeless\.
* Extend\. Select to append the current data with the values that you enter here\. For example, if field\_1 has a range from `(0,10)` and you enter a range of values from `(8,16)`, the range is extended by adding the `16` without removing the original minimum\. The new range would be `(0,16)`\.
* Current\. Select to keep the current data values\.
<!-- </ul> -->
Value Labels (Add/Edit Labels)\. In this section you can enter custom labels for each value of the selected field\.
Max list length\. Only available for data with a measurement level of either Geospatial or Collection\. Set the maximum length of the list by specifying the number of elements the list can contain\.
Max string length\. Only available for typeless data\. Use this field when you're generating SQL to create a table\. Enter the value of the largest string in your data; this generates a column in the table that's big enough for the string\. If the string length value is not available, a default string size is used that may not be appropriate for the data (for example, if the value is too small, errors can occur when writing data to the table; too large a value could adversely affect performance)\.
Check\. Select a method of coercing values to conform to the specified continuous, flag, or nominal values\. This option corresponds to the Check column in the main Type node settings, and a selection made here will override those in the main settings\. Used with the options for specifying values and labels, value checking allows you to conform values in the data with expected values\. For example, if you specify values as `1, 0` and then use the Discard option here, you can discard all records with values other than `1` or `0`\.
Define missing values\. Select to activate the following controls you can use to declare missing values or blanks in your data\.
<!-- <ul> -->
* Missing values\. Use this field to define specific values (such as `99` or `0`) as blanks\. The value should be appropriate for the storage type of the field\.
* Range\. Used to specify a range of missing values (such as ages `1–17` or greater than `65`)\. If a bound value is blank, then the range is unbounded\. For example, if you specify a lower bound of `100` with no upper bound, then all values greater than or equal to `100` are defined as missing\. The bound values are inclusive\. For example, a range with a lower bound of `5` and an upper bound of `10` includes `5` and `10` in the range definition\. You can define a missing value range for any storage type, including date/time and string (in which case the alphabetic sort order is used to determine whether a value is within the range)\.
* Null/White space\. You can also specify system nulls (displayed in the data as `$null$`) and white space (string values with no visible characters) as blanks\. Note that the Type node also treats empty strings as white space for purposes of analysis, although they are stored differently internally and may be handled differently in certain cases\.
<!-- </ul> -->
Note: To code blanks as undefined or `$null$`, use the Filler node\.
<!-- </article "role="article" "> -->
|
74706148818BD2ACE30029492DD8AD7D47283EDC | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/userinput.html?context=cdpaas&locale=en | User Input node (SPSS Modeler) | User Input node
The User Input node provides an easy way for you to create synthetic data--either from scratch or by altering existing data. This is useful, for example, when you want to create a test dataset for modeling.
| # User Input node #
The User Input node provides an easy way for you to create synthetic data\-\-either from scratch or by altering existing data\. This is useful, for example, when you want to create a test dataset for modeling\.
<!-- </article "role="article" "> -->
|
5B3FB712903B0D1044610C93E6FCDE6A41BE1CF6 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/web.html?context=cdpaas&locale=en | Web node (SPSS Modeler) | Web node
Web nodes show the strength of relationships between values of two or more symbolic fields. The graph displays connections using varying types of lines to indicate connection strength. You can use a Web node, for example, to explore the relationship between the purchase of various items at an e-commerce site or a traditional retail outlet.
Figure 1. Web graph showing relationships between the purchase of grocery items

| # Web node #
Web nodes show the strength of relationships between values of two or more symbolic fields\. The graph displays connections using varying types of lines to indicate connection strength\. You can use a Web node, for example, to explore the relationship between the purchase of various items at an e\-commerce site or a traditional retail outlet\.
Figure 1\. Web graph showing relationships between the purchase of grocery items

<!-- </article "role="article" "> -->
|
114EBF33612531C5020FD739010049E5126E0E5B | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/xgboostas.html?context=cdpaas&locale=en | XGBoost-AS node (SPSS Modeler) | XGBoost-AS node
XGBoost© is an advanced implementation of a gradient boosting algorithm. Boosting algorithms iteratively learn weak classifiers and then add them to a final strong classifier. XGBoost is very flexible and provides many parameters that can be overwhelming to most users, so the XGBoost-AS node in Watson Studio exposes the core features and commonly used parameters. The XGBoost-AS node is implemented in Spark.
For more information about boosting algorithms, see the [XGBoost Tutorials](http://xgboost.readthedocs.io/en/latest/tutorials/index.html). ^1^
Note that the XGBoost cross-validation function is not supported in Watson Studio. You can use the Partition node for this functionality. Also note that XGBoost in Watson Studio performs one-hot encoding automatically for categorical variables.
Notes:
* On Mac, version 10.12.3 or higher is required for building XGBoost-AS models.
* XGBoost isn't supported on IBM POWER.
^1^ "XGBoost Tutorials." Scalable and Flexible Gradient Boosting. Web. © 2015-2016 DMLC.
| # XGBoost\-AS node #
XGBoost© is an advanced implementation of a gradient boosting algorithm\. Boosting algorithms iteratively learn weak classifiers and then add them to a final strong classifier\. XGBoost is very flexible and provides many parameters that can be overwhelming to most users, so the XGBoost\-AS node in Watson Studio exposes the core features and commonly used parameters\. The XGBoost\-AS node is implemented in Spark\.
For more information about boosting algorithms, see the [XGBoost Tutorials](http://xgboost.readthedocs.io/en/latest/tutorials/index.html)\. ^1^
Note that the XGBoost cross\-validation function is not supported in Watson Studio\. You can use the Partition node for this functionality\. Also note that XGBoost in Watson Studio performs one\-hot encoding automatically for categorical variables\.
Notes:
<!-- <ul> -->
* On Mac, version 10\.12\.3 or higher is required for building XGBoost\-AS models\.
* XGBoost isn't supported on IBM POWER\.
<!-- </ul> -->
^1^ "XGBoost Tutorials\." *Scalable and Flexible Gradient Boosting*\. Web\. © 2015\-2016 DMLC\.
<!-- </article "role="article" "> -->
|
8937DB13972E4DEDBCC542303EF3A783287FD10B | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/xgboostlinear.html?context=cdpaas&locale=en | XGBoost Linear (SPSS Modeler) | XGBoost Linear node
XGBoost Linear© is an advanced implementation of a gradient boosting algorithm with a linear model as the base model. Boosting algorithms iteratively learn weak classifiers and then add them to a final strong classifier. The XGBoost Linear node in watsonx.ai is implemented in Python.
For more information about boosting algorithms, see the [XGBoost Tutorials](http://xgboost.readthedocs.io/en/latest/tutorials/index.html). ^1^
Note that the XGBoost cross-validation function is not supported in watsonx.ai. You can use the Partition node for this functionality. Also note that XGBoost in watsonx.ai performs one-hot encoding automatically for categorical variables.
^1^ "XGBoost Tutorials." Scalable and Flexible Gradient Boosting. Web. © 2015-2016 DMLC.
| # XGBoost Linear node #
XGBoost Linear© is an advanced implementation of a gradient boosting algorithm with a linear model as the base model\. Boosting algorithms iteratively learn weak classifiers and then add them to a final strong classifier\. The XGBoost Linear node in watsonx\.ai is implemented in Python\.
For more information about boosting algorithms, see the [XGBoost Tutorials](http://xgboost.readthedocs.io/en/latest/tutorials/index.html)\. ^1^
Note that the XGBoost cross\-validation function is not supported in watsonx\.ai\. You can use the Partition node for this functionality\. Also note that XGBoost in watsonx\.ai performs one\-hot encoding automatically for categorical variables\.
^1^ "XGBoost Tutorials\." *Scalable and Flexible Gradient Boosting*\. Web\. © 2015\-2016 DMLC\.
<!-- </article "role="article" "> -->
|
35F4C4A97CF58FA0642D88E501314F3D75FF9E01 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/xgboosttree.html?context=cdpaas&locale=en | Supported data sources (SPSS Modeler) | XGBoost Tree node
XGBoost Tree© is an advanced implementation of a gradient boosting algorithm with a tree model as the base model. Boosting algorithms iteratively learn weak classifiers and then add them to a final strong classifier. XGBoost Tree is very flexible and provides many parameters that can be overwhelming to most users, so the XGBoost Tree node in watsonx.ai exposes the core features and commonly used parameters. The node is implemented in Python.
For more information about boosting algorithms, see the [XGBoost Tutorials](http://xgboost.readthedocs.io/en/latest/tutorials/index.html). ^1^
Note that the XGBoost cross-validation function is not supported in watsonx.ai. You can use the Partition node for this functionality. Also note that XGBoost in watsonx.ai performs one-hot encoding automatically for categorical variables.
^1^ "XGBoost Tutorials." Scalable and Flexible Gradient Boosting. Web. © 2015-2016 DMLC.
| # XGBoost Tree node #
XGBoost Tree© is an advanced implementation of a gradient boosting algorithm with a tree model as the base model\. Boosting algorithms iteratively learn weak classifiers and then add them to a final strong classifier\. XGBoost Tree is very flexible and provides many parameters that can be overwhelming to most users, so the XGBoost Tree node in watsonx\.ai exposes the core features and commonly used parameters\. The node is implemented in Python\.
For more information about boosting algorithms, see the [XGBoost Tutorials](http://xgboost.readthedocs.io/en/latest/tutorials/index.html)\. ^1^
Note that the XGBoost cross\-validation function is not supported in watsonx\.ai\. You can use the Partition node for this functionality\. Also note that XGBoost in watsonx\.ai performs one\-hot encoding automatically for categorical variables\.
^1^ "XGBoost Tutorials\." *Scalable and Flexible Gradient Boosting*\. Web\. © 2015\-2016 DMLC\.
<!-- </article "role="article" "> -->
|
717B697E0045B5D7DFF6ACC93AD5DEC98E27EBDC | https://dataplatform.cloud.ibm.com/docs/content/wsd/parameters.html?context=cdpaas&locale=en | Flow and SuperNode parameters | Flow and SuperNode parameters
You can define parameters for use in CLEM expressions and in scripting. They are, in effect, user-defined variables that are saved and persisted with the current flow or SuperNode and can be accessed from the user interface as well as through scripting.
If you save a flow, for example, any parameters you set for that flow are also saved. (This distinguishes them from local script variables, which can be used only in the script in which they are declared.) Parameters are often used in scripting to control the behavior of the script, by providing information about fields and values that don't need to be hard coded in the script.
You can set flow parameters in a flow script or in a flow's properties (right-click the canvas in your flow and select Flow properties), and they're available to all nodes in the flow. They're displayed in the Parameters list in the Expression Builder.
You can also set parameters for SuperNodes, in which case they're visible only to nodes encapsulated within that SuperNode.
Tip: For complete details about scripting, see the [Scripting and automation](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/scripting_overview.html) guide.
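For example, a flow script can define a parameter and then reference it from a CLEM expression as '$P-parametername'. The following sketch assumes the SPSS Modeler Python scripting API described in the scripting guide; the parameter name, the node lookup, and the Select condition are illustrative.

```python
# Minimal sketch, assuming the SPSS Modeler Python scripting API.
# The parameter name and the Select node's condition are illustrative.
stream = modeler.script.stream()           # the flow this script belongs to

# Define (or update) a flow parameter; it is saved and persisted with the flow.
stream.setParameterValue("min_income", 40000)

# The parameter can then be referenced in CLEM expressions as '$P-min_income',
# for example in a Select node's condition.
select_node = stream.findByType("select", None)
if select_node is not None:
    select_node.setPropertyValue("condition", "Income >= '$P-min_income'")

print(stream.getParameterValue("min_income"))
```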
| # Flow and SuperNode parameters #
You can define parameters for use in CLEM expressions and in scripting\. They are, in effect, user\-defined variables that are saved and persisted with the current flow or SuperNode and can be accessed from the user interface as well as through scripting\.
If you save a flow, for example, any parameters you set for that flow are also saved\. (This distinguishes them from local script variables, which can be used only in the script in which they are declared\.) Parameters are often used in scripting to control the behavior of the script, by providing information about fields and values that don't need to be hard coded in the script\.
You can set flow parameters in a flow script or in a flow's properties (right\-click the canvas in your flow and select Flow properties), and they're available to all nodes in the flow\. They're displayed in the Parameters list in the Expression Builder\.
You can also set parameters for SuperNodes, in which case they're visible only to nodes encapsulated within that SuperNode\.
Tip: For complete details about scripting, see the [Scripting and automation](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/scripting_overview.html) guide\.
<!-- </article "role="article" "> -->
|
2B67D1EB41065CF9DA0EB68D429B69803D49EAA1 | https://dataplatform.cloud.ibm.com/docs/content/wsd/reference_guides.html?context=cdpaas&locale=en | Reference information | Reference information
This section provides reference information about various topics.
| # Reference information #
This section provides reference information about various topics\.
<!-- </article "role="article" "> -->
|
C0CC7AE4029730B9846B6A05F4160643D3A8C393 | https://dataplatform.cloud.ibm.com/docs/content/wsd/spss-comments.html?context=cdpaas&locale=en | Adding comments and annotations to SPSS Modeler flows |
You may need to describe a flow to others in your organization. To help you do this, you can attach explanatory comments to nodes and model nuggets.
Others can then view these comments on-screen, or you might even print out an image of the flow that includes the comments. You can also add notes in the form of text annotations to nodes and model nuggets by means of the Annotations tab in a node's properties. These annotations are visible only when the Annotations tab is open.
| <!-- <article "role="article" "> -->
You may need to describe a flow to others in your organization\. To help you do this, you can attach explanatory comments to nodes and model nuggets\.
Others can then view these comments on\-screen, or you might even print out an image of the flow that includes the comments\. You can also add notes in the form of text annotations to nodes and model nuggets by means of the Annotations tab in a node's properties\. These annotations are visible only when the Annotations tab is open\.
<!-- </article "role="article" "> -->
|
E1232C341B3F590C23E9E81DDD157BC99FF77191 | https://dataplatform.cloud.ibm.com/docs/content/wsd/spss-connections.html?context=cdpaas&locale=en | Supported data sources (SPSS Modeler) | Supported data sources for SPSS Modeler
In SPSS Modeler, you can connect to your data no matter where it lives.
* [Connectors](https://dataplatform.cloud.ibm.com/docs/content/wsd/spss-connections.html?context=cdpaas&locale=ensql_overview__ibm-data-src-spss)
* [Data files](https://dataplatform.cloud.ibm.com/docs/content/wsd/spss-connections.html?context=cdpaas&locale=ensql_overview__file-types-spss)
| # Supported data sources for SPSS Modeler #
In SPSS Modeler, you can connect to your data no matter where it lives\.
<!-- <ul> -->
* [Connectors](https://dataplatform.cloud.ibm.com/docs/content/wsd/spss-connections.html?context=cdpaas&locale=en#sql_overview__ibm-data-src-spss)
* [Data files](https://dataplatform.cloud.ibm.com/docs/content/wsd/spss-connections.html?context=cdpaas&locale=en#sql_overview__file-types-spss)
<!-- </ul> -->
<!-- </article "role="article" "> -->
|
7D1E61EF82BC5DC1029D55C8F5C2EBB56082CDAC | https://dataplatform.cloud.ibm.com/docs/content/wsd/spss-modeler.html?context=cdpaas&locale=en | Creating SPSS Modeler flows | Creating SPSS Modeler flows
With SPSS Modeler flows, you can quickly develop predictive models using business expertise and deploy them into business operations to improve decision making. Designed around the long-established SPSS Modeler client software and the industry-standard CRISP-DM model it uses, the flows interface supports the entire data mining process, from data to better business results.
SPSS Modeler offers a variety of modeling methods taken from machine learning, artificial intelligence, and statistics. The methods available on the node palette allow you to derive new information from your data and to develop predictive models. Each method has certain strengths and is best suited for particular types of problems.
Data format
: Relational: Tables in relational data sources
: Tabular: .xls, .xlsx, .csv, .sav, .json, .xml, or .sas. For Excel files, only the first sheet is read.
: Textual: In the supported relational tables or files
Data size
: Any
How can I prepare data?
: Use automatic data preparation functions
: Write SQL statements to manipulate data
: Cleanse, shape, sample, sort, and derive data
How can I analyze data?
: Visualize data with many chart options
: Identify the natural language of a text field
How can I build models?
: Build predictive models
: Choose from over 40 modeling algorithms, and many other nodes
: Use automatic modeling functions
: Model time series or geospatial data
: Classify textual data
: Identify relationships between the concepts in textual data
Getting started
: To create an SPSS Modeler flow from the project's Assets tab, click .
Note: Watsonx.ai doesn't include SPSS functionality in Peru, Ecuador, Colombia, or Venezuela.
| # Creating SPSS Modeler flows #
With SPSS Modeler flows, you can quickly develop predictive models using business expertise and deploy them into business operations to improve decision making\. Designed around the long\-established SPSS Modeler client software and the industry\-standard CRISP\-DM model it uses, the flows interface supports the entire data mining process, from data to better business results\.
SPSS Modeler offers a variety of modeling methods taken from machine learning, artificial intelligence, and statistics\. The methods available on the node palette allow you to derive new information from your data and to develop predictive models\. Each method has certain strengths and is best suited for particular types of problems\.
Data format
: Relational: Tables in relational data sources
: Tabular: \.xls, \.xlsx, \.csv, \.sav, \.json, \.xml, or \.sas\. For Excel files, only the first sheet is read\.
: Textual: In the supported relational tables or files
Data size
: Any
How can I prepare data?
: Use automatic data preparation functions
: Write SQL statements to manipulate data
: Cleanse, shape, sample, sort, and derive data
How can I analyze data?
: Visualize data with many chart options
: Identify the natural language of a text field
How can I build models?
: Build predictive models
: Choose from over 40 modeling algorithms, and many other nodes
: Use automatic modeling functions
: Model time series or geospatial data
: Classify textual data
: Identify relationships between the concepts in textual data
Getting started
: To create an SPSS Modeler flow from the project's Assets tab, click \.
Note: Watsonx\.ai doesn't include SPSS functionality in Peru, Ecuador, Colombia, or Venezuela\.
<!-- </article "role="article" "> -->
|
68061CDEDA9E9E83180CA7513620B5988266CEBF | https://dataplatform.cloud.ibm.com/docs/content/wsd/spss_algorithms.html?context=cdpaas&locale=en | SPSS Modeler algorithms guide | SPSS algorithms
Many of the nodes available in SPSS Modeler are based on statistical algorithms.
If you're interested in learning more about the underlying algorithms used in your flows, you can read the SPSS Modeler Algorithms Guide available in PDF format. The guide is for advanced users, and the information is provided by a team of SPSS statisticians.
[Download the SPSS Modeler Algorithms Guide](https://public.dhe.ibm.com/software/analytics/spss/documentation/modeler/new/AlgorithmsGuide.pdf)
| # SPSS algorithms #
Many of the nodes available in SPSS Modeler are based on statistical algorithms\.
If you're interested in learning more about the underlying algorithms used in your flows, you can read the SPSS Modeler Algorithms Guide available in PDF format\. The guide is for advanced users, and the information is provided by a team of SPSS statisticians\.
[Download the SPSS Modeler Algorithms Guide](https://public.dhe.ibm.com/software/analytics/spss/documentation/modeler/new/AlgorithmsGuide.pdf)
<!-- </article "role="article" "> -->
|
23080E48C7B666C07E92A6E4F4BB256D77BE49B4 | https://dataplatform.cloud.ibm.com/docs/content/wsd/spss_tips.html?context=cdpaas&locale=en | Tips and shortcuts for SPSS Modeler | Tips and shortcuts
Work quickly and easily by familiarizing yourself with the following shortcuts and tips:
* Quickly find nodes. You can use the search bar on the Nodes palette to search for certain node types, and hover over them to see helpful descriptions.
* Quickly edit nodes. After adding a node to your flow, double-click it to open its properties.
* Add a node to a flow connection. To add a new node between two connected nodes, drag the node to the connection line.
* Replace a connection. To replace an existing connection on a node, simply create a new connection and the old one will be replaced.
* Start from an SPSS Modeler stream. You can import a stream (.str) that was created in SPSS Modeler Subscription or the SPSS Modeler client.
* Use tool tips. In node properties, helpful tool tips are available in various locations. Hover over the tooltip icon to see tool tips. 
* Rename nodes and add annotations. Each node properties panel includes an Annotations section in which you can specify a custom name for nodes on the canvas. You can also include lengthy annotations to track progress, save process details, and denote any business decisions required or achieved.
* Generate new nodes from table output. When viewing table output, you can select one or more fields, click Generate, and select a node to add to your flow.
* Insert values automatically into a CLEM expression. Using the Expression Builder, accessible from various areas of the user interface (such as those for Derive and Filler nodes), you can automatically insert field values into a CLEM expression.
Keyboard shortcuts are available for SPSS Modeler. See the following table. Note that all Ctrl keys listed are Cmd on macOS.
Shortcut keys
Table 1. Shortcut keys
Shortcut Key Function
Ctrl + F1 Navigate to the header.
Ctrl + F2 Navigate to the Nodes palette, then use arrow keys to move between nodes. Press Enter or the space key to add the selected node to your canvas.
Ctrl + F3 Navigate to the toolbar.
Ctrl + F4 Navigate to the flow canvas, then use arrow keys to move between nodes. Press Enter or space twice to open the node's context menu. Then use the arrow keys to select the desired context menu action and press Enter or space to perform the action.
Ctrl + F5 Navigate to the node properties panel if it's open.
Ctrl + F6 Move between areas of the user interface (header, palette, canvas, toolbar, etc.).
Ctrl + F7 Open and navigate to the Messages panel.
Ctrl + F8 Open and navigate to the Outputs panel.
Ctrl + A Select all nodes when focus is on the canvas
Ctrl + E With a node selected on the canvas, open its node properties. Then use the tab or arrow keys to move around the list of node properties. Press Ctrl + S to save your changes or press Ctrl + to cancel your changes.
Ctrl + I Open the settings panel.
Ctrl + J With a node selected on the canvas, connect it to another node. Use the arrow keys to select the node to connect to, then press Enter or space (or press Esc to cancel).
Ctrl + K Disconnect a node.
Ctrl + Enter Run a branch from where the focus is.
Ctrl + Shift + Enter Run the entire flow.
Ctrl + Shift + P Launch preview.
Ctrl + arrow Move a selected node around the canvas.
Ctrl + Alt + arrow Move the canvas in a direction.
Ctrl + Shift + arrow Move a selected node around the canvas ten times faster than Ctrl + arrow.
Ctrl + Shift + C Toggle cache on/off.
Ctrl + Shift + up arrow Select all nodes upstream of the selected node.
Ctrl + Shift + down arrow Select all nodes downstream of the selected node.
Enter + space twice Open the context menu when a node is selected on the flow canvas
Shift + arrow Select multiple nodes.
| # Tips and shortcuts #
Work quickly and easily by familiarizing yourself with the following shortcuts and tips:
<!-- <ul> -->
* **Quickly find nodes\.** You can use the search bar on the Nodes palette to search for certain node types, and hover over them to see helpful descriptions\.
* **Quickly edit nodes\.** After adding a node to your flow, double\-click it to open its properties\.
* **Add a node to a flow connection\.** To add a new node between two connected nodes, drag the node to the connection line\.
* **Replace a connection\.** To replace an existing connection on a node, simply create a new connection and the old one will be replaced\.
* **Start from an SPSS Modeler stream\.** You can import a stream (\.str) that was created in SPSS Modeler Subscription or the SPSS Modeler client\.
* **Use tool tips\.** In node properties, helpful tool tips are available in various locations\. Hover over the tooltip icon to see tool tips\. 
* **Rename nodes and add annotations\.** Each node properties panel includes an Annotations section in which you can specify a custom name for nodes on the canvas\. You can also include lengthy annotations to track progress, save process details, and denote any business decisions required or achieved\.
* **Generate new nodes from table output\.** When viewing table output, you can select one or more fields, click Generate, and select a node to add to your flow\.
* **Insert values automatically into a CLEM expression\.** Using the Expression Builder, accessible from various areas of the user interface (such as those for Derive and Filler nodes), you can automatically insert field values into a CLEM expression\.
<!-- </ul> -->
Keyboard shortcuts are available for SPSS Modeler\. See the following table\. Note that all **Ctrl** keys listed are **Cmd** on macOS\.
<!-- <table "summary="Shortcut keys" id="spss_tips__table_mjf_jyn_zcb" class="defaultstyle" "> -->
Shortcut keys
Table 1\. Shortcut keys
| Shortcut Key | Function |
| --------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Ctrl \+ F1 | Navigate to the header\. |
| Ctrl \+ F2 | Navigate to the Nodes palette, then use arrow keys to move between nodes\. Press Enter or the space key to add the selected node to your canvas\. |
| Ctrl \+ F3 | Navigate to the toolbar\. |
| Ctrl \+ F4 | Navigate to the flow canvas, then use arrow keys to move between nodes\. Press Enter or space twice to open the node's context menu\. Then use the arrow keys to select the desired context menu action and press Enter or space to perform the action\. |
| Ctrl \+ F5 | Navigate to the node properties panel if it's open\. |
| Ctrl \+ F6 | Move between areas of the user interface (header, palette, canvas, toolbar, etc\.)\. |
| Ctrl \+ F7 | Open and navigate to the Messages panel\. |
| Ctrl \+ F8 | Open and navigate to the Outputs panel\. |
| Ctrl \+ A | Select all nodes when focus is on the canvas |
| Ctrl \+ E | With a node selected on the canvas, open its node properties\. Then use the tab or arrow keys to move around the list of node properties\. Press Ctrl \+ S to save your changes or press Ctrl \+ to cancel your changes\. |
| Ctrl \+ I | Open the settings panel\. |
| Ctrl \+ J | With a node selected on the canvas, connect it to another node\. Use the arrow keys to select the node to connect to, then press Enter or space (or press Esc to cancel)\. |
| Ctrl \+ K | Disconnect a node\. |
| Ctrl \+ Enter | Run a branch from where the focus is\. |
| Ctrl \+ Shift \+ Enter | Run the entire flow\. |
| Ctrl \+ Shift \+ P | Launch preview\. |
| Ctrl \+ arrow | Move a selected node around the canvas\. |
| Ctrl \+ Alt \+ arrow | Move the canvas in a direction\. |
| Ctrl \+ Shift \+ arrow | Move a selected node around the canvas ten times faster than Ctrl \+ arrow\. |
| Ctrl \+ Shift \+ C | Toggle cache on/off\. |
| Ctrl \+ Shift \+ up arrow | Select all nodes upstream of the selected node\. |
| Ctrl \+ Shift \+ down arrow | Select all nodes downstream of the selected node\. |
| Enter \+ space twice | Open the context menu when a node is selected on the flow canvas |
| Shift \+ arrow | Select multiple nodes\. |
<!-- </table "summary="Shortcut keys" id="spss_tips__table_mjf_jyn_zcb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
C5F5ACC006CD6F06BE3266EE98F89FABF4F6FBAF | https://dataplatform.cloud.ibm.com/docs/content/wsd/spss_troubleshooting.html?context=cdpaas&locale=en | Troubleshooting information for SPSS Modeler | Troubleshooting SPSS Modeler
The information in this section provides troubleshooting details for issues you may encounter in SPSS Modeler.
| # Troubleshooting SPSS Modeler #
The information in this section provides troubleshooting details for issues you may encounter in SPSS Modeler\.
<!-- </article "role="article" "> -->
|
33FE18D89140517AB2A75D6FC64A4A3DB962B88B | https://dataplatform.cloud.ibm.com/docs/content/wsd/sql_clem.html?context=cdpaas&locale=en | SQL optimization (SPSS Modeler) | CLEM expressions and operators supporting SQL pushback
The tables in this section list the mathematical operations and expressions that support SQL generation and are often used during data mining. Operations absent from these tables don't support SQL generation.
Table 1. Operators
Operations supporting SQL generation Notes
+
-
/
*
>< Used to concatenate strings.
Table 2. Relational operators
Operations supporting SQL generation Notes
=
/= Used to specify "not equal."
>
>=
<
<=
Table 3. Functions
Operations supporting SQL generation Notes
abs
allbutfirst
allbutlast
and
arccos
arcsin
arctan
arctanh
cos
div
exp
fracof
hasstartstring
hassubstring
integer
intof
isalphacode
islowercode
isnumbercode
isstartstring
issubstring
isuppercode
last
length
locchar
log
log10
lowertoupper
max
member
min
negate
not
number
or
pi
real
rem
round
sign
sin
sqrt
string
strmember
subscrs
substring
substring_between
uppertolower
to_string
Table 4. Special functions
Operations supporting SQL generation Notes
@NULL
@GLOBAL_AVE You can use the special global functions to retrieve global values computed by the Set Globals node.
@GLOBAL_SUM
@GLOBAL_MAX
@GLOBAL_MEAN
@GLOBAL_MIN
@GLOBAL_SDEV
Table 5. Aggregate functions
Operations supporting SQL generation Notes
Sum
Mean
Min
Max
Count
SDev
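For example, a Derive formula built only from operators in these tables remains a candidate for pushback. The following scripting sketch assumes the SPSS Modeler Python scripting API; the field names, node position, and property values are illustrative, so check the scripting guide for your release.

```python
# Minimal sketch, assuming the SPSS Modeler Python scripting API; the field
# names and node position are illustrative assumptions.
stream = modeler.script.stream()

derive = stream.createAt("derive", "customer_label", 300, 100)
derive.setPropertyValue("result_type", "Formula")
# Every operator used here (lowertoupper, >< concatenation, to_string) appears
# in the tables above, so the expression is a candidate for SQL pushback.
derive.setPropertyValue("formula_expr",
                        "lowertoupper(Surname) >< ', ' >< to_string(CustomerID)")
# Link the node downstream of your import node to complete the branch.
```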
| # CLEM expressions and operators supporting SQL pushback #
The tables in this section list the mathematical operations and expressions that support SQL generation and are often used during data mining\. Operations absent from these tables don't support SQL generation\.
<!-- <table "summary="" class="defaultstyle" "> -->
Table 1\. Operators
| Operations supporting SQL generation | Notes |
| ------------------------------------ | ----------------------------- |
| `+` | |
| `-` | |
| `/` | |
| `*` | |
| `><` | Used to concatenate strings\. |
<!-- </table "summary="" class="defaultstyle" "> -->
<!-- <table "summary="" id="sql_clem__table_uqw_jk1_2db" class="defaultstyle" "> -->
Table 2\. Relational operators
| Operations supporting SQL generation | Notes |
| ------------------------------------ | ----------------------------- |
| `=` | |
| `/=` | Used to specify "not equal\." |
| `>` | |
| `>=` | |
| `<` | |
| `<=` | |
<!-- </table "summary="" id="sql_clem__table_uqw_jk1_2db" class="defaultstyle" "> -->
<!-- <table "summary="" id="sql_clem__table_wqw_jk1_2db" class="defaultstyle" "> -->
Table 3\. Functions
| Operations supporting SQL generation | Notes |
| ------------------------------------ | ----- |
| `abs` | |
| `allbutfirst` | |
| `allbutlast` | |
| `and` | |
| `arccos` | |
| `arcsin` | |
| `arctan` | |
| `arctanh` | |
| `cos` | |
| `div` | |
| `exp` | |
| `fracof` | |
| `hasstartstring` | |
| `hassubstring` | |
| `integer` | |
| `intof` | |
| `isalphacode` | |
| `islowercode` | |
| `isnumbercode` | |
| `isstartstring` | |
| `issubstring` | |
| `isuppercode` | |
| `last` | |
| `length` | |
| `locchar` | |
| `log` | |
| `log10` | |
| `lowertoupper` | |
| `max` | |
| `member` | |
| `min` | |
| `negate` | |
| `not` | |
| `number` | |
| `or` | |
| `pi` | |
| `real` | |
| `rem` | |
| `round` | |
| `sign` | |
| `sin` | |
| `sqrt` | |
| `string` | |
| `strmember` | |
| `subscrs` | |
| `substring` | |
| `substring_between` | |
| `uppertolower` | |
| `to_string` | |
<!-- </table "summary="" id="sql_clem__table_wqw_jk1_2db" class="defaultstyle" "> -->
<!-- <table "summary="" id="sql_clem__table_yqw_jk1_2db" class="defaultstyle" "> -->
Table 4\. Special functions
| Operations supporting SQL generation | Notes |
| ------------------------------------ | ----------------------------------------------------------------------------------------------------- |
| `@NULL` | |
| `@GLOBAL_AVE` | You can use the special global functions to retrieve global values computed by the Set Globals node\. |
| `@GLOBAL_SUM` | |
| `@GLOBAL_MAX` | |
| `@GLOBAL_MEAN` | |
| `@GLOBAL_MIN` | |
| `@GLOBAL_SDEV` | |
<!-- </table "summary="" id="sql_clem__table_yqw_jk1_2db" class="defaultstyle" "> -->
<!-- <table "summary="" id="sql_clem__table_arw_jk1_2db" class="defaultstyle" "> -->
Table 5\. Aggregate functions
| Operations supporting SQL generation | Notes |
| ------------------------------------ | ----- |
| `Sum` | |
| `Mean` | |
| `Min` | |
| `Max` | |
| `Count` | |
| `SDev` | |
<!-- </table "summary="" id="sql_clem__table_arw_jk1_2db" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
262C45D286C9B8A7EDBA8635E636824F2B043D73 | https://dataplatform.cloud.ibm.com/docs/content/wsd/sql_howitworks.html?context=cdpaas&locale=en | SQL optimization (SPSS Modeler) | How does SQL pushback work?
The initial fragments of a flow leading from the data import nodes are the main targets for SQL generation. When a node is encountered that can't be compiled to SQL, the data is extracted from the database and subsequent processing is performed.
During flow preparation and prior to running, the SQL generation process happens as follows:
* The software reorders flows to move downstream nodes into the “SQL zone” where it can be proven safe to do so.
* Working from the import nodes toward the terminal nodes, SQL expressions are constructed incrementally. This phase stops when a node is encountered that can't be converted to SQL or when the terminal node (for example, a Table node or a Graph node) is converted to SQL. At the end of this phase, each node is labeled with an SQL statement if the node and its predecessors have an SQL equivalent.
* Working from the nodes with the most complicated SQL equivalents back toward the import nodes, the SQL is checked for validity. The SQL that was successfully validated is chosen for execution.
* Nodes for which all operations have generated SQL are highlighted with a SQL icon next to the node on the flow canvas. Based on the results, you may want to further reorganize your flow where appropriate to take full advantage of database execution.
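As a purely illustrative example (the actual SQL is generated internally and varies by database), a branch consisting of an import node, a Select node, and an Aggregate node that converts completely might collapse into a single statement along the following lines, with only the aggregated result returned to the client. The table and field names are assumptions.

```python
# Purely illustrative: the kind of single statement such a branch might
# collapse into when every node can be rendered to SQL. Table and column
# names are assumptions; the real SQL is generated internally.
pushback_sql = """
SELECT product,
       SUM(revenue) AS revenue_sum,
       COUNT(*)     AS record_count
FROM   sales
WHERE  region = 'EMEA'
GROUP  BY product
"""

# If a downstream node has no SQL equivalent, generation stops there: this
# query still runs in the database, its result set is fetched, and the
# remaining nodes run in SPSS Modeler.
print(pushback_sql)
```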
| # How does SQL pushback work? #
The initial fragments of a flow leading from the data import nodes are the main targets for SQL generation\. When a node is encountered that can't be compiled to SQL, the data is extracted from the database and subsequent processing is performed\.
During flow preparation and prior to running, the SQL generation process happens as follows:
<!-- <ul> -->
* The software reorders flows to move downstream nodes into the “SQL zone” where it can be proven safe to do so\.
* Working from the import nodes toward the terminal nodes, SQL expressions are constructed incrementally\. This phase stops when a node is encountered that can't be converted to SQL or when the terminal node (for example, a Table node or a Graph node) is converted to SQL\. At the end of this phase, each node is labeled with an SQL statement if the node and its predecessors have an SQL equivalent\.
* Working from the nodes with the most complicated SQL equivalents back toward the import nodes, the SQL is checked for validity\. The SQL that was successfully validated is chosen for execution\.
* Nodes for which all operations have generated SQL are highlighted with a SQL icon next to the node on the flow canvas\. Based on the results, you may want to further reorganize your flow where appropriate to take full advantage of database execution\.
<!-- </ul> -->
<!-- </article "role="article" "> -->
|
BDB3689801D81676AE642F1EBFF81D27C07F1F3C | https://dataplatform.cloud.ibm.com/docs/content/wsd/sql_native.html?context=cdpaas&locale=en | SQL optimization (SPSS Modeler) | Generating SQL from model nuggets
When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations. For some nodes, SQL for the model nugget can be generated, pushing back the model scoring stage to the database. This allows flows containing these nuggets to have their full SQL pushed back.
For a generated model nugget that supports SQL pushback:
1. Double-click the model nugget to open its settings.
2. Depending on the node type, one or more of the following options is available. Choose one of these options to specify how SQL generation is performed.
Generate SQL for this model
* Default: Score using Server Scoring Adapter (if installed) otherwise in process. This is the default option. If connected to a database with a scoring adapter installed, this option generates SQL using the scoring adapter and associated user defined functions (UDF) and scores your model within the database. When no scoring adapter is available, this option fetches your data back from the database and scores it in SPSS Modeler.
* Score by converting to native SQL without Missing Value Support. This option generates native SQL to score the model within the database, without the overhead of handling missing values. This option simply sets the prediction to null ($null$) when a missing value is encountered while scoring a case.
* Score by converting to native SQL with Missing Value Support. For CHAID, QUEST, and C&R Tree models, you can generate native SQL to score the model within the database with full missing value support. This means that SQL is generated so that missing values are handled as specified in the model. For example, C&R Trees use surrogate rules and biggest child fallback.
* Score outside of the Database. This option fetches your data back from the database and scores it in SPSS Modeler.
| # Generating SQL from model nuggets #
When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations\. For some nodes, SQL for the model nugget can be generated, pushing back the model scoring stage to the database\. This allows flows containing these nuggets to have their full SQL pushed back\.
For a generated model nugget that supports SQL pushback:
<!-- <ol> -->
1. Double\-click the model nugget to open its settings\.
2. Depending on the node type, one or more of the following options is available\. Choose one of these options to specify how SQL generation is performed\.
Generate SQL for this model
<!-- <ul> -->
* Default: Score using Server Scoring Adapter (if installed) otherwise in process. This is the default option. If connected to a database with a scoring adapter installed, this option generates SQL using the scoring adapter and associated user defined functions (UDF) and scores your model within the database. When no scoring adapter is available, this option fetches your data back from the database and scores it in SPSS Modeler.
* Score by converting to native SQL without Missing Value Support. This option generates native SQL to score the model within the database, without the overhead of handling missing values. This option simply sets the prediction to null (`$null$`) when a missing value is encountered while scoring a case.
* Score by converting to native SQL with Missing Value Support. For CHAID, QUEST, and C&R Tree models, you can generate native SQL to score the model within the database with full missing value support. This means that SQL is generated so that missing values are handled as specified in the model. For example, C&R Trees use surrogate rules and biggest child fallback.
* Score outside of the Database. This option fetches your data back from the database and scores it in SPSS Modeler.
<!-- </ul> -->
<!-- </ol> -->
<!-- </article "role="article" "> -->
|
D69F33671E13DF29FE56579AC4654EBC54A11F12 | https://dataplatform.cloud.ibm.com/docs/content/wsd/sql_nodes.html?context=cdpaas&locale=en | SQL optimization (SPSS Modeler) | Nodes supporting SQL pushback
The tables in this section show nodes representing data-mining operations that support SQL pushback. If a node doesn't appear in these tables, it doesn't support SQL pushback.
Table 1. Record Operations nodes
Nodes supporting SQL generation Notes
Select Supports generation only if SQL generation for the select expression itself is supported. If any fields have nulls, SQL generation does not give the same results for discard as are given in native SPSS Modeler.
Sample Simple sampling supports SQL generation to varying degrees depending on the database.
Aggregate SQL generation support for aggregation depends on the data storage type.
RFM Aggregate Supports generation except if saving the date of the second or third most recent transactions, or if only including recent transactions. However, including recent transactions does work if the datetime_date(YEAR,MONTH,DAY) function is pushed back.
Sort
Merge No SQL generated for merge by order.<br><br>Merge by key with full or partial outer join is only supported if the database/driver supports it. Non-matching input fields can be renamed by means of a Filter node, or the Filter settings of an import node.<br><br>Supports SQL generation for merge by condition.<br><br>For all types of merge, SQL_SP_EXISTS is not supported if inputs originate in different databases.
Append Supports generation if inputs are unsorted. SQL optimization is only possible when your inputs have the same number of columns.
Distinct A Distinct node with the (default) mode Create a composite record for each group selected doesn't support SQL optimization.
Table 2. SQL generation support in the Sample node for simple sampling
Mode Sample Max size Seed Db2 for z/OS Db2 for OS/400 Db2 for Win/UNIX Oracle SQL Server Teradata
Include First n/a Y Y Y Y Y Y
1-in-n off Y Y Y Y Y
max Y Y Y Y Y
Random % off off Y Y Y Y
on Y Y Y
max off Y Y Y Y
on Y Y Y
Discard First off Y
max Y
1-in-n off Y Y Y Y Y
max Y Y Y Y Y
Random % off off Y Y Y Y
on Y Y Y
max off Y Y Y Y
on Y Y Y
Table 3. SQL generation support in the Aggregate node
Storage Sum Mean Min Max SDev Median Count Variance Percentile
Integer Y Y Y Y Y Y* Y Y Y*
Real Y Y Y Y Y Y* Y Y Y*
Date Y Y Y* Y Y*
Time Y Y Y* Y Y*
Timestamp Y Y Y* Y Y*
String Y Y Y* Y Y*
* Median and Percentile are supported on Oracle.
Table 4. Field Operations nodes
Nodes supporting SQL generation Notes
Type Supports SQL generation if the Type node is instantiated and no ABORT or WARN type checking is specified.
Filter
Derive Supports SQL generation if SQL generated for the derive expression is supported (see expressions later on this page).
Ensemble Supports SQL generation for Continuous targets. For other targets, supports generation only if the Highest confidence wins ensemble method is used.
Filler Supports SQL generation if the SQL generated for the derive expression is supported.
Anonymize Supports SQL generation for Continuous targets, and partial SQL generation for Nominal and Flag targets.
Reclassify
Binning Supports SQL generation if the Tiles (equal count) binning method is used and the Read from Bin Values tab if available option is selected. Due to differences in the way that bin boundaries are calculated (this is caused by the nature of the distribution of data in bin fields), you might see differences in the binning output when comparing normal flow execution results and SQL pushback results. To avoid this, use the Record count tiling method, and either Add to next or Keep in current tiles to obtain the closest match between the two methods of flow execution.
RFM Analysis Supports SQL generation if the Read from Bin Values tab if available option is selected, but downstream nodes will not support it.
Partition Supports SQL generation to assign records to partitions.
Set To Flag
Restructure
Table 5. Graphs nodes
Nodes supporting SQL generation Notes
Distribution
Web
Evaluation
For some models, SQL for the model nugget can be generated, pushing back the model scoring stage to the database. The main use of this feature is not to improve performance, but to allow flows containing these nuggets to have their full SQL pushed back. See [Generating SQL from model nuggets](https://dataplatform.cloud.ibm.com/docs/content/wsd/sql_native.html) for more information.
Table 6. Model nuggets
Model nuggets supporting SQL generation Notes
C&R Tree Supports SQL generation for the single tree option, but not for the boosting, bagging, or large dataset options.
QUEST
CHAID
C5.0
Decision List
Linear Supports SQL generation for the standard model option, but not for the boosting, bagging, or large dataset options.
Neural Net Supports SQL generation for the standard model option (Multilayer Perceptron only), but not for the boosting, bagging, or large dataset options.
PCA/Factor
Logistic Supports SQL generation for Multinomial procedure but not Binomial. For Multinomial, generation isn't supported when confidences are selected, unless the target type is Flag.
Generated Rulesets
Auto Classifier If a User Defined Function (UDF) scoring adapter is enabled, these nuggets support SQL pushback. Also, if SQL generation for Continuous targets or the Highest confidence wins ensemble method is used, these nuggets support further pushback downstream.
Auto Numeric If a User Defined Function (UDF) scoring adapter is enabled, these nuggets support SQL pushback. Also, if SQL generation for Continuous targets or the Highest confidence wins ensemble method is used, these nuggets support further pushback downstream.
Table 7. Outputs nodes
Nodes supporting SQL generation Notes
Table Supports generation if SQL generation is supported for highlight expression.
Matrix Supports generation except if All numerics is selected for the Fields option.
Analysis Supports generation, depending on the options selected.
Transform
Statistics Supports generation if the Correlate option isn't used.
Report
Set Globals
| # Nodes supporting SQL pushback #
The tables in this section show nodes representing data\-mining operations that support SQL pushback\. If a node doesn't appear in these tables, it doesn't support SQL pushback\.
<!-- <table "summary="" id="sql_nodes__sqlgen_recordops" class="defaultstyle" "> -->
Table 1\. Record Operations nodes
| Nodes supporting SQL generation | Notes |
| ------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Select | Supports generation only if SQL generation for the select expression itself is supported\. If any fields have nulls, SQL generation does not give the same results for discard as are given in native SPSS Modeler\. |
| Sample | Simple sampling supports SQL generation to varying degrees depending on the database\. |
| Aggregate | SQL generation support for aggregation depends on the data storage type\. |
| RFM Aggregate | Supports generation except if saving the date of the second or third most recent transactions, or if only including recent transactions\. However, including recent transactions does work if the `datetime_date(YEAR,MONTH,DAY)` function is pushed back\. |
| Sort | |
| Merge | No SQL generated for merge by order\.<br><br>Merge by key with full or partial outer join is only supported if the database/driver supports it\. Non\-matching input fields can be renamed by means of a Filter node, or the Filter settings of an import node\.<br><br>Supports SQL generation for merge by condition\.<br><br>For all types of merge, `SQL_SP_EXISTS` is not supported if inputs originate in different databases\. |
| Append | Supports generation if inputs are unsorted\. SQL optimization is only possible when your inputs have the same number of columns\. |
| Distinct | A Distinct node with the (default) mode Create a composite record for each group selected doesn't support SQL optimization\. |
<!-- </table "summary="" id="sql_nodes__sqlgen_recordops" class="defaultstyle" "> -->
<!-- <table "summary="" id="sql_nodes__sqlgen_sampling" class="defaultstyle" "> -->
Table 2\. SQL generation support in the Sample node for simple sampling
| Mode | Sample | Max size | Seed | Db2 for z/OS | Db2 for OS/400 | Db2 for Win/UNIX | Oracle | SQL Server | Teradata |
| ------- | -------- | -------- | ---- | ------------ | -------------- | ---------------- | ------ | ---------- | -------- |
| Include | First | n/a | | Y | Y | Y | Y | Y | Y |
| | 1\-in\-n | off | | Y | Y | Y | Y | | Y |
| | | max | | Y | Y | Y | Y | | Y |
| | Random % | off | off | Y | | Y | Y | | Y |
| | | | on | Y | | Y | Y | | |
| | | max | off | Y | | Y | Y | | Y |
| | | | on | Y | | Y | Y | | |
| Discard | First | off | | | | | Y | | |
| | | max | | | | | Y | | |
| | 1\-in\-n | off | | Y | Y | Y | Y | | Y |
| | | max | | Y | Y | Y | Y | | Y |
| | Random % | off | off | Y | | Y | Y | | Y |
| | | | on | Y | | Y | Y | | |
| | | max | off | Y | | Y | Y | | Y |
| | | | on | Y | | Y | Y | | |
<!-- </table "summary="" id="sql_nodes__sqlgen_sampling" class="defaultstyle" "> -->
<!-- <table "summary="" id="sql_nodes__sqlgen_aggregate" class="defaultstyle" "> -->
Table 3\. SQL generation support in the Aggregate node
| Storage | Sum | Mean | Min | Max | SDev | Median | Count | Variance | Percentile |
| --------- | --- | ---- | --- | --- | ---- | ------ | ----- | -------- | ---------- |
| Integer | Y | Y | Y | Y | Y | Y\* | Y | Y | Y\* |
| Real | Y | Y | Y | Y | Y | Y\* | Y | Y | Y\* |
| Date | | | Y | Y | | Y\* | Y | | Y\* |
| Time | | | Y | Y | | Y\* | Y | | Y\* |
| Timestamp | | | Y | Y | | Y\* | Y | | Y\* |
| String | | | Y | Y | | Y\* | Y | | Y\* |
<!-- </table "summary="" id="sql_nodes__sqlgen_aggregate" class="defaultstyle" "> -->
\* Median and Percentile are supported on Oracle\.
<!-- <table "summary="" id="sql_nodes__sqlgen_fieldops" class="defaultstyle" "> -->
Table 4\. Field Operations nodes
| Nodes supporting SQL generation | Notes |
| ------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Type | Supports SQL generation if the Type node is instantiated and no `ABORT` or `WARN` type checking is specified\. |
| Filter | |
| Derive | Supports SQL generation if SQL generated for the derive expression is supported (see expressions later on this page)\. |
| Ensemble | Supports SQL generation for Continuous targets\. For other targets, supports generation only if the Highest confidence wins ensemble method is used\. |
| Filler | Supports SQL generation if the SQL generated for the derive expression is supported\. |
| Anonymize | Supports SQL generation for Continuous targets, and partial SQL generation for Nominal and Flag targets\. |
| Reclassify | |
| Binning | Supports SQL generation if the Tiles (equal count) binning method is used and the Read from Bin Values tab if available option is selected\. Due to differences in the way that bin boundaries are calculated (this is caused by the nature of the distribution of data in bin fields), you might see differences in the binning output when comparing normal flow execution results and SQL pushback results\. To avoid this, use the Record count tiling method, and either Add to next or Keep in current tiles to obtain the closest match between the two methods of flow execution\. |
| RFM Analysis | Supports SQL generation if the Read from Bin Values tab if available option is selected, but downstream nodes will not support it\. |
| Partition | Supports SQL generation to assign records to partitions\. |
| Set To Flag | |
| Restructure | |
<!-- </table "summary="" id="sql_nodes__sqlgen_fieldops" class="defaultstyle" "> -->
<!-- <table "summary="" id="sql_nodes__sqlgen_graphs" class="defaultstyle" "> -->
Table 5\. Graphs nodes
| Nodes supporting SQL generation | Notes |
| ------------------------------- | ----- |
| Distribution | |
| Web | |
| Evaluation | |
<!-- </table "summary="" id="sql_nodes__sqlgen_graphs" class="defaultstyle" "> -->
For some models, SQL for the model nugget can be generated, pushing back the model scoring stage to the database\. The main use of this feature is not to improve performance, but to allow flows containing these nuggets to have their full SQL pushed back\. See [Generating SQL from model nuggets](https://dataplatform.cloud.ibm.com/docs/content/wsd/sql_native.html) for more information\.
<!-- <table "summary="" id="sql_nodes__sqlgen_nuggets" class="defaultstyle" "> -->
Table 6\. Model nuggets
| Model nuggets supporting SQL generation | Notes |
| --------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| C&R Tree | Supports SQL generation for the single tree option, but not for the boosting, bagging, or large dataset options\. |
| QUEST | |
| CHAID | |
| C5\.0 | |
| Decision List | |
| Linear | Supports SQL generation for the standard model option, but not for the boosting, bagging, or large dataset options\. |
| Neural Net | Supports SQL generation for the standard model option (Multilayer Perceptron only), but not for the boosting, bagging, or large dataset options\. |
| PCA/Factor | |
| Logistic | Supports SQL generation for Multinomial procedure but not Binomial\. For Multinomial, generation isn't supported when confidences are selected, unless the target type is Flag\. |
| Generated Rulesets | |
| Auto Classifier | If a User Defined Function (UDF) scoring adapter is enabled, these nuggets support SQL pushback\. Also, if SQL generation for Continuous targets or the Highest confidence wins ensemble method is used, these nuggets support further pushback downstream\. |
| Auto Numeric | If a User Defined Function (UDF) scoring adapter is enabled, these nuggets support SQL pushback\. Also, if SQL generation for Continuous targets or the Highest confidence wins ensemble method is used, these nuggets support further pushback downstream\. |
<!-- </table "summary="" id="sql_nodes__sqlgen_nuggets" class="defaultstyle" "> -->
<!-- <table "summary="" id="sql_nodes__sqlgen_output" class="defaultstyle" "> -->
Table 7\. Outputs nodes
| Nodes supporting SQL generation | Notes |
| ------------------------------- | ------------------------------------------------------------------------------- |
| Table | Supports generation if SQL generation is supported for highlight expression\. |
| Matrix | Supports generation except if All numerics is selected for the Fields option\. |
| Analysis | Supports generation, depending on the options selected\. |
| Transform | |
| Statistics | Supports generation if the Correlate option isn't used\. |
| Report | |
| Set Globals | |
<!-- </table "summary="" id="sql_nodes__sqlgen_output" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
AF0F7C335A10C372C36A0CCEC76057C41B93731B | https://dataplatform.cloud.ibm.com/docs/content/wsd/sql_overview.html?context=cdpaas&locale=en | SQL optimization (SPSS Modeler) | SQL optimization
You can push many data preparation and mining operations directly into your database to improve performance.
One of the most powerful capabilities of SPSS Modeler is the ability to perform many data preparation and mining operations directly in the database. By generating SQL code that can be pushed back to the database for execution, many operations, such as sampling, sorting, deriving new fields, and certain types of graphing, can be performed in the database rather than on the client or server computer. When you're working with large datasets, these pushbacks can dramatically enhance performance in several ways:
* By reducing the size of the result set to be transferred from the DBMS to watsonx.ai. When large result sets are read through an ODBC driver, network I/O or driver inefficiencies may result. For this reason, the operations that benefit most from SQL optimization are row and column selection and aggregation (Select, Sample, Aggregate nodes), which typically reduce the size of the dataset to be transferred. Data can also be cached to a temporary table in the database at critical points in the flow (after a Merge or Select node, for example) to further improve performance.
* By making use of the performance and scalability of the database. Efficiency is increased because a DBMS can often take advantage of parallel processing, more powerful hardware, more sophisticated management of disk storage, and the presence of indexes.
Given these advantages, watsonx.ai is designed to maximize the amount of SQL generated by each SPSS Modeler flow so that only those operations that can't be compiled to SQL are executed by watsonx.ai. Because of limitations in what can be expressed in standard SQL (SQL-92), however, certain operations may not be supported.
For details about currently supported databases, see [Supported data sources for SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsd/spss-connections.html).
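To make the data-transfer benefit concrete, the following sketch contrasts fetching every row and reducing it locally with letting the database filter and aggregate. The connection string, table, and column names are assumptions, and it is not how SPSS Modeler itself issues queries; the product performs the equivalent rewriting automatically when pushback is possible.

```python
# Minimal sketch, assuming a SQLAlchemy engine and a 'sales' table; purely an
# illustration of why pushing work into the database reduces transferred data.
import pandas as pd
import sqlalchemy as sa

engine = sa.create_engine("db2+ibm_db://user:password@host:50000/SAMPLE")  # assumed connection

# Without pushback: every row crosses the network, then is reduced locally.
all_rows = pd.read_sql("SELECT * FROM sales", engine)
local = (all_rows[all_rows["region"] == "EMEA"]
         .groupby("product")["revenue"].sum())

# With pushback: the database filters and aggregates, so only the small
# aggregated result set is transferred.
pushed = pd.read_sql(
    "SELECT product, SUM(revenue) AS revenue "
    "FROM sales WHERE region = 'EMEA' GROUP BY product",
    engine,
)
```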
Tips:
* When running a flow, nodes that push back to your database are highlighted with a small SQL icon beside the node. When you start making edits to a flow after running it, the icons will be removed until the next time you run the flow.
Figure 1. SQL pushback indicator

* If you want to see which nodes will push back before running a flow, click SQL preview. This enables you to modify the flow before you run it to improve performance by moving the non-pushback operations as far downstream as possible, for example.
* If a node can't be pushed back, all subsequent nodes in the flow won't be pushed back either (pushback stops at that node). This may impact how you want to organize the order of nodes in your flow.
Notes: Keep the following information in mind regarding SQL:
* Because of minor differences in SQL implementation, flows that run in a database may return slightly different results when executed in watsonx.ai. These differences may also vary depending on the database vendor, for similar reasons. For example, depending on the database configuration for case sensitivity in string comparison and string collation, SPSS Modeler flows that run using SQL pushback may produce different results from those that run without SQL pushback. Contact your database administrator for advice on configuring your database. To maximize compatibility with watsonx.ai, database string comparisons should be case sensitive.
* When using watsonx.ai to generate SQL, it's possible the result using SQL pushback is not consistent on some platforms (Linux, for example). This is because floating point is handled differently on different platforms.
| # SQL optimization #
You can push many data preparation and mining operations directly into your database to improve performance\.
One of the most powerful capabilities of SPSS Modeler is the ability to perform many data preparation and mining operations directly in the database\. By generating SQL code that can be pushed back to the database for execution, many operations, such as sampling, sorting, deriving new fields, and certain types of graphing, can be performed in the database rather than on the client or server computer\. When you're working with large datasets, these pushbacks can dramatically enhance performance in several ways:
<!-- <ul> -->
* By reducing the size of the result set to be transferred from the DBMS to watsonx\.ai\. When large result sets are read through an ODBC driver, network I/O or driver inefficiencies may result\. For this reason, the operations that benefit most from SQL optimization are row and column selection and aggregation (Select, Sample, Aggregate nodes), which typically reduce the size of the dataset to be transferred\. Data can also be cached to a temporary table in the database at critical points in the flow (after a Merge or Select node, for example) to further improve performance\.
* By making use of the performance and scalability of the database\. Efficiency is increased because a DBMS can often take advantage of parallel processing, more powerful hardware, more sophisticated management of disk storage, and the presence of indexes\.
<!-- </ul> -->
Given these advantages, watsonx\.ai is designed to maximize the amount of SQL generated by each SPSS Modeler flow so that only those operations that can't be compiled to SQL are executed by watsonx\.ai\. Because of limitations in what can be expressed in standard SQL (SQL\-92), however, certain operations may not be supported\.
For details about currently supported databases, see [Supported data sources for SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsd/spss-connections.html)\.
Tips:
<!-- <ul> -->
* When running a flow, nodes that push back to your database are highlighted with a small SQL icon beside the node\. When you start making edits to a flow after running it, the icons will be removed until the next time you run the flow\.
Figure 1. SQL pushback indicator

* If you want to see which nodes will push back *before* running a flow, click SQL preview\. This enables you to modify the flow before you run it to improve performance by moving the non\-pushback operations as far downstream as possible, for example\.
* If a node can't be pushed back, all subsequent nodes in the flow won't be pushed back either (pushback stops at that node)\. This may impact how you want to organize the order of nodes in your flow\.
<!-- </ul> -->
Notes: Keep the following information in mind regarding SQL:
<!-- <ul> -->
* Because of minor differences in SQL implementation, flows that run in a database may return slightly different results when executed in watsonx\.ai\. These differences may also vary depending on the database vendor, for similar reasons\. For example, depending on the database configuration for case sensitivity in string comparison and string collation, SPSS Modeler flows that run using SQL pushback may produce different results from those that run without SQL pushback\. Contact your database administrator for advice on configuring your database\. To maximize compatibility with watsonx\.ai, database string comparisons should be case sensitive\.
* When using watsonx\.ai to generate SQL, it's possible the result using SQL pushback is not consistent on some platforms (Linux, for example)\. This is because floating point is handled differently on different platforms\.
<!-- </ul> -->
<!-- </article "role="article" "> -->
|
2C669E0145DAC26A7517D9402874BAC048E46E82 | https://dataplatform.cloud.ibm.com/docs/content/wsd/sql_tips.html?context=cdpaas&locale=en | SQL optimization (SPSS Modeler) | Tips for maximizing SQL pushback
To get the best performance boost from SQL optimization, pay attention to the items in this section.
Flow order. SQL generation may be halted when the function of the node has no semantic equivalent in SQL because SPSS Modeler’s data-mining functionality is richer than the traditional data-processing operations supported by standard SQL. When this happens, SQL generation is also suppressed for any downstream nodes. Therefore, you may be able to significantly improve performance by reordering nodes to put operations that halt SQL as far downstream as possible. The SQL optimizer can do a certain amount of reordering automatically, but further improvements may be possible. A good candidate for this is the Select node, which can often be brought forward. See [Nodes supporting SQL pushback](https://dataplatform.cloud.ibm.com/docs/content/wsd/sql_nodes.html) for more information.
CLEM expressions. If a flow can't be reordered, you may be able to change node options or CLEM expressions or otherwise recast the way the operation is performed, so that it no longer inhibits SQL generation. Derive, Select, and similar nodes can commonly be rendered into SQL, provided that all of the CLEM expression operators have SQL equivalents. Most operators can be rendered, but there are a number of operators that inhibit SQL generation (in particular, the sequence functions [“@ functions”]). Sometimes generation is halted because the generated query has become too complex for the database to handle. See [CLEM expressions and operators supporting SQL pushback](https://dataplatform.cloud.ibm.com/docs/content/wsd/sql_clem.html) for more information.
Multiple input nodes. Where a flow has multiple data import nodes, SQL generation is applied to each import branch independently. If generation is halted on one branch, it can continue on another. Where two branches merge (and both branches can be expressed in SQL up to the merge), the merge itself can often be replaced with a database join, and generation can be continued downstream.
Scoring models. In-database scoring is supported for some models by rendering the generated model into SQL. However, some models generate extremely complex SQL expressions that aren't always evaluated effectively within the database. For this reason, SQL generation must be enabled separately for each generated model nugget. If you find that a model nugget is inhibiting SQL generation, open the model nugget's settings and select Generate SQL for this model (with some models, you may have additional options controlling generation). Run tests to confirm that the option is beneficial for your application. See [Nodes supporting SQL pushback](https://dataplatform.cloud.ibm.com/docs/content/wsd/sql_nodes.html) for more information.
When testing modeling nodes to see if SQL generation for models works effectively, we recommend first saving all flows from SPSS Modeler. Note that some database systems may hang while trying to process the (potentially complex) generated SQL.
Database caching. If you are using a node cache to save data at critical points in the flow (for example, following a Merge or Aggregate node), make sure that database caching is enabled along with SQL optimization. This will allow data to be cached to a temporary table in the database (rather than the file system) in most cases.
Vendor-specific SQL. Most of the generated SQL is standards-conforming (SQL-92), but some nonstandard, vendor-specific features are exploited where practical. The degree of SQL optimization can vary, depending on the database source.
| # Tips for maximizing SQL pushback #
To get the best performance boost from SQL optimization, pay attention to the items in this section\.
Flow order\. SQL generation may be halted when the function of the node has no semantic equivalent in SQL because SPSS Modeler’s data\-mining functionality is richer than the traditional data\-processing operations supported by standard SQL\. When this happens, SQL generation is also suppressed for any downstream nodes\. Therefore, you may be able to significantly improve performance by reordering nodes to put operations that halt SQL as far downstream as possible\. The SQL optimizer can do a certain amount of reordering automatically, but further improvements may be possible\. A good candidate for this is the Select node, which can often be brought forward\. See [Nodes supporting SQL pushback](https://dataplatform.cloud.ibm.com/docs/content/wsd/sql_nodes.html) for more information\.
CLEM expressions\. If a flow can't be reordered, you may be able to change node options or CLEM expressions or otherwise recast the way the operation is performed, so that it no longer inhibits SQL generation\. Derive, Select, and similar nodes can commonly be rendered into SQL, provided that all of the CLEM expression operators have SQL equivalents\. Most operators can be rendered, but there are a number of operators that inhibit SQL generation (in particular, the sequence functions \[“@ functions”\])\. Sometimes generation is halted because the generated query has become too complex for the database to handle\. See [CLEM expressions and operators supporting SQL pushback](https://dataplatform.cloud.ibm.com/docs/content/wsd/sql_clem.html) for more information\.
Multiple input nodes\. Where a flow has multiple data import nodes, SQL generation is applied to each import branch independently\. If generation is halted on one branch, it can continue on another\. Where two branches merge (and both branches can be expressed in SQL up to the merge), the merge itself can often be replaced with a database join, and generation can be continued downstream\.
Scoring models\. In\-database scoring is supported for some models by rendering the generated model into SQL\. However, some models generate extremely complex SQL expressions that aren't always evaluated effectively within the database\. For this reason, SQL generation must be enabled separately for each generated model nugget\. If you find that a model nugget is inhibiting SQL generation, open the model nugget's settings and select Generate SQL for this model (with some models, you may have additional options controlling generation)\. Run tests to confirm that the option is beneficial for your application\. See [Nodes supporting SQL pushback](https://dataplatform.cloud.ibm.com/docs/content/wsd/sql_nodes.html) for more information\.
When testing modeling nodes to see if SQL generation for models works effectively, we recommend first saving all flows from SPSS Modeler\. Note that some database systems may hang while trying to process the (potentially complex) generated SQL\.
Database caching\. If you are using a node cache to save data at critical points in the flow (for example, following a Merge or Aggregate node), make sure that database caching is enabled along with SQL optimization\. This will allow data to be cached to a temporary table in the database (rather than the file system) in most cases\.
Vendor\-specific SQL\. Most of the generated SQL is standards\-conforming (SQL\-92), but some nonstandard, vendor\-specific features are exploited where practical\. The degree of SQL optimization can vary, depending on the database source\.
<!-- </article "role="article" "> -->
|
3874AAF67EF04BB4D623FFF07E1CDB4C25B3B33E | https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials.html?context=cdpaas&locale=en | Tutorials (SPSS Modeler) | Tutorials
These tutorials use the assets that are available in the sample project, and they provide brief, targeted introductions to specific modeling methods and techniques.
You can build the example flows provided by following the steps in the tutorials.
Some of the simple flows are already completed in the projects, but you can still walk through them using their accompanying tutorials. Some of the more complicated flows must be completed by following the steps in the tutorials.
Important: Before you begin the tutorials, complete the following steps to create the sample projects.
| # Tutorials #
These tutorials use the assets that are available in the sample project, and they provide brief, targeted introductions to specific modeling methods and techniques\.
You can build the example flows provided by following the steps in the tutorials\.
Some of the simple flows are already completed in the projects, but you can still walk through them using their accompanying tutorials\. Some of the more complicated flows must be completed by following the steps in the tutorials\.
Important: Before you begin the tutorials, complete the following steps to create the sample projects\.
<!-- </article "role="article" "> -->
|
6E50438308B85E969B79DED22CC5E15F6872EE85 | https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_autocont.html?context=cdpaas&locale=en | Automated modeling for a continuous target (SPSS Modeler) | Automated modeling for a continuous target
You can use the Auto Numeric node to automatically create and compare different models for continuous (numeric range) outcomes, such as predicting the taxable value of a property. With a single node, you can estimate and compare a set of candidate models and generate a subset of models for further analysis. The node works in the same manner as the Auto Classifier node, but for continuous rather than flag or nominal targets.
| # Automated modeling for a continuous target #
You can use the Auto Numeric node to automatically create and compare different models for continuous (numeric range) outcomes, such as predicting the taxable value of a property\. With a single node, you can estimate and compare a set of candidate models and generate a subset of models for further analysis\. The node works in the same manner as the Auto Classifier node, but for continuous rather than flag or nominal targets\.
<!-- </article "role="article" "> -->
|
2D5B33F1352D8BA7CEF029D1979CCF0D44AAD63E | https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_autocont_build.html?context=cdpaas&locale=en | Building the flow (SPSS Modeler) | Building the flow
1. Add a Data Asset node that points to property_values_train.csv.
2. Add a Type node, and select taxable_value as the target field (Role = Target). Other fields will be used as predictors.
Figure 1. Setting the measurement level and role

3. Attach an Auto Numeric node, and select Correlation as the metric used to rank models (under BASICS in the node properties).
4. Set the Number of models to use to 3. This means that the three best models will be built when you run the node.
Figure 2. Auto Numeric node BASICS

5. Under EXPERT, leave the default settings in place. The node will estimate a single model for each algorithm, for a total of six models. (Alternatively, you can modify these settings to compare multiple variants for each model type.)
Because you set Number of models to use to 3 under BASICS, the node will calculate the accuracy of the six algorithms and build a single model nugget containing the three most accurate.
Figure 3. Auto Numeric node EXPERT options

6. Under ENSEMBLE, leave the default settings in place. Since this is a continuous target, the ensemble score is generated by averaging the scores for the individual models.
| # Building the flow #
<!-- <ol> -->
1. Add a Data Asset node that points to property\_values\_train\.csv\.
2. Add a Type node, and select `taxable_value` as the target field (Role = Target)\. Other fields will be used as predictors\.
Figure 1. Setting the measurement level and role

3. Attach an Auto Numeric node, and select Correlation as the metric used to rank models (under BASICS in the node properties)\.
4. Set the Number of models to use to 3\. This means that the three best models will be built when you run the node\.
Figure 2. Auto Numeric node BASICS

5. Under EXPERT, leave the default settings in place\. The node will estimate a single model for each algorithm, for a total of six models\. (Alternatively, you can modify these settings to compare multiple variants for each model type\.)
Because you set Number of models to use to 3 under BASICS, the node will calculate the accuracy of the six algorithms and build a single model nugget containing the three most accurate.
Figure 3. Auto Numeric node EXPERT options

6. Under ENSEMBLE, leave the default settings in place\. Since this is a continuous target, the ensemble score is generated by averaging the scores for the individual models\.
<!-- </ol> -->
<!-- </article "role="article" "> -->
|
EC7FCF477E212945EAB7BB85C2279F37D62D4B49 | https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_autocont_compare.html?context=cdpaas&locale=en | Comparing the models (SPSS Modeler) | Comparing the models
1. Run the flow. A generated model nugget is built and placed on the canvas, and results are added to the Outputs panel. You can view the model nugget, or save or deploy it in a number of ways.
Right-click the model nugget and select View Model. You'll see details about each of the models created during the run. (In a real situation, in which hundreds of models are estimated on a large dataset, this could take many hours.)
Figure 1. Auto numeric example flow with model nugget

If you want to explore any of the individual models further, you can click a model name in the ESTIMATOR column to drill down and explore the individual model results.
Figure 2. Auto Numeric results

By default, models are sorted by accuracy (correlation) because correlation was the measure you selected in the Auto Numeric node's properties. For purposes of ranking, the absolute value of the correlation is used, with values closer to 1 indicating a stronger relationship.
You can sort on a different column by clicking the header for that column.
Based on these results, you decide to use all three of these most accurate models. By combining predictions from multiple models, limitations in individual models may be avoided, resulting in a higher overall accuracy.
In the USE column, make sure all three models are selected.
Attach an Analysis node (from the Outputs palette) after the model nugget. Right-click the Analysis node and choose Run to run the flow again.
Figure 3. Auto Numeric sample flow

The averaged score generated by the ensembled model is added in a field named $XR-taxable_value, with a correlation of 0.934, which is higher than those of the three individual models. The ensemble scores also show a low mean absolute error and may perform better than any of the individual models when applied to other datasets.
Figure 4. Auto Numeric sample flow analysis results

| # Comparing the models #
<!-- <ol> -->
1. Run the flow\. A generated model nugget is built and placed on the canvas, and results are added to the Outputs panel\. You can view the model nugget, or save or deploy it in a number of ways\.
<!-- </ol> -->
Right\-click the model nugget and select View Model\. You'll see details about each of the models created during the run\. (In a real situation, in which hundreds of models are estimated on a large dataset, this could take many hours\.)
Figure 1\. Auto numeric example flow with model nugget

If you want to explore any of the individual models further, you can click a model name in the ESTIMATOR column to drill down and explore the individual model results\.
Figure 2\. Auto Numeric results

By default, models are sorted by accuracy (correlation) because correlation was the measure you selected in the Auto Numeric node's properties\. For purposes of ranking, the absolute value of the correlation is used, with values closer to 1 indicating a stronger relationship\.
You can sort on a different column by clicking the header for that column\.
Based on these results, you decide to use all three of these most accurate models\. By combining predictions from multiple models, limitations in individual models may be avoided, resulting in a higher overall accuracy\.
In the USE column, make sure all three models are selected\.
Attach an Analysis node (from the Outputs palette) after the model nugget\. Right\-click the Analysis node and choose Run to run the flow again\.
Figure 3\. Auto Numeric sample flow

The averaged score generated by the ensembled model is added in a field named `$XR-taxable_value`, with a correlation of 0\.934, which is higher than those of the three individual models\. The ensemble scores also show a low mean absolute error and may perform better than any of the individual models when applied to other datasets\.
Figure 4\. Auto Numeric sample flow analysis results

<!-- </article "role="article" "> -->
|
69ED00ABB6B920D1FE4F5B5675AFDA422F04E8D8 | https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_autocont_summary.html?context=cdpaas&locale=en | Summary (SPSS Modeler) | Summary
With this example Automated Modeling for a Continuous Target flow, you used the Auto Numeric node to compare a number of different models, selected the three most accurate models, and added them to the flow within an ensembled Auto Numeric model nugget.
The ensembled model showed performance that was better than two of the individual models and may perform better when applied to other datasets. If your goal is to automate the process as much as possible, this approach allows you to obtain a robust model under most circumstances without having to dig deeply into the specifics of any one model.
| # Summary #
With this example Automated Modeling for a Continuous Target flow, you used the Auto Numeric node to compare a number of different models, selected the three most accurate models, and added them to the flow within an ensembled Auto Numeric model nugget\.
The ensembled model showed performance that was better than two of the individual models and may perform better when applied to other datasets\. If your goal is to automate the process as much as possible, this approach allows you to obtain a robust model under most circumstances without having to dig deeply into the specifics of any one model\.
<!-- </article "role="article" "> -->
|
3D999C84C01328A45EBF0ECAD358D858C634DF5B | https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_autocont_train.html?context=cdpaas&locale=en | Training data (SPSS Modeler) | Training data
The data file includes a field named taxable_value, which is the target field, or value, that you want to predict. The other fields contain information such as neighborhood, building type, and interior volume, and may be used as predictors.
Field name Label
property_id Property ID
neighborhood Area within the city
building_type Type of building
year_built Year built
volume_interior Volume of interior
volume_other Volume of garage and extra buildings
lot_size Lot size
taxable_value Taxable value
| # Training data #
The data file includes a field named `taxable_value`, which is the target field, or value, that you want to predict\. The other fields contain information such as neighborhood, building type, and interior volume, and may be used as predictors\.
<!-- <table "summary="" id="tut_autocont_train__table_yhn" class="defaultstyle" "> -->
| Field name | Label |
| ----------------- | ------------------------------------ |
| `property_id` | Property ID |
| `neighborhood` | Area within the city |
| `building_type` | Type of building |
| `year_built` | Year built |
| `volume_interior` | Volume of interior |
| `volume_other` | Volume of garage and extra buildings |
| `lot_size` | Lot size |
| `taxable_value` | Taxable value |
<!-- </table "summary="" id="tut_autocont_train__table_yhn" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
D96C3A08A5607BDCB1BC85E0BEDD8743EA0B3DC5 | https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_autodata.html?context=cdpaas&locale=en | Automated data preparation (SPSS Modeler) | Automated data preparation
Preparing data for analysis is one of the most important steps in any data-mining project—and traditionally, one of the most time consuming. The Auto Data Prep node handles the task for you, analyzing your data and identifying fixes, screening out fields that are problematic or not likely to be useful, deriving new attributes when appropriate, and improving performance through intelligent screening techniques.
You can use the Auto Data Prep node in fully automated fashion, allowing the node to choose and apply fixes, or you can preview the changes before they're made and accept or reject them as desired. With this node, you can ready your data for data mining quickly and easily, without the need for prior knowledge of the statistical concepts involved. If you run the node with the default settings, models will tend to build and score more quickly.
This example uses the flow named Automated Data Preparation, available in the example project. The data file is telco.csv. This example demonstrates the increased accuracy you can achieve by using the default Auto Data Prep node settings when building models.
Let's take a look at the flow.
1. Open the Example Project.
2. Scroll down to the Modeler flows section, click View all, and select the Automated Data Preparation flow.
| # Automated data preparation #
Preparing data for analysis is one of the most important steps in any data\-mining project—and traditionally, one of the most time consuming\. The Auto Data Prep node handles the task for you, analyzing your data and identifying fixes, screening out fields that are problematic or not likely to be useful, deriving new attributes when appropriate, and improving performance through intelligent screening techniques\.
You can use the Auto Data Prep node in fully automated fashion, allowing the node to choose and apply fixes, or you can preview the changes before they're made and accept or reject them as desired\. With this node, you can ready your data for data mining quickly and easily, without the need for prior knowledge of the statistical concepts involved\. If you run the node with the default settings, models will tend to build and score more quickly\.
This example uses the flow named Automated Data Preparation, available in the example project\. The data file is telco\.csv\. This example demonstrates the increased accuracy you can achieve by using the default Auto Data Prep node settings when building models\.
Let's take a look at the flow\.
<!-- <ol> -->
1. Open the Example Project\.
2. Scroll down to the Modeler flows section, click View all, and select the Automated Data Preparation flow\.
<!-- </ol> -->
<!-- </article "role="article" "> -->
|
895CD261C9F06F272286BCCA3555846FB1ED8AA3 | https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_autodata_build.html?context=cdpaas&locale=en | Building the flow (SPSS Modeler) | Building the flow
1. Add a Data Asset node that points to telco.csv.
Figure 1. Auto Data Prep example flow

2. Attach a Type node to the Data Asset node. Set the measure for the churn field to Flag, and set the role to Target. Make sure the role for all other fields is set to Input.
Figure 2. Setting the measurement level and role

3. Attach a Logistic node to the Type node.
4. In the Logistic node's properties, under MODEL SETTINGS, select the Binomial procedure. For Model Name, select Custom and enter No ADP - churn.
Figure 3. Choosing model options

5. Attach an Auto Data Prep node to the Type node. Under OBJECTIVES, leave the default settings in place to analyze and prepare your data by balancing both speed and accuracy.
6. Run the flow to analyze and process your data. Other Auto Data Prep node properties allow you to specify whether you want to concentrate more on accuracy, more on the speed of processing, or to fine-tune many of the data preparation processing steps. Note: If you want to adjust the node properties and run the flow again in the future, since the model already exists, you must first click Clear Analysis under OBJECTIVES before running the flow again.
Figure 4. Auto Data Prep default objectives

7. Attach a Logistic node to the Auto Data Prep node.
8. In the Logistic node's properties, under MODEL SETTINGS, select the Binomial procedure. For Model Name, select Custom and enter After ADP - churn.
| # Building the flow #
<!-- <ol> -->
1. Add a Data Asset node that points to telco\.csv\.
Figure 1. Auto Data Prep example flow

2. Attach a Type node to the Data Asset node\. Set the measure for the `churn` field to Flag, and set the role to Target\. Make sure the role for all other fields is set to Input\.
Figure 2. Setting the measurement level and role

3. Attach a Logistic node to the Type node\.
4. In the Logistic node's properties, under MODEL SETTINGS, select the Binomial procedure\. For Model Name, select Custom and enter No ADP \- churn\.
Figure 3. Choosing model options

5. Attach an Auto Data Prep node to the Type node\. Under OBJECTIVES, leave the default settings in place to analyze and prepare your data by balancing both speed and accuracy\.
6. Run the flow to analyze and process your data\. Other Auto Data Prep node properties allow you to specify whether you want to concentrate more on accuracy, more on the speed of processing, or to fine\-tune many of the data preparation processing steps\. Note: If you want to adjust the node properties and run the flow again in the future, since the model already exists, you must first click Clear Analysis under OBJECTIVES before running the flow again\.
Figure 4. Auto Data Prep default objectives

7. Attach a Logistic node to the Auto Data Prep node\.
8. In the Logistic node's properties, under MODEL SETTINGS, select the Binomial procedure\. For Model Name, select Custom and enter After ADP \- churn\.
<!-- </ol> -->
<!-- </article "role="article" "> -->
|
B523EBE64275BEE04D480B55CCAEAC3017A36980 | https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_autodata_compare.html?context=cdpaas&locale=en | Comparing the models (SPSS Modeler) | Comparing the models
1. Right-click each Logistic node and run it to create the model nuggets, which are added to the flow. Results are also added to the Outputs panel.
Figure 1. Attaching the model nuggets

2. Attach Analysis nodes to the model nuggets and run the Analysis nodes (using their default settings).
Figure 2. Attaching the Analysis nodes
The Analysis of the model built without the Auto Data Prep node shows that simply running the data through the Logistic Regression node with its default settings gives a model with low accuracy - just 10.6%.
Figure 3. Non ADP-derived model results
The Analysis of the model built with the Auto Data Prep node shows that by running the data through the default Auto Data Prep settings, you have built a much more accurate model that's 78.3% correct.
Figure 4. ADP-derived model results

In summary, by just running the Auto Data Prep node to fine tune the processing of your data, you were able to build a more accurate model with little direct data manipulation.
Obviously, if you're interested in proving or disproving a certain theory, or want to build specific models, you may find it beneficial to work directly with the model settings. However, for those with a reduced amount of time, or with a large amount of data to prepare, the Auto Data Prep node may give you an advantage.
Note that the results in this example are based on the training data only. To assess how well models generalize to other data in the real world, you would use a Partition node to hold out a subset of records for purposes of testing and validation.
| # Comparing the models #
<!-- <ol> -->
1. Right\-click each Logistic node and run it to create the model nuggets, which are added to the flow\. Results are also added to the Outputs panel\.
Figure 1. Attaching the model nuggets

2. Attach Analysis nodes to the model nuggets and run the Analysis nodes (using their default settings)\.
Figure 2. Attaching the Analysis nodes
The Analysis of the model built without the Auto Data Prep node shows that simply running the data through the Logistic Regression node with its default settings gives a model with low accuracy - just 10.6%.
Figure 3. Non ADP-derived model results
The Analysis of the model built with the Auto Data Prep node shows that by running the data through the default Auto Data Prep settings, you have built a much more accurate model that's 78.3% correct.
Figure 4. ADP-derived model results

<!-- </ol> -->
In summary, by just running the Auto Data Prep node to fine tune the processing of your data, you were able to build a more accurate model with little direct data manipulation\.
Obviously, if you're interested in proving or disproving a certain theory, or want to build specific models, you may find it beneficial to work directly with the model settings\. However, for those with a reduced amount of time, or with a large amount of data to prepare, the Auto Data Prep node may give you an advantage\.
Note that the results in this example are based on the training data only\. To assess how well models generalize to other data in the real world, you would use a Partition node to hold out a subset of records for purposes of testing and validation\.
<!-- </article "role="article" "> -->
|
1A548D934DFE57DD0F12195461F2DDB348EAE68C | https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_autoflag.html?context=cdpaas&locale=en | Automated modeling for a flag target (SPSS Modeler) | Automated modeling for a flag target
With the Auto Classifier node, you can automatically create and compare a number of different models for either flag (such as whether or not a given customer is likely to default on a loan or respond to a particular offer) or nominal (set) targets.
| # Automated modeling for a flag target #
With the Auto Classifier node, you can automatically create and compare a number of different models for either flag (such as whether or not a given customer is likely to default on a loan or respond to a particular offer) or nominal (set) targets\.
<!-- </article "role="article" "> -->
|
CE7976AFE82E2D17EE1FA308570AFA42E0E91667 | https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_autoflag_build.html?context=cdpaas&locale=en | Building the flow (SPSS Modeler) | Building the flow
1. Add a Data Asset node that points to pm_customer_train1.csv.
2. Add a Type node, and select response as the target field (Role = Target). Set the measure for this field to Flag.
Figure 1. Setting the measurement level and role

3. Set the role to None for the following fields: customer_id, campaign, response_date, purchase, purchase_date, product_id, Rowid, and X_random. These fields will be ignored when you are building the model.
4. Click Read Values in the Type node to make sure that values are instantiated.
As we saw earlier, our source data includes information about four different campaigns, each targeted to a different type of customer account. These campaigns are coded as integers in the data, so to make it easier to remember which account type each integer represents, let's define labels for each one.
Figure 2. Choosing to specify values for a field

5. On the row for the campaign field, click the entry in the Value mode column.
6. Choose Specify from the drop-down.
Figure 3. Defining labels for the field values

7. Click the Edit icon in the column for the campaign field. Type the labels as shown for each of the four values.
8. Click OK. Now the labels will be displayed in output windows instead of the integers.
9. Attach a Table node to the Type node.
10. Right-click the Table node and select Run.
11. In the Outputs panel, double-click the table output to open it.
12. Click OK to close the output window.
Although the data includes information about four different campaigns, you will focus the analysis on one campaign at a time. Since the largest number of records fall under the Premium account campaign (coded campaign=2 in the data), you can use a Select node to include only these records in the flow.
Figure 4. Selecting records for a single campaign

| # Building the flow #
<!-- <ol> -->
1. Add a Data Asset node that points to pm\_customer\_train1\.csv\.
2. Add a Type node, and select `response` as the target field (Role = Target)\. Set the measure for this field to Flag\.
Figure 1. Setting the measurement level and role

3. Set the role to None for the following fields: `customer_id`, `campaign`, `response_date`, `purchase`, `purchase_date`, `product_id`, `Rowid`, and `X_random`\. These fields will be ignored when you are building the model\.
4. Click Read Values in the Type node to make sure that values are instantiated\.
As we saw earlier, our source data includes information about four different campaigns, each targeted to a different type of customer account. These campaigns are coded as integers in the data, so to make it easier to remember which account type each integer represents, let's define labels for each one.
Figure 2. Choosing to specify values for a field

5. On the row for the campaign field, click the entry in the Value mode column\.
6. Choose Specify from the drop\-down\.
Figure 3. Defining labels for the field values

7. Click the Edit icon in the column for the campaign field\. Type the labels as shown for each of the four values\.
8. Click OK\. Now the labels will be displayed in output windows instead of the integers\.
9. Attach a Table node to the Type node\.
10. Right\-click the Table node and select Run\.
11. In the Outputs panel, double\-click the table output to open it\.
12. Click OK to close the output window\.
<!-- </ol> -->
Although the data includes information about four different campaigns, you will focus the analysis on one campaign at a time\. Since the largest number of records fall under the Premium account campaign (coded `campaign=2` in the data), you can use a Select node to include only these records in the flow\.
Figure 4\. Selecting records for a single campaign

<!-- </article "role="article" "> -->
|
B57A4B94BFAFDD0CD6EDBDFA4ABA1F708286E918 | https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_autoflag_historical.html?context=cdpaas&locale=en | Historical data (SPSS Modeler) | Historical data
This example uses the data file pm_customer_train1.csv, which contains historical data that tracks the offers made to specific customers in past campaigns, as indicated by the value of the campaign field. The largest number of records fall under the Premium account campaign.
The values of the campaign field are actually coded as integers in the data (for example 2 = Premium account). Later, you'll define labels for these values that you can use to give more meaningful output.
Figure 1. Data about previous promotions

The file also includes a response field that indicates whether the offer was accepted (0 = no, and 1 = yes). This will be the target field, or value, that you want to predict. A number of fields containing demographic and financial information about each customer are also included. These can be used to build or "train" a model that predicts response rates for individuals or groups based on characteristics such as income, age, or number of transactions per month.
| # Historical data #
This example uses the data file pm\_customer\_train1\.csv, which contains historical data that tracks the offers made to specific customers in past campaigns, as indicated by the value of the `campaign` field\. The largest number of records fall under the `Premium account` campaign\.
The values of the `campaign` field are actually coded as integers in the data (for example `2 = Premium account`)\. Later, you'll define labels for these values that you can use to give more meaningful output\.
Figure 1\. Data about previous promotions

The file also includes a `response` field that indicates whether the offer was accepted (`0 = no`, and `1 = yes`)\. This will be the target field, or value, that you want to predict\. A number of fields containing demographic and financial information about each customer are also included\. These can be used to build or "train" a model that predicts response rates for individuals or groups based on characteristics such as income, age, or number of transactions per month\.
<!-- </article "role="article" "> -->
|
C4773EF8B0935E8DE084C1A6285EFE11E2A5F80A | https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_autoflag_models.html?context=cdpaas&locale=en | Generating and comparing models (SPSS Modeler) | Generating and comparing models
1. Attach an Auto Classifier node, open its BUILD OPTIONS properties, and select Overall accuracy as the metric used to rank models.
2. Set the Number of models to use to 3. This means that the three best models will be built when you run the node.
Figure 1. Auto Classifier node, build options

Under the EXPERT options, you can choose from many different modeling algorithms.
3. Deselect the Discriminant and SVM model types. (These models take longer to train on this data, so deselecting them will speed up the example. If you don't mind waiting, feel free to leave them selected.)
Because you set Number of models to use to 3 under BUILD OPTIONS, the node will calculate the accuracy of the remaining algorithms and generate a single model nugget containing the three most accurate.
Figure 2. Auto Classifier node, expert options

4. Under the ENSEMBLE options, select Confidence-weighted voting for the ensemble method. This determines how a single aggregated score is produced for each record.
With simple voting, if two out of three models predict yes, then yes wins by a vote of 2 to 1. In the case of confidence-weighted voting, the votes are weighted based on the confidence value for each prediction. Thus, if one model predicts no with a higher confidence than the two yes predictions combined, then no wins.
Figure 3. Auto Classifier node, ensemble options

5. Run the flow. After a few minutes, the generated model nugget is built and placed on the canvas, and results are added to the Outputs panel. You can view the model nugget, or save or deploy it in a number of other ways.
6. Right-click the model nugget and select View Model. You'll see details about each of the models created during the run. (In a real situation, in which hundreds of models may be created on a large dataset, this could take many hours.)
If you want to explore any of the individual models further, you can click their links in the Estimator column to drill down and browse the individual model results.
Figure 4. Auto Classifier results

By default, models are sorted based on overall accuracy, because this was the measure you selected in the Auto Classifier node properties. The XGBoost Tree model ranks best by this measure, but the C5.0 and C&RT models are nearly as accurate.
Based on these results, you decide to use all three of these most accurate models. By combining predictions from multiple models, limitations in individual models may be avoided, resulting in a higher overall accuracy.
7. In the USE column, select the three models. Return to the flow.
8. Attach an Analysis output node after the model nugget. Right-click the Analysis node and choose Run to run the flow.
Figure 5. Auto Classifier example flow

The aggregated score generated by the ensembled model is shown in a field named $XF-response. When measured against the training data, the predicted value matches the actual response (as recorded in the original response field) with an overall accuracy of 92.77%. While not quite as accurate as the best of the three individual models in this case (92.82% for C5.0), the difference is too small to be meaningful. In general terms, an ensembled model will typically be more likely to perform well when applied to datasets other than the training data.
Figure 6. Analysis of the three ensembled models

| # Generating and comparing models #
<!-- <ol> -->
1. Attach an Auto Classifier node, open its BUILD OPTIONS properties, and select Overall accuracy as the metric used to rank models\.
2. Set the Number of models to use to 3\. This means that the three best models will be built when you run the node\.
Figure 1. Auto Classifier node, build options

Under the EXPERT options, you can choose from many different modeling algorithms.
3. Deselect the Discriminant and SVM model types\. (These models take longer to train on this data, so deselecting them will speed up the example\. If you don't mind waiting, feel free to leave them selected\.)
Because you set Number of models to use to 3 under BUILD OPTIONS, the node will calculate the accuracy of the remaining algorithms and generate a single model nugget containing the three most accurate.
Figure 2. Auto Classifier node, expert options

4. Under the ENSEMBLE options, select Confidence\-weighted voting for the ensemble method\. This determines how a single aggregated score is produced for each record\.
With simple voting, if two out of three models predict *yes*, then *yes* wins by a vote of 2 to 1. In the case of confidence-weighted voting, the votes are weighted based on the confidence value for each prediction. Thus, if one model predicts *no* with a higher confidence than the two *yes* predictions combined, then *no* wins. A short sketch after this list illustrates the difference between the two voting schemes.
Figure 3. Auto Classifier node, ensemble options

5. Run the flow\. After a few minutes, the generated model nugget is built and placed on the canvas, and results are added to the Outputs panel\. You can view the model nugget, or save or deploy it in a number of other ways\.
6. Right\-click the model nugget and select View Model\. You'll see details about each of the models created during the run\. (In a real situation, in which hundreds of models may be created on a large dataset, this could take many hours\.)
If you want to explore any of the individual models further, you can click their links in the Estimator column to drill down and browse the individual model results.
Figure 4. Auto Classifier results

By default, models are sorted based on overall accuracy, because this was the measure you selected in the Auto Classifier node properties. The XGBoost Tree model ranks best by this measure, but the C5.0 and C&RT models are nearly as accurate.
Based on these results, you decide to use all three of these most accurate models. By combining predictions from multiple models, limitations in individual models may be avoided, resulting in a higher overall accuracy.
7. In the USE column, select the three models\. Return to the flow\.
8. Attach an Analysis output node after the model nugget\. Right\-click the Analysis node and choose Run to run the flow\.
Figure 5. Auto Classifier example flow

The aggregated score generated by the ensembled model is shown in a field named `$XF-response`. When measured against the training data, the predicted value matches the actual response (as recorded in the original `response` field) with an overall accuracy of 92.77%. While not quite as accurate as the best of the three individual models in this case (92.82% for C5.0), the difference is too small to be meaningful. In general terms, an ensembled model will typically be more likely to perform well when applied to datasets other than the training data.
Figure 6. Analysis of the three ensembled models

<!-- </ol> -->
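Returning to the voting options described above, the following sketch contrasts simple voting with confidence-weighted voting for a single record. The model names match this example, but the predicted classes and confidence values are purely illustrative, and this is not SPSS Modeler's internal implementation:

```python
# One scored record: each model contributes a predicted class and a confidence
predictions = [
    {"model": "XGBoost Tree", "class": "yes", "confidence": 0.42},
    {"model": "C5.0",         "class": "yes", "confidence": 0.45},
    {"model": "C&R Tree",     "class": "no",  "confidence": 0.95},
]

# Simple voting: one vote per model, the majority class wins
simple_votes = {}
for p in predictions:
    simple_votes[p["class"]] = simple_votes.get(p["class"], 0) + 1

# Confidence-weighted voting: each vote is weighted by the prediction's confidence
weighted_votes = {}
for p in predictions:
    weighted_votes[p["class"]] = weighted_votes.get(p["class"], 0.0) + p["confidence"]

print("Simple voting:             ", max(simple_votes, key=simple_votes.get))      # yes (2 votes to 1)
print("Confidence-weighted voting:", max(weighted_votes, key=weighted_votes.get))  # no (0.95 > 0.42 + 0.45)
```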
<!-- </article "role="article" "> -->
|
823D9660B5B41B7C85904D0EB88A8D40AC57383F | https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_autoflag_summary.html?context=cdpaas&locale=en | Summary (SPSS Modeler) | Summary
With this example Automated Modeling for a Flag Target flow, you used the Auto Classifier node to compare a number of different models, used the three most accurate models, and added them to the flow within an ensembled Auto Classifier model nugget.
* Based on overall accuracy, the XGBoost Tree, C5.0, and C&R Tree models performed best on the training data.
* The ensembled model performed nearly as well as the best of the individual models and may perform better when applied to other datasets. If your goal is to automate the process as much as possible, this approach allows you to obtain a robust model under most circumstances without having to dig deeply into the specifics of any one model.
| # Summary #
With this example Automated Modeling for a Flag Target flow, you used the Auto Classifier node to compare a number of different models, used the three most accurate models, and added them to the flow within an ensembled Auto Classifier model nugget\.
<!-- <ul> -->
* Based on overall accuracy, the XGBoost Tree, C5\.0, and C&R Tree models performed best on the training data\.
* The ensembled model performed nearly as well as the best of the individual models and may perform better when applied to other datasets\. If your goal is to automate the process as much as possible, this approach allows you to obtain a robust model under most circumstances without having to dig deeply into the specifics of any one model\.
<!-- </ul> -->
<!-- </article "role="article" "> -->
|
B2CA734AE719BA79AB4B5F877CF044F47090FAEC | https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_bandwidth.html?context=cdpaas&locale=en | Forecasting bandwidth utilization (SPSS Modeler) | Forecasting bandwidth utilization
An analyst for a national broadband provider is required to produce forecasts of user subscriptions to predict utilization of bandwidth. Forecasts are needed for each of the local markets that make up the national subscriber base.
You'll use time series modeling to produce forecasts for the next three months for a number of local markets.
| # Forecasting bandwidth utilization #
An analyst for a national broadband provider is required to produce forecasts of user subscriptions to predict utilization of bandwidth\. Forecasts are needed for each of the local markets that make up the national subscriber base\.
You'll use time series modeling to produce forecasts for the next three months for a number of local markets\.
<!-- </article "role="article" "> -->
|
718CD1A731E0F4E5ABFD77519ED254B5CCC670FB | https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_bandwidth_forecast.html?context=cdpaas&locale=en | Forecasting with the Time Series node (SPSS Modeler) | Forecasting with the Time Series node
This example uses the flow Forecasting Bandwidth Utilization, available in the example project. The data file is broadband_1.csv.
In SPSS Modeler, you can produce multiple time series models in a single operation. The broadband_1.csv data file has monthly usage data for each of 85 local markets. For the purposes of this example, only the first five series will be used; a separate model will be created for each of these five series, plus a total.
The file also includes a date field that indicates the month and year for each record. This field will be used to label records. The date field reads into SPSS Modeler as a string, so to use it you will convert its storage type to Date format using a Filler node.
Figure 1. Example flow to show Time Series modeling

The Time Series node requires that each series be in a separate column, with a row for each interval. Watson Studio provides methods for transforming data to match this format if necessary.
Figure 2. Monthly subscription data for broadband local markets

| # Forecasting with the Time Series node #
This example uses the flow Forecasting Bandwidth Utilization, available in the example project\. The data file is broadband\_1\.csv\.
In SPSS Modeler, you can produce multiple time series models in a single operation\. The broadband\_1\.csv data file has monthly usage data for each of 85 local markets\. For the purposes of this example, only the first five series will be used; a separate model will be created for each of these five series, plus a total\.
The file also includes a date field that indicates the month and year for each record\. This field will be used to label records\. The date field reads into SPSS Modeler as a string, so to use it you will convert its storage type to Date format using a Filler node\.
Figure 1\. Example flow to show Time Series modeling

The Time Series node requires that each series be in a separate column, with a row for each interval\. Watson Studio provides methods for transforming data to match this format if necessary\.
Figure 2\. Monthly subscription data for broadband local markets

<!-- </article "role="article" "> -->
|
C143A9F5185D9303301630D3FC53B604D3DCED2E | https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_bandwidth_forecast_build.html?context=cdpaas&locale=en | Creating the flow (SPSS Modeler) | Creating the flow
1. Add a Data Asset node that points to broadband_1.csv.
2. To simplify the model, use a Filter node to filter out the Market_6 to Market_85 fields and the MONTH_ and YEAR_ fields.
Figure 1. Example flow to show Time Series modeling

| # Creating the flow #
<!-- <ol> -->
1. Add a Data Asset node that points to broadband\_1\.csv\.
2. To simplify the model, use a Filter node to filter out the `Market_6` to `Market_85` fields and the `MONTH_` and `YEAR_` fields\.
<!-- </ol> -->
Figure 1\. Example flow to show Time Series modeling

<!-- </article "role="article" "> -->
|
EDB1038F1D71A450556D13AE34A416E46D7213FE | https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_bandwidth_forecast_data.html?context=cdpaas&locale=en | Examining the data (SPSS Modeler) | Examining the data
It's always a good idea to have a feel for the nature of your data before building a model.
Does the data exhibit seasonal variations? Although Watson Studio can automatically find the best seasonal or nonseasonal model for each series, you can often obtain faster results by limiting the search to nonseasonal models when seasonality is not present in your data. Without examining the data for each of the local markets, we can get a rough picture of the presence or absence of seasonality by plotting the total number of subscribers over all five markets.
Figure 1. Plotting the total number of subscribers

1. From the Graphs palette, attach a Time Plot node to the Filter node.
2. Add the Total field to the Series list.
3. Deselect the Display series in separate panel and Normalize options. Save the changes.
4. Right-click the Time Plot node and run it, then open the output that was generated.
Figure 2. Time plot of the Total field

The series exhibits a very smooth upward trend with no hint of seasonal variations. There might be individual series with seasonality, but it appears that seasonality isn't a prominent feature of the data in general.
Of course, you should inspect each of the series before ruling out seasonal models. You can then separate out series exhibiting seasonality and model them separately.
Watson Studio makes it easy to plot multiple series together.
5. Double-click the Time Plot node to open its properties again.
6. Remove the Total field from the Series list.
7. Add the Market_1 through Market_5 fields to the list.
8. Run the Time Plot node again.
Figure 3. Time plot of multiple fields

Inspection of each of the markets reveals a steady upward trend in each case. Although some markets are a little more erratic than others, there's no evidence of seasonality.
| # Examining the data #
It's always a good idea to have a feel for the nature of your data before building a model\.
Does the data exhibit seasonal variations? Although Watson Studio can automatically find the best seasonal or nonseasonal model for each series, you can often obtain faster results by limiting the search to nonseasonal models when seasonality is not present in your data\. Without examining the data for each of the local markets, we can get a rough picture of the presence or absence of seasonality by plotting the total number of subscribers over all five markets\.
Figure 1\. Plotting the total number of subscribers

<!-- <ol> -->
1. From the Graphs palette, attach a Time Plot node to the Filter node\.
2. Add the `Total` field to the Series list\.
3. Deselect the Display series in separate panel and Normalize options\. Save the changes\.
4. Right\-click the Time Plot node and run it, then open the output that was generated\.
Figure 2. Time plot of the Total field

The series exhibits a very smooth upward trend with no hint of seasonal variations. There might be individual series with seasonality, but it appears that seasonality isn't a prominent feature of the data in general.
Of course, you should inspect each of the series before ruling out seasonal models. You can then separate out series exhibiting seasonality and model them separately.
Watson Studio makes it easy to plot multiple series together.
5. Double\-click the Time Plot node to open its properties again\.
6. Remove the `Total` field from the Series list\.
7. Add the `Market_1` through `Market_5` fields to the list\.
8. Run the Time Plot node again\.
Figure 3. Time plot of multiple fields

Inspection of each of the markets reveals a steady upward trend in each case. Although some markets are a little more erratic than others, there's no evidence of seasonality.
<!-- </ol> -->
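If you want a quick numerical cross-check of the visual inspection, a seasonal decomposition outside Modeler is one option. A minimal sketch, assuming the CSV loads directly with pandas; this is a rough check, not what the Expert Modeler does internally:

```python
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

df = pd.read_csv("broadband_1.csv")

# Decompose the total subscriber series into trend, seasonal, and residual parts
# (monthly data, so the seasonal period is 12)
parts = seasonal_decompose(df["Total"], model="additive", period=12)

# A seasonal component that is tiny relative to the trend suggests seasonality is negligible
print("Seasonal swing:", float(parts.seasonal.max() - parts.seasonal.min()))
print("Trend swing:   ", float(parts.trend.max() - parts.trend.min()))
```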
<!-- </article "role="article" "> -->
|
0721692D3F363B864A241FC4644D7D57B2DFF881 | https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_bandwidth_forecast_dates.html?context=cdpaas&locale=en | Defining the dates (SPSS Modeler) | Defining the dates
Now you need to change the storage type of the DATE_ field to date format.
1. Attach a Filler node to the Filter node, then double-click the Filler node to open its properties
2. Add the DATE_ field, set the Replace option to Always, and set the Replace with value to to_date(DATE_).
Figure 1. Setting the date storage type

| # Defining the dates #
Now you need to change the storage type of the `DATE_` field to date format\.
<!-- <ol> -->
1. Attach a Filler node to the Filter node, then double\-click the Filler node to open its properties
2. Add the `DATE_` field, set the Replace option to Always, and set the Replace with value to `to_date(DATE_)`\.
Figure 1. Setting the date storage type

<!-- </ol> -->
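For comparison, the equivalent conversion outside SPSS Modeler is a single pandas call. A minimal sketch, assuming the DATE_ strings are in a format that pandas can parse automatically:

```python
import pandas as pd

df = pd.read_csv("broadband_1.csv")

# Equivalent of the Filler node's to_date(DATE_): replace the string column with real dates
df["DATE_"] = pd.to_datetime(df["DATE_"])

print(df["DATE_"].dtype)  # datetime64[ns]
```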
<!-- </article "role="article" "> -->
|
03DA4D2D23A65C146BA5AFD8F7175908F868F3EB | https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_bandwidth_forecast_examine.html?context=cdpaas&locale=en | Examining the model (SPSS Modeler) | Examining the model
1. Right-click the Time Series model nugget and select View Model to see information about the models generated for each of the markets.
Figure 1. Time Series models generated for the markets

2. In the left TARGET column, select any of the markets. Then go to Model Information. The Number of Predictors row shows how many fields were used as predictors for each target.
The other rows in the Model Information tables show various goodness-of-fit measures for each model. Stationary R-Squared measures how much better a model is than a baseline model. If the final model is ARIMA(p,d,q)(P,D,Q), the baseline model is ARIMA(0,d,0)(0,D,0). If the final model is an Exponential Smoothing model, then d is 2 for the Brown and Holt models and 1 for other models, and D is 1 if the seasonal length is greater than 1, otherwise D is 0. A negative Stationary R-Squared means that the model under consideration is worse than the baseline model, a value of zero means that it is no better than the baseline model, and a positive value means that it is better than the baseline model.
The Statistic and df lines, and the Significance under Parameter Estimates, relate to the Ljung-Box statistic, a test of the randomness of the residual errors in the model. The more random the errors, the better the model is likely to be. Statistic is the Ljung-Box statistic itself, while df (degrees of freedom) indicates the number of model parameters that are free to vary when estimating a particular target.
The Significance gives the significance value of the Ljung-Box statistic, providing another indication of whether the model is correctly specified. A significance value less than 0.05 indicates that the residual errors are not random, implying that there is structure in the observed series that is not accounted for by the model.
Taking both the Stationary R-Squared and Significance values into account, the models that the Expert Modeler has chosen for Market_3 and Market_4 are quite acceptable. The Significance values for Market_1, Market_2, and Market_5 are all less than 0.05, indicating that some experimentation with better-fitting models for these markets might be necessary.
The display shows a number of additional goodness-of-fit measures. The R-Squared value gives an estimate of the proportion of the total variation in the time series that is explained by the model. Because the maximum value for this statistic is 1.0, our models are fine in this respect.
RMSE is the root mean square error, a measure of how much the actual values of a series differ from the values predicted by the model, and is expressed in the same units as those used for the series itself. As this is a measurement of an error, we want this value to be as low as possible. At first sight it appears that the models for Market_2 and Market_3, while still acceptable according to the statistics we have seen so far, are less successful than those for the other three markets.
These additional goodness-of-fit measures include the mean absolute percentage error (MAPE) and its maximum value (MAXAPE). Absolute percentage error is a measure of how much a target series varies from its model-predicted level, expressed as a percentage value. By examining the mean and maximum across all models, you can get an indication of the uncertainty in your predictions.
The MAPE value shows that all models display a mean uncertainty of around 1%, which is very low. The MAXAPE value displays the maximum absolute percentage error and is useful for imagining a worst-case scenario for your forecasts. It shows that the largest percentage error for most of the models falls in the range of roughly 1.8% to 3.7%, again a very low set of figures, with only Market_4 being higher at close to 7%.
The MAE (mean absolute error) value shows the mean of the absolute values of the forecast errors. Like the RMSE value, this is expressed in the same units as those used for the series itself. MAXAE shows the largest forecast error in the same units and indicates the worst-case scenario for the forecasts.
Although these absolute values are interesting, it's the values of the percentage errors (MAPE and MAXAPE) that are more useful in this case, as the target series represent subscriber numbers for markets of varying sizes.
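For reference, all of these error measures can be computed directly from the actual and model-predicted values. The following sketch uses the standard textbook definitions with a few made-up numbers; SPSS Modeler's exact calculations may differ in detail:

```python
import numpy as np

def fit_measures(actual, predicted):
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    errors = actual - predicted
    pct = np.abs(errors / actual) * 100.0
    return {
        "RMSE":   np.sqrt(np.mean(errors ** 2)),  # same units as the series
        "MAE":    np.mean(np.abs(errors)),        # mean absolute error
        "MAXAE":  np.max(np.abs(errors)),         # largest absolute error
        "MAPE":   np.mean(pct),                   # mean absolute percentage error
        "MAXAPE": np.max(pct),                    # largest absolute percentage error
    }

# Illustrative values only
print(fit_measures(actual=[1200, 1215, 1240, 1262], predicted=[1195, 1221, 1232, 1266]))
```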
Do the MAPE and MAXAPE values represent an acceptable amount of uncertainty with the models? They are certainly very low. This is a situation in which business sense comes into play, because acceptable risk will change from problem to problem. We'll assume that the goodness-of-fit statistics fall within acceptable bounds, so let's go on to look at the residual errors.
Examining the values of the autocorrelation function (ACF) and partial autocorrelation function (PACF) for the model residuals provides more quantitative insight into the models than simply viewing goodness-of-fit statistics.
A well-specified time series model captures all of the nonrandom variation, including seasonality, trend, cyclic behavior, and any other factors that matter. If this is the case, the errors should not be correlated with themselves (autocorrelated) over time. Significant structure in either of the autocorrelation functions would imply that the underlying model is incomplete.
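For comparison, the same residual diagnostics can be drawn with statsmodels. Here the residuals array is again a placeholder for the errors of whichever model you are inspecting.

```python
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

rng = np.random.default_rng(0)
residuals = rng.normal(size=120)    # placeholder for the model's residual series

fig, axes = plt.subplots(2, 1, figsize=(8, 6))
plot_acf(residuals, lags=24, ax=axes[0])     # bars outside the shaded band suggest leftover structure
plot_pacf(residuals, lags=24, ax=axes[1])    # confirms (or not) any structure seen in the ACF
plt.tight_layout()
plt.show()
```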
3. For the fourth market, click Correlogram to display the values of the autocorrelation function (ACF) and partial autocorrelation function (PACF) for the residual errors in the model.
Figure 2. ACF and PACF values for the fourth market

In these plots, the original values of the error variable have been lagged (under BUILD OPTIONS - OUTPUT) up to the default value of 24 time periods and compared with the original value to see if there's any correlation over time. Ideally, the bars representing all lags of the ACF and PACF should fall within the shaded area. In practice, some lags may extend outside it; for example, some larger lags may not have been tried for inclusion in the model in order to save computation time, and lags that are insignificant are removed from the model. If you want to improve the model further, these plots indicate which lags are potential candidates to add as predictors, even if some of them turn out to be redundant.
Should this occur, you'd need to check the lower (PACF) plot to see whether the structure is confirmed there. The PACF plot looks at correlations after controlling for the series values at the intervening time points.
The values for Market_4 are all within the shaded area, so we can continue and check the values for the other markets.
4. Open the Correlogram for each of the other markets and the totals.
The values for the other markets all show some values outside the shaded area, confirming what we suspected earlier from their Significance values. We'll need to experiment with some different models for those markets at some point to see if we can get a better fit, but for the rest of this example, we'll concentrate on what else we can learn from the Market_4 model.
5. Return to your flow canvas. Attach a new Time Plot node to the Time Series model nugget. Double-click the node to open its properties.
6. Deselect the Display series in separate panel option.
7. For the Series list, add the Market_4 and $TS-Market_4 fields.
8. Save the properties, then right-click the Time Plot node and select Run to generate a line graph of the actual and forecast data for this local market. Notice how the forecast ($TS-Market_4) line extends past the end of the actual data. You now have a forecast of expected demand for the next three months in this market.
Figure 3. Time Plot of actual and forecast data for Market_4

The lines for actual and forecast data over the entire time series are very close together on the graph, indicating that this is a reliable model for this particular time series.
You have a reliable model for this particular market, but what margin of error does the forecast have? You can get an indication of this by examining the confidence interval.
9. Double-click the last Time Plot node in the flow (the one labeled Market_4 $TS-Market_4).
10. Add the $TSLCI-Market_4 and $TSUCI-Market_4 fields to the Series list.
11. Save the properties and run the node again.
Now you have the same graph as before, but with the upper ($TSUCI) and lower ($TSLCI) limits of the confidence interval added. Notice how the boundaries of the confidence interval diverge over the forecast period, indicating increasing uncertainty as you forecast further into the future. However, as each time period goes by, you'll have another (in this case) month's worth of actual usage data on which to base your forecast. In a real-world scenario, you could read the new data into the flow and reapply your model now that you know it's reliable.
Figure 4. Time Plot with confidence interval added

| # Examining the model #
<!-- <ol> -->
1. Right\-click the Time Series model nugget and select View Model to see information about the models generated for each of the markets\.
Figure 1. Time Series models generated for the markets

2. In the left TARGET column, select any of the markets\. Then go to Model Information\. The Number of Predictors row shows how many fields were used as predictors for each target\.
    The other rows in the Model Information tables show various goodness-of-fit measures for each model. Stationary R-Squared measures how much better the model is than a baseline model. If the final model is ARIMA(p,d,q)(P,D,Q), the baseline model is ARIMA(0,d,0)(0,D,0). If the final model is an Exponential Smoothing model, then d is 2 for the Brown and Holt models and 1 for other models, and D is 1 if the seasonal length is greater than 1, otherwise D is 0. A negative Stationary R-Squared means that the model under consideration is worse than the baseline model, a value of zero means that it is no better than the baseline model, and a positive value means that it is better than the baseline model.
The Statistic and df lines, and the Significance under Parameter Estimates, relate to the Ljung-Box statistic, a test of the randomness of the residual errors in the model. The more random the errors, the better the model is likely to be. Statistic is the Ljung-Box statistic itself, while df (degrees of freedom) indicates the number of model parameters that are free to vary when estimating a particular target.
The Significance gives the significance value of the Ljung-Box statistic, providing another indication of whether the model is correctly specified. A significance value less than 0.05 indicates that the residual errors are not random, implying that there is structure in the observed series that is not accounted for by the model.
    Taking both the Stationary R-Squared and Significance values into account, the models that the Expert Modeler has chosen for `Market_3` and `Market_4` are quite acceptable. The Significance values for `Market_1`, `Market_2`, and `Market_5` are all less than 0.05, indicating that some experimentation with better-fitting models for these markets might be necessary.
The display shows a number of additional goodness-of-fit measures. The R-Squared value gives an estimation of the total variation in the time series that can be explained by the model. As the maximum value for this statistic is 1.0, our models are fine in this respect.
RMSE is the root mean square error, a measure of how much the actual values of a series differ from the values predicted by the model, and is expressed in the same units as those used for the series itself. As this is a measurement of an error, we want this value to be as low as possible. At first sight it appears that the models for `Market_2` and `Market_3`, while still acceptable according to the statistics we have seen so far, are less successful than those for the other three markets.
    These additional goodness-of-fit measures include the mean absolute percentage error (MAPE) and its maximum value (MAXAPE). Absolute percentage error is a measure of how much a target series varies from its model-predicted level, expressed as a percentage value. By examining the mean and maximum across all models, you can get an indication of the uncertainty in your predictions.
The MAPE value shows that all models display a mean uncertainty of around 1%, which is very low. The MAXAPE value displays the maximum absolute percentage error and is useful for imagining a worst-case scenario for your forecasts. It shows that the largest percentage error for most of the models falls in the range of roughly 1.8% to 3.7%, again a very low set of figures, with only `Market_4` being higher at close to 7%.
    The MAE (mean absolute error) value shows the mean of the absolute values of the forecast errors. Like the RMSE value, this is expressed in the same units as those used for the series itself. MAXAE shows the largest forecast error in the same units and indicates the worst-case scenario for the forecasts.
    Although these absolute values are interesting, it's the values of the percentage errors (MAPE and MAXAPE) that are more useful in this case, as the target series represent subscriber numbers for markets of varying sizes.
Do the MAPE and MAXAPE values represent an acceptable amount of uncertainty with the models? They are certainly very low. This is a situation in which business sense comes into play, because acceptable risk will change from problem to problem. We'll assume that the goodness-of-fit statistics fall within acceptable bounds, so let's go on to look at the residual errors.
    Examining the values of the autocorrelation function (ACF) and partial autocorrelation function (PACF) for the model residuals provides more quantitative insight into the models than simply viewing goodness-of-fit statistics.
    A well-specified time series model captures all of the nonrandom variation, including seasonality, trend, cyclic behavior, and any other factors that matter. If this is the case, the errors should not be correlated with themselves (autocorrelated) over time. Significant structure in either of the autocorrelation functions would imply that the underlying model is incomplete.
3. For the fourth market, click Correlogram to display the values of the autocorrelation function (ACF) and partial autocorrelation function (PACF) for the residual errors in the model\.
Figure 2. ACF and PACF values for the fourth market

    In these plots, the original values of the error variable have been lagged (under BUILD OPTIONS - OUTPUT) up to the default value of 24 time periods and compared with the original value to see if there's any correlation over time. Ideally, the bars representing all lags of the ACF and PACF should fall within the shaded area. In practice, some lags may extend outside it; for example, some larger lags may not have been tried for inclusion in the model in order to save computation time, and lags that are insignificant are removed from the model. If you want to improve the model further, these plots indicate which lags are potential candidates to add as predictors, even if some of them turn out to be redundant.
    Should this occur, you'd need to check the lower (PACF) plot to see whether the structure is confirmed there. The PACF plot looks at correlations after controlling for the series values at the intervening time points.
The values for `Market_4` are all within the shaded area, so we can continue and check the values for the other markets.
4. Open the Correlogram for each of the other markets and the totals\.
The values for the other markets all show some values outside the shaded area, confirming what we suspected earlier from their Significance values. We'll need to experiment with some different models for those markets at some point to see if we can get a better fit, but for the rest of this example, we'll concentrate on what else we can learn from the `Market_4` model.
5. Return to your flow canvas\. Attach a new Time Plot node to the Time Series model nugget\. Double\-click the node to open its properties\.
6. Deselect the Display series in separate panel option\.
7. For the Series list, add the `Market_4` and `$TS-Market_4` fields\.
8. Save the properties, then right\-click the Time Plot node and select Run to generate a line graph of the actual and forecast data for this local market\. Notice how the forecast (`$TS-Market_4`) line extends past the end of the actual data\. You now have a forecast of expected demand for the next three months in this market\.
Figure 3. Time Plot of actual and forecast data for Market\_4

The lines for actual and forecast data over the entire time series are very close together on the graph, indicating that this is a reliable model for this particular time series.
You have a reliable model for this particular market, but what margin of error does the forecast have? You can get an indication of this by examining the confidence interval.
9. Double\-click the last Time Plot node in the flow (the one labeled Market\_4 $TS\-Market\_4)\.
10. Add the `$TSLCI-Market_4` and `$TSUCI-Market_4` fields to the Series list\.
11. Save the properties and run the node again\.
<!-- </ol> -->
Now you have the same graph as before, but with the upper (`$TSUCI`) and lower (`$TSLCI`) limits of the confidence interval added\. Notice how the boundaries of the confidence interval diverge over the forecast period, indicating increasing uncertainty as you forecast further into the future\. However, as each time period goes by, you'll have another (in this case) month's worth of actual usage data on which to base your forecast\. In a real\-world scenario, you could read the new data into the flow and reapply your model now that you know it's reliable\.
Figure 4\. Time Plot with confidence interval added

<!-- </article "role="article" "> -->
|
A69DA07F8EE0529080646A4B1EAB45C1074AB683 | https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_bandwidth_forecast_model.html?context=cdpaas&locale=en | Creating the model (SPSS Modeler) | Creating the model
1. Double-click the Time Series node to open its properties.
2. Under FIELDS, add all 5 of the markets to the Candidate Inputs lists. Also add the Total field to the Targets list.
3. Under BUILD OPTIONS - GENERAL, make sure the Expert Modeler method is selected using all default settings. Doing so enables the Expert Modeler to decide the most appropriate model to use for each time series.
Figure 1. Choosing the Expert Modeler method for Time Series

4. Save the settings and then run the flow. A Time Series model nugget is generated. Attach it to the Time Series node.
5. Attach a Table node to the Time Series model nugget and run the flow again.
Figure 2. Example flow showing Time Series modeling

There are now three new rows appended to the end of the original data. These are the rows for the forecast period, in this case January to March 2004.
Several new columns are also present now. The $TS- columns are added by the Time Series node. The columns indicate the following for each row (that is, for each interval in the time series data):
$TS-colname: The generated model data for each column of the original data.
$TSLCI-colname: The lower confidence interval value for each column of the generated model data.
$TSUCI-colname: The upper confidence interval value for each column of the generated model data.
$TS-Total: The total of the $TS-colname values for this row.
$TSLCI-Total: The total of the $TSLCI-colname values for this row.
$TSUCI-Total: The total of the $TSUCI-colname values for this row.
The most significant columns for the forecast operation are the $TS-Market_n, $TSLCI-Market_n, and $TSUCI-Market_n columns. In particular, these columns in the last three rows contain the user subscription forecast data and confidence intervals for each of the local markets.
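If you export the scored table and read it back with pandas, isolating the three forecast rows and their confidence bounds might look like the sketch below. The CSV file name is an assumption made for illustration, not an artifact produced by the flow itself.

```python
import pandas as pd

scores = pd.read_csv("broadband_scored.csv")    # assumed export of the Table node output

forecast = scores.tail(3)                       # the three appended forecast rows
cols = ["$TS-Market_4", "$TSLCI-Market_4", "$TSUCI-Market_4"]
print(forecast[cols])                           # point forecasts with lower and upper confidence bounds
```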
| # Creating the model #
<!-- <ol> -->
1. Double\-click the Time Series node to open its properties\.
2. Under FIELDS, add all 5 of the markets to the Candidate Inputs lists\. Also add the `Total` field to the Targets list\.
3. Under BUILD OPTIONS \- GENERAL, make sure the Expert Modeler method is selected using all default settings\. Doing so enables the Expert Modeler to decide the most appropriate model to use for each time series\.
Figure 1. Choosing the Expert Modeler method for Time Series

4. Save the settings and then run the flow\. A Time Series model nugget is generated\. Attach it to the Time Series node\.
5. Attach a Table node to the Time Series model nugget and run the flow again\.
Figure 2. Example flow showing Time Series modeling

<!-- </ol> -->
There are now three new rows appended to the end of the original data\. These are the rows for the forecast period, in this case January to March 2004\.
Several new columns are also present now\. The `$TS-` columns are added by the Time Series node\. The columns indicate the following for each row (that is, for each interval in the time series data):
<!-- <table "summary="" class="defaultstyle" "> -->
| Column | Description |
| ----------------- | --------------------------------------------------------------------------------- |
| $TS\-*colname* | The generated model data for each column of the original data\. |
| $TSLCI\-*colname* | The lower confidence interval value for each column of the generated model data\. |
| $TSUCI\-*colname* | The upper confidence interval value for each column of the generated model data\. |
| $TS\-Total | The total of the $TS\-*colname* values for this row\. |
| $TSLCI\-Total | The total of the $TSLCI\-*colname* values for this row\. |
| $TSUCI\-Total | The total of the $TSUCI\-*colname* values for this row\. |
<!-- </table "summary="" class="defaultstyle" "> -->
The most significant columns for the forecast operation are the `$TS-Market_n`, `$TSLCI-Market_n`, and `$TSUCI-Market_n` columns\. In particular, these columns in the last three rows contain the user subscription forecast data and confidence intervals for each of the local markets\.
<!-- </article "role="article" "> -->
|
8CCC5CD4A9C103249435FC0A7FB18874B447DE3D | https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_bandwidth_forecast_summary.html?context=cdpaas&locale=en | Summary (SPSS Modeler) | Summary
You've learned how to use the Expert Modeler to produce forecasts for multiple time series. In a real-world scenario, you could now transform nonstandard time series data into a format suitable for input to a Time Series node.
| # Summary #
You've learned how to use the Expert Modeler to produce forecasts for multiple time series\. In a real\-world scenario, you could now transform nonstandard time series data into a format suitable for input to a Time Series node\.
<!-- </article "role="article" "> -->
|
59CDBABC75E7EC8987A3C464F3277923F444A724 | https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_bandwidth_forecast_targets.html?context=cdpaas&locale=en | Defining the targets (SPSS Modeler) | Defining the targets
1. Add a Type node after the Filler node, then double-click the Type node to open its properties.
2. Set the role to None for the DATE_ field. Set the role to Target for all other fields (the Market_n fields plus the Total field).
3. Click Read Values to populate the Values column.
Figure 1. Setting the role for fields

| # Defining the targets #
<!-- <ol> -->
1. Add a Type node after the Filler node, then double\-click the Type node to open its properties\.
2. Set the role to None for the `DATE_` field\. Set the role to Target for all other fields (the `Market_n` fields plus the `Total` field)\.
3. Click Read Values to populate the Values column\.
Figure 1. Setting the role for fields

<!-- </ol> -->
<!-- </article "role="article" "> -->
|
83579304F7F59126FE983B1ED44BBBB1AC8BFCB2 | https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_bandwidth_forecast_time.html?context=cdpaas&locale=en | Setting the time intervals (SPSS Modeler) | Setting the time intervals
1. Add a Time Series node and attach it to the Type node. Double-click the node to edit its properties.
2. Under OBSERVATIONS AND TIME INTERVAL, select DATE_ as the Time/Date field.
3. Select Months as the time interval.
Figure 1. Setting the time interval

4. Under MODEL OPTIONS, select the Extend records into the future option and set the value to 3. (A rough pandas equivalent of extending the records is sketched after this procedure.)
Figure 2. Setting the forecast period

| # Setting the time intervals #
<!-- <ol> -->
1. Add a Time Series node and attach it to the Type node\. Double\-click the node to edit its properties\.
2. Under OBSERVATIONS AND TIME INTERVAL, select `DATE_` as the Time/Date field\.
3. Select Months as the time interval\.
Figure 1. Setting the time interval

4. Under MODEL OPTIONS, select the Extend records into the future option and set the value to 3\.
Figure 2. Setting the forecast period

<!-- </ol> -->
<!-- </article "role="article" "> -->
|
7E9A5F54713CE7CB98EA4BCB223A40C4952F0083 | https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_churn.html?context=cdpaas&locale=en | Telecommunications churn (SPSS Modeler) | Telecommunications churn
Logistic regression is a statistical technique for classifying records based on values of input fields. It is analogous to linear regression, but takes a categorical target field instead of a numeric one.
For example, suppose a telecommunications provider is concerned about the number of customers it's losing to competitors. If service usage data can be used to predict which customers are liable to transfer to another provider, offers can be customized to retain as many customers as possible.
This example uses the flow named Telecommunications Churn, available in the example project. The data file is telco.csv.
This example focuses on using usage data to predict customer loss (churn). Because the target has two distinct categories, a binomial model is used. In the case of a target with multiple categories, a multinomial model could be created instead. See [Classifying telecommunications customers](https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_classify.html#tut_classify) for more information.
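To make the binomial case concrete, here is a minimal scikit-learn sketch that fits a two-category churn model. The field names follow telco.csv, but the choice of predictors and the preprocessing are assumptions of the sketch, not the procedure the Logistic node uses.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

telco = pd.read_csv("telco.csv")                     # assumes the file is available locally

predictors = ["tenure", "age", "income", "longmon"]  # a small illustrative subset of usage fields
X = telco[predictors].fillna(telco[predictors].median())
y = telco["churn"]                                   # two categories, so a binomial model applies

model = LogisticRegression(max_iter=1000).fit(X, y)
for name, coef in zip(predictors, model.coef_[0]):
    print(f"{name:>8}: {coef:+.4f}")                 # sign hints at the direction of each effect
# A target with more than two categories would call for a multinomial model instead.
```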
| # Telecommunications churn #
Logistic regression is a statistical technique for classifying records based on values of input fields\. It is analogous to linear regression, but takes a categorical target field instead of a numeric one\.
For example, suppose a telecommunications provider is concerned about the number of customers it's losing to competitors\. If service usage data can be used to predict which customers are liable to transfer to another provider, offers can be customized to retain as many customers as possible\.
This example uses the flow named Telecommunications Churn, available in the example project\. The data file is telco\.csv\.
This example focuses on using usage data to predict customer loss (churn)\. Because the target has two distinct categories, a binomial model is used\. In the case of a target with multiple categories, a multinomial model could be created instead\. See [Classifying telecommunications customers](https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_classify.html#tut_classify) for more information\.
<!-- </article "role="article" "> -->
|
433775834EA8AE82CBFA6077FC361C3C52A99E42 | https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_churn_build.html?context=cdpaas&locale=en | Building the flow (SPSS Modeler) | Building the flow
Figure 1. Example flow to classify customers using binomial logistic regression

1. Add a Data Asset node that points to telco.csv.
2. Add a Type node, double-click it to open its properties, and make sure all measurement levels are set correctly. For example, most fields with values of 0 and 1 can be regarded as flags, but certain fields, such as gender, are more accurately viewed as a nominal field with two values.
Figure 2. Measurement levels

3. Set the measurement level for the churn field to Flag, and set the role to Target. Leave the role for all other fields set to Input.
4. Add a Feature Selection modeling node to the Type node. You can use a Feature Selection node to remove predictors or data that don't add any useful information about the predictor/target relationship.
5. Run the flow. Right-click the resulting model nugget and select View Model. You'll see a list of the most important fields.
6. Add a Filter node after the Type node. Not all of the data in the telco.csv data file will be useful in predicting churn. You can use the filter to only select data considered to be important for use as a predictor (the fields marked as Important in the model generated in the previous step).
7. Double-click the Filter node to open its properties, select the option Retain the selected fields (all other fields are filtered), and add the following important fields from the Feature Selection model nugget:
tenure
age
address
income
ed
employ
equip
callcard
wireless
longmon
tollmon
equipmon
cardmon
wiremon
longten
tollten
cardten
voice
pager
internet
callwait
confer
ebill
loglong
logtoll
lninc
custcat
churn
8. Add a Data Audit output node after the Filter node. Right-click the node and run it, then open the output that was added to the Outputs pane.
9. Look at the % Complete column, which lets you identify any fields with large amounts of missing data. In this case, the only field you need to amend is logtoll, which is less than 50% complete.
10. Close the output, and add a Filler node after the Filter node. Double-click the node to open its properties, click Add Columns, and select the logtoll field.
11. Under Replace, select Blank and null values. Click Save to close the node properties.
12. Right-click the Filler node you just created and select Create supernode. Double-click the supernode and change its name to Missing Value Imputation.
13. Add a Logistic node after the Filler node. Double-click the node to open its properties. Under Model Settings, select the Binomial procedure and the Forwards Stepwise method. (For comparison, a rough open-source sketch of this forward stepwise approach appears after this procedure.)
Figure 3. Choosing model settings

14. Under Expert Options, select Expert.
Figure 4. Choosing expert options

15. Click Output to open the display settings. Select At each step, Iteration history, and Parameter estimates, then click OK.
Figure 5. Choosing output settings

| # Building the flow #
Figure 1\. Example flow to classify customers using binomial logistic regression

<!-- <ol> -->
1. Add a Data Asset node that points to telco\.csv\.
2. Add a Type node, double\-click it to open its properties, and make sure all measurement levels are set correctly\. For example, most fields with values of `0` and `1` can be regarded as flags, but certain fields, such as gender, are more accurately viewed as a nominal field with two values\.
Figure 2. Measurement levels

3. Set the measurement level for the `churn` field to Flag, and set the role to Target\. Leave the role for all other fields set to Input\.
4. Add a Feature Selection modeling node to the Type node\. You can use a Feature Selection node to remove predictors or data that don't add any useful information about the predictor/target relationship\.
5. Run the flow\. Right\-click the resulting model nugget and select View Model\. You'll see a list of the most important fields\.
6. Add a Filter node after the Type node\. Not all of the data in the telco\.csv data file will be useful in predicting churn\. You can use the filter to only select data considered to be important for use as a predictor (the fields marked as Important in the model generated in the previous step)\.
7. Double\-click the Filter node to open its properties, select the option Retain the selected fields (all other fields are filtered), and add the following important fields from the Feature Selection model nugget:
tenure
age
address
income
ed
employ
equip
callcard
wireless
longmon
tollmon
equipmon
cardmon
wiremon
longten
tollten
cardten
voice
pager
internet
callwait
confer
ebill
loglong
logtoll
lninc
custcat
churn
8. Add a Data Audit output node after the Filter node\. Right\-click the node and run it, then open the output that was added to the Outputs pane\.
9. Look at the % Complete column, which lets you identify any fields with large amounts of missing data\. In this case, the only field you need to amend is `logtoll`, which is less than 50% complete\.
10. Close the output, and add a Filler node after the Filter node\. Double\-click the node to open its properties, click Add Columns, and select the `logtoll` field\.
11. Under Replace, select Blank and null values\. Click Save to close the node properties\.
12. Right\-click the Filler node you just created and select Create supernode\. Double\-click the supernode and change its name to Missing Value Imputation\.
13. Add a Logistic node after the Filler node\. Double\-click the node to open its properties\. Under Model Settings, select the Binomial procedure and the Forwards Stepwise method\.
Figure 3. Choosing model settings

14. Under Expert Options, select Expert\.
Figure 4. Choosing expert options

15. Click Output to open the display settings\. Select At each step, Iteration history, and Parameter estimates, then click OK\.
    Figure 5. Choosing output settings

<!-- </ol> -->
<!-- </article "role="article" "> -->
|
B648A5DEE55D7DBF258B7B088830F18C040C61D5 | https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_churn_model.html?context=cdpaas&locale=en | Browsing the model (SPSS Modeler) | Browsing the model
* Right-click the Logistic node and run it to generate its model nugget. Right-click the nugget and select View Model. The Parameter Estimates page shows the target (churn) and inputs (predictor fields) used by the model. These are the fields that were actually chosen based on the Forwards Stepwise method, not the complete list submitted for consideration.
Figure 1. Parameter estimates showing input fields

To assess how well the model actually fits your data, a number of diagnostics are available in the expert node settings when you're building the flow.
Note also that these results are based on the training data only. To assess how well the model generalizes to other data in the real world, you would use a Partition node to hold out a subset of records for purposes of testing and validation.
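A minimal sketch of that holdout idea, using scikit-learn's train/test split in place of a Partition node; the chosen fields and the preprocessing are assumptions of the sketch.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

telco = pd.read_csv("telco.csv")                                  # assumes the file is available locally
fields = ["tenure", "age", "income", "employ", "equip", "internet"]
X = telco[fields].fillna(telco[fields].median())
y = telco["churn"]

# Hold out 30% of records for testing, analogous to a Partition node
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("training accuracy:", round(model.score(X_train, y_train), 3))
print("testing accuracy: ", round(model.score(X_test, y_test), 3))   # the honest generalization estimate
```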
| # Browsing the model #
<!-- <ul> -->
* Right\-click the Logistic node and run it to generate its model nugget\. Right\-click the nugget and select View Model\. The Parameter Estimates page shows the target (churn) and inputs (predictor fields) used by the model\. These are the fields that were actually chosen based on the Forwards Stepwise method, not the complete list submitted for consideration\.
Figure 1. Parameter estimates showing input fields

To assess how well the model actually fits your data, a number of diagnostics are available in the expert node settings when you're building the flow.
Note also that these results are based on the training data only. To assess how well the model generalizes to other data in the real world, you would use a Partition node to hold out a subset of records for purposes of testing and validation.
<!-- </ul> -->
<!-- </article "role="article" "> -->
|