doc_id | url | title | document |
---|---|---|---|
F5A6D2AE83A7989E17704E69F0A640368C676594 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/expressionbuilder.html?context=cdpaas&locale=en | Expression Builder (SPSS Modeler) | Expression Builder
You can type CLEM expressions manually or use the Expression Builder, which displays a complete list of CLEM functions and operators as well as data fields from the current flow, allowing you to quickly build expressions without memorizing the exact names of fields or functions.
The Expression Builder controls automatically add the proper quotes for fields and values, making it easier to create syntactically correct expressions.
Notes:
* The Expression Builder is not supported in parameter settings.
* If you want to change your data source, first check that the Expression Builder still supports the functions you have selected. Because not all databases support all functions, you might encounter an error if you run against a new data source.
* You can run an SPSS Modeler desktop stream file (.str) that contains database functions, but those functions aren't yet available in the Expression Builder user interface.
Figure 1. Expression Builder
|
9DA0D100A88228AB463CB9B1B6CF1C051253911A | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/expressionbuilder_functions.html?context=cdpaas&locale=en | Selecting functions (SPSS Modeler) | Selecting functions
The function list displays all available CLEM functions and operators. Scroll to select a function from the list, or, for easier searching, use the drop-down list to display a subset of functions or operators.
The following categories of functions are available:
Table 1. CLEM functions for use with your data

| Function type | Description |
| --- | --- |
| Operators | Lists all the operators you can use when building expressions. Operators are also available from the buttons. |
| Information | Used to gain insight into field values. For example, the function is_string returns true for all records whose type is a string. |
| Conversion | Used to construct new fields or convert storage type. For example, the function to_timestamp converts the selected field to a timestamp. |
| Comparison | Used to compare field values to each other or to a specified string. For example, <= is used to check whether the values of two fields are less than or equal. |
| Logical | Used to perform logical operations, such as if, then, else operations. |
| Numeric | Used to perform numeric calculations, such as the natural log of field values. |
| Trigonometric | Used to perform trigonometric calculations, such as the arccosine of a specified angle. |
| Probability | Returns probabilities based on various distributions, such as the probability that a value from Student's t distribution is less than a specific value. |
| Spatial Functions | Used to perform spatial calculations on geospatial data. |
| Bitwise | Used to manipulate integers as bit patterns. |
| Random | Used to randomly select items or generate numbers. |
| String | Used to perform various operations on strings, such as stripchar, which allows you to remove a specified character. |
| Date and time | Used to perform various operations on date, time, and timestamp fields. |
| Sequence | Used to gain insight into the record sequence of a data set or perform operations based on that sequence. |
| Global | Used to access global values that are created by a Set Globals node. For example, @MEAN refers to the mean of all values for a field across the entire data set. |
| Blanks and Null | Used to access, flag, and often fill user-specified blanks or system-missing values. For example, @BLANK(FIELD) raises a true flag for records where blanks are present. |
| Special Fields | Used to denote the specific fields under examination. For example, @FIELD is used when deriving multiple fields. |
After you select a group of functions, double-click a function to insert it into the Expression box at the current cursor position.
|
1B0AB9084C7DD9546BDC2F376B58E32C0ECFEE85 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/extension_build.html?context=cdpaas&locale=en | Extension Model node (SPSS Modeler) | Extension Model node
With the Extension Model node, you can run R scripts or Python for Spark scripts to build and score models.
After adding the node to your canvas, double-click the node to open its properties.
|
6402316FEBFAD11A582D9C567811003F4BEE596A | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/extension_export.html?context=cdpaas&locale=en | Extension Export node (SPSS Modeler) | Extension Export node
You can use the Extension Export node to run R scripts or Python for Spark scripts to export data.
|
378F6A8306234029DE1642CBFF8E44ED6848BF74 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/extension_importer.html?context=cdpaas&locale=en | Extension Import node (SPSS Modeler) | Extension Import node
With the Extension Import node, you can run R scripts or Python for Spark scripts to import data.
After adding the node to your canvas, double-click the node to open its properties.
|
97FA49D526786021CF325FF9AFF15646A8270B48 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/extension_nativepython_api.html?context=cdpaas&locale=en | Native Python APIs (SPSS Modeler) | Native Python APIs
You can invoke native Python APIs from your scripts to interact with SPSS Modeler.
The following APIs are supported.
To see an example, you can download the sample stream [python-extension-str.zip](https://github.com/IBMDataScience/ModelerFlowsExamples/blob/main/samples) and import it into SPSS Modeler (from your project, click New asset, select SPSS Modeler, then select Local file). Then open the Extension node properties in the flow to see example syntax.
|
1D46D1240377AEA562F14A560CB9F24DF33EDF88 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/extension_output.html?context=cdpaas&locale=en | Extension Output node (SPSS Modeler) | Extension Output node
With the Extension Output node, you can run R scripts or Python for Spark scripts to produce output.
After adding the node to your canvas, double-click the node to open its properties.
|
FF6C435ADBD62DE03C06CE4F90343D3CD04F9E8F | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/extension_process.html?context=cdpaas&locale=en | Extension Transform node (SPSS Modeler) | Extension Transform node
With the Extension Transform node, you can take data from an SPSS Modeler flow and apply transformations to the data using R scripting or Python for Spark scripting.
When the data has been modified, it's returned to the flow for further processing, model building, and model scoring. The Extension Transform node makes it possible to transform data using algorithms that are written in R or Python for Spark, and enables you to develop data transformation methods that are tailored to a particular problem.
After adding the node to your canvas, double-click the node to open its properties.
|
63C0DFB695860E1DA7981D86959D998BEBC2DD03 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/extension_pyspark.html?context=cdpaas&locale=en | Python for Spark scripts (SPSS Modeler) | Python for Spark scripts
SPSS Modeler supports Python scripts for Apache Spark.
Note:
* Python nodes depend on the Spark environment.
* Python scripts must use the Spark API because data is presented in the form of a Spark DataFrame (see the sketch after this list).
* When installing Python, make sure all users have permission to access the Python installation.
* If you want to use the Machine Learning Library (MLlib), you must install a version of Python that includes NumPy.
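The following standalone sketch illustrates the second note above: all data manipulation goes through the Spark DataFrame API. Inside an Extension node, the Spark session and the input DataFrame are supplied by the flow rather than created manually, and the field names here are purely illustrative.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

# Standalone example only: in an Extension node, SPSS Modeler provides the
# Spark session and the input DataFrame for you.
spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame(
    [("a", 1.0), ("b", 2.5), ("c", 4.0)],   # illustrative data
    ["id", "value"],                        # hypothetical field names
)

# All data manipulation goes through the Spark DataFrame API.
result = df.filter(col("value") > 1.0).withColumn("value_doubled", col("value") * 2)
result.show()
```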
|
17470065AFC59337B207721AB539B4622BBB3055 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/extension_pyspark_api.html?context=cdpaas&locale=en | Scripting with Python for Spark (SPSS Modeler) | Scripting with Python for Spark
SPSS Modeler can run Python scripts using the Apache Spark framework to process data. This documentation describes the Python API for the provided interfaces.
The SPSS Modeler installation includes a Spark distribution.
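The following is a rough sketch of the typical scripting pattern. The spss.pyspark.runtime module and its context methods are named as they appear in IBM's Python for Spark examples; treat those names, and the field name used, as assumptions rather than a definitive reference. The snippet runs only inside an Extension node, where SPSS Modeler supplies the Spark context sc.

```python
import spss.pyspark.runtime           # Modeler-provided module (assumed name)
from pyspark.sql.functions import col

# 'sc' is the SparkContext that SPSS Modeler makes available to the script.
ascontext = spss.pyspark.runtime.getContext(sc)

df = ascontext.getSparkInputData()    # incoming data as a Spark DataFrame

# Example transformation; "value" is a hypothetical field name.
df = df.withColumn("value_scaled", col("value") / 100.0)

ascontext.setSparkOutputData(df)      # return the result to the flow
```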
|
7436F8933CA1DD44E05CD59F8E2CB13052763643 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/extension_pyspark_date.html?context=cdpaas&locale=en | Date, time, timestamp (SPSS Modeler) | Date, time, timestamp
For operations that use date, time, or timestamp type data, the value is converted to the real value based on the value 1970-01-01:00:00:00 (using Coordinated Universal Time).
For the date, the value represents the number of days, based on the value 1970-01-01 (using Coordinated Universal Time).
For the time, the value represents the number of seconds elapsed within a 24-hour day.
For the timestamp, the value represents the number of seconds based on the value 1970-01-01:00:00:00 (using Coordinated Universal Time).
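The following Python sketch illustrates the arithmetic described above (Coordinated Universal Time throughout; the example date and time are arbitrary):

```python
from datetime import date, datetime, timezone

epoch_date = date(1970, 1, 1)
epoch_ts = datetime(1970, 1, 1, tzinfo=timezone.utc)

# Date: number of days since 1970-01-01 (UTC)
date_value = (date(2024, 3, 15) - epoch_date).days

# Time: number of seconds elapsed within the 24-hour day
time_value = 14 * 3600 + 30 * 60 + 5          # 14:30:05

# Timestamp: number of seconds since 1970-01-01 00:00:00 (UTC)
ts = datetime(2024, 3, 15, 14, 30, 5, tzinfo=timezone.utc)
timestamp_value = (ts - epoch_ts).total_seconds()

print(date_value, time_value, timestamp_value)
```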
|
835B998310E6E268F648D4AA28528190EBBB48CA | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/extension_pyspark_examples.html?context=cdpaas&locale=en | Examples (SPSS Modeler) | Examples
This section provides Python for Spark scripting examples.
|
AD61BC1B395A071D8850BC2405A8C311CFDC931F | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/extension_pyspark_exceptions.html?context=cdpaas&locale=en | Exceptions (SPSS Modeler) | Exceptions
This section describes possible exception instances. They are all subclasses of the Python Exception class.
|
450CAAACD51ABDEDAB940CAFB4BC47EBFBCBBA67 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/extension_pyspark_metadata.html?context=cdpaas&locale=en | Data metadata (SPSS Modeler) | Data metadata
This section describes how to set up the data model attributes based on pyspark.sql.StructField.
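For illustration, a minimal sketch of attaching attributes through the metadata argument of pyspark.sql.StructField follows. The metadata keys and values shown are assumptions chosen for the example, not a documented SPSS Modeler contract.

```python
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

# Build a data model (schema) with per-field metadata attached.
schema = StructType([
    StructField("customer_id", StringType(), nullable=False,
                metadata={"role": "none"}),          # illustrative key/value
    StructField("income", DoubleType(), nullable=True,
                metadata={"measure": "continuous"}), # illustrative attribute
])

print(schema.fields[1].metadata)
```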
|
B98506EB96C587BDFD06CBF67617E25D9DAE8E60 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/extension_r.html?context=cdpaas&locale=en | R scripts (SPSS Modeler) | R scripts
SPSS Modeler supports R scripts.
|
50636405C61E0AF7D2EE0EE31256C4CD0F6C5DED | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/factor.html?context=cdpaas&locale=en | PCA/Factor node (SPSS Modeler) | PCA/Factor node
The PCA/Factor node provides powerful data-reduction techniques to reduce the complexity of your data. Two similar but distinct approaches are provided.
* Principal components analysis (PCA) finds linear combinations of the input fields that do the best job of capturing the variance in the entire set of fields, where the components are orthogonal (perpendicular) to each other. PCA focuses on all variance, including both shared and unique variance.
* Factor analysis attempts to identify underlying concepts, or factors, that explain the pattern of correlations within a set of observed fields. Factor analysis focuses on shared variance only. Variance that is unique to specific fields is not considered in estimating the model. Several methods of factor analysis are provided by the Factor/PCA node.
For both approaches, the goal is to find a small number of derived fields that effectively summarize the information in the original set of fields.
Requirements. Only numeric fields can be used in a PCA/Factor model. To estimate a factor analysis or PCA, you need one or more fields with the role set to Input. Fields with the role set to Target, Both, or None are ignored, as are non-numeric fields.
Strengths. Factor analysis and PCA can effectively reduce the complexity of your data without sacrificing much of the information content. These techniques can help you build more robust models that execute more quickly than would be possible with the raw input fields.
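As a conceptual illustration of the data-reduction idea (using scikit-learn's PCA, not the SPSS implementation; the data is synthetic):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))          # 200 records, 10 numeric input fields

pca = PCA(n_components=3)               # keep a small number of derived fields
components = pca.fit_transform(X)       # the derived component scores

# Proportion of total variance captured by each component
print(pca.explained_variance_ratio_)
```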
|
9E1CDB994E758D43D9D8CDC5D88E2B5C7E0088D7 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/featureselection.html?context=cdpaas&locale=en | Feature Selection node (SPSS Modeler) | Feature Selection node
Data mining problems may involve hundreds, or even thousands, of fields that can potentially be used as inputs. As a result, a great deal of time and effort may be spent examining which fields or variables to include in the model. To narrow down the choices, the Feature Selection algorithm can be used to identify the fields that are most important for a given analysis. For example, if you are trying to predict patient outcomes based on a number of factors, which factors are the most likely to be important?
Feature selection consists of three steps:
* Screening. Removes unimportant and problematic inputs and records, or cases such as input fields with too many missing values or with too much or too little variation to be useful.
* Ranking. Sorts remaining inputs and assigns ranks based on importance.
* Selecting. Identifies the subset of features to use in subsequent models—for example, by preserving only the most important inputs and filtering or excluding all others.
In an age where many organizations are overloaded with too much data, the benefits of feature selection in simplifying and speeding the modeling process can be substantial. By focusing attention quickly on the fields that matter most, you can reduce the amount of computation required; more easily locate small but important relationships that might otherwise be overlooked; and, ultimately, obtain simpler, more accurate, and more easily explainable models. By reducing the number of fields used in the model, you may find that you can reduce scoring times as well as the amount of data collected in future iterations.
Example. A telephone company has a data warehouse containing information about responses to a special promotion by 5,000 of the company's customers. The data includes a large number of fields containing customers' ages, employment, income, and telephone usage statistics. Three target fields show whether or not the customer responded to each of three offers. The company wants to use this data to help predict which customers are most likely to respond to similar offers in the future.
Requirements. A single target field (one with its role set to Target), along with multiple input fields that you want to screen or rank relative to the target. Both target and input fields can have a measurement level of Continuous (numeric range) or Categorical.
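As a rough analogy to the screening, ranking, and selecting steps, the following sketch uses scikit-learn rather than the SPSS Feature Selection algorithm; the data and thresholds are synthetic and illustrative.

```python
import numpy as np
from sklearn.feature_selection import VarianceThreshold, SelectKBest, f_classif

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 20))
X[:, 0] = 0.0                                           # a useless, constant input
y = (X[:, 1] + rng.normal(size=500) > 0).astype(int)    # categorical target

# Screening: drop inputs with (near) zero variance
screened = VarianceThreshold(threshold=1e-6).fit_transform(X)

# Ranking and selecting: keep the k inputs most related to the target
selector = SelectKBest(score_func=f_classif, k=5).fit(screened, y)
print(selector.scores_)                     # importance-style scores
print(selector.get_support(indices=True))   # indices of the selected inputs
```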
|
38D24508B131BEB6138652C2FD1E0380A001BB54 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/filler.html?context=cdpaas&locale=en | Filler node (SPSS Modeler) | Filler node
Filler nodes are used to replace field values and change storage. You can choose to replace values based on a specified CLEM condition, such as @BLANK(FIELD). Alternatively, you can choose to replace all blanks or null values with a specific value. Filler nodes are often used in conjunction with the Type node to replace missing values.
Fill in fields. Select fields from the dataset whose values will be examined and replaced. The default behavior is to replace values depending on the specified Condition and Replace with expressions. You can also select an alternative method of replacement using the Replace options.
Note: When selecting multiple fields to replace with a user-defined value, it is important that the field types are similar (all numeric or all symbolic).
Replace. Select to replace the values of the selected field(s) using one of the following methods:
* Based on condition. This option activates the Condition field and Expression Builder for you to create an expression used as a condition for replacement with the value specified.
* Always. Replaces all values of the selected field. For example, you could use this option to convert the storage of income to a string using the following CLEM expression: (to_string(income)).
* Blank values. Replaces all user-specified blank values in the selected field. The standard condition @BLANK(@FIELD) is used to select blanks. Note: You can define blanks using the Types tab of the source node or with a Type node.
* Null values. Replaces all system null values in the selected field. The standard condition @NULL(@FIELD) is used to select nulls.
* Blank and null values. Replaces both blank values and system nulls in the selected field. This option is useful when you are unsure whether or not nulls have been defined as missing values.
Condition. This option is available when you have selected the Based on condition option. Use this text box to specify a CLEM expression for evaluating the selected fields. Click the calculator button to open the Expression Builder.
Replace with. Specify a CLEM expression to give a new value to the selected fields. You can also replace the value with a null value by typing undef in the text box. Click the calculator button to open the Expression Builder.
Note: When the selected fields are strings, you should replace them with a string value. Using the default 0 or another numeric value as the replacement value for string fields results in an error.
Note that use of the following may change row order:
* Running in a database via SQL pushback
* Deriving a list
* Calling any of the CLEM spatial functions
|
EED64F79EBFDD957DEEBEC6261B3A70A248F3D35 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/filter.html?context=cdpaas&locale=en | Filter node (SPSS Modeler) | Filter node
You can rename or exclude fields at any point in a flow. For example, as a medical researcher, you may not be concerned about the potassium level (field-level data) of patients (record-level data); therefore, you can filter out the K (potassium) field. This can be done using a separate Filter node or using the Filter tab on an import or output node. The functionality is the same regardless of which node it's accessed from.
* From import nodes, you can rename or filter fields as the data is read in.
* Using a Filter node, you can rename or filter fields at any point in the flow.
* You can use the Filter tab in various nodes to define or edit multiple response sets.
* Finally, you can use a Filter node to map fields from one import node to another.
|
B8522E9801281DD4118A5012ACF885A7EC2354E4 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/genlin.html?context=cdpaas&locale=en | GenLin node (SPSS Modeler) | GenLin node
The generalized linear model expands the general linear model so that the dependent variable is linearly related to the factors and covariates via a specified link function. Moreover, the model allows for the dependent variable to have a non-normal distribution. It covers widely used statistical models, such as linear regression for normally distributed responses, logistic models for binary data, loglinear models for count data, complementary log-log models for interval-censored survival data, plus many other statistical models through its very general model formulation.
Examples. A shipping company can use generalized linear models to fit a Poisson regression to damage counts for several types of ships constructed in different time periods, and the resulting model can help determine which ship types are most prone to damage.
A car insurance company can use generalized linear models to fit a gamma regression to damage claims for cars, and the resulting model can help determine the factors that contribute the most to claim size.
Medical researchers can use generalized linear models to fit a complementary log-log regression to interval-censored survival data to predict the time to recurrence for a medical condition.
Generalized linear models work by building an equation that relates the input field values to the output field values. After the model is generated, you can use it to estimate values for new data. For each record, a probability of membership is computed for each possible output category. The target category with the highest probability is assigned as the predicted output value for that record.
Requirements. You need one or more input fields and exactly one target field (which can have a measurement level of Continuous or Flag) with two or more categories. Fields used in the model must have their types fully instantiated.
Strengths. The generalized linear model is extremely flexible, but the process of choosing the model structure is not automated and thus demands a level of familiarity with your data that is not required by "black box" algorithms.
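As a conceptual sketch of the shipping example above, here is a Poisson regression fitted with statsmodels, a stand-in for the SPSS GenLin implementation rather than the node itself; the data is synthetic.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
ship_age = rng.uniform(1, 30, size=100)                     # illustrative covariate
damage_counts = rng.poisson(lam=np.exp(0.2 + 0.05 * ship_age))  # synthetic target

X = sm.add_constant(ship_age)
# Poisson family with its default log link relates the inputs to the counts.
result = sm.GLM(damage_counts, X, family=sm.families.Poisson()).fit()
print(result.params)
```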
|
CF6FE4E4058C24F0BEB94D379FB9E820C09456D2 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/gle.html?context=cdpaas&locale=en | GLE node (SPSS Modeler) | GLE node
The GLE model identifies the dependent variable that is linearly related to the factors and covariates via a specified link function. Moreover, the model allows for the dependent variable to have a non-normal distribution. It covers widely used statistical models, such as linear regression for normally distributed responses, logistic models for binary data, loglinear models for count data, complementary log-log models for interval-censored survival data, plus many other statistical models through its very general model formulation.
Examples. A shipping company can use generalized linear models to fit a Poisson regression to damage counts for several types of ships constructed in different time periods, and the resulting model can help determine which ship types are most prone to damage.
A car insurance company can use generalized linear models to fit a gamma regression to damage claims for cars, and the resulting model can help determine the factors that contribute the most to claim size.
Medical researchers can use generalized linear models to fit a complementary log-log regression to interval-censored survival data to predict the time to recurrence for a medical condition.
GLE models work by building an equation that relates the input field values to the output field values. After the model is generated, you can use it to estimate values for new data.
For a categorical target, for each record, a probability of membership is computed for each possible output category. The target category with the highest probability is assigned as the predicted output value for that record.
Requirements. You need one or more input fields and exactly one target field (which can have a measurement level of Continuous, Categorical, or Flag) with two or more categories. Fields used in the model must have their types fully instantiated.
Note: When first creating a flow, you select which runtime to use. By default, flows use the IBM SPSS Modeler runtime. If you want to use native Spark algorithms instead of SPSS algorithms, select the Spark runtime. Properties for this node will vary depending on which runtime option you choose.
|
B561F461842BB0D185F097E0ADB8D3AC13266172 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/glmm.html?context=cdpaas&locale=en | GLMM node (SPSS Modeler) | GLMM node
This node creates a generalized linear mixed model (GLMM).
Generalized linear mixed models extend the linear model so that:
* The target is linearly related to the factors and covariates via a specified link function
* The target can have a non-normal distribution
* The observations can be correlated
Generalized linear mixed models cover a wide variety of models, from simple linear regression to complex multilevel models for non-normal longitudinal data.
Examples. The district school board can use a generalized linear mixed model to determine whether an experimental teaching method is effective at improving math scores. Students from the same classroom should be correlated since they are taught by the same teacher, and classrooms within the same school may also be correlated, so we can include random effects at school and class levels to account for different sources of variability.
Medical researchers can use a generalized linear mixed model to determine whether a new anticonvulsant drug can reduce a patient's rate of epileptic seizures. Repeated measurements from the same patient are typically positively correlated so a mixed model with some random effects should be appropriate. The target field – the number of seizures – takes positive integer values, so a generalized linear mixed model with a Poisson distribution and log link may be appropriate.
Executives at a cable provider of television, phone, and internet services can use a generalized linear mixed model to learn more about potential customers. Since possible answers have nominal measurement levels, the company analyst uses a generalized logit mixed model with a random intercept to capture correlation between answers to the service usage questions across service types (tv, phone, internet) within a given survey responder's answers.
In the node properties, data structure options allow you to specify the structural relationships between records in your dataset when observations are correlated. If the records in the dataset represent independent observations, you don't need to specify any data structure options.
Subjects. The combination of values of the specified categorical fields should uniquely define subjects within the dataset. For example, a single Patient ID field should be sufficient to define subjects in a single hospital, but the combination of Hospital ID and Patient ID may be necessary if patient identification numbers are not unique across hospitals. In a repeated measures setting, multiple observations are recorded for each subject, so each subject may occupy multiple records in the dataset.
A subject is an observational unit that can be considered independent of other subjects. For example, the blood pressure readings from a patient in a medical study can be considered independent of the readings from other patients. Defining subjects becomes particularly important when there are repeated measurements per subject and you want to model the correlation between these observations. For example, you might expect that blood pressure readings from a single patient during consecutive visits to the doctor are correlated.
All of the fields specified as subjects in the node properties are used to define subjects for the residual covariance structure, and provide the list of possible fields for defining subjects for random-effects covariance structures on the Random Effect Block.
Repeated measures. The fields specified here are used to identify repeated observations. For example, a single variable Week might identify the 10 weeks of observations in a medical study, or Month and Day might be used together to identify daily observations over the course of a year.
Define covariance groups by. The categorical fields specified here define independent sets of repeated effects covariance parameters; one for each category defined by the cross-classification of the grouping fields. All subjects have the same covariance type, and subjects within the same covariance grouping will have the same values for the parameters.
Spatial covariance coordinates. The variables in this list specify the coordinates of the repeated observations when one of the spatial covariance types is selected for the repeated covariance type.
Repeated covariance type. This specifies the covariance structure for the residuals. The available structures are:
* First-order autoregressive (AR1)
* Autoregressive moving average (1,1) (ARMA11)
* Compound symmetry
* Diagonal
* Scaled identity
* Spatial: Power
* Spatial: Exponential
* Spatial: Gaussian
* Spatial: Linear
* Spatial: Linear-log
* Spatial: Spherical
* Toeplitz
* Unstructured
* Variance components
|
E6B5EAD096E68A255C5526ADD4C828534891C090 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/gmm.html?context=cdpaas&locale=en | Gaussian Mixture node (SPSS Modeler) | Gaussian Mixture node
A Gaussian Mixture© model is a probabilistic model that assumes all the data points are generated from a mixture of a finite number of Gaussian distributions with unknown parameters.
One can think of mixture models as generalizing k-means clustering to incorporate information about the covariance structure of the data as well as the centers of the latent Gaussians.^1^
The Gaussian Mixture node in watsonx.ai exposes the core features and commonly used parameters of the Gaussian Mixture library. The node is implemented in Python.
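A minimal sketch of the underlying scikit-learn estimator follows; parameter names come from the scikit-learn documentation linked below, and the data is synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, size=(100, 2)),
               rng.normal(5, 1, size=(100, 2))])   # two synthetic clusters

gm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
labels = gm.fit_predict(X)          # hard cluster assignments
probs = gm.predict_proba(X)         # per-cluster membership probabilities
print(gm.means_)                    # estimated Gaussian centers
```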
For more information about Gaussian Mixture modeling algorithms and parameters, see [Gaussian Mixture Models](http://scikit-learn.org/stable/modules/mixture.html) and [Gaussian Mixture](https://scikit-learn.org/stable/modules/generated/sklearn.mixture.GaussianMixture.html). ^2^
^1^ [User Guide.](https://scikit-learn.org/stable/modules/mixture.html) Gaussian mixture models. Web. © 2007 - 2017. scikit-learn developers.
^2^ [Scikit-learn: Machine Learning in Python](http://jmlr.csail.mit.edu/papers/v12/pedregosa11a.html), Pedregosa et al., JMLR 12, pp. 2825-2830, 2011.
|
A1FE4B06DB60F8A9C916FBEAF5C7482155BD62E3 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/hdbscan.html?context=cdpaas&locale=en | HDBSCAN node (SPSS Modeler) | HDBSCAN node
Hierarchical Density-Based Spatial Clustering (HDBSCAN)© uses unsupervised learning to find clusters, or dense regions, of a data set.
The HDBSCAN node in watsonx.ai exposes the core features and commonly used parameters of the HDBSCAN library. The node is implemented in Python, and you can use it to cluster your dataset into distinct groups when you don't know what those groups are at first. Unlike most learning methods in watsonx.ai, HDBSCAN models do not use a target field. This type of learning, with no target field, is called unsupervised learning. Rather than trying to predict an outcome, HDBSCAN tries to uncover patterns in the set of input fields. Records are grouped so that records within a group or cluster tend to be similar to each other, but records in different groups are dissimilar. The HDBSCAN algorithm views clusters as areas of high density separated by areas of low density. Due to this rather generic view, clusters found by HDBSCAN can be any shape, as opposed to k-means which assumes that clusters are convex shaped. Outlier points that lie alone in low-density regions are also marked. HDBSCAN also supports scoring of new samples.^1^
To use the HDBSCAN node, you must set up an upstream Type node. The HDBSCAN node will read input values from the Type node (or from the Types of an upstream import node).
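A minimal sketch using the hdbscan library follows; the data is synthetic and the min_cluster_size value is arbitrary.

```python
import numpy as np
import hdbscan

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 0.5, size=(100, 2)),
               rng.normal(5, 0.5, size=(100, 2)),
               rng.uniform(-5, 10, size=(20, 2))])   # two dense groups plus noise

clusterer = hdbscan.HDBSCAN(min_cluster_size=15)
labels = clusterer.fit_predict(X)      # -1 marks outliers in low-density regions
print(np.unique(labels, return_counts=True))
```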
For more information about HDBSCAN clustering algorithms, see the [HDBSCAN documentation](http://hdbscan.readthedocs.io/en/latest/). ^1^
^1^ "User Guide / Tutorial." The hdbscan Clustering Library. Web. © 2016, Leland McInnes, John Healy, Steve Astels.
| # HDBSCAN node #
Hierarchical Density\-Based Spatial Clustering (HDBSCAN)© uses unsupervised learning to find clusters, or dense regions, of a data set\.
The HDBSCAN node in watsonx\.ai exposes the core features and commonly used parameters of the HDBSCAN library\. The node is implemented in Python, and you can use it to cluster your dataset into distinct groups when you don't know what those groups are at first\. Unlike most learning methods in watsonx\.ai, HDBSCAN models do *not* use a target field\. This type of learning, with no target field, is called unsupervised learning\. Rather than trying to predict an outcome, HDBSCAN tries to uncover patterns in the set of input fields\. Records are grouped so that records within a group or cluster tend to be similar to each other, but records in different groups are dissimilar\. The HDBSCAN algorithm views clusters as areas of high density separated by areas of low density\. Due to this rather generic view, clusters found by HDBSCAN can be any shape, as opposed to k\-means which assumes that clusters are convex shaped\. Outlier points that lie alone in low\-density regions are also marked\. HDBSCAN also supports scoring of new samples\.^1^
To use the HDBSCAN node, you must set up an upstream Type node\. The HDBSCAN node will read input values from the Type node (or from the Types of an upstream import node)\.
For more information about HDBSCAN clustering algorithms, see the [HDBSCAN documentation](http://hdbscan.readthedocs.io/en/latest/)\. ^1^
^1^ "User Guide / Tutorial\." *The hdbscan Clustering Library*\. Web\. © 2016, Leland McInnes, John Healy, Steve Astels\.
<!-- </article "role="article" "> -->
|
13F7C9C7B52EC7152F2B3D81B6EB42DB0319A6F4 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/histogram.html?context=cdpaas&locale=en | Histogram node (SPSS Modeler) | Histogram node
Histogram nodes show the occurrence of values for numeric fields. They are often used to explore the data before manipulations and model building. Similar to the Distribution node, Histogram nodes are frequently used to reveal imbalances in the data.
Note: To show the occurrence of values for symbolic fields, you should use a Distribution node.
| # Histogram node #
Histogram nodes show the occurrence of values for numeric fields\. They are often used to explore the data before manipulations and model building\. Similar to the Distribution node, Histogram nodes are frequently used to reveal imbalances in the data\.
Note: To show the occurrence of values for symbolic fields, you should use a Distribution node\.
<!-- </article "role="article" "> -->
|
00205C92C52FA28DB619EE1F9C8D76FE8564DB88 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/history.html?context=cdpaas&locale=en | History node (SPSS Modeler) | History node
History nodes are most often used for sequential data, such as time series data.
They are used to create new fields containing data from fields in previous records. When using a History node, you may want to use data that is presorted by a particular field. You can use a Sort node to do this.
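The History node itself is not exposed as code here, but a pandas analogy may help clarify the idea of pulling values from previous records into new fields; the column names below are hypothetical.

```python
import pandas as pd

# Illustrative time series, presorted by date (the node's usual prerequisite).
df = pd.DataFrame({
    "date": pd.date_range("2024-01-01", periods=5, freq="D"),
    "sales": [10, 12, 9, 15, 14],
})

# Derive new fields holding values from the previous one and two records.
df["sales_prev1"] = df["sales"].shift(1)
df["sales_prev2"] = df["sales"].shift(2)
print(df)
```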
| # History node #
History nodes are most often used for sequential data, such as time series data\.
They are used to create new fields containing data from fields in previous records\. When using a History node, you may want to use data that is presorted by a particular field\. You can use a Sort node to do this\.
<!-- </article "role="article" "> -->
|
1BC1FE73146C70FA2A76241470314A4732EFD918 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/isotonicas.html?context=cdpaas&locale=en | Isotonic-AS node (SPSS Modeler) | Isotonic-AS node
Isotonic Regression belongs to the family of regression algorithms. The Isotonic-AS node in watsonx.ai is implemented in Spark.
For details, see [Isotonic regression](https://spark.apache.org/docs/2.2.0/mllib-isotonic-regression.html). ^1^
^1^ "Regression - RDD-based API." Apache Spark. MLlib: Main Guide. Web. 3 Oct 2017.
| # Isotonic\-AS node #
Isotonic Regression belongs to the family of regression algorithms\. The Isotonic\-AS node in watsonx\.ai is implemented in Spark\.
For details, see [Isotonic regression](https://spark.apache.org/docs/2.2.0/mllib-isotonic-regression.html)\. ^1^
^1^ "Regression \- RDD\-based API\." *Apache Spark*\. MLlib: Main Guide\. Web\. 3 Oct 2017\.
<!-- </article "role="article" "> -->
|
22A8F7539D1374784E9BF247B1370C430910F43D | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/kdemodel.html?context=cdpaas&locale=en | KDE node (SPSS Modeler) | KDE node
Kernel Density Estimation (KDE)© uses the Ball Tree or KD Tree algorithms for efficient queries, and walks the line between unsupervised learning, feature engineering, and data modeling.
Neighbor-based approaches such as KDE are some of the most popular and useful density estimation techniques. KDE can be performed in any number of dimensions, though in practice high dimensionality can cause a degradation of performance. The KDE Modeling node and the KDE Simulation node in watsonx.ai expose the core features and commonly used parameters of the KDE library. The nodes are implemented in Python. ^1^
To use a KDE node, you must set up an upstream Type node. The KDE node will read input values from the Type node (or from the Types of an upstream import node).
The KDE Modeling node is available under the Modeling node palette. The KDE Modeling node generates a model nugget, and the nugget's scored values are kernel density values from the input data.
The KDE Simulation node is available under the Outputs node palette. The KDE Simulation node generates a KDE Gen source node that can create some records that have the same distribution as the input data. In the KDE Gen node properties, you can specify how many records the node will create (default is 1) and generate a random seed.
For more information about KDE, including examples, see the [KDE documentation](http://scikit-learn.org/stable/modules/density.html#kernel-density-estimation). ^1^
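As a hedged illustration of kernel density scoring with the scikit-learn library referenced above, the sketch below fits a KernelDensity estimator and scores the training records; the bandwidth and kernel choice are illustrative, not the node's defaults.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

# Illustrative one-dimensional sample; KDE also works in higher dimensions.
rng = np.random.default_rng(2)
X = rng.normal(loc=0.0, scale=1.0, size=(200, 1))

# Fit a Gaussian kernel density estimate; the bandwidth value is illustrative.
kde = KernelDensity(kernel="gaussian", bandwidth=0.4).fit(X)

# score_samples returns the log-density for each record, analogous to the
# kernel density values scored by the model nugget.
log_density = kde.score_samples(X[:5])
print(np.exp(log_density))
```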
^1^ "User Guide." Kernel Density Estimation. Web. © 2007-2018, scikit-learn developers.
| # KDE node #
Kernel Density Estimation (KDE)© uses the Ball Tree or KD Tree algorithms for efficient queries, and walks the line between unsupervised learning, feature engineering, and data modeling\.
Neighbor\-based approaches such as KDE are some of the most popular and useful density estimation techniques\. KDE can be performed in any number of dimensions, though in practice high dimensionality can cause a degradation of performance\. The KDE Modeling node and the KDE Simulation node in watsonx\.ai expose the core features and commonly used parameters of the KDE library\. The nodes are implemented in Python\. ^1^
To use a KDE node, you must set up an upstream Type node\. The KDE node will read input values from the Type node (or from the Types of an upstream import node)\.
The KDE Modeling node is available under the Modeling node palette\. The KDE Modeling node generates a model nugget, and the nugget's scored values are kernel density values from the input data\.
The KDE Simulation node is available under the Outputs node palette\. The KDE Simulation node generates a KDE Gen source node that can create some records that have the same distribution as the input data\. In the KDE Gen node properties, you can specify how many records the node will create (default is 1) and generate a random seed\.
For more information about KDE, including examples, see the [KDE documentation](http://scikit-learn.org/stable/modules/density.html#kernel-density-estimation)\. ^1^
^1^ "User Guide\." *Kernel Density Estimation*\. Web\. © 2007\-2018, scikit\-learn developers\.
<!-- </article "role="article" "> -->
|
033E2B1CD9E006383C2D2C045B8834BFBBAB0F09 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/kdesimulation.html?context=cdpaas&locale=en | KDE Simulation node (SPSS Modeler) | KDE Simulation node
Kernel Density Estimation (KDE)© uses the Ball Tree or KD Tree algorithms for efficient queries, and walks the line between unsupervised learning, feature engineering, and data modeling.
Neighbor-based approaches such as KDE are some of the most popular and useful density estimation techniques. KDE can be performed in any number of dimensions, though in practice high dimensionality can cause a degradation of performance. The KDE Modeling node and the KDE Simulation node in watsonx.ai expose the core features and commonly used parameters of the KDE library. The nodes are implemented in Python. ^1^
To use a KDE node, you must set up an upstream Type node. The KDE node will read input values from the Type node (or from the Types of an upstream import node).
The KDE Modeling node is available under the Modeling node palette. The KDE Modeling node generates a model nugget, and the nugget's scored values are kernel density values from the input data.
The KDE Simulation node is available under the Outputs node palette. The KDE Simulation node generates a KDE Gen source node that can create some records that have the same distribution as the input data. In the KDE Gen node properties, you can specify how many records the node will create (default is 1) and generate a random seed.
For more information about KDE, including examples, see the [KDE documentation](http://scikit-learn.org/stable/modules/density.html#kernel-density-estimation). ^1^
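The sketch below illustrates, with scikit-learn's KernelDensity, how new records can be drawn from a fitted density, which is the general idea behind the KDE Gen source node; the data and parameter values are illustrative only.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

# Fit a density estimate to illustrative two-dimensional input data.
rng = np.random.default_rng(3)
X = rng.normal(loc=[1.0, 2.0], scale=0.5, size=(300, 2))
kde = KernelDensity(kernel="gaussian", bandwidth=0.3).fit(X)

# Draw new records that follow (approximately) the same distribution as the
# input data, similar in spirit to what a KDE Gen source node produces.
simulated = kde.sample(n_samples=10, random_state=42)
print(simulated)
```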
^1^ "User Guide." Kernel Density Estimation. Web. © 2007-2018, scikit-learn developers.
| # KDE Simulation node #
Kernel Density Estimation (KDE)© uses the Ball Tree or KD Tree algorithms for efficient queries, and walks the line between unsupervised learning, feature engineering, and data modeling\.
Neighbor\-based approaches such as KDE are some of the most popular and useful density estimation techniques\. KDE can be performed in any number of dimensions, though in practice high dimensionality can cause a degradation of performance\. The KDE Modeling node and the KDE Simulation node in watsonx\.ai expose the core features and commonly used parameters of the KDE library\. The nodes are implemented in Python\. ^1^
To use a KDE node, you must set up an upstream Type node\. The KDE node will read input values from the Type node (or from the Types of an upstream import node)\.
The KDE Modeling node is available under the Modeling node palette\. The KDE Modeling node generates a model nugget, and the nugget's scored values are kernel density values from the input data\.
The KDE Simulation node is available under the Outputs node palette\. The KDE Simulation node generates a KDE Gen source node that can create some records that have the same distribution as the input data\. In the KDE Gen node properties, you can specify how many records the node will create (default is 1) and generate a random seed\.
For more information about KDE, including examples, see the [KDE documentation](http://scikit-learn.org/stable/modules/density.html#kernel-density-estimation)\. ^1^
^1^ "User Guide\." *Kernel Density Estimation*\. Web\. © 2007\-2018, scikit\-learn developers\.
<!-- </article "role="article" "> -->
|
13A1FF3338F4AC1EB2CF3FF6781283B49AC8B5A6 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/kmeans.html?context=cdpaas&locale=en | K-Means node (SPSS Modeler) | K-Means node
The K-Means node provides a method of cluster analysis. It can be used to cluster the dataset into distinct groups when you don't know what those groups are at the beginning. Unlike most learning methods in SPSS Modeler, K-Means models do not use a target field. This type of learning, with no target field, is called unsupervised learning. Instead of trying to predict an outcome, K-Means tries to uncover patterns in the set of input fields. Records are grouped so that records within a group or cluster tend to be similar to each other, but records in different groups are dissimilar.
K-Means works by defining a set of starting cluster centers derived from data. It then assigns each record to the cluster to which it is most similar, based on the record's input field values. After all cases have been assigned, the cluster centers are updated to reflect the new set of records assigned to each cluster. The records are then checked again to see whether they should be reassigned to a different cluster, and the record assignment/cluster iteration process continues until either the maximum number of iterations is reached, or the change between one iteration and the next fails to exceed a specified threshold.
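A minimal sketch of this assign-and-update loop, using scikit-learn's KMeans rather than the node's own implementation; the cluster count, iteration limit, and tolerance values are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

# Illustrative data with three natural groupings.
rng = np.random.default_rng(4)
X = np.vstack([
    rng.normal(loc=center, scale=0.4, size=(50, 2))
    for center in ([0, 0], [4, 0], [2, 3])
])

# max_iter and tol mirror the stopping rules described above: the assignment/update
# loop ends at the iteration limit or when cluster centers stop moving appreciably.
kmeans = KMeans(n_clusters=3, n_init=10, max_iter=300, tol=1e-4, random_state=0)
labels = kmeans.fit_predict(X)
print(kmeans.cluster_centers_, labels[:10])
```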
Note: The resulting model depends to a certain extent on the order of the training data. Reordering the data and rebuilding the model may lead to a different final cluster model.
Requirements. To train a K-Means model, you need one or more fields with the role set to Input. Fields with the role set to Output, Both, or None are ignored.
Strengths. You do not need to have data on group membership to build a K-Means model. The K-Means model is often the fastest method of clustering for large datasets.
| # K\-Means node #
The K\-Means node provides a method of cluster analysis\. It can be used to cluster the dataset into distinct groups when you don't know what those groups are at the beginning\. Unlike most learning methods in SPSS Modeler, K\-Means models do not use a target field\. This type of learning, with no target field, is called unsupervised learning\. Instead of trying to predict an outcome, K\-Means tries to uncover patterns in the set of input fields\. Records are grouped so that records within a group or cluster tend to be similar to each other, but records in different groups are dissimilar\.
K\-Means works by defining a set of starting cluster centers derived from data\. It then assigns each record to the cluster to which it is most similar, based on the record's input field values\. After all cases have been assigned, the cluster centers are updated to reflect the new set of records assigned to each cluster\. The records are then checked again to see whether they should be reassigned to a different cluster, and the record assignment/cluster iteration process continues until either the maximum number of iterations is reached, or the change between one iteration and the next fails to exceed a specified threshold\.
Note: The resulting model depends to a certain extent on the order of the training data\. Reordering the data and rebuilding the model may lead to a different final cluster model\.
Requirements\. To train a K\-Means model, you need one or more fields with the role set to `Input`\. Fields with the role set to `Output`, `Both`, or `None` are ignored\.
Strengths\. You do not need to have data on group membership to build a K\-Means model\. The K\-Means model is often the fastest method of clustering for large datasets\.
<!-- </article "role="article" "> -->
|
DCE39CA6C888CA6D5CF3F9B9D18D06FD3BD2DFBE | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/kmeansas.html?context=cdpaas&locale=en | K-Means-AS node (SPSS Modeler) | K-Means-AS node
K-Means is one of the most commonly used clustering algorithms. It clusters data points into a predefined number of clusters. The K-Means-AS node in SPSS Modeler is implemented in Spark.
See [K-Means Algorithms](https://spark.apache.org/docs/2.2.0/ml-clustering.html) for more details.^1^
Note that the K-Means-AS node performs one-hot encoding automatically for categorical variables.
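For orientation, here is a minimal Spark ML sketch of k-means clustering in Python; it is not the node's own implementation, the column names are hypothetical, and categorical encoding is only noted in a comment because the node performs that step automatically.

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.clustering import KMeans

spark = SparkSession.builder.appName("kmeans-as-sketch").getOrCreate()

# Illustrative numeric data; categorical inputs would normally be indexed and
# one-hot encoded first (the K-Means-AS node does this step automatically).
df = spark.createDataFrame(
    [(1.0, 1.2), (0.8, 1.1), (7.9, 8.1), (8.2, 7.8)],
    ["x1", "x2"],
)

# Spark ML expects a single vector column of features.
assembled = VectorAssembler(inputCols=["x1", "x2"], outputCol="features").transform(df)

model = KMeans(k=2, seed=42).fit(assembled)
model.transform(assembled).select("x1", "x2", "prediction").show()
```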
^1^ "Clustering." Apache Spark. MLlib: Main Guide. Web. 3 Oct 2017.
| # K\-Means\-AS node #
K\-Means is one of the most commonly used clustering algorithms\. It clusters data points into a predefined number of clusters\. The K\-Means\-AS node in SPSS Modeler is implemented in Spark\.
See [K\-Means Algorithms](https://spark.apache.org/docs/2.2.0/ml-clustering.html) for more details\.^1^
Note that the K\-Means\-AS node performs one\-hot encoding automatically for categorical variables\.
^1^ "Clustering\." *Apache Spark*\. MLlib: Main Guide\. Web\. 3 Oct 2017\.
<!-- </article "role="article" "> -->
|
1DD1ED59E93DA4F6576E7EB1E420213AB34DD1DD | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/knn.html?context=cdpaas&locale=en | KNN node (SPSS Modeler) | KNN node
Nearest Neighbor Analysis is a method for classifying cases based on their similarity to other cases. In machine learning, it was developed as a way to recognize patterns of data without requiring an exact match to any stored patterns, or cases. Similar cases are near each other and dissimilar cases are distant from each other. Thus, the distance between two cases is a measure of their dissimilarity.
Cases that are near each other are said to be "neighbors." When a new case (holdout) is presented, its distance from each of the cases in the model is computed. The classifications of the most similar cases – the nearest neighbors – are tallied and the new case is placed into the category that contains the greatest number of nearest neighbors.
You can specify the number of nearest neighbors to examine; this value is called k. As an example, consider how a new case would be classified using two different values of k. When k = 5, the new case is placed in category 1 because a majority of the nearest neighbors belong to category 1. However, when k = 9, the new case is placed in category 0 because a majority of the nearest neighbors belong to category 0.
Nearest neighbor analysis can also be used to compute values for a continuous target. In this situation, the average or median target value of the nearest neighbors is used to obtain the predicted value for the new case.
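The sketch below illustrates both uses with scikit-learn's nearest-neighbor estimators: classification with two different values of k, and regression where the prediction is the average of the neighbors' target values. The data and parameter values are illustrative, not the node's defaults.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier, KNeighborsRegressor

# Illustrative training data: two numeric inputs and a binary category.
rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(2, 1, (30, 2))])
y = np.array([0] * 30 + [1] * 30)
new_case = [[1.0, 1.0]]

# The choice of k can change the classification of a borderline case.
for k in (5, 9):
    clf = KNeighborsClassifier(n_neighbors=k).fit(X, y)
    print(k, clf.predict(new_case), clf.predict_proba(new_case))

# For a continuous target, the prediction is the mean of the nearest neighbors' values.
y_cont = X[:, 0] * 2.0 + X[:, 1]
reg = KNeighborsRegressor(n_neighbors=5).fit(X, y_cont)
print(reg.predict(new_case))
```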
| # KNN node #
Nearest Neighbor Analysis is a method for classifying cases based on their similarity to other cases\. In machine learning, it was developed as a way to recognize patterns of data without requiring an exact match to any stored patterns, or cases\. Similar cases are near each other and dissimilar cases are distant from each other\. Thus, the distance between two cases is a measure of their dissimilarity\.
Cases that are near each other are said to be "neighbors\." When a new case (holdout) is presented, its distance from each of the cases in the model is computed\. The classifications of the most similar cases – the nearest neighbors – are tallied and the new case is placed into the category that contains the greatest number of nearest neighbors\.
You can specify the number of nearest neighbors to examine; this value is called `k`\. As an example, consider how a new case would be classified using two different values of `k`\. When `k` = 5, the new case is placed in category `1` because a majority of the nearest neighbors belong to category `1`\. However, when `k` = 9, the new case is placed in category `0` because a majority of the nearest neighbors belong to category `0`\.
Nearest neighbor analysis can also be used to compute values for a continuous target\. In this situation, the average or median target value of the nearest neighbors is used to obtain the predicted value for the new case\.
<!-- </article "role="article" "> -->
|
F965BE0F67B8B3C26BE38939A33FA8AB74AEA4CC | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/kohonen.html?context=cdpaas&locale=en | Kohonen node (SPSS Modeler) | Kohonen node
Kohonen networks are a type of neural network that performs clustering, also known as a knet or a self-organizing map. This type of network can be used to cluster the dataset into distinct groups when you don't know what those groups are at the beginning. Records are grouped so that records within a group or cluster tend to be similar to each other, and records in different groups are dissimilar.
The basic units are neurons, and they are organized into two layers: the input layer and the output layer (also called the output map). All of the input neurons are connected to all of the output neurons, and these connections have strengths, or weights, associated with them. During training, each unit competes with all of the others to "win" each record.
The output map is a two-dimensional grid of neurons, with no connections between the units.
Input data is presented to the input layer, and the values are propagated to the output layer. The output neuron with the strongest response is said to be the winner and is the answer for that input.
Initially, all weights are random. When a unit wins a record, its weights (along with those of other nearby units, collectively referred to as a neighborhood) are adjusted to better match the pattern of predictor values for that record. All of the input records are shown, and weights are updated accordingly. This process is repeated many times until the changes become very small. As training proceeds, the weights on the grid units are adjusted so that they form a two-dimensional "map" of the clusters (hence the term self-organizing map).
When the network is fully trained, records that are similar should be close together on the output map, whereas records that are vastly different will be far apart.
Unlike most learning methods in watsonx.ai, Kohonen networks do not use a target field. This type of learning, with no target field, is called unsupervised learning. Instead of trying to predict an outcome, Kohonen nets try to uncover patterns in the set of input fields. Usually, a Kohonen net will end up with a few units that summarize many observations (strong units), and several units that don't really correspond to any of the observations (weak units). The strong units (and sometimes other units adjacent to them in the grid) represent probable cluster centers.
Another use of Kohonen networks is in dimension reduction. The spatial characteristic of the two-dimensional grid provides a mapping from the k original predictors to two derived features that preserve the similarity relationships in the original predictors. In some cases, this can give you the same kind of benefit as factor analysis or PCA.
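The node's own implementation is not exposed as code here, but a self-organizing map can be sketched with the third-party MiniSom library, assuming it is installed; the grid size, learning rate, and iteration count below are illustrative only.

```python
import numpy as np
from minisom import MiniSom

# Illustrative data: 3 input fields, values scaled to [0, 1].
rng = np.random.default_rng(6)
data = rng.random((200, 3))

# A 6 x 6 output grid; weights start random and are pulled toward winning records
# (and their neighborhoods) as training proceeds.
som = MiniSom(x=6, y=6, input_len=3, sigma=1.0, learning_rate=0.5, random_seed=0)
som.random_weights_init(data)
som.train_random(data, num_iteration=1000)

# The winning grid coordinates act as the two-dimensional cluster assignment.
print([som.winner(record) for record in data[:5]])
```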
Note that the method for calculating the default size of the output grid is different from older versions of SPSS Modeler. The method will generally produce smaller output layers that are faster to train and generalize better. If you find that you get poor results with the default size, try increasing the size of the output grid on the Expert tab.
Requirements. To train a Kohonen net, you need one or more fields with the role set to Input. Fields with the role set to Target, Both, or None are ignored.
Strengths. You do not need to have data on group membership to build a Kohonen network model. You don't even need to know the number of groups to look for. Kohonen networks start with a large number of units, and as training progresses, the units gravitate toward the natural clusters in the data. You can look at the number of observations captured by each unit in the model nugget to identify the strong units, which can give you a sense of the appropriate number of clusters.
| # Kohonen node #
Kohonen networks are a type of neural network that performs clustering, also known as a knet or a self\-organizing map\. This type of network can be used to cluster the dataset into distinct groups when you don't know what those groups are at the beginning\. Records are grouped so that records within a group or cluster tend to be similar to each other, and records in different groups are dissimilar\.
The basic units are neurons, and they are organized into two layers: the input layer and the output layer (also called the output map)\. All of the input neurons are connected to all of the output neurons, and these connections have strengths, or weights, associated with them\. During training, each unit competes with all of the others to "win" each record\.
The output map is a two\-dimensional grid of neurons, with no connections between the units\.
Input data is presented to the input layer, and the values are propagated to the output layer\. The output neuron with the strongest response is said to be the winner and is the answer for that input\.
Initially, all weights are random\. When a unit wins a record, its weights (along with those of other nearby units, collectively referred to as a neighborhood) are adjusted to better match the pattern of predictor values for that record\. All of the input records are shown, and weights are updated accordingly\. This process is repeated many times until the changes become very small\. As training proceeds, the weights on the grid units are adjusted so that they form a two\-dimensional "map" of the clusters (hence the term self\-organizing map)\.
When the network is fully trained, records that are similar should be close together on the output map, whereas records that are vastly different will be far apart\.
Unlike most learning methods in watsonx\.ai, Kohonen networks do *not* use a target field\. This type of learning, with no target field, is called unsupervised learning\. Instead of trying to predict an outcome, Kohonen nets try to uncover patterns in the set of input fields\. Usually, a Kohonen net will end up with a few units that summarize many observations (strong units), and several units that don't really correspond to any of the observations (weak units)\. The strong units (and sometimes other units adjacent to them in the grid) represent probable cluster centers\.
Another use of Kohonen networks is in dimension reduction\. The spatial characteristic of the two\-dimensional grid provides a mapping from the `k` original predictors to two derived features that preserve the similarity relationships in the original predictors\. In some cases, this can give you the same kind of benefit as factor analysis or PCA\.
Note that the method for calculating the default size of the output grid is different from older versions of SPSS Modeler\. The method will generally produce smaller output layers that are faster to train and generalize better\. If you find that you get poor results with the default size, try increasing the size of the output grid on the Expert tab\.
Requirements\. To train a Kohonen net, you need one or more fields with the role set to `Input`\. Fields with the role set to `Target`, `Both`, or `None` are ignored\.
Strengths\. You do not need to have data on group membership to build a Kohonen network model\. You don't even need to know the number of groups to look for\. Kohonen networks start with a large number of units, and as training progresses, the units gravitate toward the natural clusters in the data\. You can look at the number of observations captured by each unit in the model nugget to identify the strong units, which can give you a sense of the appropriate number of clusters\.
<!-- </article "role="article" "> -->
|
67241853FC2471C6C0719F1B98E40625358B2E19 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/languageidentifier.html?context=cdpaas&locale=en | Reading in source text (SPSS Modeler) | Reading in source text
You can use the Language Identifier node to identify the natural language of a text field within your source data. The output of this node is a derived field that contains the detected language code.

Data for text mining can be in any of the standard formats that are used by SPSS Modeler flows, including databases or other "rectangular" formats that represent data in rows and columns.
* To read in text from any of the standard data formats used by SPSS Modeler flows, such as a database with one or more text fields for customer comments, you can use an Import node.
* When you're processing large amounts of data, which might include text in several different languages, use the Language Identifier node to identify the language used in a specific field.
| # Reading in source text #
You can use the Language Identifier node to identify the natural language of a text field within your source data\. The output of this node is a derived field that contains the detected language code\.

Data for text mining can be in any of the standard formats that are used by SPSS Modeler flows, including databases or other "rectangular" formats that represent data in rows and columns\.
<!-- <ul> -->
* To read in text from any of the standard data formats used by SPSS Modeler flows, such as a database with one or more text fields for customer comments, you can use an Import node\.
* When you're processing large amounts of data, which might include text in several different languages, use the Language Identifier node to identify the language used in a specific field\.
<!-- </ul> -->
<!-- </article "role="article" "> -->
|
FC8006009802AE14770BE53062787D8A392B0070 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/linear.html?context=cdpaas&locale=en | Linear node (SPSS Modeler) | Linear node
Linear regression is a common statistical technique for classifying records based on the values of numeric input fields. Linear regression fits a straight line or surface that minimizes the discrepancies between predicted and actual output values.
Requirements. Only numeric fields can be used in a linear regression model. You must have exactly one target field (with the role set to Target) and one or more predictors (with the role set to Input). Fields with a role of Both or None are ignored, as are non-numeric fields. (If necessary, non-numeric fields can be recoded using a Derive node.)
Strengths. Linear regression models are relatively simple and give an easily interpreted mathematical formula for generating predictions. Because linear regression is a long-established statistical procedure, the properties of these models are well understood. Linear models are also typically very fast to train. The Linear node provides methods for automatic field selection in order to eliminate nonsignificant input fields from the equation.
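A minimal scikit-learn sketch of the same idea follows; it is not the node itself, and the predictors, coefficients, and noise level are illustrative.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Illustrative data: two numeric predictors and a numeric target.
rng = np.random.default_rng(7)
X = rng.normal(size=(100, 2))
y = 3.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.1, size=100)

model = LinearRegression().fit(X, y)

# The fitted line gives an easily interpreted formula: y ~ intercept + coefficients * x.
print(model.intercept_, model.coef_)
print(model.predict([[0.5, -0.2]]))
```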
Tip: In cases where the target field is categorical rather than a continuous range, such as yes/no or churn/don't churn, logistic regression can be used as an alternative. Logistic regression also provides support for non-numeric inputs, removing the need to recode these fields.
Note: When first creating a flow, you select which runtime to use. By default, flows use the IBM SPSS Modeler runtime. If you want to use native Spark algorithms instead of SPSS algorithms, select the Spark runtime. Properties for this node will vary depending on which runtime option you choose.
| # Linear node #
Linear regression is a common statistical technique for classifying records based on the values of numeric input fields\. Linear regression fits a straight line or surface that minimizes the discrepancies between predicted and actual output values\.
Requirements\. Only numeric fields can be used in a linear regression model\. You must have exactly one target field (with the role set to Target) and one or more predictors (with the role set to Input)\. Fields with a role of Both or None are ignored, as are non\-numeric fields\. (If necessary, non\-numeric fields can be recoded using a Derive node\.)
Strengths\. Linear regression models are relatively simple and give an easily interpreted mathematical formula for generating predictions\. Because linear regression is a long\-established statistical procedure, the properties of these models are well understood\. Linear models are also typically very fast to train\. The Linear node provides methods for automatic field selection in order to eliminate nonsignificant input fields from the equation\.
Tip: In cases where the target field is categorical rather than a continuous range, such as yes/no or churn/don't churn, logistic regression can be used as an alternative\. Logistic regression also provides support for non\-numeric inputs, removing the need to recode these fields\.
Note: When first creating a flow, you select which runtime to use\. By default, flows use the IBM SPSS Modeler runtime\. If you want to use native Spark algorithms instead of SPSS algorithms, select the Spark runtime\. Properties for this node will vary depending on which runtime option you choose\.
<!-- </article "role="article" "> -->
|
2D9ACE87F4859BF7EF8CDF4EBBF8307C51034471 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/linearas.html?context=cdpaas&locale=en | Linear-AS node (SPSS Modeler) | Linear-AS node
Linear regression is a common statistical technique for classifying records based on the values of numeric input fields. Linear regression fits a straight line or surface that minimizes the discrepancies between predicted and actual output values.
Requirements. Only numeric fields and categorical predictors can be used in a linear regression model. You must have exactly one target field (with the role set to Target) and one or more predictors (with the role set to Input). Fields with a role of Both or None are ignored, as are non-numeric fields. (If necessary, non-numeric fields can be recoded using a Derive node.)
Strengths. Linear regression models are relatively simple and give an easily interpreted mathematical formula for generating predictions. Because linear regression is a long-established statistical procedure, the properties of these models are well understood. Linear models are also typically very fast to train. The Linear node provides methods for automatic field selection in order to eliminate non-significant input fields from the equation.
Note: In cases where the target field is categorical rather than a continuous range, such as yes/no or churn/don't churn, logistic regression can be used as an alternative. Logistic regression also provides support for non-numeric inputs, removing the need to recode these fields.
| # Linear\-AS node #
Linear regression is a common statistical technique for classifying records based on the values of numeric input fields\. Linear regression fits a straight line or surface that minimizes the discrepancies between predicted and actual output values\.
Requirements\. Only numeric fields and categorical predictors can be used in a linear regression model\. You must have exactly one target field (with the role set to Target) and one or more predictors (with the role set to Input)\. Fields with a role of Both or None are ignored, as are non\-numeric fields\. (If necessary, non\-numeric fields can be recoded using a Derive node\.)
Strengths\. Linear regression models are relatively simple and give an easily interpreted mathematical formula for generating predictions\. Because linear regression is a long\-established statistical procedure, the properties of these models are well understood\. Linear models are also typically very fast to train\. The Linear node provides methods for automatic field selection in order to eliminate non\-significant input fields from the equation\.
Note: In cases where the target field is categorical rather than a continuous range, such as yes/no or churn/don't churn, logistic regression can be used as an alternative\. Logistic regression also provides support for non\-numeric inputs, removing the need to recode these fields\.
<!-- </article "role="article" "> -->
|
DE0C1913D6D770641762ED518FEFE8FFFC5A1F13 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/logreg.html?context=cdpaas&locale=en | Logistic node (SPSS Modeler) | Logistic node
Logistic regression, also known as nominal regression, is a statistical technique for classifying records based on values of input fields. It is analogous to linear regression but takes a categorical target field instead of a numeric one. Both binomial models (for targets with two discrete categories) and multinomial models (for targets with more than two categories) are supported.
Logistic regression works by building a set of equations that relate the input field values to the probabilities associated with each of the output field categories. After the model is generated, you can use it to estimate probabilities for new data. For each record, a probability of membership is computed for each possible output category. The target category with the highest probability is assigned as the predicted output value for that record.
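The following scikit-learn sketch illustrates the probability-based scoring described above for a binomial target; it is not the node's own implementation, and the data and field meanings are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative binomial setup: two usage-style predictors and a churn flag.
rng = np.random.default_rng(8)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# predict_proba gives a probability per category; the highest one is the
# predicted category, and the runner-up is the "second-best guess".
probs = model.predict_proba([[0.2, -0.1]])
print(probs, model.predict([[0.2, -0.1]]))
```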
Binomial example. A telecommunications provider is concerned about the number of customers it is losing to competitors. Using service usage data, you can create a binomial model to predict which customers are liable to transfer to another provider and customize offers so as to retain as many customers as possible. A binomial model is used because the target has two distinct categories (likely to transfer or not).
Note: For binomial models only, string fields are limited to eight characters. If necessary, longer strings can be recoded using a Reclassify node or by using the Anonymize node.
Multinomial example. A telecommunications provider has segmented its customer base by service usage patterns, categorizing the customers into four groups. Using demographic data to predict group membership, you can create a multinomial model to classify prospective customers into groups and then customize offers for individual customers.
Requirements. One or more input fields and exactly one categorical target field with two or more categories. For a binomial model the target must have a measurement level of Flag. For a multinomial model the target can have a measurement level of Flag, or of Nominal with two or more categories. Fields set to Both or None are ignored. Fields used in the model must have their types fully instantiated.
Strengths. Logistic regression models are often quite accurate. They can handle symbolic and numeric input fields. They can give predicted probabilities for all target categories so that a second-best guess can easily be identified. Logistic models are most effective when group membership is a truly categorical field; if group membership is based on values of a continuous range field (for example, high IQ versus low IQ), you should consider using linear regression to take advantage of the richer information offered by the full range of values. Logistic models can also perform automatic field selection, although other approaches such as tree models or Feature Selection might do this more quickly on large datasets. Finally, since logistic models are well understood by many analysts and data miners, they may be used by some as a baseline against which other modeling techniques can be compared.
When processing large datasets, you can improve performance noticeably by disabling the likelihood-ratio test, an advanced output option.
| # Logistic node #
Logistic regression, also known as nominal regression, is a statistical technique for classifying records based on values of input fields\. It is analogous to linear regression but takes a categorical target field instead of a numeric one\. Both binomial models (for targets with two discrete categories) and multinomial models (for targets with more than two categories) are supported\.
Logistic regression works by building a set of equations that relate the input field values to the probabilities associated with each of the output field categories\. After the model is generated, you can use it to estimate probabilities for new data\. For each record, a probability of membership is computed for each possible output category\. The target category with the highest probability is assigned as the predicted output value for that record\.
Binomial example\. A telecommunications provider is concerned about the number of customers it is losing to competitors\. Using service usage data, you can create a binomial model to predict which customers are liable to transfer to another provider and customize offers so as to retain as many customers as possible\. A binomial model is used because the target has two distinct categories (likely to transfer or not)\.
Note: For binomial models only, string fields are limited to eight characters\. If necessary, longer strings can be recoded using a Reclassify node or by using the Anonymize node\.
Multinomial example\. A telecommunications provider has segmented its customer base by service usage patterns, categorizing the customers into four groups\. Using demographic data to predict group membership, you can create a multinomial model to classify prospective customers into groups and then customize offers for individual customers\.
Requirements\. One or more input fields and exactly one categorical target field with two or more categories\. For a binomial model the target must have a measurement level of `Flag`\. For a multinomial model the target can have a measurement level of `Flag`, or of `Nominal` with two or more categories\. Fields set to `Both` or `None` are ignored\. Fields used in the model must have their types fully instantiated\.
Strengths\. Logistic regression models are often quite accurate\. They can handle symbolic and numeric input fields\. They can give predicted probabilities for all target categories so that a second\-best guess can easily be identified\. Logistic models are most effective when group membership is a truly categorical field; if group membership is based on values of a continuous range field (for example, high IQ versus low IQ), you should consider using linear regression to take advantage of the richer information offered by the full range of values\. Logistic models can also perform automatic field selection, although other approaches such as tree models or Feature Selection might do this more quickly on large datasets\. Finally, since logistic models are well understood by many analysts and data miners, they may be used by some as a baseline against which other modeling techniques can be compared\.
When processing large datasets, you can improve performance noticeably by disabling the likelihood\-ratio test, an advanced output option\.
<!-- </article "role="article" "> -->
|
A9E9D62E92156CEBC0D4619CDE322AF48CACE913 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/lsvm.html?context=cdpaas&locale=en | LSVM node (SPSS Modeler) | LSVM node
With the LSVM node, you can use a linear support vector machine to classify data. LSVM is particularly suited for use with wide datasets--that is, those with a large number of predictor fields. You can use the default settings on the node to produce a basic model relatively quickly, or you can use the build options to experiment with different settings.
The LSVM node is similar to the SVM node, but it is linear and is better at handling a large number of records.
After the model is built, you can:
* Browse the model nugget to display the relative importance of the input fields in building the model.
* Append a Table node to the model nugget to view the model output.
Example. A medical researcher has obtained a dataset containing characteristics of a number of human cell samples extracted from patients who were believed to be at risk of developing cancer. Analysis of the original data showed that many of the characteristics differed significantly between benign and malignant samples. The researcher wants to develop an LSVM model that can use the values of similar cell characteristics in samples from other patients to give an early indication of whether their samples might be benign or malignant.
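As a loose parallel to this example, the sketch below trains a linear support vector classifier with scikit-learn on a public cell-measurement dataset; it illustrates linear SVM classification in general, not the node's own implementation, and the preprocessing and regularization settings are illustrative.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

# A public dataset with many numeric cell-measurement predictors and a
# benign/malignant target, loosely mirroring the example above.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scaling the wide set of predictors helps the linear SVM converge.
model = make_pipeline(StandardScaler(), LinearSVC(C=1.0)).fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))
```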
| # LSVM node #
With the LSVM node, you can use a linear support vector machine to classify data\. LSVM is particularly suited for use with wide datasets\-\-that is, those with a large number of predictor fields\. You can use the default settings on the node to produce a basic model relatively quickly, or you can use the build options to experiment with different settings\.
The LSVM node is similar to the SVM node, but it is linear and is better at handling a large number of records\.
After the model is built, you can:
<!-- <ul> -->
* Browse the model nugget to display the relative importance of the input fields in building the model\.
* Append a Table node to the model nugget to view the model output\.
<!-- </ul> -->
Example\. A medical researcher has obtained a dataset containing characteristics of a number of human cell samples extracted from patients who were believed to be at risk of developing cancer\. Analysis of the original data showed that many of the characteristics differed significantly between benign and malignant samples\. The researcher wants to develop an LSVM model that can use the values of similar cell characteristics in samples from other patients to give an early indication of whether their samples might be benign or malignant\.
<!-- </article "role="article" "> -->
|
774FD49C617DAC62F48EB31E08757E0AEC3D1282 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/matrix.html?context=cdpaas&locale=en | Matrix node (SPSS Modeler) | Matrix node
Use the Matrix node to create a table that shows relationships between fields. It is most commonly used to show the relationship between two categorical fields (flag, nominal, or ordinal), but it can also be used to show relationships between continuous (numeric range) fields.
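A pandas cross-tabulation gives a rough analogy for the kind of table the node produces; the field names and values below are hypothetical.

```python
import pandas as pd

# Illustrative categorical data: a flag field crossed with a nominal field.
df = pd.DataFrame({
    "churned": ["yes", "no", "no", "yes", "no", "yes"],
    "plan":    ["basic", "basic", "premium", "premium", "basic", "basic"],
})

# A cross-tabulation of counts shows the relationship between the two fields.
print(pd.crosstab(df["churned"], df["plan"]))
```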
| # Matrix node #
Use the Matrix node to create a table that shows relationships between fields\. It is most commonly used to show the relationship between two categorical fields (flag, nominal, or ordinal), but it can also be used to show relationships between continuous (numeric range) fields\.
<!-- </article "role="article" "> -->
|
7B586E10794F26EA2654A7F7C34EC9EA48C8BFD4 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/means.html?context=cdpaas&locale=en | Means node (SPSS Modeler) | Means node
The Means node compares the means between independent groups or between pairs of related fields to test whether a significant difference exists. For example, you can compare mean revenues before and after running a promotion or compare revenues from customers who didn't receive the promotion with those who did.
You can compare means in two different ways, depending on your data:
* Between groups within a field. To compare independent groups, select a test field and a grouping field. For example, you could exclude a sample of "holdout" customers when sending a promotion and compare mean revenues for the holdout group with all of the others. In this case, you would specify a single test field that indicates the revenue for each customer, with a flag or nominal field that indicates whether they received the offer. The samples are independent in the sense that each record is assigned to one group or another, and there is no way to link a specific member of one group to a specific member of another. You can also specify a nominal field with more than two values to compare the means for multiple groups. When executed, the node calculates a one-way ANOVA test on the selected fields. In cases where there are only two field groups, the one-way ANOVA results are essentially the same as an independent-samples t test.
* Between pairs of fields. When comparing means for two related fields, the groups must be paired in some way for the results to be meaningful. For example, you could compare the mean revenues from the same group of customers before and after running a promotion or compare usage rates for a service between husband-wife pairs to see if they are different. Each record contains two separate but related measures that can be compared meaningfully. When executed, the node calculates a paired-samples t test on each field pair selected. See the sketch after this list for a minimal illustration of both tests.
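The following is a minimal sketch of the two kinds of comparison, using SciPy's statistical tests rather than the node itself; the revenue figures are simulated and illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)

# Independent groups: revenue for holdout customers vs. customers who got the offer.
holdout = rng.normal(loc=100, scale=15, size=50)
offer = rng.normal(loc=110, scale=15, size=50)
print(stats.ttest_ind(offer, holdout))       # independent-samples t test
print(stats.f_oneway(offer, holdout))        # one-way ANOVA (equivalent with 2 groups)

# Paired fields: revenue for the same customers before and after a promotion.
before = rng.normal(loc=100, scale=15, size=50)
after = before + rng.normal(loc=5, scale=5, size=50)
print(stats.ttest_rel(after, before))        # paired-samples t test
```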
| # Means node #
The Means node compares the means between independent groups or between pairs of related fields to test whether a significant difference exists\. For example, you can compare mean revenues before and after running a promotion or compare revenues from customers who didn't receive the promotion with those who did\.
You can compare means in two different ways, depending on your data:
<!-- <ul> -->
* Between groups within a field\. To compare independent groups, select a test field and a grouping field\. For example, you could exclude a sample of "holdout" customers when sending a promotion and compare mean revenues for the holdout group with all of the others\. In this case, you would specify a single test field that indicates the revenue for each customer, with a flag or nominal field that indicates whether they received the offer\. The samples are independent in the sense that each record is assigned to one group or another, and there is no way to link a specific member of one group to a specific member of another\. You can also specify a nominal field with more than two values to compare the means for multiple groups\. When executed, the node calculates a one\-way ANOVA test on the selected fields\. In cases where there are only two field groups, the one\-way ANOVA results are essentially the same as an independent\-samples `t` test\.
* Between pairs of fields\. When comparing means for two related fields, the groups must be paired in some way for the results to be meaningful\. For example, you could compare the mean revenues from the same group of customers before and after running a promotion or compare usage rates for a service between husband\-wife pairs to see if they are different\. Each record contains two separate but related measures that can be compared meaningfully\. When executed, the node calculates a paired\-samples `t` test on each field pair selected\.
<!-- </ul> -->
<!-- </article "role="article" "> -->
|
6647035446FC3A28586EBABC619D10DB5FE3F4FD | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/merge.html?context=cdpaas&locale=en | Merge node (SPSS Modeler) | Merge node
The function of a Merge node is to take multiple input records and create a single output record containing all or some of the input fields. This is a useful operation when you want to merge data from different sources, such as internal customer data and purchased demographic data.
You can merge data in the following ways.
* Merge by Order concatenates corresponding records from all sources in the order of input until the smallest data source is exhausted. It is important if using this option that you have sorted your data using a Sort node.
* Merge using a Key field, such as Customer ID, to specify how to match records from one data source with records from the other(s). Several types of joins are possible, including inner join, full outer join, partial outer join, and anti-join. See the sketch after this list for a minimal illustration of these join types.
* Merge by Condition means that you can specify a condition to be satisfied for the merge to take place. You can specify the condition directly in the node, or build the condition using the Expression Builder.
* Merge by Ranked Condition is a left sided outer join in which you specify a condition to be satisfied for the merge to take place and a ranking expression which sorts into order from low to high. Most often used to merge geospatial data, you can specify the condition directly in the node, or build the condition using the Expression Builder.
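A minimal pandas sketch of key-based merging and the join types listed above; it is an analogy rather than the node's own implementation, and the data is illustrative.

```python
import pandas as pd

customers = pd.DataFrame({
    "Customer ID": [1, 2, 3],
    "name": ["Ann", "Bo", "Cy"],
})
demographics = pd.DataFrame({
    "Customer ID": [2, 3, 4],
    "region": ["East", "West", "North"],
})

# Key-based merges; "how" selects the join type described above.
inner = pd.merge(customers, demographics, on="Customer ID", how="inner")
full_outer = pd.merge(customers, demographics, on="Customer ID", how="outer")
partial_outer = pd.merge(customers, demographics, on="Customer ID", how="left")

# An anti-join keeps customers with no match in the second source.
anti = customers[~customers["Customer ID"].isin(demographics["Customer ID"])]
print(inner, full_outer, partial_outer, anti, sep="\n\n")
```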
| # Merge node #
The function of a Merge node is to take multiple input records and create a single output record containing all or some of the input fields\. This is a useful operation when you want to merge data from different sources, such as internal customer data and purchased demographic data\.
You can merge data in the following ways\.
<!-- <ul> -->
* Merge by Order concatenates corresponding records from all sources in the order of input until the smallest data source is exhausted\. It is important if using this option that you have sorted your data using a Sort node\.
* Merge using a Key field, such as `Customer ID`, to specify how to match records from one data source with records from the other(s)\. Several types of joins are possible, including inner join, full outer join, partial outer join, and anti\-join\.
* Merge by Condition means that you can specify a condition to be satisfied for the merge to take place\. You can specify the condition directly in the node, or build the condition using the Expression Builder\.
* Merge by Ranked Condition is a left sided outer join in which you specify a condition to be satisfied for the merge to take place and a ranking expression which sorts into order from low to high\. Most often used to merge geospatial data, you can specify the condition directly in the node, or build the condition using the Expression Builder\.
<!-- </ul> -->
<!-- </article "role="article" "> -->
|
61E8DF28E1A79B4BBA03CDA39F350BE5E55DAC7B | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/missingvalues_changes.html?context=cdpaas&locale=en | Functions available for missing values (SPSS Modeler) | Functions available for missing values
Different methods are available for dealing with missing values in your data. You may choose to use functionality available in Data Refinery or in nodes.
| # Functions available for missing values #
Different methods are available for dealing with missing values in your data\. You may choose to use functionality available in Data Refinery or in nodes\.
<!-- </article "role="article" "> -->
|
0E5C87704E816097FF9E649620A1818798B5DB3F | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/missingvalues_fields.html?context=cdpaas&locale=en | Handling fields with missing values (SPSS Modeler) | Handling fields with missing values
If the majority of missing values are concentrated in a small number of fields, you can address them at the field level rather than at the record level. This approach also allows you to experiment with the relative importance of particular fields before deciding on an approach for handling missing values. If a field is unimportant in modeling, it probably isn't worth keeping, regardless of how many missing values it has.
For example, a market research company may collect data from a general questionnaire containing 50 questions. Two of the questions address age and political persuasion, information that many people are reluctant to give. In this case, Age and Political_persuasion have many missing values.
| # Handling fields with missing values #
If the majority of missing values are concentrated in a small number of fields, you can address them at the field level rather than at the record level\. This approach also allows you to experiment with the relative importance of particular fields before deciding on an approach for handling missing values\. If a field is unimportant in modeling, it probably isn't worth keeping, regardless of how many missing values it has\.
For example, a market research company may collect data from a general questionnaire containing 50 questions\. Two of the questions address age and political persuasion, information that many people are reluctant to give\. In this case, `Age` and `Political_persuasion` have many missing values\.
<!-- </article "role="article" "> -->
|
D5FAFC625D1A1D0793D9521351E9B59A04AF00E9 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/missingvalues_overview.html?context=cdpaas&locale=en | Missing data values (SPSS Modeler) | Missing data values
During the data preparation phase of data mining, you will often want to replace missing values in the data.
Missing values are values in the data set that are unknown, uncollected, or incorrectly entered. Usually, such values aren't valid for their fields. For example, the field Sex should contain the values M and F. If you discover the values Y or Z in the field, you can safely assume that such values aren't valid and should therefore be interpreted as blanks. Likewise, a negative value for the field Age is meaningless and should also be interpreted as a blank. Frequently, such obviously wrong values are purposely entered, or fields are left blank, during a questionnaire to indicate a nonresponse. At times, you may want to examine these blanks more closely to determine whether a nonresponse, such as the refusal to give one's age, is a factor in predicting a specific outcome.
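As a hedged illustration of turning obviously invalid values into blanks before modeling, the pandas sketch below recodes out-of-range values as missing; the field names and valid ranges are taken from the example above and are illustrative.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "Sex": ["M", "F", "Y", "F", "Z"],
    "Age": [34, -1, 29, 51, -1],
})

# Treat values outside the valid ranges as blanks (missing) before modeling.
df["Sex"] = df["Sex"].where(df["Sex"].isin(["M", "F"]), other=np.nan)
df["Age"] = df["Age"].where(df["Age"] >= 0, other=np.nan)

# Optionally flag records where a nonresponse might itself be informative.
df["Age_missing"] = df["Age"].isna()
print(df)
```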
Some modeling techniques handle missing data better than others. For example, the [C5.0 node](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/c50.html) and the [Apriori node](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/apriori.html) cope well with values that are explicitly declared as "missing" in a [Type node](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/type.html). Other modeling techniques have trouble dealing with missing values and experience longer training times, resulting in less-accurate models.
There are several types of missing values recognized by SPSS Modeler:
* Null or system-missing values. These are nonstring values that have been left blank in the database or source file and have not been specifically defined as "missing" in an [Import](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/_nodes_import.html) or Type node. System-missing values are displayed as $null$. Note that empty strings are not considered nulls in SPSS Modeler, although they may be treated as nulls by certain databases.
* Empty strings and white space. Empty string values and white space (strings with no visible characters) are treated as distinct from null values. Empty strings are treated as equivalent to white space for most purposes. For example, if you select the option to treat white space as blanks in an Import or Type node, this setting applies to empty strings as well.
* Blank or user-defined missing values. These are values such as unknown, 99, or –1 that are explicitly defined in an Import node or Type node as missing. Optionally, you can also choose to treat nulls and white space as blanks, which allows them to be flagged for special treatment and to be excluded from most calculations. For example, you can use the @BLANK function to treat these values, along with other types of missing values, as blanks.
Reading in mixed data. Note that when you're reading in fields with numeric storage (either integer, real, time, timestamp, or date), any non-numeric values are set to null or system missing. This is because, unlike some applications, SPSS Modeler doesn't allow mixed storage types within a field. To avoid this, you should read in any fields with mixed data as strings by changing the storage type in the Import node or external application as necessary.
Reading empty strings from Oracle. When reading from or writing to an Oracle database, be aware that, unlike SPSS Modeler and unlike most other databases, Oracle treats and stores empty string values as equivalent to null values. This means that the same data extracted from an Oracle database may behave differently than when extracted from a file or another database, and the data may return different results.
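To make these categories concrete, the following sketch shows how the same three kinds of values might be detected in raw data outside of SPSS Modeler, using pandas. The column names and the codes 99 and -1 are hypothetical examples, not defaults of the product.

```python
import numpy as np
import pandas as pd

# Hypothetical raw data containing all three kinds of missing values
df = pd.DataFrame({
    "Sex": ["M", "F", "", "   ", None],   # empty string, white space, and a null
    "Age": [34, -1, 99, np.nan, 27],      # -1 and 99 used as user-defined missing codes
})

is_null = df["Sex"].isna()                                    # system-missing (null) values
is_blank_string = df["Sex"].fillna("x").str.strip() == ""     # empty strings and white space
is_user_missing = df["Age"].isin([-1, 99])                    # user-defined missing codes

print(is_null.sum(), is_blank_string.sum(), is_user_missing.sum())
```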
| # Missing data values #
During the data preparation phase of data mining, you will often want to replace missing values in the data\.
Missing values are values in the data set that are unknown, uncollected, or incorrectly entered\. Usually, such values aren't valid for their fields\. For example, the field `Sex` should contain the values `M` and `F`\. If you discover the values `Y` or `Z` in the field, you can safely assume that such values aren't valid and should therefore be interpreted as blanks\. Likewise, a negative value for the field `Age` is meaningless and should also be interpreted as a blank\. Frequently, such obviously wrong values are purposely entered, or fields are left blank, during a questionnaire to indicate a nonresponse\. At times, you may want to examine these blanks more closely to determine whether a nonresponse, such as the refusal to give one's age, is a factor in predicting a specific outcome\.
Some modeling techniques handle missing data better than others\. For example, the [C5\.0 node](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/c50.html) and the [Apriori node](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/apriori.html) cope well with values that are explicitly declared as "missing" in a [Type node](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/type.html)\. Other modeling techniques have trouble dealing with missing values and experience longer training times, resulting in less\-accurate models\.
There are several types of missing values recognized by SPSS Modeler:
<!-- <ul> -->
* Null or system\-missing values\. These are nonstring values that have been left blank in the database or source file and have not been specifically defined as "missing" in an [Import](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/_nodes_import.html) or Type node\. System\-missing values are displayed as `$null$`\. Note that empty strings are not considered nulls in SPSS Modeler, although they may be treated as nulls by certain databases\.
* Empty strings and white space\. Empty string values and white space (strings with no visible characters) are treated as distinct from null values\. Empty strings are treated as equivalent to white space for most purposes\. For example, if you select the option to treat white space as blanks in an Import or Type node, this setting applies to empty strings as well\.
* Blank or user\-defined missing values\. These are values such as `unknown`, `99`, or `–1` that are explicitly defined in an Import node or Type node as missing\. Optionally, you can also choose to treat nulls and white space as blanks, which allows them to be flagged for special treatment and to be excluded from most calculations\. For example, you can use the `@BLANK` function to treat these values, along with other types of missing values, as blanks\.
<!-- </ul> -->
Reading in mixed data\. Note that when you're reading in fields with numeric storage (either integer, real, time, timestamp, or date), any non\-numeric values are set to `null` or `system missing`\. This is because, unlike some applications, SPSS Modeler doesn't allow mixed storage types within a field\. To avoid this, you should read in any fields with mixed data as strings by changing the storage type in the Import node or external application as necessary\.
Reading empty strings from Oracle\. When reading from or writing to an Oracle database, be aware that, unlike SPSS Modeler and unlike most other databases, Oracle treats and stores empty string values as equivalent to null values\. This means that the same data extracted from an Oracle database may behave differently than when extracted from a file or another database, and the data may return different results\.
<!-- </article "role="article" "> -->
|
FE9FF9F5CC449798C00D008182F55BDAA91E546C | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/missingvalues_records.html?context=cdpaas&locale=en | Handling records with missing values (SPSS Modeler) | Handling records with missing values
If the majority of missing values are concentrated in a small number of records, you can just exclude those records. For example, a bank usually keeps detailed and complete records on its loan customers.
If, however, the bank is less restrictive in approving loans for its own staff members, data gathered for staff loans is likely to have several blank fields. In such a case, there are two options for handling these missing values:
* You can use a [Select node](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/select.html) to remove the staff records
* If the data set is large, you can discard all records with blanks
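For example, the Select node approach might be scripted as follows. This is a minimal sketch that assumes the standard SPSS Modeler Python scripting API and a flow that already contains a Variable File import node; the field names and the discard condition are hypothetical.

```python
# Assumed: the standard SPSS Modeler Python scripting API and an existing import node
stream = modeler.script.stream()
source = stream.findByType("variablefile", None)   # hypothetical existing import node

select = stream.create("select", "Drop staff records")
select.setPropertyValue("mode", "Discard")          # discard records that match the condition
# Hypothetical CLEM condition: staff records, or records with a blank Age field
select.setPropertyValue("condition", "Staff_member = 'Y' or @BLANK(Age)")

stream.link(source, select)
```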
| # Handling records with missing values #
If the majority of missing values are concentrated in a small number of records, you can just exclude those records\. For example, a bank usually keeps detailed and complete records on its loan customers\.
If, however, the bank is less restrictive in approving loans for its own staff members, data gathered for staff loans is likely to have several blank fields\. In such a case, there are two options for handling these missing values:
<!-- <ul> -->
* You can use a [Select node](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/select.html) to remove the staff records
* If the data set is large, you can discard all records with blanks
<!-- </ul> -->
<!-- </article "role="article" "> -->
|
3BA46A09CF64CE6120BE65C44614995B50B67DA1 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/missingvalues_system.html?context=cdpaas&locale=en | Handling records with system missing values (SPSS Modeler) | Handling records with system missing values
| # Handling records with system missing values #
<!-- </article "role="article" "> -->
|
01C8222216B795904018497993CC5E44D51A3B35 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/missingvalues_treating.html?context=cdpaas&locale=en | Handling missing values (SPSS Modeler) | Handling missing values
You should decide how to treat missing values in light of your business or domain knowledge. To ease training time and increase accuracy, you may want to remove blanks from your data set. On the other hand, the presence of blank values may lead to new business opportunities or additional insights.
In choosing the best technique, you should consider the following aspects of your data:
* Size of the data set
* Number of fields containing blanks
* Amount of missing information
In general terms, there are two approaches you can follow:
* You can exclude fields or records with missing values
* You can impute, replace, or coerce missing values using a variety of methods
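As an illustration of the second approach, the following sketch imputes blank and null values with a Filler node through the Python scripting API. It assumes the documented Filler node scripting properties (fields, replace_mode, replace_with) and an upstream Set Globals node that supplies @GLOBAL_MEAN; the field name is hypothetical.

```python
# Assumed: standard SPSS Modeler scripting properties for the Filler node
stream = modeler.script.stream()
typenode = stream.findByType("type", None)          # hypothetical existing Type node

filler = stream.create("filler", "Impute Age")
filler.setPropertyValue("fields", ["Age"])                     # hypothetical field
filler.setPropertyValue("replace_mode", "BlankAndNull")        # replace blanks and nulls
filler.setPropertyValue("replace_with", "@GLOBAL_MEAN(Age)")   # requires a Set Globals node upstream

stream.link(typenode, filler)
```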
| # Handling missing values #
You should decide how to treat missing values in light of your business or domain knowledge\. To ease training time and increase accuracy, you may want to remove blanks from your data set\. On the other hand, the presence of blank values may lead to new business opportunities or additional insights\.
In choosing the best technique, you should consider the following aspects of your data:
<!-- <ul> -->
* Size of the data set
* Number of fields containing blanks
* Amount of missing information
<!-- </ul> -->
In general terms, there are two approaches you can follow:
<!-- <ul> -->
* You can exclude fields or records with missing values
* You can impute, replace, or coerce missing values using a variety of methods
<!-- </ul> -->
<!-- </article "role="article" "> -->
|
6576530EC5D705B8BF323F6C459C32A87AE3F9A4 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/mlpas.html?context=cdpaas&locale=en | MultiLayerPerceptron-AS node (SPSS Modeler) | MultiLayerPerceptron-AS node
Multilayer perceptron is a classifier based on the feedforward artificial neural network and consists of multiple layers.
Each layer is fully connected to the next layer in the network. See [Multilayer Perceptron Classifier (MLPC)](https://spark.apache.org/docs/latest/ml-classification-regression.html#multilayer-perceptron-classifier) for details.^1^
The MultiLayerPerceptron-AS node in watsonx.ai is implemented in Spark. To use this node, you must set up an upstream Type node. The MultiLayerPerceptron-AS node will read input values from the Type node (or from the Types of an upstream import node).
^1^ "Multilayer perceptron classifier." Apache Spark. MLlib: Main Guide. Web. 5 Oct 2018.
| # MultiLayerPerceptron\-AS node #
Multilayer perceptron is a classifier based on the feedforward artificial neural network and consists of multiple layers\.
Each layer is fully connected to the next layer in the network\. See [Multilayer Perceptron Classifier (MLPC)](https://spark.apache.org/docs/latest/ml-classification-regression.html#multilayer-perceptron-classifier) for details\.^1^
The MultiLayerPerceptron\-AS node in watsonx\.ai is implemented in Spark\. To use this node, you must set up an upstream Type node\. The MultiLayerPerceptron\-AS node will read input values from the Type node (or from the Types of an upstream import node)\.
^1^ "Multilayer perceptron classifier\." *Apache Spark*\. MLlib: Main Guide\. Web\. 5 Oct 2018\.
<!-- </article "role="article" "> -->
|
5F0FC43F57AB9AF130DEA6A795E1E81A6AA95ACC | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/multiplot.html?context=cdpaas&locale=en | Multiplot node (SPSS Modeler) | Multiplot node
A multiplot is a special type of plot that displays multiple Y fields over a single X field. The Y fields are plotted as colored lines and each is equivalent to a Plot node with Style set to Line and X Mode set to Sort. Multiplots are useful when you have time sequence data and want to explore the fluctuation of several variables over time.
| # Multiplot node #
A multiplot is a special type of plot that displays multiple `Y` fields over a single `X` field\. The `Y` fields are plotted as colored lines and each is equivalent to a Plot node with Style set to Line and X Mode set to Sort\. Multiplots are useful when you have time sequence data and want to explore the fluctuation of several variables over time\.
<!-- </article "role="article" "> -->
|
9F06DF311976F336CB3164B08D5DA7D6F93419E2 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/neuralnetwork.html?context=cdpaas&locale=en | Neural Net node (SPSS Modeler) | Neural Net node
A neural network can approximate a wide range of predictive models with minimal demands on model structure and assumption. The form of the relationships is determined during the learning process. If a linear relationship between the target and predictors is appropriate, the results of the neural network should closely approximate those of a traditional linear model. If a nonlinear relationship is more appropriate, the neural network will automatically approximate the "correct" model structure.
The trade-off for this flexibility is that the neural network is not easily interpretable. If you are trying to explain an underlying process that produces the relationships between the target and predictors, it would be better to use a more traditional statistical model. However, if model interpretability is not important, you can obtain good predictions using a neural network.
Field requirements. There must be at least one Target and one Input. Fields set to Both or None are ignored. There are no measurement level restrictions on targets or predictors (inputs).
The initial weights assigned to neural networks during model building, and therefore the final models produced, depend on the order of the fields in the data. Watsonx.ai automatically sorts data by field name before presenting it to the neural network for training. This means that explicitly changing the order of the fields in the data upstream will not affect the generated neural net models when a random seed is set in the model builder. However, changing the input field names in a way that changes their sort order will produce different neural network models, even with a random seed set in the model builder. The model quality will not be affected significantly given different sort order of field names.
| # Neural Net node #
A neural network can approximate a wide range of predictive models with minimal demands on model structure and assumption\. The form of the relationships is determined during the learning process\. If a linear relationship between the target and predictors is appropriate, the results of the neural network should closely approximate those of a traditional linear model\. If a nonlinear relationship is more appropriate, the neural network will automatically approximate the "correct" model structure\.
The trade\-off for this flexibility is that the neural network is not easily interpretable\. If you are trying to explain an underlying process that produces the relationships between the target and predictors, it would be better to use a more traditional statistical model\. However, if model interpretability is not important, you can obtain good predictions using a neural network\.
Field requirements\. There must be at least one Target and one Input\. Fields set to Both or None are ignored\. There are no measurement level restrictions on targets or predictors (inputs)\.
The initial weights assigned to neural networks during model building, and therefore the final models produced, depend on the order of the fields in the data\. Watsonx\.ai automatically sorts data by field name before presenting it to the neural network for training\. This means that explicitly changing the order of the fields in the data upstream will not affect the generated neural net models when a random seed is set in the model builder\. However, changing the input field names in a way that changes their sort order will produce different neural network models, even with a random seed set in the model builder\. The model quality will not be affected significantly given different sort order of field names\.
<!-- </article "role="article" "> -->
|
9933646421686556C9AE8459EE2E51ED9DAB1C33 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/nodes_cache_disable.html?context=cdpaas&locale=en | Disabling or caching nodes in a flow (SPSS Modeler) | Disabling or caching nodes in a flow
You can disable a node so it's ignored when the flow runs. And you can set up a cache on a node.
| # Disabling or caching nodes in a flow #
You can disable a node so it's ignored when the flow runs\. And you can set up a cache on a node\.
<!-- </article "role="article" "> -->
|
759B6927189FEA6BE3124BF79FA527873CB84EA6 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/ocsvm.html?context=cdpaas&locale=en | One-Class SVM node (SPSS Modeler) | One-Class SVM node
The One-Class SVM© node uses an unsupervised learning algorithm. The node can be used for novelty detection. It will detect the soft boundary of a given set of samples, to then classify new points as belonging to that set or not. This One-Class SVM modeling node is implemented in Python and requires the scikit-learn© Python library.
For details about the scikit-learn library, see [Support Vector Machines](http://scikit-learn.org/stable/modules/svm.html#svm-outlier-detection)^1^.
The Modeling tab on the palette contains the One-Class SVM node and other Python nodes.
Note: One-Class SVM is used for unsupervised outlier and novelty detection. In most cases, we recommend using a known, "normal" dataset to build the model so the algorithm can set a correct boundary for the given samples. Parameters for the model – such as nu, gamma, and kernel – impact the result significantly. So you may need to experiment with these options until you find the optimal settings for your situation.
^1^Smola, Schölkopf. "A Tutorial on Support Vector Regression." Statistics and Computing Archive, vol. 14, no. 3, August 2004, pp. 199-222. (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.114.4288)
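Because the node is built on scikit-learn, the underlying estimator can be sketched as follows; the data and the nu, gamma, and kernel settings are hypothetical and should be tuned for your own data.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal_data = rng.normal(loc=0.0, scale=1.0, size=(200, 2))   # "known normal" training set
new_points = np.array([[0.1, -0.2], [4.0, 4.5]])              # one likely inlier, one outlier

ocsvm = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
ocsvm.fit(normal_data)

print(ocsvm.predict(new_points))   # +1 = belongs to the training distribution, -1 = novelty
```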
| # One\-Class SVM node #
The One\-Class SVM© node uses an unsupervised learning algorithm\. The node can be used for novelty detection\. It will detect the soft boundary of a given set of samples, to then classify new points as belonging to that set or not\. This One\-Class SVM modeling node is implemented in Python and requires the scikit\-learn© Python library\.
For details about the scikit\-learn library, see [Support Vector Machines](http://scikit-learn.org/stable/modules/svm.html#svm-outlier-detection)^1^\.
The Modeling tab on the palette contains the One\-Class SVM node and other Python nodes\.
Note: One\-Class SVM is used for unsupervised outlier and novelty detection\. In most cases, we recommend using a known, "normal" dataset to build the model so the algorithm can set a correct boundary for the given samples\. Parameters for the model – such as nu, gamma, and kernel – impact the result significantly\. So you may need to experiment with these options until you find the optimal settings for your situation\.
^1^Smola, Schölkopf\. "A Tutorial on Support Vector Regression\." *Statistics and Computing Archive*, vol\. 14, no\. 3, August 2004, pp\. 199\-222\. (http://citeseerx\.ist\.psu\.edu/viewdoc/summary?doi=10\.1\.1\.114\.4288)
<!-- </article "role="article" "> -->
|
98FC8E9A3380E4593D9BF08B78CE6A7797C0204B | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/partition.html?context=cdpaas&locale=en | Partition node (SPSS Modeler) | Partition node
Partition nodes are used to generate a partition field that splits the data into separate subsets or samples for the training, testing, and validation stages of model building. By using one sample to generate the model and a separate sample to test it, you can get a good indication of how well the model will generalize to larger datasets that are similar to the current data.
The Partition node generates a nominal field with the role set to Partition. Alternatively, if an appropriate field already exists in your data, it can be designated as a partition using a Type node. In this case, no separate Partition node is required. Any instantiated nominal field with two or three values can be used as a partition, but flag fields cannot be used.
Multiple partition fields can be defined in a flow, but if so, a single partition field must be selected in each modeling node that uses partitioning. (If only one partition is present, it is automatically used whenever partitioning is enabled.)
To create a partition field based on some other criterion such as a date range or location, you can also use a Derive node. See [Derive node](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/derive.html#derive) for more information.
Example. When building an RFM flow to identify recent customers who have positively responded to previous marketing campaigns, the marketing department of a sales company uses a Partition node to split the data into training and test partitions.
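Conceptually, the generated partition field is just a nominal field that assigns each record to a sample. The following pandas sketch (outside the product) illustrates the idea; the 70/30 split and the partition labels are hypothetical.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
df = pd.DataFrame({"customer_id": range(10), "response": rng.integers(0, 2, 10)})

# Hypothetical labels for a two-way split into training and testing samples
df["Partition"] = np.where(rng.random(len(df)) < 0.7, "1_Training", "2_Testing")
print(df["Partition"].value_counts())
```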
| # Partition node #
Partition nodes are used to generate a partition field that splits the data into separate subsets or samples for the training, testing, and validation stages of model building\. By using one sample to generate the model and a separate sample to test it, you can get a good indication of how well the model will generalize to larger datasets that are similar to the current data\.
The Partition node generates a nominal field with the role set to Partition\. Alternatively, if an appropriate field already exists in your data, it can be designated as a partition using a Type node\. In this case, no separate Partition node is required\. Any instantiated nominal field with two or three values can be used as a partition, but flag fields cannot be used\.
Multiple partition fields can be defined in a flow, but if so, a single partition field must be selected in each modeling node that uses partitioning\. (If only one partition is present, it is automatically used whenever partitioning is enabled\.)
To create a partition field based on some other criterion such as a date range or location, you can also use a Derive node\. See [Derive node](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/derive.html#derive) for more information\.
Example\. When building an RFM flow to identify recent customers who have positively responded to previous marketing campaigns, the marketing department of a sales company uses a Partition node to split the data into training and test partitions\.
<!-- </article "role="article" "> -->
|
CFC54BB4CEA29104BD4F9793B51ABE558AA0250D | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/plot.html?context=cdpaas&locale=en | Plot node (SPSS Modeler) | Plot node
Plot nodes show the relationship between numeric fields. You can create a plot using points (also known as a scatterplot), or you can use lines. You can create three types of line plots by specifying an X Mode in the node properties.
| # Plot node #
Plot nodes show the relationship between numeric fields\. You can create a plot using points (also known as a scatterplot), or you can use lines\. You can create three types of line plots by specifying an X Mode in the node properties\.
<!-- </article "role="article" "> -->
|
5E2A4B92C4F5F84B3DDE2EAD6827C7FA89EB0565 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/quest.html?context=cdpaas&locale=en | QUEST node (SPSS Modeler) | QUEST node
QUEST—or Quick, Unbiased, Efficient Statistical Tree—is a binary classification method for building decision trees. A major motivation in its development was to reduce the processing time required for large C&R Tree analyses with either many variables or many cases. A second goal of QUEST was to reduce the tendency found in classification tree methods to favor inputs that allow more splits, that is, continuous (numeric range) input fields or those with many categories.
* QUEST uses a sequence of rules, based on significance tests, to evaluate the input fields at a node. For selection purposes, as little as a single test may need to be performed on each input at a node. Unlike C&R Tree, all splits are not examined, and unlike C&R Tree and CHAID, category combinations are not tested when evaluating an input field for selection. This speeds the analysis.
* Splits are determined by running quadratic discriminant analysis using the selected input on groups formed by the target categories. This method again results in a speed improvement over exhaustive search (C&R Tree) to determine the optimal split.
Requirements. Input fields can be continuous (numeric ranges), but the target field must be categorical. All splits are binary. Weight fields cannot be used. Any ordinal (ordered set) fields used in the model must have numeric storage (not string). If necessary, the Reclassify node can be used to convert them.
Strengths. Like CHAID, but unlike C&R Tree, QUEST uses statistical tests to decide whether or not an input field is used. It also separates the issues of input selection and splitting, applying different criteria to each. This contrasts with CHAID, in which the statistical test result that determines variable selection also produces the split. Similarly, C&R Tree employs the impurity-change measure to both select the input field and to determine the split.
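The following is a rough conceptual sketch, not the node's implementation, of the unbiased input-selection idea: each input is judged by a significance test against the categorical target, and the input with the smallest p value is selected before a split is computed. The data and the choice of an ANOVA F test are illustrative assumptions.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
target = rng.integers(0, 2, size=200)               # two target categories
inputs = {
    "income": rng.normal(50 + 10 * target, 8),      # related to the target
    "age": rng.normal(40, 12, size=200),            # unrelated noise
}

p_values = {}
for name, values in inputs.items():
    groups = [values[target == c] for c in np.unique(target)]
    p_values[name] = f_oneway(*groups).pvalue       # one significance test per input

best_input = min(p_values, key=p_values.get)        # select the most significant input
print(p_values, "-> selected:", best_input)
```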
| # QUEST node #
QUEST—or Quick, Unbiased, Efficient Statistical Tree—is a binary classification method for building decision trees\. A major motivation in its development was to reduce the processing time required for large C&R Tree analyses with either many variables or many cases\. A second goal of QUEST was to reduce the tendency found in classification tree methods to favor inputs that allow more splits, that is, continuous (numeric range) input fields or those with many categories\.
<!-- <ul> -->
* QUEST uses a sequence of rules, based on significance tests, to evaluate the input fields at a node\. For selection purposes, as little as a single test may need to be performed on each input at a node\. Unlike C&R Tree, all splits are not examined, and unlike C&R Tree and CHAID, category combinations are not tested when evaluating an input field for selection\. This speeds the analysis\.
* Splits are determined by running quadratic discriminant analysis using the selected input on groups formed by the target categories\. This method again results in a speed improvement over exhaustive search (C&R Tree) to determine the optimal split\.
<!-- </ul> -->
Requirements\. Input fields can be continuous (numeric ranges), but the target field must be categorical\. All splits are binary\. Weight fields cannot be used\. Any ordinal (ordered set) fields used in the model must have numeric storage (not string)\. If necessary, the Reclassify node can be used to convert them\.
Strengths\. Like CHAID, but unlike C&R Tree, QUEST uses statistical tests to decide whether or not an input field is used\. It also separates the issues of input selection and splitting, applying different criteria to each\. This contrasts with CHAID, in which the statistical test result that determines variable selection also produces the split\. Similarly, C&R Tree employs the impurity\-change measure to both select the input field and to determine the split\.
<!-- </article "role="article" "> -->
|
2581DD8F04F917BA91F1201137AE0EFEA1F82E26 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/randomforest.html?context=cdpaas&locale=en | Random Forest node (SPSS Modeler) | Random Forest node
Random Forest© is an advanced implementation of a bagging algorithm with a tree model as the base model.
In random forests, each tree in the ensemble is built from a sample drawn with replacement (for example, a bootstrap sample) from the training set. When splitting a node during the construction of the tree, the split that is chosen is no longer the best split among all features. Instead, the split that is picked is the best split among a random subset of the features. Because of this randomness, the bias of the forest usually slightly increases (with respect to the bias of a single non-random tree) but, due to averaging, its variance also decreases, usually more than compensating for the increase in bias, hence yielding an overall better model.^1^
The Random Forest node in watsonx.ai is implemented in Python. The nodes palette contains this node and other Python nodes.
For more information about random forest algorithms, see [Forests of randomized trees](https://scikit-learn.org/stable/modules/ensemble.html#forest).
^1^L. Breiman, "Random Forests," Machine Learning, 45(1), 5-32, 2001.
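Because the node is built on the scikit-learn library, the underlying estimator can be sketched as follows; the dataset and parameter values are hypothetical.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each tree is grown on a bootstrap sample; each split considers a random feature subset
rf = RandomForestClassifier(n_estimators=100, max_features="sqrt", random_state=0)
rf.fit(X_train, y_train)
print("test accuracy:", rf.score(X_test, y_test))
```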
| # Random Forest node #
Random Forest© is an advanced implementation of a bagging algorithm with a tree model as the base model\.
In random forests, each tree in the ensemble is built from a sample drawn with replacement (for example, a bootstrap sample) from the training set\. When splitting a node during the construction of the tree, the split that is chosen is no longer the best split among all features\. Instead, the split that is picked is the best split among a random subset of the features\. Because of this randomness, the bias of the forest usually slightly increases (with respect to the bias of a single non\-random tree) but, due to averaging, its variance also decreases, usually more than compensating for the increase in bias, hence yielding an overall better model\.^1^
The Random Forest node in watsonx\.ai is implemented in Python\. The nodes palette contains this node and other Python nodes\.
For more information about random forest algorithms, see [Forests of randomized trees](https://scikit-learn.org/stable/modules/ensemble.html#forest)\.
^1^L\. Breiman, "Random Forests," Machine Learning, 45(1), 5\-32, 2001\.
<!-- </article "role="article" "> -->
|
01800E00BDFB7CFE0E751FA6C616160C48E6ED21 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/randomtrees.html?context=cdpaas&locale=en | Random Trees node (SPSS Modeler) | Random Trees node
The Random Trees node can be used with data in a distributed environment. In this node, you build an ensemble model that consists of multiple decision trees.
The Random Trees node is a tree-based classification and prediction method that is built on Classification and Regression Tree methodology. As with C&R Tree, this prediction method uses recursive partitioning to split the training records into segments with similar output field values. The node starts by examining the input fields available to it to find the best split, which is measured by the reduction in an impurity index that results from the split. The split defines two subgroups, each of which is then split into two more subgroups, and so on, until one of the stopping criteria is triggered. All splits are binary (only two subgroups).
The Random Trees node uses bootstrap sampling with replacement to generate sample data. The sample data is used to grow a tree model. During tree growth, Random Trees will not sample the data again. Instead, it randomly selects part of the predictors and uses the best one to split a tree node. This process is repeated when splitting each tree node. This is the basic idea of growing a tree in random forest.
Random Trees uses C&R Tree-like trees. Since such trees are binary, each field for splitting results in two branches. For a categorical field with multiple categories, the categories are grouped into two groups based on the inner splitting criterion. Each tree grows to the largest extent possible (there is no pruning). In scoring, Random Trees combines individual tree scores by majority voting (for classification) or average (for regression).
Random Trees differ from C&R Trees as follows:
* Random Trees nodes randomly select a specified number of predictors and use the best one from the selection to split a node. In contrast, C&R Tree finds the best one from all predictors.
* Each tree in Random Trees grows fully until each leaf node typically contains a single record. So the tree depth could be very large. But standard C&R Tree uses different stopping rules for tree growth, which usually leads to a much shallower tree.
Random Trees adds two features compared to C&R Tree:
* The first feature is bagging, where replicas of the training dataset are created by sampling with replacement from the original dataset. This action creates bootstrap samples that are of equal size to the original dataset, after which a component model is built on each replica. Together these component models form an ensemble model.
* The second feature is that, at each split of the tree, only a sampling of the input fields is considered for the impurity measure.
Requirements. To train a Random Trees model, you need one or more Input fields and one Target field. Target and input fields can be continuous (numeric range) or categorical. Fields that are set to either Both or None are ignored. Fields that are used in the model must have their types fully instantiated, and any ordinal (ordered set) fields that are used in the model must have numeric storage (not string). If necessary, the Reclassify node can be used to convert them.
Strengths. Random Trees models are robust when you are dealing with large data sets and numbers of fields. Due to the use of bagging and field sampling, they are much less prone to overfitting and thus the results that are seen in testing are more likely to be repeated when you use new data.
Note: When first creating a flow, you select which runtime to use. By default, flows use the IBM SPSS Modeler runtime. If you want to use native Spark algorithms instead of SPSS algorithms, select the Spark runtime. Properties for this node will vary depending on which runtime option you choose.
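The bagging and majority-voting mechanics described above can be illustrated with plain decision trees in scikit-learn. This is a sketch of the idea only, not the Random Trees implementation; the dataset and ensemble size are hypothetical.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=8, random_state=0)
rng = np.random.default_rng(0)

trees = []
for _ in range(25):                                     # grow an ensemble of 25 trees
    idx = rng.integers(0, len(X), size=len(X))          # bootstrap sample (with replacement)
    tree = DecisionTreeClassifier(max_features="sqrt")  # random predictor subset at each split
    trees.append(tree.fit(X[idx], y[idx]))

votes = np.array([t.predict(X) for t in trees])         # each tree scores every record
ensemble_pred = (votes.mean(axis=0) > 0.5).astype(int)  # majority vote for a binary target
print("training accuracy:", (ensemble_pred == y).mean())
```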
| # Random Trees node #
The Random Trees node can be used with data in a distributed environment\. In this node, you build an ensemble model that consists of multiple decision trees\.
The Random Trees node is a tree\-based classification and prediction method that is built on Classification and Regression Tree methodology\. As with C&R Tree, this prediction method uses recursive partitioning to split the training records into segments with similar output field values\. The node starts by examining the input fields available to it to find the best split, which is measured by the reduction in an impurity index that results from the split\. The split defines two subgroups, each of which is then split into two more subgroups, and so on, until one of the stopping criteria is triggered\. All splits are binary (only two subgroups)\.
The Random Trees node uses bootstrap sampling with replacement to generate sample data\. The sample data is used to grow a tree model\. During tree growth, Random Trees will not sample the data again\. Instead, it randomly selects part of the predictors and uses the best one to split a tree node\. This process is repeated when splitting each tree node\. This is the basic idea of growing a tree in random forest\.
Random Trees uses C&R Tree\-like trees\. Since such trees are binary, each field for splitting results in two branches\. For a categorical field with multiple categories, the categories are grouped into two groups based on the inner splitting criterion\. Each tree grows to the largest extent possible (there is no pruning)\. In scoring, Random Trees combines individual tree scores by majority voting (for classification) or average (for regression)\.
Random Trees differ from C&R Trees as follows:
<!-- <ul> -->
* Random Trees nodes randomly select a specified number of predictors and use the best one from the selection to split a node\. In contrast, C&R Tree finds the best one from all predictors\.
* Each tree in Random Trees grows fully until each leaf node typically contains a single record\. So the tree depth could be very large\. But standard C&R Tree uses different stopping rules for tree growth, which usually leads to a much shallower tree\.
<!-- </ul> -->
Random Trees adds two features compared to C&R Tree:
<!-- <ul> -->
* The first feature is bagging, where replicas of the training dataset are created by sampling with replacement from the original dataset\. This action creates bootstrap samples that are of equal size to the original dataset, after which a component model is built on each replica\. Together these component models form an ensemble model\.
* The second feature is that, at each split of the tree, only a sampling of the input fields is considered for the impurity measure\.
<!-- </ul> -->
Requirements\. To train a Random Trees model, you need one or more Input fields and one Target field\. Target and input fields can be continuous (numeric range) or categorical\. Fields that are set to either Both or None are ignored\. Fields that are used in the model must have their types fully instantiated, and any ordinal (ordered set) fields that are used in the model must have numeric storage (not string)\. If necessary, the Reclassify node can be used to convert them\.
Strengths\. Random Trees models are robust when you are dealing with large data sets and numbers of fields\. Due to the use of bagging and field sampling, they are much less prone to overfitting and thus the results that are seen in testing are more likely to be repeated when you use new data\.
Note: When first creating a flow, you select which runtime to use\. By default, flows use the IBM SPSS Modeler runtime\. If you want to use native Spark algorithms instead of SPSS algorithms, select the Spark runtime\. Properties for this node will vary depending on which runtime option you choose\.
<!-- </article "role="article" "> -->
|
2D3F7F5EFB161E0D88AE69C4710D70AA99DB0BDE | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/reclassify.html?context=cdpaas&locale=en | Reclassify node (SPSS Modeler) | Reclassify node
The Reclassify node enables the transformation from one set of categorical values to another. Reclassification is useful for collapsing categories or regrouping data for analysis.
For example, you could reclassify the values for Product into three groups, such as Kitchenware, Bath and Linens, and Appliances.
Reclassification can be performed for one or more symbolic fields. You can also choose to substitute the new values for the existing field or generate a new field.
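As a rough analogy outside the product, the reclassification in the example above amounts to mapping one set of category values onto another; the product values below are hypothetical, while the group labels follow the example in the text.

```python
import pandas as pd

df = pd.DataFrame({"Product": ["kettle", "towel", "fridge", "pan", "sheet set"]})
mapping = {
    "kettle": "Kitchenware", "pan": "Kitchenware",
    "towel": "Bath and Linens", "sheet set": "Bath and Linens",
    "fridge": "Appliances",
}
df["Product_group"] = df["Product"].map(mapping)   # generate a new field rather than overwrite
print(df)
```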
| # Reclassify node #
The Reclassify node enables the transformation from one set of categorical values to another\. Reclassification is useful for collapsing categories or regrouping data for analysis\.
For example, you could reclassify the values for `Product` into three groups, such as `Kitchenware`, `Bath and Linens`, and `Appliances`\.
Reclassification can be performed for one or more symbolic fields\. You can also choose to substitute the new values for the existing field or generate a new field\.
<!-- </article "role="article" "> -->
|
BBDEDA771A051A9B1871F9BEC9589D91421E7C0C | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/regression.html?context=cdpaas&locale=en | Regression (SPSS Modeler) | Regression node
Linear regression is a common statistical technique for classifying records based on the values of numeric input fields. Linear regression fits a straight line or surface that minimizes the discrepancies between predicted and actual output values.
Requirements. Only numeric fields can be used in a regression model. You must have exactly one target field (with the role set to Target) and one or more predictors (with the role set to Input). Fields with a role of Both or None are ignored, as are non-numeric fields. (If necessary, non-numeric fields can be recoded using a Derive node. )
Strengths. Regression models are relatively simple and give an easily interpreted mathematical formula for generating predictions. Because regression modeling is a long-established statistical procedure, the properties of these models are well understood. Regression models are also typically very fast to train. The Regression node provides methods for automatic field selection in order to eliminate nonsignificant input fields from the equation.
Note: In cases where the target field is categorical rather than a continuous range, such as yes/no or churn/don't churn, logistic regression can be used as an alternative. Logistic regression also provides support for non-numeric inputs, removing the need to recode these fields. See [Logistic node](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/logreg.html#logreg) for more information.
| # Regression node #
Linear regression is a common statistical technique for classifying records based on the values of numeric input fields\. Linear regression fits a straight line or surface that minimizes the discrepancies between predicted and actual output values\.
Requirements\. Only numeric fields can be used in a regression model\. You must have exactly one target field (with the role set to `Target`) and one or more predictors (with the role set to `Input`)\. Fields with a role of `Both` or `None` are ignored, as are non\-numeric fields\. (If necessary, non\-numeric fields can be recoded using a Derive node\. )
Strengths\. Regression models are relatively simple and give an easily interpreted mathematical formula for generating predictions\. Because regression modeling is a long\-established statistical procedure, the properties of these models are well understood\. Regression models are also typically very fast to train\. The Regression node provides methods for automatic field selection in order to eliminate nonsignificant input fields from the equation\.
Note: In cases where the target field is categorical rather than a continuous range, such as `yes`/`no` or `churn`/`don't churn`, logistic regression can be used as an alternative\. Logistic regression also provides support for non\-numeric inputs, removing the need to recode these fields\. See [Logistic node](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/logreg.html#logreg) for more information\.
<!-- </article "role="article" "> -->
|
8322C981206A5C7EEEC48C32C9DDCEC9FCE98AEE | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/reorder.html?context=cdpaas&locale=en | Field Reorder node (SPSS Modeler) | Field Reorder node
With the Field Reorder node, you can define the natural order used to display fields downstream. This order affects the display of fields in a variety of places, such as tables, lists, and the Field Chooser.
This operation is useful, for example, when working with wide datasets to make fields of interest more visible.
| # Field Reorder node #
With the Field Reorder node, you can define the natural order used to display fields downstream\. This order affects the display of fields in a variety of places, such as tables, lists, and the Field Chooser\.
This operation is useful, for example, when working with wide datasets to make fields of interest more visible\.
<!-- </article "role="article" "> -->
|
BF6A65F061558B6AED8A438A887B6474A0FDFFC3 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/report.html?context=cdpaas&locale=en | Report node (SPSS Modeler) | Report node
You can use the Report node to create formatted reports containing fixed text, data, or other expressions derived from the data. Specify the format of the report by using text templates to define the fixed text and the data output constructions. You can provide custom text formatting using HTML tags in the template and by setting output options. Data values and other conditional output are included in the report using CLEM expressions in the template.
| # Report node #
You can use the Report node to create formatted reports containing fixed text, data, or other expressions derived from the data\. Specify the format of the report by using text templates to define the fixed text and the data output constructions\. You can provide custom text formatting using HTML tags in the template and by setting output options\. Data values and other conditional output are included in the report using CLEM expressions in the template\.
<!-- </article "role="article" "> -->
|
36C8AF3BBAFFF1C227CF611D7327AFA8E378D6EC | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/restructure.html?context=cdpaas&locale=en | Restructure node (SPSS Modeler) | Restructure node
With the Restructure node, you can generate multiple fields based on the values of a nominal or flag field. The newly generated fields can contain values from another field or numeric flags (0 and 1). The functionality of this node is similar to that of the Set to Flag node. However, it offers more flexibility by allowing you to create fields of any type (including numeric flags), using the values from another field. You can then perform aggregation or other manipulations with other nodes downstream. (The Set to Flag node lets you aggregate fields in one step, which may be convenient if you are creating flag fields.)
Figure 1. Restructure node

| # Restructure node #
With the Restructure node, you can generate multiple fields based on the values of a nominal or flag field\. The newly generated fields can contain values from another field or numeric flags (0 and 1)\. The functionality of this node is similar to that of the Set to Flag node\. However, it offers more flexibility by allowing you to create fields of any type (including numeric flags), using the values from another field\. You can then perform aggregation or other manipulations with other nodes downstream\. (The Set to Flag node lets you aggregate fields in one step, which may be convenient if you are creating flag fields\.)
Figure 1\. Restructure node

<!-- </article "role="article" "> -->
|
265714702B012F1010CE06D97EC16623360F4E2B | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/rfm_aggregate.html?context=cdpaas&locale=en | RFM Aggregate node (SPSS Modeler) | RFM Aggregate node
The Recency, Frequency, Monetary (RFM) Aggregate node allows you to take customers' historical transactional data, strip away any unused data, and combine all of their remaining transaction data into a single row (using their unique customer ID as a key) that lists when they last dealt with you (recency), how many transactions they have made (frequency), and the total value of those transactions (monetary).
Before proceeding with any aggregation, you should take time to clean the data, concentrating especially on any missing values.
After you identify and transform the data using the RFM Aggregate node, you might use an RFM Analysis node to carry out further analysis.
Note that after the data file has been run through the RFM Aggregate node, it won't have any target values; therefore, before using the data file as input for further predictive analysis with any modeling nodes such as C5.0 or CHAID, you need to merge it with other customer data (for example, by matching the customer IDs).
The RFM Aggregate and RFM Analysis nodes use independent binning; that is, they rank and bin data on each measure of recency, frequency, and monetary value, without regard to their values or the other two measures.
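As a rough analogy outside the product, the aggregation amounts to collapsing a transaction table to one row per customer ID with recency, frequency, and monetary value; the column names and the reference date below are hypothetical.

```python
import pandas as pd

tx = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 2, 3],
    "date": pd.to_datetime(["2024-01-05", "2024-03-20", "2024-02-11",
                            "2024-02-28", "2024-03-30", "2023-12-01"]),
    "amount": [120.0, 80.0, 45.0, 60.0, 75.0, 300.0],
})
reference_date = pd.Timestamp("2024-04-01")

rfm = tx.groupby("customer_id").agg(
    recency=("date", lambda d: (reference_date - d.max()).days),  # days since last purchase
    frequency=("date", "count"),                                  # number of transactions
    monetary=("amount", "sum"),                                   # total spend
).reset_index()
print(rfm)
```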
| # RFM Aggregate node #
The Recency, Frequency, Monetary (RFM) Aggregate node allows you to take customers' historical transactional data, strip away any unused data, and combine all of their remaining transaction data into a single row (using their unique customer ID as a key) that lists when they last dealt with you (recency), how many transactions they have made (frequency), and the total value of those transactions (monetary)\.
Before proceeding with any aggregation, you should take time to clean the data, concentrating especially on any missing values\.
After you identify and transform the data using the RFM Aggregate node, you might use an RFM Analysis node to carry out further analysis\.
Note that after the data file has been run through the RFM Aggregate node, it won't have any target values; therefore, before using the data file as input for further predictive analysis with any modeling nodes such as C5\.0 or CHAID, you need to merge it with other customer data (for example, by matching the customer IDs)\.
The RFM Aggregate and RFM Analysis nodes use independent binning; that is, they rank and bin data on each measure of recency, frequency, and monetary value, without regard to their values or the other two measures\.
<!-- </article "role="article" "> -->
|
9E15D946EDFB82EF911D36032C073CF1736B39DA | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/rfm_analysis.html?context=cdpaas&locale=en | RFM Analysis node (SPSS Modeler) | RFM Analysis node
You can use the Recency, Frequency, Monetary (RFM) Analysis node to determine quantitatively which customers are likely to be the best ones by examining how recently they last purchased from you (recency), how often they purchased (frequency), and how much they spent over all transactions (monetary).
The reasoning behind RFM analysis is that customers who purchase a product or service once are more likely to purchase again. The categorized customer data is separated into a number of bins, with the binning criteria adjusted as you require. In each of the bins, customers are assigned a score; these scores are then combined to provide an overall RFM score. This score is a representation of the customer's membership in the bins created for each of the RFM parameters. This binned data may be sufficient for your needs, for example, by identifying the most frequent, high-value customers; alternatively, it can be passed on in a flow for further modeling and analysis.
Note, however, that although the ability to analyze and rank RFM scores is a useful tool, you must be aware of certain factors when using it. There may be a temptation to target customers with the highest rankings; however, over-solicitation of these customers could lead to resentment and an actual fall in repeat business. It is also worth remembering that customers with low scores should not be neglected but instead may be cultivated to become better customers. Conversely, high scores alone do not necessarily reflect a good sales prospect, depending on the market. For example, a customer in bin 5 for recency, meaning that they have purchased very recently, may not actually be the best target customer for someone selling expensive, longer-life products such as cars or televisions.
Note: Depending on how your data is stored, you may need to precede the RFM Analysis node with an RFM Aggregate node to transform the data into a usable format. For example, input data must be in customer format, with one row per customer; if the customers' data is in transactional form, use an RFM Aggregate node upstream to derive the recency, frequency, and monetary fields.
The RFM Aggregate and RFM Analysis nodes in SPSS Modeler are set up to use independent binning; that is, they rank and bin data on each measure of recency, frequency, and monetary value, without regard to their values or the other two measures.
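As a rough analogy outside the product, independent binning can be sketched in pandas as follows: each measure is ranked and binned on its own, and the bin scores are then combined. The five-bin scheme and the combined-score formula are hypothetical.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
rfm = pd.DataFrame({
    "recency": rng.integers(1, 365, 100),      # days since last purchase (lower is better)
    "frequency": rng.integers(1, 40, 100),
    "monetary": rng.gamma(2.0, 150.0, 100),
})

# Each measure is binned independently of the other two
rfm["R"] = pd.qcut(rfm["recency"], 5, labels=[5, 4, 3, 2, 1]).astype(int)   # recent = high score
rfm["F"] = pd.qcut(rfm["frequency"].rank(method="first"), 5, labels=[1, 2, 3, 4, 5]).astype(int)
rfm["M"] = pd.qcut(rfm["monetary"], 5, labels=[1, 2, 3, 4, 5]).astype(int)

rfm["RFM_score"] = rfm["R"] * 100 + rfm["F"] * 10 + rfm["M"]   # one possible combined score
print(rfm.head())
```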
| # RFM Analysis node #
You can use the Recency, Frequency, Monetary (RFM) Analysis node to determine quantitatively which customers are likely to be the best ones by examining how recently they last purchased from you (recency), how often they purchased (frequency), and how much they spent over all transactions (monetary)\.
The reasoning behind RFM analysis is that customers who purchase a product or service once are more likely to purchase again\. The categorized customer data is separated into a number of bins, with the binning criteria adjusted as you require\. In each of the bins, customers are assigned a score; these scores are then combined to provide an overall RFM score\. This score is a representation of the customer's membership in the bins created for each of the RFM parameters\. This binned data may be sufficient for your needs, for example, by identifying the most frequent, high\-value customers; alternatively, it can be passed on in a flow for further modeling and analysis\.
Note, however, that although the ability to analyze and rank RFM scores is a useful tool, you must be aware of certain factors when using it\. There may be a temptation to target customers with the highest rankings; however, over\-solicitation of these customers could lead to resentment and an actual fall in repeat business\. It is also worth remembering that customers with low scores should not be neglected but instead may be cultivated to become better customers\. Conversely, high scores alone do not necessarily reflect a good sales prospect, depending on the market\. For example, a customer in bin 5 for recency, meaning that they have purchased very recently, may not actually be the best target customer for someone selling expensive, longer\-life products such as cars or televisions\.
Note: Depending on how your data is stored, you may need to precede the RFM Analysis node with an RFM Aggregate node to transform the data into a usable format\. For example, input data must be in customer format, with one row per customer; if the customers' data is in transactional form, use an RFM Aggregate node upstream to derive the recency, frequency, and monetary fields\.
The RFM Aggregate and RFM Analysis nodes in SPSS Modeler are set up to use independent binning; that is, they rank and bin data on each measure of recency, frequency, and monetary value, without regard to their values or the other two measures\.
<!-- </article "role="article" "> -->
|
AF3DA662099BD616B642F69925AEC7C8AFC84611 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/sample.html?context=cdpaas&locale=en | Sample node (SPSS Modeler) | Sample node
You can use Sample nodes to select a subset of records for analysis, or to specify a proportion of records to discard. A variety of sample types are supported, including stratified, clustered, and nonrandom (structured) samples.
Sampling can be used for several reasons:
* To improve performance by estimating models on a subset of the data. Models estimated from a sample are often as accurate as those derived from the full dataset, and may be more so if the improved performance allows you to experiment with different methods you might not otherwise have attempted.
* To select groups of related records or transactions for analysis, such as selecting all the items in an online shopping cart (or market basket), or all the properties in a specific neighborhood.
* To identify units or cases for random inspection in the interest of quality assurance, fraud prevention, or security.
Note: If you simply want to partition your data into training and test samples for purposes of validation, a Partition node can be used instead. See [Partition node](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/partition.html#partition) for more information.
| # Sample node #
You can use Sample nodes to select a subset of records for analysis, or to specify a proportion of records to discard\. A variety of sample types are supported, including stratified, clustered, and nonrandom (structured) samples\.
Sampling can be used for several reasons:
<!-- <ul> -->
* To improve performance by estimating models on a subset of the data\. Models estimated from a sample are often as accurate as those derived from the full dataset, and may be more so if the improved performance allows you to experiment with different methods you might not otherwise have attempted\.
* To select groups of related records or transactions for analysis, such as selecting all the items in an online shopping cart (or market basket), or all the properties in a specific neighborhood\.
* To identify units or cases for random inspection in the interest of quality assurance, fraud prevention, or security\.
<!-- </ul> -->
Note: If you simply want to partition your data into training and test samples for purposes of validation, a Partition node can be used instead\. See [Partition node](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/partition.html#partition) for more information\.
<!-- </article "role="article" "> -->
|
84E8928D464D412B225638BCC41F2837F98AEF43 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/adpnodeslots.html?context=cdpaas&locale=en | autodataprepnode properties | autodataprepnode properties
The Auto Data Prep (ADP) node can analyze your data and identify fixes, screen out fields that are problematic or not likely to be useful, derive new attributes when appropriate, and improve performance through intelligent screening and sampling techniques. You can use the node in fully automated fashion, allowing the node to choose and apply fixes, or you can preview the changes before they are made and accept, reject, or amend them as desired.
autodataprepnode properties
Table 1. autodataprepnode properties
autodataprepnode properties Data type Property description
objective Balanced, Speed, Accuracy, Custom
custom_fields flag If true, allows you to specify target, input, and other fields for the current node. If false, the current settings from an upstream Type node are used.
target field Specifies a single target field.
inputs [field1 ... fieldN] Input or predictor fields used by the model.
use_frequency flag
frequency_field field
use_weight flag
weight_field field
excluded_fields Filter, None
if_fields_do_not_match StopExecution, ClearAnalysis
prepare_dates_and_times flag Control access to all the date and time fields
compute_time_until_date flag
reference_date Today, Fixed
fixed_date date
units_for_date_durations Automatic, Fixed
fixed_date_units Years, Months, Days
compute_time_until_time flag
reference_time CurrentTime, Fixed
fixed_time time
units_for_time_durations Automatic, Fixed
fixed_time_units Hours, Minutes, Seconds
extract_year_from_date flag
extract_month_from_date flag
extract_day_from_date flag
extract_hour_from_time flag
extract_minute_from_time flag
extract_second_from_time flag
exclude_low_quality_inputs flag
exclude_too_many_missing flag
maximum_percentage_missing number
exclude_too_many_categories flag
maximum_number_categories number
exclude_if_large_category flag
maximum_percentage_category number
prepare_inputs_and_target flag
adjust_type_inputs flag
adjust_type_target flag
reorder_nominal_inputs flag
reorder_nominal_target flag
replace_outliers_inputs flag
replace_outliers_target flag
replace_missing_continuous_inputs flag
replace_missing_continuous_target flag
replace_missing_nominal_inputs flag
replace_missing_nominal_target flag
replace_missing_ordinal_inputs flag
replace_missing_ordinal_target flag
maximum_values_for_ordinal number
minimum_values_for_continuous number
outlier_cutoff_value number
outlier_method Replace <br>Delete
rescale_continuous_inputs flag
rescaling_method MinMax <br>ZScore
min_max_minimum number
min_max_maximum number
z_score_final_mean number
z_score_final_sd number
rescale_continuous_target flag
target_final_mean number
target_final_sd number
transform_select_input_fields flag
maximize_association_with_target flag
p_value_for_merging number
merge_ordinal_features flag
merge_nominal_features flag
minimum_cases_in_category number
bin_continuous_fields flag
p_value_for_binning number
perform_feature_selection flag
p_value_for_selection number
perform_feature_construction flag
transformed_target_name_extension string
transformed_inputs_name_extension string
constructed_features_root_name string
years_duration_name_extension string
months_duration_name_extension string
days_duration_name_extension string
hours_duration_name_extension string
minutes_duration_name_extension string
seconds_duration_name_extension string
year_cyclical_name_extension string
month_cyclical_name_extension string
day_cyclical_name_extension string
hour_cyclical_name_extension string
minute_cyclical_name_extension string
second_cyclical_name_extension string
| # autodataprepnode properties #
The Auto Data Prep (ADP) node can analyze your data and identify fixes, screen out fields that are problematic or not likely to be useful, derive new attributes when appropriate, and improve performance through intelligent screening and sampling techniques\. You can use the node in fully automated fashion, allowing the node to choose and apply fixes, or you can preview the changes before they are made and accept, reject, or amend them as desired\.
<!-- <table "summary="autodataprepnode properties" id="adpnodeslots__table_dmv_w33_cdb" class="defaultstyle" "> -->
autodataprepnode properties
Table 1\. autodataprepnode properties
| `autodataprepnode` properties | Data type | Property description |
| ----------------------------------- | ----------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `objective` | `Balanced` <br>`Speed` <br>`Accuracy` <br>`Custom` | |
| `custom_fields` | *flag* | If true, allows you to specify target, input, and other fields for the current node\. If false, the current settings from an upstream Type node are used\. |
| `target` | *field* | Specifies a single target field\. |
| `inputs` | \[*field1 \.\.\. fieldN*\] | Input or predictor fields used by the model\. |
| `use_frequency` | *flag* | |
| `frequency_field` | *field* | |
| `use_weight` | *flag* | |
| `weight_field` | *field* | |
| `excluded_fields` | `Filter` <br>`None` | |
| `if_fields_do_not_match` | `StopExecution` <br>`ClearAnalysis` | |
| `prepare_dates_and_times` | *flag* | Control access to all the date and time fields |
| `compute_time_until_date` | *flag* | |
| `reference_date` | `Today` <br>`Fixed` | |
| `fixed_date` | *date* | |
| `units_for_date_durations` | `Automatic` <br>`Fixed` | |
| `fixed_date_units` | `Years` <br>`Months` <br>`Days` | |
| `compute_time_until_time` | *flag* | |
| `reference_time` | `CurrentTime` <br>`Fixed` | |
| `fixed_time` | *time* | |
| `units_for_time_durations` | `Automatic` <br>`Fixed` | |
| `fixed_time_units` | `Hours` <br>`Minutes` <br>`Seconds` | |
| `extract_year_from_date` | *flag* | |
| `extract_month_from_date` | *flag* | |
| `extract_day_from_date` | *flag* | |
| `extract_hour_from_time` | *flag* | |
| `extract_minute_from_time` | *flag* | |
| `extract_second_from_time` | *flag* | |
| `exclude_low_quality_inputs` | *flag* | |
| `exclude_too_many_missing` | *flag* | |
| `maximum_percentage_missing` | *number* | |
| `exclude_too_many_categories` | *flag* | |
| `maximum_number_categories` | *number* | |
| `exclude_if_large_category` | *flag* | |
| `maximum_percentage_category` | *number* | |
| `prepare_inputs_and_target` | *flag* | |
| `adjust_type_inputs` | *flag* | |
| `adjust_type_target` | *flag* | |
| `reorder_nominal_inputs` | *flag* | |
| `reorder_nominal_target` | *flag* | |
| `replace_outliers_inputs` | *flag* | |
| `replace_outliers_target` | *flag* | |
| `replace_missing_continuous_inputs` | *flag* | |
| `replace_missing_continuous_target` | *flag* | |
| `replace_missing_nominal_inputs` | *flag* | |
| `replace_missing_nominal_target` | *flag* | |
| `replace_missing_ordinal_inputs` | *flag* | |
| `replace_missing_ordinal_target` | *flag* | |
| `maximum_values_for_ordinal` | *number* | |
| `minimum_values_for_continuous` | *number* | |
| `outlier_cutoff_value` | *number* | |
| `outlier_method` | `Replace` <br>`Delete` | |
| `rescale_continuous_inputs` | *flag* | |
| `rescaling_method` | `MinMax` <br>`ZScore` | |
| `min_max_minimum` | *number* | |
| `min_max_maximum` | *number* | |
| `z_score_final_mean` | *number* | |
| `z_score_final_sd` | *number* | |
| `rescale_continuous_target` | *flag* | |
| `target_final_mean` | *number* | |
| `target_final_sd` | *number* | |
| `transform_select_input_fields` | *flag* | |
| `maximize_association_with_target` | *flag* | |
| `p_value_for_merging` | *number* | |
| `merge_ordinal_features` | *flag* | |
| `merge_nominal_features` | *flag* | |
| `minimum_cases_in_category` | *number* | |
| `bin_continuous_fields` | *flag* | |
| `p_value_for_binning` | *number* | |
| `perform_feature_selection` | *flag* | |
| `p_value_for_selection` | *number* | |
| `perform_feature_construction` | *flag* | |
| `transformed_target_name_extension` | *string* | |
| `transformed_inputs_name_extension` | *string* | |
| `constructed_features_root_name` | *string* | |
| `years_duration_name_extension` | *string* | |
| `months_duration_name_extension` | *string* | |
| `days_duration_name_extension` | *string* | |
| `hours_duration_name_extension` | *string* | |
| `minutes_duration_name_extension` | *string* | |
| `seconds_duration_name_extension` | *string* | |
| `year_cyclical_name_extension` | *string* | |
| `month_cyclical_name_extension` | *string* | |
| `day_cyclical_name_extension` | *string* | |
| `hour_cyclical_name_extension` | *string* | |
| `minute_cyclical_name_extension` | *string* | |
| `second_cyclical_name_extension` | *string* | |
<!-- </table "summary="autodataprepnode properties" id="adpnodeslots__table_dmv_w33_cdb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
8CD81C0F5F84DFE58834AEB8B71E6D7780B8DEAD | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/aggregatenodeslots.html?context=cdpaas&locale=en | aggregatenode properties | aggregatenode properties
 The Aggregate node replaces a sequence of input records with summarized, aggregated output records.
aggregatenode properties
Table 1. aggregatenode properties
aggregatenode properties Data type Property description
keys list Lists fields that can be used as keys for aggregation. For example, if Sex and Region are your key fields, each unique combination of M and F with regions N and S (four unique combinations) will have an aggregated record.
contiguous flag Select this option if you know that all records with the same key values are grouped together in the input (for example, if the input is sorted on the key fields). Doing so can improve performance.
aggregates Structured property listing the numeric fields whose values will be aggregated, as well as the selected modes of aggregation.
aggregate_exprs Keyed property which keys the derived field name with the aggregate expression used to compute it. For example:<br><br>aggregatenode.setKeyedPropertyValue ("aggregate_exprs", "Na_MAX", "MAX('Na')")
extension string Specify a prefix or suffix for duplicate aggregated fields.
add_as Suffix <br>Prefix
inc_record_count flag Creates an extra field that specifies how many input records were aggregated to form each aggregate record.
count_field string Specifies the name of the record count field.
allow_approximation Boolean Allows approximation of order statistics when aggregation is performed in SPSS Analytic Server.
bin_count integer Specifies the number of bins to use in approximation
aggregate_defaults Mean <br>Sum <br>Min <br>Max <br>SDev <br>Median <br>Count <br>Variance <br>FirstQuartile <br>ThirdQuartile Specify the field aggregation mode to use for newly added fields.
| # aggregatenode properties #
 The Aggregate node replaces a sequence of input records with summarized, aggregated output records\.
<!-- <table "summary="aggregatenode properties" class="defaultstyle" "> -->
aggregatenode properties
Table 1\. aggregatenode properties
| `aggregatenode` properties | Data type | Property description |
| -------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `keys` | *list* | Lists fields that can be used as keys for aggregation\. For example, if `Sex` and `Region` are your key fields, each unique combination of `M` and `F` with regions `N` and `S` (four unique combinations) will have an aggregated record\. |
| `contiguous` | *flag* | Select this option if you know that all records with the same key values are grouped together in the input (for example, if the input is sorted on the key fields)\. Doing so can improve performance\. |
| `aggregates` | | Structured property listing the numeric fields whose values will be aggregated, as well as the selected modes of aggregation\. |
| `aggregate_exprs` | | Keyed property which keys the derived field name with the aggregate expression used to compute it\. For example:<br><br>`aggregatenode.setKeyedPropertyValue ("aggregate_exprs", "Na_MAX", "MAX('Na')")` |
| `extension` | *string* | Specify a prefix or suffix for duplicate aggregated fields\. |
| `add_as` | `Suffix` <br>`Prefix` | |
| `inc_record_count` | *flag* | Creates an extra field that specifies how many input records were aggregated to form each aggregate record\. |
| `count_field` | *string* | Specifies the name of the record count field\. |
| `allow_approximation` | *Boolean* | Allows approximation of order statistics when aggregation is performed in SPSS Analytic Server\. |
| `bin_count` | *integer* | Specifies the number of bins to use in approximation |
| `aggregate_defaults` | `Mean` <br>`Sum` <br>`Min` <br>`Max` <br>`SDev` <br>`Median` <br>`Count` <br>`Variance` <br>`FirstQuartile` <br>`ThirdQuartile` | Specify the field aggregation mode to use for newly added fields\. |
<!-- </table "summary="aggregatenode properties" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
2C17E0A9E72FE65317838E81ACF1FA77620E0C6C | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/analysisnodeslots.html?context=cdpaas&locale=en | analysisnode properties | analysisnode properties
The Analysis node evaluates predictive models' ability to generate accurate predictions. Analysis nodes perform various comparisons between predicted values and actual values for one or more model nuggets. They can also compare predictive models to each other.
analysisnode properties
Table 1. analysisnode properties
analysisnode properties Data type Property description
output_mode Screen <br>File Used to specify target location for output generated from the output node.
use_output_name flag Specifies whether a custom output name is used.
output_name string If use_output_name is true, specifies the name to use.
output_format Text (.txt) HTML (.html) Output (.cou) Used to specify the type of output.
by_fields list
full_filename string If disk, data, or HTML output, the name of the output file.
coincidence flag
performance flag
evaluation_binary flag
confidence flag
threshold number
improve_accuracy number
field_detection_method Metadata <br>Name Determines how predicted fields are matched to the original target field. Specify Metadata or Name.
inc_user_measure flag
user_if expr
user_then expr
user_else expr
user_compute [Mean Sum Min Max SDev]
split_by_partition boolean Whether to separate by partition.
| # analysisnode properties #
The Analysis node evaluates predictive models' ability to generate accurate predictions\. Analysis nodes perform various comparisons between predicted values and actual values for one or more model nuggets\. They can also compare predictive models to each other\.
<!-- <table "summary="analysisnode properties" id="analysisnodeslots__table_uwc_cj3_cdb" class="defaultstyle" "> -->
analysisnode properties
Table 1\. analysisnode properties
| `analysisnode` properties | Data type | Property description |
| ------------------------- | ----------------------------------------------------- | --------------------------------------------------------------------------------------------------------- |
| `output_mode` | `Screen` <br>`File` | Used to specify target location for output generated from the output node\. |
| `use_output_name` | *flag* | Specifies whether a custom output name is used\. |
| `output_name` | *string* | If `use_output_name` is true, specifies the name to use\. |
| `output_format` | `Text` (\.*txt*) `HTML` (\.*html*) `Output` (\.*cou*) | Used to specify the type of output\. |
| `by_fields` | *list* | |
| `full_filename` | *string* | If disk, data, or HTML output, the name of the output file\. |
| `coincidence` | *flag* | |
| `performance` | *flag* | |
| `evaluation_binary` | *flag* | |
| `confidence` | *flag* | |
| `threshold` | *number* | |
| `improve_accuracy` | *number* | |
| `field_detection_method` | `Metadata` <br>`Name` | Determines how predicted fields are matched to the original target field\. Specify `Metadata` or `Name`\. |
| `inc_user_measure` | *flag* | |
| `user_if` | *expr* | |
| `user_then` | *expr* | |
| `user_else` | *expr* | |
| `user_compute` | `[Mean Sum Min Max SDev]` | |
| `split_by_partition` | *boolean* | Whether to separate by partition\. |
<!-- </table "summary="analysisnode properties" id="analysisnodeslots__table_uwc_cj3_cdb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
5C2296329A2D24B1A22A3848731708D78949E74C | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/anomalydetectionnodeslots.html?context=cdpaas&locale=en | anomalydetectionnode properties | anomalydetectionnode properties
The Anomaly node identifies unusual cases, or outliers, that don't conform to patterns of "normal" data. With this node, it's possible to identify outliers even if they don't fit any previously known patterns and even if you're not exactly sure what you're looking for.
anomalydetectionnode properties
Table 1. anomalydetectionnode properties
anomalydetectionnode Properties Values Property description
inputs [field1 ... fieldN] Anomaly Detection models screen records based on the specified input fields. They don't use a target field. Weight and frequency fields are also not used. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.html#modelingnodeslots_common) for more information.
mode Expert <br>Simple
anomaly_method IndexLevel <br>PerRecords <br>NumRecords Specifies the method used to determine the cutoff value for flagging records as anomalous.
index_level number Specifies the minimum cutoff value for flagging anomalies.
percent_records number Sets the threshold for flagging records based on the percentage of records in the training data.
num_records number Sets the threshold for flagging records based on the number of records in the training data.
num_fields integer The number of fields to report for each anomalous record.
impute_missing_values flag
adjustment_coeff number Value used to balance the relative weight given to continuous and categorical fields in calculating the distance.
peer_group_num_auto flag Automatically calculates the number of peer groups.
min_num_peer_groups integer Specifies the minimum number of peer groups used when peer_group_num_auto is set to True.
max_num_per_groups integer Specifies the maximum number of peer groups.
num_peer_groups integer Specifies the number of peer groups used when peer_group_num_auto is set to False.
noise_level number Determines how outliers are treated during clustering. Specify a value between 0 and 0.5.
noise_ratio number Specifies the portion of memory allocated for the component that should be used for noise buffering. Specify a value between 0 and 0.5.
| # anomalydetectionnode properties #
The Anomaly node identifies unusual cases, or outliers, that don't conform to patterns of "normal" data\. With this node, it's possible to identify outliers even if they don't fit any previously known patterns and even if you're not exactly sure what you're looking for\.
<!-- <table "summary="anomalydetectionnode properties" id="anomalydetectionnodeslots__table_zgs_cj3_cdb" class="defaultstyle" "> -->
anomalydetectionnode properties
Table 1\. anomalydetectionnode properties
| `anomalydetectionnode` Properties | Values | Property description |
| --------------------------------- | ------------------------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `inputs` | *\[field1 \.\.\. fieldN\]* | Anomaly Detection models screen records based on the specified input fields\. They don't use a target field\. Weight and frequency fields are also not used\. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.html#modelingnodeslots_common) for more information\. |
| `mode` | `Expert` <br>`Simple` | |
| `anomaly_method` | `IndexLevel` <br>`PerRecords` <br>`NumRecords` | Specifies the method used to determine the cutoff value for flagging records as anomalous\. |
| `index_level` | *number* | Specifies the minimum cutoff value for flagging anomalies\. |
| `percent_records` | *number* | Sets the threshold for flagging records based on the percentage of records in the training data\. |
| `num_records` | *number* | Sets the threshold for flagging records based on the number of records in the training data\. |
| `num_fields` | *integer* | The number of fields to report for each anomalous record\. |
| `impute_missing_values` | *flag* | |
| `adjustment_coeff` | *number* | Value used to balance the relative weight given to continuous and categorical fields in calculating the distance\. |
| `peer_group_num_auto` | *flag* | Automatically calculates the number of peer groups\. |
| `min_num_peer_groups` | *integer* | Specifies the minimum number of peer groups used when `peer_group_num_auto` is set to `True`\. |
| `max_num_per_groups` | *integer* | Specifies the maximum number of peer groups\. |
| `num_peer_groups` | *integer* | Specifies the number of peer groups used when `peer_group_num_auto` is set to `False`\. |
| `noise_level` | *number* | Determines how outliers are treated during clustering\. Specify a value between 0 and 0\.5\. |
| `noise_ratio` | *number* | Specifies the portion of memory allocated for the component that should be used for noise buffering\. Specify a value between 0 and 0\.5\. |
<!-- </table "summary="anomalydetectionnode properties" id="anomalydetectionnodeslots__table_zgs_cj3_cdb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
B51FF1FBA515035A93290F353D20AD9D54BC043C | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/anomalydetectionnuggetnodeslots.html?context=cdpaas&locale=en | applyanomalydetectionnode properties | applyanomalydetectionnode properties
You can use Anomaly Detection modeling nodes to generate an Anomaly Detection model nugget. The scripting name of this model nugget is applyanomalydetectionnode. For more information on scripting the modeling node itself, see [anomalydetectionnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/anomalydetectionnodeslots.html#anomalydetectionnodeslots).
applyanomalydetectionnode properties
Table 1. applyanomalydetectionnode properties
applyanomalydetectionnode Properties Values Property description
anomaly_score_method FlagAndScore <br>FlagOnly <br>ScoreOnly Determines which outputs are created for scoring.
num_fields integer Fields to report.
discard_records flag Indicates whether records are discarded from the output or not.
discard_anomalous_records flag Indicator of whether to discard the anomalous or non-anomalous records. The default is off, meaning that non-anomalous records are discarded. Otherwise, if on, anomalous records will be discarded. This property is enabled only if the discard_records property is enabled.
enable_sql_generation udf <br>native When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations.
| # applyanomalydetectionnode properties #
You can use Anomaly Detection modeling nodes to generate an Anomaly Detection model nugget\. The scripting name of this model nugget is *applyanomalydetectionnode*\. For more information on scripting the modeling node itself, see [anomalydetectionnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/anomalydetectionnodeslots.html#anomalydetectionnodeslots)\.
<!-- <table "summary="applyanomalydetectionnode properties" id="anomalydetectionnuggetnodeslots__table_otk_dj3_cdb" class="defaultstyle" "> -->
applyanomalydetectionnode properties
Table 1\. applyanomalydetectionnode properties
| `applyanomalydetectionnode` Properties | Values | Property description |
| -------------------------------------- | ----------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `anomaly_score_method` | `FlagAndScore` <br>`FlagOnly` <br>`ScoreOnly` | Determines which outputs are created for scoring\. |
| `num_fields` | *integer* | Fields to report\. |
| `discard_records` | *flag* | Indicates whether records are discarded from the output or not\. |
| `discard_anomalous_records` | *flag* | Indicator of whether to discard the anomalous or *non*\-anomalous records\. The default is `off`, meaning that *non*\-anomalous records are discarded\. Otherwise, if `on`, anomalous records will be discarded\. This property is enabled only if the `discard_records` property is enabled\. |
| `enable_sql_generation` | `udf` <br>`native` | When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations\. |
<!-- </table "summary="applyanomalydetectionnode properties" id="anomalydetectionnuggetnodeslots__table_otk_dj3_cdb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
65FFB2E27EACD57BCADC6C1646EB280212D3B2C2 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/anonymizenodeslots.html?context=cdpaas&locale=en | anonymizenode properties | anonymizenode properties
The Anonymize node transforms the way field names and values are represented downstream, thus disguising the original data. This can be useful if you want to allow other users to build models using sensitive data, such as customer names or other details.
anonymizenode properties
Table 1. anonymizenode properties
anonymizenode properties Data type Property description
enable_anonymize flag When set to True, activates anonymization of field values (equivalent to selecting Yes for that field in the Anonymize Values column).
use_prefix flag When set to True, a custom prefix will be used if one has been specified. Applies to fields that will be anonymized by the Hash method and is equivalent to choosing the Custom option in the Replace Values settings for that field.
prefix string Equivalent to typing a prefix into the text box in the Replace Values settings. The default prefix is the default value if nothing else has been specified.
transformation Random <br>Fixed Determines whether the transformation parameters for a field anonymized by the Transform method will be random or fixed.
set_random_seed flag When set to True, the specified seed value will be used (if transformation is also set to Random).
random_seed integer When set_random_seed is set to True, this is the seed for the random number.
scale number When transformation is set to Fixed, this value is used for "scale by." The maximum scale value is normally 10 but may be reduced to avoid overflow.
translate number When transformation is set to Fixed, this value is used for "translate." The maximum translate value is normally 1000 but may be reduced to avoid overflow.
| # anonymizenode properties #
The Anonymize node transforms the way field names and values are represented downstream, thus disguising the original data\. This can be useful if you want to allow other users to build models using sensitive data, such as customer names or other details\.
<!-- <table "summary="anonymizenode properties" id="anonymizenodeslots__table_ycb_2j3_cdb" class="defaultstyle" "> -->
anonymizenode properties
Table 1\. anonymizenode properties
| `anonymizenode` properties | Data type | Property description |
| -------------------------- | --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `enable_anonymize` | *flag* | When set to `True`, activates anonymization of field values (equivalent to selecting Yes for that field in the Anonymize Values column)\. |
| `use_prefix` | *flag* | When set to `True`, a custom prefix will be used if one has been specified\. Applies to fields that will be anonymized by the Hash method and is equivalent to choosing the Custom option in the Replace Values settings for that field\. |
| `prefix` | *string* | Equivalent to typing a prefix into the text box in the Replace Values settings\. The default prefix is the default value if nothing else has been specified\. |
| `transformation` | `Random` <br>`Fixed` | Determines whether the transformation parameters for a field anonymized by the Transform method will be random or fixed\. |
| `set_random_seed` | *flag* | When set to `True`, the specified seed value will be used (if `transformation` is also set to `Random`)\. |
| `random_seed` | *integer* | When `set_random_seed` is set to `True`, this is the seed for the random number\. |
| `scale` | *number* | When `transformation` is set to `Fixed`, this value is used for "scale by\." The maximum scale value is normally 10 but may be reduced to avoid overflow\. |
| `translate` | *number* | When `transformation` is set to `Fixed`, this value is used for "translate\." The maximum translate value is normally 1000 but may be reduced to avoid overflow\. |
<!-- </table "summary="anonymizenode properties" id="anonymizenodeslots__table_ycb_2j3_cdb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
8D328FC36822024D739F83A36FEF66E5ABE61128 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/appendnodeslots.html?context=cdpaas&locale=en | appendnode properties | appendnode properties
 The Append node concatenates sets of records. It's useful for combining datasets with similar structures but different data.
appendnode properties
Table 1. appendnode properties
appendnode properties Data type Property description
match_by Position <br>Name You can append datasets based on the position of fields in the main data source or the name of fields in the input datasets.
match_case flag Enables case sensitivity when matching field names.
include_fields_from Main <br>All
create_tag_field flag
tag_field_name string
| # appendnode properties #
 The Append node concatenates sets of records\. It's useful for combining datasets with similar structures but different data\.
<!-- <table "summary="appendnode properties" class="defaultstyle" "> -->
appendnode properties
Table 1\. appendnode properties
| `appendnode` properties | Data type | Property description |
| ----------------------- | ---------------- | ----------------------------------------------------------------------------------------------------------------------------- |
| `match_by` | `Position` <br>`Name` | You can append datasets based on the position of fields in the main data source or the name of fields in the input datasets\. |
| `match_case` | *flag* | Enables case sensitivity when matching field names\. |
| `include_fields_from` | `Main` <br>`All` | |
| `create_tag_field` | *flag* | |
| `tag_field_name` | *string* | |
<!-- </table "summary="appendnode properties" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
76EC742BC2D093C10C6A5B85456BFBB6571C416D | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/apriorinodeslots.html?context=cdpaas&locale=en | apriorinode properties | apriorinode properties
The Apriori node extracts a set of rules from the data, pulling out the rules with the highest information content. Apriori offers five different methods of selecting rules and uses a sophisticated indexing scheme to process large data sets efficiently. For large problems, Apriori is generally faster to train; it has no arbitrary limit on the number of rules that can be retained, and it can handle rules with up to 32 preconditions. Apriori requires that input and output fields all be categorical but delivers better performance because it's optimized for this type of data.
apriorinode properties
Table 1. apriorinode properties
apriorinode Properties Values Property description
consequents field Apriori models use Consequents and Antecedents in place of the standard target and input fields. Weight and frequency fields are not used. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.html#modelingnodeslots_common) for more information.
antecedents [field1 ... fieldN]
min_supp number
min_conf number
max_antecedents number
true_flags flag
optimize Speed <br>Memory
use_transactional_data flag
contiguous flag
id_field string
content_field string
mode Simple <br>Expert
evaluation RuleConfidence <br>DifferenceToPrior <br>ConfidenceRatio <br>InformationDifference <br>NormalizedChiSquare
lower_bound number
optimize Speed <br>Memory Use to specify whether model building should be optimized for speed or for memory.
rules_without_antececents boolean Select to allow rules that include only the consequent (item or item set). This is useful when you are interested in determining common items or item sets. For example, cannedveg is a single-item rule without an antecedent that indicates purchasing cannedveg is a common occurrence in the data.
| # apriorinode properties #
The Apriori node extracts a set of rules from the data, pulling out the rules with the highest information content\. Apriori offers five different methods of selecting rules and uses a sophisticated indexing scheme to process large data sets efficiently\. For large problems, Apriori is generally faster to train; it has no arbitrary limit on the number of rules that can be retained, and it can handle rules with up to 32 preconditions\. Apriori requires that input and output fields all be categorical but delivers better performance because it's optimized for this type of data\.
<!-- <table "summary="apriorinode properties" id="apriorinodeslots__table_o4h_jj3_cdb" class="defaultstyle" "> -->
apriorinode properties
Table 1\. apriorinode properties
| `apriorinode` Properties | Values | Property description |
| --------------------------- | ------------------------------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `consequents` | *field* | Apriori models use Consequents and Antecedents in place of the standard target and input fields\. Weight and frequency fields are not used\. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.html#modelingnodeslots_common) for more information\. |
| `antecedents` | \[*field1 \.\.\. fieldN*\] | |
| `min_supp` | *number* | |
| `min_conf` | *number* | |
| `max_antecedents` | *number* | |
| `true_flags` | *flag* | |
| `optimize` | `Speed` <br>`Memory` | |
| `use_transactional_data` | *flag* | |
| `contiguous` | *flag* | |
| `id_field` | *string* | |
| `content_field` | *string* | |
| `mode` | `Simple` <br>`Expert` | |
| `evaluation` | `RuleConfidence` <br>`DifferenceToPrior` <br>`ConfidenceRatio` <br>`InformationDifference` <br>`NormalizedChiSquare` | |
| `lower_bound` | *number* | |
| `optimize` | `Speed` <br>`Memory` | Use to specify whether model building should be optimized for speed or for memory\. |
| `rules_without_antececents` | *boolean* | Select to allow rules that include only the consequent (item or item set)\. This is useful when you are interested in determining common items or item sets\. For example, `cannedveg` is a single\-item rule without an antecedent that indicates purchasing `cannedveg` is a common occurrence in the data\. |
<!-- </table "summary="apriorinode properties" id="apriorinodeslots__table_o4h_jj3_cdb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
292C0E87B8E56B15991C954508AB125A8FB80972 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/apriorinuggetnodeslots.html?context=cdpaas&locale=en | applyapriorinode properties | applyapriorinode properties
You can use Apriori modeling nodes to generate an Apriori model nugget. The scripting name of this model nugget is applyapriorinode. For more information on scripting the modeling node itself, see [apriorinode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/apriorinodeslots.html#apriorinodeslots).
applyapriorinode properties
Table 1. applyapriorinode properties
applyapriorinode Properties Values Property description
max_predictions number (integer)
ignore_unmatached flag
allow_repeats flag
check_basket NoPredictions <br>Predictions <br>NoCheck
criterion Confidence <br>Support <br>RuleSupport <br>Lift <br>Deployability
enable_sql_generation udf <br>native When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations.
| # applyapriorinode properties #
You can use Apriori modeling nodes to generate an Apriori model nugget\. The scripting name of this model nugget is *applyapriorinode*\. For more information on scripting the modeling node itself, see [apriorinode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/apriorinodeslots.html#apriorinodeslots)\.
<!-- <table "summary="applyapriorinode properties" id="apriorinuggetnodeslots__table_czw_jj3_cdb" class="defaultstyle" "> -->
applyapriorinode properties
Table 1\. applyapriorinode properties
| `applyapriorinode` Properties | Values | Property description |
| ----------------------------- | ------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------ |
| `max_predictions` | *number (integer)* | |
| `ignore_unmatached` | *flag* | |
| `allow_repeats` | *flag* | |
| `check_basket` | `NoPredictions` <br>`Predictions` <br>`NoCheck` | |
| `criterion` | `Confidence` <br>`Support` <br>`RuleSupport` <br>`Lift` <br>`Deployability` | |
| `enable_sql_generation` | `udf` <br>`native` | When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations\. |
<!-- </table "summary="applyapriorinode properties" id="apriorinuggetnodeslots__table_czw_jj3_cdb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
2BCBD3D61CC24296EA38B26B10306B7F50CE4988 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/astimeintervalnodeslots.html?context=cdpaas&locale=en | astimeintervalsnode properties | astimeintervalsnode properties
Use the Time Intervals node to specify intervals and derive a new time field for estimating or forecasting. A full range of time intervals is supported, from seconds to years.
astimeintervalsnode properties
Table 1. astimeintervalsnode properties
astimeintervalsnode properties Data type Property description
time_field field Can accept only a single continuous field. That field is used by the node as the aggregation key for converting the interval. If an integer field is used here it's considered to be a time index.
dimensions [field1 field2 … fieldn] These fields are used to create individual time series based on the field values.
fields_to_aggregate [field1 field2 … fieldn] These fields are aggregated as part of changing the period of the time field. Any fields not included in this picker are filtered out of the data leaving the node.
interval_type_timestamp Years <br>Quarters <br>Months <br>Weeks <br>Days <br>Hours <br>Minutes <br>Seconds Specify intervals and derive a new time field for estimating or forecasting.
interval_type_time Hours <br>Minutes <br>Seconds
interval_type_date Years <br>Quarters <br>Months <br>Weeks <br>Days Time interval
interval_type_integer Periods Time interval
periods_per_interval integer Periods per interval
start_month January <br>February <br>March <br>April <br>May <br>June <br>July <br>August <br>September <br>October <br>November <br>December
week_begins_on Sunday Monday Tuesday Wednesday Thursday Friday Saturday
minute_interval 1 2 3 4 5 6 10 12 15 20 30
second_interval 1 2 3 4 5 6 10 12 15 20 30
agg_range_default Sum Mean Min Max Median 1stQuartile 3rdQuartile Available functions for continuous fields include Sum, Mean, Min, Max, Median, 1st Quartile, and 3rd Quartile.
agg_set_default Mode Min Max Nominal options include Mode, Min, and Max.
agg_flag_default TrueIfAnyTrue FalseIfAnyFalse Options are either True if any true or False if any false.
custom_agg array Custom settings for specified fields.
field_name_extension string Specify the prefix or suffix applied to all fields generated by the node.
field_name_extension_as_prefix true false Add extension as prefix.
| # astimeintervalsnode properties #
Use the Time Intervals node to specify intervals and derive a new time field for estimating or forecasting\. A full range of time intervals is supported, from seconds to years\.
<!-- <table "summary="astimeintervalsnode properties" class="defaultstyle" "> -->
astimeintervalsnode properties
Table 1\. astimeintervalsnode properties
| `astimeintervalsnode` properties | Data type | Property description |
| -------------------------------- | --------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `time_field` | *field* | Can accept only a single continuous field\. That field is used by the node as the aggregation key for converting the interval\. If an integer field is used here it's considered to be a time index\. |
| `dimensions` | *\[field1 field2 … fieldn\]* | These fields are used to create individual time series based on the field values\. |
| `fields_to_aggregate` | *\[field1 field2 … fieldn\]* | These fields are aggregated as part of changing the period of the time field\. Any fields not included in this picker are filtered out of the data leaving the node\. |
| `interval_type_timestamp` | `Years` <br>`Quarters` <br>`Months` <br>`Weeks` <br>`Days` <br>`Hours` <br>`Minutes` <br>`Seconds` | Specify intervals and derive a new time field for estimating or forecasting\. |
| `interval_type_time` | `Hours` <br>`Minutes` <br>`Seconds` | |
| `interval_type_date` | `Years` <br>`Quarters` <br>`Months` <br>`Weeks` <br>`Days` | Time interval |
| `interval_type_integer` | `Periods` | Time interval |
| `periods_per_interval` | *integer* | Periods per interval |
| `start_month` | `January` <br>`February` <br>`March` <br>`April` <br>`May` <br>`June` <br>`July` <br>`August` <br>`September` <br>`October` <br>`November` <br>`December` | |
| `week_begins_on` | `Sunday Monday Tuesday Wednesday Thursday Friday Saturday` | |
| `minute_interval` | `1 2 3 4 5 6 10 12 15 20 30` | |
| `second_interval` | `1 2 3 4 5 6 10 12 15 20 30` | |
| `agg_range_default` | `Sum Mean Min Max Median 1stQuartile 3rdQuartile` | Available functions for continuous fields include `Sum`, `Mean`, `Min`, `Max`, `Median`, `1st Quartile`, and `3rd Quartile`\. |
| `agg_set_default` | `Mode Min Max` | Nominal options include `Mode`, `Min`, and `Max`\. |
| `agg_flag_default` | `TrueIfAnyTrue FalseIfAnyFalse` | Options are either `True` if any true or `False` if any false\. |
| `custom_agg` | *array* | Custom settings for specified fields\. |
| `field_name_extension` | *string* | Specify the prefix or suffix applied to all fields generated by the node\. |
| `field_name_extension_as_prefix` | `true false` | Add extension as prefix\. |
<!-- </table "summary="astimeintervalsnode properties" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
27963DF2327FBE202B836AC5905258D063A8770D | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/autoclassifiernuggetnodeslots.html?context=cdpaas&locale=en | applyautoclassifiernode properties | applyautoclassifiernode properties
You can use Auto Classifier modeling nodes to generate an Auto Classifier model nugget. The scripting name of this model nugget is applyautoclassifiernode. For more information on scripting the modeling node itself, see [autoclassifiernode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/binaryclassifiernodeslots.html#binaryclassifiernodeslots).
applyautoclassifiernode properties
Table 1. applyautoclassifiernode properties
applyautoclassifiernode Properties Values Property description
flag_ensemble_method Voting <br>ConfidenceWeightedVoting <br>RawPropensityWeightedVoting <br>HighestConfidence <br>AverageRawPropensity Specifies the method used to determine the ensemble score. This setting applies only if the selected target is a flag field.
flag_voting_tie_selection Random <br>HighestConfidence <br>RawPropensity If a voting method is selected, specifies how ties are resolved. This setting applies only if the selected target is a flag field.
set_ensemble_method Voting <br>ConfidenceWeightedVoting <br>HighestConfidence Specifies the method used to determine the ensemble score. This setting applies only if the selected target is a set field.
set_voting_tie_selection Random <br>HighestConfidence If a voting method is selected, specifies how ties are resolved. This setting applies only if the selected target is a nominal field.
| # applyautoclassifiernode properties #
You can use Auto Classifier modeling nodes to generate an Auto Classifier model nugget\. The scripting name of this model nugget is *applyautoclassifiernode*\. For more information on scripting the modeling node itself, see [autoclassifiernode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/binaryclassifiernodeslots.html#binaryclassifiernodeslots)\.
<!-- <table "summary="applyautoclassifiernode properties" id="autoclassifiernuggetnodeslots__table_r5l_kj3_cdb" class="defaultstyle" "> -->
applyautoclassifiernode properties
Table 1\. applyautoclassifiernode properties
| `applyautoclassifiernode` Properties | Values | Property description |
| ------------------------------------ | -------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------- |
| `flag_ensemble_method` | `Voting` <br>`ConfidenceWeightedVoting` <br>`RawPropensityWeightedVoting` <br>`HighestConfidence` <br>`AverageRawPropensity` | Specifies the method used to determine the ensemble score\. This setting applies only if the selected target is a flag field\. |
| `flag_voting_tie_selection` | `Random` <br>`HighestConfidence` <br>`RawPropensity` | If a voting method is selected, specifies how ties are resolved\. This setting applies only if the selected target is a flag field\. |
| `set_ensemble_method` | `Voting` <br>`ConfidenceWeightedVoting` <br>`HighestConfidence` | Specifies the method used to determine the ensemble score\. This setting applies only if the selected target is a set field\. |
| `set_voting_tie_selection` | `Random` <br>`HighestConfidence` | If a voting method is selected, specifies how ties are resolved\. This setting applies only if the selected target is a nominal field\. |
<!-- </table "summary="applyautoclassifiernode properties" id="autoclassifiernuggetnodeslots__table_r5l_kj3_cdb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
E399A5B6FA720C6F21337792F822F20F20F98910 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/autoclusternodeslots.html?context=cdpaas&locale=en | autoclusternode properties | autoclusternode properties
The Auto Cluster node estimates and compares clustering models, which identify groups of records that have similar characteristics. The node works in the same manner as other automated modeling nodes, allowing you to experiment with multiple combinations of options in a single modeling pass. Models can be compared using basic measures that attempt to filter and rank the usefulness of the cluster models, and using a measure based on the importance of particular fields.
autoclusternode properties
Table 1. autoclusternode properties
autoclusternode Properties Values Property description
evaluation field Note: Auto Cluster node only. Identifies the field for which an importance value will be calculated. Alternatively, can be used to identify how well the cluster differentiates the value of this field and, therefore, how well the model will predict this field.
ranking_measure Silhouette <br>Num_clusters <br>Size_smallest_cluster <br>Size_largest_cluster <br>Smallest_to_largest <br>Importance
ranking_dataset Training <br>Test
summary_limit integer Number of models to list in the report. Specify an integer between 1 and 100.
enable_silhouette_limit flag
silhouette_limit integer Integer between 0 and 100.
enable_number_less_limit flag
number_less_limit number Real number between 0.0 and 1.0.
enable_number_greater_limit flag
number_greater_limit number Integer greater than 0.
enable_smallest_cluster_limit flag
smallest_cluster_units Percentage <br>Counts
smallest_cluster_limit_percentage number
smallest_cluster_limit_count integer Integer greater than 0.
enable_largest_cluster_limit flag
largest_cluster_units Percentage <br>Counts
largest_cluster_limit_percentage number
largest_cluster_limit_count integer
enable_smallest_largest_limit flag
smallest_largest_limit number
enable_importance_limit flag
importance_limit_condition Greater_than <br>Less_than
importance_limit_greater_than number Integer between 0 and 100.
importance_limit_less_than number Integer between 0 and 100.
<algorithm> flag Enables or disables the use of a specific algorithm.
<algorithm>.<property> string Sets a property value for a specific algorithm. See [Setting algorithm properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/factorymodeling_algorithmproperties.html#factorymodeling_algorithmproperties) for more information.
number_of_models integer
enable_model_build_time_limit boolean (K-Means, Kohonen, TwoStep, SVM, KNN, Bayes Net and Decision List models only.) <br>Sets a maximum time limit for any one model. For example, if a particular model requires an unexpectedly long time to train because of some complex interaction, you probably don't want it to hold up your entire modeling run.
model_build_time_limit integer Time spent on model build.
enable_stop_after_time_limit boolean (Neural Network, K-Means, Kohonen, TwoStep, SVM, KNN, Bayes Net and C&R Tree models only.) <br>Stops a run after a specified number of hours. All models generated up to that point will be included in the model nugget, but no further models will be produced.
stop_after_time_limit double Run time limit (hours).
stop_if_valid_model boolean Stops a run when a model passes all criteria specified under the Discard settings.
| # autoclusternode properties #
The Auto Cluster node estimates and compares clustering models, which identify groups of records that have similar characteristics\. The node works in the same manner as other automated modeling nodes, allowing you to experiment with multiple combinations of options in a single modeling pass\. Models can be compared using basic measures that attempt to filter and rank the usefulness of the cluster models, and using a measure based on the importance of particular fields\.
<!-- <table "summary="autoclusternode properties" id="autoclusternodeslots__table_dt1_lj3_cdb" class="defaultstyle" "> -->
autoclusternode properties
Table 1\. autoclusternode properties
| `autoclusternode` Properties | Values | Property description |
| ------------------------------------ | -------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `evaluation` | *field* | Note: Auto Cluster node only\. Identifies the field for which an importance value will be calculated\. Alternatively, can be used to identify how well the cluster differentiates the value of this field and, therefore, how well the model will predict this field\. |
| `ranking_measure` | `Silhouette` <br>`Num_clusters` <br>`Size_smallest_cluster` <br>`Size_largest_cluster` <br>`Smallest_to_largest` <br>`Importance` | |
| `ranking_dataset` | `Training` <br>`Test` | |
| `summary_limit` | *integer* | Number of models to list in the report\. Specify an integer between 1 and 100\. |
| `enable_silhouette_limit` | *flag* | |
| `silhouette_limit` | *integer* | Integer between 0 and 100\. |
| `enable_number_less_limit` | *flag* | |
| `number_less_limit` | *number* | Real number between 0\.0 and 1\.0\. |
| `enable_number_greater_limit` | *flag* | |
| `number_greater_limit` | *number* | Integer greater than 0\. |
| `enable_smallest_cluster_limit` | *flag* | |
| `smallest_cluster_units` | `Percentage` <br>`Counts` | |
| `smallest_cluster_limit_percentage` | *number* | |
| `smallest_cluster_limit_count` | *integer* | Integer greater than 0\. |
| `enable_largest_cluster_limit` | *flag* | |
| `largest_cluster_units` | `Percentage` <br>`Counts` | |
| `largest_cluster_limit_percentage` | *number* | |
| `largest_cluster_limit_count` | *integer* | |
| `enable_smallest_largest_limit` | *flag* | |
| `smallest_largest_limit` | *number* | |
| `enable_importance_limit` | *flag* | |
| `importance_limit_condition` | `Greater_than` <br>`Less_than` | |
| `importance_limit_greater_than` | *number* | Integer between 0 and 100\. |
| `importance_limit_less_than` | *number* | Integer between 0 and 100\. |
| `<algorithm>` | *flag* | Enables or disables the use of a specific algorithm\. |
| `<algorithm>.<property>` | *string* | Sets a property value for a specific algorithm\. See [Setting algorithm properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/factorymodeling_algorithmproperties.html#factorymodeling_algorithmproperties) for more information\. |
| `number_of_models` | *integer* | |
| `enable_model_build_time_limit` | *boolean* | (K\-Means, Kohonen, TwoStep, SVM, KNN, Bayes Net and Decision List models only\.) <br>Sets a maximum time limit for any one model\. For example, if a particular model requires an unexpectedly long time to train because of some complex interaction, you probably don't want it to hold up your entire modeling run\. |
| `model_build_time_limit` | *integer* | Time spent on model build\. |
| `enable_stop_after_time_limit` | *boolean* | (Neural Network, K\-Means, Kohonen, TwoStep, SVM, KNN, Bayes Net and C&R Tree models only\.) <br>Stops a run after a specified number of hours\. All models generated up to that point will be included in the model nugget, but no further models will be produced\. |
| `stop_after_time_limit` | *double* | Run time limit (hours)\. |
| `stop_if_valid_model` | *boolean* | Stops a run when a model passes all criteria specified under the Discard settings\. |
<!-- </table "summary="autoclusternode properties" id="autoclusternodeslots__table_dt1_lj3_cdb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
14416203D840C788359110B18CFD9CE922DE0D67 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/autoclusternuggetnodeslots.html?context=cdpaas&locale=en | applyautoclusternode properties | applyautoclusternode properties
You can use Auto Cluster modeling nodes to generate an Auto Cluster model nugget. The scripting name of this model nugget is applyautoclusternode. No other properties exist for this model nugget. For more information on scripting the modeling node itself, see [autoclusternode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/autoclusternodeslots.html#autoclusternodeslots).
| # applyautoclusternode properties #
You can use Auto Cluster modeling nodes to generate an Auto Cluster model nugget\. The scripting name of this model nugget is *applyautoclusternode*\. No other properties exist for this model nugget\. For more information on scripting the modeling node itself, see [autoclusternode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/autoclusternodeslots.html#autoclusternodeslots)\.
<!-- </article "role="article" "> -->
|
3EAAFDDADE769D3B0300BE1401BB3D7E68B312DD | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/autonumericnuggetnodeslots.html?context=cdpaas&locale=en | applyautonumericnode properties | applyautonumericnode properties
You can use Auto Numeric modeling nodes to generate an Auto Numeric model nugget. The scripting name of this model nugget is applyautonumericnode. For more information on scripting the modeling node itself, see [autonumericnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/rangepredictornodeslots.html#rangepredictornodeslots).
applyautonumericnode properties
Table 1. applyautonumericnode properties
applyautonumericnode Properties Values Property description
calculate_standard_error flag
| # applyautonumericnode properties #
You can use Auto Numeric modeling nodes to generate an Auto Numeric model nugget\. The scripting name of this model nugget is *applyautonumericnode*\. For more information on scripting the modeling node itself, see [autonumericnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/rangepredictornodeslots.html#rangepredictornodeslots)\.
<!-- <table "summary="applyautonumericnode properties" id="autonumericnuggetnodeslots__table_lcn_nj3_cdb" class="defaultstyle" "> -->
applyautonumericnode properties
Table 1\. applyautonumericnode properties
| `applyautonumericnode` Properties | Values | Property description |
| --------------------------------- | ------ | -------------------- |
| `calculate_standard_error` | *flag* | |
<!-- </table "summary="applyautonumericnode properties" id="autonumericnuggetnodeslots__table_lcn_nj3_cdb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
D2D9F4E05CABC566B2021116ED28EF413FA96779 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/available_slot_parameters.html?context=cdpaas&locale=en | Node properties overview | Node properties overview
Each type of node has its own set of legal properties, and each property has a type. This type may be a general type—number, flag, or string—in which case settings for the property are coerced to the correct type. An error is raised if they can't be coerced. Alternatively, the property reference may specify the range of legal values, such as Discard, PairAndDiscard, and IncludeAsText, in which case an error is raised if any other value is used. Flag properties should be read or set by using values of true and false. (Variations including Off, OFF, off, No, NO, no, n, N, f, F, false, False, FALSE, or 0 are also recognized when setting values, but may cause errors when reading property values in some cases. All other values are regarded as true. Using true and false consistently will avoid any confusion.) In this documentation's reference tables, the structured properties are indicated as such in the Property description column, and their usage formats are provided.
| # Node properties overview #
Each type of node has its own set of legal properties, and each property has a type\. This type may be a general type—number, flag, or string—in which case settings for the property are coerced to the correct type\. An error is raised if they can't be coerced\. Alternatively, the property reference may specify the range of legal values, such as `Discard`, `PairAndDiscard`, and `IncludeAsText`, in which case an error is raised if any other value is used\. Flag properties should be read or set by using values of `true` and `false`\. (Variations including `Off`, `OFF`, `off`, `No`, `NO`, `no`, `n`, `N`, `f`, `F`, `false`, `False`, `FALSE`, or `0` are also recognized when setting values, but may cause errors when reading property values in some cases\. All other values are regarded as true\. Using `true` and `false` consistently will avoid any confusion\.) In this documentation's reference tables, the structured properties are indicated as such in the Property description column, and their usage formats are provided\.
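As a minimal sketch of the recommended usage, the following snippet sets and reads a flag property with Python booleans; the node variable and the property name `use_boost` are placeholders, not part of this topic.

```python
# Minimal sketch: prefer Python booleans when working with flag properties.
node.setPropertyValue("use_boost", True)        # set a flag property
enabled = node.getPropertyValue("use_boost")    # read it back as a boolean
if enabled:
    print("boosting is enabled")
```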
<!-- </article "role="article" "> -->
|
7A9F4CDF362D1F06C3644EDBD634B2A77DDC6005 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/balancenodeslots.html?context=cdpaas&locale=en | balancenode properties | balancenode properties
 The Balance node corrects imbalances in a dataset, so it conforms to a specified condition. The balancing directive adjusts the proportion of records where a condition is true by the factor specified.
balancenode properties
Table 1. balancenode properties
balancenode properties Data type Property description
directives Structured property to balance proportion of field values based on number specified.
training_data_only flag Specifies that only training data should be balanced. If no partition field is present in the stream, then this option is ignored.
This node property uses the format:
[[ number, string ] \ [ number, string] \ ... [number, string ]].
Note: If strings (using double quotation marks) are embedded in the expression, they must be preceded by the escape character " \ ". The " \ " character is also the line continuation character, which you can use to align the arguments for clarity.
| # balancenode properties #
 The Balance node corrects imbalances in a dataset, so it conforms to a specified condition\. The balancing directive adjusts the proportion of records where a condition is true by the factor specified\.
<!-- <table "summary="balancenode properties" id="balancenodeslots__table_zvt_pj3_cdb" class="defaultstyle" "> -->
balancenode properties
Table 1\. balancenode properties
| `balancenode` properties | Data type | Property description |
| ------------------------ | --------- | ------------------------------------------------------------------------------------------------------------------------------------ |
| `directives` | | Structured property to balance proportion of field values based on number specified\. |
| `training_data_only` | *flag* | Specifies that only training data should be balanced\. If no partition field is present in the stream, then this option is ignored\. |
<!-- </table "summary="balancenode properties" id="balancenodeslots__table_zvt_pj3_cdb" class="defaultstyle" "> -->
This node property uses the format:
\[\[ *number, string* \] \\ \[ *number, string*\] \\ \.\.\. \[*number, string* \]\]\.
Note: If strings (using double quotation marks) are embedded in the expression, they must be preceded by the escape character `" \ "`\. The `" \ "` character is also the line continuation character, which you can use to align the arguments for clarity\.
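A minimal scripting sketch of this format follows; it assumes the standard SPSS Modeler Python scripting API, and the field names and factors are illustrative only.

```python
# Minimal sketch: boost records where Age > 60 and reduce records where Na > 0.5.
stream = modeler.script.stream()
node = stream.create("balance", "My Balance node")
node.setPropertyValue("directives", [[1.3, "Age > 60"], [0.7, "Na > 0.5"]])
node.setPropertyValue("training_data_only", True)
```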
<!-- </article "role="article" "> -->
|
FE2254205E6DD1EE2A4EC62036AB86BC5E084F5D | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/bayesnetnodeslots.html?context=cdpaas&locale=en | bayesnetnode properties | bayesnetnode properties
With the Bayesian Network (Bayes Net) node, you can build a probability model by combining observed and recorded evidence with real-world knowledge to establish the likelihood of occurrences. The node focuses on Tree Augmented Naïve Bayes (TAN) and Markov Blanket networks that are primarily used for classification.
bayesnetnode properties
Table 1. bayesnetnode properties
bayesnetnode Properties Values Property description
inputs [field1 ... fieldN] Bayesian network models use a single target field, and one or more input fields. Continuous fields are automatically binned. See the topic [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information.
continue_training_existing_model flag
structure_type TAN <br>MarkovBlanket Select the structure to be used when building the Bayesian network.
use_feature_selection flag
parameter_learning_method Likelihood <br>Bayes Specifies the method used to estimate the conditional probability tables between nodes where the values of the parents are known.
mode Expert <br>Simple
missing_values flag
all_probabilities flag
independence Likelihood <br>Pearson Specifies the method used to determine whether paired observations on two variables are independent of each other.
significance_level number Specifies the cutoff value for determining independence.
maximal_conditioning_set number Sets the maximal number of conditioning variables to be used for independence testing.
inputs_always_selected [field1 ... fieldN] Specifies which fields from the dataset are always to be used when building the Bayesian network.<br><br>Note: The target field is always selected.
maximum_number_inputs number Specifies the maximum number of input fields to be used in building the Bayesian network.
calculate_variable_importance flag
calculate_raw_propensities flag
calculate_adjusted_propensities flag
adjusted_propensity_partition Test <br>Validation
| # bayesnetnode properties #
With the Bayesian Network (Bayes Net) node, you can build a probability model by combining observed and recorded evidence with real\-world knowledge to establish the likelihood of occurrences\. The node focuses on Tree Augmented Naïve Bayes (TAN) and Markov Blanket networks that are primarily used for classification\.
<!-- <table "summary="bayesnetnode properties" id="bayesnetnodeslots__table_mty_tj3_cdb" class="defaultstyle" "> -->
bayesnetnode properties
Table 1\. bayesnetnode properties
| `bayesnetnode` Properties | Values | Property description |
| ---------------------------------- | -------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `inputs` | *\[field1 \.\.\. fieldN\]* | Bayesian network models use a single target field, and one or more input fields\. Continuous fields are automatically binned\. See the topic [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.html#modelingnodeslots_common) for more information\. |
| `continue_training_existing_model` | *flag* | |
| `structure_type` | `TAN` <br>`MarkovBlanket` | Select the structure to be used when building the Bayesian network\. |
| `use_feature_selection` | *flag* | |
| `parameter_learning_method` | `Likelihood` <br>`Bayes` | Specifies the method used to estimate the conditional probability tables between nodes where the values of the parents are known\. |
| `mode` | `Expert` <br>`Simple` | |
| `missing_values` | *flag* | |
| `all_probabilities` | *flag* | |
| `independence` | `Likelihood` <br>`Pearson` | Specifies the method used to determine whether paired observations on two variables are independent of each other\. |
| `significance_level` | *number* | Specifies the cutoff value for determining independence\. |
| `maximal_conditioning_set` | *number* | Sets the maximal number of conditioning variables to be used for independence testing\. |
| `inputs_always_selected` | *\[field1 \.\.\. fieldN\]* | Specifies which fields from the dataset are always to be used when building the Bayesian network\.<br><br>Note: The target field is always selected\. |
| `maximum_number_inputs` | *number* | Specifies the maximum number of input fields to be used in building the Bayesian network\. |
| `calculate_variable_importance` | *flag* | |
| `calculate_raw_propensities` | *flag* | |
| `calculate_adjusted_propensities` | *flag* | |
| `adjusted_propensity_partition` | `Test` <br>`Validation` | |
<!-- </table "summary="bayesnetnode properties" id="bayesnetnodeslots__table_mty_tj3_cdb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
EC154AE6F7FE894644424BFA90C6CA31E13A4B71 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/bayesnetnuggetnodeslots.html?context=cdpaas&locale=en | applybayesnetnode properties | applybayesnetnode properties
You can use Bayesian network modeling nodes to generate a Bayesian network model nugget. The scripting name of this model nugget is applybayesnetnode. For more information on scripting the modeling node itself, see [bayesnetnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/bayesnetnodeslots.htmlbayesnetnodeslots).
applybayesnetnode properties
Table 1. applybayesnetnode properties
applybayesnetnode Properties Values Property description
all_probabilities flag
raw_propensity flag
adjusted_propensity flag
calculate_raw_propensities flag
calculate_adjusted_propensities flag
enable_sql_generation false <br>native When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations.
| # applybayesnetnode properties #
You can use Bayesian network modeling nodes to generate a Bayesian network model nugget\. The scripting name of this model nugget is *applybayesnetnode*\. For more information on scripting the modeling node itself, see [bayesnetnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/bayesnetnodeslots.html#bayesnetnodeslots)\.
<!-- <table "summary="applybayesnetnode properties" class="defaultstyle" "> -->
applybayesnetnode properties
Table 1\. applybayesnetnode properties
| `applybayesnetnode` Properties | Values | Property description |
| --------------------------------- | --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------ |
| `all_probabilities` | *flag* | |
| `raw_propensity` | *flag* | |
| `adjusted_propensity` | *flag* | |
| `calculate_raw_propensities` | *flag* | |
| `calculate_adjusted_propensities` | *flag* | |
| `enable_sql_generation` | `false` <br>`native` | When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations\. |
<!-- </table "summary="applybayesnetnode properties" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
CDA0897D49B56EE521BF16E52014DA5E2E1D2710 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/binaryclassifiernodeslots.html?context=cdpaas&locale=en | autoclassifiernode properties | autoclassifiernode properties
The Auto Classifier node creates and compares a number of different models for binary outcomes (yes or no, churn or do not churn, and so on), allowing you to choose the best approach for a given analysis. A number of modeling algorithms are supported, making it possible to select the methods you want to use, the specific options for each, and the criteria for comparing the results. The node generates a set of models based on the specified options and ranks the best candidates according to the criteria you specify.
autoclassifiernode properties
Table 1. autoclassifiernode properties
autoclassifiernode Properties Values Property description
target field For flag targets, the Auto Classifier node requires a single target and one or more input fields. Weight and frequency fields can also be specified. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information.
ranking_measure Accuracy <br>Area_under_curve <br>Profit <br>Lift <br>Num_variables
ranking_dataset Training <br>Test
number_of_models integer Number of models to include in the model nugget. Specify an integer between 1 and 100.
calculate_variable_importance flag
enable_accuracy_limit flag
accuracy_limit integer Integer between 0 and 100.
enable_area_under_curve_limit flag
area_under_curve_limit number Real number between 0.0 and 1.0.
enable_profit_limit flag
profit_limit number Integer greater than 0.
enable_lift_limit flag
lift_limit number Real number greater than 1.0.
enable_number_of_variables_limit flag
number_of_variables_limit number Integer greater than 0.
use_fixed_cost flag
fixed_cost number Real number greater than 0.0.
variable_cost field
use_fixed_revenue flag
fixed_revenue number Real number greater than 0.0.
variable_revenue field
use_fixed_weight flag
fixed_weight number Real number greater than 0.0
variable_weight field
lift_percentile number Integer between 0 and 100.
enable_model_build_time_limit flag
model_build_time_limit number Integer set to the number of minutes to limit the time taken to build each individual model.
enable_stop_after_time_limit flag
stop_after_time_limit number Real number set to the number of hours to limit the overall elapsed time for an auto classifier run.
enable_stop_after_valid_model_produced flag
use_costs flag
<algorithm> flag Enables or disables the use of a specific algorithm.
<algorithm>.<property> string Sets a property value for a specific algorithm. See [Setting algorithm properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/factorymodeling_algorithmproperties.htmlfactorymodeling_algorithmproperties) for more information.
use_cross_validation field Fields added to this list can take either the condition or prediction role in rules that are generated by the model. This is on a rule by rule basis, so a field might be a condition in one rule and a prediction in another.
number_of_folds integer N fold parameter for cross validation, with range from 3 to 10.
set_random_seed boolean Setting a random seed allows you to replicate analyses. Specify an integer or click Generate, which will create a pseudo-random integer between 1 and 2147483647, inclusive. By default, analyses are replicated with seed 229176228.
random_seed integer Random seed
stop_if_valid_model boolean
filter_individual_model_output boolean Removes from the output all of the additional fields generated by the individual models that feed into the Ensemble node. Select this option if you're interested only in the combined score from all of the input models. Ensure that this option is deselected if, for example, you want to use an Analysis node or Evaluation node to compare the accuracy of the combined score with that of each of the individual input models.
set_ensemble_method "Voting" "ConfidenceWeightedVoting" "HighestConfidence" Ensemble method for set targets.
set_voting_tie_selection "Random" "HighestConfidence" If voting is tied, select value randomly or by using highest confidence.
flag_ensemble_method "Voting" "ConfidenceWeightedVoting" "RawPropensityWeightedVoting" "HighestConfidence" "AverageRawPropensity" Ensemble method for flag targets.
flag_voting_tie_selection "Random" "HighestConfidence" "RawPropensity" If voting is tied, select the value randomly, with highest confidence, or with raw propensity.
| # autoclassifiernode properties #
The Auto Classifier node creates and compares a number of different models for binary outcomes (yes or no, churn or do not churn, and so on), allowing you to choose the best approach for a given analysis\. A number of modeling algorithms are supported, making it possible to select the methods you want to use, the specific options for each, and the criteria for comparing the results\. The node generates a set of models based on the specified options and ranks the best candidates according to the criteria you specify\.
<!-- <table "summary="autoclassifiernode properties" id="binaryclassifiernodeslots__table_hwb_wj3_cdb" class="defaultstyle" "> -->
autoclassifiernode properties
Table 1\. autoclassifiernode properties
| `autoclassifiernode` Properties | Values | Property description |
| ---------------------------------------- | -------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `target` | *field* | For flag targets, the Auto Classifier node requires a single target and one or more input fields\. Weight and frequency fields can also be specified\. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.html#modelingnodeslots_common) for more information\. |
| `ranking_measure` | `Accuracy` <br>`Area_under_curve` <br>`Profit` <br>`Lift` <br>`Num_variables` | |
| `ranking_dataset` | `Training` <br>`Test` | |
| `number_of_models` | *integer* | Number of models to include in the model nugget\. Specify an integer between 1 and 100\. |
| `calculate_variable_importance` | *flag* | |
| `enable_accuracy_limit` | *flag* | |
| `accuracy_limit` | *integer* | Integer between 0 and 100\. |
| `enable_area_under_curve_limit` | *flag* | |
| `area_under_curve_limit` | *number* | Real number between 0\.0 and 1\.0\. |
| `enable_profit_limit` | *flag* | |
| `profit_limit` | *number* | Integer greater than 0\. |
| `enable_lift_limit` | *flag* | |
| `lift_limit` | *number* | Real number greater than 1\.0\. |
| `enable_number_of_variables_limit` | *flag* | |
| `number_of_variables_limit` | *number* | Integer greater than 0\. |
| `use_fixed_cost` | *flag* | |
| `fixed_cost` | *number* | Real number greater than 0\.0\. |
| `variable_cost` | *field* | |
| `use_fixed_revenue` | *flag* | |
| `fixed_revenue` | *number* | Real number greater than 0\.0\. |
| `variable_revenue` | *field* | |
| `use_fixed_weight` | *flag* | |
| `fixed_weight` | *number* | Real number greater than 0\.0 |
| `variable_weight` | *field* | |
| `lift_percentile` | *number* | Integer between 0 and 100\. |
| `enable_model_build_time_limit` | *flag* | |
| `model_build_time_limit` | *number* | Integer set to the number of minutes to limit the time taken to build each individual model\. |
| `enable_stop_after_time_limit` | *flag* | |
| `stop_after_time_limit` | *number* | Real number set to the number of hours to limit the overall elapsed time for an auto classifier run\. |
| `enable_stop_after_valid_model_produced` | *flag* | |
| `use_costs` | *flag* | |
| `<algorithm>` | *flag* | Enables or disables the use of a specific algorithm\. |
| `<algorithm>.<property>` | *string* | Sets a property value for a specific algorithm\. See [Setting algorithm properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/factorymodeling_algorithmproperties.html#factorymodeling_algorithmproperties) for more information\. |
| `use_cross_validation` | *field* | Fields added to this list can take either the condition or prediction role in rules that are generated by the model\. This is on a rule by rule basis, so a field might be a condition in one rule and a prediction in another\. |
| `number_of_folds` | *integer* | N fold parameter for cross validation, with range from 3 to 10\. |
| `set_random_seed` | *boolean* | Setting a random seed allows you to replicate analyses\. Specify an integer or click Generate, which will create a pseudo\-random integer between 1 and 2147483647, inclusive\. By default, analyses are replicated with seed 229176228\. |
| `random_seed` | *integer* | Random seed |
| `stop_if_valid_model` | *boolean* | |
| `filter_individual_model_output` | *boolean* | Removes from the output all of the additional fields generated by the individual models that feed into the Ensemble node\. Select this option if you're interested only in the combined score from all of the input models\. Ensure that this option is deselected if, for example, you want to use an Analysis node or Evaluation node to compare the accuracy of the combined score with that of each of the individual input models\. |
| `set_ensemble_method` | `"Voting" "ConfidenceWeightedVoting" "HighestConfidence"` | Ensemble method for set targets\. |
| `set_voting_tie_selection` | `"Random" "HighestConfidence"` | If voting is tied, select value randomly or by using highest confidence\. |
| `flag_ensemble_method` | `"Voting" "ConfidenceWeightedVoting" "RawPropensityWeightedVoting" "HighestConfidence" "AverageRawPropensity"` | Ensemble method for flag targets\. |
| `flag_voting_tie_selection` | `"Random" "HighestConfidence" "RawPropensity"` | If voting is tied, select the value randomly, with highest confidence, or with raw propensity\. |
<!-- </table "summary="autoclassifiernode properties" id="binaryclassifiernodeslots__table_hwb_wj3_cdb" class="defaultstyle" "> -->
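As a sketch of how the per-algorithm properties in this table might be used from a script: the keyed form below follows the linked Setting algorithm properties topic, and the node type string, target field, and values are illustrative assumptions.

```python
# Minimal sketch: rank candidate models by accuracy and tune one constituent algorithm.
stream = modeler.script.stream()
node = stream.create("autoclassifier", "My Auto Classifier node")   # assumed type string
node.setPropertyValue("target", "churn")                # hypothetical flag target
node.setPropertyValue("ranking_measure", "Accuracy")
node.setPropertyValue("number_of_models", 5)
node.setPropertyValue("chaid", True)                    # enable the CHAID algorithm
node.setKeyedPropertyValue("chaid", "max_depth", 5)     # set an algorithm-specific property
```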
<!-- </article "role="article" "> -->
|
B741FE5CDD06D606F869B15DEB2173C1F134D22D | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/binningnodeslots.html?context=cdpaas&locale=en | binningnode properties | binningnode properties
The Binning node automatically creates new nominal (set) fields based on the values of one or more existing continuous (numeric range) fields. For example, you can transform a continuous income field into a new categorical field containing groups of income as deviations from the mean. After you create bins for the new field, you can generate a Derive node based on the cut points.
binningnode properties
Table 1. binningnode properties
binningnode properties Data type Property description
fields [field1 field2 ... fieldn] Continuous (numeric range) fields pending transformation. You can bin multiple fields simultaneously.
method FixedWidth <br>EqualCount <br>Rank <br>SDev <br>Optimal Method used for determining cut points for new field bins (categories).
recalculate_bins Always <br>IfNecessary Specifies whether the bins are recalculated and the data placed in the relevant bin every time the node is executed, or that data is added only to existing bins and any new bins that have been added.
fixed_width_name_extension string The default extension is _BIN.
fixed_width_add_as Suffix <br>Prefix Specifies whether the extension is added to the end (suffix) of the field name or to the start (prefix). The default extension is income_BIN.
fixed_bin_method Width <br>Count
fixed_bin_count integer Specifies an integer used to determine the number of fixed-width bins (categories) for the new field(s).
fixed_bin_width real Value (integer or real) for calculating width of the bin.
equal_count_name_extension string The default extension is _TILE.
equal_count_add_as Suffix <br>Prefix Specifies an extension, either suffix or prefix, used for the field name generated by using standard p-tiles. The default extension is _TILE plus N, where N is the tile number.
tile4 flag Generates four quantile bins, each containing 25% of cases.
tile5 flag Generates five quintile bins.
tile10 flag Generates 10 decile bins.
tile20 flag Generates 20 vingtile bins.
tile100 flag Generates 100 percentile bins.
use_custom_tile flag
custom_tile_name_extension string The default extension is _TILEN.
custom_tile_add_as Suffix <br>Prefix
custom_tile integer
equal_count_method RecordCount <br>ValueSum The RecordCount method seeks to assign an equal number of records to each bin, while ValueSum assigns records so that the sum of the values in each bin is equal.
tied_values_method Next <br>Current <br>Random Specifies which bin tied value data is to be put in.
rank_order Ascending <br>Descending This property includes Ascending (lowest value is marked 1) or Descending (highest value is marked 1).
rank_add_as Suffix <br>Prefix This option applies to rank, fractional rank, and percentage rank.
rank flag
rank_name_extension string The default extension is _RANK.
rank_fractional flag Ranks cases where the value of the new field equals rank divided by the sum of the weights of the nonmissing cases. Fractional ranks fall in the range of 0–1.
rank_fractional_name_extension string The default extension is _F_RANK.
rank_pct flag Each rank is divided by the number of records with valid values and multiplied by 100. Percentage fractional ranks fall in the range of 1–100.
rank_pct_name_extension string The default extension is _P_RANK.
sdev_name_extension string
sdev_add_as Suffix <br>Prefix
sdev_count One <br>Two <br>Three
optimal_name_extension string The default extension is _OPTIMAL.
optimal_add_as Suffix <br>Prefix
optimal_supervisor_field field Field chosen as the supervisory field to which the fields selected for binning are related.
optimal_merge_bins flag Specifies that any bins with small case counts will be added to a larger, neighboring bin.
optimal_small_bin_threshold integer
optimal_pre_bin flag Indicates that prebinning of dataset is to take place.
optimal_max_bins integer Specifies an upper limit to avoid creating an inordinately large number of bins.
optimal_lower_end_point Inclusive <br>Exclusive
optimal_first_bin Unbounded <br>Bounded
optimal_last_bin Unbounded <br>Bounded
| # binningnode properties #
The Binning node automatically creates new nominal (set) fields based on the values of one or more existing continuous (numeric range) fields\. For example, you can transform a continuous income field into a new categorical field containing groups of income as deviations from the mean\. After you create bins for the new field, you can generate a Derive node based on the cut points\.
<!-- <table "summary="binningnode properties" id="binningnodeslots__table_x1r_wj3_cdb" class="defaultstyle" "> -->
binningnode properties
Table 1\. binningnode properties
| `binningnode` properties | Data type | Property description |
| ---------------------------------- | --------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `fields` | *\[field1 field2 \.\.\. fieldn\]* | Continuous (numeric range) fields pending transformation\. You can bin multiple fields simultaneously\. |
| `method` | `FixedWidth` <br>`EqualCount` <br>`Rank` <br>`SDev` <br>`Optimal` | Method used for determining cut points for new field bins (categories)\. |
| `recalculate_bins` | `Always` <br>`IfNecessary` | Specifies whether the bins are recalculated and the data placed in the relevant bin every time the node is executed, or that data is added only to existing bins and any new bins that have been added\. |
| `fixed_width_name_extension` | *string* | The default extension is *\_BIN*\. |
| `fixed_width_add_as` | `Suffix` <br>`Prefix` | Specifies whether the extension is added to the end (suffix) of the field name or to the start (prefix)\. The default extension is *income\_BIN*\. |
| `fixed_bin_method` | `Width` <br>`Count` | |
| `fixed_bin_count` | *integer* | Specifies an integer used to determine the number of fixed\-width bins (categories) for the new field(s)\. |
| `fixed_bin_width` | *real* | Value (integer or real) for calculating width of the bin\. |
| `equal_count_name_extension` | *string* | The default extension is *\_TILE*\. |
| `equal_count_add_as` | `Suffix` <br>`Prefix` | Specifies an extension, either suffix or prefix, used for the field name generated by using standard p\-tiles\. The default extension is *\_TILE* plus *N*, where *N* is the tile number\. |
| `tile4` | *flag* | Generates four quantile bins, each containing 25% of cases\. |
| `tile5` | *flag* | Generates five quintile bins\. |
| `tile10` | *flag* | Generates 10 decile bins\. |
| `tile20` | *flag* | Generates 20 vingtile bins\. |
| `tile100` | *flag* | Generates 100 percentile bins\. |
| `use_custom_tile` | *flag* | |
| `custom_tile_name_extension` | *string* | The default extension is *\_TILEN*\. |
| `custom_tile_add_as` | `Suffix` <br>`Prefix` | |
| `custom_tile` | *integer* | |
| `equal_count_method` | `RecordCount` <br>`ValueSum` | The `RecordCount` method seeks to assign an equal number of records to each bin, while `ValueSum` assigns records so that the sum of the values in each bin is equal\. |
| `tied_values_method` | `Next` <br>`Current` <br>`Random` | Specifies which bin tied value data is to be put in\. |
| `rank_order` | `Ascending` <br>`Descending` | This property includes `Ascending` (lowest value is marked 1) or `Descending` (highest value is marked 1)\. |
| `rank_add_as` | `Suffix` <br>`Prefix` | This option applies to rank, fractional rank, and percentage rank\. |
| `rank` | *flag* | |
| `rank_name_extension` | *string* | The default extension is *\_RANK*\. |
| `rank_fractional` | *flag* | Ranks cases where the value of the new field equals rank divided by the sum of the weights of the nonmissing cases\. Fractional ranks fall in the range of 0–1\. |
| `rank_fractional_name_extension` | *string* | The default extension is *\_F\_RANK*\. |
| `rank_pct` | *flag* | Each rank is divided by the number of records with valid values and multiplied by 100\. Percentage fractional ranks fall in the range of 1–100\. |
| `rank_pct_name_extension` | *string* | The default extension is *\_P\_RANK*\. |
| `sdev_name_extension` | *string* | |
| `sdev_add_as` | `Suffix` <br>`Prefix` | |
| `sdev_count` | `One` <br>`Two` <br>`Three` | |
| `optimal_name_extension` | *string* | The default extension is *\_OPTIMAL*\. |
| `optimal_add_as` | `Suffix` <br>`Prefix` | |
| `optimal_supervisor_field` | *field* | Field chosen as the supervisory field to which the fields selected for binning are related\. |
| `optimal_merge_bins` | *flag* | Specifies that any bins with small case counts will be added to a larger, neighboring bin\. |
| `optimal_small_bin_threshold` | *integer* | |
| `optimal_pre_bin` | *flag* | Indicates that prebinning of dataset is to take place\. |
| `optimal_max_bins` | *integer* | Specifies an upper limit to avoid creating an inordinately large number of bins\. |
| `optimal_lower_end_point` | `Inclusive` <br>`Exclusive` | |
| `optimal_first_bin` | `Unbounded` <br>`Bounded` | |
| `optimal_last_bin` | `Unbounded` <br>`Bounded` | |
<!-- </table "summary="binningnode properties" id="binningnodeslots__table_x1r_wj3_cdb" class="defaultstyle" "> -->
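A minimal sketch of configuring fixed-width binning with these properties follows; the node type string and field name are hypothetical, and it assumes the standard SPSS Modeler Python scripting API.

```python
# Minimal sketch: bin Income into 10 equal-width categories.
stream = modeler.script.stream()
node = stream.create("binning", "My Binning node")     # assumed type string
node.setPropertyValue("fields", ["Income"])            # hypothetical continuous field
node.setPropertyValue("method", "FixedWidth")
node.setPropertyValue("fixed_bin_method", "Count")
node.setPropertyValue("fixed_bin_count", 10)
node.setPropertyValue("fixed_width_name_extension", "_BIN")
node.setPropertyValue("fixed_width_add_as", "Suffix")
```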
<!-- </article "role="article" "> -->
|
5C95F2D19465DDA8969D0498D1B96D870BD02A1F | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/c50nodeslots.html?context=cdpaas&locale=en | c50node properties | c50node properties
The C5.0 node builds either a decision tree or a rule set. The model works by splitting the sample based on the field that provides the maximum information gain at each level. The target field must be categorical. Multiple splits into more than two subgroups are allowed.
c50node properties
Table 1. c50node properties
c50node Properties Values Property description
target field C50 models use a single target field and one or more input fields. You can also specify a weight field. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information.
output_type DecisionTree <br>RuleSet
group_symbolics flag
use_boost flag
boost_num_trials number
use_xval flag
xval_num_folds number
mode Simple <br>Expert
favor Accuracy <br>Generality Favor accuracy or generality.
expected_noise number
min_child_records number
pruning_severity number
use_costs flag
costs structured This is a structured property. See the example for usage.
use_winnowing flag
use_global_pruning flag On (True) by default.
calculate_variable_importance flag
calculate_raw_propensities flag
calculate_adjusted_propensities flag
adjusted_propensity_partition Test <br>Validation
| # c50node properties #
The C5\.0 node builds either a decision tree or a rule set\. The model works by splitting the sample based on the field that provides the maximum information gain at each level\. The target field must be categorical\. Multiple splits into more than two subgroups are allowed\.
<!-- <table "summary="c50node properties" id="c50nodeslots__table_c5j_xj3_cdb" class="defaultstyle" "> -->
c50node properties
Table 1\. c50node properties
| `c50node` Properties | Values | Property description |
| --------------------------------- | ----------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `target` | *field* | C50 models use a single target field and one or more input fields\. You can also specify a weight field\. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.html#modelingnodeslots_common) for more information\. |
| `output_type` | `DecisionTree` <br>`RuleSet` | |
| `group_symbolics` | *flag* | |
| `use_boost` | *flag* | |
| `boost_num_trials` | *number* | |
| `use_xval` | *flag* | |
| `xval_num_folds` | *number* | |
| `mode` | `Simple` <br>`Expert` | |
| `favor` | `Accuracy` <br>`Generality` | Favor accuracy or generality\. |
| `expected_noise` | *number* | |
| `min_child_records` | *number* | |
| `pruning_severity` | *number* | |
| `use_costs` | *flag* | |
| `costs` | *structured* | This is a structured property\. See the example for usage\. |
| `use_winnowing` | *flag* | |
| `use_global_pruning` | *flag* | On (`True`) by default\. |
| `calculate_variable_importance` | *flag* | |
| `calculate_raw_propensities` | *flag* | |
| `calculate_adjusted_propensities` | *flag* | |
| `adjusted_propensity_partition` | `Test` <br>`Validation` | |
<!-- </table "summary="c50node properties" id="c50nodeslots__table_c5j_xj3_cdb" class="defaultstyle" "> -->
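The `costs` property referenced above is structured; the sketch below shows one plausible usage, with hypothetical category names and cost values (check the ordering of the two categories in each entry against your own build), alongside a few other properties from the table.

```python
# Minimal sketch: a C5.0 decision tree with boosting and misclassification costs.
stream = modeler.script.stream()
node = stream.create("c50", "My C5.0 node")            # assumed type string
node.setPropertyValue("target", "Drug")                # hypothetical target field
node.setPropertyValue("output_type", "DecisionTree")
node.setPropertyValue("use_boost", True)
node.setPropertyValue("boost_num_trials", 10)
node.setPropertyValue("use_costs", True)
# Each entry pairs two target categories with a misclassification cost.
node.setPropertyValue("costs", [["drugA", "drugB", 3.0], ["drugX", "drugY", 4.0]])
```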
<!-- </article "role="article" "> -->
|
FCBDBFD3E4BEBEFE552FAD012509948FABA34B44 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/c50nuggetnodeslots.html?context=cdpaas&locale=en | applyc50node properties | applyc50node properties
You can use C5.0 modeling nodes to generate a C5.0 model nugget. The scripting name of this model nugget is applyc50node. For more information on scripting the modeling node itself, see [c50node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/c50nodeslots.htmlc50nodeslots).
applyc50node properties
Table 1. applyc50node properties
applyc50node Properties Values Property description
sql_generate udf <br>Never <br>NoMissingValues Used to set SQL generation options during rule set execution. The default value is udf.
calculate_conf flag Available when SQL generation is enabled; this property includes confidence calculations in the generated tree.
calculate_raw_propensities flag
calculate_adjusted_propensities flag
| # applyc50node properties #
You can use C5\.0 modeling nodes to generate a C5\.0 model nugget\. The scripting name of this model nugget is *applyc50node*\. For more information on scripting the modeling node itself, see [c50node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/c50nodeslots.html#c50nodeslots)\.
<!-- <table "summary="applyc50node properties" id="c50nuggetnodeslots__table_xyy_xj3_cdb" class="defaultstyle" "> -->
applyc50node properties
Table 1\. applyc50node properties
| `applyc50node` Properties | Values | Property description |
| --------------------------------- | ----------------------------- | ---------------------------------------------------------------------------------------------------------------- |
| `sql_generate` | `udf` <br>`Never` <br>`NoMissingValues` | Used to set SQL generation options during rule set execution\. The default value is `udf`\. |
| `calculate_conf` | *flag* | Available when SQL generation is enabled; this property includes confidence calculations in the generated tree\. |
| `calculate_raw_propensities` | *flag* | |
| `calculate_adjusted_propensities` | *flag* | |
<!-- </table "summary="applyc50node properties" id="c50nuggetnodeslots__table_xyy_xj3_cdb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
499553788712E55ABE1345C61CCDB15D1CE04E83 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/carmanodeslots.html?context=cdpaas&locale=en | carmanode properties | carmanode properties
The CARMA model extracts a set of rules from the data without requiring you to specify input or target fields. In contrast to Apriori, the CARMA node offers build settings for rule support (support for both antecedent and consequent) rather than just antecedent support. This means that the rules generated can be used for a wider variety of applications—for example, to find a list of products or services (antecedents) whose consequent is the item that you want to promote this holiday season.
carmanode properties
Table 1. carmanode properties
carmanode Properties Values Property description
inputs [field1 ... fieldn] CARMA models use a list of input fields, but no target. Weight and frequency fields are not used. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information.
id_field field Field used as the ID field for model building.
contiguous flag Used to specify whether IDs in the ID field are contiguous.
use_transactional_data flag
content_field field
min_supp number(percent) Relates to rule support rather than antecedent support. The default is 20%.
min_conf number(percent) The default is 20%.
max_size number The default is 10.
mode Simple <br>Expert The default is Simple.
exclude_multiple flag Excludes rules with multiple consequents. The default is False.
use_pruning flag The default is False.
pruning_value number The default is 500.
vary_support flag
estimated_transactions integer
rules_without_antecedents flag
| # carmanode properties #
The CARMA model extracts a set of rules from the data without requiring you to specify input or target fields\. In contrast to Apriori, the CARMA node offers build settings for rule support (support for both antecedent and consequent) rather than just antecedent support\. This means that the rules generated can be used for a wider variety of applications—for example, to find a list of products or services (antecedents) whose consequent is the item that you want to promote this holiday season\.
<!-- <table "summary="carmanode properties" class="defaultstyle" "> -->
carmanode properties
Table 1\. carmanode properties
| `carmanode` Properties | Values | Property description |
| --------------------------- | -------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `inputs` | *\[field1 \.\.\. fieldn\]* | CARMA models use a list of input fields, but no target\. Weight and frequency fields are not used\. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.html#modelingnodeslots_common) for more information\. |
| `id_field` | *field* | Field used as the ID field for model building\. |
| `contiguous` | *flag* | Used to specify whether IDs in the ID field are contiguous\. |
| `use_transactional_data` | *flag* | |
| `content_field` | *field* | |
| `min_supp` | *number(percent)* | Relates to rule support rather than antecedent support\. The default is 20%\. |
| `min_conf` | *number(percent)* | The default is 20%\. |
| `max_size` | *number* | The default is 10\. |
| `mode` | `Simple` <br>`Expert` | The default is `Simple`\. |
| `exclude_multiple` | *flag* | Excludes rules with multiple consequents\. The default is `False`\. |
| `use_pruning` | *flag* | The default is `False`\. |
| `pruning_value` | *number* | The default is 500\. |
| `vary_support` | *flag* | |
| `estimated_transactions` | *integer* | |
| `rules_without_antecedents` | *flag* | |
<!-- </table "summary="carmanode properties" class="defaultstyle" "> -->
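A minimal sketch follows, assuming transactional-style data with hypothetical ID and content fields and the standard SPSS Modeler Python scripting API.

```python
# Minimal sketch: mine association rules from transactional data with CARMA.
stream = modeler.script.stream()
node = stream.create("carma", "My CARMA node")         # assumed type string
node.setPropertyValue("use_transactional_data", True)
node.setPropertyValue("id_field", "CustomerID")        # hypothetical ID field
node.setPropertyValue("content_field", "Product")      # hypothetical content field
node.setPropertyValue("min_supp", 10.0)                # rule support, percent
node.setPropertyValue("min_conf", 30.0)                # confidence, percent
node.setPropertyValue("max_size", 5)
```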
<!-- </article "role="article" "> -->
|
CE14B5EFF03A17683C6AA16D02F62E1EBAD0D7F2 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/carmanuggetnodeslots.html?context=cdpaas&locale=en | applycarmanode properties | applycarmanode properties
You can use Carma modeling nodes to generate a Carma model nugget. The scripting name of this model nugget is applycarmanode. For more information on scripting the modeling node itself, see [carmanode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/carmanodeslots.htmlcarmanodeslots).
applycarmanode properties
Table 1. applycarmanode properties
applycarmanode Properties Values Property description
enable_sql_generation udf <br>native When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations.
| # applycarmanode properties #
You can use Carma modeling nodes to generate a Carma model nugget\. The scripting name of this model nugget is *applycarmanode*\. For more information on scripting the modeling node itself, see [carmanode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/carmanodeslots.html#carmanodeslots)\.
<!-- <table "summary="applycarmanode properties" id="carmanuggetnodeslots__table_otk_dj3_cdb" class="defaultstyle" "> -->
applycarmanode properties
Table 1\. applycarmanode properties
| `applycarmanode` Properties | Values | Property description |
| --------------------------- | ------------- | ------------------------------------------------------------------------------------------------------------------------------------------------ |
| `enable_sql_generation` | `udf` <br>`native` | When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations\. |
<!-- </table "summary="applycarmanode properties" id="carmanuggetnodeslots__table_otk_dj3_cdb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
CB130D4E1AE505CE39CBD49BF9D22359B9EC80AB | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/cartnodeslots.html?context=cdpaas&locale=en | cartnode properties | cartnode properties
The Classification and Regression (C&R) Tree node generates a decision tree that allows you to predict or classify future observations. The method uses recursive partitioning to split the training records into segments by minimizing the impurity at each step, where a node in the tree is considered "pure" if 100% of cases in the node fall into a specific category of the target field. Target and input fields can be numeric ranges or categorical (nominal, ordinal, or flags); all splits are binary (only two subgroups).
cartnode properties
Table 1. cartnode properties
cartnode Properties Values Property description
target field C&R Tree models require a single target and one or more input fields. A frequency field can also be specified. See the topic [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information.
continue_training_existing_model flag
objective Standard <br>Boosting <br>Bagging <br>psm psm is used for very large datasets, and requires a Server connection.
model_output_type Single <br>InteractiveBuilder
use_tree_directives flag
tree_directives string Specify directives for growing the tree. Directives can be wrapped in triple quotes to avoid escaping newlines or quotes. Note that directives may be highly sensitive to minor changes in data or modeling options and may not generalize to other datasets.
use_max_depth Default <br>Custom
max_depth integer Maximum tree depth, from 0 to 1000. Used only if use_max_depth = Custom.
prune_tree flag Prune tree to avoid overfitting.
use_std_err flag Use maximum difference in risk (in Standard Errors).
std_err_multiplier number Maximum difference.
max_surrogates number Maximum surrogates.
use_percentage flag
min_parent_records_pc number
min_child_records_pc number
min_parent_records_abs number
min_child_records_abs number
use_costs flag
costs structured Structured property.
priors Data <br>Equal <br>Custom
custom_priors structured Structured property.
adjust_priors flag
trails number Number of component models for boosting or bagging.
set_ensemble_method Voting <br>HighestProbability <br>HighestMeanProbability Default combining rule for categorical targets.
range_ensemble_method Mean <br>Median Default combining rule for continuous targets.
large_boost flag Apply boosting to very large data sets.
min_impurity number
impurity_measure Gini <br>Twoing <br>Ordered
train_pct number Overfit prevention set.
set_random_seed flag Replicate results option.
seed number
calculate_variable_importance flag
calculate_raw_propensities flag
calculate_adjusted_propensities flag
adjusted_propensity_partition Test <br>Validation
| # cartnode properties #
The Classification and Regression (C&R) Tree node generates a decision tree that allows you to predict or classify future observations\. The method uses recursive partitioning to split the training records into segments by minimizing the impurity at each step, where a node in the tree is considered "pure" if 100% of cases in the node fall into a specific category of the target field\. Target and input fields can be numeric ranges or categorical (nominal, ordinal, or flags); all splits are binary (only two subgroups)\.
<!-- <table "summary="cartnode properties" class="defaultstyle" "> -->
cartnode properties
Table 1\. cartnode properties
| `cartnode` Properties | Values | Property description |
| ---------------------------------- | ---------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `target` | *field* | C&R Tree models require a single target and one or more input fields\. A frequency field can also be specified\. See the topic [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.html#modelingnodeslots_common) for more information\. |
| `continue_training_existing_model` | *flag* | |
| `objective` | `Standard` <br>`Boosting` <br>`Bagging` <br>`psm` | `psm` is used for very large datasets, and requires a Server connection\. |
| `model_output_type` | `Single` <br>`InteractiveBuilder` | |
| `use_tree_directives` | *flag* | |
| `tree_directives` | *string* | Specify directives for growing the tree\. Directives can be wrapped in triple quotes to avoid escaping newlines or quotes\. Note that directives may be highly sensitive to minor changes in data or modeling options and may not generalize to other datasets\. |
| `use_max_depth` | `Default` <br>`Custom` | |
| `max_depth` | *integer* | Maximum tree depth, from 0 to 1000\. Used only if `use_max_depth = Custom`\. |
| `prune_tree` | *flag* | Prune tree to avoid overfitting\. |
| `use_std_err` | *flag* | Use maximum difference in risk (in Standard Errors)\. |
| `std_err_multiplier` | *number* | Maximum difference\. |
| `max_surrogates` | *number* | Maximum surrogates\. |
| `use_percentage` | *flag* | |
| `min_parent_records_pc` | *number* | |
| `min_child_records_pc` | *number* | |
| `min_parent_records_abs` | *number* | |
| `min_child_records_abs` | *number* | |
| `use_costs` | *flag* | |
| `costs` | *structured* | Structured property\. |
| `priors` | `Data` <br>`Equal` <br>`Custom` | |
| `custom_priors` | *structured* | Structured property\. |
| `adjust_priors` | *flag* | |
| `trails` | *number* | Number of component models for boosting or bagging\. |
| `set_ensemble_method` | `Voting` <br>`HighestProbability` <br>`HighestMeanProbability` | Default combining rule for categorical targets\. |
| `range_ensemble_method` | `Mean` <br>`Median` | Default combining rule for continuous targets\. |
| `large_boost` | *flag* | Apply boosting to very large data sets\. |
| `min_impurity` | *number* | |
| `impurity_measure` | `Gini` <br>`Twoing` <br>`Ordered` | |
| `train_pct` | *number* | Overfit prevention set\. |
| `set_random_seed` | *flag* | Replicate results option\. |
| `seed` | *number* | |
| `calculate_variable_importance` | *flag* | |
| `calculate_raw_propensities` | *flag* | |
| `calculate_adjusted_propensities` | *flag* | |
| `adjusted_propensity_partition` | `Test` <br>`Validation` | |
<!-- </table "summary="cartnode properties" class="defaultstyle" "> -->
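The sketch below sets a few of the depth and pruning options from this table; the node type string, target field, and values are illustrative assumptions.

```python
# Minimal sketch: a C&R Tree with a custom depth limit and pruning enabled.
stream = modeler.script.stream()
node = stream.create("cart", "My C&R Tree node")       # assumed type string
node.setPropertyValue("target", "Drug")                # hypothetical target field
node.setPropertyValue("use_max_depth", "Custom")
node.setPropertyValue("max_depth", 5)
node.setPropertyValue("prune_tree", True)
node.setPropertyValue("use_std_err", True)
node.setPropertyValue("std_err_multiplier", 1.0)
```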
<!-- </article "role="article" "> -->
|
C53BD428F2955B76BF24620A21A6461A1CC19F11 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/cartnuggetnodeslots.html?context=cdpaas&locale=en | applycartnode properties | applycartnode properties
You can use C&R Tree modeling nodes to generate a C&R Tree model nugget. The scripting name of this model nugget is applycartnode. For more information on scripting the modeling node itself, see [cartnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/cartnodeslots.htmlcartnodeslots).
applycartnode properties
Table 1. applycartnode properties
applycartnode Properties Values Property description
calculate_conf flag Available when SQL generation is enabled; this property includes confidence calculations in the generated tree.
display_rule_id flag Adds a field in the scoring output that indicates the ID for the terminal node to which each record is assigned.
calculate_raw_propensities flag
calculate_adjusted_propensities flag
sql_generate Never <br>NoMissingValues <br>MissingValues <br>native When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations.
| # applycartnode properties #
You can use C&R Tree modeling nodes to generate a C&R Tree model nugget\. The scripting name of this model nugget is *applycartnode*\. For more information on scripting the modeling node itself, see [cartnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/cartnodeslots.html#cartnodeslots)\.
<!-- <table "summary="applycartnode properties" id="cartnuggetnodeslots__table_ffc_ck3_cdb" class="defaultstyle" "> -->
applycartnode properties
Table 1\. applycartnode properties
| `applycartnode` Properties | Values | Property description |
| --------------------------------- | ----------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------ |
| `calculate_conf` | *flag* | Available when SQL generation is enabled; this property includes confidence calculations in the generated tree\. |
| `display_rule_id` | *flag* | Adds a field in the scoring output that indicates the ID for the terminal node to which each record is assigned\. |
| `calculate_raw_propensities` | *flag* | |
| `calculate_adjusted_propensities` | *flag* | |
| `sql_generate` | `Never` <br>`NoMissingValues` <br>`MissingValues` <br>`native` | When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations\. |
<!-- </table "summary="applycartnode properties" id="cartnuggetnodeslots__table_ffc_ck3_cdb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
B0B1665F022C9E781CE1AE94FA885266391FBCFE | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/chaidnodeslots.html?context=cdpaas&locale=en | chaidnode properties | chaidnode properties
The CHAID node generates decision trees using chi-square statistics to identify optimal splits. Unlike the C&R Tree and Quest nodes, CHAID can generate non-binary trees, meaning that some splits have more than two branches. Target and input fields can be numeric range (continuous) or categorical. Exhaustive CHAID is a modification of CHAID that does a more thorough job of examining all possible splits but takes longer to compute.
chaidnode properties
Table 1. chaidnode properties
chaidnode Properties Values Property description
target field CHAID models require a single target and one or more input fields. You can also specify a frequency. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information.
continue_training_existing_model flag
objective Standard <br>Boosting <br>Bagging <br>psm psm is used for very large datasets, and requires a server connection.
model_output_type Single <br>InteractiveBuilder
use_tree_directives flag
tree_directives string
method Chaid <br>ExhaustiveChaid
use_max_depth Default <br>Custom
max_depth integer Maximum tree depth, from 0 to 1000. Used only if use_max_depth = Custom.
use_percentage flag
min_parent_records_pc number
min_child_records_pc number
min_parent_records_abs number
min_child_records_abs number
use_costs flag
costs structured Structured property.
trails number Number of component models for boosting or bagging.
set_ensemble_method Voting <br>HighestProbability <br>HighestMeanProbability Default combining rule for categorical targets.
range_ensemble_method Mean <br>Median Default combining rule for continuous targets.
large_boost flag Apply boosting to very large data sets.
split_alpha number Significance level for splitting.
merge_alpha number Significance level for merging.
bonferroni_adjustment flag Adjust significance values using Bonferroni method.
split_merged_categories flag Allow resplitting of merged categories.
chi_square Pearson <br>LR Method used to calculate the chi-square statistic: Pearson or Likelihood Ratio.
epsilon number Minimum change in expected cell frequencies.
max_iterations number Maximum iterations for convergence.
set_random_seed integer
seed number
calculate_variable_importance flag
calculate_raw_propensities flag
calculate_adjusted_propensities flag
adjusted_propensity_partition Test <br>Validation
maximum_number_of_models integer
train_pct double The algorithm internally separates records into a model building set and an overfit prevention set, which is an independent set of data records used to track errors during training in order to prevent the method from modeling chance variation in the data. Specify a percentage of records. The default is 30.
| # chaidnode properties #
The CHAID node generates decision trees using chi\-square statistics to identify optimal splits\. Unlike the C&R Tree and Quest nodes, CHAID can generate non\-binary trees, meaning that some splits have more than two branches\. Target and input fields can be numeric range (continuous) or categorical\. Exhaustive CHAID is a modification of CHAID that does a more thorough job of examining all possible splits but takes longer to compute\.
<!-- <table "summary="chaidnode properties" id="chaidnodeslots__table_glq_ck3_cdb" class="defaultstyle" "> -->
chaidnode properties
Table 1\. chaidnode properties
| `chaidnode` Properties | Values | Property description |
| ---------------------------------- | ---------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `target` | *field* | CHAID models require a single target and one or more input fields\. You can also specify a frequency\. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.html#modelingnodeslots_common) for more information\. |
| `continue_training_existing_model` | *flag* | |
| `objective` | `Standard` <br>`Boosting` <br>`Bagging` <br>`psm` | `psm` is used for very large datasets, and requires a server connection\. |
| `model_output_type` | `Single` <br>`InteractiveBuilder` | |
| `use_tree_directives` | *flag* | |
| `tree_directives` | *string* | |
| `method` | `Chaid` <br>`ExhaustiveChaid` | |
| `use_max_depth` | `Default` <br>`Custom` | |
| `max_depth` | *integer* | Maximum tree depth, from 0 to 1000\. Used only if `use_max_depth = Custom`\. |
| `use_percentage` | *flag* | |
| `min_parent_records_pc` | *number* | |
| `min_child_records_pc` | *number* | |
| `min_parent_records_abs` | *number* | |
| `min_child_records_abs` | *number* | |
| `use_costs` | *flag* | |
| `costs` | *structured* | Structured property\. |
| `trails` | *number* | Number of component models for boosting or bagging\. |
| `set_ensemble_method` | `Voting` <br>`HighestProbability` <br>`HighestMeanProbability` | Default combining rule for categorical targets\. |
| `range_ensemble_method` | `Mean` <br>`Median` | Default combining rule for continuous targets\. |
| `large_boost` | *flag* | Apply boosting to very large data sets\. |
| `split_alpha` | *number* | Significance level for splitting\. |
| `merge_alpha` | *number* | Significance level for merging\. |
| `bonferroni_adjustment` | *flag* | Adjust significance values using Bonferroni method\. |
| `split_merged_categories` | *flag* | Allow resplitting of merged categories\. |
| `chi_square` | `Pearson` <br>`LR` | Method used to calculate the chi\-square statistic: Pearson or Likelihood Ratio |
| `epsilon` | *number* | Minimum change in expected cell frequencies\. |
| `max_iterations` | *number* | Maximum iterations for convergence\. |
| `set_random_seed` | *integer* | |
| `seed` | *number* | |
| `calculate_variable_importance` | *flag* | |
| `calculate_raw_propensities` | *flag* | |
| `calculate_adjusted_propensities` | *flag* | |
| `adjusted_propensity_partition` | `Test` <br>`Validation` | |
| `maximum_number_of_models` | *integer* | |
| `train_pct` | *double* | The algorithm internally separates records into a model building set and an overfit prevention set, which is an independent set of data records used to track errors during training in order to prevent the method from modeling chance variation in the data\. Specify a percentage of records\. The default is `30`\. |
<!-- </table "summary="chaidnode properties" id="chaidnodeslots__table_glq_ck3_cdb" class="defaultstyle" "> -->
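A minimal sketch of configuring these properties from a script; the node type string `"chaid"`, the node position, and the field name `"Drug"` are illustrative assumptions:

```python
# Runs in the flow's scripting environment, where modeler.script is predefined.
stream = modeler.script.stream()

# Create a CHAID modeling node; the type string "chaid" is an assumption.
chaid = stream.createAt("chaid", "CHAID demo", 200, 100)

chaid.setPropertyValue("target", "Drug")             # hypothetical target field
chaid.setPropertyValue("method", "ExhaustiveChaid")
chaid.setPropertyValue("use_max_depth", "Custom")
chaid.setPropertyValue("max_depth", 8)
chaid.setPropertyValue("split_alpha", 0.03)          # significance level for splitting
chaid.setPropertyValue("train_pct", 30.0)            # overfit prevention set percentage
```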
<!-- </article "role="article" "> -->
|
6644EAA4A383F7ED21C0CA1ADAE80A634867870A | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/chaidnuggetnodeslots.html?context=cdpaas&locale=en | applychaidnode properties | applychaidnode properties
You can use CHAID modeling nodes to generate a CHAID model nugget. The scripting name of this model nugget is applychaidnode. For more information on scripting the modeling node itself, see [chaidnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/chaidnodeslots.htmlchaidnodeslots).
applychaidnode properties
Table 1. applychaidnode properties
applychaidnode Properties Values Property description
calculate_conf flag
display_rule_id flag Adds a field in the scoring output that indicates the ID for the terminal node to which each record is assigned.
calculate_raw_propensities flag
calculate_adjusted_propensities flag
sql_generate Never <br>NoMissingValues <br>MissingValues <br>native When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations.
| # applychaidnode properties #
You can use CHAID modeling nodes to generate a CHAID model nugget\. The scripting name of this model nugget is *applychaidnode*\. For more information on scripting the modeling node itself, see [chaidnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/chaidnodeslots.html#chaidnodeslots)\.
<!-- <table "summary="applychaidnode properties" id="chaidnuggetnodeslots__table_s4j_dk3_cdb" class="defaultstyle" "> -->
applychaidnode properties
Table 1\. applychaidnode properties
| `applychaidnode` Properties | Values | Property description |
| --------------------------------- | ----------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------ |
| `calculate_conf` | *flag* | |
| `display_rule_id` | *flag* | Adds a field in the scoring output that indicates the ID for the terminal node to which each record is assigned\. |
| `calculate_raw_propensities` | *flag* | |
| `calculate_adjusted_propensities` | *flag* | |
| `sql_generate` | `Never` <br>`NoMissingValues` <br>`MissingValues` <br>`native` | When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations\. |
<!-- </table "summary="applychaidnode properties" id="chaidnuggetnodeslots__table_s4j_dk3_cdb" class="defaultstyle" "> -->
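As a sketch, these nugget properties could be set from a script as shown below; it assumes a CHAID model nugget already exists in the flow, and the type string passed to `findByType` is an assumption:

```python
# Runs in the flow's scripting environment, where modeler.script is predefined.
stream = modeler.script.stream()

# Locate an existing CHAID model nugget; the type string is an assumption --
# adjust it to match the nugget in your own flow.
nugget = stream.findByType("applychaidnode", None)

nugget.setPropertyValue("calculate_conf", True)
nugget.setPropertyValue("display_rule_id", True)   # add terminal-node IDs to scores
nugget.setPropertyValue("sql_generate", "Never")   # score in memory rather than in-database
```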
<!-- </article "role="article" "> -->
|
FD45693344E2B3CC3BDB7D1AA209AD9FBACB5309 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/chartnodeslots.html?context=cdpaas&locale=en | dvcharts properties | dvcharts properties
With the Charts node, you can launch the chart builder and create chart definitions to save with your flow. Then when you run the node, chart output is generated.
dvcharts properties
Table 1. dvcharts properties
dvcharts properties Data type Property description
chart_definition list List of chart definitions, including chart type (string), chart name (string), chart template (string), and used fields (list of field names).
| # dvcharts properties #
With the Charts node, you can launch the chart builder and create chart definitions to save with your flow\. Then when you run the node, chart output is generated\.
<!-- <table "summary="dvcharts properties" id="chartsnodeslots__table_x2s_ll3_cdb" class="defaultstyle" "> -->
dvcharts properties
Table 1\. dvcharts properties
| `dvcharts` properties | Data type | Property description |
| --------------------- | --------- | ---------------------------------------------------------------------------------------------------------------------------------------------- |
| `chart_definition` | `list` | List of chart definitions, including chart type (string), chart name (string), chart template (string), and used fields (list of field names)\. |
<!-- </table "summary="dvcharts properties" id="chartsnodeslots__table_x2s_ll3_cdb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
F24C445F7AB9052A92E411B826C60DEE2DF78448 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/collectionnodeslots.html?context=cdpaas&locale=en | collectionnode properties | collectionnode properties
The Collection node shows the distribution of values for one numeric field relative to the values of another. (It creates graphs that are similar to histograms.) It's useful for illustrating a variable or field whose values change over time. Using 3-D graphing, you can also include a symbolic axis displaying distributions by category.
collectionnode properties
Table 1. collectionnode properties
collectionnode properties Data type Property description
over_field field
over_label_auto flag
over_label string
collect_field field
collect_label_auto flag
collect_label string
three_D flag
by_field field
by_label_auto flag
by_label string
operation Sum <br>Mean <br>Min <br>Max <br>SDev
color_field string
panel_field string
animation_field string
range_mode Automatic <br>UserDefined
range_min number
range_max number
bins ByNumber <br>ByWidth
num_bins number
bin_width number
use_grid flag
graph_background color Standard graph colors are described at the beginning of this section.
page_background color Standard graph colors are described at the beginning of this section.
| # collectionnode properties #
The Collection node shows the distribution of values for one numeric field relative to the values of another\. (It creates graphs that are similar to histograms\.) It's useful for illustrating a variable or field whose values change over time\. Using 3\-D graphing, you can also include a symbolic axis displaying distributions by category\.
<!-- <table "summary="collectionnode properties" id="collectionnodeslots__table_x2s_ll3_cdb" class="defaultstyle" "> -->
collectionnode properties
Table 1\. collectionnode properties
| `collectionnode` properties | Data type | Property description |
| --------------------------- | --------------------------- | ---------------------------------------------------------------------- |
| `over_field` | *field* | |
| `over_label_auto` | *flag* | |
| `over_label` | *string* | |
| `collect_field` | *field* | |
| `collect_label_auto` | *flag* | |
| `collect_label` | *string* | |
| `three_D` | *flag* | |
| `by_field` | *field* | |
| `by_label_auto` | *flag* | |
| `by_label` | *string* | |
| `operation` | `Sum` <br>`Mean` <br>`Min` <br>`Max` <br>`SDev` | |
| `color_field` | *string* | |
| `panel_field` | *string* | |
| `animation_field` | *string* | |
| `range_mode` | `Automatic` <br>`UserDefined` | |
| `range_min` | *number* | |
| `range_max` | *number* | |
| `bins` | `ByNumber` <br>`ByWidth` | |
| `num_bins` | *number* | |
| `bin_width` | *number* | |
| `use_grid` | *flag* | |
| `graph_background` | *color* | Standard graph colors are described at the beginning of this section\. |
| `page_background` | *color* | Standard graph colors are described at the beginning of this section\. |
<!-- </table "summary="collectionnode properties" id="collectionnodeslots__table_x2s_ll3_cdb" class="defaultstyle" "> -->
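A minimal sketch of setting these properties from a script; the node type string `"collection"` and the field names are illustrative assumptions:

```python
# Runs in the flow's scripting environment, where modeler.script is predefined.
stream = modeler.script.stream()

# Create a Collection graph node; the type string "collection" is an assumption.
coll = stream.createAt("collection", "Mean Na over Age", 300, 100)

coll.setPropertyValue("over_field", "Age")       # hypothetical fields
coll.setPropertyValue("collect_field", "Na")
coll.setPropertyValue("operation", "Mean")
coll.setPropertyValue("range_mode", "Automatic")
coll.setPropertyValue("use_grid", True)
```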
<!-- </article "role="article" "> -->
|
F1B21B1232720492424BB07CD73C93DF2B9CD229 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/coxregnodeslots.html?context=cdpaas&locale=en | coxregnode properties | coxregnode properties
The Cox regression node enables you to build a survival model for time-to-event data in the presence of censored records. The model produces a survival function that predicts the probability that the event of interest has occurred at a given time (t) for given values of the input variables.
coxregnode properties
Table 1. coxregnode properties
coxregnode Properties Values Property description
survival_time field Cox regression models require a single field containing the survival times.
target field Cox regression models require a single target field, and one or more input fields. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information.
method Enter <br>Stepwise <br>BackwardsStepwise
groups field
model_type MainEffects <br>Custom
custom_terms ["BP*Sex" "BP*Age"]
mode Expert <br>Simple
max_iterations number
p_converge 1.0E-4 <br>1.0E-5 <br>1.0E-6 <br>1.0E-7 <br>1.0E-8 <br>0
l_converge 1.0E-1 <br>1.0E-2 <br>1.0E-3 <br>1.0E-4 <br>1.0E-5 <br>0
removal_criterion LR <br>Wald <br>Conditional
probability_entry number
probability_removal number
output_display EachStep <br>LastStep
ci_enable flag
ci_value 90 <br>95 <br>99
correlation flag
display_baseline flag
survival flag
hazard flag
log_minus_log flag
one_minus_survival flag
separate_line field
value number or string If no value is specified for a field, the default option "Mean" will be used for that field.
| # coxregnode properties #
The Cox regression node enables you to build a survival model for time\-to\-event data in the presence of censored records\. The model produces a survival function that predicts the probability that the event of interest has occurred at a given time (*t*) for given values of the input variables\.
<!-- <table "summary="coxregnode properties" id="coxregnodeslots__table_rhv_fbj_cdb" class="defaultstyle" "> -->
coxregnode properties
Table 1\. coxregnode properties
| `coxregnode` Properties | Values | Property description |
| ----------------------- | ------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------- |
| `survival_time` | *field* | Cox regression models require a single field containing the survival times\. |
| `target` | *field* | Cox regression models require a single target field, and one or more input fields\. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.html#modelingnodeslots_common) for more information\. |
| `method` | `Enter` <br>`Stepwise` <br>`BackwardsStepwise` | |
| `groups` | *field* | |
| `model_type` | `MainEffects` <br>`Custom` | |
| `custom_terms` | \[*"BP\*Sex" "BP\*Age"*\] | |
| `mode` | `Expert` <br>`Simple` | |
| `max_iterations` | *number* | |
| `p_converge` | `1.0E-4` <br>`1.0E-5` <br>`1.0E-6` <br>`1.0E-7` <br>`1.0E-8` <br>`0` | |
| `l_converge` | `1.0E-1` <br>`1.0E-2` <br>`1.0E-3` <br>`1.0E-4` <br>`1.0E-5` <br>`0` | |
| `removal_criterion` | `LR` <br>`Wald` <br>`Conditional` | |
| `probability_entry` | *number* | |
| `probability_removal` | *number* | |
| `output_display` | `EachStep` <br>`LastStep` | |
| `ci_enable` | *flag* | |
| `ci_value` | `90` <br>`95` <br>`99` | |
| `correlation` | *flag* | |
| `display_baseline` | *flag* | |
| `survival` | *flag* | |
| `hazard` | *flag* | |
| `log_minus_log` | *flag* | |
| `one_minus_survival` | *flag* | |
| `separate_line` | *field* | |
| `value` | *number* or *string* | If no value is specified for a field, the default option "Mean" will be used for that field\. |
<!-- </table "summary="coxregnode properties" id="coxregnodeslots__table_rhv_fbj_cdb" class="defaultstyle" "> -->
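A minimal sketch of configuring a Cox regression node from a script; the node type string `"coxreg"` and the field names are illustrative assumptions:

```python
# Runs in the flow's scripting environment, where modeler.script is predefined.
stream = modeler.script.stream()

# Create a Cox regression node; the type string "coxreg" is an assumption.
cox = stream.createAt("coxreg", "Cox demo", 400, 100)

cox.setPropertyValue("survival_time", "tenure")   # hypothetical survival-time field
cox.setPropertyValue("target", "churn")           # hypothetical target field
cox.setPropertyValue("method", "Stepwise")
cox.setPropertyValue("model_type", "MainEffects")
cox.setPropertyValue("ci_enable", True)
cox.setPropertyValue("ci_value", 95)
```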
<!-- </article "role="article" "> -->
|
CEBDC984A6E14E7DC6B7526324BF06A0CE6FFE34 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/coxregnuggetnodeslots.html?context=cdpaas&locale=en | applycoxregnode properties | applycoxregnode properties
You can use Cox modeling nodes to generate a Cox model nugget. The scripting name of this model nugget is applycoxregnode. For more information on scripting the modeling node itself, see [coxregnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/coxregnodeslots.htmlcoxregnodeslots).
applycoxregnode properties
Table 1. applycoxregnode properties
applycoxregnode Properties Values Property description
future_time_as Intervals <br>Fields
time_interval number
num_future_times integer
time_field field
past_survival_time field
all_probabilities flag
cumulative_hazard flag
enable_sql_generation false <br>native When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations.
| # applycoxregnode properties #
You can use Cox modeling nodes to generate a Cox model nugget\. The scripting name of this model nugget is *applycoxregnode*\. For more information on scripting the modeling node itself, see [coxregnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/coxregnodeslots.html#coxregnodeslots)\.
<!-- <table "summary="applycoxregnode properties" id="coxregnuggetnodeslots__table_vcl_gbj_cdb" class="defaultstyle" "> -->
applycoxregnode properties
Table 1\. applycoxregnode properties
| `applycoxregnode` Properties | Values | Property description |
| ---------------------------- | ------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------ |
| `future_time_as` | `Intervals` <br>`Fields` | |
| `time_interval` | *number* | |
| `num_future_times` | *integer* | |
| `time_field` | *field* | |
| `past_survival_time` | *field* | |
| `all_probabilities` | *flag* | |
| `cumulative_hazard` | *flag* | |
| `enable_sql_generation` | `false` <br>`native` | When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations\. |
<!-- </table "summary="applycoxregnode properties" id="coxregnuggetnodeslots__table_vcl_gbj_cdb" class="defaultstyle" "> -->
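As a sketch, these nugget properties could be set from a script as shown below; it assumes a Cox model nugget already exists in the flow, and the type string passed to `findByType` is an assumption:

```python
# Runs in the flow's scripting environment, where modeler.script is predefined.
stream = modeler.script.stream()

# Locate an existing Cox model nugget; the type string is an assumption --
# adjust it to match the nugget in your own flow.
nugget = stream.findByType("applycoxregnode", None)

nugget.setPropertyValue("future_time_as", "Intervals")
nugget.setPropertyValue("time_interval", 12.0)      # score 3 future times, 12 units apart
nugget.setPropertyValue("num_future_times", 3)
nugget.setPropertyValue("cumulative_hazard", True)
```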
<!-- </article "role="article" "> -->
|
7566F3896A5AC6F89F4E7E18DC21B4A6A63864B4 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/cplexnodeslots.html?context=cdpaas&locale=en | cplexoptnode properties | cplexoptnode properties
 The CPLEX Optimization node provides the ability to use complex mathematical (CPLEX) based optimization via an Optimization Programming Language (OPL) model file.
cplexoptnode properties
Table 1. cplexoptnode properties
cplexoptnode properties Data type Property description
opl_model_text string The OPL (Optimization Programming Language) script program that the CPLEX Optimization node will run and then generate the optimization result.
opl_tuple_set_name string The tuple set name in the OPL model that corresponds to the incoming data. This isn't required and is normally not set via script. It should only be used for editing field mappings of a selected data source.
data_input_map List of structured properties The input field mappings for a data source. This isn't required and is normally not set via script. It should only be used for editing field mappings of a selected data source.
md_data_input_map List of structured properties The field mappings between each tuple defined in the OPL, with each corresponding field data source (incoming data). Users can edit them each individually per data source. With this script, you can set the property directly to set all mappings at once. This setting isn't shown in the user interface.<br><br>Each entity in the list is structured data:<br><br>Data Source Tag. The tag of the data source. For example, for 0_Products_Type the tag is 0.<br><br>Data Source Index. The physical sequence (index) of the data source. This is determined by the connection order.<br><br>Source Node. The source node (annotation) of the data source. For example, for 0_Products_Type the source node is Products.<br><br>Connected Node. The prior node (annotation) that connects the current CPLEX optimization node. For example, for 0_Products_Type the connected node is Type.<br><br>Tuple Set Name. The tuple set name of the data source. It must match what's defined in the OPL.<br><br>Tuple Field Name. The tuple set field name of the data source. It must match what's defined in the OPL tuple set definition.<br><br>Storage Type. The field storage type. Possible values are int, float, or string.
Data Field Name. The field name of the data source.<br><br>Example:<br><br>[[0,0,'Product','Type','Products','prod_id_tup','int','prod_id'], [0,0,'Product','Type','Products','prod_name_tup','string','prod_name'], [1,1,'Components','Type','Components','comp_id_tup','int','comp_id'], [1,1,'Components','Type','Components','comp_name_tup','string','comp_name']]
opl_data_text string The definition of some variables or data used for the OPL.
output_value_mode string Possible values are raw or dvar. If dvar is specified, on the Output tab the user must specify the objective function variable name in OPL for the output. If raw is specified, the objective function will be output directly, regardless of name.
decision_variable_name string The objective function variable name defined in the OPL. This is enabled only when the output_value_mode property is set to dvar.
objective_function_value_fieldname string The field name for the objective function value to use in the output. Default is _OBJECTIVE.
output_tuple_set_names string The name of the predefined tuples from the incoming data. This acts as the indexes of the decision variable and is expected to be output with the Variable Outputs. The Output Tuple must be consistent with the decision variable definition in the OPL. If there are multiple indexes, the tuple names must be joined by a comma (,).<br><br>An example for a single tuple is Products, with the corresponding OPL definition being dvar float+ Production[Products];<br><br>An example for multiple tuples is Products,Components, with the corresponding OPL definition being dvar float+ Production[Products][Components];
decision_output_map List of structured properties The field mapping between variables defined in the OPL that will be output and the output fields. Each entity in the list is structured data:<br><br>Variable Name. The variable name in the OPL to output.<br><br>Storage Type. Possible values are int, float, or string.<br><br>Output Field Name. The expected field name in the results (output or export).<br><br>Example:<br><br>[['Production','int','res'],['Remark','string','res_1'],['Cost','float','res_2']]
| # cplexoptnode properties #
 The CPLEX Optimization node provides the ability to use complex mathematical (CPLEX) based optimization via an Optimization Programming Language (OPL) model file\.
<!-- <table "summary="cplexoptnode properties" class="defaultstyle" "> -->
cplexoptnode properties
Table 1\. cplexoptnode properties
| `cplexoptnode` properties | Data type | Property description |
| ------------------------------------ | ------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `opl_model_text` | *string* | The OPL (Optimization Programming Language) script program that the CPLEX Optimization node will run and then generate the optimization result\. |
| `opl_tuple_set_name` | *string* | The tuple set name in the OPL model that corresponds to the incoming data\. This isn't required and is normally not set via script\. It should only be used for editing field mappings of a selected data source\. |
| `data_input_map` | *List of structured properties* | The input field mappings for a data source\. This isn't required and is normally not set via script\. It should only be used for editing field mappings of a selected data source\. |
| `md_data_input_map` | *List of structured properties* | The field mappings between each tuple defined in the OPL, with each corresponding field data source (incoming data)\. Users can edit them each individually per data source\. With this script, you can set the property directly to set all mappings at once\. This setting isn't shown in the user interface\.<br><br>Each entity in the list is structured data:<br><br>Data Source Tag\. The tag of the data source\. For example, for `0_Products_Type` the tag is `0`\.<br><br>Data Source Index\. The physical sequence (index) of the data source\. This is determined by the connection order\.<br><br>Source Node\. The source node (annotation) of the data source\. For example, for `0_Products_Type` the source node is `Products`\.<br><br>Connected Node\. The prior node (annotation) that connects the current CPLEX optimization node\. For example, for `0_Products_Type` the connected node is `Type`\.<br><br>Tuple Set Name\. The tuple set name of the data source\. It must match what's defined in the OPL\.<br><br>Tuple Field Name\. The tuple set field name of the data source\. It must match what's defined in the OPL tuple set definition\.<br><br>Storage Type\. The field storage type\. Possible values are `int`, `float`, or `string`\. |
|  |  | Data Field Name\. The field name of the data source\.<br><br>Example:<br><br>`[[0,0,'Product','Type','Products','prod_id_tup','int','prod_id'], [0,0,'Product','Type','Products','prod_name_tup','string','prod_name'], [1,1,'Components','Type','Components','comp_id_tup','int','comp_id'], [1,1,'Components','Type','Components','comp_name_tup','string','comp_name']]` |
| `opl_data_text` | *string* | The definition of some variables or data used for the OPL\. |
| `output_value_mode` | *string* | Possible values are `raw` or `dvar`\. If `dvar` is specified, on the Output tab the user must specify the objective function variable name in OPL for the output\. If `raw` is specified, the objective function will be output directly, regardless of name\. |
| `decision_variable_name` | *string* | The objective function variable name defined in the OPL\. This is enabled only when the `output_value_mode` property is set to `dvar`\. |
| `objective_function_value_fieldname` | *string* | The field name for the objective function value to use in the output\. Default is `_OBJECTIVE`\. |
| `output_tuple_set_names` | *string* | The name of the predefined tuples from the incoming data\. This acts as the indexes of the decision variable and is expected to be output with the Variable Outputs\. The Output Tuple must be consistent with the decision variable definition in the OPL\. If there are multiple indexes, the tuple names must be joined by a comma (`,`)\.<br><br>An example for a single tuple is `Products`, with the corresponding OPL definition being `dvar float+ Production[Products];`<br><br>An example for multiple tuples is `Products,Components`, with the corresponding OPL definition being `dvar float+ Production[Products][Components];` |
| `decision_output_map` | *List of structured properties* | The field mapping between variables defined in the OPL that will be output and the output fields\. Each entity in the list is structured data:<br><br>Variable Name\. The variable name in the OPL to output\.<br><br>Storage Type\. Possible values are `int`, `float`, or `string`\.<br><br>Output Field Name\. The expected field name in the results (output or export)\.<br><br>Example:<br><br>`[['Production','int','res'],['Remark','string','res_1'],['Cost','float','res_2']]` |
<!-- </table "summary="cplexoptnode properties" class="defaultstyle" "> -->
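A minimal sketch of driving these properties from a script; the node type string `"cplexopt"` and the toy OPL model are illustrative assumptions, not a working optimization model for real data:

```python
# Runs in the flow's scripting environment, where modeler.script is predefined.
stream = modeler.script.stream()

# Create a CPLEX Optimization node; the type string "cplexopt" is an assumption.
cplex = stream.createAt("cplexopt", "CPLEX demo", 500, 100)

# A deliberately tiny OPL model, for illustration only.
cplex.setPropertyValue("opl_model_text",
                       "dvar float+ x;\nmaximize x;\nsubject to { x <= 10; }")
cplex.setPropertyValue("output_value_mode", "raw")  # output the objective value directly
cplex.setPropertyValue("objective_function_value_fieldname", "_OBJECTIVE")
```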
<!-- </article "role="article" "> -->
|
02D819D225558542A49AB6E43F94FE062A509EA5 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/dataassetexportnodeslots.html?context=cdpaas&locale=en | dataassetexport properties | dataassetexport properties
You can use the Data Asset Export node to write to remote data sources using connections, write to a data file on your local computer, or write data to a project.
dataassetexport properties
Table 1. dataassetexport properties
dataassetexport properties Data type Property description
user_settings string Escaped JSON string containing the interaction properties for the connection. Contact IBM for details about available interaction points.<br><br>Example:<br><br>user_settings: "{\"interactionProperties\":{\"write_mode\":\"write\",\"file_name\":\"output.csv\",\"file_format\":\"csv\",\"quote_numerics\":true,\"encoding\":\"utf-8\",\"first_line_header\":true,\"include_types\":false}}"<br><br>Note that these values will change based on the type of connection you're using.
| # dataassetexport properties #
You can use the Data Asset Export node to write to remote data sources using connections, write to a data file on your local computer, or write data to a project\.
<!-- <table "summary="dataassetexport properties" class="defaultstyle" "> -->
dataassetexport properties
Table 1\. dataassetexport properties
| `dataassetexport` properties | Data type | Property description |
| ---------------------------- | --------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `user_settings` | *string* | Escaped JSON string containing the interaction properties for the connection\. Contact IBM for details about available interaction points\.<br><br>Example:<br><br>`user_settings: "{\"interactionProperties\":{\"write_mode\":\"write\",\"file_name\":\"output.csv\",\"file_format\":\"csv\",\"quote_numerics\":true,\"encoding\":\"utf-8\",\"first_line_header\":true,\"include_types\":false}}"`<br><br>Note that these values will change based on the type of connection you're using\. |
<!-- </table "summary="dataassetexport properties" class="defaultstyle" "> -->
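As a sketch, the `user_settings` value from the example above could be set from a script like this; the node type string `"dataassetexport"` is an assumption, and the interaction property keys depend on the connection type:

```python
# Runs in the flow's scripting environment, where modeler.script is predefined.
stream = modeler.script.stream()

# Create a Data Asset Export node; the type string "dataassetexport" is an assumption.
export = stream.createAt("dataassetexport", "Export results", 600, 100)

# Escaped JSON interaction properties (the example value from the table above);
# the exact keys depend on the type of connection you are writing to.
export.setPropertyValue(
    "user_settings",
    "{\"interactionProperties\":{\"write_mode\":\"write\",\"file_name\":\"output.csv\","
    "\"file_format\":\"csv\",\"quote_numerics\":true,\"encoding\":\"utf-8\","
    "\"first_line_header\":true,\"include_types\":false}}")
```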
<!-- </article "role="article" "> -->
|
46915AFE957CA00C5B825C5F2BDC618BFEA43DE8 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/dataassetimportnodeslots.html?context=cdpaas&locale=en | dataassetimport properties | dataassetimport properties
 You can use the Data Asset import node to pull in data from remote data sources using connections or from your local computer.
dataassetimport properties
Table 1. dataassetimport properties
dataassetimport properties Data type Property description
connection_path string Name of the data asset (table) you want to access from a selected connection. The value of this property is: /asset_name or /schema_name/table_name.
| # dataassetimport properties #
 You can use the Data Asset import node to pull in data from remote data sources using connections or from your local computer\.
<!-- <table "summary="dataassetimport properties" class="defaultstyle" "> -->
dataassetimport properties
Table 1\. dataassetimport properties
| `dataassetimport` properties | Data type | Property description |
| ---------------------------- | --------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `connection_path` | string | Name of the data asset (table) you want to access from a selected connection\. The value of this property is: `/asset_name` or `/schema_name/table_name`\. |
<!-- </table "summary="dataassetimport properties" class="defaultstyle" "> -->
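A minimal sketch of setting `connection_path` from a script; the node type string `"dataassetimport"` and the schema/table names are illustrative assumptions:

```python
# Runs in the flow's scripting environment, where modeler.script is predefined.
stream = modeler.script.stream()

# Create a Data Asset import node; the type string "dataassetimport" is an assumption.
asset = stream.createAt("dataassetimport", "My table", 100, 100)

# Path format from the table above: /asset_name or /schema_name/table_name.
asset.setPropertyValue("connection_path", "/GOSALES/PRODUCTS")  # hypothetical schema/table
```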
<!-- </article "role="article" "> -->
|