# Common modeling node properties #

The following properties are common to some or all modeling nodes. Any exceptions are noted in the documentation for individual modeling nodes as appropriate.

Table 1. Common modeling node properties

| Property | Values | Property description |
| --- | --- | --- |
| `custom_fields` | *flag* | If true, allows you to specify target, input, and other fields for the current node. If false, the current settings from an upstream Type node are used. |
| `target` or `targets` | *field* or [*field1 ... fieldN*] | Specifies a single target field or multiple target fields depending on the model type. |
| `inputs` | [*field1 ... fieldN*] | Input or predictor fields used by the model. |
| `partition` | *field* | |
| `use_partitioned_data` | *flag* | If a partition field is defined, this option ensures that only data from the training partition is used to build the model. |
| `use_split_data` | *flag* | |
| `splits` | [*field1 ... fieldN*] | Specifies the field or fields to use for split modeling. Effective only if `use_split_data` is set to `True`. |
| `use_frequency` | *flag* | Weight and frequency fields are used by specific models as noted for each model type. |
| `frequency_field` | *field* | |
| `use_weight` | *flag* | |
| `weight_field` | *field* | |
| `use_model_name` | *flag* | |
| `model_name` | *string* | Custom name for new model. |
| `mode` | `Simple` <br>`Expert` | |
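For example, a minimal scripting sketch that sets several of these common slots on a modeling node. The node type `"neuralnetwork"` and the field names are illustrative, not part of this reference:

```python
# Sketch: set common modeling properties from a flow script.
# The node type and field names ("Drug", "Age", "BP") are illustrative.
stream = modeler.script.stream()

model_node = stream.create("neuralnetwork", "Predict Drug")
model_node.setPropertyValue("custom_fields", True)         # override upstream Type node settings
model_node.setPropertyValue("targets", ["Drug"])           # this node type accepts multiple targets
model_node.setPropertyValue("inputs", ["Age", "BP"])
model_node.setPropertyValue("use_partitioned_data", True)  # train only on the training partition
model_node.setPropertyValue("use_model_name", True)
model_node.setPropertyValue("model_name", "drug_model")
model_node.setPropertyValue("mode", "Expert")
```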
# multilayerperceptronnode properties #

Multilayer perceptron is a classifier based on the feedforward artificial neural network and consists of multiple layers. Each layer is fully connected to the next layer in the network. The MultiLayerPerceptron-AS node in SPSS Modeler is implemented in Spark. For details about the multilayer perceptron classifier (MLPC), see [https://spark.apache.org/docs/latest/ml-classification-regression.html#multilayer-perceptron-classifier](https://spark.apache.org/docs/latest/ml-classification-regression.html#multilayer-perceptron-classifier).

Table 1. multilayerperceptronnode properties

| `multilayerperceptronnode` properties | Data type | Property description |
| --- | --- | --- |
| `custom_fields` | *boolean* | This option tells the node to use field information specified here instead of that given in any upstream Type node(s). After selecting this option, specify the following fields as required. |
| `target` | *field* | One field name for target. |
| `inputs` | *field* | List of the field names for input. |
| `num_hidden_layers` | *string* | Specify the number of hidden layers. Use a comma between multiple hidden layers. |
| `num_output_number` | *string* | Specify the number of output layers. |
| `random_seed` | *integer* | The seed used by the random number generator. |
| `maxiter` | *integer* | Specify the maximum number of iterations to perform. |
| `set_expert` | *boolean* | Select the Expert Mode option in the Model Building section if you want to specify the block size for stacking input data in matrices. |
| `block_size` | *integer* | The block size for stacking input data in matrices. This option can speed up the computation. |
| `use_model_name` | *boolean* | Specify a custom name for the model or use `auto`, which sets the label as the target field. |
| `model_name` | *string* | Custom name for the model. |
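A configuration sketch follows. The node type string `"multilayerperceptron"` is an assumption for illustration (confirm the type name in your environment); the field names are also illustrative:

```python
# Sketch: configure a MultiLayerPerceptron-AS node.
# The node type string and field names are assumptions for illustration.
stream = modeler.script.stream()

mlp = stream.create("multilayerperceptron", "MLP classifier")
mlp.setPropertyValue("custom_fields", True)
mlp.setPropertyValue("target", "label")
mlp.setPropertyValue("inputs", ["f1", "f2", "f3"])
mlp.setPropertyValue("num_hidden_layers", "10,8")  # two hidden layers, comma-separated
mlp.setPropertyValue("num_output_number", "2")
mlp.setPropertyValue("random_seed", 1234)
mlp.setPropertyValue("maxiter", 100)
mlp.setPropertyValue("set_expert", True)           # enables block_size below
mlp.setPropertyValue("block_size", 128)
```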
# multiplotnode properties #

The Multiplot node creates a plot that displays multiple `Y` fields over a single `X` field. The `Y` fields are plotted as colored lines; each is equivalent to a Plot node with Style set to Line and X Mode set to Sort. Multiplots are useful when you want to explore the fluctuation of several variables over time.

Table 1. multiplotnode properties

| `multiplotnode` properties | Data type | Property description |
| --- | --- | --- |
| `x_field` | *field* | |
| `y_fields` | *list* | |
| `panel_field` | *field* | |
| `animation_field` | *field* | |
| `normalize` | *flag* | |
| `use_overlay_expr` | *flag* | |
| `overlay_expression` | *string* | |
| `records_limit` | *number* | |
| `if_over_limit` | `PlotBins` <br>`PlotSample` <br>`PlotAll` | |
| `x_label_auto` | *flag* | |
| `x_label` | *string* | |
| `y_label_auto` | *flag* | |
| `y_label` | *string* | |
| `use_grid` | *flag* | |
| `graph_background` | *color* | Standard graph colors are described at the beginning of this section. |
| `page_background` | *color* | Standard graph colors are described at the beginning of this section. |
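For example, a sketch that plots several series over a single x field. The source node lookup and the field names ("date", "sales", "forecast") are illustrative:

```python
# Sketch: plot multiple Y fields over one X field.
# The source node type and field names are illustrative.
stream = modeler.script.stream()
source = stream.findByType("variablefile", None)

multiplot = stream.create("multiplot", "Sales over time")
stream.link(source, multiplot)
multiplot.setPropertyValue("x_field", "date")
multiplot.setPropertyValue("y_fields", ["sales", "forecast"])
multiplot.setPropertyValue("normalize", True)
multiplot.setPropertyValue("records_limit", 2000)
multiplot.setPropertyValue("if_over_limit", "PlotSample")
```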
# applyocsvmnode properties #

You can use One-Class SVM nodes to generate a One-Class SVM model nugget. The scripting name of this model nugget is *applyocsvmnode*. No other properties exist for this model nugget. For more information on scripting the modeling node itself, see [ocsvmnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/oneclasssvmnodeslots.html#oneclasssvmnodeslots).
# ocsvmnode properties #

The One-Class SVM node uses an unsupervised learning algorithm. The node can be used for novelty detection. It detects the soft boundary of a given set of samples, to then classify new points as belonging to that set or not. This One-Class SVM modeling node in SPSS Modeler is implemented in Python and requires the scikit-learn Python library.

Table 1. ocsvmnode properties

| `ocsvmnode` properties | Data type | Property description |
| --- | --- | --- |
| `custom_fields` | *boolean* | This option tells the node to use field information specified here instead of that given in any upstream Type node(s). After selecting this option, specify the following fields as required. |
| `inputs` | *field* | List of the field names for input. |
| `role_use` | *string* | Specify `predefined` to use predefined roles or `custom` to use custom field assignments. Default is `predefined`. |
| `splits` | *field* | List of the field names for split. |
| `use_partition` | *Boolean* | Specify `true` or `false`. Default is `true`. If set to `true`, only training data will be used when building the model. |
| `mode_type` | *string* | The mode. Possible values are `simple` or `expert`. All parameters on the Expert tab will be disabled if `simple` is specified. |
| `stopping_criteria` | *string* | A string in scientific notation. Possible values are `1.0E-1`, `1.0E-2`, `1.0E-3`, `1.0E-4`, `1.0E-5`, or `1.0E-6`. Default is `1.0E-3`. |
| `precision` | *float* | The regression precision (nu). Bound on the fraction of training errors and support vectors. Specify a number greater than `0` and less than or equal to `1.0`. Default is `0.1`. |
| `kernel` | *string* | The kernel type to use in the algorithm. Possible values are `linear`, `poly`, `rbf`, `sigmoid`, or `precomputed`. Default is `rbf`. |
| `enable_gamma` | *Boolean* | Enables the `gamma` parameter. Specify `true` or `false`. Default is `true`. |
| `gamma` | *float* | This parameter is only enabled for the kernels `rbf`, `poly`, and `sigmoid`. If the `enable_gamma` parameter is set to `false`, this parameter will be set to `auto`. If set to `true`, the default is `0.1`. |
| `coef0` | *float* | Independent term in the kernel function. This parameter is only enabled for the `poly` kernel and the `sigmoid` kernel. Default value is `0.0`. |
| `degree` | *integer* | Degree of the polynomial kernel function. This parameter is only enabled for the `poly` kernel. Specify any integer. Default is `3`. |
| `shrinking` | *Boolean* | Specifies whether to use the shrinking heuristic option. Specify `true` or `false`. Default is `false`. |
| `enable_cache_size` | *Boolean* | Enables the `cache_size` parameter. Specify `true` or `false`. Default is `false`. |
| `cache_size` | *float* | The size of the kernel cache in MB. Default is `200`. |
| `enable_random_seed` | *Boolean* | Enables the `random_seed` parameter. Specify `true` or `false`. Default is `false`. |
| `random_seed` | *integer* | The random number seed to use when shuffling data for probability estimation. Specify any integer. |
| `pc_type` | *string* | The type of the parallel coordinates graphic. Possible options are `independent` or `general`. |
| `lines_amount` | *integer* | Maximum number of lines to include on the graphic. Specify an integer between `1` and `1000`. |
| `lines_fields_custom` | *Boolean* | Enables the `lines_fields` parameter, which allows you to specify custom fields to show in the graph output. If set to `false`, all fields will be shown. If set to `true`, only the fields specified with the `lines_fields` parameter will be shown. For performance reasons, a maximum of 20 fields will be displayed. |
| `lines_fields` | *field* | List of the field names to include on the graphic as vertical axes. |
| `enable_graphic` | *Boolean* | Specify `true` or `false`. Enables graphic output (disable this option if you want to save time and reduce stream file size). |
| `enable_hpo` | *Boolean* | Specify `true` or `false` to enable or disable the HPO options. If set to `true`, Rbfopt will be applied to find the "best" One-Class SVM model automatically, which reaches the target objective value defined by the user with the following `target_objval` parameter. |
| `target_objval` | *float* | The objective function value (error rate of the model on the samples) we want to reach (for example, the value of the unknown optimum). Set this parameter to the appropriate value if the optimum is unknown (for example, `0.01`). |
| `max_iterations` | *integer* | Maximum number of iterations for trying the model. Default is `1000`. |
| `max_evaluations` | *integer* | Maximum number of function evaluations for trying the model, where the focus is accuracy over speed. Default is `300`. |
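A sketch of an expert-mode configuration follows. The node type string `"ocsvm"` is an assumption for illustration; the property names come from the table above:

```python
# Sketch: expert-mode One-Class SVM for novelty detection.
# The node type string "ocsvm" and the field names are assumptions.
stream = modeler.script.stream()

ocsvm = stream.create("ocsvm", "Novelty detection")
ocsvm.setPropertyValue("role_use", "custom")
ocsvm.setPropertyValue("inputs", ["f1", "f2"])
ocsvm.setPropertyValue("mode_type", "expert")
ocsvm.setPropertyValue("stopping_criteria", "1.0E-4")
ocsvm.setPropertyValue("precision", 0.2)    # nu: bound on the fraction of training errors
ocsvm.setPropertyValue("kernel", "rbf")
ocsvm.setPropertyValue("enable_gamma", True)
ocsvm.setPropertyValue("gamma", 0.1)
```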
# Output node properties #

Refer to this section for a list of available properties for Output nodes.

Output node properties differ slightly from those of other node types. Rather than referring to a particular node option, output node properties store a reference to the output object. This can be useful in taking a value from a table and then setting it as a flow parameter.
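For example, a sketch of that pattern, assuming a flow that contains a Table node. The `getValueAt(row, column)` accessor on the output object is an assumption for illustration, so check the methods available on the output object you actually receive:

```python
# Sketch: run a Table node, capture its output object, and copy a cell
# value into a flow parameter. getValueAt(row, column) is an assumed accessor.
stream = modeler.script.stream()
tablenode = stream.findByType("table", None)

results = []              # run() fills this list with output objects
tablenode.run(results)
tableoutput = results[0]  # reference to the table output object

value = tableoutput.getValueAt(0, 0)          # first row, first column (assumed)
stream.setParameterValue("first_cell", value)
```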
# partitionnode properties #

The Partition node generates a partition field, which splits the data into separate subsets for the training, testing, and validation stages of model building.

Table 1. partitionnode properties

| `partitionnode` properties | Data type | Property description |
| --- | --- | --- |
| `new_name` | *string* | Name of the partition field generated by the node. |
| `create_validation` | *flag* | Specifies whether a validation partition should be created. |
| `training_size` | *integer* | Percentage of records (0–100) to be allocated to the training partition. |
| `testing_size` | *integer* | Percentage of records (0–100) to be allocated to the testing partition. |
| `validation_size` | *integer* | Percentage of records (0–100) to be allocated to the validation partition. Ignored if a validation partition is not created. |
| `training_label` | *string* | Label for the training partition. |
| `testing_label` | *string* | Label for the testing partition. |
| `validation_label` | *string* | Label for the validation partition. Ignored if a validation partition is not created. |
| `value_mode` | `System` <br>`SystemAndLabel` <br>`Label` | Specifies the values used to represent each partition in the data. For example, the training sample can be represented by the system integer `1`, the label `Training`, or a combination of the two, `1_Training`. |
| `set_random_seed` | *Boolean* | Specifies whether a user-specified random seed should be used. |
| `random_seed` | *integer* | A user-specified random seed value. For this value to be used, `set_random_seed` must be set to `True`. |
| `enable_sql_generation` | *Boolean* | Specifies whether to use SQL pushback to assign records to partitions. |
| `unique_field` | *field* | Specifies the input field used to ensure that records are assigned to partitions in a random but repeatable way. For this value to be used, `enable_sql_generation` must be set to `True`. |
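For example, a sketch of a repeatable 60/20/20 train/test/validation split:

```python
# Sketch: 60/20/20 train/test/validation split with a fixed seed
# so the same records land in the same partitions on every run.
stream = modeler.script.stream()

part = stream.create("partition", "Partition")
part.setPropertyValue("create_validation", True)
part.setPropertyValue("training_size", 60)
part.setPropertyValue("testing_size", 20)
part.setPropertyValue("validation_size", 20)
part.setPropertyValue("value_mode", "SystemAndLabel")  # values such as "1_Training"
part.setPropertyValue("set_random_seed", True)
part.setPropertyValue("random_seed", 9191972)
```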
# plotnode properties #

The Plot node shows the relationship between numeric fields. You can create a plot by using points (a scatterplot) or lines.

Table 1. plotnode properties

| `plotnode` properties | Data type | Property description |
| --- | --- | --- |
| `x_field` | *field* | Field to display on the *x* axis. |
| `y_field` | *field* | Field to display on the *y* axis. |
| `three_D` | *flag* | Displays the plot in 3-D. Requires a *z* field. |
| `z_field` | *field* | |
| `color_field` | *field* | Overlay field. |
| `size_field` | *field* | |
| `shape_field` | *field* | |
| `panel_field` | *field* | Specifies a nominal or flag field for use in making a separate chart for each category. Charts are paneled together in one output window. |
| `animation_field` | *field* | Specifies a nominal or flag field for illustrating data value categories by creating a series of charts displayed in sequence using animation. |
| `transp_field` | *field* | Specifies a field for illustrating data value categories by using a different level of transparency for each category. Not available for line plots. |
| `overlay_type` | `None` <br>`Smoother` <br>`Function` | Specifies whether an overlay function or LOESS smoother is displayed. |
| `overlay_expression` | *string* | Specifies the expression used when `overlay_type` is set to `Function`. |
| `style` | `Point` <br>`Line` | |
| `point_type` | `Rectangle`, `Dot`, `Triangle`, `Hexagon`, `Plus`, `Pentagon`, `Star`, `BowTie`, `HorizontalDash`, `VerticalDash`, `IronCross`, `Factory`, `House`, `Cathedral`, `OnionDome`, `ConcaveTriangle`, `OblateGlobe`, `CatEye`, `FourSidedPillow`, `RoundRectangle`, `Fan` | |
| `x_mode` | `Sort` <br>`Overlay` <br>`AsRead` | |
| `x_range_mode` | `Automatic` <br>`UserDefined` | |
| `x_range_min` | *number* | |
| `x_range_max` | *number* | |
| `y_range_mode` | `Automatic` <br>`UserDefined` | |
| `y_range_min` | *number* | |
| `y_range_max` | *number* | |
| `z_range_mode` | `Automatic` <br>`UserDefined` | |
| `z_range_min` | *number* | |
| `z_range_max` | *number* | |
| `jitter` | *flag* | |
| `records_limit` | *number* | |
| `if_over_limit` | `PlotBins` <br>`PlotSample` <br>`PlotAll` | |
| `x_label_auto` | *flag* | |
| `x_label` | *string* | Specifies a custom label for the *x* axis. |
| `y_label_auto` | *flag* | |
| `y_label` | *string* | Specifies a custom label for the *y* axis. |
| `z_label_auto` | *flag* | |
| `z_label` | *string* | Specifies a custom label for the *z* axis. |
| `use_grid` | *flag* | |
| `graph_background` | *color* | Standard graph colors are described at the beginning of this section. |
| `page_background` | *color* | Standard graph colors are described at the beginning of this section. |
| `use_overlay_expr` | *flag* | Deprecated in favor of `overlay_type`. |
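For example, a sketch of a scatterplot with a LOESS smoother and a color overlay. The field names are illustrative:

```python
# Sketch: scatterplot of income against age with a LOESS smoother
# and a categorical color overlay. Field names are illustrative.
stream = modeler.script.stream()

plot = stream.create("plot", "Income vs age")
plot.setPropertyValue("x_field", "age")
plot.setPropertyValue("y_field", "income")
plot.setPropertyValue("color_field", "gender")
plot.setPropertyValue("style", "Point")
plot.setPropertyValue("point_type", "Dot")
plot.setPropertyValue("overlay_type", "Smoother")
plot.setPropertyValue("records_limit", 5000)
plot.setPropertyValue("if_over_limit", "PlotBins")
```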
# applylinearasnode properties #

You can use Linear-AS modeling nodes to generate a Linear-AS model nugget. The scripting name of this model nugget is *applylinearasnode*. For more information on scripting the modeling node itself, see [linearasnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/properties/linearASslots.html#linearASslots).

Table 1. applylinearasnode Properties

| `applylinearasnode` Property | Values | Property description |
| --- | --- | --- |
| `enable_sql_generation` | `false` <br>`native` | When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations. |
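A minimal sketch follows, assuming `"applylinearas"` as the nugget's node type string (an assumption; confirm it in your environment) and that the keyword values from the table are passed as strings:

```python
# Sketch: locate a Linear-AS nugget in the flow and disable SQL pushback.
# The type string "applylinearas" is an assumption for illustration.
stream = modeler.script.stream()
nugget = stream.findByType("applylinearas", None)
nugget.setPropertyValue("enable_sql_generation", "false")  # keyword value from the table
```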
# applylinearnode properties #

Linear modeling nodes can be used to generate a Linear model nugget. The scripting name of this model nugget is *applylinearnode*. For more information on scripting the modeling node itself, see [linearnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/properties/linearslots.html#linearslots).

Table 1. applylinearnode Properties

| `applylinearnode` Properties | Values | Property description |
| --- | --- | --- |
| `use_custom_name` | *flag* | |
| `custom_name` | *string* | |
| `enable_sql_generation` | `udf` <br>`native` <br>`puresql` | Used to set SQL generation options during flow execution. The options are to push back to the database and score using an SPSS Modeler Server scoring adapter (if connected to a database with a scoring adapter installed), to score within SPSS Modeler, or to push back to the database and score using SQL. The default value is `udf`. |
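For example, a sketch that switches a Linear nugget to score within SPSS Modeler rather than via a scoring adapter. The type string `"applylinear"` is an assumption for illustration:

```python
# Sketch: configure a Linear nugget to score within SPSS Modeler.
# The type string "applylinear" is an assumption for illustration.
stream = modeler.script.stream()
nugget = stream.findByType("applylinear", None)
nugget.setPropertyValue("enable_sql_generation", "native")  # score within SPSS Modeler
nugget.setPropertyValue("use_custom_name", True)
nugget.setPropertyValue("custom_name", "linear scorer")
```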
# applyneuralnetworknode properties #

You can use Neural Network modeling nodes to generate a Neural Network model nugget. The scripting name of this model nugget is *applyneuralnetworknode*. For more information on scripting the modeling node itself, see [neuralnetworknode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/properties/neuralnetworkslots.html#neuralnetworkslots).

Table 1. applyneuralnetworknode properties

| `applyneuralnetworknode` Properties | Values | Property description |
| --- | --- | --- |
| `use_custom_name` | *flag* | |
| `custom_name` | *string* | |
| `confidence` | `onProbability` <br>`onIncrease` | |
| `score_category_probabilities` | *flag* | |
| `max_categories` | *number* | |
| `score_propensity` | *flag* | |
| `enable_sql_generation` | `false` <br>`true` <br>`native` | When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations. |
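A sketch of configuring scoring options on the nugget follows. The type string `"applyneuralnetwork"` is an assumption for illustration:

```python
# Sketch: configure scoring options on a Neural Network nugget.
# The type string "applyneuralnetwork" is an assumption for illustration.
stream = modeler.script.stream()
nugget = stream.findByType("applyneuralnetwork", None)
nugget.setPropertyValue("confidence", "onProbability")
nugget.setPropertyValue("score_category_probabilities", True)
nugget.setPropertyValue("max_categories", 5)
nugget.setPropertyValue("score_propensity", True)
```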
# linearasnode properties #

Linear regression models predict a continuous target based on linear relationships between the target and one or more predictors.

Table 1. linearasnode properties

| `linearasnode` Properties | Values | Property description |
| --- | --- | --- |
| `target` | *field* | Specifies a single target field. |
| `inputs` | [*field1 ... fieldN*] | Predictor fields used by the model. |
| `weight_field` | *field* | Analysis field used by the model. |
| `custom_fields` | *flag* | The default value is `TRUE`. |
| `intercept` | *flag* | The default value is `TRUE`. |
| `detect_2way_interaction` | *flag* | Whether to consider two-way interactions. The default value is `TRUE`. |
| `cin` | *number* | The confidence interval used to compute estimates of the model coefficients. Specify a value greater than 0 and less than 100. The default value is `95`. |
| `factor_order` | `ascending` <br>`descending` | The sort order for categorical predictors. The default value is `ascending`. |
| `var_select_method` | `ForwardStepwise` <br>`BestSubsets` <br>`none` | The model selection method to use. The default value is `ForwardStepwise`. |
| `criteria_for_forward_stepwise` | `AICC` <br>`Fstatistics` <br>`AdjustedRSquare` <br>`ASE` | The statistic used to determine whether an effect should be added to or removed from the model. The default value is `AdjustedRSquare`. |
| `pin` | *number* | The effect that has the smallest p-value less than this specified `pin` threshold is added to the model. The default value is `0.05`. |
| `pout` | *number* | Any effects in the model with a p-value greater than this specified `pout` threshold are removed. The default value is `0.10`. |
| `use_custom_max_effects` | *flag* | Whether to use a maximum number of effects in the final model. The default value is `FALSE`. |
| `max_effects` | *number* | Maximum number of effects to use in the final model. The default value is `1`. |
| `use_custom_max_steps` | *flag* | Whether to use a maximum number of steps. The default value is `FALSE`. |
| `max_steps` | *number* | The maximum number of steps before the stepwise algorithm stops. The default value is `1`. |
| `criteria_for_best_subsets` | `AICC` <br>`AdjustedRSquare` <br>`ASE` | The criterion to use. The default value is `AdjustedRSquare`. |
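For example, a sketch of a forward-stepwise Linear-AS model with F-statistic entry and removal thresholds. The node type string `"linearas"` and the field names are assumptions for illustration:

```python
# Sketch: forward-stepwise Linear-AS model.
# The node type string "linearas" and field names are assumptions.
stream = modeler.script.stream()

lin = stream.create("linearas", "Linear-AS")
lin.setPropertyValue("target", "sales")
lin.setPropertyValue("inputs", ["price", "promo", "region"])
lin.setPropertyValue("detect_2way_interaction", True)
lin.setPropertyValue("var_select_method", "ForwardStepwise")
lin.setPropertyValue("criteria_for_forward_stepwise", "Fstatistics")
lin.setPropertyValue("pin", 0.05)   # add effects with p-values below this threshold
lin.setPropertyValue("pout", 0.10)  # remove effects with p-values above this threshold
```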
# linearnode properties #

Linear regression models predict a continuous target based on linear relationships between the target and one or more predictors.

Table 1. linearnode properties

| `linearnode` Properties | Values | Property description |
| --- | --- | --- |
| `target` | *field* | Specifies a single target field. |
| `inputs` | [*field1 ... fieldN*] | Predictor fields used by the model. |
| `continue_training_existing_model` | *flag* | |
| `objective` | `Standard` <br>`Bagging` <br>`Boosting` <br>`psm` | `psm` is used for very large datasets, and requires a server connection. |
| `use_auto_data_preparation` | *flag* | |
| `confidence_level` | *number* | The confidence interval used to compute estimates of the model coefficients. Specify a value greater than 0 and less than 100. The default is `95`. |
| `model_selection` | `ForwardStepwise` <br>`BestSubsets` <br>`None` | |
| `criteria_forward_stepwise` | `AICC` <br>`Fstatistics` <br>`AdjustedRSquare` <br>`ASE` | |
| `probability_entry` | *number* | If F Statistics is chosen as the criterion, then at each step the effect that has the smallest p-value less than the specified threshold is added to the model (include effects with p-values less than). The default is `0.05`. |
| `probability_removal` | *number* | Any effects in the model with a p-value greater than the specified threshold are removed (remove effects with p-values greater than). The default is `0.10`. |
| `use_max_effects` | *flag* | |
| `max_effects` | *number* | |
| `use_max_steps` | *flag* | |
| `max_steps` | *number* | |
| `criteria_best_subsets` | `AICC` <br>`AdjustedRSquare` <br>`ASE` | |
| `combining_rule_continuous` | `Mean` <br>`Median` | |
| `component_models_n` | *number* | |
| `use_random_seed` | *flag* | |
| `random_seed` | *number* | |
| `use_custom_model_name` | *flag* | |
| `custom_model_name` | *string* | |
| `use_custom_name` | *flag* | |
| `custom_name` | *string* | |
| `tooltip` | *string* | |
| `keywords` | *string* | |
| `annotation` | *string* | |
| `perform_model_effect_tests` | *boolean* | Perform model effect tests for each regression effect. |
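For example, a sketch of a boosted Linear model with best-subsets selection. The field names are illustrative:

```python
# Sketch: boosted Linear model with best-subsets selection.
# Field names are illustrative.
stream = modeler.script.stream()

lin = stream.create("linear", "Linear model")
lin.setPropertyValue("target", "claim_amount")
lin.setPropertyValue("inputs", ["age", "vehicle_group", "ncd"])
lin.setPropertyValue("objective", "Boosting")
lin.setPropertyValue("component_models_n", 10)        # ensemble size for boosting
lin.setPropertyValue("model_selection", "BestSubsets")
lin.setPropertyValue("criteria_best_subsets", "AICC")
lin.setPropertyValue("confidence_level", 95)
```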
# neuralnetworknode properties #

The Neural Net node uses a simplified model of the way the human brain processes information. It works by simulating a large number of interconnected simple processing units that resemble abstract versions of neurons. Neural networks are powerful general function estimators and require minimal statistical or mathematical knowledge to train or apply.

Table 1. neuralnetworknode properties

| `neuralnetworknode` Properties | Values | Property description |
| --- | --- | --- |
| `targets` | [*field1 ... fieldN*] | Specifies target fields. |
| `inputs` | [*field1 ... fieldN*] | Predictor fields used by the model. |
| `splits` | [*field1 ... fieldN*] | Specifies the field or fields to use for split modeling. |
| `use_partition` | *flag* | If a partition field is defined, this option ensures that only data from the training partition is used to build the model. |
| `continue` | *flag* | Continue training existing model. |
| `objective` | `Standard` <br>`Bagging` <br>`Boosting` <br>`psm` | `psm` is used for very large datasets, and requires a server connection. |
| `method` | `MultilayerPerceptron` <br>`RadialBasisFunction` | |
| `use_custom_layers` | *flag* | |
| `first_layer_units` | *number* | |
| `second_layer_units` | *number* | |
| `use_max_time` | *flag* | |
| `max_time` | *number* | |
| `use_max_cycles` | *flag* | |
| `max_cycles` | *number* | |
| `use_min_accuracy` | *flag* | |
| `min_accuracy` | *number* | |
| `combining_rule_categorical` | `Voting` <br>`HighestProbability` <br>`HighestMeanProbability` | |
| `combining_rule_continuous` | `Mean` <br>`Median` | |
| `component_models_n` | *number* | |
| `overfit_prevention_pct` | *number* | |
| `use_random_seed` | *flag* | |
| `random_seed` | *number* | |
| `missing_values` | `listwiseDeletion` <br>`missingValueImputation` | |
| `use_model_name` | *boolean* | |
| `model_name` | *string* | |
| `confidence` | `onProbability` <br>`onIncrease` | |
| `score_category_probabilities` | *flag* | |
| `max_categories` | *number* | |
| `score_propensity` | *flag* | |
| `use_custom_name` | *flag* | |
| `custom_name` | *string* | |
| `tooltip` | *string* | |
| `keywords` | *string* | |
| `annotation` | *string* | |
| `calculate_variable_importance` | *boolean* | For models that produce an appropriate measure of importance, you can display a chart that indicates the relative importance of each predictor in estimating the model. Typically, you'll want to focus your modeling efforts on the predictors that matter most, and consider dropping or ignoring those that matter least. |
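For example, a sketch of a multilayer perceptron with custom layer sizes and a training-time limit. The field names are illustrative, and the unit of `max_time` (minutes) is an assumption:

```python
# Sketch: multilayer perceptron with custom layers and a stopping rule.
# Field names are illustrative; max_time is assumed to be in minutes.
stream = modeler.script.stream()

net = stream.create("neuralnetwork", "Neural net")
net.setPropertyValue("targets", ["Drug"])
net.setPropertyValue("inputs", ["Age", "Na", "K"])
net.setPropertyValue("method", "MultilayerPerceptron")
net.setPropertyValue("use_custom_layers", True)
net.setPropertyValue("first_layer_units", 20)
net.setPropertyValue("second_layer_units", 10)
net.setPropertyValue("use_max_time", True)
net.setPropertyValue("max_time", 15)
net.setPropertyValue("missing_values", "missingValueImputation")
net.setPropertyValue("calculate_variable_importance", True)
```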
# Export node properties #

Refer to this section for a list of available properties for Export nodes.
# Field Operations node properties #

Refer to this section for a list of available properties for Field Operations nodes.
# Graph node properties #

Refer to this section for a list of available properties for Graph nodes.
9F78EEC8E37DB19F2C3220F8E43029B2C5370B5D | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/properties_modelingnodes_container.html?context=cdpaas&locale=en | Modeling node properties | Modeling node properties
Refer to this section for a list of available properties for Modeling nodes.
| # Modeling node properties #
Refer to this section for a list of available properties for Modeling nodes\.
<!-- </article "role="article" "> -->
|
F650943069620AA0BD7652DF1ABDCE2C076DE464 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/properties_pythonnodes_container.html?context=cdpaas&locale=en | Python node properties | Python node properties
Refer to this section for a list of available properties for Python nodes.
| # Python node properties #
Refer to this section for a list of available properties for Python nodes\.
<!-- </article "role="article" "> -->
|
8CE361C94FAB69503049EA703FD6D5A53CD81057 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/properties_recordopnodes_container.html?context=cdpaas&locale=en | Record Operations node properties | Record Operations node properties
Refer to this section for a list of available properties for Record Operations nodes.
| # Record Operations node properties #
Refer to this section for a list of available properties for Record Operations nodes\.
<!-- </article "role="article" "> -->
|
179BDEFA68B788A2C197F0094C43979D9265BA77 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/properties_sourcenodes_container.html?context=cdpaas&locale=en | Data Asset Import node properties | Data Asset Import node properties
Refer to this section for a list of available properties for Import nodes.
| # Data Asset Import node properties #
Refer to this section for a list of available properties for Import nodes\.
<!-- </article "role="article" "> -->
|
F585DF82F7A94309AF9FB51196F188B4FA212118 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/properties_sparknodes_container.html?context=cdpaas&locale=en | Spark node properties | Spark node properties
Refer to this section for a list of available properties for Spark nodes.
| # Spark node properties #
Refer to this section for a list of available properties for Spark nodes\.
<!-- </article "role="article" "> -->
|
C1CA39FF2C12CC12697E62A37C7C52A256248AF7 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/questnodeslots.html?context=cdpaas&locale=en | questnode properties | questnode properties
The Quest node provides a binary classification method for building decision trees, designed to reduce the processing time required for large C&R Tree analyses while also reducing the tendency found in classification tree methods to favor inputs that allow more splits. Input fields can be numeric ranges (continuous), but the target field must be categorical. All splits are binary.
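Example (a minimal sketch; the field names and property values are illustrative):
node = stream.create("quest", "My node")
node.setPropertyValue("target", "Drug")
node.setPropertyValue("inputs", ["Age", "BP", "Cholesterol"])
node.setPropertyValue("use_max_depth", "Custom")
node.setPropertyValue("max_depth", 5)
node.setPropertyValue("prune_tree", True)  # prune the tree to avoid overfitting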
questnode properties
Table 1. questnode properties
questnode Properties Values Property description
target field Quest models require a single target and one or more input fields. A frequency field can also be specified. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.html) for more information.
continue_training_existing_model flag
objective StandardBoostingBaggingpsm psm is used for very large datasets, and requires a server connection.
model_output_type SingleInteractiveBuilder
use_tree_directives flag
tree_directives string
use_max_depth DefaultCustom
max_depth integer Maximum tree depth, from 0 to 1000. Used only if use_max_depth = Custom.
prune_tree flag Prune tree to avoid overfitting.
use_std_err flag Use maximum difference in risk (in Standard Errors).
std_err_multiplier number Maximum difference.
max_surrogates number Maximum surrogates.
use_percentage flag
min_parent_records_pc number
min_child_records_pc number
min_parent_records_abs number
min_child_records_abs number
use_costs flag
costs structured Structured property.
priors DataEqualCustom
custom_priors structured Structured property.
adjust_priors flag
trails number Number of component models for boosting or bagging.
set_ensemble_method VotingHighestProbabilityHighestMeanProbability Default combining rule for categorical targets.
range_ensemble_method MeanMedian Default combining rule for continuous targets.
large_boost flag Apply boosting to very large data sets.
split_alpha number Significance level for splitting.
train_pct number Overfit prevention set.
set_random_seed flag Replicate results option.
seed number
calculate_variable_importance flag
calculate_raw_propensities flag
calculate_adjusted_propensities flag
adjusted_propensity_partition TestValidation
| # questnode properties #
The Quest node provides a binary classification method for building decision trees, designed to reduce the processing time required for large C&R Tree analyses while also reducing the tendency found in classification tree methods to favor inputs that allow more splits\. Input fields can be numeric ranges (continuous), but the target field must be categorical\. All splits are binary\.
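Example (a minimal sketch; the field names and property values are illustrative):
node = stream.create("quest", "My node")
node.setPropertyValue("target", "Drug")
node.setPropertyValue("inputs", ["Age", "BP", "Cholesterol"])
node.setPropertyValue("use_max_depth", "Custom")
node.setPropertyValue("max_depth", 5)
node.setPropertyValue("prune_tree", True)  # prune the tree to avoid overfitting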
<!-- <table "summary="questnode properties" id="questnodeslots__table_oj3_d2j_cdb" class="defaultstyle" "> -->
questnode properties
Table 1\. questnode properties
| `questnode` Properties | Values | Property description |
| ---------------------------------- | ---------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `target` | *field* | Quest models require a single target and one or more input fields\. A frequency field can also be specified\. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.html) for more information\. |
| `continue_training_existing_model` | *flag* | |
| `objective` | `Standard``Boosting``Bagging``psm` | `psm` is used for very large datasets, and requires a server connection\. |
| `model_output_type` | `Single``InteractiveBuilder` | |
| `use_tree_directives` | *flag* | |
| `tree_directives` | *string* | |
| `use_max_depth` | `Default``Custom` | |
| `max_depth` | *integer* | Maximum tree depth, from 0 to 1000\. Used only if `use_max_depth = Custom`\. |
| `prune_tree` | *flag* | Prune tree to avoid overfitting\. |
| `use_std_err` | *flag* | Use maximum difference in risk (in Standard Errors)\. |
| `std_err_multiplier` | *number* | Maximum difference\. |
| `max_surrogates` | *number* | Maximum surrogates\. |
| `use_percentage` | *flag* | |
| `min_parent_records_pc` | *number* | |
| `min_child_records_pc` | *number* | |
| `min_parent_records_abs` | *number* | |
| `min_child_records_abs` | *number* | |
| `use_costs` | *flag* | |
| `costs` | *structured* | Structured property\. |
| `priors` | `Data``Equal``Custom` | |
| `custom_priors` | *structured* | Structured property\. |
| `adjust_priors` | *flag* | |
| `trails` | *number* | Number of component models for boosting or bagging\. |
| `set_ensemble_method` | `Voting``HighestProbability``HighestMeanProbability` | Default combining rule for categorical targets\. |
| `range_ensemble_method` | `Mean``Median` | Default combining rule for continuous targets\. |
| `large_boost` | *flag* | Apply boosting to very large data sets\. |
| `split_alpha` | *number* | Significance level for splitting\. |
| `train_pct` | *number* | Overfit prevention set\. |
| `set_random_seed` | *flag* | Replicate results option\. |
| `seed` | *number* | |
| `calculate_variable_importance` | *flag* | |
| `calculate_raw_propensities` | *flag* | |
| `calculate_adjusted_propensities` | *flag* | |
| `adjusted_propensity_partition` | `Test``Validation` | |
<!-- </table "summary="questnode properties" id="questnodeslots__table_oj3_d2j_cdb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
2B2899A3878E20A4B73B0F11CFC4FD815A81E13F | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/questnuggetnodeslots.html?context=cdpaas&locale=en | applyquestnode properties | applyquestnode properties
You can use QUEST modeling nodes to generate a QUEST model nugget. The scripting name of this model nugget is applyquestnode. For more information on scripting the modeling node itself, see [questnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/questnodeslots.html).
applyquestnode properties
Table 1. applyquestnode properties
applyquestnode Properties Values Property description
sql_generate Never NoMissingValues MissingValues native When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations.
calculate_conf flag
display_rule_id flag Adds a field in the scoring output that indicates the ID for the terminal node to which each record is assigned.
calculate_raw_propensities flag
calculate_adjusted_propensities flag
| # applyquestnode properties #
You can use QUEST modeling nodes to generate a QUEST model nugget\. The scripting name of this model nugget is *applyquestnode*\. For more information on scripting the modeling node itself, see [questnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/questnodeslots.html)\.
<!-- <table "summary="applyquestnode properties" id="questnuggetnodeslots__table_wcx_d2j_cdb" class="defaultstyle" "> -->
applyquestnode properties
Table 1\. applyquestnode properties
| `applyquestnode` Properties | Values | Property description |
| --------------------------------- | ----------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------ |
| `sql_generate` | `Never` <br>`NoMissingValues` <br>`MissingValues` <br>`native` | When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations\. |
| `calculate_conf` | *flag* | |
| `display_rule_id` | *flag* | Adds a field in the scoring output that indicates the ID for the terminal node to which each record is assigned\. |
| `calculate_raw_propensities` | *flag* | |
| `calculate_adjusted_propensities` | *flag* | |
<!-- </table "summary="applyquestnode properties" id="questnuggetnodeslots__table_wcx_d2j_cdb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
19AE3ADCF2DA2FFE5186553229FEF07CB2B55043 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/rangepredictornodeslots.html?context=cdpaas&locale=en | autonumericnode properties | autonumericnode properties
The Auto Numeric node estimates and compares models for continuous numeric range outcomes using a number of different methods. The node works in the same manner as the Auto Classifier node, allowing you to choose the algorithms to use and to experiment with multiple combinations of options in a single modeling pass. Supported algorithms include neural networks, C&R Tree, CHAID, linear regression, generalized linear regression, and support vector machines (SVM). Models can be compared based on correlation, relative error, or number of variables used.
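Example (a minimal sketch; the target and input field names are illustrative):
node = stream.create("autonumeric", "My node")
node.setPropertyValue("custom_fields", True)
node.setPropertyValue("target", "taxable_value")
node.setPropertyValue("inputs", ["sale_value", "time_since_sale"])
node.setPropertyValue("ranking_measure", "Correlation")
node.setPropertyValue("number_of_models", 3)  # keep the best three models in the nugget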
autonumericnode properties
Table 1. autonumericnode properties
autonumericnode Properties Values Property description
custom_fields flag If True, custom field settings will be used instead of type node settings.
target field The Auto Numeric node requires a single target and one or more input fields. Weight and frequency fields can also be specified. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information.
inputs [field1 … field2]
partition field
use_frequency flag
frequency_field field
use_weight flag
weight_field field
use_partitioned_data flag If a partition field is defined, only the training data is used for model building.
ranking_measure CorrelationNumberOfFields
ranking_dataset TestTraining
number_of_models integer Number of models to include in the model nugget. Specify an integer between 1 and 100.
calculate_variable_importance flag
enable_correlation_limit flag
correlation_limit integer
enable_number_of_fields_limit flag
number_of_fields_limit integer
enable_relative_error_limit flag
relative_error_limit integer
enable_model_build_time_limit flag
model_build_time_limit integer
enable_stop_after_time_limit flag
stop_after_time_limit integer
stop_if_valid_model flag
<algorithm> flag Enables or disables the use of a specific algorithm.
<algorithm>.<property> string Sets a property value for a specific algorithm. See [Setting algorithm properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/factorymodeling_algorithmproperties.htmlfactorymodeling_algorithmproperties) for more information.
use_cross_validation boolean Instead of using a single partition, a cross validation partition is used.
number_of_folds integer N fold parameter for cross validation, with range from 3 to 10.
set_random_seed boolean Setting a random seed allows you to replicate analyses. Specify an integer or click Generate, which will create a pseudo-random integer between 1 and 2147483647, inclusive. By default, analyses are replicated with seed 229176228.
random_seed integer Random seed
filter_individual_model_output boolean Removes from the output all of the additional fields generated by the individual models that feed into the Ensemble node. Select this option if you're interested only in the combined score from all of the input models. Ensure that this option is deselected if, for example, you want to use an Analysis node or Evaluation node to compare the accuracy of the combined score with that of each of the individual input models.
calculate_standard_error boolean For a continuous (numeric range) target, a standard error calculation runs by default to calculate the difference between the measured or estimated values and the true values, and to show how closely those estimates match.
| # autonumericnode properties #
The Auto Numeric node estimates and compares models for continuous numeric range outcomes using a number of different methods\. The node works in the same manner as the Auto Classifier node, allowing you to choose the algorithms to use and to experiment with multiple combinations of options in a single modeling pass\. Supported algorithms include neural networks, C&R Tree, CHAID, linear regression, generalized linear regression, and support vector machines (SVM)\. Models can be compared based on correlation, relative error, or number of variables used\.
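Example (a minimal sketch; the target and input field names are illustrative):
node = stream.create("autonumeric", "My node")
node.setPropertyValue("custom_fields", True)
node.setPropertyValue("target", "taxable_value")
node.setPropertyValue("inputs", ["sale_value", "time_since_sale"])
node.setPropertyValue("ranking_measure", "Correlation")
node.setPropertyValue("number_of_models", 3)  # keep the best three models in the nugget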
<!-- <table "summary="autonumericnode properties" id="rangepredictornodeslots__table_isl_22j_cdb" class="defaultstyle" "> -->
autonumericnode properties
Table 1\. autonumericnode properties
| `autonumericnode` Properties | Values | Property description |
| ------------------------------------ | ----------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `custom_fields` | *flag* | If True, custom field settings will be used instead of type node settings\. |
| `target` | *field* | The Auto Numeric node requires a single target and one or more input fields\. Weight and frequency fields can also be specified\. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.html#modelingnodeslots_common) for more information\. |
| `inputs` | *\[field1 … field2\]* | |
| `partition` | *field* | |
| `use_frequency` | *flag* | |
| `frequency_field` | *field* | |
| `use_weight` | *flag* | |
| `weight_field` | *field* | |
| `use_partitioned_data` | *flag* | If a partition field is defined, only the training data is used for model building\. |
| `ranking_measure` | `Correlation``NumberOfFields` | |
| `ranking_dataset` | `Test``Training` | |
| `number_of_models` | *integer* | Number of models to include in the model nugget\. Specify an integer between 1 and 100\. |
| `calculate_variable_importance` | *flag* | |
| `enable_correlation_limit` | *flag* | |
| `correlation_limit` | *integer* | |
| `enable_number_of_fields_limit` | *flag* | |
| `number_of_fields_limit` | *integer* | |
| `enable_relative_error_limit` | *flag* | |
| `relative_error_limit` | *integer* | |
| `enable_model_build_time_limit` | *flag* | |
| `model_build_time_limit` | *integer* | |
| `enable_stop_after_time_limit` | *flag* | |
| `stop_after_time_limit` | *integer* | |
| `stop_if_valid_model` | *flag* | |
| `<algorithm>` | *flag* | Enables or disables the use of a specific algorithm\. |
| `<algorithm>.<property>` | *string* | Sets a property value for a specific algorithm\. See [Setting algorithm properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/factorymodeling_algorithmproperties.html#factorymodeling_algorithmproperties) for more information\. |
| `use_cross_validation` | *boolean* | Instead of using a single partition, a cross validation partition is used\. |
| `number_of_folds` | *integer* | N fold parameter for cross validation, with range from 3 to 10\. |
| `set_random_seed` | *boolean* | Setting a random seed allows you to replicate analyses\. Specify an integer or click Generate, which will create a pseudo\-random integer between 1 and 2147483647, inclusive\. By default, analyses are replicated with seed 229176228\. |
| `random_seed` | *integer* | Random seed |
| `filter_individual_model_output` | *boolean* | Removes from the output all of the additional fields generated by the individual models that feed into the Ensemble node\. Select this option if you're interested only in the combined score from all of the input models\. Ensure that this option is deselected if, for example, you want to use an Analysis node or Evaluation node to compare the accuracy of the combined score with that of each of the individual input models\. |
| `calculate_standard_error`           | *boolean*                     | For a continuous (numeric range) target, a standard error calculation runs by default to calculate the difference between the measured or estimated values and the true values, and to show how closely those estimates match\. |
<!-- </table "summary="autonumericnode properties" id="rangepredictornodeslots__table_isl_22j_cdb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
B7CAC3027EB08D3E2CFBFAB0F0AF2ACF4DD0F990 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/reclassifynodeslots.html?context=cdpaas&locale=en | reclassifynode properties | reclassifynode properties
The Reclassify node transforms one set of categorical values to another. Reclassification is useful for collapsing categories or regrouping data for analysis.
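Example (a minimal sketch; the field and category values are illustrative):
node = stream.create("reclassify", "My node")
node.setPropertyValue("mode", "Single")
node.setPropertyValue("field", "Drug")
node.setPropertyValue("new_name", "DrugGroup")
node.setPropertyValue("use_default", True)
node.setPropertyValue("default", "Other")  # categories without a mapping get this value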
reclassifynode properties
Table 1. reclassifynode properties
reclassifynode properties Data type Property description
mode SingleMultiple Single reclassifies the categories for one field. Multiple activates options enabling the transformation of more than one field at a time.
replace_field flag
field string Used only in Single mode.
new_name string Used only in Single mode.
fields [field1 field2 ... fieldn] Used only in Multiple mode.
name_extension string Used only in Multiple mode.
add_as SuffixPrefix Used only in Multiple mode.
reclassify string Structured property for field values.
use_default flag Use the default value.
default string Specify a default value.
pick_list [string string … string] Allows a user to import a list of known new values to populate the drop-down list in the table.
| # reclassifynode properties #
The Reclassify node transforms one set of categorical values to another\. Reclassification is useful for collapsing categories or regrouping data for analysis\.
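Example (a minimal sketch; the field and category values are illustrative):
node = stream.create("reclassify", "My node")
node.setPropertyValue("mode", "Single")
node.setPropertyValue("field", "Drug")
node.setPropertyValue("new_name", "DrugGroup")
node.setPropertyValue("use_default", True)
node.setPropertyValue("default", "Other")  # categories without a mapping get this value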
<!-- <table "summary="reclassifynode properties" id="reclassifynodeslots__table_cst_mvs_ddb" class="defaultstyle" "> -->
reclassifynode properties
Table 1\. reclassifynode properties
| `reclassifynode` properties | Data type | Property description |
| --------------------------- | --------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------ |
| `mode` | `Single``Multiple` | `Single` reclassifies the categories for one field\. `Multiple` activates options enabling the transformation of more than one field at a time\. |
| `replace_field` | *flag* | |
| `field` | *string* | Used only in Single mode\. |
| `new_name` | *string* | Used only in Single mode\. |
| `fields` | *\[field1 field2 \.\.\. fieldn\]* | Used only in Multiple mode\. |
| `name_extension` | *string* | Used only in Multiple mode\. |
| `add_as` | `Suffix``Prefix` | Used only in Multiple mode\. |
| `reclassify` | *string* | Structured property for field values\. |
| `use_default` | *flag* | Use the default value\. |
| `default` | *string* | Specify a default value\. |
| `pick_list` | *\[string string … string\]* | Allows a user to import a list of known new values to populate the drop\-down list in the table\. |
<!-- </table "summary="reclassifynode properties" id="reclassifynodeslots__table_cst_mvs_ddb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
8023AC0A48264DB31F3C9DA92FD84F947BFD4047 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/regressionnodeslots.html?context=cdpaas&locale=en | regressionnode properties | regressionnode properties
Linear regression is a common statistical technique for summarizing data and making predictions by fitting a straight line or surface that minimizes the discrepancies between predicted and actual output values.
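Example (a minimal sketch; the field names are illustrative):
node = stream.create("regression", "My node")
node.setPropertyValue("target", "Na")
node.setPropertyValue("inputs", ["Age", "K"])
node.setPropertyValue("method", "Stepwise")
node.setPropertyValue("stepping_method", "useP")  # step on probability of F
node.setPropertyValue("probability_entry", 0.05)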
regressionnode properties
Table 1. regressionnode properties
regressionnode Properties Values Property description
target field Regression models require a single target field and one or more input fields. A weight field can also be specified. See the topic [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information.
method Enter Stepwise Backwards Forwards
include_constant flag
use_weight flag
weight_field field
mode Simple Expert
complete_records flag
tolerance 1.0E-1 1.0E-2 1.0E-3 1.0E-4 1.0E-5 1.0E-6 1.0E-7 1.0E-8 1.0E-9 1.0E-10 1.0E-11 1.0E-12 Use double quotes for arguments.
stepping_method useP useF useP: use probability of F useF: use F value
probability_entry number
probability_removal number
F_value_entry number
F_value_removal number
selection_criteria flag
confidence_interval flag
covariance_matrix flag
collinearity_diagnostics flag
regression_coefficients flag
exclude_fields flag
durbin_watson flag
model_fit flag
r_squared_change flag
p_correlations flag
descriptives flag
calculate_variable_importance flag
residuals boolean Statistics for the residuals (or the differences between predicted values and actual values).
| # regressionnode properties #
Linear regression is a common statistical technique for summarizing data and making predictions by fitting a straight line or surface that minimizes the discrepancies between predicted and actual output values\.
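Example (a minimal sketch; the field names are illustrative):
node = stream.create("regression", "My node")
node.setPropertyValue("target", "Na")
node.setPropertyValue("inputs", ["Age", "K"])
node.setPropertyValue("method", "Stepwise")
node.setPropertyValue("stepping_method", "useP")  # step on probability of F
node.setPropertyValue("probability_entry", 0.05)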
<!-- <table "summary="regressionnode properties" class="defaultstyle" "> -->
regressionnode properties
Table 1\. regressionnode properties
| `regressionnode` Properties | Values | Property description |
| ------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `target` | *field* | Regression models require a single target field and one or more input fields\. A weight field can also be specified\. See the topic [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.html#modelingnodeslots_common) for more information\. |
| `method` | `Enter` <br>`Stepwise` <br>`Backwards` <br>`Forwards` | |
| `include_constant` | *flag* | |
| `use_weight` | *flag* | |
| `weight_field` | *field* | |
| `mode` | `Simple` <br>`Expert` | |
| `complete_records` | *flag* | |
| `tolerance` | `1.0E-1` <br>`1.0E-2` <br>`1.0E-3` <br>`1.0E-4` <br>`1.0E-5` <br>`1.0E-6` <br>`1.0E-7` <br>`1.0E-8` <br>`1.0E-9` <br>`1.0E-10` <br>`1.0E-11` <br>`1.0E-12` | Use double quotes for arguments\. |
| `stepping_method` | `useP` <br>`useF` | `useP`: use probability of F `useF`: use F value |
| `probability_entry` | *number* | |
| `probability_removal` | *number* | |
| `F_value_entry` | *number* | |
| `F_value_removal` | *number* | |
| `selection_criteria` | *flag* | |
| `confidence_interval` | *flag* | |
| `covariance_matrix` | *flag* | |
| `collinearity_diagnostics` | *flag* | |
| `regression_coefficients` | *flag* | |
| `exclude_fields` | *flag* | |
| `durbin_watson` | *flag* | |
| `model_fit` | *flag* | |
| `r_squared_change` | *flag* | |
| `p_correlations` | *flag* | |
| `descriptives` | *flag* | |
| `calculate_variable_importance` | *flag* | |
| `residuals` | *boolean* | Statistics for the residuals (or the differences between predicted values and actual values)\. |
<!-- </table "summary="regressionnode properties" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
D6A347CB86DF46925701892180F4D8A5B8E14508 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/regressionnuggetnodeslots.html?context=cdpaas&locale=en | applyregressionnode properties | applyregressionnode properties
You can use Linear Regression modeling nodes to generate a Linear Regression model nugget. The scripting name of this model nugget is applyregressionnode. No other properties exist for this model nugget. For more information on scripting the modeling node itself, see [regressionnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/regressionnodeslots.htmlregressionnodeslots).
| # applyregressionnode properties #
You can use Linear Regression modeling nodes to generate a Linear Regression model nugget\. The scripting name of this model nugget is *applyregressionnode*\. No other properties exist for this model nugget\. For more information on scripting the modeling node itself, see [regressionnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/regressionnodeslots.html#regressionnodeslots)\.
<!-- </article "role="article" "> -->
|
56DC9CABDA3980A4D5D41AA5B3E5612E727B289A | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/reordernodeslots.html?context=cdpaas&locale=en | reordernode properties | reordernode properties
The Field Reorder node defines the natural order used to display fields downstream. This order affects the display of fields in a variety of places, such as tables, lists, and when selecting fields. This operation is useful when working with wide datasets to make fields of interest more visible.
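Example (a minimal sketch; the field names are illustrative):
node = stream.create("reorder", "My node")
node.setPropertyValue("mode", "Custom")
node.setPropertyValue("start_fields", ["Age", "Cholesterol"])
node.setPropertyValue("end_fields", ["Drug"])  # all other fields appear between these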
reordernode properties
Table 1. reordernode properties
reordernode properties Data type Property description
mode CustomAuto You can sort values automatically or specify a custom order.
sort_by NameTypeStorage
ascending flag
start_fields [field1 field2 … fieldn] New fields are inserted after these fields.
end_fields [field1 field2 … fieldn] New fields are inserted before these fields.
| # reordernode properties #
The Field Reorder node defines the natural order used to display fields downstream\. This order affects the display of fields in a variety of places, such as tables, lists, and when selecting fields\. This operation is useful when working with wide datasets to make fields of interest more visible\.
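Example (a minimal sketch; the field names are illustrative):
node = stream.create("reorder", "My node")
node.setPropertyValue("mode", "Custom")
node.setPropertyValue("start_fields", ["Age", "Cholesterol"])
node.setPropertyValue("end_fields", ["Drug"])  # all other fields appear between these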
<!-- <table "summary="reordernode properties" class="defaultstyle" "> -->
reordernode properties
Table 1\. reordernode properties
| `reordernode` properties | Data type | Property description |
| ------------------------ | ---------------------------- | ------------------------------------------------------------- |
| `mode` | `Custom``Auto` | You can sort values automatically or specify a custom order\. |
| `sort_by` | `Name``Type``Storage` | |
| `ascending` | *flag* | |
| `start_fields` | *\[field1 field2 … fieldn\]* | New fields are inserted after these fields\. |
| `end_fields` | *\[field1 field2 … fieldn\]* | New fields are inserted before these fields\. |
<!-- </table "summary="reordernode properties" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
57ED2F2E8EAA8DAB5B26C3759FD1BD102D03B975 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/reportnodeslots.html?context=cdpaas&locale=en | reportnode properties | reportnode properties
The Report node creates formatted reports containing fixed text as well as data and other expressions derived from the data. You specify the format of the report using text templates to define the fixed text and data output constructions. You can provide custom text formatting by using HTML tags in the template and by setting output options. You can include data values and other conditional output by using CLEM expressions in the template.
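Example (a minimal sketch; the file path and title are illustrative):
node = stream.create("report", "My node")
node.setPropertyValue("output_mode", "File")
node.setPropertyValue("output_format", "HTML")
node.setPropertyValue("full_filename", "C:/temp/report.html")
node.setPropertyValue("format", "Auto")  # automatic formatting; use "Custom" for HTML templates
node.setPropertyValue("title", "Monthly summary")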
reportnode properties
Table 1. reportnode properties
reportnode properties Data type Property description
output_mode ScreenFile Used to specify target location for output generated from the output node.
output_format HTML (.html) Text (.txt) Output (.cou) Used to specify the type of file output.
format AutoCustom Used to choose whether output is automatically formatted or formatted using HTML included in the template. To use HTML formatting in the template, specify Custom.
use_output_name flag Specifies whether a custom output name is used.
output_name string If use_output_name is true, specifies the name to use.
text string
full_filename string
highlights flag
title string
lines_per_page number
| # reportnode properties #
The Report node creates formatted reports containing fixed text as well as data and other expressions derived from the data\. You specify the format of the report using text templates to define the fixed text and data output constructions\. You can provide custom text formatting by using HTML tags in the template and by setting output options\. You can include data values and other conditional output by using CLEM expressions in the template\.
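Example (a minimal sketch; the file path and title are illustrative):
node = stream.create("report", "My node")
node.setPropertyValue("output_mode", "File")
node.setPropertyValue("output_format", "HTML")
node.setPropertyValue("full_filename", "C:/temp/report.html")
node.setPropertyValue("format", "Auto")  # automatic formatting; use "Custom" for HTML templates
node.setPropertyValue("title", "Monthly summary")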
<!-- <table "summary="reportnode properties" class="defaultstyle" "> -->
reportnode properties
Table 1\. reportnode properties
| `reportnode` properties | Data type | Property description |
| ----------------------- | ----------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `output_mode` | `Screen``File` | Used to specify target location for output generated from the output node\. |
| `output_format` | `HTML` (\.*html*) `Text` (\.*txt*) `Output` (\.*cou*) | Used to specify the type of file output\. |
| `format` | `Auto``Custom` | Used to choose whether output is automatically formatted or formatted using HTML included in the template\. To use HTML formatting in the template, specify `Custom`\. |
| `use_output_name` | *flag* | Specifies whether a custom output name is used\. |
| `output_name` | *string* | If `use_output_name` is true, specifies the name to use\. |
| `text` | *string* | |
| `full_filename` | *string* | |
| `highlights` | *flag* | |
| `title` | *string* | |
| `lines_per_page` | *number* | |
<!-- </table "summary="reportnode properties" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
5D9039607C167566CED9A4D7CC9F30F2B0C58554 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/restructurenodeslots.html?context=cdpaas&locale=en | restructurenode properties | restructurenode properties
The Restructure node converts a nominal or flag field into a group of fields that can be populated with the values of yet another field. For example, given a field named payment type, with values of credit, cash, and debit, three new fields would be created (credit, cash, debit), each of which might contain the value of the actual payment made.
Example
node = stream.create("restructure", "My node")
node.setKeyedPropertyValue("fields_from", "Drug", ["drugA", "drugX"])
node.setPropertyValue("include_field_name", True)
node.setPropertyValue("value_mode", "OtherFields")
node.setPropertyValue("value_fields", ["Age", "BP"])
restructurenode properties
Table 1. restructurenode properties
restructurenode properties Data type Property description
fields_from [category category category] all
include_field_name flag Indicates whether to use the field name in the restructured field name.
value_mode OtherFieldsFlags Indicates the mode for specifying the values for the restructured fields. With OtherFields, you must specify which fields to use. With Flags, the values are numeric flags.
value_fields list Required if value_mode is OtherFields. Specifies which fields to use as value fields.
| # restructurenode properties #
The Restructure node converts a nominal or flag field into a group of fields that can be populated with the values of yet another field\. For example, given a field named `payment type`, with values of `credit`, `cash`, and `debit`, three new fields would be created (`credit`, `cash`, `debit`), each of which might contain the value of the actual payment made\.
Example
node = stream.create("restructure", "My node")
node.setKeyedPropertyValue("fields_from", "Drug", ["drugA", "drugX"])
node.setPropertyValue("include_field_name", True)
node.setPropertyValue("value_mode", "OtherFields")
node.setPropertyValue("value_fields", ["Age", "BP"])
<!-- <table "summary="restructurenode properties" id="restructurenodeslots__table_er4_5vs_ddb" class="defaultstyle" "> -->
restructurenode properties
Table 1\. restructurenode properties
| `restructurenode` properties | Data type | Property description |
| ---------------------------- | -------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `fields_from` | \[*category category category*\] `all` | |
| `include_field_name` | *flag* | Indicates whether to use the field name in the restructured field name\. |
| `value_mode` | `OtherFields``Flags` | Indicates the mode for specifying the values for the restructured fields\. With `OtherFields`, you must specify which fields to use\. With `Flags`, the values are numeric flags\. |
| `value_fields` | *list* | Required if `value_mode` is `OtherFields`\. Specifies which fields to use as value fields\. |
<!-- </table "summary="restructurenode properties" id="restructurenodeslots__table_er4_5vs_ddb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
CD0745062372B6A66356728DEA39EE6D8237D0DE | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/rf_nodeslots.html?context=cdpaas&locale=en | randomtrees properties | randomtrees properties
The Random Trees node is similar to the C&RT Tree node; however, the Random Trees node is designed to process big data to create a single tree. The Random Trees node generates a decision tree that you use to predict or classify future observations. The method uses recursive partitioning to split the training records into segments by minimizing the impurity at each step, where a node in the tree is considered pure if 100% of cases in the node fall into a specific category of the target field. Target and input fields can be numeric ranges or categorical (nominal, ordinal, or flags); all splits are binary (only two subgroups).
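Example (a minimal sketch; the field names and limits are illustrative):
node = stream.create("randomtrees", "My node")
node.setPropertyValue("target", "Drug")
node.setPropertyValue("number_of_models", 10)  # size of the ensemble
node.setPropertyValue("max_depth", 10)
node.setPropertyValue("min_child_node_size", 5)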
randomtrees properties
Table 1. randomtrees properties
randomtrees Properties Values Property description
target field In the Random Trees node, models require a single target and one or more input fields. A frequency field can also be specified. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information.
number_of_models integer Determines the number of models to build as part of the ensemble modeling.
use_number_of_predictors flag Determines whether number_of_predictors is used.
number_of_predictors integer Specifies the number of predictors to be used when building split models.
use_stop_rule_for_accuracy flag Determines whether model building stops when accuracy can't be improved.
sample_size number Reduce this value to improve performance when processing very large datasets.
handle_imbalanced_data flag If the target of the model is a particular flag outcome, and the ratio of the desired outcome to a non-desired outcome is very small, then the data is imbalanced and the bootstrap sampling that's conducted by the model may affect the model's accuracy. Enable imbalanced data handling so that the model will capture a larger proportion of the desired outcome and generate a stronger model.
use_weighted_sampling flag When False, variables for each node are randomly selected with the same probability. When True, variables are weighted and selected accordingly.
max_node_number integer Maximum number of nodes allowed in individual trees. If the number would be exceeded on the next split, tree growth halts.
max_depth integer Maximum tree depth before growth halts.
min_child_node_size integer Determines the minimum number of records allowed in a child node after the parent node is split. If a child node would contain fewer records than specified here, the parent node won't be split.
use_costs flag
costs structured Structured property. The format is a list of lists, each with 3 values: the actual value, the predicted value, and the cost if that prediction is wrong. For example: tree.setPropertyValue("costs", [["drugA", "drugB", 3.0], ["drugX", "drugY", 4.0]])
default_cost_increase nonelinearsquarecustom Note this is only enabled for ordinal targets. Set default values in the costs matrix.
max_pct_missing integer If the percentage of missing values in any input is greater than the value specified here, the input is excluded. Minimum 0, maximum 100.
exclude_single_cat_pct integer If one category value represents a higher percentage of the records than specified here, the entire field is excluded from model building. Minimum 1, maximum 99.
max_category_number integer If the number of categories in a field exceeds this value, the field is excluded from model building. Minimum 2.
min_field_variation number If the coefficient of variation of a continuous field is smaller than this value, the field is excluded from model building.
num_bins integer Only used if the data is made up of continuous inputs. Set the number of equal frequency bins to be used for the inputs; options are: 2, 4, 5, 10, 20, 25, 50, or 100.
topN integer Specifies the number of rules to report. Default value is 50, with a minimum of 1 and a maximum of 1000.
| # randomtrees properties #
The Random Trees node is similar to the C&RT Tree node; however, the Random Trees node is designed to process big data to create a single tree\. The Random Trees node generates a decision tree that you use to predict or classify future observations\. The method uses recursive partitioning to split the training records into segments by minimizing the impurity at each step, where a node in the tree is considered pure if 100% of cases in the node fall into a specific category of the target field\. Target and input fields can be numeric ranges or categorical (nominal, ordinal, or flags); all splits are binary (only two subgroups)\.
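Example (a minimal sketch; the field names and limits are illustrative):
node = stream.create("randomtrees", "My node")
node.setPropertyValue("target", "Drug")
node.setPropertyValue("number_of_models", 10)  # size of the ensemble
node.setPropertyValue("max_depth", 10)
node.setPropertyValue("min_child_node_size", 5)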
<!-- <table "summary="randomtrees properties" class="defaultstyle" "> -->
randomtrees properties
Table 1\. randomtrees properties
| `randomtrees` Properties | Values | Property description |
| ---------------------------- | ------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `target` | *field* | In the Random Trees node, models require a single target and one or more input fields\. A frequency field can also be specified\. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.html#modelingnodeslots_common) for more information\. |
| `number_of_models` | *integer* | Determines the number of models to build as part of the ensemble modeling\. |
| `use_number_of_predictors` | *flag* | Determines whether `number_of_predictors` is used\. |
| `number_of_predictors` | *integer* | Specifies the number of predictors to be used when building split models\. |
| `use_stop_rule_for_accuracy` | *flag* | Determines whether model building stops when accuracy can't be improved\. |
| `sample_size` | *number* | Reduce this value to improve performance when processing very large datasets\. |
| `handle_imbalanced_data` | *flag* | If the target of the model is a particular flag outcome, and the ratio of the desired outcome to a non\-desired outcome is very small, then the data is imbalanced and the bootstrap sampling that's conducted by the model may affect the model's accuracy\. Enable imbalanced data handling so that the model will capture a larger proportion of the desired outcome and generate a stronger model\. |
| `use_weighted_sampling` | *flag* | When False, variables for each node are randomly selected with the same probability\. When True, variables are weighted and selected accordingly\. |
| `max_node_number` | *integer* | Maximum number of nodes allowed in individual trees\. If the number would be exceeded on the next split, tree growth halts\. |
| `max_depth` | *integer* | Maximum tree depth before growth halts\. |
| `min_child_node_size` | *integer* | Determines the minimum number of records allowed in a child node after the parent node is split\. If a child node would contain fewer records than specified here, the parent node won't be split\. |
| `use_costs` | *flag* | |
| `costs`                      | *structured*                   | Structured property\. The format is a list of lists, each with 3 values: the actual value, the predicted value, and the cost if that prediction is wrong\. For example: `tree.setPropertyValue("costs", [["drugA", "drugB", 3.0], ["drugX", "drugY", 4.0]])` |
| `default_cost_increase` | `none``linear``square``custom` | Note this is only enabled for ordinal targets\. Set default values in the costs matrix\. |
| `max_pct_missing` | *integer* | If the percentage of missing values in any input is greater than the value specified here, the input is excluded\. Minimum 0, maximum 100\. |
| `exclude_single_cat_pct` | *integer* | If one category value represents a higher percentage of the records than specified here, the entire field is excluded from model building\. Minimum 1, maximum 99\. |
| `max_category_number` | *integer* | If the number of categories in a field exceeds this value, the field is excluded from model building\. Minimum 2\. |
| `min_field_variation` | *number* | If the coefficient of variation of a continuous field is smaller than this value, the field is excluded from model building\. |
| `num_bins` | *integer* | Only used if the data is made up of continuous inputs\. Set the number of equal frequency bins to be used for the inputs; options are: 2, 4, 5, 10, 20, 25, 50, or 100\. |
| `topN` | *integer* | Specifies the number of rules to report\. Default value is 50, with a minimum of 1 and a maximum of 1000\. |
<!-- </table "summary="randomtrees properties" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
E10CEBBD89F23E057645097B776A51DEA0C1555F | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/rf_nuggetnodeslots.html?context=cdpaas&locale=en | applyrandomtrees properties | applyrandomtrees properties
You can use the Random Trees modeling node to generate a Random Trees model nugget. The scripting name of this model nugget is applyrandomtrees. For more information on scripting the modeling node itself, see [randomtrees properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/rf_nodeslots.htmlrf_nodeslots).
applyrandomtrees properties
Table 1. applyrandomtrees properties
applyrandomtrees Properties Values Property description
calculate_conf flag This property includes confidence calculations in the generated tree.
enable_sql_generation false native When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations.
| # applyrandomtrees properties #
You can use the Random Trees modeling node to generate a Random Trees model nugget\. The scripting name of this model nugget is *applyrandomtrees*\. For more information on scripting the modeling node itself, see [randomtrees properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/rf_nodeslots.html#rf_nodeslots)\.
<!-- <table "summary="applyrandomtrees properties" class="defaultstyle" "> -->
applyrandomtrees properties
Table 1\. applyrandomtrees properties
| `applyrandomtrees` Properties | Values | Property description |
| ----------------------------- | --------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------ |
| `calculate_conf` | *flag* | This property includes confidence calculations in the generated tree\. |
| `enable_sql_generation` | `false` <br>`native` | When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations\. |
<!-- </table "summary="applyrandomtrees properties" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
F6670B3B49F00E4EE1F44E8B1C09E24AFEDD2529 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/rfmaggregatenodeslots.html?context=cdpaas&locale=en | rfmaggregatenode properties | rfmaggregatenode properties
The Recency, Frequency, Monetary (RFM) Aggregate node enables you to take customers' historical transactional data, strip away any unused data, and combine all of their remaining transaction data into a single row that lists when they last dealt with you, how many transactions they have made, and the total monetary value of those transactions.
Example
node = stream.create("rfmaggregate", "My node")
node.setPropertyValue("relative_to", "Fixed")
node.setPropertyValue("reference_date", "2007-10-12")
node.setPropertyValue("id_field", "CardID")
node.setPropertyValue("date_field", "Date")
node.setPropertyValue("value_field", "Amount")
node.setPropertyValue("only_recent_transactions", True)
node.setPropertyValue("transaction_date_after", "2000-10-01")
rfmaggregatenode properties
Table 1. rfmaggregatenode properties
rfmaggregatenode properties Data type Property description
relative_to FixedToday Specify the date from which the recency of transactions will be calculated.
reference_date date Only available if Fixed is chosen in relative_to.
contiguous flag If your data is presorted so that all records with the same ID appear together in the data stream, selecting this option speeds up processing.
id_field field Specify the field to be used to identify the customer and their transactions.
date_field field Specify the date field to be used to calculate recency against.
value_field field Specify the field to be used to calculate the monetary value.
extension string Specify a prefix or suffix for duplicate aggregated fields.
add_as SuffixPrefix Specify if the extension should be added as a suffix or a prefix.
discard_low_value_records flag Enable use of the discard_records_below setting.
discard_records_below number Specify a minimum value below which any transaction details are not used when calculating the RFM totals. The units of value relate to the value field selected.
only_recent_transactions flag Enable use of either the specify_transaction_date or transaction_within_last settings.
specify_transaction_date flag
transaction_date_after date Only available if specify_transaction_date is selected. Specify the transaction date after which records will be included in your analysis.
transaction_within_last number Only available if only_recent_transactions is selected. Specify the number of periods (days, weeks, months, or years) back from the Calculate Recency relative to date after which records will be included in your analysis.
transaction_scale DaysWeeksMonthsYears Only available if only_recent_transactions is selected. Specify the type of period (days, weeks, months, or years) used by transaction_within_last.
save_r2 flag Displays the date of the second most recent transaction for each customer.
save_r3 flag Only available if save_r2 is selected. Displays the date of the third most recent transaction for each customer.
| # rfmaggregatenode properties #
The Recency, Frequency, Monetary (RFM) Aggregate node enables you to take customers' historical transactional data, strip away any unused data, and combine all of their remaining transaction data into a single row that lists when they last dealt with you, how many transactions they have made, and the total monetary value of those transactions\.
Example
node = stream.create("rfmaggregate", "My node")
node.setPropertyValue("relative_to", "Fixed")
node.setPropertyValue("reference_date", "2007-10-12")
node.setPropertyValue("id_field", "CardID")
node.setPropertyValue("date_field", "Date")
node.setPropertyValue("value_field", "Amount")
node.setPropertyValue("only_recent_transactions", True)
node.setPropertyValue("transaction_date_after", "2000-10-01")
<!-- <table "summary="rfmaggregatenode properties" id="rfmaggregatenodeslots__table_tqg_vvs_ddb" class="defaultstyle" "> -->
rfmaggregatenode properties
Table 1\. rfmaggregatenode properties
| `rfmaggregatenode` properties | Data type | Property description |
| ----------------------------- | ---------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `relative_to` | `Fixed``Today` | Specify the date from which the recency of transactions will be calculated\. |
| `reference_date` | *date* | Only available if `Fixed` is chosen in `relative_to`\. |
| `contiguous` | *flag* | If your data is presorted so that all records with the same ID appear together in the data stream, selecting this option speeds up processing\. |
| `id_field` | *field* | Specify the field to be used to identify the customer and their transactions\. |
| `date_field` | *field* | Specify the date field to be used to calculate recency against\. |
| `value_field` | *field* | Specify the field to be used to calculate the monetary value\. |
| `extension` | *string* | Specify a prefix or suffix for duplicate aggregated fields\. |
| `add_as` | `Suffix``Prefix` | Specify if the `extension` should be added as a suffix or a prefix\. |
| `discard_low_value_records` | *flag* | Enable use of the `discard_records_below` setting\. |
| `discard_records_below` | *number* | Specify a minimum value below which any transaction details are not used when calculating the RFM totals\. The units of value relate to the `value` field selected\. |
| `only_recent_transactions` | *flag* | Enable use of either the `specify_transaction_date` or `transaction_within_last` settings\. |
| `specify_transaction_date` | *flag* | |
| `transaction_date_after` | *date* | Only available if `specify_transaction_date` is selected\. Specify the transaction date after which records will be included in your analysis\. |
| `transaction_within_last`     | *number*                     | Only available if `only_recent_transactions` is selected\. Specify the number of periods (days, weeks, months, or years) back from the Calculate Recency relative to date after which records will be included in your analysis\. |
| `transaction_scale`           | `Days``Weeks``Months``Years` | Only available if `only_recent_transactions` is selected\. Specify the type of period (days, weeks, months, or years) used by `transaction_within_last`\. |
| `save_r2` | *flag* | Displays the date of the second most recent transaction for each customer\. |
| `save_r3` | *flag* | Only available if `save_r2` is selected\. Displays the date of the third most recent transaction for each customer\. |
<!-- </table "summary="rfmaggregatenode properties" id="rfmaggregatenodeslots__table_tqg_vvs_ddb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
4292721E4524AC59FA259576D39665946DB8849D | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/rfmanalysisnodeslots.html?context=cdpaas&locale=en | rfmanalysisnode properties | rfmanalysisnode properties
The Recency, Frequency, Monetary (RFM) Analysis node enables you to determine quantitatively which customers are likely to be the best ones by examining how recently they last purchased from you (recency), how often they purchased (frequency), and how much they spent over all transactions (monetary).
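Example (a minimal sketch; the field names are illustrative):
node = stream.create("rfmanalysis", "My node")
node.setPropertyValue("recency", "Recency")
node.setPropertyValue("frequency", "Frequency")
node.setPropertyValue("monetary", "Monetary")
node.setPropertyValue("recency_bins", 5)
node.setPropertyValue("recency_weight", 100)  # the default weighting for recency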
rfmanalysisnode properties
Table 1. rfmanalysisnode properties
rfmanalysisnode properties Data type Property description
recency field Specify the recency field. This may be a date, timestamp, or simple number.
frequency field Specify the frequency field.
monetary field Specify the monetary field.
recency_bins integer Specify the number of recency bins to be generated.
recency_weight number Specify the weighting to be applied to recency data. The default is 100.
frequency_bins integer Specify the number of frequency bins to be generated.
frequency_weight number Specify the weighting to be applied to frequency data. The default is 10.
monetary_bins integer Specify the number of monetary bins to be generated.
monetary_weight number Specify the weighting to be applied to monetary data. The default is 1.
tied_values_method NextCurrent Specify which bin tied value data is to be put in.
recalculate_bins AlwaysIfNecessary
add_outliers flag Available only if recalculate_bins is set to IfNecessary. If set, records that lie below the lower bin will be added to the lower bin, and records above the highest bin will be added to the highest bin.
binned_field RecencyFrequencyMonetary
recency_thresholds value value Available only if recalculate_bins is set to Always. Specify the upper and lower thresholds for the recency bins. The upper threshold of one bin is used as the lower threshold of the next—for example, [10 30 60] would define two bins, the first bin with upper and lower thresholds of 10 and 30, with the second bin thresholds of 30 and 60.
frequency_thresholds value value Available only if recalculate_bins is set to Always.
monetary_thresholds value value Available only if recalculate_bins is set to Always.
| # rfmanalysisnode properties #
The Recency, Frequency, Monetary (RFM) Analysis node enables you to determine quantitatively which customers are likely to be the best ones by examining how recently they last purchased from you (recency), how often they purchased (frequency), and how much they spent over all transactions (monetary)\.
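Example (a minimal sketch; the field names are illustrative):
node = stream.create("rfmanalysis", "My node")
node.setPropertyValue("recency", "Recency")
node.setPropertyValue("frequency", "Frequency")
node.setPropertyValue("monetary", "Monetary")
node.setPropertyValue("recency_bins", 5)
node.setPropertyValue("recency_weight", 100)  # the default weighting for recency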
<!-- <table "summary="rfmanalysisnode properties" id="rfmanalysisnodeslots__table_wtw_vvs_ddb" class="defaultstyle" "> -->
rfmanalysisnode properties
Table 1\. rfmanalysisnode properties
| `rfmanalysisnode` properties | Data type | Property description |
| ---------------------------- | ------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `recency` | *field* | Specify the recency field\. This may be a date, timestamp, or simple number\. |
| `frequency` | *field* | Specify the frequency field\. |
| `monetary` | *field* | Specify the monetary field\. |
| `recency_bins` | *integer* | Specify the number of recency bins to be generated\. |
| `recency_weight` | *number* | Specify the weighting to be applied to recency data\. The default is 100\. |
| `frequency_bins` | *integer* | Specify the number of frequency bins to be generated\. |
| `frequency_weight` | *number* | Specify the weighting to be applied to frequency data\. The default is 10\. |
| `monetary_bins` | *integer* | Specify the number of monetary bins to be generated\. |
| `monetary_weight` | *number* | Specify the weighting to be applied to monetary data\. The default is 1\. |
| `tied_values_method` | `Next``Current` | Specify which bin tied value data is to be put in\. |
| `recalculate_bins` | `Always``IfNecessary` | |
| `add_outliers` | *flag* | Available only if `recalculate_bins` is set to `IfNecessary`\. If set, records that lie below the lower bin will be added to the lower bin, and records above the highest bin will be added to the highest bin\. |
| `binned_field` | `Recency``Frequency``Monetary` | |
| `recency_thresholds` | *value value* | Available only if `recalculate_bins` is set to `Always`\. Specify the upper and lower thresholds for the recency bins\. The upper threshold of one bin is used as the lower threshold of the next—for example, `[10 30 60]` would define two bins, the first bin with upper and lower thresholds of 10 and 30, with the second bin thresholds of 30 and 60\. |
| `frequency_thresholds` | *value value* | Available only if `recalculate_bins` is set to `Always`\. |
| `monetary_thresholds` | *value value* | Available only if `recalculate_bins` is set to `Always`\. |
<!-- </table "summary="rfmanalysisnode properties" id="rfmanalysisnodeslots__table_wtw_vvs_ddb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
D1908D2F2C1701D4A9AC3354E42DFF295C06B40D | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/rfnodeslots.html?context=cdpaas&locale=en | rfnode properties | rfnode properties
The Random Forest node uses an advanced implementation of a bagging algorithm with a tree model as the base model. This Random Forest modeling node in SPSS Modeler is implemented in Python and requires the scikit-learn© Python library.
rfnode properties
Table 1. rfnode properties
rfnode properties Data type Property description
custom_fields boolean This option tells the node to use field information specified here instead of that given in any upstream Type node(s). After selecting this option, specify the following fields as required.
inputs field List of the field names for input.
target field One field name for target.
fast_build boolean Utilize multiple CPU cores to improve model building.
role_use string Specify predefined to use predefined roles or custom to use custom field assignments. Default is predefined.
splits field List of the field names for split.
n_estimators integer Number of trees to build. Default is 10.
specify_max_depth Boolean Specify custom max depth. If false, nodes are expanded until all leaves are pure or until all leaves contain less than min_samples_split samples. Default is false.
max_depth integer The maximum depth of the tree. Default is 10.
min_samples_leaf integer Minimum leaf node size. Default is 1.
max_features string The number of features to consider when looking for the best split:<br><br><br><br> * If auto, then max_features=sqrt(n_features) for classification and max_features=n_features for regression.<br> * If sqrt, then max_features=sqrt(n_features).<br> * If log2, then max_features=log2(n_features).<br><br><br><br>Default is auto.
bootstrap Boolean Use bootstrap samples when building trees. Default is true.
oob_score Boolean Use out-of-bag samples to estimate the generalization accuracy. Default value is false.
extreme Boolean Use extremely randomized trees. Default is false.
use_random_seed Boolean Specify this to get replicated results. Default is false.
random_seed integer The random number seed to use when building trees. Specify any integer.
cache_size float The size of the kernel cache in MB. Default is 200.
enable_random_seed Boolean Enables the random_seed parameter. Specify true or false. Default is false.
enable_hpo Boolean Specify true or false to enable or disable the HPO options. If set to true, Rbfopt will be applied to determine the "best" Random Forest model automatically, which reaches the target objective value defined by the user with the following target_objval parameter.
target_objval float The objective function value (error rate of the model on the samples) you want to reach (for example, the value of the unknown optimum). Set this parameter to the appropriate value if the optimum is unknown (for example, 0.01).
max_iterations integer Maximum number of iterations for trying the model. Default is 1000.
max_evaluations integer Maximum number of function evaluations for trying the model, where the focus is accuracy over speed. Default is 300.
| # rfnode properties #
The Random Forest node uses an advanced implementation of a bagging algorithm with a tree model as the base model\. This Random Forest modeling node in SPSS Modeler is implemented in Python and requires the scikit\-learn© Python library\.
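Example\. A hedged sketch of building the node from a script; the type name ("rf") is inferred from the rfnode prefix, and the field names are hypothetical:
stream = modeler.script.stream()
node = stream.create("rf", "My node")  # type name assumed
node.setPropertyValue("custom_fields", True)
node.setPropertyValue("inputs", ["Age", "BP", "Cholesterol"])  # hypothetical fields
node.setPropertyValue("target", "Drug")  # hypothetical field
node.setPropertyValue("n_estimators", 100)
node.setPropertyValue("specify_max_depth", True)
node.setPropertyValue("max_depth", 10)
node.setPropertyValue("use_random_seed", True)
node.setPropertyValue("random_seed", 42)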
<!-- <table "summary="rfnode properties" id="rfnodeslots__table_r3k_5bw_tz" class="defaultstyle" "> -->
rfnode properties
Table 1\. rfnode properties
| `rfnode` properties | Data type | Property description |
| -------------------- | --------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `custom_fields` | *boolean* | This option tells the node to use field information specified here instead of that given in any upstream Type node(s)\. After selecting this option, specify the following fields as required\. |
| `inputs` | *field* | List of the field names for input\. |
| `target` | *field* | One field name for target\. |
| `fast_build` | *boolean* | Utilize multiple CPU cores to improve model building\. |
| `role_use` | *string* | Specify `predefined` to use predefined roles or `custom` to use custom field assignments\. Default is predefined\. |
| `splits` | *field* | List of the field names for split\. |
| `n_estimators` | *integer* | Number of trees to build\. Default is `10`\. |
| `specify_max_depth` | *Boolean* | Specify custom max depth\. If `false`, nodes are expanded until all leaves are pure or until all leaves contain less than `min_samples_split` samples\. Default is `false`\. |
| `max_depth` | *integer* | The maximum depth of the tree\. Default is `10`\. |
| `min_samples_leaf` | *integer* | Minimum leaf node size\. Default is `1`\. |
| `max_features` | *string* | The number of features to consider when looking for the best split:<br><br><!-- <ul> --><br><br> * If `auto`, then `max_features=sqrt(n_features)` for classification and `max_features=n_features` for regression\.<br> * If `sqrt`, then `max_features=sqrt(n_features)`\.<br> * If `log2`, then `max_features=log2(n_features)`\.<br><br><!-- </ul> --><br><br>Default is `auto`\. |
| `bootstrap` | *Boolean* | Use bootstrap samples when building trees\. Default is `true`\. |
| `oob_score` | *Boolean* | Use out\-of\-bag samples to estimate the generalization accuracy\. Default value is `false`\. |
| `extreme` | *Boolean* | Use extremely randomized trees\. Default is `false`\. |
| `use_random_seed` | *Boolean* | Specify this to get replicated results\. Default is `false`\. |
| `random_seed` | *integer* | The random number seed to use when building trees\. Specify any integer\. |
| `cache_size` | *float* | The size of the kernel cache in MB\. Default is `200`\. |
| `enable_random_seed` | *Boolean* | Enables the `random_seed` parameter\. Specify true or false\. Default is `false`\. |
| `enable_hpo` | *Boolean* | Specify `true` or `false` to enable or disable the HPO options\. If set to `true`, Rbfopt will be applied to determine the "best" Random Forest model automatically, which reaches the target objective value defined by the user with the following `target_objval` parameter\. |
| `target_objval` | *float* | The objective function value (error rate of the model on the samples) you want to reach (for example, the value of the unknown optimum)\. Set this parameter to the appropriate value if the optimum is unknown (for example, `0.01`)\. |
| `max_iterations` | *integer* | Maximum number of iterations for trying the model\. Default is `1000`\. |
| `max_evaluations` | *integer* | Maximum number of function evaluations for trying the model, where the focus is accuracy over speed\. Default is `300`\. |
<!-- </table "summary="rfnode properties" id="rfnodeslots__table_r3k_5bw_tz" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
949025C4DEEA46FD131C7B8D89978D75FCC440C4 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/samplenodeslots.html?context=cdpaas&locale=en | samplenode properties | samplenode properties
 The Sample node selects a subset of records. A variety of sample types are supported, including stratified, clustered, and nonrandom (structured) samples. Sampling can be useful for improving performance, and for selecting groups of related records or transactions for analysis.
Example
# Create two Sample nodes to extract
# different samples from the same data
node = stream.create("sample", "My node")
node.setPropertyValue("method", "Simple")
node.setPropertyValue("mode", "Include")
node.setPropertyValue("sample_type", "First")
node.setPropertyValue("first_n", 500)
node = stream.create("sample", "My node")
node.setPropertyValue("method", "Complex")
node.setPropertyValue("stratify_by", ["Sex", "Cholesterol"])
node.setPropertyValue("sample_units", "Proportions")
node.setPropertyValue("sample_size_proportions", "Custom")
node.setPropertyValue("sizes_proportions", [["M", "High", "Default"], ["M", "Normal", "Default"],
["F", "High", 0.3], ["F", "Normal", 0.3]])
samplenode properties
Table 1. samplenode properties
samplenode properties Data type Property description
method Simple Complex
mode IncludeDiscard Include or discard records that meet the specified condition.
sample_type FirstOneInNRandomPct Specifies the sampling method.
first_n integer Records up to the specified cutoff point will be included or discarded.
one_in_n number Include or discard every nth record.
rand_pct number Specify the percentage of records to include or discard.
use_max_size flag Enable use of the maximum_size setting.
maximum_size integer Specify the largest sample to be included or discarded from the data stream. This option is redundant and therefore disabled when First and Include are specified.
set_random_seed flag Enables use of the random seed setting.
random_seed integer Specify the value used as a random seed.
complex_sample_type RandomSystematic
sample_units ProportionsCounts
sample_size_proportions FixedCustomVariable
sample_size_counts FixedCustomVariable
fixed_proportions number
fixed_counts integer
variable_proportions field
variable_counts field
use_min_stratum_size flag
minimum_stratum_size integer This option only applies when a Complex sample is taken with Sample units=Proportions.
use_max_stratum_size flag
maximum_stratum_size integer This option only applies when a Complex sample is taken with Sample units=Proportions.
clusters field
stratify_by [field1 ... fieldN]
specify_input_weight flag
input_weight field
new_output_weight string
sizes_proportions [[string string value][string string value]…] If sample_units=proportions and sample_size_proportions=Custom, specifies a value for each possible combination of values of stratification fields.
default_proportion number
sizes_counts [[string string value][string string value]…] Specifies a value for each possible combination of values of stratification fields. Usage is similar to sizes_proportions but specifying an integer rather than a proportion.
default_count number
| # samplenode properties #
 The Sample node selects a subset of records\. A variety of sample types are supported, including stratified, clustered, and nonrandom (structured) samples\. Sampling can be useful for improving performance, and for selecting groups of related records or transactions for analysis\.
Example
# Create two Sample nodes to extract
# different samples from the same data
node = stream.create("sample", "My node")
node.setPropertyValue("method", "Simple")
node.setPropertyValue("mode", "Include")
node.setPropertyValue("sample_type", "First")
node.setPropertyValue("first_n", 500)
node = stream.create("sample", "My node")
node.setPropertyValue("method", "Complex")
node.setPropertyValue("stratify_by", ["Sex", "Cholesterol"])
node.setPropertyValue("sample_units", "Proportions")
node.setPropertyValue("sample_size_proportions", "Custom")
node.setPropertyValue("sizes_proportions", [["M", "High", "Default"], ["M", "Normal", "Default"],
["F", "High", 0.3], ["F", "Normal", 0.3]])
<!-- <table "summary="samplenode properties" id="samplenodeslots__table_fft_wvs_ddb" class="defaultstyle" "> -->
samplenode properties
Table 1\. samplenode properties
| `samplenode` properties | Data type | Property description |
| ------------------------- | --------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `method` | `Simple``Complex` | |
| `mode` | `Include``Discard` | Include or discard records that meet the specified condition\. |
| `sample_type` | `First``OneInN``RandomPct` | Specifies the sampling method\. |
| `first_n` | *integer* | Records up to the specified cutoff point will be included or discarded\. |
| `one_in_n` | *number* | Include or discard every *n*th record\. |
| `rand_pct` | *number* | Specify the percentage of records to include or discard\. |
| `use_max_size` | *flag* | Enable use of the `maximum_size` setting\. |
| `maximum_size` | *integer* | Specify the largest sample to be included or discarded from the data stream\. This option is redundant and therefore disabled when `First` and `Include` are specified\. |
| `set_random_seed` | *flag* | Enables use of the random seed setting\. |
| `random_seed` | *integer* | Specify the value used as a random seed\. |
| `complex_sample_type` | `Random``Systematic` | |
| `sample_units` | `Proportions``Counts` | |
| `sample_size_proportions` | `Fixed``Custom``Variable` | |
| `sample_size_counts` | `Fixed``Custom``Variable` | |
| `fixed_proportions` | *number* | |
| `fixed_counts` | *integer* | |
| `variable_proportions` | *field* | |
| `variable_counts` | *field* | |
| `use_min_stratum_size` | *flag* | |
| `minimum_stratum_size` | *integer* | This option only applies when a Complex sample is taken with `Sample units=Proportions`\. |
| `use_max_stratum_size` | *flag* | |
| `maximum_stratum_size` | *integer* | This option only applies when a Complex sample is taken with `Sample units=Proportions`\. |
| `clusters` | *field* | |
| `stratify_by` | *\[field1 \.\.\. fieldN\]* | |
| `specify_input_weight` | *flag* | |
| `input_weight` | *field* | |
| `new_output_weight` | *string* | |
| `sizes_proportions` | \[\[*string string value*\]\[*string string value*\]…\] | If `sample_units=proportions` and `sample_size_proportions=Custom`, specifies a value for each possible combination of values of stratification fields\. |
| `default_proportion` | *number* | |
| `sizes_counts` | \[\[*string string value*\]\[*string string value*\]…\] | Specifies a value for each possible combination of values of stratification fields\. Usage is similar to `sizes_proportions` but specifying an integer rather than a proportion\. |
| `default_count` | *number* | |
<!-- </table "summary="samplenode properties" id="samplenodeslots__table_fft_wvs_ddb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
3E0860FD12FA0BB5BE75C68FBD34D69A631F2324 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/script_execution_and_interruption.html?context=cdpaas&locale=en | Running and interrupting scripts | Running and interrupting scripts
You can run scripts in a number of ways. For example, in the flow script or standalone script pane, click Run This Script to run the complete script.
You can run a script using any of the following methods:
* Click Run script within a flow script or standalone script.
* Run a flow where Run script is set as the default execution method.
Note: A SuperNode script runs when the SuperNode is run as long as you select Run script within the SuperNode script dialog box.
| # Running and interrupting scripts #
You can run scripts in a number of ways\. For example, in the flow script or standalone script pane, click Run This Script to run the complete script\.
You can run a script using any of the following methods:
<!-- <ul> -->
* Click Run script within a flow script or standalone script\.
* Run a flow where Run script is set as the default execution method\.
<!-- </ul> -->
Note: A SuperNode script runs when the SuperNode is run as long as you select Run script within the SuperNode script dialog box\.
<!-- </article "role="article" "> -->
|
27E7AD16129A9DC8AC8CE2EE79C9B584D441F0DE | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/scripting_accessresults.html?context=cdpaas&locale=en | Accessing flow run results | Accessing flow run results
Many SPSS Modeler nodes produce output objects such as models, charts, and tabular data. Many of these outputs contain useful values that can be used by scripts to guide subsequent runs. These values are grouped into content containers (referred to as simply containers) which can be accessed using tags or IDs that identify each container. The way these values are accessed depends on the format or "content model" used by that container.
For example, many predictive model outputs use a variant of XML called PMML to represent information about the model such as which fields a decision tree uses at each split, or how the neurons in a neural network are connected and with what strengths. Model outputs that use PMML provide an XML Content Model that can be used to access that information. For example:
stream = modeler.script.stream()
# Assume the flow contains a single C5.0 model builder node
# and that the datasource, predictors, and targets have already been
# set up
modelbuilder = stream.findByType("c50", None)
results = []
modelbuilder.run(results)
modeloutput = results[0]
# Now that we have the C5.0 model output object, access the
# relevant content model
cm = modeloutput.getContentModel("PMML")
# The PMML content model is a generic XML-based content model that
# uses XPath syntax. Use that to find the names of the data fields.
# The call returns a list of strings that match the XPath values
dataFieldNames = cm.getStringValues("/PMML/DataDictionary/DataField", "name")
SPSS Modeler supports the following content models in scripting:
* Table content model provides access to the simple tabular data represented as rows and columns.
* XML content model provides access to content stored in XML format.
* JSON content model provides access to content stored in JSON format.
* Column statistics content model provides access to summary statistics about a specific field.
* Pair-wise column statistics content model provides access to summary statistics between two fields or values between two separate fields.
Note that the following nodes don't contain these content models:
* Time Series
* Discriminant
* SLRM
* All Extension nodes
* All Database Modeling nodes
| # Accessing flow run results #
Many SPSS Modeler nodes produce output objects such as models, charts, and tabular data\. Many of these outputs contain useful values that can be used by scripts to guide subsequent runs\. These values are grouped into content containers (referred to as simply containers) which can be accessed using tags or IDs that identify each container\. The way these values are accessed depends on the format or "content model" used by that container\.
For example, many predictive model outputs use a variant of XML called PMML to represent information about the model such as which fields a decision tree uses at each split, or how the neurons in a neural network are connected and with what strengths\. Model outputs that use PMML provide an XML Content Model that can be used to access that information\. For example:
stream = modeler.script.stream()
# Assume the flow contains a single C5.0 model builder node
# and that the datasource, predictors, and targets have already been
# set up
modelbuilder = stream.findByType("c50", None)
results = []
modelbuilder.run(results)
modeloutput = results[0]
# Now that we have the C5.0 model output object, access the
# relevant content model
cm = modeloutput.getContentModel("PMML")
# The PMML content model is a generic XML-based content model that
# uses XPath syntax. Use that to find the names of the data fields.
# The call returns a list of strings that match the XPath values
dataFieldNames = cm.getStringValues("/PMML/DataDictionary/DataField", "name")
SPSS Modeler supports the following content models in scripting:
<!-- <ul> -->
* Table content model provides access to the simple tabular data represented as rows and columns\.
* XML content model provides access to content stored in XML format\.
* JSON content model provides access to content stored in JSON format\.
* Column statistics content model provides access to summary statistics about a specific field\.
* Pair\-wise column statistics content model provides access to summary statistics between two fields or values between two separate fields\.
<!-- </ul> -->
Note that the following nodes don't contain these content models:
<!-- <ul> -->
* Time Series
* Discriminant
* SLRM
* All Extension nodes
* All Database Modeling nodes
<!-- </ul> -->
<!-- </article "role="article" "> -->
|
6638B9F61F15821F7A92D9C30FC6C24C029B78DC | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/scripting_accessresults_columnstats.html?context=cdpaas&locale=en | Column Statistics content model and Pairwise Statistics content model | Column Statistics content model and Pairwise Statistics content model
The Column Statistics content model provides access to statistics that can be computed for each field (univariate statistics). The Pairwise Statistics content model provides access to statistics that can be computed between pairs of fields or values in a field.
Any of these statistics measures are possible:
* Count
* UniqueCount
* ValidCount
* Mean
* Sum
* Min
* Max
* Range
* Variance
* StandardDeviation
* StandardErrorOfMean
* Skewness
* SkewnessStandardError
* Kurtosis
* KurtosisStandardError
* Median
* Mode
* Pearson
* Covariance
* TTest
* FTest
Some values are only appropriate for single column statistics while others are only appropriate for pairwise statistics.
Nodes that produce these are:
* Statistics node produces column statistics and can produce pairwise statistics when correlation fields are specified
* Data Audit node produces column statistics and can produce pairwise statistics when an overlay field is specified.
* Means node produces pairwise statistics when comparing pairs of fields or comparing a field's values with other field summaries.
Which content models and statistics are available depends on both the particular node's capabilities and the settings within the node.
Methods for the Column Statistics content model
Table 1. Methods for the Column Statistics content model
Method Return types Description
getAvailableStatistics() List<StatisticType> Returns the available statistics in this model. Not all fields necessarily have values for all statistics.
getAvailableColumns() List<String> Returns the column names for which statistics were computed.
getStatistic(String column, StatisticType statistic) Number Returns the statistic values associated with the column.
reset() void Flushes any internal storage associated with this content model.
Methods for the Pairwise Statistics content model
Table 2. Methods for the Pairwise Statistics content model
Method Return types Description
getAvailableStatistics() List<StatisticType> Returns the available statistics in this model. Not all fields necessarily have values for all statistics.
getAvailablePrimaryColumns() List<String> Returns the primary column names for which statistics were computed.
getAvailablePrimaryValues() List<Object> Returns the values of the primary column for which statistics were computed.
getAvailableSecondaryColumns() List<String> Returns the secondary column names for which statistics were computed.
getStatistic(String primaryColumn, String secondaryColumn, StatisticType statistic) Number Returns the statistic values associated with the columns.
getStatistic(String primaryColumn, Object primaryValue, String secondaryColumn, StatisticType statistic) Number Returns the statistic values associated with the primary column value and the secondary column.
reset() void Flushes any internal storage associated with this content model.
| # Column Statistics content model and Pairwise Statistics content model #
The Column Statistics content model provides access to statistics that can be computed for each field (univariate statistics)\. The Pairwise Statistics content model provides access to statistics that can be computed between pairs of fields or values in a field\.
Any of these statistics measures are possible:
<!-- <ul> -->
* `Count`
* `UniqueCount`
* `ValidCount`
* `Mean`
* `Sum`
* `Min`
* `Max`
* `Range`
* `Variance`
* `StandardDeviation`
* `StandardErrorOfMean`
* `Skewness`
* `SkewnessStandardError`
* `Kurtosis`
* `KurtosisStandardError`
* `Median`
* `Mode`
* `Pearson`
* `Covariance`
* `TTest`
* `FTest`
<!-- </ul> -->
Some values are only appropriate for single column statistics while others are only appropriate for pairwise statistics\.
Nodes that produce these are:
<!-- <ul> -->
* Statistics node produces column statistics and can produce pairwise statistics when correlation fields are specified
* Data Audit node produces column statistics and can produce pairwise statistics when an overlay field is specified\.
* Means node produces pairwise statistics when comparing pairs of fields or comparing a field's values with other field summaries\.
<!-- </ul> -->
Which content models and statistics are available depends on both the particular node's capabilities and the settings within the node\.
<!-- <table "summary="Methods for the Column Statistics content model" class="defaultstyle" "> -->
Methods for the Column Statistics content model
Table 1\. Methods for the Column Statistics content model
| Method | Return types | Description |
| ------------------------------------------------------ | --------------------------- | ------------------------------------------------------------------------------------------------------------ |
| `getAvailableStatistics()` | `List<StatisticType>` | Returns the available statistics in this model\. Not all fields necessarily have values for all statistics\. |
| `getAvailableColumns()` | `List<String>` | Returns the column names for which statistics were computed\. |
| `getStatistic(String column, StatisticType statistic)` | `Number` | Returns the statistic values associated with the column\. |
| `reset()` | `void` | Flushes any internal storage associated with this content model\. |
<!-- </table "summary="Methods for the Column Statistics content model" class="defaultstyle" "> -->
<!-- <table "summary="Methods for the Pairwise Statistics content model" class="defaultstyle" "> -->
Methods for the Pairwise Statistics content model
Table 2\. Methods for the Pairwise Statistics content model
| Method | Return types | Description |
| ---------------------------------------------------------------------------------------------------------- | --------------------------- | ------------------------------------------------------------------------------------------------------------ |
| `getAvailableStatistics()` | `List<StatisticType>` | Returns the available statistics in this model\. Not all fields necessarily have values for all statistics\. |
| `getAvailablePrimaryColumns()` | `List<String>` | Returns the primary column names for which statistics were computed\. |
| `getAvailablePrimaryValues()` | `List<Object>` | Returns the values of the primary column for which statistics were computed\. |
| `getAvailableSecondaryColumns()` | `List<String>` | Returns the secondary column names for which statistics were computed\. |
| `getStatistic(String primaryColumn, String secondaryColumn, StatisticType statistic)` | `Number` | Returns the statistic values associated with the columns\. |
| `getStatistic(String primaryColumn, Object primaryValue, String secondaryColumn, StatisticType statistic)` | `Number` | Returns the statistic values associated with the primary column value and the secondary column\. |
| `reset()` | `void` | Flushes any internal storage associated with this content model\. |
<!-- </table "summary="Methods for the Pairwise Statistics content model" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
6FC8A7D53D6951306E0FD23667A802538A81D6FF | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/scripting_accessresults_json.html?context=cdpaas&locale=en | JSON content model | JSON content model
The JSON content model is used to access content stored in JSON format. It provides a basic API to allow callers to extract values on the assumption that they know which values are to be accessed.
Methods for the JSON content model
Table 1. Methods for the JSON content model
Method Return types Description
getJSONAsString() String Returns the JSON content as a string.
getObjectAt(<List of object> path, JSONArtifact artifact) throws Exception Object Returns the object at the specified path. The supplied root artifact might be null, in which case the root of the content is used. The returned value can be a literal string, integer, real or boolean, or a JSON artifact (either a JSON object or a JSON array).
getChildValuesAt(<List of object> path, JSONArtifact artifact) throws Exception Hash table (key:object, value:object) Returns the child values of the specified path if the path leads to a JSON object or null otherwise. The keys in the table are strings while the associated value can be a literal string, integer, real or boolean, or a JSON artifact (either a JSON object or a JSON array).
getChildrenAt(<List of object> path, JSONArtifact artifact) throws Exception List of objects Returns the list of objects at the specified path if the path leads to a JSON array or null otherwise. The returned values can be a literal string, integer, real or boolean, or a JSON artifact (either a JSON object or a JSON array).
reset() void Flushes any internal storage associated with this content model (for example, a cached DOM object).
| # JSON content model #
The JSON content model is used to access content stored in JSON format\. It provides a basic API to allow callers to extract values on the assumption that they know which values are to be accessed\.
<!-- <table "summary="Methods for the JSON content model" class="defaultstyle" "> -->
Methods for the JSON content model
Table 1\. Methods for the JSON content model
| Method | Return types | Description |
| ----------------------------------------------------------------------------------------- | ------------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `getJSONAsString()` | `String` | Returns the JSON content as a string\. |
| `getObjectAt(<List of object> path, JSONArtifact artifact) throws Exception` | `Object` | Returns the object at the specified path\. The supplied root artifact might be null, in which case the root of the content is used\. The returned value can be a literal string, integer, real or boolean, or a JSON artifact (either a JSON object or a JSON array)\. |
| `getChildValuesAt(<List of object> path, JSONArtifact artifact) throws Exception` | `Hash table (key:object, value:object)` | Returns the child values of the specified path if the path leads to a JSON object or null otherwise\. The keys in the table are strings while the associated value can be a literal string, integer, real or boolean, or a JSON artifact (either a JSON object or a JSON array)\. |
| `getChildrenAt(<List of object> path, JSONArtifact artifact) throws Exception` | `List of objects` | Returns the list of objects at the specified path if the path leads to a JSON array or null otherwise\. The returned values can be a literal string, integer, real or boolean, or a JSON artifact (either a JSON object or a JSON array)\. |
| `reset()` | `void` | Flushes any internal storage associated with this content model (for example, a cached DOM object)\. |
<!-- </table "summary="Methods for the JSON content model" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
8FDDCA5B0D9D19DB5B349AB7F72625B8C6D5744C | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/scripting_accessresults_table.html?context=cdpaas&locale=en | Table content model | Table content model
The table content model provides a simple model for accessing simple row and column data. The values in a particular column must all have the same type of storage (for example, strings or integers).
| # Table content model #
The table content model provides a simple model for accessing simple row and column data\. The values in a particular column must all have the same type of storage (for example, strings or integers)\.
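A minimal sketch of reading values from a Table node's output follows\. The container tag ("TABLE") and the `getRowCount()`/`getValueAt()` accessors are assumptions modeled on the other content models in this guide\.
stream = modeler.script.stream()
tablenode = stream.findByType("table", None)
results = []
tablenode.run(results)
cm = results[0].getContentModel("TABLE")  # container tag assumed
if cm is not None:
    # Print the first column of every row (accessor names assumed)
    for row in range(cm.getRowCount()):
        print cm.getValueAt(row, 0)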
<!-- </article "role="article" "> -->
|
198246E6E7F694D36936989D23B2255B15C2A92B | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/scripting_accessresults_xml.html?context=cdpaas&locale=en | XML content model | XML content model
The XML content model provides access to XML-based content.
The XML content model supports the ability to access components based on XPath expressions. XPath expressions are strings that define which elements or attributes are required by the caller. The XML content model hides the details of constructing various objects and compiling expressions that are typically required by XPath support. It is simpler to call from Python scripting.
The XML content model includes a function that returns the XML document as a string, so Python script users can use their preferred Python library to parse the XML.
Methods for the XML content model
Table 1. Methods for the XML content model
Method Return types Description
getXMLAsString() String Returns the XML as a string.
getNumericValue(String xpath) number Returns the result of evaluating the path with return type of numeric (for example, count the number of elements that match the path expression).
getBooleanValue(String xpath) boolean Returns the boolean result of evaluating the specified path expression.
getStringValue(String xpath, String attribute) String Returns either the attribute value or XML node value that matches the specified path.
getStringValues(String xpath, String attribute) List of strings Returns a list of all attribute values or XML node values that match the specified path.
getValuesList(String xpath, <List of strings> attributes, boolean includeValue) List of lists of strings Returns a list of all attribute values that match the specified path along with the XML node value if required.
getValuesMap(String xpath, String keyAttribute, <List of strings> attributes, boolean includeValue) Hash table (key:string, value:list of string) Returns a hash table that uses either the key attribute or XML node value as key, and the list of specified attribute values as table values.
isNamespaceAware() boolean Returns whether the XML parsers should be aware of namespaces. Default is False.
setNamespaceAware(boolean value) void Sets whether the XML parsers should be aware of namespaces. This also calls reset() to ensure changes are picked up by subsequent calls.
reset() void Flushes any internal storage associated with this content model (for example, a cached DOM object).
| # XML content model #
The XML content model provides access to XML\-based content\.
The XML content model supports the ability to access components based on XPath expressions\. XPath expressions are strings that define which elements or attributes are required by the caller\. The XML content model hides the details of constructing various objects and compiling expressions that are typically required by XPath support\. It is simpler to call from Python scripting\.
The XML content model includes a function that returns the XML document as a string, so Python script users can use their preferred Python library to parse the XML\.
<!-- <table "summary="Methods for the XML content model" class="defaultstyle" "> -->
Methods for the XML content model
Table 1\. Methods for the XML content model
| Method | Return types | Description |
| ----------------------------------------------------------------------------------------------------------- | ----------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------- |
| `getXMLAsString()` | `String` | Returns the XML as a string\. |
| `getNumericValue(String xpath)` | `number` | Returns the result of evaluating the path with return type of numeric (for example, count the number of elements that match the path expression)\. |
| `getBooleanValue(String xpath)` | `boolean` | Returns the boolean result of evaluating the specified path expression\. |
| `getStringValue(String xpath, String attribute)` | `String` | Returns either the attribute value or XML node value that matches the specified path\. |
| `getStringValues(String xpath, String attribute)` | `List of strings` | Returns a list of all attribute values or XML node values that match the specified path\. |
| `getValuesList(String xpath, <List of strings> attributes, boolean includeValue)` | `List of lists of strings` | Returns a list of all attribute values that match the specified path along with the XML node value if required\. |
| `getValuesMap(String xpath, String keyAttribute, <List of strings> attributes, boolean includeValue)` | `Hash table (key:string, value:list of string)` | Returns a hash table that uses either the key attribute or XML node value as key, and the list of specified attribute values as table values\. |
| `isNamespaceAware()` | `boolean` | Returns whether the XML parsers should be aware of namespaces\. Default is `False`\. |
| `setNamespaceAware(boolean value)` | `void` | Sets whether the XML parsers should be aware of namespaces\. This also calls `reset()` to ensure changes are picked up by subsequent calls\. |
| `reset()` | `void` | Flushes any internal storage associated with this content model (for example, a cached DOM object)\. |
<!-- </table "summary="Methods for the XML content model" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
4D8B25691C26B2BA05F7E8A96B99FD3F15A124C6 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/scripting_loopthroughnodes.html?context=cdpaas&locale=en | Looping through nodes | Looping through nodes
You can use a for loop to loop through all the nodes in a flow. For example, the following two script examples loop through all nodes and change field names in any Filter nodes to uppercase.
You can use this script in any flow that contains a Filter node, even if no fields are actually filtered. Simply add a Filter node that passes all fields in order to change field names to uppercase across the board.
# Alternative 1: using the data model nameIterator() function
stream = modeler.script.stream()
for node in stream.iterator():
    if (node.getTypeName() == "filter"):
        # nameIterator() returns the field names
        for field in node.getInputDataModel().nameIterator():
            newname = field.upper()
            node.setKeyedPropertyValue("new_name", field, newname)

# Alternative 2: using the data model iterator() function
stream = modeler.script.stream()
for node in stream.iterator():
    if (node.getTypeName() == "filter"):
        # iterator() returns the field objects so we need
        # to call getColumnName() to get the name
        for field in node.getInputDataModel().iterator():
            newname = field.getColumnName().upper()
            node.setKeyedPropertyValue("new_name", field.getColumnName(), newname)
The script loops through all nodes in the current flow, and checks whether each node is a Filter. If so, the script loops through each field in the node and uses either the field.upper() or field.getColumnName().upper() function to change the name to uppercase.
| # Looping through nodes #
You can use a `for` loop to loop through all the nodes in a flow\. For example, the following two script examples loop through all nodes and change field names in any Filter nodes to uppercase\.
You can use this script in any flow that contains a Filter node, even if no fields are actually filtered\. Simply add a Filter node that passes all fields in order to change field names to uppercase across the board\.
# Alternative 1: using the data model nameIterator() function
stream = modeler.script.stream()
for node in stream.iterator():
    if (node.getTypeName() == "filter"):
        # nameIterator() returns the field names
        for field in node.getInputDataModel().nameIterator():
            newname = field.upper()
            node.setKeyedPropertyValue("new_name", field, newname)

# Alternative 2: using the data model iterator() function
stream = modeler.script.stream()
for node in stream.iterator():
    if (node.getTypeName() == "filter"):
        # iterator() returns the field objects so we need
        # to call getColumnName() to get the name
        for field in node.getInputDataModel().iterator():
            newname = field.getColumnName().upper()
            node.setKeyedPropertyValue("new_name", field.getColumnName(), newname)
The script loops through all nodes in the current flow, and checks whether each node is a Filter\. If so, the script loops through each field in the node and uses either the `field.upper()` or `field.getColumnName().upper()` function to change the name to uppercase\.
<!-- </article "role="article" "> -->
|
14A06DE43E6B08188A7672B5BE8068A572DE5B7C | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/scripting_overview.html?context=cdpaas&locale=en | Scripting and automation | Scripting and automation
Scripting in SPSS Modeler is a powerful tool for automating processes in the user interface. Scripts can perform the same types of actions that you perform with a mouse or a keyboard, and you can use them to automate tasks that would be highly repetitive or time consuming to perform manually.
You can use scripts to:
* Impose a specific order for node executions in a flow.
* Set properties for a node as well as perform derivations using a subset of CLEM (Control Language for Expression Manipulation).
* Specify an automatic sequence of actions that normally involves user interaction—for example, you can build a model and then test it.
* Set up complex processes that require substantial user interaction—for example, cross-validation procedures that require repeated model generation and testing.
* Set up processes that manipulate flows—for example, you can take a model training flow, run it, and produce the corresponding model-testing flow automatically.
| # Scripting and automation #
Scripting in SPSS Modeler is a powerful tool for automating processes in the user interface\. Scripts can perform the same types of actions that you perform with a mouse or a keyboard, and you can use them to automate tasks that would be highly repetitive or time consuming to perform manually\.
You can use scripts to:
<!-- <ul> -->
* Impose a specific order for node executions in a flow\.
* Set properties for a node as well as perform derivations using a subset of CLEM (Control Language for Expression Manipulation)\.
* Specify an automatic sequence of actions that normally involves user interaction—for example, you can build a model and then test it\.
* Set up complex processes that require substantial user interaction—for example, cross\-validation procedures that require repeated model generation and testing\.
* Set up processes that manipulate flows—for example, you can take a model training flow, run it, and produce the corresponding model\-testing flow automatically\.
<!-- </ul> -->
<!-- </article "role="article" "> -->
|
AE3F5B72354288CC106BB10263673EBC80B2D544 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/scriptingtips_overview.html?context=cdpaas&locale=en | Scripting tips | Scripting tips
This section provides tips and techniques for using scripts, including modifying flow execution, and using an encoded password in a script.
| # Scripting tips #
This section provides tips and techniques for using scripts, including modifying flow execution, and using an encoded password in a script\.
<!-- </article "role="article" "> -->
|
0301D6611A36E44C345083F6E2C3BDE58DE59982 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/scripts_and_streams.html?context=cdpaas&locale=en | Types of scripts | Types of scripts
SPSS Modeler uses three types of scripts:
* Flow scripts are stored as a flow property and are therefore saved and loaded with a specific flow. For example, you can write a flow script that automates the process of training and applying a model nugget. You can also specify that whenever a particular flow runs, the script should be run instead of the flow's canvas content.
* Standalone scripts aren't associated with any particular flow and are saved in external text files. You might use a standalone script, for example, to manipulate multiple flows together.
* SuperNode scripts are stored as a SuperNode flow property. SuperNode scripts are only available in terminal SuperNodes. You might use a SuperNode script to control the execution sequence of the SuperNode contents. For nonterminal (import or process) SuperNodes, you can define properties for the SuperNode or the nodes it contains in your flow script directly.
| # Types of scripts #
SPSS Modeler uses three types of scripts:
<!-- <ul> -->
* Flow scripts are stored as a flow property and are therefore saved and loaded with a specific flow\. For example, you can write a flow script that automates the process of training and applying a model nugget\. You can also specify that whenever a particular flow runs, the script should be run instead of the flow's canvas content\.
* Standalone scripts aren't associated with any particular flow and are saved in external text files\. You might use a standalone script, for example, to manipulate multiple flows together\.
* SuperNode scripts are stored as a SuperNode flow property\. SuperNode scripts are only available in terminal SuperNodes\. You might use a SuperNode script to control the execution sequence of the SuperNode contents\. For nonterminal (import or process) SuperNodes, you can define properties for the SuperNode or the nodes it contains in your flow script directly\.
<!-- </ul> -->
<!-- </article "role="article" "> -->
|
92FE6B199A3B4773C5B57EDEDBA80500E6C66FAF | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/selectnodeslots.html?context=cdpaas&locale=en | selectnode properties | selectnode properties
 The Select node selects or discards a subset of records from the data stream based on a specific condition. For example, you might select the records that pertain to a particular sales region.
selectnode properties
Table 1. selectnode properties
selectnode properties Data type Property description
mode IncludeDiscard Specifies whether to include or discard selected records.
condition string Condition for including or discarding records.
| # selectnode properties #
 The Select node selects or discards a subset of records from the data stream based on a specific condition\. For example, you might select the records that pertain to a particular sales region\.
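Example\. A short sketch; the condition is a CLEM expression, and the field name is hypothetical:
stream = modeler.script.stream()
node = stream.create("select", "My node")
node.setPropertyValue("mode", "Include")
node.setPropertyValue("condition", "Age < 18")  # hypothetical field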
<!-- <table "summary="selectnode properties" id="selectnodeslots__table_ucw_bwy_ddb" class="defaultstyle" "> -->
selectnode properties
Table 1\. selectnode properties
| `selectnode` properties | Data type | Property description |
| ----------------------- | ------------------ | ---------------------------------------------------------- |
| `mode` | `Include``Discard` | Specifies whether to include or discard selected records\. |
| `condition` | *string* | Condition for including or discarding records\. |
<!-- </table "summary="selectnode properties" id="selectnodeslots__table_ucw_bwy_ddb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
2B4D4CA6A91C05D12F5C7942E73ABAE74BF08472 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/selflearnnodeslots.html?context=cdpaas&locale=en | slrmnode properties | slrmnode properties
The Self-Learning Response Model (SLRM) node enables you to build a model in which a single new case, or small number of new cases, can be used to reestimate the model without having to retrain the model using all data.
slrmnode properties
Table 1. slrmnode properties
slrmnode Properties Values Property description
target field The target field must be a nominal or flag field. A frequency field can also be specified. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information.
target_response field Type must be flag.
continue_training_existing_model flag
target_field_values flag Use all: use all target values from the source. Specify: use only the values selected in target_field_values_specify.
target_field_values_specify [field1 ... fieldN]
include_model_assessment flag
model_assessment_random_seed number Must be a real number.
model_assessment_sample_size number Must be a real number.
model_assessment_iterations number Number of iterations.
display_model_evaluation flag
max_predictions number
randomization number
scoring_random_seed number
sort AscendingDescending Specifies whether the offers with the highest or lowest scores will be displayed first.
model_reliability flag
calculate_variable_importance flag
| # slrmnode properties #
The Self\-Learning Response Model (SLRM) node enables you to build a model in which a single new case, or small number of new cases, can be used to reestimate the model without having to retrain the model using all data\.
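Example\. A hedged sketch; the type name ("slrm") is inferred from the slrmnode prefix, and the field names are hypothetical:
stream = modeler.script.stream()
node = stream.create("slrm", "My node")  # type name assumed
node.setPropertyValue("target", "Offer")  # hypothetical nominal field
node.setPropertyValue("target_response", "Response")  # hypothetical flag field
node.setPropertyValue("max_predictions", 3)
node.setPropertyValue("sort", "Descending")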
<!-- <table "summary="slrmnode properties" class="defaultstyle" "> -->
slrmnode properties
Table 1\. slrmnode properties
| `slrmnode` Properties | Values | Property description |
| ---------------------------------- | -------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `target` | *field* | The target field must be a nominal or flag field\. A frequency field can also be specified\. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.html#modelingnodeslots_common) for more information\. |
| `target_response` | *field* | Type must be flag\. |
| `continue_training_existing_model` | *flag* | |
| `target_field_values` | *flag* | Use all: use all target values from the source\. Specify: use only the values selected in `target_field_values_specify`\. |
| `target_field_values_specify` | *\[field1 \.\.\. fieldN\]* | |
| `include_model_assessment` | *flag* | |
| `model_assessment_random_seed` | *number* | Must be a real number\. |
| `model_assessment_sample_size` | *number* | Must be a real number\. |
| `model_assessment_iterations` | *number* | Number of iterations\. |
| `display_model_evaluation` | *flag* | |
| `max_predictions` | *number* | |
| `randomization` | *number* | |
| `scoring_random_seed` | *number* | |
| `sort` | `Ascending``Descending` | Specifies whether the offers with the highest or lowest scores will be displayed first\. |
| `model_reliability` | *flag* | |
| `calculate_variable_importance` | *flag* | |
<!-- </table "summary="slrmnode properties" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
AEE1A739F2EA11F815EC571163BA99C9B2A97245 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/selflearnnuggetnodeslots.html?context=cdpaas&locale=en | applyselflearningnode properties | applyselflearningnode properties
You can use Self-Learning Response Model (SLRM) modeling nodes to generate a SLRM model nugget. The scripting name of this model nugget is applyselflearningnode. For more information on scripting the modeling node itself, see [slrmnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/selflearnnodeslots.htmlselflearnnodeslots).
applyselflearningnode properties
Table 1. applyselflearningnode properties
applyselflearningnode Properties Values Property description
max_predictions number
randomization number
scoring_random_seed number
sort ascending <br>descending Specifies whether the offers with the highest or lowest scores will be displayed first.
model_reliability flag Takes account of the model reliability option in the node settings.
enable_sql_generation false <br>native When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations.
| # applyselflearningnode properties #
You can use Self\-Learning Response Model (SLRM) modeling nodes to generate a SLRM model nugget\. The scripting name of this model nugget is *applyselflearningnode*\. For more information on scripting the modeling node itself, see [slrmnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/selflearnnodeslots.html#selflearnnodeslots)\.
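For example, a script might locate the nugget in a flow and adjust its scoring options\. This is a sketch only; the type name passed to `findByType` is an assumption:
stream = modeler.script.stream()
applynode = stream.findByType("applyselflearning", None)  # type name assumed
applynode.setPropertyValue("sort", "descending")
applynode.setPropertyValue("model_reliability", True)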
<!-- <table "summary="applyselflearningnode properties" id="selflearnnuggetnodeslots__table_czt_2wy_ddb" class="defaultstyle" "> -->
applyselflearningnode properties
Table 1\. applyselflearningnode properties
| `applyselflearningnode` Properties | Values | Property description |
| ---------------------------------- | ----------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------ |
| `max_predictions` | *number* | |
| `randomization` | *number* | |
| `scoring_random_seed` | *number* | |
| `sort` | `ascending` <br>`descending` | Specifies whether the offers with the highest or lowest scores will be displayed first\. |
| `model_reliability` | *flag* | Takes account of the model reliability option in the node settings\. |
| `enable_sql_generation` | `false` <br>`native` | When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations\. |
<!-- </table "summary="applyselflearningnode properties" id="selflearnnuggetnodeslots__table_czt_2wy_ddb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
641B0015A5A634BFC40F10AE59873CA784232F14 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/sequencenodeslots.html?context=cdpaas&locale=en | sequencenode properties | sequencenode properties
The Sequence node discovers association rules in sequential or time-oriented data. A sequence is a list of item sets that tends to occur in a predictable order. For example, a customer who purchases a razor and aftershave lotion may purchase shaving cream the next time he shops. The Sequence node is based on the CARMA association rules algorithm, which uses an efficient two-pass method for finding sequences.
sequencenode properties
Table 1. sequencenode properties
sequencenode Properties Values Property description
id_field field To create a Sequence model, you need to specify an ID field, an optional time field, and one or more content fields. Weight and frequency fields are not used. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information.
time_field field
use_time_field flag
content_fields [field1 ... fieldn]
contiguous flag
min_supp number
min_conf number
max_size number
max_predictions number
mode Simple <br>Expert
use_max_duration flag
max_duration number
use_gaps flag
min_item_gap number
max_item_gap number
use_pruning flag
pruning_value number
set_mem_sequences flag
mem_sequences integer
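As an illustrative sketch (the type name and all field names here are assumptions, not values from this table):
seqnode = modeler.script.stream().findByType("sequence", None)  # type name assumed
seqnode.setPropertyValue("id_field", "CustomerID")      # hypothetical field
seqnode.setPropertyValue("use_time_field", True)
seqnode.setPropertyValue("time_field", "PurchaseDate")  # hypothetical field
seqnode.setPropertyValue("content_fields", ["Product"]) # hypothetical field
seqnode.setPropertyValue("min_supp", 5.0)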
| # sequencenode properties #
The Sequence node discovers association rules in sequential or time\-oriented data\. A sequence is a list of item sets that tends to occur in a predictable order\. For example, a customer who purchases a razor and aftershave lotion may purchase shaving cream the next time he shops\. The Sequence node is based on the CARMA association rules algorithm, which uses an efficient two\-pass method for finding sequences\.
<!-- <table "summary="sequencenode properties" id="sequencenodeslots__table_l5v_fwy_ddb" class="defaultstyle" "> -->
sequencenode properties
Table 1\. sequencenode properties
| `sequencenode` Properties | Values | Property description |
| ------------------------- | -------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `id_field` | *field* | To create a Sequence model, you need to specify an ID field, an optional time field, and one or more content fields\. Weight and frequency fields are not used\. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.html#modelingnodeslots_common) for more information\. |
| `time_field` | *field* | |
| `use_time_field` | *flag* | |
| `content_fields` | \[*field1 \.\.\. fieldn*\] | |
| `contiguous` | *flag* | |
| `min_supp` | *number* | |
| `min_conf` | *number* | |
| `max_size` | *number* | |
| `max_predictions` | *number* | |
| `mode` | `Simple` <br>`Expert` | |
| `use_max_duration` | *flag* | |
| `max_duration` | *number* | |
| `use_gaps` | *flag* | |
| `min_item_gap` | *number* | |
| `max_item_gap` | *number* | |
| `use_pruning` | *flag* | |
| `pruning_value` | *number* | |
| `set_mem_sequences` | *flag* | |
| `mem_sequences` | *integer* | |
<!-- </table "summary="sequencenode properties" id="sequencenodeslots__table_l5v_fwy_ddb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
29AF55B95D387BE39D4E9D328936B95CAD5BEB67 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/sequencenuggetnodeslots.html?context=cdpaas&locale=en | applysequencenode properties | applysequencenode properties
You can use Sequence modeling nodes to generate a Sequence model nugget. The scripting name of this model nugget is applysequencenode. No other properties exist for this model nugget. For more information on scripting the modeling node itself, see [sequencenode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/sequencenodeslots.htmlsequencenodeslots).
| # applysequencenode properties #
You can use Sequence modeling nodes to generate a Sequence model nugget\. The scripting name of this model nugget is *applysequencenode*\. No other properties exist for this model nugget\. For more information on scripting the modeling node itself, see [sequencenode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/sequencenodeslots.html#sequencenodeslots)\.
<!-- </article "role="article" "> -->
|
2F88CC7897776EAD3F1A7052A740701B8E1A6969 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/setglobalsnodeslots.html?context=cdpaas&locale=en | setglobalsnode properties | setglobalsnode properties
The Set Globals node scans the data and computes summary values that can be used in CLEM expressions. For example, you can use this node to compute statistics for a field called age and then use the overall mean of age in CLEM expressions by inserting the function @GLOBAL_MEAN(age).
setglobalsnode properties
Table 1. setglobalsnode properties
setglobalsnode properties Data type Property description
globals [Sum Mean Min Max SDev] Structured property where fields to be set must be referenced with the following syntax: node.setKeyedPropertyValue( "globals", "Age", ["Max", "Sum", "Mean", "SDev"])
clear_first flag
show_preview flag
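For example, the keyed syntax from the table can be combined with the other properties in a short script; the Age field is taken from the example above, and the type name is an assumption:
globalsnode = modeler.script.stream().findByType("setglobals", None)  # type name assumed
globalsnode.setPropertyValue("clear_first", True)  # clear existing globals before computing new ones
globalsnode.setKeyedPropertyValue("globals", "Age", ["Max", "Sum", "Mean", "SDev"])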
| # setglobalsnode properties #
The Set Globals node scans the data and computes summary values that can be used in CLEM expressions\. For example, you can use this node to compute statistics for a field called `age` and then use the overall mean of `age` in CLEM expressions by inserting the function `@GLOBAL_MEAN(age)`\.
<!-- <table "summary="setglobalsnode properties" class="defaultstyle" "> -->
setglobalsnode properties
Table 1\. setglobalsnode properties
| `setglobalsnode` properties | Data type | Property description |
| --------------------------- | ------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `globals` | `[Sum Mean Min Max SDev]` | Structured property where fields to be set must be referenced with the following syntax: `node.setKeyedPropertyValue( "globals", "Age", ["Max", "Sum", "Mean", "SDev"])` |
| `clear_first` | *flag* | |
| `show_preview` | *flag* | |
<!-- </table "summary="setglobalsnode properties" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
17E39C164E92D0646C4DDDADFDF178BF3B5E2AD0 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/settoflagnodeslots.html?context=cdpaas&locale=en | settoflagnode properties | settoflagnode properties
The SetToFlag node derives multiple flag fields based on the categorical values defined for one or more nominal fields.
settoflagnode properties
Table 1. settoflagnode properties
settoflagnode properties Data type Property description
fields_from [category category category] <br>all
true_value string Specifies the true value used by the node when setting a flag. The default is T.
false_value string Specifies the false value used by the node when setting a flag. The default is F.
use_extension flag Use an extension as a suffix or prefix to the new flag field.
extension string
add_as Suffix <br>Prefix Specifies whether the extension is added as a suffix or prefix.
aggregate flag Groups records together based on key fields. All flag fields in a group are enabled if any record is set to true.
keys list Key fields.
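As a minimal sketch (the type name, the keyed form used for fields_from, and the field and category names are all assumptions for illustration):
flagnode = modeler.script.stream().findByType("settoflag", None)  # type name assumed
flagnode.setKeyedPropertyValue("fields_from", "Drug", ["drugA", "drugB"])  # keyed syntax assumed
flagnode.setPropertyValue("true_value", "1")
flagnode.setPropertyValue("false_value", "0")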
| # settoflagnode properties #
The SetToFlag node derives multiple flag fields based on the categorical values defined for one or more nominal fields\.
<!-- <table "summary="settoflagnode properties" id="settoflagnodeslots__table_agr_lwy_ddb" class="defaultstyle" "> -->
settoflagnode properties
Table 1\. settoflagnode properties
| `settoflagnode` properties | Data type | Property description |
| -------------------------- | -------------------------------------- | ------------------------------------------------------------------------------------------------------------------- |
| `fields_from` | \[*category category category*\] <br>`all` | |
| `true_value` | *string* | Specifies the true value used by the node when setting a flag\. The default is `T`\. |
| `false_value` | *string* | Specifies the false value used by the node when setting a flag\. The default is `F`\. |
| `use_extension` | *flag* | Use an extension as a suffix or prefix to the new flag field\. |
| `extension` | *string* | |
| `add_as` | `Suffix` <br>`Prefix` | Specifies whether the extension is added as a suffix or prefix\. |
| `aggregate` | *flag* | Groups records together based on key fields\. All flag fields in a group are enabled if any record is set to true\. |
| `keys` | *list* | Key fields\. |
<!-- </table "summary="settoflagnode properties" id="settoflagnodeslots__table_agr_lwy_ddb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
723FD865C01F3AC097E03B74F7D81D574A1A13D4 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/simfitnodeslots.html?context=cdpaas&locale=en | simfitnode properties | simfitnode properties
The Simulation Fitting (Sim Fit) node examines the statistical distribution of the data in each field and generates (or updates) a Simulation Generate node, with the best fitting distribution assigned to each field. The Simulation Generate node can then be used to generate simulated data.
simfitnode properties
Table 1. simfitnode properties
simfitnode properties Data type Property description
custom_gen_node_name boolean You can generate the name of the generated (or updated) Simulation Generate node automatically by selecting Auto.
gen_node_name string Specify a custom name for the generated (or updated) node.
used_cases_type string Specifies the number of cases to use when fitting distributions to the fields in the data set. Use AllCases or FirstNCases.
used_cases integer The number of cases
good_fit_type string For continuous fields, specify either the AnderDarling test or the KolmogSmirn test of goodness of fit to rank distributions when fitting distributions to the fields.
bins integer For continuous fields, the Empirical distribution is the cumulative distribution function of the historical data.
frequency_weight_field field Specify the weight field if your data set contains one. The weight field is then excluded from the distribution fitting process.
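A minimal sketch using values from the table (the type name is an assumption):
simfitnode = modeler.script.stream().findByType("simfit", None)  # type name assumed
simfitnode.setPropertyValue("used_cases_type", "FirstNCases")
simfitnode.setPropertyValue("used_cases", 1000)
simfitnode.setPropertyValue("good_fit_type", "AnderDarling")  # rank continuous fits by Anderson-Darling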
| # simfitnode properties #
The Simulation Fitting (Sim Fit) node examines the statistical distribution of the data in each field and generates (or updates) a Simulation Generate node, with the best fitting distribution assigned to each field\. The Simulation Generate node can then be used to generate simulated data\.
<!-- <table "summary="simfitnode properties" class="defaultstyle" "> -->
simfitnode properties
Table 1\. simfitnode properties
| `simfitnode` properties | Data type | Property description |
| ------------------------ | --------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `custom_gen_node_name` | *boolean* | You can generate the name of the generated (or updated) Simulation Generate node automatically by selecting Auto\. |
| `gen_node_name` | *string* | Specify a custom name for the generated (or updated) node\. |
| `used_cases_type` | *string* | Specifies the number of cases to use when fitting distributions to the fields in the data set\. Use `AllCases` or `FirstNCases`\. |
| `used_cases` | *integer* | The number of cases |
| `good_fit_type` | *string* | For continuous fields, specify either the `AnderDarling` test or the `KolmogSmirn` test of goodness of fit to rank distributions when fitting distributions to the fields\. |
| `bins` | *integer* | For continuous fields, the Empirical distribution is the cumulative distribution function of the historical data\. |
| `frequency_weight_field` | *field* | Specify the weight field if your data set contains one\. The weight field is then excluded from the distribution fitting process\. |
<!-- </table "summary="simfitnode properties" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
C24646ED4724E2A2D856392DDA9C1B9B05145E11 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/simgennodeslots.html?context=cdpaas&locale=en | simgennode properties | simgennode properties
 The Simulation Generate (Sim Gen) node provides an easy way to generate simulated data—either from scratch using user specified statistical distributions or automatically using the distributions obtained from running a Simulation Fitting (Sim Fit) node on existing historical data. This is useful when you want to evaluate the outcome of a predictive model in the presence of uncertainty in the model inputs.
simgennode properties
Table 1. simgennode properties
simgennode properties Data type Property description
fields Structured property See example
correlations Structured property See example
keep_min_max_setting boolean
refit_correlations boolean
max_cases integer Minimum value is 1000, maximum value is 2,147,483,647
create_iteration_field boolean
iteration_field_name string
replicate_results boolean
random_seed integer
parameter_xml string Returns the parameter XML as a string.
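For instance, the scalar properties can be scripted directly; the structured fields and correlations properties use the same calls but take the structured values referred to in the table. The type name is an assumption:
simgennode = modeler.script.stream().findByType("simgen", None)  # type name assumed
simgennode.setPropertyValue("max_cases", 100000)
simgennode.setPropertyValue("replicate_results", True)  # reproduce the same results across runs
simgennode.setPropertyValue("random_seed", 12345)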
| # simgennode properties #
 The Simulation Generate (Sim Gen) node provides an easy way to generate simulated data—either from scratch using user specified statistical distributions or automatically using the distributions obtained from running a Simulation Fitting (Sim Fit) node on existing historical data\. This is useful when you want to evaluate the outcome of a predictive model in the presence of uncertainty in the model inputs\.
<!-- <table "summary="simgennode properties" class="defaultstyle" "> -->
simgennode properties
Table 1\. simgennode properties
| `simgennode` properties | Data type | Property description |
| ------------------------ | ------------------- | ----------------------------------------------------- |
| `fields` | Structured property | See example |
| `correlations` | Structured property | See example |
| `keep_min_max_setting` | *boolean* | |
| `refit_correlations` | *boolean* | |
| `max_cases` | *integer* | Minimum value is 1000, maximum value is 2,147,483,647 |
| `create_iteration_field` | *boolean* | |
| `iteration_field_name` | *string* | |
| `replicate_results` | *boolean* | |
| `random_seed` | *integer* | |
| `parameter_xml` | *string* | Returns the parameter XML as a string\. |
<!-- </table "summary="simgennode properties" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
984B203B8A0054A07F5BE3EB99438C7FBCB6CE85 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/slot_parameter_examples.html?context=cdpaas&locale=en | Node and flow property examples | Node and flow property examples
You can use node and flow properties in a variety of ways with SPSS Modeler. They're most commonly used as part of a script: either a standalone script, used to automate multiple flows or operations, or a flow script, used to automate processes within a single flow. You can also specify node parameters by using the node properties within the SuperNode. At the most basic level, properties can also be used as a command line option for starting SPSS Modeler. Using the -p argument as part of command line invocation, you can use a flow property to change a setting in the flow.
Node and flow property examples
Table 1. Node and flow property examples
Property Meaning
s.max_size Refers to the property max_size of the node named s.
s:samplenode.max_size Refers to the property max_size of the node named s, which must be a Sample node.
:samplenode.max_size Refers to the property max_size of the Sample node in the current flow (there must be only one Sample node).
s:sample.max_size Refers to the property max_size of the node named s, which must be a Sample node.
t.direction.Age Refers to the role of the field Age in the Type node t.
:.max_size *** NOT LEGAL *** You must specify either the node name or the node type.
The example s:sample.max_size illustrates that you don't need to spell out node types in full.
The example t.direction.Age illustrates that some slot names can themselves be structured—in cases where the attributes of a node are more complex than simply individual slots with individual values. Such slots are called structured or complex properties.
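In Python scripting, a structured reference such as t.direction.Age corresponds to a keyed property call. A minimal sketch, assuming the flow contains a Type node labeled t:
typenode = modeler.script.stream().findByType("type", "t")  # label "t" assumed
typenode.setKeyedPropertyValue("direction", "Age", "Input")  # equivalent of t.direction.Age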
| # Node and flow property examples #
You can use node and flow properties in a variety of ways with SPSS Modeler\. They're most commonly used as part of a script: either a standalone script, used to automate multiple flows or operations, or a flow script, used to automate processes within a single flow\. You can also specify node parameters by using the node properties within the SuperNode\. At the most basic level, properties can also be used as a command line option for starting SPSS Modeler\. Using the `-p` argument as part of command line invocation, you can use a flow property to change a setting in the flow\.
<!-- <table "summary="Node and flow property examples" class="defaultstyle" "> -->
Node and flow property examples
Table 1\. Node and flow property examples
| Property | Meaning |
| ----------------------- | --------------------------------------------------------------------------------------------------------------- |
| `s.max_size` | Refers to the property `max_size` of the node named `s`\. |
| `s:samplenode.max_size` | Refers to the property `max_size` of the node named `s`, which must be a Sample node\. |
| `:samplenode.max_size` | Refers to the property `max_size` of the Sample node in the current flow (there must be only one Sample node)\. |
| `s:sample.max_size` | Refers to the property `max_size` of the node named `s`, which must be a Sample node\. |
| `t.direction.Age` | Refers to the role of the field `Age` in the Type node `t`\. |
| `:.max_size` | \*\*\* NOT LEGAL \*\*\* You must specify either the node name or the node type\. |
<!-- </table "summary="Node and flow property examples" class="defaultstyle" "> -->
The example `s:sample.max_size` illustrates that you don't need to spell out node types in full\.
The example `t.direction.Age` illustrates that some slot names can themselves be structured—in cases where the attributes of a node are more complex than simply individual slots with individual values\. Such slots are called structured or complex properties\.
<!-- </article "role="article" "> -->
|
6601B619D597C89F715BC2FAFD703452D64F21CD | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/slot_parameter_syntax.html?context=cdpaas&locale=en | Syntax for properties | Syntax for properties
You can set properties using the following syntax:
OBJECT.setPropertyValue(PROPERTY, VALUE)
or:
OBJECT.setKeyedPropertyValue(PROPERTY, KEY, VALUE)
You can retrieve the value of properties using the following syntax:
VARIABLE = OBJECT.getPropertyValue(PROPERTY)
or:
VARIABLE = OBJECT.getKeyedPropertyValue(PROPERTY, KEY)
where OBJECT is a node or output, PROPERTY is the name of the node property that your expression refers to, and KEY is the key value for keyed properties. For example, the following syntax finds the Filter node and then sets the default to include all fields and filter the Age field from downstream data:
filternode = modeler.script.stream().findByType("filter", None)
filternode.setPropertyValue("default_include", True)
filternode.setKeyedPropertyValue("include", "Age", False)
All nodes used in SPSS Modeler can be located using the flow function findByType(TYPE, LABEL). At least one of TYPE or LABEL must be specified.
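Reading values back uses the corresponding get calls. For example, continuing the script above:
include_age = filternode.getKeyedPropertyValue("include", "Age")    # False after the call above
default_include = filternode.getPropertyValue("default_include")   # True after the call above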
| # Syntax for properties #
You can set properties using the following syntax:
OBJECT.setPropertyValue(PROPERTY, VALUE)
or:
OBJECT.setKeyedPropertyValue(PROPERTY, KEY, VALUE)
You can retrieve the value of properties using the following syntax:
VARIABLE = OBJECT.getPropertyValue(PROPERTY)
or:
VARIABLE = OBJECT.getKeyedPropertyValue(PROPERTY, KEY)
where `OBJECT` is a node or output, `PROPERTY` is the name of the node property that your expression refers to, and `KEY` is the key value for keyed properties\. For example, the following syntax finds the Filter node and then sets the default to include all fields and filter the `Age` field from downstream data:
filternode = modeler.script.stream().findByType("filter", None)
filternode.setPropertyValue("default_include", True)
filternode.setKeyedPropertyValue("include", "Age", False)
All nodes used in SPSS Modeler can be located using the flow function `findByType(TYPE, LABEL)`\. At least one of `TYPE` or `LABEL` must be specified\.
<!-- </article "role="article" "> -->
|
6008CEE94719E6B3CAABFBA9BFF1973B9125E02F | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/slot_parameters_abbreviations.html?context=cdpaas&locale=en | Abbreviations | Abbreviations
Standard abbreviations are used throughout the syntax for node properties. Learning the abbreviations is helpful in constructing scripts.
Standard abbreviations used throughout the syntax
Table 1. Standard abbreviations used throughout the syntax
Abbreviation Meaning
abs Absolute value
len Length
min Minimum
max Maximum
correl Correlation
covar Covariance
num Number or numeric
pct Percent or percentage
transp Transparency
xval Cross-validation
var Variance or variable (in source nodes)
| # Abbreviations #
Standard abbreviations are used throughout the syntax for node properties\. Learning the abbreviations is helpful in constructing scripts\.
<!-- <table "summary="Standard abbreviations used throughout the syntax" id="slot_parameters_abbreviations__table_fw3_qwy_ddb" class="defaultstyle" "> -->
Standard abbreviations used throughout the syntax
Table 1\. Standard abbreviations used throughout the syntax
| Abbreviation | Meaning |
| ------------ | -------------------------------------- |
| abs | Absolute value |
| len | Length |
| min | Minimum |
| max | Maximum |
| correl | Correlation |
| covar | Covariance |
| num | Number or numeric |
| pct | Percent or percentage |
| transp | Transparency |
| xval | Cross\-validation |
| var | Variance or variable (in source nodes) |
<!-- </table "summary="Standard abbreviations used throughout the syntax" id="slot_parameters_abbreviations__table_fw3_qwy_ddb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
FBD84CB5A6901DDAF7412396F4C6CC190E1B7328 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/slot_parameters_common.html?context=cdpaas&locale=en | Common node properties | Common node properties
A number of properties are common to all nodes in SPSS Modeler.
Common node properties
Table 1. Common node properties
Property name Data type Property description
use_custom_name flag
name string Read-only property that reads the name (either auto or custom) for a node on the canvas.
custom_name string Specifies a custom name for the node.
tooltip string
annotation string
keywords string Structured slot that specifies a list of keywords associated with the object (for example, ["Keyword1" "Keyword2"]).
cache_enabled flag
node_type source_supernode <br> <br>process_supernode <br> <br>terminal_supernode <br> <br>all node names as specified for scripting Read-only property used to refer to a node by type. For example, instead of referring to a node only by name, such as real_income, you can also specify the type, such as userinputnode or filternode.
SuperNode-specific properties are discussed separately, as with all other nodes. See [SuperNode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/defining_slot_parameters_in_supernodes.htmldefining_slot_parameters_in_supernodes) for more information.
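For example, the common properties can be set on any node; the node located here is illustrative:
node = modeler.script.stream().findByType("type", None)  # any node type works for these properties
node.setPropertyValue("use_custom_name", True)
node.setPropertyValue("custom_name", "My Type node")
node.setPropertyValue("annotation", "Sets measurement levels and roles")
print(node.getPropertyValue("name"))  # read-only; returns the custom name just set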
| # Common node properties #
A number of properties are common to all nodes in SPSS Modeler\.
<!-- <table "summary="Common node properties" id="slot_parameters_common__table_t4z_qwy_ddb" class="defaultstyle" "> -->
Common node properties
Table 1\. Common node properties
| Property name | Data type | Property description |
| ----------------- | -------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `use_custom_name` | *flag* | |
| `name` | *string* | Read\-only property that reads the name (either auto or custom) for a node on the canvas\. |
| `custom_name` | *string* | Specifies a custom name for the node\. |
| `tooltip` | *string* | |
| `annotation` | *string* | |
| `keywords` | *string* | Structured slot that specifies a list of keywords associated with the object (for example, `["Keyword1" "Keyword2"]`)\. |
| `cache_enabled` | *flag* | |
| `node_type` | `source_supernode` <br> <br>`process_supernode` <br> <br>`terminal_supernode` <br> <br>all node names as specified for scripting | Read\-only property used to refer to a node by type\. For example, instead of referring to a node only by name, such as `real_income`, you can also specify the type, such as `userinputnode` or `filternode`\. |
<!-- </table "summary="Common node properties" id="slot_parameters_common__table_t4z_qwy_ddb" class="defaultstyle" "> -->
SuperNode\-specific properties are discussed separately, as with all other nodes\. See [SuperNode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/defining_slot_parameters_in_supernodes.html#defining_slot_parameters_in_supernodes) for more information\.
<!-- </article "role="article" "> -->
|
6F2CB7C072A05F7BE0C6CE2ECA39FC9A1BA5E107 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/slot_parameters_generatedmodels.html?context=cdpaas&locale=en | Model nugget node properties | Model nugget node properties
Refer to this section for a list of available properties for Model nuggets.
Model nugget nodes share the same common properties as other nodes. See [Common node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/slot_parameters_common.htmlslot_parameters_common) for more information.
| # Model nugget node properties #
Refer to this section for a list of available properties for Model nuggets\.
Model nugget nodes share the same common properties as other nodes\. See [Common node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/slot_parameters_common.html#slot_parameters_common) for more information\.
<!-- </article "role="article" "> -->
|
29DCFC3FB6EE0CCBA63E0FF3A797936DA9E0C874 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/slot_parameters_reference.html?context=cdpaas&locale=en | Properties reference overview | Properties reference overview
You can specify a number of different properties for nodes, flows, projects, and SuperNodes. Some properties are common to all nodes, such as name, annotation, and ToolTip, while others are specific to certain types of nodes. Other properties refer to high-level flow operations, such as caching or SuperNode behavior. Properties can be accessed through the standard user interface (for example, when you open the properties for a node) and can also be used in a number of other ways.
* Properties can be modified through scripts, as described in this section. For more information, see [Syntax for properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/slot_parameter_syntax.html).
* Node properties can be used in SuperNode parameters.
In the context of scripting within SPSS Modeler, node and flow properties are often called slot parameters. In this documentation, they are referred to as node properties or flow properties.
For more information about the scripting language, see [The scripting language](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_language_overview.html).
| # Properties reference overview #
You can specify a number of different properties for nodes, flows, projects, and SuperNodes\. Some properties are common to all nodes, such as name, annotation, and ToolTip, while others are specific to certain types of nodes\. Other properties refer to high\-level flow operations, such as caching or SuperNode behavior\. Properties can be accessed through the standard user interface (for example, when you open the properties for a node) and can also be used in a number of other ways\.
<!-- <ul> -->
* Properties can be modified through scripts, as described in this section\. For more information, see [Syntax for properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/slot_parameter_syntax.html)\.
* Node properties can be used in SuperNode parameters\.
<!-- </ul> -->
In the context of scripting within SPSS Modeler, node and flow properties are often called slot parameters\. In this documentation, they are referred to as node properties or flow properties\.
For more information about the scripting language, see [The scripting language](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_language_overview.html)\.
<!-- </article "role="article" "> -->
|
F127EFF442D2C1D1A1EA01B23E8135B502EF2E79 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/smotenodeslots.html?context=cdpaas&locale=en | smotenode properties | smotenode properties
The Synthetic Minority Over-sampling Technique (SMOTE) node provides an over-sampling algorithm to deal with imbalanced data sets. It provides an advanced method for balancing data. The SMOTE process node in SPSS Modeler is implemented in Python and requires the imbalanced-learn© Python library.
smotenode properties
Table 1. smotenode properties
smotenode properties Data type Property description
target field The target field.
sample_ratio string Enables a custom ratio value. The two options are Auto (sample_ratio_auto) or Set ratio (sample_ratio_manual).
sample_ratio_value float The ratio is the number of samples in the minority class over the number of samples in the majority class. It must be larger than 0 and less than or equal to 1. Default is auto.
enable_random_seed Boolean If set to true, the random_seed property will be enabled.
random_seed integer The seed used by the random number generator.
k_neighbours integer The number of nearest neighbors to be used for constructing synthetic samples. Default is 5.
m_neighbours integer The number of nearest neighbors to be used for determining if a minority sample is in danger. This option is only enabled with the SMOTE algorithm types borderline1 and borderline2. Default is 10.
algorithm string The type of SMOTE algorithm: regular, borderline1, or borderline2.
use_partition Boolean If set to true, only training data will be used for model building. Default is true.
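A minimal sketch using values from the table (the type name and target field are assumptions):
smotenode = modeler.script.stream().findByType("smote", None)  # type name assumed
smotenode.setPropertyValue("target", "Churn")  # hypothetical field
smotenode.setPropertyValue("sample_ratio", "sample_ratio_manual")
smotenode.setPropertyValue("sample_ratio_value", 0.5)
smotenode.setPropertyValue("algorithm", "borderline1")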
| # smotenode properties #
The Synthetic Minority Over\-sampling Technique (SMOTE) node provides an over\-sampling algorithm to deal with imbalanced data sets\. It provides an advanced method for balancing data\. The SMOTE process node in SPSS Modeler is implemented in Python and requires the imbalanced\-learn© Python library\.
<!-- <table "summary="smotenode properties" class="defaultstyle" "> -->
smotenode properties
Table 1\. smotenode properties
| `smotenode` properties | Data type | Property description |
| ---------------------- | --------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `target` | *field* | The target field\. |
| `sample_ratio` | *string* | Enables a custom ratio value\. The two options are Auto (`sample_ratio_auto`) or Set ratio (`sample_ratio_manual`)\. |
| `sample_ratio_value` | *float* | The ratio is the number of samples in the minority class over the number of samples in the majority class\. It must be larger than `0` and less than or equal to `1`\. Default is `auto`\. |
| `enable_random_seed` | *Boolean* | If set to `true`, the `random_seed` property will be enabled\. |
| `random_seed` | *integer* | The seed used by the random number generator\. |
| `k_neighbours` | *integer* | The number of nearest neighbors to be used for constructing synthetic samples\. Default is `5`\. |
| `m_neighbours` | *integer* | The number of nearest neighbors to be used for determining if a minority sample is in danger\. This option is only enabled with the SMOTE algorithm types `borderline1` and `borderline2`\. Default is `10`\. |
| `algorithm` | *string* | The type of SMOTE algorithm: `regular`, `borderline1`, or `borderline2`\. |
| `use_partition` | *Boolean* | If set to `true`, only training data will be used for model building\. Default is `true`\. |
<!-- </table "summary="smotenode properties" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
3259E737315294C6380ED46645AB8D073A5ED861 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/sortnodeslots.html?context=cdpaas&locale=en | sortnode properties | sortnode properties
 The Sort node sorts records into ascending or descending order based on the values of one or more fields.
sortnode properties
Table 1. sortnode properties
sortnode properties Data type Property description
keys list Specifies the fields you want to sort against. If no direction is specified, the default is used.
default_ascending flag Specifies the default sort order.
use_existing_keys flag Specifies whether sorting is optimized by using the previous sort order for fields that are already sorted.
existing_keys Specifies the fields that are already sorted and the direction in which they are sorted. Uses the same format as the keys property.
default_sort_order Ascending <br>Descending Specify whether, by default, records are sorted in ascending or descending order of the sort key values.
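As a sketch, keys is typically supplied as a list of field and direction pairs; the exact pair format shown here is an assumption, and the field names are hypothetical:
sortnode = modeler.script.stream().findByType("sort", None)  # type name assumed
sortnode.setPropertyValue("keys", [["Age", "Ascending"], ["Income", "Descending"]])  # pair format assumed
sortnode.setPropertyValue("default_ascending", False)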
| # sortnode properties #
 The Sort node sorts records into ascending or descending order based on the values of one or more fields\.
<!-- <table "summary="sortnode properties" id="sortnodeslots__table_d4z_vwy_ddb" class="defaultstyle" "> -->
sortnode properties
Table 1\. sortnode properties
| `sortnode` properties | Data type | Property description |
| --------------------- | ----------------------------- | --------------------------------------------------------------------------------------------------------------------------------------- |
| `keys` | *list* | Specifies the fields you want to sort against\. If no direction is specified, the default is used\. |
| `default_ascending` | *flag* | Specifies the default sort order\. |
| `use_existing_keys` | *flag* | Specifies whether sorting is optimized by using the previous sort order for fields that are already sorted\. |
| `existing_keys` | | Specifies the fields that are already sorted and the direction in which they are sorted\. Uses the same format as the `keys` property\. |
| `default_sort_order` | `Ascending` <br>`Descending` | Specify whether, by default, records are sorted in ascending or descending order of the sort key values\. |
<!-- </table "summary="sortnode properties" id="sortnodeslots__table_d4z_vwy_ddb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
F3DD7962CB3AA07C8C469EDE0C7852993AC3F290 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/source_nodes_slot_parameters.html?context=cdpaas&locale=en | Import node common properties | Import node common properties
Properties that are common to most import nodes are listed here, with information on specific nodes in the topics that follow.
Import node common properties
Table 1. Import node common properties
Property name Data type Property description
asset_type DataAsset <br>Connection Specify your data type: DataAsset or Connection.
asset_id string When DataAsset is set for the asset_type, this is the ID of the asset.
asset_name string When DataAsset is set for the asset_type, this is the name of the asset.
connection_id string When Connection is set for the asset_type, this is the ID of the connection.
connection_name string When Connection is set for the asset_type, this is the name of the connection.
connection_path string When Connection is set for the asset_type, this is the path of the connection.
user_settings string Escaped JSON string containing the interaction properties for the connection. Contact IBM for details about available interaction points.<br><br>Example:<br><br>user_settings: "{\"interactionProperties\":{\"write_mode\":\"write\",\"file_name\":\"output.csv\",\"file_format\":\"csv\",\"quote_numerics\":true,\"encoding\":\"utf-8\",\"first_line_header\":true,\"include_types\":false}}"<br><br>Note that these values will change based on the type of connection you're using.
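For example, given a reference importnode to an import node (located with findByType as elsewhere in this guide), the common properties are set like any others; the asset name here is hypothetical:
importnode.setPropertyValue("asset_type", "DataAsset")
importnode.setPropertyValue("asset_name", "sales.csv")  # hypothetical asset name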
| # Import node common properties #
Properties that are common to most import nodes are listed here, with information on specific nodes in the topics that follow\.
<!-- <table "summary="Import node common properties" class="defaultstyle" "> -->
Import node common properties
Table 1\. Import node common properties
| Property name | Data type | Property description |
| ----------------- | ----------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `asset_type` | `DataAsset` <br>`Connection` | Specify your data type: `DataAsset` or `Connection`\. |
| `asset_id` | *string* | When `DataAsset` is set for the `asset_type`, this is the ID of the asset\. |
| `asset_name` | *string* | When `DataAsset` is set for the `asset_type`, this is the name of the asset\. |
| `connection_id` | *string* | When `Connection` is set for the `asset_type`, this is the ID of the connection\. |
| `connection_name` | *string* | When `Connection` is set for the `asset_type`, this is the name of the connection\. |
| `connection_path` | *string* | When `Connection` is set for the `asset_type`, this is the path of the connection\. |
| `user_settings` | *string* | Escaped JSON string containing the interaction properties for the connection\. Contact IBM for details about available interaction points\.<br><br>Example:<br><br>`user_settings: "{\"interactionProperties\":{\"write_mode\":\"write\",\"file_name\":\"output.csv\",\"file_format\":\"csv\",\"quote_numerics\":true,\"encoding\":\"utf-8\",\"first_line_header\":true,\"include_types\":false}}"`<br><br>Note that these values will change based on the type of connection you're using\. |
<!-- </table "summary="Import node common properties" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
8F42BD98BE9767332CE949506A9E193393DA73FA | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/statisticsnodeslots.html?context=cdpaas&locale=en | statisticsnode properties | statisticsnode properties
The Statistics node provides basic summary information about numeric fields. It calculates summary statistics for individual fields and correlations between fields.
statisticsnode properties
Table 1. statisticsnode properties
statisticsnode properties Data type Property description
use_output_name flag Specifies whether a custom output name is used.
output_name string If use_output_name is true, specifies the name to use.
output_mode Screen <br>File Used to specify target location for output generated from the output node.
output_format Text (.txt) <br>HTML (.html) <br>Output (.cou) Used to specify the type of output.
full_filename string
examine list
correlate list
statistics [count mean sum min max range variance sdev semean median mode]
correlation_mode Probability <br>Absolute Specifies whether to label correlations by probability or absolute value.
label_correlations flag
weak_label string
medium_label string
strong_label string
weak_below_probability number When correlation_mode is set to Probability, specifies the cutoff value for weak correlations. This must be a value between 0 and 1—for example, 0.90.
strong_above_probability number Cutoff value for strong correlations.
weak_below_absolute number When correlation_mode is set to Absolute, specifies the cutoff value for weak correlations. This must be a value between 0 and 1—for example, 0.90.
strong_above_absolute number Cutoff value for strong correlations.
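A short sketch combining these properties (the type name and field names are assumptions):
statsnode = modeler.script.stream().findByType("statistics", None)  # type name assumed
statsnode.setPropertyValue("examine", ["Age", "Income"])  # hypothetical fields
statsnode.setPropertyValue("statistics", ["count", "mean", "sdev"])
statsnode.setPropertyValue("correlate", ["Age", "Income"])
statsnode.setPropertyValue("correlation_mode", "Absolute")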
| # statisticsnode properties #
The Statistics node provides basic summary information about numeric fields\. It calculates summary statistics for individual fields and correlations between fields\.
<!-- <table "summary="statisticsnode properties" id="statisticsnodeslots__table_hqp_xxy_ddb" class="defaultstyle" "> -->
statisticsnode properties
Table 1\. statisticsnode properties
| `statisticsnode` properties | Data type | Property description |
| --------------------------- | ----------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `use_output_name` | *flag* | Specifies whether a custom output name is used\. |
| `output_name` | *string* | If `use_output_name` is true, specifies the name to use\. |
| `output_mode` | `Screen` <br>`File` | Used to specify target location for output generated from the output node\. |
| `output_format` | `Text` (\.*txt*) <br>`HTML` (\.*html*) <br>`Output` (\.*cou*) | Used to specify the type of output\. |
| `full_filename` | *string* | |
| `examine` | *list* | |
| `correlate` | *list* | |
| `statistics` | `[count mean sum min max range variance sdev semean median mode]` | |
| `correlation_mode` | `Probability` <br>`Absolute` | Specifies whether to label correlations by probability or absolute value\. |
| `label_correlations` | *flag* | |
| `weak_label` | *string* | |
| `medium_label` | *string* | |
| `strong_label` | *string* | |
| `weak_below_probability` | *number* | When `correlation_mode` is set to `Probability`, specifies the cutoff value for weak correlations\. This must be a value between 0 and 1—for example, 0\.90\. |
| `strong_above_probability` | *number* | Cutoff value for strong correlations\. |
| `weak_below_absolute` | *number* | When `correlation_mode` is set to `Absolute`, specifies the cutoff value for weak correlations\. This must be a value between 0 and 1—for example, 0\.90\. |
| `strong_above_absolute` | *number* | Cutoff value for strong correlations\. |
<!-- </table "summary="statisticsnode properties" id="statisticsnodeslots__table_hqp_xxy_ddb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
5B85770138782723E09D9ED65F8655484D03BE44 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/stbnodeslots.html?context=cdpaas&locale=en | derive_stbnode properties | derive_stbnode properties
 The Space-Time-Boxes node derives Space-Time-Boxes from latitude, longitude, and timestamp fields. You can also identify frequent Space-Time-Boxes as hangouts.
Space-Time-Boxes node properties
Table 1. Space-Time-Boxes node properties
derive_stbnode properties Data type Property description
mode IndividualRecords <br>Hangouts
latitude_field field
longitude_field field
timestamp_field field
hangout_density density A single density. See densities for valid density values.
densities [density,density,..., density] Each density is a string (for example, STB_GH8_1DAY). Note that there are limits to which densities are valid. For the geohash, you can use values from GH1 to GH15. For the temporal part, you can use the following values: <br>EVER <br>1YEAR <br>1MONTH <br>1DAY <br>12HOURS <br>8HOURS <br>6HOURS <br>4HOURS <br>3HOURS <br>2HOURS <br>1HOUR <br>30MIN <br>15MIN <br>10MIN <br>5MIN <br>2MIN <br>1MIN <br>30SECS <br>15SECS <br>10SECS <br>5SECS <br>2SECS <br>1SEC
id_field field
qualifying_duration 1DAY <br>12HOURS <br>8HOURS <br>6HOURS <br>4HOURS <br>3HOURS <br>2HOURS <br>1HOUR <br>30MIN <br>15MIN <br>10MIN <br>5MIN <br>2MIN <br>1MIN <br>30SECS <br>15SECS <br>10SECS <br>5SECS <br>2SECS <br>1SECS Must be a string.
min_events integer Minimum valid integer value is 2.
qualifying_pct integer Must be in the range of 1 and 100.
add_extension_as Prefix <br>Suffix
name_extension string
span_stb_boundaries boolean Allow hangouts to span STB boundaries.
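As an illustrative sketch (the type name and field names are assumptions; the density value comes from the table):
stbnode = modeler.script.stream().findByType("derive_stb", None)  # type name assumed
stbnode.setPropertyValue("mode", "Hangouts")
stbnode.setPropertyValue("latitude_field", "Latitude")    # hypothetical fields
stbnode.setPropertyValue("longitude_field", "Longitude")
stbnode.setPropertyValue("timestamp_field", "EventTime")
stbnode.setPropertyValue("hangout_density", "STB_GH8_1DAY")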
| # derive\_stbnode properties #
 The Space\-Time\-Boxes node derives Space\-Time\-Boxes from latitude, longitude, and timestamp fields\. You can also identify frequent Space\-Time\-Boxes as hangouts\.
<!-- <table "summary="Space-Time-Boxes node properties" class="defaultstyle" "> -->
Space-Time-Boxes node properties
Table 1\. Space\-Time\-Boxes node properties
| `derive_stbnode` properties | Data type | Property description |
| --------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `mode` | `IndividualRecords` <br>`Hangouts` | |
| `latitude_field` | *field* | |
| `longitude_field` | *field* | |
| `timestamp_field` | *field* | |
| `hangout_density` | *density* | A single density\. See `densities` for valid density values\. |
| `densities` | \[*density*,*density*,\.\.\., *density*\] | Each density is a string (for example, `STB_GH8_1DAY`)\. Note that there are limits to which densities are valid\. For the geohash, you can use values from `GH1` to `GH15`\. For the temporal part, you can use the following values: <br>`EVER` <br>`1YEAR` <br>`1MONTH` <br>`1DAY` <br>`12HOURS` <br>`8HOURS` <br>`6HOURS` <br>`4HOURS` <br>`3HOURS` <br>`2HOURS` <br>`1HOUR` <br>`30MIN` <br>`15MIN` <br>`10MIN` <br>`5MIN` <br>`2MIN` <br>`1MIN` <br>`30SECS` <br>`15SECS` <br>`10SECS` <br>`5SECS` <br>`2SECS` <br>`1SEC` |
| `id_field` | *field* | |
| `qualifying_duration` | `1DAY` <br>`12HOURS` <br>`8HOURS` <br>`6HOURS` <br>`4HOURS` <br>`3HOURS` <br>`2HOURS` <br>`1HOUR` <br>`30MIN` <br>`15MIN` <br>`10MIN` <br>`5MIN` <br>`2MIN` <br>`1MIN` <br>`30SECS` <br>`15SECS` <br>`10SECS` <br>`5SECS` <br>`2SECS` <br>`1SECS` | Must be a string\. |
| `min_events` | *integer* | Minimum valid integer value is 2\. |
| `qualifying_pct` | *integer* | Must be in the range of 1 and 100\. |
| `add_extension_as` | `Prefix` <br>`Suffix` | |
| `name_extension` | *string* | |
| `span_stb_boundaries` | *boolean* | Allow hangouts to span STB boundaries\. |
<!-- </table "summary="Space-Time-Boxes node properties" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
5D193C88D3E3235EA441BB82CCEEAAE20BB3EFCC | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/stream_scripttab.html?context=cdpaas&locale=en | Flow scripts | Flow scripts
You can use scripts to customize operations within a particular flow, and they're saved with that flow. You can specify a particular execution order for the terminal nodes within a flow. You use the flow script settings to edit the script that's saved with the current flow.
To access the flow script settings:
1. Click the Flow Properties icon on the toolbar.
2. Open the Scripting section to work with scripts for the current flow. You can also launch the Expression Builder from here by clicking the calculator icon.
You can specify whether a script does or doesn't run when the flow runs. To run the script each time the flow runs, respecting the execution order of the script, select Run the script. This setting provides automation at the flow level for quicker model building. However, the default setting (Run all terminal nodes) ignores this script during flow execution.
| # Flow scripts #
You can use scripts to customize operations within a particular flow, and they're saved with that flow\. You can specify a particular execution order for the terminal nodes within a flow\. You use the flow script settings to edit the script that's saved with the current flow\.
To access the flow script settings:
<!-- <ol> -->
1. Click the Flow Properties icon on the toolbar\.
2. Open the Scripting section to work with scripts for the current flow\. You can also launch the Expression Builder from here by clicking the calculator icon\.
<!-- </ol> -->
You can specify whether a script does or doesn't run when the flow runs\. To run the script each time the flow runs, respecting the execution order of the script, select Run the script\. This setting provides automation at the flow level for quicker model building\. However, the default setting (Run all terminal nodes) ignores this script during flow execution\.
<!-- </article "role="article" "> -->
|
DA0357B0ADE596E1A23F676F76FF4304B97AEF2B | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/stream_scripttab_javalimits.html?context=cdpaas&locale=en | Jython code size limits | Jython code size limits
Jython compiles each script to Java bytecode, which the Java Virtual Machine (JVM) then runs. However, Java imposes a limit on the size of a single bytecode file, so when Jython attempts to load a bytecode file that exceeds this limit, the JVM can crash. SPSS Modeler is unable to prevent this from happening.
Ensure that you write your Jython scripts using good coding practices (such as minimizing duplicated code by using variables or functions to compute common intermediate values). If necessary, split your code over several source files or define it using modules, as these are compiled into separate bytecode files.
| # Jython code size limits #
Jython compiles each script to Java bytecode, which the Java Virtual Machine (JVM) then runs\. However, Java imposes a limit on the size of a single bytecode file, so when Jython attempts to load a bytecode file that exceeds this limit, the JVM can crash\. SPSS Modeler is unable to prevent this from happening\.
Ensure that you write your Jython scripts using good coding practices (such as minimizing duplicated code by using variables or functions to compute common intermediate values)\. If necessary, split your code over several source files or define it using modules, as these are compiled into separate bytecode files\.
<!-- </article "role="article" "> -->
|
AAC6535CAB0B4600A9683433FCAB805B2C4EAA53 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/structured_slot_parameters.html?context=cdpaas&locale=en | Structured properties | Structured properties
There are two ways in which scripting uses structured properties for increased clarity when parsing:
* To give structure to the names of properties for complex nodes, such as Type, Filter, or Balance nodes (as shown in the sketch after this list).
* To provide a format for specifying multiple properties at once.
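A minimal sketch of the first use, a keyed property on a Type node (the field name is hypothetical):
typenode = modeler.script.stream().findByType("type", None)
typenode.setKeyedPropertyValue("direction", "MyField", "Input")  # hypothetical field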
| # Structured properties #
There are two ways in which scripting uses structured properties for increased clarity when parsing:
<!-- <ul> -->
* To give structure to the names of properties for complex nodes, such as Type, Filter, or Balance nodes\.
* To provide a format for specifying multiple properties at once\.
<!-- </ul> -->
<!-- </article "role="article" "> -->
|
C64A69EBC1360788037B11E8B0DC5BB74D913819 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/svmnodeslots.html?context=cdpaas&locale=en | svmnode properties | svmnode properties
The Support Vector Machine (SVM) node enables you to classify data into one of two groups without overfitting. SVM works well with wide data sets, such as those with a very large number of input fields.
svmnode properties
Table 1. svmnode properties
svmnode Properties Values Property description
all_probabilities flag
stopping_criteria 1.0E-1 <br>1.0E-2 <br>1.0E-3 <br>1.0E-4 <br>1.0E-5 <br>1.0E-6 Determines when to stop the optimization algorithm.
regularization number Also known as the C parameter.
precision number Used only if measurement level of target field is Continuous.
kernel RBF <br>Polynomial <br>Sigmoid <br>Linear Type of kernel function used for the transformation. RBF is the default.
rbf_gamma number Used only if kernel is RBF.
gamma number Used only if kernel is Polynomial or Sigmoid.
bias number
degree number Used only if kernel is Polynomial.
calculate_variable_importance flag
calculate_raw_propensities flag
calculate_adjusted_propensities flag
adjusted_propensity_partition Test <br>Validation
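A minimal sketch using values from the table (the type name is an assumption):
svmnode = modeler.script.stream().findByType("svm", None)  # type name assumed
svmnode.setPropertyValue("kernel", "Polynomial")
svmnode.setPropertyValue("degree", 3)       # used only with the Polynomial kernel
svmnode.setPropertyValue("gamma", 1.0)      # used only with Polynomial or Sigmoid kernels
svmnode.setPropertyValue("calculate_variable_importance", True)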
| # svmnode properties #
The Support Vector Machine (SVM) node enables you to classify data into one of two groups without overfitting\. SVM works well with wide data sets, such as those with a very large number of input fields\.
<!-- <table "summary="svmnode properties" class="defaultstyle" "> -->
svmnode properties
Table 1\. svmnode properties
| `svmnode` Properties | Values | Property description |
| --------------------------------- | ------------------------------------------------------------------------------ | ---------------------------------------------------------------------------- |
| `all_probabilities` | *flag* | |
| `stopping_criteria` | `1.0E-1` <br>`1.0E-2` <br>`1.0E-3` <br>`1.0E-4` <br>`1.0E-5` <br>`1.0E-6` | Determines when to stop the optimization algorithm\. |
| `regularization` | *number* | Also known as the C parameter\. |
| `precision` | *number* | Used only if measurement level of target field is `Continuous`\. |
| `kernel` | `RBF` <br>`Polynomial` <br>`Sigmoid` <br>`Linear` | Type of kernel function used for the transformation\. `RBF` is the default\. |
| `rbf_gamma` | *number* | Used only if `kernel` is `RBF`\. |
| `gamma` | *number* | Used only if `kernel` is `Polynomial` or `Sigmoid`\. |
| `bias` | *number* | |
| `degree` | *number* | Used only if `kernel` is `Polynomial`\. |
| `calculate_variable_importance` | *flag* | |
| `calculate_raw_propensities` | *flag* | |
| `calculate_adjusted_propensities` | *flag* | |
| `adjusted_propensity_partition` | `Test` <br>`Validation` | |
<!-- </table "summary="svmnode properties" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
BCAE38614C57F1ABB775C4C9372DC02531830659 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/svmnuggetnodeslots.html?context=cdpaas&locale=en | applysvmnode properties | applysvmnode properties
You can use SVM modeling nodes to generate an SVM model nugget. The scripting name of this model nugget is applysvmnode. For more information on scripting the modeling node itself, see [svmnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/svmnodeslots.htmlsvmnodeslots).
applysvmnode properties
Table 1. applysvmnode properties
applysvmnode Properties Values Property description
all_probabilities flag
calculate_raw_propensities flag
calculate_adjusted_propensities flag
enable_sql_generation false <br>native When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations.
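For example (the type name is an assumption based on the nugget's scripting name):
svmnugget = modeler.script.stream().findByType("applysvm", None)  # type name assumed
svmnugget.setPropertyValue("all_probabilities", True)
svmnugget.setPropertyValue("enable_sql_generation", "native")  # push SQL back to the database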
| # applysvmnode properties #
You can use SVM modeling nodes to generate an SVM model nugget\. The scripting name of this model nugget is *applysvmnode*\. For more information on scripting the modeling node itself, see [svmnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/svmnodeslots.html#svmnodeslots)\.
<!-- <table "summary="applysvmnode properties" id="svmnuggetnodeslots__table_ax1_hyy_ddb" class="defaultstyle" "> -->
applysvmnode properties
Table 1\. applysvmnode properties
| `applysvmnode` Properties | Values | Property description |
| --------------------------------- | --------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------ |
| `all_probabilities` | *flag* | |
| `calculate_raw_propensities` | *flag* | |
| `calculate_adjusted_propensities` | *flag* | |
| `enable_sql_generation` | `false` <br>`native` | When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations\. |
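
For example, you might locate an existing SVM nugget in the flow and enable propensity scoring (a minimal sketch; `"applysvm"` as the nugget's type string for `findByType` is an assumption):

```python
# Minimal sketch: configure an existing SVM model nugget.
stream = modeler.script.stream()
nugget = stream.findByType("applysvm", None)   # first applysvm node, any name
if nugget is not None:
    nugget.setPropertyValue("calculate_raw_propensities", True)
    nugget.setPropertyValue("calculate_adjusted_propensities", True)
    nugget.setPropertyValue("enable_sql_generation", "native")
```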
<!-- </table "summary="applysvmnode properties" id="svmnuggetnodeslots__table_ax1_hyy_ddb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
3F5D0FD7E429FEDBFC62DFC9BAB41B3CC5FB4E4F | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/tablenodeslots.html?context=cdpaas&locale=en | tablenode properties | tablenode properties
The Table node displays data in table format. This is useful whenever you need to inspect your data values.
Note: Some of the properties on this page might not be available in your platform.
tablenode properties
Table 1. tablenode properties
tablenode properties Data type Property description
full_filename string If disk, data, or HTML output, the name of the output file.
use_output_name flag Specifies whether a custom output name is used.
output_name string If use_output_name is true, specifies the name to use.
output_mode Screen <br>File Used to specify target location for output generated from the output node.
output_format Formatted (.tab) <br>Delimited (.csv) <br>HTML (.html) <br>Output (.cou) Used to specify the type of output.
transpose_data flag Transposes the data before export so that rows represent fields and columns represent records.
paginate_output flag When the output_format is HTML, causes the output to be separated into pages.
lines_per_page number When used with paginate_output, specifies the lines per page of output.
highlight_expr string
output string A read-only property that holds a reference to the last table built by the node.
value_labels [[Value LabelString] <br>[Value LabelString] ...] Used to specify labels for value pairs.
display_places integer Sets the number of decimal places for the field when displayed (applies only to fields with REAL storage). A value of –1 will use the flow default.
export_places integer Sets the number of decimal places for the field when exported (applies only to fields with REAL storage). A value of –1 will use the stream default.
decimal_separator DEFAULT <br>PERIOD <br>COMMA Sets the decimal separator for the field (applies only to fields with REAL storage).
date_format "DDMMYY" "MMDDYY" "YYMMDD" "YYYYMMDD" "YYYYDDD" DAY MONTH "DD-MM-YY" "DD-MM-YYYY" "MM-DD-YY" "MM-DD-YYYY" "DD-MON-YY" "DD-MON-YYYY" "YYYY-MM-DD" "DD.MM.YY" "DD.MM.YYYY" "MM.DD.YYYY" "DD.MON.YY" "DD.MON.YYYY" "DD/MM/YY" "DD/MM/YYYY" "MM/DD/YY" "MM/DD/YYYY" "DD/MON/YY" "DD/MON/YYYY" MON YYYY q Q YYYY ww WK YYYY Sets the date format for the field (applies only to fields with DATE or TIMESTAMP storage).
time_format "HHMMSS" <br>"HHMM" <br>"MMSS" <br>"HH:MM:SS" <br>"HH:MM" <br>"MM:SS" <br>"(H)H:(M)M:(S)S" <br>"(H)H:(M)M" <br>"(M)M:(S)S" <br>"HH.MM.SS" <br>"HH.MM" <br>"MM.SS" <br>"(H)H.(M)M.(S)S" <br>"(H)H.(M)M" <br>"(M)M.(S)S" Sets the time format for the field (applies only to fields with TIME or TIMESTAMP storage).
column_width integer Sets the column width for the field. A value of –1 will set column width to Auto.
justify AUTO <br>CENTER <br>LEFT <br>RIGHT Sets the column justification for the field.
| # tablenode properties #
The Table node displays data in table format\. This is useful whenever you need to inspect your data values\.
Note: Some of the properties on this page might not be available in your platform\.
<!-- <table "summary="tablenode properties" class="defaultstyle" "> -->
tablenode properties
Table 1\. tablenode properties
| `tablenode` properties | Data type | Property description |
| ---------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `full_filename` | *string* | If disk, data, or HTML output, the name of the output file\. |
| `use_output_name` | *flag* | Specifies whether a custom output name is used\. |
| `output_name` | *string* | If `use_output_name` is true, specifies the name to use\. |
| `output_mode` | `Screen` <br>`File` | Used to specify target location for output generated from the output node\. |
| `output_format` | `Formatted` (\.*tab*) <br>`Delimited` (\.*csv*) <br>`HTML` (\.*html*) <br>`Output` (\.*cou*) | Used to specify the type of output\. |
| `transpose_data` | *flag* | Transposes the data before export so that rows represent fields and columns represent records\. |
| `paginate_output` | *flag* | When the `output_format` is `HTML`, causes the output to be separated into pages\. |
| `lines_per_page` | *number* | When used with `paginate_output`, specifies the lines per page of output\. |
| `highlight_expr` | *string* | |
| `output` | *string* | A read\-only property that holds a reference to the last table built by the node\. |
| `value_labels` | *\[\[Value LabelString\]* <br>*\[Value LabelString\] \.\.\.\]* | Used to specify labels for value pairs\. |
| `display_places` | *integer* | Sets the number of decimal places for the field when displayed (applies only to fields with REAL storage)\. A value of `–1` will use the flow default\. |
| `export_places` | *integer* | Sets the number of decimal places for the field when exported (applies only to fields with REAL storage)\. A value of `–1` will use the flow default\. |
| `decimal_separator` | `DEFAULT` <br>`PERIOD` <br>`COMMA` | Sets the decimal separator for the field (applies only to fields with REAL storage)\. |
| `date_format` | `"DDMMYY"` <br>`"MMDDYY"` <br>`"YYMMDD"` <br>`"YYYYMMDD"` <br>`"YYYYDDD"` <br>`DAY` <br>`MONTH` <br>`"DD-MM-YY"` <br>`"DD-MM-YYYY"` <br>`"MM-DD-YY"` <br>`"MM-DD-YYYY"` <br>`"DD-MON-YY"` <br>`"DD-MON-YYYY"` <br>`"YYYY-MM-DD"` <br>`"DD.MM.YY"` <br>`"DD.MM.YYYY"` <br>`"MM.DD.YYYY"` <br>`"DD.MON.YY"` <br>`"DD.MON.YYYY"` <br>`"DD/MM/YY"` <br>`"DD/MM/YYYY"` <br>`"MM/DD/YY"` <br>`"MM/DD/YYYY"` <br>`"DD/MON/YY"` <br>`"DD/MON/YYYY"` <br>`MON YYYY` <br>`q Q YYYY` <br>`ww WK YYYY` | Sets the date format for the field (applies only to fields with `DATE` or `TIMESTAMP` storage)\. |
| `time_format` | `"HHMMSS"` <br>`"HHMM"` <br>`"MMSS"` <br>`"HH:MM:SS"` <br>`"HH:MM"` <br>`"MM:SS"` <br>`"(H)H:(M)M:(S)S"` <br>`"(H)H:(M)M"` <br>`"(M)M:(S)S"` <br>`"HH.MM.SS"` <br>`"HH.MM"` <br>`"MM.SS"` <br>`"(H)H.(M)M.(S)S"` <br>`"(H)H.(M)M"` <br>`"(M)M.(S)S"` | Sets the time format for the field (applies only to fields with `TIME` or `TIMESTAMP` storage)\. |
| `column_width` | *integer* | Sets the column width for the field\. A value of `–1` will set column width to `Auto`\. |
| `justify` | `AUTO` <br>`CENTER` <br>`LEFT` <br>`RIGHT` | Sets the column justification for the field\. |
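
For example, the following fragment writes a Table node's output to a delimited file instead of the screen (a minimal sketch, assuming `"table"` as the node creation type; the output path is a hypothetical placeholder):

```python
# Minimal sketch: send Table node output to a .csv file.
stream = modeler.script.stream()
table = stream.createAt("table", "Inspect data", 400, 100)
table.setPropertyValue("output_mode", "File")
table.setPropertyValue("output_format", "Delimited")
table.setPropertyValue("full_filename", "C:/temp/table_output.csv")  # hypothetical path
table.setPropertyValue("use_output_name", True)
table.setPropertyValue("output_name", "My table output")
```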
<!-- </table "summary="tablenode properties" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
85C99B52BBBC96007BD819861E675C61D7B742CA | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/tcmnodeslots.html?context=cdpaas&locale=en | tcmnode properties | tcmnode properties
Temporal causal modeling attempts to discover key causal relationships in time series data. In temporal causal modeling, you specify a set of target series and a set of candidate inputs to those targets. The procedure then builds an autoregressive time series model for each target and includes only those inputs that have the most significant causal relationship with the target.
tcmnode properties
Table 1. tcmnode properties
tcmnode Properties Values Property description
custom_fields Boolean
dimensionlist [dimension1 ... dimensionN]
data_struct Multiple <br>Single
metric_fields fields
both_target_and_input [f1 ... fN]
targets [f1 ... fN]
candidate_inputs [f1 ... fN]
forced_inputs [f1 ... fN]
use_timestamp Timestamp <br>Period
input_interval None <br>Unknown <br>Year <br>Quarter <br>Month <br>Week <br>Day <br>Hour <br>Hour_nonperiod <br>Minute <br>Minute_nonperiod <br>Second <br>Second_nonperiod
period_field string
period_start_value integer
num_days_per_week integer
start_day_of_week Sunday <br>Monday <br>Tuesday <br>Wednesday <br>Thursday <br>Friday <br>Saturday
num_hours_per_day integer
start_hour_of_day integer
timestamp_increments integer
cyclic_increments integer
cyclic_periods list
output_interval None <br>Year <br>Quarter <br>Month <br>Week <br>Day <br>Hour <br>Minute <br>Second
is_same_interval Same <br>Notsame
cross_hour Boolean
aggregate_and_distribute list
aggregate_default Mean <br>Sum <br>Mode <br>Min <br>Max
distribute_default Mean <br>Sum
group_default Mean <br>Sum <br>Mode <br>Min <br>Max
missing_imput Linear_interp <br>Series_mean <br>K_mean <br>K_meridian <br>Linear_trend <br>None
k_mean_param integer
k_median_param integer
missing_value_threshold integer
conf_level integer
max_num_predictor integer
max_lag integer
epsilon number
threshold integer
is_re_est Boolean
num_targets integer
percent_targets integer
fields_display list
series_dispaly list
network_graph_for_target Boolean
sign_level_for_target number
fit_and_outlier_for_target Boolean
sum_and_para_for_target Boolean
impact_diag_for_target Boolean
impact_diag_type_for_target Effect <br>Cause <br>Both
impact_diag_level_for_target integer
series_plot_for_target Boolean
res_plot_for_target Boolean
top_input_for_target Boolean
forecast_table_for_target Boolean
same_as_for_target Boolean
network_graph_for_series Boolean
sign_level_for_series number
fit_and_outlier_for_series Boolean
sum_and_para_for_series Boolean
impact_diagram_for_series Boolean
impact_diagram_type_for_series Effect <br>Cause <br>Both
impact_diagram_level_for_series integer
series_plot_for_series Boolean
residual_plot_for_series Boolean
forecast_table_for_series Boolean
outlier_root_cause_analysis Boolean
causal_levels integer
outlier_table Interactive <br>Pivot <br>Both
rmsp_error Boolean
bic Boolean
r_square Boolean
outliers_over_time Boolean
series_transormation Boolean
use_estimation_period Boolean
estimation_period Times <br>Observation
observations list
observations_type Latest <br>Earliest
observations_num integer
observations_exclude integer
extend_records_into_future Boolean
forecastperiods integer
max_num_distinct_values integer
display_targets FIXEDNUMBER <br>PERCENTAGE
goodness_fit_measure ROOTMEAN <br>BIC <br>RSQUARE
top_input_for_series Boolean
aic Boolean
rmse Boolean
date_time_field field Time/Date field
auto_detect_lag Boolean This setting specifies the number of lag terms for each input in the model for each target.
numoflags Integer By default, the number of lag terms is automatically determined from the time interval that is used for the analysis.
re_estimate Boolean If you already generated a temporal causal model, select this option to reuse the criteria settings that are specified for that model, rather than building a new model.
display_targets "FIXEDNUMBER" <br>"PERCENTAGE" By default, output is displayed for the targets that are associated with the 10 best-fitting models, as determined by the R square value. You can specify a different fixed number of best-fitting models or you can specify a percentage of best-fitting models.
| # tcmnode properties #
Temporal causal modeling attempts to discover key causal relationships in time series data\. In temporal causal modeling, you specify a set of target series and a set of candidate inputs to those targets\. The procedure then builds an autoregressive time series model for each target and includes only those inputs that have the most significant causal relationship with the target\.
<!-- <table "summary="tcmnode properties" class="defaultstyle" "> -->
tcmnode properties
Table 1\. tcmnode properties
| `tcmnode` Properties | Values | Property description |
| --------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `custom_fields` | *Boolean* | |
| `dimensionlist` | \[*dimension1 \.\.\. dimensionN*\] | |
| `data_struct` | `Multiple` <br>`Single` | |
| `metric_fields` | *fields* | |
| `both_target_and_input` | \[*f1 \.\.\. fN*\] | |
| `targets` | \[*f1 \.\.\. fN*\] | |
| `candidate_inputs` | \[*f1 \.\.\. fN*\] | |
| `forced_inputs` | \[*f1 \.\.\. fN*\] | |
| `use_timestamp` | `Timestamp` <br>`Period` | |
| `input_interval` | `None` <br>`Unknown` <br>`Year` <br>`Quarter` <br>`Month` <br>`Week` <br>`Day` <br>`Hour` <br>`Hour_nonperiod` <br>`Minute` <br>`Minute_nonperiod` <br>`Second` <br>`Second_nonperiod` | |
| `period_field` | *string* | |
| `period_start_value` | *integer* | |
| `num_days_per_week` | *integer* | |
| `start_day_of_week` | `Sunday` <br>`Monday` <br>`Tuesday` <br>`Wednesday` <br>`Thursday` <br>`Friday` <br>`Saturday` | |
| `num_hours_per_day` | *integer* | |
| `start_hour_of_day` | *integer* | |
| `timestamp_increments` | *integer* | |
| `cyclic_increments` | *integer* | |
| `cyclic_periods` | *list* | |
| `output_interval` | `None` <br>`Year` <br>`Quarter` <br>`Month` <br>`Week` <br>`Day` <br>`Hour` <br>`Minute` <br>`Second` | |
| `is_same_interval` | `Same` <br>`Notsame` | |
| `cross_hour` | *Boolean* | |
| `aggregate_and_distribute` | *list* | |
| `aggregate_default` | `Mean` <br>`Sum` <br>`Mode` <br>`Min` <br>`Max` | |
| `distribute_default` | `Mean` <br>`Sum` | |
| `group_default` | `Mean` <br>`Sum` <br>`Mode` <br>`Min` <br>`Max` | |
| `missing_imput` | `Linear_interp` <br>`Series_mean` <br>`K_mean` <br>`K_meridian` <br>`Linear_trend` <br>`None` | |
| `k_mean_param` | *integer* | |
| `k_median_param` | *integer* | |
| `missing_value_threshold` | *integer* | |
| `conf_level` | *integer* | |
| `max_num_predictor` | *integer* | |
| `max_lag` | *integer* | |
| `epsilon` | *number* | |
| `threshold` | *integer* | |
| `is_re_est` | *Boolean* | |
| `num_targets` | *integer* | |
| `percent_targets` | *integer* | |
| `fields_display` | *list* | |
| `series_dispaly` | *list* | |
| `network_graph_for_target` | *Boolean* | |
| `sign_level_for_target` | *number* | |
| `fit_and_outlier_for_target` | *Boolean* | |
| `sum_and_para_for_target` | *Boolean* | |
| `impact_diag_for_target` | *Boolean* | |
| `impact_diag_type_for_target` | `Effect` <br>`Cause` <br>`Both` | |
| `impact_diag_level_for_target` | *integer* | |
| `series_plot_for_target` | *Boolean* | |
| `res_plot_for_target` | *Boolean* | |
| `top_input_for_target` | *Boolean* | |
| `forecast_table_for_target` | *Boolean* | |
| `same_as_for_target` | *Boolean* | |
| `network_graph_for_series` | *Boolean* | |
| `sign_level_for_series` | *number* | |
| `fit_and_outlier_for_series` | *Boolean* | |
| `sum_and_para_for_series` | *Boolean* | |
| `impact_diagram_for_series` | *Boolean* | |
| `impact_diagram_type_for_series` | `Effect` <br>`Cause` <br>`Both` | |
| `impact_diagram_level_for_series` | *integer* | |
| `series_plot_for_series` | *Boolean* | |
| `residual_plot_for_series` | *Boolean* | |
| `forecast_table_for_series` | *Boolean* | |
| `outlier_root_cause_analysis` | *Boolean* | |
| `causal_levels` | *integer* | |
| `outlier_table` | `Interactive` <br>`Pivot` <br>`Both` | |
| `rmsp_error` | *Boolean* | |
| `bic` | *Boolean* | |
| `r_square` | *Boolean* | |
| `outliers_over_time` | *Boolean* | |
| `series_transormation` | *Boolean* | |
| `use_estimation_period` | *Boolean* | |
| `estimation_period` | `Times` <br>`Observation` | |
| `observations` | *list* | |
| `observations_type` | `Latest` <br>`Earliest` | |
| `observations_num` | *integer* | |
| `observations_exclude` | *integer* | |
| `extend_records_into_future` | *Boolean* | |
| `forecastperiods` | *integer* | |
| `max_num_distinct_values` | *integer* | |
| `display_targets` | `FIXEDNUMBER` <br>`PERCENTAGE` | By default, output is displayed for the targets that are associated with the 10 best\-fitting models, as determined by the R square value\. You can specify a different fixed number of best\-fitting models, or a percentage of best\-fitting models\. |
| `goodness_fit_measure` | `ROOTMEAN` <br>`BIC` <br>`RSQUARE` | |
| `top_input_for_series` | *Boolean* | |
| `aic` | *Boolean* | |
| `rmse` | *Boolean* | |
| `date_time_field` | *field* | Time/Date field |
| `auto_detect_lag` | *Boolean* | By default, the number of lag terms for each input is determined automatically from the time interval that is used for the analysis\. |
| `numoflags` | *Integer* | Specifies the number of lag terms for each input in the model for each target\. |
| `re_estimate` | *Boolean* | If you already generated a temporal causal model, select this option to reuse the criteria settings that are specified for that model, rather than building a new model\. |
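
For example, the following fragment sets up a TCM node with explicit targets and candidate inputs (a minimal sketch, assuming `"tcm"` as the node creation type; the field names are hypothetical):

```python
# Minimal sketch: temporal causal model over hypothetical fields.
stream = modeler.script.stream()
tcm = stream.createAt("tcm", "TCM builder", 200, 100)
tcm.setPropertyValue("custom_fields", True)
tcm.setPropertyValue("targets", ["Sales"])
tcm.setPropertyValue("candidate_inputs", ["AdSpend", "Price"])
tcm.setPropertyValue("date_time_field", "Date")
tcm.setPropertyValue("max_lag", 5)
tcm.setPropertyValue("display_targets", "FIXEDNUMBER")
tcm.setPropertyValue("num_targets", 10)
```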
| # applytcmnode properties #
You can use Temporal Causal Modeling (TCM) modeling nodes to generate a TCM model nugget\. The scripting name of this model nugget is *applytcmnode*\. For more information on scripting the modeling node itself, see [tcmnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/tcmnodeslots.html#tcmnodeslots)\.
<!-- <table "summary="applytcmnode properties" class="defaultstyle" "> -->
applytcmnode properties
Table 1\. applytcmnode properties
| `applytcmnode` Properties | Values | Property description |
| ------------------------- | --------- | -------------------- |
| `ext_future` | *boolean* | |
| `ext_future_num` | *integer* | |
| `noise_res` | *boolean* | |
| `conf_limits` | *boolean* | |
| `target_fields` | *list* | |
| `target_series` | *list* | |
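
For example, you might extend a TCM nugget's forecasts into the future (a minimal sketch; `"applytcm"` as the nugget's type string for `findByType` is an assumption):

```python
# Minimal sketch: configure an existing TCM model nugget.
stream = modeler.script.stream()
nugget = stream.findByType("applytcm", None)
if nugget is not None:
    nugget.setPropertyValue("ext_future", True)
    nugget.setPropertyValue("ext_future_num", 12)   # forecast 12 periods ahead
    nugget.setPropertyValue("conf_limits", True)
```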
<!-- </table "summary="applytcmnode properties" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
5062008D59B761C5CF7F32F131021EA81A03B048 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/timeplotnodeslots.html?context=cdpaas&locale=en | timeplotnode properties | timeplotnode properties
The Time Plot node displays one or more sets of time series data. Typically, you would first use a Time Intervals node to create a TimeLabel field, which would be used to label the x axis.
timeplotnode properties
Table 1. timeplotnode properties
timeplotnode properties Data type Property description
plot_series SeriesModels
use_custom_x_field flag
x_field field
y_fields list
panel flag
normalize flag
line flag
points flag
point_type Rectangle <br>Dot <br>Triangle <br>Hexagon <br>Plus <br>Pentagon <br>Star <br>BowTie <br>HorizontalDash <br>VerticalDash <br>IronCross <br>Factory <br>House <br>Cathedral <br>OnionDome <br>ConcaveTriangleOblateGlobe <br>CatEye <br>FourSidedPillow <br>RoundRectangle <br>Fan
smoother flag You can add smoothers to the plot only if you set panel to True.
use_records_limit flag
records_limit integer
symbol_size number Specifies a symbol size.
panel_layout HorizontalVertical
use_grid boolean Display grid lines.
| # timeplotnode properties #
The Time Plot node displays one or more sets of time series data\. Typically, you would first use a Time Intervals node to create a *TimeLabel* field, which would be used to label the *x* axis\.
<!-- <table "summary="timeplotnode properties" id="timeplotnodeslots__table_zjj_myy_ddb" class="defaultstyle" "> -->
timeplotnode properties
Table 1\. timeplotnode properties
| `timeplotnode` properties | Data type | Property description |
| ------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------- |
| `plot_series` | `Series` <br>`Models` | |
| `use_custom_x_field` | *flag* | |
| `x_field` | *field* | |
| `y_fields` | *list* | |
| `panel` | *flag* | |
| `normalize` | *flag* | |
| `line` | *flag* | |
| `points` | *flag* | |
| `point_type` | `Rectangle` <br>`Dot` <br>`Triangle` <br>`Hexagon` <br>`Plus` <br>`Pentagon` <br>`Star` <br>`BowTie` <br>`HorizontalDash` <br>`VerticalDash` <br>`IronCross` <br>`Factory` <br>`House` <br>`Cathedral` <br>`OnionDome` <br>`ConcaveTriangle` <br>`OblateGlobe` <br>`CatEye` <br>`FourSidedPillow` <br>`RoundRectangle` <br>`Fan` | |
| `smoother` | *flag* | You can add smoothers to the plot only if you set `panel` to `True`\. |
| `use_records_limit` | *flag* | |
| `records_limit` | *integer* | |
| `symbol_size` | *number* | Specifies a symbol size\. |
| `panel_layout` | `Horizontal` <br>`Vertical` | |
| `use_grid` | *boolean* | Display grid lines\. |
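
For example, the following fragment plots two series against a custom *x* field in panels (a minimal sketch, assuming `"timeplot"` as the node creation type; the field names are hypothetical):

```python
# Minimal sketch: paneled time plot of two hypothetical series.
stream = modeler.script.stream()
plot = stream.createAt("timeplot", "Series plot", 400, 200)
plot.setPropertyValue("use_custom_x_field", True)
plot.setPropertyValue("x_field", "Date")
plot.setPropertyValue("y_fields", ["Sales", "Forecast"])
plot.setPropertyValue("panel", True)
plot.setPropertyValue("smoother", True)   # smoothers require panel to be True
```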
<!-- </table "summary="timeplotnode properties" id="timeplotnodeslots__table_zjj_myy_ddb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
76B3F98C842554781D96B8DDE05A74D4D78B4E7A | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/timeser_as_nodeslots.html?context=cdpaas&locale=en | ts properties | ts properties
The Time Series node estimates exponential smoothing, univariate Autoregressive Integrated Moving Average (ARIMA), and multivariate ARIMA (or transfer function) models for time series data and produces forecasts of future performance.
ts properties
Table 1. ts properties
ts Properties Values Property description
targets field The Time Series node forecasts one or more targets, optionally using one or more input fields as predictors. Frequency and weight fields are not used. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information.
candidate_inputs [field1 ... fieldN] Input or predictor fields used by the model.
use_period flag
date_time_field field
input_interval None <br>Unknown <br>Year <br>Quarter <br>Month <br>Week <br>Day <br>Hour <br>Hour_nonperiod <br>Minute <br>Minute_nonperiod <br>Second <br>Second_nonperiod
period_field field
period_start_value integer
num_days_per_week integer
start_day_of_week Sunday <br>Monday <br>Tuesday <br>Wednesday <br>Thursday <br>Friday <br>Saturday
num_hours_per_day integer
start_hour_of_day integer
timestamp_increments integer
cyclic_increments integer
cyclic_periods list
output_interval None <br>Year <br>Quarter <br>Month <br>Week <br>Day <br>Hour <br>Minute <br>Second
is_same_interval flag
cross_hour flag
aggregate_and_distribute list
aggregate_default Mean <br>Sum <br>Mode <br>Min <br>Max
distribute_default Mean <br>Sum
group_default Mean <br>Sum <br>Mode <br>Min <br>Max
missing_imput Linear_interp <br>Series_mean <br>K_mean <br>K_median <br>Linear_trend
k_span_points integer
use_estimation_period flag
estimation_period Observations <br>Times
date_estimation list Only available if you use date_time_field
period_estimation list Only available if you use use_period
observations_type Latest <br>Earliest
observations_num integer
observations_exclude integer
method ExpertModeler <br>Exsmooth <br>Arima
expert_modeler_method ExpertModeler <br>Exsmooth <br>Arima
consider_seasonal flag
detect_outliers flag
expert_outlier_additive flag
expert_outlier_level_shift flag
expert_outlier_innovational flag
expert_outlier_level_shift flag
expert_outlier_transient flag
expert_outlier_seasonal_additive flag
expert_outlier_local_trend flag
expert_outlier_additive_patch flag
consider_newesmodels flag
exsmooth_model_type Simple <br>HoltsLinearTrend <br>BrownsLinearTrend <br>DampedTrend <br>SimpleSeasonal <br>WintersAdditive <br>WintersMultiplicative <br>DampedTrendAdditive <br>DampedTrendMultiplicative <br>MultiplicativeTrendAdditive <br>MultiplicativeSeasonal <br>MultiplicativeTrendMultiplicative <br>MultiplicativeTrend Specifies the Exponential Smoothing method. Default is Simple.
futureValue_type_method Compute <br>specify If Compute is used, the system computes the Future Values for the forecast period for each predictor.<br><br>For each predictor, you can choose from a list of functions (blank, mean of recent points, most recent value) or use specify to enter values manually. To specify individual fields and properties, use the extend_metric_values property. For example:<br><br>set :ts.futureValue_type_method="specify" set :ts.extend_metric_values=[{'Market_1','USER_SPECIFY', 1,2,3]}, {'Market_2','MOST_RECENT_VALUE', ''},{'Market_3','RECENT_POINTS_MEAN', ''}]
exsmooth_transformation_type None <br>SquareRoot <br>NaturalLog
arima.p integer
arima.d integer
arima.q integer
arima.sp integer
arima.sd integer
arima.sq integer
arima_transformation_type None <br>SquareRoot <br>NaturalLog
arima_include_constant flag
tf_arima.p.fieldname integer For transfer functions.
tf_arima.d.fieldname integer For transfer functions.
tf_arima.q.fieldname integer For transfer functions.
tf_arima.sp.fieldname integer For transfer functions.
tf_arima.sd.fieldname integer For transfer functions.
tf_arima.sq.fieldname integer For transfer functions.
tf_arima.delay.fieldname integer For transfer functions.
tf_arima.transformation_type.fieldname None <br>SquareRoot <br>NaturalLog For transfer functions.
arima_detect_outliers flag
arima_outlier_additive flag
arima_outlier_level_shift flag
arima_outlier_innovational flag
arima_outlier_transient flag
arima_outlier_seasonal_additive flag
arima_outlier_local_trend flag
arima_outlier_additive_patch flag
max_lags integer
cal_PI flag
conf_limit_pct real
events fields
continue flag
scoring_model_only flag Use for models with very large numbers (tens of thousands) of time series.
forecastperiods integer
extend_records_into_future flag
extend_metric_values fields Allows you to provide future values for predictors.
conf_limits flag
noise_res flag
max_models_output integer Controls how many models are shown in output. Default is 10. Models are not shown in output if the total number of models built exceeds this value. Models are still available for scoring.
missing_value_threshold double Computes data quality measures for the time variable and for input data corresponding to each time series. If the data quality score is lower than this threshold, the corresponding time series will be discarded.
compute_future_values_input boolean False: Compute future values of inputs. <br>True: Select fields whose values you wish to add to the data.
| # ts properties #
The Time Series node estimates exponential smoothing, univariate Autoregressive Integrated Moving Average (ARIMA), and multivariate ARIMA (or transfer function) models for time series data and produces forecasts of future performance\.
<!-- <table "summary="ts properties" id="timeser_as_nodeslots__1" class="defaultstyle" "> -->
ts properties
Table 1\. ts properties
| `ts` Properties | Values | Property description |
| ------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `targets` | *field* | The Time Series node forecasts one or more targets, optionally using one or more input fields as predictors\. Frequency and weight fields are not used\. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.html#modelingnodeslots_common) for more information\. |
| `candidate_inputs` | \[*field1 \.\.\. fieldN*\] | Input or predictor fields used by the model\. |
| `use_period` | *flag* | |
| `date_time_field` | *field* | |
| `input_interval` | `None` <br>`Unknown` <br>`Year` <br>`Quarter` <br>`Month` <br>`Week` <br>`Day` <br>`Hour` <br>`Hour_nonperiod` <br>`Minute` <br>`Minute_nonperiod` <br>`Second` <br>`Second_nonperiod` | |
| `period_field` | *field* | |
| `period_start_value` | *integer* | |
| `num_days_per_week` | *integer* | |
| `start_day_of_week` | `Sunday` <br>`Monday` <br>`Tuesday` <br>`Wednesday` <br>`Thursday` <br>`Friday` <br>`Saturday` | |
| `num_hours_per_day` | *integer* | |
| `start_hour_of_day` | *integer* | |
| `timestamp_increments` | *integer* | |
| `cyclic_increments` | *integer* | |
| `cyclic_periods` | *list* | |
| `output_interval` | `None` <br>`Year` <br>`Quarter` <br>`Month` <br>`Week` <br>`Day` <br>`Hour` <br>`Minute` <br>`Second` | |
| `is_same_interval` | *flag* | |
| `cross_hour` | *flag* | |
| `aggregate_and_distribute` | *list* | |
| `aggregate_default` | `Mean` <br>`Sum` <br>`Mode` <br>`Min` <br>`Max` | |
| `distribute_default` | `Mean` <br>`Sum` | |
| `group_default` | `Mean` <br>`Sum` <br>`Mode` <br>`Min` <br>`Max` | |
| `missing_imput` | `Linear_interp` <br>`Series_mean` <br>`K_mean` <br>`K_median` <br>`Linear_trend` | |
| `k_span_points` | *integer* | |
| `use_estimation_period` | *flag* | |
| `estimation_period` | `Observations` <br>`Times` | |
| `date_estimation` | *list* | Only available if you use `date_time_field` |
| `period_estimation` | *list* | Only available if you use `use_period` |
| `observations_type` | `Latest` <br>`Earliest` | |
| `observations_num` | *integer* | |
| `observations_exclude` | *integer* | |
| `method` | `ExpertModeler` <br>`Exsmooth` <br>`Arima` | |
| `expert_modeler_method` | `ExpertModeler` <br>`Exsmooth` <br>`Arima` | |
| `consider_seasonal` | *flag* | |
| `detect_outliers` | *flag* | |
| `expert_outlier_additive` | *flag* | |
| `expert_outlier_level_shift` | *flag* | |
| `expert_outlier_innovational` | *flag* | |
| `expert_outlier_transient` | *flag* | |
| `expert_outlier_seasonal_additive` | *flag* | |
| `expert_outlier_local_trend` | *flag* | |
| `expert_outlier_additive_patch` | *flag* | |
| `consider_newesmodels` | *flag* | |
| `exsmooth_model_type` | `Simple` <br>`HoltsLinearTrend` <br>`BrownsLinearTrend` <br>`DampedTrend` <br>`SimpleSeasonal` <br>`WintersAdditive` <br>`WintersMultiplicative` <br>`DampedTrendAdditive` <br>`DampedTrendMultiplicative` <br>`MultiplicativeTrendAdditive` <br>`MultiplicativeSeasonal` <br>`MultiplicativeTrendMultiplicative` <br>`MultiplicativeTrend` | Specifies the Exponential Smoothing method\. Default is `Simple`\. |
| `futureValue_type_method` | `Compute` <br>`specify` | If `Compute` is used, the system computes the Future Values for the forecast period for each predictor\.<br><br>For each predictor, you can choose from a list of functions (blank, mean of recent points, most recent value) or use `specify` to enter values manually\. To specify individual fields and properties, use the `extend_metric_values` property\. For example:<br><br>`set :ts.futureValue_type_method="specify" set :ts.extend_metric_values=[{'Market_1','USER_SPECIFY', [1,2,3]}, {'Market_2','MOST_RECENT_VALUE', ''},{'Market_3','RECENT_POINTS_MEAN', ''}]` |
| `exsmooth_transformation_type` | `None` <br>`SquareRoot` <br>`NaturalLog` | |
| `arima.p` | *integer* | |
| `arima.d` | *integer* | |
| `arima.q` | *integer* | |
| `arima.sp` | *integer* | |
| `arima.sd` | *integer* | |
| `arima.sq` | *integer* | |
| `arima_transformation_type` | `None` <br>`SquareRoot` <br>`NaturalLog` | |
| `arima_include_constant` | *flag* | |
| `tf_arima.p.`*fieldname* | *integer* | For transfer functions\. |
| `tf_arima.d.`*fieldname* | *integer* | For transfer functions\. |
| `tf_arima.q.`*fieldname* | *integer* | For transfer functions\. |
| `tf_arima.sp.`*fieldname* | *integer* | For transfer functions\. |
| `tf_arima.sd.`*fieldname* | *integer* | For transfer functions\. |
| `tf_arima.sq.`*fieldname* | *integer* | For transfer functions\. |
| `tf_arima.delay.`*fieldname* | *integer* | For transfer functions\. |
| `tf_arima.transformation_type.`*fieldname* | `None` <br>`SquareRoot` <br>`NaturalLog` | For transfer functions\. |
| `arima_detect_outliers` | *flag* | |
| `arima_outlier_additive` | *flag* | |
| `arima_outlier_level_shift` | *flag* | |
| `arima_outlier_innovational` | *flag* | |
| `arima_outlier_transient` | *flag* | |
| `arima_outlier_seasonal_additive` | *flag* | |
| `arima_outlier_local_trend` | *flag* | |
| `arima_outlier_additive_patch` | *flag* | |
| `max_lags` | *integer* | |
| `cal_PI` | *flag* | |
| `conf_limit_pct` | *real* | |
| `events` | *fields* | |
| `continue` | *flag* | |
| `scoring_model_only` | *flag* | Use for models with very large numbers (tens of thousands) of time series\. |
| `forecastperiods` | *integer* | |
| `extend_records_into_future` | *flag* | |
| `extend_metric_values` | *fields* | Allows you to provide future values for predictors\. |
| `conf_limits` | *flag* | |
| `noise_res` | *flag* | |
| `max_models_output` | *integer* | Controls how many models are shown in output\. Default is `10`\. Models are not shown in output if the total number of models built exceeds this value\. Models are still available for scoring\. |
| `missing_value_threshold` | *double* | Computes data quality measures for the time variable and for input data corresponding to each time series\. If the data quality score is lower than this threshold, the corresponding time series will be discarded\. |
| `compute_future_values_input` | *boolean* | `False`: Compute future values of inputs\. <br>`True`: Select fields whose values you wish to add to the data\. |
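
For example, the following fragment runs the Expert Modeler and extends forecasts 12 periods into the future (a minimal sketch, assuming `"ts"` as the node creation type; the field names are hypothetical):

```python
# Minimal sketch: Expert Modeler forecast over hypothetical fields.
stream = modeler.script.stream()
ts = stream.createAt("ts", "Time Series", 200, 100)
ts.setPropertyValue("targets", ["Sales"])
ts.setPropertyValue("date_time_field", "Date")
ts.setPropertyValue("method", "ExpertModeler")
ts.setPropertyValue("consider_seasonal", True)
ts.setPropertyValue("extend_records_into_future", True)
ts.setPropertyValue("forecastperiods", 12)
ts.setPropertyValue("conf_limits", True)
# Keyed transfer-function properties take the field name as the key
# (applies when ARIMA transfer functions are used):
ts.setKeyedPropertyValue("tf_arima.delay", "AdSpend", 2)
```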
<!-- </table "summary="ts properties" id="timeser_as_nodeslots__1" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
EED66538A3E4854D56210AB1D6AC49016F1E40A2 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/timeser_as_nodeslots_streaming.html?context=cdpaas&locale=en | streamingtimeseries properties | streamingtimeseries properties
The Streaming Time Series node builds and scores time series models in one step.
streamingtimeseries properties
Table 1. streamingtimeseries properties
streamingtimeseries properties Values Property description
targets field The Streaming TS node forecasts one or more targets, optionally using one or more input fields as predictors. Frequency and weight fields aren't used. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information.
candidate_inputs [field1 ... fieldN] Input or predictor fields used by the model.
use_period flag
date_time_field field
input_interval NoneUnknownYearQuarterMonthWeekDayHourHour_nonperiodMinuteMinute_nonperiodSecondSecond_nonperiod
period_field field
period_start_value integer
num_days_per_week integer
start_day_of_week SundayMondayTuesdayWednesdayThursdayFridaySaturday
num_hours_per_day integer
start_hour_of_day integer
timestamp_increments integer
cyclic_increments integer
cyclic_periods list
output_interval NoneYearQuarterMonthWeekDayHourMinuteSecond
is_same_interval flag
cross_hour flag
aggregate_and_distribute list
aggregate_default MeanSumModeMinMax
distribute_default MeanSum
group_default MeanSumModeMinMax
missing_imput Linear_interpSeries_meanK_meanK_medianLinear_trend
k_span_points integer
use_estimation_period flag
estimation_period ObservationsTimes
date_estimation list Only available if you use date_time_field.
period_estimation list Only available if you use use_period.
observations_type LatestEarliest
observations_num integer
observations_exclude integer
method ExpertModelerExsmoothArima
expert_modeler_method ExpertModelerExsmoothArima
consider_seasonal flag
detect_outliers flag
expert_outlier_additive flag
expert_outlier_innovational flag
expert_outlier_level_shift flag
expert_outlier_transient flag
expert_outlier_seasonal_additive flag
expert_outlier_local_trend flag
expert_outlier_additive_patch flag
consider_newesmodels flag
exsmooth_model_type SimpleHoltsLinearTrendBrownsLinearTrendDampedTrendSimpleSeasonalWintersAdditiveWintersMultiplicativeDampedTrendAdditiveDampedTrendMultiplicativeMultiplicativeTrendAdditiveMultiplicativeSeasonalMultiplicativeTrendMultiplicativeMultiplicativeTrend
futureValue_type_method Computespecify
exsmooth_transformation_type NoneSquareRootNaturalLog
arima.p integer
arima.d integer
arima.q integer
arima.sp integer
arima.sd integer
arima.sq integer
arima_transformation_type NoneSquareRootNaturalLog
arima_include_constant flag
tf_arima.p.fieldname integer For transfer functions.
tf_arima.d.fieldname integer For transfer functions.
tf_arima.q.fieldname integer For transfer functions.
tf_arima.sp.fieldname integer For transfer functions.
tf_arima.sd.fieldname integer For transfer functions.
tf_arima.sq.fieldname integer For transfer functions.
tf_arima.delay.fieldname integer For transfer functions.
tf_arima.transformation_type.fieldname NoneSquareRootNaturalLog For transfer functions.
arima_detect_outliers flag
arima_outlier_additive flag
arima_outlier_level_shift flag
arima_outlier_innovational flag
arima_outlier_transient flag
arima_outlier_seasonal_additive flag
arima_outlier_local_trend flag
arima_outlier_additive_patch flag
conf_limit_pct real
events fields
forecastperiods integer
extend_records_into_future flag
conf_limits flag
noise_res flag
max_models_output integer Specify the maximum number of models you want to include in the output. Note that if the number of models built exceeds this threshold, the models aren't shown in the output but they're still available for scoring. Default value is 10. Displaying a large number of models may result in poor performance or instability.
custom_fields boolean This option tells the node to use the field information specified here instead of that given in any upstream Type node(s). After selecting this option, specify the following fields as required.
arima array A list with p, d, q, sp, sd, sq.
tf_arima array A list with name, p, q, d, sp, sq, sd, delay and type.
| # streamingtimeseries properties #
The Streaming Time Series node builds and scores time series models in one step\.
<!-- <table "summary="streamingtimeseries properties" class="defaultstyle" "> -->
streamingtimeseries properties
Table 1\. streamingtimeseries properties
| `streamingtimeseries` properties | Values | Property description |
| ------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `targets` | *field* | The Streaming TS node forecasts one or more targets, optionally using one or more input fields as predictors\. Frequency and weight fields aren't used\. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.html#modelingnodeslots_common) for more information\. |
| `candidate_inputs` | \[*field1 \.\.\. fieldN*\] | Input or predictor fields used by the model\. |
| `use_period` | *flag* | |
| `date_time_field` | *field* | |
| `input_interval` | `None` <br>`Unknown` <br>`Year` <br>`Quarter` <br>`Month` <br>`Week` <br>`Day` <br>`Hour` <br>`Hour_nonperiod` <br>`Minute` <br>`Minute_nonperiod` <br>`Second` <br>`Second_nonperiod` | |
| `period_field` | *field* | |
| `period_start_value` | *integer* | |
| `num_days_per_week` | *integer* | |
| `start_day_of_week` | `Sunday` <br>`Monday` <br>`Tuesday` <br>`Wednesday` <br>`Thursday` <br>`Friday` <br>`Saturday` | |
| `num_hours_per_day` | *integer* | |
| `start_hour_of_day` | *integer* | |
| `timestamp_increments` | *integer* | |
| `cyclic_increments` | *integer* | |
| `cyclic_periods` | *list* | |
| `output_interval` | `None` <br>`Year` <br>`Quarter` <br>`Month` <br>`Week` <br>`Day` <br>`Hour` <br>`Minute` <br>`Second` | |
| `is_same_interval` | *flag* | |
| `cross_hour` | *flag* | |
| `aggregate_and_distribute` | *list* | |
| `aggregate_default` | `Mean` <br>`Sum` <br>`Mode` <br>`Min` <br>`Max` | |
| `distribute_default` | `Mean` <br>`Sum` | |
| `group_default` | `Mean` <br>`Sum` <br>`Mode` <br>`Min` <br>`Max` | |
| `missing_imput` | `Linear_interp` <br>`Series_mean` <br>`K_mean` <br>`K_median` <br>`Linear_trend` | |
| `k_span_points` | *integer* | |
| `use_estimation_period` | *flag* | |
| `estimation_period` | `Observations` <br>`Times` | |
| `date_estimation` | *list* | Only available if you use `date_time_field`\. |
| `period_estimation` | *list* | Only available if you use `use_period`\. |
| `observations_type` | `Latest` <br>`Earliest` | |
| `observations_num` | *integer* | |
| `observations_exclude` | *integer* | |
| `method` | `ExpertModeler` <br>`Exsmooth` <br>`Arima` | |
| `expert_modeler_method` | `ExpertModeler` <br>`Exsmooth` <br>`Arima` | |
| `consider_seasonal` | *flag* | |
| `detect_outliers` | *flag* | |
| `expert_outlier_additive` | *flag* | |
| `expert_outlier_innovational` | *flag* | |
| `expert_outlier_level_shift` | *flag* | |
| `expert_outlier_transient` | *flag* | |
| `expert_outlier_seasonal_additive` | *flag* | |
| `expert_outlier_local_trend` | *flag* | |
| `expert_outlier_additive_patch` | *flag* | |
| `consider_newesmodels` | *flag* | |
| `exsmooth_model_type` | `Simple` <br>`HoltsLinearTrend` <br>`BrownsLinearTrend` <br>`DampedTrend` <br>`SimpleSeasonal` <br>`WintersAdditive` <br>`WintersMultiplicative` <br>`DampedTrendAdditive` <br>`DampedTrendMultiplicative` <br>`MultiplicativeTrendAdditive` <br>`MultiplicativeSeasonal` <br>`MultiplicativeTrendMultiplicative` <br>`MultiplicativeTrend` | |
| `futureValue_type_method` | `Compute` <br>`specify` | |
| `exsmooth_transformation_type` | `None` <br>`SquareRoot` <br>`NaturalLog` | |
| `arima.p` | *integer* | |
| `arima.d` | *integer* | |
| `arima.q` | *integer* | |
| `arima.sp` | *integer* | |
| `arima.sd` | *integer* | |
| `arima.sq` | *integer* | |
| `arima_transformation_type` | `None` <br>`SquareRoot` <br>`NaturalLog` | |
| `arima_include_constant` | *flag* | |
| `tf_arima.p.`*fieldname* | *integer* | For transfer functions\. |
| `tf_arima.d.`*fieldname* | *integer* | For transfer functions\. |
| `tf_arima.q.`*fieldname* | *integer* | For transfer functions\. |
| `tf_arima.sp.`*fieldname* | *integer* | For transfer functions\. |
| `tf_arima.sd.`*fieldname* | *integer* | For transfer functions\. |
| `tf_arima.sq.`*fieldname* | *integer* | For transfer functions\. |
| `tf_arima.delay.`*fieldname* | *integer* | For transfer functions\. |
| `tf_arima.transformation_type.`*fieldname* | `None` <br>`SquareRoot` <br>`NaturalLog` | For transfer functions\. |
| `arima_detect_outliers` | *flag* | |
| `arima_outlier_additive` | *flag* | |
| `arima_outlier_level_shift` | *flag* | |
| `arima_outlier_innovational` | *flag* | |
| `arima_outlier_transient` | *flag* | |
| `arima_outlier_seasonal_additive` | *flag* | |
| `arima_outlier_local_trend` | *flag* | |
| `arima_outlier_additive_patch` | *flag* | |
| `conf_limit_pct` | *real* | |
| `events` | *fields* | |
| `forecastperiods` | *integer* | |
| `extend_records_into_future` | *flag* | |
| `conf_limits` | *flag* | |
| `noise_res` | *flag* | |
| `max_models_output` | *integer* | Specify the maximum number of models you want to include in the output\. Note that if the number of models built exceeds this threshold, the models aren't shown in the output but they're still available for scoring\. Default value is `10`\. Displaying a large number of models may result in poor performance or instability\. |
| `custom_fields` | *boolean* | This option tells the node to use the field information specified here instead of that given in any upstream Type node(s)\. After selecting this option, specify the following fields as required\. |
| `arima` | *array* | A list with `p`, `d`, `q`, `sp`, `sd`, `sq`\. |
| `tf_arima` | *array* | A list with `name`, `p`, `q`, `d`, `sp`, `sq`, `sd`, `delay` and `type`\. |
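
For example, the following fragment builds and scores with exponential smoothing in one step (a minimal sketch, assuming `"streamingtimeseries"` as the node creation type; the field names are hypothetical):

```python
# Minimal sketch: build-and-score streaming time series node.
stream = modeler.script.stream()
sts = stream.createAt("streamingtimeseries", "Streaming TS", 300, 100)
sts.setPropertyValue("targets", ["Demand"])
sts.setPropertyValue("date_time_field", "Timestamp")
sts.setPropertyValue("method", "Exsmooth")
sts.setPropertyValue("exsmooth_model_type", "WintersAdditive")
sts.setPropertyValue("forecastperiods", 24)
sts.setPropertyValue("max_models_output", 10)
```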
<!-- </table "summary="streamingtimeseries properties" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
9087B2B5302FD4B7C8343C568C7C8A925544BB40 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/timeser_as_nuggetnodeslots.html?context=cdpaas&locale=en | applyts properties | applyts properties
You can use the Time Series modeling node to generate a Time Series model nugget. The scripting name of this model nugget is applyts. For more information on scripting the modeling node itself, see [ts properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/timeser_as_nodeslots.htmltimeser_as_nodeslots).
applyts properties
Table 1. applyts properties
applyts Properties Values Property description
extend_records_into_future Boolean
ext_future_num integer
compute_future_values_input Boolean
forecastperiods integer
noise_res boolean
conf_limits boolean
target_fields list
target_series list
includeTargets field
| # applyts properties #
You can use the Time Series modeling node to generate a Time Series model nugget\. The scripting name of this model nugget is *applyts*\. For more information on scripting the modeling node itself, see [ts properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/timeser_as_nodeslots.html#timeser_as_nodeslots)\.
<!-- <table "summary="applyts properties" class="defaultstyle" "> -->
applyts properties
Table 1\. applyts properties
| `applyts` Properties | Values | Property description |
| ----------------------------- | --------- | -------------------- |
| `extend_records_into_future` | *Boolean* | |
| `ext_future_num` | *integer* | |
| `compute_future_values_input` | *Boolean* | |
| `forecastperiods` | *integer* | |
| `noise_res` | *boolean* | |
| `conf_limits` | *boolean* | |
| `target_fields` | *list* | |
| `target_series` | *list* | |
| `includeTargets` | *field* | |
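
For example, you might rescore a Time Series nugget with extended future records (a minimal sketch; `"applyts"` as the nugget's type string for `findByType` is an assumption):

```python
# Minimal sketch: configure an existing Time Series model nugget.
stream = modeler.script.stream()
nugget = stream.findByType("applyts", None)
if nugget is not None:
    nugget.setPropertyValue("extend_records_into_future", True)
    nugget.setPropertyValue("forecastperiods", 6)
    nugget.setPropertyValue("noise_res", True)
```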
<!-- </table "summary="applyts properties" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
EA4CB9CD97FFB8C956B4F5D28D2759C0ED832BB5 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/transformnodeslots.html?context=cdpaas&locale=en | transformnode properties | transformnode properties
The Transform node allows you to select and visually preview the results of transformations before applying them to selected fields.
transformnode properties
Table 1. transformnode properties
transformnode properties Data type Property description
fields [ field1… fieldn] The fields to be used in the transformation.
formula AllSelect Indicates whether all or selected transformations should be calculated.
formula_inverse flag Indicates if the inverse transformation should be used.
formula_inverse_offset number Indicates a data offset to be used for the formula. Set as 0 by default, unless specified by user.
formula_log_n flag Indicates if the logn transformation should be used.
formula_log_n_offset number
formula_log_10 flag Indicates if the log10 transformation should be used.
formula_log_10_offset number
formula_exponential flag Indicates if the exponential transformation (e^x^) should be used.
formula_square_root flag Indicates if the square root transformation should be used.
use_output_name flag Specifies whether a custom output name is used.
output_name string If use_output_name is true, specifies the name to use.
output_mode ScreenFile Used to specify target location for output generated from the output node.
output_format HTML (.html) Output (.cou) Used to specify the type of output.
paginate_output flag When the output_format is HTML, causes the output to be separated into pages.
lines_per_page number When used with paginate_output, specifies the lines per page of output.
full_filename string Indicates the file name to be used for the file output.
| # transformnode properties #
The Transform node allows you to select and visually preview the results of transformations before applying them to selected fields\.
<!-- <table "summary="transformnode properties" id="transformnodeslots__table_jn4_vyy_ddb" class="defaultstyle" "> -->
transformnode properties
Table 1\. transformnode properties
| `transformnode` properties | Data type | Property description |
| -------------------------- | ------------------------------------ | ---------------------------------------------------------------------------------------------------- |
| `fields` | \[ *field1… fieldn*\] | The fields to be used in the transformation\. |
| `formula` | `All` <br>`Select` | Indicates whether all or selected transformations should be calculated\. |
| `formula_inverse` | *flag* | Indicates if the inverse transformation should be used\. |
| `formula_inverse_offset` | *number* | Indicates a data offset to be used for the formula\. Set as 0 by default, unless specified by user\. |
| `formula_log_n` | *flag* | Indicates if the log~n~ transformation should be used\. |
| `formula_log_n_offset` | *number* | |
| `formula_log_10` | *flag* | Indicates if the log~10~ transformation should be used\. |
| `formula_log_10_offset` | *number* | |
| `formula_exponential` | *flag* | Indicates if the exponential transformation (e^x^) should be used\. |
| `formula_square_root` | *flag* | Indicates if the square root transformation should be used\. |
| `use_output_name` | *flag* | Specifies whether a custom output name is used\. |
| `output_name` | *string* | If `use_output_name` is true, specifies the name to use\. |
| `output_mode` | `Screen` <br>`File` | Used to specify target location for output generated from the output node\. |
| `output_format` | `HTML` (\.*html*) <br>`Output` (\.*cou*) | Used to specify the type of output\. |
| `paginate_output` | *flag* | When the `output_format` is `HTML`, causes the output to be separated into pages\. |
| `lines_per_page` | *number* | When used with `paginate_output`, specifies the lines per page of output\. |
| `full_filename` | *string* | Indicates the file name to be used for the file output\. |
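
For example, the following fragment previews log and square-root transformations for two fields (a minimal sketch, assuming `"transform"` as the node creation type; the field names are hypothetical):

```python
# Minimal sketch: preview selected transformations for hypothetical fields.
stream = modeler.script.stream()
xform = stream.createAt("transform", "Preview transforms", 400, 300)
xform.setPropertyValue("fields", ["Income", "Age"])
xform.setPropertyValue("formula", "Select")
xform.setPropertyValue("formula_log_n", True)
xform.setPropertyValue("formula_log_n_offset", 1)   # offset avoids taking log of zero
xform.setPropertyValue("formula_square_root", True)
```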
<!-- </table "summary="transformnode properties" id="transformnodeslots__table_jn4_vyy_ddb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
A20FCF106BA3053C247DAF57A4A396F073D1E4E2 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/transposenodeslots.html?context=cdpaas&locale=en | transposenode properties | transposenode properties
The Transpose node swaps the data in rows and columns so that records become fields and fields become records.
transposenode properties
Table 1. transposenode properties
transposenode properties Data type Property description
transpose_method enum Specifies the transpose method: Normal (normal), CASE to VAR (casetovar), or VAR to CASE (vartocase).
transposed_names PrefixRead Property for the Normal transpose method. New field names can be generated automatically based on a specified prefix, or they can be read from an existing field in the data.
prefix string Property for the Normal transpose method.
num_new_fields integer Property for the Normal transpose method. When using a prefix, specifies the maximum number of new fields to create.
read_from_field field Property for the Normal transpose method. Field from which names are read. This must be an instantiated field or an error will occur when the node is executed.
max_num_fields integer Property for the Normal transpose method. When reading names from a field, specifies an upper limit to avoid creating an inordinately large number of fields.
transpose_type NumericStringCustom Property for the Normal transpose method. By default, only continuous (numeric range) fields are transposed, but you can choose a custom subset of numeric fields or transpose all string fields instead.
transpose_fields list Property for the Normal transpose method. Specifies the fields to transpose when the Custom option is used.
id_field_name field Property for the Normal transpose method.
transpose_casetovar_idfields field Property for the CASE to VAR (casetovar) transpose method. Accepts multiple fields to be used as index fields. field1 ... fieldN
transpose_casetovar_columnfields field Property for the CASE to VAR (casetovar) transpose method. Accepts multiple fields to be used as column fields. field1 ... fieldN
transpose_casetovar_valuefields field Property for the CASE to VAR (casetovar) transpose method. Accepts multiple fields to be used as value fields. field1 ... fieldN
transpose_vartocase_idfields field Property for the VAR to CASE (vartocase) transpose method. Accepts multiple fields to be used as ID variable fields. field1 ... fieldN
transpose_vartocase_valfields field Property for the VAR to CASE (vartocase) transpose method. Accepts multiple fields to be used as value variable fields. field1 ... fieldN
transpose_new_field_names array New field names.
transpose_casetovar_aggregatefunction mean <br>sum <br>min <br>max <br>median <br>count When there's more than one record for an index, you must aggregate the records into one. Use the Aggregate Function drop-down to specify how to aggregate the records using one of the aggregation functions.
default_value_mode Read <br>Pass Set the default mode for all fields to Read or Pass. The import node passes fields by default, while the Type node reads values by default.
| # transposenode properties #
The Transpose node swaps the data in rows and columns so that records become fields and fields become records\.
<!-- <table "summary="transposenode properties" class="defaultstyle" "> -->
transposenode properties
Table 1\. transposenode properties
| `transposenode` properties | Data type | Property description |
| --------------------------------------- | ------------------------------------------------------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `transpose_method` | *enum* | Specifies the transpose method: Normal (`normal`), CASE to VAR (`casetovar`), or VAR to CASE (`vartocase`)\. |
| `transposed_names` | `Prefix` <br>`Read` | Property for the Normal transpose method\. New field names can be generated automatically based on a specified prefix, or they can be read from an existing field in the data\. |
| `prefix` | *string* | Property for the Normal transpose method\. |
| `num_new_fields` | *integer* | Property for the Normal transpose method\. When using a prefix, specifies the maximum number of new fields to create\. |
| `read_from_field` | *field* | Property for the Normal transpose method\. Field from which names are read\. This must be an instantiated field or an error will occur when the node is executed\. |
| `max_num_fields` | *integer* | Property for the Normal transpose method\. When reading names from a field, specifies an upper limit to avoid creating an inordinately large number of fields\. |
| `transpose_type` | `Numeric` <br>`String` <br>`Custom` | Property for the Normal transpose method\. By default, only continuous (numeric range) fields are transposed, but you can choose a custom subset of numeric fields or transpose all string fields instead\. |
| `transpose_fields` | *list* | Property for the Normal transpose method\. Specifies the fields to transpose when the `Custom` option is used\. |
| `id_field_name` | *field* | Property for the Normal transpose method\. |
| `transpose_casetovar_idfields` | *field* | Property for the CASE to VAR (`casetovar`) transpose method\. Accepts multiple fields to be used as index fields\. `field1 ... fieldN` |
| `transpose_casetovar_columnfields` | *field* | Property for the CASE to VAR (`casetovar`) transpose method\. Accepts multiple fields to be used as column fields\. `field1 ... fieldN` |
| `transpose_casetovar_valuefields` | *field* | Property for the CASE to VAR (`casetovar`) transpose method\. Accepts multiple fields to be used as value fields\. `field1 ... fieldN` |
| `transpose_vartocase_idfields` | *field* | Property for the VAR to CASE (`vartocase`) transpose method\. Accepts multiple fields to be used as ID variable fields\. `field1 ... fieldN` |
| `transpose_vartocase_valfields` | *field* | Property for the VAR to CASE (`vartocase`) transpose method\. Accepts multiple fields to be used as value variable fields\. `field1 ... fieldN` |
| `transpose_new_field_names` | *array* | New field names\. |
| `transpose_casetovar_aggregatefunction` | `mean` <br>`sum` <br>`min` <br>`max` <br>`median` <br>`count` | When there's more than one record for an index, you must aggregate the records into one\. Use the Aggregate Function drop\-down to specify how to aggregate the records using one of the aggregation functions\. |
| `default_value_mode` | `Read` <br>`Pass` | Set the default mode for all fields to `Read` or `Pass`\. The import node passes fields by default, while the Type node reads values by default\. |
<!-- </table "summary="transposenode properties" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
E01C7D12E53747C7ED71D615D7E9DCD8F17638ED | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/treeASnodeslots.html?context=cdpaas&locale=en | treeas properties | treeas properties
The Tree-AS node is similar to the CHAID node; however, the Tree-AS node is designed to process big data to create a single tree and displays the resulting model in the output viewer. The node generates a decision tree by using chi-square statistics (CHAID) to identify optimal splits. This use of CHAID can generate nonbinary trees, meaning that some splits have more than two branches. Target and input fields can be numeric range (continuous) or categorical. Exhaustive CHAID is a modification of CHAID that does a more thorough job of examining all possible splits but takes longer to compute.
treeas properties
Table 1. treeas properties
treeas Properties Values Property description
target field In the Tree-AS node, CHAID models require a single target and one or more input fields. A frequency field can also be specified. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.html#modelingnodeslots_common) for more information.
method chaid <br>exhaustive_chaid
max_depth integer Maximum tree depth, from 0 to 20. The default value is 5.
num_bins integer Only used if the data is made up of continuous inputs. Set the number of equal frequency bins to be used for the inputs; options are: 2, 4, 5, 10, 20, 25, 50, or 100.
record_threshold integer The number of records at which the model will switch from using p-values to Effect sizes while building the tree. The default is 1,000,000; increase or decrease this in increments of 10,000.
split_alpha number Significance level for splitting. The value must be between 0.01 and 0.99.
merge_alpha number Significance level for merging. The value must be between 0.01 and 0.99.
bonferroni_adjustment flag Adjust significance values using Bonferroni method.
effect_size_threshold_cont number Set the Effect size threshold when splitting nodes and merging categories when using a continuous target. The value must be between 0.01 and 0.99.
effect_size_threshold_cat number Set the Effect size threshold when splitting nodes and merging categories when using a categorical target. The value must be between 0.01 and 0.99.
split_merged_categories flag Allow resplitting of merged categories.
grouping_sig_level number Used to determine how groups of nodes are formed or how unusual nodes are identified.
chi_square pearson <br>likelihood_ratio Method used to calculate the chi-square statistic: Pearson or Likelihood Ratio
minimum_record_use use_percentage <br>use_absolute
min_parent_records_pc number Default value is 2. Minimum 1, maximum 100, in increments of 1. Parent branch value must be higher than child branch.
min_child_records_pc number Default value is 1. Minimum 1, maximum 100, in increments of 1.
min_parent_records_abs number Default value is 100. Minimum 1, maximum 100, in increments of 1. Parent branch value must be higher than child branch.
min_child_records_abs number Default value is 50. Minimum 1, maximum 100, in increments of 1.
epsilon number Minimum change in expected cell frequencies.
max_iterations number Maximum iterations for convergence.
use_costs flag
costs structured Structured property. The format is a list of 3 values: the actual value, the predicted value, and the cost if that prediction is wrong. For example: tree.setPropertyValue("costs", [["drugA", "drugB", 3.0], ["drugX", "drugY", 4.0]])
default_cost_increase none <br>linear <br>square <br>custom Only enabled for ordinal targets. Set default values in the costs matrix.
calculate_conf flag
display_rule_id flag Adds a field in the scoring output that indicates the ID for the terminal node to which each record is assigned.
| # treeas properties #
The Tree\-AS node is similar to the CHAID node; however, the Tree\-AS node is designed to process big data to create a single tree and displays the resulting model in the output viewer\. The node generates a decision tree by using chi\-square statistics (CHAID) to identify optimal splits\. This use of CHAID can generate nonbinary trees, meaning that some splits have more than two branches\. Target and input fields can be numeric range (continuous) or categorical\. Exhaustive CHAID is a modification of CHAID that does a more thorough job of examining all possible splits but takes longer to compute\.
<!-- <table "summary="treeas properties" class="defaultstyle" "> -->
treeas properties
Table 1\. treeas properties
| `treeas` Properties | Values | Property description |
| ---------------------------- | ------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `target` | *field* | In the Tree\-AS node, CHAID models require a single target and one or more input fields\. A frequency field can also be specified\. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.html#modelingnodeslots_common) for more information\. |
| `method` | `chaid` <br>`exhaustive_chaid` | |
| `max_depth` | *integer* | Maximum tree depth, from 0 to 20\. The default value is 5\. |
| `num_bins` | *integer* | Only used if the data is made up of continuous inputs\. Set the number of equal frequency bins to be used for the inputs; options are: 2, 4, 5, 10, 20, 25, 50, or 100\. |
| `record_threshold` | *integer* | The number of records at which the model will switch from using p\-values to Effect sizes while building the tree\. The default is 1,000,000; increase or decrease this in increments of 10,000\. |
| `split_alpha` | *number* | Significance level for splitting\. The value must be between 0\.01 and 0\.99\. |
| `merge_alpha` | *number* | Significance level for merging\. The value must be between 0\.01 and 0\.99\. |
| `bonferroni_adjustment` | *flag* | Adjust significance values using Bonferroni method\. |
| `effect_size_threshold_cont` | *number* | Set the Effect size threshold when splitting nodes and merging categories when using a continuous target\. The value must be between 0\.01 and 0\.99\. |
| `effect_size_threshold_cat` | *number* | Set the Effect size threshold when splitting nodes and merging categories when using a categorical target\. The value must be between 0\.01 and 0\.99\. |
| `split_merged_categories` | *flag* | Allow resplitting of merged categories\. |
| `grouping_sig_level` | *number* | Used to determine how groups of nodes are formed or how unusual nodes are identified\. |
| `chi_square` | `pearson` <br>`likelihood_ratio` | Method used to calculate the chi\-square statistic: Pearson or Likelihood Ratio |
| `minimum_record_use` | `use_percentage` <br>`use_absolute` | |
| `min_parent_records_pc` | *number* | Default value is 2\. Minimum 1, maximum 100, in increments of 1\. Parent branch value must be higher than child branch\. |
| `min_child_records_pc` | *number* | Default value is 1\. Minimum 1, maximum 100, in increments of 1\. |
| `min_parent_records_abs` | *number* | Default value is 100\. Minimum 1, maximum 100, in increments of 1\. Parent branch value must be higher than child branch\. |
| `min_child_records_abs` | *number* | Default value is 50\. Minimum 1, maximum 100, in increments of 1\. |
| `epsilon` | *number* | Minimum change in expected cell frequencies\. |
| `max_iterations` | *number* | Maximum iterations for convergence\. |
| `use_costs` | *flag* | |
| `costs` | *structured* | Structured property\. The format is a list of 3 values: the actual value, the predicted value, and the cost if that prediction is wrong\. For example: `tree.setPropertyValue("costs", [["drugA", "drugB", 3.0], ["drugX", "drugY", 4.0]])` |
| `default_cost_increase` | `none` <br>`linear` <br>`square` <br>`custom` | Only enabled for ordinal targets\. Set default values in the costs matrix\. |
| `calculate_conf` | *flag* | |
| `display_rule_id` | *flag* | Adds a field in the scoring output that indicates the ID for the terminal node to which each record is assigned\. |
<!-- </table "summary="treeas properties" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
8EA57CA1AE730686E86FC3B2AABD71C9F8EA9823 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/treeASnuggetnodeslots.html?context=cdpaas&locale=en | applytreeas properties | applytreeas properties
You can use Tree-AS modeling nodes to generate a Tree-AS model nugget. The scripting name of this model nugget is applytreeas. For more information on scripting the modeling node itself, see [treeas properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/treeASnodeslots.html#treeASnodeslots).
applytreeas properties
Table 1. applytreeas properties
applytreeas Properties Values Property description
calculate_conf flag This property includes confidence calculations in the generated tree.
display_rule_id flag Adds a field in the scoring output that indicates the ID for the terminal node to which each record is assigned.
enable_sql_generation false <br>native When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations.
| # applytreeas properties #
You can use Tree\-AS modeling nodes to generate a Tree\-AS model nugget\. The scripting name of this model nugget is *applytreeas*\. For more information on scripting the modeling node itself, see [treeas properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/treeASnodeslots.html#treeASnodeslots)\.
<!-- <table "summary="applytreeas properties" class="defaultstyle" "> -->
applytreeas properties
Table 1\. applytreeas properties
| `applytreeas` Properties | Values | Property description |
| ------------------------ | --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------ |
| `calculate_conf` | *flag* | This property includes confidence calculations in the generated tree\. |
| `display_rule_id` | *flag* | Adds a field in the scoring output that indicates the ID for the terminal node to which each record is assigned\. |
| `enable_sql_generation` | `false` <br>`native` | When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations\. |
<!-- </table "summary="applytreeas properties" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
3B763FFD1393292F4C3CA9D236440065B6660E8E | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/twostep_as_nodeslots.html?context=cdpaas&locale=en | twostepAS properties | twostepAS properties
TwoStep Cluster is an exploratory tool that's designed to reveal natural groupings (or clusters) within a data set that would otherwise not be apparent. The algorithm that's employed by this procedure has several desirable features that differentiate it from traditional clustering techniques, such as handling of categorical and continuous variables, automatic selection of number of clusters, and scalability.
twostepAS properties
Table 1. twostepAS properties
twostepAS Properties Values Property description
inputs [f1 ... fN] TwoStepAS models use a list of input fields, but no target. Weight and frequency fields are not recognized.
use_predefined_roles Boolean Default=True
use_custom_field_assignments Boolean Default=False
cluster_num_auto Boolean Default=True
min_num_clusters integer Default=2
max_num_clusters integer Default=15
num_clusters integer Default=5
clustering_criterion AIC <br>BIC
automatic_clustering_method use_clustering_criterion_setting <br>Distance_jump <br>Minimum <br>Maximum
feature_importance_method use_clustering_criterion_setting <br>effect_size
use_random_seed Boolean
random_seed integer
distance_measure Euclidean <br>Loglikelihood
include_outlier_clusters Boolean Default=True
num_cases_in_feature_tree_leaf_is_less_than integer Default=10
top_perc_outliers integer Default=5
initial_dist_change_threshold integer Default=0
leaf_node_maximum_branches integer Default=8
non_leaf_node_maximum_branches integer Default=8
max_tree_depth integer Default=3
adjustment_weight_on_measurement_level integer Default=6
memory_allocation_mb number Default=512
delayed_split Boolean Default=True
fields_not_to_standardize [f1 ... fN]
adaptive_feature_selection Boolean Default=True
featureMisPercent integer Default=70
coefRange number Default=0.05
percCasesSingleCategory integer Default=95
numCases integer Default=24
include_model_specifications Boolean Default=True
include_record_summary Boolean Default=True
include_field_transformations Boolean Default=True
excluded_inputs Boolean Default=True
evaluate_model_quality Boolean Default=True
show_feature_importance_bar_chart Boolean Default=True
show_feature_importance_word_cloud Boolean Default=True
show_outlier_clusters_interactive_table_and_chart Boolean Default=True
show_outlier_clusters_pivot_table Boolean Default=True
across_cluster_feature_importance Boolean Default=True
across_cluster_profiles_pivot_table Boolean Default=True
withinprofiles Boolean Default=True
cluster_distances Boolean Default=True
cluster_label String <br>Number
label_prefix String
evaluation_maxNum integer The maximum number of outliers to display in the output. If there are more than twenty outlier clusters, a pivot table will be displayed instead.
across_cluster_profiles_table_and_chart Boolean Table and charts of feature importance and cluster centers for each input (field) used in the cluster solution. Selecting different rows in the table displays a different chart. For categorical fields, a bar chart is displayed. For continuous fields, a chart of means and standard deviations is displayed.
| # twostepAS properties #
TwoStep Cluster is an exploratory tool that's designed to reveal natural groupings (or clusters) within a data set that would otherwise not be apparent\. The algorithm that's employed by this procedure has several desirable features that differentiate it from traditional clustering techniques, such as handling of categorical and continuous variables, automatic selection of number of clusters, and scalability\.
<!-- <table "summary="twostepAS properties" class="defaultstyle" "> -->
twostepAS properties
Table 1\. twostepAS properties
| `twostepAS` Properties | Values | Property description |
| --------------------------------------------------- | ------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `inputs` | \[*f1 \.\.\. fN*\] | TwoStepAS models use a list of input fields, but no target\. Weight and frequency fields are not recognized\. |
| `use_predefined_roles` | *Boolean* | Default=`True` |
| `use_custom_field_assignments` | *Boolean* | Default=`False` |
| `cluster_num_auto` | *Boolean* | Default=`True` |
| `min_num_clusters` | *integer* | Default=`2` |
| `max_num_clusters` | *integer* | Default=`15` |
| `num_clusters` | *integer* | Default=`5` |
| `clustering_criterion` | `AIC` <br>`BIC` | |
| `automatic_clustering_method` | `use_clustering_criterion_setting` <br>`Distance_jump` <br>`Minimum` <br>`Maximum` | |
| `feature_importance_method` | `use_clustering_criterion_setting` <br>`effect_size` | |
| `use_random_seed` | *Boolean* | |
| `random_seed` | *integer* | |
| `distance_measure` | `Euclidean` <br>`Loglikelihood` | |
| `include_outlier_clusters` | *Boolean* | Default=`True` |
| `num_cases_in_feature_tree_leaf_is_less_than` | *integer* | Default=`10` |
| `top_perc_outliers` | *integer* | Default=`5` |
| `initial_dist_change_threshold` | *integer* | Default=`0` |
| `leaf_node_maximum_branches` | *integer* | Default=`8` |
| `non_leaf_node_maximum_branches` | *integer* | Default=`8` |
| `max_tree_depth` | *integer* | Default=`3` |
| `adjustment_weight_on_measurement_level` | *integer* | Default=`6` |
| `memory_allocation_mb` | *number* | Default=`512` |
| `delayed_split` | *Boolean* | Default=`True` |
| `fields_not_to_standardize` | \[*f1 \.\.\. fN*\] | |
| `adaptive_feature_selection` | *Boolean* | Default=`True` |
| `featureMisPercent` | *integer* | Default=`70` |
| `coefRange` | *number* | Default=`0.05` |
| `percCasesSingleCategory` | *integer* | Default=`95` |
| `numCases` | *integer* | Default=`24` |
| `include_model_specifications` | *Boolean* | Default=`True` |
| `include_record_summary` | *Boolean* | Default=`True` |
| `include_field_transformations` | *Boolean* | Default=`True` |
| `excluded_inputs` | *Boolean* | Default=`True` |
| `evaluate_model_quality` | *Boolean* | Default=`True` |
| `show_feature_importance_bar_chart` | *Boolean* | Default=`True` |
| `show_feature_importance_word_cloud` | *Boolean* | Default=`True` |
| `show_outlier_clusters_interactive_table_and_chart` | *Boolean* | Default=`True` |
| `show_outlier_clusters_pivot_table` | *Boolean* | Default=`True` |
| `across_cluster_feature_importance` | *Boolean* | Default=`True` |
| `across_cluster_profiles_pivot_table` | *Boolean* | Default=`True` |
| `withinprofiles` | *Boolean* | Default=`True` |
| `cluster_distances` | *Boolean* | Default=`True` |
| `cluster_label` | `String` <br>`Number` | |
| `label_prefix` | `String` | |
| `evaluation_maxNum` | *integer* | The maximum number of outliers to display in the output\. If there are more than twenty outlier clusters, a pivot table will be displayed instead\. |
| `across_cluster_profiles_table_and_chart` | *Boolean* | Table and charts of feature importance and cluster centers for each input (field) used in the cluster solution\. Selecting different rows in the table displays a different chart\. For categorical fields, a bar chart is displayed\. For continuous fields, a chart of means and standard deviations is displayed\. |
<!-- </table "summary="twostepAS properties" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
356DD425AD5BE4EE255F2F95F7860B6FDFE3BCC0 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/twostep_as_nuggetnodeslots.html?context=cdpaas&locale=en | applytwostepAS properties | applytwostepAS properties
You can use TwoStep-AS modeling nodes to generate a TwoStep-AS model nugget. The scripting name of this model nugget is applytwostepAS. For more information on scripting the modeling node itself, see [twostepAS properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/twostep_as_nodeslots.html#twostep_as_nodeslots).
applytwostepAS Properties
Table 1. applytwostepAS Properties
applytwostepAS Properties Values Property description
enable_sql_generation false <br>true <br>native When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations.
| # applytwostepAS properties #
You can use TwoStep\-AS modeling nodes to generate a TwoStep\-AS model nugget\. The scripting name of this model nugget is *applytwostepAS*\. For more information on scripting the modeling node itself, see [twostepAS properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/twostep_as_nodeslots.html#twostep_as_nodeslots)\.
<!-- <table "summary="applytwostepAS Properties" id="twostep_as_nuggetnodeslots__table_m5f_l1l_s5" class="defaultstyle" "> -->
applytwostepAS Properties
Table 1\. applytwostepAS Properties
| `applytwostepAS` Properties | Values | Property description |
| --------------------------- | --------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------ |
| `enable_sql_generation` | `false` <br>`true` <br>`native` | When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations\. |
<!-- </table "summary="applytwostepAS Properties" id="twostep_as_nuggetnodeslots__table_m5f_l1l_s5" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
0B54763A8146178F9F4809DA458E4DDBD9E28B39 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/twostepnodeslots.html?context=cdpaas&locale=en | twostepnode properties | twostepnode properties
The TwoStep node uses a two-step clustering method. The first step makes a single pass through the data to compress the raw input data into a manageable set of subclusters. The second step uses a hierarchical clustering method to progressively merge the subclusters into larger and larger clusters. TwoStep has the advantage of automatically estimating the optimal number of clusters for the training data. It can handle mixed field types and large data sets efficiently.
twostepnode properties
Table 1. twostepnode properties
twostepnode Properties Values Property description
inputs [field1 ... fieldN] TwoStep models use a list of input fields, but no target. Weight and frequency fields are not recognized. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.html#modelingnodeslots_common) for more information.
standardize flag
exclude_outliers flag
percentage number
cluster_num_auto flag
min_num_clusters number
max_num_clusters number
num_clusters number
cluster_label String <br>Number
label_prefix string
distance_measure Euclidean <br>Loglikelihood
clustering_criterion AIC <br>BIC
| # twostepnode properties #
The TwoStep node uses a two\-step clustering method\. The first step makes a single pass through the data to compress the raw input data into a manageable set of subclusters\. The second step uses a hierarchical clustering method to progressively merge the subclusters into larger and larger clusters\. TwoStep has the advantage of automatically estimating the optimal number of clusters for the training data\. It can handle mixed field types and large data sets efficiently\.
<!-- <table "summary="twostepnode properties" class="defaultstyle" "> -->
twostepnode properties
Table 1\. twostepnode properties
| `twostepnode` Properties | Values | Property description |
| ------------------------ | -------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `inputs` | \[*field1 \.\.\. fieldN*\] | TwoStep models use a list of input fields, but no target\. Weight and frequency fields are not recognized\. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.html#modelingnodeslots_common) for more information\. |
| `standardize` | *flag* | |
| `exclude_outliers` | *flag* | |
| `percentage` | *number* | |
| `cluster_num_auto` | *flag* | |
| `min_num_clusters` | *number* | |
| `max_num_clusters` | *number* | |
| `num_clusters` | *number* | |
| `cluster_label` | `String` <br>`Number` | |
| `label_prefix` | *string* | |
| `distance_measure` | `Euclidean` <br>`Loglikelihood` | |
| `clustering_criterion` | `AIC` <br>`BIC` | |
<!-- </table "summary="twostepnode properties" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
BAB82891CA84875B6EEC64974558FC838197C99A | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/twostepnuggetnodeslots.html?context=cdpaas&locale=en | applytwostepnode properties | applytwostepnode properties
You can use TwoStep modeling nodes to generate a TwoStep model nugget. The scripting name of this model nugget is applytwostepnode. For more information on scripting the modeling node itself, see [twostepnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/twostepnodeslots.html#twostepnodeslots).
applytwostepnode properties
Table 1. applytwostepnode properties
applytwostepnode Properties Values Property description
enable_sql_generation udf <br>native When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations.
| # applytwostepnode properties #
You can use TwoStep modeling nodes to generate a TwoStep model nugget\. The scripting name of this model nugget is *applytwostepnode*\. For more information on scripting the modeling node itself, see [twostepnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/twostepnodeslots.html#twostepnodeslots)\.
<!-- <table "summary="applytwostepnode properties" id="twostepnuggetnodeslots__table_czt_2wy_ddb" class="defaultstyle" "> -->
applytwostepnode properties
Table 1\. applytwostepnode properties
| `applytwostepnode` Properties | Values | Property description |
| ----------------------------- | ------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------ |
| `enable_sql_generation` | `udf` <br>`native` | When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations\. |
<!-- </table "summary="applytwostepnode properties" id="twostepnuggetnodeslots__table_czt_2wy_ddb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
7EC3F9527921FB3F713DD6AE1D8035E6C81753C4 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/typenodeslots.html?context=cdpaas&locale=en | typenode properties | typenode properties
The Type node specifies field metadata and properties. For example, you can specify a measurement level (continuous, nominal, ordinal, or flag) for each field, set options for handling missing values and system nulls, set the role of a field for modeling purposes, specify field and value labels, and specify values for a field.
Note that in some cases you may need to fully instantiate the Type node for other nodes to work correctly, such as the fields from property of the SetToFlag node. You can simply connect a Table node and run it to instantiate the fields:
tablenode = stream.createAt("table", "Table node", 150, 50)
stream.link(node, tablenode)   # 'node' is the node whose fields are to be instantiated
tablenode.run(None)            # running the Table node reads and records the field values
stream.delete(tablenode)       # remove the temporary Table node
typenode properties
Table 1. typenode properties
typenode properties Data type Property description
direction Input <br>Target <br>Both <br>None <br>Partition <br>Split <br>Frequency <br>RecordID Keyed property for field roles.
type Range <br>Flag <br>Set <br>Typeless <br>Discrete <br>OrderedSet <br>Default Measurement level of the field (previously called the "type" of field). Setting type to Default will clear any values parameter setting, and if value_mode has the value Specify, it will be reset to Read. If value_mode is set to Pass or Read, setting type will not affect value_mode.<br><br>The data types used internally differ from those visible in the Type node. The correspondence is as follows: Range -> Continuous, Set -> Nominal, OrderedSet -> Ordinal, Discrete -> Categorical.
storage Unknown <br>String <br>Integer <br>Real <br>Time <br>Date <br>Timestamp Read-only keyed property for field storage type.
check None <br>Nullify <br>Coerce <br>Discard <br>Warn <br>Abort Keyed property for field type and range checking.
values [value value] For continuous fields, the first value is the minimum, and the last value is the maximum. For nominal fields, specify all values. For flag fields, the first value represents false, and the last value represents true. Setting this property automatically sets the value_mode property to Specify.
value_mode Read <br>Pass <br>Read+ <br>Current <br>Specify Determines how values are set. Note that you cannot set this property to Specify directly; to use specific values, set the values property.
extend_values flag Applies when value_mode is set to Read. Set to T to add newly read values to any existing values for the field. Set to F to discard existing values in favor of the newly read values.
enable_missing flag When set to T, activates tracking of missing values for the field.
missing_values [value value ...] Specifies data values that denote missing data.
range_missing flag Specifies whether a missing-value (blank) range is defined for a field.
missing_lower string When range_missing is true, specifies the lower bound of the missing-value range.
missing_upper string When range_missing is true, specifies the upper bound of the missing-value range.
null_missing flag When set to T, nulls (undefined values that are displayed as $null$ in the software) are considered missing values.
whitespace_missing flag When set to T, values containing only white space (spaces, tabs, and new lines) are considered missing values.
description string Specifies the description for a field.
value_labels [[Value LabelString] [ Value LabelString] ...] Used to specify labels for value pairs.
display_places integer Sets the number of decimal places for the field when displayed (applies only to fields with REAL storage). A value of –1 will use the stream default.
export_places integer Sets the number of decimal places for the field when exported (applies only to fields with REAL storage). A value of –1 will use the stream default.
decimal_separator DEFAULT <br>PERIOD <br>COMMA Sets the decimal separator for the field (applies only to fields with REAL storage).
date_format "DDMMYY" "MMDDYY" "YYMMDD" "YYYYMMDD" "YYYYDDD" DAY MONTH "DD-MM-YY" "DD-MM-YYYY" "MM-DD-YY" "MM-DD-YYYY" "DD-MON-YY" "DD-MON-YYYY" "YYYY-MM-DD" "DD.MM.YY" "DD.MM.YYYY" "MM.DD.YYYY" "DD.MON.YY" "DD.MON.YYYY" "DD/MM/YY" "DD/MM/YYYY" "MM/DD/YY" "MM/DD/YYYY" "DD/MON/YY" "DD/MON/YYYY" MON YYYY q Q YYYY ww WK YYYY Sets the date format for the field (applies only to fields with DATE or TIMESTAMP storage).
time_format "HHMMSS" "HHMM" "MMSS" "HH:MM:SS" "HH:MM" "MM:SS" "(H)H:(M)M:(S)S" "(H)H:(M)M" "(M)M:(S)S" "HH.MM.SS" "HH.MM" "MM.SS" "(H)H.(M)M.(S)S" "(H)H.(M)M" "(M)M.(S)S" Sets the time format for the field (applies only to fields with TIME or TIMESTAMP storage).
number_format DEFAULT <br>STANDARD <br>SCIENTIFIC <br>CURRENCY Sets the number display format for the field.
standard_places integer Sets the number of decimal places for the field when displayed in standard format. A value of –1 will use the stream default.
scientific_places integer Sets the number of decimal places for the field when displayed in scientific format. A value of –1 will use the stream default.
currency_places integer Sets the number of decimal places for the field when displayed in currency format. A value of –1 will use the stream default.
grouping_symbol DEFAULT <br>NONE <br>LOCALE <br>PERIOD <br>COMMA <br>SPACE Sets the grouping symbol for the field.
column_width integer Sets the column width for the field. A value of –1 will set column width to Auto.
justify AUTO <br>CENTER <br>LEFT <br>RIGHT Sets the column justification for the field.
measure_type Range / MeasureType.RANGE <br>Discrete / MeasureType.DISCRETE <br>Flag / MeasureType.FLAG <br>Set / MeasureType.SET <br>OrderedSet / MeasureType.ORDERED_SET <br>Typeless / MeasureType.TYPELESS <br>Collection / MeasureType.COLLECTION <br>Geospatial / MeasureType.GEOSPATIAL This keyed property is similar to type in that it can be used to define the measurement associated with the field. What is different is that in Python scripting, the setter function can also be passed one of the MeasureType values while the getter will always return one of the MeasureType values.
collection_measure Range / MeasureType.RANGE <br>Flag / MeasureType.FLAG <br>Set / MeasureType.SET <br>OrderedSet / MeasureType.ORDERED_SET <br>Typeless / MeasureType.TYPELESS For collection fields (lists with a depth of 0), this keyed property defines the measurement type associated with the underlying values.
geo_type Point <br>MultiPoint <br>LineString <br>MultiLineString <br>Polygon <br>MultiPolygon For geospatial fields, this keyed property defines the type of geospatial object represented by this field. This should be consistent with the list depth of the values.
has_coordinate_system boolean For geospatial fields, this property defines whether this field has a coordinate system.
coordinate_system string For geospatial fields, this keyed property defines the coordinate system for this field.
custom_storage_type Unknown / StorageType.UNKNOWN <br>String / StorageType.STRING <br>Integer / StorageType.INTEGER <br>Real / StorageType.REAL <br>Time / StorageType.TIME <br>Date / StorageType.DATE <br>Timestamp / StorageType.TIMESTAMP <br>List / StorageType.LIST This keyed property is similar to custom_storage in that it can be used to define the override storage for the field. What is different is that in Python scripting, the setter function can also be passed one of the StorageType values while the getter will always return one of the StorageType values.
custom_list_storage_type String / StorageType.STRING <br>Integer / StorageType.INTEGER <br>Real / StorageType.REAL <br>Time / StorageType.TIME <br>Date / StorageType.DATE <br>Timestamp / StorageType.TIMESTAMP For list fields, this keyed property specifies the storage type of the underlying values.
custom_list_depth integer For list fields, this keyed property specifies the depth of the field.
max_list_length integer Only available for data with a measurement level of either Geospatial or Collection. Set the maximum length of the list by specifying the number of elements the list can contain.
max_string_length integer Only available for typeless data and used when you are generating SQL to create a table. Enter the value of the largest string in your data; this generates a column in the table that is big enough to contain the string.
default_value_mode Read <br>Pass Set the default mode for all fields to Read or Pass. The import node passes fields by default, while the Type node reads values by default.
| # typenode properties #
The Type node specifies field metadata and properties\. For example, you can specify a measurement level (continuous, nominal, ordinal, or flag) for each field, set options for handling missing values and system nulls, set the role of a field for modeling purposes, specify field and value labels, and specify values for a field\.
Note that in some cases you may need to fully instantiate the Type node for other nodes to work correctly, such as the `fields from` property of the SetToFlag node\. You can simply connect a Table node and run it to instantiate the fields:
tablenode = stream.createAt("table", "Table node", 150, 50)
stream.link(node, tablenode)   # 'node' is the node whose fields are to be instantiated
tablenode.run(None)            # running the Table node reads and records the field values
stream.delete(tablenode)       # remove the temporary Table node
<!-- <table "summary="typenode properties" id="typenodeslots__table_tkz_1zy_ddb" class="defaultstyle" "> -->
typenode properties
Table 1\. typenode properties
| `typenode` properties | Data type | Property description |
| --------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `direction` | `Input` <br>`Target` <br>`Both` <br>`None` <br>`Partition` <br>`Split` <br>`Frequency` <br>`RecordID` | Keyed property for field roles\. |
| `type` | `Range` <br>`Flag` <br>`Set` <br>`Typeless` <br>`Discrete` <br>`OrderedSet` <br>`Default` | Measurement level of the field (previously called the "type" of field)\. Setting `type` to `Default` will clear any `values` parameter setting, and if `value_mode` has the value `Specify`, it will be reset to `Read`\. If `value_mode` is set to `Pass` or `Read`, setting `type` will not affect `value_mode`\.<br><br>The data types used internally differ from those visible in the Type node\. The correspondence is as follows: Range \-> Continuous, Set \-> Nominal, OrderedSet \-> Ordinal, Discrete \-> Categorical\. |
| `storage` | `Unknown` <br>`String` <br>`Integer` <br>`Real` <br>`Time` <br>`Date` <br>`Timestamp` | Read\-only keyed property for field storage type\. |
| `check` | `None` <br>`Nullify` <br>`Coerce` <br>`Discard` <br>`Warn` <br>`Abort` | Keyed property for field type and range checking\. |
| `values` | \[*value value*\] | For continuous fields, the first value is the minimum, and the last value is the maximum\. For nominal fields, specify all values\. For flag fields, the first value represents *false*, and the last value represents *true*\. Setting this property automatically sets the `value_mode` property to `Specify`\. |
| `value_mode` | `Read` <br>`Pass` <br>`Read+` <br>`Current` <br>`Specify` | Determines how values are set\. Note that you cannot set this property to `Specify` directly; to use specific values, set the `values` property\. |
| `extend_values` | *flag* | Applies when `value_mode` is set to `Read`\. Set to `T` to add newly read values to any existing values for the field\. Set to `F` to discard existing values in favor of the newly read values\. |
| `enable_missing` | *flag* | When set to `T`, activates tracking of missing values for the field\. |
| `missing_values` | \[*value value \.\.\.*\] | Specifies data values that denote missing data\. |
| `range_missing` | *flag* | Specifies whether a missing\-value (blank) range is defined for a field\. |
| `missing_lower` | *string* | When `range_missing` is true, specifies the lower bound of the missing\-value range\. |
| `missing_upper` | *string* | When `range_missing` is true, specifies the upper bound of the missing\-value range\. |
| `null_missing` | *flag* | When set to `T`, *nulls* (undefined values that are displayed as `$null$` in the software) are considered missing values\. |
| `whitespace_missing` | *flag* | When set to `T`, values containing only white space (spaces, tabs, and new lines) are considered missing values\. |
| `description` | *string* | Specifies the description for a field\. |
| `value_labels` | *\[\[Value LabelString\] \[ Value LabelString\] \.\.\.\]* | Used to specify labels for value pairs\. |
| `display_places` | *integer* | Sets the number of decimal places for the field when displayed (applies only to fields with `REAL` storage)\. A value of `–1` will use the stream default\. |
| `export_places` | *integer* | Sets the number of decimal places for the field when exported (applies only to fields with `REAL` storage)\. A value of `–1` will use the stream default\. |
| `decimal_separator` | `DEFAULT` <br>`PERIOD` <br>`COMMA` | Sets the decimal separator for the field (applies only to fields with `REAL` storage)\. |
| `date_format` | `"DDMMYY" "MMDDYY" "YYMMDD" "YYYYMMDD" "YYYYDDD" DAY MONTH "DD-MM-YY" "DD-MM-YYYY" "MM-DD-YY" "MM-DD-YYYY" "DD-MON-YY" "DD-MON-YYYY" "YYYY-MM-DD" "DD.MM.YY" "DD.MM.YYYY" "MM.DD.YYYY" "DD.MON.YY" "DD.MON.YYYY" "DD/MM/YY" "DD/MM/YYYY" "MM/DD/YY" "MM/DD/YYYY" "DD/MON/YY" "DD/MON/YYYY" MON YYYY q Q YYYY ww WK YYYY` | Sets the date format for the field (applies only to fields with `DATE` or `TIMESTAMP` storage)\. |
| `time_format` | `"HHMMSS" "HHMM" "MMSS" "HH:MM:SS" "HH:MM" "MM:SS" "(H)H:(M)M:(S)S" "(H)H:(M)M" "(M)M:(S)S" "HH.MM.SS" "HH.MM" "MM.SS" "(H)H.(M)M.(S)S" "(H)H.(M)M" "(M)M.(S)S"` | Sets the time format for the field (applies only to fields with `TIME` or `TIMESTAMP` storage)\. |
| `number_format` | `DEFAULT` <br>`STANDARD` <br>`SCIENTIFIC` <br>`CURRENCY` | Sets the number display format for the field\. |
| `standard_places` | *integer* | Sets the number of decimal places for the field when displayed in standard format\. A value of `–1` will use the stream default\. |
| `scientific_places` | *integer* | Sets the number of decimal places for the field when displayed in scientific format\. A value of `–1` will use the stream default\. |
| `currency_places` | *integer* | Sets the number of decimal places for the field when displayed in currency format\. A value of `–1` will use the stream default\. |
| `grouping_symbol` | `DEFAULT` <br>`NONE` <br>`LOCALE` <br>`PERIOD` <br>`COMMA` <br>`SPACE` | Sets the grouping symbol for the field\. |
| `column_width` | *integer* | Sets the column width for the field\. A value of `–1` will set column width to `Auto`\. |
| `justify` | `AUTO` <br>`CENTER` <br>`LEFT` <br>`RIGHT` | Sets the column justification for the field\. |
| `measure_type` | `Range / MeasureType.RANGE` <br>`Discrete / MeasureType.DISCRETE` <br>`Flag / MeasureType.FLAG` <br>`Set / MeasureType.SET` <br>`OrderedSet / MeasureType.ORDERED_SET` <br>`Typeless / MeasureType.TYPELESS` <br>`Collection / MeasureType.COLLECTION` <br>`Geospatial / MeasureType.GEOSPATIAL` | This keyed property is similar to `type` in that it can be used to define the measurement associated with the field\. What is different is that in Python scripting, the setter function can also be passed one of the `MeasureType` values while the getter will always return one of the `MeasureType` values\. |
| `collection_measure` | `Range / MeasureType.RANGE` <br>`Flag / MeasureType.FLAG` <br>`Set / MeasureType.SET` <br>`OrderedSet / MeasureType.ORDERED_SET` <br>`Typeless / MeasureType.TYPELESS` | For collection fields (lists with a depth of 0), this keyed property defines the measurement type associated with the underlying values\. |
| `geo_type` | `Point` <br>`MultiPoint` <br>`LineString` <br>`MultiLineString` <br>`Polygon` <br>`MultiPolygon` | For geospatial fields, this keyed property defines the type of geospatial object represented by this field\. This should be consistent with the list depth of the values\. |
| `has_coordinate_system` | *boolean* | For geospatial fields, this property defines whether this field has a coordinate system\. |
| `coordinate_system` | *string* | For geospatial fields, this keyed property defines the coordinate system for this field\. |
| `custom_storage_type` | `Unknown / StorageType.UNKNOWN` <br>`String / StorageType.STRING` <br>`Integer / StorageType.INTEGER` <br>`Real / StorageType.REAL` <br>`Time / StorageType.TIME` <br>`Date / StorageType.DATE` <br>`Timestamp / StorageType.TIMESTAMP` <br>`List / StorageType.LIST` | This keyed property is similar to `custom_storage` in that it can be used to define the override storage for the field\. What is different is that in Python scripting, the setter function can also be passed one of the `StorageType` values while the getter will always return one of the `StorageType` values\. |
| `custom_list_storage_type` | `String / StorageType.STRING` <br>`Integer / StorageType.INTEGER` <br>`Real / StorageType.REAL` <br>`Time / StorageType.TIME` <br>`Date / StorageType.DATE` <br>`Timestamp / StorageType.TIMESTAMP` | For list fields, this keyed property specifies the storage type of the underlying values\. |
| `custom_list_depth` | *integer* | For list fields, this keyed property specifies the depth of the field\. |
| `max_list_length` | *integer* | Only available for data with a measurement level of either Geospatial or Collection\. Set the maximum length of the list by specifying the number of elements the list can contain\. |
| `max_string_length` | *integer* | Only available for typeless data and used when you are generating SQL to create a table\. Enter the value of the largest string in your data; this generates a column in the table that is big enough to contain the string\. |
| `default_value_mode` | `Read` <br>`Pass` | Set the default mode for all fields to `Read` or `Pass`\. The import node passes fields by default, while the Type node reads values by default\. |
<!-- </table "summary="typenode properties" id="typenodeslots__table_tkz_1zy_ddb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
0B14841AF65A8855E9D497EF05270B54B245DAF8 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/userinputnodeslots.html?context=cdpaas&locale=en | userinputnode properties | userinputnode properties
The User Input node provides an easy way to create synthetic data—either from scratch or by altering existing data. This is useful, for example, when you want to create a test dataset for modeling.
userinputnode properties
Table 1. userinputnode properties
userinputnode properties Data type Property description
data
names Structured slot that sets or returns a list of field names generated by the node.
custom_storage Unknown <br>String <br>Integer <br>Real <br>Time <br>Date <br>Timestamp Keyed slot that sets or returns the storage for a field.
data_mode Combined <br>Ordered If Combined is specified, records are generated for each combination of set values and min/max values. The number of records generated is equal to the product of the number of values in each field. If Ordered is specified, one value is taken from each column for each record in order to generate a row of data. The number of records generated is equal to the largest number of values associated with a field. Any fields with fewer data values will be padded with null values.
| # userinputnode properties #
The User Input node provides an easy way to create synthetic data—either from scratch or by altering existing data\. This is useful, for example, when you want to create a test dataset for modeling\.
<!-- <table "summary="userinputnode properties" id="userinputnodeslots__table_kwp_bzy_ddb" class="defaultstyle" "> -->
userinputnode properties
Table 1\. userinputnode properties
| `userinputnode` properties | Data type | Property description |
| -------------------------- | ------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `data` | | |
| `names` | | Structured slot that sets or returns a list of field names generated by the node\. |
| `custom_storage` | `Unknown` <br>`String` <br>`Integer` <br>`Real` <br>`Time` <br>`Date` <br>`Timestamp` | Keyed slot that sets or returns the storage for a field\. |
| `data_mode` | `Combined` <br>`Ordered` | If `Combined` is specified, records are generated for each combination of set values and min/max values\. The number of records generated is equal to the product of the number of values in each field\. If `Ordered` is specified, one value is taken from each column for each record in order to generate a row of data\. The number of records generated is equal to the largest number of values associated with a field\. Any fields with fewer data values will be padded with null values\. |
<!-- </table "summary="userinputnode properties" id="userinputnodeslots__table_kwp_bzy_ddb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
679F2F7A79672580B5FB797D9C5280B1A83806EF | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/using_scripting.html?context=cdpaas&locale=en | Scripting overview | Scripting overview
This section provides high-level descriptions and examples of flow-level scripts and standalone scripts in the SPSS Modeler interface. More information on scripting language, syntax, and commands is provided in the sections that follow.
Notes:
* Some of the properties and features described in this scripting and automation guide aren't available in Watsonx.ai.
* You can't import and run scripts created in SPSS Statistics.
| # Scripting overview #
This section provides high\-level descriptions and examples of flow\-level scripts and standalone scripts in the SPSS Modeler interface\. More information on scripting language, syntax, and commands is provided in the sections that follow\.
Notes:
<!-- <ul> -->
* Some of the properties and features described in this scripting and automation guide aren't available in Watsonx\.ai\.
* You can't import and run scripts created in SPSS Statistics\.
<!-- </ul> -->
<!-- </article "role="article" "> -->
|
B3FFE77064106EE619C664233B7B7A9ABA75C30A | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/webnodeslots.html?context=cdpaas&locale=en | webnode properties | webnode properties
The Web node illustrates the strength of the relationship between values of two or more symbolic (categorical) fields. The graph uses lines of various widths to indicate connection strength. You might use a Web node, for example, to explore the relationship between the purchase of a set of items at an e-commerce site.
webnode properties
Table 1. webnode properties
webnode properties Data type Property description
use_directed_web flag
fields list
to_field field
from_fields list
true_flags_only flag
line_values Absolute <br>OverallPct <br>PctLarger <br>PctSmaller
strong_links_heavier flag
num_links ShowMaximum <br>ShowLinksAbove <br>ShowAll
max_num_links number
links_above number
discard_links_min flag
links_min_records number
discard_links_max flag
links_max_records number
weak_below number
strong_above number
link_size_continuous flag
web_display Circular <br>Network <br>Directed <br>Grid
graph_background color Standard graph colors are described at the beginning of this section.
symbol_size number Specifies a symbol size.
directed_line_values Absolute <br>OverallPct <br>PctTo <br>PctFrom Specify a threshold type.
show_legend boolean You can specify whether the legend is displayed. For plots with a large number of fields, hiding the legend may improve the appearance of the plot.
labels_as_nodes boolean You can include the label text within each node rather than displaying adjacent labels. For plots with a small number of fields, this may result in a more readable chart.
| # webnode properties #
The Web node illustrates the strength of the relationship between values of two or more symbolic (categorical) fields\. The graph uses lines of various widths to indicate connection strength\. You might use a Web node, for example, to explore the relationship between the purchase of a set of items at an e\-commerce site\.
<!-- <table "summary="webnode properties" class="defaultstyle" "> -->
webnode properties
Table 1\. webnode properties
| `webnode` properties | Data type | Property description |
| ---------------------- | --------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `use_directed_web` | *flag* | |
| `fields` | *list* | |
| `to_field` | *field* | |
| `from_fields` | *list* | |
| `true_flags_only` | *flag* | |
| `line_values` | `Absolute` <br>`OverallPct` <br>`PctLarger` <br>`PctSmaller` | |
| `strong_links_heavier` | *flag* | |
| `num_links` | `ShowMaximum` <br>`ShowLinksAbove` <br>`ShowAll` | |
| `max_num_links` | *number* | |
| `links_above` | *number* | |
| `discard_links_min` | *flag* | |
| `links_min_records` | *number* | |
| `discard_links_max` | *flag* | |
| `links_max_records` | *number* | |
| `weak_below` | *number* | |
| `strong_above` | *number* | |
| `link_size_continuous` | *flag* | |
| `web_display` | `Circular` <br>`Network` <br>`Directed` <br>`Grid` | |
| `graph_background` | *color* | Standard graph colors are described at the beginning of this section\. |
| `symbol_size` | *number* | Specifies a symbol size\. |
| `directed_line_values` | `Absolute` <br>`OverallPct` <br>`PctTo` <br>`PctFrom` | Specify a threshold type\. |
| `show_legend` | *boolean* | You can specify whether the legend is displayed\. For plots with a large number of fields, hiding the legend may improve the appearance of the plot\. |
| `labels_as_nodes` | *boolean* | You can include the label text within each node rather than displaying adjacent labels\. For plots with a small number of fields, this may result in a more readable chart\. |
<!-- </table "summary="webnode properties" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
D2EA86E13B810569E718E9DCA4C00DA28A2E1C9A | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/xgboostasnodeslots.html?context=cdpaas&locale=en | xgboostasnode properties | xgboostasnode properties
XGBoost is an advanced implementation of a gradient boosting algorithm. Boosting algorithms iteratively learn weak classifiers and then add them to a final strong classifier. XGBoost is very flexible and provides many parameters that can be overwhelming to most users, so the XGBoost-AS node in SPSS Modeler exposes the core features and commonly used parameters. The XGBoost-AS node is implemented in Spark.
xgboostasnode properties
Table 1. xgboostasnode properties
xgboostasnode properties Data type Property description
target_field field List of the field names for target.
input_fields field List of the field names for inputs.
nWorkers integer The number of workers used to train the XGBoost model. Default is 1.
numThreadPerTask integer The number of threads used per worker. Default is 1.
useExternalMemory Boolean Whether to use external memory as cache. Default is false.
boosterType string The booster type to use. Available options are gbtree, gblinear, or dart. Default is gbtree.
numBoostRound integer The number of rounds for boosting. Specify a value of 0 or higher. Default is 10.
scalePosWeight Double Control the balance of positive and negative weights. Default is 1.
randomseed integer The seed used by the random number generator. Default is 0.
objectiveType string The learning objective. Possible values are reg:linear, reg:logistic, reg:gamma, reg:tweedie, rank:pairwise, binary:logistic, or multi. Note that for flag targets, only binary:logistic or multi can be used. If multi is used, the score result will show the multi:softmax and multi:softprob XGBoost objective types. Default is reg:linear.
evalMetric string Evaluation metrics for validation data. A default metric will be assigned according to the objective. Possible values are rmse, mae, logloss, error, merror, mlogloss, auc, ndcg, map, or gamma-deviance. Default is rmse.
lambda Double L2 regularization term on weights. Increasing this value will make the model more conservative. Specify any number 0 or greater. Default is 1.
alpha Double L1 regularization term on weights. Increasing this value will make the model more conservative. Specify any number 0 or greater. Default is 0.
lambdaBias Double L2 regularization term on bias. If the gblinear booster type is used, this lambda bias linear booster parameter is available. Specify any number 0 or greater. Default is 0.
treeMethod string If the gbtree or dart booster type is used, this tree method parameter for tree growth (and the other tree parameters that follow) is available. It specifies the XGBoost tree construction algorithm to use. Available options are auto, exact, or approx. Default is auto.
maxDepth integer The maximum depth for trees. Specify a value of 2 or higher. Default is 6.
minChildWeight Double The minimum sum of instance weight (hessian) needed in a child. Specify a value of 0 or higher. Default is 1.
maxDeltaStep Double The maximum delta step to allow for each tree's weight estimation. Specify a value of 0 or higher. Default is 0.
sampleSize Double The sub sample ratio of the training instances. Specify a value between 0.1 and 1.0. Default is 1.0.
eta Double The step size shrinkage used during the update step to prevent overfitting. Specify a value between 0 and 1. Default is 0.3.
gamma Double The minimum loss reduction required to make a further partition on a leaf node of the tree. Specify any number 0 or greater. Default is 6.
colsSampleRatio Double The sub sample ratio of columns when constructing each tree. Specify a value between 0.01 and 1. Default is 1.
colsSampleLevel Double The sub sample ratio of columns for each split, in each level. Specify a value between 0.01 and 1. Default is 1.
normalizeType string If the dart booster type is used, this dart parameter and the following three dart parameters are available. This parameter sets the normalization algorithm. Specify tree or forest. Default is tree.
sampleType string The sampling algorithm type. Specify uniform or weighted. Default is uniform.
rateDrop Double The dropout rate dart booster parameter. Specify a value between 0.0 and 1.0. Default is 0.0.
skipDrop Double The dart booster parameter for the probability of skip dropout. Specify a value between 0.0 and 1.0. Default is 0.0.
| # xgboostasnode properties #
XGBoost is an advanced implementation of a gradient boosting algorithm\. Boosting algorithms iteratively learn weak classifiers and then add them to a final strong classifier\. XGBoost is very flexible and provides many parameters that can be overwhelming to most users, so the XGBoost\-AS node in SPSS Modeler exposes the core features and commonly used parameters\. The XGBoost\-AS node is implemented in Spark\.
<!-- <table "summary="xgboostasnode properties" id="xboostasnodeslots__table_r3k_5bw_tz" class="defaultstyle" "> -->
xgboostasnode properties
Table 1\. xgboostasnode properties
| `xgboostasnode` properties | Data type | Property description |
| -------------------------- | --------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `target_field` | *field* | List of the field names for target\. |
| `input_fields` | *field* | List of the field names for inputs\. |
| `nWorkers` | *integer* | The number of workers used to train the XGBoost model\. Default is `1`\. |
| `numThreadPerTask` | *integer* | The number of threads used per worker\. Default is `1`\. |
| `useExternalMemory` | *Boolean* | Whether to use external memory as cache\. Default is false\. |
| `boosterType` | *string* | The booster type to use\. Available options are `gbtree`, `gblinear`, or `dart`\. Default is `gbtree`\. |
| `numBoostRound` | *integer* | The number of rounds for boosting\. Specify a value of `0` or higher\. Default is `10`\. |
| `scalePosWeight` | *Double* | Control the balance of positive and negative weights\. Default is `1`\. |
| `randomseed` | *integer* | The seed used by the random number generator\. Default is 0\. |
| `objectiveType` | *string* | The learning objective\. Possible values are `reg:linear`, `reg:logistic`, `reg:gamma`, `reg:tweedie`, `rank:pairwise`, `binary:logistic`, or `multi`\. Note that for flag targets, only `binary:logistic` or `multi` can be used\. If `multi` is used, the score result will show the `multi:softmax` and `multi:softprob` XGBoost objective types\. Default is `reg:linear`\. |
| `evalMetric` | *string* | Evaluation metrics for validation data\. A default metric will be assigned according to the objective\. Possible values are `rmse`, `mae`, `logloss`, `error`, `merror`, `mlogloss`, `auc`, `ndcg`, `map`, or `gamma-deviance`\. Default is `rmse`\. |
| `lambda` | *Double* | L2 regularization term on weights\. Increasing this value will make the model more conservative\. Specify any number `0` or greater\. Default is `1`\. |
| `alpha` | *Double* | L1 regularization term on weights\. Increasing this value will make the model more conservative\. Specify any number `0` or greater\. Default is `0`\. |
| `lambdaBias` | *Double* | L2 regularization term on bias\. If the `gblinear` booster type is used, this lambda bias linear booster parameter is available\. Specify any number `0` or greater\. Default is `0`\. |
| `treeMethod` | *string* | If the `gbtree` or `dart` booster type is used, this tree method parameter for tree growth (and the other tree parameters that follow) is available\. It specifies the XGBoost tree construction algorithm to use\. Available options are `auto`, `exact`, or `approx`\. Default is `auto`\. |
| `maxDepth` | *integer* | The maximum depth for trees\. Specify a value of `2` or higher\. Default is `6`\. |
| `minChildWeight` | *Double* | The minimum sum of instance weight (hessian) needed in a child\. Specify a value of `0` or higher\. Default is `1`\. |
| `maxDeltaStep` | *Double* | The maximum delta step to allow for each tree's weight estimation\. Specify a value of `0` or higher\. Default is `0`\. |
| `sampleSize` | *Double* | The sub sample ratio of the training instances\. Specify a value between `0.1` and `1.0`\. Default is `1.0`\. |
| `eta` | *Double* | The step size shrinkage used during the update step to prevent overfitting\. Specify a value between `0` and `1`\. Default is `0.3`\. |
| `gamma` | *Double* | The minimum loss reduction required to make a further partition on a leaf node of the tree\. Specify any number `0` or greater\. Default is `6`\. |
| `colsSampleRatio` | *Double* | The sub sample ratio of columns when constructing each tree\. Specify a value between `0.01` and `1`\. Default is `1`\. |
| `colsSampleLevel` | *Double* | The sub sample ratio of columns for each split, in each level\. Specify a value between `0.01` and `1`\. Default is `1`\. |
| `normalizeType` | *string* | If the dart booster type is used, this dart parameter and the following three dart parameters are available\. This parameter sets the normalization algorithm\. Specify `tree` or `forest`\. Default is `tree`\. |
| `sampleType` | *string* | The sampling algorithm type\. Specify `uniform` or `weighted`\. Default is `uniform`\. |
| `rateDrop` | *Double* | The dropout rate dart booster parameter\. Specify a value between `0.0` and `1.0`\. Default is `0.0`\. |
| `skipDrop` | *Double* | The dart booster parameter for the probability of skip dropout\. Specify a value between `0.0` and `1.0`\. Default is `0.0`\. |
<!-- </table "summary="xgboostasnode properties" id="xboostasnodeslots__table_r3k_5bw_tz" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
80CCB2CF7A994D218D5C47BBF7F8BBB0D479E399 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/xgboostlinearnodeslots.html?context=cdpaas&locale=en | xgboostlinearnode properties | xgboostlinearnode properties
XGBoost Linear© is an advanced implementation of a gradient boosting algorithm with a linear model as the base model. Boosting algorithms iteratively learn weak classifiers and then add them to a final strong classifier. The XGBoost Linear node in SPSS Modeler is implemented in Python.
xgboostlinearnode properties
Table 1. xgboostlinearnode properties
xgboostlinearnode properties Data type Property description
custom_fields boolean This option tells the node to use field information specified here instead of that given in any upstream Type node(s). After selecting this option, specify fields as required.
target field
inputs field
alpha Double The alpha linear booster parameter. Specify any number 0 or greater. Default is 0.
lambda Double The lambda linear booster parameter. Specify any number 0 or greater. Default is 1.
lambdaBias Double The lambda bias linear booster parameter. Specify any number. Default is 0.
num_boost_round integer The num boost round value for model building. Specify a value between 1 and 1000. Default is 10.
objectiveType string The objective type for the learning task. Possible values are reg:linear, reg:logistic, reg:gamma, reg:tweedie, count:poisson, rank:pairwise, binary:logistic, or multi. Note that for flag targets, only binary:logistic or multi can be used. If multi is used, the score result will show the multi:softmax and multi:softprob XGBoost objective types.
random_seed integer The random number seed. Any number between 0 and 9999999. Default is 0.
useHPO Boolean Specify true or false to enable or disable the HPO options. If set to true, Rbfopt will be applied to automatically find the "best" XGBoost Linear model, which reaches the target objective value defined by the user with the target_objval parameter.
| # xgboostlinearnode properties #
XGBoost Linear© is an advanced implementation of a gradient boosting algorithm with a linear model as the base model\. Boosting algorithms iteratively learn weak classifiers and then add them to a final strong classifier\. The XGBoost Linear node in SPSS Modeler is implemented in Python\.
<!-- <table "summary="xgboostlinearnode properties" id="xboostlinearnodeslots__table_r3k_5bw_tz" class="defaultstyle" "> -->
xgboostlinearnode properties
Table 1\. xgboostlinearnode properties
| `xgboostlinearnode` properties | Data type | Property description |
| ------------------------------ | --------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `custom_fields` | *boolean* | This option tells the node to use field information specified here instead of that given in any upstream Type node(s)\. After selecting this option, specify fields as required\. |
| `target` | *field* | |
| `inputs` | *field* | |
| `alpha` | *Double* | The alpha linear booster parameter\. Specify any number `0` or greater\. Default is `0`\. |
| `lambda` | *Double* | The lambda linear booster parameter\. Specify any number `0` or greater\. Default is `1`\. |
| `lambdaBias` | *Double* | The lambda bias linear booster parameter\. Specify any number\. Default is `0`\. |
| `num_boost_round` | *integer* | The num boost round value for model building\. Specify a value between `1` and `1000`\. Default is `10`\. |
| `objectiveType` | *string* | The objective type for the learning task\. Possible values are `reg:linear`, `reg:logistic`, `reg:gamma`, `reg:tweedie`, `count:poisson`, `rank:pairwise`, `binary:logistic`, or `multi`\. Note that for flag targets, only `binary:logistic` or `multi` can be used\. If `multi` is used, the score result will show the `multi:softmax` and `multi:softprob` XGBoost objective types\. |
| `random_seed` | *integer* | The random number seed\. Any number between `0` and `9999999`\. Default is `0`\. |
| `useHPO` | *Boolean* | Specify `true` or `false` to enable or disable the HPO options\. If set to `true`, Rbfopt will be applied to automatically find the "best" XGBoost Linear model, which reaches the target objective value defined by the user with the `target_objval` parameter\. |
<!-- </table "summary="xgboostlinearnode properties" id="xboostlinearnodeslots__table_r3k_5bw_tz" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
8672A0AEF022CD97D9E834AB2FD3A607FBDAED4D | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/xgboostlinearnuggetnodeslots.html?context=cdpaas&locale=en | applyxgboostlinearnode properties | applyxgboostlinearnode properties
XGBoost Linear nodes can be used to generate an XGBoost Linear model nugget. The scripting name of this model nugget is applyxgboostlinearnode. No other properties exist for this model nugget. For more information on scripting the modeling node itself, see [xgboostlinearnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/xgboostlinearnodeslots.htmlxboostlinearnodeslots).
| # applyxgboostlinearnode properties #
XGBoost Linear nodes can be used to generate an XGBoost Linear model nugget\. The scripting name of this model nugget is *applyxgboostlinearnode*\. No other properties exist for this model nugget\. For more information on scripting the modeling node itself, see [xgboostlinearnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/xgboostlinearnodeslots.html#xboostlinearnodeslots)\.
<!-- </article "role="article" "> -->
|
D05D9570CD32ACCCF91588C5886A1C4F5DA56D01 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/xgboosttreenodeslots.html?context=cdpaas&locale=en | xgboosttreenode properties | xgboosttreenode properties
XGBoost Tree© is an advanced implementation of a gradient boosting algorithm with a tree model as the base model. Boosting algorithms iteratively learn weak classifiers and then add them to a final strong classifier. XGBoost Tree is very flexible and provides many parameters that can be overwhelming to most users, so the XGBoost Tree node in SPSS Modeler exposes the core features and commonly used parameters. The node is implemented in Python.
xgboosttreenode properties
Table 1. xgboosttreenode properties
xgboosttreenode properties Data type Property description
custom_fields boolean This option tells the node to use field information specified here instead of that given in any upstream Type node(s). After selecting this option, specify the fields as required.
target field The target fields.
inputs field The input fields.
tree_method string The tree method for model building. Possible values are auto, exact, or approx. Default is auto.
num_boost_round integer The num boost round value for model building. Specify a value between 1 and 1000. Default is 10.
max_depth integer The max depth for tree growth. Specify a value of 1 or higher. Default is 6.
min_child_weight Double The min child weight for tree growth. Specify a value of 0 or higher. Default is 1.
max_delta_step Double The max delta step for tree growth. Specify a value of 0 or higher. Default is 0.
objective_type string The objective type for the learning task. Possible values are reg:linear, reg:logistic, reg:gamma, reg:tweedie, count:poisson, rank:pairwise, binary:logistic, or multi. Note that for flag targets, only binary:logistic or multi can be used. If multi is used, the score result will show the multi:softmax and multi:softprob XGBoost objective types.
early_stopping Boolean Whether to use the early stopping function. Default is False.
early_stopping_rounds integer Validation error needs to decrease at least every early stopping round(s) to continue training. Default is 10.
evaluation_data_ratio Double Ratio of input data used for validation errors. Default is 0.3.
random_seed integer The random number seed. Any number between 0 and 9999999. Default is 0.
sample_size Double The sub sample for control overfitting. Specify a value between 0.1 and 1.0. Default is 0.1.
eta Double The eta for control overfitting. Specify a value between 0 and 1. Default is 0.3.
gamma Double The gamma for control overfitting. Specify any number 0 or greater. Default is 6.
col_sample_ratio Double The colsample by tree for control overfitting. Specify a value between 0.01 and 1. Default is 1.
col_sample_level Double The colsample by level for control overfitting. Specify a value between 0.01 and 1. Default is 1.
lambda Double The lambda for control overfitting. Specify any number 0 or greater. Default is 1.
alpha Double The alpha for control overfitting. Specify any number 0 or greater. Default is 0.
scale_pos_weight Double The scale pos weight for handling imbalanced datasets. Default is 1.
use_HPO Boolean Specify true or false to enable or disable the HPO options.
| # xgboosttreenode properties #
XGBoost Tree© is an advanced implementation of a gradient boosting algorithm with a tree model as the base model\. Boosting algorithms iteratively learn weak classifiers and then add them to a final strong classifier\. XGBoost Tree is very flexible and provides many parameters that can be overwhelming to most users, so the XGBoost Tree node in SPSS Modeler exposes the core features and commonly used parameters\. The node is implemented in Python\.
<!-- <table "summary="xgboosttreenode properties" id="xboosttreenodeslots__table_r3k_5bw_tz" class="defaultstyle" "> -->
xgboosttreenode properties
Table 1\. xgboosttreenode properties
| `xgboosttreenode` properties | Data type | Property description |
| ---------------------------- | --------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `custom_fields` | *boolean* | This option tells the node to use field information specified here instead of that given in any upstream Type node(s)\. After selecting this option, specify the fields as required\. |
| `target` | *field* | The target fields\. |
| `inputs` | *field* | The input fields\. |
| `tree_method` | *string* | The tree method for model building\. Possible values are `auto`, `exact`, or `approx`\. Default is `auto`\. |
| `num_boost_round` | *integer* | The num boost round value for model building\. Specify a value between `1` and `1000`\. Default is `10`\. |
| `max_depth` | *integer* | The max depth for tree growth\. Specify a value of `1` or higher\. Default is `6`\. |
| `min_child_weight` | *Double* | The min child weight for tree growth\. Specify a value of `0` or higher\. Default is `1`\. |
| `max_delta_step` | *Double* | The max delta step for tree growth\. Specify a value of `0` or higher\. Default is `0`\. |
| `objective_type` | *string* | The objective type for the learning task\. Possible values are `reg:linear`, `reg:logistic`, `reg:gamma`, `reg:tweedie`, `count:poisson`, `rank:pairwise`, `binary:logistic`, or `multi`\. Note that for flag targets, only `binary:logistic` or `multi` can be used\. If `multi` is used, the score result will show the `multi:softmax` and `multi:softprob` XGBoost objective types\. |
| `early_stopping` | *Boolean* | Whether to use the early stopping function\. Default is `False`\. |
| `early_stopping_rounds` | *integer* | Validation error needs to decrease at least every early stopping round(s) to continue training\. Default is `10`\. |
| `evaluation_data_ratio` | *Double* | Ratio of input data used for validation errors\. Default is `0.3`\. |
| `random_seed` | *integer* | The random number seed\. Any number between `0` and `9999999`\. Default is `0`\. |
| `sample_size` | *Double* | The sub sample for control overfitting\. Specify a value between `0.1` and `1.0`\. Default is `0.1`\. |
| `eta` | *Double* | The eta for control overfitting\. Specify a value between `0` and `1`\. Default is `0.3`\. |
| `gamma` | *Double* | The gamma for control overfitting\. Specify any number `0` or greater\. Default is `6`\. |
| `col_sample_ratio` | *Double* | The colsample by tree for control overfitting\. Specify a value between `0.01` and `1`\. Default is `1`\. |
| `col_sample_level` | *Double* | The colsample by level for control overfitting\. Specify a value between `0.01` and `1`\. Default is `1`\. |
| `lambda` | *Double* | The lambda for control overfitting\. Specify any number `0` or greater\. Default is `1`\. |
| `alpha` | *Double* | The alpha for control overfitting\. Specify any number `0` or greater\. Default is `0`\. |
| `scale_pos_weight` | *Double* | The scale pos weight for handling imbalanced datasets\. Default is `1`\. |
| `use_HPO` | *Boolean* | Specify `true` or `false` to enable or disable the HPO options\. |
<!-- </table "summary="xgboosttreenode properties" id="xboosttreenodeslots__table_r3k_5bw_tz" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
116575C57D15C410AC921AEBFAF607E2F86E6C05 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/xgboosttreenuggetnodeslots.html?context=cdpaas&locale=en | applyxgboosttreenode properties | applyxgboosttreenode properties
You can use the XGBoost Tree node to generate an XGBoost Tree model nugget. The scripting name of this model nugget is applyxgboosttreenode. For more information on scripting the modeling node itself, see [xgboosttreenode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/xgboosttreenodeslots.htmlxboosttreenodeslots).
applyxgboosttreenode properties
Table 1. applyxgboosttreenode properties
applyxgboosttreenode properties Data type Property description
use_model_name
model_name
| # applyxgboosttreenode properties #
You can use the XGBoost Tree node to generate an XGBoost Tree model nugget\. The scripting name of this model nugget is *applyxgboosttreenode*\. For more information on scripting the modeling node itself, see [xgboosttreenode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/xgboosttreenodeslots.html#xboosttreenodeslots)\.
<!-- <table "summary="applyxgboosttreenode properties" id="kmeansnuggetnodeslots__table_r3k_5bw_tz" class="defaultstyle" "> -->
applyxgboosttreenode properties
Table 1\. applyxgboosttreenode properties
| `applyxgboosttreenode` properties | Data type | Property description |
| --------------------------------- | --------- | -------------------- |
| `use_model_name` | | |
| `model_name` | | |
<!-- </table "summary="applyxgboosttreenode properties" id="kmeansnuggetnodeslots__table_r3k_5bw_tz" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
5C2F280E5C4326883F7B3623EF1B64FE4DDE7C05 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/select.html?context=cdpaas&locale=en | Select node (SPSS Modeler) | Select node
You can use Select nodes to select or discard a subset of records from the data stream based on a specific condition, such as BP (blood pressure) = "HIGH".
Mode. Specifies whether records that meet the condition will be included or excluded from the data stream.
* Include. Select to include records that meet the selection condition.
* Discard. Select to exclude records that meet the selection condition.
Condition. Displays the selection condition that will be used to test each record, which you specify using a CLEM expression. Either enter an expression in the window or use the Expression Builder by clicking the calculator (Expression Builder) button.
If you choose to discard records based on a condition, such as the following:
(var1='value1' and var2='value2')
the Select node by default also discards records having null values for all selection fields. To avoid this, append the following condition to the original one:
and not(@NULL(var1) and @NULL(var2))
Select nodes are also used to choose a proportion of records. Typically, you would use a different node, the Sample node, for this operation. However, if the condition you want to specify is more complex than the parameters provided, you can create your own condition using the Select node. For example, you can create a condition such as:
BP = "HIGH" and random(10) <= 4
This will select approximately 40% of the records showing high blood pressure and pass those records downstream for further analysis.
| # Select node #
You can use Select nodes to select or discard a subset of records from the data stream based on a specific condition, such as BP (blood pressure) = "HIGH"\.
Mode\. Specifies whether records that meet the condition will be included or excluded from the data stream\.
<!-- <ul> -->
* Include\. Select to include records that meet the selection condition\.
* Discard\. Select to exclude records that meet the selection condition\.
<!-- </ul> -->
Condition\. Displays the selection condition that will be used to test each record, which you specify using a CLEM expression\. Either enter an expression in the window or use the Expression Builder by clicking the calculator (Expression Builder) button\.
If you choose to discard records based on a condition, such as the following:
(var1='value1' and var2='value2')
the Select node by default also discards records having null values for all selection fields\. To avoid this, append the following condition to the original one:
and not(@NULL(var1) and @NULL(var2))
Select nodes are also used to choose a proportion of records\. Typically, you would use a different node, the Sample node, for this operation\. However, if the condition you want to specify is more complex than the parameters provided, you can create your own condition using the Select node\. For example, you can create a condition such as:
BP = "HIGH" and random(10) <= 4
This will select approximately 40% of the records showing high blood pressure and pass those records downstream for further analysis\.
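For example, the following script fragment builds this selection using the `mode` and `condition` properties of the Select node\. This is a minimal sketch; the node position and stream context are assumptions\.

```python
# A minimal sketch of the scripting equivalent, run inside the
# SPSS Modeler scripting environment with a stream open
stream = modeler.script.stream()
select = stream.createAt("select", "Select", 200, 100)
select.setPropertyValue("mode", "Include")
# Passes roughly 40% of the high blood pressure records, as described above
select.setPropertyValue("condition", 'BP = "HIGH" and random(10) <= 4')
```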
<!-- </article "role="article" "> -->
|
CBC6BDA4EC8356F2CE95DD4548406ABEE1EC5B76 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/sequence.html?context=cdpaas&locale=en | Sequence node (SPSS Modeler) | Sequence node
The Sequence node discovers patterns in sequential or time-oriented data, in the format bread -> cheese. The elements of a sequence are item sets that constitute a single transaction.
For example, if a person goes to the store and purchases bread and milk and then a few days later returns to the store and purchases some cheese, that person's buying activity can be represented as two item sets. The first item set contains bread and milk, and the second one contains cheese. A sequence is a list of item sets that tend to occur in a predictable order. The Sequence node detects frequent sequences and creates a generated model node that can be used to make predictions.
Requirements. To create a Sequence rule set, you need to specify an ID field, an optional time field, and one or more content fields. Note that these settings must be made on the Fields tab of the modeling node; they cannot be read from an upstream Type node. The ID field can have any role or measurement level. If you specify a time field, it can have any role but its storage must be numeric, date, time, or timestamp. If you do not specify a time field, the Sequence node will use an implied timestamp, in effect using row numbers as time values. Content fields can have any measurement level and role, but all content fields must be of the same type. If they are numeric, they must be integer ranges (not real ranges).
Strengths. The Sequence node is based on the CARMA association rules algorithm, which uses an efficient two-pass method for finding sequences. In addition, the generated model node created by a Sequence node can be inserted into a data stream to create predictions. The generated model node can also generate supernodes for detecting and counting specific sequences and for making predictions based on specific sequences.
| # Sequence node #
The Sequence node discovers patterns in sequential or time\-oriented data, in the format `bread -> cheese`\. The elements of a sequence are item sets that constitute a single transaction\.
For example, if a person goes to the store and purchases bread and milk and then a few days later returns to the store and purchases some cheese, that person's buying activity can be represented as two item sets\. The first item set contains bread and milk, and the second one contains cheese\. A sequence is a list of item sets that tend to occur in a predictable order\. The Sequence node detects frequent sequences and creates a generated model node that can be used to make predictions\.
Requirements\. To create a Sequence rule set, you need to specify an ID field, an optional time field, and one or more content fields\. Note that these settings must be made on the Fields tab of the modeling node; they cannot be read from an upstream Type node\. The ID field can have any role or measurement level\. If you specify a time field, it can have any role but its storage must be numeric, date, time, or timestamp\. If you do not specify a time field, the Sequence node will use an implied timestamp, in effect using row numbers as time values\. Content fields can have any measurement level and role, but all content fields must be of the same type\. If they are numeric, they must be integer ranges (not real ranges)\.
Strengths\. The Sequence node is based on the CARMA association rules algorithm, which uses an efficient two\-pass method for finding sequences\. In addition, the generated model node created by a Sequence node can be inserted into a data stream to create predictions\. The generated model node can also generate supernodes for detecting and counting specific sequences and for making predictions based on specific sequences\.
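For example, the following script fragment specifies the required ID, time, and content fields for a Sequence node\. This is a minimal sketch: the node type string `"sequence"`, the property names (`id_field`, `use_time_field`, `time_field`, `content_fields`), and the field names are assumptions based on the requirements described above\.

```python
# A minimal sketch, assuming the Sequence node type string and property names shown
stream = modeler.script.stream()
seq = stream.createAt("sequence", "Sequence", 200, 100)
seq.setPropertyValue("id_field", "CustomerID")       # hypothetical field name
seq.setPropertyValue("use_time_field", True)
seq.setPropertyValue("time_field", "PurchaseDate")   # numeric, date, time, or timestamp
seq.setPropertyValue("content_fields", ["Item1", "Item2", "Item3"])
```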
<!-- </article "role="article" "> -->
|
A447EC7366D2EB328BCE8E44A73B3A825A9B757B | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/setglobals.html?context=cdpaas&locale=en | Set Globals node (SPSS Modeler) | Set Globals node
The Set Globals node scans the data and computes summary values that can be used in CLEM expressions.
For example, you can use a Set Globals node to compute statistics for a field called age and then use the overall mean of age in CLEM expressions by inserting the function @GLOBAL_MEAN(age).
| # Set Globals node #
The Set Globals node scans the data and computes summary values that can be used in CLEM expressions\.
For example, you can use a Set Globals node to compute statistics for a field called `age` and then use the overall mean of `age` in CLEM expressions by inserting the function `@GLOBAL_MEAN(age)`\.
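The following script fragment sketches this example\. It is a minimal sketch: the node type string `"setglobals"` and the keyed property `globals` (keyed by field name, with a list of statistics as its value) are assumptions\.

```python
# A minimal sketch, assuming the node type string "setglobals" and the keyed
# property "globals", whose value lists the statistics to compute for a field
stream = modeler.script.stream()
sg = stream.createAt("setglobals", "Set Globals", 200, 100)
sg.setKeyedPropertyValue("globals", "age", ["Mean", "Max"])
# Once the node has run, @GLOBAL_MEAN(age) and @GLOBAL_MAX(age) can be
# used in downstream CLEM expressions
```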
<!-- </article "role="article" "> -->
|
5CC48263B0C282CA1D65ACCB46D73D7EA3C8A665 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/settoflag.html?context=cdpaas&locale=en | Set to Flag node (SPSS Modeler) | Set to Flag node
Use the Set to Flag node to derive flag fields based on the categorical values defined for one or more nominal fields.
For example, your dataset might contain a nominal field, BP (blood pressure), with the values High, Normal, and Low. For easier data manipulation, you might create a flag field for high blood pressure, which indicates whether or not the patient has high blood pressure.
| # Set to Flag node #
Use the Set to Flag node to derive flag fields based on the categorical values defined for one or more nominal fields\.
For example, your dataset might contain a nominal field, `BP` (blood pressure), with the values `High`, `Normal`, and `Low`\. For easier data manipulation, you might create a flag field for high blood pressure, which indicates whether or not the patient has high blood pressure\.
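The following script fragment sketches this example\. It is a minimal sketch: the node type string `"settoflag"` and the keyed property `fields_from` (mapping a nominal field to the values that become flag fields) are assumptions\.

```python
# A minimal sketch, assuming the node type string "settoflag" and the keyed
# property "fields_from" mapping a nominal field to its flagged values
stream = modeler.script.stream()
stf = stream.createAt("settoflag", "Set to Flag", 200, 100)
stf.setKeyedPropertyValue("fields_from", "BP", ["HIGH"])  # derives a BP_HIGH flag field
stf.setPropertyValue("true_value", "T")
stf.setPropertyValue("false_value", "F")
```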
<!-- </article "role="article" "> -->
|