doc_id | url | title | document | md_document
---|---|---|---|---|
CCDF1D5375060FCDE288A920A6F3C1B48454C6DB | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/dataauditnodeslots.html?context=cdpaas&locale=en | dataauditnode properties | dataauditnode properties
The Data Audit node provides a comprehensive first look at the data, including summary statistics, histograms and distribution for each field, as well as information on outliers, missing values, and extremes. Results are displayed in an easy-to-read matrix that can be sorted and used to generate full-size graphs and data preparation nodes.
dataauditnode properties
Table 1. dataauditnode properties
dataauditnode properties Data type Property description
custom_fields flag
fields [field1 … fieldN]
overlay field
display_graphs flag Used to turn the display of graphs in the output matrix on or off.
basic_stats flag
advanced_stats flag
median_stats flag
calculate Count, Breakdown Used to calculate missing values. Select either, both, or neither calculation method.
outlier_detection_method std, iqr Used to specify the detection method for outliers and extreme values.
outlier_detection_std_outlier number If outlier_detection_method is std, specifies the number to use to define outliers.
outlier_detection_std_extreme number If outlier_detection_method is std, specifies the number to use to define extreme values.
outlier_detection_iqr_outlier number If outlier_detection_method is iqr, specifies the number to use to define outliers.
outlier_detection_iqr_extreme number If outlier_detection_method is iqr, specifies the number to use to define extreme values.
use_output_name flag Specifies whether a custom output name is used.
output_name string If use_output_name is true, specifies the name to use.
output_mode Screen, File Used to specify target location for output generated from the output node.
output_format Formatted (.tab) Delimited (.csv) HTML (.html) Output (.cou) Used to specify the type of output.
paginate_output flag When the output_format is HTML, causes the output to be separated into pages.
lines_per_page number When used with paginate_output, specifies the lines per page of output.
full_filename string
| # dataauditnode properties #
The Data Audit node provides a comprehensive first look at the data, including summary statistics, histograms and distribution for each field, as well as information on outliers, missing values, and extremes\. Results are displayed in an easy\-to\-read matrix that can be sorted and used to generate full\-size graphs and data preparation nodes\.
<!-- <table "summary="dataauditnode properties" id="dataauditnodeslots__table_edy_gbj_cdb" class="defaultstyle" "> -->
dataauditnode properties
Table 1\. dataauditnode properties
| `dataauditnode` properties | Data type | Property description |
| ------------------------------- | -------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------- |
| `custom_fields` | *flag* | |
| `fields` | *\[field1 … fieldN\]* | |
| `overlay` | *field* | |
| `display_graphs` | *flag* | Used to turn the display of graphs in the output matrix on or off\. |
| `basic_stats` | *flag* | |
| `advanced_stats` | *flag* | |
| `median_stats` | *flag* | |
| `calculate`                     | `Count` <br>`Breakdown`                                                           | Used to calculate missing values\. Select either, both, or neither calculation method\.         |
| `outlier_detection_method`      | `std` <br>`iqr`                                                                   | Used to specify the detection method for outliers and extreme values\.                          |
| `outlier_detection_std_outlier` | *number* | If `outlier_detection_method` is `std`, specifies the number to use to define outliers\. |
| `outlier_detection_std_extreme` | *number* | If `outlier_detection_method` is `std`, specifies the number to use to define extreme values\. |
| `outlier_detection_iqr_outlier` | *number* | If `outlier_detection_method` is `iqr`, specifies the number to use to define outliers\. |
| `outlier_detection_iqr_extreme` | *number* | If `outlier_detection_method` is `iqr`, specifies the number to use to define extreme values\. |
| `use_output_name` | *flag* | Specifies whether a custom output name is used\. |
| `output_name` | *string* | If `use_output_name` is true, specifies the name to use\. |
| `output_mode`                   | `Screen` <br>`File`                                                               | Used to specify target location for output generated from the output node\.                     |
| `output_format` | `Formatted` (\.*tab*) `Delimited` (\.*csv*) `HTML` (\.*html*) `Output` (\.*cou*) | Used to specify the type of output\. |
| `paginate_output` | *flag* | When the `output_format` is `HTML`, causes the output to be separated into pages\. |
| `lines_per_page` | *number* | When used with `paginate_output`, specifies the lines per page of output\. |
| `full_filename` | *string* | |
<!-- </table "summary="dataauditnode properties" id="dataauditnodeslots__table_edy_gbj_cdb" class="defaultstyle" "> -->
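For example, a script along the following lines (a minimal sketch; it assumes the Data Audit node's scripting type name is `dataaudit`, with property values taken from the table above) configures IQR\-based outlier detection with a named output:

node = stream.create("dataaudit", "My Data Audit")
node.setPropertyValue("basic_stats", True)
node.setPropertyValue("outlier_detection_method", "iqr")
node.setPropertyValue("outlier_detection_iqr_outlier", 1.5)
node.setPropertyValue("use_output_name", True)
node.setPropertyValue("output_name", "Audit Results")
node.setPropertyValue("output_format", "HTML")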
<!-- </article "role="article" "> -->
|
DAFB63017668C5DD34A07A1850CE9E9A37D0F525 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/decisionlistnodeslots.html?context=cdpaas&locale=en | decisionlistnode properties | decisionlistnode properties
The Decision List node identifies subgroups, or segments, that show a higher or lower likelihood of a given binary outcome relative to the overall population. For example, you might look for customers who are unlikely to churn or are most likely to respond favorably to a campaign. You can incorporate your business knowledge into the model by adding your own custom segments and previewing alternative models side by side to compare the results. Decision List models consist of a list of rules in which each rule has a condition and an outcome. Rules are applied in order, and the first rule that matches determines the outcome.
decisionlistnode properties
Table 1. decisionlistnode properties
decisionlistnode Properties Values Property description
target field Decision List models use a single target and one or more input fields. A frequency field can also be specified. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.html#modelingnodeslots_common) for more information.
model_output_type Model, InteractiveBuilder
search_direction Up, Down Relates to finding segments; where Up is the equivalent of High Probability, and Down is the equivalent of Low Probability.
target_value string If not specified, will assume true value for flags.
max_rules integer The maximum number of segments excluding the remainder.
min_group_size integer Minimum segment size.
min_group_size_pct number Minimum segment size as a percentage.
confidence_level number Minimum threshold that an input field has to improve the likelihood of response (give lift), to make it worth adding to a segment definition.
max_segments_per_rule integer
mode Simple, Expert
bin_method EqualWidth, EqualCount
bin_count number
max_models_per_cycle integer Search width for lists.
max_rules_per_cycle integer Search width for segment rules.
segment_growth number
include_missing flag
final_results_only flag
reuse_fields flag Allows attributes (input fields which appear in rules) to be re-used.
max_alternatives integer
calculate_raw_propensities flag
calculate_adjusted_propensities flag
adjusted_propensity_partition Test, Validation
| # decisionlistnode properties #
The Decision List node identifies subgroups, or segments, that show a higher or lower likelihood of a given binary outcome relative to the overall population\. For example, you might look for customers who are unlikely to churn or are most likely to respond favorably to a campaign\. You can incorporate your business knowledge into the model by adding your own custom segments and previewing alternative models side by side to compare the results\. Decision List models consist of a list of rules in which each rule has a condition and an outcome\. Rules are applied in order, and the first rule that matches determines the outcome\.
<!-- <table "summary="decisionlistnode properties" id="decisionlistnodeslots__table_e2q_jbj_cdb" class="defaultstyle" "> -->
decisionlistnode properties
Table 1\. decisionlistnode properties
| `decisionlistnode` Properties | Values | Property description |
| --------------------------------- | --------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `target` | *field* | Decision List models use a single target and one or more input fields\. A frequency field can also be specified\. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.html#modelingnodeslots_common) for more information\. |
| `model_output_type` | `Model` <br>`InteractiveBuilder` | |
| `search_direction` | `Up` <br>`Down` | Relates to finding segments; where Up is the equivalent of High Probability, and Down is the equivalent of Low Probability\. |
| `target_value` | *string* | If not specified, will assume true value for flags\. |
| `max_rules` | *integer* | The maximum number of segments excluding the remainder\. |
| `min_group_size` | *integer* | Minimum segment size\. |
| `min_group_size_pct` | *number* | Minimum segment size as a percentage\. |
| `confidence_level` | *number* | Minimum threshold that an input field has to improve the likelihood of response (give lift), to make it worth adding to a segment definition\. |
| `max_segments_per_rule` | *integer* | |
| `mode` | `Simple` <br>`Expert` | |
| `bin_method` | `EqualWidth` <br>`EqualCount` | |
| `bin_count` | *number* | |
| `max_models_per_cycle` | *integer* | Search width for lists\. |
| `max_rules_per_cycle` | *integer* | Search width for segment rules\. |
| `segment_growth` | *number* | |
| `include_missing` | *flag* | |
| `final_results_only` | *flag* | |
| `reuse_fields` | *flag* | Allows attributes (input fields which appear in rules) to be re\-used\. |
| `max_alternatives` | *integer* | |
| `calculate_raw_propensities` | *flag* | |
| `calculate_adjusted_propensities` | *flag* | |
| `adjusted_propensity_partition` | `Test` <br>`Validation` | |
<!-- </table "summary="decisionlistnode properties" id="decisionlistnodeslots__table_e2q_jbj_cdb" class="defaultstyle" "> -->
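A minimal scripting sketch (it assumes the node's scripting type name is `decisionlist` and uses a hypothetical flag target field named `churn`):

node = stream.create("decisionlist", "My Decision List")
node.setPropertyValue("target", "churn")
node.setPropertyValue("search_direction", "Down")
node.setPropertyValue("max_rules", 5)
node.setPropertyValue("min_group_size_pct", 5.0)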
<!-- </article "role="article" "> -->
|
082349F7C1E486D18BCA3BB7569D4DE25A8E81A7 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/decisionlistnuggetnodeslots.html?context=cdpaas&locale=en | applydecisionlistnode properties | applydecisionlistnode properties
You can use Decision List modeling nodes to generate a Decision List model nugget. The scripting name of this model nugget is applydecisionlistnode. For more information on scripting the modeling node itself, see [decisionlistnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/decisionlistnodeslots.html#decisionlistnodeslots).
applydecisionlistnode properties
Table 1. applydecisionlistnode properties
applydecisionlistnode Properties Values Property description
calculate_raw_propensities flag
calculate_adjusted_propensities flag
enable_sql_generation false, true, native When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations. When enabled, SPSS Modeler tries to push back the Decision List model to SQL.
| # applydecisionlistnode properties #
You can use Decision List modeling nodes to generate a Decision List model nugget\. The scripting name of this model nugget is *applydecisionlistnode*\. For more information on scripting the modeling node itself, see [decisionlistnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/decisionlistnodeslots.html#decisionlistnodeslots)\.
<!-- <table "summary="applydecisionlistnode properties" id="decisionlistnuggetnodeslots__table_yky_kbj_cdb" class="defaultstyle" "> -->
applydecisionlistnode properties
Table 1\. applydecisionlistnode properties
| `applydecisionlistnode` Properties | Values | Property description |
| ---------------------------------- | --------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------ |
| `calculate_raw_propensities` | *flag* | |
| `calculate_adjusted_propensities` | *flag* | |
| `enable_sql_generation` | `false` <br>`true` <br>`native` | When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations\. When enabled, SPSS Modeler tries to push back the Decision List model to SQL\. |
<!-- </table "summary="applydecisionlistnode properties" id="decisionlistnuggetnodeslots__table_yky_kbj_cdb" class="defaultstyle" "> -->
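For example, a brief sketch (it assumes `applynode` already references the Decision List model nugget in the flow):

applynode.setPropertyValue("calculate_raw_propensities", True)
applynode.setPropertyValue("enable_sql_generation", "native")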
<!-- </article "role="article" "> -->
|
CA6F118DBE9A1782053FE1F5F4697DDA07A7A365 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/defining_slot_parameters_in_streams.html?context=cdpaas&locale=en | Flow properties | Flow properties
You can control a variety of flow properties with scripting. To reference flow properties, you must set the execution method to use scripts:
stream = modeler.script.stream()
stream.setPropertyValue("execute_method", "Script")
A script can then use the nodes property to build a list of all nodes in the flow and write that list to the flow annotation. The annotation produced looks like this:
This flow is called "druglearn" and contains the following nodes:
type node called "Define Types"
derive node called "Na_to_K"
variablefile node called "DRUG1n"
neuralnetwork node called "Drug"
c50 node called "Drug"
filter node called "Discard Fields"
Flow properties are described in the following table.
Flow properties
Table 1. Flow properties
Property name Data type Property description
execute_method Normal <br>Script
date_format "DDMMYY" "MMDDYY" "YYMMDD" "YYYYMMDD" "YYYYDDD" DAY MONTH "DD-MM-YY" "DD-MM-YYYY" "MM-DD-YY" "MM-DD-YYYY" "DD-MON-YY" "DD-MON-YYYY" "YYYY-MM-DD" "DD.MM.YY" "DD.MM.YYYY" "MM.DD.YYYY" "DD.MON.YY" "DD.MON.YYYY" "DD/MM/YY" "DD/MM/YYYY" "MM/DD/YY" "MM/DD/YYYY" "DD/MON/YY" "DD/MON/YYYY" MON YYYY q Q YYYY ww WK YYYY
date_baseline number
date_2digit_baseline number
time_format "HHMMSS" "HHMM" "MMSS" "HH:MM:SS" "HH:MM" "MM:SS" "(H)H:(M)M:(S)S" "(H)H:(M)M" "(M)M:(S)S" "HH.MM.SS" "HH.MM" "MM.SS" "(H)H.(M)M.(S)S" "(H)H.(M)M" "(M)M.(S)S"
time_rollover flag
import_datetime_as_string flag
decimal_places number
decimal_symbol Default <br>Period <br>Comma
angles_in_radians flag
use_max_set_size flag
max_set_size number
ruleset_evaluation Voting <br>FirstHit
refresh_source_nodes flag Use to refresh import nodes automatically upon flow execution.
script string
annotation string
name string This property is read-only. If you want to change the name of a flow, you should save it with a different name.
parameters Use this property to update flow parameters from within a stand-alone script.
nodes See detailed information that follows.
encoding SystemDefault <br>"UTF-8"
stream_rewriting boolean
stream_rewriting_maximise_sql boolean
stream_rewriting_optimise_clem_execution boolean
stream_rewriting_optimise_syntax_execution boolean
enable_parallelism boolean
sql_generation boolean
database_caching boolean
sql_logging boolean
sql_generation_logging boolean
sql_log_native boolean
sql_log_prettyprint boolean
record_count_suppress_input boolean
record_count_feedback_interval integer
use_stream_auto_create_node_settings boolean If true, then flow-specific settings are used, otherwise user preferences are used.
create_model_applier_for_new_models boolean If true, when a model builder creates a new model, and it has no active update links, a new model applier is added.
create_model_applier_update_links createEnabled <br> <br>createDisabled <br> <br>doNotCreate Defines the type of link created when a model applier node is added automatically.
create_source_node_from_builders boolean If true, when a source builder creates a new source output, and it has no active update links, a new import node is added.
create_source_node_update_links createEnabled <br> <br>createDisabled <br> <br>doNotCreate Defines the type of link created when an import node is added automatically.
has_coordinate_system boolean If true, applies a coordinate system to the entire flow.
coordinate_system string The name of the selected projected coordinate system.
deployment_area modelRefresh <br> <br>Scoring <br> <br>None Choose how you want to deploy the flow. If this value is set to None, no other deployment entries are used.
scoring_terminal_node_id string Choose the scoring branch in the flow. It can be any terminal node in the flow.
scoring_node_id string Choose the nugget in the scoring branch.
model_build_node_id string Choose the modeling node in the flow.
| # Flow properties #
You can control a variety of flow properties with scripting\. To reference flow properties, you must set the execution method to use scripts:
stream = modeler.script.stream()
stream.setPropertyValue("execute_method", "Script")
A script can then use the `nodes` property to build a list of all nodes in the flow and write that list to the flow annotation\. The annotation produced looks like this:
This flow is called "druglearn" and contains the following nodes:
type node called "Define Types"
derive node called "Na_to_K"
variablefile node called "DRUG1n"
neuralnetwork node called "Drug"
c50 node called "Drug"
filter node called "Discard Fields"
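A minimal sketch of a script that builds such an annotation (`iterator()`, `getTypeName()`, and `getLabel()` come from the standard SPSS Modeler scripting API; the exact annotation wording is up to you):

stream = modeler.script.stream()
# build an annotation line for each node, using its type name and label
annotation = "This flow is called \"" + stream.getPropertyValue("name") + "\" and contains the following nodes:\n\n"
for node in stream.iterator():
    annotation += node.getTypeName() + " node called \"" + node.getLabel() + "\"\n"
stream.setPropertyValue("annotation", annotation)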
Flow properties are described in the following table\.
<!-- <table "summary="Flow properties" id="defining_slot_parameters_in_streams__table_vhp_4bj_cdb" class="defaultstyle" "> -->
Flow properties
Table 1\. Flow properties
| Property name | Data type | Property description |
| --------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | --------------------------------------------------------------------------------------------------------------------------- |
| `execute_method` | `Normal` <br>`Script` | |
| `date_format` | `"DDMMYY" "MMDDYY" "YYMMDD" "YYYYMMDD" "YYYYDDD" DAY MONTH "DD-MM-YY" "DD-MM-YYYY" "MM-DD-YY" "MM-DD-YYYY" "DD-MON-YY" "DD-MON-YYYY" "YYYY-MM-DD" "DD.MM.YY" "DD.MM.YYYY" "MM.DD.YYYY" "DD.MON.YY" "DD.MON.YYYY" "DD/MM/YY" "DD/MM/YYYY" "MM/DD/YY" "MM/DD/YYYY" "DD/MON/YY" "DD/MON/YYYY" MON YYYY q Q YYYY ww WK YYYY` | |
| `date_baseline` | *number* | |
| `date_2digit_baseline` | *number* | |
| `time_format` | `"HHMMSS" "HHMM" "MMSS" "HH:MM:SS" "HH:MM" "MM:SS" "(H)H:(M)M:(S)S" "(H)H:(M)M" "(M)M:(S)S" "HH.MM.SS" "HH.MM" "MM.SS" "(H)H.(M)M.(S)S" "(H)H.(M)M" "(M)M.(S)S"` | |
| `time_rollover` | *flag* | |
| `import_datetime_as_string` | *flag* | |
| `decimal_places` | *number* | |
| `decimal_symbol` | `Default` <br>`Period` <br>`Comma` | |
| `angles_in_radians` | *flag* | |
| `use_max_set_size` | *flag* | |
| `max_set_size` | *number* | |
| `ruleset_evaluation` | `Voting` <br>`FirstHit` | |
| `refresh_source_nodes` | *flag* | Use to refresh import nodes automatically upon flow execution\. |
| `script` | *string* | |
| `annotation` | *string* | |
| `name` | *string* | This property is read\-only\. If you want to change the name of a flow, you should save it with a different name\. |
| `parameters` | | Use this property to update flow parameters from within a stand\-alone script\. |
| `nodes` | | See detailed information that follows\. |
| `encoding` | `SystemDefault` <br>`"UTF-8"` | |
| `stream_rewriting` | *boolean* | |
| `stream_rewriting_maximise_sql` | *boolean* | |
| `stream_rewriting_optimise_clem_execution` | *boolean* | |
| `stream_rewriting_optimise_syntax_execution` | *boolean* | |
| `enable_parallelism` | *boolean* | |
| `sql_generation` | *boolean* | |
| `database_caching` | *boolean* | |
| `sql_logging` | *boolean* | |
| `sql_generation_logging` | *boolean* | |
| `sql_log_native` | *boolean* | |
| `sql_log_prettyprint` | *boolean* | |
| `record_count_suppress_input` | *boolean* | |
| `record_count_feedback_interval` | *integer* | |
| `use_stream_auto_create_node_settings` | *boolean* | If true, then flow\-specific settings are used, otherwise user preferences are used\. |
| `create_model_applier_for_new_models` | *boolean* | If true, when a model builder creates a new model, and it has no active update links, a new model applier is added\. |
| `create_model_applier_update_links` | `createEnabled` <br> <br>`createDisabled` <br> <br>`doNotCreate` | Defines the type of link created when a model applier node is added automatically\. |
| `create_source_node_from_builders` | *boolean* | If true, when a source builder creates a new source output, and it has no active update links, a new import node is added\. |
| `create_source_node_update_links` | `createEnabled` <br> <br>`createDisabled` <br> <br>`doNotCreate` | Defines the type of link created when an import node is added automatically\. |
| `has_coordinate_system` | *boolean* | If true, applies a coordinate system to the entire flow\. |
| `coordinate_system` | *string* | The name of the selected projected coordinate system\. |
| `deployment_area` | `modelRefresh` <br> <br>`Scoring` <br> <br>`None` | Choose how you want to deploy the flow\. If this value is set to `None`, no other deployment entries are used\. |
| `scoring_terminal_node_id` | *string* | Choose the scoring branch in the flow\. It can be any terminal node in the flow\. |
| `scoring_node_id` | *string* | Choose the nugget in the scoring branch\. |
| `model_build_node_id` | *string* | Choose the modeling node in the flow\. |
<!-- </table "summary="Flow properties" id="defining_slot_parameters_in_streams__table_vhp_4bj_cdb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
ABD445CE46B0329348E6AD464735BDB1D525EDAA | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/defining_slot_parameters_in_supernodes.html?context=cdpaas&locale=en | SuperNode properties | SuperNode properties
The tables in this section describe properties that are specific to SuperNodes. Note that common node properties also apply to SuperNodes.
Terminal supernode properties
Table 1. Terminal supernode properties
Property name Property type/List of values Property description
execute_method Script, Normal
script string
| # SuperNode properties #
The tables in this section describe properties that are specific to SuperNodes\. Note that common node properties also apply to SuperNodes\.
<!-- <table "summary="Terminal supernode properties" id="defining_slot_parameters_in_supernodes__table_epf_pbj_cdb" class="defaultstyle" "> -->
Terminal supernode properties
Table 1\. Terminal supernode properties
| Property name | Property type/List of values | Property description |
| ---------------- | ---------------------------- | -------------------- |
| `execute_method` | `Script` <br>`Normal` | |
| `script` | *string* | |
<!-- </table "summary="Terminal supernode properties" id="defining_slot_parameters_in_supernodes__table_epf_pbj_cdb" class="defaultstyle" "> -->
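A brief sketch (it assumes `supernode` already references a terminal SuperNode in the flow):

supernode.setPropertyValue("execute_method", "Script")
supernode.setPropertyValue("script", "import modeler.api")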
<!-- </article "role="article" "> -->
|
84573D3FDA739326819C7303EA21DB6DDF2ACC21 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/derivenodeslots.html?context=cdpaas&locale=en | derivenode properties | derivenode properties
The Derive node modifies data values or creates new fields from one or more existing fields. It creates fields of type formula, flag, nominal, state, count, and conditional.
derivenode properties
Table 1. derivenode properties
derivenode properties Data type Property description
new_name string Name of new field.
mode Single, Multiple Specifies single or multiple fields.
fields list Used in Multiple mode only to select multiple fields.
name_extension string Specifies the extension for the new field name(s).
add_as Suffix, Prefix Adds the extension as a prefix (at the beginning) or as a suffix (at the end) of the field name.
result_type Formula, Flag, Set, State, Count, Conditional The six types of new fields that you can create.
formula_expr string Expression for calculating a new field value in a Derive node.
flag_expr string
flag_true string
flag_false string
set_default string
set_value_cond string Structured to supply the condition associated with a given value.
state_on_val string Specifies the value for the new field when the On condition is met.
state_off_val string Specifies the value for the new field when the Off condition is met.
state_on_expression string
state_off_expression string
state_initial On, Off Assigns each record of the new field an initial value of On or Off. This value can change as each condition is met.
count_initial_val string
count_inc_condition string
count_inc_expression string
count_reset_condition string
cond_if_cond string
cond_then_expr string
cond_else_expr string
formula_measure_type Range / MeasureType.RANGE, Discrete / MeasureType.DISCRETE, Flag / MeasureType.FLAG, Set / MeasureType.SET, OrderedSet / MeasureType.ORDERED_SET, Typeless / MeasureType.TYPELESS, Collection / MeasureType.COLLECTION, Geospatial / MeasureType.GEOSPATIAL This property can be used to define the measurement associated with the derived field. The setter function can be passed either a string or one of the MeasureType values. The getter will always return one of the MeasureType values.
collection_measure Range / MeasureType.RANGE, Flag / MeasureType.FLAG, Set / MeasureType.SET, OrderedSet / MeasureType.ORDERED_SET, Typeless / MeasureType.TYPELESS For collection fields (lists with a depth of 0), this property defines the measurement type associated with the underlying values.
geo_type Point, MultiPoint, LineString, MultiLineString, Polygon, MultiPolygon For geospatial fields, this property defines the type of geospatial object represented by this field. This should be consistent with the list depth of the values.
has_coordinate_system boolean For geospatial fields, this property defines whether this field has a coordinate system.
coordinate_system string For geospatial fields, this property defines the coordinate system for this field.
| # derivenode properties #
The Derive node modifies data values or creates new fields from one or more existing fields\. It creates fields of type formula, flag, nominal, state, count, and conditional\.
<!-- <table "summary="derivenode properties" id="derivenodeslots__table_wlp_qbj_cdb" class="defaultstyle" "> -->
derivenode properties
Table 1\. derivenode properties
| `derivenode` properties | Data type | Property description |
| ----------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `new_name` | *string* | Name of new field\. |
| `mode` | `Single` <br>`Multiple` | Specifies single or multiple fields\. |
| `fields` | *list* | Used in Multiple mode only to select multiple fields\. |
| `name_extension` | *string* | Specifies the extension for the new field name(s)\. |
| `add_as` | `Suffix` <br>`Prefix` | Adds the extension as a prefix (at the beginning) or as a suffix (at the end) of the field name\. |
| `result_type` | `Formula` <br>`Flag` <br>`Set` <br>`State` <br>`Count` <br>`Conditional` | The six types of new fields that you can create\. |
| `formula_expr` | *string* | Expression for calculating a new field value in a Derive node\. |
| `flag_expr` | *string* | |
| `flag_true` | *string* | |
| `flag_false` | *string* | |
| `set_default` | *string* | |
| `set_value_cond` | *string* | Structured to supply the condition associated with a given value\. |
| `state_on_val` | *string* | Specifies the value for the new field when the On condition is met\. |
| `state_off_val` | *string* | Specifies the value for the new field when the Off condition is met\. |
| `state_on_expression` | *string* | |
| `state_off_expression` | *string* | |
| `state_initial` | `On` <br>`Off` | Assigns each record of the new field an initial value of `On` or `Off`\. This value can change as each condition is met\. |
| `count_initial_val` | *string* | |
| `count_inc_condition` | *string* | |
| `count_inc_expression` | *string* | |
| `count_reset_condition` | *string* | |
| `cond_if_cond` | *string* | |
| `cond_then_expr` | *string* | |
| `cond_else_expr` | *string* | |
| `formula_measure_type` | `Range / MeasureType.RANGE` <br>`Discrete / MeasureType.DISCRETE` <br>`Flag / MeasureType.FLAG` <br>`Set / MeasureType.SET` <br>`OrderedSet / MeasureType.ORDERED_SET` <br>`Typeless / MeasureType.TYPELESS` <br>`Collection / MeasureType.COLLECTION` <br>`Geospatial / MeasureType.GEOSPATIAL` | This property can be used to define the measurement associated with the derived field\. The setter function can be passed either a string or one of the `MeasureType` values\. The getter will always return one of the `MeasureType` values\. |
| `collection_measure` | `Range / MeasureType.RANGE` <br>`Flag / MeasureType.FLAG` <br>`Set / MeasureType.SET` <br>`OrderedSet / MeasureType.ORDERED_SET` <br>`Typeless / MeasureType.TYPELESS` | For collection fields (lists with a depth of 0), this property defines the measurement type associated with the underlying values\. |
| `geo_type` | `Point` <br>`MultiPoint` <br>`LineString` <br>`MultiLineString` <br>`Polygon` <br>`MultiPolygon` | For geospatial fields, this property defines the type of geospatial object represented by this field\. This should be consistent with the list depth of the values\. |
| `has_coordinate_system` | *boolean* | For geospatial fields, this property defines whether this field has a coordinate system\. |
| `coordinate_system` | *string* | For geospatial fields, this property defines the coordinate system for this field\. |
<!-- </table "summary="derivenode properties" id="derivenodeslots__table_wlp_qbj_cdb" class="defaultstyle" "> -->
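For example, a sketch that creates the `Na_to_K` Derive node used in the flow script example elsewhere in this guide (it assumes fields named `Na` and `K` exist in the data):

node = stream.create("derive", "Na_to_K")
node.setPropertyValue("new_name", "Na_to_K")
node.setPropertyValue("result_type", "Formula")
node.setPropertyValue("formula_expr", "Na / K")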
<!-- </article "role="article" "> -->
|
16048584B029B9BE5DA50D7F9D9AE85FFE740718 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/discriminantnodeslots.html?context=cdpaas&locale=en | discriminantnode properties | discriminantnode properties
Discriminant analysis makes more stringent assumptions than logistic regression, but can be a valuable alternative or supplement to a logistic regression analysis when those assumptions are met.
discriminantnode properties
Table 1. discriminantnode properties
discriminantnode Properties Values Property description
target field Discriminant models require a single target field and one or more input fields. Weight and frequency fields aren't used. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.html#modelingnodeslots_common) for more information.
method Enter <br>Stepwise
mode Simple <br>Expert
prior_probabilities AllEqual <br>ComputeFromSizes
covariance_matrix WithinGroups <br>SeparateGroups
means flag Statistics options in the node properties under Expert Options.
univariate_anovas flag
box_m flag
within_group_covariance flag
within_groups_correlation flag
separate_groups_covariance flag
total_covariance flag
fishers flag
unstandardized flag
casewise_results flag Classification options in the node properties under Expert Options.
limit_to_first number Default value is 10.
summary_table flag
leave_one_classification flag
separate_groups_covariance flag Matrices option Separate-groups covariance.
territorial_map flag
combined_groups flag Plot option Combined-groups.
separate_groups flag Plot option Separate-groups.
summary_of_steps flag
F_pairwise flag
stepwise_method WilksLambda <br>UnexplainedVariance <br>MahalanobisDistance <br>SmallestF <br>RaosV
V_to_enter number
criteria UseValue <br>UseProbability
F_value_entry number Default value is 3.84.
F_value_removal number Default value is 2.71.
probability_entry number Default value is 0.05.
probability_removal number Default value is 0.10.
calculate_variable_importance flag
calculate_raw_propensities flag
calculate_adjusted_propensities flag
adjusted_propensity_partition Test <br>Validation
| # discriminantnode properties #
Discriminant analysis makes more stringent assumptions than logistic regression, but can be a valuable alternative or supplement to a logistic regression analysis when those assumptions are met\.
<!-- <table "summary="discriminantnode properties" id="discriminantnodeslots__table_r4r_sbj_cdb" class="defaultstyle" "> -->
discriminantnode properties
Table 1\. discriminantnode properties
| `discriminantnode` Properties | Values | Property description |
| --------------------------------- | ------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `target` | *field* | Discriminant models require a single target field and one or more input fields\. Weight and frequency fields aren't used\. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.html#modelingnodeslots_common) for more information\. |
| `method` | `Enter` <br>`Stepwise` | |
| `mode` | `Simple` <br>`Expert` | |
| `prior_probabilities` | `AllEqual` <br>`ComputeFromSizes` | |
| `covariance_matrix` | `WithinGroups` <br>`SeparateGroups` | |
| `means` | *flag* | Statistics options in the node properties under Expert Options\. |
| `univariate_anovas` | *flag* | |
| `box_m` | *flag* | |
| `within_group_covariance` | *flag* | |
| `within_groups_correlation` | *flag* | |
| `separate_groups_covariance` | *flag* | |
| `total_covariance` | *flag* | |
| `fishers` | *flag* | |
| `unstandardized` | *flag* | |
| `casewise_results` | *flag* | Classification options in the node properties under Expert Options\. |
| `limit_to_first` | *number* | Default value is 10\. |
| `summary_table` | *flag* | |
| `leave_one_classification` | *flag* | |
| `separate_groups_covariance` | *flag* | Matrices option Separate\-groups covariance\. |
| `territorial_map` | *flag* | |
| `combined_groups` | *flag* | Plot option Combined\-groups\. |
| `separate_groups` | *flag* | Plot option Separate\-groups\. |
| `summary_of_steps` | *flag* | |
| `F_pairwise` | *flag* | |
| `stepwise_method` | `WilksLambda` <br>`UnexplainedVariance` <br>`MahalanobisDistance` <br>`SmallestF` <br>`RaosV` | |
| `V_to_enter` | *number* | |
| `criteria` | `UseValue` <br>`UseProbability` | |
| `F_value_entry` | *number* | Default value is 3\.84\. |
| `F_value_removal` | *number* | Default value is 2\.71\. |
| `probability_entry` | *number* | Default value is 0\.05\. |
| `probability_removal` | *number* | Default value is 0\.10\. |
| `calculate_variable_importance` | *flag* | |
| `calculate_raw_propensities` | *flag* | |
| `calculate_adjusted_propensities` | *flag* | |
| `adjusted_propensity_partition` | `Test` <br>`Validation` | |
<!-- </table "summary="discriminantnode properties" id="discriminantnodeslots__table_r4r_sbj_cdb" class="defaultstyle" "> -->
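A minimal sketch (it assumes the node's scripting type name is `discriminant` and uses a hypothetical target field named `custcat`):

node = stream.create("discriminant", "My Discriminant")
node.setPropertyValue("target", "custcat")
node.setPropertyValue("method", "Stepwise")
node.setPropertyValue("stepwise_method", "WilksLambda")
node.setPropertyValue("criteria", "UseProbability")
node.setPropertyValue("probability_entry", 0.05)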
<!-- </article "role="article" "> -->
|
2C1E91540BD58780F781F8A06E2B5C62035CA84B | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/discriminantnuggetnodeslots.html?context=cdpaas&locale=en | applydiscriminantnode properties | applydiscriminantnode properties
You can use Discriminant modeling nodes to generate a Discriminant model nugget. The scripting name of this model nugget is applydiscriminantnode. For more information on scripting the modeling node itself, see [discriminantnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/discriminantnodeslots.html#discriminantnodeslots).
applydiscriminantnode properties
Table 1. applydiscriminantnode properties
applydiscriminantnode Properties Values Property description
calculate_raw_propensities flag
calculate_adjusted_propensities flag
enable_sql_generation false, native When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations.
| # applydiscriminantnode properties #
You can use Discriminant modeling nodes to generate a Discriminant model nugget\. The scripting name of this model nugget is *applydiscriminantnode*\. For more information on scripting the modeling node itself, see [discriminantnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/discriminantnodeslots.html#discriminantnodeslots)\.
<!-- <table "summary="applydiscriminantnode properties" id="discriminantnuggetnodeslots__table_ekg_tbj_cdb" class="defaultstyle" "> -->
applydiscriminantnode properties
Table 1\. applydiscriminantnode properties
| `applydiscriminantnode` Properties | Values | Property description |
| ---------------------------------- | --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------ |
| `calculate_raw_propensities` | *flag* | |
| `calculate_adjusted_propensities` | *flag* | |
| `enable_sql_generation` | `false` <br>`native` | When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations\. |
<!-- </table "summary="applydiscriminantnode properties" id="discriminantnuggetnodeslots__table_ekg_tbj_cdb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
BAD5210D0F8114CD4E9B1DB05EB92F0EABC6E233 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/distinctnodeslots.html?context=cdpaas&locale=en | distinctnode properties | distinctnode properties
The Distinct node removes duplicate records, either by passing the first distinct record to the data flow or by discarding the first record and passing any duplicates to the data flow instead.
Example
node = stream.create("distinct", "My node")
node.setPropertyValue("mode", "Include")
node.setPropertyValue("fields", ["Age", "Sex"])
node.setPropertyValue("keys_pre_sorted", True)
distinctnode properties
Table 1. distinctnode properties
distinctnode properties Data type Property description
mode Include <br>Discard You can include the first distinct record in the data stream, or discard the first distinct record and pass any duplicate records to the data stream instead.
composite_value Structured slot See example below.
composite_values Structured slot See example below.
inc_record_count flag Creates an extra field that specifies how many input records were aggregated to form each aggregate record.
count_field string Specifies the name of the record count field.
default_ascending flag
low_distinct_key_count flag Specifies that you have only a small number of records and/or a small number of unique values of the key field(s).
keys_pre_sorted flag Specifies that all records with the same key values are grouped together in the input.
disable_sql_generation flag
grouping_fields array Lists the field or fields used to determine whether records are identical.
sort_keys array Lists the fields used to determine how records are sorted within each group of duplicates, and whether they're sorted in ascending or descending order. You must specify a sort order if you've chosen to include or exclude the first record in each group, and if it matters to you which record is treated as the first.
default_sort_order Ascending <br>Descending Specify whether, by default, records are sorted in ascending or descending order of the sort key values.
existing_sort_keys array Specify the existing sort order.
| # distinctnode properties #
The Distinct node removes duplicate records, either by passing the first distinct record to the data flow or by discarding the first record and passing any duplicates to the data flow instead\.
Example
node = stream.create("distinct", "My node")
node.setPropertyValue("mode", "Include")
node.setPropertyValue("fields", ["Age", "Sex"])
node.setPropertyValue("keys_pre_sorted", True)
<!-- <table "summary="distinctnode properties" class="defaultstyle" "> -->
distinctnode properties
Table 1\. distinctnode properties
| `distinctnode` properties | Data type | Property description |
| ------------------------- | ----------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `mode` | `Include` <br>`Discard` | You can include the first distinct record in the data stream, or discard the first distinct record and pass any duplicate records to the data stream instead\. |
| `composite_value` | Structured slot | See example below\. |
| `composite_values` | Structured slot | See example below\. |
| `inc_record_count` | *flag* | Creates an extra field that specifies how many input records were aggregated to form each aggregate record\. |
| `count_field` | *string* | Specifies the name of the record count field\. |
| `default_ascending` | *flag* | |
| `low_distinct_key_count` | *flag* | Specifies that you have only a small number of records and/or a small number of unique values of the key field(s)\. |
| `keys_pre_sorted` | *flag* | Specifies that all records with the same key values are grouped together in the input\. |
| `disable_sql_generation` | *flag* | |
| `grouping_fields` | *array* | Lists the field or fields used to determine whether records are identical\. |
| `sort_keys` | *array* | Lists the fields used to determine how records are sorted within each group of duplicates, and whether they're sorted in ascending or descending order\. You must specify a sort order if you've chosen to include or exclude the first record in each group, and if it matters to you which record is treated as the first\. |
| `default_sort_order` | `Ascending` <br>`Descending` | Specify whether, by default, records are sorted in ascending or descending order of the sort key values\. |
| `existing_sort_keys` | *array* | Specify the existing sort order\. |
<!-- </table "summary="distinctnode properties" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
DCB8FB91999D79190F3E5D54DE32B1B7F1401779 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/distributionnodeslots.html?context=cdpaas&locale=en | distributionnode properties | distributionnode properties
The Distribution node shows the occurrence of symbolic (categorical) values, such as mortgage type or gender. Typically, you might use the Distribution node to show imbalances in the data, which you could then rectify using a Balance node before creating a model.
distributionnode properties
Table 1. distributionnode properties
distributionnode properties Data type Property description
plot SelectedFields, Flags
x_field field
color_field field Overlay field.
normalize flag
sort_mode ByOccurence, Alphabetic
use_proportional_scale flag
use_grid boolean Display gridlines.
| # distributionnode properties #
The Distribution node shows the occurrence of symbolic (categorical) values, such as mortgage type or gender\. Typically, you might use the Distribution node to show imbalances in the data, which you could then rectify using a Balance node before creating a model\.
<!-- <table "summary="distributionnode properties" id="distributionnodeslots__table_k1p_vbj_cdb" class="defaultstyle" "> -->
distributionnode properties
Table 1\. distributionnode properties
| `distributionnode` properties | Data type | Property description |
| ----------------------------- | ------------------------- | -------------------- |
| `plot` | `SelectedFields` <br>`Flags` | |
| `x_field` | *field* | |
| `color_field` | *field* | Overlay field\. |
| `normalize` | *flag* | |
| `sort_mode` | `ByOccurence` <br>`Alphabetic` | |
| `use_proportional_scale` | *flag* | |
| `use_grid` | *boolean* | Display gridlines\. |
<!-- </table "summary="distributionnode properties" id="distributionnodeslots__table_k1p_vbj_cdb" class="defaultstyle" "> -->
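A brief sketch (it assumes the node's scripting type name is `distribution`; the field names `mortgage_type` and `gender` are hypothetical):

node = stream.create("distribution", "Mortgage Types")
node.setPropertyValue("plot", "SelectedFields")
node.setPropertyValue("x_field", "mortgage_type")
node.setPropertyValue("color_field", "gender")
node.setPropertyValue("sort_mode", "ByOccurence")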
<!-- </article "role="article" "> -->
|
5DCC543A106EC708FF97817AA0CFDEF8CB89894D | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/ensemblenodeslots.html?context=cdpaas&locale=en | ensemblenode properties | ensemblenode properties
The Ensemble node combines two or more model nuggets to obtain more accurate predictions than can be gained from any one model.
ensemblenode properties
Table 1. ensemblenode properties
ensemblenode properties Data type Property description
ensemble_target_field field Specifies the target field for all models used in the ensemble.
filter_individual_model_output flag Specifies whether scoring results from individual models should be suppressed.
flag_ensemble_method Voting, ConfidenceWeightedVoting, RawPropensityWeightedVoting, AdjustedPropensityWeightedVoting, HighestConfidence, AverageRawPropensity, AverageAdjustedPropensity Specifies the method used to determine the ensemble score. This setting applies only if the selected target is a flag field.
set_ensemble_method Voting, ConfidenceWeightedVoting, HighestConfidence Specifies the method used to determine the ensemble score. This setting applies only if the selected target is a nominal field.
flag_voting_tie_selection Random, HighestConfidence, RawPropensity, AdjustedPropensity If a voting method is selected, specifies how ties are resolved. This setting applies only if the selected target is a flag field.
set_voting_tie_selection Random, HighestConfidence If a voting method is selected, specifies how ties are resolved. This setting applies only if the selected target is a nominal field.
calculate_standard_error flag If the target field is continuous, a standard error calculation is run by default to calculate the difference between the measured or estimated values and the true values, and to show how closely those estimates match.
| # ensemblenode properties #
The Ensemble node combines two or more model nuggets to obtain more accurate predictions than can be gained from any one model\.
<!-- <table "summary="ensemblenode properties" id="ensemblenodeslots__table_oy2_wbj_cdb" class="defaultstyle" "> -->
ensemblenode properties
Table 1\. ensemblenode properties
| `ensemblenode` properties | Data type | Property description |
| -------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `ensemble_target_field` | *field* | Specifies the target field for all models used in the ensemble\. |
| `filter_individual_model_output` | *flag* | Specifies whether scoring results from individual models should be suppressed\. |
| `flag_ensemble_method` | `Voting` <br>`ConfidenceWeightedVoting` <br>`RawPropensityWeightedVoting` <br>`AdjustedPropensityWeightedVoting` <br>`HighestConfidence` <br>`AverageRawPropensity` <br>`AverageAdjustedPropensity` | Specifies the method used to determine the ensemble score\. This setting applies only if the selected target is a flag field\. |
| `set_ensemble_method` | `Voting` <br>`ConfidenceWeightedVoting` <br>`HighestConfidence` | Specifies the method used to determine the ensemble score\. This setting applies only if the selected target is a nominal field\. |
| `flag_voting_tie_selection` | `Random` <br>`HighestConfidence` <br>`RawPropensity` <br>`AdjustedPropensity` | If a voting method is selected, specifies how ties are resolved\. This setting applies only if the selected target is a flag field\. |
| `set_voting_tie_selection` | `Random` <br>`HighestConfidence` | If a voting method is selected, specifies how ties are resolved\. This setting applies only if the selected target is a nominal field\. |
| `calculate_standard_error` | *flag* | If the target field is continuous, a standard error calculation is run by default to calculate the difference between the measured or estimated values and the true values, and to show how closely those estimates match\. |
<!-- </table "summary="ensemblenode properties" id="ensemblenodeslots__table_oy2_wbj_cdb" class="defaultstyle" "> -->
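A brief sketch for a flag target (it assumes the node's scripting type name is `ensemble`; the target field `churn` is hypothetical):

node = stream.create("ensemble", "My Ensemble")
node.setPropertyValue("ensemble_target_field", "churn")
node.setPropertyValue("flag_ensemble_method", "ConfidenceWeightedVoting")
node.setPropertyValue("flag_voting_tie_selection", "HighestConfidence")
node.setPropertyValue("filter_individual_model_output", True)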
<!-- </article "role="article" "> -->
|
98B447B5AF1CD17524E2BA82FED83B8966DDFEFB | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/evalchartnodeslots.html?context=cdpaas&locale=en | evaluationnode properties | evaluationnode properties
The Evaluation node helps to evaluate and compare predictive models. The evaluation chart shows how well models predict particular outcomes. It sorts records based on the predicted value and confidence of the prediction. It splits the records into groups of equal size (quantiles) and then plots the value of the business criterion for each quantile from highest to lowest. Multiple models are shown as separate lines in the plot.
evaluationnode properties
Table 1. evaluationnode properties
evaluationnode properties Data type Property description
chart_type Gains <br>Response <br>Lift <br>Profit <br>ROI <br>ROC
inc_baseline flag
field_detection_method Metadata <br>Name
use_fixed_cost flag
cost_value number
cost_field string
use_fixed_revenue flag
revenue_value number
revenue_field string
use_fixed_weight flag
weight_value number
weight_field field
n_tile Quartiles <br>Quintles <br>Deciles <br>Vingtiles <br>Percentiles <br>1000-tiles
cumulative flag
style Line <br>Point
point_type Rectangle <br>Dot <br>Triangle <br>Hexagon <br>Plus <br>Pentagon <br>Star <br>BowTie <br>HorizontalDash <br>VerticalDash <br>IronCross <br>Factory <br>House <br>Cathedral <br>OnionDome <br>ConcaveTriangle <br>OblateGlobe <br>CatEye <br>FourSidedPillow <br>RoundRectangle <br>Fan
export_data flag
data_filename string
delimiter string
new_line flag
inc_field_names flag
inc_best_line flag
inc_business_rule flag
business_rule_condition string
plot_score_fields flag
score_fields [field1 ... fieldN]
target_field field
use_hit_condition flag
hit_condition string
use_score_expression flag
score_expression string
caption_auto flag
split_by_partition boolean If a partition field is used to split records into training, test, and validation samples, use this option to display a separate evaluation chart for each partition.
use_profit_criteria boolean Enables profit criteria.
use_grid boolean Displays grid lines.
| # evaluationnode properties #
The Evaluation node helps to evaluate and compare predictive models\. The evaluation chart shows how well models predict particular outcomes\. It sorts records based on the predicted value and confidence of the prediction\. It splits the records into groups of equal size (quantiles) and then plots the value of the business criterion for each quantile from highest to lowest\. Multiple models are shown as separate lines in the plot\.
<!-- <table "summary="evaluationnode properties" class="defaultstyle" "> -->
evaluationnode properties
Table 1\. evaluationnode properties
| `evaluationnode` properties | Data type | Property description |
| --------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `chart_type` | `Gains` <br>`Response` <br>`Lift` <br>`Profit` <br>`ROI` <br>`ROC` | |
| `inc_baseline` | *flag* | |
| `field_detection_method` | `Metadata` <br>`Name` | |
| `use_fixed_cost` | *flag* | |
| `cost_value` | *number* | |
| `cost_field` | *string* | |
| `use_fixed_revenue` | *flag* | |
| `revenue_value` | *number* | |
| `revenue_field` | *string* | |
| `use_fixed_weight` | *flag* | |
| `weight_value` | *number* | |
| `weight_field` | *field* | |
| `n_tile` | `Quartiles` <br>`Quintles` <br>`Deciles` <br>`Vingtiles` <br>`Percentiles` <br>`1000-tiles` | |
| `cumulative` | *flag* | |
| `style` | `Line` <br>`Point` | |
| `point_type` | `Rectangle` <br>`Dot` <br>`Triangle` <br>`Hexagon` <br>`Plus` <br>`Pentagon` <br>`Star` <br>`BowTie` <br>`HorizontalDash` <br>`VerticalDash` <br>`IronCross` <br>`Factory` <br>`House` <br>`Cathedral` <br>`OnionDome` <br>`ConcaveTriangle` <br>`OblateGlobe` <br>`CatEye` <br>`FourSidedPillow` <br>`RoundRectangle` <br>`Fan` | |
| `export_data` | *flag* | |
| `data_filename` | *string* | |
| `delimiter` | *string* | |
| `new_line` | *flag* | |
| `inc_field_names` | *flag* | |
| `inc_best_line` | *flag* | |
| `inc_business_rule` | *flag* | |
| `business_rule_condition` | *string* | |
| `plot_score_fields` | *flag* | |
| `score_fields` | *\[field1 \.\.\. fieldN\]* | |
| `target_field` | *field* | |
| `use_hit_condition` | *flag* | |
| `hit_condition` | *string* | |
| `use_score_expression` | *flag* | |
| `score_expression` | *string* | |
| `caption_auto` | *flag* | |
| `split_by_partition` | *boolean* | If a partition field is used to split records into training, test, and validation samples, use this option to display a separate evaluation chart for each partition\. |
| `use_profit_criteria` | *boolean* | Enables profit criteria\. |
| `use_grid` | *boolean* | Displays grid lines\. |
<!-- </table "summary="evaluationnode properties" class="defaultstyle" "> -->
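For example, a sketch that configures a cumulative gains chart in deciles (it assumes the node's scripting type name is `evaluation`):

node = stream.create("evaluation", "Gains Chart")
node.setPropertyValue("chart_type", "Gains")
node.setPropertyValue("n_tile", "Deciles")
node.setPropertyValue("cumulative", True)
node.setPropertyValue("inc_baseline", True)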
<!-- </article "role="article" "> -->
|
6CB2797AB2EF876F05A39F4CEE08EEE4249716D8 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/example_clementine_script.html?context=cdpaas&locale=en | Flow script example: Training a neural net | Flow script example: Training a neural net
You can use a flow to train a neural network model when executed. Normally, to test the model, you might run the modeling node to add the model to the flow, make the appropriate connections, and run an Analysis node.
Using an SPSS Modeler script, you can automate the process of testing the model nugget after you create it. Following is an example:
stream = modeler.script.stream()
neuralnetnode = stream.findByType("neuralnetwork", None)
results = []
neuralnetnode.run(results)
appliernode = stream.createModelApplierAt(results[0], "Drug", 594, 187)
analysisnode = stream.createAt("analysis", "Drug", 688, 187)
typenode = stream.findByType("type", None)
stream.linkBetween(appliernode, typenode, analysisnode)
analysisnode.run([])
The following bullets describe each line in this script example.
* The first line defines a variable that points to the current flow
* In line 2, the script finds the Neural Net builder node
* In line 3, the script creates a list where the execution results can be stored
* In line 4, the Neural Net model nugget is created. This is stored in the list defined on line 3.
* In line 5, a model apply node is created for the model nugget and placed on the flow canvas
* In line 6, an analysis node called Drug is created
* In line 7, the script finds the Type node
* In line 8, the script connects the model apply node created in line 5 between the Type node and the Analysis node
* Finally, the Analysis node runs to produce the Analysis report
It's possible to use a script to build and run a flow from scratch, starting with a blank canvas. To learn more about the scripting language in general, see [Scripting overview](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/using_scripting.html).
| # Flow script example: Training a neural net #
You can use a flow to train a neural network model when executed\. Normally, to test the model, you might run the modeling node to add the model to the flow, make the appropriate connections, and run an Analysis node\.
Using an SPSS Modeler script, you can automate the process of testing the model nugget after you create it\. Following is an example:
stream = modeler.script.stream()
neuralnetnode = stream.findByType("neuralnetwork", None)
results = []
neuralnetnode.run(results)
appliernode = stream.createModelApplierAt(results[0], "Drug", 594, 187)
analysisnode = stream.createAt("analysis", "Drug", 688, 187)
typenode = stream.findByType("type", None)
stream.linkBetween(appliernode, typenode, analysisnode)
analysisnode.run([])
The following bullets describe each line in this script example\.
<!-- <ul> -->
* The first line defines a variable that points to the current flow
* In line 2, the script finds the Neural Net builder node
* In line 3, the script creates a list where the execution results can be stored
* In line 4, the Neural Net model nugget is created\. This is stored in the list defined on line 3\.
* In line 5, a model apply node is created for the model nugget and placed on the flow canvas
* In line 6, an analysis node called `Drug` is created
* In line 7, the script finds the Type node
* In line 8, the script connects the model apply node created in line 5 between the Type node and the Analysis node
* Finally, the Analysis node runs to produce the Analysis report
<!-- </ul> -->
It's possible to use a script to build and run a flow from scratch, starting with a blank canvas\. To learn more about the scripting language in general, see [Scripting overview](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/using_scripting.html)\.
<!-- </article "role="article" "> -->
|
123987D173C0DB88D8E1F59AF46A8D9313A8E601 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/extensionexportnodeslots.html?context=cdpaas&locale=en | extensionexportnode properties | extensionexportnode properties
With the Extension Export node, you can run R or Python for Spark scripts to export data.
extensionexportnode properties
Table 1. extensionexportnode properties
extensionexportnode properties Data type Property description
syntax_type R Python Specify which script runs: R or Python (R is the default).
r_syntax string The R scripting syntax to run.
python_syntax string The Python scripting syntax to run.
convert_flags StringsAndDoubles LogicalValues Option to convert flag fields.
convert_missing flag Option to convert missing values to the R NA value.
convert_datetime flag Option to convert variables with date or datetime formats to R date/time formats.
convert_datetime_class POSIXct POSIXlt Options to specify to what format variables with date or datetime formats are converted.
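For example, a minimal script sketch (the "extension_export" creation type name and the file path are assumptions for illustration) might configure the node to run a short R export script:

stream = modeler.script.stream()
node = stream.create("extension_export", "My node")
node.setPropertyValue("syntax_type", "R")
# modelerData is the data frame SPSS Modeler passes to the R script;
# the output path here is purely illustrative
node.setPropertyValue("r_syntax", "write.csv(modelerData, '/tmp/export.csv')")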
| # extensionexportnode properties #
With the Extension Export node, you can run R or Python for Spark scripts to export data\.
<!-- <table "summary="extensionexportnode properties" class="defaultstyle" "> -->
extensionexportnode properties
Table 1\. extensionexportnode properties
| `extensionexportnode` properties | Data type | Property description |
| -------------------------------- | --------------------------------- | ----------------------------------------------------------------------------------------- |
| `syntax_type` | *R**Python* | Specify which script runs: R or Python (R is the default)\. |
| `r_syntax` | *string* | The R scripting syntax to run\. |
| `python_syntax` | *string* | The Python scripting syntax to run\. |
| `convert_flags` | `StringsAndDoubles LogicalValues` | Option to convert flag fields\. |
| `convert_missing` | *flag* | Option to convert missing values to the R NA value\. |
| `convert_datetime` | *flag* | Option to convert variables with date or datetime formats to R date/time formats\. |
| `convert_datetime_class` | `POSIXct POSIXlt` | Options to specify to what format variables with date or datetime formats are converted\. |
<!-- </table "summary="extensionexportnode properties" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
9AA00A347BD6F7725014C840F3D39BC0DDF26599 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/extensionimportnodeslots.html?context=cdpaas&locale=en | extensionimportnode properties | extensionimportnode properties
 With the Extension Import node, you can run R or Python for Spark scripts to import data.
extensionimportnode properties
Table 1. extensionimportnode properties
extensionimportnode properties Data type Property description
syntax_type R Python Specify which script runs – R or Python (R is the default).
r_syntax string The R scripting syntax to run.
python_syntax string The Python scripting syntax to run.
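For example, a minimal sketch (the "extension_import" creation type name and the file path are assumptions for illustration) might configure the node to read a CSV file with R:

stream = modeler.script.stream()
node = stream.create("extension_import", "My node")
node.setPropertyValue("syntax_type", "R")
# assigning to modelerData is how an R script returns data to the flow
node.setPropertyValue("r_syntax", "modelerData <- read.csv('/tmp/mydata.csv')")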
| # extensionimportnode properties #
 With the Extension Import node, you can run R or Python for Spark scripts to import data\.
<!-- <table "summary="extensionimportnode properties" class="defaultstyle" "> -->
extensionimportnode properties
Table 1\. extensionimportnode properties
| `extensionimportnode` properties | Data type | Property description |
| -------------------------------- | ----------- | ------------------------------------------------------------ |
| `syntax_type` | *R**Python* | Specify which script runs – R or Python (R is the default)\. |
| `r_syntax` | *string* | The R scripting syntax to run\. |
| `python_syntax` | *string* | The Python scripting syntax to run\. |
<!-- </table "summary="extensionimportnode properties" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
7985570F01D50D057EBD4FAFCF8C8A1BCACB3006 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/extensionmodelnodeslots.html?context=cdpaas&locale=en | extensionmodelnode properties | extensionmodelnode properties
With the Extension Model node, you can run R or Python for Spark scripts to build and score results.
Note that many of the properties, and much of the information on this page, apply only to SPSS Modeler Desktop streams.
extensionmodelnode properties
Table 1. extensionmodelnode properties
extensionmodelnode Properties Values Property description
syntax_type R Python Specify which script runs: R or Python (R is the default).
r_build_syntax string The R scripting syntax for model building.
r_score_syntax string The R scripting syntax for model scoring.
python_build_syntax string The Python scripting syntax for model building.
python_score_syntax string The Python scripting syntax for model scoring.
convert_flags StringsAndDoubles LogicalValues Option to convert flag fields.
convert_missing flag Option to convert missing values to R NA value.
convert_datetime flag Option to convert variables with date or datetime formats to R date/time formats.
convert_datetime_class POSIXct POSIXlt Options to specify to what format variables with date or datetime formats are converted.
output_html flag Option to display graphs in the R model nugget.
output_text flag Option to write R console text output to the R model nugget.
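For example, a minimal sketch (the "extension_model" creation type name and the field names are assumptions for illustration) might set both the build and score scripts:

stream = modeler.script.stream()
node = stream.create("extension_model", "My node")
node.setPropertyValue("syntax_type", "R")
# modelerModel holds the R model object between the build and score phases
node.setPropertyValue("r_build_syntax", "modelerModel <- lm(Na~Age, data=modelerData)")
node.setPropertyValue("r_score_syntax", "result <- predict(modelerModel, newdata=modelerData)")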
| # extensionmodelnode properties #
With the Extension Model node, you can run R or Python for Spark scripts to build and score results\.
Note that many of the properties, and much of the information on this page, apply only to SPSS Modeler Desktop streams\.
<!-- <table "summary="extensionmodelnode properties" class="defaultstyle" "> -->
extensionmodelnode properties
Table 1\. extensionmodelnode properties
| `extensionmodelnode` Properties | Values | Property description |
| ------------------------------- | ---------------------------------------- | ----------------------------------------------------------------------------------------- |
| `syntax_type` | *R**Python* | Specify which script runs: R or Python (R is the default)\. |
| `r_build_syntax` | *string* | The R scripting syntax for model building\. |
| `r_score_syntax` | *string* | The R scripting syntax for model scoring\. |
| `python_build_syntax` | *string* | The Python scripting syntax for model building\. |
| `python_score_syntax` | *string* | The Python scripting syntax for model scoring\. |
| `convert_flags` | `StringsAndDoubles` <br>`LogicalValues` | Option to convert flag fields\. |
| `convert_missing` | *flag* | Option to convert missing values to R NA value\. |
| `convert_datetime` | *flag* | Option to convert variables with date or datetime formats to R date/time formats\. |
| `convert_datetime_class` | `POSIXct` <br> <br>`POSIXlt` <br> | Options to specify to what format variables with date or datetime formats are converted\. |
| `output_html` | *flag* | Option to display graphs in the R model nugget\. |
| `output_text` | *flag* | Option to write R console text output to the R model nugget\. |
<!-- </table "summary="extensionmodelnode properties" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
E85352E9588726771A8CD594A268ECA7D04379BD | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/extensionnuggetnodeslots.html?context=cdpaas&locale=en | applyextension properties | applyextension properties
You can use Extension Model nodes to generate an Extension model nugget. The scripting name of this model nugget is applyextension. For more information on scripting the modeling node itself, see [extensionmodelnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/extensionmodelnodeslots.html#extensionmodelnodeslots).
applyextension properties
Table 1. applyextension properties
applyextension Properties Values Property Description
r_syntax string R scripting syntax for model scoring.
python_syntax string Python scripting syntax for model scoring.
use_batch_size flag Enable use of batch processing.
batch_size integer Specify the number of data records to be included in each batch.
convert_flags StringsAndDoubles LogicalValues Option to convert flag fields.
convert_missing flag Option to convert missing values to the R NA value.
convert_datetime flag Option to convert variables with date or datetime formats to R date/time formats.
convert_datetime_class POSIXct POSIXlt Options to specify to what format variables with date or datetime formats are converted.
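For example, a script sketch (assuming "applyextension" is also the type string accepted by findByType) might locate the nugget in a flow and turn on batch scoring:

stream = modeler.script.stream()
applynode = stream.findByType("applyextension", None)
applynode.setPropertyValue("use_batch_size", True)
applynode.setPropertyValue("batch_size", 1000)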
| # applyextension properties #
You can use Extension Model nodes to generate an Extension model nugget\. The scripting name of this model nugget is *applyextension*\. For more information on scripting the modeling node itself, see [extensionmodelnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/extensionmodelnodeslots.html#extensionmodelnodeslots)\.
<!-- <table "summary="applyextension properties" class="defaultstyle" "> -->
applyextension properties
Table 1\. applyextension properties
| `applyextension` Properties | Values | Property Description |
| --------------------------- | ---------------------------------------- | ----------------------------------------------------------------------------------------- |
| `r_syntax` | *string* | R scripting syntax for model scoring\. |
| `python_syntax` | *string* | Python scripting syntax for model scoring\. |
| `use_batch_size` | *flag* | Enable use of batch processing\. |
| `batch_size` | *integer* | Specify the number of data records to be included in each batch\. |
| `convert_flags` | `StringsAndDoubles` <br>`LogicalValues` | Option to convert flag fields\. |
| `convert_missing` | *flag* | Option to convert missing values to the R NA value\. |
| `convert_datetime` | *flag* | Option to convert variables with date or datetime formats to R date/time formats\. |
| `convert_datetime_class` | `POSIXct` <br> <br>`POSIXlt` | Options to specify to what format variables with date or datetime formats are converted\. |
<!-- </table "summary="applyextension properties" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
14005F26F286B03F8AC692D42E9F3DFCE1F66962 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/extensionoutputnodeslots.html?context=cdpaas&locale=en | extensionoutputnode properties | extensionoutputnode properties
With the Extension Output node, you can analyze data and the results of model scoring using your own custom R or Python for Spark script. The output of the analysis can be text or graphical.
Note that many of the properties on this page are for streams from SPSS Modeler desktop.
extensionoutputnode properties
Table 1. extensionoutputnode properties
extensionoutputnode properties Data type Property description
syntax_type R Python Specify which script runs: R or Python (R is the default).
r_syntax string R scripting syntax for model scoring.
python_syntax string Python scripting syntax for model scoring.
convert_flags StringsAndDoubles LogicalValues Option to convert flag fields.
convert_missing flag Option to convert missing values to the R NA value.
convert_datetime flag Option to convert variables with date or datetime formats to R date/time formats.
convert_datetime_class POSIXct POSIXlt Options to specify to what format variables with date or datetime formats are converted.
output_to Screen File Specify the output type (Screen or File).
output_type Graph Text Specify whether to produce graphical or text output.
full_filename string File name to use for the generated output.
graph_file_type HTML COU File type for the output file (.html or .cou).
text_file_type HTML TEXT COU Specify the file type for text output (.html, .txt, or .cou).
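For example, a minimal sketch (the "extension_output" creation type name is an assumption for illustration) might send a text summary of the data to the screen:

stream = modeler.script.stream()
node = stream.create("extension_output", "My node")
node.setPropertyValue("syntax_type", "R")
node.setPropertyValue("r_syntax", "print(summary(modelerData))")
node.setPropertyValue("output_to", "Screen")
node.setPropertyValue("output_type", "Text")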
| # extensionoutputnode properties #
With the Extension Output node, you can analyze data and the results of model scoring using your own custom R or Python for Spark script\. The output of the analysis can be text or graphical\.
Note that many of the properties on this page are for streams from SPSS Modeler desktop\.
<!-- <table "summary="extensionoutputnode properties" class="defaultstyle" "> -->
extensionoutputnode properties
Table 1\. extensionoutputnode properties
| `extensionoutputnode` properties | Data type | Property description |
| -------------------------------- | --------------------------------- | ----------------------------------------------------------------------------------------- |
| `syntax_type` | *R**Python* | Specify which script runs: R or Python (R is the default)\. |
| `r_syntax` | *string* | R scripting syntax for model scoring\. |
| `python_syntax` | *string* | Python scripting syntax for model scoring\. |
| `convert_flags` | `StringsAndDoubles LogicalValues` | Option to convert flag fields\. |
| `convert_missing` | *flag* | Option to convert missing values to the R NA value\. |
| `convert_datetime` | *flag* | Option to convert variables with date or datetime formats to R date/time formats\. |
| `convert_datetime_class` | `POSIXct POSIXlt` | Options to specify to what format variables with date or datetime formats are converted\. |
| `output_to` | `Screen File` | Specify the output type (`Screen` or `File`)\. |
| `output_type` | `Graph Text` | Specify whether to produce graphical or text output\. |
| `full_filename` | *string* | File name to use for the generated output\. |
| `graph_file_type` | `HTML COU` | File type for the output file (\.html or \.cou)\. |
| `text_file_type` | `HTML TEXT COU` | Specify the file type for text output (\.html, \.txt, or \.cou)\. |
<!-- </table "summary="extensionoutputnode properties" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
D487DB53087C5FD4CD2A25112F1F8A8E496EFC72 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/extensionprocessnodeslots.html?context=cdpaas&locale=en | extensionprocessnode properties | extensionprocessnode properties
 With the Extension Transform node, you can take data from a flow and apply transformations to the data using R scripting or Python for Spark scripting.
extensionprocessnode properties
Table 1. extensionprocessnode properties
extensionprocessnode properties Data type Property description
syntax_type R Python Specify which script runs – R or Python (R is the default).
r_syntax string The R scripting syntax to run.
python_syntax string The Python scripting syntax to run.
use_batch_size flag Enable use of batch processing.
batch_size integer Specify the number of data records to include in each batch.
convert_flags StringsAndDoubles LogicalValues Option to convert flag fields.
convert_missing flag Option to convert missing values to the R NA value.
convert_datetime flag Option to convert variables with date or datetime formats to R date/time formats.
convert_datetime_class POSIXct POSIXlt Options to specify to what format variables with date or datetime formats are converted.
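For example, a minimal sketch (the "extension_process" creation type name and the Age field are assumptions for illustration) might apply a row filter with Python for Spark:

stream = modeler.script.stream()
node = stream.create("extension_process", "My node")
node.setPropertyValue("syntax_type", "Python")
python_script = """
import spss.pyspark.runtime
asContext = spss.pyspark.runtime.getContext()
df = asContext.getSparkInputData()
# keep only records over 30; a filter leaves the schema unchanged
asContext.setSparkOutputData(df.filter(df['Age'] > 30))
"""
node.setPropertyValue("python_syntax", python_script)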
| # extensionprocessnode properties #
 With the Extension Transform node, you can take data from a flow and apply transformations to the data using R scripting or Python for Spark scripting\.
<!-- <table "summary="extensionprocessnode properties" class="defaultstyle" "> -->
extensionprocessnode properties
Table 1\. extensionprocessnode properties
| `extensionprocessnode` properties | Data type | Property description |
| --------------------------------- | --------------------------------- | ----------------------------------------------------------------------------------------- |
| `syntax_type` | *R**Python* | Specify which script runs – R or Python (R is the default)\. |
| `r_syntax` | *string* | The R scripting syntax to run\. |
| `python_syntax` | *string* | The Python scripting syntax to run\. |
| `use_batch_size` | *flag* | Enable use of batch processing\. |
| `batch_size` | *integer* | Specify the number of data records to include in each batch\. |
| `convert_flags` | `StringsAndDoubles LogicalValues` | Option to convert flag fields\. |
| `convert_missing` | *flag* | Option to convert missing values to the R NA value\. |
| `convert_datetime` | *flag* | Option to convert variables with date or datetime formats to R date/time formats\. |
| `convert_datetime_class` | `POSIXct POSIXlt` | Options to specify to what format variables with date or datetime formats are converted\. |
<!-- </table "summary="extensionprocessnode properties" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
5EDDA143971CE5735307FEDE23FB0CD7E963264C | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/factornodeslots.html?context=cdpaas&locale=en | factornode properties | factornode properties
The PCA/Factor node provides powerful data-reduction techniques to reduce the complexity of your data. Principal components analysis (PCA) finds linear combinations of the input fields that do the best job of capturing the variance in the entire set of fields, where the components are orthogonal (perpendicular) to each other. Factor analysis attempts to identify underlying factors that explain the pattern of correlations within a set of observed fields. For both approaches, the goal is to find a small number of derived fields that effectively summarize the information in the original set of fields.
factornode properties
Table 1. factornode properties
factornode Properties Values Property description
inputs [field1 ... fieldN] PCA/Factor models use a list of input fields, but no target. Weight and frequency fields are not used. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.html#modelingnodeslots_common) for more information.
method PC ULS GLS ML PAF Alpha Image
mode Simple Expert
max_iterations number
complete_records flag
matrix Correlation Covariance
extract_factors ByEigenvalues ByFactors
min_eigenvalue number
max_factor number
rotation None Varimax DirectOblimin Equamax Quartimax Promax
delta number If you select DirectOblimin as your rotation data type, you can specify a value for delta. If you don't specify a value, the default value for delta is used.
kappa number If you select Promax as your rotation data type, you can specify a value for kappa. If you don't specify a value, the default value for kappa is used.
sort_values flag
hide_values flag
hide_below number
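For example, a minimal sketch (the "factor" creation type name is an assumption for illustration) might request a principal components model with Varimax rotation:

stream = modeler.script.stream()
node = stream.create("factor", "My node")
node.setPropertyValue("method", "PC")
node.setPropertyValue("mode", "Expert")
node.setPropertyValue("extract_factors", "ByEigenvalues")
node.setPropertyValue("min_eigenvalue", 1.0)
node.setPropertyValue("rotation", "Varimax")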
| # factornode properties #
The PCA/Factor node provides powerful data\-reduction techniques to reduce the complexity of your data\. Principal components analysis (PCA) finds linear combinations of the input fields that do the best job of capturing the variance in the entire set of fields, where the components are orthogonal (perpendicular) to each other\. Factor analysis attempts to identify underlying factors that explain the pattern of correlations within a set of observed fields\. For both approaches, the goal is to find a small number of derived fields that effectively summarize the information in the original set of fields\.
<!-- <table "summary="factornode properties" id="factornodeslots__table_qcp_3cj_cdb" class="defaultstyle" "> -->
factornode properties
Table 1\. factornode properties
| `factornode` Properties | Values | Property description |
| ----------------------- | ---------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `inputs` | \[*field1 \.\.\. fieldN*\] | PCA/Factor models use a list of input fields, but no target\. Weight and frequency fields are not used\. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.html#modelingnodeslots_common) for more information\. |
| `method` | `PC``ULS``GLS``ML``PAF``Alpha``Image` | |
| `mode` | `Simple``Expert` | |
| `max_iterations` | *number* | |
| `complete_records` | *flag* | |
| `matrix` | `Correlation``Covariance` | |
| `extract_factors` | `ByEigenvalues``ByFactors` | |
| `min_eigenvalue` | *number* | |
| `max_factor` | *number* | |
| `rotation` | `None``Varimax``DirectOblimin``Equamax``Quartimax``Promax` | |
| `delta` | *number* | If you select `DirectOblimin` as your rotation data type, you can specify a value for `delta`\. If you don't specify a value, the default value for `delta` is used\. |
| `kappa` | *number* | If you select `Promax` as your rotation data type, you can specify a value for `kappa`\. If you don't specify a value, the default value for `kappa` is used\. |
| `sort_values` | *flag* | |
| `hide_values` | *flag* | |
| `hide_below` | *number* | |
<!-- </table "summary="factornode properties" id="factornodeslots__table_qcp_3cj_cdb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
92442D67350644BFCAEC2B2A47B98F4EDE943DC3 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/factornuggetnodeslots.html?context=cdpaas&locale=en | applyfactornode properties | applyfactornode properties
You can use PCA/Factor modeling nodes to generate a PCA/Factor model nugget. The scripting name of this model nugget is applyfactornode. No other properties exist for this model nugget. For more information on scripting the modeling node itself, see [factornode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/factornodeslots.html#factornodeslots).
| # applyfactornode properties #
You can use PCA/Factor modeling nodes to generate a PCA/Factor model nugget\. The scripting name of this model nugget is *applyfactornode*\. No other properties exist for this model nugget\. For more information on scripting the modeling node itself, see [factornode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/factornodeslots.html#factornodeslots)\.
<!-- </article "role="article" "> -->
|
D5863A9857F07023885A810210DFB819AD692ED7 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/factorymodeling_algorithmproperties.html?context=cdpaas&locale=en | Setting algorithm properties | Setting algorithm properties
For the Auto Classifier, Auto Numeric, and Auto Cluster nodes, you can set properties for specific algorithms used by the node by using the general form:
autonode.setKeyedPropertyValue(<algorithm>, <property>, <value>)
For example:
node.setKeyedPropertyValue("neuralnetwork", "method", "MultilayerPerceptron")
Algorithm names for the Auto Classifier node are cart, chaid, quest, c50, logreg, decisionlist, bayesnet, discriminant, svm and knn.
Algorithm names for the Auto Numeric node are cart, chaid, neuralnetwork, genlin, svm, regression, linear and knn.
Algorithm names for the Auto Cluster node are twostep, k-means, and kohonen.
Property names are standard as documented for each algorithm node.
Algorithm properties that contain periods or other punctuation must be wrapped in single quotes. For example:
node.setKeyedPropertyValue("logreg", "tolerance", "1.0E-5")
Multiple values can also be assigned for a property. For example:
node.setKeyedPropertyValue("decisionlist", "search_direction", ["Up", "Down"])
To enable or disable the use of a specific algorithm:
node.setPropertyValue("chaid", True)
Note: In cases where certain algorithm options aren't available in the Auto Classifier node, or when only a single value can be specified rather than a range of values, the same limits apply with scripting as when accessing the node in the standard manner.
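Putting these pieces together, a sketch like the following (assuming an Auto Classifier node of type "autoclassifier" already exists in the flow) enables one algorithm, disables another, and sets an algorithm-specific option:

stream = modeler.script.stream()
node = stream.findByType("autoclassifier", None)
node.setPropertyValue("chaid", True)   # include CHAID in the run
node.setPropertyValue("svm", False)    # exclude SVM from the run
# keyed property: an option documented for the CHAID node
node.setKeyedPropertyValue("chaid", "alpha_for_split", 0.03)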
| # Setting algorithm properties #
For the Auto Classifier, Auto Numeric, and Auto Cluster nodes, you can set properties for specific algorithms used by the node by using the general form:
autonode.setKeyedPropertyValue(<algorithm>, <property>, <value>)
For example:
node.setKeyedPropertyValue("neuralnetwork", "method", "MultilayerPerceptron")
Algorithm names for the Auto Classifier node are `cart`, `chaid`, `quest`, `c50`, `logreg`, `decisionlist`, `bayesnet`, `discriminant`, `svm` and `knn`\.
Algorithm names for the Auto Numeric node are `cart`, `chaid`, `neuralnetwork`, `genlin`, `svm`, `regression`, `linear` and `knn`\.
Algorithm names for the Auto Cluster node are `twostep`, `k-means`, and `kohonen`\.
Property names are standard as documented for each algorithm node\.
Algorithm properties that contain periods or other punctuation must be wrapped in single quotes\. For example:
node.setKeyedPropertyValue("logreg", "tolerance", "1.0E-5")
Multiple values can also be assigned for a property\. For example:
node.setKeyedPropertyValue("decisionlist", "search_direction", ["Up", "Down"])
To enable or disable the use of a specific algorithm:
node.setPropertyValue("chaid", True)
Note: In cases where certain algorithm options aren't available in the Auto Classifier node, or when only a single value can be specified rather than a range of values, the same limits apply with scripting as when accessing the node in the standard manner\.
<!-- </article "role="article" "> -->
|
055727FBA02274A87D30DA162E6F5ECA3ACE233D | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/featureselectionnodeslots.html?context=cdpaas&locale=en | featureselectionnode properties | featureselectionnode properties
The Feature Selection node screens input fields for removal based on a set of criteria (such as the percentage of missing values); it then ranks the importance of remaining inputs relative to a specified target. For example, given a data set with hundreds of potential inputs, which are most likely to be useful in modeling patient outcomes?
featureselectionnode properties
Table 1. featureselectionnode properties
featureselectionnode Properties Values Property description
target field Feature Selection models rank predictors relative to the specified target. Weight and frequency fields are not used. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.html) for more information.
screen_single_category flag If True, screens fields that have too many records falling into the same category relative to the total number of records.
max_single_category number Specifies the threshold used when screen_single_category is True.
screen_missing_values flag If True, screens fields with too many missing values, expressed as a percentage of the total number of records.
max_missing_values number
screen_num_categories flag If True, screens fields with too many categories relative to the total number of records.
max_num_categories number
screen_std_dev flag If True, screens fields with a standard deviation of less than or equal to the specified minimum.
min_std_dev number
screen_coeff_of_var flag If True, screens fields with a coefficient of variance less than or equal to the specified minimum.
min_coeff_of_var number
criteria Pearson Likelihood CramersV Lambda When ranking categorical predictors against a categorical target, specifies the measure on which the importance value is based.
unimportant_below number Specifies the threshold p values used to rank variables as important, marginal, or unimportant. Accepts values from 0.0 to 1.0.
important_above number Accepts values from 0.0 to 1.0.
unimportant_label string Specifies the label for the unimportant ranking.
marginal_label string
important_label string
selection_mode ImportanceLevel ImportanceValue TopN
select_important flag When selection_mode is set to ImportanceLevel, specifies whether to select important fields.
select_marginal flag When selection_mode is set to ImportanceLevel, specifies whether to select marginal fields.
select_unimportant flag When selection_mode is set to ImportanceLevel, specifies whether to select unimportant fields.
importance_value number When selection_mode is set to ImportanceValue, specifies the cutoff value to use. Accepts values from 0 to 100.
top_n integer When selection_mode is set to TopN, specifies the cutoff value to use. Accepts values from 0 to 1000.
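For example, a minimal sketch (the "featureselection" creation type name and the Drug target are assumptions for illustration) might screen on missing values and keep the top ten predictors:

stream = modeler.script.stream()
node = stream.create("featureselection", "My node")
node.setPropertyValue("target", "Drug")
node.setPropertyValue("screen_missing_values", True)
node.setPropertyValue("max_missing_values", 70)
node.setPropertyValue("selection_mode", "TopN")
node.setPropertyValue("top_n", 10)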
| # featureselectionnode properties #
The Feature Selection node screens input fields for removal based on a set of criteria (such as the percentage of missing values); it then ranks the importance of remaining inputs relative to a specified target\. For example, given a data set with hundreds of potential inputs, which are most likely to be useful in modeling patient outcomes?
<!-- <table "summary="featureselectionnode properties" id="featureselectionnodeslots__table_cqh_kcj_cdb" class="defaultstyle" "> -->
featureselectionnode properties
Table 1\. featureselectionnode properties
| `featureselectionnode` Properties | Values | Property description |
| --------------------------------- | ---------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `target` | *field* | Feature Selection models rank predictors relative to the specified target\. Weight and frequency fields are not used\. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.html) for more information\. |
| `screen_single_category` | *flag* | If `True`, screens fields that have too many records falling into the same category relative to the total number of records\. |
| `max_single_category` | *number* | Specifies the threshold used when `screen_single_category` is `True`\. |
| `screen_missing_values` | *flag* | If `True`, screens fields with too many missing values, expressed as a percentage of the total number of records\. |
| `max_missing_values` | *number* | |
| `screen_num_categories` | *flag* | If `True`, screens fields with too many categories relative to the total number of records\. |
| `max_num_categories` | *number* | |
| `screen_std_dev` | *flag* | If `True`, screens fields with a standard deviation of less than or equal to the specified minimum\. |
| `min_std_dev` | *number* | |
| `screen_coeff_of_var` | *flag* | If `True`, screens fields with a coefficient of variance less than or equal to the specified minimum\. |
| `min_coeff_of_var` | *number* | |
| `criteria` | `Pearson``Likelihood``CramersV``Lambda` | When ranking categorical predictors against a categorical target, specifies the measure on which the importance value is based\. |
| `unimportant_below` | *number* | Specifies the threshold *p* values used to rank variables as important, marginal, or unimportant\. Accepts values from 0\.0 to 1\.0\. |
| `important_above` | *number* | Accepts values from 0\.0 to 1\.0\. |
| `unimportant_label` | *string* | Specifies the label for the unimportant ranking\. |
| `marginal_label` | *string* | |
| `important_label` | *string* | |
| `selection_mode` | `ImportanceLevel``ImportanceValue``TopN` | |
| `select_important` | *flag* | When `selection_mode` is set to `ImportanceLevel`, specifies whether to select important fields\. |
| `select_marginal` | *flag* | When `selection_mode` is set to `ImportanceLevel`, specifies whether to select marginal fields\. |
| `select_unimportant` | *flag* | When `selection_mode` is set to `ImportanceLevel`, specifies whether to select unimportant fields\. |
| `importance_value` | *number* | When `selection_mode` is set to `ImportanceValue`, specifies the cutoff value to use\. Accepts values from 0 to 100\. |
| `top_n` | *integer* | When `selection_mode` is set to `TopN`, specifies the cutoff value to use\. Accepts values from 0 to 1000\. |
<!-- </table "summary="featureselectionnode properties" id="featureselectionnodeslots__table_cqh_kcj_cdb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
9A5011652C8FAD610EF217B82B7F28C8256DCE8B | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/featureselectionnuggetnodeslots.html?context=cdpaas&locale=en | applyfeatureselectionnode properties | applyfeatureselectionnode properties
You can use Feature Selection modeling nodes to generate a Feature Selection model nugget. The scripting name of this model nugget is applyfeatureselectionnode. For more information on scripting the modeling node itself, see [featureselectionnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/featureselectionnodeslots.html#featureselectionnodeslots).
applyfeatureselectionnode properties
Table 1. applyfeatureselectionnode properties
applyfeatureselectionnode Properties Values Property description
ranked_values Specifies which ranked fields are checked in the model browser.
screened_values Specifies which screened fields are checked in the model browser.
| # applyfeatureselectionnode properties #
You can use Feature Selection modeling nodes to generate a Feature Selection model nugget\. The scripting name of this model nugget is *applyfeatureselectionnode*\. For more information on scripting the modeling node itself, see [featureselectionnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/featureselectionnodeslots.html#featureselectionnodeslots)\.
<!-- <table "summary="applyfeatureselectionnode properties" id="featureselectionnuggetnodeslots__table_u2v_kcj_cdb" class="defaultstyle" "> -->
applyfeatureselectionnode properties
Table 1\. applyfeatureselectionnode properties
| `applyfeatureselectionnode` Properties | Values | Property description |
| -------------------------------------- | ------ | ------------------------------------------------------------------ |
| `ranked_values` | | Specifies which ranked fields are checked in the model browser\. |
| `screened_values` | | Specifies which screened fields are checked in the model browser\. |
<!-- </table "summary="applyfeatureselectionnode properties" id="featureselectionnuggetnodeslots__table_u2v_kcj_cdb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
76910487C819D14F9FEFCBC6252F25652AF1E65B | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/fillernodeslots.html?context=cdpaas&locale=en | fillernode properties | fillernode properties
The Filler node replaces field values and changes storage. You can choose to replace values based on a CLEM condition, such as @BLANK(@FIELD). Alternatively, you can choose to replace all blanks or null values with a specific value. A Filler node is often used together with a Type node to replace missing values.
Example
node = stream.create("filler", "My node")
node.setPropertyValue("fields", ["Age"])
node.setPropertyValue("replace_mode", "Always")
node.setPropertyValue("condition", "("Age" > 60) and ("Sex" = "M"")
node.setPropertyValue("replace_with", ""old man"")
fillernode properties
Table 1. fillernode properties
fillernode properties Data type Property description
fields list Fields from the dataset whose values will be examined and replaced.
replace_mode Always Conditional Blank Null BlankAndNull You can replace all values, blank values, or null values, or replace based on a specified condition.
condition string
replace_with string
| # fillernode properties #
The Filler node replaces field values and changes storage\. You can choose to replace values based on a CLEM condition, such as `@BLANK(@FIELD)`\. Alternatively, you can choose to replace all blanks or null values with a specific value\. A Filler node is often used together with a Type node to replace missing values\.
Example
node = stream.create("filler", "My node")
node.setPropertyValue("fields", ["Age"])
node.setPropertyValue("replace_mode", "Always")
node.setPropertyValue("condition", "(\"Age\" > 60) and (\"Sex\" = \"M\"")
node.setPropertyValue("replace_with", "\"old man\"")
<!-- <table "summary="fillernode properties" id="fillernodeslots__table_e3h_ncj_cdb" class="defaultstyle" "> -->
fillernode properties
Table 1\. fillernode properties
| `fillernode` properties | Data type | Property description |
| ----------------------- | ------------------------------------------------ | ----------------------------------------------------------------------------------------------------- |
| `fields` | *list* | Fields from the dataset whose values will be examined and replaced\. |
| `replace_mode` | `Always``Conditional``Blank``Null``BlankAndNull` | You can replace all values, blank values, or null values, or replace based on a specified condition\. |
| `condition` | *string* | |
| `replace_with` | *string* | |
<!-- </table "summary="fillernode properties" id="fillernodeslots__table_e3h_ncj_cdb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
D91044A492D05F87613BBA485CD2FAE1F54764DB | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/filternodeslots.html?context=cdpaas&locale=en | filternode properties | filternode properties
The Filter node filters (discards) fields, renames fields, and maps fields from one import node to another.
Using the default_include property. Note that setting the value of the default_include property doesn't automatically include or exclude all fields; it simply determines the default for the current selection. This is functionally equivalent to selecting the Include All Fields option in the Filter node properties. For example, suppose you run the following script:
node = modeler.script.stream().create("filter", "Filter")
node.setPropertyValue("default_include", False)
# Include these two fields in the list
for f in ["Age", "Sex"]:
node.setKeyedPropertyValue("include", f, True)
This will cause the node to pass the fields Age and Sex and discard all others. Now suppose you run the same script again but name two different fields:
node = modeler.script.stream().create("filter", "Filter")
node.setPropertyValue("default_include", False)
# Include these two fields in the list
for f in ["BP", "Na"]:
node.setKeyedPropertyValue("include", f, True)
This will add two more fields to the filter so that a total of four fields are passed (Age, Sex, BP, Na). In other words, resetting the value of default_include to False doesn't automatically reset all fields.
Alternatively, if you now change default_include to True, either using a script or in the Filter node dialog box, this would flip the behavior so the four fields listed previously would be discarded rather than included. When in doubt, experimenting with the controls in the Filter node properties may be helpful in understanding this interaction.
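For example, continuing from the script above, a single line flips the behavior so the four listed fields are discarded instead of passed:

node.setPropertyValue("default_include", True)
# Age, Sex, BP, and Na are now excluded; all other fields pass through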
filternode properties
Table 1. filternode properties
filternode properties Data type Property description
default_include flag Keyed property to specify whether the default behavior is to pass or filter fields. Note that setting this property doesn't automatically include or exclude all fields; it simply determines whether selected fields are included or excluded by default.
include flag Keyed property for field inclusion and removal.
new_name string
| # filternode properties #
The Filter node filters (discards) fields, renames fields, and maps fields from one import node to another\.
Using the default\_include property\. Note that setting the value of the `default_include` property doesn't automatically include or exclude all fields; it simply determines the default for the current selection\. This is functionally equivalent to selecting the Include All Fields option in the Filter node properties\. For example, suppose you run the following script:
node = modeler.script.stream().create("filter", "Filter")
node.setPropertyValue("default_include", False)
# Include these two fields in the list
for f in ["Age", "Sex"]:
node.setKeyedPropertyValue("include", f, True)
This will cause the node to pass the fields `Age` and `Sex` and discard all others\. Now suppose you run the same script again but name two different fields:
node = modeler.script.stream().create("filter", "Filter")
node.setPropertyValue("default_include", False)
# Include these two fields in the list
for f in ["BP", "Na"]:
node.setKeyedPropertyValue("include", f, True)
This will add two more fields to the filter so that a total of four fields are passed (`Age`, `Sex`, `BP`, `Na`)\. In other words, resetting the value of `default_include` to `False` doesn't automatically reset all fields\.
Alternatively, if you now change `default_include` to `True`, either using a script or in the Filter node dialog box, this would flip the behavior so the four fields listed previously would be discarded rather than included\. When in doubt, experimenting with the controls in the Filter node properties may be helpful in understanding this interaction\.
<!-- <table "summary="filternode properties" id="filternodeslots__table_fy5_ncj_cdb" class="defaultstyle" "> -->
filternode properties
Table 1\. filternode properties
| `filternode` properties | Data type | Property description |
| ----------------------- | --------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `default_include` | *flag* | Keyed property to specify whether the default behavior is to pass or filter fields\. Note that setting this property doesn't automatically include or exclude all fields; it simply determines whether selected fields are included or excluded by default\. |
| `include` | *flag* | Keyed property for field inclusion and removal\. |
| `new_name` | *string* | |
<!-- </table "summary="filternode properties" id="filternodeslots__table_fy5_ncj_cdb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
916F0A90D0B8383F2353B3320628E23E38B380B5 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/genlinnodeslots.html?context=cdpaas&locale=en | genlinnode properties | genlinnode properties
The Generalized Linear (GenLin) model expands the general linear model so that the dependent variable is linearly related to the factors and covariates through a specified link function. Moreover, the model allows for the dependent variable to have a non-normal distribution. It covers the functionality of a wide number of statistical models, including linear regression, logistic regression, loglinear models for count data, and interval-censored survival models.
genlinnode properties
Table 1. genlinnode properties
genlinnode Properties Values Property description
target field GenLin models require a single target field which must be a nominal or flag field, and one or more input fields. A weight field can also be specified. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.html#modelingnodeslots_common) for more information.
use_weight flag
weight_field field Field type is only continuous.
target_represents_trials flag
trials_type Variable FixedValue
trials_field field Field type is continuous, flag, or ordinal.
trials_number number Default value is 10.
model_type MainEffects MainAndAllTwoWayEffects
offset_type Variable FixedValue
offset_field field Field type is only continuous.
offset_value number Must be a real number.
base_category Last First
include_intercept flag
mode Simple Expert
distribution BINOMIAL GAMMA IGAUSS NEGBIN NORMAL POISSON TWEEDIE MULTINOMIAL IGAUSS: Inverse Gaussian. NEGBIN: Negative binomial.
negbin_para_type Specify Estimate
negbin_parameter number Default value is 1. Must contain a non-negative real number.
tweedie_parameter number
link_function IDENTITY CLOGLOG LOG LOGC LOGIT NEGBIN NLOGLOG ODDSPOWER PROBIT POWER CUMCAUCHIT CUMCLOGLOG CUMLOGIT CUMNLOGLOG CUMPROBIT CLOGLOG: Complementary log-log. LOGC: log complement. NEGBIN: Negative binomial. NLOGLOG: Negative log-log. CUMCAUCHIT: Cumulative cauchit. CUMCLOGLOG: Cumulative complementary log-log. CUMLOGIT: Cumulative logit. CUMNLOGLOG: Cumulative negative log-log. CUMPROBIT: Cumulative probit.
power number Value must be real, nonzero number.
method Hybrid Fisher NewtonRaphson
max_fisher_iterations number Default value is 1; only positive integers allowed.
scale_method MaxLikelihoodEstimate Deviance PearsonChiSquare FixedValue
scale_value number Default value is 1; must be greater than 0.
covariance_matrix ModelEstimator RobustEstimator
max_iterations number Default value is 100; non-negative integers only.
max_step_halving number Default value is 5; positive integers only.
check_separation flag
start_iteration number Default value is 20; only positive integers allowed.
estimates_change flag
estimates_change_min number Default value is 1E-006; only positive numbers allowed.
estimates_change_type Absolute Relative
loglikelihood_change flag
loglikelihood_change_min number Only positive numbers allowed.
loglikelihood_change_type Absolute Relative
hessian_convergence flag
hessian_convergence_min number Only positive numbers allowed.
hessian_convergence_type Absolute Relative
case_summary flag
contrast_matrices flag
descriptive_statistics flag
estimable_functions flag
model_info flag
iteration_history flag
goodness_of_fit flag
print_interval number Default value is 1; must be positive integer.
model_summary flag
lagrange_multiplier flag
parameter_estimates flag
include_exponential flag
covariance_estimates flag
correlation_estimates flag
analysis_type TypeI TypeIII TypeIAndTypeIII
statistics Wald LR
citype Wald Profile
tolerancelevel number Default value is 0.0001.
confidence_interval number Default value is 95.
loglikelihood_function Full Kernel
singularity_tolerance 1E-007 1E-008 1E-009 1E-010 1E-011 1E-012
value_order Ascending Descending DataOrder
calculate_variable_importance flag
calculate_raw_propensities flag
calculate_adjusted_propensities flag
adjusted_propensity_partition Test Validation
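For example, a minimal sketch (the "genlin" creation type name is an assumption for illustration) might set up a Poisson loglinear model:

stream = modeler.script.stream()
node = stream.create("genlin", "My node")
node.setPropertyValue("model_type", "MainEffects")
node.setPropertyValue("distribution", "POISSON")
node.setPropertyValue("link_function", "LOG")
node.setPropertyValue("max_iterations", 200)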
| # genlinnode properties #
The Generalized Linear (GenLin) model expands the general linear model so that the dependent variable is linearly related to the factors and covariates through a specified link function\. Moreover, the model allows for the dependent variable to have a non\-normal distribution\. It covers the functionality of a wide number of statistical models, including linear regression, logistic regression, loglinear models for count data, and interval\-censored survival models\.
<!-- <table "summary="genlinnode properties" id="genlinnodeslots__table_msf_rcj_cdb" class="defaultstyle" "> -->
genlinnode properties
Table 1\. genlinnode properties
| `genlinnode` Properties | Values | Property description |
| --------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `target` | *field* | GenLin models require a single target field which must be a nominal or flag field, and one or more input fields\. A weight field can also be specified\. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.html#modelingnodeslots_common) for more information\. |
| `use_weight` | *flag* | |
| `weight_field` | *field* | Field type is only continuous\. |
| `target_represents_trials` | *flag* | |
| `trials_type` | `Variable``FixedValue` | |
| `trials_field` | *field* | Field type is continuous, flag, or ordinal\. |
| `trials_number` | *number* | Default value is 10\. |
| `model_type` | `MainEffects``MainAndAllTwoWayEffects` | |
| `offset_type` | `Variable``FixedValue` | |
| `offset_field` | *field* | Field type is only continuous\. |
| `offset_value` | *number* | Must be a real number\. |
| `base_category` | `Last``First` | |
| `include_intercept` | *flag* | |
| `mode` | `Simple``Expert` | |
| `distribution` | `BINOMIAL``GAMMA``IGAUSS``NEGBIN``NORMAL``POISSON``TWEEDIE``MULTINOMIAL` | `IGAUSS`: Inverse Gaussian\. `NEGBIN`: Negative binomial\. |
| `negbin_para_type` | `Specify``Estimate` | |
| `negbin_parameter` | *number* | Default value is 1\. Must contain a non\-negative real number\. |
| `tweedie_parameter` | *number* | |
| `link_function` | `IDENTITY``CLOGLOG``LOG``LOGC``LOGIT``NEGBIN``NLOGLOG``ODDSPOWER``PROBIT``POWER``CUMCAUCHIT``CUMCLOGLOG``CUMLOGIT``CUMNLOGLOG``CUMPROBIT` | `CLOGLOG`: Complementary log\-log\. `LOGC`: log complement\. `NEGBIN`: Negative binomial\. `NLOGLOG`: Negative log\-log\. `CUMCAUCHIT`: Cumulative cauchit\. `CUMCLOGLOG`: Cumulative complementary log\-log\. `CUMLOGIT`: Cumulative logit\. `CUMNLOGLOG`: Cumulative negative log\-log\. `CUMPROBIT`: Cumulative probit\. |
| `power` | *number* | Value must be real, nonzero number\. |
| `method` | `Hybrid``Fisher``NewtonRaphson` | |
| `max_fisher_iterations` | *number* | Default value is 1; only positive integers allowed\. |
| `scale_method` | `MaxLikelihoodEstimate``Deviance``PearsonChiSquare``FixedValue` | |
| `scale_value` | *number* | Default value is 1; must be greater than 0\. |
| `covariance_matrix` | `ModelEstimator``RobustEstimator` | |
| `max_iterations` | *number* | Default value is 100; non\-negative integers only\. |
| `max_step_halving` | *number* | Default value is 5; positive integers only\. |
| `check_separation` | *flag* | |
| `start_iteration` | *number* | Default value is 20; only positive integers allowed\. |
| `estimates_change` | *flag* | |
| `estimates_change_min` | *number* | Default value is 1E\-006; only positive numbers allowed\. |
| `estimates_change_type` | `Absolute``Relative` | |
| `loglikelihood_change` | *flag* | |
| `loglikelihood_change_min` | *number* | Only positive numbers allowed\. |
| `loglikelihood_change_type` | `Absolute``Relative` | |
| `hessian_convergence` | *flag* | |
| `hessian_convergence_min` | *number* | Only positive numbers allowed\. |
| `hessian_convergence_type` | `Absolute``Relative` | |
| `case_summary` | *flag* | |
| `contrast_matrices` | *flag* | |
| `descriptive_statistics` | *flag* | |
| `estimable_functions` | *flag* | |
| `model_info` | *flag* | |
| `iteration_history` | *flag* | |
| `goodness_of_fit` | *flag* | |
| `print_interval` | *number* | Default value is 1; must be positive integer\. |
| `model_summary` | *flag* | |
| `lagrange_multiplier` | *flag* | |
| `parameter_estimates` | *flag* | |
| `include_exponential` | *flag* | |
| `covariance_estimates` | *flag* | |
| `correlation_estimates` | *flag* | |
| `analysis_type` | `TypeI``TypeIII``TypeIAndTypeIII` | |
| `statistics` | `Wald``LR` | |
| `citype` | `Wald``Profile` | |
| `tolerancelevel` | *number* | Default value is 0\.0001\. |
| `confidence_interval` | *number* | Default value is 95\. |
| `loglikelihood_function` | `Full``Kernel` | |
| `singularity_tolerance` | `1E-007``1E-008``1E-009``1E-010``1E-011``1E-012` | |
| `value_order` | `Ascending``Descending``DataOrder` | |
| `calculate_variable_importance` | *flag* | |
| `calculate_raw_propensities` | *flag* | |
| `calculate_adjusted_propensities` | *flag* | |
| `adjusted_propensity_partition` | `Test``Validation` | |
<!-- </table "summary="genlinnode properties" id="genlinnodeslots__table_msf_rcj_cdb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
BC3D88E89001BB639E418AE5971B209535603A18 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/genlinnuggetnodeslots.html?context=cdpaas&locale=en | applygeneralizedlinearnode properties | applygeneralizedlinearnode properties
You can use Generalized Linear (GenLin) modeling nodes to generate a GenLin model nugget. The scripting name of this model nugget is applygeneralizedlinearnode. For more information on scripting the modeling node itself, see [genlinnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/genlinnodeslots.html#genlinnodeslots).
applygeneralizedlinearnode properties
Table 1. applygeneralizedlinearnode properties
applygeneralizedlinearnode Properties Values Property description
calculate_raw_propensities flag
calculate_adjusted_propensities flag
enable_sql_generation false native When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations.
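For example, a script sketch (the "applygeneralizedlinear" type string passed to findByType is an assumption) might enable propensity scores and native SQL generation on the nugget:

stream = modeler.script.stream()
applynode = stream.findByType("applygeneralizedlinear", None)
applynode.setPropertyValue("calculate_raw_propensities", True)
applynode.setPropertyValue("enable_sql_generation", "native")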
| # applygeneralizedlinearnode properties #
You can use Generalized Linear (GenLin) modeling nodes to generate a GenLin model nugget\. The scripting name of this model nugget is *applygeneralizedlinearnode*\. For more information on scripting the modeling node itself, see [genlinnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/genlinnodeslots.html#genlinnodeslots)\.
<!-- <table "summary="applygeneralizedlinearnode properties" id="genlinnuggetnodeslots__table_sfv_rcj_cdb" class="defaultstyle" "> -->
applygeneralizedlinearnode properties
Table 1\. applygeneralizedlinearnode properties
| `applygeneralizedlinearnode` Properties | Values | Property description |
| --------------------------------------- | --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------ |
| `calculate_raw_propensities` | *flag* | |
| `calculate_adjusted_propensities` | *flag* | |
| `enable_sql_generation` | `false``native` | When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations\. |
<!-- </table "summary="applygeneralizedlinearnode properties" id="genlinnuggetnodeslots__table_sfv_rcj_cdb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
ADF96F4DC435CCF7E702EEAC6FCAA9F7DECD1DCB | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/glenodeslots.html?context=cdpaas&locale=en | gle properties | gle properties
A GLE extends the linear model so that the target can have a non-normal distribution, is linearly related to the factors and covariates via a specified link function, and so that the observations can be correlated. Generalized linear mixed models cover a wide variety of models, from simple linear regression to complex multilevel models for non-normal longitudinal data.
gle properties
Table 1. gle properties
gle Properties Values Property description
custom_target flag Indicates whether to use target defined in upstream node (false) or custom target specified by target_field (true).
target_field field Field to use as target if custom_target is true.
use_trials flag Indicates whether additional field or value specifying number of trials is to be used when target response is a number of events occurring in a set of trials. Default is false.
use_trials_field_or_value Field <br>Value Indicates whether field (default) or value is used to specify number of trials.
trials_field field Field to use to specify number of trials.
trials_value integer Value to use to specify number of trials. If specified, minimum value is 1.
use_custom_target_reference flag Indicates whether custom reference category is to be used for a categorical target. Default is false.
target_reference_value string Reference category to use if use_custom_target_reference is true.
dist_link_combination NormalIdentity <br>GammaLog <br>PoissonLog <br>NegbinLog <br>TweedieIdentity <br>NominalLogit <br>BinomialLogit <br>BinomialProbit <br>BinomialLogC <br>CUSTOM Common models for distribution of values for target. Choose CUSTOM to specify a distribution from the list provided by target_distribution.
target_distribution Normal <br>Binomial <br>Multinomial <br>Gamma <br>INVERSE_GAUSS <br>NEG_BINOMIAL <br>Poisson <br>TWEEDIE <br>UNKNOWN Distribution of values for target when dist_link_combination is Custom.
link_function_type UNKNOWN <br>IDENTITY <br>LOG <br>LOGIT <br>PROBIT <br>COMPL_LOG_LOG <br>POWER <br>LOG_COMPL <br>NEG_LOG_LOG <br>ODDS_POWER <br>NEG_BINOMIAL <br>GEN_LOGIT <br>CUMUL_LOGIT <br>CUMUL_PROBIT <br>CUMUL_COMPL_LOG_LOG <br>CUMUL_NEG_LOG_LOG <br>CUMUL_CAUCHIT Link function to relate target values to predictors. If target_distribution is Binomial you can use: <br><br>UNKNOWN, IDENTITY, LOG, LOGIT, PROBIT, COMPL_LOG_LOG, POWER, LOG_COMPL, NEG_LOG_LOG, ODDS_POWER<br><br>If target_distribution is NEG_BINOMIAL you can use:<br><br>NEG_BINOMIAL<br><br>If target_distribution is UNKNOWN, you can use:<br><br>GEN_LOGIT, CUMUL_LOGIT, CUMUL_PROBIT, CUMUL_COMPL_LOG_LOG, CUMUL_NEG_LOG_LOG, CUMUL_CAUCHIT
link_function_param number Link function parameter value to use. Only applicable if normal_link_function or link_function_type is POWER.
tweedie_param number Tweedie parameter value to use. Only applicable if dist_link_combination is set to TweedieIdentity, or link_function_type is TWEEDIE.
use_predefined_inputs flag Indicates whether model effect fields are to be those defined upstream as input fields (true) or those from model_effects_list (false).
model_effects_list structured If use_predefined_inputs is false, specifies the input fields to use as model effect fields.
use_intercept flag If true (default), includes the intercept in the model.
regression_weight_field field Field to use as analysis weight field.
use_offset None <br>Value <br>Variable Indicates how offset is specified. Value None means no offset is used.
offset_value number Value to use for offset if use_offset is set to offset_value.
offset_field field Field to use for offset value if use_offset is set to offset_field.
target_category_order Ascending <br>Descending Sorting order for categorical targets. Default is Ascending.
inputs_category_order Ascending <br>Descending Sorting order for categorical predictors. Default is Ascending.
max_iterations integer Maximum number of iterations the algorithm will perform. A non-negative integer; default is 100.
confidence_level number Confidence level used to compute interval estimates of the model coefficients. A non-negative integer; maximum is 100, default is 95.
test_fixed_effects_coeffecients Model <br>Robust Method for computing the parameter estimates covariance matrix.
detect_outliers flag When true the algorithm finds influential outliers for all distributions except multinomial distribution.
conduct_trend_analysis flag When true the algorithm conducts trend analysis for the scatter plot.
estimation_method FISHER_SCORING <br>NEWTON_RAPHSON <br>HYBRID Specify the maximum likelihood estimation algorithm.
max_fisher_iterations integer If using the FISHER_SCORING estimation_method, the maximum number of iterations. Minimum 0, maximum 20.
scale_parameter_method MLE <br>FIXED <br>DEVIANCE <br>PEARSON_CHISQUARE Specify the method to be used for the estimation of the scale parameter.
scale_value number Only available if scale_parameter_method is set to Fixed.
negative_binomial_method MLE <br>FIXED Specify the method to be used for the estimation of the negative binomial ancillary parameter.
negative_binomial_value number Only available if negative_binomial_method is set to Fixed.
use_p_converge flag Option for parameter convergence.
p_converge number Blank, or any positive value.
p_converge_type flag True = Absolute, False = Relative
use_l_converge flag Option for log-likelihood convergence.
l_converge number Blank, or any positive value.
l_converge_type flag True = Absolute, False = Relative
use_h_converge flag Option for Hessian convergence.
h_converge number Blank, or any positive value.
h_converge_type flag True = Absolute, False = Relative
sing_tolerance integer
use_model_selection flag Enables the parameter threshold and model selection method controls.
method LASSO <br>ELASTIC_NET <br>FORWARD_STEPWISE <br>RIDGE Determines the model selection method, or if using Ridge the regularization method, used.
detect_two_way_interactions flag When True the model will automatically detect two-way interactions between input fields. This control should only be enabled if the model is main effects only (that is, where the user has not created any higher order effects) and if the method selected is Forward Stepwise, Lasso, or Elastic Net.
automatic_penalty_params flag Only available if model selection method is Lasso or Elastic Net. Use this function to enter penalty parameters associated with either the Lasso or Elastic Net variable selection methods. If True, default values are used. If False, the penalty parameters are enabled and custom values can be entered.
lasso_penalty_param number Only available if model selection method is Lasso or Elastic Net and automatic_penalty_params is False. Specify the penalty parameter value for Lasso.
elastic_net_penalty_param1 number Only available if model selection method is Lasso or Elastic Net and automatic_penalty_params is False. Specify the penalty parameter value for Elastic Net parameter 1.
elastic_net_penalty_param2 number Only available if model selection method is Lasso or Elastic Net and automatic_penalty_params is False. Specify the penalty parameter value for Elastic Net parameter 2.
probability_entry number Only available if the method selected is Forward Stepwise. Specify the significance level of the f statistic criterion for effect inclusion.
probability_removal number Only available if the method selected is Forward Stepwise. Specify the significance level of the f statistic criterion for effect removal.
use_max_effects flag Only available if the method selected is Forward Stepwise. Enables the max_effects control. When False the default number of effects included should equal the total number of effects supplied to the model, minus the intercept.
max_effects integer Specify the maximum number of effects when using the forward stepwise building method.
use_max_steps flag Enables the max_steps control. When False the default number of steps should equal three times the number of effects supplied to the model, excluding the intercept.
max_steps integer Specify the maximum number of steps to be taken when using the Forward Stepwise building method.
use_model_name flag Indicates whether to specify a custom name for the model (true) or to use the system-generated name (false). Default is false.
model_name string If use_model_name is true, specifies the model name to use.
usePI flag If true, predictor importance is calculated.
perform_model_effect_tests boolean Whether to perform model effect tests.
non_neg_least_squares integer Whether to perform non-negative least squares.
| # gle properties #
A GLE extends the linear model so that the target can have a non\-normal distribution, is linearly related to the factors and covariates via a specified link function, and so that the observations can be correlated\. Generalized linear mixed models cover a wide variety of models, from simple linear regression to complex multilevel models for non\-normal longitudinal data\.
<!-- <table "summary="gle properties" id="glenodeslots__table_utv_kw1_zs" class="defaultstyle" "> -->
gle properties
Table 1\. gle properties
| `gle` Properties | Values | Property description |
| --------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `custom_target` | *flag* | Indicates whether to use target defined in upstream node (`false`) or custom target specified by `target_field` (`true`)\. |
| `target_field` | *field* | Field to use as target if `custom_target` is `true`\. |
| `use_trials` | *flag* | Indicates whether additional field or value specifying number of trials is to be used when target response is a number of events occurring in a set of trials\. Default is `false`\. |
| `use_trials_field_or_value` | `Field` <br>`Value` | Indicates whether field (default) or value is used to specify number of trials\. |
| `trials_field` | *field* | Field to use to specify number of trials\. |
| `trials_value` | *integer* | Value to use to specify number of trials\. If specified, minimum value is 1\. |
| `use_custom_target_reference` | *flag* | Indicates whether custom reference category is to be used for a categorical target\. Default is `false`\. |
| `target_reference_value` | *string* | Reference category to use if `use_custom_target_reference` is `true`\. |
| `dist_link_combination` | `NormalIdentity` <br>`GammaLog` <br>`PoissonLog` <br>`NegbinLog` <br>`TweedieIdentity` <br>`NominalLogit` <br>`BinomialLogit` <br>`BinomialProbit` <br>`BinomialLogC` <br>`CUSTOM` | Common models for distribution of values for target\. Choose `CUSTOM` to specify a distribution from the list provided by `target_distribution`\. |
| `target_distribution` | `Normal` <br>`Binomial` <br>`Multinomial` <br>`Gamma` <br>`INVERSE_GAUSS` <br>`NEG_BINOMIAL` <br>`Poisson` <br>`TWEEDIE` <br>`UNKNOWN` | Distribution of values for target when `dist_link_combination` is `Custom`\. |
| `link_function_type`              | `UNKNOWN` <br>`IDENTITY` <br>`LOG` <br>`LOGIT` <br>`PROBIT` <br>`COMPL_LOG_LOG` <br>`POWER` <br>`LOG_COMPL` <br>`NEG_LOG_LOG` <br>`ODDS_POWER` <br>`NEG_BINOMIAL` <br>`GEN_LOGIT` <br>`CUMUL_LOGIT` <br>`CUMUL_PROBIT` <br>`CUMUL_COMPL_LOG_LOG` <br>`CUMUL_NEG_LOG_LOG` <br>`CUMUL_CAUCHIT` | Link function to relate target values to predictors\. If `target_distribution` is `Binomial` you can use: <br><br>`UNKNOWN`, `IDENTITY`, `LOG`, `LOGIT`, `PROBIT`, `COMPL_LOG_LOG`, `POWER`, `LOG_COMPL`, `NEG_LOG_LOG`, `ODDS_POWER`<br><br>If `target_distribution` is `NEG_BINOMIAL` you can use:<br><br>`NEG_BINOMIAL`<br><br>If `target_distribution` is `UNKNOWN`, you can use:<br><br>`GEN_LOGIT`, `CUMUL_LOGIT`, `CUMUL_PROBIT`, `CUMUL_COMPL_LOG_LOG`, `CUMUL_NEG_LOG_LOG`, `CUMUL_CAUCHIT` |
| `link_function_param`             | *number* | Link function parameter value to use\. Only applicable if `normal_link_function` or `link_function_type` is `POWER`\. |
| `tweedie_param`                   | *number* | Tweedie parameter value to use\. Only applicable if `dist_link_combination` is set to `TweedieIdentity`, or `link_function_type` is `TWEEDIE`\. |
| `use_predefined_inputs`           | *flag* | Indicates whether model effect fields are to be those defined upstream as input fields (`true`) or those from `model_effects_list` (`false`)\. |
| `model_effects_list` | *structured* | If `use_predefined_inputs` is `false`, specifies the input fields to use as model effect fields\. |
| `use_intercept` | *flag* | If `true` (default), includes the intercept in the model\. |
| `regression_weight_field` | *field* | Field to use as analysis weight field\. |
| `use_offset` | `None` <br>`Value` <br>`Variable` | Indicates how offset is specified\. Value `None` means no offset is used\. |
| `offset_value` | *number* | Value to use for offset if `use_offset` is set to `offset_value`\. |
| `offset_field` | *field* | Field to use for offset value if `use_offset` is set to `offset_field`\. |
| `target_category_order` | `Ascending` <br>`Descending` | Sorting order for categorical targets\. Default is `Ascending`\. |
| `inputs_category_order` | `Ascending` <br>`Descending` | Sorting order for categorical predictors\. Default is `Ascending`\. |
| `max_iterations` | *integer* | Maximum number of iterations the algorithm will perform\. A non\-negative integer; default is 100\. |
| `confidence_level` | *number* | Confidence level used to compute interval estimates of the model coefficients\. A non\-negative integer; maximum is 100, default is 95\. |
| `test_fixed_effects_coeffecients` | `Model` <br>`Robust` | Method for computing the parameter estimates covariance matrix\. |
| `detect_outliers` | *flag* | When true the algorithm finds influential outliers for all distributions except multinomial distribution\. |
| `conduct_trend_analysis` | *flag* | When true the algorithm conducts trend analysis for the scatter plot\. |
| `estimation_method` | `FISHER_SCORING` <br>`NEWTON_RAPHSON` <br>`HYBRID` | Specify the maximum likelihood estimation algorithm\. |
| `max_fisher_iterations`           | *integer* | If using the `FISHER_SCORING` `estimation_method`, the maximum number of iterations\. Minimum 0, maximum 20\. |
| `scale_parameter_method` | `MLE` <br>`FIXED` <br>`DEVIANCE` <br>`PEARSON_CHISQUARE` | Specify the method to be used for the estimation of the scale parameter\. |
| `scale_value` | *number* | Only available if `scale_parameter_method` is set to `Fixed`\. |
| `negative_binomial_method`        | `MLE` <br>`FIXED` | Specify the method to be used for the estimation of the negative binomial ancillary parameter\. |
| `negative_binomial_value` | *number* | Only available if `negative_binomial_method` is set to `Fixed`\. |
| `use_p_converge` | *flag* | Option for parameter convergence\. |
| `p_converge` | *number* | Blank, or any positive value\. |
| `p_converge_type` | *flag* | True = Absolute, False = Relative |
| `use_l_converge` | *flag* | Option for log\-likelihood convergence\. |
| `l_converge` | *number* | Blank, or any positive value\. |
| `l_converge_type` | *flag* | True = Absolute, False = Relative |
| `use_h_converge` | *flag* | Option for Hessian convergence\. |
| `h_converge` | *number* | Blank, or any positive value\. |
| `h_converge_type` | *flag* | True = Absolute, False = Relative |
| `sing_tolerance` | *integer* | |
| `use_model_selection`             | *flag* | Enables the parameter threshold and model selection method controls\. |
| `method`                          | `LASSO` <br>`ELASTIC_NET` <br>`FORWARD_STEPWISE` <br>`RIDGE` | Determines the model selection method, or if using `Ridge` the regularization method, used\. |
| `detect_two_way_interactions` | *flag* | When `True` the model will automatically detect two\-way interactions between input fields\. This control should only be enabled if the model is main effects only (that is, where the user has not created any higher order effects) and if the `method` selected is Forward Stepwise, Lasso, or Elastic Net\. |
| `automatic_penalty_params`        | *flag* | Only available if model selection `method` is Lasso or Elastic Net\. Use this function to enter penalty parameters associated with either the Lasso or Elastic Net variable selection methods\. If `True`, default values are used\. If `False`, the penalty parameters are enabled and custom values can be entered\. |
| `lasso_penalty_param` | *number* | Only available if model selection `method` is Lasso or Elastic Net and `automatic_penalty_params` is `False`\. Specify the penalty parameter value for Lasso\. |
| `elastic_net_penalty_param1` | *number* | Only available if model selection `method` is Lasso or Elastic Net and `automatic_penalty_params` is `False`\. Specify the penalty parameter value for Elastic Net parameter 1\. |
| `elastic_net_penalty_param2` | *number* | Only available if model selection `method` is Lasso or Elastic Net and `automatic_penalty_params` is `False`\. Specify the penalty parameter value for Elastic Net parameter 2\. |
| `probability_entry` | *number* | Only available if the `method` selected is Forward Stepwise\. Specify the significance level of the f statistic criterion for effect inclusion\. |
| `probability_removal` | *number* | Only available if the `method` selected is Forward Stepwise\. Specify the significance level of the f statistic criterion for effect removal\. |
| `use_max_effects` | *flag* | Only available if the `method` selected is Forward Stepwise\. Enables the `max_effects` control\. When `False` the default number of effects included should equal the total number of effects supplied to the model, minus the intercept\. |
| `max_effects` | *integer* | Specify the maximum number of effects when using the forward stepwise building method\. |
| `use_max_steps` | *flag* | Enables the `max_steps` control\. When `False` the default number of steps should equal three times the number of effects supplied to the model, excluding the intercept\. |
| `max_steps` | *integer* | Specify the maximum number of steps to be taken when using the Forward Stepwise building `method`\. |
| `use_model_name` | *flag* | Indicates whether to specify a custom name for the model (`true`) or to use the system\-generated name (`false`)\. Default is `false`\. |
| `model_name` | *string* | If `use_model_name` is `true`, specifies the model name to use\. |
| `usePI`                           | *flag* | If `true`, predictor importance is calculated\. |
| `perform_model_effect_tests` | *boolean* | Whether to perform model effect tests\. |
| `non_neg_least_squares` | *integer* | Whether to perform non\-negative least squares\. |
<!-- </table "summary="gle properties" id="glenodeslots__table_utv_kw1_zs" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
863FD4EEE7625CF4012BC9E37B5B66CD25554B8A | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/glenuggetnodeslots.html?context=cdpaas&locale=en | applygle properties | applygle properties
You can use the GLE modeling node to generate a GLE model nugget. The scripting name of this model nugget is applygle. For more information on scripting the modeling node itself, see [gle properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/glenodeslots.html#glenodeslots).
applygle properties
Table 1. applygle properties
applygle Properties Values Property description
enable_sql_generation false <br>native When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations.
| # applygle properties #
You can use the GLE modeling node to generate a GLE model nugget\. The scripting name of this model nugget is *applygle*\. For more information on scripting the modeling node itself, see [gle properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/glenodeslots.html#glenodeslots)\.
<!-- <table "summary="applygle properties" class="defaultstyle" "> -->
applygle properties
Table 1\. applygle properties
| `applygle` Properties | Values | Property description |
| ----------------------- | --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------ |
| `enable_sql_generation` | `false` <br>`native` | When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations\.  |
<!-- </table "summary="applygle properties" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
0A6D8500DAC43A18EC5DD8FCC3D31C2A31546554 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/glmmnodeslots.html?context=cdpaas&locale=en | glmmnode properties | glmmnode properties
A generalized linear mixed model (GLMM) extends the linear model so that the target can have a non-normal distribution, is linearly related to the factors and covariates via a specified link function, and so that the observations can be correlated. GLMM models cover a wide variety of models, from simple linear regression to complex multilevel models for non-normal longitudinal data.
glmmnode properties
Table 1. glmmnode properties
glmmnode Properties Values Property description
residual_subject_spec structured The combination of values of the specified categorical fields that uniquely define subjects within the data set.
repeated_measures structured Fields used to identify repeated observations.
residual_group_spec [field1 ... fieldN] Fields that define independent sets of repeated effects covariance parameters.
residual_covariance_type Diagonal <br>AR1 <br>ARMA11 <br>COMPOUND_SYMMETRY <br>IDENTITY <br>TOEPLITZ <br>UNSTRUCTURED <br>VARIANCE_COMPONENTS Specifies covariance structure for residuals.
custom_target flag Indicates whether to use target defined in upstream node (false) or custom target specified by target_field (true).
target_field field Field to use as target if custom_target is true.
use_trials flag Indicates whether additional field or value specifying number of trials is to be used when target response is a number of events occurring in a set of trials. Default is false.
use_field_or_value Field <br>Value Indicates whether field (default) or value is used to specify number of trials.
trials_field field Field to use to specify number of trials.
trials_value integer Value to use to specify number of trials. If specified, minimum value is 1.
use_custom_target_reference flag Indicates whether custom reference category is to be used for a categorical target. Default is false.
target_reference_value string Reference category to use if use_custom_target_reference is true.
dist_link_combination Nominal <br>Logit <br>GammaLog <br>BinomialLogit <br>PoissonLog <br>BinomialProbit <br>NegbinLog <br>BinomialLogC <br>Custom Common models for distribution of values for target. Choose Custom to specify a distribution from the list provided by target_distribution.
target_distribution Normal <br>Binomial <br>Multinomial <br>Gamma <br>Inverse <br>NegativeBinomial <br>Poisson Distribution of values for target when dist_link_combination is Custom.
link_function_type Identity <br>LogC <br>Log <br>CLOGLOG <br>Logit <br>NLOGLOG <br>PROBIT <br>POWER <br>CAUCHIT Link function to relate target values to predictors. If target_distribution is Binomial you can use any of the listed link functions. If target_distribution is Multinomial you can use CLOGLOG, CAUCHIT, LOGIT, NLOGLOG, or PROBIT. If target_distribution is anything other than Binomial or Multinomial you can use IDENTITY, LOG, or POWER.
link_function_param number Link function parameter value to use. Only applicable if normal_link_function or link_function_type is POWER.
use_predefined_inputs flag Indicates whether fixed effect fields are to be those defined upstream as input fields (true) or those from fixed_effects_list (false). Default is false.
fixed_effects_list structured If use_predefined_inputs is false, specifies the input fields to use as fixed effect fields.
use_intercept flag If true (default), includes the intercept in the model.
random_effects_list structured List of fields to specify as random effects.
regression_weight_field field Field to use as analysis weight field.
use_offset None <br>offset_value <br>offset_field Indicates how offset is specified. Value None means no offset is used.
offset_value number Value to use for offset if use_offset is set to offset_value.
offset_field field Field to use for offset value if use_offset is set to offset_field.
target_category_order Ascending <br>Descending <br>Data Sorting order for categorical targets. Value Data specifies using the sort order found in the data. Default is Ascending.
inputs_category_order Ascending <br>Descending <br>Data Sorting order for categorical predictors. Value Data specifies using the sort order found in the data. Default is Ascending.
max_iterations integer Maximum number of iterations the algorithm will perform. A non-negative integer; default is 100.
confidence_level integer Confidence level used to compute interval estimates of the model coefficients. A non-negative integer; maximum is 100, default is 95.
degrees_of_freedom_method Fixed <br>Varied Specifies how degrees of freedom are computed for significance test.
test_fixed_effects_coeffecients Model <br>Robust Method for computing the parameter estimates covariance matrix.
use_p_converge flag Option for parameter convergence.
p_converge number Blank, or any positive value.
p_converge_type Absolute <br>Relative
use_l_converge flag Option for log-likelihood convergence.
l_converge number Blank, or any positive value.
l_converge_type Absolute <br>Relative
use_h_converge flag Option for Hessian convergence.
h_converge number Blank, or any positive value.
h_converge_type Absolute <br>Relative
max_fisher_step integer
sing_tolerance number
use_model_name flag Indicates whether to specify a custom name for the model (true) or to use the system-generated name (false). Default is false.
model_name string If use_model_name is true, specifies the model name to use.
confidence onProbability <br>onIncrease Basis for computing scoring confidence value: highest predicted probability, or difference between highest and second highest predicted probabilities.
score_category_probabilities flag If true, produces predicted probabilities for categorical targets. Default is false.
max_categories integer If score_category_probabilities is true, specifies maximum number of categories to save.
score_propensity flag If true, produces propensity scores for flag target fields that indicate likelihood of "true" outcome for field.
emeans structure For each categorical field from the fixed effects list, specifies whether to produce estimated marginal means.
covariance_list structure For each continuous field from the fixed effects list, specifies whether to use the mean or a custom value when computing estimated marginal means.
mean_scale OriginalTransformed Specifies whether to compute estimated marginal means based on the original scale of the target (default) or on the link function transformation.
comparison_adjustment_method LSD <br>SEQBONFERRONI <br>SEQSIDAK Adjustment method to use when performing hypothesis tests with multiple contrasts.
use_trials_field_or_value "field" <br>"value"
residual_subject_ui_spec array Residual subject specification: The combination of values of the specified categorical fields should uniquely define subjects within the dataset. For example, a single Patient ID field should be sufficient to define subjects in a single hospital, but the combination of Hospital ID and Patient ID may be necessary if patient identification numbers are not unique across hospitals.
repeated_ui_measures array The fields specified here are used to identify repeated observations. For example, a single variable Week might identify the 10 weeks of observations in a medical study, or Month and Day might be used together to identify daily observations over the course of a year.
spatial_field array The variables in this list specify the coordinates of the repeated observations when one of the spatial covariance types is selected for the repeated covariance type.
| # glmmnode properties #
A generalized linear mixed model (GLMM) extends the linear model so that the target can have a non\-normal distribution, is linearly related to the factors and covariates via a specified link function, and so that the observations can be correlated\. GLMM models cover a wide variety of models, from simple linear regression to complex multilevel models for non\-normal longitudinal data\.
<!-- <table "summary="glmmnode properties" id="glmmnodeslots__table_l2j_scj_cdb" class="defaultstyle" "> -->
glmmnode properties
Table 1\. glmmnode properties
| `glmmnode` Properties | Values | Property description |
| --------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `residual_subject_spec`           | *structured* | The combination of values of the specified categorical fields that uniquely define subjects within the data set\. |
| `repeated_measures` | *structured* | Fields used to identify repeated observations\. |
| `residual_group_spec` | \[*field1 \.\.\. fieldN*\] | Fields that define independent sets of repeated effects covariance parameters\. |
| `residual_covariance_type` | `Diagonal` <br>`AR1` <br>`ARMA11` <br>`COMPOUND_SYMMETRY` <br>`IDENTITY` <br>`TOEPLITZ` <br>`UNSTRUCTURED` <br>`VARIANCE_COMPONENTS` | Specifies covariance structure for residuals\. |
| `custom_target` | *flag* | Indicates whether to use target defined in upstream node (`false`) or custom target specified by `target_field` (`true`)\. |
| `target_field` | *field* | Field to use as target if `custom_target` is `true`\. |
| `use_trials` | *flag* | Indicates whether additional field or value specifying number of trials is to be used when target response is a number of events occurring in a set of trials\. Default is `false`\. |
| `use_field_or_value` | `Field` <br>`Value` | Indicates whether field (default) or value is used to specify number of trials\. |
| `trials_field` | *field* | Field to use to specify number of trials\. |
| `trials_value` | *integer* | Value to use to specify number of trials\. If specified, minimum value is 1\. |
| `use_custom_target_reference` | *flag* | Indicates whether custom reference category is to be used for a categorical target\. Default is `false`\. |
| `target_reference_value` | *string* | Reference category to use if `use_custom_target_reference` is `true`\. |
| `dist_link_combination`           | `Nominal` <br>`Logit` <br>`GammaLog` <br>`BinomialLogit` <br>`PoissonLog` <br>`BinomialProbit` <br>`NegbinLog` <br>`BinomialLogC` <br>`Custom` | Common models for distribution of values for target\. Choose `Custom` to specify a distribution from the list provided by `target_distribution`\. |
| `target_distribution` | `Normal` <br>`Binomial` <br>`Multinomial` <br>`Gamma` <br>`Inverse` <br>`NegativeBinomial` <br>`Poisson` | Distribution of values for target when `dist_link_combination` is `Custom`\. |
| `link_function_type`              | `Identity` <br>`LogC` <br>`Log` <br>`CLOGLOG` <br>`Logit` <br>`NLOGLOG` <br>`PROBIT` <br>`POWER` <br>`CAUCHIT` | Link function to relate target values to predictors\. If `target_distribution` is `Binomial` you can use any of the listed link functions\. If `target_distribution` is `Multinomial` you can use `CLOGLOG`, `CAUCHIT`, `LOGIT`, `NLOGLOG`, or `PROBIT`\. If `target_distribution` is anything other than `Binomial` or `Multinomial` you can use `IDENTITY`, `LOG`, or `POWER`\. |
| `link_function_param` | *number* | Link function parameter value to use\. Only applicable if `normal_link_function` or `link_function_type` is `POWER`\. |
| `use_predefined_inputs` | *flag* | Indicates whether fixed effect fields are to be those defined upstream as input fields (`true`) or those from `fixed_effects_list` (`false`)\. Default is `false`\. |
| `fixed_effects_list` | *structured* | If `use_predefined_inputs` is `false`, specifies the input fields to use as fixed effect fields\. |
| `use_intercept` | *flag* | If `true` (default), includes the intercept in the model\. |
| `random_effects_list` | *structured* | List of fields to specify as random effects\. |
| `regression_weight_field` | *field* | Field to use as analysis weight field\. |
| `use_offset`                      | `None` <br>`offset_value` <br>`offset_field` | Indicates how offset is specified\. Value `None` means no offset is used\. |
| `offset_value` | *number* | Value to use for offset if `use_offset` is set to `offset_value`\. |
| `offset_field` | *field* | Field to use for offset value if `use_offset` is set to `offset_field`\. |
| `target_category_order`           | `Ascending` <br>`Descending` <br>`Data` | Sorting order for categorical targets\. Value `Data` specifies using the sort order found in the data\. Default is `Ascending`\. |
| `inputs_category_order`           | `Ascending` <br>`Descending` <br>`Data` | Sorting order for categorical predictors\. Value `Data` specifies using the sort order found in the data\. Default is `Ascending`\. |
| `max_iterations` | *integer* | Maximum number of iterations the algorithm will perform\. A non\-negative integer; default is 100\. |
| `confidence_level` | *integer* | Confidence level used to compute interval estimates of the model coefficients\. A non\-negative integer; maximum is 100, default is 95\. |
| `degrees_of_freedom_method`       | `Fixed` <br>`Varied` | Specifies how degrees of freedom are computed for significance test\. |
| `test_fixed_effects_coeffecients` | `Model` <br>`Robust` | Method for computing the parameter estimates covariance matrix\. |
| `use_p_converge` | *flag* | Option for parameter convergence\. |
| `p_converge` | *number* | Blank, or any positive value\. |
| `p_converge_type`                 | `Absolute` <br>`Relative` | |
| `use_l_converge` | *flag* | Option for log\-likelihood convergence\. |
| `l_converge` | *number* | Blank, or any positive value\. |
| `l_converge_type`                 | `Absolute` <br>`Relative` | |
| `use_h_converge` | *flag* | Option for Hessian convergence\. |
| `h_converge` | *number* | Blank, or any positive value\. |
| `h_converge_type`                 | `Absolute` <br>`Relative` | |
| `max_fisher_step` | *integer* | |
| `sing_tolerance` | *number* | |
| `use_model_name` | *flag* | Indicates whether to specify a custom name for the model (`true`) or to use the system\-generated name (`false`)\. Default is `false`\. |
| `model_name` | *string* | If `use_model_name` is `true`, specifies the model name to use\. |
| `confidence`                      | `onProbability` <br>`onIncrease` | Basis for computing scoring confidence value: highest predicted probability, or difference between highest and second highest predicted probabilities\. |
| `score_category_probabilities` | *flag* | If `true`, produces predicted probabilities for categorical targets\. Default is `false`\. |
| `max_categories` | *integer* | If `score_category_probabilities` is `true`, specifies maximum number of categories to save\. |
| `score_propensity` | *flag* | If `true`, produces propensity scores for flag target fields that indicate likelihood of "true" outcome for field\. |
| `emeans` | *structure* | For each categorical field from the fixed effects list, specifies whether to produce estimated marginal means\. |
| `covariance_list` | *structure* | For each continuous field from the fixed effects list, specifies whether to use the mean or a custom value when computing estimated marginal means\. |
| `mean_scale` | `Original``Transformed` | Specifies whether to compute estimated marginal means based on the original scale of the target (default) or on the link function transformation\. |
| `comparison_adjustment_method`    | `LSD` <br>`SEQBONFERRONI` <br>`SEQSIDAK` | Adjustment method to use when performing hypothesis tests with multiple contrasts\. |
| `use_trials_field_or_value`       | `"field"` <br>`"value"` | |
| `residual_subject_ui_spec` | *array* | Residual subject specification: The combination of values of the specified categorical fields should uniquely define subjects within the dataset\. For example, a single *Patient ID* field should be sufficient to define subjects in a single hospital, but the combination of *Hospital ID* and *Patient ID* may be necessary if patient identification numbers are not unique across hospitals\. |
| `repeated_ui_measures` | *array* | The fields specified here are used to identify repeated observations\. For example, a single variable *Week* might identify the 10 weeks of observations in a medical study, or *Month* and *Day* might be used together to identify daily observations over the course of a year\. |
| `spatial_field` | *array* | The variables in this list specify the coordinates of the repeated observations when one of the spatial covariance types is selected for the repeated covariance type\. |
<!-- </table "summary="glmmnode properties" id="glmmnodeslots__table_l2j_scj_cdb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
337CC5401082DFD6C8C79D49CD97F7BC197C7303 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/glmmnuggetnodeslots.html?context=cdpaas&locale=en | applyglmmnode properties | applyglmmnode properties
You can use GLMM modeling nodes to generate a GLMM model nugget. The scripting name of this model nugget is applyglmmnode. For more information on scripting the modeling node itself, see [glmmnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/glmmnodeslots.html#glmmnodeslots).
applyglmmnode properties
Table 1. applyglmmnode properties
applyglmmnode Properties Values Property description
confidence onProbability <br>onIncrease Basis for computing scoring confidence value: highest predicted probability, or difference between highest and second highest predicted probabilities.
score_category_probabilities flag If set to True, produces the predicted probabilities for categorical targets. A field is created for each category. Default is False.
max_categories integer Maximum number of categories for which to predict probabilities. Used only if score_category_probabilities is True.
score_propensity flag If set to True, produces raw propensity scores (likelihood of "True" outcome) for models with flag targets. If partitions are in effect, also produces adjusted propensity scores based on the testing partition. Default is False.
enable_sql_generation false <br>true <br>native Used to set SQL generation options during flow execution. The options are to push back to the database, or to score within SPSS Modeler.
| # applyglmmnode properties #
You can use GLMM modeling nodes to generate a GLMM model nugget\. The scripting name of this model nugget is *applyglmmnode*\. For more information on scripting the modeling node itself, see [glmmnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/glmmnodeslots.html#glmmnodeslots)\.
<!-- <table "summary="applyglmmnode properties" id="glmmnuggetnodeslots__table_ywy_scj_cdb" class="defaultstyle" "> -->
applyglmmnode properties
Table 1\. applyglmmnode properties
| `applyglmmnode` Properties | Values | Property description |
| ------------------------------ | --------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `confidence`                   | `onProbability` <br>`onIncrease` | Basis for computing scoring confidence value: highest predicted probability, or difference between highest and second highest predicted probabilities\. |
| `score_category_probabilities` | *flag* | If set to `True`, produces the predicted probabilities for categorical targets\. A field is created for each category\. Default is `False`\. |
| `max_categories` | *integer* | Maximum number of categories for which to predict probabilities\. Used only if `score_category_probabilities` is `True`\. |
| `score_propensity` | *flag* | If set to `True`, produces raw propensity scores (likelihood of "True" outcome) for models with flag targets\. If partitions are in effect, also produces adjusted propensity scores based on the testing partition\. Default is `False`\. |
| `enable_sql_generation`        | `false` <br>`true` <br>`native` | Used to set SQL generation options during flow execution\. The options are to push back to the database, or to score within SPSS Modeler\. |
<!-- </table "summary="applyglmmnode properties" id="glmmnuggetnodeslots__table_ywy_scj_cdb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
D1C3F3DB7837F7C5803F52829A542F6BA8B4837D | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/gmmnodeslots.html?context=cdpaas&locale=en | gmm properties | gmm properties
A Gaussian Mixture© model is a probabilistic model that assumes all the data points are generated from a mixture of a finite number of Gaussian distributions with unknown parameters. One can think of mixture models as generalizing k-means clustering to incorporate information about the covariance structure of the data as well as the centers of the latent Gaussians. The Gaussian Mixture node in SPSS Modeler exposes the core features and commonly used parameters of the Gaussian Mixture library. The node is implemented in Python.
gmm properties
Table 1. gmm properties
gmm properties Data type Property description
custom_fields boolean This option tells the node to use field information specified here instead of that given in any upstream Type node(s). After selecting this option, specify the following fields as required.
inputs field List of the field names for input.
target field One field name for target.
fast_build boolean Utilize multiple CPU cores to improve model building.
use_partition boolean Set to True or False to specify whether to use partitioned data. Default is False.
covariance_type string Specify Full, Tied, Diag, or Spherical to set the covariance type.
number_component integer Specify an integer for the number of mixture components. Minimum value is 1. Default value is 2.
component_lable boolean Specify True to set the cluster label to a string or False to set the cluster label to a number. Default is False.
label_prefix string If using a string cluster label, you can specify a prefix.
enable_random_seed boolean Specify True if you want to use a random seed. Default is False.
random_seed integer If using a random seed, specify an integer to be used for generating random samples.
tol Double Specify the convergence threshold. Default is 0.0001.
max_iter integer Specify the maximum number of iterations to perform. Default is 100.
init_params string Set the initialization parameter to use. Options are Kmeans or Random.
warm_start boolean Specify True to use the solution of the last fitting as the initialization for the next call of fit. Default is False.
| # gmm properties #
A Gaussian Mixture© model is a probabilistic model that assumes all the data points are generated from a mixture of a finite number of Gaussian distributions with unknown parameters\. One can think of mixture models as generalizing k\-means clustering to incorporate information about the covariance structure of the data as well as the centers of the latent Gaussians\. The Gaussian Mixture node in SPSS Modeler exposes the core features and commonly used parameters of the Gaussian Mixture library\. The node is implemented in Python\.
<!-- <table "summary="gmm properties" id="gmmnodeslots__table_r3k_5bw_tz" class="defaultstyle" "> -->
gmm properties
Table 1\. gmm properties
| `gmm` properties | Data type | Property description |
| -------------------- | --------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `custom_fields` | *boolean* | This option tells the node to use field information specified here instead of that given in any upstream Type node(s)\. After selecting this option, specify the following fields as required\. |
| `inputs` | *field* | List of the field names for input\. |
| `target` | *field* | One field name for target\. |
| `fast_build` | *boolean* | Utilize multiple CPU cores to improve model building\. |
| `use_partition` | *boolean* | Set to `True` or `False` to specify whether to use partitioned data\. Default is `False`\. |
| `covariance_type` | *string* | Specify `Full`, `Tied`, `Diag`, or `Spherical` to set the covariance type\. |
| `number_component` | *integer* | Specify an integer for the number of mixture components\. Minimum value is `1`\. Default value is `2`\. |
| `component_lable` | *boolean* | Specify `True` to set the cluster label to a string or `False` to set the cluster label to a number\. Default is `False`\. |
| `label_prefix` | *string* | If using a string cluster label, you can specify a prefix\. |
| `enable_random_seed` | *boolean* | Specify `True` if you want to use a random seed\. Default is `False`\. |
| `random_seed` | *integer* | If using a random seed, specify an integer to be used for generating random samples\. |
| `tol`                | *Double*  | Specify the convergence threshold\. Default is `0.0001`\. |
| `max_iter` | *integer* | Specify the maximum number of iterations to perform\. Default is `100`\. |
| `init_params` | *string* | Set the initialization parameter to use\. Options are `Kmeans` or `Random`\. |
| `warm_start` | *boolean* | Specify `True` to use the solution of the last fitting as the initialization for the next call of fit\. Default is `False`\. |
<!-- </table "summary="gmm properties" id="gmmnodeslots__table_r3k_5bw_tz" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
F2D3C76D5EABBBF72A0314F29374527C8339591A | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/gmmnuggetnodeslots.html?context=cdpaas&locale=en | applygmm properties | applygmm properties
You can use the Gaussian Mixture node to generate a Gaussian Mixture model nugget. The scripting name of this model nugget is applygmm. For more information on scripting the modeling node itself, see [gmm properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/gmmnodeslots.html#gmmnodeslots).
applygmm properties
Table 1. applygmm properties
applygmm properties Data type Property description
centers
item_count
total
dimension
components
partition
| # applygmm properties #
You can use the Gaussian Mixture node to generate a Gaussian Mixture model nugget\. The scripting name of this model nugget is *applygmm*\. For more information on scripting the modeling node itself, see [gmm properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/gmmnodeslots.html#gmmnodeslots)\.
<!-- <table "summary="applygmm properties" id="gmmnuggetnodeslots__table_r3k_5bw_tz" class="defaultstyle" "> -->
applygmm properties
Table 1\. applygmm properties
| `applygmm` properties | Data type | Property description |
| --------------------- | --------- | -------------------- |
| `centers` | | |
| `item_count` | | |
| `total` | | |
| `dimension` | | |
| `components` | | |
| `partition` | | |
<!-- </table "summary="applygmm properties" id="gmmnuggetnodeslots__table_r3k_5bw_tz" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
1F781DA5779DAFEFBB53038F71A18BBE2649117B | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/gsarnodeslots.html?context=cdpaas&locale=en | associationrulesnode properties | associationrulesnode properties
The Association Rules node is similar to the Apriori Node. However, unlike Apriori, the Association Rules node can process list data. In addition, the Association Rules node can be used with SPSS Analytic Server to process big data and take advantage of faster parallel processing.
associationrulesnode properties
Table 1. associationrulesnode properties
associationrulesnode properties Data type Property description
predictions field Fields in this list can only appear as a predictor of a rule
conditions [field1...fieldN] Fields in this list can only appear as a condition of a rule
max_rule_conditions integer The maximum number of conditions that can be included in a single rule. Minimum 1, maximum 9.
max_rule_predictions integer The maximum number of predictions that can be included in a single rule. Minimum 1, maximum 5.
max_num_rules integer The maximum number of rules that can be considered as part of rule building. Minimum 1, maximum 10,000.
rule_criterion_top_n Confidence <br>Rulesupport <br>Lift <br>Conditionsupport <br>Deployability The rule criterion that determines the value by which the top "N" rules in the model are chosen.
true_flags Boolean Setting as Y determines that only the true values for flag fields are considered during rule building.
rule_criterion Boolean Setting as Y determines that the rule criterion values are used for excluding rules during model building.
min_confidence number 0.1 to 100 - the percentage value for the minimum required confidence level for a rule produced by the model. If the model produces a rule with a confidence level less than the value specified here the rule is discarded.
min_rule_support number 0.1 to 100 - the percentage value for the minimum required rule support for a rule produced by the model. If the model produces a rule with a rule support level less than the specified value the rule is discarded.
min_condition_support number 0.1 to 100 - the percentage value for the minimum required condition support for a rule produced by the model. If the model produces a rule with a condition support level less than the specified value the rule is discarded.
min_lift integer 1 to 10 - represents the minimum required lift for a rule produced by the model. If the model produces a rule with a lift level less than the specified value the rule is discarded.
exclude_rules Boolean Used to select a list of related fields from which you do not want the model to create rules. Example: set :gsarsnode.exclude_rules = [[field1, field2, field3], [field4, field5]] - where each bracketed list of fields is a row in the table.
num_bins integer Set the number of automatic bins that continuous fields are binned to. Minimum 2, maximum 10.
max_list_length integer Applies to any list fields for which the maximum length is not known. Elements in the list up until the number specified here are included in the model build; any further elements are discarded. Minimum 1, maximum 100.
output_confidence Boolean
output_rule_support Boolean
output_lift Boolean
output_condition_support Boolean
output_deployability Boolean
rules_to_display upto <br>all The maximum number of rules to display in the output tables.
display_upto integer If upto is set in rules_to_display, set the number of rules to display in the output tables. Minimum 1.
field_transformations Boolean
records_summary Boolean
rule_statistics Boolean
most_frequent_values Boolean
most_frequent_fields Boolean
word_cloud Boolean
word_cloud_sort Confidence <br>Rulesupport <br>Lift <br>Conditionsupport <br>Deployability
word_cloud_display integer Minimum 1, maximum 20
max_predictions integer The maximum number of rules that can be applied to each input to the score.
criterion Confidence <br>Rulesupport <br>Lift <br>Conditionsupport <br>Deployability Select the measure used to determine the strength of rules.
allow_repeats Boolean Determine whether rules with the same prediction are included in the score.
check_input NoPredictions <br>Predictions <br>NoCheck
| # associationrulesnode properties #
The Association Rules node is similar to the Apriori Node\. However, unlike Apriori, the Association Rules node can process list data\. In addition, the Association Rules node can be used with SPSS Analytic Server to process big data and take advantage of faster parallel processing\.
<!-- <table "summary="associationrulesnode properties" class="defaultstyle" "> -->
associationrulesnode properties
Table 1\. associationrulesnode properties
| `associationrulesnode` properties | Data type | Property description |
| --------------------------------- | ---------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `predictions` | *field* | Fields in this list can only appear as a predictor of a rule |
| `conditions` | *\[field1\.\.\.fieldN\]* | Fields in this list can only appear as a condition of a rule |
| `max_rule_conditions` | *integer* | The maximum number of conditions that can be included in a single rule\. Minimum 1, maximum 9\. |
| `max_rule_predictions` | *integer* | The maximum number of predictions that can be included in a single rule\. Minimum 1, maximum 5\. |
| `max_num_rules` | *integer* | The maximum number of rules that can be considered as part of rule building\. Minimum 1, maximum 10,000\. |
| `rule_criterion_top_n`            | `Confidence` <br>`Rulesupport` <br>`Lift` <br>`Conditionsupport` <br>`Deployability` | The rule criterion that determines the value by which the top "N" rules in the model are chosen\. |
| `true_flags` | *Boolean* | Setting as *Y* determines that only the true values for flag fields are considered during rule building\. |
| `rule_criterion` | *Boolean* | Setting as *Y* determines that the rule criterion values are used for excluding rules during model building\. |
| `min_confidence` | *number* | 0\.1 to 100 \- the percentage value for the minimum required confidence level for a rule produced by the model\. If the model produces a rule with a confidence level less than the value specified here the rule is discarded\. |
| `min_rule_support` | *number* | 0\.1 to 100 \- the percentage value for the minimum required rule support for a rule produced by the model\. If the model produces a rule with a rule support level less than the specified value the rule is discarded\. |
| `min_condition_support` | *number* | 0\.1 to 100 \- the percentage value for the minimum required condition support for a rule produced by the model\. If the model produces a rule with a condition support level less than the specified value the rule is discarded\. |
| `min_lift` | *integer* | 1 to 10 \- represents the minimum required lift for a rule produced by the model\. If the model produces a rule with a lift level less than the specified value the rule is discarded\. |
| `exclude_rules`                   | *Boolean* | Used to select a list of related fields from which you do not want the model to create rules\. Example: `set :gsarsnode.exclude_rules = [[field1, field2, field3], [field4, field5]]` \- where each bracketed list of fields is a row in the table\. |
| `num_bins` | *integer* | Set the number of automatic bins that continuous fields are binned to\. Minimum 2, maximum 10\. |
| `max_list_length` | *integer* | Applies to any list fields for which the maximum length is not known\. Elements in the list up until the number specified here are included in the model build; any further elements are discarded\. Minimum 1, maximum 100\. |
| `output_confidence` | *Boolean* | |
| `output_rule_support` | *Boolean* | |
| `output_lift` | *Boolean* | |
| `output_condition_support` | *Boolean* | |
| `output_deployability` | *Boolean* | |
| `rules_to_display`                | `upto` <br>`all` | The maximum number of rules to display in the output tables\. |
| `display_upto` | *integer* | If `upto` is set in `rules_to_display`, set the number of rules to display in the output tables\. Minimum 1\. |
| `field_transformations` | *Boolean* | |
| `records_summary` | *Boolean* | |
| `rule_statistics` | *Boolean* | |
| `most_frequent_values` | *Boolean* | |
| `most_frequent_fields` | *Boolean* | |
| `word_cloud` | *Boolean* | |
| `word_cloud_sort`                 | `Confidence` <br>`Rulesupport` <br>`Lift` <br>`Conditionsupport` <br>`Deployability` | |
| `word_cloud_display` | *integer* | Minimum 1, maximum 20 |
| `max_predictions` | *integer* | The maximum number of rules that can be applied to each input to the score\. |
| `criterion`                       | `Confidence` <br>`Rulesupport` <br>`Lift` <br>`Conditionsupport` <br>`Deployability` | Select the measure used to determine the strength of rules\. |
| `allow_repeats` | *Boolean* | Determine whether rules with the same prediction are included in the score\. |
| `check_input`                     | `NoPredictions` <br>`Predictions` <br>`NoCheck` | |
<!-- </table "summary="associationrulesnode properties" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
9DA9A2809D484A6CAA70A66A3548CF4A537950FC | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/gsarnuggetnodeslots.html?context=cdpaas&locale=en | applyassociationrulesnode properties | applyassociationrulesnode properties
You can use the Association Rules modeling node to generate an association rules model nugget. The scripting name of this model nugget is applyassociationrulesnode. For more information on scripting the modeling node itself, see [associationrulesnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/gsarnodeslots.html#gsarnodeslots).
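Example
A minimal sketch that locates an existing association rules model nugget in the current flow and adjusts its scoring properties (assumptions: a nugget is already present in the flow, and the "applyassociationrules" type string follows the usual pattern of dropping the "node" suffix):
node = modeler.script.stream().findByType("applyassociationrules", None)
node.setPropertyValue("max_predictions", 3)
node.setPropertyValue("criterion", "Confidence")
node.setPropertyValue("allow_repeats", False)
node.setPropertyValue("check_input", "NoPredictions")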
applyassociationrulesnode properties
Table 1. applyassociationrulesnode properties
applyassociationrulesnode properties Data type Property description
max_predictions integer The maximum number of rules that can be applied to each input to the score.
criterion ConfidenceRulesupportLiftConditionsupportDeployability Select the measure used to determine the strength of rules.
allow_repeats Boolean Determine whether rules with the same prediction are included in the score.
check_input NoPredictionsPredictionsNoCheck
| # applyassociationrulesnode properties #
You can use the Association Rules modeling node to generate an association rules model nugget\. The scripting name of this model nugget is *applyassociationrulesnode*\. For more information on scripting the modeling node itself, see [associationrulesnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/gsarnodeslots.html#gsarnodeslots)\.
<!-- <table "summary="applyassociationrulesnode properties" class="defaultstyle" "> -->
applyassociationrulesnode properties
Table 1\. applyassociationrulesnode properties
| `applyassociationrulesnode` properties | Data type | Property description |
| -------------------------------------- | ---------------------------------------------------------------- | ---------------------------------------------------------------------------- |
| `max_predictions` | *integer* | The maximum number of rules that can be applied to each input to the score\. |
| `criterion` | `Confidence``Rulesupport``Lift``Conditionsupport``Deployability` | Select the measure used to determine the strength of rules\. |
| `allow_repeats` | *Boolean* | Determine whether rules with the same prediction are included in the score\. |
| `check_input` | `NoPredictions``Predictions``NoCheck` | |
<!-- </table "summary="applyassociationrulesnode properties" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
E01184BCBA866D676B5A236D6638E78D3F55C794 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/hdbscannodeslots.html?context=cdpaas&locale=en | hdbscannode properties | hdbscannode properties
Hierarchical Density-Based Spatial Clustering (HDBSCAN)© uses unsupervised learning to find clusters, or dense regions, of a data set. The HDBSCAN node in SPSS Modeler exposes the core features and commonly used parameters of the HDBSCAN library. The node is implemented in Python, and you can use it to cluster your dataset into distinct groups when you don't know what those groups are at first.
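Example
A minimal sketch that creates and configures an HDBSCAN node (assumptions: the "hdbscan" type string passed to create() and the input field names are illustrative, not confirmed by this reference):
node = modeler.script.stream().create("hdbscan", "My node")
node.setPropertyValue("inputs", ["Age", "Income"])
node.setPropertyValue("min_cluster_size", 10)
node.setPropertyValue("metric", "euclidean")
node.setPropertyValue("useStringLabel", True)
node.setPropertyValue("stringLabelPrefix", "cluster")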
hdbscannode properties
Table 1. hdbscannode properties
hdbscannode properties Data type Property description
custom_fields boolean This option tells the node to use field information specified here instead of that given in any upstream Type node(s). After selecting this option, specify the following fields as required.
inputs field Input fields for clustering.
useHPO boolean Specify true or false to enable or disable Hyper-Parameter Optimization (HPO) based on Rbfopt, which automatically discovers the optimal combination of parameters so that the model will achieve the expected or lesser error rate on the samples. Default is false.
min_cluster_size integer The minimum size of clusters. Specify an integer. Default is 5.
min_samples integer The number of samples in a neighborhood for a point to be considered a core point. Specify an integer. If set to 0, the min_cluster_size is used. Default is 0.
algorithm string Specify which algorithm to use: best, generic, prims_kdtree, prims_balltree, boruvka_kdtree, or boruvka_balltree. Default is best.
metric string Specify which metric to use when calculating distance between instances in a feature array: euclidean, cityblock, L1, L2, manhattan, braycurtis, canberra, chebyshev, correlation, minkowski, or sqeuclidean. Default is euclidean.
useStringLabel boolean Specify true to use a string cluster label, or false to use a number cluster label. Default is false.
stringLabelPrefix string If the useStringLabel parameter is set to true, specify a value for the string label prefix. Default prefix is cluster.
approx_min_span_tree boolean Specify true to accept an approximate minimum spanning tree, or false if you are willing to sacrifice speed for correctness. Default is true.
cluster_selection_method string Specify the method to use for selecting clusters from the condensed tree: eom or leaf. Default is eom (Excess of Mass algorithm).
allow_single_cluster boolean Specify true if you want to allow single cluster results. Default is false.
p_value double Specify the p value to use if you're using minkowski for the metric. Default is 1.5.
leaf_size integer If using a space tree algorithm (boruvka_kdtree, or boruvka_balltree), specify the number of points in a leaf node of the tree. Default is 40.
outputValidity boolean Specify true or false to control whether the Validity Index chart is included in the model output.
outputCondensed boolean Specify true or false to control whether the Condensed Tree chart is included in the model output.
outputSingleLinkage boolean Specify true or false to control whether the Single Linkage Tree chart is included in the model output.
outputMinSpan boolean Specify true or false to control whether the Min Span Tree chart is included in the model output.
is_split
| # hdbscannode properties #
Hierarchical Density\-Based Spatial Clustering (HDBSCAN)© uses unsupervised learning to find clusters, or dense regions, of a data set\. The HDBSCAN node in SPSS Modeler exposes the core features and commonly used parameters of the HDBSCAN library\. The node is implemented in Python, and you can use it to cluster your dataset into distinct groups when you don't know what those groups are at first\.
<!-- <table "summary="hdbscannode properties" id="hdbscannodeslots__table_r3k_5bw_tz" class="defaultstyle" "> -->
hdbscannode properties
Table 1\. hdbscannode properties
| `hdbscannode` properties | Data type | Property description |
| -------------------------- | --------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `custom_fields` | *boolean* | This option tells the node to use field information specified here instead of that given in any upstream Type node(s)\. After selecting this option, specify the following fields as required\. |
| `inputs` | *field* | Input fields for clustering\. |
| `useHPO` | *boolean* | Specify `true` or `false` to enable or disable Hyper\-Parameter Optimization (HPO) based on Rbfopt, which automatically discovers the optimal combination of parameters so that the model will achieve the expected or lesser error rate on the samples\. Default is `false`\. |
| `min_cluster_size` | *integer* | The minimum size of clusters\. Specify an integer\. Default is `5`\. |
| `min_samples` | *integer* | The number of samples in a neighborhood for a point to be considered a core point\. Specify an integer\. If set to `0`, the `min_cluster_size` is used\. Default is `0`\. |
| `algorithm` | *string* | Specify which algorithm to use: `best`, `generic`, `prims_kdtree`, `prims_balltree`, `boruvka_kdtree`, or `boruvka_balltree`\. Default is `best`\. |
| `metric` | *string* | Specify which metric to use when calculating distance between instances in a feature array: `euclidean`, `cityblock`, `L1`, `L2`, `manhattan`, `braycurtis`, `canberra`, `chebyshev`, `correlation`, `minkowski`, or `sqeuclidean`\. Default is `euclidean`\. |
| `useStringLabel` | *boolean* | Specify `true` to use a string cluster label, or `false` to use a number cluster label\. Default is `false`\. |
| `stringLabelPrefix` | *string* | If the `useStringLabel` parameter is set to `true`, specify a value for the string label prefix\. Default prefix is `cluster`\. |
| `approx_min_span_tree` | *boolean* | Specify `true` to accept an approximate minimum spanning tree, or `false` if you are willing to sacrifice speed for correctness\. Default is `true`\. |
| `cluster_selection_method` | *string* | Specify the method to use for selecting clusters from the condensed tree: `eom` or `leaf`\. Default is `eom` (Excess of Mass algorithm)\. |
| `allow_single_cluster` | *boolean* | Specify `true` if you want to allow single cluster results\. Default is `false`\. |
| `p_value` | *double* | Specify the `p value` to use if you're using `minkowski` for the metric\. Default is `1.5`\. |
| `leaf_size` | *integer* | If using a space tree algorithm (`boruvka_kdtree`, or `boruvka_balltree`), specify the number of points in a leaf node of the tree\. Default is `40`\. |
| `outputValidity` | *boolean* | Specify `true` or `false` to control whether the Validity Index chart is included in the model output\. |
| `outputCondensed` | *boolean* | Specify `true` or `false` to control whether the Condensed Tree chart is included in the model output\. |
| `outputSingleLinkage` | *boolean* | Specify `true` or `false` to control whether the Single Linkage Tree chart is included in the model output\. |
| `outputMinSpan` | *boolean* | Specify `true` or `false` to control whether the Min Span Tree chart is included in the model output\. |
| `is_split` | | |
<!-- </table "summary="hdbscannode properties" id="hdbscannodeslots__table_r3k_5bw_tz" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
4F0098CE544BA8AC594F98AF8DF26B7911399750 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/hdbscannuggetnodeslots.html?context=cdpaas&locale=en | hdbscannugget properties | hdbscannugget properties
You can use the HDBSCAN node to generate an HDBSCAN model nugget. The scripting name of this model nugget is hdbscannugget. No other properties exist for this model nugget. For more information on scripting the modeling node itself, see [hdbscannode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/hdbscannodeslots.html#hdbscannodeslots).
| # hdbscannugget properties #
You can use the HDBSCAN node to generate an HDBSCAN model nugget\. The scripting name of this model nugget is `hdbscannugget`\. No other properties exist for this model nugget\. For more information on scripting the modeling node itself, see [hdbscannode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/hdbscannodeslots.html#hdbscannodeslots)\.
<!-- </article "role="article" "> -->
|
8DAA2C34D27A7E09C0AB837C191E87F320790F75 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/histogramnodeslots.html?context=cdpaas&locale=en | histogramnode properties | histogramnode properties
The Histogram node shows the occurrence of values for numeric fields. It's often used to explore the data before manipulations and model building. Similar to the Distribution node, the Histogram node frequently reveals imbalances in the data.
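Example
A minimal sketch that creates a Histogram node and bins a numeric field (assumptions: the "histogram" type string and the field name are illustrative):
node = modeler.script.stream().create("histogram", "My node")
node.setPropertyValue("field", "income")
node.setPropertyValue("bins", "ByNumber")
node.setPropertyValue("num_bins", 20)
node.setPropertyValue("normal_curve", True)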
histogramnode properties
Table 1. histogramnode properties
histogramnode properties Data type Property description
field field
color_field field
panel_field field
animation_field field
range_mode AutomaticUserDefined
range_min number
range_max number
bins ByNumberByWidth
num_bins number
bin_width number
normalize flag
separate_bands flag
x_label_auto flag
x_label string
y_label_auto flag
y_label string
use_grid flag
graph_background color Standard graph colors are described at the beginning of this section.
page_background color Standard graph colors are described at the beginning of this section.
normal_curve flag Indicates whether the normal distribution curve should be shown on the output.
| # histogramnode properties #
The Histogram node shows the occurrence of values for numeric fields\. It's often used to explore the data before manipulations and model building\. Similar to the Distribution node, the Histogram node frequently reveals imbalances in the data\.
<!-- <table "summary="histogramnode properties" id="histogramnodeslots__table_ytp_5cj_cdb" class="defaultstyle" "> -->
histogramnode properties
Table 1\. histogramnode properties
| `histogramnode` properties | Data type | Property description |
| -------------------------- | ------------------------ | ------------------------------------------------------------------------------- |
| `field` | *field* | |
| `color_field` | *field* | |
| `panel_field` | *field* | |
| `animation_field` | *field* | |
| `range_mode` | `Automatic``UserDefined` | |
| `range_min` | *number* | |
| `range_max` | *number* | |
| `bins` | `ByNumber``ByWidth` | |
| `num_bins` | *number* | |
| `bin_width` | *number* | |
| `normalize` | *flag* | |
| `separate_bands` | *flag* | |
| `x_label_auto` | *flag* | |
| `x_label` | *string* | |
| `y_label_auto` | *flag* | |
| `y_label` | *string* | |
| `use_grid` | *flag* | |
| `graph_background` | *color* | Standard graph colors are described at the beginning of this section\. |
| `page_background` | *color* | Standard graph colors are described at the beginning of this section\. |
| `normal_curve` | *flag* | Indicates whether the normal distribution curve should be shown on the output\. |
<!-- </table "summary="histogramnode properties" id="histogramnodeslots__table_ytp_5cj_cdb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
BDC1B4283563848E2C775804FC0857DBDE8843AF | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/historynodeslots.html?context=cdpaas&locale=en | historynode properties | historynode properties
The History node creates new fields containing data from fields in previous records. History nodes are most often used for sequential data, such as time series data. Before using a History node, you may want to sort the data using a Sort node.
Example
node = stream.create("history", "My node")
node.setPropertyValue("fields", ["Drug"])
node.setPropertyValue("offset", 1)
node.setPropertyValue("span", 3)
node.setPropertyValue("unavailable", "Discard")
node.setPropertyValue("fill_with", "undef")
historynode properties
Table 1. historynode properties
historynode properties Data type Property description
fields list Fields for which you want a history.
offset number Specifies the latest record (prior to the current record) from which you want to extract historical field values.
span number Specifies the number of prior records from which you want to extract values.
unavailable DiscardLeaveFill For handling records that have no history values, usually referring to the first several records (at the beginning of the dataset) for which there are no previous records to use as a history.
fill_with StringNumber Specifies a value or string to be used for records where no history value is available.
| # historynode properties #
The History node creates new fields containing data from fields in previous records\. History nodes are most often used for sequential data, such as time series data\. Before using a History node, you may want to sort the data using a Sort node\.
Example
node = stream.create("history", "My node")
node.setPropertyValue("fields", ["Drug"])
node.setPropertyValue("offset", 1)
node.setPropertyValue("span", 3)
node.setPropertyValue("unavailable", "Discard")
node.setPropertyValue("fill_with", "undef")
<!-- <table "summary="historynode properties" id="historynodeslots__table_ypd_vcj_cdb" class="defaultstyle" "> -->
historynode properties
Table 1\. historynode properties
| `historynode` properties | Data type | Property description |
| ------------------------ | ---------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `fields` | *list* | Fields for which you want a history\. |
| `offset` | *number* | Specifies the latest record (prior to the current record) from which you want to extract historical field values\. |
| `span` | *number* | Specifies the number of prior records from which you want to extract values\. |
| `unavailable` | `Discard``Leave``Fill` | For handling records that have no history values, usually referring to the first several records (at the beginning of the dataset) for which there are no previous records to use as a history\. |
| `fill_with` | `String``Number` | Specifies a value or string to be used for records where no history value is available\. |
<!-- </table "summary="historynode properties" id="historynodeslots__table_ypd_vcj_cdb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
2756DEAD36AC092838F80ACFFE6ECEE13A22A376 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/isotonicasnodeslots.html?context=cdpaas&locale=en | isotonicasnode properties | isotonicasnode properties
Isotonic Regression belongs to the family of regression algorithms. The Isotonic-AS node in SPSS Modeler is implemented in Spark. For details about Isotonic Regression algorithms, see [https://spark.apache.org/docs/2.2.0/mllib-isotonic-regression.html](https://spark.apache.org/docs/2.2.0/mllib-isotonic-regression.html).
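Example
A minimal sketch that configures an Isotonic-AS node (assumptions: the "isotonicas" type string and the field names are illustrative):
node = modeler.script.stream().create("isotonicas", "My node")
node.setPropertyValue("label", "response")
node.setPropertyValue("features", "predictor")
node.setPropertyValue("isotonic", True)
node.setPropertyValue("featureIndex", 0)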
isotonicasnode properties
Table 1. isotonicasnode properties
isotonicasnode properties Data type Property description
label string This property is a dependent variable for which isotonic regression is calculated.
features string This property is an independent variable.
weightCol string The weight represents a number of measures. Default is 1.
isotonic Boolean This property indicates whether the type is isotonic or antitonic.
featureIndex integer This property is for the index of the feature if featuresCol is a vector column. Default is 0.
| # isotonicasnode properties #
Isotonic Regression belongs to the family of regression algorithms\. The Isotonic\-AS node in SPSS Modeler is implemented in Spark\. For details about Isotonic Regression algorithms, see [https://spark\.apache\.org/docs/2\.2\.0/mllib\-isotonic\-regression\.html](https://spark.apache.org/docs/2.2.0/mllib-isotonic-regression.html)\.
<!-- <table "summary="isotonicasnode properties" id="isotonicasnodeslots__table_r3k_5bw_tz" class="defaultstyle" "> -->
isotonicasnode properties
Table 1\. isotonicasnode properties
| `isotonicasnode` properties | Data type | Property description |
| --------------------------- | --------- | ---------------------------------------------------------------------------------------------------- |
| `label` | *string* | This property is a dependent variable for which isotonic regression is calculated\. |
| `features` | *string* | This property is an independent variable\. |
| `weightCol` | *string* | The weight represents a number of measures\. Default is `1`\. |
| `isotonic` | *Boolean* | This property indicates whether the type is `isotonic` or `antitonic`\. |
| `featureIndex` | *integer* | This property is for the index of the feature if `featuresCol` is a vector column\. Default is `0`\. |
<!-- </table "summary="isotonicasnode properties" id="isotonicasnodeslots__table_r3k_5bw_tz" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
F67E458A29CF154C33221A8889789241725FE5C7 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/jython_basics.html?context=cdpaas&locale=en | Python and Jython | Python and Jython
Jython is an implementation of the Python scripting language, which is written in the Java language and integrated with the Java platform. Python is a powerful object-oriented scripting language.
Jython is useful because it provides the productivity features of a mature scripting language and, unlike Python, runs in any environment that supports a Java virtual machine (JVM). This means that the Java libraries on the JVM are available to use when you're writing programs. With Jython, you can take advantage of this difference, and use the syntax and most of the features of the Python language.
As a scripting language, Python (and its Jython implementation) is easy to learn and efficient to code, and has minimal required structure to create a running program. Code can be entered interactively, that is, one line at a time. Python is an interpreted scripting language; there is no precompile step, as there is in Java. Python programs are simply text files that are interpreted as they're input (after parsing for syntax errors). Simple expressions, like defined values, as well as more complex actions, such as function definitions, are immediately executed and available for use. Any changes that are made to the code can be tested quickly. Script interpretation does, however, have some disadvantages. For example, use of an undefined variable is not a compiler error, so it's detected only if (and when) the statement in which the variable is used is executed. In this case, you can edit and run the program to debug the error.
Python sees everything, including all data and code, as an object. You can, therefore, manipulate these objects with lines of code. Some select types, such as numbers and strings, are more conveniently considered as values, not objects; this is supported by Python. There is one null value that's supported. This null value has the reserved name None.
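For example, the following snippet tests for the null value (a minimal sketch):
x = None
if x is None:
    print "x has no value"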
For a more in-depth introduction to Python and Jython scripting, and for some example scripts, see [http://www.ibm.com/developerworks/java/tutorials/j-jython1/j-jython1.html](http://www.ibm.com/developerworks/java/tutorials/j-jython1/j-jython1.html) and [http://www.ibm.com/developerworks/java/tutorials/j-jython2/j-jython2.html](http://www.ibm.com/developerworks/java/tutorials/j-jython2/j-jython2.html).
| # Python and Jython #
Jython is an implementation of the Python scripting language, which is written in the Java language and integrated with the Java platform\. Python is a powerful object\-oriented scripting language\.
Jython is useful because it provides the productivity features of a mature scripting language and, unlike Python, runs in any environment that supports a Java virtual machine (JVM)\. This means that the Java libraries on the JVM are available to use when you're writing programs\. With Jython, you can take advantage of this difference, and use the syntax and most of the features of the Python language\.
As a scripting language, Python (and its Jython implementation) is easy to learn and efficient to code, and has minimal required structure to create a running program\. Code can be entered interactively, that is, one line at a time\. Python is an interpreted scripting language; there is no precompile step, as there is in Java\. Python programs are simply text files that are interpreted as they're input (after parsing for syntax errors)\. Simple expressions, like defined values, as well as more complex actions, such as function definitions, are immediately executed and available for use\. Any changes that are made to the code can be tested quickly\. Script interpretation does, however, have some disadvantages\. For example, use of an undefined variable is not a compiler error, so it's detected only if (and when) the statement in which the variable is used is executed\. In this case, you can edit and run the program to debug the error\.
Python sees everything, including all data and code, as an object\. You can, therefore, manipulate these objects with lines of code\. Some select types, such as numbers and strings, are more conveniently considered as values, not objects; this is supported by Python\. There is one null value that's supported\. This null value has the reserved name `None`\.
For a more in\-depth introduction to Python and Jython scripting, and for some example scripts, see [http://www\.ibm\.com/developerworks/java/tutorials/j\-jython1/j\-jython1\.html](http://www.ibm.com/developerworks/java/tutorials/j-jython1/j-jython1.html) and [http://www\.ibm\.com/developerworks/java/tutorials/j\-jython2/j\-jython2\.html](http://www.ibm.com/developerworks/java/tutorials/j-jython2/j-jython2.html)\.
<!-- </article "role="article" "> -->
|
033F114BFF6D5479C2B4BE7C1542A4C778ABA53E | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_add_attributes.html?context=cdpaas&locale=en | Adding attributes to a class instance | Adding attributes to a class instance
Unlike in Java, in Python clients can add attributes to an instance of a class. Only the one instance is changed. For example, to add attributes to an instance x, set new values on that instance:
x.attr1 = 1
x.attr2 = 2
.
.
x.attrN = n
| # Adding attributes to a class instance #
Unlike in Java, in Python clients can add attributes to an instance of a class\. Only the one instance is changed\. For example, to add attributes to an instance `x`, set new values on that instance:
x.attr1 = 1
x.attr2 = 2
.
.
x.attrN = n
<!-- </article "role="article" "> -->
|
8BC347015FD7CE2AF13B17DE4D287471CB994F38 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_api_intro.html?context=cdpaas&locale=en | The scripting API | The scripting API
The Scripting API provides access to a wide range of SPSS Modeler functionality. All the methods described so far are part of the API and can be accessed implicitly within the script without further imports. However, if you want to reference the API classes, you must import the API explicitly with the following statement:
import modeler.api
This import statement is required by many of the scripting API examples.
| # The scripting API #
The Scripting API provides access to a wide range of SPSS Modeler functionality\. All the methods described so far are part of the API and can be accessed implicitly within the script without further imports\. However, if you want to reference the API classes, you must import the API explicitly with the following statement:
import modeler.api
This import statement is required by many of the scripting API examples\.
<!-- </article "role="article" "> -->
|
F290D0C61B4A664E303DE559BBC559015FD375F9 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_api_search.html?context=cdpaas&locale=en | Example: Searching for nodes using a custom filter | Example: Searching for nodes using a custom filter
The section [Finding nodes](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_node_find.html#python_node_find) includes an example of searching for a node in a flow using the type name of the node as the search criterion. In some situations, a more generic search is required and this can be accomplished by using the NodeFilter class and the flow findAll() method. This type of search involves the following two steps:
1. Creating a new class that extends NodeFilter and that implements a custom version of the accept() method.
2. Calling the flow findAll() method with an instance of this new class. This returns all nodes that meet the criteria defined in the accept() method.
The following example shows how to search for nodes in a flow that have the node cache enabled. The returned list of nodes can be used to either flush or disable the caches of these nodes.
import modeler.api
class CacheFilter(modeler.api.NodeFilter):
    """A node filter for nodes with caching enabled"""
    def accept(this, node):
        return node.isCacheEnabled()
cachingnodes = modeler.script.stream().findAll(CacheFilter(), False)
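For example, to disable caching on each of the nodes that the search returned (a minimal follow-up sketch; setCacheEnabled() is assumed to be the setter counterpart of the isCacheEnabled() method used in the filter):
for node in cachingnodes:
    node.setCacheEnabled(False)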
| # Example: Searching for nodes using a custom filter #
The section [Finding nodes](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_node_find.html#python_node_find) includes an example of searching for a node in a flow using the type name of the node as the search criterion\. In some situations, a more generic search is required and this can be accomplished by using the `NodeFilter` class and the flow `findAll()` method\. This type of search involves the following two steps:
<!-- <ol> -->
1. Creating a new class that extends `NodeFilter` and that implements a custom version of the `accept()` method\.
2. Calling the flow `findAll()` method with an instance of this new class\. This returns all nodes that meet the criteria defined in the `accept()` method\.
<!-- </ol> -->
The following example shows how to search for nodes in a flow that have the node cache enabled\. The returned list of nodes can be used to either flush or disable the caches of these nodes\.
import modeler.api
class CacheFilter(modeler.api.NodeFilter):
    """A node filter for nodes with caching enabled"""
    def accept(this, node):
        return node.isCacheEnabled()
cachingnodes = modeler.script.stream().findAll(CacheFilter(), False)
<!-- </article "role="article" "> -->
|
78488A77CB39BDD413DBB7682F1DBE2675B3E3A0 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_create_class.html?context=cdpaas&locale=en | Defining a class | Defining a class
Within a Python class, you can define both variables and methods. Unlike in Java, in Python you can define any number of public classes per source file (or module). Therefore, you can think of a module in Python as similar to a package in Java.
In Python, classes are defined using the class statement. The class statement has the following form:
class name (superclasses): statement
or
class name (superclasses):
    assignment
    .
    .
    function
    .
    .
When you define a class, you have the option to provide zero or more assignment statements. These create class attributes that are shared by all instances of the class. You can also provide zero or more function definitions. These function definitions create methods. The superclasses list is optional.
The class name should be unique in the same scope, that is within a module, function, or class. You can define multiple variables to reference the same class.
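For example, the following short sketch defines a class with one class attribute and one method, and a subclass that lists it as a superclass:
class Person:
    species = "human" #class attribute shared by all instances
    def describe(self):
        print Person.species

class Employee(Person): #Person is the single superclass
    pass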
| # Defining a class #
Within a Python class, you can define both variables and methods\. Unlike in Java, in Python you can define any number of public classes per source file (or module)\. Therefore, you can think of a module in Python as similar to a package in Java\.
In Python, classes are defined using the `class` statement\. The `class` statement has the following form:
class name (superclasses): statement
or
class name (superclasses):
    assignment
    .
    .
    function
    .
    .
When you define a class, you have the option to provide zero or more assignment statements\. These create class attributes that are shared by all instances of the class\. You can also provide zero or more function definitions\. These function definitions create methods\. The `superclasses` list is optional\.
The class name should be unique in the same scope, that is within a module, function, or class\. You can define multiple variables to reference the same class\.
<!-- </article "role="article" "> -->
|
E3EFB6106AE81DB5A8B3379C3EDCF86E31F95AB0 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_create_instance.html?context=cdpaas&locale=en | Creating a class instance | Creating a class instance
You can use classes to hold class (or shared) attributes or to create class instances. To create an instance of a class, you call the class as if it were a function. For example, consider the following class:
class MyClass:
    pass
Here, the pass statement is used because a statement is required to complete the class, but no action is required programmatically.
The following statement creates an instance of the class MyClass:
x = MyClass()
| # Creating a class instance #
You can use classes to hold class (or shared) attributes or to create class instances\. To create an instance of a class, you call the class as if it were a function\. For example, consider the following class:
class MyClass:
    pass
Here, the `pass` statement is used because a statement is required to complete the class, but no action is required programmatically\.
The following statement creates an instance of the class `MyClass`:
x = MyClass()
<!-- </article "role="article" "> -->
|
3491F666270894EE4BE071FD4A8551DF94CB9889 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_define_method.html?context=cdpaas&locale=en | Defining class attributes and methods | Defining class attributes and methods
Any variable that's bound in a class is a class attribute. Any function defined within a class is a method. Methods receive an instance of the class, conventionally called self, as the first argument. For example, to define some class attributes and methods, you might enter the following script:
class MyClass:
    attr1 = 10 #class attributes
    attr2 = "hello"
    def method1(self):
        print MyClass.attr1 #reference the class attribute
    def method2(self):
        print MyClass.attr2 #reference the class attribute
    def method3(self, text):
        self.text = text #instance attribute
        print text, self.text #print my argument and my attribute
    method4 = method3 #make an alias for method3
Inside a class, you should qualify all references to class attributes with the class name (for example, MyClass.attr1). All references to instance attributes should be qualified with the self variable (for example, self.text). Outside the class, you should qualify all references to class attributes with the class name (for example, MyClass.attr1) or with an instance of the class (for example, x.attr1, where x is an instance of the class). Outside the class, all references to instance variables should be qualified with an instance of the class (for example, x.text).
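For example, given the MyClass definition above, the following sketch shows qualified references from outside the class:
x = MyClass()
x.method3("hello") #prints the argument and the instance attribute
print MyClass.attr1 #class attribute qualified with the class name
print x.attr1 #class attribute qualified with an instance
print x.text #instance attribute qualified with an instance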
| # Defining class attributes and methods #
Any variable that's bound in a class is a class attribute\. Any function defined within a class is a method\. Methods receive an instance of the class, conventionally called `self`, as the first argument\. For example, to define some class attributes and methods, you might enter the following script:
class MyClass:
    attr1 = 10 #class attributes
    attr2 = "hello"
    def method1(self):
        print MyClass.attr1 #reference the class attribute
    def method2(self):
        print MyClass.attr2 #reference the class attribute
    def method3(self, text):
        self.text = text #instance attribute
        print text, self.text #print my argument and my attribute
    method4 = method3 #make an alias for method3
Inside a class, you should qualify all references to class attributes with the class name (for example, `MyClass.attr1`)\. All references to instance attributes should be qualified with the `self` variable (for example, `self.text`)\. Outside the class, you should qualify all references to class attributes with the class name (for example, `MyClass.attr1`) or with an instance of the class (for example, `x.attr1`, where `x` is an instance of the class)\. Outside the class, all references to instance variables should be qualified with an instance of the class (for example, `x.text`)\.
<!-- </article "role="article" "> -->
|
5ED78E19C7780C6856E1FA05D4B8A3F671FC878B | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_diagrams.html?context=cdpaas&locale=en | Diagrams | Diagrams
The term diagram covers the functions that are supported by both normal flows and SuperNode flows, such as adding and removing nodes and modifying connections between the nodes.
| # Diagrams #
The term diagram covers the functions that are supported by both normal flows and SuperNode flows, such as adding and removing nodes and modifying connections between the nodes\.
<!-- </article "role="article" "> -->
|
640F57E8262F846DA7884E43F6F2F6C04CD15667 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_global_values.html?context=cdpaas&locale=en | Global values | Global values
Global values are used to compute various summary statistics for specified fields. These summary values can be accessed anywhere within the flow. Global values are similar to flow parameters in that they are accessed by name through the flow. They're different from flow parameters in that the associated values are updated automatically when a Set Globals node is run, rather than being assigned by scripting. The global values for a flow are accessed by calling the flow's getGlobalValues() method.
The GlobalValues object defines the functions that are shown in the following table.
Functions that are defined by the GlobalValues object
Table 1. Functions that are defined by the GlobalValues object
Method Return type Description
g.fieldNameIterator() Iterator Returns an iterator for each field name with at least one global value.
g.getValue(type, fieldName) Object Returns the global value for the specified type and field name, or None if no value can be located. The returned value is generally expected to be a number, although future functionality may return different value types.
g.getValues(fieldName) Map Returns a map containing the known entries for the specified field name, or None if there are no existing entries for the field.
GlobalValues.Type defines the type of summary statistics that are available. The following summary statistics are available:
* MAX: the maximum value of the field.
* MEAN: the mean value of the field.
* MIN: the minimum value of the field.
* STDDEV: the standard deviation of the field.
* SUM: the sum of the values in the field.
For example, the following script accesses the mean value of the "income" field, which is computed by a Set Globals node:
import modeler.api
globals = modeler.script.stream().getGlobalValues()
mean_income = globals.getValue(modeler.api.GlobalValues.Type.MEAN, "income")
| # Global values #
Global values are used to compute various summary statistics for specified fields\. These summary values can be accessed anywhere within the flow\. Global values are similar to flow parameters in that they are accessed by name through the flow\. They're different from flow parameters in that the associated values are updated automatically when a Set Globals node is run, rather than being assigned by scripting\. The global values for a flow are accessed by calling the flow's `getGlobalValues()` method\.
The `GlobalValues` object defines the functions that are shown in the following table\.
<!-- <table "summary="Functions that are defined by the GlobalValues object" class="defaultstyle" "> -->
Functions that are defined by the GlobalValues object
Table 1\. Functions that are defined by the GlobalValues object
| Method | Return type | Description |
| ----------------------------- | ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `g.fieldNameIterator()` | Iterator | Returns an iterator for each field name with at least one global value\. |
| `g.getValue(type, fieldName)` | Object | Returns the global value for the specified type and field name, or `None` if no value can be located\. The returned value is generally expected to be a number, although future functionality may return different value types\. |
| `g.getValues(fieldName)` | Map | Returns a map containing the known entries for the specified field name, or `None` if there are no existing entries for the field\. |
<!-- </table "summary="Functions that are defined by the GlobalValues object" class="defaultstyle" "> -->
`GlobalValues.Type` defines the type of summary statistics that are available\. The following summary statistics are available:
<!-- <ul> -->
* `MAX`: the maximum value of the field\.
* `MEAN`: the mean value of the field\.
* `MIN`: the minimum value of the field\.
* `STDDEV`: the standard deviation of the field\.
* `SUM`: the sum of the values in the field\.
<!-- </ul> -->
For example, the following script accesses the mean value of the "income" field, which is computed by a Set Globals node:
import modeler.api
globals = modeler.script.stream().getGlobalValues()
mean_income = globals.getValue(modeler.api.GlobalValues.Type.MEAN, "income")
<!-- </article "role="article" "> -->
|
CAD5F0781542A67A581819B52BB1B6B4BB9ECE74 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_globalparameters.html?context=cdpaas&locale=en | Parameters | Parameters
Parameters provide a useful way of passing values at runtime, rather than hard coding them directly in a script. Parameters and their values are defined in the same way as for flows; that is, as entries in the parameters table of a flow, or as parameters on the command line. The Stream class implements a set of functions defined by the ParameterProvider object as shown in the following table. Session provides a getParameters() call which returns an object that defines those functions.
Functions defined by the ParameterProvider object
Table 1. Functions defined by the ParameterProvider object
Method Return type Description
p.parameterIterator() Iterator Returns an iterator of parameter names for this object.
p.getParameterDefinition( parameterName) ParameterDefinition Returns the parameter definition for the parameter with the specified name, or None if no such parameter exists in this provider. The result may be a snapshot of the definition at the time the method was called and need not reflect any subsequent modifications made to the parameter through this provider.
p.getParameterLabel(parameterName) string Returns the label of the named parameter, or None if no such parameter exists.
p.setParameterLabel(parameterName, label) Not applicable Sets the label of the named parameter.
p.getParameterStorage( parameterName) ParameterStorage Returns the storage of the named parameter, or None if no such parameter exists.
p.setParameterStorage( parameterName, storage) Not applicable Sets the storage of the named parameter.
p.getParameterType(parameterName) ParameterType Returns the type of the named parameter, or None if no such parameter exists.
p.setParameterType(parameterName, type) Not applicable Sets the type of the named parameter.
p.getParameterValue(parameterName) Object Returns the value of the named parameter, or None if no such parameter exists.
p.setParameterValue(parameterName, value) Not applicable Sets the value of the named parameter.
In the following example, the script aggregates some Telco data to find which region has the lowest average income data. A flow parameter is then set with this region. That flow parameter is then used in a Select node to exclude that region from the data, before a churn model is built on the remainder.
The example is artificial because the script generates the Select node itself and could therefore have generated the correct value directly into the Select node expression. However, flows are typically pre-built, so setting parameters in this way provides a useful example.
The first part of this example script creates the flow parameter that will contain the region with the lowest average income. The script also creates the nodes in the aggregation branch and the model building branch, and connects them together.
import modeler.api
stream = modeler.script.stream()
# Initialize a flow parameter
stream.setParameterStorage("LowestRegion", modeler.api.ParameterStorage.INTEGER)
# First create the aggregation branch to compute the average income per region
sourcenode = stream.findByID("idGXVBG5FBZH")
aggregatenode = modeler.script.stream().createAt("aggregate", "Aggregate", 294, 142)
aggregatenode.setPropertyValue("keys", ["region"])
aggregatenode.setKeyedPropertyValue("aggregates", "income", ["Mean"])
tablenode = modeler.script.stream().createAt("table", "Table", 462, 142)
stream.link(sourcenode, aggregatenode)
stream.link(aggregatenode, tablenode)
selectnode = stream.createAt("select", "Select", 210, 232)
selectnode.setPropertyValue("mode", "Discard")
# Reference the flow parameter in the selection
selectnode.setPropertyValue("condition", "'region' = '$P-LowestRegion'")
typenode = stream.createAt("type", "Type", 366, 232)
typenode.setKeyedPropertyValue("direction", "Drug", "Target")
c50node = stream.createAt("c50", "C5.0", 534, 232)
stream.link(sourcenode, selectnode)
stream.link(selectnode, typenode)
stream.link(typenode, c50node)
The example script creates the following flow.
Figure 1. Flow that results from the example script

| # Parameters #
Parameters provide a useful way of passing values at runtime, rather than hard coding them directly in a script\. Parameters and their values are defined in the same way as for flows; that is, as entries in the parameters table of a flow, or as parameters on the command line\. The `Stream` class implements a set of functions defined by the `ParameterProvider` object as shown in the following table\. `Session` provides a `getParameters()` call which returns an object that defines those functions\.
<!-- <table "summary="Functions defined by the ParameterProvider object" class="defaultstyle" "> -->
Functions defined by the ParameterProvider object
Table 1\. Functions defined by the ParameterProvider object
| Method | Return type | Description |
| ------------------------------------------------ | --------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `p.parameterIterator()` | Iterator | Returns an iterator of parameter names for this object\. |
| `p.getParameterDefinition( parameterName)` | `ParameterDefinition` | Returns the parameter definition for the parameter with the specified name, or `None` if no such parameter exists in this provider\. The result may be a snapshot of the definition at the time the method was called and need not reflect any subsequent modifications made to the parameter through this provider\. |
| `p.getParameterLabel(parameterName)` | *string* | Returns the label of the named parameter, or `None` if no such parameter exists\. |
| `p.setParameterLabel(parameterName, label)` | Not applicable | Sets the label of the named parameter\. |
| `p.getParameterStorage( parameterName)` | `ParameterStorage` | Returns the storage of the named parameter, or `None` if no such parameter exists\. |
| `p.setParameterStorage( parameterName, storage)` | Not applicable | Sets the storage of the named parameter\. |
| `p.getParameterType(parameterName)` | `ParameterType` | Returns the type of the named parameter, or `None` if no such parameter exists\. |
| `p.setParameterType(parameterName, type)` | Not applicable | Sets the type of the named parameter\. |
| `p.getParameterValue(parameterName)` | Object | Returns the value of the named parameter, or `None` if no such parameter exists\. |
| `p.setParameterValue(parameterName, value)` | Not applicable | Sets the value of the named parameter\. |
<!-- </table "summary="Functions defined by the ParameterProvider object" class="defaultstyle" "> -->
In the following example, the script aggregates some Telco data to find which region has the lowest average income data\. A flow parameter is then set with this region\. That flow parameter is then used in a Select node to exclude that region from the data, before a churn model is built on the remainder\.
The example is artificial because the script generates the Select node itself and could therefore have generated the correct value directly into the Select node expression\. However, flows are typically pre\-built, so setting parameters in this way provides a useful example\.
The first part of this example script creates the flow parameter that will contain the region with the lowest average income\. The script also creates the nodes in the aggregation branch and the model building branch, and connects them together\.
import modeler.api
stream = modeler.script.stream()
# Initialize a flow parameter
stream.setParameterStorage("LowestRegion", modeler.api.ParameterStorage.INTEGER)
# First create the aggregation branch to compute the average income per region
sourcenode = stream.findByID("idGXVBG5FBZH")
aggregatenode = modeler.script.stream().createAt("aggregate", "Aggregate", 294, 142)
aggregatenode.setPropertyValue("keys", ["region"])
aggregatenode.setKeyedPropertyValue("aggregates", "income", ["Mean"])
tablenode = modeler.script.stream().createAt("table", "Table", 462, 142)
stream.link(sourcenode, aggregatenode)
stream.link(aggregatenode, tablenode)
selectnode = stream.createAt("select", "Select", 210, 232)
selectnode.setPropertyValue("mode", "Discard")
# Reference the flow parameter in the selection
selectnode.setPropertyValue("condition", "'region' = '$P-LowestRegion'")
typenode = stream.createAt("type", "Type", 366, 232)
typenode.setKeyedPropertyValue("direction", "Drug", "Target")
c50node = stream.createAt("c50", "C5.0", 534, 232)
stream.link(sourcenode, selectnode)
stream.link(selectnode, typenode)
stream.link(typenode, c50node)
The example script creates the following flow\.
Figure 1\. Flow that results from the example script

<!-- </article "role="article" "> -->
|
E61658D2BA7D0D13E5A6008E28670D1B1F6CB7BB | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_hidden_variables.html?context=cdpaas&locale=en | Hidden variables | Hidden variables
You can hide data by creating private variables. Private variables can be accessed only by the class itself. If you declare names of the form __xxx or __xxx_yyy, that is with two preceding underscores, the Python parser will automatically add the class name to the declared name, creating hidden variables. For example:
class MyClass:
    __attr = 10 #private class attribute
    def method1(self):
        pass
    def method2(self, p1, p2):
        pass
    def __privateMethod(self, text):
        self.__text = text #private attribute
Unlike in Java, in Python all references to instance variables must be qualified with self; there's no implied use of this.
| # Hidden variables #
You can hide data by creating private variables\. Private variables can be accessed only by the class itself\. If you declare names of the form `__xxx` or `__xxx_yyy`, that is with two preceding underscores, the Python parser will automatically add the class name to the declared name, creating hidden variables\. For example:
class MyClass:
    __attr = 10 #private class attribute
    def method1(self):
        pass
    def method2(self, p1, p2):
        pass
    def __privateMethod(self, text):
        self.__text = text #private attribute
Unlike in Java, in Python all references to instance variables must be qualified with `self`; there's no implied use of `this`\.
<!-- </article "role="article" "> -->
|
9EE303CB0D99042537564DCDFC134B592BF0A3FE | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_inheritance.html?context=cdpaas&locale=en | Inheritance | Inheritance
The ability to inherit from classes is fundamental to object-oriented programming. Python supports both single and multiple inheritance. Single inheritance means that there can be only one superclass. Multiple inheritance means that there can be more than one superclass.
Inheritance is implemented by subclassing other classes. Any number of Python classes can be superclasses. In the Jython implementation of Python, only one Java class can be directly or indirectly inherited from. It's not required for a superclass to be supplied.
Any attribute or method in a superclass is also in any subclass and can be used by the class itself, or by any client as long as the attribute or method isn't hidden. Any instance of a subclass can be used wherever an instance of a superclass can be used; this is an example of polymorphism. These features enable reuse and ease of extension.
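For example, the following sketch shows both single and multiple inheritance:
class Base:
    def greet(self):
        print "hello from Base"

class Mixin:
    def extra(self):
        print "extra behavior"

class Single(Base): #single inheritance: one superclass
    pass

class Multiple(Base, Mixin): #multiple inheritance: two superclasses
    pass

m = Multiple()
m.greet() #inherited from Base
m.extra() #inherited from Mixin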
| # Inheritance #
The ability to inherit from classes is fundamental to object\-oriented programming\. Python supports both single and multiple inheritance\. Single inheritance means that there can be only one superclass\. Multiple inheritance means that there can be more than one superclass\.
Inheritance is implemented by subclassing other classes\. Any number of Python classes can be superclasses\. In the Jython implementation of Python, only one Java class can be directly or indirectly inherited from\. It's not required for a superclass to be supplied\.
Any attribute or method in a superclass is also in any subclass and can be used by the class itself, or by any client as long as the attribute or method isn't hidden\. Any instance of a subclass can be used wherever an instance of a superclass can be used; this is an example of polymorphism\. These features enable reuse and ease of extension\.
<!-- </article "role="article" "> -->
|
97050C74E0C144E4F16AA808D275A9A472489EFB | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_language_overview.html?context=cdpaas&locale=en | The scripting language | The scripting language
With the scripting facility for SPSS Modeler, you can create scripts that operate on the SPSS Modeler user interface, manipulate output objects, and run command syntax. You can also run scripts directly from within SPSS Modeler.
Scripts in SPSS Modeler are written in the Python scripting language. The Java-based implementation of Python that's used by SPSS Modeler is called Jython. The scripting language consists of the following features:
* A format for referencing nodes, flows, projects, output, and other SPSS Modeler objects
* A set of scripting statements or commands you can use to manipulate these objects
* A scripting expression language for setting the values of variables, parameters, and other objects
* Support for comments, continuations, and blocks of literal text
The following sections of this documentation describe the Python scripting language, the Jython implementation of Python, and the basic syntax for getting started with scripting in SPSS Modeler. Information about specific properties and commands is provided in the sections that follow.
| # The scripting language #
With the scripting facility for SPSS Modeler, you can create scripts that operate on the SPSS Modeler user interface, manipulate output objects, and run command syntax\. You can also run scripts directly from within SPSS Modeler\.
Scripts in SPSS Modeler are written in the Python scripting language\. The Java\-based implementation of Python that's used by SPSS Modeler is called Jython\. The scripting language consists of the following features:
<!-- <ul> -->
* A format for referencing nodes, flows, projects, output, and other SPSS Modeler objects
* A set of scripting statements or commands you can use to manipulate these objects
* A scripting expression language for setting the values of variables, parameters, and other objects
* Support for comments, continuations, and blocks of literal text
<!-- </ul> -->
The following sections of this documentation describe the Python scripting language, the Jython implementation of Python, and the basic syntax for getting started with scripting in SPSS Modeler\. Information about specific properties and commands is provided in the sections that follow\.
<!-- </article "role="article" "> -->
|
1FEFE3C6F1A20841FA1AE6AFAA85CC7FF36778AC | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_metadata.html?context=cdpaas&locale=en | Metadata: Information about data | Metadata: Information about data
Because nodes are connected together in a flow, information about the columns or fields that are available at each node is available. For example, in the SPSS Modeler user interface, this allows you to select which fields to sort or aggregate by. This information is called the data model.
Scripts can also access the data model by looking at the fields coming into or out of a node. For some nodes, the input and output data models are the same (for example, a Sort node simply reorders the records but doesn't change the data model). Some, such as the Derive node, can add new fields. Others, such as the Filter node, can rename or remove fields.
In the following example, the script takes a standard IBM® SPSS® Modeler druglearn.str flow, and for each field, builds a model with one of the input fields dropped. It does this by:
1. Accessing the output data model from the Type node.
2. Looping through each field in the output data model.
3. Modifying the Filter node for each input field.
4. Changing the name of the model being built.
5. Running the model build node.
Note: Before running the script in the druglearn.str flow, remember to set the scripting language to Python if the flow was created in an old version of IBM SPSS Modeler desktop and its scripting language is set to Legacy.
import modeler.api
stream = modeler.script.stream()
filternode = stream.findByType("filter", None)
typenode = stream.findByType("type", None)
c50node = stream.findByType("c50", None)
# Always use a custom model name
c50node.setPropertyValue("use_model_name", True)
lastRemoved = None
fields = typenode.getOutputDataModel()
for field in fields:
    # If this is the target field then ignore it
    if field.getModelingRole() == modeler.api.ModelingRole.OUT:
        continue
    # Re-enable the field that was most recently removed
    if lastRemoved != None:
        filternode.setKeyedPropertyValue("include", lastRemoved, True)
    # Remove the field
    lastRemoved = field.getColumnName()
    filternode.setKeyedPropertyValue("include", lastRemoved, False)
    # Set the name of the new model then run the build
    c50node.setPropertyValue("model_name", "Exclude " + lastRemoved)
    c50node.run([])
The DataModel object provides a number of methods for accessing information about the fields or columns within the data model. These methods are summarized in the following table.
DataModel object methods for accessing information about fields or columns
Table 1. DataModel object methods for accessing information about fields or columns
Method Return type Description
d.getColumnCount() int Returns the number of columns in the data model.
d.columnIterator() Iterator Returns an iterator that returns each column in the "natural" insert order. The iterator returns instances of Column.
d.nameIterator() Iterator Returns an iterator that returns the name of each column in the "natural" insert order.
d.contains(name) Boolean Returns True if a column with the supplied name exists in this DataModel, False otherwise.
d.getColumn(name) Column Returns the column with the specified name.
d.getColumnGroup(name) ColumnGroup Returns the named column group or None if no such column group exists.
d.getColumnGroupCount() int Returns the number of column groups in this data model.
d.columnGroupIterator() Iterator Returns an iterator that returns each column group in turn.
d.toArray() Column[] Returns the data model as an array of columns. The columns are ordered in their "natural" insert order.
Each field (Column object) includes a number of methods for accessing information about the column. The following table shows a selection of these.
Column object methods for accessing information about the column
Table 2. Column object methods for accessing information about the column
Method Return type Description
c.getColumnName() string Returns the name of the column.
c.getColumnLabel() string Returns the label of the column or an empty string if there is no label associated with the column.
c.getMeasureType() MeasureType Returns the measure type for the column.
c.getStorageType() StorageType Returns the storage type for the column.
c.isMeasureDiscrete() Boolean Returns True if the column is discrete. Columns that are either a set or a flag are considered discrete.
c.isModelOutputColumn() Boolean Returns True if the column is a model output column.
c.isStorageDatetime() Boolean Returns True if the column's storage is a time, date or timestamp value.
c.isStorageNumeric() Boolean Returns True if the column's storage is an integer or a real number.
c.isValidValue(value) Boolean Returns True if the specified value is valid for this storage type and, when the set of valid column values is known, is one of those values.
c.getModelingRole() ModelingRole Returns the modeling role for the column.
c.getSetValues() Object[] Returns an array of valid values for the column, or None if either the values are not known or the column is not a set.
c.getValueLabel(value) string Returns the label for the value in the column, or an empty string if there is no label associated with the value.
c.getFalseFlag() Object Returns the "false" indicator value for the column, or None if either the value is not known or the column is not a flag.
c.getTrueFlag() Object Returns the "true" indicator value for the column, or None if either the value is not known or the column is not a flag.
c.getLowerBound() Object Returns the lower bound value for the values in the column, or None if either the value is not known or the column is not continuous.
c.getUpperBound() Object Returns the upper bound value for the values in the column, or None if either the value is not known or the column is not continuous.
Note that most of the methods that access information about a column have equivalent methods defined on the DataModel object itself. For example, the two following statements are equivalent:
dataModel.getColumn("someName").getModelingRole()
dataModel.getModelingRole("someName")
| # Metadata: Information about data #
Because nodes are connected together in a flow, information about the columns or fields that are available at each node can be determined\. For example, in the SPSS Modeler user interface, this allows you to select which fields to sort or aggregate by\. This information is called the data model\.
Scripts can also access the data model by looking at the fields coming into or out of a node\. For some nodes, the input and output data models are the same (for example, a Sort node simply reorders the records but doesn't change the data model)\. Some, such as the Derive node, can add new fields\. Others, such as the Filter node, can rename or remove fields\.
In the following example, the script takes a standard IBM® SPSS® Modeler druglearn\.str flow, and for each field, builds a model with one of the input fields dropped\. It does this by:
<!-- <ol> -->
1. Accessing the output data model from the Type node\.
2. Looping through each field in the output data model\.
3. Modifying the Filter node for each input field\.
4. Changing the name of the model being built\.
5. Running the model build node\.
<!-- </ol> -->
Note: Before running the script in the druglearn\.str flow, remember to set the scripting language to Python if the flow was created in an old version of IBM SPSS Modeler desktop and its scripting language is set to Legacy\.
import modeler.api
stream = modeler.script.stream()
filternode = stream.findByType("filter", None)
typenode = stream.findByType("type", None)
c50node = stream.findByType("c50", None)
# Always use a custom model name
c50node.setPropertyValue("use_model_name", True)
lastRemoved = None
fields = typenode.getOutputDataModel()
for field in fields:
    # If this is the target field then ignore it
    if field.getModelingRole() == modeler.api.ModelingRole.OUT:
        continue
    # Re-enable the field that was most recently removed
    if lastRemoved != None:
        filternode.setKeyedPropertyValue("include", lastRemoved, True)
    # Remove the field
    lastRemoved = field.getColumnName()
    filternode.setKeyedPropertyValue("include", lastRemoved, False)
    # Set the name of the new model then run the build
    c50node.setPropertyValue("model_name", "Exclude " + lastRemoved)
    c50node.run([])
The `DataModel` object provides a number of methods for accessing information about the fields or columns within the data model\. These methods are summarized in the following table\.
<!-- <table "summary="DataModel object methods for accessing information about fields or columns" class="defaultstyle" "> -->
DataModel object methods for accessing information about fields or columns
Table 1\. DataModel object methods for accessing information about fields or columns
| Method | Return type | Description |
| ------------------------- | ----------- | ------------------------------------------------------------------------------------------------------------------------- |
| `d.getColumnCount()` | *int* | Returns the number of columns in the data model\. |
| `d.columnIterator()` | Iterator | Returns an iterator that returns each column in the "natural" insert order\. The iterator returns instances of `Column`\. |
| `d.nameIterator()` | Iterator | Returns an iterator that returns the name of each column in the "natural" insert order\. |
| `d.contains(name)` | *Boolean* | Returns `True` if a column with the supplied name exists in this DataModel, `False` otherwise\. |
| `d.getColumn(name)` | Column | Returns the column with the specified name\. |
| `d.getColumnGroup(name)` | ColumnGroup | Returns the named column group or `None` if no such column group exists\. |
| `d.getColumnGroupCount()` | *int* | Returns the number of column groups in this data model\. |
| `d.columnGroupIterator()` | Iterator | Returns an iterator that returns each column group in turn\. |
| `d.toArray()` | Column\[\] | Returns the data model as an array of columns\. The columns are ordered in their "natural" insert order\. |
<!-- </table "summary="DataModel object methods for accessing information about fields or columns" class="defaultstyle" "> -->
Each field (`Column` object) includes a number of methods for accessing information about the column\. The following table shows a selection of these\.
<!-- <table "summary="Column object methods for accessing information about the column" class="defaultstyle" "> -->
Column object methods for accessing information about the column
Table 2\. Column object methods for accessing information about the column
| Method | Return type | Description |
| ------------------------- | ------------ | ---------------------------------------------------------------------------------------------------------------------------------------- |
| `c.getColumnName()` | *string* | Returns the name of the column\. |
| `c.getColumnLabel()` | *string* | Returns the label of the column or an empty string if there is no label associated with the column\. |
| `c.getMeasureType()` | MeasureType | Returns the measure type for the column\. |
| `c.getStorageType()` | StorageType | Returns the storage type for the column\. |
| `c.isMeasureDiscrete()` | *Boolean* | Returns `True` if the column is discrete\. Columns that are either a set or a flag are considered discrete\. |
| `c.isModelOutputColumn()` | *Boolean* | Returns `True` if the column is a model output column\. |
| `c.isStorageDatetime()` | *Boolean* | Returns `True` if the column's storage is a time, date or timestamp value\. |
| `c.isStorageNumeric()` | *Boolean* | Returns `True` if the column's storage is an integer or a real number\. |
| `c.isValidValue(value)`   | *Boolean*    | Returns `True` if the specified value is valid for this storage type and, when the set of valid column values is known, is one of those values\.          |
| `c.getModelingRole()` | ModelingRole | Returns the modeling role for the column\. |
| `c.getSetValues()` | Object\[\] | Returns an array of valid values for the column, or `None` if either the values are not known or the column is not a set\. |
| `c.getValueLabel(value)` | *string* | Returns the label for the value in the column, or an empty string if there is no label associated with the value\. |
| `c.getFalseFlag()` | Object | Returns the "false" indicator value for the column, or `None` if either the value is not known or the column is not a flag\. |
| `c.getTrueFlag()` | Object | Returns the "true" indicator value for the column, or `None` if either the value is not known or the column is not a flag\. |
| `c.getLowerBound()` | Object | Returns the lower bound value for the values in the column, or `None` if either the value is not known or the column is not continuous\. |
| `c.getUpperBound()` | Object | Returns the upper bound value for the values in the column, or `None` if either the value is not known or the column is not continuous\. |
<!-- </table "summary="Column object methods for accessing information about the column" class="defaultstyle" "> -->
Note that most of the methods that access information about a column have equivalent methods defined on the `DataModel` object itself\. For example, the two following statements are equivalent:
dataModel.getColumn("someName").getModelingRole()
dataModel.getModelingRole("someName")
<!-- </article "role="article" "> -->
|
83A5FC83AA65717942A3437217F2114454552144 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_node_create.html?context=cdpaas&locale=en | Creating nodes | Creating nodes
Flows provide a number of ways to create nodes. These methods are summarized in the following table.
Methods for creating nodes
Table 1. Methods for creating nodes
Method Return type Description
s.create(nodeType, name) Node Creates a node of the specified type and adds it to the specified flow.
s.createAt(nodeType, name, x, y) Node Creates a node of the specified type and adds it to the specified flow at the specified location. If either x < 0 or y < 0, the location is not set.
s.createModelApplier(modelOutput, name) Node Creates a model applier node that's derived from the supplied model output object.
For example, you can use the following script to create a new Type node in a flow:
stream = modeler.script.stream()
# Create a new Type node
node = stream.create("type", "My Type")
| # Creating nodes #
Flows provide a number of ways to create nodes\. These methods are summarized in the following table\.
<!-- <table "summary="Methods for creating nodes" class="defaultstyle" "> -->
Methods for creating nodes
Table 1\. Methods for creating nodes
| Method | Return type | Description |
| ----------------------------------------- | ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `s.create(nodeType, name)` | Node | Creates a node of the specified type and adds it to the specified flow\. |
| `s.createAt(nodeType, name, x, y)` | Node | Creates a node of the specified type and adds it to the specified flow at the specified location\. If either x < 0 or y < 0, the location is not set\. |
| `s.createModelApplier(modelOutput, name)` | Node | Creates a model applier node that's derived from the supplied model output object\. |
<!-- </table "summary="Methods for creating nodes" class="defaultstyle" "> -->
For example, you can use the following script to create a new Type node in a flow:
stream = modeler.script.stream()
# Create a new Type node
node = stream.create("type", "My Type")
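To place a node at a specific position on the canvas, you can use `createAt` instead\. For example (a sketch; the coordinates are arbitrary):

stream = modeler.script.stream()
# Create a new Filter node at position (192, 96) on the canvas
filternode = stream.createAt("filter", "My Filter", 192, 96)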
<!-- </article "role="article" "> -->
|
D9304450E79DC05B5ECC4FE98D48FECEF76A852E | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_node_find.html?context=cdpaas&locale=en | Finding nodes | Finding nodes
Flows provide a number of ways for locating an existing node. These methods are summarized in the following table.
Methods for locating an existing node
Table 1. Methods for locating an existing node
Method Return type Description
s.findAll(type, label) Collection Returns a list of all nodes with the specified type and label. Either the type or label can be None, in which case the other parameter is used.
s.findAll(filter, recursive) Collection Returns a collection of all nodes that are accepted by the specified filter. If the recursive flag is True, any SuperNodes within the specified flow are also searched.
s.findByID(id) Node Returns the node with the supplied ID or None if no such node exists. The search is limited to the current flow.
s.findByType(type, label) Node Returns the node with the supplied type, label, or both. Either the type or label can be None, in which case the other parameter is used. If multiple nodes result in a match, then an arbitrary one is chosen and returned. If no nodes result in a match, then the return value is None.
s.findDownstream(fromNodes) Collection Searches from the supplied list of nodes and returns the set of nodes downstream of the supplied nodes. The returned list includes the originally supplied nodes.
s.findUpstream(fromNodes) Collection Searches from the supplied list of nodes and returns the set of nodes upstream of the supplied nodes. The returned list includes the originally supplied nodes.
s.findProcessorForID(String id, boolean recursive) Node Returns the node with the supplied ID or None if no such node exists. If the recursive flag is true, then any composite nodes within this diagram are also searched.
As an example, if a flow contains a single Filter node that the script needs to access, the Filter node can be found by using the following script:
stream = modeler.script.stream()
node = stream.findByType("filter", None)
...
Alternatively, you can use the ID of a node. For example:
stream = modeler.script.stream()
node = stream.findByID("id49CVL4GHVV8") # the Derive node ID
node.setPropertyValue("mode", "Multiple")
node.setPropertyValue("name_extension", "new_derive")
To obtain the ID for any node in a flow, click the Scripting icon on the toolbar, then select the desired node in your flow and click Insert selected node ID.
| # Finding nodes #
Flows provide a number of ways for locating an existing node\. These methods are summarized in the following table\.
<!-- <table "summary="Methods for locating an existing node" class="defaultstyle" "> -->
Methods for locating an existing node
Table 1\. Methods for locating an existing node
| Method | Return type | Description |
| ---------------------------------------------------- | ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `s.findAll(type, label)` | Collection | Returns a list of all nodes with the specified type and label\. Either the type or label can be `None`, in which case the other parameter is used\. |
| `s.findAll(filter, recursive)` | Collection | Returns a collection of all nodes that are accepted by the specified filter\. If the recursive flag is `True`, any SuperNodes within the specified flow are also searched\. |
| `s.findByID(id)`                                     | Node        | Returns the node with the supplied ID or `None` if no such node exists\. The search is limited to the current flow\.                                                                                                                                                                                |
| `s.findByType(type, label)`                           | Node        | Returns the node with the supplied type, label, or both\. Either the type or label can be `None`, in which case the other parameter is used\. If multiple nodes result in a match, then an arbitrary one is chosen and returned\. If no nodes result in a match, then the return value is `None`\. |
| `s.findDownstream(fromNodes)` | Collection | Searches from the supplied list of nodes and returns the set of nodes downstream of the supplied nodes\. The returned list includes the originally supplied nodes\. |
| `s.findUpstream(fromNodes)` | Collection | Searches from the supplied list of nodes and returns the set of nodes upstream of the supplied nodes\. The returned list includes the originally supplied nodes\. |
| `s.findProcessorForID(String id, boolean recursive)` | Node | Returns the node with the supplied ID or `None` if no such node exists\. If the recursive flag is `true`, then any composite nodes within this diagram are also searched\. |
<!-- </table "summary="Methods for locating an existing node" class="defaultstyle" "> -->
As an example, if a flow contains a single Filter node that the script needs to access, the Filter node can be found by using the following script:
stream = modeler.script.stream()
node = stream.findByType("filter", None)
...
Alternatively, you can use the ID of a node\. For example:
stream = modeler.script.stream()
node = stream.findByID("id49CVL4GHVV8") # the Derive node ID
node.setPropertyValue("mode", "Multiple")
node.setPropertyValue("name_extension", "new_derive")
To obtain the ID for any node in a flow, click the Scripting icon on the toolbar, then select the desired node in your flow and click Insert selected node ID\.
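When a flow might contain more than one node of a given type, you can use `findAll` to retrieve them all\. For example, the following sketch prints the label of every Filter node in the flow:

stream = modeler.script.stream()
for node in stream.findAll("filter", None):
    print(node.getLabel())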
<!-- </article "role="article" "> -->
|
0CB42F245DF436AF2BCCB54B612786CA493B917B | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_node_import.html?context=cdpaas&locale=en | Importing, replacing, and deleting nodes | Importing, replacing, and deleting nodes
Along with creating and connecting nodes, it's often necessary to replace and delete nodes from a flow. The methods that are available for importing, replacing, and deleting nodes are summarized in the following table.
Methods for importing, replacing, and deleting nodes
Table 1. Methods for importing, replacing, and deleting nodes
Method Return type Description
s.replace(originalNode, replacementNode, discardOriginal) Not applicable Replaces the specified node from the specified flow. Both the original node and replacement node must be owned by the specified flow.
s.insert(source, nodes, newIDs) List Inserts copies of the nodes in the supplied list. It's assumed that all nodes in the supplied list are contained within the specified flow. The newIDs flag indicates whether new IDs should be generated for each node, or whether the existing ID should be copied and used. It's assumed that all nodes in a flow have a unique ID, so this flag must be set to True if the source flow is the same as the specified flow. The method returns the list of newly inserted nodes, where the order of the nodes is undefined (that is, the ordering is not necessarily the same as the order of the nodes in the input list).
s.delete(node) Not applicable Deletes the specified node from the specified flow. The node must be owned by the specified flow.
s.deleteAll(nodes) Not applicable Deletes all the specified nodes from the specified flow. All nodes in the collection must belong to the specified flow.
s.clear() Not applicable Deletes all nodes from the specified flow.
| # Importing, replacing, and deleting nodes #
Along with creating and connecting nodes, it's often necessary to replace and delete nodes from a flow\. The methods that are available for importing, replacing, and deleting nodes are summarized in the following table\.
<!-- <table "summary="Methods for importing, replacing, and deleting nodes" class="defaultstyle" "> -->
Methods for importing, replacing, and deleting nodes
Table 1\. Methods for importing, replacing, and deleting nodes
| Method | Return type | Description |
| ----------------------------------------------------------- | -------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `s.replace(originalNode, replacementNode, discardOriginal)` | Not applicable | Replaces the specified node from the specified flow\. Both the original node and replacement node must be owned by the specified flow\. |
| `s.insert(source, nodes, newIDs)` | List | Inserts copies of the nodes in the supplied list\. It's assumed that all nodes in the supplied list are contained within the specified flow\. The `newIDs` flag indicates whether new IDs should be generated for each node, or whether the existing ID should be copied and used\. It's assumed that all nodes in a flow have a unique ID, so this flag must be set to `True` if the source flow is the same as the specified flow\. The method returns the list of newly inserted nodes, where the order of the nodes is undefined (that is, the ordering is not necessarily the same as the order of the nodes in the input list)\. |
| `s.delete(node)` | Not applicable | Deletes the specified node from the specified flow\. The node must be owned by the specified flow\. |
| `s.deleteAll(nodes)` | Not applicable | Deletes all the specified nodes from the specified flow\. All nodes in the collection must belong to the specified flow\. |
| `s.clear()` | Not applicable | Deletes all nodes from the specified flow\. |
<!-- </table "summary="Methods for importing, replacing, and deleting nodes" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
1243D4C8499CC9BE45CD9C1F6EB34254F1B9B4D7 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_node_information.html?context=cdpaas&locale=en | Getting information about nodes | Getting information about nodes
Nodes fall into a number of different categories such as data import and export nodes, model building nodes, and other types of nodes. Every node provides a number of methods that can be used to find out information about the node.
The methods that can be used to obtain the ID, name, and label of a node are summarized in the following table.
Methods to obtain the ID, name, and label of a node
Table 1. Methods to obtain the ID, name, and label of a node
Method Return type Description
n.getLabel() string Returns the display label of the specified node. The label is the value of the property custom_name only if that property is a non-empty string and the use_custom_name property is not set; otherwise, the label is the value of getName().
n.setLabel(label) Not applicable Sets the display label of the specified node. If the new label is a non-empty string it is assigned to the property custom_name, and False is assigned to the property use_custom_name so that the specified label takes precedence; otherwise, an empty string is assigned to the property custom_name and True is assigned to the property use_custom_name.
n.getName() string Returns the name of the specified node.
n.getID() string Returns the ID of the specified node. A new ID is created each time a new node is created. The ID is persisted with the node when it's saved as part of a flow so that when the flow is opened, the node IDs are preserved. However, if a saved node is inserted into a flow, the inserted node is considered to be a new object and will be allocated a new ID.
Methods that can be used to obtain other information about a node are summarized in the following table.
Methods for obtaining information about a node
Table 2. Methods for obtaining information about a node
Method Return type Description
n.getTypeName() string Returns the scripting name of this node. This is the same name that could be used to create a new instance of this node.
n.isInitial() Boolean Returns True if this is an initial node (one that occurs at the start of a flow).
n.isInline() Boolean Returns True if this is an in-line node (one that occurs mid-flow).
n.isTerminal() Boolean Returns True if this is a terminal node (one that occurs at the end of a flow).
n.getXPosition() int Returns the x position offset of the node in the flow.
n.getYPosition() int Returns the y position offset of the node in the flow.
n.setXYPosition(x, y) Not applicable Sets the position of the node in the flow.
n.setPositionBetween(source, target) Not applicable Sets the position of the node in the flow so that it's positioned between the supplied nodes.
n.isCacheEnabled() Boolean Returns True if the cache is enabled; returns False otherwise.
n.setCacheEnabled(val) Not applicable Enables or disables the cache for this object. If the cache is full and the caching becomes disabled, the cache is flushed.
n.isCacheFull() Boolean Returns True if the cache is full; returns False otherwise.
n.flushCache() Not applicable Flushes the cache of this node. Has no effect if the cache is not enabled or is not full.
| # Getting information about nodes #
Nodes fall into a number of different categories such as data import and export nodes, model building nodes, and other types of nodes\. Every node provides a number of methods that can be used to find out information about the node\.
The methods that can be used to obtain the ID, name, and label of a node are summarized in the following table\.
<!-- <table "summary="Methods to obtain the ID, name, and label of a node" class="defaultstyle" "> -->
Methods to obtain the ID, name, and label of a node
Table 1\. Methods to obtain the ID, name, and label of a node
| Method | Return type | Description |
| ------------------- | -------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `n.getLabel()` | *string* | Returns the display label of the specified node\. The label is the value of the property `custom_name` only if that property is a non\-empty string and the `use_custom_name` property is not set; otherwise, the label is the value of `getName()`\. |
| `n.setLabel(label)` | Not applicable | Sets the display label of the specified node\. If the new label is a non\-empty string it is assigned to the property `custom_name`, and `False` is assigned to the property `use_custom_name` so that the specified label takes precedence; otherwise, an empty string is assigned to the property `custom_name` and `True` is assigned to the property `use_custom_name`\. |
| `n.getName()` | *string* | Returns the name of the specified node\. |
| `n.getID()` | *string* | Returns the ID of the specified node\. A new ID is created each time a new node is created\. The ID is persisted with the node when it's saved as part of a flow so that when the flow is opened, the node IDs are preserved\. However, if a saved node is inserted into a flow, the inserted node is considered to be a new object and will be allocated a new ID\. |
<!-- </table "summary="Methods to obtain the ID, name, and label of a node" class="defaultstyle" "> -->
Methods that can be used to obtain other information about a node are summarized in the following table\.
<!-- <table "summary="Methods for obtaining information about a node" class="defaultstyle" "> -->
Methods for obtaining information about a node
Table 2\. Methods for obtaining information about a node
| Method | Return type | Description |
| -------------------------------------- | -------------- | ----------------------------------------------------------------------------------------------------------------------------- |
| `n.getTypeName()` | *string* | Returns the scripting name of this node\. This is the same name that could be used to create a new instance of this node\. |
| `n.isInitial()` | *Boolean* | Returns `True` if this is an initial node (one that occurs at the start of a flow)\. |
| `n.isInline()` | *Boolean* | Returns `True` if this is an in\-line node (one that occurs mid\-flow)\. |
| `n.isTerminal()` | *Boolean* | Returns `True` if this is a terminal node (one that occurs at the end of a flow)\. |
| `n.getXPosition()` | *int* | Returns the x position offset of the node in the flow\. |
| `n.getYPosition()` | *int* | Returns the y position offset of the node in the flow\. |
| `n.setXYPosition(x, y)` | Not applicable | Sets the position of the node in the flow\. |
| `n.setPositionBetween(source, target)` | Not applicable | Sets the position of the node in the flow so that it's positioned between the supplied nodes\. |
| `n.isCacheEnabled()` | *Boolean* | Returns `True` if the cache is enabled; returns `False` otherwise\. |
| `n.setCacheEnabled(val)` | Not applicable | Enables or disables the cache for this object\. If the cache is full and the caching becomes disabled, the cache is flushed\. |
| `n.isCacheFull()` | *Boolean* | Returns `True` if the cache is full; returns `False` otherwise\. |
| `n.flushCache()`                       | Not applicable | Flushes the cache of this node\. Has no effect if the cache is not enabled or is not full\.                                     |
<!-- </table "summary="Methods for obtaining information about a node" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
AB42FF6B754A2E29FCB56B0137EEDDF17F8EE271 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_node_link.html?context=cdpaas&locale=en | Linking and unlinking nodes | Linking and unlinking nodes
When you add a new node to a flow, you must connect it to a sequence of nodes before it can be used. Flows provide a number of methods for linking and unlinking nodes. These methods are summarized in the following table.
Methods for linking and unlinking nodes
Table 1. Methods for linking and unlinking nodes
Method Return type Description
s.link(source, target) Not applicable Creates a new link between the source and the target nodes.
s.link(source, targets) Not applicable Creates new links between the source node and each target node in the supplied list.
s.linkBetween(inserted, source, target) Not applicable Connects a node between two other node instances (the source and target nodes) and sets the position of the inserted node to be between them. Any direct link between the source and target nodes is removed first.
s.linkPath(path) Not applicable Creates a new path between node instances. The first node is linked to the second, the second is linked to the third, and so on.
s.unlink(source, target) Not applicable Removes any direct link between the source and the target nodes.
s.unlink(source, targets) Not applicable Removes any direct links between the source node and each object in the targets list.
s.unlinkPath(path) Not applicable Removes any path that exists between node instances.
s.disconnect(node) Not applicable Removes any links between the supplied node and any other nodes in the specified flow.
s.isValidLink(source, target) boolean Returns True if it would be valid to create a link between the specified source and target nodes. This method checks that both objects belong to the specified flow, that the source node can supply a link and the target node can receive a link, and that creating such a link will not cause a circularity in the flow.
The example script that follows performs these four tasks:
1. Creates a Data Asset node, a Filter node, and a Table output node.
2. Connects the nodes together.
3. Filters the field "Drug" from the resulting output.
4. Runs the Table node.
stream = modeler.script.stream()
sourcenode = stream.findByID("idGXVBG5FBZH")
filternode = stream.createAt("filter", "Filter", 192, 64)
tablenode = stream.createAt("table", "Table", 288, 64)
stream.link(sourcenode, filternode)
stream.link(filternode, tablenode)
filternode.setKeyedPropertyValue("include", "Drug", False)
results = []
tablenode.run(results)
| # Linking and unlinking nodes #
When you add a new node to a flow, you must connect it to a sequence of nodes before it can be used\. Flows provide a number of methods for linking and unlinking nodes\. These methods are summarized in the following table\.
<!-- <table "summary="Methods for linking and unlinking nodes" class="defaultstyle" "> -->
Methods for linking and unlinking nodes
Table 1\. Methods for linking and unlinking nodes
| Method | Return type | Description |
| ----------------------------------------- | -------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `s.link(source, target)` | Not applicable | Creates a new link between the source and the target nodes\. |
| `s.link(source, targets)` | Not applicable | Creates new links between the source node and each target node in the supplied list\. |
| `s.linkBetween(inserted, source, target)` | Not applicable | Connects a node between two other node instances (the source and target nodes) and sets the position of the inserted node to be between them\. Any direct link between the source and target nodes is removed first\. |
| `s.linkPath(path)` | Not applicable | Creates a new path between node instances\. The first node is linked to the second, the second is linked to the third, and so on\. |
| `s.unlink(source, target)` | Not applicable | Removes any direct link between the source and the target nodes\. |
| `s.unlink(source, targets)` | Not applicable | Removes any direct links between the source node and each object in the targets list\. |
| `s.unlinkPath(path)` | Not applicable | Removes any path that exists between node instances\. |
| `s.disconnect(node)` | Not applicable | Removes any links between the supplied node and any other nodes in the specified flow\. |
| `s.isValidLink(source, target)` | *boolean* | Returns `True` if it would be valid to create a link between the specified source and target nodes\. This method checks that both objects belong to the specified flow, that the source node can supply a link and the target node can receive a link, and that creating such a link will not cause a circularity in the flow\. |
<!-- </table "summary="Methods for linking and unlinking nodes" class="defaultstyle" "> -->
The example script that follows performs these four tasks:
<!-- <ol> -->
1. Creates a Data Asset node, a Filter node, and a Table output node\.
2. Connects the nodes together\.
3. Filters the field "Drug" from the resulting output\.
4. Runs the Table node\.
<!-- </ol> -->
stream = modeler.script.stream()
sourcenode = stream.findByID("idGXVBG5FBZH")
filternode = stream.createAt("filter", "Filter", 192, 64)
tablenode = stream.createAt("table", "Table", 288, 64)
stream.link(sourcenode, filternode)
stream.link(filternode, tablenode)
filternode.setKeyedPropertyValue("include", "Drug", False)
results = []
tablenode.run(results)
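To insert a node into an existing connection, you can use `linkBetween`, which also positions the inserted node between the two others\. For example, the following sketch (continuing from the script above) places a new Derive node between the Filter node and the Table node:

derivenode = stream.create("derive", "My Derive")
stream.linkBetween(derivenode, filternode, tablenode)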
<!-- </article "role="article" "> -->
|
F0EF147DBC0554F53B331E7B6D5715D0269FFBA8 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_node_reference.html?context=cdpaas&locale=en | Referencing existing nodes | Referencing existing nodes
A flow is often pre-built with some parameters that must be modified before the flow runs. Modifying these parameters involves the following tasks:
1. Locating the nodes in the relevant flow.
2. Changing the node or flow settings (or both).
| # Referencing existing nodes #
A flow is often pre\-built with some parameters that must be modified before the flow runs\. Modifying these parameters involves the following tasks:
<!-- <ol> -->
1. Locating the nodes in the relevant flow\.
2. Changing the node or flow settings (or both)\.
<!-- </ol> -->
<!-- </article "role="article" "> -->
|
5EE63FCC911BA90930D413B58E1310EFE0E24243 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_node_traverse.html?context=cdpaas&locale=en | Traversing through nodes in a flow | Traversing through nodes in a flow
A common requirement is to identify nodes that are either upstream or downstream of a particular node. The flow provides a number of methods that can be used to identify these nodes. These methods are summarized in the following table.
Methods to identify upstream and downstream nodes
Table 1. Methods to identify upstream and downstream nodes
Method Return type Description
s.iterator() Iterator Returns an iterator over the node objects that are contained in the specified flow. If the flow is modified between calls of the next() function, the behavior of the iterator is undefined.
s.predecessorAt(node, index) Node Returns the specified immediate predecessor of the supplied node or None if the index is out of bounds.
s.predecessorCount(node) int Returns the number of immediate predecessors of the supplied node.
s.predecessors(node) List Returns the immediate predecessors of the supplied node.
s.successorAt(node, index) Node Returns the specified immediate successor of the supplied node or None if the index is out of bounds.
s.successorCount(node) int Returns the number of immediate successors of the supplied node.
s.successors(node) List Returns the immediate successors of the supplied node.
| # Traversing through nodes in a flow #
A common requirement is to identify nodes that are either upstream or downstream of a particular node\. The flow provides a number of methods that can be used to identify these nodes\. These methods are summarized in the following table\.
<!-- <table "summary="Methods to identify upstream and downstream nodes" class="defaultstyle" "> -->
Methods to identify upstream and downstream nodes
Table 1\. Methods to identify upstream and downstream nodes
| Method | Return type | Description |
| ------------------------------ | ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `s.iterator()` | Iterator | Returns an iterator over the node objects that are contained in the specified flow\. If the flow is modified between calls of the `next()` function, the behavior of the iterator is undefined\. |
| `s.predecessorAt(node, index)` | Node | Returns the specified immediate predecessor of the supplied node or `None` if the index is out of bounds\. |
| `s.predecessorCount(node)` | *int* | Returns the number of immediate predecessors of the supplied node\. |
| `s.predecessors(node)` | List | Returns the immediate predecessors of the supplied node\. |
| `s.successorAt(node, index)` | Node | Returns the specified immediate successor of the supplied node or `None` if the index is out of bounds\. |
| `s.successorCount(node)` | *int* | Returns the number of immediate successors of the supplied node\. |
| `s.successors(node)` | List | Returns the immediate successors of the supplied node\. |
<!-- </table "summary="Methods to identify upstream and downstream nodes" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
B6EC6454711B4946DBC663324DC478953723B1DD | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_nodes_and_streams.html?context=cdpaas&locale=en | Creating nodes and modifying flows | Creating nodes and modifying flows
In some situations, you might want to add new nodes to existing flows. Adding nodes to existing flows typically involves the following tasks:
1. Creating the nodes.
2. Linking the nodes into the existing flow.
| # Creating nodes and modifying flows #
In some situations, you might want to add new nodes to existing flows\. Adding nodes to existing flows typically involves the following tasks:
<!-- <ol> -->
1. Creating the nodes\.
2. Linking the nodes into the existing flow\.
<!-- </ol> -->
<!-- </article "role="article" "> -->
|
9E77548AF396E9E9474371705BCFFF55684C5760 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_object_oriented.html?context=cdpaas&locale=en | Object-oriented programming | Object-oriented programming
Object-oriented programming is based on the notion of creating a model of the target problem in your programs. Object-oriented programming reduces programming errors and promotes the reuse of code. Python is an object-oriented language. Objects defined in Python have the following features:
* Identity. Each object must be distinct, and this must be testable. The is and is not tests exist for this purpose.
* State. Each object must be able to store state. Attributes, such as fields and instance variables, exist for this purpose.
* Behavior. Each object must be able to manipulate its state. Methods exist for this purpose.
Python includes the following features for supporting object-oriented programming:
* Class-based object creation. Classes are templates for the creation of objects. Objects are data structures with associated behavior.
* Inheritance with polymorphism. Python supports single and multiple inheritance. All Python instance methods are polymorphic and can be overridden by subclasses.
* Encapsulation with data hiding. Python allows attributes to be hidden. When hidden, you can access attributes from outside the class only through methods of the class. Classes implement methods to modify the data.
| # Object\-oriented programming #
Object\-oriented programming is based on the notion of creating a model of the target problem in your programs\. Object\-oriented programming reduces programming errors and promotes the reuse of code\. Python is an object\-oriented language\. Objects defined in Python have the following features:
<!-- <ul> -->
* Identity\. Each object must be distinct, and this must be testable\. The `is` and `is not` tests exist for this purpose\.
* State\. Each object must be able to store state\. Attributes, such as fields and instance variables, exist for this purpose\.
* Behavior\. Each object must be able to manipulate its state\. Methods exist for this purpose\.
<!-- </ul> -->
Python includes the following features for supporting object\-oriented programming:
<!-- <ul> -->
* Class\-based object creation\. Classes are templates for the creation of objects\. Objects are data structures with associated behavior\.
* Inheritance with polymorphism\. Python supports single and multiple inheritance\. All Python instance methods are polymorphic and can be overridden by subclasses\.
* Encapsulation with data hiding\. Python allows attributes to be hidden\. When hidden, you can access attributes from outside the class only through methods of the class\. Classes implement methods to modify the data\.
<!-- </ul> -->
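The following minimal sketch (plain Python, not specific to SPSS Modeler) illustrates these ideas: a class acts as a template, each instance holds its own state, and methods manipulate that state:

class Counter(object):
    def __init__(self):
        self.count = 0                 # state held in an instance variable
    def increment(self):               # behavior implemented as a method
        self.count = self.count + 1

a = Counter()
b = Counter()
a.increment()
print(a is b)       # False: a and b have distinct identities
print(a.count)      # 1
print(b.count)      # 0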
<!-- </article "role="article" "> -->
|
381D767DECD07EF388611FD22C3F08FB89BA73EC | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_scripting_context.html?context=cdpaas&locale=en | The scripting context | The scripting context
The modeler.script module provides the context in which a script runs. The module is automatically imported into an SPSS® Modeler script at run time. The module defines four functions that provide a script with access to its execution environment:
* The session() function returns the session for the script. The session defines information such as the locale and the SPSS Modeler backend (either a local process or a networked SPSS Modeler Server) that's being used to run any flows.
* The stream() function can be used with flow and SuperNode scripts. This function returns the flow that owns either the flow script or the SuperNode script that's being run.
* The diagram() function can be used with SuperNode scripts. This function returns the diagram within the SuperNode. For other script types, this function returns the same as the stream() function.
* The supernode() function can be used with SuperNode scripts. This function returns the SuperNode that owns the script that's being run.
The four functions and their outputs are summarized in the following table.
Summary of modeler.script functions
Table 1. Summary of modeler.script functions
Script type session() stream() diagram() supernode()
Standalone Returns a session Returns the current managed flow at the time the script was invoked (for example, the flow passed via the batch mode -stream option), or None. Same as for stream() Not applicable
Flow Returns a session Returns a flow Same as for stream() Not applicable
SuperNode Returns a session Returns a flow Returns a SuperNode flow Returns a SuperNode
The modeler.script module also defines a way of terminating the script with an exit code. The exit(exit-code) function stops the script from running and returns the supplied integer exit code.
One of the methods that's defined for a flow is runAll(List). This method runs all executable nodes. Any models or outputs that are generated by running the nodes are added to the supplied list.
It's common for a flow run to generate outputs such as models, graphs, and other output. To capture this output, a script can supply a variable that's initialized to a list. For example:
stream = modeler.script.stream()
results = []
stream.runAll(results)
When execution is complete, any objects that are generated by the execution can be accessed from the results list.
| # The scripting context #
The `modeler.script` module provides the context in which a script runs\. The module is automatically imported into an SPSS® Modeler script at run time\. The module defines four functions that provide a script with access to its execution environment:
<!-- <ul> -->
* The `session()` function returns the session for the script\. The session defines information such as the locale and the SPSS Modeler backend (either a local process or a networked SPSS Modeler Server) that's being used to run any flows\.
* The `stream()` function can be used with flow and SuperNode scripts\. This function returns the flow that owns either the flow script or the SuperNode script that's being run\.
* The `diagram()` function can be used with SuperNode scripts\. This function returns the diagram within the SuperNode\. For other script types, this function returns the same as the `stream()` function\.
* The `supernode()` function can be used with SuperNode scripts\. This function returns the SuperNode that owns the script that's being run\.
<!-- </ul> -->
The four functions and their outputs are summarized in the following table\.
<!-- <table "summary="Summary of modeler.script functions" class="defaultstyle" "> -->
Summary of modeler.script functions
Table 1\. Summary of `modeler.script` functions
| Script type | `session()` | `stream()` | `diagram()` | `supernode()` |
| ----------- | ----------------- | --------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------ | ------------------- |
| Standalone | Returns a session | Returns the current managed flow at the time the script was invoked (for example, the flow passed via the batch mode `-stream` option), or `None`\. | Same as for `stream()` | Not applicable |
| Flow | Returns a session | Returns a flow | Same as for `stream()` | Not applicable |
| SuperNode | Returns a session | Returns a flow | Returns a SuperNode flow | Returns a SuperNode |
<!-- </table "summary="Summary of modeler.script functions" class="defaultstyle" "> -->
The `modeler.script` module also defines a way of terminating the script with an exit code\. The `exit(exit-code)` function stops the script from running and returns the supplied integer exit code\.
One of the methods that's defined for a flow is `runAll(List)`\. This method runs all executable nodes\. Any models or outputs that are generated by running the nodes are added to the supplied list\.
It's common for a flow run to generate outputs such as models, graphs, and other output\. To capture this output, a script can supply a variable that's initialized to a list\. For example:
stream = modeler.script.stream()
results = []
stream.runAll(results)
When execution is complete, any objects that are generated by the execution can be accessed from the `results` list\.
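The `exit()` function can be combined with such a check\. For example, the following sketch stops the script with a non\-zero exit code if the run produced no output objects:

if len(results) == 0:
    modeler.script.exit(1)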
<!-- </article "role="article" "> -->
|
65998CB8747B70477477179E023332FD410E72D6 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_scripting_types.html?context=cdpaas&locale=en | Scripting in SPSS Modeler | Scripting in SPSS Modeler
| # Scripting in SPSS Modeler #
<!-- </article "role="article" "> -->
|
B416F3605ADF246170E1B462EE0F2CFCDF5E591B | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_setting_properties.html?context=cdpaas&locale=en | Setting properties | Setting properties
Nodes, flows, models, and outputs all have properties that can be accessed and, in most cases, set. Properties are typically used to modify the behavior or appearance of the object. The methods that are available for accessing and setting object properties are summarized in the following table.
Methods for accessing and setting object properties
Table 1. Methods for accessing and setting object properties
Method Return type Description
p.getPropertyValue(propertyName) Object Returns the value of the named property or None if no such property exists.
p.setPropertyValue(propertyName, value) Not applicable Sets the value of the named property.
p.setPropertyValues(properties) Not applicable Sets the values of the named properties. Each entry in the properties map consists of a key that represents the property name and the value that should be assigned to that property.
p.getKeyedPropertyValue(propertyName, keyName) Object Returns the value of the named property and associated key or None if no such property or key exists.
p.setKeyedPropertyValue(propertyName, keyName, value) Not applicable Sets the value of the named property and key.
For example, the following script sets the value of a Derive node for a flow:
stream = modeler.script.stream()
node = stream.findByType("derive", None)
node.setPropertyValue("name_extension", "new_derive")
Alternatively, you might want to filter a field from a Filter node. In this case, the value is also keyed on the field name. For example:
stream = modeler.script.stream()
# Locate the filter node ...
node = stream.findByType("filter", None)
# ... and filter out the "Na" field
node.setKeyedPropertyValue("include", "Na", False)
| # Setting properties #
Nodes, flows, models, and outputs all have properties that can be accessed and, in most cases, set\. Properties are typically used to modify the behavior or appearance of the object\. The methods that are available for accessing and setting object properties are summarized in the following table\.
<!-- <table "summary="Methods for accessing and setting object properties" class="defaultstyle" "> -->
Methods for accessing and setting object properties
Table 1\. Methods for accessing and setting object properties
| Method | Return type | Description |
| -------------------------------------------------------- | -------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `p.getPropertyValue(propertyName)` | Object | Returns the value of the named property or `None` if no such property exists\. |
| `p.setPropertyValue(propertyName, value)` | Not applicable | Sets the value of the named property\. |
| `p.setPropertyValues(properties)` | Not applicable | Sets the values of the named properties\. Each entry in the properties map consists of a key that represents the property name and the value that should be assigned to that property\. |
| `p.getKeyedPropertyValue(propertyName, keyName)`        | Object         | Returns the value of the named property and associated key or `None` if no such property or key exists\.                                                                                   |
| `p.setKeyedPropertyValue(propertyName, keyName, value)` | Not applicable | Sets the value of the named property and key\.                                                                                                                                             |
<!-- </table "summary="Methods for accessing and setting object properties" class="defaultstyle" "> -->
For example, the following script sets the value of a Derive node for a flow:
stream = modeler.script.stream()
node = stream.findByType("derive", None)
node.setPropertyValue("name_extension", "new_derive")
Alternatively, you might want to filter a field from a Filter node\. In this case, the value is also keyed on the field name\. For example:
stream = modeler.script.stream()
# Locate the filter node ...
node = stream.findByType("filter", None)
# ... and filter out the "Na" field
node.setKeyedPropertyValue("include", "Na", False)
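To set several properties at once, you can pass a map of property names and values to `setPropertyValues`\. For example, the following sketch configures a Derive node in a single call:

stream = modeler.script.stream()
node = stream.findByType("derive", None)
node.setPropertyValues({"mode": "Multiple", "name_extension": "new_derive"})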
<!-- </article "role="article" "> -->
|
542F90CA456DCCC3D79DBF6DC9E8A6755B3BA69E | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_stream_execution.html?context=cdpaas&locale=en | Running a flow | Running a flow
The following example runs all executable nodes in the flow, and is the simplest type of flow script:
modeler.script.stream().runAll(None)
The following example also runs all executable nodes in the flow:
stream = modeler.script.stream()
stream.runAll(None)
In this example, the flow is stored in a variable called stream. Storing the flow in a variable is useful because a script is typically used to modify either the flow or the nodes within a flow, and referring to the flow through a variable makes the script more concise.
| # Running a flow #
The following example runs all executable nodes in the flow, and is the simplest type of flow script:
modeler.script.stream().runAll(None)
The following example also runs all executable nodes in the flow:
stream = modeler.script.stream()
stream.runAll(None)
In this example, the flow is stored in a variable called `stream`\. Storing the flow in a variable is useful because a script is typically used to modify either the flow or the nodes within a flow\. Creating a variable that stores the flow results in a more concise script\.
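The `None` argument to `runAll` discards any outputs that execution produces\. A minimal sketch that passes a list instead, so the script can collect those outputs:

stream = modeler.script.stream()
results = []    # execution appends any output objects to this list
stream.runAll(results)
print "Number of outputs produced:", len(results)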
<!-- </article "role="article" "> -->
|
D1CDE4FF34352A6E5CDC9914FD26CF72574E2D59 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_streams.html?context=cdpaas&locale=en | Flows | Flows
A flow is the main IBM® SPSS® Modeler document type. It can be saved, loaded, edited and executed. Flows can also have parameters, global values, a script, and other information associated with them.
| # Flows #
A flow is the main IBM® SPSS® Modeler document type\. It can be saved, loaded, edited and executed\. Flows can also have parameters, global values, a script, and other information associated with them\.
<!-- </article "role="article" "> -->
|
6524DFDEABF32BAE384ACB9BB21637ADE3B4AC4F | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_streams_diagrams.html?context=cdpaas&locale=en | Flows, SuperNode streams, and diagrams | Flows, SuperNode streams, and diagrams
Most of the time, the term flow means the same thing, regardless of whether it's a flow that's loaded from a file or used within a SuperNode. It generally means a collection of nodes that are connected together and can be executed. In scripting, however, not all operations are supported in all places, so as a script author you should be aware of which flow variant you're working with.
| # Flows, SuperNode streams, and diagrams #
Most of the time, the term flow means the same thing, regardless of whether it's a flow that's loaded from a file or used within a SuperNode\. It generally means a collection of nodes that are connected together and can be executed\. In scripting, however, not all operations are supported in all places, so as a script author you should be aware of which flow variant you're working with\.
<!-- </article "role="article" "> -->
|
A4799F6BDEA1B1508528FC647DAD5D1B2EF777AA | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_supernode_streams.html?context=cdpaas&locale=en | SuperNode flows | SuperNode flows
A SuperNode flow is the type of flow used within a SuperNode. Like a normal flow, it contains nodes that are linked together. SuperNode flows differ from normal flows in various ways:
* Parameters and any scripts are associated with the SuperNode that owns the SuperNode flow, rather than with the SuperNode flow itself.
* SuperNode flows have additional input and output connector nodes, depending on the type of SuperNode. These connector nodes are used to push information into and out of the SuperNode flow, and are created automatically when the SuperNode is created.
| # SuperNode flows #
A SuperNode flow is the type of flow used within a SuperNode\. Like a normal flow, it contains nodes that are linked together\. SuperNode flows differ from normal flows in various ways:
<!-- <ul> -->
* Parameters and any scripts are associated with the SuperNode that owns the SuperNode flow, rather than with the SuperNode flow itself (see the sketch after this list)\.
* SuperNode flows have additional input and output connector nodes, depending on the type of SuperNode\. These connector nodes are used to push information into and out of the SuperNode flow, and are created automatically when the SuperNode is created\.
<!-- </ul> -->
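Because parameters belong to the owning SuperNode, a script sets and reads them on the SuperNode object rather than on the flow inside it\. A minimal sketch, assuming the flow contains a process SuperNode that defines a parameter named `minvalue` (the SuperNode type and the parameter name are illustrative):

stream = modeler.script.stream()
# Locate the first process SuperNode in the flow
supernode = stream.findByType("process_super", None)
supernode.setParameterValue("minvalue", 30)
print supernode.getParameterValue("minvalue")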
<!-- </article "role="article" "> -->
|
D6DB1FBF1B0A11FD3423B6F057182019496FF3F5 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_syntax.html?context=cdpaas&locale=en | Python scripting | Python scripting
This guide to the Python scripting language is an introduction to the components that you're most likely to use when scripting in SPSS Modeler, including concepts and programming basics.
This provides you with enough knowledge to start developing your own Python scripts to use in SPSS Modeler.
| # Python scripting #
This guide to the Python scripting language is an introduction to the components that you're most likely to use when scripting in SPSS Modeler, including concepts and programming basics\.
This provides you with enough knowledge to start developing your own Python scripts to use in SPSS Modeler\.
<!-- </article "role="article" "> -->
|
C6B9BD6294C9A3EF6CD7E45E1B3765C061D92CC3 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_syntax_ascii.html?context=cdpaas&locale=en | Using non-ASCII characters | Using non-ASCII characters
To use non-ASCII characters, Python requires explicit encoding and decoding of strings into Unicode. In SPSS Modeler, Python scripts are assumed to be encoded in UTF-8, which is a standard Unicode encoding that supports non-ASCII characters. The following script will compile because the Python compiler has been set to UTF-8 by SPSS Modeler.
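A minimal sketch of such a script (the node type, label, and position are illustrative; note that the label is a plain string literal):

stream = modeler.script.stream()
# The node label contains a non-ASCII character but no u prefix
node = stream.createAt("derive", "naïve", 96, 96)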
However, the resulting node has an incorrect label.
Figure 1. Node label containing non-ASCII characters, displayed incorrectly
The label is incorrect because the string literal itself has been converted to an ASCII string by Python.
Python allows Unicode string literals to be specified by adding a u character prefix before the string literal:
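A minimal sketch of the corrected script (again, the node type, label, and position are illustrative):

stream = modeler.script.stream()
# The u prefix makes the label a Unicode string literal
node = stream.createAt("derive", u"naïve", 96, 96)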
This will create a Unicode string and the label will appear correctly.
Figure 2. Node label containing non-ASCII characters, displayed correctly
Using Python and Unicode is a large topic that's beyond the scope of this document. Many books and online resources are available that cover this topic in great detail.
| # Using non\-ASCII characters #
To use non\-ASCII characters, Python requires explicit encoding and decoding of strings into Unicode\. In SPSS Modeler, Python scripts are assumed to be encoded in UTF\-8, which is a standard Unicode encoding that supports non\-ASCII characters\. The following script will compile because the Python compiler has been set to UTF\-8 by SPSS Modeler\.
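A minimal sketch of such a script (the node type, label, and position are illustrative; note that the label is a plain string literal):

stream = modeler.script.stream()
# The node label contains a non-ASCII character but no u prefix
node = stream.createAt("derive", "naïve", 96, 96)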
However, the resulting node has an incorrect label\.
Figure 1\. Node label containing non\-ASCII characters, displayed incorrectly
The label is incorrect because the string literal itself has been converted to an ASCII string by Python\.
Python allows Unicode string literals to be specified by adding a `u` character prefix before the string literal:
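A minimal sketch of the corrected script (again, the node type, label, and position are illustrative):

stream = modeler.script.stream()
# The u prefix makes the label a Unicode string literal
node = stream.createAt("derive", u"naïve", 96, 96)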
This will create a Unicode string and the label will appear correctly\.
Figure 2\. Node label containing non\-ASCII characters, displayed correctly
Using Python and Unicode is a large topic that's beyond the scope of this document\. Many books and online resources are available that cover this topic in great detail\.
<!-- </article "role="article" "> -->
|
2413C64687E434B4B2095163A5106C0C62AA3F59 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_syntax_codeblocks.html?context=cdpaas&locale=en | Blocks of code | Blocks of code
Blocks of code are groups of statements you can use where single statements are expected.
Blocks of code can follow any of the following statements: if, elif, else, for, while, try, except, def, and class. These statements introduce the block of code with the colon character (:). For example:
if x == 1:
y = 2
z = 3
elif x == 2:
y = 4
z = 5
Use indentation to delimit code blocks (rather than the curly braces used in Java). All lines in a block must be indented to the same position. This is because a change in the indentation indicates the end of a code block. It's common to indent by four spaces per level. We recommend you use spaces to indent the lines, rather than tabs. Spaces and tabs must not be mixed. The lines in the outermost block of a module must start at column one, or a SyntaxError will occur.
The statements that make up a code block (and follow the colon) can also be on a single line, separated by semicolons. For example:
if x == 1: y = 2; z = 3;
| # Blocks of code #
Blocks of code are groups of statements you can use where single statements are expected\.
Blocks of code can follow any of the following statements: `if`, `elif`, `else`, `for`, `while`, `try`, `except`, `def`, and `class`\. These statements introduce the block of code with the colon character (`:`)\. For example:
if x == 1:
y = 2
z = 3
elif x == 2:
y = 4
z = 5
Use indentation to delimit code blocks (rather than the curly braces used in Java)\. All lines in a block must be indented to the same position\. This is because a change in the indentation indicates the end of a code block\. It's common to indent by four spaces per level\. We recommend you use spaces to indent the lines, rather than tabs\. Spaces and tabs must not be mixed\. The lines in the outermost block of a module must start at column one, or a SyntaxError will occur\.
The statements that make up a code block (and follow the colon) can also be on a single line, separated by semicolons\. For example:
if x == 1: y = 2; z = 3;
<!-- </article "role="article" "> -->
|
20D6B2732BE17C12226F186559FBEA647799F3B8 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_syntax_examples.html?context=cdpaas&locale=en | Examples | Examples
The print keyword prints the arguments immediately following it. If the statement is followed by a comma, a new line isn't included in the output. For example:
print "This demonstrates the use of a",
print " comma at the end of a print statement."
This will result in the following output:
This demonstrates the use of a comma at the end of a print statement.
The for statement iterates through a block of code. For example:
mylist1 = ["one", "two", "three"]
for lv in mylist1:
print lv
continue
In this example, three strings are assigned to the list mylist1. The elements of the list are then printed, with one element on each line. This results in the following output:
one
two
three
In this example, the iterator lv takes the value of each element in the list mylist1 in turn as the for loop implements the code block for each element. An iterator can be any valid identifier of any length.
The if statement is a conditional statement. It evaluates the condition as either true or false and runs the corresponding block of code, depending on the result of the evaluation. For example:
mylist1 = ["one", "two", "three"]
for lv in mylist1:
if lv == "two":
print "The value of lv is ", lv
else:
print "The value of lv is not two, but ", lv
continue
In this example, the value of the iterator lv is evaluated. If the value of lv is two, one string is printed; otherwise, a different string is printed. This results in the following output:
The value of lv is not two, but one
The value of lv is two
The value of lv is not two, but three
| # Examples #
The `print` keyword prints the arguments immediately following it\. If the statement is followed by a comma, a new line isn't included in the output\. For example:
print "This demonstrates the use of a",
print " comma at the end of a print statement."
This will result in the following output:
This demonstrates the use of a comma at the end of a print statement.
The `for` statement iterates through a block of code\. For example:
mylist1 = ["one", "two", "three"]
for lv in mylist1:
print lv
continue
In this example, three strings are assigned to the list `mylist1`\. The elements of the list are then printed, with one element on each line\. This results in the following output:
one
two
three
In this example, the iterator `lv` takes the value of each element in the list `mylist1` in turn as the `for` loop implements the code block for each element\. An iterator can be any valid identifier of any length\.
The `if` statement is a conditional statement\. It evaluates the condition as either true or false and runs the corresponding block of code, depending on the result of the evaluation\. For example:
mylist1 = ["one", "two", "three"]
for lv in mylist1:
if lv == "two":
print "The value of lv is ", lv
else:
print "The value of lv is not two, but ", lv
continue
In this example, the value of the iterator `lv` is evaluated\. If the value of `lv` is `two`, one string is printed; otherwise, a different string is printed\. This results in the following output:
The value of lv is not two, but one
The value of lv is two
The value of lv is not two, but three
<!-- </article "role="article" "> -->
|
03C28B0A536906CA3597B4D382759BD791D0CFEC | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_syntax_identifiers.html?context=cdpaas&locale=en | Identifiers | Identifiers
Identifiers are used to name variables, functions, classes, and keywords.
Identifiers can be any length, but must start with an uppercase or lowercase alphabetical character, or the underscore character (_). Names that start with an underscore are generally reserved for internal or private names. After the first character, the identifier can contain any number and combination of alphabetical characters, digits from 0-9, and the underscore character.
There are some reserved words in Jython that can't be used to name variables, functions, or classes. They fall under the following categories:
* Statement introducers: assert, break, class, continue, def, del, elif, else, except, exec, finally, for, from, global, if, import, pass, print, raise, return, try, and while
* Parameter introducers: as, import, and in
* Operators: and, in, is, lambda, not, and or
Improper keyword use generally results in a SyntaxError.
| # Identifiers #
Identifiers are used to name variables, functions, classes, and keywords\.
Identifiers can be any length, but must start with an uppercase or lowercase alphabetical character, or the underscore character (`_`)\. Names that start with an underscore are generally reserved for internal or private names\. After the first character, the identifier can contain any number and combination of alphabetical characters, digits from 0\-9, and the underscore character\.
There are some reserved words in Jython that can't be used to name variables, functions, or classes\. They fall under the following categories:
<!-- <ul> -->
* Statement introducers: `assert`, `break`, `class`, `continue`, `def`, `del`, `elif`, `else`, `except`, `exec`, `finally`, `for`, `from`, `global`, `if`, `import`, `pass`, `print`, `raise`, `return`, `try`, and `while`
* Parameter introducers: `as`, `import`, and `in`
* Operators: `and`, `in`, `is`, `lambda`, `not`, and `or`
<!-- </ul> -->
Improper keyword use generally results in a SyntaxError\.
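For example, a short sketch of identifiers that follow these rules, and two that don't:

total = 10        # valid identifier
_cache2 = {}      # valid; the leading underscore suggests an internal or private name
# 2total = 30     # invalid: an identifier can't start with a number
# class = "abc"   # invalid: class is a reserved word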
<!-- </article "role="article" "> -->
|
659E43BA12550AA1E885BAEC945B7B1B25FD18E2 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_syntax_lists.html?context=cdpaas&locale=en | Lists | Lists
Lists are sequences of elements. A list can contain any number of elements, and the elements of the list can be any type of object. Lists can also be thought of as arrays. The number of elements in a list can increase or decrease as elements are added, removed, or replaced.
| # Lists #
Lists are sequences of elements\. A list can contain any number of elements, and the elements of the list can be any type of object\. Lists can also be thought of as arrays\. The number of elements in a list can increase or decrease as elements are added, removed, or replaced\.
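For example, a short sketch of a list growing, shrinking, and being updated in place:

mylist = ["one", 2, 3.0]   # elements can be any type of object
mylist.append("four")      # add an element
mylist[1] = "two"          # replace an element
del mylist[0]              # remove an element
print mylist               # ['two', 3.0, 'four']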
<!-- </article "role="article" "> -->
|
F837E34ED0AD4739783010D9FFD3684C37FD465C | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_syntax_mathfunctions.html?context=cdpaas&locale=en | Mathematical methods | Mathematical methods
From the math module you can access useful mathematical methods. Some of these methods are listed in the following table. Unless specified otherwise, all values are returned as floats.
Mathematical methods
Table 1. Mathematical methods
Method Usage
math.ceil(x) Return the ceiling of x as a float, that is the smallest integer greater than or equal to x
math.copysign(x, y) Return x with the sign of y. copysign(1, -0.0) returns -1
math.fabs(x) Return the absolute value of x
math.factorial(x) Return x factorial. If x is negative or not an integer, a ValueError is raised.
math.floor(x) Return the floor of x as a float, that is the largest integer less than or equal to x
math.frexp(x) Return the mantissa (m) and exponent (e) of x as the pair (m, e). m is a float and e is an integer, such that x == m * 2**e exactly. If x is zero, returns (0.0, 0), otherwise 0.5 <= abs(m) < 1.
math.fsum(iterable) Return an accurate floating point sum of values in iterable
math.isinf(x) Check if the float x is positive or negative infinity
math.isnan(x) Check if the float x is NaN (not a number)
math.ldexp(x, i) Return x * (2**i). This is essentially the inverse of the function frexp.
math.modf(x) Return the fractional and integer parts of x. Both results carry the sign of x and are floats.
math.trunc(x) Return the Real value x truncated to an Integral.
math.exp(x) Return e**x
math.log(x[, base]) Return the logarithm of x to the given value of base. If base is not specified, the natural logarithm of x is returned.
math.log1p(x) Return the natural logarithm of 1+x (base e)
math.log10(x) Return the base-10 logarithm of x
math.pow(x, y) Return x raised to the power y. pow(1.0, x) and pow(x, 0.0) always return 1, even when x is zero or NaN.
math.sqrt(x) Return the square root of x
Along with the mathematical functions, there are also some useful trigonometric methods. These methods are listed in the following table.
Trigonometric methods
Table 2. Trigonometric methods
Method Usage
math.acos(x) Return the arc cosine of x in radians
math.asin(x) Return the arc sine of x in radians
math.atan(x) Return the arc tangent of x in radians
math.atan2(y, x) Return atan(y / x) in radians.
math.cos(x) Return the cosine of x in radians.
math.hypot(x, y) Return the Euclidean norm sqrt(x*x + y*y). This is the length of the vector from the origin to the point (x, y).
math.sin(x) Return the sine of x in radians
math.tan(x) Return the tangent of x in radians
math.degrees(x) Convert angle x from radians to degrees
math.radians(x) Convert angle x from degrees to radians
math.acosh(x) Return the inverse hyperbolic cosine of x
math.asinh(x) Return the inverse hyperbolic sine of x
math.atanh(x) Return the inverse hyperbolic tangent of x
math.cosh(x) Return the hyperbolic cosine of x
math.sinh(x) Return the hyperbolic sine of x
math.tanh(x) Return the hyperbolic tangent of x
There are also two mathematical constants. The value of math.pi is the mathematical constant pi. The value of math.e is the mathematical constant e.
| # Mathematical methods #
From the `math` module you can access useful mathematical methods\. Some of these methods are listed in the following table\. Unless specified otherwise, all values are returned as floats\.
<!-- <table "summary="Mathematical methods" class="defaultstyle" "> -->
Mathematical methods
Table 1\. Mathematical methods
| Method | Usage |
| --------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `math.ceil(x)` | Return the ceiling of `x` as a float, that is the smallest integer greater than or equal to `x` |
| `math.copysign(x, y)` | Return `x` with the sign of `y`\. `copysign(1, -0.0)` returns `-1` |
| `math.fabs(x)` | Return the absolute value of `x` |
| `math.factorial(x)` | Return `x` factorial\. If `x` is negative or not an integer, a `ValueError` is raised\. |
| `math.floor(x)` | Return the floor of `x` as a float, that is the largest integer less than or equal to `x` |
| `math.frexp(x)` | Return the mantissa (`m`) and exponent (`e`) of `x` as the pair `(m, e)`\. `m` is a float and `e` is an integer, such that `x == m * 2**e` exactly\. If `x` is zero, returns `(0.0, 0)`, otherwise `0.5 <= abs(m) < 1`\. |
| `math.fsum(iterable)` | Return an accurate floating point sum of values in `iterable` |
| `math.isinf(x)` | Check if the float `x` is positive or negative infinity |
| `math.isnan(x)` | Check if the float `x` is `NaN` (not a number) |
| `math.ldexp(x, i)` | Return `x * (2**i)`\. This is essentially the inverse of the function `frexp`\. |
| `math.modf(x)` | Return the fractional and integer parts of `x`\. Both results carry the sign of `x` and are floats\. |
| `math.trunc(x)` | Return the `Real` value `x`, that has been truncated to an `Integral`\. |
| `math.exp(x)` | Return `e**x` |
| `math.log(x[, base])` | Return the logarithm of `x` to the given value of `base`\. If `base` is not specified, the natural logarithm of `x` is returned\. |
| `math.log1p(x)` | Return the natural logarithm of `1+x (base e)` |
| `math.log10(x)` | Return the base\-10 logarithm of `x` |
| `math.pow(x, y)` | Return `x` raised to the power `y`\. `pow(1.0, x)` and `pow(x, 0.0)` always return `1`, even when `x` is zero or NaN\. |
| `math.sqrt(x)` | Return the square root of `x` |
<!-- </table "summary="Mathematical methods" class="defaultstyle" "> -->
Along with the mathematical functions, there are also some useful trigonometric methods\. These methods are listed in the following table\.
<!-- <table "summary="Trigonometric methods" class="defaultstyle" "> -->
Trigonometric methods
Table 2\. Trigonometric methods
| Method | Usage |
| ------------------ | ---------------------------------------------------------------------------------------------------------------------- |
| `math.acos(x)` | Return the arc cosine of `x` in radians |
| `math.asin(x)` | Return the arc sine of `x` in radians |
| `math.atan(x)` | Return the arc tangent of `x` in radians |
| `math.atan2(y, x)` | Return `atan(y / x)` in radians\. |
| `math.cos(x)` | Return the cosine of `x` in radians\. |
| `math.hypot(x, y)` | Return the Euclidean norm `sqrt(x*x + y*y)`\. This is the length of the vector from the origin to the point `(x, y)`\. |
| `math.sin(x)` | Return the sine of `x` in radians |
| `math.tan(x)` | Return the tangent of `x` in radians |
| `math.degrees(x)` | Convert angle `x` from radians to degrees |
| `math.radians(x)` | Convert angle `x` from degrees to radians |
| `math.acosh(x)` | Return the inverse hyperbolic cosine of `x` |
| `math.asinh(x)` | Return the inverse hyperbolic sine of `x` |
| `math.atanh(x)` | Return the inverse hyperbolic tangent of `x` |
| `math.cosh(x)` | Return the hyperbolic cosine of `x` |
| `math.sinh(x)` | Return the hyperbolic sine of `x` |
| `math.tanh(x)` | Return the hyperbolic tangent of `x` |
<!-- </table "summary="Trigonometric methods" class="defaultstyle" "> -->
There are also two mathematical constants\. The value of `math.pi` is the mathematical constant pi\. The value of `math.e` is the mathematical constant e\.
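For example, a short sketch that uses a few of these methods and constants:

import math

print math.sqrt(16)          # 4.0
print math.ceil(2.3)         # 3.0
print math.degrees(math.pi)  # 180.0
print math.log(math.e)       # 1.0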
<!-- </article "role="article" "> -->
|
48CCA78CEB92570BCE08F4E1A5677E8CD7936095 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_syntax_operations.html?context=cdpaas&locale=en | Operations | Operations
Use an equals sign (=) to assign values.
For example, to assign the value 3 to a variable called x, you would use the following statement:
x = 3
You can also use the equals sign to assign string type data to a variable. For example, to assign the value a string value to the variable y, you would use the following statement:
y = "a string value"
The following table lists some commonly used comparison and numeric operations, and their descriptions.
Common comparison and numeric operations
Table 1. Common comparison and numeric operations
Operation Description
x < y Is x less than y?
x > y Is x greater than y?
x <= y Is x less than or equal to y?
x >= y Is x greater than or equal to y?
x == y Is x equal to y?
x != y Is x not equal to y?
x <> y Is x not equal to y?
x + y Add y to x
x - y Subtract y from x
x * y Multiply x by y
x / y Divide x by y
x ** y Raise x to the y power
| # Operations #
Use an equals sign (`=`) to assign values\.
For example, to assign the value `3` to a variable called `x`, you would use the following statement:
x = 3
You can also use the equals sign to assign string type data to a variable\. For example, to assign the value `a string value` to the variable `y`, you would use the following statement:
y = "a string value"
The following table lists some commonly used comparison and numeric operations, and their descriptions\.
<!-- <table "summary="Common comparison and numeric operations" class="defaultstyle" "> -->
Common comparison and numeric operations
Table 1\. Common comparison and numeric operations
| Operation | Description |
| -------------- | ------------------------------------ |
| `x < y` | Is `x` less than `y`? |
| `x > y` | Is `x` greater than `y`? |
| `x <= y` | Is `x` less than or equal to `y`? |
| `x >= y` | Is `x` greater than or equal to `y`? |
| `x == y` | Is `x` equal to `y`? |
| `x != y` | Is `x` not equal to `y`? |
| `x <> y` | Is `x` not equal to `y`? |
| `x + y` | Add `y` to `x` |
| `x - y` | Subtract `y` from `x` |
| `x * y` | Multiply `x` by `y` |
| `x / y` | Divide `x` by `y` |
| `x ** y` | Raise `x` to the `y` power |
<!-- </table "summary="Common comparison and numeric operations" class="defaultstyle" "> -->
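For example, a short sketch that uses some of these operations\. Note that in Python 2, dividing one integer by another performs integer division:

x = 7
y = 2
print x > y    # True
print x == y   # False
print x ** y   # 49
print x / y    # 3 (integer division, because both operands are integers)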
<!-- </article "role="article" "> -->
|
622526F6C171CED140394F3DD707B612778B661E | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_syntax_passingargs.html?context=cdpaas&locale=en | Passing arguments to a script | Passing arguments to a script
Passing arguments to a script is useful because a script can be used repeatedly without modification.
The arguments you pass on the command line are passed as values in the list sys.argv. You can use the len(sys.argv) command to obtain the number of values passed. For example:
import sys
print "test1"
print sys.argv[0]
print sys.argv[1]
print len(sys.argv)
In this example, the import command imports the entire sys module so that you can use its attributes, such as argv.
The script in this example can be invoked using the following line:
/u/mjloos/test1 mike don
The result is the following output:
/u/mjloos/test1 mike don
test1
mike
don
3
| # Passing arguments to a script #
Passing arguments to a script is useful because a script can be used repeatedly without modification\.
The arguments you pass on the command line are passed as values in the list `sys.argv`\. You can use the `len(sys.argv)` command to obtain the number of values passed\. For example:
import sys
print "test1"
print sys.argv[0]
print sys.argv[1]
print len(sys.argv)
In this example, the `import` command imports the entire `sys` module so that you can use its attributes, such as `argv`\.
The script in this example can be invoked using the following line:
/u/mjloos/test1 mike don
The result is the following output:
/u/mjloos/test1 mike don
test1
mike
don
3
<!-- </article "role="article" "> -->
|
03A70C271775C3B15541B86E53E467844EF87296 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_syntax_remarks.html?context=cdpaas&locale=en | Remarks | Remarks
Remarks are comments that are introduced by the pound (or hash) sign (#). All text that follows the pound sign on the same line is considered part of the remark and is ignored. A remark can start in any column.
The following example demonstrates the use of remarks:
#The HelloWorld application is one of the most simple
print 'Hello World' # print the Hello World line
| # Remarks #
Remarks are comments that are introduced by the pound (or hash) sign (`#`)\. All text that follows the pound sign on the same line is considered part of the remark and is ignored\. A remark can start in any column\.
The following example demonstrates the use of remarks:
#The HelloWorld application is one of the most simple
print 'Hello World' # print the Hello World line
<!-- </article "role="article" "> -->
|
9F27A4650B0B0BF36223937D0CF60E460B66A723 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_syntax_statements.html?context=cdpaas&locale=en | Statement syntax | Statement syntax
The statement syntax for Python is very simple.
In general, each source line is a single statement. Except for expression and assignment statements, each statement is introduced by a keyword name, such as if or for. Blank lines or remark lines can be inserted anywhere between any statements in the code. If there's more than one statement on a line, each statement must be separated by a semicolon (;).
Very long statements can continue on more than one line. In this case, the statement that is to continue on to the next line must end with a backslash (\). For example:
x = "A loooooooooooooooooooong string" +
"another looooooooooooooooooong string"
When you enclose a structure by parentheses (()), brackets ([]), or curly braces ({}), the statement can be continued on a new line after any comma, without having to insert a backslash. For example:
x = (1, 2, 3, "hello",
"goodbye", 4, 5, 6)
| # Statement syntax #
The statement syntax for Python is very simple\.
In general, each source line is a single statement\. Except for `expression` and `assignment` statements, each statement is introduced by a keyword name, such as `if` or `for`\. Blank lines or remark lines can be inserted anywhere between any statements in the code\. If there's more than one statement on a line, each statement must be separated by a semicolon (`;`)\.
Very long statements can continue on more than one line\. In this case, the statement that is to continue on to the next line must end with a backslash (`\`)\. For example:
x = "A loooooooooooooooooooong string" + \
"another looooooooooooooooooong string"
When you enclose a structure by parentheses (`()`), brackets (`[]`), or curly braces (`{}`), the statement can be continued on a new line after any comma, without having to insert a backslash\. For example:
x = (1, 2, 3, "hello",
"goodbye", 4, 5, 6)
<!-- </article "role="article" "> -->
|
14F850B810E969CE2646D5641300FB407A6C49C5 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_syntax_strings.html?context=cdpaas&locale=en | Strings | Strings
A string is an immutable sequence of characters that's treated as a value. Strings support all of the immutable sequence functions and operators that result in a new string. For example, "abcdef"[1:4] results in the output "bcd".
In Python, characters are represented by strings of length one.
String literals are defined by the use of single or triple quoting. Strings that are defined using single quotes can't span lines, while strings that are defined using triple quotes can. You can enclose a string in single quotes (') or double quotes ("). A string enclosed in one quoting character can contain the other quoting character unescaped, or the same quoting character escaped, that is, preceded by the backslash (\) character.
| # Strings #
A string is an immutable sequence of characters that's treated as a value\. Strings support all of the immutable sequence functions and operators that result in a new string\. For example, `"abcdef"[1:4]` results in the output `"bcd"`\.
In Python, characters are represented by strings of length one\.
String literals are defined by the use of single or triple quoting\. Strings that are defined using single quotes can't span lines, while strings that are defined using triple quotes can\. You can enclose a string in single quotes (`'`) or double quotes (`"`)\. A string enclosed in one quoting character can contain the other quoting character unescaped, or the same quoting character escaped, that is, preceded by the backslash (`\`) character\.
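For example, a short sketch illustrating slicing, quoting, and escaping:

s = "abcdef"
print s[1:4]                 # bcd
print 'single "double"'      # the other quoting character can appear unescaped
print "escaped \" quote"     # or the same quoting character can be escaped
t = """a string that
spans more than one line"""  # triple-quoted strings can span lines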
<!-- </article "role="article" "> -->
|
398A23291331968098B47496D504743991855A61 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/kdemodelnodeslots.html?context=cdpaas&locale=en | kdemodel properties | kdemodel properties
Kernel Density Estimation (KDE)© uses the Ball Tree or KD Tree algorithms for efficient queries, and combines concepts from unsupervised learning, feature engineering, and data modeling. Neighbor-based approaches such as KDE are some of the most popular and useful density estimation techniques. The KDE Modeling and KDE Simulation nodes in SPSS Modeler expose the core features and commonly used parameters of the KDE library. The nodes are implemented in Python.
kdemodel properties
Table 1. kdemodel properties
kdemodel properties Data type Property description
custom_fields boolean This option tells the node to use field information specified here instead of that given in any upstream Type node(s). After selecting this option, specify the following fields as required.
inputs field List of the field names for input.
bandwidth double Default is 1.
kernel string The kernel to use: gaussian, tophat, epanechnikov, exponential, linear, or cosine. Default is gaussian.
algorithm string The tree algorithm to use: kd_tree, ball_tree, or auto. Default is auto.
metric string The metric to use when calculating distance. For the kd_tree algorithm, choose from: Euclidean, Chebyshev, Cityblock, Minkowski, Manhattan, Infinity, P, L2, or L1. For the ball_tree algorithm, choose from: Euclidian, Braycurtis, Chebyshev, Canberra, Cityblock, Dice, Hamming, Infinity, Jaccard, L1, L2, Minkowski, Matching, Manhattan, P, Rogersanimoto, Russellrao, Sokalmichener, Sokalsneath, or Kulsinski. Default is Euclidean.
atol float The desired absolute tolerance of the result. A larger tolerance will generally lead to faster execution. Default is 0.0.
rtol float The desired relative tolerance of the result. A larger tolerance will generally lead to faster execution. Default is 1E-8.
breadth_first boolean Set to True to use a breadth-first approach. Set to False to use a depth-first approach. Default is True.
leaf_size integer The leaf size of the underlying tree. Default is 40. Changing this value may significantly impact the performance.
p_value double Specify the P Value to use if you're using Minkowski for the metric. Default is 1.5.
custom_name
default_node_name
use_HPO
| # kdemodel properties #
Kernel Density Estimation (KDE)© uses the Ball Tree or KD Tree algorithms for efficient queries, and combines concepts from unsupervised learning, feature engineering, and data modeling\. Neighbor\-based approaches such as KDE are some of the most popular and useful density estimation techniques\. The KDE Modeling and KDE Simulation nodes in SPSS Modeler expose the core features and commonly used parameters of the KDE library\. The nodes are implemented in Python\.
<!-- <table "summary="kdemodel properties" id="kdemodelnodeslots__table_r3k_5bw_tz" class="defaultstyle" "> -->
kdemodel properties
Table 1\. kdemodel properties
| `kdemodel` properties | Data type | Property description |
| --------------------- | --------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `custom_fields` | *boolean* | This option tells the node to use field information specified here instead of that given in any upstream Type node(s)\. After selecting this option, specify the following fields as required\. |
| `inputs` | *field* | List of the field names for input\. |
| `bandwidth` | *double* | Default is `1`\. |
| `kernel` | *string* | The kernel to use: `gaussian`, `tophat`, `epanechnikov`, `exponential`, `linear`, or `cosine`\. Default is `gaussian`\. |
| `algorithm` | *string* | The tree algorithm to use: `kd_tree`, `ball_tree`, or `auto`\. Default is `auto`\. |
| `metric` | *string* | The metric to use when calculating distance\. For the `kd_tree` algorithm, choose from: `Euclidean`, `Chebyshev`, `Cityblock`, `Minkowski`, `Manhattan`, `Infinity`, `P`, `L2`, or `L1`\. For the `ball_tree` algorithm, choose from: `Euclidian`, `Braycurtis`, `Chebyshev`, `Canberra`, `Cityblock`, `Dice`, `Hamming`, `Infinity`, `Jaccard`, `L1`, `L2`, `Minkowski`, `Matching`, `Manhattan`, `P`, `Rogersanimoto`, `Russellrao`, `Sokalmichener`, `Sokalsneath`, or `Kulsinski`\. Default is `Euclidean`\. |
| `atol` | *float* | The desired absolute tolerance of the result\. A larger tolerance will generally lead to faster execution\. Default is `0.0`\. |
| `rtol` | *float* | The desired relative tolerance of the result\. A larger tolerance will generally lead to faster execution\. Default is `1E-8`\. |
| `breadth_first` | *boolean* | Set to `True` to use a breadth\-first approach\. Set to `False` to use a depth\-first approach\. Default is `True`\. |
| `leaf_size` | *integer* | The leaf size of the underlying tree\. Default is `40`\. Changing this value may significantly impact the performance\. |
| `p_value` | *double* | Specify the P Value to use if you're using `Minkowski` for the metric\. Default is `1.5`\. |
| `custom_name` | | |
| `default_node_name` | | |
| `use_HPO` | | |
<!-- </table "summary="kdemodel properties" id="kdemodelnodeslots__table_r3k_5bw_tz" class="defaultstyle" "> -->
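A minimal sketch of setting some of these properties from a script\. It assumes `kdemodel` is also the node type name used when creating the node; the node name and coordinates are illustrative:

stream = modeler.script.stream()
node = stream.createAt("kdemodel", "KDE", 200, 100)
node.setPropertyValue("bandwidth", 1.0)
node.setPropertyValue("kernel", "gaussian")
node.setPropertyValue("algorithm", "kd_tree")
node.setPropertyValue("metric", "Euclidean")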
<!-- </article "role="article" "> -->
|
0EA3470872BF545059B23B040AB1EB393630A29D | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/kdenodeslots.html?context=cdpaas&locale=en | kdeexport properties | kdeexport properties
Kernel Density Estimation (KDE)© uses the Ball Tree or KD Tree algorithms for efficient queries, and combines concepts from unsupervised learning, feature engineering, and data modeling. Neighbor-based approaches such as KDE are some of the most popular and useful density estimation techniques. The KDE Modeling and KDE Simulation nodes in SPSS Modeler expose the core features and commonly used parameters of the KDE library. The nodes are implemented in Python.
kdeexport properties
Table 1. kdeexport properties
kdeexport properties Data type Property description
custom_fields boolean This option tells the node to use field information specified here instead of that given in any upstream Type node(s). After selecting this option, specify the fields as required.
inputs field List of the field names for input.
bandwidth double Default is 1.
kernel string The kernel to use: gaussian or tophat. Default is gaussian.
algorithm string The tree algorithm to use: kd_tree, ball_tree, or auto. Default is auto.
metric string The metric to use when calculating distance. For the kd_tree algorithm, choose from: Euclidean, Chebyshev, Cityblock, Minkowski, Manhattan, Infinity, P, L2, or L1. For the ball_tree algorithm, choose from: Euclidian, Braycurtis, Chebyshev, Canberra, Cityblock, Dice, Hamming, Infinity, Jaccard, L1, L2, Minkowski, Matching, Manhattan, P, Rogersanimoto, Russellrao, Sokalmichener, Sokalsneath, or Kulsinski. Default is Euclidean.
atol float The desired absolute tolerance of the result. A larger tolerance will generally lead to faster execution. Default is 0.0.
rtol float The desired relative tolerance of the result. A larger tolerance will generally lead to faster execution. Default is 1E-8.
breadth_first boolean Set to True to use a breadth-first approach. Set to False to use a depth-first approach. Default is True.
leaf_size integer The leaf size of the underlying tree. Default is 40. Changing this value may significantly impact the performance.
p_value double Specify the P Value to use if you're using Minkowski for the metric. Default is 1.5.
| # kdeexport properties #
Kernel Density Estimation (KDE)© uses the Ball Tree or KD Tree algorithms for efficient queries, and combines concepts from unsupervised learning, feature engineering, and data modeling\. Neighbor\-based approaches such as KDE are some of the most popular and useful density estimation techniques\. The KDE Modeling and KDE Simulation nodes in SPSS Modeler expose the core features and commonly used parameters of the KDE library\. The nodes are implemented in Python\.
<!-- <table "summary="kdeexport properties" id="kdenodeslots__table_r3k_5bw_tz" class="defaultstyle" "> -->
kdeexport properties
Table 1\. kdeexport properties
| `kdeexport` properties | Data type | Property description |
| ---------------------- | --------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `custom_fields` | *boolean* | This option tells the node to use field information specified here instead of that given in any upstream Type node(s)\. After selecting this option, specify the fields as required\. |
| `inputs` | *field* | List of the field names for input\. |
| `bandwidth` | *double* | Default is `1`\. |
| `kernel` | *string* | The kernel to use: `gaussian` or `tophat`\. Default is `gaussian`\. |
| `algorithm` | *string* | The tree algorithm to use: `kd_tree`, `ball_tree`, or `auto`\. Default is `auto`\. |
| `metric` | *string* | The metric to use when calculating distance\. For the `kd_tree` algorithm, choose from: `Euclidean`, `Chebyshev`, `Cityblock`, `Minkowski`, `Manhattan`, `Infinity`, `P`, `L2`, or `L1`\. For the `ball_tree` algorithm, choose from: `Euclidian`, `Braycurtis`, `Chebyshev`, `Canberra`, `Cityblock`, `Dice`, `Hamming`, `Infinity`, `Jaccard`, `L1`, `L2`, `Minkowski`, `Matching`, `Manhattan`, `P`, `Rogersanimoto`, `Russellrao`, `Sokalmichener`, `Sokalsneath`, or `Kulsinski`\. Default is `Euclidean`\. |
| `atol` | *float* | The desired absolute tolerance of the result\. A larger tolerance will generally lead to faster execution\. Default is `0.0`\. |
| `rtol` | *float* | The desired relative tolerance of the result\. A larger tolerance will generally lead to faster execution\. Default is `1E-8`\. |
| `breadth_first` | *boolean* | Set to `True` to use a breadth\-first approach\. Set to `False` to use a depth\-first approach\. Default is `True`\. |
| `leaf_size` | *integer* | The leaf size of the underlying tree\. Default is `40`\. Changing this value may significantly impact the performance\. |
| `p_value` | *double* | Specify the P Value to use if you're using `Minkowski` for the metric\. Default is `1.5`\. |
<!-- </table "summary="kdeexport properties" id="kdenodeslots__table_r3k_5bw_tz" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
9BEA57D80C215D963CB0C54046136FB3E88C7D5C | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/kdenuggetnodeslots.html?context=cdpaas&locale=en | kdeapply properties | kdeapply properties
You can use the KDE Modeling node to generate a KDE model nugget. The scripting name of this model nugget is kdeapply. For information on scripting the modeling node itself, see [kdemodel properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/kdemodelnodeslots.htmlkdemodelnodeslots).
kdeapply properties
Table 1. kdeapply properties
kdeapply properties Data type Property description
out_log_density boolean Specify True or False to include or exclude the log density value in the output. Default is False.
| # kdeapply properties #
You can use the KDE Modeling node to generate a KDE model nugget\. The scripting name of this model nugget is `kdeapply`\. For information on scripting the modeling node itself, see [kdemodel properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/kdemodelnodeslots.html#kdemodelnodeslots)\.
<!-- <table "summary="kdeapply properties" id="kdenuggetnodeslots__table_r3k_5bw_tz" class="defaultstyle" "> -->
kdeapply properties
Table 1\. kdeapply properties
| `kdeapply` properties | Data type | Property description |
| --------------------- | --------- | ---------------------------------------------------------------------------------------------------------- |
| `out_log_density` | *boolean* | Specify `True` or `False` to include or exclude the log density value in the output\. Default is `False`\. |
<!-- </table "summary="kdeapply properties" id="kdenuggetnodeslots__table_r3k_5bw_tz" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
720712D40BFDEF5974C7C025A6AC0D0649124B79 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/kmeansasnodeslots.html?context=cdpaas&locale=en | kmeansasnode properties | kmeansasnode properties
K-Means is one of the most commonly used clustering algorithms. It clusters data points into a predefined number of clusters. The K-Means-AS node in SPSS Modeler is implemented in Spark. For details about K-Means algorithms, see [https://spark.apache.org/docs/2.2.0/ml-clustering.html](https://spark.apache.org/docs/2.2.0/ml-clustering.html). Note that the K-Means-AS node performs one-hot encoding automatically for categorical variables.
kmeansasnode properties
Table 1. kmeansasnode properties
kmeansasnode Properties Values Property description
roleUse string Specify predefined to use predefined roles, or custom to use custom field assignments. Default is predefined.
autoModel Boolean Specify true to use the default name ($S-prediction) for the new generated scoring field, or false to use a custom name. Default is true.
features field List of the field names for input when the roleUse property is set to custom.
name string The name of the new generated scoring field when the autoModel property is set to false.
clustersNum integer The number of clusters to create. Default is 5.
initMode string The initialization algorithm. Possible values are k-means|| or random. Default is k-means||.
initSteps integer The number of initialization steps when initMode is set to k-means||. Default is 2.
advancedSettings Boolean Specify true to make the following four properties available. Default is false.
maxIteration integer Maximum number of iterations for clustering. Default is 20.
tolerance string The tolerance to stop the iterations. Possible settings are 1.0E-1, 1.0E-2, ..., 1.0E-6. Default is 1.0E-4.
setSeed Boolean Specify true to use a custom random seed. Default is false.
randomSeed integer The custom random seed when the setSeed property is true.
displayGraph Boolean Select this option if you want a graph to be included in the output.
| # kmeansasnode properties #
K\-Means is one of the most commonly used clustering algorithms\. It clusters data points into a predefined number of clusters\. The K\-Means\-AS node in SPSS Modeler is implemented in Spark\. For details about K\-Means algorithms, see [https://spark\.apache\.org/docs/2\.2\.0/ml\-clustering\.html](https://spark.apache.org/docs/2.2.0/ml-clustering.html)\. Note that the K\-Means\-AS node performs one\-hot encoding automatically for categorical variables\.
<!-- <table "summary="kmeansasnode properties" class="defaultstyle" "> -->
kmeansasnode properties
Table 1\. kmeansasnode properties
| `kmeansasnode` Properties | Values | Property description |
| ------------------------- | --------- | --------------------------------------------------------------------------------------------------------------------------------------------------- |
| `roleUse` | *string* | Specify `predefined` to use predefined roles, or `custom` to use custom field assignments\. Default is `predefined`\. |
| `autoModel` | *Boolean* | Specify `true` to use the default name (`$S-prediction`) for the new generated scoring field, or `false` to use a custom name\. Default is `true`\. |
| `features` | *field* | List of the field names for input when the `roleUse` property is set to `custom`\. |
| `name` | *string* | The name of the new generated scoring field when the `autoModel` property is set to `false`\. |
| `clustersNum` | *integer* | The number of clusters to create\. Default is `5`\. |
| `initMode` | *string* | The initialization algorithm\. Possible values are `k-means||` or `random`\. Default is `k-means||`\. |
| `initSteps` | *integer* | The number of initialization steps when `initMode` is set to `k-means||`\. Default is `2`\. |
| `advancedSettings` | *Boolean* | Specify `true` to make the following four properties available\. Default is `false`\. |
| `maxIteration` | *integer* | Maximum number of iterations for clustering\. Default is `20`\. |
| `tolerance` | *string* | The tolerance to stop the iterations\. Possible settings are `1.0E-1`, `1.0E-2`, \.\.\., `1.0E-6`\. Default is `1.0E-4`\. |
| `setSeed` | *Boolean* | Specify `true` to use a custom random seed\. Default is `false`\. |
| `randomSeed` | *integer* | The custom random seed when the `setSeed` property is `true`\. |
| `displayGraph` | *Boolean* | Select this option if you want a graph to be included in the output\. |
<!-- </table "summary="kmeansasnode properties" class="defaultstyle" "> -->
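A minimal sketch of setting some of these properties from a script\. It assumes `kmeansas` is the node type name used when creating the node; the node name, coordinates, and field names are illustrative:

stream = modeler.script.stream()
node = stream.createAt("kmeansas", "K-Means-AS", 200, 100)
node.setPropertyValue("roleUse", "custom")
node.setPropertyValue("features", ["field1", "field2"])
node.setPropertyValue("clustersNum", 3)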
<!-- </article "role="article" "> -->
|
6F35B89192B6C9A233B859CF66FCC435F3F9E650 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/kmeansnodeslots.html?context=cdpaas&locale=en | kmeansnode properties | kmeansnode properties
The K-Means node clusters the data set into distinct groups (or clusters). The method defines a fixed number of clusters, iteratively assigns records to clusters, and adjusts the cluster centers until further refinement can no longer improve the model. Instead of trying to predict an outcome, k-means uses a process known as unsupervised learning to uncover patterns in the set of input fields.
kmeansnode properties
Table 1. kmeansnode properties
kmeansnode Properties Values Property description
inputs [field1 ... fieldN] K-means models perform cluster analysis on a set of input fields but do not use a target field. Weight and frequency fields are not used. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information.
num_clusters number
gen_distance flag
cluster_label String Number
label_prefix string
mode Simple Expert
stop_on Default Custom
max_iterations number
tolerance number
encoding_value number
optimize Speed Memory Specifies whether model building should be optimized for speed or for memory.
| # kmeansnode properties #
The K\-Means node clusters the data set into distinct groups (or clusters)\. The method defines a fixed number of clusters, iteratively assigns records to clusters, and adjusts the cluster centers until further refinement can no longer improve the model\. Instead of trying to predict an outcome, *k*\-means uses a process known as unsupervised learning to uncover patterns in the set of input fields\.
<!-- <table "summary="kmeansnode properties" id="kmeansnodeslots__table_n53_zcj_cdb" class="defaultstyle" "> -->
kmeansnode properties
Table 1\. kmeansnode properties
| `kmeansnode` Properties | Values | Property description |
| ----------------------- | -------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `inputs` | \[*field1 \.\.\. fieldN*\] | K\-means models perform cluster analysis on a set of input fields but do not use a target field\. Weight and frequency fields are not used\. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.html#modelingnodeslots_common) for more information\. |
| `num_clusters` | *number* | |
| `gen_distance` | *flag* | |
| `cluster_label` | `String``Number` | |
| `label_prefix` | *string* | |
| `mode` | `Simple``Expert` | |
| `stop_on` | `Default``Custom` | |
| `max_iterations` | *number* | |
| `tolerance` | *number* | |
| `encoding_value` | *number* | |
| `optimize` | `Speed``Memory` | Specifies whether model building should be optimized for speed or for memory\. |
<!-- </table "summary="kmeansnode properties" id="kmeansnodeslots__table_n53_zcj_cdb" class="defaultstyle" "> -->
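A minimal sketch of setting some of these properties from a script\. It assumes `kmeans` is the node type name used when creating the node; the node name, coordinates, and field names are illustrative:

stream = modeler.script.stream()
node = stream.createAt("kmeans", "K-Means", 200, 100)
node.setPropertyValue("inputs", ["field1", "field2", "field3"])
node.setPropertyValue("num_clusters", 5)
node.setPropertyValue("optimize", "Speed")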
<!-- </article "role="article" "> -->
|
57D441EF305442BCDBBE48B980B87D47B825FFF9 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/kmeansnuggetnodeslots.html?context=cdpaas&locale=en | applykmeansnode properties | applykmeansnode properties
You can use K-Means modeling nodes to generate a K-Means model nugget. The scripting name of this model nugget is applykmeansnode. No other properties exist for this model nugget. For more information on scripting the modeling node itself, see [kmeansnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/kmeansnodeslots.htmlkmeansnodeslots).
| # applykmeansnode properties #
You can use K\-Means modeling nodes to generate a K\-Means model nugget\. The scripting name of this model nugget is *applykmeansnode*\. No other properties exist for this model nugget\. For more information on scripting the modeling node itself, see [kmeansnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/kmeansnodeslots.html#kmeansnodeslots)\.
<!-- </article "role="article" "> -->
|
CC60FEBF8E5D1907CE0CCF3868CD9E4B494AA1BF | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/knnnodeslots.html?context=cdpaas&locale=en | knnnode properties | knnnode properties
The k-Nearest Neighbor (KNN) node associates a new case with the category or value of the k objects nearest to it in the predictor space, where k is an integer. Similar cases are near each other and dissimilar cases are distant from each other.
knnnode properties
Table 1. knnnode properties
knnnode Properties Values Property description
analysis PredictTarget IdentifyNeighbors
objective Balance Speed Accuracy Custom
normalize_ranges flag
use_case_labels flag Check box to enable next option.
case_labels_field field
identify_focal_cases flag Check box to enable next option.
focal_cases_field field
automatic_k_selection flag
fixed_k integer Enabled only if automatic_k_selection is False.
minimum_k integer Enabled only if automatic_k_selection is True.
maximum_k integer
distance_computation Euclidean CityBlock
weight_by_importance flag
range_predictions Mean Median
perform_feature_selection flag
forced_entry_inputs [field1 ... fieldN]
stop_on_error_ratio flag
number_to_select integer
minimum_change number
validation_fold_assign_by_field flag
number_of_folds integer Enabled only if validation_fold_assign_by_field is False
set_random_seed flag
random_seed number
folds_field field Enabled only if validation_fold_assign_by_field is True
all_probabilities flag
save_distances flag
calculate_raw_propensities flag
calculate_adjusted_propensities flag
adjusted_propensity_partition Test Validation
| # knnnode properties #
The *k*\-Nearest Neighbor (KNN) node associates a new case with the category or value of the *k* objects nearest to it in the predictor space, where *k* is an integer\. Similar cases are near each other and dissimilar cases are distant from each other\.
<!-- <table "summary="knnnode properties" id="knnnodeslots__table_nfn_1dj_cdb" class="defaultstyle" "> -->
knnnode properties
Table 1\. knnnode properties
| `knnnode` Properties | Values | Property description |
| --------------------------------- | ---------------------------------- | ------------------------------------------------------------ |
| `analysis` | `PredictTarget``IdentifyNeighbors` | |
| `objective` | `Balance``Speed``Accuracy``Custom` | |
| `normalize_ranges` | *flag* | |
| `use_case_labels` | *flag* | Check box to enable next option\. |
| `case_labels_field` | *field* | |
| `identify_focal_cases` | *flag* | Check box to enable next option\. |
| `focal_cases_field` | *field* | |
| `automatic_k_selection` | *flag* | |
| `fixed_k` | *integer* | Enabled only if `automatic_k_selection` is `False`\. |
| `minimum_k` | *integer* | Enabled only if `automatic_k_selection` is `True`\. |
| `maximum_k` | *integer* | |
| `distance_computation` | `Euclidean``CityBlock` | |
| `weight_by_importance` | *flag* | |
| `range_predictions` | `Mean``Median` | |
| `perform_feature_selection` | *flag* | |
| `forced_entry_inputs` | \[*field1 \.\.\. fieldN*\] | |
| `stop_on_error_ratio` | *flag* | |
| `number_to_select` | *integer* | |
| `minimum_change` | *number* | |
| `validation_fold_assign_by_field` | *flag* | |
| `number_of_folds`                 | *integer*                          | Enabled only if `validation_fold_assign_by_field` is `False`\. |
| `set_random_seed` | *flag* | |
| `random_seed` | *number* | |
| `folds_field`                     | *field*                            | Enabled only if `validation_fold_assign_by_field` is `True`\.  |
| `all_probabilities` | *flag* | |
| `save_distances` | *flag* | |
| `calculate_raw_propensities` | *flag* | |
| `calculate_adjusted_propensities` | *flag* | |
| `adjusted_propensity_partition` | `Test``Validation` | |
<!-- </table "summary="knnnode properties" id="knnnodeslots__table_nfn_1dj_cdb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
8B32EB4742D88B5CEC2E1C9616958BD7F8986785 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/knnnuggetnodeslots.html?context=cdpaas&locale=en | applyknnnode properties | applyknnnode properties
You can use KNN modeling nodes to generate a KNN model nugget. The scripting name of this model nugget is applyknnnode. For more information on scripting the modeling node itself, see [knnnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/knnnodeslots.html#knnnodeslots).
applyknnnode properties
Table 1. applyknnnode properties
applyknnnode Properties Values Property description
all_probabilities flag
save_distances flag
enable_sql_generation falsenative When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations.
| # applyknnnode properties #
You can use KNN modeling nodes to generate a KNN model nugget\. The scripting name of this model nugget is *applyknnnode*\. For more information on scripting the modeling node itself, see [knnnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/knnnodeslots.html#knnnodeslots)\.
<!-- <table "summary="applyknnnode properties" id="knnnuggetnodeslots__table_e5c_bdj_cdb" class="defaultstyle" "> -->
applyknnnode properties
Table 1\. applyknnnode properties
| `applyknnnode` Properties | Values | Property description |
| ------------------------- | --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------ |
| `all_probabilities` | *flag* | |
| `save_distances` | *flag* | |
| `enable_sql_generation` | `false``native` | When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations\. |
<!-- </table "summary="applyknnnode properties" id="knnnuggetnodeslots__table_e5c_bdj_cdb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
0563FC6874B43FA0BCA09AE54805FE98BFA33042 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/kohonennodeslots.html?context=cdpaas&locale=en | kohonennode properties | kohonennode properties
The Kohonen node generates a type of neural network that can be used to cluster the data set into distinct groups. When the network is fully trained, records that are similar should be close together on the output map, while records that are different will be far apart. You can look at the number of observations captured by each unit in the model nugget to identify the strong units. This may give you a sense of the appropriate number of clusters.
kohonennode properties
Table 1. kohonennode properties
kohonennode Properties Values Property description
inputs [field1 ... fieldN] Kohonen models use a list of input fields, but no target. Frequency and weight fields are not used. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.html#modelingnodeslots_common) for more information.
continue flag
show_feedback flag
stop_on Default Time
time number
optimize Speed Memory Use to specify whether model building should be optimized for speed or for memory.
cluster_label flag
mode Simple Expert
width number
length number
decay_style Linear Exponential
phase1_neighborhood number
phase1_eta number
phase1_cycles number
phase2_neighborhood number
phase2_eta number
phase2_cycles number
set_random_seed Boolean If no random seed is set, the sequence of random values used to initialize the network weights will be different every time the node runs. This can cause the node to create different models on different runs, even if the node settings and data values are exactly the same. By selecting this option, you can set the random seed to a specific value so the resulting model is exactly reproducible.
random_seed integer Seed
| # kohonennode properties #
The Kohonen node generates a type of neural network that can be used to cluster the data set into distinct groups\. When the network is fully trained, records that are similar should be close together on the output map, while records that are different will be far apart\. You can look at the number of observations captured by each unit in the model nugget to identify the strong units\. This may give you a sense of the appropriate number of clusters\.
<!-- <table "summary="kohonennode properties" id="kohonennodeslots__table_pft_bdj_cdb" class="defaultstyle" "> -->
kohonennode properties
Table 1\. kohonennode properties
| `kohonennode` Properties | Values | Property description |
| ------------------------ | --------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `inputs` | \[*field1 \.\.\. fieldN*\] | Kohonen models use a list of input fields, but no target\. Frequency and weight fields are not used\. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.html#modelingnodeslots_common) for more information\. |
| `continue` | *flag* | |
| `show_feedback` | *flag* | |
| `stop_on` | `Default` <br>`Time` | |
| `time` | *number* | |
| `optimize` | `Speed` <br>`Memory` | Use to specify whether model building should be optimized for speed or for memory\. |
| `cluster_label` | *flag* | |
| `mode` | `Simple` <br>`Expert` | |
| `width` | *number* | |
| `length` | *number* | |
| `decay_style` | `Linear` <br>`Exponential` | |
| `phase1_neighborhood` | *number* | |
| `phase1_eta` | *number* | |
| `phase1_cycles` | *number* | |
| `phase2_neighborhood` | *number* | |
| `phase2_eta` | *number* | |
| `phase2_cycles` | *number* | |
| `set_random_seed` | *Boolean* | If no random seed is set, the sequence of random values used to initialize the network weights will be different every time the node runs\. This can cause the node to create different models on different runs, even if the node settings and data values are exactly the same\. By selecting this option, you can set the random seed to a specific value so the resulting model is exactly reproducible\. |
| `random_seed` | *integer* | Seed |
<!-- </table "summary="kohonennode properties" id="kohonennodeslots__table_pft_bdj_cdb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
2939716BFA6089C8B6373ED7C6397AF71389A5C8 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/kohonennuggetnodeslots.html?context=cdpaas&locale=en | applykohonennode properties | applykohonennode properties
You can use Kohonen modeling nodes to generate a Kohonen model nugget. The scripting name of this model nugget is applykohonennode. For more information on scripting the modeling node itself, see [kohonennode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/kohonennodeslots.html#kohonennodeslots).
applykohonennode properties
Table 1. applykohonennode properties
applykohonennode Properties Values Property description
enable_sql_generation falsetruenative When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations.
| # applykohonennode properties #
You can use Kohonen modeling nodes to generate a Kohonen model nugget\. The scripting name of this model nugget is *applykohonennode*\. For more information on scripting the modeling node itself, see [kohonennode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/kohonennodeslots.html#kohonennodeslots)\.
<!-- <table "summary="applykohonennode properties" id="kohonennuggetnodeslots__table_vcl_gbj_cdb" class="defaultstyle" "> -->
applykohonennode properties
Table 1\. applykohonennode properties
| `applykohonennode` Properties | Values | Property description |
| ----------------------------- | --------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------ |
| `enable_sql_generation` | `false``true``native` | When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations\. |
<!-- </table "summary="applykohonennode properties" id="kohonennuggetnodeslots__table_vcl_gbj_cdb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
87D2FF4289EDCBF7FCFA7FC7FD460DEB02ECC71B | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/logregnodeslots.html?context=cdpaas&locale=en | logregnode properties | logregnode properties
Logistic regression is a statistical technique for classifying records based on values of input fields. It is analogous to linear regression but takes a categorical target field instead of a numeric range.
logregnode properties
Table 1. logregnode properties
logregnode Properties Values Property description
target field Logistic regression models require a single target field and one or more input fields. Frequency and weight fields are not used. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.html#modelingnodeslots_common) for more information.
logistic_procedure BinomialMultinomial
include_constant flag
mode SimpleExpert
method EnterStepwiseForwardsBackwardsBackwardsStepwise
binomial_method EnterForwardsBackwards
model_type MainEffectsFullFactorialCustom When FullFactorial is specified as the model type, stepping methods will not run, even if specified. Instead, Enter will be the method used. If the model type is set to Custom but no custom fields are specified, a main-effects model will be built.
custom_terms [[BP Sex][BP][Age]]
multinomial_base_category string Specifies how the reference category is determined.
binomial_categorical_input string
binomial_input_contrast IndicatorSimpleDifferenceHelmertRepeatedPolynomialDeviation Keyed property for categorical input that specifies how the contrast is determined. See the example for usage.
binomial_input_category FirstLast Keyed property for categorical input that specifies how the reference category is determined. See the example for usage.
scale NoneUserDefinedPearsonDeviance
scale_value number
all_probabilities flag
tolerance 1.0E-51.0E-61.0E-71.0E-81.0E-91.0E-10
min_terms number
use_max_terms flag
max_terms number
entry_criterion ScoreLR
removal_criterion LRWald
probability_entry number
probability_removal number
binomial_probability_entry number
binomial_probability_removal number
requirements HierarchyDiscreteHierarchyAllContainmentNone
max_iterations number
max_steps number
p_converge 1.0E-41.0E-51.0E-61.0E-71.0E-80
l_converge 1.0E-11.0E-21.0E-31.0E-41.0E-50
delta number
iteration_history flag
history_steps number
summary flag
likelihood_ratio flag
asymptotic_correlation flag
goodness_fit flag
parameters flag
confidence_interval number
asymptotic_covariance flag
classification_table flag
stepwise_summary flag
info_criteria flag
monotonicity_measures flag
binomial_output_display at_each_stepat_last_step
binomial_goodness_of_fit flag
binomial_parameters flag
binomial_iteration_history flag
binomial_classification_plots flag
binomial_ci_enable flag
binomial_ci number
binomial_residual outliersall
binomial_residual_enable flag
binomial_outlier_threshold number
binomial_classification_cutoff number
binomial_removal_criterion LRWaldConditional
calculate_variable_importance flag
calculate_raw_propensities flag
| # logregnode properties #
Logistic regression is a statistical technique for classifying records based on values of input fields\. It is analogous to linear regression but takes a categorical target field instead of a numeric range\.
<!-- <table "summary="logregnode properties" id="logregnodeslots__table_p5h_gdj_cdb" class="defaultstyle" "> -->
logregnode properties
Table 1\. logregnode properties
| `logregnode` Properties | Values | Property description |
| -------------------------------- | ------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `target` | *field* | Logistic regression models require a single target field and one or more input fields\. Frequency and weight fields are not used\. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.html#modelingnodeslots_common) for more information\. |
| `logistic_procedure` | `Binomial``Multinomial` | |
| `include_constant` | *flag* | |
| `mode` | `Simple``Expert` | |
| `method` | `Enter``Stepwise``Forwards``Backwards``BackwardsStepwise` | |
| `binomial_method` | `Enter``Forwards``Backwards` | |
| `model_type` | `MainEffects``FullFactorial``Custom` | When `FullFactorial` is specified as the model type, stepping methods will not run, even if specified\. Instead, `Enter` will be the method used\. If the model type is set to `Custom` but no custom fields are specified, a main\-effects model will be built\. |
| `custom_terms` | \[*\[BP Sex\]\[BP\]\[Age\]*\] | |
| `multinomial_base_category` | *string* | Specifies how the reference category is determined\. |
| `binomial_categorical_input` | *string* | |
| `binomial_input_contrast` | `Indicator``Simple``Difference``Helmert``Repeated``Polynomial``Deviation` | Keyed property for categorical input that specifies how the contrast is determined\. *See the example for usage\.* |
| `binomial_input_category` | `First``Last` | Keyed property for categorical input that specifies how the reference category is determined\. *See the example for usage\.* |
| `scale` | `None``UserDefined``Pearson``Deviance` | |
| `scale_value` | *number* | |
| `all_probabilities` | *flag* | |
| `tolerance` | `1.0E-5``1.0E-6``1.0E-7``1.0E-8``1.0E-9``1.0E-10` | |
| `min_terms` | *number* | |
| `use_max_terms` | *flag* | |
| `max_terms` | *number* | |
| `entry_criterion` | `Score``LR` | |
| `removal_criterion` | `LR``Wald` | |
| `probability_entry` | *number* | |
| `probability_removal` | *number* | |
| `binomial_probability_entry` | *number* | |
| `binomial_probability_removal` | *number* | |
| `requirements` | `HierarchyDiscrete``HierarchyAll``Containment``None` | |
| `max_iterations` | *number* | |
| `max_steps` | *number* | |
| `p_converge` | `1.0E-4``1.0E-5``1.0E-6``1.0E-7``1.0E-8``0` | |
| `l_converge` | `1.0E-1``1.0E-2``1.0E-3``1.0E-4``1.0E-5``0` | |
| `delta` | *number* | |
| `iteration_history` | *flag* | |
| `history_steps` | *number* | |
| `summary` | *flag* | |
| `likelihood_ratio` | *flag* | |
| `asymptotic_correlation` | *flag* | |
| `goodness_fit` | *flag* | |
| `parameters` | *flag* | |
| `confidence_interval` | *number* | |
| `asymptotic_covariance` | *flag* | |
| `classification_table` | *flag* | |
| `stepwise_summary` | *flag* | |
| `info_criteria` | *flag* | |
| `monotonicity_measures` | *flag* | |
| `binomial_output_display` | `at_each_step``at_last_step` | |
| `binomial_goodness_of_fit` | *flag* | |
| `binomial_parameters` | *flag* | |
| `binomial_iteration_history` | *flag* | |
| `binomial_classification_plots` | *flag* | |
| `binomial_ci_enable` | *flag* | |
| `binomial_ci` | *number* | |
| `binomial_residual` | `outliers``all` | |
| `binomial_residual_enable` | *flag* | |
| `binomial_outlier_threshold` | *number* | |
| `binomial_classification_cutoff` | *number* | |
| `binomial_removal_criterion` | `LR``Wald``Conditional` | |
| `calculate_variable_importance` | *flag* | |
| `calculate_raw_propensities` | *flag* | |
<!-- </table "summary="logregnode properties" id="logregnodeslots__table_p5h_gdj_cdb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
7C8BCAFBD032E30DCC7C39E28A2B5DE1E340DA6B | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/logregnuggetnodeslots.html?context=cdpaas&locale=en | applylogregnode properties | applylogregnode properties
You can use Logistic modeling nodes to generate a Logistic model nugget. The scripting name of this model nugget is applylogregnode. For more information on scripting the modeling node itself, see [logregnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/logregnodeslots.html#logregnodeslots).
applylogregnode properties
Table 1. applylogregnode properties
applylogregnode Properties Values Property description
calculate_raw_propensities flag
calculate_conf flag
enable_sql_generation flag
| # applylogregnode properties #
You can use Logistic modeling nodes to generate a Logistic model nugget\. The scripting name of this model nugget is *applylogregnode*\. For more information on scripting the modeling node itself, see [logregnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/logregnodeslots.html#logregnodeslots)\.
<!-- <table "summary="applylogregnode properties" id="logregnuggetnodeslots__table_s2w_gdj_cdb" class="defaultstyle" "> -->
applylogregnode properties
Table 1\. applylogregnode properties
| `applylogregnode` Properties | Values | Property description |
| ---------------------------- | ------ | -------------------- |
| `calculate_raw_propensities` | *flag* | |
| `calculate_conf` | *flag* | |
| `enable_sql_generation` | *flag* | |
<!-- </table "summary="applylogregnode properties" id="logregnuggetnodeslots__table_s2w_gdj_cdb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
7C4F082004DBA0B946D64AA6C0127041F4622C7B | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/lsvmnodeslots.html?context=cdpaas&locale=en | lsvmnode properties | lsvmnode properties
With the Linear Support Vector Machine (LSVM) node, you can classify data into one of two groups without overfitting. LSVM is linear and works well with wide data sets, such as those with a very large number of records.
lsvmnode properties
Table 1. lsvmnode properties
lsvmnode Properties Values Property description
intercept flag Includes the intercept in the model. Default value is True.
target_order AscendingDescending Specifies the sorting order for the categorical target. Ignored for continuous targets. Default is Ascending.
precision number Used only if measurement level of target field is Continuous. Specifies the parameter related to the sensitivity of the loss for regression. Minimum is 0 and there is no maximum. Default value is 0.1.
exclude_missing_values flag When True, a record is excluded if any single value is missing. The default value is False.
penalty_function L1L2 Specifies the type of penalty function used. The default value is L2.
lambda number Penalty (regularization) parameter.
calculate_variable_importance flag For models that produce an appropriate measure of importance, this option displays a chart that indicates the relative importance of each predictor in estimating the model. Note that variable importance may take longer to calculate for some models, particularly when working with large datasets, and is off by default for some models as a result. Variable importance is not available for decision list models.
| # lsvmnode properties #
With the Linear Support Vector Machine (LSVM) node, you can classify data into one of two groups without overfitting\. LSVM is linear and works well with wide data sets, such as those with a very large number of records\.
<!-- <table "summary="lsvmnode properties" class="defaultstyle" "> -->
lsvmnode properties
Table 1\. lsvmnode properties
| `lsvmnode` Properties | Values | Property description |
| ------------------------------- | ----------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `intercept` | *flag* | Includes the intercept in the model\. Default value is `True`\. |
| `target_order` | `Ascending``Descending` | Specifies the sorting order for the categorical target\. Ignored for continuous targets\. Default is `Ascending`\. |
| `precision` | *number* | Used only if measurement level of target field is `Continuous`\. Specifies the parameter related to the sensitivity of the loss for regression\. Minimum is `0` and there is no maximum\. Default value is `0.1`\. |
| `exclude_missing_values` | *flag* | When `True`, a record is excluded if any single value is missing\. The default value is `False`\. |
| `penalty_function` | `L1``L2` | Specifies the type of penalty function used\. The default value is `L2`\. |
| `lambda` | *number* | Penalty (regularization) parameter\. |
| `calculate_variable_importance` | *flag* | For models that produce an appropriate measure of importance, this option displays a chart that indicates the relative importance of each predictor in estimating the model\. Note that variable importance may take longer to calculate for some models, particularly when working with large datasets, and is off by default for some models as a result\. Variable importance is not available for decision list models\. |
<!-- </table "summary="lsvmnode properties" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
5890D52D3DDE4C249AD06C5A4DFE25542723F1C1 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/lsvmnuggetnodeslots.html?context=cdpaas&locale=en | applylsvmnode properties | applylsvmnode properties
You can use LSVM modeling nodes to generate an LSVM model nugget. The scripting name of this model nugget is applylsvmnode. For more information on scripting the modeling node itself, see [lsvmnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/lsvmnodeslots.html).
applylsvmnode properties
Table 1. applylsvmnode properties
applylsvmnode Properties Values Property description
calculate_raw_propensities flag Specifies whether to calculate raw propensity scores.
enable_sql_generation falsenative When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations.
| # applylsvmnode properties #
You can use LSVM modeling nodes to generate an LSVM model nugget\. The scripting name of this model nugget is *applylsvmnode*\. For more information on scripting the modeling node itself, see [lsvmnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/lsvmnodeslots.html)\.
<!-- <table "summary="applylsvmnode properties" class="defaultstyle" "> -->
applylsvmnode properties
Table 1\. applylsvmnode properties
| `applylsvmnode` Properties | Values | Property description |
| ---------------------------- | --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------ |
| `calculate_raw_propensities` | *flag* | Specifies whether to calculate raw propensity scores\. |
| `enable_sql_generation` | `false``native` | When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations\. |
<!-- </table "summary="applylsvmnode properties" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
3426FB738655136D42FA32BD6CFBFD979A3D5574 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/matrixnodeslots.html?context=cdpaas&locale=en | matrixnode properties | matrixnode properties
The Matrix node creates a table that shows relationships between fields. It's most commonly used to show the relationship between two symbolic fields, but it can also show relationships between flag fields or numeric fields.
matrixnode properties
Table 1. matrixnode properties
matrixnode properties Data type Property description
fields SelectedFlagsNumerics
row field
column field
include_missing_values flag Specifies whether user-missing (blank) and system missing (null) values are included in the row and column output.
cell_contents CrossTabsFunction
function_field string
function SumMeanMinMaxSDev
sort_mode UnsortedAscendingDescending
highlight_top number If non-zero, then true.
highlight_bottom number If non-zero, then true.
display [CountsExpectedResidualsRowPctColumnPctTotalPct]
include_totals flag
use_output_name flag Specifies whether a custom output name is used.
output_name string If use_output_name is true, specifies the name to use.
output_mode ScreenFile Used to specify target location for output generated from the output node.
output_format Formatted (.tab) Delimited (.csv) HTML (.html) Output (.cou) Used to specify the type of output. Both the Formatted and Delimited formats can take the modifier transposed, which transposes the rows and columns in the table.
paginate_output flag When the output_format is HTML, causes the output to be separated into pages.
lines_per_page number When used with paginate_output, specifies the lines per page of output.
full_filename string
| # matrixnode properties #
The Matrix node creates a table that shows relationships between fields\. It's most commonly used to show the relationship between two symbolic fields, but it can also show relationships between flag fields or numeric fields\.
<!-- <table "summary="matrixnode properties" id="matrixnodeslots__table_czl_hdj_cdb" class="defaultstyle" "> -->
matrixnode properties
Table 1\. matrixnode properties
| `matrixnode` properties | Data type | Property description |
| ------------------------ | -------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `fields` | `Selected``Flags``Numerics` | |
| `row` | *field* | |
| `column` | *field* | |
| `include_missing_values` | *flag* | Specifies whether user\-missing (blank) and system missing (null) values are included in the row and column output\. |
| `cell_contents` | `CrossTabs``Function` | |
| `function_field` | *string* | |
| `function` | `Sum``Mean``Min``Max``SDev` | |
| `sort_mode` | `Unsorted``Ascending``Descending` | |
| `highlight_top` | *number* | If non\-zero, then true\. |
| `highlight_bottom` | *number* | If non\-zero, then true\. |
| `display` | `[Counts``Expected``Residuals``RowPct``ColumnPct``TotalPct]` | |
| `include_totals` | *flag* | |
| `use_output_name` | *flag* | Specifies whether a custom output name is used\. |
| `output_name` | *string* | If `use_output_name` is true, specifies the name to use\. |
| `output_mode` | `Screen``File` | Used to specify target location for output generated from the output node\. |
| `output_format` | `Formatted` (\.*tab*) `Delimited` (\.*csv*) `HTML` (\.*html*) `Output` (\.*cou*) | Used to specify the type of output\. Both the `Formatted` and `Delimited` formats can take the modifier `transposed`, which transposes the rows and columns in the table\. |
| `paginate_output` | *flag* | When the `output_format` is `HTML`, causes the output to be separated into pages\. |
| `lines_per_page` | *number* | When used with `paginate_output`, specifies the lines per page of output\. |
| `full_filename` | *string* | |
<!-- </table "summary="matrixnode properties" id="matrixnodeslots__table_czl_hdj_cdb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
DA3D295DA633CD271FB3970AD2ED4B31BDCB6247 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/meansnodeslots.html?context=cdpaas&locale=en | meansnode properties | meansnode properties
The Means node compares the means between independent groups or between pairs of related fields to test whether a significant difference exists. For example, you could compare mean revenues before and after running a promotion or compare revenues from customers who didn't receive the promotion with those who did.
meansnode properties
Table 1. meansnode properties
meansnode properties Data type Property description
means_mode BetweenGroupsBetweenFields Specifies the type of means statistic to be executed on the data.
test_fields [field1 ... fieldn] Specifies the test field when means_mode is set to BetweenGroups.
grouping_field field Specifies the grouping field.
paired_fields [[field1 field2][field3 field4]...] Specifies the field pairs to use when means_mode is set to BetweenFields.
label_correlations flag Specifies whether correlation labels are shown in output. This setting applies only when means_mode is set to BetweenFields.
correlation_mode ProbabilityAbsolute Specifies whether to label correlations by probability or absolute value.
weak_label string
medium_label string
strong_label string
weak_below_probability number When correlation_mode is set to Probability, specifies the cutoff value for weak correlations. This must be a value between 0 and 1—for example, 0.90.
strong_above_probability number Cutoff value for strong correlations.
weak_below_absolute number When correlation_mode is set to Absolute, specifies the cutoff value for weak correlations. This must be a value between 0 and 1—for example, 0.90.
strong_above_absolute number Cutoff value for strong correlations.
unimportant_label string
marginal_label string
important_label string
unimportant_below number Cutoff value for low field importance. This must be a value between 0 and 1—for example, 0.90.
important_above number
use_output_name flag Specifies whether a custom output name is used.
output_name string Name to use.
output_mode ScreenFile Specifies the target location for output generated from the output node.
output_format Formatted (.tab) Delimited (.csv) HTML (.html) Output (.cou) Specifies the type of output.
full_filename string
output_view SimpleAdvanced Specifies whether the simple or advanced view is displayed in the output.
| # meansnode properties #
The Means node compares the means between independent groups or between pairs of related fields to test whether a significant difference exists\. For example, you could compare mean revenues before and after running a promotion or compare revenues from customers who didn't receive the promotion with those who did\.
<!-- <table "summary="meansnode properties" id="meansnodeslots__table_sb1_3dj_cdb" class="defaultstyle" "> -->
meansnode properties
Table 1\. meansnode properties
| `meansnode` properties | Data type | Property description |
| -------------------------- | -------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `means_mode` | `BetweenGroups``BetweenFields` | Specifies the type of means statistic to be executed on the data\. |
| `test_fields` | `[field1 ... fieldn]` | Specifies the test field when `means_mode` is set to `BetweenGroups`\. |
| `grouping_field` | *field* | Specifies the grouping field\. |
| `paired_fields`            | `[[field1 field2]``[field3 field4]``...]`                                        | Specifies the field pairs to use when `means_mode` is set to `BetweenFields`\.                                                                  |
| `label_correlations` | *flag* | Specifies whether correlation labels are shown in output\. This setting applies only when `means_mode` is set to `BetweenFields`\. |
| `correlation_mode` | `Probability``Absolute` | Specifies whether to label correlations by probability or absolute value\. |
| `weak_label` | *string* | |
| `medium_label` | *string* | |
| `strong_label` | *string* | |
| `weak_below_probability` | *number* | When `correlation_mode` is set to `Probability`, specifies the cutoff value for weak correlations\. This must be a value between 0 and 1—for example, 0\.90\. |
| `strong_above_probability` | *number* | Cutoff value for strong correlations\. |
| `weak_below_absolute` | *number* | When `correlation_mode` is set to `Absolute`, specifies the cutoff value for weak correlations\. This must be a value between 0 and 1—for example, 0\.90\. |
| `strong_above_absolute` | *number* | Cutoff value for strong correlations\. |
| `unimportant_label` | *string* | |
| `marginal_label` | *string* | |
| `important_label` | *string* | |
| `unimportant_below` | *number* | Cutoff value for low field importance\. This must be a value between 0 and 1—for example, 0\.90\. |
| `important_above` | *number* | |
| `use_output_name` | *flag* | Specifies whether a custom output name is used\. |
| `output_name` | *string* | Name to use\. |
| `output_mode` | `Screen``File` | Specifies the target location for output generated from the output node\. |
| `output_format` | `Formatted` (\.*tab*) `Delimited` (\.*csv*) `HTML` (\.*html*) `Output` (\.*cou*) | Specifies the type of output\. |
| `full_filename` | *string* | |
| `output_view` | `Simple``Advanced` | Specifies whether the simple or advanced view is displayed in the output\. |
<!-- </table "summary="meansnode properties" id="meansnodeslots__table_sb1_3dj_cdb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
A148122DA72AD9FF05B3483D6F50975C50B4AB33 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/mergenodeslots.html?context=cdpaas&locale=en | mergenode properties | mergenode properties
The Merge node takes multiple input records and creates a single output record containing some or all of the input fields. It's useful for merging data from different sources, such as internal customer data and purchased demographic data.
mergenode properties
Table 1. mergenode properties
mergenode properties Data type Property description
method Order Keys Condition Rankedcondition Specify how records are merged: in the order they are listed in the data files (Order); by matching records that share the same values in one or more key fields (Keys); by keeping only records that satisfy a specified condition (Condition); or by pairing each row in the primary data set with rows in the secondary data sets, using the ranking expression to sort multiple matches from low to high (Rankedcondition).
condition string If method is set to Condition, specifies the condition for including or discarding records.
key_fields list
common_keys flag
join Inner FullOuter PartialOuter Anti
outer_join_tag.n flag In this property, n is the tag name as displayed in the node properties. Note that multiple tag names may be specified, as any number of datasets could contribute incomplete records.
single_large_input flag Specifies whether optimization for having one input relatively large compared to the other inputs will be used.
single_large_input_tag string Specifies the tag name as displayed in the node properties. Note that the usage of this property differs slightly from the outer_join_tag property (flag versus string) because only one input dataset can be specified.
use_existing_sort_keys flag Specifies whether the inputs are already sorted by one or more key fields.
existing_sort_keys [['string','Ascending'] ['string','Descending']] Specifies the fields that are already sorted and the direction in which they are sorted.
primary_dataset string If method is Rankedcondition, select the primary data set in the merge. This can be considered as the left side of an outer join merge.
rename_duplicate_fields boolean If method is Rankedcondition and this is set to Y, then when the resulting merged data set contains multiple fields with the same name from different data sources, the respective tags from the data sources are added at the start of the field column headers.
merge_condition string
ranking_expression string
Num_matches integer The number of matches to be returned, based on the merge_condition and ranking_expression. Minimum 1, maximum 100.
default_sort_order Ascending Descending Specify whether, by default, records are sorted in ascending or descending order of the sort key values.
| # mergenode properties #
The Merge node takes multiple input records and creates a single output record containing some or all of the input fields\. It's useful for merging data from different sources, such as internal customer data and purchased demographic data\.
<!-- <table "summary="mergenode properties" id="mergenodeslots__table_irn_3dj_cdb" class="defaultstyle" "> -->
mergenode properties
Table 1\. mergenode properties
| `mergenode` properties | Data type | Property description |
| ------------------------- | ------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `method` | `Order` <br>`Keys` <br>`Condition` <br>`Rankedcondition` | Specify how records are merged: in the order they are listed in the data files (`Order`); by matching records that share the same values in one or more key fields (`Keys`); by keeping only records that satisfy a specified condition (`Condition`); or by pairing each row in the primary data set with rows in the secondary data sets, using the ranking expression to sort multiple matches from low to high (`Rankedcondition`)\. |
| `condition` | *string* | If `method` is set to `Condition`, specifies the condition for including or discarding records\. |
| `key_fields` | *list* | |
| `common_keys` | *flag* | |
| `join` | `Inner` <br>`FullOuter` <br>`PartialOuter` <br>`Anti` | |
| `outer_join_tag.n` | *flag* | In this property, *n* is the tag name as displayed in the node properties\. Note that multiple tag names may be specified, as any number of datasets could contribute incomplete records\. |
| `single_large_input` | *flag* | Specifies whether optimization for having one input relatively large compared to the other inputs will be used\. |
| `single_large_input_tag` | *string* | Specifies the tag name as displayed in the node properties\. Note that the usage of this property differs slightly from the `outer_join_tag` property (flag versus string) because only one input dataset can be specified\. |
| `use_existing_sort_keys` | *flag* | Specifies whether the inputs are already sorted by one or more key fields\. |
| `existing_sort_keys` | \[\[*'string',*`'Ascending'`\] \[*'string',*`'Descending'`\]\] | Specifies the fields that are already sorted and the direction in which they are sorted\. |
| `primary_dataset` | *string* | If `method` is `Rankedcondition`, select the primary data set in the merge\. This can be considered as the left side of an outer join merge\. |
| `rename_duplicate_fields` | *boolean* | If `method` is `Rankedcondition` and this is set to `Y`, then when the resulting merged data set contains multiple fields with the same name from different data sources, the respective tags from the data sources are added at the start of the field column headers\. |
| `merge_condition` | *string* | |
| `ranking_expression` | *string* | |
| `Num_matches` | *integer* | The number of matches to be returned, based on the `merge_condition` and `ranking_expression`\. Minimum 1, maximum 100\. |
| `default_sort_order` | `Ascending` <br>`Descending` | Specify whether, by default, records are sorted in ascending or descending order of the sort key values\. |
<!-- </table "summary="mergenode properties" id="mergenodeslots__table_irn_3dj_cdb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|