doc_id | url | title | document
---|---|---|---
D04178DDE54F21A248DAFF3F1582EB4BF1E9AC43 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/_nodes_TA.html?context=cdpaas&locale=en | Text Analytics (SPSS Modeler) | Text Analytics
SPSS Modeler offers nodes that are specialized for handling text.
The Text Analytics nodes offer powerful text analytics capabilities, using advanced linguistic technologies and Natural Language Processing (NLP) to rapidly process a large variety of unstructured text data and, from this text, extract and organize the key concepts. Text Analytics can also group these concepts into categories.
Around 80% of data held within an organization is in the form of text documents—for example, reports, web pages, e-mails, and call center notes. Text is a key factor in enabling an organization to gain a better understanding of their customers' behavior. A system that incorporates NLP can intelligently extract concepts, including compound phrases. Moreover, knowledge of the underlying language allows classification of terms into related groups, such as products, organizations, or people, using meaning and context. As a result, you can quickly determine the relevance of the information to your needs. These extracted concepts and categories can be combined with existing structured data, such as demographics, and applied to modeling in SPSS Modeler to yield better and more-focused decisions.
Linguistic systems are knowledge sensitive—the more information contained in their dictionaries, the higher the quality of the results. Text Analytics provides a set of linguistic resources, such as dictionaries for terms and synonyms, libraries, and templates. These nodes further allow you to develop and refine these linguistic resources to your context. Fine-tuning of the linguistic resources is often an iterative process and is necessary for accurate concept retrieval and categorization. Custom templates, libraries, and dictionaries for specific domains, such as CRM and genomics, are also included.
Tips for getting started:
* Watch the following video for an overview of Text Analytics.
* See the [Hotel satisfaction example for Text Analytics](https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_ta_hotel.html).
This video provides a visual method to learn the concepts and tasks in this documentation.
Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform.
[https://video.ibm.com/embed/channel/23952663/video/spss-text-analytics-workbench](https://video.ibm.com/embed/channel/23952663/video/spss-text-analytics-workbench)
|
42E228E8218A4FDEF9F2CA0DB53B5B594A475B88 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/_nodes_TA_intro.html?context=cdpaas&locale=en | About text mining (SPSS Modeler) | About text mining
Today, an increasing amount of information is being held in unstructured and semi-structured formats, such as customer e-mails, call center notes, open-ended survey responses, news feeds, web forms, etc. This abundance of information poses a problem to many organizations that ask themselves: How can we collect, explore, and leverage this information?
Text mining is the process of analyzing collections of textual materials in order to capture key concepts and themes and uncover hidden relationships and trends, without requiring that you know the precise words or terms that authors have used to express those concepts. Text mining is sometimes confused with information retrieval, but the two are quite different. While the accurate retrieval and storage of information is an enormous challenge, the extraction and management of quality content, terminology, and relationships contained within the information are the crucial processes.
|
3602C22051EA1148B07446605DD3C57BF7830C3A | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/_nodes_TA_intro_categorize.html?context=cdpaas&locale=en | How categorization works (SPSS Modeler) | How categorization works
When creating category models in Text Analytics, there are several different techniques you can choose from to create categories. Because every dataset is unique, the number of techniques and the order in which you apply them may change.
Since your interpretation of the results may be different from someone else's, you may need to experiment with the different techniques to see which one produces the best results for your text data. In Text Analytics, you can create category models in a workbench session in which you can explore and fine-tune your categories further.
In this documentation, category building refers to the generation of category definitions and classification through the use of one or more built-in techniques, and categorization refers to the scoring, or labeling, process whereby unique identifiers (name/ID/value) are assigned to the category definitions for each record or document.
During category building, the concepts and types that were extracted are used as the building blocks for your categories. When you build categories, the records or documents are automatically assigned to categories if they contain text that matches an element of a category's definition.
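As a rough sketch of that matching step, the following plain Python snippet assigns a record to every category whose definition contains one of the concepts extracted from its text. The category names and concept lists are invented for illustration; the Text Analytics engine works from its linguistic resources, not from code like this.

```python
# Toy illustration of categorization: a record is assigned to every category
# whose definition contains at least one of the concepts extracted from it.
# Category names and concept lists below are invented for this example.
category_definitions = {
    "Service": {"staff", "reception", "service"},
    "Cleanliness": {"clean", "dirty", "housekeeping"},
}

def categorize(extracted_concepts):
    """Return the categories whose definitions match any extracted concept."""
    concepts = {c.lower() for c in extracted_concepts}
    return [name for name, terms in category_definitions.items()
            if terms & concepts]

print(categorize(["reception", "dirty"]))  # ['Service', 'Cleanliness']
```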
Text Analytics offers you several automated category building techniques to help you categorize your documents or records quickly.
|
F976E639BDE8A2B880E46D94F4C832B6ED9A9303 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/_nodes_TA_intro_extract.html?context=cdpaas&locale=en | How extraction works (SPSS Modeler) | How extraction works
During the extraction of key concepts and ideas from your responses, Text Analytics relies on linguistics-based text analysis. This approach offers the speed and cost-effectiveness of statistics-based systems, but with a far higher degree of accuracy and far less human intervention. Linguistics-based text analysis is based on the field of study known as natural language processing, also known as computational linguistics.
Understanding how the extraction process works can help you make key decisions when fine-tuning your linguistic resources (libraries, types, synonyms, and more). Steps in the extraction process include:
* Converting source data to a standard format
* Identifying candidate terms
* Identifying equivalence classes and integration of synonyms
* Assigning a type
* Indexing
* Matching patterns and events extraction
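The following toy Python sketch walks through the first few of these steps on a single string: standardizing the text, finding candidate terms, folding synonyms into an equivalence class, and assigning a type. The synonym and type dictionaries are invented stand-ins for the real linguistic resources, and the sketch ignores indexing and pattern matching entirely.

```python
import re

# Invented stand-ins for the linguistic resources.
SYNONYMS = {"automobile": "car", "autos": "car"}           # equivalence classes
TYPES = {"car": "<Product>", "ibm": "<Organization>"}      # type dictionary

def extract(text):
    text = text.lower()                                    # convert to a standard format
    candidates = re.findall(r"[a-z]+", text)               # identify candidate terms
    concepts = [SYNONYMS.get(t, t) for t in candidates]    # integrate synonyms
    return [(c, TYPES.get(c, "<Unknown>")) for c in concepts]  # assign a type

print(extract("IBM repaired the automobile"))
# [('ibm', '<Organization>'), ('repaired', '<Unknown>'), ('the', '<Unknown>'), ('car', '<Product>')]
```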
|
B0B80EB59E769546EEDF8CA32A493BF38C6A9707 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/_nodes_export.html?context=cdpaas&locale=en | Export nodes (SPSS Modeler) | Export
Export nodes provide a mechanism for exporting data in various formats to interface with your other software tools.
|
B6DC074F83F9E8984B9CD3A3BF5B392BC4A61844 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/_nodes_extension.html?context=cdpaas&locale=en | Extension nodes (SPSS Modeler) | Extension nodes
SPSS Modeler supports R and Apache Spark (via Python).
To complement SPSS Modeler and its data mining abilities, several Extension nodes are available to enable expert users to input their own R scripts or Python for Spark scripts to carry out data processing, model building, and model scoring.
* The Extension Import node is available under Import on the Node Palette. See [Extension Import node](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/extension_importer.html).
* The Extension Model node is available under Modeling on the Node Palette. See [Extension Model node](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/extension_build.html).
* The Extension Output node is available under Outputs on the Node Palette. See [Extension Output node](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/extension_output.html).
* The Extension Export node is available under Export on the Node Palette. See [Extension Export node](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/extension_export.html).
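As an illustration of the kind of Python for Spark processing such a script might perform, here is a minimal, self-contained PySpark sketch. In a real Extension node the input DataFrame would be supplied by the node's runtime context and the result handed back to it (that interface is not shown here and should be checked against the node's documentation); the column names and filter condition are invented.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("extension-node-sketch").getOrCreate()

# Invented sample data standing in for the data an Extension node would receive.
df = spark.createDataFrame(
    [(1, 120.0), (2, 35.5), (3, 0.0)],
    ["customer_id", "sales"],
)

# Example processing: drop zero-sales rows and derive a flag field.
result = df.filter(col("sales") > 0).withColumn("high_sales", col("sales") > 100)
result.show()
```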
|
4E571695FB4E12489157704D87F89DF5DAD1A580 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/_nodes_field_operations.html?context=cdpaas&locale=en | Field Operations nodes (SPSS Modeler) | Field Operations
After an initial data exploration, you will probably need to select, clean, or construct data in preparation for analysis. The Field Operations palette contains many nodes useful for this transformation and preparation.
For example, using a Derive node, you might create an attribute that is not currently represented in the data. Or you might use a Binning node to recode field values automatically for targeted analysis. You will probably find yourself using a Type node frequently—it allows you to assign a measurement level, values, and a modeling role for each field in the dataset. Its operations are useful for handling missing values and downstream modeling.
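As an analogy only (not how the nodes are implemented), the same derive-and-bin preparation looks like this in a few lines of Python, with invented field names:

```python
import pandas as pd

df = pd.DataFrame({"income": [18000, 42000, 87000], "expenses": [12000, 30000, 40000]})

# Derive node analogy: create an attribute not present in the original data.
df["savings"] = df["income"] - df["expenses"]

# Binning node analogy: recode a continuous field into ordered bands.
df["income_band"] = pd.cut(
    df["income"], bins=[0, 30000, 60000, float("inf")], labels=["low", "medium", "high"]
)
print(df)
```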
|
2AEC614E6CBE5D4963D53DEC7E22877D5A1BEDE8 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/_nodes_graphs.html?context=cdpaas&locale=en | Graph nodes (SPSS Modeler) | Graphs
Several phases of the data mining process use graphs and charts to explore data brought in to watsonx.ai.
For example, you can connect a Plot or Distribution node to a data source to gain insight into data types and distributions. You can then perform record and field manipulations to prepare the data for downstream modeling operations. Another common use of graphs is to check the distribution and relationships between newly derived fields.
|
A9FA1D31F4CC6018DAF5B927908210846B082675 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/_nodes_import.html?context=cdpaas&locale=en | Import nodes (SPSS Modeler) | Import
Use Import nodes to import data stored in various formats, or to generate your own synthetic data.
|
7E30541B3A12F403ADCB02F90BC96134CE6B6386 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/_nodes_modeling.html?context=cdpaas&locale=en | Modeling nodes (SPSS Modeler) | Modeling
Watsonx.ai offers a variety of modeling methods taken from machine learning, artificial intelligence, and statistics.
The methods available on the palette allow you to derive new information from your data and to develop predictive models. Each method has certain strengths and is best suited for particular types of problems. For more information about modeling, see [Creating SPSS Modeler flows](https://dataplatform.cloud.ibm.com/docs/content/wsd/spss-modeler.html#spss-modeler).
|
A56C821C7EE483D01E4338397F62DDD6CB6D5E9F | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/_nodes_outputs.html?context=cdpaas&locale=en | Output nodes (SPSS Modeler) | Outputs
Output nodes provide the means to obtain information about your data and models. They also provide a mechanism for exporting data in various formats to interface with your other software tools.
|
09BB38FB6DF4C562A478D6D3DC54D22823F922FB | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/_nodes_record_operations.html?context=cdpaas&locale=en | Record Operations nodes (SPSS Modeler) | Record Operations
Record Operations nodes are useful for making changes to data at the record level. These operations are important during the data understanding and data preparation phases of data mining because they allow you to tailor the data to your particular business need.
For example, based on the results of a data audit conducted using the Data Audit node (Outputs palette), you might decide that you would like to merge customer purchase records for the past three months. Using a Merge node, you can merge records based on the values of a key field, such as Customer ID. Or you might discover that a database containing information about web site hits is unmanageable with over one million records. Using a Sample node, you can select a subset of data for use in modeling.
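For readers who think in code, the two operations mentioned above look roughly like this in pandas; the data and field names are invented, and this is only an analogy for what the Merge and Sample nodes do:

```python
import pandas as pd

march = pd.DataFrame({"Customer ID": [1, 2], "march_total": [120.0, 45.0]})
april = pd.DataFrame({"Customer ID": [1, 3], "april_total": [80.0, 60.0]})

# Merge node analogy: join records on the values of a key field.
merged = march.merge(april, on="Customer ID", how="outer")

# Sample node analogy: keep a random 10% of an unmanageably large table.
sample = merged.sample(frac=0.1, random_state=42)
```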
|
8435D88B7DC8317B982E1EAA57FA55B8391D00CF | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/aggregate.html?context=cdpaas&locale=en | Aggregate node (SPSS Modeler) | Aggregate node
Aggregation is a data preparation task frequently used to reduce the size of a dataset. Before proceeding with aggregation, you should take time to clean the data, concentrating especially on missing values. After aggregation, potentially useful information regarding missing values may be lost.
You can use an Aggregate node to replace a sequence of input records with summary, aggregated output records. For example, you might have a set of input sales records such as those shown in the following table.
Table 1. Sales record input example

| Age | Sex | Region | Branch | Sales |
| --- | --- | ------ | ------ | ----- |
| 23 | M | S | 8 | 4 |
| 45 | M | S | 16 | 4 |
| 37 | M | S | 8 | 5 |
| 30 | M | S | 5 | 7 |
| 44 | M | N | 4 | 9 |
| 25 | M | N | 2 | 11 |
| 29 | F | S | 16 | 6 |
| 41 | F | N | 4 | 8 |
| 23 | F | N | 6 | 2 |
| 45 | F | N | 4 | 5 |
| 33 | F | N | 6 | 10 |
You can aggregate these records with Sex and Region as key fields. Then choose to aggregate Age with the mode Mean and Sales with the mode Sum. Select the Include record count in field option in the Aggregate node, and your aggregated output will be similar to the following table.
Table 2. Aggregated record example

| Age (mean) | Sex | Region | Sales (sum) | Record Count |
| ---------- | --- | ------ | ----------- | ------------ |
| 35.5 | F | N | 25 | 4 |
| 29 | F | S | 6 | 1 |
| 34.5 | M | N | 20 | 2 |
| 33.75 | M | S | 20 | 4 |
From this you learn, for example, that the average age of the four female sales staff in the North region is 35.5, and the sum total of their sales was 25 units.
Note: Fields such as Branch are automatically discarded when no aggregate mode is specified.
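To make the computation concrete, the following pandas sketch reproduces Table 2 from Table 1 outside of SPSS Modeler; it illustrates the aggregation itself, not the Aggregate node's interface:

```python
import pandas as pd

sales = pd.DataFrame({
    "Age":    [23, 45, 37, 30, 44, 25, 29, 41, 23, 45, 33],
    "Sex":    ["M", "M", "M", "M", "M", "M", "F", "F", "F", "F", "F"],
    "Region": ["S", "S", "S", "S", "N", "N", "S", "N", "N", "N", "N"],
    "Branch": [8, 16, 8, 5, 4, 2, 16, 4, 6, 4, 6],
    "Sales":  [4, 4, 5, 7, 9, 11, 6, 8, 2, 5, 10],
})

# Key fields Sex and Region; Age aggregated by mean, Sales by sum, plus a
# record count. Branch has no aggregate mode, so it is simply dropped.
result = (sales.groupby(["Sex", "Region"], as_index=False)
               .agg(Age_mean=("Age", "mean"),
                    Sales_sum=("Sales", "sum"),
                    Record_Count=("Sales", "size")))
print(result)   # e.g. F/N -> mean age 35.5, sales 25, count 4, matching Table 2
```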
|
6D7B948346F167B5390A0E56E1B6DE83AE31A19A | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/analysis.html?context=cdpaas&locale=en | Analysis node (SPSS Modeler) | Analysis node
With the Analysis node, you can evaluate the ability of a model to generate accurate predictions. Analysis nodes perform various comparisons between predicted values and actual values (your target field) for one or more model nuggets. You can also use Analysis nodes to compare predictive models to other predictive models.
When you execute an Analysis node, a summary of the analysis results is automatically added to the Analysis section on the Summary tab for each model nugget in the executed flow. The detailed analysis results appear on the Outputs tab of the manager window or can be written directly to a file.
Note: Because Analysis nodes compare predicted values to actual values, they are only useful with supervised models (those that require a target field). For unsupervised models such as clustering algorithms, there are no actual results available to use as a basis for comparison.
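A minimal sketch of the kind of comparison the node performs, using invented values and plain Python (the real node also produces coincidence matrices, confidence figures, and further evaluation statistics):

```python
actual    = ["churn", "stay", "stay", "churn", "stay"]   # target field values
predicted = ["churn", "stay", "churn", "churn", "stay"]  # model predictions

correct = sum(a == p for a, p in zip(actual, predicted))
print(f"{correct} of {len(actual)} correct ({correct / len(actual):.0%})")  # 4 of 5 correct (80%)
```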
|
35A87CAEDB1F1B6739159B9C7A31CCE7C8978431 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/anomalydetection.html?context=cdpaas&locale=en | Anomaly node (SPSS Modeler) | Anomaly node
Anomaly detection models are used to identify outliers, or unusual cases, in the data. Unlike other modeling methods that store rules about unusual cases, anomaly detection models store information on what normal behavior looks like. This makes it possible to identify outliers even if they do not conform to any known pattern, and it can be particularly useful in applications, such as fraud detection, where new patterns may constantly be emerging. Anomaly detection is an unsupervised method, which means that it does not require a training dataset containing known cases of fraud to use as a starting point.
While traditional methods of identifying outliers generally look at one or two variables at a time, anomaly detection can examine large numbers of fields to identify clusters or peer groups into which similar records fall. Each record can then be compared to others in its peer group to identify possible anomalies. The further away a case is from the normal center, the more likely it is to be unusual. For example, the algorithm might lump records into three distinct clusters and flag those that fall far from the center of any one cluster.
Each record is assigned an anomaly index, which is the ratio of the group deviation index to its average over the cluster that the case belongs to. The larger the value of this index, the more the case deviates from the average. Under usual circumstances, cases with anomaly index values less than 1 or even 1.5 would not be considered anomalies, because the deviation is about the same as, or only a bit more than, the average. However, cases with an index value greater than 2 could be good anomaly candidates because the deviation is at least twice the average.
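The anomaly index can be approximated outside the node with a simple distance-based stand-in: cluster the records, measure each record's deviation from its cluster center, and divide by the average deviation within that cluster. The sketch below uses scikit-learn k-means and synthetic data purely for illustration; the Anomaly node uses its own clustering and deviation-index calculation, so treat this as an analogy rather than the algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(0, 1, (100, 2)),    # peer group 1
    rng.normal(5, 1, (100, 2)),    # peer group 2
    [[10.0, 0.0]],                 # one injected outlier (row 200)
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels = kmeans.labels_
deviation = np.linalg.norm(X - kmeans.cluster_centers_[labels], axis=1)

# Stand-in for the anomaly index: deviation divided by the average deviation
# of the record's own cluster. Values above roughly 2 are candidate anomalies.
avg = np.array([deviation[labels == c].mean() for c in range(2)])
index = deviation / avg[labels]
print(np.where(index > 2)[0])      # expected to include row 200
```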
Anomaly detection is an exploratory method designed for quick detection of unusual cases or records that should be candidates for further analysis. These should be regarded as suspected anomalies, which, on closer examination, may or may not turn out to be real. You may find that a record is perfectly valid but choose to screen it from the data for purposes of model building. Alternatively, if the algorithm repeatedly turns up false anomalies, this may point to an error or artifact in the data collection process.
Note that anomaly detection identifies unusual records or cases through cluster analysis based on the set of fields selected in the model without regard for any specific target (dependent) field and regardless of whether those fields are relevant to the pattern you are trying to predict. For this reason, you may want to use anomaly detection in combination with feature selection or another technique for screening and ranking fields. For example, you can use feature selection to identify the most important fields relative to a specific target and then use anomaly detection to locate the records that are the most unusual with respect to those fields. (An alternative approach would be to build a decision tree model and then examine any misclassified records as potential anomalies. However, this method would be more difficult to replicate or automate on a large scale.)
Example. In screening agricultural development grants for possible cases of fraud, anomaly detection can be used to discover deviations from the norm, highlighting those records that are abnormal and worthy of further investigation. You are particularly interested in grant applications that seem to claim too much (or too little) money for the type and size of farm.
Requirements. One or more input fields. Note that only fields with a role set to Input using a source or Type node can be used as inputs. Target fields (role set to Target or Both) are ignored.
Strengths. By flagging cases that do not conform to a known set of rules rather than those that do, Anomaly Detection models can identify unusual cases even when they don't follow previously known patterns. When used in combination with feature selection, anomaly detection makes it possible to screen large amounts of data to identify the records of greatest interest relatively quickly.
|
F05134C8C952A7585B82A042B14BCF1234AF9329 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/anonymize.html?context=cdpaas&locale=en | Anonymize node (SPSS Modeler) | Anonymize node
With the Anonymize node, you can disguise field names, field values, or both when working with data that's to be included in a model downstream of the node. In this way, the generated model can be freely distributed (for example, to Technical Support) with no danger that unauthorized users will be able to view confidential data, such as employee records or patients' medical records.
Depending on where you place the Anonymize node in your flow, you may need to make changes to other nodes. For example, if you insert an Anonymize node upstream from a Select node, the selection criteria in the Select node will need to be changed if they are acting on values that have now become anonymized.
The method to be used for anonymizing depends on various factors. For field names and all field values except Continuous measurement levels, the data is replaced by a string of the form:
prefix_Sn
where prefix_ is either a user-specified string or the default string anon_, and n is an integer value that starts at 0 and is incremented for each unique value (for example, anon_S0, anon_S1, etc.).
Field values of type Continuous must be transformed because numeric ranges deal with integer or real values rather than strings. As such, they can be anonymized only by transforming the range into a different range, thus disguising the original data. Transformation of a value x in the range is performed in the following way:
A*(x + B)
where:
A is a scale factor, which must be greater than 0.
B is a translation offset to be added to the values.
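A minimal sketch of both schemes in Python follows, with an invented prefix and arbitrarily chosen A and B; it illustrates the idea only, since the node manages these mappings for you and can restore the original values.

```python
# Categorical values -> prefix_S0, prefix_S1, ... (one code per unique value).
mapping = {}
def anonymize_value(value, prefix="anon_"):
    if value not in mapping:
        mapping[value] = f"{prefix}S{len(mapping)}"
    return mapping[value]

print([anonymize_value(n) for n in ["Smith", "Jones", "Smith"]])
# ['anon_S0', 'anon_S1', 'anon_S0']

# Continuous values -> A*(x + B), with A greater than 0 (values chosen arbitrarily).
A, B = 2.5, 100.0
print([A * (x + B) for x in [23, 45, 37]])   # [307.5, 362.5, 342.5]
```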
|
4C83F9C21CA1E70077C8004BD26FE5FB0FC947EB | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/append.html?context=cdpaas&locale=en | Append node (SPSS Modeler) | Append node
You can use Append nodes to concatenate sets of records. Unlike Merge nodes, which join records from different sources together, Append nodes read and pass downstream all of the records from one source until there are no more. Then the records from the next source are read using the same data structure (number of records, number of fields, and so on) as the first, or primary, input. When the primary source has more fields than another input source, the system null string ($null$) will be used for any incomplete values.
Append nodes are useful for combining datasets with similar structures but different data. For example, you might have transaction data stored in different files for different time periods, such as a sales data file for March and a separate one for April. Assuming that they have the same structure (the same fields in the same order), the Append node will join them together into one large file, which you can then analyze.
Note: To append files, the field measurement levels must be similar. For example, a Nominal field cannot be appended with a field whose measurement level is Continuous.
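As an analogy in pandas (not the node itself), appending two inputs with different fields looks like this; NaN stands in for the $null$ value the Append node would insert:

```python
import pandas as pd

march = pd.DataFrame({"Customer": ["A", "B"], "Sales": [10, 12], "Branch": [1, 2]})
april = pd.DataFrame({"Customer": ["C", "D"], "Sales": [7, 9]})   # no Branch field

# All March records pass downstream first, then the April records; the missing
# Branch values for April are filled (here with NaN).
combined = pd.concat([march, april], ignore_index=True)
print(combined)
```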
|
E14741F9A90592B67437AAED4B7042CD3DC268A8 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/applyextension.html?context=cdpaas&locale=en | Extension model nugget (SPSS Modeler) | Extension model nugget
The Extension model nugget is generated and placed on your flow canvas after running the Extension Model node, which contains your R script or Python for Spark script that defines the model building and model scoring.
By default, the Extension model nugget contains the script that's used for model scoring, options for reading the data, and any output from the R console or Python for Spark. Optionally, the Extension model nugget can also contain various other forms of model output, such as graphs and text output. After the Extension model nugget is generated and added to your flow canvas, an output node can be connected to it. The output node is then used in the usual way within your flow to obtain information about the data and models, and for exporting data in various formats.
|
9346A72CFCD74DFDA05213A2A321BF9CFB823358 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/apriori.html?context=cdpaas&locale=en | Apriori node (SPSS Modeler) | Apriori node
The Apriori node discovers association rules in your data.
Association rules are statements of the form:
if antecedent(s) then consequent(s)
For example, if a customer purchases a razor and aftershave, then that customer will purchase shaving cream with 80% confidence. Apriori extracts a set of rules from the data, pulling out the rules with the highest information content. It offers five different methods of selecting rules and uses a sophisticated indexing scheme to efficiently process large data sets.
Requirements. To create an Apriori rule set, you need one or more Input fields and one or more Target fields. Input and output fields (those with the role Input, Target, or Both) must be symbolic. Fields with the role None are ignored. Field types must be fully instantiated before executing the node. Data can be in tabular or transactional format.
Strengths. For large problems, Apriori is generally faster to train. It also has no arbitrary limit on the number of rules that can be retained and can handle rules with up to 32 preconditions. Apriori offers five different training methods, allowing more flexibility in matching the data mining method to the problem at hand.
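The two input layouts mentioned in the requirements can be pictured as follows; the field names and items are invented, and this sketch only illustrates the data shapes, not node configuration:

```python
import pandas as pd

# Tabular format: one record per customer, one flag field per item.
tabular = pd.DataFrame({
    "customer":      [1, 2, 3],
    "razor":         ["T", "T", "F"],
    "aftershave":    ["T", "F", "F"],
    "shaving_cream": ["T", "F", "T"],
})

# Transactional format: one row per customer/item pair.
transactional = pd.DataFrame({
    "customer": [1, 1, 1, 2, 3],
    "item": ["razor", "aftershave", "shaving_cream", "razor", "shaving_cream"],
})
```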
|
27091A60BA512E180C699261ECFFDC3A621418A5 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/associationrules.html?context=cdpaas&locale=en | Association Rules node (SPSS Modeler) | Association Rules node
Association rules associate a particular conclusion (the purchase of a particular product, for example) with a set of conditions (the purchase of several other products, for example).
For example, the rule
beer <= cannedveg & frozenmeal (173, 17.0%, 0.84)
states that beer often occurs when cannedveg and frozenmeal occur together. The rule is 84% reliable and applies to 17% of the data, or 173 records. Association rule algorithms automatically find the associations that you could find manually using visualization techniques, such as the Web node.
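The figures attached to a rule can be reproduced from raw transactions. The sketch below uses a handful of invented baskets to count how many records contain both antecedents (the rule's instances), what share of all records that is (its coverage), and how often the consequent appears among them (its confidence); it mirrors the reported numbers in spirit, not the algorithm that finds the rules.

```python
# Invented baskets; each set is one customer record.
baskets = [
    {"beer", "cannedveg", "frozenmeal"},
    {"cannedveg", "frozenmeal"},
    {"beer", "cannedveg", "frozenmeal", "fish"},
    {"beer", "fish"},
    {"cannedveg"},
]

antecedents, consequent = {"cannedveg", "frozenmeal"}, "beer"
matches = [b for b in baskets if antecedents <= b]        # records with both antecedents
instances = len(matches)                                  # rule instances
coverage = instances / len(baskets)                       # share of all records
confidence = sum(consequent in b for b in matches) / instances

print(instances, f"{coverage:.0%}", f"{confidence:.2f}")  # 3 60% 0.67
```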
The advantage of association rule algorithms over the more standard decision tree algorithms (C5.0 and C&R Trees) is that associations can exist between any of the attributes. A decision tree algorithm will build rules with only a single conclusion, whereas association algorithms attempt to find many rules, each of which may have a different conclusion.
The disadvantage of association algorithms is that they try to find patterns within a potentially very large search space and, hence, can require much more time to run than a decision tree algorithm. The algorithms use a generate-and-test method for finding rules: simple rules are generated initially, and these are validated against the dataset. The good rules are stored, and all rules, subject to various constraints, are then specialized. Specialization is the process of adding conditions to a rule. These new rules are then validated against the data, and the process iteratively stores the best or most interesting rules found. The user usually supplies some limit on the number of antecedents to allow in a rule, and various techniques based on information theory or efficient indexing schemes are used to reduce the potentially large search space.
At the end of the processing, a table of the best rules is presented. Unlike a decision tree, this set of association rules cannot be used directly to make predictions in the way that a standard model (such as a decision tree or a neural network) can. This is due to the many different possible conclusions for the rules. Another level of transformation is required to transform the association rules into a classification rule set. Hence, the association rules produced by association algorithms are known as unrefined models. Although the user can browse these unrefined models, they cannot be used explicitly as classification models unless the user tells the system to generate a classification model from the unrefined model. This is done from the browser through a Generate menu option.
Two association rule algorithms are supported:
* The Apriori node extracts a set of rules from the data, pulling out the rules with the highest information content. Apriori offers five different methods of selecting rules and uses a sophisticated indexing scheme to process large data sets efficiently. For large problems, Apriori is generally faster to train; it has no arbitrary limit on the number of rules that can be retained, and it can handle rules with up to 32 preconditions. Apriori requires that input and output fields all be categorical but delivers better performance because it is optimized for this type of data.
* The Sequence node discovers association rules in sequential or time-oriented data. A sequence is a list of item sets that tends to occur in a predictable order. For example, a customer who purchases a razor and aftershave lotion may purchase shaving cream the next time he shops. The Sequence node is based on the CARMA association rules algorithm, which uses an efficient two-pass method for finding sequences.
|
1ACF5ED461253F09DB844C2D84C1AE21277BC1E6 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/autoclassifier.html?context=cdpaas&locale=en | Auto Classifier node (SPSS Modeler) | Auto Classifier node
The Auto Classifier node estimates and compares models for either nominal (set) or binary (yes/no) targets, using a number of different methods, enabling you to try out a variety of approaches in a single modeling run. You can select the algorithms to use, and experiment with multiple combinations of options. For example, rather than choose between Radial Basis Function, polynomial, sigmoid, or linear methods for an SVM, you can try them all. The node explores every possible combination of options, ranks each candidate model based on the measure you specify, and saves the best models for use in scoring or further analysis.
Example
: A retail company has historical data tracking the offers made to specific customers in past campaigns. The company now wants to achieve more profitable results by matching the appropriate offer to each customer.
Requirements
: A target field with a measurement level of either Nominal or Flag (with the role set to Target), and at least one input field (with the role set to Input). For a flag field, the True value defined for the target is assumed to represent a hit when calculating profits, lift, and related statistics. Input fields can have a measurement level of Continuous or Categorical, with the limitation that some inputs may not be appropriate for some model types. For example, ordinal fields used as inputs in C&R Tree, CHAID, and QUEST models must have numeric storage (not string), and will be ignored by these models if specified otherwise. Similarly, continuous input fields can be binned in some cases. The requirements are the same as when using the individual modeling nodes; for example, a Bayes Net model works the same whether generated from the Bayes Net node or the Auto Classifier node.
Frequency and weight fields
: Frequency and weight are used to give extra importance to some records over others because, for example, the user knows that the build dataset under-represents a section of the parent population (Weight) or because one record represents a number of identical cases (Frequency). If specified, a frequency field can be used by C&R Tree, CHAID, QUEST, Decision List, and Bayes Net models. A weight field can be used by C&RT, CHAID, and C5.0 models. Other model types will ignore these fields and build the models anyway. Frequency and weight fields are used only for model building, and are not considered when evaluating or scoring models.
Prefixes
: If you attach a table node to the nugget for the Auto Classifier Node, there are several new variables in the table with names that begin with a $ prefix.
: The names of the fields that are generated during scoring are based on the target field, but with a standard prefix. Different model types use different sets of prefixes.
: For example, the prefixes $G, $R, and $C are used for predictions that are generated by the Generalized Linear model, CHAID model, and C5.0 model, respectively. $X is typically generated by an ensemble, and $XR, $XS, and $XF are used as prefixes in cases where the target field is a Continuous, Categorical, or Flag field, respectively.
: $..C prefixes are used for the prediction confidence of a Categorical or Flag target; for example, $XFC is used as the prefix for ensemble Flag prediction confidence. $RC and $CC are the prefixes for the prediction confidence of a single CHAID model and C5.0 model, respectively.
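As an illustration only (plain Python, not the SPSS Modeler scripting API), the following sketch assembles the scored-field names described above for a hypothetical flag target named response; the hyphen between prefix and target name is an assumption made for the example.

```python
# Prefixes taken from the description above; the target name is hypothetical.
target = "response"

generated_fields = {
    "Generalized Linear prediction": "$G",
    "CHAID prediction":              "$R",
    "CHAID confidence":              "$RC",
    "C5.0 prediction":               "$C",
    "C5.0 confidence":               "$CC",
    "Ensemble flag prediction":      "$XF",
    "Ensemble flag confidence":      "$XFC",
}

for description, prefix in generated_fields.items():
    print(f"{description}: {prefix}-{target}")
```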
| # Auto Classifier node #
The Auto Classifier node estimates and compares models for either nominal (set) or binary (yes/no) targets, using a number of different methods, enabling you to try out a variety of approaches in a single modeling run\. You can select the algorithms to use, and experiment with multiple combinations of options\. For example, rather than choose between Radial Basis Function, polynomial, sigmoid, or linear methods for an SVM, you can try them all\. The node explores every possible combination of options, ranks each candidate model based on the measure you specify, and saves the best models for use in scoring or further analysis\.
Example
: A retail company has historical data tracking the offers made to specific customers in past campaigns\. The company now wants to achieve more profitable results by matching the appropriate offer to each customer\.
Requirements
: A target field with a measurement level of either `Nominal` or `Flag` (with the role set to Target), and at least one input field (with the role set to Input)\. For a flag field, the `True` value defined for the target is assumed to represent a hit when calculating profits, lift, and related statistics\. Input fields can have a measurement level of `Continuous` or `Categorical`, with the limitation that some inputs may not be appropriate for some model types\. For example, ordinal fields used as inputs in C&R Tree, CHAID, and QUEST models must have numeric storage (not string), and will be ignored by these models if specified otherwise\. Similarly, continuous input fields can be binned in some cases\. The requirements are the same as when using the individual modeling nodes; for example, a Bayes Net model works the same whether generated from the Bayes Net node or the Auto Classifier node\.
Frequency and weight fields
: Frequency and weight are used to give extra importance to some records over others because, for example, the user knows that the build dataset under\-represents a section of the parent population (Weight) or because one record represents a number of identical cases (Frequency)\. If specified, a frequency field can be used by C&R Tree, CHAID, QUEST, Decision List, and Bayes Net models\. A weight field can be used by C&RT, CHAID, and C5\.0 models\. Other model types will ignore these fields and build the models anyway\. Frequency and weight fields are used only for model building, and are not considered when evaluating or scoring models\.
Prefixes
: If you attach a table node to the nugget for the Auto Classifier Node, there are several new variables in the table with names that begin with a $ prefix\.
: The names of the fields that are generated during scoring are based on the target field, but with a standard prefix\. Different model types use different sets of prefixes\.
: For example, the prefixes $G, $R, and $C are used for predictions that are generated by the Generalized Linear model, CHAID model, and C5\.0 model, respectively\. $X is typically generated by an ensemble, and $XR, $XS, and $XF are used as prefixes in cases where the target field is a Continuous, Categorical, or Flag field, respectively\.
: $\.\.C prefixes are used for the prediction confidence of a Categorical or Flag target; for example, $XFC is used as the prefix for ensemble Flag prediction confidence\. $RC and $CC are the prefixes for the prediction confidence of a single CHAID model and C5\.0 model, respectively\.
<!-- </article "role="article" "> -->
|
3A9DC582441C2474E183DA0E7DAC20FB182842C2 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/autocluster.html?context=cdpaas&locale=en | Auto Cluster node (SPSS Modeler) | Auto Cluster node
The Auto Cluster node estimates and compares clustering models that identify groups of records with similar characteristics. The node works in the same manner as other automated modeling nodes, enabling you to experiment with multiple combinations of options in a single modeling pass. Models can be compared using basic measures with which to attempt to filter and rank the usefulness of the cluster models, and provide a measure based on the importance of particular fields.
Clustering models are often used to identify groups that can be used as inputs in subsequent analyses. For example, you may want to target groups of customers based on demographic characteristics such as income, or based on the services they have bought in the past. You can do this without prior knowledge about the groups and their characteristics -- you may not know how many groups to look for, or what features to use in defining them. Clustering models are often referred to as unsupervised learning models, since they do not use a target field, and do not return a specific prediction that can be evaluated as true or false. The value of a clustering model is determined by its ability to capture interesting groupings in the data and provide useful descriptions of those groupings.
Requirements. One or more fields that define characteristics of interest. Cluster models do not use target fields in the same manner as other models, because they do not make specific predictions that can be assessed as true or false. Instead, they are used to identify groups of cases that may be related. For example, you cannot use a cluster model to predict whether a given customer will churn or respond to an offer. But you can use a cluster model to assign customers to groups based on their tendency to do those things. Weight and frequency fields are not used.
Evaluation fields. While no target is used, you can optionally specify one or more evaluation fields to be used in comparing models. The usefulness of a cluster model may be evaluated by measuring how well (or badly) the clusters differentiate these fields.
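As a minimal sketch of the idea behind evaluation fields (using scikit-learn and pandas, both assumed available, with invented data; this is not how the Auto Cluster node works internally), records are grouped without a target and the clusters are then checked against an evaluation field such as churn:

```python
import pandas as pd
from sklearn.cluster import KMeans

df = pd.DataFrame({
    "income":  [25, 27, 95, 99, 30, 102, 26, 97],  # thousands
    "tenure":  [2, 3, 10, 12, 1, 11, 2, 9],        # years as a customer
    "churned": [1, 1, 0, 0, 1, 0, 1, 0],           # evaluation field, not an input
})

# Cluster on the inputs only; no target field is used.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(df[["income", "tenure"]])

# If churn rates differ sharply across clusters, the clusters differentiate
# the evaluation field well and are likely to be useful.
print(pd.crosstab(clusters, df["churned"], normalize="index"))
```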
| # Auto Cluster node #
The Auto Cluster node estimates and compares clustering models that identify groups of records with similar characteristics\. The node works in the same manner as other automated modeling nodes, enabling you to experiment with multiple combinations of options in a single modeling pass\. Models can be compared using basic measures with which to attempt to filter and rank the usefulness of the cluster models, and provide a measure based on the importance of particular fields\.
Clustering models are often used to identify groups that can be used as inputs in subsequent analyses\. For example, you may want to target groups of customers based on demographic characteristics such as income, or based on the services they have bought in the past\. You can do this without prior knowledge about the groups and their characteristics \-\- you may not know how many groups to look for, or what features to use in defining them\. Clustering models are often referred to as unsupervised learning models, since they do not use a target field, and do not return a specific prediction that can be evaluated as true or false\. The value of a clustering model is determined by its ability to capture interesting groupings in the data and provide useful descriptions of those groupings\.
Requirements\. One or more fields that define characteristics of interest\. Cluster models do not use target fields in the same manner as other models, because they do not make specific predictions that can be assessed as true or false\. Instead, they are used to identify groups of cases that may be related\. For example, you cannot use a cluster model to predict whether a given customer will churn or respond to an offer\. But you can use a cluster model to assign customers to groups based on their tendency to do those things\. Weight and frequency fields are not used\.
Evaluation fields\. While no target is used, you can optionally specify one or more evaluation fields to be used in comparing models\. The usefulness of a cluster model may be evaluated by measuring how well (or badly) the clusters differentiate these fields\.
<!-- </article "role="article" "> -->
|
FD94481E337829121072F5E46CC39B6290E43B44 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/autodataprep.html?context=cdpaas&locale=en | Auto Data Prep node (SPSS Modeler) | Auto Data Prep node
Preparing data for analysis is one of the most important steps in any project—and traditionally, one of the most time consuming. Automated Data Preparation (ADP) handles the task for you, analyzing your data and identifying fixes, screening out fields that are problematic or not likely to be useful, deriving new attributes when appropriate, and improving performance through intelligent screening techniques. You can use the algorithm in fully automatic fashion, allowing it to choose and apply fixes, or you can use it in interactive fashion, previewing the changes before they are made and accept or reject them as you want.
Using ADP enables you to make your data ready for model building quickly and easily, without needing prior knowledge of the statistical concepts involved. Models will tend to build and score more quickly.
Note: When ADP prepares a field for analysis, it creates a new field containing the adjustments or transformations, rather than replacing the existing values and properties of the old field. The old field is not used in further analysis; its role is set to None.
Example. An insurance company with limited resources to investigate homeowner's insurance claims wants to build a model for flagging suspicious, potentially fraudulent claims. Before building the model, they will ready the data for modeling using automated data preparation. Since they want to be able to review the proposed transformations before the transformations are applied, they will use automated data preparation in interactive mode.
An automotive industry group keeps track of the sales for a variety of personal motor vehicles. In an effort to be able to identify over- and underperforming models, they want to establish a relationship between vehicle sales and vehicle characteristics. They will use automated data preparation to prepare the data for analysis, and build models using the data "before" and "after" preparation to see how the results differ.
What is your objective? Automated data preparation recommends data preparation steps that will affect the speed with which other algorithms can build models and improve the predictive power of those models. This can include transforming, constructing and selecting features. The target can also be transformed. You can specify the model-building priorities that the data preparation process should concentrate on.
* Balance speed and accuracy. This option prepares the data to give equal priority to both the speed with which data are processed by model-building algorithms and the accuracy of the predictions.
* Optimize for speed. This option prepares the data to give priority to the speed with which data are processed by model-building algorithms. When you are working with very large datasets, or are looking for a quick answer, select this option.
* Optimize for accuracy. This option prepares the data to give priority to the accuracy of predictions produced by model-building algorithms.
* Custom analysis. When you want to manually change the algorithm on the Settings tab, select this option. Note that this setting is automatically selected if you subsequently make changes to options on the Settings tab that are incompatible with one of the other objectives.
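The following sketch (pandas assumed available; illustrative only, not the ADP algorithm) shows the flavor of fixes ADP applies (filling missing values, trimming extreme values, and deriving indicator fields) and, like ADP, writes the results to new fields rather than overwriting the originals.

```python
import pandas as pd

df = pd.DataFrame({
    "income": [52000, None, 61000, 480000, 58000],   # a missing value and an outlier
    "region": ["east", "west", "east", None, "south"],
})

# Fill the missing value and trim extreme values into a new, transformed field.
income = df["income"].fillna(df["income"].median())
low, high = income.quantile([0.05, 0.95])
df["income_transformed"] = income.clip(low, high)

# Derive indicator (dummy) fields from a categorical field.
df = pd.concat([df, pd.get_dummies(df["region"], prefix="region")], axis=1)

print(df)
```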
| # Auto Data Prep node #
Preparing data for analysis is one of the most important steps in any project—and traditionally, one of the most time consuming\. Automated Data Preparation (ADP) handles the task for you, analyzing your data and identifying fixes, screening out fields that are problematic or not likely to be useful, deriving new attributes when appropriate, and improving performance through intelligent screening techniques\. You can use the algorithm in fully automatic fashion, allowing it to choose and apply fixes, or you can use it in interactive fashion, previewing the changes before they are made and accept or reject them as you want\.
Using ADP enables you to make your data ready for model building quickly and easily, without needing prior knowledge of the statistical concepts involved\. Models will tend to build and score more quickly\.
Note: When ADP prepares a field for analysis, it creates a new field containing the adjustments or transformations, rather than replacing the existing values and properties of the old field\. The old field is not used in further analysis; its role is set to None\.
Example\. An insurance company with limited resources to investigate homeowner's insurance claims wants to build a model for flagging suspicious, potentially fraudulent claims\. Before building the model, they will ready the data for modeling using automated data preparation\. Since they want to be able to review the proposed transformations before the transformations are applied, they will use automated data preparation in interactive mode\.
An automotive industry group keeps track of the sales for a variety of personal motor vehicles\. In an effort to be able to identify over\- and underperforming models, they want to establish a relationship between vehicle sales and vehicle characteristics\. They will use automated data preparation to prepare the data for analysis, and build models using the data "before" and "after" preparation to see how the results differ\.
What is your objective? Automated data preparation recommends data preparation steps that will affect the speed with which other algorithms can build models and improve the predictive power of those models\. This can include transforming, constructing and selecting features\. The target can also be transformed\. You can specify the model\-building priorities that the data preparation process should concentrate on\.
<!-- <ul> -->
* Balance speed and accuracy\. This option prepares the data to give equal priority to both the speed with which data are processed by model\-building algorithms and the accuracy of the predictions\.
* Optimize for speed\. This option prepares the data to give priority to the speed with which data are processed by model\-building algorithms\. When you are working with very large datasets, or are looking for a quick answer, select this option\.
* Optimize for accuracy\. This option prepares the data to give priority to the accuracy of predictions produced by model\-building algorithms\.
* Custom analysis\. When you want to manually change the algorithm on the Settings tab, select this option\. Note that this setting is automatically selected if you subsequently make changes to options on the Settings tab that are incompatible with one of the other objectives\.
<!-- </ul> -->
<!-- </article "role="article" "> -->
|
9D9C67189BE5D6DB22575CF01A75BD5826B92074 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/autonumeric.html?context=cdpaas&locale=en | Auto Numeric node (SPSS Modeler) | Auto Numeric node
The Auto Numeric node estimates and compares models for continuous numeric range outcomes using a number of different methods, enabling you to try out a variety of approaches in a single modeling run. You can select the algorithms to use, and experiment with multiple combinations of options. For example, you could predict housing values using neural net, linear regression, C&RT, and CHAID models to see which performs best, and you could try out different combinations of stepwise, forward, and backward regression methods. The node explores every possible combination of options, ranks each candidate model based on the measure you specify, and saves the best for use in scoring or further analysis.
Example
: A municipality wants to more accurately estimate real estate taxes and to adjust values for specific properties as needed without having to inspect every property. Using the Auto Numeric node, the analyst can generate and compare a number of models that predict property values based on building type, neighborhood, size, and other known factors.
Requirements
: A single target field (with the role set to Target), and at least one input field (with the role set to Input). The target must be a continuous (numeric range) field, such as age or income. Input fields can be continuous or categorical, with the limitation that some inputs may not be appropriate for some model types. For example, C&R Tree models can use categorical string fields as inputs, while linear regression models cannot use these fields and will ignore them if specified. The requirements are the same as when using the individual modeling nodes. For example, a CHAID model works the same whether generated from the CHAID node or the Auto Numeric node.
Frequency and weight fields
: Frequency and weight are used to give extra importance to some records over others because, for example, the user knows that the build dataset under-represents a section of the parent population (Weight) or because one record represents a number of identical cases (Frequency). If specified, a frequency field can be used by C&R Tree and CHAID algorithms. A weight field can be used by C&RT, CHAID, Regression, and GenLin algorithms. Other model types will ignore these fields and build the models anyway. Frequency and weight fields are used only for model building and are not considered when evaluating or scoring models.
Prefixes
: If you attach a table node to the nugget for the Auto Numeric Node, there are several new variables in the table with names that begin with a $ prefix.
: The names of the fields that are generated during scoring are based on the target field, but with a standard prefix. Different model types use different sets of prefixes.
: For example, the prefixes $G, $R, and $C are used for predictions that are generated by the Generalized Linear model, CHAID model, and C5.0 model, respectively. $X is typically generated by an ensemble, and $XR, $XS, and $XF are used as prefixes in cases where the target field is a Continuous, Categorical, or Flag field, respectively.
: $..E prefixes are used for the prediction confidence of a Continuous target; for example, $XRE is used as the prefix for ensemble Continuous prediction confidence. $GE is the prefix for the prediction confidence of a single Generalized Linear model.
| # Auto Numeric node #
The Auto Numeric node estimates and compares models for continuous numeric range outcomes using a number of different methods, enabling you to try out a variety of approaches in a single modeling run\. You can select the algorithms to use, and experiment with multiple combinations of options\. For example, you could predict housing values using neural net, linear regression, C&RT, and CHAID models to see which performs best, and you could try out different combinations of stepwise, forward, and backward regression methods\. The node explores every possible combination of options, ranks each candidate model based on the measure you specify, and saves the best for use in scoring or further analysis\.
Example
: A municipality wants to more accurately estimate real estate taxes and to adjust values for specific properties as needed without having to inspect every property\. Using the Auto Numeric node, the analyst can generate and compare a number of models that predict property values based on building type, neighborhood, size, and other known factors\.
Requirements
: A single target field (with the role set to Target), and at least one input field (with the role set to Input)\. The target must be a continuous (numeric range) field, such as *age* or *income*\. Input fields can be continuous or categorical, with the limitation that some inputs may not be appropriate for some model types\. For example, C&R Tree models can use categorical string fields as inputs, while linear regression models cannot use these fields and will ignore them if specified\. The requirements are the same as when using the individual modeling nodes\. For example, a CHAID model works the same whether generated from the CHAID node or the Auto Numeric node\.
Frequency and weight fields
: Frequency and weight are used to give extra importance to some records over others because, for example, the user knows that the build dataset under\-represents a section of the parent population (Weight) or because one record represents a number of identical cases (Frequency)\. If specified, a frequency field can be used by C&R Tree and CHAID algorithms\. A weight field can be used by C&RT, CHAID, Regression, and GenLin algorithms\. Other model types will ignore these fields and build the models anyway\. Frequency and weight fields are used only for model building and are not considered when evaluating or scoring models\.
Prefixes
: If you attach a table node to the nugget for the Auto Numeric Node, there are several new variables in the table with names that begin with a $ prefix\.
: The names of the fields that are generated during scoring are based on the target field, but with a standard prefix\. Different model types use different sets of prefixes\.
: For example, the prefixes $G, $R, and $C are used for predictions that are generated by the Generalized Linear model, CHAID model, and C5\.0 model, respectively\. $X is typically generated by an ensemble, and $XR, $XS, and $XF are used as prefixes in cases where the target field is a Continuous, Categorical, or Flag field, respectively\.
: $\.\.E prefixes are used for the prediction confidence of a Continuous target; for example, $XRE is used as the prefix for ensemble Continuous prediction confidence\. $GE is the prefix for the prediction confidence of a single Generalized Linear model\.
<!-- </article "role="article" "> -->
|
0294AB8C0FBC393F5C227A0F8BEBCCDC67B78B1D | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/balance.html?context=cdpaas&locale=en | Balance node (SPSS Modeler) | Balance node
You can use Balance nodes to correct imbalances in datasets so they conform to specified test criteria.
For example, suppose that a dataset has only two values--low or high--and that 90% of the cases are low while only 10% of the cases are high. Many modeling techniques have trouble with such biased data because they will tend to learn only the low outcome and ignore the high one, since it is more rare. If the data is well balanced with approximately equal numbers of low and high outcomes, models will have a better chance of finding patterns that distinguish the two groups. In this case, a Balance node is useful for creating a balancing directive that reduces cases with a low outcome.
Balancing is carried out by duplicating and then discarding records based on the conditions you specify. Records for which no condition holds are always passed through. Because this process works by duplicating and/or discarding records, the original sequence of your data is lost in downstream operations. Be sure to derive any sequence-related values before adding a Balance node to the data stream.
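A minimal sketch of what a balancing directive does is shown below (pandas and numpy assumed available; this is not the Balance node's implementation). Records that match a condition are kept with a given factor: a factor below 1 discards a share of them, and a factor above 1 duplicates them; records matching no directive pass through unchanged.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({"outcome": ["low"] * 90 + ["high"] * 10})

def balance(data, directives):
    """Apply (condition, factor) directives by duplicating or discarding records."""
    kept = []
    for _, row in data.iterrows():
        factor = 1.0
        for condition, f in directives:
            if condition(row):
                factor = f
                break
        # A fractional factor keeps the record with that probability;
        # a factor above 1 emits that many copies (plus a probabilistic extra).
        copies = int(factor) + (1 if rng.random() < factor - int(factor) else 0)
        kept.extend([row] * copies)
    return pd.DataFrame(kept)

# Reduce the over-represented "low" records to roughly one ninth of their count.
balanced = balance(df, [(lambda r: r["outcome"] == "low", 1 / 9)])
print(balanced["outcome"].value_counts())
```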
| # Balance node #
You can use Balance nodes to correct imbalances in datasets so they conform to specified test criteria\.
For example, suppose that a dataset has only two values\-\-`low` or `high`\-\-and that 90% of the cases are `low` while only 10% of the cases are `high`\. Many modeling techniques have trouble with such biased data because they will tend to learn only the *low* outcome and ignore the *high* one, since it is more rare\. If the data is well balanced with approximately equal numbers of `low` and `high` outcomes, models will have a better chance of finding patterns that distinguish the two groups\. In this case, a Balance node is useful for creating a balancing directive that reduces cases with a *low* outcome\.
Balancing is carried out by duplicating and then discarding records based on the conditions you specify\. Records for which no condition holds are always passed through\. Because this process works by duplicating and/or discarding records, the original sequence of your data is lost in downstream operations\. Be sure to derive any sequence\-related values before adding a Balance node to the data stream\.
<!-- </article "role="article" "> -->
|
1D5D80DFF65EE4195713EEEB43F1291B79779A6B | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/bayesnet.html?context=cdpaas&locale=en | Bayes Net node (SPSS Modeler) | Bayes Net node
The Bayesian Network node enables you to build a probability model by combining observed and recorded evidence with "common-sense" real-world knowledge to establish the likelihood of occurrences by using seemingly unlinked attributes. The node focuses on Tree Augmented Naïve Bayes (TAN) and Markov Blanket networks that are primarily used for classification.
Bayesian networks are used for making predictions in many varied situations; some examples are:
* Selecting loan opportunities with low default risk.
* Estimating when equipment will need service, parts, or replacement, based on sensor input and existing records.
* Resolving customer problems via online troubleshooting tools.
* Diagnosing and troubleshooting cellular telephone networks in real-time.
* Assessing the potential risks and rewards of research-and-development projects in order to focus resources on the best opportunities.
A Bayesian network is a graphical model that displays variables (often referred to as nodes) in a dataset and the probabilistic, or conditional, independencies between them. Causal relationships between nodes may be represented by a Bayesian network; however, the links in the network (also known as arcs) do not necessarily represent direct cause and effect. For example, a Bayesian network can be used to calculate the probability of a patient having a specific disease, given the presence or absence of certain symptoms and other relevant data, if the probabilistic independencies between symptoms and disease as displayed on the graph hold true. Networks are very robust where information is missing and make the best possible prediction using whatever information is present.
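Reduced to a single disease and a single symptom, the kind of update a Bayesian network performs looks like the following sketch (the probabilities are invented for illustration):

```python
# Prior and conditional probabilities (illustrative figures only).
p_disease = 0.01                    # P(disease)
p_symptom_given_disease = 0.90      # P(symptom | disease)
p_symptom_given_healthy = 0.05      # P(symptom | no disease)

# Total probability of observing the symptom.
p_symptom = (p_symptom_given_disease * p_disease
             + p_symptom_given_healthy * (1 - p_disease))

# Bayes' rule: the probability of the disease once the symptom is observed.
p_disease_given_symptom = p_symptom_given_disease * p_disease / p_symptom

print(round(p_disease_given_symptom, 3))   # about 0.154
```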
A common, basic example of a Bayesian network was created by Lauritzen and Spiegelhalter (1988). It is often referred to as the "Asia" model and is a simplified version of a network that may be used to diagnose a doctor's new patients, with the direction of the links roughly corresponding to causality. Each node represents a facet that may relate to the patient's condition; for example, "Smoking" indicates that they are a confirmed smoker, and "VisitAsia" shows if they recently visited Asia. Probability relationships are shown by the links between any nodes; for example, smoking increases the chances of the patient developing both bronchitis and lung cancer, whereas age only seems to be associated with the possibility of developing lung cancer. In the same way, abnormalities on an x-ray of the lungs may be caused by either tuberculosis or lung cancer, while the chances of a patient suffering from shortness of breath (dyspnea) are increased if they also suffer from either bronchitis or lung cancer.
Figure 1. Lauritzen and Spiegelhalter's Asia network example

There are several reasons why you might decide to use a Bayesian network:
* It helps you learn about causal relationships. From this, it enables you to understand a problem area and to predict the consequences of any intervention.
* The network provides an efficient approach for avoiding the overfitting of data.
* A clear visualization of the relationships involved is easily observed.
Requirements. Target fields must be categorical and can have a measurement level of Nominal, Ordinal, or Flag. Inputs can be fields of any type. Continuous (numeric range) input fields will be automatically binned; however, if the distribution is skewed, you may obtain better results by manually binning the fields using a Binning node before the Bayesian Network node. For example, use Optimal Binning where the Supervisor field is the same as the Bayesian Network node Target field.
Example. An analyst for a bank wants to be able to predict customers, or potential customers, who are likely to default on their loan repayments. You can use a Bayesian network model to identify the characteristics of customers most likely to default, and build several different types of model to establish which is the best at predicting potential defaulters.
Example. A telecommunications operator wants to reduce the number of customers who leave the business (known as "churn"), and update the model on a monthly basis using each preceding month's data. You can use a Bayesian network model to identify the characteristics of customers most likely to churn, and continue training the model each month with the new data.
| # Bayes Net node #
The Bayesian Network node enables you to build a probability model by combining observed and recorded evidence with "common\-sense" real\-world knowledge to establish the likelihood of occurrences by using seemingly unlinked attributes\. The node focuses on Tree Augmented Naïve Bayes (TAN) and Markov Blanket networks that are primarily used for classification\.
Bayesian networks are used for making predictions in many varied situations; some examples are:
<!-- <ul> -->
* Selecting loan opportunities with low default risk\.
* Estimating when equipment will need service, parts, or replacement, based on sensor input and existing records\.
* Resolving customer problems via online troubleshooting tools\.
* Diagnosing and troubleshooting cellular telephone networks in real\-time\.
* Assessing the potential risks and rewards of research\-and\-development projects in order to focus resources on the best opportunities\.
<!-- </ul> -->
A Bayesian network is a graphical model that displays variables (often referred to as **nodes**) in a dataset and the probabilistic, or conditional, independencies between them\. Causal relationships between nodes may be represented by a Bayesian network; however, the links in the network (also known as **arcs**) do not necessarily represent direct cause and effect\. For example, a Bayesian network can be used to calculate the probability of a patient having a specific disease, given the presence or absence of certain symptoms and other relevant data, if the probabilistic independencies between symptoms and disease as displayed on the graph hold true\. Networks are very robust where information is missing and make the best possible prediction using whatever information is present\.
A common, basic example of a Bayesian network was created by Lauritzen and Spiegelhalter (1988)\. It is often referred to as the "Asia" model and is a simplified version of a network that may be used to diagnose a doctor's new patients, with the direction of the links roughly corresponding to causality\. Each node represents a facet that may relate to the patient's condition; for example, "Smoking" indicates that they are a confirmed smoker, and "VisitAsia" shows if they recently visited Asia\. Probability relationships are shown by the links between any nodes; for example, smoking increases the chances of the patient developing both bronchitis and lung cancer, whereas age only seems to be associated with the possibility of developing lung cancer\. In the same way, abnormalities on an x\-ray of the lungs may be caused by either tuberculosis or lung cancer, while the chances of a patient suffering from shortness of breath (dyspnea) are increased if they also suffer from either bronchitis or lung cancer\.
Figure 1\. Lauritzen and Spiegelhalter's Asia network example

There are several reasons why you might decide to use a Bayesian network:
<!-- <ul> -->
* It helps you learn about causal relationships\. From this, it enables you to understand a problem area and to predict the consequences of any intervention\.
* The network provides an efficient approach for avoiding the overfitting of data\.
* A clear visualization of the relationships involved is easily observed\.
<!-- </ul> -->
Requirements\. Target fields must be categorical and can have a measurement level of *Nominal*, *Ordinal*, or *Flag*\. Inputs can be fields of any type\. Continuous (numeric range) input fields will be automatically binned; however, if the distribution is skewed, you may obtain better results by manually binning the fields using a Binning node before the Bayesian Network node\. For example, use Optimal Binning where the Supervisor field is the same as the Bayesian Network node Target field\.
Example\. An analyst for a bank wants to be able to predict customers, or potential customers, who are likely to default on their loan repayments\. You can use a Bayesian network model to identify the characteristics of customers most likely to default, and build several different types of model to establish which is the best at predicting potential defaulters\.
Example\. A telecommunications operator wants to reduce the number of customers who leave the business (known as "churn"), and update the model on a monthly basis using each preceding month's data\. You can use a Bayesian network model to identify the characteristics of customers most likely to churn, and continue training the model each month with the new data\.
<!-- </article "role="article" "> -->
|
8B5211BC5AC76B26C8C102E576F0AF560DFBCBC2 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/binning.html?context=cdpaas&locale=en | Binning node (SPSS Modeler) | Binning node
The Binning node enables you to automatically create new nominal fields based on the values of one or more existing continuous (numeric range) fields. For example, you can transform a continuous income field into a new categorical field containing income groups of equal width, or as deviations from the mean. Alternatively, you can select a categorical "supervisor" field in order to preserve the strength of the original association between the two fields.
Binning can be useful for a number of reasons, including:
* Algorithm requirements. Certain algorithms, such as Naive Bayes and Logistic Regression, require categorical inputs.
* Performance. Algorithms such as multinomial logistic may perform better if the number of distinct values of input fields is reduced. For example, use the median or mean value for each bin rather than using the original values.
* Data Privacy. Sensitive personal information, such as salaries, may be reported in ranges rather than actual salary figures in order to protect privacy.
A number of binning methods are available. After you create bins for the new field, you can generate a Derive node based on the cut points.
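A minimal sketch of fixed-width binning and the resulting cut points is shown below (pandas assumed available; it illustrates the idea rather than reproducing the Binning node's methods):

```python
import pandas as pd

income = pd.Series([18000, 23000, 41000, 52000, 67000, 75000, 91000, 120000])

# Divide the observed range into five bins of equal width. The new field holds
# bin membership (a nominal value), and the cut points can be reused downstream,
# much as a generated Derive node reuses them.
income_bin, cut_points = pd.cut(income, bins=5, labels=False, retbins=True)

print(income_bin.tolist())   # bin index for each record
print(cut_points)            # the cut points that define the bins
```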
| # Binning node #
The Binning node enables you to automatically create new nominal fields based on the values of one or more existing continuous (numeric range) fields\. For example, you can transform a continuous income field into a new categorical field containing income groups of equal width, or as deviations from the mean\. Alternatively, you can select a categorical "supervisor" field in order to preserve the strength of the original association between the two fields\.
Binning can be useful for a number of reasons, including:
<!-- <ul> -->
* Algorithm requirements\. Certain algorithms, such as Naive Bayes and Logistic Regression, require categorical inputs\.
* Performance\. Algorithms such as multinomial logistic may perform better if the number of distinct values of input fields is reduced\. For example, use the median or mean value for each bin rather than using the original values\.
* Data Privacy\. Sensitive personal information, such as salaries, may be reported in ranges rather than actual salary figures in order to protect privacy\.
<!-- </ul> -->
A number of binning methods are available\. After you create bins for the new field, you can generate a Derive node based on the cut points\.
<!-- </article "role="article" "> -->
|
C5673E6023D99F8354E9B61DA2D2F1B58FBC970F | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/c50.html?context=cdpaas&locale=en | C5.0 node (SPSS Modeler) | C5.0 node
This node uses the C5.0 algorithm to build either a decision tree or a rule set. A C5.0 model works by splitting the sample based on the field that provides the maximum information gain. Each sub-sample defined by the first split is then split again, usually based on a different field, and the process repeats until the subsamples cannot be split any further. Finally, the lowest-level splits are reexamined, and those that do not contribute significantly to the value of the model are removed or pruned.
Note: The C5.0 node can predict only a categorical target. When analyzing data with categorical (nominal or ordinal) fields, the node is likely to group categories together.
C5.0 can produce two kinds of models. A decision tree is a straightforward description of the splits found by the algorithm. Each terminal (or "leaf") node describes a particular subset of the training data, and each case in the training data belongs to exactly one terminal node in the tree. In other words, exactly one prediction is possible for any particular data record presented to a decision tree.
In contrast, a rule set is a set of rules that tries to make predictions for individual records. Rule sets are derived from decision trees and, in a way, represent a simplified or distilled version of the information found in the decision tree. Rule sets can often retain most of the important information from a full decision tree but with a less complex model. Because of the way rule sets work, they do not have the same properties as decision trees. The most important difference is that with a rule set, more than one rule may apply for any particular record, or no rules at all may apply. If multiple rules apply, each rule gets a weighted "vote" based on the confidence associated with that rule, and the final prediction is decided by combining the weighted votes of all of the rules that apply to the record in question. If no rule applies, a default prediction is assigned to the record.
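The weighted-vote idea can be sketched as follows (plain Python with made-up rules and confidences; real C5.0 rule sets are derived from the tree rather than written by hand):

```python
def rule_set_predict(record, rules, default="no"):
    """Combine the weighted votes of every rule that applies to a record."""
    votes = {}
    for condition, prediction, confidence in rules:
        if condition(record):                         # a rule may or may not apply
            votes[prediction] = votes.get(prediction, 0.0) + confidence
    if not votes:                                     # no rule applies: default prediction
        return default
    return max(votes, key=votes.get)                  # strongest combined vote wins

rules = [
    (lambda r: r["age"] < 30,        "yes", 0.70),
    (lambda r: r["income"] > 50000,  "no",  0.60),
    (lambda r: r["has_children"],    "yes", 0.55),
]

print(rule_set_predict({"age": 25, "income": 60000, "has_children": True}, rules))  # "yes"
```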
Example. A medical researcher has collected data about a set of patients, all of whom suffered from the same illness. During their course of treatment, each patient responded to one of five medications. You can use a C5.0 model, in conjunction with other nodes, to help find out which drug might be appropriate for a future patient with the same illness.
Requirements. To train a C5.0 model, there must be one categorical (i.e., nominal or ordinal) Target field, and one or more Input fields of any type. Fields set to Both or None are ignored. Fields used in the model must have their types fully instantiated. A weight field can also be specified.
Strengths. C5.0 models are quite robust in the presence of problems such as missing data and large numbers of input fields. They usually do not require long training times to estimate. In addition, C5.0 models tend to be easier to understand than some other model types, since the rules derived from the model have a very straightforward interpretation. C5.0 also offers the powerful boosting method to increase accuracy of classification.
Tip: C5.0 model building speed may benefit from enabling parallel processing.
Note: When first creating a flow, you select which runtime to use. By default, flows use the IBM SPSS Modeler runtime. If you want to use native Spark algorithms instead of SPSS algorithms, select the Spark runtime. Properties for this node will vary depending on which runtime option you choose.
| # C5\.0 node #
This node uses the C5\.0 algorithm to build either a decision tree or a rule set\. A C5\.0 model works by splitting the sample based on the field that provides the maximum information gain\. Each sub\-sample defined by the first split is then split again, usually based on a different field, and the process repeats until the subsamples cannot be split any further\. Finally, the lowest\-level splits are reexamined, and those that do not contribute significantly to the value of the model are removed or pruned\.
Note: The C5\.0 node can predict only a categorical target\. When analyzing data with categorical (nominal or ordinal) fields, the node is likely to group categories together\.
C5\.0 can produce two kinds of models\. A decision tree is a straightforward description of the splits found by the algorithm\. Each terminal (or "leaf") node describes a particular subset of the training data, and each case in the training data belongs to exactly one terminal node in the tree\. In other words, exactly one prediction is possible for any particular data record presented to a decision tree\.
In contrast, a rule set is a set of rules that tries to make predictions for individual records\. Rule sets are derived from decision trees and, in a way, represent a simplified or distilled version of the information found in the decision tree\. Rule sets can often retain most of the important information from a full decision tree but with a less complex model\. Because of the way rule sets work, they do not have the same properties as decision trees\. The most important difference is that with a rule set, more than one rule may apply for any particular record, or no rules at all may apply\. If multiple rules apply, each rule gets a weighted "vote" based on the confidence associated with that rule, and the final prediction is decided by combining the weighted votes of all of the rules that apply to the record in question\. If no rule applies, a default prediction is assigned to the record\.
Example\. A medical researcher has collected data about a set of patients, all of whom suffered from the same illness\. During their course of treatment, each patient responded to one of five medications\. You can use a C5\.0 model, in conjunction with other nodes, to help find out which drug might be appropriate for a future patient with the same illness\.
Requirements\. To train a C5\.0 model, there must be one categorical (i\.e\., nominal or ordinal) `Target` field, and one or more `Input` fields of any type\. Fields set to `Both` or `None` are ignored\. Fields used in the model must have their types fully instantiated\. A weight field can also be specified\.
Strengths\. C5\.0 models are quite robust in the presence of problems such as missing data and large numbers of input fields\. They usually do not require long training times to estimate\. In addition, C5\.0 models tend to be easier to understand than some other model types, since the rules derived from the model have a very straightforward interpretation\. C5\.0 also offers the powerful boosting method to increase accuracy of classification\.
Tip: C5\.0 model building speed may benefit from enabling parallel processing\.
Note: When first creating a flow, you select which runtime to use\. By default, flows use the IBM SPSS Modeler runtime\. If you want to use native Spark algorithms instead of SPSS algorithms, select the Spark runtime\. Properties for this node will vary depending on which runtime option you choose\.
<!-- </article "role="article" "> -->
|
DE6C4CB72844FC59FD80FC0B26ACC8C94A3BA994 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/cache_nodes.html?context=cdpaas&locale=en | Caching options for nodes (SPSS Modeler) | Caching options for nodes
To optimize the running of flows, you can set up a cache on any nonterminal node. When you set up a cache on a node, the cache is filled with the data that passes through the node the next time you run the data flow. From then on, the data is read from the cache (which is stored temporarily) rather than from the data source.
Caching is most useful following a time-consuming operation such as a sort, merge, or aggregation. For example, suppose that you have an import node set to read sales data from a database and an Aggregate node that summarizes sales by location. You can set up a cache on the Aggregate node rather than on the import node because you want the cache to store the aggregated data rather than the entire data set. Note: Caching at import nodes, which simply stores a copy of the original data as it is read into SPSS Modeler, won't improve performance in most circumstances.
Nodes with caching enabled are displayed with a special circle-backslash icon. When the data is cached at the node, the icon changes to a check mark.
Figure 1. Node with empty cache vs. node with full cache

A circle-backslash icon by a node indicates that its cache is empty. When the cache is full, the icon becomes a check mark. If you want to replace the contents of the cache, you must first flush the cache and then re-run the data flow to refill it.
In your flow, right-click the node to access its cache options.
| # Caching options for nodes #
To optimize the running of flows, you can set up a cache on any nonterminal node\. When you set up a cache on a node, the cache is filled with the data that passes through the node the next time you run the data flow\. From then on, the data is read from the cache (which is stored temporarily) rather than from the data source\.
Caching is most useful following a time\-consuming operation such as a sort, merge, or aggregation\. For example, suppose that you have an import node set to read sales data from a database and an Aggregate node that summarizes sales by location\. You can set up a cache on the Aggregate node rather than on the import node because you want the cache to store the aggregated data rather than the entire data set\. Note: Caching at import nodes, which simply stores a copy of the original data as it is read into SPSS Modeler, won't improve performance in most circumstances\.
Nodes with caching enabled are displayed with a special circle\-backslash icon\. When the data is cached at the node, the icon changes to a check mark\.
Figure 1\. Node with empty cache vs\. node with full cache

A circle\-backslash icon by a node indicates that its cache is empty\. When the cache is full, the icon becomes a check mark\. If you want to replace the contents of the cache, you must first flush the cache and then re\-run the data flow to refill it\.
In your flow, right\-click the node to access its cache options\.
<!-- </article "role="article" "> -->
|
D43DE202E6D3EEE211893585616BDA7EB09211C4 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/caml.html?context=cdpaas&locale=en | Continuous machine learning (SPSS Modeler) | Continuous machine learning
As a result of IBM research, and inspired by natural selection in biology, continuous machine learning is available for the Auto Classifier node and the Auto Numeric node.
A common problem with modeling is that models become outdated as your data changes over time. This is commonly referred to as model drift or concept drift. To help overcome model drift effectively, SPSS Modeler provides continuous automated machine learning.
What is model drift? When you build a model based on historical data, it can become stagnant. In many cases, new data is always coming in—new variations, new patterns, new trends, etc.—that the old historical data doesn't capture. To solve this problem, IBM was inspired by the famous phenomenon in biology called the natural selection of species. Think of models as species and think of data as nature. Just as nature selects species, we should let data select the model. There's one big difference between models and species: species can evolve, but models are static after they're built.
There are two preconditions for species to evolve; the first is gene mutation, and the second is population. Now, from a modeling perspective, to satisfy the first precondition (gene mutation), we should introduce new data changes into the existing model. To satisfy the second precondition (population), we should use a number of models rather than just one. What can represent a number of models? An Ensemble Model Set (EMS)!
The following figure illustrates how an EMS can evolve. The upper left portion of the figure represents historical data with hybrid partitions. The hybrid partitions ensure a rich initial EMS. The upper right portion of the figure represents a new chunk of data that becomes available, with vertical bars on each side. The left vertical bar represents current status, and the right vertical bar represents the status when there's a risk of model drift. In each new round of continuous machine learning, two steps are performed to evolve your model and avoid model drift.
First, you construct an ensemble model set (EMS) using existing training data. After that, when a new chunk of data becomes available, new models are built against that new data and added to the EMS as component models. The weights of existing component models in the EMS are reevaluated using the new data. As a result of this reevaluation, component models having higher weights are selected for the current prediction, and component models having lower weights may be deleted from the EMS. This process refreshes the EMS for both model weights and model instances, thus evolving in a flexible and efficient way to address the inevitable changes to your data over time.
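The refresh cycle can be sketched roughly as follows (plain Python with placeholder training, evaluation, and weighting details; it mirrors the description above rather than SPSS Modeler internals, and the exact form of the weight update is an assumption):

```python
import random

def refresh_ems(ems, new_chunk, build_models, evaluate, beta=0.3,
                drop_below=0.5, top_n=3):
    """One round of continuous machine learning over an ensemble model set."""
    # 1. Grow the EMS with component models trained on the new chunk of data.
    ems = ems + [{"model": m, "amw": None} for m in build_models(new_chunk)]

    # 2. Reevaluate every component on the new chunk (CMW) and fold the result
    #    into the accumulated weight (AMW) by exponential smoothing.
    for comp in ems:
        comp["cmw"] = evaluate(comp["model"], new_chunk)
        comp["amw"] = (comp["cmw"] if comp["amw"] is None
                       else beta * comp["amw"] + (1 - beta) * comp["cmw"])

    # 3. Drop components whose accumulated weight fell below the limit.
    ems = [c for c in ems if c["amw"] >= drop_below]

    # 4. Keep the whole EMS, but score with the current top N components.
    scoring_models = sorted(ems, key=lambda c: c["amw"], reverse=True)[:top_n]
    return ems, scoring_models

# Toy usage with stand-in builders and evaluators.
ems, top = refresh_ems(
    ems=[],
    new_chunk=None,
    build_models=lambda chunk: ["M1", "M2", "M3", "M4"],
    evaluate=lambda model, chunk: random.uniform(0.4, 0.9),
)
print([c["model"] for c in top])
```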
Figure 1. Continuous auto machine learning

The ensemble model set (EMS) is a generated auto model nugget, and there's a refresh link between the auto modeling node and the generated auto model nugget that defines the refresh relationship between them. When you enable continuous auto machine learning, new data assets are continuously fed to auto modeling nodes to generate new component models. The model nugget is updated instead of replaced.
The following figure provides an example of the internal structure of an EMS in a continuous machine learning scenario. Only the top three component models are selected for the current prediction. For each component model (labeled as M1, M2, and M3), two kinds of weights are maintained. Current Model Weight (CMW) describes how a component model performs with a new chunk of data, and Accumulated Model Weight (AMW) describes the comprehensive performance of a component model against recent chunks of data. AMW is calculated iteratively via CMW and previous values of itself, and there's a hyper parameter beta to balance between them. The formula to calculate AMW is called exponential moving average.
When a new chunk of data becomes available, first SPSS Modeler uses it to build a few new component models. In this example figure, model four (M4) is built with CMW and AMW calculated during the initial model building process. Then SPSS Modeler uses the new chunk of data to reevaluate measures of existing component models (M1, M2, and M3) and update their CMW and AMW based on the reevaluation results. Finally, SPSS Modeler might reorder the component models based on CMW or AMW and select the top three component models accordingly.
In this figure, CMW is described using normalized values (sum = 1) and AMW is calculated based on CMW. In SPSS Modeler, the absolute value (equal to the selected evaluation-weighted measure, for example, accuracy) is used to represent CMW and AMW for simplicity.
Figure 2. EMS structure
Note that there are two types of weights defined for each EMS component model, both of which could be used for selecting top N models and component model drop out:
* Current Model Weight (CMW) is computed via evaluation against the new data chunk (for example, evaluation accuracy on the new data chunk).
* Accumulated Model Weight (AMW) is computed via combining both CMW and existing AMW (for example, exponentially weighted moving average (EWMA).
Exponential moving average formula for calculating AMW:

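The exponential moving average referenced above has the following general form, where the accumulated factor β balances the previous accumulated weight against the current weight; the exact placement of β is stated here as an assumption, with the image above being the authoritative form:

```latex
AMW_t = \beta \cdot AMW_{t-1} + (1 - \beta) \cdot CMW_t,
\qquad 0 \le \beta \le 1, \qquad AMW_0 = CMW_0
```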
In SPSS Modeler, after running an Auto Classifier node to generate a model nugget, the following model options are available for continuous machine learning:
* Enable continuous auto machine learning during model refresh. Select this option to enable continuous machine learning. Keep in mind that consistent metadata (data model) must be used to train the continuous auto model. If you select this option, other options are enabled.
* Enable automatic model weights reevaluation. This option controls whether evaluation measures (accuracy, for example) are computed and updated during model refresh. If you select this option, an automatic evaluation process runs on the EMS during model refresh. This is because it's usually necessary to reevaluate existing component models using new data to reflect the current state of your data. The weights of the EMS component models are then assigned according to the reevaluation results, and the weights are used to decide the proportion a component model contributes to the final ensemble prediction. This option is selected by default.
Figure 3. Model settings

Figure 4. Flag target
Following are the supported CMW and AMW for the Auto Classifier node:
Table 1. Supported CMW and AMW
Target type    CMW                                   AMW
flag target    Overall Accuracy, Area Under Curve    Accumulated Accuracy, Accumulated AUC
set target     Overall Accuracy                      Accumulated Accuracy
The following three options are related to AMW, which is used to evaluate how a component model performs during recent data chunk periods:
* Enable accumulated factor during model weights reevaluation. If you select this option, AMW computation will be enabled during model weights reevaluation. AMW represents the comprehensive performance of an EMS component model during recent data chunk periods, related to the accumulated factor β defined in the AMW formula listed previously, which you can adjust in the node properties. When this option isn't selected, only CMW will be computed. This option is selected by default.
* Perform model reduction based on accumulated limit during model refresh. Select this option if you want component models with an AMW value below the specified limit to be removed from the auto model EMS during model refresh. This can be helpful for discarding useless component models so that the auto model EMS doesn't become too heavy. The accumulated limit value evaluation is related to the weighted measure used when Evaluation-weighted voting is selected as the ensemble method. See the following.
Figure 5. Set and flag targets

Note that if you select Model Accuracy for the evaluation-weighted measure, models with an accumulated accuracy below the specified limit will be deleted. And if you select Area under curve for the evaluation-weighted measure, models with an accumulated AUC below the specified limit will be deleted.
By default, Model Accuracy is used for the evaluation-weighted measure for the Auto Classifier node, and there's an optional AUC ROC measure in the case of flag targets.
* Use accumulated evaluation-weighted voting. Select this option if you want AMW to be used for the current scoring/prediction. Otherwise, CMW will be used by default. This option is enabled when Evaluation-weighted voting is selected for the ensemble method.
Note that for flag targets, when this option is selected, choosing Model Accuracy as the evaluation-weighted measure means Accumulated Accuracy is used as the AMW for the current scoring, and choosing Area under curve means Accumulated AUC is used as the AMW. When this option isn't selected, choosing Model Accuracy means Overall Accuracy is used as the CMW for the current scoring, and choosing Area under curve means Area under curve is used as the CMW.
For set targets, if you select this Use accumulated evaluation-weighted voting option, then Accumulated Accuracy will be used as the AMW to perform the current scoring. Otherwise, Overall Accuracy will be used as the CMW to perform the current scoring.
With continuous auto machine learning, the auto model nugget is evolving all the time by rebuilding the auto model, which ensures that you get the most updated version reflecting the current state of your data. SPSS Modeler provides the flexibility for different top N component models in the EMS to be selected according to their current weights, which keeps pace with varying data during different periods.
Note: The Auto Numeric node is a much simpler case, providing a subset of the options in the Auto Classifier node.
| # Continuous machine learning #
As a result of IBM research, and inspired by natural selection in biology, continuous machine learning is available for the Auto Classifier node and the Auto Numeric node\.
An inconvenience with modeling is models getting outdated due to changes to your data over time\. This is commonly referred to as model drift or concept drift\. To help overcome model drift effectively, SPSS Modeler provides continuous automated machine learning\.
What is model drift? When you build a model based on historical data, it can become stagnant\. In many cases, new data is always coming in—new variations, new patterns, new trends, etc\.—that the old historical data doesn't capture\. To solve this problem, IBM was inspired by the famous phenomenon in biology called the natural selection of species\. Think of models as species and think of data as nature\. Just as nature selects species, we should let data select the model\. There's one big difference between models and species: species can evolve, but models are static after they're built\.
There are two preconditions for species to evolve; the first is gene mutation, and the second is population\. Now, from a modeling perspective, to satisfy the first precondition (gene mutation), we should introduce new data changes into the existing model\. To satisfy the second precondition (population), we should use a number of models rather than just one\. What can represent a number of models? An Ensemble Model Set (EMS)\!
The following figure illustrates how an EMS can evolve\. The upper left portion of the figure represents historical data with hybrid partitions\. The hybrid partitions ensure a rich initial EMS\. The upper right portion of the figure represents a new chunk of data that becomes available, with vertical bars on each side\. The left vertical bar represents current status, and the right vertical bar represents the status when there's a risk of model drift\. In each new round of continuous machine learning, two steps are performed to evolve your model and avoid model drift\.
First, you construct an ensemble model set (EMS) using existing training data\. After that, when a new chunk of data becomes available, new models are built against that new data and added to the EMS as component models\. The weights of existing component models in the EMS are reevaluated using the new data\. As a result of this reevaluation, component models having higher weights are selected for the current prediction, and component models having lower weights may be deleted from the EMS\. This process refreshes the EMS for both model weights and model instances, thus evolving in a flexible and efficient way to address the inevitable changes to your data over time\.
Figure 1\. Continuous auto machine learning

The ensemble model set (EMS) is a generated auto model nugget, and there's a refresh link between the auto modeling node and the generated auto model nugget that defines the refresh relationship between them\. When you enable continuous auto machine learning, new data assets are continuously fed to auto modeling nodes to generate new component models\. The model nugget is updated instead of replaced\.
The following figure provides an example of the internal structure of an EMS in a continuous machine learning scenario\. Only the top three component models are selected for the current prediction\. For each component model (labeled as M1, M2, and M3), two kinds of weights are maintained\. Current Model Weight (CMW) describes how a component model performs with a new chunk of data, and Accumulated Model Weight (AMW) describes the comprehensive performance of a component model against recent chunks of data\. AMW is calculated iteratively from the CMW and its own previous value, and a hyperparameter beta balances the two\. The formula used to calculate AMW is an exponential moving average\.
When a new chunk of data becomes available, first SPSS Modeler uses it to build a few new component models\. In this example figure, model four (M4) is built with CMW and AMW calculated during the initial model building process\. Then SPSS Modeler uses the new chunk of data to reevaluate measures of existing component models (M1, M2, and M3) and update their CMW and AMW based on the reevaluation results\. Finally, SPSS Modeler might reorder the component models based on CMW or AMW and select the top three component models accordingly\.
In this figure, CMW is described using normalized values (sum = 1) and AMW is calculated based on CMW\. In SPSS Modeler, the absolute value of the selected evaluation\-weighted measure (for example, accuracy) is used to represent CMW and AMW for simplicity\.
Figure 2\. EMS structure
Note that there are two types of weights defined for each EMS component model, both of which could be used for selecting top N models and component model drop out:
<!-- <ul> -->
* Current Model Weight (CMW) is computed via evaluation against the new data chunk (for example, evaluation accuracy on the new data chunk)\.
* Accumulated Model Weight (AMW) is computed by combining the CMW and the existing AMW (for example, by an exponentially weighted moving average (EWMA))\.
Exponential moving average formula for calculating AMW (a general form of this update is sketched below the figure):

<!-- </ul> -->
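The figure above shows the exact formula that SPSS Modeler uses. As a generic sketch only (the precise placement of the accumulated factor β is defined by that figure, not by this note), an exponentially weighted moving average update over data chunks t typically takes the form:

\mathrm{AMW}_{t} = \beta \cdot \mathrm{AMW}_{t-1} + (1 - \beta) \cdot \mathrm{CMW}_{t}

A larger β makes the AMW change more slowly, so older data chunks retain more influence; a smaller β makes the AMW track the most recent CMW more closely.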
In SPSS Modeler, after running an Auto Classifier node to generate a model nugget, the following model options are available for continuous machine learning:
<!-- <ul> -->
* Enable continuous auto machine learning during model refresh\. Select this option to enable continuous machine learning\. Keep in mind that consistent metadata (data model) must be used to train the continuous auto model\. If you select this option, other options are enabled\.
* Enable automatic model weights reevaluation\. This option controls whether evaluation measures (accuracy, for example) are computed and updated during model refresh\. If you select this option, an automatic evaluation process runs against the EMS during model refresh, because it's usually necessary to reevaluate existing component models using new data to reflect the current state of your data\. The weights of the EMS component models are then assigned according to the reevaluation results, and those weights decide the proportion that each component model contributes to the final ensemble prediction\. This option is selected by default\.
Figure 3. Model settings

Figure 4. Flag target
Following are the supported CMW and AMW for the Auto Classifier node:
<!-- <table "summary="" class="defaultstyle" "> -->
Table 1. Supported CMW and AMW
| Target type | CMW | AMW |
| ----------- | -------------------------------------- | ----------------------------------------- |
| flag target | Overall Accuracy <br>Area Under Curve | Accumulated Accuracy <br>Accumulated AUC |
| set target | Overall Accuracy | Accumulated Accuracy |
<!-- </table "summary="" class="defaultstyle" "> -->
The following three options are related to AMW, which is used to evaluate how a component model performs during recent data chunk periods:
* Enable accumulated factor during model weights reevaluation\. If you select this option, AMW computation will be enabled during model weights reevaluation\. AMW represents the comprehensive performance of an EMS component model during recent data chunk periods, related to the accumulated factor β defined in the AMW formula listed previously, which you can adjust in the node properties\. When this option isn't selected, only CMW will be computed\. This option is selected by default\.
* Perform model reduction based on accumulated limit during model refresh\. Select this option if you want component models with an AMW value below the specified limit to be removed from the auto model EMS during model refresh\. This can be helpful for discarding component models that no longer contribute, preventing the auto model EMS from becoming too heavy\. The accumulated limit value evaluation is related to the weighted measure used when Evaluation\-weighted voting is selected as the ensemble method\. See the following figure\.
Figure 5. Set and flag targets

Note that if you select Model Accuracy for the evaluation-weighted measure, models with an accumulated accuracy below the specified limit will be deleted. And if you select Area under curve for the evaluation-weighted measure, models with an accumulated AUC below the specified limit will be deleted.
By default, Model Accuracy is used for the evaluation-weighted measure for the Auto Classifier node, and there's an optional AUC ROC measure in the case of flag targets.
* Use accumulated evaluation\-weighted voting\. Select this option if you want AMW to be used for the current scoring/prediction\. Otherwise, CMW will be used by default\. This option is enabled when Evaluation\-weighted voting is selected for the ensemble method\.
Note that for flag targets, if you select this option and choose Model Accuracy for the evaluation-weighted measure, Accumulated Accuracy is used as the AMW to perform the current scoring; if you choose Area under curve, Accumulated AUC is used as the AMW. If you don't select this option, Overall Accuracy is used as the CMW to perform the current scoring when Model Accuracy is chosen, and Area under curve is used as the CMW when Area under curve is chosen.
For set targets, if you select this Use accumulated evaluation-weighted voting option, then Accumulated Accuracy will be used as the AMW to perform the current scoring. Otherwise, Overall Accuracy will be used as the CMW to perform the current scoring.
<!-- </ul> -->
With continuous auto machine learning, the auto model nugget is evolving all the time by rebuilding the auto model, which ensures that you get the most updated version reflecting the current state of your data\. SPSS Modeler provides the flexibility for different top N component models in the EMS to be selected according to their current weights, which keeps pace with varying data during different periods\.
Note: The Auto Numeric node is a much simpler case, providing a subset of the options in the Auto Classifier node\.
<!-- </article "role="article" "> -->
|
461D1A8F855174F44550531EF8BE6E67C29D3E3B | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/carma.html?context=cdpaas&locale=en | CARMA node (SPSS Modeler) | CARMA node
The CARMA node uses an association rules discovery algorithm to discover association rules in the data.
Association rules are statements in the form:
if antecedent(s) then consequent(s)
For example, if a Web customer purchases a wireless card and a high-end wireless router, the customer is also likely to purchase a wireless music server if offered. The CARMA model extracts a set of rules from the data without requiring you to specify input or target fields. This means that the rules generated can be used for a wider variety of applications. For example, you can use rules generated by this node to find a list of products or services (antecedents) whose consequent is the item that you want to promote this holiday season. Using watsonx.ai, you can determine which clients have purchased the antecedent products and construct a marketing campaign designed to promote the consequent product.
Requirements. In contrast to Apriori, the CARMA node does not require Input or Target fields. This is integral to the way the algorithm works and is equivalent to building an Apriori model with all fields set to Both. You can constrain which items are listed only as antecedents or consequents by filtering the model after it is built. For example, you can use the model browser to find a list of products or services (antecedents) whose consequent is the item that you want to promote this holiday season.
To create a CARMA rule set, you need to specify an ID field and one or more content fields. The ID field can have any role or measurement level. Fields with the role None are ignored. Field types must be fully instantiated before executing the node. Like Apriori, data may be in tabular or transactional format.
Strengths. The CARMA node is based on the CARMA association rules algorithm. In contrast to Apriori, the CARMA node offers build settings for rule support (support for both antecedent and consequent) rather than antecedent support. CARMA also allows rules with multiple consequents. Like Apriori, models generated by a CARMA node can be inserted into a data stream to create predictions.
| # CARMA node #
The CARMA node uses an association rules discovery algorithm to discover association rules in the data\.
Association rules are statements in the form:
if antecedent(s) then consequent(s)
For example, if a Web customer purchases a wireless card and a high\-end wireless router, the customer is also likely to purchase a wireless music server if offered\. The CARMA model extracts a set of rules from the data without requiring you to specify input or target fields\. This means that the rules generated can be used for a wider variety of applications\. For example, you can use rules generated by this node to find a list of products or services (antecedents) whose consequent is the item that you want to promote this holiday season\. Using watsonx\.ai, you can determine which clients have purchased the antecedent products and construct a marketing campaign designed to promote the consequent product\.
Requirements\. In contrast to Apriori, the CARMA node does not require Input or Target fields\. This is integral to the way the algorithm works and is equivalent to building an Apriori model with all fields set to Both\. You can constrain which items are listed only as antecedents or consequents by filtering the model after it is built\. For example, you can use the model browser to find a list of products or services (antecedents) whose consequent is the item that you want to promote this holiday season\.
To create a CARMA rule set, you need to specify an ID field and one or more content fields\. The ID field can have any role or measurement level\. Fields with the role None are ignored\. Field types must be fully instantiated before executing the node\. Like Apriori, data may be in tabular or transactional format\.
Strengths\. The CARMA node is based on the CARMA association rules algorithm\. In contrast to Apriori, the CARMA node offers build settings for rule support (support for both antecedent and consequent) rather than antecedent support\. CARMA also allows rules with multiple consequents\. Like Apriori, models generated by a CARMA node can be inserted into a data stream to create predictions\.
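As general background (these are the standard association-rule definitions rather than text taken from this node's settings), rule support and antecedent support over N transactions can be written as:

\text{rule support} = \frac{\#\{\text{transactions containing antecedent and consequent}\}}{N}, \qquad \text{antecedent support} = \frac{\#\{\text{transactions containing antecedent}\}}{N}

This is why a rule-support threshold, as used by CARMA, constrains how often the complete rule occurs, whereas an antecedent-support threshold only constrains how often the "if" part occurs.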
<!-- </article "role="article" "> -->
|
37D9428BD2E4A45CA968DAD59D1005FB5FC4DE9C | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/cart.html?context=cdpaas&locale=en | C&R Tree node (SPSS Modeler) | C&R Tree node
The Classification and Regression (C&R) Tree node is a tree-based classification and prediction method. Similar to C5.0, this method uses recursive partitioning to split the training records into segments with similar output field values. The C&R Tree node starts by examining the input fields to find the best split, measured by the reduction in an impurity index that results from the split. The split defines two subgroups, each of which is subsequently split into two more subgroups, and so on, until one of the stopping criteria is triggered. All splits are binary (only two subgroups).
| # C&R Tree node #
The Classification and Regression (C&R) Tree node is a tree\-based classification and prediction method\. Similar to C5\.0, this method uses recursive partitioning to split the training records into segments with similar output field values\. The C&R Tree node starts by examining the input fields to find the best split, measured by the reduction in an impurity index that results from the split\. The split defines two subgroups, each of which is subsequently split into two more subgroups, and so on, until one of the stopping criteria is triggered\. All splits are binary (only two subgroups)\.
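As a reference sketch (the specific impurity measure is a node option rather than something fixed by this description), a commonly used impurity index for a categorical target is the Gini index of a node t:

\mathrm{Gini}(t) = 1 - \sum_{j} p_{j}^{2}

where p_j is the proportion of records in node t that belong to class j; a split is chosen to maximize the reduction in this impurity, weighted by the sizes of the resulting subgroups.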
<!-- </article "role="article" "> -->
|
D64140C0B8D4187B49046528FF61A54D77A99223 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/chaid.html?context=cdpaas&locale=en | CHAID node (SPSS Modeler) | CHAID node
CHAID, or Chi-squared Automatic Interaction Detection, is a classification method for building decision trees by using chi-square statistics to identify optimal splits.
CHAID first examines the crosstabulations between each of the input fields and the outcome, and tests for significance using a chi-square independence test. If more than one of these relations is statistically significant, CHAID will select the input field that is the most significant (smallest p value). If an input has more than two categories, these are compared, and categories that show no differences in the outcome are collapsed together. This is done by successively joining the pair of categories showing the least significant difference. This category-merging process stops when all remaining categories differ at the specified testing level. For nominal input fields, any categories can be merged; for an ordinal set, only contiguous categories can be merged.
Exhaustive CHAID is a modification of CHAID that does a more thorough job of examining all possible splits for each predictor but takes longer to compute.
Requirements. Target and input fields can be continuous or categorical; nodes can be split into two or more subgroups at each level. Any ordinal fields used in the model must have numeric storage (not string). If necessary, the Reclassify node can be used to convert them.
Strengths. Unlike the C&R Tree and QUEST nodes, CHAID can generate nonbinary trees, meaning that some splits have more than two branches. It therefore tends to create a wider tree than the binary growing methods. CHAID works for all types of inputs, and it accepts both case weights and frequency variables.
| # CHAID node #
CHAID, or Chi\-squared Automatic Interaction Detection, is a classification method for building decision trees by using chi\-square statistics to identify optimal splits\.
CHAID first examines the crosstabulations between each of the input fields and the outcome, and tests for significance using a chi\-square independence test\. If more than one of these relations is statistically significant, CHAID will select the input field that is the most significant (smallest `p` value)\. If an input has more than two categories, these are compared, and categories that show no differences in the outcome are collapsed together\. This is done by successively joining the pair of categories showing the least significant difference\. This category\-merging process stops when all remaining categories differ at the specified testing level\. For nominal input fields, any categories can be merged; for an ordinal set, only contiguous categories can be merged\.
Exhaustive CHAID is a modification of CHAID that does a more thorough job of examining all possible splits for each predictor but takes longer to compute\.
Requirements\. Target and input fields can be continuous or categorical; nodes can be split into two or more subgroups at each level\. Any ordinal fields used in the model must have numeric storage (not string)\. If necessary, the Reclassify node can be used to convert them\.
Strengths\. Unlike the C&R Tree and QUEST nodes, CHAID can generate nonbinary trees, meaning that some splits have more than two branches\. It therefore tends to create a wider tree than the binary growing methods\. CHAID works for all types of inputs, and it accepts both case weights and frequency variables\.
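For reference, the Pearson chi-square statistic that underlies such an independence test on an r × c crosstabulation is:

\chi^{2} = \sum_{i=1}^{r} \sum_{j=1}^{c} \frac{(O_{ij} - E_{ij})^{2}}{E_{ij}}

where O_ij and E_ij are the observed and expected cell counts; the statistic is compared against a chi-square distribution with (r − 1)(c − 1) degrees of freedom to obtain the p value used for selecting splits and merging categories.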
<!-- </article "role="article" "> -->
|
54EE0BB6FBD2E35C46C41D0065C299408F5AB0A5 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/characters.html?context=cdpaas&locale=en | Characters (SPSS Modeler) | Characters
Characters (usually shown as CHAR) are typically used within a CLEM expression to perform tests on strings.
For example, you can use the function isuppercode to determine whether the first character of a string is uppercase. The following CLEM expression uses a character to indicate that the test should be performed on the first character of the string:
isuppercode(subscrs(1, "MyString"))
To express the code (in contrast to the location) of a particular character in a CLEM expression, use single backquotes of the form `<character>`. For example, `A`, `Z`.
Note: There is no CHAR storage type for a field, so if a field is derived or filled with an expression that results in a CHAR, then that result will be converted to a string.
| # Characters #
Characters (usually shown as `CHAR`) are typically used within a CLEM expression to perform tests on strings\.
For example, you can use the function `isuppercode` to determine whether the first character of a string is uppercase\. The following CLEM expression uses a character to indicate that the test should be performed on the first character of the string:
isuppercode(subscrs(1, "MyString"))
To express the code (in contrast to the location) of a particular character in a CLEM expression, use single backquotes of the form `` ` ``<*character*>`` ` ``\. For example, `` `A` ``, `` `Z` ``\.
Note: There is no `CHAR` storage type for a field, so if a field is derived or filled with an expression that results in a `CHAR`, then that result will be converted to a string\.
<!-- </article "role="article" "> -->
|
3C1D83E94DDC08D7A6229AEDC49C895E86E660BF | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_datatypes.html?context=cdpaas&locale=en | CLEM datatypes (SPSS Modeler) | CLEM datatypes
This section covers CLEM datatypes.
CLEM datatypes can be made up of any of the following:
* Integers
* Reals
* Characters
* Strings
* Lists
* Fields
* Date/Time
| # CLEM datatypes #
This section covers CLEM datatypes\.
CLEM datatypes can be made up of any of the following:
<!-- <ul> -->
* Integers
* Reals
* Characters
* Strings
* Lists
* Fields
* Date/Time
<!-- </ul> -->
<!-- </article "role="article" "> -->
|
6F900078FD88E14400807E571E1F3A24C633C2DC | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_expression_examples.html?context=cdpaas&locale=en | CLEM examples (SPSS Modeler) | CLEM examples
The example expressions in this section illustrate correct syntax and the types of expressions possible with CLEM.
Additional examples are discussed throughout this CLEM documentation. See [CLEM (legacy) language reference](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_language_reference.htmlclem_language_reference) for more information.
| # CLEM examples #
The example expressions in this section illustrate correct syntax and the types of expressions possible with CLEM\.
Additional examples are discussed throughout this CLEM documentation\. See [CLEM (legacy) language reference](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_language_reference.html#clem_language_reference) for more information\.
<!-- </article "role="article" "> -->
|
628354B3F2FA792B938756225315E3B4024DCC0E | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref.html?context=cdpaas&locale=en | Functions reference (SPSS Modeler) | Functions reference
This section lists CLEM functions for working with data in SPSS Modeler. You can enter these functions as code in various areas of the user interface, such as Derive and Set To Flag nodes, or you can use the Expression Builder to create valid CLEM expressions without memorizing function lists or field names.
CLEM functions for use with SPSS Modeler data
Table 1. CLEM functions for use with SPSS Modeler data
Function Type Description
[Information](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_information.htmlclem_function_ref_information) Used to gain insight into field values. For example, the function is_string returns true for all records whose type is a string.
[Conversion](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_conversion.htmlclem_function_ref_conversion) Used to construct new fields or convert storage type. For example, the function to_timestamp converts the selected field to a timestamp.
[Comparison](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_comparison.htmlclem_function_ref_comparison) Used to compare field values to each other or to a specified string. For example, <= is used to compare whether one field's value is less than or equal to another's.
[Logical](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_logical.htmlclem_function_ref_logical) Used to perform logical operations, such as if, then, else operations.
[Numeric](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_numeric.htmlclem_function_ref_numeric) Used to perform numeric calculations, such as the natural log of field values.
[Trigonometric](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_trigonometric.htmlclem_function_ref_trigonometric) Used to perform trigonometric calculations, such as the arccosine of a specified angle.
[Probability](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_probability.htmlclem_function_ref_probability) Returns probabilities that are based on various distributions, such as probability that a value from Student's t distribution is less than a specific value.
[Spatial](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_spatial.htmlclem_function_ref_spatial) Used to perform spatial calculations on geospatial data.
[Bitwise](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_bitwise.htmlclem_function_ref_bitwise) Used to manipulate integers as bit patterns.
[Random](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_random.htmlclem_function_ref_random) Used to randomly select items or generate numbers.
[String](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_string.htmlclem_function_ref_string) Used to perform various operations on strings, such as stripchar, which allows you to remove a specified character.
[SoundEx](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_soundex.htmlclem_function_ref_soundex) Used to find strings when the precise spelling is not known; based on phonetic assumptions about how certain letters are pronounced.
[Date and time](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_datetime.htmlclem_function_ref_datetime) Used to perform various operations on date, time, and timestamp fields.
[Sequence](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_sequence.htmlclem_function_ref_sequence) Used to gain insight into the record sequence of a data set or perform operations that are based on that sequence.
[Global](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_global.htmlclem_function_ref_global) Used to access global values that are created by a Set Globals node. For example, @MEAN is used to refer to the mean average of all values for a field across the entire data set.
[Blanks and null](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_blanksnulls.htmlclem_function_ref_blanksnulls) Used to access, flag, and frequently fill user-specified blanks or system-missing values. For example, @BLANK(FIELD) is used to raise a true flag for records where blanks are present.
[Special fields](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_specialfields.htmlclem_function_ref_specialfields) Used to denote the specific fields under examination. For example, @FIELD is used when deriving multiple fields.
| # Functions reference #
This section lists CLEM functions for working with data in SPSS Modeler\. You can enter these functions as code in various areas of the user interface, such as Derive and Set To Flag nodes, or you can use the Expression Builder to create valid CLEM expressions without memorizing function lists or field names\.
<!-- <table "summary="CLEM functions for use with SPSS Modeler data" id="clem_function_ref__table_gs1_mk3_cdb" class="defaultstyle" "> -->
CLEM functions for use with SPSS Modeler data
Table 1\. CLEM functions for use with SPSS Modeler data
| Function Type | Description |
| ------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [Information](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_information.html#clem_function_ref_information) | Used to gain insight into field values\. For example, the function `is_string` returns true for all records whose type is a string\. |
| [Conversion](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_conversion.html#clem_function_ref_conversion) | Used to construct new fields or convert storage type\. For example, the function `to_timestamp` converts the selected field to a timestamp\. |
| [Comparison](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_comparison.html#clem_function_ref_comparison) | Used to compare field values to each other or to a specified string\. For example, `<=` is used to compare whether one field's value is less than or equal to another's\. |
| [Logical](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_logical.html#clem_function_ref_logical) | Used to perform logical operations, such as `if`, `then`, `else` operations\. |
| [Numeric](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_numeric.html#clem_function_ref_numeric) | Used to perform numeric calculations, such as the natural log of field values\. |
| [Trigonometric](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_trigonometric.html#clem_function_ref_trigonometric) | Used to perform trigonometric calculations, such as the arccosine of a specified angle\. |
| [Probability](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_probability.html#clem_function_ref_probability) | Returns probabilities that are based on various distributions, such as probability that a value from Student's t distribution is less than a specific value\. |
| [Spatial](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_spatial.html#clem_function_ref_spatial) | Used to perform spatial calculations on geospatial data\. |
| [Bitwise](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_bitwise.html#clem_function_ref_bitwise) | Used to manipulate integers as bit patterns\. |
| [Random](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_random.html#clem_function_ref_random) | Used to randomly select items or generate numbers\. |
| [String](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_string.html#clem_function_ref_string) | Used to perform various operations on strings, such as `stripchar`, which allows you to remove a specified character\. |
| [SoundEx](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_soundex.html#clem_function_ref_soundex) | Used to find strings when the precise spelling is not known; based on phonetic assumptions about how certain letters are pronounced\. |
| [Date and time](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_datetime.html#clem_function_ref_datetime) | Used to perform various operations on date, time, and timestamp fields\. |
| [Sequence](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_sequence.html#clem_function_ref_sequence) | Used to gain insight into the record sequence of a data set or perform operations that are based on that sequence\. |
| [Global](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_global.html#clem_function_ref_global) | Used to access global values that are created by a Set Globals node\. For example, `@MEAN` is used to refer to the mean average of all values for a field across the entire data set\. |
| [Blanks and null](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_blanksnulls.html#clem_function_ref_blanksnulls) | Used to access, flag, and frequently fill user\-specified blanks or system\-missing values\. For example, `@BLANK(FIELD)` is used to raise a true flag for records where blanks are present\. |
| [Special fields](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_specialfields.html#clem_function_ref_specialfields) | Used to denote the specific fields under examination\. For example, `@FIELD` is used when deriving multiple fields\. |
<!-- </table "summary="CLEM functions for use with SPSS Modeler data" id="clem_function_ref__table_gs1_mk3_cdb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
1D1659B46A454170A597B0450FD99C16EEC5B1AD | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_bitwise.html?context=cdpaas&locale=en | Bitwise integer operations (SPSS Modeler) | Bitwise integer operations
These functions enable integers to be manipulated as bit patterns representing two's-complement values, where bit position N has weight 2**N.
Bits are numbered from 0 upward. These operations act as though the sign bit of an integer is extended indefinitely to the left. Thus, everywhere above its most significant bit, a positive integer has 0 bits and a negative integer has 1 bit.
CLEM bitwise integer operations
Table 1. CLEM bitwise integer operations
Function Result Description
~~ INT1 Integer Produces the bitwise complement of the integer INT1. That is, there is a 1 in the result for each bit position for which INT1 has 0. It is always true that ~~ INT = –(INT + 1).
INT1 || INT2 Integer The result of this operation is the bitwise "inclusive or" of INT1 and INT2. That is, there is a 1 in the result for each bit position for which there is a 1 in either INT1 or INT2 or both.
INT1 ||/& INT2 Integer The result of this operation is the bitwise "exclusive or" of INT1 and INT2. That is, there is a 1 in the result for each bit position for which there is a 1 in either INT1 or INT2 but not in both.
INT1 && INT2 Integer Produces the bitwise "and" of the integers INT1 and INT2. That is, there is a 1 in the result for each bit position for which there is a 1 in both INT1 and INT2.
INT1 &&~~ INT2 Integer Produces the bitwise "and" of INT1 and the bitwise complement of INT2. That is, there is a 1 in the result for each bit position for which there is a 1 in INT1 and a 0 in INT2. This is the same as INT1 && (~~INT2) and is useful for clearing bits of INT1 set in INT2.
INT << N Integer Produces the bit pattern of INT1 shifted left by N positions. A negative value for N produces a right shift.
INT >> N Integer Produces the bit pattern of INT1 shifted right by N positions. A negative value for N produces a left shift.
INT1 &&=_0 INT2 Boolean Equivalent to the Boolean expression INT1 && INT2 /== 0 but is more efficient.
INT1 &&/=_0 INT2 Boolean Equivalent to the Boolean expression INT1 && INT2 == 0 but is more efficient.
integer_bitcount(INT) Integer Counts the number of 1 or 0 bits in the two's-complement representation of INT. If INT is non-negative, N is the number of 1 bits. If INT is negative, it is the number of 0 bits. Owing to the sign extension, there are an infinite number of 0 bits in a non-negative integer or 1 bits in a negative integer. It is always the case that integer_bitcount(INT) = integer_bitcount(-(INT+1)).
integer_leastbit(INT) Integer Returns the bit position N of the least-significant bit set in the integer INT. N is the highest power of 2 by which INT divides exactly.
integer_length(INT) Integer Returns the length in bits of INT as a two's-complement integer. That is, N is the smallest integer such that INT < (1 << N) if INT >= 0, or INT >= (–1 << N) if INT < 0. If INT is non-negative, then the representation of INT as an unsigned integer requires a field of at least N bits. Alternatively, a minimum of N+1 bits is required to represent INT as a signed integer, regardless of its sign.
testbit(INT, N) Boolean Tests the bit at position N in the integer INT and returns the state of bit N as a Boolean value, which is true for 1 and false for 0.
| # Bitwise integer operations #
These functions enable integers to be manipulated as bit patterns representing two's\-complement values, where bit position `N` has weight `2**N`\.
Bits are numbered from 0 upward\. These operations act as though the sign bit of an integer is extended indefinitely to the left\. Thus, everywhere above its most significant bit, a positive integer has 0 bits and a negative integer has 1 bit\.
<!-- <table "summary="CLEM bitwise integer operations" id="clem_function_ref_bitwise__table_q5d_ygz_ddb" class="defaultstyle" "> -->
CLEM bitwise integer operations
Table 1\. CLEM bitwise integer operations
| Function | Result | Description |
| ----------------------- | --------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `~~ INT1` | *Integer* | Produces the bitwise complement of the integer *INT1*\. That is, there is a 1 in the result for each bit position for which *INT1* has 0\. It is always true that `~~ INT = –(INT + 1)`\. |
| `INT1 || INT2` | *Integer* | The result of this operation is the bitwise "inclusive or" of *INT1* and *INT2*\. That is, there is a 1 in the result for each bit position for which there is a 1 in either *INT1* or *INT2* or both\. |
| `INT1 ||/& INT2` | *Integer* | The result of this operation is the bitwise "exclusive or" of *INT1* and *INT2*\. That is, there is a 1 in the result for each bit position for which there is a 1 in either *INT1* or *INT2* but not in both\. |
| `INT1 && INT2` | *Integer* | Produces the bitwise "and" of the integers *INT1* and *INT2*\. That is, there is a 1 in the result for each bit position for which there is a 1 in both *INT1* and *INT2*\. |
| `INT1 &&~~ INT2` | *Integer* | Produces the bitwise "and" of *INT1* and the bitwise complement of *INT2*\. That is, there is a 1 in the result for each bit position for which there is a 1 in *INT1* and a 0 in *INT2*\. This is the same as `INT1``&& (~~INT2)` and is useful for clearing bits of *INT1* set in *INT2*\. |
| `INT << N` | *Integer* | Produces the bit pattern of *INT1* shifted left by *N* positions\. A negative value for *N* produces a right shift\. |
| `INT >> N` | *Integer* | Produces the bit pattern of *INT1* shifted right by *N* positions\. A negative value for *N* produces a left shift\. |
| `INT1 &&=_0 INT2` | *Boolean* | Equivalent to the Boolean expression `INT1 && INT2 /== 0` but is more efficient\. |
| `INT1 &&/=_0 INT2` | *Boolean* | Equivalent to the Boolean expression `INT1 && INT2 == 0` but is more efficient\. |
| `integer_bitcount(INT)` | *Integer* | Counts the number of 1 or 0 bits in the two's\-complement representation of *INT*\. If *INT* is non\-negative, *N* is the number of 1 bits\. If *INT* is negative, it is the number of 0 bits\. Owing to the sign extension, there are an infinite number of 0 bits in a non\-negative integer or 1 bits in a negative integer\. It is always the case that `integer_bitcount(INT) = integer_bitcount(-(INT+1))`\. |
| `integer_leastbit(INT)` | *Integer* | Returns the bit position *N* of the least\-significant bit set in the integer *INT*\. *N* is the highest power of 2 by which *INT* divides exactly\. |
| `integer_length(INT)` | *Integer* | Returns the length in bits of *INT* as a two's\-complement integer\. That is, *N* is the smallest integer such that `INT < (1 << N) if INT >= 0, or INT >= (–1 << N) if INT < 0`\. If *INT* is non\-negative, then the representation of *INT* as an unsigned integer requires a field of at least *N* bits\. Alternatively, a minimum of *N\+1* bits is required to represent *INT* as a signed integer, regardless of its sign\. |
| `testbit(INT, N)` | *Boolean* | Tests the bit at position *N* in the integer *INT* and returns the state of bit *N* as a Boolean value, which is true for 1 and false for 0\. |
<!-- </table "summary="CLEM bitwise integer operations" id="clem_function_ref_bitwise__table_q5d_ygz_ddb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
5FE3DE32EFB5DEA4094DCA22CBC77E24D23EF67A | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_blanksnulls.html?context=cdpaas&locale=en | Functions handling blanks and null values (SPSS Modeler) | Functions handling blanks and null values
Using CLEM, you can specify that certain values in a field are to be regarded as "blanks," or missing values.
The following functions work with blanks.
CLEM blank and null value functions
Table 1. CLEM blank and null value functions
Function Result Description
@BLANK(FIELD) Boolean Returns true for all records whose values are blank according to the blank-handling rules set in an upstream Type node or Import node (Types tab).
@LAST_NON_BLANK(FIELD) Any Returns the last value for FIELD that was not blank, as defined in an upstream Import or Type node. If there are no nonblank values for FIELD in the records read so far, $null$ is returned. Note that blank values, also called user-missing values, can be defined separately for each field.
@NULL(FIELD) Boolean Returns true if the value of FIELD is the system-missing $null$. Returns false for all other values, including user-defined blanks. If you want to check for both, use @BLANK(FIELD) and @NULL(FIELD).
undef Any Used generally in CLEM to enter a $null$ value—for example, to fill blank values with nulls in the Filler node.
Blank fields may be "filled in" with the Filler node. In both Filler and Derive nodes (multiple mode only), the special CLEM function @FIELD refers to the current field(s) being examined.
| # Functions handling blanks and null values #
Using CLEM, you can specify that certain values in a field are to be regarded as "blanks," or missing values\.
The following functions work with blanks\.
<!-- <table "summary="CLEM blank and null value functions" id="clem_function_ref_blanksnulls__table_shj_4k3_cdb" class="defaultstyle" "> -->
CLEM blank and null value functions
Table 1\. CLEM blank and null value functions
| Function | Result | Description |
| ------------------------ | --------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `@BLANK(FIELD)` | *Boolean* | Returns true for all records whose values are blank according to the blank\-handling rules set in an upstream Type node or Import node (Types tab)\. |
| `@LAST_NON_BLANK(FIELD)` | *Any* | Returns the last value for *FIELD* that was not blank, as defined in an upstream Import or Type node\. If there are no nonblank values for *FIELD* in the records read so far, `$null$` is returned\. Note that blank values, also called user\-missing values, can be defined separately for each field\. |
| `@NULL(FIELD)` | *Boolean* | Returns true if the value of *FIELD* is the system\-missing `$null$`\. Returns false for all other values, including user\-defined blanks\. If you want to check for both, use `@BLANK(FIELD)` and `@NULL(FIELD)`\. |
| `undef` | *Any* | Used generally in CLEM to enter a `$null$` value—for example, to fill blank values with nulls in the Filler node\. |
<!-- </table "summary="CLEM blank and null value functions" id="clem_function_ref_blanksnulls__table_shj_4k3_cdb" class="defaultstyle" "> -->
Blank fields may be "filled in" with the Filler node\. In both Filler and Derive nodes (multiple mode only), the special CLEM function `@FIELD` refers to the current field(s) being examined\.
<!-- </article "role="article" "> -->
|
32A79D23C94FB1920DB500D2DD9464C1316C62A5 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_comparison.html?context=cdpaas&locale=en | Comparison functions (SPSS Modeler) | Comparison functions
Comparison functions are used to compare field values to each other or to a specified string.
For example, you can check strings for equality using =. An example of string equality verification is: Class = "class 1".
For purposes of numeric comparison, greater means closer to positive infinity, and lesser means closer to negative infinity. That is, all negative numbers are less than any positive number.
CLEM comparison functions
Table 1. CLEM comparison functions
Function Result Description
count_equal(ITEM1, LIST) Integer Returns the number of values from a list of fields that are equal to ITEM1 or null if ITEM1 is null.
count_greater_than(ITEM1, LIST) Integer Returns the number of values from a list of fields that are greater than ITEM1 or null if ITEM1 is null.
count_less_than(ITEM1, LIST) Integer Returns the number of values from a list of fields that are less than ITEM1 or null if ITEM1 is null.
count_not_equal(ITEM1, LIST) Integer Returns the number of values from a list of fields that aren't equal to ITEM1 or null if ITEM1 is null.
count_nulls(LIST) Integer Returns the number of null values from a list of fields.
count_non_nulls(LIST) Integer Returns the number of non-null values from a list of fields.
date_before(DATE1, DATE2) Boolean Used to check the ordering of date values. Returns a true value if DATE1 is before DATE2.
first_index(ITEM, LIST) Integer Returns the index of the first field containing ITEM from a LIST of fields or 0 if the value isn't found. Supported for string, integer, and real types only.
first_non_null(LIST) Any Returns the first non-null value in the supplied list of fields. All storage types supported.
first_non_null_index(LIST) Integer Returns the index of the first field in the specified LIST containing a non-null value or 0 if all values are null. All storage types are supported.
ITEM1 = ITEM2 Boolean Returns true for records where ITEM1 is equal to ITEM2.
ITEM1 /= ITEM2 Boolean Returns true if the two strings are not identical or 0 if they're identical.
ITEM1 < ITEM2 Boolean Returns true for records where ITEM1 is less than ITEM2.
ITEM1 <= ITEM2 Boolean Returns true for records where ITEM1 is less than or equal to ITEM2.
ITEM1 > ITEM2 Boolean Returns true for records where ITEM1 is greater than ITEM2.
ITEM1 >= ITEM2 Boolean Returns true for records where ITEM1 is greater than or equal to ITEM2.
last_index(ITEM, LIST) Integer Returns the index of the last field containing ITEM from a LIST of fields or 0 if the value isn't found. Supported for string, integer, and real types only.
last_non_null(LIST) Any Returns the last non-null value in the supplied list of fields. All storage types supported.
last_non_null_index(LIST) Integer Returns the index of the last field in the specified LIST containing a non-null value or 0 if all values are null. All storage types are supported.
max(ITEM1, ITEM2) Any Returns the greater of the two items: ITEM1 or ITEM2.
max_index(LIST) Integer Returns the index of the field containing the maximum value from a list of numeric fields or 0 if all values are null. For example, if the third field listed contains the maximum, the index value 3 is returned. If multiple fields contain the maximum value, the one listed first (leftmost) is returned.
max_n(LIST) Number Returns the maximum value from a list of numeric fields or null if all of the field values are null.
member(ITEM, LIST) Boolean Returns true if ITEM is a member of the specified LIST. Otherwise, a false value is returned. A list of field names can also be specified.
min(ITEM1, ITEM2) Any Returns the lesser of the two items: ITEM1 or ITEM2.
min_index(LIST) Integer Returns the index of the field containing the minimum value from a list of numeric fields or 0 if all values are null. For example, if the third field listed contains the minimum, the index value 3 is returned. If multiple fields contain the minimum value, the one listed first (leftmost) is returned.
min_n(LIST) Number Returns the minimum value from a list of numeric fields or null if all of the field values are null.
time_before(TIME1, TIME2) Boolean Used to check the ordering of time values. Returns a true value if TIME1 is before TIME2.
value_at(INT, LIST) Returns the value of each listed field at offset INT or NULL if the offset is outside the range of valid values (that is, less than 1 or greater than the number of listed fields). All storage types supported.
| # Comparison functions #
Comparison functions are used to compare field values to each other or to a specified string\.
For example, you can check strings for equality using `=`\. An example of string equality verification is: `Class = "class 1"`\.
For purposes of numeric comparison, *greater* means closer to positive infinity, and *lesser* means closer to negative infinity\. That is, all negative numbers are less than any positive number\.
<!-- <table "summary="CLEM comparison functions" id="clem_function_ref_comparison__table_jyk_zgz_ddb" class="defaultstyle" "> -->
CLEM comparison functions
Table 1\. CLEM comparison functions
| Function | Result | Description |
| --------------------------------- | --------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `count_equal(ITEM1, LIST)` | *Integer* | Returns the number of values from a list of fields that are equal to *ITEM1* or null if *ITEM1* is null\. |
| `count_greater_than(ITEM1, LIST)` | *Integer* | Returns the number of values from a list of fields that are greater than *ITEM1* or null if *ITEM1* is null\. |
| `count_less_than(ITEM1, LIST)` | *Integer* | Returns the number of values from a list of fields that are less than *ITEM1* or null if *ITEM1* is null\. |
| `count_not_equal(ITEM1, LIST)` | *Integer* | Returns the number of values from a list of fields that aren't equal to *ITEM1* or null if *ITEM1* is null\. |
| `count_nulls(LIST)` | *Integer* | Returns the number of null values from a list of fields\. |
| `count_non_nulls(LIST)` | *Integer* | Returns the number of non\-null values from a list of fields\. |
| `date_before(DATE1, DATE2)` | *Boolean* | Used to check the ordering of date values\. Returns a true value if *DATE1* is before *DATE2*\. |
| `first_index(ITEM, LIST)` | *Integer* | Returns the index of the first field containing ITEM from a LIST of fields or 0 if the value isn't found\. Supported for string, integer, and real types only\. |
| `first_non_null(LIST)` | *Any* | Returns the first non\-null value in the supplied list of fields\. All storage types supported\. |
| `first_non_null_index(LIST)` | *Integer* | Returns the index of the first field in the specified LIST containing a non\-null value or 0 if all values are null\. All storage types are supported\. |
| `ITEM1 = ITEM2` | *Boolean* | Returns true for records where *ITEM1* is equal to *ITEM2*\. |
| `ITEM1 /= ITEM2` | *Boolean* | Returns true if the two strings are not identical or 0 if they're identical\. |
| `ITEM1 < ITEM2` | *Boolean* | Returns true for records where *ITEM1* is less than *ITEM2*\. |
| `ITEM1 <= ITEM2` | *Boolean* | Returns true for records where *ITEM1* is less than or equal to *ITEM2*\. |
| `ITEM1 > ITEM2` | *Boolean* | Returns true for records where *ITEM1* is greater than *ITEM2*\. |
| `ITEM1 >= ITEM2` | *Boolean* | Returns true for records where *ITEM1* is greater than or equal to *ITEM2*\. |
| `last_index(ITEM, LIST)` | *Integer* | Returns the index of the last field containing ITEM from a LIST of fields or 0 if the value isn't found\. Supported for string, integer, and real types only\. |
| `last_non_null(LIST)` | *Any* | Returns the last non\-null value in the supplied list of fields\. All storage types supported\. |
| `last_non_null_index(LIST)` | *Integer* | Returns the index of the last field in the specified LIST containing a non\-null value or 0 if all values are null\. All storage types are supported\. |
| `max(ITEM1, ITEM2)` | *Any* | Returns the greater of the two items: *ITEM1* or *ITEM2*\. |
| `max_index(LIST)` | *Integer* | Returns the index of the field containing the maximum value from a list of numeric fields or 0 if all values are null\. For example, if the third field listed contains the maximum, the index value 3 is returned\. If multiple fields contain the maximum value, the one listed first (leftmost) is returned\. |
| `max_n(LIST)` | *Number* | Returns the maximum value from a list of numeric fields or null if all of the field values are null\. |
| `member(ITEM, LIST)` | *Boolean* | Returns true if *ITEM* is a member of the specified *LIST*\. Otherwise, a false value is returned\. A list of field names can also be specified\. |
| `min(ITEM1, ITEM2)` | *Any* | Returns the lesser of the two items: *ITEM1* or *ITEM2*\. |
| `min_index(LIST)` | *Integer* | Returns the index of the field containing the minimum value from a list of numeric fields or 0 if all values are null\. For example, if the third field listed contains the minimum, the index value 3 is returned\. If multiple fields contain the minimum value, the one listed first (leftmost) is returned\. |
| `min_n(LIST)` | *Number* | Returns the minimum value from a list of numeric fields or null if all of the field values are null\. |
| `time_before(TIME1, TIME2)` | *Boolean* | Used to check the ordering of time values\. Returns a true value if *TIME1* is before *TIME2*\. |
| `value_at(INT, LIST)` | | Returns the value of each listed field at offset INT or NULL if the offset is outside the range of valid values (that is, less than 1 or greater than the number of listed fields)\. All storage types supported\. |
<!-- </table "summary="CLEM comparison functions" id="clem_function_ref_comparison__table_jyk_zgz_ddb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
8CE325D8AFC27359968A8799D58EF4BF0C57D68E | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_conversion.html?context=cdpaas&locale=en | Conversion functions (SPSS Modeler) | Conversion functions
With conversion functions, you can construct new fields and convert the storage type of existing files.
For example, you can form new strings by joining strings together or by taking strings apart. To join two strings, use the operator ><. For example, if the field Site has the value "BRAMLEY", then "xx" >< Site returns "xxBRAMLEY". The result of >< is always a string, even if the arguments aren't strings. Thus, if field V1 is 3 and field V2 is 5, then V1 >< V2 returns "35" (a string, not a number).
Conversion functions (and any other functions that require a specific type of input, such as a date or time value) depend on the current formats specified in the flow properties. For example, if you want to convert a string field with values Jan 2021, Feb 2021, and so on, select the matching date format MON YYYY as the default date format for the flow.
CLEM conversion functions
Table 1. CLEM conversion functions
Function Result Description
ITEM1 >< ITEM2 String Concatenates values for two fields and returns the resulting string as ITEM1ITEM2.
to_integer(ITEM) Integer Converts the storage of the specified field to an integer.
to_real(ITEM) Real Converts the storage of the specified field to a real.
to_number(ITEM) Number Converts the storage of the specified field to a number.
to_string(ITEM) String Converts the storage of the specified field to a string. When a real is converted to string using this function, it returns a value with 6 digits after the radix point.
to_time(ITEM) Time Converts the storage of the specified field to a time.
to_date(ITEM) Date Converts the storage of the specified field to a date.
to_timestamp(ITEM) Timestamp Converts the storage of the specified field to a timestamp.
to_datetime(ITEM) Datetime Converts the storage of the specified field to a date, time, or timestamp value.
datetime_date(ITEM) Date Returns the date value for a number, string, or timestamp. Note this is the only function that allows you to convert a number (in seconds) back to a date. If ITEM is a string, creates a date by parsing a string in the current date format. The date format specified in the flow properties must be correct for this function to be successful. If ITEM is a number, it's interpreted as a number of seconds since the base date (or epoch). Fractions of a day are truncated. If ITEM is a timestamp, the date part of the timestamp is returned. If ITEM is a date, it's returned unchanged.
stb_centroid_latitude(ITEM) Integer Returns an integer value for latitude corresponding to centroid of the geohash argument.
stb_centroid_longitude(ITEM) Integer Returns an integer value for longitude corresponding to centroid of the geohash argument.
to_geohash(ITEM) String Returns the geohashed string corresponding to the latitude and longitude using the specified number of bits for the density. A geohash is a code used to identify a set of geographic coordinates based on the latitude and longitude details. The three parameters for to_geohash are:<br><br><br><br> * latitude: Range (-90, 90), and units are degrees in the WGS84 coordinate system<br> * longitude: Range (-180, 180), and units are degrees in the WGS84 coordinate system<br> * bits: The number of bits to use to store the hash. Range [1,75]. This affects both the length of the returned string (1 character is used for every 5 bits), and the accuracy of the hash. For example, 5 bits (1 character) represents approximately 2500 kilometers, or 45 bits (9 characters), represents approximately 2.3 meters.<br><br><br>
| # Conversion functions #
With conversion functions, you can construct new fields and convert the storage type of existing fields\.
For example, you can form new strings by joining strings together or by taking strings apart\. To join two strings, use the operator `><`\. For example, if the field `Site` has the value `"BRAMLEY"`, then `"xx" >< Site` returns `"xxBRAMLEY"`\. The result of `><` is always a string, even if the arguments aren't strings\. Thus, if field `V1` is `3` and field `V2` is `5`, then `V1 >< V2` returns `"35"` (a string, not a number)\.
Conversion functions (and any other functions that require a specific type of input, such as a date or time value) depend on the current formats specified in the flow properties\. For example, if you want to convert a string field with values *Jan 2021*, *Feb 2021*, and so on, select the matching date format MON YYYY as the default date format for the flow\.
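As an illustrative sketch, assuming the flow's default date format is MON YYYY and using a hypothetical string field named `StartMonth`, the expression
to_date(StartMonth)
would convert a value such as *Jan 2021* to date storage\. Similarly, because `to_string` keeps 6 digits after the radix point when converting a real, `to_string(3.14)` would return the string `"3.140000"`\.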
<!-- <table "summary="CLEM conversion functions" id="clem_function_ref_conversion__table_lqn_qk3_cdb" class="defaultstyle" "> -->
CLEM conversion functions
Table 1\. CLEM conversion functions
| Function | Result | Description |
| ------------------------------ | ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `ITEM1` >< `ITEM2` | *String* | Concatenates values for two fields and returns the resulting string as *ITEM1ITEM2*\. |
| `to_integer(ITEM)` | *Integer* | Converts the storage of the specified field to an integer\. |
| `to_real(ITEM)` | *Real* | Converts the storage of the specified field to a real\. |
| `to_number(ITEM)` | *Number* | Converts the storage of the specified field to a number\. |
| `to_string(ITEM)` | *String* | Converts the storage of the specified field to a string\. When a real is converted to string using this function, it returns a value with 6 digits after the radix point\. |
| `to_time(ITEM)` | *Time* | Converts the storage of the specified field to a time\. |
| `to_date(ITEM)` | *Date* | Converts the storage of the specified field to a date\. |
| `to_timestamp(ITEM)` | *Timestamp* | Converts the storage of the specified field to a timestamp\. |
| `to_datetime(ITEM)` | *Datetime* | Converts the storage of the specified field to a date, time, or timestamp value\. |
| `datetime_date(ITEM)` | *Date* | Returns the date value for a *number*, *string*, or *timestamp*\. Note this is the only function that allows you to convert a number (in seconds) back to a date\. If `ITEM` is a string, creates a date by parsing a string in the current date format\. The date format specified in the flow properties must be correct for this function to be successful\. If `ITEM` is a number, it's interpreted as a number of seconds since the base date (or epoch)\. Fractions of a day are truncated\. If `ITEM` is a timestamp, the date part of the timestamp is returned\. If `ITEM` is a date, it's returned unchanged\. |
| `stb_centroid_latitude(ITEM)` | *Integer* | Returns an integer value for latitude corresponding to centroid of the geohash argument\. |
| `stb_centroid_longitude(ITEM)` | *Integer* | Returns an integer value for longitude corresponding to centroid of the geohash argument\. |
| `to_geohash(ITEM)` | *String* | Returns the geohashed string corresponding to the latitude and longitude using the specified number of bits for the density\. A geohash is a code used to identify a set of geographic coordinates based on the latitude and longitude details\. The three parameters for `to_geohash` are:<br><br><!-- <ul> --><br><br> * latitude: Range (\-90, 90), and units are degrees in the WGS84 coordinate system<br> * longitude: Range (\-180, 180), and units are degrees in the WGS84 coordinate system<br> * bits: The number of bits to use to store the hash\. Range \[1,75\]\. This affects both the length of the returned string (1 character is used for every 5 bits), and the accuracy of the hash\. For example, 5 bits (1 character) represents approximately 2500 kilometers, or 45 bits (9 characters), represents approximately 2\.3 meters\.<br><br><!-- </ul> --><br> |
<!-- </table "summary="CLEM conversion functions" id="clem_function_ref_conversion__table_lqn_qk3_cdb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
D1FAFA3A73F77B401F49CC641BE44D61BC9C0689 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_datetime.html?context=cdpaas&locale=en | Date and time functions (SPSS Modeler) | Date and time functions
CLEM includes a family of functions for handling fields with datetime storage of string variables representing dates and times.
The formats of date and time used are specific to each flow and are specified in the flow properties. The date and time functions parse date and time strings according to the currently selected format.
When you specify a year in a date that uses only two digits (that is, the century is not specified), SPSS Modeler uses the default century that's specified in the flow properties.
CLEM date and time functions
Table 1. CLEM date and time functions
Function Result Description
@TODAY String If you select Rollover days/mins in the flow properties, this function returns the current date as a string in the current date format. If you use a two-digit date format and don't select Rollover days/mins, this function returns $null$ on the current server.
to_time(ITEM) Time Converts the storage of the specified field to a time.
to_date(ITEM) Date Converts the storage of the specified field to a date.
to_timestamp(ITEM) Timestamp Converts the storage of the specified field to a timestamp.
to_datetime(ITEM) Datetime Converts the storage of the specified field to a date, time, or timestamp value.
datetime_date(ITEM) Date Returns the date value for a number, string, or timestamp. Note this is the only function that allows you to convert a number (in seconds) back to a date. If ITEM is a string, creates a date by parsing a string in the current date format. The date format specified in the flow properties must be correct for this function to be successful. If ITEM is a number, it's interpreted as a number of seconds since the base date (or epoch). Fractions of a day are truncated. If ITEM is timestamp, the date part of the timestamp is returned. If ITEM is a date, it's returned unchanged.
date_before(DATE1, DATE2) Boolean Returns a value of true if DATE1 represents a date or timestamp before that represented by DATE2. Otherwise, this function returns a value of 0.
date_days_difference(DATE1, DATE2) Integer Returns the time in days from the date or timestamp represented by DATE1 to that represented by DATE2, as an integer. If DATE2 is before DATE1, this function returns a negative number.
date_in_days(DATE) Integer Returns the time in days from the baseline date to the date or timestamp represented by DATE, as an integer. If DATE is before the baseline date, this function returns a negative number. You must include a valid date for the calculation to work appropriately. For example, you should not specify 29 February 2001 as the date. Because 2001 isn't a leap year, this date doesn't exist.
date_in_months(DATE) Real Returns the time in months from the baseline date to the date or timestamp represented by DATE, as a real number. This is an approximate figure based on a month of 30.4375 days. If DATE is before the baseline date, this function returns a negative number. You must include a valid date for the calculation to work appropriately. For example, you should not specify 29 February 2001 as the date. Because 2001 isn't a leap year, this date doesn't exist.
date_in_weeks(DATE) Real Returns the time in weeks from the baseline date to the date or timestamp represented by DATE, as a real number. This is based on a week of 7.0 days. If DATE is before the baseline date, this function returns a negative number. You must include a valid date for the calculation to work appropriately. For example, you should not specify 29 February 2001 as the date. Because 2001 isn't a leap year, this date doesn't exist.
date_in_years(DATE) Real Returns the time in years from the baseline date to the date or timestamp represented by DATE, as a real number. This is an approximate figure based on a year of 365.25 days. If DATE is before the baseline date, this function returns a negative number. You must include a valid date for the calculation to work appropriately. For example, you should not specify 29 February 2001 as the date. Because 2001 isn't a leap year, this date doesn't exist.
date_months_difference (DATE1, DATE2) Real Returns the time in months from the date or timestamp represented by DATE1 to that represented by DATE2, as a real number. This is an approximate figure based on a month of 30.4375 days. If DATE2 is before DATE1, this function returns a negative number.
datetime_date(YEAR, MONTH, DAY) Date Creates a date value for the given YEAR, MONTH, and DAY. The arguments must be integers.
datetime_day(DATE) Integer Returns the day of the month from a given DATE or timestamp. The result is an integer in the range 1 to 31.
datetime_day_name(DAY) String Returns the full name of the given DAY. The argument must be an integer in the range 1 (Sunday) to 7 (Saturday).
datetime_hour(TIME) Integer Returns the hour from a TIME or timestamp. The result is an integer in the range 0 to 23.
datetime_in_seconds(TIME) Real Returns the seconds portion stored in TIME.
datetime_in_seconds(DATE), datetime_in_seconds(DATETIME) Real Returns the accumulated number, converted into seconds, from the difference between the current DATE or DATETIME and the baseline date (1900-01-01).
datetime_minute(TIME) Integer Returns the minute from a TIME or timestamp. The result is an integer in the range 0 to 59.
datetime_month(DATE) Integer Returns the month from a DATE or timestamp. The result is an integer in the range 1 to 12.
datetime_month_name (MONTH) String Returns the full name of the given MONTH. The argument must be an integer in the range 1 to 12.
datetime_now Timestamp Returns the current time as a timestamp.
datetime_second(TIME) Integer Returns the second from a TIME or timestamp. The result is an integer in the range 0 to 59.
datetime_day_short_name(DAY) String Returns the abbreviated name of the given DAY. The argument must be an integer in the range 1 (Sunday) to 7 (Saturday).
datetime_month_short_name(MONTH) String Returns the abbreviated name of the given MONTH. The argument must be an integer in the range 1 to 12.
datetime_time(HOUR, MINUTE, SECOND) Time Returns the time value for the specified HOUR, MINUTE, and SECOND. The arguments must be integers.
datetime_time(ITEM) Time Returns the time value of the given ITEM.
datetime_timestamp(YEAR, MONTH, DAY, HOUR, MINUTE, SECOND) Timestamp Returns the timestamp value for the given YEAR, MONTH, DAY, HOUR, MINUTE, and SECOND.
datetime_timestamp(DATE, TIME) Timestamp Returns the timestamp value for the given DATE and TIME.
datetime_timestamp(NUMBER) Timestamp Returns the timestamp value of the given number of seconds.
datetime_weekday(DATE) Integer Returns the day of the week from the given DATE or timestamp.
datetime_year(DATE) Integer Returns the year from a DATE or timestamp. The result is an integer such as 2021.
date_weeks_difference(DATE1, DATE2) Real Returns the time in weeks from the date or timestamp represented by DATE1 to that represented by DATE2, as a real number. This is based on a week of 7.0 days. If DATE2 is before DATE1, this function returns a negative number.
date_years_difference (DATE1, DATE2) Real Returns the time in years from the date or timestamp represented by DATE1 to that represented by DATE2, as a real number. This is an approximate figure based on a year of 365.25 days. If DATE2 is before DATE1, this function returns a negative number.
date_from_ywd(YEAR, WEEK, DAY) Integer Converts the year, week in year, and day in week, to a date using the ISO 8601 standard.
date_iso_day(DATE) Integer Returns the day in the week from the date using the ISO 8601 standard.
date_iso_week(DATE) Integer Returns the week in the year from the date using the ISO 8601 standard.
date_iso_year(DATE) Integer Returns the year from the date using the ISO 8601 standard.
time_before(TIME1, TIME2) Boolean Returns a value of true if TIME1 represents a time or timestamp before that represented by TIME2. Otherwise, this function returns a value of 0.
time_hours_difference (TIME1, TIME2) Real Returns the time difference in hours between the times or timestamps represented by TIME1 and TIME2, as a real number. If you select Rollover days/mins in the flow properties, a higher value of TIME1 is taken to refer to the previous day. If you don't select the rollover option, a higher value of TIME1 causes the returned value to be negative.
time_in_hours(TIME) Real Returns the time in hours represented by TIME, as a real number. For example, under time format HHMM, the expression time_in_hours('0130') evaluates to 1.5. TIME can represent a time or a timestamp.
time_in_mins(TIME) Real Returns the time in minutes represented by TIME, as a real number. TIME can represent a time or a timestamp.
time_in_secs(TIME) Integer Returns the time in seconds represented by TIME, as an integer. TIME can represent a time or a timestamp.
time_mins_difference(TIME1, TIME2) Real Returns the time difference in minutes between the times or timestamps represented by TIME1 and TIME2, as a real number. If you select Rollover days/mins in the flow properties, a higher value of TIME1 is taken to refer to the previous day (or the previous hour, if only minutes and seconds are specified in the current format). If you don't select the rollover option, a higher value of TIME1 will cause the returned value to be negative.
time_secs_difference(TIME1, TIME2) Integer Returns the time difference in seconds between the times or timestamps represented by TIME1 and TIME2, as an integer. If you select Rollover days/mins in the flow properties, a higher value of TIME1 is taken to refer to the previous day (or the previous hour, if only minutes and seconds are specified in the current format). If you don't select the rollover option, a higher value of TIME1 causes the returned value to be negative.
| # Date and time functions #
CLEM includes a family of functions for handling fields with datetime storage of string variables representing dates and times\.
The formats of date and time used are specific to each flow and are specified in the flow properties\. The date and time functions parse date and time strings according to the currently selected format\.
When you specify a year in a date that uses only two digits (that is, the century is not specified), SPSS Modeler uses the default century that's specified in the flow properties\.
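As a brief illustration, assuming the flow's date format is DD/MM/YYYY (the literal dates shown are illustrative only), the expression
date_days_difference(to_date('01/01/2021'), to_date('31/01/2021'))
would return 30, and `datetime_year(to_date('31/01/2021'))` would return 2021\.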
<!-- <table "summary="CLEM date and time functions" class="defaultstyle" "> -->
CLEM date and time functions
Table 1\. CLEM date and time functions
| Function | Result | Description |
| ------------------------------------------------------------ | ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `@TODAY` | *String* | If you select Rollover days/mins in the flow properties, this function returns the current date as a string in the current date format\. If you use a two\-digit date format and don't select Rollover days/mins, this function returns `$null$` on the current server\. |
| `to_time(ITEM)` | *Time* | Converts the storage of the specified field to a time\. |
| `to_date(ITEM)` | *Date* | Converts the storage of the specified field to a date\. |
| `to_timestamp(ITEM)` | *Timestamp* | Converts the storage of the specified field to a timestamp\. |
| `to_datetime(ITEM)` | *Datetime* | Converts the storage of the specified field to a date, time, or timestamp value\. |
| `datetime_date(ITEM)` | *Date* | Returns the date value for a *number*, *string*, or *timestamp*\. Note this is the only function that allows you to convert a number (in seconds) back to a date\. If `ITEM` is a string, creates a date by parsing a string in the current date format\. The date format specified in the flow properties must be correct for this function to be successful\. If `ITEM` is a number, it's interpreted as a number of seconds since the base date (or epoch)\. Fractions of a day are truncated\. If `ITEM` is timestamp, the date part of the timestamp is returned\. If `ITEM` is a date, it's returned unchanged\. |
| `date_before(DATE1, DATE2)` | *Boolean* | Returns a value of true if *DATE1* represents a date or timestamp before that represented by *DATE2*\. Otherwise, this function returns a value of 0\. |
| `date_days_difference(DATE1, DATE2)` | *Integer* | Returns the time in days from the date or timestamp represented by *DATE1* to that represented by *DATE2*, as an integer\. If *DATE2* is before *DATE1*, this function returns a negative number\. |
| `date_in_days(DATE)` | *Integer* | Returns the time in days from the baseline date to the date or timestamp represented by *DATE*, as an integer\. If *DATE* is before the baseline date, this function returns a negative number\. You must include a valid date for the calculation to work appropriately\. For example, you should not specify 29 February 2001 as the date\. Because 2001 isn't a leap year, this date doesn't exist\. |
| `date_in_months(DATE)` | *Real* | Returns the time in months from the baseline date to the date or timestamp represented by *DATE*, as a real number\. This is an approximate figure based on a month of 30\.4375 days\. If *DATE* is before the baseline date, this function returns a negative number\. You must include a valid date for the calculation to work appropriately\. For example, you should not specify 29 February 2001 as the date\. Because 2001 isn't a leap year, this date doesn't exist\. |
| `date_in_weeks(DATE)` | *Real* | Returns the time in weeks from the baseline date to the date or timestamp represented by *DATE*, as a real number\. This is based on a week of 7\.0 days\. If *DATE* is before the baseline date, this function returns a negative number\. You must include a valid date for the calculation to work appropriately\. For example, you should not specify 29 February 2001 as the date\. Because 2001 isn't a leap year, this date doesn't exist\. |
| `date_in_years(DATE)` | *Real* | Returns the time in years from the baseline date to the date or timestamp represented by *DATE*, as a real number\. This is an approximate figure based on a year of 365\.25 days\. If *DATE* is before the baseline date, this function returns a negative number\. You must include a valid date for the calculation to work appropriately\. For example, you should not specify 29 February 2001 as the date\. Because 2001 isn't a leap year, this date doesn't exist\. |
| `date_months_difference (DATE1, DATE2)` | *Real* | Returns the time in months from the date or timestamp represented by *DATE1* to that represented by *DATE2*, as a real number\. This is an approximate figure based on a month of 30\.4375 days\. If *DATE2* is before *DATE1*, this function returns a negative number\. |
| `datetime_date(YEAR, MONTH, DAY)` | *Date* | Creates a date value for the given *YEAR*, *MONTH*, and *DAY*\. The arguments must be integers\. |
| `datetime_day(DATE)` | *Integer* | Returns the day of the month from a given *DATE* or timestamp\. The result is an integer in the range 1 to 31\. |
| `datetime_day_name(DAY)` | *String* | Returns the full name of the given *DAY*\. The argument must be an integer in the range 1 (Sunday) to 7 (Saturday)\. |
| `datetime_hour(TIME)` | *Integer* | Returns the hour from a *TIME* or timestamp\. The result is an integer in the range 0 to 23\. |
| `datetime_in_seconds(TIME)` | *Real* | Returns the seconds portion stored in *TIME*\. |
| `datetime_in_seconds(DATE)`, `datetime_in_seconds(DATETIME)` | *Real* | Returns the accumulated number, converted into seconds, from the difference between the current *DATE* or *DATETIME* and the baseline date (1900\-01\-01)\. |
| `datetime_minute(TIME)` | *Integer* | Returns the minute from a *TIME* or timestamp\. The result is an integer in the range 0 to 59\. |
| `datetime_month(DATE)` | *Integer* | Returns the month from a *DATE* or timestamp\. The result is an integer in the range 1 to 12\. |
| `datetime_month_name (MONTH)` | *String* | Returns the full name of the given *MONTH*\. The argument must be an integer in the range 1 to 12\. |
| `datetime_now` | *Timestamp* | Returns the current time as a timestamp\. |
| `datetime_second(TIME)` | *Integer* | Returns the second from a *TIME* or timestamp\. The result is an integer in the range 0 to 59\. |
| `datetime_day_short_name(DAY)` | *String* | Returns the abbreviated name of the given *DAY*\. The argument must be an integer in the range 1 (Sunday) to 7 (Saturday)\. |
| `datetime_month_short_name(MONTH)` | *String* | Returns the abbreviated name of the given *MONTH*\. The argument must be an integer in the range 1 to 12\. |
| `datetime_time(HOUR, MINUTE, SECOND)` | *Time* | Returns the time value for the specified *HOUR*, *MINUTE*, and *SECOND*\. The arguments must be integers\. |
| `datetime_time(ITEM)` | *Time* | Returns the time value of the given *ITEM*\. |
| `datetime_timestamp(YEAR, MONTH, DAY, HOUR, MINUTE, SECOND)` | *Timestamp* | Returns the timestamp value for the given *YEAR*, *MONTH*, *DAY*, *HOUR*, *MINUTE*, and *SECOND*\. |
| `datetime_timestamp(DATE, TIME)` | *Timestamp* | Returns the timestamp value for the given *DATE* and *TIME*\. |
| `datetime_timestamp(NUMBER)` | *Timestamp* | Returns the timestamp value of the given number of seconds\. |
| `datetime_weekday(DATE)` | *Integer* | Returns the day of the week from the given *DATE* or timestamp\. |
| `datetime_year(DATE)` | *Integer* | Returns the year from a *DATE* or timestamp\. The result is an integer such as 2021\. |
| `date_weeks_difference(DATE1, DATE2)` | *Real* | Returns the time in weeks from the date or timestamp represented by *DATE1* to that represented by *DATE2*, as a real number\. This is based on a week of 7\.0 days\. If *DATE2* is before *DATE1*, this function returns a negative number\. |
| `date_years_difference (DATE1, DATE2)` | *Real* | Returns the time in years from the date or timestamp represented by *DATE1* to that represented by *DATE2*, as a real number\. This is an approximate figure based on a year of 365\.25 days\. If *DATE2* is before *DATE1*, this function returns a negative number\. |
| `date_from_ywd(YEAR, WEEK, DAY)` | *Integer* | Converts the year, week in year, and day in week, to a date using the ISO 8601 standard\. |
| `date_iso_day(DATE)` | *Integer* | Returns the day in the week from the date using the ISO 8601 standard\. |
| `date_iso_week(DATE)` | *Integer* | Returns the week in the year from the date using the ISO 8601 standard\. |
| `date_iso_year(DATE)` | *Integer* | Returns the year from the date using the ISO 8601 standard\. |
| `time_before(TIME1, TIME2)` | *Boolean* | Returns a value of true if *TIME1* represents a time or timestamp before that represented by *TIME2*\. Otherwise, this function returns a value of 0\. |
| `time_hours_difference (TIME1, TIME2)` | *Real* | Returns the time difference in hours between the times or timestamps represented by *TIME1* and *TIME2*, as a real number\. If you select Rollover days/mins in the flow properties, a higher value of *TIME1* is taken to refer to the previous day\. If you don't select the rollover option, a higher value of *TIME1* causes the returned value to be negative\. |
| `time_in_hours(TIME)` | *Real* | Returns the time in hours represented by *TIME*, as a real number\. For example, under time format `HHMM`, the expression `time_in_hours('0130')` evaluates to 1\.5\. *TIME* can represent a time or a timestamp\. |
| `time_in_mins(TIME)` | *Real* | Returns the time in minutes represented by *TIME*, as a real number\. *TIME* can represent a time or a timestamp\. |
| `time_in_secs(TIME)` | *Integer* | Returns the time in seconds represented by *TIME*, as an integer\. *TIME* can represent a time or a timestamp\. |
| `time_mins_difference(TIME1, TIME2)` | *Real* | Returns the time difference in minutes between the times or timestamps represented by *TIME1* and *TIME2*, as a real number\. If you select Rollover days/mins in the flow properties, a higher value of *TIME1* is taken to refer to the previous day (or the previous hour, if only minutes and seconds are specified in the current format)\. If you don't select the rollover option, a higher value of *TIME1* will cause the returned value to be negative\. |
| `time_secs_difference(TIME1, TIME2)` | *Integer* | Returns the time difference in seconds between the times or timestamps represented by *TIME1* and *TIME2*, as an integer\. If you select Rollover days/mins in the flow properties, a higher value of *TIME1* is taken to refer to the previous day (or the previous hour, if only minutes and seconds are specified in the current format)\. If you don't select the rollover option, a higher value of *TIME1* causes the returned value to be negative\. |
<!-- </table "summary="CLEM date and time functions" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
299CEE894DFF422AAC8BF49B53CAC700DE1B172D | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_global.html?context=cdpaas&locale=en | Global functions (SPSS Modeler) | Global functions
The functions @MEAN, @SUM, @MIN, @MAX, and @SDEV work on, at most, all of the records read up to and including the current one. In some cases, however, it is useful to be able to work out how values in the current record compare with values seen in the entire data set. Using a Set Globals node to generate values across the entire data set, you can access these values in a CLEM expression using the global functions.
For example,
@GLOBAL_MAX(Age)
returns the highest value of Age in the data set, while the expression
(Value - @GLOBAL_MEAN(Value)) / @GLOBAL_SDEV(Value)
expresses the difference between this record's Value and the global mean as a number of standard deviations. You can use global values only after they have been calculated by a Set Globals node.
CLEM global functions
Table 1. CLEM global functions
Function Result Description
@GLOBAL_MAX(FIELD) Number Returns the maximum value for FIELD over the whole data set, as previously generated by a Set Globals node. FIELD must be the name of a numeric, date/time/datetime, or string field. If the corresponding global value has not been set, an error occurs.
@GLOBAL_MIN(FIELD) Number Returns the minimum value for FIELD over the whole data set, as previously generated by a Set Globals node. FIELD must be the name of a numeric, date/time/datetime, or string field. If the corresponding global value has not been set, an error occurs.
@GLOBAL_SDEV(FIELD) Number Returns the standard deviation of values for FIELD over the whole data set, as previously generated by a Set Globals node. FIELD must be the name of a numeric field. If the corresponding global value has not been set, an error occurs.
@GLOBAL_MEAN(FIELD) Number Returns the mean average of values for FIELD over the whole data set, as previously generated by a Set Globals node. FIELD must be the name of a numeric field. If the corresponding global value has not been set, an error occurs.
@GLOBAL_SUM(FIELD) Number Returns the sum of values for FIELD over the whole data set, as previously generated by a Set Globals node. FIELD must be the name of a numeric field. If the corresponding global value has not been set, an error occurs.
| # Global functions #
The functions `@MEAN`, `@SUM`, `@MIN`, `@MAX`, and `@SDEV` work on, at most, all of the records read up to and including the current one\. In some cases, however, it is useful to be able to work out how values in the current record compare with values seen in the entire data set\. Using a Set Globals node to generate values across the entire data set, you can access these values in a CLEM expression using the global functions\.
For example,
@GLOBAL_MAX(Age)
returns the highest value of `Age` in the data set, while the expression
(Value - @GLOBAL_MEAN(Value)) / @GLOBAL_SDEV(Value)
expresses the difference between this record's `Value` and the global mean as a number of standard deviations\. You can use global values only after they have been calculated by a Set Globals node\.
<!-- <table "summary="CLEM global functions" id="clem_function_ref_global__table_ept_sk3_cdb" class="defaultstyle" "> -->
CLEM global functions
Table 1\. CLEM global functions
| Function | Result | Description |
| --------------------- | -------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `@GLOBAL_MAX(FIELD)` | *Number* | Returns the maximum value for *FIELD* over the whole data set, as previously generated by a Set Globals node\. *FIELD* must be the name of a numeric, date/time/datetime, or string field\. If the corresponding global value has not been set, an error occurs\. |
| `@GLOBAL_MIN(FIELD)` | *Number* | Returns the minimum value for *FIELD* over the whole data set, as previously generated by a Set Globals node\. *FIELD* must be the name of a numeric, date/time/datetime, or string field\. If the corresponding global value has not been set, an error occurs\. |
| `@GLOBAL_SDEV(FIELD)` | *Number* | Returns the standard deviation of values for *FIELD* over the whole data set, as previously generated by a Set Globals node\. *FIELD* must be the name of a numeric field\. If the corresponding global value has not been set, an error occurs\. |
| `@GLOBAL_MEAN(FIELD)` | *Number* | Returns the mean average of values for *FIELD* over the whole data set, as previously generated by a Set Globals node\. *FIELD* must be the name of a numeric field\. If the corresponding global value has not been set, an error occurs\. |
| `@GLOBAL_SUM(FIELD)` | *Number* | Returns the sum of values for *FIELD* over the whole data set, as previously generated by a Set Globals node\. *FIELD* must be the name of a numeric field\. If the corresponding global value has not been set, an error occurs\. |
<!-- </table "summary="CLEM global functions" id="clem_function_ref_global__table_ept_sk3_cdb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
C6379E4ACDD7B1C335E9944B8D9DBB08DB220420 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_information.html?context=cdpaas&locale=en | Information functions (SPSS Modeler) | Information functions
You can use information functions to gain insight into the values of a particular field. They're typically used to derive flag fields.
For example, the @BLANK function creates a flag field indicating records whose values are blank for the selected field. Similarly, you can check the storage type for a field using any of the storage type functions, such as is_string.
CLEM information functions
Table 1. CLEM information functions
Function Result Description
@BLANK(FIELD) Boolean Returns true for all records whose values are blank according to the blank-handling rules set in an upstream Type node or source node (Types tab).
@NULL(ITEM) Boolean Returns true for all records whose values are undefined. Undefined values are system null values, displayed in SPSS Modeler as $null$.
is_date(ITEM) Boolean Returns true for all records whose type is a date.
is_datetime(ITEM) Boolean Returns true for all records whose type is a date, time, or timestamp.
is_integer(ITEM) Boolean Returns true for all records whose type is an integer.
is_number(ITEM) Boolean Returns true for all records whose type is a number.
is_real(ITEM) Boolean Returns true for all records whose type is a real.
is_string(ITEM) Boolean Returns true for all records whose type is a string.
is_time(ITEM) Boolean Returns true for all records whose type is a time.
is_timestamp(ITEM) Boolean Returns true for all records whose type is a timestamp.
| # Information functions #
You can use information functions to gain insight into the values of a particular field\. They're typically used to derive flag fields\.
For example, the `@BLANK` function creates a flag field indicating records whose values are blank for the selected field\. Similarly, you can check the storage type for a field using any of the storage type functions, such as `is_string`\.
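As a small sketch using a hypothetical field named `Income`, a Derive node could create a flag with the expression
@NULL(Income) or @BLANK(Income)
which is true for records whose income is either undefined (`$null$`) or blank according to the upstream blank\-handling rules\.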
<!-- <table "summary="CLEM information functions" class="defaultstyle" "> -->
CLEM information functions
Table 1\. CLEM information functions
| Function | Result | Description |
| -------------------- | --------- | ---------------------------------------------------------------------------------------------------------------------------------------------------- |
| `@BLANK(FIELD)` | *Boolean* | Returns true for all records whose values are blank according to the blank\-handling rules set in an upstream Type node or source node (Types tab)\. |
| `@NULL(ITEM)` | *Boolean* | Returns true for all records whose values are undefined\. Undefined values are system null values, displayed in SPSS Modeler as `$null$`\. |
| `is_date(ITEM)` | *Boolean* | Returns true for all records whose type is a date\. |
| `is_datetime(ITEM)` | *Boolean* | Returns true for all records whose type is a date, time, or timestamp\. |
| `is_integer(ITEM)` | *Boolean* | Returns true for all records whose type is an integer\. |
| `is_number(ITEM)` | *Boolean* | Returns true for all records whose type is a number\. |
| `is_real(ITEM)` | *Boolean* | Returns true for all records whose type is a real\. |
| `is_string(ITEM)` | *Boolean* | Returns true for all records whose type is a string\. |
| `is_time(ITEM)` | *Boolean* | Returns true for all records whose type is a time\. |
| `is_timestamp(ITEM)` | *Boolean* | Returns true for all records whose type is a timestamp\. |
<!-- </table "summary="CLEM information functions" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
A67EA42903BF8BE22AEB379891B7E1CA3EB2E4D1 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_logical.html?context=cdpaas&locale=en | Logical functions (SPSS Modeler) | Logical functions
CLEM expressions can be used to perform logical operations.
CLEM logical functions
Table 1. CLEM logical functions
Function Result Description
COND1 and COND2 Boolean This operation is a logical conjunction and returns a true value if both COND1 and COND2 are true. If COND1 is false, then COND2 is not evaluated; this makes it possible to have conjunctions where COND1 first tests that an operation in COND2 is legal. For example, length(Label) >=6 and Label(6) = 'x'.
COND1 or COND2 Boolean This operation is a logical (inclusive) disjunction and returns a true value if either COND1 or COND2 is true or if both are true. If COND1 is true, COND2 is not evaluated.
not(COND) Boolean This operation is a logical negation and returns a true value if COND is false. Otherwise, this operation returns a value of 0.
if COND then EXPR1 else EXPR2 endif Any This operation is a conditional evaluation. If COND is true, this operation returns the result of EXPR1. Otherwise, the result of evaluating EXPR2 is returned.
if COND1 then EXPR1 elseif COND2 then EXPR2 else EXPR_N endif Any This operation is a multibranch conditional evaluation. If COND1 is true, this operation returns the result of EXPR1. Otherwise, if COND2 is true, this operation returns the result of evaluating EXPR2. Otherwise, the result of evaluating EXPR_N is returned.
| # Logical functions #
CLEM expressions can be used to perform logical operations\.
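For example, using a hypothetical numeric field named `Age`, a conditional expression such as
if Age >= 18 then 'Adult' else 'Minor' endif
returns `'Adult'` when `Age` is 18 or more and `'Minor'` otherwise\.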
<!-- <table "summary="CLEM logical functions" id="clem_function_ref_logical__table_bct_5k3_cdb" class="defaultstyle" "> -->
CLEM logical functions
Table 1\. CLEM logical functions
| Function | Result | Description |
| --------------------------------------------------------------- | --------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `COND1 and COND2` | *Boolean* | This operation is a logical conjunction and returns a true value if both *COND1* and *COND2* are true\. If *COND1* is false, then *COND2* is not evaluated; this makes it possible to have conjunctions where *COND1* first tests that an operation in *COND2* is legal\. For example, `length(Label) >= 6 and Label(6) = 'x'`\. |
| `COND1 or COND2` | *Boolean* | This operation is a logical (inclusive) disjunction and returns a true value if either *COND1* or *COND2* is true or if both are true\. If *COND1* is true, *COND2* is not evaluated\. |
| `not(COND)` | *Boolean* | This operation is a logical negation and returns a true value if *COND* is false\. Otherwise, this operation returns a value of 0\. |
| `if COND then EXPR1 else EXPR2 endif` | *Any* | This operation is a conditional evaluation\. If *COND* is true, this operation returns the result of *EXPR1*\. Otherwise, the result of evaluating *EXPR2* is returned\. |
| `if COND1 then EXPR1 elseif COND2 then EXPR2 else EXPR_N endif` | *Any* | This operation is a multibranch conditional evaluation\. If *COND1* is true, this operation returns the result of *EXPR1*\. Otherwise, if *COND2* is true, this operation returns the result of evaluating *EXPR2*\. Otherwise, the result of evaluating *EXPR\_N* is returned\. |
<!-- </table "summary="CLEM logical functions" id="clem_function_ref_logical__table_bct_5k3_cdb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
EEC0EB0502DEF7B7ADB112F8D7D4C38E1F6D9170 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_numeric.html?context=cdpaas&locale=en | Numeric functions (SPSS Modeler) | Numeric functions
CLEM contains a number of commonly used numeric functions.
CLEM numeric functions
Table 1. CLEM numeric functions
Function Result Description
–NUM Number Used to negate NUM. Returns the corresponding number with the opposite sign.
NUM1 + NUM2 Number Returns the sum of NUM1 and NUM2.
NUM1 –NUM2 Number Returns the value of NUM2 subtracted from NUM1.
NUM1 * NUM2 Number Returns the value of NUM1 multiplied by NUM2.
NUM1 / NUM2 Number Returns the value of NUM1 divided by NUM2.
INT1 div INT2 Number Used to perform integer division. Returns the value of INT1 divided by INT2.
INT1 rem INT2 Number Returns the remainder of INT1 divided by INT2. For example, INT1 – (INT1 div INT2) * INT2.
BASE ** POWER Number Returns BASE raised to the power POWER, where either may be any number (except that BASE must not be zero if POWER is zero of any type other than integer 0). If POWER is an integer, the computation is performed by successively multiplying powers of BASE. Thus, if BASE is an integer, the result will be an integer. If POWER is integer 0, the result is always a 1 of the same type as BASE. Otherwise, if POWER is not an integer, the result is computed as exp(POWER * log(BASE)).
abs(NUM) Number Returns the absolute value of NUM, which is always a number of the same type.
exp(NUM) Real Returns e raised to the power NUM, where e is the base of natural logarithms.
fracof(NUM) Real Returns the fractional part of NUM, defined as NUM–intof(NUM).
intof(NUM) Integer Truncates its argument to an integer. It returns the integer of the same sign as NUM and with the largest magnitude such that abs(INT) <= abs(NUM).
log(NUM) Real Returns the natural (base e) logarithm of NUM, which must not be a zero of any kind.
log10(NUM) Real Returns the base 10 logarithm of NUM, which must not be a zero of any kind. This function is defined as log(NUM) / log(10).
negate(NUM) Number Used to negate NUM. Returns the corresponding number with the opposite sign.
round(NUM) Integer Used to round NUM to an integer by taking intof(NUM+0.5) if NUM is positive or intof(NUM–0.5) if NUM is negative.
sign(NUM) Number Used to determine the sign of NUM. This operation returns –1, 0, or 1 if NUM is an integer. If NUM is a real, it returns –1.0, 0.0, or 1.0, depending on whether NUM is negative, zero, or positive.
sqrt(NUM) Real Returns the square root of NUM. NUM must be positive.
sum_n(LIST) Number Returns the sum of values from a list of numeric fields or null if all of the field values are null.
mean_n(LIST) Number Returns the mean value from a list of numeric fields or null if all of the field values are null.
sdev_n(LIST) Number Returns the standard deviation from a list of numeric fields or null if all of the field values are null.
| # Numeric functions #
CLEM contains a number of commonly used numeric functions\.
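For example, from the definitions in the table below, `round(3.7)` evaluates to 4, `intof(-2.6)` evaluates to \-2, `7 rem 3` evaluates to 1, and `2 ** 10` evaluates to 1024\.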
<!-- <table "summary="CLEM numeric functions" id="clem_function_ref_numeric__table_dw4_qqz_ddb" class="defaultstyle" "> -->
CLEM numeric functions
Table 1\. CLEM numeric functions
| Function | Result | Description |
| ---------------- | --------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| –`NUM` | *Number* | Used to negate *NUM*\. Returns the corresponding number with the opposite sign\. |
| `NUM1` \+ `NUM2` | *Number* | Returns the sum of *NUM1* and *NUM2*\. |
| `NUM1` –`NUM2` | *Number* | Returns the value of *NUM2* subtracted from *NUM1*\. |
| `NUM1` \* `NUM2` | *Number* | Returns the value of *NUM1* multiplied by *NUM2*\. |
| `NUM1` / `NUM2` | *Number* | Returns the value of *NUM1* divided by *NUM2*\. |
| `INT1 div INT2` | *Number* | Used to perform integer division\. Returns the value of *INT1* divided by *INT2*\. |
| `INT1 rem INT2` | *Number* | Returns the remainder of *INT1* divided by *INT2*\. For example, `INT1 – (INT1 div INT2) * INT2`\. |
| `BASE ** POWER` | *Number* | Returns *BASE* raised to the power *POWER*, where either may be any number (except that *BASE* must not be zero if *POWER* is zero of any type other than integer 0)\. If *POWER* is an integer, the computation is performed by successively multiplying powers of *BASE*\. Thus, if *BASE* is an integer, the result will be an integer\. If *POWER* is integer 0, the result is always a 1 of the same type as *BASE*\. Otherwise, if *POWER* is not an integer, the result is computed as `exp(POWER * log(BASE))`\. |
| `abs(NUM)` | *Number* | Returns the absolute value of *NUM*, which is always a number of the same type\. |
| `exp(NUM)` | *Real* | Returns *e* raised to the power *NUM*, where *e* is the base of natural logarithms\. |
| `fracof(NUM)` | *Real* | Returns the fractional part of *NUM*, defined as `NUM–intof(NUM)`\. |
| `intof(NUM)` | *Integer* | Truncates its argument to an integer\. It returns the integer of the same sign as *NUM* and with the largest magnitude such that `abs(INT) <= abs(NUM)`\. |
| `log(NUM)` | *Real* | Returns the natural (base *e*) logarithm of *NUM*, which must not be a zero of any kind\. |
| `log10(NUM)` | *Real* | Returns the base 10 logarithm of *NUM*, which must not be a zero of any kind\. This function is defined as `log(NUM) / log(10)`\. |
| `negate(NUM)` | *Number* | Used to negate *NUM*\. Returns the corresponding number with the opposite sign\. |
| `round(NUM)` | *Integer* | Used to round *NUM* to an integer by taking `intof(NUM+0.5)` if *NUM* is positive or `intof(NUM–0.5)` if *NUM* is negative\. |
| `sign(NUM)` | *Number* | Used to determine the sign of *NUM*\. This operation returns –1, 0, or 1 if *NUM* is an integer\. If *NUM* is a real, it returns –1\.0, 0\.0, or 1\.0, depending on whether *NUM* is negative, zero, or positive\. |
| `sqrt(NUM)` | *Real* | Returns the square root of *NUM*\. *NUM* must be positive\. |
| `sum_n(LIST)` | *Number* | Returns the sum of values from a list of numeric fields or null if all of the field values are null\. |
| `mean_n(LIST)` | *Number* | Returns the mean value from a list of numeric fields or null if all of the field values are null\. |
| `sdev_n(LIST)` | *Number* | Returns the standard deviation from a list of numeric fields or null if all of the field values are null\. |
<!-- </table "summary="CLEM numeric functions" id="clem_function_ref_numeric__table_dw4_qqz_ddb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
29DEEC30687F805460A83DD924D2F119274D25F8 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_probability.html?context=cdpaas&locale=en | Probability functions (SPSS Modeler) | Probability functions
Probability functions return probabilities based on various distributions, such as the probability that a value from Student's t distribution will be less than a specific value.
CLEM probability functions
Table 1. CLEM probability functions
Function Result Description
cdf_chisq(NUM, DF) Real Returns the probability that a value from the chi-square distribution with the specified degrees of freedom will be less than the specified number.
cdf_f(NUM, DF1, DF2) Real Returns the probability that a value from the F distribution, with degrees of freedom DF1 and DF2, will be less than the specified number.
cdf_normal(NUM, MEAN, STDDEV) Real Returns the probability that a value from the normal distribution with the specified mean and standard deviation will be less than the specified number.
cdf_t(NUM, DF) Real Returns the probability that a value from Student's t distribution with the specified degrees of freedom will be less than the specified number.
| # Probability functions #
Probability functions return probabilities based on various distributions, such as the probability that a value from Student's *t* distribution will be less than a specific value\.
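For example, `cdf_normal(1.0, 0, 1)` returns the probability that a value from the standard normal distribution is less than 1\.0 (approximately 0\.841), and `cdf_t(2.0, 10)` returns the probability that a value from Student's *t* distribution with 10 degrees of freedom is less than 2\.0 (roughly 0\.96)\.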
<!-- <table "summary="CLEM probability functions" id="clem_function_ref_probability__table_vqh_xk3_cdb" class="defaultstyle" "> -->
CLEM probability functions
Table 1\. CLEM probability functions
| Function | Result | Description |
| ------------------------------- | ------ | --------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `cdf_chisq(NUM, DF)` | *Real* | Returns the probability that a value from the chi\-square distribution with the specified degrees of freedom will be less than the specified number\. |
| `cdf_f(NUM, DF1, DF2)` | *Real* | Returns the probability that a value from the *F* distribution, with degrees of freedom *DF1* and *DF2*, will be less than the specified number\. |
| `cdf_normal(NUM, MEAN, STDDEV)` | *Real* | Returns the probability that a value from the normal distribution with the specified mean and standard deviation will be less than the specified number\. |
| `cdf_t(NUM, DF)` | *Real* | Returns the probability that a value from Student's *t* distribution with the specified degrees of freedom will be less than the specified number\. |
<!-- </table "summary="CLEM probability functions" id="clem_function_ref_probability__table_vqh_xk3_cdb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
9789F3A8936AD06C653C1C7AEB421C70FFD7C3E1 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_random.html?context=cdpaas&locale=en | Random functions (SPSS Modeler) | Random functions
The functions listed on this page can be used to randomly select items or randomly generate numbers.
CLEM random functions
Table 1. CLEM random functions
Function Result Description
oneof(LIST) Any Returns a randomly chosen element of LIST. List items should be entered as [ITEM1,ITEM2,...,ITEM_N]. Note that a list of field names can also be specified.
random(NUM) Number Returns a uniformly distributed random number of the same type (INT or REAL), starting from 1 to NUM. If you use an integer, then only integers are returned. If you use a real (decimal) number, then real numbers are returned (decimal precision determined by the stream options). The largest random number returned by the function could equal NUM.
random0(NUM) Number This has the same properties as random(NUM), but starting from 0. The largest random number returned by the function will never equal NUM.
| # Random functions #
The functions listed on this page can be used to randomly select items or randomly generate numbers\.
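For example, `oneof(['low','medium','high'])` returns one of the three listed strings at random (the values are illustrative only), `random(10)` returns an integer from 1 to 10 inclusive, and `random0(1.0)` returns a real value that is always less than 1\.0\.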
<!-- <table "summary="CLEM random functions" id="clem_function_ref_random__table_t2w_sqz_ddb" class="defaultstyle" "> -->
CLEM random functions
Table 1\. CLEM random functions
| Function | Result | Description |
| -------------- | -------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `oneof(LIST)` | *Any* | Returns a randomly chosen element of *LIST*\. List items should be entered as `[ITEM1,ITEM2,...,ITEM_N]`\. Note that a list of field names can also be specified\. |
| `random(NUM)` | *Number* | Returns a uniformly distributed random number of the same type (*INT* or *REAL*), starting from 1 to *NUM*\. If you use an integer, then only integers are returned\. If you use a real (decimal) number, then real numbers are returned (decimal precision determined by the stream options)\. The largest random number returned by the function could equal *NUM*\. |
| `random0(NUM)` | *Number* | This has the same properties as `random(NUM)`, but starting from 0\. The largest random number returned by the function will never equal *NUM*\. |
<!-- </table "summary="CLEM random functions" id="clem_function_ref_random__table_t2w_sqz_ddb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
BACAF30043E33912E3D7F174B3F8CF858CB3093A | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_sequence.html?context=cdpaas&locale=en | Sequence functions (SPSS Modeler) | Sequence functions
For some operations, the sequence of events is important.
The application allows you to work with the following record sequences:
* Sequences and time series
* Sequence functions
* Record indexing
* Averaging, summing, and comparing values
* Monitoring change—differentiation
* @SINCE
* Offset values
* Additional sequence facilities
For many applications, each record passing through a stream can be considered as an individual case, independent of all others. In such situations, the order of records is usually unimportant.
For some classes of problems, however, the record sequence is very important. These are typically time series situations, in which the sequence of records represents an ordered sequence of events or occurrences. Each record represents a snapshot at a particular instant in time; much of the richest information, however, might be contained not in instantaneous values but in the way in which such values are changing and behaving over time.
Of course, the relevant parameter may be something other than time. For example, the records could represent analyses performed at distances along a line, but the same principles would apply.
Sequence and special functions are immediately recognizable by the following characteristics:
* They are all prefixed by @
* Their names are given in uppercase
Sequence functions can refer to the record currently being processed by a node, the records that have already passed through a node, and even, in one case, records that have yet to pass through a node. Sequence functions can be mixed freely with other components of CLEM expressions, although some have restrictions on what can be used as their arguments.
| # Sequence functions #
For some operations, the sequence of events is important\.
The application allows you to work with the following record sequences:
<!-- <ul> -->
* Sequences and time series
* Sequence functions
* Record indexing
* Averaging, summing, and comparing values
* Monitoring change—differentiation
* `@SINCE`
* Offset values
* Additional sequence facilities
<!-- </ul> -->
For many applications, each record passing through a stream can be considered as an individual case, independent of all others\. In such situations, the order of records is usually unimportant\.
For some classes of problems, however, the record sequence is very important\. These are typically time series situations, in which the sequence of records represents an ordered sequence of events or occurrences\. Each record represents a snapshot at a particular instant in time; much of the richest information, however, might be contained not in instantaneous values but in the way in which such values are changing and behaving over time\.
Of course, the relevant parameter may be something other than time\. For example, the records could represent analyses performed at distances along a line, but the same principles would apply\.
Sequence and special functions are immediately recognizable by the following characteristics:
<!-- <ul> -->
* They are all prefixed by `@`
* Their names are given in uppercase
<!-- </ul> -->
Sequence functions can refer to the record currently being processed by a node, the records that have already passed through a node, and even, in one case, records that have yet to pass through a node\. Sequence functions can be mixed freely with other components of CLEM expressions, although some have restrictions on what can be used as their arguments\.
<!-- </article "role="article" "> -->
|
88E4E066B89D0A6993F31EA337930D962B76D6D1 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_soundex.html?context=cdpaas&locale=en | SoundEx functions (SPSS Modeler) | SoundEx functions
SoundEx is a method used to find strings when the sound is known but the precise spelling isn't known.
Developed in 1918, the method searches out words with similar sounds based on phonetic assumptions about how certain letters are pronounced. SoundEx can be used to search names in a database (for example, where spellings and pronunciations for similar names may vary). The basic SoundEx algorithm is documented in a number of sources and, despite known limitations (for example, leading letter combinations such as ph and f won't match even though they sound the same), is supported in some form by most databases.
CLEM soundex functions
Table 1. CLEM soundex functions
Function Result Description
soundex(STRING) Integer Returns the four-character SoundEx code for the specified STRING.
soundex_difference(STRING1, STRING2) Integer Returns an integer between 0 and 4 that indicates the number of characters that are the same in the SoundEx encoding for the two strings, where 0 indicates no similarity and 4 indicates strong similarity or identical strings.
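As a usage sketch (with a hypothetical Name field), a Select node condition such as the following keeps records whose Name sounds similar to "Smith", because a difference score of 3 or 4 indicates strong phonetic similarity:
soundex_difference(Name, "Smith") >= 3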
| # SoundEx functions #
SoundEx is a method used to find strings when the sound is known but the precise spelling isn't known\.
Developed in 1918, the method searches out words with similar sounds based on phonetic assumptions about how certain letters are pronounced\. SoundEx can be used to search names in a database (for example, where spellings and pronunciations for similar names may vary)\. The basic SoundEx algorithm is documented in a number of sources and, despite known limitations (for example, leading letter combinations such as `ph` and `f` won't match even though they sound the same), is supported in some form by most databases\.
<!-- <table "summary="CLEM soundex functions" id="clem_function_ref_soundex__table_d42_vqz_ddb" class="defaultstyle" "> -->
CLEM soundex functions
Table 1\. CLEM soundex functions
| Function | Result | Description |
| -------------------------------------- | --------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `soundex(STRING)` | *Integer* | Returns the four\-character SoundEx code for the specified *STRING*\. |
| `soundex_difference(STRING1, STRING2)` | *Integer* | Returns an integer between 0 and 4 that indicates the number of characters that are the same in the SoundEx encoding for the two strings, where 0 indicates no similarity and 4 indicates strong similarity or identical strings\. |
<!-- </table "summary="CLEM soundex functions" id="clem_function_ref_soundex__table_d42_vqz_ddb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
2C0EBF0CCB497F41C14A5895EF97C01864BFC3D2 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_spatial.html?context=cdpaas&locale=en | Spatial functions (SPSS Modeler) | Spatial functions
Spatial functions can be used with geospatial data. For example, they allow you to calculate the distances between two points, the area of a polygon, and so on.
There can also be situations that require a merge of multiple geospatial data sets that are based on a spatial predicate (within, close to, and so on), which can be done through a merge condition.
Notes:
* These spatial functions don't apply to three-dimensional data. If you import three-dimensional data into a flow, only the first two dimensions are used by these functions. The z-axis values are ignored.
* Geospatial functions aren't supported.
CLEM spatial functions
Table 1. CLEM spatial functions
Function Result Description
close_to(SHAPE,SHAPE,NUM) Boolean Tests whether 2 shapes are within a certain DISTANCE of each other. If a projected coordinate system is used, DISTANCE is in meters. If no coordinate system is used, it is an arbitrary unit.
crosses(SHAPE,SHAPE) Boolean Tests whether 2 shapes cross each other. This function is suitable for 2 linestring shapes, or 1 linestring and 1 polygon.
overlap(SHAPE,SHAPE) Boolean Tests whether there is an intersection between 2 polygons and that the intersection is interior to both shapes.
within(SHAPE,SHAPE) Boolean Tests whether the entirety of SHAPE1 is contained within a POLYGON.
area(SHAPE) Real Returns the area of the specified POLYGON. If a projected system is used, the function returns meters squared. If no coordinate system is used, it is an arbitrary unit. The shape must be a POLYGON or a MULTIPOLYGON.
num_points(SHAPE,LIST) Integer Returns the number of points from a point field (MULTIPOINT) which are contained within the bounds of a POLYGON. SHAPE1 must be a POLYGON or a MULTIPOLYGON.
distance(SHAPE,SHAPE) Real Returns the distance between SHAPE1 and SHAPE2. If a projected coordinate system is used, the function returns meters. If no coordinate system is used, it is an arbitrary unit. SHAPE1 and SHAPE2 can be any geo measurement type.
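As a usage sketch (with hypothetical customer_location, store_location, and sales_region fields), you could flag customers located inside a sales territory, or compute how far each customer is from a store, with Derive expressions such as:
within(customer_location, sales_region)
distance(customer_location, store_location)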
| # Spatial functions #
Spatial functions can be used with geospatial data\. For example, they allow you to calculate the distances between two points, the area of a polygon, and so on\.
There can also be situations that require a merge of multiple geospatial data sets that are based on a spatial predicate (within, close to, and so on), which can be done through a merge condition\.
Notes:
<!-- <ul> -->
* These spatial functions don't apply to three\-dimensional data\. If you import three\-dimensional data into a flow, only the first two dimensions are used by these functions\. The z\-axis values are ignored\.
* Geospatial functions aren't supported\.
<!-- </ul> -->
<!-- <table "summary="CLEM spatial functions" class="defaultstyle" "> -->
CLEM spatial functions
Table 1\. CLEM spatial functions
| Function | Result | Description |
| --------------------------- | --------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `close_to(SHAPE,SHAPE,NUM)` | *Boolean* | Tests whether 2 shapes are within a certain DISTANCE of each other\. If a projected coordinate system is used, DISTANCE is in meters\. If no coordinate system is used, it is an arbitrary unit\. |
| `crosses(SHAPE,SHAPE)` | *Boolean* | Tests whether 2 shapes cross each other\. This function is suitable for 2 linestring shapes, or 1 linestring and 1 polygon\. |
| `overlap(SHAPE,SHAPE)` | *Boolean* | Tests whether there is an intersection between 2 polygons and that the intersection is interior to both shapes\. |
| `within(SHAPE,SHAPE)` | *Boolean* | Tests whether the entirety of SHAPE1 is contained within a POLYGON\. |
| `area(SHAPE)` | *Real* | Returns the area of the specified POLYGON\. If a projected system is used, the function returns meters squared\. If no coordinate system is used, it is an arbitrary unit\. The shape must be a POLYGON or a MULTIPOLYGON\. |
| `num_points(SHAPE,LIST)` | *Integer* | Returns the number of points from a point field (MULTIPOINT) which are contained within the bounds of a POLYGON\. SHAPE1 must be a POLYGON or a MULTIPOLYGON\. |
| `distance(SHAPE,SHAPE)` | *Real* | Returns the distance between SHAPE1 and SHAPE2\. If a projected coordinate system is used, the function returns meters\. If no coordinate system is used, it is an arbitrary unit\. SHAPE1 and SHAPE2 can be any geo measurement type\. |
<!-- </table "summary="CLEM spatial functions" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
4058D0B5222F1C34ABF1737A10DA705E27480606 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_specialfields.html?context=cdpaas&locale=en | Special fields (SPSS Modeler) | Special fields
Special functions are used to denote the specific fields under examination, or to generate a list of fields as input.
For example, when deriving multiple fields at once, you can use @FIELD to indicate that the derive action should be performed on each of the selected fields. Using the expression log(@FIELD) derives a new log field for each selected field.
CLEM special fields
Table 1. CLEM special fields
Function Result Description
@FIELD Any Performs an action on all fields specified in the expression context.
@TARGET Any When a CLEM expression is used in a user-defined analysis function, @TARGET represents the target field or "correct value" for the target/predicted pair being analyzed. This function is commonly used in an Analysis node.
@PREDICTED Any When a CLEM expression is used in a user-defined analysis function, @PREDICTED represents the predicted value for the target/predicted pair being analyzed. This function is commonly used in an Analysis node.
@PARTITION_FIELD Any Substitutes the name of the current partition field.
@TRAINING_PARTITION Any Returns the value of the current training partition. For example, to select training records using a Select node, use the CLEM expression: @PARTITION_FIELD = @TRAINING_PARTITION This ensures that the Select node will always work regardless of which values are used to represent each partition in the data.
@TESTING_PARTITION Any Returns the value of the current testing partition.
@VALIDATION_PARTITION Any Returns the value of the current validation partition.
@FIELDS_BETWEEN(start, end) Any Returns the list of field names between the specified start and end fields (inclusive) based on the natural (that is, insert) order of the fields in the data.
@FIELDS_MATCHING(pattern) Any Returns a list of field names matching a specified pattern. A question mark (?) can be included in the pattern to match exactly one character; an asterisk (*) matches zero or more characters. To match a literal question mark or asterisk (rather than using these as wildcards), a backslash (\) can be used as an escape character.<br><br>Note: This requires a string literal as an argument; it can't use a nested expression to generate the argument.
@MULTI_RESPONSE_SET Any Returns the list of fields in the named multiple response set.
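As a usage sketch (assuming a group of hypothetical numeric fields whose names share the prefix sales_), these functions can be combined with list functions such as sum_n to summarize a family of fields in a single Derive expression:
sum_n(@FIELDS_MATCHING("sales_*"))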
| # Special fields #
Special functions are used to denote the specific fields under examination, or to generate a list of fields as input\.
For example, when deriving multiple fields at once, you can use `@FIELD` to indicate that the derive action should be performed on each of the selected fields\. Using the expression `log(@FIELD)` derives a new log field for each selected field\.
<!-- <table "summary="CLEM special fields" id="clem_function_ref_specialfields__table_ffh_bl3_cdb" class="defaultstyle" "> -->
CLEM special fields
Table 1\. CLEM special fields
| Function | Result | Description |
| ----------------------------- | ------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `@FIELD` | *Any* | Performs an action on all fields specified in the expression context\. |
| `@TARGET` | *Any* | When a CLEM expression is used in a user\-defined analysis function, `@TARGET` represents the target field or "correct value" for the target/predicted pair being analyzed\. This function is commonly used in an Analysis node\. |
| `@PREDICTED` | *Any* | When a CLEM expression is used in a user\-defined analysis function, `@PREDICTED` represents the predicted value for the target/predicted pair being analyzed\. This function is commonly used in an Analysis node\. |
| `@PARTITION_FIELD` | *Any* | Substitutes the name of the current partition field\. |
| `@TRAINING_PARTITION` | *Any* | Returns the value of the current training partition\. For example, to select training records using a Select node, use the CLEM expression: `@PARTITION_FIELD = @TRAINING_PARTITION` This ensures that the Select node will always work regardless of which values are used to represent each partition in the data\. |
| `@TESTING_PARTITION` | *Any* | Returns the value of the current testing partition\. |
| `@VALIDATION_PARTITION` | *Any* | Returns the value of the current validation partition\. |
| `@FIELDS_BETWEEN(start, end)` | *Any* | Returns the list of field names between the specified start and end fields (inclusive) based on the natural (that is, insert) order of the fields in the data\. |
| `@FIELDS_MATCHING(pattern)` | *Any* | Returns a list of field names matching a specified pattern\. A question mark (`?`) can be included in the pattern to match exactly one character; an asterisk (`*`) matches zero or more characters\. To match a literal question mark or asterisk (rather than using these as wildcards), a backslash (`\`) can be used as an escape character\.<br><br>Note: This requires a string literal as an argument; it can't use a nested expression to generate the argument\. |
| `@MULTI_RESPONSE_SET` | *Any* | Returns the list of fields in the named multiple response set\. |
<!-- </table "summary="CLEM special fields" id="clem_function_ref_specialfields__table_ffh_bl3_cdb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
9A83A33ABB4C6A12A7457D3711C2511EB3982B2C | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_string.html?context=cdpaas&locale=en | String functions (SPSS Modeler) | String functions
With CLEM, you can run operations to compare strings, create strings, or access characters.
In CLEM, a string is any sequence of characters between matching double quotation marks ("string quotes"). Characters (CHAR) can be any single alphanumeric character. They're declared in CLEM expressions using single back quotes in the form of `<character>`, such as `z`, `A`, or `2`. Referencing a character at an out-of-bounds or negative index in a string results in undefined behavior.
Note: Comparisons between strings that do and do not use SQL pushback may generate different results where trailing spaces exist.
CLEM string functions
Table 1. CLEM string functions
Function Result Description
allbutfirst(N, STRING) String Returns a string, which is STRING with the first N characters removed.
allbutlast(N, STRING) String Returns a string, which is STRING with the last N characters removed.
alphabefore(STRING1, STRING2) Boolean Used to check the alphabetical ordering of strings. Returns true if STRING1 precedes STRING2.
count_substring(STRING, SUBSTRING) Integer Returns the number of times the specified substring occurs within the string. For example, count_substring("foooo.txt", "oo") returns 3.
endstring(LENGTH, STRING) String Extracts the last N characters from the specified string. If the string length is less than or equal to the specified length, then it is unchanged.
hasendstring(STRING, SUBSTRING) Integer This function is the same as isendstring(SUBSTRING, STRING).
hasmidstring(STRING, SUBSTRING) Integer This function is the same as ismidstring(SUBSTRING, STRING) (embedded substring).
hasstartstring(STRING, SUBSTRING) Integer This function is the same as isstartstring(SUBSTRING, STRING).
hassubstring(STRING, N, SUBSTRING) Integer This function is the same as issubstring(SUBSTRING, N, STRING), where N defaults to 1.
hassubstring(STRING, SUBSTRING) Integer This function is the same as issubstring(SUBSTRING, 1, STRING), where N defaults to 1.
isalphacode(CHAR) Boolean Returns a value of true if CHAR is a character in the specified string (often a field name) whose character code is a letter. Otherwise, this function returns a value of 0. For example, isalphacode(produce_num(1)).
isendstring(SUBSTRING, STRING) Integer If the string STRING ends with the substring SUBSTRING, then this function returns the integer subscript of SUBSTRING in STRING. Otherwise, this function returns a value of 0.
islowercode(CHAR) Boolean Returns a value of true if CHAR is a lowercase letter character for the specified string (often a field name). Otherwise, this function returns a value of 0. For example, both islowercode(`a`) and islowercode(country_name(2)) are valid expressions.
ismidstring(SUBSTRING, STRING) Integer If SUBSTRING is a substring of STRING but does not start on the first character of STRING or end on the last, then this function returns the subscript at which the substring starts. Otherwise, this function returns a value of 0.
isnumbercode(CHAR) Boolean Returns a value of true if CHAR for the specified string (often a field name) is a character whose character code is a digit. Otherwise, this function returns a value of 0. For example, isnumbercode(product_id(2)).
isstartstring(SUBSTRING, STRING) Integer If the string STRING starts with the substring SUBSTRING, then this function returns the subscript 1. Otherwise, this function returns a value of 0.
issubstring(SUBSTRING, N, STRING) Integer Searches the string STRING, starting from its Nth character, for a substring equal to the string SUBSTRING. If found, this function returns the integer subscript at which the matching substring begins. Otherwise, this function returns a value of 0. If N is not given, this function defaults to 1.
issubstring(SUBSTRING, STRING) Integer Searches the string STRING. If found, this function returns the integer subscript at which the matching substring begins. Otherwise, this function returns a value of 0.
issubstring_count(SUBSTRING, N, STRING) Integer Returns the index of the Nth occurrence of SUBSTRING within the specified STRING. If there are fewer than N occurrences of SUBSTRING, 0 is returned.
issubstring_lim(SUBSTRING, N, STARTLIM, ENDLIM, STRING) Integer This function is the same as issubstring, but the match is constrained to start on STARTLIM and to end on ENDLIM. The STARTLIM or ENDLIM constraints may be disabled by supplying a value of false for either argument—for example, issubstring_lim(SUBSTRING, N, false, false, STRING) is the same as issubstring.
isuppercode(CHAR) Boolean Returns a value of true if CHAR is an uppercase letter character. Otherwise, this function returns a value of 0. For example, both isuppercode(`A`) and isuppercode(country_name(2)) are valid expressions.
last(STRING) String Returns the last character CHAR of STRING (which must be at least one character long).
length(STRING) Integer Returns the length of the string STRING (that is, the number of characters in it).
locchar(CHAR, N, STRING) Integer Used to identify the location of characters in symbolic fields. The function searches the string STRING for the character CHAR, starting the search at the Nth character of STRING. This function returns a value indicating the location (starting at N) where the character is found. If the character is not found, this function returns a value of 0. If the function has an invalid offset (N) (for example, an offset that is beyond the length of the string), this function returns $null$. <br>For example, locchar(`n`, 2, web_page) searches the field called web_page for the `n` character beginning at the second character in the field value. <br>Be sure to use single back quotes to encapsulate the specified character.
locchar_back(CHAR, N, STRING) Integer Similar to locchar, except that the search is performed backward starting from the Nth character. For example, locchar_back(`n`, 9, web_page) searches the field web_page starting from the ninth character and moving backward toward the start of the string. If the function has an invalid offset (for example, an offset that is beyond the length of the string), this function returns $null$. Ideally, you should use locchar_back in conjunction with the function length(<field>) to dynamically use the length of the current value of the field. For example, locchar_back(`n`, (length(web_page)), web_page).
lowertoupper(CHAR) or lowertoupper(STRING) CHAR or String Input can be either a string or character, which is used in this function to return a new item of the same type, with any lowercase characters converted to their uppercase equivalents. For example, lowertoupper(`a`), lowertoupper(“My string”), and lowertoupper(field_name(2)) are all valid expressions.
matches Boolean Returns true if a string matches a specified pattern. The pattern must be a string literal; it can't be a field name containing a pattern. You can include a question mark (?) in the pattern to match exactly one character; an asterisk (*) matches zero or more characters. To match a literal question mark or asterisk (rather than using these as wildcards), use a backslash (\) as an escape character.
replace(SUBSTRING, NEWSUBSTRING, STRING) String Within the specified STRING, replace all instances of SUBSTRING with NEWSUBSTRING.
replicate(COUNT, STRING) String Returns a string that consists of the original string copied the specified number of times.
stripchar(CHAR,STRING) String Enables you to remove specified characters from a string or field. You can use this function, for example, to remove extra symbols, such as currency notations, from data to achieve a simple number or name. For example, using the syntax stripchar(`$`, 'Cost') returns a new field with the dollar sign removed from all values. <br>Be sure to use single back quotes to encapsulate the specified character.
skipchar(CHAR, N, STRING) Integer Searches the string STRING for any character other than CHAR, starting at the Nth character. This function returns an integer substring indicating the point at which one is found or 0 if every character from the Nth onward is a CHAR. If the function has an invalid offset (for example, an offset that is beyond the length of the string), this function returns $null$. <br>locchar is often used in conjunction with the skipchar functions to determine the value of N (the point at which to start searching the string). For example, skipchar(`s`, (locchar(`s`, 1, "MyString")), "MyString").
skipchar_back(CHAR, N, STRING) Integer Similar to skipchar, except that the search is performed backward, starting from the Nth character.
startstring(N, STRING) String Extracts the first N characters from the specified string. If the string length is less than or equal to the specified length, then it is unchanged.
strmember(CHAR, STRING) Integer Equivalent to locchar(CHAR, 1, STRING). It returns an integer substring indicating the point at which CHAR first occurs, or 0. If the function has an invalid offset (for example, an offset that is beyond the length of the string), this function returns $null$.
subscrs(N, STRING) CHAR Returns the Nth character CHAR of the input string STRING. This function can also be written in a shorthand form as STRING(N). For example, lowertoupper(“name”(1)) is a valid expression.
substring(N, LEN, STRING) String Returns a string SUBSTRING, which consists of the LEN characters of the string STRING, starting from the character at subscript N.
substring_between(N1, N2, STRING) String Returns the substring of STRING, which begins at subscript N1 and ends at subscript N2.
textsplit(STRING, N, CHAR) String textsplit(STRING,N,CHAR) returns the substring between the Nth-1 and Nth occurrence of CHAR. If N is 1, then it will return the substring from the beginning of STRING up to but not including CHAR. If N-1 is the last occurrence of CHAR, then it will return the substring from the Nth-1 occurrence of CHAR to the end of the string.
trim(STRING) String Removes leading and trailing white space characters from the specified string.
trimstart(STRING) String Removes leading white space characters from the specified string.
trimend(STRING) String Removes trailing white space characters from the specified string.
unicode_char(NUM) CHAR Input must be decimal, not hexadecimal values. Returns the character with Unicode value NUM.
unicode_value(CHAR) NUM Returns the Unicode value of CHAR.
uppertolower(CHAR) or uppertolower(STRING) CHAR or String Input can be either a string or character and is used in this function to return a new item of the same type with any uppercase characters converted to their lowercase equivalents. <br>Remember to specify strings with double quotes and characters with single back quotes. Simple field names should be specified without quotes.
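As a usage sketch (with a hypothetical product_code field whose values contain a hyphen, such as ACME-1234), several of these functions can be combined; for example, the following Derive expression returns the text that precedes the hyphen:
substring(1, locchar(`-`, 1, product_code) - 1, product_code)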
| # String functions #
With CLEM, you can run operations to compare strings, create strings, or access characters\.
In CLEM, a string is any sequence of characters between matching double quotation marks (`"string quotes"`)\. Characters (`CHAR`) can be any single alphanumeric character\. They're declared in CLEM expressions using single back quotes in the form of `` `<character>` ``, such as `` `z` ``, `` `A` ``, or `` `2` ``\. Referencing a character at an out\-of\-bounds or negative index in a string results in undefined behavior\.
Note: Comparisons between strings that do and do not use SQL pushback may generate different results where trailing spaces exist\.
<!-- <table "summary="CLEM string functions" id="clem_function_ref_string__table_mhf_xqz_ddb" class="defaultstyle" "> -->
CLEM string functions
Table 1\. CLEM string functions
| Function | Result | Description |
| --------------------------------------------------------- | ------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `allbutfirst(N, STRING)` | *String* | Returns a string, which is `STRING` with the first `N` characters removed\. |
| `allbutlast(N, STRING)` | *String* | Returns a string, which is `STRING` with the last `N` characters removed\. |
| `alphabefore(STRING1, STRING2)` | *Boolean* | Used to check the alphabetical ordering of strings\. Returns true if `STRING1` precedes `STRING2`\. |
| `count_substring(STRING, SUBSTRING)` | *Integer* | Returns the number of times the specified substring occurs within the string\. For example, `count_substring("foooo.txt", "oo")` returns `3`\. |
| `endstring(LENGTH, STRING)` | *String* | Extracts the last `N` characters from the specified string\. If the string length is less than or equal to the specified length, then it is unchanged\. |
| `hasendstring(STRING, SUBSTRING)` | *Integer* | This function is the same as `isendstring(SUBSTRING, STRING)`\. |
| `hasmidstring(STRING, SUBSTRING)` | *Integer* | This function is the same as `ismidstring(SUBSTRING, STRING)` (embedded substring)\. |
| `hasstartstring(STRING, SUBSTRING)` | *Integer* | This function is the same as `isstartstring(SUBSTRING, STRING)`\. |
| `hassubstring(STRING, N, SUBSTRING)` | *Integer* | This function is the same as `issubstring(SUBSTRING, N, STRING)`, where `N` defaults to `1`\. |
| `hassubstring(STRING, SUBSTRING)` | *Integer* | This function is the same as `issubstring(SUBSTRING, 1, STRING)`, where `N` defaults to `1`\. |
| `isalphacode(CHAR)` | *Boolean* | Returns a value of true if `CHAR` is a character in the specified string (often a field name) whose character code is a letter\. Otherwise, this function returns a value of `0`\. For example, `isalphacode(produce_num(1))`\. |
| `isendstring(SUBSTRING, STRING)` | *Integer* | If the string `STRING` ends with the substring `SUBSTRING`, then this function returns the integer subscript of `SUBSTRING` in `STRING`\. Otherwise, this function returns a value of `0`\. |
| `islowercode(CHAR)` | *Boolean* | Returns a value of `true` if `CHAR` is a lowercase letter character for the specified string (often a field name)\. Otherwise, this function returns a value of `0`\. For example, both ``islowercode(`a`)`` and `islowercode(country_name(2))` are valid expressions\. |
| `ismidstring(SUBSTRING, STRING)` | *Integer* | If `SUBSTRING` is a substring of `STRING` but does not start on the first character of `STRING` or end on the last, then this function returns the subscript at which the substring starts\. Otherwise, this function returns a value of `0`\. |
| `isnumbercode(CHAR)` | *Boolean* | Returns a value of true if `CHAR` for the specified string (often a field name) is a character whose character code is a digit\. Otherwise, this function returns a value of `0`\. For example, `isnumbercode(product_id(2))`\. |
| `isstartstring(SUBSTRING, STRING)` | *Integer* | If the string `STRING` starts with the substring `SUBSTRING`, then this function returns the subscript `1`\. Otherwise, this function returns a value of `0`\. |
| `issubstring(SUBSTRING, N, STRING)` | *Integer* | Searches the string `STRING`, starting from its `Nth` character, for a substring equal to the string `SUBSTRING`\. If found, this function returns the integer subscript at which the matching substring begins\. Otherwise, this function returns a value of `0`\. If `N` is not given, this function defaults to `1`\. |
| `issubstring(SUBSTRING, STRING)` | *Integer* | Searches the string `STRING`\. If found, this function returns the integer subscript at which the matching substring begins\. Otherwise, this function returns a value of `0`\. |
| `issubstring_count(SUBSTRING, N, STRING)` | *Integer* | Returns the index of the `Nth` occurrence of `SUBSTRING` within the specified `STRING`\. If there are fewer than `N` occurrences of `SUBSTRING`, `0` is returned\. |
| `issubstring_lim(SUBSTRING, N, STARTLIM, ENDLIM, STRING)` | *Integer* | This function is the same as `issubstring`, but the match is constrained to start on `STARTLIM` and to end on `ENDLIM`\. The `STARTLIM` or `ENDLIM` constraints may be disabled by supplying a value of false for either argument—for example, `issubstring_lim(SUBSTRING, N, false, false, STRING)` is the same as `issubstring`\. |
| `isuppercode(CHAR)` | *Boolean* | Returns a value of true if `CHAR` is an uppercase letter character\. Otherwise, this function returns a value of `0`\. For example, both ``isuppercode(`A`)`` and `isuppercode(country_name(2))` are valid expressions\. |
| `last(STRING)` | *String* | Returns the last character `CHAR` of `STRING` (which must be at least one character long)\. |
| `length(STRING)` | *Integer* | Returns the length of the string `STRING` (that is, the number of characters in it)\. |
| `locchar(CHAR, N, STRING)` | *Integer* | Used to identify the location of characters in symbolic fields\. The function searches the string `STRING` for the character `CHAR`, starting the search at the `Nth` character of `STRING`\. This function returns a value indicating the location (starting at `N`) where the character is found\. If the character is not found, this function returns a value of 0\. If the function has an invalid offset `(N)` (for example, an offset that is beyond the length of the string), this function returns `$null$`\. <br>For example, ``locchar(`n`, 2, web_page)`` searches the field called `web_page` for the `` `n` `` character beginning at the second character in the field value\. <br>Be sure to use single back quotes to encapsulate the specified character\. |
| `locchar_back(CHAR, N, STRING)` | *Integer* | Similar to `locchar`, except that the search is performed backward starting from the `Nth` character\. For example, ``locchar_back(`n`, 9, web_page)`` searches the field `web_page` starting from the ninth character and moving backward toward the start of the string\. If the function has an invalid offset (for example, an offset that is beyond the length of the string), this function returns `$null$`\. Ideally, you should use `locchar_back` in conjunction with the function `length(<field>)` to dynamically use the length of the current value of the field\. For example, ``locchar_back(`n`, (length(web_page)), web_page)``\. |
| `lowertoupper(CHAR)``lowertoupper (STRING)` | *CHAR* or *String* | Input can be either a string or character, which is used in this function to return a new item of the same type, with any lowercase characters converted to their uppercase equivalents\. For example, ``lowertoupper(`a`)``, `lowertoupper(“My string”)`, and `lowertoupper(field_name(2))` are all valid expressions\. |
| `matches` | *Boolean* | Returns `true` if a string matches a specified pattern\. The pattern must be a string literal; it can't be a field name containing a pattern\. You can include a question mark (`?`) in the pattern to match exactly one character; an asterisk (`*`) matches zero or more characters\. To match a literal question mark or asterisk (rather than using these as wildcards), use a backslash (`\`) as an escape character\. |
| `replace(SUBSTRING, NEWSUBSTRING, STRING)` | *String* | Within the specified `STRING`, replace all instances of `SUBSTRING` with `NEWSUBSTRING`\. |
| `replicate(COUNT, STRING)` | *String* | Returns a string that consists of the original string copied the specified number of times\. |
| `stripchar(CHAR,STRING)` | *String* | Enables you to remove specified characters from a string or field\. You can use this function, for example, to remove extra symbols, such as currency notations, from data to achieve a simple number or name\. For example, using the syntax ``stripchar(`$`, 'Cost')`` returns a new field with the dollar sign removed from all values\. <br>Be sure to use single back quotes to encapsulate the specified character\. |
| `skipchar(CHAR, N, STRING)` | *Integer* | Searches the string `STRING` for any character other than `CHAR`, starting at the `Nth` character\. This function returns an integer substring indicating the point at which one is found or `0` if every character from the `Nth` onward is a `CHAR`\. If the function has an invalid offset (for example, an offset that is beyond the length of the string), this function returns `$null$`\. <br>`locchar` is often used in conjunction with the `skipchar` functions to determine the value of `N` (the point at which to start searching the string)\. For example, ``skipchar(`s`, (locchar(`s`, 1, "MyString")), "MyString")``\. |
| `skipchar_back(CHAR, N, STRING)` | *Integer* | Similar to `skipchar`, except that the search is performed backward, starting from the `Nth` character\. |
| `startstring(N, STRING)` | *String* | Extracts the first `N` characters from the specified string\. If the string length is less than or equal to the specified length, then it is unchanged\. |
| `strmember(CHAR, STRING)` | *Integer* | Equivalent to `locchar(CHAR, 1, STRING)`\. It returns an integer substring indicating the point at which `CHAR` first occurs, or `0`\. If the function has an invalid offset (for example, an offset that is beyond the length of the string), this function returns `$null$`\. |
| `subscrs(N, STRING)` | *CHAR* | Returns the `Nth` character `CHAR` of the input string `STRING`\. This function can also be written in a shorthand form as `STRING(N)`\. For example, `lowertoupper(“name”(1))` is a valid expression\. |
| `substring(N, LEN, STRING)` | *String* | Returns a string `SUBSTRING`, which consists of the `LEN` characters of the string `STRING`, starting from the character at subscript *N*\. |
| `substring_between(N1, N2, STRING)` | *String* | Returns the substring of `STRING`, which begins at subscript `N1` and ends at subscript `N2`\. |
| `textsplit(STRING, N, CHAR)` | *String* | `textsplit(STRING,N,CHAR)` returns the substring between the `Nth-1` and `Nth` occurrence of `CHAR`\. If `N` is `1`, then it will return the substring from the beginning of `STRING` up to but not including `CHAR`\. If `N-1` is the last occurrence of `CHAR`, then it will return the substring from the `Nth-1` occurrence of `CHAR` to the end of the string\. |
| `trim(STRING)` | *String* | Removes leading and trailing white space characters from the specified string\. |
| `trimstart(STRING)` | *String* | Removes leading white space characters from the specified string\. |
| `trimend(STRING)` | *String* | Removes trailing white space characters from the specified string\. |
| `unicode_char(NUM)` | *CHAR* | Input must be decimal, not hexadecimal values\. Returns the character with Unicode value `NUM`\. |
| `unicode_value(CHAR)` | *NUM* | Returns the Unicode value of `CHAR`\. |
| `uppertolower(CHAR)``uppertolower (STRING)` | *CHAR* or *String* | Input can be either a string or character and is used in this function to return a new item of the same type with any uppercase characters converted to their lowercase equivalents\. <br>Remember to specify strings with double quotes and characters with single back quotes\. Simple field names should be specified without quotes\. |
<!-- </table "summary="CLEM string functions" id="clem_function_ref_string__table_mhf_xqz_ddb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
2904E26946523BB3E78975F68A822F5F2A32B9F5 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_trigonometric.html?context=cdpaas&locale=en | Trigonometric functions (SPSS Modeler) | Trigonometric functions
All of the functions in this section either take an angle as an argument or return one as a result.
CLEM trigonometric functions
Table 1. CLEM trigonometric functions
Function Result Description
arccos(NUM) Real Computes the arccosine of the specified angle.
arccosh(NUM) Real Computes the hyperbolic arccosine of the specified angle.
arcsin(NUM) Real Computes the arcsine of the specified angle.
arcsinh(NUM) Real Computes the hyperbolic arcsine of the specified angle.
arctan(NUM) Real Computes the arctangent of the specified angle.
arctan2(NUM_Y, NUM_X) Real Computes the arctangent of NUM_Y / NUM_X and uses the signs of the two numbers to derive quadrant information. The result is a real in the range -pi < ANGLE <= pi (radians), or -180 < ANGLE <= 180 (degrees).
arctanh(NUM) Real Computes the hyperbolic arctangent of the specified angle.
cos(NUM) Real Computes the cosine of the specified angle.
cosh(NUM) Real Computes the hyperbolic cosine of the specified angle.
pi Real This constant is the best real approximation to pi.
sin(NUM) Real Computes the sine of the specified angle.
sinh(NUM) Real Computes the hyperbolic sine of the specified angle.
tan(NUM) Real Computes the tangent of the specified angle.
tanh(NUM) Real Computes the hyperbolic tangent of the specified angle.
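As a usage sketch (with hypothetical x and y coordinate fields, and assuming angles are measured in radians), you could derive the angle of each point relative to the origin and express it in degrees by combining arctan2 with the pi constant:
arctan2(y, x) * 180 / pi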
| # Trigonometric functions #
All of the functions in this section either take an angle as an argument or return one as a result\.
<!-- <table "summary="CLEM trigonometric functions" id="clem_function_ref_trigonometric__table_ih3_dl3_cdb" class="defaultstyle" "> -->
CLEM trigonometric functions
Table 1\. CLEM trigonometric functions
| Function | Result | Description |
| ----------------------- | ------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `arccos(NUM)` | *Real* | Computes the arccosine of the specified angle\. |
| `arccosh(NUM)` | *Real* | Computes the hyperbolic arccosine of the specified angle\. |
| `arcsin(NUM)` | *Real* | Computes the arcsine of the specified angle\. |
| `arcsinh(NUM)` | *Real* | Computes the hyperbolic arcsine of the specified angle\. |
| `arctan(NUM)` | *Real* | Computes the arctangent of the specified angle\. |
| `arctan2(NUM_Y, NUM_X)` | *Real* | Computes the arctangent of `NUM_Y / NUM_X` and uses the signs of the two numbers to derive quadrant information\. The result is a real in the range `-pi < ANGLE <= pi` (radians), or `-180 < ANGLE <= 180` (degrees)\. |
| `arctanh(NUM)` | *Real* | Computes the hyperbolic arctangent of the specified angle\. |
| `cos(NUM)` | *Real* | Computes the cosine of the specified angle\. |
| `cosh(NUM)` | *Real* | Computes the hyperbolic cosine of the specified angle\. |
| `pi` | *Real* | This constant is the best real approximation to pi\. |
| `sin(NUM)` | *Real* | Computes the sine of the specified angle\. |
| `sinh(NUM)` | *Real* | Computes the hyperbolic sine of the specified angle\. |
| `tan(NUM)` | *Real* | Computes the tangent of the specified angle\. |
| `tanh(NUM)` | *Real* | Computes the hyperbolic tangent of the specified angle\. |
<!-- </table "summary="CLEM trigonometric functions" id="clem_function_ref_trigonometric__table_ih3_dl3_cdb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
621083EB36CF3896B77D22EDBCC23FD2716F6B4A | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_functions_convertingdates.html?context=cdpaas&locale=en | Converting date and time values (SPSS Modeler) | Converting date and time values
Note that conversion functions (and any other functions that require a specific type of input, such as a date or time value) depend on the current formats specified in the flow properties.
For example, if you have a field named DATE that's stored as a string with values Jan 2021, Feb 2021, and so on, you could convert it to date storage as follows:
to_date(DATE)
For this conversion to work, select the matching date format MON YYYY as the default date format for the flow.
Dates stored as numbers. Note that DATE in the previous example is the name of a field, while to_date is a CLEM function. If you have dates stored as numbers, you can convert them using the datetime_date function, where the number is interpreted as a number of seconds since the base date (or epoch).
datetime_date(DATE)
By converting a date to a number of seconds (and back), you can perform calculations such as computing the current date plus or minus a fixed number of days. For example:
datetime_date((date_in_days(DATE)-7)*60*60*24)
| # Converting date and time values #
Note that conversion functions (and any other functions that require a specific type of input, such as a date or time value) depend on the current formats specified in the flow properties\.
For example, if you have a field named *DATE* that's stored as a string with values *Jan 2021*, *Feb 2021*, and so on, you could convert it to date storage as follows:
to_date(DATE)
For this conversion to work, select the matching date format MON YYYY as the default date format for the flow\.
Dates stored as numbers\. Note that `DATE` in the previous example is the name of a field, while `to_date` is a CLEM function\. If you have dates stored as numbers, you can convert them using the `datetime_date` function, where the number is interpreted as a number of seconds since the base date (or epoch)\.
datetime_date(DATE)
By converting a date to a number of seconds (and back), you can perform calculations such as computing the current date plus or minus a fixed number of days\. For example:
datetime_date((date_in_days(DATE)-7)*60*60*24)
<!-- </article "role="article" "> -->
|
ADBEF9D5635EB271A8BD78B23064DCBA1A1915A6 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_language_reference.html?context=cdpaas&locale=en | CLEM language reference (SPSS Modeler) | CLEM (legacy) language reference
This section describes the Control Language for Expression Manipulation (CLEM), which is a powerful tool used to analyze and manipulate the data used in SPSS Modeler flows.
You can use CLEM within nodes to perform tasks ranging from evaluating conditions or deriving values to inserting data into reports. CLEM expressions consist of values, field names, operators, and functions. Using the correct syntax, you can create a wide variety of powerful data operations.
Figure 1. Expression Builder

| # CLEM (legacy) language reference #
This section describes the Control Language for Expression Manipulation (CLEM), which is a powerful tool used to analyze and manipulate the data used in SPSS Modeler flows\.
You can use CLEM within nodes to perform tasks ranging from evaluating conditions or deriving values to inserting data into reports\. CLEM expressions consist of values, field names, operators, and functions\. Using the correct syntax, you can create a wide variety of powerful data operations\.
Figure 1\. Expression Builder

<!-- </article "role="article" "> -->
|
88467827811ED045A648A3C215F5B91D43EB49CD | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_overview_multiple_response_data.html?context=cdpaas&locale=en | Working with multiple-response data (SPSS Modeler) | Working with multiple-response data
You can analyze multiple-response data using a number of comparison functions.
Available comparison functions include:
* value_at
* first_index / last_index
* first_non_null / last_non_null
* first_non_null_index / last_non_null_index
* min_index / max_index
For example, suppose a multiple-response question asked for the first, second, and third most important reasons for deciding on a particular purchase (for example, price, personal recommendation, review, local supplier, other). In this case, you might determine the importance of price by deriving the index of the field in which it was first included:
first_index("price", [Reason1 Reason2 Reason3])
Similarly, suppose you asked customers to rank three cars in order of likelihood to purchase and coded the responses in three separate fields, as follows:
Car ranking example
Table 1. Car ranking example
customer id car1 car2 car3
101 1 3 2
102 3 2 1
103 2 3 1
In this case, you could determine the index of the field for the car they like most (ranked #1, or the lowest rank) using the min_index function:
min_index(['car1' 'car2' 'car3'])
See [Comparison functions](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_comparison.html#clem_function_ref_comparison) for more information.
| # Working with multiple\-response data #
You can analyze multiple\-response data using a number of comparison functions\.
Available comparison functions include:
<!-- <ul> -->
* `value_at`
* `first_index / last_index`
* `first_non_null / last_non_null`
* `first_non_null_index / last_non_null_index`
* `min_index / max_index`
<!-- </ul> -->
For example, suppose a multiple\-response question asked for the first, second, and third most important reasons for deciding on a particular purchase (for example, price, personal recommendation, review, local supplier, other)\. In this case, you might determine the importance of price by deriving the index of the field in which it was first included:
first_index("price", [Reason1 Reason2 Reason3])
Similarly, suppose you asked customers to rank three cars in order of likelihood to purchase and coded the responses in three separate fields, as follows:
<!-- <table "summary="Car ranking example" id="clem_overview_multiple_response_data__table_w55_brz_ddb" class="defaultstyle" "> -->
Car ranking example
Table 1\. Car ranking example
| customer id | car1 | car2 | car3 |
| ----------- | ---- | ---- | ---- |
| 101 | 1 | 3 | 2 |
| 102 | 3 | 2 | 1 |
| 103 | 2 | 3 | 1 |
<!-- </table "summary="Car ranking example" id="clem_overview_multiple_response_data__table_w55_brz_ddb" class="defaultstyle" "> -->
In this case, you could determine the index of the field for the car they like most (ranked \#1, or the lowest rank) using the `min_index` function:
min_index(['car1' 'car2' 'car3'])
See [Comparison functions](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_comparison.html#clem_function_ref_comparison) for more information\.
<!-- </article "role="article" "> -->
|
BC314650433831859C400BFFEFE5F919ED8735EA | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_overview_numbers.html?context=cdpaas&locale=en | Working with numbers (SPSS Modeler) | Working with numbers
Numerous standard operations on numeric values are available in SPSS Modeler.
* Calculating the sine of the specified angle—sin(NUM)
* Calculating the natural log of numeric fields—log(NUM)
* Calculating the sum of two numbers—NUM1 + NUM2
See [Numeric functions](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_numeric.html#clem_function_ref_numeric) for more information.
| # Working with numbers #
Numerous standard operations on numeric values are available in SPSS Modeler\.
<!-- <ul> -->
* Calculating the sine of the specified angle—`sin(NUM)`
* Calculating the natural log of numeric fields—`log(NUM)`
* Calculating the sum of two numbers—`NUM1` \+ `NUM2`
<!-- </ul> -->
See [Numeric functions](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_numeric.html#clem_function_ref_numeric) for more information\.
<!-- </article "role="article" "> -->
|
595BB1738027C777C1EB5A69631587923690ABC4 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_overview_times_and_dates.html?context=cdpaas&locale=en | Working with strings (SPSS Modeler) | Working with strings
There are a number of operations available for strings.
* Converting a string to uppercase or lowercase—uppertolower(CHAR).
* Removing specified characters, such as ID_ or $ , from a string variable—stripchar(CHAR,STRING).
* Determining the length (number of characters) for a string variable—length(STRING).
* Checking the alphabetical ordering of string values—alphabefore(STRING1, STRING2).
* Removing leading or trailing white space from values—trim(STRING), trim_start(STRING), or trimend(STRING).
* Extract the first or last n characters from a string—startstring(LENGTH, STRING) or endstring(LENGTH, STRING). For example, suppose you have a field named item that combines a product name with a four-digit ID code (ACME CAMERA-D109). To create a new field that contains only the four-digit code, specify the following formula in a Derive node:
endstring(4, item)
* Matching a specific pattern—STRING matches PATTERN. For example, to select persons with "market" anywhere in their job title, you could specify the following in a Select node:
job_title matches "*market*"
* Replacing all instances of a substring within a string—replace(SUBSTRING, NEWSUBSTRING, STRING). For example, to replace all instances of an unsupported character, such as a vertical pipe ( | ), with a semicolon prior to text mining, use the replace function in a Filler node. Under Fill in fields in the node properties, select all fields where the character may occur. For the Replace condition, select Always, and specify the following condition under Replace with.
replace('|',';',@FIELD)
* Deriving a flag field based on the presence of a specific substring. For example, you could use a string function in a Derive node to generate a separate flag field for each response with an expression such as:
hassubstring(museums,"museum_of_design")
See [String functions](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_string.html#clem_function_ref_string) for more information.
| # Working with strings #
There are a number of operations available for strings\.
<!-- <ul> -->
* Converting a string to uppercase or lowercase—`uppertolower(CHAR)`\.
* Removing specified characters, such as `` `ID_` `` or `` `$` ``, from a string variable—`stripchar(CHAR,STRING)`\.
* Determining the length (number of characters) for a string variable—`length(STRING).`
* Checking the alphabetical ordering of string values—`alphabefore(STRING1, STRING2)`\.
* Removing leading or trailing white space from values—`trim(STRING)`, `trim_start(STRING)`, or `trimend(STRING)`\.
* Extract the first or last *n* characters from a string—`startstring(LENGTH, STRING)` or `endstring(LENGTH, STRING)`\. For example, suppose you have a field named *item* that combines a product name with a four\-digit ID code (`ACME CAMERA-D109`)\. To create a new field that contains only the four\-digit code, specify the following formula in a Derive node:
endstring(4, item)
* Matching a specific pattern—`STRING matches PATTERN`\. For example, to select persons with "market" anywhere in their job title, you could specify the following in a Select node:
job_title matches "*market*"
* Replacing all instances of a substring within a string—`replace(SUBSTRING, NEWSUBSTRING, STRING)`\. For example, to replace all instances of an unsupported character, such as a vertical pipe ( `|` ), with a semicolon prior to text mining, use the `replace` function in a Filler node\. Under Fill in fields in the node properties, select all fields where the character may occur\. For the Replace condition, select Always, and specify the following condition under Replace with\.
replace('|',';',@FIELD)
* Deriving a flag field based on the presence of a specific substring\. For example, you could use a string function in a Derive node to generate a separate flag field for each response with an expression such as:
<!-- </ul> -->
hassubstring(museums,"museum_of_design")
See [String functions](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_string.html#clem_function_ref_string) for more information\.
<!-- </article "role="article" "> -->
|
D1FEF8C7F5BE28316CAA952CCC76281E6F3FE12F | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_summarizemultiplefields.html?context=cdpaas&locale=en | Summarizing multiple fields (SPSS Modeler) | Summarizing multiple fields
The CLEM language includes a number of functions that return summary statistics across multiple fields.
These functions may be particularly useful in analyzing survey data, where multiple responses to a question may be stored in multiple fields. See [Working with multiple-response data](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_overview_multiple_response_data.html#clem_overview_multiple_response_data) for more information.
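As a brief sketch (assuming three hypothetical fields named card1_spend, card2_spend, and card3_spend), you could derive the total and the largest of the three values with expressions such as:
sum_n([card1_spend card2_spend card3_spend])
max_n([card1_spend card2_spend card3_spend])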
| # Summarizing multiple fields #
The CLEM language includes a number of functions that return summary statistics across multiple fields\.
These functions may be particularly useful in analyzing survey data, where multiple responses to a question may be stored in multiple fields\. See [Working with multiple\-response data](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_overview_multiple_response_data.html#clem_overview_multiple_response_data) for more information\.
<!-- </article "role="article" "> -->
|
DAD2EDE59535330241F2FEBDF9BF99E21DEB4393 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_times_and_dates_uses.html?context=cdpaas&locale=en | Working with times and dates (SPSS Modeler) | Working with times and dates
Time and date formats may vary depending on your data source and locale. The formats of date and time are specific to each flow and are set in the flow properties.
The following examples are commonly used functions for working with date/time fields.
| # Working with times and dates #
Time and date formats may vary depending on your data source and locale\. The formats of date and time are specific to each flow and are set in the flow properties\.
The following examples are commonly used functions for working with date/time fields\.
<!-- </article "role="article" "> -->
|
0F686BF5943844896A5385E01D440548081D2688 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clemoverview_blanksnulls.html?context=cdpaas&locale=en | Handling blanks and missing values (SPSS Modeler) | Handling blanks and missing values
Replacing blanks or missing values is a common data preparation task for data miners. CLEM provides you with a number of tools to automate blank handling.
The Filler node is the most common place to work with blanks; however, the following functions can be used in any node that accepts CLEM expressions:
* @BLANK(FIELD) can be used to determine records whose values are blank for a particular field, such as Age.
* @NULL(FIELD) can be used to determine records whose values are system-missing for the specified field(s). In SPSS Modeler, system-missing values are displayed as $null$ values.
See [Functions handling blanks and null values](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_blanksnulls.html#clem_function_ref_blanksnulls) for more information.
| # Handling blanks and missing values #
Replacing blanks or missing values is a common data preparation task for data miners\. CLEM provides you with a number of tools to automate blank handling\.
The Filler node is the most common place to work with blanks; however, the following functions can be used in any node that accepts CLEM expressions:
<!-- <ul> -->
* `@BLANK(FIELD)` can be used to determine records whose values are blank for a particular field, such as `Age`\.
* `@NULL(FIELD)` can be used to determine records whose values are system\-missing for the specified field(s)\. In SPSS Modeler, system\-missing values are displayed as $null$ values\.
<!-- </ul> -->
See [Functions handling blanks and null values](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_blanksnulls.html#clem_function_ref_blanksnulls) for more information\.
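For example, in a Derive node, a conditional expression such as the following returns 0 when a hypothetical numeric field named `Income` is null and returns the original value otherwise:
if @NULL(Income) then 0 else Income endif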
<!-- </article "role="article" "> -->
|
23296AAD76933152D5D3E9DD875EBBD3FB7575EA | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clemoverview_container.html?context=cdpaas&locale=en | Building CLEM expressions (SPSS Modeler) | Building CLEM (legacy) expressions
| # Building CLEM (legacy) expressions #
<!-- </article "role="article" "> -->
|
7B9348596E2F005F89842D1B997FA09BDCBE8F06 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/conventions_in_function_descriptions.html?context=cdpaas&locale=en | Conventions and function descriptions (SPSS Modeler) | Conventions in function descriptions
This page describes the conventions used throughout this guide when referring to items in a function.
Conventions in function descriptions
Table 1. Conventions in function descriptions
Convention Description
BOOL A Boolean, or flag, such as true or false.
NUM, NUM1, NUM2 Any number.
REAL, REAL1, REAL2 Any real number, such as 1.234 or –77.01.
INT, INT1, INT2 Any integer, such as 1 or –77.
CHAR A character code, such as `A`.
STRING A string, such as "referrerID".
LIST A list of items, such as ["abc" "def"] or [A1, A2, A3] or [1 2 4 16].
ITEM A field, such as Customer or extract_concept.
DATE A date field, such as start_date, where values are in a format such as DD-MON-YYYY.
TIME A time field, such as power_flux, where values are in a format such as HHMMSS.
Functions in this guide are listed with the function in one column, the result type (integer, string, and so on) in another, and a description (where available) in a third column. For example, following is a description of the rem function.
rem function description
Table 2. rem function description
Function Result Description
INT1 rem INT2 Number Returns the remainder of INT1 divided by INT2. For example, INT1 – (INT1 div INT2) * INT2.
Details on usage conventions, such as how to list items or specify characters in a function, are described elsewhere. See [CLEM datatypes](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_datatypes.html#clem_datatypes) for more information.
| # Conventions in function descriptions #
This page describes the conventions used throughout this guide when referring to items in a function\.
<!-- <table "summary="Conventions in function descriptions" id="conventions_in_function_descriptions__table_msf_fbj_cdb" class="defaultstyle" "> -->
Conventions in function descriptions
Table 1\. Conventions in function descriptions
| Convention | Description |
| ------------------------ | ---------------------------------------------------------------------------------------- |
| *BOOL* | A Boolean, or flag, such as true or false\. |
| *NUM*, *NUM1*, *NUM2* | Any number\. |
| *REAL*, *REAL1*, *REAL2* | Any real number, such as `1.234` or `–77.01`\. |
| *INT*, *INT1*, *INT2* | Any integer, such as `1` or `–77`\. |
| *CHAR* | A character code, such as `` `A` ``\. |
| *STRING* | A string, such as `"referrerID"`\. |
| *LIST* | A list of items, such as `["abc" "def"]` or `[A1, A2, A3]` or `[1 2 4 16]`\. |
| *ITEM* | A field, such as `Customer` or `extract_concept`\. |
| *DATE* | A date field, such as `start_date`, where values are in a format such as `DD-MON-YYYY`\. |
| *TIME* | A time field, such as `power_flux`, where values are in a format such as `HHMMSS`\. |
<!-- </table "summary="Conventions in function descriptions" id="conventions_in_function_descriptions__table_msf_fbj_cdb" class="defaultstyle" "> -->
Functions in this guide are listed with the function in one column, the result type (integer, string, and so on) in another, and a description (where available) in a third column\. For example, following is a description of the `rem` function\.
<!-- <table "summary="rem function description" id="conventions_in_function_descriptions__table_qsf_fbj_cdb" class="defaultstyle" "> -->
rem function description
Table 2\. rem function description
| Function | Result | Description |
| --------------- | -------- | -------------------------------------------------------------------------------------------------- |
| `INT1 rem INT2` | *Number* | Returns the remainder of *INT1* divided by *INT2*\. For example, `INT1 – (INT1 div INT2)* INT2`\. |
<!-- </table "summary="rem function description" id="conventions_in_function_descriptions__table_qsf_fbj_cdb" class="defaultstyle" "> -->
Details on usage conventions, such as how to list items or specify characters in a function, are described elsewhere\. See [CLEM datatypes](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_datatypes.html#clem_datatypes) for more information\.
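As a worked illustration of these conventions, the expression `11 rem 3` evaluates to 2, while `11 div 3` evaluates to 3\.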
<!-- </article "role="article" "> -->
|
C2185A8C9156C6B38D76BD3FD29A833D96A5762B | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/date_formats.html?context=cdpaas&locale=en | Dates (SPSS Modeler) | Dates
Date calculations are based on a "baseline" date, which is specified in the flow properties. The default baseline date is 1 January 1900.
The CLEM language supports the following date formats.
CLEM language date formats
Table 1. CLEM language date formats
Format Examples
DDMMYY 150163
MMDDYY 011563
YYMMDD 630115
YYYYMMDD 19630115
YYYYDDD Four-digit year followed by a three-digit number representing the day of the year—for example, 2000032 represents the 32nd day of 2000, or 1 February 2000.
DAY Day of the week in the current locale—for example, Monday, Tuesday, ..., in English.
MONTH Month in the current locale—for example, January, February, ….
DD/MM/YY 15/01/63
DD/MM/YYYY 15/01/1963
MM/DD/YY 01/15/63
MM/DD/YYYY 01/15/1963
DD-MM-YY 15-01-63
DD-MM-YYYY 15-01-1963
MM-DD-YY 01-15-63
MM-DD-YYYY 01-15-1963
DD.MM.YY 15.01.63
DD.MM.YYYY 15.01.1963
MM.DD.YY 01.15.63
MM.DD.YYYY 01.15.1963
DD-MON-YY 15-JAN-63, 15-jan-63, 15-Jan-63
DD/MON/YY 15/JAN/63, 15/jan/63, 15/Jan/63
DD.MON.YY 15.JAN.63, 15.jan.63, 15.Jan.63
DD-MON-YYYY 15-JAN-1963, 15-jan-1963, 15-Jan-1963
DD/MON/YYYY 15/JAN/1963, 15/jan/1963, 15/Jan/1963
DD.MON.YYYY 15.JAN.1963, 15.jan.1963, 15.Jan.1963
MON YYYY Jan 2004
q Q YYYY Date represented as a digit (1–4) representing the quarter followed by the letter Q and a four-digit year—for example, 25 December 2004 would be represented as 4 Q 2004.
ww WK YYYY Two-digit number representing the week of the year followed by the letters WK and then a four-digit year. The week of the year is calculated assuming that the first day of the week is Monday and there is at least one day in the first week.
| # Dates #
Date calculations are based on a "baseline" date, which is specified in the flow properties\. The default baseline date is 1 January 1900\.
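For example, assuming a hypothetical date field named `order_date`, an expression such as the following returns the number of days from the baseline date to each value of `order_date`:
date_in_days(order_date)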
The CLEM language supports the following date formats\.
<!-- <table "summary="CLEM language date formats" id="date_formats__table_a1p_3bj_cdb" class="defaultstyle" "> -->
CLEM language date formats
Table 1\. CLEM language date formats
| Format | Examples |
| ------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `DDMMYY` | `150163` |
| `MMDDYY` | `011563` |
| `YYMMDD` | `630115` |
| `YYYYMMDD` | `19630115` |
| `YYYYDDD` | Four\-digit year followed by a three\-digit number representing the day of the year—for example, `2000032` represents the 32nd day of 2000, or 1 February 2000\. |
| `DAY` | Day of the week in the current locale—for example, `Monday`, `Tuesday`, \.\.\., in English\. |
| `MONTH` | Month in the current locale—for example, `January`, `February`, …\. |
| `DD/MM/YY` | `15/01/63` |
| `DD/MM/YYYY` | `15/01/1963` |
| `MM/DD/YY` | `01/15/63` |
| `MM/DD/YYYY` | `01/15/1963` |
| `DD-MM-YY` | `15-01-63` |
| `DD-MM-YYYY` | `15-01-1963` |
| `MM-DD-YY` | `01-15-63` |
| `MM-DD-YYYY` | `01-15-1963` |
| `DD.MM.YY` | `15.01.63` |
| `DD.MM.YYYY` | `15.01.1963` |
| `MM.DD.YY` | `01.15.63` |
| `MM.DD.YYYY` | `01.15.1963` |
| `DD-MON-YY` | `15-JAN-63, 15-jan-63, 15-Jan-63` |
| `DD/MON/YY` | `15/JAN/63, 15/jan/63, 15/Jan/63` |
| `DD.MON.YY` | `15.JAN.63, 15.jan.63, 15.Jan.63` |
| `DD-MON-YYYY` | `15-JAN-1963, 15-jan-1963, 15-Jan-1963` |
| `DD/MON/YYYY` | `15/JAN/1963, 15/jan/1963, 15/Jan/1963` |
| `DD.MON.YYYY` | `15.JAN.1963, 15.jan.1963, 15.Jan.1963` |
| `MON YYYY` | `Jan 2004` |
| `q Q YYYY` | Date represented as a digit (1–4) representing the quarter followed by the letter *Q* and a four\-digit year—for example, 25 December 2004 would be represented as `4 Q 2004`\. |
| `ww WK YYYY` | Two\-digit number representing the week of the year followed by the letters *WK* and then a four\-digit year\. The week of the year is calculated assuming that the first day of the week is Monday and there is at least one day in the first week\. |
<!-- </table "summary="CLEM language date formats" id="date_formats__table_a1p_3bj_cdb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
FE88457CA86FFE3BE30873156A7A0A4FD12975AF | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/ebuilder_accessing.html?context=cdpaas&locale=en | Accessing the Expression Builder (SPSS Modeler) | Accessing the Expression Builder
The Expression Builder is available in all nodes where CLEM expressions are used, including Select, Balance, Derive, Filler, Analysis, Report, and Table nodes.
To open it, double-click the node to open its properties, then click the calculator button next to the formula field.
| # Accessing the Expression Builder #
The Expression Builder is available in all nodes where CLEM expressions are used, including Select, Balance, Derive, Filler, Analysis, Report, and Table nodes\.
To open it, double\-click the node to open its properties, then click the calculator button next to the formula field\.
<!-- </article "role="article" "> -->
|
56EA4620B049A9E291BF198E71D0C58C2018686D | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/ebuilder_checking.html?context=cdpaas&locale=en | Checking CLEM expressions (SPSS Modeler) | Checking CLEM expressions
Click Validate in the Expression Builder to validate an expression.
Expressions that haven't been checked are displayed in red. If errors are found, a message indicating the cause is displayed.
The following items are checked:
* Correct quoting of values and field names
* Correct usage of parameters and global variables
* Valid usage of operators
* Existence of referenced fields
* Existence and definition of referenced globals
If you encounter errors in syntax, try creating the expression using the lists and operator buttons rather than typing the expression manually. This method automatically adds the proper quotes for fields and values.
Note: Field names that contain separators must be surrounded by single quotes. To automatically add quotes, you can create expressions using the lists and operator buttons rather than typing expressions manually. The following characters in field names may cause errors: • ! "# $% & '() = ~ |-^ ¥ @" "+ *" "<>? . ,/ :; → (arrow mark), □ △ (graphic mark, etc.)
| # Checking CLEM expressions #
Click Validate in the Expression Builder to validate an expression\.
Expressions that haven't been checked are displayed in red\. If errors are found, a message indicating the cause is displayed\.
The following items are checked:
<!-- <ul> -->
* Correct quoting of values and field names
* Correct usage of parameters and global variables
* Valid usage of operators
* Existence of referenced fields
* Existence and definition of referenced globals
<!-- </ul> -->
If you encounter errors in syntax, try creating the expression using the lists and operator buttons rather than typing the expression manually\. This method automatically adds the proper quotes for fields and values\.
Note: Field names that contain separators must be surrounded by single quotes\. To automatically add quotes, you can create expressions using the lists and operator buttons rather than typing expressions manually\. The following characters in field names may cause errors: `• ! "# $% & '() = ~ |-^ ¥ @" "+ *" "<>? . ,/ :; →`(arrow mark), `□ △` (graphic mark, etc\.)
<!-- </article "role="article" "> -->
|
6FD8A950F1EBE6B021EA9D4C775A5CA8660A1101 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/ebuilder_using.html?context=cdpaas&locale=en | Creating expressions (SPSS Modeler) | Creating expressions
The Expression Builder provides not only complete lists of fields, functions, and operators but also access to data values if your data is instantiated.
| # Creating expressions #
The Expression Builder provides not only complete lists of fields, functions, and operators but also access to data values if your data is instantiated\.
<!-- </article "role="article" "> -->
|
B8044B03933E3FCEA5BCF6362199ED083EC2F20F | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/expressionbuild_database_functions.html?context=cdpaas&locale=en | Database functions (SPSS Modeler) | Database functions
You can run an SPSS Modeler desktop stream file (.str) that contains database functions.
But database functions aren't available in the Expression Builder user interface, and you can't edit them.
| # Database functions #
You can run an SPSS Modeler desktop stream file (\.str) that contains database functions\.
But database functions aren't available in the Expression Builder user interface, and you can't edit them\.
<!-- </article "role="article" "> -->
|
841465AD74B0AFDBEC9EAFF7B038AFC4C000E96C | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/expressionbuild_fields.html?context=cdpaas&locale=en | Selecting fields (SPSS Modeler) | Selecting fields
The field list displays all fields available at this point in the data stream. Double-click a field from the list to add it to your expression.
After selecting a field, you can also select an associated value from the value list.
| # Selecting fields #
The field list displays all fields available at this point in the data stream\. Double\-click a field from the list to add it to your expression\.
After selecting a field, you can also select an associated value from the value list\.
<!-- </article "role="article" "> -->
|
0093065541AA4C3E90E47E3ACE89596155EA1735 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/expressionbuild_functions.html?context=cdpaas&locale=en | Selecting functions (SPSS Modeler) | Selecting functions
The function list displays all available SPSS Modeler functions and operators. Scroll to select a function from the list, or, for easier searching, use the drop-down list to display a subset of functions or operators. Available functions are grouped into categories for easier searching.
Most of these categories are described in the Reference section of the CLEM language description. For more information, see [Functions reference](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref.html#clem_function_ref).
The other categories are as follows.
* General Functions. Contains a selection of some of the most commonly-used functions.
* Recently Used. Contains a list of CLEM functions used within the current session.
* @ Functions. Contains a list of all the special functions, which have their names preceded by an "@" sign. Note: The @DIFF1(FIELD1,FIELD2) and @DIFF2(FIELD1,FIELD2) functions require that the two field types are the same (for example, both Integer or both Long or both Real).
* Database Functions. If the flow includes a database connection, this selection lists the functions available from within that database, including user-defined functions (UDFs). For more information, see [Database functions](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/expressionbuild_database_functions.html#expressionbuild_database_functions).
* Database Aggregates. If the flow includes a database connection, this selection lists the aggregation options available from within that database. These options are available in the Expression Builder of the Aggregate node.
* Built-In Aggregates. Contains a list of the possible modes of aggregation that can be used.
* Operators. Lists all the operators you can use when building expressions. Operators are also available from the buttons in the center of the dialog box.
* All Functions. Contains a complete list of available CLEM functions.
Double-click a function to insert it into the expression field at the position of the cursor.
| # Selecting functions #
The function list displays all available SPSS Modeler functions and operators\. Scroll to select a function from the list, or, for easier searching, use the drop\-down list to display a subset of functions or operators\. Available functions are grouped into categories for easier searching\.
Most of these categories are described in the Reference section of the CLEM language description\. For more information, see [Functions reference](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref.html#clem_function_ref)\.
The other categories are as follows\.
<!-- <ul> -->
* General Functions\. Contains a selection of some of the most commonly\-used functions\.
* Recently Used\. Contains a list of CLEM functions used within the current session\.
* @ Functions\. Contains a list of all the special functions, which have their names preceded by an "@" sign\. Note: The `@DIFF1(FIELD1,FIELD2)` and `@DIFF2(FIELD1,FIELD2)` functions require that the two field types are the same (for example, both Integer or both Long or both Real)\.
* Database Functions\. If the flow includes a database connection, this selection lists the functions available from within that database, including user\-defined functions (UDFs)\. For more information, see [Database functions](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/expressionbuild_database_functions.html#expressionbuild_database_functions)\.
* Database Aggregates\. If the flow includes a database connection, this selection lists the aggregation options available from within that database\. These options are available in the Expression Builder of the Aggregate node\.
* Built\-In Aggregates\. Contains a list of the possible modes of aggregation that can be used\.
* Operators\. Lists all the operators you can use when building expressions\. Operators are also available from the buttons in the center of the dialog box\.
* All Functions\. Contains a complete list of available CLEM functions\.
<!-- </ul> -->
Double\-click a function to insert it into the expression field at the position of the cursor\.
<!-- </article "role="article" "> -->
|
C89753519B91F85DC9E0ED54A3248CD82D5F2A9E | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/expressionbuilderdialog.html?context=cdpaas&locale=en | The Expression Builder (SPSS Modeler) | The Expression Builder
You can type CLEM expressions manually or use the Expression Builder, which displays a complete list of CLEM functions and operators as well as data fields from the current flow, allowing you to quickly build expressions without memorizing the exact names of fields or functions.
In addition, the Expression Builder controls automatically add the proper quotes for fields and values, making it easier to create syntactically correct expressions.
Notes:
* The Expression Builder isn't supported in scripting or parameter settings.
* If you want to change your datasource, before changing the source you should check that the Expression Builder can still support the functions you have selected. Because not all databases support all functions, you may encounter an error if you run against a new datasource.
| # The Expression Builder #
You can type CLEM expressions manually or use the Expression Builder, which displays a complete list of CLEM functions and operators as well as data fields from the current flow, allowing you to quickly build expressions without memorizing the exact names of fields or functions\.
In addition, the Expression Builder controls automatically add the proper quotes for fields and values, making it easier to create syntactically correct expressions\.
Notes:
<!-- <ul> -->
* The Expression Builder isn't supported in scripting or parameter settings\.
* If you want to change your datasource, before changing the source you should check that the Expression Builder can still support the functions you have selected\. Because not all databases support all functions, you may encounter an error if you run against a new datasource\.
<!-- </ul> -->
<!-- </article "role="article" "> -->
|
F4F623D5A7C8913E227E962BD1F347B36AAB7B51 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/expressions_and_conditions.html?context=cdpaas&locale=en | Expressions and conditions (SPSS Modeler) | Expressions and conditions
CLEM expressions can return a result (used when deriving new values).
For example:
Weight * 2.2
Age + 1
sqrt(Signal-Echo)
Or, they can evaluate true or false (used when selecting on a condition). For example:
Drug = "drugA"
Age < 16
not(PowerFlux) and Power > 2000
You can combine operators and functions arbitrarily in CLEM expressions. For example:
sqrt(abs(Signal))* max(T1, T2) + Baseline
Brackets and operator precedence determine the order in which the expression is evaluated. In this example, the order of evaluation is:
* abs(Signal) is evaluated, and sqrt is applied to its result
* max(T1, T2) is evaluated
* The two results are multiplied: * has higher precedence than +
* Finally, Baseline is added to the result
The descending order of precedence (that is, operations that are performed first to operations that are performed last) is as follows:
* Function arguments
* Function calls
* **
* * / mod div rem
* + –
* > < >= <= /== == = /=
If you want to override precedence, or if you're in any doubt of the order of evaluation, you can use parentheses to make it explicit. For example:
sqrt(abs(Signal))* (max(T1, T2) + Baseline)
| # Expressions and conditions #
CLEM expressions can return a result (used when deriving new values)\.
For example:
Weight * 2.2
Age + 1
sqrt(Signal-Echo)
Or, they can evaluate *true* or *false* (used when selecting on a condition)\. For example:
Drug = "drugA"
Age < 16
not(PowerFlux) and Power > 2000
You can combine operators and functions arbitrarily in CLEM expressions\. For example:
sqrt(abs(Signal))* max(T1, T2) + Baseline
Brackets and operator precedence determine the order in which the expression is evaluated\. In this example, the order of evaluation is:
<!-- <ul> -->
* `abs(Signal)` is evaluated, and `sqrt` is applied to its result
* `max(T1, T2)` is evaluated
* The two results are multiplied: `*` has higher precedence than `+`
* Finally, `Baseline` is added to the result
<!-- </ul> -->
The descending order of precedence (that is, operations that are performed first to operations that are performed last) is as follows:
<!-- <ul> -->
* Function arguments
* Function calls
* `**`
* `* / mod div rem`
* `+ –`
* `> < >= <= /== == = /=`
<!-- </ul> -->
If you want to override precedence, or if you're in any doubt of the order of evaluation, you can use parentheses to make it explicit\. For example:
sqrt(abs(Signal))* (max(T1, T2) + Baseline)
<!-- </article "role="article" "> -->
|
85F8B4292483C5747AB2436A2D5D5377F1F6CAB9 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/field_information.html?context=cdpaas&locale=en | Viewing or selecting values (SPSS Modeler) | Viewing or selecting values
You can view field values from the Expression Builder. Note that data must be fully instantiated in an Import or Type node to use this feature, so that storage, types, and values are known.
To view values for a field from the Expression Builder, select the required field and then use the Value list or perform a search with the Find in column Value field to find values for the selected field. You can then double-click a value to insert it into the current expression or list.
For flag and nominal fields, all defined values are listed. For continuous (numeric range) fields, the minimum and maximum values are displayed.
| # Viewing or selecting values #
You can view field values from the Expression Builder\. Note that data must be fully instantiated in an Import or Type node to use this feature, so that storage, types, and values are known\.
To view values for a field from the Expression Builder, select the required field and then use the Value list or perform a search with the Find in column Value field to find values for the selected field\. You can then double\-click a value to insert it into the current expression or list\.
For flag and nominal fields, all defined values are listed\. For continuous (numeric range) fields, the minimum and maximum values are displayed\.
<!-- </article "role="article" "> -->
|
B69246113E589F088E8E1302B32B57720BD27720 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/fields.html?context=cdpaas&locale=en | Fields (SPSS Modeler) | Fields
Names in CLEM expressions that aren’t names of functions are assumed to be field names.
You can write these simply as Power, val27, state_flag, and so on, but if the name begins with a digit or includes non-alphabetic characters, such as spaces (with the exception of the underscore), place the name within single quotation marks (for example, 'Power Increase', '2nd answer', '#101', '$P-NextField').
Note: Fields that are quoted but undefined in the data set will be misread as strings.
| # Fields #
Names in CLEM expressions that aren’t names of functions are assumed to be field names\.
You can write these simply as `Power`, `val27`, `state_flag`, and so on, but if the name begins with a digit or includes non\-alphabetic characters, such as spaces (with the exception of the underscore), place the name within single quotation marks (for example, `'Power Increase'`, `'2nd answer'`, `'#101'`, `'$P-NextField'`)\.
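For example, a hypothetical field named Power Increase, whose name contains a space, must be placed within single quotation marks when it is referenced in an expression:
'Power Increase' > 1000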
Note: Fields that are quoted but undefined in the data set will be misread as strings\.
<!-- </article "role="article" "> -->
|
C528D240892080AECE146D29FB3496DDD0F1FD48 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/findreplacedialog.html?context=cdpaas&locale=en | Find (SPSS Modeler) | Find
In the Expression Builder, you can search for fields, values, or functions.
For example, to search for a value, place your cursor in the Find in column Value field and enter the text you want to search for.
You can also search on special characters such as tabs or newline characters, classes or ranges of characters such as a through d, any digit or non-digit, and boundaries such as the beginning or end of a line. The following types of expressions are supported.
Character matches
Table 1. Character matches
Characters Matches
x The character x
\\ The backslash character
\0n The character with octal value 0n (0 <= n <= 7)
\0nn The character with octal value 0nn (0 <= n <= 7)
\0mnn The character with octal value 0mnn (0 <= m <= 3, 0 <= n <= 7)
\xhh The character with hexadecimal value 0xhh
\uhhhh The character with hexadecimal value 0xhhhh
\t The tab character ('\u0009')
\n The newline (line feed) character ('\u000A')
\r The carriage-return character ('\u000D')
\f The form-feed character ('\u000C')
\a The alert (bell) character ('\u0007')
\e The escape character ('\u001B')
\cx The control character corresponding to x
Matching character classes
Table 2. Matching character classes
Character classes Matches
[abc] a, b, or c (simple class)
[^abc] Any character except a, b, or c (subtraction)
[a-zA-Z] a through z or A through Z, inclusive (range)
[a-d[m-p]] a through d, or m through p (union). Alternatively this could be specified as [a-dm-p]
[a-z&&[def]] a through z, and d, e, or f (intersection)
[a-z&&[^bc]] a through z, except for b and c (subtraction). Alternatively this could be specified as [ad-z]
[a-z&&[^m-p]] a through z, and not m through p (subtraction). Alternatively this could be specified as [a-lq-z]
Predefined character classes
Table 3. Predefined character classes
Predefined character classes Matches
. Any character (may or may not match line terminators)
\d Any digit: [0-9]
\D A non-digit: [^0-9]
\s A white space character: [ \t\n\x0B\f\r]
\S A non-white space character: [^\s]
\w A word character: [a-zA-Z_0-9]
\W A non-word character: [^\w]
Boundary matches
Table 4. Boundary matches
Boundary matchers Matches
^ The beginning of a line
$ The end of a line
\b A word boundary
\B A non-word boundary
\A The beginning of the input
\Z The end of the input but for the final terminator, if any
\z The end of the input
| # Find #
In the Expression Builder, you can search for fields, values, or functions\.
For example, to search for a value, place your cursor in the Find in column Value field and enter the text you want to search for\.
You can also search on special characters such as tabs or newline characters, classes or ranges of characters such as *a* through *d*, any digit or non\-digit, and boundaries such as the beginning or end of a line\. The following types of expressions are supported\.
<!-- <table "summary="Character matches" id="findreplacedialog__table_wsf_1tz_ddb" class="defaultstyle" "> -->
Character matches
Table 1\. Character matches
| Characters | Matches |
| ---------- | -------------------------------------------------------------------------- |
| x | The character x |
| \\\\ | The backslash character |
| \\0n | The character with octal value 0n (0 <= n <= 7) |
| \\0nn | The character with octal value 0nn (0 <= n <= 7) |
| \\0mnn | The character with octal value 0mnn (0 <= m <= 3, 0 <= n <= 7) |
| \\xhh | The character with hexadecimal value 0xhh |
| \\uhhhh | The character with hexadecimal value 0xhhhh |
| \\t | The tab character ('\\u0009') |
| \\n | The newline (line feed) character ('\\u000A') |
| \\r | The carriage\-return character ('\\u000D') |
| \\f | The form\-feed character ('\\u000C') |
| \\a | The alert (bell) character ('\\u0007') |
| \\e | The escape character ('\\u001B') |
| \\cx | The control character corresponding to x |
<!-- </table "summary="Character matches" id="findreplacedialog__table_wsf_1tz_ddb" class="defaultstyle" "> -->
<!-- <table "summary="Matching character classes" id="findreplacedialog__table_xsf_1tz_ddb" class="defaultstyle" "> -->
Matching character classes
Table 2\. Matching character classes
| Character classes | Matches |
| ------------------- | ------------------------------------------------------------------------------------------------------ |
| \[abc\] | a, b, or c (simple class) |
| \[^abc\] | Any character except a, b, or c (subtraction) |
| \[a\-zA\-Z\] | a through z or A through Z, inclusive (range) |
| \[a\-d\[m\-p\]\] | a through d, or m through p (union)\. Alternatively this could be specified as \[a\-dm\-p\] |
| \[a\-z&&\[def\]\] | a through z, and d, e, or f (intersection) |
| \[a\-z&&\[^bc\]\] | a through z, except for b and c (subtraction)\. Alternatively this could be specified as \[ad\-z\] |
| \[a\-z&&\[^m\-p\]\] | a through z, and not m through p (subtraction)\. Alternatively this could be specified as \[a\-lq\-z\] |
<!-- </table "summary="Matching character classes" id="findreplacedialog__table_xsf_1tz_ddb" class="defaultstyle" "> -->
<!-- <table "summary="Predefined character classes" id="findreplacedialog__table_ysf_1tz_ddb" class="defaultstyle" "> -->
Predefined character classes
Table 3\. Predefined character classes
| Predefined character classes | Matches |
| ---------------------------- | ----------------------------------------------------- |
| \. | Any character (may or may not match line terminators) |
| \\d | Any digit: \[0\-9\] |
| \\D | A non\-digit: \[^0\-9\] |
| \\s | A white space character: \[ \\t\\n\\x0B\\f\\r\] |
| \\S | A non\-white space character: \[^\\s\] |
| \\w | A word character: \[a\-zA\-Z\_0\-9\] |
| \\W | A non\-word character: \[^\\w\] |
<!-- </table "summary="Predefined character classes" id="findreplacedialog__table_ysf_1tz_ddb" class="defaultstyle" "> -->
<!-- <table "summary="Boundary matches" id="findreplacedialog__table_zsf_1tz_ddb" class="defaultstyle" "> -->
Boundary matches
Table 4\. Boundary matches
| Boundary matchers | Matches |
| ----------------- | --------------------------------------------------------- |
| ^ | The beginning of a line |
| $ | The end of a line |
| \\b | A word boundary |
| \\B | A non\-word boundary |
| \\A | The beginning of the input |
| \\Z | The end of the input but for the final terminator, if any |
| \\z | The end of the input |
<!-- </table "summary="Boundary matches" id="findreplacedialog__table_zsf_1tz_ddb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
C1324A359A58B4D399C10BC59AE94E7E0723836D | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/integers.html?context=cdpaas&locale=en | Integers (SPSS Modeler) | Integers
Integers are represented as a sequence of decimal digits.
Optionally, you can place a minus sign (−) before the integer to denote a negative number (for example, 1234, 999, −77).
The CLEM language handles integers of arbitrary precision. The maximum integer size depends on your platform. If the values are too large to be displayed in an integer field, changing the field type to Real usually restores the value.
| # Integers #
Integers are represented as a sequence of decimal digits\.
Optionally, you can place a minus sign (−) before the integer to denote a negative number (for example, `1234`, `999`, −`77`)\.
The CLEM language handles integers of arbitrary precision\. The maximum integer size depends on your platform\. If the values are too large to be displayed in an integer field, changing the field type to `Real` usually restores the value\.
<!-- </article "role="article" "> -->
|
D05F366AFC5726DC1A258EDC3689067381EFDECC | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/introduction_to_clem.html?context=cdpaas&locale=en | About CLEM (SPSS Modeler) | About CLEM
The Control Language for Expression Manipulation (CLEM) is a powerful language for analyzing and manipulating the data that streams through an SPSS Modeler flow. Data miners use CLEM extensively in flow operations to perform tasks as simple as deriving profit from cost and revenue data or as complex as transforming web log data into a set of fields and records with usable information.
CLEM is used within SPSS Modeler to:
* Compare and evaluate conditions on record fields
* Derive values for new fields
* Derive new values for existing fields
* Reason about the sequence of records
* Insert data from records into reports
CLEM expressions are indispensable for data preparation in SPSS Modeler and can be used in a wide range of nodes—from record and field operations (Select, Balance, Filler) to plots and output (Analysis, Report, Table). For example, you can use CLEM in a Derive node to create a new field based on a formula such as ratio.
CLEM expressions can also be used for global search and replace operations. For example, the expression @NULL(@FIELD) can be used in a Filler node to replace system-missing values with the integer value 0. (To replace user-missing values, also called blanks, use the @BLANK function.)
More complex CLEM expressions can also be created. For example, you can derive new fields based on a conditional set of rules, such as a new value category created by using the following expressions: If: CardID = @OFFSET(CardID,1), Then: @OFFSET(ValueCategory,1), Else: 'exclude'.
This example uses the @OFFSET function to say: If the value of the field CardID for a given record is the same as for the previous record, then return the value of the field named ValueCategory for the previous record. Otherwise, assign the string "exclude." In other words, if the CardIDs for adjacent records are the same, they should be assigned the same value category. (Records with the exclude string can later be culled using a Select node.)
| # About CLEM #
The Control Language for Expression Manipulation (CLEM) is a powerful language for analyzing and manipulating the data that streams through an SPSS Modeler flow\. Data miners use CLEM extensively in flow operations to perform tasks as simple as deriving profit from cost and revenue data or as complex as transforming web log data into a set of fields and records with usable information\.
CLEM is used within SPSS Modeler to:
<!-- <ul> -->
* Compare and evaluate conditions on record fields
* Derive values for new fields
* Derive new values for existing fields
* Reason about the sequence of records
* Insert data from records into reports
<!-- </ul> -->
CLEM expressions are indispensable for data preparation in SPSS Modeler and can be used in a wide range of nodes—from record and field operations (Select, Balance, Filler) to plots and output (Analysis, Report, Table)\. For example, you can use CLEM in a Derive node to create a new field based on a formula such as ratio\.
CLEM expressions can also be used for global search and replace operations\. For example, the expression `@NULL(@FIELD)` can be used in a Filler node to replace **system\-missing values** with the integer value 0\. (To replace **user\-missing values**, also called blanks, use the `@BLANK` function\.)
More complex CLEM expressions can also be created\. For example, you can derive new fields based on a conditional set of rules, such as a new value category created by using the following expressions: `If: CardID = @OFFSET(CardID,1), Then: @OFFSET(ValueCategory,1), Else: 'exclude'`\.
This example uses the `@OFFSET` function to say: If the value of the field *CardID* for a given record is the same as for the previous record, then return the value of the field named *ValueCategory* for the previous record\. Otherwise, assign the string "exclude\." In other words, if the *CardID*s for adjacent records are the same, they should be assigned the same value category\. (Records with the exclude string can later be culled using a Select node\.)
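Written as a single conditional expression in a Derive node, the logic described above might look like the following (using the same *CardID* and *ValueCategory* field names):
if CardID = @OFFSET(CardID,1) then @OFFSET(ValueCategory,1) else 'exclude' endif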
<!-- </article "role="article" "> -->
|
B93F8A3A1CED22CF84C45B552D5040A4A17FDB60 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/lists.html?context=cdpaas&locale=en | Lists (SPSS Modeler) | Lists
A list is an ordered sequence of elements, which may be of mixed type. Lists are enclosed in square brackets ([ ]).
Examples of lists are [1 2 4 16] and ["abc" "def"] and [A1, A2, A3]. Lists are not used as the value of SPSS Modeler fields. They are used to provide arguments to functions, such as member and oneof.
Notes:
* Lists can be composed only of static objects (for example, a string, number, or field name) and not calls to functions.
* Fields containing a list type aren't supported. For example, the function value_at(3, ['Gender' 'BP' 'Cholesterol']) is supported, but the function value_at(3, 'ListField') isn't supported.
| # Lists #
A list is an ordered sequence of elements, which may be of mixed type\. Lists are enclosed in square brackets (\[ \])\.
Examples of lists are `[1 2 4 16]` and `["abc" "def"]` and `[A1, A2, A3]`\. Lists are not used as the value of SPSS Modeler fields\. They are used to provide arguments to functions, such as `member` and `oneof`\.
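For example, assuming a field named `BP`, the first expression below returns true when the value of `BP` appears in the list, and the second returns one element of its list chosen at random:
member(BP, ["HIGH" "NORMAL"])
oneof(["red" "green" "blue"])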
Notes:
<!-- <ul> -->
* Lists can be composed only of static objects (for example, a string, number, or field name) and not calls to functions\.
* Fields containing a list type aren't supported\. For example, the function `value_at(3, ['Gender' 'BP' 'Cholesterol'])` is supported, but the function `value_at(3, 'ListField')` isn't supported\.
<!-- </ul> -->
<!-- </article "role="article" "> -->
|
9455A31E5D6C749F3028F9F5E5F758F713C09973 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/operator_precedence.html?context=cdpaas&locale=en | CLEM operators (SPSS Modeler) | CLEM operators
This page lists the available CLEM language operators.
CLEM language operators
Table 1. CLEM language operators
Operation Comments Precedence (see next section)
or Used between two CLEM expressions. Returns a value of true if either is true or if both are true. 10
and Used between two CLEM expressions. Returns a value of true if both are true. 9
= Used between any two comparable items. Returns true if ITEM1 is equal to ITEM2. 7
== Identical to =. 7
/= Used between any two comparable items. Returns true if ITEM1 is not equal to ITEM2. 7
/== Identical to /=. 7
> Used between any two comparable items. Returns true if ITEM1 is strictly greater than ITEM2. 6
>= Used between any two comparable items. Returns true if ITEM1 is greater than or equal to ITEM2. 6
< Used between any two comparable items. Returns true if ITEM1 is strictly less than ITEM2 6
<= Used between any two comparable items. Returns true if ITEM1 is less than or equal to ITEM2. 6
&&=_0 Used between two integers. Equivalent to the Boolean expression INT1 && INT2 = 0. 6
&&/=_0 Used between two integers. Equivalent to the Boolean expression INT1 && INT2 /= 0. 6
+ Adds two numbers: NUM1 + NUM2. 5
>< Concatenates two strings; for example, STRING1 >< STRING2. 5
- Subtracts one number from another: NUM1 - NUM2. Can also be used in front of a number: - NUM. 5
* Used to multiply two numbers: NUM1 * NUM2. 4
&& Used between two integers. The result is the bitwise 'and' of the integers INT1 and INT2. 4
&&~~ Used between two integers. The result is the bitwise 'and' of INT1 and the bitwise complement of INT2. 4
|| Used between two integers. The result is the bitwise 'inclusive or' of INT1 and INT2. 4
~~ Used in front of an integer. Produces the bitwise complement of INT. 4
||/& Used between two integers. The result is the bitwise 'exclusive or' of INT1 and INT2. 4
INT1 << N Used between two integers. Produces the bit pattern of INT shifted left by N positions. 4
INT1 >> N Used between two integers. Produces the bit pattern of INT shifted right by N positions. 4
/ Used to divide one number by another: NUM1 / NUM2. 4
** Used between two numbers: BASE ** POWER. Returns BASE raised to the power POWER. 3
rem Used between two integers: INT1 rem INT2. Returns the remainder, INT1 - (INT1 div INT2) * INT2. 2
div Used between two integers: INT1 div INT2. Performs integer division. 2
| # CLEM operators #
This page lists the available CLEM language operators\.
<!-- <table "summary="CLEM language operators" id="operator_precedence__table_gt3_lwz_ddb" class="defaultstyle" "> -->
CLEM language operators
Table 1\. CLEM language operators
| Operation | Comments | Precedence (see next section) |
| ----------------- | -------------------------------------------------------------------------------------------------------- | ----------------------------- |
| `or` | Used between two CLEM expressions\. Returns a value of true if either is true or if both are true\. | 10 |
| `and` | Used between two CLEM expressions\. Returns a value of true if both are true\. | 9 |
| `=` | Used between any two comparable items\. Returns true if ITEM1 is equal to ITEM2\. | 7 |
| `==` | Identical to `=`\. | 7 |
| `/=` | Used between any two comparable items\. Returns true if ITEM1 is *not* equal to ITEM2\. | 7 |
| `/==` | Identical to `/=`\. | 7 |
| `>` | Used between any two comparable items\. Returns true if ITEM1 is strictly greater than ITEM2\. | 6 |
| `>=` | Used between any two comparable items\. Returns true if ITEM1 is greater than or equal to ITEM2\. | 6 |
| `<` | Used between any two comparable items\. Returns true if ITEM1 is strictly less than ITEM2 | 6 |
| `<=` | Used between any two comparable items\. Returns true if ITEM1 is less than or equal to ITEM2\. | 6 |
| `&&=_0` | Used between two integers\. Equivalent to the Boolean expression INT1 && INT2 = 0\. | 6 |
| `&&/=_0` | Used between two integers\. Equivalent to the Boolean expression INT1 && INT2 /= 0\. | 6 |
| `+` | Adds two numbers: NUM1 \+ NUM2\. | 5 |
| `><` | Concatenates two strings; for example, `STRING1 >< STRING2`\. | 5 |
| `-` | Subtracts one number from another: NUM1 \- NUM2\. Can also be used in front of a number: \- NUM\. | 5 |
| `*` | Used to multiply two numbers: NUM1 \* NUM2\. | 4 |
| `&&` | Used between two integers\. The result is the bitwise 'and' of the integers INT1 and INT2\. | 4 |
| `&&~~` | Used between two integers\. The result is the bitwise 'and' of INT1 and the bitwise complement of INT2\. | 4 |
| `||` | Used between two integers\. The result is the bitwise 'inclusive or' of INT1 and INT2\. | 4 |
| `~~` | Used in front of an integer\. Produces the bitwise complement of INT\. | 4 |
| `||/&` | Used between two integers\. The result is the bitwise 'exclusive or' of INT1 and INT2\. | 4 |
| `INT1 << N` | Used between two integers\. Produces the bit pattern of INT shifted left by N positions\. | 4 |
| `INT1 >> N` | Used between two integers\. Produces the bit pattern of INT shifted right by N positions\. | 4 |
| `/` | Used to divide one number by another: NUM1 / NUM2\. | 4 |
| `**` | Used between two numbers: BASE \*\* POWER\. Returns BASE raised to the power POWER\. | 3 |
| `rem` | Used between two integers: INT1 rem INT2\. Returns the remainder, INT1 \- (INT1 div INT2) \* INT2\. | 2 |
| `div` | Used between two integers: INT1 div INT2\. Performs integer division\. | 2 |
<!-- </table "summary="CLEM language operators" id="operator_precedence__table_gt3_lwz_ddb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
185C42AB06DE9FF515DCD03213F5C4608C6FAEBF | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/reals.html?context=cdpaas&locale=en | Reals (SPSS Modeler) | Reals
Real refers to a floating-point number. Reals are represented by one or more digits followed by a decimal point followed by one or more digits. CLEM reals are held in double precision.
Optionally, you can place a minus sign (−) before the real to denote a negative number (for example, 1.234, 0.999, −77.001). Use the form <number> e <exponent> to express a real number in exponential notation (for example, 1234.0e5, 1.7e−2). When SPSS Modeler reads number strings from files and converts them automatically to numbers, numbers with no leading digit before the decimal point or with no digit after the point are accepted (for example, 999. or .11). However, these forms are illegal in CLEM expressions.
Note: When referencing real numbers in CLEM expressions, a period must be used as the decimal separator, regardless of any settings for the current flow or locale. For example, specify
Na > 0.6
rather than
Na > 0,6
This applies even if a comma is selected as the decimal symbol in the flow properties and is consistent with the general guideline that code syntax should be independent of any specific locale or convention.
| # Reals #
*Real* refers to a floating\-point number\. Reals are represented by one or more digits followed by a decimal point followed by one or more digits\. CLEM reals are held in double precision\.
Optionally, you can place a minus sign (−) before the real to denote a negative number (for example, `1.234`, `0.999`, −`77.001`)\. Use the form <*number*> e <*exponent*> to express a real number in exponential notation (for example, `1234.0e5`, `1.7e`−`2`)\. When SPSS Modeler reads number strings from files and converts them automatically to numbers, numbers with no leading digit before the decimal point or with no digit after the point are accepted (for example, `999.` or `.11`)\. However, these forms are illegal in CLEM expressions\.
Note: When referencing real numbers in CLEM expressions, a period must be used as the decimal separator, regardless of any settings for the current flow or locale\. For example, specify
Na > 0.6
rather than
Na > 0,6
This applies even if a comma is selected as the decimal symbol in the flow properties and is consistent with the general guideline that code syntax should be independent of any specific locale or convention\.
<!-- </article "role="article" "> -->
|
385DEC32600A9DED58FEDE3E98568FED789A400A | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/strings.html?context=cdpaas&locale=en | Strings (SPSS Modeler) | Strings
Generally, you should enclose strings in double quotation marks. Examples of strings are "c35product2" and "referrerID".
To indicate special characters in a string, use a backslash (for example, "\$65443"). (To indicate a backslash character, use a double backslash, \\.) You can use single quotes around a string, but the result is indistinguishable from a quoted field ('referrerID'). See [String functions](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_string.html#clem_function_ref_string) for more information.
| # Strings #
Generally, you should enclose strings in double quotation marks\. Examples of strings are `"c35product2"` and `"referrerID"`\.
To indicate special characters in a string, use a backslash (for example, `"\$65443"`)\. (To indicate a backslash character, use a double backslash, `\\`\.) You can use single quotes around a string, but the result is indistinguishable from a quoted field (`'referrerID'`)\. See [String functions](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_string.html#clem_function_ref_string) for more information\.
<!-- </article "role="article" "> -->
|
839B16AC73C000ECE7BAC7D50BAF6F7E37F2CAD9 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/time_formats_clem_language.html?context=cdpaas&locale=en | Time (SPSS Modeler) | Time
The CLEM language supports the time formats listed in this section.
CLEM language time formats
Table 1. CLEM language time formats
Format Examples
HHMMSS 120112, 010101, 221212
HHMM 1223, 0745, 2207
MMSS 5558, 0100
HH:MM:SS 12:01:12, 01:01:01, 22:12:12
HH:MM 12:23, 07:45, 22:07
MM:SS 55:58, 01:00
(H)H:(M)M:(S)S 12:1:12, 1:1:1, 22:12:12
(H)H:(M)M 12:23, 7:45, 22:7
(M)M:(S)S 55:58, 1:0
HH.MM.SS 12.01.12, 01.01.01, 22.12.12
HH.MM 12.23, 07.45, 22.07
MM.SS 55.58, 01.00
(H)H.(M)M.(S)S 12.1.12, 1.1.1, 22.12.12
(H)H.(M)M 12.23, 7.45, 22.7
(M)M.(S)S 55.58, 1.0
| # Time #
The CLEM language supports the time formats listed in this section\.
<!-- <table "summary="CLEM language time formats" id="time_formats_clem_language__table_zbw_jyy_ddb" class="defaultstyle" "> -->
CLEM language time formats
Table 1\. CLEM language time formats
| Format | Examples |
| ---------------- | ------------------------------ |
| `HHMMSS` | `120112, 010101, 221212` |
| `HHMM` | `1223, 0745, 2207` |
| `MMSS` | `5558, 0100` |
| `HH:MM:SS` | `12:01:12, 01:01:01, 22:12:12` |
| `HH:MM` | `12:23, 07:45, 22:07` |
| `MM:SS` | `55:58, 01:00` |
| `(H)H:(M)M:(S)S` | `12:1:12, 1:1:1, 22:12:12` |
| `(H)H:(M)M` | `12:23, 7:45, 22:7` |
| `(M)M:(S)S` | `55:58, 1:0` |
| `HH.MM.SS` | `12.01.12, 01.01.01, 22.12.12` |
| `HH.MM` | `12.23, 07.45, 22.07` |
| `MM.SS` | `55.58, 01.00` |
| `(H)H.(M)M.(S)S` | `12.1.12, 1.1.1, 22.12.12` |
| `(H)H.(M)M` | `12.23, 7.45, 22.7` |
| `(M)M.(S)S` | `55.58, 1.0` |
<!-- </table "summary="CLEM language time formats" id="time_formats_clem_language__table_zbw_jyy_ddb" class="defaultstyle" "> -->
<!-- </article "role="article" "> -->
|
F975B9964D088181CF34A1341083BC82053812D8 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/values_and_data_types.html?context=cdpaas&locale=en | Values and data types (SPSS Modeler) | Values and data types
CLEM expressions are similar to formulas constructed from values, field names, operators, and functions. The simplest valid CLEM expression is a value or a field name.
Examples of valid values are:
3
1.79
'banana'
Examples of field names are:
Product_ID
'$P-NextField'
where Product_ID is the name of a field from a market basket data set, '$P-NextField' is the name of a parameter, and the value of the expression is the value of the named field. Typically, field names start with a letter and may also contain digits and underscores (_). You can use names that don't follow these rules if you place the name within quotation marks. CLEM values can be any of the following:
* Strings (for example, "c1", "Type 2", "a piece of free text")
* Integers (for example, 12, 0, –189)
* Real numbers (for example, 12.34, 0.0, –0.0045)
* Date/time fields (for example, 05/12/2002, 12/05/2002, 12/05/02)
It's also possible to use the following elements:
* Character codes (for example, `a` or 3)
* Lists of items (for example, [1 2 3], ['Type 1' 'Type 2'])
Character codes and lists don't usually occur as field values. Typically, they're used as arguments of CLEM functions.
| # Values and data types #
CLEM expressions are similar to formulas constructed from values, field names, operators, and functions\. The simplest valid CLEM expression is a value or a field name\.
Examples of valid values are:
3
1.79
'banana'
Examples of field names are:
Product_ID
'$P-NextField'
where `Product_ID` is the name of a field from a market basket data set, `'$P-NextField'` is the name of a parameter, and the value of the expression is the value of the named field\. Typically, field names start with a letter and may also contain digits and underscores (\_)\. You can use names that don't follow these rules if you place the name within quotation marks\. CLEM values can be any of the following:
<!-- <ul> -->
* Strings (for example, `"c1"`, `"Type 2"`, `"a piece of free text"`)
* Integers (for example, `12`, `0`, `–189`)
* Real numbers (for example, `12.34`, `0.0`, `–0.0045`)
* Date/time fields (for example, `05/12/2002`, `12/05/2002`, `12/05/02`)
<!-- </ul> -->
It's also possible to use the following elements:
<!-- <ul> -->
* Character codes (for example, `` `a` or 3``)
* Lists of items (for example, `[1 2 3]`, `['Type 1' 'Type 2']`)
<!-- </ul> -->
Character codes and lists don't usually occur as field values\. Typically, they're used as arguments of CLEM functions\.
<!-- </article "role="article" "> -->
|
EE838EA978F9A0B0265A8D2B35FF2F64D00A1738 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/collection.html?context=cdpaas&locale=en | Collection node (SPSS Modeler) | Collection node
Collections are similar to histograms, but collections show the distribution of values for one numeric field relative to the values of another, rather than the occurrence of values for a single field. A collection is useful for illustrating a variable or field whose values change over time.
Using 3-D graphing, you can also include a symbolic axis displaying distributions by category. Two-dimensional collections are shown as stacked bar charts, with overlays where used.
| # Collection node #
Collections are similar to histograms, but collections show the distribution of values for one numeric field relative to the values of another, rather than the occurrence of values for a single field\. A collection is useful for illustrating a variable or field whose values change over time\.
Using 3\-D graphing, you can also include a symbolic axis displaying distributions by category\. Two\-dimensional collections are shown as stacked bar charts, with overlays where used\.
<!-- </article "role="article" "> -->
|
5A8AA187972BA8A711AC91447F668B233E580C8C | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/coxreg.html?context=cdpaas&locale=en | Cox node (SPSS Modeler) | Cox node
Cox Regression builds a predictive model for time-to-event data. The model produces a survival function that predicts the probability that the event of interest has occurred at a given time t for given values of the predictor variables. The shape of the survival function and the regression coefficients for the predictors are estimated from observed subjects; the model can then be applied to new cases that have measurements for the predictor variables.
Note that information from censored subjects, that is, those that do not experience the event of interest during the time of observation, contributes usefully to the estimation of the model.
Example. As part of its efforts to reduce customer churn, a telecommunications company is interested in modeling the time to churn in order to determine the factors that are associated with customers who are quick to switch to another service. To this end, a random sample of customers is selected, and their time spent as customers (whether or not they are still active customers) and various demographic fields are pulled from the database.
Requirements. You need one or more input fields, exactly one target field, and you must specify a survival time field within the Cox node. The target field should be coded so that the "false" value indicates survival and the "true" value indicates that the event of interest has occurred; it must have a measurement level of Flag, with string or integer storage. (Storage can be converted using a Filler or Derive node if necessary.) Fields set to Both or None are ignored. Fields used in the model must have their types fully instantiated. The survival time can be any numeric field. Note: On scoring a Cox Regression model, an error is reported if empty strings in categorical variables are used as input to model building. Avoid using empty strings as input.
Dates & Times. Date & Time fields cannot be used to directly define the survival time; if you have Date & Time fields, you should use them to create a field containing survival times, based upon the difference between the date of entry into the study and the observation date.
Kaplan-Meier Analysis. Cox regression can be performed with no input fields. This is equivalent to a Kaplan-Meier analysis.
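The statistical idea behind the node (Cox proportional hazards with censored observations) can be sketched outside SPSS Modeler with the open-source lifelines package. The data and field names below are invented for illustration; the node itself requires no Python code.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Invented churn data: 'tenure' is the survival time, 'churned' is the flag
# target (1 = event occurred, 0 = censored, i.e. still an active customer).
df = pd.DataFrame({
    "tenure":         [2, 14, 30, 45, 60, 72],
    "churned":        [1, 1, 0, 1, 0, 0],
    "age":            [23, 35, 41, 29, 52, 47],
    "monthly_charge": [80, 65, 40, 90, 30, 35],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="tenure", event_col="churned")
cph.print_summary()   # regression coefficients and hazard ratios per predictor

# Predicted survival curves for new customers with known predictor values
new_customers = df.drop(columns=["tenure", "churned"]).head(2)
print(cph.predict_survival_function(new_customers))
```

Fitting with no predictors at all corresponds to the Kaplan-Meier case mentioned above.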
| # Cox node #
Cox Regression builds a predictive model for time\-to\-event data\. The model produces a survival function that predicts the probability that the event of interest has occurred at a given time `t` for given values of the predictor variables\. The shape of the survival function and the regression coefficients for the predictors are estimated from observed subjects; the model can then be applied to new cases that have measurements for the predictor variables\.
Note that information from censored subjects, that is, those that do not experience the event of interest during the time of observation, contributes usefully to the estimation of the model\.
Example\. As part of its efforts to reduce customer churn, a telecommunications company is interested in modeling the time to churn in order to determine the factors that are associated with customers who are quick to switch to another service\. To this end, a random sample of customers is selected, and their time spent as customers (whether or not they are still active customers) and various demographic fields are pulled from the database\.
Requirements\. You need one or more input fields, exactly one target field, and you must specify a survival time field within the Cox node\. The target field should be coded so that the "false" value indicates survival and the "true" value indicates that the event of interest has occurred; it must have a measurement level of `Flag`, with string or integer storage\. (Storage can be converted using a Filler or Derive node if necessary\.) Fields set to `Both` or `None` are ignored\. Fields used in the model must have their types fully instantiated\. The survival time can be any numeric field\. Note: On scoring a Cox Regression model, an error is reported if empty strings in categorical variables are used as input to model building\. Avoid using empty strings as input\.
Dates & Times\. Date & Time fields cannot be used to directly define the survival time; if you have Date & Time fields, you should use them to create a field containing survival times, based upon the difference between the date of entry into the study and the observation date\.
Kaplan\-Meier Analysis\. Cox regression can be performed with no input fields\. This is equivalent to a Kaplan\-Meier analysis\.
<!-- </article "role="article" "> -->
|
67B99E436854F015A9DB19C775639BA4BB4D5F9B | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/cplex.html?context=cdpaas&locale=en | CPLEX Optimization node (SPSS Modeler) | CPLEX Optimization node
With the CPLEX Optimization node, you can use CPLEX-based complex mathematical optimization via an Optimization Programming Language (OPL) model file.
For more information about CPLEX optimization and OPL, see the [IBM ILOG CPLEX Optimization Studio documentation](https://www.ibm.com/support/knowledgecenter/SSSA5P).
When outputting the data generated by the CPLEX Optimization node, you can output the original data from the data sources together as single indexes, or as multiple dimensional indexes of the result.
Note:
* When running a flow containing a CPLEX Optimization node, the CPLEX library has a limitation of 1000 variables and 1000 constraints.
| # CPLEX Optimization node #
With the CPLEX Optimization node, you can use CPLEX\-based complex mathematical optimization via an Optimization Programming Language (OPL) model file\.
For more information about CPLEX optimization and OPL, see the [IBM ILOG CPLEX Optimization Studio documentation](https://www.ibm.com/support/knowledgecenter/SSSA5P)\.
When outputting the data generated by the CPLEX Optimization node, you can output the original data from the data sources together as single indexes, or as multiple dimensional indexes of the result\.
Note:
<!-- <ul> -->
* When running a flow containing a CPLEX Optimization node, the CPLEX library has a limitation of 1000 variables and 1000 constraints\.
<!-- </ul> -->
<!-- </article "role="article" "> -->
|
9FA71067981E4FC0D6F68A14C91C694DC4C2AF25 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/dataassetexport.html?context=cdpaas&locale=en | Data Asset Export node (SPSS Modeler) | Data Asset Export node
You can use the Data Asset Export node to write to remote data sources using connections or write data to a project (delimited or .sav).
Double-click the node to open its properties. Various options are available, described as follows.
After running the node, you can find the data at the export location you specified.
| # Data Asset Export node #
You can use the Data Asset Export node to write to remote data sources using connections or write data to a project (delimited or \.sav)\.
Double\-click the node to open its properties\. Various options are available, described as follows\.
After running the node, you can find the data at the export location you specified\.
<!-- </article "role="article" "> -->
|
C70BB33E4E6792511DC4E7D88536017E64BCD0F1 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/dataassetimport.html?context=cdpaas&locale=en | Data Asset node (SPSS Modeler) | Data Asset node
You can use the Data Asset import node to pull in data from remote data sources using connections or from your local computer. First, you must create the connection.
Note: For connections to a Planning Analytics database, you must choose a view (not a cube).
You can also pull in data from a local data file (.csv, .txt, .json, .xls, .xlsx, .sav, and .sas are supported). Only the first sheet is imported from spreadsheets. In the node's properties, under DATA, select one or more data files to upload. You can also simply drag-and-drop the data file from your local file system onto your canvas.
Note: You can import a stream (.str) into watsonx.ai that was created in SPSS Modeler Subscription or SPSS Modeler client. If the imported stream contains one or more import or export nodes, you'll be prompted to convert the nodes. See [Importing an SPSS Modeler stream](https://dataplatform.cloud.ibm.com/docs/content/wsd/migration.html).
| # Data Asset node #
You can use the Data Asset import node to pull in data from remote data sources using connections or from your local computer\. First, you must create the connection\.
Note: For connections to a Planning Analytics database, you must choose a view (not a cube)\.
You can also pull in data from a local data file (\.csv, \.txt, \.json, \.xls, \.xlsx, \.sav, and \.sas are supported)\. Only the first sheet is imported from spreadsheets\. In the node's properties, under DATA, select one or more data files to upload\. You can also simply drag\-and\-drop the data file from your local file system onto your canvas\.
Note: You can import a stream (\.str) into watsonx\.ai that was created in SPSS Modeler Subscription or SPSS Modeler client\. If the imported stream contains one or more import or export nodes, you'll be prompted to convert the nodes\. See [Importing an SPSS Modeler stream](https://dataplatform.cloud.ibm.com/docs/content/wsd/migration.html)\.
<!-- </article "role="article" "> -->
|
7F4648FD3E7F8564C98CF142E0E09E23E8097A9E | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/dataaudit.html?context=cdpaas&locale=en | Data Audit node (SPSS Modeler) | Data Audit node
The Data Audit node provides a comprehensive first look at the data you bring to SPSS Modeler, presented in an interactive, easy-to-read matrix that can be sorted and used to generate full-size graphs.
When you run a Data Audit node, interactive output is generated that includes:
* Information such as summary statistics, histograms, box plots, bar charts, pie charts, and more that may be useful in gaining a preliminary understanding of the data.
* Information about outliers, extremes, and missing values.
Figure 1. Data Audit node output example

Figure 2. Data Audit node output example

Figure 3. Data Audit node output example

Figure 4. Data Audit node output example

Figure 5. Data Audit node output example

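A rough, hand-rolled sketch of the audit's summary statistics, missing-value counts, and a simple outlier rule is shown below in pandas; the data and the 3-standard-deviation threshold are invented for illustration and do not reproduce the node's output.

```python
import pandas as pd
import numpy as np

# Invented data containing a missing value and one unusually large income
df = pd.DataFrame({
    "age":    [34, 45, 29, np.nan, 41, 38],
    "income": [42000, 51000, 39000, 47000, 940000, 44000],
    "region": ["north", "south", "south", "north", "east", "south"],
})

print(df.describe(include="all"))   # summary statistics per field
print(df.isna().sum())              # missing values per field

# A simple outlier check: flag values more than 3 standard deviations from
# the mean. (With so few rows, one large value inflates the standard
# deviation, so this rule may flag nothing on this tiny sample.)
numeric = df.select_dtypes("number")
outliers = (numeric - numeric.mean()).abs() > 3 * numeric.std()
print(outliers.sum())               # count of flagged values per numeric field
```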
| # Data Audit node #
The Data Audit node provides a comprehensive first look at the data you bring to SPSS Modeler, presented in an interactive, easy\-to\-read matrix that can be sorted and used to generate full\-size graphs\.
When you run a Data Audit node, interactive output is generated that includes:
<!-- <ul> -->
* Information such as summary statistics, histograms, box plots, bar charts, pie charts, and more that may be useful in gaining a preliminary understanding of the data\.
* Information about outliers, extremes, and missing values\.
<!-- </ul> -->
Figure 1\. Data Audit node output example

Figure 2\. Data Audit node output example

Figure 3\. Data Audit node output example

Figure 4\. Data Audit node output example

Figure 5\. Data Audit node output example

<!-- </article "role="article" "> -->
|
1A5F15E64AABDCA9E2785588E76F3EBE22A1C426 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/decisionlist.html?context=cdpaas&locale=en | Decision List node (SPSS Modeler) | Decision List node
Decision List models identify subgroups or segments that show a higher or lower likelihood of a binary (yes or no) outcome relative to the overall sample.
For example, you might look for customers who are least likely to churn or most likely to say yes to a particular offer or campaign. The Decision List Viewer gives you complete control over the model, enabling you to edit segments, add your own business rules, specify how each segment is scored, and customize the model in a number of other ways to optimize the proportion of hits across all segments. As such, it is particularly well-suited for generating mailing lists or otherwise identifying which records to target for a particular campaign. You can also use multiple mining tasks to combine modeling approaches—for example, by identifying high- and low-performing segments within the same model and including or excluding each in the scoring stage as appropriate.
| # Decision List node #
Decision List models identify subgroups or segments that show a higher or lower likelihood of a binary (yes or no) outcome relative to the overall sample\.
For example, you might look for customers who are least likely to churn or most likely to say yes to a particular offer or campaign\. The Decision List Viewer gives you complete control over the model, enabling you to edit segments, add your own business rules, specify how each segment is scored, and customize the model in a number of other ways to optimize the proportion of hits across all segments\. As such, it is particularly well\-suited for generating mailing lists or otherwise identifying which records to target for a particular campaign\. You can also use multiple mining tasks to combine modeling approaches—for example, by identifying high\- and low\-performing segments within the same model and including or excluding each in the scoring stage as appropriate\.
<!-- </article "role="article" "> -->
|
4D299EFFF5B982097A5B9D48EA16041E4820A8BB | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/derive.html?context=cdpaas&locale=en | Derive node (SPSS Modeler) | Derive node
One of the most powerful features in watsonx.ai is the ability to modify data values and derive new fields from existing data. During lengthy data mining projects, it is common to perform several derivations, such as extracting a customer ID from a string of Web log data or creating a customer lifetime value based on transaction and demographic data. All of these transformations can be performed using a variety of field operations nodes.
Several nodes provide the ability to derive new fields:
* The Derive node modifies data values or creates new fields from one or more existing fields. It creates fields of type formula, flag, nominal, state, count, and conditional.
* The Reclassify node transforms one set of categorical values to another. Reclassification is useful for collapsing categories or regrouping data for analysis.
* The Binning node automatically creates new nominal (set) fields based on the values of one or more existing continuous (numeric range) fields. For example, you can transform a continuous income field into a new categorical field containing groups of income as deviations from the mean. After you create bins for the new field, you can generate a Derive node based on the cut points.
* The Set to Flag node derives multiple flag fields based on the categorical values defined for one or more nominal fields.
* The Restructure node converts a nominal or flag field into a group of fields that can be populated with the values of yet another field. For example, given a field named payment type, with values of credit, cash, and debit, three new fields would be created (credit, cash, debit), each of which might contain the value of the actual payment made.
Tip: The Control Language for Expression Manipulation (CLEM) is a powerful tool you can use to analyze and manipulate the data used in your flows. For example, you might use CLEM in a node to derive values. For more information, see the [CLEM (legacy) language reference](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_language_reference.html).
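As a rough guide to what these operations amount to, the following pandas sketch shows approximate equivalents with invented field names; inside SPSS Modeler you would express the Derive formulas in CLEM rather than Python.

```python
import pandas as pd

df = pd.DataFrame({
    "price":        [12.5, 40.0, 7.25],
    "quantity":     [2, 1, 4],
    "payment_type": ["credit", "cash", "debit"],
    "income":       [28000, 95000, 51000],
})

# Derive (formula): a new field computed from existing fields
df["total"] = df["price"] * df["quantity"]

# Derive (flag): a true/false field based on a condition
df["big_order"] = df["total"] > 30

# Reclassify: collapse categories into broader groups
df["payment_group"] = df["payment_type"].map(
    {"credit": "card", "debit": "card", "cash": "cash"})

# Binning: turn a continuous field into categories
df["income_band"] = pd.cut(df["income"], bins=3, labels=["low", "medium", "high"])

# Set to Flag / Restructure: one new field per category value
flags = pd.get_dummies(df["payment_type"], prefix="pay")
print(df.join(flags))
```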
| # Derive node #
One of the most powerful features in watsonx\.ai is the ability to modify data values and derive new fields from existing data\. During lengthy data mining projects, it is common to perform several derivations, such as extracting a customer ID from a string of Web log data or creating a customer lifetime value based on transaction and demographic data\. All of these transformations can be performed using a variety of field operations nodes\.
Several nodes provide the ability to derive new fields:
<!-- <ul> -->
* The Derive node modifies data values or creates new fields from one or more existing fields\. It creates fields of type formula, flag, nominal, state, count, and conditional\.
* The Reclassify node transforms one set of categorical values to another\. Reclassification is useful for collapsing categories or regrouping data for analysis\.
* The Binning node automatically creates new nominal (set) fields based on the values of one or more existing continuous (numeric range) fields\. For example, you can transform a continuous income field into a new categorical field containing groups of income as deviations from the mean\. After you create bins for the new field, you can generate a Derive node based on the cut points\.
* The Set to Flag node derives multiple flag fields based on the categorical values defined for one or more nominal fields\.
* The Restructure node converts a nominal or flag field into a group of fields that can be populated with the values of yet another field\. For example, given a field named `payment type`, with values of `credit`, `cash`, and `debit`, three new fields would be created (`credit`, `cash`, `debit`), each of which might contain the value of the actual payment made\.
<!-- </ul> -->
Tip: The Control Language for Expression Manipulation (CLEM) is a powerful tool you can use to analyze and manipulate the data used in your flows\. For example, you might use CLEM in a node to derive values\. For more information, see the [CLEM (legacy) language reference](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_language_reference.html)\.
<!-- </article "role="article" "> -->
|
20CFE34D5494AB0AE2EF8B6F65396EDBF667F688 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/derive_stb.html?context=cdpaas&locale=en | Space-Time-Boxes node (SPSS Modeler) | Space-Time-Boxes node
Space-Time-Boxes (STB) are an extension of Geohashed spatial locations. More specifically, an STB is an alphanumeric string that represents a regularly shaped region of space and time.
For example, the STB dr5ru7|2013-01-01 00:00:00|2013-01-01 00:15:00 is made up of the following three parts:
* The geohash dr5ru7
* The start timestamp 2013-01-01 00:00:00
* The end timestamp 2013-01-01 00:15:00
As an example, you could use space and time information to improve confidence that two entities are the same because they are virtually in the same place at the same time. Alternatively, you could improve the accuracy of relationship identification by showing that two entities are related due to their proximity in space and time.
In the node properties, you can choose the Individual Records or Hangouts mode as appropriate for your requirements. Both modes require the same basic details, as follows:
Latitude field. Select the field that identifies the latitude (in WGS84 coordinate system).
Longitude field. Select the field that identifies the longitude (in WGS84 coordinate system).
Timestamp field. Select the field that identifies the time or date.
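To make the STB format concrete, the sketch below assembles geohash|start|end keys from latitude, longitude, and timestamp fields. It assumes the third-party pygeohash package and 15-minute time bins; both are illustrative choices, and the node derives STBs for you.

```python
import pandas as pd
import pygeohash as pgh   # assumed third-party geohash library

df = pd.DataFrame({
    "latitude":  [40.7128, 40.7130],
    "longitude": [-74.0060, -74.0062],
    "timestamp": pd.to_datetime(["2013-01-01 00:05:00", "2013-01-01 00:12:00"]),
})

# Spatial part: a 6-character geohash of each location
df["geohash"] = [pgh.encode(lat, lon, precision=6)
                 for lat, lon in zip(df["latitude"], df["longitude"])]

# Temporal part: the 15-minute window containing each timestamp
start = df["timestamp"].dt.floor("15min")
end = start + pd.Timedelta(minutes=15)

# Combine into an STB-style string: geohash|window start|window end
df["stb"] = (df["geohash"] + "|"
             + start.dt.strftime("%Y-%m-%d %H:%M:%S") + "|"
             + end.dt.strftime("%Y-%m-%d %H:%M:%S"))
print(df["stb"])
```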
| # Space\-Time\-Boxes node #
Space\-Time\-Boxes (STB) are an extension of Geohashed spatial locations\. More specifically, an STB is an alphanumeric string that represents a regularly shaped region of space and time\.
For example, the STB dr5ru7\|2013\-01\-01 00:00:00\|2013\-01\-01 00:15:00 is made up of the following three parts:
<!-- <ul> -->
* The geohash dr5ru7
* The start timestamp 2013\-01\-01 00:00:00
* The end timestamp 2013\-01\-01 00:15:00
<!-- </ul> -->
As an example, you could use space and time information to improve confidence that two entities are the same because they are virtually in the same place at the same time\. Alternatively, you could improve the accuracy of relationship identification by showing that two entities are related due to their proximity in space and time\.
In the node properties, you can choose the Individual Records or Hangouts mode as appropriate for your requirements\. Both modes require the same basic details, as follows:
Latitude field\. Select the field that identifies the latitude (in WGS84 coordinate system)\.
Longitude field\. Select the field that identifies the longitude (in WGS84 coordinate system)\.
Timestamp field\. Select the field that identifies the time or date\.
<!-- </article "role="article" "> -->
|
909B04011F4C2211D6D945EC82217E3F89A79BD7 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/disable_nodes.html?context=cdpaas&locale=en | Disabling nodes in a flow (SPSS Modeler) | Disabling nodes in a flow
You can disable process nodes that have a single input so that they're ignored when the flow runs. This saves you from having to remove or bypass the node and means you can leave it connected to the remaining nodes.
You can still open and edit the node settings; however, any changes will not take effect until you enable the node again.
For example, you might use a Filter node to filter several fields, and then build models based on the reduced data set. If you want to also build the same models without fields being filtered, to see if they improve the model results, you can disable the Filter node. When you disable the Filter node, the connections to the modeling nodes pass directly through from the Derive node to the Type node.
| # Disabling nodes in a flow #
You can disable process nodes that have a single input so that they're ignored when the flow runs\. This saves you from having to remove or bypass the node and means you can leave it connected to the remaining nodes\.
You can still open and edit the node settings; however, any changes will not take effect until you enable the node again\.
For example, you might use a Filter node to filter several fields, and then build models based on the reduced data set\. If you want to also build the same models *without* fields being filtered, to see if they improve the model results, you can disable the Filter node\. When you disable the Filter node, the connections to the modeling nodes pass directly through from the Derive node to the Type node\.
<!-- </article "role="article" "> -->
|
338F12B976B522389F5FABE438280565490FB280 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/discriminant.html?context=cdpaas&locale=en | Discriminant node (SPSS Modeler) | Discriminant node
Discriminant analysis builds a predictive model for group membership. The model is composed of a discriminant function (or, for more than two groups, a set of discriminant functions) based on linear combinations of the predictor variables that provide the best discrimination between the groups. The functions are generated from a sample of cases for which group membership is known; the functions can then be applied to new cases that have measurements for the predictor variables but have unknown group membership.
Example. A telecommunications company can use discriminant analysis to classify customers into groups based on usage data. This allows them to score potential customers and target those who are most likely to be in the most valuable groups.
Requirements. You need one or more input fields and exactly one target field. The target must be a categorical field (with a measurement level of Flag or Nominal) with string or integer storage. (Storage can be converted using a Filler or Derive node if necessary.) Fields set to Both or None are ignored. Fields used in the model must have their types fully instantiated.
Strengths. Discriminant analysis and Logistic Regression are both suitable classification models. However, Discriminant analysis makes more assumptions about the input fields—for example, they are normally distributed and should be continuous—and it gives better results if those requirements are met, especially if the sample size is small.
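The underlying technique can be sketched with scikit-learn's LinearDiscriminantAnalysis. The usage data and group labels below are invented; this illustrates the method, not the node's implementation.

```python
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Invented usage data with a known customer group as the target
df = pd.DataFrame({
    "minutes": [120, 300, 80, 450, 60, 520],
    "data_gb": [1.0, 5.5, 0.5, 8.0, 0.2, 9.5],
    "group":   ["basic", "premium", "basic", "premium", "basic", "premium"],
})

X = df[["minutes", "data_gb"]]
y = df["group"]

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)

# Score new customers whose group membership is unknown
new_customers = pd.DataFrame({"minutes": [200, 500], "data_gb": [2.0, 7.0]})
print(lda.predict(new_customers))          # predicted group
print(lda.predict_proba(new_customers))    # probability of each group
```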
| # Discriminant node #
Discriminant analysis builds a predictive model for group membership\. The model is composed of a discriminant function (or, for more than two groups, a set of discriminant functions) based on linear combinations of the predictor variables that provide the best discrimination between the groups\. The functions are generated from a sample of cases for which group membership is known; the functions can then be applied to new cases that have measurements for the predictor variables but have unknown group membership\.
Example\. A telecommunications company can use discriminant analysis to classify customers into groups based on usage data\. This allows them to score potential customers and target those who are most likely to be in the most valuable groups\.
Requirements\. You need one or more input fields and exactly one target field\. The target must be a categorical field (with a measurement level of `Flag` or `Nominal`) with string or integer storage\. (Storage can be converted using a Filler or Derive node if necessary\.) Fields set to `Both` or `None` are ignored\. Fields used in the model must have their types fully instantiated\.
Strengths\. Discriminant analysis and Logistic Regression are both suitable classification models\. However, Discriminant analysis makes more assumptions about the input fields—for example, they are normally distributed and should be continuous—and it gives better results if those requirements are met, especially if the sample size is small\.
<!-- </article "role="article" "> -->
|
5C597F82EC8484220A6FB3193DC78B878E8698F6 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/distinct.html?context=cdpaas&locale=en | Distinct node (SPSS Modeler) | Distinct node
Duplicate records in a data set must be removed before data mining can begin. For example, in a marketing database, individuals may appear multiple times with different address or company information. You can use the Distinct node to find or remove duplicate records in your data, or to create a single, composite record from a group of duplicate records.
To use the Distinct node, you must first define a set of key fields that determine when two records are considered to be duplicates.
If you do not pick all your fields as key fields, then two "duplicate" records may not be truly identical because they can still differ in the values of the remaining fields. In this case, you can also define a sort order that is applied within each group of duplicate records. This sort order gives you fine control over which record is treated as the first within a group. Otherwise, all duplicates are considered to be interchangeable and any record might be selected. The incoming order of the records is not taken into account, so it doesn't help to use an upstream Sort node (see "Sorting records within the Distinct node" on this page).
Mode. Specify whether to create a composite record, or to either include or exclude (discard) the first record.
* Create a composite record for each group. Provides a way for you to aggregate non-numeric fields. Selecting this option makes the Composite tab available where you specify how to create the composite records.
* Include only the first record in each group. Selects the first record from each group of duplicate records and discards the rest. The first record is determined by the sort order defined under the setting Within groups, sort records by, and not by the incoming order of the records.
* Discard only the first record in each group. Discards the first record from each group of duplicate records and selects the remainder instead. The first record is determined by the sort order defined under the setting Within groups, sort records by, and not by the incoming order of the records. This option is useful for finding duplicates in your data so that you can examine them later in the flow.
Key fields for grouping. Lists the field or fields used to determine whether records are identical. You can:
* Add fields to this list using the field picker button.
* Delete fields from the list by using the red X (remove) button.
Within groups, sort records by. Lists the fields used to determine how records are sorted within each group of duplicates, and whether they are sorted in ascending or descending order. You can:
* Add fields to this list using the field picker button.
* Delete fields from the list by using the red X (remove) button.
* Move fields using the up or down buttons, if you are sorting by more than one field.
You must specify a sort order if you have chosen to include or exclude the first record in each group, and it matters to you which record is treated as the first.
You may also want to specify a sort order if you have chosen to create a composite record, for certain options on the Composite tab.
Specify whether, by default, records are sorted in Ascending or Descending order of the sort key values.
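A rough pandas equivalent of the Include and Discard modes, with invented field names, is shown below; note how an explicit sort determines which record is treated as the first in each group.

```python
import pandas as pd

df = pd.DataFrame({
    "customer_id": [101, 101, 102, 103, 103],
    "city":        ["Leeds", "Leeds", "York", "Hull", "Hull"],
    "updated":     pd.to_datetime(
        ["2020-01-05", "2021-03-02", "2020-06-10", "2019-11-01", "2021-07-20"]),
})

key_fields = ["customer_id", "city"]

# "Include only the first record in each group": sort so the record you want
# treated as first comes first, then keep one row per key combination.
latest_first = df.sort_values("updated", ascending=False)
deduplicated = latest_first.drop_duplicates(subset=key_fields, keep="first")

# "Discard only the first record in each group": everything except that row.
discarded_first = latest_first.drop(deduplicated.index)

print(deduplicated)
print(discarded_first)
```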
| # Distinct node #
Duplicate records in a data set must be removed before data mining can begin\. For example, in a marketing database, individuals may appear multiple times with different address or company information\. You can use the Distinct node to find or remove duplicate records in your data, or to create a single, composite record from a group of duplicate records\.
To use the Distinct node, you must first define a set of key fields that determine when two records are considered to be duplicates\.
If you do not pick all your fields as key fields, then two "duplicate" records may not be truly identical because they can still differ in the values of the remaining fields\. In this case, you can also define a sort order that is applied within each group of duplicate records\. This sort order gives you fine control over which record is treated as the first within a group\. Otherwise, all duplicates are considered to be interchangeable and any record might be selected\. The incoming order of the records is not taken into account, so it doesn't help to use an upstream Sort node (see "Sorting records within the Distinct node" on this page)\.
Mode\. Specify whether to create a composite record, or to either include or exclude (discard) the first record\.
<!-- <ul> -->
* Create a composite record for each group\. Provides a way for you to aggregate non\-numeric fields\. Selecting this option makes the Composite tab available where you specify how to create the composite records\.
* Include only the first record in each group\. Selects the first record from each group of duplicate records and discards the rest\. The first record is determined by the sort order defined under the setting Within groups, sort records by, and not by the incoming order of the records\.
* Discard only the first record in each group\. Discards the first record from each group of duplicate records and selects the remainder instead\. The first record is determined by the sort order defined under the setting Within groups, sort records by, and not by the incoming order of the records\. This option is useful for finding duplicates in your data so that you can examine them later in the flow\.
<!-- </ul> -->
Key fields for grouping\. Lists the field or fields used to determine whether records are identical\. You can:
<!-- <ul> -->
* Add fields to this list using the field picker button\.
* Delete fields from the list by using the red X (remove) button\.
<!-- </ul> -->
Within groups, sort records by\. Lists the fields used to determine how records are sorted within each group of duplicates, and whether they are sorted in ascending or descending order\. You can:
<!-- <ul> -->
* Add fields to this list using the field picker button\.
* Delete fields from the list by using the red X (remove) button\.
* Move fields using the up or down buttons, if you are sorting by more than one field\.
<!-- </ul> -->
You must specify a sort order if you have chosen to include or exclude the first record in each group, and it matters to you which record is treated as the first\.
You may also want to specify a sort order if you have chosen to create a composite record, for certain options on the Composite tab\.
Specify whether, by default, records are sorted in Ascending or Descending order of the sort key values\.
<!-- </article "role="article" "> -->
|
570AF2AAF268A3DF1D959D54A5BE1790DC43EAD5 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/distribution.html?context=cdpaas&locale=en | Distribution node (SPSS Modeler) | Distribution node
A distribution graph or table shows the occurrence of symbolic (non-numeric) values, such as mortgage type or gender, in a dataset. A typical use of the Distribution node is to show imbalances in the data that you can rectify by using a Balance node before creating a model. You can automatically generate a Balance node using the Generate menu in the distribution graph or table window.
Note: To show the occurrence of numeric values, you should use a Histogram node.
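For a quick sense of what the node reports, the occurrence counts for a symbolic field can be sketched in pandas with invented values; a large imbalance in such counts is the kind of thing you might correct with a Balance node.

```python
import pandas as pd

# Invented data with a symbolic (non-numeric) field
df = pd.DataFrame({"mortgage_type": ["fixed", "fixed", "variable", "fixed",
                                     "interest_only", "fixed", "variable"]})

print(df["mortgage_type"].value_counts())                 # counts per value
print(df["mortgage_type"].value_counts(normalize=True))   # proportions per value
```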
| # Distribution node #
A distribution graph or table shows the occurrence of symbolic (non\-numeric) values, such as mortgage type or gender, in a dataset\. A typical use of the Distribution node is to show imbalances in the data that you can rectify by using a Balance node before creating a model\. You can automatically generate a Balance node using the Generate menu in the distribution graph or table window\.
Note: To show the occurrence of numeric values, you should use a Histogram node\.
<!-- </article "role="article" "> -->
|
D5D31FDA0EEBFCDD87005ED54EBEDFD164FA073B | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/dvcharts.html?context=cdpaas&locale=en | Charts node (SPSS Modeler) | Charts node
With the Charts node, you can launch the chart builder and create chart definitions to save with your flow. Then when you run the node, chart output is generated.
The Charts node is available under the Graphs section on the node palette. After adding a Charts node to your flow, double-click it to open the properties pane. Then click Launch Chart Builder to open the chart builder and create one or more chart definitions to associate with the node. See [Visualizing your data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/visualizations.html) for details about creating charts.
Figure 1. Example charts
 Notes:
* When you create a chart, it uses a sample of your data. After clicking Save and close to save the chart definition and return to your flow, the Charts node will then use all of your data when you run it.
* Chart definitions are listed in the node properties panel, with icons available for editing them or removing them.
* When you right-click a Charts node to run it, the defined chart (or charts) is built and added to the Outputs pane. Open the chart output to interact with it by hovering over it, zooming in or out, or downloading the chart as an image file (.png).
* When creating a chart, you can click Back to flow to close the chart builder and return to your flow. But you can't run the Charts node until you save a chart definition.
| # Charts node #
With the Charts node, you can launch the chart builder and create chart definitions to save with your flow\. Then when you run the node, chart output is generated\.
The Charts node is available under the Graphs section on the node palette\. After adding a Charts node to your flow, double\-click it to open the properties pane\. Then click Launch Chart Builder to open the chart builder and create one or more chart definitions to associate with the node\. See [Visualizing your data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/visualizations.html) for details about creating charts\.
Figure 1\. Example charts
 Notes:
<!-- <ul> -->
* When you create a chart, it uses a sample of your data\. After clicking Save and close to save the chart definition and return to your flow, the Charts node will then use all of your data when you run it\.
* Chart definitions are listed in the node properties panel, with icons available for editing them or removing them\.
* When you right\-click a Charts node to run it, the defined chart (or charts) is built and added to the Outputs pane\. Open the chart output to interact with it by hovering over it, zooming in or out, or downloading the chart as an image file (\.png)\.
* When creating a chart, you can click Back to flow to close the chart builder and return to your flow\. But you can't run the Charts node until you save a chart definition\.
<!-- </ul> -->
<!-- </article "role="article" "> -->
|
8C53BD47030C9BF4E7DBF1EA482CDED9CC8ABAD4 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/ensemble.html?context=cdpaas&locale=en | Ensemble node (SPSS Modeler) | Ensemble node
The Ensemble node combines two or more model nuggets to obtain more accurate predictions than can be gained from any of the individual models. By combining predictions from multiple models, limitations in individual models may be avoided, resulting in a higher overall accuracy. Models combined in this manner typically perform at least as well as the best of the individual models and often better.
This combining of nodes happens automatically in the Auto Classifier and Auto Numeric automated modeling nodes.
After using an Ensemble node, you can use an Analysis node or Evaluation node to compare the accuracy of the combined results with each of the input models. To do this, make sure the Filter out fields generated by ensembled models option is not selected in the Ensemble node settings.
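As a sketch of what combining predictions means in practice, the following code applies two simple rules (averaging predicted probabilities and majority voting) to invented scores from three models; the node offers a range of combining rules along these lines and manages the details for you.

```python
import numpy as np

# Invented churn probabilities from three separately trained models
# for the same five customers.
model_scores = np.array([
    [0.20, 0.85, 0.55, 0.10, 0.70],   # model 1
    [0.30, 0.90, 0.45, 0.05, 0.65],   # model 2
    [0.25, 0.80, 0.60, 0.15, 0.75],   # model 3
])

# Averaging: the ensembled score is the mean predicted probability
ensembled_score = model_scores.mean(axis=0)

# Majority voting on the hard predictions (threshold each model at 0.5)
votes = (model_scores >= 0.5).sum(axis=0)
ensembled_class = votes > model_scores.shape[0] / 2

print(ensembled_score)
print(ensembled_class)
```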
| # Ensemble node #
The Ensemble node combines two or more model nuggets to obtain more accurate predictions than can be gained from any of the individual models\. By combining predictions from multiple models, limitations in individual models may be avoided, resulting in a higher overall accuracy\. Models combined in this manner typically perform at least as well as the best of the individual models and often better\.
This combining of nodes happens automatically in the Auto Classifier and Auto Numeric automated modeling nodes\.
After using an Ensemble node, you can use an Analysis node or Evaluation node to compare the accuracy of the combined results with each of the input models\. To do this, make sure the Filter out fields generated by ensembled models option is not selected in the Ensemble node settings\.
<!-- </article "role="article" "> -->
|
4F733928B0F749FFDDF2E6DAEF646A0524C54D67 | https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/evaluation.html?context=cdpaas&locale=en | Evaluation node (SPSS Modeler) | Evaluation node
The Evaluation node offers an easy way to evaluate and compare predictive models to choose the best model for your application. Evaluation charts show how models perform in predicting particular outcomes. They work by sorting records based on the predicted value and confidence of the prediction, splitting the records into groups of equal size (quantiles), and then plotting the value of the business criterion for each quantile, from highest to lowest. Multiple models are shown as separate lines in the plot.
Outcomes are handled by defining a specific value or range of values as a hit. Hits usually indicate success of some sort (such as a sale to a customer) or an event of interest (such as a specific medical diagnosis). You can define hit criteria under the OPTIONS section of the node properties, or you can use the default hit criteria as follows:
* Flag output fields are straightforward; hits correspond to true values.
* For Nominal output fields, the first value in the set defines a hit.
* For Continuous output fields, hits equal values greater than the midpoint of the field's range.
There are six types of evaluation charts, each of which emphasizes a different evaluation criterion.
Evaluation charts can also be cumulative, so that each point equals the value for the corresponding quantile plus all higher quantiles. Cumulative charts usually convey the overall performance of models better, whereas noncumulative charts often excel at indicating particular problem areas for models.
Note: The Evaluation node doesn't support the use of commas in field names. If you have field names containing commas, you must either remove the commas or surround the field name in quotes.
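As a rough sketch of the quantile logic behind a cumulative gains chart, the following pandas code sorts invented scores, splits them into quintiles, and accumulates the hit rate; the node computes and plots this for you.

```python
import pandas as pd

# Invented scored data: one predicted probability per record plus the
# actual outcome (1 = hit, 0 = miss).
scored = pd.DataFrame({
    "predicted_prob": [0.95, 0.90, 0.80, 0.70, 0.60, 0.40, 0.30, 0.20, 0.10, 0.05],
    "actual":         [1,    1,    0,    1,    0,    0,    1,    0,    0,    0],
})

# Sort from highest to lowest prediction and split into quantiles (quintiles here)
scored = scored.sort_values("predicted_prob", ascending=False).reset_index(drop=True)
scored["quantile"] = pd.qcut(scored.index, q=5, labels=False) + 1

# Hits per quantile and the cumulative version used in a gains chart
hits = scored.groupby("quantile")["actual"].sum()
cumulative_gain = hits.cumsum() / scored["actual"].sum() * 100
print(cumulative_gain)   # % of all hits captured up to and including each quantile
```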
| # Evaluation node #
The Evaluation node offers an easy way to evaluate and compare predictive models to choose the best model for your application\. Evaluation charts show how models perform in predicting particular outcomes\. They work by sorting records based on the predicted value and confidence of the prediction, splitting the records into groups of equal size (quantiles), and then plotting the value of the business criterion for each quantile, from highest to lowest\. Multiple models are shown as separate lines in the plot\.
Outcomes are handled by defining a specific value or range of values as a hit\. Hits usually indicate success of some sort (such as a sale to a customer) or an event of interest (such as a specific medical diagnosis)\. You can define hit criteria under the OPTIONS section of the node properties, or you can use the default hit criteria as follows:
<!-- <ul> -->
* Flag output fields are straightforward; hits correspond to true values\.
* For Nominal output fields, the first value in the set defines a hit\.
* For Continuous output fields, hits equal values greater than the midpoint of the field's range\.
<!-- </ul> -->
There are six types of evaluation charts, each of which emphasizes a different evaluation criterion\.
Evaluation charts can also be cumulative, so that each point equals the value for the corresponding quantile plus all higher quantiles\. Cumulative charts usually convey the overall performance of models better, whereas noncumulative charts often excel at indicating particular problem areas for models\.
Note: The Evaluation node doesn't support the use of commas in field names\. If you have field names containing commas, you must either remove the commas or surround the field name in quotes\.
<!-- </article "role="article" "> -->
|