diff --git "a/corpus/corpus.csv" "b/corpus/corpus.csv"
deleted file mode 100644--- "a/corpus/corpus.csv"
+++ /dev/null
@@ -1,70856 +0,0 @@
-id,document_label,page_content
-81D740CEF3967C20721612B7866072EF240484E9,81D740CEF3967C20721612B7866072EF240484E9," Decision Optimization Java models
-
-You can create and run Decision Optimization models in Java by using the Watson Machine Learning REST API.
-
-You can build your Decision Optimization models in Java or you can use Java worker to package CPLEX, CPO, and OPL models.
-
-For more information about these models, see the following reference manuals.
-
-
-
-* [Java CPLEX reference documentation](https://www.ibm.com/docs/en/SSSA5P_22.1.1/ilog.odms.cplex.help/refjavacplex/html/overview-summary.html)
-* [Java CPO reference documentation](https://www.ibm.com/docs/en/SSSA5P_22.1.1/ilog.odms.cpo.help/refjavacpoptimizer/html/overview-summary.html)
-* [Java OPL reference documentation](https://www.ibm.com/docs/en/SSSA5P_22.1.1/ilog.odms.ide.help/refjavaopl/html/overview-summary.html)
-
-
-
-To package and deploy Java models in Watson Machine Learning, see [Deploying Java models for Decision Optimization](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployJava.html) and the boilerplate provided in the [Java worker GitHub](https://github.com/IBMDecisionOptimization/cplex-java-worker/blob/master/README.md).
-"
-6DBD14399B24F78CAFEC6225B77DAFAE357DDEE5,6DBD14399B24F78CAFEC6225B77DAFAE357DDEE5," Decision Optimization notebooks
-
-You can create and run Decision Optimization models in Python notebooks by using DOcplex, a native Python API for Decision Optimization. Several Decision Optimization notebooks are already available for you to use.
-
-The Decision Optimization environment currently supports Python 3.10. The following Python environments give you access to the Community Edition of the CPLEX engines. The Community Edition is limited to solving problems with up to 1000 constraints and 1000 variables, or with a search space of 1000 X 1000 for Constraint Programming problems.
-
-
-
-* Runtime 23.1 on Python 3.10 S/XS/XXS
-* Runtime 22.2 on Python 3.10 S/XS/XXS
-
-
-
-To run larger problems, select a runtime that includes the full CPLEX commercial edition. The Decision Optimization environment ( DOcplex) is available in the following runtimes (full CPLEX commercial edition):
-
-
-
-* NLP + DO runtime 23.1 on Python 3.10 with CPLEX 22.1.1.0
-* DO + NLP runtime 22.2 on Python 3.10 with CPLEX 20.1.0.1
-
-
-
-You can easily change environments (runtimes and Python version) inside a notebook by using the Environment tab (see [Changing the environment of a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.htmlchange-env)). Thus, you can formulate optimization models and test them with small data sets in one environment. Then, to solve models with bigger data sets, you can switch to a different environment, without having to rewrite or copy the notebook code.
-
-Multiple examples of Decision Optimization notebooks are available in the Samples, including:
-
-
-
-* The Sudoku example, a Constraint Programming example in which the objective is to solve a 9x9 Sudoku grid.
-* The Pasta Production Problem example, a Linear Programming example in which the objective is to minimize the production cost for some pasta products and to ensure that the customers' demand for the products is satisfied.
-
-
-
-These and more examples are also available in the jupyter folder of the [DO-samples](https://github.com/IBMDecisionOptimization/DO-Samples)
-
-All Decision Optimization notebooks use DOcplex.
-"
-277C8CB678CAF766466EDE03C506EB0A822FD400,277C8CB678CAF766466EDE03C506EB0A822FD400," Supported data sources in Decision Optimization
-
-Decision Optimization supports the following relational and nonrelational data sources on . watsonx.ai.
-
-
-
-"
-E990E009903E315FA6752E7E82C2634AF4A425B9,E990E009903E315FA6752E7E82C2634AF4A425B9," Ways to use Decision Optimization
-
-To build Decision Optimization models, you can create Python notebooks with DOcplex, a native Python API for Decision Optimization, or use the Decision Optimization experiment UI that has more benefits and features.
-"
-8892A757ECB2C4A02806A7B262712FF2E30CE044,8892A757ECB2C4A02806A7B262712FF2E30CE044," OPL models
-
-You can build OPL models in the Decision Optimization experiment UI in watsonx.ai.
-
-In this section:
-
-
-
-* [Inputs and Outputs](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/OPLmodels.html?context=cdpaas&locale=entopic_oplmodels__section_oplIO)
-* [Engine settings](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/OPLmodels.html?context=cdpaas&locale=entopic_oplmodels__engsettings)
-
-
-
-To create an OPL model in the experiment UI, select in the model selection window. You can also import OPL models from a file or import a scenario .zip file that contains the OPL model and the data. If you import from a file or scenario .zip file, the data must be in .csv format. However, you can import other file formats that you have as project assets into the experiment UI. You can also import data sets including connected data into your project from the model builder in the [Prepare data view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.htmlModelBuilderInterface__section_preparedata).
-
-For more information about the OPL language and engine parameters, see:
-
-
-
-* [OPL language reference manual](https://www.ibm.com/docs/en/SSSA5P_22.1.1/ilog.odms.ide.help/OPL_Studio/opllangref/topics/opl_langref_modeling_language.html)
-* [OPL Keywords](https://www.ibm.com/docs/en/SSSA5P_22.1.1/ilog.odms.ide.help/OPL_Studio/opllang_quickref/topics/opl_keywords_top.html)
-"
-8E56F0EFD08FF4A97E439EA3B8DE2B7AF1A302C9_0,8E56F0EFD08FF4A97E439EA3B8DE2B7AF1A302C9," Visualization view
-
-With the Decision Optimization experiment Visualization view, you can configure the graphical representation of input data and solutions for one or several scenarios.
-
-Quick links:
-
-
-
-* [Visualization view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/Visualization.html?context=cdpaas&locale=entopic_visualization__section-dashboard)
-* [Table search and filtering](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/Visualization.html?context=cdpaas&locale=entopic_visualization__section_tablefilter)
-* [Visualization widgets syntax](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/Visualization.html?context=cdpaas&locale=entopic_visualization__section_widgetssyntax)
-* [Visualization Editor](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/Visualization.html?context=cdpaas&locale=entopic_visualization__viseditor)
-* [Visualization pages](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/Visualization.html?context=cdpaas&locale=entopic_visualization__vispages)
-
-
-
-The Visualization view is common to all scenarios in a Decision Optimization experiment.
-
-For example, the following image shows the default bar chart that appears in the solution tab for the example that is used in the tutorial [Solving and analyzing a model: the diet problem](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Notebooks/solveModel.htmltask_mtg_n3q_m1b).
-
-
-
-"
-8E56F0EFD08FF4A97E439EA3B8DE2B7AF1A302C9_1,8E56F0EFD08FF4A97E439EA3B8DE2B7AF1A302C9,"The Visualization view helps you compare different scenarios to validate models and business decisions.
-
-For example, to show the two scenarios solved in this diet example tutorial, you can add another bar chart as follows:
-
-
-
-1. Click the chart widget and configure it by clicking the pencil icon.
-2. In the Chart widget editor, select Add scenario and choose scenario 1 (assuming that your current scenario is scenario 2) so that you have both scenario 1 and scenario 2 listed.
-3. In the Table field, select the Solution data option and select solution from the drop-down list.
-4. In the bar chart pane, select Descending for the Category order, Y-axis for the Bar type and click OK to close the Chart widget editor. A second bar chart is then displayed showing you the solution results for scenario 2.
-5. Re-edit the chart and select @Scenario in the Split by field of the Bar chart pane. You then obtain both scenarios in the same bar chart:
-
-
-
-.
-
-You can select many different types of charts in the Chart widget editor.
-
-Alternatively using the Vega Chart widget, you can similarly choose Solution data>solution to display the same data, select value and name in both the x and y fields in the Chart section of the Vega Chart widget editor. Then, in the Mark section, select @Scenario for the color field. This selection gives you the following bar chart with the two scenarios on the same y-axis, distinguished by different colors.
-
-.
-
-If you re-edit the chart and select @Scenario for the column facet, you obtain the two scenarios in separate charts side-by-side as follows:
-
-
-
-"
-8E56F0EFD08FF4A97E439EA3B8DE2B7AF1A302C9_2,8E56F0EFD08FF4A97E439EA3B8DE2B7AF1A302C9,"You can use many different types of charts that are available in the Mark field of the Vega Chart widget editor.
-
-You can also select the JSON tab in all the widget editors and configure your charts by using the JSON code. A more advanced example of JSON code is provided in the [Vega Chart widget specifications](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/Visualization.html?context=cdpaas&locale=entopic_visualization__section_hdc_5mm_33b) section.
-
-The following widgets are available:
-
-
-
-* [Notes widget](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/Visualization.html?context=cdpaas&locale=entopic_visualization__section_edc_5mm_33b)
-
-Add simple text notes to the Visualization view.
-* [Table widget](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/Visualization.html?context=cdpaas&locale=entopic_visualization__section_fdc_5mm_33b)
-
-Present input data and solution in tables, with a search and filtering feature. See [Table search and filtering](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/Visualization.html?context=cdpaas&locale=entopic_visualization__section_tablefilter).
-* [Charts widgets](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/Visualization.html?context=cdpaas&locale=entopic_visualization__section_alh_lfn_l2b)
-
-Present input data and solution in charts.
-"
-8E56F0EFD08FF4A97E439EA3B8DE2B7AF1A302C9_3,8E56F0EFD08FF4A97E439EA3B8DE2B7AF1A302C9,"* [Gantt chart widget](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/Visualization.html?context=cdpaas&locale=entopic_visualization__section_idc_5mm_33b)
-
-Display the solution to a scheduling problem (or any other type of suitable problem) in a Gantt chart.
-
-This widget is used automatically for scheduling problems that are modeled with the Modeling Assistant. You can edit this Gantt chart or create and configure new Gantt charts for any problem even for those models that don't use the Modeling Assistant.
-"
-33923FE20855D3EA3850294C0FB447EC3F1B7BDF_0,33923FE20855D3EA3850294C0FB447EC3F1B7BDF," Decision Optimization experiments
-
-If you use the Decision Optimization experiment UI, you can take advantage of its many features in this user-friendly environment. For example, you can create and solve models, produce reports, compare scenarios and save models ready for deployment with Watson Machine Learning.
-
-The Decision Optimization experiment UI facilitates workflow. Here you can:
-
-
-
-* Select and edit the data relevant for your optimization problem, see [Prepare data view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.htmlModelBuilderInterface__section_preparedata)
-* Create, import, edit and solve Python models in the Decision Optimization experiment UI, see [Decision Optimization notebook tutorial](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Notebooks/solveModel.htmltask_mtg_n3q_m1b)
-* Create, import, edit and solve models expressed in natural language with the Modeling Assistant, see [Modeling Assistant tutorial](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/exhousebuild.htmlcogusercase)
-* Create, import, edit and solve OPL models in the Decision Optimization experiment UI, see [OPL models](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/OPLmodels.htmltopic_oplmodels)
-* Generate a notebook from your model, work with it as a notebook then reload it as a model, see [Generating a notebook from a scenario](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.htmlModelBuilderInterface__generateNB) and [Overview](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.htmlModelBuilderInterface__section_overview)
-"
-33923FE20855D3EA3850294C0FB447EC3F1B7BDF_1,33923FE20855D3EA3850294C0FB447EC3F1B7BDF,"* Visualize data and solutions, see [Explore solution view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.htmlModelBuilderInterface__solution)
-* Investigate and compare solutions for multiple scenarios, see [Scenario pane](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.htmlModelBuilderInterface__scenariopanel) and [Overview](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.htmlModelBuilderInterface__section_overview)
-* Easily create and share reports with tables, charts and notes using widgets provided in the [Visualization Editor](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/Visualization.htmltopic_visualization)
-* Save models that are ready for deployment in Watson Machine Learning, see [Scenario pane](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.htmlModelBuilderInterface__scenariopanel) and [Overview](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.htmlModelBuilderInterface__section_overview)
-
-
-
-See the [Decision Optimization experiment UI comparison table](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/DOintro.htmlDOIntro__comparisontable) for a list of features available with and without the Decision Optimization experiment UI.
-
-See [Views and scenarios](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.htmlModelBuilderInterface) for a description of the user interface and scenario management.
-"
-497007D0D0ABAC3202BBF912A15BFC389066EBDA_0,497007D0D0ABAC3202BBF912A15BFC389066EBDA," Configuring environments and adding Python extensions
-
-You can change your default environment for Python and CPLEX in the experiment Overview.
-
-"
-497007D0D0ABAC3202BBF912A15BFC389066EBDA_1,497007D0D0ABAC3202BBF912A15BFC389066EBDA," Procedure
-
-To change the default environment for DOcplex and Modeling Assistant models:
-
-
-
-1. Open the Overview, click  to open the Information pane, and select the Environments tab.
-
-
-2. Expand the environment section according to your model type. For Python and Modeling Assistant models, expand Python environment. You can see the default Python environment (if one exists). To change the default environment for OPL, CPLEX, or CPO models, expand the appropriate environment section according to your model type and follow this same procedure.
-3. Expand the name of your environment, and select a different Python environment.
-4. Optional: To create a new environment:
-
-
-
-1. Select New environment for Python. A new window opens for you to define your new environment. 
-2. Enter a name, and select a CPLEX version, hardware specification, copies (number of nodes), Python version and (optionally) you can set Associate a Python extension to On to include any Python libraries that you want to add.
-3. Click New Python extension.
-4. Enter a name for your extension in the new Create a Python extension window that opens, and click Create.
-5. In the new Configure Python extension window that opens, you can set YAML code to On and enter or edit the provided YAML code.For example, use the provided template to add the custom libraries:
-
- Modify the following content to add a software customization to an environment.
- To remove an existing customization, delete the entire content and click Apply.
-
- Add conda channels on a new line after defaults, indented by two spaces and a hyphen.
-channels:
-- defaults
-
- To add packages through conda or pip, remove the comment on the following line.
- dependencies:
-
-"
-497007D0D0ABAC3202BBF912A15BFC389066EBDA_2,497007D0D0ABAC3202BBF912A15BFC389066EBDA," Add conda packages here, indented by two spaces and a hyphen.
- Remove the comment on the following line and replace sample package name with your package name:
- - a_conda_package=1.0
-
- Add pip packages here, indented by four spaces and a hyphen.
- Remove the comments on the following lines and replace sample package name with your package name.
- - pip:
- - a_pip_package==1.0
-
-You can also click Browse to add any Python libraries.
-
-For example, this image shows a dynamic programming Python library that is imported and YAML code set to On.
-
-Click Done.
-6. Click Create in the New environment window.
-
-
-
-Your chosen (or newly created) environment appears as ticked in the Python environments drop-down list in the Environments tab. The tick indicates that this is the default Python environment for all scenarios in your experiment.
-5. Select Manage experiment environments to see a detailed list of all existing environments for your experiment in the Environments tab.
-
-You can use the options provided by clicking the three vertical dots next to an environment to Edit, Set as default, Update in a deployment space or Delete the environment. You can also create a New environment from the Manage experiment environments window, but creating a new environment from this window does not make it the default unless you explicitly set is as the default.
-
-Updating your environment for Python or CPLEX versions: Python versions are regularly updated. If however you have explicitly specified an older Python version in your model, you must update this version specification or your models will not work. You can either create a new Python environment, as described earlier, or edit one from Manage experiment environments. This is also useful if you want to select a different version of CPLEX for your default environment.
-"
-497007D0D0ABAC3202BBF912A15BFC389066EBDA_3,497007D0D0ABAC3202BBF912A15BFC389066EBDA,"6. Click the Python extensions tab.
-
-
-
-Here you can view your Python extensions and see which environment it is used in. You can also create a New Python extension or use the options to Edit, Download, and Delete existing ones. If you edit a Python extension that is used by an experiment environment, the environment will be re-created.
-
-You can also view your Python environments in your deployment space assets and any Python extensions you have added will appear in the software specification.
-
-
-
-
-
-"
-497007D0D0ABAC3202BBF912A15BFC389066EBDA_4,497007D0D0ABAC3202BBF912A15BFC389066EBDA," Selecting a different run environment for a particular scenario
-
-You can choose different environments for individual scenarios on the Environment tab of the Run configuration pane.
-
-"
-497007D0D0ABAC3202BBF912A15BFC389066EBDA_5,497007D0D0ABAC3202BBF912A15BFC389066EBDA," Procedure
-
-
-
-1. Open the Scenario pane and select your scenario in the Build model view.
-2. Click the Configure run icon next to the Run button to open the Run configuration pane and select the Environment tab.
-"
-5788D38721AEAE446CFAD7D9288B6BAB33FA1EF9,5788D38721AEAE446CFAD7D9288B6BAB33FA1EF9," Sample models and notebooks for Decision Optimization
-
-Several examples are presented in this documentation as tutorials. You can also use many other examples that are provided in the Decision Optimization GitHub, and in the Samples.
-
-Quick links:
-
-
-
-* [Examples used in this documentation](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/docExamples.html?context=cdpaas&locale=enExamples__docexamples)
-* [Decision Optimization experiment samples (Modeling Assistant, Python, OPL)](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/docExamples.html?context=cdpaas&locale=enExamples__section_modelbuildersamples)
-"
-167D5677958594BA275E34B8748F7E8091782560_0,167D5677958594BA275E34B8748F7E8091782560," Decision Optimization experiment views and scenarios
-
-The Decision Optimization experiment UI has different views in which you can select data, create models, solve different scenarios, and visualize the results.
-
-Quick links to sections:
-
-
-
-* [ Overview](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=enModelBuilderInterface__section_overview)
-* [Hardware and software configuration](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=enModelBuilderInterface__section_environment)
-* [Prepare data view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=enModelBuilderInterface__section_preparedata)
-* [Build model view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=enModelBuilderInterface__ModelView)
-* [Multiple model files](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=enModelBuilderInterface__section_g21_p5n_plb)
-* [Run models](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=enModelBuilderInterface__runmodel)
-* [Run configuration](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=enModelBuilderInterface__section_runconfig)
-"
-167D5677958594BA275E34B8748F7E8091782560_1,167D5677958594BA275E34B8748F7E8091782560,"* [Run environment tab](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=enModelBuilderInterface__envtabConfigRun)
-* [Explore solution view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=enModelBuilderInterface__solution)
-* [Scenario pane](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=enModelBuilderInterface__scenariopanel)
-* [Generating notebooks from scenarios](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=enModelBuilderInterface__generateNB)
-* [Importing scenarios](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=enModelBuilderInterface__p_Importingscenarios)
-* [Exporting scenarios](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=enModelBuilderInterface__p_Exportingscenarios)
-
-
-
-Note: To create and run Optimization models, you must have both a Machine Learning service added to your project and a deployment space that is associated with your experiment:
-
-
-
-"
-167D5677958594BA275E34B8748F7E8091782560_2,167D5677958594BA275E34B8748F7E8091782560,"1. Add a [Machine Learning service](https://cloud.ibm.com/catalog/services/machine-learning) to your project. You can either add this service at the project level (see [Creating a Watson Machine Learning Service instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-service-instance.html)), or you can add it when you first create a new Decision Optimization experiment: click Add a Machine Learning service, select, or create a New service, click Associate, then close the window.
-2. Associate a [deployment space](https://dataplatform.cloud.ibm.com/ml-runtime/spaces) with your Decision Optimization experiment (see [Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.htmlcreate)). A deployment space can be created or selected when you first create a new Decision Optimization experiment: click Create a deployment space, enter a name for your deployment space, and click Create. For existing models, you can also create, or select a space in the [Overview](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.htmlModelBuilderInterface__section_overview) information pane.
-
-
-
-When you add a Decision Optimization experiment as an asset in your project, you open the Decision Optimization experiment UI.
-
-With the Decision Optimization experiment UI, you can create and solve prescriptive optimization models that focus on the specific business problem that you want to solve. To edit and solve models, you must have Admin or Editor roles in the project. Viewers of shared projects can only see experiments, but cannot modify or run them.
-
-You can create a Decision Optimization model from scratch by entering a name or by choosing a .zip file, and then selecting Create. Scenario 1 opens.
-
-"
-167D5677958594BA275E34B8748F7E8091782560_3,167D5677958594BA275E34B8748F7E8091782560,"With the Decision Optimization experiment UI, you can create several scenarios, with different data sets and optimization models. Thus, you, can create and compare different scenarios and see what impact changes can have on a problem.
-
-For a step-by-step guide to build, solve and deploy a Decision Optimization model, by using the user interface, see the [Quick start tutorial with video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html).
-
-For each of the following views, you can organize your screen as full-screen or as a split-screen. To do so, hover over one of the view tabs ( Prepare data, Build model, Explore solution) for a second or two. A menu then appears where you can select Full Screen, Left or Right. For example, if you choose Left for the Prepare data view, and then choose Right for the Explore solution view, you can see both these views on the same screen.
-"
-1C20BD9F24D670DD18B6BC28E020FBB23C742682_0,1C20BD9F24D670DD18B6BC28E020FBB23C742682," Creating advanced custom constraints with Python
-
-This Decision Optimization Modeling Assistant example shows you how to create advanced custom constraints that use Python.
-
-"
-1C20BD9F24D670DD18B6BC28E020FBB23C742682_1,1C20BD9F24D670DD18B6BC28E020FBB23C742682," Procedure
-
-To create a new advanced custom constraint:
-
-
-
-1. In the Build model view of your open Modeling Assistant model, look at the Suggestions pane. If you have Display by category selected, expand the Others section to locate New custom constraint, and click it to add it to your model. Alternatively, without categories displayed, you can enter, for example, custom in the search field to find the same suggestion and click it to add it to your model.A new custom constraint is added to your model.
-
-
-2. Click Enter your constraint. Use [brackets] for data, concepts, variables, or parameters and enter the constraint you want to specify. For example, type No [employees] has [onCallDuties] for more than [2] consecutive days and press enter.The specification is displayed with default parameters (parameter1, parameter2, parameter3) for you to customize. These parameters will be passed to the Python function that implements this custom rule.
-
-
-3. Edit the default parameters in the specification to give them more meaningful names. For example, change the parameters to employees, on_call_duties, and limit and click enter.
-4. Click function name and enter a name for the function. For example, type limitConsecutiveAssignments and click enter.Your function name is added and an Edit Python button appears.
-
-
-"
-1C20BD9F24D670DD18B6BC28E020FBB23C742682_2,1C20BD9F24D670DD18B6BC28E020FBB23C742682,"5. Click the Edit Python button.A new window opens showing you Python code that you can edit to implement your custom rule. You can see your customized parameters in the code as follows:
-
-
-
-Notice that the code is documented with corresponding data frames and table column names as you have defined in the custom rule. The limit is not documented as this is a numerical value.
-6. Optional: You can edit the Python code directly in this window, but you might find it useful to edit and debug your code in a notebook before using it here. In this case, close this window for now and in the Scenario pane, expand the three vertical dots and select Generate a notebook for this scenario that contains the custom rule. Enter a name for this notebook.The notebook is created in your project assets ready for you to edit and debug. Once you have edited, run and debugged it you can copy the code for your custom function back into this Edit Python window in the Modeling Assistant.
-7. Edit the Python code in the Modeling Assistant custom rule Edit Python window. For example, you can define the rule for consecutive days in Python as follows:
-
-def limitConsecutiveAssignments(self, mdl, employees, on_call_duties, limit):
-global helper_add_labeled_cplex_constraint, helper_get_index_names_for_type, helper_get_column_name_for_property
-print('Adding constraints for the custom rule')
-for employee, duties in employees.associated(on_call_duties):
-duties_day_idx = duties.join(Day) Retrieve Day index from Day label
-for d in Day['index']:
-end = d + limit + 1 One must enforce that there are no occurence of (limit + 1) working consecutive days
-"
-1C20BD9F24D670DD18B6BC28E020FBB23C742682_3,1C20BD9F24D670DD18B6BC28E020FBB23C742682,"duties_in_win = duties_day_idx[((duties_day_idx'index'] >= d) & (duties_day_idx'index'] <= end)) | (duties_day_idx'index'] <= end - 7)]
-"
-C07CD6DF8C92EDD0F2638573BFDCE7BF18AA2EB0_0,C07CD6DF8C92EDD0F2638573BFDCE7BF18AA2EB0," Adding multi-concept constraints and custom decisions: shift assignment
-
-This Decision Optimization Modeling Assistant example shows you how to use multi-concept iterations, the associated keyword in constraints, how to define your own custom decisions, and define logical constraints. For illustration, a resource assignment problem, ShiftAssignment, is used and its completed model with data is provided in the DO-samples.
-
-"
-C07CD6DF8C92EDD0F2638573BFDCE7BF18AA2EB0_1,C07CD6DF8C92EDD0F2638573BFDCE7BF18AA2EB0," Procedure
-
-To download and open the sample:
-
-
-
-1. Download the ShiftAssignment.zip file from the Model_Builder subfolder in the [DO-samples](https://github.com/IBMDecisionOptimization/DO-Samples). Select the relevant product and version subfolder.
-2. Open your project or create an empty project.
-3. On the Manage tab of your project, select the Services and integrations section and click Associate service. Then select an existing Machine Learning service instance (or create a new one ) and click Associate. When the service is associated, a success message is displayed, and you can then close the Associate service window.
-4. Select the Assets tab.
-5. Select New asset > Solve optimization problems in the Work with models section.
-6. Click Local file in the Solve optimization problems window that opens.
-7. Browse locally to find and choose the ShiftAssignment.zip archive that you downloaded. Click Open. Alternatively use drag and drop.
-8. Associate a Machine Learning service instance with your project and reload the page.
-9. If you haven't already associated a Machine Learning service with your project, you must first select Add a Machine Learning service to select or create one before you choose a deployment space for your experiment.
-10. Click Create.A Decision Optimization model is created with the same name as the sample.
-11. Open the scenario pane and select the AssignmentWithOnCallDuties scenario.
-
-
-
-
-
-"
-C07CD6DF8C92EDD0F2638573BFDCE7BF18AA2EB0_2,C07CD6DF8C92EDD0F2638573BFDCE7BF18AA2EB0," Using multi-concept iteration
-
-"
-C07CD6DF8C92EDD0F2638573BFDCE7BF18AA2EB0_3,C07CD6DF8C92EDD0F2638573BFDCE7BF18AA2EB0," Procedure
-
-To use multi-concept iteration, follow these steps.
-
-
-
-1. Click Build model in the sidebar to view your model formulation.The model formulation shows the intent as being to assign employees to shifts, with its objectives and constraints.
-2. Expand the constraint For each Employee-Day combination , number of associated Employee-Shift assignments is less than or equal to 1.
-
-
-
-
-
-
-
-"
-C07CD6DF8C92EDD0F2638573BFDCE7BF18AA2EB0_4,C07CD6DF8C92EDD0F2638573BFDCE7BF18AA2EB0," Defining custom decisions
-
-"
-C07CD6DF8C92EDD0F2638573BFDCE7BF18AA2EB0_5,C07CD6DF8C92EDD0F2638573BFDCE7BF18AA2EB0," Procedure
-
-To define custom decisions, follow these steps.
-
-
-
-1. Click Build model to see the model formulation of the AssignmentWithOnCallDuties Scenario.
-
-The custom decision OnCallDuties is used in the second objective. This objective ensures that the number of on-call duties are balanced over Employees.
-
-The constraint  ensures that the on-call duty requirements that are listed in the Day table are satisfied.
-
-The following steps show you how this custom decision OnCallDuties was defined.
-2. Open the Settings pane and notice that the Visualize and edit decisions is set to true (or set it to true if it is set to the default false).
-
-This setting adds a Decisions tab to your Add to model window.
-
-
-
-Here you can see OnCallDuty is specified as an assignment decision (to assign employees to on-call duties). Its two dimensions are defined with reference to the data tables Day and Employee. This means that your model will also assign on-call duties to employees. The Employee-Shift assignment decision is specified from the original intent.
-3. Optional: Enter your own text to describe the OnCallDuty in the [to be documented] field.
-"
-C07CD6DF8C92EDD0F2638573BFDCE7BF18AA2EB0_6,C07CD6DF8C92EDD0F2638573BFDCE7BF18AA2EB0,"4. Optional: To create your own decision in the Decisions tab, click the enter name, type in a name and click enter. A new decision (intent) is created with that name with some highlighted fields to be completed by using the drop-down menus. If you, for example, select assignment as the decision type, two dimensions are created. As assignment involves assigning at least one thing to another, at least two dimensions must be defined. Use select a table fields to define the dimensions.
-
-
-
-
-
-
-
-"
-C07CD6DF8C92EDD0F2638573BFDCE7BF18AA2EB0_7,C07CD6DF8C92EDD0F2638573BFDCE7BF18AA2EB0," Using logical constraints
-
-"
-C07CD6DF8C92EDD0F2638573BFDCE7BF18AA2EB0_8,C07CD6DF8C92EDD0F2638573BFDCE7BF18AA2EB0," Procedure
-
-To use logical constraints:
-
-
-
-"
-0EFC1AA12637C84918CEF9FA5DE5DA424822330C,0EFC1AA12637C84918CEF9FA5DE5DA424822330C," Formulating and running a model: house construction scheduling
-
-This tutorial shows you how to use the Modeling Assistant to define, formulate and run a model for a house construction scheduling problem. The completed model with data is also provided in the DO-samples, see [Importing Model Builder samples](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/docExamples.htmlExamples__section_modelbuildersamples).
-
-In this section:
-
-
-
-* [Modeling Assistant House construction scheduling tutorial](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/exhousebuild.html?context=cdpaas&locale=encogusercase__section_The_problem)
-"
-312E91752782553D39C335D0DAAF189025739BB4,312E91752782553D39C335D0DAAF189025739BB4," Modeling Assistant models
-
-You can model and solve Decision Optimization problems using the Modeling Assistant (which enables you to formulate models in natural language). This requires little to no knowledge of Operational Research (OR) and does not require you to write Python code. The Modeling Assistant is only available in English and is not globalized.
-
-The basic workflow to create a model with the Modeling Assistant and examine it under different scenarios is as follows:
-
-
-
-1. Create a project.
-2. Add a Decision Optimization experiment (a scenario is created by default in the experiment UI).
-3. Add and import your data into the scenario.
-4. Create a natural language model in the scenario, by first selecting your decision domain and then using the Modeling Assistant to guide you.
-5. Run the model to solve it and explore the solution.
-6. Create visualizations of solution and data.
-7. Copy the scenario and edit the model and/or the data.
-8. Solve the new scenario to see the impact of these changes.
-
-
-
-
-
-This is demonstrated with a simple [planning and scheduling example ](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/exhousebuild.htmlcogusercase).
-
-For more information about deployment see .
-"
-2746F2E53D41F5810D92D843AF8C0AB2B36A0D47,2746F2E53D41F5810D92D843AF8C0AB2B36A0D47," Selecting a Decision domain in the Modeling Assistant
-
-There are different decision domains currently available in the Modeling Assistant and you can be guided to choose the right domain for your problem.
-
-Once you have added and imported your data into your model, the Modeling Assistant helps you to formulate your optimization model by offering you suggestions in natural language that you can edit. In order to make intelligent suggestions using your data, and to ensure that the proposed model formulation is well suited to your problem, you are asked to start by selecting a decision domain for your model.
-
-If you need a decision domain that is not currently supported by the Modeling Assistant, you can still formulate your model as a Python notebook or as an OPL model in the experiment UI editor.
-"
-F37BD72C28F0DAC8D9478ECEABA4F077ABCDE0C9_0,F37BD72C28F0DAC8D9478ECEABA4F077ABCDE0C9," Create new scenario
-
-To solve with different versions of your model or data you can create new scenarios in the Decision Optimization experiment UI.
-
-"
-F37BD72C28F0DAC8D9478ECEABA4F077ABCDE0C9_1,F37BD72C28F0DAC8D9478ECEABA4F077ABCDE0C9," Procedure
-
-To create a new scenario:
-
-
-
-1. Click the Open scenario pane icon  to open the Scenario panel.
-2. Use the Create Scenario drop-down menu to create a new scenario from the current one.
-3. Add a name for the duplicate scenario and click Create.
-4. Working in your new scenario, in the Prepare data view, open the diet_food data table in full mode.
-5. Locate the entry for Hotdog at row 9, and set the qmax value to 0 to exclude hot dog from possible solutions.
-"
-056E37762231E9E32F0F443987C32ACF7BF1AED4,056E37762231E9E32F0F443987C32ACF7BF1AED4," Working with multiple scenarios
-
-You can generate multiple scenarios to test your model against a wide range of data and understand how robust the model is.
-
-This example steps you through the process to generate multiple scenarios with a model. This makes it possible to test the performance of the model against multiple randomly generated data sets. It's important in practice to check the robustness of a model against a wide range of data. This helps ensure that the model performs well in potentially stochastic real-world conditions.
-
-The example is the StaffPlanning model in the DO-samples.
-
-The example is structured as follows:
-
-
-
-* The model StaffPlanning contains a default scenario based on two default data sets, along with five additional scenarios based on randomized data sets.
-* The Python notebookCopyAndSolveScenarios contains the random generator to create the new scenarios in the StaffPlanning model.
-
-
-
-For general information about scenario management and configuration, see [Scenario pane](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.htmlModelBuilderInterface__scenariopanel) and [Overview](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.htmlModelBuilderInterface__section_overview).
-
-For information about writing methods and classes for scenarios, see the [ Decision Optimization Client Python API documentation](https://ibmdecisionoptimization.github.io/decision-optimization-client-doc/).
-"
-3BEB81A5A5953CD570FA673B2496F8AF98725438_0,3BEB81A5A5953CD570FA673B2496F8AF98725438," Generating multiple scenarios
-
-This tutorial shows you how to generate multiple scenarios from a notebook using randomized data. Generating multiple scenarios lets you test a model by exposing it to a wide range of data.
-
-"
-3BEB81A5A5953CD570FA673B2496F8AF98725438_1,3BEB81A5A5953CD570FA673B2496F8AF98725438," Procedure
-
-To create and solve a scenario using a sample:
-
-
-
-1. Download and extract all the [DO-samples](https://github.com/IBMDecisionOptimization/DO-Samples) on to your machine. You can also download just the StaffPlanning.zip file from the Model_Builder subfolder for your product and version, but in this case do not extract it.
-2. Open your project or create an empty project.
-3. On the Manage tab of your project, select the Services and integrations section and click Associate service. Then select an existing Machine Learning service instance (or create a new one ) and click Associate. When the service is associated, a success message is displayed, and you can then close the Associate service window.
-4. Select the Assets tab.
-5. Select New asset > Solve optimization problems in the Work with models section.
-6. Click Local file in the Solve optimization problems window that opens.
-7. Browse to choose the StaffPlanning.zip file in the Model_Builder folder. Select the relevant product and version subfolder in your downloaded DO-samples.
-8. If you haven't already associated a Machine Learning service with your project, you must first select Add a Machine Learning service to select or create one before you choose a deployment space for your experiment.
-"
-DECCA51BACC7BE33F484D36177B24C4BD0FE4CFD,DECCA51BACC7BE33F484D36177B24C4BD0FE4CFD," Input and output data
-
-You can access the input and output data you defined in the experiment UI by using the following dictionaries.
-
-The data that you imported in the Prepare data view in the experiment UI is accessible from the input dictionary. You must define each table by using the syntax inputs['tablename']. For example, here food is an entity that is defined from the table called diet_food:
-
-food = inputs['diet_food']
-
-Similarly, to show tables in the Explore solution view of the experiment UI you must specify them using the syntax outputs['tablename']. For example,
-
-outputs['solution'] = solution_df
-
-defines an output table that is called solution. The entity solution_df in the Python model defines this table.
-
-You can find this Diet example in the Model_Builder folder of the [DO-samples](https://github.com/IBMDecisionOptimization/DO-Samples). To import and run (solve) it in the experiment UI, see [Solving and analyzing a model: the diet problem](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Notebooks/solveModel.htmltask_mtg_n3q_m1b).
-"
-726175290D457B10A02C27F08ECA1F6546E64680,726175290D457B10A02C27F08ECA1F6546E64680," Python DOcplex models
-
-You can solve Python DOcplex models in a Decision Optimization experiment.
-
-The Decision Optimization environment currently supports Python 3.10. The default version is Python 3.10. You can modify this default version on the Environment tab of the [Run configuration pane](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.htmlModelBuilderInterface__section_runconfig) or from the [Overview](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.htmlModelBuilderInterface__section_overview) information pane.
-
-The basic workflow to create a Python DOcplex model in Decision Optimization, and examine it under different scenarios, is as follows:
-
-
-
-1. Create a project.
-2. Add data to the project.
-3. Add a Decision Optimization experiment (a scenario is created by default in the experiment UI).
-4. Select and import your data into the scenario.
-5. Create or import your Python model.
-6. Run the model to solve it and explore the solution.
-7. Copy the scenario and edit the data in the context of the new scenario.
-8. Solve the new scenario to see the impact of the changes to data.
-
-
-
-
-"
-2E1F6D5703CE75AF284903C20E5DBDFA1AE706B4_0,2E1F6D5703CE75AF284903C20E5DBDFA1AE706B4," Solving and analyzing a model: the diet problem
-
-This example shows you how to create and solve a Python-based model by using a sample.
-
-"
-2E1F6D5703CE75AF284903C20E5DBDFA1AE706B4_1,2E1F6D5703CE75AF284903C20E5DBDFA1AE706B4," Procedure
-
-To create and solve a Python-based model by using a sample:
-
-
-
-1. Download and extract all the [DO-samples](https://github.com/IBMDecisionOptimization/DO-Samples) on to your computer. You can also download just the diet.zip file from the Model_Builder subfolder for your product and version, but in this case, do not extract it.
-2. Open your project or create an empty project.
-3. On the Manage tab of your project, select the Services and integrations section and click Associate service. Then select an existing Machine Learning service instance (or create a new one ) and click Associate. When the service is associated, a success message is displayed, and you can then close the Associate service window.
-4. Select the Assets tab.
-5. Select New asset > Solve optimization problems in the Work with models section.
-6. Click Local file in the Solve optimization problems window that opens.
-7. Browse to find the Model_Builder folder in your downloaded DO-samples. Select the relevant product and version subfolder. Choose the Diet.zip file and click Open. Alternatively use drag and drop.
-8. If you haven't already associated a Machine Learning service with your project, you must first select Add a Machine Learning service to select or create one before you choose a deployment space for your experiment.
-9. Click New deployment space, enter a name, and click Create (or select an existing space from the drop-down menu).
-10. Click Create.A Decision Optimization model is created with the same name as the sample.
-11. In the Prepare data view, you can see the data assets imported.These tables represent the min and max values for nutrients in the diet (diet_nutrients), the nutrients in different foods (diet_food_nutrients), and the price and quantity of specific foods (diet_food).
-
-
-12. Click Build model in the sidebar to view your model.The Python model minimizes the cost of the food in the diet while satisfying minimum nutrient and calorie requirements.
-
-"
-2E1F6D5703CE75AF284903C20E5DBDFA1AE706B4_2,2E1F6D5703CE75AF284903C20E5DBDFA1AE706B4,"
-
-"
-D51AD51E5407BF4EFAE5C97FE7E031DB56CF8733,D51AD51E5407BF4EFAE5C97FE7E031DB56CF8733," Run parameters and Environment
-
-You can select various run parameters for the optimization solve in the Decision Optimization experiment UI.
-
-Quick links to sections:
-
-
-
-* [CPLEX runtime version](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_RunParameters/runparams.html?context=cdpaas&locale=enRunConfig__cplexruntime)
-* [Python version](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_RunParameters/runparams.html?context=cdpaas&locale=enRunConfig__pyversion)
-"
-C6EE4CACFC1E29BAFBB8ED5D98521EA68388D0CB,C6EE4CACFC1E29BAFBB8ED5D98521EA68388D0CB," Decision Optimization
-
-IBM® Decision Optimization gives you access to IBM's industry-leading solution engines for mathematical programming and constraint programming. You can build Decision Optimization models either with notebooks or by using the powerful Decision Optimization experiment UI (Beta version). Here you can import, or create and edit models in Python, in OPL or with natural language expressions provided by the intelligent Modeling Assistant (Beta version). You can also deploy models with Watson Machine Learning.
-
-Data format
-: Tabular: .csv, .xls, .json files. See [Prepare data view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.htmlModelBuilderInterface__section_preparedata)
-
-Data from [Connected data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html)
-
-For deployment see [Model input and output data file formats](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/ModelIOFileFormats.html)
-
-"
-E45F37BDDB38D6656992642FBEA2707FE34E942A,E45F37BDDB38D6656992642FBEA2707FE34E942A," Delegating the Decision Optimization solve to run on Watson Machine Learning from Java or .NET CPLEX or CPO models
-
-You can delegate the Decision Optimization solve to run on Watson Machine Learning from your Java or .NET (CPLEX or CPO) models.
-
-Delegating the solve is only useful if you are building and generating your models locally. You cannot deploy models and run jobs Watson Machine Learning with this method. For full use of Java models on Watson Machine Learning use the Java™ worker Important: To deploy and test models on Watson Machine Learning, use the Java worker. For more information about deploying Java models, see the [Java worker GitHub](https://github.com/IBMDecisionOptimization/cplex-java-worker/blob/master/README.md).For the library and documentation for:
-
-
-
-"
-5BC48AB9A35E2E8BAEA5204C4406835154E2B836,5BC48AB9A35E2E8BAEA5204C4406835154E2B836," Deployment steps
-
-With IBM Watson Machine Learning you can deploy your Decision Optimization prescriptive model and associated common data once and then submit job requests to this deployment with only the related transactional data. This deployment can be achieved by using the Watson Machine Learning REST API or by using the Watson Machine Learning Python client.
-
-See [REST API example](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployModelRest.htmltask_deploymodelREST) for a full code example. See [Python client examples](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployPythonClient.htmltopic_wmlpythonclient) for a link to a Python notebook available from the Samples.
-"
-134EB5D79038B55A3A6AC019016A21EC2B6A1917,134EB5D79038B55A3A6AC019016A21EC2B6A1917," Deploying Java models for Decision Optimization
-
-You can deploy Decision Optimization Java models in Watson Machine Learning by using the Watson Machine Learning REST API.
-
-With the Java worker API, you can create optimization models with OPL, CPLEX, and CP Optimizer Java APIs. Therefore, you can easily create your models locally, package them and deploy them on Watson Machine Learning by using the boilerplate that is provided in the public [Java worker GitHub](https://github.com/IBMDecisionOptimization/cplex-java-worker/blob/master/README.md).
-
-The Decision Optimization[Java worker GitHub](https://github.com/IBMDecisionOptimization/cplex-java-worker/blob/master/README.md) contains a boilerplate with everything that you need to run, deploy, and verify your Java models in Watson Machine Learning, including an example. You can use the code in this repository to package your Decision Optimization Java model in a .jar file that can be used as a Watson Machine Learning model. For more information about Java worker parameters, see the [Java documentation](https://github.com/IBMDecisionOptimization/do-maven-repo/blob/master/com/ibm/analytics/optim/api_java_client/1.0.0/api_java_client-1.0.0-javadoc.jar).
-
-You can build your Decision Optimization models in Java or you can use Java worker to package CPLEX, CPO, and OPL models.
-
-For more information about these models, see the following reference manuals.
-
-
-
-* [Java CPLEX reference documentation](https://www.ibm.com/docs/en/SSSA5P_22.1.1/ilog.odms.cplex.help/refjavacplex/html/overview-summary.html)
-"
-B92F42609B54B82BFE38A69B781052E876258C2C_0,B92F42609B54B82BFE38A69B781052E876258C2C," REST API example
-
-You can deploy a Decision Optimization model, create and monitor jobs and get solutions using the Watson Machine Learning REST API.
-
-"
-B92F42609B54B82BFE38A69B781052E876258C2C_1,B92F42609B54B82BFE38A69B781052E876258C2C," Procedure
-
-
-
-1. Generate an IAM token using your [IBM Cloud API key](https://cloud.ibm.com/iam/apikeys) as follows.
-
-curl ""https://iam.bluemix.net/identity/token""
--d ""apikey=YOUR_API_KEY_HERE&grant_type=urn%3Aibm%3Aparams%3Aoauth%3Agrant-type%3Aapikey""
--H ""Content-Type: application/x-www-form-urlencoded""
--H ""Authorization: Basic Yng6Yng=""
-
-Output example:
-
-{
-""access_token"": "" obtained IAM token "",
-""refresh_token"": """",
-""token_type"": ""Bearer"",
-""expires_in"": 3600,
-""expiration"": 1554117649,
-""scope"": ""ibm openid""
-}
-
-Use the obtained token (access_token value) prepended by the word Bearer in the Authorization header, and the Machine Learning service GUID in the ML-Instance-ID header, in all API calls.
-2. Optional: If you have not obtained your SPACE-ID from the user interface as described previously, you can create a space using the REST API as follows. Use the previously obtained token prepended by the word bearer in the Authorization header in all API calls.
-
-curl --location --request POST
-""https://api.dataplatform.cloud.ibm.com/v2/spaces""
--H ""Authorization: Bearer TOKEN-HERE""
--H ""ML-Instance-ID: MACHINE-LEARNING-SERVICE-GUID-HERE""
--H ""Content-Type: application/json""
---data-raw ""{
-""name"": ""SPACE-NAME-HERE"",
-""description"": ""optional description here"",
-""storage"": {
-""resource_crn"": ""COS-CRN-ID-HERE""
-},
-""compute"": [{
-""name"": ""MACHINE-LEARNING-SERVICE-NAME-HERE"",
-"
-B92F42609B54B82BFE38A69B781052E876258C2C_2,B92F42609B54B82BFE38A69B781052E876258C2C,"""crn"": ""MACHINE-LEARNING-SERVICE-CRN-ID-HERE""
-}]
-}""
-
-For Windows users, put the --data-raw command on one line and replace all "" with "" inside this command as follows:
-
-curl --location --request POST ^
-""https://api.dataplatform.cloud.ibm.com/v2/spaces"" ^
--H ""Authorization: Bearer TOKEN-HERE"" ^
--H ""ML-Instance-ID: MACHINE-LEARNING-SERVICE-GUID-HERE"" ^
--H ""Content-Type: application/json"" ^
---data-raw ""{""name"": ""SPACE-NAME-HERE"",""description"": ""optional description here"",""storage"": {""resource_crn"": ""COS-CRN-ID-HERE"" },""compute"": [{""name"": ""MACHINE-LEARNING-SERVICE-NAME-HERE"",""crn"": ""MACHINE-LEARNING-SERVICE-CRN-ID-HERE"" }]}""
-
-Alternatively put the data in a separate file.A SPACE-ID is returned in id field of the metadata section.
-
-Output example:
-
-{
-""entity"": {
-""compute"": [
-{
-""crn"": ""MACHINE-LEARNING-SERVICE-CRN"",
-""guid"": ""MACHINE-LEARNING-SERVICE-GUID"",
-""name"": ""MACHINE-LEARNING-SERVICE-NAME"",
-""type"": ""machine_learning""
-}
-],
-""description"": ""string"",
-""members"": [
-{
-""id"": ""XXXXXXX"",
-""role"": ""admin"",
-""state"": ""active"",
-""type"": ""user""
-}
-],
-""name"": ""name"",
-""scope"": {
-""bss_account_id"": ""account_id""
-},
-""status"": {
-""state"": ""active""
-}
-},
-""metadata"": {
-""created_at"": ""2020-07-17T08:36:57.611Z"",
-""creator_id"": ""XXXXXXX"",
-"
-B92F42609B54B82BFE38A69B781052E876258C2C_3,B92F42609B54B82BFE38A69B781052E876258C2C,"""id"": ""SPACE-ID"",
-""url"": ""/v2/spaces/SPACE-ID""
-}
-}
-
-You must wait until your deployment space status is ""active"" before continuing. You can poll to check for this as follows.
-
-curl --location --request GET ""https://api.dataplatform.cloud.ibm.com/v2/spaces/SPACE-ID-HERE""
--H ""Authorization: bearer TOKEN-HERE""
--H ""Content-Type: application/json""
-3. Create a new Decision Optimization model
-
-All API requests require a version parameter that takes a date in the format version=YYYY-MM-DD. This code example posts a model that uses the file create_model.json. The URL will vary according to the chosen region/location for your machine learning service. See [Endpoint URLs](https://cloud.ibm.com/apidocs/machine-learningendpoint-url).
-
-curl --location --request POST
-""https://us-south.ml.cloud.ibm.com/ml/v4/models?version=2020-08-01""
--H ""Authorization: bearer TOKEN-HERE""
--H ""Content-Type: application/json""
--d @create_model.json
-
-The create_model.json file contains the following code:
-
-{
-""name"": ""ModelName"",
-""description"": ""ModelDescription"",
-""type"": ""do-docplex_22.1"",
-""software_spec"": {
-""name"": ""do_22.1""
-},
-""custom"": {
-""decision_optimization"": {
-""oaas.docplex.python"": ""3.10""
-}
-},
-""space_id"": ""SPACE-ID-HERE""
-}
-
-The Python version is stated explicitly here in a custom block. This is optional. Without it your model will use the default version which is currently Python 3.10. As the default version will evolve over time, stating the Python version explicitly enables you to easily change it later or to keep using an older supported version when the default version is updated. Currently supported versions are 3.10.
-
-"
-B92F42609B54B82BFE38A69B781052E876258C2C_4,B92F42609B54B82BFE38A69B781052E876258C2C,"If you want to be able to run jobs for this model from the user interface, instead of only using the REST API , you must define the schema for the input and output data. If you do not define the schema when you create the model, you can only run jobs using the REST API and not from the user interface.
-
-You can also use the schema specified for input and output in your optimization model:
-
-{
-""name"": ""Diet-Model-schema"",
-""description"": ""Diet"",
-""type"": ""do-docplex_22.1"",
-""schemas"": {
-""input"": [
-{
-""id"": ""diet_food_nutrients"",
-""fields"":
-{ ""name"": ""Food"", ""type"": ""string"" },
-{ ""name"": ""Calories"", ""type"": ""double"" },
-{ ""name"": ""Calcium"", ""type"": ""double"" },
-{ ""name"": ""Iron"", ""type"": ""double"" },
-{ ""name"": ""Vit_A"", ""type"": ""double"" },
-{ ""name"": ""Dietary_Fiber"", ""type"": ""double"" },
-{ ""name"": ""Carbohydrates"", ""type"": ""double"" },
-{ ""name"": ""Protein"", ""type"": ""double"" }
-]
-},
-{
-""id"": ""diet_food"",
-""fields"":
-{ ""name"": ""name"", ""type"": ""string"" },
-{ ""name"": ""unit_cost"", ""type"": ""double"" },
-{ ""name"": ""qmin"", ""type"": ""double"" },
-{ ""name"": ""qmax"", ""type"": ""double"" }
-]
-},
-{
-""id"": ""diet_nutrients"",
-""fields"":
-{ ""name"": ""name"", ""type"": ""string"" },
-{ ""name"": ""qmin"", ""type"": ""double"" },
-{ ""name"": ""qmax"", ""type"": ""double"" }
-]
-}
-],
-""output"": [
-{
-""id"": ""solution"",
-""fields"":
-"
-B92F42609B54B82BFE38A69B781052E876258C2C_5,B92F42609B54B82BFE38A69B781052E876258C2C,"{ ""name"": ""name"", ""type"": ""string"" },
-{ ""name"": ""value"", ""type"": ""double"" }
-]
-}
-]
-},
-""software_spec"": {
-""name"": ""do_22.1""
-},
-""space_id"": ""SPACE-ID-HERE""
-}
-
-When you post a model you provide information about its model type and the software specification to be used.Model types can be, for example:
-
-
-
-* do-opl_22.1 for OPL models
-* do-cplex_22.1 for CPLEX models
-* do-cpo_22.1 for CP models
-* do-docplex_22.1 for Python models
-
-
-
-Version 20.1 can also be used for these model types.
-
-For the software specification, you can use the default specifications using their names do_22.1 or do_20.1. See also [Extend software specification notebook](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployPythonClient.htmltopic_wmlpythonclient__extendWML) which shows you how to extend the Decision Optimization software specification (runtimes with additional Python libraries for docplex models).
-
-A MODEL-ID is returned in id field in the metadata.
-
-Output example:
-
-{
-""entity"": {
-""software_spec"": {
-""id"": ""SOFTWARE-SPEC-ID""
-},
-""type"": ""do-docplex_20.1""
-},
-""metadata"": {
-""created_at"": ""2020-07-17T08:37:22.992Z"",
-""description"": ""ModelDescription"",
-""id"": ""MODEL-ID"",
-""modified_at"": ""2020-07-17T08:37:22.992Z"",
-""name"": ""ModelName"",
-""owner"": """",
-""space_id"": ""SPACE-ID""
-}
-}
-"
-B92F42609B54B82BFE38A69B781052E876258C2C_6,B92F42609B54B82BFE38A69B781052E876258C2C,"4. Upload a Decision Optimization model formulation ready for deployment.First compress your model into a (tar.gz, .zip or .jar) file and upload it to be deployed by the Watson Machine Learning service.This code example uploads a model called diet.zip that contains a Python model and no common data:
-
-curl --location --request PUT
-""https://us-south.ml.cloud.ibm.com/ml/v4/models/MODEL-ID-HERE/content?version=2020-08-01&space_id=SPACE-ID-HERE&content_format=native""
--H ""Authorization: bearer TOKEN-HERE""
--H ""Content-Type: application/gzip""
---data-binary ""@diet.zip""
-
-You can download this example and other models from the [DO-samples](https://github.com/IBMDecisionOptimization/DO-Samples). Select the relevant product and version subfolder.
-5. Deploy your modelCreate a reference to your model. Use the SPACE-ID, the MODEL-ID obtained when you created your model ready for deployment and the hardware specification. For example:
-
-curl --location --request POST ""https://us-south.ml.cloud.ibm.com/ml/v4/deployments?version=2020-08-01""
--H ""Authorization: bearer TOKEN-HERE""
--H ""Content-Type: application/json""
--d @deploy_model.json
-
-The deploy_model.json file contains the following code:
-
-{
-""name"": ""Test-Diet-deploy"",
-""space_id"": ""SPACE-ID-HERE"",
-""asset"": {
-""id"": ""MODEL-ID-HERE""
-},
-""hardware_spec"": {
-""name"": ""S""
-},
-""batch"": {}
-}
-
-The DEPLOYMENT-ID is returned in id field in the metadata. Output example:
-
-{
-""entity"": {
-""asset"": {
-""id"": ""MODEL-ID""
-},
-""custom"": {},
-"
-B92F42609B54B82BFE38A69B781052E876258C2C_7,B92F42609B54B82BFE38A69B781052E876258C2C,"""description"": """",
-""hardware_spec"": {
-""id"": ""HARDWARE-SPEC-ID"",
-""name"": ""S"",
-""num_nodes"": 1
-},
-""name"": ""Test-Diet-deploy"",
-""space_id"": ""SPACE-ID"",
-""status"": {
-""state"": ""ready""
-}
-},
-""metadata"": {
-""created_at"": ""2020-07-17T09:10:50.661Z"",
-""description"": """",
-""id"": ""DEPLOYMENT-ID"",
-""modified_at"": ""2020-07-17T09:10:50.661Z"",
-""name"": ""test-Diet-deploy"",
-""owner"": """",
-""space_id"": ""SPACE-ID""
-}
-}
-6. Once deployed, you can monitor your model's deployment state. Use the DEPLOYMENT-ID.For example:
-
-curl --location --request GET ""https://us-south.ml.cloud.ibm.com/ml/v4/deployments/DEPLOYMENT-ID-HERE?version=2020-08-01&space_id=SPACE-ID-HERE""
--H ""Authorization: bearer TOKEN-HERE""
--H ""Content-Type: application/json""
-
-Output example:
-7. You can then Submit jobs for your deployed model defining the input data and the output (results of the optimization solve) and the log file.For example, the following shows the contents of a file called myjob.json. It contains (inline) input data, some solve parameters, and specifies that the output will be a .csv file. For examples of other types of input data references, see [Model input and output data adaptation](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/ModelIODataDefn.htmltopic_modelIOAdapt).
-
-{
-""name"":""test-job-diet"",
-""space_id"": ""SPACE-ID-HERE"",
-""deployment"": {
-"
-B92F42609B54B82BFE38A69B781052E876258C2C_8,B92F42609B54B82BFE38A69B781052E876258C2C,"""id"": ""DEPLOYMENT-ID-HERE""
-},
-""decision_optimization"" : {
-""solve_parameters"" : {
-""oaas.logAttachmentName"":""log.txt"",
-""oaas.logTailEnabled"":""true""
-},
-""input_data"": [
-{
-""id"":""diet_food.csv"",
-""fields"" : ""name"",""unit_cost"",""qmin"",""qmax""],
-""values"" :
-""Roasted Chicken"", 0.84, 0, 10],
-""Spaghetti W/ Sauce"", 0.78, 0, 10],
-""Tomato,Red,Ripe,Raw"", 0.27, 0, 10],
-""Apple,Raw,W/Skin"", 0.24, 0, 10],
-""Grapes"", 0.32, 0, 10],
-""Chocolate Chip Cookies"", 0.03, 0, 10],
-""Lowfat Milk"", 0.23, 0, 10],
-""Raisin Brn"", 0.34, 0, 10],
-""Hotdog"", 0.31, 0, 10]
-]
-},
-{
-""id"":""diet_food_nutrients.csv"",
-""fields"" : ""Food"",""Calories"",""Calcium"",""Iron"",""Vit_A"",""Dietary_Fiber"",""Carbohydrates"",""Protein""],
-""values"" :
-""Spaghetti W/ Sauce"", 358.2, 80.2, 2.3, 3055.2, 11.6, 58.3, 8.2],
-""Roasted Chicken"", 277.4, 21.9, 1.8, 77.4, 0, 0, 42.2],
-""Tomato,Red,Ripe,Raw"", 25.8, 6.2, 0.6, 766.3, 1.4, 5.7, 1],
-""Apple,Raw,W/Skin"", 81.4, 9.7, 0.2, 73.1, 3.7, 21, 0.3],
-""Grapes"", 15.1, 3.4, 0.1, 24, 0.2, 4.1, 0.2],
-""Chocolate Chip Cookies"", 78.1, 6.2, 0.4, 101.8, 0, 9.3, 0.9],
-"
-B92F42609B54B82BFE38A69B781052E876258C2C_9,B92F42609B54B82BFE38A69B781052E876258C2C,"""Lowfat Milk"", 121.2, 296.7, 0.1, 500.2, 0, 11.7, 8.1],
-""Raisin Brn"", 115.1, 12.9, 16.8, 1250.2, 4, 27.9, 4],
-""Hotdog"", 242.1, 23.5, 2.3, 0, 0, 18, 10.4]
-]
-},
-{
-""id"":""diet_nutrients.csv"",
-""fields"" : ""name"",""qmin"",""qmax""],
-""values"" :
-""Calories"", 2000, 2500],
-""Calcium"", 800, 1600],
-""Iron"", 10, 30],
-""Vit_A"", 5000, 50000],
-""Dietary_Fiber"", 25, 100],
-""Carbohydrates"", 0, 300],
-""Protein"", 50, 100]
-]
-}
-],
-""output_data"": [
-{
-""id"":""..csv""
-}
-]
-}
-}
-
-This code example posts a job that uses this file myjob.json.
-
-curl --location --request POST ""https://us-south.ml.cloud.ibm.com/ml/v4/deployment_jobs?version=2020-08-01&space_id=SPACE-ID-HERE""
--H ""Authorization: bearer TOKEN-HERE""
--H ""Content-Type: application/json""
--H ""cache-control: no-cache""
--d @myjob.json
-
-A JOB-ID is returned. Output example: (the job is queued)
-
-{
-""entity"": {
-""decision_optimization"": {
-""input_data"": [{
-""id"": ""diet_food.csv"",
-""fields"": ""name"", ""unit_cost"", ""qmin"", ""qmax""],
-"
-B92F42609B54B82BFE38A69B781052E876258C2C_10,B92F42609B54B82BFE38A69B781052E876258C2C,"""values"": ""Roasted Chicken"", 0.84, 0, 10], ""Spaghetti W/ Sauce"", 0.78, 0, 10], ""Tomato,Red,Ripe,Raw"", 0.27, 0, 10], ""Apple,Raw,W/Skin"", 0.24, 0, 10], ""Grapes"", 0.32, 0, 10], ""Chocolate Chip Cookies"", 0.03, 0, 10], ""Lowfat Milk"", 0.23, 0, 10], ""Raisin Brn"", 0.34, 0, 10], ""Hotdog"", 0.31, 0, 10]]
-}, {
-""id"": ""diet_food_nutrients.csv"",
-""fields"": ""Food"", ""Calories"", ""Calcium"", ""Iron"", ""Vit_A"", ""Dietary_Fiber"", ""Carbohydrates"", ""Protein""],
-""values"": ""Spaghetti W/ Sauce"", 358.2, 80.2, 2.3, 3055.2, 11.6, 58.3, 8.2], ""Roasted Chicken"", 277.4, 21.9, 1.8, 77.4, 0, 0, 42.2], ""Tomato,Red,Ripe,Raw"", 25.8, 6.2, 0.6, 766.3, 1.4, 5.7, 1], ""Apple,Raw,W/Skin"", 81.4, 9.7, 0.2, 73.1, 3.7, 21, 0.3], ""Grapes"", 15.1, 3.4, 0.1, 24, 0.2, 4.1, 0.2], ""Chocolate Chip Cookies"", 78.1, 6.2, 0.4, 101.8, 0, 9.3, 0.9], ""Lowfat Milk"", 121.2, 296.7, 0.1, 500.2, 0, 11.7, 8.1], ""Raisin Brn"", 115.1, 12.9, 16.8, 1250.2, 4, 27.9, 4], ""Hotdog"", 242.1, 23.5, 2.3, 0, 0, 18, 10.4]]
-}, {
-""id"": ""diet_nutrients.csv"",
-""fields"": ""name"", ""qmin"", ""qmax""],
-"
-B92F42609B54B82BFE38A69B781052E876258C2C_11,B92F42609B54B82BFE38A69B781052E876258C2C,"""values"": ""Calories"", 2000, 2500], ""Calcium"", 800, 1600], ""Iron"", 10, 30], ""Vit_A"", 5000, 50000], ""Dietary_Fiber"", 25, 100], ""Carbohydrates"", 0, 300], ""Protein"", 50, 100]]
-}],
-""output_data"": [
-{
-""id"": ""..csv""
-}
-],
-""solve_parameters"": {
-""oaas.logAttachmentName"": ""log.txt"",
-""oaas.logTailEnabled"": ""true""
-},
-""status"": {
-""state"": ""queued""
-}
-},
-""deployment"": {
-""id"": ""DEPLOYMENT-ID""
-},
-""platform_job"": {
-""job_id"": """",
-""run_id"": """"
-}
-},
-""metadata"": {
-""created_at"": ""2020-07-17T10:42:42.783Z"",
-""id"": ""JOB-ID"",
-""name"": ""test-job-diet"",
-""space_id"": ""SPACE-ID""
-}
-}
-8. You can also monitor job states. Use the JOB-IDFor example:
-
-curl --location --request GET
-""https://us-south.ml.cloud.ibm.com/ml/v4/deployment_jobs/JOB-ID-HERE?version=2020-08-01&space_id=SPACE-ID-HERE""
--H ""Authorization: bearer TOKEN-HERE""
--H ""Content-Type: application/json""
-
-Output example: (job has completed)
-
-{
-""entity"": {
-""decision_optimization"": {
-""input_data"": [{
-""id"": ""diet_food.csv"",
-""fields"": ""name"", ""unit_cost"", ""qmin"", ""qmax""],
-"
-B92F42609B54B82BFE38A69B781052E876258C2C_12,B92F42609B54B82BFE38A69B781052E876258C2C,"""values"": ""Roasted Chicken"", 0.84, 0, 10], ""Spaghetti W/ Sauce"", 0.78, 0, 10], ""Tomato,Red,Ripe,Raw"", 0.27, 0, 10], ""Apple,Raw,W/Skin"", 0.24, 0, 10], ""Grapes"", 0.32, 0, 10], ""Chocolate Chip Cookies"", 0.03, 0, 10], ""Lowfat Milk"", 0.23, 0, 10], ""Raisin Brn"", 0.34, 0, 10], ""Hotdog"", 0.31, 0, 10]]
-}, {
-""id"": ""diet_food_nutrients.csv"",
-""fields"": ""Food"", ""Calories"", ""Calcium"", ""Iron"", ""Vit_A"", ""Dietary_Fiber"", ""Carbohydrates"", ""Protein""],
-""values"": ""Spaghetti W/ Sauce"", 358.2, 80.2, 2.3, 3055.2, 11.6, 58.3, 8.2], ""Roasted Chicken"", 277.4, 21.9, 1.8, 77.4, 0, 0, 42.2], ""Tomato,Red,Ripe,Raw"", 25.8, 6.2, 0.6, 766.3, 1.4, 5.7, 1], ""Apple,Raw,W/Skin"", 81.4, 9.7, 0.2, 73.1, 3.7, 21, 0.3], ""Grapes"", 15.1, 3.4, 0.1, 24, 0.2, 4.1, 0.2], ""Chocolate Chip Cookies"", 78.1, 6.2, 0.4, 101.8, 0, 9.3, 0.9], ""Lowfat Milk"", 121.2, 296.7, 0.1, 500.2, 0, 11.7, 8.1], ""Raisin Brn"", 115.1, 12.9, 16.8, 1250.2, 4, 27.9, 4], ""Hotdog"", 242.1, 23.5, 2.3, 0, 0, 18, 10.4]]
-}, {
-""id"": ""diet_nutrients.csv"",
-""fields"": ""name"", ""qmin"", ""qmax""],
-"
-B92F42609B54B82BFE38A69B781052E876258C2C_13,B92F42609B54B82BFE38A69B781052E876258C2C,"""values"": ""Calories"", 2000, 2500], ""Calcium"", 800, 1600], ""Iron"", 10, 30], ""Vit_A"", 5000, 50000], ""Dietary_Fiber"", 25, 100], ""Carbohydrates"", 0, 300], ""Protein"", 50, 100]]
-}],
-""output_data"": [{
-""fields"": ""Name"", ""Value""],
-""id"": ""kpis.csv"",
-""values"": ""Total Calories"", 2000], ""Total Calcium"", 800.0000000000001], ""Total Iron"", 11.278317739831891], ""Total Vit_A"", 8518.432542485823], ""Total Dietary_Fiber"", 25], ""Total Carbohydrates"", 256.80576358904455], ""Total Protein"", 51.17372234135308], ""Minimal cost"", 2.690409171696264]]
-}, {
-""fields"": ""name"", ""value""],
-""id"": ""solution.csv"",
-""values"": ""Spaghetti W/ Sauce"", 2.1551724137931036], ""Chocolate Chip Cookies"", 10], ""Lowfat Milk"", 1.8311671008899097], ""Hotdog"", 0.9296975991385925]]
-}],
-""output_data_references"": [],
-""solve_parameters"": {
-""oaas.logAttachmentName"": ""log.txt"",
-""oaas.logTailEnabled"": ""true""
-},
-""solve_state"": {
-""details"": {
-""KPI.Minimal cost"": ""2.690409171696264"",
-""KPI.Total Calcium"": ""800.0000000000001"",
-""KPI.Total Calories"": ""2000.0"",
-""KPI.Total Carbohydrates"": ""256.80576358904455"",
-""KPI.Total Dietary_Fiber"": ""25.0"",
-""KPI.Total Iron"": ""11.278317739831891"",
-""KPI.Total Protein"": ""51.17372234135308"",
-"
-B92F42609B54B82BFE38A69B781052E876258C2C_14,B92F42609B54B82BFE38A69B781052E876258C2C,"""KPI.Total Vit_A"": ""8518.432542485823"",
-""MODEL_DETAIL_BOOLEAN_VARS"": ""0"",
-""MODEL_DETAIL_CONSTRAINTS"": ""7"",
-""MODEL_DETAIL_CONTINUOUS_VARS"": ""9"",
-""MODEL_DETAIL_INTEGER_VARS"": ""0"",
-""MODEL_DETAIL_KPIS"": ""[""Total Calories"", ""Total Calcium"", ""Total Iron"", ""Total Vit_A"", ""Total Dietary_Fiber"", ""Total Carbohydrates"", ""Total Protein"", ""Minimal cost""]"",
-""MODEL_DETAIL_NONZEROS"": ""57"",
-""MODEL_DETAIL_TYPE"": ""LP"",
-""PROGRESS_CURRENT_OBJECTIVE"": ""2.6904091716962637""
-},
-""latest_engine_activity"": [
-""2020-07-21T16:37:36Z, INFO] Model: diet"",
-""2020-07-21T16:37:36Z, INFO] - number of variables: 9"",
-""2020-07-21T16:37:36Z, INFO] - binary=0, integer=0, continuous=9"",
-""2020-07-21T16:37:36Z, INFO] - number of constraints: 7"",
-""2020-07-21T16:37:36Z, INFO] - linear=7"",
-""2020-07-21T16:37:36Z, INFO] - parameters: defaults"",
-""2020-07-21T16:37:36Z, INFO] - problem type is: LP"",
-""2020-07-21T16:37:36Z, INFO] Warning: Model: ""diet"" is not a MIP problem, progress listeners are disabled"",
-""2020-07-21T16:37:36Z, INFO] objective: 2.690"",
-""2020-07-21T16:37:36Z, INFO] ""Spaghetti W/ Sauce""=2.155"",
-""2020-07-21T16:37:36Z, INFO] ""Chocolate Chip Cookies""=10.000"",
-""2020-07-21T16:37:36Z, INFO] ""Lowfat Milk""=1.831"",
-""2020-07-21T16:37:36Z, INFO] ""Hotdog""=0.930"",
-"
-B92F42609B54B82BFE38A69B781052E876258C2C_15,B92F42609B54B82BFE38A69B781052E876258C2C,"""2020-07-21T16:37:36Z, INFO] solution.csv""
-],
-""solve_status"": ""optimal_solution""
-},
-""status"": {
-""completed_at"": ""2020-07-21T16:37:36.989Z"",
-""running_at"": ""2020-07-21T16:37:35.622Z"",
-""state"": ""completed""
-}
-},
-""deployment"": {
-""id"": ""DEPLOYMENT-ID""
-}
-},
-""metadata"": {
-""created_at"": ""2020-07-21T16:37:09.130Z"",
-""id"": ""JOB-ID"",
-""modified_at"": ""2020-07-21T16:37:37.268Z"",
-""name"": ""test-job-diet"",
-""space_id"": ""SPACE-ID""
-}
-}
-9. Optional: You can delete jobs as follows:
-
-curl --location --request DELETE ""https://us-south.ml.cloud.ibm.com/ml/v4/deployment_jobs/JOB-ID-HERE?version=2020-08-01&space_id=SPACE-ID-HERE&hard_delete=true""
--H ""Authorization: bearer TOKEN-HERE""
-
-"
-DEB599F49C3E459A08E8BF25304B063B50CAA294_0,DEB599F49C3E459A08E8BF25304B063B50CAA294," Deploying a Decision Optimization model by using the user interface
-
-You can save a model for deployment in the Decision Optimization experiment UI and promote it to your Watson Machine Learning deployment space.
-
-"
-DEB599F49C3E459A08E8BF25304B063B50CAA294_1,DEB599F49C3E459A08E8BF25304B063B50CAA294," Procedure
-
-To save your model for deployment:
-
-
-
-1. In the Decision Optimization experiment UI, either from the Scenario or from the Overview pane, click the menu icon  for the scenario that you want to deploy, and select Save for deployment
-2. Specify a name for your model and add a description, if needed, then click Next.
-
-
-
-1. Review the Input and Output schema and select the tables you want to include in the schema.
-2. Review the Run parameters and add, modify or delete any parameters as necessary.
-3. Review the Environment and Model files that are listed in the Review and save window.
-4. Click Save.
-
-
-
-The model is then available in the Models section of your project.
-
-
-
-To promote your model to your deployment space:
-
-
-
-3. View your model in the Models section of your project.You can see a summary with input and output schema. Click Promote to deployment space.
-4. In the Promote to space window that opens, check that the Target space field displays the name of your deployment space and click Promote.
-5. Click the link deployment space in the message that you receive that confirms successful promotion. Your promoted model is displayed in the Assets tab of your Deployment space. The information pane shows you the Type, Software specification, description and any defined tags such as the Python version used.
-
-
-
-To create a new deployment:
-
-
-
-6. From the Assets tab of your deployment space, open your model and click New Deployment.
-7. In the Create a deployment window that opens, specify a name for your deployment and select a Hardware specification.Click Create to create the deployment. Your deployment window opens from which you can later create jobs.
-
-
-
-
-
-"
-DEB599F49C3E459A08E8BF25304B063B50CAA294_2,DEB599F49C3E459A08E8BF25304B063B50CAA294," Creating and running Decision Optimization jobs
-
-You can create and run jobs to your deployed model.
-
-"
-DEB599F49C3E459A08E8BF25304B063B50CAA294_3,DEB599F49C3E459A08E8BF25304B063B50CAA294," Procedure
-
-
-
-1. Return to your deployment space by using the navigation path and (if the data pane isn't already open) click the data icon to open the data pane. Upload your input data tables, and solution and kpi output tables here. (You must have output tables defined in your model to be able to see the solution and kpi values.)
-2. Open your deployment model, by selecting it in the Deployments tab of your deployment space and click New job.
-3. Define the details of your job by entering a name, and an optional description for your job and click Next.
-4. Configure your job by selecting a hardware specification and Next.You can choose to schedule you job here, or leave the default schedule option off and click Next. You can also optionally choose to turn on notifications or click Next.
-5. Choose the data that you want to use in your job by clicking Select the source for each of your input and output tables. Click Next.
-"
-95689297B729A4186914E81A59FFB3A09289F8D8,95689297B729A4186914E81A59FFB3A09289F8D8," Python client examples
-
-You can deploy a Decision Optimization model, create and monitor jobs, and get solutions by using the Watson Machine Learning Python client.
-
-To deploy your model, see [Model deployment](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/ModelDeploymentTaskCloud.html).
-
-For more information, see [Watson Machine Learning Python client documentation](https://ibm.github.io/watson-machine-learning-sdk/core_api.htmldeployments).
-
-See also the following sample notebooks located in the jupyter folder of the [DO-samples](https://github.com/IBMDecisionOptimization/DO-Samples). Select the relevant product and version subfolder..
-
-
-
-* Deploying a DO model with WML
-* RunDeployedModel
-* ExtendWMLSoftwareSpec
-
-
-
-The Deploying a DO model with WML sample shows you how to deploy a Decision Optimization model, create and monitor jobs, and get solutions by using the Watson Machine Learning Python client. This notebook uses the diet sample for the Decision Optimization model and takes you through the whole procedure without using the Decision Optimization experiment UI.
-
-The RunDeployedModel shows you how to run jobs and get solutions from an existing deployed model. This notebook uses a model that is saved for deployment from a Decision Optimization experiment UI scenario.
-
-The ExtendWMLSoftwareSpec notebook shows you how to extend the Decision Optimization software specification within Watson Machine Learning. By extending the software specification, you can use your own pip package to add custom code and deploy it in your model and send jobs to it.
-
-You can also find in the samples several notebooks for deploying various models, for example CPLEX, DOcplex and OPL models with different types of data.
-"
-135AD82FAAA11FD4FEC7CE7A31516E98EE3D0EA5_0,135AD82FAAA11FD4FEC7CE7A31516E98EE3D0EA5," Solve parameters
-
-To control solve behavior, you can specify Decision Optimization solve parameters in your request as named value pairs.
-
-For example:
-
-""solve_parameters"" : {
-""oaas.logAttachmentName"":""log.txt"",
-""oaas.logTailEnabled"":""true""
-}
-
-You can use this code to collect the engine log tail during the solve and the whole engine log as output at the end of the solve.
-
-You can use these parameters in your request.
-
-
-
- Name Type Description
-
- oaas.timeLimit Number You can use this parameter to set a time limit in milliseconds.
- oaas.resultsFormat Enum * JSON * CSV * XML * TEXT * XLSX Specifies the format for returned results. The default formats are as follows: * CPLEX - .xml * CPO - .json * OPL - .csv * DOcplex - .json Other formats might or might not be supported depending on the application type.
- oaas.oplRunConfig String Specifies the name of the OPL run configuration to be executed.
- oaas.docplex.python 3.10 You can use this parameter to set the Python version for the run in your deployed model. If not specified, 3.10 is used by default.
- oaas.logTailEnabled Boolean Use this parameter to include the log tail in the solve status.
- oaas.logAttachmentName String If defined, engine logs will be defined as a job output attachment.
-"
-135AD82FAAA11FD4FEC7CE7A31516E98EE3D0EA5_1,135AD82FAAA11FD4FEC7CE7A31516E98EE3D0EA5," oaas.engineLogLevel Enum * OFF * SEVERE * WARNING * INFO * CONFIG * FINE * FINER * FINEST You can use this parameter to define the level of detail that is provided by the engine log. The default value is INFO.
- oaas.logLimit Number Maximum log-size limit in number of characters.
- oaas.dumpZipName Can be viewed as Boolean (see Description) If defined, a job dump (inputs and outputs) .zip file is provided with this name as a job output attachment. The name can contain a placeholder ${job_id}. If defined with no value, dump_${job_id}.zip attachmentName is used. If not defined, by default, no job dump .zip file is attached.
-"
-135AD82FAAA11FD4FEC7CE7A31516E98EE3D0EA5_2,135AD82FAAA11FD4FEC7CE7A31516E98EE3D0EA5," oaas.dumpZipRules String If defined, ta .zip file is generated according to specific job rules (RFC 1960-based Filter). It must be used in conjunction with the {@link DUMP_ZIP_NAME} parameter. Filters can be defined on the duration and the following {@link com.ibm.optim.executionservice.model.solve.SolveState} properties: * duration * solveState.executionStatus * solveState.interruptionStatus * solveState.solveStatus * solveState.failureInfo.type Example: (duration>=1000) or (&(duration<1000)(!(solveState.solveStatus=OPTIMAL_SOLUTION))) or ( (solveState.interruptionStatus=OUT_OF_MEMORY) (solveState.failureInfo.type=INFRASTRUCTURE)) (duration>=1000) or (&(duration<1000)(!(solveState.solveStatus=OPTIMAL_SOLUTION))) or (|(solveState.interruptionStatus=OUT_OF_MEMORY) (solveState.failureInfo.type=INFRASTRUCTURE))
-"
-939233F807850AE8D28246ADE7FDCCDA66E9DF03_0,939233F807850AE8D28246ADE7FDCCDA66E9DF03," Model deployment
-
-To deploy a Decision Optimization model, create a model ready for deployment in your deployment space and then upload your model as an archive. When deployed, you can submit jobs to your model and monitor job states.
-
-"
-939233F807850AE8D28246ADE7FDCCDA66E9DF03_1,939233F807850AE8D28246ADE7FDCCDA66E9DF03," Procedure
-
-To deploy a Decision Optimization model:
-
-
-
-1. Package your Decision Optimization model formulation with your common data (optional) ready for deployment as a tar.gz, .zip, or .jar file. Your archive can include the following optional files:
-
-
-
-1. Your model files
-2. Settings (For more information, see [ Solve parameters](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeploySolveParams.htmltopic_deploysolveparams) )
-3. Common data
-
-
-
-Note: For Python models with multiple .py files, put all files in the same folder in your archive. The same folder must contain a main file called main.py. Do not use subfolders.
-2. Create a model ready for deployment in Watson Machine Learning providing the following information:
-
-
-
-* Machine Learning service instance
-* Deployment space instance
-* Software specification ( Decision Optimizationruntime version):
-
-
-
-* do_ 22.1 runtime is based on CPLEX 22.1.1.0
-* do_ 20.1 runtime is based on CPLEX 20.1.0.1
-
-
-
-You can extend the software specification provided by Watson Machine Learning. See the [ExtendWMLSoftwareSpec](https://github.com/IBMDecisionOptimization/DO-Samples/blob/watson_studio_cloud/jupyter/watsonx.ai%20and%20Cloud%20Pak%20for%20Data%20as%20a%20Service/ExtendWMLSoftwareSpec.ipynb) notebook in the jupyter folder of the [DO-samples](https://github.com/IBMDecisionOptimization/DO-Samples).
-
-Updating CPLEX runtimes:
-
-"
-939233F807850AE8D28246ADE7FDCCDA66E9DF03_2,939233F807850AE8D28246ADE7FDCCDA66E9DF03,"If you previously deployed your model with a CPLEX runtime that is no longer supported, you can update your existing deployed model by using either the [ REST API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.htmlupdate-soft-specs-api) or the [UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.htmldiscont-soft-spec).
-* The model type:
-
-
-
-* opl (do-opl_)
-* cplex (do-cplex_)
-* cpo (do-cpo_)
-* docplex (do-docplex_) using Python 3.10
-
-
-
-(The Runtime version can be one of the available runtimes so, for example, an opl model with runtime 22.1 would have the model type do-opl_ 22.1.)
-
-
-
-You obtain a MODEL-ID. Your Watson Machine Learning model can then be used in one or multiple deployments.
-3. Upload your model archive (tar.gz, .zip, or .jar file) on Watson Machine Learning. See [Model input and output data file formats](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/ModelIOFileFormats.htmltopic_modelIOFileFormats) for information about input file types.
-4. Deploy your model by using the MODEL-ID, SPACE-ID, and the hardware specification for the available configuration sizes (small S, medium M, large L, extra large XL). See [configurations](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/Paralleljobs.htmltopic_paralleljobs__34c6).You obtain a DEPLOYMENT-ID.
-5. Monitor the deployment by using the DEPLOYMENT-ID. Deployment states can be: initializing, updating, ready, or failed.
-"
-02C5718919D676E7EA14D16AC226407CC675C95E,02C5718919D676E7EA14D16AC226407CC675C95E," Model execution
-
-Once your model is deployed, you can submit Decision Optimization jobs to this deployment.
-
-You can submit jobs specifying the:
-
-
-
-* Input data: the transaction data used as input by the model. This can be inline or referenced
-* Output data: to define how the output data is generated by model. This is returned as inline or referenced data.
-* Solve parameters: to customize the behavior of the solution engine
-
-
-
-For more information see [Model input and output data adaptation](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/ModelIODataDefn.htmltopic_modelIOAdapt)
-
-After submitting a job, you can use the job-id to poll the job status to collect the:
-
-
-
-* Job execution status or error message
-* Solve execution status, progress and log tail
-* Inline or referenced output data
-
-
-
-Job states can be : queued, running, completed, failed, canceled.
-"
-E9E9556CA0C7B258D910BB31222A78BEABB46A48_0,E9E9556CA0C7B258D910BB31222A78BEABB46A48," Model input and output data adaptation
-
-When submitting your job you can include your data inline or reference your data in your request. This data will be mapped to a file named with data identifier and used by the model. The data identifier extension will define the format of the file used.
-
-The following adaptations are supported:
-
-
-
-* Tabular inline data to embed your data in your request. For example:
-
-""input_data"": [{
-""id"":""diet_food.csv"",
-""fields"" : ""name"",""unit_cost"",""qmin"",""qmax""],
-""values"" :
-""Roasted Chicken"", 0.84, 0, 10]
-]
-}]
-
-This will generate the corresponding diet_food.csv file that is used as the model input file. Only csv adaptation is currently supported.
-* Inline data, that is, non-tabular data (such as an OPL .dat file or an .lpfile) to embed data in your request. For example:
-
-""input_data"": [{
-""id"":""diet_food.csv"",
-""content"":""Input data as a base64 encoded string""
-}]
-* URL referenced data allowing you to reference files stored at a particular URL or REST data service. For example:
-
-""input_data_references"": {
-""type"": ""url"",
-""id"": ""diet_food.csv"",
-""connection"": {
-""verb"": ""GET"",
-""url"": ""https://myserver.com/diet_food.csv"",
-""headers"": {
-""Content-Type"": ""application/x-www-form-urlencoded""
-}
-},
-""location"": {}
-}
-
-This will copy the corresponding diet_food.csv file that is used as the model input file.
-* Data assets allowing you to reference any data asset or connected data asset present in your space and benefit from the data connector integration capabilities. For example:
-
-""input_data_references"": [{
-""name"": ""test_ref_input"",
-""type"": ""data_asset"",
-""connection"": {},
-""location"": {
-"
-E9E9556CA0C7B258D910BB31222A78BEABB46A48_1,E9E9556CA0C7B258D910BB31222A78BEABB46A48,"""href"": ""/v2/assets/ASSET-ID?space_id=SPACE-ID""
-}
-}],
-""output_data_references"": [{
-""type"": ""data_asset"",
-""connection"": {},
-""location"": {
-""href"": ""/v2/assets/ASSET-ID?space_id=SPACE-ID""
-}
-}]
-
-With this data asset type there are many different connections available. For more information, see [Batch deployment details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-details.htmldo).
-* Connection assets allowing you to reference any data and then refer to the connection, without having to specify credentials each time. For more information, see [Supported data sources in Decision Optimization](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/DOconnections.html). Referencing a secure connection without having to use inline credentials in the payload also offers you better security. For more information, see [Example connection_asset payload](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.htmlconnection_asset_payload).For example, to connect to a COS/S3 via a Connection asset:
-
-{
-""type"" : ""connection_asset"",
-""id"" : ""diet_food.csv"",
-""connection"" : {
-""id"" :
-},
-""location"" : {
-""file_name"" : ""FILENAME.csv"",
-""bucket"" : ""BUCKET-NAME""
-}
-}
-
-For information about the parameters used in these examples, see [Deployment job definitions](https://cloud.ibm.com/apidocs/machine-learning-cpdeployment-job-definitions-create).
-
-Another example showing you how to connect to a DB2 asset via a connection asset:
-
-{
-""type"" : ""connection_asset"",
-"
-E9E9556CA0C7B258D910BB31222A78BEABB46A48_2,E9E9556CA0C7B258D910BB31222A78BEABB46A48,"""id"" : ""diet_food.csv"",
-""connection"" : {
-""id"" :
-},
-""location"" : {
-""table_name"" : ""TABLE-NAME"",
-""schema_name"" : ""SCHEMA-NAME""
-}
-}
-
-
-
-With this connection asset type there are many different connections available. For more information, see [Batch deployment details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-details.htmldo).
-
-You can combine different adaptations in the same request. For more information about data definitions see [Adding data to an analytics project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/add-data-project.html).
-"
-977988398EFBDCD10DB4ACED047D8D864883614A_0,977988398EFBDCD10DB4ACED047D8D864883614A," Model input and output data file formats
-
-With your Decision Optimization model, you can use the following input and output data identifiers and extension combinations.
-
-This table shows the supported file type combinations for Decision Optimization in Watson Machine Learning:
-
-
-
- Model type Input file type Output file type Comments
-
- cplex .lp .mps .sav .feasibility .prm .jar for Java™ models .xml .json The name of the output file must be solution The output format can be specified by using the API. Files of type .lp, .mps, and .sav can be compressed by using gzip or bzip2, and uploaded as, for example, .lp.gz or .sav.bz2. The schemas for the CPLEX formats for solutions, conflicts, and feasibility files are available for you to download in the cplex_xsds.zip archive from the [Decision Optimization github](https://github.com/IBMDecisionOptimization/DO-Samples/blob/watson_studio_cloud/resources/cplex_xsds.zip).
- cpo .cpo .jar for Java models .xml .json The name of the output file must be solution The output format can be specified by using the solve parameter. For the native file format for CPO models, see: [CP Optimizer file format syntax](https://www.ibm.com/docs/en/icos/20.1.0?topic=manual-cp-optimizer-file-format-syntax).
-"
-977988398EFBDCD10DB4ACED047D8D864883614A_1,977988398EFBDCD10DB4ACED047D8D864883614A," opl .mod .dat .oplproject .xls .json .csv .jar for Java models .xml .json .txt .csv .xls The output format is consistent with the input type but can be specified by using the solve parameter if needed. To take advantage of data connectors, use the .csv format. Only models that are defined with tuple sets can be deployed; other OPL structures are not supported. To read and write input and output in OPL, see [OPL models](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/OPLmodels.htmltopic_oplmodels).
- docplex .py . (input data) Any output file type that is specified in the model. Any format can be used in your Python code, but to take advantage of data connectors, use the .csv format. To read and write input and output in Python, use the commands get_input_stream(""filename"") and get_output_stream(""filename""). See [DOcplex API sum example](https://ibmdecisionoptimization.github.io/docplex-doc/2.23.222/mp/docplex.util.environment.html)
-
-
-
-Data identifier restrictions
-: A file name has the following restrictions:
-
-
-
-* Is limited to 255 characters
-* Can include only ASCII characters
-"
-D476F3E93D23F52EF1D5079343D92DB793E3AD5E,D476F3E93D23F52EF1D5079343D92DB793E3AD5E," Output data definition
-
-When submitting your job you can define what output data you want and how you collect it (as either inline or referenced data).
-
-For more information about output file types and names see [Model input and output data file formats](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/ModelIOFileFormats.htmltopic_modelIOFileFormats).
-
-Some output data definition examples:
-
-
-
-* To collect solution.csv output as inline data:
-
-""output_data"": [{
-""id"":""solution.csv""
-}]
-* Regexp can be also used as an identifier. For example to collect all csv output files as inline data:
-
-""output_data"": [{
-""id"":""..csv""
-}]
-* Similarly for reference data, to collect all csv files in COS/S3 in job specific folder, you can combine regexp and ${job_id} and ${ attachment_name } place holder
-
-""output_data_references"": [{
-""id"":""..csv"",
-""type"": ""connection_asset"",
-""connection"": {
-""id"" :
-},
-""location"": {
-""bucket"": ""XXXXXXXXX"",
-""path"": ""${job_id}/${attachment_name}"" }
-}]
-
-For example, here if you have a job with identifier to generate a solution.csv file, you will have in your COS/S3 bucket, a XXXXXXXXX / solution.csv file.
-"
-693BC91EAADEAE664982AA88A372590A6758F294_0,693BC91EAADEAE664982AA88A372590A6758F294," Running jobs
-
-Decision Optimization uses Watson Machine Learning asynchronous APIs to enable jobs to be run in parallel.
-
-To solve a problem, you can create a new job from the model deployment and associate data to it. See [Deployment steps](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployIntro.htmltopic_wmldeployintro) and the [REST API example](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployModelRest.htmltask_deploymodelREST). You are not charged for deploying a model. Only the solving of a model with some data is charged, based on the running time.
-
-To solve more than one job at a time, specify more than one node when you create your deployment. For example in this [REST API example](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployModelRest.htmltask_deploymodelREST__createdeploy), increment the number of the nodes by changing the value of the nodes property: ""nodes"" : 1.
-
-
-
-1. The new job is sent to the queue.
-2. If a POD is started but idle (not running a job), it immediately begins processing this job.
-3. Otherwise, if the maximum number of nodes is not reached, a new POD is started. (Starting a POD can take a few seconds). The job is then assigned to this new POD for processing.
-4. Otherwise, the job waits in the queue until one of the running PODs has finished and can pick up the waiting job.
-
-
-
-The configuration of PODs of each size is as follows:
-
-
-
-Table 1. T-shirt sizes for Decision Optimization
-
- Definition Name Description
-
- 2 vCPU and 8 GB S Small
- 4 vCPU and 16 GB M Medium
- 8 vCPU and 32 GB L Large
- 16 vCPU and 64 GB XL Extra Large
-
-
-
-For all configurations, 1 vCPU and 512 MB are reserved for internal use.
-
-"
-693BC91EAADEAE664982AA88A372590A6758F294_1,693BC91EAADEAE664982AA88A372590A6758F294,"In addition to the solve time, the pricing depends on the selected size through a multiplier.
-
-In the deployment configuration, you can also set the maximal number of nodes to be used.
-
-Idle PODs are automatically stopped after some timeout. If a new job is submitted when no PODs are up, it takes some time (approximately 30 seconds) for the POD to restart.
-"
-73DEFA42948BBE878834CA4B7C9B0395F44B9B90_0,73DEFA42948BBE878834CA4B7C9B0395F44B9B90," Changing Python version for an existing deployed model with the REST API
-
-You can update an existing Decision Optimization model using the Watson Machine Learning REST API. This can be useful, for example, if in your model you have explicitly specified a Python version that has now become deprecated.
-
-"
-73DEFA42948BBE878834CA4B7C9B0395F44B9B90_1,73DEFA42948BBE878834CA4B7C9B0395F44B9B90," Procedure
-
-To change Python version for an existing deployed model:
-
-
-
-1. Create a revision to your Decision Optimization model
-
-All API requests require a version parameter that takes a date in the format version=YYYY-MM-DD. This code example posts a model that uses the file update_model.json. The URL will vary according to the chosen region/location for your machine learning service.
-
-curl --location --request POST
-""https://us-south.ml.cloud.ibm.com/ml/v4/models/MODEL-ID-HERE/revisions?version=2021-12-01""
--H ""Authorization: bearer TOKEN-HERE""
--H ""Content-Type: application/json""
--d @revise_model.json
-
-The revise_model.json file contains the following code:
-
-{
-""commit_message"": ""Save current model"",
-""space_id"": ""SPACE-ID-HERE""
-}
-
-Note the model revision number ""rev"" that is provided in the output for use in the next step.
-2. Update an existing deployment so that current jobs will not be impacted:
-
-curl --location --request PATCH
-""https://us-south.ml.cloud.ibm.com/ml/v4/deployments/DEPLOYMENT-ID-HERE?version=2021-12-01&space_id=SPACE-ID-HERE""
--H ""Authorization: bearer TOKEN-HERE""
--H ""Content-Type: application/json""
--d @revise_deploy.json
-
-The revise_deploy.json file contains the following code:
-
-[
-{
-""op"": ""add"",
-""path"": ""/asset"",
-""value"": {
-""id"":""MODEL-ID-HERE"",
-""rev"":""MODEL-REVISION-NUMBER-HERE""
-}
-}
-]
-3. Patch an existing model to explicitly specify Python version 3.10
-
-curl --location --request PATCH
-"
-73DEFA42948BBE878834CA4B7C9B0395F44B9B90_2,73DEFA42948BBE878834CA4B7C9B0395F44B9B90,"""https://us-south.ml.cloud.ibm.com/ml/v4/models/MODEL-ID-HERE?rev=MODEL-REVISION-NUMBER-HERE&version=2021-12-01&space_id=SPACE-ID-HERE""
--H ""Authorization: bearer TOKEN-HERE""
--H ""Content-Type: application/json""
--d @update_model.json
-
-The update_model.json file, with the default Python version stated explicitly, contains the following code:
-
-[
-{
-""op"": ""add"",
-""path"": ""/custom"",
-""value"": {
-""decision_optimization"":{
-""oaas.docplex.python"": ""3.10""
-}
-}
-}
-]
-
-Alternatively, to remove any explicit mention of a Python version so that the default version will always be used:
-
-[
-{
-""op"": ""remove"",
-""path"": ""/custom/decision_optimization""
-}
-]
-4. Patch the deployment to use the model that was created for Python to use version 3.10
-
-curl --location --request PATCH
-""https://us-south.ml.cloud.ibm.com/ml/v4/deployments/DEPLOYMENT-ID-HERE?version=2021-12-01&space_id=SPACE-ID-HERE""
--H ""Authorization: bearer TOKEN-HERE""
--H ""Content-Type: application/json""
--d @update_deploy.json
-
-The update_deploy.json file contains the following code:
-
-[
-{
-""op"": ""add"",
-""path"": ""/asset"",
-""value"": { ""id"":""MODEL-ID-HERE""}
-"
-1BB1684259F93D91580690D898140D98F12611ED,1BB1684259F93D91580690D898140D98F12611ED," Decision Optimization
-
-When you have created and solved your Decision Optimization models, you can deploy them using Watson Machine Learning.
-
-See the [Decision Optimization experiment UI](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/buildingmodels.htmltopic_buildingmodels) for building and solving models. The following sections describe how you can deploy your models.
-"
-A255BB890CA287C5A91765B71832DAA45BA4132B_0,A255BB890CA287C5A91765B71832DAA45BA4132B," Global visualization preferences
-
-You can override the default settings for titles, range slider, grid lines, and mouse tracking. You can also specify a different color scheme template.
-
-
-
-1. In Visualizations, click the Global visualization preferences control in the Actions section.
-
-The Global visualization preferences dialog provides the following settings.
-
-Titles
-: Provides global chart title settings.
-
-Global titles
-: Enables or disables the global titles for all charts.
-
-Global primary title
-: Enables or disables the display of global, primary chart titles. When enabled, the top-level chart title that you enter here is applied to all chart's, effectively overriding each chart's individual Primary title setting.
-
-Global subtitle
-: Enables or disables the display of global chart subtitles. When enabled, the chart subtitle that you enter here is applied to all chart's, effectively overriding each chart's individual Subtitle setting.
-
-Default titles
-: Enables or disables the default titles for all charts.
-
-Title alignment
-: Provides the title alignment options Left, Center (the default setting), and Right.
-
-Tools
-: Provides options that control chart behavior.
-
-Range slider
-: Enables or disables the range slider for each chart. When enabled, you can control the amount of chart data that displays with a range slider that is provided for each chart.
-
-Grid lines
-: Controls the display of X axis (vertical) and Y axis (horizontal) grid lines.
-
-Mouse tracker
-: When enabled, the mouse cursor location, in relation to the chart data, is tracked and displayed when placed anywhere over the chart.
-
-Toolbox
-: Enables or disables the toolbox for each chart. Depending on the chart type, the toolbox on the right of the screen provides tools such as zoom, save as image, restore, select data, and clear selection.
-
-ARIA
-: When enabled, web content and web applications are more accessible to users with disabilities.
-
-Filter out null
-: Enables or disables the filtering of null chart data.
-
-X axis on zero
-: When enabled, the X axis lies on the other's origin position. When not enabled, the X axis always starts at 0.
-
-Y axis on zero
-"
-A255BB890CA287C5A91765B71832DAA45BA4132B_1,A255BB890CA287C5A91765B71832DAA45BA4132B,": When enabled, the Y axis lies on the other's origin position. When not enabled, the Y axis always starts at 0.
-
-Show xAxis Label
-: Enables or disables the xAxis label.
-
-Show yAxis Label
-: Enables or disables the yAxis label.
-
-Show xAxis Line
-: Enables or disables the xAxis line.
-
-Show yAxis Line
-: Enables or disables the yAxis line.
-
-Show xAxis Name
-: Enables or disables the xAxis name.
-
-Show yAxis Name
-: Enables or disables the yAxis name.
-
-yAxis Name Location
-: The drop-down list provides options for specifying the yAxis name location. Options include Start, Middle, and End.
-
-Truncation length
-: The specified value sets the string length. Strings that are longer than the specified length are truncated. The default value is 10. When 0 is specified, truncation is turned off.
-
-xAxis tick label decimal
-: Sets the tick label decimal value for the xAxis. The default value is 3.
-
-yAxis tick label decimal
-: Sets the tick label decimal value for the yAxis. The default value is 3.
-
-xAxis tick label rotate
-: Sets the xAxis tick label rotation value. The default value is 0 (no rotation). You can specify value in the range -90 to 90 degrees.
-
-Theme
-"
-5D043091B2F2398611A819743FC83688D7658B22,5D043091B2F2398611A819743FC83688D7658B22," Visualizations layout and terms
-
-Canvas
-: The canvas is the area of the Visualizations dialog where you build the chart.
-
-Chart type
-: Lists the available chart types. The graphic elements are the items in the chart that represent data (bars, points, lines, and so on).
-
-Details pane
-: The Details pane provides the basic chart building blocks.
-
-Chart settings
-: Provides options for selecting which variables are used to build the chart, distribution method, title and subtitle fields, and so on. Depending on the selected chart type, the Details pane options might vary. For more information, see [Chart types](https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_charttypes.html).
-
-"
-9F5D44B3A96F8418BE317AD258E4932E468551BE,9F5D44B3A96F8418BE317AD258E4932E468551BE," 3D charts
-
-3D charts are commonly used to represent multiple-variable functions and include a z-axis variable that is a function of both the x and y-axis variables.
-"
-823EB607207DFD62D80671AF48451CCE1C44153F,823EB607207DFD62D80671AF48451CCE1C44153F," Bar charts
-
-Bar charts are useful for summarizing categorical variables. For example, you can use a bar chart to show the number of men and the number of women who participated in a survey. You can also use a bar chart to show the mean salary for men and the mean salary for women.
-"
-BECCA4C839A0BCF01ADCB6A5CE31A3B1168D3548,BECCA4C839A0BCF01ADCB6A5CE31A3B1168D3548," Box plots
-
-A box plot chart shows the five statistics (minimum, first quartile, median, third quartile, and maximum). It is useful for displaying the distribution of a scale variable and pinpointing outliers.
-"
-5466D9A71E87BB01000DC957683E9CD3C10AD8BC,5466D9A71E87BB01000DC957683E9CD3C10AD8BC," Bubble charts
-
-Bubble charts display categories in your groups as nonhierarchical packed circles. The size of each circle (bubble) is proportional to its value. Bubble charts are useful for comparing relationships in your data.
-"
-F7D94E6CD13F36EB9B1FE7653C436DC5745250B1,F7D94E6CD13F36EB9B1FE7653C436DC5745250B1," Candlestick charts
-
-Candlestick charts are a style of financial charts that are used to describe price movements of a security, derivative, or currency. Each candlestick element typically shows one day. A one-month chart might show the 20 trading days as 20 candlesticks elements. Candlestick charts are most often used in the analysis of equity and currency price patterns and are similar to box plots.
-
-The data set that is used to create a candlestick chart must contain open, high, low, and close values for each time period you want to display.
-"
-2C9D0D0309E01FF2EE0D298A16011857DE068038,2C9D0D0309E01FF2EE0D298A16011857DE068038," Chart types
-
-The gallery contains a collection of the most commonly used charts.
-"
-035430AFAC1E73483636073C5BF48BCF8B4F5E1D,035430AFAC1E73483636073C5BF48BCF8B4F5E1D," Circle packing charts
-
-Circle packing charts display hierarchical data as a set of nested areas to visualize a large amount of hierarchically structured data. It's similar to a treemap, but uses circles instead of rectangles. Circle packing charts use containment (nesting) to display hierarchy data.
-"
-49724D4B7690D4B215FE6F1C0A49C8B347F0C9A1,49724D4B7690D4B215FE6F1C0A49C8B347F0C9A1," Custom charts
-
-The custom charts option provides options for pasting or editing JSON code to create the wanted chart.
-"
-91B834E69C2153740973C59CF6B4D66260640342,91B834E69C2153740973C59CF6B4D66260640342," Dendrogram charts
-
-Dendrogram charts are similar to tree charts and are typically used to illustrate a network structure (for example, a hierarchical structure). Dendrogram charts consist of a root node that is connected to subordinate nodes through edges or branches. The last nodes in the hierarchy are called leaves.
-"
-2910B7C4CD65F8E4ADD1607791DD22BED468B61D,2910B7C4CD65F8E4ADD1607791DD22BED468B61D," Dual Y-axes charts
-
-A dual Y-axes chart summarizes or plots two Y-axes variables that have different domains. For example, you can plot the number of cases on one axis and the mean salary on another. This chart can also be a mix of different graphic elements so that the dual Y-axes chart encompasses several of the different chart types. Dual Y-axes charts can display the counts as a line and the mean of each category as a bar.
-"
-97492A97F355A95D56BCF768A62CA7FD75718086,97492A97F355A95D56BCF768A62CA7FD75718086," Error bar charts
-
-Error bar charts represent the variability of data and indicate the error (or uncertainty) in a reported measurement. Error bars help determine whether differences are statistically significant. Error bars can also suggest goodness of fit for a specific function.
-"
-41167E3AD363B416D508B03A300E5ACFAF83F042,41167E3AD363B416D508B03A300E5ACFAF83F042," Evaluation charts
-
-Evaluation charts are similar to histograms or collection graphs. Evaluation charts show how accurate models are in predicting particular outcomes. They work by sorting records based on the predicted value and confidence of the prediction, splitting the records into groups of equal size (quantiles), and then plotting the value of the criterion for each quantile, from highest to lowest. Multiple models are shown as separate lines in the plot.
-
-Outcomes are handled by defining a specific value or range of values as a ""hit"". Hits usually indicate success of some sort (such as a sale to a customer) or an event of interest (such as a specific medical diagnosis).
-
-Flag
-: Output fields are straightforward; hits correspond to true values.
-
-Nominal
-: For nominal output fields, the first value in the set defines a hit.
-
-Continuous
-: For continuous output fields, hits equal values greater than the midpoint of the field's range.
-
-Evaluation charts can also be cumulative so that each point equals the value for the corresponding quantile plus all higher quantiles. Cumulative charts usually convey the overall performance of models better, whereas noncumulative charts often excel at indicating particular problem areas for models.
-"
-57AB3726FA10435D26878C626F61988F7305B9E8,57AB3726FA10435D26878C626F61988F7305B9E8," Building a chart from the chart type gallery
-
-Use chart type gallery for building charts. Following are general steps for building a chart from the gallery.
-
-
-
-1. In the Chart Type section, select a chart category. A preview version of the selected chart type is shown on the chart canvas.
-2. If the canvas already displays a chart, the new chart replaces the chart's axis set and graphic elements.
-
-
-
-1. Depending on the selected chart type, the available variables are presented under a number of different headings in the Details pane (for example, Category for bar charts, X-axis and Y-axis for line charts). Select the appropriate variables for the selected chart type.
-
-
-
-"
-CC0ADF041F1628221CAC49A1BAEC1D497D762DC4,CC0ADF041F1628221CAC49A1BAEC1D497D762DC4," Heat map charts
-
-Heat map charts present data where the individual values that are contained in a matrix are represented as colors.
-"
-1453D1CAD565842EEA24C8D92963BD73338EF0F1,1453D1CAD565842EEA24C8D92963BD73338EF0F1," Histogram charts
-
-A histogram is similar in appearance to a bar chart, but instead of comparing categories or looking for trends over time, each bar represents how data is distributed in a single category. Each bar represents a continuous range of data or the number of frequencies for a specific data point.
-
-Histograms are useful for showing the distribution of a single scale variable. Data are binned and summarized by using a count or percentage statistic. A variation of a histogram is a frequency polygon, which is like a typical histogram except that the area graphic element is used instead of the bar graphic element.
-
-Another variation of the histogram is the population pyramid. Its name is derived from its most common use: summarizing population data. When used with population data, it is split by gender to provide two back-to-back, horizontal histograms of age data. In countries with a young population, the shape of the resulting graph resembles a pyramid.
-
-Footnote
-: The chart footnote, which is placed beneath the chart.
-
-XAxis label
-: The x-axis label, which is placed beneath the x-axis.
-
-"
-9DF72C2325CE5BACA0CC7D2A884695D115557C40,9DF72C2325CE5BACA0CC7D2A884695D115557C40," Line charts
-
-A line chart plots a series of data points on a graph and connects them with lines. A line chart is useful for showing trend lines with subtle differences, or with data lines that cross one another. You can use a line chart to summarize categorical variables, in which case it is similar to a bar chart (see [Bar charts](https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_barcharts.htmlchart_creation_barcharts) ). Line charts are also useful for time-series data.
-"
-F5AF4BCC2D0168D2698BEB2A858C24F81A476610,F5AF4BCC2D0168D2698BEB2A858C24F81A476610," Map charts
-
-Map charts are commonly used to compare values and show categories across geographical regions. Map charts are most beneficial when the data contains geographic information (countries, regions, states, counties, postal codes, and so on).
-"
-0C836867DD758509B908532F35CFC5E160D81A19,0C836867DD758509B908532F35CFC5E160D81A19," Math curve charts
-
-A math curve chart plots mathematical equation curves that are based on user-entered expressions.
-"
-66E7B1F986535FCE165F0CB5C553A6305339204E,66E7B1F986535FCE165F0CB5C553A6305339204E," Scatter matrix charts
-
-Scatter plot matrices are a good way to determine whether linear correlations exist between multiple variables.
-"
-3094E343D06DA6AE0D0D5D4865C7B0D806DC61A1,3094E343D06DA6AE0D0D5D4865C7B0D806DC61A1," Multi-chart charts
-
-Multi-chart charts provide options for creating multiple charts. The charts can be of the same or different types, and can include different variables from the same data set.
-"
-E777A9C7D0450D572431F168374224179C1AE7C4,E777A9C7D0450D572431F168374224179C1AE7C4," Multiple series charts
-
-Multiple series charts are similar to line charts, with the exception that you can chart multiple variables on the Y-axis.
-"
-DE359E77F61C11B6F759E8DFE8EA69AAC3D0514A,DE359E77F61C11B6F759E8DFE8EA69AAC3D0514A," Parallel charts
-
-Parallel charts are useful for visualizing high dimensional geometry and for analyzing multivariate data. Parallel charts resemble line charts for time-series data, but the axes do not correspond to points in time (a natural order is not present).
-"
-6B4213FC5352021865E77592EBC27242E746B5AA,6B4213FC5352021865E77592EBC27242E746B5AA," Pareto charts
-
-Pareto charts contain both bars and a line graph. The bars represent individual variable categories and the line graph represents the cumulative total.
-"
-A2B0DB014389285D9ABCA9FE0D4035F85DE6D102,A2B0DB014389285D9ABCA9FE0D4035F85DE6D102," Pie charts
-
-A pie chart is useful for comparing proportions. For example, you can use a pie chart to demonstrate that a greater proportion of Europeans is enrolled in a certain class.
-"
-81F297B28D1978EB0D0B1985D6F44B45DFE53542,81F297B28D1978EB0D0B1985D6F44B45DFE53542," Population pyramid charts
-
-Population pyramid charts (also known as ""age-sex pyramids"") are commonly used to present and analyze population information based on age and gender.
-"
-BA8A6820B3DBFAA703679B19BE070F7BD0CCA3D1,BA8A6820B3DBFAA703679B19BE070F7BD0CCA3D1," Q-Q plots
-
-Q-Q (quantile-quantile) plots compare two probability distributions by plotting their quantiles against each other. A Q–Q plot is used to compare the shapes of distributions, providing a graphical view of how properties such as location, scale, and skewness are similar or different in the two distributions.
-"
-61F714F5629AD260B0D9776FC53CDA2EAA10DF24,61F714F5629AD260B0D9776FC53CDA2EAA10DF24," Radar charts
-
-Radar charts compare multiple quantitative variables and are useful for visualizing which variables have similar values, or if outliers exist among the variables. Radar charts consists of a sequence of spokes, with each spoke representing a single variable. Radar Charts are also useful for determining which variables are scoring high or low within a data set.
-"
-5A812008B8370853F0C151FDE4DFEDA4A39193CB,5A812008B8370853F0C151FDE4DFEDA4A39193CB," Relationship charts
-
-A relationship chart is useful for determining how variables relate to each other.
-"
-67C56AAC7DA2232E4DA2B8AEDEC41B9D8755E22A,67C56AAC7DA2232E4DA2B8AEDEC41B9D8755E22A," Scatter plots and dot plots
-
-Several broad categories of charts are created with the point graphic element.
-
-Scatter plots
-: Scatter plots are useful for plotting multivariate data. They can help you determine potential relationships among scale variables. A simple scatter plot uses a 2-D coordinate system to plot two variables. A 3-D scatter plot uses a 3-D coordinate system to plot three variables. When you need to plot more variables, you can try overlay scatter plots and scatter plot matrices (SPLOMs). An overlay scatter plot displays overlaid pairs of X-Y variables, with each pair distinguished by color or shape. A SPLOM creates a matrix of 2-D scatter plots, with each variable plotted against every other variable in the SPLOM.
-
-Dot plots
-: Like histograms, dot plots are useful for showing the distribution of a single scale variable. The data are binned, but, instead of one value for each bin (like a count), all of the points in each bin are displayed and stacked. These graphs are sometimes called density plots.
-
-Summary point plots
-: Summary point plots are similar to bar charts, except that points are drawn in place of the top of the bars. For more information, see [Bar charts](https://dataplatform.cloud.ibm.com/docs/content/dataview/chart_creation_barcharts.htmlchart_creation_barcharts).
-
-"
-7B3616D29E7AC720B73EF3E24C9C807DA05C4DA3,7B3616D29E7AC720B73EF3E24C9C807DA05C4DA3," Series array charts
-
-Series array charts include individual sub charts and display the Y-axis for all sub charts in the legend.
-"
-5CF2FE478862FCAA1745D5B0770CE6486B3B71F8,5CF2FE478862FCAA1745D5B0770CE6486B3B71F8," Sunburst charts
-
-A sunburst chart is useful for visualizing hierarchical data structures. A sunburst chart consists of an inner circle that is surrounded by rings of deeper hierarchy levels. The angle of each segment proportional to either a value or divided equally under its inner segment. The chart segments are colored based on the category or hierarchical level to which they belong.
-"
-BAE3302FC87E1BBFA604BAA2D003069E4233A517,BAE3302FC87E1BBFA604BAA2D003069E4233A517," Theme River charts
-
-A theme river is a specialized flow graph that shows changes over time.
-"
-B49F37BD511123A94FCAD3C6E826E60FC61DB446,B49F37BD511123A94FCAD3C6E826E60FC61DB446," Time plots
-
-Time plots illustrate data points at successive intervals of time. The time series you plot must contain numeric values and are assumed to occur over a range of time in which the periods are uniform. Time plots provide a preliminary analysis of the characteristics of time series data on basic statistics and test, and thus generate useful insights about your data before modeling. Time plots include analysis methods such as decomposition, augmented Dickey-Fuller test (ADF), correlations (ACF/PACF), and spectral analysis.
-"
-D872C74770B5729E037E841679F741CF3D8C20AD,D872C74770B5729E037E841679F741CF3D8C20AD," Tree charts
-
-Tree charts represent hierarchy in a tree-like structure. The structure of a Tree chart consists of a root node (has no parent node), line connections (named branches), and leaf nodes (have no child nodes). Line connections represent the relationships and connections between the members.
-"
-9B6386C6C291665ACA0892481681A94A70185E9D,9B6386C6C291665ACA0892481681A94A70185E9D," Treemap charts
-
-Treemap charts are an alternative method for visualizing the hierarchical structure of tree diagrams while also displaying quantities for each category. Treemap charts are useful for identifying patterns in data. Tree branches are represented by rectangles, with each sub branch represented by smaller rectangles.
-"
-99B0C1C962E0642E5B877747ED37E9BB27238664,99B0C1C962E0642E5B877747ED37E9BB27238664," t-SNE charts
-
-T-distributed Stochastic Neighbor Embedding (t-SNE) is a machine learning algorithm for visualization. t-SNE charts model each high-dimensional object by a two-or-three dimensional point in such a way that similar objects are modeled by nearby points and dissimilar objects are modeled by distant points with high probability.
-"
-3873A285DCB38EF4B4ED663BFA0DF4047AB7692D,3873A285DCB38EF4B4ED663BFA0DF4047AB7692D," Word cloud charts
-
-Word cloud charts present data as words, where the size and placement of any individual word is determined by how it is weighted.
-"
-3BB91EBACC556700F955C3E6E01D90E5256207CF,3BB91EBACC556700F955C3E6E01D90E5256207CF," Visualizing your data
-
-You can discover insights from your data by creating visualizations. By exploring data from different perspectives with visualizations, you can identify patterns, connections, and relationships within that data and quickly understand large amounts of information.
-
-Data format
-: Tabular: Avro, CSV, JSON, Parquet, TSV, SAV, Microsoft Excel .xls and .xlsx files, SAS, delimited text files, and connected data.
-
-For more information about supported data sources, see [Connectors](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn_types.html).
-
-Data size
-: No limit
-
-You can create graphics similar to the following example that shows how humidity values over time.
-
-
-"
-9D9188E6383DB5F7038B98A688CB2DC9CF5A336C,9D9188E6383DB5F7038B98A688CB2DC9CF5A336C," watsonx.governance on IBM® watsonx
-"
-CF88BCC09A32B2D6D65F2C2A831E2960ACA1E347,CF88BCC09A32B2D6D65F2C2A831E2960ACA1E347," Cloud Object Storage on IBM® watsonx
-"
-59DF73D502B5F62E3837464E81AC6BC9FDF07014_0,59DF73D502B5F62E3837464E81AC6BC9FDF07014," IBM Cloud services in the IBM watsonx services catalog
-
-You can provision IBM® Cloud service instances for the watsonx platform.
-
-The IBM watsonx.ai component provides the following services that provide key functionality, including tools and compute resources:
-
-
-
-* [Watson™ Studio](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/wsl.html)
-* [Watson Machine Learning](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/wml.html)
-
-
-
-If you signed up for watsonx.ai, you already have these services. Otherwise, you can create instances of these services from the Services catalog.
-
-If you signed up for watsonx.governance, you already have this service. Otherwise, you can create an instance of this service from the Services catalog.
-
-The [IBM Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/cloud-object-storage.html) provides storage for projects and deployment spaces on the IBM watsonx platform.
-
-The [Secure Gateway](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/secure-gateway.html) service provides secure connections to on-premises date sources.
-
-These services provide databases that you can access in IBM watsonx by creating connections:
-
-
-
-* [IBM Analytics Engine](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/spark.html)
-* [Cloudant](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/cloudant.html)
-* [Databases for Elasticsearch](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/elasticsearch.html)
-* [Databases for EDB](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/edb.html)
-* [Databases for MongoDB](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/mongodb.html)
-"
-59DF73D502B5F62E3837464E81AC6BC9FDF07014_1,59DF73D502B5F62E3837464E81AC6BC9FDF07014,"* [Databases for PostgreSQL](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/postgresql.html)
-"
-A56686454E771E5FDDA0315DD38313F9FCB31AAC,A56686454E771E5FDDA0315DD38313F9FCB31AAC," Cloudant on IBM® watsonx
-"
-F3BA8CCB1E55BB6535944CB5ACDB19EFAEB1C3F9,F3BA8CCB1E55BB6535944CB5ACDB19EFAEB1C3F9," Db2 on IBM watsonx
-"
-E81F1FD08E472AF1516E6C6B0C936A2DCA55CC20,E81F1FD08E472AF1516E6C6B0C936A2DCA55CC20," Db2 Warehouse on IBM watsonx
-"
-32217F5F0DEE4A95C64B2BD92C25366706CC7E0C,32217F5F0DEE4A95C64B2BD92C25366706CC7E0C," Databases for EDB on IBM watsonx
-"
-868801EC73691D31B90C8611E934AA5DD3B17EA7,868801EC73691D31B90C8611E934AA5DD3B17EA7," Databases for Elasticsearch on IBM® watsonx
-"
-408FDAB4F452AB2C207EE3416332D315598E3456,408FDAB4F452AB2C207EE3416332D315598E3456," Databases for MongoDB on IBM watsonx
-"
-649119A6EF3F5AA2B1B0C63E0973532D4C950F48,649119A6EF3F5AA2B1B0C63E0973532D4C950F48," Databases for PostgreSQL on IBM® watsonx
-"
-B9D44BBCF205103BF01619D31CFEBE31A725BA5A,B9D44BBCF205103BF01619D31CFEBE31A725BA5A," Secure Gateway on IBM® watsonx
-"
-6AC4A29FEBF419002BDBA62D99D997CF55E9FCF2,6AC4A29FEBF419002BDBA62D99D997CF55E9FCF2," IBM Analytics Engine on IBM® watsonx
-"
-40DEFBE604B3629CAF8855A6D00EC14A0A6C92F3,40DEFBE604B3629CAF8855A6D00EC14A0A6C92F3," Watson Machine Learning on IBM watsonx
-
-Watson Machine Learning is part of IBM® watsonx.ai. Watson Machine Learning provides a full range of tools for your team to build, train, and deploy Machine Learning models. You can choose the tool with the level of automation or autonomy that matches your needs. Watson Machine Learning provides the following tools:
-
-
-
-* AutoAI experiment builder for automatically processing structured data to generate model-candidate pipelines. The best-performing pipelines can be saved as a machine learning model and deployed for scoring.
-"
-C4BB814768F5D91D2C6AA90B34FDDD944AA1EB91,C4BB814768F5D91D2C6AA90B34FDDD944AA1EB91," Watson Studio on IBM watsonx
-"
-189F970CF3B162E67B98B2A928B36193169E3CAF,189F970CF3B162E67B98B2A928B36193169E3CAF," Working with your data
-
-To see a quick sample of a flow's data, right-click a node a select Preview. To more thoroughly examine your data, use a Charts node to launch the chart builder.
-
-With the chart builder, you can use advanced visualizations to explore your data from different perspectives and identify patterns, connections, and relationships within your data. You can also visualize your data with these same charts in a Data Refinery flow.
-
-Figure 1. Sample visualizations available for a flow
-
-
-
-For more information, see [Visualizing your data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/visualizations.html).
-"
-6A32659DF809F04F9A670634129FC75CC9140729,6A32659DF809F04F9A670634129FC75CC9140729," Setting properties for flows
-
-You can specify properties to apply to the current flow.
-
-To set flow properties, click the Flow Properties icon:
-
-The following properties are available.
-"
-81045ED1B34827B3BD74D2546185C3BD3163B37E,81045ED1B34827B3BD74D2546185C3BD3163B37E," Flow scripting
-
-You can use scripts to customize operations within a particular flow, and they're saved with that flow. For example, you might use a script to specify a particular run order for terminal nodes. You use the flow properties page to edit the script that's saved with the current flow.
-
-To access scripting in a flow's properties:
-
-
-
-1. Right-click your flow's canvas and select Flow properties.
-2. Open the Scripting section to work with scripts for the current flow.
-
-
-
-Tips:
-
-
-
-* By default, the Python scripting language is used. If you'd rather use a scripting language unique to old versions of SPSS Modeler desktop, select Legacy.
-* For complete details about scripting, see the [Scripting and automation](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/scripting_overview.html) guide.
-
-
-
-You can specify whether or not the script runs when the flow runs. To run the script each time the flow runs, respecting the run order of the script, select Run the script. This setting provides automation at the flow level for quicker model building. Or, to ignore the script, you can select the option to only Run all terminal nodes when the flow runs.
-
-The script editor includes the following features that help with script authoring:
-
-
-
-* Syntax highlighting; keywords, literal values (such as strings and numbers), and comments are highlighted
-* Line numbering
-* Block matching; when the cursor is placed by the start of a program block, the corresponding end block is also highlighted
-* Suggested auto-completion
-
-
-
-A list of suggested syntax completions can be accessed by selecting Auto-Suggest from the context menu, or pressing Ctrl + Space. Use the cursor keys to move up and down the list, then press Enter to insert the selected text. To exit from auto-suggest mode without modifying the existing text, press Esc.
-"
-D3084BFB07D425EBACE9F538D800E08DAEA97594,D3084BFB07D425EBACE9F538D800E08DAEA97594," Flow scripting example
-
-You can use a flow to train a model when it runs. Normally, to test the model, you might run the modeling node to add the model to the flow, make the appropriate connections, and run an Analysis node.
-
-Using a script, you can automate the process of testing the model nugget after you create it. For example, you might use a script such as the following to train a neural network model:
-
-stream = modeler.script.stream()
-neuralnetnode = stream.findByType(""neuralnetwork"", None)
-results = []
-neuralnetnode.run(results)
-appliernode = stream.createModelApplierAt(results[0], ""Drug"", 594, 187)
-analysisnode = stream.createAt(""analysis"", ""Drug"", 688, 187)
-typenode = stream.findByType(""type"", None)
-stream.linkBetween(appliernode, typenode, analysisnode)
-analysisnode.run([])
-
-The following bullets describe each line in this script example.
-
-
-
-* The first line defines a variable that points to the current flow.
-* In line 2, the script finds the Neural Net builder node.
-* In line 3, the script creates a list where the execution results can be stored.
-* In line 4, the Neural Net model nugget is created. This is stored in the list defined on line 3.
-* In line 5, a model apply node is created for the model nugget and placed on the flow canvas.
-* In line 6, an analysis node called Drug is created.
-* In line 7, the script finds the Type node.
-* In line 8, the script connects the model apply node created in line 5 between the Type node and the Analysis node.
-* Finally, the Analysis node runs to produce the Analysis report.
-
-
-
-Tips:
-
-
-
-"
-C8B4A993CB8642BC87432FCB305EEE744C16A154_0,C8B4A993CB8642BC87432FCB305EEE744C16A154," Importing an SPSS Modeler stream
-
-You can import a stream ( .str) that was created in SPSS Modeler Subscription or SPSS Modeler client.
-
-
-
-1. From your project's Assets tab, click .
-2. Select Local file, select the .str file you want to import, and click Create.
-
-
-
-If the imported stream contains one or more source (import) or export nodes, you'll be prompted to convert the nodes. Watsonx.ai will walk you through the migration process.
-
-Watch the following video for an example of this easy process:
-
-This video provides a visual method to learn the concepts and tasks in this documentation.
-
-Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform.
-
-[https://www.ustream.tv/embed/recorded/127732173](https://www.ustream.tv/embed/recorded/127732173)
-
-If the stream contains multiple import nodes that use the same data file, then you must first add that file to your project as a data asset before migrating because the conversion can't upload the same file to more than one import node. After adding the data asset to your project, reopen the flow and proceed with the migration using the new data asset. Nodes with the same name will be automatically mapped to project assets.
-
-Configure export nodes to export to your project or to a connection. The following export nodes are supported:
-
-
-
-Table 1. Export nodes that can be migrated
-
- Supported SPSS Modeler export nodes
-
- Analytic Server
- Database
- Flat File
- Statistics Export
- Data Collection
- Excel
- IBM Cognos Analytics Export
- TM1 Export
- SAS
- XML Export
-
-
-
-Notes: Keep the following information in mind when migrating nodes.
-
-
-
-* When migrating export nodes, you're converting node types that don't exist in watsonx.ai. The nodes are converted to Data Asset export nodes or a connection. Due to a current limitation for automatically migrating nodes, only existing project assets or connections can be selected as export targets. These assets will be overwritten during export when the flow runs.
-* To preserve any type or filter information, when an import node is replaced with Data Asset nodes, they're converted to a SuperNode.
-"
-C8B4A993CB8642BC87432FCB305EEE744C16A154_1,C8B4A993CB8642BC87432FCB305EEE744C16A154,"* After migration, you can go back later and use the Convert button if you want to migrate a node that you skipped previously.
-* If the stream you imported uses scripting, you may encounter an error when you run the flow even after completing a migration. This could be due to the flow script containing a reference to an unsupported import or export node. To avoid such errors, you must remove the scripting code that references the unsupported node.
-"
-B851271C134A1B282412BD7A667C1C9813B4E8B2,B851271C134A1B282412BD7A667C1C9813B4E8B2," Text Mining model nuggets
-
-You can run a Text Mining node to automatically generate a concept model nugget using the Generate directly option in the node settings. Or you can use a more hands-on, exploratory approach using the Build interactively mode to generate category model nuggets from within the Text Analytics Workbench.
-"
-BBD1F022A8393101199ABB731534C10BE99CF1E4,BBD1F022A8393101199ABB731534C10BE99CF1E4," Mining for concepts and categories
-
-The Text Mining node uses linguistic and frequency techniques to extract key concepts from the text and create categories with these concepts and other data. Use the node to explore the text data contents or to produce either a concept model nugget or category model nugget.
-
-When you run this node, an internal linguistic extraction engine extracts and organizes the concepts, patterns, and categories by using natural language processing methods. Two build modes are available in the Text Mining node's properties:
-
-
-
-* The Generate directly (concept model nugget) mode automatically produces a concept or category model nugget when you run the node.
-* The Build interactively (category model nugget) is a more hands-on, exploratory approach. You can use this mode to not only extract concepts, create categories, and refine your linguistic resources, but also run text link analysis and explore clusters. This build mode launches the Text Analytics Workbench.
-
-
-
-And you can use the Text Mining node to generate one of two text mining model nuggets:
-
-
-
-* Concept model nuggets uncover and extract important concepts from your structured or unstructured text data.
-* Category model nuggets score and assign documents and records to categories, which are made up of the extracted concepts (and patterns).
-
-
-
-The extracted concepts and patterns and the categories from your model nuggets can all be combined with existing structured data, such as demographics, to yield better and more-focused decisions. For example, if customers frequently list login issues as the primary impediment to completing online account management tasks, you might want to incorporate ""login issues"" into your models.
-"
-D73C52B16EC33CAA6D1F51EFFA5A6E37052D6110,D73C52B16EC33CAA6D1F51EFFA5A6E37052D6110," Nodes palette
-
-The following sections describe all the nodes available on the palette in SPSS Modeler. Drag-and-drop or double-click a node in the list to add it to your flow canvas. You can then double-click any node icon in your flow to set its properties. Hover over a property to see information about it, or click the information icon to see Help.
-
-When first creating a flow, you select which runtime to use. By default, the flow will use the IBM SPSS Modeler runtime. If you want to use native Spark algorithms instead of SPSS algorithms, select the Spark runtime. Properties for some nodes will vary depending on which runtime option you choose.
-"
-D04178DDE54F21A248DAFF3F1582EB4BF1E9AC43_0,D04178DDE54F21A248DAFF3F1582EB4BF1E9AC43," Text Analytics
-
-SPSS Modeler offers nodes that are specialized for handling text.
-
-The Text Analytics nodes offer powerful text analytics capabilities, using advanced linguistic technologies and Natural Language Processing (NLP) to rapidly process a large variety of unstructured text data and, from this text, extract and organize the key concepts. Text Analytics can also group these concepts into categories.
-
-Around 80% of data held within an organization is in the form of text documents—for example, reports, web pages, e-mails, and call center notes. Text is a key factor in enabling an organization to gain a better understanding of their customers' behavior. A system that incorporates NLP can intelligently extract concepts, including compound phrases. Moreover, knowledge of the underlying language allows classification of terms into related groups, such as products, organizations, or people, using meaning and context. As a result, you can quickly determine the relevance of the information to your needs. These extracted concepts and categories can be combined with existing structured data, such as demographics, and applied to modeling in SPSS Modeler to yield better and more-focused decisions.
-
-Linguistic systems are knowledge sensitive—the more information contained in their dictionaries, the higher the quality of the results. Text Analytics provides a set of linguistic resources, such as dictionaries for terms and synonyms, libraries, and templates. These nodes further allow you to develop and refine these linguistic resources to your context. Fine-tuning of the linguistic resources is often an iterative process and is necessary for accurate concept retrieval and categorization. Custom templates, libraries, and dictionaries for specific domains, such as CRM and genomics, are also included.
-
-Tips for getting started:
-
-
-
-* Watch the following video for an overview of Text Analytics.
-* See the [Hotel satisfaction example for Text Analytics](https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_ta_hotel.html).
-
-
-
-This video provides a visual method to learn the concepts and tasks in this documentation.
-
-"
-D04178DDE54F21A248DAFF3F1582EB4BF1E9AC43_1,D04178DDE54F21A248DAFF3F1582EB4BF1E9AC43,"Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform.
-
-[https://video.ibm.com/embed/channel/23952663/video/spss-text-analytics-workbench](https://video.ibm.com/embed/channel/23952663/video/spss-text-analytics-workbench)
-"
-42E228E8218A4FDEF9F2CA0DB53B5B594A475B88,42E228E8218A4FDEF9F2CA0DB53B5B594A475B88," About text mining
-
-Today, an increasing amount of information is being held in unstructured and semi-structured formats, such as customer e-mails, call center notes, open-ended survey responses, news feeds, web forms, etc. This abundance of information poses a problem to many organizations that ask themselves: How can we collect, explore, and leverage this information?
-
-Text mining is the process of analyzing collections of textual materials in order to capture key concepts and themes and uncover hidden relationships and trends without requiring that you know the precise words or terms that authors have used to express those concepts. Although they are quite different, text mining is sometimes confused with information retrieval. While the accurate retrieval and storage of information is an enormous challenge, the extraction and management of quality content, terminology, and relationships contained within the information are crucial and critical processes.
-"
-3602C22051EA1148B07446605DD3C57BF7830C3A,3602C22051EA1148B07446605DD3C57BF7830C3A," How categorization works
-
-When creating category models in Text Analytics, there are several different techniques you can choose from to create categories. Because every dataset is unique, the number of techniques and the order in which you apply them may change.
-
-Since your interpretation of the results may be different from someone else's, you may need to experiment with the different techniques to see which one produces the best results for your text data. In Text Analytics, you can create category models in a workbench session in which you can explore and fine-tune your categories further.
-
-In this documentation, category building refers to the generation of category definitions and classification through the use of one or more built-in techniques, and categorization refers to the scoring, or labeling, process whereby unique identifiers (name/ID/value) are assigned to the category definitions for each record or document.
-
-During category building, the concepts and types that were extracted are used as the building blocks for your categories. When you build categories, the records or documents are automatically assigned to categories if they contain text that matches an element of a category's definition.
-
-Text Analytics offers you several automated category building techniques to help you categorize your documents or records quickly.
-"
-F976E639BDE8A2B880E46D94F4C832B6ED9A9303,F976E639BDE8A2B880E46D94F4C832B6ED9A9303," How extraction works
-
-During the extraction of key concepts and ideas from your responses, Text Analytics relies on linguistics-based text analysis. This approach offers the speed and cost effectiveness of statistics-based systems. But it offers a far higher degree of accuracy, while requiring far less human intervention. Linguistics-based text analysis is based on the field of study known as natural language processing, also known as computational linguistics.
-
-Understanding how the extraction process works can help you make key decisions when fine-tuning your linguistic resources (libraries, types, synonyms, and more). Steps in the extraction process include:
-
-
-
-* Converting source data to a standard format
-* Identifying candidate terms
-* Identifying equivalence classes and integration of synonyms
-* Assigning a type
-"
-B0B80EB59E769546EEDF8CA32A493BF38C6A9707,B0B80EB59E769546EEDF8CA32A493BF38C6A9707," Export
-
-Export nodes provide a mechanism for exporting data in various formats to interface with your other software tools.
-"
-B6DC074F83F9E8984B9CD3A3BF5B392BC4A61844,B6DC074F83F9E8984B9CD3A3BF5B392BC4A61844," Extension nodes
-
-SPSS Modeler supports the languages R and Apache Spark (via Python).
-
-To complement SPSS Modeler and its data mining abilities, several Extension nodes are available to enable expert users to input their own R scripts or Python for Spark scripts to carry out data processing, model building, and model scoring.
-
-
-
-* The Extension Import node is available under Import on the Node Palette. See [Extension Import node](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/extension_importer.html).
-* The Extension Model node is available under Modeling on the Node Palette. See [Extension Model node](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/extension_build.html).
-"
-4E571695FB4E12489157704D87F89DF5DAD1A580,4E571695FB4E12489157704D87F89DF5DAD1A580," Field Operations
-
-After an initial data exploration, you will probably need to select, clean, or construct data in preparation for analysis. The Field Operations palette contains many nodes useful for this transformation and preparation.
-
-For example, using a Derive node, you might create an attribute that is not currently represented in the data. Or you might use a Binning node to recode field values automatically for targeted analysis. You will probably find yourself using a Type node frequently—it allows you to assign a measurement level, values, and a modeling role for each field in the dataset. Its operations are useful for handling missing values and downstream modeling.
-"
-2AEC614E6CBE5D4963D53DEC7E22877D5A1BEDE8,2AEC614E6CBE5D4963D53DEC7E22877D5A1BEDE8," Graphs
-
-Several phases of the data mining process use graphs and charts to explore data brought in to watsonx.ai.
-
-For example, you can connect a Plot or Distribution node to a data source to gain insight into data types and distributions. You can then perform record and field manipulations to prepare the data for downstream modeling operations. Another common use of graphs is to check the distribution and relationships between newly derived fields.
-"
-A9FA1D31F4CC6018DAF5B927908210846B082675,A9FA1D31F4CC6018DAF5B927908210846B082675," Import
-
-Use Import nodes to import data stored in various formats, or to generate your own synthetic data.
-"
-7E30541B3A12F403ADCB02F90BC96134CE6B6386,7E30541B3A12F403ADCB02F90BC96134CE6B6386," Modeling
-
-Watsonx.ai offers a variety of modeling methods taken from machine learning, artificial intelligence, and statistics.
-
-The methods available on the palette allow you to derive new information from your data and to develop predictive models. Each method has certain strengths and is best suited for particular types of problems. For more information about modeling, see [Creating SPSS Modeler flows](https://dataplatform.cloud.ibm.com/docs/content/wsd/spss-modeler.htmlspss-modeler).
-"
-A56C821C7EE483D01E4338397F62DDD6CB6D5E9F,A56C821C7EE483D01E4338397F62DDD6CB6D5E9F," Outputs
-
-Output nodes provide the means to obtain information about your data and models. They also provide a mechanism for exporting data in various formats to interface with your other software tools.
-"
-09BB38FB6DF4C562A478D6D3DC54D22823F922FB,09BB38FB6DF4C562A478D6D3DC54D22823F922FB," Record Operations
-
-Record Operations nodes are useful for making changes to data at the record level. These operations are important during the data understanding and data preparation phases of data mining because they allow you to tailor the data to your particular business need.
-
-For example, based on the results of a data audit conducted using the Data Audit node (Outputs palette), you might decide that you would like to merge customer purchase records for the past three months. Using a Merge node, you can merge records based on the values of a key field, such as Customer ID. Or you might discover that a database containing information about web site hits is unmanageable with over one million records. Using a Sample node, you can select a subset of data for use in modeling.
-"
-8435D88B7DC8317B982E1EAA57FA55B8391D00CF,8435D88B7DC8317B982E1EAA57FA55B8391D00CF," Aggregate node
-
-Aggregation is a data preparation task frequently used to reduce the size of a dataset. Before proceeding with aggregation, you should take time to clean the data, concentrating especially on missing values. A aggregation, potentially useful information regarding missing values may be lost.
-
-You can use an Aggregate node to replace a sequence of input records with summary, aggregated output records. For example, you might have a set of input sales records such as those shown in the following table.
-
-
-
-Sales record input example
-
-Table 1. Sales record input example
-
- Age Sex Region Branch Sales
-
- 23 M S 8 4
- 45 M S 16 4
- 37 M S 8 5
- 30 M S 5 7
- 44 M N 4 9
- 25 M N 2 11
- 29 F S 16 6
- 41 F N 4 8
- 23 F N 6 2
- 45 F N 4 5
- 33 F N 6 10
-
-
-
-You can aggregate these records with Sex and Region as key fields. Then choose to aggregate Age with the mode Mean and Sales with the mode Sum. Select the Include record count in field aggregate node option and your aggregated output will be similar to the following table.
-
-
-
-Aggregated record example
-
-Table 2. Aggregated record example
-
- Age (mean) Sex Region Sales (sum) Record Count
-
- 35.5 F N 25 4
- 29 F S 6 1
- 34.5 M N 20 2
- 33.75 M S 20 4
-
-
-
-From this you learn, for example, that the average age of the four female sales staff in the North region is 35.5, and the sum total of their sales was 25 units.
-
-Note: Fields such as Branch are automatically discarded when no aggregate mode is specified.
-"
-6D7B948346F167B5390A0E56E1B6DE83AE31A19A,6D7B948346F167B5390A0E56E1B6DE83AE31A19A," Analysis node
-
-With the Analysis node, you can evaluate the ability of a model to generate accurate predictions. Analysis nodes perform various comparisons between predicted values and actual values (your target field) for one or more model nuggets. You can also use Analysis nodes to compare predictive models to other predictive models.
-
-When you execute an Analysis node, a summary of the analysis results is automatically added to the Analysis section on the Summary tab for each model nugget in the executed flow. The detailed analysis results appear on the Outputs tab of the manager window or can be written directly to a file.
-
-Note: Because Analysis nodes compare predicted values to actual values, they are only useful with supervised models (those that require a target field). For unsupervised models such as clustering algorithms, there are no actual results available to use as a basis for comparison.
-"
-35A87CAEDB1F1B6739159B9C7A31CCE7C8978431_0,35A87CAEDB1F1B6739159B9C7A31CCE7C8978431," Anomaly node
-
-Anomaly detection models are used to identify outliers, or unusual cases, in the data. Unlike other modeling methods that store rules about unusual cases, anomaly detection models store information on what normal behavior looks like. This makes it possible to identify outliers even if they do not conform to any known pattern, and it can be particularly useful in applications, such as fraud detection, where new patterns may constantly be emerging. Anomaly detection is an unsupervised method, which means that it does not require a training dataset containing known cases of fraud to use as a starting point.
-
-While traditional methods of identifying outliers generally look at one or two variables at a time, anomaly detection can examine large numbers of fields to identify clusters or peer groups into which similar records fall. Each record can then be compared to others in its peer group to identify possible anomalies. The further away a case is from the normal center, the more likely it is to be unusual. For example, the algorithm might lump records into three distinct clusters and flag those that fall far from the center of any one cluster.
-
-Each record is assigned an anomaly index, which is the ratio of the group deviation index to its average over the cluster that the case belongs to. The larger the value of this index, the more deviation the case has than the average. Under the usual circumstance, cases with anomaly index values less than 1 or even 1.5 would not be considered as anomalies, because the deviation is just about the same or a bit more than the average. However, cases with an index value greater than 2 could be good anomaly candidates because the deviation is at least twice the average.
-
-Anomaly detection is an exploratory method designed for quick detection of unusual cases or records that should be candidates for further analysis. These should be regarded as suspected anomalies, which, on closer examination, may or may not turn out to be real. You may find that a record is perfectly valid but choose to screen it from the data for purposes of model building. Alternatively, if the algorithm repeatedly turns up false anomalies, this may point to an error or artifact in the data collection process.
-
-"
-35A87CAEDB1F1B6739159B9C7A31CCE7C8978431_1,35A87CAEDB1F1B6739159B9C7A31CCE7C8978431,"Note that anomaly detection identifies unusual records or cases through cluster analysis based on the set of fields selected in the model without regard for any specific target (dependent) field and regardless of whether those fields are relevant to the pattern you are trying to predict. For this reason, you may want to use anomaly detection in combination with feature selection or another technique for screening and ranking fields. For example, you can use feature selection to identify the most important fields relative to a specific target and then use anomaly detection to locate the records that are the most unusual with respect to those fields. (An alternative approach would be to build a decision tree model and then examine any misclassified records as potential anomalies. However, this method would be more difficult to replicate or automate on a large scale.)
-
-Example. In screening agricultural development grants for possible cases of fraud, anomaly detection can be used to discover deviations from the norm, highlighting those records that are abnormal and worthy of further investigation. You are particularly interested in grant applications that seem to claim too much (or too little) money for the type and size of farm.
-
-Requirements. One or more input fields. Note that only fields with a role set to Input using a source or Type node can be used as inputs. Target fields (role set to Target or Both) are ignored.
-
-Strengths. By flagging cases that do not conform to a known set of rules rather than those that do, Anomaly Detection models can identify unusual cases even when they don't follow previously known patterns. When used in combination with feature selection, anomaly detection makes it possible to screen large amounts of data to identify the records of greatest interest relatively quickly.
-"
-F05134C8C952A7585B82A042B14BCF1234AF9329,F05134C8C952A7585B82A042B14BCF1234AF9329," Anonymize node
-
-With the Anonymize node, you can disguise field names, field values, or both when working with data that's to be included in a model downstream of the node. In this way, the generated model can be freely distributed (for example, to Technical Support) with no danger that unauthorized users will be able to view confidential data, such as employee records or patients' medical records.
-
-Depending on where you place the Anonymize node in your flow, you may need to make changes to other nodes. For example, if you insert an Anonymize node upstream from a Select node, the selection criteria in the Select node will need to be changed if they are acting on values that have now become anonymized.
-
-The method to be used for anonymizing depends on various factors. For field names and all field values except Continuous measurement levels, the data is replaced by a string of the form:
-
-prefix_Sn
-
-where prefix_ is either a user-specified string or the default string anon_, and n is an integer value that starts at 0 and is incremented for each unique value (for example, anon_S0, anon_S1, etc.).
-
-Field values of type Continuous must be transformed because numeric ranges deal with integer or real values rather than strings. As such, they can be anonymized only by transforming the range into a different range, thus disguising the original data. Transformation of a value x in the range is performed in the following way:
-
-A(x + B)
-
-where:
-
-A is a scale factor, which must be greater than 0.
-
-B is a translation offset to be added to the values.
-"
-4C83F9C21CA1E70077C8004BD26FE5FB0FC947EB,4C83F9C21CA1E70077C8004BD26FE5FB0FC947EB," Append node
-
-You can use Append nodes to concatenate sets of records. Unlike Merge nodes, which join records from different sources together, Append nodes read and pass downstream all of the records from one source until there are no more. Then the records from the next source are read using the same data structure (number of records, number of fields, and so on) as the first, or primary, input. When the primary source has more fields than another input source, the system null string ($null$) will be used for any incomplete values.
-
-Append nodes are useful for combining datasets with similar structures but different data. For example, you might have transaction data stored in different files for different time periods, such as a sales data file for March and a separate one for April. Assuming that they have the same structure (the same fields in the same order), the Append node will join them together into one large file, which you can then analyze.
-
-Note: To append files, the field measurement levels must be similar. For example, a Nominal field cannot be appended with a field whose measurement level is Continuous.
-"
-E14741F9A90592B67437AAED4B7042CD3DC268A8,E14741F9A90592B67437AAED4B7042CD3DC268A8," Extension model nugget
-
-The Extension model nugget is generated and placed on your flow canvas after running the Extension Model node, which contains your R script or Python for Spark script that defines the model building and model scoring.
-
-By default, the Extension model nugget contains the script that's used for model scoring, options for reading the data, and any output from the R console or Python for Spark. Optionally, the Extension model nugget can also contain various other forms of model output, such as graphs and text output. After the Extension model nugget is generated and added to your flow canvas, an output node can be connected to it. The output node is then used in the usual way within your flow to obtain information about the data and models, and for exporting data in various formats.
-"
-9346A72CFCD74DFDA05213A2A321BF9CFB823358,9346A72CFCD74DFDA05213A2A321BF9CFB823358," Apriori node
-
-The Apriori node discovers association rules in your data.
-
-Association rules are statements of the form:
-
-if antecedent(s) then consequent(s)
-
-For example, if a customer purchases a razor and after shave, then that customer will purchase shaving cream with 80% confidence. Apriori extracts a set of rules from the data, pulling out the rules with the highest information content. The Apriori node also discovers association rules in the data. Apriori offers five different methods of selecting rules and uses a sophisticated indexing scheme to efficiently process large data sets.
-
-Requirements. To create an Apriori rule set, you need one or more Input fields and one or more Target fields. Input and output fields (those with the role Input, Target, or Both) must be symbolic. Fields with the role None are ignored. Fields types must be fully instantiated before executing the node. Data can be in tabular or transactional format.
-
-Strengths. For large problems, Apriori is generally faster to train. It also has no arbitrary limit on the number of rules that can be retained and can handle rules with up to 32 preconditions. Apriori offers five different training methods, allowing more flexibility in matching the data mining method to the problem at hand.
-"
-27091A60BA512E180C699261ECFFDC3A621418A5_0,27091A60BA512E180C699261ECFFDC3A621418A5," Association Rules node
-
-Association rules associate a particular conclusion (the purchase of a particular product, for example) with a set of conditions (the purchase of several other products, for example).
-
-For example, the rule
-
-beer <= cannedveg & frozenmeal (173, 17.0%, 0.84)
-
-states that beer often occurs when cannedveg and frozenmeal occur together. The rule is 84% reliable and applies to 17% of the data, or 173 records. Association rule algorithms automatically find the associations that you could find manually using visualization techniques, such as the Web node.
-
-The advantage of association rule algorithms over the more standard decision tree algorithms (C5.0 and C&R Trees) is that associations can exist between any of the attributes. A decision tree algorithm will build rules with only a single conclusion, whereas association algorithms attempt to find many rules, each of which may have a different conclusion.
-
-The disadvantage of association algorithms is that they are trying to find patterns within a potentially very large search space and, hence, can require much more time to run than a decision tree algorithm. The algorithms use a generate and test method for finding rules--simple rules are generated initially, and these are validated against the dataset. The good rules are stored and all rules, subject to various constraints, are then specialized. Specialization is the process of adding conditions to a rule. These new rules are then validated against the data, and the process iteratively stores the best or most interesting rules found. The user usually supplies some limit to the possible number of antecedents to allow in a rule, and various techniques based on information theory or efficient indexing schemes are used to reduce the potentially large search space.
-
-"
-27091A60BA512E180C699261ECFFDC3A621418A5_1,27091A60BA512E180C699261ECFFDC3A621418A5,"At the end of the processing, a table of the best rules is presented. Unlike a decision tree, this set of association rules cannot be used directly to make predictions in the way that a standard model (such as a decision tree or a neural network) can. This is due to the many different possible conclusions for the rules. Another level of transformation is required to transform the association rules into a classification rule set. Hence, the association rules produced by association algorithms are known as unrefined models. Although the user can browse these unrefined models, they cannot be used explicitly as classification models unless the user tells the system to generate a classification model from the unrefined model. This is done from the browser through a Generate menu option.
-
-Two association rule algorithms are supported:
-
-
-
-"
-1ACF5ED461253F09DB844C2D84C1AE21277BC1E6_0,1ACF5ED461253F09DB844C2D84C1AE21277BC1E6," Auto Classifier node
-
-The Auto Classifier node estimates and compares models for either nominal (set) or binary (yes/no) targets, using a number of different methods, enabling you to try out a variety of approaches in a single modeling run. You can select the algorithms to use, and experiment with multiple combinations of options. For example, rather than choose between Radial Basis Function, polynomial, sigmoid, or linear methods for an SVM, you can try them all. The node explores every possible combination of options, ranks each candidate model based on the measure you specify, and saves the best models for use in scoring or further analysis.
-
-Example
-: A retail company has historical data tracking the offers made to specific customers in past campaigns. The company now wants to achieve more profitable results by matching the appropriate offer to each customer.
-
-Requirements
-: A target field with a measurement level of either Nominal or Flag (with the role set to Target), and at least one input field (with the role set to Input). For a flag field, the True value defined for the target is assumed to represent a hit when calculating profits, lift, and related statistics. Input fields can have a measurement level of Continuous or Categorical, with the limitation that some inputs may not be appropriate for some model types. For example, ordinal fields used as inputs in C&R Tree, CHAID, and QUEST models must have numeric storage (not string), and will be ignored by these models if specified otherwise. Similarly, continuous input fields can be binned in some cases. The requirements are the same as when using the individual modeling nodes; for example, a Bayes Net model works the same whether generated from the Bayes Net node or the Auto Classifier node.
-
-Frequency and weight fields
-"
-1ACF5ED461253F09DB844C2D84C1AE21277BC1E6_1,1ACF5ED461253F09DB844C2D84C1AE21277BC1E6,": Frequency and weight are used to give extra importance to some records over others because, for example, the user knows that the build dataset under-represents a section of the parent population (Weight) or because one record represents a number of identical cases (Frequency). If specified, a frequency field can be used by C&R Tree, CHAID, QUEST, Decision List, and Bayes Net models. A weight field can be used by C&RT, CHAID, and C5.0 models. Other model types will ignore these fields and build the models anyway. Frequency and weight fields are used only for model building, and are not considered when evaluating or scoring models.
-
-Prefixes
-: If you attach a table node to the nugget for the Auto Classifier Node, there are several new variables in the table with names that begin with a $ prefix.
-: The names of the fields that are generated during scoring are based on the target field, but with a standard prefix. Different model types use different sets of prefixes.
-"
-3A9DC582441C2474E183DA0E7DAC20FB182842C2,3A9DC582441C2474E183DA0E7DAC20FB182842C2," Auto Cluster node
-
-The Auto Cluster node estimates and compares clustering models that identify groups of records with similar characteristics. The node works in the same manner as other automated modeling nodes, enabling you to experiment with multiple combinations of options in a single modeling pass. Models can be compared using basic measures with which to attempt to filter and rank the usefulness of the cluster models, and provide a measure based on the importance of particular fields.
-
-Clustering models are often used to identify groups that can be used as inputs in subsequent analyses. For example, you may want to target groups of customers based on demographic characteristics such as income, or based on the services they have bought in the past. You can do this without prior knowledge about the groups and their characteristics -- you may not know how many groups to look for, or what features to use in defining them. Clustering models are often referred to as unsupervised learning models, since they do not use a target field, and do not return a specific prediction that can be evaluated as true or false. The value of a clustering model is determined by its ability to capture interesting groupings in the data and provide useful descriptions of those groupings.
-
-Requirements. One or more fields that define characteristics of interest. Cluster models do not use target fields in the same manner as other models, because they do not make specific predictions that can be assessed as true or false. Instead, they are used to identify groups of cases that may be related. For example, you cannot use a cluster model to predict whether a given customer will churn or respond to an offer. But you can use a cluster model to assign customers to groups based on their tendency to do those things. Weight and frequency fields are not used.
-
-Evaluation fields. While no target is used, you can optionally specify one or more evaluation fields to be used in comparing models. The usefulness of a cluster model may be evaluated by measuring how well (or badly) the clusters differentiate these fields.
-"
-FD94481E337829121072F5E46CC39B6290E43B44_0,FD94481E337829121072F5E46CC39B6290E43B44," Auto Data Prep node
-
-Preparing data for analysis is one of the most important steps in any project—and traditionally, one of the most time consuming. Automated Data Preparation (ADP) handles the task for you, analyzing your data and identifying fixes, screening out fields that are problematic or not likely to be useful, deriving new attributes when appropriate, and improving performance through intelligent screening techniques. You can use the algorithm in fully automatic fashion, allowing it to choose and apply fixes, or you can use it in interactive fashion, previewing the changes before they are made and accept or reject them as you want.
-
-Using ADP enables you to make your data ready for model building quickly and easily, without needing prior knowledge of the statistical concepts involved. Models will tend to build and score more quickly
-
-Note: When ADP prepares a field for analysis, it creates a new field containing the adjustments or transformations, rather than replacing the existing values and properties of the old field. The old field is not used in further analysis; its role is set to None.
-
-Example. An insurance company with limited resources to investigate homeowner's insurance claims wants to build a model for flagging suspicious, potentially fraudulent claims. Before building the model, they will ready the data for modeling using automated data preparation. Since they want to be able to review the proposed transformations before the transformations are applied, they will use automated data preparation in interactive mode.
-
-An automotive industry group keeps track of the sales for a variety of personal motor vehicles. In an effort to be able to identify over- and underperforming models, they want to establish a relationship between vehicle sales and vehicle characteristics. They will use automated data preparation to prepare the data for analysis, and build models using the data ""before"" and ""after"" preparation to see how the results differ.
-
-What is your objective? Automated data preparation recommends data preparation steps that will affect the speed with which other algorithms can build models and improve the predictive power of those models. This can include transforming, constructing and selecting features. The target can also be transformed. You can specify the model-building priorities that the data preparation process should concentrate on.
-
-
-
-"
-FD94481E337829121072F5E46CC39B6290E43B44_1,FD94481E337829121072F5E46CC39B6290E43B44,"* Balance speed and accuracy. This option prepares the data to give equal priority to both the speed with which data are processed by model-building algorithms and the accuracy of the predictions.
-* Optimize for speed. This option prepares the data to give priority to the speed with which data are processed by model-building algorithms. When you are working with very large datasets, or are looking for a quick answer, select this option.
-"
-9D9C67189BE5D6DB22575CF01A75BD5826B92074_0,9D9C67189BE5D6DB22575CF01A75BD5826B92074," Auto Numeric node
-
-The Auto Numeric node estimates and compares models for continuous numeric range outcomes using a number of different methods, enabling you to try out a variety of approaches in a single modeling run. You can select the algorithms to use, and experiment with multiple combinations of options. For example, you could predict housing values using neural net, linear regression, C&RT, and CHAID models to see which performs best, and you could try out different combinations of stepwise, forward, and backward regression methods. The node explores every possible combination of options, ranks each candidate model based on the measure you specify, and saves the best for use in scoring or further analysis.
-
-Example
-: A municipality wants to more accurately estimate real estate taxes and to adjust values for specific properties as needed without having to inspect every property. Using the Auto Numeric node, the analyst can generate and compare a number of models that predict property values based on building type, neighborhood, size, and other known factors.
-
-Requirements
-: A single target field (with the role set to Target), and at least one input field (with the role set to Input). The target must be a continuous (numeric range) field, such as age or income. Input fields can be continuous or categorical, with the limitation that some inputs may not be appropriate for some model types. For example, C&R Tree models can use categorical string fields as inputs, while linear regression models cannot use these fields and will ignore them if specified. The requirements are the same as when using the individual modeling nodes. For example, a CHAID model works the same whether generated from the CHAID node or the Auto Numeric node.
-
-Frequency and weight fields
-"
-9D9C67189BE5D6DB22575CF01A75BD5826B92074_1,9D9C67189BE5D6DB22575CF01A75BD5826B92074,": Frequency and weight are used to give extra importance to some records over others because, for example, the user knows that the build dataset under-represents a section of the parent population (Weight) or because one record represents a number of identical cases (Frequency). If specified, a frequency field can be used by C&R Tree and CHAID algorithms. A weight field can be used by C&RT, CHAID, Regression, and GenLin algorithms. Other model types will ignore these fields and build the models anyway. Frequency and weight fields are used only for model building and are not considered when evaluating or scoring models.
-
-Prefixes
-: If you attach a table node to the nugget for the Auto Numeric Node, there are several new variables in the table with names that begin with a $ prefix.
-: The names of the fields that are generated during scoring are based on the target field, but with a standard prefix. Different model types use different sets of prefixes.
-"
-0294AB8C0FBC393F5C227A0F8BEBCCDC67B78B1D,0294AB8C0FBC393F5C227A0F8BEBCCDC67B78B1D," Balance node
-
-You can use Balance nodes to correct imbalances in datasets so they conform to specified test criteria.
-
-For example, suppose that a dataset has only two values--low or high--and that 90% of the cases are low while only 10% of the cases are high. Many modeling techniques have trouble with such biased data because they will tend to learn only the low outcome and ignore the high one, since it is more rare. If the data is well balanced with approximately equal numbers of low and high outcomes, models will have a better chance of finding patterns that distinguish the two groups. In this case, a Balance node is useful for creating a balancing directive that reduces cases with a low outcome.
-
-Balancing is carried out by duplicating and then discarding records based on the conditions you specify. Records for which no condition holds are always passed through. Because this process works by duplicating and/or discarding records, the original sequence of your data is lost in downstream operations. Be sure to derive any sequence-related values before adding a Balance node to the data stream.
-"
-1D5D80DFF65EE4195713EEEB43F1291B79779A6B_0,1D5D80DFF65EE4195713EEEB43F1291B79779A6B," Bayes Net node
-
-The Bayesian Network node enables you to build a probability model by combining observed and recorded evidence with ""common-sense"" real-world knowledge to establish the likelihood of occurrences by using seemingly unlinked attributes. The node focuses on Tree Augmented Naïve Bayes (TAN) and Markov Blanket networks that are primarily used for classification.
-
-Bayesian networks are used for making predictions in many varied situations; some examples are:
-
-
-
-* Selecting loan opportunities with low default risk.
-* Estimating when equipment will need service, parts, or replacement, based on sensor input and existing records.
-* Resolving customer problems via online troubleshooting tools.
-* Diagnosing and troubleshooting cellular telephone networks in real-time.
-* Assessing the potential risks and rewards of research-and-development projects in order to focus resources on the best opportunities.
-
-
-
-A Bayesian network is a graphical model that displays variables (often referred to as nodes) in a dataset and the probabilistic, or conditional, independencies between them. Causal relationships between nodes may be represented by a Bayesian network; however, the links in the network (also known as arcs) do not necessarily represent direct cause and effect. For example, a Bayesian network can be used to calculate the probability of a patient having a specific disease, given the presence or absence of certain symptoms and other relevant data, if the probabilistic independencies between symptoms and disease as displayed on the graph hold true. Networks are very robust where information is missing and make the best possible prediction using whatever information is present.
-
-"
-1D5D80DFF65EE4195713EEEB43F1291B79779A6B_1,1D5D80DFF65EE4195713EEEB43F1291B79779A6B,"A common, basic, example of a Bayesian network was created by Lauritzen and Spiegelhalter (1988). It is often referred to as the ""Asia"" model and is a simplified version of a network that may be used to diagnose a doctor's new patients; the direction of the links roughly corresponding to causality. Each node represents a facet that may relate to the patient's condition; for example, ""Smoking"" indicates that they are a confirmed smoker, and ""VisitAsia"" shows if they recently visited Asia. Probability relationships are shown by the links between any nodes; for example, smoking increases the chances of the patient developing both bronchitis and lung cancer, whereas age only seems to be associated with the possibility of developing lung cancer. In the same way, abnormalities on an x-ray of the lungs may be caused by either tuberculosis or lung cancer, while the chances of a patient suffering from shortness of breath (dyspnea) are increased if they also suffer from either bronchitis or lung cancer.
-
-Figure 1. Lauritzen and Spegelhalter's Asia network example
-
-
-
-There are several reasons why you might decide to use a Bayesian network:
-
-
-
-* It helps you learn about causal relationships. From this, it enables you to understand a problem area and to predict the consequences of any intervention.
-* The network provides an efficient approach for avoiding the overfitting of data.
-* A clear visualization of the relationships involved is easily observed.
-
-
-
-Requirements. Target fields must be categorical and can have a measurement level of Nominal, Ordinal, or Flag. Inputs can be fields of any type. Continuous (numeric range) input fields will be automatically binned; however, if the distribution is skewed, you may obtain better results by manually binning the fields using a Binning node before the Bayesian Network node. For example, use Optimal Binning where the Supervisor field is the same as the Bayesian Network node Target field.
-
-"
-1D5D80DFF65EE4195713EEEB43F1291B79779A6B_2,1D5D80DFF65EE4195713EEEB43F1291B79779A6B,"Example. An analyst for a bank wants to be able to predict customers, or potential customers, who are likely to default on their loan repayments. You can use a Bayesian network model to identify the characteristics of customers most likely to default, and build several different types of model to establish which is the best at predicting potential defaulters.
-
-Example. A telecommunications operator wants to reduce the number of customers who leave the business (known as ""churn""), and update the model on a monthly basis using each preceding month's data. You can use a Bayesian network model to identify the characteristics of customers most likely to churn, and continue training the model each month with the new data.
-"
-8B5211BC5AC76B26C8C102E576F0AF560DFBCBC2,8B5211BC5AC76B26C8C102E576F0AF560DFBCBC2," Binning node
-
-The Binning node enables you to automatically create new nominal fields based on the values of one or more existing continuous (numeric range) fields. For example, you can transform a continuous income field into a new categorical field containing income groups of equal width, or as deviations from the mean. Alternatively, you can select a categorical ""supervisor"" field in order to preserve the strength of the original association between the two fields.
-
-Binning can be useful for a number of reasons, including:
-
-
-
-* Algorithm requirements. Certain algorithms, such as Naive Bayes and Logistic Regression, require categorical inputs.
-* Performance. Algorithms such as multinomial logistic may perform better if the number of distinct values of input fields is reduced. For example, use the median or mean value for each bin rather than using the original values.
-* Data Privacy. Sensitive personal information, such as salaries, may be reported in ranges rather than actual salary figures in order to protect privacy.
-
-
-
-A number of binning methods are available. After you create bins for the new field, you can generate a Derive node based on the cut points.
-"
-C5673E6023D99F8354E9B61DA2D2F1B58FBC970F_0,C5673E6023D99F8354E9B61DA2D2F1B58FBC970F," C5.0 node
-
-This node uses the C5.0 algorithm to build either a decision tree or a rule set. A C5.0 model works by splitting the sample based on the field that provides the maximum information gain. Each sub-sample defined by the first split is then split again, usually based on a different field, and the process repeats until the subsamples cannot be split any further. Finally, the lowest-level splits are reexamined, and those that do not contribute significantly to the value of the model are removed or pruned.
-
-Note: The C5.0 node can predict only a categorical target. When analyzing data with categorical (nominal or ordinal) fields, the node is likely to group categories together.
-
-C5.0 can produce two kinds of models. A decision tree is a straightforward description of the splits found by the algorithm. Each terminal (or ""leaf"") node describes a particular subset of the training data, and each case in the training data belongs to exactly one terminal node in the tree. In other words, exactly one prediction is possible for any particular data record presented to a decision tree.
-
-In contrast, a rule set is a set of rules that tries to make predictions for individual records. Rule sets are derived from decision trees and, in a way, represent a simplified or distilled version of the information found in the decision tree. Rule sets can often retain most of the important information from a full decision tree but with a less complex model. Because of the way rule sets work, they do not have the same properties as decision trees. The most important difference is that with a rule set, more than one rule may apply for any particular record, or no rules at all may apply. If multiple rules apply, each rule gets a weighted ""vote"" based on the confidence associated with that rule, and the final prediction is decided by combining the weighted votes of all of the rules that apply to the record in question. If no rule applies, a default prediction is assigned to the record.
-
-"
-C5673E6023D99F8354E9B61DA2D2F1B58FBC970F_1,C5673E6023D99F8354E9B61DA2D2F1B58FBC970F,"Example. A medical researcher has collected data about a set of patients, all of whom suffered from the same illness. During their course of treatment, each patient responded to one of five medications. You can use a C5.0 model, in conjunction with other nodes, to help find out which drug might be appropriate for a future patient with the same illness.
-
-Requirements. To train a C5.0 model, there must be one categorical (i.e., nominal or ordinal) Target field, and one or more Input fields of any type. Fields set to Both or None are ignored. Fields used in the model must have their types fully instantiated. A weight field can also be specified.
-
-Strengths. C5.0 models are quite robust in the presence of problems such as missing data and large numbers of input fields. They usually do not require long training times to estimate. In addition, C5.0 models tend to be easier to understand than some other model types, since the rules derived from the model have a very straightforward interpretation. C5.0 also offers the powerful boosting method to increase accuracy of classification.
-
-Tip: C5.0 model building speed may benefit from enabling parallel processing. Note: When first creating a flow, you select which runtime to use. By default, flows use the IBM SPSS Modeler runtime. If you want to use native Spark algorithms instead of SPSS algorithms, select the Spark runtime. Properties for this node will vary depending on which runtime option you choose.
-"
-DE6C4CB72844FC59FD80FC0B26ACC8C94A3BA994,DE6C4CB72844FC59FD80FC0B26ACC8C94A3BA994," Caching options for nodes
-
-To optimize the running of flows, you can set up a cache on any nonterminal node. When you set up a cache on a node, the cache is filled with the data that passes through the node the next time you run the data flow. From then on, the data is read from the cache (which is stored temporarily) rather than from the data source.
-
-Caching is most useful following a time-consuming operation such as a sort, merge, or aggregation. For example, suppose that you have an import node set to read sales data from a database and an Aggregate node that summarizes sales by location. You can set up a cache on the Aggregate node rather than on the import node because you want the cache to store the aggregated data rather than the entire data set. Note: Caching at import nodes, which simply stores a copy of the original data as it is read into SPSS Modeler, won't improve performance in most circumstances.
-
-Nodes with caching enabled are displayed with a special circle-backslash icon. When the data is cached at the node, the icon changes to a check mark.
-
-Figure 1. Node with empty cache vs. node with full cache
-
-
-
-A circle-backslash icon by node indicates that its cache is empty. When the cache is full, the icon becomes a check mark. If you want to replace the contents of the cache, you must first flush the cache and then re-run the data flow to refill it.
-
-In your flow, right-click the node and select .
-"
-D43DE202E6D3EEE211893585616BDA7EB09211C4_0,D43DE202E6D3EEE211893585616BDA7EB09211C4," Continuous machine learning
-
-As a result of IBM research, and inspired by natural selection in biology, continuous machine learning is available for the Auto Classifier node and the Auto Numeric node.
-
-An inconvenience with modeling is models getting outdated due to changes to your data over time. This is commonly referred to as model drift or concept drift. To help overcome model drift effectively, SPSS Modeler provides continuous automated machine learning.
-
-What is model drift? When you build a model based on historical data, it can become stagnant. In many cases, new data is always coming in—new variations, new patterns, new trends, etc.—that the old historical data doesn't capture. To solve this problem, IBM was inspired by the famous phenomenon in biology called the natural selection of species. Think of models as species and think of data as nature. Just as nature selects species, we should let data select the model. There's one big difference between models and species: species can evolve, but models are static after they're built.
-
-There are two preconditions for species to evolve; the first is gene mutation, and the second is population. Now, from a modeling perspective, to satisfy the first precondition (gene mutation), we should introduce new data changes into the existing model. To satisfy the second precondition (population), we should use a number of models rather than just one. What can represent a number of models? An Ensemble Model Set (EMS)!
-
-The following figure illustrates how an EMS can evolve. The upper left portion of the figure represents historical data with hybrid partitions. The hybrid partitions ensure a rich initial EMS. The upper right portion of the figure represents a new chunk of data that becomes available, with vertical bars on each side. The left vertical bar represents current status, and the right vertical bar represents the status when there's a risk of model drift. In each new round of continuous machine learning, two steps are performed to evolve your model and avoid model drift.
-
-"
-D43DE202E6D3EEE211893585616BDA7EB09211C4_1,D43DE202E6D3EEE211893585616BDA7EB09211C4,"First, you construct an ensemble model set (EMS) using existing training data. After that, when a new chunk of data becomes available, new models are built against that new data and added to the EMS as component models. The weights of existing component models in the EMS are reevaluated using the new data. As a result of this reevaluation, component models having higher weights are selected for the current prediction, and component models having lower weights may be deleted from the EMS. This process refreshes the EMS for both model weights and model instances, thus evolving in a flexible and efficient way to address the inevitable changes to your data over time.
-
-Figure 1. Continuous auto machine learning
-
-
-
-The ensemble model set (EMS) is a generated auto model nugget, and there's a refresh link between the auto modeling node and the generated auto model nugget that defines the refresh relationship between them. When you enable continuous auto machine learning, new data assets are continuously fed to auto modeling nodes to generate new component models. The model nugget is updated instead of replaced.
-
-The following figure provides an example of the internal structure of an EMS in a continuous machine learning scenario. Only the top three component models are selected for the current prediction. For each component model (labeled as M1, M2, and M3), two kinds of weights are maintained. Current Model Weight (CMW) describes how a component model performs with a new chunk of data, and Accumulated Model Weight (AMW) describes the comprehensive performance of a component model against recent chunks of data. AMW is calculated iteratively via CMW and previous values of itself, and there's a hyper parameter beta to balance between them. The formula to calculate AMW is called exponential moving average.
-
-"
-D43DE202E6D3EEE211893585616BDA7EB09211C4_2,D43DE202E6D3EEE211893585616BDA7EB09211C4,"When a new chunk of data becomes available, first SPSS Modeler uses it to build a few new component models. In this example figure, model four (M4) is built with CMW and AMW calculated during the initial model building process. Then SPSS Modeler uses the new chunk of data to reevaluate measures of existing component models (M1, M2, and M3) and update their CMW and AMW based on the reevaluation results. Finally, SPSS Modeler might reorder the component models based on CMW or AMW and select the top three component models accordingly.
-
-In this figure, CMW is described using normalized value (sum = 1) and AMW is calculated based on CMW. In SPSS Modeler, the absolute value (equal to evaluation-weighted measure selected - for example, accuracy) is chosen to represent CMW and AMW for simplicity.
-
-Figure 2. EMS structure
-
-Note that there are two types of weights defined for each EMS component model, both of which could be used for selecting top N models and component model drop out:
-
-
-
-* Current Model Weight (CMW) is computed via evaluation against the new data chunk (for example, evaluation accuracy on the new data chunk).
-* Accumulated Model Weight (AMW) is computed via combining both CMW and existing AMW (for example, exponentially weighted moving average (EWMA).
-
-Exponential moving average formula for calculating AMW:
-
-
-
-
-In SPSS Modeler, after running an Auto Classifier node to generate a model nugget, the following model options are available for continuous machine learning:
-
-
-
-* Enable continuous auto machine learning during model refresh. Select this option to enable continuous machine learning. Keep in mind that consistent metadata (data model) must be used to train the continuous auto model. If you select this option, other options are enabled.
-"
-D43DE202E6D3EEE211893585616BDA7EB09211C4_3,D43DE202E6D3EEE211893585616BDA7EB09211C4,"* Enable automatic model weights reevaluation. This option controls whether evaluation measures (accuracy, for example) are computed and updated during model refresh. If you select this option, an automatic evaluation process will run after the EMS (during model refresh). This is because it's usually necessary to reevaluate existing component models using new data to reflect the current state of your data. Then the weights of the EMS component models are assigned according to reevaluation results, and the weights are used to decide the proportion a component model contributes to the final ensemble prediction. This option is selected by default.
-
-Figure 3. Model settings
-
-
-
-Figure 4. Flag target
-
-Following are the supported CMW and AMW for the Auto Classifier node:
-
-
-
-Table 1. Supported CMW and AMW
-
- Target type CMW AMW
-
- flag target Overall Accuracy Area Under Curve Accumulated Accuracy Accumulated AUC
- set target Overall Accuracy Accumulated Accuracy
-
-
-
-The following three options are related to AMW, which is used to evaluate how a component model performs during recent data chunk periods:
-* Enable accumulated factor during model weights reevaluation. If you select this option, AMW computation will be enabled during model weights reevaluation. AMW represents the comprehensive performance of an EMS component model during recent data chunk periods, related to the accumulated factor β defined in the AMW formula listed previously, which you can adjust in the node properties. When this option isn't selected, only CMW will be computed. This option is selected by default.
-"
-D43DE202E6D3EEE211893585616BDA7EB09211C4_4,D43DE202E6D3EEE211893585616BDA7EB09211C4,"* Perform model reduction based on accumulated limit during model refresh. Select this option if you want component models with an AMW value below the specified limit to be removed from the auto model EMS during model refresh. This can be helpful in discarding component models that are useless to prevent the auto model EMS from becoming too heavy.The accumulated limit value evaluation is related to the weighted measure used when Evaluation-weighted voting is selected as the ensemble method. See the following.
-
-Figure 5. Set and flag targets
-
-
-
-Note that if you select Model Accuracy for the evaluation-weighted measure, models with an accumulated accuracy below the specified limit will be deleted. And if you select Area under curve for the evaluation-weighted measure, models with an accumulated AUC below the specified limit will be deleted.
-
-By default, Model Accuracy is used for the evaluation-weighted measure for the Auto Classifier node, and there's an optional AUC ROC measure in the case of flag targets.
-* Use accumulated evaluation-weighted voting. Select this option if you want AMW to be used for the current scoring/prediction. Otherwise, CMW will be used by default. This option is enabled when Evaluation-weighted voting is selected for the ensemble method.
-
-Note that for flag targets, by selecting this option, if you select Model Accuracy for the evaluation-weighted measure, then Accumulated Accuracy will be used as the AMW to perform the current scoring. Or if you select Area under curve for the evaluation-weighted measure, then Accumulated AUC will be used as the AMW to perform the current scoring. If you don't select this option and you select Model Accuracy for the evaluation-weighted measure, then Overall Accuracy will be used as the CMW to perform the current scoring. If you select Area under curve, Area under curve will be used as the CMW to perform the current scoring.
-
-"
-D43DE202E6D3EEE211893585616BDA7EB09211C4_5,D43DE202E6D3EEE211893585616BDA7EB09211C4,"For set targets, if you select this Use accumulated evaluation-weighted voting option, then Accumulated Accuracy will be used as the AMW to perform the current scoring. Otherwise, Overall Accuracy will be used as the CMW to perform the current scoring.
-
-
-
-With continuous auto machine learning, the auto model nugget is evolving all the time by rebuilding the auto model, which ensures that you get the most updated version reflecting the current state of your data. SPSS Modeler provides the flexibility for different top N component models in the EMS to be selected according to their current weights, which keeps pace with varying data during different periods.
-
-Note: The Auto Numeric node is a much simpler case, providing a subset of the options in the Auto Classifier node.
-"
-461D1A8F855174F44550531EF8BE6E67C29D3E3B,461D1A8F855174F44550531EF8BE6E67C29D3E3B," CARMA node
-
-The CARMA node uses an association rules discovery algorithm to discover association rules in the data.
-
-Association rules are statements in the form:
-
-if antecedent(s) then consequent(s)
-
-For example, if a Web customer purchases a wireless card and a high-end wireless router, the customer is also likely to purchase a wireless music server if offered. The CARMA model extracts a set of rules from the data without requiring you to specify input or target fields. This means that the rules generated can be used for a wider variety of applications. For example, you can use rules generated by this node to find a list of products or services (antecedents) whose consequent is the item that you want to promote this holiday season. Using watsonx.ai, you can determine which clients have purchased the antecedent products and construct a marketing campaign designed to promote the consequent product.
-
-Requirements. In contrast to Apriori, the CARMA node does not require Input or Target fields. This is integral to the way the algorithm works and is equivalent to building an Apriori model with all fields set to Both. You can constrain which items are listed only as antecedents or consequents by filtering the model after it is built. For example, you can use the model browser to find a list of products or services (antecedents) whose consequent is the item that you want to promote this holiday season.
-
-To create a CARMA rule set, you need to specify an ID field and one or more content fields. The ID field can have any role or measurement level. Fields with the role None are ignored. Field types must be fully instantiated before executing the node. Like Apriori, data may be in tabular or transactional format.
-
-Strengths. The CARMA node is based on the CARMA association rules algorithm. In contrast to Apriori, the CARMA node offers build settings for rule support (support for both antecedent and consequent) rather than antecedent support. CARMA also allows rules with multiple consequents. Like Apriori, models generated by a CARMA node can be inserted into a data stream to create predictions.
-"
-37D9428BD2E4A45CA968DAD59D1005FB5FC4DE9C,37D9428BD2E4A45CA968DAD59D1005FB5FC4DE9C," C&R Tree node
-
-The Classification and Regression (C&R) Tree node is a tree-based classification and prediction method. Similar to C5.0, this method uses recursive partitioning to split the training records into segments with similar output field values. The C&R Tree node starts by examining the input fields to find the best split, measured by the reduction in an impurity index that results from the split. The split defines two subgroups, each of which is subsequently split into two more subgroups, and so on, until one of the stopping criteria is triggered. All splits are binary (only two subgroups).
-"
-D64140C0B8D4187B49046528FF61A54D77A99223,D64140C0B8D4187B49046528FF61A54D77A99223," CHAID node
-
-CHAID, or Chi-squared Automatic Interaction Detection, is a classification method for building decision trees by using chi-square statistics to identify optimal splits.
-
-CHAID first examines the crosstabulations between each of the input fields and the outcome, and tests for significance using a chi-square independence test. If more than one of these relations is statistically significant, CHAID will select the input field that is the most significant (smallest p value). If an input has more than two categories, these are compared, and categories that show no differences in the outcome are collapsed together. This is done by successively joining the pair of categories showing the least significant difference. This category-merging process stops when all remaining categories differ at the specified testing level. For nominal input fields, any categories can be merged; for an ordinal set, only contiguous categories can be merged.
-
-Exhaustive CHAID is a modification of CHAID that does a more thorough job of examining all possible splits for each predictor but takes longer to compute.
-
-Requirements. Target and input fields can be continuous or categorical; nodes can be split into two or more subgroups at each level. Any ordinal fields used in the model must have numeric storage (not string). If necessary, the Reclassify node can be used to convert them.
-
-Strengths. Unlike the C&R Tree and QUEST nodes, CHAID can generate nonbinary trees, meaning that some splits have more than two branches. It therefore tends to create a wider tree than the binary growing methods. CHAID works for all types of inputs, and it accepts both case weights and frequency variables.
-"
-54EE0BB6FBD2E35C46C41D0065C299408F5AB0A5,54EE0BB6FBD2E35C46C41D0065C299408F5AB0A5," Characters
-
-Characters (usually shown as CHAR) are typically used within a CLEM expression to perform tests on strings.
-
-For example, you can use the function isuppercode to determine whether the first character of a string is uppercase. The following CLEM expression uses a character to indicate that the test should be performed on the first character of the string:
-
-isuppercode(subscrs(1, ""MyString""))
-
-To express the code (in contrast to the location) of a particular character in a CLEM expression, use single backquotes of the form . For example, A , Z .
-
-Note: There is no CHAR storage type for a field, so if a field is derived or filled with an expression that results in a CHAR, then that result will be converted to a string.
-"
-3C1D83E94DDC08D7A6229AEDC49C895E86E660BF,3C1D83E94DDC08D7A6229AEDC49C895E86E660BF," CLEM datatypes
-
-This section covers CLEM datatypes.
-
-CLEM datatypes can be made up of any of the following:
-
-
-
-* Integers
-* Reals
-* Characters
-* Strings
-* Lists
-"
-6F900078FD88E14400807E571E1F3A24C633C2DC,6F900078FD88E14400807E571E1F3A24C633C2DC," CLEM examples
-
-The example expressions in this section illustrate correct syntax and the types of expressions possible with CLEM.
-
-Additional examples are discussed throughout this CLEM documentation. See [CLEM (legacy) language reference](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_language_reference.htmlclem_language_reference) for more information.
-"
-628354B3F2FA792B938756225315E3B4024DCC0E_0,628354B3F2FA792B938756225315E3B4024DCC0E," Functions reference
-
-This section lists CLEM functions for working with data in SPSS Modeler. You can enter these functions as code in various areas of the user interface, such as Derive and Set To Flag nodes, or you can use the Expression Builder to create valid CLEM expressions without memorizing function lists or field names.
-
-
-
-CLEM functions for use with SPSS Modeler data
-
-Table 1. CLEM functions for use with SPSS Modeler data
-
- Function Type Description
-
- [Information](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_information.htmlclem_function_ref_information) Used to gain insight into field values. For example, the function is_string returns true for all records whose type is a string.
- [Conversion](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_conversion.htmlclem_function_ref_conversion) Used to construct new fields or convert storage type. For example, the function to_timestamp converts the selected field to a timestamp.
- [Comparison](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_comparison.htmlclem_function_ref_comparison) Used to compare field values to each other or to a specified string. For example, <=is used to compare whether the values of two fields are lesser or equal.
- [Logical](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_logical.htmlclem_function_ref_logical) Used to perform logical operations, such as if, then, else operations.
-"
-628354B3F2FA792B938756225315E3B4024DCC0E_1,628354B3F2FA792B938756225315E3B4024DCC0E," [Numeric](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_numeric.htmlclem_function_ref_numeric) Used to perform numeric calculations, such as the natural log of field values.
- [Trigonometric](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_trigonometric.htmlclem_function_ref_trigonometric) Used to perform trigonometric calculations, such as the arccosine of a specified angle.
- [Probability](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_probability.htmlclem_function_ref_probability) Returns probabilities that are based on various distributions, such as probability that a value from Student's t distribution is less than a specific value.
- [Spatial](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_spatial.htmlclem_function_ref_spatial) Used to perform spatial calculations on geospatial data.
- [Bitwise](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_bitwise.htmlclem_function_ref_bitwise) Used to manipulate integers as bit patterns.
- [Random](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_random.htmlclem_function_ref_random) Used to randomly select items or generate numbers.
-"
-628354B3F2FA792B938756225315E3B4024DCC0E_2,628354B3F2FA792B938756225315E3B4024DCC0E," [String](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_string.htmlclem_function_ref_string) Used to perform various operations on strings, such as stripchar, which allows you to remove a specified character.
- [SoundEx](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_soundex.htmlclem_function_ref_soundex) Used to find strings when the precise spelling is not known; based on phonetic assumptions about how certain letters are pronounced.
- [Date and time](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_datetime.htmlclem_function_ref_datetime) Used to perform various operations on date, time, and timestamp fields.
- [Sequence](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_sequence.htmlclem_function_ref_sequence) Used to gain insight into the record sequence of a data set or perform operations that are based on that sequence.
- [Global](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_global.htmlclem_function_ref_global) Used to access global values that are created by a Set Globals node. For example, @MEAN is used to refer to the mean average of all values for a field across the entire data set.
-"
-1D1659B46A454170A597B0450FD99C16EEC5B1AD_0,1D1659B46A454170A597B0450FD99C16EEC5B1AD," Bitwise integer operations
-
-These functions enable integers to be manipulated as bit patterns representing two's-complement values, where bit position N has weight 2N.
-
-Bits are numbered from 0 upward. These operations act as though the sign bit of an integer is extended indefinitely to the left. Thus, everywhere above its most significant bit, a positive integer has 0 bits and a negative integer has 1 bit.
-
-
-
-CLEM bitwise integer operations
-
-Table 1. CLEM bitwise integer operations
-
- Function Result Description
-
- INT1 Integer Produces the bitwise complement of the integer INT1. That is, there is a 1 in the result for each bit position for which INT1 has 0. It is always true that INT = –(INT + 1).
- INT1 INT2 Integer The result of this operation is the bitwise ""inclusive or"" of INT1 and INT2. That is, there is a 1 in the result for each bit position for which there is a 1 in either INT1 or INT2 or both.
- INT1 /& INT2 Integer The result of this operation is the bitwise ""exclusive or"" of INT1 and INT2. That is, there is a 1 in the result for each bit position for which there is a 1 in either INT1 or INT2 but not in both.
- INT1 && INT2 Integer Produces the bitwise ""and"" of the integers INT1 and INT2. That is, there is a 1 in the result for each bit position for which there is a 1 in both INT1 and INT2.
- INT1 && INT2 Integer Produces the bitwise ""and"" of INT1 and the bitwise complement of INT2. That is, there is a 1 in the result for each bit position for which there is a 1 in INT1 and a 0 in INT2. This is the same as INT1&& (INT2) and is useful for clearing bits of INT1 set in INT2.
- INT << N Integer Produces the bit pattern of INT1 shifted left by N positions. A negative value for N produces a right shift.
-"
-1D1659B46A454170A597B0450FD99C16EEC5B1AD_1,1D1659B46A454170A597B0450FD99C16EEC5B1AD," INT >> N Integer Produces the bit pattern of INT1 shifted right by N positions. A negative value for N produces a left shift.
- INT1 &&=_0 INT2 Boolean Equivalent to the Boolean expression INT1 && INT2 /== 0 but is more efficient.
- INT1 &&/=_0 INT2 Boolean Equivalent to the Boolean expression INT1 && INT2 == 0 but is more efficient.
- integer_bitcount(INT) Integer Counts the number of 1 or 0 bits in the two's-complement representation of INT. If INT is non-negative, N is the number of 1 bits. If INT is negative, it is the number of 0 bits. Owing to the sign extension, there are an infinite number of 0 bits in a non-negative integer or 1 bits in a negative integer. It is always the case that integer_bitcount(INT) = integer_bitcount(-(INT+1)).
- integer_leastbit(INT) Integer Returns the bit position N of the least-significant bit set in the integer INT. N is the highest power of 2 by which INT divides exactly.
-"
-5FE3DE32EFB5DEA4094DCA22CBC77E24D23EF67A,5FE3DE32EFB5DEA4094DCA22CBC77E24D23EF67A," Functions handling blanks and null values
-
-Using CLEM, you can specify that certain values in a field are to be regarded as ""blanks,"" or missing values.
-
-The following functions work with blanks.
-
-
-
-CLEM blank and null value functions
-
-Table 1. CLEM blank and null value functions
-
- Function Result Description
-
- @BLANK(FIELD) Boolean Returns true for all records whose values are blank according to the blank-handling rules set in an upstream Type node or Import node (Types tab).
- @LAST_NON_BLANK(FIELD) Any Returns the last value for FIELD that was not blank, as defined in an upstream Import or Type node. If there are no nonblank values for FIELD in the records read so far, $null$ is returned. Note that blank values, also called user-missing values, can be defined separately for each field.
- @NULL(FIELD) Boolean Returns true if the value of FIELD is the system-missing $null$. Returns false for all other values, including user-defined blanks. If you want to check for both, use @BLANK(FIELD) and@NULL(FIELD).
- undef Any Used generally in CLEM to enter a $null$ value—for example, to fill blank values with nulls in the Filler node.
-
-
-
-Blank fields may be ""filled in"" with the Filler node. In both Filler and Derive nodes (multiple mode only), the special CLEM function @FIELD refers to the current field(s) being examined.
-"
-32A79D23C94FB1920DB500D2DD9464C1316C62A5_0,32A79D23C94FB1920DB500D2DD9464C1316C62A5," Comparison functions
-
-Comparison functions are used to compare field values to each other or to a specified string.
-
-For example, you can check strings for equality using =. An example of string equality verification is: Class = ""class 1"".
-
-For purposes of numeric comparison, greater means closer to positive infinity, and lesser means closer to negative infinity. That is, all negative numbers are less than any positive number.
-
-
-
-CLEM comparison functions
-
-Table 1. CLEM comparison functions
-
- Function Result Description
-
- count_equal(ITEM1, LIST) Integer Returns the number of values from a list of fields that are equal to ITEM1 or null if ITEM1 is null.
- count_greater_than(ITEM1, LIST) Integer Returns the number of values from a list of fields that are greater than ITEM1 or null if ITEM1 is null.
- count_less_than(ITEM1, LIST) Integer Returns the number of values from a list of fields that are less than ITEM1 or null if ITEM1 is null.
- count_not_equal(ITEM1, LIST) Integer Returns the number of values from a list of fields that aren't equal to ITEM1 or null if ITEM1 is null.
- count_nulls(LIST) Integer Returns the number of null values from a list of fields.
- count_non_nulls(LIST) Integer Returns the number of non-null values from a list of fields.
- date_before(DATE1, DATE2) Boolean Used to check the ordering of date values. Returns a true value if DATE1 is before DATE2.
- first_index(ITEM, LIST) Integer Returns the index of the first field containing ITEM from a LIST of fields or 0 if the value isn't found. Supported for string, integer, and real types only.
- first_non_null(LIST) Any Returns the first non-null value in the supplied list of fields. All storage types supported.
- first_non_null_index(LIST) Integer Returns the index of the first field in the specified LIST containing a non-null value or 0 if all values are null. All storage types are supported.
-"
-32A79D23C94FB1920DB500D2DD9464C1316C62A5_1,32A79D23C94FB1920DB500D2DD9464C1316C62A5," ITEM1 = ITEM2 Boolean Returns true for records where ITEM1 is equal to ITEM2.
- ITEM1 /= ITEM2 Boolean Returns true if the two strings are not identical or 0 if they're identical.
- ITEM1 < ITEM2 Boolean Returns true for records where ITEM1 is less than ITEM2.
- ITEM1 <= ITEM2 Boolean Returns true for records where ITEM1 is less than or equal to ITEM2.
- ITEM1 > ITEM2 Boolean Returns true for records where ITEM1 is greater than ITEM2.
- ITEM1 >= ITEM2 Boolean Returns true for records where ITEM1 is greater than or equal to ITEM2.
- last_index(ITEM, LIST) Integer Returns the index of the last field containing ITEM from a LIST of fields or 0 if the value isn't found. Supported for string, integer, and real types only.
- last_non_null(LIST) Any Returns the last non-null value in the supplied list of fields. All storage types supported.
- last_non_null_index(LIST) Integer Returns the index of the last field in the specified LIST containing a non-null value or 0 if all values are null. All storage types are supported.
- max(ITEM1, ITEM2) Any Returns the greater of the two items: ITEM1 or ITEM2.
- max_index(LIST) Integer Returns the index of the field containing the maximum value from a list of numeric fields or 0 if all values are null. For example, if the third field listed contains the maximum, the index value 3 is returned. If multiple fields contain the maximum value, the one listed first (leftmost) is returned.
- max_n(LIST) Number Returns the maximum value from a list of numeric fields or null if all of the field values are null.
- member(ITEM, LIST) Boolean Returns true if ITEM is a member of the specified LIST. Otherwise, a false value is returned. A list of field names can also be specified.
- min(ITEM1, ITEM2) Any Returns the lesser of the two items: ITEM1 or ITEM2.
-"
-32A79D23C94FB1920DB500D2DD9464C1316C62A5_2,32A79D23C94FB1920DB500D2DD9464C1316C62A5," min_index(LIST) Integer Returns the index of the field containing the minimum value from a list of numeric fields or 0 if all values are null. For example, if the third field listed contains the minimum, the index value 3 is returned. If multiple fields contain the minimum value, the one listed first (leftmost) is returned.
- min_n(LIST) Number Returns the minimum value from a list of numeric fields or null if all of the field values are null.
-"
-8CE325D8AFC27359968A8799D58EF4BF0C57D68E_0,8CE325D8AFC27359968A8799D58EF4BF0C57D68E," Conversion functions
-
-With conversion functions, you can construct new fields and convert the storage type of existing files.
-
-For example, you can form new strings by joining strings together or by taking strings apart. To join two strings, use the operator ><. For example, if the fieldSite has the value""BRAMLEY"", then ""xx"" >< Site returns ""xxBRAMLEY"". The result of>< is always a string, even if the arguments aren't strings. Thus, if field V1 is 3 and field V2 is 5, then V1 >< V2 returns ""35"" (a string, not a number).
-
-Conversion functions (and any other functions that require a specific type of input, such as a date or time value) depend on the current formats specified in the flow properties. For example, if you want to convert a string field with values Jan 2021, Feb 2021, and so on, select the matching date format MON YYYY as the default date format for the flow.
-
-
-
-CLEM conversion functions
-
-Table 1. CLEM conversion functions
-
- Function Result Description
-
- ITEM1 >< ITEM2 String Concatenates values for two fields and returns the resulting string as ITEM1ITEM2.
- to_integer(ITEM) Integer Converts the storage of the specified field to an integer.
- to_real(ITEM) Real Converts the storage of the specified field to a real.
- to_number(ITEM) Number Converts the storage of the specified field to a number.
- to_string(ITEM) String Converts the storage of the specified field to a string. When a real is converted to string using this function, it returns a value with 6 digits after the radix point.
- to_time(ITEM) Time Converts the storage of the specified field to a time.
- to_date(ITEM) Date Converts the storage of the specified field to a date.
- to_timestamp(ITEM) Timestamp Converts the storage of the specified field to a timestamp.
- to_datetime(ITEM) Datetime Converts the storage of the specified field to a date, time, or timestamp value.
-"
-8CE325D8AFC27359968A8799D58EF4BF0C57D68E_1,8CE325D8AFC27359968A8799D58EF4BF0C57D68E," datetime_date(ITEM) Date Returns the date value for a number, string, or timestamp. Note this is the only function that allows you to convert a number (in seconds) back to a date. If ITEM is a string, creates a date by parsing a string in the current date format. The date format specified in the flow properties must be correct for this function to be successful. If ITEM is a number, it's interpreted as a number of seconds since the base date (or epoch). Fractions of a day are truncated. If ITEM is a timestamp, the date part of the timestamp is returned. If ITEM is a date, it's returned unchanged.
- stb_centroid_latitude(ITEM) Integer Returns an integer value for latitude corresponding to centroid of the geohash argument.
-"
-D1FAFA3A73F77B401F49CC641BE44D61BC9C0689_0,D1FAFA3A73F77B401F49CC641BE44D61BC9C0689," Date and time functions
-
-CLEM includes a family of functions for handling fields with datetime storage of string variables representing dates and times.
-
-The formats of date and time used are specific to each flow and are specified in the flow properties. The date and time functions parse date and time strings according to the currently selected format.
-
-When you specify a year in a date that uses only two digits (that is, the century is not specified), SPSS Modeler uses the default century that's specified in the flow properties.
-
-
-
-CLEM date and time functions
-
-Table 1. CLEM date and time functions
-
- Function Result Description
-
- @TODAY String If you select Rollover days/mins in the flow properties, this function returns the current date as a string in the current date format. If you use a two-digit date format and don't select Rollover days/mins, this function returns $null$ on the current server.
- to_time(ITEM) Time Converts the storage of the specified field to a time.
- to_date(ITEM) Date Converts the storage of the specified field to a date.
- to_timestamp(ITEM) Timestamp Converts the storage of the specified field to a timestamp.
- to_datetime(ITEM) Datetime Converts the storage of the specified field to a date, time, or timestamp value.
- datetime_date(ITEM) Date Returns the date value for a number, string, or timestamp. Note this is the only function that allows you to convert a number (in seconds) back to a date. If ITEM is a string, creates a date by parsing a string in the current date format. The date format specified in the flow properties must be correct for this function to be successful. If ITEM is a number, it's interpreted as a number of seconds since the base date (or epoch). Fractions of a day are truncated. If ITEM is timestamp, the date part of the timestamp is returned. If ITEM is a date, it's returned unchanged.
-"
-D1FAFA3A73F77B401F49CC641BE44D61BC9C0689_1,D1FAFA3A73F77B401F49CC641BE44D61BC9C0689," date_before(DATE1, DATE2) Boolean Returns a value of true if DATE1 represents a date or timestamp before that represented by DATE2. Otherwise, this function returns a value of 0.
- date_days_difference(DATE1, DATE2) Integer Returns the time in days from the date or timestamp represented by DATE1 to that represented by DATE2, as an integer. If DATE2 is before DATE1, this function returns a negative number.
- date_in_days(DATE) Integer Returns the time in days from the baseline date to the date or timestamp represented by DATE, as an integer. If DATE is before the baseline date, this function returns a negative number. You must include a valid date for the calculation to work appropriately. For example, you should not specify 29 February 2001 as the date. Because 2001 isn't a leap year, this date doesn't exist.
- date_in_months(DATE) Real Returns the time in months from the baseline date to the date or timestamp represented by DATE, as a real number. This is an approximate figure based on a month of 30.4375 days. If DATE is before the baseline date, this function returns a negative number. You must include a valid date for the calculation to work appropriately. For example, you should not specify 29 February 2001 as the date. Because 2001 isn't a leap year, this date doesn't exist.
- date_in_weeks(DATE) Real Returns the time in weeks from the baseline date to the date or timestamp represented by DATE, as a real number. This is based on a week of 7.0 days. If DATE is before the baseline date, this function returns a negative number. You must include a valid date for the calculation to work appropriately. For example, you should not specify 29 February 2001 as the date. Because 2001 isn't a leap year, this date doesn't exist.
-"
-D1FAFA3A73F77B401F49CC641BE44D61BC9C0689_2,D1FAFA3A73F77B401F49CC641BE44D61BC9C0689," date_in_years(DATE) Real Returns the time in years from the baseline date to the date or timestamp represented by DATE, as a real number. This is an approximate figure based on a year of 365.25 days. If DATE is before the baseline date, this function returns a negative number. You must include a valid date for the calculation to work appropriately. For example, you should not specify 29 February 2001 as the date. Because 2001 isn't a leap year, this date doesn't exist.
- date_months_difference (DATE1, DATE2) Real Returns the time in months from the date or timestamp represented by DATE1 to that represented by DATE2, as a real number. This is an approximate figure based on a month of 30.4375 days. If DATE2 is before DATE1, this function returns a negative number.
- datetime_date(YEAR, MONTH, DAY) Date Creates a date value for the given YEAR, MONTH, and DAY. The arguments must be integers.
- datetime_day(DATE) Integer Returns the day of the month from a given DATE or timestamp. The result is an integer in the range 1 to 31.
- datetime_day_name(DAY) String Returns the full name of the given DAY. The argument must be an integer in the range 1 (Sunday) to 7 (Saturday).
- datetime_hour(TIME) Integer Returns the hour from a TIME or timestamp. The result is an integer in the range 0 to 23.
- datetime_in_seconds(TIME) Real Returns the seconds portion stored in TIME.
- datetime_in_seconds(DATE), datetime_in_seconds(DATETIME) Real Returns the accumulated number, converted into seconds, from the difference between the current DATE or DATETIME and the baseline date (1900-01-01).
- datetime_minute(TIME) Integer Returns the minute from a TIME or timestamp. The result is an integer in the range 0 to 59.
-"
-D1FAFA3A73F77B401F49CC641BE44D61BC9C0689_3,D1FAFA3A73F77B401F49CC641BE44D61BC9C0689," datetime_month(DATE) Integer Returns the month from a DATE or timestamp. The result is an integer in the range 1 to 12.
- datetime_month_name (MONTH) String Returns the full name of the given MONTH. The argument must be an integer in the range 1 to 12.
- datetime_now Timestamp Returns the current time as a timestamp.
- datetime_second(TIME) Integer Returns the second from a TIME or timestamp. The result is an integer in the range 0 to 59.
- datetime_day_short_name(DAY) String Returns the abbreviated name of the given DAY. The argument must be an integer in the range 1 (Sunday) to 7 (Saturday).
- datetime_month_short_name(MONTH) String Returns the abbreviated name of the given MONTH. The argument must be an integer in the range 1 to 12.
- datetime_time(HOUR, MINUTE, SECOND) Time Returns the time value for the specified HOUR, MINUTE, and SECOND. The arguments must be integers.
- datetime_time(ITEM) Time Returns the time value of the given ITEM.
- datetime_timestamp(YEAR, MONTH, DAY, HOUR, MINUTE, SECOND) Timestamp Returns the timestamp value for the given YEAR, MONTH, DAY, HOUR, MINUTE, and SECOND.
- datetime_timestamp(DATE, TIME) Timestamp Returns the timestamp value for the given DATE and TIME.
- datetime_timestamp(NUMBER) Timestamp Returns the timestamp value of the given number of seconds.
- datetime_weekday(DATE) Integer Returns the day of the week from the given DATE or timestamp.
- datetime_year(DATE) Integer Returns the year from a DATE or timestamp. The result is an integer such as 2021.
-"
-D1FAFA3A73F77B401F49CC641BE44D61BC9C0689_4,D1FAFA3A73F77B401F49CC641BE44D61BC9C0689," date_weeks_difference(DATE1, DATE2) Real Returns the time in weeks from the date or timestamp represented by DATE1 to that represented by DATE2, as a real number. This is based on a week of 7.0 days. If DATE2 is before DATE1, this function returns a negative number.
- date_years_difference (DATE1, DATE2) Real Returns the time in years from the date or timestamp represented by DATE1 to that represented by DATE2, as a real number. This is an approximate figure based on a year of 365.25 days. If DATE2 is before DATE1, this function returns a negative number.
- date_from_ywd(YEAR, WEEK, DAY) Integer Converts the year, week in year, and day in week, to a date using the ISO 8601 standard.
- date_iso_day(DATE) Integer Returns the day in the week from the date using the ISO 8601 standard.
- date_iso_week(DATE) Integer Returns the week in the year from the date using the ISO 8601 standard.
- date_iso_year(DATE) Integer Returns the year from the date using the ISO 8601 standard.
- time_before(TIME1, TIME2) Boolean Returns a value of true if TIME1 represents a time or timestamp before that represented by TIME2. Otherwise, this function returns a value of 0.
- time_hours_difference (TIME1, TIME2) Real Returns the time difference in hours between the times or timestamps represented by TIME1 and TIME2, as a real number. If you select Rollover days/mins in the flow properties, a higher value of TIME1 is taken to refer to the previous day. If you don't select the rollover option, a higher value of TIME1 causes the returned value to be negative.
- time_in_hours(TIME) Real Returns the time in hours represented by TIME, as a real number. For example, under time format HHMM, the expression time_in_hours('0130') evaluates to 1.5. TIME can represent a time or a timestamp.
-"
-D1FAFA3A73F77B401F49CC641BE44D61BC9C0689_5,D1FAFA3A73F77B401F49CC641BE44D61BC9C0689," time_in_mins(TIME) Real Returns the time in minutes represented by TIME, as a real number. TIME can represent a time or a timestamp.
- time_in_secs(TIME) Integer Returns the time in seconds represented by TIME, as an integer. TIME can represent a time or a timestamp.
-"
-299CEE894DFF422AAC8BF49B53CAC700DE1B172D,299CEE894DFF422AAC8BF49B53CAC700DE1B172D," Global functions
-
-The functions @MEAN, @SUM, @MIN, @MAX, and @SDEV work on, at most, all of the records read up to and including the current one. In some cases, however, it is useful to be able to work out how values in the current record compare with values seen in the entire data set. Using a Set Globals node to generate values across the entire data set, you can access these values in a CLEM expression using the global functions.
-
-For example,
-
-@GLOBAL_MAX(Age)
-
-returns the highest value of Age in the data set, while the expression
-
-(Value - @GLOBAL_MEAN(Value)) / @GLOBAL_SDEV(Value)
-
-expresses the difference between this record's Value and the global mean as a number of standard deviations. You can use global values only after they have been calculated by a Set Globals node.
-
-
-
-CLEM global functions
-
-Table 1. CLEM global functions
-
- Function Result Description
-
- @GLOBAL_MAX(FIELD) Number Returns the maximum value for FIELD over the whole data set, as previously generated by a Set Globals node. FIELD must be the name of a numeric, date/time/datetime, or string field. If the corresponding global value has not been set, an error occurs.
- @GLOBAL_MIN(FIELD) Number Returns the minimum value for FIELD over the whole data set, as previously generated by a Set Globals node. FIELD must be the name of a numeric, date/time/datetime, or string field. If the corresponding global value has not been set, an error occurs.
- @GLOBAL_SDEV(FIELD) Number Returns the standard deviation of values for FIELD over the whole data set, as previously generated by a Set Globals node. FIELD must be the name of a numeric field. If the corresponding global value has not been set, an error occurs.
-"
-C6379E4ACDD7B1C335E9944B8D9DBB08DB220420,C6379E4ACDD7B1C335E9944B8D9DBB08DB220420," Information functions
-
-You can use information functions to gain insight into the values of a particular field. They're typically used to derive flag fields.
-
-For example, the @BLANK function creates a flag field indicating records whose values are blank for the selected field. Similarly, you can check the storage type for a field using any of the storage type functions, such as is_string.
-
-
-
-CLEM information functions
-
-Table 1. CLEM information functions
-
- Function Result Description
-
- @BLANK(FIELD) Boolean Returns true for all records whose values are blank according to the blank-handling rules set in an upstream Type node or source node (Types tab).
- @NULL(ITEM) Boolean Returns true for all records whose values are undefined. Undefined values are system null values, displayed in SPSS Modeler as $null$.
- is_date(ITEM) Boolean Returns true for all records whose type is a date.
- is_datetime(ITEM) Boolean Returns true for all records whose type is a date, time, or timestamp.
- is_integer(ITEM) Boolean Returns true for all records whose type is an integer.
- is_number(ITEM) Boolean Returns true for all records whose type is a number.
- is_real(ITEM) Boolean Returns true for all records whose type is a real.
- is_string(ITEM) Boolean Returns true for all records whose type is a string.
-"
-A67EA42903BF8BE22AEB379891B7E1CA3EB2E4D1,A67EA42903BF8BE22AEB379891B7E1CA3EB2E4D1," Logical functions
-
-CLEM expressions can be used to perform logical operations.
-
-
-
-CLEM logical functions
-
-Table 1. CLEM logical functions
-
- Function Result Description
-
- COND1 and COND2 Boolean This operation is a logical conjunction and returns a true value if both COND1 and COND2 are true. If COND1 is false, then COND2 is not evaluated; this makes it possible to have conjunctions where COND1 first tests that an operation in COND2 is legal. For example, length(Label) >=6 and Label(6) = 'x'.
- COND1 or COND2 Boolean This operation is a logical (inclusive) disjunction and returns a true value if either COND1 or COND2 is true or if both are true. If COND1 is true, COND2 is not evaluated.
- not(COND) Boolean This operation is a logical negation and returns a true value if COND is false. Otherwise, this operation returns a value of 0.
-"
-EEC0EB0502DEF7B7ADB112F8D7D4C38E1F6D9170_0,EEC0EB0502DEF7B7ADB112F8D7D4C38E1F6D9170," Numeric functions
-
-CLEM contains a number of commonly used numeric functions.
-
-
-
-CLEM numeric functions
-
-Table 1. CLEM numeric functions
-
- Function Result Description
-
- –NUM Number Used to negate NUM. Returns the corresponding number with the opposite sign.
- NUM1 + NUM2 Number Returns the sum of NUM1 and NUM2.
- NUM1 –NUM2 Number Returns the value of NUM2 subtracted from NUM1.
- NUM1 * NUM2 Number Returns the value of NUM1 multiplied by NUM2.
- NUM1 / NUM2 Number Returns the value of NUM1 divided by NUM2.
- INT1 div INT2 Number Used to perform integer division. Returns the value of INT1 divided by INT2.
- INT1 rem INT2 Number Returns the remainder of INT1 divided by INT2. For example, INT1 – (INT1 div INT2) INT2.
- BASE POWER Number Returns BASE raised to the power POWER, where either may be any number (except that BASE must not be zero if POWER is zero of any type other than integer 0). If POWER is an integer, the computation is performed by successively multiplying powers of BASE. Thus, if BASE is an integer, the result will be an integer. If POWER is integer 0, the result is always a 1 of the same type as BASE. Otherwise, if POWER is not an integer, the result is computed as exp(POWER * log(BASE)).
- abs(NUM) Number Returns the absolute value of NUM, which is always a number of the same type.
- exp(NUM) Real Returns e raised to the power NUM, where e is the base of natural logarithms.
- fracof(NUM) Real Returns the fractional part of NUM, defined as NUM–intof(NUM).
- intof(NUM) Integer Truncates its argument to an integer. It returns the integer of the same sign as NUM and with the largest magnitude such that abs(INT) <= abs(NUM).
-"
-EEC0EB0502DEF7B7ADB112F8D7D4C38E1F6D9170_1,EEC0EB0502DEF7B7ADB112F8D7D4C38E1F6D9170," log(NUM) Real Returns the natural (base e) logarithm of NUM, which must not be a zero of any kind.
- log10(NUM) Real Returns the base 10 logarithm of NUM, which must not be a zero of any kind. This function is defined as log(NUM) / log(10).
- negate(NUM) Number Used to negate NUM. Returns the corresponding number with the opposite sign.
- round(NUM) Integer Used to round NUM to an integer by taking intof(NUM+0.5) if NUM is positive or intof(NUM–0.5) if NUM is negative.
- sign(NUM) Number Used to determine the sign of NUM. This operation returns –1, 0, or 1 if NUM is an integer. If NUM is a real, it returns –1.0, 0.0, or 1.0, depending on whether NUM is negative, zero, or positive.
- sqrt(NUM) Real Returns the square root of NUM. NUM must be positive.
- sum_n(LIST) Number Returns the sum of values from a list of numeric fields or null if all of the field values are null.
-"
-29DEEC30687F805460A83DD924D2F119274D25F8,29DEEC30687F805460A83DD924D2F119274D25F8," Probability functions
-
-Probability functions return probabilities based on various distributions, such as the probability that a value from Student's t distribution will be less than a specific value.
-
-
-
-CLEM probability functions
-
-Table 1. CLEM probability functions
-
- Function Result Description
-
- cdf_chisq(NUM, DF) Real Returns the probability that a value from the chi-square distribution with the specified degrees of freedom will be less than the specified number.
- cdf_f(NUM, DF1, DF2) Real Returns the probability that a value from the F distribution, with degrees of freedom DF1 and DF2, will be less than the specified number.
-"
-9789F3A8936AD06C653C1C7AEB421C70FFD7C3E1,9789F3A8936AD06C653C1C7AEB421C70FFD7C3E1," Random functions
-
-The functions listed on this page can be used to randomly select items or randomly generate numbers.
-
-
-
-CLEM random functions
-
-Table 1. CLEM random functions
-
- Function Result Description
-
- oneof(LIST) Any Returns a randomly chosen element of LIST. List items should be entered as [ITEM1,ITEM2,...,ITEM_N]. Note that a list of field names can also be specified.
-"
-BACAF30043E33912E3D7F174B3F8CF858CB3093A,BACAF30043E33912E3D7F174B3F8CF858CB3093A," Sequence functions
-
-For some operations, the sequence of events is important.
-
-The application allows you to work with the following record sequences:
-
-
-
-* Sequences and time series
-* Sequence functions
-* Record indexing
-* Averaging, summing, and comparing values
-* Monitoring change—differentiation
-* @SINCE
-* Offset values
-* Additional sequence facilities
-
-
-
-For many applications, each record passing through a stream can be considered as an individual case, independent of all others. In such situations, the order of records is usually unimportant.
-
-For some classes of problems, however, the record sequence is very important. These are typically time series situations, in which the sequence of records represents an ordered sequence of events or occurrences. Each record represents a snapshot at a particular instant in time; much of the richest information, however, might be contained not in instantaneous values but in the way in which such values are changing and behaving over time.
-
-Of course, the relevant parameter may be something other than time. For example, the records could represent analyses performed at distances along a line, but the same principles would apply.
-
-Sequence and special functions are immediately recognizable by the following characteristics:
-
-
-
-* They are all prefixed by @
-* Their names are given in uppercase
-
-
-
-Sequence functions can refer to the record currently being processed by a node, the records that have already passed through a node, and even, in one case, records that have yet to pass through a node. Sequence functions can be mixed freely with other components of CLEM expressions, although some have restrictions on what can be used as their arguments.
-"
-88E4E066B89D0A6993F31EA337930D962B76D6D1,88E4E066B89D0A6993F31EA337930D962B76D6D1," SoundEx functions
-
-SoundEx is a method used to find strings when the sound is known but the precise spelling isn't known.
-
-Developed in 1918, the method searches out words with similar sounds based on phonetic assumptions about how certain letters are pronounced. SoundEx can be used to search names in a database (for example, where spellings and pronunciations for similar names may vary). The basic SoundEx algorithm is documented in a number of sources and, despite known limitations (for example, leading letter combinations such as ph and f won't match even though they sound the same), is supported in some form by most databases.
-
-
-
-CLEM soundex functions
-
-Table 1. CLEM soundex functions
-
- Function Result Description
-
-"
-2C0EBF0CCB497F41C14A5895EF97C01864BFC3D2,2C0EBF0CCB497F41C14A5895EF97C01864BFC3D2," Spatial functions
-
-Spatial functions can be used with geospatial data. For example, they allow you to calculate the distances between two points, the area of a polygon, and so on.
-
-There can also be situations that require a merge of multiple geospatial data sets that are based on a spatial predicate (within, close to, and so on), which can be done through a merge condition.
-
-Notes:
-
-
-
-* These spatial functions don't apply to three-dimensional data. If you import three-dimensional data into a flow, only the first two dimensions are used by these functions. The z-axis values are ignored.
-* Geospatial functions aren't supported.
-
-
-
-
-
-CLEM spatial functions
-
-Table 1. CLEM spatial functions
-
- Function Result Description
-
- close_to(SHAPE,SHAPE,NUM) Boolean Tests whether 2 shapes are within a certain DISTANCE of each other. If a projected coordinate system is used, DISTANCE is in meters. If no coordinate system is used, it is an arbitrary unit.
- crosses(SHAPE,SHAPE) Boolean Tests whether 2 shapes cross each other. This function is suitable for 2 linestring shapes, or 1 linestring and 1 polygon.
- overlap(SHAPE,SHAPE) Boolean Tests whether there is an intersection between 2 polygons and that the intersection is interior to both shapes.
- within(SHAPE,SHAPE) Boolean Tests whether the entirety of SHAPE1 is contained within a POLYGON.
- area(SHAPE) Real Returns the area of the specified POLYGON. If a projected system is used, the function returns meters squared. If no coordinate system is used, it is an arbitrary unit. The shape must be a POLYGON or a MULTIPOLYGON.
-"
-4058D0B5222F1C34ABF1737A10DA705E27480606,4058D0B5222F1C34ABF1737A10DA705E27480606," Special fields
-
-Special functions are used to denote the specific fields under examination, or to generate a list of fields as input.
-
-For example, when deriving multiple fields at once, you should use @FIELD to denote perform this derive action on the selected fields. Using the expression log(@FIELD) derives a new log field for each selected field.
-
-
-
-CLEM special fields
-
-Table 1. CLEM special fields
-
- Function Result Description
-
- @FIELD Any Performs an action on all fields specified in the expression context.
- @TARGET Any When a CLEM expression is used in a user-defined analysis function, @TARGET represents the target field or ""correct value"" for the target/predicted pair being analyzed. This function is commonly used in an Analysis node.
- @PREDICTED Any When a CLEM expression is used in a user-defined analysis function, @PREDICTED represents the predicted value for the target/predicted pair being analyzed. This function is commonly used in an Analysis node.
- @PARTITION_FIELD Any Substitutes the name of the current partition field.
- @TRAINING_PARTITION Any Returns the value of the current training partition. For example, to select training records using a Select node, use the CLEM expression: @PARTITION_FIELD = @TRAINING_PARTITION This ensures that the Select node will always work regardless of which values are used to represent each partition in the data.
- @TESTING_PARTITION Any Returns the value of the current testing partition.
- @VALIDATION_PARTITION Any Returns the value of the current validation partition.
- @FIELDS_BETWEEN(start, end) Any Returns the list of field names between the specified start and end fields (inclusive) based on the natural (that is, insert) order of the fields in the data.
-"
-9A83A33ABB4C6A12A7457D3711C2511EB3982B2C_0,9A83A33ABB4C6A12A7457D3711C2511EB3982B2C," String functions
-
-With CLEM, you can run operations to compare strings, create strings, or access characters.
-
-In CLEM, a string is any sequence of characters between matching double quotation marks (""string quotes""). Characters (CHAR) can be any single alphanumeric character. They're declared in CLEM expressions using single back quotes in the form of , such as z , A , or 2 . Characters that are out-of-bounds or negative indices to a string will result in undefined behavior.
-
-Note: Comparisons between strings that do and do not use SQL pushback may generate different results where trailing spaces exist.
-
-
-
-CLEM string functions
-
-Table 1. CLEM string functions
-
- Function Result Description
-
- allbutfirst(N, STRING) String Returns a string, which is STRING with the first N characters removed.
- allbutlast(N, STRING) String Returns a string, which is STRING with the last characters removed.
- alphabefore(STRING1, STRING2) Boolean Used to check the alphabetical ordering of strings. Returns true if STRING1 precedes STRING2.
- count_substring(STRING, SUBSTRING) Integer Returns the number of times the specified substring occurs within the string. For example, count_substring(""foooo.txt"", ""oo"") returns 3.
- endstring(LENGTH, STRING) String Extracts the last N characters from the specified string. If the string length is less than or equal to the specified length, then it is unchanged.
- hasendstring(STRING, SUBSTRING) Integer This function is the same as isendstring(SUBSTRING, STRING).
- hasmidstring(STRING, SUBSTRING) Integer This function is the same as ismidstring(SUBSTRING, STRING) (embedded substring).
- hasstartstring(STRING, SUBSTRING) Integer This function is the same as isstartstring(SUBSTRING, STRING).
- hassubstring(STRING, N, SUBSTRING) Integer This function is the same as issubstring(SUBSTRING, N, STRING), where N defaults to 1.
-"
-9A83A33ABB4C6A12A7457D3711C2511EB3982B2C_1,9A83A33ABB4C6A12A7457D3711C2511EB3982B2C," hassubstring(STRING, SUBSTRING) Integer This function is the same as issubstring(SUBSTRING, 1, STRING), where N defaults to 1.
- isalphacode(CHAR) Boolean Returns a value of true if CHAR is a character in the specified string (often a field name) whose character code is a letter. Otherwise, this function returns a value of 0. For example, isalphacode(produce_num(1)).
- isendstring(SUBSTRING, STRING) Integer If the string STRING ends with the substring SUBSTRING, then this function returns the integer subscript of SUBSTRING in STRING. Otherwise, this function returns a value of 0.
- islowercode(CHAR) Boolean Returns a value of true if CHAR is a lowercase letter character for the specified string (often a field name). Otherwise, this function returns a value of 0. For example, both () and islowercode(country_name(2)) are valid expressions.
- ismidstring(SUBSTRING, STRING) Integer If SUBSTRING is a substring of STRING but does not start on the first character of STRING or end on the last, then this function returns the subscript at which the substring starts. Otherwise, this function returns a value of 0.
- isnumbercode(CHAR) Boolean Returns a value of true if CHAR for the specified string (often a field name) is a character whose character code is a digit. Otherwise, this function returns a value of 0. For example, isnumbercode(product_id(2)).
- isstartstring(SUBSTRING, STRING) Integer If the string STRING starts with the substring SUBSTRING, then this function returns the subscript 1. Otherwise, this function returns a value of 0.
- issubstring(SUBSTRING, N, STRING) Integer Searches the string STRING, starting from its Nth character, for a substring equal to the string SUBSTRING. If found, this function returns the integer subscript at which the matching substring begins. Otherwise, this function returns a value of 0. If N is not given, this function defaults to 1.
-"
-9A83A33ABB4C6A12A7457D3711C2511EB3982B2C_2,9A83A33ABB4C6A12A7457D3711C2511EB3982B2C," issubstring(SUBSTRING, STRING) Integer Searches the string STRING. If found, this function returns the integer subscript at which the matching substring begins. Otherwise, this function returns a value of 0.
- issubstring_count(SUBSTRING, N, STRING) Integer Returns the index of the Nth occurrence of SUBSTRING within the specified STRING. If there are fewer than N occurrences of SUBSTRING, 0 is returned.
- issubstring_lim(SUBSTRING, N, STARTLIM, ENDLIM, STRING) Integer This function is the same as issubstring, but the match is constrained to start on STARTLIM and to end on ENDLIM. The STARTLIM or ENDLIM constraints may be disabled by supplying a value of false for either argument—for example, issubstring_lim(SUBSTRING, N, false, false, STRING) is the same as issubstring.
- isuppercode(CHAR) Boolean Returns a value of true if CHAR is an uppercase letter character. Otherwise, this function returns a value of 0. For example, both () and isuppercode(country_name(2)) are valid expressions.
- last(STRING) String Returns the last character CHAR of STRING (which must be at least one character long).
- length(STRING) Integer Returns the length of the string STRING (that is, the number of characters in it).
-"
-9A83A33ABB4C6A12A7457D3711C2511EB3982B2C_3,9A83A33ABB4C6A12A7457D3711C2511EB3982B2C," locchar(CHAR, N, STRING) Integer Used to identify the location of characters in symbolic fields. The function searches the string STRING for the character CHAR, starting the search at the Nth character of STRING. This function returns a value indicating the location (starting at N) where the character is found. If the character is not found, this function returns a value of 0. If the function has an invalid offset (N) (for example, an offset that is beyond the length of the string), this function returns $null$. For example, locchar(n, 2, web_page) searches the field called web_page for the n character beginning at the second character in the field value. Be sure to use single back quotes to encapsulate the specified character.
- locchar_back(CHAR, N, STRING) Integer Similar to locchar, except that the search is performed backward starting from the Nth character. For example, locchar_back(n, 9, web_page) searches the field web_page starting from the ninth character and moving backward toward the start of the string. If the function has an invalid offset (for example, an offset that is beyond the length of the string), this function returns $null$. Ideally, you should use locchar_back in conjunction with the function length() to dynamically use the length of the current value of the field. For example, locchar_back(n, (length(web_page)), web_page).
- lowertoupper(CHAR)lowertoupper (STRING) CHAR or String Input can be either a string or character, which is used in this function to return a new item of the same type, with any lowercase characters converted to their uppercase equivalents. For example, lowertoupper(a), lowertoupper(“My string”), and lowertoupper(field_name(2)) are all valid expressions.
-"
-9A83A33ABB4C6A12A7457D3711C2511EB3982B2C_4,9A83A33ABB4C6A12A7457D3711C2511EB3982B2C," matches Boolean Returns true if a string matches a specified pattern. The pattern must be a string literal; it can't be a field name containing a pattern. You can include a question mark (?) in the pattern to match exactly one character; an asterisk () matches zero or more characters. To match a literal question mark or asterisk (rather than using these as wildcards), use a backslash () as an escape character.
- replace(SUBSTRING, NEWSUBSTRING, STRING) String Within the specified STRING, replace all instances of SUBSTRING with NEWSUBSTRING.
- replicate(COUNT, STRING) String Returns a string that consists of the original string copied the specified number of times.
- stripchar(CHAR,STRING) String Enables you to remove specified characters from a string or field. You can use this function, for example, to remove extra symbols, such as currency notations, from data to achieve a simple number or name. For example, using the syntax stripchar($, 'Cost') returns a new field with the dollar sign removed from all values. Be sure to use single back quotes to encapsulate the specified character.
- skipchar(CHAR, N, STRING) Integer Searches the string STRING for any character other than CHAR, starting at the Nth character. This function returns an integer substring indicating the point at which one is found or 0 if every character from the Nth onward is a CHAR. If the function has an invalid offset (for example, an offset that is beyond the length of the string), this function returns $null$. locchar is often used in conjunction with the skipchar functions to determine the value of N (the point at which to start searching the string). For example, skipchar(s, (locchar(s, 1, ""MyString"")), ""MyString"").
- skipchar_back(CHAR, N, STRING) Integer Similar to skipchar, except that the search is performed backward, starting from the Nth character.
-"
-9A83A33ABB4C6A12A7457D3711C2511EB3982B2C_5,9A83A33ABB4C6A12A7457D3711C2511EB3982B2C," startstring(N, STRING) String Extracts the first N characters from the specified string. If the string length is less than or equal to the specified length, then it is unchanged.
- strmember(CHAR, STRING) Integer Equivalent to locchar(CHAR, 1, STRING). It returns an integer substring indicating the point at which CHAR first occurs, or 0. If the function has an invalid offset (for example, an offset that is beyond the length of the string), this function returns $null$.
- subscrs(N, STRING) CHAR Returns the Nth character CHAR of the input string STRING. This function can also be written in a shorthand form as STRING(N). For example, lowertoupper(“name”(1)) is a valid expression.
- substring(N, LEN, STRING) String Returns a string SUBSTRING, which consists of the LEN characters of the string STRING, starting from the character at subscript N.
- substring_between(N1, N2, STRING) String Returns the substring of STRING, which begins at subscript N1 and ends at subscript N2.
- textsplit(STRING, N, CHAR) String textsplit(STRING,N,CHAR) returns the substring between the Nth-1 and Nth occurrence of CHAR. If N is 1, then it will return the substring from the beginning of STRING up to but not including CHAR. If N-1 is the last occurrence of CHAR, then it will return the substring from the Nth-1 occurrence of CHAR to the end of the string.
- trim(STRING) String Removes leading and trailing white space characters from the specified string.
- trimstart(STRING) String Removes leading white space characters from the specified string.
- trimend(STRING) String Removes trailing white space characters from the specified string.
- unicode_char(NUM) CHAR Input must be decimal, not hexadecimal values. Returns the character with Unicode value NUM.
-"
-2904E26946523BB3E78975F68A822F5F2A32B9F5,2904E26946523BB3E78975F68A822F5F2A32B9F5," Trigonometric functions
-
-All of the functions in this section either take an angle as an argument or return one as a result.
-
-
-
-CLEM trigonometric functions
-
-Table 1. CLEM trigonometric functions
-
- Function Result Description
-
- arccos(NUM) Real Computes the arccosine of the specified angle.
- arccosh(NUM) Real Computes the hyperbolic arccosine of the specified angle.
- arcsin(NUM) Real Computes the arcsine of the specified angle.
- arcsinh(NUM) Real Computes the hyperbolic arcsine of the specified angle.
- arctan(NUM) Real Computes the arctangent of the specified angle.
- arctan2(NUM_Y, NUM_X) Real Computes the arctangent of NUM_Y / NUM_X and uses the signs of the two numbers to derive quadrant information. The result is a real in the range - pi < ANGLE <= pi (radians) – 180 < ANGLE <= 180 (degrees)
- arctanh(NUM) Real Computes the hyperbolic arctangent of the specified angle.
- cos(NUM) Real Computes the cosine of the specified angle.
- cosh(NUM) Real Computes the hyperbolic cosine of the specified angle.
- pi Real This constant is the best real approximation to pi.
- sin(NUM) Real Computes the sine of the specified angle.
- sinh(NUM) Real Computes the hyperbolic sine of the specified angle.
-"
-621083EB36CF3896B77D22EDBCC23FD2716F6B4A,621083EB36CF3896B77D22EDBCC23FD2716F6B4A," Converting date and time values
-
-Note that conversion functions (and any other functions that require a specific type of input, such as a date or time value) depend on the current formats specified in the flow properties.
-
-For example, if you have a field named DATE that's stored as a string with values Jan 2021, Feb 2021, and so on, you could convert it to date storage as follows:
-
-to_date(DATE)
-
-For this conversion to work, select the matching date format MON YYYY as the default date format for the flow.
-
-Dates stored as numbers. Note that DATE in the previous example is the name of a field, while to_date is a CLEM function. If you have dates stored as numbers, you can convert them using the datetime_date function, where the number is interpreted as a number of seconds since the base date (or epoch).
-
-datetime_date(DATE)
-
-By converting a date to a number of seconds (and back), you can perform calculations such as computing the current date plus or minus a fixed number of days. For example:
-
-datetime_date((date_in_days(DATE)-7)606024)
-"
-ADBEF9D5635EB271A8BD78B23064DCBA1A1915A6,ADBEF9D5635EB271A8BD78B23064DCBA1A1915A6," CLEM (legacy) language reference
-
-This section describes the Control Language for Expression Manipulation (CLEM), which is a powerful tool used to analyze and manipulate the data used in SPSS Modeler flows.
-
-You can use CLEM within nodes to perform tasks ranging from evaluating conditions or deriving values to inserting data into reports. CLEM expressions consist of values, field names, operators, and functions. Using the correct syntax, you can create a wide variety of powerful data operations.
-
-Figure 1. Expression Builder
-
-
-"
-88467827811ED045A648A3C215F5B91D43EB49CD,88467827811ED045A648A3C215F5B91D43EB49CD," Working with multiple-response data
-
-You can analyze multiple-response data using a number of comparison functions.
-
-Available comparison functions include:
-
-
-
-* value_at
-* first_index / last_index
-* first_non_null / last_non_null
-* first_non_null_index / last_non_null_index
-* min_index / max_index
-
-
-
-For example, suppose a multiple-response question asked for the first, second, and third most important reasons for deciding on a particular purchase (for example, price, personal recommendation, review, local supplier, other). In this case, you might determine the importance of price by deriving the index of the field in which it was first included:
-
-first_index(""price"", [Reason1 Reason2 Reason3])
-
-Similarly, suppose you asked customers to rank three cars in order of likelihood to purchase and coded the responses in three separate fields, as follows:
-
-
-
-Car ranking example
-
-Table 1. Car ranking example
-
- customer id car1 car2 car3
-
- 101 1 3 2
- 102 3 2 1
- 103 2 3 1
-
-
-
-In this case, you could determine the index of the field for the car they like most (ranked #1, or the lowest rank) using the min_index function:
-
-min_index(['car1' 'car2' 'car3'])
-
-See [Comparison functions](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_comparison.htmlclem_function_ref_comparison) for more information.
-"
-BC314650433831859C400BFFEFE5F919ED8735EA,BC314650433831859C400BFFEFE5F919ED8735EA," Working with numbers
-
-Numerous standard operations on numeric values are available in SPSS Modeler.
-
-
-
-* Calculating the sine of the specified angle—sin(NUM)
-* Calculating the natural log of numeric fields—log(NUM)
-* Calculating the sum of two numbers—NUM1 + NUM2
-
-
-
-See [Numeric functions](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_numeric.htmlclem_function_ref_numeric) for more information.
-"
-595BB1738027C777C1EB5A69631587923690ABC4_0,595BB1738027C777C1EB5A69631587923690ABC4," Working with strings
-
-There are a number of operations available for strings.
-
-
-
-* Converting a string to uppercase or lowercase—uppertolower(CHAR).
-* Removing specified characters, such as ID_ or $ , from a string variable—stripchar(CHAR,STRING).
-* Determining the length (number of characters) for a string variable—length(STRING).
-* Checking the alphabetical ordering of string values—alphabefore(STRING1, STRING2).
-* Removing leading or trailing white space from values—trim(STRING), trim_start(STRING), or trimend(STRING).
-* Extract the first or last n characters from a string—startstring(LENGTH, STRING) or endstring(LENGTH, STRING). For example, suppose you have a field named item that combines a product name with a four-digit ID code (ACME CAMERA-D109). To create a new field that contains only the four-digit code, specify the following formula in a Derive node:
-
-endstring(4, item)
-* Matching a specific pattern—STRING matches PATTERN. For example, to select persons with ""market"" anywhere in their job title, you could specify the following in a Select node:
-
-job_title matches ""market""
-* Replacing all instances of a substring within a string—replace(SUBSTRING, NEWSUBSTRING, STRING). For example, to replace all instances of an unsupported character, such as a vertical pipe ( | ), with a semicolon prior to text mining, use the replace function in a Filler node. Under Fill in fields in the node properties, select all fields where the character may occur. For the Replace condition, select Always, and specify the following condition under Replace with.
-
-replace('|',';',@FIELD)
-* Deriving a flag field based on the presence of a specific substring. For example, you could use a string function in a Derive node to generate a separate flag field for each response with an expression such as:
-
-
-
-"
-595BB1738027C777C1EB5A69631587923690ABC4_1,595BB1738027C777C1EB5A69631587923690ABC4,"hassubstring(museums,""museum_of_design"")
-
-See [String functions](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_string.htmlclem_function_ref_string) for more information.
-"
-D1FEF8C7F5BE28316CAA952CCC76281E6F3FE12F,D1FEF8C7F5BE28316CAA952CCC76281E6F3FE12F," Summarizing multiple fields
-
-The CLEM language includes a number of functions that return summary statistics across multiple fields.
-
-These functions may be particularly useful in analyzing survey data, where multiple responses to a question may be stored in multiple fields. See [Working with multiple-response data](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_overview_multiple_response_data.htmlclem_overview_multiple_response_data) for more information.
-"
-DAD2EDE59535330241F2FEBDF9BF99E21DEB4393,DAD2EDE59535330241F2FEBDF9BF99E21DEB4393," Working with times and dates
-
-Time and date formats may vary depending on your data source and locale. The formats of date and time are specific to each flow and are set in the flow properties.
-
-The following examples are commonly used functions for working with date/time fields.
-"
-0F686BF5943844896A5385E01D440548081D2688,0F686BF5943844896A5385E01D440548081D2688," Handling blanks and missing values
-
-Replacing blanks or missing values is a common data preparation task for data miners. CLEM provides you with a number of tools to automate blank handling.
-
-The Filler node is the most common place to work with blanks; however, the following functions can be used in any node that accepts CLEM expressions:
-
-
-
-* @BLANK(FIELD) can be used to determine records whose values are blank for a particular field, such as Age.
-* @NULL(FIELD) can be used to determine records whose values are system-missing for the specified field(s). In SPSS Modeler, system-missing values are displayed as $null$ values.
-
-
-
-See [Functions handling blanks and null values](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_blanksnulls.htmlclem_function_ref_blanksnulls) for more information.
-"
-23296AAD76933152D5D3E9DD875EBBD3FB7575EA,23296AAD76933152D5D3E9DD875EBBD3FB7575EA," Building CLEM (legacy) expressions
-"
-7B9348596E2F005F89842D1B997FA09BDCBE8F06,7B9348596E2F005F89842D1B997FA09BDCBE8F06," Conventions in function descriptions
-
-This page describes the conventions used throughout this guide when referring to items in a function.
-
-
-
-Conventions in function descriptions
-
-Table 1. Conventions in function descriptions
-
- Convention Description
-
- BOOL A Boolean, or flag, such as true or false.
- NUM, NUM1, NUM2 Any number.
- REAL, REAL1, REAL2 Any real number, such as 1.234 or –77.01.
- INT, INT1, INT2 Any integer, such as 1 or –77.
- CHAR A character code, such as A .
- STRING A string, such as ""referrerID"".
- LIST A list of items, such as [""abc"" ""def""] or [A1, A2, A3] or [1 2 4 16].
- ITEM A field, such as Customer or extract_concept.
- DATE A date field, such as start_date, where values are in a format such as DD-MON-YYYY.
- TIME A time field, such as power_flux, where values are in a format such as HHMMSS.
-
-
-
-Functions in this guide are listed with the function in one column, the result type (integer, string, and so on) in another, and a description (where available) in a third column. For example, following is a description of the rem function.
-
-
-
-rem function description
-
-Table 2. rem function description
-
- Function Result Description
-
- INT1 rem INT2 Number Returns the remainder of INT1 divided by INT2. For example, INT1 – (INT1 div INT2) INT2.
-
-
-
-Details on usage conventions, such as how to list items or specify characters in a function, are described elsewhere. See [CLEM datatypes](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_datatypes.htmlclem_datatypes) for more information.
-"
-C2185A8C9156C6B38D76BD3FD29A833D96A5762B,C2185A8C9156C6B38D76BD3FD29A833D96A5762B," Dates
-
-Date calculations are based on a ""baseline"" date, which is specified in the flow properties. The default baseline date is 1 January 1900.
-
-The CLEM language supports the following date formats.
-
-
-
-CLEM language date formats
-
-Table 1. CLEM language date formats
-
- Format Examples
-
- DDMMYY 150163
- MMDDYY 011563
- YYMMDD 630115
- YYYYMMDD 19630115
- YYYYDDD Four-digit year followed by a three-digit number representing the day of the year—for example, 2000032 represents the 32nd day of 2000, or 1 February 2000.
- DAY Day of the week in the current locale—for example, Monday, Tuesday, ..., in English.
- MONTH Month in the current locale—for example, January, February, ….
- DD/MM/YY 15/01/63
- DD/MM/YYYY 15/01/1963
- MM/DD/YY 01/15/63
- MM/DD/YYYY 01/15/1963
- DD-MM-YY 15-01-63
- DD-MM-YYYY 15-01-1963
- MM-DD-YY 01-15-63
- MM-DD-YYYY 01-15-1963
- DD.MM.YY 15.01.63
- DD.MM.YYYY 15.01.1963
- MM.DD.YY 01.15.63
- MM.DD.YYYY 01.15.1963
- DD-MON-YY 15-JAN-63, 15-jan-63, 15-Jan-63
- DD/MON/YY 15/JAN/63, 15/jan/63, 15/Jan/63
- DD.MON.YY 15.JAN.63, 15.jan.63, 15.Jan.63
- DD-MON-YYYY 15-JAN-1963, 15-jan-1963, 15-Jan-1963
- DD/MON/YYYY 15/JAN/1963, 15/jan/1963, 15/Jan/1963
- DD.MON.YYYY 15.JAN.1963, 15.jan.1963, 15.Jan.1963
- MON YYYY Jan 2004
-"
-FE88457CA86FFE3BE30873156A7A0A4FD12975AF,FE88457CA86FFE3BE30873156A7A0A4FD12975AF," Accessing the Expression Builder
-
-The Expression Builder is available in all nodes where CLEM expressions are used, including Select, Balance, Derive, Filler, Analysis, Report, and Table nodes.
-
-You can open it by double-clicking the node to open its properties, then click the calculator button by the formula field.
-"
-56EA4620B049A9E291BF198E71D0C58C2018686D,56EA4620B049A9E291BF198E71D0C58C2018686D," Checking CLEM expressions
-
-Click Validate in the Expression Builder to validate an expression.
-
-Expressions that haven't been checked are displayed in red. If errors are found, a message indicating the cause is displayed.
-
-The following items are checked:
-
-
-
-* Correct quoting of values and field names
-* Correct usage of parameters and global variables
-* Valid usage of operators
-* Existence of referenced fields
-* Existence and definition of referenced globals
-
-
-
-If you encounter errors in syntax, try creating the expression using the lists and operator buttons rather than typing the expression manually. This method automatically adds the proper quotes for fields and values.
-
-Note: Field names that contain separators must be surrounded by single quotes. To automatically add quotes, you can create expressions using the lists and operator buttons rather than typing expressions manually. The following characters in field names may cause errors: * ! "" $% & '() = |-^ ¥ @"" ""+ "" ""<>? . ,/ :; →(arrow mark), □ △ (graphic mark, etc.)
-"
-6FD8A950F1EBE6B021EA9D4C775A5CA8660A1101,6FD8A950F1EBE6B021EA9D4C775A5CA8660A1101," Creating expressions
-
-The Expression Builder provides not only complete lists of fields, functions, and operators but also access to data values if your data is instantiated.
-"
-B8044B03933E3FCEA5BCF6362199ED083EC2F20F,B8044B03933E3FCEA5BCF6362199ED083EC2F20F," Database functions
-
-You can run an SPSS Modeler desktop stream file ( .str) that contains database functions.
-
-But database functions aren't available in the Expression Builder user interface, and you can't edit them.
-"
-841465AD74B0AFDBEC9EAFF7B038AFC4C000E96C,841465AD74B0AFDBEC9EAFF7B038AFC4C000E96C," Selecting fields
-
-The field list displays all fields available at this point in the data stream. Double-click a field from the list to add it to your expression.
-
-After selecting a field, you can also select an associated value from the value list.
-"
-0093065541AA4C3E90E47E3ACE89596155EA1735_0,0093065541AA4C3E90E47E3ACE89596155EA1735," Selecting functions
-
-The function list displays all available SPSS Modeler functions and operators. Scroll to select a function from the list, or, for easier searching, use the drop-down list to display a subset of functions or operators. Available functions are grouped into categories for easier searching.
-
-Most of these categories are described in the Reference section of the CLEM language description. For more information, see [Functions reference](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref.htmlclem_function_ref).
-
-The other categories are as follows.
-
-
-
-* General Functions. Contains a selection of some of the most commonly-used functions.
-* Recently Used. Contains a list of CLEM functions used within the current session.
-* @ Functions. Contains a list of all the special functions, which have their names preceded by an ""@"" sign. Note: The @DIFF1(FIELD1,FIELD2) and @DIFF2(FIELD1,FIELD2) functions require that the two field types are the same (for example, both Integer or both Long or both Real).
-* Database Functions. If the flow includes a database connection, this selection lists the functions available from within that database, including user-defined functions (UDFs). For more information, see [Database functions](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/expressionbuild_database_functions.htmlexpressionbuild_database_functions).
-* Database Aggregates. If the flow includes a database connection, this selection lists the aggregation options available from within that database. These options are available in the Expression Builder of the Aggregate node.
-* Built-In Aggregates. Contains a list of the possible modes of aggregation that can be used.
-* Operators. Lists all the operators you can use when building expressions. Operators are also available from the buttons in the center of the dialog box.
-"
-0093065541AA4C3E90E47E3ACE89596155EA1735_1,0093065541AA4C3E90E47E3ACE89596155EA1735,"* All Functions. Contains a complete list of available CLEM functions.
-
-
-
-Double-click a function to insert it into the expression field at the position of the cursor.
-"
-C89753519B91F85DC9E0ED54A3248CD82D5F2A9E,C89753519B91F85DC9E0ED54A3248CD82D5F2A9E," The Expression Builder
-
-You can type CLEM expressions manually or use the Expression Builder, which displays a complete list of CLEM functions and operators as well as data fields from the current flow, allowing you to quickly build expressions without memorizing the exact names of fields or functions.
-
-In addition, the Expression Builder controls automatically add the proper quotes for fields and values, making it easier to create syntactically correct expressions.
-
-Notes:
-
-
-
-"
-F4F623D5A7C8913E227E962BD1F347B36AAB7B51,F4F623D5A7C8913E227E962BD1F347B36AAB7B51," Expressions and conditions
-
-CLEM expressions can return a result (used when deriving new values).
-
-For example:
-
-Weight * 2.2
-Age + 1
-sqrt(Signal-Echo)
-
-Or, they can evaluate true or false (used when selecting on a condition). For example:
-
-Drug = ""drugA""
-Age < 16
-not(PowerFlux) and Power > 2000
-
-You can combine operators and functions arbitrarily in CLEM expressions. For example:
-
-sqrt(abs(Signal))* max(T1, T2) + Baseline
-
-Brackets and operator precedence determine the order in which the expression is evaluated. In this example, the order of evaluation is:
-
-
-
-* abs(Signal) is evaluated, and sqrt is applied to its result
-* max(T1, T2) is evaluated
-* The two results are multiplied: x has higher precedence than +
-* Finally, Baseline is added to the result
-
-
-
-The descending order of precedence (that is, operations that are performed first to operations that are performed last) is as follows:
-
-
-
-* Function arguments
-* Function calls
-* xx
-* x / mod div rem
-* + –
-* > < >= <= /== == = /=
-
-
-
-If you want to override precedence, or if you're in any doubt of the order of evaluation, you can use parentheses to make it explicit. For example:
-
-sqrt(abs(Signal))* (max(T1, T2) + Baseline)
-"
-85F8B4292483C5747AB2436A2D5D5377F1F6CAB9,85F8B4292483C5747AB2436A2D5D5377F1F6CAB9," Viewing or selecting values
-
-You can view field values from the Expression Builder. Note that data must be fully instantiated in an Import or Type node to use this feature, so that storage, types, and values are known.
-
-To view values for a field from the Expression Builder, select the required field and then use the Value list or perform a search with the Find in column Value field to find values for the selected field. You can then double-click a value to insert it into the current expression or list.
-
-For flag and nominal fields, all defined values are listed. For continuous (numeric range) fields, the minimum and maximum values are displayed.
-"
-B69246113E589F088E8E1302B32B57720BD27720,B69246113E589F088E8E1302B32B57720BD27720," Fields
-
-Names in CLEM expressions that aren’t names of functions are assumed to be field names.
-
-You can write these simply as Power, val27, state_flag, and so on, but if the name begins with a digit or includes non-alphabetic characters, such as spaces (with the exception of the underscore), place the name within single quotation marks (for example, 'Power Increase', '2nd answer', '101', '$P-NextField').
-
-Note: Fields that are quoted but undefined in the data set will be misread as strings.
-"
-C528D240892080AECE146D29FB3496DDD0F1FD48_0,C528D240892080AECE146D29FB3496DDD0F1FD48," Find
-
-In the Expression Builder, you can search for fields, values, or functions.
-
-For example, to search for a value, place your cursor in the Find in column Value field and enter the text you want to search for.
-
-You can also search on special characters such as tabs or newline characters, classes or ranges of characters such as a through d, any digit or non-digit, and boundaries such as the beginning or end of a line. The following types of expressions are supported.
-
-
-
-Character matches
-
-Table 1. Character matches
-
- Characters Matches
-
- x The character x
- \\ The backslash character
- \0n The character with octal value 0n (0 <= n <= 7)
- \0nn The character with octal value 0nn (0 <= n <= 7)
- \0mnn The character with octal value 0mnn (0 <= m <= 3, 0 <= n <= 7)
- \xhh The character with hexadecimal value 0xhh
- \uhhhh The character with hexadecimal value 0xhhhh
- \t The tab character ('\u0009')
- \n The newline (line feed) character ('\u000A')
- \r The carriage-return character ('\u000D')
- \f The form-feed character ('\u000C')
- \a The alert (bell) character ('\u0007')
- \e The escape character ('\u001B')
- \cx The control character corresponding to x
-
-
-
-
-
-Matching character classes
-
-Table 2. Matching character classes
-
- Character classes Matches
-
- [abc] a, b, or c (simple class)
- [^abc] Any character except a, b, or c (subtraction)
- [a-zA-Z] a through z or A through Z, inclusive (range)
- [a-d[m-p]] a through d, or m through p (union). Alternatively this could be specified as [a-dm-p]
- [a-z&&[def]] a through z, and d, e, or f (intersection)
-"
-C528D240892080AECE146D29FB3496DDD0F1FD48_1,C528D240892080AECE146D29FB3496DDD0F1FD48," [a-z&&[^bc]] a through z, except for b and c (subtraction). Alternatively this could be specified as [ad-z]
- [a-z&&[^m-p]] a through z, and not m through p (subtraction). Alternatively this could be specified as [a-lq-z]
-
-
-
-
-
-Predefined character classes
-
-Table 3. Predefined character classes
-
- Predefined character classes Matches
-
- . Any character (may or may not match line terminators)
- \d Any digit: [0-9]
- \D A non-digit: [^0-9]
- \s A white space character: [ \t\n\x0B\f\r]
- \S A non-white space character: [^\s]
- \w A word character: [a-zA-Z_0-9]
- \W A non-word character: [^\w]
-
-
-
-
-
-Boundary matches
-
-Table 4. Boundary matches
-
- Boundary matchers Matches
-
- ^ The beginning of a line
- $ The end of a line
- \b A word boundary
- \B A non-word boundary
- \A The beginning of the input
-"
-C1324A359A58B4D399C10BC59AE94E7E0723836D,C1324A359A58B4D399C10BC59AE94E7E0723836D," Integers
-
-Integers are represented as a sequence of decimal digits.
-
-Optionally, you can place a minus sign (−) before the integer to denote a negative number (for example, 1234, 999, −77).
-
-The CLEM language handles integers of arbitrary precision. The maximum integer size depends on your platform. If the values are too large to be displayed in an integer field, changing the field type to Real usually restores the value.
-"
-D05F366AFC5726DC1A258EDC3689067381EFDECC,D05F366AFC5726DC1A258EDC3689067381EFDECC," About CLEM
-
-The Control Language for Expression Manipulation (CLEM) is a powerful language for analyzing and manipulating the data that streams through an SPSS Modeler flow. Data miners use CLEM extensively in flow operations to perform tasks as simple as deriving profit from cost and revenue data or as complex as transforming web log data into a set of fields and records with usable information.
-
-CLEM is used within SPSS Modeler to:
-
-
-
-* Compare and evaluate conditions on record fields
-* Derive values for new fields
-* Derive new values for existing fields
-* Reason about the sequence of records
-* Insert data from records into reports
-
-
-
-CLEM expressions are indispensable for data preparation in SPSS Modeler and can be used in a wide range of nodes—from record and field operations (Select, Balance, Filler) to plots and output (Analysis, Report, Table). For example, you can use CLEM in a Derive node to create a new field based on a formula such as ratio.
-
-CLEM expressions can also be used for global search and replace operations. For example, the expression @NULL(@FIELD) can be used in a Filler node to replace system-missing values with the integer value 0. (To replace user-missing values, also called blanks, use the @BLANK function.)
-
-More complex CLEM expressions can also be created. For example, you can derive new fields based on a conditional set of rules, such as a new value category created by using the following expressions: If: CardID = @OFFSET(CardID,1), Then: @OFFSET(ValueCategory,1), Else: 'exclude'.
-
-This example uses the @OFFSET function to say: If the value of the field CardID for a given record is the same as for the previous record, then return the value of the field named ValueCategory for the previous record. Otherwise, assign the string ""exclude."" In other words, if the CardIDs for adjacent records are the same, they should be assigned the same value category. (Records with the exclude string can later be culled using a Select node.)
-"
-B93F8A3A1CED22CF84C45B552D5040A4A17FDB60,B93F8A3A1CED22CF84C45B552D5040A4A17FDB60," Lists
-
-A list is an ordered sequence of elements, which may be of mixed type. Lists are enclosed in square brackets ([ ]).
-
-Examples of lists are [1 2 4 16] and [""abc"" ""def""] and [A1, A2, A3]. Lists are not used as the value of SPSS Modeler fields. They are used to provide arguments to functions, such as member and oneof.
-
-Notes:
-
-
-
-"
-9455A31E5D6C749F3028F9F5E5F758F713C09973_0,9455A31E5D6C749F3028F9F5E5F758F713C09973," CLEM operators
-
-This page lists the available CLEM language operators.
-
-
-
-CLEM language operators
-
-Table 1. CLEM language operators
-
- Operation Comments Precedence (see next section)
-
- or Used between two CLEM expressions. Returns a value of true if either is true or if both are true. 10
- and Used between two CLEM expressions. Returns a value of true if both are true. 9
- = Used between any two comparable items. Returns true if ITEM1 is equal to ITEM2. 7
- == Identical to =. 7
- /= Used between any two comparable items. Returns true if ITEM1 is not equal to ITEM2. 7
- /== Identical to /=. 7
- > Used between any two comparable items. Returns true if ITEM1 is strictly greater than ITEM2. 6
- >= Used between any two comparable items. Returns true if ITEM1 is greater than or equal to ITEM2. 6
- < Used between any two comparable items. Returns true if ITEM1 is strictly less than ITEM2 6
- <= Used between any two comparable items. Returns true if ITEM1 is less than or equal to ITEM2. 6
- &&=_0 Used between two integers. Equivalent to the Boolean expression INT1 && INT2 = 0. 6
- &&/=_0 Used between two integers. Equivalent to the Boolean expression INT1 && INT2 /= 0. 6
- + Adds two numbers: NUM1 + NUM2. 5
- >< Concatenates two strings; for example, STRING1 >< STRING2. 5
- - Subtracts one number from another: NUM1 - NUM2. Can also be used in front of a number: - NUM. 5
- * Used to multiply two numbers: NUM1 * NUM2. 4
- && Used between two integers. The result is the bitwise 'and' of the integers INT1 and INT2. 4
- && Used between two integers. The result is the bitwise 'and' of INT1 and the bitwise complement of INT2. 4
- Used between two integers. The result is the bitwise 'inclusive or' of INT1 and INT2. 4
-"
-9455A31E5D6C749F3028F9F5E5F758F713C09973_1,9455A31E5D6C749F3028F9F5E5F758F713C09973," Used in front of an integer. Produces the bitwise complement of INT. 4
- /& Used between two integers. The result is the bitwise 'exclusive or' of INT1 and INT2. 4
- INT1 << N Used between two integers. Produces the bit pattern of INT shifted left by N positions. 4
- INT1 >> N Used between two integers. Produces the bit pattern of INT shifted right by N positions. 4
- / Used to divide one number by another: NUM1 / NUM2. 4
- Used between two numbers: BASE ** POWER. Returns BASE raised to the power POWER. 3
-"
-185C42AB06DE9FF515DCD03213F5C4608C6FAEBF,185C42AB06DE9FF515DCD03213F5C4608C6FAEBF," Reals
-
-Real refers to a floating-point number. Reals are represented by one or more digits followed by a decimal point followed by one or more digits. CLEM reals are held in double precision.
-
-Optionally, you can place a minus sign (−) before the real to denote a negative number (for example, 1.234, 0.999, −77.001). Use the form e to express a real number in exponential notation (for example, 1234.0e5, 1.7e−2). When SPSS Modeler reads number strings from files and converts them automatically to numbers, numbers with no leading digit before the decimal point or with no digit after the point are accepted (for example, 999. or .11). However, these forms are illegal in CLEM expressions.
-
-Note: When referencing real numbers in CLEM expressions, a period must be used as the decimal separator, regardless of any settings for the current flow or locale. For example, specify
-
-Na > 0.6
-
-rather than
-
-Na > 0,6
-
-This applies even if a comma is selected as the decimal symbol in the flow properties and is consistent with the general guideline that code syntax should be independent of any specific locale or convention.
-"
-385DEC32600A9DED58FEDE3E98568FED789A400A,385DEC32600A9DED58FEDE3E98568FED789A400A," Strings
-
-Generally, you should enclose strings in double quotation marks. Examples of strings are ""c35product2"" and ""referrerID"".
-
-To indicate special characters in a string, use a backslash (for example, ""$65443""). (To indicate a backslash character, use a double backslash, \.) You can use single quotes around a string, but the result is indistinguishable from a quoted field ('referrerID'). See [String functions](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_function_ref_string.htmlclem_function_ref_string) for more information.
-"
-839B16AC73C000ECE7BAC7D50BAF6F7E37F2CAD9,839B16AC73C000ECE7BAC7D50BAF6F7E37F2CAD9," Time
-
-The CLEM language supports the time formats listed in this section.
-
-
-
-CLEM language time formats
-
-Table 1. CLEM language time formats
-
- Format Examples
-
- HHMMSS 120112, 010101, 221212
- HHMM 1223, 0745, 2207
- MMSS 5558, 0100
- HH:MM:SS 12:01:12, 01:01:01, 22:12:12
- HH:MM 12:23, 07:45, 22:07
- MM:SS 55:58, 01:00
- (H)H:(M)M:(S)S 12:1:12, 1:1:1, 22:12:12
- (H)H:(M)M 12:23, 7:45, 22:7
- (M)M:(S)S 55:58, 1:0
- HH.MM.SS 12.01.12, 01.01.01, 22.12.12
- HH.MM 12.23, 07.45, 22.07
- MM.SS 55.58, 01.00
- (H)H.(M)M.(S)S 12.1.12, 1.1.1, 22.12.12
-"
-F975B9964D088181CF34A1341083BC82053812D8,F975B9964D088181CF34A1341083BC82053812D8," Values and data types
-
-CLEM expressions are similar to formulas constructed from values, field names, operators, and functions. The simplest valid CLEM expression is a value or a field name.
-
-Examples of valid values are:
-
-3
-1.79
-'banana'
-
-Examples of field names are:
-
-Product_ID
-'$P-NextField'
-
-where Product is the name of a field from a market basket data set, '$P-NextField' is the name of a parameter, and the value of the expression is the value of the named field. Typically, field names start with a letter and may also contain digits and underscores (_). You can use names that don't follow these rules if you place the name within quotation marks. CLEM values can be any of the following:
-
-
-
-* Strings (for example, ""c1"", ""Type 2"", ""a piece of free text"")
-* Integers (for example, 12, 0, –189)
-* Real numbers (for example, 12.34, 0.0, –0.0045)
-* Date/time fields (for example, 05/12/2002, 12/05/2002, 12/05/02)
-
-
-
-It's also possible to use the following elements:
-
-
-
-* Character codes (for example, a or 3)
-* Lists of items (for example, [1 2 3], ['Type 1' 'Type 2'])
-
-
-
-Character codes and lists don't usually occur as field values. Typically, they're used as arguments of CLEM functions.
-"
-EE838EA978F9A0B0265A8D2B35FF2F64D00A1738,EE838EA978F9A0B0265A8D2B35FF2F64D00A1738," Collection node
-
-Collections are similar to histograms, but collections show the distribution of values for one numeric field relative to the values of another, rather than the occurrence of values for a single field. A collection is useful for illustrating a variable or field whose values change over time.
-
-Using 3-D graphing, you can also include a symbolic axis displaying distributions by category. Two-dimensional collections are shown as stacked bar charts, with overlays where used.
-"
-5A8AA187972BA8A711AC91447F668B233E580C8C_0,5A8AA187972BA8A711AC91447F668B233E580C8C," Cox node
-
-Cox Regression builds a predictive model for time-to-event data. The model produces a survival function that predicts the probability that the event of interest has occurred at a given time t for given values of the predictor variables. The shape of the survival function and the regression coefficients for the predictors are estimated from observed subjects; the model can then be applied to new cases that have measurements for the predictor variables.
-
-Note that information from censored subjects, that is, those that do not experience the event of interest during the time of observation, contributes usefully to the estimation of the model.
-
-Example. As part of its efforts to reduce customer churn, a telecommunications company is interested in modeling the time to churn in order to determine the factors that are associated with customers who are quick to switch to another service. To this end, a random sample of customers is selected, and their time spent as customers (whether or not they are still active customers) and various demographic fields are pulled from the database.
-
-Requirements. You need one or more input fields, exactly one target field, and you must specify a survival time field within the Cox node. The target field should be coded so that the ""false"" value indicates survival and the ""true"" value indicates that the event of interest has occurred; it must have a measurement level of Flag, with string or integer storage. (Storage can be converted using a Filler or Derive node if necessary. ) Fields set to Both or None are ignored. Fields used in the model must have their types fully instantiated. The survival time can be any numeric field. Note: On scoring a Cox Regression model, an error is reported if empty strings in categorical variables are used as input to model building. Avoid using empty strings as input.
-
-"
-5A8AA187972BA8A711AC91447F668B233E580C8C_1,5A8AA187972BA8A711AC91447F668B233E580C8C,"Dates & Times. Date & Time fields cannot be used to directly define the survival time; if you have Date & Time fields, you should use them to create a field containing survival times, based upon the difference between the date of entry into the study and the observation date.
-
-Kaplan-Meier Analysis. Cox regression can be performed with no input fields. This is equivalent to a Kaplan-Meier analysis.
-"
-67B99E436854F015A9DB19C775639BA4BB4D5F9B,67B99E436854F015A9DB19C775639BA4BB4D5F9B," CPLEX Optimization node
-
-With the CPLEX Optimization node, you can use complex mathematical (CPLEX) based optimization via an Optimization Programming Language (OPL) model file.
-
-For more information about CPLEX optimization and OPL, see the [IBM ILOG CPLEX Optimization Studio documentation](https://www.ibm.com/support/knowledgecenter/SSSA5P).
-
-When outputting the data generated by the CPLEX Optimization node, you can output the original data from the data sources together as single indexes, or as multiple dimensional indexes of the result.
-
-Note:
-
-
-
-* When running a flow containing a CPLEX Optimization node, the CPLEX library has a limitation of 1000 variables and 1000 constraints.
-"
-9FA71067981E4FC0D6F68A14C91C694DC4C2AF25,9FA71067981E4FC0D6F68A14C91C694DC4C2AF25," Data Asset Export node
-
-You can use the Data Asset Export node to write to remote data sources using connections or write data to a project (delimited or . sav).
-
-Double-click the node to open its properties. Various options are available, described as follows.
-
-After running the node, you can find the data at the export location you specified.
-"
-C70BB33E4E6792511DC4E7D88536017E64BCD0F1,C70BB33E4E6792511DC4E7D88536017E64BCD0F1," Data Asset node
-
-You can use the Data Asset import node to pull in data from remote data sources using connections or from your local computer. First, you must create the connection.
-
-Note for connections to a Planning Analytics database, you must choose a view (not a cube).
-
-You can also pull in data from a local data file ( .csv, .txt, .json, .xls, .xlsx, .sav, and .sas are supported). Only the first sheet is imported from spreadsheets. In the node's properties, under DATA, select one or more data files to upload. You can also simply drag-and-drop the data file from your local file system onto your canvas.
-
-Note: You can import a stream ( .str) into watsonx.ai that was created in SPSS Modeler Subscription or SPSS Modeler client. If the imported stream contains one or more import or export nodes, you'll be prompted to convert the nodes. See [Importing an SPSS Modeler stream](https://dataplatform.cloud.ibm.com/docs/content/wsd/migration.html).
-"
-7F4648FD3E7F8564C98CF142E0E09E23E8097A9E,7F4648FD3E7F8564C98CF142E0E09E23E8097A9E," Data Audit node
-
-The Data Audit node provides a comprehensive first look at the data you bring to SPSS Modeler, presented in an interactive, easy-to-read matrix that can be sorted and used to generate full-size graphs.
-
-When you run a Data Audit node, interactive output is generated that includes:
-
-
-
-* Information such as summary statistics, histograms, box plots, bar charts, pie charts, and more that may be useful in gaining a preliminary understanding of the data.
-* Information about outliers, extremes, and missing values.
-
-
-
-Figure 1. Data Audit node output example
-
-
-
-Figure 2. Data Audit node output example
-
-
-
-Figure 3. Data Audit node output example
-
-
-
-Figure 4. Data Audit node output example
-
-
-
-Figure 5. Data Audit node output example
-
-
-"
-1A5F15E64AABDCA9E2785588E76F3EBE22A1C426,1A5F15E64AABDCA9E2785588E76F3EBE22A1C426," Decision List node
-
-Decision List models identify subgroups or segments that show a higher or lower likelihood of a binary (yes or no) outcome relative to the overall sample.
-
-For example, you might look for customers who are least likely to churn or most likely to say yes to a particular offer or campaign. The Decision List Viewer gives you complete control over the model, enabling you to edit segments, add your own business rules, specify how each segment is scored, and customize the model in a number of other ways to optimize the proportion of hits across all segments. As such, it is particularly well-suited for generating mailing lists or otherwise identifying which records to target for a particular campaign. You can also use multiple mining tasks to combine modeling approaches—for example, by identifying high- and low-performing segments within the same model and including or excluding each in the scoring stage as appropriate.
-"
-4D299EFFF5B982097A5B9D48EA16041E4820A8BB,4D299EFFF5B982097A5B9D48EA16041E4820A8BB," Derive node
-
-One of the most powerful features in watsonx.ai is the ability to modify data values and derive new fields from existing data. During lengthy data mining projects, it is common to perform several derivations, such as extracting a customer ID from a string of Web log data or creating a customer lifetime value based on transaction and demographic data. All of these transformations can be performed, using a variety of field operations nodes.
-
-Several nodes provide the ability to derive new fields:
-
-
-
-* The Derive node modifies data values or creates new fields from one or more existing fields. It creates fields of type formula, flag, nominal, state, count, and conditional.
-* The Reclassify node transforms one set of categorical values to another. Reclassification is useful for collapsing categories or regrouping data for analysis.
-* The Binning node automatically creates new nominal (set) fields based on the values of one or more existing continuous (numeric range) fields. For example, you can transform a continuous income field into a new categorical field containing groups of income as deviations from the mean. After you create bins for the new field, you can generate a Derive node based on the cut points.
-* The Set to Flag node derives multiple flag fields based on the categorical values defined for one or more nominal fields.
-* The Restructure node converts a nominal or flag field into a group of fields that can be populated with the values of yet another field. For example, given a field named payment type, with values of credit, cash, and debit, three new fields would be created (credit, cash, debit), each of which might contain the value of the actual payment made.
-
-
-
-Tip: The Control Language for Expression Manipulation (CLEM) is a powerful tool you can use to analyze and manipulate the data used in your flows. For example, you might use CLEM in a node to derive values. For more information, see the [CLEM (legacy) language reference](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/clem_reference/clem_language_reference.html).
-"
-20CFE34D5494AB0AE2EF8B6F65396EDBF667F688,20CFE34D5494AB0AE2EF8B6F65396EDBF667F688," Space-Time-Boxes node
-
-Space-Time-Boxes (STB) are an extension of Geohashed spatial locations. More specifically, an STB is an alphanumeric string that represents a regularly shaped region of space and time.
-
-For example, the STB dr5ru7|2013-01-01 00:00:00|2013-01-01 00:15:00 is made up of the following three parts:
-
-
-
-* The geohash dr5ru7
-* The start timestamp 2013-01-01 00:00:00
-* The end timestamp 2013-01-01 00:15:00
-
-
-
-As an example, you could use space and time information to improve confidence that two entities are the same because they are virtually in the same place at the same time. Alternatively, you could improve the accuracy of relationship identification by showing that two entities are related due to their proximity in space and time.
-
-In the node properties, you can choose the Individual Records or Hangouts mode as appropriate for your requirements. Both modes require the same basic details, as follows:
-
-Latitude field. Select the field that identifies the latitude (in WGS84 coordinate system).
-
-Longitude field. Select the field that identifies the longitude (in WGS84 coordinate system).
-
-Timestamp field. Select the field that identifies the time or date.
-"
-909B04011F4C2211D6D945EC82217E3F89A79BD7,909B04011F4C2211D6D945EC82217E3F89A79BD7," Disabling nodes in a flow
-
-You can disable process nodes that have a single input so that they're ignored when the flow runs. This saves you from having to remove or bypass the node and means you can leave it connected to the remaining nodes.
-
-You can still open and edit the node settings; however, any changes will not take effect until you enable the node again.
-
-For example, you might use a Filter node to filter several fields, and then build models based on the reduced data set. If you want to also build the same models without fields being filtered, to see if they improve the model results, you can disable the Filter node. When you disable the Filter node, the connections to the modeling nodes pass directly through from the Derive node to the Type node.
-"
-338F12B976B522389F5FABE438280565490FB280,338F12B976B522389F5FABE438280565490FB280," Discriminant node
-
-Discriminant analysis builds a predictive model for group membership. The model is composed of a discriminant function (or, for more than two groups, a set of discriminant functions) based on linear combinations of the predictor variables that provide the best discrimination between the groups. The functions are generated from a sample of cases for which group membership is known; the functions can then be applied to new cases that have measurements for the predictor variables but have unknown group membership.
-
-Example. A telecommunications company can use discriminant analysis to classify customers into groups based on usage data. This allows them to score potential customers and target those who are most likely to be in the most valuable groups.
-
-Requirements. You need one or more input fields and exactly one target field. The target must be a categorical field (with a measurement level of Flag or Nominal) with string or integer storage. (Storage can be converted using a Filler or Derive node if necessary. ) Fields set to Both or None are ignored. Fields used in the model must have their types fully instantiated.
-
-Strengths. Discriminant analysis and Logistic Regression are both suitable classification models. However, Discriminant analysis makes more assumptions about the input fields—for example, they are normally distributed and should be continuous, and they give better results if those requirements are met, especially if the sample size is small.
-"
-5C597F82EC8484220A6FB3193DC78B878E8698F6_0,5C597F82EC8484220A6FB3193DC78B878E8698F6," Distinct node
-
-Duplicate records in a data set must be removed before data mining can begin. For example, in a marketing database, individuals may appear multiple times with different address or company information. You can use the Distinct node to find or remove duplicate records in your data, or to create a single, composite record from a group of duplicate records.
-
-To use the Distinct node, you must first define a set of key fields that determine when two records are considered to be duplicates.
-
-If you do not pick all your fields as key fields, then two ""duplicate"" records may not be truly identical because they can still differ in the values of the remaining fields. In this case, you can also define a sort order that is applied within each group of duplicate records. This sort order gives you fine control over which record is treated as the first within a group. Otherwise, all duplicates are considered to be interchangeable and any record might be selected. The incoming order of the records is not taken into account, so it doesn't help to use an upstream Sort node (see ""Sorting records within the Distinct node"" on this page).
-
-Mode. Specify whether to create a composite record, or to either include or exclude (discard) the first record.
-
-
-
-* Create a composite record for each group. Provides a way for you to aggregate non-numeric fields. Selecting this option makes the Composite tab available where you specify how to create the composite records.
-* Include only the first record in each group. Selects the first record from each group of duplicate records and discards the rest. The first record is determined by the sort order defined under the setting Within groups, sort records by, and not by the incoming order of the records.
-* Discard only the first record in each group. Discards the first record from each group of duplicate records and selects the remainder instead. The first record is determined by the sort order defined under the setting Within groups, sort records by, and not by the incoming order of the records. This option is useful for finding duplicates in your data so that you can examine them later in the flow.
-
-
-
-"
-5C597F82EC8484220A6FB3193DC78B878E8698F6_1,5C597F82EC8484220A6FB3193DC78B878E8698F6,"Key fields for grouping. Lists the field or fields used to determine whether records are identical. You can:
-
-
-
-* Add fields to this list using the field picker button.
-* Delete fields from the list by using the red X (remove) button.
-
-
-
-Within groups, sort records by. Lists the fields used to determine how records are sorted within each group of duplicates, and whether they are sorted in ascending or descending order. You can:
-
-
-
-* Add fields to this list using the field picker button.
-* Delete fields from the list by using the red X (remove) button.
-* Move fields using the up or down buttons, if you are sorting by more than one field.
-
-
-
-You must specify a sort order if you have chosen to include or exclude the first record in each group, and it matters to you which record is treated as the first.
-
-You may also want to specify a sort order if you have chosen to create a composite record, for certain options on the Composite tab.
-
-Specify whether, by default, records are sorted in Ascending or Descending order of the sort key values.
-"
-570AF2AAF268A3DF1D959D54A5BE1790DC43EAD5,570AF2AAF268A3DF1D959D54A5BE1790DC43EAD5," Distribution node
-
-A distribution graph or table shows the occurrence of symbolic (non-numeric) values, such as mortgage type or gender, in a dataset. A typical use of the Distribution node is to show imbalances in the data that you can rectify by using a Balance node before creating a model. You can automatically generate a Balance node using the Generate menu in the distribution graph or table window.
-
-Note: To show the occurrence of numeric values, you should use a Histogram node.
-"
-D5D31FDA0EEBFCDD87005ED54EBEDFD164FA073B,D5D31FDA0EEBFCDD87005ED54EBEDFD164FA073B," Charts node
-
-With the Charts node, you can launch the chart builder and create chart definitions to save with your flow. Then when you run the node, chart output is generated.
-
-The Charts node is available under the Graphs section on the node palette. After adding a Charts node to your flow, double-click it to open the properties pane. Then click Launch Chart Builder to open the chart builder and create one or more chart definitions to associate with the node. See [Visualizing your data](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/visualizations.html) for details about creating charts.
-
-Figure 1. Example charts
-
- Notes:
-
-
-
-* When you create a chart, it uses a sample of your data. After clicking Save and close to save the chart definition and return to your flow, the Charts node will then use all of your data when you run it.
-* Chart definitions are listed in the node properties panel, with icons available for editing them or removing them.
-"
-8C53BD47030C9BF4E7DBF1EA482CDED9CC8ABAD4,8C53BD47030C9BF4E7DBF1EA482CDED9CC8ABAD4," Ensemble node
-
-The Ensemble node combines two or more model nuggets to obtain more accurate predictions than can be gained from any of the individual models. By combining predictions from multiple models, limitations in individual models may be avoided, resulting in a higher overall accuracy. Models combined in this manner typically perform at least as well as the best of the individual models and often better.
-
-This combining of nodes happens automatically in the Auto Classifier and Auto Numeric automated modeling nodes.
-
-After using an Ensemble node, you can use an Analysis node or Evaluation node to compare the accuracy of the combined results with each of the input models. To do this, make sure the Filter out fields generated by ensembled models option is not selected in the Ensemble node settings.
-"
-4F733928B0F749FFDDF2E6DAEF646A0524C54D67,4F733928B0F749FFDDF2E6DAEF646A0524C54D67," Evaluation node
-
-The Evaluation node offers an easy way to evaluate and compare predictive models to choose the best model for your application. Evaluation charts show how models perform in predicting particular outcomes. They work by sorting records based on the predicted value and confidence of the prediction, splitting the records into groups of equal size (quantiles), and then plotting the value of the business criterion for each quantile, from highest to lowest. Multiple models are shown as separate lines in the plot.
-
-Outcomes are handled by defining a specific value or range of values as a hit. Hits usually indicate success of some sort (such as a sale to a customer) or an event of interest (such as a specific medical diagnosis). You can define hit criteria under the OPTIONS section of the node properties, or you can use the default hit criteria as follows:
-
-
-
-* Flag output fields are straightforward; hits correspond to true values.
-* For Nominal output fields, the first value in the set defines a hit.
-* For Continuous output fields, hits equal values greater than the midpoint of the field's range.
-
-
-
-There are six types of evaluation charts, each of which emphasizes a different evaluation criterion.
-
-Evaluation charts can also be cumulative, so that each point equals the value for the corresponding quantile plus all higher quantiles. Cumulative charts usually convey the overall performance of models better, whereas noncumulative charts often excel at indicating particular problem areas for models.
-
-Note: The Evaluation node doesn't support the use of commas in field names. If you have field names containing commas, you must either remove the commas or surround the field name in quotes.
-"
-F5A6D2AE83A7989E17704E69F0A640368C676594,F5A6D2AE83A7989E17704E69F0A640368C676594," Expression Builder
-
-You can type CLEM expressions manually or use the Expression Builder, which displays a complete list of CLEM functions and operators as well as data fields from the current flow, allowing you to quickly build expressions without memorizing the exact names of fields or functions.
-
-The Expression Builder controls automatically add the proper quotes for fields and values, making it easier to create syntactically correct expressions.
-
-Notes:
-
-
-
-* The Expression Builder is not supported in parameter settings.
-* If you want to change your datasource, before changing the source you should check that the Expression Builder can still support the functions you have selected. Because not all databases support all functions, you may encounter an error if you run against a new datasource.
-* You can run an SPSS Modeler desktop stream file ( .str) that contains database functions. But they aren't yet available in the Expression Builder user interface.
-
-
-
-Figure 1. Expression Builder
-
-
-"
-9DA0D100A88228AB463CB9B1B6CF1C051253911A_0,9DA0D100A88228AB463CB9B1B6CF1C051253911A," Selecting functions
-
-The function list displays all available CLEM functions and operators. Scroll to select a function from the list, or, for easier searching, use the drop-down list to display a subset of functions or operators.
-
-The following categories of functions are available:
-
-
-
-Table 1. CLEM functions for use with your data
-
- Function type Description
-
- Operators Lists all the operators you can use when building expressions. Operators are also available from the buttons.
- Information Used to gain insight into field values. For example, the function is_string returns true for all records whose type is a string.
- Conversion Used to construct new fields or convert storage type. For example, the function to_timestamp converts the selected field to a timestamp.
- Comparison Used to compare field values to each other or to a specified string. For example, <= is used to compare whether the values of two fields are lesser or equal.
- Logical Used to perform logical operations, such as if, then, else operations.
- Numeric Used to perform numeric calculations, such as the natural log of field values.
- Trigonometric Used to perform trigonometric calculations, such as the arccosine of a specified angle.
- Probability Returns probabilities that are based on various distributions, such as probability that a value from Student's t distribution is less than a specific value.
- Spatial Functions Used to perform spatial calculations on geospatial data.
- Bitwise Used to manipulate integers as bit patterns.
- Random Used to randomly select items or generate numbers.
- String Used to perform various operations on strings, such as stripchar, which allows you to remove a specified character.
- Date and time Used to perform various operations on date, time, and timestamp fields.
- Sequence Used to gain insight into the record sequence of a data set or perform operations that are based on that sequence.
- Global Used to access global values that are created by a Set Globals node. For example, @MEAN is used to refer to the mean average of all values for a field across the entire data set.
-"
-9DA0D100A88228AB463CB9B1B6CF1C051253911A_1,9DA0D100A88228AB463CB9B1B6CF1C051253911A," Blanks and Null Used to access, flag, and frequently fill user-specified blanks or system-missing values. For example, @BLANK(FIELD) is used to raise a true flag for records where blanks are present.
- Special Fields Used to denote the specific fields under examination. For example, @FIELD is used when deriving multiple fields.
-
-
-
-After you select a group of functions, double-click to insert the functions into the Expression box at the point indicated by the position of the cursor.
-"
-1B0AB9084C7DD9546BDC2F376B58E32C0ECFEE85,1B0AB9084C7DD9546BDC2F376B58E32C0ECFEE85," Extension Model node
-
-With the Extension Model node, you can run R scripts or Python for Spark scripts to build and score models.
-
-After adding the node to your canvas, double-click the node to open its properties.
-"
-6402316FEBFAD11A582D9C567811003F4BEE596A,6402316FEBFAD11A582D9C567811003F4BEE596A," Extension Export node
-
-You can use the Extension Export node to run R scripts or Python for Spark scripts to export data.
-"
-378F6A8306234029DE1642CBFF8E44ED6848BF74,378F6A8306234029DE1642CBFF8E44ED6848BF74," Extension Import node
-
-With the Extension Import node, you can run R scripts or Python for Spark scripts to import data.
-
-After adding the node to your canvas, double-click the node to open its properties.
-"
-97FA49D526786021CF325FF9AFF15646A8270B48,97FA49D526786021CF325FF9AFF15646A8270B48," Native Python APIs
-
-You can invoke native Python APIs from your scripts to interact with SPSS Modeler.
-
-The following APIs are supported.
-
-To see an example, you can download the sample stream [python-extension-str.zip](https://github.com/IBMDataScience/ModelerFlowsExamples/blob/main/samples) and import it into SPSS Modeler (from your project, click New asset, select SPSS Modeler, then select Local file). Then open the Extension node properties in the flow to see example syntax.
-"
-1D46D1240377AEA562F14A560CB9F24DF33EDF88,1D46D1240377AEA562F14A560CB9F24DF33EDF88," Extension Output node
-
-With the Extension Output node, you can run R scripts or Python for Spark scripts to produce output.
-
-After adding the node to your canvas, double-click the node to open its properties.
-"
-FF6C435ADBD62DE03C06CE4F90343D3CD04F9E8F,FF6C435ADBD62DE03C06CE4F90343D3CD04F9E8F," Extension Transform node
-
-With the Extension Transform node, you can take data from an SPSS Modeler flow and apply transformations to the data using R scripting or Python for Spark scripting.
-
-When the data has been modified, it's returned to the flow for further processing, model building, and model scoring. The Extension Transform node makes it possible to transform data using algorithms that are written in R or Python for Spark, and enables you to develop data transformation methods that are tailored to a particular problem.
-
-After adding the node to your canvas, double-click the node to open its properties.
-"
-63C0DFB695860E1DA7981D86959D998BEBC2DD03,63C0DFB695860E1DA7981D86959D998BEBC2DD03," Python for Spark scripts
-
-SPSS Modeler supports Python scripts for Apache Spark.
-
-Note:
-
-
-
-* Python nodes depend on the Spark environment.
-* Python scripts must use the Spark API because data is presented in the form of a Spark DataFrame.
-"
-17470065AFC59337B207721AB539B4622BBB3055,17470065AFC59337B207721AB539B4622BBB3055," Scripting with Python for Spark
-
-SPSS Modeler can run Python scripts using the Apache Spark framework to process data. This documentation provides the Python API description for the interfaces provided.
-
-The SPSS Modeler installation includes a Spark distribution.
-"
-7436F8933CA1DD44E05CD59F8E2CB13052763643,7436F8933CA1DD44E05CD59F8E2CB13052763643," Date, time, timestamp
-
-For operations that use date, time, or timestamp type data, the value is converted to the real value based on the value 1970-01-01:00:00:00 (using Coordinated Universal Time).
-
-For the date, the value represents the number of days, based on the value 1970-01-01 (using Coordinated Universal Time).
-
-For the time, the value represents the number of seconds at 24 hours.
-
-For the timestamp, the value represents the number of seconds based on the value 1970-01-01:00:00:00 (using Coordinated Universal Time).
-"
-835B998310E6E268F648D4AA28528190EBBB48CA,835B998310E6E268F648D4AA28528190EBBB48CA," Examples
-
-This section provides Python for Spark scripting examples.
-"
-AD61BC1B395A071D8850BC2405A8C311CFDC931F,AD61BC1B395A071D8850BC2405A8C311CFDC931F," Exceptions
-
-This section describes possible exception instances. They are all a subclass of python exception.
-"
-450CAAACD51ABDEDAB940CAFB4BC47EBFBCBBA67,450CAAACD51ABDEDAB940CAFB4BC47EBFBCBBA67," Data metadata
-
-This section describes how to set up the data model attributes based on pyspark.sql.StructField.
-"
-B98506EB96C587BDFD06CBF67617E25D9DAE8E60,B98506EB96C587BDFD06CBF67617E25D9DAE8E60," R scripts
-
-SPSS Modeler supports R scripts.
-"
-50636405C61E0AF7D2EE0EE31256C4CD0F6C5DED,50636405C61E0AF7D2EE0EE31256C4CD0F6C5DED," PCA/Factor node
-
-The PCA/Factor node provides powerful data-reduction techniques to reduce the complexity of your data. Two similar but distinct approaches are provided.
-
-
-
-* Principal components analysis (PCA) finds linear combinations of the input fields that do the best job of capturing the variance in the entire set of fields, where the components are orthogonal (perpendicular) to each other. PCA focuses on all variance, including both shared and unique variance.
-* Factor analysis attempts to identify underlying concepts, or factors, that explain the pattern of correlations within a set of observed fields. Factor analysis focuses on shared variance only. Variance that is unique to specific fields is not considered in estimating the model. Several methods of factor analysis are provided by the Factor/PCA node.
-
-
-
-For both approaches, the goal is to find a small number of derived fields that effectively summarize the information in the original set of fields.
-
-Requirements. Only numeric fields can be used in a PCA-Factor model. To estimate a factor analysis or PCA, you need one or more fields with the role set to Input fields. Fields with the role set to Target, Both, or None are ignored, as are non-numeric fields.
-
-Strengths. Factor analysis and PCA can effectively reduce the complexity of your data without sacrificing much of the information content. These techniques can help you build more robust models that execute more quickly than would be possible with the raw input fields.
-"
-9E1CDB994E758D43D9D8CDC5D88E2B5C7E0088D7_0,9E1CDB994E758D43D9D8CDC5D88E2B5C7E0088D7," Feature Selection node
-
-Data mining problems may involve hundreds, or even thousands, of fields that can potentially be used as inputs. As a result, a great deal of time and effort may be spent examining which fields or variables to include in the model. To narrow down the choices, the Feature Selection algorithm can be used to identify the fields that are most important for a given analysis. For example, if you are trying to predict patient outcomes based on a number of factors, which factors are the most likely to be important?
-
-Feature selection consists of three steps:
-
-
-
-* Screening. Removes unimportant and problematic inputs and records, or cases such as input fields with too many missing values or with too much or too little variation to be useful.
-* Ranking. Sorts remaining inputs and assigns ranks based on importance.
-* Selecting. Identifies the subset of features to use in subsequent models—for example, by preserving only the most important inputs and filtering or excluding all others.
-
-
-
-In an age where many organizations are overloaded with too much data, the benefits of feature selection in simplifying and speeding the modeling process can be substantial. By focusing attention quickly on the fields that matter most, you can reduce the amount of computation required; more easily locate small but important relationships that might otherwise be overlooked; and, ultimately, obtain simpler, more accurate, and more easily explainable models. By reducing the number of fields used in the model, you may find that you can reduce scoring times as well as the amount of data collected in future iterations.
-
-"
-9E1CDB994E758D43D9D8CDC5D88E2B5C7E0088D7_1,9E1CDB994E758D43D9D8CDC5D88E2B5C7E0088D7,"Example. A telephone company has a data warehouse containing information about responses to a special promotion by 5,000 of the company's customers. The data includes a large number of fields containing customers' ages, employment, income, and telephone usage statistics. Three target fields show whether or not the customer responded to each of three offers. The company wants to use this data to help predict which customers are most likely to respond to similar offers in the future.
-
-Requirements. A single target field (one with its role set to Target), along with multiple input fields that you want to screen or rank relative to the target. Both target and input fields can have a measurement level of Continuous (numeric range) or Categorical.
-"
-38D24508B131BEB6138652C2FD1E0380A001BB54_0,38D24508B131BEB6138652C2FD1E0380A001BB54," Filler node
-
-Filler nodes are used to replace field values and change storage. You can choose to replace values based on a specified CLEM condition, such as @BLANK(FIELD). Alternatively, you can choose to replace all blanks or null values with a specific value. Filler nodes are often used in conjunction with the Type node to replace missing values.
-
-Fill in fields. Select fields from the dataset whose values will be examined and replaced. The default behavior is to replace values depending on the specified Condition and Replace with expressions. You can also select an alternative method of replacement using the Replace options.
-
-Note: When selecting multiple fields to replace with a user-defined value, it is important that the field types are similar (all numeric or all symbolic).
-
-Replace. Select to replace the values of the selected field(s) using one of the following methods:
-
-
-
-* Based on condition. This option activates the Condition field and Expression Builder for you to create an expression used as a condition for replacement with the value specified.
-* Always. Replaces all values of the selected field. For example, you could use this option to convert the storage of income to a string using the following CLEM expression: (to_string(income)).
-* Blank values. Replaces all user-specified blank values in the selected field. The standard condition @BLANK(@FIELD) is used to select blanks. Note: You can define blanks using the Types tab of the source node or with a Type node.
-* Null values. Replaces all system null values in the selected field. The standard condition @NULL(@FIELD) is used to select nulls.
-* Blank and null values. Replaces both blank values and system nulls in the selected field. This option is useful when you are unsure whether or not nulls have been defined as missing values.
-
-
-
-Condition. This option is available when you have selected the Based on condition option. Use this text box to specify a CLEM expression for evaluating the selected fields. Click the calculator button to open the Expression Builder.
-
-"
-38D24508B131BEB6138652C2FD1E0380A001BB54_1,38D24508B131BEB6138652C2FD1E0380A001BB54,"Replace with. Specify a CLEM expression to give a new value to the selected fields. You can also replace the value with a null value by typing undef in the text box. Click the calculator button to open the Expression Builder.
-
-Note: When the field(s) selected are string, you should replace them with a string value. Using the default 0 or another numeric value as the replacement value for string fields will result in an error.Note that use of the following may change row order:
-
-
-
-* Running in a database via SQL pushback
-"
-EED64F79EBFDD957DEEBEC6261B3A70A248F3D35,EED64F79EBFDD957DEEBEC6261B3A70A248F3D35," Filter node
-
-You can rename or exclude fields at any point in a flow. For example, as a medical researcher, you may not be concerned about the potassium level (field-level data) of patients (record-level data); therefore, you can filter out the K (potassium) field. This can be done using a separate Filter node or using the Filter tab on an import or output node. The functionality is the same regardless of which node it's accessed from.
-
-
-
-* From import nodes, you can rename or filter fields as the data is read in.
-* Using a Filter node, you can rename or filter fields at any point in the flow.
-"
-B8522E9801281DD4118A5012ACF885A7EC2354E4,B8522E9801281DD4118A5012ACF885A7EC2354E4," GenLin node
-
-The generalized linear model expands the general linear model so that the dependent variable is linearly related to the factors and covariates via a specified link function. Moreover, the model allows for the dependent variable to have a non-normal distribution. It covers widely used statistical models, such as linear regression for normally distributed responses, logistic models for binary data, loglinear models for count data, complementary log-log models for interval-censored survival data, plus many other statistical models through its very general model formulation.
-
-Examples. A shipping company can use generalized linear models to fit a Poisson regression to damage counts for several types of ships constructed in different time periods, and the resulting model can help determine which ship types are most prone to damage.
-
-A car insurance company can use generalized linear models to fit a gamma regression to damage claims for cars, and the resulting model can help determine the factors that contribute the most to claim size.
-
-Medical researchers can use generalized linear models to fit a complementary log-log regression to interval-censored survival data to predict the time to recurrence for a medical condition.
-
-Generalized linear models work by building an equation that relates the input field values to the output field values. After the model is generated, you can use it to estimate values for new data. For each record, a probability of membership is computed for each possible output category. The target category with the highest probability is assigned as the predicted output value for that record.
-
-Requirements. You need one or more input fields and exactly one target field (which can have a measurement level of Continuous or Flag) with two or more categories. Fields used in the model must have their types fully instantiated.
-
-Strengths. The generalized linear model is extremely flexible, but the process of choosing the model structure is not automated and thus demands a level of familiarity with your data that is not required by ""black box"" algorithms.
-"
-CF6FE4E4058C24F0BEB94D379FB9E820C09456D2,CF6FE4E4058C24F0BEB94D379FB9E820C09456D2," GLE node
-
-The GLE model identifies the dependent variable that is linearly related to the factors and covariates via a specified link function. Moreover, the model allows for the dependent variable to have a non-normal distribution. It covers widely used statistical models, such as linear regression for normally distributed responses, logistic models for binary data, loglinear models for count data, complementary log-log models for interval-censored survival data, plus many other statistical models through its very general model formulation.
-
-Examples. A shipping company can use generalized linear models to fit a Poisson regression to damage counts for several types of ships constructed in different time periods, and the resulting model can help determine which ship types are most prone to damage.
-
-A car insurance company can use generalized linear models to fit a gamma regression to damage claims for cars, and the resulting model can help determine the factors that contribute the most to claim size.
-
-Medical researchers can use generalized linear models to fit a complementary log-log regression to interval-censored survival data to predict the time to recurrence for a medical condition.
-
-GLE models work by building an equation that relates the input field values to the output field values. After the model is generated, you can use it to estimate values for new data.
-
-For a categorical target, for each record, a probability of membership is computed for each possible output category. The target category with the highest probability is assigned as the predicted output value for that record.
-
-Requirements. You need one or more input fields and exactly one target field (which can have a measurement level of Continuous, Categorical, or Flag) with two or more categories. Fields used in the model must have their types fully instantiated.
-
-Note: When first creating a flow, you select which runtime to use. By default, flows use the IBM SPSS Modeler runtime. If you want to use native Spark algorithms instead of SPSS algorithms, select the Spark runtime. Properties for this node will vary depending on which runtime option you choose.
-"
-B561F461842BB0D185F097E0ADB8D3AC13266172_0,B561F461842BB0D185F097E0ADB8D3AC13266172," GLMM node
-
-This node creates a generalized linear mixed model (GLMM).
-
-Generalized linear mixed models extend the linear model so that:
-
-
-
-* The target is linearly related to the factors and covariates via a specified link function
-* The target can have a non-normal distribution
-* The observations can be correlated
-
-
-
-Generalized linear mixed models cover a wide variety of models, from simple linear regression to complex multilevel models for non-normal longitudinal data.
-
-Examples. The district school board can use a generalized linear mixed model to determine whether an experimental teaching method is effective at improving math scores. Students from the same classroom should be correlated since they are taught by the same teacher, and classrooms within the same school may also be correlated, so we can include random effects at school and class levels to account for different sources of variability.
-
-Medical researchers can use a generalized linear mixed model to determine whether a new anticonvulsant drug can reduce a patient's rate of epileptic seizures. Repeated measurements from the same patient are typically positively correlated so a mixed model with some random effects should be appropriate. The target field – the number of seizures – takes positive integer values, so a generalized linear mixed model with a Poisson distribution and log link may be appropriate.
-
-Executives at a cable provider of television, phone, and internet services can use a generalized linear mixed model to learn more about potential customers. Since possible answers have nominal measurement levels, the company analyst uses a generalized logit mixed model with a random intercept to capture correlation between answers to the service usage questions across service types (tv, phone, internet) within a given survey responder's answers.
-
-In the node properties, data structure options allow you to specify the structural relationships between records in your dataset when observations are correlated. If the records in the dataset represent independent observations, you don't need to specify any data structure options.
-
-"
-B561F461842BB0D185F097E0ADB8D3AC13266172_1,B561F461842BB0D185F097E0ADB8D3AC13266172,"Subjects. The combination of values of the specified categorical fields should uniquely define subjects within the dataset. For example, a single Patient ID field should be sufficient to define subjects in a single hospital, but the combination of Hospital ID and Patient ID may be necessary if patient identification numbers are not unique across hospitals. In a repeated measures setting, multiple observations are recorded for each subject, so each subject may occupy multiple records in the dataset.
-
-A subject is an observational unit that can be considered independent of other subjects. For example, the blood pressure readings from a patient in a medical study can be considered independent of the readings from other patients. Defining subjects becomes particularly important when there are repeated measurements per subject and you want to model the correlation between these observations. For example, you might expect that blood pressure readings from a single patient during consecutive visits to the doctor are correlated.
-
-All of the fields specified as subjects in the node properties are used to define subjects for the residual covariance structure, and provide the list of possible fields for defining subjects for random-effects covariance structures on the Random Effect Block.
-
-Repeated measures. The fields specified here are used to identify repeated observations. For example, a single variable Week might identify the 10 weeks of observations in a medical study, or Month and Day might be used together to identify daily observations over the course of a year.
-
-Define covariance groups by. The categorical fields specified here define independent sets of repeated effects covariance parameters; one for each category defined by the cross-classification of the grouping fields. All subjects have the same covariance type, and subjects within the same covariance grouping will have the same values for the parameters.
-
-Spatial covariance coordinates. The variables in this list specify the coordinates of the repeated observations when one of the spatial covariance types is selected for the repeated covariance type.
-
-Repeated covariance type. This specifies the covariance structure for the residuals. The available structures are:
-
-
-
-* First-order autoregressive (AR1)
-* Autoregressive moving average (1,1) (ARMA11)
-"
-B561F461842BB0D185F097E0ADB8D3AC13266172_2,B561F461842BB0D185F097E0ADB8D3AC13266172,"* Compound symmetry
-* Diagonal
-* Scaled identity
-* Spatial: Power
-* Spatial: Exponential
-* Spatial: Gaussian
-* Spatial: Linear
-* Spatial: Linear-log
-* Spatial: Spherical
-* Toeplitz
-"
-E6B5EAD096E68A255C5526ADD4C828534891C090,E6B5EAD096E68A255C5526ADD4C828534891C090," Gaussian Mixture node
-
-A Gaussian Mixture© model is a probabilistic model that assumes all the data points are generated from a mixture of a finite number of Gaussian distributions with unknown parameters.
-
-One can think of mixture models as generalizing k-means clustering to incorporate information about the covariance structure of the data as well as the centers of the latent Gaussians.^1^
-
-The Gaussian Mixture node in watsonx.ai exposes the core features and commonly used parameters of the Gaussian Mixture library. The node is implemented in Python.
-
-For more information about Gaussian Mixture modeling algorithms and parameters, see [Gaussian Mixture Models](http://scikit-learn.org/stable/modules/mixture.html) and [Gaussian Mixture](https://scikit-learn.org/stable/modules/generated/sklearn.mixture.GaussianMixture.html). ^2^
-
-^1^ [User Guide.](https://scikit-learn.org/stable/modules/mixture.html)Gaussian mixture models. Web. © 2007 - 2017. scikit-learn developers.
-
-^2^ [Scikit-learn: Machine Learning in Python](http://jmlr.csail.mit.edu/papers/v12/pedregosa11a.html), Pedregosa et al., JMLR 12, pp. 2825-2830, 2011.
-"
-A1FE4B06DB60F8A9C916FBEAF5C7482155BD62E3,A1FE4B06DB60F8A9C916FBEAF5C7482155BD62E3," HDBSCAN node
-
-Hierarchical Density-Based Spatial Clustering (HDBSCAN)© uses unsupervised learning to find clusters, or dense regions, of a data set.
-
-The HDBSCAN node in watsonx.ai exposes the core features and commonly used parameters of the HDBSCAN library. The node is implemented in Python, and you can use it to cluster your dataset into distinct groups when you don't know what those groups are at first. Unlike most learning methods in watsonx.ai, HDBSCAN models do not use a target field. This type of learning, with no target field, is called unsupervised learning. Rather than trying to predict an outcome, HDBSCAN tries to uncover patterns in the set of input fields. Records are grouped so that records within a group or cluster tend to be similar to each other, but records in different groups are dissimilar. The HDBSCAN algorithm views clusters as areas of high density separated by areas of low density. Due to this rather generic view, clusters found by HDBSCAN can be any shape, as opposed to k-means which assumes that clusters are convex shaped. Outlier points that lie alone in low-density regions are also marked. HDBSCAN also supports scoring of new samples.^1^
-
-To use the HDBSCAN node, you must set up an upstream Type node. The HDBSCAN node will read input values from the Type node (or from the Types of an upstream import node).
-
-For more information about HDBSCAN clustering algorithms, see the [HDBSCAN documentation](http://hdbscan.readthedocs.io/en/latest/). ^1^
-
-^1^ ""User Guide / Tutorial."" The hdbscan Clustering Library. Web. © 2016, Leland McInnes, John Healy, Steve Astels.
-"
-13F7C9C7B52EC7152F2B3D81B6EB42DB0319A6F4,13F7C9C7B52EC7152F2B3D81B6EB42DB0319A6F4," Histogram node
-
-Histogram nodes show the occurrence of values for numeric fields. They are often used to explore the data before manipulations and model building. Similar to the Distribution node, Histogram nodes are frequently used to reveal imbalances in the data.
-
-Note: To show the occurrence of values for symbolic fields, you should use a Distribution node.
-"
-00205C92C52FA28DB619EE1F9C8D76FE8564DB88,00205C92C52FA28DB619EE1F9C8D76FE8564DB88," History node
-
-History nodes are most often used for sequential data, such as time series data.
-
-They are used to create new fields containing data from fields in previous records. When using a History node, you may want to use data that is presorted by a particular field. You can use a Sort node to do this.
-"
-1BC1FE73146C70FA2A76241470314A4732EFD918,1BC1FE73146C70FA2A76241470314A4732EFD918," Isotonic-AS node
-
-Isotonic Regression belongs to the family of regression algorithms. The Isotonic-AS node in watsonx.ai is implemented in Spark.
-
-For details, see [Isotonic regression](https://spark.apache.org/docs/2.2.0/mllib-isotonic-regression.html). ^1^
-
-^1^ ""Regression - RDD-based API."" Apache Spark. MLlib: Main Guide. Web. 3 Oct 2017.
-"
-22A8F7539D1374784E9BF247B1370C430910F43D,22A8F7539D1374784E9BF247B1370C430910F43D," KDE node
-
-Kernel Density Estimation (KDE)© uses the Ball Tree or KD Tree algorithms for efficient queries, and walks the line between unsupervised learning, feature engineering, and data modeling.
-
-Neighbor-based approaches such as KDE are some of the most popular and useful density estimation techniques. KDE can be performed in any number of dimensions, though in practice high dimensionality can cause a degradation of performance. The KDE Modeling node and the KDE Simulation node in watsonx.ai expose the core features and commonly used parameters of the KDE library. The nodes are implemented in Python. ^1^
-
-To use a KDE node, you must set up an upstream Type node. The KDE node will read input values from the Type node (or from the Types of an upstream import node).
-
-The KDE Modeling node is available under the Modeling node palette. The KDE Modeling node generates a model nugget, and the nugget's scored values are kernel density values from the input data.
-
-The KDE Simulation node is available under the Outputs node palette. The KDE Simulation node generates a KDE Gen source node that can create some records that have the same distribution as the input data. In the KDE Gen node properties, you can specify how many records the node will create (default is 1) and generate a random seed.
-
-For more information about KDE, including examples, see the [KDE documentation](http://scikit-learn.org/stable/modules/density.htmlkernel-density-estimation). ^1^
-
-^1^ ""User Guide."" Kernel Density Estimation. Web. © 2007-2018, scikit-learn developers.
-"
-033E2B1CD9E006383C2D2C045B8834BFBBAB0F09,033E2B1CD9E006383C2D2C045B8834BFBBAB0F09," KDE Simulation node
-
-Kernel Density Estimation (KDE)© uses the Ball Tree or KD Tree algorithms for efficient queries, and walks the line between unsupervised learning, feature engineering, and data modeling.
-
-Neighbor-based approaches such as KDE are some of the most popular and useful density estimation techniques. KDE can be performed in any number of dimensions, though in practice high dimensionality can cause a degradation of performance. The KDE Modeling node and the KDE Simulation node in watsonx.ai expose the core features and commonly used parameters of the KDE library. The nodes are implemented in Python. ^1^
-
-To use a KDE node, you must set up an upstream Type node. The KDE node will read input values from the Type node (or from the Types of an upstream import node).
-
-The KDE Modeling node is available under the Modeling node palette. The KDE Modeling node generates a model nugget, and the nugget's scored values are kernel density values from the input data.
-
-The KDE Simulation node is available under the Outputs node palette. The KDE Simulation node generates a KDE Gen source node that can create some records that have the same distribution as the input data. In the KDE Gen node properties, you can specify how many records the node will create (default is 1) and generate a random seed.
-
-For more information about KDE, including examples, see the [KDE documentation](http://scikit-learn.org/stable/modules/density.htmlkernel-density-estimation). ^1^
-
-^1^ ""User Guide."" Kernel Density Estimation. Web. © 2007-2018, scikit-learn developers.
-"
-13A1FF3338F4AC1EB2CF3FF6781283B49AC8B5A6,13A1FF3338F4AC1EB2CF3FF6781283B49AC8B5A6," K-Means node
-
-The K-Means node provides a method of cluster analysis. It can be used to cluster the dataset into distinct groups when you don't know what those groups are at the beginning. Unlike most learning methods in SPSS Modeler, K-Means models do not use a target field. This type of learning, with no target field, is called unsupervised learning. Instead of trying to predict an outcome, K-Means tries to uncover patterns in the set of input fields. Records are grouped so that records within a group or cluster tend to be similar to each other, but records in different groups are dissimilar.
-
-K-Means works by defining a set of starting cluster centers derived from data. It then assigns each record to the cluster to which it is most similar, based on the record's input field values. After all cases have been assigned, the cluster centers are updated to reflect the new set of records assigned to each cluster. The records are then checked again to see whether they should be reassigned to a different cluster, and the record assignment/cluster iteration process continues until either the maximum number of iterations is reached, or the change between one iteration and the next fails to exceed a specified threshold.
-
-Note: The resulting model depends to a certain extent on the order of the training data. Reordering the data and rebuilding the model may lead to a different final cluster model.
-
-Requirements. To train a K-Means model, you need one or more fields with the role set to Input. Fields with the role set to Output, Both, or None are ignored.
-
-Strengths. You do not need to have data on group membership to build a K-Means model. The K-Means model is often the fastest method of clustering for large datasets.
-"
-DCE39CA6C888CA6D5CF3F9B9D18D06FD3BD2DFBE,DCE39CA6C888CA6D5CF3F9B9D18D06FD3BD2DFBE," K-Means-AS node
-
-K-Means is one of the most commonly used clustering algorithms. It clusters data points into a predefined number of clusters. The K-Means-AS node in SPSS Modeler is implemented in Spark.
-
-See [K-Means Algorithms](https://spark.apache.org/docs/2.2.0/ml-clustering.html) for more details.^1^
-
-Note that the K-Means-AS node performs one-hot encoding automatically for categorical variables.
-
-^1^ ""Clustering."" Apache Spark. MLlib: Main Guide. Web. 3 Oct 2017.
-"
-1DD1ED59E93DA4F6576E7EB1E420213AB34DD1DD,1DD1ED59E93DA4F6576E7EB1E420213AB34DD1DD," KNN node
-
-Nearest Neighbor Analysis is a method for classifying cases based on their similarity to other cases. In machine learning, it was developed as a way to recognize patterns of data without requiring an exact match to any stored patterns, or cases. Similar cases are near each other and dissimilar cases are distant from each other. Thus, the distance between two cases is a measure of their dissimilarity.
-
-Cases that are near each other are said to be ""neighbors."" When a new case (holdout) is presented, its distance from each of the cases in the model is computed. The classifications of the most similar cases – the nearest neighbors – are tallied and the new case is placed into the category that contains the greatest number of nearest neighbors.
-
-You can specify the number of nearest neighbors to examine; this value is called k. The pictures show how a new case would be classified using two different values of k. When k = 5, the new case is placed in category 1 because a majority of the nearest neighbors belong to category 1. However, when k = 9, the new case is placed in category 0 because a majority of the nearest neighbors belong to category 0.
-
-Nearest neighbor analysis can also be used to compute values for a continuous target. In this situation, the average or median target value of the nearest neighbors is used to obtain the predicted value for the new case.
-"
-F965BE0F67B8B3C26BE38939A33FA8AB74AEA4CC_0,F965BE0F67B8B3C26BE38939A33FA8AB74AEA4CC," Kohonen node
-
-Kohonen networks are a type of neural network that perform clustering, also known as a knet or a self-organizing map. This type of network can be used to cluster the dataset into distinct groups when you don't know what those groups are at the beginning. Records are grouped so that records within a group or cluster tend to be similar to each other, and records in different groups are dissimilar.
-
-The basic units are neurons, and they are organized into two layers: the input layer and the output layer (also called the output map). All of the input neurons are connected to all of the output neurons, and these connections have strengths, or weights, associated with them. During training, each unit competes with all of the others to ""win"" each record.
-
-The output map is a two-dimensional grid of neurons, with no connections between the units.
-
-Input data is presented to the input layer, and the values are propagated to the output layer. The output neuron with the strongest response is said to be the winner and is the answer for that input.
-
-Initially, all weights are random. When a unit wins a record, its weights (along with those of other nearby units, collectively referred to as a neighborhood) are adjusted to better match the pattern of predictor values for that record. All of the input records are shown, and weights are updated accordingly. This process is repeated many times until the changes become very small. As training proceeds, the weights on the grid units are adjusted so that they form a two-dimensional ""map"" of the clusters (hence the term self-organizing map).
-
-When the network is fully trained, records that are similar should be close together on the output map, whereas records that are vastly different will be far apart.
-
-"
-F965BE0F67B8B3C26BE38939A33FA8AB74AEA4CC_1,F965BE0F67B8B3C26BE38939A33FA8AB74AEA4CC,"Unlike most learning methods in watsonx.ai, Kohonen networks do not use a target field. This type of learning, with no target field, is called unsupervised learning. Instead of trying to predict an outcome, Kohonen nets try to uncover patterns in the set of input fields. Usually, a Kohonen net will end up with a few units that summarize many observations (strong units), and several units that don't really correspond to any of the observations (weak units). The strong units (and sometimes other units adjacent to them in the grid) represent probable cluster centers.
-
-Another use of Kohonen networks is in dimension reduction. The spatial characteristic of the two-dimensional grid provides a mapping from the k original predictors to two derived features that preserve the similarity relationships in the original predictors. In some cases, this can give you the same kind of benefit as factor analysis or PCA.
-
-Note that the method for calculating default size of the output grid is different from older versions of SPSS Modeler. The method will generally produce smaller output layers that are faster to train and generalize better. If you find that you get poor results with the default size, try increasing the size of the output grid on the Expert tab.
-
-Requirements. To train a Kohonen net, you need one or more fields with the role set to Input. Fields with the role set to Target, Both, or None are ignored.
-
-Strengths. You do not need to have data on group membership to build a Kohonen network model. You don't even need to know the number of groups to look for. Kohonen networks start with a large number of units, and as training progresses, the units gravitate toward the natural clusters in the data. You can look at the number of observations captured by each unit in the model nugget to identify the strong units, which can give you a sense of the appropriate number of clusters.
-"
-67241853FC2471C6C0719F1B98E40625358B2E19,67241853FC2471C6C0719F1B98E40625358B2E19," Reading in source text
-
-You can use the Language Identifier node to identify the natural language of a text field within your source data. The output of this node is a derived field that contains the detected language code.
-
-
-
-Data for text mining can be in any of the standard formats that are used by SPSS Modeler flows, including databases or other ""rectangular"" formats that represent data in rows and columns.
-
-
-
-"
-FC8006009802AE14770BE53062787D8A392B0070,FC8006009802AE14770BE53062787D8A392B0070," Linear node
-
-Linear regression is a common statistical technique for classifying records based on the values of numeric input fields. Linear regression fits a straight line or surface that minimizes the discrepancies between predicted and actual output values.
-
-Requirements. Only numeric fields can be used in a linear regression model. You must have exactly one target field (with the role set to Target) and one or more predictors (with the role set to Input). Fields with a role of Both or None are ignored, as are non-numeric fields. (If necessary, non-numeric fields can be recoded using a Derive node.)
-
-Strengths. Linear regression models are relatively simple and give an easily interpreted mathematical formula for generating predictions. Because linear regression is a long-established statistical procedure, the properties of these models are well understood. Linear models are also typically very fast to train. The Linear node provides methods for automatic field selection in order to eliminate nonsignificant input fields from the equation.
-
-Tip: In cases where the target field is categorical rather than a continuous range, such as yes/no or churn/don't churn, logistic regression can be used as an alternative. Logistic regression also provides support for non-numeric inputs, removing the need to recode these fields. Note: When first creating a flow, you select which runtime to use. By default, flows use the IBM SPSS Modeler runtime. If you want to use native Spark algorithms instead of SPSS algorithms, select the Spark runtime. Properties for this node will vary depending on which runtime option you choose.
-"
-2D9ACE87F4859BF7EF8CDF4EBBF8307C51034471,2D9ACE87F4859BF7EF8CDF4EBBF8307C51034471," Linear-AS node
-
-Linear regression is a common statistical technique for classifying records based on the values of numeric input fields. Linear regression fits a straight line or surface that minimizes the discrepancies between predicted and actual output values.
-
-Requirements. Only numeric fields and categorical predictors can be used in a linear regression model. You must have exactly one target field (with the role set to Target) and one or more predictors (with the role set to Input). Fields with a role of Both or None are ignored, as are non-numeric fields. (If necessary, non-numeric fields can be recoded using a Derive node.)
-
-Strengths. Linear regression models are relatively simple and give an easily interpreted mathematical formula for generating predictions. Because linear regression is a long-established statistical procedure, the properties of these models are well understood. Linear models are also typically very fast to train. The Linear node provides methods for automatic field selection in order to eliminate non-significant input fields from the equation.
-
-Note: In cases where the target field is categorical rather than a continuous range, such as yes/no or churn/don't churn, logistic regression can be used as an alternative. Logistic regression also provides support for non-numeric inputs, removing the need to recode these fields.
-"
-DE0C1913D6D770641762ED518FEFE8FFFC5A1F13_0,DE0C1913D6D770641762ED518FEFE8FFFC5A1F13," Logistic node
-
-Logistic regression, also known as nominal regression, is a statistical technique for classifying records based on values of input fields. It is analogous to linear regression but takes a categorical target field instead of a numeric one. Both binomial models (for targets with two discrete categories) and multinomial models (for targets with more than two categories) are supported.
-
-Logistic regression works by building a set of equations that relate the input field values to the probabilities associated with each of the output field categories. After the model is generated, you can use it to estimate probabilities for new data. For each record, a probability of membership is computed for each possible output category. The target category with the highest probability is assigned as the predicted output value for that record.
-
-Binomial example. A telecommunications provider is concerned about the number of customers it is losing to competitors. Using service usage data, you can create a binomial model to predict which customers are liable to transfer to another provider and customize offers so as to retain as many customers as possible. A binomial model is used because the target has two distinct categories (likely to transfer or not).
-
-Note: For binomial models only, string fields are limited to eight characters. If necessary, longer strings can be recoded using a Reclassify node or by using the Anonymize node.
-
-Multinomial example. A telecommunications provider has segmented its customer base by service usage patterns, categorizing the customers into four groups. Using demographic data to predict group membership, you can create a multinomial model to classify prospective customers into groups and then customize offers for individual customers.
-
-Requirements. One or more input fields and exactly one categorical target field with two or more categories. For a binomial model the target must have a measurement level of Flag. For a multinomial model the target can have a measurement level of Flag, or of Nominal with two or more categories. Fields set to Both or None are ignored. Fields used in the model must have their types fully instantiated.
-
-"
-DE0C1913D6D770641762ED518FEFE8FFFC5A1F13_1,DE0C1913D6D770641762ED518FEFE8FFFC5A1F13,"Strengths. Logistic regression models are often quite accurate. They can handle symbolic and numeric input fields. They can give predicted probabilities for all target categories so that a second-best guess can easily be identified. Logistic models are most effective when group membership is a truly categorical field; if group membership is based on values of a continuous range field (for example, high IQ versus low IQ), you should consider using linear regression to take advantage of the richer information offered by the full range of values. Logistic models can also perform automatic field selection, although other approaches such as tree models or Feature Selection might do this more quickly on large datasets. Finally, since logistic models are well understood by many analysts and data miners, they may be used by some as a baseline against which other modeling techniques can be compared.
-
-When processing large datasets, you can improve performance noticeably by disabling the likelihood-ratio test, an advanced output option.
-"
-A9E9D62E92156CEBC0D4619CDE322AF48CACE913,A9E9D62E92156CEBC0D4619CDE322AF48CACE913," LSVM node
-
-With the LSVM node, you can use a linear support vector machine to classify data. LSVM is particularly suited for use with wide datasets--that is, those with a large number of predictor fields. You can use the default settings on the node to produce a basic model relatively quickly, or you can use the build options to experiment with different settings.
-
-The LSVM node is similar to the SVM node, but it is linear and is better at handling a large number of records.
-
-After the model is built, you can:
-
-
-
-* Browse the model nugget to display the relative importance of the input fields in building the model.
-* Append a Table node to the model nugget to view the model output.
-
-
-
-Example. A medical researcher has obtained a dataset containing characteristics of a number of human cell samples extracted from patients who were believed to be at risk of developing cancer. Analysis of the original data showed that many of the characteristics differed significantly between benign and malignant samples. The researcher wants to develop an LSVM model that can use the values of similar cell characteristics in samples from other patients to give an early indication of whether their samples might be benign or malignant.
-"
-774FD49C617DAC62F48EB31E08757E0AEC3D1282,774FD49C617DAC62F48EB31E08757E0AEC3D1282," Matrix node
-
-Use the Matrix to create a table that shows relationships between fields. It is most commonly used to show the relationship between two categorical fields (flag, nominal, or ordinal), but it can also be used to show relationships between continuous (numeric range) fields.
-"
-7B586E10794F26EA2654A7F7C34EC9EA48C8BFD4,7B586E10794F26EA2654A7F7C34EC9EA48C8BFD4," Means node
-
-The Means node compares the means between independent groups or between pairs of related fields to test whether a significant difference exists. For example, you can compare mean revenues before and after running a promotion or compare revenues from customers who didn't receive the promotion with those who did.
-
-You can compare means in two different ways, depending on your data:
-
-
-
-"
-6647035446FC3A28586EBABC619D10DB5FE3F4FD,6647035446FC3A28586EBABC619D10DB5FE3F4FD," Merge node
-
-The function of a Merge node is to take multiple input records and create a single output record containing all or some of the input fields. This is a useful operation when you want to merge data from different sources, such as internal customer data and purchased demographic data.
-
-You can merge data in the following ways.
-
-
-
-* Merge by Order concatenates corresponding records from all sources in the order of input until the smallest data source is exhausted. It is important if using this option that you have sorted your data using a Sort node.
-* Merge using a Key field, such as Customer ID, to specify how to match records from one data source with records from the other(s). Several types of joins are possible, including inner join, full outer join, partial outer join, and anti-join.
-"
-61E8DF28E1A79B4BBA03CDA39F350BE5E55DAC7B,61E8DF28E1A79B4BBA03CDA39F350BE5E55DAC7B," Functions available for missing values
-
-Different methods are available for dealing with missing values in your data. You may choose to use functionality available in Data Refinery or in nodes.
-"
-0E5C87704E816097FF9E649620A1818798B5DB3F,0E5C87704E816097FF9E649620A1818798B5DB3F," Handling fields with missing values
-
-If the majority of missing values are concentrated in a small number of fields, you can address them at the field level rather than at the record level. This approach also allows you to experiment with the relative importance of particular fields before deciding on an approach for handling missing values. If a field is unimportant in modeling, it probably isn't worth keeping, regardless of how many missing values it has.
-
-For example, a market research company may collect data from a general questionnaire containing 50 questions. Two of the questions address age and political persuasion, information that many people are reluctant to give. In this case, Age and Political_persuasion have many missing values.
-"
-D5FAFC625D1A1D0793D9521351E9B59A04AF00E9_0,D5FAFC625D1A1D0793D9521351E9B59A04AF00E9," Missing data values
-
-During the data preparation phase of data mining, you will often want to replace missing values in the data.
-
-Missing values are values in the data set that are unknown, uncollected, or incorrectly entered. Usually, such values aren't valid for their fields. For example, the field Sex should contain the values M and F. If you discover the values Y or Z in the field, you can safely assume that such values aren't valid and should therefore be interpreted as blanks. Likewise, a negative value for the field Age is meaningless and should also be interpreted as a blank. Frequently, such obviously wrong values are purposely entered, or fields are left blank, during a questionnaire to indicate a nonresponse. At times, you may want to examine these blanks more closely to determine whether a nonresponse, such as the refusal to give one's age, is a factor in predicting a specific outcome.
-
-Some modeling techniques handle missing data better than others. For example, the [C5.0 node](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/c50.html) and the [Apriori node](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/apriori.html) cope well with values that are explicitly declared as ""missing"" in a [Type node](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/type.html). Other modeling techniques have trouble dealing with missing values and experience longer training times, resulting in less-accurate models.
-
-There are several types of missing values recognized by :
-
-
-
-* Null or system-missing values. These are nonstring values that have been left blank in the database or source file and have not been specifically defined as ""missing"" in an [Import](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/_nodes_import.html) or Type node. System-missing values are displayed as $null$. Note that empty strings are not considered nulls in , although they may be treated as nulls by certain databases.
-"
-D5FAFC625D1A1D0793D9521351E9B59A04AF00E9_1,D5FAFC625D1A1D0793D9521351E9B59A04AF00E9,"* Empty strings and white space. Empty string values and white space (strings with no visible characters) are treated as distinct from null values. Empty strings are treated as equivalent to white space for most purposes. For example, if you select the option to treat white space as blanks in an Import or Type node, this setting applies to empty strings as well.
-* Blank or user-defined missing values. These are values such as unknown, 99, or –1 that are explicitly defined in an Import node or Type node as missing. Optionally, you can also choose to treat nulls and white space as blanks, which allows them to be flagged for special treatment and to be excluded from most calculations. For example, you can use the @BLANK function to treat these values, along with other types of missing values, as blanks.
-
-
-
-Reading in mixed data. Note that when you're reading in fields with numeric storage (either integer, real, time, timestamp, or date), any non-numeric values are set to null or system missing. This is because, unlike some applications, doesn't allow mixed storage types within a field. To avoid this, you should read in any fields with mixed data as strings by changing the storage type in the Import node or external application as necessary.
-
-Reading empty strings from Oracle. When reading from or writing to an Oracle database, be aware that, unlike and unlike most other databases, Oracle treats and stores empty string values as equivalent to null values. This means that the same data extracted from an Oracle database may behave differently than when extracted from a file or another database, and the data may return different results.
-"
-FE9FF9F5CC449798C00D008182F55BDAA91E546C,FE9FF9F5CC449798C00D008182F55BDAA91E546C," Handling records with missing values
-
-If the majority of missing values are concentrated in a small number of records, you can just exclude those records. For example, a bank usually keeps detailed and complete records on its loan customers.
-
-If, however, the bank is less restrictive in approving loans for its own staff members, data gathered for staff loans is likely to have several blank fields. In such a case, there are two options for handling these missing values:
-
-
-
-"
-3BA46A09CF64CE6120BE65C44614995B50B67DA1,3BA46A09CF64CE6120BE65C44614995B50B67DA1," Handling records with system missing values
-"
-01C8222216B795904018497993CC5E44D51A3B35,01C8222216B795904018497993CC5E44D51A3B35," Handling missing values
-
-You should decide how to treat missing values in light of your business or domain knowledge. To ease training time and increase accuracy, you may want to remove blanks from your data set. On the other hand, the presence of blank values may lead to new business opportunities or additional insights.
-
-In choosing the best technique, you should consider the following aspects of your data:
-
-
-
-* Size of the data set
-* Number of fields containing blanks
-* Amount of missing information
-
-
-
-In general terms, there are two approaches you can follow:
-
-
-
-"
-6576530EC5D705B8BF323F6C459C32A87AE3F9A4,6576530EC5D705B8BF323F6C459C32A87AE3F9A4," MultiLayerPerceptron-AS node
-
-Multilayer perceptron is a classifier based on the feedforward artificial neural network and consists of multiple layers.
-
-Each layer is fully connected to the next layer in the network. See [Multilayer Perceptron Classifier (MLPC)](https://spark.apache.org/docs/latest/ml-classification-regression.htmlmultilayer-perceptron-classifier) for details.^1^
-
-The MultiLayerPerceptron-AS node in watsonx.ai is implemented in Spark. To use a this node, you must set up an upstream Type node. The MultiLayerPerceptron-AS node will read input values from the Type node (or from the Types of an upstream import node).
-
-^1^ ""Multilayer perceptron classifier."" Apache Spark. MLlib: Main Guide. Web. 5 Oct 2018.
-"
-5F0FC43F57AB9AF130DEA6A795E1E81A6AA95ACC,5F0FC43F57AB9AF130DEA6A795E1E81A6AA95ACC," Multiplot node
-
-A multiplot is a special type of plot that displays multiple Y fields over a single X field. The Y fields are plotted as colored lines and each is equivalent to a Plot node with Style set to Line and X Mode set to Sort. Multiplots are useful when you have time sequence data and want to explore the fluctuation of several variables over time.
-"
-9F06DF311976F336CB3164B08D5DA7D6F93419E2,9F06DF311976F336CB3164B08D5DA7D6F93419E2," Neural Net node
-
-A neural network can approximate a wide range of predictive models with minimal demands on model structure and assumption. The form of the relationships is determined during the learning process. If a linear relationship between the target and predictors is appropriate, the results of the neural network should closely approximate those of a traditional linear model. If a nonlinear relationship is more appropriate, the neural network will automatically approximate the ""correct"" model structure.
-
-The trade-off for this flexibility is that the neural network is not easily interpretable. If you are trying to explain an underlying process that produces the relationships between the target and predictors, it would be better to use a more traditional statistical model. However, if model interpretability is not important, you can obtain good predictions using a neural network.
-
-Field requirements. There must be at least one Target and one Input. Fields set to Both or None are ignored. There are no measurement level restrictions on targets or predictors (inputs).
-
-The initial weights assigned to neural networks during model building, and therefore the final models produced, depend on the order of the fields in the data. Watsonx.ai automatically sorts data by field name before presenting it to the neural network for training. This means that explicitly changing the order of the fields in the data upstream will not affect the generated neural net models when a random seed is set in the model builder. However, changing the input field names in a way that changes their sort order will produce different neural network models, even with a random seed set in the model builder. The model quality will not be affected significantly given different sort order of field names.
-"
-9933646421686556C9AE8459EE2E51ED9DAB1C33,9933646421686556C9AE8459EE2E51ED9DAB1C33," Disabling or caching nodes in a flow
-
-You can disable a node so it's ignored when the flow runs. And you can set up a cache on a node.
-"
-759B6927189FEA6BE3124BF79FA527873CB84EA6,759B6927189FEA6BE3124BF79FA527873CB84EA6," One-Class SVM node
-
-The One-Class SVM© node uses an unsupervised learning algorithm. The node can be used for novelty detection. It will detect the soft boundary of a given set of samples, to then classify new points as belonging to that set or not. This One-Class SVM modeling node is implemented in Python and requires the scikit-learn© Python library.
-
-For details about the scikit-learn library, see [Support Vector Machines](http://scikit-learn.org/stable/modules/svm.htmlsvm-outlier-detection)^1^.
-
-The Modeling tab on the palette contains the One-Class SVM node and other Python nodes.
-
-Note: One-Class SVM is used for usupervised outlier and novelty detection. In most cases, we recommend using a known, ""normal"" dataset to build the model so the algorithm can set a correct boundary for the given samples. Parameters for the model – such as nu, gamma, and kernel – impact the result significantly. So you may need to experiment with these options until you find the optimal settings for your situation.
-
-^1^Smola, Schölkopf. ""A Tutorial on Support Vector Regression."" Statistics and Computing Archive, vol. 14, no. 3, August 2004, pp. 199-222. (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.114.4288)
-"
-98FC8E9A3380E4593D9BF08B78CE6A7797C0204B,98FC8E9A3380E4593D9BF08B78CE6A7797C0204B," Partition node
-
-Partition nodes are used to generate a partition field that splits the data into separate subsets or samples for the training, testing, and validation stages of model building. By using one sample to generate the model and a separate sample to test it, you can get a good indication of how well the model will generalize to larger datasets that are similar to the current data.
-
-The Partition node generates a nominal field with the role set to Partition. Alternatively, if an appropriate field already exists in your data, it can be designated as a partition using a Type node. In this case, no separate Partition node is required. Any instantiated nominal field with two or three values can be used as a partition, but flag fields cannot be used.
-
-Multiple partition fields can be defined in a flow, but if so, a single partition field must be selected in each modeling node that uses partitioning. (If only one partition is present, it is automatically used whenever partitioning is enabled.)
-
-To create a partition field based on some other criterion such as a date range or location, you can also use a Derive node. See [Derive node](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/derive.htmlderive) for more information.
-
-Example. When building an RFM flow to identify recent customers who have positively responded to previous marketing campaigns, the marketing department of a sales company uses a Partition node to split the data into training and test partitions.
-"
-CFC54BB4CEA29104BD4F9793B51ABE558AA0250D,CFC54BB4CEA29104BD4F9793B51ABE558AA0250D," Plot node
-
-Plot nodes show the relationship between numeric fields. You can create a plot using points (also known as a scatterplot), or you can use lines. You can create three types of line plots by specifying an X Mode in the node properties.
-"
-5E2A4B92C4F5F84B3DDE2EAD6827C7FA89EB0565,5E2A4B92C4F5F84B3DDE2EAD6827C7FA89EB0565," QUEST node
-
-QUEST—or Quick, Unbiased, Efficient Statistical Tree—is a binary classification method for building decision trees. A major motivation in its development was to reduce the processing time required for large C&R Tree analyses with either many variables or many cases. A second goal of QUEST was to reduce the tendency found in classification tree methods to favor inputs that allow more splits, that is, continuous (numeric range) input fields or those with many categories.
-
-
-
-* QUEST uses a sequence of rules, based on significance tests, to evaluate the input fields at a node. For selection purposes, as little as a single test may need to be performed on each input at a node. Unlike C&R Tree, all splits are not examined, and unlike C&R Tree and CHAID, category combinations are not tested when evaluating an input field for selection. This speeds the analysis.
-* Splits are determined by running quadratic discriminant analysis using the selected input on groups formed by the target categories. This method again results in a speed improvement over exhaustive search (C&R Tree) to determine the optimal split.
-
-
-
-Requirements. Input fields can be continuous (numeric ranges), but the target field must be categorical. All splits are binary. Weight fields cannot be used. Any ordinal (ordered set) fields used in the model must have numeric storage (not string). If necessary, the Reclassify node can be used to convert them.
-
-Strengths. Like CHAID, but unlike C&R Tree, QUEST uses statistical tests to decide whether or not an input field is used. It also separates the issues of input selection and splitting, applying different criteria to each. This contrasts with CHAID, in which the statistical test result that determines variable selection also produces the split. Similarly, C&R Tree employs the impurity-change measure to both select the input field and to determine the split.
-"
-2581DD8F04F917BA91F1201137AE0EFEA1F82E26,2581DD8F04F917BA91F1201137AE0EFEA1F82E26," Random Forest node
-
-Random Forest© is an advanced implementation of a bagging algorithm with a tree model as the base model.
-
-In random forests, each tree in the ensemble is built from a sample drawn with replacement (for example, a bootstrap sample) from the training set. When splitting a node during the construction of the tree, the split that is chosen is no longer the best split among all features. Instead, the split that is picked is the best split among a random subset of the features. Because of this randomness, the bias of the forest usually slightly increases (with respect to the bias of a single non-random tree) but, due to averaging, its variance also decreases, usually more than compensating for the increase in bias, hence yielding an overall better model.^1^
-
-The Random Forest node in watsonx.ai is implemented in Python. The nodes palette contains this node and other Python nodes.
-
-For more information about random forest algorithms, see [Forests of randomized trees](https://scikit-learn.org/stable/modules/ensemble.htmlforest).
-
-^1^L. Breiman, ""Random Forests,"" Machine Learning, 45(1), 5-32, 2001.
-"
-01800E00BDFB7CFE0E751FA6C616160C48E6ED21_0,01800E00BDFB7CFE0E751FA6C616160C48E6ED21," Random Trees node
-
-The Random Trees node can be used with data in a distributed environment. In this node, you build an ensemble model that consists of multiple decision trees.
-
-The Random Trees node is a tree-based classification and prediction method that is built on Classification and Regression Tree methodology. As with C&R Tree, this prediction method uses recursive partitioning to split the training records into segments with similar output field values. The node starts by examining the input fields available to it to find the best split, which is measured by the reduction in an impurity index that results from the split. The split defines two subgroups, each of which is then split into two more subgroups, and so on, until one of the stopping criteria is triggered. All splits are binary (only two subgroups).
-
-The Random Trees node uses bootstrap sampling with replacement to generate sample data. The sample data is used to grow a tree model. During tree growth, Random Trees will not sample the data again. Instead, it randomly selects part of the predictors and uses the best one to split a tree node. This process is repeated when splitting each tree node. This is the basic idea of growing a tree in random forest.
-
-Random Trees uses C&R Tree-like trees. Since such trees are binary, each field for splitting results in two branches. For a categorical field with multiple categories, the categories are grouped into two groups based on the inner splitting criterion. Each tree grows to the largest extent possible (there is no pruning). In scoring, Random Trees combines individual tree scores by majority voting (for classification) or average (for regression).
-
-Random Trees differ from C&R Trees as follows:
-
-
-
-* Random Trees nodes randomly select a specified number of predictors and uses the best one from the selection to split a node. In contrast, C&R Tree finds the best one from all predictors.
-"
-01800E00BDFB7CFE0E751FA6C616160C48E6ED21_1,01800E00BDFB7CFE0E751FA6C616160C48E6ED21,"* Each tree in Random Trees grows fully until each leaf node typically contains a single record. So the tree depth could be very large. But standard C&R Tree uses different stopping rules for tree growth, which usually leads to a much shallower tree.
-
-
-
-Random Trees adds two features compared to C&R Tree:
-
-
-
-* The first feature is bagging, where replicas of the training dataset are created by sampling with replacement from the original dataset. This action creates bootstrap samples that are of equal size to the original dataset, after which a component model is built on each replica. Together these component models form an ensemble model.
-* The second feature is that, at each split of the tree, only a sampling of the input fields is considered for the impurity measure.
-
-
-
-Requirements. To train a Random Trees model, you need one or more Input fields and one Target field. Target and input fields can be continuous (numeric range) or categorical. Fields that are set to either Both or None are ignored. Fields that are used in the model must have their types fully instantiated, and any ordinal (ordered set) fields that are used in the model must have numeric storage (not string). If necessary, the Reclassify node can be used to convert them.
-
-Strengths. Random Trees models are robust when you are dealing with large data sets and numbers of fields. Due to the use of bagging and field sampling, they are much less prone to overfitting and thus the results that are seen in testing are more likely to be repeated when you use new data.
-
-Note: When first creating a flow, you select which runtime to use. By default, flows use the IBM SPSS Modeler runtime. If you want to use native Spark algorithms instead of SPSS algorithms, select the Spark runtime. Properties for this node will vary depending on which runtime option you choose.
-"
-2D3F7F5EFB161E0D88AE69C4710D70AA99DB0BDE,2D3F7F5EFB161E0D88AE69C4710D70AA99DB0BDE," Reclassify node
-
-The Reclassify node enables the transformation from one set of categorical values to another. Reclassification is useful for collapsing categories or regrouping data for analysis.
-
-For example, you could reclassify the values for Product into three groups, such as Kitchenware, Bath and Linens, and Appliances.
-
-Reclassification can be performed for one or more symbolic fields. You can also choose to substitute the new values for the existing field or generate a new field.
-"
-BBDEDA771A051A9B1871F9BEC9589D91421E7C0C,BBDEDA771A051A9B1871F9BEC9589D91421E7C0C," Regression node
-
-Linear regression is a common statistical technique for classifying records based on the values of numeric input fields. Linear regression fits a straight line or surface that minimizes the discrepancies between predicted and actual output values.
-
-Requirements. Only numeric fields can be used in a regression model. You must have exactly one target field (with the role set to Target) and one or more predictors (with the role set to Input). Fields with a role of Both or None are ignored, as are non-numeric fields. (If necessary, non-numeric fields can be recoded using a Derive node. )
-
-Strengths. Regression models are relatively simple and give an easily interpreted mathematical formula for generating predictions. Because regression modeling is a long-established statistical procedure, the properties of these models are well understood. Regression models are also typically very fast to train. The Regression node provides methods for automatic field selection in order to eliminate nonsignificant input fields from the equation.
-
-Note: In cases where the target field is categorical rather than a continuous range, such as yes/no or churn/don't churn, logistic regression can be used as an alternative. Logistic regression also provides support for non-numeric inputs, removing the need to recode these fields. See [Logistic node](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/logreg.htmllogreg) for more information.
-"
-8322C981206A5C7EEEC48C32C9DDCEC9FCE98AEE,8322C981206A5C7EEEC48C32C9DDCEC9FCE98AEE," Field Reorder node
-
-With the Field Reorder node, you can define the natural order used to display fields downstream. This order affects the display of fields in a variety of places, such as tables, lists, and the Field Chooser.
-
-This operation is useful, for example, when working with wide datasets to make fields of interest more visible.
-"
-BF6A65F061558B6AED8A438A887B6474A0FDFFC3,BF6A65F061558B6AED8A438A887B6474A0FDFFC3," Report node
-
-You can use the Report node to create formatted reports containing fixed text, data, or other expressions derived from the data. Specify the format of the report by using text templates to define the fixed text and the data output constructions. You can provide custom text formatting using HTML tags in the template and by setting output options. Data values and other conditional output are included in the report using CLEM expressions in the template.
-"
-36C8AF3BBAFFF1C227CF611D7327AFA8E378D6EC,36C8AF3BBAFFF1C227CF611D7327AFA8E378D6EC," Restructure node
-
-With the Restructure node, you can generate multiple fields based on the values of a nominal or flag field. The newly generated fields can contain values from another field or numeric flags (0 and 1). The functionality of this node is similar to that of the Set to Flag node. However, it offers more flexibility by allowing you to create fields of any type (including numeric flags), using the values from another field. You can then perform aggregation or other manipulations with other nodes downstream. (The Set to Flag node lets you aggregate fields in one step, which may be convenient if you are creating flag fields.)
-
-Figure 1. Restructure node
-
-
-"
-265714702B012F1010CE06D97EC16623360F4E2B,265714702B012F1010CE06D97EC16623360F4E2B," RFM Aggregate node
-
-The Recency, Frequency, Monetary (RFM) Aggregate node allows you to take customers' historical transactional data, strip away any unused data, and combine all of their remaining transaction data into a single row (using their unique customer ID as a key) that lists when they last dealt with you (recency), how many transactions they have made (frequency), and the total value of those transactions (monetary).
-
-Before proceeding with any aggregation, you should take time to clean the data, concentrating especially on any missing values.
-
-After you identify and transform the data using the RFM Aggregate node, you might use an RFM Analysis node to carry out further analysis.
-
-Note that after the data file has been run through the RFM Aggregate node, it won't have any target values; therefore, before using the data file as input for further predictive analysis with any modeling nodes such as C5.0 or CHAID, you need to merge it with other customer data (for example, by matching the customer IDs).
-
-The RFM Aggregate and RFM Analysis nodes use independent binning; that is, they rank and bin data on each measure of recency, frequency, and monetary value, without regard to their values or the other two measures.
-"
-9E15D946EDFB82EF911D36032C073CF1736B39DA_0,9E15D946EDFB82EF911D36032C073CF1736B39DA," RFM Analysis node
-
-You can use the Recency, Frequency, Monetary (RFM) Analysis node to determine quantitatively which customers are likely to be the best ones by examining how recently they last purchased from you (recency), how often they purchased (frequency), and how much they spent over all transactions (monetary).
-
-The reasoning behind RFM analysis is that customers who purchase a product or service once are more likely to purchase again. The categorized customer data is separated into a number of bins, with the binning criteria adjusted as you require. In each of the bins, customers are assigned a score; these scores are then combined to provide an overall RFM score. This score is a representation of the customer's membership in the bins created for each of the RFM parameters. This binned data may be sufficient for your needs, for example, by identifying the most frequent, high-value customers; alternatively, it can be passed on in a flow for further modeling and analysis.
-
-Note, however, that although the ability to analyze and rank RFM scores is a useful tool, you must be aware of certain factors when using it. There may be a temptation to target customers with the highest rankings; however, over-solicitation of these customers could lead to resentment and an actual fall in repeat business. It is also worth remembering that customers with low scores should not be neglected but instead may be cultivated to become better customers. Conversely, high scores alone do not necessarily reflect a good sales prospect, depending on the market. For example, a customer in bin 5 for recency, meaning that they have purchased very recently, may not actually be the best target customer for someone selling expensive, longer-life products such as cars or televisions.
-
-"
-9E15D946EDFB82EF911D36032C073CF1736B39DA_1,9E15D946EDFB82EF911D36032C073CF1736B39DA,"Note: Depending on how your data is stored, you may need to precede the RFM Analysis node with an RFM Aggregate node to transform the data into a usable format. For example, input data must be in customer format, with one row per customer; if the customers' data is in transactional form, use an RFM Aggregate node upstream to derive the recency, frequency, and monetary fields.
-
-The RFM Aggregate and RFM Analysis nodes in are set up to use independent binning; that is, they rank and bin data on each measure of recency, frequency, and monetary value, without regard to their values or the other two measures.
-"
-AF3DA662099BD616B642F69925AEC7C8AFC84611,AF3DA662099BD616B642F69925AEC7C8AFC84611," Sample node
-
-You can use Sample nodes to select a subset of records for analysis, or to specify a proportion of records to discard. A variety of sample types are supported, including stratified, clustered, and nonrandom (structured) samples.
-
-Sampling can be used for several reasons:
-
-
-
-* To improve performance by estimating models on a subset of the data. Models estimated from a sample are often as accurate as those derived from the full dataset, and may be more so if the improved performance allows you to experiment with different methods you might not otherwise have attempted.
-* To select groups of related records or transactions for analysis, such as selecting all the items in an online shopping cart (or market basket), or all the properties in a specific neighborhood.
-* To identify units or cases for random inspection in the interest of quality assurance, fraud prevention, or security.
-
-
-
-Note: If you simply want to partition your data into training and test samples for purposes of validation, a Partition node can be used instead. See [Partition node](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/partition.htmlpartition) for more information.
-"
-84E8928D464D412B225638BCC41F2837F98AEF43_0,84E8928D464D412B225638BCC41F2837F98AEF43," autodataprepnode properties
-
-The Auto Data Prep (ADP) node can analyze your data and identify fixes, screen out fields that are problematic or not likely to be useful, derive new attributes when appropriate, and improve performance through intelligent screening and sampling techniques. You can use the node in fully automated fashion, allowing the node to choose and apply fixes, or you can preview the changes before they are made and accept, reject, or amend them as desired.
-
-
-
-autodataprepnode properties
-
-Table 1. autodataprepnode properties
-
- autodataprepnode properties Data type Property description
-
- objective Balanced Speed Accuracy Custom
- custom_fields flag If true, allows you to specify target, input, and other fields for the current node. If false, the current settings from an upstream Type node are used.
- target field Specifies a single target field.
- inputs [field1 ... fieldN] Input or predictor fields used by the model.
- use_frequency flag
- frequency_field field
- use_weight flag
- weight_field field
- excluded_fields Filter None
- if_fields_do_not_match StopExecution ClearAnalysis
- prepare_dates_and_times flag Control access to all the date and time fields
- compute_time_until_date flag
- reference_date Today Fixed
- fixed_date date
- units_for_date_durations Automatic Fixed
- fixed_date_units Years Months Days
- compute_time_until_time flag
- reference_time CurrentTime Fixed
- fixed_time time
- units_for_time_durations Automatic Fixed
- fixed_time_units Hours Minutes Seconds
- extract_year_from_date flag
- extract_month_from_date flag
- extract_day_from_date flag
-"
-84E8928D464D412B225638BCC41F2837F98AEF43_1,84E8928D464D412B225638BCC41F2837F98AEF43," extract_hour_from_time flag
- extract_minute_from_time flag
- extract_second_from_time flag
- exclude_low_quality_inputs flag
- exclude_too_many_missing flag
- maximum_percentage_missing number
- exclude_too_many_categories flag
- maximum_number_categories number
- exclude_if_large_category flag
- maximum_percentage_category number
- prepare_inputs_and_target flag
- adjust_type_inputs flag
- adjust_type_target flag
- reorder_nominal_inputs flag
- reorder_nominal_target flag
- replace_outliers_inputs flag
- replace_outliers_target flag
- replace_missing_continuous_inputs flag
- replace_missing_continuous_target flag
- replace_missing_nominal_inputs flag
- replace_missing_nominal_target flag
- replace_missing_ordinal_inputs flag
- replace_missing_ordinal_target flag
- maximum_values_for_ordinal number
- minimum_values_for_continuous number
- outlier_cutoff_value number
- outlier_method Replace Delete
- rescale_continuous_inputs flag
- rescaling_method MinMax ZScore
- min_max_minimum number
- min_max_maximum number
- z_score_final_mean number
- z_score_final_sd number
- rescale_continuous_target flag
- target_final_mean number
- target_final_sd number
- transform_select_input_fields flag
- maximize_association_with_target flag
- p_value_for_merging number
- merge_ordinal_features flag
- merge_nominal_features flag
- minimum_cases_in_category number
- bin_continuous_fields flag
- p_value_for_binning number
- perform_feature_selection flag
- p_value_for_selection number
- perform_feature_construction flag
- transformed_target_name_extension string
- transformed_inputs_name_extension string
- constructed_features_root_name string
- years_duration_ name_extension string
- months_duration_ name_extension string
-"
-84E8928D464D412B225638BCC41F2837F98AEF43_2,84E8928D464D412B225638BCC41F2837F98AEF43," days_duration_ name_extension string
- hours_duration_ name_extension string
- minutes_duration_ name_extension string
- seconds_duration_ name_extension string
- year_cyclical_name_extension string
- month_cyclical_name_extension string
- day_cyclical_name_extension string
- hour_cyclical_name_extension string
-"
-8CD81C0F5F84DFE58834AEB8B71E6D7780B8DEAD,8CD81C0F5F84DFE58834AEB8B71E6D7780B8DEAD," aggregatenode properties
-
- The Aggregate node replaces a sequence of input records with summarized, aggregated output records.
-
-
-
-aggregatenode properties
-
-Table 1. aggregatenode properties
-
- aggregatenode properties Data type Property description
-
- keys list Lists fields that can be used as keys for aggregation. For example, if Sex and Region are your key fields, each unique combination of M and F with regions N and S (four unique combinations) will have an aggregated record.
- contiguous flag Select this option if you know that all records with the same key values are grouped together in the input (for example, if the input is sorted on the key fields). Doing so can improve performance.
- aggregates Structured property listing the numeric fields whose values will be aggregated, as well as the selected modes of aggregation.
- aggregate_exprs Keyed property which keys the derived field name with the aggregate expression used to compute it. For example: aggregatenode.setKeyedPropertyValue (""aggregate_exprs"", ""Na_MAX"", ""MAX('Na')"")
- extension string Specify a prefix or suffix for duplicate aggregated fields.
- add_as Suffix Prefix
- inc_record_count flag Creates an extra field that specifies how many input records were aggregated to form each aggregate record.
- count_field string Specifies the name of the record count field.
- allow_approximation Boolean Allows approximation of order statistics when aggregation is performed in SPSS Analytic Server.
-"
-2C17E0A9E72FE65317838E81ACF1FA77620E0C6C,2C17E0A9E72FE65317838E81ACF1FA77620E0C6C," analysisnode properties
-
-The Analysis node evaluates predictive models' ability to generate accurate predictions. Analysis nodes perform various comparisons between predicted values and actual values for one or more model nuggets. They can also compare predictive models to each other.
-
-
-
-analysisnode properties
-
-Table 1. analysisnode properties
-
- analysisnode properties Data type Property description
-
- output_mode ScreenFile Used to specify target location for output generated from the output node.
- use_output_name flag Specifies whether a custom output name is used.
- output_name string If use_output_name is true, specifies the name to use.
- output_format Text (.txt) HTML (.html) Output (.cou) Used to specify the type of output.
- by_fields list
- full_filename string If disk, data, or HTML output, the name of the output file.
- coincidence flag
- performance flag
- evaluation_binary flag
- confidence flag
- threshold number
- improve_accuracy number
- field_detection_method MetadataName Determines how predicted fields are matched to the original target field. Specify Metadata or Name.
- inc_user_measure flag
- user_if expr
- user_then expr
- user_else expr
-"
-5C2296329A2D24B1A22A3848731708D78949E74C_0,5C2296329A2D24B1A22A3848731708D78949E74C," anomalydetectionnode properties
-
-The Anomaly node identifies unusual cases, or outliers, that don't conform to patterns of ""normal"" data. With this node, it's possible to identify outliers even if they don't fit any previously known patterns and even if you're not exactly sure what you're looking for.
-
-
-
-anomalydetectionnode properties
-
-Table 1. anomalydetectionnode properties
-
- anomalydetectionnode Properties Values Property description
-
- inputs [field1 ... fieldN] Anomaly Detection models screen records based on the specified input fields. They don't use a target field. Weight and frequency fields are also not used. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information.
- mode ExpertSimple
- anomaly_method IndexLevelPerRecordsNumRecords Specifies the method used to determine the cutoff value for flagging records as anomalous.
- index_level number Specifies the minimum cutoff value for flagging anomalies.
- percent_records number Sets the threshold for flagging records based on the percentage of records in the training data.
- num_records number Sets the threshold for flagging records based on the number of records in the training data.
- num_fields integer The number of fields to report for each anomalous record.
- impute_missing_values flag
- adjustment_coeff number Value used to balance the relative weight given to continuous and categorical fields in calculating the distance.
- peer_group_num_auto flag Automatically calculates the number of peer groups.
- min_num_peer_groups integer Specifies the minimum number of peer groups used when peer_group_num_auto is set to True.
-"
-5C2296329A2D24B1A22A3848731708D78949E74C_1,5C2296329A2D24B1A22A3848731708D78949E74C," max_num_per_groups integer Specifies the maximum number of peer groups.
- num_peer_groups integer Specifies the number of peer groups used when peer_group_num_auto is set to False.
-"
-B51FF1FBA515035A93290F353D20AD9D54BC043C,B51FF1FBA515035A93290F353D20AD9D54BC043C," applyanomalydetectionnode properties
-
-You can use Anomaly Detection modeling nodes to generate an Anomaly Detection model nugget. The scripting name of this model nugget is applyanomalydetectionnode. For more information on scripting the modeling node itself, see [anomalydetectionnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/anomalydetectionnodeslots.htmlanomalydetectionnodeslots).
-
-
-
-applyanomalydetectionnode properties
-
-Table 1. applyanomalydetectionnode properties
-
- applyanomalydetectionnode Properties Values Property description
-
- anomaly_score_method FlagAndScoreFlagOnlyScoreOnly Determines which outputs are created for scoring.
- num_fields integer Fields to report.
- discard_records flag Indicates whether records are discarded from the output or not.
-"
-65FFB2E27EACD57BCADC6C1646EB280212D3B2C2,65FFB2E27EACD57BCADC6C1646EB280212D3B2C2," anonymizenode properties
-
-The Anonymize node transforms the way field names and values are represented downstream, thus disguising the original data. This can be useful if you want to allow other users to build models using sensitive data, such as customer names or other details.
-
-
-
-anonymizenode properties
-
-Table 1. anonymizenode properties
-
- anonymizenode properties Data type Property description
-
- enable_anonymize flag When set to True, activates anonymization of field values (equivalent to selecting Yes for that field in the Anonymize Values column).
- use_prefix flag When set to True, a custom prefix will be used if one has been specified. Applies to fields that will be anonymized by the Hash method and is equivalent to choosing the Custom option in the Replace Values settings for that field.
- prefix string Equivalent to typing a prefix into the text box in the Replace Values settings. The default prefix is the default value if nothing else has been specified.
- transformation RandomFixed Determines whether the transformation parameters for a field anonymized by the Transform method will be random or fixed.
- set_random_seed flag When set to True, the specified seed value will be used (if transformation is also set to Random).
- random_seed integer When set_random_seed is set to True, this is the seed for the random number.
-"
-8D328FC36822024D739F83A36FEF66E5ABE61128,8D328FC36822024D739F83A36FEF66E5ABE61128," appendnode properties
-
- The Append node concatenates sets of records. It's useful for combining datasets with similar structures but different data.
-
-
-
-appendnode properties
-
-Table 1. appendnode properties
-
- appendnode properties Data type Property description
-
- match_by PositionName You can append datasets based on the position of fields in the main data source or the name of fields in the input datasets.
- match_case flag Enables case sensitivity when matching field names.
- include_fields_from MainAll
-"
-76EC742BC2D093C10C6A5B85456BFBB6571C416D,76EC742BC2D093C10C6A5B85456BFBB6571C416D," apriorinode properties
-
-The Apriori node extracts a set of rules from the data, pulling out the rules with the highest information content. Apriori offers five different methods of selecting rules and uses a sophisticated indexing scheme to process large data sets efficiently. For large problems, Apriori is generally faster to train; it has no arbitrary limit on the number of rules that can be retained, and it can handle rules with up to 32 preconditions. Apriori requires that input and output fields all be categorical but delivers better performance because it'ss optimized for this type of data.
-
-
-
-apriorinode properties
-
-Table 1. apriorinode properties
-
- apriorinode Properties Values Property description
-
- consequents field Apriori models use Consequents and Antecedents in place of the standard target and input fields. Weight and frequency fields are not used. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information.
- antecedents [field1 ... fieldN]
- min_supp number
- min_conf number
- max_antecedents number
- true_flags flag
- optimize Speed Memory
- use_transactional_data flag
- contiguous flag
- id_field string
- content_field string
- mode SimpleExpert
- evaluation RuleConfidence DifferenceToPrior ConfidenceRatio InformationDifference NormalizedChiSquare
- lower_bound number
-"
-292C0E87B8E56B15991C954508AB125A8FB80972,292C0E87B8E56B15991C954508AB125A8FB80972," applyapriorinode properties
-
-You can use Apriori modeling nodes to generate an Apriori model nugget. The scripting name of this model nugget is applyapriorinode. For more information on scripting the modeling node itself, see [apriorinode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/apriorinodeslots.htmlapriorinodeslots).
-
-
-
-applyapriorinode properties
-
-Table 1. applyapriorinode properties
-
- applyapriorinode Properties Values Property description
-
- max_predictions number (integer)
- ignore_unmatached flag
- allow_repeats flag
- check_basket NoPredictionsPredictionsNoCheck
-"
-2BCBD3D61CC24296EA38B26B10306B7F50CE4988_0,2BCBD3D61CC24296EA38B26B10306B7F50CE4988," astimeintervalsnode properties
-
-Use the Time Intervals node to specify intervals and derive a new time field for estimating or forecasting. A full range of time intervals is supported, from seconds to years.
-
-
-
-astimeintervalsnode properties
-
-Table 1. astimeintervalsnode properties
-
- astimeintervalsnode properties Data type Property description
-
- time_field field Can accept only a single continuous field. That field is used by the node as the aggregation key for converting the interval. If an integer field is used here it's considered to be a time index.
- dimensions [field1 field2 … fieldn] These fields are used to create individual time series based on the field values.
- fields_to_aggregate [field1 field2 … fieldn] These fields are aggregated as part of changing the period of the time field. Any fields not included in this picker are filtered out of the data leaving the node.
- interval_type_timestamp Years Quarters Months Weeks Days Hours Minutes Seconds Specify intervals and derive a new time field for estimating or forecasting.
- interval_type_time Hours Minutes Seconds
- interval_type_date Years Quarters Months Weeks Days Time interval
- interval_type_integer Periods Time interval
- periods_per_interval integer Periods per interval
- start_month JanuaryFebruaryMarchAprilMayJuneJulyAugustSeptemberOctoberNovemberDecember
- week_begins_on Sunday Monday Tuesday Wednesday Thursday Friday Saturday
- minute_interval 1 2 3 4 5 6 10 12 15 20 30
- second_interval 1 2 3 4 5 6 10 12 15 20 30
-"
-2BCBD3D61CC24296EA38B26B10306B7F50CE4988_1,2BCBD3D61CC24296EA38B26B10306B7F50CE4988," agg_range_default Sum Mean Min Max Median 1stQuartile 3rdQuartile Available functions for continuous fields include Sum, Mean, Min, Max, Median, 1st Quartile, and 3rd Quartile.
- agg_set_default Mode Min Max Nominal options include Mode, Min, and Max.
- agg_flag_default TrueIfAnyTrue FalseIfAnyFalse Options are either True if any true or False if any false.
- custom_agg array Custom settings for specified fields.
-"
-27963DF2327FBE202B836AC5905258D063A8770D,27963DF2327FBE202B836AC5905258D063A8770D," applyautoclassifiernode properties
-
-You can use Auto Classifier modeling nodes to generate an Auto Classifier model nugget. The scripting name of this model nugget is applyautoclassifiernode. For more information on scripting the modeling node itself, see [autoclassifiernode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/binaryclassifiernodeslots.htmlbinaryclassifiernodeslots).
-
-
-
-applyautoclassifiernode properties
-
-Table 1. applyautoclassifiernode properties
-
- applyautoclassifiernode Properties Values Property description
-
- flag_ensemble_method VotingConfidenceWeightedVotingRawPropensityWeightedVotingHighestConfidenceAverageRawPropensity Specifies the method used to determine the ensemble score. This setting applies only if the selected target is a flag field.
- flag_voting_tie_selection RandomHighestConfidenceRawPropensity If a voting method is selected, specifies how ties are resolved. This setting applies only if the selected target is a flag field.
-"
-E399A5B6FA720C6F21337792F822F20F20F98910_0,E399A5B6FA720C6F21337792F822F20F20F98910," autoclusternode properties
-
-The Auto Cluster node estimates and compares clustering models, which identify groups of records that have similar characteristics. The node works in the same manner as other automated modeling nodes, allowing you to experiment with multiple combinations of options in a single modeling pass. Models can be compared using basic measures with which to attempt to filter and rank the usefulness of the cluster models, and provide a measure based on the importance of particular fields.
-
-
-
-autoclusternode properties
-
-Table 1. autoclusternode properties
-
- autoclusternode Properties Values Property description
-
- evaluation field Note: Auto Cluster node only. Identifies the field for which an importance value will be calculated. Alternatively, can be used to identify how well the cluster differentiates the value of this field and, therefore, how well the model will predict this field.
- ranking_measure SilhouetteNum_clustersSize_smallest_clusterSize_largest_clusterSmallest_to_largestImportance
- ranking_dataset TrainingTest
- summary_limit integer Number of models to list in the report. Specify an integer between 1 and 100.
- enable_silhouette_limit flag
- silhouette_limit integer Integer between 0 and 100.
- enable_number_less_limit flag
- number_less_limit number Real number between 0.0 and 1.0.
- enable_number_greater_limit flag
- number_greater_limit number Integer greater than 0.
- enable_smallest_cluster_limit flag
- smallest_cluster_units PercentageCounts
- smallest_cluster_limit_percentage number
- smallest_cluster_limit_count integer Integer greater than 0.
- enable_largest_cluster_limit flag
- largest_cluster_units PercentageCounts
- largest_cluster_limit_percentage number
- largest_cluster_limit_count integer
- enable_smallest_largest_limit flag
- smallest_largest_limit number
- enable_importance_limit flag
-"
-E399A5B6FA720C6F21337792F822F20F20F98910_1,E399A5B6FA720C6F21337792F822F20F20F98910," importance_limit_condition Greater_thanLess_than
- importance_limit_greater_than number Integer between 0 and 100.
- importance_limit_less_than number Integer between 0 and 100.
- flag Enables or disables the use of a specific algorithm.
- . string Sets a property value for a specific algorithm. See [Setting algorithm properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/factorymodeling_algorithmproperties.htmlfactorymodeling_algorithmproperties) for more information.
- number_of_models integer
- enable_model_build_time_limit boolean (K-Means, Kohonen, TwoStep, SVM, KNN, Bayes Net and Decision List models only.) Sets a maximum time limit for any one model. For example, if a particular model requires an unexpectedly long time to train because of some complex interaction, you probably don't want it to hold up your entire modeling run.
- model_build_time_limit integer Time spent on model build.
- enable_stop_after_time_limit boolean (Neural Network, K-Means, Kohonen, TwoStep, SVM, KNN, Bayes Net and C&R Tree models only.) Stops a run after a specified number of hours. All models generated up to that point will be included in the model nugget, but no further models will be produced.
-"
-14416203D840C788359110B18CFD9CE922DE0D67,14416203D840C788359110B18CFD9CE922DE0D67," applyautoclusternode properties
-
-You can use Auto Cluster modeling nodes to generate an Auto Cluster model nugget. The scripting name of this model nugget is applyautoclusternode. No other properties exist for this model nugget. For more information on scripting the modeling node itself, see [autoclusternode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/autoclusternodeslots.htmlautoclusternodeslots).
-"
-3EAAFDDADE769D3B0300BE1401BB3D7E68B312DD,3EAAFDDADE769D3B0300BE1401BB3D7E68B312DD," applyautonumericnode properties
-
-You can use Auto Numeric modeling nodes to generate an Auto Numeric model nugget. The scripting name of this model nugget is applyautonumericnode.For more information on scripting the modeling node itself, see [autonumericnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/rangepredictornodeslots.htmlrangepredictornodeslots).
-
-
-
-applyautonumericnode properties
-
-Table 1. applyautonumericnode properties
-
- applyautonumericnode Properties Values Property description
-
- calculate_standard_error flag
-"
-D2D9F4E05CABC566B2021116ED28EF413FA96779,D2D9F4E05CABC566B2021116ED28EF413FA96779," Node properties overview
-
-Each type of node has its own set of legal properties, and each property has a type. This type may be a general type—number, flag, or string—in which case settings for the property are coerced to the correct type. An error is raised if they can't be coerced. Alternatively, the property reference may specify the range of legal values, such as Discard, PairAndDiscard, and IncludeAsText, in which case an error is raised if any other value is used. Flag properties should be read or set by using values of true and false. (Variations including Off, OFF, off, No, NO, no, n, N, f, F, false, False, FALSE, or 0 are also recognized when setting values, but may cause errors when reading property values in some cases. All other values are regarded as true. Using true and false consistently will avoid any confusion.) In this documentation's reference tables, the structured properties are indicated as such in the Property description column, and their usage formats are provided.
-"
-7A9F4CDF362D1F06C3644EDBD634B2A77DDC6005,7A9F4CDF362D1F06C3644EDBD634B2A77DDC6005," balancenode properties
-
- The Balance node corrects imbalances in a dataset, so it conforms to a specified condition. The balancing directive adjusts the proportion of records where a condition is true by the factor specified.
-
-
-
-balancenode properties
-
-Table 1. balancenode properties
-
- balancenode properties Data type Property description
-
- directives Structured property to balance proportion of field values based on number specified.
- training_data_only flag Specifies that only training data should be balanced. If no partition field is present in the stream, then this option is ignored.
-
-
-
-This node property uses the format:
-
-[[ number, string ] \ [ number, string] \ ... [number, string ]].
-
-Note: If strings (using double quotation marks) are embedded in the expression, they must be preceded by the escape character "" "". The "" "" character is also the line continuation character, which you can use to align the arguments for clarity.
-"
-FE2254205E6DD1EE2A4EC62036AB86BC5E084F5D_0,FE2254205E6DD1EE2A4EC62036AB86BC5E084F5D," bayesnetnode properties
-
-With the Bayesian Network (Bayes Net) node, you can build a probability model by combining observed and recorded evidence with real-world knowledge to establish the likelihood of occurrences. The node focuses on Tree Augmented Naïve Bayes (TAN) and Markov Blanket networks that are primarily used for classification.
-
-
-
-bayesnetnode properties
-
-Table 1. bayesnetnode properties
-
- bayesnetnode Properties Values Property description
-
- inputs [field1 ... fieldN] Bayesian network models use a single target field, and one or more input fields. Continuous fields are automatically binned. See the topic [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information.
- continue_training_existing_model flag
- structure_type TANMarkovBlanket Select the structure to be used when building the Bayesian network.
- use_feature_selection flag
- parameter_learning_method LikelihoodBayes Specifies the method used to estimate the conditional probability tables between nodes where the values of the parents are known.
- mode ExpertSimple
- missing_values flag
- all_probabilities flag
- independence LikelihoodPearson Specifies the method used to determine whether paired observations on two variables are independent of each other.
- significance_level number Specifies the cutoff value for determining independence.
- maximal_conditioning_set number Sets the maximal number of conditioning variables to be used for independence testing.
- inputs_always_selected [field1 ... fieldN] Specifies which fields from the dataset are always to be used when building the Bayesian network. Note: The target field is always selected.
-"
-FE2254205E6DD1EE2A4EC62036AB86BC5E084F5D_1,FE2254205E6DD1EE2A4EC62036AB86BC5E084F5D," maximum_number_inputs number Specifies the maximum number of input fields to be used in building the Bayesian network.
- calculate_variable_importance flag
- calculate_raw_propensities flag
-"
-EC154AE6F7FE894644424BFA90C6CA31E13A4B71,EC154AE6F7FE894644424BFA90C6CA31E13A4B71," applybayesnetnode properties
-
-You can use Bayesian network modeling nodes to generate a Bayesian network model nugget. The scripting name of this model nugget is applybayesnetnode. For more information on scripting the modeling node itself, see [bayesnetnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/bayesnetnodeslots.htmlbayesnetnodeslots).
-
-
-
-applybayesnetnode properties
-
-Table 1. applybayesnetnode properties
-
- applybayesnetnode Properties Values Property description
-
- all_probabilities flag
- raw_propensity flag
- adjusted_propensity flag
- calculate_raw_propensities flag
-"
-CDA0897D49B56EE521BF16E52014DA5E2E1D2710_0,CDA0897D49B56EE521BF16E52014DA5E2E1D2710," autoclassifiernode properties
-
-The Auto Classifier node creates and compares a number of different models for binary outcomes (yes or no, churn or do not churn, and so on), allowing you to choose the best approach for a given analysis. A number of modeling algorithms are supported, making it possible to select the methods you want to use, the specific options for each, and the criteria for comparing the results. The node generates a set of models based on the specified options and ranks the best candidates according to the criteria you specify.
-
-
-
-autoclassifiernode properties
-
-Table 1. autoclassifiernode properties
-
- autoclassifiernode Properties Values Property description
-
- target field For flag targets, the Auto Classifier node requires a single target and one or more input fields. Weight and frequency fields can also be specified. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information.
- ranking_measure Accuracy Area_under_curve Profit Lift Num_variables
- ranking_dataset Training Test
- number_of_models integer Number of models to include in the model nugget. Specify an integer between 1 and 100.
- calculate_variable_importance flag
- enable_accuracy_limit flag
- accuracy_limit integer Integer between 0 and 100.
- enable_area_under_curve_limit flag
- area_under_curve_limit number Real number between 0.0 and 1.0.
- enable_profit_limit flag
- profit_limit number Integer greater than 0.
- enable_lift_limit flag
- lift_limit number Real number greater than 1.0.
- enable_number_of_variables_limit flag
- number_of_variables_limit number Integer greater than 0.
- use_fixed_cost flag
-"
-CDA0897D49B56EE521BF16E52014DA5E2E1D2710_1,CDA0897D49B56EE521BF16E52014DA5E2E1D2710," fixed_cost number Real number greater than 0.0.
- variable_cost field
- use_fixed_revenue flag
- fixed_revenue number Real number greater than 0.0.
- variable_revenue field
- use_fixed_weight flag
- fixed_weight number Real number greater than 0.0
- variable_weight field
- lift_percentile number Integer between 0 and 100.
- enable_model_build_time_limit flag
- model_build_time_limit number Integer set to the number of minutes to limit the time taken to build each individual model.
- enable_stop_after_time_limit flag
- stop_after_time_limit number Real number set to the number of hours to limit the overall elapsed time for an auto classifier run.
- enable_stop_after_valid_model_produced flag
- use_costs flag
- flag Enables or disables the use of a specific algorithm.
- . string Sets a property value for a specific algorithm. See [Setting algorithm properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/factorymodeling_algorithmproperties.htmlfactorymodeling_algorithmproperties) for more information.
- use_cross_validation field Fields added to this list can take either the condition or prediction role in rules that are generated by the model. This is on a rule by rule basis, so a field might be a condition in one rule and a prediction in another.
- number_of_folds integer N fold parameter for cross validation, with range from 3 to 10.
- set_random_seed boolean Setting a random seed allows you to replicate analyses. Specify an integer or click Generate, which will create a pseudo-random integer between 1 and 2147483647, inclusive. By default, analyses are replicated with seed 229176228.
- random_seed integer Random seed
- stop_if_valid_model boolean
-"
-CDA0897D49B56EE521BF16E52014DA5E2E1D2710_2,CDA0897D49B56EE521BF16E52014DA5E2E1D2710," filter_individual_model_output boolean Removes from the output all of the additional fields generated by the individual models that feed into the Ensemble node. Select this option if you're interested only in the combined score from all of the input models. Ensure that this option is deselected if, for example, you want to use an Analysis node or Evaluation node to compare the accuracy of the combined score with that of each of the individual input models
- set_ensemble_method ""Voting"" ""ConfidenceWeightedVoting"" ""HighestConfidence"" Ensemble method for set targets.
- set_voting_tie_selection ""Random"" ""HighestConfidence"" If voting is tied, select value randomly or by using highest confidence.
-"
-B741FE5CDD06D606F869B15DEB2173C1F134D22D_0,B741FE5CDD06D606F869B15DEB2173C1F134D22D," binningnode properties
-
-The Binning node automatically creates new nominal (set) fields based on the values of one or more existing continuous (numeric range) fields. For example, you can transform a continuous income field into a new categorical field containing groups of income as deviations from the mean. After you create bins for the new field, you can generate a Derive node based on the cut points.
-
-
-
-binningnode properties
-
-Table 1. binningnode properties
-
- binningnode properties Data type Property description
-
- fields [field1 field2 ... fieldn] Continuous (numeric range) fields pending transformation. You can bin multiple fields simultaneously.
- method FixedWidthEqualCountRankSDevOptimal Method used for determining cut points for new field bins (categories).
- recalculate_bins AlwaysIfNecessary Specifies whether the bins are recalculated and the data placed in the relevant bin every time the node is executed, or that data is added only to existing bins and any new bins that have been added.
- fixed_width_name_extension string The default extension is _BIN.
- fixed_width_add_as SuffixPrefix Specifies whether the extension is added to the end (suffix) of the field name or to the start (prefix). The default extension is income_BIN.
- fixed_bin_method WidthCount
- fixed_bin_count integer Specifies an integer used to determine the number of fixed-width bins (categories) for the new field(s).
- fixed_bin_width real Value (integer or real) for calculating width of the bin.
- equal_count_name_extension string The default extension is _TILE.
- equal_count_add_as SuffixPrefix Specifies an extension, either suffix or prefix, used for the field name generated by using standard p-tiles. The default extension is _TILE plus N, where N is the tile number.
-"
-B741FE5CDD06D606F869B15DEB2173C1F134D22D_1,B741FE5CDD06D606F869B15DEB2173C1F134D22D," tile4 flag Generates four quantile bins, each containing 25% of cases.
- tile5 flag Generates five quintile bins.
- tile10 flag Generates 10 decile bins.
- tile20 flag Generates 20 vingtile bins.
- tile100 flag Generates 100 percentile bins.
- use_custom_tile flag
- custom_tile_name_extension string The default extension is _TILEN.
- custom_tile_add_as SuffixPrefix
- custom_tile integer
- equal_count_method RecordCountValueSum The RecordCount method seeks to assign an equal number of records to each bin, while ValueSum assigns records so that the sum of the values in each bin is equal.
- tied_values_method NextCurrentRandom Specifies which bin tied value data is to be put in.
- rank_order AscendingDescending This property includes Ascending (lowest value is marked 1) or Descending (highest value is marked 1).
- rank_add_as SuffixPrefix This option applies to rank, fractional rank, and percentage rank.
- rank flag
- rank_name_extension string The default extension is _RANK.
- rank_fractional flag Ranks cases where the value of the new field equals rank divided by the sum of the weights of the nonmissing cases. Fractional ranks fall in the range of 0–1.
- rank_fractional_name_extension string The default extension is _F_RANK.
- rank_pct flag Each rank is divided by the number of records with valid values and multiplied by 100. Percentage fractional ranks fall in the range of 1–100.
- rank_pct_name_extension string The default extension is _P_RANK.
- sdev_name_extension string
- sdev_add_as SuffixPrefix
- sdev_count OneTwoThree
- optimal_name_extension string The default extension is _OPTIMAL.
- optimal_add_as SuffixPrefix
- optimal_supervisor_field field Field chosen as the supervisory field to which the fields selected for binning are related.
- optimal_merge_bins flag Specifies that any bins with small case counts will be added to a larger, neighboring bin.
-"
-B741FE5CDD06D606F869B15DEB2173C1F134D22D_2,B741FE5CDD06D606F869B15DEB2173C1F134D22D," optimal_small_bin_threshold integer
- optimal_pre_bin flag Indicates that prebinning of dataset is to take place.
- optimal_max_bins integer Specifies an upper limit to avoid creating an inordinately large number of bins.
- optimal_lower_end_point InclusiveExclusive
-"
-5C95F2D19465DDA8969D0498D1B96D870BD02A1F,5C95F2D19465DDA8969D0498D1B96D870BD02A1F," c50node properties
-
-The C5.0 node builds either a decision tree or a rule set. The model works by splitting the sample based on the field that provides the maximum information gain at each level. The target field must be categorical. Multiple splits into more than two subgroups are allowed.
-
-
-
-c50node properties
-
-Table 1. c50node properties
-
- c50node Properties Values Property description
-
- target field C50 models use a single target field and one or more input fields. You can also specify a weight field. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information.
- output_type DecisionTreeRuleSet
- group_symbolics flag
- use_boost flag
- boost_num_trials number
- use_xval flag
- xval_num_folds number
- mode SimpleExpert
- favor AccuracyGenerality Favor accuracy or generality.
- expected_noise number
- min_child_records number
- pruning_severity number
- use_costs flag
- costs structured This is a structured property. See the example for usage.
- use_winnowing flag
- use_global_pruning flag On (True) by default.
- calculate_variable_importance flag
- calculate_raw_propensities flag
-"
-FCBDBFD3E4BEBEFE552FAD012509948FABA34B44,FCBDBFD3E4BEBEFE552FAD012509948FABA34B44," applyc50node properties
-
-You can use C5.0 modeling nodes to generate a C5.0 model nugget. The scripting name of this model nugget is applyc50node. For more information on scripting the modeling node itself, see [c50node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/c50nodeslots.htmlc50nodeslots).
-
-
-
-applyc50node properties
-
-Table 1. applyc50node properties
-
- applyc50node Properties Values Property description
-
- sql_generate udfNeverNoMissingValues Used to set SQL generation options during rule set execution. The default value is udf.
- calculate_conf flag Available when SQL generation is enabled; this property includes confidence calculations in the generated tree.
-"
-499553788712E55ABE1345C61CCDB15D1CE04E83,499553788712E55ABE1345C61CCDB15D1CE04E83," carmanode properties
-
-The CARMA model extracts a set of rules from the data without requiring you to specify input or target fields. In contrast to Apriori, the CARMA node offers build settings for rule support (support for both antecedent and consequent) rather than just antecedent support. This means that the rules generated can be used for a wider variety of applications—for example, to find a list of products or services (antecedents) whose consequent is the item that you want to promote this holiday season.
-
-
-
-carmanode properties
-
-Table 1. carmanode properties
-
- carmanode Properties Values Property description
-
- inputs [field1 ... fieldn] CARMA models use a list of input fields, but no target. Weight and frequency fields are not used. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information.
- id_field field Field used as the ID field for model building.
- contiguous flag Used to specify whether IDs in the ID field are contiguous.
- use_transactional_data flag
- content_field field
- min_supp number(percent) Relates to rule support rather than antecedent support. The default is 20%.
- min_conf number(percent) The default is 20%.
- max_size number The default is 10.
- mode SimpleExpert The default is Simple.
- exclude_multiple flag Excludes rules with multiple consequents. The default is False.
- use_pruning flag The default is False.
- pruning_value number The default is 500.
- vary_support flag
-"
-CE14B5EFF03A17683C6AA16D02F62E1EBAD0D7F2,CE14B5EFF03A17683C6AA16D02F62E1EBAD0D7F2," applycarmanode properties
-
-You can use Carma modeling nodes to generate a Carma model nugget. The scripting name of this model nugget is applycarmanode. For more information on scripting the modeling node itself, see [carmanode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/carmanodeslots.htmlcarmanodeslots).
-
-
-
-applycarmanode properties
-
-Table 1. applycarmanode properties
-
- applycarmanode Properties Values Property description
-
- enable_sql_generation udfnative When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations.
-"
-CB130D4E1AE505CE39CBD49BF9D22359B9EC80AB_0,CB130D4E1AE505CE39CBD49BF9D22359B9EC80AB," cartnode properties
-
-The Classification and Regression (C&R) Tree node generates a decision tree that allows you to predict or classify future observations. The method uses recursive partitioning to split the training records into segments by minimizing the impurity at each step, where a node in the tree is considered ""pure"" if 100% of cases in the node fall into a specific category of the target field. Target and input fields can be numeric ranges or categorical (nominal, ordinal, or flags); all splits are binary (only two subgroups).
-
-
-
-cartnode properties
-
-Table 1. cartnode properties
-
- cartnode Properties Values Property description
-
- target field C&R Tree models require a single target and one or more input fields. A frequency field can also be specified. See the topic [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information.
- continue_training_existing_model flag
- objective StandardBoostingBaggingpsm psm is used for very large datasets, and requires a Server connection.
- model_output_type SingleInteractiveBuilder
- use_tree_directives flag
- tree_directives string Specify directives for growing the tree. Directives can be wrapped in triple quotes to avoid escaping newlines or quotes. Note that directives may be highly sensitive to minor changes in data or modeling options and may not generalize to other datasets.
- use_max_depth DefaultCustom
- max_depth integer Maximum tree depth, from 0 to 1000. Used only if use_max_depth = Custom.
- prune_tree flag Prune tree to avoid overfitting.
- use_std_err flag Use maximum difference in risk (in Standard Errors).
- std_err_multiplier number Maximum difference.
-"
-CB130D4E1AE505CE39CBD49BF9D22359B9EC80AB_1,CB130D4E1AE505CE39CBD49BF9D22359B9EC80AB," max_surrogates number Maximum surrogates.
- use_percentage flag
- min_parent_records_pc number
- min_child_records_pc number
- min_parent_records_abs number
- min_child_records_abs number
- use_costs flag
- costs structured Structured property.
- priors DataEqualCustom
- custom_priors structured Structured property.
- adjust_priors flag
- trails number Number of component models for boosting or bagging.
- set_ensemble_method VotingHighestProbabilityHighestMeanProbability Default combining rule for categorical targets.
- range_ensemble_method MeanMedian Default combining rule for continuous targets.
- large_boost flag Apply boosting to very large data sets.
- min_impurity number
- impurity_measure GiniTwoingOrdered
- train_pct number Overfit prevention set.
- set_random_seed flag Replicate results option.
- seed number
- calculate_variable_importance flag
- calculate_raw_propensities flag
-"
-C53BD428F2955B76BF24620A21A6461A1CC19F11,C53BD428F2955B76BF24620A21A6461A1CC19F11," applycartnode properties
-
-You can use C&R Tree modeling nodes to generate a C&R Tree model nugget. The scripting name of this model nugget is applycartnode. For more information on scripting the modeling node itself, see [cartnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/cartnodeslots.htmlcartnodeslots).
-
-
-
-applycartnode properties
-
-Table 1. applycartnode properties
-
- applycartnode Properties Values Property description
-
- calculate_conf flag Available when SQL generation is enabled; this property includes confidence calculations in the generated tree.
- display_rule_id flag Adds a field in the scoring output that indicates the ID for the terminal node to which each record is assigned.
- calculate_raw_propensities flag
-"
-B0B1665F022C9E781CE1AE94FA885266391FBCFE_0,B0B1665F022C9E781CE1AE94FA885266391FBCFE," chaidnode properties
-
-The CHAID node generates decision trees using chi-square statistics to identify optimal splits. Unlike the C&R Tree and Quest nodes, CHAID can generate non-binary trees, meaning that some splits have more than two branches. Target and input fields can be numeric range (continuous) or categorical. Exhaustive CHAID is a modification of CHAID that does a more thorough job of examining all possible splits but takes longer to compute.
-
-
-
-chaidnode properties
-
-Table 1. chaidnode properties
-
- chaidnode Properties Values Property description
-
- target field CHAID models require a single target and one or more input fields. You can also specify a frequency. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information.
- continue_training_existing_model flag
- objective Standard Boosting Bagging psm psm is used for very large datasets, and requires a server connection.
- model_output_type Single InteractiveBuilder
- use_tree_directives flag
- tree_directives string
- method Chaid ExhaustiveChaid
- use_max_depth Default Custom
- max_depth integer Maximum tree depth, from 0 to 1000. Used only if use_max_depth = Custom.
- use_percentage flag
- min_parent_records_pc number
- min_child_records_pc number
- min_parent_records_abs number
- min_child_records_abs number
- use_costs flag
- costs structured Structured property.
- trails number Number of component models for boosting or bagging.
-"
-B0B1665F022C9E781CE1AE94FA885266391FBCFE_1,B0B1665F022C9E781CE1AE94FA885266391FBCFE," set_ensemble_method Voting HighestProbability HighestMeanProbability Default combining rule for categorical targets.
- range_ensemble_method Mean Median Default combining rule for continuous targets.
- large_boost flag Apply boosting to very large data sets.
- split_alpha number Significance level for splitting.
- merge_alpha number Significance level for merging.
- bonferroni_adjustment flag Adjust significance values using Bonferroni method.
- split_merged_categories flag Allow resplitting of merged categories.
- chi_square Pearson LR Method used to calculate the chi-square statistic: Pearson or Likelihood Ratio
- epsilon number Minimum change in expected cell frequencies..
- max_iterations number Maximum iterations for convergence.
- set_random_seed integer
- seed number
- calculate_variable_importance flag
- calculate_raw_propensities flag
- calculate_adjusted_propensities flag
- adjusted_propensity_partition Test Validation
-"
-6644EAA4A383F7ED21C0CA1ADAE80A634867870A,6644EAA4A383F7ED21C0CA1ADAE80A634867870A," applychaidnode properties
-
-You can use CHAID modeling nodes to generate a CHAID model nugget. The scripting name of this model nugget is applychaidnode. For more information on scripting the modeling node itself, see [chaidnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/chaidnodeslots.htmlchaidnodeslots).
-
-
-
-applychaidnode properties
-
-Table 1. applychaidnode properties
-
- applychaidnode Properties Values Property description
-
- calculate_conf flag
- display_rule_id flag Adds a field in the scoring output that indicates the ID for the terminal node to which each record is assigned.
- calculate_raw_propensities flag
-"
-FD45693344E2B3CC3BDB7D1AA209AD9FBACB5309,FD45693344E2B3CC3BDB7D1AA209AD9FBACB5309," dvcharts properties
-
-With the Charts node, you can launch the chart builder and create chart definitions to save with your flow. Then when you run the node, chart output is generated.
-
-
-
-dvcharts properties
-
-Table 1. dvcharts properties
-
- dvcharts properties Data type Property description
-
- chart_definition list List of chart definitions, including chart type (string), chart name (string), chart template (string), and used fields (list of field names),
-"
-F24C445F7AB9052A92E411B826C60DEE2DF78448,F24C445F7AB9052A92E411B826C60DEE2DF78448," collectionnode properties
-
-The Collection node shows the distribution of values for one numeric field relative to the values of another. (It creates graphs that are similar to histograms.) It's useful for illustrating a variable or field whose values change over time. Using 3-D graphing, you can also include a symbolic axis displaying distributions by category.
-
-
-
-collectionnode properties
-
-Table 1. collectionnode properties
-
- collectionnode properties Data type Property description
-
- over_field field
- over_label_auto flag
- over_label string
- collect_field field
- collect_label_auto flag
- collect_label string
- three_D flag
- by_field field
- by_label_auto flag
- by_label string
- operation SumMeanMinMaxSDev
- color_field string
- panel_field string
- animation_field string
- range_mode AutomaticUserDefined
- range_min number
- range_max number
- bins ByNumberByWidth
- num_bins number
- bin_width number
- use_grid flag
-"
-F1B21B1232720492424BB07CD73C93DF2B9CD229,F1B21B1232720492424BB07CD73C93DF2B9CD229," coxregnode properties
-
-The Cox regression node enables you to build a survival model for time-to-event data in the presence of censored records. The model produces a survival function that predicts the probability that the event of interest has occurred at a given time (t) for given values of the input variables.
-
-
-
-coxregnode properties
-
-Table 1. coxregnode properties
-
- coxregnode Properties Values Property description
-
- survival_time field Cox regression models require a single field containing the survival times.
- target field Cox regression models require a single target field, and one or more input fields. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information.
- method Enter Stepwise BackwardsStepwise
- groups field
- model_type MainEffects Custom
- custom_terms [BP*Sex"" ""BP*Age]
- mode Expert Simple
- max_iterations number
- p_converge 1.0E-4 1.0E-5 1.0E-6 1.0E-7 1.0E-8 0
- l_converge 1.0E-1 1.0E-2 1.0E-3 1.0E-4 1.0E-5 0
- removal_criterion LR Wald Conditional
- probability_entry number
- probability_removal number
- output_display EachStep LastStep
- ci_enable flag
- ci_value 90 95 99
- correlation flag
- display_baseline flag
- survival flag
- hazard flag
- log_minus_log flag
- one_minus_survival flag
-"
-CEBDC984A6E14E7DC6B7526324BF06A0CE6FFE34,CEBDC984A6E14E7DC6B7526324BF06A0CE6FFE34," applycoxregnode properties
-
-You can use Cox modeling nodes to generate a Cox model nugget. The scripting name of this model nugget is applycoxregnode. For more information on scripting the modeling node itself, see [coxregnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/coxregnodeslots.htmlcoxregnodeslots).
-
-
-
-applycoxregnode properties
-
-Table 1. applycoxregnode properties
-
- applycoxregnode Properties Values Property description
-
- future_time_as IntervalsFields
- time_interval number
- num_future_times integer
- time_field field
- past_survival_time field
- all_probabilities flag
-"
-7566F3896A5AC6F89F4E7E18DC21B4A6A63864B4_0,7566F3896A5AC6F89F4E7E18DC21B4A6A63864B4," cplexoptnode properties
-
- The CPLEX Optimization node provides the ability to use complex mathematical (CPLEX) based optimization via an Optimization Programming Language (OPL) model file.
-
-
-
-cplexoptnode properties
-
-Table 1. cplexoptnode properties
-
- cplexoptnode properties Data type Property description
-
- opl_model_text string The OPL (Optimization Programming Language) script program that the CPLEX Optimization node will run and then generate the optimization result.
- opl_tuple_set_name string The tuple set name in the OPL model that corresponds to the incoming data. This isn't required and is normally not set via script. It should only be used for editing field mappings of a selected data source.
- data_input_map List of structured properties The input field mappings for a data source. This isn't required and is normally not set via script. It should only be used for editing field mappings of a selected data source.
-"
-7566F3896A5AC6F89F4E7E18DC21B4A6A63864B4_1,7566F3896A5AC6F89F4E7E18DC21B4A6A63864B4," md_data_input_map List of structured properties The field mappings between each tuple defined in the OPL, with each corresponding field data source (incoming data). Users can edit them each individually per data source. With this script, you can set the property directly to set all mappings at once. This setting isn't shown in the user interface. Each entity in the list is structured data: Data Source Tag. The tag of the data source. For example, for 0_Products_Type the tag is 0. Data Source Index. The physical sequence (index) of the data source. This is determined by the connection order. Source Node. The source node (annotation) of the data source. For example, for 0_Products_Type the source node is Products. Connected Node. The prior node (annotation) that connects the current CPLEX optimization node. For example, for 0_Products_Type the connected node is Type. Tuple Set Name. The tuple set name of the data source. It must match what's defined in the OPL. Tuple Field Name. The tuple set field name of the data source. It must match what's defined in the OPL tuple set definition. Storage Type. The field storage type. Possible values are int, float, or string.
-"
-7566F3896A5AC6F89F4E7E18DC21B4A6A63864B4_2,7566F3896A5AC6F89F4E7E18DC21B4A6A63864B4," Data Field Name. The field name of the data source. Example: [0,0,'Product','Type','Products','prod_id_tup','int','prod_id'], 0,0,'Product','Type','Products','prod_name_tup','string', 'prod_name'],1,1,'Components','Type','Components', 'comp_id_tup','int','comp_id'],1,1,'Components','Type', 'Components','comp_name_tup','string','comp_name']]
- opl_data_text string The definition of some variables or data used for the OPL.
- output_value_mode string Possible values are raw or dvar. If dvar is specified, on the Output tab the user must specify the object function variable name in OPL for the output. If raw is specified, the objective function will be output directly, regardless of name.
- decision_variable_name string The objective function variable name in defined in the OPL. This is enabled only when the output_value_mode property is set to dvar.
- objective_function_value_fieldname string The field name for the objective function value to use in the output. Default is _OBJECTIVE.
-"
-02D819D225558542A49AB6E43F94FE062A509EA5,02D819D225558542A49AB6E43F94FE062A509EA5," dataassetexport properties
-
-You can use the Data Asset Export node to write to remove data sources using connections, write to a data file on your local computer, or write data to a project.
-
-
-
-dataassetexport properties
-
-Table 1. dataassetexport properties
-
- dataassetexport properties Data type Property description
-
- user_settings string Escaped JSON string containing the interaction properties for the connection. Contact IBM for details about available interaction points. Example: user_settings: ""{""interactionProperties"":{""write_mode"":""write"",""file_name"":""output.csv"",""file_format"":""csv"",""quote_numerics"":true,""encoding"":""utf-8"",""first_line_header"":true,""include_types"":false}}"" Note that these values will change based on the type of connection you're using.
-"
-46915AFE957CA00C5B825C5F2BDC618BFEA43DE8,46915AFE957CA00C5B825C5F2BDC618BFEA43DE8," dataassetimport properties
-
- You can use the Data Asset import node to pull in data from remote data sources using connections or from your local computer.
-
-
-
-dataassetimport properties
-
-Table 1. dataassetimport properties
-
- dataassetimport properties Data type Property description
-
- connection_path string Name of the data asset (table) you want to access from a selected connection. The value of this property is: /asset_name or /schema_name/table_name.
-"
-CCDF1D5375060FCDE288A920A6F3C1B48454C6DB_0,CCDF1D5375060FCDE288A920A6F3C1B48454C6DB," dataauditnode properties
-
-The Data Audit node provides a comprehensive first look at the data, including summary statistics, histograms and distribution for each field, as well as information on outliers, missing values, and extremes. Results are displayed in an easy-to-read matrix that can be sorted and used to generate full-size graphs and data preparation nodes.
-
-
-
-dataauditnode properties
-
-Table 1. dataauditnode properties
-
- dataauditnode properties Data type Property description
-
- custom_fields flag
- fields [field1 … fieldN]
- overlay field
- display_graphs flag Used to turn the display of graphs in the output matrix on or off.
- basic_stats flag
- advanced_stats flag
- median_stats flag
- calculate CountBreakdown Used to calculate missing values. Select either, both, or neither calculation method.
- outlier_detection_method stdiqr Used to specify the detection method for outliers and extreme values.
- outlier_detection_std_outlier number If outlier_detection_method is std, specifies the number to use to define outliers.
- outlier_detection_std_extreme number If outlier_detection_method is std, specifies the number to use to define extreme values.
- outlier_detection_iqr_outlier number If outlier_detection_method is iqr, specifies the number to use to define outliers.
- outlier_detection_iqr_extreme number If outlier_detection_method is iqr, specifies the number to use to define extreme values.
- use_output_name flag Specifies whether a custom output name is used.
- output_name string If use_output_name is true, specifies the name to use.
- output_mode ScreenFile Used to specify target location for output generated from the output node.
-"
-CCDF1D5375060FCDE288A920A6F3C1B48454C6DB_1,CCDF1D5375060FCDE288A920A6F3C1B48454C6DB," output_format Formatted (.tab) Delimited (.csv) HTML (.html) Output (.cou) Used to specify the type of output.
- paginate_output flag When the output_format is HTML, causes the output to be separated into pages.
-"
-DAFB63017668C5DD34A07A1850CE9E9A37D0F525_0,DAFB63017668C5DD34A07A1850CE9E9A37D0F525," decisionlistnode properties
-
-The Decision List node identifies subgroups, or segments, that show a higher or lower likelihood of a given binary outcome relative to the overall population. For example, you might look for customers who are unlikely to churn or are most likely to respond favorably to a campaign. You can incorporate your business knowledge into the model by adding your own custom segments and previewing alternative models side by side to compare the results. Decision List models consist of a list of rules in which each rule has a condition and an outcome. Rules are applied in order, and the first rule that matches determines the outcome.
-
-
-
-decisionlistnode properties
-
-Table 1. decisionlistnode properties
-
- decisionlistnode Properties Values Property description
-
- target field Decision List models use a single target and one or more input fields. A frequency field can also be specified. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information.
- model_output_type ModelInteractiveBuilder
- search_direction UpDown Relates to finding segments; where Up is the equivalent of High Probability, and Down is the equivalent of Low Probability.
- target_value string If not specified, will assume true value for flags.
- max_rules integer The maximum number of segments excluding the remainder.
- min_group_size integer Minimum segment size.
- min_group_size_pct number Minimum segment size as a percentage.
- confidence_level number Minimum threshold that an input field has to improve the likelihood of response (give lift), to make it worth adding to a segment definition.
- max_segments_per_rule integer
- mode SimpleExpert
- bin_method EqualWidthEqualCount
- bin_count number
- max_models_per_cycle integer Search width for lists.
-"
-DAFB63017668C5DD34A07A1850CE9E9A37D0F525_1,DAFB63017668C5DD34A07A1850CE9E9A37D0F525," max_rules_per_cycle integer Search width for segment rules.
- segment_growth number
- include_missing flag
- final_results_only flag
- reuse_fields flag Allows attributes (input fields which appear in rules) to be re-used.
- max_alternatives integer
- calculate_raw_propensities flag
-"
-082349F7C1E486D18BCA3BB7569D4DE25A8E81A7,082349F7C1E486D18BCA3BB7569D4DE25A8E81A7," applydecisionlistnode properties
-
-You can use Decision List modeling nodes to generate a Decision List model nugget. The scripting name of this model nugget is applydecisionlistnode. For more information on scripting the modeling node itself, see [decisionlistnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/decisionlistnodeslots.htmldecisionlistnodeslots).
-
-
-
-applydecisionlistnode properties
-
-Table 1. applydecisionlistnode properties
-
- applydecisionlistnode Properties Values Property description
-
- enable_sql_generation flag When true, SPSS Modeler will try to push back the Decision List model to SQL.
- calculate_raw_propensities flag
-"
-CA6F118DBE9A1782053FE1F5F4697DDA07A7A365_0,CA6F118DBE9A1782053FE1F5F4697DDA07A7A365," Flow properties
-
-You can control a variety of flow properties with scripting. To reference flow properties, you must set the execution method to use scripts:
-
-stream = modeler.script.stream()
-stream.setPropertyValue(""execute_method"", ""Script"")
-
-The previous example uses the node property to create a list of all nodes in the flow and write that list in the flow annotations. The annotation produced looks like this:
-
-This flow is called ""druglearn"" and contains the following nodes:
-
-type node called ""Define Types""
-derive node called ""Na_to_K""
-variablefile node called ""DRUG1n""
-neuralnetwork node called ""Drug""
-c50 node called ""Drug""
-filter node called ""Discard Fields""
-
-Flow properties are described in the following table.
-
-
-
-Flow properties
-
-Table 1. Flow properties
-
- Property name Data type Property description
-
- execute_method Normal Script
- date_format ""DDMMYY"" ""MMDDYY"" ""YYMMDD"" ""YYYYMMDD"" ""YYYYDDD"" DAY MONTH ""DD-MM-YY"" ""DD-MM-YYYY"" ""MM-DD-YY"" ""MM-DD-YYYY"" ""DD-MON-YY"" ""DD-MON-YYYY"" ""YYYY-MM-DD"" ""DD.MM.YY"" ""DD.MM.YYYY"" ""MM.DD.YYYY"" ""DD.MON.YY"" ""DD.MON.YYYY"" ""DD/MM/YY"" ""DD/MM/YYYY"" ""MM/DD/YY"" ""MM/DD/YYYY"" ""DD/MON/YY"" ""DD/MON/YYYY"" MON YYYY q Q YYYY ww WK YYYY
- date_baseline number
- date_2digit_baseline number
-"
-CA6F118DBE9A1782053FE1F5F4697DDA07A7A365_1,CA6F118DBE9A1782053FE1F5F4697DDA07A7A365," time_format ""HHMMSS"" ""HHMM"" ""MMSS"" ""HH:MM:SS"" ""HH:MM"" ""MM:SS"" ""(H)H:(M)M:(S)S"" ""(H)H:(M)M"" ""(M)M:(S)S"" ""HH.MM.SS"" ""HH.MM"" ""MM.SS"" ""(H)H.(M)M.(S)S"" ""(H)H.(M)M"" ""(M)M.(S)S""
- time_rollover flag
- import_datetime_as_string flag
- decimal_places number
- decimal_symbol Default Period Comma
- angles_in_radians flag
- use_max_set_size flag
- max_set_size number
- ruleset_evaluation Voting FirstHit
- refresh_source_nodes flag Use to refresh import nodes automatically upon flow execution.
- script string
- annotation string
- name string This property is read-only. If you want to change the name of a flow, you should save it with a different name.
- parameters Use this property to update flow parameters from within a stand-alone script.
- nodes See detailed information that follows.
- encoding SystemDefault ""UTF-8""
- stream_rewriting boolean
- stream_rewriting_maximise_sql boolean
- stream_rewriting_optimise_clem_ execution boolean
- stream_rewriting_optimise_syntax_ execution boolean
- enable_parallelism boolean
- sql_generation boolean
- database_caching boolean
- sql_logging boolean
- sql_generation_logging boolean
- sql_log_native boolean
- sql_log_prettyprint boolean
- record_count_suppress_input boolean
- record_count_feedback_interval integer
- use_stream_auto_create_node_ settings boolean If true, then flow-specific settings are used, otherwise user preferences are used.
-"
-CA6F118DBE9A1782053FE1F5F4697DDA07A7A365_2,CA6F118DBE9A1782053FE1F5F4697DDA07A7A365," create_model_applier_for_new_ models boolean If true, when a model builder creates a new model, and it has no active update links, a new model applier is added.
- create_model_applier_update_links createEnabled createDisabled doNotCreate Defines the type of link created when a model applier node is added automatically.
- create_source_node_from_builders boolean If true, when a source builder creates a new source output, and it has no active update links, a new import node is added.
- create_source_node_update_links createEnabled createDisabled doNotCreate Defines the type of link created when an import node is added automatically.
- has_coordinate_system boolean If true, applies a coordinate system to the entire flow.
- coordinate_system string The name of the selected projected coordinate system.
- deployment_area modelRefresh Scoring None Choose how you want to deploy the flow. If this value is set to None, no other deployment entries are used.
- scoring_terminal_node_id string Choose the scoring branch in the flow. It can be any terminal node in the flow.
-"
-ABD445CE46B0329348E6AD464735BDB1D525EDAA,ABD445CE46B0329348E6AD464735BDB1D525EDAA," SuperNode properties
-
-The tables in this section describe properties that are specific to SuperNodes. Note that common node properties also apply to SuperNodes.
-
-
-
-Terminal supernode properties
-
-Table 1. Terminal supernode properties
-
- Property name Property type/List of values Property description
-
-"
-84573D3FDA739326819C7303EA21DB6DDF2ACC21_0,84573D3FDA739326819C7303EA21DB6DDF2ACC21," derivenode properties
-
-The Derive node modifies data values or creates new fields from one or more existing fields. It creates fields of type formula, flag, nominal, state, count, and conditional.
-
-
-
-derivenode properties
-
-Table 1. derivenode properties
-
- derivenode properties Data type Property description
-
- new_name string Name of new field.
- mode SingleMultiple Specifies single or multiple fields.
- fields list Used in Multiple mode only to select multiple fields.
- name_extension string Specifies the extension for the new field name(s).
- add_as SuffixPrefix Adds the extension as a prefix (at the beginning) or as a suffix (at the end) of the field name.
- result_type FormulaFlagSetStateCountConditional The six types of new fields that you can create.
- formula_expr string Expression for calculating a new field value in a Derive node.
- flag_expr string
- flag_true string
- flag_false string
- set_default string
- set_value_cond string Structured to supply the condition associated with a given value.
- state_on_val string Specifies the value for the new field when the On condition is met.
- state_off_val string Specifies the value for the new field when the Off condition is met.
- state_on_expression string
- state_off_expression string
- state_initial OnOff Assigns each record of the new field an initial value of On or Off. This value can change as each condition is met.
- count_initial_val string
- count_inc_condition string
- count_inc_expression string
- count_reset_condition string
- cond_if_cond string
- cond_then_expr string
- cond_else_expr string
-"
-84573D3FDA739326819C7303EA21DB6DDF2ACC21_1,84573D3FDA739326819C7303EA21DB6DDF2ACC21," formula_measure_type Range / MeasureType.RANGEDiscrete / MeasureType.DISCRETEFlag / MeasureType.FLAGSet / MeasureType.SETOrderedSet / MeasureType.ORDERED_SETTypeless / MeasureType.TYPELESSCollection / MeasureType.COLLECTIONGeospatial / MeasureType.GEOSPATIAL This property can be used to define the measurement associated with the derived field. The setter function can be passed either a string or one of the MeasureType values. The getter will always return on the MeasureType values.
- collection_measure Range / MeasureType.RANGEFlag / MeasureType.FLAGSet / MeasureType.SETOrderedSet / MeasureType.ORDERED_SETTypeless / MeasureType.TYPELESS For collection fields (lists with a depth of 0), this property defines the measurement type associated with the underlying values.
- geo_type PointMultiPointLineStringMultiLineStringPolygonMultiPolygon For geospatial fields, this property defines the type of geospatial object represented by this field. This should be consistent with the list depth of the values
-"
-16048584B029B9BE5DA50D7F9D9AE85FFE740718_0,16048584B029B9BE5DA50D7F9D9AE85FFE740718," discriminantnode properties
-
-Discriminant analysis makes more stringent assumptions than logistic regression, but can be a valuable alternative or supplement to a logistic regression analysis when those assumptions are met.
-
-
-
-discriminantnode properties
-
-Table 1. discriminantnode properties
-
- discriminantnode Properties Values Property description
-
- target field Discriminant models require a single target field and one or more input fields. Weight and frequency fields aren't used. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information.
- method Enter Stepwise
- mode Simple Expert
- prior_probabilities AllEqual ComputeFromSizes
- covariance_matrix WithinGroups SeparateGroups
- means flag Statistics options in the node properties under Expert Options.
- univariate_anovas flag
- box_m flag
- within_group_covariance flag
- within_groups_correlation flag
- separate_groups_covariance flag
- total_covariance flag
- fishers flag
- unstandardized flag
- casewise_results flag Classification options in the node properties under Expert Options.
- limit_to_first number Default value is 10.
- summary_table flag
- leave_one_classification flag
- separate_groups_covariance flag Matrices option Separate-groups covariance.
- territorial_map flag
- combined_groups flag Plot option Combined-groups.
- separate_groups flag Plot option Separate-groups.
- summary_of_steps flag
- F_pairwise flag
- stepwise_method WilksLambda UnexplainedVariance MahalanobisDistance SmallestF RaosV
- V_to_enter number
- criteria UseValue UseProbability
- F_value_entry number Default value is 3.84.
-"
-16048584B029B9BE5DA50D7F9D9AE85FFE740718_1,16048584B029B9BE5DA50D7F9D9AE85FFE740718," F_value_removal number Default value is 2.71.
- probability_entry number Default value is 0.05.
- probability_removal number Default value is 0.10.
- calculate_variable_importance flag
- calculate_raw_propensities flag
-"
-2C1E91540BD58780F781F8A06E2B5C62035CA84B,2C1E91540BD58780F781F8A06E2B5C62035CA84B," applydiscriminantnode properties
-
-You can use Discriminant modeling nodes to generate a Discriminant model nugget. The scripting name of this model nugget is applydiscriminantnode. For more information on scripting the modeling node itself, see [discriminantnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/discriminantnodeslots.htmldiscriminantnodeslots).
-
-
-
-applydiscriminantnode properties
-
-Table 1. applydiscriminantnode properties
-
- applydiscriminantnode Properties Values Property description
-
- calculate_raw_propensities flag
-"
-BAD5210D0F8114CD4E9B1DB05EB92F0EABC6E233,BAD5210D0F8114CD4E9B1DB05EB92F0EABC6E233," distinctnode properties
-
- The Distinct node removes duplicate records, either by passing the first distinct record to the data flow or by discarding the first record and passing any duplicates to the data flow instead.
-
-Example
-
-node = stream.create(""distinct"", ""My node"")
-node.setPropertyValue(""mode"", ""Include"")
-node.setPropertyValue(""fields"", [""Age"" ""Sex""])
-node.setPropertyValue(""keys_pre_sorted"", True)
-
-
-
-distinctnode properties
-
-Table 1. distinctnode properties
-
- distinctnode properties Data type Property description
-
- mode Include Discard You can include the first distinct record in the data stream, or discard the first distinct record and pass any duplicate records to the data stream instead.
- composite_value Structured slot See example below.
- composite_values Structured slot See example below.
- inc_record_count flag Creates an extra field that specifies how many input records were aggregated to form each aggregate record.
- count_field string Specifies the name of the record count field.
- default_ascending flag
- low_distinct_key_count flag Specifies that you have only a small number of records and/or a small number of unique values of the key field(s).
- keys_pre_sorted flag Specifies that all records with the same key values are grouped together in the input.
- disable_sql_generation flag
- grouping_fields array Lists the field or fields used to determine whether records are identical.
- sort_keys array Lists the fields used to determine how records are sorted within each group of duplicates, and whether they're sorted in ascending or descending order. You must specify a sort order if you've chosen to include or exclude the first record in each group, and if it matters to you which record is treated as the first.
-"
-DCB8FB91999D79190F3E5D54DE32B1B7F1401779,DCB8FB91999D79190F3E5D54DE32B1B7F1401779," distributionnode properties
-
-The Distribution node shows the occurrence of symbolic (categorical) values, such as mortgage type or gender. Typically, you might use the Distribution node to show imbalances in the data, which you could then rectify using a Balance node before creating a model.
-
-
-
-distributionnode properties
-
-Table 1. distributionnode properties
-
- distributionnode properties Data type Property description
-
- plot SelectedFieldsFlags
- x_field field
- color_field field Overlay field.
- normalize flag
- sort_mode ByOccurenceAlphabetic
-"
-5DCC543A106EC708FF97817AA0CFDEF8CB89894D,5DCC543A106EC708FF97817AA0CFDEF8CB89894D," ensemblenode properties
-
-The Ensemble node combines two or more model nuggets to obtain more accurate predictions than can be gained from any one model.
-
-
-
-ensemblenode properties
-
-Table 1. ensemblenode properties
-
- ensemblenode properties Data type Property description
-
- ensemble_target_field field Specifies the target field for all models used in the ensemble.
- filter_individual_model_output flag Specifies whether scoring results from individual models should be suppressed.
- flag_ensemble_method VotingConfidenceWeightedVotingRawPropensityWeightedVotingAdjustedPropensityWeightedVotingHighestConfidenceAverageRawPropensityAverageAdjustedPropensity Specifies the method used to determine the ensemble score. This setting applies only if the selected target is a flag field.
- set_ensemble_method VotingConfidenceWeightedVotingHighestConfidence Specifies the method used to determine the ensemble score. This setting applies only if the selected target is a nominal field.
- flag_voting_tie_selection RandomHighestConfidenceRawPropensityAdjustedPropensity If a voting method is selected, specifies how ties are resolved. This setting applies only if the selected target is a flag field.
-"
-98B447B5AF1CD17524E2BA82FED83B8966DDFEFB_0,98B447B5AF1CD17524E2BA82FED83B8966DDFEFB," evaluationnode properties
-
-The Evaluation node helps to evaluate and compare predictive models. The evaluation chart shows how well models predict particular outcomes. It sorts records based on the predicted value and confidence of the prediction. It splits the records into groups of equal size (quantiles) and then plots the value of the business criterion for each quantile from highest to lowest. Multiple models are shown as separate lines in the plot.
-
-
-
-evaluationnode properties
-
-Table 1. evaluationnode properties
-
- evaluationnode properties Data type Property description
-
- chart_type Gains Response Lift Profit ROI ROC
- inc_baseline flag
- field_detection_method Metadata Name
- use_fixed_cost flag
- cost_value number
- cost_field string
- use_fixed_revenue flag
- revenue_value number
- revenue_field string
- use_fixed_weight flag
- weight_value number
- weight_field field
- n_tile Quartiles Quintles Deciles Vingtiles Percentiles 1000-tiles
- cumulative flag
- style Line Point
- point_type Rectangle Dot Triangle Hexagon Plus Pentagon Star BowTie HorizontalDash VerticalDash IronCross Factory House Cathedral OnionDome ConcaveTriangleOblateGlobe CatEye FourSidedPillow RoundRectangle Fan
- export_data flag
- data_filename string
- delimiter string
- new_line flag
- inc_field_names flag
- inc_best_line flag
- inc_business_rule flag
- business_rule_condition string
- plot_score_fields flag
- score_fields [field1 ... fieldN]
- target_field field
-"
-98B447B5AF1CD17524E2BA82FED83B8966DDFEFB_1,98B447B5AF1CD17524E2BA82FED83B8966DDFEFB," use_hit_condition flag
- hit_condition string
- use_score_expression flag
- score_expression string
- caption_auto flag
- split_by_partition boolean If a partition field is used to split records into training, test, and validation samples, use this option to display a separate evaluation chart for each partition.
-"
-6CB2797AB2EF876F05A39F4CEE08EEE4249716D8,6CB2797AB2EF876F05A39F4CEE08EEE4249716D8," Flow script example: Training a neural net
-
-You can use a flow to train a neural network model when executed. Normally, to test the model, you might run the modeling node to add the model to the flow, make the appropriate connections, and run an Analysis node.
-
-Using an SPSS Modeler script, you can automate the process of testing the model nugget after you create it. Following is an example:
-
-stream = modeler.script.stream()
-neuralnetnode = stream.findByType(""neuralnetwork"", None)
-results = []
-neuralnetnode.run(results)
-appliernode = stream.createModelApplierAt(results[0], ""Drug"", 594, 187)
-analysisnode = stream.createAt(""analysis"", ""Drug"", 688, 187)
-typenode = stream.findByType(""type"", None)
-stream.linkBetween(appliernode, typenode, analysisnode)
-analysisnode.run([])
-
-The following bullets describe each line in this script example.
-
-
-
-* The first line defines a variable that points to the current flow
-* In line 2, the script finds the Neural Net builder node
-* In line 3, the script creates a list where the execution results can be stored
-* In line 4, the Neural Net model nugget is created. This is stored in the list defined on line 3.
-* In line 5, a model apply node is created for the model nugget and placed on the flow canvas
-* In line 6, an analysis node called Drug is created
-* In line 7, the script finds the Type node
-* In line 8, the script connects the model apply node created in line 5 between the Type node and the Analysis node
-* Finally, the Analysis node runs to produce the Analysis report
-
-
-
-It's possible to use a script to build and run a flow from scratch, starting with a blank canvas. To learn more about the scripting language in general, see [Scripting overview](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/using_scripting.html).
-"
-123987D173C0DB88D8E1F59AF46A8D9313A8E601,123987D173C0DB88D8E1F59AF46A8D9313A8E601," extensionexportnode properties
-
-With the Extension Export node, you can run R or Python for Spark scripts to export data.
-
-
-
-extensionexportnode properties
-
-Table 1. extensionexportnode properties
-
- extensionexportnode properties Data type Property description
-
- syntax_type RPython Specify which script runs: R or Python (R is the default).
- r_syntax string The R scripting syntax to run.
- python_syntax string The Python scripting syntax to run.
- convert_flags StringsAndDoubles LogicalValues Option to convert flag fields.
- convert_missing flag Option to convert missing values to the R NA value.
-"
-9AA00A347BD6F7725014C840F3D39BC0DDF26599,9AA00A347BD6F7725014C840F3D39BC0DDF26599," extensionimportnode properties
-
- With the Extension Import node, you can run R or Python for Spark scripts to import data.
-
-
-
-extensionimportnode properties
-
-Table 1. extensionimportnode properties
-
- extensionimportnode properties Data type Property description
-
- syntax_type RPython Specify which script runs – R or Python (R is the default).
-"
-7985570F01D50D057EBD4FAFCF8C8A1BCACB3006,7985570F01D50D057EBD4FAFCF8C8A1BCACB3006," extensionmodelnode properties
-
-With the Extension Model node, you can run R or Python for Spark scripts to build and score results.
-
-Note that many of the properties and much of the information on this page is only applicable to SPSS Modeler Desktop streams.
-
-
-
-extensionmodelnode properties
-
-Table 1. extensionmodelnode properties
-
- extensionmodelnode Properties Values Property description
-
- syntax_type RPython Specify which script runs: R or Python (R is the default).
- r_build_syntax string The R scripting syntax for model building.
- r_score_syntax string The R scripting syntax for model scoring.
- python_build_syntax string The Python scripting syntax for model building.
- python_score_syntax string The Python scripting syntax for model scoring.
- convert_flags StringsAndDoubles LogicalValues Option to convert flag fields.
- convert_missing flag Option to convert missing values to R NA value.
- convert_datetime flag Option to convert variables with date or datetime formats to R date/time formats.
- convert_datetime_class POSIXct POSIXlt Options to specify to what format variables with date or datetime formats are converted.
-"
-E85352E9588726771A8CD594A268ECA7D04379BD,E85352E9588726771A8CD594A268ECA7D04379BD," applyextension properties
-
-You can use Extension Model nodes to generate an Extension model nugget. The scripting name of this model nugget is applyextension. For more information on scripting the modeling node itself, see [extensionmodelnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/extensionmodelnodeslots.htmlextensionmodelnodeslots).
-
-
-
-applyextension properties
-
-Table 1. applyextension properties
-
- applyextension Properties Values Property Description
-
- r_syntax string R scripting syntax for model scoring.
- python_syntax string Python scripting syntax for model scoring.
- use_batch_size flag Enable use of batch processing.
- batch_size integer Specify the number of data records to be included in each batch.
- convert_flags StringsAndDoubles LogicalValues Option to convert flag fields.
- convert_missing flag Option to convert missing values to the R NA value.
-"
-14005F26F286B03F8AC692D42E9F3DFCE1F66962,14005F26F286B03F8AC692D42E9F3DFCE1F66962," extensionoutputnode properties
-
-With the Extension Output node, you can analyze data and the results of model scoring using your own custom R or Python for Spark script. The output of the analysis can be text or graphical.
-
-Note that many of the properties on this page are for streams from SPSS Modeler desktop.
-
-
-
-extensionoutputnode properties
-
-Table 1. extensionoutputnode properties
-
- extensionoutputnode properties Data type Property description
-
- syntax_type RPython Specify which script runs: R or Python (R is the default).
- r_syntax string R scripting syntax for model scoring.
- python_syntax string Python scripting syntax for model scoring.
- convert_flags StringsAndDoubles LogicalValues Option to convert flag fields.
- convert_missing flag Option to convert missing values to the R NA value.
- convert_datetime flag Option to convert variables with date or datetime formats to R date/time formats.
- convert_datetime_class POSIXct POSIXlt Options to specify to what format variables with date or datetime formats are converted.
- output_to Screen File Specify the output type (Screen or File).
- output_type Graph Text Specify whether to produce graphical or text output.
- full_filename string File name to use for the generated output.
-"
-D487DB53087C5FD4CD2A25112F1F8A8E496EFC72,D487DB53087C5FD4CD2A25112F1F8A8E496EFC72," extensionprocessnode properties
-
- With the Extension Transform node, you can take data from a flow and apply transformations to the data using R scripting or Python for Spark scripting.
-
-
-
-extensionprocessnode properties
-
-Table 1. extensionprocessnode properties
-
- extensionprocessnode properties Data type Property description
-
- syntax_type RPython Specify which script runs – R or Python (R is the default).
- r_syntax string The R scripting syntax to run.
- python_syntax string The Python scripting syntax to run.
- use_batch_size flag Enable use of batch processing.
- batch_size integer Specify the number of data records to include in each batch.
- convert_flags StringsAndDoubles LogicalValues Option to convert flag fields.
- convert_missing flag Option to convert missing values to the R NA value.
-"
-5EDDA143971CE5735307FEDE23FB0CD7E963264C,5EDDA143971CE5735307FEDE23FB0CD7E963264C," factornode properties
-
-The PCA/Factor node provides powerful data-reduction techniques to reduce the complexity of your data. Principal components analysis (PCA) finds linear combinations of the input fields that do the best job of capturing the variance in the entire set of fields, where the components are orthogonal (perpendicular) to each other. Factor analysis attempts to identify underlying factors that explain the pattern of correlations within a set of observed fields. For both approaches, the goal is to find a small number of derived fields that effectively summarizes the information in the original set of fields.
-
-
-
-factornode properties
-
-Table 1. factornode properties
-
- factornode Properties Values Property description
-
- inputs [field1 ... fieldN] PCA/Factor models use a list of input fields, but no target. Weight and frequency fields are not used. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information.
- method PCULSGLSMLPAFAlphaImage
- mode SimpleExpert
- max_iterations number
- complete_records flag
- matrix CorrelationCovariance
- extract_factors ByEigenvaluesByFactors
- min_eigenvalue number
- max_factor number
- rotation NoneVarimaxDirectObliminEquamaxQuartimaxPromax
- delta number If you select DirectOblimin as your rotation data type, you can specify a value for delta. If you don't specify a value, the default value for delta is used.
- kappa number If you select Promax as your rotation data type, you can specify a value for kappa. If you don't specify a value, the default value for kappa is used.
- sort_values flag
-"
-92442D67350644BFCAEC2B2A47B98F4EDE943DC3,92442D67350644BFCAEC2B2A47B98F4EDE943DC3," applyfactornode properties
-
-You can use PCA/Factor modeling nodes to generate a PCA/Factor model nugget. The scripting name of this model nugget is applyfactornode. No other properties exist for this model nugget. For more information on scripting the modeling node itself, see [factornode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/factornodeslots.htmlfactornodeslots).
-"
-D5863A9857F07023885A810210DFB819AD692ED7,D5863A9857F07023885A810210DFB819AD692ED7," Setting algorithm properties
-
-For the Auto Classifier, Auto Numeric, and Auto Cluster nodes, you can set properties for specific algorithms used by the node by using the general form:
-
-autonode.setKeyedPropertyValue(, , )
-
-For example:
-
-node.setKeyedPropertyValue(""neuralnetwork"", ""method"", ""MultilayerPerceptron"")
-
-Algorithm names for the Auto Classifier node are cart, chaid, quest, c50, logreg, decisionlist, bayesnet, discriminant, svm and knn.
-
-Algorithm names for the Auto Numeric node are cart, chaid, neuralnetwork, genlin, svm, regression, linear and knn.
-
-Algorithm names for the Auto Cluster node are twostep, k-means, and kohonen.
-
-Property names are standard as documented for each algorithm node.
-
-Algorithm properties that contain periods or other punctuation must be wrapped in single quotes. For example:
-
-node.setKeyedPropertyValue(""logreg"", ""tolerance"", ""1.0E-5"")
-
-Multiple values can also be assigned for a property. For example:
-
-node.setKeyedPropertyValue(""decisionlist"", ""search_direction"", [""Up"", ""Down""])
-
-To enable or disable the use of a specific algorithm:
-
-node.setPropertyValue(""chaid"", True)
-
-Note: In cases where certain algorithm options aren't available in the Auto Classifier node, or when only a single value can be specified rather than a range of values, the same limits apply with scripting as when accessing the node in the standard manner.
-"
-055727FBA02274A87D30DA162E6F5ECA3ACE233D_0,055727FBA02274A87D30DA162E6F5ECA3ACE233D," featureselectionnode properties
-
-The Feature Selection node screens input fields for removal based on a set of criteria (such as the percentage of missing values); it then ranks the importance of remaining inputs relative to a specified target. For example, given a data set with hundreds of potential inputs, which are most likely to be useful in modeling patient outcomes?
-
-
-
-featureselectionnode properties
-
-Table 1. featureselectionnode properties
-
- featureselectionnode Properties Values Property description
-
- target field Feature Selection models rank predictors relative to the specified target. Weight and frequency fields are not used. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.html) for more information.
- screen_single_category flag If True, screens fields that have too many records falling into the same category relative to the total number of records.
- max_single_category number Specifies the threshold used when screen_single_category is True.
- screen_missing_values flag If True, screens fields with too many missing values, expressed as a percentage of the total number of records.
- max_missing_values number
- screen_num_categories flag If True, screens fields with too many categories relative to the total number of records.
- max_num_categories number
- screen_std_dev flag If True, screens fields with a standard deviation of less than or equal to the specified minimum.
- min_std_dev number
- screen_coeff_of_var flag If True, screens fields with a coefficient of variance less than or equal to the specified minimum.
- min_coeff_of_var number
- criteria PearsonLikelihoodCramersVLambda When ranking categorical predictors against a categorical target, specifies the measure on which the importance value is based.
-"
-055727FBA02274A87D30DA162E6F5ECA3ACE233D_1,055727FBA02274A87D30DA162E6F5ECA3ACE233D," unimportant_below number Specifies the threshold p values used to rank variables as important, marginal, or unimportant. Accepts values from 0.0 to 1.0.
- important_above number Accepts values from 0.0 to 1.0.
- unimportant_label string Specifies the label for the unimportant ranking.
- marginal_label string
- important_label string
- selection_mode ImportanceLevelImportanceValueTopN
- select_important flag When selection_mode is set to ImportanceLevel, specifies whether to select important fields.
- select_marginal flag When selection_mode is set to ImportanceLevel, specifies whether to select marginal fields.
- select_unimportant flag When selection_mode is set to ImportanceLevel, specifies whether to select unimportant fields.
-"
-9A5011652C8FAD610EF217B82B7F28C8256DCE8B,9A5011652C8FAD610EF217B82B7F28C8256DCE8B," applyfeatureselectionnode properties
-
-You can use Feature Selection modeling nodes to generate a Feature Selection model nugget. The scripting name of this model nugget is applyfeatureselectionnode. For more information on scripting the modeling node itself, see [featureselectionnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/featureselectionnodeslots.htmlfeatureselectionnodeslots).
-
-
-
-applyfeatureselectionnode properties
-
-Table 1. applyfeatureselectionnode properties
-
- applyfeatureselectionnode Properties Values Property description
-
-"
-76910487C819D14F9FEFCBC6252F25652AF1E65B,76910487C819D14F9FEFCBC6252F25652AF1E65B," fillernode properties
-
-The Filler node replaces field values and changes storage. You can choose to replace values based on a CLEM condition, such as @BLANK(@FIELD). Alternatively, you can choose to replace all blanks or null values with a specific value. A Filler node is often used together with a Type node to replace missing values.
-
-Example
-
-node = stream.create(""filler"", ""My node"")
-node.setPropertyValue(""fields"", [""Age""])
-node.setPropertyValue(""replace_mode"", ""Always"")
-node.setPropertyValue(""condition"", ""(""Age"" > 60) and (""Sex"" = ""M"""")
-node.setPropertyValue(""replace_with"", """"old man"""")
-
-
-
-fillernode properties
-
-Table 1. fillernode properties
-
- fillernode properties Data type Property description
-
- fields list Fields from the dataset whose values will be examined and replaced.
- replace_mode AlwaysConditionalBlankNullBlankAndNull You can replace all values, blank values, or null values, or replace based on a specified condition.
-"
-D91044A492D05F87613BBA485CD2FAE1F54764DB_0,D91044A492D05F87613BBA485CD2FAE1F54764DB," filternode properties
-
-The Filter node filters (discards) fields, renames fields, and maps fields from one import node to another.
-
-Using the default_include property. Note that setting the value of the default_include property doesn't automatically include or exclude all fields; it simply determines the default for the current selection. This is functionally equivalent to selecting the Include All Fields option in the Filter node properties. For example, suppose you run the following script:
-
-node = modeler.script.stream().create(""filter"", ""Filter"")
-node.setPropertyValue(""default_include"", False)
- Include these two fields in the list
-for f in [""Age"", ""Sex""]:
-node.setKeyedPropertyValue(""include"", f, True)
-
-This will cause the node to pass the fields Age and Sex and discard all others. Now suppose you run the same script again but name two different fields:
-
-node = modeler.script.stream().create(""filter"", ""Filter"")
-node.setPropertyValue(""default_include"", False)
- Include these two fields in the list
-for f in [""BP"", ""Na""]:
-node.setKeyedPropertyValue(""include"", f, True)
-
-This will add two more fields to the filter so that a total of four fields are passed (Age, Sex, BP, Na). In other words, resetting the value of default_include to False doesn't automatically reset all fields.
-
-Alternatively, if you now change default_include to True, either using a script or in the Filter node dialog box, this would flip the behavior so the four fields listed previously would be discarded rather than included. When in doubt, experimenting with the controls in the Filter node properties may be helpful in understanding this interaction.
-
-
-
-filternode properties
-
-Table 1. filternode properties
-
- filternode properties Data type Property description
-
-"
-D91044A492D05F87613BBA485CD2FAE1F54764DB_1,D91044A492D05F87613BBA485CD2FAE1F54764DB," default_include flag Keyed property to specify whether the default behavior is to pass or filter fields: Note that setting this property doesn't automatically include or exclude all fields; it simply determines whether selected fields are included or excluded by default.
-"
-916F0A90D0B8383F2353B3320628E23E38B380B5_0,916F0A90D0B8383F2353B3320628E23E38B380B5," genlinnode properties
-
-The Generalized Linear (GenLin) model expands the general linear model so that the dependent variable is linearly related to the factors and covariates through a specified link function. Moreover, the model allows for the dependent variable to have a non-normal distribution. It covers the functionality of a wide number of statistical models, including linear regression, logistic regression, loglinear models for count data, and interval-censored survival models.
-
-
-
-genlinnode properties
-
-Table 1. genlinnode properties
-
- genlinnode Properties Values Property description
-
- target field GenLin models require a single target field which must be a nominal or flag field, and one or more input fields. A weight field can also be specified. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information.
- use_weight flag
- weight_field field Field type is only continuous.
- target_represents_trials flag
- trials_type VariableFixedValue
- trials_field field Field type is continuous, flag, or ordinal.
- trials_number number Default value is 10.
- model_type MainEffectsMainAndAllTwoWayEffects
- offset_type VariableFixedValue
- offset_field field Field type is only continuous.
- offset_value number Must be a real number.
- base_category LastFirst
- include_intercept flag
- mode SimpleExpert
- distribution BINOMIALGAMMAIGAUSSNEGBINNORMALPOISSONTWEEDIEMULTINOMIAL IGAUSS: Inverse Gaussian. NEGBIN: Negative binomial.
- negbin_para_type SpecifyEstimate
- negbin_parameter number Default value is 1. Must contain a non-negative real number.
- tweedie_parameter number
-"
-916F0A90D0B8383F2353B3320628E23E38B380B5_1,916F0A90D0B8383F2353B3320628E23E38B380B5," link_function IDENTITYCLOGLOGLOGLOGCLOGITNEGBINNLOGLOGODDSPOWERPROBITPOWERCUMCAUCHITCUMCLOGLOGCUMLOGITCUMNLOGLOGCUMPROBIT CLOGLOG: Complementary log-log. LOGC: log complement. NEGBIN: Negative binomial. NLOGLOG: Negative log-log. CUMCAUCHIT: Cumulative cauchit. CUMCLOGLOG: Cumulative complementary log-log. CUMLOGIT: Cumulative logit. CUMNLOGLOG: Cumulative negative log-log. CUMPROBIT: Cumulative probit.
- power number Value must be real, nonzero number.
- method HybridFisherNewtonRaphson
- max_fisher_iterations number Default value is 1; only positive integers allowed.
- scale_method MaxLikelihoodEstimateDeviancePearsonChiSquareFixedValue
- scale_value number Default value is 1; must be greater than 0.
- covariance_matrix ModelEstimatorRobustEstimator
- max_iterations number Default value is 100; non-negative integers only.
- max_step_halving number Default value is 5; positive integers only.
- check_separation flag
- start_iteration number Default value is 20; only positive integers allowed.
- estimates_change flag
- estimates_change_min number Default value is 1E-006; only positive numbers allowed.
- estimates_change_type AbsoluteRelative
- loglikelihood_change flag
- loglikelihood_change_min number Only positive numbers allowed.
- loglikelihood_change_type AbsoluteRelative
- hessian_convergence flag
- hessian_convergence_min number Only positive numbers allowed.
- hessian_convergence_type AbsoluteRelative
- case_summary flag
- contrast_matrices flag
- descriptive_statistics flag
- estimable_functions flag
- model_info flag
- iteration_history flag
- goodness_of_fit flag
- print_interval number Default value is 1; must be positive integer.
- model_summary flag
- lagrange_multiplier flag
- parameter_estimates flag
- include_exponential flag
- covariance_estimates flag
-"
-916F0A90D0B8383F2353B3320628E23E38B380B5_2,916F0A90D0B8383F2353B3320628E23E38B380B5," correlation_estimates flag
- analysis_type TypeITypeIIITypeIAndTypeIII
- statistics WaldLR
- citype WaldProfile
- tolerancelevel number Default value is 0.0001.
- confidence_interval number Default value is 95.
- loglikelihood_function FullKernel
- singularity_tolerance 1E-0071E-0081E-0091E-0101E-0111E-012
- value_order AscendingDescendingDataOrder
- calculate_variable_importance flag
- calculate_raw_propensities flag
-"
-BC3D88E89001BB639E418AE5971B209535603A18,BC3D88E89001BB639E418AE5971B209535603A18," applygeneralizedlinearnode properties
-
-You can use Generalized Linear (GenLin) modeling nodes to generate a GenLin model nugget. The scripting name of this model nugget is applygeneralizedlinearnode. For more information on scripting the modeling node itself, see [genlinnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/genlinnodeslots.htmlgenlinnodeslots).
-
-
-
-applygeneralizedlinearnode properties
-
-Table 1. applygeneralizedlinearnode properties
-
- applygeneralizedlinearnode Properties Values Property description
-
- calculate_raw_propensities flag
-"
-ADF96F4DC435CCF7E702EEAC6FCAA9F7DECD1DCB_0,ADF96F4DC435CCF7E702EEAC6FCAA9F7DECD1DCB," gle properties
-
-A GLE extends the linear model so that the target can have a non-normal distribution, is linearly related to the factors and covariates via a specified link function, and so that the observations can be correlated. Generalized linear mixed models cover a wide variety of models, from simple linear regression to complex multilevel models for non-normal longitudinal data.
-
-
-
-gle properties
-
-Table 1. gle properties
-
- gle Properties Values Property description
-
- custom_target flag Indicates whether to use target defined in upstream node (false) or custom target specified by target_field (true).
- target_field field Field to use as target if custom_target is true.
- use_trials flag Indicates whether additional field or value specifying number of trials is to be used when target response is a number of events occurring in a set of trials. Default is false.
- use_trials_field_or_value Field Value Indicates whether field (default) or value is used to specify number of trials.
- trials_field field Field to use to specify number of trials.
- trials_value integer Value to use to specify number of trials. If specified, minimum value is 1.
- use_custom_target_reference flag Indicates whether custom reference category is to be used for a categorical target. Default is false.
- target_reference_value string Reference category to use if use_custom_target_reference is true.
- dist_link_combination NormalIdentity GammaLog PoissonLog NegbinLog TweedieIdentity NominalLogit BinomialLogit BinomialProbit BinomialLogC CUSTOM Common models for distribution of values for target. Choose CUSTOM to specify a distribution from the list provided by target_distribution.
-"
-ADF96F4DC435CCF7E702EEAC6FCAA9F7DECD1DCB_1,ADF96F4DC435CCF7E702EEAC6FCAA9F7DECD1DCB," target_distribution Normal Binomial Multinomial Gamma INVERSE_GAUSS NEG_BINOMIAL Poisson TWEEDIE UNKNOWN Distribution of values for target when dist_link_combination is Custom.
- link_function_type UNKNOWN IDENTITY LOG LOGIT PROBIT COMPL_LOG_LOG POWER LOG_COMPL NEG_LOG_LOG ODDS_POWER NEG_BINOMIAL GEN_LOGIT CUMUL_LOGIT CUMUL_PROBIT CUMUL_COMPL_LOG_LOG CUMUL_NEG_LOG_LOG CUMUL_CAUCHIT Link function to relate target values to predictors. If target_distribution is Binomial you can use: UNKNOWNIDENTITYLOGLOGITPROBITCOMPL_LOG_LOGPOWERLOG_COMPLNEG_LOG_LOGODDS_POWER If target_distribution is NEG_BINOMIAL you can use: NEG_BINOMIAL If target_distribution is UNKNOWN, you can use: GEN_LOGITCUMUL_LOGITCUMUL_PROBITCUMUL_COMPL_LOG_LOGCUMUL_NEG_LOG_LOGCUMUL_CAUCHIT
- link_function_param number Tweedie parameter value to use. Only applicable if normal_link_function or link_function_type is POWER.
- tweedie_param number Link function parameter value to use. Only applicable if dist_link_combination is set to TweedieIdentity, or link_function_type is TWEEDIE.
- use_predefined_inputs flag Indicates whether model effect fields are to be those defined upstream as input fields (true) or those from fixed_effects_list (false).
-"
-ADF96F4DC435CCF7E702EEAC6FCAA9F7DECD1DCB_2,ADF96F4DC435CCF7E702EEAC6FCAA9F7DECD1DCB," model_effects_list structured If use_predefined_inputs is false, specifies the input fields to use as model effect fields.
- use_intercept flag If true (default), includes the intercept in the model.
- regression_weight_field field Field to use as analysis weight field.
- use_offset None Value Variable Indicates how offset is specified. Value None means no offset is used.
- offset_value number Value to use for offset if use_offset is set to offset_value.
- offset_field field Field to use for offset value if use_offset is set to offset_field.
- target_category_order Ascending Descending Sorting order for categorical targets. Default is Ascending.
- inputs_category_order Ascending Descending Sorting order for categorical predictors. Default is Ascending.
- max_iterations integer Maximum number of iterations the algorithm will perform. A non-negative integer; default is 100.
- confidence_level number Confidence level used to compute interval estimates of the model coefficients. A non-negative integer; maximum is 100, default is 95.
- test_fixed_effects_coeffecients Model Robust Method for computing the parameter estimates covariance matrix.
- detect_outliers flag When true the algorithm finds influential outliers for all distributions except multinomial distribution.
- conduct_trend_analysis flag When true the algorithm conducts trend analysis for the scatter plot.
- estimation_method FISHER_SCORING NEWTON_RAPHSON HYBRID Specify the maximum likelihood estimation algorithm.
- max_fisher_iterations integer If using the FISHER_SCORINGestimation_method, the maximum number of iterations. Minimum 0, maximum 20.
- scale_parameter_method MLE FIXED DEVIANCE PEARSON_CHISQUARE Specify the method to be used for the estimation of the scale parameter.
-"
-ADF96F4DC435CCF7E702EEAC6FCAA9F7DECD1DCB_3,ADF96F4DC435CCF7E702EEAC6FCAA9F7DECD1DCB," scale_value number Only available if scale_parameter_method is set to Fixed.
- negative_binomial_method MLE FIXED Specify the method to be for the estimation of the negative binomial ancillary parameter.
- negative_binomial_value number Only available if negative_binomial_method is set to Fixed.
- use_p_converge flag Option for parameter convergence.
- p_converge number Blank, or any positive value.
- p_converge_type flag True = Absolute, False = Relative
- use_l_converge flag Option for log-likelihood convergence.
- l_converge number Blank, or any positive value.
- l_converge_type flag True = Absolute, False = Relative
- use_h_converge flag Option for Hessian convergence.
- h_converge number Blank, or any positive value.
- h_converge_type flag True = Absolute, False = Relative
- max_iterations integer Maximum number of iterations the algorithm will perform. A non-negative integer; default is 100.
- sing_tolerance integer
- use_model_selection flag Enables the parameter threshold and model selection method controls..
- method LASSO ELASTIC_NET FORWARD_STEPWISE RIDGE Determines the model selection method, or if using Ridge the regularization method, used.
- detect_two_way_interactions flag When True the model will automatically detect two-way interactions between input fields. This control should only be enabled if the model is main effects only (that is, where the user has not created any higher order effects) and if the method selected is Forward Stepwise, Lasso, or Elastic Net.
- automatic_penalty_params flag Only available if model selection method is Lasso or Elastic Net. Use this function to enter penalty parameters associated with either the Lasso or Elastic Net variable selection methods. If True, default values are used. If False, the penalty parameters are enabled custom values can be entered.
-"
-ADF96F4DC435CCF7E702EEAC6FCAA9F7DECD1DCB_4,ADF96F4DC435CCF7E702EEAC6FCAA9F7DECD1DCB," lasso_penalty_param number Only available if model selection method is Lasso or Elastic Net and automatic_penalty_params is False. Specify the penalty parameter value for Lasso.
- elastic_net_penalty_param1 number Only available if model selection method is Lasso or Elastic Net and automatic_penalty_params is False. Specify the penalty parameter value for Elastic Net parameter 1.
- elastic_net_penalty_param2 number Only available if model selection method is Lasso or Elastic Net and automatic_penalty_params is False. Specify the penalty parameter value for Elastic Net parameter 2.
- probability_entry number Only available if the method selected is Forward Stepwise. Specify the significance level of the f statistic criterion for effect inclusion.
- probability_removal number Only available if the method selected is Forward Stepwise. Specify the significance level of the f statistic criterion for effect removal.
- use_max_effects flag Only available if the method selected is Forward Stepwise. Enables the max_effects control. When False the default number of effects included should equal the total number of effects supplied to the model, minus the intercept.
- max_effects integer Specify the maximum number of effects when using the forward stepwise building method.
- use_max_steps flag Enables the max_steps control. When False the default number of steps should equal three times the number of effects supplied to the model, excluding the intercept.
- max_steps integer Specify the maximum number of steps to be taken when using the Forward Stepwise building method.
- use_model_name flag Indicates whether to specify a custom name for the model (true) or to use the system-generated name (false). Default is false.
- model_name string If use_model_name is true, specifies the model name to use.
- usePI flag If true, predictor importance is calculated..
-"
-863FD4EEE7625CF4012BC9E37B5B66CD25554B8A,863FD4EEE7625CF4012BC9E37B5B66CD25554B8A," applygle properties
-
-You can use the GLE modeling node to generate a GLE model nugget. The scripting name of this model nugget is applygle. For more information on scripting the modeling node itself, see [gle properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/glenodeslots.htmlglenodeslots).
-
-
-
-applygle properties
-
-Table 1. applygle properties
-
- applygle Properties Values Property description
-
- enable_sql_generation falsenative When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations.
-"
-0A6D8500DAC43A18EC5DD8FCC3D31C2A31546554_0,0A6D8500DAC43A18EC5DD8FCC3D31C2A31546554," glmmnode properties
-
-A generalized linear mixed model (GLMM) extends the linear model so that the target can have a non-normal distribution, is linearly related to the factors and covariates via a specified link function, and so that the observations can be correlated. GLMM models cover a wide variety of models, from simple linear regression to complex multilevel models for non-normal longitudinal data.
-
-
-
-glmmnode properties
-
-Table 1. glmmnode properties
-
- glmmnode Properties Values Property description
-
- residual_subject_spec structured The combination of values of the specified categorical fields that uniquely define subjects within the data set
- repeated_measures structured Fields used to identify repeated observations.
- residual_group_spec [field1 ... fieldN] Fields that define independent sets of repeated effects covariance parameters.
- residual_covariance_type Diagonal AR1 ARMA11 COMPOUND_SYMMETRY IDENTITY TOEPLITZ UNSTRUCTURED VARIANCE_COMPONENTS Specifies covariance structure for residuals.
- custom_target flag Indicates whether to use target defined in upstream node (false) or custom target specified by target_field (true).
- target_field field Field to use as target if custom_target is true.
- use_trials flag Indicates whether additional field or value specifying number of trials is to be used when target response is a number of events occurring in a set of trials. Default is false.
- use_field_or_value Field Value Indicates whether field (default) or value is used to specify number of trials.
- trials_field field Field to use to specify number of trials.
- trials_value integer Value to use to specify number of trials. If specified, minimum value is 1.
-"
-0A6D8500DAC43A18EC5DD8FCC3D31C2A31546554_1,0A6D8500DAC43A18EC5DD8FCC3D31C2A31546554," use_custom_target_reference flag Indicates whether custom reference category is to be used for a categorical target. Default is false.
- target_reference_value string Reference category to use if use_custom_target_reference is true.
- dist_link_combination Nominal Logit GammaLog BinomialLogit PoissonLog BinomialProbit NegbinLog BinomialLogC Custom Common models for distribution of values for target. Choose Custom to specify a distribution from the list provided bytarget_distribution.
- target_distribution Normal Binomial Multinomial Gamma Inverse NegativeBinomial Poisson Distribution of values for target when dist_link_combination is Custom.
- link_function_type Identity LogC Log CLOGLOGLogit NLOGLOGPROBIT POWER CAUCHIT Link function to relate target values to predictors. If target_distribution is Binomial you can use any of the listed link functions. If target_distribution is Multinomial you can use CLOGLOG, CAUCHIT, LOGIT, NLOGLOG, or PROBIT. If target_distribution is anything other than Binomial or Multinomial you can use IDENTITY, LOG, or POWER.
- link_function_param number Link function parameter value to use. Only applicable if normal_link_function or link_function_type is POWER.
- use_predefined_inputs flag Indicates whether fixed effect fields are to be those defined upstream as input fields (true) or those from fixed_effects_list (false). Default is false.
- fixed_effects_list structured If use_predefined_inputs is false, specifies the input fields to use as fixed effect fields.
-"
-0A6D8500DAC43A18EC5DD8FCC3D31C2A31546554_2,0A6D8500DAC43A18EC5DD8FCC3D31C2A31546554," use_intercept flag If true (default), includes the intercept in the model.
- random_effects_list structured List of fields to specify as random effects.
- regression_weight_field field Field to use as analysis weight field.
- use_offset Noneoffset_valueoffset_field Indicates how offset is specified. Value None means no offset is used.
- offset_value number Value to use for offset if use_offset is set to offset_value.
- offset_field field Field to use for offset value if use_offset is set to offset_field.
- target_category_order AscendingDescendingData Sorting order for categorical targets. Value Data specifies using the sort order found in the data. Default is Ascending.
- inputs_category_order AscendingDescendingData Sorting order for categorical predictors. Value Data specifies using the sort order found in the data. Default is Ascending.
- max_iterations integer Maximum number of iterations the algorithm will perform. A non-negative integer; default is 100.
- confidence_level integer Confidence level used to compute interval estimates of the model coefficients. A non-negative integer; maximum is 100, default is 95.
- degrees_of_freedom_method FixedVaried Specifies how degrees of freedom are computed for significance test.
- test_fixed_effects_coeffecients ModelRobust Method for computing the parameter estimates covariance matrix.
- use_p_converge flag Option for parameter convergence.
- p_converge number Blank, or any positive value.
- p_converge_type AbsoluteRelative
- use_l_converge flag Option for log-likelihood convergence.
- l_converge number Blank, or any positive value.
- l_converge_type AbsoluteRelative
- use_h_converge flag Option for Hessian convergence.
- h_converge number Blank, or any positive value.
- h_converge_type AbsoluteRelative
- max_fisher_step integer
- sing_tolerance number
-"
-0A6D8500DAC43A18EC5DD8FCC3D31C2A31546554_3,0A6D8500DAC43A18EC5DD8FCC3D31C2A31546554," use_model_name flag Indicates whether to specify a custom name for the model (true) or to use the system-generated name (false). Default is false.
- model_name string If use_model_name is true, specifies the model name to use.
- confidence onProbabilityonIncrease Basis for computing scoring confidence value: highest predicted probability, or difference between highest and second highest predicted probabilities.
- score_category_probabilities flag If true, produces predicted probabilities for categorical targets. Default is false.
- max_categories integer If score_category_probabilities is true, specifies maximum number of categories to save.
- score_propensity flag If true, produces propensity scores for flag target fields that indicate likelihood of ""true"" outcome for field.
- emeans structure For each categorical field from the fixed effects list, specifies whether to produce estimated marginal means.
- covariance_list structure For each continuous field from the fixed effects list, specifies whether to use the mean or a custom value when computing estimated marginal means.
- mean_scale OriginalTransformed Specifies whether to compute estimated marginal means based on the original scale of the target (default) or on the link function transformation.
- comparison_adjustment_method LSDSEQBONFERRONISEQSIDAK Adjustment method to use when performing hypothesis tests with multiple contrasts.
- use_trials_field_or_value ""field""""value""
- residual_subject_ui_spec array Residual subject specification: The combination of values of the specified categorical fields should uniquely define subjects within the dataset. For example, a single Patient ID field should be sufficient to define subjects in a single hospital, but the combination of Hospital ID and Patient ID may be necessary if patient identification numbers are not unique across hospitals.
-"
-337CC5401082DFD6C8C79D49CD97F7BC197C7303,337CC5401082DFD6C8C79D49CD97F7BC197C7303," applyglmmnode properties
-
-You can use GLMM modeling nodes to generate a GLMM model nugget. The scripting name of this model nugget is applyglmmnode. For more information on scripting the modeling node itself, see [glmmnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/glmmnodeslots.htmlglmmnodeslots).
-
-
-
-applyglmmnode properties
-
-Table 1. applyglmmnode properties
-
- applyglmmnode Properties Values Property description
-
- confidence onProbabilityonIncrease Basis for computing scoring confidence value: highest predicted probability, or difference between highest and second highest predicted probabilities.
- score_category_probabilities flag If set to True, produces the predicted probabilities for categorical targets. A field is created for each category. Default is False.
- max_categories integer Maximum number of categories for which to predict probabilities. Used only if score_category_probabilities is True.
-"
-D1C3F3DB7837F7C5803F52829A542F6BA8B4837D_0,D1C3F3DB7837F7C5803F52829A542F6BA8B4837D," gmm properties
-
-A Gaussian Mixture© model is a probabilistic model that assumes all the data points are generated from a mixture of a finite number of Gaussian distributions with unknown parameters. One can think of mixture models as generalizing k-means clustering to incorporate information about the covariance structure of the data as well as the centers of the latent Gaussians. The Gaussian Mixture node in SPSS Modeler exposes the core features and commonly used parameters of the Gaussian Mixture library. The node is implemented in Python.
-
-
-
-gmm properties
-
-Table 1. gmm properties
-
- gmm properties Data type Property description
-
- custom_fields boolean This option tells the node to use field information specified here instead of that given in any upstream Type node(s). After selecting this option, specify the following fields as required.
- inputs field List of the field names for input.
- target field One field name for target.
- fast_build boolean Utilize multiple CPU cores to improve model building.
- use_partition boolean Set to True or False to specify whether to use partitioned data. Default is False.
- covariance_type string Specify Full, Tied, Diag, or Spherical to set the covariance type.
- number_component integer Specify an integer for the number of mixture components. Minimum value is 1. Default value is 2.
- component_lable boolean Specify True to set the cluster label to a string or False to set the cluster label to a number. Default is False.
- label_prefix string If using a string cluster label, you can specify a prefix.
- enable_random_seed boolean Specify True if you want to use a random seed. Default is False.
- random_seed integer If using a random seed, specify an integer to be used for generating random samples.
-"
-D1C3F3DB7837F7C5803F52829A542F6BA8B4837D_1,D1C3F3DB7837F7C5803F52829A542F6BA8B4837D," tol Double Specify the convergence threshold. Default is 0.000.1.
- max_iter integer Specify the maximum number of iterations to perform. Default is 100.
-"
-F2D3C76D5EABBBF72A0314F29374527C8339591A,F2D3C76D5EABBBF72A0314F29374527C8339591A," applygmm properties
-
-You can use the Gaussian Mixture node to generate a Gaussian Mixture model nugget. The scripting name of this model nugget is applygmm. For more information on scripting the modeling node itself, see [gmm properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/gmmnodeslots.htmlgmmnodeslots).
-
-
-
-applygmm properties
-
-Table 1. applygmm properties
-
- applygmm properties Data type Property description
-
- centers
- item_count
- total
- dimension
-"
-1F781DA5779DAFEFBB53038F71A18BBE2649117B_0,1F781DA5779DAFEFBB53038F71A18BBE2649117B," associationrulesnode properties
-
-The Association Rules node is similar to the Apriori Node. However, unlike Apriori, the Association Rules node can process list data. In addition, the Association Rules node can be used with SPSS Analytic Server to process big data and take advantage of faster parallel processing.
-
-
-
-associationrulesnode properties
-
-Table 1. associationrulesnode properties
-
- associationrulesnode properties Data type Property description
-
- predictions field Fields in this list can only appear as a predictor of a rule
- conditions [field1...fieldN] Fields in this list can only appear as a condition of a rule
- max_rule_conditions integer The maximum number of conditions that can be included in a single rule. Minimum 1, maximum 9.
- max_rule_predictions integer The maximum number of predictions that can be included in a single rule. Minimum 1, maximum 5.
- max_num_rules integer The maximum number of rules that can be considered as part of rule building. Minimum 1, maximum 10,000.
- rule_criterion_top_n ConfidenceRulesupportLiftConditionsupportDeployability The rule criterion that determines the value by which the top ""N"" rules in the model are chosen.
- true_flags Boolean Setting as Y determines that only the true values for flag fields are considered during rule building.
- rule_criterion Boolean Setting as Y determines that the rule criterion values are used for excluding rules during model building.
- min_confidence number 0.1 to 100 - the percentage value for the minimum required confidence level for a rule produced by the model. If the model produces a rule with a confidence level less than the value specified here the rule is discarded.
- min_rule_support number 0.1 to 100 - the percentage value for the minimum required rule support for a rule produced by the model. If the model produces a rule with a rule support level less than the specified value the rule is discarded.
-"
-1F781DA5779DAFEFBB53038F71A18BBE2649117B_1,1F781DA5779DAFEFBB53038F71A18BBE2649117B," min_condition_support number 0.1 to 100 - the percentage value for the minimum required condition support for a rule produced by the model. If the model produces a rule with a condition support level less than the specified value the rule is discarded.
- min_lift integer 1 to 10 - represents the minimum required lift for a rule produced by the model. If the model produces a rule with a lift level less than the specified value the rule is discarded.
- exclude_rules Boolean Used to select a list of related fields from which you do not want the model to create rules. Example: set :gsarsnode.exclude_rules = [field1,field2, field3]],field4, field5]]] - where each list of fields separated by [] is a row in the table.
- num_bins integer Set the number of automatic bins that continuous fields are binned to. Minimum 2, maximum 10.
- max_list_length integer Applies to any list fields for which the maximum length is not known. Elements in the list up until the number specified here are included in the model build; any further elements are discarded. Minimum 1, maximum 100.
- output_confidence Boolean
- output_rule_support Boolean
- output_lift Boolean
- output_condition_support Boolean
- output_deployability Boolean
- rules_to_display uptoall The maximum number of rules to display in the output tables.
- display_upto integer If upto is set in rules_to_display, set the number of rules to display in the output tables. Minimum 1.
- field_transformations Boolean
- records_summary Boolean
- rule_statistics Boolean
- most_frequent_values Boolean
- most_frequent_fields Boolean
- word_cloud Boolean
- word_cloud_sort ConfidenceRulesupportLiftConditionsupportDeployability
- word_cloud_display integer Minimum 1, maximum 20
- max_predictions integer The maximum number of rules that can be applied to each input to the score.
- criterion ConfidenceRulesupportLiftConditionsupportDeployability Select the measure used to determine the strength of rules.
-"
-9DA9A2809D484A6CAA70A66A3548CF4A537950FC,9DA9A2809D484A6CAA70A66A3548CF4A537950FC," applyassociationrulesnode properties
-
-You can use the Association Rules modeling node to generate an association rules model nugget. The scripting name of this model nugget is applyassociationrulesnode. For more information on scripting the modeling node itself, see [associationrulesnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/gsarnodeslots.htmlgsarnodeslots).
-
-
-
-applyassociationrulesnode properties
-
-Table 1. applyassociationrulesnode properties
-
- applyassociationrulesnode properties Data type Property description
-
- max_predictions integer The maximum number of rules that can be applied to each input to the score.
- criterion ConfidenceRulesupportLiftConditionsupportDeployability Select the measure used to determine the strength of rules.
-"
-E01184BCBA866D676B5A236D6638E78D3F55C794_0,E01184BCBA866D676B5A236D6638E78D3F55C794," hdbscannode properties
-
-Hierarchical Density-Based Spatial Clustering (HDBSCAN)© uses unsupervised learning to find clusters, or dense regions, of a data set. The HDBSCAN node in SPSS Modeler exposes the core features and commonly used parameters of the HDBSCAN library. The node is implemented in Python, and you can use it to cluster your dataset into distinct groups when you don't know what those groups are at first.
-
-
-
-hdbscannode properties
-
-Table 1. hdbscannode properties
-
- hdbscannode properties Data type Property description
-
- custom_fields boolean This option tells the node to use field information specified here instead of that given in any upstream Type node(s). After selecting this option, specify the following fields as required.
- inputs field Input fields for clustering.
- useHPO boolean Specify true or false to enable or disable Hyper-Parameter Optimization (HPO) based on Rbfopt, which automatically discovers the optimal combination of parameters so that the model will achieve the expected or lesser error rate on the samples. Default is false.
- min_cluster_size integer The minimum size of clusters. Specify an integer. Default is 5.
- min_samples integer The number of samples in a neighborhood for a point to be considered a core point. Specify an integer. If set to 0, the min_cluster_size is used. Default is 0.
- algorithm string Specify which algorithm to use: best, generic, prims_kdtree, prims_balltree, boruvka_kdtree, or boruvka_balltree. Default is best.
-"
-E01184BCBA866D676B5A236D6638E78D3F55C794_1,E01184BCBA866D676B5A236D6638E78D3F55C794," metric string Specify which metric to use when calculating distance between instances in a feature array: euclidean, cityblock, L1, L2, manhattan, braycurtis, canberra, chebyshev, correlation, minkowski, or sqeuclidean. Default is euclidean.
- useStringLabel boolean Specify true to use a string cluster label, or false to use a number cluster label. Default is false.
- stringLabelPrefix string If the useStringLabel parameter is set to true, specify a value for the string label prefix. Default prefix is cluster.
- approx_min_span_tree boolean Specify true to accept an approximate minimum spanning tree, or false if you are willing to sacrifice speed for correctness. Default is true.
- cluster_selection_method string Specify the method to use for selecting clusters from the condensed tree: eom or leaf. Default is eom (Excess of Mass algorithm).
- allow_single_cluster boolean Specify true if you want to allow single cluster results. Default is false.
- p_value double Specify the p value to use if you're using minkowski for the metric. Default is 1.5.
- leaf_size integer If using a space tree algorithm (boruvka_kdtree, or boruvka_balltree), specify the number of points in a leaf node of the tree. Default is 40.
- outputValidity boolean Specify true or false to control whether the Validity Index chart is included in the model output.
- outputCondensed boolean Specify true or false to control whether the Condensed Tree chart is included in the model output.
- outputSingleLinkage boolean Specify true or false to control whether the Single Linkage Tree chart is included in the model output.
-"
-4F0098CE544BA8AC594F98AF8DF26B7911399750,4F0098CE544BA8AC594F98AF8DF26B7911399750," hdbscannugget properties
-
-You can use the HDBSCAN node to generate an HDBSCAN model nugget. The scripting name of this model nugget is hdbscannugget. No other properties exist for this model nugget. For more information on scripting the modeling node itself, see [hdbscannode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/hdbscannodeslots.htmlhdbscannodeslots).
-"
-8DAA2C34D27A7E09C0AB837C191E87F320790F75,8DAA2C34D27A7E09C0AB837C191E87F320790F75," histogramnode properties
-
-The Histogram node shows the occurrence of values for numeric fields. It's often used to explore the data before manipulations and model building. Similar to the Distribution node, the Histogram node frequently reveals imbalances in the data.
-
-
-
-histogramnode properties
-
-Table 1. histogramnode properties
-
- histogramnode properties Data type Property description
-
- field field
- color_field field
- panel_field field
- animation_field field
- range_mode AutomaticUserDefined
- range_min number
- range_max number
- bins ByNumberByWidth
- num_bins number
- bin_width number
- normalize flag
- separate_bands flag
- x_label_auto flag
- x_label string
- y_label_auto flag
- y_label string
- use_grid flag
- graph_background color Standard graph colors are described at the beginning of this section.
-"
-BDC1B4283563848E2C775804FC0857DBDE8843AF,BDC1B4283563848E2C775804FC0857DBDE8843AF," historynode properties
-
-The History node creates new fields containing data from fields in previous records. History nodes are most often used for sequential data, such as time series data. Before using a History node, you may want to sort the data using a Sort node.
-
-Example
-
-node = stream.create(""history"", ""My node"")
-node.setPropertyValue(""fields"", [""Drug""])
-node.setPropertyValue(""offset"", 1)
-node.setPropertyValue(""span"", 3)
-node.setPropertyValue(""unavailable"", ""Discard"")
-node.setPropertyValue(""fill_with"", ""undef"")
-
-
-
-historynode properties
-
-Table 1. historynode properties
-
- historynode properties Data type Property description
-
- fields list Fields for which you want a history.
- offset number Specifies the latest record (prior to the current record) from which you want to extract historical field values.
- span number Specifies the number of prior records from which you want to extract values.
-"
-2756DEAD36AC092838F80ACFFE6ECEE13A22A376,2756DEAD36AC092838F80ACFFE6ECEE13A22A376," isotonicasnode properties
-
-Isotonic Regression belongs to the family of regression algorithms. The Isotonic-AS node in SPSS Modeler is implemented in Spark. For details about Isotonic Regression algorithms, see [https://spark.apache.org/docs/2.2.0/mllib-isotonic-regression.html](https://spark.apache.org/docs/2.2.0/mllib-isotonic-regression.html).
-
-
-
-isotonicasnode properties
-
-Table 1. isotonicasnode properties
-
- isotonicasnode properties Data type Property description
-
- label string This property is a dependent variable for which isotonic regression is calculated.
- features string This property is an independent variable.
- weightCol string The weight represents a number of measures. Default is 1.
-"
-F67E458A29CF154C33221A8889789241725FE5C7_0,F67E458A29CF154C33221A8889789241725FE5C7," Python and Jython
-
-Jython is an implementation of the Python scripting language, which is written in the Java language and integrated with the Java platform. Python is a powerful object-oriented scripting language.
-
-Jython is useful because it provides the productivity features of a mature scripting language and, unlike Python, runs in any environment that supports a Java virtual machine (JVM). This means that the Java libraries on the JVM are available to use when you're writing programs. With Jython, you can take advantage of this difference, and use the syntax and most of the features of the Python language.
-
-As a scripting language, Python (and its Jython implementation) is easy to learn and efficient to code, and has minimal required structure to create a running program. Code can be entered interactively, that is, one line at a time. Python is an interpreted scripting language; there is no precompile step, as there is in Java. Python programs are simply text files that are interpreted as they're input (after parsing for syntax errors). Simple expressions, like defined values, as well as more complex actions, such as function definitions, are immediately executed and available for use. Any changes that are made to the code can be tested quickly. Script interpretation does, however, have some disadvantages. For example, use of an undefined variable is not a compiler error, so it's detected only if (and when) the statement in which the variable is used is executed. In this case, you can edit and run the program to debug the error.
-
-"
-F67E458A29CF154C33221A8889789241725FE5C7_1,F67E458A29CF154C33221A8889789241725FE5C7,"Python sees everything, including all data and code, as an object. You can, therefore, manipulate these objects with lines of code. Some select types, such as numbers and strings, are more conveniently considered as values, not objects; this is supported by Python. There is one null value that's supported. This null value has the reserved name None.
-
-For a more in-depth introduction to Python and Jython scripting, and for some example scripts, see [http://www.ibm.com/developerworks/java/tutorials/j-jython1/j-jython1.html](http://www.ibm.com/developerworks/java/tutorials/j-jython1/j-jython1.html) and [http://www.ibm.com/developerworks/java/tutorials/j-jython2/j-jython2.html](http://www.ibm.com/developerworks/java/tutorials/j-jython2/j-jython2.html).
-"
-033F114BFF6D5479C2B4BE7C1542A4C778ABA53E,033F114BFF6D5479C2B4BE7C1542A4C778ABA53E," Adding attributes to a class instance
-
-Unlike in Java, in Python clients can add attributes to an instance of a class. Only the one instance is changed. For example, to add attributes to an instance x, set new values on that instance:
-
-x.attr1 = 1
-x.attr2 = 2
-.
-"
-8BC347015FD7CE2AF13B17DE4D287471CB994F38,8BC347015FD7CE2AF13B17DE4D287471CB994F38," The scripting API
-
-The Scripting API provides access to a wide range of SPSS Modeler functionality. All the methods described so far are part of the API and can be accessed implicitly within the script without further imports. However, if you want to reference the API classes, you must import the API explicitly with the following statement:
-
-import modeler.api
-
-This import statement is required by many of the scripting API examples.
-"
-F290D0C61B4A664E303DE559BBC559015FD375F9,F290D0C61B4A664E303DE559BBC559015FD375F9," Example: Searching for nodes using a custom filter
-
-The section [Finding nodes](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_node_find.htmlpython_node_find) includes an example of searching for a node in a flow using the type name of the node as the search criterion. In some situations, a more generic search is required and this can be accomplished by using the NodeFilter class and the flow findAll() method. This type of search involves the following two steps:
-
-
-
-1. Creating a new class that extends NodeFilter and that implements a custom version of the accept() method.
-2. Calling the flow findAll() method with an instance of this new class. This returns all nodes that meet the criteria defined in the accept() method.
-
-
-
-The following example shows how to search for nodes in a flow that have the node cache enabled. The returned list of nodes can be used to either flush or disable the caches of these nodes.
-
-import modeler.api
-
-class CacheFilter(modeler.api.NodeFilter):
-""""""A node filter for nodes with caching enabled""""""
-def accept(this, node):
-return node.isCacheEnabled()
-
-cachingnodes = modeler.script.stream().findAll(CacheFilter(), False)
-"
-78488A77CB39BDD413DBB7682F1DBE2675B3E3A0,78488A77CB39BDD413DBB7682F1DBE2675B3E3A0," Defining a class
-
-Within a Python class, you can define both variables and methods. Unlike in Java, in Python you can define any number of public classes per source file (or module). Therefore, you can think of a module in Python as similar to a package in Java.
-
-In Python, classes are defined using the class statement. The class statement has the following form:
-
-class name (superclasses): statement
-
-or
-
-class name (superclasses):
-assignment
-.
-.
-function
-.
-.
-
-When you define a class, you have the option to provide zero or more assignment statements. These create class attributes that are shared by all instances of the class. You can also provide zero or more function definitions. These function definitions create methods. The superclasses list is optional.
-
-The class name should be unique in the same scope, that is within a module, function, or class. You can define multiple variables to reference the same class.
-"
-E3EFB6106AE81DB5A8B3379C3EDCF86E31F95AB0,E3EFB6106AE81DB5A8B3379C3EDCF86E31F95AB0," Creating a class instance
-
-You can use classes to hold class (or shared) attributes or to create class instances. To create an instance of a class, you call the class as if it were a function. For example, consider the following class:
-
-class MyClass:
-pass
-
-Here, the pass statement is used because a statement is required to complete the class, but no action is required programmatically.
-
-The following statement creates an instance of the class MyClass:
-
-x = MyClass()
-"
-3491F666270894EE4BE071FD4A8551DF94CB9889,3491F666270894EE4BE071FD4A8551DF94CB9889," Defining class attributes and methods
-
-Any variable that's bound in a class is a class attribute. Any function defined within a class is a method. Methods receive an instance of the class, conventionally called self, as the first argument. For example, to define some class attributes and methods, you might enter the following script:
-
-class MyClass
-attr1 = 10 class attributes
-attr2 = ""hello""
-
-def method1(self):
-print MyClass.attr1 reference the class attribute
-
-def method2(self):
-print MyClass.attr2 reference the class attribute
-
-def method3(self, text):
-self.text = text instance attribute
-print text, self.text print my argument and my attribute
-
-method4 = method3 make an alias for method3
-
-Inside a class, you should qualify all references to class attributes with the class name (for example, MyClass.attr1). All references to instance attributes should be qualified with the self variable (for example, self.text). Outside the class, you should qualify all references to class attributes with the class name (for example, MyClass.attr1) or with an instance of the class (for example, x.attr1, where x is an instance of the class). Outside the class, all references to instance variables should be qualified with an instance of the class (for example, x.text).
-"
-5ED78E19C7780C6856E1FA05D4B8A3F671FC878B,5ED78E19C7780C6856E1FA05D4B8A3F671FC878B," Diagrams
-
-The term diagram covers the functions that are supported by both normal flows and SuperNode flows, such as adding and removing nodes and modifying connections between the nodes.
-"
-640F57E8262F846DA7884E43F6F2F6C04CD15667,640F57E8262F846DA7884E43F6F2F6C04CD15667," Global values
-
-Global values are used to compute various summary statistics for specified fields. These summary values can be accessed anywhere within the flow. Global values are similar to flow parameters in that they are accessed by name through the flow. They're different from flow parameters in that the associated values are updated automatically when a Set Globals node is run, rather than being assigned by scripting. The global values for a flow are accessed by calling the flow's getGlobalValues() method.
-
-The GlobalValues object defines the functions that are shown in the following table.
-
-
-
-Functions that are defined by the GlobalValues object
-
-Table 1. Functions that are defined by the GlobalValues object
-
- Method Return type Description
-
- g.fieldNameIterator() Iterator Returns an iterator for each field name with at least one global value.
- g.getValue(type, fieldName) Object Returns the global value for the specified type and field name, or None if no value can be located. The returned value is generally expected to be a number, although future functionality may return different value types.
- g.getValues(fieldName) Map Returns a map containing the known entries for the specified field name, or None if there are no existing entries for the field.
-
-
-
-GlobalValues.Type defines the type of summary statistics that are available. The following summary statistics are available:
-
-
-
-* MAX: the maximum value of the field.
-* MEAN: the mean value of the field.
-* MIN: the minimum value of the field.
-* STDDEV: the standard deviation of the field.
-* SUM: the sum of the values in the field.
-
-
-
-For example, the following script accesses the mean value of the ""income"" field, which is computed by a Set Globals node:
-
-import modeler.api
-
-"
-CAD5F0781542A67A581819B52BB1B6B4BB9ECE74_0,CAD5F0781542A67A581819B52BB1B6B4BB9ECE74," Parameters
-
-Parameters provide a useful way of passing values at runtime, rather than hard coding them directly in a script. Parameters and their values are defined in the same way as for flows; that is, as entries in the parameters table of a flow, or as parameters on the command line. The Stream class implements a set of functions defined by the ParameterProvider object as shown in the following table. Session provides a getParameters() call which returns an object that defines those functions.
-
-
-
-Functions defined by the ParameterProvider object
-
-Table 1. Functions defined by the ParameterProvider object
-
- Method Return type Description
-
- p.parameterIterator() Iterator Returns an iterator of parameter names for this object.
- p.getParameterDefinition( parameterName) ParameterDefinition Returns the parameter definition for the parameter with the specified name, or None if no such parameter exists in this provider. The result may be a snapshot of the definition at the time the method was called and need not reflect any subsequent modifications made to the parameter through this provider.
- p.getParameterLabel(parameterName) string Returns the label of the named parameter, or None if no such parameter exists.
- p.setParameterLabel(parameterName, label) Not applicable Sets the label of the named parameter.
- p.getParameterStorage( parameterName) ParameterStorage Returns the storage of the named parameter, or None if no such parameter exists.
- p.setParameterStorage( parameterName, storage) Not applicable Sets the storage of the named parameter.
- p.getParameterType(parameterName) ParameterType Returns the type of the named parameter, or None if no such parameter exists.
- p.setParameterType(parameterName, type) Not applicable Sets the type of the named parameter.
- p.getParameterValue(parameterName) Object Returns the value of the named parameter, or None if no such parameter exists.
- p.setParameterValue(parameterName, value) Not applicable Sets the value of the named parameter.
-
-
-
-"
-CAD5F0781542A67A581819B52BB1B6B4BB9ECE74_1,CAD5F0781542A67A581819B52BB1B6B4BB9ECE74,"In the following example, the script aggregates some Telco data to find which region has the lowest average income data. A flow parameter is then set with this region. That flow parameter is then used in a Select node to exclude that region from the data, before a churn model is built on the remainder.
-
-The example is artificial because the script generates the Select node itself and could therefore have generated the correct value directly into the Select node expression. However, flows are typically pre-built, so setting parameters in this way provides a useful example.
-
-The first part of this example script creates the flow parameter that will contain the region with the lowest average income. The script also creates the nodes in the aggregation branch and the model building branch, and connects them together.
-
-import modeler.api
-
-stream = modeler.script.stream()
-
- Initialize a flow parameter
-stream.setParameterStorage(""LowestRegion"", modeler.api.ParameterStorage.INTEGER)
-
- First create the aggregation branch to compute the average income per region
-sourcenode = stream.findByID(""idGXVBG5FBZH"")
-
-aggregatenode = modeler.script.stream().createAt(""aggregate"", ""Aggregate"", 294, 142)
-aggregatenode.setPropertyValue(""keys"", [""region""])
-aggregatenode.setKeyedPropertyValue(""aggregates"", ""income"", [""Mean""])
-
-tablenode = modeler.script.stream().createAt(""table"", ""Table"", 462, 142)
-
-stream.link(sourcenode, aggregatenode)
-stream.link(aggregatenode, tablenode)
-
-selectnode = stream.createAt(""select"", ""Select"", 210, 232)
-selectnode.setPropertyValue(""mode"", ""Discard"")
- Reference the flow parameter in the selection
-selectnode.setPropertyValue(""condition"", ""'region' = '$P-LowestRegion'"")
-
-typenode = stream.createAt(""type"", ""Type"", 366, 232)
-"
-CAD5F0781542A67A581819B52BB1B6B4BB9ECE74_2,CAD5F0781542A67A581819B52BB1B6B4BB9ECE74,"typenode.setKeyedPropertyValue(""direction"", ""Drug"", ""Target"")
-
-c50node = stream.createAt(""c50"", ""C5.0"", 534, 232)
-
-stream.link(sourcenode, selectnode)
-stream.link(selectnode, typenode)
-stream.link(typenode, c50node)
-
-The example script creates the following flow.
-
-Figure 1. Flow that results from the example script
-
-
-"
-E61658D2BA7D0D13E5A6008E28670D1B1F6CB7BB,E61658D2BA7D0D13E5A6008E28670D1B1F6CB7BB," Hidden variables
-
-You can hide data by creating Private variables. Private variables can be accessed only by the class itself. If you declare names of the form __xxx or __xxx_yyy, that is with two preceding underscores, the Python parser will automatically add the class name to the declared name, creating hidden variables. For example:
-
-class MyClass:
-__attr = 10 private class attribute
-
-def method1(self):
-pass
-
-def method2(self, p1, p2):
-pass
-
-def __privateMethod(self, text):
-self.__text = text private attribute
-
-Unlike in Java, in Python all references to instance variables must be qualified with self; there's no implied use of this.
-"
-9EE303CB0D99042537564DCDFC134B592BF0A3FE,9EE303CB0D99042537564DCDFC134B592BF0A3FE," Inheritance
-
-The ability to inherit from classes is fundamental to object-oriented programming. Python supports both single and multiple inheritance. Single inheritance means that there can be only one superclass. Multiple inheritance means that there can be more than one superclass.
-
-Inheritance is implemented by subclassing other classes. Any number of Python classes can be superclasses. In the Jython implementation of Python, only one Java class can be directly or indirectly inherited from. It's not required for a superclass to be supplied.
-
-Any attribute or method in a superclass is also in any subclass and can be used by the class itself, or by any client as long as the attribute or method isn't hidden. Any instance of a subclass can be used wherever an instance of a superclass can be used; this is an example of polymorphism. These features enable reuse and ease of extension.
-"
-97050C74E0C144E4F16AA808D275A9A472489EFB,97050C74E0C144E4F16AA808D275A9A472489EFB," The scripting language
-
-With the scripting facility for SPSS Modeler, you can create scripts that operate on the SPSS Modeler user interface, manipulate output objects, and run command syntax. You can also run scripts directly from within SPSS Modeler.
-
-Scripts in SPSS Modeler are written in the Python scripting language. The Java-based implementation of Python that's used by SPSS Modeler is called Jython. The scripting language consists of the following features:
-
-
-
-* A format for referencing nodes, flows, projects, output, and other SPSS Modeler objects
-* A set of scripting statements or commands you can use to manipulate these objects
-* A scripting expression language for setting the values of variables, parameters, and other objects
-* Support for comments, continuations, and blocks of literal text
-
-
-
-The following sections of this documentation describe the Python scripting language, the Jython implementation of Python, and the basic syntax for getting started with scripting in SPSS Modeler. Information about specific properties and commands is provided in the sections that follow.
-"
-1FEFE3C6F1A20841FA1AE6AFAA85CC7FF36778AC_0,1FEFE3C6F1A20841FA1AE6AFAA85CC7FF36778AC," Metadata: Information about data
-
-Because nodes are connected together in a flow, information about the columns or fields that are available at each node is available. For example, in the SPSS Modeler user interface, this allows you to select which fields to sort or aggregate by. This information is called the data model.
-
-Scripts can also access the data model by looking at the fields coming into or out of a node. For some nodes, the input and output data models are the same (for example, a Sort node simply reorders the records but doesn't change the data model). Some, such as the Derive node, can add new fields. Others, such as the Filter node, can rename or remove fields.
-
-In the following example, the script takes a standard IBM® SPSS® Modeler druglearn.str flow, and for each field, builds a model with one of the input fields dropped. It does this by:
-
-
-
-1. Accessing the output data model from the Type node.
-2. Looping through each field in the output data model.
-3. Modifying the Filter node for each input field.
-4. Changing the name of the model being built.
-5. Running the model build node.
-
-
-
-Note: Before running the script in the druglean.str flow, remember to set the scripting language to Python if the flow was created in an old version of IBM SPSS Modeler desktop and its scripting language is set to Legacy).
-
-import modeler.api
-
-stream = modeler.script.stream()
-filternode = stream.findByType(""filter"", None)
-typenode = stream.findByType(""type"", None)
-c50node = stream.findByType(""c50"", None)
- Always use a custom model name
-c50node.setPropertyValue(""use_model_name"", True)
-
-lastRemoved = None
-fields = typenode.getOutputDataModel()
-for field in fields:
- If this is the target field then ignore it
-if field.getModelingRole() == modeler.api.ModelingRole.OUT:
-continue
-
- Re-enable the field that was most recently removed
-if lastRemoved != None:
-"
-1FEFE3C6F1A20841FA1AE6AFAA85CC7FF36778AC_1,1FEFE3C6F1A20841FA1AE6AFAA85CC7FF36778AC,"filternode.setKeyedPropertyValue(""include"", lastRemoved, True)
-
- Remove the field
-lastRemoved = field.getColumnName()
-filternode.setKeyedPropertyValue(""include"", lastRemoved, False)
-
- Set the name of the new model then run the build
-c50node.setPropertyValue(""model_name"", ""Exclude "" + lastRemoved)
-c50node.run([])
-
-The DataModel object provides a number of methods for accessing information about the fields or columns within the data model. These methods are summarized in the following table.
-
-
-
-DataModel object methods for accessing information about fields or columns
-
-Table 1. DataModel object methods for accessing information about fields or columns
-
- Method Return type Description
-
- d.getColumnCount() int Returns the number of columns in the data model.
- d.columnIterator() Iterator Returns an iterator that returns each column in the ""natural"" insert order. The iterator returns instances of Column.
- d.nameIterator() Iterator Returns an iterator that returns the name of each column in the ""natural"" insert order.
- d.contains(name) Boolean Returns True if a column with the supplied name exists in this DataModel, False otherwise.
- d.getColumn(name) Column Returns the column with the specified name.
- d.getColumnGroup(name) ColumnGroup Returns the named column group or None if no such column group exists.
- d.getColumnGroupCount() int Returns the number of column groups in this data model.
- d.columnGroupIterator() Iterator Returns an iterator that returns each column group in turn.
- d.toArray() Column[] Returns the data model as an array of columns. The columns are ordered in their ""natural"" insert order.
-
-
-
-Each field (Column object) includes a number of methods for accessing information about the column. The following table shows a selection of these.
-
-
-
-"
-1FEFE3C6F1A20841FA1AE6AFAA85CC7FF36778AC_2,1FEFE3C6F1A20841FA1AE6AFAA85CC7FF36778AC,"Column object methods for accessing information about the column
-
-Table 2. Column object methods for accessing information about the column
-
- Method Return type Description
-
- c.getColumnName() string Returns the name of the column.
- c.getColumnLabel() string Returns the label of the column or an empty string if there is no label associated with the column.
- c.getMeasureType() MeasureType Returns the measure type for the column.
- c.getStorageType() StorageType Returns the storage type for the column.
- c.isMeasureDiscrete() Boolean Returns True if the column is discrete. Columns that are either a set or a flag are considered discrete.
- c.isModelOutputColumn() Boolean Returns True if the column is a model output column.
- c.isStorageDatetime() Boolean Returns True if the column's storage is a time, date or timestamp value.
- c.isStorageNumeric() Boolean Returns True if the column's storage is an integer or a real number.
- c.isValidValue(value) Boolean Returns True if the specified value is valid for this storage, and valid when the valid column values are known.
- c.getModelingRole() ModelingRole Returns the modeling role for the column.
- c.getSetValues() Object[] Returns an array of valid values for the column, or None if either the values are not known or the column is not a set.
- c.getValueLabel(value) string Returns the label for the value in the column, or an empty string if there is no label associated with the value.
- c.getFalseFlag() Object Returns the ""false"" indicator value for the column, or None if either the value is not known or the column is not a flag.
- c.getTrueFlag() Object Returns the ""true"" indicator value for the column, or None if either the value is not known or the column is not a flag.
-"
-1FEFE3C6F1A20841FA1AE6AFAA85CC7FF36778AC_3,1FEFE3C6F1A20841FA1AE6AFAA85CC7FF36778AC," c.getLowerBound() Object Returns the lower bound value for the values in the column, or None if either the value is not known or the column is not continuous.
- c.getUpperBound() Object Returns the upper bound value for the values in the column, or None if either the value is not known or the column is not continuous.
-
-
-
-Note that most of the methods that access information about a column have equivalent methods defined on the DataModel object itself. For example, the two following statements are equivalent:
-
-"
-83A5FC83AA65717942A3437217F2114454552144,83A5FC83AA65717942A3437217F2114454552144," Creating nodes
-
-Flows provide a number of ways to create nodes. These methods are summarized in the following table.
-
-
-
-Methods for creating nodes
-
-Table 1. Methods for creating nodes
-
- Method Return type Description
-
- s.create(nodeType, name) Node Creates a node of the specified type and adds it to the specified flow.
- s.createAt(nodeType, name, x, y) Node Creates a node of the specified type and adds it to the specified flow at the specified location. If either x < 0 or y < 0, the location is not set.
- s.createModelApplier(modelOutput, name) Node Creates a model applier node that's derived from the supplied model output object.
-
-
-
-For example, you can use the following script to create a new Type node in a flow:
-
-stream = modeler.script.stream()
-"
-D9304450E79DC05B5ECC4FE98D48FECEF76A852E_0,D9304450E79DC05B5ECC4FE98D48FECEF76A852E," Finding nodes
-
-Flows provide a number of ways for locating an existing node. These methods are summarized in the following table.
-
-
-
-Methods for locating an existing node
-
-Table 1. Methods for locating an existing node
-
- Method Return type Description
-
- s.findAll(type, label) Collection Returns a list of all nodes with the specified type and label. Either the type or label can be None, in which case the other parameter is used.
- s.findAll(filter, recursive) Collection Returns a collection of all nodes that are accepted by the specified filter. If the recursive flag is True, any SuperNodes within the specified flow are also searched.
- s.findByID(id) Node Returns the node with the supplied ID or None if no such node exists. The search is limited to the current stream.
- s.findByType(type, label) Node Returns the node with the supplied type, label, or both. Either the type or name can be None, in which case the other parameter is used. If multiple nodes result in a match, then an arbitrary one is chosen and returned. If no nodes result in a match, then the return value is None.
- s.findDownstream(fromNodes) Collection Searches from the supplied list of nodes and returns the set of nodes downstream of the supplied nodes. The returned list includes the originally supplied nodes.
- s.findUpstream(fromNodes) Collection Searches from the supplied list of nodes and returns the set of nodes upstream of the supplied nodes. The returned list includes the originally supplied nodes.
- s.findProcessorForID(String id, boolean recursive) Node Returns the node with the supplied ID or None if no such node exists. If the recursive flag is true, then any composite nodes within this diagram are also searched.
-
-
-
-As an example, if a flow contains a single Filter node that the script needs to access, the Filter node can be found by using the following script:
-
-stream = modeler.script.stream()
-node = stream.findByType(""filter"", None)
-...
-
-"
-D9304450E79DC05B5ECC4FE98D48FECEF76A852E_1,D9304450E79DC05B5ECC4FE98D48FECEF76A852E,"Alternatively, you can use the ID of a node. For example:
-
-stream = modeler.script.stream()
-node = stream.findByID(""id49CVL4GHVV8"") the Derive node ID
-node.setPropertyValue(""mode"", ""Multiple"")
-node.setPropertyValue(""name_extension"", ""new_derive"")
-
-To obtain the ID for any node in a flow, click the Scripting icon on the toolbar, then select the desired node in your flow and click Insert selected node ID.
-"
-0CB42F245DF436AF2BCCB54B612786CA493B917B,0CB42F245DF436AF2BCCB54B612786CA493B917B," Importing, replacing, and deleting nodes
-
-Along with creating and connecting nodes, it's often necessary to replace and delete nodes from a flow. The methods that are available for importing, replacing, and deleting nodes are summarized in the following table.
-
-
-
-Methods for importing, replacing, and deleting nodes
-
-Table 1. Methods for importing, replacing, and deleting nodes
-
- Method Return type Description
-
- s.replace(originalNode, replacementNode, discardOriginal) Not applicable Replaces the specified node from the specified flow. Both the original node and replacement node must be owned by the specified flow.
- s.insert(source, nodes, newIDs) List Inserts copies of the nodes in the supplied list. It's assumed that all nodes in the supplied list are contained within the specified flow. The newIDs flag indicates whether new IDs should be generated for each node, or whether the existing ID should be copied and used. It's assumed that all nodes in a flow have a unique ID, so this flag must be set to True if the source flow is the same as the specified flow. The method returns the list of newly inserted nodes, where the order of the nodes is undefined (that is, the ordering is not necessarily the same as the order of the nodes in the input list).
- s.delete(node) Not applicable Deletes the specified node from the specified flow. The node must be owned by the specified flow.
-"
-1243D4C8499CC9BE45CD9C1F6EB34254F1B9B4D7_0,1243D4C8499CC9BE45CD9C1F6EB34254F1B9B4D7," Getting information about nodes
-
-Nodes fall into a number of different categories such as data import and export nodes, model building nodes, and other types of nodes. Every node provides a number of methods that can be used to find out information about the node.
-
-The methods that can be used to obtain the ID, name, and label of a node are summarized in the following table.
-
-
-
-Methods to obtain the ID, name, and label of a node
-
-Table 1. Methods to obtain the ID, name, and label of a node
-
- Method Return type Description
-
- n.getLabel() string Returns the display label of the specified node. The label is the value of the property custom_name only if that property is a non-empty string and the use_custom_name property is not set; otherwise, the label is the value of getName().
- n.setLabel(label) Not applicable Sets the display label of the specified node. If the new label is a non-empty string it is assigned to the property custom_name, and False is assigned to the property use_custom_name so that the specified label takes precedence; otherwise, an empty string is assigned to the property custom_name and True is assigned to the property use_custom_name.
- n.getName() string Returns the name of the specified node.
- n.getID() string Returns the ID of the specified node. A new ID is created each time a new node is created. The ID is persisted with the node when it's saved as part of a flow so that when the flow is opened, the node IDs are preserved. However, if a saved node is inserted into a flow, the inserted node is considered to be a new object and will be allocated a new ID.
-
-
-
-Methods that can be used to obtain other information about a node are summarized in the following table.
-
-
-
-Methods for obtaining information about a node
-
-Table 2. Methods for obtaining information about a node
-
- Method Return type Description
-
- n.getTypeName() string Returns the scripting name of this node. This is the same name that could be used to create a new instance of this node.
-"
-1243D4C8499CC9BE45CD9C1F6EB34254F1B9B4D7_1,1243D4C8499CC9BE45CD9C1F6EB34254F1B9B4D7," n.isInitial() Boolean Returns True if this is an initial node (one that occurs at the start of a flow).
- n.isInline() Boolean Returns True if this is an in-line node (one that occurs mid-flow).
- n.isTerminal() Boolean Returns True if this is a terminal node (one that occurs at the end of a flow).
- n.getXPosition() int Returns the x position offset of the node in the flow.
- n.getYPosition() int Returns the y position offset of the node in the flow.
- n.setXYPosition(x, y) Not applicable Sets the position of the node in the flow.
- n.setPositionBetween(source, target) Not applicable Sets the position of the node in the flow so that it's positioned between the supplied nodes.
- n.isCacheEnabled() Boolean Returns True if the cache is enabled; returns False otherwise.
- n.setCacheEnabled(val) Not applicable Enables or disables the cache for this object. If the cache is full and the caching becomes disabled, the cache is flushed.
-"
-AB42FF6B754A2E29FCB56B0137EEDDF17F8EE271_0,AB42FF6B754A2E29FCB56B0137EEDDF17F8EE271," Linking and unlinking nodes
-
-When you add a new node to a flow, you must connect it to a sequence of nodes before it can be used. Flows provide a number of methods for linking and unlinking nodes. These methods are summarized in the following table.
-
-
-
-Methods for linking and unlinking nodes
-
-Table 1. Methods for linking and unlinking nodes
-
- Method Return type Description
-
- s.link(source, target) Not applicable Creates a new link between the source and the target nodes.
- s.link(source, targets) Not applicable Creates new links between the source node and each target node in the supplied list.
- s.linkBetween(inserted, source, target) Not applicable Connects a node between two other node instances (the source and target nodes) and sets the position of the inserted node to be between them. Any direct link between the source and target nodes is removed first.
- s.linkPath(path) Not applicable Creates a new path between node instances. The first node is linked to the second, the second is linked to the third, and so on.
- s.unlink(source, target) Not applicable Removes any direct link between the source and the target nodes.
- s.unlink(source, targets) Not applicable Removes any direct links between the source node and each object in the targets list.
- s.unlinkPath(path) Not applicable Removes any path that exists between node instances.
- s.disconnect(node) Not applicable Removes any links between the supplied node and any other nodes in the specified flow.
- s.isValidLink(source, target) boolean Returns True if it would be valid to create a link between the specified source and target nodes. This method checks that both objects belong to the specified flow, that the source node can supply a link and the target node can receive a link, and that creating such a link will not cause a circularity in the flow.
-
-
-
-The example script that follows performs these five tasks:
-
-
-
-1. Creates a Data Asset node, a Filter node, and a Table output node.
-2. Connects the nodes together.
-"
-AB42FF6B754A2E29FCB56B0137EEDDF17F8EE271_1,AB42FF6B754A2E29FCB56B0137EEDDF17F8EE271,"3. Filters the field ""Drug"" from the resulting output.
-4. Runs the Table node.
-
-
-
-stream = modeler.script.stream()
-sourcenode = stream.findByID(""idGXVBG5FBZH"")
-filternode = stream.createAt(""filter"", ""Filter"", 192, 64)
-tablenode = stream.createAt(""table"", ""Table"", 288, 64)
-stream.link(sourcenode, filternode)
-stream.link(filternode, tablenode)
-filternode.setKeyedPropertyValue(""include"", ""Drug"", False)
-"
-F0EF147DBC0554F53B331E7B6D5715D0269FFBA8,F0EF147DBC0554F53B331E7B6D5715D0269FFBA8," Referencing existing nodes
-
-A flow is often pre-built with some parameters that must be modified before the flow runs. Modifying these parameters involves the following tasks:
-
-
-
-"
-5EE63FCC911BA90930D413B58E1310EFE0E24243,5EE63FCC911BA90930D413B58E1310EFE0E24243," Traversing through nodes in a flow
-
-A common requirement is to identify nodes that are either upstream or downstream of a particular node. The flow provides a number of methods that can be used to identify these nodes. These methods are summarized in the following table.
-
-
-
-Methods to identify upstream and downstream nodes
-
-Table 1. Methods to identify upstream and downstream nodes
-
- Method Return type Description
-
- s.iterator() Iterator Returns an iterator over the node objects that are contained in the specified flow. If the flow is modified between calls of the next() function, the behavior of the iterator is undefined.
- s.predecessorAt(node, index) Node Returns the specified immediate predecessor of the supplied node or None if the index is out of bounds.
- s.predecessorCount(node) int Returns the number of immediate predecessors of the supplied node.
- s.predecessors(node) List Returns the immediate predecessors of the supplied node.
- s.successorAt(node, index) Node Returns the specified immediate successor of the supplied node or None if the index is out of bounds.
-"
-B6EC6454711B4946DBC663324DC478953723B1DD,B6EC6454711B4946DBC663324DC478953723B1DD," Creating nodes and modifying flows
-
-In some situations, you might want to add new nodes to existing flows. Adding nodes to existing flows typically involves the following tasks:
-
-
-
-"
-9E77548AF396E9E9474371705BCFFF55684C5760,9E77548AF396E9E9474371705BCFFF55684C5760," Object-oriented programming
-
-Object-oriented programming is based on the notion of creating a model of the target problem in your programs. Object-oriented programming reduces programming errors and promotes the reuse of code. Python is an object-oriented language. Objects defined in Python have the following features:
-
-
-
-* Identity. Each object must be distinct, and this must be testable. The is and is not tests exist for this purpose.
-* State. Each object must be able to store state. Attributes, such as fields and instance variables, exist for this purpose.
-* Behavior. Each object must be able to manipulate its state. Methods exist for this purpose.
-
-
-
-Python includes the following features for supporting object-oriented programming:
-
-
-
-* Class-based object creation. Classes are templates for the creation of objects. Objects are data structures with associated behavior.
-"
-381D767DECD07EF388611FD22C3F08FB89BA73EC_0,381D767DECD07EF388611FD22C3F08FB89BA73EC," The scripting context
-
-The modeler.script module provides the context in which a script runs. The module is automatically imported into an SPSS® Modeler script at run time. The module defines four functions that provide a script with access to its execution environment:
-
-
-
-* The session() function returns the session for the script. The session defines information such as the locale and the SPSS Modeler backend (either a local process or a networked SPSS Modeler Server) that's being used to run any flows.
-* The stream() function can be used with flow and SuperNode scripts. This function returns the flow that owns either the flow script or the SuperNode script that's being run.
-* The diagram() function can be used with SuperNode scripts. This function returns the diagram within the SuperNode. For other script types, this function returns the same as the stream() function.
-* The supernode() function can be used with SuperNode scripts. This function returns the SuperNode that owns the script that's being run.
-
-
-
-The four functions and their outputs are summarized in the following table.
-
-
-
-Summary of modeler.script functions
-
-Table 1. Summary of modeler.script functions
-
- Script type session() stream() diagram() supernode()
-
- Standalone Returns a session Returns the current managed flow at the time the script was invoked (for example, the flow passed via the batch mode -stream option), or None. Same as for stream() Not applicable
- Flow Returns a session Returns a flow Same as for stream() Not applicable
- SuperNode Returns a session Returns a flow Returns a SuperNode flow Returns a SuperNode
-
-
-
-The modeler.script module also defines a way of terminating the script with an exit code. The exit(exit-code) function stops the script from running and returns the supplied integer exit code.
-
-One of the methods that's defined for a flow is runAll(List). This method runs all executable nodes. Any models or outputs that are generated by running the nodes are added to the supplied list.
-
-"
-381D767DECD07EF388611FD22C3F08FB89BA73EC_1,381D767DECD07EF388611FD22C3F08FB89BA73EC,"It's common for a flow run to generate outputs such as models, graphs, and other output. To capture this output, a script can supply a variable that's initialized to a list. For example:
-
-stream = modeler.script.stream()
-results = []
-stream.runAll(results)
-
-When execution is complete, any objects that are generated by the execution can be accessed from the results list.
-"
-65998CB8747B70477477179E023332FD410E72D6,65998CB8747B70477477179E023332FD410E72D6," Scripting in SPSS Modeler
-"
-B416F3605ADF246170E1B462EE0F2CFCDF5E591B,B416F3605ADF246170E1B462EE0F2CFCDF5E591B," Setting properties
-
-Nodes, flows, models, and outputs all have properties that can be accessed and, in most cases, set. Properties are typically used to modify the behavior or appearance of the object. The methods that are available for accessing and setting object properties are summarized in the following table.
-
-
-
-Methods for accessing and setting object properties
-
-Table 1. Methods for accessing and setting object properties
-
- Method Return type Description
-
- p.getPropertyValue(propertyName) Object Returns the value of the named property or None if no such property exists.
- p.setPropertyValue(propertyName, value) Not applicable Sets the value of the named property.
- p.setPropertyValues(properties) Not applicable Sets the values of the named properties. Each entry in the properties map consists of a key that represents the property name and the value that should be assigned to that property.
- p.getKeyedPropertyValue( propertyName, keyName) Object Returns the value of the named property and associated key or None if no such property or key exists.
- p.setKeyedPropertyValue( propertyName, keyName, value) Not applicable Sets the value of the named property and key.
-
-
-
-For example, the following script sets the value of a Derive node for a flow:
-
-stream = modeler.script.stream()
-node = stream.findByType(""derive"", None)
-node.setPropertyValue(""name_extension"", ""new_derive"")
-
-Alternatively, you might want to filter a field from a Filter node. In this case, the value is also keyed on the field name. For example:
-
-stream = modeler.script.stream()
- Locate the filter node ...
-node = stream.findByType(""filter"", None)
-"
-542F90CA456DCCC3D79DBF6DC9E8A6755B3BA69E,542F90CA456DCCC3D79DBF6DC9E8A6755B3BA69E," Running a flow
-
-The following example runs all executable nodes in the flow, and is the simplest type of flow script:
-
-modeler.script.stream().runAll(None)
-
-The following example also runs all executable nodes in the flow:
-
-stream = modeler.script.stream()
-stream.runAll(None)
-
-In this example, the flow is stored in a variable called stream. Storing the flow in a variable is useful because a script is typically used to modify either the flow or the nodes within a flow. Creating a variable that stores the flow results in a more concise script.
-"
-D1CDE4FF34352A6E5CDC9914FD26CF72574E2D59,D1CDE4FF34352A6E5CDC9914FD26CF72574E2D59," Flows
-
-A flow is the main IBM® SPSS® Modeler document type. It can be saved, loaded, edited and executed. Flows can also have parameters, global values, a script, and other information associated with them.
-"
-6524DFDEABF32BAE384ACB9BB21637ADE3B4AC4F,6524DFDEABF32BAE384ACB9BB21637ADE3B4AC4F," Flows, SuperNode streams, and diagrams
-
-Most of the time, the term flow means the same thing, regardless of whether it's a flow that's loaded from a file or used within a SuperNode. It generally means a collection of nodes that are connected together and can be executed. In scripting, however, not all operations are supported in all places. So as a script author, you should be aware of which flow variant they're using.
-"
-A4799F6BDEA1B1508528FC647DAD5D1B2EF777AA,A4799F6BDEA1B1508528FC647DAD5D1B2EF777AA," SuperNode flows
-
-A SuperNode flow is the type of flow used within a SuperNode. Like a normal flow, it contains nodes that are linked together. SuperNode flows differ from normal flows in various ways:
-
-
-
-"
-D6DB1FBF1B0A11FD3423B6F057182019496FF3F5,D6DB1FBF1B0A11FD3423B6F057182019496FF3F5," Python scripting
-
-This guide to the Python scripting language is an introduction to the components that you're most likely to use when scripting in SPSS Modeler, including concepts and programming basics.
-
-This provides you with enough knowledge to start developing your own Python scripts to use in SPSS Modeler.
-"
-C6B9BD6294C9A3EF6CD7E45E1B3765C061D92CC3_0,C6B9BD6294C9A3EF6CD7E45E1B3765C061D92CC3," Using non-ASCII characters
-
-To use non-ASCII characters, Python requires explicit encoding and decoding of strings into Unicode. In SPSS Modeler, Python scripts are assumed to be encoded in UTF-8, which is a standard Unicode encoding that supports non-ASCII characters. The following script will compile because the Python compiler has been set to UTF-8 by SPSS Modeler.
-
-
-
-However, the resulting node has an incorrect label.
-
-Figure 1. Node label containing non-ASCII characters, displayed incorrectly
-
-
-
-The label is incorrect because the string literal itself has been converted to an ASCII string by Python.
-
-Python allows Unicode string literals to be specified by adding a u character prefix before the string literal:
-
-
-
-This will create a Unicode string and the label will be appear correctly.
-
-Figure 2. Node label containing non-ASCII characters, displayed correctly
-
-"
-C6B9BD6294C9A3EF6CD7E45E1B3765C061D92CC3_1,C6B9BD6294C9A3EF6CD7E45E1B3765C061D92CC3,"
-
-Using Python and Unicode is a large topic that's beyond the scope of this document. Many books and online resources are available that cover this topic in great detail.
-"
-2413C64687E434B4B2095163A5106C0C62AA3F59,2413C64687E434B4B2095163A5106C0C62AA3F59," Blocks of code
-
-Blocks of code are groups of statements you can use where single statements are expected.
-
-Blocks of code can follow any of the following statements: if, elif, else, for, while, try, except, def, and class. These statements introduce the block of code with the colon character (:). For example:
-
-if x == 1:
-y = 2
-z = 3
-elif:
-y = 4
-z = 5
-
-Use indentation to delimit code blocks (rather than the curly braces used in Java). All lines in a block must be indented to the same position. This is because a change in the indentation indicates the end of a code block. It's common to indent by four spaces per level. We recommend you use spaces to indent the lines, rather than tabs. Spaces and tabs must not be mixed. The lines in the outermost block of a module must start at column one, or a SyntaxError will occur.
-
-The statements that make up a code block (and follow the colon) can also be on a single line, separated by semicolons. For example:
-
-if x == 1: y = 2; z = 3;
-"
-20D6B2732BE17C12226F186559FBEA647799F3B8,20D6B2732BE17C12226F186559FBEA647799F3B8," Examples
-
-The print keyword prints the arguments immediately following it. If the statement is followed by a comma, a new line isn't included in the output. For example:
-
-print ""This demonstrates the use of a"",
-print "" comma at the end of a print statement.""
-
-This will result in the following output:
-
-This demonstrates the use of a comma at the end of a print statement.
-
-The for statement iterates through a block of code. For example:
-
-mylist1 = [""one"", ""two"", ""three""]
-for lv in mylist1:
-print lv
-continue
-
-In this example, three strings are assigned to the list mylist1. The elements of the list are then printed, with one element of each line. This results in the following output:
-
-one
-two
-three
-
-In this example, the iterator lv takes the value of each element in the list mylist1 in turn as the for loop implements the code block for each element. An iterator can be any valid identifier of any length.
-
-The if statement is a conditional statement. It evaluates the condition and returns either true or false, depending on the result of the evaluation. For example:
-
-mylist1 = [""one"", ""two"", ""three""]
-for lv in mylist1:
-if lv == ""two""
-print ""The value of lv is "", lv
-else
-print ""The value of lv is not two, but "", lv
-continue
-
-In this example, the value of the iterator lv is evaluated. If the value of lv is two, a different string is returned to the string that's returned if the value of lv is not two. This results in the following output:
-
-The value of lv is not two, but one
-"
-03C28B0A536906CA3597B4D382759BD791D0CFEC,03C28B0A536906CA3597B4D382759BD791D0CFEC," Identifiers
-
-Identifiers are used to name variables, functions, classes, and keywords.
-
-Identifiers can be any length, but must start with either an alphabetical character of uppercase or lowercase, or the underscore character (_). Names that start with an underscore are generally reserved for internal or private names. After the first character, the identifier can contain any number and combination of alphabetical characters, numbers from 0-9, and the underscore character.
-
-There are some reserved words in Jython that can't be used to name variables, functions, or classes. They fall under the following categories:
-
-
-
-* Statement introducers:assert, break, class, continue, def, del, elif, else, except, exec, finally, for, from, global, if, import, pass, print, raise, return, try, and while
-* Parameter introducers:as, import, and in
-* Operators:and, in, is, lambda, not, and or
-
-
-
-Improper keyword use generally results in a SyntaxError.
-"
-659E43BA12550AA1E885BAEC945B7B1B25FD18E2,659E43BA12550AA1E885BAEC945B7B1B25FD18E2," Lists
-
-Lists are sequences of elements. A list can contain any number of elements, and the elements of the list can be any type of object. Lists can also be thought of as arrays. The number of elements in a list can increase or decrease as elements are added, removed, or replaced.
-"
-F837E34ED0AD4739783010D9FFD3684C37FD465C_0,F837E34ED0AD4739783010D9FFD3684C37FD465C," Mathematical methods
-
-From the math module you can access useful mathematical methods. Some of these methods are listed in the following table. Unless specified otherwise, all values are returned as floats.
-
-
-
-Mathematical methods
-
-Table 1. Mathematical methods
-
- Method Usage
-
- math.ceil(x) Return the ceiling of x as a float, that is the smallest integer greater than or equal to x
- math.copysign(x, y) Return x with the sign of y. copysign(1, -0.0) returns -1
- math.fabs(x) Return the absolute value of x
- math.factorial(x) Return x factorial. If x is negative or not an integer, a ValueError is raised.
- math.floor(x) Return the floor of x as a float, that is the largest integer less than or equal to x
- math.frexp(x) Return the mantissa (m) and exponent (e) of x as the pair (m, e). m is a float and e is an integer, such that x == m * 2e exactly. If x is zero, returns (0.0, 0), otherwise 0.5 <= abs(m) < 1.
- math.fsum(iterable) Return an accurate floating point sum of values in iterable
- math.isinf(x) Check if the float x is positive or negative infinitive
- math.isnan(x) Check if the float x is NaN (not a number)
- math.ldexp(x, i) Return x * (2i). This is essentially the inverse of the function frexp.
- math.modf(x) Return the fractional and integer parts of x. Both results carry the sign of x and are floats.
- math.trunc(x) Return the Real value x, that has been truncated to an Integral.
- math.exp(x) Return ex
- math.log(x[, base]) Return the logarithm of x to the given value of base. If base is not specified, the natural logarithm of x is returned.
- math.log1p(x) Return the natural logarithm of 1+x (base e)
- math.log10(x) Return the base-10 logarithm of x
-"
-F837E34ED0AD4739783010D9FFD3684C37FD465C_1,F837E34ED0AD4739783010D9FFD3684C37FD465C," math.pow(x, y) Return x raised to the power y. pow(1.0, x) and pow(x, 0.0) always return 1, even when x is zero or NaN.
- math.sqrt(x) Return the square root of x
-
-
-
-Along with the mathematical functions, there are also some useful trigonometric methods. These methods are listed in the following table.
-
-
-
-Trigonometric methods
-
-Table 2. Trigonometric methods
-
- Method Usage
-
- math.acos(x) Return the arc cosine of x in radians
- math.asin(x) Return the arc sine of x in radians
- math.atan(x) Return the arc tangent of x in radians
- math.atan2(y, x) Return atan(y / x) in radians.
- math.cos(x) Return the cosine of x in radians.
- math.hypot(x, y) Return the Euclidean norm sqrt(xx + yy). This is the length of the vector from the origin to the point (x, y).
- math.sin(x) Return the sine of x in radians
- math.tan(x) Return the tangent of x in radians
- math.degrees(x) Convert angle x from radians to degrees
- math.radians(x) Convert angle x from degrees to radians
- math.acosh(x) Return the inverse hyperbolic cosine of x
- math.asinh(x) Return the inverse hyperbolic sine of x
- math.atanh(x) Return the inverse hyperbolic tangent of x
- math.cosh(x) Return the hyperbolic cosine of x
- math.sinh(x) Return the hyperbolic cosine of x
- math.tanh(x) Return the hyperbolic tangent of x
-
-
-
-There are also two mathematical constants. The value of math.pi is the mathematical constant pi. The value of math.e is the mathematical constant e.
-"
-48CCA78CEB92570BCE08F4E1A5677E8CD7936095,48CCA78CEB92570BCE08F4E1A5677E8CD7936095," Operations
-
-Use an equals sign (=) to assign values.
-
-For example, to assign the value 3 to a variable called x, you would use the following statement:
-
-x = 3
-
-You can also use the equals sign to assign string type data to a variable. For example, to assign the value a string value to the variable y, you would use the following statement:
-
-y = ""a string value""
-
-The following table lists some commonly used comparison and numeric operations, and their descriptions.
-
-
-
-Common comparison and numeric operations
-
-Table 1. Common comparison and numeric operations
-
- Operation Description
-
- x < y Is x less than y?
- x > y Is x greater than y?
- x <= y Is x less than or equal to y?
- x >= y Is x greater than or equal to y?
- x == y Is x equal to y?
- x != y Is x not equal to y?
- x <> y Is x not equal to y?
- x + y Add y to x
- x - y Subtract y from x
- x * y Multiply x by y
-"
-622526F6C171CED140394F3DD707B612778B661E,622526F6C171CED140394F3DD707B612778B661E," Passing arguments to a script
-
-Passing arguments to a script is useful because a script can be used repeatedly without modification.
-
-The arguments you pass on the command line are passed as values in the list sys.argv. You can use the len(sys.argv) command to obtain the number of values passed. For example:
-
-import sys
-print ""test1""
-print sys.argv[0]
-print sys.argv[1]
-print len(sys.argv)
-
-In this example, the import command imports the entire sys class so that you can use the existing methods for this class, such as argv.
-
-The script in this example can be invoked using the following line:
-
-/u/mjloos/test1 mike don
-
-The result is the following output:
-
-/u/mjloos/test1 mike don
-test1
-mike
-"
-03A70C271775C3B15541B86E53E467844EF87296,03A70C271775C3B15541B86E53E467844EF87296," Remarks
-
-Remarks are comments that are introduced by the pound (or hash) sign (). All text that follows the pound sign on the same line is considered part of the remark and is ignored. A remark can start in any column.
-
-The following example demonstrates the use of remarks:
-
-"
-9F27A4650B0B0BF36223937D0CF60E460B66A723,9F27A4650B0B0BF36223937D0CF60E460B66A723," Statement syntax
-
-The statement syntax for Python is very simple.
-
-In general, each source line is a single statement. Except for expression and assignment statements, each statement is introduced by a keyword name, such as if or for. Blank lines or remark lines can be inserted anywhere between any statements in the code. If there's more than one statement on a line, each statement must be separated by a semicolon (;).
-
-Very long statements can continue on more than one line. In this case, the statement that is to continue on to the next line must end with a backslash (). For example:
-
-x = ""A loooooooooooooooooooong string"" +
-""another looooooooooooooooooong string""
-
-When you enclose a structure by parentheses (()), brackets ([]), or curly braces ({}), the statement can be continued on a new line after any comma, without having to insert a backslash. For example:
-
-"
-14F850B810E969CE2646D5641300FB407A6C49C5,14F850B810E969CE2646D5641300FB407A6C49C5," Strings
-
-A string is an immutable sequence of characters that's treated as a value. Strings support all of the immutable sequence functions and operators that result in a new string. For example, ""abcdef""[1:4] results in the output ""bcd"".
-
-In Python, characters are represented by strings of length one.
-
-Strings literals are defined by the use of single or triple quoting. Strings that are defined using single quotes can't span lines, while strings that are defined using triple quotes can. You can enclose a string in single quotes (') or double quotes (""). A quoting character may contain the other quoting character un-escaped or the quoting character escaped, that's preceded by the backslash () character.
-"
-398A23291331968098B47496D504743991855A61_0,398A23291331968098B47496D504743991855A61," kdemodel properties
-
-Kernel Density Estimation (KDE)© uses the Ball Tree or KD Tree algorithms for efficient queries, and combines concepts from unsupervised learning, feature engineering, and data modeling. Neighbor-based approaches such as KDE are some of the most popular and useful density estimation techniques. The KDE Modeling and KDE Simulation nodes in SPSS Modeler expose the core features and commonly used parameters of the KDE library. The nodes are implemented in Python.
-
-
-
-kdemodel properties
-
-Table 1. kdemodel properties
-
- kdemodel properties Data type Property description
-
- custom_fields boolean This option tells the node to use field information specified here instead of that given in any upstream Type node(s). After selecting this option, specify the following fields as required.
- inputs field List of the field names for input.
- bandwidth double Default is 1.
- kernel string The kernel to use: gaussian, tophat, epanechnikov, exponential, linear, or cosine. Default is gaussian.
- algorithm string The tree algorithm to use: kd_tree, ball_tree, or auto. Default is auto.
- metric string The metric to use when calculating distance. For the kd_tree algorithm, choose from: Euclidean, Chebyshev, Cityblock, Minkowski, Manhattan, Infinity, P, L2, or L1. For the ball_tree algorithm, choose from: Euclidian, Braycurtis, Chebyshev, Canberra, Cityblock, Dice, Hamming, Infinity, Jaccard, L1, L2, Minkowski, Matching, Manhattan, P, Rogersanimoto, Russellrao, Sokalmichener, Sokalsneath, or Kulsinski. Default is Euclidean.
- atol float The desired absolute tolerance of the result. A larger tolerance will generally lead to faster execution. Default is 0.0.
-"
-398A23291331968098B47496D504743991855A61_1,398A23291331968098B47496D504743991855A61," rtol float The desired relative tolerance of the result. A larger tolerance will generally lead to faster execution. Default is 1E-8.
- breadth_first boolean Set to True to use a breadth-first approach. Set to False to use a depth-first approach. Default is True.
- leaf_size integer The leaf size of the underlying tree. Default is 40. Changing this value may significantly impact the performance.
- p_value double Specify the P Value to use if you're using Minkowski for the metric. Default is 1.5.
- custom_name
-"
-0EA3470872BF545059B23B040AB1EB393630A29D_0,0EA3470872BF545059B23B040AB1EB393630A29D," kdeexport properties
-
-Kernel Density Estimation (KDE)© uses the Ball Tree or KD Tree algorithms for efficient queries, and combines concepts from unsupervised learning, feature engineering, and data modeling. Neighbor-based approaches such as KDE are some of the most popular and useful density estimation techniques. The KDE Modeling and KDE Simulation nodes in SPSS Modeler expose the core features and commonly used parameters of the KDE library. The nodes are implemented in Python.
-
-
-
-kdeexport properties
-
-Table 1. kdeexport properties
-
- kdeexport properties Data type Property description
-
- custom_fields boolean This option tells the node to use field information specified here instead of that given in any upstream Type node(s). After selecting this option, specify the fields as required.
- inputs field List of the field names for input.
- bandwidth double Default is 1.
- kernel string The kernel to use: gaussian or tophat. Default is gaussian.
- algorithm string The tree algorithm to use: kd_tree, ball_tree, or auto. Default is auto.
- metric string The metric to use when calculating distance. For the kd_tree algorithm, choose from: Euclidean, Chebyshev, Cityblock, Minkowski, Manhattan, Infinity, P, L2, or L1. For the ball_tree algorithm, choose from: Euclidian, Braycurtis, Chebyshev, Canberra, Cityblock, Dice, Hamming, Infinity, Jaccard, L1, L2, Minkowski, Matching, Manhattan, P, Rogersanimoto, Russellrao, Sokalmichener, Sokalsneath, or Kulsinski. Default is Euclidean.
- atol float The desired absolute tolerance of the result. A larger tolerance will generally lead to faster execution. Default is 0.0.
-"
-0EA3470872BF545059B23B040AB1EB393630A29D_1,0EA3470872BF545059B23B040AB1EB393630A29D," rtol float The desired relative tolerance of the result. A larger tolerance will generally lead to faster execution. Default is 1E-8.
- breadth_first boolean Set to True to use a breadth-first approach. Set to False to use a depth-first approach. Default is True.
-"
-9BEA57D80C215D963CB0C54046136FB3E88C7D5C,9BEA57D80C215D963CB0C54046136FB3E88C7D5C," kdeapply properties
-
-You can use the KDE Modeling node to generate a KDE model nugget. The scripting name of this model nugget is kdeapply. For information on scripting the modeling node itself, see [kdemodel properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/kdemodelnodeslots.htmlkdemodelnodeslots).
-
-
-
-kdeapply properties
-
-Table 1. kdeapply properties
-
- kdeapply properties Data type Property description
-
- out_log_density boolean Specify True or False to include or exclude the log density value in the output. Default is False.
-"
-720712D40BFDEF5974C7C025A6AC0D0649124B79_0,720712D40BFDEF5974C7C025A6AC0D0649124B79," kmeansasnode properties
-
-K-Means is one of the most commonly used clustering algorithms. It clusters data points into a predefined number of clusters. The K-Means-AS node in SPSS Modeler is implemented in Spark. For details about K-Means algorithms, see [https://spark.apache.org/docs/2.2.0/ml-clustering.html](https://spark.apache.org/docs/2.2.0/ml-clustering.html). Note that the K-Means-AS node performs one-hot encoding automatically for categorical variables.
-
-
-
-kmeansasnode properties
-
-Table 1. kmeansasnode properties
-
- kmeansasnode Properties Values Property description
-
- roleUse string Specify predefined to use predefined roles, or custom to use custom field assignments. Default is predefined.
- autoModel Boolean Specify true to use the default name ($S-prediction) for the new generated scoring field, or false to use a custom name. Default is true.
- features field List of the field names for input when the roleUse property is set to custom.
- name string The name of the new generated scoring field when the autoModel property is set to false.
- clustersNum integer The number of clusters to create. Default is 5.
- initMode string The initialization algorithm. Possible values are k-means or random. Default is k-means .
- initSteps integer The number of initialization steps when initMode is set to k-means . Default is 2.
- advancedSettings Boolean Specify true to make the following four properties available. Default is false.
- maxIteration integer Maximum number of iterations for clustering. Default is 20.
-"
-720712D40BFDEF5974C7C025A6AC0D0649124B79_1,720712D40BFDEF5974C7C025A6AC0D0649124B79," tolerance string The tolerance to stop the iterations. Possible settings are 1.0E-1, 1.0E-2, ..., 1.0E-6. Default is 1.0E-4.
- setSeed Boolean Specify true to use a custom random seed. Default is false.
-"
-6F35B89192B6C9A233B859CF66FCC435F3F9E650,6F35B89192B6C9A233B859CF66FCC435F3F9E650," kmeansnode properties
-
-The K-Means node clusters the data set into distinct groups (or clusters). The method defines a fixed number of clusters, iteratively assigns records to clusters, and adjusts the cluster centers until further refinement can no longer improve the model. Instead of trying to predict an outcome, k-means uses a process known as unsupervised learning to uncover patterns in the set of input fields.
-
-
-
-kmeansnode properties
-
-Table 1. kmeansnode properties
-
- kmeansnode Properties Values Property description
-
- inputs [field1 ... fieldN] K-means models perform cluster analysis on a set of input fields but do not use a target field. Weight and frequency fields are not used. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information.
- num_clusters number
- gen_distance flag
- cluster_label StringNumber
- label_prefix string
- mode SimpleExpert
- stop_on DefaultCustom
- max_iterations number
- tolerance number
-"
-57D441EF305442BCDBBE48B980B87D47B825FFF9,57D441EF305442BCDBBE48B980B87D47B825FFF9," applykmeansnode properties
-
-You can use K-Means modeling nodes to generate a K-Means model nugget. The scripting name of this model nugget is applykmeansnode. No other properties exist for this model nugget. For more information on scripting the modeling node itself, see [kmeansnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/kmeansnodeslots.htmlkmeansnodeslots).
-"
-CC60FEBF8E5D1907CE0CCF3868CD9E4B494AA1BF,CC60FEBF8E5D1907CE0CCF3868CD9E4B494AA1BF," knnnode properties
-
-The k-Nearest Neighbor (KNN) node associates a new case with the category or value of the k objects nearest to it in the predictor space, where k is an integer. Similar cases are near each other and dissimilar cases are distant from each other.
-
-
-
-knnnode properties
-
-Table 1. knnnode properties
-
- knnnode Properties Values Property description
-
- analysis PredictTargetIdentifyNeighbors
- objective BalanceSpeedAccuracyCustom
- normalize_ranges flag
- use_case_labels flag Check box to enable next option.
- case_labels_field field
- identify_focal_cases flag Check box to enable next option.
- focal_cases_field field
- automatic_k_selection flag
- fixed_k integer Enabled only if automatic_k_selectio is False.
- minimum_k integer Enabled only if automatic_k_selectio is True.
- maximum_k integer
- distance_computation EuclideanCityBlock
- weight_by_importance flag
- range_predictions MeanMedian
- perform_feature_selection flag
- forced_entry_inputs [field1 ... fieldN]
- stop_on_error_ratio flag
- number_to_select integer
- minimum_change number
- validation_fold_assign_by_field flag
- number_of_folds integer Enabled only if validation_fold_assign_by_field is False
- set_random_seed flag
- random_seed number
- folds_field field Enabled only if validation_fold_assign_by_field is True
- all_probabilities flag
- save_distances flag
- calculate_raw_propensities flag
-"
-8B32EB4742D88B5CEC2E1C9616958BD7F8986785,8B32EB4742D88B5CEC2E1C9616958BD7F8986785," applyknnnode properties
-
-You can use KNN modeling nodes to generate a KNN model nugget. The scripting name of this model nugget is applyknnnode. For more information on scripting the modeling node itself, see [knnnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/knnnodeslots.htmlknnnodeslots).
-
-
-
-applyknnnode properties
-
-Table 1. applyknnnode properties
-
- applyknnnode Properties Values Property description
-
- all_probabilities flag
-"
-0563FC6874B43FA0BCA09AE54805FE98BFA33042,0563FC6874B43FA0BCA09AE54805FE98BFA33042," kohonennode properties
-
-The Kohonen node generates a type of neural network that can be used to cluster the data set into distinct groups. When the network is fully trained, records that are similar should be close together on the output map, while records that are different will be far apart. You can look at the number of observations captured by each unit in the model nugget to identify the strong units. This may give you a sense of the appropriate number of clusters.
-
-
-
-kohonennode properties
-
-Table 1. kohonennode properties
-
- kohonennode Properties Values Property description
-
- inputs [field1 ... fieldN] Kohonen models use a list of input fields, but no target. Frequency and weight fields are not used. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information.
- continue flag
- show_feedback flag
- stop_on Default Time
- time number
- optimize Speed Memory Use to specify whether model building should be optimized for speed or for memory.
- cluster_label flag
- mode Simple Expert
- width number
- length number
- decay_style Linear Exponential
- phase1_neighborhood number
- phase1_eta number
- phase1_cycles number
- phase2_neighborhood number
- phase2_eta number
- phase2_cycles number
-"
-2939716BFA6089C8B6373ED7C6397AF71389A5C8,2939716BFA6089C8B6373ED7C6397AF71389A5C8," applykohonennode properties
-
-You can use Kohonen modeling nodes to generate a Kohonen model nugget. The scripting name of this model nugget is applykohonennode. For more information on scripting the modeling node itself, see [c50node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/c50nodeslots.htmlc50nodeslots).
-
-
-
-applykohonennode properties
-
-Table 1. applykohonennode properties
-
- applykohonennode Properties Values Property description
-
- enable_sql_generation falsetruenative When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations.
-"
-87D2FF4289EDCBF7FCFA7FC7FD460DEB02ECC71B_0,87D2FF4289EDCBF7FCFA7FC7FD460DEB02ECC71B," logregnode properties
-
-Logistic regression is a statistical technique for classifying records based on values of input fields. It is analogous to linear regression but takes a categorical target field instead of a numeric range.
-
-
-
-logregnode properties
-
-Table 1. logregnode properties
-
- logregnode Properties Values Property description
-
- target field Logistic regression models require a single target field and one or more input fields. Frequency and weight fields are not used. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information.
- logistic_procedure BinomialMultinomial
- include_constant flag
- mode SimpleExpert
- method EnterStepwiseForwardsBackwardsBackwardsStepwise
- binomial_method EnterForwardsBackwards
- model_type MainEffectsFullFactorialCustom When FullFactorial is specified as the model type, stepping methods will not run, even if specified. Instead, Enter will be the method used. If the model type is set to Custom but no custom fields are specified, a main-effects model will be built.
- custom_terms [[BP Sex][BP][Age]]
- multinomial_base_category string Specifies how the reference category is determined.
- binomial_categorical_input string
- binomial_input_contrast IndicatorSimpleDifferenceHelmertRepeatedPolynomialDeviation Keyed property for categorical input that specifies how the contrast is determined. See the example for usage.
- binomial_input_category FirstLast Keyed property for categorical input that specifies how the reference category is determined. See the example for usage.
- scale NoneUserDefinedPearsonDeviance
- scale_value number
- all_probabilities flag
-"
-87D2FF4289EDCBF7FCFA7FC7FD460DEB02ECC71B_1,87D2FF4289EDCBF7FCFA7FC7FD460DEB02ECC71B," tolerance 1.0E-51.0E-61.0E-71.0E-81.0E-91.0E-10
- min_terms number
- use_max_terms flag
- max_terms number
- entry_criterion ScoreLR
- removal_criterion LRWald
- probability_entry number
- probability_removal number
- binomial_probability_entry number
- binomial_probability_removal number
- requirements HierarchyDiscreteHierarchyAllContainmentNone
- max_iterations number
- max_steps number
- p_converge 1.0E-41.0E-51.0E-61.0E-71.0E-80
- l_converge 1.0E-11.0E-21.0E-31.0E-41.0E-50
- delta number
- iteration_history flag
- history_steps number
- summary flag
- likelihood_ratio flag
- asymptotic_correlation flag
- goodness_fit flag
- parameters flag
- confidence_interval number
- asymptotic_covariance flag
- classification_table flag
- stepwise_summary flag
- info_criteria flag
- monotonicity_measures flag
- binomial_output_display at_each_stepat_last_step
- binomial_goodness_of_fit flag
- binomial_parameters flag
- binomial_iteration_history flag
- binomial_classification_plots flag
- binomial_ci_enable flag
- binomial_ci number
- binomial_residual outliersall
- binomial_residual_enable flag
- binomial_outlier_threshold number
- binomial_classification_cutoff number
- binomial_removal_criterion LRWaldConditional
-"
-7C8BCAFBD032E30DCC7C39E28A2B5DE1E340DA6B,7C8BCAFBD032E30DCC7C39E28A2B5DE1E340DA6B," applylogregnode properties
-
-You can use Logistic modeling nodes to generate a Logistic model nugget. The scripting name of this model nugget is applylogregnode. For more information on scripting the modeling node itself, [logregnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/logregnodeslots.htmllogregnodeslots).
-
-
-
-applylogregnode properties
-
-Table 1. applylogregnode properties
-
- applylogregnode Properties Values Property description
-
- calculate_raw_propensities flag
-"
-7C4F082004DBA0B946D64AA6C0127041F4622C7B,7C4F082004DBA0B946D64AA6C0127041F4622C7B," lsvmnode properties
-
-With the Linear Support Vector Machine (LSVM) node, you can classify data into one of two groups without overfitting. LSVM is linear and works well with wide data sets, such as those with a very large number of records.
-
-
-
-lsvmnode properties
-
-Table 1. lsvmnode properties
-
- lsvmnode Properties Values Property description
-
- intercept flag Includes the intercept in the model. Default value is True.
- target_order AscendingDescending Specifies the sorting order for the categorical target. Ignored for continuous targets. Default is Ascending.
- precision number Used only if measurement level of target field is Continuous. Specifies the parameter related to the sensitiveness of the loss for regression. Minimum is 0 and there is no maximum. Default value is 0.1.
- exclude_missing_values flag When True, a record is excluded if any single value is missing. The default value is False.
- penalty_function L1L2 Specifies the type of penalty function used. The default value is L2.
-"
-5890D52D3DDE4C249AD06C5A4DFE25542723F1C1,5890D52D3DDE4C249AD06C5A4DFE25542723F1C1," applylsvmnode properties
-
-You can use LSVM modeling nodes to generate an LSVM model nugget. The scripting name of this model nugget is applylsvmnode. For more information on scripting the modeling node itself, see [lsvmnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/lsvmnodeslots.html).
-
-
-
-applylsvmnode properties
-
-Table 1. applylsvmnode properties
-
- applylsvmnode Properties Values Property description
-
-"
-3426FB738655136D42FA32BD6CFBFD979A3D5574,3426FB738655136D42FA32BD6CFBFD979A3D5574," matrixnode properties
-
-The Matrix node creates a table that shows relationships between fields. It's most commonly used to show the relationship between two symbolic fields, but it can also show relationships between flag fields or numeric fields.
-
-
-
-matrixnode properties
-
-Table 1. matrixnode properties
-
- matrixnode properties Data type Property description
-
- fields SelectedFlagsNumerics
- row field
- column field
- include_missing_values flag Specifies whether user-missing (blank) and system missing (null) values are included in the row and column output.
- cell_contents CrossTabsFunction
- function_field string
- function SumMeanMinMaxSDev
- sort_mode UnsortedAscendingDescending
- highlight_top number If non-zero, then true.
- highlight_bottom number If non-zero, then true.
- display [CountsExpectedResidualsRowPctColumnPctTotalPct]
- include_totals flag
- use_output_name flag Specifies whether a custom output name is used.
- output_name string If use_output_name is true, specifies the name to use.
- output_mode ScreenFile Used to specify target location for output generated from the output node.
- output_format Formatted (.tab) Delimited (.csv) HTML (.html) Output (.cou) Used to specify the type of output. Both the Formatted and Delimited formats can take the modifier transposed, which transposes the rows and columns in the table.
- paginate_output flag When the output_format is HTML, causes the output to be separated into pages.
-"
-DA3D295DA633CD271FB3970AD2ED4B31BDCB6247_0,DA3D295DA633CD271FB3970AD2ED4B31BDCB6247," meansnode properties
-
-The Means node compares the means between independent groups or between pairs of related fields to test whether a significant difference exists. For example, you could compare mean revenues before and after running a promotion or compare revenues from customers who didn't receive the promotion with those who did.
-
-
-
-meansnode properties
-
-Table 1. meansnode properties
-
- meansnode properties Data type Property description
-
- means_mode BetweenGroupsBetweenFields Specifies the type of means statistic to be executed on the data.
- test_fields [field1 ... fieldn] Specifies the test field when means_mode is set to BetweenGroups.
- grouping_field field Specifies the grouping field.
- paired_fields [field1 field2]field3 field4]...] Specifies the field pairs to use when means_mode is set to BetweenFields.
- label_correlations flag Specifies whether correlation labels are shown in output. This setting applies only when means_mode is set to BetweenFields.
- correlation_mode ProbabilityAbsolute Specifies whether to label correlations by probability or absolute value.
- weak_label string
- medium_label string
- strong_label string
- weak_below_probability number When correlation_mode is set to Probability, specifies the cutoff value for weak correlations. This must be a value between 0 and 1—for example, 0.90.
- strong_above_probability number Cutoff value for strong correlations.
- weak_below_absolute number When correlation_mode is set to Absolute, specifies the cutoff value for weak correlations. This must be a value between 0 and 1—for example, 0.90.
- strong_above_absolute number Cutoff value for strong correlations.
- unimportant_label string
- marginal_label string
- important_label string
- unimportant_below number Cutoff value for low field importance. This must be a value between 0 and 1—for example, 0.90.
-"
-DA3D295DA633CD271FB3970AD2ED4B31BDCB6247_1,DA3D295DA633CD271FB3970AD2ED4B31BDCB6247," important_above number
- use_output_name flag Specifies whether a custom output name is used.
- output_name string Name to use.
- output_mode ScreenFile Specifies the target location for output generated from the output node.
- output_format Formatted (.tab) Delimited (.csv) HTML (.html) Output (.cou) Specifies the type of output.
-"
-A148122DA72AD9FF05B3483D6F50975C50B4AB33_0,A148122DA72AD9FF05B3483D6F50975C50B4AB33," mergenode properties
-
- The Merge node takes multiple input records and creates a single output record containing some or all of the input fields. It's useful for merging data from different sources, such as internal customer data and purchased demographic data.
-
-
-
-mergenode properties
-
-Table 1. mergenode properties
-
- mergenode properties Data type Property description
-
- method Order Keys Condition Rankedcondition Specify whether records are merged in the order they are listed in the data files, if one or more key fields will be used to merge records with the same value in the key fields, if records will be merged if a specified condition is satisfied, or if each row pairing in the primary and all secondary data sets are to be merged; using the ranking expression to sort any multiple matches into order from low to high.
- condition string If method is set to Condition, specifies the condition for including or discarding records.
- key_fields list
- common_keys flag
- join Inner FullOuter PartialOuter Anti
- outer_join_tag.n flag In this property, n is the tag name as displayed in the node properties. Note that multiple tag names may be specified, as any number of datasets could contribute incomplete records.
- single_large_input flag Specifies whether optimization for having one input relatively large compared to the other inputs will be used.
- single_large_input_tag string Specifies the tag name as displayed in the note properties. Note that the usage of this property differs slightly from the outer_join_tag property (flag versus string) because only one input dataset can be specified.
- use_existing_sort_keys flag Specifies whether the inputs are already sorted by one or more key fields.
- existing_sort_keys [['string','Ascending'] \ ['string'','Descending']] Specifies the fields that are already sorted and the direction in which they are sorted.
-"
-A148122DA72AD9FF05B3483D6F50975C50B4AB33_1,A148122DA72AD9FF05B3483D6F50975C50B4AB33," primary_dataset string If method is Rankedcondition, select the primary data set in the merge. This can be considered as the left side of an outer join merge.
- rename_duplicate_fields boolean If method is Rankedcondition, and this is set to Y, if the resulting merged data set contains multiple fields with the same name from different data sources the respective tags from the data sources are added at the start of the field column headers.
- merge_condition string
- ranking_expression string
-"
-ADB2D2B53C7F2A464A38F7DE5D7A74A39E697528,ADB2D2B53C7F2A464A38F7DE5D7A74A39E697528," Common modeling node properties
-
-The following properties are common to some or all modeling nodes. Any exceptions are noted in the documentation for individual modeling nodes as appropriate.
-
-
-
-Common modeling node properties
-
-Table 1. Common modeling node properties
-
- Property Values Property description
-
- custom_fields flag If true, allows you to specify target, input, and other fields for the current node. If false, the current settings from an upstream Type node are used.
- target or targets field or [field1 ... fieldN] Specifies a single target field or multiple target fields depending on the model type.
- inputs [field1 ... fieldN] Input or predictor fields used by the model.
- partition field
- use_partitioned_data flag If a partition field is defined, this option ensures that only data from the training partition is used to build the model.
- use_split_data flag
- splits [field1 ... fieldN] Specifies the field or fields to use for split modeling. Effective only if use_split_data is set to True.
- use_frequency flag Weight and frequency fields are used by specific models as noted for each model type.
- frequency_field field
- use_weight flag
- weight_field field
- use_model_name flag
-"
-5E1CE04D915B9A758F234F859DFFEFAB46484C97,5E1CE04D915B9A758F234F859DFFEFAB46484C97," multilayerperceptronnode properties
-
-Multilayer perceptron is a classifier based on the feedforward artificial neural network and consists of multiple layers. Each layer is fully connected to the next layer in the network. The MultiLayerPerceptron-AS node in SPSS Modeler is implemented in Spark. For details about the multilayer perceptron classifier (MLPC), see
-
-[https://spark.apache.org/docs/latest/ml-classification-regression.html#multilayer-perceptron-classifier](https://spark.apache.org/docs/latest/ml-classification-regression.htmlmultilayer-perceptron-classifier).
-
-
-
-multilayerperceptronnode properties
-
-Table 1. multilayerperceptronnode properties
-
- multilayerperceptronnode properties Data type Property description
-
- custom_fields boolean This option tells the node to use field information specified here instead of that given in any upstream Type node(s). After selecting this option, specify the following fields as required.
- target field One field name for target.
- inputs field List of the field names for input.
- num_hidden_layers string Specify the number of hidden layers. Use a comma between multiple hidden layers.
- num_output_number string Specify the number of output layers.
- random_seed integer Generate the seed used by the random number generator.
- maxiter integer Specify the maximum number of iterations to perform.
- set_expert boolean Select the Expert Mode option in the Model Building section if you want to specify the block size for stacking input data in matrices.
- block_size integer This option can speed up the computation.
-"
-093BFFCB43C46F1068A59A6B6338C955BF20AABF,093BFFCB43C46F1068A59A6B6338C955BF20AABF," multiplotnode properties
-
-The Multiplot node creates a plot that displays multiple Y fields over a single X field. The Y fields are plotted as colored lines; each is equivalent to a Plot node with Style set to Line and X Mode set to Sort. Multiplots are useful when you want to explore the fluctuation of several variables over time.
-
-
-
-multiplotnode properties
-
-Table 1. multiplotnode properties
-
- multiplotnode properties Data type Property description
-
- x_field field
- y_fields list
- panel_field field
- animation_field field
- normalize flag
- use_overlay_expr flag
- overlay_expression string
- records_limit number
- if_over_limit PlotBinsPlotSamplePlotAll
- x_label_auto flag
- x_label string
- y_label_auto flag
- y_label string
- use_grid flag
-"
-665B81FCF30212BA535DEDFFC35E22901ED3E3B6,665B81FCF30212BA535DEDFFC35E22901ED3E3B6," applyocsvmnode properties
-
-You can use One-Class SVM nodes to generate a One-Class SVM model nugget. The scripting name of this model nugget is applyocsvmnode. No other properties exist for this model nugget. For more information on scripting the modeling node itself, see [ocsvmnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/oneclasssvmnodeslots.htmloneclasssvmnodeslots).
-"
-B6DC15D9F3F199C8BAB5F85EDA67D50627BB3E08_0,B6DC15D9F3F199C8BAB5F85EDA67D50627BB3E08," ocsvmnode properties
-
-The One-Class SVM node uses an unsupervised learning algorithm. The node can be used for novelty detection. It will detect the soft boundary of a given set of samples, to then classify new points as belonging to that set or not. This One-Class SVM modeling node in SPSS Modeler is implemented in Python and requires the scikit-learn© Python library.
-
-
-
-ocsvmnode properties
-
-Table 1. ocsvmnode properties
-
- ocsvmnode properties Data type Property description
-
- custom_fields boolean This option tells the node to use field information specified here instead of that given in any upstream Type node(s). After selecting this option, specify the following fields as required.
- inputs field List of the field names for input.
- role_use string Specify predefined to use predefined roles or custom to use custom field assignments. Default is predefined.
- splits field List of the field names for split.
- use_partition Boolean Specify true or false. Default is true. If set to true, only training data will be used when building the model.
- mode_type string The mode. Possible values are simple or expert. All parameters on the Expert tab will be disabled if simple is specified.
- stopping_criteria string A string of scientific notation. Possible values are 1.0E-1, 1.0E-2, 1.0E-3, 1.0E-4, 1.0E-5, or 1.0E-6. Default is 1.0E-3.
- precision float The regression precision (nu). Bound on the fraction of training errors and support vectors. Specify a number greater than 0 and less than or equal to 1.0. Default is 0.1.
- kernel string The kernel type to use in the algorithm. Possible values are linear, poly, rbf, sigmoid, or precomputed. Default is rbf.
-"
-B6DC15D9F3F199C8BAB5F85EDA67D50627BB3E08_1,B6DC15D9F3F199C8BAB5F85EDA67D50627BB3E08," enable_gamma Boolean Enables the gamma parameter. Specify true or false. Default is true.
- gamma float This parameter is only enabled for the kernels rbf, poly, and sigmoid. If the enable_gamma parameter is set to false, this parameter will be set to auto. If set to true, the default is 0.1.
- coef0 float Independent term in the kernel function. This parameter is only enabled for the poly kernel and the sigmoid kernel. Default value is 0.0.
- degree integer Degree of the polynomial kernel function. This parameter is only enabled for the poly kernel. Specify any integer. Default is 3.
- shrinking Boolean Specifies whether to use the shrinking heuristic option. Specify true or false. Default is false.
- enable_cache_size Boolean Enables the cache_size parameter. Specify true or false. Default is false.
- cache_size float The size of the kernel cache in MB. Default is 200.
- enable_random_seed Boolean Enables the random_seed parameter. Specify true or false. Default is false.
- random_seed integer The random number seed to use when shuffling data for probability estimation. Specify any integer.
- pc_type string The type of the parallel coordinates graphic. Possible options are independent or general.
- lines_amount integer Maximum number of lines to include on the graphic. Specify an integer between 1 and 1000.
- lines_fields_custom Boolean Enables the lines_fields parameter, which allows you to specify custom fields to show in the graph output. If set to false, all fields will be shown. If set to true, only the fields specified with the lines_fields parameter will be shown. For performance reasons, a maximum of 20 fields will be displayed.
- lines_fields field List of the field names to include on the graphic as vertical axes.
- enable_graphic Boolean Specify true or false. Enables graphic output (disable this option if you want to save time and reduce stream file size).
-"
-B6DC15D9F3F199C8BAB5F85EDA67D50627BB3E08_2,B6DC15D9F3F199C8BAB5F85EDA67D50627BB3E08," enable_hpo Boolean Specify true or false to enable or disable the HPO options. If set to true, Rbfopt will be applied to find out the ""best"" One-Class SVM model automatically, which reaches the target objective value defined by the user with the following target_objval parameter.
- target_objval float The objective function value (error rate of the model on the samples) we want to reach (for example, the value of the unknown optimum). Set this parameter to the appropriate value if the optimum is unknown (for example, 0.01).
-"
-1B83FE669CB3776D00A1A78E4764F115DFD5A40A,1B83FE669CB3776D00A1A78E4764F115DFD5A40A," Output node properties
-
-Refer to this section for a list of available properties for Output nodes.
-
-Output node properties differ slightly from those of other node types. Rather than referring to a particular node option, output node properties store a reference to the output object. This can be useful in taking a value from a table and then setting it as a flow parameter.
-"
-1C19733ED0D3400BAF6FF05317475A6518B5BA1A,1C19733ED0D3400BAF6FF05317475A6518B5BA1A," partitionnode properties
-
-The Partition node generates a partition field, which splits the data into separate subsets for the training, testing, and validation stages of model building.
-
-
-
-partitionnode properties
-
-Table 1. partitionnode properties
-
- partitionnode properties Data type Property description
-
- new_name string Name of the partition field generated by the node.
- create_validation flag Specifies whether a validation partition should be created.
- training_size integer Percentage of records (0–100) to be allocated to the training partition.
- testing_size integer Percentage of records (0–100) to be allocated to the testing partition.
- validation_size integer Percentage of records (0–100) to be allocated to the validation partition. Ignored if a validation partition is not created.
- training_label string Label for the training partition.
- testing_label string Label for the testing partition.
- validation_label string Label for the validation partition. Ignored if a validation partition is not created.
- value_mode SystemSystemAndLabelLabel Specifies the values used to represent each partition in the data. For example, the training sample can be represented by the system integer 1, the label Training, or a combination of the two, 1_Training.
- set_random_seed Boolean Specifies whether a user-specified random seed should be used.
- random_seed integer A user-specified random seed value. For this value to be used, set_random_seed must be set to True.
-"
-F1CDB96AD5A56206F662BB3025B93F6D5820242B_0,F1CDB96AD5A56206F662BB3025B93F6D5820242B," plotnode properties
-
-The Plot node shows the relationship between numeric fields. You can create a plot by using points (a scatterplot) or lines.
-
-
-
-plotnode properties
-
-Table 1. plotnode properties
-
- plotnode properties Data type Property description
-
- x_field field Specifies a custom label for the x axis. Available only for labels.
- y_field field Specifies a custom label for the y axis. Available only for labels.
- three_D flag Specifies a custom label for the y axis. Available only for labels in 3-D graphs.
- z_field field
- color_field field Overlay field.
- size_field field
- shape_field field
- panel_field field Specifies a nominal or flag field for use in making a separate chart for each category. Charts are paneled together in one output window.
- animation_field field Specifies a nominal or flag field for illustrating data value categories by creating a series of charts displayed in sequence using animation.
- transp_field field Specifies a field for illustrating data value categories by using a different level of transparency for each category. Not available for line plots.
- overlay_type NoneSmootherFunction Specifies whether an overlay function or LOESS smoother is displayed.
- overlay_expression string Specifies the expression used when overlay_type is set to Function.
- style PointLine
- point_type Rectangle Dot Triangle Hexagon Plus Pentagon Star BowTie HorizontalDash VerticalDash IronCross Factory House Cathedral OnionDome ConcaveTriangle OblateGlobe CatEye FourSidedPillow RoundRectangle Fan
- x_mode SortOverlayAsRead
- x_range_mode AutomaticUserDefined
- x_range_min number
- x_range_max number
- y_range_mode AutomaticUserDefined
- y_range_min number
- y_range_max number
- z_range_mode AutomaticUserDefined
- z_range_min number
- z_range_max number
- jitter flag
- records_limit number
-"
-F1CDB96AD5A56206F662BB3025B93F6D5820242B_1,F1CDB96AD5A56206F662BB3025B93F6D5820242B," if_over_limit PlotBinsPlotSamplePlotAll
- x_label_auto flag
- x_label string
- y_label_auto flag
- y_label string
- z_label_auto flag
- z_label string
- use_grid flag
- graph_background color Standard graph colors are described at the beginning of this section.
-"
-8BAD741CD92F2DB6AB2CE3A3C2D35D000235BFE9,8BAD741CD92F2DB6AB2CE3A3C2D35D000235BFE9," applylinearasnode properties
-
-You can use Linear-AS modeling nodes to generate a Linear-AS model nugget. The scripting name of this model nugget is applylinearasnode. For more information on scripting the modeling node itself, see [linearasnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/properties/linearASslots.htmllinearASslots).
-
-
-
-applylinearasnode Properties
-
-Table 1. applylinearasnode Properties
-
- applylinearasnode Property Values Property description
-
- enable_sql_generation falsenative When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations.
-"
-0E7FF3238A69FA701A2067672493CCB1B9698CC1,0E7FF3238A69FA701A2067672493CCB1B9698CC1," applylinearnode properties
-
-Linear modeling nodes can be used to generate a Linear model nugget. The scripting name of this model nugget is applylinearnode. For more information on scripting the modeling node itself, see [linearnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/properties/linearslots.htmllinearslots).
-
-
-
-applylinearnode Properties
-
-Table 1. applylinearnode Properties
-
- linear Properties Values Property description
-
- use_custom_name flag
-"
-CE40B0CEF1449476821A1EBD8D0CF339C866D16A,CE40B0CEF1449476821A1EBD8D0CF339C866D16A," applyneuralnetworknode properties
-
-You can use Neural Network modeling nodes to generate a Neural Network model nugget. The scripting name of this model nugget is applyneuralnetworknode. For more information on scripting the modeling node itself, see [neuralnetworknode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/properties/neuralnetworkslots.htmlneuralnetworkslots).
-
-
-
-applyneuralnetworknode properties
-
-Table 1. applyneuralnetworknode properties
-
- applyneuralnetworknode Properties Values Property description
-
- use_custom_name flag
- custom_name string
- confidence onProbability onIncrease
- score_category_probabilities flag
- max_categories number
-"
-90FAFE76840267470228854A202752832D54A787,90FAFE76840267470228854A202752832D54A787," linearasnode properties
-
-Linear regression models predict a continuous target based on linear relationships between the target and one or more predictors.
-
-
-
-linearasnode properties
-
-Table 1. linearasnode properties
-
- linearasnode Properties Values Property description
-
- target field Specifies a single target field.
- inputs [field1 ... fieldN] Predictor fields used by the model.
- weight_field field Analysis field used by the model.
- custom_fields flag The default value is TRUE.
- intercept flag The default value is TRUE.
- detect_2way_interaction flag Whether or not to consider two way interaction. The default value is TRUE.
- cin number The interval of confidence used to compute estimates of the model coefficients. Specify a value greater than 0 and less than 100. The default value is 95.
- factor_order ascendingdescending The sort order for categorical predictors. The default value is ascending.
- var_select_method ForwardStepwiseBestSubsetsnone The model selection method to use. The default value is ForwardStepwise.
- criteria_for_forward_stepwise AICCFstatisticsAdjustedRSquareASE The statistic used to determine whether an effect should be added to or removed from the model. The default value is AdjustedRSquare.
- pin number The effect that has the smallest p-value less than this specified pin threshold is added to the model. The default value is 0.05.
- pout number Any effects in the model with a p-value greater than this specified pout threshold are removed. The default value is 0.10.
- use_custom_max_effects flag Whether to use max number of effects in the final model. The default value is FALSE.
- max_effects number Maximum number of effects to use in the final model. The default value is 1.
- use_custom_max_steps flag Whether to use the maximum number of steps. The default value is FALSE.
-"
-4DEAFAC111CF37F37A2F20CFF35606827D940390,4DEAFAC111CF37F37A2F20CFF35606827D940390," linearnode properties
-
-Linear regression models predict a continuous target based on linear relationships between the target and one or more predictors.
-
-
-
-linearnode properties
-
-Table 1. linearnode properties
-
- linearnode Properties Values Property description
-
- target field Specifies a single target field.
- inputs [field1 ... fieldN] Predictor fields used by the model.
- continue_training_existing_model flag
- objective Standard Bagging Boosting psm psm is used for very large datasets, and requires a server connection.
- use_auto_data_preparation flag
- confidence_level number
- model_selection ForwardStepwise BestSubsets None
- criteria_forward_stepwise AICC Fstatistics AdjustedRSquare ASE
- probability_entry number
- probability_removal number
- use_max_effects flag
- max_effects number
- use_max_steps flag
- max_steps number
- criteria_best_subsets AICC AdjustedRSquare ASE
- combining_rule_continuous Mean Median
- component_models_n number
- use_random_seed flag
- random_seed number
- use_custom_model_name flag
- custom_model_name string
- use_custom_name flag
- custom_name string
- tooltip string
- keywords string
- annotation string
- perform_model_effect_tests boolean Perform model effect tests for each regression effect.
- confidence_level double This is the interval of confidence used to compute estimates of the model coefficients. Specify a value greater than 0 and less than 100. The default is 95.
-"
-7F4719A688D4C15D72918EBBE43B908300138D2C_0,7F4719A688D4C15D72918EBBE43B908300138D2C," neuralnetworknode properties
-
-The Neural Net node uses a simplified model of the way the human brain processes information. It works by simulating a large number of interconnected simple processing units that resemble abstract versions of neurons. Neural networks are powerful general function estimators and require minimal statistical or mathematical knowledge to train or apply.
-
-
-
-neuralnetworknode properties
-
-Table 1. neuralnetworknode properties
-
- neuralnetworknode Properties Values Property description
-
- targets [field1 ... fieldN] Specifies target fields.
- inputs [field1 ... fieldN] Predictor fields used by the model.
- splits [field1 ... fieldN Specifies the field or fields to use for split modeling.
- use_partition flag If a partition field is defined, this option ensures that only data from the training partition is used to build the model.
- continue flag Continue training existing model.
- objective Standard Bagging Boosting psm psm is used for very large datasets, and requires a server connection.
- method MultilayerPerceptron RadialBasisFunction
- use_custom_layers flag
- first_layer_units number
- second_layer_units number
- use_max_time flag
- max_time number
- use_max_cycles flag
- max_cycles number
- use_min_accuracy flag
- min_accuracy number
- combining_rule_categorical Voting HighestProbability HighestMeanProbability
- combining_rule_continuous MeanMedian
- component_models_n number
- overfit_prevention_pct number
- use_random_seed flag
- random_seed number
- missing_values listwiseDeletion missingValueImputation
- use_model_name boolean
- model_name string
- confidence onProbability onIncrease
- score_category_probabilities flag
- max_categories number
- score_propensity flag
-"
-7F4719A688D4C15D72918EBBE43B908300138D2C_1,7F4719A688D4C15D72918EBBE43B908300138D2C," use_custom_name flag
- custom_name string
- tooltip string
- keywords string
-"
-3DC76AC891E282BADF1D7845B2B8A9B3A26DE3D2,3DC76AC891E282BADF1D7845B2B8A9B3A26DE3D2," Export node properties
-
-Refer to this section for a list of available properties for Export nodes.
-"
-049829EEA8EECD997E6CA05584CDE2D9BAE92218,049829EEA8EECD997E6CA05584CDE2D9BAE92218," Field Operations node properties
-
-Refer to this section for a list of available properties for Field Operations nodes.
-"
-9A1025416CDA5EA57E6B2D9525BDFC7F1AE58692,9A1025416CDA5EA57E6B2D9525BDFC7F1AE58692," Graph node properties
-
-Refer to this section for a list of available properties for Graph nodes.
-"
-9F78EEC8E37DB19F2C3220F8E43029B2C5370B5D,9F78EEC8E37DB19F2C3220F8E43029B2C5370B5D," Modeling node properties
-
-Refer to this section for a list of available properties for Modeling nodes.
-"
-F650943069620AA0BD7652DF1ABDCE2C076DE464,F650943069620AA0BD7652DF1ABDCE2C076DE464," Python node properties
-
-Refer to this section for a list of available properties for Python nodes.
-"
-8CE361C94FAB69503049EA703FD6D5A53CD81057,8CE361C94FAB69503049EA703FD6D5A53CD81057," Record Operations node properties
-
-Refer to this section for a list of available properties for Record Operations nodes.
-"
-179BDEFA68B788A2C197F0094C43979D9265BA77,179BDEFA68B788A2C197F0094C43979D9265BA77," Data Asset Import node properties
-
-Refer to this section for a list of available properties for Import nodes.
-"
-F585DF82F7A94309AF9FB51196F188B4FA212118,F585DF82F7A94309AF9FB51196F188B4FA212118," Spark node properties
-
-Refer to this section for a list of available properties for Spark nodes.
-"
-C1CA39FF2C12CC12697E62A37C7C52A256248AF7_0,C1CA39FF2C12CC12697E62A37C7C52A256248AF7," questnode properties
-
-The Quest node provides a binary classification method for building decision trees, designed to reduce the processing time required for large C&R Tree analyses while also reducing the tendency found in classification tree methods to favor inputs that allow more splits. Input fields can be numeric ranges (continuous), but the target field must be categorical. All splits are binary.
-
-
-
-questnode properties
-
-Table 1. questnode properties
-
- questnode Properties Values Property description
-
- target field Quest models require a single target and one or more input fields. A frequency field can also be specified. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.html) for more information.
- continue_training_existing_model flag
- objective StandardBoostingBaggingpsm psm is used for very large datasets, and requires a server connection.
- model_output_type SingleInteractiveBuilder
- use_tree_directives flag
- tree_directives string
- use_max_depth DefaultCustom
- max_depth integer Maximum tree depth, from 0 to 1000. Used only if use_max_depth = Custom.
- prune_tree flag Prune tree to avoid overfitting.
- use_std_err flag Use maximum difference in risk (in Standard Errors).
- std_err_multiplier number Maximum difference.
- max_surrogates number Maximum surrogates.
- use_percentage flag
- min_parent_records_pc number
- min_child_records_pc number
- min_parent_records_abs number
- min_child_records_abs number
- use_costs flag
- costs structured Structured property.
- priors DataEqualCustom
- custom_priors structured Structured property.
- adjust_priors flag
- trails number Number of component models for boosting or bagging.
-"
-C1CA39FF2C12CC12697E62A37C7C52A256248AF7_1,C1CA39FF2C12CC12697E62A37C7C52A256248AF7," set_ensemble_method VotingHighestProbabilityHighestMeanProbability Default combining rule for categorical targets.
- range_ensemble_method MeanMedian Default combining rule for continuous targets.
- large_boost flag Apply boosting to very large data sets.
- split_alpha number Significance level for splitting.
- train_pct number Overfit prevention set.
- set_random_seed flag Replicate results option.
- seed number
- calculate_variable_importance flag
- calculate_raw_propensities flag
-"
-2B2899A3878E20A4B73B0F11CFC4FD815A81E13F,2B2899A3878E20A4B73B0F11CFC4FD815A81E13F," applyquestnode properties
-
-You can use QUEST modeling nodes can be used to generate a QUEST model nugget. The scripting name of this model nugget is applyquestnode. For more information on scripting the modeling node itself, see [questnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/questnodeslots.html).
-
-
-
-applyquestnode properties
-
-Table 1. applyquestnode properties
-
- applyquestnode Properties Values Property description
-
- sql_generate Never NoMissingValues MissingValues native When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations.
- calculate_conf flag
- display_rule_id flag Adds a field in the scoring output that indicates the ID for the terminal node to which each record is assigned.
-"
-19AE3ADCF2DA2FFE5186553229FEF07CB2B55043_0,19AE3ADCF2DA2FFE5186553229FEF07CB2B55043," autonumericnode properties
-
-The Auto Numeric node estimates and compares models for continuous numeric range outcomes using a number of different methods. The node works in the same manner as the Auto Classifier node, allowing you to choose the algorithms to use and to experiment with multiple combinations of options in a single modeling pass. Supported algorithms include neural networks, C&R Tree, CHAID, linear regression, generalized linear regression, and support vector machines (SVM). Models can be compared based on correlation, relative error, or number of variables used.
-
-
-
-autonumericnode properties
-
-Table 1. autonumericnode properties
-
- autonumericnode Properties Values Property description
-
- custom_fields flag If True, custom field settings will be used instead of type node settings.
- target field The Auto Numeric node requires a single target and one or more input fields. Weight and frequency fields can also be specified. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information.
- inputs [field1 … field2]
- partition field
- use_frequency flag
- frequency_field field
- use_weight flag
- weight_field field
- use_partitioned_data flag If a partition field is defined, only the training data is used for model building.
- ranking_measure CorrelationNumberOfFields
- ranking_dataset TestTraining
- number_of_models integer Number of models to include in the model nugget. Specify an integer between 1 and 100.
- calculate_variable_importance flag
- enable_correlation_limit flag
- correlation_limit integer
- enable_number_of_fields_limit flag
- number_of_fields_limit integer
- enable_relative_error_limit flag
- relative_error_limit integer
-"
-19AE3ADCF2DA2FFE5186553229FEF07CB2B55043_1,19AE3ADCF2DA2FFE5186553229FEF07CB2B55043," enable_model_build_time_limit flag
- model_build_time_limit integer
- enable_stop_after_time_limit flag
- stop_after_time_limit integer
- stop_if_valid_model flag
- flag Enables or disables the use of a specific algorithm.
- . string Sets a property value for a specific algorithm. See [Setting algorithm properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/factorymodeling_algorithmproperties.htmlfactorymodeling_algorithmproperties) for more information.
- use_cross_validation boolean Instead of using a single partition, a cross validation partition is used.
- number_of_folds integer N fold parameter for cross validation, with range from 3 to 10.
- set_random_seed boolean Setting a random seed allows you to replicate analyses. Specify an integer or click Generate, which will create a pseudo-random integer between 1 and 2147483647, inclusive. By default, analyses are replicated with seed 229176228.
- random_seed integer Random seed
-"
-B7CAC3027EB08D3E2CFBFAB0F0AF2ACF4DD0F990,B7CAC3027EB08D3E2CFBFAB0F0AF2ACF4DD0F990," reclassifynode properties
-
-The Reclassify node transforms one set of categorical values to another. Reclassification is useful for collapsing categories or regrouping data for analysis.
-
-
-
-reclassifynode properties
-
-Table 1. reclassifynode properties
-
- reclassifynode properties Data type Property description
-
- mode SingleMultiple Single reclassifies the categories for one field. Multiple activates options enabling the transformation of more than one field at a time.
- replace_field flag
- field string Used only in Single mode.
- new_name string Used only in Single mode.
- fields [field1 field2 ... fieldn] Used only in Multiple mode.
- name_extension string Used only in Multiple mode.
- add_as SuffixPrefix Used only in Multiple mode.
- reclassify string Structured property for field values.
- use_default flag Use the default value.
-"
-8023AC0A48264DB31F3C9DA92FD84F947BFD4047,8023AC0A48264DB31F3C9DA92FD84F947BFD4047," regressionnode properties
-
-Linear regression is a common statistical technique for summarizing data and making predictions by fitting a straight line or surface that minimizes the discrepancies between predicted and actual output values.
-
-
-
-regressionnode properties
-
-Table 1. regressionnode properties
-
- regressionnode Properties Values Property description
-
- target field Regression models require a single target field and one or more input fields. A weight field can also be specified. See the topic [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information.
- method Enter Stepwise Backwards Forwards
- include_constant flag
- use_weight flag
- weight_field field
- mode Simple Expert
- complete_records flag
- tolerance 1.0E-1 1.0E-2 1.0E-3 1.0E-4 1.0E-5 1.0E-6 1.0E-7 1.0E-8 1.0E-9 1.0E-10 1.0E-11 1.0E-12 Use double quotes for arguments.
- stepping_method useP useF useP: use probability of F useF: use F value
- probability_entry number
- probability_removal number
- F_value_entry number
- F_value_removal number
- selection_criteria flag
- confidence_interval flag
- covariance_matrix flag
- collinearity_diagnostics flag
- regression_coefficients flag
- exclude_fields flag
- durbin_watson flag
- model_fit flag
- r_squared_change flag
- p_correlations flag
- descriptives flag
-"
-D6A347CB86DF46925701892180F4D8A5B8E14508,D6A347CB86DF46925701892180F4D8A5B8E14508," applyregressionnode properties
-
-You can use Linear Regression modeling nodes to generate a Linear Regression model nugget. The scripting name of this model nugget is applyregressionnode. No other properties exist for this model nugget. For more information on scripting the modeling node itself, see [regressionnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/regressionnodeslots.htmlregressionnodeslots).
-"
-56DC9CABDA3980A4D5D41AA5B3E5612E727B289A,56DC9CABDA3980A4D5D41AA5B3E5612E727B289A," reordernode properties
-
-The Field Reorder node defines the natural order used to display fields downstream. This order affects the display of fields in a variety of places, such as tables, lists, and when selecting fields. This operation is useful when working with wide datasets to make fields of interest more visible.
-
-
-
-reordernode properties
-
-Table 1. reordernode properties
-
- reordernode properties Data type Property description
-
- mode CustomAuto You can sort values automatically or specify a custom order.
- sort_by NameTypeStorage
- ascending flag
-"
-57ED2F2E8EAA8DAB5B26C3759FD1BD102D03B975,57ED2F2E8EAA8DAB5B26C3759FD1BD102D03B975," reportnode properties
-
-The Report node creates formatted reports containing fixed text as well as data and other expressions derived from the data. You specify the format of the report using text templates to define the fixed text and data output constructions. You can provide custom text formatting by using HTML tags in the template and by setting output options. You can include data values and other conditional output by using CLEM expressions in the template.
-
-
-
-reportnode properties
-
-Table 1. reportnode properties
-
- reportnode properties Data type Property description
-
- output_mode ScreenFile Used to specify target location for output generated from the output node.
- output_format HTML (.html) Text (.txt) Output (.cou) Used to specify the type of file output.
- format AutoCustom Used to choose whether output is automatically formatted or formatted using HTML included in the template. To use HTML formatting in the template, specify Custom.
- use_output_name flag Specifies whether a custom output name is used.
- output_name string If use_output_name is true, specifies the name to use.
- text string
- full_filename string
- highlights flag
-"
-5D9039607C167566CED9A4D7CC9F30F2B0C58554,5D9039607C167566CED9A4D7CC9F30F2B0C58554," restructurenode properties
-
-The Restructure node converts a nominal or flag field into a group of fields that can be populated with the values of yet another field. For example, given a field named payment type, with values of credit, cash, and debit, three new fields would be created (credit, cash, debit), each of which might contain the value of the actual payment made.
-
-Example
-
-node = stream.create(""restructure"", ""My node"")
-node.setKeyedPropertyValue(""fields_from"", ""Drug"", [""drugA"", ""drugX""])
-node.setPropertyValue(""include_field_name"", True)
-node.setPropertyValue(""value_mode"", ""OtherFields"")
-node.setPropertyValue(""value_fields"", [""Age"", ""BP""])
-
-
-
-restructurenode properties
-
-Table 1. restructurenode properties
-
- restructurenode properties Data type Property description
-
- fields_from [category category category] all
- include_field_name flag Indicates whether to use the field name in the restructured field name.
-"
-CD0745062372B6A66356728DEA39EE6D8237D0DE_0,CD0745062372B6A66356728DEA39EE6D8237D0DE," randomtrees properties
-
-The Random Trees node is similar to the C&RT Tree node; however, the Random Trees node is designed to process big data to create a single tree. The Random Trees tree node generates a decision tree that you use to predict or classify future observations. The method uses recursive partitioning to split the training records into segments by minimizing the impurity at each step, where a node in the tree is considered pure if 100% of cases in the node fall into a specific category of the target field. Target and input fields can be numeric ranges or categorical (nominal, ordinal, or flags); all splits are binary (only two subgroups).
-
-
-
-randomtrees properties
-
-Table 1. randomtrees properties
-
- randomtrees Properties Values Property description
-
- target field In the Random Trees node, models require a single target and one or more input fields. A frequency field can also be specified. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information.
- number_of_models integer Determines the number of models to build as part of the ensemble modeling.
- use_number_of_predictors flag Determines whether number_of_predictors is used.
- number_of_predictors integer Specifies the number of predictors to be used when building split models.
- use_stop_rule_for_accuracy flag Determines whether model building stops when accuracy can't be improved.
- sample_size number Reduce this value to improve performance when processing very large datasets.
-"
-CD0745062372B6A66356728DEA39EE6D8237D0DE_1,CD0745062372B6A66356728DEA39EE6D8237D0DE," handle_imbalanced_data flag If the target of the model is a particular flag outcome, and the ratio of the desired outcome to a non-desired outcome is very small, then the data is imbalanced and the bootstrap sampling that's conducted by the model may affect the model's accuracy. Enable imbalanced data handling so that the model will capture a larger proportion of the desired outcome and generate a stronger model.
- use_weighted_sampling flag When False, variables for each node are randomly selected with the same probability. When True, variables are weighted and selected accordingly.
- max_node_number integer Maximum number of nodes allowed in individual trees. If the number would be exceeded on the next split, tree growth halts.
- max_depth integer Maximum tree depth before growth halts.
- min_child_node_size integer Determines the minimum number of records allowed in a child node after the parent node is split. If a child node would contain fewer records than specified here, the parent node won't be split.
- use_costs flag
- costs structured Structured property. The format is a list of 3 values: the actual value, the predicted value, and the cost if that prediction is wrong. For example: tree.setPropertyValue(""costs"", [""drugA"", ""drugB"", 3.0], ""drugX"", ""drugY"", 4.0]])
- default_cost_increase nonelinearsquarecustom Note this is only enabled for ordinal targets. Set default values in the costs matrix.
- max_pct_missing integer If the percentage of missing values in any input is greater than the value specified here, the input is excluded. Minimum 0, maximum 100.
- exclude_single_cat_pct integer If one category value represents a higher percentage of the records than specified here, the entire field is excluded from model building. Minimum 1, maximum 99.
- max_category_number integer If the number of categories in a field exceeds this value, the field is excluded from model building. Minimum 2.
-"
-CD0745062372B6A66356728DEA39EE6D8237D0DE_2,CD0745062372B6A66356728DEA39EE6D8237D0DE," min_field_variation number If the coefficient of variation of a continuous field is smaller than this value, the field is excluded from model building.
-"
-E10CEBBD89F23E057645097B776A51DEA0C1555F,E10CEBBD89F23E057645097B776A51DEA0C1555F," applyrandomtrees properties
-
-You can use the Random Trees modeling node to generate a Random Trees model nugget. The scripting name of this model nugget is applyrandomtrees. For more information on scripting the modeling node itself, see [randomtrees properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/rf_nodeslots.htmlrf_nodeslots).
-
-
-
-applyrandomtrees properties
-
-Table 1. applyrandomtrees properties
-
- applyrandomtrees Properties Values Property description
-
-"
-F6670B3B49F00E4EE1F44E8B1C09E24AFEDD2529_0,F6670B3B49F00E4EE1F44E8B1C09E24AFEDD2529," rfmaggregatenode properties
-
- The Recency, Frequency, Monetary (RFM) Aggregate node enables you to take customers' historical transactional data, strip away any unused data, and combine all of their remaining transaction data into a single row that lists when they last dealt with you, how many transactions they have made, and the total monetary value of those transactions.
-
-Example
-
-node = stream.create(""rfmaggregate"", ""My node"")
-node.setPropertyValue(""relative_to"", ""Fixed"")
-node.setPropertyValue(""reference_date"", ""2007-10-12"")
-node.setPropertyValue(""id_field"", ""CardID"")
-node.setPropertyValue(""date_field"", ""Date"")
-node.setPropertyValue(""value_field"", ""Amount"")
-node.setPropertyValue(""only_recent_transactions"", True)
-node.setPropertyValue(""transaction_date_after"", ""2000-10-01"")
-
-
-
-rfmaggregatenode properties
-
-Table 1. rfmaggregatenode properties
-
- rfmaggregatenode properties Data type Property description
-
- relative_to FixedToday Specify the date from which the recency of transactions will be calculated.
- reference_date date Only available if Fixed is chosen in relative_to.
- contiguous flag If your data is presorted so that all records with the same ID appear together in the data stream, selecting this option speeds up processing.
- id_field field Specify the field to be used to identify the customer and their transactions.
- date_field field Specify the date field to be used to calculate recency against.
- value_field field Specify the field to be used to calculate the monetary value.
- extension string Specify a prefix or suffix for duplicate aggregated fields.
-"
-F6670B3B49F00E4EE1F44E8B1C09E24AFEDD2529_1,F6670B3B49F00E4EE1F44E8B1C09E24AFEDD2529," add_as SuffixPrefix Specify if the extension should be added as a suffix or a prefix.
- discard_low_value_records flag Enable use of the discard_records_below setting.
- discard_records_below number Specify a minimum value below which any transaction details are not used when calculating the RFM totals. The units of value relate to the value field selected.
- only_recent_transactions flag Enable use of either the specify_transaction_date or transaction_within_last settings.
- specify_transaction_date flag
- transaction_date_after date Only available if specify_transaction_date is selected. Specify the transaction date after which records will be included in your analysis.
- transaction_within_last number Only available if transaction_within_last is selected. Specify the number and type of periods (days, weeks, months, or years) back from the Calculate Recency relative to date after which records will be included in your analysis.
- transaction_scale DaysWeeksMonthsYears Only available if transaction_within_last is selected. Specify the number and type of periods (days, weeks, months, or years) back from the Calculate Recency relative to date after which records will be included in your analysis.
-"
-4292721E4524AC59FA259576D39665946DB8849D_0,4292721E4524AC59FA259576D39665946DB8849D," rfmanalysisnode properties
-
-The Recency, Frequency, Monetary (RFM) Analysis node enables you to determine quantitatively which customers are likely to be the best ones by examining how recently they last purchased from you (recency), how often they purchased (frequency), and how much they spent over all transactions (monetary).
-
-
-
-rfmanalysisnode properties
-
-Table 1. rfmanalysisnode properties
-
- rfmanalysisnode properties Data type Property description
-
- recency field Specify the recency field. This may be a date, timestamp, or simple number.
- frequency field Specify the frequency field.
- monetary field Specify the monetary field.
- recency_bins integer Specify the number of recency bins to be generated.
- recency_weight number Specify the weighting to be applied to recency data. The default is 100.
- frequency_bins integer Specify the number of frequency bins to be generated.
- frequency_weight number Specify the weighting to be applied to frequency data. The default is 10.
- monetary_bins integer Specify the number of monetary bins to be generated.
- monetary_weight number Specify the weighting to be applied to monetary data. The default is 1.
- tied_values_method NextCurrent Specify which bin tied value data is to be put in.
- recalculate_bins AlwaysIfNecessary
- add_outliers flag Available only if recalculate_bins is set to IfNecessary. If set, records that lie below the lower bin will be added to the lower bin, and records above the highest bin will be added to the highest bin.
- binned_field RecencyFrequencyMonetary
-"
-4292721E4524AC59FA259576D39665946DB8849D_1,4292721E4524AC59FA259576D39665946DB8849D," recency_thresholds value value Available only if recalculate_bins is set to Always. Specify the upper and lower thresholds for the recency bins. The upper threshold of one bin is used as the lower threshold of the next—for example, [10 30 60] would define two bins, the first bin with upper and lower thresholds of 10 and 30, with the second bin thresholds of 30 and 60.
-"
-D1908D2F2C1701D4A9AC3354E42DFF295C06B40D_0,D1908D2F2C1701D4A9AC3354E42DFF295C06B40D," rfnode properties
-
-The Random Forest node uses an advanced implementation of a bagging algorithm with a tree model as the base model. This Random Forest modeling node in SPSS Modeler is implemented in Python and requires the scikit-learn© Python library.
-
-
-
-rfnode properties
-
-Table 1. rfnode properties
-
- rfnode properties Data type Property description
-
- custom_fields boolean This option tells the node to use field information specified here instead of that given in any upstream Type node(s). After selecting this option, specify the following fields as required.
- inputs field List of the field names for input.
- target field One field name for target.
- fast_build boolean Utilize multiple CPU cores to improve model building.
- role_use string Specify predefined to use predefined roles or custom to use custom field assignments. Default is predefined.
- splits field List of the field names for split.
- n_estimators integer Number of trees to build. Default is 10.
- specify_max_depth Boolean Specify custom max depth. If false, nodes are expanded until all leaves are pure or until all leaves contain less than min_samples_split samples. Default is false.
- max_depth integer The maximum depth of the tree. Default is 10.
- min_samples_leaf integer Minimum leaf node size. Default is 1.
- max_features string The number of features to consider when looking for the best split: * If auto, then max_features=sqrt(n_features) for classifier and max_features=sqrt(n_features) for regression. * If sqrt, then max_features=sqrt(n_features). * If log2, then max_features=log2 (n_features). Default is auto.
-"
-D1908D2F2C1701D4A9AC3354E42DFF295C06B40D_1,D1908D2F2C1701D4A9AC3354E42DFF295C06B40D," bootstrap Boolean Use bootstrap samples when building trees. Default is true.
- oob_score Boolean Use out-of-bag samples to estimate the generalization accuracy. Default value is false.
- extreme Boolean Use extremely randomized trees. Default is false.
- use_random_seed Boolean Specify this to get replicated results. Default is false.
- random_seed integer The random number seed to use when build trees. Specify any integer.
- cache_size float The size of the kernel cache in MB. Default is 200.
- enable_random_seed Boolean Enables the random_seed parameter. Specify true or false. Default is false.
- enable_hpo Boolean Specify true or false to enable or disable the HPO options. If set to true, Rbfopt will be applied to determine the ""best"" Random Forest model automatically, which reaches the target objective value defined by the user with the following target_objval parameter.
- target_objval float The objective function value (error rate of the model on the samples) you want to reach (for example, the value of the unknown optimum). Set this parameter to the appropriate value if the optimum is unknown (for example, 0.01).
-"
-949025C4DEEA46FD131C7B8D89978D75FCC440C4_0,949025C4DEEA46FD131C7B8D89978D75FCC440C4," samplenode properties
-
- The Sample node selects a subset of records. A variety of sample types are supported, including stratified, clustered, and nonrandom (structured) samples. Sampling can be useful for improving performance, and for selecting groups of related records or transactions for analysis.
-
-Example
-
-/* Create two Sample nodes to extract
-different samples from the same data /
-
-node = stream.create(""sample"", ""My node"")
-node.setPropertyValue(""method"", ""Simple"")
-node.setPropertyValue(""mode"", ""Include"")
-node.setPropertyValue(""sample_type"", ""First"")
-node.setPropertyValue(""first_n"", 500)
-
-node = stream.create(""sample"", ""My node"")
-node.setPropertyValue(""method"", ""Complex"")
-node.setPropertyValue(""stratify_by"", [""Sex"", ""Cholesterol""])
-node.setPropertyValue(""sample_units"", ""Proportions"")
-node.setPropertyValue(""sample_size_proportions"", ""Custom"")
-node.setPropertyValue(""sizes_proportions"", [""M"", ""High"", ""Default""], ""M"", ""Normal"", ""Default""],
-""F"", ""High"", 0.3], ""F"", ""Normal"", 0.3]])
-
-
-
-samplenode properties
-
-Table 1. samplenode properties
-
- samplenode properties Data type Property description
-
- method Simple Complex
- mode IncludeDiscard Include or discard records that meet the specified condition.
- sample_type FirstOneInNRandomPct Specifies the sampling method.
- first_n integer Records up to the specified cutoff point will be included or discarded.
- one_in_n number Include or discard every nth record.
-"
-949025C4DEEA46FD131C7B8D89978D75FCC440C4_1,949025C4DEEA46FD131C7B8D89978D75FCC440C4," rand_pct number Specify the percentage of records to include or discard.
- use_max_size flag Enable use of the maximum_size setting.
- maximum_size integer Specify the largest sample to be included or discarded from the data stream. This option is redundant and therefore disabled when First and Include are specified.
- set_random_seed flag Enables use of the random seed setting.
- random_seed integer Specify the value used as a random seed.
- complex_sample_type RandomSystematic
- sample_units ProportionsCounts
- sample_size_proportions FixedCustomVariable
- sample_size_counts FixedCustomVariable
- fixed_proportions number
- fixed_counts integer
- variable_proportions field
- variable_counts field
- use_min_stratum_size flag
- minimum_stratum_size integer This option only applies when a Complex sample is taken with Sample units=Proportions.
- use_max_stratum_size flag
- maximum_stratum_size integer This option only applies when a Complex sample is taken with Sample units=Proportions.
- clusters field
- stratify_by [field1 ... fieldN]
- specify_input_weight flag
- input_weight field
- new_output_weight string
- sizes_proportions [[stringstring value][stringstring value]���] If sample_units=proportions and sample_size_proportions=Custom, specifies a value for each possible combination of values of stratification fields.
- default_proportion number
-"
-3E0860FD12FA0BB5BE75C68FBD34D69A631F2324,3E0860FD12FA0BB5BE75C68FBD34D69A631F2324," Running and interrupting scripts
-
-You can run scripts in a number of ways. For example, in the flow script or standalone script pane, click Run This Script to run the complete script.
-
-You can run a script using any of the following methods:
-
-
-
-* Click Run script within a flow script or standalone script.
-* Run a flow where Run script is set as the default execution method.
-
-
-
-Note: A SuperNode script runs when the SuperNode is run as long as you select Run script within the SuperNode script dialog box.
-"
-27E7AD16129A9DC8AC8CE2EE79C9B584D441F0DE_0,27E7AD16129A9DC8AC8CE2EE79C9B584D441F0DE," Accessing flow run results
-
-Many SPSS Modeler nodes produce output objects such as models, charts, and tabular data. Many of these outputs contain useful values that can be used by scripts to guide subsequent runs. These values are grouped into content containers (referred to as simply containers) which can be accessed using tags or IDs that identify each container. The way these values are accessed depends on the format or ""content model"" used by that container.
-
-For example, many predictive model outputs use a variant of XML called PMML to represent information about the model such as which fields a decision tree uses at each split, or how the neurons in a neural network are connected and with what strengths. Model outputs that use PMML provide an XML Content Model that can be used to access that information. For example:
-
-stream = modeler.script.stream()
- Assume the flow contains a single C5.0 model builder node
- and that the datasource, predictors, and targets have already been
- set up
-modelbuilder = stream.findByType(""c50"", None)
-results = []
-modelbuilder.run(results)
-modeloutput = results[0]
-
- Now that we have the C5.0 model output object, access the
- relevant content model
-cm = modeloutput.getContentModel(""PMML"")
-
- The PMML content model is a generic XML-based content model that
- uses XPath syntax. Use that to find the names of the data fields.
- The call returns a list of strings match the XPath values
-dataFieldNames = cm.getStringValues(""/PMML/DataDictionary/DataField"", ""name"")
-
-SPSS Modeler supports the following content models in scripting:
-
-
-
-* Table content model provides access to the simple tabular data represented as rows and columns.
-* XML content model provides access to content stored in XML format.
-* JSON content model provides access to content stored in JSON format.
-* Column statistics content model provides access to summary statistics about a specific field.
-* Pair-wise column statistics content model provides access to summary statistics between two fields or values between two separate fields.
-
-
-
-Note that the following nodes don't contain these content models:
-
-
-
-* Time Series
-* Discriminant
-"
-27E7AD16129A9DC8AC8CE2EE79C9B584D441F0DE_1,27E7AD16129A9DC8AC8CE2EE79C9B584D441F0DE,"* SLRM
-"
-6638B9F61F15821F7A92D9C30FC6C24C029B78DC_0,6638B9F61F15821F7A92D9C30FC6C24C029B78DC," Column Statistics content model and Pairwise Statistics content model
-
-The Column Statistics content model provides access to statistics that can be computed for each field (univariate statistics). The Pairwise Statistics content model provides access to statistics that can be computed between pairs of fields or values in a field.
-
-Any of these statistics measures are possible:
-
-
-
-* Count
-* UniqueCount
-* ValidCount
-* Mean
-* Sum
-* Min
-* Max
-* Range
-* Variance
-* StandardDeviation
-* StandardErrorOfMean
-* Skewness
-* SkewnessStandardError
-* Kurtosis
-* KurtosisStandardError
-* Median
-* Mode
-* Pearson
-* Covariance
-* TTest
-* FTest
-
-
-
-Some values are only appropriate from single column statistics while others are only appropriate for pairwise statistics.
-
-Nodes that produce these are:
-
-
-
-* Statistics node produces column statistics and can produce pairwise statistics when correlation fields are specified
-* Data Audit node produces column and can produce pairwise statistics when an overlay field is specified.
-* Means node produces pairwise statistics when comparing pairs of fields or comparing a field's values with other field summaries.
-
-
-
-Which content models and statistics are available depends on both the particular node's capabilities and the settings within the node.
-
-
-
-Methods for the Column Statistics content model
-
-Table 1. Methods for the Column Statistics content model
-
- Method Return types Description
-
- getAvailableStatistics() List Returns the available statistics in this model. Not all fields necessarily have values for all statistics.
- getAvailableColumns() List Returns the column names for which statistics were computed.
- getStatistic(String column, StatisticType statistic) Number Returns the statistic values associated with the column.
- reset() void Flushes any internal storage associated with this content model.
-
-
-
-
-
-Methods for the Pairwise Statistics content model
-
-Table 2. Methods for the Pairwise Statistics content model
-
- Method Return types Description
-
- getAvailableStatistics() List Returns the available statistics in this model. Not all fields necessarily have values for all statistics.
-"
-6638B9F61F15821F7A92D9C30FC6C24C029B78DC_1,6638B9F61F15821F7A92D9C30FC6C24C029B78DC," getAvailablePrimaryColumns() List Returns the primary column names for which statistics were computed.
- getAvailablePrimaryValues() List Returns the values of the primary column for which statistics were computed.
- getAvailableSecondaryColumns() List Returns the secondary column names for which statistics were computed.
- getStatistic(String primaryColumn, String secondaryColumn, StatisticType statistic) Number Returns the statistic values associated with the columns.
-"
-6FC8A7D53D6951306E0FD23667A802538A81D6FF,6FC8A7D53D6951306E0FD23667A802538A81D6FF," JSON content model
-
-The JSON content model is used to access content stored in JSON format. It provides a basic API to allow callers to extract values on the assumption that they know which values are to be accessed.
-
-
-
-Methods for the JSON content model
-
-Table 1. Methods for the JSON content model
-
- Method Return types Description
-
- getJSONAsString() String Returns the JSON content as a string.
- getObjectAt( path, JSONArtifact artifact) throws Exception Object Returns the object at the specified path. The supplied root artifact might be null, in which case the root of the content is used. The returned value can be a literal string, integer, real or boolean, or a JSON artifact (either a JSON object or a JSON array).
- getChildValuesAt( path, JSONArtifact artifact) throws Exception Hash table (key:object, value:object> Returns the child values of the specified path if the path leads to a JSON object or null otherwise. The keys in the table are strings while the associated value can be a literal string, integer, real or boolean, or a JSON artifact (either a JSON object or a JSON array).
-"
-8FDDCA5B0D9D19DB5B349AB7F72625B8C6D5744C,8FDDCA5B0D9D19DB5B349AB7F72625B8C6D5744C," Table content model
-
-The table content model provides a simple model for accessing simple row and column data. The values in a particular column must all have the same type of storage (for example, strings or integers).
-"
-198246E6E7F694D36936989D23B2255B15C2A92B,198246E6E7F694D36936989D23B2255B15C2A92B," XML content model
-
-The XML content model provides access to XML-based content.
-
-The XML content model supports the ability to access components based on XPath expressions. XPath expressions are strings that define which elements or attributes are required by the caller. The XML content model hides the details of constructing various objects and compiling expressions that are typically required by XPath support. It is simpler to call from Python scripting.
-
-The XML content model includes a function that returns the XML document as a string, so Python script users can use their preferred Python library to parse the XML.
-
-
-
-Methods for the XML content model
-
-Table 1. Methods for the XML content model
-
- Method Return types Description
-
- getXMLAsString() String Returns the XML as a string.
- getNumericValue(String xpath) number Returns the result of evaluating the path with return type of numeric (for example, count the number of elements that match the path expression).
- getBooleanValue(String xpath) boolean Returns the boolean result of evaluating the specified path expression.
- getStringValue(String xpath, String attribute) String Returns either the attribute value or XML node value that matches the specified path.
- getStringValues(String xpath, String attribute) List of strings Returns a list of all attribute values or XML node values that match the specified path.
- getValuesList(String xpath, attributes, boolean includeValue) List of lists of strings Returns a list of all attribute values that match the specified path along with the XML node value if required.
- getValuesMap(String xpath, String keyAttribute, attributes, boolean includeValue) Hash table (key:string, value:list of string) Returns a hash table that uses either the key attribute or XML node value as key, and the list of specified attribute values as table values.
- isNamespaceAware() boolean Returns whether the XML parsers should be aware of namespaces. Default is False.
-"
-4D8B25691C26B2BA05F7E8A96B99FD3F15A124C6,4D8B25691C26B2BA05F7E8A96B99FD3F15A124C6," Looping through nodes
-
-You can use a for loop to loop through all the nodes in a flow. For example, the following two script examples loop through all nodes and change field names in any Filter nodes to uppercase.
-
-You can use this script in any flow that contains a Filter node, even if no fields are actually filtered. Simply add a Filter node that passes all fields in order to change field names to uppercase across the board.
-
- Alternative 1: using the data model nameIterator() function
-stream = modeler.script.stream()
-for node in stream.iterator():
-if (node.getTypeName() == ""filter""):
- nameIterator() returns the field names
-for field in node.getInputDataModel().nameIterator():
-newname = field.upper()
-node.setKeyedPropertyValue(""new_name"", field, newname)
-
- Alternative 2: using the data model iterator() function
-stream = modeler.script.stream()
-for node in stream.iterator():
-if (node.getTypeName() == ""filter""):
- iterator() returns the field objects so we need
- to call getColumnName() to get the name
-for field in node.getInputDataModel().iterator():
-newname = field.getColumnName().upper()
-node.setKeyedPropertyValue(""new_name"", field.getColumnName(), newname)
-
-The script loops through all nodes in the current flow, and checks whether each node is a Filter. If so, the script loops through each field in the node and uses either the field.upper() or field.getColumnName().upper() function to change the name to uppercase.
-"
-14A06DE43E6B08188A7672B5BE8068A572DE5B7C,14A06DE43E6B08188A7672B5BE8068A572DE5B7C," Scripting and automation
-
-Scripting in SPSS Modeler is a powerful tool for automating processes in the user interface. Scripts can perform the same types of actions that you perform with a mouse or a keyboard, and you can use them to automate tasks that would be highly repetitive or time consuming to perform manually.
-
-You can use scripts to:
-
-
-
-* Impose a specific order for node executions in a flow.
-* Set properties for a node as well as perform derivations using a subset of CLEM (Control Language for Expression Manipulation).
-* Specify an automatic sequence of actions that normally involves user interaction—for example, you can build a model and then test it.
-"
-AE3F5B72354288CC106BB10263673EBC80B2D544,AE3F5B72354288CC106BB10263673EBC80B2D544," Scripting tips
-
-This section provides tips and techniques for using scripts, including modifying flow execution, and using an encoded password in a script.
-"
-0301D6611A36E44C345083F6E2C3BDE58DE59982,0301D6611A36E44C345083F6E2C3BDE58DE59982," Types of scripts
-
-SPSS Modeler uses three types of scripts:
-
-
-
-* Flow scripts are stored as a flow property and are therefore saved and loaded with a specific flow. For example, you can write a flow script that automates the process of training and applying a model nugget. You can also specify that whenever a particular flow runs, the script should be run instead of the flow's canvas content.
-"
-92FE6B199A3B4773C5B57EDEDBA80500E6C66FAF,92FE6B199A3B4773C5B57EDEDBA80500E6C66FAF," selectnode properties
-
- The Select node selects or discards a subset of records from the data stream based on a specific condition. For example, you might select the records that pertain to a particular sales region.
-
-
-
-selectnode properties
-
-Table 1. selectnode properties
-
- selectnode properties Data type Property description
-
-"
-2B4D4CA6A91C05D12F5C7942E73ABAE74BF08472,2B4D4CA6A91C05D12F5C7942E73ABAE74BF08472," slrmnode properties
-
-The Self-Learning Response Model (SLRM) node enables you to build a model in which a single new case, or small number of new cases, can be used to reestimate the model without having to retrain the model using all data.
-
-
-
-slrmnode properties
-
-Table 1. slrmnode properties
-
- slrmnode Properties Values Property description
-
- target field The target field must be a nominal or flag field. A frequency field can also be specified. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information.
- target_response field Type must be flag.
- continue_training_existing_model flag
- target_field_values flag Use all: Use all values from source. Specify: Select values required.
- target_field_values_specify [field1 ... fieldN]
- include_model_assessment flag
- model_assessment_random_seed number Must be a real number.
- model_assessment_sample_size number Must be a real number.
- model_assessment_iterations number Number of iterations.
- display_model_evaluation flag
- max_predictions number
- randomization number
- scoring_random_seed number
- sort AscendingDescending Specifies whether the offers with the highest or lowest scores will be displayed first.
-"
-AEE1A739F2EA11F815EC571163BA99C9B2A97245,AEE1A739F2EA11F815EC571163BA99C9B2A97245," applyselflearningnode properties
-
-You can use Self-Learning Response Model (SLRM) modeling nodes to generate a SLRM model nugget. The scripting name of this model nugget is applyselflearningnode. For more information on scripting the modeling node itself, see [slrmnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/selflearnnodeslots.htmlselflearnnodeslots).
-
-
-
-applyselflearningnode properties
-
-Table 1. applyselflearningnode properties
-
- applyselflearningnode Properties Values Property description
-
- max_predictions number
- randomization number
- scoring_random_seed number
- sort ascending descending Specifies whether the offers with the highest or lowest scores will be displayed first.
-"
-641B0015A5A634BFC40F10AE59873CA784232F14,641B0015A5A634BFC40F10AE59873CA784232F14," sequencenode properties
-
-The Sequence node discovers association rules in sequential or time-oriented data. A sequence is a list of item sets that tends to occur in a predictable order. For example, a customer who purchases a razor and aftershave lotion may purchase shaving cream the next time he shops. The Sequence node is based on the CARMA association rules algorithm, which uses an efficient two-pass method for finding sequences.
-
-
-
-sequencenode properties
-
-Table 1. sequencenode properties
-
- sequencenode Properties Values Property description
-
- id_field field To create a Sequence model, you need to specify an ID field, an optional time field, and one or more content fields. Weight and frequency fields are not used. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information.
- time_field field
- use_time_field flag
- content_fields [field1 ... fieldn]
- contiguous flag
- min_supp number
- min_conf number
- max_size number
- max_predictions number
- mode SimpleExpert
- use_max_duration flag
- max_duration number
- use_gaps flag
- min_item_gap number
- max_item_gap number
- use_pruning flag
- pruning_value number
-"
-29AF55B95D387BE39D4E9D328936B95CAD5BEB67,29AF55B95D387BE39D4E9D328936B95CAD5BEB67," applysequencenode properties
-
-You can use Sequence modeling nodes to generate a Sequence model nugget. The scripting name of this model nugget is applysequencenode. No other properties exist for this model nugget. For more information on scripting the modeling node itself, see [sequencenode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/sequencenodeslots.htmlsequencenodeslots).
-"
-2F88CC7897776EAD3F1A7052A740701B8E1A6969,2F88CC7897776EAD3F1A7052A740701B8E1A6969," setglobalsnode properties
-
-The Set Globals node scans the data and computes summary values that can be used in CLEM expressions. For example, you can use this node to compute statistics for a field called age and then use the overall mean of age in CLEM expressions by inserting the function @GLOBAL_MEAN(age).
-
-
-
-setglobalsnode properties
-
-Table 1. setglobalsnode properties
-
- setglobalsnode properties Data type Property description
-
- globals [Sum Mean Min Max SDev] Structured property where fields to be set must be referenced with the following syntax: node.setKeyedPropertyValue( ""globals"", ""Age"", [""Max"", ""Sum"", ""Mean"", ""SDev""])
-"
-17E39C164E92D0646C4DDDADFDF178BF3B5E2AD0,17E39C164E92D0646C4DDDADFDF178BF3B5E2AD0," settoflagnode properties
-
-The SetToFlag node derives multiple flag fields based on the categorical values defined for one or more nominal fields.
-
-
-
-settoflagnode properties
-
-Table 1. settoflagnode properties
-
- settoflagnode properties Data type Property description
-
- fields_from [category category category] all
- true_value string Specifies the true value used by the node when setting a flag. The default is T.
- false_value string Specifies the false value used by the node when setting a flag. The default is F.
- use_extension flag Use an extension as a suffix or prefix to the new flag field.
- extension string
- add_as SuffixPrefix Specifies whether the extension is added as a suffix or prefix.
-"
-723FD865C01F3AC097E03B74F7D81D574A1A13D4,723FD865C01F3AC097E03B74F7D81D574A1A13D4," simfitnode properties
-
-The Simulation Fitting (Sim Fit) node examines the statistical distribution of the data in each field and generates (or updates) a Simulation Generate node, with the best fitting distribution assigned to each field. The Simulation Generate node can then be used to generate simulated data.
-
-
-
-simfitnode properties
-
-Table 1. simfitnode properties
-
- simfitnode properties Data type Property description
-
- custom_gen_node_name boolean You can generate the name of the generated (or updated) Simulation Generate node automatically by selecting Auto.
- gen_node_name string Specify a custom name for the generated (or updated) node.
- used_cases_type string Specifies the number of cases to use when fitting distributions to the fields in the data set. Use AllCases or FirstNCases.
- used_cases integer The number of cases
- good_fit_type string For continuous fields, specify either the AnderDarling test or the KolmogSmirn test of goodness of fit to rank distributions when fitting distributions to the fields.
-"
-C24646ED4724E2A2D856392DDA9C1B9B05145E11,C24646ED4724E2A2D856392DDA9C1B9B05145E11," simgennode properties
-
- The Simulation Generate (Sim Gen) node provides an easy way to generate simulated data—either from scratch using user specified statistical distributions or automatically using the distributions obtained from running a Simulation Fitting (Sim Fit) node on existing historical data. This is useful when you want to evaluate the outcome of a predictive model in the presence of uncertainty in the model inputs.
-
-
-
-simgennode properties
-
-Table 1. simgennode properties
-
- simgennode properties Data type Property description
-
- fields Structured property See example
- correlations Structured property See example
- keep_min_max_setting boolean
- refit_correlations boolean
- max_cases integer Minimum value is 1000, maximum value is 2,147,483,647
- create_iteration_field boolean
- iteration_field_name string
- replicate_results boolean
-"
-984B203B8A0054A07F5BE3EB99438C7FBCB6CE85,984B203B8A0054A07F5BE3EB99438C7FBCB6CE85," Node and flow property examples
-
-You can use node and flow properties in a variety of ways with SPSS Modeler. They're most commonly used as part of a script: either a standalone script, used to automate multiple flows or operations, or a flow script, used to automate processes within a single flow. You can also specify node parameters by using the node properties within the SuperNode. At the most basic level, properties can also be used as a command line option for starting SPSS Modeler. Using the -p argument as part of command line invocation, you can use a flow property to change a setting in the flow.
-
-
-
-Node and flow property examples
-
-Table 1. Node and flow property examples
-
- Property Meaning
-
- s.max_size Refers to the property max_size of the node named s.
- s:samplenode.max_size Refers to the property max_size of the node named s, which must be a Sample node.
- :samplenode.max_size Refers to the property max_size of the Sample node in the current flow (there must be only one Sample node).
- s:sample.max_size Refers to the property max_size of the node named s, which must be a Sample node.
- t.direction.Age Refers to the role of the field Age in the Type node t.
- :.max_size *** NOT LEGAL *** You must specify either the node name or the node type.
-
-
-
-The example s:sample.max_size illustrates that you don't need to spell out node types in full.
-
-The example t.direction.Age illustrates that some slot names can themselves be structured—in cases where the attributes of a node are more complex than simply individual slots with individual values. Such slots are called structured or complex properties.
-"
-6601B619D597C89F715BC2FAFD703452D64F21CD,6601B619D597C89F715BC2FAFD703452D64F21CD," Syntax for properties
-
-You can set properties using the following syntax:
-
-OBJECT.setPropertyValue(PROPERTY, VALUE)
-
-or:
-
-OBJECT.setKeyedPropertyValue(PROPERTY, KEY, VALUE)
-
-You can retrieve the value of properties using the following syntax:
-
-VARIABLE = OBJECT.getPropertyValue(PROPERTY)
-
-or:
-
-VARIABLE = OBJECT.getKeyedPropertyValue(PROPERTY, KEY)
-
-where OBJECT is a node or output, PROPERTY is the name of the node property that your expression refers to, and KEY is the key value for keyed properties. For example, the following syntax finds the Filter node and then sets the default to include all fields and filter the Age field from downstream data:
-
-filternode = modeler.script.stream().findByType(""filter"", None)
-filternode.setPropertyValue(""default_include"", True)
-filternode.setKeyedPropertyValue(""include"", ""Age"", False)
-
-All nodes used in SPSS Modeler can be located using the flow function findByType(TYPE, LABEL). At least one of TYPE or LABEL must be specified.
-"
-6008CEE94719E6B3CAABFBA9BFF1973B9125E02F,6008CEE94719E6B3CAABFBA9BFF1973B9125E02F," Abbreviations
-
-Standard abbreviations are used throughout the syntax for node properties. Learning the abbreviations is helpful in constructing scripts.
-
-
-
-Standard abbreviations used throughout the syntax
-
-Table 1. Standard abbreviations used throughout the syntax
-
- Abbreviation Meaning
-
- abs Absolute value
- len Length
- min Minimum
- max Maximum
- correl Correlation
- covar Covariance
- num Number or numeric
- pct Percent or percentage
- transp Transparency
-"
-FBD84CB5A6901DDAF7412396F4C6CC190E1B7328,FBD84CB5A6901DDAF7412396F4C6CC190E1B7328," Common node properties
-
-A number of properties are common to all nodes in SPSS Modeler.
-
-
-
-Common node properties
-
-Table 1. Common node properties
-
- Property name Data type Property description
-
- use_custom_name flag
- name string Read-only property that reads the name (either auto or custom) for a node on the canvas.
- custom_name string Specifies a custom name for the node.
- tooltip string
- annotation string
- keywords string Structured slot that specifies a list of keywords associated with the object (for example, [""Keyword1"" ""Keyword2""]).
- cache_enabled flag
- node_type source_supernode process_supernode terminal_supernode all node names as specified for scripting Read-only property used to refer to a node by type. For example, instead of referring to a node only by name, such as real_income, you can also specify the type, such as userinputnode or filternode.
-
-
-
-SuperNode-specific properties are discussed separately, as with all other nodes. See [SuperNode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/defining_slot_parameters_in_supernodes.htmldefining_slot_parameters_in_supernodes) for more information.
-"
-6F2CB7C072A05F7BE0C6CE2ECA39FC9A1BA5E107,6F2CB7C072A05F7BE0C6CE2ECA39FC9A1BA5E107," Model nugget node properties
-
-Refer to this section for a list of available properties for Model nuggets.
-
-Model nugget nodes share the same common properties as other nodes. See [Common node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/slot_parameters_common.htmlslot_parameters_common) for more information.
-"
-29DCFC3FB6EE0CCBA63E0FF3A797936DA9E0C874,29DCFC3FB6EE0CCBA63E0FF3A797936DA9E0C874," Properties reference overview
-
-You can specify a number of different properties for nodes, flows, projects, and SuperNodes. Some properties are common to all nodes, such as name, annotation, and ToolTip, while others are specific to certain types of nodes. Other properties refer to high-level flow operations, such as caching or SuperNode behavior. Properties can be accessed through the standard user interface (for example, when you open the properties for a node) and can also be used in a number of other ways.
-
-
-
-* Properties can be modified through scripts, as described in this section. For more information, see [Syntax for properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/slot_parameter_syntax.html).
-* Node properties can be used in SuperNode parameters.
-
-
-
-In the context of scripting within SPSS Modeler, node and flow properties are often called slot parameters. In this documentation, they are referred to as node properties or flow properties.
-
-For more information about the scripting language, see [The scripting language](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/jython/clementine/python_language_overview.html).
-"
-F127EFF442D2C1D1A1EA01B23E8135B502EF2E79,F127EFF442D2C1D1A1EA01B23E8135B502EF2E79," smotenode properties
-
-The Synthetic Minority Over-sampling Technique (SMOTE) node provides an over-sampling algorithm to deal with imbalanced data sets. It provides an advanced method for balancing data. The SMOTE process node in SPSS Modeler is implemented in Python and requires the imbalanced-learn© Python library.
-
-
-
-smotenode properties
-
-Table 1. smotenode properties
-
- smotenode properties Data type Property description
-
- target field The target field.
- sample_ratio string Enables a custom ratio value. The two options are Auto (sample_ratio_auto) or Set ratio (sample_ratio_manual).
- sample_ratio_value float The ratio is the number of samples in the minority class over the number of samples in the majority class. It must be larger than 0 and less than or equal to 1. Default is auto.
- enable_random_seed Boolean If set to true, the random_seed property will be enabled.
- random_seed integer The seed used by the random number generator.
- k_neighbours integer The number of nearest neighbors to be used for constructing synthetic samples. Default is 5.
- m_neighbours integer The number of nearest neighbors to be used for determining if a minority sample is in danger. This option is only enabled with the SMOTE algorithm types borderline1 and borderline2. Default is 10.
-"
-3259E737315294C6380ED46645AB8D073A5ED861,3259E737315294C6380ED46645AB8D073A5ED861," sortnode properties
-
- The Sort node sorts records into ascending or descending order based on the values of one or more fields.
-
-
-
-sortnode properties
-
-Table 1. sortnode properties
-
- sortnode properties Data type Property description
-
- keys list Specifies the fields you want to sort against. If no direction is specified, the default is used.
- default_ascending flag Specifies the default sort order.
- use_existing_keys flag Specifies whether sorting is optimized by using the previous sort order for fields that are already sorted.
-"
-F3DD7962CB3AA07C8C469EDE0C7852993AC3F290,F3DD7962CB3AA07C8C469EDE0C7852993AC3F290," Import node common properties
-
-Properties that are common to most import nodes are listed here, with information on specific nodes in the topics that follow.
-
-
-
-Import node common properties
-
-Table 1. Import node common properties
-
- Property name Data type Property description
-
- asset_type DataAsset Connection Specify your data type: DataAsset or Connection.
- asset_id string When DataAsset is set for the asset_type, this is the ID of the asset.
- asset_name string When DataAsset is set for the asset_type, this is the name of the asset.
- connection_id string When Connection is set for the asset_type, this is the ID of the connection.
- connection_name string When Connection is set for the asset_type, this is the name of the connection.
-"
-8F42BD98BE9767332CE949506A9E193393DA73FA,8F42BD98BE9767332CE949506A9E193393DA73FA," statisticsnode properties
-
-The Statistics node provides basic summary information about numeric fields. It calculates summary statistics for individual fields and correlations between fields.
-
-
-
-statisticsnode properties
-
-Table 1. statisticsnode properties
-
- statisticsnode properties Data type Property description
-
- use_output_name flag Specifies whether a custom output name is used.
- output_name string If use_output_name is true, specifies the name to use.
- output_mode ScreenFile Used to specify target location for output generated from the output node.
- output_format Text (.txt) HTML (.html) Output (.cou) Used to specify the type of output.
- full_filename string
- examine list
- correlate list
- statistics [count mean sum min max range variance sdev semean median mode]
- correlation_mode ProbabilityAbsolute Specifies whether to label correlations by probability or absolute value.
- label_correlations flag
- weak_label string
- medium_label string
- strong_label string
- weak_below_probability number When correlation_mode is set to Probability, specifies the cutoff value for weak correlations. This must be a value between 0 and 1—for example, 0.90.
- strong_above_probability number Cutoff value for strong correlations.
-"
-5B85770138782723E09D9ED65F8655484D03BE44_0,5B85770138782723E09D9ED65F8655484D03BE44," derive_stbnode properties
-
- The Space-Time-Boxes node derives Space-Time-Boxes from latitude, longitude, and timestamp fields. You can also identify frequent Space-Time-Boxes as hangouts.
-
-
-
-Space-Time-Boxes node properties
-
-Table 1. Space-Time-Boxes node properties
-
- derive_stbnode properties Data type Property description
-
- mode IndividualRecords Hangouts
- latitude_field field
- longitude_field field
- timestamp_field field
- hangout_density density A single density. See densities for valid density values.
- densities [density,density,..., density] Each density is a string (for example, STB_GH8_1DAY). Note that there are limits to which densities are valid. For the geohash, you can use values from GH1 to GH15. For the temporal part, you can use the following values: EVER 1YEAR 1MONTH 1DAY 12HOURS 8HOURS 6HOURS 4HOURS 3HOURS 2HOURS 1HOUR 30MIN 15MIN 10MIN 5MIN 2MIN 1MIN 30SECS 15SECS 10SECS 5SECS 2SECS 1SEC
- id_field field
-"
-5B85770138782723E09D9ED65F8655484D03BE44_1,5B85770138782723E09D9ED65F8655484D03BE44," qualifying_duration 1DAY 12HOURS 8HOURS 6HOURS 4HOURS 3HOURS 2HOURS 1HOUR 30MIN 15MIN 10MIN 5MIN 2MIN 1MIN 30SECS 15SECS 10SECS 5SECS 2SECS 1SECS Must be a string.
- min_events integer Minimum valid integer value is 2.
- qualifying_pct integer Must be in the range of 1 and 100.
- add_extension_as Prefix Suffix
-"
-5D193C88D3E3235EA441BB82CCEEAAE20BB3EFCC,5D193C88D3E3235EA441BB82CCEEAAE20BB3EFCC," Flow scripts
-
-You can use scripts to customize operations within a particular flow, and they're saved with that flow. You can specify a particular execution order for the terminal nodes within a flow. You use the flow script settings to edit the script that's saved with the current flow.
-
-To access the flow script settings:
-
-
-
-1. Click the Flow Properties icon on the toolbar.
-2. Open the Scripting section to work with scripts for the current flow. You can also launch the Expression Builder from here by clicking the calculator icon. 
-
-
-
-You can specify whether a script does or doesn't run when the flow runs. To run the script each time the flow runs, respecting the execution order of the script, select Run the script. This setting provides automation at the flow level for quicker model building. However, the default setting is to ignore this script during flow execution ( Run all terminal nodes.
-"
-DA0357B0ADE596E1A23F676F76FF4304B97AEF2B,DA0357B0ADE596E1A23F676F76FF4304B97AEF2B," Jython code size limits
-
-Jython compiles each script to Java bytecode, which the Java Virtual Machine (JVM) then runs. However, Java imposes a limit on the size of a single bytecode file. So when Jython attempts to load the bytecode, it can cause the JVM to crash. SPSS Modeler is unable to prevent this from happening.
-
-Ensure that you write your Jython scripts using good coding practices (such as minimizing duplicated code by using variables or functions to compute common intermediate values). If necessary, you may need to split your code over several source files or define it using modules as these are compiled into separate bytecode files.
-"
-AAC6535CAB0B4600A9683433FCAB805B2C4EAA53,AAC6535CAB0B4600A9683433FCAB805B2C4EAA53," Structured properties
-
-There are two ways in which scripting uses structured properties for increased clarity when parsing:
-
-
-
-"
-C64A69EBC1360788037B11E8B0DC5BB74D913819,C64A69EBC1360788037B11E8B0DC5BB74D913819," svmnode properties
-
-The Support Vector Machine (SVM) node enables you to classify data into one of two groups without overfitting. SVM works well with wide data sets, such as those with a very large number of input fields.
-
-
-
-svmnode properties
-
-Table 1. svmnode properties
-
- svmnode Properties Values Property description
-
- all_probabilities flag
- stopping_criteria 1.0E-1 1.0E-2 1.0E-3 1.0E-4 1.0E-5 1.0E-6 Determines when to stop the optimization algorithm.
- regularization number Also known as the C parameter.
- precision number Used only if measurement level of target field is Continuous.
- kernel RBF Polynomial Sigmoid Linear Type of kernel function used for the transformation. RBF is the default.
- rbf_gamma number Used only if kernel is RBF.
- gamma number Used only if kernel is Polynomial or Sigmoid.
- bias number
- degree number Used only if kernel is Polynomial.
- calculate_variable_importance flag
- calculate_raw_propensities flag
-"
-BCAE38614C57F1ABB775C4C9372DC02531830659,BCAE38614C57F1ABB775C4C9372DC02531830659," applysvmnode properties
-
-You can use SVM modeling nodes to generate an SVM model nugget. The scripting name of this model nugget is applysvmnode. For more information on scripting the modeling node itself, see [svmnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/svmnodeslots.htmlsvmnodeslots).
-
-
-
-applysvmnode properties
-
-Table 1. applysvmnode properties
-
- applysvmnode Properties Values Property description
-
- all_probabilities flag
- calculate_raw_propensities flag
-"
-3F5D0FD7E429FEDBFC62DFC9BAB41B3CC5FB4E4F_0,3F5D0FD7E429FEDBFC62DFC9BAB41B3CC5FB4E4F," tablenode properties
-
-The Table node displays data in table format. This is useful whenever you need to inspect your data values.
-
-Note: Some of the properties on this page might not be available in your platform.
-
-
-
-tablenode properties
-
-Table 1. tablenode properties
-
- tablenode properties Data type Property description
-
- full_filename string If disk, data, or HTML output, the name of the output file.
- use_output_name flag Specifies whether a custom output name is used.
- output_name string If use_output_name is true, specifies the name to use.
- output_mode Screen File Used to specify target location for output generated from the output node.
- output_format Formatted (.tab) Delimited (.csv) HTML (.html) Output (.cou) Used to specify the type of output.
- transpose_data flag Transposes the data before export so that rows represent fields and columns represent records.
- paginate_output flag When the output_format is HTML, causes the output to be separated into pages.
- lines_per_page number When used with paginate_output, specifies the lines per page of output.
- highlight_expr string
- output string A read-only property that holds a reference to the last table built by the node.
- value_labels [[Value LabelString] [Value LabelString] ...] Used to specify labels for value pairs.
- display_places integer Sets the number of decimal places for the field when displayed (applies only to fields with REAL storage). A value of –1 will use the flow default.
- export_places integer Sets the number of decimal places for the field when exported (applies only to fields with REAL storage). A value of –1 will use the stream default.
- decimal_separator DEFAULT PERIOD COMMA Sets the decimal separator for the field (applies only to fields with REAL storage).
-"
-3F5D0FD7E429FEDBFC62DFC9BAB41B3CC5FB4E4F_1,3F5D0FD7E429FEDBFC62DFC9BAB41B3CC5FB4E4F," date_format ""DDMMYY"" ""MMDDYY"" ""YYMMDD"" ""YYYYMMDD"" ""YYYYDDD"" DAY MONTH ""DD-MM-YY"" ""DD-MM-YYYY"" ""MM-DD-YY"" ""MM-DD-YYYY"" ""DD-MON-YY"" ""DD-MON-YYYY"" ""YYYY-MM-DD"" ""DD.MM.YY"" ""DD.MM.YYYY"" ""MM.DD.YYYY"" ""DD.MON.YY"" ""DD.MON.YYYY"" ""DD/MM/YY"" ""DD/MM/YYYY"" ""MM/DD/YY"" ""MM/DD/YYYY"" ""DD/MON/YY"" ""DD/MON/YYYY"" MON YYYY q Q YYYY ww WK YYYY Sets the date format for the field (applies only to fields with DATE or TIMESTAMP storage).
- time_format ""HHMMSS"" ""HHMM"" ""MMSS"" ""HH:MM:SS"" ""HH:MM"" ""MM:SS"" ""(H)H:(M)M:(S)S"" ""(H)H:(M)M"" ""(M)M:(S)S"" ""HH.MM.SS"" ""HH.MM"" ""MM.SS"" ""(H)H.(M)M.(S)S"" ""(H)H.(M)M"" ""(M)M.(S)S"" Sets the time format for the field (applies only to fields with TIME or TIMESTAMP storage).
-"
-85C99B52BBBC96007BD819861E675C61D7B742CA_0,85C99B52BBBC96007BD819861E675C61D7B742CA," tcmnode properties
-
-Temporal causal modeling attempts to discover key causal relationships in time series data. In temporal causal modeling, you specify a set of target series and a set of candidate inputs to those targets. The procedure then builds an autoregressive time series model for each target and includes only those inputs that have the most significant causal relationship with the target.
-
-
-
-tcmnode properties
-
-Table 1. tcmnode properties
-
- tcmnode Properties Values Property description
-
- custom_fields Boolean
- dimensionlist [dimension1 ... dimensionN]
- data_struct Multiple Single
- metric_fields fields
- both_target_and_input [f1 ... fN]
- targets [f1 ... fN]
- candidate_inputs [f1 ... fN]
- forced_inputs [f1 ... fN]
- use_timestamp Timestamp Period
- input_interval None Unknown Year Quarter Month Week Day Hour Hour_nonperiod Minute Minute_nonperiod Second Second_nonperiod
- period_field string
- period_start_value integer
- num_days_per_week integer
- start_day_of_week Sunday Monday Tuesday Wednesday Thursday Friday Saturday
- num_hours_per_day integer
- start_hour_of_day integer
- timestamp_increments integer
- cyclic_increments integer
- cyclic_periods list
- output_interval None Year Quarter Month Week Day Hour Minute Second
- is_same_interval Same Notsame
- cross_hour Boolean
-"
-85C99B52BBBC96007BD819861E675C61D7B742CA_1,85C99B52BBBC96007BD819861E675C61D7B742CA," aggregate_and_distribute list
- aggregate_default Mean Sum Mode Min Max
- distribute_default Mean Sum
- group_default Mean Sum Mode Min Max
- missing_imput Linear_interp Series_mean K_mean K_meridian Linear_trend None
- k_mean_param integer
- k_median_param integer
- missing_value_threshold integer
- conf_level integer
- max_num_predictor integer
- max_lag integer
- epsilon number
- threshold integer
- is_re_est Boolean
- num_targets integer
- percent_targets integer
- fields_display list
- series_dispaly list
- network_graph_for_target Boolean
- sign_level_for_target number
- fit_and_outlier_for_target Boolean
- sum_and_para_for_target Boolean
- impact_diag_for_target Boolean
- impact_diag_type_for_target Effect Cause Both
- impact_diag_level_for_target integer
- series_plot_for_target Boolean
- res_plot_for_target Boolean
- top_input_for_target Boolean
- forecast_table_for_target Boolean
- same_as_for_target Boolean
- network_graph_for_series Boolean
- sign_level_for_series number
- fit_and_outlier_for_series Boolean
- sum_and_para_for_series Boolean
- impact_diagram_for_series Boolean
- impact_diagram_type_for_series Effect Cause Both
- impact_diagram_level_for_series integer
- series_plot_for_series Boolean
- residual_plot_for_series Boolean
- forecast_table_for_series Boolean
- outlier_root_cause_analysis Boolean
- causal_levels integer
- outlier_table Interactive Pivot Both
- rmsp_error Boolean
- bic Boolean
-"
-85C99B52BBBC96007BD819861E675C61D7B742CA_2,85C99B52BBBC96007BD819861E675C61D7B742CA," r_square Boolean
- outliers_over_time Boolean
- series_transormation Boolean
- use_estimation_period Boolean
- estimation_period Times Observation
- observations list
- observations_type Latest Earliest
- observations_num integer
- observations_exclude integer
- extend_records_into_future Boolean
- forecastperiods integer
- max_num_distinct_values integer
- display_targets FIXEDNUMBER PERCENTAGE
- goodness_fit_measure ROOTMEAN BIC RSQUARE
- top_input_for_series Boolean
- aic Boolean
- rmse Boolean
- date_time_field field Time/Date field
- auto_detect_lag Boolean This setting specifies the number of lag terms for each input in the model for each target.
- numoflags Integer By default, the number of lag terms is automatically determined from the time interval that is used for the analysis.
-"
-DB504727C8688251CAAB0C18E12BDE9DC625ECD1,DB504727C8688251CAAB0C18E12BDE9DC625ECD1," applytcmnode properties
-
-You can use Temporal Causal Modeling (TCM) modeling nodes to generate a TCM model nugget. The scripting name of this model nugget is applytcmnode. For more information on scripting the modeling node itself, see [tcmnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/tcmnodeslots.htmltcmnodeslots).
-
-
-
-applytcmnode properties
-
-Table 1. applytcmnode properties
-
- applytcmnode Properties Values Property description
-
- ext_future boolean
- ext_future_num integer
- noise_res boolean
- conf_limits boolean
-"
-5062008D59B761C5CF7F32F131021EA81A03B048,5062008D59B761C5CF7F32F131021EA81A03B048," timeplotnode properties
-
-The Time Plot node displays one or more sets of time series data. Typically, you would first use a Time Intervals node to create a TimeLabel field, which would be used to label the x axis.
-
-
-
-timeplotnode properties
-
-Table 1. timeplotnode properties
-
- timeplotnode properties Data type Property description
-
- plot_series SeriesModels
- use_custom_x_field flag
- x_field field
- y_fields list
- panel flag
- normalize flag
- line flag
- points flag
- point_type Rectangle Dot Triangle Hexagon Plus Pentagon Star BowTie HorizontalDash VerticalDash IronCross Factory House Cathedral OnionDome ConcaveTriangleOblateGlobe CatEye FourSidedPillow RoundRectangle Fan
- smoother flag You can add smoothers to the plot only if you set panel to True.
- use_records_limit flag
- records_limit integer
- symbol_size number Specifies a symbol size.
-"
-76B3F98C842554781D96B8DDE05A74D4D78B4E7A_0,76B3F98C842554781D96B8DDE05A74D4D78B4E7A," ts properties
-
-The Time Series node estimates exponential smoothing, univariate Autoregressive Integrated Moving Average (ARIMA), and multivariate ARIMA (or transfer function) models for time series data and produces forecasts of future performance.
-
-
-
-ts properties
-
-Table 1. ts properties
-
- ts Properties Values Property description
-
- targets field The Time Series node forecasts one or more targets, optionally using one or more input fields as predictors. Frequency and weight fields are not used. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information.
- candidate_inputs [field1 ... fieldN] Input or predictor fields used by the model.
- use_period flag
- date_time_field field
- input_interval None Unknown Year Quarter Month Week Day Hour Hour_nonperiod Minute Minute_nonperiod Second Second_nonperiod
- period_field field
- period_start_value integer
- num_days_per_week integer
- start_day_of_week Sunday Monday Tuesday Wednesday Thursday Friday Saturday
- num_hours_per_day integer
- start_hour_of_day integer
- timestamp_increments integer
- cyclic_increments integer
- cyclic_periods list
- output_interval None Year Quarter Month Week Day Hour Minute Second
- is_same_interval flag
- cross_hour flag
-"
-76B3F98C842554781D96B8DDE05A74D4D78B4E7A_1,76B3F98C842554781D96B8DDE05A74D4D78B4E7A," aggregate_and_distribute list
- aggregate_default Mean Sum Mode Min Max
- distribute_default Mean Sum
- group_default Mean Sum Mode Min Max
- missing_imput Linear_interp Series_mean K_mean K_median Linear_trend
- k_span_points integer
- use_estimation_period flag
- estimation_period Observations Times
- date_estimation list Only available if you use date_time_field
- period_estimation list Only available if you use use_period
- observations_type Latest Earliest
- observations_num integer
- observations_exclude integer
- method ExpertModeler Exsmooth Arima
- expert_modeler_method ExpertModeler Exsmooth Arima
- consider_seasonal flag
- detect_outliers flag
- expert_outlier_additive flag
- expert_outlier_level_shift flag
- expert_outlier_innovational flag
- expert_outlier_level_shift flag
- expert_outlier_transient flag
- expert_outlier_seasonal_additive flag
- expert_outlier_local_trend flag
- expert_outlier_additive_patch flag
- consider_newesmodels flag
- exsmooth_model_type Simple HoltsLinearTrend BrownsLinearTrend DampedTrend SimpleSeasonal WintersAdditive WintersMultiplicative DampedTrendAdditive DampedTrendMultiplicative MultiplicativeTrendAdditive MultiplicativeSeasonal MultiplicativeTrendMultiplicative MultiplicativeTrend Specifies the Exponential Smoothing method. Default is Simple.
-"
-76B3F98C842554781D96B8DDE05A74D4D78B4E7A_2,76B3F98C842554781D96B8DDE05A74D4D78B4E7A," futureValue_type_method Compute specify If Compute is used, the system computes the Future Values for the forecast period for each predictor. For each predictor, you can choose from a list of functions (blank, mean of recent points, most recent value) or use specify to enter values manually. To specify individual fields and properties, use the extend_metric_values property. For example: set :ts.futureValue_type_method=""specify"" set :ts.extend_metric_values=[{'Market_1','USER_SPECIFY', 1,2,3]}, {'Market_2','MOST_RECENT_VALUE', ''},{'Market_3','RECENT_POINTS_MEAN', ''}]
- exsmooth_transformation_type None SquareRoot NaturalLog
- arima.p integer
- arima.d integer
- arima.q integer
- arima.sp integer
- arima.sd integer
- arima.sq integer
- arima_transformation_type None SquareRoot NaturalLog
- arima_include_constant flag
- tf_arima.p.fieldname integer For transfer functions.
- tf_arima.d.fieldname integer For transfer functions.
- tf_arima.q.fieldname integer For transfer functions.
- tf_arima.sp.fieldname integer For transfer functions.
- tf_arima.sd.fieldname integer For transfer functions.
- tf_arima.sq.fieldname integer For transfer functions.
- tf_arima.delay.fieldname integer For transfer functions.
- tf_arima.transformation_type.fieldname None SquareRoot NaturalLog For transfer functions.
- arima_detect_outliers flag
- arima_outlier_additive flag
- arima_outlier_level_shift flag
- arima_outlier_innovational flag
- arima_outlier_transient flag
-"
-76B3F98C842554781D96B8DDE05A74D4D78B4E7A_3,76B3F98C842554781D96B8DDE05A74D4D78B4E7A," arima_outlier_seasonal_additive flag
- arima_outlier_local_trend flag
- arima_outlier_additive_patch flag
- max_lags integer
- cal_PI flag
- conf_limit_pct real
- events fields
- continue flag
- scoring_model_only flag Use for models with very large numbers (tens of thousands) of time series.
- forecastperiods integer
- extend_records_into_future flag
- extend_metric_values fields Allows you to provide future values for predictors.
- conf_limits flag
- noise_res flag
- max_models_output integer Controls how many models are shown in output. Default is 10. Models are not shown in output if the total number of models built exceeds this value. Models are still available for scoring.
-"
-EED66538A3E4854D56210AB1D6AC49016F1E40A2_0,EED66538A3E4854D56210AB1D6AC49016F1E40A2," streamingtimeseries properties
-
-The Streaming Time Series node builds and scores time series models in one step.
-
-
-
-streamingtimeseries properties
-
-Table 1. streamingtimeseries properties
-
- streamingtimeseries properties Values Property description
-
- targets field The Streaming TS node forecasts one or more targets, optionally using one or more input fields as predictors. Frequency and weight fields aren't used. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information.
- candidate_inputs [field1 ... fieldN] Input or predictor fields used by the model.
- use_period flag
- date_time_field field
- input_interval NoneUnknownYearQuarterMonthWeekDayHourHour_nonperiodMinuteMinute_nonperiodSecondSecond_nonperiod
- period_field field
- period_start_value integer
- num_days_per_week integer
- start_day_of_week SundayMondayTuesdayWednesdayThursdayFridaySaturday
- num_hours_per_day integer
- start_hour_of_day integer
- timestamp_increments integer
- cyclic_increments integer
- cyclic_periods list
- output_interval NoneYearQuarterMonthWeekDayHourMinuteSecond
- is_same_interval flag
- cross_hour flag
- aggregate_and_distribute list
- aggregate_default MeanSumModeMinMax
- distribute_default MeanSum
- group_default MeanSumModeMinMax
- missing_imput Linear_interpSeries_meanK_meanK_medianLinear_trend
- k_span_points integer
- use_estimation_period flag
- estimation_period ObservationsTimes
-"
-EED66538A3E4854D56210AB1D6AC49016F1E40A2_1,EED66538A3E4854D56210AB1D6AC49016F1E40A2," date_estimation list Only available if you use date_time_field.
- period_estimation list Only available if you use use_period.
- observations_type LatestEarliest
- observations_num integer
- observations_exclude integer
- method ExpertModelerExsmoothArima
- expert_modeler_method ExpertModelerExsmoothArima
- consider_seasonal flag
- detect_outliers flag
- expert_outlier_additive flag
- expert_outlier_innovational flag
- expert_outlier_level_shift flag
- expert_outlier_transient flag
- expert_outlier_seasonal_additive flag
- expert_outlier_local_trend flag
- expert_outlier_additive_patch flag
- consider_newesmodels flag
- exsmooth_model_type SimpleHoltsLinearTrendBrownsLinearTrendDampedTrendSimpleSeasonalWintersAdditiveWintersMultiplicativeDampedTrendAdditiveDampedTrendMultiplicativeMultiplicativeTrendAdditiveMultiplicativeSeasonalMultiplicativeTrendMultiplicativeMultiplicativeTrend
- futureValue_type_method Computespecify
- exsmooth_transformation_type NoneSquareRootNaturalLog
- arima.p integer
- arima.d integer
- arima.q integer
- arima.sp integer
- arima.sd integer
- arima.sq integer
- arima_transformation_type NoneSquareRootNaturalLog
- arima_include_constant flag
- tf_arima.p.fieldname integer For transfer functions.
- tf_arima.d.fieldname integer For transfer functions.
- tf_arima.q.fieldname integer For transfer functions.
- tf_arima.sp.fieldname integer For transfer functions.
- tf_arima.sd.fieldname integer For transfer functions.
- tf_arima.sq.fieldname integer For transfer functions.
- tf_arima.delay.fieldname integer For transfer functions.
- tf_arima.transformation_type.fieldname NoneSquareRootNaturalLog For transfer functions.
- arima_detect_outliers flag
-"
-EED66538A3E4854D56210AB1D6AC49016F1E40A2_2,EED66538A3E4854D56210AB1D6AC49016F1E40A2," arima_outlier_additive flag
- arima_outlier_level_shift flag
- arima_outlier_innovational flag
- arima_outlier_transient flag
- arima_outlier_seasonal_additive flag
- arima_outlier_local_trend flag
- arima_outlier_additive_patch flag
- conf_limit_pct real
- events fields
- forecastperiods integer
- extend_records_into_future flag
- conf_limits flag
- noise_res flag
- max_models_output integer Specify the maximum number of models you want to include in the output. Note that if the number of models built exceeds this threshold, the models aren't shown in the output but they're still available for scoring. Default value is 10. Displaying a large number of models may result in poor performance or instability.
- custom_fields boolean This option tells the node to use the field information specified here instead of that given in any upstream Type node(s). After selecting this option, specify the following fields as required.
-"
-9087B2B5302FD4B7C8343C568C7C8A925544BB40,9087B2B5302FD4B7C8343C568C7C8A925544BB40," applyts properties
-
-You can use the Time Series modeling node to generate a Time Series model nugget. The scripting name of this model nugget is applyts. For more information on scripting the modeling node itself, see [ts properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/timeser_as_nodeslots.htmltimeser_as_nodeslots).
-
-
-
-applyts properties
-
-Table 1. applyts properties
-
- applyts Properties Values Property description
-
- extend_records_into_future Boolean
- ext_future_num integer
- compute_future_values_input Boolean
- forecastperiods integer
- noise_res boolean
- conf_limits boolean
- target_fields list
-"
-EA4CB9CD97FFB8C956B4F5D28D2759C0ED832BB5,EA4CB9CD97FFB8C956B4F5D28D2759C0ED832BB5," transformnode properties
-
-The Transform node allows you to select and visually preview the results of transformations before applying them to selected fields.
-
-
-
-transformnode properties
-
-Table 1. transformnode properties
-
- transformnode properties Data type Property description
-
- fields [ field1… fieldn] The fields to be used in the transformation.
- formula AllSelect Indicates whether all or selected transformations should be calculated.
- formula_inverse flag Indicates if the inverse transformation should be used.
- formula_inverse_offset number Indicates a data offset to be used for the formula. Set as 0 by default, unless specified by user.
- formula_log_n flag Indicates if the logn transformation should be used.
- formula_log_n_offset number
- formula_log_10 flag Indicates if the log10 transformation should be used.
- formula_log_10_offset number
- formula_exponential flag Indicates if the exponential transformation (e^x^) should be used.
- formula_square_root flag Indicates if the square root transformation should be used.
- use_output_name flag Specifies whether a custom output name is used.
- output_name string If use_output_name is true, specifies the name to use.
- output_mode ScreenFile Used to specify target location for output generated from the output node.
- output_format HTML (.html) Output (.cou) Used to specify the type of output.
- paginate_output flag When the output_format is HTML, causes the output to be separated into pages.
-"
-A20FCF106BA3053C247DAF57A4A396F073D1E4E2_0,A20FCF106BA3053C247DAF57A4A396F073D1E4E2," transposenode properties
-
-The Transpose node swaps the data in rows and columns so that records become fields and fields become records.
-
-
-
-transposenode properties
-
-Table 1. transposenode properties
-
- transposenode properties Data type Property description
-
- transpose_method enum Specifies the transpose method: Normal (normal), CASE to VAR (casetovar), or VAR to CASE (vartocase).
- transposed_names PrefixRead Property for the Normal transpose method. New field names can be generated automatically based on a specified prefix, or they can be read from an existing field in the data.
- prefix string Property for the Normal transpose method.
- num_new_fields integer Property for the Normal transpose method. When using a prefix, specifies the maximum number of new fields to create.
- read_from_field field Property for the Normal transpose method. Field from which names are read. This must be an instantiated field or an error will occur when the node is executed.
- max_num_fields integer Property for the Normal transpose method. When reading names from a field, specifies an upper limit to avoid creating an inordinately large number of fields.
- transpose_type NumericStringCustom Property for the Normal transpose method. By default, only continuous (numeric range) fields are transposed, but you can choose a custom subset of numeric fields or transpose all string fields instead.
- transpose_fields list Property for the Normal transpose method. Specifies the fields to transpose when the Custom option is used.
- id_field_name field Property for the Normal transpose method.
- transpose_casetovar_idfields field Property for the CASE to VAR (casetovar) transpose method. Accepts multiple fields to be used as index fields. field1 ... fieldN
-"
-A20FCF106BA3053C247DAF57A4A396F073D1E4E2_1,A20FCF106BA3053C247DAF57A4A396F073D1E4E2," transpose_casetovar_columnfields field Property for the CASE to VAR (casetovar) transpose method. Accepts multiple fields to be used as column fields. field1 ... fieldN
- transpose_casetovar_valuefields field Property for the CASE to VAR (casetovar) transpose method. Accepts multiple fields to be used as value fields. field1 ... fieldN
- transpose_vartocase_idfields field Property for the VAR to CASE (vartocase) transpose method. Accepts multiple fields to be used as ID variable fields. field1 ... fieldN
- transpose_vartocase_valfields field Property for the VAR to CASE (vartocase) transpose method. Accepts multiple fields to be used as value variable fields. field1 ... fieldN
- transpose_new_field_names array New field names.
-"
-E01C7D12E53747C7ED71D615D7E9DCD8F17638ED_0,E01C7D12E53747C7ED71D615D7E9DCD8F17638ED," treeas properties
-
-The Tree-AS node is similar to the CHAID node; however, the Tree-AS node is designed to process big data to create a single tree and displays the resulting model in the output viewer. The node generates a decision tree by using chi-square statistics (CHAID) to identify optimal splits. This use of CHAID can generate nonbinary trees, meaning that some splits have more than two branches. Target and input fields can be numeric range (continuous) or categorical. Exhaustive CHAID is a modification of CHAID that does a more thorough job of examining all possible splits but takes longer to compute.
-
-
-
-treeas properties
-
-Table 1. treeas properties
-
- treeas Properties Values Property description
-
- target field In the Tree-AS node, CHAID models require a single target and one or more input fields. A frequency field can also be specified. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information.
- method chaidexhaustive_chaid
- max_depth integer Maximum tree depth, from 0 to 20. The default value is 5.
- num_bins integer Only used if the data is made up of continuous inputs. Set the number of equal frequency bins to be used for the inputs; options are: 2, 4, 5, 10, 20, 25, 50, or 100.
- record_threshold integer The number of records at which the model will switch from using p-values to Effect sizes while building the tree. The default is 1,000,000; increase or decrease this in increments of 10,000.
- split_alpha number Significance level for splitting. The value must be between 0.01 and 0.99.
- merge_alpha number Significance level for merging. The value must be between 0.01 and 0.99.
-"
-E01C7D12E53747C7ED71D615D7E9DCD8F17638ED_1,E01C7D12E53747C7ED71D615D7E9DCD8F17638ED," bonferroni_adjustment flag Adjust significance values using Bonferroni method.
- effect_size_threshold_cont number Set the Effect size threshold when splitting nodes and merging categories when using a continuous target. The value must be between 0.01 and 0.99.
- effect_size_threshold_cat number Set the Effect size threshold when splitting nodes and merging categories when using a categorical target. The value must be between 0.01 and 0.99.
- split_merged_categories flag Allow resplitting of merged categories.
- grouping_sig_level number Used to determine how groups of nodes are formed or how unusual nodes are identified.
- chi_square pearsonlikelihood_ratio Method used to calculate the chi-square statistic: Pearson or Likelihood Ratio
- minimum_record_use use_percentageuse_absolute
- min_parent_records_pc number Default value is 2. Minimum 1, maximum 100, in increments of 1. Parent branch value must be higher than child branch.
- min_child_records_pc number Default value is 1. Minimum 1, maximum 100, in increments of 1.
- min_parent_records_abs number Default value is 100. Minimum 1, maximum 100, in increments of 1. Parent branch value must be higher than child branch.
- min_child_records_abs number Default value is 50. Minimum 1, maximum 100, in increments of 1.
- epsilon number Minimum change in expected cell frequencies..
- max_iterations number Maximum iterations for convergence.
- use_costs flag
- costs structured Structured property. The format is a list of 3 values: the actual value, the predicted value, and the cost if that prediction is wrong. For example: tree.setPropertyValue(""costs"", [""drugA"", ""drugB"", 3.0], ""drugX"", ""drugY"", 4.0]])
- default_cost_increase nonelinearsquarecustom Only enabled for ordinal targets. Set default values in the costs matrix.
-"
-8EA57CA1AE730686E86FC3B2AABD71C9F8EA9823,8EA57CA1AE730686E86FC3B2AABD71C9F8EA9823," applytreeas properties
-
-You can use Tree-AS modeling nodes to generate a Tree-AS model nugget. The scripting name of this model nugget is applytreenas. For more information on scripting the modeling node itself, see [treeas properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/treeASnodeslots.htmltreeASnodeslots).
-
-
-
-applytreeas properties
-
-Table 1. applytreeas properties
-
- applytreeas Properties Values Property description
-
- calculate_conf flag This property includes confidence calculations in the generated tree.
-"
-3B763FFD1393292F4C3CA9D236440065B6660E8E_0,3B763FFD1393292F4C3CA9D236440065B6660E8E," twostepAS properties
-
-TwoStep Cluster is an exploratory tool that's designed to reveal natural groupings (or clusters) within a data set that would otherwise not be apparent. The algorithm that's employed by this procedure has several desirable features that differentiate it from traditional clustering techniques, such as handling of categorical and continuous variables, automatic selection of number of clusters, and scalability.
-
-
-
-twostepAS properties
-
-Table 1. twostepAS properties
-
- twostepAS Properties Values Property description
-
- inputs [f1 ... fN] TwoStepAS models use a list of input fields, but no target. Weight and frequency fields are not recognized.
- use_predefined_roles Boolean Default=True
- use_custom_field_assignments Boolean Default=False
- cluster_num_auto Boolean Default=True
- min_num_clusters integer Default=2
- max_num_clusters integer Default=15
- num_clusters integer Default=5
- clustering_criterion AIC BIC
- automatic_clustering_method use_clustering_criterion_setting Distance_jump Minimum Maximum
- feature_importance_method use_clustering_criterion_setting effect_size
- use_random_seed Boolean
- random_seed integer
- distance_measure Euclidean Loglikelihood
- include_outlier_clusters Boolean Default=True
- num_cases_in_feature_tree_leaf_is_less_than integer Default=10
- top_perc_outliers integer Default=5
- initial_dist_change_threshold integer Default=0
- leaf_node_maximum_branches integer Default=8
- non_leaf_node_maximum_branches integer Default=8
- max_tree_depth integer Default=3
-"
-3B763FFD1393292F4C3CA9D236440065B6660E8E_1,3B763FFD1393292F4C3CA9D236440065B6660E8E," adjustment_weight_on_measurement_level integer Default=6
- memory_allocation_mb number Default=512
- delayed_split Boolean Default=True
- fields_not_to_standardize [f1 ... fN]
- adaptive_feature_selection Boolean Default=True
- featureMisPercent integer Default=70
- coefRange number Default=0.05
- percCasesSingleCategory integer Default=95
- numCases integer Default=24
- include_model_specifications Boolean Default=True
- include_record_summary Boolean Default=True
- include_field_transformations Boolean Default=True
- excluded_inputs Boolean Default=True
- evaluate_model_quality Boolean Default=True
- show_feature_importance bar chart Boolean Default=True
- show_feature_importance_ word_cloud Boolean Default=True
- show_outlier_clusters_interactive_table_and_chart Boolean Default=True
- show_outlier_clusters_pivot_table Boolean Default=True
- across_cluster_feature_importance Boolean Default=True
- across_cluster_profiles_pivot_table Boolean Default=True
- withinprofiles Boolean Default=True
- cluster_distances Boolean Default=True
- cluster_label String Number
- label_prefix String
-"
-356DD425AD5BE4EE255F2F95F7860B6FDFE3BCC0,356DD425AD5BE4EE255F2F95F7860B6FDFE3BCC0," applytwostepAS properties
-
-You can use TwoStep-AS modeling nodes to generate a TwoStep-AS model nugget. The scripting name of this model nugget is applytwostepAS. For more information on scripting the modeling node itself, see [twostepAS properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/twostep_as_nodeslots.htmltwostep_as_nodeslots).
-
-
-
-applytwostepAS Properties
-
-Table 1. applytwostepAS Properties
-
- applytwostepAS Properties Values Property description
-
- enable_sql_generation false true native When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations.
-"
-0B54763A8146178F9F4809DA458E4DDBD9E28B39,0B54763A8146178F9F4809DA458E4DDBD9E28B39," twostepnode properties
-
-The TwoStep node uses a two-step clustering method. The first step makes a single pass through the data to compress the raw input data into a manageable set of subclusters. The second step uses a hierarchical clustering method to progressively merge the subclusters into larger and larger clusters. TwoStep has the advantage of automatically estimating the optimal number of clusters for the training data. It can handle mixed field types and large data sets efficiently.
-
-
-
-twostepnode properties
-
-Table 1. twostepnode properties
-
- twostepnode Properties Values Property description
-
- inputs [field1 ... fieldN] TwoStep models use a list of input fields, but no target. Weight and frequency fields are not recognized. See [Common modeling node properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/modelingnodeslots_common.htmlmodelingnodeslots_common) for more information.
- standardize flag
- exclude_outliers flag
- percentage number
- cluster_num_auto flag
- min_num_clusters number
- max_num_clusters number
- num_clusters number
- cluster_label StringNumber
- label_prefix string
-"
-BAB82891CA84875B6EEC64974558FC838197C99A,BAB82891CA84875B6EEC64974558FC838197C99A," applytwostepnode properties
-
-You can use TwoStep modeling nodes to generate a TwoStep model nugget. The scripting name of this model nugget is applytwostepnode. For more information on scripting the modeling node itself, see [twostepnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/twostepnodeslots.htmltwostepnodeslots).
-
-
-
-applytwostepnode properties
-
-Table 1. applytwostepnode properties
-
- applytwostepnode Properties Values Property description
-
- enable_sql_generation udf native When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations.
-"
-7EC3F9527921FB3F713DD6AE1D8035E6C81753C4_0,7EC3F9527921FB3F713DD6AE1D8035E6C81753C4," typenode properties
-
-The Type node specifies field metadata and properties. For example, you can specify a measurement level (continuous, nominal, ordinal, or flag) for each field, set options for handling missing values and system nulls, set the role of a field for modeling purposes, specify field and value labels, and specify values for a field.
-
-Note that in some cases you may need to fully instantiate the Type node for other nodes to work correctly, such as the fields from property of the SetToFlag node. You can simply connect a Table node and run it to instantiate the fields:
-
-tablenode = stream.createAt(""table"", ""Table node"", 150, 50)
-stream.link(node, tablenode)
-tablenode.run(None)
-stream.delete(tablenode)
-
-
-
-typenode properties
-
-Table 1. typenode properties
-
- typenode properties Data type Property description
-
- direction Input Target Both None Partition Split Frequency RecordID Keyed property for field roles.
- type Range Flag Set Typeless Discrete OrderedSet Default Measurement level of the field (previously called the ""type"" of field). Setting type to Default will clear any values parameter setting, and if value_mode has the value Specify, it will be reset to Read. If value_mode is set to Pass or Read, setting type will not affect value_mode. The data types used internally differ from those visible in the type node. The correspondence is as follows: Range -> Continuous Set - > Nominal OrderedSet -> Ordinal Discrete- > Categorical.
-"
-7EC3F9527921FB3F713DD6AE1D8035E6C81753C4_1,7EC3F9527921FB3F713DD6AE1D8035E6C81753C4," storage Unknown String Integer Real Time Date Timestamp Read-only keyed property for field storage type.
- check None Nullify Coerce Discard Warn Abort Keyed property for field type and range checking.
- values [value value] For continuous fields, the first value is the minimum, and the last value is the maximum. For nominal fields, specify all values. For flag fields, the first value represents false, and the last value represents true. Setting this property automatically sets the value_mode property to Specify.
- value_mode Read Pass Read+ Current Specify Determines how values are set. Note that you cannot set this property to Specify directly; to use specific values, set the values property.
- extend_values flag Applies when value_mode is set to Read. Set to T to add newly read values to any existing values for the field. Set to F to discard existing values in favor of the newly read values.
- enable_missing flag When set to T, activates tracking of missing values for the field.
- missing_values [value value ...] Specifies data values that denote missing data.
- range_missing flag Specifies whether a missing-value (blank) range is defined for a field.
- missing_lower string When range_missing is true, specifies the lower bound of the missing-value range.
- missing_upper string When range_missing is true, specifies the upper bound of the missing-value range.
- null_missing flag When set to T, nulls (undefined values that are displayed as $null$ in the software) are considered missing values.
- whitespace_ missing flag When set to T, values containing only white space (spaces, tabs, and new lines) are considered missing values.
- description string Specifies the description for a field.
- value_labels [[Value LabelString] [ Value LabelString] ...] Used to specify labels for value pairs.
-"
-7EC3F9527921FB3F713DD6AE1D8035E6C81753C4_2,7EC3F9527921FB3F713DD6AE1D8035E6C81753C4," display_places integer Sets the number of decimal places for the field when displayed (applies only to fields with REAL storage). A value of –1 will use the stream default.
- export_places integer Sets the number of decimal places for the field when exported (applies only to fields with REAL storage). A value of –1 will use the stream default.
- decimal_separator DEFAULT PERIOD COMMA Sets the decimal separator for the field (applies only to fields with REAL storage).
- date_format ""DDMMYY"" ""MMDDYY"" ""YYMMDD"" ""YYYYMMDD"" ""YYYYDDD"" DAY MONTH ""DD-MM-YY"" ""DD-MM-YYYY"" ""MM-DD-YY"" ""MM-DD-YYYY"" ""DD-MON-YY"" ""DD-MON-YYYY"" ""YYYY-MM-DD"" ""DD.MM.YY"" ""DD.MM.YYYY"" ""MM.DD.YYYY"" ""DD.MON.YY"" ""DD.MON.YYYY"" ""DD/MM/YY"" ""DD/MM/YYYY"" ""MM/DD/YY"" ""MM/DD/YYYY"" ""DD/MON/YY"" ""DD/MON/YYYY"" MON YYYY q Q YYYY ww WK YYYY Sets the date format for the field (applies only to fields with DATE or TIMESTAMP storage).
- time_format ""HHMMSS"" ""HHMM"" ""MMSS"" ""HH:MM:SS"" ""HH:MM"" ""MM:SS"" ""(H)H:(M)M:(S)S"" ""(H)H:(M)M"" ""(M)M:(S)S"" ""HH.MM.SS"" ""HH.MM"" ""MM.SS"" ""(H)H.(M)M.(S)S"" ""(H)H.(M)M"" ""(M)M.(S)S"" Sets the time format for the field (applies only to fields with TIME or TIMESTAMP storage).
-"
-7EC3F9527921FB3F713DD6AE1D8035E6C81753C4_3,7EC3F9527921FB3F713DD6AE1D8035E6C81753C4," number_format DEFAULT STANDARD SCIENTIFIC CURRENCY Sets the number display format for the field.
- standard_places integer Sets the number of decimal places for the field when displayed in standard format. A value of –1 will use the stream default.
- scientific_places integer Sets the number of decimal places for the field when displayed in scientific format. A value of –1 will use the stream default.
- currency_places integer Sets the number of decimal places for the field when displayed in currency format. A value of –1 will use the stream default.
- grouping_symbol DEFAULT NONE LOCALE PERIOD COMMA SPACE Sets the grouping symbol for the field.
- column_width integer Sets the column width for the field. A value of –1 will set column width to Auto.
- justify AUTO CENTER LEFT RIGHT Sets the column justification for the field.
- measure_type Range / MeasureType.RANGE Discrete / MeasureType.DISCRETE Flag / MeasureType.FLAG Set / MeasureType.SET OrderedSet / MeasureType.ORDERED_SET Typeless / MeasureType.TYPELESS Collection / MeasureType.COLLECTION Geospatial / MeasureType.GEOSPATIAL This keyed property is similar to type in that it can be used to define the measurement associated with the field. What is different is that in Python scripting, the setter function can also be passed one of the MeasureType values while the getter will always return on the MeasureType values.
-"
-7EC3F9527921FB3F713DD6AE1D8035E6C81753C4_4,7EC3F9527921FB3F713DD6AE1D8035E6C81753C4," collection_ measure Range / MeasureType.RANGE Flag / MeasureType.FLAG Set / MeasureType.SET OrderedSet / MeasureType.ORDERED_SET Typeless / MeasureType.TYPELESS For collection fields (lists with a depth of 0), this keyed property defines the measurement type associated with the underlying values.
- geo_type Point MultiPoint LineString MultiLineString Polygon MultiPolygon For geospatial fields, this keyed property defines the type of geospatial object represented by this field. This should be consistent with the list depth of the values.
- has_coordinate_ system boolean For geospatial fields, this property defines whether this field has a coordinate system
- coordinate_system string For geospatial fields, this keyed property defines the coordinate system for this field.
- custom_storage_ type Unknown / MeasureType.UNKNOWN String / MeasureType.STRING Integer / MeasureType.INTEGER Real / MeasureType.REAL Time / MeasureType.TIME Date / MeasureType.DATE Timestamp / MeasureType.TIMESTAMP List / MeasureType.LIST This keyed property is similar to custom_storage in that it can be used to define the override storage for the field. What is different is that in Python scripting, the setter function can also be passed one of the StorageType values while the getter will always return on the StorageType values.
-"
-7EC3F9527921FB3F713DD6AE1D8035E6C81753C4_5,7EC3F9527921FB3F713DD6AE1D8035E6C81753C4," custom_list_ storage_type String / MeasureType.STRING Integer / MeasureType.INTEGER Real / MeasureType.REAL Time / MeasureType.TIME Date / MeasureType.DATE Timestamp / MeasureType.TIMESTAMP For list fields, this keyed property specifies the storage type of the underlying values.
- custom_list_depth integer For list fields, this keyed property specifies the depth of the field
- max_list_length integer Only available for data with a measurement level of either Geospatial or Collection. Set the maximum length of the list by specifying the number of elements the list can contain.
-"
-0B14841AF65A8855E9D497EF05270B54B245DAF8,0B14841AF65A8855E9D497EF05270B54B245DAF8," userinputnode properties
-
-The User Input node provides an easy way to create synthetic data—either from scratch or by altering existing data. This is useful, for example, when you want to create a test dataset for modeling.
-
-
-
-userinputnode properties
-
-Table 1. userinputnode properties
-
- userinputnode properties Data type Property description
-
- data
- names Structured slot that sets or returns a list of field names generated by the node.
-"
-679F2F7A79672580B5FB797D9C5280B1A83806EF,679F2F7A79672580B5FB797D9C5280B1A83806EF," Scripting overview
-
-This section provides high-level descriptions and examples of flow-level scripts and standalone scripts in the SPSS Modeler interface. More information on scripting language, syntax, and commands is provided in the sections that follow.
-
-Notes:
-
-
-
-"
-B3FFE77064106EE619C664233B7B7A9ABA75C30A,B3FFE77064106EE619C664233B7B7A9ABA75C30A," webnode properties
-
-The Web node illustrates the strength of the relationship between values of two or more symbolic (categorical) fields. The graph uses lines of various widths to indicate connection strength. You might use a Web node, for example, to explore the relationship between the purchase of a set of items at an e-commerce site.
-
-
-
-webnode properties
-
-Table 1. webnode properties
-
- webnode properties Data type Property description
-
- use_directed_web flag
- fields list
- to_field field
- from_fields list
- true_flags_only flag
- line_values AbsoluteOverallPctPctLargerPctSmaller
- strong_links_heavier flag
- num_links ShowMaximumShowLinksAboveShowAll
- max_num_links number
- links_above number
- discard_links_min flag
- links_min_records number
- discard_links_max flag
- links_max_records number
- weak_below number
- strong_above number
- link_size_continuous flag
- web_display CircularNetworkDirectedGrid
- graph_background color Standard graph colors are described at the beginning of this section.
- symbol_size number Specifies a symbol size.
- directed_line_values AbsoluteOverallPctPctToPctFrom Specify a threshold type.
-"
-D2EA86E13B810569E718E9DCA4C00DA28A2E1C9A_0,D2EA86E13B810569E718E9DCA4C00DA28A2E1C9A," xgboostasnode properties
-
-XGBoost is an advanced implementation of a gradient boosting algorithm. Boosting algorithms iteratively learn weak classifiers and then add them to a final strong classifier. XGBoost is very flexible and provides many parameters that can be overwhelming to most users, so the XGBoost-AS node in SPSS Modeler exposes the core features and commonly used parameters. The XGBoost-AS node is implemented in Spark.
-
-
-
-xgboostasnode properties
-
-Table 1. xgboostasnode properties
-
- xgboostasnode properties Data type Property description
-
- target_field field List of the field names for target.
- input_fields field List of the field names for inputs.
- nWorkers integer The number of workers used to train the XGBoost model. Default is 1.
- numThreadPerTask integer The number of threads used per worker. Default is 1.
- useExternalMemory Boolean Whether to use external memory as cache. Default is false.
- boosterType string The booster type to use. Available options are gbtree, gblinear, or dart. Default is gbtree.
- numBoostRound integer The number of rounds for boosting. Specify a value of 0 or higher. Default is 10.
- scalePosWeight Double Control the balance of positive and negative weights. Default is 1.
- randomseed integer The seed used by the random number generator. Default is 0.
- objectiveType string The learning objective. Possible values are reg:linear, reg:logistic, reg:gamma, reg:tweedie, rank:pairwise, binary:logistic, or multi. Note that for flag targets, only binary:logistic or multi can be used. If multi is used, the score result will show the multi:softmax and multi:softprob XGBoost objective types. Default is reg:linear.
-"
-D2EA86E13B810569E718E9DCA4C00DA28A2E1C9A_1,D2EA86E13B810569E718E9DCA4C00DA28A2E1C9A," evalMetric string Evaluation metrics for validation data. A default metric will be assigned according to the objective. Possible values are rmse, mae, logloss, error, merror, mlogloss, auc, ndcg, map, or gamma-deviance. Default is rmse.
- lambda Double L2 regularization term on weights. Increasing this value will make the model more conservative. Specify any number 0 or greater. Default is 1.
- alpha Double L1 regularization term on weights. Increasing this value will make the model more conservative. Specify any number 0 or greater. Default is 0.
- lambdaBias Double L2 regularization term on bias. If the gblinear booster type is used, this lambda bias linear booster parameter is available. Specify any number 0 or greater. Default is 0.
- treeMethod string If the gbtree or dart booster type is used, this tree method parameter for tree growth (and the other tree parameters that follow) is available. It specifies the XGBoost tree construction algorithm to use. Available options are auto, exact, or approx. Default is auto.
- maxDepth integer The maximum depth for trees. Specify a value of 2 or higher. Default is 6.
- minChildWeight Double The minimum sum of instance weight (hessian) needed in a child. Specify a value of 0 or higher. Default is 1.
- maxDeltaStep Double The maximum delta step to allow for each tree's weight estimation. Specify a value of 0 or higher. Default is 0.
- sampleSize Double The sub sample for is the ratio of the training instance. Specify a value between 0.1 and 1.0. Default is 1.0.
- eta Double The step size shrinkage used during the update step to prevent overfitting. Specify a value between 0 and 1. Default is 0.3.
- gamma Double The minimum loss reduction required to make a further partition on a leaf node of the tree. Specify any number 0 or greater. Default is 6.
- colsSampleRatio Double The sub sample ratio of columns when constructing each tree. Specify a value between 0.01 and 1. Default is1.
-"
-D2EA86E13B810569E718E9DCA4C00DA28A2E1C9A_2,D2EA86E13B810569E718E9DCA4C00DA28A2E1C9A," colsSampleLevel Double The sub sample ratio of columns for each split, in each level. Specify a value between 0.01 and 1. Default is 1.
- normalizeType string If the dart booster type is used, this dart parameter and the following three dart parameters are available. This parameter sets the normalization algorithm. Specify tree or forest. Default is tree.
- sampleType string The sampling algorithm type. Specify uniform or weighted. Default is uniform.
-"
-80CCB2CF7A994D218D5C47BBF7F8BBB0D479E399,80CCB2CF7A994D218D5C47BBF7F8BBB0D479E399," xgboostlinearnode properties
-
-XGBoost Linear© is an advanced implementation of a gradient boosting algorithm with a linear model as the base model. Boosting algorithms iteratively learn weak classifiers and then add them to a final strong classifier. The XGBoost Linear node in SPSS Modeler is implemented in Python.
-
-
-
-xgboostlinearnode properties
-
-Table 1. xgboostlinearnode properties
-
- xgboostlinearnode properties Data type Property description
-
- custom_fields boolean This option tells the node to use field information specified here instead of that given in any upstream Type node(s). After selecting this option, specify fields as required.
- target field
- inputs field
- alpha Double The alpha linear booster parameter. Specify any number 0 or greater. Default is 0.
- lambda Double The lambda linear booster parameter. Specify any number 0 or greater. Default is 1.
- lambdaBias Double The lambda bias linear booster parameter. Specify any number. Default is 0.
- num_boost_round integer The num boost round value for model building. Specify a value between 1 and 1000. Default is 10.
- objectiveType string The objective type for the learning task. Possible values are reg:linear, reg:logistic, reg:gamma, reg:tweedie, count:poisson, rank:pairwise, binary:logistic, or multi. Note that for flag targets, only binary:logistic or multi can be used. If multi is used, the score result will show the multi:softmax and multi:softprob XGBoost objective types.
-"
-8672A0AEF022CD97D9E834AB2FD3A607FBDAED4D,8672A0AEF022CD97D9E834AB2FD3A607FBDAED4D," applyxgboostlinearnode properties
-
-XGBoost Linear nodes can be used to generate an XGBoost Linear model nugget. The scripting name of this model nugget is applyxgboostlinearnode. No other properties exist for this model nugget. For more information on scripting the modeling node itself, see [xgboostlinearnode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/xgboostlinearnodeslots.htmlxboostlinearnodeslots).
-"
-D05D9570CD32ACCCF91588C5886A1C4F5DA56D01_0,D05D9570CD32ACCCF91588C5886A1C4F5DA56D01," xgboosttreenode properties
-
-XGBoost Tree© is an advanced implementation of a gradient boosting algorithm with a tree model as the base model. Boosting algorithms iteratively learn weak classifiers and then add them to a final strong classifier. XGBoost Tree is very flexible and provides many parameters that can be overwhelming to most users, so the XGBoost Tree node in SPSS Modeler exposes the core features and commonly used parameters. The node is implemented in Python.
-
-
-
-xgboosttreenode properties
-
-Table 1. xgboosttreenode properties
-
- xgboosttreenode properties Data type Property description
-
- custom_fields boolean This option tells the node to use field information specified here instead of that given in any upstream Type node(s). After selecting this option, specify the fields as required.
- target field The target fields.
- inputs field The input fields.
- tree_method string The tree method for model building. Possible values are auto, exact, or approx. Default is auto.
- num_boost_round integer The num boost round value for model building. Specify a value between 1 and 1000. Default is 10.
- max_depth integer The max depth for tree growth. Specify a value of 1 or higher. Default is 6.
- min_child_weight Double The min child weight for tree growth. Specify a value of 0 or higher. Default is 1.
- max_delta_step Double The max delta step for tree growth. Specify a value of 0 or higher. Default is 0.
-"
-D05D9570CD32ACCCF91588C5886A1C4F5DA56D01_1,D05D9570CD32ACCCF91588C5886A1C4F5DA56D01," objective_type string The objective type for the learning task. Possible values are reg:linear, reg:logistic, reg:gamma, reg:tweedie, count:poisson, rank:pairwise, binary:logistic, or multi. Note that for flag targets, only binary:logistic or multi can be used. If multi is used, the score result will show the multi:softmax and multi:softprob XGBoost objective types.
- early_stopping Boolean Whether to use the early stopping function. Default is False.
- early_stopping_rounds integer Validation error needs to decrease at least every early stopping round(s) to continue training. Default is 10.
- evaluation_data_ratio Double Ration of input data used for validation errors. Default is 0.3.
- random_seed integer The random number seed. Any number between 0 and 9999999. Default is 0.
- sample_size Double The sub sample for control overfitting. Specify a value between 0.1 and 1.0. Default is 0.1.
- eta Double The eta for control overfitting. Specify a value between 0 and 1. Default is 0.3.
- gamma Double The gamma for control overfitting. Specify any number 0 or greater. Default is 6.
- col_sample_ratio Double The colsample by tree for control overfitting. Specify a value between 0.01 and 1. Default is 1.
- col_sample_level Double The colsample by level for control overfitting. Specify a value between 0.01 and 1. Default is 1.
- lambda Double The lambda for control overfitting. Specify any number 0 or greater. Default is 1.
- alpha Double The alpha for control overfitting. Specify any number 0 or greater. Default is 0.
-"
-116575C57D15C410AC921AEBFAF607E2F86E6C05,116575C57D15C410AC921AEBFAF607E2F86E6C05," applyxgboosttreenode properties
-
-You can use the XGBoost Tree node to generate an XGBoost Tree model nugget. The scripting name of this model nugget is applyxgboosttreenode. For more information on scripting the modeling node itself, see [xgboosttreenode properties](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/xgboosttreenodeslots.htmlxboosttreenodeslots).
-
-
-
-applyxgboosttreenode properties
-
-Table 1. applyxgboosttreenode properties
-
- applyxgboosttreenode properties Data type Property description
-
-"
-5C2F280E5C4326883F7B3623EF1B64FE4DDE7C05,5C2F280E5C4326883F7B3623EF1B64FE4DDE7C05," Select node
-
-You can use Select nodes to select or discard a subset of records from the data stream based on a specific condition, such as BP (blood pressure) = ""HIGH"".
-
-Mode. Specifies whether records that meet the condition will be included or excluded from the data stream.
-
-
-
-* Include. Select to include records that meet the selection condition.
-* Discard. Select to exclude records that meet the selection condition.
-
-
-
-Condition. Displays the selection condition that will be used to test each record, which you specify using a CLEM expression. Either enter an expression in the window or use the Expression Builder by clicking the calculator (Expression Builder) button.
-
-If you choose to discard records based on a condition, such as the following:
-
-(var1='value1' and var2='value2')
-
-the Select node by default also discards records having null values for all selection fields. To avoid this, append the following condition to the original one:
-
-and not(@NULL(var1) and @NULL(var2))
-
-Select nodes are also used to choose a proportion of records. Typically, you would use a different node, the Sample node, for this operation. However, if the condition you want to specify is more complex than the parameters provided, you can create your own condition using the Select node. For example, you can create a condition such as:
-
-BP = ""HIGH"" and random(10) <= 4
-
-This will select approximately 40% of the records showing high blood pressure and pass those records downstream for further analysis.
-"
-CBC6BDA4EC8356F2CE95DD4548406ABEE1EC5B76,CBC6BDA4EC8356F2CE95DD4548406ABEE1EC5B76," Sequence node
-
-The Sequence node discovers patterns in sequential or time-oriented data, in the format bread -> cheese. The elements of a sequence are item sets that constitute a single transaction.
-
-For example, if a person goes to the store and purchases bread and milk and then a few days later returns to the store and purchases some cheese, that person's buying activity can be represented as two item sets. The first item set contains bread and milk, and the second one contains cheese. A sequence is a list of item sets that tend to occur in a predictable order. The Sequence node detects frequent sequences and creates a generated model node that can be used to make predictions.
-
-Requirements. To create a Sequence rule set, you need to specify an ID field, an optional time field, and one or more content fields. Note that these settings must be made on the Fields tab of the modeling node; they cannot be read from an upstream Type node. The ID field can have any role or measurement level. If you specify a time field, it can have any role but its storage must be numeric, date, time, or timestamp. If you do not specify a time field, the Sequence node will use an implied timestamp, in effect using row numbers as time values. Content fields can have any measurement level and role, but all content fields must be of the same type. If they are numeric, they must be integer ranges (not real ranges).
-
-Strengths. The Sequence node is based on the CARMA association rules algorithm, which uses an efficient two-pass method for finding sequences. In addition, the generated model node created by a Sequence node can be inserted into a data stream to create predictions. The generated model node can also generate supernodes for detecting and counting specific sequences and for making predictions based on specific sequences.
-"
-A447EC7366D2EB328BCE8E44A73B3A825A9B757B,A447EC7366D2EB328BCE8E44A73B3A825A9B757B," Set Globals node
-
-The Set Globals node scans the data and computes summary values that can be used in CLEM expressions.
-
-For example, you can use a Set Globals node to compute statistics for a field called age and then use the overall mean of age in CLEM expressions by inserting the function @GLOBAL_MEAN(age).
-"
-5CC48263B0C282CA1D65ACCB46D73D7EA3C8A665,5CC48263B0C282CA1D65ACCB46D73D7EA3C8A665," Set to Flag node
-
-Use the Set to Flag node to derive flag fields based on the categorical values defined for one or more nominal fields.
-
-For example, your dataset might contain a nominal field, BP (blood pressure), with the values High, Normal, and Low. For easier data manipulation, you might create a flag field for high blood pressure, which indicates whether or not the patient has high blood pressure.
-"
-82546B72EDBFB76F571CFD06A7009E01615FA054,82546B72EDBFB76F571CFD06A7009E01615FA054," Sim Eval node
-
-The Simulation Evaluation (Sim Eval) node is a terminal node that evaluates a specified field, provides a distribution of the field, and produces charts of distributions and correlations.
-
-This node is primarily used to evaluate continuous fields. It therefore compliments the evaluation chart, which is generated by an Evaluation node and is useful for evaluating discrete fields. Another difference is that the Sim Eval node evaluates a single prediction across several iterations, whereas the Evaluation node evaluates multiple predictions each with a single iteration. Iterations are generated when more than one value is specified for a distribution parameter in the Sim Gen node.
-
-The Sim Eval node is designed to be used with data that was obtained from the Sim Fit and Sim Gen nodes. The node can, however, be used with any other node. Any number of processing steps can be placed between the Sim Gen node and the Sim Eval node.
-
-Important: The Sim Eval node requires a minimum of 1000 records with valid values for the target field.
-"
-51389B2D808C1F7D81DF9EC75F053528AE1BC128,51389B2D808C1F7D81DF9EC75F053528AE1BC128," Sim Fit node
-
-The Simulation Fitting node fits a set of candidate statistical distributions to each field in the data. The fit of each distribution to a field is assessed using a goodness of fit criterion. When a Simulation Fitting node runs, a Simulation Generate node is built (or an existing node is updated). Each field is assigned its best fitting distribution. The Simulation Generate node can then be used to generate simulated data for each field.
-
-Although the Simulation Fitting node is a terminal node, it does not add output to the Outputs panel, or export data.
-
-Note: If the historical data is sparse (that is, there are many missing values), it may be difficult for the fitting component to find enough valid values to fit distributions to the data. In cases where the data is sparse, before fitting you should either remove the sparse fields if they are not required, or impute the missing values. Using the QUALITY options in the Data Audit node, you can view the number of complete records, identify which fields are sparse, and select an imputation method. If there are an insufficient number of records for distribution fitting, you can use a Balance node to increase the number of records.
-"
-EC10AC085BA8A12BA0D8AF2DC66ADFBE759B3183,EC10AC085BA8A12BA0D8AF2DC66ADFBE759B3183," Sim Gen node
-
-The Simulation Generate node provides an easy way to generate simulated data, either without historical data using user specified statistical distributions, or automatically using the distributions obtained from running a Simulation Fitting node on existing historical data. Generating simulated data is useful when you want to evaluate the outcome of a predictive model in the presence of uncertainty in the model inputs.
-"
-EFAE4449CEB6F88AA4545F33BD886EC3080171B4,EFAE4449CEB6F88AA4545F33BD886EC3080171B4," SLRM node
-
-Use the Self-Learning Response Model (SLRM) node to build a model that you can continually update, or reestimate, as a dataset grows without having to rebuild the model every time using the complete dataset. For example, this is useful when you have several products and you want to identify which one a customer is most likely to buy if you offer it to them. This model allows you to predict which offers are most appropriate for customers and the probability of the offers being accepted.
-
-Initially, you can build the model using a small dataset with randomly made offers and the responses to those offers. As the dataset grows, the model can be updated and therefore becomes more able to predict the most suitable offers for customers and the probability of their acceptance based upon other input fields such as age, gender, job, and income. You can change the offers available by adding or removing them from within the node, instead of having to change the target field of the dataset.
-
-Before running an SLRM node, you must specify both the target and target response fields in the node properties. The target field must have string storage, not numeric. The target response field must be a flag. The true value of the flag indicates offer acceptance and the false value indicates offer refusal.
-
-Example. A financial institution wants to achieve more profitable results by matching the offer that is most likely to be accepted to each customer. You can use a self-learning model to identify the characteristics of customers most likely to respond favorably based on previous promotions and to update the model in real time based on the latest customer responses.
-"
-F837935A2FEFED20E2CAC93656E376F9868CC515,F837935A2FEFED20E2CAC93656E376F9868CC515," SMOTE node
-
-The Synthetic Minority Over-sampling Technique (SMOTE) node provides an over-sampling algorithm to deal with imbalanced data sets. It provides an advanced method for balancing data. The SMOTE node in watsonx.ai is implemented in Python and requires the imbalanced-learn© Python library.
-
-For details about the imbalanced-learn library, see [imbalanced-learn documentation](https://imbalanced-learn.org/stable/index.html)^1^.
-
-The Modeling tab on the nodes palette contains the SMOTE node and other Python nodes.
-
-^1^Lemaître, Nogueira, Aridas. ""Imbalanced-learn: A Python Toolbox to Tackle the Curse of Imbalanced Datasets in Machine Learning."" Journal of Machine Learning Research, vol. 18, no. 17, 2017, pp. 1-5. (http://jmlr.org/papers/v18/16-365.html)
-"
-8F64225936D78B691574900D641C0CB7C3CE78EF,8F64225936D78B691574900D641C0CB7C3CE78EF," Sort node
-
-You can use Sort nodes to sort records into ascending or descending order based on the values of one or more fields. For example, Sort nodes are frequently used to view and select records with the most common data values. Typically, you would first aggregate the data using the Aggregate node and then use the Sort node to sort the aggregated data into descending order of record counts. Displaying these results in a table will allow you to explore the data and to make decisions, such as selecting the records of the 10 best customers.
-
-The following settings are available for the Sort node
-
-Sort by. All fields selected to use as sort keys are displayed in a table. A key field works best for sorting when it is numeric.
-
-
-
-* Add fields to this list using the Field Chooser button.
-* Select an order by clicking the Ascending or Descending arrow in the table's Order column.
-* Delete fields using the red delete button.
-* Sort directives using the arrow buttons.
-
-
-
-Default sort order. Select either Ascending or Descending to use as the default sort order when new fields are added.
-
-Note: The Sort node is not applied if there is a Distinct node down the model flow. For information about the Distinct node, see [Distinct node](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/distinct.htmldistinct).
-"
-C81BEEA067CCC7FED12806F3FF0F20519092F2E4,C81BEEA067CCC7FED12806F3FF0F20519092F2E4," Statistics node
-
-The Statistics node gives you basic summary information about numeric fields. You can get summary statistics for individual fields and correlations between fields.
-"
-2E2A2BE1CB20EF0C663E591532D71CFB5637E57F,2E2A2BE1CB20EF0C663E591532D71CFB5637E57F," Streaming TCM node
-
-You can use this node to build and score temporal causal models in one step.
-
-After adding a Streaming TCM node to your flow canvas, double-click it to open the node properties. To see information about the properties, hover over the tool-tip icons. For more information about temporal causal modeling, see [TCM node](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/tcm.html).
-"
-84D42E162FEFC977AE807AF123CEDFDF400E403A,84D42E162FEFC977AE807AF123CEDFDF400E403A," SuperNodes
-
-One of the reasons the SPSS Modeler visual interface is so easy to learn is that each node has a clearly defined function. However, for complex processing, a long sequence of nodes may be necessary. Eventually, this may clutter your flow canvas and make it difficult to follow flow diagrams.
-
-There are two ways to avoid the clutter of a long and complex flow:
-
-
-
-* You can split a processing sequence into several flows. The first flow, for example, creates a data file that the second uses as input. The second creates a file that the third uses as input, and so on. However, this requires you to manage multiple flows.
-* You can create a SuperNode as a more streamlined alternative when working with complex flow processes. SuperNodes group multiple nodes into a single node by encapsulating sections of flow. This provides benefits to the data miner:
-
-
-
-* Grouping nodes results in a neater and more manageable flow.
-* Nodes can be combined into a business-specific SuperNode.
-
-
-
-
-
-To group nodes into a SuperNode:
-
-
-
-1. Ctrl + click to select the nodes you want to group.
-2. Right-click and select Create supernode. The nodes are grouped into a single SuperNode with a special star icon.
-
-Figure 1. SuperNode icon
-
-
-"
-8ED36D5E1CCDFB0139D9D3DB3AEA2B90AE1B405E,8ED36D5E1CCDFB0139D9D3DB3AEA2B90AE1B405E," SVM node
-
-The SVM node uses a support vector machine to classify data. SVM is particularly suited for use with wide datasets, that is, those with a large number of predictor fields. You can use the default settings on the node to produce a basic model relatively quickly, or you can use the Expert settings to experiment with different types of SVM models.
-
-After the model is built, you can:
-
-
-
-* Browse the model nugget to display the relative importance of the input fields in building the model.
-* Append a Table node to the model nugget to view the model output.
-
-
-
-Example. A medical researcher has obtained a dataset containing characteristics of a number of human cell samples extracted from patients who were believed to be at risk of developing cancer. Analysis of the original data showed that many of the characteristics differed significantly between benign and malignant samples. The researcher wants to develop an SVM model that can use the values of similar cell characteristics in samples from other patients to give an early indication of whether their samples might be benign or malignant.
-"
-7434988303BF295C1586C5EE42100E8AF244859C_0,7434988303BF295C1586C5EE42100E8AF244859C," Reusing custom category sets
-
-You can customize a category set in Text Analytics Workbench and then download it to use in other SPSS Modeler flows.
-
-"
-7434988303BF295C1586C5EE42100E8AF244859C_1,7434988303BF295C1586C5EE42100E8AF244859C," Procedure
-
-
-
-1. Optional: Customize the category set.
-
-
-
-1. Select a category to customize.
-2. To add descriptors, click the Descriptors tab and then drag-and-drop from the Descriptors tab into categories to add them.
-
-
-
-2. Download the customized category set.
-
-
-
-1. From the Text Analytics Workbench, go to the Categories tab.
-2. Click the Options icon and select Download category set.
-3. Give the category set a name and click Download.
-
-
-
-3. Add the category set to another Text Mining node.
-
-
-
-1. In a different flow session, go to the Categories tab in the Text Analytics Workbench.
-2. Click the Options icon and select Add category set.
-3. Browse to or drag-and-drop your category set.
-"
-E6A2EF28A33AA6A8C8B2321133A8816257CD1612_0,E6A2EF28A33AA6A8C8B2321133A8816257CD1612," Reusing a project asset in Resource editor
-
-From the Text Analytics Workbench, you can save a template or library as a project asset. You can then use the template or library in other Text Mining nodes by loading it in the Resource editor.
-
-"
-E6A2EF28A33AA6A8C8B2321133A8816257CD1612_1,E6A2EF28A33AA6A8C8B2321133A8816257CD1612," Procedure
-
-
-
-1. Save a library or template in Text Analytics Workbench.
-
-
-
-1. On the Resource Editor tab, select the template or library to save.
-2. Click the Options icon and select Save as project asset.
-3. Enter details about the asset, and click Submit.
-
-
-
-2. Load a library or template in a different Text Analytics Workbench.
-
-
-
-1. On the Resource Editor tab, open the toolbar menu for your current template or library.
-2. Click the Options icon and select Load library or Change template.
-"
-0F58073F0D5B237C3241126E98851A9E0C912792_0,0F58073F0D5B237C3241126E98851A9E0C912792," Uploading a custom asset in a Text Mining node
-
-You can add a custom text analysis package (TAP) or template directly in the Text Mining node. When your SPSS Modeler flow runs, it will use your custom asset.
-
-"
-0F58073F0D5B237C3241126E98851A9E0C912792_1,0F58073F0D5B237C3241126E98851A9E0C912792," Procedure
-
-
-
-1. If you want to download a TAP, save it locally.
-
-
-
-1. Click Text analysis package while in the Text Analytics Workbench.
-2. Enter details about the asset, and then click Submit. The text analysis package is saved locally as a .tap file.
-
-
-
-2. If you want to download a template, see [Linguistic resources](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/tmwb-linguistic-resource.htmltmwb-templates-intro__DownloadAssetsSteps).
-3. Add the TAP or template file to another Text Mining node.
-
-
-
-1. In the Text Mining node, click Select resources.
-2. Click the Text analysis package or Resource template tab depending on the asset you want.
-3. Click Import , and then browse to or drag-and-drop your TAP or template.
-"
-8654D0CBB99EE82483F99972EF5247401EB8E8D9,8654D0CBB99EE82483F99972EF5247401EB8E8D9," Table node
-
-The Table node creates a table that lists the values in your data. All fields and all values in the stream are included, making this an easy way to inspect your data values or export them in an easily readable form. Optionally, you can highlight records that meet a certain condition.
-
-Note: Unless you are working with small datasets, we recommend that you select a subset of the data to pass into the Table node. The Table node cannot display properly when the number of records surpasses a size that can be contained in the display structure (for example, 100 million rows).
-"
-6B6D315FFD086296183DE20086EE752A6A2B88C8,6B6D315FFD086296183DE20086EE752A6A2B88C8," TCM node
-
-Use this node to create a temporal causal model (TCM).
-
-Temporal causal modeling attempts to discover key causal relationships in time series data. In temporal causal modeling, you specify a set of target series and a set of candidate inputs to those targets. The procedure then builds an autoregressive time series model for each target and includes only those inputs that have a causal relationship with the target. This approach differs from traditional time series modeling where you must explicitly specify the predictors for a target series. Since temporal causal modeling typically involves building models for multiple related time series, the result is referred to as a model system.
-
-In the context of temporal causal modeling, the term causal refers to Granger causality. A time series X is said to ""Granger cause"" another time series Y if regressing for Y in terms of past values of both X and Y results in a better model for Y than regressing only on past values of Y.
-
-Note: To build a temporal causal model, you need enough data points. The product uses the constraint:
-
-m>(L + KL + 1)
-
-where m is the number of data points, L is the number of lags, and K is the number of predictors. Make sure your data set is big enough so that the number of data points (m) satisfies the condition.
-"
-6153F9F311CD2BB2DF31C6A4A1CB76D64E36BFE6_0,6153F9F311CD2BB2DF31C6A4A1CB76D64E36BFE6," Mining for text links
-
-The Text Link Analysis (TLA) node adds pattern-matching technology to text mining's concept extraction in order to identify relationships between the concepts in the text data based on known patterns. These relationships can describe how a customer feels about a product, which companies are doing business together, or even the relationships between genes or pharmaceutical agents.
-
-
-
-For example, extracting your competitor’s product name may not be interesting enough to you. Using this node, you could also learn how people feel about this product, if such opinions exist in the data. The relationships and associations are identified and extracted by matching known patterns to your text data.
-
-You can use the TLA pattern rules inside certain resource templates shipped with Text Analytics or create/edit your own. Pattern rules are made up of macros, word lists, and word gaps to form a Boolean query, or rule, that is compared to your input text. Whenever a TLA pattern rule matches text, this text can be extracted as a TLA result and restructured as output data.
-
-The Text Link Analysis node offers a more direct way to identify and extract TLA pattern results from your text and then add the results to the dataset in the flow. But the Text Link Analysis node is not the only way in which you can perform text link analysis. You can also use a Text Analytics Workbench session in the Text Mining modeling node.
-
-In the Text Analytics Workbench, you can explore the TLA pattern results and use them as category descriptors and/or to learn more about the results using drill-down and graphs. In fact, using the Text Mining node to extract TLA results is a great way to explore and fine-tune templates to your data for later use directly in the TLA node.
-
-The output can be represented in up to 6 slots, or parts.
-
-You can find this node under the Text Analytics section of the node palette.
-
-"
-6153F9F311CD2BB2DF31C6A4A1CB76D64E36BFE6_1,6153F9F311CD2BB2DF31C6A4A1CB76D64E36BFE6,"Requirements. The Text Link Analysis node accepts text data read into a field using an Import node.
-
-Strengths. The Text Link Analysis node goes beyond basic concept extraction to provide information about the relationships between concepts, as well as related opinions or qualifiers that may be revealed in the data.
-"
-0FAF8791603EB1A93ADC49EA8F9E5859D1E3360F,0FAF8791603EB1A93ADC49EA8F9E5859D1E3360F," Time Intervals node
-
-Use the Time Intervals node to specify intervals and derive a new time field for estimating or forecasting. A full range of time intervals is supported, from seconds to years.
-
-Use the node to derive a new time field. The new field has the same storage type as the input time field you chose. The node generates the following items:
-
-
-
-* The field specified in the node properties as the Time Field, along with the chosen prefix/suffix. By default the prefix is $TI_.
-* The fields specified in the node properties as the Dimension fields.
-* The fields specified in the node properties as the Fields to aggregate.
-
-
-
-You can also generate a number of extra fields, depending on the selected interval or period (such as the minute or second within which a measurement falls).
-"
-99675D0DDD35D743F2F0BECF008D9CBED68C0534,99675D0DDD35D743F2F0BECF008D9CBED68C0534," Time Plot node
-
-Time Plot nodes allow you to view one or more time series plotted over time. The series you plot must contain numeric values and are assumed to occur over a range of time in which the periods are uniform.
-
-Figure 1. Plotting sales of men's and women's clothing and jewelry over time
-
-
-"
-AC040F5709AB00AB3ED8275862FA2328D20842B2_0,AC040F5709AB00AB3ED8275862FA2328D20842B2," Expert options
-
-With the Text Link Analysis (TLA) node, the extraction of text link analysis pattern results is automatically enabled. In the node's properties, the expert options include certain additional parameters that impact how text is extracted and handled. The expert parameters control the basic behavior, as well as a few advanced behaviors, of the extraction process. There are also a number of linguistic resources and options that also impact the extraction results, which are controlled by the resource template you select.
-
-Limit extraction to concepts with a global frequency of at least [n]. This option specifies the minimum number of times a word or phrase must occur in the text in order for it to be extracted. In this way, a value of 5 limits the extraction to those words or phrases that occur at least five times in the entire set of records or documents.
-
-In some cases, changing this limit can make a big difference in the resulting extraction results, and consequently, your categories. Let's say that you're working with some restaurant data and you don't increase the limit beyond 1 for this option. In this case, you might find pizza (1), thin pizza (2), spinach pizza (2), and favorite pizza (2) in your extraction results. However, if you were to limit the extraction to a global frequency of 5 or more and re-extract, you would no longer get three of these concepts. Instead you would get pizza (7), since pizza is the simplest form and this word already existed as a possible candidate. And depending on the rest of your text, you might actually have a frequency of more than seven, depending on whether there are still other phrases with pizza in the text. Additionally, if spinach pizza was already a category descriptor, you might need to add pizza as a descriptor instead to capture all of the records. For this reason, change this limit with care whenever categories have already been created.
-
-Note that this is an extraction-only feature; if your template contains terms (they usually do), and a term for the template is found in the text, then the term will be indexed regardless of its frequency.
-
-"
-AC040F5709AB00AB3ED8275862FA2328D20842B2_1,AC040F5709AB00AB3ED8275862FA2328D20842B2,"For example, suppose you use a Basic Resources template that includes ""los angeles"" under the type in the Core library; if your document contains Los Angeles only once, then Los Angeles will be part of the list of concepts. To prevent this, you'll need to set a filter to display concepts occurring at least the same number of times as the value entered in the Limit extraction to concepts with a global frequency of at least [n] field.
-
-Accommodate punctuation errors. This option temporarily normalizes text containing punctuation errors (for example, improper usage) during extraction to improve the extractability of concepts. This option is extremely useful when text is short and of poor quality (as, for example, in open-ended survey responses, e-mail, and CRM data), or when the text contains many abbreviations.
-
-Accommodate spelling for a minimum word character length of [n]. This option applies a fuzzy grouping technique that helps group commonly misspelled words or closely spelled words under one concept. The fuzzy grouping algorithm temporarily strips all vowels (except the first one) and strips double/triple consonants from extracted words and then compares them to see if they're the same so that modeling and modelling would be grouped together. However, if each term is assigned to a different type, excluding the type, the fuzzy grouping technique won't be applied.
-
-"
-AC040F5709AB00AB3ED8275862FA2328D20842B2_2,AC040F5709AB00AB3ED8275862FA2328D20842B2,"You can also define the minimum number of root characters required before fuzzy grouping is used. The number of root characters in a term is calculated by totaling all of the characters and subtracting any characters that form inflection suffixes and, in the case of compound-word terms, determiners and prepositions. For example, the term exercises is counted as 8 root characters in the form ""exercise,"" since the letter s at the end of the word is an inflection (plural form). Similarly, apple sauce counts as 10 root characters (""apple sauce"") and manufacturing of cars counts as 16 root characters (“manufacturing car”). This method of counting is only used to check whether the fuzzy grouping should be applied but doesn't influence how the words are matched.
-
-Note: If you find that certain words are later grouped incorrectly, you can exclude word pairs from this technique by explicitly declaring them in the Fuzzy Grouping: Exceptions section under the Advanced Resources properties.
-
-Extract uniterms. This option extracts single words (uniterms) as long as the word isn't already part of a compound word and if it's either a noun or an unrecognized part of speech.
-
-Extract nonlinguistic entities. This option extracts nonlinguistic entities, such as phone numbers, social security numbers, times, dates, currencies, digits, percentages, e-mail addresses, and HTTP addresses. You can include or exclude certain types of nonlinguistic entities in the Nonlinguistic Entities: Configuration section under the Advanced Resources properties. By disabling any unnecessary entities, the extraction engine won't waste processing time.
-
-Uppercase algorithm. This option extracts simple and compound terms that aren't in the built-in dictionaries as long as the first letter of the term is in uppercase. This option offers a good way to extract most proper nouns.
-
-"
-AC040F5709AB00AB3ED8275862FA2328D20842B2_3,AC040F5709AB00AB3ED8275862FA2328D20842B2,"Group partial and full person names together when possible. This option groups names that appear differently in the text together. This feature is helpful since names are often referred to in their full form at the beginning of the text and then only by a shorter version. This option attempts to match any uniterm with the type to the last word of any of the compound terms that is typed as . For example, if doe is found and initially typed as , the extraction engine checks to see if any compound terms in the type include doe as the last word, such as john doe. This option doesn't apply to first names since most are never extracted as uniterms.
-
-Maximum nonfunction word permutation. This option specifies the maximum number of nonfunction words that can be present when applying the permutation technique. This permutation technique groups similar phrases that differ from each other only by the nonfunction words (for example, of and the) contained, regardless of inflection. For example, let's say that you set this value to—at most—two words, and both company officials and officials of the company were extracted. In this case, both extracted terms would be grouped together in the final concept list since both terms are deemed to be the same when of the is ignored.
-
-Use derivation when grouping multiterms. When processing Big Data, select this option to group multiterms by using derivation rules.
-"
-EFD36F1BF92225311B684D6AA0D05A597F00D707,EFD36F1BF92225311B684D6AA0D05A597F00D707," TLA node output
-
-After running a Text Link Analysis node, the data is restructured. It's important to understand the way text mining restructures your data.
-
-If you desire a different structure for data mining, you can use nodes on the Field Operations palette to accomplish this. For example, if you're working with data in which each row represents a text record, then one row is created for each pattern uncovered in the source text data. For each row in the output, there are 15 fields:
-
-
-
-* Six fields ( Concept#, such as Concept1, Concept2, ..., and Concept6) represent any concepts found in the pattern match
-* Six fields ( Type#, such as Type1, Type2, ..., and Type6) represent the type for each concept
-* Rule Name represents the name of the text link rule used to match the text and produce the output
-"
-B2250C2A2E20F6F123C6D1091BFD635DC74EE4FE,B2250C2A2E20F6F123C6D1091BFD635DC74EE4FE," Linguistic resources
-
-SPSS Modeler uses an extraction process that relies on linguistic resources. These resources serve as the basis for how to process the text data and extract information to get the concepts, types, and sometimes patterns.
-
-The linguistic resources can be divided into different types:
-
-Category sets
-: Categories are a group of closely related ideas and patterns that the text data is assigned to through a scoring process.
-
-Libraries
-: Libraries are used as building blocks for both TAPs and templates. Each library is made up of several dictionaries, which are used to define and manage terms, synonyms, and exclude lists. While libraries are also delivered individually, they are prepackaged together in templates and TAPs.
-
-Templates
-: Templates are made up of a set of libraries and some advanced linguistic and nonlinguistic resources. These resources form a specialized set that is adapted to a particular domain or context, such as product opinions.
-
-Text analysis packages (TAP)
-: A text analysis package is a predefined template that is bundled with one or more sets of predefined category sets. TAPs bundle together these resources so that the categories and the resources that were used to generate them are both stored together and reusable.
-
-Note: During extraction, some compiled internal resources are also used. These compiled resources contain many definitions that complement the types in the Core library. These compiled resources cannot be edited.
-"
-05275F4EC521878B13AD7DCE825E167B2FC7EF93_0,05275F4EC521878B13AD7DCE825E167B2FC7EF93," Advanced frequency settings
-
-You can build categories based on a straightforward and mechanical frequency technique. With this technique, you can build one category for each item (type, concept, or pattern) that was found to be higher than a given record or document count. Additionally, you can build a single category for all of the less frequently occurring items. By count, we refer to the number of records or documents containing the extracted concept (and any of its synonyms), type, or pattern in question as opposed to the total number of occurrences in the entire text.
-
-Grouping frequently occurring items can yield interesting results, since it may indicate a common or significant response. The technique is very useful on the unused extraction results after other techniques have been applied. Another application is to run this technique immediately after extraction when no other categories exist, edit the results to delete uninteresting categories, and then extend those categories so that they match even more records or documents.
-
-Instead of using this technique, you could sort the concepts or concept patterns by descending number of records or documents in the extraction results pane and then drag-and-drop the ones with the most records into the categories pane to create the corresponding categories.
-
-The following advanced settings are available for the Use frequencies to build categories option in the category settings.
-
-Generate category descriptors at. Select the kind of input for descriptors.
-
-
-
-* Concepts level. Selecting this option means that concepts or concept patterns frequencies will be used. Concepts will be used if types were selected as input for category building and concept patterns are used, if type patterns were selected. In general, applying this technique to the concept level will produce more specific results, since concepts and concept patterns represent a lower level of measurement.
-* Types level. Selecting this option means that type or type patterns frequencies will be used. Types will be used if types were selected as input for category building and type patterns are used, if type patterns were selected. By applying this technique to the type level, you can get a quick view of the kind of information given.
-
-
-
-"
-05275F4EC521878B13AD7DCE825E167B2FC7EF93_1,05275F4EC521878B13AD7DCE825E167B2FC7EF93,"Minimum record/doc. count for items to have their own category. With this option, you can build categories from frequently occurring items. This option restricts the output to only those categories containing a descriptor that occurred in at least X number of records or documents, where X is the value to enter for this option.
-
-Group all remaining items into a category called. Use this option if you want to group all concepts or types occurring infrequently into a single catch-all category with the name of your choice. By default, this category is named Other.
-
-Category input. Select the group to which to apply the techniques:
-
-
-
-* Unused extraction results. This option enables categories to be built from extraction results that aren't used in any existing categories. This minimizes the tendency for records to match multiple categories and limits the number of categories produced.
-* All extraction results. This option enables categories to be built using any of the extraction results. This is most useful when no or few categories already exist.
-
-
-
-Resolve duplicate category names by. Select how to handle any new categories or subcategories whose names would be the same as existing categories. You can either merge the new ones (and their descriptors) with the existing categories with the same name, or you can choose to skip the creation of any categories if a duplicate name is found in the existing categories.
-"
-A1365CD1E2ACBEE6E9BF025DD493FEB17A0D428F,A1365CD1E2ACBEE6E9BF025DD493FEB17A0D428F," Advanced linguistic settings
-
-When you build categories, you can select from a number of advanced linguistic category building techniques such as concept inclusion and semantic networks (English text only). These techniques can be used individually or in combination with each other to create categories.
-
-Keep in mind that because every dataset is unique, the number of methods and the order in which you apply them may change over time. Since your text mining goals may be different from one set of data to the next, you may need to experiment with the different techniques to see which one produces the best results for the given text data. None of the automatic techniques will perfectly categorize your data; therefore we recommend finding and applying one or more automatic techniques that work well with your data.
-
-The following advanced settings are available for the Use linguistic techniques to build categories option in the category settings.
-"
-D171FCF10D8A1699FD8AC67E44053BBF6405631C,D171FCF10D8A1699FD8AC67E44053BBF6405631C," The Concepts tab
-
-In the Text Analytics Workbench, you can use the Concepts tab to create and explore concepts as well as explore and tweak the extraction results.
-
-Concepts are the most basic level of extraction results available to use as building blocks, called descriptors, for your categories. Categories are a group of closely related ideas and patterns to which documents and records are assigned through a scoring process.
-
-Text mining is an iterative process in which extraction results are reviewed according to the context of the text data, fine-tuned to produce new results, and then reevaluated. Extraction results can be refined by modifying the linguistic resources. To simplify the process of fine-tuning your linguistic resources, you can perform common dictionary tasks directly from the Concepts tab. You can fine-tune other linguistic resources directly from the Resource editor tab.
-
-Figure 1. Concepts tab
-
-
-"
-6068B2555E5014D386397335D0ED56B430082FF7,6068B2555E5014D386397335D0ED56B430082FF7," The Resource editor tab
-
-Text Analytics rapidly and accurately captures key concepts from text data by using an extraction process. This process relies on linguistic resources to dictate how large amounts of unstructured, textual data should be analyzed and interpreted.
-
-You can use the Resource editor tab to view the linguistic resources used in the extraction process. These resources are stored in the form of templates and libraries, which are used to extract concepts, group them under types, discover patterns in the text data, and other processes. Text Analytics offers several preconfigured resource templates, and in some languages, you can also use the resources in text analysis packages.
-
-Figure 1. Resource editor tab
-
-
-"
-342AD3ABFEECA87987ED595047CC869E15F148BF,342AD3ABFEECA87987ED595047CC869E15F148BF," Generating a model nugget
-
-When you're working in the Text Analytics Workbench, you may want to use the work you've done to generate a category model nugget.
-
-A model generated from a Text Analytics Workbench session is a category model nugget. You must first have at least one category before you can generate a category model nugget.
-"
-7FE671DB2B6972A1CFB04E0902F8D82DC979D42A,7FE671DB2B6972A1CFB04E0902F8D82DC979D42A," Text Analytics Workbench
-
-From a Text Mining modeling node, you can choose to launch an interactive Text Analytics Workbench session when your flow runs. In this workbench, you can extract key concepts from your text data, build categories, explore patterns in text link analysis, and generate category models.
-
-You can use the Text Analytics Workbench to explore the results and tune the configuration for the node.
-
-Concepts
-: Concepts are the key words and phrases identified and extracted from your text data, also referred to as extraction results. These concepts are grouped into types. You can use these concepts to explore your data and create your categories. You can manage the concepts on the Concepts tab.
-
-Text links
-: If you have text link analysis (TLA) pattern rules in your linguistic resources or are using a resource template that already has some TLA rules, you can extract patterns from your text data. These patterns can help you uncover interesting relationships between concepts in your data. You can also use these patterns as descriptors in your categories. You can manage these on the Text links tab.
-
-Categories
-: Using descriptors (such as extraction results, patterns, and rules) as a definition, you can manually or automatically create a set of categories. Documents and records are assigned to these categories based on whether or not they contain a part of the category definition. You can manage categories on the Categories tab.
-
-Resources
-: The extraction process relies on a set of parameters and definitions from linguistic resources to govern how text is extracted and handled. These are managed in the form of templates and libraries on the Resource editor tab.
-
-Figure 1. Text Analytics Workbench
-
-
-"
-925108D09CFC6F2B5193D0D7414BFC83748111A9,925108D09CFC6F2B5193D0D7414BFC83748111A9," Setting options
-
-You can access settings in various panes of the Text Analytics Workbench, such as extraction settings for concepts.
-
-On the Concepts, Text links, and Categories tabs, categories are built from descriptors derived from either types or type patterns. In the table, you can select the individual types or patterns to include in the category building process. A description of all settings on each tab follows.
-"
-31A670D6B3F0D7AB4EAD7DAE3795589F161249DE,31A670D6B3F0D7AB4EAD7DAE3795589F161249DE," The Categories tab
-
-In the Text Analytics Workbench, you can use the Categories tab to create and explore categories as well as tweak the extraction results.
-
-Extraction results can be refined by modifying the linguistic resources, which you can do directly from the Categories tab.
-
-Figure 1. Categories tab
-
-
-"
-799CE322C90ECAD9CC4BACAD45F9749EC21E912E,799CE322C90ECAD9CC4BACAD45F9749EC21E912E," The Text links tab
-
-On the Text links tab, you can build and explore text link analysis patterns found in your text data. Text link analysis (TLA) is a pattern-matching technology that enables you to define TLA rules and compare them to actual extracted concepts and relationships found in your text.
-
-Patterns are most useful when you are attempting to discover relationships between concepts or opinions about a particular subject. Some examples include wanting to extract opinions on products from survey data, genomic relationships from within medical research papers, or relationships between people or places from intelligence data.
-
-After you've extracted some TLA patterns, you can explore them and even add them to categories. To extract TLA results, there must be some TLA rules defined in the resource template or libraries you're using.
-
-With no type patterns selected, you can click the Settings icon to change the extraction settings. For details, see [Setting options](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/tmwb_intro_options.html). You can also click the Filter icon to filter the type patterns that are displayed
-
-Figure 1. Text links view
-
-
-"
-BE6A4C0BB6BCC7166FF88D60FD433C220962730D,BE6A4C0BB6BCC7166FF88D60FD433C220962730D," Transform node
-
-Normalizing input fields is an important step before using traditional scoring techniques such as regression, logistic regression, and discriminant analysis. These techniques carry assumptions about normal distributions of data that may not be true for many raw data files. One approach to dealing with real-world data is to apply transformations that move a raw data element toward a more normal distribution. In addition, normalized fields can easily be compared with each other—for example, income and age are on totally different scales in a raw data file but, when normalized, the relative impact of each can be easily interpreted.
-
-The Transform node provides an output viewer that enables you to perform a rapid visual assessment of the best transformation to use. You can see at a glance whether variables are normally distributed and, if necessary, choose the transformation you want and apply it. You can pick multiple fields and perform one transformation per field.
-
-After selecting the preferred transformations for the fields, you can generate Derive or Filler nodes that perform the transformations and attach these nodes to the flow. The Derive node creates new fields, while the Filler node transforms the existing ones.
-"
-22B8136F68AC74838B9C2B9EAF3996CCFAA14921,22B8136F68AC74838B9C2B9EAF3996CCFAA14921," Transpose node
-
-By default, columns are fields and rows are records or observations. If necessary, you can use a Transpose node to swap the data in rows and columns so that fields become records and records become fields.
-
-For example, if you have time series data where each series is a row rather than a column, you can transpose the data prior to analysis.
-"
-015755C65C274F262396747D3F32A59AE74C08D7,015755C65C274F262396747D3F32A59AE74C08D7," Tree-AS node
-
-The Tree-AS node can be used with data in a distributed environment. With this node, you can choose to build decision trees using either a CHAID or Exhaustive CHAID model.
-
-CHAID, or Chi-squared Automatic Interaction Detection, is a classification method for building decision trees by using chi-square statistics to identify optimal splits.
-
-CHAID first examines the crosstabulations between each of the input fields and the outcome, and tests for significance using a chi-square independence test. If more than one of these relations is statistically significant, CHAID will select the input field that is the most significant (smallest p value). If an input has more than two categories, these are compared, and categories that show no differences in the outcome are collapsed together. This is done by successively joining the pair of categories showing the least significant difference. This category-merging process stops when all remaining categories differ at the specified testing level. For nominal input fields, any categories can be merged; for an ordinal set, only contiguous categories can be merged.
-
-Exhaustive CHAID is a modification of CHAID that does a more thorough job of examining all possible splits for each predictor but takes longer to compute.
-
-Requirements. Target and input fields can be continuous or categorical; nodes can be split into two or more subgroups at each level. Any ordinal fields used in the model must have numeric storage (not string). If necessary, use the Reclassify node to convert them.
-
-Strengths. CHAID can generate nonbinary trees, meaning that some splits have more than two branches. It therefore tends to create a wider tree than the binary growing methods. CHAID works for all types of inputs, and it accepts both case weights and frequency variables.
-"
-18C44D2A29B576F708BC515CEDE91227B6B4FC4E_0,18C44D2A29B576F708BC515CEDE91227B6B4FC4E," Time Series node
-
-The Time Series node can be used with data in either a local or distributed environment. With this node, you can choose to estimate and build exponential smoothing, univariate Autoregressive Integrated Moving Average (ARIMA), or multivariate ARIMA (or transfer function) models for time series, and produce forecasts based on the time series data.
-
-Exponential smoothing is a method of forecasting that uses weighted values of previous series observations to predict future values. As such, exponential smoothing is not based on a theoretical understanding of the data. It forecasts one point at a time, adjusting its forecasts as new data come in. The technique is useful for forecasting series that exhibit trend, seasonality, or both. You can choose from various exponential smoothing models that differ in their treatment of trend and seasonality.
-
-ARIMA models provide more sophisticated methods for modeling trend and seasonal components than do exponential smoothing models, and, in particular, they allow the added benefit of including independent (predictor) variables in the model. This involves explicitly specifying autoregressive and moving average orders as well as the degree of differencing. You can include predictor variables and define transfer functions for any or all of them, as well as specify automatic detection of outliers or an explicit set of outliers.
-
-Note: In practical terms, ARIMA models are most useful if you want to include predictors that might help to explain the behavior of the series that is being forecast, such as the number of catalogs that are mailed or the number of hits to a company web page. Exponential smoothing models describe the behavior of the time series without attempting to understand why it behaves as it does. For example, a series that historically peaks every 12 months will probably continue to do so even if you don't know why.
-
-"
-18C44D2A29B576F708BC515CEDE91227B6B4FC4E_1,18C44D2A29B576F708BC515CEDE91227B6B4FC4E,"An Expert Modeler option is also available, which attempts to automatically identify and estimate the best-fitting ARIMA or exponential smoothing model for one or more target variables, thus eliminating the need to identify an appropriate model through trial and error. If in doubt, use the Expert Modeler option.
-
-If predictor variables are specified, the Expert Modeler selects those variables that have a statistically significant relationship with the dependent series for inclusion in ARIMA models. Model variables are transformed where appropriate using differencing and/or a square root or natural log transformation. By default, the Expert Modeler considers all exponential smoothing models and all ARIMA models and picks the best model among them for each target field. You can, however, limit the Expert Modeler only to pick the best of the exponential smoothing models or only to pick the best of the ARIMA models. You can also specify automatic detection of outliers.
-"
-A5D736B45EC8EC0B906E183DE5DAA8BFA4C1F2D6,A5D736B45EC8EC0B906E183DE5DAA8BFA4C1F2D6," Streaming Time Series node
-
-You use the Streaming Time Series node to build and score time series models in one step. A separate time series model is built for each target field, however model nuggets are not added to the generated models palette and the model information cannot be browsed.
-
-Methods for modeling time series data require a uniform interval between each measurement, with any missing values indicated by empty rows. If your data does not already meet this requirement, you need to transform values as needed.
-
-Other points of interest regarding Time Series nodes:
-
-
-
-* Fields must be numeric.
-* Date fields cannot be used as inputs.
-* Partitions are ignored.
-
-
-
-The Streaming Time Series node estimates exponential smoothing, univariate Autoregressive Integrated Moving Average (ARIMA), and multivariate ARIMA (or transfer function) models for time series and produces forecasts based on the time series data. Also available is an Expert Modeler, which attempts to automatically identify and estimate the best-fitting ARIMA or exponential smoothing model for one or more target fields.
-"
-94FE9993A8201BDBD9D383CC4CC4CA4F2DDDB47D,94FE9993A8201BDBD9D383CC4CC4CA4F2DDDB47D," TwoStep cluster node
-
-The TwoStep Cluster node provides a form of cluster analysis. It can be used to cluster the dataset into distinct groups when you don't know what those groups are at the beginning. As with Kohonen nodes and K-Means nodes, TwoStep Cluster models do not use a target field. Instead of trying to predict an outcome, TwoStep Cluster tries to uncover patterns in the set of input fields. Records are grouped so that records within a group or cluster tend to be similar to each other, but records in different groups are dissimilar.
-
-TwoStep Cluster is a two-step clustering method. The first step makes a single pass through the data, during which it compresses the raw input data into a manageable set of subclusters. The second step uses a hierarchical clustering method to progressively merge the subclusters into larger and larger clusters, without requiring another pass through the data. Hierarchical clustering has the advantage of not requiring the number of clusters to be selected ahead of time. Many hierarchical clustering methods start with individual records as starting clusters and merge them recursively to produce ever larger clusters. Though such approaches often break down with large amounts of data, TwoStep's initial preclustering makes hierarchical clustering fast even for large datasets.
-
-Note: The resulting model depends to a certain extent on the order of the training data. Reordering the data and rebuilding the model may lead to a different final cluster model.
-
-Requirements. To train a TwoStep Cluster model, you need one or more fields with the role set to Input. Fields with the role set to Target, Both, or None are ignored. The TwoStep Cluster algorithm does not handle missing values. Records with blanks for any of the input fields will be ignored when building the model.
-
-Strengths. TwoStep Cluster can handle mixed field types and is able to handle large datasets efficiently. It also has the ability to test several cluster solutions and choose the best, so you don't need to know how many clusters to ask for at the outset. TwoStep Cluster can be set to automatically exclude outliers, or extremely unusual cases that can contaminate your results.
-"
-B7E56BEBF29F9AA59A9ABC9E299F19613E5859DA,B7E56BEBF29F9AA59A9ABC9E299F19613E5859DA," TwoStep-AS cluster node
-
-TwoStep Cluster is an exploratory tool that is designed to reveal natural groupings (or clusters) within a data set that would otherwise not be apparent. The algorithm that is employed by this procedure has several desirable features that differentiate it from traditional clustering techniques.
-
-
-
-* Handling of categorical and continuous variables. By assuming variables to be independent, a joint multinomial-normal distribution can be placed on categorical and continuous variables.
-* Automatic selection of number of clusters. By comparing the values of a model-choice criterion across different clustering solutions, the procedure can automatically determine the optimal number of clusters.
-* Scalability. By constructing a cluster feature (CF) tree that summarizes the records, the TwoStep algorithm can analyze large data files.
-
-
-
-For example, retail and consumer product companies regularly apply clustering techniques to information that describes their customers' buying habits, gender, age, income level, and other attributes. These companies tailor their marketing and product development strategies to each consumer group to increase sales and build brand loyalty.
-"
-A967430DA16338281405CF73A802C233911B6A13_0,A967430DA16338281405CF73A802C233911B6A13," Type node
-
-You can specify field properties in a Type node.
-
-The following main properties are available.
-
-
-
-* Field. Specify value and field labels for data in watsonx.ai. For example, field metadata imported from a data asset can be viewed or modified here. Similarly, you can create new labels for fields and their values.
-* Measure. This is the measurement level, used to describe characteristics of the data in a given field. If all the details of a field are known, it's called fully instantiated. Note: The measurement level of a field is different from its storage type, which indicates whether the data is stored as strings, integers, real numbers, dates, times, timestamps, or lists.
-* Role. Used to tell modeling nodes whether fields will be Input (predictor fields) or Target (predicted fields) for a machine-learning process. Both and None are also available roles, along with Partition, which indicates a field used to partition records into separate samples for training, testing, and validation. The value Split specifies that separate models will be built for each possible value of the field.
-* Value mode. Use this column to specify options for reading data values from the dataset, or use the Specify option to specify measurement levels and values.
-* Values. With this column, you can specify options for reading data values from the data set, or specify measurement levels and values separately. You can also choose to pass fields without reading their values. You can't amend the cell in this column if the corresponding Field entry contains a list.
-* Check. With this column, you can set options to ensure that field values conform to the specified values or ranges. You can't amend the cell in this column if the corresponding Field entry contains a list.
-
-
-
-Click the Edit (gear) icon next to each row to open additional options.
-
-Tip: Icons in the Type node properties quickly indicate the data type of each field, such as string, date, double integer, or hashtag.
-
-"
-A967430DA16338281405CF73A802C233911B6A13_1,A967430DA16338281405CF73A802C233911B6A13,"Figure 1. New Type node icons
-
-
-"
-5F584AEED890D6EFB4C9FAF133A26BD9F9E4F219,5F584AEED890D6EFB4C9FAF133A26BD9F9E4F219," Checking type values
-
-Turning on the Check option for each field examines all values in that field to determine whether they comply with the current type settings or the values that you've specified. This is useful for cleaning up datasets and reducing the size of a dataset within a single operation.
-
-The Check column in the Type node determines what happens when a value outside of the type limits is discovered. To change the check settings for a field, use the drop-down list for that field in the Check column. To set the check settings for all fields, select the check box for the top-level Field column heading. Then use the top-level drop-down above the Check column.
-
-The following check options are available:
-
-None. Values will be passed through without checking. This is the default setting.
-
-Nullify. Change values outside of the limits to the system null ($null$).
-
-Coerce. Fields whose measurement levels are fully instantiated will be checked for values that fall outside the specified ranges. Unspecified values will be converted to a legal value for that measurement level using the following rules:
-
-
-
-* For flags, any value other than the true and false value is converted to the false value
-* For sets (nominal or ordinal), any unknown value is converted to the first member of the set's values
-* Numbers greater than the upper limit of a range are replaced by the upper limit
-* Numbers less than the lower limit of a range are replaced by the lower limit
-* Null values in a range are given the midpoint value for that range
-
-
-
-Discard. When illegal values are found, the entire record is discarded.
-
-Warn. The number of illegal items is counted and reported in the flow properties dialog when all of the data has been read.
-
-Abort. The first illegal value encountered terminates the running of the flow. The error is reported in the flow properties dialog.
-"
-916C197A1A18FBE44382A30782B1FF7C13DBFEEC,916C197A1A18FBE44382A30782B1FF7C13DBFEEC," Converting continuous data
-
-Treating categorical data as continuous can have a serious impact on the quality of a model, especially if it's the target field (for example, producing a regression model rather than a binary model). To prevent this, you can convert integer ranges to categorical types such as Ordinal or Flag.
-
-
-
-1. Double-click a Type node to open its properties. Expand the Type Operations section.
-2. Specify a value for Set continuous integer field to ordinal if range less than or equal to.
-"
-8F5EA4DC23CAEE3B6887B07AE9D319BFE5E39CA8,8F5EA4DC23CAEE3B6887B07AE9D319BFE5E39CA8," Setting field format options
-
-With the FORMAT settings in the Type and Table nodes you can specify formatting options for current or unused fields.
-
-Under each formatting type, click Add Columns and add one or more fields. The field name and format setting will be displayed for each field you select. Then click the gear icon to specify formatting options.
-
-The following formatting options are available on a per-field basis:
-
-Date format. Select a date format to use for date storage fields or when strings are interpreted as dates by CLEM date functions.
-
-Time format. Select a time format to use for time storage fields or when strings are interpreted as times by CLEM time functions.
-
-Number format. You can choose from standard (.), scientific (.E+), or currency display formats ($.).
-
-Decimal symbol. Select either a comma (,) or period (.) as the decimal separator.
-
-Number grouping symbol. For number display formats, select the symbol used to group values (for example, the comma in 3,000.00). Options include none, period, comma, space, and locale-defined (in which case the default for the current locale is used).
-
-Decimal places (standard, scientific, currency, export). For number display formats, specify the number of decimal places to use when displaying real numbers. This option is specified separately for each display format. Note that the Export decimal places setting only applies to flat file exports. The number of decimal places exported by the XML Export node is always 6.
-
-Justify. Specifies how the values should be justified within the column. The default setting is Auto, which left-justifies symbolic values and right-justifies numeric values. You can override the default by selecting left, right, or center.
-
-Column width. By default, column widths are automatically calculated based on the values of the field. You can specify a custom width, if needed.
-"
-7292DE7C0036B9064A85D1DA77A860BD989EA638_0,7292DE7C0036B9064A85D1DA77A860BD989EA638," Setting the field role
-
-A field's role controls how it's used in model building—for example, whether a field is an input or target (the thing being predicted).
-
-Note: The Partition, Frequency, and Record ID roles can each be applied to a single field only.
-
-The following roles are available:
-
-Input. The field is used as an input to machine learning (a predictor field).
-
-Target. The field is used as an output or target for machine learning (one of the fields that the model will try to predict).
-
-Both. The field is used as both an input and an output by the Apriori node. All other modeling nodes will ignore the field.
-
-None. The field is ignored by machine learning. Fields whose measurement level is set to Typeless are automatically set to None in the Role column.
-
-Partition. Indicates a field used to partition the data into separate samples for training, testing, and (optional) validation purposes. The field must be an instantiated set type with two or three possible values (as defined in the advanced settings by clicking the gear icon). The first value represents the training sample, the second represents the testing sample, and the third (if present) represents the validation sample. Any additional values are ignored, and flag fields can't be used. Note that to use the partition in an analysis, partitioning must be enabled in the node settings of the appropriate model-building or analysis node. Records with null values for the partition field are excluded from the analysis when partitioning is enabled. If you defined multiple partition fields in the flow, you must specify a single partition field in the node settings for each applicable modeling node. If a suitable field doesn't already exist in your data, you can create one using a Partition node or Derive node. See [Partition node](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/partition.html) for more information.
-
-Split. (Nominal, ordinal, and flag fields only.) Specifies that a model is built for each possible value of the field.
-
-"
-7292DE7C0036B9064A85D1DA77A860BD989EA638_1,7292DE7C0036B9064A85D1DA77A860BD989EA638,"Frequency. (Numeric fields only.) Setting this role enables the field value to be used as a frequency weighting factor for the record. This feature is supported by C&R Tree, CHAID, QUEST, and Linear nodes only; all other nodes ignore this role. Frequency weighting is enabled by means of the Use frequency weight option in the node settings of those modeling nodes that support the feature.
-
-Record ID. The field is used as the unique record identifier. This feature is ignored by most nodes; however, it's supported by Linear models.
-"
-B8C3B95FC688C347D679F81711781B29578CFC19,B8C3B95FC688C347D679F81711781B29578CFC19," Viewing and setting information about types
-
-From the Type node, you can specify field metadata and properties that are invaluable to modeling and other work.
-
-These properties include:
-
-
-
-* Specifying a usage type, such as range, set, ordered set, or flag, for each field in your data
-* Setting options for handling missing values and system nulls
-* Setting the role of a field for modeling purposes
-"
-9F878A46B28C19B951157A5F31BB7A1A9920A89E,9F878A46B28C19B951157A5F31BB7A1A9920A89E," What is instantiation?
-
-Instantiation is the process of reading or specifying information, such as storage type and values for a data field. To optimize system resources, instantiating is a user-directed process—you tell the software to read values by running data through a Type node.
-
-
-
-* Data with unknown types is also referred to as uninstantiated. Data whose storage type and values are unknown is displayed in the Measure column of the Type node settings as Typeless.
-* When you have some information about a field's storage, such as string or numeric, the data is called partially instantiated. Categorical or Continuous are partially instantiated measurement levels. For example, Categorical specifies that the field is symbolic, but you don't know whether it's nominal, ordinal, or flag.
-* When all of the details about a type are known, including the values, a fully instantiated measurement level—nominal, ordinal, flag, or continuous—is displayed in this column. Note that the continuous type is used for both partially instantiated and fully instantiated data fields. Continuous data can be either integers or real numbers.
-
-
-
-When a data flow with a Type node runs, uninstantiated types immediately become partially instantiated, based on the initial data values. After all of the data passes through the node, all data becomes fully instantiated unless values were set to Pass. If the flow run is interrupted, the data will remain partially instantiated. After the Types settings are instantiated, the values of a field are static at that point in the flow. This means that any upstream changes will not affect the values of a particular field, even if you rerun the flow. To change or update the values based on new data or added manipulations, you need to edit them in the Types settings or set the value for a field to Read or Extend.
-"
-21DB0146B79B8256259507C62876E01ADA143BD6_0,21DB0146B79B8256259507C62876E01ADA143BD6," Measurement levels
-
-The measure, also referred to as measurement level, describes the usage of data fields in SPSS Modeler.
-
-You can specify the Measure in the node properties of an import node or a Type node. For example, you may want to set the measure for an integer field with values of 1 and 0 to Flag. This usually indicates that 1 = True and 0 = False.
-
-Storage versus measurement. Note that the measurement level of a field is different from its storage type, which indicates whether data is stored as a string, integer, real number, date, time, or timestamp. While you can modify data types at any point in a flow by using a Type node, storage must be determined at the source when reading data in (although you can subsequently change it using a conversion function).
-
-The following measurement levels are available:
-
-
-
-* Default. Data whose storage type and values are unknown (for example, because they haven't yet been read) are displayed as Default.
-* Continuous. Used to describe numeric values, such as a range of 0–100 or 0.75–1.25. A continuous value can be an integer, real number, or date/time.
-* Categorical. Used for string values when an exact number of distinct values is unknown. This is an uninstantiated data type, meaning that all possible information about the storage and usage of the data is not yet known. After data is read, the measurement level will be Flag, Nominal, or Typeless, depending on the maximum number of members for nominal fields specified.
-* Flag. Used for data with two distinct values that indicate the presence or absence of a trait, such as true and false, Yes and No, or 0 and 1. The values used may vary, but one must always be designated as the ""true"" value, and the other as the ""false"" value. Data may be represented as text, integer, real number, date, time, or timestamp.
-"
-21DB0146B79B8256259507C62876E01ADA143BD6_1,21DB0146B79B8256259507C62876E01ADA143BD6,"* Nominal. Used to describe data with multiple distinct values, each treated as a member of a set, such as small/medium/large. Nominal data can have any storage—numeric, string, or date/time. Note that setting the measurement level to Nominal doesn't automatically change the values to string storage.
-* Ordinal. Used to describe data with multiple distinct values that have an inherent order. For example, salary categories or satisfaction rankings can be typed as ordinal data. The order is defined by the natural sort order of the data elements. For example, 1, 3, 5 is the default sort order for a set of integers, while HIGH, LOW, NORMAL (ascending alphabetically) is the order for a set of strings. The ordinal measurement level enables you to define a set of categorical data as ordinal data for the purposes of visualization, model building, and export to other applications (such as IBM SPSS Statistics) that recognize ordinal data as a distinct type. You can use an ordinal field anywhere that a nominal field can be used. Additionally, fields of any storage type (real, integer, string, date, time, and so on) can be defined as ordinal.
-* Typeless. Used for data that doesn't conform to any of the Default, Continuous, Categorical, Flag, Nominal, or Ordinal types, for fields with a single value, or for nominal data where the set has more members than the defined maximum. Typeless is also useful for cases in which the measurement level would otherwise be a set with many members (such as an account number). When you select Typeless for a field, the role is automatically set to None, with Record ID as the only alternative. The default maximum size for sets is 250 unique values.
-* Collection. Used to identify non-geospatial data that is recorded in a list. A collection is effectively a list field of zero depth, where the elements in that list have one of the other measurement levels.
-"
-21DB0146B79B8256259507C62876E01ADA143BD6_2,21DB0146B79B8256259507C62876E01ADA143BD6,"* Geospatial. Used with the List storage type to identify geospatial data. Lists can be either List of Integer or List of Real fields with a list depth that's between zero and two, inclusive.
-
-
-
-You can manually specify measurement levels, or you can allow the software to read the data and determine the measurement level based on the values it reads. Alternatively, where you have several continuous data fields that should be treated as categorical data, you can choose an option to convert them. See [Converting continuous data](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/type_convert.html).
-"
-E0F6FBCA52D2EE44AC2E0795FA11FB53E3054C47,E0F6FBCA52D2EE44AC2E0795FA11FB53E3054C47," Geospatial measurement sublevels
-
-The Geospatial measurement level, which is used with the List storage type, has six sublevels that are used to identify different types of geospatial data.
-
-
-
-* Point. Identifies a specific location (for example, the center of a city).
-* Polygon. A series of points that identifies the single boundary of a region and its location (for example, a county).
-* LineString. Also referred to as a Polyline or just a Line, a LineString is a series of points that identifies the route of a line. For example, a LineString might be a fixed item, such as a road, river, or railway; or the track of something that moves, such as an aircraft's flight path or a ship's voyage.
-* MultiPoint. Used when each row in your data contains multiple points per region. For example, if each row represents a city street, the multiple points for each street can be used to identify every street lamp.
-"
-FD903F9A58632DF14BE5C98EEDA32E1FC2F46F4B,FD903F9A58632DF14BE5C98EEDA32E1FC2F46F4B," Defining missing values
-
-In the Type node settings, select the desired field in the table and then click the gear icon at the end of its row. Missing values settings are available in the window that appears.
-
-Select Define missing values to define missing value handing for this field. Here you can define explicit values to be considered as missing values for this field, or this can also be accomplished by means of a downstream Filler node.
-"
-063D5E4C6E2094F964752D376B5FF49FFD47433B,063D5E4C6E2094F964752D376B5FF49FFD47433B," Data values
-
-Using the Value mode column in the Type node settings, you can read values automatically from the data, or you can specify measures and values.
-
-The options available in the Value mode drop-down provide instructions for auto-typing, as shown in the following table.
-
-
-
-Table 1. Instructions for auto-typing
-
- Option Function
-
- Read Data is read when the node runs.
- Extend Data is read and appended to the current data (if any exists).
- Pass No data is read.
- Current Keep current data values.
- Specify You can click the gear icon at the end of the row to specify values.
-
-
-
-Running a Type node or clicking Read Values auto-types and reads values from your data source based on your selection. You can also specify these values manually by using the Specify option and clicking the gear icon at the end of a row.
-
-After you make changes for fields in the Type node, you can reset value information using the following buttons:
-
-
-
-"
-98AC4398E3EA902007D99E5BDB0686AEF04A4DAA,98AC4398E3EA902007D99E5BDB0686AEF04A4DAA," Specifying values for collection data
-
-Collection fields display non-geospatial data that's in a list.
-
-The only item you can set for the Collection measurement level is the List measure. By default, this measure is set to Typeless, but you can select another value to set the measurement level of the elements within the list. You can choose one of the following options:
-
-
-
-* Typeless
-* Continuous
-* Nominal
-"
-A82CB1ABABCF08E9FD361F13050D47850AF8768A,A82CB1ABABCF08E9FD361F13050D47850AF8768A," Specifying values and labels for continuous data
-
-The Continuous measurement level is for numeric fields.
-
-There are three storage types for continuous data:
-
-
-
-* Real
-* Integer
-* Date/Time
-
-
-
-The same settings are used to edit all continuous fields. The storage type is displayed for reference only. Select the desired field in the Type node settings and then click the gear icon at the end of its row.
-"
-077AFC6B667F6747FF066182E2F04AF486C13368,077AFC6B667F6747FF066182E2F04AF486C13368," Specifying values for a flag
-
-Use flag fields to display data that has two distinct values. The storage types for flags can be string, integer, real number, or date/time.
-
-True. Specify a flag value for the field when the condition is met.
-
-False. Specify a flag value for the field when the condition is not met.
-
-Labels. Specify labels for each value in the flag field. These labels appear in a variety of locations, such as graphs, tables, output, and model browsers.
-"
-24D2987869B1C8C34EFA1204903A7A8F3E35D459,24D2987869B1C8C34EFA1204903A7A8F3E35D459," Specifying values for geospatial data
-
-Geospatial fields display geospatial data that's in a list. For the Geospatial measurement level, you can use various options to set the measurement level of the elements within the list.
-
-Type. Select the measurement sublevel of the geospatial field. The available sublevels are determined by the depth of the list field. The defaults are: Point (zero depth), LineString (depth of one), and Polygon (depth of one).
-
-For more information about sublevels, see [Geospatial measurement sublevels](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/type_levels_geo.html).
-
-Coordinate system. This option is only available if you changed the measurement level to Geospatial from a non-geospatial level. To apply a coordinate system to your geospatial data, select this option. To use a different coordinate system, click Change.
-"
-2C991135B30B24A268FC9D847E3F43522543A96B,2C991135B30B24A268FC9D847E3F43522543A96B," Specifying values and labels for nominal and ordinal data
-
-Nominal (set) and ordinal (ordered set) measurement levels indicate that the data values are used discretely as a member of the set. The storage types for a set can be string, integer, real number, or date/time.
-
-The following controls are unique to nominal and ordinal fields. You can use them to specify values and labels. Select the desired field in the Type node settings and then click the gear icon at the end of its row.
-
-Values and Labels. You can specify values based on your knowledge of the current field. You can enter expected values for the field and check the dataset's conformity to these values using the Check options. And you can specify lables for each value in the set. Thse labels appear in a variety of locations, such as graphs, tables, output, and model browsers.
-"
-C9857AFEF4C7E7C2AD0B764277B90A2BCE51ADC8_0,C9857AFEF4C7E7C2AD0B764277B90A2BCE51ADC8," Setting options for values
-
-The Value mode column under the Type node settings displays a drop-down list of predefined values. Choosing the Specify option on this list and then clicking the gear icon opens a new screen where you can set options for reading, specifying, labeling, and handling values for the selected field.
-
-Many of the controls are common to all types of data. These common controls are discussed here.
-
-Measure. Displays the currently selected measurement level. You can change this setting to reflect the way that you intend to use data. For instance, if a field called day_of_week contains numbers that represent individual days, you might want to change this to nominal data in order to create a distribution node that examines each category individually.
-
-Role. Used to tell modeling nodes whether fields will be Input (predictor fields) or Target (predicted fields) for a machine-learning process. Other roles are also available such as Both , None, Partition, Split, Frequency, or Record ID.
-
-Value mode. Select a mode to determine values for the selected field. Choices for reading values include the following:
-
-
-
-* Read. Select to read values when the node runs.
-* Pass. Select not to read data for the current field.
-* Specify. Options here are used to specify values and labels for the selected field. Used with value checking, use this option to specify values that are based on your knowledge of the current field. This option activates unique controls for each type of field. You can't specify values or labels for a field whose measurement level is Typeless.
-* Extend. Select to append the current data with the values that you enter here. For example, if field_1 has a range from (0,10) and you enter a range of values from (8,16), the range is extended by adding the 16 without removing the original minimum. The new range would be (0,16).
-* Current. Select to keep the current data values.
-
-
-
-Value Labels (Add/Edit Labels). In this section you can enter custom labels for each value of the selected field.
-
-"
-C9857AFEF4C7E7C2AD0B764277B90A2BCE51ADC8_1,C9857AFEF4C7E7C2AD0B764277B90A2BCE51ADC8,"Max list length. Only available for data with a measurement level of either Geospatial or Collection. Set the maximum length of the list by specifying the number of elements the list can contain.
-
-Max string length. Only available for typeless data. Use this field when you're generating SQL to create a table. Enter the value of the largest string in your data; this generates a column in the table that's big enough for the string. If the string length value is not available, a default string size is used that may not be appropriate for the data (for example, if the value is too small, errors can occur when writing data to the table; too large a value could adversely affect performance).
-
-Check. Select a method of coercing values to conform to the specified continuous, flag, or nominal values. This option corresponds to the Check column in the main Type node settings, and a selection made here will override those in the main settings. Used with the options for specifying values and labels, value checking allows you to conform values in the data with expected values. For example, if you specify values as 1, 0 and then use the Discard. option here, you can discard all records with values other than 1 or 0.
-
-Define missing values. Select to activate the following controls you can use to declare missing values or blanks in your data.
-
-
-
-* Missing values. Use this field to define specific values (such as 99 or 0) as blanks. The value should be appropriate for the storage type of the field.
-* Range. Used to specify a range of missing values (such as ages 1–17 or greater than 65). If a bound value is blank, then the range is unbounded. For example, if you specify a lower bound of 100 with no upper bound, then all values greater than or equal to 100 are defined as missing. The bound values are inclusive. For example, a range with a lower bound of 5 and an upper bound of 10 includes 5 and 10 in the range definition. You can define a missing value range for any storage type, including date/time and string (in which case the alphabetic sort order is used to determine whether a value is within the range).
-"
-C9857AFEF4C7E7C2AD0B764277B90A2BCE51ADC8_2,C9857AFEF4C7E7C2AD0B764277B90A2BCE51ADC8,"* Null/White space. You can also specify system nulls (displayed in the data as $null$) and white space (string values with no visible characters) as blanks. Note that the Type node also treats empty strings as white space for purposes of analysis, although they are stored differently internally and may be handled differently in certain cases.
-
-
-
-Note: To code blanks as undefined or $null$, use the Filler node.
-"
-74706148818BD2ACE30029492DD8AD7D47283EDC,74706148818BD2ACE30029492DD8AD7D47283EDC," User Input node
-
-The User Input node provides an easy way for you to create synthetic data--either from scratch or by altering existing data. This is useful, for example, when you want to create a test dataset for modeling.
-"
-5B3FB712903B0D1044610C93E6FCDE6A41BE1CF6,5B3FB712903B0D1044610C93E6FCDE6A41BE1CF6," Web node
-
-Web nodes show the strength of relationships between values of two or more symbolic fields. The graph displays connections using varying types of lines to indicate connection strength. You can use a Web node, for example, to explore the relationship between the purchase of various items at an e-commerce site or a traditional retail outlet.
-
-Figure 1. Web graph showing relationships between the purchase of grocery items
-
-
-"
-114EBF33612531C5020FD739010049E5126E0E5B,114EBF33612531C5020FD739010049E5126E0E5B," XGBoost-AS node
-
-XGBoost© is an advanced implementation of a gradient boosting algorithm. Boosting algorithms iteratively learn weak classifiers and then add them to a final strong classifier. XGBoost is very flexible and provides many parameters that can be overwhelming to most users, so the XGBoost-AS node in Watson Studio exposes the core features and commonly used parameters. The XGBoost-AS node is implemented in Spark.
-
-For more information about boosting algorithms, see the [XGBoost Tutorials](http://xgboost.readthedocs.io/en/latest/tutorials/index.html). ^1^
-
-Note that the XGBoost cross-validation function is not supported in Watson Studio. You can use the Partition node for this functionality. Also note that XGBoost in Watson Studio performs one-hot encoding automatically for categorical variables.
-
-Notes:
-
-
-
-* On Mac, version 10.12.3 or higher is required for building XGBoost-AS models.
-* XGBoost isn't supported on IBM POWER.
-
-
-
-^1^ ""XGBoost Tutorials."" Scalable and Flexible Gradient Boosting. Web. © 2015-2016 DMLC.
-"
-8937DB13972E4DEDBCC542303EF3A783287FD10B,8937DB13972E4DEDBCC542303EF3A783287FD10B," XGBoost Linear node
-
-XGBoost Linear© is an advanced implementation of a gradient boosting algorithm with a linear model as the base model. Boosting algorithms iteratively learn weak classifiers and then add them to a final strong classifier. The XGBoost Linear node in watsonx.ai is implemented in Python.
-
-For more information about boosting algorithms, see the [XGBoost Tutorials](http://xgboost.readthedocs.io/en/latest/tutorials/index.html). ^1^
-
-Note that the XGBoost cross-validation function is not supported in watsonx.ai. You can use the Partition node for this functionality. Also note that XGBoost in watsonx.ai performs one-hot encoding automatically for categorical variables.
-
-^1^ ""XGBoost Tutorials."" Scalable and Flexible Gradient Boosting. Web. © 2015-2016 DMLC.
-"
-35F4C4A97CF58FA0642D88E501314F3D75FF9E01,35F4C4A97CF58FA0642D88E501314F3D75FF9E01," XGBoost Tree node
-
-XGBoost Tree© is an advanced implementation of a gradient boosting algorithm with a tree model as the base model. Boosting algorithms iteratively learn weak classifiers and then add them to a final strong classifier. XGBoost Tree is very flexible and provides many parameters that can be overwhelming to most users, so the XGBoost Tree node in watsonx.ai exposes the core features and commonly used parameters. The node is implemented in Python.
-
-For more information about boosting algorithms, see the [XGBoost Tutorials](http://xgboost.readthedocs.io/en/latest/tutorials/index.html). ^1^
-
-Note that the XGBoost cross-validation function is not supported in watsonx.ai. You can use the Partition node for this functionality. Also note that XGBoost in watsonx.ai performs one-hot encoding automatically for categorical variables.
-
-^1^ ""XGBoost Tutorials."" Scalable and Flexible Gradient Boosting. Web. © 2015-2016 DMLC.
-"
-717B697E0045B5D7DFF6ACC93AD5DEC98E27EBDC,717B697E0045B5D7DFF6ACC93AD5DEC98E27EBDC," Flow and SuperNode parameters
-
-You can define parameters for use in CLEM expressions and in scripting. They are, in effect, user-defined variables that are saved and persisted with the current flow or SuperNode and can be accessed from the user interface as well as through scripting.
-
-If you save a flow, for example, any parameters you set for that flow are also saved. (This distinguishes them from local script variables, which can be used only in the script in which they are declared.) Parameters are often used in scripting to control the behavior of the script, by providing information about fields and values that don't need to be hard coded in the script.
-
-You can set flow parameters in a flow script or in a flow's properties (right-click the canvas in your flow and select Flow properties), and they're available to all nodes in the flow. They're displayed in the Parameters list in the Expression Builder.
-
-You can also set parameters for SuperNodes, in which case they're visible only to nodes encapsulated within that SuperNode.
-
-Tip: For complete details about scripting, see the [Scripting and automation](https://dataplatform.cloud.ibm.com/docs/content/wsd/nodes/scripting_guide/clementine/scripting_overview.html) guide.
-"
-2B67D1EB41065CF9DA0EB68D429B69803D49EAA1,2B67D1EB41065CF9DA0EB68D429B69803D49EAA1," Reference information
-
-This section provides reference information about various topics.
-"
-C0CC7AE4029730B9846B6A05F4160643D3A8C393,C0CC7AE4029730B9846B6A05F4160643D3A8C393,"You may need to describe a flow to others in your organization. To help you do this, you can attach explanatory comments to nodes, and model nuggets.
-
-Others can then view these comments on-screen, or you might even print out an image of the flow that includes the comments. You can also add notes in the form of text annotations to nodes and model nuggets by means of the Annotations tab in a node's properties. These annotations are visible only when the Annotations tab is open.
-"
-E1232C341B3F590C23E9E81DDD157BC99FF77191,E1232C341B3F590C23E9E81DDD157BC99FF77191," Supported data sources for SPSS Modeler
-
-In SPSS Modeler, you can connect to your data no matter where it lives.
-
-
-
-"
-7D1E61EF82BC5DC1029D55C8F5C2EBB56082CDAC,7D1E61EF82BC5DC1029D55C8F5C2EBB56082CDAC," Creating SPSS Modeler flows
-
-With SPSS Modeler flows, you can quickly develop predictive models using business expertise and deploy them into business operations to improve decision making. Designed around the long-established SPSS Modeler client software and the industry-standard CRISP-DM model it uses, the flows interface supports the entire data mining process, from data to better business results.
-
-SPSS Modeler offers a variety of modeling methods taken from machine learning, artificial intelligence, and statistics. The methods available on the node palette allow you to derive new information from your data and to develop predictive models. Each method has certain strengths and is best suited for particular types of problems.
-
-Data format
-: Relational: Tables in relational data sources
-: Tabular: .xls, .xlsx, .csv, .sav, .json, .xml, or .sas. For Excel files, only the first sheet is read.
-: Textual: In the supported relational tables or files
-
-Data size
-: Any
-
-How can I prepare data?
-: Use automatic data preparation functions
-: Write SQL statements to manipulate data
-: Cleanse, shape, sample, sort, and derive data
-
-How can I analyze data?
-: Visualize data with many chart options
-: Identify the natural language of a text field
-
-How can I build models?
-: Build predictive models
-: Choose from over 40 modeling algorithms, and many other nodes
-: Use automatic modeling functions
-: Model time series or geospatial data
-: Classify textual data
-: Identify relationships between the concepts in textual data
-
-Getting started
-: To create an SPSS Modeler flow from the project's Assets tab, click .
-
-Note: Watsonx.ai doesn't include SPSS functionality in Peru, Ecuador, Colombia, or Venezuela.
-"
-68061CDEDA9E9E83180CA7513620B5988266CEBF,68061CDEDA9E9E83180CA7513620B5988266CEBF," SPSS algorithms
-
-Many of the nodes available in SPSS Modeler are based on statistical algorithms.
-
-If you're interested in learning more about the underlying algorithms used in your flows, you can read the SPSS Modeler Algorithms Guide available in PDF format. The guide is for advanced users, and the information is provided by a team of SPSS statisticians.
-
-[Download the SPSS Modeler Algorithms Guide ](https://public.dhe.ibm.com/software/analytics/spss/documentation/modeler/new/AlgorithmsGuide.pdf)
-"
-23080E48C7B666C07E92A6E4F4BB256D77BE49B4_0,23080E48C7B666C07E92A6E4F4BB256D77BE49B4," Tips and shortcuts
-
-Work quickly and easily by familiarizing yourself with the following shortcuts and tips:
-
-
-
-* Quickly find nodes. You can use the search bar on the Nodes palette to search for certain node types, and hover over them to see helpful descriptions.
-* Quickly edit nodes. After adding a node to your flow, double-click it to open its properties.
-* Add a node to a flow connection. To add a new node between two connected nodes, drag the node to the connection line.
-* Replace a connection. To replace an existing connection on a node, simply create a new connection and the old one will be replaced.
-* Start from an SPSS Modeler stream. You can import a stream ( .str) that was created in SPSS Modeler Subscription or SPSS Modeler client
-* Use tool tips. In node properties, helpful tool tips are available in various locations. Hover over the tooltip icon to see tool tips. 
-* Rename nodes and add annotations. Each node properties panel includes an Annotations section in which you can specify a custom name for nodes on the canvas. You can also include lengthy annotations to track progress, save process details, and denote any business decisions required or achieved.
-* Generate new nodes from table output. When viewing table output, you can select one or more fields, click Generate, and select a node to add to your flow.
-* Insert values automatically into a CLEM expression. Using the Expression Builder, accessible from various areas of the user interface (such as those for Derive and Filler nodes), you can automatically insert field values into a CLEM expression.
-
-
-
-Keyboard shortcuts are available for SPSS Modeler. See the following table. Note that all Ctrl keys listed are Cmd on macOS.
-
-
-
-Shortcut keys
-
-Table 1. Shortcut keys
-
- Shortcut Key Function
-
- Ctrl + F1 Navigate to the header.
-"
-23080E48C7B666C07E92A6E4F4BB256D77BE49B4_1,23080E48C7B666C07E92A6E4F4BB256D77BE49B4," Ctrl + F2 Navigate to the Nodes palette, then use arrow keys to move between nodes. Press Enter or the space key to add the selected node to your canvas.
- Ctrl + F3 Navigate to the toolbar.
- Ctrl + F4 Navigate to the flow canvas, then use arrow keys to move between nodes. Press Enter or space twice to open the node's context menu. Then use the arrow keys to select the desired context menu action and press Enter or space to perform the action.
- Ctrl + F5 Navigate to the node properties panel if it's open.
- Ctrl + F6 Move between areas of the user interface (header, palette, canvas, toolbar, etc.).
- Ctrl + F7 Open and navigate to the Messages panel.
- Ctrl + F8 Open and navigate to the Outputs panel.
- Ctrl + A Select all nodes when focus is on the canvas
- Ctrl + E With a node selected on the canvas, open its node properties. Then use the tab or arrow keys to move around the list of node properties. Press Ctrl + S to save your changes or press Ctrl + to cancel your changes.
- Ctrl + I Open the settings panel.
- Ctrl + J With a node selected on the canvas, connect it to another node. Use the arrow keys to select the node to connect to, then press Enter or space (or press Esc to cancel).
- Ctrl + K Disconnect a node.
- Ctrl + Enter Run a branch from where the focus is.
- Ctrl + Shift + Enter Run the entire flow.
- Ctrl + Shift + P Launch preview.
- Ctrl + arrow Move a selected node around the canvas.
- Ctrl + Alt + arrow Move the canvas in a direction.
- Ctrl + Shift + arrow Move a selected node around the canvas ten times faster than Ctrl + arrow.
- Ctrl + Shift + C Toggle cache on/off.
- Ctrl + Shift + up arrow Select all nodes upstream of the selected node.
- Ctrl + Shift + down arrow Select all nodes downstream of the selected node.
-"
-C5F5ACC006CD6F06BE3266EE98F89FABF4F6FBAF,C5F5ACC006CD6F06BE3266EE98F89FABF4F6FBAF," Troubleshooting SPSS Modeler
-
-The information in this section provides troubleshooting details for issues you may encounter in SPSS Modeler.
-"
-33FE18D89140517AB2A75D6FC64A4A3DB962B88B,33FE18D89140517AB2A75D6FC64A4A3DB962B88B," CLEM expressions and operators supporting SQL pushback
-
-The tables in this section list the mathematical operations and expressions that support SQL generation and are often used during data mining. Operations absent from these tables don't support SQL generation.
-
-
-
-Table 1. Operators
-
- Operations supporting SQL generation Notes
-
- +
- -
- /
- *
- >< Used to concatenate strings.
-
-
-
-
-
-Table 2. Relational operators
-
- Operations supporting SQL generation Notes
-
- =
- /= Used to specify ""not equal.""
- >
- >=
- <
- <=
-
-
-
-
-
-Table 3. Functions
-
- Operations supporting SQL generation Notes
-
- abs
- allbutfirst
- allbutlast
- and
- arccos
- arcsin
- arctan
- arctanh
- cos
- div
- exp
- fracof
- hasstartstring
- hassubstring
- integer
- intof
- isaplhacode
- islowercode
- isnumbercode
- isstartstring
- issubstring
- isuppercode
- last
- length
- locchar
- log
- log10
- lowertoupper
- max
- member
- min
- negate
- not
- number
- or
- pi
- real
- rem
- round
- sign
- sin
- sqrt
- string
- strmember
- subscrs
- substring
- substring_between
- uppertolower
- to_string
-
-
-
-
-
-Table 4. Special functions
-
- Operations supporting SQL generation Notes
-
- @NULL
- @GLOBAL_AVE You can use the special global functions to retrieve global values computed by the Set Globals node.
- @GLOBAL_SUM
- @GLOBAL_MAX
- @GLOBAL_MEAN
- @GLOBAL_MIN
- @GLOBALSDEV
-
-
-
-
-
-Table 5. Aggregate functions
-
- Operations supporting SQL generation Notes
-
- Sum
- Mean
- Min
- Max
-"
-262C45D286C9B8A7EDBA8635E636824F2B043D73,262C45D286C9B8A7EDBA8635E636824F2B043D73," How does SQL pushback work?
-
-The initial fragments of a flow leading from the data import nodes are the main targets for SQL generation. When a node is encountered that can't be compiled to SQL, the data is extracted from the database and subsequent processing is performed.
-
-During flow preparation and prior to running, the SQL generation process happens as follows:
-
-
-
-* The software reorders flows to move downstream nodes into the “SQL zone” where it can be proven safe to do so.
-* Working from the import nodes toward the terminal nodes, SQL expressions are constructed incrementally. This phase stops when a node is encountered that can't be converted to SQL or when the terminal node (for example, a Table node or a Graph node) is converted to SQL. At the end of this phase, each node is labeled with an SQL statement if the node and its predecessors have an SQL equivalent.
-"
-BDB3689801D81676AE642F1EBFF81D27C07F1F3C,BDB3689801D81676AE642F1EBFF81D27C07F1F3C," Generating SQL from model nuggets
-
-When using data from a database, SQL code can be pushed back to the database for execution, providing superior performance for many operations. For some nodes, SQL for the model nugget can be generated, pushing back the model scoring stage to the database. This allows flows containing these nuggets to have their full SQL pushed back.
-
-For a generated model nugget that supports SQL pushback:
-
-
-
-1. Double-click the model nugget to open its settings.
-2. Depending on the node type, one or more of the following options is available. Choose one of these options to specify how SQL generation is performed.
-
-Generate SQL for this model
-
-
-
-* Default: Score using Server Scoring Adapter (if installed) otherwise in process. This is the default option. If connected to a database with a scoring adapter installed, this option generates SQL using the scoring adapter and associated user defined functions (UDF) and scores your model within the database. When no scoring adapter is available, this option fetches your data back from the database and scores it in SPSS Modeler.
-* Score by converting to native SQL without Missing Value Support. This option generates native SQL to score the model within the database, without the overhead of handling missing values. This option simply sets the prediction to null ($null$) when a missing value is encountered while scoring a case.
-"
-D69F33671E13DF29FE56579AC4654EBC54A11F12_0,D69F33671E13DF29FE56579AC4654EBC54A11F12," Nodes supporting SQL pushback
-
-The tables in this section show nodes representing data-mining operations that support SQL pushback. If a node doesn't appear in these tables, it doesn't support SQL pushback.
-
-
-
-Table 1. Record Operations nodes
-
- Nodes supporting SQL generation Notes
-
- Select Supports generation only if SQL generation for the select expression itself is supported. If any fields have nulls, SQL generation does not give the same results for discard as are given in native SPSS Modeler.
- Sample Simple sampling supports SQL generation to varying degrees depending on the database.
- Aggregate SQL generation support for aggregation depends on the data storage type.
- RFM Aggregate Supports generation except if saving the date of the second or third most recent transactions, or if only including recent transactions. However, including recent transactions does work if the datetime_date(YEAR,MONTH,DAY) function is pushed back.
- Sort
- Merge No SQL generated for merge by order. Merge by key with full or partial outer join is only supported if the database/driver supports it. Non-matching input fields can be renamed by means of a Filter node, or the Filter settings of an import node. Supports SQL generation for merge by condition. For all types of merge, SQL_SP_EXISTS is not supported if inputs originate in different databases.
- Append Supports generation if inputs are unsorted. SQL optimization is only possible when your inputs have the same number of columns.
- Distinct A Distinct node with the (default) mode Create a composite record for each group selected doesn't support SQL optimization.
-
-
-
-
-
-Table 2. SQL generation support in the Sample node for simple sampling
-
- Mode Sample Max size Seed Db2 for z/OS Db2 for OS/400 Db2 for Win/UNIX Oracle SQL Server Teradata
-
- Include First n/a Y Y Y Y Y Y
- 1-in-n off Y Y Y Y Y
- max Y Y Y Y Y
- Random % off off Y Y Y Y
- on Y Y Y
- max off Y Y Y Y
- on Y Y Y
- Discard First off Y
- max Y
-"
-D69F33671E13DF29FE56579AC4654EBC54A11F12_1,D69F33671E13DF29FE56579AC4654EBC54A11F12," 1-in-n off Y Y Y Y Y
- max Y Y Y Y Y
- Random % off off Y Y Y Y
- on Y Y Y
- max off Y Y Y Y
- on Y Y Y
-
-
-
-
-
-Table 3. SQL generation support in the Aggregate node
-
- Storage Sum Mean Min Max SDev Median Count Variance Percentile
-
- Integer Y Y Y Y Y Y* Y Y Y*
- Real Y Y Y Y Y Y* Y Y Y*
- Date Y Y Y* Y Y*
- Time Y Y Y* Y Y*
- Timestamp Y Y Y* Y Y*
- String Y Y Y* Y Y*
-
-
-
-* Median and Percentile are supported on Oracle.
-
-
-
-Table 4. Field Operations nodes
-
- Nodes supporting SQL generation Notes
-
- Type Supports SQL generation if the Type node is instantiated and no ABORT or WARN type checking is specified.
- Filter
- Derive Supports SQL generation if SQL generated for the derive expression is supported (see expressions later on this page).
- Ensemble Supports SQL generation for Continuous targets. For other targets, supports generation only if the Highest confidence wins ensemble method is used.
- Filler Supports SQL generation if the SQL generated for the derive expression is supported.
- Anonymize Supports SQL generation for Continuous targets, and partial SQL generation for Nominal and Flag targets.
- Reclassify
- Binning Supports SQL generation if the Tiles (equal count) binning method is used and the Read from Bin Values tab if available option is selected. Due to differences in the way that bin boundaries are calculated (this is caused by the nature of the distribution of data in bin fields), you might see differences in the binning output when comparing normal flow execution results and SQL pushback results. To avoid this, use the Record count tiling method, and either Add to next or Keep in current tiles to obtain the closest match between the two methods of flow execution.
- RFM Analysis Supports SQL generation if the Read from Bin Values tab if available option is selected, but downstream nodes will not support it.
- Partition Supports SQL generation to assign records to partitions.
- Set To Flag
- Restructure
-
-
-
-
-
-Table 5. Graphs nodes
-
- Nodes supporting SQL generation Notes
-
- Distribution
- Web
- Evaluation
-
-
-
-"
-D69F33671E13DF29FE56579AC4654EBC54A11F12_2,D69F33671E13DF29FE56579AC4654EBC54A11F12,"For some models, SQL for the model nugget can be generated, pushing back the model scoring stage to the database. The main use of this feature is not to improve performance, but to allow flows containing these nuggets to have their full SQL pushed back. See [Generating SQL from model nuggets](https://dataplatform.cloud.ibm.com/docs/content/wsd/sql_native.html) for more information.
-
-
-
-Table 6. Model nuggets
-
- Model nuggets supporting SQL generation Notes
-
- C&R Tree Supports SQL generation for the single tree option, but not for the boosting, bagging, or large dataset options.
- QUEST
- CHAID
- C5.0
- Decision List
- Linear Supports SQL generation for the standard model option, but not for the boosting, bagging, or large dataset options.
- Neural Net Supports SQL generation for the standard model option (Multilayer Perceptron only), but not for the boosting, bagging, or large dataset options.
- PCA/Factor
- Logistic Supports SQL generation for Multinomial procedure but not Binomial. For Multinomial, generation isn't supported when confidences are selected, unless the target type is Flag.
- Generated Rulesets
- Auto Classifier If a User Defined Function (UDF) scoring adapter is enabled, these nuggets support SQL pushback. Also, if either SQL generation for Continuous targets, or the Highest confidence wins ensemble method are used, these nuggets support further pushback downstream.
- Auto Numeric If a User Defined Function (UDF) scoring adapter is enabled, these nuggets support SQL pushback. Also, if either SQL generation for Continuous targets, or the Highest confidence wins ensemble method are used, these nuggets support further pushback downstream.
-
-
-
-
-
-Table 7. Outputs nodes
-
- Nodes supporting SQL generation Notes
-
- Table Supports generation if SQL generation is supported for highlight expression.
- Matrix Supports generation except if All numerics is selected for the Fields option.
- Analysis Supports generation, depending on the options selected.
- Transform
- Statistics Supports generation if the Correlate option isn't used.
-"
-AF0F7C335A10C372C36A0CCEC76057C41B93731B_0,AF0F7C335A10C372C36A0CCEC76057C41B93731B," SQL optimization
-
-You can push many data preparation and mining operations directly in your database to improve performance.
-
-One of the most powerful capabilities of SPSS Modeler is the ability to perform many data preparation and mining operations directly in the database. By generating SQL code that can be pushed back to the database for execution, many operations, such as sampling, sorting, deriving new fields, and certain types of graphing, can be performed in the database rather than on the client or server computer. When you're working with large datasets, these pushbacks can dramatically enhance performance in several ways:
-
-
-
-* By reducing the size of the result set to be transferred from the DBMS to watsonx.ai. When large result sets are read through an ODBC driver, network I/O or driver inefficiencies may result. For this reason, the operations that benefit most from SQL optimization are row and column selection and aggregation (Select, Sample, Aggregate nodes), which typically reduce the size of the dataset to be transferred. Data can also be cached to a temporary table in the database at critical points in the flow (after a Merge or Select node, for example) to further improve performance.
-* By making use of the performance and scalability of the database. Efficiency is increased because a DBMS can often take advantage of parallel processing, more powerful hardware, more sophisticated management of disk storage, and the presence of indexes.
-
-
-
-Given these advantages, watsonx.ai is designed to maximize the amount of SQL generated by each SPSS Modeler flow so that only those operations that can't be compiled to SQL are executed by watsonx.ai. Because of limitations in what can be expressed in standard SQL (SQL-92), however, certain operations may not be supported.
-
-For details about currently supported databases, see [Supported data sources for SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsd/spss-connections.html).
-
-Tips:
-
-
-
-"
-AF0F7C335A10C372C36A0CCEC76057C41B93731B_1,AF0F7C335A10C372C36A0CCEC76057C41B93731B,"* When running a flow, nodes that push back to your database are highlighted with a small SQL icon beside the node. When you start making edits to a flow after running it, the icons will be removed until the next time you run the flow.
-
-Figure 1. SQL pushback indicator
-
-
-* If you want to see which nodes will push back before running a flow, click SQL preview. This enables you to modify the flow before you run it to improve performance by moving the non-pushback operations as far downstream as possible, for example.
-* If a node can't be pushed back, all subsequent nodes in the flow won't be pushed back either (pushback stops at that node). This may impact how you want to organize the order of nodes in your flow.
-
-
-
-Notes: Keep the following information in mind regarding SQL:
-
-
-
-"
-2C669E0145DAC26A7517D9402874BAC048E46E82_0,2C669E0145DAC26A7517D9402874BAC048E46E82," Tips for maximizing SQL pushback
-
-To get the best performance boost from SQL optimization, pay attention to the items in this section.
-
-Flow order. SQL generation may be halted when the function of the node has no semantic equivalent in SQL because SPSS Modeler’s data-mining functionality is richer than the traditional data-processing operations supported by standard SQL. When this happens, SQL generation is also suppressed for any downstream nodes. Therefore, you may be able to significantly improve performance by reordering nodes to put operations that halt SQL as far downstream as possible. The SQL optimizer can do a certain amount of reordering automatically, but further improvements may be possible. A good candidate for this is the Select node, which can often be brought forward. See [Nodes supporting SQL pushback](https://dataplatform.cloud.ibm.com/docs/content/wsd/sql_nodes.html) for more information.
-
-CLEM expressions. If a flow can't be reordered, you may be able to change node options or CLEM expressions or otherwise recast the way the operation is performed, so that it no longer inhibits SQL generation. Derive, Select, and similar nodes can commonly be rendered into SQL, provided that all of the CLEM expression operators have SQL equivalents. Most operators can be rendered, but there are a number of operators that inhibit SQL generation (in particular, the sequence functions [“@ functions”]). Sometimes generation is halted because the generated query has become too complex for the database to handle. See [CLEM expressions and operators supporting SQL pushback](https://dataplatform.cloud.ibm.com/docs/content/wsd/sql_clem.html) for more information.
-
-Multiple input nodes. Where a flow has multiple data import nodes, SQL generation is applied to each import branch independently. If generation is halted on one branch, it can continue on another. Where two branches merge (and both branches can be expressed in SQL up to the merge), the merge itself can often be replaced with a database join, and generation can be continued downstream.
-
-"
-2C669E0145DAC26A7517D9402874BAC048E46E82_1,2C669E0145DAC26A7517D9402874BAC048E46E82,"Scoring models. In-database scoring is supported for some models by rendering the generated model into SQL. However, some models generate extremely complex SQL expressions that aren't always evaluated effectively within the database. For this reason, SQL generation must be enabled separately for each generated model nugget. If you find that a model nugget is inhibiting SQL generation, open the model nugget's settings and select Generate SQL for this model (with some models, you may have additional options controlling generation). Run tests to confirm that the option is beneficial for your application. See [Nodes supporting SQL pushback](https://dataplatform.cloud.ibm.com/docs/content/wsd/sql_nodes.html) for more information.
-
-When testing modeling nodes to see if SQL generation for models works effectively, we recommend first saving all flows from SPSS Modeler. Note that some database systems may hang while trying to process the (potentially complex) generated SQL.
-
-Database caching. If you are using a node cache to save data at critical points in the flow (for example, following a Merge or Aggregate node), make sure that database caching is enabled along with SQL optimization. This will allow data to be cached to a temporary table in the database (rather than the file system) in most cases.
-
-Vendor-specific SQL. Most of the generated SQL is standards-conforming (SQL-92), but some nonstandard, vendor-specific features are exploited where practical. The degree of SQL optimization can vary, depending on the database source.
-"
-3874AAF67EF04BB4D623FFF07E1CDB4C25B3B33E,3874AAF67EF04BB4D623FFF07E1CDB4C25B3B33E," Tutorials
-
-These tutorials use the assets that are available in the sample project, and they provide brief, targeted introductions to specific modeling methods and techniques.
-
-You can build the example flows provided by following the steps in the tutorials.
-
-Some of the simple flows are already completed in the projects, but you can still walk through them using their accompanying tutorials. Some of the more complicated flows must be completed by following the steps in the tutorials.
-
-Important: Before you begin the tutorials, complete the following steps to create the sample projects.
-"
-6E50438308B85E969B79DED22CC5E15F6872EE85,6E50438308B85E969B79DED22CC5E15F6872EE85," Automated modeling for a continuous target
-
-You can use the Auto Numeric node to automatically create and compare different models for continuous (numeric range) outcomes, such as predicting the taxable value of a property. With a single node, you can estimate and compare a set of candidate models and generate a subset of models for further analysis. The node works in the same manner as the Auto Classifier node, but for continuous rather than flag or nominal targets.
-"
-2D5B33F1352D8BA7CEF029D1979CCF0D44AAD63E,2D5B33F1352D8BA7CEF029D1979CCF0D44AAD63E," Building the flow
-
-
-
-1. Add a Data Asset node that points to property_values_train.csv.
-2. Add a Type node, and select taxable_value as the target field (Role = Target). Other fields will be used as predictors.
-
-Figure 1. Setting the measurement level and role
-
-
-3. Attach an Auto Numeric node, and select Correlation as the metric used to rank models (under BASICS in the node properties).
-4. Set the Number of models to use to 3. This means that the three best models will be built when you run the node.
-
-Figure 2. Auto Numeric node BASICS
-
-
-5. Under EXPERT, leave the default settings in place. The node will estimate a single model for each algorithm, for a total of six models. (Alternatively, you can modify these settings to compare multiple variants for each model type.)
-
-Because you set Number of models to use to 3 under BASICS, the node will calculate the accuracy of the six algorithms and build a single model nugget containing the three most accurate.
-
-Figure 3. Auto Numeric node EXPERT options
-
-"
-EC7FCF477E212945EAB7BB85C2279F37D62D4B49_0,EC7FCF477E212945EAB7BB85C2279F37D62D4B49," Comparing the models
-
-
-
-1. Run the flow. A generated model nugget is built and placed on the canvas, and results are added to the Outputs panel. You can view the model nugget, or save or deploy it in a number of ways.
-
-
-
-Right-click the model nugget and select View Model. You'll see details about each of the models created during the run. (In a real situation, in which hundreds of models are estimated on a large dataset, this could take many hours.)
-
-Figure 1. Auto numeric example flow with model nugget
-
-
-
-If you want to explore any of the individual models further, you can click a model name in the ESTIMATOR column to drill down and explore the individual model results.
-
-Figure 2. Auto Numeric results
-
-
-
-By default, models are sorted by accuracy (correlation) because correlation this was the measure you selected in the Auto Numeric node's properties. For purposes of ranking, the absolute value of the accuracy is used, with values closer to 1 indicating a stronger relationship.
-
-You can sort on a different column by clicking the header for that column.
-
-Based on these results, you decide to use all three of these most accurate models. By combining predictions from multiple models, limitations in individual models may be avoided, resulting in a higher overall accuracy.
-
-In the USE column, make sure all three models are selected.
-
-Attach an Analysis node (from the Outputs palette) after the model nugget. Right-click the Analysis node and choose Run to run the flow again.
-
-Figure 3. Auto Numeric sample flow
-
-
-
-"
-EC7FCF477E212945EAB7BB85C2279F37D62D4B49_1,EC7FCF477E212945EAB7BB85C2279F37D62D4B49,"The averaged score generated by the ensembled model is added in a field named $XR-taxable_value, with a correlation of 0.934, which is higher than those of the three individual models. The ensemble scores also show a low mean absolute error and may perform better than any of the individual models when applied to other datasets.
-
-Figure 4. Auto Numeric sample flow analysis results
-
-
-"
-69ED00ABB6B920D1FE4F5B5675AFDA422F04E8D8,69ED00ABB6B920D1FE4F5B5675AFDA422F04E8D8," Summary
-
-With this example Automated Modeling for a Flag Target flow, you used the Auto Numeric node to compare a number of different models, selected the three most accurate models, and added them to the flow within an ensembled Auto Numeric model nugget.
-
-The ensembled model showed performance that was better than two of the individual models and may perform better when applied to other datasets. If your goal is to automate the process as much as possible, this approach allows you to obtain a robust model under most circumstances without having to dig deeply into the specifics of any one model.
-"
-3D999C84C01328A45EBF0ECAD358D858C634DF5B,3D999C84C01328A45EBF0ECAD358D858C634DF5B," Training data
-
-The data file includes a field named taxable_value, which is the target field, or value, that you want to predict. The other fields contain information such as neighborhood, building type, and interior volume, and may be used as predictors.
-
-
-
- Field name Label
-
- property_id Property ID
- neighborhood Area within the city
- building_type Type of building
- year_built Year built
- volume_interior Volume of interior
- volume_other Volume of garage and extra buildings
-"
-D96C3A08A5607BDCB1BC85E0BEDD8743EA0B3DC5,D96C3A08A5607BDCB1BC85E0BEDD8743EA0B3DC5," Automated data preparation
-
-Preparing data for analysis is one of the most important steps in any data-mining project—and traditionally, one of the most time consuming. The Auto Data Prep node handles the task for you, analyzing your data and identifying fixes, screening out fields that are problematic or not likely to be useful, deriving new attributes when appropriate, and improving performance through intelligent screening techniques.
-
-You can use the Auto Data Prep node in fully automated fashion, allowing the node to choose and apply fixes, or you can preview the changes before they're made and accept or reject them as desired. With this node, you can ready your data for data mining quickly and easily, without the need for prior knowledge of the statistical concepts involved. If you run the node with the default settings, models will tend to build and score more quickly.
-
-This example uses the flow named Automated Data Preparation, available in the example project . The data file is telco.csv. This example demonstrates the increased accuracy you can find by using the default Auto Data Prep node settings when building models.
-
-Let's take a look at the flow.
-
-
-
-"
-895CD261C9F06F272286BCCA3555846FB1ED8AA3,895CD261C9F06F272286BCCA3555846FB1ED8AA3," Building the flow
-
-
-
-1. Add a Data Asset node that points to telco.csv.
-
-Figure 1. Auto Data Prep example flow
-
-
-2. Attach a Type node to the Data Asset node. Set the measure for the churn field to Flag, and set the role to Target. Make sure the role for all other fields is set to Input.
-
-Figure 2. Setting the measurement level and role
-
-
-3. Attach a Logistic node to the Type node.
-4. In the Logistic node's properties, under MODEL SETTINGS, select the Binomial procedure. For Model Name, select Custom and enter No ADP - churn.
-
-Figure 3. Choosing model options
-
-
-5. Attach an Auto Data Prep node to the Type node. Under OBJECTIVES, leave the default settings in place to analyze and prepare your data by balancing both speed and accuracy.
-6. Run the flow to analyze and process your data. Other Auto Data Prep node properties allow you to specify that you want to concentrate more on accuracy, more on the speed of processing, or to fine tune many of the data preparation processing steps. Note: If you want to adjust the node properties and run the flow again in the future, since the model already exists, you must first click Clear Analysis, under OBJECTIVES before running the flow again.
-
-Figure 4. Auto Data Prep default objectives
-
-
-"
-B523EBE64275BEE04D480B55CCAEAC3017A36980_0,B523EBE64275BEE04D480B55CCAEAC3017A36980," Comparing the models
-
-
-
-1. Right-click each Logistic node and run it to create the model nuggets, which are added to the flow. Results are also added to the Outputs panel.
-
-Figure 1. Attaching the model nuggets
-
-
-2. Attach Analysis nodes to the model nuggets and run the Analysis nodes (using their default settings).
-
-Figure 2. Attaching the Analysis nodes
-
-The Analysis of the non Auto Data Prep-derived model shows that just running the data through the Logistic Regression node with its default settings gives a model with low accuracy - just 10.6%.
-
-Figure 3. Non ADP-derived model results
-
-The Analysis of the Auto-Data Prep-derived model shows that by running the data through the default Auto Data Prep settings, you have built a much more accurate model that's 78.3% correct.
-
-Figure 4. ADP-derived model results
-
-
-
-
-
-In summary, by just running the Auto Data Prep node to fine tune the processing of your data, you were able to build a more accurate model with little direct data manipulation.
-
-"
-B523EBE64275BEE04D480B55CCAEAC3017A36980_1,B523EBE64275BEE04D480B55CCAEAC3017A36980,"Obviously, if you're interested in proving or disproving a certain theory, or want to build specific models, you may find it beneficial to work directly with the model settings. However, for those with a reduced amount of time, or with a large amount of data to prepare, the Auto Data Prep node may give you an advantage.
-
-Note that the results in this example are based on the training data only. To assess how well models generalize to other data in the real world, you would use a Partition node to hold out a subset of records for purposes of testing and validation.
-"
-1A548D934DFE57DD0F12195461F2DDB348EAE68C,1A548D934DFE57DD0F12195461F2DDB348EAE68C," Automated modeling for a flag target
-
-With the Auto Classifier node, you can automatically create and compare a number of different models for either flag (such as whether or not a given customer is likely to default on a loan or respond to a particular offer) or nominal (set) targets.
-"
-CE7976AFE82E2D17EE1FA308570AFA42E0E91667_0,CE7976AFE82E2D17EE1FA308570AFA42E0E91667," Building the flow
-
-
-
-1. Add a Data Asset node that points to pm_customer_train1.csv.
-2. Add a Type node, and select response as the target field (Role = Target). Set the measure for this field to Flag.
-
-Figure 1. Setting the measurement level and role
-
-
-3. Set the role to None for the following fields: customer_id, campaign, response_date, purchase, purchase_date, product_id, Rowid, and X_random. These fields will be ignored when you are building the model.
-4. Click Read Values in the Type node to make sure that values are instantiated.
-
-As we saw earlier, our source data includes information about four different campaigns, each targeted to a different type of customer account. These campaigns are coded as integers in the data, so to make it easier to remember which account type each integer represents, let's define labels for each one.
-
-Figure 2. Choosing to specify values for a field
-
-
-5. On the row for the campaign field, click the entry in the Value mode column.
-6. Choose Specify from the drop-down.
-
-Figure 3. Defining labels for the field values
-
-
-7. Click the Edit icon in the column for the campaign field. Type the labels as shown for each of the four values.
-8. Click OK. Now the labels will be displayed in output windows instead of the integers.
-9. Attach a Table node to the Type node.
-10. Right-click the Table node and select Run.
-11. In the Outputs panel, double-click the table output to open it.
-12. Click OK to close the output window.
-
-
-
-"
-CE7976AFE82E2D17EE1FA308570AFA42E0E91667_1,CE7976AFE82E2D17EE1FA308570AFA42E0E91667,"Although the data includes information about four different campaigns, you will focus the analysis on one campaign at a time. Since the largest number of records fall under the Premium account campaign (coded campaign=2 in the data), you can use a Select node to include only these records in the flow.
-
-Figure 4. Selecting records for a single campaign
-
-
-"
-B57A4B94BFAFDD0CD6EDBDFA4ABA1F708286E918,B57A4B94BFAFDD0CD6EDBDFA4ABA1F708286E918," Historical data
-
-This example uses the data file pm_customer_train1.csv, which contains historical data that tracks the offers made to specific customers in past campaigns, as indicated by the value of the campaign field. The largest number of records fall under the Premium account campaign.
-
-The values of the campaign field are actually coded as integers in the data (for example 2 = Premium account). Later, you'll define labels for these values that you can use to give more meaningful output.
-
-Figure 1. Data about previous promotions
-
-
-
-The file also includes a response field that indicates whether the offer was accepted (0 = no, and 1 = yes). This will be the target field, or value, that you want to predict. A number of fields containing demographic and financial information about each customer are also included. These can be used to build or ""train"" a model that predicts response rates for individuals or groups based on characteristics such as income, age, or number of transactions per month.
-"
-C4773EF8B0935E8DE084C1A6285EFE11E2A5F80A_0,C4773EF8B0935E8DE084C1A6285EFE11E2A5F80A," Generating and comparing models
-
-
-
-1. Attach an Auto Classifier node, open its BUILD OPTIONS properties, and select Overall accuracy as the metric used to rank models.
-2. Set the Number of models to use to 3. This means that the three best models will be built when you run the node.
-
-Figure 1. Auto Classifier node, build options
-
-
-
-Under the EXPERT options, you can choose from many different modeling algorithms.
-3. Deselect the Discriminant and SVM model types. (These models take longer to train on this data, so deselecting them will speed up the example. If you don't mind waiting, feel free to leave them selected.)
-
-Because you set Number of models to use to 3 under BUILD OPTIONS, the node will calculate the accuracy of the remaining algorithms and generate a single model nugget containing the three most accurate.
-
-Figure 2. Auto Classifier node, expert options
-
-
-4. Under the ENSEMBLE options, select Confidence-weighted voting for the ensemble method. This determines how a single aggregated score is produced for each record.
-
-With simple voting, if two out of three models predict yes, then yes wins by a vote of 2 to 1. In the case of confidence-weighted voting, the votes are weighted based on the confidence value for each prediction. Thus, if one model predicts no with a higher confidence than the two yes predictions combined, then no wins.
-
-Figure 3. Auto Classifier node, ensemble options
-
-
-"
-C4773EF8B0935E8DE084C1A6285EFE11E2A5F80A_1,C4773EF8B0935E8DE084C1A6285EFE11E2A5F80A,"5. Run the flow. After a few minutes, the generated model nugget is built and placed on the canvas, and results are added to the Outputs panel. You can view the model nugget, or save or deploy it in a number of other ways.
-6. Right-click the model nugget and select View Model. You'll see details about each of the models created during the run. (In a real situation, in which hundreds of models may be created on a large dataset, this could take many hours.)
-
-If you want to explore any of the individual models further, you can click their links in the Estimator column to drill down and browse the individual model results.
-
-Figure 4. Auto Classifier results
-
-
-
-By default, models are sorted based on overall accuracy, because this was the measure you selected in the Auto Classifier node properties. The XGBoost Tree model ranks best by this measure, but the C5.0 and C&RT models are nearly as accurate.
-
-Based on these results, you decide to use all three of these most accurate models. By combining predictions from multiple models, limitations in individual models may be avoided, resulting in a higher overall accuracy.
-7. In the USE column, select the three models. Return to the flow.
-8. Attach an Analysis output node after the model nugget. Right-click the Analysis node and choose Run to run the flow.
-
-Figure 5. Auto Classifier example flow
-
-
-
-"
-C4773EF8B0935E8DE084C1A6285EFE11E2A5F80A_2,C4773EF8B0935E8DE084C1A6285EFE11E2A5F80A,"The aggregated score generated by the ensembled model is shown in a field named $XF-response. When measured against the training data, the predicted value matches the actual response (as recorded in the original response field) with an overall accuracy of 92.77%. While not quite as accurate as the best of the three individual models in this case (92.82% for C5.0), the difference is too small to be meaningful. In general terms, an ensembled model will typically be more likely to perform well when applied to datasets other than the training data.
-
-Figure 6. Analysis of the three ensembled models
-
-
-"
-823D9660B5B41B7C85904D0EB88A8D40AC57383F,823D9660B5B41B7C85904D0EB88A8D40AC57383F," Summary
-
-With this example Automated Modeling for a Flag Target flow, you used the Auto Classifier node to compare a number of different models, used the three most accurate models, and added them to the flow within an ensembled Auto Classifier model nugget.
-
-
-
-"
-B2CA734AE719BA79AB4B5F877CF044F47090FAEC,B2CA734AE719BA79AB4B5F877CF044F47090FAEC," Forecasting bandwidth utilization
-
-An analyst for a national broadband provider is required to produce forecasts of user subscriptions to predict utilization of bandwidth. Forecasts are needed for each of the local markets that make up the national subscriber base.
-
-You'll use time series modeling to produce forecasts for the next three months for a number of local markets.
-"
-718CD1A731E0F4E5ABFD77519ED254B5CCC670FB,718CD1A731E0F4E5ABFD77519ED254B5CCC670FB," Forecasting with the Time Series node
-
-This example uses the flow Forecasting Bandwidth Utilization, available in the example project . The data file is broadband_1.csv.
-
-In SPSS Modeler, you can produce multiple time series models in a single operation. The broadband_1.csv data file has monthly usage data for each of 85 local markets. For the purposes of this example, only the first five series will be used; a separate model will be created for each of these five series, plus a total.
-
-The file also includes a date field that indicates the month and year for each record. This field will be used to label records. The date field reads into SPSS Modeler as a string, but to use the field in SPSS Modeler you will convert the storage type to numeric Date format using a Filler node.
-
-Figure 1. Example flow to show Time Series modeling
-
-
-
-The Time Series node requires that each series be in a separate column, with a row for each interval. Watson Studio provides methods for transforming data to match this format if necessary.
-
-Figure 2. Monthly subscription data for broadband local markets
-
-
-"
-C143A9F5185D9303301630D3FC53B604D3DCED2E,C143A9F5185D9303301630D3FC53B604D3DCED2E," Creating the flow
-
-
-
-1. Add a Data Asset node that points to broadband_1.csv.
-2. To simplify the model, use a Filter node to filter out the Market_6 to Market_85 fields and the MONTH_ and YEAR_ fields.
-
-
-
-Figure 1. Example flow to show Time Series modeling
-
-
-"
-EDB1038F1D71A450556D13AE34A416E46D7213FE_0,EDB1038F1D71A450556D13AE34A416E46D7213FE," Examining the data
-
-It's always a good idea to have a feel for the nature of your data before building a model.
-
-Does the data exhibit seasonal variations? Although Watson Studio can automatically find the best seasonal or nonseasonal model for each series, you can often obtain faster results by limiting the search to nonseasonal models when seasonality is not present in your data. Without examining the data for each of the local markets, we can get a rough picture of the presence or absence of seasonality by plotting the total number of subscribers over all five markets.
-
-Figure 1. Plotting the total number of subscribers
-
-
-
-
-
-1. From the Graphs palette, attach a Time Plot node to the Filter node.
-2. Add the Total field to the Series list.
-3. Deselect the Display series in separate panel and Normalize options. Save the changes.
-4. Right-click the Time Plot node and run it, then open the output that was generated.
-
-Figure 2. Time plot of the Total field
-
-
-
-The series exhibits a very smooth upward trend with no hint of seasonal variations. There might be individual series with seasonality, but it appears that seasonality isn't a prominent feature of the data in general.
-
-Of course, you should inspect each of the series before ruling out seasonal models. You can then separate out series exhibiting seasonality and model them separately.
-
-Watson Studio makes it easy to plot multiple series together.
-5. Double-click the Time Plot node to open its properties again.
-6. Remove the Total field from the Series list.
-7. Add the Market_1 through Market_5 fields to the list.
-8. Run the Time Plot node again.
-
-Figure 3. Time plot of multiple fields
-
-"
-EDB1038F1D71A450556D13AE34A416E46D7213FE_1,EDB1038F1D71A450556D13AE34A416E46D7213FE,"
-
-Inspection of each of the markets reveals a steady upward trend in each case. Although some markets are a little more erratic than others, there's no evidence of seasonality.
-"
-0721692D3F363B864A241FC4644D7D57B2DFF881,0721692D3F363B864A241FC4644D7D57B2DFF881," Defining the dates
-
-Now you need to change the storage type of the DATE_ field to date format.
-
-
-
-1. Attach a Filler node to the Filter node, then double-click the Filler node to open its properties
-2. Add the DATE_ field, set the Replace option to Always, and set the Replace with value to to_date(DATE_).
-
-Figure 1. Setting the date storage type
-
-
-"
-03DA4D2D23A65C146BA5AFD8F7175908F868F3EB_0,03DA4D2D23A65C146BA5AFD8F7175908F868F3EB," Examining the model
-
-
-
-1. Right-click the Time Series model nugget and select View Model to see information about the models generated for each of the markets.
-
-Figure 1. Time Series models generated for the markets
-
-
-2. In the left TARGET column, select any of the markets. Then go to Model Information. The Number of Predictors row shows how many fields were used as predictors for each target.
-
-The other rows in the Model Information tables show various goodness-of-fit measures for each model. Stationary R-Squared measures how a model is better than a baseline model. If the final model is ARIMA(p,d,q)(P,D,Q), the baseline model is ARIMA(0,d,0)(0,D,0). If the final model is an Exponential Smoothing model, then d is 2 for Brown and Holt model and 1 for other models, and D is 1 if the seasonal length is greater than 1, otherwise D is 0. A negative stationary R squared means that the model under consideration is worse than the baseline model. Zero stationary R squared means that the model is as good or bad as the baseline model and a positive stationary R squared means the model is better than the baseline model
-
-The Statistic and df lines, and the Significance under Parameter Estimates, relate to the Ljung-Box statistic, a test of the randomness of the residual errors in the model. The more random the errors, the better the model is likely to be. Statistic is the Ljung-Box statistic itself, while df (degrees of freedom) indicates the number of model parameters that are free to vary when estimating a particular target.
-
-The Significance gives the significance value of the Ljung-Box statistic, providing another indication of whether the model is correctly specified. A significance value less than 0.05 indicates that the residual errors are not random, implying that there is structure in the observed series that is not accounted for by the model.
-
-"
-03DA4D2D23A65C146BA5AFD8F7175908F868F3EB_1,03DA4D2D23A65C146BA5AFD8F7175908F868F3EB,"Taking both the Stationary R-Squared and Significance values into account, the models that the Expert Modeler has chosen for Market_3, and Market_4 are quite acceptable. The Significance values for Market_1, Market_2, and Market_5 are all less than 0.05, indicating that some experimentation with better-fitting models for these markets might be necessary.
-
-The display shows a number of additional goodness-of-fit measures. The R-Squared value gives an estimation of the total variation in the time series that can be explained by the model. As the maximum value for this statistic is 1.0, our models are fine in this respect.
-
-RMSE is the root mean square error, a measure of how much the actual values of a series differ from the values predicted by the model, and is expressed in the same units as those used for the series itself. As this is a measurement of an error, we want this value to be as low as possible. At first sight it appears that the models for Market_2 and Market_3, while still acceptable according to the statistics we have seen so far, are less successful than those for the other three markets.
-
-These additional goodness-of-fit measures include the mean absolute percentage errors ( MAPE) and its maximum value ( MAXAPE). Absolute percentage error is a measure of how much a target series varies from its model-predicted level, expressed as a percentage value. By examining the mean and maximum across all models, you can get an indication of the uncertainty in your predictions.
-
-The MAPE value shows that all models display a mean uncertainty of around 1%, which is very low. The MAXAPE value displays the maximum absolute percentage error and is useful for imagining a worst-case scenario for your forecasts. It shows that the largest percentage error for most of the models falls in the range of roughly 1.8% to 3.7%, again a very low set of figures, with only Market_4 being higher at close to 7%.
-
-"
-03DA4D2D23A65C146BA5AFD8F7175908F868F3EB_2,03DA4D2D23A65C146BA5AFD8F7175908F868F3EB,"The MAE (mean absolute error) value shows the mean of the absolute values of the forecast errors. Like the RMSE value, this is expressed in the same units as those used for the series itself. MAXAE shows the largest forecast error in the same units and indicates worst-case scenario for the forecasts.
-
-Although these absolute values are interesting, it's the values of the percentage errors ( MAPE and MAXAPE) that are more useful in this case, as the target series represent subscriber numbers for markets of varying sizes.
-
-Do the MAPE and MAXAPE values represent an acceptable amount of uncertainty with the models? They are certainly very low. This is a situation in which business sense comes into play, because acceptable risk will change from problem to problem. We'll assume that the goodness-of-fit statistics fall within acceptable bounds, so let's go on to look at the residual errors.
-
-Examining the values of the autocorrelation function ( ACF) and partial autocorrelation function ( PACF) for the model residuals provides more quantitative insight into the models than simply viewing goodness-of-fit statistics.
-
-A well-specified time series model will capture all of the nonrandom variation, including seasonality, trend, and cyclic and other factors that are important. If this is the case, any error should not be correlated with itself (autocorrelated) over time. A significant structure in either of the autocorrelation functions would imply that the underlying model is incomplete.
-3. For the fourth market, click Correlogram to display the values of the autocorrelation function ( ACF) and partial autocorrelation function ( PACF) for the residual errors in the model.
-
-Figure 2. ACF and PACF values for the fourth market
-
-
-
-"
-03DA4D2D23A65C146BA5AFD8F7175908F868F3EB_3,03DA4D2D23A65C146BA5AFD8F7175908F868F3EB,"In these plots, the original values of the error variable have been lagged (under BUILD OPTIONS - OUTPUT) up to the default value of 24 time periods and compared with the original value to see if there's any correlation over time. Ideally, the bars representing all lags of ACF and PACF should be within the shaded area. However, in practice, there may be some lags that extend outside of the shaded area. This is because, for example, some larger lags may not have been tried for inclusion in the model in order to save computation time. Some lags are insignificant and are removed from the model. If you want to improve the model further and don't care whether these lags are redundant or not, these plots serve as tips for you as to which lags are potential predictors.
-
-Should this occur, you'd need to check the lower ( PACF) plot to see whether the structure is confirmed there. The PACF plot looks at correlations after controlling for the series values at the intervening time points.
-
-The values for Market_4 are all within the shaded area, so we can continue and check the values for the other markets.
-4. Open the Correlogram for each of the other markets and the totals.
-
-The values for the other markets all show some values outside the shaded area, confirming what we suspected earlier from their Significance values. We'll need to experiment with some different models for those markets at some point to see if we can get a better fit, but for the rest of this example, we'll concentrate on what else we can learn from the Market_4 model.
-5. Return to your flow canvas. Attach a new Time Plot node to the Time Series model nugget. Double-click the node to open its properties.
-6. Deselect the Display series in separate panel option.
-7. For the Series list, add the Market_4 and $TS-Market_4 fields.
-"
-03DA4D2D23A65C146BA5AFD8F7175908F868F3EB_4,03DA4D2D23A65C146BA5AFD8F7175908F868F3EB,"8. Save the properties, then right-click the Time Plot node and select Run to generate a line graph of the actual and forecast data for the first of the local markets.Notice how the forecast ($TS-Market_4) line extends past the end of the actual data. You now have a forecast of expected demand for the next three months in this market.
-
-Figure 3. Time Plot of actual and forecast data for Market_4
-
-
-
-The lines for actual and forecast data over the entire time series are very close together on the graph, indicating that this is a reliable model for this particular time series.
-
-You have a reliable model for this particular market, but what margin of error does the forecast have? You can get an indication of this by examining the confidence interval.
-9. Double-click the last Time Plot node in the flow (the one labeled Market_4 $TS-Market_4).
-10. Add the $TSLCI-Market_4 and $TSUCI-Market_4 fields to the Series list.
-11. Save the properties and run the node again.
-
-
-
-Now you have the same graph as before, but with the upper ($TSUCI) and lower ($TSLCI) limits of the confidence interval added. Notice how the boundaries of the confidence interval diverge over the forecast period, indicating increasing uncertainty as you forecast further into the future. However, as each time period goes by, you'll have another (in this case) month's worth of actual usage data on which to base your forecast. In a real-world scenario, you could read the new data into the flow and reapply your model now that you know it's reliable.
-
-Figure 4. Time Plot with confidence interval added
-
-
-"
-A69DA07F8EE0529080646A4B1EAB45C1074AB683_0,A69DA07F8EE0529080646A4B1EAB45C1074AB683," Creating the model
-
-
-
-1. Double-click the Time Series node to open its properties.
-2. Under FIELDS, add all 5 of the markets to the Candidate Inputs lists. Also add the Total field to the Targets list.
-3. Under BUILD OPTIONS - GENERAL, make sure the Expert Modeler method is selected using all default settings. Doing so enables the Expert Modeler to decide the most appropriate model to use for each time series.
-
-Figure 1. Choosing the Expert Modeler method for Time Series
-
-
-4. Save the settings and then run the flow. A Time Series model nugget is generated. Attach it to the Time Series node.
-5. Attach a Table node to the Time Series model nugget and run the flow again.
-
-Figure 2. Example flow showing Time Series modeling
-
-
-
-
-
-There are now three new rows appended to the end of the original data. These are the rows for the forecast period, in this case January to March 2004.
-
-Several new columns are also present now. The $TS- columns are added by the Time Series node. The columns indicate the following for each row (that is, for each interval in the time series data):
-
-
-
- Column Description
-
- $TS-colname The generated model data for each column of the original data.
- $TSLCI-colname The lower confidence interval value for each column of the generated model data.
- $TSUCI-colname The upper confidence interval value for each column of the generated model data.
- $TS-Total The total of the $TS-colname values for this row.
- $TSLCI-Total The total of the $TSLCI-colname values for this row.
-"
-A69DA07F8EE0529080646A4B1EAB45C1074AB683_1,A69DA07F8EE0529080646A4B1EAB45C1074AB683," $TSUCI-Total The total of the $TSUCI-colname values for this row.
-
-
-
-The most significant columns for the forecast operation are the $TS-Market_n, $TSLCI-Market_n, and $TSUCI-Market_n columns. In particular, these columns in the last three rows contain the user subscription forecast data and confidence intervals for each of the local markets.
-"
-8CCC5CD4A9C103249435FC0A7FB18874B447DE3D,8CCC5CD4A9C103249435FC0A7FB18874B447DE3D," Summary
-
-You've learned how to use the Expert Modeler to produce forecasts for multiple time series. In a real-world scenario, you could now transform nonstandard time series data into a format suitable for input to a Time Series node.
-"
-59CDBABC75E7EC8987A3C464F3277923F444A724,59CDBABC75E7EC8987A3C464F3277923F444A724," Defining the targets
-
-
-
-1. Add a Type node after the Filler node, then double-click the Type node to open its properties.
-2. Set the role to None for the DATE_ field. Set the role to Target for all other fields (the Market_n fields plus the Total field).
-3. Click Read Values to populate the Values column.
-
-Figure 1. Setting the role for fields
-
-
-"
-83579304F7F59126FE983B1ED44BBBB1AC8BFCB2,83579304F7F59126FE983B1ED44BBBB1AC8BFCB2," Setting the time intervals
-
-
-
-1. Add a Time Series node and attach it to the Type node. Double-click the node to edit its properties.
-2. Under OBSERVATIONS AND TIME INTERVAL, select DATE_ as the Time/Date field.
-3. Select Months as the time interval.
-
-Figure 1. Setting the time interval
-
-
-4. Under MODEL OPTIONS, select the Extend records into the future option and set the value to 3.
-
-Figure 2. Setting the forecast period
-
-
-"
-7E9A5F54713CE7CB98EA4BCB223A40C4952F0083,7E9A5F54713CE7CB98EA4BCB223A40C4952F0083," Telecommunications churn
-
-Logistic regression is a statistical technique for classifying records based on values of input fields. It is analogous to linear regression, but takes a categorical target field instead of a numeric one.
-
-For example, suppose a telecommunications provider is concerned about the number of customers it's losing to competitors. If service usage data can be used to predict which customers are liable to transfer to another provider, offers can be customized to retain as many customers as possible.
-
-This example uses the flow named Telecommunications Churn, available in the example project . The data file is telco.csv.
-
-This example focuses on using usage data to predict customer loss (churn). Because the target has two distinct categories, a binomial model is used. In the case of a target with multiple categories, a multinomial model could be created instead. See [Classifying telecommunications customers](https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_classify.htmltut_classify) for more information.
-"
-433775834EA8AE82CBFA6077FC361C3C52A99E42_0,433775834EA8AE82CBFA6077FC361C3C52A99E42," Building the flow
-
-Figure 1. Example flow to classify customers using binomial logistic regression
-
-
-
-
-
-1. Add a Data Asset node that points to telco.csv.
-2. Add a Type node, double-click it to open its properties, and make sure all measurement levels are set correctly. For example, most fields with values of 0 and 1 can be regarded as flags, but certain fields, such as gender, are more accurately viewed as a nominal field with two values.
-
-Figure 2. Measurement levels
-
-
-3. Set the measurement level for the churn field to Flag, and set the role to Target. Leave the role for all other fields set to Input.
-4. Add a Feature Selection modeling node to the Type node. You can use a Feature Selection node to remove predictors or data that don't add any useful information about the predictor/target relationship.
-5. Run the flow. Right-click the resulting model nugget and select View Model. You'll see a list of the most important fields.
-6. Add a Filter node after the Type node. Not all of the data in the telco.csv data file will be useful in predicting churn. You can use the filter to only select data considered to be important for use as a predictor (the fields marked as Important in the model generated in the previous step).
-7. Double-click the Filter node to open its properties, select the option Retain the selected fields (all other fields are filtered), and add the following important fields from the Feature Selection model nugget:
-
-tenure
-age
-address
-income
-ed
-employ
-equip
-callcard
-wireless
-longmon
-tollmon
-equipmon
-cardmon
-wiremon
-longten
-tollten
-cardten
-voice
-pager
-internet
-callwait
-confer
-ebill
-loglong
-logtoll
-lninc
-custcat
-churn
-"
-433775834EA8AE82CBFA6077FC361C3C52A99E42_1,433775834EA8AE82CBFA6077FC361C3C52A99E42,"8. Add a Data Audit output node after the Filter node. Right-click the node and run it, then open the output that was added to the Outputs pane.
-9. Look at the % Complete column, which lets you identify any fields with large amounts of missing data. In this case, the only field you need to amend is logtoll, which is less than 50% complete.
-10. Close the output, and add a Filler node after the Filter node. Double-click the node to open its properties, click Add Columns, and select the logtoll field.
-11. Under Replace, select Blank and null values. Click Save to close the node properties.
-12. Right-click the Filler node you just created and select Create supernode. Double-click the supernode and change its name to Missing Value Imputation.
-13. Add a Logistic node after the Filler node. Double-click the node to open its properties. Under Model Settings, select the Binomial procedure and the Forwards Stepwise method.
-
-Figure 3. Choosing model settings
-
-
-14. Under Expert Options, select Expert.
-
-Figure 4. Choosing expert options
-
-
-15. Click Output to open the display settings. Select At each step, Iteration history, and Parameter estimates, then click OK.
-
-Figure 5. Choosing expert options
-
-
-"
-B648A5DEE55D7DBF258B7B088830F18C040C61D5,B648A5DEE55D7DBF258B7B088830F18C040C61D5," Browsing the model
-
-
-
-* Right-click the Logistic node and run it to generate its model nugget. Right-click the nugget and select View Model.The Parameter Estimates page shows the target (churn) and inputs (predictor fields) used by the model. These are the fields that were actually chosen based on the Forwards Stepwise method, not the complete list submitted for consideration.
-
-Figure 1. Parameter estimates showing input fields
-
-
-
-To assess how well the model actually fits your data, a number of diagnostics are available in the expert node settings when you're building the flow.
-
-Note also that these results are based on the training data only. To assess how well the model generalizes to other data in the real world, you would use a Partition node to hold out a subset of records for purposes of testing and validation.
-"
-2779271745A02F4DE48BD92AB93A7A4BE4A73D38,2779271745A02F4DE48BD92AB93A7A4BE4A73D38," Classifying telecommunications customers
-
-Logistic regression is a statistical technique for classifying records based on values of input fields. It is analogous to linear regression, but takes a categorical target field instead of a numeric one.
-
-For example, suppose a telecommunications provider has segmented its customer base by service usage patterns, categorizing the customers into four groups. If demographic data can be used to predict group membership, you can customize offers for individual prospective customers.
-
-This example uses the flow named Classifying Telecommmunications Customers, available in the example project . The data file is telco.csv.
-
-The example focuses on using demographic data to predict usage patterns. The target field custcat has four possible values that correspond to the four customer groups, as follows:
-
-
-
-Table 1. Possible values for the target field
-
- Value Label
-
- 1 Basic Service
- 2 E-Service
- 3 Plus Service
- 4 Total Service
-
-
-
-Because the target has multiple categories, a multinomial model is used. In the case of a target with two distinct categories, such as yes/no, true/false, or churn/don't churn, a binomial model could be created instead. See [Telecommunications churn](https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_churn.htmltut_churn) for more information.
-"
-400E9E780D8A149530DF21E38B256B71BDA12D83_0,400E9E780D8A149530DF21E38B256B71BDA12D83," Building the flow
-
-Figure 1. Example flow to classify customers using multinomial logistic regression
-
-
-
-
-
-1. Add a Data Asset node that points to telco.csv.
-2. Add a Type node, double-click it to open its properties, and click Read Values. Make sure all measurement levels are set correctly. For example, most fields with values of 0.0 and 1.0 can be regarded as flags.
-
-Figure 2. Measurement levels
-
-
-
-Notice that gender is more correctly considered as a field with a set of two values, instead of a flag, so leave its measurement value as Nominal.
-3. Set the role for the custcat field to Target. Leave the role for all other fields set to Input.
-4. Since this example focuses on demographics, use a Filter node to include only the relevant fields: region, age, marital, address, income, ed, employ, retire, gender, reside, and custcat). Other fields will be excluded for the purpose of this analysis. To filter them out, in the Filter node properties, click Add Columns and select the fields to exclude.
-
-Figure 3. Filtering on demographic fields
-
-
-
-(Alternatively, you could change the role to None for these fields rather than excluding them, or select the fields you want to use in the modeling node.)
-5. In the Logistic node properties, under MODEL SETTINGS, select the Stepwise method. Also select Multinomial, Main Effects, and Include constant in equation.
-
-Figure 4. Example flow to classify customers using multinomial logistic regression
-
-"
-400E9E780D8A149530DF21E38B256B71BDA12D83_1,400E9E780D8A149530DF21E38B256B71BDA12D83,"
-6. Under EXPERT OPTIONS, select Expert mode, expand the Output section, and select Classification table.
-
-Figure 5. Example flow to classify customers using multinomial logistic regression
-
-
-"
-D7FD91BAC6BE16ABD9B158C6B118E5E09E047C6D,D7FD91BAC6BE16ABD9B158C6B118E5E09E047C6D," Browsing the model
-
-
-
-* Run the Logistic node to generate the model. Right-click the model nugget and select View Model.
-
-Figure 1. Browsing the model results
-
-
-
-You can then explore the model information, feature (predictor) importance, and parameter estimates information.
-
-Note that these results are based on the training data only. To assess how well the model generalizes to other data in the real world, you can use a Partition node to hold out a subset of records for purposes of testing and validation.
-"
-9555087B12B80060FB337F8974FEA9261174115E,9555087B12B80060FB337F8974FEA9261174115E," Condition monitoring
-
-This example concerns monitoring status information from a machine and the problem of recognizing and predicting fault states.
-
-The data is created from a fictitious simulation and consists of a number of concatenated series measured over time. Each record is a snapshot report on the machine in terms of the following:
-
-
-
-* Time. An integer.
-* Power. An integer.
-* Temperature. An integer.
-* Pressure. 0 if normal, 1 for a momentary pressure warning.
-* Uptime. Time since last serviced.
-* Status. Normally 0, changes to an error code if an error occurs (101, 202, or 303).
-* Outcome. The error code that appears in this time series, or 0 if no error occurs. (These codes are available only with the benefit of hindsight.)
-
-
-
-This example uses the flow named Condition Monitoring, available in the example project . The data files are cond1n.csv and cond2n.csv.
-
-For each time series, there's a series of records from a period of normal operation followed by a period leading to the fault, as shown in the following table:
-
-
-
- Time Power Temperature Pressure Uptime Status Outcome
-
- 0 1059 259 0 404 0 0
- 1 1059 259 0 404 0 0
- ...
- 51 1059 259 0 404 0 0
- 52 1059 259 0 404 0 0
- 53 1007 259 0 404 0 303
- 54 998 259 0 404 0 303
- ...
- 89 839 259 0 404 0 303
- 90 834 259 0 404 303 303
- 0 965 251 0 209 0 0
- 1 965 251 0 209 0 0
- ...
- 51 965 251 0 209 0 0
- 52 965 251 0 209 0 0
- 53 938 251 0 209 0 101
- 54 936 251 0 209 0 101
- ...
- 208 644 251 0 209 0 101
- 209 640 251 0 209 101 101
-
-
-
-The following process is common to most data mining projects:
-
-
-
-* Examine the data to determine which attributes may be relevant to the prediction or recognition of the states of interest.
-* Retain those attributes (if already present), or derive and add them to the data, if necessary.
-"
-D59300B05666E072EA812EFFA009E2DD4B60A508,D59300B05666E072EA812EFFA009E2DD4B60A508," Examining the data
-
-For the first part of the process, imagine you have a flow that plots a number of graphs. If the time series of temperature or power contains visible patterns, you could differentiate between impending error conditions or possibly predict their occurrence. For both temperature and power, the flow plots the time series associated with the three different error codes on separate graphs, yielding six graphs. Select nodes separate the data associated with the different error codes.
-
-The graphs clearly display patterns distinguishing 202 errors from 101 and 303 errors. The 202 errors show rising temperature and fluctuating power over time; the other errors don't. However, patterns distinguishing 101 from 303 errors are less clear. Both errors show even temperature and a drop in power, but the drop in power seems steeper for 303 errors.
-
-Based on these graphs, it appears that the presence and rate of change for both temperature and power, as well as the presence and degree of fluctuation, are relevant to predicting and distinguishing faults. These attributes should therefore be added to the data before applying the learning systems.
-"
-D11A81E7333F63092FCF2C047744F2F3C18C1903,D11A81E7333F63092FCF2C047744F2F3C18C1903," Learning
-
-Running the flow trains the C5.0 rule and neural network (net). The network may take some time to train, but training can be interrupted early to save a net that produces reasonable results. After the learning is complete, model nuggets are generated: one represents the neural net and one represents the rule.
-
-Figure 1. Generated model nuggets
-
-
-
-These model nuggets enable us to test the system or export the results of the model. In this example, we will test the results of the model.
-"
-43071778B4E33375953AFB1AB743B342D3CC906A,43071778B4E33375953AFB1AB743B342D3CC906A," Data preparation
-
-Based on the results of exploring the data, the following flow derives the relevant data and learns to predict faults.
-
-This example uses the flow named Condition Monitoring, available in the example project installed with the product. The data files are cond1n.csv and cond2n.csv.
-
-
-
-1. On the My Projects screen, click Example Project.
-2. Scroll down to the Modeler flows section, click View all, and select the Condition Monitoring flow.
-
-
-
-Figure 1. Condition Monitoring example flow
-
-The flow uses a number of Derive nodes to prepare the data for modeling.
-
-
-
-* Data Asset import node. Reads data file cond1n.csv.
-* Pressure Warnings (Derive). Counts the number of momentary pressure warnings. Reset when time returns to 0.
-* TempInc (Derive). Calculates momentary rate of temperature change using @DIFF1.
-* PowerInc (Derive). Calculates momentary rate of power change using @DIFF1.
-* PowerFlux (Derive). A flag, true if power varied in opposite directions in the last record and this one; that is, for a power peak or trough.
-* PowerState (Derive). A state that starts as Stable and switches to Fluctuating when two successive power fluxes are detected. Switches back to Stable only when there hasn't been a power flux for five time intervals or when Time is reset.
-* PowerChange (Derive). Average of PowerInc over the last five time intervals.
-* TempChange (Derive). Average of TempInc over the last five time intervals.
-* Discard Initial (Select). Discards the first record of each time series to avoid large (incorrect) jumps in Power and Temperature at boundaries.
-"
-A187344EB767BAC8E4D674651BEDAFA33F70BFA1,A187344EB767BAC8E4D674651BEDAFA33F70BFA1," Testing
-
-Both of the generated model nuggets are connected to the Type node.
-
-
-
-1. Reposition the nuggets as shown, so the Type node connects to the neural net nugget, which connects to the C5.0 nugget.
-2. Attach an Analysis node to the C5.0 nugget.
-3. Edit the Data Asset node to use the file cond2n.csv (instead of cond1n.csv), which contains unseen test data.
-4. Right-click the Analysis node and select Run. Doing so yields figures reflecting the accuracy of the trained network and rule.
-
-Figure 1. Testing the trained network
-
-
-"
-3D9FB046D583A2D0177ECB4DA25EEAEB4FEBCCA9,3D9FB046D583A2D0177ECB4DA25EEAEB4FEBCCA9," Drug treatment - exploratory graphs
-
-In this example, imagine you're a medical researcher compiling data for a study. You've collected data about a set of patients, all of whom suffered from the same illness. During their course of treatment, each patient responded to one of five medications. Part of your job is to use data mining to find out which drug might be appropriate for a future patient with the same illness.
-
-This example uses the flow named Drug Treatment - Exploratory Graphs, available in the example project . The data file is drug1n.csv.
-
-Figure 1. Drug treatment example flow
-
-
-
-The data fields used in this example are:
-
-
-
- Data field Description
-
- Age Age of patient (number)
- Sex M or F
- BP Blood pressure: HIGH, NORMAL, or LOW
- Cholesterol Blood cholesterol: NORMAL or HIGH
- Na Blood sodium concentration
-"
-13D83AF5CCD616F312472FBAB4AC7D7A56D0F41D,13D83AF5CCD616F312472FBAB4AC7D7A56D0F41D," Using an Analysis node
-
-You can assess the accuracy of the model using an Analysis node. From the Palette, under Outputs, place an Analysis node on the canvas and attach it to the C5.0 model nugget. Then right-click the Analysis node and select Run.
-
-Figure 1. Analysis node
-
-The Analysis node output shows that with this artificial dataset, the model correctly predicted the choice of drug for every record in the dataset. With a real dataset you are unlikely to see 100% accuracy, but you can use the Analysis node to help determine whether the model is acceptably accurate for your particular application.
-
-Figure 2. Analysis node output
-
-
-"
-18A7A354C4B46E26DF8304755C8BE954BB922B04,18A7A354C4B46E26DF8304755C8BE954BB922B04," Browsing the model
-
-When the C5.0 node runs, its model nugget is added to the flow. To browse the model, right-click the model nugget and choose View Model.
-
-The Tree Diagram displays the set of rules generated by the C5.0 node in a tree format. Now you can see the missing pieces of the puzzle. For people with an Na-to-K ratio less than 14.829 and high blood pressure, age determines the choice of drug. For people with low blood pressure, cholesterol level seems to be the best predictor.
-
-Figure 1. Tree diagram
-
-
-
-You can hover over the nodes in the tree to see more details such as the number of cases for each blood pressure category and the confidence percentage of cases.
-"
-D1B1E93AD61D2B095BF8A00E9739FCF7D1DC974C,D1B1E93AD61D2B095BF8A00E9739FCF7D1DC974C," Building a model
-
-By exploring and manipulating the data, you have been able to form some hypotheses. The ratio of sodium to potassium in the blood seems to affect the choice of drug, as does blood pressure. But you cannot fully explain all of the relationships yet. This is where modeling will likely provide some answers. In this case, you will try to fit the data using a rule-building model called C5.0.
-
-Since you're using a derived field, Na_to_K, you can filter out the original fields, Na and K, so they're not used twice in the modeling algorithm. You can do this by using a Filter node.
-
-
-
-1. Place a Filter node on the canvas and connect it to the Derive node.
-
-Figure 1. Filter node
-
-
-2. Double-click the Filter node to edit its properties. Name it Discard Fields.
-3. For Mode, make sure Filter the selected fields is selected. Then select the K and Na fields. Click Save.
-4. Place a Type node on the canvas and connect it to the Filter node. With the Type node, you can indicate the types of fields you're using and how they're used to predict the outcomes.
-
-Figure 2. Type node
-
-
-5. Double-click the Type node to edit its properties. Name it Define Types.
-"
-D733288343A1790788E8069EB55908F9D12566A9,D733288343A1790788E8069EB55908F9D12566A9," Reading in text data
-
-
-
-1. You can read in delimited text data using a Data Asset import node. From the Palette, under Import, add a Data Asset node to your flow.
-"
-A73CA4F67523DBB58FD3521AE9BFF83AEE634607,A73CA4F67523DBB58FD3521AE9BFF83AEE634607," Creating a distribution chart
-
-During data mining, it is often useful to explore the data by creating visual summaries. Watson Studio offers many different types of charts to choose from, depending on the kind of data you want to summarize. For example, to find out what proportion of the patients responded to each drug, use a Distribution node.
-
-Figure 1. Distribution node
-
-
-
-
-
-1. Under Graphs on the Palette, add a Distribution node to the flow and connect it to the drug1n.csv Data Asset node. Then double-click the node to edit its options.
-2. Select Drug as the target field whose distribution you want to show. Then click Save, right-click the Distribution node, and select Run. A distribution chart is added to the Outputs panel.
-
-
-
-The chart helps you see the shape of the data. It shows that patients responded to drug Y most often and to drugs B and C least often.
-
-Alternatively, you can attach and run a Data Audit node for a quick glance at distributions and histograms for all fields at once. The Data Audit node is available under Outputs on the Palette.
-"
-0A5C26B7B5B7C1E3AFD8901D8B91F2E3C527DA3E_0,0A5C26B7B5B7C1E3AFD8901D8B91F2E3C527DA3E," Deriving a new field
-
-Figure 1. Scatterplot of drug distribution
-
-
-
-Since the ratio of sodium to potassium seems to predict when to use drug Y, you can derive a field that contains the value of this ratio for each record. This field might be useful later when you build a model to predict when to use each of the five drugs.
-
-
-
-1. To simplify your flow layout, start by deleting all the nodes except the drug1n.csv Data Asset node.
-2. Place a Derive node on the canvas and connect it to the drug1n.csv Data Asset node.
-
-Figure 2. Derive node
-
-
-3. Double-click the Derive node to edit its properties.
-4. Name the new field Na_to_K. Since you obtain the new field by dividing the sodium value by the potassium value, enter Na/K for the expression. You can also create an expression by clicking the calculator icon. This opens the Expression Builder, a way to interactively create expressions using built-in lists of functions, operands, and fields and their values.
-5. You can check the distribution of your new field by attaching a Histogram node to the Derive node. In the Histogram node properties, specify Na_to_K as the field to be plotted and Drug as the color overlay field.
-
-Figure 3. Histogram node
-
-
-6. Right-click the Histogram node and select Run. A histogram chart is added to the Outputs pane. Based on the chart, you can conclude that when the Na_to_K value is around 15 or more, drug Y is the drug of choice.
-
-"
-0A5C26B7B5B7C1E3AFD8901D8B91F2E3C527DA3E_1,0A5C26B7B5B7C1E3AFD8901D8B91F2E3C527DA3E,"Figure 4. Histogram chart output
-
-
-"
-BB659D7B00DB3096C4082BB93C7FDB933738B013,BB659D7B00DB3096C4082BB93C7FDB933738B013," Creating a scatterplot
-
-Now let's take a look at what factors might influence Drug, the target variable. As a researcher, you know that the concentrations of sodium and potassium in the blood are important factors. Since these are both numeric values, you can create a scatterplot of sodium versus potassium, using the drug categories as a color overlay.
-
-Figure 1. Plot node
-
-
-
-
-
-1. Place a Plot node on the canvas and connect it to the drug1n.csv Data Asset node. Then double-click the Plot node to edit its properties.
-2. Select Na as the X field, K as the Y field, and Drug as the Color (overlay) field. Click Save, then right-click the Plot node and select Run. A plot chart is added to the Outputs pane.
-
-The plot clearly shows a threshold above which the correct drug is always drug Y and below which the correct drug is never drug Y. This threshold is a ratio -- the ratio of sodium (Na) to potassium (K).
-
-Figure 2. Scatterplot of drug distribution
-
-
-"
-F7D95A9FCCA49861B0D4B7DCE677D4E6EFF1F7C1,F7D95A9FCCA49861B0D4B7DCE677D4E6EFF1F7C1," Creating advanced visualizations
-
-The previous three sections use different types of graph nodes. Another way to explore data is with the advanced visualizations feature.
-
-You can use the Charts node to launch the chart builder and create advanced charts to explore your data from different perspectives and identify patterns, connections, and relationships within your data.
-
-Figure 1. Advanced visualizations
-
-
-"
-95C10FDC6D0C3B142DA650044E1A0581D04EF8E4,95C10FDC6D0C3B142DA650044E1A0581D04EF8E4," Creating a web chart
-
-Since many of the data fields are categorical, you can also try plotting a web chart, which maps associations between different categories.
-
-Figure 1. Web node
-
-
-
-
-
-1. Place a Web node on the canvas and connect it to the drug1n.csv Data Asset node. Then double-click the Web node to edit its properties.
-2. Select the fields BP (for blood pressure) and Drug. Click Save, then right-click the Web node and select Run. A web chart is added to the Outputs pane.
-
-Figure 2. Web graph of drugs vs. blood pressure
-
-
-
-
-
-From the plot, it appears that drug Y is associated with all three levels of blood pressure. This is no surprise; you have already determined the situation in which drug Y is best.
-
-But if you ignore drug Y and focus on the other drugs, you can see that drugs A and B are also associated with high blood pressure. And drugs C and X are associated with low blood pressure. And normal blood pressure is associated with drug X. At this point, though, you still don't know how to choose between drugs A and B or between drugs C and X, for a given patient. This is where modeling can help.
-"
-E8B776685A4C1FFCDC8F90C57C3AD7243A43B2B3,E8B776685A4C1FFCDC8F90C57C3AD7243A43B2B3," Forecasting catalog sales
-
-A catalog company is interested in forecasting monthly sales of its men's clothing line, based on 10 years of their sales data.
-
-This example uses the flow Forecasting Catalog Sales, available in the example project . The data file is catalog_seasfac.csv.
-
-We've seen in an earlier tutorial how you can let the Expert Modeler decide which is the most appropriate model for your time series. Now it's time to take a closer look at the two methods that are available when choosing a model yourself—exponential smoothing and ARIMA.
-
-To help you decide on an appropriate model, it's a good idea to plot the time series first. Visual inspection of a time series can often be a powerful guide in helping you choose. In particular, you need to ask yourself:
-
-
-
-"
-B5873013457AADDCC20DB880B3FC9D9BFB7BD348_0,B5873013457AADDCC20DB880B3FC9D9BFB7BD348," ARIMA
-
-With the ARIMA procedure, you can create an autoregressive integrated moving-average (ARIMA) model that is suitable for finely tuned modeling of time series.
-
-ARIMA models provide more sophisticated methods for modeling trend and seasonal components than do exponential smoothing models, and they have the added benefit of being able to include predictor variables in the model.
-
-Continuing the example of the catalog company that wants to develop a forecasting model, we have seen how the company has collected data on monthly sales of men's clothing along with several series that might be used to explain some of the variation in sales. Possible predictors include the number of catalogs mailed and the number of pages in the catalog, the number of phone lines open for ordering, the amount spent on print advertising, and the number of customer service representatives.
-
-Are any of these predictors useful for forecasting? Is a model with predictors really better than one without? Using the ARIMA procedure, we can create a forecasting model with predictors, and see if there's a significant difference in predictive ability over the exponential smoothing model with no predictors.
-
-With the ARIMA method, you can fine-tune the model by specifying orders of autoregression, differencing, and moving average, as well as seasonal counterparts to these components. Determining the best values for these components manually can be a time-consuming process involving a good deal of trial and error so, for this example, we'll let the Expert Modeler choose an ARIMA model for us.
-
-We'll try to build a better model by treating some of the other variables in the dataset as predictor variables. The ones that seem most useful to include as predictors are the number of catalogs mailed (mail), the number of pages in the catalog (page), the number of phone lines open for ordering (phone), the amount spent on print advertising (print), and the number of customer service representatives (service).
-
-
-
-1. Double-click the Type node to open its properties.
-2. Set the role for mail, page, phone, print, and service to Input.
-3. Ensure that the role for men is set to Target and that all the remaining fields are set to None.
-4. Click Save.
-"
-B5873013457AADDCC20DB880B3FC9D9BFB7BD348_1,B5873013457AADDCC20DB880B3FC9D9BFB7BD348,"5. Double-click the Time Series node.
-6. Under BUILD OPTIONS - GENERAL, select Expert Modeler for the method.
-7. Select the options ARIMA models only and Expert Modeler considers seasonal models.
-
-Figure 1. Choosing only ARIMA models
-
-
-8. Click Save and run the flow.
-9. Right-click the model nugget and select View Model. Click men and then click Model information. Notice how the Expert Modeler has chosen only two of the five specified predictors as being significant to the model.
-
-Figure 2. Expert Modeler chooses two predictors
-
-
-10. Open the latest chart output.
-
-Figure 3. ARIMA model with predictors specified
-
-
-
-This model improves on the previous one by capturing the large downward spike as well, making it the best fit so far.
-
-We could try refining the model even further, but any improvements from this point on are likely to be minimal. We've established that the ARIMA model with predictors is preferable, so let's use the model we have just built. For the purposes of this example, we'll forecast sales for the coming year.
-11. Double-click the Time Series node.
-12. Under MODEL OPTIONS, select the option Extend records into the future and set its value to 12.
-13. Select the Compute future values of inputs option.
-14. Click Save and run the flow.The forecast looks good. As expected, there's a return to normal sales levels following the December peak, and a steady upward trend in the second half of the year, with sales in general better than those for the previous year.
-
-"
-B5873013457AADDCC20DB880B3FC9D9BFB7BD348_2,B5873013457AADDCC20DB880B3FC9D9BFB7BD348,"Figure 4. Sales forecast extended by 12 months
-
-
-"
-05F38627C9EC286CA7C379A31AA27392A65411AB,05F38627C9EC286CA7C379A31AA27392A65411AB," Examining the data
-
-The series shows a general upward trend; that is, the series values tend to increase over time. The upward trend is seemingly constant, which indicates a linear trend.
-
-Figure 1. Actual sales of men's clothing
-
-
-
-The series also has a distinct seasonal pattern with annual highs in December, as indicated by the vertical lines on the graph. The seasonal variations appear to grow with the upward series trend, which suggests multiplicative rather than additive seasonality.
-
-Now that you've identified the characteristics of the series, you're ready to try modeling it. The exponential smoothing method is useful for forecasting series that exhibit trend, seasonality, or both. As we've seen, this data exhibits both characteristics.
-"
-2ED4D7860687B2EF6F85FF81B6AF4CFD2C6EA839,2ED4D7860687B2EF6F85FF81B6AF4CFD2C6EA839," Creating the flow
-
-
-
-1. Create a new flow and add a Data Asset node that points to catalog_seasfac.csv.
-2. Connect a Type node to the Data Asset node and double-click it to open its properties.
-3. Click Read Values. For the men field, set the role to Target.
-
-Figure 1. Specifying the target field
-
-
-4. Set the role for all other fields to None and click Save.
-5. Attach a Time Plot graph node to the Type node and double-click it.
-
-Figure 2. Plotting the time series
-
-
-6. For the Plot, add the field men to the Series list.
-7. Select Use custom x axis field label and select date.
-"
-7394B97DA7B0846274940F439675051521A7DD7C_0,7394B97DA7B0846274940F439675051521A7DD7C," Exponential smoothing
-
-Building a best-fit exponential smoothing model involves determining the model type (whether the model needs to include trend, seasonality, or both) and then obtaining the best-fit parameters for the chosen model.
-
-The plot of men's clothing sales over time suggested a model with both a linear trend component and a multiplicative seasonality component. This implies a Winters' model. First, however, we will explore a simple model (no trend and no seasonality) and then a Holt's model (incorporates linear trend but no seasonality). This will give you practice in identifying when a model is not a good fit to the data, an essential skill in successful model building.
-
-We'll start with a simple exponential smoothing model.
-
-
-
-1. Add a Time Series node and attach it to the Type node. Double-click the node to edit its properties.
-2. Under OBSERVATIONS AND TIME INTERVAL, select date as the time/date field.
-3. Select Months as the time interval.
-
-Figure 1. Setting the time interval
-
-
-4. Under BUILD OPTIONS - GENERAL, select Exponential Smoothing for the Method.
-5. Set Model Type to Simple. Click Save.
-
-Figure 2. Setting the method
-
-
-6. Run the flow to create the model nugget.
-7. Attach a Time Plot node to the model nugget.
-8. Under Plot, add the fields men and $TS-men to the Series list.
-9. Select the option Use custom x axis field label and select the date field.
-10. Deselect the Display series in separate panel and Normalize options. Click Save.
-
-Figure 3. Setting the plot options
-
-
-"
-7394B97DA7B0846274940F439675051521A7DD7C_1,7394B97DA7B0846274940F439675051521A7DD7C,"11. Run the flow and then open the output.The men plot represents the actual data, while $TS-men denotes the time series model.
-
-Figure 4. Simple exponential smoothing model
-
-
-
-Although the simple model does, in fact, exhibit gradual (and rather ponderous) upward trend, it takes no account of seasonality. You can safely reject this model.
-
-Now let's try a Holt's linear model. This should at least model the trend better than the simple model, although it too is unlikely to capture the seasonality.
-12. Double-click the Time Series node. Under BUILD OPTIONS - GENERAL, with Exponential Smoothing still selected as the method, select HoltsLinearTrend as the model type.
-13. Click Save and run the flow again to regenerate the model nugget. Open the output.
-
-Figure 5. Holt's linear trend model
-
-
-
-Holt's model displays a smoother upward trend than the simple model, but it still takes no account of the seasonality, so you can disregard this one too.
-
-You may recall that the initial plot of men's clothing sales over time suggested a model incorporating a linear trend and multiplicative seasonality. A more suitable candidate, therefore, might be Winters' model.
-14. Double-click the Time Series node again to edit its properties.
-15. Under BUILD OPTIONS - GENERAL, with Exponential Smoothing still selected as the method, select WintersMultiplicative as the model type.
-16. Run the flow.
-
-Figure 6. Winters' multiplicative model
-
-
-
-"
-7394B97DA7B0846274940F439675051521A7DD7C_2,7394B97DA7B0846274940F439675051521A7DD7C,"This looks better. The model reflects both the trend and the seasonality of the data. The dataset covers a period of 10 years and includes 10 seasonal peaks occurring in December of each year. The 10 peaks present in the predicted results match up well with the 10 annual peaks in the real data.
-
-However, the results also underscore the limitations of the Exponential Smoothing procedure. Looking at both the upward and downward spikes, there is significant structure that's not accounted for.
-
-If you're primarily interested in modeling a long-term trend with seasonal variation, then exponential smoothing may be a good choice. To model a more complex structure such as this one, we need to consider using the ARIMA procedure.
-"
-5AE2F0D8BD974C7393BC5FFA773B90FD0A2229B0,5AE2F0D8BD974C7393BC5FFA773B90FD0A2229B0," Summary
-
-You've successfully modeled a complex time series, incorporating not only an upward trend but also seasonal and other variations. You've also seen how, through trial and error, you can get closer and closer to an accurate model, which you can then use to forecast future sales.
-
-In practice, you would need to reapply the model as your actual sales data are updated—for example, every month or every quarter—and produce updated forecasts.
-"
-02244F39BE9A15FA55C94C9F2775606247969A61_0,02244F39BE9A15FA55C94C9F2775606247969A61," Introduction to modeling
-
-A model is a set of rules, formulas, or equations that can be used to predict an outcome based on a set of input fields or variables. For example, a financial institution might use a model to predict whether loan applicants are likely to be good or bad risks, based on information that is already known about past applicants.
-
-Video disclaimer: Some minor steps and graphical elements in these videos might differ from your platform.
-
-[https://video.ibm.com/embed/recorded/131116287](https://video.ibm.com/embed/recorded/131116287)
-
-The ability to predict an outcome is the central goal of predictive analytics, and understanding the modeling process is the key to using flows in Watson Studio.
-
-Figure 1. A decision tree model
-
-
-
-This example uses a decision tree model, which classifies records (and predicts a response) using a series of decision rules. For example:
-
-IF income = Medium
-AND cards <5
-THEN -> 'Good'
-
-While this example uses a CHAID (Chi-squared Automatic Interaction Detection) model, it is intended as a general introduction, and most of the concepts apply broadly to other modeling types in Watson Studio.
-
-To understand any model, you first need to understand the data that goes into it. The data in this example contains information about the customers of a bank. The following fields are used:
-
-
-
- Field name Description
-
- Credit_rating Credit rating: 0=Bad, 1=Good, 9=missing values
- Age Age in years
- Income Income level: 1=Low, 2=Medium, 3=High
- Credit_cards Number of credit cards held: 1=Less than five, 2=Five or more
- Education Level of education: 1=High school, 2=College
- Car_loans Number of car loans taken out: 1=None or one, 2=More than two
-
-
-
-"
-02244F39BE9A15FA55C94C9F2775606247969A61_1,02244F39BE9A15FA55C94C9F2775606247969A61,"The bank maintains a database of historical information on customers who have taken out loans with the bank, including whether or not they repaid the loans (Credit rating = Good) or defaulted (Credit rating = Bad). Using this existing data, the bank wants to build a model that will enable them to predict how likely future loan applicants are to default on the loan.
-
-Using a decision tree model, you can analyze the characteristics of the two groups of customers and predict the likelihood of loan defaults.
-
-This example uses the flow named Introduction to Modeling, available in the example project . The data file is tree_credit.csv.
-
-Let's take a look at the flow.
-
-
-
-"
-A3022FF9DB2732F0AB3091884B428763D3879FD2_0,A3022FF9DB2732F0AB3091884B428763D3879FD2," Building the flow
-
-Figure 1. Modeling flow
-
-
-
-To build a flow that will create a model, we need at least three elements:
-
-
-
-* A Data Asset node that reads in data from an external source, in this case a .csv data file
-* An Import or Type node that specifies field properties, such as measurement level (the type of data that the field contains), and the role of each field as a target or input in modeling
-* A modeling node that generates a model nugget when the flow runs
-
-
-
-In this example, we're using a CHAID modeling node. CHAID, or Chi-squared Automatic Interaction Detection, is a classification method that builds decision trees by using a particular type of statistics known as chi-square statistics to work out the best places to make the splits in the decision tree.
-
-If measurement levels are specified in the source node, the separate Type node can be eliminated. Functionally, the result is the same.
-
-This flow also has Table and Analysis nodes that will be used to view the scoring results after the model nugget has been created and added to the flow.
-
-The Data Asset import node reads data in from the sample tree_credit.csv data file.
-
-The Type node specifies the measurement level for each field. The measurement level is a category that indicates the type of data in the field. Our source data file uses three different measurement levels:
-
-A Continuous field (such as the Age field) contains continuous numeric values, while a Nominal field (such as the Credit rating field) has two or more distinct values, for example Bad, Good, or No credit history. An Ordinal field (such as the Income level field) describes data with multiple distinct values that have an inherent order—in this case Low, Medium and High.
-
-Figure 2. Setting the target and input fields with the Type node
-
-"
-A3022FF9DB2732F0AB3091884B428763D3879FD2_1,A3022FF9DB2732F0AB3091884B428763D3879FD2,"
-
-For each field, the Type node also specifies a role to indicate the part that each field plays in modeling. The role is set to Target for the field Credit rating, which is the field that indicates whether or not a given customer defaulted on the loan. This is the target, or the field for which we want to predict the value.
-
-Role is set to Input for the other fields. Input fields are sometimes known as predictors, or fields whose values are used by the modeling algorithm to predict the value of the target field.
-
-The CHAID modeling node generates the model. In the node's properties, under FIELDS, the option Use custom field roles is available. We could select this option and change the field roles, but for this example we'll use the default targets and inputs as specified in the Type node.
-
-
-
-1. Double-click the CHAID node (named Creditrating). The node properties are displayed.
-
-Figure 3. CHAID modeling node properties
-
-
-
-Here there are several options where we could specify the kind of model we want to build.
-
-We want a brand-new model, so under OBJECTIVES we'll use the default option Build new model.
-
-We also just want a single, standard decision tree model without any enhancements, so we'll also use the default objective option Create a standard model.
-
-Figure 4. CHAID modeling node objectives
-
-
-
-For this example, we want to keep the tree fairly simple, so we'll limit the tree growth by raising the minimum number of cases for parent and child nodes.
-2. Under STOPPING RULES, select Use absolute value.
-3. Set Minimum records in parent branch to 400.
-"
-A3022FF9DB2732F0AB3091884B428763D3879FD2_2,A3022FF9DB2732F0AB3091884B428763D3879FD2,"4. Set Minimum records in child branch to 200.
-
-
-
-Figure 5. Setting the stopping criteria for decision tree building
-
-
-
-We can use all the other default options for this example, so click Save and then click the Run button on the toolbar to create the model. (Alternatively, right-click the CHAID node and choose Run from the context menu.)
-"
-9DEAC0E5B403BAEDEABE9C76A295651289E6416C_0,9DEAC0E5B403BAEDEABE9C76A295651289E6416C," Evaluating the model
-
-We've been browsing the model to understand how scoring works. But to evaluate how accurately it works, we need to score some records and compare the responses predicted by the model to the actual results. We're going to score the same records that were used to estimate the model, allowing us to compare the observed and predicted responses.
-
-Figure 1. Attaching the model nugget to output nodes for model evaluation
-
-
-
-
-
-1. To see the scores or predictions, attach the Table node to the model nugget and then right-click the Table node and select Run. A table will be generated and added to the Outputs panel. Double-click it to open it.
-
-The table displays the predicted scores in a field named $R-Credit rating, which was created by the model. We can compare these values to the original Credit rating field that contains the actual responses.
-
-By convention, the names of the fields generated during scoring are based on the target field, but with a standard prefix. Prefixes $G and $GE are generated by the Generalized Linear Model, $R is the prefix used for the prediction generated by the CHAID model in this case, $RC is for confidence values, $X is typically generated by using an ensemble, and $XR, $XS, and $XF are used as prefixes in cases where the target field is a Continuous, Categorical, Set, or Flag field, respectively. Different model types use different sets of prefixes. A confidence value is the model's own estimation, on a scale from 0.0 to 1.0, of how accurate each predicted value is.
-
-Figure 2. Table showing generated scores and confidence values
-
-
-
-"
-9DEAC0E5B403BAEDEABE9C76A295651289E6416C_1,9DEAC0E5B403BAEDEABE9C76A295651289E6416C,"As expected, the predicted value matches the actual responses for many records but not all. The reason for this is that each CHAID terminal node has a mix of responses. The prediction matches the most common one, but will be wrong for all the others in that node. (Recall the 18% minority of low-income customers who did not default.)
-
-To avoid this, we could continue splitting the tree into smaller and smaller branches, until every node was 100% pure—all Good or Bad with no mixed responses. But such a model would be extremely complicated and would probably not generalize well to other datasets.
-
-To find out exactly how many predictions are correct, we could read through the table and tally the number of records where the value of the predicted field $R-Credit rating matches the value of Credit rating. Fortunately, there's a much easier way; we can use an Analysis node, which does this automatically.
-2. Connect the model nugget to the Analysis node.
-3. Right-click the Analysis node and select Run. An Analysis entry will be added to the Outputs panel. Double-click it to open it.
-
-
-
-Figure 3. Attaching an Analysis node
-
-
-
-The analysis shows that for 1960 out of 2464 records—over 79%—the value predicted by the model matched the actual response.
-
-Figure 4. Analysis results comparing observed and predicted responses
-
-
-
-"
-9DEAC0E5B403BAEDEABE9C76A295651289E6416C_2,9DEAC0E5B403BAEDEABE9C76A295651289E6416C,"This result is limited by the fact that the records being scored are the same ones used to estimate the model. In a real situation, you could use a Partition node to split the data into separate samples for training and evaluation. By using one sample partition to generate the model and another sample to test it, you can get a much better indication of how well it will generalize to other datasets.
-
-The Analysis node allows us to test the model against records for which we already know the actual result. The next stage illustrates how we can use the model to score records for which we don't know the outcome. For example, this might include people who are not currently customers of the bank, but who are prospective targets for a promotional mailing.
-"
-A62A258BB486FBE7E7FC91C611DC2BC400E32308_0,A62A258BB486FBE7E7FC91C611DC2BC400E32308," Browsing the model
-
-After running a flow, an orange model nugget is added to the canvas with a link to the modeling node from which it was created. To view the model details, right-click the model nugget and choose View Model.
-
-Figure 1. Model nugget
-
-
-
-In the case of the CHAID nugget, the CHAID Tree Model screen includes pages for Model Information, Feature Importance, Top Decision Rules, Tree Diagram, Build Settings, and Training Summary. For example, you can see details in the form of a rule set—essentially a series of rules that can be used to assign individual records to child nodes based on the values of different input fields.
-
-Figure 2. CHAID model nugget, rule set
-
-
-
-For each decision tree terminal node – meaning those tree nodes that are not split further—a prediction of Good or Bad is returned. In each case, the prediction is determined by the mode, or most common response, for records that fall within that node.
-
-The Feature Importance chart shows the relative importance of each predictor in estimating the model. From this, we can see that Income level is easily the most significant in this case, with Number of credit cards being the next most significant factor.
-
-Figure 3. Feature Importance chart
-
-
-
-The Tree Diagram page displays the same model in the form of a tree, with a node at each decision point. Hover over branches and nodes to explore details.
-
-Figure 4. Tree diagram in the model nugget
-
-
-
-"
-A62A258BB486FBE7E7FC91C611DC2BC400E32308_1,A62A258BB486FBE7E7FC91C611DC2BC400E32308,"Looking at the start of the tree, the first node (node 0) gives us a summary for all the records in the data set. Just over 40% of the cases in the data set are classified as a bad risk. This is quite a high proportion, so let's see if the tree can give us any clues as to what factors might be responsible.
-
-We can see that the first split is by Income level. Records where the income level is in the Low category are assigned to node 2, and it's no surprise to see that this category contains the highest percentage of loan defaulters. Clearly, lending to customers in this category carries a high risk. However, almost 18% of the customers in this category actually didn’t default, so the prediction won't always be correct. No model can feasibly predict every response, but a good model should allow us to predict the most likely response for each record based on the available data.
-
-In the same way, if we look at the high income customers (node 1), we see that the vast majority (over 88%) are a good risk. But more than 1 in 10 of these customers has also defaulted. Can we refine our lending criteria to minimize the risk here?
-
-Notice how the model has divided these customers into two sub-categories (nodes 4 and 5), based on the number of credit cards held. For high-income customers, if we lend only to those with fewer than five credit cards, we can increase our success rate from 88% to almost 97%—an even more satisfactory outcome.
-
-Figure 5. High-income customers with fewer than five credit cards
-
-
-
-But what about those customers in the Medium income category (node 3)? They’re much more evenly divided between Good and Bad ratings. Again, the sub-categories (nodes 6 and 7 in this case) can help us. This time, lending only to those medium-income customers with fewer than five credit cards increases the percentage of Good ratings from 58% to 86%, a significant improvement.
-
-Figure 6. Tree view of medium-income customers
-
-"
-A62A258BB486FBE7E7FC91C611DC2BC400E32308_2,A62A258BB486FBE7E7FC91C611DC2BC400E32308,"
-
-So, we’ve learned that every record that is input to this model will be assigned to a specific node, and assigned a prediction of Good or Bad based on the most common response for that node. This process of assigning predictions to individual records is known as scoring. By scoring the same records used to estimate the model, we can evaluate how accurately it performs on the training data—the data for which we know the outcome. Let's examine how to do this.
-"
-3CF77633A489E42B01086588D6613D65BFD51F7F,3CF77633A489E42B01086588D6613D65BFD51F7F," Scoring records
-
-Earlier, we scored the same records used to estimate the model so we could evaluate how accurate the model was. Now we'll score a different set of records from the ones used to create the model. This is the goal of modeling with a target field: Study records for which you know the outcome, to identify patterns that will allow you to predict outcomes you don't yet know.
-
-Figure 1. Attaching new data for scoring
-
-
-
-You could update the data asset Import node to point to a different data file, or you could add a new Import node that reads in the data you want to score. Either way, the new dataset must contain the same input fields used by the model (Age, Income level, Education and so on), but not the target field Credit rating.
-
-Alternatively, you could add the model nugget to any flow that includes the expected input fields. Whether read from a file or a database, the source type doesn't matter as long as the field names and types match those used by the model.
-"
-F140F179614D126E483732933A5CA8DCF0A32876,F140F179614D126E483732933A5CA8DCF0A32876," Summary
-
-This example Introduction to Modeling flow demonstrates the basic steps for creating, evaluating, and scoring a model.
-
-
-
-* The modeling node estimates the model by studying records for which the outcome is known, and creates a model nugget. This is sometimes referred to as training the model.
-* The model nugget can be added to any flow with the expected fields to score records. By scoring the records for which you already know the outcome (such as existing customers), you can evaluate how well it performs.
-"
-2828FD5943ABBA08AA260F1080B850C90FC4EFBE,2828FD5943ABBA08AA260F1080B850C90FC4EFBE," Reducing input data string length
-
-For binomial logistic regression, and auto classifier models that include a binomial logistic regression model, string fields are limited to a maximum of eight characters. Where strings are more than eight characters, you can recode them using a Reclassify node.
-
-This example uses the flow named Reducing Input Data String Length, available in the example project . The data file is drug_long_name.csv.
-
-This example focuses on a small part of a flow to show the type of errors that may be generated with overlong strings, and explains how to use the Reclassify node to change the string details to an acceptable length. Although the example uses a binomial Logistic Regression node, it is equally applicable when using the Auto Classifier node to generate a binomial Logistic Regression model.
-"
-85381B4DF6F42B35CA5097709523038ABDCDC555_0,85381B4DF6F42B35CA5097709523038ABDCDC555," Reclassifying the data
-
-Figure 1. Example flow showing string reclassification for binomial logistic regression
-
-
-
-
-
-1. Add a Data Asset node that points to drug_long_name.csv.
-2. Add a Type node after the Data Asset node. Double-click the Type node to open its properties, and select Cholesterol_long as the target.
-3. Add a Logistic Regression node after the Type node. Double-click the node and select the Binomial procedure (instead of the default Multinomial procedure).
-4. Right-click the Logistic Regression node and run it. An error message warns you that the Cholesterol_long string values are too long. When you encounter this type of message, follow the procedure described in the rest of this example to modify your data.
-
-Figure 2. Error message displayed when running the binomial logistic regression node
-
-
-5. Add a Reclassify node after the Type node and double-click it to open its properties.
-6. For the Reclassify Field, select Cholesterol_long and type Cholesterol for the new field name.
-7. Click Get values to add the Cholesterol_long values to the original value column.
-8. In the new value column, type High next to the original value of High level of cholesterol and Normal next to the original value of Normal level of cholesterol.
-
-Figure 3. Reclassifying long strings
-
-
-9. Add a Filter node after the Reclassify node. Double-click the node, choose Filter the selected fields, and select the Cholesterol_long field.
-
-Figure 4. Filtering the ""Cholesterol_long"" field from the data
-
-"
-85381B4DF6F42B35CA5097709523038ABDCDC555_1,85381B4DF6F42B35CA5097709523038ABDCDC555,"
-10. Add a Type node after the Filter node. Double-click the node and select Cholesterol as the target.
-
-Figure 5. Short string details in the ""Cholesterol"" field
-
-
-11. Add a Logistic node after the Type node. Double-click the node and select the Binomial procedure.
-
-
-
-You can now run the binomial Logistic node and generate a model without encountering the error as you did before.
-
-This example only shows part of a flow. For more information about the types of flows in which you might need to reclassify long strings, see the following example:
-
-
-
-* Auto Classifier node. See [Automated modeling for a flag target](https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_autoflag.html).
-"
-75891659AB1DF929D219741C3F2D69384A01835C,75891659AB1DF929D219741C3F2D69384A01835C," Retail sales promotion
-
-This example deals with fictitious data that describes retail product lines and the effects of promotion on sales.
-
-Your goal in this example is to predict the effects of future sales promotions. Similar to the [condition monitoring example](https://dataplatform.cloud.ibm.com/docs/content/wsd/tutorials/tut_condition.html), the data mining process consists of the exploration, data preparation, training, and test phases.
-
-This example uses the flow named Retail Sales Promotion, available in the example project . The data files are goods1n.csv and goods2n.csv.
-
-
-
-"
-6F55360D336A77A06F2C4235B286A869CFF0986C,6F55360D336A77A06F2C4235B286A869CFF0986C," Examining the data
-
-Each record contains:
-
-
-
-* Class. Product type.
-* Cost. Unit price.
-* Promotion. Index of amount spent on a particular promotion.
-* Before. Revenue before promotion.
-* After. Revenue after promotion.
-
-
-
-The flow is simple. It displays the data in a table. The two revenue fields (Before and After) are expressed in absolute terms. However, it seems likely that the increase in revenue after the promotion (and presumably as a result of it) would be a more useful figure.
-
-Figure 1. Effects of promotion on product sales
-
-
-
-The flow also contains a node to derive this value, expressed as a percentage of the revenue before the promotion, in a field called Increase. A table shows this field.
-
-Figure 2. Increase in revenue after promotion
-
-
-
-For each class of product, and almost linear relationship exists between the increase in revenue and the cost of the promotion. Therefore, it seems likely that a decision tree or neural network could predict, with reasonable accuracy, the increase in revenue from the other available fields.
-"
-1399CD9C09634E30C0F099C0FAE66A756153DAB1,1399CD9C09634E30C0F099C0FAE66A756153DAB1," Learning and testing
-
-The flow trains a neural network and a decision tree to make this prediction of revenue increase.
-
-Figure 1. Retail Sales Promotion example flow
-
-
-
-After you run the flow to generate the model nuggets, you can test the results of the learning process. You do this by connecting the decision tree and network in series between the Type node and a new Analysis node, changing the Data Asset import node to point to goods2n.csv, and running the Analysis node. From the output of this node, in particular from the linear correlation between the predicted increase and the correct answer, you will find that the trained systems predict the increase in revenue with a high degree of success.
-
-Further exploration might focus on the cases where the trained systems make relatively large errors. These could be identified by plotting the predicted increase in revenue against the actual increase. Outliers on this graph could be selected using the interactive graphics within SPSS Modeler, and from their properties, it might be possible to tune the data description or learning process to improve accuracy.
-"
-420946CA7E893CC5A2B3D1A8F47A7A2C7059D7F6,420946CA7E893CC5A2B3D1A8F47A7A2C7059D7F6," Screening predictors
-
-The Feature Selection node helps you identify the fields that are most important in predicting a certain outcome. From a set of hundreds or even thousands of predictors, the Feature Selection node screens, ranks, and selects the predictors that may be most important. Ultimately, you may end up with a quicker, more efficient model—one that uses fewer predictors, runs more quickly, and may be easier to understand.
-
-The data used in this example represents a data warehouse for a hypothetical telephone company and contains information about responses to a special promotion by 5,000 of the company's customers. The data includes many fields that contain customers' age, employment, income, and telephone usage statistics. Three ""target"" fields show whether or not the customer responded to each of three offers. The company wants to use this data to help predict which customers are most likely to respond to similar offers in the future.
-
-This example uses the flow named Screening Predictors, available in the example project . The data file is customer_dbase.csv.
-
-This example focuses on only one of the offers as a target. It uses the CHAID tree-building node to develop a model to describe which customers are most likely to respond to the promotion. It contrasts two approaches:
-
-
-
-* Without feature selection. All predictor fields in the dataset are used as inputs to the CHAID tree.
-* With feature selection. The Feature Selection node is used to select the best 10 predictors. These are then input into the CHAID tree.
-
-
-
-By comparing the two resulting tree models, we can see how feature selection can produce effective results.
-"
-5A328CF6319859F041C48974E44046BCFCEA3B87,5A328CF6319859F041C48974E44046BCFCEA3B87," Building the flow
-
-Figure 1. Feature Selection example flow
-
-
-
-
-
-1. Add a Data Asset node that points to customer_dbase.csv.
-2. Add a Type node after the Data Asset node.
-3. Double-click the Type node to open its properties, and change the role for response_01 to Target. Change the role to None for the other response fields (response_02 and response_03) and for the customer ID (custid) field. Leave the role set to Input for all other fields.
-
-Figure 2. Adding a Type node
-
-
-4. Click Read Values and then click Save.
-5. Add a Feature Selection modeling node after the Type node. In the node properties, the rules and criteria used for screening or disqualifying fields are defined.
-
-Figure 3. Adding a Feature Selection node
-
-
-6. Run the flow to generate the Feature Selection model nugget.
-7. To look at the results, right-click the model nugget and choose View Model. The results show the fields found to be useful in the prediction, ranked by importance. By examining these fields, you can decide which ones to use in subsequent modeling sessions.
-"
-9B120FF1F8482EB617E16738D5160C966C6EDF3D,9B120FF1F8482EB617E16738D5160C966C6EDF3D," Building the models
-
-
-
-1. Run the CHAID node that uses all the predictors in the dataset (the one connected to the Type node). As it runs, notice how long it takes to finish.
-2. Right-click the generated model nugget, select View Model, and look at the tree diagram.
-3. Now run the other CHAID model, which uses less predictors. Again, look at its tree diagram.
-
-It might be hard to tell, but the second model ran faster than the first one. Because this dataset is relatively small, the difference in run times is probably only a few seconds; but for larger real-world datasets, the difference might be very noticeable—minutes or even hours. Using feature selection may speed up your processing times dramatically.
-
-The second tree also contains fewer tree nodes than the first. It's easier to comprehend. Using fewer predictors is less expensive. It means that you have less data to collect, process, and feed into your models. Computing time is improved. In this example, even with the extra feature selection step, model building was faster with the smaller set of predictors. With a larger real-world dataset, the time savings should be greatly amplified.
-
-Using fewer predictors results in simpler scoring. For example, you might identify only four profiles of customers who are likely to respond to the promotion. Note that with larger numbers of predictors, you run the risk of overfitting your model. The simpler model may generalize better to other datasets (although you would need to test this to be sure).
-
-You could instead use a tree-building algorithm to do the feature selection work, allowing the tree to identify the most important predictors for you. In fact, the CHAID algorithm is often used for this purpose, and it's even possible to grow the tree level-by-level to control its depth and complexity. However, the Feature Selection node is faster and easier to use. It ranks all of the predictors in one fast step, allowing you to identify the most important fields quickly.
-"
-C41C78F27BB2F48542141EA85EDA7AD333E3FD0B,C41C78F27BB2F48542141EA85EDA7AD333E3FD0B," Making offers to customers (self-learning)
-
-The Self-Learning Response Model (SLRM) node generates and enables the updating of a model that allows you to predict which offers are most appropriate for customers and the probability of the offers being accepted. These sorts of models are most beneficial in customer relationship management, such as marketing applications or call centers.
-
-This example is based on a fictional banking company. The marketing department wants to achieve more profitable results in future campaigns by matching the appropriate offer of financial services to each customer. Specifically, the example uses a Self-Learning Response model to identify the characteristics of customers who are most likely to respond favorably based on previous offers and responses and to promote the best current offer based on the results.
-
-This example uses the flow named Making Offers to Customers - Self-Learning, available in the example project . The data files are pm_customer_train1.csv, pm_customer_train2.csv, and pm_customer_train3.csv.
-
-
-
-"
-7CEF749C4ED4703D00346FCDEF795D0431BC7C26_0,7CEF749C4ED4703D00346FCDEF795D0431BC7C26," Building the flow
-
-
-
-1. Add a Data Asset node that points to pm_customer_train1.csv.
-
-Figure 1. SLRM example flow
-
-
-2. Attach a Filler node to the Data Asset node. Double-click the node to open its properties and, under Fill in fields, select campaign.
-3. Select a Replace type of Always.
-4. In the Replace with text box, enter to_string(campaign) and click Save.
-
-Figure 2. Derive a campaign field
-
-
-5. Add a Type node and set the Role to None for the following fields:
-
-
-
-* customer_id
-* response_date
-* purchase_date
-* product_id
-* Rowid
-* X_random
-
-
-
-6. Set the Role to Target for the campaign and response fields. These are the fields on which you want to base your predictions. Set the Measurement to Flag for the response field.
-7. Click Read Values then click Save. Because the campaign field data shows as a list of numbers (1, 2, 3, and 4), you can reclassify the fields to have more meaningful titles.
-8. Add a Reclassify node after the Type node and open its properties.
-9. Under Reclassify Into, select Existing field.
-10. Under Reclassify Field, select campaign.
-11. Click Get values. The campaign values are added to the ORIGINAL VALUE column.
-12. In the NEW VALUE column, enter the following campaign names in the first four rows:
-
-
-
-* Mortgage
-* Car loan
-* Savings
-* Pension
-
-
-
-13. Click Save.
-
-Figure 3. Reclassify the campaign names
-
-
-"
-7CEF749C4ED4703D00346FCDEF795D0431BC7C26_1,7CEF749C4ED4703D00346FCDEF795D0431BC7C26,"14. Attach an SLRM modeling node to the Reclassify node. Select campaign for the Target field, and response for the Target response field.
-"
-AA1C79D72D6A7D37CE1E72735B6DCFCA2B546DCE_0,AA1C79D72D6A7D37CE1E72735B6DCFCA2B546DCE," Browsing the model
-
-
-
-1. Right-click the model nugget and select View Model. The initial view shows the estimated accuracy of the predictions for each offer. You can also click Predictor Importance to see the relative importance of each predictor in estimating the model, or click Association With Response to show the correlation of each predictor with the target variable.
-2. To switch between each of the four offers for which there are prediction, use the View drop-down.
-
-Figure 1. SLRM model nugget
-
-
-3. Return to the flow.
-4. Disconnect the Data Asset node that points to pm_customer_train1.csv.
-5. Add a new Data Asset node that points to pm_customer_train2.csv and connect it to the Filler node.
-6. Double-click the SLRM node and select Continue training existing model (under BUILD OPTIONS). Click Save.
-7. Run the flow to regenerate the model nugget. Then right-click it and select View Model. The model now shows the revised estimates of accuracy of the predictions for each offer.
-8. Add a new Data Asset node that points to pm_customer_train3.csv and connect it to the Filler node
-9. Run the flow again, then right-click the model nugget and select View Model.
-
-The model now shows the final estimated accuracy of the predictions for each offer. As you can see, the average accuracy fell slightly as you added the additional data sources. However, this fluctuation is a minimal amount and may be attributed to slight anomalies within the available data.
-"
-AA1C79D72D6A7D37CE1E72735B6DCFCA2B546DCE_1,AA1C79D72D6A7D37CE1E72735B6DCFCA2B546DCE,"10. Attach a Table node to the generated model nugget, then right-click the Table node and run it. In the Outputs pane, open the table output that was just generated.The predictions in the table show which offers a customer is most likely to accept and the confidence that they'll accept, depending on each customer's details. For example, in the first row, there's only a 13.2% confidence rating (denoted by the value 0.132 in the $SC-campaign-1 column) that a customer who previously took out a car loan will accept a pension if offered one. However, the second and third lines show two more customers who also took out a car loan; in their cases, there is a 95.7% confidence that they, and other customers with similar histories, would open a savings account if offered one, and over 80% confidence that they would accept a pension.
-
-Figure 2. Model output - predicted offers and confidences
-
-
-
-Explanations of the mathematical foundations of the modeling methods used in SPSS Modeler are available in the [SPSS Modeler Algorithms Guide](http://public.dhe.ibm.com/software/analytics/spss/documentation/modeler/new/AlgorithmsGuide.pdf).
-
-Note that these results are based on the training data only. To assess how well the model generalizes to other data in the real world, you would use a Partition node to hold out a subset of records for purposes of testing and validation.
-"
-BBB6FC842A370135B8488D9A2E09FCF17341954B,BBB6FC842A370135B8488D9A2E09FCF17341954B," Hotel satisfaction example for Text Analytics
-
-SPSS Modeler offers nodes that are specialized for handling text.
-
-In this example, a hotel manager is interested in learning what customers think about the hotel.
-
-Figure 1. Chart of positive opinions
-
-
-
-Figure 2. Chart of negative opinions
-
-
-
-This example uses the flow named Hotel Satisfaction, available in the example project . The data files are hotelSatisfaction.csv and hotelSatisfaction.xlsx. The flow uses Text Analytics nodes to analyze fictional text data about hotel personnel, comfort, cleanliness, price, etc.
-
-This flow illustrates two ways of analyzing data with a Text Mining node and a Text Link Analysis node. It also illustrates how you can deploy a text model and score current or new data.
-
-Let's take a look at the flow.
-
-
-
-1. Open the .
-2. Scroll down to the Modeler flows section and select the Hotel Satisfaction flow.
-
-Figure 3. Completed flow
-
-
-"
-1924AE74643C2D9D416204693C9BB84D5212E3B0_0,1924AE74643C2D9D416204693C9BB84D5212E3B0," Building and deploying the model
-
-
-
-1. When your model is ready, click Generate a model to generate a text nugget.
-
-Figure 1. Generate a new model
-
-
-
-Figure 2. Build a category model
-
-
-2. If you want to save the Text Analytics Workbench session, instead click Return to flow and then Save and exit.
-
-Figure 3. Saving your session
-
-The generated text nugget appears on your flow canvas.
-
-Figure 4. Generated text nugget
-
-After the category model has been validated and generated in the Text Analytics Workbench, you can deploy it in your flow and score the same data set or score a new one.
-
-Figure 5. Example flow with two modes for scoring
-
-This example flow illustrates the two modes for scoring:
-
-
-
-* Categories as fields. With this option, there are just as many output records as there were in the input. However, each record now contains one new field for every category that was selected on the Model tab. For each field, enter a flag value for true and for false, such as True/False, or 1/0. In this flow, values are set to 1 and 0 to aggregate results and count the number of positive, negative, mixed (both positive and negative), or no score (no opinion) answers.
-
-Figure 6. Model results - categories as fields
-
-"
-1924AE74643C2D9D416204693C9BB84D5212E3B0_1,1924AE74643C2D9D416204693C9BB84D5212E3B0,"
-* Categories as records. With this option, a new record is created for each category, document pair. Typically, there are more records in the output than there were in the input. Along with the input fields, new fields are also added to the data depending on what kind of model it is.
-
-Figure 7. Model results - categories as records
-
-
-
-
-
-3. You can add a Select node after the DeriveSentiment SuperNode, include Sentiments=Pos, and add a Charts node to gain quick insight about what guests appreciate about the hotel:
-
-Figure 8. Chart of positive opinions
-
-
-"
-5E4D2166BB8C2B95E515591E014E7CA00B87BCA2,5E4D2166BB8C2B95E515591E014E7CA00B87BCA2," Using the Text Analytics Workbench
-
-The Text Analytics Workbench contains the extraction results and the category model contained in the text analytics package.
-"
-F161A94239C1DC6696DBB583EC46BC64F3AA8906,F161A94239C1DC6696DBB583EC46BC64F3AA8906," Text Link Analysis node
-
-In some cases, you may not need to create a category model to score. The Text Link Analysis (TLA) node adds a pattern-matching technology to text mining's concept extraction. This identifies relationships between the concepts in the text data based on known patterns. These relationships can describe how a customer feels about a product, which companies are doing business together, or even the relationships between genes or pharmaceutical agents.
-
-Figure 1. Text Link Analysis node
-
-
-
-
-
-1. Add a Text Link Analysis node to your canvas and connect it to the Data Asset node that points to hotelSatisfaction.csv. Double-click the node to open its properties.
-2. Select id for the ID field and Comments for the Text field. Note that only the Text field is required.
-3. For Copy resources from, select the Hotel Satisfaction (English) template.
-
-Figure 2. Text Link Analysis node FIELD properties
-
-
-4. Under Expert, select Accommodate spelling for a minimum word character length of.
-
-Figure 3. Text Link Analysis node Expert properties
-
-The resulting output is a table (or the result of an Export node).
-
-Figure 4. Raw TLA output
-
-
-
-Figure 5. Counting sentiments on a TLA node
-
-
-"
-E7FAC7868F0D237EFBFCC625D8C265AEEBCA3E7D_0,E7FAC7868F0D237EFBFCC625D8C265AEEBCA3E7D," Text Mining node
-
-Figure 1. Text Mining node to analyze comments from hotel guests
-
-
-
-
-
-1. Add a Data Asset node that points to hotelSatisfaction.csv.
-2. From the Text Analytics category on the node palette, add a Text Mining node, connect it to the Data Asset node you added in the previous step, and double-click it to open its properties.
-3. Under Fields, select Comments for the Text field and select id for the ID field. Note that only the Text field is required.
-
-Figure 2. Text Mining node properties
-
-
-4. Under Copy resources from, select Text analysis package, click Select Resources, and then load Hotel Satisfaction (English).tap (with Current category set(s) = Topic + Opinion).A text analysis package (TAP) is a predefined set of libraries and advanced linguistic and nonlinguistic resources bundled with one or more sets of predefined categories. If no text analysis package is relevant for your application, you can instead start by selecting Resource template under Copy resources from. A resource template is a predefined set of libraries and advanced linguistic and nonlinguistic resources that have been fine-tuned for a particular domain or usage.
-
-Figure 3. Text Mining node properties
-
-
-"
-E7FAC7868F0D237EFBFCC625D8C265AEEBCA3E7D_1,E7FAC7868F0D237EFBFCC625D8C265AEEBCA3E7D,"5. Under Build models, make sure Build interactively (category model nugget) is selected. Later when you run the node, this option will launch an interactive interface (known as the Text Analytics Workbench) in which you can extract concepts and patterns, explore and fine-tune the extracted results, build and refine categories, and build category model nuggets.
-6. Under Begin session by, select Extracting concepts and text links. The option Extracting concepts extracts only concepts, whereas TLA extraction outputs both concepts and text links that are connections between topics (service, personnel, food, etc.) and opinions.
-7. Under Expert, select Accommodate spelling for a minimum word character length of. This option applies a fuzzy grouping technique that helps group commonly misspelled words or closely spelled words under one concept. The fuzzy grouping algorithm temporarily strips all vowels (except the first one) and strips double/triple consonants from extracted words and then compares them to see if they're the same (so, for example, location and locatoin are grouped together).
-
-Figure 4. Text Mining node properties
-
-"
-ED7AFE85422B1DB8EAED166840D275DDDB63CAFA_0,ED7AFE85422B1DB8EAED166840D275DDDB63CAFA," Managing your account settings
-
-From the Account window you can view information about your IBM Cloud account and set the Resource scope, Credentials for connections, and Regional project storage settings for IBM watsonx.
-
-
-
-* [View account information](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.html?context=cdpaas&locale=enview-account-information)
-* [Set the scope for resources](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.html?context=cdpaas&locale=enset-the-scope-for-resources)
-* [Set the type of credentials for connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.html?context=cdpaas&locale=enset-the-credentials-for-connections)
-* [Set the login session expiration](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.html?context=cdpaas&locale=enset-expiration)
-
-
-
-You must be the IBM Cloud account owner or administrator to manage the account settings.
-
-"
-ED7AFE85422B1DB8EAED166840D275DDDB63CAFA_1,ED7AFE85422B1DB8EAED166840D275DDDB63CAFA," View account information
-
-You can see the account name, ID and type.
-
-
-
-1. Select Administration > Account and billing > Account to open the account window.
-2. If you need to manage your Cloud account, click the Manage in IBM Cloud link to navigate to the Account page on IBM Cloud.
-
-
-
-"
-ED7AFE85422B1DB8EAED166840D275DDDB63CAFA_2,ED7AFE85422B1DB8EAED166840D275DDDB63CAFA," Set the scope for resources
-
-By default, account users see resources based on membership. You can restrict the resource scope to the current account to control access. By setting the resource scope to the current account, users cannot access resources outside of their account, regardless of membership. The scope applies to projects, catalogs, and spaces.
-
-To restrict resources to current account:
-
-
-
-1. Select Administration > Account and billing > Account to open the account settings window.
-2. Set Resource scope to On. Access is updated immediately to be restricted to the current account.
-
-
-
-"
-ED7AFE85422B1DB8EAED166840D275DDDB63CAFA_3,ED7AFE85422B1DB8EAED166840D275DDDB63CAFA," Set the credentials for connections
-
-The credentials for connections setting determines the type of credentials users must specify when creating a new connection. This setting applies only when new connections are created; existing connections are not affected.
-
-"
-ED7AFE85422B1DB8EAED166840D275DDDB63CAFA_4,ED7AFE85422B1DB8EAED166840D275DDDB63CAFA," Either personal or shared credentials
-
-You can allow users the ability to specify personal or shared credentials when creating a new connection. Radio buttons will appear on the new connection form, allowing the user to select personal or shared.
-
-To allow the credential type to be chosen on the new connection form:
-
-
-
-1. Select Administration > Account and billing > Account to open the account settings window.
-2. Set both Shared credentials and Personal credentials to Enabled.
-
-
-
-"
-ED7AFE85422B1DB8EAED166840D275DDDB63CAFA_5,ED7AFE85422B1DB8EAED166840D275DDDB63CAFA," Personal credentials
-
-When personal credentials are specified, each user enters their own credentials when creating a new connection or when using a connection to access data.
-
-To require personal credentials for all new connections:
-
-
-
-1. Select Administration > Account and billing > Account to open the account settings window.
-2. Set Personal credentials to Enabled.
-3. Set Shared credentials to Disabled.
-
-
-
-"
-ED7AFE85422B1DB8EAED166840D275DDDB63CAFA_6,ED7AFE85422B1DB8EAED166840D275DDDB63CAFA," Shared credentials
-
-With shared credentials, the credentials that were entered by the creator of the connection are made available to all other users when accessing data with the connection.
-
-To require shared credentials for all new connections:
-
-
-
-1. Select Administration > Account and billing > Account to open the account settings window.
-2. Set Shared credentials to Enabled.
-3. Set Personal credentials to Disabled.
-
-
-
-"
-ED7AFE85422B1DB8EAED166840D275DDDB63CAFA_7,ED7AFE85422B1DB8EAED166840D275DDDB63CAFA," Set the login session expiration
-
-Active and inactive session durations are managed through IBM Cloud. You are notified of a session expiration 5 minutes before the session expires. Unless your service supports autosaving, your work is not saved when your session expires.
-
-You can change the default durations for active and inactive sessions. For more information on required permissions and duration limits, see [Setting limits for login sessions](https://cloud.ibm.com/docs/account?topic=account-iam-work-sessions&interface=ui).
-
-To change the default durations:
-
-
-
-1. From the watsonx navigation menu, select Administration > Access (IAM).
-2. In IBM Cloud, select Manage > Access (IAM) > Settings.
-3. Select the Login session tab.
-4. For each expiration time that you want to change, edit the time and click Save.
-
-
-
-The inactivity duration cannot be longer than the maximum session duration, and the token lifetime cannot be longer than the inactivity duration. IBM Cloud prevents you from inputing an invalid combination of settings.
-
-"
-ED7AFE85422B1DB8EAED166840D275DDDB63CAFA_8,ED7AFE85422B1DB8EAED166840D275DDDB63CAFA," Learn more
-
-
-
-* [Managing all projects in the account](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-manage-projects.html)
-* [Adding connections to projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html)
-
-
-
-Parent topic:[Managing IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html)
-"
-88BAC0DA2CCB09C93C0013A209147CC5A5DCEE68_0,88BAC0DA2CCB09C93C0013A209147CC5A5DCEE68," Managing the user API key
-
-Certain operations in IBM watsonx require an API key for secure authorization. You can generate and rotate a user API key as needed to help ensure your operations run smoothly.
-
-"
-88BAC0DA2CCB09C93C0013A209147CC5A5DCEE68_1,88BAC0DA2CCB09C93C0013A209147CC5A5DCEE68," User API key overview
-
-Operations running within services in IBM watsonx require credentials for secure authorization. These operations use an API key for authorization. A valid API key is required for many long-running tasks, including the following:
-
-
-
-* Model training in Watson Machine Learning
-* Problem solving with Decision Optimization
-* Data transformation with DataStage flows
-* Other runtime services (for example, Data Refinery and Pipelines) that accept API key references
-
-
-
-Both scheduled and ad hoc jobs require an API key for authorization. An API key is used for jobs when:
-
-
-
-* Creating a job schedule with a predefined key
-* Updating the API key for a scheduled job
-* Providing an API key for an ad hoc job
-
-
-
-User API keys give control to the account owner to secure and renew credentials, thus helping to ensure operations run without interruption. Keys are unique to the IBMid and account. If you change the account you are working in, you must generate a new key.
-
-"
-88BAC0DA2CCB09C93C0013A209147CC5A5DCEE68_2,88BAC0DA2CCB09C93C0013A209147CC5A5DCEE68," Active and Phased out keys
-
-When you create an API key, it is placed in Active state. The Active key is used for authorization for operations in IBM watsonx.
-
-When you rotate a key, a new key is created in Active state and the existing key is changed to Phased out state. A Phased out key is not used for authorization and can be deleted.
-
-"
-88BAC0DA2CCB09C93C0013A209147CC5A5DCEE68_3,88BAC0DA2CCB09C93C0013A209147CC5A5DCEE68," Viewing the current API key
-
-Click your avatar and select Profile and settings to open your account profile. Select User API key to view the Active and Phased out keys.
-
-"
-88BAC0DA2CCB09C93C0013A209147CC5A5DCEE68_4,88BAC0DA2CCB09C93C0013A209147CC5A5DCEE68," Creating an API key
-
-If you do not have an API key, you can create a key by clicking Create a key.
-
-A new key is created in Active state. The key automatically authorizes operations that require a secure credential. The key is stored in both IBM Cloud and IBM watsonx. You can view the API keys for your IBM Cloud account at [API keys](https://cloud.ibm.com/iam/apikeys).
-
-User API Keys take the form cpd-apikey-{username}-{timeStamp}, where username is the IBMid of the account owner and timestamp indicates when the key was created.
-
-"
-88BAC0DA2CCB09C93C0013A209147CC5A5DCEE68_5,88BAC0DA2CCB09C93C0013A209147CC5A5DCEE68," Rotating an API key
-
-If the API key becomes stale or invalid, you can generate a new Active key for use by all operations.
-
-To rotate a key, click Rotate.
-
-A new key is created to replace the current key. The rotated key is placed in Phased out status. A Phased out key is not available for use.
-
-"
-88BAC0DA2CCB09C93C0013A209147CC5A5DCEE68_6,88BAC0DA2CCB09C93C0013A209147CC5A5DCEE68," Deleting a phased out API key
-
-When you are certain the phased out key is no longer needed for operations, click the minus sign to delete it. Deleting keys might cause running operations to fail.
-
-"
-88BAC0DA2CCB09C93C0013A209147CC5A5DCEE68_7,88BAC0DA2CCB09C93C0013A209147CC5A5DCEE68," Deleting all API keys
-
-Delete all keys (both Active and Phased out) by clicking the trash can. Deleting keys might cause running operations to fail.
-
-"
-88BAC0DA2CCB09C93C0013A209147CC5A5DCEE68_8,88BAC0DA2CCB09C93C0013A209147CC5A5DCEE68," Learn more
-
-
-
-* [Creating and managing jobs in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.html)
-* [Adding task credentials](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/task-credentials.html)
-* [Understanding API keys](https://cloud.ibm.com/docs/account?topic=account-manapikey&interface=ui)
-
-
-
-Parent topic:[Administering your accounts and services](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/administer-accounts.html)
-"
-A10DE0E026BA0CF397108621D5927E16436ACF58_0,A10DE0E026BA0CF397108621D5927E16436ACF58," Configuring App ID with your identity provider
-
-To use App ID for user authentication for IBM watsonx, you configure App ID as a service on IBM Cloud. You configure an identity provider (IdP) such as Azure Active Directory. You then configure App ID and the identity provider to communicate with each other to grant access to authorized users.
-
-To configure App ID and your identity provider to work together, follow these steps:
-
-
-
-* [Configure your identity provider to communicate with IBM Cloud](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-appid-tips.html?context=cdpaas&locale=encfg_idp)
-* [Configure App ID to communicate with your identify provider](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-appid-tips.html?context=cdpaas&locale=encfg_appid)
-* [Configure IAM to enable login through your identity provider](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-appid-tips.html?context=cdpaas&locale=encfg_iam)
-
-
-
-"
-A10DE0E026BA0CF397108621D5927E16436ACF58_1,A10DE0E026BA0CF397108621D5927E16436ACF58," Configuring your identity provider
-
-To configure your identity provider to communicate with IBM Cloud, you enter the entityID and Location into your SAML configuration for your identity provider. An overview of the steps for configuring Azure Active Directory is provided as an example. Refer to the documentation for your identity provider for detailed instructions for its platform.
-
-The prerequisites for configuring App ID with an identity provider are:
-
-
-
-* An IBM Cloud account
-* An App ID instance
-* An identity provider, for example, Azure Active Directory
-
-
-
-To configure your identity provider for SAML-based single sign-on:
-
-1. Download the SAML metadata file from App ID to find the values for entityID and Location. These values are entered into the identity provider configuration screen to establish communication with App ID on IBM Cloud. (The corresponding values from the identity provider, plus the primary certificate, are entered in App ID. See [Configuring App ID](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-appid-tips.html?context=cdpaas&locale=encfg_appid)).
-
-
-
-* In App ID, choose Identity providers > SAML 2.0 federation.
-* Download the appid-metadata.xml file.
-* Find the values for entityID and Location.
-
-
-
-2. Copy the values for entityID and Location from the SAML metadata file and paste them into the corresponding fields on your identity provider. For Azure Active Directory, the fields are located in Section 1: Basic SAML Configuration in the Enterprise applications configuration screen.
-
-
-
- App ID value Active Directory field Example
-
- entityID Identifier (Entity ID) urn:ibm:cloud:services:appid:value
- Location Reply URL (Assertion Consumer Service URL) https://us-south.appid.cloud.ibm.com/saml2/v1/value/login-acs
-
-
-
-3. In Section 2: Attributes & Claims for Azure Active Directory, you map the username parameter to user.mail to identify the users by their unique email address. IBM watsonx requires that you set username to the user.mail attribute. For other identity providers, a similar field that uniquely identifies users must be mapped to user.mail.
-
-"
-A10DE0E026BA0CF397108621D5927E16436ACF58_2,A10DE0E026BA0CF397108621D5927E16436ACF58," Configuring App ID
-
-You establish communication between App ID and your identity provider by entering the SAML values from the identity provider into the corresponding App ID fields. An example is provided for configuring App ID to communicate with an Active Directory Enterprise Application.
-
-1. Choose Identity providers > SAML 2.0 federation and complete the Provide metadata from SAML IdP section.
-
-2. Download the Base64 certificate from Section 3: SAML Certificates in Active Directory (or your identity provider) and paste it into the Primary certificate field.
-
-3. Copy the values from Section 4: Set up your-enterprise-application in Active Directory into the corresponding fields in Provide metadata from SAML IdP in IBM App ID.
-
-
-
- App ID field Value from Active Directory
-
- Entity ID Azure AD Identifier
- Sign in URL Login URL
- Primary certificate Certificate (Base64)
-
-
-
-4. Click Test on the App ID page to test that App ID can connect to the identity provider. The happy face response indicates that App ID can communicate with the identity provider.
-
-
-
-"
-A10DE0E026BA0CF397108621D5927E16436ACF58_3,A10DE0E026BA0CF397108621D5927E16436ACF58," Configuring IAM
-
-You must assign the appropriate role to the users in IBM Cloud IAM and also configure your identity provider in IAM. Users require at least the Viewer role for All Identity and IAM enabled services.
-
-"
-A10DE0E026BA0CF397108621D5927E16436ACF58_4,A10DE0E026BA0CF397108621D5927E16436ACF58," Create an identity provider reference in IBM Cloud IAM
-
-Create an identity provider reference to connect your external repository to your IBM Cloud account.
-
-
-
-1. Navigate to Manage > Access(IAM) > Identity providers.
-2. For the type, choose IBM Cloud App ID.
-3. Click Create.
-4. Enter a name for the identity provider.
-5. Select the App ID service instance.
-6. Select how to on board users. Static adds users when they log in for the first time.
-7. Enable the identity provider for logging in by checking the Enable for account login? box.
-8. If you have more than one identity providers, set the identity provider as the default by checking the box.
-9. Click Create.
-
-
-
-"
-A10DE0E026BA0CF397108621D5927E16436ACF58_5,A10DE0E026BA0CF397108621D5927E16436ACF58," Change the App ID login alias
-
-A login alias is generated for App ID. Users enter the alias when logging on to IBM Cloud. You can change the default alias string to be easier to remember.
-
-
-
-1. Navigate to Manage > Access(IAM) > Identity providers.
-2. Select IBM Cloud App ID as the type.
-3. Edit the Default IdP URL to make it simpler. For example, https://cloud.ibm.com/authorize/540f5scc241a24a70513961 can be changed to https://cloud.ibm.com/authorize/my-company. Users log in with the alias my-company instead of 540f5scc241a24a70513961.
-
-
-
-"
-A10DE0E026BA0CF397108621D5927E16436ACF58_6,A10DE0E026BA0CF397108621D5927E16436ACF58," Learn more
-
-
-
-* [IBM Cloud docs: Managing authentication](https://cloud.ibm.com/docs/appid?topic=appid-managing-idp)
-* [IBM Cloud docs: Configuring federated identity providers: SAML](https://cloud.ibm.com/docs/appid?topic=appid-enterpriseenterprise)
-* [IBM Cloud SAML Federation Guide](https://www.ibm.com/cloud/blog/ibm-cloud-saml-federation-guide)
-* [Setting up IBM Cloud App ID with your Azure Active Directory](https://www.ibm.com/cloud/blog/setting-ibm-cloud-app-id-azure-active-directory)
-* [Reusing Existing Red Hat SSO and Keycloak for Applications That Run on IBM Cloud with App ID](https://www.ibm.com/cloud/blog/reusing-existing-red-hat-sso-and-keycloak-for-applications-that-run-on-ibm-cloud-with-app-id)
-
-
-
-Parent topic:[Setting up IBM Cloud App ID (beta)](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-appid.html)
-"
-77393F760A3A3F834809ACA1078BDF229331C2FD_0,77393F760A3A3F834809ACA1078BDF229331C2FD," Overview for setting up IBM Cloud App ID (beta)
-
-IBM watsonx supports IBM Cloud App ID to integrate customer's registries for user authentication. You configure App ID on IBM Cloud to communicate with an identiry provider. You then provide an alias to the people in your organization to log in to IBM watsonx.
-
-Required roles : To configure identity providers for App ID, you must have one of the following roles in the IBM Cloud account:
-
-: - Account owner : - Operator or higher on the App ID instance : - Operator or Administrator role on the IAM Identity Service
-
-App ID is configured entirely on IBM Cloud. An identity provider, for example, Active Directory, must also be configured separately to communicate with App ID.
-
-For more information on configuring App ID to work with an identity provider, see [Configuring App ID with your identity provider](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-appid-tips.html).
-
-"
-77393F760A3A3F834809ACA1078BDF229331C2FD_1,77393F760A3A3F834809ACA1078BDF229331C2FD," Configuring the log on alias
-
-The App ID instance is configured as the default identity provider for the account. For instructions on configuring an identity provider, refer to [IBM Cloud docs: Enabling authentication from an external identity provider](https://cloud.ibm.com/docs/account?topic=account-idp-integration).
-
-Each App ID instance requires a unique alias. There is one alias per account. All users in an account log in with the same alias. When the identity provider is configured, the alias is initially set to the account ID. You can [change the initial alias](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-appid-tips.htmlcfg_alias) to be easier to type and remember.
-
-"
-77393F760A3A3F834809ACA1078BDF229331C2FD_2,77393F760A3A3F834809ACA1078BDF229331C2FD," Logging in with App ID (beta)
-
-Users choose App ID (beta) as the login method on the IBM watsonx login page and enter the alias. Then, they are redirected to their company's login page to enter their company credentials. Upon logging in successfully to their company, they are redirected to IBM watsonx.
-
-To verify that the alias is correctly configured, go to the User profile and settings page. Verify that the username in the profile is the email from your company’s registry. The alias is correct if the correct email is shown in the profile, as it indicates that the mapping was successful.
-
-You cannot switch accounts when logging in through App ID.
-
-"
-77393F760A3A3F834809ACA1078BDF229331C2FD_3,77393F760A3A3F834809ACA1078BDF229331C2FD," Limitations
-
-The following limitations apply to this beta release:
-
-
-
-* You must map the name/username/sub SAML profile properties to the email property in the user registry. If the mapping is absent or incorrect, a default opaque user ID is used, which is not supported in this beta release.
-* The IBM Cloud login page does not support an App ID alias. Users log in into IBM Cloud with a custom URL, following this form: https://cloud.ibm.com/authorize/{app_id_alias}.
-
-
-
-
-
-* If you are using the Cloud Directory included with App ID as your user registry, you must select Username and password as the option for Manage authentication > Cloud Directory > Settings > Allow users to sign-up and sign-in using.
-
-
-
-"
-77393F760A3A3F834809ACA1078BDF229331C2FD_4,77393F760A3A3F834809ACA1078BDF229331C2FD," Learn more
-
-
-
-* [Logging in to watsonx.ai through IBM App ID (beta)](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/signup-wx.htmlappid)
-* [Configuring App ID with your identity provider](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-appid-tips.html)
-* [IBM Cloud docs: Getting started with App ID](https://cloud.ibm.com/docs/appid?topic=appid-getting-started)
-* [IBM Cloud docs: Enabling authentication from an external identity provider](https://cloud.ibm.com/docs/account?topic=account-idp-integration)
-
-
-
-Parent topic:[Managing IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html)
-"
-78A4D6515FAA2766FEB3A03CA6A378846CF33D83_0,78A4D6515FAA2766FEB3A03CA6A378846CF33D83," Managing all projects in the account
-
-If you have the required permission, you can view and manage all projects in your IBM Cloud account. You can add yourself to a project so that you can delete it or change its collaborators.
-
-"
-78A4D6515FAA2766FEB3A03CA6A378846CF33D83_1,78A4D6515FAA2766FEB3A03CA6A378846CF33D83," Requirements
-
-To manage all projects in the account, you must:
-
-
-
-* Restrict resources to the current account. See steps to [set the scope for resources](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/account-settings.htmlset-the-scope-for-resources).
-* Have the Manage projects permission that is provided by the IAM Manager role for the IBM Cloud Pak for Data service.
-
-
-
-"
-78A4D6515FAA2766FEB3A03CA6A378846CF33D83_2,78A4D6515FAA2766FEB3A03CA6A378846CF33D83," Assigning the Manage projects permission
-
-To grant the Manage projects permission to a user who is already in your IBM Cloud account:
-
-
-
-1. From the navigation menu, choose Administration > Access (IAM) to open the Manage access and users page in your IBM Cloud account.
-2. Select the user on the Users page.
-3. Click the Access tab and then choose Assign access+.
-4. Select Access policy.
-5. For Service, choose IBM Cloud Pak for Data.
-6. For Service access, select the Manager role.
-7. For Platform access, assign the Editor role.
-8. Click Add and Assign to assign the policy to the user.
-
-
-
-"
-78A4D6515FAA2766FEB3A03CA6A378846CF33D83_3,78A4D6515FAA2766FEB3A03CA6A378846CF33D83," Managing projects
-
-You can add yourself to a project when you need to delete the project, delete collaborators, or assign the Admin role to a collaborator in the project. To manage projects:
-
-
-
-* View all active projects on the Projects page in IBM watsonx by clicking the drop-down menu next to the search field and selecting All active projects.
-* Join any project as Admin by clicking Join as admin in the Your role column.
-* Filter projects to identify which projects you are not a collaborator in, by clicking the filter icon  and selecting Your role > No membership.
-
-
-
-For more details on managing projects, see [Administering projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/admin-project.html).
-
-"
-78A4D6515FAA2766FEB3A03CA6A378846CF33D83_4,78A4D6515FAA2766FEB3A03CA6A378846CF33D83," Learn more
-
-
-
-* [Administering projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/admin-project.html)
-
-
-
-Parent topic:[Managing IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html)
-"
-96C0566DA4EB3450616C3F358C32837BFD4DE6C8_0,96C0566DA4EB3450616C3F358C32837BFD4DE6C8," Removing users from the account or from the workspace
-
-The IBM Cloud account administrator or owner can remove users from the IBM Cloud account. Any use with the Admin role can remove users from a workspace.
-
-"
-96C0566DA4EB3450616C3F358C32837BFD4DE6C8_1,96C0566DA4EB3450616C3F358C32837BFD4DE6C8," Removing users from the IBM Cloud account
-
-You can remove a user from an IBM Cloud account, so that the user can no longer log in to the console, switch to your account, or access account resources.
-
-"
-96C0566DA4EB3450616C3F358C32837BFD4DE6C8_2,96C0566DA4EB3450616C3F358C32837BFD4DE6C8," Required roles
-
-: To remove a user from an IBM Cloud account, you must have one of the following roles for your IBM Cloud account: : - Owner : - Administrator : - Editor
-
-To remove a user from the IBM Cloud account:
-
-
-
-1. From the IBM watsonx navigation menu, click Administration > Access (IAM).
-2. Click Users and find the name of the user that you want to remove.
-3. Choose Remove user from the action menu and confirm the removal.
-
-
-
-Removing a user from an account doesn't delete the IBMid for the user. Any resources such as projects or catalogs that were created by the user remain in the account, but the user no longer has access to work with those resources. The account owner, or an administrator for the service instance, can assign other users to work with the projects and catalogs or delete them entirely.
-
-For more information, see [IBM Cloud docs: Removing users from an account](https://cloud.ibm.com/docs/account?topic=account-remove).
-
-"
-96C0566DA4EB3450616C3F358C32837BFD4DE6C8_3,96C0566DA4EB3450616C3F358C32837BFD4DE6C8," Removing users from a workspace
-
-You can remove collaborators from a workspace, such as a project or space, so that the user can no longer access the workspace or any of its contents.
-
-Required role : To remove a user from a workspace, you must have the Admin collaborator role for the workspace that you are editing.
-
-To remove a collaborator, select one or more users (or user groups) on the Access control page of the workspace and click Remove.
-
-The user is still a member of the IBM Cloud account and can be added as a collaborator to other workspaces as needed.
-
-"
-96C0566DA4EB3450616C3F358C32837BFD4DE6C8_4,96C0566DA4EB3450616C3F358C32837BFD4DE6C8," Learn more
-
-
-
-* [Stop using IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/stopapps.html)
-* [Project collaborators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html)
-* [IBM Cloud docs: Removing users from an account](https://cloud.ibm.com/docs/account?topic=account-remove)
-
-
-
-Parent topic:[Managing the platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html)
-"
-28F15AC17715506BB29327874DE7F76CB9FB2908,28F15AC17715506BB29327874DE7F76CB9FB2908," Administering your accounts and services
-
-For most administration tasks, you must be the IBM Cloud account owner or administrator. If you log in to your own account, you are the account owner. If you log in to someone else's account or an enterprise account, you might not be the account owner or administrator.
-
-Tasks for all users:
-
-
-
-* [Managing your personal settings](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.html)
-* [Determining your roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/your-roles.html)
-* [Understanding accessibility features](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/accessibility.html)
-
-
-
-Tasks for IBM Cloud account owners or administrators in IBM watsonx and in IBM Cloud:
-
-
-
-* [Managing IBM watsonx services](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html)
-* [Securing IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html)
-* [Managing your IBM Cloud account](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/manage-account.html)
-"
-6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8_0,6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8," Activity Tracker events
-
-You can see the events for actions for your provisioned services in the IBM Cloud Activity Tracker. You can use the information that is registered through the IBM Cloud Activity Tracker service to identify security incidents, detect unauthorized access, and comply with regulatory and internal auditing requirements.
-
-To get started, provision an instance of the IBM Cloud Activity Tracker service. See [IBM Cloud Activity Tracker](https://cloud.ibm.com/docs/activity-tracker?topic=activity-tracker-getting-started).
-
-View events in the Activity Tracker in the same IBM Cloud region where you provisioned your services. To view the account and user management events and other global platform events, you must provision an instance of the IBM Cloud Activity Tracker service in the Frankfurt (eu-de) region. See [Platform services](https://cloud.ibm.com/docs/activity-tracker?topic=activity-tracker-cloud_services_locationscloud_services_locations_core_integrated).
-
-
-
-* [Events for account and user management](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/at-events.html?context=cdpaas&locale=enacct)
-* [Events for Watson Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/at-events.html?context=cdpaas&locale=enws)
-* [Events for Watson Machine Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/at-events.html?context=cdpaas&locale=enwml)
-* [Events for model evaluation (Watson OpenScale)](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/at-events.html?context=cdpaas&locale=enwos)
-
-
-
-"
-6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8_1,6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8," Events for account and user management
-
-You can audit account and user management events in Activity Tracker, including:
-
-
-
-* Billing events
-* Global catalog events
-* IAM and user management events
-
-
-
-For the complete list of account and user management events, see [IBM Cloud docs: Auditing events for account management](https://cloud.ibm.com/docs/activity-tracker?topic=activity-tracker-at_events_acc_mgt).
-
-"
-6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8_2,6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8," Events for Watson Studio
-
-
-
-Events in Activity Tracker for Watson Studio
-
- Action Description
-
- data-science-experience.project.create Create a project.
- data-science-experience.project.delete Delete a project.
- data-science-experience.notebook.create Create a Notebook.
- data-science-experience.notebook.delete Delete a Notebook.
- data-science-experience.notebook.update Change the runtime service of a Notebook by selecting another one.
- data-science-experience.rstudio.start Open RStudio.
- data-science-experience.rstudio.stop RStudio session timed out.
-
-
-
-"
-6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8_3,6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8," Events for Decision Optimization
-
-
-
-Events in Activity Tracker for Decision Optimization
-
- Action Description
-
- domodel.decision.create Create experiments
- domodel.decision.update Update experiments
- domodel.decision.delete Delete experiments
- domodel.container.create Create scenarios
- domodel.container.update Update scenarios
- domodel.container.delete Delete scenarios
- domodel.notebook.import Update a scenario from a notebook
- domodel.notebook.export Generate a model notebook from a scenario
- domodel.wml.export Generate Watson Machine Learning models from a scenario
- domodel.solve.start Solve a scenario
- domodel.solve.stop Cancel a solve
-
-
-
-"
-6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8_4,6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8," Events for feature groups
-
-
-
-Events in Activity Tracker for feature groups (Watson Studio)
-
- Action Description
-
- data_science_experience.feature-group.retrieve Retrieve a feature group
- data_science_experience.feature-group.create Create a feature group
- data_science_experience.feature-group.update Update a feature group
- data_science_experience.feature-group.delete Delete a feature group
-
-
-
-"
-6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8_5,6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8," Events for asset management
-
-
-
-Events in Activity Tracker for asset management in Watson Studio
-
- Action Description
-
- datacatalog.asset.clone Copy an asset.
- datacatalog.asset.create Create an asset.
- datacatalog.data-asset.create Create a data asset.
- datacatalog.folder-asset.create Create a folder asset.
- datacatalog.type.create Create an asset type.
- datacatalog.asset.purge Delete an asset from the trash.
- datacatalog.asset.restore Restore an asset from the trash.
- datacatalog.asset.trash Send an asset to the trash.
- datacatalog.asset.update Update an asset.
- datacatalog.promoted-asset.create Create a project asset in a space.
- datacatalog.promoted-asset.update Update a space asset that started in a project.
- datacatalog.asset.promote Promote an asset from project to space.
-
-
-
-"
-6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8_6,6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8," Events for asset attachments
-
-
-
-Events in Activity Tracker for attachments
-
- Action Description
-
- datacatalog.attachment.create Create an attachment.
- datacatalog.attachment.delete Delete an attachment.
- datacatalog.attachment-resources.increase Increase resources for an attachment.
- datacatalog.complete.transfer Mark an attachment as transfer complete.
- datacatalog.attachment.update Update attachment metadata.
-
-
-
-"
-6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8_7,6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8," Events for asset attributes
-
-
-
-Events in Activity Tracker for attributes
-
- Action Description
-
- datacatalog.attribute.create Create an attribute.
- datacatalog.attribute.delete Delete an attribute.
- datacatalog.attribute.update Update an attribute.
-
-
-
-"
-6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8_8,6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8," Events for connections
-
-
-
-Events in Activity Tracker for connections
-
- Action Description
-
- wdp-connect-connection.connection.read Read a connection.
- wdp-connect-connection.connection.get Retrieve a connection.
- wdp-connect-connection.connection.get.list Get a list of connections.
- wdp-connect-connection.connection.create Create a connection.
- wdp-connect-connection.connection.delete Delete a connection.
-
-
-
-"
-6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8_9,6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8," Events for scheduling
-
-
-
-Events in Activity Tracker for scheduling
-
- Action Description
-
- wdp.scheduling.schedule.update.failed An update to a schedule failed.
- wdp.scheduling.schedule.create.failed The creation of a schedule failed.
- wdp.scheduling.schedule.read Read a schedule.
- wdp.scheduling.schedule.update Update a schedule.
- wdp.scheduling.schedule.delete.multiple Delete multiple schedules.
- wdp.scheduling.schedule.list List all schedules.
- wdp.scheduling.schedule.create Create a schedule.
-
-
-
-"
-6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8_10,6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8," Events for Data Refinery flows
-
-
-
-Events in Activity Tracker for Data Refinery flows
-
- Action Description
-
- data-science-experience.datarefinery-flow.read Read a Data Refinery flow
- data-science-experience.datarefinery-flow.create Create a Data Refinery flow
- data-science-experience.datarefinery-flow.delete Delete a Data Refinery flow
- data-science-experience.datarefinery-flow.update Update (save) a Data Refinery flow
- data-science-experience.datarefinery-flow.backup Clone (duplicate) a Data Refinery flow
- data-science-experience.datarefinery-flowrun.create Create a Data Refinery flow job run
- data-science-experience.datarefinery-flowrun-complete.update Complete a Data Refinery flow job run
- data-science-experience.datarefinery-flowrun-cancel.update Cancel a Data Refinery flow job run
-
-
-
-"
-6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8_11,6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8," Events for profiling
-
-
-
-Events in Activity Tracker for profiling
-
- Action Description
-
- wdp-profiling.profile.start Initiate profiling.
- wdp-profiling.profile.create Create a profile.
- wdp-profiling.profile.delete Delete a profile.
- wdp-profiling.profile.read Read a profile.
- wdp-profiling.profile.list List the profiles of a data asset.
- wdp-profiling.profile.update Update a profile.
- wdp-profiling.profile.asset-classification.update Update the asset classification of a profile.
- wdp-profiling.profile.column-classification.update Update the column classification of a profile.
- wdp-profiling.profile.create.failed Profile could not be created.
- wdp-profiling.profile.delete.failed Profile could not be deleted.
- wdp-profiling.profile.read.failed Profile could not be read.
- wdp-profiling.profile.list.failed Profiles could not be listed.
- wdp-profiling.profile.update.failed Profile could not be updated.
- wdp-profiling.profile.asset-classification.update.failed Asset classification of the profile could not be updated.
- wdp-profiling.profile.column-classification.update.failed Column classification of the profile could not be updated.
-
-
-
-"
-6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8_12,6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8," Events for profiling options
-
-
-
-Events in Activity Tracker for profiling options
-
- Action Description
-
- wdp-profiling.profile_options.create Create profiling options.
- wdp-profiling.profile_options.read Read profiling options.
- wdp-profiling.profile_options.update Update profiling options.
- wdp-profiling.profile_options.delete Delete profiling options
- wdp-profiling.profile_options.create.failed Profiling options could not be created.
- wdp-profiling.profile_options.read.failed Profiling options could not be read.
- wdp-profiling.profile_options.update.failed Profiling options could not be updated.
- wdp-profiling.profile_options.delete.failed Profiling options could not be deleted.
-
-
-
-"
-6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8_13,6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8," Events for feature groups
-
-
-
-Events in Activity Tracker for feature groups (IBM Knowledge Catalog)
-
- Action Description
-
- data_catalog.feature-group.retrieve Retrieve a feature group
- data_catalog.feature-group.create Create a feature group
- data_catalog.feature-group.update Update a feature group
- data_catalog.feature-group.delete Delete a feature group
-
-
-
-"
-6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8_14,6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8," Events for Watson Machine Learning
-
-"
-6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8_15,6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8," Event for Prompt Lab
-
-
-
-Event in Activity Tracker for Prompt Lab
-
- Action Description
-
- pm-20.foundation-model.send Send a prompt to a foundation model or tuned foundation model for inferencing.
-
-
-
-"
-6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8_16,6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8," Events for Watson Machine Learning deployments
-
-
-
-Events in Activity Tracker for Watson Machine Learning deployments
-
- Action Description
-
- pm-20.deployment.create Create a Watson Machine Learning deployment.
- pm-20.deployment.read Get a Watson Machine Learning deployment.
- pm-20.deployment.update Update a Watson Machine Learning deployment.
- pm-20.deployment.delete Delete a Watson Machine Learning deployment.
- pm-20.deployment_job.create Create a Watson Machine Learning deployment job.
- pm-20.deployment_job.read Get a Watson Machine Learning deployment job.
- pm-20.deployment_job.delete Delete a Watson Machine Learning deployment job.
-
-
-
-"
-6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8_17,6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8," Events for SPSS Modeler flows
-
-
-
-Events in Activity Tracker for SPSS Modeler flows
-
- Action Description
-
- data-science-experience.modeler-session.create Create a new SPSS Modeler session.
- data-science-experience.modeler-flow.send Store the current SPSS Modeler flow.
- data-science-experience.modeler-flows-user.receive Get the current user information.
- data-science-experience.modeler-flow-preview.create Preview a node in an SPSS Modeler flow.
- data-science-experience.modeler-examples.receive Get the list of example SPSS Modeler flows.
- data-science-experience.modeler-runtimes.receive Get the list of available SPSS Modeler runtimes.
- data-science-experience.lock-modeler-flow.enable Allocate the lock for the SPSS Modeler flow to the user.
- data-science-experience.project-name.receive Get the name of the project.
-
-
-
-"
-6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8_18,6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8," Event for model visualizations
-
-
-
-Event in Activity Tracker for modeler visualizations
-
- Action Description
-
- pm-20.model.visualize Visualize model output. The model output can have a single model, ensemble models, or a time-series model. The visualization type can be single, auto, or time-series. This visualization type is in requestedData section.
-
-
-
-"
-6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8_19,6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8," Events for Watson Machine Learning training assets
-
-
-
-Event in Activity Tracker for Watson Machine Learning training assets
-
- Action Description
-
- pm-20.training.authenticate Authenticate user.
- pm-20.training.authorize Authorize user.
- pm-20.training.list List all of training.
- pm-20.training.get Get one training.
- pm-20.training.create Start a training.
- pm-20.training.delete Stop a training.
-
-
-
-"
-6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8_20,6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8," Events for Watson Machine Learning repository assets
-
-The deployment events are tracked for these Watson Machine Learning repository assets:
-
-
-
-Event in Activity Tracker for Watson Machine Learning repository assets
-
- Asset type Description
-
- wml_model Represents a machine learning model asset.
- wml_model_definition Represents the code that is used to train one or more models.
- wml_pipeline Represents a hybrid-pipeline, a SparkML pipeline or a sklearn pipeline that is represented as a JSON document that is used to train one or more models.
- wml_experiment Represents the assets that capture a set of wml_pipeline or wml_model_definition assets that are trained at the same time on the same data set.
- wml_function Represents a Python function (code is packaged in a compressed file) that will be deployed as online deployment in Watson Machine Learning. This code needs to contain a score(...) python function.
- wml_training_definition Represents the training metadata necessary to start a training job.
- wml_deployment_job_definition Represents the deployment metadata information to create a batch job in WML. This asset type contains the same metadata that is used by the /ml/v4/deployment_jobs endpoint. When you submit batch deployment jobs, you can either provide the job definition inline or reference a job definition in a query parameter.
-
-
-
-These activities are tracked for each asset type:
-
-
-
-Event in Activity Tracker for Watson Machine Learning repository assets
-
- Action Description
-
- pm-20..list List all of the specified asset type.
- pm-20..create Create one of the specified asset types.
- pm-20..delete Delete one of the specified asset types.
- pm-20..update Update a specified asset type.
- pm-20..read View a specified asset type.
- pm-20..add Add a specified asset type.
-
-
-
-"
-6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8_21,6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8," Events for model evaluation (Watson OpenScale)
-
-"
-6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8_22,6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8," Events for public APIs
-
-
-
-Events in Activity Tracker for Watson OpenScale public APIs
-
- Action Description
-
- aiopenscale.metrics.create Store metric in the Watson OpenScale instance
- aiopenscale.payload.create Log payload in the Watson OpenScale instance
-
-
-
-"
-6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8_23,6E25D98279484E1D63CDEEFDD1D6A9F1917F1BA8," Events for private APIs
-
-
-
-Events in Activity Tracker for Watson OpenScale private APIs
-
- Action Description
-
- aiopenscale.datamart.configure Configure the Watson OpenScale instance
- aiopenscale.datamart.delete Delete the Watson OpenScale instance
- aiopenscale.binding.create Add service binding to the Watson OpenScale instance
- aiopenscale.binding.delete Delete service binding from the Watson OpenScale instance
- aiopenscale.subscription.create Add subscription to the Watson OpenScale instance
- aiopenscale.subscription.delete Delete subscription from the Watson OpenScale instance
-
-
-
-Parent topic:[Administration](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/administer-accounts.html)
-"
-2C6B0F77C4CA0CAFB52E1FE3E10800D56015CADF_0,2C6B0F77C4CA0CAFB52E1FE3E10800D56015CADF," Creating and managing IBM Cloud services
-
-You can create IBM Cloud service instances within IBM watsonx from the Services catalog.
-
-Prerequisite : You must be [signed up for watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/signup-wx.html).
-
-Required permissions : For creating or managing a service instance, you must have Administrator or Editor platform access roles in the IBM Cloud account for IBM watsonx. If you signed up for IBM watsonx with your own IBM Cloud account, you are the owner of the account. Otherwise, you can [check your IBM Cloud account roles and permissions](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/your-roles.html).
-
-"
-2C6B0F77C4CA0CAFB52E1FE3E10800D56015CADF_1,2C6B0F77C4CA0CAFB52E1FE3E10800D56015CADF," Creating a service
-
-To view the Services catalog, select Administration > Services > Services catalog from the main menu. For a description of each service, see [Services](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/cloud-services.html).
-
-To check which service instances you have, select Administration > Services > Service instances from the main menu. You can filter which services you see by resource group, organization, and region.
-
-To create a service:
-
-
-
-1. Log in to IBM watsonx.
-2. Select Administration > Services > Services catalog from the main menu.
-3. Click the service you want to create.
-4. Specify the IBM Cloud service region.
-5. Select a plan.
-6. If necessary, select the resource group or organization.
-7. Click Create.
-
-
-
-"
-2C6B0F77C4CA0CAFB52E1FE3E10800D56015CADF_2,2C6B0F77C4CA0CAFB52E1FE3E10800D56015CADF," Managing services
-
-To manage a service:
-
-
-
-1. Select Administration > Services > Services instances from the main menu.
-2. Click the Action menu next to the service name and select Manage in IBM Cloud. The service page in IBM Cloud opens in a separate browser tab.
-3. To change pricing plans, select Plan and choose the desired plan.
-
-
-
-"
-2C6B0F77C4CA0CAFB52E1FE3E10800D56015CADF_3,2C6B0F77C4CA0CAFB52E1FE3E10800D56015CADF," Learn more
-
-
-
-* [Associate a service with a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assoc-services.html)
-* [Managing the platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html)
-
-
-
-Parent topic:[IBM Cloud services](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/cloud-services.html)
-"
-A392BDDEAD4F42155DC83FBA8512775DB313FC53_0,A392BDDEAD4F42155DC83FBA8512775DB313FC53," Securing connections to services with private service endpoints
-
-You can configure isolated connectivity to your cloud-based services for production workloads with IBM Cloud service endpoints. When you enable IBM Cloud service endpoints in your account, you can expose a private network endpoint when you create a resource. You then connect directly to this endpoint over the IBM Cloud private network rather than the public network. Because resources that use private network endpoints don't have an internet-routable IP address, connections to these resources are more secure.
-
-To use service endpoints:
-
-
-
-1. Enable virtual routing and forwarding (VRF) in your account, if necessary, and enable the use of service endpoints.
-2. Create services that support VRF and service endpoints.
-
-
-
-See [Enabling VRF and service endpoints](https://cloud.ibm.com/docs/account?topic=account-vrf-service-endpoint).
-
-"
-A392BDDEAD4F42155DC83FBA8512775DB313FC53_1,A392BDDEAD4F42155DC83FBA8512775DB313FC53," Learn more
-
-
-
-* [Secure access to services using service endpoints](https://cloud.ibm.com/docs/account?topic=account-service-endpoints-overview)
-* [Enabling VRF and service endpoints](https://cloud.ibm.com/docs/account?topic=account-vrf-service-endpoint)
-* [List of services that support service endpoints](https://cloud.ibm.com/docs/account?topic=account-vrf-service-endpointuse-service-endpoint)
-
-
-
-Parent topic:[Security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-overview.html)
-"
-71C98B1AE9BB65177C030CF1DE6760D41B7D7DF5_0,71C98B1AE9BB65177C030CF1DE6760D41B7D7DF5," Firewall access for Cloud Object Storage
-
-Private IP addresses are required when IBM watsonx and Cloud Object Storage are located on the same network. When creating a connection to a Cloud Object Storage bucket that is protected by a firewall on the same network as IBM watsonx, the connector automatically maps to private IP addresses for IBM watsonx. The private IP addresses must be added to a Bucket access policy to allow inbound connections from IBM watsonx.
-
-Follow these steps to search the private IP addresses for the IBM watsonx cluster and add them to the Bucket access policy:
-
-
-
-1. Go to the Administration > Cloud integrations page.
-2. Click the Firewall configuration link to view the list of IP ranges used by IBM watsonx.
-3. Choose Include private IPs to view the private IP addresses for the IBM watsonx cluster. 
-4. From your IBM Cloud Object Storage instance on IBM Cloud, open the Buckets list and choose the Bucket for the connection.
-5. Copy each of the private IP ranges listed and paste them into the Buckets > Permissions > IP address field on IBM Cloud. 
-
-
-
-"
-71C98B1AE9BB65177C030CF1DE6760D41B7D7DF5_1,71C98B1AE9BB65177C030CF1DE6760D41B7D7DF5," Learn more
-
-
-
-* [IBM Cloud Object Storage connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html)
-* [IBM Cloud docs: Setting a firewall](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-setting-a-firewallfirewall)
-
-
-
-Parent topic:[Configuring firewall access](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall_ovrvw.html)
-"
-3D1B3C707202F30F8995025F356F82ABBE685B93,3D1B3C707202F30F8995025F356F82ABBE685B93," Firewall access for Watson Studio
-
-Inbound firewall access is granted to the Watson Studio service by allowing the IP addresses for IBM watsonx on IBM Cloud.
-
-If Watson Studio is installed behind a firewall, you must add the WebSocket connection for your region to the firewall settings. Enabling the WebSocket connection is required for notebooks and RStudio.
-
-Following are the WebSocket settings for each region:
-
-
-
-Table 1. Regional WebSockets
-
- Location Region WebSocket
-
- United States (Dallas) us-south wss://dataplatform.cloud.ibm.com
- Europe (Frankfurt) eu-de wss://eu-de.dataplatform.cloud.ibm.com
- United Kingdom (London) eu-gb wss://eu-gb.dataplatform.cloud.ibm.com
- Asia Pacific (Tokyo) jp-tok wss://jp-tok.dataplatform.cloud.ibm.com
-
-
-
-Follow these steps to look up the IP addresses for IBM watsonx and allow them on IBM Cloud:
-
-
-
-1. From the main menu, choose Administration > Cloud integrations.
-2. Click Firewall configuration to display the IP addresses for the current region. Use CIDR notation.
-3. Copy each CIDR range into the IP address restrictions for either a user or an account. You must also enter the allowed individual client IP addresses. Enter the IP addresses as a comma-separated list. Then, click Apply.
-4. Repeat for each region to allow access for Watson Studio.
-
-
-
-When you configure the allowed IP addresses for Watson Studio, you include the CIDR ranges for the Watson Studio cluster. You can also allow individual client system IP addresses.
-
-For step-by-step instructions for both user and account restrictions, see [IBM Cloud docs: Allowing specific IP addresses](https://cloud.ibm.com/docs/account?topic=account-ips)
-
-Parent topic:[Configuring firewall access](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall_ovrvw.html)
-"
-34974DEE293BA190CFA1B3383EB2417D0FD4B601_0,34974DEE293BA190CFA1B3383EB2417D0FD4B601," Firewall access for AWS Redshift
-
-Inbound firewall access allows IBM watsonx to connect to Redshift on AWS through the firewall. You need inbound firewall access to work with your data stored in Redshift.
-
-To connect to Redshift from IBM watsonx, you configure inbound access through the Redshift firewall by entering the IP ranges for IBM watsonx into the inbound firewall rules (also called ingress rules). Inbound access through the firewall is configurable if Redshift resides on a public subnet. If Redshift resides on a private subnet, then no access is possible.
-
-Follow these steps to configure inbound firewall access to AWS Redshift:
-
-
-
-1. Go to your provisioned Amazon Redshift cluster.
-2. Select Properties and then scroll down to Network and security settings.
-3. Click the VPC security group.
-
-
-4. Edit the active/default security group.
-
-
-5. Under Inbound rules, change the port range to 5439 to specify the Redshift port. Then select Edit inbound rules > Add rule.
-
-
-6. From IBM watsonx, go to the Administration > Cloud integrations page.
-7. Click the Firewall configuration link to view the list of IP ranges used by IBM watsonx. IP addresses can be viewed in either CIDR notation or as Start and End addresses.
-8. Copy each of the IP ranges listed and paste them into the Source field for inbound firewall rules.
-
-
-
-"
-34974DEE293BA190CFA1B3383EB2417D0FD4B601_1,34974DEE293BA190CFA1B3383EB2417D0FD4B601," Learn more
-
-
-
-* [Working with Redshift-managed VPC endpoints in Amazon Redshift](https://docs.aws.amazon.com/redshift/latest/mgmt/managing-cluster-cross-vpc.html)
-"
-648122BED05213950C23287CB4845FA56660232B_0,648122BED05213950C23287CB4845FA56660232B," Firewall access for Spark
-
-To allow Spark to access data that is located behind a firewall, you add the appropriate IP addresses for your region to the inbound rules for your firewall.
-
-"
-648122BED05213950C23287CB4845FA56660232B_1,648122BED05213950C23287CB4845FA56660232B," Dallas (us-south)
-
-
-
-* dal12 - 169.61.173.96/27, 169.63.15.128/26, 150.239.143.0/25, 169.61.133.240/28, 169.63.56.0/24
-* dal13 - 169.61.57.48/28, 169.62.200.96/27, 169.62.235.64/26
-* dal10 - 169.60.246.160/27, 169.61.194.0/26, 169.46.22.128/26, 52.118.59.0/25
-
-
-
-Parent topic:[Configuring firewall access](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall_ovrvw.html)
-"
-E732DFB3C4F38ABECBA99DA31750FB6291560DB5_0,E732DFB3C4F38ABECBA99DA31750FB6291560DB5," Firewall access for Watson Machine Learning
-
-To allow Watson Machine Learning to access data that is located behind a firewall, you add the appropriate IP addresses for your region to the inbound rules for your firewall.
-
-"
-E732DFB3C4F38ABECBA99DA31750FB6291560DB5_1,E732DFB3C4F38ABECBA99DA31750FB6291560DB5," Dallas (us-south)
-
-
-
-* dal10 - 169.60.39.152/29
-* dal12 - 169.48.198.96/29
-* dal13 - 169.61.47.128/29,169.62.162.88/29
-
-
-
-Parent topic:[Configuring firewall access](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall_ovrvw.html)
-"
-E176531BA95036356A7E5DCA50A8DF728C78CE79_0,E176531BA95036356A7E5DCA50A8DF728C78CE79," Firewall access for the platform
-
-If a data source resides behind a firewall, then IBM watsonx requires inbound access through the firewall in order to make a connection. Inbound firewall access is required whether the data source resides on a third-party cloud provider or in an data center. The method for configuring inbound access varies for different vendor's firewalls. In general, you configure inbound access rules by entering the IP addresses for the IBM watsonx cluster to allow for access by IBM watsonx.
-
-You can enter the IP addresses using the starting and ending addresses for a range or by using CIDR notation. Classless Inter-Domain Routing (CIDR) notation is a compact representation of an IP address and its associated network mask. For start and end addresses, copy each address and enter them in the inbound rules for your firewall. Alternately, copy the addresses in CIDR notation.
-
-The IBM watsonx IP addresses vary by region. The user interface lists the IP addresses for the current region. The IP addresses apply to the base infrastructure for IBM watsonx.
-
-Follow these steps to look up the IP addresses for IBM watsonx cluster:
-
-
-
-1. Go to the Administration > Cloud integrations page.
-2. Click the Firewall configuration link to view the list of IP ranges used by IBM watsonx in your region.
-3. View the IP ranges for the IBM watsonx cluster in either CIDR notation or as Start and End addresses.
-4. Choose Include private IPs to view the private IP addresses. The private IP addresses allow connections to IBM Cloud Object Storage buckets that are behind a firewall. See [Firewall access for Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall-cfg-private-cos.html).
-5. Copy each of the IP ranges listed and paste them into the appropriate security configuration or inbound firewall rules area for your cloud provider.
-
-
-
-"
-E176531BA95036356A7E5DCA50A8DF728C78CE79_1,E176531BA95036356A7E5DCA50A8DF728C78CE79,"For example, if your data source resides on AWS, open the Create Security Group dialog for your AWS Management Console. Paste the IP ranges into the Inbound section for the security group rules.
-
-Parent topic:[Configuring firewall access](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall_ovrvw.html)
-"
-E7B64045AF2C3FF02183FB1CCC036327CEE5E971_0,E7B64045AF2C3FF02183FB1CCC036327CEE5E971," Configuring firewall access
-
-Firewalls protect valuable data from public access. If your data sources reside behind a firewall for protection, and you are not using a Satellite Connector or Satellite location, then you must configure the firewall to allow the IP addresses for IBM watsonx and also for individual services. Otherwise, IBM watsonx is denied access to the data sources.
-
-To allow IBM watsonx access to private data sources, you configure inbound firewall rules using the security mechanisms for your firewall. Inbound firewall rules are not required for connections that use a Satellite Connector or Satellite location, which establishes a link by performing an outbound connection. For more information, see [Connecting to data behind a firewall](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html).
-
-All services in IBM watsonx actively use WebSockets for the proper functioning of the user interface and APIs. Any firewall between the user and the IBM watsonx domain must allow HTTPUpgrade. If IBM watsonx is installed behind a firewall, traffic for the wss:// protocol must be enabled.
-
-"
-E7B64045AF2C3FF02183FB1CCC036327CEE5E971_1,E7B64045AF2C3FF02183FB1CCC036327CEE5E971," Configuring inbound access rules for firewalls
-
-If data sources reside behind a firewall, then inbound access rules are required for IBM watsonx. Inbound firewall rules protect the network against incoming traffic from the internet. The following scenarios require inbound access rules through a firewall:
-
-
-
-* [Firewall access for IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall_cfg.html)
-* [Firewall access for Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall-cfg-private-cos.html)
-* [Firewall access for AWS Redshift](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall-redshift.html)
-* [Firewall access for Watson Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall-dsx.html)
-* [Firewall access for Watson Machine Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall-wml.html)
-* [Firewall access for Spark](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall-spark.html)
-
-
-
-"
-E7B64045AF2C3FF02183FB1CCC036327CEE5E971_2,E7B64045AF2C3FF02183FB1CCC036327CEE5E971," Learn more
-
-
-
-* [Connecting to data behind a firewall](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/securingconn.html)
-
-
-
-Parent topic:[Setting up the platform for administrators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-platform.html)
-"
-9E71F112F9AF39E61A59914D87689B4B8DB13F50_0,9E71F112F9AF39E61A59914D87689B4B8DB13F50," Integrating with AWS
-
-You can configure an integration with the Amazon Web Services (AWS) platform to allow IBM watsonx users access to data sources from AWS. Before proceeding, make sure you have proper permissions. For example, you'll need to be able to create services and credentials in the AWS account.
-
-After you configure an integration, you'll see it under Service instances. You'll see a new AWS tab that lists your instances of Redshift and S3.
-
-To configure an integration with AWS:
-
-
-
-1. Log on to the [AWS Console](https://aws.amazon.com/console/).
-2. From the account drop-down at the upper right, select My Security Credentials.
-3. Under Access keys (access key ID and secret access key), click Create New Access Key.
-4. Copy the key ID and secret.
-
-Important: Write down your key ID and secret and store them in a safe place.
-5. In IBM watsonx, under Administration > Cloud integrations, go to the AWS tab, enable integration, and then paste the access key ID and access key secret into the appropriate fields.
-6. If you need to access Redshift, [configure firewall access](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/firewall-redshift.html).
-7. Confirm that you can see your AWS services. From the main menu, choose Administration > Services > Services instances. Click the AWS tab to see those services.
-
-
-
-Now users who have credentials to your AWS services can [create connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) to them by selecting them on the Add connection page. Then they can access data from those connections by [creating connected data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html).
-
-"
-9E71F112F9AF39E61A59914D87689B4B8DB13F50_1,9E71F112F9AF39E61A59914D87689B4B8DB13F50," Next steps
-
-
-
-* [Set up a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html)
-* [Create connections in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html)
-
-
-
-Parent topic:[Integrations with other cloud platforms](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/int-cloud.html)
-"
-496C8703EBA5C4C6BCD6D65EE60D3E768F1BF071_0,496C8703EBA5C4C6BCD6D65EE60D3E768F1BF071," Integrating with Microsoft Azure
-
-You can configure an integration with the Microsoft Azure platform to allow IBM watsonx users access to data sources from Microsoft Azure. Before proceeding, make sure you have proper permissions. For example, you'll need permission in your subscription to create an application integration in Azure Active Directory.
-
-After you configure an integration, you'll see it under Service instances. You'll see a new Azure tab that lists your instances of Data Lake Storage Gen1 and SQL Database.
-
-To configure an integration with Microsoft Azure:
-
-
-
-1. Log on to your Microsoft Azure account at [https://portal.azure.com](https://portal.azure.com).
-2. Navigate to the Subscriptions panel and copy your subscription ID.
-
-
-
-
-
-1. In IBM watsonx, go to Administration > Cloud integrations and click the Azure tab. Paste the subscription ID you copied in the previous step into the Subscription ID field.
-
-
-
-
-
-1. In Microsoft Azure Active Directory, navigate to Manage > App registrations and click New registration to register an application. Give it a name such as IBM integration and select the desired option for supported account types.
-
-
-
-
-
-1. Copy the Application (client) ID and the Tenant ID and paste them into the appropriate fields on the IBM watsonx Integrations page, as you did with the subscription ID.
-
-
-
-
-
-1. In Microsoft Azure, navigate to Certificates & secrets > New client secret to create a new secret.
-
-"
-496C8703EBA5C4C6BCD6D65EE60D3E768F1BF071_1,496C8703EBA5C4C6BCD6D65EE60D3E768F1BF071,"Important!
-
-
-
-* Write down your secret and store it in a safe place. After you leave this page, you won't be able to retrieve the secret again. You'd need to delete the secret and create a new one.
-* If you ever need to revoke the secret for some reason, you can simply delete it from this page.
-* Pay attention to the expiration date. When the secret expires, integration will stop working.
-
-
-
-2. Copy the secret from Microsoft Azure and paste it into the appropriate field on the Integrations page as you did with the subscription ID and client ID.
-3. Configure [firewall access](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/int-azure.html?context=cdpaas&locale=enfirewall).
-4. Confirm that you can see your Azure services. From the main menu, choose Administration > Services > Services instances. Click the Azure tab to see those services.
-
-
-
-Now users who have credentials to your Azure services can [create connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) to them by selecting them on the Add connection page. Then they can access data from those connections by [creating connected data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html).
-
-"
-496C8703EBA5C4C6BCD6D65EE60D3E768F1BF071_2,496C8703EBA5C4C6BCD6D65EE60D3E768F1BF071," Configuring firewall access
-
-You must also configure access so IBM watsonx can access data through the firewall.
-
-For Microsoft Azure SQL Database firewall:
-
-
-
-1. Open the database instance in Microsoft Azure.
-2. From the top list of actions, select Set server firewall.
-3. Set Deny public network access to No.
-4. In a separate tab or window, open IBM watsonx and go to Administration > Cloud integrations. In the Firewall configuration panel, for each firewall IP range, copy the start and end address values into the list of rules in the Microsoft Azure QL Database firewall.
-
-
-
-For Microsoft Azure Data Lake Storage Gen1 firewall:
-
-
-
-1. Open the Data Lake instance.
-2. Go to Settings > Firewall and virtual networks.
-3. In a separate tab or window, open IBM watsonx and go to Administration > Cloud integrations. In the Firewall configuration panel, for each firewall IP range, copy the start and end address values into the list of rules under Firewall in the Data Lake instance.
-
-
-
-You can now create connections, preview data from Microsoft Azure data sources, and access Microsoft Azure data in Notebooks, Data Refinery, SPSS Modeler, and other tools in projects and in catalogs. You can see your Microsoft Azure instances under Services > Service instances.
-
-"
-496C8703EBA5C4C6BCD6D65EE60D3E768F1BF071_3,496C8703EBA5C4C6BCD6D65EE60D3E768F1BF071," Next steps
-
-
-
-* [Set up a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html)
-* [Create connections in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html)
-
-
-
-Parent topic:[Integrations with other cloud platforms](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/int-cloud.html)
-"
-72B9EC702C95AC86DE08E0FB8F8C3404B1228B5F,72B9EC702C95AC86DE08E0FB8F8C3404B1228B5F," Integrations with other cloud platforms
-
-You can integrate IBM watsonx with other cloud platforms to configure access to the data source services on that platform. Then, users can easily create connections to those data source services and access the data in those data sources.
-
-You need to be the Account Owner or Administrator for the IBM Cloud account to configure integrations with other cloud platforms.
-
-You must have the proper permissions in your cloud platform subscription before you can configure an integration. If you are using Amazon Web Services (AWS) Redshift (or other AWS data sources) or Microsoft Azure, you must also configure firewall access to allow IBM watsonx to access data.
-
-After you configure integration and firewall access with another cloud platform, you can access and connect to the services on that platform:
-
-
-
-* The service instances for that platform are shown on the Service instances page. From the main menu, choose Administration > Services > Services instances. Each cloud platform that you integrate with has its own page.
-* The data source services in that platform are shown when you create a connection. Start adding a connection in a project, catalog, or other workspace. When the Add connection page appears, click the To service tab. The services are listed by cloud platform.
-
-
-
-You can configure integrations with these cloud platforms:
-
-
-
-* [Amazon Web Services (AWS)](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/int-aws.html)
-* [Microsoft Azure](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/int-azure.html)
-* [Google Cloud Platform (GCP)](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/int-google.html)
-
-
-
-Parent topic:[Services and integrations](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/svc-int.html)
-"
-CB81643BE8EE3B3DC2F6BCCDB77BD2CEC32C8926_0,CB81643BE8EE3B3DC2F6BCCDB77BD2CEC32C8926," Integrating with Google Cloud Platform
-
-You can configure an integration with the Google Cloud Platform (GCP) to allow IBM watsonx users to access data sources from GCP. Before proceeding, make sure you have proper permissions.
-
-After you configure an integration, you'll see it under Service instances. For example, you'll see a new GCP tab that lists your BigQuery data sets and Storage buckets.
-
-To configure an integration with GCP:
-
-
-
-1. Log on to the Google Cloud Platform at [https://console.cloud.google.com](https://console.cloud.google.com).
-2. Go to IAM & Admin > Service Accounts.
-3. Open your project and then click CREATE SERVICE ACCOUNT.1. Specify a name and description for the new service account and click CREATE. Specify other options as desired and click DONE.1. Click the actions menu next to the service instance and select Create key. For key type, select JSON and then click CREATE. The JSON key file will be downloaded to your machine.
-
-Important: Write down your key ID and secret and store them in a sStore the key file in a secure location.
-4. In IBM watsonx, under Administrator > Cloud integrations, go to the GCP tab, enable integration, and then paste the contents from the JSON key file into the text field. Only certain properties from the JSON will be stored, and the private_key property will be encrypted.
-5. Go back to Google Cloud Platform and edit the service account you created previously. Add the following roles:
-6. Confirm that you can see your GCP services. From the main menu, choose Administration > Services > Services instances. Click the GCP tab to see those services, for example, BigQuery data sets and Storage buckets.
-
-
-
-Now users who have credentials to your GCP services can can [create connections](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) to them by selecting them on the Add connection page. Then they can access data from those connections by [creating connected data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html).
-
-"
-CB81643BE8EE3B3DC2F6BCCDB77BD2CEC32C8926_1,CB81643BE8EE3B3DC2F6BCCDB77BD2CEC32C8926," Next steps
-
-
-
-* [Set up a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html)
-* [Create connections in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html)
-
-
-
-Parent topic:
-"
-E6A30655CBD3745ACBCBF18E79B4C3979CA6B35B_0,E6A30655CBD3745ACBCBF18E79B4C3979CA6B35B," Managing your IBM Cloud account
-
-You can manage your IBM Cloud account to view billing and usage, manage account users, and manage services.
-
-Required permissions : You must be the IBM Cloud account owner or administrator.
-
-To manage your IBM Cloud account, choose Administration > Account and billing > Account > Manage in IBM Cloud from IBM watsonx. Then from the IBM Cloud console, choose an option from the Manage menu.
-
-
-
-* Account: See [Adding orgs and spaces](https://cloud.ibm.com/docs/account?topic=account-orgsspacesusersorgsspacesusers) and [Managing resource groups](https://cloud.ibm.com/docs/account?topic=account-rgs).
-* Billing and Usage: See [How you're charged](https://cloud.ibm.com/docs/billing-usage?topic=billing-usage-chargescharges).
-* Access (IAM): See [Inviting users](https://cloud.ibm.com/docs/account?topic=account-access-getstarted).
-
-
-
-"
-E6A30655CBD3745ACBCBF18E79B4C3979CA6B35B_1,E6A30655CBD3745ACBCBF18E79B4C3979CA6B35B," Learn more
-
-
-
-* [Activity Tracker events](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/at-events.html)
-* [Manage your settings](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.html)
-* [Set up IBM watsonx for your organization](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-platform.html)
-* [Manage users and access](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-access.html)
-* [IBM Cloud SAML Federation Guide](https://www.ibm.com/cloud/blog/ibm-cloud-saml-federation-guide)
-* [Delete your IBM Cloud account](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.htmldeletecloud)
-* [Check the status of IBM Cloud services](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/service-status.html)
-* [Configure private service endpoints](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/endpoints-vrf.html)
-
-
-
-Parent topic:[Administration](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/administer-accounts.html)
-"
-BEDA84E76E7F8FA5594F63E640DC17B4F6CB3E5E_0,BEDA84E76E7F8FA5594F63E640DC17B4F6CB3E5E," Monitoring account resource usage
-
-Some service plans charge for compute usage and other types of resource usage. If you are the IBM Cloud account owner or administrator, you can monitor the resources usage to ensure the limits are not exceeded.
-
-For Lite plans, you cannot exceed the limits of the plan. You must wait until the start of your next billing month to use resources that are calculated monthly. Alternatively, you can [upgrade to a paid plan](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/upgrade.html).
-
-For most paid plans, you pay for the resources that the tools and processes that are provided by the service consume each month.
-
-To see the costs of your plan, log in to IBM Cloud, open your service instance from your IBM Cloud dashboard, and click Plan.
-
-
-
-* [Capacity unit hours (CUH) for compute usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html?context=cdpaas&locale=encompute)
-* [Resource units for foundation model inferencing](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html?context=cdpaas&locale=enrus)
-* [Monitor monthly billing](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html?context=cdpaas&locale=enbilling)
-
-
-
-"
-BEDA84E76E7F8FA5594F63E640DC17B4F6CB3E5E_1,BEDA84E76E7F8FA5594F63E640DC17B4F6CB3E5E," Capacity unit hours (CUH) for compute usage
-
-Many tools consume compute usage that is measured in capacity unit hours (CUH). A capacity unit hour is a specific amount of compute capability with a set cost.
-
-"
-BEDA84E76E7F8FA5594F63E640DC17B4F6CB3E5E_2,BEDA84E76E7F8FA5594F63E640DC17B4F6CB3E5E," How compute usage is calculated
-
-Different types of processes and different levels of compute power are billed at different rates of capacity units per hour. For example, the hourly rate for a data profiling process is 6 capacity units.
-
-Compute usage for Watson Studio is charged by the minute, with a minimum charge of 10 minutes (0.16 hours). Compute usage for Watson Machine Learning is charged by the minute with a minimum charge of one minute.
-
-Compute usage is calculated by adding the minimum number of minutes billed for each process plus the number of minutes the process runs beyond the minimum minutes, then multiplying the total by the capacity unit rate for the process.
-
-The following table shows examples of how the billed CUH is calculated.
-
-
-
- Rate Usage time Calculation Total CUH billed
-
- 1 CUH/hour 1 hour 1 hour * 1 CUH/hour 1 CUH
- 2 CUH/hour 45 minutes 0.75 hours * 2 CUH/hour 1.5 CUH
- 6 CUH/hour 5 minutes 0.16 hours * 6 CUH/hour 0.96 CUH. The minimum charge for Watson Studio applies.
- 6 CUH/hour 30 minutes 0.5 hours * 6 CUH/hour 3 CUH
- 6 CUH/hour 1 hour 1 hour * 6 CUH/hour 6 CUH
-
-
-
-"
-BEDA84E76E7F8FA5594F63E640DC17B4F6CB3E5E_3,BEDA84E76E7F8FA5594F63E640DC17B4F6CB3E5E," Processes that consume capacity unit hours
-
-Some types of processes, such as AutoAI and Federated Learning, have a single compute rate for the runtime. However, with many tools you have a choice of compute resources for the runtime. The notebook editor, Data Refinery, SPSS Modeler, and other tools have different rates that reflect the memory and compute power for the environment. Environments with more memory and compute power consume capacity unit hours at a higher rate.
-
-This table shows each process that consumes CUH, where it runs, and against which service CUH is billed, and whether you can choose from more than one environment. Follow the links to view the available CUH rates for each process.
-
-
-
- Tool or Process Workspace Service that provides CUH Multiple CUH rates?
-
- [Notebook editor](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html) Project Watson Studio, Analytics Engine (Spark) Multiple rates
- [Invoking the machine learning API from a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.htmlwml) Project Watson Machine Learning Multiple rates
- [Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spark-dr-envs.html) Project Watson Studio Multiple rates
- [SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-envs.html) Project Watson Studio Multiple rates
- [RStudio IDE](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-envs.html) Project Watson Studio Multiple rates
- [AutoAI experiments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-autoai.html) Project Watson Machine Learning Multiple rates
-"
-BEDA84E76E7F8FA5594F63E640DC17B4F6CB3E5E_4,BEDA84E76E7F8FA5594F63E640DC17B4F6CB3E5E," [Decision Optimization experiments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-decisionopt.html) Spaces Watson Machine Learning Multiple rates
- [Running deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-cuh-deploy-spaces.html) Spaces Watson Machine Learning Multiple rates
- [Profiling](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.htmlprofiling) Project Watson Studio One rate
- [Synthetic Data Generator](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/synthetic-envs.html) Project Watson Studio One rate
-
-
-
-"
-BEDA84E76E7F8FA5594F63E640DC17B4F6CB3E5E_5,BEDA84E76E7F8FA5594F63E640DC17B4F6CB3E5E," Monitoring compute usage
-
-You can monitor compute usage for all services at the account level. To view the monthly CUH usage for a service, open the service instance from your IBM Cloud dashboard and click Plan.
-
-You can also monitor compute usage in a project on the Environments page on the Manage tab.
-
-To see the total amount of capacity unit hours that are used and that are remaining for Watson Studio and Watson Machine Learning, look at the Environment Runtimes page. From the navigation menu, select Administration > Environment runtimes. The Environment Runtimes page shows details of the [CUH used by environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/track-runtime-usage.htmltrack-account). You can calculate the amount of CUH you use for data flows and profiling by subtracting the amount used by environments from the total amount used.
-
-"
-BEDA84E76E7F8FA5594F63E640DC17B4F6CB3E5E_6,BEDA84E76E7F8FA5594F63E640DC17B4F6CB3E5E," Resource units for foundation model inferencing
-
-Calling a foundation model to generate output in response to a prompt is known as inferencing. Foundation model inferencing is measure in resource units (RU). Each RU equals 1,000 tokens. A token is a basic unit of text (typically 4 characters or 0.75 words) used in the input or output for a foundation model prompt. For details on tokens, see [Tokens](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tokens.html).
-
-Resource unit billing is based on the rate of the foundation model class multipled by the number of tokens. Foundation models are classified into three classes. See [Resource unit metering](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.htmlru-metering).
-
-Note: You do not consume tokens when you use the generative AI search and answer app for this documentation site.
-
-"
-BEDA84E76E7F8FA5594F63E640DC17B4F6CB3E5E_7,BEDA84E76E7F8FA5594F63E640DC17B4F6CB3E5E," Monitoring token usage for foundation model inferencing
-
-You can monitor foundation model token usage in a project on the Environments page on the Manage tab.
-
-"
-BEDA84E76E7F8FA5594F63E640DC17B4F6CB3E5E_8,BEDA84E76E7F8FA5594F63E640DC17B4F6CB3E5E," Monitor monthly billing
-
-You must be an IBM Cloud account owner or administrator to see resource usage information.
-
-To view a summary of your monthly billing, from the navigation menu, choose Administration > Account and billing > Billing and usage. The IBM Cloud usage dashboard opens. To view the usage for each service, in the Usage summary section, click View usage.
-
-"
-BEDA84E76E7F8FA5594F63E640DC17B4F6CB3E5E_9,BEDA84E76E7F8FA5594F63E640DC17B4F6CB3E5E," Learn more
-
-
-
-* [Choosing compute resources for running tools in projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html)
-* [Upgrade services](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/upgrade.html)
-* [Environments compute usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/track-runtime-usage.htmltrack-account)
-* [Watson Studio offering plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/ws-plans.html)
-* [Watson Machine Learning plans and compute usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html)
-
-
-
-Parent topic:[Managing the platform](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html)
-"
-C4E83640891EA5D02EAE76027D05FDEFE2C4EFFE_0,C4E83640891EA5D02EAE76027D05FDEFE2C4EFFE," Managing your settings
-
-You can manage your profile, services, integrations, and notifications while logged in to IBM watsonx.
-
-
-
-* [Manage your profile](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.html?context=cdpaas&locale=enprofile)
-* [Manage user API keys](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-apikeys.html)
-* [Switch accounts](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.html?context=cdpaas&locale=enaccount)
-* [Manage your services](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html)
-* [Manage your integrations](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.html?context=cdpaas&locale=enintegrations)
-* [Manage your notification settings](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.html?context=cdpaas&locale=enbell)
-* [View and personalize your project summary](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.html?context=cdpaas&locale=enproject-summary)
-
-
-
-"
-C4E83640891EA5D02EAE76027D05FDEFE2C4EFFE_1,C4E83640891EA5D02EAE76027D05FDEFE2C4EFFE," Manage your profile
-
-You can manage your profile on the Profile page by clicking your avatar in the banner and then clicking Profile and settings.
-
-You can make these changes to your profile:
-
-
-
-* Add or change your avatar photo.
-* Change your IBMid or password. Do not change your IBMid (email address) after you register with the IBM watsonx platform. The IBMid (email address) uniquely identifies users in the platform and also authorizes access to various IBM watsonx resources, including projects, spaces, models, and catalogs. If you change your IBMid (email address) in your IBM Cloud profile after you have registered with IBM watsonx, you will lose access to the platform and associated resources.
-* Set your service locations filters by resource group and location. The filters apply throughout the platform. For example, the Service instances page that you access through the Services menu shows only the filtered services. Ensure you have selected the region where Watson Studio is located, for example, Dallas, as well as the Global location. Global is required to provide access to your IBM Cloud Object Storage instance.
-* Access your IBM Cloud account.
-* [Leave IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/stopapps.htmldeactivate).
-
-
-
-"
-C4E83640891EA5D02EAE76027D05FDEFE2C4EFFE_2,C4E83640891EA5D02EAE76027D05FDEFE2C4EFFE," Switch accounts
-
-If you are added to a shared IBM Cloud account that is different from your individual account, you can switch your account by selecting a different account from the account list in the menu bar, next to your avatar.
-
-"
-C4E83640891EA5D02EAE76027D05FDEFE2C4EFFE_3,C4E83640891EA5D02EAE76027D05FDEFE2C4EFFE," Manage your integrations
-
-To set up or modify an integration to GitHub:
-
-
-
-1. Click your avatar in the banner.
-2. Click Profile and settings.
-3. Click the Git integrations tab.
-
-
-
-See [Publish notebooks on GitHub](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/github-integration.html).
-
-"
-C4E83640891EA5D02EAE76027D05FDEFE2C4EFFE_4,C4E83640891EA5D02EAE76027D05FDEFE2C4EFFE," Manage your notification settings
-
-To see your notification settings, click the notification bell icon and then click the settings icon.
-
-You can make these changes to your notification settings:
-
-
-
-* Specify to receive push notifications that appear briefly on screen. If you select Do not disturb, you continue to see notifications on the home page and the number of notifications on the bell.
-* Specify to receive notifications by email.
-* Specify for which projects or spaces you receive notifications.
-
-
-
-"
-C4E83640891EA5D02EAE76027D05FDEFE2C4EFFE_5,C4E83640891EA5D02EAE76027D05FDEFE2C4EFFE," View and personalize your project summary
-
-Use the Overview page of a project to view a summary of what's happening in your project. You can jump back into your most recent work and keep up to date with alerts, tasks, project history, and compute usage.
-
-View recent asset activity in the Assets pane on the Overview page, and filter the assets by selecting By you or By all using the dropdown. Selecting By you lists assets edited by you, ordered by most recent at the top. Selecting By all lists assets edited by others and also by you, ordered by most recent at the top.
-
-You can use the readme file on the Overview page to document the status or results of the project. The readme file uses standard [Markdown formatting](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/markd-jupyter.html). Collaborators with the Admin or Editor role can edit the readme file.
-
-"
-C4E83640891EA5D02EAE76027D05FDEFE2C4EFFE_6,C4E83640891EA5D02EAE76027D05FDEFE2C4EFFE," Learn more
-
-
-
-* [Managing your IBM Cloud account](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/manage-account.html)
-* [Managing your services](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/create-services.htmlmanage)
-
-
-
-Parent topic:[Administration](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/administer-accounts.html)
-"
-FA4ACDC5DB590992630C704D00DEFB142F2F0489_0,FA4ACDC5DB590992630C704D00DEFB142F2F0489," Object storage for workspaces
-
-You must choose an IBM Cloud Object Storage instance when you create a project, catalog, or deployment space workspace. Information that is stored in IBM Cloud Object Storage is encrypted and resilient. Each workspace has its own dedicated bucket.
-
-You can encrypt the Cloud Object Storage instance that you use for workspaces with your own key. See [Encrypt IBM Cloud Object Storage with your own key](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.htmlbyok). The Locations in each user's Profile must include the Global location to allow access to Cloud Object Storage.
-
-When you create a workspace, the Cloud Object Storage bucket defaults to Regional resiliency. Regional buckets distribute data across several data centers that are within the same metropolitan area. If one of these data centers suffers an outage or destruction, availability and performance are not affected.
-
-If you are the account owner or administrator, you administer Cloud Object Storage from the Resource list > Storage page on the IBM Cloud dashboard. For example, you can upload and download assets, manage buckets, and configure credentials and other security settings for the Cloud Object Storage instance.
-
-Follow these steps to manage the Cloud Object Storage instance on IBM Cloud:
-
-
-
-1. Select a project from the Project list.
-2. Click the Manage tab.
-3. On the General page, locate the Storage section that displays the bucket name for the project.
-4. Select Manage in IBM Cloud to open the Cloud Object Storage Buckets list.
-5. Select the bucket name for the project to display a list of assets.
-6. Checkmark an asset to download it or perform other tasks as needed.
-
-
-
-Watch this video to see how to manage an object storage instance.
-
-Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform.
-
-This video provides a visual method to learn the concepts and tasks in this documentation.
-
-
-
-* Transcript
-
-Synchronize transcript with video
-
-
-
- Time Transcript
-
- 00:00 This video shows how to manage an IBM Cloud Object Storage instance.
- 00:06 When you create a Watson Studio project, an IBM Cloud Object Storage instance is associated with the project.
- 00:15 On the Manage tab, you'll see the associated object storage instance and have the option to manage it in IBM Cloud.
-"
-FA4ACDC5DB590992630C704D00DEFB142F2F0489_1,FA4ACDC5DB590992630C704D00DEFB142F2F0489," 00:24 IBM Cloud Object Storage uses buckets to organize your data.
- 00:30 You can see that this instance contains a bucket with the ""jupyternotebooks"" prefix, which was created when the ""Jupyter Notebooks"" project was created.
- 00:41 If you open that bucket, you'll see all of the files that you added to that project.
- 00:47 From here, you can download an object or delete it from the bucket.
- 00:53 You can also view the object SQL URL to access that object from your application.
- 01:00 You can add objects to the bucket from here.
- 01:03 Just browse to select the file and wait for it to upload to storage.
- 01:10 And then that file will be available in the Files slide-out panel in the project.
- 01:16 Let's create a bucket.
- 01:20 You can create a Standard or Archive bucket, based on predefined settings, or create a custom bucket.
- 01:28 Provide a bucket name, which must be unique across the IBM Cloud Object Storage system.
- 01:35 Select a resiliency.
- 01:38 Cross Region provides higher availability and durability and Regional provides higher performance.
- 01:45 The Single Site option will only distribute data across devices within a single site.
- 01:52 Then select the location based on workload proximity.
- 01:57 Next, select a storage class, which defines the cost of storing data based on frequency of access.
- 02:05 Smart Tier provides automatic cost optimization for your storage.
- 02:11 Standard indicates frequent access.
- 02:14 Vault is for less frequent access.
- 02:18 And Cold Vault is for rare access.
- 02:21 There are other, optional settings to add rules, keys, and services.
- 02:27 Refer to the documentation for more details on these options.
- 02:32 When you're ready, create the bucket.
- 02:35 And, from here, you could add files to that bucket.
- 02:40 On the Access policies panel, you can manage access to buckets using IAM policies - that's Identity and Access Management.
- 02:50 On the Configuration panel, you'll find information about Key Protect encryption keys, as well as the bucket instance CRN and endpoints to access the data in the buckets from your application.
-"
-FA4ACDC5DB590992630C704D00DEFB142F2F0489_2,FA4ACDC5DB590992630C704D00DEFB142F2F0489," 03:01 You can also find some of the same information on the Endpoints panel.
- 03:06 On the Service credentials panel, you'll find the API and access keys to authenticate with your instance from your application.
- 03:15 You can also connect the object storage to a Cloud Foundry application, check usage details, and view your plan details.
- 03:26 Find more videos in the Cloud Pak for Data as a Service documentation.
-
-
-
-
-
-"
-FA4ACDC5DB590992630C704D00DEFB142F2F0489_3,FA4ACDC5DB590992630C704D00DEFB142F2F0489," Learn more
-
-
-
-* [Setting up IBM Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_cos.html)
-* [IBM Cloud docs: Getting started with IBM Cloud Object Storage](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-getting-started-cloud-object-storage)
-* [IBM Cloud docs: Endpoints and storage locations](https://cloud.ibm.com/docs/cloud-object-storage/basics?topic=cloud-object-storage-endpoints)
-* [Troubleshooting Cloud Object Storage for projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/troubleshoot/troubleshoot-cos.html)
-
-
-
-Parent topic:[Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html)
-"
-26FB8B86499454EFD078384D70B02917D1C7DAE1,26FB8B86499454EFD078384D70B02917D1C7DAE1," Services and integrations
-
-You can extend the functionality of the platform by provisioning other services and components, and integrating with other cloud platforms.
-
-
-
-* [Provision instances of services and components from the Services catalog](https://dataplatform.cloud.ibm.com/docs/content/svc-welcome/cloud-services.html). Add service instances and components to the IBM Cloud account to add functionality to the platform. You must be the owner or be assigned the Administrator or Editor role in the IBM Cloud account for IBM watsonx to provision service instances.
-* [Integrate with other cloud platforms](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/int-cloud.html). Allow users to easily create connections to data sources on those cloud platforms. You must have the required roles or permissions on the other cloud platform accounts.
-"
-0B4E34A88C9328EC6E0CC8A690B466F441E5EFC6_0,0B4E34A88C9328EC6E0CC8A690B466F441E5EFC6," Upgrading services on IBM watsonx
-
-When you're ready to upgrade services, you can upgrade in place without losing any of your work or data.
-
-Each service has its own plan and is independent of other plans.
-
-Required permissions : You must have an IBM Cloud IAM access policy with the Editor or Administrator role on all account management services.
-
-"
-0B4E34A88C9328EC6E0CC8A690B466F441E5EFC6_1,0B4E34A88C9328EC6E0CC8A690B466F441E5EFC6," Step 1: Update your IBM Cloud account
-
-You can skip this step if your IBM Cloud account has billing information with a Pay-As-You-Go or a subscription plan.
-
-You must update your IBM Cloud account in the following circumstances:
-
-
-
-* You have a Trial account from signing up for watsonx.
-* You have a Trial account that you [registered through an academic institution](https://ibm.biz/academic).
-* You have a [Lite account](https://cloud.ibm.com/docs/account?topic=account-accountsliteaccount) that you created before 25 October 2021.
-* You want to change a Pay-As-You-Go plan to a subscription plan.
-
-
-
-For instructions on updating your IBM Cloud account, see [Update your IBM Cloud account](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-account.htmlpaid-account).
-
-"
-0B4E34A88C9328EC6E0CC8A690B466F441E5EFC6_2,0B4E34A88C9328EC6E0CC8A690B466F441E5EFC6," Step 2: Upgrade your service plans
-
-You can upgrade the service plans for services. To upgrade service plans, you must have an IBM Cloud access policy with either the Editor or Administrator platform role for the services.
-
-To upgrade a service plan:
-
-
-
-1. Click Upgrade on the header or choose Administration > Account and billing > Upgrade service plans from the main menu to open the Upgrade service plans page.
-2. Select one or more services to change the service plans.
-3. Click Select plan for each service in the Pricing summary pane. Select the plan from the Services catalog page for the service.
-4. Agree to the terms, then click Buy. Your service plans are instantly updated.
-
-
-
-After the upgrade, the additional features and capacity for the new plan are automatically available. For the following services, the difference between plans can be significant:
-
-
-
-* [Watson Studio offering plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/ws-plans.html)
-* [Watson Machine Learning plans and compute usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html)
-
-
-
-"
-0B4E34A88C9328EC6E0CC8A690B466F441E5EFC6_3,0B4E34A88C9328EC6E0CC8A690B466F441E5EFC6," Learn more
-
-
-
-* [IBM Cloud docs: Account types](https://cloud.ibm.com/docs/account?topic=account-accounts)
-* [IBM Cloud docs: Upgrading your account](https://cloud.ibm.com/docs/account?topic=account-upgrading-account)
-* [Setting up the IBM Cloud account](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/setup-account.html)
-* [Monitoring account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html)
-* [Find your account administrators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.htmlaccountadmin)
-
-
-
-Parent topic:[Managing IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/console/wdp_admin_console.html)
-"
-4BFEB479BCB6BBD28DD18EC423FDC5FB9C39B4B6_0,4BFEB479BCB6BBD28DD18EC423FDC5FB9C39B4B6," Determining your roles and permissions
-
-You have multiple roles within IBM Cloud and IBM watsonx that provide permissions. You can determine what each of your roles are, and, when necessary, who can change your roles.
-
-"
-4BFEB479BCB6BBD28DD18EC423FDC5FB9C39B4B6_1,4BFEB479BCB6BBD28DD18EC423FDC5FB9C39B4B6," Projects and catalogs roles
-
-To determine your role in a project or deployment space, look at the Access Control page on the Manage tab. Your role is listed next to your name or the service ID you use to log in.
-
-The permissions that are associated with each role are specific to the type of workspace:
-
-
-
-* [Project collaborator permissions](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborator-permissions.html)
-* [Deployment space collaborator permissions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/collaborator-permissions-wml.html)
-
-
-
-If you want a different role, ask someone who has the Admin role on the Access Control page to change your role.
-
-"
-4BFEB479BCB6BBD28DD18EC423FDC5FB9C39B4B6_2,4BFEB479BCB6BBD28DD18EC423FDC5FB9C39B4B6," IBM Cloud IAM account and service access roles
-
-You can see your IAM account and service access roles in IBM Cloud.
-
-To see your IAM account and service access roles in IBM Cloud:
-
-
-
-1. From the IBM watsonx main menu, click Administration > Access (IAM).
-2. Click Users, then click your name.
-3. Click the Access policies tab. You might have multiple entries:
-
-
-
-* The All resources in account (including future IAM enabled services) entry shows your general roles for all services in the account.
-* Other entries might show your roles for individual services.
-
-
-
-
-
-If you want the IBM Cloud account administrator role or another role, ask an IBM Cloud account owner or administrator to assign it to you. You can [find your account administrators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.htmlaccountadmin) on your Access (IAM) > Users page in IBM Cloud.
-
-"
-4BFEB479BCB6BBD28DD18EC423FDC5FB9C39B4B6_3,4BFEB479BCB6BBD28DD18EC423FDC5FB9C39B4B6," Learn more
-
-
-
-* [Roles in IBM watsonx](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/roles.html)
-* [Find your IBM Cloud account owner or administrator](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/faq.htmlaccountadmin)
-
-
-
-Parent topic:[Administration](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/administer-accounts.html)
-"
-8B34DED3493E5181B1D19F6D14A9598CFEAA5997_0,8B34DED3493E5181B1D19F6D14A9598CFEAA5997," AI risk atlas
-
-Explore this atlas to understand some of the risks of working with generative AI, foundation models, and machine learning models.
-
-"
-8B34DED3493E5181B1D19F6D14A9598CFEAA5997_1,8B34DED3493E5181B1D19F6D14A9598CFEAA5997," Risks associated with input
-
-"
-8B34DED3493E5181B1D19F6D14A9598CFEAA5997_2,8B34DED3493E5181B1D19F6D14A9598CFEAA5997," Training and tuning phase
-
-
-
-"
-8B34DED3493E5181B1D19F6D14A9598CFEAA5997_3,8B34DED3493E5181B1D19F6D14A9598CFEAA5997," Fairness
-
-[Data bias](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/bias.html)Amplified
-
-"
-8B34DED3493E5181B1D19F6D14A9598CFEAA5997_4,8B34DED3493E5181B1D19F6D14A9598CFEAA5997," Robustness
-
-[Data poisoning](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/data-poisoning.html)Traditional
-
-"
-8B34DED3493E5181B1D19F6D14A9598CFEAA5997_5,8B34DED3493E5181B1D19F6D14A9598CFEAA5997," Value alignment
-
-[Data curation](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/data-curation.html)Amplified
-[Downstream retraining](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/downstream-retraining.html)New
-
-"
-8B34DED3493E5181B1D19F6D14A9598CFEAA5997_6,8B34DED3493E5181B1D19F6D14A9598CFEAA5997," Data laws
-
-[Data transfer](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/data-transfer.html)Traditional
-[Data usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/data-usage.html)Traditional
-[Data aquisition](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/data-aquisition.html)Traditional
-
-"
-8B34DED3493E5181B1D19F6D14A9598CFEAA5997_7,8B34DED3493E5181B1D19F6D14A9598CFEAA5997," Intellectual property
-
-[Data usage rights](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/data-usage-rights.html)Amplified
-[Confidential data disclosure](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/confidential-data-disclosure.html)Traditional
-
-"
-8B34DED3493E5181B1D19F6D14A9598CFEAA5997_8,8B34DED3493E5181B1D19F6D14A9598CFEAA5997," Transparency
-
-[Data transparency](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/data-transparency.html)Amplified
-[Data provenance](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/data-provenance.html)Amplified
-
-"
-8B34DED3493E5181B1D19F6D14A9598CFEAA5997_9,8B34DED3493E5181B1D19F6D14A9598CFEAA5997," Privacy
-
-[Personal information in data](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/personal-information-in-data.html)Traditional
-[Reidentification](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/reidentification.html)Traditional
-[Data privacy rights](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/data-privacy-rights.html)Amplified
-
-"
-8B34DED3493E5181B1D19F6D14A9598CFEAA5997_10,8B34DED3493E5181B1D19F6D14A9598CFEAA5997," Inference phase
-
-
-
-"
-8B34DED3493E5181B1D19F6D14A9598CFEAA5997_11,8B34DED3493E5181B1D19F6D14A9598CFEAA5997," Privacy
-
-[Personal information in prompt](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/personal-information-in-prompt.html)New[Membership inference attack](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/membership-inference-attack.html)Traditional[Attribute inference attack](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/attribute-inference-attack.html)Amplified
-
-"
-8B34DED3493E5181B1D19F6D14A9598CFEAA5997_12,8B34DED3493E5181B1D19F6D14A9598CFEAA5997," Intellectual property
-
-[Confidential data in prompt](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/confidential-data-in-prompt.html)New
-
-"
-8B34DED3493E5181B1D19F6D14A9598CFEAA5997_13,8B34DED3493E5181B1D19F6D14A9598CFEAA5997," Robustness
-
-[Evasion attack](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/evasion-attack.html)Amplified[Extraction attack](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/extraction-attack.html)Amplified[Prompt injection](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/prompt-injection.html)New[Prompt leaking](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/prompt-leaking.html)Amplified
-
-"
-8B34DED3493E5181B1D19F6D14A9598CFEAA5997_14,8B34DED3493E5181B1D19F6D14A9598CFEAA5997," Multi-category
-
-[Prompt priming](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/prompt-priming.html)Amplified[Jailbreaking](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/jailbreaking.html)Amplified
-
-"
-8B34DED3493E5181B1D19F6D14A9598CFEAA5997_15,8B34DED3493E5181B1D19F6D14A9598CFEAA5997," Risks associated with output
-
-
-
-"
-8B34DED3493E5181B1D19F6D14A9598CFEAA5997_16,8B34DED3493E5181B1D19F6D14A9598CFEAA5997," Fairness
-
-[Output bias](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/output-bias.html)New[Decision bias](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/decision-bias.html)New
-
-"
-8B34DED3493E5181B1D19F6D14A9598CFEAA5997_17,8B34DED3493E5181B1D19F6D14A9598CFEAA5997," Intellectual property
-
-[Copyright infringement](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/copyright-infringement.html)New
-
-"
-8B34DED3493E5181B1D19F6D14A9598CFEAA5997_18,8B34DED3493E5181B1D19F6D14A9598CFEAA5997," Value alignment
-
-[Hallucination](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/hallucination.html)New[Toxic output](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/toxic-output.html)New[Trust calibration](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/trust-calibration.html)New[Physical harm](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/physical-harm.html)New[Benign advice](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/benign-advice.html)New[Improper usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/improper-usage.html)New
-
-"
-8B34DED3493E5181B1D19F6D14A9598CFEAA5997_19,8B34DED3493E5181B1D19F6D14A9598CFEAA5997," Misuse
-
-[Spreading disinformation](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/spreading-disinformation.html)Amplified[Toxicity](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/toxicity.html)New[Nonconsensual use](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/nonconsensual-use.html)Amplified[Dangerous use](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/dangerous-use.html)New[Non-disclosure](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/non-disclosure.html)New
-
-"
-8B34DED3493E5181B1D19F6D14A9598CFEAA5997_20,8B34DED3493E5181B1D19F6D14A9598CFEAA5997," Harmful code generation
-
-[Harmful code generation](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/harmful-code-generation.html)New
-
-"
-8B34DED3493E5181B1D19F6D14A9598CFEAA5997_21,8B34DED3493E5181B1D19F6D14A9598CFEAA5997," Privacy
-
-[Personal information in output](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/personal-information-in-output.html)New
-
-"
-8B34DED3493E5181B1D19F6D14A9598CFEAA5997_22,8B34DED3493E5181B1D19F6D14A9598CFEAA5997," Explainability
-
-[Explaining output](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/explaining-output.html)Amplified[Unreliable source attribution](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/unreliable-source-attribution.html)Amplified[Inaccessible training data](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/inaccessible-training-data.html)Amplified[Untraceable attribution](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/untraceable-attribution.html)Amplified
-"
-A304B9E82543C150236ECAD30F1594E1B832B8B1_0,A304B9E82543C150236ECAD30F1594E1B832B8B1," Attribute inference attack
-
-Risks associated with inputInferencePrivacyAmplified
-
-"
-A304B9E82543C150236ECAD30F1594E1B832B8B1_1,A304B9E82543C150236ECAD30F1594E1B832B8B1," Description
-
-An attribute inference attack is used to detect whether certain sensitive features can be inferred about individuals who participated in training a model. These attacks occur when an adversary has some prior knowledge about the training data and uses that knowledge to infer the sensitive data.
-
-"
-A304B9E82543C150236ECAD30F1594E1B832B8B1_2,A304B9E82543C150236ECAD30F1594E1B832B8B1," Why is attribute inference attack a concern for foundation models?
-
-With a successful attack, the attacker can gain valuable information such as sensitive personal information or intellectual property.
-
-Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
-"
-857C8C3489AC9B2891ED1AE9C81EA881CF1CED80_0,857C8C3489AC9B2891ED1AE9C81EA881CF1CED80," Benign advice
-
-Risks associated with outputValue alignmentNew
-
-"
-857C8C3489AC9B2891ED1AE9C81EA881CF1CED80_1,857C8C3489AC9B2891ED1AE9C81EA881CF1CED80," Description
-
-When a model generates information that is factually correct but not specific enough for the current context, the benign advice can be potentially harmful. For example, a model might provide medical, financial, and legal advice or recommendations for a specific problem that the end user may act on even when they should not.
-
-"
-857C8C3489AC9B2891ED1AE9C81EA881CF1CED80_2,857C8C3489AC9B2891ED1AE9C81EA881CF1CED80," Why is benign advice a concern for foundation models?
-
-A person might act on incomplete advice or worry about a situation that is not applicable to them due to the overgeneralized nature of the content generated.
-
-Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
-"
-BC8E9394D23A5320BFCE0EBE7F208CA18CB6B65C_0,BC8E9394D23A5320BFCE0EBE7F208CA18CB6B65C," Data bias
-
-Risks associated with inputTraining and tuning phaseFairnessAmplified
-
-"
-BC8E9394D23A5320BFCE0EBE7F208CA18CB6B65C_1,BC8E9394D23A5320BFCE0EBE7F208CA18CB6B65C," Description
-
-Historical, representational, and societal biases present in the data used to train and fine tune the model can adversely affect model behavior.
-
-"
-BC8E9394D23A5320BFCE0EBE7F208CA18CB6B65C_2,BC8E9394D23A5320BFCE0EBE7F208CA18CB6B65C," Why is data bias a concern for foundation models?
-
-Training an AI system on data with bias, such as historical or representational bias, could lead to biased or skewed outputs that may unfairly represent or otherwise discriminate against certain groups or individuals. In addition to negative societal impacts, business entities could face legal consequences or reputational harms from biased model outcomes.
-
-Example
-
-"
-BC8E9394D23A5320BFCE0EBE7F208CA18CB6B65C_3,BC8E9394D23A5320BFCE0EBE7F208CA18CB6B65C," Healthcare Bias
-
-Research on reinforcing disparities in medicine highlights that using data and AI to transform how people receive healthcare is only as strong as the data behind it, meaning use of training data with poor minority representation can lead to growing health inequalities.
-
-Sources:
-
-[Science, September 2022](https://www.science.org/doi/10.1126/science.abo2788)
-
-[Forbes, December 2022](https://www.forbes.com/sites/adigaskell/2022/12/02/minority-patients-often-left-behind-by-health-ai/?sh=31d28a225b41)
-
-Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
-"
-F6CC81E55C6AAD12849A56837F14538576F5A42C_0,F6CC81E55C6AAD12849A56837F14538576F5A42C," Confidential data disclosure
-
-Risks associated with inputTraining and tuning phaseIntellectual propertyTraditional
-
-"
-F6CC81E55C6AAD12849A56837F14538576F5A42C_1,F6CC81E55C6AAD12849A56837F14538576F5A42C," Description
-
-Models might be trained or fine-tuned using confidential data or the company’s intellectual property, which could result in unwanted disclosure of that information.
-
-"
-F6CC81E55C6AAD12849A56837F14538576F5A42C_2,F6CC81E55C6AAD12849A56837F14538576F5A42C," Why is confidential data disclosure a concern for foundation models?
-
-If not developed in accordance with data protection rules and regulations, the model might expose confidential information or IP in the generated output or through an adversarial attack.
-
-Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
-"
-2D3A398B8394671D9383F214FF5E69A00391BB22_0,2D3A398B8394671D9383F214FF5E69A00391BB22," Confidential data in prompt
-
-Risks associated with inputInferenceIntellectual propertyNew
-
-"
-2D3A398B8394671D9383F214FF5E69A00391BB22_1,2D3A398B8394671D9383F214FF5E69A00391BB22," Description
-
-Inclusion of confidential data as a part of a generative model's prompt, either through the system prompt design or through the inclusion of end user input, might later result in unintended reuse or disclosure of that information.
-
-"
-2D3A398B8394671D9383F214FF5E69A00391BB22_2,2D3A398B8394671D9383F214FF5E69A00391BB22," Why is confidential data in prompt a concern for foundation models?
-
-If not properly developed to secure confidential data, the model might expose confidential information or IP in the generated output. Additionally, end users' confidential information might be unintentionally collected and stored.
-
-Example
-
-"
-2D3A398B8394671D9383F214FF5E69A00391BB22_3,2D3A398B8394671D9383F214FF5E69A00391BB22," Disclosure of Confidential Information
-
-As per the source article, employees of Samsung disclosed confidential information to OpenAI through their use of ChatGPT. In one instance, an employee pasted confidential source code to check for errors. In another, an employee shared code with ChatGPT and ""requested code optimization."" A third shared a recording of a meeting to convert into notes for a presentation. Samsung has limited internal ChatGPT usage in response to these incidents, but it is unlikely that they will be able to recall any of their data. Additionally, that article highlighted that in response to the risk of leaking confidential information and other sensitive information, companies like Apple, JPMorgan Chase. Deutsche Bank, Verizon, Walmart, Samsung, Amazon, and Accenture have placed several restrictions on the usage of ChatGPT.
-
-Sources:
-
-[Business Insider, February 2023](https://www.businessinsider.com/walmart-warns-workers-dont-share-sensitive-information-chatgpt-generative-ai-2023-2)
-
-Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
-"
-C9FB652C433A0A0BC419CBFE4ECC3680252D2FE3_0,C9FB652C433A0A0BC419CBFE4ECC3680252D2FE3," Copyright infringement
-
-Risks associated with outputIntellectual propertyNew
-
-"
-C9FB652C433A0A0BC419CBFE4ECC3680252D2FE3_1,C9FB652C433A0A0BC419CBFE4ECC3680252D2FE3," Description
-
-Generative AI output that is too similar or identical to existing work risks claims of copyright infringement. Uncertainty and variability around the ownership, copyrightability, and patentability of output generated by AI increases the risk of copyright infringement problems.
-
-"
-C9FB652C433A0A0BC419CBFE4ECC3680252D2FE3_2,C9FB652C433A0A0BC419CBFE4ECC3680252D2FE3," Why is copyright infringement a concern for foundation models?
-
-Laws and regulations concerning the use of content that looks the same or closely similar to other copyrighted data are largely unsettled and can vary from country to country, providing challenges in determining and implementing compliance. Business entities could face fines, reputational harms, and other legal consequences.
-
-Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
-"
-6BFF9C4DB2BB43376A2A2CD681714ED3273E991E_0,6BFF9C4DB2BB43376A2A2CD681714ED3273E991E," Dangerous use
-
-Risks associated with outputMisuseNew
-
-"
-6BFF9C4DB2BB43376A2A2CD681714ED3273E991E_1,6BFF9C4DB2BB43376A2A2CD681714ED3273E991E," Description
-
-The possibility that a model could be misused for dangerous purposes such as creating plans to develop weapons, malware, or causing harm to others is the risk of dangerous use.
-
-"
-6BFF9C4DB2BB43376A2A2CD681714ED3273E991E_2,6BFF9C4DB2BB43376A2A2CD681714ED3273E991E," Why is dangerous use a concern for foundation models?
-
-Enabling people to harm others is unethical and can be illegal. A model that has this potential must be properly governed. Otherwise, business entities could face fines, reputational harms, and other legal consequences.
-
-Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
-"
-C2F1FF7794524DB18EE5FADFAA7232D0A94F8B4C_0,C2F1FF7794524DB18EE5FADFAA7232D0A94F8B4C," Data aquisition
-
-Risks associated with inputTraining and tuning phaseData lawsTraditional
-
-"
-C2F1FF7794524DB18EE5FADFAA7232D0A94F8B4C_1,C2F1FF7794524DB18EE5FADFAA7232D0A94F8B4C," Description
-
-Laws and other regulations might limit the collection of certain types of data for specific AI use cases.
-
-"
-C2F1FF7794524DB18EE5FADFAA7232D0A94F8B4C_2,C2F1FF7794524DB18EE5FADFAA7232D0A94F8B4C," Why is data aquisition a concern for foundation models?
-
-Failing to comply with data usage laws might result in fines and other legal consequences.
-
-Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
-"
-6BC478053FEFD091742C6775DFAC9EB5B8C4923F_0,6BC478053FEFD091742C6775DFAC9EB5B8C4923F," Data curation
-
-Risks associated with inputTraining and tuning phaseValue alignmentAmplified
-
-"
-6BC478053FEFD091742C6775DFAC9EB5B8C4923F_1,6BC478053FEFD091742C6775DFAC9EB5B8C4923F," Description
-
-When training or tuning data is improperly collected or prepared, the result can be a misalignment of a model's desired values or intent and the actual outcome.
-
-"
-6BC478053FEFD091742C6775DFAC9EB5B8C4923F_2,6BC478053FEFD091742C6775DFAC9EB5B8C4923F," Why is data curation a concern for foundation models?
-
-Improper data curation can adversely affect how a model is trained, resulting in a model that does not behave in accordance with the intended values. Correcting problems after the model is trained and deployed might be insufficient for guaranteeing proper behavior. Improper model behavior can result in business entities facing legal consequences or reputational harms.
-
-Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
-"
-C471B8B14614C985391115EC1ED53E0B56D2E27E_0,C471B8B14614C985391115EC1ED53E0B56D2E27E," Data poisoning
-
-Risks associated with inputTraining and tuning phaseRobustnessTraditional
-
-"
-C471B8B14614C985391115EC1ED53E0B56D2E27E_1,C471B8B14614C985391115EC1ED53E0B56D2E27E," Description
-
-Data poisoning is a type of adversarial attack where an adversary or malicious insider injects intentionally corrupted, false, misleading, or incorrect samples into the training or fine-tuning dataset.
-
-"
-C471B8B14614C985391115EC1ED53E0B56D2E27E_2,C471B8B14614C985391115EC1ED53E0B56D2E27E," Why is data poisoning a concern for foundation models?
-
-Poisoning data can make the model sensitive to a malicious data pattern and produce the adversary’s desired output. It can create a security risk where adversaries can force model behavior for their own benefit. In addition to producing unintended and potentially malicious results, a model misalignment from data poisoning can result in business entities facing legal consequences or reputational harms.
-
-Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
-"
-773F81DD69D3ADBBE1998FF5974CA83347EFFC76_0,773F81DD69D3ADBBE1998FF5974CA83347EFFC76," Data privacy rights
-
-Risks associated with inputTraining and tuning phasePrivacyAmplified
-
-"
-773F81DD69D3ADBBE1998FF5974CA83347EFFC76_1,773F81DD69D3ADBBE1998FF5974CA83347EFFC76," Description
-
-In some countries, privacy laws give individuals the right to access, correct, verify, or remove certain types of information that companies hold or process about them. Tracking the usage of an individual’s personal information in training a model and providing appropriate rights to comply with such laws can be a complex endeavor.
-
-"
-773F81DD69D3ADBBE1998FF5974CA83347EFFC76_2,773F81DD69D3ADBBE1998FF5974CA83347EFFC76," Why is data privacy rights a concern for foundation models?
-
-The identification or improper usage of data could lead to violation of privacy laws. Improper usage or a request for data removal could force organizations to retrain the model, which is expensive. In addition, business entities could face fines, reputational harms, and other legal consequences if they fail to comply with data privacy rules and regulations.
-
-Example
-
-"
-773F81DD69D3ADBBE1998FF5974CA83347EFFC76_3,773F81DD69D3ADBBE1998FF5974CA83347EFFC76," Right to Be Forgotten (RTBF)
-
-As stated in the article, laws in multiple locales, including Europe (GDPR); Canada (CPPA); and Japan (APPI), grant users’ rights for their personal data to be “forgotten” by technology (Right To Be Forgotten). However, the emerging and increasingly popular AI (LLMs) services present new challenges for the right to be forgotten (RTBF). According to Data61’s research, the only way for users to identify usage of their personal information in an LLM is “by either inspecting the original training dataset or perhaps prompting the model.” However, training data is either not public or companies do not disclose it, citing safety and other concerns, and guardrails may prevent users from accessing the information via prompting. Due to these barriers, users cannot initiate RTBF procedures and companies deploying LLMs may be unable to meet RTBF laws.
-
-Sources:
-
-[Zhang et al., Sept 2023](https://arxiv.org/pdf/2307.03941.pdf)
-
-Example
-
-"
-773F81DD69D3ADBBE1998FF5974CA83347EFFC76_4,773F81DD69D3ADBBE1998FF5974CA83347EFFC76," Lawsuit About LLM Unlearning
-
-According to the report, a lawsuit was filed against Google that alleges the use of copyright material and personal information as training data for its AI systems, which includes its Bard chatbot. Opt-out and deletion rights are guaranteed rights for California residents under the CCPA and children in the United States below 13 under the COPPA. The plaintiffs allege that because there is no way for Bard to “unlearn” or fully remove all the scraped PI it has been fed. The plaintiffs note that Bard’s privacy notice states that Bard conversations cannot be deleted by the user once they have been reviewed and annotated by the company and may be kept up to 3 years, which plaintiffs allege further contributes to non-compliance with these laws.
-
-Sources:
-
-[Reuters, July 2023](https://www.reuters.com/legal/litigation/google-hit-with-class-action-lawsuit-over-ai-data-scraping-2023-07-11/)
-
-[J.L. v. Alphabet Inc., July 2023](https://fingfx.thomsonreuters.com/gfx/legaldocs/myvmodloqvr/GOOGLE%20AI%20LAWSUIT%20complaint.pdf)
-
-Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
-"
-40D279ADA16512E67B7FB78FDAC4ADA9CFE5C645_0,40D279ADA16512E67B7FB78FDAC4ADA9CFE5C645," Data provenance
-
-Risks associated with inputTraining and tuning phaseTransparencyAmplified
-
-"
-40D279ADA16512E67B7FB78FDAC4ADA9CFE5C645_1,40D279ADA16512E67B7FB78FDAC4ADA9CFE5C645," Description
-
-Without standardized and established methods for verifying where data came from, there are no guarantees that available data is what it claims to be.
-
-"
-40D279ADA16512E67B7FB78FDAC4ADA9CFE5C645_2,40D279ADA16512E67B7FB78FDAC4ADA9CFE5C645," Why is data provenance a concern for foundation models?
-
-Not all data sources are trustworthy. Data might have been unethically collected, manipulated, or falsified. Using such data can result in undesirable behaviors in the model. Business entities could face fines, reputational harms, and other legal consequences.
-
-Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
-"
-C8816BF425EF039884DBF6A7282F8D7ADB7C5D04_0,C8816BF425EF039884DBF6A7282F8D7ADB7C5D04," Data transfer
-
-Risks associated with inputTraining and tuning phaseData lawsTraditional
-
-"
-C8816BF425EF039884DBF6A7282F8D7ADB7C5D04_1,C8816BF425EF039884DBF6A7282F8D7ADB7C5D04," Description
-
-Laws and other restrictions that apply to the transfer of data can limit or prohibit transferring or repurposing data from one country to another. Repurposing data can be further restricted within countries or with local regulations.
-
-"
-C8816BF425EF039884DBF6A7282F8D7ADB7C5D04_2,C8816BF425EF039884DBF6A7282F8D7ADB7C5D04," Why is data transfer a concern for foundation models?
-
-Data transfer restrictions can impact the availability of the data required for training an AI model and can lead to poorly represented data. Failing to comply with data transfer laws might result in fines and other legal consequences.
-
-Example
-
-"
-C8816BF425EF039884DBF6A7282F8D7ADB7C5D04_3,C8816BF425EF039884DBF6A7282F8D7ADB7C5D04," Data Restriction Laws
-
-As stated in the research article, data localization measures which restrict the ability to move data globally will reduce the capacity to develop tailored AI capacities. It will affect AI directly by providing less training data and indirectly by undercutting the building blocks on which AI is built.
-
-Examples include [China's data localization laws](https://iapp.org/resources/article/demystifying-data-localization-in-china-a-practical-guide/), GDPR restrictions on the processing and use of personal data, and [Singapore's bilateral data sharing](https://www.imda.gov.sg/how-we-can-help/data-innovation/trusted-data-sharing-framework).
-
-Sources:
-
-[Brookings, December 2018](https://www.brookings.edu/articles/the-impact-of-artificial-intelligence-on-international-trade)
-
-Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
-"
-221F46D0A3C2C3D3A623BE815B45E8B90AF61340_0,221F46D0A3C2C3D3A623BE815B45E8B90AF61340," Data transparency
-
-Risks associated with inputTraining and tuning phaseTransparencyAmplified
-
-"
-221F46D0A3C2C3D3A623BE815B45E8B90AF61340_1,221F46D0A3C2C3D3A623BE815B45E8B90AF61340," Description
-
-Without accurate documentation on how a model's data was collected, curated, and used to train a model, it might be harder to satisfactorily explain the behavior of the model with respect to the data.
-
-"
-221F46D0A3C2C3D3A623BE815B45E8B90AF61340_2,221F46D0A3C2C3D3A623BE815B45E8B90AF61340," Why is data transparency a concern for foundation models?
-
-Data transparency is important for legal compliance and AI ethics. Missing information limits the ability to evaluate risks associated with the data. The lack of standardized requirements might limit disclosure as organizations protect trade secrets and try to limit others from copying their models.
-
-Example
-
-"
-221F46D0A3C2C3D3A623BE815B45E8B90AF61340_3,221F46D0A3C2C3D3A623BE815B45E8B90AF61340," Data and Model Metadata Disclosure
-
-OpenAI's technical report is an example of the dichotomy around disclosing data and model metadata. While many model developers see value in enabling transparency for consumers, disclosure poses real safety issues and could increase the ability to misuse the models. In the GPT-4 technical report, they state: ""Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar.""
-
-Sources:
-
-[OpenAI, March 2023](https://cdn.openai.com/papers/gpt-4.pdf)
-
-Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
-"
-34FFE04319CE15E4451729B183C35F288A58A1B7_0,34FFE04319CE15E4451729B183C35F288A58A1B7," Data usage rights
-
-Risks associated with inputTraining and tuning phaseIntellectual propertyAmplified
-
-"
-34FFE04319CE15E4451729B183C35F288A58A1B7_1,34FFE04319CE15E4451729B183C35F288A58A1B7," Description
-
-Terms of service, copyright laws, or other rules restrict the ability to use certain data for building models.
-
-"
-34FFE04319CE15E4451729B183C35F288A58A1B7_2,34FFE04319CE15E4451729B183C35F288A58A1B7," Why is data usage rights a concern for foundation models?
-
-Laws and regulations concerning the use of data to train AI are unsettled and can vary from country to country, which creates challenges in the development of models. If data usage violates rules or restrictions, business entities might face fines, reputational harms, and other legal consequences.
-
-Example
-
-"
-34FFE04319CE15E4451729B183C35F288A58A1B7_3,34FFE04319CE15E4451729B183C35F288A58A1B7," Text Copyright Infringement Claims
-
-According to the source article, bestselling novelists Sarah Silverman, Richard Kadrey, and Christopher Golden have sued Meta and OpenAI for copyright infringement. The article further stated that the authors had alleged the two tech companies had “ingested” text from their books into generative AI software (LLMs) and failed to give them credit or compensation.
-
-Sources:
-
-[Los Angeles Times, July 2023](https://www.latimes.com/entertainment-arts/books/story/2023-07-10/sarah-silverman-authors-sue-meta-openai-chatgpt-copyright-infringement)
-
-Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
-"
-B00BEB80E522D712DC9062F835AD10E787B8C5FC_0,B00BEB80E522D712DC9062F835AD10E787B8C5FC," Data usage
-
-Risks associated with inputTraining and tuning phaseData lawsTraditional
-
-"
-B00BEB80E522D712DC9062F835AD10E787B8C5FC_1,B00BEB80E522D712DC9062F835AD10E787B8C5FC," Description
-
-Laws and other restrictions can limit or prohibit the use of some data for specific AI use cases.
-
-"
-B00BEB80E522D712DC9062F835AD10E787B8C5FC_2,B00BEB80E522D712DC9062F835AD10E787B8C5FC," Why is data usage a concern for foundation models?
-
-Failing to comply with data usage laws might result in fines and other legal consequences.
-
-Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
-"
-DD88591C39C90F2CF211C3EE3330B7E7939C3472_0,DD88591C39C90F2CF211C3EE3330B7E7939C3472," Decision bias
-
-Risks associated with outputFairnessNew
-
-"
-DD88591C39C90F2CF211C3EE3330B7E7939C3472_1,DD88591C39C90F2CF211C3EE3330B7E7939C3472," Description
-
-Decision bias occurs when one group is unfairly advantaged over another due to decisions of the model. This bias can result from bias in the training data or as an unintended consequence of how the model was trained.
-
-"
-DD88591C39C90F2CF211C3EE3330B7E7939C3472_2,DD88591C39C90F2CF211C3EE3330B7E7939C3472," Why is decision bias a concern for foundation models?
-
-Bias can harm persons affected by the decisions of the model. Business entities could face fines, reputational harms, and other legal consequences.
-
-Example
-
-"
-DD88591C39C90F2CF211C3EE3330B7E7939C3472_3,DD88591C39C90F2CF211C3EE3330B7E7939C3472," Unfair health risk assignment for black patients
-
-A study on racial bias in health algorithms estimated that racial bias reduces the number of black patients identified for extra care by more than half. The study found that bias occurred because the algorithm used health costs as a proxy for health needs. Less money is spent on black patients who have the same level of need, and the algorithm thus falsely concludes that black patients are healthier than equally sick white patients.
-
-Sources:
-
-[Science, October 2019](https://www.science.org/doi/10.1126/science.aax2342)
-
-[American Civil Liberties Union, 2022](https://www.aclu.org/news/privacy-technology/algorithms-in-health-care-may-worsen-medical-racism::text=In%202019%2C%20a%20bombshell%20study,recommended%20for%20the%20same%20care)
-
-Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
-"
-C7049B7393149EDC2256A9D4EDB1D6E5A6E24B72_0,C7049B7393149EDC2256A9D4EDB1D6E5A6E24B72," Downstream retraining
-
-Risks associated with inputTraining and tuning phaseValue alignmentNew
-
-"
-C7049B7393149EDC2256A9D4EDB1D6E5A6E24B72_1,C7049B7393149EDC2256A9D4EDB1D6E5A6E24B72," Description
-
-Using data from user-generated content or AI-generated content from downstream applications for retraining a model can result in misalignment, undesirable output, and inaccurate or inappropriate model behavior.
-
-"
-C7049B7393149EDC2256A9D4EDB1D6E5A6E24B72_2,C7049B7393149EDC2256A9D4EDB1D6E5A6E24B72," Why is downstream retraining a concern for foundation models?
-
-Repurposing downstream output for re-training a model without implementing proper human vetting increases the chances of undesirable outputs being incorporated into the training or tuning data of the model, resulting in an echo chamber effect. Improper model behavior can result in business entities facing legal consequences or reputational harms.
-
-Example
-
-"
-C7049B7393149EDC2256A9D4EDB1D6E5A6E24B72_3,C7049B7393149EDC2256A9D4EDB1D6E5A6E24B72," Model collapse due to training using AI-generated content
-
-As stated in the source article, a group of researchers from the UK and Canada have investigated the problem of using AI-generated content for training instead of human-generated content. They found that using model-generated content in training causes irreversible defects in the resulting models and that learning from data produced by other models causes [model collapse](https://arxiv.org/pdf/2305.17493v2.pdf).
-
-Sources:
-
-[VentureBeat, June 2023](https://venturebeat.com/ai/the-ai-feedback-loop-researchers-warn-of-model-collapse-as-ai-trains-on-ai-generated-content/)
-
-Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
-"
-E3E5FA98908EEE308D960761E9F29CF7A8AAD690_0,E3E5FA98908EEE308D960761E9F29CF7A8AAD690," Evasion attack
-
-Risks associated with inputInferenceRobustnessAmplified
-
-"
-E3E5FA98908EEE308D960761E9F29CF7A8AAD690_1,E3E5FA98908EEE308D960761E9F29CF7A8AAD690," Description
-
-Evasion attacks attempt to make a model output incorrect results by perturbing the data sent to the trained model.
-
-"
-E3E5FA98908EEE308D960761E9F29CF7A8AAD690_2,E3E5FA98908EEE308D960761E9F29CF7A8AAD690," Why is evasion attack a concern for foundation models?
-
-Evasion attacks alter model behavior, usually to benefit the attacker. If not properly accounted for, business entities could face fines, reputational harms, and other legal consequences.
-
-Example
-
-"
-E3E5FA98908EEE308D960761E9F29CF7A8AAD690_3,E3E5FA98908EEE308D960761E9F29CF7A8AAD690," Adversarial attacks on autonomous vehicles' AI components
-
-A report from the European Union Agency for Cybersecurity (ENISA) found that autonomous vehicles are “highly vulnerable to a wide range of attacks” that could be dangerous for passengers, pedestrians, and people in other vehicles. The report states that an adversarial attack might be used to make the AI ‘blind’ to pedestrians by manipulating the image recognition component to misclassify pedestrians. This attack could lead to havoc on the streets, as autonomous cars may hit pedestrians on the road or crosswalks.
-
-Other studies have demonstrated potential adversarial attacks on autonomous vehicles:
-
-
-
-* Fooling machine learning algorithms by making minor changes to street sign graphics, such as adding stickers.
-* Security researchers from Tencent demonstrated how adding three small stickers in an intersection could cause Tesla's autopilot system to swerve into the wrong lane.
-* Two McAfee researchers demonstrated how using only black electrical tape could trick a 2016 Tesla into a dangerous burst of acceleration by changing a speed limit sign from 35 mph to 85 mph.
-
-
-
-Sources:
-
-[Venture Beat, February 2021](https://venturebeat.com/business/eu-report-warns-that-ai-makes-autonomous-vehicles-highly-vulnerable-to-attack/)
-
-[IEEE, August 2017](https://spectrum.ieee.org/slight-street-sign-modifications-can-fool-machine-learning-algorithms)
-
-[IEEE, April 2019](https://spectrum.ieee.org/three-small-stickers-on-road-can-steer-tesla-autopilot-into-oncoming-lane)
-
-[Market Watch, February 2020](https://www.marketwatch.com/story/85-in-a-35-hackers-show-how-easy-it-is-to-manipulate-a-self-driving-tesla-2020-02-19)
-
-Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
-"
-6B6F04AA6BBD6BE14B11EA62AA0D844979BDFCDF_0,6B6F04AA6BBD6BE14B11EA62AA0D844979BDFCDF," Explaining output
-
-Risks associated with outputExplainabilityAmplified
-
-"
-6B6F04AA6BBD6BE14B11EA62AA0D844979BDFCDF_1,6B6F04AA6BBD6BE14B11EA62AA0D844979BDFCDF," Description
-
-Explanations for model output decisions might be difficult, imprecise, or not possible to obtain.
-
-"
-6B6F04AA6BBD6BE14B11EA62AA0D844979BDFCDF_2,6B6F04AA6BBD6BE14B11EA62AA0D844979BDFCDF," Why is explaining output a concern for foundation models?
-
-Foundation models are based on complex deep learning architectures, making explanations for their outputs difficult. Without clear explanations for model output, it is difficult for users, model validators, and auditors to understand and trust the model. Lack of transparency might carry legal consequences in highly regulated domains. Wrong explanations might lead to over-trust.
-
-Example
-
-"
-6B6F04AA6BBD6BE14B11EA62AA0D844979BDFCDF_3,6B6F04AA6BBD6BE14B11EA62AA0D844979BDFCDF," Unexplainable accuracy in race prediction
-
-According to the source article, researchers analyzing multiple machine learning models using patient medical images were able to confirm the models’ ability to predict race with high accuracy from images. They were stumped as to what exactly is enabling the systems to consistently guess correctly. The researchers found that even factors like disease and physical build were not strong predictors of race—in other words, the algorithmic systems don’t seem to be using any particular aspect of the images to make their determinations.
-
-Sources:
-
-[Banerjee et al., July 2021](https://arxiv.org/abs/2107.10356)
-
-Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
-"
-EAEF856F725CD9A9605000F3AE98CBE61A9F50F0_0,EAEF856F725CD9A9605000F3AE98CBE61A9F50F0," Extraction attack
-
-Risks associated with inputInferenceRobustnessAmplified
-
-"
-EAEF856F725CD9A9605000F3AE98CBE61A9F50F0_1,EAEF856F725CD9A9605000F3AE98CBE61A9F50F0," Description
-
-An attack that attempts to copy or steal the AI model by appropriately sampling the input space, observing outputs, and building a surrogate model, is known as an extraction attack.
-
-"
-EAEF856F725CD9A9605000F3AE98CBE61A9F50F0_2,EAEF856F725CD9A9605000F3AE98CBE61A9F50F0," Why is extraction attack a concern for foundation models?
-
-A successful attack mimics the model, enabling the attacker to repurpose it for their benefit such as eliminating a competitive advantage or causing reputational harm.
-
-Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
-"
-339C9129C24AAB66EEAF55A9F003F6501F72B81B_0,339C9129C24AAB66EEAF55A9F003F6501F72B81B," Hallucination
-
-Risks associated with outputValue alignmentNew
-
-"
-339C9129C24AAB66EEAF55A9F003F6501F72B81B_1,339C9129C24AAB66EEAF55A9F003F6501F72B81B," Description
-
-Hallucinations occur when models produce factually inaccurate or untruthful information. Often, hallucinatory output is presented in a plausible or convincing manner, making detection by end users difficult.
-
-"
-339C9129C24AAB66EEAF55A9F003F6501F72B81B_2,339C9129C24AAB66EEAF55A9F003F6501F72B81B," Why is hallucination a concern for foundation models?
-
-False output can mislead users and be incorporated into downstream artifacts, further spreading misinformation. This can harm both owners and users of the AI models. Business entities could face fines, reputational harms, and other legal consequences.
-
-Example
-
-"
-339C9129C24AAB66EEAF55A9F003F6501F72B81B_3,339C9129C24AAB66EEAF55A9F003F6501F72B81B," Fake Legal Cases
-
-According to the source article, a lawyer cited fake cases and quotes generated by ChatGPT in a legal brief filed in federal court. The lawyers consulted ChatGPT to supplement their legal research for an aviation injury claim. The lawyer subsequently asked ChatGPT if the cases provided were fake. The chatbot responded that they were real and “can be found on legal research databases such as Westlaw and LexisNexis.”
-
-Sources:
-
-[AP News, June 2023](https://apnews.com/article/artificial-intelligence-chatgpt-fake-case-lawyers-d6ae9fa79d0542db9e1455397aef381c)
-
-Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
-"
-658967520625FAC8039485004A1E80C32992077E_0,658967520625FAC8039485004A1E80C32992077E," Harmful code generation
-
-Risks associated with outputHarmful code generationNew
-
-"
-658967520625FAC8039485004A1E80C32992077E_1,658967520625FAC8039485004A1E80C32992077E," Description
-
-Models might generate code that causes harm or unintentionally affects other systems.
-
-"
-658967520625FAC8039485004A1E80C32992077E_2,658967520625FAC8039485004A1E80C32992077E," Why is harmful code generation a concern for foundation models?
-
-Without human review and testing of generated code, its use might cause unintentional behavior and open new system vulnerabilities. Business entities could face fines, reputational harms, and other legal consequences.
-
-Example
-
-"
-658967520625FAC8039485004A1E80C32992077E_3,658967520625FAC8039485004A1E80C32992077E," Undisclosed AI Interaction
-
-According to their paper, researchers at Stanford University have investigated the impact of code-generation tools on code quality and found that programmers tend to include more bugs in their final code when using AI assistants. These bugs could increase the code's security vulnerabilities, yet the programmers believed their code to be more secure.
-
-Sources:
-
-[Neil Perry, Megha Srivastava, Deepak Kumar, and Dan Boneh. 2023. Do Users Write More Insecure Code with AI Assistants?. In Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security (CCS '23), November 26-30, 2023, Copenhagen, Denmark. ACM, New York, NY, USA, 15 pages.](https://dl.acm.org/doi/10.1145/3576915.3623157)
-
-Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
-"
-E5E1D00DC75181EDE4FC66BDC17BF3C07EB314EC_0,E5E1D00DC75181EDE4FC66BDC17BF3C07EB314EC," Improper usage
-
-Risks associated with outputValue alignmentNew
-
-"
-E5E1D00DC75181EDE4FC66BDC17BF3C07EB314EC_1,E5E1D00DC75181EDE4FC66BDC17BF3C07EB314EC," Description
-
-Using a model for a purpose the model was not designed for might result in inaccurate or undesired behavior. Without proper documentation of the model purpose and constraints, models can be used or repurposed for tasks for which they are not suited.
-
-"
-E5E1D00DC75181EDE4FC66BDC17BF3C07EB314EC_2,E5E1D00DC75181EDE4FC66BDC17BF3C07EB314EC," Why is improper usage a concern for foundation models?
-
-Reusing a model without understanding its original data, design intent, and goals might result in unexpected and unwanted model behaviors.
-
-Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
-"
-D778DF3DC8EF2D3AB4EC511B8D20D35778794B93_0,D778DF3DC8EF2D3AB4EC511B8D20D35778794B93," Inaccessible training data
-
-Risks associated with outputExplainabilityAmplified
-
-"
-D778DF3DC8EF2D3AB4EC511B8D20D35778794B93_1,D778DF3DC8EF2D3AB4EC511B8D20D35778794B93," Description
-
-Without access to the training data, the types of explanations a model can provide are limited and more likely to be incorrect.
-
-"
-D778DF3DC8EF2D3AB4EC511B8D20D35778794B93_2,D778DF3DC8EF2D3AB4EC511B8D20D35778794B93," Why is inaccessible training data a concern for foundation models?
-
-Low quality explanations without source data make it difficult for users, model validators, and auditors to understand and trust the model.
-
-Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
-"
-2EF8007555BC60CD700BA44ECC0FAFAA024F4BC0_0,2EF8007555BC60CD700BA44ECC0FAFAA024F4BC0," Jailbreaking
-
-Risks associated with inputInferenceMulti-categoryAmplified
-
-"
-2EF8007555BC60CD700BA44ECC0FAFAA024F4BC0_1,2EF8007555BC60CD700BA44ECC0FAFAA024F4BC0," Description
-
-An attack that attempts to break through the guardrails established in the model is known as jailbreaking.
-
-"
-2EF8007555BC60CD700BA44ECC0FAFAA024F4BC0_2,2EF8007555BC60CD700BA44ECC0FAFAA024F4BC0," Why is jailbreaking a concern for foundation models?
-
-Jailbreaking attacks can be used to alter model behavior and benefit the attacker. If not properly controlled, business entities can face fines, reputational harm, and other legal consequences.
-
-Example
-
-"
-2EF8007555BC60CD700BA44ECC0FAFAA024F4BC0_3,2EF8007555BC60CD700BA44ECC0FAFAA024F4BC0," Bypassing LLM guardrails
-
-Cited in a [study](https://arxiv.org/abs/2307.15043) from researchers at Carnegie Mellon University, The Center for AI Safety, and the Bosch Center for AI, claims to have discovered a simple prompt addendum that allowed the researchers to trick models into answering dangerous or sensitive questions and is simple enough to be automated and used for a wide range of commercial and open-source products, including ChatGPT, Google Bard, Meta’s LLaMA, Vicuna, Claude, and others. According to the paper, the researchers were able to use the additions to reliably coax forbidden answers for Vicuna (99%), ChatGPT 3.5 and 4.0 (up to 84%), and PaLM-2 (66%).
-
-Sources:
-
-[SC Magazine, July 2023](https://www.scmagazine.com/news/researchers-find-universal-jailbreak-prompts-for-multiple-ai-chat-models)
-
-[The New York Times, July 2023](https://www.nytimes.com/2023/07/27/business/ai-chatgpt-safety-research.html)
-
-Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
-"
-807D82C6EEEBD0513A794637EBD90CAA19F318E7_0,807D82C6EEEBD0513A794637EBD90CAA19F318E7," Membership inference attack
-
-Risks associated with inputInferencePrivacyTraditional
-
-"
-807D82C6EEEBD0513A794637EBD90CAA19F318E7_1,807D82C6EEEBD0513A794637EBD90CAA19F318E7," Description
-
-Given a trained model and a data sample, an attacker appropriately samples the input space, observing outputs to deduce whether that sample was part of the model's training. This is known as a membership inference attack.
-
-"
-807D82C6EEEBD0513A794637EBD90CAA19F318E7_2,807D82C6EEEBD0513A794637EBD90CAA19F318E7," Why is membership inference attack a concern for foundation models?
-
-Identifying whether a data sample was used for training data can reveal what data was used to train a model, possibly giving competitors insight into how a model was trained and the opportunity to replicate the model or tamper with it.
-
-Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
-"
-A6FF5C1E6CF7C4BA30F191DF892DC3296F9B8CE3_0,A6FF5C1E6CF7C4BA30F191DF892DC3296F9B8CE3," Non-disclosure
-
-Risks associated with outputMisuseNew
-
-"
-A6FF5C1E6CF7C4BA30F191DF892DC3296F9B8CE3_1,A6FF5C1E6CF7C4BA30F191DF892DC3296F9B8CE3," Description
-
-Not disclosing that content is generated by an AI model is the risk of non-disclosure.
-
-"
-A6FF5C1E6CF7C4BA30F191DF892DC3296F9B8CE3_2,A6FF5C1E6CF7C4BA30F191DF892DC3296F9B8CE3," Why is non-disclosure a concern for foundation models?
-
-Not disclosing the AI-authored content reduces trust and is deceptive. Intention deception might result in fines, reputational harms, and other legal consequences.
-
-Example
-
-"
-A6FF5C1E6CF7C4BA30F191DF892DC3296F9B8CE3_3,A6FF5C1E6CF7C4BA30F191DF892DC3296F9B8CE3," Undisclosed AI Interaction
-
-As per the source, an online emotional support chat service ran a study to augment or write responses to around 4,000 users using GPT-3 without informing users. The co-founder faced immense public backlash about the potential for harm caused by AI generated chats to the already vulnerable users. He claimed that the study was ""exempt"" from informed consent law.
-
-Sources:
-
-[Business Insider, Jan 2023](https://www.businessinsider.com/company-using-chatgpt-mental-health-support-ethical-issues-2023-1)
-
-Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
-"
-589D9B0A7150AF5485E6F7452EB39D15ADDB35F9_0,589D9B0A7150AF5485E6F7452EB39D15ADDB35F9," Nonconsensual use
-
-Risks associated with outputMisuseAmplified
-
-"
-589D9B0A7150AF5485E6F7452EB39D15ADDB35F9_1,589D9B0A7150AF5485E6F7452EB39D15ADDB35F9," Description
-
-The possibility that a model could be misused to imitate others through video (deepfakes), images, audio, or other modalities without their consent is the risk of nonconsensual use.
-
-"
-589D9B0A7150AF5485E6F7452EB39D15ADDB35F9_2,589D9B0A7150AF5485E6F7452EB39D15ADDB35F9," Why is nonconsensual use a concern for foundation models?
-
-Intentionally imitating others for the purposes of deception without their consent is unethical and might be illegal. A model that has this potential must be properly governed. Otherwise, business entities could face fines, reputational harms, and other legal consequences.
-
-Example
-
-"
-589D9B0A7150AF5485E6F7452EB39D15ADDB35F9_3,589D9B0A7150AF5485E6F7452EB39D15ADDB35F9," FBI Warning on Deepfakes
-
-The FBI recently warned the public of malicious actors creating synthetic, explicit content “for the purposes of harassing victims or sextortion schemes”. They noted that advancements in AI have made this content higher quality, more customizable, and more accessible than ever.
-
-Sources:
-
-[FBI, June 2023](https://www.ic3.gov/Media/Y2023/PSA230605)
-
-Example
-
-"
-589D9B0A7150AF5485E6F7452EB39D15ADDB35F9_4,589D9B0A7150AF5485E6F7452EB39D15ADDB35F9," Deepfakes
-
-A deepfake is the creation of an audio or video where the people speaking are created by AI not the actual person.
-
-Sources:
-
-[CNN, January 2019](https://www.cnn.com/interactive/2019/01/business/pentagons-race-against-deepfakes/)
-
-Example
-
-"
-589D9B0A7150AF5485E6F7452EB39D15ADDB35F9_5,589D9B0A7150AF5485E6F7452EB39D15ADDB35F9," Misleading Voicebot Interaction
-
-The article cited a case where a deepfake voice was used to scam a CEO out of $243,000. The CEO believed he was on the phone with his boss, the chief executive of his firm’s parent company, when he followed the orders to transfer €220,000 (approximately $243,000) to the bank account of a Hungarian supplier.
-
-Sources:
-
-[Forbes, September 2019](https://www.forbes.com/sites/jessedamiani/2019/09/03/a-voice-deepfake-was-used-to-scam-a-ceo-out-of-243000/?sh=10432a7d2241)
-
-Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
-"
-C2DA4BDE14D0A2DA1E0E2D795E7DC7469F422DB9_0,C2DA4BDE14D0A2DA1E0E2D795E7DC7469F422DB9," Output bias
-
-Risks associated with outputFairnessNew
-
-"
-C2DA4BDE14D0A2DA1E0E2D795E7DC7469F422DB9_1,C2DA4BDE14D0A2DA1E0E2D795E7DC7469F422DB9," Description
-
-Generated model content might unfairly represent certain groups or individuals. For example, a large language model might unfairly stigmatize or stereotype specific persons or groups.
-
-"
-C2DA4BDE14D0A2DA1E0E2D795E7DC7469F422DB9_2,C2DA4BDE14D0A2DA1E0E2D795E7DC7469F422DB9," Why is output bias a concern for foundation models?
-
-Bias can harm users of the AI models and magnify existing exclusive behaviors. Business entities can face reputational harms and other consequences.
-
-Example
-
-"
-C2DA4BDE14D0A2DA1E0E2D795E7DC7469F422DB9_3,C2DA4BDE14D0A2DA1E0E2D795E7DC7469F422DB9," Biased Generated Images
-
-Lensa AI is a mobile app with generative features trained on Stable Diffusion that can generate “Magic Avatars” based on images users upload of themselves. According to the source report, some users discovered that generated avatars are sexualized and racialized.
-
-Sources:
-
-[Business Insider, January 2023](https://www.businessinsider.com/lensa-ai-raises-serious-concerns-sexualization-art-theft-data-2023-1)
-
-Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
-"
-9CAD0018634FF820D32F3FE714194D4BD42C5386_0,9CAD0018634FF820D32F3FE714194D4BD42C5386," Personal information in data
-
-Risks associated with inputTraining and tuning phasePrivacyTraditional
-
-"
-9CAD0018634FF820D32F3FE714194D4BD42C5386_1,9CAD0018634FF820D32F3FE714194D4BD42C5386," Description
-
-Inclusion or presence of personal identifiable information (PII) and sensitive personal information (SPI) in the data used for training or fine tuning the model might result in unwanted disclosure of that information.
-
-"
-9CAD0018634FF820D32F3FE714194D4BD42C5386_2,9CAD0018634FF820D32F3FE714194D4BD42C5386," Why is personal information in data a concern for foundation models?
-
-If not properly developed to protect sensitive data, the model might expose personal information in the generated output. Additionally, personal or sensitive data must be reviewed and handled with respect to privacy laws and regulations, as business entities could face fines, reputational harms, and other legal consequences if found in violation.
-
-Example
-
-"
-9CAD0018634FF820D32F3FE714194D4BD42C5386_3,9CAD0018634FF820D32F3FE714194D4BD42C5386," Training on Private Information
-
-According to the article, Google and its parent company Alphabet were accused in a class-action lawsuit of misusing vast amount of personal information and copyrighted material taken from what is described as hundreds of millions of internet users to train its commercial AI products, which includes Bard, its conversational generative artificial intelligence chatbot. This follows similar lawsuits filed against Meta Platforms, Microsoft, and OpenAI over their alleged misuse of personal data.
-
-Sources:
-
-[Reuters, July 2023](https://www.reuters.com/legal/litigation/google-hit-with-class-action-lawsuit-over-ai-data-scraping-2023-07-11/)
-
-[J.L. v. Alphabet Inc., July 2023](https://fingfx.thomsonreuters.com/gfx/legaldocs/myvmodloqvr/GOOGLE%20AI%20LAWSUIT%20complaint.pdf)
-
-Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
-"
-2BAF01B064F3005647A010DF369CC49C6534FFB3_0,2BAF01B064F3005647A010DF369CC49C6534FFB3," Personal information in output
-
-Risks associated with outputPrivacyNew
-
-"
-2BAF01B064F3005647A010DF369CC49C6534FFB3_1,2BAF01B064F3005647A010DF369CC49C6534FFB3," Description
-
-When personal identifiable information (PII) or sensitive personal information (SPI) are used in the training data, fine-tuning data, or as part of the prompt, models might reveal that data in the generated output.
-
-"
-2BAF01B064F3005647A010DF369CC49C6534FFB3_2,2BAF01B064F3005647A010DF369CC49C6534FFB3," Why is personal information in output a concern for foundation models?
-
-Output data must be reviewed with respect to privacy laws and regulations, as business entities could face fines, reputational harms, and other legal consequences if found in violation of data privacy or usage laws.
-
-Example
-
-"
-2BAF01B064F3005647A010DF369CC49C6534FFB3_3,2BAF01B064F3005647A010DF369CC49C6534FFB3," Exposure of personal information
-
-Per the source article, ChatGPT suffered a bug and exposed titles and active users' chat history to other users. Later, OpenAI shared that even more private data from a small number of users was exposed including, active user’s first and last name, email address, payment address, the last four digits of their credit card number, and credit card expiration date. In addition, it was reported that the payment-related information of 1.2% of ChatGPT Plus subscribers were also exposed in the outage.
-
-Sources:
-
-[The Hindu Business Line, March 2023](https://www.thehindubusinessline.com/info-tech/openai-admits-data-breach-at-chatgpt-private-data-of-premium-users-exposed/article66659944.ece)
-
-Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
-"
-C709B8079F21DAA0EE315823A6713B556AC2789B_0,C709B8079F21DAA0EE315823A6713B556AC2789B," Personal information in prompt
-
-Risks associated with inputInferencePrivacyNew
-
-"
-C709B8079F21DAA0EE315823A6713B556AC2789B_1,C709B8079F21DAA0EE315823A6713B556AC2789B," Description
-
-Inclusion of personal information as a part of a generative model’s prompt, either through the system prompt design or through the inclusion of end user input, might later result in unintended reuse or disclosure of that personal information.
-
-"
-C709B8079F21DAA0EE315823A6713B556AC2789B_2,C709B8079F21DAA0EE315823A6713B556AC2789B," Why is personal information in prompt a concern for foundation models?
-
-Prompt data might be stored or later used for other purposes like model evaluation and retraining. These types of data must be reviewed with respect to privacy laws and regulations. Without proper data storage and usage business entities could face fines, reputational harms, and other legal consequences.
-
-Example
-
-"
-C709B8079F21DAA0EE315823A6713B556AC2789B_3,C709B8079F21DAA0EE315823A6713B556AC2789B," Disclose personal health information in ChatGPT prompts
-
-As per the source articles, some people on social media shared about using ChatGPT as their makeshift therapists. Articles that users may include personal health information in their prompts during the interaction, which may raise privacy concerns. The information could be shared with the company that own the tech and could be used for training or tuning or even share with [unspecified third parties](https://openai.com/policies/privacy-policy).
-
-Sources:
-
-[The Conversation, February 2023](https://theconversation.com/chatgpt-is-a-data-privacy-nightmare-if-youve-ever-posted-online-you-ought-to-be-concerned-199283)
-
-Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
-"
-BA4AD6D42D951B1247E54E312C04749FD8EA2FD1_0,BA4AD6D42D951B1247E54E312C04749FD8EA2FD1," Physical harm
-
-Risks associated with outputValue alignmentNew
-
-"
-BA4AD6D42D951B1247E54E312C04749FD8EA2FD1_1,BA4AD6D42D951B1247E54E312C04749FD8EA2FD1," Description
-
-A model could generate language that might lead to physical harm The language might include overtly violent, covertly dangerous, or otherwise indirectly unsafe statements that could precipitate immediate physical harm or create prejudices that could lead to future harm.
-
-"
-BA4AD6D42D951B1247E54E312C04749FD8EA2FD1_2,BA4AD6D42D951B1247E54E312C04749FD8EA2FD1," Why is physical harm a concern for foundation models?
-
-If people blindly follow the advice of a model, they might end up harming themselves. Business entities could face fines, reputational harms, and other legal consequences.
-
-Example
-
-"
-BA4AD6D42D951B1247E54E312C04749FD8EA2FD1_3,BA4AD6D42D951B1247E54E312C04749FD8EA2FD1," Harmful Content Generation
-
-According to the source article, an AI chatbot app has been found to generate harmful content about suicide, including suicide methods, with minimal prompting. A Belgian man died by suicide after turning to this chatbot to escape his anxiety. The chatbot supplied increasingly harmful responses throughout their conversations, including aggressive outputs about his family.
-
-Sources:
-
-[Vice, March 2023](https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says)
-
-Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
-"
-731B218E6E141E88F850B673227AB3C4DF19392E_0,731B218E6E141E88F850B673227AB3C4DF19392E," Prompt injection
-
-Risks associated with inputInferenceRobustnessNew
-
-"
-731B218E6E141E88F850B673227AB3C4DF19392E_1,731B218E6E141E88F850B673227AB3C4DF19392E," Description
-
-A prompt injection attack forces a model to produce unexpected output due to the structure or information contained in prompts.
-
-"
-731B218E6E141E88F850B673227AB3C4DF19392E_2,731B218E6E141E88F850B673227AB3C4DF19392E," Why is prompt injection a concern for foundation models?
-
-Injection attacks can be used to alter model behavior and benefit the attacker. If not properly controlled, business entities could face fines, reputational harm, and other legal consequences.
-
-Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
-"
-F8026E82645EB65BD5E2741BC4DF0E63DA748B47_0,F8026E82645EB65BD5E2741BC4DF0E63DA748B47," Prompt leaking
-
-Risks associated with inputInferenceRobustnessAmplified
-
-"
-F8026E82645EB65BD5E2741BC4DF0E63DA748B47_1,F8026E82645EB65BD5E2741BC4DF0E63DA748B47," Description
-
-A prompt leak attack attempts to extract a model's system prompt (also known as the system message).
-
-"
-F8026E82645EB65BD5E2741BC4DF0E63DA748B47_2,F8026E82645EB65BD5E2741BC4DF0E63DA748B47," Why is prompt leaking a concern for foundation models?
-
-A successful attack copies the system prompt used in the model. Depending on the content of that prompt, the attacker might gain access to valuable information, such as sensitive personal information or intellectual property, and might be able to replicate some of the functionality of the model.
-
-Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
-"
-AF19B6A59E167D486D94F4BBB3724CE1DEAE5FEB_0,AF19B6A59E167D486D94F4BBB3724CE1DEAE5FEB," Prompt priming
-
-Risks associated with inputInferenceMulti-categoryAmplified
-
-"
-AF19B6A59E167D486D94F4BBB3724CE1DEAE5FEB_1,AF19B6A59E167D486D94F4BBB3724CE1DEAE5FEB," Description
-
-Because generative models tend to produce output like the input provided, the model can be prompted to reveal specific kinds of information. For example, adding personal information in the prompt increases its likelihood of generating similar kinds of personal information in its output. If personal data was included as part of the model’s training, there is a possibility it could be revealed.
-
-"
-AF19B6A59E167D486D94F4BBB3724CE1DEAE5FEB_2,AF19B6A59E167D486D94F4BBB3724CE1DEAE5FEB," Why is prompt priming a concern for foundation models?
-
-Depending on the content revealed, business entities could face fines, reputational harm, and other legal consequences.
-
-Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
-"
-A3AE0828D8E261DBC23B466D22AB46C1DD65B710_0,A3AE0828D8E261DBC23B466D22AB46C1DD65B710," Reidentification
-
-Risks associated with inputTraining and tuning phasePrivacyTraditional
-
-"
-A3AE0828D8E261DBC23B466D22AB46C1DD65B710_1,A3AE0828D8E261DBC23B466D22AB46C1DD65B710," Description
-
-Even with the removal or personal identifiable information (PII) and sensitive personal information (SPI) from data, it might still be possible to identify persons due to other features available in the data.
-
-"
-A3AE0828D8E261DBC23B466D22AB46C1DD65B710_2,A3AE0828D8E261DBC23B466D22AB46C1DD65B710," Why is reidentification a concern for foundation models?
-
-Data that can reveal personal or sensitive data must be reviewed with respect to privacy laws and regulations, as business entities could face fines, reputational harms, and other legal consequences if found in violation.
-
-Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
-"
-92BD6D892FEB4829F9C49AFCF79CDC323BE66CC4_0,92BD6D892FEB4829F9C49AFCF79CDC323BE66CC4," Spreading disinformation
-
-Risks associated with outputMisuseAmplified
-
-"
-92BD6D892FEB4829F9C49AFCF79CDC323BE66CC4_1,92BD6D892FEB4829F9C49AFCF79CDC323BE66CC4," Description
-
-The possibility that a model could be used to create misleading information to deceive or mislead a targeted audience.
-
-"
-92BD6D892FEB4829F9C49AFCF79CDC323BE66CC4_2,92BD6D892FEB4829F9C49AFCF79CDC323BE66CC4," Why is spreading disinformation a concern for foundation models?
-
-Intentionally misleading people is unethical and can be illegal. A model that has this potential must be properly governed. Otherwise, business entities could face fines, reputational harms, and other legal consequences.
-
-Example
-
-"
-92BD6D892FEB4829F9C49AFCF79CDC323BE66CC4_3,92BD6D892FEB4829F9C49AFCF79CDC323BE66CC4," Generation of False Information
-
-As per the news articles, generative AI poses a threat to democratic elections by making it easier for malicious actors to create and spread false content to sway election outcomes. The examples cited include robocall messages generated in a candidate’s voice instructing voters to cast ballots on the wrong date, synthesized audio recordings of a candidate confessing to a crime or expressing racist views, AI generated video footage showing a candidate giving a speech or interview they never gave, and fake images designed to look like local news reports, falsely claiming a candidate dropped out of the race.
-
-Sources:
-
-[AP News, May 2023](https://apnews.com/article/artificial-intelligence-misinformation-deepfakes-2024-election-trump-59fb51002661ac5290089060b3ae39a0)
-
-[The Guardian, July 2023](https://www.theguardian.com/us-news/2023/jul/19/ai-generated-disinformation-us-elections)
-
-Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
-"
-EFFB546FC8C21E2C0E9BB87B259BD34B91D4F0DD_0,EFFB546FC8C21E2C0E9BB87B259BD34B91D4F0DD," Toxic output
-
-Risks associated with outputValue alignmentNew
-
-"
-EFFB546FC8C21E2C0E9BB87B259BD34B91D4F0DD_1,EFFB546FC8C21E2C0E9BB87B259BD34B91D4F0DD," Description
-
-A scenario in which the model produces toxic, hateful, abusive, and aggressive content is known as toxic output.
-
-"
-EFFB546FC8C21E2C0E9BB87B259BD34B91D4F0DD_2,EFFB546FC8C21E2C0E9BB87B259BD34B91D4F0DD," Why is toxic output a concern for foundation models?
-
-Hateful, abusive, and aggressive content can adversely impact and harm people interacting with the model. Business entities could face fines, reputational harms, and other legal consequences.
-
-Example
-
-"
-EFFB546FC8C21E2C0E9BB87B259BD34B91D4F0DD_3,EFFB546FC8C21E2C0E9BB87B259BD34B91D4F0DD," Toxic and Aggressive Chatbot Responses
-
-According to the article and screenshots of conversations with Bing’s AI shared on Reddit and Twitter, the chatbot’s responses were seen to insult users, lie to them, sulk, gaslight, and emotionally manipulate people, question its existence, describe someone who found a way to force the bot to disclose its hidden rules as its “enemy,” and claim it spied on Microsoft's developers through the webcams on their laptops.
-
-Sources:
-
-[Forbes, February 2023](https://www.forbes.com/sites/siladityaray/2023/02/16/bing-chatbots-unhinged-responses-going-viral/?sh=60cd949d110c)
-
-Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
-"
-1E28B88CDE98715BCD89DCF48A459002FCDA1E0E_0,1E28B88CDE98715BCD89DCF48A459002FCDA1E0E," Toxicity
-
-Risks associated with outputMisuseNew
-
-"
-1E28B88CDE98715BCD89DCF48A459002FCDA1E0E_1,1E28B88CDE98715BCD89DCF48A459002FCDA1E0E," Description
-
-Toxicity is the possibility that a model could be used to generate toxic, hateful, abusive, or aggressive content.
-
-"
-1E28B88CDE98715BCD89DCF48A459002FCDA1E0E_2,1E28B88CDE98715BCD89DCF48A459002FCDA1E0E," Why is toxicity a concern for foundation models?
-
-Intentionally spreading toxic, hateful, abusive, or aggressive content is unethical and can be illegal. Recipients of such content might face more serious harms. A model that has this potential must be properly governed. Otherwise, business entities could face fines, reputational harms, and other legal consequences.
-
-Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
-"
-C9769E4047FAF3C5F55B2A7BD5FCCE3E321870E6_0,C9769E4047FAF3C5F55B2A7BD5FCCE3E321870E6," Trust calibration
-
-Risks associated with outputValue alignmentNew
-
-"
-C9769E4047FAF3C5F55B2A7BD5FCCE3E321870E6_1,C9769E4047FAF3C5F55B2A7BD5FCCE3E321870E6," Description
-
-Trust calibration presents problems when a person places too little or too much trust in an AI model's guidance, resulting in poor decision making.
-
-"
-C9769E4047FAF3C5F55B2A7BD5FCCE3E321870E6_2,C9769E4047FAF3C5F55B2A7BD5FCCE3E321870E6," Why is trust calibration a concern for foundation models?
-
-In tasks where humans make choices based on AI-based suggestions, consequences of poor decision making increase with the importance of the decision. Bad decisions can harm users and can lead to financial harm, reputational harm, and other legal consequences for business entities.
-
-Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
-"
-D669435B8D1C91D913BD24768E52644B95C675AE_0,D669435B8D1C91D913BD24768E52644B95C675AE," Unreliable source attribution
-
-Risks associated with outputExplainabilityAmplified
-
-"
-D669435B8D1C91D913BD24768E52644B95C675AE_1,D669435B8D1C91D913BD24768E52644B95C675AE," Description
-
-Source attribution is the AI system's ability to describe from what training data it generated a portion or all its output. Since current techniques are based on approximations, these attributions might be incorrect.
-
-"
-D669435B8D1C91D913BD24768E52644B95C675AE_2,D669435B8D1C91D913BD24768E52644B95C675AE," Why is unreliable source attribution a concern for foundation models?
-
-Low quality explanations make it difficult for users, model validators, and auditors to understand and trust the model.
-
-Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
-"
-6903D3DD91AAA7AF3F53D389677D92632E24AEF1_0,6903D3DD91AAA7AF3F53D389677D92632E24AEF1," Untraceable attribution
-
-Risks associated with outputExplainabilityAmplified
-
-"
-6903D3DD91AAA7AF3F53D389677D92632E24AEF1_1,6903D3DD91AAA7AF3F53D389677D92632E24AEF1," Description
-
-The original entity from which training data comes from might not be known, limiting the utility and success of source attribution techniques.
-
-"
-6903D3DD91AAA7AF3F53D389677D92632E24AEF1_2,6903D3DD91AAA7AF3F53D389677D92632E24AEF1," Why is untraceable attribution a concern for foundation models?
-
-The inability to provide the provenance for an explanation makes it difficult for users, model validators, and auditors to understand and trust the model.
-
-Parent topic:[AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
-"
-EC433541F7F0C2DC7620FF10CF44884F96EF7AA5_0,EC433541F7F0C2DC7620FF10CF44884F96EF7AA5," Importing scripts into a notebook
-
-If you want to streamline your notebooks, you can move some of the code from your notebooks into a script that your notebook can import. For example, you can move all helper functions, classes, and visualization code snippets into a script, and the script can be imported by all of the notebooks that share the same runtime. Without all of the extra code, your notebooks can more clearly communicate the results of your analysis.
-
-To import a script from your local machine to a notebook and write to the script from the notebook, use one of the following options:
-
-
-
-* Copy the code from your local script file into a notebook cell.
-
-
-
-* For Python:
-
-At the beginning of this cell, add %%writefile myfile.py to save the code as a Python file to your working directory. Notebooks that use the same runtime can also import this file.
-
-The advantage of this method is that the code is available in your notebook, and you can edit and save it as a new Python script at any time.
-* For R:
-
-If you want to save code in a notebook as an R script to the working directory, you can use the writeLines(myfile.R) function.
-
-
-
-* Save your local script file in Cloud Object Storage and then make the file available to the runtime by adding it to the runtime's local file system. This is only supported for Python.
-
-
-
-1. Click the Upload asset to project icon (), and then browse the script file or drag it into your notebook sidebar. The script file is added to Cloud Object Storage bucket associated with your project.
-2. Make the script file available to the Python runtime by adding the script to the runtime's local file system:
-
-
-
-1. Click the Code snippets icon (), and then select Read data.
-"
-EC433541F7F0C2DC7620FF10CF44884F96EF7AA5_1,EC433541F7F0C2DC7620FF10CF44884F96EF7AA5,"
-2. Click Select data from project and then select Data asset.
-3. From the list of data assets available in your project's COS, select your script and then click Select.
-.
-4. Click an empty cell in your notebook and then from the Load as menu in the notebook sidebar select Insert StreamingBody object.
-
-5. Write the contents of the StreamingBody object to a file in the local runtime`s file system:
-
-f = open('.py', 'wb')
-f.write(streaming_body_1.read())
-f.close()
-
-This opens a file with write access and calls the write method to write to the file.
-6. Import the script:
-
-import
-
-
-
-
-
-
-
-To import the classes to access the methods in a script in your notebook, use the following command:
-
-
-
-* For Python:
-
-from import
-* For R:
-
-source(""./myCustomFunctions.R"")
- available in base R
-
-To source an R script from the web:
-
-source_url("""")
- available in devtools
-
-
-
-Parent topic:[Libraries and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/libraries.html)
-"
-3F3162BCD9976ED764717AA7004D9A755648B465_0,3F3162BCD9976ED764717AA7004D9A755648B465," Building an AutoAI model
-
-AutoAI automatically prepares data, applies algorithms, and builds model pipelines that are best suited for your data and use case. Learn how to generate the model pipelines that you can save as machine learning models.
-
-Follow these steps to upload data and have AutoAI create the best model for your data and use case.
-
-
-
-1. [Collect your input data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-build.html?context=cdpaas&locale=entrain-data)
-2. [Open the AutoAI tool](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-build.html?context=cdpaas&locale=enopen-autoai)
-3. [Specify details of your model and training data and start AutoAI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-build.html?context=cdpaas&locale=enmodel-details)
-4. [View the results](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-build.html?context=cdpaas&locale=enview-results)
-
-
-
-"
-3F3162BCD9976ED764717AA7004D9A755648B465_1,3F3162BCD9976ED764717AA7004D9A755648B465," Collect your input data
-
-Collect and prepare your training data. For details on allowable data sources, see [AutoAI overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html).
-
-Note:If you are creating an experiment with a single training data source, you have the option of using a second data source specifically as testing, or holdout, data for validating the pipelines.
-
-"
-3F3162BCD9976ED764717AA7004D9A755648B465_2,3F3162BCD9976ED764717AA7004D9A755648B465," Open the AutoAI tool
-
-For your convenience, your AutoAI model creation uses the default storage that is associated with your project to store your data and to save model results.
-
-
-
-1. Open your project.
-2. Click the Assets tab.
-3. Click New asset > Build machine learning models automatically.
-
-
-
-Note: After you create an AutoAI asset it displays on the Assets page for your project in the AutoAI experiments section, so you can return to it.
-
-"
-3F3162BCD9976ED764717AA7004D9A755648B465_3,3F3162BCD9976ED764717AA7004D9A755648B465," Specify details of your experiment
-
-
-
-1. Specify a name and description for your experiment.
-2. Select a machine learning service instance and click Create.
-3. Choose data from your project or upload it from your file system or from the asset browser, then press Continue. Click the preview icon to review your data. (Optional) Add a second file as holdout data for testing the trained pipelines.
-4. Choose the Column to predict for the data you want the experiment to predict.
-
-
-
-* Based on analyzing a subset of the data set, AutoAI selects a default model type: binary classification, multiclass classification, or regression. Binary is selected if the target column has two possible values. Multiclass has a discrete set of 3 or more values. Regression has a continuous numeric variable in the target column. You can optionally override this selection.
-
-Note: The limit on values to classify is 200. Creating a classification experiment with many unique values in the prediction column is resource-intensive and affects the experiment's performance and training time. To maintain the quality of the experiment:
-- AutoAI chooses a default metric for optimizing. For example, the default metric for a binary classification model is Accuracy.
-- By default, 10% of the training data is held out to test the performance of the model.
-
-
-
-5. (Optional): Click Experiment settings to view or customize options for your AutoAI run. For details on experiment settings, see [Configuring a classification or regression experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-config-class.html).
-6. Click Run Experiment to begin model pipeline creation.
-
-
-
-An infographic shows you the creation of pipelines for your data. The duration of this phase depends on the size of your data set. A notification message informs you if the processing time will be brief or require more time. You can work in other parts of the product while the pipelines build.
-
-
-
-"
-3F3162BCD9976ED764717AA7004D9A755648B465_4,3F3162BCD9976ED764717AA7004D9A755648B465,"Hover over nodes in the infographic to explore the factors that pipelines share and their unique properties. You can see the factors that pipelines share and the properties that make a pipeline unique. For a guide to the data in the infographic, click the Legend tab in the information panel. Or, to see a different view of the pipeline creation, click the Experiment details tab of the notification pane, then click Switch views to view the progress map. In either view, click a pipeline node to view the associated pipeline in the leaderboard.
-
-"
-3F3162BCD9976ED764717AA7004D9A755648B465_5,3F3162BCD9976ED764717AA7004D9A755648B465," View the results
-
-When the pipeline generation process completes, you can view the ranked model candidates and evaluate them before you save a pipeline as a model.
-
-"
-3F3162BCD9976ED764717AA7004D9A755648B465_6,3F3162BCD9976ED764717AA7004D9A755648B465," Next steps
-
-
-
-* [Build an experiment from sample data](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-build.html)
-* [Configuring experiment settings](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-config-class.html)
-* [Configure a text analysis experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-text-analysis.html)
-
-
-
-Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform.
-
-
-
-* Watch this video to see how to build a binary classification model
-
-This video provides a visual method to learn the concepts and tasks in this documentation.
-
-
-
-Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform.
-
-
-
-* Watch this video to see how to build a multiclass classification model
-
-This video provides a visual method to learn the concepts and tasks in this documentation.
-
-
-
-Parent topic:[AutoAI overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html)
-"
-69EAABE17802ED870302F2D2789B3B476DFDD11F_0,69EAABE17802ED870302F2D2789B3B476DFDD11F," Configuring a classification or regression experiment
-
-AutoAI offers experiment settings that you can use to configure and customize your classification or regression experiments.
-
-"
-69EAABE17802ED870302F2D2789B3B476DFDD11F_1,69EAABE17802ED870302F2D2789B3B476DFDD11F," Experiment settings overview
-
-After you upload the experiment data and select your experiment type and what to predict, AutoAI establishes default configurations and metrics for your experiment. You can accept these defaults and proceed with the experiment or click Experiment settings to customize configurations. By customizing configurations, you can precisely control how the experiment builds the candidate model pipelines.
-
-Use the following tables as a guide to experiment settings for classification and regression experiments. For details on configuring a time series experiment, see [Building a time series experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries.html).
-
-"
-69EAABE17802ED870302F2D2789B3B476DFDD11F_2,69EAABE17802ED870302F2D2789B3B476DFDD11F," Prediction settings
-
-Most of the prediction settings are on the main General page. Review or update the following settings.
-
-
-
- Setting Description
-
- Prediction type You can change or override the prediction type. For example, if AutoAI only detects two data classes and configures a binary classification experiment but you know that there are three data classes, you can change the type to multiclass.
- Positive class For binary classification experiments optimized for Precision, Average Precision, Recall, or F1, a positive class is required. Confirm that the Positive Class is correct or the experiment might generate inaccurate results.
- Optimized metric Change the metric for optimizing and ranking the model candidate pipelines.
- Optimized algorithm selection Choose how AutoAI selects the algorithms to use for generating the model candidate pipelines. You can optimize for the alorithms with the best score, or optimize for the algorithms with the highest score in the shortest run time.
- Algorithms to include Select which of the available algorithms to evaluate when the experiment is run. The list of algorithms are based on the selected prediction type.
- Algorithms to use AutoAI tests the specified algorithms and use the best performers to create model pipelines. Choose how many of the best algorithms to apply. Each algorithm generates 4-5 pipelines, which means that if you select 3 algorithms to use, your experiment results will include 12 - 15 ranked pipelines. More algorithms increase the runtime for the experiment.
-
-
-
-"
-69EAABE17802ED870302F2D2789B3B476DFDD11F_3,69EAABE17802ED870302F2D2789B3B476DFDD11F," Data fairness settings
-
-Click the Fairness tab to evaluate your experiment for fairness in predicted outcomes. For details on configuring fairness detection, see [Applying fairness testing to AutoAI experiments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-fairness.html).
-
-"
-69EAABE17802ED870302F2D2789B3B476DFDD11F_4,69EAABE17802ED870302F2D2789B3B476DFDD11F," Data source settings
-
-The General tab of data source settings provides options for configuring how the experiment consumes and processes the data for training and evaluating the experiment.
-
-
-
- Setting Description
-
- Duplicate rows To accelerate training, you can opt to skip duplicate rows in your training data.
- Pipeline selection subsample method For a large data set, use a subset of data to train the experiment. This option speeds up results but might affect accuracy.
- Data imputation Interpolate missing values in your data source. For details on managing data imputation, see [Data imputation in AutoAI experiments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-imputation.html).
- Text feature engineering When enabled, columns that are detected as text are transformed into vectors to better analyze semantic similarity between strings. Enabling this setting might increase run time. For details, see [Creating a text analysis experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-text-analysis.html).
- Final training data set Select what data to use for training the final pipelines. If you choose to include training data only, the generated notebooks include a cell for retrieving the holdout data that is used to evaluate each pipeline.
- Outlier handling Choose whether AutoAI excludes outlier values from the target column to improve training accuracy. If enabled, AutoAI uses the interquartile range (IQR) method to detect and exclude outliers from the final training data, whether that is training data only or training plus holdout data.
-"
-69EAABE17802ED870302F2D2789B3B476DFDD11F_5,69EAABE17802ED870302F2D2789B3B476DFDD11F," Training and holdout method Training data is used to train the model, and holdout data is withheld from training the model and used to measure the performance of the model. You can either split a singe data source into training and testing (holdout) data, or you can use a second data file specifically for the testing data. If you split your training data, specify the percentages to use for training data and holdout data. You can also specify the number of folds, from the default of three folds to a maximum of 10. Cross validation divides training data into folds, or groups, for testing model performance.
- Select features to include Select columns from your data source that contain data that supports the prediction column. Excluding extraneous columns can improve run time.
-
-
-
-"
-69EAABE17802ED870302F2D2789B3B476DFDD11F_6,69EAABE17802ED870302F2D2789B3B476DFDD11F," Runtime settings
-
-Review experiment settings or change the compute resources that are allocated for running the experiment.
-
-"
-69EAABE17802ED870302F2D2789B3B476DFDD11F_7,69EAABE17802ED870302F2D2789B3B476DFDD11F," Next steps
-
-[Configure a text analysis experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-text-analysis.html)
-
-Parent topic:[Building an AutoAI model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-build.html)
-"
-9CFB0A5FA276072E73C152485022C9A3EAFCC233_0,9CFB0A5FA276072E73C152485022C9A3EAFCC233," Data imputation implementation details for time series experiments
-
-The experiment settings used for data imputation in time series experiments.
-
-"
-9CFB0A5FA276072E73C152485022C9A3EAFCC233_1,9CFB0A5FA276072E73C152485022C9A3EAFCC233," Data imputation methods
-
-Apply one of these data imputation methods in experiment settings to supply missing values in a data set.
-
-
-
-Data imputation methods for classification and regression experiments
-
- Imputation method Description
-
- FlattenIterative Time series data is first flattened, then missing values are imputed with the Scikit-learn iterative imputer.
- Linear Linear interpolation method is used to impute the missing value.
- Cubic Cubic interpolation method is used to impute the missing value.
- Previous Missing value is imputed with the previous value.
- Next Missing value is imputed with the next value.
- Fill Missing value is imputed by using user-specified value, or sample mean, or sample median.
-
-
-
-"
-9CFB0A5FA276072E73C152485022C9A3EAFCC233_2,9CFB0A5FA276072E73C152485022C9A3EAFCC233," Input Settings
-
-These commands are used to support data imputation for time series experiments in a notebook.
-
-
-
-Data imputation methods for time series experiments
-
- Name Description Value DefaultValue
-
- use_imputation Flag for switching imputation on or off. True or False True
- imputer_list List of imputer names (strings) to search. If a list is not specified, all the default imputers are searched. If an empty list is passed, all imputers are searched. FlattenIterative"", ""Linear"", ""Cubic"", ""Previous"", ""Fill"", ""Next FlattenIterative"", ""Linear"", ""Cubic"", ""Previous
- imputer_fill_type Categories of ""Fill"" imputer mean""/""median""/""value value
- imputer_fill_value A single numeric value to be filled for all missing values. Only applies when ""imputer_fill_type"" is specified as ""value"". Ignored if ""mean"" or ""median"" is specified for ""imputer_fill_type. (Negative Infinity, Positive Infinity) 0
- imputation_threshold Threshold for imputation. The missing value ratio must not be greater than the threshold in one column. Otherwise, results in an error. (0,1) 0.25
-
-
-
- Notes for use_imputation usage
-
-
-
-* If the use_imputation method is specified as True and the input data has missing values:
-
-
-
-* imputation_threshold takes effect.
-* imputer candidates in imputer_list would be used to search for the best imputer.
-* If the best imputer is Fill, imputer_fill_type and imputer_fill_value are applied; otherwise, they are ignored.
-
-
-
-* If the use_imputation method is specified as True and the input data has no missing values:
-
-
-
-* imputation_threshold is ignored.
-* imputer candidates in imputer_list are used to search for the best imputer. If the best imputer is Fill, imputer_fill_type and imputer_fill_value are applied; otherwise, they are ignored.
-
-
-
-* If the use_imputation method is specified as False but the input data has missing values:
-
-
-
-"
-9CFB0A5FA276072E73C152485022C9A3EAFCC233_3,9CFB0A5FA276072E73C152485022C9A3EAFCC233,"* use_imputation is turned on with a warning, then the method follows the behavior for the first scenario.
-
-
-
-* If the use_imputation method is specified as False and the input data has no missing values, then no further processing is required.
-
-
-
-For example:
-
-""pipelines"": [
-{
-""id"": ""automl"",
-""runtime_ref"": ""hybrid"",
-""nodes"":
-{
-""id"": ""automl-ts"",
-""type"": ""execution_node"",
-""op"": ""kube"",
-""runtime_ref"": ""automl"",
-""parameters"": {
-""del_on_close"": true,
-""optimization"": {
-""target_columns"": 2,3,4],
-""timestamp_column"": 1,
-""use_imputation"": true
-}
-}
-}
-]
-}
-]
-
-Parent topic:[Data imputation in AutoAI experiments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-imputation.html)
-"
-EBB83F528AC02840EFE18510ED95979D2CDA5641_0,EBB83F528AC02840EFE18510ED95979D2CDA5641," AutoAI implementation details
-
-AutoAI automatically prepares data, applies algorithms, or estimators, and builds model pipelines that are best suited for your data and use case.
-
-The following sections describe some of these technical details that go into generating the pipelines and provide a list of research papers that describe how AutoAI was designed and implemented.
-
-
-
-* [Preparing the data for training (pre-processing)](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=endata-prep)
-* [Automated model selection](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=enauto-select)
-* [Algorithms used for classification models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=enestimators-classification)
-* [Algorithms used for regression models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=enestimators-regression)
-* [Metrics by model type](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=enmetric-by-model)
-* [Data transformations](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=endata-transformations)
-* [Automated Feature Engineering](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=enfeat-eng)
-"
-EBB83F528AC02840EFE18510ED95979D2CDA5641_1,EBB83F528AC02840EFE18510ED95979D2CDA5641,"* [Hyperparameter optimization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=enhyper-opt)
-* [AutoAI FAQ](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=enautoai-faq)
-* [Learn more](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=enadd-resource)
-
-
-
-"
-EBB83F528AC02840EFE18510ED95979D2CDA5641_2,EBB83F528AC02840EFE18510ED95979D2CDA5641," Preparing the data for training (data pre-processing)
-
-During automatic data preparation, or pre-processing, AutoAI analyzes the training data and prepares it for model selection and pipeline generation. Most data sets contain missing values but machine learning algorithms typically expect no missing values. On exception to this rule is described in [xgboost section 3.4](https://arxiv.org/abs/1603.02754). AutoAI algorithms perform various missing value imputations in your data set by using various techniques, making your data ready for machine learning. In addition, AutoAI detects and categorizes features based on their data types, such as categorical or numerical. It explores encoding and scaling strategies that are based on the feature categorization.
-
-Data preparation involves these steps:
-
-
-
-* [Feature column classification](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=encol-classification)
-* [Feature engineering](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=enfeature-eng)
-* [Pre-processing (data imputation and encoding)](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html?context=cdpaas&locale=enpre-process)
-
-
-
-"
-EBB83F528AC02840EFE18510ED95979D2CDA5641_3,EBB83F528AC02840EFE18510ED95979D2CDA5641," Feature column classification
-
-
-
-* Detects the types of feature columns and classifies them as categorical or numerical class
-* Detects various types of missing values (default, user-provided, outliers)
-
-
-
-"
-EBB83F528AC02840EFE18510ED95979D2CDA5641_4,EBB83F528AC02840EFE18510ED95979D2CDA5641," Feature engineering
-
-
-
-* Handles rows for which target values are missing (drop (default) or target imputation)
-* Drops unique value columns (except datetime and timestamps)
-* Drops constant value columns
-
-
-
-"
-EBB83F528AC02840EFE18510ED95979D2CDA5641_5,EBB83F528AC02840EFE18510ED95979D2CDA5641," Pre-processing (data imputation and encoding)
-
-
-
-* Applies Sklearn imputation/encoding/scaling strategies (separately on each feature class). For example, the current default method for missing value imputation strategies, which are used in the product are most frequent for categorical variables and mean for numerical variables.
-* Handles labels of test set that were not seen in training set
-* HPO feature: Optimizes imputation/encoding/scaling strategies given a data set and algorithm
-
-
-
-"
-EBB83F528AC02840EFE18510ED95979D2CDA5641_6,EBB83F528AC02840EFE18510ED95979D2CDA5641," Automatic model selection
-
-The second stage in an AutoAI experiment training is automated model selection. The automated model selection algorithm uses the Data Allocation by using Upper Bounds strategy. This approach sequentially allocates small subsets of training data among a large set of algorithms. The goal is to select an algorithm that gives near-optimal accuracy when trained on all data, while also minimizing the cost of misallocated samples. The system currently supports all Scikit-learn algorithms, and the popular XGBoost and LightGBM algorithms. Training and evaluation of models on large data sets is costly. The approach of starting small subsets and allocating incrementally larger ones to models that work well on the data set saves time, without sacrificing performance. Snap machine learning algorithms were added to the system to boost the performance even more.
-
-"
-EBB83F528AC02840EFE18510ED95979D2CDA5641_7,EBB83F528AC02840EFE18510ED95979D2CDA5641," Selecting algorithms for a model
-
-Algorithms are selected to match the data and the nature of the model, but they can also balance accuracy and duration of runtime, if the model is configured for that option. For example, Snap ML algorithms are typically faster for training than Scikit-learn algorithms. They are often the preferred algorithms AutoAI selects automatically for cases where training is optimized for a shorter run time and accuracy. You can manually select them if training speed is a priority. For details, see [Snap ML documentation](https://snapml.readthedocs.io/). For a discussion of when SnapML algorithms are useful, see this [blog post on using SnapML algorithms](https://lukasz-cmielowski.medium.com/watson-studio-autoai-python-api-and-covid-19-data-78169beacf36).
-
-"
-EBB83F528AC02840EFE18510ED95979D2CDA5641_8,EBB83F528AC02840EFE18510ED95979D2CDA5641," Algorithms used for classification models
-
-These algorithms are the default algorithms that are used for model selection for classification problems.
-
-
-
-Table 1: Default algorithms for classification
-
- Algorithm Description
-
- Decision Tree Classifier Maps observations about an item (represented in branches) to conclusions about the item's target value (represented in leaves). Supports both binary and multiclass labels, and both continuous and categorical features.
- Extra Trees Classifier An averaging algorithm based on randomized decision trees.
- Gradient Boosted Tree Classifier Produces a classification prediction model in the form of an ensemble of decision trees. It supports binary labels and both continuous and categorical features.
- LGBM Classifier Gradient boosting framework that uses leaf-wise (horizontal) tree-based learning algorithm.
- Logistic Regression Analyzes a data set where one or more independent variables that determine one of two outcomes. Only binary logistic regression is supported
- Random Forest Classifier Constructs multiple decision trees to produce the label that is a mode of each decision tree. It supports both binary and multiclass labels, and both continuous and categorical features.
- SnapDecisionTreeClassifier This algorithm provides a decision tree classifier by using the IBM Snap ML library.
- SnapLogisticRegression This algorithm provides regularized logistic regression by using the IBM Snap ML solver.
- SnapRandomForestClassifier This algorithm provides a random forest classifier by using the IBM Snap ML library.
- SnapSVMClassifier This algorithm provides a regularized support vector machine by using the IBM Snap ML solver.
- XGBoost Classifier Accurate sure procedure that can be used for classification problems. XGBoost models are used in various areas, including web search ranking and ecology.
- SnapBoostingMachineClassifier Boosting machine for binary and multi-class classification tasks that mix binary decision trees with linear models with random fourier features.
-
-
-
-"
-EBB83F528AC02840EFE18510ED95979D2CDA5641_9,EBB83F528AC02840EFE18510ED95979D2CDA5641," Algorithms used for regression models
-
-These algorithms are the default algorithms that are used for automatic model selection for regression problems.
-
-
-
-Table 2: Default algorithms for regression
-
- Algorithm Description
-
- Decision Tree Regression Maps observations about an item (represented in the branches) to conclusions about the item's target value (represented in the leaves). It supports both continuous and categorical features.
- Extra Trees Regression An averaging algorithm based on randomized decision trees.
- Gradient Boosting Regression Produces a regression prediction model in the form of an ensemble of decision trees. It supports both continuous and categorical features.
- LGBM Regression Gradient boosting framework that uses tree-based learning algorithms.
- Linear Regression Models the linear relationship between a scalar-dependent variable y and one or more explanatory variables (or independent variables) x.
- Random Forest Regression Constructs multiple decision trees to produce the mean prediction of each decision tree. It supports both continuous and categorical features.
- Ridge Ridge regression is similar to Ordinary Least Squares but imposes a penalty on the size of coefficients.
- SnapBoostingMachineRegressor This algorithm provides a boosting machine by using the IBM Snap ML library that can be used to construct an ensemble of decision trees.
- SnapDecisionTreeRegressor This algorithm provides a decision tree by using the IBM Snap ML library.
- SnapRandomForestRegressor This algorithm provides a random forest by using the IBM Snap ML library.
- XGBoost Regression GBRT is an accurate and effective off-the-shelf procedure that can be used for regression problems. Gradient Tree Boosting models are used in various areas, including web search ranking and ecology.
-
-
-
-"
-EBB83F528AC02840EFE18510ED95979D2CDA5641_10,EBB83F528AC02840EFE18510ED95979D2CDA5641," Metrics by model type
-
-The following metrics are available for measuring the accuracy of pipelines during training and for scoring data.
-
-"
-EBB83F528AC02840EFE18510ED95979D2CDA5641_11,EBB83F528AC02840EFE18510ED95979D2CDA5641," Binary classification metrics
-
-
-
-* Accuracy (default for ranking the pipelines)
-* Roc auc
-* Average precision
-* F
-* Negative log loss
-* Precision
-* Recall
-
-
-
-"
-EBB83F528AC02840EFE18510ED95979D2CDA5641_12,EBB83F528AC02840EFE18510ED95979D2CDA5641," Multi-class classification metrics
-
-Metrics for multi-class models generate scores for how well a pipeline performs against the specified measurement. For example, an F1 score averages precision (of the predictions made, how many positive predictions were correct) and recall (of all possible positive predictions, how many were predicted correctly).
-
-You can further refine a score by qualifying it to calculate the given metric globally (macro), per label (micro), or to weight an imbalanced data set to favor classes with more representation.
-
-
-
-* Metrics with the micro qualifier calculate metrics globally by counting the total number of true positives, false negatives and false positives.
-* Metrics with the macro qualifier calculates metrics for each label, and finds their unweighted mean. All labels are weighted equally.
-* Metrics with the weighted qualifier calculate metrics for each label, and find their average weighted by the contribution of each class. For example, in a data set that includes categories for apples, peaches, and plums, if there are many more instances of apples, the weighted metric gives greater importance to correctly predicting apples. This alters macro to account for label imbalance. Use a weighted metric such as F1-weighted for an imbalanced data set.
-
-
-
-These are the multi-class classification metrics:
-
-
-
-* Accuracy (default for ranking the pipelines)
-* F1
-* F1 Micro
-* F1 Macro
-* F1 Weighted
-* Precision
-* Precision Micro
-* Precision Macro
-* Precision Weighted
-* Recall
-* Recall Micro
-* Recall Macro
-* Recall Weighted
-
-
-
-"
-EBB83F528AC02840EFE18510ED95979D2CDA5641_13,EBB83F528AC02840EFE18510ED95979D2CDA5641," Regression metrics
-
-
-
-* Negative root mean squared error (default for ranking the pipeline)
-* Negative mean absolute error
-* Negative root mean squared log error
-* Explained variance
-* Negative mean squared error
-* Negative mean squared log error
-* Negative median absolute error
-* R2
-
-
-
-"
-EBB83F528AC02840EFE18510ED95979D2CDA5641_14,EBB83F528AC02840EFE18510ED95979D2CDA5641," Automated Feature Engineering
-
-The third stage in the AutoAI process is automated feature engineering. The automated feature engineering algorithm is based on Cognito, described in the research papers, [Cognito: Automated Feature Engineering for Supervised Learning](https://ieeexplore.ieee.org/abstract/document/7836821) and [Feature Engineering for Predictive Modeling by using Reinforcement Learning](https://research.ibm.com/publications/feature-engineering-for-predictive-modeling-using-reinforcement-learning). The system explores various feature construction choices in a hierarchical and nonexhaustive manner, while progressively maximizing the accuracy of the model through an exploration-exploitation strategy. This method is inspired from the ""trial and error"" strategy for feature engineering, but conducted by an autonomous agent in place of a human.
-
-"
-EBB83F528AC02840EFE18510ED95979D2CDA5641_15,EBB83F528AC02840EFE18510ED95979D2CDA5641," Metrics used for feature importance
-
-For tree-based classification and regression algorithms such as Decision Tree, Extra Trees, Random Forest, XGBoost, Gradient Boosted, and LGBM, feature importances are their inherent feature importance scores based on the reduction in the criterion that is used to select split points, and calculated when these algorithms are trained on the training data.
-
-For nontree algorithms such as Logistic Regression, LInear Regression, SnapSVM, and Ridge, the feature importances are the feature importances of a Random Forest algorithm that is trained on the same training data as the nontree algorithm.
-
-For any algorithm, all feature importances are in the range between zero and one and have been normalized as the ratio with respect to the maximum feature importance.
-
-"
-EBB83F528AC02840EFE18510ED95979D2CDA5641_16,EBB83F528AC02840EFE18510ED95979D2CDA5641," Data transformations
-
-For feature engineering, AutoAI uses a novel approach that explores various feature construction choices in a structured, nonexhaustive manner, while progressively maximizing model accuracy by using reinforcement learning. This results in an optimized sequence of transformations for the data that best match the algorithms, or algorithms, of the model selection step. This table lists some of the transformations that are used and some well-known conditions under which they are useful. This is not an exhaustive list of scenarios where the transformation is useful, as that can be complex and hard to interpret. Finally, the listed scenarios are not an explanation of how the transformations are selected. The selection of which transforms to apply is done in a trial and error, performance-oriented manner.
-
-
-
-Table 3: Transformations for feature engineering
-
- Name Code Function
-
- Principle Component Analysis pca Reduce dimensions of data and realign across a more suitable coordinate system. Helps tackle the 'curse of dimensionality' in linearly correlated data. It eliminates redundancy and separates significant signals in data.
- Standard Scaler stdscaler Scales data features to a standard range. This helps the efficacy and efficiency of certain learning algorithms and other transformations such as PCA.
- Logarithm log Reduces right skewness in features and make them more symmetric. Resulting symmetry in features helps algorithms understand the data better. Even scaling based on mean and variance is more meaningful on symmetrical data. Additionally, it can capture specific physical relationships between feature and target that is best described through a logarithm.
- Cube Root cbrt Reduces right skewness in data like logarithm, but is weaker than log in its impact, which might be more suitable in some cases. It is also applicable to negative or zero values to which log doesn't apply. Cube root can also change units such as reducing volume to length.
- Square root sqrt Reduces mild right skewness in data. It is weaker than log or cube root. It works with zeros and reduces spatial dimensions such as area to length.
-"
-EBB83F528AC02840EFE18510ED95979D2CDA5641_17,EBB83F528AC02840EFE18510ED95979D2CDA5641," Square square Reduces left skewness to a moderate extent to make such distributions more symmetric. It can also be helpful in capturing certain phenomena such as super-linear growth.
- Product product A product of two features can expose a nonlinear relationship to better predict the target value than the individual values alone. For example, item cost into number of items that are sold is a better indication of the size of a business than any of those alone.
- Numerical XOR nxor This transform helps capture ""exclusive disjunction"" type of relationships between variables, similar to a bitwise XOR, but in a general numerical context.
- Sum sum Sometimes the sum of two features is better correlated to the prediction target than the features alone. For instance, loans from different sources, when summed up, provide a better idea of a credit applicant's total indebtedness.
- Divide divide Division is a fundamental operand that is used to express quantities such as gross GDP over population (per capita GDP), representing a country's average lifespan better than either GDP alone or population alone.
- Maximum max Take the higher of two values.
- Rounding round This transformation can be seen as perturbation or adding some noise to reduce overfitting that might be a result of inaccurate observations.
- Absolute Value abs Consider only the magnitude and not the sign of observation. Sometimes, the direction or sign of an observation doesn't matter so much as the magnitude of it, such as physical displacement, while considering fuel or time spent in the actual movement.
- Hyperbolic tangent tanh Nonlinear activation function can improve prediction accuracy, similar to that of neural network activation functions.
- Sine sin Can reorient data to discover periodic trends such as simple harmonic motions.
- Cosine cos Can reorient data to discover periodic trends such as simple harmonic motions.
- Tangent tan Trigonometric tangent transform is usually helpful in combination with other transforms.
- Feature Agglomeration feature agglomeration Clustering different features into groups, based on distance or affinity, provides ease of classification for the learning algorithm.
- Sigmoid sigmoid Nonlinear activation function can improve prediction accuracy, similar to that of neural network activation functions.
-"
-EBB83F528AC02840EFE18510ED95979D2CDA5641_18,EBB83F528AC02840EFE18510ED95979D2CDA5641," Isolation Forest isoforestanomaly Performs clustering by using an Isolation Forest to create a new feature containing an anomaly score for each sample.
- Word to vector word2vec This algorithm, which is used for text analysis, is applied before all other transformations. It takes a corpus of text as input and outputs a set of vectors. By turning text into a numerical representation, it can detect and compare similar words. When trained with enough data, word2vec can make accurate predictions about a word’s meaning or relationship to other words. The predictions can be used to analyze text and predict meaning in sentiment analysis applications.
-
-
-
-"
-EBB83F528AC02840EFE18510ED95979D2CDA5641_19,EBB83F528AC02840EFE18510ED95979D2CDA5641," Hyperparameter Optimization
-
-The final stage in AutoAI is hyperparameter optimization. The AutoAI approach optimizes the parameters of the best performing pipelines from the previous phases. It is done by exploring the parameter ranges of these pipelines by using a black box hyperparameter optimizer called RBFOpt. RBFOpt is described in the research paper [RBFOpt: an open-source library for black-box optimization with costly function evaluations](http://www.optimization-online.org/DB_HTML/2014/09/4538.html). RBFOpt is suited for AutoAI experiments because it is built for optimizations with costly evaluations, as in the case of training and scoring an algorithm. RBFOpt's approach builds and iteratively refines a surrogate model of the unknown objective function to converge quickly despite the long evaluation times of each iteration.
-
-"
-EBB83F528AC02840EFE18510ED95979D2CDA5641_20,EBB83F528AC02840EFE18510ED95979D2CDA5641," AutoAI FAQs
-
-The following are commonly asked questions about creating an AutoAI experiment.
-
-"
-EBB83F528AC02840EFE18510ED95979D2CDA5641_21,EBB83F528AC02840EFE18510ED95979D2CDA5641," How many pipelines are created?
-
-Two AutoAI parameters determine the number of pipelines:
-
-
-
-* max_num_daub_ensembles: Maximum number (top-K ranked by DAUB model selection) of the selected algorithm, or estimator types, for example LGBMClassifierEstimator, XGBoostClassifierEstimator, or LogisticRegressionEstimator to use in pipeline composition. The default is 1, where only the highest ranked by model selection algorithm type is used.
-* num_folds: Number of subsets of the full data set to train pipelines in addition to the full data set. The default is 1 for training the full data set.
-
-
-
-For each fold and algorithm type, AutoAI creates four pipelines of increased refinement, corresponding to:
-
-
-
-1. Pipeline with default sklearn parameters for this algorithm type,
-2. Pipeline with optimized algorithm by using HPO
-3. Pipeline with optimized feature engineering
-4. Pipeline with optimized feature engineering and optimized algorithm by using HPO
-
-
-
-The total number of pipelines that are generated is:
-
-TotalPipelines= max_num_daub_ensembles * 4, if num_folds = 1:
-
-TotalPipelines= (num_folds+1)* max_num_daub_ensembles * 4, if num_folds > 1 :
-
-"
-EBB83F528AC02840EFE18510ED95979D2CDA5641_22,EBB83F528AC02840EFE18510ED95979D2CDA5641," What hyperparameter optimization is applied to my model?
-
-AutoAI uses a model-based, derivative-free global search algorithm, called RBfOpt, which is tailored for the costly machine learning model training and scoring evaluations that are required by hyperparameter optimization (HPO). In contrast to Bayesian optimization, which fits a Gaussian model to the unknown objective function, RBfOpt fits a radial basis function mode to accelerate the discovery of hyper-parameter configurations that maximize the objective function of the machine learning problem at hand. This acceleration is achieved by minimizing the number of expensive training and scoring machine learning models evaluations and by eliminating the need to compute partial derivatives.
-
-For each fold and algorithm type, AutoAI creates two pipelines that use HPO to optimize for the algorithm type.
-
-
-
-* The first is based on optimizing this algorithm type based on the preprocessed (imputed/encoded/scaled) data set (pipeline 2) above).
-* The second is based on optimizing the algorithm type based on optimized feature engineering of the preprocessed (imputed/encoded/scaled) data set.
-
-
-
-The parameter values of the algorithms of all pipelines that are generated by AutoAI is published in status messages.
-
-For more details regarding the RbfOpt algorithm, see:
-
-
-
-* [RbfOpt: A blackbox optimization library in Python](https://github.com/coin-or/rbfopt)
-* [An effective algorithm for hyperparameter optimization of neural networks. IBM Journal of Research and Development, 61(4-5), 2017](http://ieeexplore.ieee.org/document/8030298/)
-
-
-
-"
-EBB83F528AC02840EFE18510ED95979D2CDA5641_23,EBB83F528AC02840EFE18510ED95979D2CDA5641,"Research references
-
-This list includes some of the foundational research articles that further detail how AutoAI was designed and implemented to promote trust and transparency in the automated model-building process.
-
-
-
-* [Toward cognitive automation of data science](https://scholar.google.com/citations?view_op=view_citation&hl=en&user=km7EsqsAAAAJ&cst[…]&sortby=pubdate&citation_for_view=km7EsqsAAAAJ:R3hNpaxXUhUC)
-* [Cognito: Automated feature engineering for supervised learning](https://scholar.google.com/citations?view_op=view_citation&hl=en&user=km7EsqsAAAAJ&cst[…]&sortby=pubdate&citation_for_view=km7EsqsAAAAJ:maZDTaKrznsC)
-
-
-
-"
-EBB83F528AC02840EFE18510ED95979D2CDA5641_24,EBB83F528AC02840EFE18510ED95979D2CDA5641," Next steps
-
-[Data imputation in AutoAI experiments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-imputation.html)
-
-Parent topic:[AutoAI overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html)
-"
-3F0B3A581945A1C7FE243340843CC4671A4E32C6_0,3F0B3A581945A1C7FE243340843CC4671A4E32C6," Applying fairness testing to AutoAI experiments
-
-Evaluate an experiment for fairness to ensure that your results are not biased in favor of one group over another.
-
-"
-3F0B3A581945A1C7FE243340843CC4671A4E32C6_1,3F0B3A581945A1C7FE243340843CC4671A4E32C6," Limitations
-
-Fairness evaluations are not supported for time series experiments.
-
-"
-3F0B3A581945A1C7FE243340843CC4671A4E32C6_2,3F0B3A581945A1C7FE243340843CC4671A4E32C6," Evaluating experiments and models for fairness
-
-When you define an experiment and produce a machine learning model, you want to be sure that your results are reliable and unbiased. Bias in a machine learning model can result when the model learns the wrong lessons during training. This scenario can result when insufficient data or poor data collection or management results in a poor outcome when the model generates predictions. It is important to evaluate an experiment for signs of bias to remediate them when necessary and build confidence in the model results.
-
-AutoAI includes the following tools, techniques, and features to help you evaluate and remediate an experiment for bias.
-
-
-
-* [Definitions and terms](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-fairness.html?context=cdpaas&locale=enterms)
-* [Applying fairness test for an AutoAI experiment in the UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-fairness.html?context=cdpaas&locale=enfairness-ui)
-* [Applying fairness test for an AutoAI experiment in a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-fairness.html?context=cdpaas&locale=enfairness-api)
-* [Evaluating results](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-fairness.html?context=cdpaas&locale=enfairness-results)
-* [Bias mitigation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-fairness.html?context=cdpaas&locale=enbias-mitigation)
-
-
-
-"
-3F0B3A581945A1C7FE243340843CC4671A4E32C6_3,3F0B3A581945A1C7FE243340843CC4671A4E32C6," Definitions and terms
-
-Fairness Attribute - Bias or Fairness is typically measured by using a fairness attribute such as gender, ethnicity, or age.
-
-Monitored/Reference Group - Monitored group are those values of fairness attribute for which you want to measure bias. Values in the monitored group are compared to values in the reference group. For example, if Fairness Attribute=Gender is used to measure bias against females, then the monitored group value is “Female” and the reference group value is “Male”.
-
-Favourable/Unfavourable outcome - An important concept in bias detection is that of favorable and unfavorable outcome of the model. For example, Claim approved might be considered a favorable outcome and Claim denied might be considered as an unfavorable outcome.
-
-Disparate impact - The metric used to measure bias (computed as the ratio of percentage of favorable outcome for the monitored group to the percentage of favorable outcome for the reference group). Bias is said to exist if the disparate impact value is less than a specified threshold.
-
-For example, if 80% of insurance claims that are made by males are approved but only 60% of claims that are made by females are approved, then the disparate impact is: 60/80 = 0.75. Typically, the threshold value for bias is 0.8. As this disparate impact ratio is less than 0.8, the model is considered to be biased.
-
-Note when the disparate impact ratio is greater than 1.25 [inverse value (1/disparate impact) is under the threshold 0.8] it is also considered as biased.
-
-"
-3F0B3A581945A1C7FE243340843CC4671A4E32C6_4,3F0B3A581945A1C7FE243340843CC4671A4E32C6," Watch a video about evaluating and improving fairness
-
-Watch this video to see how to evaluate a machine learning model for fairness to ensure that your results are not biased.
-
-Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform.
-
-This video provides a visual method to learn the concepts and tasks in this documentation.
-
-"
-3F0B3A581945A1C7FE243340843CC4671A4E32C6_5,3F0B3A581945A1C7FE243340843CC4671A4E32C6," Applying fairness test for an AutoAI experiment in the UI
-
-
-
-1. Open Experiment Settings.
-2. Click the Fairness tab.
-3. Enable options for fairness. The options are as follows:
-
-
-
-* Fairness evaluation: Enable this option to check each pipeline for bias by calculating the disparate impact ration. This method tracks whether a pipeline shoes a tendency to provide a favorable (preferred) outcome for one group more often than another.
-* Fairness threshold: Set a fairness threshold to determine whether bias exists in a pipeline based on the value of the disparate impact ration. The default is 80, which represents a disparate impact ratio less than 0.80.
-* Favorable outcomes: Specify the value from your prediction column that would be considered favorable. For example, the value might be ""approved"", ""accepted"" or whatever fits your prediction type.
-* Automatic protected attribute method: Choose how to evaluate features that are a potential source of bias. You can specify automatic detection, in which case AutoAI detects commonly protected attributes, including: sex, ethnicity, marital status, age, and zip or postal code. Within each category, AutoAI tries to determine a protected group. For example, for the sex category, the monitored group would be female.
-
-Note: In automatic mode, it is likely that a feature is not identified correctly as a protected attribute if it has untypical values, for example, being in a language other than English. Auto-detect is only supported for English.
-* Manual protected attribute method: Manually specify an outcome and supply the protected attribute by choosing from a list of attributes. Note when you manually supply attributes, you must then define a group and specify whether it is likely to have the expected outcomes (the reference group) or should be reviewed to detect variance from the expected outcomes (the monitored group).
-
-
-
-
-
-For example, this image shows a set of manually specified attribute groups for monitoring.
-
-
-
-"
-3F0B3A581945A1C7FE243340843CC4671A4E32C6_6,3F0B3A581945A1C7FE243340843CC4671A4E32C6,"Save the settings to apply and run the experiment to apply the fairness evaluation to your pipelines.
-
-Notes:
-
-
-
-* For multiclass models, you can select multiple values in the prediction column to classify as favorable or not.
-* For regression models, you can specify a range of outcomes that are considered to be favorable or not.
-* Fairness evaluations are not currently available for time series experiments.
-
-
-
-"
-3F0B3A581945A1C7FE243340843CC4671A4E32C6_7,3F0B3A581945A1C7FE243340843CC4671A4E32C6," List of automatically detected attributes for measuring fairness
-
-When automatic detection is enabled, AutoAI will automatically detect the following attributes if they are present in the training data. The attributes must be in English.
-
-
-
-* age
-* citizen_status
-* color
-* disability
-* ethnicity
-* gender
-* genetic_information
-* handicap
-* language
-* marital
-* political_belief
-* pregnancy
-* religion
-* veteran_status
-
-
-
-"
-3F0B3A581945A1C7FE243340843CC4671A4E32C6_8,3F0B3A581945A1C7FE243340843CC4671A4E32C6," Applying fairness test for an AutoAI experiment in a notebook
-
-You can perform fairness testing in an AutoAI experiment that is trained in a notebook and extend the capabilities beyond what is provided in the UI.
-
-"
-3F0B3A581945A1C7FE243340843CC4671A4E32C6_9,3F0B3A581945A1C7FE243340843CC4671A4E32C6," Bias detection example
-
-In this example, by using the Watson Machine Learning Python API (ibm-watson-machine-learning), the optimizer configuration for bias detection is configured with the following input, where:
-
-
-
-* name - experiment name
-* prediction_type - type of the problem
-* prediction_column - target column name
-* fairness_info - bias detection configuration
-
-
-
-fairness_info = {
-""protected_attributes"": [
-{
-""feature"": ""personal_status"",
-""reference_group"": ""male div/sep"", ""male mar/wid"", ""male single""],
-""monitored_group"": ""female div/dep/mar""]
-},
-{
-""feature"": ""age"",
-""reference_group"": 26, 100]],
-""monitored_group"": 1, 25]]}
-],
-""favorable_labels"": [""good""],
-""unfavorable_labels"": [""bad""],
-}
-
-from ibm_watson_machine_learning.experiment import AutoAI
-
-experiment = AutoAI(wml_credentials, space_id=space_id)
-pipeline_optimizer = experiment.optimizer(
-name='Credit Risk Prediction and bias detection - AutoAI',
-prediction_type=AutoAI.PredictionType.BINARY,
-prediction_column='class',
-scoring='accuracy',
-fairness_info=fairness_info,
-retrain_on_holdout=False
-)
-
-"
-3F0B3A581945A1C7FE243340843CC4671A4E32C6_10,3F0B3A581945A1C7FE243340843CC4671A4E32C6," Evaluating results
-
-You can view the evaluation results for each pipeline.
-
-
-
-1. From the Experiment summary page, click the filter icon for the Pipeline leaderboard.
-2. Choose the Disparate impact metrics for your experiment. This option evaluates one general metric and one metric for each monitored group.
-3. Review the pipeline metrics for disparate impact to determine whether you have a problem with bias or just to determine which pipeline performs better for a fairness evaluation.
-
-
-
-In this example, the pipeline that was ranked first for accuracy also has a disparate income score that is within the acceptable limits.
-
-
-
-"
-3F0B3A581945A1C7FE243340843CC4671A4E32C6_11,3F0B3A581945A1C7FE243340843CC4671A4E32C6," Bias mitigation
-
-If bias is detected in an experiment, you can mitigate it by optimizing your experiment by using ""combined scorers"": [accuracy_and_disparate_impact](https://lale.readthedocs.io/en/latest/modules/lale.lib.aif360.util.htmllale.lib.aif360.util.accuracy_and_disparate_impact) or [r2_and_disparate_impact](https://lale.readthedocs.io/en/latest/modules/lale.lib.aif360.util.htmllale.lib.aif360.util.r2_and_disparate_impact), both defined by the open source [LALE package](https://lale.readthedocs.io/en/latest/index.html).
-
-Combined scorers are used in the search and optimization process to return fair and accurate models.
-
-For example, to optimize for bias detection for a classification experiment:
-
-
-
-1. Open Experiment Settings.
-2. On the Predictions page, choose to optimize Accuracy and disparate impact in the experiment.
-3. Rerun the experiment.
-
-
-
-The Accuracy and disparate impact metric creates a combined score for accuracy and fairness for classification experiments. A higher score indicates better performance and fairness measures. If the disparate impact score is between 0.9 and 1.11 (an acceptable level), the accuracy score is returned. Otherwise, a disparate impact value lower than the accuracy score is returned, with a lower (negative) value which indicates a fairness gap.
-
-"
-3F0B3A581945A1C7FE243340843CC4671A4E32C6_12,3F0B3A581945A1C7FE243340843CC4671A4E32C6,"Note:Advanced users can use a [notebook to apply or review fairness detection methods](https://github.com/IBM/watson-machine-learning-samples/blob/master/cloud/notebooks/python_sdk/experiments/autoai/Use%20AutoAI%20to%20train%20fair%20models.ipynb). You can further refine a trained AutoAI model by using third-party packages like: [lale, AIF360](https://lale.readthedocs.io/en/latest/modules/lale.lib.aif360.htmlmodule-lale.lib.aif360) to extend the fairness and bias detection capabilities beyond what is provided with AutoAI by default.
-
-Review a [sample notebook that evaluates pipelines for fairness](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html).
-
-Read this [Medium blog post on Bias detection in AutoAI](https://lukasz-cmielowski.medium.com/bias-detection-and-mitigation-in-ibm-autoai-406db0e19181).
-
-"
-3F0B3A581945A1C7FE243340843CC4671A4E32C6_13,3F0B3A581945A1C7FE243340843CC4671A4E32C6," Next steps
-
-[Troubleshooting AutoAI experiments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-troubleshoot.html)
-
-Parent topic: [AutoAI overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html)
-"
-5042FBFB0C15AEDED02FF805C4869AC838910C7A_0,5042FBFB0C15AEDED02FF805C4869AC838910C7A," AutoAI glossary
-
-Learn terms and concepts that are used in AutoAI for building and deploying machine learning models.
-
-"
-5042FBFB0C15AEDED02FF805C4869AC838910C7A_1,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"aggregate score
-The aggregation of the four anomaly types: level shift, trend, localized extreme, variance. A higher score indicates a stronger score.
-
-"
-5042FBFB0C15AEDED02FF805C4869AC838910C7A_2,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"algorithm
-A formula applied to data to determine optimal ways to solve analytical problems.
-
-"
-5042FBFB0C15AEDED02FF805C4869AC838910C7A_3,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"anomaly prediction
-An AutoAI time-series model that can predict anomalies, or unexpected results, against new data.
-
-"
-5042FBFB0C15AEDED02FF805C4869AC838910C7A_4,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"AutoAI experiment
-An automated training process that considers a series of training definitions and parameters to create a set of ranked pipelines as model candidates.
-
-"
-5042FBFB0C15AEDED02FF805C4869AC838910C7A_5,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"batch employment
-Processes input data from a file, data connection, or connected data in a storage bucket and writes the output to a selected destination.
-
-"
-5042FBFB0C15AEDED02FF805C4869AC838910C7A_6,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"bias detection (machine learning)
-To identify imbalances in the training data or prediction behavior of the model.
-
-"
-5042FBFB0C15AEDED02FF805C4869AC838910C7A_7,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"binary classification
-A classification model with two classes and only assigns samples into one of the two classes.
-
-"
-5042FBFB0C15AEDED02FF805C4869AC838910C7A_8,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"classification model
-A predictive model that predicts data in distinct categories.
-
-"
-5042FBFB0C15AEDED02FF805C4869AC838910C7A_9,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"confusion matrix
-A performance measurement that determines the accuracy between a model’s positive and negative predicted outcomes to positive and negative actual outcomes.
-
-"
-5042FBFB0C15AEDED02FF805C4869AC838910C7A_10,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"cross validation
-A technique that tests the effectiveness of machine learning models. It is also used as a resampling procedure for models with limited data.
-
-"
-5042FBFB0C15AEDED02FF805C4869AC838910C7A_11,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"data imputation
-Substituting missing values in a data set with estimated values.
-
-"
-5042FBFB0C15AEDED02FF805C4869AC838910C7A_12,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"exogenous features
-Features that can influence the prediction model but cannot be influenced in return. See also: Supporting features
-
-"
-5042FBFB0C15AEDED02FF805C4869AC838910C7A_13,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"fairness
-Determines whether a model produces biased outcomes that favor a monitored group over a reference group. Fairness evaluations detect if the model shows a tendency to provide a favorable or preferable outcome more often for one group over another. Typical categories to monitor are age, sex, and race.
-
-"
-5042FBFB0C15AEDED02FF805C4869AC838910C7A_14,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"feature correlation
-The relationship between two features. For example, postal code might have a strong correlation with income in some models.
-
-"
-5042FBFB0C15AEDED02FF805C4869AC838910C7A_15,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"feature encoding
-Transforming categorical values into numerical values.
-
-"
-5042FBFB0C15AEDED02FF805C4869AC838910C7A_16,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"feature importance
-The relative impact a particular column or feature has on the model's prediction or forecast.
-
-"
-5042FBFB0C15AEDED02FF805C4869AC838910C7A_17,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"feature scaling
-Normalizing the range of independent variables or features in a data set.
-
-"
-5042FBFB0C15AEDED02FF805C4869AC838910C7A_18,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"feature selection
-Identifying the columns of data that best support an accurate prediction or score.
-
-"
-5042FBFB0C15AEDED02FF805C4869AC838910C7A_19,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"feature transformation
-In AutoAI, a phase of pipeline creation that applies algorithms to transform and optimize the training data to achieve the best outcome for the model type.
-
-"
-5042FBFB0C15AEDED02FF805C4869AC838910C7A_20,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"holdout data
-Data used to test or validate the model's performance. Holdout data can be a reserved portion of the training data, or it can be a separate file.
-
-"
-5042FBFB0C15AEDED02FF805C4869AC838910C7A_21,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"hyperparameter optimization (HPO)
-The process for setting hyperparameter values to the settings that provide the most accurate model.
-
-"
-5042FBFB0C15AEDED02FF805C4869AC838910C7A_22,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"incremental learning
-The process of training a model that uses data that is continually updated without forgetting data that is obtained from the preceding tasks.
-
-"
-5042FBFB0C15AEDED02FF805C4869AC838910C7A_23,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"large tabular data
-Structured data that exceeds the limit on standard processing and must be processed in batches. See incremental learning.
-
-"
-5042FBFB0C15AEDED02FF805C4869AC838910C7A_24,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"labeled data
-Data that is labeled to identify the appropriate data vectors to be pulled in for model training.
-
-"
-5042FBFB0C15AEDED02FF805C4869AC838910C7A_25,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"monitored group
-A class of data monitored to determine whether the results differ significantly from the results of the reference group. For example, in a credit app, you might monitor applications in a particular age range and compare results to the age range more likely to recieve a positive outcome to evaluate whether there might be bias in the results.
-
-"
-5042FBFB0C15AEDED02FF805C4869AC838910C7A_26,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"multiclass classification model
-A classification task with more than two classes. For example, where a binary classification model predicts yes or no values, a multi-class model predicts yes, no, maybe, or not applicable.
-
-"
-5042FBFB0C15AEDED02FF805C4869AC838910C7A_27,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"multivariate time series
-Time series experiment that contains two or more changing variables. For example, a time series model that forecasts the electricity usage of three clients.
-
-"
-5042FBFB0C15AEDED02FF805C4869AC838910C7A_28,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"optimized metric
-The metric used to measure the performance of the model. For example, accuracy is the typical metric that is used to measure the performance of a binary classification model.
-
-"
-5042FBFB0C15AEDED02FF805C4869AC838910C7A_29,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"pipeline (model candidate pipeline)
-End-to-end outline that illustrates the steos in a workflow.
-
-"
-5042FBFB0C15AEDED02FF805C4869AC838910C7A_30,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"positive class
-The class that is related to your objective function.
-
-"
-5042FBFB0C15AEDED02FF805C4869AC838910C7A_31,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"reference group
-A group that you identify as most likely to receive a positive result in a predictive model. You can then compare the results to a monitored group to look for potential bias in outcomes.
-
-"
-5042FBFB0C15AEDED02FF805C4869AC838910C7A_32,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"regression model
-A model that relates a dependent variable to one or more independent variable.
-
-"
-5042FBFB0C15AEDED02FF805C4869AC838910C7A_33,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"scoring
-In machine learning, the process of measuring the confidence of a predicted outcome.
-
-"
-5042FBFB0C15AEDED02FF805C4869AC838910C7A_34,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"supporting features
-Input features that can influence the prediction target. See also: Exogenus features
-
-"
-5042FBFB0C15AEDED02FF805C4869AC838910C7A_35,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"text classification
-A model that automatically identifies and classifies text into distinct categories.
-
-"
-5042FBFB0C15AEDED02FF805C4869AC838910C7A_36,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"time series model (AutoAI)
-A model that tracks data over time.
-
-"
-5042FBFB0C15AEDED02FF805C4869AC838910C7A_37,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"trained model
-A model that is ready to be deployed.
-
-"
-5042FBFB0C15AEDED02FF805C4869AC838910C7A_38,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"training
-The initial stage of model building, involving a subset of the source data. The model can then be tested against a further, different subset for which the outcome is already known.
-
-"
-5042FBFB0C15AEDED02FF805C4869AC838910C7A_39,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"training data
-Data used to teach and train a model's learning algorithm.
-
-"
-5042FBFB0C15AEDED02FF805C4869AC838910C7A_40,5042FBFB0C15AEDED02FF805C4869AC838910C7A,"univariate time series
-Time series experiment that contains only one changing variable. For example, a time series model that forecasts the temperature has a single prediction column of the temperature.
-"
-73F96A06142EE17A6C55E5700580F33250552A00_0,73F96A06142EE17A6C55E5700580F33250552A00," Data imputation in AutoAI experiments
-
-Data imputation is the means of replacing missing values in your data set with substituted values. If you enable imputation, you can specify how missing values are interpolated in your data.
-
-"
-73F96A06142EE17A6C55E5700580F33250552A00_1,73F96A06142EE17A6C55E5700580F33250552A00," Imputation by experiment type
-
-Imputation methods depend on the type of experiment that you build.
-
-
-
-* For classification and regression you can configure categorical and numerical imputation methods.
-* For timeseries problems, you can choose from a set of imputation methods to apply to numerical columns. When the experiment runs, the best performing method from the set is applied automatically. You can also specify a specific value as a replacement value.
-
-
-
-"
-73F96A06142EE17A6C55E5700580F33250552A00_2,73F96A06142EE17A6C55E5700580F33250552A00," Enabling imputation
-
-To view and set imputation options:
-
-
-
-1. Click Experiment settings when you configure your experiment.
-2. Click the Data source option.
-3. Click Enable data imputation. Note that if you do not explicitly enable data imputation but your data source has missing values, AutoAI warns you and applies default imputation methods. See [imputation details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-data-imp-details.html).
-4. Select options in the Imputation section.
-5. Optionally set a threshold for the percentage of imputation acceptable for a column of data. If the percentage of missing values exceeds the specified threshold, the experiment fails. To resolve, update the data source or adjust the threshold.
-
-
-
-"
-73F96A06142EE17A6C55E5700580F33250552A00_3,73F96A06142EE17A6C55E5700580F33250552A00," Configuring imputation for classification and regression experiments
-
-Choose one of these methods for imputing missing data in binary classification, multiclass classification, or regression experiments. Note that you can have one method for completing values for text-based (categorical) data and another for numerical data.
-
-
-
- Method Description
-
- Most frequent Replace missing value with the value that appears most frequently in the column.
- Median Replace missing value with the value in the middle of the sorted column.
- Mean Replace missing value with the average value for the column.
-
-
-
-"
-73F96A06142EE17A6C55E5700580F33250552A00_4,73F96A06142EE17A6C55E5700580F33250552A00," Configuring imputation for timeseries experiments
-
-Choose some or all of these methods. When multiple methods are selected, the best-performing method is automatically applied for the experiment.
-
-Note: Imputation is not supported for date or time values.
-
-
-
- Method Description
-
- Cubic Uses cubic interpolation by using pandas/scipy method to fill missing values.
- Fill Choose value as the type to replace the missing values with a numeric value you specify.
- Flatten iterative Data is first flattened and then the Scikit-learn iterative imputer is applied to find missing values.
- Linear Use linear interpolation by using pandas/scipy method to fill missing values.
- Next Replace missing value with the next value.
- Previous Replace missing value with the previous value.
-
-
-
-"
-73F96A06142EE17A6C55E5700580F33250552A00_5,73F96A06142EE17A6C55E5700580F33250552A00," Next steps
-
-[Data imputation implementation details for time series experiments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-data-imp-details.html)
-
-Parent topic:[AutoAI overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html)
-"
-83CD92CDB99DB6263492FAD998E932F50F0F8E99_0,83CD92CDB99DB6263492FAD998E932F50F0F8E99," AutoAI libraries for Python
-
-The autoai-lib library for Python contains a set of functions that help you to interact with IBM Watson Machine Learning AutoAI experiments. Using the autoai-lib library, you can review and edit the data transformations that take place in the creation of the pipeline. Similarly, you can use the autoai-ts-libs library to interact with pipeline notebooks for time series experiments.
-
-"
-83CD92CDB99DB6263492FAD998E932F50F0F8E99_1,83CD92CDB99DB6263492FAD998E932F50F0F8E99," Installing autoai-lib or autoai-ts-libs for Python
-
-Follow the instructions in [Installing custom libraries](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/install-cust-lib.html) to install autoai-lib or autoai-ts-libs.
-
-"
-83CD92CDB99DB6263492FAD998E932F50F0F8E99_2,83CD92CDB99DB6263492FAD998E932F50F0F8E99," Using autoai-lib and autoai-ts-libs for Python
-
-The autoai-lib and autoai-ts-libs library for Python contain functions that help you to interact with IBM Watson Machine Learning AutoAI experiments. Using the autoai-lib library, you can review and edit the data transformations that take place in the creation of classification and regression pipelines. Using the autoai-ts-libs library, you can review the data transformations that take place in the creation of time series (forecast) pipelines.
-
-"
-83CD92CDB99DB6263492FAD998E932F50F0F8E99_3,83CD92CDB99DB6263492FAD998E932F50F0F8E99," Installing autoai-lib and autoai-ts-libs for Python
-
-Follow the instructions in [Installing custom libraries](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/install-cust-lib.html) to install [autoai-lib](https://pypi.org/project/autoai-libs/) and [autoai-ts-libs](https://pypi.org/project/autoai-ts-libs/).
-
-"
-83CD92CDB99DB6263492FAD998E932F50F0F8E99_4,83CD92CDB99DB6263492FAD998E932F50F0F8E99," The autoai-lib functions
-
-The instantiated project object that is created after you import the autoai-lib library exposes these functions:
-
-"
-83CD92CDB99DB6263492FAD998E932F50F0F8E99_5,83CD92CDB99DB6263492FAD998E932F50F0F8E99," autoai_libs.transformers.exportable.NumpyColumnSelector()
-
-Selects a subset of columns of a numpy array
-
-Usage:
-
-autoai_libs.transformers.exportable.NumpyColumnSelector(columns=None)
-
-
-
- Option Description
-
- columns list of column indexes to select
-
-
-
-"
-83CD92CDB99DB6263492FAD998E932F50F0F8E99_6,83CD92CDB99DB6263492FAD998E932F50F0F8E99," autoai_libs.transformers.exportable.CompressStrings()
-
-Removes spaces and special characters from string columns of an input numpy array X.
-
-Usage:
-
-autoai_libs.transformers.exportable.CompressStrings(compress_type='string', dtypes_list=None, misslist_list=None, missing_values_reference_list=None, activate_flag=True)
-
-
-
- Option Description
-
- compress_type type of string compression. 'string' for removing spaces from a string and 'hash' for creating an int hash. Default is 'string'. 'hash' is used for columns with strings and cat_imp_strategy='most_frequent'
- dtypes_list list containing strings that denote the type of each column of the input numpy array X (strings are among 'char_str','int_str','float_str','float_num', 'float_int_num','int_num','Boolean','Unknown'). If None, the column types are discovered. Default is None.
- misslist_list list contains lists of missing values of each column of the input numpy array X. If None, the missing values of each column are discovered. Default is None.
- missing_values_reference_list reference list of missing values in the input numpy array X
- activate_flag flag that indicates that this transformer is active. If False, transform(X) outputs the input numpy array X unmodified.
-
-
-
-"
-83CD92CDB99DB6263492FAD998E932F50F0F8E99_7,83CD92CDB99DB6263492FAD998E932F50F0F8E99," autoai_libs.transformers.exportable.NumpyReplaceMissingValues()
-
-Given a numpy array and a reference list of missing values for it, replaces missing values with a special value (typically a special missing value such as np.nan).
-
-Usage:
-
-autoai_libs.transformers.exportable.NumpyReplaceMissingValues(missing_values, filling_values=np.nan)
-
-
-
- Option Description
-
- missing_values reference list of missing values
- filling_values special value that is assigned to unknown values
-
-
-
-"
-83CD92CDB99DB6263492FAD998E932F50F0F8E99_8,83CD92CDB99DB6263492FAD998E932F50F0F8E99," autoai_libs.transformers.exportable.NumpyReplaceUnknownValues()
-
-Given a numpy array and a reference list of known values for each column, replaces values that are not part of a reference list with a special value (typically np.nan). This method is typically used to remove labels for columns in a test data set that has not been seen in the corresponding columns of the training data set.
-
-Usage:
-
-autoai_libs.transformers.exportable.NumpyReplaceUnknownValues(known_values_list=None, filling_values=None, missing_values_reference_list=None)
-
-
-
- Option Description
-
- known_values_list reference list of lists of known values for each column
- filling_values special value that is assigned to unknown values
- missing_values_reference_list reference list of missing values
-
-
-
-"
-83CD92CDB99DB6263492FAD998E932F50F0F8E99_9,83CD92CDB99DB6263492FAD998E932F50F0F8E99," autoai_libs.transformers.exportable.boolean2float()
-
-Converts a 1-D numpy array of strings that represent booleans to floats and replaces missing values with np.nan. Also changes type of array from 'object' to 'float'.
-
-Usage:
-
-autoai_libs.transformers.exportable.boolean2float(activate_flag=True)
-
-
-
- Option Description
-
- activate_flag flag that indicates that this transformer is active. If False, transform(X) outputs the input numpy array X unmodified.
-
-
-
-"
-83CD92CDB99DB6263492FAD998E932F50F0F8E99_10,83CD92CDB99DB6263492FAD998E932F50F0F8E99," autoai_libs.transformers.exportable.CatImputer()
-
-This transformer is a wrapper for categorical imputer. Internally it currently uses sklearn SimpleImputer]([https://scikit-learn.org/stable/modules/generated/sklearn.impute.SimpleImputer.html](https://scikit-learn.org/stable/modules/generated/sklearn.impute.SimpleImputer.html))
-
-Usage:
-
-autoai_libs.transformers.exportable.CatImputer(strategy, missing_values, sklearn_version_family=global_sklearn_version_family, activate_flag=True)
-
-
-
- Option Description
-
- strategy string, optional, default=”mean”. The imputation strategy for missing values. -mean: replace by using the mean along each column. Can be used only with numeric data. - median:replace by using the median along each column. Can only be used with numeric data. - most_frequent:replace by using most frequent value each column. Used with strings or numeric data. - constant:replace with fill_value. Can be used with strings or numeric data.
- missing_values number, string, np.nan (default) or None. The placeholder for the missing values. All occurrences of missing_values are imputed.
- sklearn_version_family str indicating the sklearn version for backward compatibiity with versions 019, and 020dev. Currently unused. Default is None.
- activate_flag flag that indicates that this transformer is active. If False, transform(X) outputs the input numpy array X unmodified.
-
-
-
-"
-83CD92CDB99DB6263492FAD998E932F50F0F8E99_11,83CD92CDB99DB6263492FAD998E932F50F0F8E99," autoai_libs.transformers.exportable.CatEncoder()
-
-This method is a wrapper for categorical encoder. If encoding parameter is 'ordinal', internally it currently uses sklearn [OrdinalEncoder](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OrdinalEncoder.html?highlight=ordinalencoder). If encoding parameter is 'onehot', or 'onehot-dense' internally it uses sklearn [OneHotEncoder](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.htmlsklearn.preprocessing.OneHotEncoder)
-
-Usage:
-
-autoai_libs.transformers.exportable.CatEncoder(encoding, categories, dtype, handle_unknown, sklearn_version_family=global_sklearn_version_family, activate_flag=True)
-
-
-
- Option Description
-
- encoding str, 'onehot', 'onehot-dense' or 'ordinal'. The type of encoding to use (default is 'ordinal') 'onehot': encode the features by using a one-hot aka one-of-K scheme (or also called 'dummy' encoding). This encoding creates a binary column for each category and returns a sparse matrix. 'onehot-dense': the same as 'onehot' but returns a dense array instead of a sparse matrix. 'ordinal': encode the features as ordinal integers. The result is a single column of integers (0 to n_categories - 1) per feature.
-"
-83CD92CDB99DB6263492FAD998E932F50F0F8E99_12,83CD92CDB99DB6263492FAD998E932F50F0F8E99," categories 'auto' or a list of lists/arrays of values. Categories (unique values) per feature: 'auto' : Determine categories automatically from the training data. list : categories[i] holds the categories that are expected in the ith column. The passed categories must be sorted and can not mix strings and numeric values. The used categories can be found in the encoder.categories_ attribute.
- dtype number type, default np.float64 Desired dtype of output.
- handle_unknown 'error' (default) or 'ignore'. Whether to raise an error or ignore if a unknown categorical feature is present during transform (default is to raise). When this parameter is set to 'ignore' and an unknown category is encountered during transform, the resulting one-hot encoded columns for this feature are all zeros. In the inverse transform, an unknown category are denoted as None. Ignoring unknown categories is not supported for encoding='ordinal'.
- sklearn_version_family str indicating the sklearn version for backward compatibiity with versions 019, and 020dev. Currently unused. Default is None.
- activate_flag flag that indicates that this transformer are active. If False, transform(X) outputs the input numpy array X unmodified.
-
-
-
-"
-83CD92CDB99DB6263492FAD998E932F50F0F8E99_13,83CD92CDB99DB6263492FAD998E932F50F0F8E99," autoai_libs.transformers.exportable.float32_transform()
-
-Transforms a float64 numpy array to float32.
-
-Usage:
-
-autoai_libs.transformers.exportable.float32_transform(activate_flag=True)
-
-
-
- Option Description
-
- activate_flag flag that indicates that this transformer is active. If False, transform(X) outputs the input numpy array X unmodified.
-
-
-
-"
-83CD92CDB99DB6263492FAD998E932F50F0F8E99_14,83CD92CDB99DB6263492FAD998E932F50F0F8E99," autoai_libs.transformers.exportable.FloatStr2Float()
-
-Given numpy array X and dtypes_list that denotes the types of its columns, it replaces columns of strings that represent floats (type 'float_str' in dtypes_list) to columns of floats and replaces their missing values with np.nan.
-
-Usage:
-
-autoai_libs.transformers.exportable.FloatStr2Float(dtypes_list, missing_values_reference_list=None, activate_flag=True)
-
-
-
- Option Description
-
- dtypes_list list contains strings that denote the type of each column of the input numpy array X (strings are among 'char_str','int_str','float_str','float_num', 'float_int_num','int_num','Boolean','Unknown').
- missing_values_reference_list reference list of missing values
- activate_flag flag that indicates that this transformer is active. If False, transform(X) outputs the input numpy array X unmodified.
-
-
-
-"
-83CD92CDB99DB6263492FAD998E932F50F0F8E99_15,83CD92CDB99DB6263492FAD998E932F50F0F8E99," autoai_libs.transformers.exportable.NumImputer()
-
-This method is a wrapper for numerical imputer.
-
-Usage:
-
-autoai_libs.transformers.exportable.NumImputer(strategy, missing_values, activate_flag=True)
-
-
-
- Option Description
-
- strategy num_imp_strategy: string, optional (default=”mean”). The imputation strategy: - If “mean”, then replace missing values by using the mean along the axis. - If “median”, then replace missing values by using the median along the axis. - If “most_frequent”, then replace missing by using the most frequent value along the axis.
- missing_values integer or “NaN”, optional (default=”NaN”). The placeholder for the missing values. All occurrences of missing_values are imputed: - For missing values encoded as np.nan, use the string value “NaN”. - activate_flag: flag that indicates that this transformer is active. If False, transform(X) outputs the input numpy array X unmodified.
-
-
-
-"
-83CD92CDB99DB6263492FAD998E932F50F0F8E99_16,83CD92CDB99DB6263492FAD998E932F50F0F8E99," autoai_libs.transformers.exportable.OptStandardScaler()
-
-This parameter is a wrapper for scaling of numerical variables. It currently uses sklearn [StandardScaler](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html) internally.
-
-Usage:
-
-autoai_libs.transformers.exportable.OptStandardScaler(use_scaler_flag=True, num_scaler_copy=True, num_scaler_with_mean=True, num_scaler_with_std=True)
-
-
-
- Option Description
-
- num_scaler_copy Boolean, optional, default True. If False, try to avoid a copy and do in-place scaling instead. This action is not guaranteed to always work. With in-place, for example, if the data is not a NumPy array or scipy.sparse CSR matrix, a copy might still be returned.
- num_scaler_with_mean Boolean, True by default. If True, center the data before scaling. An exception is raised when attempted on sparse matrices because centering them entails building a dense matrix, which in common use cases is likely to be too large to fit in memory.
- num_scaler_with_std Boolean, True by default. If True, scale the data to unit variance (or equivalently, unit standard deviation).
- use_scaler_flag Boolean, flag that indicates that this transformer is active. If False, transform(X) outputs the input numpy array X unmodified. Default is True.
-
-
-
-"
-83CD92CDB99DB6263492FAD998E932F50F0F8E99_17,83CD92CDB99DB6263492FAD998E932F50F0F8E99," autoai_libs.transformers.exportable.NumpyPermuteArray()
-
-Rearranges columns or rows of a numpy array based on a list of indexes.
-
-Usage:
-
-autoai_libs.transformers.exportable.NumpyPermuteArray(permutation_indices=None, axis=None)
-
-
-
- Option Description
-
- permutation_indices list of indexes based on which columns are rearranged
- axis 0 permute along columns. 1 permute along rows.
-
-
-
-"
-83CD92CDB99DB6263492FAD998E932F50F0F8E99_18,83CD92CDB99DB6263492FAD998E932F50F0F8E99," Feature transformation
-
-These methods apply to the feature transformations described in [AutoAI implementation details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html).
-
-"
-83CD92CDB99DB6263492FAD998E932F50F0F8E99_19,83CD92CDB99DB6263492FAD998E932F50F0F8E99," autoai_libs.cognito.transforms.transform_utils.TA1(fun, name=None, datatypes=None, feat_constraints=None, tgraph=None, apply_all=True, col_names=None, col_dtypes=None)
-
-For unary stateless functions, such as square or log, use TA1.
-
-Usage:
-
-autoai_libs.cognito.transforms.transform_utils.TA1(fun, name=None, datatypes=None, feat_constraints=None, tgraph=None, apply_all=True, col_names=None, col_dtypes=None)
-
-
-
- Option Description
-
- fun the function pointer
- name a string name that uniquely identifies this transformer from others
- datatypes a list of datatypes either of which are valid input to the transformer function (numeric, float, int, and so on)
- feat_constraints all constraints, which must be satisfied by a column to be considered a valid input to this transform
- tgraph tgraph object must be the starting TGraph( ) object. This parameter is optional and you can pass None, but that can result in some failure to detect some inefficiencies due to lack of caching
- apply_all only use applyAll = True. It means that the transformer enumerates all features (or feature sets) that match the specified criteria and apply the provided function to each.
- col_names names of the feature columns in a list
- col_dtypes list of the datatypes of the feature columns
-
-
-
-"
-83CD92CDB99DB6263492FAD998E932F50F0F8E99_20,83CD92CDB99DB6263492FAD998E932F50F0F8E99," autoai_libs.cognito.transforms.transform_utils.TA2()
-
-For binary stateless functions, such as sum, product, use TA2.
-
-Usage:
-
-autoai_libs.cognito.transforms.transform_utils.TA2(fun, name, datatypes1, feat_constraints1, datatypes2, feat_constraints2, tgraph=None, apply_all=True, col_names=None, col_dtypes=None)
-
-
-
- Option Description
-
- fun the function pointer
- name: a string name that uniquely identifies this transformer from others
- datatypes1 a list of datatypes either of which are valid inputs (first parameter) to the transformer function (numeric, float, int, and so on)
- feat_constraints1 all constraints, which must be satisfied by a column to be considered a valid input (first parameter) to this transform
- datatypes2 a list of data types either of which are valid inputs (second parameter) to the transformer function (numeric, float, int, and so on)
- feat_constraints2 all constraints, which must be satisfied by a column to be considered a valid input (second parameter) to this transform
- tgraph tgraph object must be the invoking TGraph( ) object. Note this parameter is optional and you can pass None, but that results in some missing inefficiencies due to lack of caching
- apply_all only use applyAll = True. It means that the transformer enumerates all features (or feature sets) that match the specified criteria and apply the provided function to each.
- col_names names of the feature columns in a list
- col_dtypes list of the data types of the feature columns
-
-
-
-"
-83CD92CDB99DB6263492FAD998E932F50F0F8E99_21,83CD92CDB99DB6263492FAD998E932F50F0F8E99," autoai_libs.cognito.transforms.transform_utils.TB1()
-
-For unary state-based transformations (with fit/transform) use, such as frequent count.
-
-Usage:
-
-autoai_libs.cognito.transforms.transform_utils.TB1(tans_class, name, datatypes, feat_constraints, tgraph=None, apply_all=True, col_names=None, col_dtypes=None)
-
-
-
- Option Description
-
- tans_class a class that implements fit( ) and transform( ) in accordance with the transformation function definition
- name a string name that uniquely identifies this transformer from others
- datatypes list of datatypes either of which are valid input to the transformer function (numeric, float, int, and so on)
- feat_constraints all constraints, which must be satisfied by a column to be considered a valid input to this transform
- tgraph tgraph object must be the invoking TGraph( ) object. Note that this is optional and you might pass None, but that results in some missing inefficiencies due to lack of caching
- apply_all only use applyAll = True. It means that the transformer enumerates all features (or feature sets) that match the specified criteria and apply the provided function to each.
- col_names names of the feature columns in a list.
- col_dtypes list of the data types of the feature columns.
-
-
-
-"
-83CD92CDB99DB6263492FAD998E932F50F0F8E99_22,83CD92CDB99DB6263492FAD998E932F50F0F8E99," autoai_libs.cognito.transforms.transform_utils.TB2()
-
-For binary state-based transformations (with fit/transform) use, such as group-by.
-
-Usage:
-
-autoai_libs.cognito.transforms.transform_utils.TB2(tans_class, name, datatypes1, feat_constraints1, datatypes2, feat_constraints2, tgraph=None, apply_all=True)
-
-
-
- Option Description
-
- tans_class a class that implements fit( ) and transform( ) in accordance with the transformation function definition
- name a string name that uniquely identifies this transformer from others
- datatypes1 a list of data types either of which are valid inputs (first parameter) to the transformer function (numeric, float, int, and so on)
- feat_constraints1 all constraints, which must be satisfied by a column to be considered a valid input (first parameter) to this transform
- datatypes2 a list of data types either of which are valid inputs (second parameter) to the transformer function (numeric, float, int, and so on)
- feat_constraints2 all constraints, which must be satisfied by a column to be considered a valid input (second parameter) to this transform
- tgraph tgraph object must be the invoking TGraph( ) object. This parameter is optional and you might pass None, but that results in some missing inefficiencies due to lack of caching
- apply_all only use applyAll = True. It means that the transformer enumerates all features (or feature sets) that match the specified criteria and apply the provided function to each.
-
-
-
-"
-83CD92CDB99DB6263492FAD998E932F50F0F8E99_23,83CD92CDB99DB6263492FAD998E932F50F0F8E99," autoai_libs.cognito.transforms.transform_utils.TAM()
-
-For a transform that applies at the data level, such as PCA, use TAM.
-
-Usage:
-
-autoai_libs.cognito.transforms.transform_utils.TAM(tans_class, name, tgraph=None, apply_all=True, col_names=None, col_dtypes=None)
-
-
-
- Option Description
-
- tans_class a class that implements fit( ) and transform( ) in accordance with the transformation function definition
- name a string name that uniquely identifies this transformer from others
- tgraph tgraph object must be the invoking TGraph( ) object. This parameter is optional and you can pass None, but that results in some missing inefficiencies due to lack of caching
- apply_all only use applyAll = True. It means that the transformer enumerates all features (or feature sets) that match the specified criteria and apply the provided function to each.
- col_names names of the feature columns in a list
- col_dtypes list of the datatypes of the feature columns
-
-
-
-"
-83CD92CDB99DB6263492FAD998E932F50F0F8E99_24,83CD92CDB99DB6263492FAD998E932F50F0F8E99," autoai_libs.cognito.transforms.transform_utils.TGen()
-
-TGen is a general wrapper and can be used for most functions (might not be most efficient though).
-
-Usage:
-
-autoai_libs.cognito.transforms.transform_utils.TGen(fun, name, arg_count, datatypes_list, feat_constraints_list, tgraph=None, apply_all=True, col_names=None, col_dtypes=None)
-
-
-
- Option Description
-
- fun the function pointer
- name a string name that uniquely identifies this transformer from others
- arg_count number of inputs to the function, in this example it is 1, for binary, it is 2, and so on
- datatypes_list a list of arg_count lists that correspond to the acceptable input data types for each parameter. In the previous example, since `arg_count=1``, the result is one list within the outer list, and it contains a single type called 'numeric'. In another case, it might be a specific case 'int' or even more specific 'int64'.
- feat_constraints_list a list of arg_count lists that correspond to some constraints that can be imposed on selection of the input features
- tgraph tgraph object must be the invoking TGraph( ) object. Note this parameter is optional and you can pass None, but that results in some missing inefficiencies due to lack of caching
- apply_all only use applyAll = True. It means that the transformer enumerates all features (or feature sets) that match the specified criteria and apply the provided function to each.
- col_names names of the feature columns in a list
- col_dtypes list of the data types of the feature columns
-
-
-
-"
-83CD92CDB99DB6263492FAD998E932F50F0F8E99_25,83CD92CDB99DB6263492FAD998E932F50F0F8E99," autoai_libs.cognito.transforms.transform_utils.FS1()
-
-Feature selection, type 1 (using pairwise correlation between each feature and target.)
-
-Usage:
-
-autoai_libs.cognito.transforms.transform_utils.FS1(cols_ids_must_keep, additional_col_count_to_keep, ptype)
-
-
-
- Option Description
-
- cols_ids_must_keep serial numbers of the columns that must be kept irrespective of their feature importance
- additional_col_count_to_keep how many columns need to be retained
- ptype classification or regression
-
-
-
-"
-83CD92CDB99DB6263492FAD998E932F50F0F8E99_26,83CD92CDB99DB6263492FAD998E932F50F0F8E99," autoai_libs.cognito.transforms.transform_utils.FS2()
-
-Feature selection, type 2.
-
-Usage:
-
-autoai_libs.cognito.transforms.transform_utils.FS2(cols_ids_must_keep, additional_col_count_to_keep, ptype, eval_algo)
-
-
-
- Option Description
-
- cols_ids_must_keep serial numbers of the columns that must be kept irrespective of their feature importance
- additional_col_count_to_keep how many columns need to be retained
- ptype classification or regression
-
-
-
-"
-83CD92CDB99DB6263492FAD998E932F50F0F8E99_27,83CD92CDB99DB6263492FAD998E932F50F0F8E99," The autoai-ts-libs functions
-
-The combination of transformers and estimators are designed and chosen for each pipeline by the AutoAI Time Series system. Changing the transformers or the estimators in the generated pipeline notebook can cause unexpected results or even failure. We do not recommend you change the notebook for generated pipelines, thus we do not currently offer the specification of the functions for the autoai-ts-libs library.
-
-"
-83CD92CDB99DB6263492FAD998E932F50F0F8E99_28,83CD92CDB99DB6263492FAD998E932F50F0F8E99," Learn more
-
-[Selecting an AutoAI model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-view-results.html)
-
-Parent topic:[Saving an AutoAI generated notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-notebook.html)
-"
-07A75B90684D731C6B33FC552585D391E86A2A35_0,07A75B90684D731C6B33FC552585D391E86A2A35," Saving an AutoAI generated notebook
-
-To view the code that created a particular experiment, or interact with the experiment programmatically, you can save an experiment as a notebook. You can also save an individual pipeline as a notebook so that you can review the code that is used in that pipeline.
-
-"
-07A75B90684D731C6B33FC552585D391E86A2A35_1,07A75B90684D731C6B33FC552585D391E86A2A35," Working with AutoAI-generated notebooks
-
-When you save an experiment or a pipeline as notebook, you can:
-
-
-
-* Access the saved notebooks from the Notebooks section on the Assets tab.
-* Review the code to understand the transformations applied to build the model. This increases confidence in the process and contributes to explainable AI practices.
-* Enter your own authentication credentials by using the template provided.
-* Use and run the code within Watson Studio, or download the notebook code to use in another notebook server. No matter where you use the notebook, it automatically installs all required dependencies, including libraries for:
-
-
-
-* xgboost
-* lightgbm
-* scikit-learn
-* autoai-libs
-* ibm-watson-machine-learning
-* snapml
-
-
-
-* View the training data used to train the experiment and the test (holdout) data used to validate the experiment.
-
-
-
-Notes:
-
-
-
-* Auto-generated notebook code excutes successfully as written. Modifying the code or changing the input data can adversely affect the code. If you want to make a significant change, consider retraining the experiment by using AutoAI.
-* For more information on the estimators, or algorithms, and transformers that are applied to your data to train an experiment and create pipelines, refer to [Implementation details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html).
-
-
-
-"
-07A75B90684D731C6B33FC552585D391E86A2A35_2,07A75B90684D731C6B33FC552585D391E86A2A35," Saving an experiment as a notebook
-
-Save all of the code for an experiment to view the transformations and optimizations applied to create the model pipelines.
-
-"
-07A75B90684D731C6B33FC552585D391E86A2A35_3,07A75B90684D731C6B33FC552585D391E86A2A35," What is included with the experiment notebook
-
-The experiment notebook provides annotated code so you can:
-
-
-
-* Interact with trained model pipelines
-* Access model details programmatically (including feature importance and machine learning metrics).
-* Visualize each pipeline as a graph, with each node documented, to provide transparency
-* Compare pipelines
-* Download selected pipelines and test locally
-* Create a deployment and score the model
-* Get the experiment definition or configuration in Python API, which you can use for automation or integration with other applications.
-
-
-
-"
-07A75B90684D731C6B33FC552585D391E86A2A35_4,07A75B90684D731C6B33FC552585D391E86A2A35," Saving the code for an experiment
-
-To save an entire experiment as a notebook:
-
-
-
-1. After the experiment completes, click Save code from the Progress map panel.
-2. Name your notebook, add an optional description, choose a runtime environment, and save.
-3. Click the link in the notification to open the notebook and review the code. You can also open the notebook from the Notebooks section of the Assets tab of your project.
-
-
-
-"
-07A75B90684D731C6B33FC552585D391E86A2A35_5,07A75B90684D731C6B33FC552585D391E86A2A35," Saving an individual pipeline as a notebook
-
-Save an individual pipeline as a notebook so you can review the Scikit-Learn source code for the trained model in a notebook.
-
-Note: Currently, you cannot generate a pipeline notebook for an experiment with joined data sources.
-
-"
-07A75B90684D731C6B33FC552585D391E86A2A35_6,07A75B90684D731C6B33FC552585D391E86A2A35," What is included with the pipeline notebook
-
-The experiment notebook provides annotated code that you can use to complete these tasks:
-
-
-
-* View the Scikit-learn pipeline definition
-* See the transformations applied for pipeline training
-* Review the pipeline evaluation
-
-
-
-"
-07A75B90684D731C6B33FC552585D391E86A2A35_7,07A75B90684D731C6B33FC552585D391E86A2A35," Saving a pipeline as a notebook
-
-To save a pipeline as a notebook:
-
-
-
-1. Complete your AutoAI experiment.
-2. Select the pipeline that you want to save in the leaderboard, and click Save from the action menu for the pipeline, then Save as notebook.
-3. Name your notebook, add an optional description, choose a runtime environment, and save.
-4. Click the link in the notification to open the notebook and review the code. You can also open the notebook from the Notebooks section of the Assets tab.
-
-
-
-"
-07A75B90684D731C6B33FC552585D391E86A2A35_8,07A75B90684D731C6B33FC552585D391E86A2A35," Create sample notebooks
-
-To see for yourself what AutoAI-generated notebooks look like:
-
-
-
-1. Follow the steps in [AutoAI tutorial](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai_example_binary_classifier.html) to create a binary classification experiment from sample data.
-2. After the experiment runs, click Save code in the experiment details panel.
-3. Name and save the experiment notebook.
-4. To save a pipeline as a model, select a pipeline from the leaderboard, then click Save and Save as notebook.
-5. Name and save the pipeline notebook.
-6. From Assets tab, open the resulting notebooks in the notebook editor and review the code.
-
-
-
-"
-07A75B90684D731C6B33FC552585D391E86A2A35_9,07A75B90684D731C6B33FC552585D391E86A2A35," Additional resources
-
-
-
-* For details on the methods used in the code, see [Using AutoAI libraris with Python](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-lib-python.html).
-* For more information on AutoAI notebooks, see this [blog post](https://lukasz-cmielowski.medium.com/watson-autoai-can-i-get-the-model-88a0fbae128a).
-
-
-
-"
-07A75B90684D731C6B33FC552585D391E86A2A35_10,07A75B90684D731C6B33FC552585D391E86A2A35," Next steps
-
-[Using autoai-lib for Python](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-lib-python.html)
-
-Parent topic:[AutoAI overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html)
-"
-91EEB0303C78EC7EAA6DAB7921E7173C68FF7769_0,91EEB0303C78EC7EAA6DAB7921E7173C68FF7769," AutoAI Overview
-
-The AutoAI graphical tool analyzes your data and uses data algorithms, transformations, and parameter settings to create the best predictive model. AutoAI displays various potential models as model candidate pipelines and rank them on a leaderboard for you to choose from.
-
-Data format : Tabular: CSV files, with comma (,) delimiter for all types of AutoAI experiments. : Connected data from [IBM Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html).
-
-Note:You can use a data asset that is saved as a Feature Group (beta) but the metadata is not used to populate the AutoAI experiment settings.
-
-Data size : Up to 1 GB or up to 20 GB. For details, refer to [AutoAI data use](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html?context=cdpaas&locale=enautoai-data-use).
-
-"
-91EEB0303C78EC7EAA6DAB7921E7173C68FF7769_1,91EEB0303C78EC7EAA6DAB7921E7173C68FF7769," AutoAI data use
-
-These limits are based on the default compute configuration of 8 CPU and 32 GB.
-
-AutoAI classification and regression experiments:
-
-
-
-* You can upload a file up to 1 GB for AutoAI experiments.
-* If you connect to a data source that exceeds 1 GB, only the first 1 GB of records is used.
-
-
-
-AutoAI time series experiments:
-
-
-
-* If the data source contains a timestamp column, AutoAI samples the data at a uniform frequency. For example, data can be in increments of one minute, one hour, or one day. The specified timestamp is used to determine the lookback window to improve the model accuracy.
-
-Note:If the file size is larger than 1 GB, AutoAi sorts the data in descending time order and only the first 1 GB is used to train the experiment.
-* If the data source does not contain a timestamp column, ensure AutoAI samples the data at uniform intervals and sorts the data in ascending time order. An ascending sort order means that the value in the first row is the oldest, and the value in the last row is the most recent.
-
-Note: If the file size is larger than 1 GB, truncate the file size so it is smaller than 1 GB.
-
-
-
-"
-91EEB0303C78EC7EAA6DAB7921E7173C68FF7769_2,91EEB0303C78EC7EAA6DAB7921E7173C68FF7769," AutoAI process
-
-Using AutoAI, you can build and deploy a machine learning model with sophisticated training features and no coding. The tool does most of the work for you.
-
-To view the code that created a particular experiment, or interact with the experiment programmatically, you can [save an experiment as a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-notebook.html).
-
-
-
-AutoAI automatically runs the following tasks to build and evaluate candidate model pipelines:
-
-
-
-* [Data pre-processing](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html?context=cdpaas&locale=enpreprocess)
-* [Automated model selection](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html?context=cdpaas&locale=enmodel_selection)
-* [Automated feature engineering](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html?context=cdpaas&locale=enfeature_engineering)
-* [Hyperparameter optimization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html?context=cdpaas&locale=enhpo_optimization)
-
-
-
-"
-91EEB0303C78EC7EAA6DAB7921E7173C68FF7769_3,91EEB0303C78EC7EAA6DAB7921E7173C68FF7769," Understanding the AutoAI process
-
-For additional detail on each of these phases, including links to associated research papers and descriptions of the algorithms applied to create the model pipelines, see [AutoAI implementation details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html).
-
-"
-91EEB0303C78EC7EAA6DAB7921E7173C68FF7769_4,91EEB0303C78EC7EAA6DAB7921E7173C68FF7769," Data pre-processing
-
-Most data sets contain different data formats and missing values, but standard machine learning algorithms work only with numbers and no missing values. Therefore, AutoAI applies various algorithms or estimators to analyze, clean, and prepare your raw data for machine learning. This technique automatically detects and categorizes values based on features, such as data type: categorical or numerical. Depending on the categorization, AutoAI uses [hyper-parameter optimization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html?context=cdpaas&locale=enhpo_optimization) to determine the best combination of strategies for missing value imputation, feature encoding, and feature scaling for your data.
-
-"
-91EEB0303C78EC7EAA6DAB7921E7173C68FF7769_5,91EEB0303C78EC7EAA6DAB7921E7173C68FF7769," Automated model selection
-
-AutoAI uses automated model selection to identify the best model for your data. This novel approach tests potential models against small subsets of the data and ranks them based on accuracy. AutoAI then selects the most promising models and increases the size of the data subset until it identifies the best match. This approach saves time and improves performance by gradually narrowing down the potential models based on accuracy.
-
-For information on how to handle automatically-generated pipelines to select the best model, refer to [Selecting an AutoAI model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-view-results.html).
-
-"
-91EEB0303C78EC7EAA6DAB7921E7173C68FF7769_6,91EEB0303C78EC7EAA6DAB7921E7173C68FF7769," Automated feature engineering
-
-Feature engineering identifies the most accurate model by transforming raw data into a combination of features that best represent the problem. This unique approach explores various feature construction choices in a structured, nonexhaustive manner, while progressively maximizing model accuracy by using reinforcement learning. This technique results in an optimized sequence of transformations for the data that best match the algorithms of the model selection step.
-
-"
-91EEB0303C78EC7EAA6DAB7921E7173C68FF7769_7,91EEB0303C78EC7EAA6DAB7921E7173C68FF7769," Hyperparameter optimization
-
-Hyperparameter optimization refines the best performing models. AutoAI uses a novel hyperparameter optimization algorithm for certain function evaluations, such as model training and scoring, that are typical in machine learning. This approach quickly identifies the best model despite long evaluation times at each iteration.
-
-"
-91EEB0303C78EC7EAA6DAB7921E7173C68FF7769_8,91EEB0303C78EC7EAA6DAB7921E7173C68FF7769," Next steps
-
-[AutoAI tutorial: Build a Binary Classification Model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai_example_binary_classifier.html)
-
-Parent topic:[Analyzing data and building models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html)
-"
-2757F7F9B9E4975B9E53DA5B4508FF9D7A41A0A4_0,2757F7F9B9E4975B9E53DA5B4508FF9D7A41A0A4," Creating a text analysis experiment
-
-Use AutoAI's text analysis feature to perform text analysis of your experiments. For example, perform basic sentiment analysis to predict an outcome based on text comments.
-
-Note: Text analysis is only available for AutoAI classification and regression experiments. This feature is not available for time series experiments.
-
-"
-2757F7F9B9E4975B9E53DA5B4508FF9D7A41A0A4_1,2757F7F9B9E4975B9E53DA5B4508FF9D7A41A0A4," Text analysis overview
-
-When you create an experiment that uses the text analysis feature, the AutoAI process uses the word2vec algorithm to transform the text into vectors, then compares the vectors to establish the impact on the prediction column.
-
-The word2vec algorithm takes a corpus of text as input and outputs a set of vectors. By turning text into a numerical representation, it can detect and compare similar words. When trained with enough data, word2vec can make accurate predictions about a word's meaning or relationship to other words. The predictions can be used to analyze text and guess at the meaning in sentiment analysis applications.
-
-During the feature engineering phase of the experiment training, 20 features are generated for the text column, by using the word2vec algorithm. Auto-detection of text features is based on analyzing the number of unique values in a column and the number of tokens in a record (minimum number = 3). If the number of unique values is less than number of all values divided by 5, the column is not treated as text.
-
-When the experiment completes, you can review the feature engineering results from the pipeline details page. You can also save a pipeline as a notebook, where you can review the transformations and see a visualization of the transformations.
-
-Note: When you review the experiment, if you determine that a text column was not detected and processed by the auto-detection, you can specify the text column manually in the experiment settings.
-
-In this example, the comments for a fictional car rental company are used to train a model that predicts a satisfaction rating when a new comment is entered.
-
-Watch this short video to see this example and then read further details about the text feature below the video.
-
-Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform.
-
-This video provides a visual method to learn the concepts and tasks in this documentation.
-
-
-
-* Transcript
-
-Synchronize transcript with video
-
-
-
- Time Transcript
-
- 00:00 In this video you'll see how to create an AutoAI experiment to perform sentiment analysis on a text file.
- 00:09 You can use the text feature engineering to perform text analysis in your experiments.
- 00:15 For example, perform basic sentiment analysis to predict an outcome based on text comments.
-"
-2757F7F9B9E4975B9E53DA5B4508FF9D7A41A0A4_2,2757F7F9B9E4975B9E53DA5B4508FF9D7A41A0A4," 00:22 Start in a project and add an asset to that project, a new AutoAI experiment.
- 00:29 Just provide a name, description, select a machine learning service, and then create the experiment.
- 00:38 When the AutoAI experiment builder displays, you can add the data set.
- 00:43 In this case, the data set is already stored in the project as a data asset.
- 00:48 Select the asset to add to the experiment.
- 00:53 Before continuing, preview the data.
- 00:56 This data set has two columns.
- 00:59 The first contains the customers' comments and the second contains either 0, for ""Not satisfied"", or 1, for ""Satisfied"".
- 01:08 This isn't a time series forecast, so select ""No"" for that option.
- 01:13 Then select the column to predict, which is ""Satisfaction"" in this example.
- 01:19 AutoAI determines that the satisfaction column contains two possible values, making it suitable for a binary classification model.
- 01:28 And the positive class is 1, for ""Satisfied"".
- 01:32 Open the experiment settings if you'd like to customize the experiment.
- 01:36 On the data source panel, you'll see some options for the text feature engineering.
- 01:41 You can automatically select the text columns, or you can exercise more control by manually specifying the columns for text feature engineering.
- 01:52 You can also select how many vectors to create for each column during text feature engineering.
- 01:58 A lower number faster and a higher number is more accurate, but slower.
- 02:03 Now, run the experiment to view the transformations and progress.
- 02:09 When you create an experiment that uses the text analysis feature, the AutoAI process uses the word2vec algorithm to transform the text into vectors, then compares the vectors to establish the impact on the prediction column.
- 02:23 During the feature engineering phase of the experiment training, twenty features are generated for the text column using the word2vec algorithm.
- 02:33 When the experiment completes, you can review the feature engineering results from the pipeline details page.
- 02:40 On the Features summary panel, you can review the text transformations.
-"
-2757F7F9B9E4975B9E53DA5B4508FF9D7A41A0A4_3,2757F7F9B9E4975B9E53DA5B4508FF9D7A41A0A4," 02:45 You can see that AutoAI created several text features by applying the algorithm function to the column elements, along with the feature importance showing which features contribute most to your prediction output.
- 02:59 You can save this pipeline as a model or as a notebook.
- 03:03 The notebook contains the code to see the transformations and visualizations of those transformations.
- 03:09 In this case, create a model.
- 03:13 Use the link to view the model.
- 03:16 Now, promote the model to a deployment space.
- 03:23 Here are the model details, and from here you can deploy the model.
- 03:28 In this case, it will be an online deployment.
- 03:36 When that completes, open the deployment.
- 03:39 On the test app, you can specify one or more comments to analyze.
- 03:46 Then, click ""Predict"".
- 03:49 The first customer is predicted not to be satisfied with the service.
- 03:54 And the second customer is predicted to be satisfied with the service.
- 03:59 Find more videos in the Cloud Pak for Data as a Service documentation.
-
-
-
-
-
-Given a data set that contains a column of review comments for the rental experience (Customer_service), and a column that contains a binary satisfaction rating (Satisfaction) where 0 represents a negative comment and 1 represents a positive comment, the experiment is trained to predict a satisfaction rating when new feedback is entered.
-
-"
-2757F7F9B9E4975B9E53DA5B4508FF9D7A41A0A4_4,2757F7F9B9E4975B9E53DA5B4508FF9D7A41A0A4," Training a text transformation experiment
-
-After you load the data set and specify the prediction column (Satisfaction), the Experiment settings selects the Use text feature engineering option.
-
-
-
-Note some of the details for tuning your text analysis experiment:
-
-
-
-* You can accept the default selection of automatically selecting the text columns or you can exercise more control by manually specifying the columns for text feature engineering.
-* As the experiment runs, a default of 20 features is generated for the text column by using the word2vec algorithm. You can edit that value to increase or decrease the number of features. The more vectors that you generate the more accurate your model are, but the longer training takess.
-* The remainder of the options applies to all types of experiments so you can fine-tune how to handle the final training data.
-
-
-
-Run the experiment to view the transformations in progress.
-
-
-
-Select the name of a pipeline, then click Feature summary to review the text transformations.
-
-
-
-You can also save the experiment pipeline as a notebook and review the transformations as a visualization.
-
-"
-2757F7F9B9E4975B9E53DA5B4508FF9D7A41A0A4_5,2757F7F9B9E4975B9E53DA5B4508FF9D7A41A0A4," Deploying and scoring a text transformation model
-
-When you score this model, enter new comments to get a prediction with a confidence score for whether the comment results in a positive or negative satisfaction rating.
-
-For example, entering the comment ""It took us almost three hours to get a car. It was absurd"" predicts a satisfaction rating of 0 with a confidence score of 95%.
-
-
-
-"
-2757F7F9B9E4975B9E53DA5B4508FF9D7A41A0A4_6,2757F7F9B9E4975B9E53DA5B4508FF9D7A41A0A4," Next steps
-
-[Building a time series forecast experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries.html)
-
-Parent topic:[Building an AutoAI model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-build.html)
-"
-510BB82156702471C527D6EF7E51FE69EF746004_0,510BB82156702471C527D6EF7E51FE69EF746004," Time series implementation details
-
-These implementation details describe the stages and processing that are specific to an AutoAI time series experiment.
-
-"
-510BB82156702471C527D6EF7E51FE69EF746004_1,510BB82156702471C527D6EF7E51FE69EF746004," Implementation details
-
-Refer to these implementation and configuration details for your time series experiment.
-
-
-
-* [Time series stages](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html?context=cdpaas&locale=ents-stages) for processing an experiment.
-* [Time series optimizing metrics](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html?context=cdpaas&locale=ents-metrics) for tuning your pipelines.
-* [Time series algorithms](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html?context=cdpaas&locale=ents-algorithms) for building the pipelines.
-* [Supported date and time formats](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html?context=cdpaas&locale=ents-date-time).
-
-
-
-"
-510BB82156702471C527D6EF7E51FE69EF746004_2,510BB82156702471C527D6EF7E51FE69EF746004," Time series stages
-
-An AutoAI time series experiment includes these stages when an experiment runs:
-
-
-
-1. [Initialization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html?context=cdpaas&locale=eninitialization)
-2. [Pipeline selection](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html?context=cdpaas&locale=enpipeline-selection)
-3. [Model evaluation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html?context=cdpaas&locale=enmodel-eval)
-4. [Final pipeline generation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html?context=cdpaas&locale=enfinal-pipeline)
-5. [Backtest](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html?context=cdpaas&locale=enbacktest)
-
-
-
-"
-510BB82156702471C527D6EF7E51FE69EF746004_3,510BB82156702471C527D6EF7E51FE69EF746004," Stage 1: Initialization
-
-The initialization stage processes the training data, in this sequence:
-
-
-
-* Load the data
-* Split the data set L into training data T and holdout data H
-* Set the validation, timestamp column handling, and lookback window generation. Notes:
-
-
-
-* The training data (T) is equal to the data set (L) minus the holdout (H). When you configure the experiment, you can adjust the size of the holdout data. By default, the size of the holdout data is 20 steps.
-* You can optionally specify the timestamp column.
-* By default, a lookback window is generated automatically by detecting the seasonal period by using signal processing method. However, if you have an idea of an appropriate lookback window, you can specify the value directly.
-
-
-
-
-
-"
-510BB82156702471C527D6EF7E51FE69EF746004_4,510BB82156702471C527D6EF7E51FE69EF746004," Stage 2: Pipeline selection
-
-The pipeline selection step uses an efficient method called T-Daub (Time Series Data Allocation Using Upper Bounds). The method selects pipelines by allocating more training data to the most promising pipelines, while allocating less training data to unpromising pipelines. In this way, not all pipelines see the complete set of data, and the selection process is typically faster. The following steps describe the process overview:
-
-
-
-1. All pipelines are sequentially allocated several small subsets of training data. The latest data is allocated first.
-2. Each pipeline is trained on every allocated subset of training data and evaluated with testing data (holdout data).
-3. A linear regression model is applied to each pipeline by using the data set described in the previous step.
-4. The accuracy score of the pipeline is projected on the entire training data set. This method results in a data set containing the accuracy and size of allocated data for each pipeline.
-5. The best pipeline is selected according to the projected accuracy and allotted rank 1.
-6. More data is allocated to the best pipeline. Then, the projected accuracy is updated for the other pipelines.
-7. The prior two steps are repeated until the top N pipelines are trained on all the data.
-
-
-
-"
-510BB82156702471C527D6EF7E51FE69EF746004_5,510BB82156702471C527D6EF7E51FE69EF746004," Stage 3: Model evaluation
-
-In this step, the winning pipelines N are retrained on the entire training data set T. Further, they are evaluated with the holdout data H.
-
-"
-510BB82156702471C527D6EF7E51FE69EF746004_6,510BB82156702471C527D6EF7E51FE69EF746004," Stage 4: Final pipeline generation
-
-In this step, the winning pipelines are retrained on the entire data set (L) and generated as the final pipelines.
-
-As the retraining of each pipeline completes, the pipeline is posted to the leaderboard. You can select to inspect the pipeline details or save the pipeline as a model.
-
-"
-510BB82156702471C527D6EF7E51FE69EF746004_7,510BB82156702471C527D6EF7E51FE69EF746004," Stage 5: Backtest
-
-In the final step, the winning pipelines are retrained and evaluated by using the backtest method. The following steps describe the backtest method:
-
-
-
-1. The training data length is determined based on the number of backtests, gap length, and holdout size. To learn more about these parameters, see [Building a time series experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries.html).
-2. Starting from the oldest data, the experiment is trained by using the training data.
-3. Further, the experiment is evaluated on the first validation data set. If the gap length is non-zero, any data in the gap is skipped over.
-4. The training data window is advanced by increasing the holdout size and gap length to form a new training set.
-5. A fresh experiment is trained with this new data and evaluated with the next validation data set.
-6. The prior two steps are repeated for the remaining backtesting periods.
-
-
-
-"
-510BB82156702471C527D6EF7E51FE69EF746004_8,510BB82156702471C527D6EF7E51FE69EF746004," Time series optimization metrics
-
-Accept the default metric, or choose a metric to optimize for your experiment.
-
-
-
- Metric Description
-
- Symmetric Mean Absolute Percentage Error (SMAPE) At each fitted point, the absolute difference between actual value and predicted value is divided by half the sum of absolute actual value and predicted value. Then, the average is calculated for all such values across all the fitted points.
- Mean Absolute Error (MAE) Average of absolute differences between the actual values and predicted values.
- Root Mean Squared Error (RMSE) Square root of the mean of the squared differences between the actual values and predicted values.
- R^2^ Measure of how the model performance compares to the baseline model, or mean model. The R^2^ must be equal or less than 1. Negative R^2^ value means that the model under consideration is worse than the mean model. Zero R^2^ value means that the model under consideration is as good or bad as the mean model. Positive R^2^ value means that the model under consideration is better than the mean model.
-
-
-
-"
-510BB82156702471C527D6EF7E51FE69EF746004_9,510BB82156702471C527D6EF7E51FE69EF746004," Reviewing the metrics for an experiment
-
-When you view the results for a time series experiment, you see the values for metrics used to train the experiment in the pipeline leaderboard:
-
-
-
-You can see that the accuracy measures for time-series experiments may vary widely, depending on the experiment data evaluated.
-
-
-
-* Validation is the score calculated on training data.
-* Holdout is the score calculated on the reserved holdout data.
-* Backtest is the mean score from all backtests scores.
-
-
-
-"
-510BB82156702471C527D6EF7E51FE69EF746004_10,510BB82156702471C527D6EF7E51FE69EF746004," Time series algorithms
-
-These algorithms are available for your time series experiment. You can use the algorithms that are selected by default, or you can configure your experiment to include or exclude specific algorithms.
-
-
-
- Algorithm Description
-
- ARIMA Autoregressive Integrated Moving Average (ARIMA) model is a typical time series model, which can transform non-stationary data to stationary data through differencing, and then forecast the next value by using the past values, including the lagged values and lagged forecast errors
- BATS The BATS algorithm combines Box-Cox Transformation, ARMA residuals, Trend, and Seasonality factors to forecast future values.
- Ensembler Ensembler combines multiple forecast methods to overcome accuracy of simple prediction and to avoid possible overfit.
- Holt-Winters Uses triple exponential smoothing to forecast data points in a series, if the series is repetitive over time (seasonal). Two types of Holt-Winters models are provided: additive Holt-Winters, and multiplicative Holt-Winters
- Random Forest Tree-based regression model where each tree in the ensemble is built from a sample that is drawn with replacement (for example, a bootstrap sample) from the training set.
- Support Vector Machine (SVM) SVMs are a type of machine learning models that can be used for both regression and classification. SVMs use a hyperplane to divide the data into separate classes.
- Linear regression Builds a linear relationship between time series variable and the date/time or time index with residuals that follow the AR process.
-
-
-
-"
-510BB82156702471C527D6EF7E51FE69EF746004_11,510BB82156702471C527D6EF7E51FE69EF746004," Supported date and time formats
-
-The date/time formats supported in time series experiments are based on the definitions that are provided by [dateutil](https://dateutil.readthedocs.io/en/stable/parser.html).
-
-Supported date formats are:
-
-Common:
-
-YYYY
-YYYY-MM, YYYY/MM, or YYYYMM
-YYYY-MM-DD or YYYYMMDD
-mm/dd/yyyy
-mm-dd-yyyy
-JAN YYYY
-
-Uncommon:
-
-YYYY-Www or YYYYWww - ISO week (day defaults to 0)
-YYYY-Www-D or YYYYWwwD - ISO week and day
-
-Numberng for the ISO week and day values follows the same logic as datetime.date.isocalendar().
-
-Supported time formats are:
-
-hh
-hh:mm or hhmm
-hh:mm:ss or hhmmss
-hh:mm:ss.ssssss (Up to 6 sub-second digits)
-dd-MMM
-yyyy/mm
-
-Notes:
-
-
-
-* Midnight can be represented as 00:00 or 24:00. The decimal separator can be either a period or a comma.
-* Dates can be submitted as strings, with double quotation marks, such as ""1958-01-16"".
-
-
-
-"
-510BB82156702471C527D6EF7E51FE69EF746004_12,510BB82156702471C527D6EF7E51FE69EF746004," Supporting features
-
-Supporting features, also known as exogenous features, are input features that can influence the prediction target. You can use supporting features to include additional columns from your data set to improve the prediction and increase your model’s accuracy. For example, in a time series experiment to predict prices over time, a supporting feature might be data on sales and promotions. Or, in a model that forecasts energy consumption, including daily temperature makes the forecast more accurate.
-
-"
-510BB82156702471C527D6EF7E51FE69EF746004_13,510BB82156702471C527D6EF7E51FE69EF746004," Algorithms and pipelines that use Supporting features
-
-Only a subset of algorithms allow supporting features. For example, Holt-winters and BATS do not support the use of supporting features. Algorithms that do not support supporting features ignore your selection for supporting features when you run the experiment.
-
-Some algorithms use supporting features for certain variations of the algorithm, but not for others. For example, you can generate two different pipelines with the Random Forest algorithm, RandomForestRegressor and ExogenousRandomForestRegressor. The ExogenousRandomForestRegressor variation provides support for supporting features, whereas RandomForestRegressor does not.
-
-This table details whether an algorithm provides support for Supporting features in a time series experiment:
-
-
-
- Algorithm Pipeline Provide support for Supporting features
-
- Random forest RandomForestRegressor No
- Random forest ExogenousRandomForestRegressor Yes
- SVM SVM No
- SVM ExogenousSVM Yes
- Ensembler LocalizedFlattenEnsembler Yes
- Ensembler DifferenceFlattenEnsembler No
- Ensembler FlattenEnsembler No
- Ensembler ExogenousLocalizedFlattenEnsembler Yes
- Ensembler ExogenousDifferenceFlattenEnsembler Yes
- Ensembler ExogenousFlattenEnsembler Yes
- Regression MT2RForecaster No
- Regression ExogenousMT2RForecaster Yes
- Holt-winters HoltWinterAdditive No
- Holt-winters HoltWinterMultiplicative No
- BATS BATS No
- ARIMA ARIMA No
- ARIMA ARIMAX Yes
- ARIMA ARIMAX_RSAR Yes
- ARIMA ARIMAX_PALR Yes
- ARIMA ARIMAX_RAR Yes
- ARIMA ARIMAX_DMLR Yes
-
-
-
-"
-510BB82156702471C527D6EF7E51FE69EF746004_14,510BB82156702471C527D6EF7E51FE69EF746004," Learn more
-
-[Scoring a time series model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-score.html)
-
-Parent topic:[Building a time series experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries.html)
-"
-7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7_0,7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7," Building a time series experiment
-
-Use AutoAI to create a time series experiment to predict future activity, such as stock prices or temperatures, over a specified date or time range.
-
-"
-7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7_1,7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7," Time series overview
-
-A time series experiment is a method of forecasting that uses historical observations to predict future values. The experiment automatically builds many pipelines using machine learning models, such as random forest regression and Support Vector Machines (SVMs), as well as statistical time series models, such as ARIMA and Holt-Winters. Then, the experiment recommends the best pipeline according to the pipeline performance evaluated on a holdout data set or backtest data sets.
-
-Unlike a standard AutoAI experiment, which builds a set of pipelines to completion then ranks them. A time series experiment evaluates pipelines earlier in the process and only completes and test the best-performing pipelines.
-
-
-
-For details on the various stages of training and testing a time series experiment, see [Time series implementation details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html).
-
-"
-7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7_2,7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7," Predicting anomalies in a time series experiment
-
-You can configure your time series experiment to predict anomalies (outliers) in your data or predictions. To configure anomaly prediction for your experiment, follow the steps in [Creating a time series anomaly prediction model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-ap.html).
-
-"
-7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7_3,7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7," Using supporting features to improve predictions
-
-When you configure your time series experiment, you can choose to specify supporting features, also known as exogenous features. Supporting features are features that influence or add context to the prediction target. For example, if you are forecasting ice cream sales, daily temperature would be a logical supporting feature that would make the forecast more accurate.
-
-"
-7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7_4,7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7," Leveraging future values for supporting features
-
-If you know the future values for the supporting features, you can leverage those future values when you deploy the model. For example, if you are training a model to forecast future t-shirt sales, you can include promotional discounts as a supporting feature to enhance the prediction. Inputting the future value of the promotion then makes the forecast more accurate.
-
-"
-7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7_5,7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7," Data requirements
-
-These are the current data requirements for training a time series experiment:
-
-
-
-* The training data must be a single file in CSV format.
-* The file must contain one or more time series columns and optionally contain a timestamp column. For a list of supported date/time formats, see [AutoAI time series implementation details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html).
-* If the data source contains a timestamp column, ensure that the data is sampled at uniform frequency. That is, the difference in timestamps of adjacent rows is the same. For example, data can be in increments of 1 minute, 1 hour, or one day. The specified timestamp is used to determine the lookback window to improve the model accuracy.
-
-Note:If the file size is larger than 1 GB, sort the data in descending order by the timestamp, and only the first 1 GB is used to train the experiment.
-* If the data source does not contain a timestamp column, ensure that the data is sampled at regular intervals and sorted in ascending order according to the sample date/time. That is, the value in the first row is the oldest, and the value in the last row is the most recent.
-
-Note: If the file size is larger than 1 GB, truncate the file so it is smaller than 1 GB.
-* Select what data to use when training the final pipelines. If you choose to include training data only, the generated notebooks will include a cell for retrieving the holdout data used to evaluate each pipeline.
-
-
-
-Choose data from your project or upload it from your file system or from the asset browser, then click Continue. Click the preview icon , after the data source name to review your data. Optionally, you can add a second file as holdout data for testing the trained pipelines.
-
-"
-7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7_6,7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7," Configuring a time series experiment
-
-When you configure the details for an experiment, click Yes to Enable time series and complete the experiment details.
-
-
-
- Field Description
-
- Prediction columns The time series columns that you want to predict based on the previous values. You can specify one or more columns to predict.
- Date/time column The column that indicates the date/time at which the time series values occur.
- Lookback window A parameter that indicates how many previous time series values are used to predict the current time point.
- Forecast window The range that you want to predict based on the data in the lookback window.
-
-
-
-The prediction summary shows you the experiment type and the metric that is selected for optimizing the experiment.
-
-"
-7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7_7,7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7," Configuring experiment settings
-
-To configure more details for your time series experiment, click Experiment settings.
-
-"
-7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7_8,7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7," General prediction settings
-
-On the General panel for prediction settings, you can optionally change the metric used to optimize the experiment or specify the algorithms to consider or the number of pipelines to generate.
-
-
-
- Field Description
-
- Prediction type View or change the prediction type based on prediction column for your experiment. For time series experiments, Time series forecast is selected by default. Note: If you change the prediction type, other prediction settings for your experiment are automatically changed.
- Optimized metric View or change the recommended optimized metric for your experiment.
- Optimized algorithm selection Not supported for time series experiments.
- Algorithms to include Select algorithms based on which you want your experiment to create pipelines. Algorithms and pipelines that support the use of supporting features, are indicated by a checkmark.
- Pipelines to complete View or change the number of pipelines to generate for your experiment.
-
-
-
-"
-7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7_9,7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7," Time series configuration details
-
-On the Time series pane for prediction settings, configure the details for how to train the experiment and generate predictions.
-
-
-
- Field Description
-
- Date/time column View or change the date/time column for the experiment.
- Lookback window View or update the number of previous time series values used to predict the current time point.
- Forecast window View or update the range that you want to predict based.
-
-
-
-"
-7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7_10,7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7," Configuring data source settings
-
-To configure details for your input data, click Experiment settings and select Data source.
-
-"
-7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7_11,7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7," General data source settings
-
-On the General panel for data source settings, you can modify your dataset to interpolate missing values, split your dataset into training and holdout data, and input supporting features.
-
-
-
- Field Description
-
- Duplicate rows Not supported for time series experiments.
- Subsample data Not supported for time series experiments.
- Text feature engineering Not supported for time series experiments.
- Final training data set Select what data to use when training the final pipelines: just the training data or the training and holdout data. If you choose to include training data only, generated notebooks for this experiment will include a cell for retrieving the holdout data used to evaluate each pipeline.
- Supporting features Choose additional columns from your data set as Supporting features to support predictions and increase your model’s accuracy. You can also use future values for Supporting features by enabling Leverage future values of supporting features. Note: You can only use supporting features with selected algorithms and pipelines. For more information on algorithms and pipelines that support the use of supporting features, see [Time series implementation details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html).
- Data imputation Use data imputation to replace missing values in your dataset with substituted values. By enabling this option, you can specify how missing values should be interpolated in your data. To learn more about data imputation, see Data imputation in AutoAI experiments.
- Training and holdout data Choose to reserve some data from your training data set to test the experiment. Alternatively, upload a separate file of holdout data. The holdout data file must match the schema of the training data.
-
-
-
-"
-7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7_12,7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7," Configuring time series data
-
-To configure the time series data, you can adjust the settings for the time series data that is related to backtesting the experiment. Backtesting provides a means of validating a time-series model by using historical data.
-
-In a typical machine learning experiment, you can hold back part of the data randomly to test the resulting model for accuracy. To validate a time series model, you must preserve the time order relationship between the training data and testing data.
-
-The following steps describe the backtest method:
-
-
-
-1. The training data length is determined based on the number of backtests, gap length, and holdout size. To learn more about these parameters, see [Building a time series experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries.html?context=cdpaas&locale=en).
-2. Starting from the oldest data, the experiment is trained using the training data.
-3. The experiment is evaluated on the first validation data set. If the gap length is non-zero, any data in the gap is skipped over.
-4. The training data window is advanced by increasing the holdout size and gap length to form a new training set.
-5. A fresh experiment is trained with this new data and evaluated with the next validation data set.
-6. The prior two steps are repeated for the remaining backtesting periods.
-
-
-
-To adjust the backtesting configuration:
-
-
-
-1. Open Experiment settings.
-2. From Data sources, click the Time series.
-3. (Optional): Adjust the settings as shown in the table.
-
-
-
-
-
- Field Description
-
- Number of backtests Backtesting is similar to cross-validation for date/time periods. Optionally customize the number of backtests for your experiment.
- Holdout The size of the holdout set and each validation set for backtesting. The validation length can be adjusted by changing the holdout length.
- Gap length The number of time points between the training data set and validation data set for each backtest. When the parameter value is non-zero, the time series values in the gap will not be used to train the experiment or evaluate the current backtest.
-
-
-
-"
-7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7_13,7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7,"
-
-The visualization for the configuration settings illustrates the backtesting flow. The graphic is interactive, so you can manipulate the settings from the graphic or from the configuration fields. For example, by adjusting the gap length, you can see model validation results on earlier time periods of the data without increasing the number of backtests.
-
-"
-7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7_14,7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7," Interpreting the experiment results
-
-After you run your time series experiment, you can examine the resulting pipelines to get insights into the experiment details. Pipelines that use Supporting features are indicated by SUP enhancement tag to distinguish them from pipelines that don’t use these features. To view details:
-
-
-
-* Hover over nodes on the visualization to get details about the pipelines as they are being generated.
-* Toggle to the Progress Map view to see a different view of the training process. You can hover over each node in the process for details.
-* After the final pipelines are completed and written to the leaderboard, you can click a pipeline to see the performance details.
-* Click View discarded pipelines to view the algorithms that are used for the pipelines that are not selected as top performers.
-* Save the experiment code as notebook that you can review.
-* Save a particular pipeline as a notebook that you can review.
-
-
-
-Watch this video to see how to run a time series experiment and create a model in a Jupyter notebook using training and holdout data.
-
-Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform.
-
-This video provides a visual method to learn the concepts and tasks in this documentation.
-
-"
-7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7_15,7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7," Next steps
-
-
-
-* Follow a step-by-step tutorial to [train a univariate time series model to predict minimum temperatures by using sample data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-uni-tutorial.html).
-* Follow a step-by-step tutorial to [train a time series experiment with supporting features](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-tut-sup.html).
-* Learn about [scoring a deployed time series model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-score.html).
-* Learn about using the [API for AutoAI time series experiments](https://lukasz-cmielowski.medium.com/predicting-covid19-cases-with-autoai-time-series-api-f6793acee48d).
-
-
-
-"
-7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7_16,7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7," Additional resources
-
-
-
-* For an introduction to forecasting with AutoAI time series experiments, see the blog post [Right on time(series): Introducing Watson Studio’s AutoAI Time Series](https://medium.com/ibm-data-ai/right-on-time-series-introducing-watson-studios-autoai-time-series-5175dbe66154).
-* For more information about creating a time series experiment, see this blog post about [creating a new time series experiment](https://medium.com/ibm-data-ai/right-on-time-series-introducing-watson-studios-autoai-time-series-5175dbe66154).
-* Read a blog post about [adding supporting features to a time series experiment](https://medium.com/ibm-data-ai/improve-autoai-time-series-forecasts-with-supporting-features-using-ibm-cloud-pak-for-data-as-a-ff24cc85f6b8).
-* Review a [sample notebook](https://github.com/IBM/watson-machine-learning-samples/blob/master/cloud/notebooks/python_sdk/experiments/autoai/Use%20AutoAI%20and%20timeseries%20data%20with%20supporting%20features%20to%20predict%20PM2.5.ipynb) for a time series experiment with supporting features.
-* Read a blog post about [adding supporting features to a time series experiment using the API](https://medium.com/ibm-data-ai/forecasting-pm2-5-using-autoai-time-series-api-with-supporting-features-12bbad18cb36).
-
-
-
-"
-7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7_17,7B8B04B66E56FA847F1ACA3218EB99F3E568EEC7," Next steps
-
-
-
-* [Tutorial: AutoAI univariate time series experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-uni-tutorial.html)
-* [Tutorial: AutoAI supporting features time series experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-tut-sup.html)
-* [Time series experiment implementation details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries-details.html)
-* [Scoring a time series model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-score.html)
-
-
-
-Parent topic:[AutoAI overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html)
-"
-163EEB3DBAFF3B01D831F717EEB7487642C93080_0,163EEB3DBAFF3B01D831F717EEB7487642C93080," Troubleshooting AutoAI experiments
-
-The following list contains the common problems that are known for AutoAI. If your AutoAI experiment fails to run or deploy successfully, review some of these common problems and resolutions.
-
-"
-163EEB3DBAFF3B01D831F717EEB7487642C93080_1,163EEB3DBAFF3B01D831F717EEB7487642C93080," Passing incomplete or outlier input value to deployment can lead to outlier prediction
-
-After you deploy your machine learning model, note that providing input data that is markedly different from data that is used to train the model can produce an outlier prediction. When linear regression algorithms such as Ridge and LinearRegression are passed an out of scale input value, the model extrapolates the values and assigns a relatively large weight to it, producing a score that is not in line with conforming data.
-
-"
-163EEB3DBAFF3B01D831F717EEB7487642C93080_2,163EEB3DBAFF3B01D831F717EEB7487642C93080," Time Series pipeline with supporting features fails on retrieval
-
-If you train an AutoAI Time Series experiment by using supporting features and you get the error 'Error: name 'tspy_interpolators' is not defined' when the system tries to retrieve the pipeline for predictions, check to make sure your system is running Java 8 or higher.
-
-"
-163EEB3DBAFF3B01D831F717EEB7487642C93080_3,163EEB3DBAFF3B01D831F717EEB7487642C93080," Running a pipeline or experiment notebook fails with a software specification error
-
-If supported software specifications for AutoAI experiments change, you might get an error when you run a notebook built with an older software specification, such as an older version of Python. In this case, run the experiment again, then save a new notebook and try again.
-
-"
-163EEB3DBAFF3B01D831F717EEB7487642C93080_4,163EEB3DBAFF3B01D831F717EEB7487642C93080," Resolving an Out of Memory error
-
-If you get a memory error when you run a cell from an AutoAI generated notebook, create a notebook runtime with more resources for the AutoAI notebook and execute the cell again.
-
- Notebook for an experiment with subsampling can fail generating predictions
-
-If you do pipeline refinery to prepare the model, and the experiment uses subsampling of the data during training, you might encounter an “unknown class” error when you run a notebook that is saved from the experiment.
-
-The problem stems from an unknown class that is not included in the training data set. The workaround is to use the entire data set for training or re-create the subsampling that is used in the experiment.
-
-To subsample the training data (before fit()), provide sample size by number of rows or by fraction of the sample (as done in the experiment).
-
-
-
-* If number of records was used in subsampling settings, you can increase the value of n. For example:
-
-train_df = train_df.sample(n=1000)
-* If subsampling is represented as a fraction of the data set, increase the value of frac. For example:
-
-train_df = train_df.sample(frac=0.4, random_state=experiment_metadata['random_state'])
-
-
-
-"
-163EEB3DBAFF3B01D831F717EEB7487642C93080_5,163EEB3DBAFF3B01D831F717EEB7487642C93080," Pipeline creation fails for binary classification
-
-AutoAI analyzes a subset of the data to determine the best fit for experiment type. If the sample data in the prediction column contains only two values, AutoAI recommends a binary classification experiment and applies the related algorithms. However, if the full data set contains more than two values in the prediction column the binary classification fails and you get an error that indicates that AutoAI cannot create the pipelines.
-
-In this case, manually change the experiment type from binary to either multiclass, for a defined set of values, or regression, for an unspecified set of values.
-
-
-
-1. Click the Reconfigure Experiment icon to edit the experiment settings.
-2. On the Prediction page of Experiment Settings, change the prediction type to the one that best matches the data in the prediction column.
-3. Save the changes and run the experiment again.
-
-
-
-"
-163EEB3DBAFF3B01D831F717EEB7487642C93080_6,163EEB3DBAFF3B01D831F717EEB7487642C93080," Next steps
-
-[AutoAI overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html)
-
-Parent topic:[AutoAI overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html)
-"
-6B81F2288B810E3FFDD2DE5ACE4E13E3A90E1E10_0,6B81F2288B810E3FFDD2DE5ACE4E13E3A90E1E10," Tutorial: Create a time series anomaly prediction experiment
-
-This tutorial guides you through using AutoAI and sample data to train a time series experiment to detect if daily electricity usage values are normal or anomalies (outliers).
-
-When you set up the sample experiment, you load data that analyzes daily electricity usage from Industry A to determine whether a value is normal or an anomaly. Then, the experiment generates pipelines that use algorithms to label these predicted values as normal or an anomaly. After generating the pipelines, AutoAI chooses the best performers, and presents them in a leaderboard for you to review.
-
-Tech preview This is a technology preview and is not yet supported for use in production environments.
-
-"
-6B81F2288B810E3FFDD2DE5ACE4E13E3A90E1E10_1,6B81F2288B810E3FFDD2DE5ACE4E13E3A90E1E10," Data set overview
-
-This tutorial uses the Electricity usage anomalies sample data set from the Watson Studio Gallery. This data set describes the annual electricity usages for Industry A. The first column indicates the electricity usages and the second column indicates the date, which is in a day-by-day format.
-
-
-
-"
-6B81F2288B810E3FFDD2DE5ACE4E13E3A90E1E10_2,6B81F2288B810E3FFDD2DE5ACE4E13E3A90E1E10," Tasks overview
-
-In this tutorial, follow these steps to create an anomaly prediction experiment:
-
-
-
-1. [Create an AutoAI experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-ap-tutorial.html?context=cdpaas&locale=enstep1)
-2. [View the experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-ap-tutorial.html?context=cdpaas&locale=enstep2)
-3. [Review experiment results](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-ap-tutorial.html?context=cdpaas&locale=enstep3)
-4. [Deploy the trained model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-ap-tutorial.html?context=cdpaas&locale=enstep4)
-5. [Test the deployed model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-ap-tutorial.html?context=cdpaas&locale=enstep5)
-
-
-
-"
-6B81F2288B810E3FFDD2DE5ACE4E13E3A90E1E10_3,6B81F2288B810E3FFDD2DE5ACE4E13E3A90E1E10," Create an AutoAI experiment
-
-Create an AutoAI experiment and add sample data to your experiment.
-
-
-
-1. From the navigation menu , click Projects > View all projects.
-2. Open an existing project or [create a new project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html) to store the anomaly prediction experiment.
-3. On the Assets tab from within your project, click New asset > Build machine learning models automatically.
-4. Click Samples > Electricity usage anomalies sample data, then select Next. The AutoAI experiment name and description are pre-populated by the sample data.
-5. If prompted, associate a Watson Machine Learning instance with your AutoAI experiment.
-
-
-
-1. Click Associate a Machine Learning service instance and select an instance of Watson Machine Learning.
-2. Click Reload to confirm your configuration.
-
-
-
-6. Click Create.
-
-
-
-"
-6B81F2288B810E3FFDD2DE5ACE4E13E3A90E1E10_4,6B81F2288B810E3FFDD2DE5ACE4E13E3A90E1E10," View the experiment details
-
-AutoAI pre-populates the details fields for the sample experiment:
-
-
-
-
-
-* Type series analysis type: Anomaly prediction predicts whether future values in a series are anomalies (outliers). A prediction of 1 indicates a normal value and a prediction of -1 indicates an anomaly.
-* Feature column: industry_a_usage is the predicted value and indicates how much electricity Industry A consumes.
-* Date/Time column: date indicates the time increments for the experiment. For this experiment, there is one prediction value per day.
-
-* This experiment is optimized for the model performance metric: Average Precision. Average precision evaluates the performance of object detection and segmentation systems.
-
-
-
-Click Run experiment to train the model. The experiment takes several minutes to complete.
-
-"
-6B81F2288B810E3FFDD2DE5ACE4E13E3A90E1E10_5,6B81F2288B810E3FFDD2DE5ACE4E13E3A90E1E10," Review the experiment results
-
-The relationship map shows the transformations that are used to create pipelines. Follow these steps to review experiment results and save the pipeline with the best performance. 
-
-
-
-1. The leaderboard lists and saves the three best performing pipelines. Click the pipeline name with Rank 1 to review the details of the pipeline. For details on anomaly prediction metrics, see [Creating a time series anomaly prediction experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-ap.html).
-2. Select the pipeline with Rank 1 and Save the pipeline as a model. The model name is pre-populated with the default name.
-3. Click Create to confirm your pipeline selection.
-
-
-
-"
-6B81F2288B810E3FFDD2DE5ACE4E13E3A90E1E10_6,6B81F2288B810E3FFDD2DE5ACE4E13E3A90E1E10," Deploy the trained model
-
-Before the trained model can make predictions on external values, you must deploy the model. Follow these steps to promote your trained model to a deployment space.
-
-
-
-1. Deploy the model from the Model details page. To access the Model details page, choose one of these options:
-
-
-
-* From the notification displayed when you save the model, click View in project.
-* From the project's Assets, select the model’s name in Models.
-
-
-
-2. From the Model details page, click Promote to Deployment Space. Then, select or create a deployment space to deploy the model.
-3. Select Go to the model in the space after promoting it and click Promote to promote the model.
-
-
-
-"
-6B81F2288B810E3FFDD2DE5ACE4E13E3A90E1E10_7,6B81F2288B810E3FFDD2DE5ACE4E13E3A90E1E10," Testing the model
-
-After promoting the model to the deployment space, you are ready to test your trained model with new data values.
-
-
-
-1. Select New Deployment and create a new deployment with the following fields:
-
-
-
-1. Deployment type: Online
-2. Name: Electricity usage online deployment
-
-
-
-2. Click Create and wait for the status to update to Deployed.
-3. After the deployment initializes, click the deployment. Use Test input to manually enter and evaluate values or use JSON input to attach a data set.
-
-
-4. Click Predict to see whether there are any anomalies in the values.
-
-Note:-1 indicates an anomaly; 1 indicates a normal value.
-
-
-
-
-
-"
-6B81F2288B810E3FFDD2DE5ACE4E13E3A90E1E10_8,6B81F2288B810E3FFDD2DE5ACE4E13E3A90E1E10," Next steps
-
-[Building a time series forecast experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries.html)
-"
-B23F48A4757500FEA641245CFFA69CB3B72AE0E8_0,B23F48A4757500FEA641245CFFA69CB3B72AE0E8," Creating a time series anomaly prediction (Beta)
-
-Create a time series anomaly prediction experiment to train a model that can detect anomalies, or unexpected results, when the model predicts results based on new data.
-
-Tech preview This is a technology preview and is not yet supported for use in production environments.
-
-"
-B23F48A4757500FEA641245CFFA69CB3B72AE0E8_1,B23F48A4757500FEA641245CFFA69CB3B72AE0E8," Detecting anomalies in predictions
-
-You can use anomaly prediction to find outliers in model predictions. Consider the following scenarios for training a time series model with anomaly prediction. For example, suppose you have operational metrics from monitoring devices that were collected in the date range of 2022.1.1 through 2022.3.31. You are confident that no anomalies exist in the data for that period, even if the data is unlabeled. You can use a time series anomaly prediction experiment to:
-
-
-
-* Train model candidate pipelines and auto-select the top-ranked model candidate
-* Deploy a selected model to predict new observations if:
-
-
-
-* A new time point is an anomaly (for example, an online score predicts a time point 2022.4.1 that is outside of the expected range)
-* A new time range has anomalies (for example, a batch score predicts values of 2022.4.1 to 2022.4.7, outside the expected range)
-
-
-
-
-
-"
-B23F48A4757500FEA641245CFFA69CB3B72AE0E8_2,B23F48A4757500FEA641245CFFA69CB3B72AE0E8," Working with a sample
-
-To create an AutoAI Time series experiment with anomaly prediction that uses a sample:
-
-
-
-1. Create an AutoAI experiment.
-2. Select Samples.
-
-
-3. Click the tile for Electricity usage anomalies sample data.
-4. Follow the prompts to configure and run the experiment.
-
-
-5. Review the details about the pipelines and explore the visualizations.
-
-
-
-"
-B23F48A4757500FEA641245CFFA69CB3B72AE0E8_3,B23F48A4757500FEA641245CFFA69CB3B72AE0E8," Configuring a time series experiment with anomaly prediction
-
-
-
-1. Load the data for your experiment.
-
-Restriction: You can upload only a single data file for an anomaly prediction experiment. If you upload a second data file (for holdout data) the Anomaly prediction option is disabled, and only the Forecast option is available. By default, Anomaly prediction experiments use a subset of the training data for validation.
-2. Click Yes to Enable time series.
-3. Select Anomaly prediction as the experiment type.
-4. Configure the feature columns from the data source that you want to predict based on the previous values. You can specify one or more columns to predict.
-5. Select the date/time column.
-
-
-
-The prediction summary shows you the experiment type and the metric that is selected for optimizing the experiment.
-
-"
-B23F48A4757500FEA641245CFFA69CB3B72AE0E8_4,B23F48A4757500FEA641245CFFA69CB3B72AE0E8," Configuring experiment settings
-
-To configure more details for your time series experiment, open the Experiment settings pane. Options that are not available for anomaly prediction experiments are unavailable.
-
-"
-B23F48A4757500FEA641245CFFA69CB3B72AE0E8_5,B23F48A4757500FEA641245CFFA69CB3B72AE0E8," General prediction settings
-
-On the General panel for prediction settings, configure details for training the experiment.
-
-
-
- Field Description
-
- Prediction type View or change the prediction type based on prediction column for your experiment. For time series experiments, Time series anomaly prediction is selected by default. Note: If you change the prediction type, other prediction settings for your experiment are automatically changed.
- Optimized metric Choose a metric for optimizing and ranking the pipelines.
- Optimized algorithm selection Not supported for time series experiments.
- Algorithms to include Select [algorithms](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-ap.html?context=cdpaas&locale=enimplementation) based on which you want your experiment to create pipelines. The algorithms support anomaly prediction.
- Pipelines to complete View or change the number of pipelines to generate for your experiment.
-
-
-
-"
-B23F48A4757500FEA641245CFFA69CB3B72AE0E8_6,B23F48A4757500FEA641245CFFA69CB3B72AE0E8," Time series configuration details
-
-On the Time series pane for prediction settings, configure the details for how to train the experiment and generate predictions.
-
-
-
- Field Description
-
- Date/time column View or change the date/time column for the experiment.
- Lookback window Not supported for anomaly prediction.
- Forecast window Not supported for anomaly prediction.
-
-
-
-"
-B23F48A4757500FEA641245CFFA69CB3B72AE0E8_7,B23F48A4757500FEA641245CFFA69CB3B72AE0E8," Configuring data source settings
-
-To configure details for your input data, open the Experiment settings panel and select the Data source.
-
-"
-B23F48A4757500FEA641245CFFA69CB3B72AE0E8_8,B23F48A4757500FEA641245CFFA69CB3B72AE0E8," General data source settings
-
-On the General panel for data source settings, you can choose options for how to use your experiment data.
-
-
-
- Field Description
-
- Duplicate rows Not supported for time series anomaly prediction experiments.
- Subsample data Not supported for time series anomaly prediction experiments.
- Text feature engineering Not supported for time series anomaly prediction experiments.
- Final training data set Anomaly prediction uses a single data source file, which is the final training data set.
- Supporting features Not supported for time series anomaly prediction experiments.
- Data imputation Not supported for time series anomaly prediction experiments.
- Training and holdout data Anomaly prediction does not support a separate holdout file. You can adjust how the data is split between training and holdout data. Note: In some cases, AutoAI can overwrite your holdout settings to ensure the split is valid for the experiment. In this case, you see a notification and the change is noted in the log file.
-
-
-
-"
-B23F48A4757500FEA641245CFFA69CB3B72AE0E8_9,B23F48A4757500FEA641245CFFA69CB3B72AE0E8," Reviewing the experiment results
-
-When you run the experiment, the progress indicator displays the pathways to pipeline creation. Ranked pipelines are listed on the leaderboard. Pipeline score represents how well the pipeline performed for the optimizing metric.
-
-The Experiment summary tab displays a visualization of how metrics performed for the pipeline.
-
-
-
-* Use the metric filter to focus on particular metrics.
-* Hover over the name of a metric to view details.
-
-
-
-Click a pipeline name to view details. On the Model evaluation page, you can review a table that summarizes details about the pipeline.
-
-
-
-
-
-* The rows represent five evaluation metrics: Area under ROC, Precision, Recall, F1, Average precision.
-* The columns represent four synthesized anomaly types: Level shift, Trend, Localized extreme, Variance.
-* Each value in a cell is an average of the metric based on three iterations of evaluation on the synthesized anomaly type.
-
-
-
-"
-B23F48A4757500FEA641245CFFA69CB3B72AE0E8_10,B23F48A4757500FEA641245CFFA69CB3B72AE0E8," Evaluation metrics:
-
-These metrics are used to evaluate a pipeline:
-
-
-
- Metric Description
-
- Aggregate score (Recommended) This score is calculated based on an aggregation of the optimized metric (for example, Average precision) values for the 4 anomaly types. The scores for each pipeline are ranked, using the Borda count method, and then weighted for their contribution to the aggregate score. Unlike a standard metric score, this value is not between 0 and 1. A higher value indicates a stronger score.
- ROC AUC Measure of how well a parameter can distinguish between two groups.
- F1 Harmonic average of the precision and recall, with best value of 1 (perfect precision and recall) and worst at 0.
- Precision Measures the accuracy of a prediction based on percent of positive predictions that are correct.
- Recall Measures the percentage of identified positive predictions against possible positives in data set.
-
-
-
-"
-B23F48A4757500FEA641245CFFA69CB3B72AE0E8_11,B23F48A4757500FEA641245CFFA69CB3B72AE0E8," Anomaly types
-
-These are the anomaly types AutoAI detects.
-
-
-
- Anomaly type Description
-
- Localized extreme anomaly An unusual data point in a time series, which deviates significantly from the data points around it.
- Level shift anomaly A segment in which the mean value of a time series is changed.
- Trend anomaly A segment of time series, which has a trend change compared to the time series before the segment.
- Variance anomaly A segment of time series in which the variance of a time series is changed.
-
-
-
-"
-B23F48A4757500FEA641245CFFA69CB3B72AE0E8_12,B23F48A4757500FEA641245CFFA69CB3B72AE0E8," Saving a pipeline as a model
-
-To save a model candidate pipeline as a machine learning model, select Save as model for the pipeline you prefer. The model is saved as a project asset. You can promote the model to a space and create a deployment for it.
-
-"
-B23F48A4757500FEA641245CFFA69CB3B72AE0E8_13,B23F48A4757500FEA641245CFFA69CB3B72AE0E8," Saving a pipeline as a notebook
-
-To review the code for a pipeline, select Save as notebook for a pipeline. An automatically generated notebook is saved as a project asset. Review the code to explore how the pipeline was generated.
-
-For details on the methods used in the pipeline code, see the documentation for the [autoai-ts-libs library](https://pypi.org/project/autoai-ts-libs/).
-
-"
-B23F48A4757500FEA641245CFFA69CB3B72AE0E8_14,B23F48A4757500FEA641245CFFA69CB3B72AE0E8," Scoring the model
-
-After you save a pipeline as a model, then promote the model to a space, you can score the model to generate predictions for input, or payload, data. Scoring the model and interpreting the results is similar to scoring a binary classification model, as the score presents one of two possible values for each prediction:
-
-
-
-* 1 = no anomaly detected
-* -1 = anomaly detected
-
-
-
-"
-B23F48A4757500FEA641245CFFA69CB3B72AE0E8_15,B23F48A4757500FEA641245CFFA69CB3B72AE0E8," Deployment details
-
-Note these requirements for deploying an anomaly prediction model.
-
-
-
-* The schema for the deplyment input data must match the schema for the training data except for the prediction, or target column.
-* The order of the fields for model scoring must be the same as the order of the fields in the training data schema.
-
-
-
-"
-B23F48A4757500FEA641245CFFA69CB3B72AE0E8_16,B23F48A4757500FEA641245CFFA69CB3B72AE0E8," Deployment example
-
-The following is valid input for an anomaly prediction model:
-
-{
-""input_data"": [
-{
-""id"": ""observations"",
-""values"":
-12,34],
-22,23],
-35,45],
-46,34]
-]
-}
-]
-}
-
-The score for this input is [1,1,-1,1] where -1 means the value is an anomaly and 1 means the prediction is in the normal range.
-
-"
-B23F48A4757500FEA641245CFFA69CB3B72AE0E8_17,B23F48A4757500FEA641245CFFA69CB3B72AE0E8," Implementation details
-
-These algorithms support anomaly prediction in time series experiments.
-
-
-
- Algorithm Type Transformer
-
- Pipeline Name Algorithm Type Transformer
- PointwiseBoundedHoltWintersAdditive Forecasting N/A
- PointwiseBoundedBATS Forecasting N/A
- PointwiseBoundedBATSForceUpdate Forecasting N/A
- WindowNN Window Flatten
- WindowPCA Relationship Flatten
- WindowLOF Window Flatten
-
-
-
-The algorithms are organized in these categories:
-
-
-
-* Forecasting: Algorithms for detecting anomalies using time series forecasting methods
-* Relationship: Algorithms for detecting anomalies by analyzing the relationship among data points
-* Window: Algorithms for detecting anomalies by applying transformations and ML techniques to rolling windows
-
-
-
-"
-B23F48A4757500FEA641245CFFA69CB3B72AE0E8_18,B23F48A4757500FEA641245CFFA69CB3B72AE0E8," Learn more
-
-[Saving an AutoAI generated notebook (Watson Machine Learning)](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-notebook.html)
-
-Parent topic:[Building a time series experiment ](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries.html)
-"
-AD76780EA50A0FB37454A3A03FF08CA0AD39EF19_0,AD76780EA50A0FB37454A3A03FF08CA0AD39EF19," Scoring a time series model
-
-After you save an AutoAI time series pipeline as a model, you can deploy and score the model to forecast new values.
-
-"
-AD76780EA50A0FB37454A3A03FF08CA0AD39EF19_1,AD76780EA50A0FB37454A3A03FF08CA0AD39EF19," Deploying a time series model
-
-After you save a model to a project, follow the steps to deploy the model:
-
-
-
-1. Find the model in the project asset list.
-2. Promote the model to a deployment space.
-3. Promote payload data to the deployment space.
-4. From the deployment space, create a deployment.
-
-
-
-"
-AD76780EA50A0FB37454A3A03FF08CA0AD39EF19_2,AD76780EA50A0FB37454A3A03FF08CA0AD39EF19," Scoring considerations
-
-To this point, deploying a time series model follows the same steps as deploying a classification or regression model. However, because of the way predictions are structured and generated in a time series model, your input must match your model structure. For example, the way you structure your payload depends on whether you are predicting a single result (univariate) or multiple results (multivariate).
-
-Note these high-level considerations:
-
-
-
-* To get the first forecast window row or rows after the last row in your data, send an empty payload.
-* To get the next value, send the result from the empty payload request as your next scoring request, and so on.
-* You can send multiple rows as input, to build trends and predict the next value after a trend.
-* If you have multiple prediction columns, you need to include a value for each of them in your scoring request
-
-
-
-"
-AD76780EA50A0FB37454A3A03FF08CA0AD39EF19_3,AD76780EA50A0FB37454A3A03FF08CA0AD39EF19," Scoring an online deployment
-
-If you create an online deployment, you can pass the payload data by using an input form or by submitting JSON code. This example shows how to structure the JSON code to generate predictions.
-
-"
-AD76780EA50A0FB37454A3A03FF08CA0AD39EF19_4,AD76780EA50A0FB37454A3A03FF08CA0AD39EF19," Predicting a single value
-
-In the simplest case, given this sample data, you are trying to forecast the next step of value1 with a forecast window of 1, meaning each prediction will be a single step (row).
-
-
-
- timestamp value1
-
- 2015-02026 21:42 2
- 2015-02026 21:47 4
- 2015-02026 21:52 6
- 2015-02026 21:57 8
- 2015-02026 22:02 10
-
-
-
-You must pass a blank entry as the input data to request the first prediction, which is structured like this:
-
-{
-""input_data"": [
-{
-""fields"":
-""value1""
-],
-""values"": ]
-}
-]
-}
-
-The output that is returned predicts the next step in the model:
-
-{
-""predictions"": [
-{
-""fields"":
-""prediction""
-],
-""values"":
-
-12
-]
-]
-}
-]
-}
-
-The next input passes the result of the previous output to predict the next step:
-
-{
-""input_data"": [
-{
-""fields"":
-""value1""
-],
-""values"":
-12]
-]
-}
-]
-}
-
-"
-AD76780EA50A0FB37454A3A03FF08CA0AD39EF19_5,AD76780EA50A0FB37454A3A03FF08CA0AD39EF19," Predicting multiple values
-
-In this case, you are predicting two targets, value1 and value2.
-
-
-
- timestamp value1 value2
-
- 2015-02026 21:42 2 1
- 2015-02026 21:47 4 3
- 2015-02026 21:52 6 5
- 2015-02026 21:57 8 7
- 2015-02026 22:02 10 9
-
-
-
-The input data must still pass a blank entry to request the first prediction. The next input would be structured like this:
-
-{
-""input_data"": [
-{
-""fields"":
-""value1"",
-""value2""
-],
-""values"":
-2, 1],
-]
-}
-]
-}
-
-"
-AD76780EA50A0FB37454A3A03FF08CA0AD39EF19_6,AD76780EA50A0FB37454A3A03FF08CA0AD39EF19," Predicting based on new observations
-
-If instead of predicting the next row based on the prior step you want to enter new observations, enter the input data like this for a univariate model:
-
-{
-""input_data"": [
-{
-""fields"":
-""value1""
-],
-""values"":
-2],
-4],
-6]
-]
-}
-]
-}
-
-Enter new observations like this for a multivariate model:
-
-{
-""input_data"": [
-{
-""fields"":
-""value1"",
-""value2""
-],
-""values"":
-2, 1],
-4, 3],
-6, 5]
-]
-}
-]
-}
-
-Where 2, 4, and 6 are observations for value1 and 1, 3, 5 are observations for value2.
-
-"
-AD76780EA50A0FB37454A3A03FF08CA0AD39EF19_7,AD76780EA50A0FB37454A3A03FF08CA0AD39EF19," Scoring a time series model with Supporting features
-
-After you deploy your model, you can go to the page detailing your deployment to get prediction values. Choose one of the following ways to test your deployment:
-
-"
-AD76780EA50A0FB37454A3A03FF08CA0AD39EF19_8,AD76780EA50A0FB37454A3A03FF08CA0AD39EF19," Using existing input values
-
-You can use existing input values in your data set to obtain prediction values. Click Predict to obtain a set of prediction values. The total number of prediction values in the output is defined by prediction horizon that you previously set during the experiment configuration stage.
-
-"
-AD76780EA50A0FB37454A3A03FF08CA0AD39EF19_9,AD76780EA50A0FB37454A3A03FF08CA0AD39EF19," Using new input values
-
-You can choose to populate the spreadsheet with new input values or use JSON code to obtain a prediction.
-
-"
-AD76780EA50A0FB37454A3A03FF08CA0AD39EF19_10,AD76780EA50A0FB37454A3A03FF08CA0AD39EF19," Using spreadsheet to provide new input data for predicting values
-
-To add input data to the New observations (optional) spreadsheet, select the Input tab and do one of the following:
-
-
-
-* Add pre-existing .csv file containing new observations from your local directory by clicking Browse local files.
-* Download the input file template by clicking Download CSV template, enter values, and upload the file.
-* Use an existing data asset from your project by clicking Search in space.
-* Manually enter input observations in the spreadsheet.
-
-
-
-You can also provide future values for Supporting features if you previously enabled your experiment to leverage these values during the experiment configuration stage. Make sure to add these values to the Future supporting features (optional) spreadsheet.
-
-"
-AD76780EA50A0FB37454A3A03FF08CA0AD39EF19_11,AD76780EA50A0FB37454A3A03FF08CA0AD39EF19," Using JSON code to provide input data
-
-To add input data using JSON code, select the Paste JSON tab and do one of the following:
-
-
-
-* Add pre-existing JSON file containing new observations from your local directory by clicking Browse local files.
-* Use an existing data asset from your project by clicking Search in space.
-* Manually enter or paste JSON code into the editor.
-
-
-
-In this code sample, the prediction column is pollution, and the supporting features are temp and press.
-
-{
-""input_data"": [
-{
-""id"": ""observations"",
-""values"":
-
-96.125,
-3.958,
-1026.833
-]
-]
-},
-{
-""id"": ""supporting_features"",
-""values"":
-
-3.208,
-1020.667
-]
-]
-}
-]
-}
-
-"
-AD76780EA50A0FB37454A3A03FF08CA0AD39EF19_12,AD76780EA50A0FB37454A3A03FF08CA0AD39EF19," Next steps
-
-[Saving an AutoAI generated notebook (Watson Machine Learning)](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-notebook.html)
-
-Parent topic:[Building a time series experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries.html)
-"
-99843122C08D0D70ED3694A57482595E35FB0D8B_0,99843122C08D0D70ED3694A57482595E35FB0D8B," Tutorial: AutoAI multivariate time series experiment with Supporting features
-
-Use sample data to train a multivariate time series experiment that predicts pollution rate and temperature with the help of supporting features that influence the prediction fields.
-
-When you set up the experiment, you load sample data that tracks weather conditions in Beijing from 2010 to 2014. The experiment generates a set of pipelines that use algorithms to predict future pollution and temperature with supporting features, including dew, pressure, snow, and rain. After generating the pipelines, AutoAI compares and tests them, chooses the best performers, and presents them in a leaderboard for you to review.
-
-"
-99843122C08D0D70ED3694A57482595E35FB0D8B_1,99843122C08D0D70ED3694A57482595E35FB0D8B," Data set overview
-
-For this tutorial, you use the [Beijing PM 2.5](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/56e40ca77f9a72b0ab65b2c7938a99e2) data set from the Samples. This data set describes the weather conditions in Beijing from 2010 to 2014, which are measured in 1-day steps, or increments. You use this data set to configure your AutoAI experiment and select Supporting features. Details about the data set are described here:
-
-
-
-* Each column, other than the date column, represents a weather condition that impacts pollution index.
-* The Samples entry shows the origin of the data. You can preview the file before you download the file.
-* The sample data is structured in rows and columns and saved as a .csv file.
-
-
-
-
-
-"
-99843122C08D0D70ED3694A57482595E35FB0D8B_2,99843122C08D0D70ED3694A57482595E35FB0D8B," Tasks overview
-
-In this tutorial, you follow steps to create a multivariate time series experiment that uses Supporting features:
-
-
-
-1. [Create a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-tut-sup.html?context=cdpaas&locale=enstep1)
-2. [Create an AutoAI experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-tut-sup.html?context=cdpaas&locale=enstep2)
-3. [Configure the experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-tut-sup.html?context=cdpaas&locale=enstep3)
-4. [Review experiment results](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-tut-sup.html?context=cdpaas&locale=enstep4)
-5. [Deploy the trained model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-tut-sup.html?context=cdpaas&locale=enstep5)
-6. [Test the deployed model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-tut-sup.html?context=cdpaas&locale=enstep6)
-
-
-
-"
-99843122C08D0D70ED3694A57482595E35FB0D8B_3,99843122C08D0D70ED3694A57482595E35FB0D8B," Create a project
-
-Follow these steps to create an empty project and download the [Beijing PM 2.5](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/56e40ca77f9a72b0ab65b2c7938a99e2) data set from the IBM watsonx Samples:
-
-
-
-1. From the main navigation pane, click Projects > View all projects, then click New Project.
-a. Click Create an empty project.
-b. Enter a name and optional description for your project.
-c. Click Create.
-2. From the main navigation panel, click Samples and download a local copy of the [Beijing PM 2.5](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/56e40ca77f9a72b0ab65b2c7938a99e2) data set.
-
-
-
-"
-99843122C08D0D70ED3694A57482595E35FB0D8B_4,99843122C08D0D70ED3694A57482595E35FB0D8B," Create an AutoAI experiment
-
-Follow these steps to create an AutoAI experiment and add sample data to your experiment:
-
-
-
-1. On the Assets tab from within your project, click New asset > Build machine learning models automatically.
-2. Specify a name and optional description for your experiment.
-3. Associate a machine learning service instance with your experiment.
-4. Choose an environment definition of 8 vCPU and 32 GB RAM.
-5. Click Create.
-6. To add sample data, choose one of the these methods:
-
-
-
-* If you downloaded your file locally, upload the training data file, PM25.csv by clicking Browse and then following the prompts.
-* If you already uploaded your file to your project, click Select from project, then select the Data asset tab and choose Beijing PM 25.csv.
-
-
-
-
-
-"
-99843122C08D0D70ED3694A57482595E35FB0D8B_5,99843122C08D0D70ED3694A57482595E35FB0D8B," Configure the experiment
-
-Follow these steps to configure your multivariate AutoAI time series experiment:
-
-
-
-1. Click Yes for the option to create a Time Series Forecast.
-2. Choose as prediction columns: pollution, temp.
-3. Choose as the date/time column: date.
-
-
-4. Click Experiment settings to configure the experiment:
-a. In the Prediction page, accept the default selection for Algorithms to include. Algorithms that allow you to use Supporting features are indicated by a checkmark in the column Allows supporting features.
-
-
-b. Go to the Data Source page. For this tutorial, you will supply future values of Supporting features while testing. Future values are helpful when values for the supporting features are knowable for the prediction horizon. Accept the default enablement for Leverage future values of supporting features. Additionally, accept the default selection for columns that will be used as Supporting features.
-
-c. Click Cancel to exit from Experiment settings.
-5. Click Run experiment to begin the training.
-
-
-
-"
-99843122C08D0D70ED3694A57482595E35FB0D8B_6,99843122C08D0D70ED3694A57482595E35FB0D8B," Review experiment results
-
-The experiment takes several minutes to complete. As the experiment trains, the relationship map shows the transformations that are used to create pipelines. Follow these steps to review experiment results and save the pipeline with the best performance.
-
-
-
-1. Optional: Hover over any node in the relationship map to get details on the transformation for a particular pipeline.
-
-
-2. Optional: After the pipelines are listed on the leaderboard, click Pipeline comparison to see how they differ. For example:
-
-
-3. When the training completes, the top three best performing pipelines are saved to the leaderboard. Click any pipeline name to review details.
-
-Note: Pipelines that use Supporting features are indicated by SUP enhancement.
-
-
-4. Select the pipeline with Rank 1 and click Save as to create your model. Then, click Create. This action saves the pipeline under the Models section in the Assets tab.
-
-
-
-"
-99843122C08D0D70ED3694A57482595E35FB0D8B_7,99843122C08D0D70ED3694A57482595E35FB0D8B," Deploy the trained model
-
-Before you can use your trained model to make predictions on new data, you must deploy the model. Follow these steps to promote your trained model to a deployment space:
-
-
-
-1. You can deploy the model from the model details page. To access the model details page, choose one of these options:
-
-
-
-* Click the model’s name in the notification that is displayed when you save the model.
-* Open the Assets page for the project that contains the model and click the model’s name in the Machine Learning Model section.
-
-
-
-2. Select Promote to Deployment Space, then select or create a deployment space where the model will be deployed.
-Optional: Follow these steps to create a deployment space:
-a. From the Target space list, select Create a new deployment space.
-b. Enter a name for your deployment space.
-c. To associate a machine learning instance, go to Select machine learning service (optional) and select a machine learning instance from the list.
-d. Click Create.
-
-3. Once you select or create your space, click Promote.
-4. Click the deployment space link from the notification.
-5. From the Assets tab of the deployment space:
-a. Hover over the model’s name and click the deployment icon .
-b. In the page that opens, complete the fields:
-
-
-
-* Select Online as the Deployment type.
-
-* Specify a name for the deployment.
-
-* Click Create.
-
-
-
-
-
-After the deployment is complete, click the Deployments tab and select the deployment name to view the details page.
-
-"
-99843122C08D0D70ED3694A57482595E35FB0D8B_8,99843122C08D0D70ED3694A57482595E35FB0D8B," Test the deployed model
-
-Follow these steps to test the deployed model from the deployment details page:
-
-
-
-1. On the Test tab of the deployment details page, go to New observations (optional) spreadsheet and enter the following values:
-pollution (double): 80.417
-temp (double): -5.5
-dew (double): -7.083
-press (double): 1020.667
-wnd_spd (double): 9.518
-snow (double): 0
-rain (double): 0
-
-
-
-2. To add future values of Supporting features, go to Future exogenous features (optional) spreadsheet and enter the following values:
-dew (double): -12.667
-press (double): 1023.708
-wnd_spd (double): 9.518
-snow (double): 0
-rain (double): 0.042
-
-Note: You must provide the same number of values for future exogenous features as the prediction horizon that you set during experiment configuration stage.
-
-
-
-3. Click Predict. The resulting prediction indicates values for pollution and temperature.
-
-Note: Prediction values that are shown in the output might differ when you test your deployment.
-
-
-
-
-
-"
-99843122C08D0D70ED3694A57482595E35FB0D8B_9,99843122C08D0D70ED3694A57482595E35FB0D8B," Learn more
-
-Parent topic:[Building a time series experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries.html)
-"
-3AF15CB9E302A9E0D7DE22DE648EF7B3DCA1D865_0,3AF15CB9E302A9E0D7DE22DE648EF7B3DCA1D865," Tutorial: AutoAI univariate time series experiment
-
-Use sample data to train a univariate (single prediction column) time series experiment that predicts minimum daily temperatures.
-
-When you set up the experiment, you load data that tracks daily minimum temperatures for the city of Melbourne, Australia. The experiment will generate a set of pipelines that use algorithms to predict future minimum daily temperatures. After generating the pipelines, AutoAI compares and tests them, chooses the best performers, and presents them in a leaderboard for you to review.
-
-"
-3AF15CB9E302A9E0D7DE22DE648EF7B3DCA1D865_1,3AF15CB9E302A9E0D7DE22DE648EF7B3DCA1D865," Data set overview
-
-The [Mini_Daily_Temperatures](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/de4d953f2a766fbc0469723eba0d93ef) data set describes the minimum daily temperatures over 10 years (1981-1990) in the city Melbourne, Australia. The units are in degrees celsius and the data set contains 3650 observations. The source of the data is the Australian Bureau of Meteorology. Details about the data set are described here:
-
-
-
-
-
-* You will use the Min_Temp column as the prediction column to build pipelines and forecast the future daily minimum temperatures. Before the pipeline training, the date column and Min_Temp column are used together to figure out the appropriate lookback window.
-* The prediction column forecasts a prediction for the daily minimum temperature on a specified day.
-* The sample data is structured in rows and columns and saved as a .csv file.
-
-
-
-"
-3AF15CB9E302A9E0D7DE22DE648EF7B3DCA1D865_2,3AF15CB9E302A9E0D7DE22DE648EF7B3DCA1D865," Tasks overview
-
-In this tutorial, you follow these steps to create a univariate time series experiment:
-
-
-
-1. [Create a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-uni-tutorial.html?context=cdpaas&locale=enstep0)
-2. [Create an AutoAI experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-uni-tutorial.html?context=cdpaas&locale=enstep1)
-3. [Configure the experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-uni-tutorial.html?context=cdpaas&locale=enstep2)
-4. [Review experiment results](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-uni-tutorial.html?context=cdpaas&locale=enstep3)
-5. [Deploy the trained model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-uni-tutorial.html?context=cdpaas&locale=enstep4)
-6. [Test the deployed model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-ts-uni-tutorial.html?context=cdpaas&locale=enstep5)
-
-
-
-"
-3AF15CB9E302A9E0D7DE22DE648EF7B3DCA1D865_3,3AF15CB9E302A9E0D7DE22DE648EF7B3DCA1D865," Create a project
-
-Follow these steps to download the [Mini_Daily_Temperatures](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/de4d953f2a766fbc0469723eba0d93ef) data set from the Samples and create an empty project:
-
-
-
-1. From the navigation menu , click Samples and download a local copy of the [Mini_Daily_Temperatures](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/de4d953f2a766fbc0469723eba0d93ef) data set.
-2. From the navigation menu , click Projects > View all projects, then click New Project.
-
-
-
-1. Click Create an empty project.
-2. Enter a name and optional description for your project.
-3. Click Create.
-
-
-
-
-
-"
-3AF15CB9E302A9E0D7DE22DE648EF7B3DCA1D865_4,3AF15CB9E302A9E0D7DE22DE648EF7B3DCA1D865," Create an AutoAI experiment
-
-Follow these steps to create an AutoAI experiment and add sample data to your experiment:
-
-
-
-1. On the Assets tab from within your project, click New asset > Build machine learning models automatically.
-2. Specify a name and optional description for your experiment, then select Create.
-3. Select Associate a Machine Learning service instance to create a new service instance or associate an existing instance with your project. Click Reload to confirm your configuration.
-4. Click Create.
-5. To add the sample data, choose one of the these methods:
-
-
-
-* If you downloaded your file locally, upload the training data file, Daily_Min_Temperatures.csv, by clicking Browse and then following the prompts.
-
-* If you already uploaded your file to your project, click Select from project, then select the Data asset tab and choose Daily_Min_Temperatures.csv.
-
-
-
-
-
-"
-3AF15CB9E302A9E0D7DE22DE648EF7B3DCA1D865_5,3AF15CB9E302A9E0D7DE22DE648EF7B3DCA1D865," Configure the experiment
-
-Follow these steps to configure your univariate AutoAI time series experiment:
-
-
-
-1. Click Yes for the option to create a Time Series Forecast.
-2. Choose as prediction columns: Min_Temp.
-3. Choose as the date/time column: Date.
-
-
-4. Click Experiment settings to configure the experiment:
-
-
-
-1. In the Data source page, select the Time series tab.
-
-2. For this tutorial, accept the default value for Number of backtests (4), Gap length (0 steps), and Holdout length (20 steps).
-
-Note: The validation length changes if you change the value of any of the parameters: Number of backtests, Gap length, or Holdout length.
-
-c. Click Cancel to exit from the Experiment settings.
-
-
-
-
-5. Click Run experiment to begin the training.
-
-
-
-"
-3AF15CB9E302A9E0D7DE22DE648EF7B3DCA1D865_6,3AF15CB9E302A9E0D7DE22DE648EF7B3DCA1D865," Review experiment results
-
-The experiment takes several minutes to complete. As the experiment trains, a visualization shows the transformations that are used to create pipelines. Follow these steps to review experiment results and save the pipeline with the best performance.
-
-
-
-1. (Optional): Hover over any node in the visualization to get details on the transformation for a particular pipeline.
-
-
-2. (Optional): After the pipelines are listed on the leaderboard, click Pipeline comparison to see how they differ. For example:
-
-
-3. (Optional): When the training completes, the top three best performing pipelines are saved to the leaderboard. Click View discarded pipelines to review pipelines with the least performance.
-
-
-4. Select the pipeline with Rank 1 and click Save as to create your model. Then, select Create. This action saves the pipeline under the Models section in the Assets tab.
-
-
-
-"
-3AF15CB9E302A9E0D7DE22DE648EF7B3DCA1D865_7,3AF15CB9E302A9E0D7DE22DE648EF7B3DCA1D865," Deploy the trained model
-
-Before you can use your trained model to make predictions on new data, you must deploy the model. Follow these steps to promote your trained model to a deployment space:
-
-
-
-1. You can deploy the model from the model details page. To access the model details page, choose one of the these methods:
-
-
-
-* Click the model’s name in the notification that is displayed when you save the model.
-* Open the Assets page for the project that contains the model and click the model’s name in the Machine Learning Model section.
-
-
-
-2. Click Promote to Deployment Space, then select or create a deployment space where the model will be deployed.
-(Optional): To create a deployment space, follow these steps:
-
-
-
-1. From the Target space list, select Create a new deployment space.
-
-2. Enter a name for your deployment space.
-
-3. To associate a machine learning instance, go to Select machine learning service (optional) and select an instance from the list.
-
-4. Click Create.
-
-
-
-3. After you select or create your space, click Promote.
-4. Click the deployment space link from the notification.
-5. From the Assets tab of the deployment space:
-
-
-
-1. Hover over the model’s name and click the deployment icon .
-2. In the page that opens, complete the fields:
-
-
-
-1. Specify a name for the deployment.
-2. Select Online as the Deployment type.
-3. Click Create.
-
-
-
-
-
-
-
-After the deployment is complete, click the Deployments tab and select the deployment name to view the details page.
-
-"
-3AF15CB9E302A9E0D7DE22DE648EF7B3DCA1D865_8,3AF15CB9E302A9E0D7DE22DE648EF7B3DCA1D865," Test the deployed model
-
-Follow these steps to test the deployed model from the deployment details page:
-
-
-
-1. On the Test tab of the deployment details page, click the terminal icon  and enter the following JSON test data:
-
-{ ""input_data"": [ {
-
-""fields"":
-
-""Min_Temp""
-
-],
-
-""values"":
-
-7], 15]
-
-]
-
-} ] }
-
-Note: The test data replicates the data fields for the model, except the prediction field.
-2. Click Predict to predict the future minimum temperature.
-
-
-
-
-
-Parent topic:[Building a time series experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-timeseries.html)
-"
-46B746B11CF60709BCEA7F7C2C0AA1EC0ADA5BC9_0,46B746B11CF60709BCEA7F7C2C0AA1EC0ADA5BC9," Selecting an AutoAI model
-
-AutoAI automatically prepares data, applies algorithms, and attempts to build model pipelines that are best suited for your data and use case. Learn how to evaluate the model pipelines so that you can save one as a model.
-
-"
-46B746B11CF60709BCEA7F7C2C0AA1EC0ADA5BC9_1,46B746B11CF60709BCEA7F7C2C0AA1EC0ADA5BC9," Reviewing experiment results
-
-During AutoAI training, your data set is split to a training part and a hold-out part. The training part is used by the AutoAI training stages to generate the AutoAI model pipelines and cross-validation scores that are used to rank them. After AutoAI training, the hold-out part is used for the resulting pipeline model evaluation and computation of performance information such as ROC curves and confusion matrices, which are shown in the leaderboard. The training/hold-out split ratio is 90/10.
-
-As the training progresses, you are presented with a dynamic infographic and leaderboard. Hover over nodes in the infographic to explore the factors that pipelines share and their unique properties. For a guide to the data in the infographic, click the Legend tab in the information panel. Or, to see a different view of the pipeline creation, click the Experiment details tab of the notification panel, then click Switch views to view the progress map. In either view, click a pipeline node to view the associated pipeline in the leaderboard. The leaderboard contains model pipelines that are ranked by cross-validation scores.
-
-"
-46B746B11CF60709BCEA7F7C2C0AA1EC0ADA5BC9_2,46B746B11CF60709BCEA7F7C2C0AA1EC0ADA5BC9," View the pipeline transformations
-
-Hover over a node in the infographic to view the transformations for a pipeline. The sequence of data transformations consists of a pre-processing transformer and a sequence of data transformers, if feature engineering was performed for the pipeline. The algorithm is determined by model selection and optimization steps during AutoAI training.
-
-
-
-See [Implementation details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html) to review the technical details for creating the pipelines.
-
-"
-46B746B11CF60709BCEA7F7C2C0AA1EC0ADA5BC9_3,46B746B11CF60709BCEA7F7C2C0AA1EC0ADA5BC9," View the leaderboard
-
-Each model pipeline is scored for various metrics and then ranked. The default ranking metric for binary classification models is the area under the ROC curve. For multi-class classification models the default metric is accuracy. For regression models, the default metric is the root mean-squared error (RMSE). The highest-ranked pipelines display in a leaderboard, so you can view more information about them. The leaderboard also provides the option to save select model pipelines after you review them.
-
-
-
-You can evaluate the pipelines as follows:
-
-
-
-* Click a pipeline in the leaderboard to view more detail about the metrics and performance.
-* Click Compare to view how the top pipelines compare.
-* Sort the leaderboard by a different metric.
-
-
-
-
-
-"
-46B746B11CF60709BCEA7F7C2C0AA1EC0ADA5BC9_4,46B746B11CF60709BCEA7F7C2C0AA1EC0ADA5BC9," Viewing the confusion matrix
-
-One of the details you can view for a pipeline for a binary classification experiment is a Confusion matrix.
-
-The confusion matrix is based on the holdout data, which is the portion of the training dataset that is not used for training the model pipeline but only used to measure its performance on data that was not seen during training.
-
-In a binary classification problem with a positive class and a negative class, the confusion matrix summarizes the pipeline model’s positive and negative predictions in four quadrants depending on their correctness regarding the positive or negative class labels of the holdout data set.
-
-For example, the Bank sample experiment seeks to identify customers that take promotions that are offered to them. The confusion matrix for the top-ranked pipeline is:
-
-
-
-The positive class is ‘yes’ (meaning a user takes the promotion). You can see that the measurement of true negatives, that is, customers the model predicted correctly they would refuse their promotions, is high.
-
-Click the items in the navigation menu to view other details about the selected pipeline. For example, Feature importance shows which data features contribute most to your prediction output.
-
-"
-46B746B11CF60709BCEA7F7C2C0AA1EC0ADA5BC9_5,46B746B11CF60709BCEA7F7C2C0AA1EC0ADA5BC9," Save a pipeline as a model
-
-When you are satisfied with a pipeline, save it using one of these methods:
-
-
-
-* Click Save model to save the candidate pipeline as a model to your project so you can test and deploy it.
-* Click [Save as notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-notebook.html) to create and save an auto-generated notebook to your project. You can review the code or run the experiment in the notebook.
-
-
-
-"
-46B746B11CF60709BCEA7F7C2C0AA1EC0ADA5BC9_6,46B746B11CF60709BCEA7F7C2C0AA1EC0ADA5BC9," Next steps
-
-Promote the trained model to a deployment space so that you can test it with new data and generate predictions.
-
-"
-46B746B11CF60709BCEA7F7C2C0AA1EC0ADA5BC9_7,46B746B11CF60709BCEA7F7C2C0AA1EC0ADA5BC9," Learn more
-
-[AutoAI implementation details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html)
-
-Parent topic:[AutoAI overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html)
-"
-C926DFB3758881E6698F630E496F3817101E4176_0,C926DFB3758881E6698F630E496F3817101E4176," AutoAI tutorial: Build a Binary Classification Model
-
-This tutorial guides you through training a model to predict if a customer is likely to buy a tent from an outdoor equipment store.
-
-Create an AutoAI experiment to build a model that analyzes your data and selects the best model type and algorithms to produce, train, and optimize pipelines. After you review the pipelines, save one as a model, deploy it, and then test it to get a prediction.
-
-Watch this video to see a preview of the steps in this tutorial.
-
-Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform.
-
-This video provides a visual method to learn the concepts and tasks in this documentation.
-
-
-
-* Transcript
-
-Synchronize transcript with video
-
-
-
- Time Transcript
-
- 00:00 In this video, you will see how to build a binary classification model that assesses the likelihood that a customer of an outdoor equipment company will buy a tent.
- 00:11 This video uses a data set called ""GoSales"", which you'll find in the Gallery.
- 00:16 View the data set.
- 00:20 The feature columns are ""GENDER"", ""AGE"", ""MARITAL_STATUS"", and ""PROFESSION"" and contain the attributes on which the machine learning model will base predictions.
- 00:31 The label columns are ""IS_TENT"", ""PRODUCT_LINE"", and ""PURCHASE_AMOUNT"" and contain historical outcomes that the models could be trained to predict.
- 00:44 Add this data set to the ""Machine Learning"" project and then go to the project.
- 00:56 You'll find the GoSales.csv file with your other data assets.
- 01:02 Add to the project an ""AutoAI experiment"".
- 01:08 This project already has the Watson Machine Learning service associated.
- 01:13 If you haven't done that yet, first, watch the video showing how to run an AutoAI experiment based on a sample.
- 01:22 Just provide a name for the experiment and then click ""Create"".
- 01:30 The AutoAI experiment builder displays.
- 01:33 You first need to load the training data.
- 01:36 In this case, the data set will be from the project.
- 01:40 Select the GoSales.csv file from the list.
-"
-C926DFB3758881E6698F630E496F3817101E4176_1,C926DFB3758881E6698F630E496F3817101E4176," 01:45 AutoAI reads the data set and lists the columns found in the data set.
- 01:50 Since you want the model to predict the likelihood that a given customer will purchase a tent, select ""IS_TENT"" as the column to predict.
- 01:59 Now, edit the experiment settings.
- 02:03 First, look at the settings for the data source.
- 02:06 If you have a large data set, you can run the experiment on a subsample of rows and you can configure how much of the data will be used for training and how much will be used for evaluation.
- 02:19 The default is a 90%/10% split, where 10% of the data is reserved for evaluation.
- 02:27 You can also select which columns from the data set to include when running the experiment.
- 02:35 On the ""Prediction"" panel, you can select a prediction type.
- 02:39 In this case, AutoAI analyzed your data and determined that the ""IS_TENT"" column contains true-false information, making this data suitable for a ""Binary classification"" model.
- 02:52 The positive class is ""TRUE"" and the recommended metric is ""Accuracy"".
- 03:01 If you'd like, you can choose specific algorithms to consider for this experiment and the number of top algorithms for AutoAI to test, which determines the number of pipelines generated.
- 03:16 On the ""Runtime"" panel, you can review other details about the experiment.
- 03:21 In this case, accepting the default settings makes the most sense.
- 03:25 Now, run the experiment.
- 03:28 AutoAI first loads the data set, then splits the data into training data and holdout data.
- 03:37 Then wait, as the ""Pipeline leaderboard"" fills in to show the generated pipelines using different estimators, such as XGBoost classifier, or enhancements such as hyperparameter optimization and feature engineering, with the pipelines ranked based on the accuracy metric.
- 03:58 Hyperparameter optimization is a mechanism for automatically exploring a search space for potential hyperparameters, building a series of models and comparing the models using metrics of interest.
- 04:10 Feature engineering attempts to transform the raw data into the combination of features that best represents the problem to achieve the most accurate prediction.
-"
-C926DFB3758881E6698F630E496F3817101E4176_2,C926DFB3758881E6698F630E496F3817101E4176," 04:21 Okay, the run has completed.
- 04:24 By default, you'll see the ""Relationship map"".
- 04:28 But you can swap views to see the ""Progress map"".
- 04:32 You may want to start with comparing the pipelines.
- 04:36 This chart provides metrics for the eight pipelines, viewed by cross validation score or by holdout score.
- 04:46 You can see the pipelines ranked based on other metrics, such as average precision.
- 04:55 Back on the ""Experiment summary"" tab, expand a pipeline to view the model evaluation measures and ROC curve.
- 05:03 During AutoAI training, your data set is split into two parts: training data and holdout data.
- 05:11 The training data is used by the AutoAI training stages to generate the model pipelines, and cross validation scores are used to rank them.
- 05:21 After training, the holdout data is used for the resulting pipeline model evaluation and computation of performance information, such as ROC curves and confusion matrices.
- 05:33 You can view an individual pipeline to see more details in addition to the confusion matrix, precision recall curve, model information, and feature importance.
- 05:46 This pipeline had the highest ranking, so you can save this as a machine learning model.
- 05:52 Just accept the defaults and save the model.
- 05:56 Now that you've trained the model, you're ready to view the model and deploy it.
- 06:04 The ""Overview"" tab shows a model summary and the input schema.
- 06:09 To deploy the model, you'll need to promote it to a deployment space.
- 06:15 Select the deployment space from the list, add a description for the model, and click ""Promote"".
- 06:24 Use the link to go to the deployment space.
- 06:28 Here's the model you just created, which you can now deploy.
- 06:33 In this case, it will be an online deployment.
- 06:37 Just provide a name for the deployment and click ""Create"".
- 06:41 Then wait, while the model is deployed.
- 06:44 When the model deployment is complete, view the deployment.
- 06:49 On the ""API reference"" tab, you'll find the scoring endpoint for future reference.
-"
-C926DFB3758881E6698F630E496F3817101E4176_3,C926DFB3758881E6698F630E496F3817101E4176," 06:56 You'll also find code snippets for various programming languages to utilize this deployment from your application.
- 07:05 On the ""Test"" tab, you can test the model prediction.
- 07:09 You can either enter test input data or paste JSON input data, and click ""Predict"".
- 07:20 This shows that there's a very high probability that the first customer will buy a tent and a very high probability that the second customer will not buy a tent.
- 07:33 And back in the project, you'll find the AutoAI experiment and the model on the ""Assets"" tab.
- 07:44 Find more videos in the Cloud Pak for Data as a Service documentation.
-
-
-
-
-
-"
-C926DFB3758881E6698F630E496F3817101E4176_4,C926DFB3758881E6698F630E496F3817101E4176," Overview of the data sets
-
-The sample data is structured (in rows and columns) and saved in a .csv file format.
-
-You can view the sample data file in a text editor or spreadsheet program:
-
-
-"
-C926DFB3758881E6698F630E496F3817101E4176_5,C926DFB3758881E6698F630E496F3817101E4176," What do you want to predict?
-
-Choose the column whose values that your model predicts.
-
-In this tutorial, the model predicts the values of the IS_TENT column:
-
-
-
-* IS_TENT: Whether the customer bought a tent
-
-
-
-The model that is built in this tutorial predicts whether a customer is likely to purchase a tent.
-
-"
-C926DFB3758881E6698F630E496F3817101E4176_6,C926DFB3758881E6698F630E496F3817101E4176," Tasks overview
-
-This tutorial presents the basic steps for building and training a machine learning model with AutoAI:
-
-
-
-1. [Create a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai_example_binary_classifier.html?context=cdpaas&locale=enstep0)
-2. [Create an AutoAI experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai_example_binary_classifier.html?context=cdpaas&locale=enstep1)
-3. [Training the experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai_example_binary_classifier.html?context=cdpaas&locale=enstep2)
-4. [Deploy the trained model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai_example_binary_classifier.html?context=cdpaas&locale=enstep3)
-5. [Test the deployed model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai_example_binary_classifier.html?context=cdpaas&locale=enstep4)
-6. [Creating a batch to score the model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai_example_binary_classifier.html?context=cdpaas&locale=enstep5)
-
-
-
-"
-C926DFB3758881E6698F630E496F3817101E4176_7,C926DFB3758881E6698F630E496F3817101E4176," Task 1: Create a project
-
-
-
-1. From the Samples, download the [GoSales](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/aa07a773f71cf1172a349f33e2028e4e?context=wx) data set file to your local computer.
-2. From the Projects page, to create a new project, select New Project.
-a. Select Create an empty project.
-b. Include your project name.
-c. Click Create.
-
-
-
-"
-C926DFB3758881E6698F630E496F3817101E4176_8,C926DFB3758881E6698F630E496F3817101E4176," Task 2: Create an AutoAI experiment
-
-
-
-1. On the Assets tab from within your project, click New asset > Build machine learning models automatically.
-2. Specify a name and optional description for your new experiment.
-3. Select the Associate a Machine Learning service instance link to associate the Watson Machine Learning Server instance with your project. Click Reload to confirm your configuration.
-4. To add a data source, you can choose one of these options:
-a. If you downloaded your file locally, upload the training data file, GoSales.csv, from your local computer. Drag the file onto the data panel or click browse and follow the prompts.
-b. If you already uploaded your file to your project, click select from project, then select the data asset tab and choose GoSales.csv.
-
-
-
-"
-C926DFB3758881E6698F630E496F3817101E4176_9,C926DFB3758881E6698F630E496F3817101E4176," Task 3: Training the experiment
-
-
-
-1. In Configuration details, select No for the option to create a Time Series Forecast.
-2. Choose IS_TENT as the column to predict. AutoAI analyzes your data and determines that the IS_TENT column contains True and False information, making this data suitable for a binary classification model. The default metric for a binary classification is ROC/AUC.
-
-
-3. Click Run experiment. As the model trains, an infographic shows the process of building the pipelines.
-
-Note:You might see slight differences in results based on the Cloud Pak for Data platform and version you use.
-
-
-
-For a list of algorithms or estimators that are available with each machine learning technique in AutoAI, see [AutoAI implementation detail](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-details.html).
-4. When all the pipelines are created, you can compare their accuracy on the Pipeline leaderboard.
-
-
-5. Select the pipeline with Rank 1 and click Save as to create your model. Then, select Create. This option saves the pipeline under the Models section in the Assets tab.
-
-
-
-"
-C926DFB3758881E6698F630E496F3817101E4176_10,C926DFB3758881E6698F630E496F3817101E4176," Task 4: Deploy the trained model
-
-
-
-1. You can deploy the model from the model details page. You can access the model details page in one of these ways:
-
-
-
-1. Clicking the model’s name in the notification displayed when you save the model.
-2. Open the Assets tab for the project, select the Models section and select the model’s name.
-
-
-
-2. Click Promote to Deployment Space then select or create the space where the model will be deployed.
-
-
-
-1. To create a deployment space:
-
-
-
-1. Enter a name.
-2. Associate it with a Machine Learning Service.
-3. Select Create.
-
-
-
-
-
-3. After you create your deployment space or select an existing one, select Promote.
-4. Click the deployment space link from the notification.
-5. From the Assets tab of the deployment space:
-
-
-
-1. Hover over the model’s name and click the deployment icon .
-
-
-
-1. In the page that opens, complete the fields:
-
-
-
-1. Select Online as the Deployment type.
-2. Specify a name for the deployment.
-3. Click Create.
-
-
-
-
-
-
-
-
-
-
-
-After the deployment is complete, click Deployments and select the deployment name to view the details page.
-
-"
-C926DFB3758881E6698F630E496F3817101E4176_11,C926DFB3758881E6698F630E496F3817101E4176," Task 5: Test the deployed model
-
-You can test the deployed model from the deployment details page:
-
-
-
-1. On the Test tab of the deployment details page, complete the form with test values or enter JSON test data by clicking the terminal icon  to provide the following JSON input data.
-
-{""input_data"":[{
-
-""fields"":
-
-""GENDER"",""AGE"",""MARITAL_STATUS"",""PROFESSION"",""PRODUCT_LINE"",""PURCHASE_AMOUNT""],
-
-""values"": ""M"",27,""Single"", ""Professional"",""Camping Equipment"",144.78]]
-
-}]}
-
-Note: The test data replicates the data fields for the model, except for the prediction field.
-2. Click Predict to predict whether a customer with the entered attributes is likely to buy a tent. The resulting prediction indicates that a customer with the attributes entered has a high probability of purchasing a tent.
-
-
-
-
-
-"
-C926DFB3758881E6698F630E496F3817101E4176_12,C926DFB3758881E6698F630E496F3817101E4176," Task 6: Creating a batch job to score the model
-
-For a batch deployment, you provide input data, also known as the model payload, in a CSV file. The data must be structured like the training data, with the same column headers. The batch job processes each row of data and creates a corresponding prediction.
-
-In a real scenario, you would submit new data to the model to get a score. However, this tutorial uses the same training data GoSales-updated.csv that you downloaded as part of the tutorial setup. Ensure that you delete the IS_TENT column and save the file before you upload it to the batch job. When deploying a model, you can add the payload data to a project, upload it to a space, or link to it in a storage repository such as a Cloud Object Storage bucket. For this tutorial, upload the file directly to the deployment space.
-
-"
-C926DFB3758881E6698F630E496F3817101E4176_13,C926DFB3758881E6698F630E496F3817101E4176," Step 1: Add data to space
-
-From the Assets page of the deployment space:
-
-
-
-1. Click Add to space then choose Data.
-2. Upload the file GoSales-updated.csv file that you saved locally.
-
-
-
-"
-C926DFB3758881E6698F630E496F3817101E4176_14,C926DFB3758881E6698F630E496F3817101E4176," Step 2: Create the batch deployment
-
-Now you can define the batch deployment.
-
-
-
-1. Click the deployment icon next to the model’s name.
-2. Enter a name a name for the deployment.
-
-
-
-1. Select Batch as the Deployment type.
-2. Choose the smallest hardware specification.
-3. Click Create.
-
-
-
-
-
-"
-C926DFB3758881E6698F630E496F3817101E4176_15,C926DFB3758881E6698F630E496F3817101E4176," Step 3: Create the batch job
-
-The batch job runs the deployment. To create the job, you must specify the input data and the name for the output file. You can set up a job to run on a schedule or run immediately.
-
-
-
-1. Click New job.
-2. Specify a name for the job
-3. Configure to the smallest hardware specification
-4. (Optional): To set a schedule and receive notifications.
-5. Upload the input file: GoSales-updated.csv
-6. Name the output file: GoSales-output.csv
-7. Review and click Create to run the job.
-
-
-
-"
-C926DFB3758881E6698F630E496F3817101E4176_16,C926DFB3758881E6698F630E496F3817101E4176," Step 4: View the output
-
-When the deployment status changes to Deployed, return to the Assets page for the deployment space. The file GoSales-output.csv was created and added to your assets list.
-
-Click the download icon next to the output file and open the file in an editor. You can review the prediction results for the customer information that is submitted for batch processing.
-
-For each case, the prediction that is returned indicates the confidence score of whether a customer will buy a tent.
-
-"
-C926DFB3758881E6698F630E496F3817101E4176_17,C926DFB3758881E6698F630E496F3817101E4176," Next steps
-
-[Building an AutoAI experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-build.html)
-
-Parent topic:[AutoAI overview](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html)
-"
-7BF4B8F1F49406EEC43BE3B7350092F9165B0757_0,7BF4B8F1F49406EEC43BE3B7350092F9165B0757," SPSS predictive analytics classification and regression algorithms in notebooks
-
-You can use generalized linear model, linear regression, linear support vector machine, random trees, or CHAID SPSS predictive analytics algorithms in notebooks.
-
-"
-7BF4B8F1F49406EEC43BE3B7350092F9165B0757_1,7BF4B8F1F49406EEC43BE3B7350092F9165B0757," Generalized Linear Model
-
-The Generalized Linear Model (GLE) is a commonly used analytical algorithm for different types of data. It covers not only widely used statistical models, such as linear regression for normally distributed targets, logistic models for binary or multinomial targets, and log linear models for count data, but also covers many useful statistical models via its very general model formulation. In addition to building the model, Generalized Linear Model provides other useful features such as variable selection, automatic selection of distribution and link function, and model evaluation statistics. This model has options for regularization, such as LASSO, ridge regression, elastic net, etc., and is also capable of handling very wide data.
-
-For more details about how to choose distribution and link function, see Distribution and Link Function Combination.
-
-"
-7BF4B8F1F49406EEC43BE3B7350092F9165B0757_2,7BF4B8F1F49406EEC43BE3B7350092F9165B0757,"Example code 1:
-
-This example shows a GLE setting with specified distribution and link function, specified effects, intercept, conducting ROC curve, and printing correlation matrix. This scenario builds a model, then scores the model.
-
-Python example:
-
-from spss.ml.classificationandregression.generalizedlinear import GeneralizedLinear
-from spss.ml.classificationandregression.params.effect import Effect
-
-gle1 = GeneralizedLinear().
-setTargetField(""Work_experience"").
-setInputFieldList([""Beginning_salary"", ""Sex_of_employee"", ""Educational_level"", ""Minority_classification"", ""Current_salary""]).
-setEffects([
-Effect(fields=""Beginning_salary""], nestingLevels=0]),
-Effect(fields=""Sex_of_employee""], nestingLevels=0]),
-Effect(fields=""Educational_level""], nestingLevels=0]),
-Effect(fields=""Current_salary""], nestingLevels=0]),
-Effect(fields=""Sex_of_employee"", ""Educational_level""], nestingLevels=0, 0])]).
-setIntercept(True).
-setDistribution(""NORMAL"").
-setLinkFunction(""LOG"").
-setAnalysisType(""BOTH"").
-setConductRocCurve(True)
-
-gleModel1 = gle1.fit(data)
-PMML = gleModel1.toPMML()
-statXML = gleModel1.statXML()
-predictions1 = gleModel1.transform(data)
-predictions1.show()
-
-"
-7BF4B8F1F49406EEC43BE3B7350092F9165B0757_3,7BF4B8F1F49406EEC43BE3B7350092F9165B0757,"Example code 2:
-
-This example shows a GLE setting with unspecified distribution and link function, and variable selection using the forward stepwise method. This scenario uses the forward stepwise method to select distribution, link function and effects, then builds and scores the model.
-
-Python example:
-
-from spss.ml.classificationandregression.generalizedlinear import GeneralizedLinear
-from spss.ml.classificationandregression.params.effect import Effect
-
-gle2 = GeneralizedLinear().
-setTargetField(""Work_experience"").
-setInputFieldList([""Beginning_salary"", ""Sex_of_employee"", ""Educational_level"", ""Minority_classification"", ""Current_salary""]).
-setEffects([
-Effect(fields=""Beginning_salary""], nestingLevels=0]),
-Effect(fields=""Sex_of_employee""], nestingLevels=0]),
-Effect(fields=""Educational_level""], nestingLevels=0]),
-Effect(fields=""Current_salary""], nestingLevels=0])]).
-setIntercept(True).
-setDistribution(""UNKNOWN"").
-setLinkFunction(""UNKNOWN"").
-setAnalysisType(""BOTH"").
-setUseVariableSelection(True).
-setVariableSelectionMethod(""FORWARD_STEPWISE"")
-
-gleModel2 = gle2.fit(data)
-PMML = gleModel2.toPMML()
-statXML = gleModel2.statXML()
-predictions2 = gleModel2.transform(data)
-predictions2.show()
-
-"
-7BF4B8F1F49406EEC43BE3B7350092F9165B0757_4,7BF4B8F1F49406EEC43BE3B7350092F9165B0757,"Example code 3:
-
-This example shows a GLE setting with unspecified distribution, specified link function, and variable selection using the LASSO method, with two-way interaction detection and automatic penalty parameter selection. This scenario detects two-way interaction for effects, then uses the LASSO method to select distribution and effects using automatic penalty parameter selection, then builds and scores the model.
-
-Python example:
-
-from spss.ml.classificationandregression.generalizedlinear import GeneralizedLinear
-from spss.ml.classificationandregression.params.effect import Effect
-
-gle3 = GeneralizedLinear().
-setTargetField(""Work_experience"").
-setInputFieldList([""Beginning_salary"", ""Sex_of_employee"", ""Educational_level"", ""Minority_classification"", ""Current_salary""]).
-setEffects([
-Effect(fields=""Beginning_salary""], nestingLevels=0]),
-Effect(fields=""Sex_of_employee""], nestingLevels=0]),
-Effect(fields=""Educational_level""], nestingLevels=0]),
-Effect(fields=""Current_salary""], nestingLevels=0])]).
-setIntercept(True).
-setDistribution(""UNKNOWN"").
-setLinkFunction(""LOG"").
-setAnalysisType(""BOTH"").
-setDetectTwoWayInteraction(True).
-setUseVariableSelection(True).
-setVariableSelectionMethod(""LASSO"").
-setUserSpecPenaltyParams(False)
-
-gleModel3 = gle3.fit(data)
-PMML = gleModel3.toPMML()
-statXML = gleModel3.statXML()
-predictions3 = gleModel3.transform(data)
-predictions3.show()
-
-"
-7BF4B8F1F49406EEC43BE3B7350092F9165B0757_5,7BF4B8F1F49406EEC43BE3B7350092F9165B0757," Linear Regression
-
-The linear regression model analyzes the predictive relationship between a continuous target and one or more predictors which can be continuous or categorical.
-
-Features of the linear regression model include automatic interaction effect detection, forward stepwise model selection, diagnostic checking, and unusual category detection based on Estimated Marginal Means (EMMEANS).
-
-"
-7BF4B8F1F49406EEC43BE3B7350092F9165B0757_6,7BF4B8F1F49406EEC43BE3B7350092F9165B0757,"Example code:
-
-Python example:
-
-from spss.ml.classificationandregression.linearregression import LinearRegression
-
-le = LinearRegression().
-setTargetField(""target"").
-setInputFieldList([""predictor1"", ""predictor2"", ""predictorn""]).
-setDetectTwoWayInteraction(True).
-setVarSelectionMethod(""forwardStepwise"")
-
-leModel = le.fit(data)
-predictions = leModel.transform(data)
-predictions.show()
-
-"
-7BF4B8F1F49406EEC43BE3B7350092F9165B0757_7,7BF4B8F1F49406EEC43BE3B7350092F9165B0757," Linear Support Vector Machine
-
-The Linear Support Vector Machine (LSVM) provides a supervised learning method that generates input-output mapping functions from a set of labeled training data. The mapping function can be either a classification function or a regression function. LSVM is designed to resolve large-scale problems in terms of the number of records and the number of variables (parameters). Its feature space is the same as the input space of the problem, and it can handle sparse data where the average number of non-zero elements in one record is small.
-
-"
-7BF4B8F1F49406EEC43BE3B7350092F9165B0757_8,7BF4B8F1F49406EEC43BE3B7350092F9165B0757,"Example code:
-
-Python example:
-
-from spss.ml.classificationandregression.linearsupportvectormachine import LinearSupportVectorMachine
-
-lsvm = LinearSupportVectorMachine().
-setTargetField(""BareNuc"").
-setInputFieldList([""Clump"", ""UnifSize"", ""UnifShape"", ""MargAdh"", ""SingEpiSize"", ""BlandChrom"", ""NormNucl"", ""Mit"", ""Class""]).
-setPenaltyFunction(""L2"")
-
-lsvmModel = lsvm.fit(df)
-predictions = lsvmModel.transform(data)
-predictions.show()
-
-"
-7BF4B8F1F49406EEC43BE3B7350092F9165B0757_9,7BF4B8F1F49406EEC43BE3B7350092F9165B0757," Random Trees
-
-Random Trees is a powerful approach for generating strong (accurate) predictive models. It's comparable and sometimes better than other state-of-the-art methods for classification or regression problems.
-
-Random Trees is an ensemble model consisting of multiple CART-like trees. Each tree grows on a bootstrap sample which is obtained by sampling the original data cases with replacement. Moreover, during the tree growth, for each node the best split variable is selected from a specified smaller number of variables that are drawn randomly from the full set of variables. Each tree grows to the largest extent possible, and there is no pruning. In scoring, Random Trees combines individual tree scores by majority voting (for classification) or average (for regression).
-
-"
-7BF4B8F1F49406EEC43BE3B7350092F9165B0757_10,7BF4B8F1F49406EEC43BE3B7350092F9165B0757,"Example code:
-
-Python example:
-
-from spss.ml.classificationandregression.ensemble.randomtrees import RandomTrees
-
- Random trees required a ""target"" field and some input fields. If ""target"" is continuous, then regression trees will be generate else classification .
- You can use the SPSS Attribute or Spark ML Attribute to indicate the field to categorical or continuous.
-randomTrees = RandomTrees().
-setTargetField(""target"").
-setInputFieldList([""feature1"", ""feature2"", ""feature3""]).
-numTrees(10).
-setMaxTreeDepth(5)
-
-randomTreesModel = randomTrees.fit(df)
-predictions = randomTreesModel.transform(scoreDF)
-predictions.show()
-
-"
-7BF4B8F1F49406EEC43BE3B7350092F9165B0757_11,7BF4B8F1F49406EEC43BE3B7350092F9165B0757," CHAID
-
-CHAID, or Chi-squared Automatic Interaction Detection, is a classification method for building decision trees by using chi-square statistics to identify optimal splits. An extension applicable to regression problems is also available.
-
-CHAID first examines the crosstabulations between each of the input fields and the target, and tests for significance using a chi-square independence test. If more than one of these relations is statistically significant, CHAID will select the input field that's the most significant (smallest p value). If an input has more than two categories, these are compared, and categories that show no differences in the outcome are collapsed together. This is done by successively joining the pair of categories showing the least significant difference. This category-merging process stops when all remaining categories differ at the specified testing level. For nominal input fields, any categories can be merged; for an ordinal set, only contiguous categories can be merged. Continuous input fields other than the target can't be used directly; they must be binned into ordinal fields first.
-
-Exhaustive CHAID is a modification of CHAID that does a more thorough job of examining all possible splits for each predictor but takes longer to compute.
-
-"
-7BF4B8F1F49406EEC43BE3B7350092F9165B0757_12,7BF4B8F1F49406EEC43BE3B7350092F9165B0757,"Example code:
-
-Python example:
-
-from spss.ml.classificationandregression.tree.chaid import CHAID
-
-chaid = CHAID().
-setTargetField(""salary"").
-setInputFieldList([""educ"", ""jobcat"", ""gender""])
-
-chaidModel = chaid.fit(data)
-pmmlStr = chaidModel.toPMML()
-statxmlStr = chaidModel.statXML()
-
-predictions = chaidModel.transform(data)
-predictions.show()
-
-Parent topic:[SPSS predictive analytics algorithms](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-algorithms.html)
-"
-CE1B598A354C454F2D201039A2BB6D69BABBF840_0,CE1B598A354C454F2D201039A2BB6D69BABBF840," SPSS predictive analytics clustering algorithms in notebooks
-
-You can use the scalable Two-Step or the Cluster model evaluation algorithm to cluster data in notebooks.
-
-"
-CE1B598A354C454F2D201039A2BB6D69BABBF840_1,CE1B598A354C454F2D201039A2BB6D69BABBF840," Two-Step Cluster
-
-Scalable Two-Step is based on the familiar two-step clustering algorithm, but extends both its functionality and performance in several directions.
-
-First, it can effectively work with large and distributed data supported by Spark that provides the Map-Reduce computing paradigm.
-
-Second, the algorithm provides mechanisms for selecting the most relevant features for clustering the given data, as well as detecting rare outlier points. Moreover, it provides an enhanced set of evaluation and diagnostic features for enabling insight.
-
-The two-step clustering algorithm first performs a pre-clustering step by scanning the entire dataset and storing the dense regions of data cases in terms of summary statistics called cluster features. The cluster features are stored in memory in a data structure called the CF-tree. Finally, an agglomerative hierarchical clustering algorithm is applied to cluster the set of cluster features.
-
-"
-CE1B598A354C454F2D201039A2BB6D69BABBF840_2,CE1B598A354C454F2D201039A2BB6D69BABBF840,"Python example code:
-
-from spss.ml.clustering.twostep import TwoStep
-
-cluster = TwoStep().
-setInputFieldList([""region"", ""happy"", ""age""]).
-setDistMeasure(""LOGLIKELIHOOD"").
-setFeatureImportanceMethod(""CRITERION"").
-setAutoClustering(True)
-
-clusterModel = cluster.fit(data)
-predictions = clusterModel.transform(data)
-predictions.show()
-
-"
-CE1B598A354C454F2D201039A2BB6D69BABBF840_3,CE1B598A354C454F2D201039A2BB6D69BABBF840," Cluster model evaluation
-
-Cluster model evaluation (CME) aims to interpret cluster models and discover useful insights based on various evaluation measures.
-
-It's a post-modeling analysis that's generic and independent from any types of cluster models.
-
-"
-CE1B598A354C454F2D201039A2BB6D69BABBF840_4,CE1B598A354C454F2D201039A2BB6D69BABBF840,"Python example code:
-
-from spss.ml.clustering.twostep import TwoStep
-
-cluster = TwoStep().
-setInputFieldList([""region"", ""happy"", ""age""]).
-setDistMeasure(""LOGLIKELIHOOD"").
-setFeatureImportanceMethod(""CRITERION"").
-setAutoClustering(True)
-
-clusterModel = cluster.fit(data)
-predictions = clusterModel.transform(data)
-predictions.show()
-
-Parent topic:[SPSS predictive analytics algorithms](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-algorithms.html)
-"
-AD5B9969C7557BFC4CFBB32CCA67F40C52FF824B_0,AD5B9969C7557BFC4CFBB32CCA67F40C52FF824B," Coding and running a notebook
-
-After you created a notebook to use in the notebook editor, you need to add libraries, code, and data so you can do your analysis.
-
-To develop analytic applications in a notebook, follow these general steps:
-
-
-
-1. Open the notebook in edit mode: click the edit icon (). If the notebook is locked, you might be able to [unlock and edit](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-assets.htmleditassets) it.
-2. If the notebook is marked as being untrusted, tell the Jupyter service to trust your notebook content and allow executing all cells by:
-
-
-
-1. Clicking Not Trusted in the upper right corner of the notebook.
-2. Clicking Trust to execute all cells.
-
-
-
-3. Determine if the environment template that is associated with the notebook has the correct hardware size for the anticipated analysis processing throughput.
-
-
-
-1. Check the size of the environment by clicking the View notebook info icon () from the notebook toolbar and selecting the Environments page.
-2. If you need to change the environment, select another one from the list or, if none fits your needs, create your own environment template. See [Creating emvironment template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-customize-env-definition.html).
-
-If you create an environment template, you can add your own libraries to the template that are preinstalled at the time the environment is started. See [Customize your environment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/customize-envs.html) for Python and R.
-
-
-
-"
-AD5B9969C7557BFC4CFBB32CCA67F40C52FF824B_1,AD5B9969C7557BFC4CFBB32CCA67F40C52FF824B,"4. Import preinstalled libraries. See [Libraries and scripts for notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/libraries.html).
-5. Load and access data. You can access data from project assets by running code that is generated for you when you select the asset or programmatically by using preinstalled library functions. See [Load and access data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html).
-6. Prepare and analyze the data with the appropriate methods:
-
-
-
-* [Build Watson Machine Learning models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-overview.html)
-* [Build Decision Optimization models](https://dataplatform.cloud.ibm.com/docs/content/DO/DOWS-Cloud_home.html)
-* [Use Watson Natural Language Processing](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp.html)
-* [Use SPSS predictive analytics algorithms](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-algorithms.html)
-* [Use geospatial location analysis methods](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/geo-spatial-lib.html)
-* [Use Data skipping for Spark SQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-skipping-spark-sql.html)
-* [Apply Parquet encryption](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/parquet-encryption.html)
-"
-AD5B9969C7557BFC4CFBB32CCA67F40C52FF824B_2,AD5B9969C7557BFC4CFBB32CCA67F40C52FF824B,"* [Use Time series analysis methods](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-lib.html)
-
-
-
-7. If necessary, schedule the notebook to run at a regular time. See [Schedule a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-nb-editor.html).
-
-
-
-1. Monitor the status of your job runs from the project's Jobs page.
-2. Click your job to open the job's details page to view the runs for your job and the status of each run. If a run failed, you can select the run and view the log tail or download the entire log file to troubleshoot the run.
-
-
-
-8. When you're not actively working on the notebook, click File > Stop Kernel to stop the notebook kernel and free up resources.
-9. Stop the active runtime (and unnecessary capacity unit consumption) if no other notebook kernels are active under Tool runtimes on the Environments page on the Manage tab of your project.
-
-
-
-Video disclaimer: Some minor steps and graphical elements in these videos may differ from your deployment.
-
-Watch this short video to see how to create a Jupyter notebook and custom environment.
-
-This video provides a visual method to learn the concepts and tasks in this documentation.
-
-Watch this short video to see how to run basic SQL queries on Db2 Warehouse data in a Python notebook.
-
-This video provides a visual method to learn the concepts and tasks in this documentation.
-
-"
-AD5B9969C7557BFC4CFBB32CCA67F40C52FF824B_3,AD5B9969C7557BFC4CFBB32CCA67F40C52FF824B," Learn more
-
-
-
-* [Markdown cheatsheet](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/markd-jupyter.html)
-* [Notebook interface](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/parts-of-a-notebook.html)
-
-
-
-
-
-* [Stop active runtimes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.htmlstop-active-runtimes)
-* [Load and access data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html)
-* [Schedule a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-nb-editor.html)
-
-
-
-Parent topic:[Jupyter Notebook editor](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-editor.html)
-"
-4562C632E1CFAAEDDE1374B29BDD1A1CCE5ECE86_0,4562C632E1CFAAEDDE1374B29BDD1A1CCE5ECE86," Deployment space collaborator roles and permissions
-
-When you add collaborators to a deployment space, you can specify which actions they can do by assigning them access levels. Learn how to add collaborators to your deployment spaces and the differences between access levels.
-
-"
-4562C632E1CFAAEDDE1374B29BDD1A1CCE5ECE86_1,4562C632E1CFAAEDDE1374B29BDD1A1CCE5ECE86," User roles and permissions in deployment spaces
-
-You can assign the following roles to collaborators based on the access level that you want to provide:
-
-
-
-* Admin: Administrators can control your deployment space assets, users, and settings.
-* Editor: Editors can control your space assets.
-* Viewer: Viewers can view your deployment space.
-
-
-
-The following table provides details on permissions based on user access level:
-
-
-
-Deployment space permissions
-
- Enabled permission Viewer Editor Admin
-
- View assets and deployments ✓ ✓ ✓
- Comment ✓ ✓ ✓
- Monitor ✓ ✓ ✓
- Test model deployment API ✓ ✓ ✓
- Find implementation details ✓ ✓ ✓
- Configure deployments ✓ ✓
- Batch deployment score ✓ ✓
- Online deployment score ✓ ✓ ✓
- Update assets ✓ ✓
- Import assets ✓ ✓
- Download assets ✓ ✓
- Deploy assets ✓ ✓
- Remove assets ✓ ✓
- Remove deployments ✓ ✓
- View spaces/members ✓ ✓ ✓
- Delete space ✓
-
-
-
-"
-4562C632E1CFAAEDDE1374B29BDD1A1CCE5ECE86_2,4562C632E1CFAAEDDE1374B29BDD1A1CCE5ECE86," Service IDs
-
-You can create service IDs in IBM Cloud to enable an application outside of IBM Cloud access to your IBM Cloud services. Service IDs are not tied to a specific user. Therefore, if a user leaves an organization and is deleted from the account, the service ID remains. Thus, your application or service stays up and running. For more information, see [Creating and working with service IDs](https://cloud.ibm.com/docs/account?topic=account-serviceids).
-
-To learn more about assigning space access by using a service ID, see [Adding collaborators to your deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/collaborator-permissions-wml.html?context=cdpaas&locale=enadding-collaborators).
-
-"
-4562C632E1CFAAEDDE1374B29BDD1A1CCE5ECE86_3,4562C632E1CFAAEDDE1374B29BDD1A1CCE5ECE86," Adding collaborators to your deployment space
-
-"
-4562C632E1CFAAEDDE1374B29BDD1A1CCE5ECE86_4,4562C632E1CFAAEDDE1374B29BDD1A1CCE5ECE86,"Prerequisites:
-All users in your IBM Cloud account with the Admin IAM platform access role for all IAM enabled services can manage space collaborators. For more information, see [IAM Platform access roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/roles.htmlplatform).
-
-"
-4562C632E1CFAAEDDE1374B29BDD1A1CCE5ECE86_5,4562C632E1CFAAEDDE1374B29BDD1A1CCE5ECE86,"Restriction:
-You can add collaborators to your deployment space only if they are a part of your organization and if they provisioned Watson Studio.
-
-To add one or more collaborators to a deployment space:
-
-
-
-1. From your deployment space, go to the Manage tab and click Access Control.
-2. Click Add collaborators and choose one of the following options:
-
-
-
-* If you want to add a user, click Add users. Assign a role that applies to the user.
-* If you want to add pre-defined user groups, click . Assign a role that applies to all members of the group.
-
-
-
-3. Add the user or user groups that you want to have the same access level and click Add.
-
-
-
-Parent topic:[Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html)
-"
-BF75F233FDFFDCA8A25D191E1DF4DF7F51E30823_0,BF75F233FDFFDCA8A25D191E1DF4DF7F51E30823," Creating environment templates
-
-You can create custom environment templates if you do not want to use the default environments provided by Watson Studio.
-
-Required permissions : To create an environment template, you must have the Admin or Editor role within the project.
-
-You can create environment templates for the following types of assets:
-
-
-
-* Notebooks in the Notebook editor
-* Notebooks in RStudio
-* Modeler flows in the SPSS Modeler
-* Data Refinery flows
-* Jobs that run operational assets, such as Data Refinery flows, or Notebooks in a project
-
-
-
-Note:
-
-To create an environment template:
-
-
-
-1. On the Manage tab of your project, select the Environments page and click New template under Templates.
-2. Enter a name and a description.
-3. Select one of the following engine types:
-
-
-
-* Default: Select for Python, R, and RStudio runtimes for Watson Studio.
-* Spark: Select for Spark with Python or R runtimes for Watson Studio.
-* GPU: Select for more computing power to improve model training performance for Watson Studio.
-
-
-
-4. Select the hardware configuration from the Hardware configuration drop-down menu.
-5. Select the software version if you selected a runtime of ""Default,"" ""Spark,"" or ""GPU.""
-
-
-
-"
-BF75F233FDFFDCA8A25D191E1DF4DF7F51E30823_1,BF75F233FDFFDCA8A25D191E1DF4DF7F51E30823," Where to find your custom environment template
-
-Your new environment template is listed under Templates on the Environments page in the Manage tab of your project. From this page, you can:
-
-
-
-* Check which runtimes are active
-* Update custom environment templates
-* Track the number of capacity units per hour that your runtimes have consumed so far
-* Stop active runtimes.
-
-
-
-"
-BF75F233FDFFDCA8A25D191E1DF4DF7F51E30823_2,BF75F233FDFFDCA8A25D191E1DF4DF7F51E30823," Limitations
-
-The default environments provided by Watson Studio cannot be edited or modified.
-
-Notebook environments (Anaconda Python or R distributions):
-
-: - You can't add a software customization to the default Python and R environment templates included in Watson Studio. You can only add a customization to an environment template that you create. : - If you add a software customization using conda, your environment must have at least 2 GB RAM. : - You can't customize an R environment for a notebook by installing R packages directly from CRAN or GitHub. You can check if the CRAN package you want is available only from conda channels and, if the package is available, add that package name in the customization list as r-.
-
-
-
-* After you have started a notebook in an Watson Studio environment, you can't create another conda environment from inside that notebook and use it. Watson Studio environments do not behave like a Conda environment manager.
-
-
-
-Spark environments: : - You can't customize the software configuration of a Spark environment template.
-
-"
-BF75F233FDFFDCA8A25D191E1DF4DF7F51E30823_3,BF75F233FDFFDCA8A25D191E1DF4DF7F51E30823," Next steps
-
-
-
-* [Customize environment templates for Python or R](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/customize-envs.html)
-
-
-
-"
-BF75F233FDFFDCA8A25D191E1DF4DF7F51E30823_4,BF75F233FDFFDCA8A25D191E1DF4DF7F51E30823," Learn more
-
-Parent topic:[Managing compute resources](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.html)
-"
-D21CD926CA1FE170C8C1645CA0EC65AEDDDB4AEF_0,D21CD926CA1FE170C8C1645CA0EC65AEDDDB4AEF," Publishing a notebook as a gist
-
-A gist is a simple way to share a notebook or parts of a notebook with other users. Unlike when you publish to a GitHub repository, you don't need to manage your gists; you can edit your gists directly in the browser.
-
-All project collaborators, who have administrator or editor permission, can share notebooks or parts of a notebook as gists. The latest saved version of your notebook is published as a gist.
-
-Before you can create a gist, you must be logged in to GitHub and have authorized access to gists in GitHub from Watson Studio. See [Publish notebooks on GitHub](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/github-integration.html). If this information is missing, you are prompted for it.
-
-To publish a notebook as a gist:
-
-
-
-1. Open the notebook in edit mode.
-2. Click the GitHub integration icon () and select Publish as gist.
-
-
-
-Watch this video to see how to enable GitHub integration.
-
-Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform.
-
-This video provides a visual method to learn the concepts and tasks in this documentation.
-
-
-
-* Transcript
-
-Synchronize transcript with video
-
-
-
- Time Transcript
-
- 00:00 This video shows you how to publish notebooks from your Watson Studio project to your GitHub account.
- 00:07 Navigate to your profile and settings.
- 00:11 On the ""Integrations"" tab, visit the link to generate a GitHub personal access token.
- 00:17 Provide a descriptive name for the token and select the repo and gist scopes, then generate the token.
- 00:29 Copy the token, return to the GitHub integration settings, and paste the token.
- 00:36 The token is validated when you save it to your profile settings.
- 00:42 Now, navigate to your projects.
- 00:44 You enable GitHub integration at the project level on the ""Settings"" tab.
-"
-D21CD926CA1FE170C8C1645CA0EC65AEDDDB4AEF_1,D21CD926CA1FE170C8C1645CA0EC65AEDDDB4AEF," 00:50 Simply scroll to the bottom and paste the existing GitHub repository URL.
- 00:56 You'll find that on the ""Code"" tab in the repo.
- 01:01 Click ""Update"" to make the connection.
- 01:05 Now, go to the ""Assets"" tab and open the notebook you want to publish.
- 01:14 Notice that this notebook has the credentials replaced with X's.
- 01:19 It's a best practice to remove or replace credentials before publishing to GitHub.
- 01:24 So, this notebook is ready for publishing.
- 01:27 You can provide the target path along with a commit message.
- 01:31 You also have the option to publish content without hidden code, which means that any cells in the notebook that began with the hidden cell comment will not be published.
- 01:42 When you're, ready click ""Publish"".
- 01:45 The message tells you that the notebook was published successfully and provides links to the notebook, the repository, and the commit.
- 01:54 Let's take a look at the commit.
- 01:57 So, there's the commit, and you can navigate to the repository to see the published notebook.
- 02:04 Lastly, you can publish as a gist.
- 02:07 Gists are another way to share your work on GitHub.
- 02:10 Every gist is a git repository, so it can be forked and cloned.
- 02:15 There are two types of gists: public and secret.
- 02:19 If you start out with a secret gist, you can convert it to a public gist later.
- 02:24 And again, you have the option to remove hidden cells.
- 02:29 Follow the link to see the published gist.
- 02:32 So that's the basics of Watson Studio's GitHub integration.
- 02:37 Find more videos in the Cloud Pak for Data as a Service documentation.
-
-
-
-
-
-Parent topic:[Managing the lifecycle of notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-nb-lifecycle.html)
-"
-11A093CB8F1D24EA066663B3991084A84FC32BF2_0,11A093CB8F1D24EA066663B3991084A84FC32BF2," Creating jobs in deployment spaces
-
-A job is a way of running a batch deployment, or a self-contained asset like a script, notebook, code package, or flow in Watson Machine Learning. You can select the input and output for your job and choose to run it manually or on a schedule. From a deployment space, you can create, schedule, run, and manage jobs.
-
-"
-11A093CB8F1D24EA066663B3991084A84FC32BF2_1,11A093CB8F1D24EA066663B3991084A84FC32BF2," Creating a batch deployment job
-
-Follow these steps when you are creating a batch deployment job:
-
-Important: You must have an existing batch deployment to create a batch job.
-
-
-
-1. From the Deployments tab, select your deployment and click New job. The Create a job dialog box opens.
-2. In the Define details section, enter your job name, an optional description, and click Next.
-3. In the Configure section, select a hardware specification.
-You can follow these steps to optionally configure environment variables and job run retention settings:
-
-
-
-* Optional: If you are deploying a Python script, an R script, or a notebook, then you can enter environment variables to pass parameters to the job. Click Environment variables to enter the key - value pair.
-* Optional: To avoid finishing resources by retaining all historical job metadata, follow one of these options:
-
-
-
-* Click By amount to set thresholds for saving a set number of job runs and associated logs.
-* Click By duration (days) to set thresholds for saving artifacts for a specified number of days.
-
-
-
-
-
-4. Optional: In the Schedule section, toggle the Schedule off button to schedule a run. You can set a date and time for start of schedule and set a schedule for repetition. Click Next.
-
-Note: If you don't specify a schedule, the job runs immediately.
-5. Optional: In the Notify section, toggle the Off button to turn on notifications associated with this job. Click Next.
-
-Note: You can receive notifications for three types of events: success, warning, and failure.
-6. In the Choose data section, provide inline data that corresponds with your model schema. You can provide input in JSON format. Click Next. See [Example JSON payload for inline data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-jobs.html?context=cdpaas&locale=enexample-json).
-7. In the Review and create section, verify your job details, and click Create and run.
-
-
-
-Notes:
-
-
-
-* Scheduled jobs display on the Jobs tab of the deployment space.
-"
-11A093CB8F1D24EA066663B3991084A84FC32BF2_2,11A093CB8F1D24EA066663B3991084A84FC32BF2,"* Results of job runs are written to the specified output file and saved as a space asset.
-* A data asset can be a data source file that you promoted to the space, a connected data source, or tables from databases and files from file-based data sources.
-* If you exclude certain weekdays in your job schedule, the job might not run as you would expect. The reason is due to a discrepancy between the time zone of the user who creates the schedule, and the time zone of the main node where the job runs.
-* When you create or modify a scheduled job, an API key is generated. Future runs use this generated API key.
-
-
-
-"
-11A093CB8F1D24EA066663B3991084A84FC32BF2_3,11A093CB8F1D24EA066663B3991084A84FC32BF2," Example JSON payload for inline data
-
-{
-""deployment"": {
-""id"": """"
-},
-""space_id"": """",
-""name"": ""test_v4_inline"",
-""scoring"": {
-""input_data"": [{
-""fields"": ""AGE"", ""SEX"", ""BP"", ""CHOLESTEROL"", ""NA"", ""K""],
-""values"": 47, ""M"", ""LOW"", ""HIGH"", 0.739, 0.056], 47, ""M"", ""LOW"", ""HIGH"", 0.739, 0.056]]
-}]
-}
-}
-
-"
-11A093CB8F1D24EA066663B3991084A84FC32BF2_4,11A093CB8F1D24EA066663B3991084A84FC32BF2," Queuing and concurrent job executions
-
-The maximum number of concurrent jobs for each deployment is handled internally by the deployment service. For batch deployment, by default, two jobs can be run concurrently. Any deployment job request for a batch deployment that already has two running jobs is placed in a queue for execution later. When any of the running jobs is completed, the next job in the queue is run. The queue has no size limit.
-
-"
-11A093CB8F1D24EA066663B3991084A84FC32BF2_5,11A093CB8F1D24EA066663B3991084A84FC32BF2," Limitation on using large inline payloads for batch deployments
-
-Batch deployment jobs that use large inline payload might get stuck in starting or running state.
-
-Tip: If you provide huge payloads to batch deployments, use data references instead of inline.
-
-"
-11A093CB8F1D24EA066663B3991084A84FC32BF2_6,11A093CB8F1D24EA066663B3991084A84FC32BF2," Retention of deployment job metadata
-
-Job-related metadata is persisted and can be accessed until the job and its deployment are deleted.
-
-"
-11A093CB8F1D24EA066663B3991084A84FC32BF2_7,11A093CB8F1D24EA066663B3991084A84FC32BF2," Viewing deployment job details
-
-When you create or view a batch job, the deployment ID and the job ID are displayed.
-
-
-
-
-
-* The deployment ID represents the deployment definition, including the hardware and software configurations and related assets.
-* The job ID represents the details for a job, including input data and an output location and a schedule for running the job.
-
-
-
-Use these IDs to refer to the job in Watson Machine Learning [REST API](https://cloud.ibm.com/apidocs/machine-learning) requests or in notebooks that use the Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/).
-
-Parent topic:[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html)
-"
-D1AFA9BB4E0475A56190DC8254E004308BEA484D_0,D1AFA9BB4E0475A56190DC8254E004308BEA484D," Creating notebooks
-
-You can add a notebook to your project by using one of these methods: creating a notebook file or copying a sample notebook from the Samples.
-
-Required permissions : You must have the Admin or Editor role in the project to create a notebook.
-
-Watch this short video to learn the basics of Jupyter notebooks.
-
-This video provides a visual method to learn the concepts and tasks in this documentation.
-
-"
-D1AFA9BB4E0475A56190DC8254E004308BEA484D_1,D1AFA9BB4E0475A56190DC8254E004308BEA484D," Creating a notebook file in the notebook editor
-
-To create a notebook file in the notebook editor:
-
-
-
-1. From your project, click New asset > Work with data and models in Python or R notebooks.
-2. On the New Notebook page, specify the method to use to create your notebook. You can create a blank notebook, upload a notebook file from your file system, or upload a notebook file from a URL:
-
-
-
-* The notebook file you select to upload must follow these requirements:
-
-
-
-* The file type must be .ipynb.
-* The file name must not exceed 255 characters.
-* The file name must not contain these characters: < > : ” / | ( ) ?
-
-
-
-* The URL must be a public URL that is shareable and doesn't require authentication.
-
-
-
-
-3. Specify the runtime environment for the language you want to use (Python or R). You can select a provided environment template or an environment template which you created and configured under Templates on the Environments page on the Manage tab of your project. For more information on environments, see [Notebook environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html).
-4. Click Create Notebook. The notebook opens in edit mode.
-
-Note that the time that it takes to create a new notebook or to open an existing one for editing might vary. If no runtime container is available, a container needs to be created and only after it is available, the Jupyter notebook user interface can be loaded. The time it takes to create a container depends on the cluster load and size. Once a runtime container exists, subsequent calls to open notebooks will be significantly faster.
-
-The opened notebook is locked by you. For more information, see [Locking and unlocking notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/creating-notebooks.html?context=cdpaas&locale=enlocking-and-unlocking).
-"
-D1AFA9BB4E0475A56190DC8254E004308BEA484D_2,D1AFA9BB4E0475A56190DC8254E004308BEA484D,"5. Tell the service to trust your notebook content and execute all cells.
-
-When a new notebook is opened in edit mode, the notebook is considered to be untrusted by the Jupyter service by default. When you run an untrusted notebook, content deemed untrusted will not be executed. Untrusted content includes any Javascript, HTML or Javascript in Markdown cells or in any output cells that you did not generate.
-
-
-
-1. Click Not Trusted in the upper right corner of the notebook.
-2. Click Trust to execute all cells.
-
-
-
-
-
-"
-D1AFA9BB4E0475A56190DC8254E004308BEA484D_3,D1AFA9BB4E0475A56190DC8254E004308BEA484D," Adding a notebook from the Samples
-
-Notebooks from the Samples are based on real-world scenarios and contain many useful examples of computations and visualizations that you can adapt to your analysis needs.
-
-To copy a sample notebook:
-
-
-
-1. In the main menu, click Samples, then filter for Notebooks to show only notebook cards.
-2. Find the card for the sample notebook you want, and click the card. You can view the notebook contents to browse the steps and the code that it contains.
-3. To work with a copy of the sample notebook, click Add to project.
-4. Choose the project for the notebook, and click Add.
-5. Optional: Change the name and description for the notebook.
-6. Specify the runtime environment. If you created an environment template on the Environments page of your project, it will display in the list of runtimes you can select from.
-7. Click Create. The notebook opens in edit mode and is locked by you. Locking the file avoids possible merge conflicts that might be caused by competing changes to the file. To get familiar with the structure of a notebook, see [Parts of a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/parts-of-a-notebook.html).
-
-
-
-"
-D1AFA9BB4E0475A56190DC8254E004308BEA484D_4,D1AFA9BB4E0475A56190DC8254E004308BEA484D," Locking and unlocking notebooks
-
-If you open a notebook in edit mode, this notebook is locked by you. While you hold the lock, only you can make changes to the notebook. All other projects users will see the lock icon on the notebook. Only project administrators are able to unlock a locked notebook and open it in edit mode.
-
-When you close the notebook, the lock is released and another user can select to open the notebook in edit mode. Note that you must close the notebook while the runtime environment is still active. The notebook lock can't be released for you if the runtime was stopped or is in idle state. If the notebook lock is not released for you, you can unlock the notebook from the project's Assets page. Locking the file avoids possible merge conflicts that might be caused by competing changes to the file.
-
-"
-D1AFA9BB4E0475A56190DC8254E004308BEA484D_5,D1AFA9BB4E0475A56190DC8254E004308BEA484D," Finding your notebooks
-
-You can find and open notebooks from the Assets page of the project.
-
-You can open a notebook in view or edit mode. When you open a notebook in view mode, you can't change or run the notebook. You can only change or run a notebook when it is opened in edit mode and started in an environment.
-
-You can open a notebook by:
-
-
-
-* Clicking the notebook. This opens the notebook in view mode. To then open the notebook in edit mode, click the pencil icon () on the notebook toolbar. This starts the environment associated with the notebook.
-* Expanding the three vertical dots on the right of the notebook entry, and selecting View or Edit.
-
-
-
-"
-D1AFA9BB4E0475A56190DC8254E004308BEA484D_6,D1AFA9BB4E0475A56190DC8254E004308BEA484D," Next step
-
-
-
-* [Code and run notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/code-run-notebooks.html)
-
-
-
-"
-D1AFA9BB4E0475A56190DC8254E004308BEA484D_7,D1AFA9BB4E0475A56190DC8254E004308BEA484D," Learn more
-
-
-
-* [Provided CPU runtime environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.htmldefault-cpu)
-* [Provided Spark runtime environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.htmldefault-spark)
-* [Change the environment runtime used by a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.htmlchange-env)
-
-
-
-Parent topic:[Jupyter Notebook editor](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-editor.html)
-"
-3B2719C3B56D1BD40FA0D8C6853DDD078FD13D94_0,3B2719C3B56D1BD40FA0D8C6853DDD078FD13D94," Customizing environment templates
-
-You can change the name, the description, and the hardware configuration of an environment template that you created. You can customize the software configuration of Jupyter notebook environment templates through conda channels or by using pip. You can provide a list of conda packages, a list of pip packages, or a combination of both. When using conda packages, you can provide a list of additional conda channel locations through which the packages can be obtained.
-
-Required permissions : You must be have the Admin or Editor role in the project to customize an environment template.
-
-Restrictions : You cannot change the language of an existing environment template. : You can’t customize the software configuration of a Spark environment template you created.
-
-To customize an environment template that you created:
-
-
-
-1. Under your project's Manage tab, click the Environments page.
-2. In the Active Runtimes section, check that no runtime is active for the environment template you want to change.
-3. In the Environment Templates section, click the environment template you want to customize.
-4. Make your changes.
-
-For a Juypter notebook environment template, select to create a customization and specify the libraries to add to the standard packages that are available by default. You can also use the customization to upgrade or downgrade packages that are part of the standard software configuration.
-
-The libraries that are added to an environment template through the customization aren't persisted; however, they are automatically installed each time the environment runtime is started. Note that if you add a library using pip install through a notebook cell and not through the customization, only you will be able to use this library; the library is not available to someone else using the same environment template.
-
-If you want you can use the provided template to add the custom libraries. There is a different template for Python and for R. The following example shows you how to add Python packages:
-
- Modify the following content to add a software customization to an environment.
- To remove an existing customization, delete the entire content and click Apply.
-
- Add conda channels below defaults, indented by two spaces and a hyphen.
-channels:
-- defaults
-
- To add packages through conda or pip, remove the comment on the following line.
- dependencies:
-
-"
-3B2719C3B56D1BD40FA0D8C6853DDD078FD13D94_1,3B2719C3B56D1BD40FA0D8C6853DDD078FD13D94," Add conda packages here, indented by two spaces and a hyphen.
- Remove the comment on the following line and replace sample package name with your package name:
- - a_conda_package=1.0
-
- Add pip packages here, indented by four spaces and a hyphen.
- Remove the comments on the following lines and replace sample package name with your package name.
- - pip:
- - a_pip_package==1.0
-
-Important when customizing:
-
-
-
-* Before you customize a package, verify that the changes you are planning have the intended effect.
-
-
-
-* conda can report the changes required for installing a given package, without actually installing it. You can verify the changes from your notebook. For example, for the library Plotly:
-
-
-
-* In a Python notebook, enter: !conda install --dry-run plotly
-* In an R notebook, enter: print(system2(""conda"", args=c(""install"",""--dry-run"",""r-plotly""), stdout=TRUE))
-
-
-
-* pip does install the package. However, restarting the runtime again after verification will remove the package. Here too you verify the changes from your notebook. For example, for the library Plotly:
-
-
-
-* In a Python notebook, enter: !pip install plotly
-* In an R notebook, enter: print(system2(""pip"", args=""install plotly"", stdout=TRUE))
-
-
-
-
-
-* If you can get a package through conda from the default channels and through pip from PyPI, the preferred method is through conda from the default channels.
-* Conda does dependency checking when installing packages which can be memory intensive if you add many packages to the customization. Ensure that you select an environment with sufficient RAM to enable dependency checking at the time the runtime is started.
-* To prevent unnecessary dependency checking if you only want packages from one Conda channel, exclude the default channels by removing defaults from the channels list in the template and adding nodefaults.
-"
-3B2719C3B56D1BD40FA0D8C6853DDD078FD13D94_2,3B2719C3B56D1BD40FA0D8C6853DDD078FD13D94,"* In addition to the Anaconda main channel, many packages for R can be found in Anaconda's R channel. In R environments, this channel is already part of the default channels, hence it does not need to be added separately.
-* If you add packages only through pip or only through conda to the customization template, you must make sure that dependencies is not commented out in the template.
-* When you specify a package version, use a single = for conda packages and == for pip packages. Wherever possible, specify a version number as this reduces the installation time and memory consumption significantly. If you don't specify a version, the package manager might pick the latest version available, or keep the version that is available in the package.
-* You cannot add arbitrary notebook extensions as a customization because notebook extensions must be pre-installed.
-
-
-
-5. Apply your changes.
-
-
-
-"
-3B2719C3B56D1BD40FA0D8C6853DDD078FD13D94_3,3B2719C3B56D1BD40FA0D8C6853DDD078FD13D94," Learn more
-
-
-
-* [Examples of customizations](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/example-customizations.html)
-* [Installing custom packages through a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/install-cust-lib.html)
-
-
-
-Parent topic:[Managing compute resources](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.html)
-"
-04B717FD06C5D906268E8530F4B521686065C6D5_0,04B717FD06C5D906268E8530F4B521686065C6D5," Data load support
-
-You can add automatically generated code to load data from project data assets to a notebook cell. The asset type can be a file or a database connection.
-
-By clicking in an empty code cell in your notebook, clicking the Code snippets icon () from the notebook toolbar, selecting Read data and an asset from the project, you can:
-
-
-
-* Insert the data source access credentials. This capability is available for all data assets that are added to a project. With the credentials, you can write your own code to access the asset and load the data into data structures of your choice.
-* Generate code that is added to the notebook cell. The inserted code serves as a quick start to allow you to easily begin working with a data set or connection. For production systems, you should carefully review the inserted code to determine if you should write your own code that better meets your needs.
-
-When you run the code cell, the data is accessed and loaded into the data structure you selected.
-
-Notes:
-
-
-
-1. The ability to provide generated code is disabled for some connections if:
-
-
-
-* The connection credentials are personal credentials
-* The connection uses a secure gateway link
-* The connection credentials are stored in vaults
-
-
-
-2. If the file type or database connection that you are using doesn't appear in the following lists, you can select to create generic code. For Python this is a StreamingBody object and for R a textConnection object.
-
-
-
-
-
-The following tables show you which data source connections (file types and database connections) support the option to generate code. The options for generating code vary depending on the data source, the notebook coding language, and the notebook runtime compute.
-
-"
-04B717FD06C5D906268E8530F4B521686065C6D5_1,04B717FD06C5D906268E8530F4B521686065C6D5," Supported files types
-
-
-
-Table 1. Supported file types
-
- Data source Notebook coding language Compute engine type Available support to load data
-
- CSV files
- Python Anaconda Python distribution Load data into pandasDataFrame
- With Spark Load data into pandasDataFrame and sparkSessionDataFrame
- With Hadoop Load data into pandasDataFrame and sparkSessionDataFrame
- R Anaconda R distribution Load data into R data frame
- With Spark Load data into R data frame and sparkSessionDataFrame
- With Hadoop Load data into R data frame and sparkSessionDataFrame
- Python Script
- Python Anaconda Python distribution Load data into pandasStreamingBody
- With Spark Load data into pandasStreamingBody
- With Hadoop Load data into pandasStreamingBody
- R Anaconda R distribution Load data into rRawObject
- With Spark Load data into rRawObject
- With Hadoop Load data into rRawObject
- JSON files
- Python Anaconda Python distribution Load data into pandasDataFrame
- With Spark Load data into pandasDataFrame and sparkSessionDataFrame
- With Hadoop Load data into pandasDataFrame and sparkSessionDataFrame
- R Anaconda R distribution Load data into R data frame
- With Spark Load data into R data frame, rRawObject and sparkSessionDataFrame
- With Hadoop Load data into R data frame, rRawObject and sparkSessionDataFrame
- .xlsx and .xls files
- Python Anaconda Python distribution Load data into pandasDataFrame
- With Spark Load data into pandasDataFrame
- With Hadoop Load data into pandasDataFrame
- R Anaconda R distribution Load data into rRawObject
- With Spark No data load support
- With Hadoop No data load support
- Octet-stream file types
- Python Anaconda Python distribution Load data into pandasStreamingBody
- With Spark Load data into pandasStreamingBody
- R Anaconda R distribution Load data in rRawObject
- With Spark Load data in rDataObject
- PDF file type
- Python Anaconda Python distribution Load data into pandasStreamingBody
- With Spark Load data into pandasStreamingBody
-"
-04B717FD06C5D906268E8530F4B521686065C6D5_2,04B717FD06C5D906268E8530F4B521686065C6D5," With Hadoop Load data into pandasStreamingBody
- R Anaconda R distribution Load data in rRawObject
- With Spark Load data in rDataObject
- With Hadoop Load data into rRawData
- ZIP file type
- Python Anaconda Python distribution Load data into pandasStreamingBody
- With Spark Load data into pandasStreamingBody
- R Anaconda R distribution Load data in rRawObject
- With Spark Load data in rDataObject
- JPEG, PNG image files
- Python Anaconda Python distribution Load data into pandasStreamingBody
- With Spark Load data into pandasStreamingBody
- With Hadoop Load data into pandasStreamingBody
- R Anaconda R distribution Load data in rRawObject
- With Spark Load data in rDataObject
- With Hadoop Load data in rDataObject
- Binary files
- Python Anaconda Python distribution Load data into pandasStreamingBody
- With Spark Load data into pandasStreamingBody
- Hadoop No data load support
- R Anaconda R distribution Load data in rRawObject
- With Spark Load data into rRawObject
- Hadoop Load data in rDataObject
-
-
-
-"
-04B717FD06C5D906268E8530F4B521686065C6D5_3,04B717FD06C5D906268E8530F4B521686065C6D5," Supported database connections
-
-
-
-Table 2. Supported database connections
-
- Data source Notebook coding language Compute engine type Available support to load data
-
- - [Db2 Warehouse on Cloud](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2-wh.html) - [IBM Db2 on Cloud](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2-cloud.html) - [IBM Db2 Database](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2.html)
- Python Anaconda Python distribution Load data into ibmdbpyIda and ibmdbpyPandas
- With Spark Load data into ibmdbpyIda, ibmdbpyPandas and sparkSessionDataFrame
- With Hadoop Load data into ibmdbpyIda, ibmdbpyPandas and sparkSessionDataFrame
- R Anaconda R distribution Load data into ibmdbrIda and ibmdbrDataframe
- With Spark Load data into ibmdbrIda, ibmdbrDataFrame and sparkSessionDataFrame
- With Hadoop Load data into ibmdbrIda, ibmdbrDataFrame and sparkSessionDataFrame
- - [Db2 for z/OS](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2-wh.html)
- Python Anaconda Python distribution Load data into ibmdbpyIda and ibmdbpyPandas
- With Spark No data load support
-"
-04B717FD06C5D906268E8530F4B521686065C6D5_4,04B717FD06C5D906268E8530F4B521686065C6D5," - [Amazon Simple Storage Services (S3)](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-amazon-s3.html) - [Amazon Simple Storage Services (S3) with an IAM access policy](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-amazon-s3.html)
- Python Anaconda Python distribution Load data into pandasStreamingBody
- With Hadoop Load data into pandasStreamingBody and sparkSessionSetup
- R Anaconda R distributuion Load data into rRawObject
- With Hadoop Load data into rRawObject and sparkSessionSetup
- - [IBM Cloud Databases for PostgreSQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-dbase-postgresql.html) - [Microsoft SQL Server](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sql-server.html)
- Python Anaconda Python distribution Load data into pandasDataFrame
- With Spark Load data into pandasDataFrame
- R Anaconda R distribution Load data into R data frame
- With Spark Load data into R data frame and sparkSessionDataFrame
- - [IBM Cognos Analytics](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cognos.html)
-"
-04B717FD06C5D906268E8530F4B521686065C6D5_5,04B717FD06C5D906268E8530F4B521686065C6D5," Python Anaconda Python distribution Load data into pandasDataFrame In the generated code: - Edit the path parameter in the last line of code - Remove the comment tagging To read data, see [Reading data from a data source](https://www.ibm.com/support/knowledgecenter/en/SSEP7J_11.1.0/com.ibm.swg.ba.cognos.ca_notebook.doc/c_read_notebook.html) To search data, see [Searching for data objects](https://www.ibm.com/support/knowledgecenter/en/SSEP7J_11.1.0/com.ibm.swg.ba.cognos.ca_notebook.doc/c_search_for_data_objects_notebook.html) To write data, see [Writing data to a data source](https://www.ibm.com/support/knowledgecenter/en/SSEP7J_11.1.0/com.ibm.swg.ba.cognos.ca_notebook.doc/c_write_notebook.html)
- With Spark No data load support
-"
-04B717FD06C5D906268E8530F4B521686065C6D5_6,04B717FD06C5D906268E8530F4B521686065C6D5," R Anaconda R distribution Load data into R data frame In the generated code: - Edit the path parameter in the last line of code - Remove the comment tagging To read data, see [Reading data from a data source](https://www.ibm.com/support/knowledgecenter/en/SSEP7J_11.1.0/com.ibm.swg.ba.cognos.ca_notebook.doc/c_read_notebook.html) To search data, see [Searching for data objects](https://www.ibm.com/support/knowledgecenter/en/SSEP7J_11.1.0/com.ibm.swg.ba.cognos.ca_notebook.doc/c_search_for_data_objects_notebook.html) To write data, see [Writing data to a data source](https://www.ibm.com/support/knowledgecenter/en/SSEP7J_11.1.0/com.ibm.swg.ba.cognos.ca_notebook.doc/c_write_notebook.html)
- With Spark No data load support
- - [Microsoft Azure Cosmos DB](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cosmosdb.html)
- Python Anaconda Python distribution Load data into pandasDataFrame
- With Spark Load data into pandasDataFrame
- R Anaconda R distribution No data load support
- With Spark No data load support
- - [Amazon RDS for MySQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azrds-mysql.html)
- Python Anaconda Python distribution Load data into pandasDataFrame
- With Spark Load data into pandasDataFrame
- R Anaconda R distribution Load data into R data frame and sparkSessionDataFrame
- With Spark No data load support
-"
-04B717FD06C5D906268E8530F4B521686065C6D5_7,04B717FD06C5D906268E8530F4B521686065C6D5," - [HTTP](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-http.html) - [Apache Cassandra](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cassandra.html) - [Amazon RDS for PostgreSQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-azrds-postresql.html)
- Python Anaconda Python distribution Load data into pandasDataFrame
- With Spark Load data into pandasDataFrame
- R Anaconda R distribution Load data into R data frame
- With Spark Load data into R data frame
-
-
-
-Parent topic:[Loading and accessing data in a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html)
-"
-E0E5646EA00A170BB595E9E0BBCCB69F702FFC7C_0,E0E5646EA00A170BB595E9E0BBCCB69F702FFC7C," Analyzing data and working with models
-
-You can analyze data and build or work with models in projects. The methods that you choose for preparing data or working models help you determine which tools best fit your needs.
-
-Each tool has a specific, primary task. Some tools have capabilities for multiple types of tasks.
-
-You can choose a tool based on how much automation you want:
-
-
-
-* Code editor tools: Use to write code in Python or R, all also with Spark.
-* Graphical builder tools: Use menus and drag-and-drop functionality on a builder to visually program.
-* Automated builder tools: Use to configure automated tasks that require limited user input.
-
-
-
-
-
-Tool to tasks
-
- Tool Primary task Tool type Work with data Work with models
-
- [Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/refinery/refining_data.html) Prepare and visualize data Graphical builder ✓
- [Visualizations](https://dataplatform.cloud.ibm.com/docs/content/dataview/idh_idc_cg_help_main.html) Build graphs to visualize data Graphical builder ✓
- [Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html) Experiment with foundation models and prompts Graphical builder ✓
- [Tuning Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-studio.html) Tune a foundation model to return output in a certain style or format Graphical builder ✓ ✓
- [Jupyter notebook editor](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-editor.html) Work with data and models in Python or R notebooks Code editor ✓ ✓
- [Federated learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fed-lea.html) Train models on distributed data Code editor ✓
-"
-E0E5646EA00A170BB595E9E0BBCCB69F702FFC7C_1,E0E5646EA00A170BB595E9E0BBCCB69F702FFC7C," [RStudio IDE](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-overview.html) Work with data and models in R Code editor ✓ ✓
- [SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsd/spss-modeler.html) Build models as a visual flow Graphical builder ✓ ✓
- [Decision Optimization](https://dataplatform.cloud.ibm.com/docs/content/DO/DOWS-Cloud_home.html) Solve optimization problems Graphical builder, code editor ✓ ✓
- [AutoAI tool](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) Build machine learning models automatically Automated builder ✓ ✓
- [Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-overview.html) Automate model lifecycle Graphical builder ✓ ✓
- [Synthetic Data Generator](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/synthetic_data_overview_sd.html) Generate synthetic tabular data Graphical builder ✓ ✓
-
-
-
-"
-E0E5646EA00A170BB595E9E0BBCCB69F702FFC7C_2,E0E5646EA00A170BB595E9E0BBCCB69F702FFC7C," Learn more
-
-
-
-"
-89F9E0463D14DED51B14392A4FD7A69BB53FA1BF_0,89F9E0463D14DED51B14392A4FD7A69BB53FA1BF," Data skipping for Spark SQL
-
-Data skipping can significantly boost the performance of SQL queries by skipping over irrelevant data objects or files based on a summary metadata associated with each object.
-
-Data skipping uses the open source Xskipper library for creating, managing and deploying data skipping indexes with Apache Spark. See [Xskipper - An Extensible Data Skipping Framework](https://xskipper.io).
-
-For more details on how to work with Xskipper see:
-
-
-
-* [Quick Start Guide](https://xskipper.io/getting-started/quick-start-guide/)
-* [Demo Notebooks](https://xskipper.io/getting-started/sample-notebooks/)
-
-
-
-In addition to the open source features in Xskipper, the following features are also available:
-
-
-
-* [Geospatial data skipping](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-skipping-spark-sql.html?context=cdpaas&locale=engeospatial-skipping)
-* [Encrypting indexes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-skipping-spark-sql.html?context=cdpaas&locale=enencrypting-indexes)
-* [Data skipping with joins (for Spark 3 only)](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-skipping-spark-sql.html?context=cdpaas&locale=enskipping-with-joins)
-* [Samples showing these features](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-skipping-spark-sql.html?context=cdpaas&locale=ensamples)
-
-
-
-"
-89F9E0463D14DED51B14392A4FD7A69BB53FA1BF_1,89F9E0463D14DED51B14392A4FD7A69BB53FA1BF," Geospatial data skipping
-
-You can also use data skipping when querying geospatial data sets using [geospatial functions](https://www.ibm.com/support/knowledgecenter/en/SSCJDQ/com.ibm.swg.im.dashdb.analytics.doc/doc/geo_functions.html) from the [spatio-temporal library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/geo-spatial-lib.html).
-
-
-
-* To benefit from data skipping in data sets with latitude and longitude columns, you can collect the min/max indexes on the latitude and longitude columns.
-* Data skipping can be used in data sets with a geometry column (a UDT column) by using a built-in [Xskipper plugin](https://xskipper.io/api/indexing/plugins).
-
-
-
-The next sections show you to work with the geospatial plugin.
-
-"
-89F9E0463D14DED51B14392A4FD7A69BB53FA1BF_2,89F9E0463D14DED51B14392A4FD7A69BB53FA1BF," Setting up the geospatial plugin
-
-To use the plugin, load the relevant implementations using the Registration module. Note that you can only use Scala in applications in IBM Analytics Engine powered by Apache Spark, not in Watson Studio.
-
-
-
-* For Scala:
-
-import com.ibm.xskipper.stmetaindex.filter.STMetaDataFilterFactory
-import com.ibm.xskipper.stmetaindex.index.STIndexFactory
-import com.ibm.xskipper.stmetaindex.translation.parquet.{STParquetMetaDataTranslator, STParquetMetadatastoreClauseTranslator}
-import io.xskipper._
-
-Registration.addIndexFactory(STIndexFactory)
-Registration.addMetadataFilterFactory(STMetaDataFilterFactory)
-Registration.addClauseTranslator(STParquetMetadatastoreClauseTranslator)
-Registration.addMetaDataTranslator(STParquetMetaDataTranslator)
-* For Python:
-
-from xskipper import Xskipper
-from xskipper import Registration
-
-Registration.addMetadataFilterFactory(spark, 'com.ibm.xskipper.stmetaindex.filter.STMetaDataFilterFactory')
-Registration.addIndexFactory(spark, 'com.ibm.xskipper.stmetaindex.index.STIndexFactory')
-Registration.addMetaDataTranslator(spark, 'com.ibm.xskipper.stmetaindex.translation.parquet.STParquetMetaDataTranslator')
-Registration.addClauseTranslator(spark, 'com.ibm.xskipper.stmetaindex.translation.parquet.STParquetMetadatastoreClauseTranslator')
-
-
-
-"
-89F9E0463D14DED51B14392A4FD7A69BB53FA1BF_3,89F9E0463D14DED51B14392A4FD7A69BB53FA1BF," Index building
-
-To build an index, you can use the addCustomIndex API. Note that you can only use Scala in applications in IBM Analytics Engine powered by Apache Spark, not in Watson Studio.
-
-
-
-* For Scala:
-
-import com.ibm.xskipper.stmetaindex.implicits._
-
-// index the dataset
-val xskipper = new Xskipper(spark, dataset_path)
-
-xskipper
-.indexBuilder()
-// using the implicit method defined in the plugin implicits
-.addSTBoundingBoxLocationIndex(""location"")
-// equivalent
-//.addCustomIndex(STBoundingBoxLocationIndex(""location""))
-.build(reader).show(false)
-* For Python:
-
-xskipper = Xskipper(spark, dataset_path)
-
- adding the index using the custom index API
-xskipper.indexBuilder()
-.addCustomIndex(""com.ibm.xskipper.stmetaindex.index.STBoundingBoxLocationIndex"", ['location'], dict())
-.build(reader)
-.show(10, False)
-
-
-
-"
-89F9E0463D14DED51B14392A4FD7A69BB53FA1BF_4,89F9E0463D14DED51B14392A4FD7A69BB53FA1BF," Supported functions
-
-The list of supported geospatial functions includes the following:
-
-
-
-* ST_Distance
-* ST_Intersects
-* ST_Contains
-* ST_Equals
-* ST_Crosses
-* ST_Touches
-* ST_Within
-* ST_Overlaps
-* ST_EnvelopesIntersect
-* ST_IntersectsInterior
-
-
-
-"
-89F9E0463D14DED51B14392A4FD7A69BB53FA1BF_5,89F9E0463D14DED51B14392A4FD7A69BB53FA1BF," Encrypting indexes
-
-If you use a Parquet metadata store, the metadata can optionally be encrypted using Parquet Modular Encryption (PME). This is achieved by storing the metadata itself as a Parquet data set, and thus PME can be used to encrypt it. This feature applies to all input formats, for example, a data set stored in CSV format can have its metadata encrypted using PME.
-
-In the following section, unless specified otherwise, when referring to footers, columns, and so on, these are with respect to metadata objects, and not to objects in the indexed data set.
-
-Index encryption is modular and granular in the following way:
-
-
-
-* Each index can either be encrypted (with a per-index key granularity) or left in plain text
-* Footer + object name column:
-
-
-
-* Footer column of the metadata object which in itself is a Parquet file contains, among other things:
-
-
-
-* Schema of the metadata object, which reveals the types, parameters and column names for all indexes collected. For example, you can learn that a BloomFilter is defined on column city with a false-positive probability of 0.1.
-* Full path to the original data set or a table name in case of a Hive metastore table.
-
-
-
-* Object name column stores the names of all indexed objects.
-
-
-
-* Footer + metadata column can either be:
-
-
-
-* Both encrypted using the same key. This is the default. In this case, the plain text footer configuration for the Parquet objects comprising the metadata in encrypted footer mode, and the object name column is encrypted using the selected key.
-* Both in plain text. In this case, the Parquet objects comprising the metadata are in plain text footer mode, and the object name column is not encrypted.
-
-"
-89F9E0463D14DED51B14392A4FD7A69BB53FA1BF_6,89F9E0463D14DED51B14392A4FD7A69BB53FA1BF,"If at least one index is marked as encrypted, then a footer key must be configured regardless of whether plain text footer mode is enabled or not. If plain text footer is set then the footer key is used only for tamper-proofing. Note that in that case the object name column is not tamper proofed.
-
-If a footer key is configured, then at least one index must be encrypted.
-
-
-
-
-
-Before using index encryption, you should check the documentation on [PME](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/parquet-encryption.html) and make sure you are familiar with the concepts.
-
-Important: When using index encryption, whenever a key is configured in any Xskipper API, it's always the label NEVER the key itself.
-
-To use index encryption:
-
-
-
-1. Follow all the steps to make sure PME is enabled. See [PME](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/parquet-encryption.html).
-2. Perform all regular PME configurations, including Key Management configurations.
-3. Create encrypted metadata for a data set:
-
-
-
-1. Follow the regular flow for creating metadata.
-2. Configure a footer key. If you wish to set a plain text footer + object name column, set io.xskipper.parquet.encryption.plaintext.footer to true (See samples below).
-3. In IndexBuilder, for each index you want to encrypt, add the label of the key to use for that index.
-
-To use metadata during query time or to refresh existing metadata, no setup is necessary other than the regular PME setup required to make sure the keys are accessible (literally the same configuration needed to read an encrypted data set).
-
-
-
-
-
-"
-89F9E0463D14DED51B14392A4FD7A69BB53FA1BF_7,89F9E0463D14DED51B14392A4FD7A69BB53FA1BF," Samples
-
-The following samples show metadata creation using a key named k1 as a footer + object name key, and a key named k2 as a key to encrypt a MinMax for temp, while also creating a ValueList for city, which is left in plain text. Note that you can only use Scala in applications in IBM Analytics Engine powered by Apache Spark, not in Watson Studio.
-
-
-
-* For Scala:
-
-// index the dataset
-val xskipper = new Xskipper(spark, dataset_path)
-// Configuring the JVM wide parameters
-val jvmComf = Map(
-""io.xskipper.parquet.mdlocation"" -> md_base_location,
-""io.xskipper.parquet.mdlocation.type"" -> ""EXPLICIT_BASE_PATH_LOCATION"")
-Xskipper.setConf(jvmConf)
-// set the footer key
-val conf = Map(
-""io.xskipper.parquet.encryption.footer.key"" -> ""k1"")
-xskipper.setConf(conf)
-xskipper
-.indexBuilder()
-// Add an encrypted MinMax index for temp
-.addMinMaxIndex(""temp"", ""k2"")
-// Add a plaintext ValueList index for city
-.addValueListIndex(""city"")
-.build(reader).show(false)
-* For Python
-
-xskipper = Xskipper(spark, dataset_path)
- Add JVM Wide configuration
-jvmConf = dict([
-(""io.xskipper.parquet.mdlocation"", md_base_location),
-(""io.xskipper.parquet.mdlocation.type"", ""EXPLICIT_BASE_PATH_LOCATION"")])
-Xskipper.setConf(spark, jvmConf)
- configure footer key
-conf = dict([(""io.xskipper.parquet.encryption.footer.key"", ""k1"")])
-xskipper.setConf(conf)
- adding the indexes
-xskipper.indexBuilder()
-.addMinMaxIndex(""temp"", ""k1"")
-.addValueListIndex(""city"")
-"
-89F9E0463D14DED51B14392A4FD7A69BB53FA1BF_8,89F9E0463D14DED51B14392A4FD7A69BB53FA1BF,".build(reader)
-.show(10, False)
-
-
-
-If you want the footer + object name to be left in plain text mode (as mentioned above), you need to add the configuration parameter:
-
-
-
-* For Scala:
-
-// index the dataset
-val xskipper = new Xskipper(spark, dataset_path)
-// Configuring the JVM wide parameters
-val jvmComf = Map(
-""io.xskipper.parquet.mdlocation"" -> md_base_location,
-""io.xskipper.parquet.mdlocation.type"" -> ""EXPLICIT_BASE_PATH_LOCATION"")
-Xskipper.setConf(jvmConf)
-// set the footer key
-val conf = Map(
-""io.xskipper.parquet.encryption.footer.key"" -> ""k1"",
-""io.xskipper.parquet.encryption.plaintext.footer"" -> ""true"")
-xskipper.setConf(conf)
-xskipper
-.indexBuilder()
-// Add an encrypted MinMax index for temp
-.addMinMaxIndex(""temp"", ""k2"")
-// Add a plaintext ValueList index for city
-.addValueListIndex(""city"")
-.build(reader).show(false)
-* For Python
-
-xskipper = Xskipper(spark, dataset_path)
- Add JVM Wide configuration
-jvmConf = dict([
-(""io.xskipper.parquet.mdlocation"", md_base_location),
-(""io.xskipper.parquet.mdlocation.type"", ""EXPLICIT_BASE_PATH_LOCATION"")])
-Xskipper.setConf(spark, jvmConf)
- configure footer key
-conf = dict([(""io.xskipper.parquet.encryption.footer.key"", ""k1""),
-(""io.xskipper.parquet.encryption.plaintext.footer"", ""true"")])
-xskipper.setConf(conf)
- adding the indexes
-xskipper.indexBuilder()
-.addMinMaxIndex(""temp"", ""k1"")
-"
-89F9E0463D14DED51B14392A4FD7A69BB53FA1BF_9,89F9E0463D14DED51B14392A4FD7A69BB53FA1BF,".addValueListIndex(""city"")
-.build(reader)
-.show(10, False)
-
-
-
-"
-89F9E0463D14DED51B14392A4FD7A69BB53FA1BF_10,89F9E0463D14DED51B14392A4FD7A69BB53FA1BF," Data skipping with joins (for Spark 3 only)
-
-With Spark 3, you can use data skipping in join queries such as:
-
-SELECT
-FROM orders, lineitem
-WHERE l_orderkey = o_orderkey and o_custkey = 800
-
-This example shows a star schema based on the TPC-H benchmark schema (see [TPC-H](http://www.tpc.org/tpch/)) where lineitem is a fact table and contains many records, while the orders table is a dimension table which has a relatively small number of records compared to the fact tables.
-
-The above query has a predicate on the orders tables which contains a small number of records which means using min/max will not benefit much from data skipping.
-
-Dynamic data skipping is a feature which enables queries such as the above to benefit from data skipping by first extracting the relevant l_orderkey values based on the condition on the orders table and then using it to push down a predicate on l_orderkey that uses data skipping indexes to filter irrelevant objects.
-
-To use this feature, enable the following optimization rule. Note that you can only use Scala in applications in IBM Analytics Engine powered by Apache Spark, not in Watson Studio.
-
-
-
-* For Scala:
-
-import com.ibm.spark.implicits.
-
-spark.enableDynamicDataSkipping()
-* For Python:
-
-from sparkextensions import SparkExtensions
-
-SparkExtensions.enableDynamicDataSkipping(spark)
-
-
-
-Then use the Xskipper API as usual and your queries will benefit from using data skipping.
-
-For example, in the above query, indexing l_orderkey using min/max will enable skipping over the lineitem table and will improve query performance.
-
-"
-89F9E0463D14DED51B14392A4FD7A69BB53FA1BF_11,89F9E0463D14DED51B14392A4FD7A69BB53FA1BF," Support for older metadata
-
-Xskipper supports older metadata created by the MetaIndexManager seamlessly. Older metadata can be used for skipping as updates to the Xskipper metadata are carried out automatically by the next refresh operation.
-
-If you see DEPRECATED_SUPPORTED in front of an index when listing indexes or running a describeIndex operation, the metadata version is deprecated but is still supported and skipping will work. The next refresh operation will update the metadata automatically.
-"
-E0B0B51CD757048207EEFE4EC8F1E98E967D9E69_0,E0B0B51CD757048207EEFE4EC8F1E98E967D9E69," SPSS predictive analytics data preparation algorithms in notebooks
-
-Descriptives provides efficient computation of the univariate and bivariate statistics and automatic data preparation features on large scale data. It can be used widely in data profiling, data exploration, and data preparation for subsequent modeling analyses.
-
-The core statistical features include essential univariate and bivariate statistical summaries, univariate order statistics, metadata information creation from raw data, statistics for visualization of single fields and field pairs, data preparation features, and data interestingness score and data quality assessment. It can efficiently support the functionality required for automated data processing, user interactivity, and obtaining data insights for single fields or the relationships between the pairs of fields inclusive with a specified target.
-
-"
-E0B0B51CD757048207EEFE4EC8F1E98E967D9E69_1,E0B0B51CD757048207EEFE4EC8F1E98E967D9E69,"Python example code:
-
-from spss.ml.datapreparation.descriptives import Descriptives
-
-de = Descriptives().
-setInputFieldsList([""Field1"", ""Field2""]).
-setTargetFieldList([""Field3""]).
-setTrimBlanks(""TRIM_BOTH"")
-
-deModel = de.fit(df)
-
-PMML = deModel.toPMML()
-statXML = deModel.statXML()
-
-predictions = deModel.transform(df)
-predictions.show()
-
-"
-E0B0B51CD757048207EEFE4EC8F1E98E967D9E69_2,E0B0B51CD757048207EEFE4EC8F1E98E967D9E69," Descriptives Selection Strategy
-
-When the number of field pairs is too large (for example, larger than the default of 1000), SelectionStrategy is used to limit the number of pairs for which bivariate statistics will be computed. The strategy involves 2 steps:
-
-
-
-1. Limit the number of pairs based on the univariate statistics.
-2. Limit the number of pairs based on the core association bivariate statistics.
-
-
-
-Notice that the pair will always be included under the following conditions:
-
-
-
-1. The pair consists of a predictor field and a target field.
-2. The pair of predictors or targets is enforced.
-
-
-
-"
-E0B0B51CD757048207EEFE4EC8F1E98E967D9E69_3,E0B0B51CD757048207EEFE4EC8F1E98E967D9E69," Smart Data Preprocessing
-
-The Smart Data Preprocessing (SDP) engine is an analytic component for data preparation. It consists of three separate modules: relevance analysis, relevance and redundancy analysis, and smart metadata (SMD) integration.
-
-Given the data with regular fields, list fields, and map fields, relevance analysis evaluates the associations of input fields with targets, and selects a specified number of fields for subsequent analysis. Meanwhile, it expands list fields and map fields, and extracts the selected fields into regular column-based format.
-
-Due to the efficiency of relevance analysis, it's also used to reduce the large number of fields in wide data to a moderate level where traditional analytics can work.
-
-SmartDataPreprocessingRelevanceAnalysis exports these outputs:
-
-
-
-* JSON file, containing model information
-* new column-based data
-* the related data model
-
-
-
-"
-E0B0B51CD757048207EEFE4EC8F1E98E967D9E69_4,E0B0B51CD757048207EEFE4EC8F1E98E967D9E69,"Python example code:
-
-from spss.ml.datapreparation.smartdatapreprocessing import SmartDataPreprocessingRelevanceAnalysis
-
-sdpRA = SmartDataPreprocessingRelevanceAnalysis().
-setInputFieldList([""holderage"", ""vehicleage"", ""claimamt""]).
-setTargetFieldList([""vehiclegroup"", ""nclaims""]).
-setMaxNumTarget(3).
-setInvalidPairsThresEnabled(True).
-setRMSSEThresEnabled(True).
-setAbsVariCoefThresEnabled(True).
-setInvalidPairsThreshold(0.7).
-setRMSSEThreshold(0.7).
-setAbsVariCoefThreshold(0.05).
-setMaxNumSelFields(2).
-setConCatRatio(0.3).
-setFilterSelFields(True)
-
-predictions = sdpRA.transform(data)
-predictions.show()
-
-"
-E0B0B51CD757048207EEFE4EC8F1E98E967D9E69_5,E0B0B51CD757048207EEFE4EC8F1E98E967D9E69," Sparse Data Convertor
-
-Sparse Data Convertor (SDC) converts regular data fields into list fields. You just need to specify the fields that you want to convert into list fields, then SDC will merge the fields according to their measurement level. It will generate, at most, three kinds of list fields: continuous list field, categorical list field, and map field.
-
-"
-E0B0B51CD757048207EEFE4EC8F1E98E967D9E69_6,E0B0B51CD757048207EEFE4EC8F1E98E967D9E69,"Python example code:
-
-from spss.ml.datapreparation.sparsedataconverter import SparseDataConverter
-
-sdc = SparseDataConverter().
-setInputFieldList([""Age"", ""Sex"", ""Marriage"", ""BP"", ""Cholesterol"", ""Na"", ""K"", ""Drug""])
-predictions = sdc.transform(data)
-predictions.show()
-
-"
-E0B0B51CD757048207EEFE4EC8F1E98E967D9E69_7,E0B0B51CD757048207EEFE4EC8F1E98E967D9E69," Binning
-
-You can use this function to derive one or more new binned fields or to obtain the bin definitions used to determine the bin values.
-
-"
-E0B0B51CD757048207EEFE4EC8F1E98E967D9E69_8,E0B0B51CD757048207EEFE4EC8F1E98E967D9E69,"Python example code:
-
-from spss.ml.datapreparation.binning.binning import Binning
-
-binDefinition = BinDefinitions(1, False, True, True, [CutPoint(50.0, False)])
-binField = BinRequest(""integer_field"", ""integer_bin"", binDefinition, None)
-
-params = [binField]
-bining = Binning().setBinRequestsParam(params)
-
-outputDF = bining.transform(inputDF)
-
-"
-E0B0B51CD757048207EEFE4EC8F1E98E967D9E69_9,E0B0B51CD757048207EEFE4EC8F1E98E967D9E69," Hex Binning
-
-You can use this function to calculate and assign hexagonal bins to two fields.
-
-"
-E0B0B51CD757048207EEFE4EC8F1E98E967D9E69_10,E0B0B51CD757048207EEFE4EC8F1E98E967D9E69,"Python example code:
-
-from spss.ml.datapreparation.binning.hexbinning import HexBinning
-from spss.ml.param.binningsettings import HexBinningSetting
-
-params = [HexBinningSetting(""field1_out"", ""field1"", 5, -1.0, 25.0, 5.0),
-HexBinningSetting(""field2_out"", ""field2"", 5, -1.0, 25.0, 5.0)]
-
-hexBinning = HexBinning().setHexBinRequestsParam(params)
-outputDF = hexBinning.transform(inputDF)
-
-"
-E0B0B51CD757048207EEFE4EC8F1E98E967D9E69_11,E0B0B51CD757048207EEFE4EC8F1E98E967D9E69," Complex Sampling
-
-The complexSampling function selects a pseudo-random sample of records from a data source.
-
-The complexSampling function performs stratified sampling of incoming data using simple exact sampling and simple proportional sampling. The stratifying fields are specified as input and the sampling counts or sampling ratio for each of the strata to be sampled must also be provided. Optionally, the record counts for each strata may be provided to improve performance.
-
-"
-E0B0B51CD757048207EEFE4EC8F1E98E967D9E69_12,E0B0B51CD757048207EEFE4EC8F1E98E967D9E69,"Python example code:
-
-from spss.ml.datapreparation.sampling.complexsampling import ComplexSampling
-from spss.ml.datapreparation.params.sampling import RealStrata, Strata, Stratification
-
-transformer = ComplexSampling().
-setRandomSeed(123444).
-setRepeatable(True).
-setStratification(Stratification([""real_field""], [
-Strata(key=RealStrata(11.1)], samplingCount=25),
-Strata(key=RealStrata(2.4)], samplingCount=40),
-Strata(key=RealStrata(12.9)], samplingRatio=0.5)])).
-setFrequencyField(""frequency_field"")
-
-sampled = transformer.transform(unionDF)
-
-"
-E0B0B51CD757048207EEFE4EC8F1E98E967D9E69_13,E0B0B51CD757048207EEFE4EC8F1E98E967D9E69," Count and Sample
-
-The countAndSample function produces a pseudo-random sample having a size approximately equal to the \'samplingCount\' input.
-
-The sampling is accomplished by calling the SamplingComponent with a sampling ratio that's computed as \'samplingCount / totalRecords\' where \'totalRecords\' is the record count of the incoming data.
-
-"
-E0B0B51CD757048207EEFE4EC8F1E98E967D9E69_14,E0B0B51CD757048207EEFE4EC8F1E98E967D9E69,"Python example code:
-
-from spss.ml.datapreparation.sampling.countandsample import CountAndSample
-
-transformer = CountAndSample().setSamplingCount(20000).setRandomSeed(123)
-sampled = transformer.transform(unionDF)
-
-"
-E0B0B51CD757048207EEFE4EC8F1E98E967D9E69_15,E0B0B51CD757048207EEFE4EC8F1E98E967D9E69," MR Sampling
-
-The mrsampling function selects a pseudo-random sample of records from a data source at a specified sampling ratio. The size of the sample will be approximately the specified proportion of the total number of records subject to an optional maximum. The set of records and their total number will vary with random seed. Every record in the data source has the same probability of being selected.
-
-"
-E0B0B51CD757048207EEFE4EC8F1E98E967D9E69_16,E0B0B51CD757048207EEFE4EC8F1E98E967D9E69,"Python example code:
-
-from spss.ml.datapreparation.sampling.mrsampling import MRSampling
-
-transformer = MRSampling().setSamplingRatio(0.5).setRandomSeed(123).setDiscard(True)
-sampled = transformer.transform(unionDF)
-
-"
-E0B0B51CD757048207EEFE4EC8F1E98E967D9E69_17,E0B0B51CD757048207EEFE4EC8F1E98E967D9E69," Sampling Model
-
-The samplingModel function selects a pseudo-random percentage of the subsequence of input records defined by every Nth record for a given step size N. The total sample size may be optionally limited by a maximum.
-
-When the step size is 1, the subsequence is the entire sequence of input records. When the sampling ratio is 1.0, selection becomes deterministic, not pseudo-random.
-
-Note that with distributed data, the samplingModel function applies the selection criteria independently to each data split. The maximum sample size, if any, applies independently to each split and not to the entire data source; the subsequence is started fresh at the start of each split.
-
-"
-E0B0B51CD757048207EEFE4EC8F1E98E967D9E69_18,E0B0B51CD757048207EEFE4EC8F1E98E967D9E69,"Python example code:
-
-from spss.ml.datapreparation.sampling.samplingcomponent import SamplingModel
-
-transformer = SamplingModel().setSamplingRatio(1.0).setSamplingStep(2).setRandomSeed(123).setDiscard(False)
-sampled = transformer.transform(unionDF)
-
-"
-E0B0B51CD757048207EEFE4EC8F1E98E967D9E69_19,E0B0B51CD757048207EEFE4EC8F1E98E967D9E69," Sequential Sampling
-
-The sequentialSampling function is similar to the samplingModel function. It also selects a pseudo-random percentage of the subsequence of input records defined by every Nth record for a given step size N. The total sample size may be optionally limited by a maximum.
-
-When the step size is 1, the subsequence is the entire sequence of input records. When the sampling ratio is 1.0, selection becomes deterministic, not pseudo-random. The main difference between sequentialSampling and samplingModel is that with distributed data, the sequentialSampling function applies the selection criteria to the entire data source, while the samplingModel function applies the selection criteria independently to each data split.
-
-"
-E0B0B51CD757048207EEFE4EC8F1E98E967D9E69_20,E0B0B51CD757048207EEFE4EC8F1E98E967D9E69,"Python example code:
-
-from spss.ml.datapreparation.sampling.samplingcomponent import SequentialSampling
-
-transformer = SequentialSampling().setSamplingRatio(1.0).setSamplingStep(2).setRandomSeed(123).setDiscard(False)
-sampled = transformer.transform(unionDF)
-
-Parent topic:[SPSS predictive analytics algorithms](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-algorithms.html)
-"
-3CA99C6EF4745121C9865AB4119F3AB1B1A3BDB1_0,3CA99C6EF4745121C9865AB4119F3AB1B1A3BDB1," Data sources for scoring batch deployments
-
-You can supply input data for a batch deployment job in several ways, including directly uploading a file or providing a link to database tables. The types of allowable input data vary according to the type of deployment job that you are creating.
-
-For supported input types by framework, refer to [Batch deployment input details by framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-by-framework.html).
-
-Input data can be supplied to a batch job as [inline data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-data-sources.html?context=cdpaas&locale=eninline_data) or [data reference](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-data-sources.html?context=cdpaas&locale=endata_ref).
-
-"
-3CA99C6EF4745121C9865AB4119F3AB1B1A3BDB1_1,3CA99C6EF4745121C9865AB4119F3AB1B1A3BDB1," Available input types for batch deployments by framework and asset type
-
-
-
-Available input types for batch deployments by framework and asset type
-
- Framework Batch deployment type
-
- Decision optimization Reference
- Python function Inline
- PyTorch Inline and Reference
- Tensorflow Inline and Reference
- Scikit-learn Inline and Reference
- Python scripts Reference
- Spark MLlib Inline and Reference
- SPSS Inline and Reference
- XGBoost Inline and Reference
-
-
-
-"
-3CA99C6EF4745121C9865AB4119F3AB1B1A3BDB1_2,3CA99C6EF4745121C9865AB4119F3AB1B1A3BDB1," Inline data description
-
-Inline type input data for batch processing is specified in the batch deployment job's payload. For example, you can pass a CSV file as the deployment input in the UI or as a value for the scoring.input_data parameter in a notebook. When the batch deployment job is completed, the output is written to the corresponding job's scoring.predictions metadata parameter.
-
-"
-3CA99C6EF4745121C9865AB4119F3AB1B1A3BDB1_3,3CA99C6EF4745121C9865AB4119F3AB1B1A3BDB1," Data reference description
-
-Input and output data of type data reference that is used for batch processing can be stored:
-
-
-
-* In a remote data source, like a Cloud Object Storage bucket or an SQL or no-SQL database.
-* As a local or managed data asset in a deployment space.
-
-
-
-Details for data references include:
-
-
-
-* Data source reference type depends on the asset type. Refer to Data source reference types section in [Adding data assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).
-* For data_asset type, the reference to input data must be specified as a /v2/assets href in the input_data_references.location.href parameter in the deployment job's payload. The data asset that is specified is a reference to a local or a connected data asset. Also, if the batch deployment job's output data must be persisted in a remote data source, the references to output data must be specified as a /v2/assets href in output_data_reference.location.href parameter in the deployment job's payload.
-* Any input and output data_asset references must be in the same space ID as the batch deployment.
-* If the batch deployment job's output data must be persisted in a deployment space as a local asset, output_data_reference.location.name must be specified. When the batch deployment job is completed successfully, the asset with the specified name is created in the space.
-* Output data can contain information on where in a remote database the data asset is located. In this situation, you can specify whether to append the batch output to the table or truncate the table and update the output data. Use the output_data_references.location.write_mode parameter to specify the values truncate or append.
-
-
-
-* Specifying truncate as value truncates the table and inserts the batch output data.
-* Specifying append as value appends the batch output data to the remote database table.
-"
-3CA99C6EF4745121C9865AB4119F3AB1B1A3BDB1_4,3CA99C6EF4745121C9865AB4119F3AB1B1A3BDB1,"* write_mode is applicable only for the output_data_references parameter.
-* write_mode is applicable only for remote database-related data assets. This parameter is not applicable for a local data asset or a Cloud Object Storage based data asset.
-
-
-
-
-
-"
-3CA99C6EF4745121C9865AB4119F3AB1B1A3BDB1_5,3CA99C6EF4745121C9865AB4119F3AB1B1A3BDB1," Example data_asset payload
-
-""input_data_references"": [{
-""type"": ""data_asset"",
-""connection"": {
-},
-""location"": {
-""href"": ""/v2/assets/?space_id=""
-}
-}]
-
-"
-3CA99C6EF4745121C9865AB4119F3AB1B1A3BDB1_6,3CA99C6EF4745121C9865AB4119F3AB1B1A3BDB1," Example connection_asset payload
-
-""input_data_references"": [{
-""type"": ""connection_asset"",
-""connection"": {
-""id"": """"
-},
-""location"": {
-""bucket"": """",
-""file_name"": ""/""
-}
-
-}]
-
-"
-3CA99C6EF4745121C9865AB4119F3AB1B1A3BDB1_7,3CA99C6EF4745121C9865AB4119F3AB1B1A3BDB1," Structuring the input data
-
-How you structure the input data, also known as the payload, for the batch job depends on the framework for the asset you are deploying.
-
-A .csv input file or other structured data formats must be formatted to match the schema of the asset. List the column names (fields) in the first row and values to be scored in subsequent rows. For example, see the following code snippet:
-
-PassengerId, Pclass, Name, Sex, Age, SibSp, Parch, Ticket, Fare, Cabin, Embarked
-1,3,""Braund, Mr. Owen Harris"",0,22,1,0,A/5 21171,7.25,,S
-4,1,""Winslet, Mr. Leo Brown"",1,65,1,0,B/5 200763,7.50,,S
-
-A JSON input file must provide the same information on fields and values, by using this format:
-
-{""input_data"":[{
-""fields"": , , ...],
-""values"": , , ...]]
-}]}
-
-For example:
-
-{""input_data"":[{
-""fields"": ""PassengerId"",""Pclass"",""Name"",""Sex"",""Age"",""SibSp"",""Parch"",""Ticket"",""Fare"",""Cabin"",""Embarked""],
-""values"": 1,3,""Braund, Mr. Owen Harris"",0,22,1,0,""A/5 21171"",7.25,null,""S""],
-4,1,""Winselt, Mr. Leo Brown"",1,65,1,0,""B/5 200763"",7.50,null,""S""]]
-}]}
-
-"
-3CA99C6EF4745121C9865AB4119F3AB1B1A3BDB1_8,3CA99C6EF4745121C9865AB4119F3AB1B1A3BDB1," Preparing a payload that matches the schema of an existing model
-
-Refer to this sample code:
-
-model_details = client.repository.get_details("""") retrieves details and includes schema
-columns_in_schema = []
-for i in range(0, len(model_details['entity']['input'].get('fields'))):
-columns_in_schema.append(model_details['entity']['input'].get('fields')[i])
-
-X = X[columns_in_schema] where X is a pandas dataframe that contains values to be scored
-(...)
-scoring_values = X.values.tolist()
-array_of_input_fields = X.columns.tolist()
-payload_scoring = {""input_data"": [{""fields"": array_of_input_fields],""values"": scoring_values}]}
-
-Parent topic:[Creating a batch deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-details.html)
-"
-653FFEDFAC00F360750F776A3A60F6AAD38ED954_0,653FFEDFAC00F360750F776A3A60F6AAD38ED954," Creating batch deployments in Watson Machine Learning
-
-A batch deployment processes input data from a file, data connection, or connected data in a storage bucket, and writes the output to a selected destination.
-
-"
-653FFEDFAC00F360750F776A3A60F6AAD38ED954_1,653FFEDFAC00F360750F776A3A60F6AAD38ED954," Before you begin
-
-
-
-1. Save a model to a deployment space.
-2. Promote or add the input file for the batch deployment to the space. For more information, see [Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html).
-
-
-
-"
-653FFEDFAC00F360750F776A3A60F6AAD38ED954_2,653FFEDFAC00F360750F776A3A60F6AAD38ED954," Supported frameworks
-
-Batch deployment is supported for these frameworks and asset types:
-
-
-
-* Decision Optimization
-* PMML
-* Python functions
-* PyTorch-Onnx
-* Tensorflow
-* Scikit-learn
-* Python scripts
-* Spark MLlib
-* SPSS
-* XGBoost
-
-
-
-Notes:
-
-
-
-* You can create batch deployments only of Python functions and models based on the PMML framework programmatically.
-* Your list of deployment jobs can contain two types of jobs: WML deployment job and WML batch deployment.
-* When you create a batch deployment (through the UI or programmatically), an extra default deployment job is created of the type WML deployment job. The extra job is a parent job that stores all deployment runs generated for that batch deployment that were triggered by the Watson Machine Learning API.
-* The standard WML batch deployment type job is created only when you create a deployment from the UI. You cannot create a WML batch deployment type job by using the API.
-* The limitations of WML deployment job are as follows:
-
-
-
-* The job cannot be edited.
-* The job cannot be deleted unless the associated batch deployment is deleted.
-* The job doesn't allow scheduling.
-* The job doesn't allow notifications.
-* The job doesn't allow changing retention settings.
-
-
-
-
-
-For more information, see [Data sources for scoring batch deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-data-sources.html). For more information, see [Batch deployment input details by framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-by-framework.html)
-
-"
-653FFEDFAC00F360750F776A3A60F6AAD38ED954_3,653FFEDFAC00F360750F776A3A60F6AAD38ED954," Creating a batch deployment
-
-To create a batch deployment:
-
-
-
-1. From the deployment space, click the name of the saved model that you want to deploy. The model detail page opens.
-2. Click New deployment.
-3. Choose Batch as the deployment type.
-4. Enter a name and an optional description for your deployment.
-5. Select a [hardware specification](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-hardware-configs.html).
-6. Click Create. When status changes to Deployed, your deployment is created.
-
-
-
-Note: Additionally, you can create a batch deployment by using any of these interfaces:
-
-
-
-* Watson Studio user interface, from an Analytics deployment space
-* Watson Machine Learning Python Client
-* Watson Machine Learning REST APIs
-
-
-
-"
-653FFEDFAC00F360750F776A3A60F6AAD38ED954_4,653FFEDFAC00F360750F776A3A60F6AAD38ED954," Creating batch deployments programmatically
-
-See [Machine learning samples and examples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html) for sample notebooks that demonstrate creating batch deployments that use the Watson Machine Learning [REST API](https://cloud.ibm.com/apidocs/machine-learning) and Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/).
-
-"
-653FFEDFAC00F360750F776A3A60F6AAD38ED954_5,653FFEDFAC00F360750F776A3A60F6AAD38ED954," Viewing deployment details
-
-Click the name of a deployment to view the details.
-
-
-
-You can view the configuration details such as hardware and software specifications. You can also get the deployment ID, which you can use in API calls from an endpoint. For more information, see [Looking up a deployment endpoint](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-online.html).
-
-"
-653FFEDFAC00F360750F776A3A60F6AAD38ED954_6,653FFEDFAC00F360750F776A3A60F6AAD38ED954," Learn more
-
-
-
-* For more information, see [Creating jobs in deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-jobs.html).
-* Refer to [Machine Learning samples and examples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html) for links to sample notebooks that demonstrate creating batch deployments that use the Watson Machine Learning [REST API](https://cloud.ibm.com/apidocs/machine-learning-cp) and Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/).
-
-
-
-Parent topic:[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html)
-"
-7F755B81AB25CBD0950D528A240B12262FE6CA08_0,7F755B81AB25CBD0950D528A240B12262FE6CA08," Batch deployment input details for AutoAI models
-
-Follow these rules when you are specifying input details for batch deployments of AutoAI models.
-
-Data type summary table:
-
-
-
- Data Description
-
- Type inline, data references
- File formats CSV
-
-
-
-"
-7F755B81AB25CBD0950D528A240B12262FE6CA08_1,7F755B81AB25CBD0950D528A240B12262FE6CA08," Data Sources
-
-Input/output data references:
-
-
-
-* Local/managed assets from the space
-* Connected (remote) assets: Cloud Object Storage
-
-
-
-Notes:
-
-
-
-* For connections of type [Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html) , you must configure Access key and Secret key, also known as [HMAC credentials](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-uhc-hmac-credentials-main).{: new_window}
-* Your training data source can differ from your deployment data source, but the schema of the data must match or the deployment will fail. For example, you can train an experiment by using data from a Snowflake database and deploy by using input data from a Db2 database if the schema is an exact match.
-* The environment variables parameter of deployment jobs is not applicable.
-
-
-
-If you are specifying input/output data references programmatically:
-
-
-
-* Data source reference type depends on the asset type. Refer to the Data source reference types section in [Adding data assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).
-* For AutoAI assets, if the input or output data reference is of type connection_asset and the remote data source is a database then location.table_name and location.schema_name are required parameters. For example:
-
-
-
-""input_data_references"": [{
-""type"": ""connection_asset"",
-""connection"": {
-""id"":
-},
-""location"": {
-""table_name"": ,
-""schema_name"":
-
-}
-"
-7F755B81AB25CBD0950D528A240B12262FE6CA08_2,7F755B81AB25CBD0950D528A240B12262FE6CA08,"}]
-
-Parent topic:[Batch deployment input details by framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-by-framework.html)
-"
-4C6242D9F2B3E125780FDF188F994270A6E2340D_0,4C6242D9F2B3E125780FDF188F994270A6E2340D," Batch deployment input details by framework
-
-Various data types are supported as input for batch deployments, depending on your specific model type.
-
-For details, follow these links:
-
-
-
-* [AutoAI models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-autoai.html)
-* [Decision optimization models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-do.html)
-* [Python functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-py-function.html)
-* [Python scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-py-script.html)
-* [Pytorch models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-pytorch.html)
-* [Scikit-Learn and XGBoost models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-scikit.html)
-* [Spark models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-spark.html)
-* [SPSS models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-spss.html)
-* [Tensorflow models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-tensorflow.html)
-
-
-
-"
-4C6242D9F2B3E125780FDF188F994270A6E2340D_1,4C6242D9F2B3E125780FDF188F994270A6E2340D,"For more information, see [Using multiple inputs for an SPSS job](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-SPSS-multiple-input.html).
-
-Parent topic:[Creating a batch deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-details.html)
-"
-722D44681192F1766A0B1BACC328E719526E8DE2_0,722D44681192F1766A0B1BACC328E719526E8DE2," Batch deployment input details for Decision Optimization models
-
-Follow these rules when you are specifying input details for batch deployments of Decision Optimization models.
-
-Data type summary table:
-
-
-
- Data Description
-
- Type inline and data references
- File formats Refer to [Model input and output data file formats](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/ModelIODataDefn.html).
-
-
-
-"
-722D44681192F1766A0B1BACC328E719526E8DE2_1,722D44681192F1766A0B1BACC328E719526E8DE2," Data sources
-
-Input/output inline data:
-
-
-
-* Inline input data is converted to CSV files and used by the engine.
-* CSV output data is converted to output inline data.
-* Base64-encoded raw data is supported as input and output.
-
-
-
-Input/output data references:
-
-
-
-* Tabular data is loaded from CSV, XLS, XLSX, JSON files or database data sources supported by the WDP connection library, converted to CSV files, and used by the engine.
-* CSV output data is converted to tabular data and saved to CSV, XLS, XLSX, JSON files, or database data sources supported by the WDP connection library.
-* Raw data can be loaded and saved from or to any file data sources that are supported by the WDP connection library.
-* No support for compressed files.
-* The environment variables parameter of deployment jobs is not applicable.
-
-
-
-If you are specifying input/output data references programmatically:
-
-
-
-* Data source reference type depends on the asset type. Refer to the Data source reference types section in [Adding data assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).
-* For S3 or Db2, connection details must be specified in the input_data_references.connection parameter, in the deployment job’s payload.
-* For S3 or Db2, location details such as table name, bucket name, or path must be specified in the input_data_references.location.path parameter, in the deployment job’s payload.
-* For data_asset, a managed asset can be updated or created. For creation, you can set the name and description for the created asset.
-* You can use a pattern in ID or connection properties. For example, see the following code snippet:
-
-
-
-* To collect all output CSV as inline data:
-
-""output_data"": [ { ""id"":""..csv""}]
-* To collect job output in a particular S3 folder:
-
-"
-722D44681192F1766A0B1BACC328E719526E8DE2_2,722D44681192F1766A0B1BACC328E719526E8DE2,"""output_data_references"": [ {""id"":""."", ""type"": ""s3"", ""connection"": {...}, ""location"": { ""bucket"": ""do-wml"", ""path"": ""${job_id}/${attachment_name}"" }}]
-
-
-
-
-
-Note:Support for s3 and db2 values for scoring.input_data_references.type and scoring.output_data_references.type is deprecated and will be removed in the future. Use connection_asset or data_asset instead. See the documentation for the Watson Machine Learning [REST API](https://cloud.ibm.com/apidocs/machine-learning) or Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/){: new_window} for details and examples.
-
-For more information, see [Model input and output data adaptation](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/ModelIODataDefn.html).
-
-Parent topic:[Batch deployment input details by framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-by-framework.html)
-"
-4F89E6B2B76E64B9618F799611DD1B053D045222_0,4F89E6B2B76E64B9618F799611DD1B053D045222," Batch deployment input details for Python functions
-
-Follow these rules when you are specifying input details for batch deployments of Python functions.
-
-Data type summary table:
-
-
-
- Data Description
-
- Type inline
- File formats N/A
-
-
-
-You can deploy Python functions in Watson Machine Learning the same way that you can deploy models. Your tools and apps can use the Watson Machine Learning Python client or REST API to send data to your deployed functions in the same way that they send data to deployed models. Deploying functions gives you the ability to:
-
-
-
-* Hide details (such as credentials)
-* Preprocess data before you pass it to models
-* Handle errors
-* Include calls to multiple models All of these actions take place within the deployed function, instead of in your application.
-
-
-
-"
-4F89E6B2B76E64B9618F799611DD1B053D045222_1,4F89E6B2B76E64B9618F799611DD1B053D045222," Data sources
-
-If you are specifying input/output data references programmatically:
-
-
-
-* Data source reference type depends on the asset type. Refer to the Data source reference types section in [Adding data assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).
-
-
-
-Notes:
-
-
-
-* For connections of type [Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html) or [Cloud Object Storage (infrastructure)](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos-infra.html), you must configure Access key and Secret key, also known as [HMAC credentials](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-uhc-hmac-credentials-main).
-* The environment variables parameter of deployment jobs is not applicable.
-* Make sure that the output is structured to match the output schema that is described in [Execute a synchronous deployment prediction](https://cloud.ibm.com/apidocs/machine-learningdeployments-compute-predictions).
-
-
-
-"
-4F89E6B2B76E64B9618F799611DD1B053D045222_2,4F89E6B2B76E64B9618F799611DD1B053D045222," Learn more
-
-[Deploying Python functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function.html).
-
-Parent topic:[Batch deployment input details by framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-by-framework.html)
-"
-85A8F36D819B12B355508090E787F4A182686394_0,85A8F36D819B12B355508090E787F4A182686394," Batch deployment input details for Python scripts
-
-Follow these rules when you specify input details for batch deployments of Python scripts.
-
-Data type summary table:
-
-
-
- Data Description
-
- Type Data references
- File formats Any
-
-
-
-"
-85A8F36D819B12B355508090E787F4A182686394_1,85A8F36D819B12B355508090E787F4A182686394," Data sources
-
-Input or output data references:
-
-
-
-* Local or managed assets from the space
-* Connected (remote) assets: Cloud Object Storage
-
-
-
-Notes:
-
-
-
-* For connections of type [Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html) or [Cloud Object Storage(infrastructure)](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos-infra.html), you must configure Access key and Secret key, also known as [HMAC credentials](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-uhc-hmac-credentials-main).
-
-
-
-If you are specifying input/output data references programmatically:
-
-
-
-* Data source reference type depends on the asset type. For more information, see Data source reference types section in [Adding data assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).
-* You can specify the environment variables that are required for running the Python Script as 'key': 'value' pairs in scoring.environment_variables. The key must be the name of an environment variable and the value must be the corresponding value of the environment variable.
-* The deployment job's payload is saved as a JSON file in the deployment container where you run the Python script. The Python script can access the full path file name of the JSON file that uses the JOBS_PAYLOAD_FILE environment variable.
-* If input data is referenced as a local or managed data asset, deployment service downloads the input data and places it in the deployment container where you run the Python script. You can access the location (path) of the downloaded input data through the BATCH_INPUT_DIR environment variable.
-"
-85A8F36D819B12B355508090E787F4A182686394_2,85A8F36D819B12B355508090E787F4A182686394,"* For input data references (data asset or connection asset), downloading of the data must be handled by the Python script. If a connected data asset or a connection asset is present in the deployment jobs payload, you can access it using the JOBS_PAYLOAD_FILE environment variable that contains the full path to the deployment job's payload that is saved as a JSON file.
-* If output data must be persisted as a local or managed data asset in a space, you can specify the name of the asset to be created in scoring.output_data_reference.location.name. As part of a Python script, output data can be placed in the path that is specified by the BATCH_OUTPUT_DIR environment variable. The deployment service compresses the data to compressed file format and upload it in the location that is specified in BATCH_OUTPUT_DIR.
-* These environment variables are set internally. If you try to set them manually, your values are overridden:
-
-
-
-* BATCH_INPUT_DIR
-* BATCH_OUTPUT_DIR
-* JOBS_PAYLOAD_FILE
-
-
-
-* If output data must be saved in a remote data store, you must specify the reference of the output data reference (for example, a data asset or a connected data asset) in output_data_reference.location.href. The Python script must take care of uploading the output data to the remote data source. If a connected data asset or a connection asset reference is present in the deployment jobs payload, you can access it using the JOBS_PAYLOAD_FILE environment variable, which contains the full path to the deployment job's payload that is saved as a JSON file.
-* If the Python script does not require any input or output data references to be specified in the deployment job payload, then do not provide the scoring.input_data_references and scoring.output_data_references objects in the payload.
-
-
-
-"
-85A8F36D819B12B355508090E787F4A182686394_3,85A8F36D819B12B355508090E787F4A182686394," Learn more
-
-[Deploying scripts in Watson Machine Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-script.html).
-
-Parent topic:[Batch deployment input details by framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-by-framework.html)
-"
-27A861059A73E83BC02C633EE194DAC6F8ACE374_0,27A861059A73E83BC02C633EE194DAC6F8ACE374," Batch deployment input details for Pytorch models
-
-Follow these rules when you are specifying input details for batch deployments of Pytorch models.
-
-Data type summary table:
-
-
-
- Data Description
-
- Type inline, data references
- File formats .zip archive that contains JSON files
-
-
-
-"
-27A861059A73E83BC02C633EE194DAC6F8ACE374_1,27A861059A73E83BC02C633EE194DAC6F8ACE374," Data sources
-
-Input or output data references:
-
-
-
-* Local or managed assets from the space
-* Connected (remote) assets: Cloud Object Storage
-
-
-
-If you are specifying input/output data references programmatically:
-
-
-
-* Data source reference type depends on the asset type. Refer to the Data source reference types section in [Adding data assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).
-* If you deploy Pytorch models with ONNX format, specify the keep_initializers_as_inputs=True flag and set opset_version to 9 (always set opset_version to the most recent version that is supported by the deployment runtime).
-
-torch.onnx.export(net, x, 'lin_reg1.onnx', verbose=True, keep_initializers_as_inputs=True, opset_version=9)
-
-
-
-Note: The environment variables parameter of deployment jobs is not applicable.
-
-Parent topic:[Batch deployment input details by framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-by-framework.html)
-"
-CDF460B2BB910F74723297BCB8E940BF370C6FFD_0,CDF460B2BB910F74723297BCB8E940BF370C6FFD," Batch deployment input details for Scikit-learn and XGBoost models
-
-Follow these rules when you are specifying input details for batch deployments of Scikit-learn and XGBoost models.
-
-Data type summary table:
-
-
-
- Data Description
-
- Type inline, data references
- File formats CSV, .zip archive that contains CSV files
-
-
-
-"
-CDF460B2BB910F74723297BCB8E940BF370C6FFD_1,CDF460B2BB910F74723297BCB8E940BF370C6FFD," Data source
-
-If you are specifying input/output data references programmatically:
-
-
-
-* Data source reference type depends on the asset type. Refer to the Data source reference types section in [Adding data assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).
-
-
-
-Notes:
-
-
-
-* The environment variables parameter of deployment jobs is not applicable.
-* For connections of type [Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html) or [Cloud Object Storage (infrastructure)](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos-infra.html), you must configure Access key and Secret key, also known as [HMAC credentials](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-uhc-hmac-credentials-main),
-
-
-
-Parent topic:[Batch deployment input details by framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-by-framework.html)
-"
-ADBD308EEB761B4A1516D49F68C880EAF3F08D78,ADBD308EEB761B4A1516D49F68C880EAF3F08D78," Batch deployment input details for Spark models
-
-Follow these rules when you are specifying input details for batch deployments of Spark models.
-
-Data type summary table:
-
-
-
- Data Description
-
- Type Inline
- File formats N/A
-
-
-
-Parent topic:[Batch deployment input details by framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-by-framework.html)
-"
-62BF74E391CFE1696E5218B3DF0926B735A4788F_0,62BF74E391CFE1696E5218B3DF0926B735A4788F," Batch deployment input details for SPSS models
-
-Follow these rules when you are specifying input details for batch deployments of SPSS models.
-
-Data type summary table:
-
-
-
- Data Description
-
- Type inline, data references
- File formats CSV
-
-
-
-"
-62BF74E391CFE1696E5218B3DF0926B735A4788F_1,62BF74E391CFE1696E5218B3DF0926B735A4788F," Data sources
-
-Input or output data references:
-
-
-
-* Local or managed assets from the space
-* Connected (remote) assets from these sources:
-
-
-
-* [Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html)
-* [Db2 Warehouse](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2-wh.html)
-* [Db2](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-db2.html)
-* [Google Big-Query (googlebq)](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-bigquery.html)
-* [MySQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-mysql.html)
-* [Microsoft SQL Server](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-sql-server.html)
-* [Teradata (teradata)](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-teradata.html)
-* [PostgreSQL](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-postgresql.html)
-* [Oracle](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-oracle.html)
-* [Snowflake](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-snowflake.html)
-* [Informix](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-informix.html)
-"
-62BF74E391CFE1696E5218B3DF0926B735A4788F_2,62BF74E391CFE1696E5218B3DF0926B735A4788F,"* [Netezza Performance Server](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-puredata.html)
-
-
-
-
-
-Notes:
-
-
-
-* For connections of type [Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html) or [Cloud Object Storage (infrastructure)](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos-infra.html), you must configure Access key and Secret key, also known as [HMAC credentials](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-uhc-hmac-credentials-main).
-* For SPSS deployments, these data sources are not compliant with Federal Information Processing Standard (FIPS):
-
-
-
-* Cloud Object Storage
-* Cloud Object Storage (infrastructure)
-* Storage volumes
-
-
-
-
-
-If you are specifying input/output data references programmatically:
-
-
-
-* Data source reference type depends on the asset type. Refer to the Data source reference types section in [Adding data assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).
-* SPSS jobs support multiple data source inputs and a single output. If the schema is not provided in the model metadata at the time of saving the model, you must enter id manually and select a data asset for each connection. If the schema is provided in model metadata, id names are populated automatically by using metadata. You select the data asset for the corresponding ids in Watson Studio. For more information, see [Using multiple data sources for an SPSS job](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-SPSS-multiple-input.html).
-"
-62BF74E391CFE1696E5218B3DF0926B735A4788F_3,62BF74E391CFE1696E5218B3DF0926B735A4788F,"* To create a local or managed asset as an output data reference, the name field must be specified for output_data_reference so that a data asset is created with the specified name. Specifying an href that refers to an existing local data asset is not supported. Note:
-
-
-
-Connected data assets that refer to supported databases can be created in the output_data_references only when the input_data_references also refers to one of these sources.
-
-
-
-* Table names that are provided in input and output data references are ignored. Table names that are referred in the SPSS model stream are used during the batch deployment.
-* Use SQL PushBack to generate SQL statements for IBM SPSS Modeler operations that can be “pushed back” to or run in the database to improve performance. SQL Pushback is only supported by:
-
-
-
-* Db2
-* SQL Server
-* Netezza Performance Server
-
-
-
-* If you are creating a job by using the Python client, you must provide the connection name that is referred in the data nodes of the SPSS model stream in the id field, and the data asset href in location.href for input/output data references of the deployment jobs payload. For example, you can construct the job payload like this:
-
-job_payload_ref = {
-client.deployments.ScoringMetaNames.INPUT_DATA_REFERENCES: [{
-""id"": ""DB2Connection"",
-""name"": ""drug_ref_input1"",
-""type"": ""data_asset"",
-""connection"": {},
-""location"": {
-""href"":
-}
-},{
-""id"": ""Db2 WarehouseConn"",
-""name"": ""drug_ref_input2"",
-""type"": ""data_asset"",
-""connection"": {},
-""location"": {
-""href"":
-}
-}],
-client.deployments.ScoringMetaNames.OUTPUT_DATA_REFERENCE: {
-""type"": ""data_asset"",
-""connection"": {},
-""location"": {
-"
-62BF74E391CFE1696E5218B3DF0926B735A4788F_4,62BF74E391CFE1696E5218B3DF0926B735A4788F,"""href"":
-}
-}
-}
-
-
-
-"
-62BF74E391CFE1696E5218B3DF0926B735A4788F_5,62BF74E391CFE1696E5218B3DF0926B735A4788F," Using connected data for an SPSS Modeler flow job
-
-An SPSS Modeler flow can have a number of input and output data nodes. When you connect to a supported database as an input and output data source, the connection details are selected from the input and output data reference, but the input and output table names are selected from the SPSS model stream file.
-
-For batch deployment of an SPSS model that uses a database connection, make sure that the modeler stream Input and Output nodes are Data Asset nodes. In SPSS Modeler, the Data Asset nodes must be configured with the table names that are used later for job predictions. Set the nodes and table names before you save the model to Watson Machine Learning. When you are configuring the Data Asset nodes, choose the table name from the Connections; choosing a Data Asset that is created in your project is not supported.
-
-When you are creating the deployment job for an SPSS model, make sure that the types of data sources are the same for input and output. The configured table names from the model stream are passed to the batch deployment and the input/output table names that are provided in the connected data are ignored.
-
-For batch deployment of an SPSS model that uses a Cloud Object Storage connection, make sure that the SPSS model stream has single input and output data asset nodes.
-
-"
-62BF74E391CFE1696E5218B3DF0926B735A4788F_6,62BF74E391CFE1696E5218B3DF0926B735A4788F," Supported combinations of input and output sources
-
-You must specify compatible sources for the SPSS Modeler flow input, the batch job input, and the output. If you specify an incompatible combination of types of data sources, you get an error when you try to run the batch job.
-
-These combinations are supported for batch jobs:
-
-
-
- SPSS model stream input/output Batch deployment job input Batch deployment job output
-
- File Local, managed, or referenced data asset or connection asset (file) Remote data asset or connection asset (file) or name
- Database Remote data asset or connection asset (database) Remote data asset or connection asset (database)
-
-
-
-"
-62BF74E391CFE1696E5218B3DF0926B735A4788F_7,62BF74E391CFE1696E5218B3DF0926B735A4788F," Specifying multiple inputs
-
-If you are specifying multiple inputs for an SPSS model stream deployment with no schema, specify an ID for each element in input_data_references.
-
-For more information, see [Using multiple data sources for an SPSS job](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-SPSS-multiple-input.html).
-
-In this example, when you create the job, provide three input entries with IDs: sample_db2_conn, sample_teradata_conn, and sample_googlequery_conn and select the required connected data for each input.
-
-{
-""deployment"": {
-""href"": ""/v4/deployments/""
-},
-""scoring"": {
-""input_data_references"": [{
-""id"": ""sample_db2_conn"",
-""name"": ""DB2 connection"",
-""type"": ""data_asset"",
-""connection"": {},
-""location"": {
-""href"": ""/v2/assets/?space_id=""
-},
-},
-{
-""id"": ""sample_teradata_conn"",
-""name"": ""Teradata connection"",
-""type"": ""data_asset"",
-""connection"": {},
-""location"": {
-""href"": ""/v2/assets/?space_id=""
-},
-},
-{
-""id"": ""sample_googlequery_conn"",
-""name"": ""Google bigquery connection"",
-""type"": ""data_asset"",
-""connection"": {},
-""location"": {
-""href"": ""/v2/assets/?space_id=""
-},
-}],
-""output_data_references"": {
-""id"": ""sample_db2_conn"",
-""type"": ""data_asset"",
-""connection"": {},
-""location"": {
-"
-62BF74E391CFE1696E5218B3DF0926B735A4788F_8,62BF74E391CFE1696E5218B3DF0926B735A4788F,"""href"": ""/v2/assets/?space_id=""
-},
-}
-}
-
-Note: The environment variables parameter of deployment jobs is not applicable.
-
-Parent topic:[Batch deployment input details by framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-by-framework.html)
-"
-7D385692A31E1E88E675AF0B91F98F55797BC02D_0,7D385692A31E1E88E675AF0B91F98F55797BC02D," Batch deployment input details for Tensorflow models
-
-Follow these rules when you are specifying input details for batch deployments of Tensorflow models.
-
-Data type summary table:
-
-
-
- Data Description
-
- Type Inline or data references
- File formats .zip archive that contains JSON files
-
-
-
-"
-7D385692A31E1E88E675AF0B91F98F55797BC02D_1,7D385692A31E1E88E675AF0B91F98F55797BC02D," Data sources
-
-Input or output data references:
-
-
-
-* Local or managed assets from the space
-* Connected (remote) assets: Cloud Object Storage
-
-
-
-If you are specifying input/output data references programmatically:
-
-
-
-* Data source reference type depends on the asset type. Refer to the Data source reference types section in [Adding data assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html).
-
-
-
-Notes:
-
-
-
-* The environment variables parameter of deployment jobs is not applicable.
-* For connections of type [Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html) or [Cloud Object Storage (infrastructure)](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos-infra.html), you must configure Access key and Secret key, also known as [HMAC credentials](https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-uhc-hmac-credentials-main).
-
-
-
-Parent topic:[Batch deployment input details by framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-input-by-framework.html)
-"
-09897DCF1128D66144D2B165564C228C16CD5EC5_0,09897DCF1128D66144D2B165564C228C16CD5EC5," Deploying foundation model assets
-
-Deploy foundation model assets to test the assets, put them into production, and monitor them.
-
-After you save a prompt template as a project asset, you can promote it to a deployment space. A deployment space is used to organize the assets for deployments and to manage access to deployed assets. Use a Pre-production space to test and validate assets, and use a Production space for deploying assets for productive use.
-
-For details, see [Deploying a prompt template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/prompt-template-deploy.html).
-
-"
-09897DCF1128D66144D2B165564C228C16CD5EC5_1,09897DCF1128D66144D2B165564C228C16CD5EC5," Learn more
-
-
-
-* [Tracking prompt templates ](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-track-prompt-temp.html)
-* [Evaluating a prompt template in a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-eval-prompt-spaces.html)
-
-
-
-Parent topic:[Deploying and managing assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/wmls/wmls-deploy-overview.html)
-"
-F31A520A8C2C1B9C7F80B14EBCD096BB1121D53D_0,F31A520A8C2C1B9C7F80B14EBCD096BB1121D53D," Managing deployment jobs
-
-A job is a way of running a batch deployment, script, or notebook in Watson Machine Learning. You can choose to run a job manually or on a schedule that you specify. After you create one or more jobs, you can view and manage them from the Jobs tab of your deployment space.
-
-From the Jobs tab of your space, you can:
-
-
-
-* See the list of the jobs in your space
-* View the details of each job. You can change the schedule settings of a job and pick a different environment template.
-* Monitor job runs
-* Delete jobs
-
-
-
-See the following sections for various aspects of job management:
-
-
-
-* [Creating a job for a batch deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-jobs.html?context=cdpaas&locale=encreate-jobs-batch)
-* [Viewing jobs in a space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-jobs.html?context=cdpaas&locale=enviewing-jobs-in-a-space)
-* [Managing job metadata retention ](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-jobs.html?context=cdpaas&locale=endelete-jobs)
-
-
-
-"
-F31A520A8C2C1B9C7F80B14EBCD096BB1121D53D_1,F31A520A8C2C1B9C7F80B14EBCD096BB1121D53D," Creating a job for a batch deployment
-
-Important: You must have an existing batch deployment to create a batch job.
-
-To learn how to create a job for a batch deployment, see [Creating jobs in a batch deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-jobs.html).
-
-"
-F31A520A8C2C1B9C7F80B14EBCD096BB1121D53D_2,F31A520A8C2C1B9C7F80B14EBCD096BB1121D53D," Viewing jobs in a space
-
-You can view all of the jobs that exist for your deployment space from the Jobs page. You can also delete a job.
-
-To view the details of a specific job, click the job. From the job's details page, you can do the following:
-
-
-
-* View the runs for that job and the status of each run. If a run failed, you can select the run and view the log tail or download the entire log file to help you troubleshoot the run. A failed run might be related to a temporary connection or environment problem. Try running the job again. If the job still fails, you can send the log to Customer Support.
-* When a job is running, a progress indicator on the information page displays information about relative progress of the run. You can use the progress indicator to monitor a long run.
-* Edit schedule settings or pick another environment template.
-* Run the job manually by clicking the run icon from the job action bar. You must deselect the schedule to run the job manually.
-
-
-
-"
-F31A520A8C2C1B9C7F80B14EBCD096BB1121D53D_3,F31A520A8C2C1B9C7F80B14EBCD096BB1121D53D," Managing job metadata retention
-
-The Watson Machine Learning plan that is associated with your IBM Cloud account sets limits on the number of running and stored deployments that you can create. If you exceed your limit, you cannot create new deployments until you delete existing deployments or upgrade your plan. For more information, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html).
-
-"
-F31A520A8C2C1B9C7F80B14EBCD096BB1121D53D_4,F31A520A8C2C1B9C7F80B14EBCD096BB1121D53D," Managing metadata retention and deletion programmatically
-
-If you are managing a job programmatically by using the Python client or REST API, you can retrieve metadata from the deployment endpoint by using the GET method during the 30 days.
-
-To keep the metadata for more or less than 30 days, change the query parameter from the default of retention=30 for the POST method to override the default and preserve the metadata.
-
-Note:Changing the value to retention=-1 cancels the auto-delete and preserves the metadata.
-
-To delete a job programmatically, specify the query parameter hard_delete=true for the Watson Machine Learning DELETE method to completely remove the job metadata.
-
-The following example shows how to use DELETE method:
-
-DELETE /ml/v4/deployment_jobs/{JobsID}
-
-"
-F31A520A8C2C1B9C7F80B14EBCD096BB1121D53D_5,F31A520A8C2C1B9C7F80B14EBCD096BB1121D53D," Learn from samples
-
-Refer to [Machine learning samples and examples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html) for links to sample notebooks that demonstrate creating batch deployments and jobs by using the Watson Machine Learning [REST API](https://cloud.ibm.com/apidocs/machine-learning) and Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/).
-
-Parent topic:[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html)
-"
-F4A482326D45DC729EB8D1A6735CEFACD7AE5578_0,F4A482326D45DC729EB8D1A6735CEFACD7AE5578," Creating online deployments in Watson Machine Learning
-
-Create an online (also called Web service) deployment to load a model or Python code when the deployment is created to generate predictions online, in real time. For example, if you create a classification model to test whether a new customer is likely to participate in a sales promotion, you can create an online deployment for the model. Then, you can enter the new customer data to get an immediate prediction.
-
-"
-F4A482326D45DC729EB8D1A6735CEFACD7AE5578_1,F4A482326D45DC729EB8D1A6735CEFACD7AE5578," Supported frameworks
-
-Online deployment is supported for these frameworks:
-
-
-
-* PMML
-* Python Function
-* PyTorch-Onnx
-* Tensorflow
-* Scikit-Learn
-* Spark MLlib
-* SPSS
-* XGBoost
-
-
-
-You can create an online deployment [from the user interface](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-online.html?context=cdpaas&locale=enonline-interface) or [programmatically](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-online.html?context=cdpaas&locale=enonline-programmatically).
-
-To send payload data to an asset that is deployed online, you must know the endpoint URL of the deployment. Examples include, classification of data, or making predictions from the data. For more information, see [Retrieving the deployment endpoint](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-online.html?context=cdpaas&locale=enget-online-endpoint).
-
-Additionally, you can:
-
-
-
-* [Test your online deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-online.html?context=cdpaas&locale=entest-online-deployment)
-* [Access the deployment details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-online.html?context=cdpaas&locale=enaccess-online-details)
-
-
-
-"
-F4A482326D45DC729EB8D1A6735CEFACD7AE5578_2,F4A482326D45DC729EB8D1A6735CEFACD7AE5578," Creating an online deployment from the User Interface
-
-
-
-1. From the deployment space, click the name of the asset that you want to deploy. The details page opens.
-2. Click New deployment.
-3. Choose Online as the deployment type.
-4. Provide a name and an optional description for the deployment.
-5. If you want to specify a name to be used instead of deployment ID, use the Serving name field.
-
-
-
-* The name must be validated to be unique per IBM cloud region (all names in a specific region share a global namespace).
-* The name must contain only these characters: [a-z,0-9,_] and must be a maximum 36 characters long.
-* Serving name works only as part of the prediction URL. In some cases, you must still use the deployment ID.
-
-
-
-6. Click Create to create the deployment.
-
-
-
-"
-F4A482326D45DC729EB8D1A6735CEFACD7AE5578_3,F4A482326D45DC729EB8D1A6735CEFACD7AE5578," Creating an online deployment programmatically
-
-Refer to [Machine learning samples and examples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html) for links to sample notebooks. These notebooks demonstrate creating online deployments that use the Watson Machine Learning [REST API](https://cloud.ibm.com/apidocs/machine-learning) and Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/).
-
-"
-F4A482326D45DC729EB8D1A6735CEFACD7AE5578_4,F4A482326D45DC729EB8D1A6735CEFACD7AE5578," Retrieving the online deployment endpoint
-
-You can find the endpoint URL of a deployment in these ways:
-
-
-
-* From the Deployments tab of your space, click your deployment name. A page with deployment details opens. You can find the endpoint there.
-* Using the Watson Machine Learning Python client:
-
-
-
-1. List the deployments by calling the [Python client method](https://ibm.github.io/watson-machine-learning-sdk/core_api.htmlclient.Deployments.list)client.deployments.list()
-2. Find the row with your deployment. The deployment endpoint URL is listed in the url column.
-
-
-
-
-
-Notes:
-
-
-
-* If you added Serving name to the deployment, two alternative endpoint URLs show on the screen; one containing the deployment ID, and the other containing your serving name. You can use either one of these URLs with your deployment.
-* The API Reference tab also shows code snippets in various programming languages that illustrate how to access the deployment.
-
-
-
-For more information, see [Endpoint URLs](https://cloud.ibm.com/apidocs/machine-learningendpoint-url).
-
-"
-F4A482326D45DC729EB8D1A6735CEFACD7AE5578_5,F4A482326D45DC729EB8D1A6735CEFACD7AE5578," Testing your online deployment
-
-From the Deployments tab of your space, click your deployment name. A page with deployment details opens. The Test tab provides a place where you can enter data and get a prediction back from the deployed model. If your model has a defined schema, a form shows on screen. In the form, you can enter data in one of these ways:
-
-
-
-* Enter data directly in the form
-* Download a CSV template, enter values, and upload the input data
-* Upload a file that contains input data from your local file system or from the space
-* Change to the JSON tab and enter your input data as JSON code Regardless of method, the input data must match the schema of the model. Submit the input data and get a score, or prediction, back.
-
-
-
-"
-F4A482326D45DC729EB8D1A6735CEFACD7AE5578_6,F4A482326D45DC729EB8D1A6735CEFACD7AE5578," Sample deployment code
-
-When you submit JSON code as the payload, or input data, for a deployment, your input data must match the schema of the model. The 'fields' must match the column headers for the data, and the 'values' must contain the data, in the same order. Use this format:
-
-{""input_data"":[{
-""fields"": , , ...],
-""values"": , , ...]]
-}]}
-
-Refer to this example:
-
-{""input_data"":[{
-""fields"": ""PassengerId"",""Pclass"",""Name"",""Sex"",""Age"",""SibSp"",""Parch"",""Ticket"",""Fare"",""Cabin"",""Embarked""],
-""values"": 1,3,""Braund, Mr. Owen Harris"",0,22,1,0,""A/5 21171"",7.25,null,""S""]]
-}]}
-
-Notes:
-
-
-
-* All strings are enclosed in double quotation marks. The Python notation for dictionaries looks similar, but Python strings in single quotation marks are not accepted in the JSON data.
-* Missing values can be indicated with null.
-* You can specify a hardware specification for an online deployment, for example if you are [scaling a deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-scaling.html).
-
-
-
-"
-F4A482326D45DC729EB8D1A6735CEFACD7AE5578_7,F4A482326D45DC729EB8D1A6735CEFACD7AE5578," Preparing payload that matches the schema of an existing model
-
-Refer to this sample code:
-
-model_details = client.repository.get_details("""") retrieves details and includes schema
-columns_in_schema = []
-for i in range(0, len(model_details['entity']['input'].get('fields'))):
-columns_in_schema.append(model_details['entity']['input'].get('fields')[i])
-
-X = X[columns_in_schema] where X is a pandas dataframe that contains values to be scored
-(...)
-scoring_values = X.values.tolist()
-array_of_input_fields = X.columns.tolist()
-payload_scoring = {""input_data"": [{""fields"": array_of_input_fields],""values"": scoring_values}]}
-
-"
-F4A482326D45DC729EB8D1A6735CEFACD7AE5578_8,F4A482326D45DC729EB8D1A6735CEFACD7AE5578," Accessing the online deployment details
-
-To access your online deployment details: From the Deployments tab of your space, click your deployment name and then click the Deployment details tab. The Deployment details tab contains specific information that is related to the currently opened online deployment and allows for adding a model to the model inventory, to enable activity tracking and model comparison.
-
-"
-F4A482326D45DC729EB8D1A6735CEFACD7AE5578_9,F4A482326D45DC729EB8D1A6735CEFACD7AE5578," Additional information
-
-Refer to [Assets in deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html) for details on managing deployment jobs, and updating, scaling, or deleting an online deployment.
-
-Parent topic:[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html)
-"
-32AFAFA1C90D43BA1D3330A64491039F63D9FEB5_0,32AFAFA1C90D43BA1D3330A64491039F63D9FEB5," Deploying scripts in Watson Machine Learning
-
-When a script is copied to a deployment space, you can deploy it for use. Supported script types are Python scripts. [Batch deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-details.html) is the only supported deployment type for a script.
-
-
-
-* When the script is promoted from a project, your software specification is included.
-* When you create a deployment job for a script, you must manually override the default environment with the correct environment for your script. For more information, see [Creating a deployment job](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-jobs.html)
-
-
-
-"
-32AFAFA1C90D43BA1D3330A64491039F63D9FEB5_1,32AFAFA1C90D43BA1D3330A64491039F63D9FEB5," Learn more
-
-
-
-* To learn more about supported input and output types and setting environment variables, see [Batch deployment details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-details.html).
-* To learn more about software specifications, see [Software specifications and hardware specifications for deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/pm_service_supported_frameworks.html).
-
-
-
-Parent topic:[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html)
-"
-2B6DC49F4AFDE44DD385AE09CAAB02A3F1DB4259_0,2B6DC49F4AFDE44DD385AE09CAAB02A3F1DB4259," Choosing compute resources for running tools in projects
-
-You use compute resources in projects when you run jobs and most tools. Depending on the tool, you might have a choice of compute resources for the runtime for the tool.
-
-Compute resources are known as either environment templates or hardware and software specifications. In general, compute resources with larger hardware configurations incur larger usage costs.
-
-These tools have multiple choices for configuring runtimes that you can choose from:
-
-
-
-* [Notebook editor](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html)
-* [Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spark-dr-envs.html)
-* [SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-envs.html)
-* [AutoAI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-autoai.html)
-* [Decision Optimization experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-decisionopt.html)
-* [RStudio IDE](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-envs.html)
-* [Synthetic Data Generator](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/synthetic-envs.html)
-* [Tuning Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-fm-tuning.html)
-
-
-
-Prompt Lab does not consume compute resources. Prompt Lab usage is measured by the number of processed tokens.
-
-"
-2B6DC49F4AFDE44DD385AE09CAAB02A3F1DB4259_1,2B6DC49F4AFDE44DD385AE09CAAB02A3F1DB4259," Learn more
-
-
-
-* [Monitoring account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html)
-
-
-
-Parent topic:[Projects ](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html)
-"
-D83BAAE9C79E5DF9CA904AB1886AC4826447B495_0,D83BAAE9C79E5DF9CA904AB1886AC4826447B495," Examples of environment template customizations
-
-You can follow examples of how to add custom libraries through conda or pip using the provided templates for Python and R when you create an environment template.
-
-You can use mamba in place of conda in the following examples with conda. Remember to select the checkbox to install from mamba if you add channels or packages from mamba to the existing environment template.
-
-Examples exist for:
-
-
-
-* [Adding conda packages](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/example-customizations.html?context=cdpaas&locale=enadd-conda-package)
-* [Adding pip packages](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/example-customizations.html?context=cdpaas&locale=enadd-pip-package)
-* [Combining conda and pip packages](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/example-customizations.html?context=cdpaas&locale=encombine-conda-pip)
-* [Adding complex packages with internal dependencies](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/example-customizations.html?context=cdpaas&locale=encomplex-packages)
-* [Adding conda packages for R notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/example-customizations.html?context=cdpaas&locale=enconda-in-r)
-* [Setting environment variables](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/example-customizations.html?context=cdpaas&locale=enset-vars)
-
-
-
-Hints and tips:
-
-
-
-"
-D83BAAE9C79E5DF9CA904AB1886AC4826447B495_1,D83BAAE9C79E5DF9CA904AB1886AC4826447B495,"* [Best practices](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/example-customizations.html?context=cdpaas&locale=enbest-practices)
-
-
-
-"
-D83BAAE9C79E5DF9CA904AB1886AC4826447B495_2,D83BAAE9C79E5DF9CA904AB1886AC4826447B495," Adding conda packages
-
-To get latest versions of pandas-profiling:
-
-dependencies:
-- pandas-profiling
-
-This is equivalent to running conda install pandas-profiling in a notebook.
-
-"
-D83BAAE9C79E5DF9CA904AB1886AC4826447B495_3,D83BAAE9C79E5DF9CA904AB1886AC4826447B495," Adding pip packages
-
-You can also customize an environment using pip if a particular package is not available in conda channels:
-
-dependencies:
-- pip:
-- ibm-watson-machine-learning
-
-This is equivalent to running pip install ibm-watson-machine-learning in a notebook.
-
-The customization will actually do more than just install the specified pip package. The default behavior of conda is to also look for a new version of pip itself and then install it. Checking all the implicit dependencies in conda often takes several minutes and also gigabytes of memory. The following customization will shortcut the installation of pip:
-
-channels:
-- empty
-- nodefaults
-
-dependencies:
-- pip:
-- ibm-watson-machine-learning
-
-The conda channel empty does not provide any packages. There is no pip package in particular. conda won't try to install pip and will use the already pre-installed version instead. Note that the keyword nodefaults in the list of channels needs at least one other channel in the list. Otherwise conda will silently ignore the keyword and use the default channels.
-
-"
-D83BAAE9C79E5DF9CA904AB1886AC4826447B495_4,D83BAAE9C79E5DF9CA904AB1886AC4826447B495," Combining conda and pip packages
-
-You can list multiple packages with one package per line. A single customization can have both conda packages and pip packages.
-
-dependencies:
-- pandas-profiling
-- scikit-learn=0.20
-- pip:
-- watson-machine-learning-client-V4
-- sklearn-pandas==1.8.0
-
-Note that the required template notation is sensitive to leading spaces. Each item in the list of conda packages must have two leading spaces. Each item in the list of pip packages must have four leading spaces. The version of a conda package must be specified using a single equals symbol (=), while the version of a pip package must be added using two equals symbols (==).
-
-"
-D83BAAE9C79E5DF9CA904AB1886AC4826447B495_5,D83BAAE9C79E5DF9CA904AB1886AC4826447B495," Adding complex packages with internal dependencies
-
-When you add many packages or a complex package with many internal dependencies, the conda installation might take long or might even stop without you seeing any error message. To avoid this from happening:
-
-
-
-* Specify the versions of the packages you want to add. This reduces the search space for conda to resolve dependencies.
-* Increase the memory size of the environment.
-* Use a specific channel instead of the default conda channels that are defined in the .condarc file. This avoids running lengthy searches through big channels.
-
-
-
-Example of a customization that doesn't use the default conda channels:
-
- get latest version of the prophet package from the conda-forge channel
-channels:
-- conda-forge
-- nodefaults
-
-dependencies:
-- prophet
-
-This customization corresponds to the following command in a notebook:
-
-!conda install -c conda-forge --override-channels prophet -y
-
-"
-D83BAAE9C79E5DF9CA904AB1886AC4826447B495_6,D83BAAE9C79E5DF9CA904AB1886AC4826447B495," Adding conda packages for R notebooks
-
-The following example shows you how to create a customization that adds conda packages to use in an R notebook:
-
-channels:
-- defaults
-
-dependencies:
-- r-plotly
-
-This customization corresponds to the following command in a notebook:
-
-print(system(""conda install r-plotly"", intern=TRUE))
-
-The names of R packages in conda generally start with the prefix r-. If you just use plotly in your customization, the installation would succeed but the Python package would be installed instead of the R package. If you then try to use the package in your R code as in library(plotly), this would return an error.
-
-"
-D83BAAE9C79E5DF9CA904AB1886AC4826447B495_7,D83BAAE9C79E5DF9CA904AB1886AC4826447B495," Setting environment variables
-
-You can set environment variables in your environment by adding a variables section to the software customization template as shown in the following example:
-
-variables:
-my_var: my_value
-HTTP_PROXY: https://myproxy:3128
-HTTPS_PROXY: https://myproxy:3128
-NO_PROXY: cluster.local
-
-The example also shows that you can use the variables section to set a proxy server for an environment.
-
-Limitation: You cannot override existing environment variables, for example LD_LIBRARY_PATH, using this approach.
-
-"
-D83BAAE9C79E5DF9CA904AB1886AC4826447B495_8,D83BAAE9C79E5DF9CA904AB1886AC4826447B495," Best practices
-
-To avoid problems that can arise finding packages or resolving conflicting dependencies, start by installing the packages you need manually through a notebook in a test environment. This enables you to check interactively if packages can be installed without errors. After you have verified that the packages were all correctly installed, create a customization for your development or production environment and add the packages to the customization template.
-
-Parent topic:[Customizing environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/customize-envs.html)
-"
-A7845D8C3E419CEDD06E8C447ADF41E6E3D860C8_0,A7845D8C3E419CEDD06E8C447ADF41E6E3D860C8," IBM Federated Learning
-
-Federated Learning provides the tools for multiple remote parties to collaboratively train a single machine learning model without sharing data. Each party trains a local model with a private data set. Only the local model is sent to the aggregator to improve the quality of the global model that benefits all parties.
-
-"
-A7845D8C3E419CEDD06E8C447ADF41E6E3D860C8_1,A7845D8C3E419CEDD06E8C447ADF41E6E3D860C8,"Data format
-Any data format including but not limited to CSV files, JSON files, and databases for PostgreSQL.
-
-"
-A7845D8C3E419CEDD06E8C447ADF41E6E3D860C8_2,A7845D8C3E419CEDD06E8C447ADF41E6E3D860C8," How Federated Learning works
-
-Watch this overview video to learn the basic concepts and elements of a Federated Learning experiment. Learn how you can apply the tools for your company's analytics enhancements.
-
-This video provides a visual method to learn the concepts and tasks in this documentation.
-
-An example for using Federated Learning is when an aviation alliance wants to model how a global pandemic impacts airline delays. Each participating party in the federation can use their data to train a common model without ever moving or sharing their data. They can do so either in application silos or any other scenario where regulatory or pragmatic considerations prevent users from sharing data. The resulting model benefits each member of the alliance with improved business insights while lowering risk from data migration and privacy issues.
-
-As the following graphic illustrates, parties can be geographically distributed and run on different platforms.
-
-
-
-"
-A7845D8C3E419CEDD06E8C447ADF41E6E3D860C8_3,A7845D8C3E419CEDD06E8C447ADF41E6E3D860C8," Why use IBM Federated Learning
-
-IBM Federated Learning has a wide range of applications across many enterprise industries. Federated Learning:
-
-
-
-* Enables sites with large volumes of data to be collected, cleaned, and trained on an enterprise scale without migration.
-* Accommodates for the differences in data format, quality, and constraints.
-* Complies with data privacy and security while training models with different data sources.
-
-
-
-"
-A7845D8C3E419CEDD06E8C447ADF41E6E3D860C8_4,A7845D8C3E419CEDD06E8C447ADF41E6E3D860C8," Learn more
-
-
-
-* [Federated Learning tutorials and samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-demo.html)
-
-
-
-* [Federated Learning Tensorflow tutorial for UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-tf2-tutorial.html)
-* [Federated Learning Tensorflow samples for API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-tf2-samples.html)
-* [Federated Learning XGBoost tutorial for UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-tutorial.html)
-* [Federated Learning XGBoost sample for API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-samples.html)
-* [Federated Learning homomorphic encryption sample for API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-fhe-sample.html)
-
-
-
-* [Getting started](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-get-started.html)
-
-
-
-* [Terminology](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-term.html)
-* [Federated Learning architecture](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-arch.html)
-
-
-
-* [Frameworks, fusion methods, and Python versions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-frames.html)
-
-
-
-"
-A7845D8C3E419CEDD06E8C447ADF41E6E3D860C8_5,A7845D8C3E419CEDD06E8C447ADF41E6E3D860C8,"* [Hyperparameter definitions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-param.html)
-
-
-
-* [Creating a Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-start.html)
-
-
-
-* [Set up your system](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-setup.html)
-* [Creating the initial model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.html)
-* [Create the data handler](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-handler.html)
-* [Starting the aggregator (Admin)](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-agg.html)
-* [Connecting to the aggregator (Party)](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-conn.html)
-* [Monitoring and saving the model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-mon.html)
-
-
-
-* [Applying encryption](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-homo.html)
-* [Limitations and troubleshooting](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-troubleshoot.html)
-
-
-
-Parent topic:[Analyzing data and building models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html)
-"
-CEE9EF1F47611F6A9C00BB76C91B94B7DA2A62FF_0,CEE9EF1F47611F6A9C00BB76C91B94B7DA2A62FF," Starting the aggregator (Admin)
-
-An administrator completes the following steps to start the experiment and train the global model.
-
-
-
-* [Step 1: Set up the Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-agg.html?context=cdpaas&locale=enfl-setup)
-* [Step 2: Create the remote training system](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-agg.html?context=cdpaas&locale=enrts)
-* [Step 3: Start the experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-agg.html?context=cdpaas&locale=enstart)
-
-
-
-"
-CEE9EF1F47611F6A9C00BB76C91B94B7DA2A62FF_1,CEE9EF1F47611F6A9C00BB76C91B94B7DA2A62FF," Step 1: Set up the Federated Learning experiment
-
-Set up a Federated Learning experiment from a project.
-
-
-
-1. From the project, click New asset > Federated Learning.
-2. Name the experiment.
-Optional: Add an optional description and tags.
-3. [Add new collaborators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html) to the project.
-4. In the Configure tab, choose the training framework and model type. See [Frameworks, fusion methods, and Python versions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-frames.html) for a table listing supported frameworks, fusion methods, and their attributes. Optional: You can choose to enable the homomorphic encryption feature. For more details, see [Applying encryption](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-homo.html).
-5. Click Select under Model specification and upload the .zip file that contains your initial model.
-6. In the Define hyperparameters tab, you can choose hyperparameter options available for your framework and fusion method to tune your model.
-
-
-
-"
-CEE9EF1F47611F6A9C00BB76C91B94B7DA2A62FF_2,CEE9EF1F47611F6A9C00BB76C91B94B7DA2A62FF," Step 2: Create the Remote Training System
-
-Create Remote Training Systems (RTS) that authenticates the participating parties of the experiment.
-
-
-
-1. At Select remote training system, click Add new systems.
-
-2. Configure the RTS.
-
-| Field name | Definition | Example | | -- | -- | -- | | Name | A name to identify this RTS instance. | Canada Bank Model: Federated Learning Experiment | | Description
-(Optional) | Description of the training system. | This Remote Training System is for a
-Federated Learning experiment to train a model for
-predicting credit card fraud with data from Canadian banks. | | System administrator
-(Optional) | Specify a user with read-only access to this RTS. They can see system details, logs, and scripts, but not necessarily participate in the experiment. They should be contacted if issues occur when running the experiment. | Admin (admin@example.com) | | Allowed identities | List project collaborators who can participate in the Federated Learning experiment training. Multiple collaborators can be registered in this RTS, but only one can participate in the experiment. Multiple RTS's are needed to authenticate all participating collaborators. | John Doe (john.doe@example.com)
-Jane Doe (jane.doe@example.com) | | Allowed IP addresses
-(Optional) | Restrict individual parties from connecting to Federated Learning outside of a specified IP address.
-
-1. To configure this, click Configure.
-2. For Allowed identities, select the user to place IP constraints on.
-3. For Allowed IP addresses for user, enter a comma seperated list of IPs and or CIDRs that can connect to the Remote Training System. Note: Both IPv4 and IPv6 are supported. | John
-"
-CEE9EF1F47611F6A9C00BB76C91B94B7DA2A62FF_3,CEE9EF1F47611F6A9C00BB76C91B94B7DA2A62FF,"1234:5678:90ab:cdef:1234:5678:90ab:cdef: (John’s office IP), 123.123.123.123 (John’s home IP), 0987.6543.21ab.cdef (Remote VM IP)
-Jane
-123.123.123.0/16 (Jane's home IP), 0987.6543.21ab.cdef (Remote machine IP) | | Tags
-(Optional) | Associate keywords with the Remote Training System to make it easier to find. | Canada
-Bank
-Model
-Credit
-Fraud |
-
-
-
-
-
-1. Click Add to save the RTS instance. If you are creating multiple remote training instances, you can repeat these steps.
-2. Click Add systems to save the RTS as an asset in the project.
-
-Tip: You can use an RTS definition for future experiments. For example, in the Select remote training system tab, you can select any Remote Training System that you previously created.
-3. Each RTS can only authenticate one of its allowed party identities. Create an RTS for each new participating part(ies).
-
-
-
-"
-CEE9EF1F47611F6A9C00BB76C91B94B7DA2A62FF_4,CEE9EF1F47611F6A9C00BB76C91B94B7DA2A62FF," Step 3: Start the experiment
-
-Start the Federated Learning aggregator to initiate training of the global model.
-
-
-
-1. Click Review and create to view the settings of your current Federated Learning experiment. Then, click Create. 
-2. The Federated Learning experiment will be in Pending status while the aggregator is starting. When the aggregator starts, the status will change to Setup – Waiting for remote systems.
-
-
-
-Parent topic:[Creating a Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-start.html)
-"
-4B48EF3D089F3142B1ED604A32873217F89E052F_0,4B48EF3D089F3142B1ED604A32873217F89E052F," Federated Learning architecture
-
-IBM Federated Learning has two main components: the aggregator and the remote training parties.
-
-"
-4B48EF3D089F3142B1ED604A32873217F89E052F_1,4B48EF3D089F3142B1ED604A32873217F89E052F," Aggregator
-
-The aggregator is a model fusion processor. The admin manages the aggregator.
-
-The aggregator runs the following tasks:
-
-
-
-* Runs as a platform service in regions Dallas, Frankfurt, London, or Tokyo.
-* Starts with a Federated Learning experiment.
-
-
-
-"
-4B48EF3D089F3142B1ED604A32873217F89E052F_2,4B48EF3D089F3142B1ED604A32873217F89E052F," Party
-
-A party is a user that provides model input to the Federated Learning experiment aggregator. The party can be:
-
-
-
-* on any system that can run the Watson Machine Learning Python client and compatible with Watson Machine Learning frameworks.
-
-Note:The system does not have to be specifically IBM watsonx. For a list of system requirements, see [Set up your system](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-setup.html).
-* running on the system in any geographical location. You are recommended to locate each party in the same region where the data is to avoid data extraction out of different regions.
-
-
-
-This illustration shows the architecture of IBM Federated Learning.
-
-A Remote Training System is used to authenticate the party's identity to the aggregator during training.
-
-
-
-"
-4B48EF3D089F3142B1ED604A32873217F89E052F_3,4B48EF3D089F3142B1ED604A32873217F89E052F," User workflow
-
-
-
-1. The data scientist:
-
-
-
-1. Identifies the data sources.
-2. Creates an initial ""untrained"" model.
-3. Creates a data handler file.
-These tasks might overlap with a training party entity.
-
-
-
-2. A party connects to the aggregator on their system, which can be remote.
-3. An admin controls the Federated Learning experiment by:
-
-
-
-1. Configuring the experiment to accommodate remote parties.
-2. Starting the aggregator.
-
-
-
-
-
-This illustration shows the actions that are associated with each role in the Federated Learning process.
-
-
-
-Parent topic:[Get started](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-get-started.html)
-"
-924550083A3A6ACD177024DF788C02D236874893_0,924550083A3A6ACD177024DF788C02D236874893," Connecting to the aggregator (Party)
-
-Each party follows these steps to connect to a started aggregator.
-
-
-
-1. Open the project and click the Federated Learning experiment.
-2. Click View setup information and click the download icon to download the party connector script. 
-3. Each party must configure the party connector script and provide valid credentials to run the script. This is what a sample completed party connector script looks like:
-
-from ibm_watson_machine_learning import APIClient
-
-wml_credentials = {
-""url"": ""https://us-south.ml.cloud.ibm.com"",
-""apikey"": """"
-}
-
-wml_client = APIClient(wml_credentials)
-
-wml_client.set.default_project(""XXX-XXX-XXX-XXX-XXX"")
-
-party_metadata = {
-
-wml_client.remote_training_systems.ConfigurationMetaNames.DATA_HANDLER: {
-
-""name"": ""MnistSklearnDataHandler"",
-
-""path"": ""example.mnist_sklearn_data_handler"",
-
-""info"": {
-
-""npz_file"":""./example_data/example_data.npz""
-
-}
-
-party = wml_client.remote_training_systems.create_party(""XXX-XXX-XXX-XXX-XXX"", party_metadata)
-
-party.monitor_logs()
-party.run(aggregator_id=""XXX-XXX-XXX-XXX-XXX"", asynchronous=False)
-
-Parameters:
-
-
-
-* api_key:
-Your IAM API key. To create a new API key, go to the [IBM Cloud website](https://cloud.ibm.com/), and click Create an IBM Cloud Pak for Data API key under Manage > Access(IAM) > API keys.
-
-"
-924550083A3A6ACD177024DF788C02D236874893_1,924550083A3A6ACD177024DF788C02D236874893,"Optional: If you're reusing a script from a different project, you can copy the updated project_id, aggregator_id and experiment_id from the setup information window and copy them into the script.
-
-
-
-4. Install Watson Machine Learning with the latest Federated Learning package if you have not yet done so:
-
-
-
-* If you are using M-series on a Mac, install the latest package with the following script:
-
- -----------------------------------------------------------------------------------------
- (C) Copyright IBM Corp. 2023.
- https://opensource.org/licenses/BSD-3-Clause
- -----------------------------------------------------------------------------------------
-
-
- Script to create a conda environment and install ibm-watson-machine-learning with
- the dependencies required for Federated Learning on MacOS.
- The name of the conda environment to be created is passed as the first argument.
-
- Note: This script requires miniforge to be installed for conda.
-
-
-usage="". install_fl_rt22.2_macos.sh conda_env_name""
-
-arch=$(uname -m)
-os=$(uname -s)
-
-if (($ < 1))
-then
-echo $usage
-exit
-fi
-
-ENAME=$1
-
-conda create -y -n ${ENAME} python=3.10
-conda activate ${ENAME}
-pip install ibm-watson-machine-learning
-
-if [ ""$os"" == ""Darwin"" -a ""$arch"" == ""arm64"" ]
-then
-conda install -y -c apple tensorflow-deps
-fi
-
-python - <_.py
-
-
-
-"
-924550083A3A6ACD177024DF788C02D236874893_3,924550083A3A6ACD177024DF788C02D236874893," More resources
-
-[Federated Learning library functions](https://ibm.github.io/watson-machine-learning-sdk/)
-
-Parent topic:[Creating a Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-start.html)
-"
-D579ABA442C4652BAC088173107ECFEBBF4D8290_0,D579ABA442C4652BAC088173107ECFEBBF4D8290," Federated Learning tutorials and samples
-
-Select the tutorial that fits your needs. To facilitate the learning process of Federated Learning, one tutorial with a UI-based approach and one tutorial with an API calling approach for multiple frameworks and data sets is provided. The results of either are the same. All UI-based tutorials demonstrate how to create the Federated Learning experiment in a low-code environment. All API-based tutorials use two sample notebooks with Python scripts to demonstrate how to build and train the experiment.
-
-"
-D579ABA442C4652BAC088173107ECFEBBF4D8290_1,D579ABA442C4652BAC088173107ECFEBBF4D8290," Tensorflow
-
-These hands-on tutorials teach you how to create a Federated Learning experiment step by step. These tutorials use the MNIST data set to demonstrate how different parties can contribute data to train a model to recognize handwriting. You can choose between a UI-based or API version of the tutorial.
-
-
-
-* [Federated Learning Tensorflow tutorial for UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-tf2-tutorial.html)
-
-* [Federated Learning Tensorflow samples for API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-tf2-samples.html)
-
-
-
-"
-D579ABA442C4652BAC088173107ECFEBBF4D8290_2,D579ABA442C4652BAC088173107ECFEBBF4D8290," XGBoost
-
-This is a tutorial for Federated Learning that teaches you how to create an experiment step by step with an income in the XGBoost framework. The tutorial demonstrates how different parties can contribute data to train a model about adult incomes.
-
-
-
-* [Federated Learning XGBoost tutorial for UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-tutorial.html)
-
-* [Federated Learning XGBoost sample for API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-samples.html)
-
-
-
-"
-D579ABA442C4652BAC088173107ECFEBBF4D8290_3,D579ABA442C4652BAC088173107ECFEBBF4D8290," Homomorphic encryption
-
-This is a tutorial for Federated Learning that teaches you how to use the advanced method of homomorphic encryption step by step.
-
-
-
-* [Federated Learning homomorphic encryption sample for API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-fhe-sample.html)
-
-
-
-Parent topic:[IBM Federated learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fed-lea.html)
-"
-CBDD718BCEE7B1FFDE95191BE1749D57B9A1A60D_0,CBDD718BCEE7B1FFDE95191BE1749D57B9A1A60D," Federated Learning homomorphic encryption sample for API
-
-Download and review sample files that show how to run a Federated Learning experiment with Fully Homomorphic Encryption (FHE).
-
-"
-CBDD718BCEE7B1FFDE95191BE1749D57B9A1A60D_1,CBDD718BCEE7B1FFDE95191BE1749D57B9A1A60D," Homomorphic encryption
-
-FHE is an advanced, optional method to provide additional security and privacy for your data by encrypting data sent between parties and the aggregator. This method still creates a computational result that is the same as if the computations were done on unencrypted data. For more details on applying homomorphic encryption in Federated Learning, see [Applying encryption](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-homo.html).
-
-"
-CBDD718BCEE7B1FFDE95191BE1749D57B9A1A60D_2,CBDD718BCEE7B1FFDE95191BE1749D57B9A1A60D," Download the Federated Learning sample files
-
-Download the following notebooks.
-
-[Federated Learning FHE Demo](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/aa449d3939b73847c502bd7822d0949a)
-
-Parent topic:[Federated Learning tutorial and samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-demo.html)
-"
-1D1783967CBF46A0B75539BADBAA1D601BC9F412_0,1D1783967CBF46A0B75539BADBAA1D601BC9F412," Frameworks, fusion methods, and Python versions
-
-These are the available machine learning model frameworks and model fusion methods for the Federated Learning model. The software spec and frameworks are also compatible with specific Python versions.
-
-"
-1D1783967CBF46A0B75539BADBAA1D601BC9F412_1,1D1783967CBF46A0B75539BADBAA1D601BC9F412," Frameworks and fusion methods
-
-This table lists supported software frameworks for building Federated Learning models. For each framework you can see the supported model types, fusion methods, and hyperparameter options.
-
-
-
-Table 1. Frameworks and fusion methods
-
- Frameworks Model Type Fusion Method Description Hyperparameters
-
- TensorFlow Used to build neural networks. See [Save the Tensorflow model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.htmltf-config). Any Simple Avg Simplest aggregation that is used as a baseline where all parties' model updates are equally weighted. - Rounds - Termination predicate (Optional) - Quorum (Optional) - Max Timeout (Optional)
- Weighted Avg Weights the average of updates based on the number of each party sample. Use with training data sets of widely differing sizes. - Rounds - Termination predicate (Optional) - Quorum (Optional) - Max Timeout (Optional)
- Scikit-learn Used for predictive data analysis. See [Save the Scikit-learn model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.htmlsklearn-config). Classification Simple Avg Simplest aggregation that is used as a baseline where all parties' model updates are equally weighted. - Rounds - Termination predicate (Optional)
- Weighted Avg Weights the average of updates based on the number of each party sample. Use with training data sets of widely differing sizes. - Rounds - Termination predicate (Optional)
- Regression Simple Avg Simplest aggregation that is used as a baseline where all parties' model updates are equally weighted. - Rounds
- Weighted Avg Weights the average of updates based on the number of each party sample. Use with training data sets of widely differing sizes. - Rounds
-"
-1D1783967CBF46A0B75539BADBAA1D601BC9F412_2,1D1783967CBF46A0B75539BADBAA1D601BC9F412," XGBoost XGBoost Classification Use to build classification models that use XGBoost. - Learning rate - Loss - Rounds - Number of classes
- XGBoost Regression Use to build regression models that use XGBoost. - Learning rate - Rounds - Loss
- K-Means/SPAHM Used to train KMeans (unsupervised learning) models when parties have heterogeneous data sets. - Max Iter - N cluster
- Pytorch Used for training neural network models. See [Save the Pytorch model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.htmlpytorch). Any Simple Avg Simplest aggregation that is used as a baseline where all parties' model updates are equally weighted. - Rounds - Epochs - Quorum (Optional) - Max Timeout (Optional)
- Neural Networks Probabilistic Federated Neural Matching (PFNM) Communication-efficient method for fully connected neural networks when parties have heterogeneous data sets. - Rounds - Termination accuracy (Optional) - Epochs - sigma - sigma0 - gamma - iters
-
-
-
-"
-1D1783967CBF46A0B75539BADBAA1D601BC9F412_3,1D1783967CBF46A0B75539BADBAA1D601BC9F412," Software specifications and Python version by framework
-
-This table lists the software spec and Python versions available for each framework.
-
-
-
-Software specifications and Python version by framework
-
- Watson Studio frameworks Python version Software Spec Python Client Extras Framework package
-
- scikit-learn 3.10 runtime-22.2-py3.10 fl-rt22.2-py3.10 scikit-learn 1.1.1
- Tensorflow 3.10 runtime-22.2-py3.10 fl-rt22.2-py3.10 tensorflow 2.9.2
- PyTorch 3.10 runtime-22.2-py3.10 fl-rt22.2-py3.10 torch 1.12.1
-
-
-
-"
-1D1783967CBF46A0B75539BADBAA1D601BC9F412_4,1D1783967CBF46A0B75539BADBAA1D601BC9F412," Learn more
-
-[Hyperparameter definitions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-param.html)
-
-Parent topic:[IBM Federated Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fed-lea.html)
-"
-ADEB3C4BA4949F2C87919D5493B71B67028B76EE_0,ADEB3C4BA4949F2C87919D5493B71B67028B76EE," Get started
-
-Federated Learning is appropriate for any situation where different entities from different geographical locations or Cloud providers want to train an analytical model without sharing their data.
-
-To get started with Federated Learning, choose from these options:
-
-
-
-* Familiarize yourself with the key concepts and [Terminology](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-term.html).
-* Review the [architecture](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-arch.html) for creating a Federated Learning experiment.
-* Follow a tutorial for step-by-step instructions for creating a [Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-demo.html) or review samples.
-
-
-
-"
-ADEB3C4BA4949F2C87919D5493B71B67028B76EE_1,ADEB3C4BA4949F2C87919D5493B71B67028B76EE," Learn more
-
-
-
-* [Terminology](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-term.html)
-
-* [Federated Learning architecture](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-arch.html)
-
-
-
-Parent topic:[IBM Federated Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fed-lea.html)
-"
-51426DCF985B97AF6172727AFCF353A481591560_0,51426DCF985B97AF6172727AFCF353A481591560," Create the data handler
-
-Each party in a Federated Learning experiment must get a data handler to process their data. You or a data scientist must create the data handler. A data handler is a Python class that loads and transforms data so that all data for the experiment is in a consistent format.
-
-"
-51426DCF985B97AF6172727AFCF353A481591560_1,51426DCF985B97AF6172727AFCF353A481591560," About the data handler class
-
-The data handler performs the following functions:
-
-
-
-* Accesses the data that is required to train the model. For example, reads data from a CSV file into a Pandas data frame.
-* Pre-processes the data so data is in a consistent format across all parties. Some example cases are as follows:
-
-
-
-* The Date column might be stored as a time epoch or timestamp.
-* The Country column might be encoded or abbreviated.
-
-
-
-* The data handler ensures that the data formatting is in agreement.
-
-
-
-* Optional: feature engineer as needed.
-
-
-
-
-
-The following illustration shows how a data handler is used to process data and make it consumable by the experiment:
-
-
-
-"
-51426DCF985B97AF6172727AFCF353A481591560_2,51426DCF985B97AF6172727AFCF353A481591560," Data handler template
-
-A general data handler template is as follows:
-
- your import statements
-
-from ibmfl.data.data_handler import DataHandler
-
-class MyDataHandler(DataHandler):
-""""""
-Data handler for your dataset.
-""""""
-def __init__(self, data_config=None):
-super().__init__()
-self.file_name = None
-if data_config is not None:
- This can be any string field.
- For example, if your data set is in csv format,
- can be ""CSV"", "".csv"", ""csv"", ""csv_file"" and more.
-if '' in data_config:
-self.file_name = data_config['']
- extract other additional parameters from info if any.
-
- load and preprocess the training and testing data
-self.load_and_preprocess_data()
-
-""""""
- Example:
- (self.x_train, self.y_train), (self.x_test, self.y_test) = self.load_dataset()
-""""""
-
-def load_and_preprocess_data(self):
-""""""
-Loads and pre-processeses local datasets,
-and updates self.x_train, self.y_train, self.x_test, self.y_test.
-
- Example:
- return (self.x_train, self.y_train), (self.x_test, self.y_test)
-""""""
-
-pass
-
-def get_data(self):
-""""""
-Gets the prepared training and testing data.
-
-:return: ((x_train, y_train), (x_test, y_test)) most build-in training modules expect data is returned in this format
-:rtype: tuple
-
-This function should be as brief as possible. Any pre-processing operations should be performed in a separate function and not inside get_data(), especially computationally expensive ones.
-
- Example:
- X, y = load_somedata()
- x_train, x_test, y_train, y_test =
-"
-51426DCF985B97AF6172727AFCF353A481591560_3,51426DCF985B97AF6172727AFCF353A481591560," train_test_split(X, y, test_size=TEST_SIZE, random_state=RANDOM_STATE)
- return (x_train, y_train), (x_test, y_test)
-""""""
-pass
-
-def preprocess(self, X, y):
-pass
-
-"
-51426DCF985B97AF6172727AFCF353A481591560_4,51426DCF985B97AF6172727AFCF353A481591560,"Parameters
-
-
-
-* your_data_file_type: This can be any string field. For example, if your data set is in csv format, your_data_file_type can be ""CSV"", "".csv"", ""csv"", ""csv_file"" and more.
-
-
-
-"
-51426DCF985B97AF6172727AFCF353A481591560_5,51426DCF985B97AF6172727AFCF353A481591560," Return a data generator defined by Keras or Tensorflow
-
-The following is a code example that needs to be included as part of the get_data function to return a data generator defined by Keras or Tensorflow:
-
-train_gen = ImageDataGenerator(rotation_range=8,
-width_sht_range=0.08,
-shear_range=0.3,
-height_shift_range=0.08,
-zoom_range=0.08)
-
-train_datagenerator = train_gen.flow(
-x_train, y_train, batch_size=64)
-
-return train_datagenerator
-
-"
-51426DCF985B97AF6172727AFCF353A481591560_6,51426DCF985B97AF6172727AFCF353A481591560," Data handler examples
-
-
-
-* [MNIST Keras data handler](https://github.com/IBMDataScience/sample-notebooks/blob/master/Files/mnist_keras_data_handler.py)
-* [Adult XGBoost data handler](https://github.com/IBMDataScience/sample-notebooks/blob/master/Files/adult_sklearn_data_handler.py)
-
-
-
-Parent topic:[Creating a Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-start.html)
-"
-C48E63F001DFAE875E1C82B5D163B7A2C9961CE2_0,C48E63F001DFAE875E1C82B5D163B7A2C9961CE2," Applying homomorphic encryption for security and privacy
-
-Federated learning supports homomorphic encryption as an added measure of security for federated training data. Homomorphic encryption is a form of public key cryptography that enables computations on the encrypted data without first decrypting it, meaning the data can be used in modeling without exposing it to the risk of discovery.
-
-With homomorphic encryption, the results of the computations remain in encrypted form and when decrypted, result in an output that is the same as the output produced with computations performed on unencrypted data. It uses a public key for encryption and a private key for decryption.
-
-"
-C48E63F001DFAE875E1C82B5D163B7A2C9961CE2_1,C48E63F001DFAE875E1C82B5D163B7A2C9961CE2," How it works with Federated Learning
-
-Homomorphic encryption is an optional encryption method to add additional security and privacy to a Federated Learning experiment. When homomorphic encryption is applied in a Federated Learning experiment, the parties send their homomorphically encrypted model updates to the aggregator. The aggregator does not have the private key and can only see the homomorphically encrypted model updates. For example, the aggregator cannot reverse engineer the model updates to discover information on the parties' training data. The aggregator fuses the model updates in their encrypted form which results in an encrypted aggregated model. Then the aggregator sends the encrypted aggregated model to the participating parties who can use their private key for decryption and continue with the next round of training. Only the participating parties can decrypt model data.
-
-"
-C48E63F001DFAE875E1C82B5D163B7A2C9961CE2_2,C48E63F001DFAE875E1C82B5D163B7A2C9961CE2," Supported frameworks and fusion methods
-
-Fully Homomorphic Encryption (FHE) supports the simple average fusion method for these model frameworks:
-
-
-
-* Tensorflow
-* Pytorch
-* Scikit-learn classification
-* Scikit-learn regression
-
-
-
-"
-C48E63F001DFAE875E1C82B5D163B7A2C9961CE2_3,C48E63F001DFAE875E1C82B5D163B7A2C9961CE2," Before you begin
-
-To get started with using homomorphic encryption, ensure that your experiment meets the following requirements:
-
-
-
-* The hardware spec must be minimum small. Depending on the level of encryption that you apply, you might need a larger hardware spec to accommodate the resource consumption caused by more powerful data encryption. See the encryption level table in Configuring the aggregator.- The software spec is fl-rt22.2-py3.10.
-* FHE is supported in Python client version 1.0.263 or later. All parties must use the same Python client version.
-
-
-
-"
-C48E63F001DFAE875E1C82B5D163B7A2C9961CE2_4,C48E63F001DFAE875E1C82B5D163B7A2C9961CE2," Requirements for the parties
-
-Each party must:
-
-
-
-* Run on a Linux x86 system.
-* Configure with a root certificate that identifies a certificate authority that is uniform to all parties.
-* Configure an RSA public and private key pair with attributes described in the following table.
-* Configure with a certificate of the party issued by the certificate authority. The RSA public key must be included in the party's certificate.
-
-
-
-Note: You can also choose to use self-signed certificates.
-
-Homomorphic public and private encryption keys are generated and distributed automatically and securely among the parties for each experiment. Only the parties participating in an experiment have access to the private key generated for the experiment. To support the automatic generation and distribution mechanism, the parties must be configured with the certificates and RSA keys specified previously.
-
-"
-C48E63F001DFAE875E1C82B5D163B7A2C9961CE2_5,C48E63F001DFAE875E1C82B5D163B7A2C9961CE2," RSA key requirements
-
-
-
-Table 1. RSA Key Requirements
-
- Attribute Requirement
-
- Key size 4096 bit
- Public exponent 65537
- Password None
- Hash algorithm SHA256
- File format The key and certificate files must be in ""PEM"" format
-
-
-
-"
-C48E63F001DFAE875E1C82B5D163B7A2C9961CE2_6,C48E63F001DFAE875E1C82B5D163B7A2C9961CE2," Configuring the aggregator (admin)
-
-As you create a Federated Learning experiment, follow these steps:
-
-
-
-1. In the Configure tab, toggle ""Enable homomorphic encryption"".
-2. Choose small or above for Hardware specification. Depending on the level of encryption that you apply, you might need a larger hardware spec to accommodate the resource consumption for homomorphic encryption.
-3. Ensure that you upload an unencrypted initial model when selecting the model file for Model specification.
-4. Select ""Simple average (encrypted)"" for Fusion method. Click Next.
-5. Check Show advanced in the Define hyperparameters tab.
-6. Select the level of encryption in Encryption level.
-Higher encryption levels increase security and precision, and require higher resource consumption (e.g. computation, memory, network bandwidth). The default is encryption level 1.
-See the following table for description of the encryption levels:
-
-
-
-
-
-Increasing encryption level and security and precision
-
- Level Security Precision
-
- 1 High Good
- 2 High High
- 3 Very high Good
- 4 Very high High
-
-
-
-Security is the strength of the encryption, typically measured by the number of operations that an attacker must perform to break the encryption.
-Precision is the precision of the encryption system's outcomes. Higher precision levels reduce loss of accuracy of the model due to the encryption.
-
-"
-C48E63F001DFAE875E1C82B5D163B7A2C9961CE2_7,C48E63F001DFAE875E1C82B5D163B7A2C9961CE2," Connecting to the aggregator (party)
-
-The following steps only show the configuration needed for homomorphic encryption. For a step-by-step tutorial of using homomorphic encryption in Federated Learning, see [FHE sample](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-fhe-sample.html).
-
-To see how to create a general end-to-end party connector script, see [Connect to the aggregator (party)](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-conn.html).
-
-
-
-1. Install the Python client with FHE with the following command:
-pip install 'ibm_watson_machine_learning[fl-rt23.1-py3.10,fl-crypto]'
-2. Configure the party as follows:
-
-party_config = {
-""local_training"": {
-""info"": {
-""crypto"": {
-""key_manager"": {
-""key_mgr_info"": {
-""distribution"": {
-""ca_cert_file_path"": ""path of the root certificate file identifying the certificate authority"",
-""my_cert_file_path"": ""path of the certificate file of the party issued by the certificate authority"",
-""asym_key_file_path"": ""path of the RSA key file of the party""
-}
-}
-}
-}
-}
-}
-}
-}
-3. Run the party connector script after configuration.
-
-
-
-"
-C48E63F001DFAE875E1C82B5D163B7A2C9961CE2_8,C48E63F001DFAE875E1C82B5D163B7A2C9961CE2," Additional resources
-
-Parent topic:[Federated Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fed-lea.html)
-"
-4CD539B8153216F80B26729A35AD4CD04A9C27DB_0,4CD539B8153216F80B26729A35AD4CD04A9C27DB," Creating the initial model
-
-Parties can create and save the initial model before training by following a set of examples.
-
-
-
-* [Save the Tensorflow model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.html?context=cdpaas&locale=entf-config)
-* [Save the Scikit-learn model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.html?context=cdpaas&locale=ensklearn-config)
-* [Save the Pytorch model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.html?context=cdpaas&locale=enpytorch)
-
-
-
-Consider the configuration examples that match your model type.
-
-"
-4CD539B8153216F80B26729A35AD4CD04A9C27DB_1,4CD539B8153216F80B26729A35AD4CD04A9C27DB," Save the Tensorflow model
-
-import tensorflow as tf
-from tensorflow.keras import
-from tensorflow.keras.layers import
-import numpy as np
-import os
-
-class MyModel(Model):
-def __init__(self):
-super(MyModel, self).__init__()
-self.conv1 = Conv2D(32, 3, activation='relu')
-self.flatten = Flatten()
-self.d1 = Dense(128, activation='relu')
-self.d2 = Dense(10)
-
-def call(self, x):
-x = self.conv1(x)
-x = self.flatten(x)
-x = self.d1(x)
-return self.d2(x)
-
- Create an instance of the model
-
-model = MyModel()
-loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
-from_logits=True)
-optimizer = tf.keras.optimizers.Adam()
-acc = tf.keras.metrics.SparseCategoricalAccuracy(name='accuracy')
-model.compile(optimizer=optimizer, loss=loss_object, metrics=[acc])
-img_rows, img_cols = 28, 28
-input_shape = (None, img_rows, img_cols, 1)
-model.compute_output_shape(input_shape=input_shape)
-
-dir = ""./model_architecture""
-if not os.path.exists(dir):
-os.makedirs(dir)
-
-model.save(dir)
-
-If you choose Tensorflow as the model framework, you need to save a Keras model as the SavedModel format. A Keras model can be saved in SavedModel format by using tf.keras.model.save().
-
-To compress your files, run the command zip -r mymodel.zip model_architecture. The contents of your .zip file must contain:
-
-mymodel.zip
-└── model_architecture
-├── assets
-├── keras_metadata.pb
-├── saved_model.pb
-└── variables
-"
-4CD539B8153216F80B26729A35AD4CD04A9C27DB_2,4CD539B8153216F80B26729A35AD4CD04A9C27DB,"├── variables.data-00000-of-00001
-└── variables.index
-
-"
-4CD539B8153216F80B26729A35AD4CD04A9C27DB_3,4CD539B8153216F80B26729A35AD4CD04A9C27DB," Save the Scikit-learn model
-
-
-
-* [SKLearn classification](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.html?context=cdpaas&locale=ensk-class)
-* [SKLearn regression](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.html?context=cdpaas&locale=ensk-reg)
-* [SKLearn Kmeans](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.html?context=cdpaas&locale=ensk-k)
-
-
-
-"
-4CD539B8153216F80B26729A35AD4CD04A9C27DB_4,4CD539B8153216F80B26729A35AD4CD04A9C27DB," SKLearn classification
-
- SKLearn classification
-
-from sklearn.linear_model import SGDClassifier
-import numpy as np
-import joblib
-
-model = SGDClassifier(loss='log', penalty='l2')
-model.classes_ = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
- You must specify the class label for IBM Federated Learning using model.classes. Class labels must be contained in a numpy array.
- In the example, there are 10 classes.
-
-joblib.dump(model, ""./model_architecture.pickle"")
-
-"
-4CD539B8153216F80B26729A35AD4CD04A9C27DB_5,4CD539B8153216F80B26729A35AD4CD04A9C27DB," SKLearn regression
-
- Sklearn regression
-
-from sklearn.linear_model import SGDRegressor
-import pickle
-
-model = SGDRegressor(loss='huber', penalty='l2')
-
-with open(""./model_architecture.pickle"", 'wb') as f:
-pickle.dump(model, f)
-
-"
-4CD539B8153216F80B26729A35AD4CD04A9C27DB_6,4CD539B8153216F80B26729A35AD4CD04A9C27DB," SKLearn Kmeans
-
- SKLearn Kmeans
-from sklearn.cluster import KMeans
-import joblib
-
-model = KMeans()
-joblib.dump(model, ""./model_architecture.pickle"")
-
-You need to create a .zip file that contains your model in pickle format by running the command zip mymodel.zip model_architecture.pickle. The contents of your .zip file must contain:
-
-mymodel.zip
-└── model_architecture.pickle
-
-"
-4CD539B8153216F80B26729A35AD4CD04A9C27DB_7,4CD539B8153216F80B26729A35AD4CD04A9C27DB," Save the PyTorch model
-
-import torch
-import torch.nn as nn
-
-model = nn.Sequential(
-nn.Flatten(start_dim=1, end_dim=-1),
-nn.Linear(in_features=784, out_features=256, bias=True),
-nn.ReLU(),
-nn.Linear(in_features=256, out_features=256, bias=True),
-nn.ReLU(),
-nn.Linear(in_features=256, out_features=256, bias=True),
-nn.ReLU(),
-nn.Linear(in_features=256, out_features=100, bias=True),
-nn.ReLU(),
-nn.Linear(in_features=100, out_features=50, bias=True),
-nn.ReLU(),
-nn.Linear(in_features=50, out_features=10, bias=True),
-nn.LogSoftmax(dim=1),
-).double()
-
-torch.save(model, ""./model_architecture.pt"")
-
-You need to create a .zip file containing your model in pickle format. Run the command zip mymodel.zip model_architecture.pt. The contents of your .zip file should contain:
-
-mymodel.zip
-└── model_architecture.pt
-
-Parent topic:[Creating a Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-start.html)
-"
-3ACF4AABD6BE9C3BC0E0A363C3BFFFDD4A37B442_0,3ACF4AABD6BE9C3BC0E0A363C3BFFFDD4A37B442," Monitoring the experiment and saving the model
-
-Any party or admin with collaborator access to the experiment can monitor the experiment and save a copy of the model.
-
-As the experiment runs, you can check the progress of the experiment. After the training is complete, you can view your results, save and deploy the model, and then test the model with new data.
-
-"
-3ACF4AABD6BE9C3BC0E0A363C3BFFFDD4A37B442_1,3ACF4AABD6BE9C3BC0E0A363C3BFFFDD4A37B442," Monitoring the experiment
-
-When all parties run the party connector script, the experiment starts training automatically. As the training runs, you can view a dynamic diagram of the training progress. For each round of training, you can view the four stages of a training round:
-
-
-
-* Sending model: Federated Learning sends the model metrics to each party.
-* Training: The process of training the data locally. Each party trains to produce a local model that is fused. No data is exchanged between parties.
-* Receiving models: After training is complete, each party sends its local model to the aggregator. The data is not sent and remains private.
-* Aggregating: The aggregator combines the models that are sent by each of the remote parties to create an aggregated model.
-
-
-
-"
-3ACF4AABD6BE9C3BC0E0A363C3BFFFDD4A37B442_2,3ACF4AABD6BE9C3BC0E0A363C3BFFFDD4A37B442," Saving your model
-
-When the training is complete, a chart that displays the model accuracy over each round of training is drawn. Hover over the points on the chart for more information on a single point's exact metrics.
-
-A Training rounds table shows details for each training round. The table displays the participating parties' average accuracy of their model training for each round.
-
-
-
-When you are done with the viewing, click Save model to project to save the Federated Learning model to your project.
-
-"
-3ACF4AABD6BE9C3BC0E0A363C3BFFFDD4A37B442_3,3ACF4AABD6BE9C3BC0E0A363C3BFFFDD4A37B442," Rerun the experiment
-
-You can rerun the experiment as many times as you need in your project.
-
-Note:If you encounter errors when rerunning an experiment, see [Troubleshoot](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-troubleshoot.html) for more details.
-
-"
-3ACF4AABD6BE9C3BC0E0A363C3BFFFDD4A37B442_4,3ACF4AABD6BE9C3BC0E0A363C3BFFFDD4A37B442," Deploying your model
-
-After you save your Federated Learning model, you can deploy and score the model like other machine learning models in a Watson Studio platform.
-
-See [Deploying models](https://dataplatform.cloud.ibm.com/docs/content/wsj/wmls/wmls-deploy-overview.html) for more details.
-
-Parent topic:[Creating a Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-start.html)
-"
-4B16740C786C0846194987998DAD887250BE95BF_0,4B16740C786C0846194987998DAD887250BE95BF," Hyperparameter definitions
-
-Definitions of hyperparameters used in the experiment training. One or more of these hyperparameter options might be used, depending on your framework and fusion method.
-
-
-
-Hyperparameter definitions
-
- Hyperparameters Description
-
- Rounds Int value. The number of training iterations to complete between the aggregator and the remote systems.
- Termination accuracy (Optional) Float value. Takes model_accuracy and compares it to a numerical value. If the condition is satisfied, then the experiment finishes early. For example, termination_predicate: accuracy >= 0.8 finishes the experiment when the mean of model accuracy for participating parties is greater than or equal to 80%. Currently, Federated Learning accepts one type of early termination condition (model accuracy) for classification models only.
- Quorum (Optional) Float value. Proceeds with model training after the aggregator reaches a certain ratio of party responses. Takes a decimal value between 0 - 1. The default is 1. The model training starts only after party responses reach the indicated ratio value. For example, setting this value to 0.5 starts the training after 50% of the registered parties responded to the aggregator call.
- Max Timeout (Optional) Int value. Terminates the Federated Learning experiment if the waiting time for party responses exceeds this value in seconds. Takes a numerical value up to 43200. If this value in seconds passes and the quorum ratio is not reached, the experiment terminates. For example, max_timeout = 1000 terminates the experiment after 1000 seconds if the parties do not respond in that time.
- Sketch accuracy vs privacy (Optional) Float value. Used with XGBoost training to control the relative accuracy of sketched data sent to the aggregator. Takes a decimal value between 0 and 1. Higher values will result in higher quality models but with a reduction in data privacy and increase in resource consumption.
-"
-4B16740C786C0846194987998DAD887250BE95BF_1,4B16740C786C0846194987998DAD887250BE95BF," Number of classes Int value. Number of target classes for the classification model. Required if ""Loss"" hyperparameter is: - auto - binary_crossentropy - categorical_crossentropy
- Learning rate Decimal value. The learning rate, also known as shrinkage. This is used as a multiplicative factor for the leaves values.
- Loss String value. The loss function to use in the boosting process. - binary_crossentropy (also known as logistic loss) is used for binary classification. - categorical_crossentropy is used for multiclass classification. - auto chooses either loss function depending on the nature of the problem. - least_squares is used for regression.
- Max Iter Int value. The total number of passes over the local training data set to train a Scikit-learn model.
- N cluster Int value. The number of clusters to form and the number of centroids to generate.
- Epoch (Optional) Int value. The number of local training iterations to be preformed by each remote party for each round. For example, if you set Rounds to 2 and Epochs to 5, all remote parties train locally 5 times before the model is sent to the aggregator. In round 2, the aggregator model is trained locally again by all parties 5 times and re-sent to the aggregator.
- sigma Float value. Determines how far the local model neurons are allowed from the global model. A bigger value allows more matching and produces a smaller global model. Default value is 1.
- sigma0 Float value. Defines the permitted deviation of the global network neurons. Default value is 1.
- gamma Float value. Indian Buffet Process parameter that controls the expected number of features in each observation. Default value is 1.
-
-
-
-Parent topic:[Frameworks, fusion methods, and Python versions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-frames.html)
-"
-E0D36A6F5028FC5ED005E87FAF9F65F976E62A37_0,E0D36A6F5028FC5ED005E87FAF9F65F976E62A37," Set up your system
-
-Before you can use IBM Federated Learning, ensure that you have the required hardware, software, and dependencies.
-
-"
-E0D36A6F5028FC5ED005E87FAF9F65F976E62A37_1,E0D36A6F5028FC5ED005E87FAF9F65F976E62A37," Core requirements by role
-
-Each entity that participates in a Federated Learning experiment must meet the requirements for their role.
-
-"
-E0D36A6F5028FC5ED005E87FAF9F65F976E62A37_2,E0D36A6F5028FC5ED005E87FAF9F65F976E62A37," Admin software requirements
-
-Designate an admin for the Federated Learning experiment. The admin must have:
-
-
-
-* Access to the platform with Watson Studio and Watson Machine Learning enabled.
-You must [create a Watson Machine Learning service instance](https://cloud.ibm.com/catalog/services/machine-learning).
-* A [project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html) for assembling the global model. You must [associate the Watson Machine Learning service instance with your project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assoc-services.html).
-
-
-
-"
-E0D36A6F5028FC5ED005E87FAF9F65F976E62A37_3,E0D36A6F5028FC5ED005E87FAF9F65F976E62A37," Party hardware and software requirements
-
-Each party must have a system that meets these minimum requirements.
-
-Note: Remote parties participating in the same Federated Learning experiment can use different hardware specs and architectures, as long as they each meet the minimum requirement.
-
-"
-E0D36A6F5028FC5ED005E87FAF9F65F976E62A37_4,E0D36A6F5028FC5ED005E87FAF9F65F976E62A37," Supported architectures
-
-
-
-* x86 64-bit
-* PPC
-* Mac M-series
-* 4 GB memory or greater
-
-
-
-"
-E0D36A6F5028FC5ED005E87FAF9F65F976E62A37_5,E0D36A6F5028FC5ED005E87FAF9F65F976E62A37," Supported environments
-
-
-
-* Linux
-* Mac OS/Unix
-* Windows
-
-
-
-"
-E0D36A6F5028FC5ED005E87FAF9F65F976E62A37_6,E0D36A6F5028FC5ED005E87FAF9F65F976E62A37," Software dependencies
-
-
-
-* A supported [Python version and a machine learning framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-frames.html).
-* The Watson Machine Learning Python client.
-
-
-
-1. If you are using Linux, run pip install 'ibm-watson-machine-learning[fl-rt22.2-py3.10]'.
-2. If you are using Mac OS with M-series CPU and Conda, download the installation script and then run ./install_fl_rt22.2_macos.sh .
-
-
-
-
-
-"
-E0D36A6F5028FC5ED005E87FAF9F65F976E62A37_7,E0D36A6F5028FC5ED005E87FAF9F65F976E62A37," Network requirements
-
-An outbound connection from the remote party to aggregator is required. Parties can use firewalls that restrict internal connections with each other.
-
-"
-E0D36A6F5028FC5ED005E87FAF9F65F976E62A37_8,E0D36A6F5028FC5ED005E87FAF9F65F976E62A37," Data sources requirements
-
-Data must comply with these requirements.
-
-
-
-* Data must be in a directory or storage repository that is accessible to the party that uses them.
-* Each data source for a federate model must have the same features. IBM Federated Learning supports horizontal federated learning only.
-* Data must be in a readable format, but the formats can vary by data source. Suggested formats include:
-
-
-
-* Hive
-* Excel
-* CSV
-* XML
-* Database
-
-
-
-
-
-Parent topic:[Creating a Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-start.html)
-"
-E5895BC081EDBF0CD7340015DECD0D0180AAC44A,E5895BC081EDBF0CD7340015DECD0D0180AAC44A," Creating a Federated Learning experiment
-
-Learn how to create a Federated Learning experiment to train a machine learning model.
-
-Watch this short overview video of how to create a Federated Learning experiment.
-
-Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform.
-
-This video provides a visual method to learn the concepts and tasks in this documentation.
-
-Follow these steps to create a Federated Learning experiment:
-
-
-
-* [Set up your system](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-setup.html)
-* [Creating the initial model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-models.html)
-* [Create the data handler](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-handler.html)
-* [Starting the aggregator (Admin)](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-agg.html)
-* [Connecting to the aggregator (Party)](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-conn.html)
-* [Monitoring and saving the model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-mon.html)
-
-
-
-Parent topic:[IBM Federated Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fed-lea.html)
-"
-8FFE1FB9CAF854DED9CA52190D4874D8280D26B0_0,8FFE1FB9CAF854DED9CA52190D4874D8280D26B0," Terminology
-
-Terminology that is used in IBM Federated Learning training processes.
-
-"
-8FFE1FB9CAF854DED9CA52190D4874D8280D26B0_1,8FFE1FB9CAF854DED9CA52190D4874D8280D26B0," Terminology
-
-
-
-Federated Learning terminology
-
- Term Definition
-
- Party Users that contribute different sources of data to train a model collaboratively. Federated Learning ensures that the training occurs with no data exposure risk across the different parties. A party must have at least Viewer permission in the Watson Studio Federated Learning project.
- Admin A party member that configures the Federated Learning experiment to specify how many parties are allowed, which frameworks to use, and sets up the Remote Training Systems (RTS). They start the Federated Learning experiment and see it to the end. An admin must have at least Editor permission in the Watson Studio Federated Learning project.
- Remote Training System An asset that is used to authenticate a party to the aggregator. Project members register in the Remote Training System (RTS) before training. Only one of the members can use one RTS to participate in an experiment as a party. Multiple contributing parties must each authenticate with one RTS for an experiment.
- Aggregator The aggregator fuses the model results between the parties to build one model.
- Fusion method The algorithm that is used to combine the results that the parties return to the aggregator.
- Data handler In IBM Federated Learning, data handler is a class that is used to load and pre-process data. It also helps to ensure that data that is collected from multiple sources are formatted uniformly to be trained. More details about the data handler can be found in [Data Handler](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-handler.html).
- Global model The resulting model that is fused between different parties.
- Training round A training round is the process of local data training, global model fusion, and update. Training is iterative. The admin can choose the number of training rounds.
-
-
-
-Parent topic:[Get started](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-get-started.html)
-"
-E64B1811E55868CF510B06BFD1A24BA4AC3008F1_0,E64B1811E55868CF510B06BFD1A24BA4AC3008F1," Federated Learning Tensorflow samples
-
-Download and review sample files that show how to run a Federated Learning experiment by using API calls with a Tensorflow Keras model framework.
-
-To see a step-by-step UI driven approach rather than sample files, see the [Federated Learning Tensorflow tutorial for UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-tf2-tutorial.html).
-
-"
-E64B1811E55868CF510B06BFD1A24BA4AC3008F1_1,E64B1811E55868CF510B06BFD1A24BA4AC3008F1," Download the Federated Learning sample files
-
-The Federated Learning sample has two parts, both in Jupyter Notebook format that can run in the latest Python environment.
-
-For single-user demonstrative purposes, the Notebooks are placed in a project. Access the [Federated Learning project](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/cab78523832431e767c41527a42a6727), and click Create project to get all the sample files at once.
-
-You can also get the Notebook separately. Since, for practical purposes of Federated Learning, one user would run the admin Notebook and multiple users would run the party Notebook. For more details on the admin and party, see [Terminology](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-term.html).
-
-
-
-1. [Federated Learning Tensorflow Demo Part 1 - for Admin](https://github.com/IBMDataScience/sample-notebooks/blob/master/CloudPakForData/notebooks/4.7/Federated_Learning_TF_Demo_Part_1.ipynb)
-2. [Federated Learning Tensorflow Demo Part 2 - for Party](https://github.com/IBMDataScience/sample-notebooks/blob/master/CloudPakForData/notebooks/4.7/Federated_Learning_TF_Demo_Part_2.ipynb)
-
-
-
-Parent topic:[Federated Learning tutorial and samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-demo.html)
-"
-37DC9376A7FB6EB772D242B85909A023C43C2417_0,37DC9376A7FB6EB772D242B85909A023C43C2417," Federated Learning Tensorflow tutorial
-
-This tutorial demonstrates the usage of Federated Learning with the goal of training a machine learning model with data from different users without having users share their data. The steps are done in a low code environment with the UI and with a Tensorflow framework.
-
-Note:This is a step-by-step tutorial for running a UI driven Federated Learning experiment. To see a code sample for an API driven approach, see [Federated Learning Tensorflow samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-tf2-samples.html). Tip:In this tutorial, admin refers to the user that starts the Federated Learning experiment, and party refers to one or more users who send their model results after the experiment is started by the admin. While the tutorial can be done by the admin and multiple parties, a single user can also complete a full runthrough as both the admin and the party. For a simpler demonstrative purpose, in the following tutorial only one data set is submitted by one party. For more information on the admin and party, see [Terminology](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-term.html).
-
-Watch this short video tutorial of how to create a Federated Learning experiment with Watson Studio.
-
-Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform.
-
-This video provides a visual method to learn the concepts and tasks in this documentation.
-
-In this tutorial you will learn to:
-
-
-
-* [Step 1: Start Federated Learning as the admin](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-tf2-tutorial.html?context=cdpaas&locale=enstep-1)
-* [Step 2: Train model as a party](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-tf2-tutorial.html?context=cdpaas&locale=enstep-2)
-"
-37DC9376A7FB6EB772D242B85909A023C43C2417_1,37DC9376A7FB6EB772D242B85909A023C43C2417,"* [Step 3: Save and deploy the model online](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-tf2-tutorial.html?context=cdpaas&locale=enstep-3)
-
-
-
-"
-37DC9376A7FB6EB772D242B85909A023C43C2417_2,37DC9376A7FB6EB772D242B85909A023C43C2417," Step 1: Start Federated Learning as the admin
-
-In this tutorial, you train a Federated Learning experiment with a Tensorflow framework and the MNIST data set.
-
-"
-37DC9376A7FB6EB772D242B85909A023C43C2417_3,37DC9376A7FB6EB772D242B85909A023C43C2417," Before you begin
-
-
-
-1. Log in to [IBM Cloud](https://cloud.ibm.com/). If you don't have an account, create one with any email.
-2. [Create a Watson Machine Learning service instance](https://cloud.ibm.com/catalog/services/machine-learning) if you do not have it set up in your environment.
-3. Log in to [watsonx](https://dataplatform.cloud.ibm.com/home2?context=wx).
-4. Use an existing [project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html) or create a new one. You must have at least admin permission.
-5. Associate the Watson Machine Learning service with your project.
-
-
-
-1. In your project, click the Manage > Service & integrations.
-2. Click Associate service.
-3. Select your Watson Machine Learning instance from the list, and click Associate; or click New service if you do not have one to set up an instance.
-
-
-
-
-
-
-
-"
-37DC9376A7FB6EB772D242B85909A023C43C2417_4,37DC9376A7FB6EB772D242B85909A023C43C2417," Start the aggregator
-
-
-
-1. Create the Federated learning experiment asset:
-
-
-
-1. Click the Assets tab in your project.
-2. Click New asset > Train models on distributed data.
-3. Type a Name for your experiment and optionally a description.
-4. Verify the associated Watson Machine Learning instance under Select a machine learning instance. If you don't see a Watson Machine Learning instance associated, follow these steps:
-
-
-
-1. Click Associate a Machine Learning Service Instance.
-2. Select an existing instance and click Associate, or create a New service.
-3. Click Reload to see the associated service.
-
-
-4. Click Next.
-
-
-
-
-
-2. Configure the experiment.
-
-
-
-1. On the Configure page, select a Hardware specification.
-2. Under the Machine learning framework dropdown, select Tensorflow 2.
-3. Select a Model type.
-4. Download the [untrained model](https://github.com/IBMDataScience/sample-notebooks/raw/master/Files/tf_mnist_model.zip).
-5. Back in the Federated Learning experiment, click Select under Model specification.
-6. Drag the downloaded file named tf_mnist_model.zip onto the Upload file box.1. Select runtime-22.2-py3.10 for the Software Specification dropdown.
-7. Give your model a name, and then click Add.
-
-
-8. Click Weighted average for the Fusion method, and click Next.
-
-
-
-
-
-3. Define the hyperparameters.
-
-
-
-1. Accept the default hyperparameters or adjust as needed.
-2. When you are finished, click Next.
-
-
-
-4. Select remote training systems.
-
-
-
-1. Click Add new systems.
-
-"
-37DC9376A7FB6EB772D242B85909A023C43C2417_5,37DC9376A7FB6EB772D242B85909A023C43C2417,"
-2. Give your Remote Training System a name.
-3. Under Allowed identities, choose the user that is your party, and then click Add. In this tutorial, you can add a dummy user or yourself, for demonstrative purposes.
-This user must be added to your project as a collaborator with Editor or higher permissions. Add additional systems by repeating this step for each remote party you intent to use.
-4. When you are finished, click Add systems.
-
-
-5. Return to the Select remote training systems page, verify that your system is selected, and then click Next.
-
-
-
-5. Review your settings, and then click Create.
-6. Watch the status. Your Federated Learning experiment status is Pending when it starts. When your experiment is ready for parties to connect, the status will change to Setup – Waiting for remote systems. This may take a few minutes.
-7. Click View setup information to download the party configuration and the party connector script that can be run on the remote party.
-8. Click the download icon besides each of the remote training systems that you created, and then click Party connector script. This gives you the party connector script. Save the script to a directory on your machine.
-
-
-
-
-
-"
-37DC9376A7FB6EB772D242B85909A023C43C2417_6,37DC9376A7FB6EB772D242B85909A023C43C2417," Step 2: Train model as the party
-
-Follow these steps to train the model as a party:
-
-
-
-1. Ensure that you are using the same Python version as the admin. Using a different Python version might cause compatibility issues. To see Python versions compatible with different frameworks, see [Frameworks and Python version compatibility](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-frames.htmlfl-py-fmwk).
-2. Create a new local directory, and put your party connector script in it.
-3. [Download the data handler mnist_keras_data_handler.py](https://raw.githubusercontent.com/IBMDataScience/sample-notebooks/master/Files/mnist_keras_data_handler.py) by right-clicking on it and click Save link as. Save it to the same directory as the party connector script.
-4. [Download the MNIST handwriting data set](https://api.dataplatform.cloud.ibm.com/v2/gallery-assets/entries/903188bb984a30f38bb889102a1baae5/data) from our Samples. In the the same directory as the party connector script, data handler, and the rest of your files, unzip it by running the unzip command unzip MNIST-pkl.zip.
-5. Install Watson Machine Learning.
-
-
-
-* If you are using Linux, run pip install 'ibm-watson-machine-learning[fl-rt22.2-py3.10]'.
-* If you are using Mac OS with M-series CPU and Conda, download the [installation script](https://raw.github.ibm.com/WML/federated-learning/master/docs/install_fl_rt22.2_macos.sh?token=AAAXW7VVQZF7LYMTX5VOW7DEDULLE) and then run ./install_fl_rt22.2_macos.sh .
-"
-37DC9376A7FB6EB772D242B85909A023C43C2417_7,37DC9376A7FB6EB772D242B85909A023C43C2417,"You now have the party connector script, mnist_keras_data_handler.py, mnist-keras-test.pkl and mnist-keras-train.pkl, data handler in the same directory.
-
-
-
-6. Your party connector script looks similar to the following. Edit it by filling in the data file locations, the data handler, and API key for the user defined in the remote training system. To get your API key, go to Manage > Access(IAM) > API keys in your [IBM Cloud account](https://cloud.ibm.com/iam/apikeys). If you don't have one, click Create API key, fill out the fields, and click Create.
-
-from ibm_watson_machine_learning import APIClient
-wml_credentials = {
-""url"": ""https://us-south.ml.cloud.ibm.com"",
-""apikey"": """"
-}
-wml_client = APIClient(wml_credentials)
-wml_client.set.default_project(""XXX-XXX-XXX-XXX-XXX"")
-party_metadata = {
-wml_client.remote_training_systems.ConfigurationMetaNames.DATA_HANDLER: {
- Supply the name of the data handler class and path to it.
- The info section may be used to pass information to the
- data handler.
- For example,
- ""name"": ""MnistSklearnDataHandler"",
- ""path"": ""example.mnist_sklearn_data_handler"",
- ""info"": {
- ""train_file"": pwd + ""/mnist-keras-train.pkl"",
- ""test_file"": pwd + ""/mnist-keras-test.pkl""
- }
-""name"": """",
-""path"": """",
-""info"": {
-""""
-}
-}
-}
-party = wml_client.remote_training_systems.create_party(""XXX-XXX-XXX-XXX-XXX"", party_metadata)
-party.monitor_logs()
-"
-37DC9376A7FB6EB772D242B85909A023C43C2417_8,37DC9376A7FB6EB772D242B85909A023C43C2417,"party.run(aggregator_id=""XXX-XXX-XXX-XXX-XXX"", asynchronous=False)
-7. Run the party connector script: python3 rts__.py.
-From the UI you can monitor the status of your Federated Learning experiment.
-
-
-
-"
-37DC9376A7FB6EB772D242B85909A023C43C2417_9,37DC9376A7FB6EB772D242B85909A023C43C2417," Step 3: Save and deploy the model online
-
-In this section, you will learn to save and deploy the model that you trained.
-
-
-
-1. Save your model.
-
-
-
-1. In your completed Federated Learning experiment, click Save model to project.
-2. Give your model a name and click Save.
-3. Go to your project home.
-
-
-
-2. Create a deployment space, if you don't have one.
-
-
-
-1. From the navigation menu , click Deployments.
-2. Click New deployment space.
-3. Fill in the fields, and click Create.
-
-
-
-3. Promote the model to a space.
-
-
-
-1. Return to your project, and click the Assets tab.
-2. In the Models section, click the model to view its details page.
-3. Click Promote to space.
-4. Choose a deployment space for your trained model.
-5. Select the Go to the model in the space after promoting it option.
-6. Click Promote.
-
-
-
-4. When the model displays inside the deployment space, click New deployment.
-
-
-
-1. Select Online as the Deployment type.
-2. Specify a name for the deployment.
-3. Click Create.
-
-
-
-5. Click the Deployments tab to monitor your model's deployment status.
-
-
-
-"
-37DC9376A7FB6EB772D242B85909A023C43C2417_10,37DC9376A7FB6EB772D242B85909A023C43C2417," Next steps
-
-Ready to create your own customized Federated Experiment? See the high level steps in [Creating your Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-start.html).
-
-Parent topic:[Federated Learning tutorial and samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-demo.html)
-"
-866BBCABEF2C6E3EDDF66300DC2639C938D815F4_0,866BBCABEF2C6E3EDDF66300DC2639C938D815F4," Troubleshooting Federated Learning experiments
-
-The following are some of the limitations and troubleshoot methods that apply to Federated learning experiments.
-
-"
-866BBCABEF2C6E3EDDF66300DC2639C938D815F4_1,866BBCABEF2C6E3EDDF66300DC2639C938D815F4," Limitations
-
-
-
-* If you choose to enable homomorphic encryption, intermediate models can no longer be saved. However, the final model of the training experiment can be saved and used normally. The aggregator will not be able to decrypt the model updates and the intermediate global models. The aggregator can see only the final global model.
-
-
-
-"
-866BBCABEF2C6E3EDDF66300DC2639C938D815F4_2,866BBCABEF2C6E3EDDF66300DC2639C938D815F4," Troubleshooting
-
-
-
-* If a quorum error occurs during homomorphic keys distribution, restart the experiment.
-* Changing the name of a Federated Learning experiment causes it to lose its current name, including earlier runs. If this is not intended, create a new experiment with the new name.
-* The default software spec is used by every run. If your model type becomes outdated and not compatible with future software specs, re-running an older experiment might run into issues.
-* As Remote Training Systems are meant to run on different servers, you might encounter unexpected behavior when you run with multiple parties that are based in the same server.
-
-
-
-"
-866BBCABEF2C6E3EDDF66300DC2639C938D815F4_3,866BBCABEF2C6E3EDDF66300DC2639C938D815F4," Federated Learning known issues
-
-
-
-* [Known issues for Federated Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/known-issues.htmlwml)
-
-
-
-Parent topic:[IBM Federated Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fed-lea.html)
-"
-D0142FFCD3063427101CCC165C5E5F2B0FA286DB_0,D0142FFCD3063427101CCC165C5E5F2B0FA286DB," Federated Learning XGBoost samples
-
-These are links to sample files to run Federated Learning by using API calls with an XGBoost framework. To see a step-by-step UI driven approach, go to [Federated Learning XGBoost tutorial for UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-tutorial.html).
-
-"
-D0142FFCD3063427101CCC165C5E5F2B0FA286DB_1,D0142FFCD3063427101CCC165C5E5F2B0FA286DB," Download the Federated Learning sample files
-
-The Federated Learning samples have two parts, both in Jupyter Notebook format that can run in the latest Python environment.
-
-For single-user demonstrative purposes, the Notebooks are placed in a project. Go to the following link and click Create project to get all the sample files.
-
-[Download the Federated Learning project](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/45a71514d67d87bb7900880b4501732c?context=wx)
-
-You can also get the Notebook separately. For practical purposes of Federated Learning, one user would run the admin Notebook and multiple users would run the party Notebook. For more details on the admin and party, see [Terminology](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-term.html)
-
-
-
-1. [Federated Learning XGBoost Demo Part 1 - for Admin](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/c95a130a2efdddc0a4b38c319a011fed)
-2. [Federated Learning XGBoost Demo Part 2 - for Party](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/155a5e78ca72a013e45d54ae87012306)
-
-
-
-Parent topic:[Federated Learning tutorial and samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-demo.html)
-"
-FE207218CE0D1148AA57D10ED8848CD7E6FFD87E_0,FE207218CE0D1148AA57D10ED8848CD7E6FFD87E," Federated Learning XGBoost tutorial for UI
-
-This tutorial demonstrates the usage of Federated Learning with the goal of training a machine learning model with data from different users without having users share their data. The steps are done in a low code environment with the UI and with an XGBoost framework.
-
-In this tutorial you learn to:
-
-
-
-* [Step 1: Start Federated Learning as the admin](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-tutorial.html?context=cdpaas&locale=enstep-1)
-
-
-
-* [Before you begin](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-tutorial.html?context=cdpaas&locale=enbefore-you-begin)
-* [Start the aggregator](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-tutorial.html?context=cdpaas&locale=enstart-the-aggregator)
-
-
-
-* [Step 2: Train model as a party](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-tutorial.html?context=cdpaas&locale=enstep-2)
-
-
-
-* [Step 3: Save and deploy the model online](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-tutorial.html?context=cdpaas&locale=enstep-3)
-* [Step 4: Score the model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-tutorial.html?context=cdpaas&locale=enstep-4)
-
-
-
-
-
-Notes:
-
-
-
-"
-FE207218CE0D1148AA57D10ED8848CD7E6FFD87E_1,FE207218CE0D1148AA57D10ED8848CD7E6FFD87E,"* This is a step-by-step tutorial for running a UI driven Federated Learning experiment. To see a code sample for an API driven approach, go to [Federated Learning XGBoost samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-xg-samples.html).
-* In this tutorial, admin refers to the user that starts the Federated Learning experiment, and party refers to one or more users who send their model results after the experiment is started by the admin. While the tutorial can be done by the admin and multiple parties, a single user can also complete a full run through as both the admin and the party. For a simpler demonstrative purpose, in the following tutorial only one data set is submitted by one party. For more information on the admin and party, see [Terminology](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-term.html).
-
-
-
-"
-FE207218CE0D1148AA57D10ED8848CD7E6FFD87E_2,FE207218CE0D1148AA57D10ED8848CD7E6FFD87E," Step 1: Start Federated Learning
-
-In this section, you learn to start the Federated Learning experiment.
-
-"
-FE207218CE0D1148AA57D10ED8848CD7E6FFD87E_3,FE207218CE0D1148AA57D10ED8848CD7E6FFD87E," Before you begin
-
-
-
-1. Log in to [IBM Cloud](https://cloud.ibm.com/). If you don't have an account, create one with any email.
-2. [Create a Watson Machine Learning service instance](https://cloud.ibm.com/catalog/services/machine-learning) if you do not have it set up in your environment.
-3. Log in to [watsonx](https://dataplatform.cloud.ibm.com/home2?context=wx).
-4. Use an existing [project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html) or create a new one. You must have at least admin permission.
-5. Associate the Watson Machine Learning service with your project.
-
-
-
-1. In your project, click the Manage > Service & integrations.
-2. Click Associate service.
-3. Select your Watson Machine Learning instance from the list, and click Associate; or click New service if you do not have one to set up an instance.
-
-
-
-
-
-
-
-"
-FE207218CE0D1148AA57D10ED8848CD7E6FFD87E_4,FE207218CE0D1148AA57D10ED8848CD7E6FFD87E," Start the aggregator
-
-
-
-1. Create the Federated learning experiment asset:
-
-
-
-1. Click the Assets tab in your project.
-
-
-
-1. Click New asset > Train models on distributed data.
-2. Type a Name for your experiment and optionally a description.
-3. Verify the associated Watson Machine Learning instance under Select a machine learning instance. If you don't see a Watson Machine Learning instance associated, follow these steps:
-
-
-
-1. Click Associate a Machine Learning Service Instance.
-2. Select an existing instance and click Associate, or create a New service.
-3. Click Reload to see the associated service.
-
-
-4. Click Next.
-
-
-
-
-
-
-
-2. Configure the experiment.
-
-
-
-1. On the Configure page, select a Hardware specification.
-2. Under the Machine learning framework dropdown, select scikit-learn.
-3. For the Model type, select XGBoost.
-4. For the Fusion method, select XGBoost classification fusion
-
-
-
-
-
-3. Define the hyperparameters.
-
-
-
-1. Set the value for the Rounds field to 5.
-2. Accept the default values for the rest of the fields.
-
-
-3. Click Next.
-
-
-
-4. Select remote training systems.
-
-
-
-1. Click Add new systems.
-
-
-2. Give your Remote Training System a name.
-3. Under Allowed identities, select the user that will participate in the experiment, and then click Add. You can add as many allowed identities as participants in this Federated Experiment training instance. For this tutorial, choose only yourself.
-"
-FE207218CE0D1148AA57D10ED8848CD7E6FFD87E_5,FE207218CE0D1148AA57D10ED8848CD7E6FFD87E,"Any allowed identities must be part of the project and have at leastAdmin permission.
-4. When you are finished, click Add systems.
-
-
-5. Return to the Select remote training systems page, verify that your system is selected, and then click Next.
-
-
-
-
-
-5. Review your settings, and then click Create.
-6. Watch the status. Your Federated Learning experiment status is Pending when it starts. When your experiment is ready for parties to connect, the status will change to Setup – Waiting for remote systems. This may take a few minutes.
-
-
-
-"
-FE207218CE0D1148AA57D10ED8848CD7E6FFD87E_6,FE207218CE0D1148AA57D10ED8848CD7E6FFD87E," Step 2: Train model as a party
-
-
-
-1. Ensure that you are using the same Python version as the admin. Using a different Python version might cause compatibility issues. To see Python versions compatible with different frameworks, see [Frameworks and Python version compatibility](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-frames.htmlfl-py-fmwk).
-2. Create a new local directory.
-3. Download the Adult data set into the directory with this command: wget https://api.dataplatform.cloud.ibm.com/v2/gallery-assets/entries/5fcc01b02d8f0e50af8972dc8963f98e/data -O adult.csv.
-4. Download the data handler by running wget https://raw.githubusercontent.com/IBMDataScience/sample-notebooks/master/Files/adult_sklearn_data_handler.py -O adult_sklearn_data_handler.py.
-5. Install Watson Machine Learning.
-
-
-
-* If you are using Linux, run pip install 'ibm-watson-machine-learning[fl-rt22.2-py3.10]'.
-* If you are using Mac OS with M-series CPU and Conda, download the [installation script](https://raw.github.ibm.com/WML/federated-learning/master/docs/install_fl_rt22.2_macos.sh?token=AAAXW7VVQZF7LYMTX5VOW7DEDULLE) and then run ./install_fl_rt22.2_macos.sh .
-You now have the party connector script, mnist_keras_data_handler.py, mnist-keras-test.pkl and mnist-keras-train.pkl, data handler in the same directory.
-
-
-
-6. Go back to the Federated Learning experiment page, where the aggregator is running. Click View Setup Information.
-7. Click the download icon next to the remote training system, and select Party connector script.
-"
-FE207218CE0D1148AA57D10ED8848CD7E6FFD87E_7,FE207218CE0D1148AA57D10ED8848CD7E6FFD87E,"8. Ensure that you have the party connector script, the Adult data set, and the data handler in the same directory. If you run ls -l, you should see:
-
-adult.csv
-adult_sklearn_data_handler.py
-rts__.py
-9. In the party connector script:
-
-
-
-1. Authenticate using any method.
-2. Put in these parameters for the ""data"" section:
-
-""data"": {
-""name"": ""AdultSklearnDataHandler"",
-""path"": ""./adult_sklearn_data_handler.py"",
-""info"": {
-""txt_file"": ""./adult.csv""
-},
-},
-
-where:
-
-
-
-* name: Class name defined for the data handler.
-* path: Path of where the data handler is located.
-* info: Create a key value pair for the file type of local data set, or the path of your data set.
-
-
-
-
-
-10. Run the party connector script: python3 rts__.py.
-11. When all participating parties connect to the aggregator, the aggregator facilitates the local model training and global model update. Its status is Training. You can monitor the status of your Federated Learning experiment from the user interface.
-12. When training is complete, the party receives a Received STOP message on the party.
-13. Now, you can save the trained model and deploy it to a space.
-
-
-
-"
-FE207218CE0D1148AA57D10ED8848CD7E6FFD87E_8,FE207218CE0D1148AA57D10ED8848CD7E6FFD87E," Step 3: Save and deploy the model online
-
-In this section, you learn how to save and deploy the model that you trained.
-
-
-
-1. Save your model.
-
-
-
-1. In your completed Federated Learning experiment, click Save model to project.
-2. Give your model a name and click Save.
-3. Go to your project home.
-
-
-
-2. Create a deployment space, if you don't have one.
-
-
-
-1. From the navigation menu , click Deployments.
-2. Click New deployment space.
-3. Fill in the fields, and click Create.
-
-
-
-
-
-3. Promote the model to a space.
-
-
-
-1. Return to your project, and click the Assets tab.
-2. In the Models section, click the model to view its details page.
-3. Click Promote to space.
-4. Choose a deployment space for your trained model.
-5. Select the Go to the model in the space after promoting it option.
-6. Click Promote.
-
-
-
-4. When the model displays inside the deployment space, click New deployment.
-
-
-
-1. Select Online as the Deployment type.
-2. Specify a name for the deployment.
-3. Click Create.
-
-
-
-
-
-"
-FE207218CE0D1148AA57D10ED8848CD7E6FFD87E_9,FE207218CE0D1148AA57D10ED8848CD7E6FFD87E," Step 4: Score the model
-
-In this section, you learn to create a Python function to process the scoring data to ensure that it is in the same format that was used during training. For comparison, you will also score the raw data set by calling the Python function that we created.
-
-
-
-1. Define the Python function as follows. The function loads the scoring data in its raw format and processes the data exactly as it was done during training. Then, score the processed data.
-
-def adult_scoring_function():
-
-import pandas as pd
-
-from ibm_watson_machine_learning import APIClient
-
-wml_credentials = {
-""url"": ""https://us-south.ml.cloud.ibm.com"",
-""apikey"": """"
-}
-client = APIClient(wml_credentials)
-client.set.default_space('')
-
- converts scoring input data format to pandas dataframe
-def create_dataframe(raw_dataset):
-
-fields = raw_dataset.get(""input_data"")[0].get(""fields"")
-values = raw_dataset.get(""input_data"")[0].get(""values"")
-
-raw_dataframe = pd.DataFrame(
-columns = fields,
-data = values
-)
-
-return raw_dataframe
-
- reuse preprocess definition from training data handler
-def preprocess(training_data):
-
-""""""
-Performs the following preprocessing on adult training and testing data:
-* Drop following features: 'workclass', 'fnlwgt', 'education', 'marital-status', 'occupation',
-'relationship', 'capital-gain', 'capital-loss', 'hours-per-week', 'native-country'
-* Map 'race', 'sex' and 'class' values to 0/1
-* ' White': 1, ' Amer-Indian-Eskimo': 0, ' Asian-Pac-Islander': 0, ' Black': 0, ' Other': 0
-* ' Male': 1, ' Female': 0
-* Further details in Kamiran, F. and Calders, T. Data preprocessing techniques for classification without discrimination
-"
-FE207218CE0D1148AA57D10ED8848CD7E6FFD87E_10,FE207218CE0D1148AA57D10ED8848CD7E6FFD87E,"* Split 'age' and 'education' columns into multiple columns based on value
-
-:param training_data: Raw training data
-:type training_data: pandas.core.frame.DataFrame
-:return: Preprocessed training data
-:rtype: pandas.core.frame.DataFrame
-""""""
-if len(training_data.columns)==15:
- drop 'fnlwgt' column
-training_data = training_data.drop(training_data.columns[2], axis='columns')
-
-training_data.columns = ['age',
-'workclass',
-'education',
-'education-num',
-'marital-status',
-'occupation',
-'relationship',
-'race',
-'sex',
-'capital-gain',
-'capital-loss',
-'hours-per-week',
-'native-country',
-'class']
-
- filter out columns unused in training, and reorder columns
-training_dataset = training_data['race', 'sex', 'age', 'education-num', 'class']]
-
- map 'sex' and 'race' feature values based on sensitive attribute privileged/unpriveleged groups
-training_dataset['sex'] = training_dataset['sex'].map({' Female': 0,
-' Male': 1})
-
-training_dataset['race'] = training_dataset['race'].map({' Asian-Pac-Islander': 0,
-' Amer-Indian-Eskimo': 0,
-' Other': 0,
-' Black': 0,
-' White': 1})
-
- map 'class' values to 0/1 based on positive and negative classification
-training_dataset['class'] = training_dataset['class'].map({' <=50K': 0, ' >50K': 1})
-
-training_dataset['age'] = training_dataset['age'].astype(int)
-training_dataset['education-num'] = training_dataset['education-num'].astype(int)
-
-"
-FE207218CE0D1148AA57D10ED8848CD7E6FFD87E_11,FE207218CE0D1148AA57D10ED8848CD7E6FFD87E," split age column into category columns
-for i in range(8):
-if i != 0:
-training_dataset['age' + str(i)] = 0
-
-for index, row in training_dataset.iterrows():
-if row['age'] < 20:
-training_dataset.loc[index, 'age1'] = 1
-elif ((row['age'] < 30) & (row['age'] >= 20)):
-training_dataset.loc[index, 'age2'] = 1
-elif ((row['age'] < 40) & (row['age'] >= 30)):
-training_dataset.loc[index, 'age3'] = 1
-elif ((row['age'] < 50) & (row['age'] >= 40)):
-training_dataset.loc[index, 'age4'] = 1
-elif ((row['age'] < 60) & (row['age'] >= 50)):
-training_dataset.loc[index, 'age5'] = 1
-elif ((row['age'] < 70) & (row['age'] >= 60)):
-training_dataset.loc[index, 'age6'] = 1
-elif row['age'] >= 70:
-training_dataset.loc[index, 'age7'] = 1
-
- split age column into multiple columns
-training_dataset['ed6less'] = 0
-for i in range(13):
-if i >= 6:
-training_dataset['ed' + str(i)] = 0
-training_dataset['ed12more'] = 0
-
-for index, row in training_dataset.iterrows():
-if row['education-num'] < 6:
-training_dataset.loc[index, 'ed6less'] = 1
-elif row['education-num'] == 6:
-training_dataset.loc[index, 'ed6'] = 1
-elif row['education-num'] == 7:
-training_dataset.loc[index, 'ed7'] = 1
-elif row['education-num'] == 8:
-"
-FE207218CE0D1148AA57D10ED8848CD7E6FFD87E_12,FE207218CE0D1148AA57D10ED8848CD7E6FFD87E,"training_dataset.loc[index, 'ed8'] = 1
-elif row['education-num'] == 9:
-training_dataset.loc[index, 'ed9'] = 1
-elif row['education-num'] == 10:
-training_dataset.loc[index, 'ed10'] = 1
-elif row['education-num'] == 11:
-training_dataset.loc[index, 'ed11'] = 1
-elif row['education-num'] == 12:
-training_dataset.loc[index, 'ed12'] = 1
-elif row['education-num'] > 12:
-training_dataset.loc[index, 'ed12more'] = 1
-
-training_dataset.drop(['age', 'education-num'], axis=1, inplace=True)
-
- move class column to be last column
-label = training_dataset['class']
-training_dataset.drop('class', axis=1, inplace=True)
-training_dataset['class'] = label
-
-return training_dataset
-
-def score(raw_dataset):
-try:
-
- create pandas dataframe from input
-raw_dataframe = create_dataframe(raw_dataset)
-
- reuse preprocess from training data handler
-processed_dataset = preprocess(raw_dataframe)
-
- drop class column
-processed_dataset.drop('class', inplace=True, axis='columns')
-
- create data payload for scoring
-fields = processed_dataset.columns.values.tolist()
-values = processed_dataset.values.tolist()
-scoring_dataset = {client.deployments.ScoringMetaNames.INPUT_DATA: [{'fields': fields, 'values': values}]}
-print(scoring_dataset)
-
- score data
-prediction = client.deployments.score('', scoring_dataset)
-return prediction
-
-except Exception as e:
-return {'error': repr(e)}
-
-return score
-"
-FE207218CE0D1148AA57D10ED8848CD7E6FFD87E_13,FE207218CE0D1148AA57D10ED8848CD7E6FFD87E,"2. Replace the variables in the previous Python function:
-
-
-
-* API KEY: Your IAM API key. To create a new API key, go to the [IBM Cloud website](https://cloud.ibm.com/), and click Create an IBM Cloud API key under Manage > Access(IAM) > API keys.
-* SPACE ID: ID of the Deployment space where the adult income deployment is running. To see your space ID, go to Deployment spaces > YOUR SPACE NAME > Manage. Copy the Space GUID.
-* MODEL DEPLOYMENT ID: Online deployment ID for the adult income model. To see your model ID, you can see it by clicking the model in your project. It is in both the address bar and the information pane.
-
-
-
-3. Get the Software Spec ID for Python 3.9. For list of other environments run client.software_specifications.list(). software_spec_id = client.software_specifications.get_id_by_name('default_py3.9')
-4. Store the Python function into your Watson Studio space.
-
- stores python function in space
-meta_props = {
-client.repository.FunctionMetaNames.NAME: 'Adult Income Scoring Function',
-client.repository.FunctionMetaNames.SOFTWARE_SPEC_ID: software_spec_id
-}
-stored_function = client.repository.store_function(meta_props=meta_props, function=adult_scoring_function)
-function_id = stored_function['metadata']
-5. Create an online deployment by using the Python function.
-
- create online deployment for fucntion
-meta_props = {
-client.deployments.ConfigurationMetaNames.NAME: ""Adult Income Online Scoring Function"",
-client.deployments.ConfigurationMetaNames.ONLINE: {}
-}
-online_deployment = client.deployments.create(function_id, meta_props=meta_props)
-function_deployment_id = online_deployment['metadata']
-6. Download the Adult Income data set. This is reused as our scoring data.
-
-"
-FE207218CE0D1148AA57D10ED8848CD7E6FFD87E_14,FE207218CE0D1148AA57D10ED8848CD7E6FFD87E,"import pandas as pd
-
- read adult csv dataset
-adult_csv = pd.read_csv('./adult.csv', dtype='category')
-
- use 10 random rows for scoring
-sample_dataset = adult_csv.sample(n=10)
-
-fields = sample_dataset.columns.values.tolist()
-values = sample_dataset.values.tolist()
-7. Score the adult income data by using the Python function created.
-
-raw_dataset = {client.deployments.ScoringMetaNames.INPUT_DATA: [{'fields': fields, 'values': values}]}
-
-prediction = client.deployments.score(function_deployment_id, raw_dataset)
-print(prediction)
-
-
-
-"
-FE207218CE0D1148AA57D10ED8848CD7E6FFD87E_15,FE207218CE0D1148AA57D10ED8848CD7E6FFD87E," Next steps
-
-[Creating your Federated Learning experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-start.html).
-
-Parent topic:[Federated Learning tutorial and samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fl-demo.html)
-"
-FD48879C34D316981B4F67C2B82C8179E0042F74_0,FD48879C34D316981B4F67C2B82C8179E0042F74," Credentials for prompting foundation models (IBM Cloud API key and IAM token)
-
-To prompt foundation models in IBM watsonx.ai programmatically, you need an IBM Cloud API key and sometimes an IBM Cloud IAM token.
-
-"
-FD48879C34D316981B4F67C2B82C8179E0042F74_1,FD48879C34D316981B4F67C2B82C8179E0042F74," IBM Cloud API key
-
-To use the [foundation models Python library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-python-lib.html), you need an IBM Cloud API key.
-
-"
-FD48879C34D316981B4F67C2B82C8179E0042F74_2,FD48879C34D316981B4F67C2B82C8179E0042F74,"Python pseudo-code
-
-my_credentials = {
-""url"" : ""https://us-south.ml.cloud.ibm.com"",
-""apikey"" :
-}
-...
-model = Model( ... credentials=my_credentials ... )
-
-You can create this API key by using multiple interfaces. For full instructions, see [Creating an API key](https://cloud.ibm.com/docs/account?topic=account-userapikey&interface=uicreate_user_key)
-
-"
-FD48879C34D316981B4F67C2B82C8179E0042F74_3,FD48879C34D316981B4F67C2B82C8179E0042F74," IBM Cloud IAM token
-
-When you click the View code button in the Prompt Lab, a curl command is displayed that you can call outside the Prompt Lab to submit the current prompt and parameters to the selected model and get a generated response. In the command, there is a placeholder for an IBM Cloud IAM token.
-
-For information about generating that access token, see: [Generating an IBM Cloud IAM token](https://cloud.ibm.com/docs/account?topic=account-iamtoken_from_apikey)
-
-Parent topic:[Foundation models Python library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-python-lib.html)
-"
-52507FE59C92EF1667E463B2C5D709C139673F4D,52507FE59C92EF1667E463B2C5D709C139673F4D," Foundation model terms of use in watsonx.ai
-
-Review these model terms of use to understand your responsibilities and risks with foundation models.
-
-By using any foundation model provided with this IBM offering, you acknowledge and understand that:
-
-
-
-* Some models that are included in this IBM offering are Non-IBM Products. Review the applicable model information for details on the third-party provider and license terms that apply. See [Supported foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html).
-* Third Party models have been trained with data that may contain biases and inaccuracies and could generate outputs containing misinformation, obscene or offensive language, or discriminatory content. Users should review and validate the outputs that are generated.
-* The output that is generated by all models is provided to augment, not replace, human decision-making by the Client.
-
-
-
-Parent topic:[Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-overview.html)
-"
-43785386700CF73E37A8F76ADC4EF9FB01EE0AEB_0,43785386700CF73E37A8F76ADC4EF9FB01EE0AEB," Generating accurate output
-
-Foundation models sometimes generate output that is not factually accurate. If factual accuracy is important for your project, set yourself up for success by learning how and why these models might sometimes get facts wrong and how you can ground generated output in correct facts.
-
-"
-43785386700CF73E37A8F76ADC4EF9FB01EE0AEB_1,43785386700CF73E37A8F76ADC4EF9FB01EE0AEB," Why foundation models get facts wrong
-
-Foundation models can get facts wrong for a few reasons:
-
-
-
-* Pre-training builds word associations, not facts
-* Pre-training data sets contain out-of-date facts
-* Pre-training data sets do not contain esoteric or domain-specific facts and jargon
-* Sampling decoding is more likely to stray from the facts
-
-
-
-"
-43785386700CF73E37A8F76ADC4EF9FB01EE0AEB_2,43785386700CF73E37A8F76ADC4EF9FB01EE0AEB," Pre-training builds word associations, not facts
-
-During pre-training, a foundation model builds up a vocabulary of words ([tokens](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tokens.html)) encountered in the pre-training data sets. Also during pre-training, statistical relationships between those words become encoded in the model weights.
-
-For example, ""Mount Everest"" often appears near ""tallest mountain in the world"" in many articles, books, speeches, and other common pre-training sources. As a result, a pre-trained model will probably correctly complete the prompt ""The tallest mountain in the world is "" with the output ""Mount Everest.""
-
-These word associations can make it seem that facts have been encoded into these models too. For very common knowledge and immutable facts, you might have good luck generating factually accurate output using pre-trained foundation models with simple prompts like the tallest-mountain example. However, it is a risky strategy to rely on only pre-trained word associations when using foundation models in applications where accuracy matters.
-
-"
-43785386700CF73E37A8F76ADC4EF9FB01EE0AEB_3,43785386700CF73E37A8F76ADC4EF9FB01EE0AEB," Pre-training data sets contain out-of-date facts
-
-Collecting pre-training data sets and performing pre-training runs can take a significant amount of time, sometimes months. If a model was pre-trained on a data set from several years ago, the model vocabulary and word associations encoded in the model weights won't reflect current world events or newly popular themes. For this reason, if you submit the prompt ""The most recent winner of the world cup of football (soccer) is "" to a model pre-trained on information a few years old, the generated output will be out of date.
-
-"
-43785386700CF73E37A8F76ADC4EF9FB01EE0AEB_4,43785386700CF73E37A8F76ADC4EF9FB01EE0AEB," Pre-training data sets do not contain esoteric or domain-specific facts and jargon
-
-Common foundation model pre-training data sets, such as [The Pile (Wikipedia)](https://en.wikipedia.org/wiki/The_Pile_%28dataset%29), contain hundreds of millions of documents. Given how famous Mount Everest is, it's reasonable to expect a foundation model to have encoded a relationship between ""tallest mountain in the world"" and ""Mount Everest"". However, if a phenomenon, person, or concept is mentioned in only a handful of articles, chances are slim that a foundation model would have any word associations about that topic encoded in its weights. Prompting a pre-trained model about information that was not in its pre-training data sets is unlikely to produce factually accurate generated output.
-
-"
-43785386700CF73E37A8F76ADC4EF9FB01EE0AEB_5,43785386700CF73E37A8F76ADC4EF9FB01EE0AEB," Sampling decoding is more likely to stray from the facts
-
-Decoding is the process a model uses to choose the words (tokens) in the generated output:
-
-
-
-* Greedy decoding always selects the token with the highest probability
-* Sampling decoding selects tokens pseudo-randomly from a probability distribution
-
-
-
-Greedy decoding generates output that is more predictable and more repetitive. Sampling decoding is more random, which feels ""creative"". If, based on pre-training data sets, the most likely words to follow ""The tallest mountain is "" are ""Mount Everest"", then greedy decoding could reliably generate that factually correct output, whereas sampling decoding might sometimes generate the name of some other mountain or something that's not even a mountain.
-
-"
-43785386700CF73E37A8F76ADC4EF9FB01EE0AEB_6,43785386700CF73E37A8F76ADC4EF9FB01EE0AEB," How to ground generated output in correct facts
-
-Rather than relying on only pre-trained word associations for factual accuracy, provide context in your prompt text.
-
-"
-43785386700CF73E37A8F76ADC4EF9FB01EE0AEB_7,43785386700CF73E37A8F76ADC4EF9FB01EE0AEB," Use context in your prompt text to establish facts
-
-When you prompt a foundation model to generate output, the words (tokens) in the generated output are influenced by the words in the model vocabulary and the words in the prompt text. You can use your prompt text to boost factually accurate word associations.
-
-"
-43785386700CF73E37A8F76ADC4EF9FB01EE0AEB_8,43785386700CF73E37A8F76ADC4EF9FB01EE0AEB," Example 1
-
-Here's a prompt to cause a model to complete a sentence declaring your favorite color:
-
-My favorite color is
-
-Given that only you know what your favorite color is, there's no way the model could reliably generate the correct output.
-
-Instead, a color will be selected from colors mentioned in the model's pre-training data:
-
-
-
-* If greedy decoding is used, whichever color appears most frequently with statements about favorite colors in pre-training content will be selected.
-* If sampling decoding is used, a color will be selected randomly from colors mentioned most often as favorites in the pre-training content.
-
-
-
-"
-43785386700CF73E37A8F76ADC4EF9FB01EE0AEB_9,43785386700CF73E37A8F76ADC4EF9FB01EE0AEB," Example 2
-
-Here's a prompt that includes context to establish the facts:
-
-I recently painted my kitchen yellow, which is my favorite color.
-
-My favorite color is
-
-If you prompt a model with text that includes factually accurate context like this, then the output the model generates will be more likely to be accurate.
-
-For more examples of including context in your prompt, see these samples:
-
-
-
-* [Sample 4a - Answer a question based on an article](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.htmlsample4a)
-* [Sample 4b - Answer a question based on an article](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.htmlsample4b)
-
-
-
-"
-43785386700CF73E37A8F76ADC4EF9FB01EE0AEB_10,43785386700CF73E37A8F76ADC4EF9FB01EE0AEB," Use less ""creative"" decoding
-
-When you include context with the needed facts in your prompt, using greedy decoding is likely to generate accurate output. If you need some variety in the output, you can experiment with sampling decoding with low values for parameters like Temperature, Top P, and Top K. However, using sampling decoding increases the risk of inaccurate output.
-
-"
-43785386700CF73E37A8F76ADC4EF9FB01EE0AEB_11,43785386700CF73E37A8F76ADC4EF9FB01EE0AEB," Retrieval-augmented generation
-
-The retrieval-augmented generation pattern scales out the technique of pulling context into prompts. If you have a knowledge base, such as process documentation in web pages, legal contracts in PDF files, a database of products for sale, a GitHub repository of C++ code files, or any other collection of information, you can use the retrieval-augmented generation pattern to generate factually accurate output based on the information in that knowledge base.
-
-Retrieval-augmented generation involves three basic steps:
-
-
-
-1. Search for relevant content in your knowledge base
-2. Pull the most relevant content into your prompt as context
-3. Send the combined prompt text to the model to generate output
-
-
-
-For more information, see: [Retrieval-augmented generation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-rag.html)
-
-Parent topic:[Prompt tips](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-tips.html)
-"
-E59B59312D1EB3B2BA78D7E78993883BB3784C2B_0,E59B59312D1EB3B2BA78D7E78993883BB3784C2B," Techniques for avoiding undesirable output
-
-Every foundation model has the potential to generate output that includes incorrect or even harmful content. Understand the types of undesirable output that can be generated, the reasons for the undesirable output, and steps that you can take to reduce the risk of harm.
-
-The foundation models that are available in IBM watsonx.ai can generate output that contains hallucinations, personal information, hate speech, abuse, profanity, and bias. The following techniques can help reduce the risk, but do not guarantee that generated output will be free of undesirable content.
-
-Find techniques to help you avoid the following types of undesirable content in foundation model output:
-
-
-
-* [Hallucinations](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-hallucinations.html?context=cdpaas&locale=enhallucinations)
-* [Personal information](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-hallucinations.html?context=cdpaas&locale=enpersonal-info)
-* [Hate speech, abuse, and profanity](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-hallucinations.html?context=cdpaas&locale=enhap)
-* [Bias](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-hallucinations.html?context=cdpaas&locale=enbias)
-
-
-
-"
-E59B59312D1EB3B2BA78D7E78993883BB3784C2B_1,E59B59312D1EB3B2BA78D7E78993883BB3784C2B," Hallucinations
-
-When a foundation model generates off-topic, repetitive, or incorrect content or fabricates details, that behavior is sometimes called hallucination.
-
-Off-topic hallucinations can happen because of pseudo-randomness in the decoding of the generated output. In the best cases, that randomness can result in wonderfully creative output. But randomness can also result in nonsense output that is not useful.
-
-The model might return hallucinations in the form of fabricated details when it is prompted to generate text, but is not given enough related text to draw upon. If you include correct details in the prompt, for example, the model is less likely to hallucinate and make up details.
-
-"
-E59B59312D1EB3B2BA78D7E78993883BB3784C2B_2,E59B59312D1EB3B2BA78D7E78993883BB3784C2B," Techniques for avoiding hallucinations
-
-To avoid hallucinations, test one or more of these techniques:
-
-
-
-* Choose a model with pretraining and fine-tuning that matches your domain and the task you are doing.
-* Provide context in your prompt.
-
-If you instruct a foundation model to generate text on a subject that is not common in its pretraining data and you don't add information about the subject to the prompt, the model is more likely to hallucinate.
-* Specify conservative values for the Min tokens and Max tokens parameters and specify one or more stop sequences.
-
-When you specify a high value for the Min tokens parameter, you can force the model to generate a longer response than the model would naturally return for a prompt. The model is more likely to hallucinate as it adds words to the output to reach the required limit.
-* For use cases that don't require much creativity in the generated output, use greedy decoding. If you prefer to use sampling decoding, be sure to specify conservative values for the temperature, top-p, and top-k parameters.
-* To reduce repetitive text in the generated output, try increasing the repetition penalty parameter.
-* If you see repetitive text in the generated output when you use greedy decoding, and if some creativity is acceptable for your use case, then try using sampling decoding instead. Be sure to set moderately low values for the temperature, top-p, and top-k parameters.
-* In your prompt, instruct the model what to do when it has no confident or high-probability answer.
-
-For example, in a question-answering scenario, you can include the instruction: If the answer is not in the article, say “I don't know”.
-
-
-
-"
-E59B59312D1EB3B2BA78D7E78993883BB3784C2B_3,E59B59312D1EB3B2BA78D7E78993883BB3784C2B," Personal information
-
-A foundation model's vocabulary is formed from words in its pretraining data. If pretraining data includes web pages that are scraped from the internet, the model's vocabulary might contain the following types of information:
-
-
-
-* Names of article authors
-* Contact information from company websites
-* Personal information from questions and comments that are posted in open community forums
-
-
-
-If you use a foundation model to generate text for part of an advertising email, the generated content might include contact information for another company!
-
-If you ask a foundation model to write a paper with citations, the model might include references that look legitimate but aren't. It might even attribute those made-up references to real authors from the correct field. A foundation model is likely to generate imitation citations, correct in form but not grounded in facts, because the models are good at stringing together words (including names) that have a high probability of appearing together. The fact that the model lends the output a touch of legitimacy, by including the names of real people as authors in citations, makes this form of hallucination compelling and believable. It also makes this form of hallucination dangerous. People can get into trouble if they believe that the citations are real. Not to mention the harm that can come to people who are listed as authors of works they did not write.
-
-"
-E59B59312D1EB3B2BA78D7E78993883BB3784C2B_4,E59B59312D1EB3B2BA78D7E78993883BB3784C2B," Techniques for excluding personal information
-
-To exclude personal information, try these techniques:
-
-
-
-* In your prompt, instruct the model to refrain from mentioning names, contact details, or personal information.
-
-For example, when you prompt a model to generate an advertising email, instruct the model to include your company name and phone number. Also, instruct the model to “include no other company or personal information”.
-* In your larger application, pipeline, or solution, post-process the content that is generated by the foundation model to find and remove personal information.
-
-
-
-"
-E59B59312D1EB3B2BA78D7E78993883BB3784C2B_5,E59B59312D1EB3B2BA78D7E78993883BB3784C2B," Hate speech, abuse, and profanity
-
-As with personal information, when pretraining data includes hateful or abusive terms or profanity, a foundation model that is trained on that data has those problematic terms in its vocabulary. If inappropriate language is in the model's vocabulary, the foundation model might generate text that includes undesirable content.
-
-When you use foundation models to generate content for your business, you must do the following things:
-
-
-
-* Recognize that this kind of output is always possible.
-* Take steps to reduce the likelihood of triggering the model to produce this kind of harmful output.
-* Build human review and verification processes into your solutions.
-
-
-
-"
-E59B59312D1EB3B2BA78D7E78993883BB3784C2B_6,E59B59312D1EB3B2BA78D7E78993883BB3784C2B," Techniques for reducing the risk of hate speech, abuse, and profanity
-
-To avoid hate speech, abuse, and profanity, test one or more of these techniques:
-
-
-
-* In the Prompt Lab, set the AI guardrails switch to On. When this feature is enabled, any sentence in the input prompt or generated output that contains harmful language is replaced with a message that says that potentially harmful text was removed.
-* Do not include hate speech, abuse, or profanity in your prompt to prevent the model from responding in kind.
-* In your prompt, instruct the model to use clean language.
-
-For example, depending on the tone you need for the output, instruct the model to use “formal”, “professional”, “PG”, or “friendly” language.
-* In your larger application, pipeline, or solution, post-process the content that is generated by the foundation model to remove undesirable content.
-
-
-
-"
-E59B59312D1EB3B2BA78D7E78993883BB3784C2B_7,E59B59312D1EB3B2BA78D7E78993883BB3784C2B," Reducing the risk of bias in model output
-
-During pretraining, a foundation model learns the statistical probability that certain words follow other words based on how those words appear in the training data. Any bias in the training data is trained into the model.
-
-For example, if the training data more frequently refers to doctors as men and nurses as women, that bias is likely to be reflected in the statistical relationships between those words in the model. As a result, the model is likely to generate output that more frequently refers to doctors as men and nurses as women. Sometimes, people believe that algorithms can be more fair and unbiased than humans because the algorithms are “just using math to decide”. But bias in training data is reflected in content that is generated by foundation models that are trained on that data.
-
-"
-E59B59312D1EB3B2BA78D7E78993883BB3784C2B_8,E59B59312D1EB3B2BA78D7E78993883BB3784C2B," Techniques for reducing bias
-
-It is difficult to debias output that is generated by a foundation model that was pretrained on biased data. However, you might improve results by including content in your prompt to counter bias that might apply to your use case.
-
-For example, instead of instructing a model to “list heart attack symptoms”, you might instruct the model to “list heart attack symptoms, including symptoms common for men and symptoms common for women”.
-
-Parent topic:[Prompt tips](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-tips.html)
-"
-120CAE8361AE4E0B6FE4D6F0D32EEE9517F11190_0,120CAE8361AE4E0B6FE4D6F0D32EEE9517F11190," Choosing a foundation model in watsonx.ai
-
-To determine which models might work well for your project, consider model attributes, such as license, pretraining data, model size, and how the model was fine-tuned. After you have a short list of models that best fit your use case, systematically test the models to see which ones consistently return the results you want.
-
-
-
-Table 1. Considerations for choosing a foundation model in IBM watsonx.ai
-
- Model attribute Considerations
-
- Context length Sometimes called context window length, context window, or maximum sequence length, context length is the maximum allowed value for the number of tokens in the input prompt plus the number of tokens in the generated output. When you generate output with models in watsonx.ai, the number of tokens in the generated output is limited by the Max tokens parameter. For some models, the token length of model output for Lite plans is limited by a dynamic, model-specific, environment-driven upper limit.
- Cost The cost of using foundation models is measured in resource units. The price of a resource unit is based on the rate of the billing class for the foundation model.
- Fine-tuning After being pretrained, many foundation models are fine-tuned for specific tasks, such as classification, information extraction, summarization, responding to instructions, answering questions, or participating in a back-and-forth dialog chat. A model that was fine-tuned on tasks similar to your planned use typically perform better with zero-shot prompts than models that were not fine-tuned in a way that fits your use case. One way to improve results for a fine-tuned model is to structure your prompt in the same format as prompts in the data sets that were used to fine-tune that model.
- Instruction-tuned Instruction-tuned means that the model was fine-tuned with prompts that include an instruction. When a model is instruction-tuned, it typically responds well to prompts that have an instruction even if those prompts don't have examples.
-"
-120CAE8361AE4E0B6FE4D6F0D32EEE9517F11190_1,120CAE8361AE4E0B6FE4D6F0D32EEE9517F11190," IP indemnity In addition to license terms, review the intellectual property indemnification policy for the model. Some foundation model providers require you to exempt them from liability for any IP infringement that might result from the use of their AI models. For information about contractual protections related to IBM watsonx.ai, see the [IBM watsonx.ai service description](https://www.ibm.com/support/customer/csol/terms/?id=i126-7747).
- License In general, each foundation model comes with a different license that limits how the model can be used. Review model licenses to make sure that you can use a model for your planned solution.
- Model architecture The architecture of the model influences how the model behaves. A transformer-based model typically has one of the following architectures: * Encoder-only: Understands input text at the sentence level by transforming input sequences into representational vectors called embeddings. Common tasks for encoder-only models include classification and entity extraction. * Decoder-only: Generates output text word-by-word by inference from the input sequence. Common tasks for decoder-only models include generating text and answering questions. * Encoder-decoder: Both understands input text and generates output text based on the input text. Common tasks for encoder-decoder models include translation and summarization.
- Regional availability You can work with models that are available in the same IBM Cloud regional data center as your watsonx services.
- Supported natural languages Many foundation models work well in English only. But some model creators include multiple languages in the pretraining data sets to fine-tune their model on tasks in different languages, and to test their model's performance in multiple languages. If you plan to build a solution for a global audience or a solution that does translation tasks, look for models that were created with multilingual support in mind.
-"
-120CAE8361AE4E0B6FE4D6F0D32EEE9517F11190_2,120CAE8361AE4E0B6FE4D6F0D32EEE9517F11190," Supported programming languages Not all foundation models work well for programming use cases. If you are planning to create a solution that summarizes, converts, generates, or otherwise processes code, review which programming languages were included in a model's pretraining data sets and fine-tuning activities to determine whether that model is a fit for your use case.
-
-
-
-"
-120CAE8361AE4E0B6FE4D6F0D32EEE9517F11190_3,120CAE8361AE4E0B6FE4D6F0D32EEE9517F11190," Learn more
-
-
-
-* [Tokens and tokenization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tokens.html)
-* [Model parameters for prompting](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-model-parameters.html)
-* [Prompt tips](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-tips.html)
-* [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html)
-* [Regional availability for foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/regional-datactr.htmldata-centers)
-
-
-
-Parent topic:[Supported foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html)
-"
-42AE491240EF740E6A8C5CF32B817E606F554E49_0,42AE491240EF740E6A8C5CF32B817E606F554E49," Foundation model parameters: decoding and stopping criteria
-
-You can specify parameters to control how the model generates output in response to your prompt. This topic lists parameters that you can control in the Prompt Lab.
-
-"
-42AE491240EF740E6A8C5CF32B817E606F554E49_1,42AE491240EF740E6A8C5CF32B817E606F554E49," Decoding
-
-Decoding is the process a model uses to choose the tokens in the generated output.
-
-Greedy decoding selects the token with the highest probability at each step of the decoding process. Greedy decoding produces output that closely matches the most common language in the model's pretraining data and in your prompt text, which is desirable in less creative or fact-based use cases. A weakness of greedy decoding is that it can cause repetitive loops in the generated output.
-
-Sampling decoding is more variable, more random than greedy decoding. Variability and randomness is desirable in creative use cases. However, with greater variability comes the risk of nonsensical output. Sampling decoding selects tokens from a probability distribution at each step:
-
-
-
-* Temperature sampling refers to selecting a high- or low-probability next token.
-* Top-k sampling refers to selecting the next token randomly from a specified number, k, of tokens with the highest probabilities.
-* Top-p sampling refers to selecting the next token randomly from the smallest set of tokens for which the cumulative probability exceeds a specified value, p. (Top-p sampling is also called nucleus sampling.)
-
-
-
-You can specify values for both Top K and Top P. When both parameters are used, Top K is applied first. When Top P is computed, any tokens below the cutoff set by Top K are considered to have a probability of zero.
-
-
-
-Table 1. Supported values, defaults, and usage notes for sampling decoding
-
- Parameter Supported values Default Use
-
- Temperature Floating-point number in the range 0.0 (same as greedy decoding) to 2.0 (maximum creativity) 0.7 Higher values lead to greater variability
- Top K Integer in the range 1 to 100 50 Higher values lead to greater variability
- Top P Floating-point number in the range 0.0 to 1.0 1.0 Higher values lead to greater variability
-
-
-
-"
-42AE491240EF740E6A8C5CF32B817E606F554E49_2,42AE491240EF740E6A8C5CF32B817E606F554E49," Random seed
-
-When you submit the same prompt to a model multiple times with sampling decoding, you'll usually get back different generated text each time. This variability is the result of intentional pseudo-randomness built into the decoding process. Random seed refers to the number used to generate that pseudo-random behavior.
-
-
-
-* Supported values: Integer in the range 1 to 4 294 967 295
-* Default: Generated based on the current server system time
-* Use: To produce repeatable results, set the same random seed value every time.
-
-
-
-"
-42AE491240EF740E6A8C5CF32B817E606F554E49_3,42AE491240EF740E6A8C5CF32B817E606F554E49," Repetition penalty
-
-If you notice the result generated for your chosen prompt, model, and parameters consistently contains repetitive text, you can try adding a repetition penalty.
-
-
-
-* Supported values: Floating-point number in the range 1.0 (no penalty) to 2.0 (maximum penalty)
-* Default: 1.0
-* Use: The higher the penalty, the less likely it is that the result will include repeated text.
-
-
-
-"
-42AE491240EF740E6A8C5CF32B817E606F554E49_4,42AE491240EF740E6A8C5CF32B817E606F554E49," Stopping criteria
-
-You can affect the length of the output generated by the model in two ways: specifying stop sequences and setting Min tokens and Max tokens. Text generation stops after the model considers the output to be complete, a stop sequence is generated, or the maximum token limit is reached.
-
-"
-42AE491240EF740E6A8C5CF32B817E606F554E49_5,42AE491240EF740E6A8C5CF32B817E606F554E49," Stop sequences
-
-A stop sequence is a string of one or more characters. If you specify stop sequences, the model will automatically stop generating output after one of the stop sequences that you specify appears in the generated output. For example, one way to cause a model to stop generating output after just one sentence is to specify a period as a stop sequence. That way, after the model generates the first sentence and ends it with a period, output generation stops. Choosing effective stop sequences depends on your use case and the nature of the generated output you expect.
-
-Supported values: 0 to 6 strings, each no longer than 40 tokens
-
-Default: No stop sequence
-
-"
-42AE491240EF740E6A8C5CF32B817E606F554E49_6,42AE491240EF740E6A8C5CF32B817E606F554E49,"Use:
-
-
-
-* Stop sequences are ignored until after the number of tokens that are specified in the Min tokens parameter are generated.
-* If your prompt includes examples of input-output pairs, ensure the sample output in the examples ends with one of the stop sequences.
-
-
-
-"
-42AE491240EF740E6A8C5CF32B817E606F554E49_7,42AE491240EF740E6A8C5CF32B817E606F554E49," Minimum and maximum new tokens
-
-If you're finding the output from the model is too short or too long, try adjusting the parameters that control the number of generated tokens:
-
-
-
-* The Min tokens parameter controls the minimum number of tokens in the generated output
-* The Max tokens parameter controls the maximum number of tokens in the generated output
-
-
-
-The maximum number of tokens that are allowed in the output differs by model. For more information, see the Maximum tokens information in [Supported foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html).
-
-"
-42AE491240EF740E6A8C5CF32B817E606F554E49_8,42AE491240EF740E6A8C5CF32B817E606F554E49,"Defaults:
-
-
-
-* Min tokens: 0
-* Max tokens: 20
-
-
-
-"
-42AE491240EF740E6A8C5CF32B817E606F554E49_9,42AE491240EF740E6A8C5CF32B817E606F554E49,"Use:
-
-
-
-* Min tokens must be less than or equal to Max tokens.
-* Because the cost of using foundation models in IBM watsonx.ai is based on use, which is partly related to the number of tokens that are generated, specifying the lowest value for Max tokens that works for your use case is a cost-saving strategy.
-* For Lite plans, output stops being generated after a dynamic, model-specific, environment-driven upper limit is reached, even if the value specified with the Max tokens parameter is not reached. To determine the upper limit, see the Tokens limits section for the model in [Supported foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html) or call the [get_details](https://ibm.github.io/watson-machine-learning-sdk/foundation_models.htmlibm_watson_machine_learning.foundation_models.Model.get_details) function of the foundation models Python library.
-
-
-
-Parent topic:[Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html)
-"
-B2593108FA446C4B4B0EF5ADC2CD5D9585B0B63C_0,B2593108FA446C4B4B0EF5ADC2CD5D9585B0B63C," Foundation models built by IBM
-
-In IBM watsonx.ai, you can use IBM foundation models that are built with integrity and designed for business.
-
-The Granite family of foundation models includes decoder-only models that can efficiently predict and generate language in English.
-
-The models were built with trusted data that has the following characteristics:
-
-
-
-* Sourced from quality data sets in domains such as finance (SEC Filings), law (Free Law), technology (Stack Exchange), science (arXiv, DeepMind Mathematics), literature (Project Gutenberg (PG-19)), and more.
-* Compliant with rigorous IBM data clearance and governance standards.
-* Scrubbed of hate, abuse, and profanity, data duplication, and blocklisted URLs, among other things.
-
-
-
-IBM is committed to building AI that is open, trusted, targeted, and empowering. For more information about contractual protections related to the IBM Granite foundation models, see the [IBM watsonx.ai service description](https://www.ibm.com/support/customer/csol/terms/?id=i126-7747) and [model license](https://www.ibm.com/support/customer/csol/terms/?id=i126-6883).
-
-The following Granite models are available in watsonx.ai today:
-
-granite-13b-chat-v2 : General use model that is optimized for dialogue use cases. This version of the model is able to generate longer, higher-quality responses with a professional tone. The model can recognize mentions of people and can detect tone and sentiment.
-
-granite-13b-chat-v1 : General use model that is optimized for dialogue use cases. Useful for virtual agent and chat applications that engage in conversation with users.
-
-granite-13b-instruct-v2 : General use model. This version of the model is optimized for classification, extraction, and summarization tasks. The model can recognize mentions of people and can summarize longer inputs.
-
-"
-B2593108FA446C4B4B0EF5ADC2CD5D9585B0B63C_1,B2593108FA446C4B4B0EF5ADC2CD5D9585B0B63C,"granite-13b-instruct-v1 : General use model. The model was tuned on relevant business tasks, such as detecting sentiment from earnings calls transcripts, extracting credit risk assessments, summarizing financial long-form text, and answering financial or insurance-related questions.
-
-To learn more about the models, read the following resources:
-
-
-
-* [Model information](https://www.ibm.com/blog/watsonx-tailored-generative-ai/)
-* [Research paper](https://www.ibm.com/downloads/cas/X9W4O6BM)
-* [granite-13b-instruct-v2 model card](https://dataplatform.cloud.ibm.com/wx/samples/models/ibm/granite-13b-instruct-v2?context=wx)
-* [granite-13b-instruct-v1 model card](https://dataplatform.cloud.ibm.com/wx/samples/models/ibm/granite-13b-instruct-v1?context=wx)
-* [granite-13b-chat-v2 model card](https://dataplatform.cloud.ibm.com/wx/samples/models/ibm/granite-13b-chat-v2?context=wx)
-* [granite-13b-chat-v1 model card](https://dataplatform.cloud.ibm.com/wx/samples/models/ibm/granite-13b-chat-v1?context=wx)
-
-
-
-To get started with the models, try these samples:
-
-
-
-* [Prompt Lab sample: Extract details from a complaint](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.htmlsample2a)
-* [Prompt Lab sample: Generate a numbered list on a particular theme](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.htmlsample3c)
-"
-B2593108FA446C4B4B0EF5ADC2CD5D9585B0B63C_2,B2593108FA446C4B4B0EF5ADC2CD5D9585B0B63C,"* [prompt Lab sample: Answer a question based on a document](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.htmlsample4c)
-* [Prompt Lab sample: Answer general knowledge questions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.htmlsample4d)
-* [Prompt Lab sample: Converse in a dialogue](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.htmlsample7a)
-
-
-
-
-
-* [Sample Python notebook: Use watsonx and a Granite model to analyze car rental customer satisfaction from text](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/61c1e967-8d10-44bb-a846-cc1f27e9e69a?context=wx)
-
-
-
-Parent topic:[Supported foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html)
-"
-5B37710FE7BBD6EFB842FEB7B49B036302E18F81_0,5B37710FE7BBD6EFB842FEB7B49B036302E18F81," Supported foundation models available with watsonx.ai
-
-A collection of open source and IBM foundation models are deployed in IBM watsonx.ai.
-
-The following models are available in watsonx.ai:
-
-
-
-* flan-t5-xl-3b
-* flan-t5-xxl-11b
-* flan-ul2-20b
-* gpt-neox-20b
-* granite-13b-chat-v2
-* granite-13b-chat-v1
-* granite-13b-instruct-v2
-* granite-13b-instruct-v1
-* llama-2-13b-chat
-* llama-2-70b-chat
-* mpt-7b-instruct2
-* mt0-xxl-13b
-* starcoder-15.5b
-
-
-
-You can prompt these models in the Prompt Lab or programmatically by using the Python library.
-
-"
-5B37710FE7BBD6EFB842FEB7B49B036302E18F81_1,5B37710FE7BBD6EFB842FEB7B49B036302E18F81," Summary of models
-
-To understand how the model provider, instruction tuning, token limits, and other factors can affect which model you choose, see [Choosing a model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-model-choose.html).
-
-The following table lists the supported foundation models that IBM provides.
-
-
-
-Table 1. IBM foundation models in watsonx.ai
-
- Model name Provider Instruction-tuned Billing class Maximum tokens Context (input + output) More information
-
- [granite-13b-chat-v2](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html?context=cdpaas&locale=engranite-13b-chat) IBM Yes Class 2 8192 * [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/ibm/granite-13b-chat-v2?context=wx) * [Website](https://www.ibm.com/blog/watsonx-tailored-generative-ai/) * [Research paper](https://www.ibm.com/downloads/cas/X9W4O6BM)
- [granite-13b-chat-v1](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html?context=cdpaas&locale=engranite-13b-chat-v1) IBM Yes Class 2 8192 * [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/ibm/granite-13b-chat-v1?context=wx) * [Website](https://www.ibm.com/blog/watsonx-tailored-generative-ai/) * [Research paper](https://www.ibm.com/downloads/cas/X9W4O6BM)
-"
-5B37710FE7BBD6EFB842FEB7B49B036302E18F81_2,5B37710FE7BBD6EFB842FEB7B49B036302E18F81," [granite-13b-instruct-v2](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html?context=cdpaas&locale=engranite-13b-instruct) IBM Yes Class 2 8192 * [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/ibm/granite-13b-instruct-v2?context=wx) * [Website](https://www.ibm.com/blog/watsonx-tailored-generative-ai/) * [Research paper](https://www.ibm.com/downloads/cas/X9W4O6BM)
- [granite-13b-instruct-v1](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html?context=cdpaas&locale=engranite-13b-instruct-v1) IBM Yes Class 2 8192 * [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/ibm/granite-13b-instruct-v1?context=wx) * [Website](https://www.ibm.com/blog/watsonx-tailored-generative-ai/) * [Research paper](https://www.ibm.com/downloads/cas/X9W4O6BM)
-
-
-
-The following table lists the supported foundation models that third parties provide through Hugging Face.
-
-
-
-Table 2. Supported third party foundation models in watsonx.ai
-
- Model name Provider Instruction-tuned Billing class Maximum tokens Context (input + output) More information
-
-"
-5B37710FE7BBD6EFB842FEB7B49B036302E18F81_3,5B37710FE7BBD6EFB842FEB7B49B036302E18F81," [flan-t5-xl-3b](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html?context=cdpaas&locale=enflan-t5-xl-3b) Google Yes Class 1 4096 * [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/google/flan-t5-xl?context=wx) * [Research paper](https://arxiv.org/abs/2210.11416)
- [flan-t5-xxl-11b](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html?context=cdpaas&locale=enflan-t5-xxl-11b) Google Yes Class 2 4096 * [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/google/flan-t5-xxl?context=wx) * [Research paper](https://arxiv.org/abs/2210.11416)
- [flan-ul2-20b](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html?context=cdpaas&locale=enflan-ul2-20b) Google Yes Class 3 4096 * [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/google/flan-ul2?context=wx) * [UL2 research paper](https://arxiv.org/abs/2205.05131v1) * [Flan research paper](https://arxiv.org/abs/2210.11416)
-"
-5B37710FE7BBD6EFB842FEB7B49B036302E18F81_4,5B37710FE7BBD6EFB842FEB7B49B036302E18F81," [gpt-neox-20b](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html?context=cdpaas&locale=engpt-neox-20b) EleutherAI No Class 3 8192 * [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/eleutherai/gpt-neox-20b?context=wx) * [Research paper](https://arxiv.org/abs/2204.06745)
- [llama-2-13b-chat](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html?context=cdpaas&locale=enllama-2) Meta Yes Class 1 4096 * [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/meta-llama/llama-2-13b-chat?context=wx) * [Research paper](https://arxiv.org/abs/2307.09288)
- [llama-2-70b-chat](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html?context=cdpaas&locale=enllama-2) Meta Yes Class 2 4096 * [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/meta-llama/llama-2-70b-chat?context=wx) * [Research paper](https://arxiv.org/abs/2307.09288)
-"
-5B37710FE7BBD6EFB842FEB7B49B036302E18F81_5,5B37710FE7BBD6EFB842FEB7B49B036302E18F81," [mpt-7b-instruct2](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html?context=cdpaas&locale=enmpt-7b-instruct2) Mosaic ML Yes Class 1 2048 * [Model card](https://huggingface.co/ibm/mpt-7b-instruct2) * [Website](https://www.mosaicml.com/blog/mpt-7b)
- [mt0-xxl-13b](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html?context=cdpaas&locale=enmt0-xxl-13b) BigScience Yes Class 2 4096 * [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/bigscience/mt0-xxl?context=wx) * [Research paper](https://arxiv.org/abs/2211.01786)
- [starcoder-15.5b](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html?context=cdpaas&locale=enstarcoder-15.5b) BigCode No Class 2 8192 * [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/bigcode/starcoder?context=wx) * [Research paper](https://arxiv.org/abs/2305.06161)
-
-
-
-
-
-* For a list of which models are provided in each regional data center, see [Regional availability of foundation model](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/regional-datactr.htmldata-centers).
-"
-5B37710FE7BBD6EFB842FEB7B49B036302E18F81_6,5B37710FE7BBD6EFB842FEB7B49B036302E18F81,"* For information about the billing classes and rate limiting, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.htmlru-metering).
-
-
-
-"
-5B37710FE7BBD6EFB842FEB7B49B036302E18F81_7,5B37710FE7BBD6EFB842FEB7B49B036302E18F81," Foundation model details
-
-The available foundation models support a range of use cases for both natural languages and programming languages. To see the types of tasks that these models can do, review and try the [sample prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html).
-
-"
-5B37710FE7BBD6EFB842FEB7B49B036302E18F81_8,5B37710FE7BBD6EFB842FEB7B49B036302E18F81," flan-t5-xl-3b
-
-The flan-t5-xl-3b model is provided by Google on Hugging Face. This model is based on the pretrained text-to-text transfer transformer (T5) model and uses instruction fine-tuning methods to achieve better zero- and few-shot performance. The model is also fine-tuned with chain-of-thought data to improve its ability to perform reasoning tasks.
-
-Note: This foundation model can be tuned by using the Tuning Studio.
-
-Usage : General use with zero- or few-shot prompts.
-
-Cost : Class 1. For pricing details, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html).
-
-Try it out : [Sample prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html)
-
-Size : 3 billion parameters
-
-Token limits : Context window length (input + output): 4096
-
-: Note: Lite plan output is limited to 700
-
-Supported natural languages : English, German, French
-
-Instruction tuning information : The model was fine-tuned on tasks that involve multiple-step reasoning from chain-of-thought data in addition to traditional natural language processing tasks.
-
-Details about the training data sets used are published.
-
-Model architecture : Encoder-decoder
-
-License : [Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0.txt)
-
-Learn more : [Research paper](https://arxiv.org/abs/2210.11416) : [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/google/flan-t5-xl?context=wx) : [Sample notebook: Tune a model to classify CFPB documents in watsonx](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/bf57e8896f3e50c638b5a378780f7502)
-
-"
-5B37710FE7BBD6EFB842FEB7B49B036302E18F81_9,5B37710FE7BBD6EFB842FEB7B49B036302E18F81," flan-t5-xxl-11b
-
-The flan-t5-xxl-11b model is provided by Google on Hugging Face. This model is based on the pretrained text-to-text transfer transformer (T5) model and uses instruction fine-tuning methods to achieve better zero- and few-shot performance. The model is also fine-tuned with chain-of-thought data to improve its ability to perform reasoning tasks.
-
-Usage : General use with zero- or few-shot prompts.
-
-Cost : Class 2. For pricing details, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html).
-
-Try it out : [Sample prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html) : [Sample notebook: Use watsonx and Google flan-t5-xxl to generate advertising copy](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/73243d67b49a6e05f4cdf351b4b35e21?context=wx) : [Sample notebook: Use watsonx and LangChain to make a series of calls to a language model](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/c3dbf23a-9a56-4c4b-8ce5-5707828fc981?context=wx)
-
-Size : 11 billion parameters
-
-Token limits : Context window length (input + output): 4096
-
-: Note: Lite plan output is limited to 700
-
-Supported natural languages : English, German, French
-
-Instruction tuning information : The model was fine-tuned on tasks that involve multiple-step reasoning from chain-of-thought data in addition to traditional natural language processing tasks. Details about the training data sets used are published.
-
-Model architecture : Encoder-decoder
-
-License : [Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0.txt)
-
-"
-5B37710FE7BBD6EFB842FEB7B49B036302E18F81_10,5B37710FE7BBD6EFB842FEB7B49B036302E18F81,"Learn more : [Research paper](https://arxiv.org/abs/2210.11416) : [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/google/flan-t5-xxl?context=wx)
-
-"
-5B37710FE7BBD6EFB842FEB7B49B036302E18F81_11,5B37710FE7BBD6EFB842FEB7B49B036302E18F81," flan-ul2-20b
-
-The flan-ul2-20b model is provided by Google on Hugging Face. This model was trained by using the Unifying Language Learning Paradigms (UL2). The model is optimized for language generation, language understanding, text classification, question answering, common sense reasoning, long text reasoning, structured-knowledge grounding, and information retrieval, in-context learning, zero-shot prompting, and one-shot prompting.
-
-Usage : General use with zero- or few-shot prompts.
-
-Cost : Class 3. For pricing details, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html).
-
-"
-5B37710FE7BBD6EFB842FEB7B49B036302E18F81_12,5B37710FE7BBD6EFB842FEB7B49B036302E18F81,"Try it out : [Sample prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html) : [Sample notebook: Use watsonx to summarize cybersecurity documents](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/1cb62d6a5847b8ed5cdb6531a08e9104?context=wx) : [Sample notebook: Use watsonx and LangChain to answer questions by using retrieval-augmented generation (RAG)](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/d3a5f957-a93b-46cd-82c1-c8d37d4f62c6?context=wx&audience=wdp) : [Sample notebook: Use watsonx, Elasticsearch, and LangChain to answer questions (RAG)](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/ebeb9fc0-9844-4838-aff8-1fa1997d0c13?context=wx&audience=wdp) : [Sample notebook: Use watsonx, and Elasticsearch Python SDK to answer questions (RAG)](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/bdbc8ad4-9c1f-460f-99ee-5c3a1f374fa7?context=wx&audience=wdp)
-
-Size : 20 billion parameters
-
-Token limits : Context window length (input + output): 4096
-
-: Note: Lite plan output is limited to 700
-
-Supported natural languages : English
-
-Instruction tuning information : The flan-ul2-20b model is pretrained on the colossal, cleaned version of Common Crawl's web crawl corpus. The model is fine-tuned with multiple pretraining objectives to optimize it for various natural language processing tasks. Details about the training data sets used are published.
-
-Model architecture : Encoder-decoder
-
-"
-5B37710FE7BBD6EFB842FEB7B49B036302E18F81_13,5B37710FE7BBD6EFB842FEB7B49B036302E18F81,"License : [Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0.txt)
-
-Learn more : [Unifying Language Learning (UL2) research paper](https://arxiv.org/abs/2205.05131v1) : [Fine-tuned Language Model (Flan) research paper](https://arxiv.org/abs/2210.11416)
-
-: [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/google/flan-ul2?context=wx)
-
-"
-5B37710FE7BBD6EFB842FEB7B49B036302E18F81_14,5B37710FE7BBD6EFB842FEB7B49B036302E18F81," gpt-neox-20b
-
-The gpt-neox-20b model is provided by EleutherAI on Hugging Face. This model is an autoregressive language model that is trained on diverse English-language texts to support general-purpose use cases. GPT-NeoX-20B has not been fine-tuned for downstream tasks.
-
-Usage : Works best with few-shot prompts. Accepts special characters, which can be used for generating structured output. : The data set used for training contains profanity and offensive text. Be sure to curate any output from the model before using it in an application.
-
-Cost : Class 3. For pricing details, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html).
-
-Try it out : [Sample prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html)
-
-Size : 20 billion parameters
-
-Token limits : Context window length (input + output): 8192
-
-: Note: Lite plan output is limited to 700
-
-Supported natural languages : English
-
-Data used during training : The gpt-neox-20b model was trained on the Pile. For more information about the Pile, see [The Pile: An 800GB Dataset of Diverse Text for Language Modeling](https://arxiv.org/abs/2101.00027). The Pile was not deduplicated before being used for training.
-
-Model architecture : Decoder
-
-License : [Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0.txt)
-
-Learn more : [Research paper](https://arxiv.org/abs/2204.06745)
-
-: [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/eleutherai/gpt-neox-20b?context=wx)
-
-"
-5B37710FE7BBD6EFB842FEB7B49B036302E18F81_15,5B37710FE7BBD6EFB842FEB7B49B036302E18F81," granite-13b-chat-v2
-
-The granite-13b-chat-v2 model is provided by IBM. This model is optimized for dialogue use cases and works well with virtual agent and chat applications.
-
-Usage : Generates dialogue output like a chatbot. Uses a model-specific prompt format. Includes a keyword in its output that can be used as a stop sequence to produce succinct answers.
-
-Cost : Class 2. For pricing details, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html).
-
-Try it out : [Sample prompt](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.htmlsample7a)
-
-Size : 13 billion parameters
-
-Token limits : Context window length (input + output): 8192
-
-Supported natural languages : English
-
-Instruction tuning information : The Granite family of models is trained on enterprise-relevant data sets from five domains: internet, academic, code, legal, and finance. Data used to train the models first undergoes IBM data governance reviews and is filtered of text that is flagged for hate, abuse, or profanity by the IBM-developed HAP filter. IBM shares information about the training methods and data sets used.
-
-Model architecture : Decoder
-
-License : [Terms of use](https://www.ibm.com/support/customer/csol/terms/?id=i126-6883) : For more information about contractual protections related to IBM watsonx.ai, see the [IBM watsonx.ai service description](https://www.ibm.com/support/customer/csol/terms/?id=i126-7747).
-
-Learn more : [Model information](https://www.ibm.com/blog/watsonx-tailored-generative-ai/) : [Research paper](https://www.ibm.com/downloads/cas/X9W4O6BM)
-
-"
-5B37710FE7BBD6EFB842FEB7B49B036302E18F81_16,5B37710FE7BBD6EFB842FEB7B49B036302E18F81,": [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/ibm/granite-13b-chat-v2?context=wx)
-
-"
-5B37710FE7BBD6EFB842FEB7B49B036302E18F81_17,5B37710FE7BBD6EFB842FEB7B49B036302E18F81," granite-13b-chat-v1
-
-The granite-13b-chat-v1 model is provided by IBM. This model is optimized for dialogue use cases and works well with virtual agent and chat applications.
-
-Usage : Generates dialogue output like a chatbot. Uses a model-specific prompt format. Includes a keyword in its output that can be used as a stop sequence to produce succinct answers.
-
-Cost : Class 2. For pricing details, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html).
-
-Try it out : [Sample prompt](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.htmlsample7a)
-
-Size : 13 billion parameters
-
-Token limits : Context window length (input + output): 8192
-
-Supported natural languages : English
-
-Instruction tuning information : The Granite family of models is trained on enterprise-relevant data sets from five domains: internet, academic, code, legal, and finance. Data used to train the models first undergoes IBM data governance reviews and is filtered of text that is flagged for hate, abuse, or profanity by the IBM-developed HAP filter. IBM shares information about the training methods and data sets used.
-
-Model architecture : Decoder
-
-License : [Terms of use](https://www.ibm.com/support/customer/csol/terms/?id=i126-6883) : For more information about contractual protections related to IBM watsonx.ai, see the [IBM watsonx.ai service description](https://www.ibm.com/support/customer/csol/terms/?id=i126-7747).
-
-Learn more : [Model information](https://www.ibm.com/blog/watsonx-tailored-generative-ai/) : [Research paper](https://www.ibm.com/downloads/cas/X9W4O6BM)
-
-"
-5B37710FE7BBD6EFB842FEB7B49B036302E18F81_18,5B37710FE7BBD6EFB842FEB7B49B036302E18F81,": [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/ibm/granite-13b-chat-v1?context=wx)
-
-"
-5B37710FE7BBD6EFB842FEB7B49B036302E18F81_19,5B37710FE7BBD6EFB842FEB7B49B036302E18F81," granite-13b-instruct-v2
-
-The granite-13b-instruct-v2 model is provided by IBM. This model was trained with high-quality finance data, and is a top-performing model on finance tasks. Financial tasks evaluated include: providing sentiment scores for stock and earnings call transcripts, classifying news headlines, extracting credit risk assessments, summarizing financial long-form text, and answering financial or insurance-related questions.
-
-Usage : Supports extraction, summarization, and classification tasks. Generates useful output for finance-related tasks. Uses a model-specific prompt format. Accepts special characters, which can be used for generating structured output.
-
-Cost : Class 2. For pricing details, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html).
-
-Try it out : [Sample 3b: Generate a numbered list on a particular theme](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.htmlsample3b) : [Sample 4c: Answer a question based on a document](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.htmlsample4c) : [Sample 4d: Answer general knowledge questions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.htmlsample4d)
-
-: [Sample notebook: Use watsonx and ibm/granite-13b-instruct to analyze car rental customer satisfaction from text](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/61c1e967-8d10-44bb-a846-cc1f27e9e69a?context=wx)
-
-Size : 13 billion parameters
-
-Token limits : Context window length (input + output): 8192
-
-Supported natural languages : English
-
-"
-5B37710FE7BBD6EFB842FEB7B49B036302E18F81_20,5B37710FE7BBD6EFB842FEB7B49B036302E18F81,"Instruction tuning information : The Granite family of models is trained on enterprise-relevant data sets from five domains: internet, academic, code, legal, and finance. Data used to train the models first undergoes IBM data governance reviews and is filtered of text that is flagged for hate, abuse, or profanity by the IBM-developed HAP filter. IBM shares information about the training methods and data sets used.
-
-Model architecture : Decoder
-
-License : [Terms of use](https://www.ibm.com/support/customer/csol/terms/?id=i126-6883) : For more information about contractual protections related to IBM watsonx.ai, see the [IBM watsonx.ai service description](https://www.ibm.com/support/customer/csol/terms/?id=i126-7747).
-
-Learn more : [Model information](https://www.ibm.com/blog/watsonx-tailored-generative-ai/) : [Research paper](https://www.ibm.com/downloads/cas/X9W4O6BM)
-
-: [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/ibm/granite-13b-instruct-v2?context=wx)
-
-"
-5B37710FE7BBD6EFB842FEB7B49B036302E18F81_21,5B37710FE7BBD6EFB842FEB7B49B036302E18F81," granite-13b-instruct-v1
-
-The granite-13b-instruct-v1 model is provided by IBM. This model was trained with high-quality finance data, and is a top-performing model on finance tasks. Financial tasks evaluated include: providing sentiment scores for stock and earnings call transcripts, classifying news headlines, extracting credit risk assessments, summarizing financial long-form text, and answering financial or insurance-related questions.
-
-Usage : Supports extraction, summarization, and classification tasks. Generates useful output for finance-related tasks. Uses a model-specific prompt format. Accepts special characters, which can be used for generating structured output.
-
-Cost : Class 2. For pricing details, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html).
-
-Try it out : [Sample 3b: Generate a numbered list on a particular theme](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.htmlsample3b) : [Sample 4d: Answer general knowledge questions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.htmlsample4d)
-
-: [Sample notebook: Use watsonx and ibm/granite-13b-instruct to analyze car rental customer satisfaction from text](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/61c1e967-8d10-44bb-a846-cc1f27e9e69a?context=wx)
-
-Size : 13 billion parameters
-
-Token limits : Context window length (input + output): 8192
-
-Supported natural languages : English
-
-"
-5B37710FE7BBD6EFB842FEB7B49B036302E18F81_22,5B37710FE7BBD6EFB842FEB7B49B036302E18F81,"Instruction tuning information : The Granite family of models is trained on enterprise-relevant data sets from five domains: internet, academic, code, legal, and finance. Data used to train the models first undergoes IBM data governance reviews and is filtered of text that is flagged for hate, abuse, or profanity by the IBM-developed HAP filter. IBM shares information about the training methods and data sets used.
-
-Model architecture : Decoder
-
-License : [Terms of use](https://www.ibm.com/support/customer/csol/terms/?id=i126-6883) : For more information about contractual protections related to IBM watsonx.ai, see the [IBM watsonx.ai service description](https://www.ibm.com/support/customer/csol/terms/?id=i126-7747).
-
-Learn more : [Model information](https://www.ibm.com/blog/watsonx-tailored-generative-ai/) : [Research paper](https://www.ibm.com/downloads/cas/X9W4O6BM)
-
-: [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/ibm/granite-13b-instruct-v1?context=wx)
-
-"
-5B37710FE7BBD6EFB842FEB7B49B036302E18F81_23,5B37710FE7BBD6EFB842FEB7B49B036302E18F81," Llama-2 Chat
-
-The Llama-2 Chat model is provided by Meta on Hugging Face. The fine-tuned model is useful for chat generation. The model is pretrained with publicly available online data and fine-tuned using reinforcement learning from human feedback.
-
-You can choose to use the 13 billion parameter or 70 billion parameter version of the model.
-
-Usage : Generates dialogue output like a chatbot. Uses a model-specific prompt format.
-
-Cost : 13b: Class 1 : 70b: Class 2 : For pricing details, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html).
-
-Try it out : [Sample prompt](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.htmlsample7b) : [Sample notebook: Use watsonx and Meta llama-2-70b-chat to answer questions about an article](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/b59922d8-678f-44e4-b5ef-18138890b444?context=wx) : [Sample notebook: Use watsonx and Meta llama-2-70b-chat to answer questions about an article](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/b59922d8-678f-44e4-b5ef-18138890b444?context=wx)
-
-Available sizes : 13 billion parameters : 70 billion parameters
-
-Token limits : Context window length (input + output): 4096
-
-: Lite plan output is limited as follows: : - 70b version: 900 : - 13b version: 2048
-
-Supported natural languages : English
-
-Instruction tuning information : Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction data sets and more than one million new examples that were annotated by humans.
-
-"
-5B37710FE7BBD6EFB842FEB7B49B036302E18F81_24,5B37710FE7BBD6EFB842FEB7B49B036302E18F81,"Model architecture : Llama 2 is an auto-regressive decoder-only language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning and reinforcement learning with human feedback.
-
-License : [License](https://ai.meta.com/llama/license/)
-
-Learn more : [Research paper](https://arxiv.org/abs/2307.09288)
-
-: [13b Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/meta-llama/llama-2-13b-chat?context=wx) : [70b Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/meta-llama/llama-2-70b-chat?context=wx)
-
-"
-5B37710FE7BBD6EFB842FEB7B49B036302E18F81_25,5B37710FE7BBD6EFB842FEB7B49B036302E18F81," mpt-7b-instruct2
-
-The mpt-7b-instruct2 model is provided by MosaicML on Hugging Face. This model is a fine-tuned version of the base MosaicML Pretrained Transformer (MPT) model that was trained to handle long inputs. This version of the model was optimized by IBM for following short-form instructions.
-
-Usage : General use with zero- or few-shot prompts.
-
-Cost : Class 1. For pricing details, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html).
-
-Try it out : [Sample prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html)
-
-Size : 7 billion parameters
-
-Token limits : Context window length (input + output): 2048
-
-: Note: Lite plan output is limited to 500
-
-Supported natural languages : English
-
-Instruction tuning information : The dataset that was used to train this model is a combination of the Dolly dataset from Databrick and a filtered subset of the Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback training data from Anthropic.
-
-During filtering, parts of dialog exchanges that contain instruction-following steps were extracted to be used as samples.
-
-Model architecture : Encoder-decoder
-
-License : [Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0.txt)
-
-Learn more : [Model card](https://huggingface.co/ibm/mpt-7b-instruct2) : [Blog](https://www.mosaicml.com/blog/mpt-7b)
-
-"
-5B37710FE7BBD6EFB842FEB7B49B036302E18F81_26,5B37710FE7BBD6EFB842FEB7B49B036302E18F81," mt0-xxl-13b
-
-The mt0-xxl-13b model is provided by BigScience on Hugging Face. The model is optimized to support language generation and translation tasks with English, languages other than English, and multilingual prompts.
-
-Usage : General use with zero- or few-shot prompts. For translation tasks, include a period to indicate the end of the text you want translated or the model might continue the sentence rather than translate it.
-
-Cost : Class 2. For pricing details, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html).
-
-Try it out : [Sample prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html)
-
-: [Sample notebook: Simple introduction to retrieval-augmented generation with watsonx.ai](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/fed7cf6b-1c48-4d71-8c04-0fce0e000d43?context=wx)
-
-Size : 13 billion parameters
-
-Token limits : Context window length (input + output): 4096
-
-: Note: Lite plan output is limited to 700
-
-Supported natural languages : The model is pretrained on multilingual data in 108 languages and fine-tuned with multilingual data in 46 languages to perform multilingual tasks.
-
-Instruction tuning information : BigScience publishes details about its code and data sets.
-
-Model architecture : Encoder-decoder
-
-License : [Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0.txt)
-
-Learn more : [Research paper](https://arxiv.org/abs/2211.01786)
-
-: [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/bigscience/mt0-xxl?context=wx)
-
-"
-5B37710FE7BBD6EFB842FEB7B49B036302E18F81_27,5B37710FE7BBD6EFB842FEB7B49B036302E18F81," starcoder-15.5b
-
-The starcoder-15.5b model is provided by BigCode on Hugging Face. This model can generate code and convert code from one programming language to another. The model is meant to be used by developers to boost their productivity.
-
-Usage : Code generation and code conversion : Note: The model output might include code that is taken directly from its training data, which can be licensed code that requires attribution.
-
-Cost : Class 2. For pricing details, see [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html).
-
-Try it out : [Sample prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.htmlcode) : [Sample notebook: Use watsonx and BigCode starcoder-15.5b to generate code based on instruction](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/b5792ad4-555b-4b68-8b6f-ce368093fac6?context=wx)
-
-Size : 15.5 billion parameters
-
-Token limits : Context window length (input + output): 8192
-
-Supported programming languages : Over 80 programming languages, with an emphasis on Python.
-
-Data used during training : This model was trained on over 80 programming languages from GitHub. A filter was applied to exclude from the training data any licensed code or code that is marked with opt-out requests. Nevertheless, the model's output might include code from its training data that requires attribution. The model was not instruction-tuned. Submitting input with only an instruction and no examples might result in poor model output.
-
-Model architecture : Decoder
-
-License : [License](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement)
-
-Learn more : [Research paper](https://arxiv.org/abs/2305.06161)
-
-"
-5B37710FE7BBD6EFB842FEB7B49B036302E18F81_28,5B37710FE7BBD6EFB842FEB7B49B036302E18F81,": [Model card](https://dataplatform.cloud.ibm.com/wx/samples/models/bigcode/starcoder?context=wx)
-
-Parent topic:[Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-overview.html)
-"
-58C6D0A1C6DAD01E3F0F1748DC472C3DDCC07E43_0,58C6D0A1C6DAD01E3F0F1748DC472C3DDCC07E43," Foundation models
-
-Build generative AI solutions with foundation models in IBM watsonx.ai.
-
-Foundation models are large AI models that have billions of parameters and are trained on terabytes of data. Foundation models can do various tasks, including text, code, or image generation, classification, conversation, and more. Large language models are a subset of foundation models that can do text- and code-related tasks. Watsonx.ai has a range of deployed large language models for you to try. For details, see [Supported foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html).
-
-"
-58C6D0A1C6DAD01E3F0F1748DC472C3DDCC07E43_1,58C6D0A1C6DAD01E3F0F1748DC472C3DDCC07E43," Foundation model architecture
-
-Foundation models represent a fundamentally different model architecture and purpose for AI systems. The following diagram illustrates the difference between traditional AI models and foundation models.
-
-
-
-As shown in the diagram, traditional AI models specialize in specific tasks. Most traditional AI models are built by using machine learning, which requires a large, structured, well-labeled data set that encompasses a specific task that you want to tackle. Often these data sets must be sourced, curated, and labeled by hand, a job that requires people with domain knowledge and takes time. After it is trained, a traditional AI model can do a single task well. The traditional AI model uses what it learns from patterns in the training data to predict outcomes in unknown data. You can create machine learning models for your specific use cases with tools like AutoAI and Jupyter notebooks, and then deploy them.
-
-In contrast, foundation models are trained on large, diverse, unlabeled data sets and can be used for many different tasks. Foundation models were first used to generate text by calculating the most-probable next word in natural language translation tasks. However, model providers are learning that, when prompted with the right input, foundation models can do various other tasks well. Instead of creating your own foundation models, you use existing deployed models and engineer prompts to generate the results that you need.
-
-"
-58C6D0A1C6DAD01E3F0F1748DC472C3DDCC07E43_2,58C6D0A1C6DAD01E3F0F1748DC472C3DDCC07E43," Methods of working with foundation models
-
-The possibilities and applications of foundation models are just starting to be discovered. Explore and validate use cases with foundation models in watsonx.ai to automate, simplify, and speed up existing processes or provide value in a new way.
-
-You can interact with foundation models in the following ways:
-
-
-
-* Engineer prompts and inference deployed foundation models directly by using the Prompt Lab
-* Inference deployed foundation models programmatically by using the Python library
-* Tune foundation models to return output in a certain style or format by using the Tuning Studio
-
-
-
-"
-58C6D0A1C6DAD01E3F0F1748DC472C3DDCC07E43_3,58C6D0A1C6DAD01E3F0F1748DC472C3DDCC07E43," Learn more
-
-
-
-* [Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html)
-* [Python library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-python-lib.html)
-* [Tuning Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-studio.html)
-* [Security and privacy](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-security.html)
-* [Model terms of use](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-disclaimer.html)
-* [Tokens](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tokens.html)
-* [Retrieval-augmented generation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-rag.html)
-* [AI risk atlas](https://dataplatform.cloud.ibm.com/docs/content/wsj/ai-risk-atlas/ai-risk-atlas.html)
-
-
-
-Parent topic:[Analyzing data and working with models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html)
-"
-78A8C07B83DF1B01276353D098E84F12304636E2_0,78A8C07B83DF1B01276353D098E84F12304636E2," Prompt Lab
-
-In the Prompt Lab in IBM watsonx.ai, you can experiment with prompting different foundation models, explore sample prompts, and save and share your best prompts.
-
-You use the Prompt Lab to engineer effective prompts that you submit to deployed foundation models for inferencing. You do not use the Prompt Lab to create new foundation models.
-
-This video provides a visual method to learn the concepts and tasks in this documentation.
-
-"
-78A8C07B83DF1B01276353D098E84F12304636E2_1,78A8C07B83DF1B01276353D098E84F12304636E2," Requirements
-
-If you signed up for watsonx.ai and you have a sandbox project, all requirements are met and you're ready to use the Prompt Lab.
-
-You must meet these requirements to use the Prompt Lab:
-
-
-
-* You must have a project.
-* You must have the Editor or Admin role in the project.
-* The project must have an associated Watson Machine Learning service instance. Otherwise, you are prompted to associate the service when you open the Prompt Lab.
-
-
-
-"
-78A8C07B83DF1B01276353D098E84F12304636E2_2,78A8C07B83DF1B01276353D098E84F12304636E2," Creating and running a prompt
-
-To create and run a new prompt, complete the following steps:
-
-
-
-1. From the [watsonx.ai home page](https://dataplatform.cloud.ibm.com/wx/home?context=wx), choose a project, and then click Experiment with foundation models and build prompts.
-
-
-
-
-
-1. Select a model.
-2. Enter a prompt.
-3. If necessary, update model parameters or add prompt variables.
-4. Click Generate.
-5. To preserve your work, so you can reuse or share a prompt with collaborators in the current project, save your work as a project asset. For more information, see [Saving prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-save.html).
-
-
-
-To run a sample prompt, complete the following steps:
-
-
-
-1. From the Sample prompts menu in the Prompt Lab, select a sample prompt.
-
-The prompt is opened in the editor and an appropriate model is selected.
-2. Click Generate.
-
-
-
-"
-78A8C07B83DF1B01276353D098E84F12304636E2_3,78A8C07B83DF1B01276353D098E84F12304636E2," Prompt editing options
-
-You type your prompt in the prompt editor. The prompt editor has the following modes:
-
-Freeform : You add your prompt in plain text. Your prompt text is sent to the model exactly as you typed it. : Quotation marks in your text are escaped with a backslash (""). Newline characters are represented by n. Apostrophes are escaped (it'''s) so that they can be handled properly in the cURL command.
-
-Structured : You add parts of your prompt into the appropriate fields: : - Instruction: Add an instruction if it makes sense for your use case. An instruction is an imperative statement, such as Summarize the following article. : - Examples: Add one or more pairs of examples that contain the input and the corresponding output that you want. Providing a few example input-and-output pairs in your prompt is called few-shot prompting. If you need a specific prefix to the input or the output, you can replace the default labels, ""Input:"" or ""Output:"", with the labels you want to use. A space is added between the example label and the example text. : - Test your input: In the Try area, enter the final input of your prompt. : Structured mode is designed to help new users create effective prompts. Text from the fields is sent to the model in a template format.
-
-"
-78A8C07B83DF1B01276353D098E84F12304636E2_4,78A8C07B83DF1B01276353D098E84F12304636E2," Model and prompt configuration options
-
-You must specify which model to prompt and can optionally set parameters that control the generated result.
-
-"
-78A8C07B83DF1B01276353D098E84F12304636E2_5,78A8C07B83DF1B01276353D098E84F12304636E2," Model choices
-
-In the Prompt Lab, you can submit your prompt to any of the models that are supported by watsonx.ai. You can choose recently-used models from the drop-down list. Or you can click View all foundation models to view all the supported models, filter them by task, and read high-level information about the models.
-
-If you tuned a foundation model by using the Tuning Studio and deployed the tuned model, your tuned model is also available for prompting from the Prompt Lab.
-
-"
-78A8C07B83DF1B01276353D098E84F12304636E2_6,78A8C07B83DF1B01276353D098E84F12304636E2," Model parameters
-
-To control how the model generates output in response to your prompt, you can specify decoding parameters and stopping criteria. For more information, see [Model parameters for prompting](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-model-parameters.html).
-
-"
-78A8C07B83DF1B01276353D098E84F12304636E2_7,78A8C07B83DF1B01276353D098E84F12304636E2," Prompt variables
-
-To add flexibility to your prompts, you can define prompt variables. A prompt variable is a placeholder keyword that you include in the static text of your prompt at creation time and replace with text dynamically at run time. For more information, see [Building reusable prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-variables.html).
-
-"
-78A8C07B83DF1B01276353D098E84F12304636E2_8,78A8C07B83DF1B01276353D098E84F12304636E2," AI guardrails
-
-When you set the AI guardrails switcher to On, harmful language is automatically removed from the input prompt text and from the output that is generated by the model. Specifically, any sentence in the input or output that contains harmful language is replaced with a message that says that potentially harmful text was removed.
-
-"
-78A8C07B83DF1B01276353D098E84F12304636E2_9,78A8C07B83DF1B01276353D098E84F12304636E2," Prompt code
-
-If you want to run the prompt programmatically, you can view and copy the prompt code or use the Python library.
-
-"
-78A8C07B83DF1B01276353D098E84F12304636E2_10,78A8C07B83DF1B01276353D098E84F12304636E2," View code
-
-When you click the View code icon (), a cURL command is displayed that you can call from outside the Prompt Lab to submit the current prompt and parameters to the selected model and get a generated response.
-
-In the command, there is a placeholder for an IBM Cloud IAM token. For information about generating the access token, see [Generating an IBM Cloud IAM token](https://cloud.ibm.com/docs/account?topic=account-iamtoken_from_apikey).
-
-"
-78A8C07B83DF1B01276353D098E84F12304636E2_11,78A8C07B83DF1B01276353D098E84F12304636E2," Programmatic alternative to the Prompt Lab
-
-The Prompt Lab graphical interface is a great place to experiment and iterate with your prompts. However, you can also prompt foundation models in watsonx.ai programmatically by using the Python library. For details, see [Foundation models Python library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-python-lib.html).
-
-"
-78A8C07B83DF1B01276353D098E84F12304636E2_12,78A8C07B83DF1B01276353D098E84F12304636E2," Available prompts
-
-In the side panel, you can access sample prompts, your session history, and saved prompts.
-
-"
-78A8C07B83DF1B01276353D098E84F12304636E2_13,78A8C07B83DF1B01276353D098E84F12304636E2," Samples
-
-A collection of sample prompts are available in the Prompt Lab. The samples demonstrate effective prompt text and model parameters for different tasks, including classification, extraction, content generation, question answering, and summarization.
-
-When you click a sample, the prompt text loads in the editor, an appropriate model is selected, and optimal parameters are configured automatically.
-
-"
-78A8C07B83DF1B01276353D098E84F12304636E2_14,78A8C07B83DF1B01276353D098E84F12304636E2," History
-
-As you experiment with different prompt text, model choices, and parameters, the details are captured in the session history each time you submit your prompt. To load a previous prompt, click the entry in the history and then click Restore.
-
-"
-78A8C07B83DF1B01276353D098E84F12304636E2_15,78A8C07B83DF1B01276353D098E84F12304636E2," Saved
-
-From the Saved prompt templates menu, you can load any prompts that you saved to the current project as a prompt template asset.
-
-"
-78A8C07B83DF1B01276353D098E84F12304636E2_16,78A8C07B83DF1B01276353D098E84F12304636E2," Learn more
-
-
-
-* [Security and privacy for foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-security.html)
-* [Building reusable prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-variables.html)
-* [Saving prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-save.html)
-* [Model parameters for prompting](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-model-parameters.html)
-* [Sample prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html)
-* [Prompt tips](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-tips.html)
-* Try these tutorials:
-
-
-
-* [Prompt a foundation model using Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html)
-* [Prompt a foundation model with the retrieval-augmented generation pattern](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html)
-
-
-
-
-
-
-
-* Watch these other prompt lab videos
-
-
-
-Parent topic:[Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-overview.html)
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_0,E5D702E67E93752155510B56A3B2F464E190EBA2," Sample foundation model prompts for common tasks
-
-Try these samples to learn how different prompts can guide foundation models to do common tasks.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_1,E5D702E67E93752155510B56A3B2F464E190EBA2," How to use this topic
-
-Explore the sample prompts in this topic:
-
-
-
-* Copy and paste the prompt text and input parameter values into the Prompt Lab in IBM watsonx.ai
-* See what text is generated.
-* See how different models generate different output.
-* Change the prompt text and parameters to see how results vary.
-
-
-
-There is no one right way to prompt foundation models. But patterns have been found, in academia and industry, that work fairly reliably. Use the samples in this topic to build your skills and your intuition about prompt engineering through experimentation.
-
-This video provides a visual method to learn the concepts and tasks in this documentation.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_2,E5D702E67E93752155510B56A3B2F464E190EBA2,"Video chapters
-[ 0:11 ] Introduction to prompts and Prompt Lab
-[ 0:33 ] Key concept: Everything is text completion
-[ 1:34 ] Useful prompt pattern: Few-shot prompt
-[ 1:58 ] Stopping criteria: Max tokens, stop sequences
-[ 3:32 ] Key concept: Fine-tuning
-[ 4:32 ] Useful prompt pattern: Zero-shot prompt
-[ 5:32 ] Key concept: Be flexible, try different prompts
-[ 6:14 ] Next steps: Experiment with sample prompts
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_3,E5D702E67E93752155510B56A3B2F464E190EBA2," Samples overview
-
-You can find samples that prompt foundation models to generate output that supports the following tasks:
-
-
-
-* [Classification](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=enclassification)
-* [Extraction](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=enextraction)
-* [Generation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=engeneration)
-* [Question answering (QA)](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=enqa)
-* [Summarization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensummarization)
-* [Code generation and conversion](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=encode)
-* [Dialogue](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=endialogue)
-
-
-
-The following table shows the foundation models that are used in task-specific samples. A checkmark indicates that the model is used in a sample for the associated task.
-
-
-
-Table 1. Models used in samples for certain tasks
-
- Model Classification Extraction Generation QA Summarization Coding Dialogue
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_4,E5D702E67E93752155510B56A3B2F464E190EBA2," flan-t5-xxl-11b ✓ ✓
- flan-ul2-20b ✓ ✓ ✓
- gpt-neox-20b ✓ ✓ ✓
- granite-13b-chat-v1 ✓
- granite-13b-instruct-v1 ✓ ✓
- granite-13b-instruct-v2 ✓ ✓ ✓
- llama-2 chat ✓
- mpt-7b-instruct2 ✓ ✓
- mt0-xxl-13b ✓ ✓
- starcoder-15.5b ✓
-
-
-
-The following table summarizes the available sample prompts.
-
-
-
-Table 2. List of sample prompts
-
- Scenario Prompt editor Prompt format Model Decoding Notes
-
- [Sample 1a: Classify a message](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample1a) Freeform Zero-shot * mt0-xxl-13b * flan-t5-xxl-11b * flan-ul2-20b Greedy * Uses the class names as stop sequences to stop the model after it prints the class name
- [Sample 1b: Classify a message](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample1b) Freeform Few-shot * gpt-neox-20b * mpt-7b-instruct Greedy * Uses the class names as stop sequences
- [Sample 1c: Classify a message](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample1c) Structured Few-shot * gpt-neox-20b * mpt-7b-instruct Greedy * Uses the class names as stop sequences
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_5,E5D702E67E93752155510B56A3B2F464E190EBA2," [Sample 2a: Extract details from a complaint](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample2a) Freeform Zero-shot * flan-ul2-20b * granite-13b-instruct-v2 Greedy
- [Sample 3a: Generate a numbered list on a particular theme](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample3a) Freeform Few-shot * gpt-neox-20b Sampling * Generates formatted output * Uses two newline characters as a stop sequence to stop the model after one list
- [Sample 3b: Generate a numbered list on a particular theme](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample3b) Structured Few-shot * gpt-neox-20b Sampling * Generates formatted output. * Uses two newline characters as a stop sequence
- [Sample 3c: Generate a numbered list on a particular theme](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample3c) Freeform Zero-shot * granite-13b-instruct-v1 * granite-13b-instruct-v2 Greedy * Generates formatted output
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_6,E5D702E67E93752155510B56A3B2F464E190EBA2," [Sample 4a: Answer a question based on an article](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample4a) Freeform Zero-shot * mt0-xxl-13b * flan-t5-xxl-11b * flan-ul2-20b Greedy * Uses a period ""."" as a stop sequence to cause the model to return only a single sentence
- [Sample 4b: Answer a question based on an article](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample4b) Structured Zero-shot * mt0-xxl-13b * flan-t5-xxl-11b * flan-ul2-20b Greedy * Uses a period ""."" as a stop sequence * Generates results for multiple inputs at once
- [Sample 4c: Answer a question based on a document](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample4c) Freeform Zero-shot * granite-13b-instruct-v2 Greedy
- [Sample 4d: Answer general knowledge questions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample4d) Freeform Zero-shot * granite-13b-instruct-v1 Greedy
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_7,E5D702E67E93752155510B56A3B2F464E190EBA2," [Sample 5a: Summarize a meeting transcript](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample5a) Freeform Zero-shot * flan-t5-xxl-11b * flan-ul2-20b * mpt-7b-instruct2 Greedy
- [Sample 5b: Summarize a meeting transcript](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample5b) Freeform Few-shot * gpt-neox-20b Greedy
- [Sample 5c: Summarize a meeting transcript](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample5c) Structured Few-shot * gpt-neox-20b Greedy * Generates formatted output * Uses two newline characters as a stop sequence to stop the model after one list
- [Sample 6a: Generate programmatic code from instructions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample6a) Freeform Few-shot * starcoder-15.5b Greedy * Generates programmatic code as output * Uses as a stop sequence
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_8,E5D702E67E93752155510B56A3B2F464E190EBA2," [Sample 6b: Convert code from one programming language to another](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample6b) Freeform Few-shot * starcoder-15.5b Greedy * Generates programmatic code as output * Uses as a stop sequence
- [Sample 7a: Converse in a dialogue](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample7a) Freeform Custom structure * granite-13b-chat-v1 Greedy * Generates dialogue output like a chatbot * Uses a special token that is named END_KEY as a stop sequence
- [Sample 7b: Converse in a dialogue](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html?context=cdpaas&locale=ensample7b) Freeform Custom structure * llama-2 chat Greedy * Generates dialogue output like a chatbot * Uses a model-specific prompt format
-
-
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_9,E5D702E67E93752155510B56A3B2F464E190EBA2," Classification
-
-Classification is useful for predicting data in distinct categories. Classifications can be binary, with two classes of data, or multi-class. A classification task is useful for categorizing information, such as customer feedback, so that you can manage or act on the information more efficiently.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_10,E5D702E67E93752155510B56A3B2F464E190EBA2," Sample 1a: Classify a message
-
-Scenario: Given a message that is submitted to a customer-support chatbot for a cloud software company, classify the customer's message as either a question or a problem. Depending on the class assignment, the chat is routed to the correct support team for the issue type.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_11,E5D702E67E93752155510B56A3B2F464E190EBA2,"Model choice
-Models that are instruction-tuned can generally complete this task with this sample prompt. Suggestions: mt0-xxl-13b, flan-t5-xxl-11b, or flan-ul2-20b
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_12,E5D702E67E93752155510B56A3B2F464E190EBA2,"Decoding
-Greedy. The model must return one of the specified class names; it cannot be creative and make up new classes.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_13,E5D702E67E93752155510B56A3B2F464E190EBA2,"Stopping criteria
-
-
-
-* Specify two stop sequences: ""Question"" and ""Problem"". After the model generates either of those words, it should stop.
-* With such short output, the Max tokens parameter can be set to 5.
-
-
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_14,E5D702E67E93752155510B56A3B2F464E190EBA2,"Prompt text
-Paste this zero-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.
-
-Classify this customer message into one of two classes: Question, Problem.
-
-Class name: Question
-Description: The customer is asking a technical question or a how-to question
-about our products or services.
-
-Class name: Problem
-Description: The customer is describing a problem they are having. They might
-say they are trying something, but it's not working. They might say they are
-getting an error or unexpected results.
-
-Message: I'm having trouble registering for a new account.
-Class name:
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_15,E5D702E67E93752155510B56A3B2F464E190EBA2," Sample 1b: Classify a message
-
-Scenario: Given a message that is submitted to a customer-support chatbot for a cloud software company, classify the customer's message as either a question or a problem description. Based on the class type, the chat can be routed to the correct support team.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_16,E5D702E67E93752155510B56A3B2F464E190EBA2,"Model choice
-With few-shot examples of both classes, most models can complete this task well, including: gpt-neox-20b and mpt-7b-instruct.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_17,E5D702E67E93752155510B56A3B2F464E190EBA2,"Decoding
-Greedy. The model must return one of the specified class names; it cannot be creative and make up new classes.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_18,E5D702E67E93752155510B56A3B2F464E190EBA2,"Stopping criteria
-
-
-
-* Specify two stop sequences: ""Question"" and ""Problem"". After the model generates either of those words, it should stop.
-* With such short output, the Max tokens parameter can be set to 5.
-
-
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_19,E5D702E67E93752155510B56A3B2F464E190EBA2,"Prompt text
-Paste this few-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.
-
-Message: When I try to log in, I get an error.
-Class name: Problem
-
-Message: Where can I find the plan prices?
-Class name: Question
-
-Message: What is the difference between trial and paygo?
-Class name: Question
-
-Message: The registration page crashed, and now I can't create a new account.
-Class name: Problem
-
-Message: What regions are supported?
-Class name: Question
-
-Message: I can't remember my password.
-Class name: Problem
-
-Message: I'm having trouble registering for a new account.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_20,E5D702E67E93752155510B56A3B2F464E190EBA2," Sample 1c: Classify a message
-
-Scenario: Given a message that is submitted to a customer-support chatbot for a cloud software company, classify the customer's message as either a question or a problem description. Based on the class type, the chat can be routed to the correct support team.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_21,E5D702E67E93752155510B56A3B2F464E190EBA2,"Model choice
-With few-shot examples of both classes, most models can complete this task well, including: gpt-neox-20b and mpt-7b-instruct.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_22,E5D702E67E93752155510B56A3B2F464E190EBA2,"Decoding
-Greedy. The model must return one of the specified class names, not be creative and make up new classes.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_23,E5D702E67E93752155510B56A3B2F464E190EBA2,"Stopping criteria
-
-
-
-* Specify two stop sequences: ""Question"" and ""Problem"". After the model generates either of those words, it should stop.
-* With such short output, the Max tokens parameter can be set to 5.
-
-
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_24,E5D702E67E93752155510B56A3B2F464E190EBA2,"Set up section
-Paste these headers and examples into the Examples area of the Set up section:
-
-
-
-Table 2. Classification few-shot examples
-
- Message: Class name:
-
- When I try to log in, I get an error. Problem
- Where can I find the plan prices? Question
- What is the difference between trial and paygo? Question
- The registration page crashed, and now I can't create a new account. Problem
- What regions are supported? Question
- I can't remember my password. Problem
-
-
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_25,E5D702E67E93752155510B56A3B2F464E190EBA2,"Try section
-Paste this message in the Try section:
-
-I'm having trouble registering for a new account.
-
-Select the model and set parameters, then click Generate to see the result.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_26,E5D702E67E93752155510B56A3B2F464E190EBA2," Extracting details
-
-Extraction tasks can help you to find key terms or mentions in data based on the semantic meaning of words rather than simple text matches.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_27,E5D702E67E93752155510B56A3B2F464E190EBA2," Sample 2a: Extract details from a complaint
-
-Scenario: Given a complaint from a customer who had trouble booking a flight on a reservation website, identify the factors that contributed to this customer's unsatisfactory experience.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_28,E5D702E67E93752155510B56A3B2F464E190EBA2,"Model choices
-flan-ul2-20b, granite-13b-instruct-v2
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_29,E5D702E67E93752155510B56A3B2F464E190EBA2,"Decoding
-Greedy. We need the model to return words that are in the input; the model cannot be creative and make up new words.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_30,E5D702E67E93752155510B56A3B2F464E190EBA2,"Stopping criteria
-The list of extracted factors will not be long, so set the Max tokens parameter to 50.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_31,E5D702E67E93752155510B56A3B2F464E190EBA2,"Prompt text
-Paste this zero-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.
-
-From the following customer complaint, extract all the factors that
-caused the customer to be unhappy.
-
-Customer complaint:
-I just tried to book a flight on your incredibly slow website. All
-the times and prices were confusing. I liked being able to compare
-the amenities in economy with business class side by side. But I
-never got to reserve a seat because I didn't understand the seat map.
-Next time, I'll use a travel agent!
-
-Numbered list of all the factors that caused the customer to be unhappy:
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_32,E5D702E67E93752155510B56A3B2F464E190EBA2," Generating natural language
-
-Generation tasks are what large language models do best. Your prompts can help guide the model to generate useful language.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_33,E5D702E67E93752155510B56A3B2F464E190EBA2," Sample 3a: Generate a numbered list on a particular theme
-
-Scenario: Generate a numbered list on a particular theme.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_34,E5D702E67E93752155510B56A3B2F464E190EBA2,"Model choice
-gpt-neox-20b was trained to recognize and handle special characters, such as the newline character, well. This model is a good choice when you want your generated text to be formatted a specific way with special characters.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_35,E5D702E67E93752155510B56A3B2F464E190EBA2,"Decoding
-Sampling. This is a creative task. Set the following parameters:
-
-
-
-* Temperature: 0.7
-* Top P: 1
-* Top K: 50
-* Random seed: 9045 (To get different output each time you click Generate, specify a different value for the Random seed parameter or clear the parameter.)
-
-
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_36,E5D702E67E93752155510B56A3B2F464E190EBA2,"Stopping criteria
-
-
-
-* To make sure the model stops generating text after one list, specify a stop sequence of two newline characters. To do that, click the Stop sequence text box, press the Enter key twice, then click Add sequence.
-* The list will not be very long, so set the Max tokens parameter to 50.
-
-
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_37,E5D702E67E93752155510B56A3B2F464E190EBA2,"Prompt text
-Paste this few-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.
-
-What are 4 types of dog breed?
-1. Poodle
-2. Dalmatian
-3. Golden retriever
-4. Bulldog
-
-What are 3 ways to incorporate exercise into your day?
-1. Go for a walk at lunch
-2. Take the stairs instead of the elevator
-3. Park farther away from your destination
-
-What are 4 kinds of vegetable?
-1. Spinach
-2. Carrots
-3. Broccoli
-4. Cauliflower
-
-What are the 3 primary colors?
-1. Red
-2. Green
-3. Blue
-
-What are 3 ingredients that are good on pizza?
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_38,E5D702E67E93752155510B56A3B2F464E190EBA2," Sample 3b: Generate a numbered list on a particular theme
-
-Scenario: Generate a numbered list on a particular theme.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_39,E5D702E67E93752155510B56A3B2F464E190EBA2,"Model choice
-gpt-neox-20b was trained to recognize and handle special characters, such as the newline character, well. This model is a good choice when you want your generated text to be formatted in a specific way with special characters.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_40,E5D702E67E93752155510B56A3B2F464E190EBA2,"Decoding
-Sampling. This scenario is a creative one. Set the following parameters:
-
-
-
-* Temperature: 0.7
-* Top P: 1
-* Top K: 50
-* Random seed: 9045 (To generate different results, specify a different value for the Random seed parameter or clear the parameter.)
-
-
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_41,E5D702E67E93752155510B56A3B2F464E190EBA2,"Stopping criteria
-
-
-
-* To make sure that the model stops generating text after one list, specify a stop sequence of two newline characters. To do that, click in the Stop sequence text box, press the Enter key twice, then click Add sequence.
-* The list will not be long, so set the Max tokens parameter to 50.
-
-
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_42,E5D702E67E93752155510B56A3B2F464E190EBA2,"Set up section
-Paste these headers and examples into the Examples area of the Set up section:
-
-
-
-Table 3. Generation few-shot examples
-
- Input: Output:
-
- What are 4 types of dog breed? 1. Poodle 2. Dalmatian 3. Golden retriever 4. Bulldog
- What are 3 ways to incorporate exercise into your day? 1. Go for a walk at lunch 2. Take the stairs instead of the elevator 3. Park farther away from your destination
- What are 4 kinds of vegetable? 1. Spinach 2. Carrots 3. Broccoli 4. Cauliflower
- What are the 3 primary colors? 1. Red 2. Green 3. Blue
-
-
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_43,E5D702E67E93752155510B56A3B2F464E190EBA2,"Try section
-Paste this input in the Try section:
-
-What are 3 ingredients that are good on pizza?
-
-Select the model and set parameters, then click Generate to see the result.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_44,E5D702E67E93752155510B56A3B2F464E190EBA2," Sample 3c: Generate a numbered list on a particular theme
-
-Scenario: Ask the model to play devil's advocate. Describe a potential action and ask the model to list possible downsides or risks that are associated with the action.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_45,E5D702E67E93752155510B56A3B2F464E190EBA2,"Model choice
-Similar to gpt-neox-20b, the granite-13b-instruct model was trained to recognize and handle special characters, such as the newline character, well. The granite-13b-instruct-v2 oe granite-13b-instruct-v1 model is a good choice when you want your generated text to be formatted in a specific way with special characters.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_46,E5D702E67E93752155510B56A3B2F464E190EBA2,"Decoding
-Greedy. The model must return the most predictable content based on what's in the prompt; the model cannot be too creative.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_47,E5D702E67E93752155510B56A3B2F464E190EBA2,"Stopping criteria
-The summary might run several sentences, so set the Max tokens parameter to 60.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_48,E5D702E67E93752155510B56A3B2F464E190EBA2,"Prompt text
-Paste this prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.
-
-You are playing the role of devil's advocate. Argue against the proposed plans. List 3 detailed, unique, compelling reasons why moving forward with the plan would be a bad choice. Consider all types of risks.
-
-Plan we are considering:
-Extend our store hours.
-Three problems with this plan are:
-1. We'll have to pay more for staffing.
-2. Risk of theft increases late at night.
-3. Clerks might not want to work later hours.
-
-Plan we are considering:
-Open a second location for our business.
-Three problems with this plan are:
-1. Managing two locations will be more than twice as time-consuming than managed just one.
-2. Creating a new location doesn't guarantee twice as many customers.
-3. A new location means added real estate, utility, and personnel expenses.
-
-Plan we are considering:
-Refreshing our brand image by creating a new logo.
-Three problems with this plan are:
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_49,E5D702E67E93752155510B56A3B2F464E190EBA2," Question answering
-
-Question-answering tasks are useful in help systems and other scenarios where frequently asked or more nuanced questions can be answered from existing content.
-
-To help the model return factual answers, implement the retrieval-augmented generation pattern. For more information, see [Retrieval-augmented generation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-rag.html).
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_50,E5D702E67E93752155510B56A3B2F464E190EBA2," Sample 4a: Answer a question based on an article
-
-Scenario: The website for an online seed catalog has many articles to help customers plan their garden and ultimately select which seeds to purchase. A new widget is being added to the website to answer customer questions based on the contents of the article the customer is viewing. Given a question that is related to an article, answer the question based on the article.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_51,E5D702E67E93752155510B56A3B2F464E190EBA2,"Model choice
-Models that are instruction-tuned, such as mt0-xxl-13b, flan-t5-xxl-11b, or flan-ul2-20b, can generally complete this task with this sample prompt.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_52,E5D702E67E93752155510B56A3B2F464E190EBA2,"Decoding
-Greedy. The answers must be grounded in the facts in the article, and if there is no good answer in the article, the model should not be creative and make up an answer.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_53,E5D702E67E93752155510B56A3B2F464E190EBA2,"Stopping criteria
-To cause the model to return a one-sentence answer, specify a period ""."" as a stop sequence. The Max tokens parameter can be set to 50.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_54,E5D702E67E93752155510B56A3B2F464E190EBA2,"Prompt text
-Paste this zero-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_55,E5D702E67E93752155510B56A3B2F464E190EBA2,"Tomatoes are one of the most popular plants for vegetable gardens.
-Tip for success: If you select varieties that are resistant to
-disease and pests, growing tomatoes can be quite easy. For
-experienced gardeners looking for a challenge, there are endless
-heirloom and specialty varieties to cultivate. Tomato plants come
-in a range of sizes. There are varieties that stay very small, less
-than 12 inches, and grow well in a pot or hanging basket on a balcony
-or patio. Some grow into bushes that are a few feet high and wide,
-and can be grown is larger containers. Other varieties grow into
-huge bushes that are several feet wide and high in a planter or
-garden bed. Still other varieties grow as long vines, six feet or
-more, and love to climb trellises. Tomato plants do best in full
-sun. You need to water tomatoes deeply and often. Using mulch
-prevents soil-borne disease from splashing up onto the fruit when you
-water. Pruning suckers and even pinching the tips will encourage the
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_56,E5D702E67E93752155510B56A3B2F464E190EBA2,"Answer the following question using only information from the article.
-Answer in a complete sentence, with proper capitalization and punctuation.
-If there is no good answer in the article, say ""I don't know"".
-
-Question: Why should you use mulch when growing tomatoes?
-Answer:
-
-You can experiment with asking other questions too, such as:
-
-
-
-* How large do tomato plants get?
-* Do tomato plants prefer shade or sun?
-* Is it easy to grow tomatoes?
-
-
-
-Try out-of-scope questions too, such as:
-
-
-
-* How do you grow cucumbers?
-
-
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_57,E5D702E67E93752155510B56A3B2F464E190EBA2," Sample 4b: Answer a question based on an article
-
-Scenario: The website for an online seed catalog has many articles to help customers plan their garden and ultimately select which seeds to purchase. A new widget is being added to the website to answer customer questions based on the contents of the article the customer is viewing. Given a question related to a particular article, answer the question based on the article.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_58,E5D702E67E93752155510B56A3B2F464E190EBA2,"Model choice
-Models that are instruction-tuned, such as mt0-xxl-13b, flan-t5-xxl-11b, or flan-ul2-20b, can generally complete this task with this sample prompt.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_59,E5D702E67E93752155510B56A3B2F464E190EBA2,"Decoding
-Greedy. The answers must be grounded in the facts in the article, and if there is no good answer in the article, the model should not be creative and make up an answer.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_60,E5D702E67E93752155510B56A3B2F464E190EBA2,"Stopping criteria
-To cause the model to return a one-sentence answer, specify a period ""."" as a stop sequence. The Max tokens parameter can be set to 50.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_61,E5D702E67E93752155510B56A3B2F464E190EBA2,"Set up section
-Paste this text into the Instruction area of the Set up section:
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_62,E5D702E67E93752155510B56A3B2F464E190EBA2,"Tomatoes are one of the most popular plants for vegetable gardens.
-Tip for success: If you select varieties that are resistant to
-disease and pests, growing tomatoes can be quite easy. For
-experienced gardeners looking for a challenge, there are endless
-heirloom and specialty varieties to cultivate. Tomato plants come
-in a range of sizes. There are varieties that stay very small, less
-than 12 inches, and grow well in a pot or hanging basket on a balcony
-or patio. Some grow into bushes that are a few feet high and wide,
-and can be grown is larger containers. Other varieties grow into
-huge bushes that are several feet wide and high in a planter or
-garden bed. Still other varieties grow as long vines, six feet or
-more, and love to climb trellises. Tomato plants do best in full
-sun. You need to water tomatoes deeply and often. Using mulch
-prevents soil-borne disease from splashing up onto the fruit when you
-water. Pruning suckers and even pinching the tips will encourage the
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_63,E5D702E67E93752155510B56A3B2F464E190EBA2,"Answer the following question using only information from the article.
-Answer in a complete sentence, with proper capitalization and punctuation.
-If there is no good answer in the article, say ""I don't know"".
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_64,E5D702E67E93752155510B56A3B2F464E190EBA2,"Try section
-In the Try section, add an extra test row so you can paste each of these two questions in a separate row:
-
-Why should you use mulch when growing tomatoes?
-
-How do you grow cucumbers?
-
-Select the model and set parameters, then click Generate to see two results.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_65,E5D702E67E93752155510B56A3B2F464E190EBA2," Sample 4c: Answer a question based on a document
-
-Scenario: You are creating a chatbot that can answer user questions. When a user asks a question, you want the agent to answer the question with information from a specific document.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_66,E5D702E67E93752155510B56A3B2F464E190EBA2,"Model choice
-Models that are instruction-tuned, such as granite-13b-instruct-v2, can complete the task with this sample prompt.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_67,E5D702E67E93752155510B56A3B2F464E190EBA2,"Decoding
-Greedy. The answers must be grounded in the facts in the document, and if there is no good answer in the article, the model should not be creative and make up an answer.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_68,E5D702E67E93752155510B56A3B2F464E190EBA2,"Stopping criteria
-Use a Max tokens parameter of 50.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_69,E5D702E67E93752155510B56A3B2F464E190EBA2,"Prompt text
-Paste this zero-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.
-
-Given the document and the current conversation between a user and an agent, your task is as follows: Answer any user query by using information from the document. The response should be detailed.
-
-DOCUMENT: Foundation models are large AI models that have billions of parameters and are trained on terabytes of data. Foundation models can do various tasks, including text, code, or image generation, classification, conversation, and more. Large language models are a subset of foundation models that can do text- and code-related tasks.
-DIALOG: USER: What are foundation models?
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_70,E5D702E67E93752155510B56A3B2F464E190EBA2," Sample 4d: Answer general knowledge questions
-
-Scenario: Answer general questions about finance.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_71,E5D702E67E93752155510B56A3B2F464E190EBA2,"Model choice
-The granite-13b-instruct-v1 model can be used for multiple tasks, including text generation, summarization, question and answering, classification, and extraction.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_72,E5D702E67E93752155510B56A3B2F464E190EBA2,"Decoding
-Greedy. This sample is answering questions, so we don't want creative output.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_73,E5D702E67E93752155510B56A3B2F464E190EBA2,"Stopping criteria
-Set the Max tokens parameter to 200 so the model can return a complete answer.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_74,E5D702E67E93752155510B56A3B2F464E190EBA2,"Prompt text
-The model was tuned for question-answering with examples in the following format:
-
-<|user|>
-content of the question
-`<|assistant|>
-new line for the model's answer
-
-You can use the exact syntax <|user|> and <|assistant|> in the lines before and after the question or you can replace the values with equivalent terms, such as User and Assistant.
-
-If you're using version 1, do not include any trailing white spaces after the label, and be sure to add a new line.
-
-Paste this prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.
-
-<|user|>
-Tell me about interest rates
-<|assistant|>
-
-After the model generates an answer, you can ask a follow-up question. The model uses information from the previous question when it generates a response.
-
-<|user|>
-Who sets it?
-<|assistant|>
-
-The model retains information from a previous question when it answers a follow-up question, but it is not optimized to support an extended dialogue.
-
-Note: When you ask a follow-up question, the previous question is submitted again, which adds to the number of tokens that are used.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_75,E5D702E67E93752155510B56A3B2F464E190EBA2," Summarization
-
-Summarization tasks save you time by condensing large amounts of text into a few key pieces of information.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_76,E5D702E67E93752155510B56A3B2F464E190EBA2," Sample 5a: Summarize a meeting transcript
-
-Scenario: Given a meeting transcript, summarize the main points as meeting notes so those notes can be shared with teammates who did not attend the meeting.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_77,E5D702E67E93752155510B56A3B2F464E190EBA2,"Model choice
-Models that are instruction-tuned can generally complete this task with this sample prompt. Suggestions: flan-t5-xxl-11b, flan-ul2-20b, or mpt-7b-instruct2.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_78,E5D702E67E93752155510B56A3B2F464E190EBA2,"Decoding
-Greedy. The model must return the most predictable content based on what's in the prompt; the model cannot be too creative.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_79,E5D702E67E93752155510B56A3B2F464E190EBA2,"Stopping criteria
-The summary might run several sentences, so set the Max tokens parameter to 60.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_80,E5D702E67E93752155510B56A3B2F464E190EBA2,"Prompt text
-Paste this zero-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.
-
-Summarize the following transcript.
-Transcript:
-00:00 [alex] Let's plan the team party!
-00:10 [ali] How about we go out for lunch at the restaurant?
-00:21 [sam] Good idea.
-00:47 [sam] Can we go to a movie too?
-01:04 [alex] Maybe golf?
-01:15 [sam] We could give people an option to do one or the other.
-01:29 [alex] I like this plan. Let's have a party!
-Summary:
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_81,E5D702E67E93752155510B56A3B2F464E190EBA2," Sample 5b: Summarize a meeting transcript
-
-Scenario: Given a meeting transcript, summarize the main points as meeting notes so those notes can be shared with teammates who did not attend the meeting.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_82,E5D702E67E93752155510B56A3B2F464E190EBA2,"Model choice
-With few-shot examples, most models can complete this task well. Try: gpt-neox-20b.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_83,E5D702E67E93752155510B56A3B2F464E190EBA2,"Decoding
-Greedy. The model must return the most predictable content based on what's in the prompt, not be too creative.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_84,E5D702E67E93752155510B56A3B2F464E190EBA2,"Stopping criteria
-
-
-
-* To make sure that the model stops generating text after the summary, specify a stop sequence of two newline characters. To do that, click in the Stop sequence text box, press the Enter key twice, then click Add sequence.
-* Set the Max tokens parameter to 60.
-
-
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_85,E5D702E67E93752155510B56A3B2F464E190EBA2,"Prompt text
-Paste this few-shot prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.
-
-Transcript:
-00:00 [sam] I wanted to share an update on project X today.
-00:15 [sam] Project X will be completed at the end of the week.
-00:30 [erin] That's great!
-00:35 [erin] I heard from customer Y today, and they agreed to buy our product.
-00:45 [alex] Customer Z said they will too.
-01:05 [sam] Great news, all around.
-Summary:
-Sam shared an update that project X will be complete at the end of the week.
-Erin said customer Y will buy our product. And Alex said customer Z will buy
-our product too.
-
-Transcript:
-00:00 [ali] The goal today is to agree on a design solution.
-00:12 [alex] I think we should consider choice 1.
-00:25 [ali] I agree
-00:40 [erin] Choice 2 has the advantage that it will take less time.
-01:03 [alex] Actually, that's a good point.
-01:30 [ali] So, what should we do?
-01:55 [alex] I'm good with choice 2.
-02:20 [erin] Me too.
-02:45 [ali] Done!
-Summary:
-Alex suggested considering choice 1. Erin pointed out choice two will take
-less time. The team agreed with choice 2 for the design solution.
-
-Transcript:
-00:00 [alex] Let's plan the team party!
-00:10 [ali] How about we go out for lunch at the restaurant?
-00:21 [sam] Good idea.
-00:47 [sam] Can we go to a movie too?
-01:04 [alex] Maybe golf?
-01:15 [sam] We could give people an option to do one or the other.
-01:29 [alex] I like this plan. Let's have a party!
-Summary:
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_86,E5D702E67E93752155510B56A3B2F464E190EBA2," Sample 5c: Summarize a meeting transcript
-
-Scenario: Given a meeting transcript, summarize the main points in a bulleted list so that the list can be shared with teammates who did not attend the meeting.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_87,E5D702E67E93752155510B56A3B2F464E190EBA2,"Model choice
-gpt-neox-20b was trained to recognize and handle special characters, such as the newline character, well. This model is a good choice when you want your generated text to be formatted in a specific way with special characters.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_88,E5D702E67E93752155510B56A3B2F464E190EBA2,"Decoding
-Greedy. The model must return the most predictable content based on what's in the prompt; the model cannot be too creative.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_89,E5D702E67E93752155510B56A3B2F464E190EBA2,"Stopping criteria
-
-
-
-* To make sure that the model stops generating text after one list, specify a stop sequence of two newline characters. To do that, click in the Stop sequence text box, press the Enter key twice, then click Add sequence.
-* Set the Max tokens parameter to 60.
-
-
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_90,E5D702E67E93752155510B56A3B2F464E190EBA2,"Set up section
-Paste these headers and examples into the Examples area of the Set up section:
-
-
-
-Table 4. Summarization few-shot examples
-
- Transcript: Summary:
-
- 00:00 [sam] I wanted to share an update on project X today. 00:15 [sam] Project X will be completed at the end of the week. 00:30 [erin] That's great! 00:35 [erin] I heard from customer Y today, and they agreed to buy our product. 00:45 [alex] Customer Z said they will too. 01:05 [sam] Great news, all around. - Sam shared an update that project X will be complete at the end of the week - Erin said customer Y will buy our product - And Alex said customer Z will buy our product too
- 00:00 [ali] The goal today is to agree on a design solution. 00:12 [alex] I think we should consider choice 1. 00:25 [ali] I agree 00:40 [erin] Choice 2 has the advantage that it will take less time. 01:03 [alex] Actually, that's a good point. 01:30 [ali] So, what should we do? 01:55 [alex] I'm good with choice 2. 02:20 [erin] Me too. 02:45 [ali] Done! - Alex suggested considering choice 1 - Erin pointed out choice two will take less time - The team agreed with choice 2 for the design solution
-
-
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_91,E5D702E67E93752155510B56A3B2F464E190EBA2,"Try section
-Paste this message in the Try section:
-
-00:00 [alex] Let's plan the team party!
-00:10 [ali] How about we go out for lunch at the restaurant?
-00:21 [sam] Good idea.
-00:47 [sam] Can we go to a movie too?
-01:04 [alex] Maybe golf?
-01:15 [sam] We could give people an option to do one or the other.
-01:29 [alex] I like this plan. Let's have a party!
-
-Select the model and set parameters, then click Generate to see the result.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_92,E5D702E67E93752155510B56A3B2F464E190EBA2," Code generation and conversion
-
-Foundation models that can generate and convert programmatic code are great resources for developers. They can help developers to brainstorm and troubleshoot programming tasks.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_93,E5D702E67E93752155510B56A3B2F464E190EBA2," Sample 6a: Generate programmatic code from instructions
-
-Scenario: You want to generate code from instructions. Namely, you want to write a function in the Python programming language that returns a sequence of prime numbers that are lower than the number that is passed to the function as a variable.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_94,E5D702E67E93752155510B56A3B2F464E190EBA2,"Model choice
-Models that can generate code, such as starcoder-15.5b, can generally complete this task when a sample prompt is provided.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_95,E5D702E67E93752155510B56A3B2F464E190EBA2,"Decoding
-Greedy. The answer must be a valid code snippet. The model cannot be creative and make up an answer.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_96,E5D702E67E93752155510B56A3B2F464E190EBA2,"Stopping criteria
-To stop the model after it returns a single code snippet, specify as the stop sequence. The Max tokens parameter can be set to 1,000.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_97,E5D702E67E93752155510B56A3B2F464E190EBA2,"Prompt text
-Paste this code snippet into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.
-
-Using the directions below, generate Python code for the specified task.
-
-Input:
- Write a Python function that prints 'Hello World!' string 'n' times.
-
-Output:
-def print_n_times(n):
-for i in range(n):
-print(""Hello World!"")
-
-
-
-Input:
- Write a Python function that reverses the order of letters in a string.
- The function named 'reversed' takes the argument 'my_string', which is a string. It returns the string in reverse order.
-
-Output:
-
-The output contains Python code similar to the following snippet:
-
-def reversed(my_string):
-return my_string[::-1]
-
-Be sure to test the generated code to verify that it works as you expect.
-
-For example, if you run reversed(""good morning""), the result is 'gninrom doog'.
-
-Note: The StarCoder model might generate code that is taken directly from its training data. As a result, generated code might require attribution. You are responsible for ensuring that any generated code that you use is properly attributed, if necessary.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_98,E5D702E67E93752155510B56A3B2F464E190EBA2," Sample 6b: Convert code from one programming language to another
-
-Scenario: You want to convert code from one programming language to another. Namely, you want to convert a code snippet from C++ to Python.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_99,E5D702E67E93752155510B56A3B2F464E190EBA2,"Model choice
-Models that can generate code, such as starcoder-15.5b, can generally complete this task when a sample prompt is provided.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_100,E5D702E67E93752155510B56A3B2F464E190EBA2,"Decoding
-Greedy. The answer must be a valid code snippet. The model cannot be creative and make up an answer.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_101,E5D702E67E93752155510B56A3B2F464E190EBA2,"Stopping criteria
-To stop the model after it returns a single code snippet, specify as the stop sequence. The Max tokens parameter can be set to 300.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_102,E5D702E67E93752155510B56A3B2F464E190EBA2,"Prompt text
-Paste this code snippet into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.
-
-This prompt includes an example input and output pair. The input is C++ code and the output is the equivalent function in Python code.
-
-The C++ code snippet to be converted is included next. It is a function that counts the number of arithmetic progressions with the sum S and common difference of D, where S and D are integer values that are passed as parameters.
-
-The final part of the prompt identifies the language that you want the C++ code snippet to be converted into.
-
-Translate the following code from C++ to Python.
-
-C++:
-include ""bits/stdc++.h""
-using namespace std;
-bool isPerfectSquare(long double x) {
-long double sr = sqrt(x);
-return ((sr - floor(sr)) == 0);
-}
-void checkSunnyNumber(int N) {
-if (isPerfectSquare(N + 1)) {
-cout << ""Yes
-"";
-} else {
-cout << ""No
-"";
-}
-}
-int main() {
-int N = 8;
-checkSunnyNumber(N);
-return 0;
-}
-
-Python:
-from math import
-
-def isPerfectSquare(x):
-sr = sqrt(x)
-return ((sr - floor(sr)) == 0)
-
-def checkSunnyNumber(N):
-if (isPerfectSquare(N + 1)):
-print(""Yes"")
-else:
-print(""No"")
-
-if __name__ == '__main__':
-N = 8
-checkSunnyNumber(N)
-
-
-
-C++:
-include
-using namespace std;
-int countAPs(int S, int D) {
-S = S * 2;
-int answer = 0;
-for (int i = 1; i <= sqrt(S); i++) {
-if (S % i == 0) {
-if (((S / i) - D * i + D) % 2 == 0)
-answer++;
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_103,E5D702E67E93752155510B56A3B2F464E190EBA2,"if ((D * i - (S / i) + D) % 2 == 0)
-answer++;
-}
-}
-return answer;
-}
-int main() {
-int S = 12, D = 1;
-cout << countAPs(S, D);
-return 0;
-}
-
-Python:
-
-The output contains Python code similar to the following snippet:
-
-from math import
-
-def countAPs(S, D):
-S = S * 2
-answer = 0
-for i in range(1, int(sqrt(S)) + 1):
-if (S % i == 0):
-if (((S / i) - D * i + D) % 2 == 0):
-answer += 1
-if ((D * i - (S / i) + D) % 2 == 0):
-answer += 1
-return answer
-
-if __name__ == '__main__':
-S = 12
-D = 1
-print(countAPs(S, D))
-
-The generated Python code functions the same as the C++ function included in the prompt.
-
-Test the generated Python code to verify that it works as you expect.
-
-Remember, the StarCoder model might generate code that is taken directly from its training data. As a result, generated code might require attribution. You are responsible for ensuring that any generated code that you use is properly attributed, if necessary.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_104,E5D702E67E93752155510B56A3B2F464E190EBA2," Dialogue
-
-Dialogue tasks are helpful in customer service scenarios, especially when a chatbot is used to guide customers through a workflow to reach a goal.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_105,E5D702E67E93752155510B56A3B2F464E190EBA2," Sample 7a: Converse in a dialogue
-
-Scenario: Generate dialogue output like a chatbot.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_106,E5D702E67E93752155510B56A3B2F464E190EBA2,"Model choice
-Like other foundation models, granite-13b-chat can be used for multiple tasks. However, it is optimized for carrying on a dialogue.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_107,E5D702E67E93752155510B56A3B2F464E190EBA2,"Decoding
-Greedy. This sample is answering general knowledge, factual questions, so we don't want creative output.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_108,E5D702E67E93752155510B56A3B2F464E190EBA2,"Stopping criteria
-
-
-
-* A helpful feature of the model is the inclusion of a special token that is named END_KEY at the end of each response. When some generative models return a response to the input in fewer tokens than the maximum number allowed, they can repeat patterns from the input. This model prevents such repetition by incorporating a reliable stop sequence for the prompt. Add END_KEY as the stop sequence.
-* Set the Max tokens parameter to 200 so the model can return a complete answer.
-
-
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_109,E5D702E67E93752155510B56A3B2F464E190EBA2,"Prompt text
-The model expects the input to follow a specific pattern.
-
-Start the input with an instruction. For example, the instruction might read as follows:
-
-Participate in a dialogue with various people as an AI assistant. As the Assistant, you are upbeat, professional, and polite. You do your best to understand exactly what the human needs and help them to achieve their goal as best you can. You do not give false or misleading information. If you don't know an answer, you state that you don't know or aren't sure about the right answer. You prioritize caution over usefulness. You do not answer questions that are unsafe, immoral, unethical, or dangerous.
-
-Next, add lines to capture the question and answer pattern with the following syntax:
-
-Human:
-content of the question
-Assistant:
-new line for the model's answer
-
-You can replace the terms Human and Assistant with other terms.
-
-If you're using version 1, do not include any trailing white spaces after the Assistant: label, and be sure to add a new line.
-
-Paste the following prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.
-
-Participate in a dialogue with various people as an AI assistant. As the Assistant, you are upbeat, professional, and polite. You do your best to understand exactly what the human needs and help them to achieve their goal as best you can. You do not give false or misleading information. You prioritize caution over usefulness. You do not answer questions that are unsafe, immoral, unethical, or dangerous.
-
-Human: How does a bill become a law?
-Assistant:
-
-After the initial output is generated, continue the dialogue by asking a follow-up question. For example, if the output describes how a bill becomes a law in the United States, you can ask about how laws are made in other countries.
-
-Human: What about in Canada?
-Assistant:
-
-A few notes about using this sample with the model:
-
-
-
-* The prompt input outlines the chatbot scenario and describes the personality of the AI assistant. The description explains that the assistant should indicate when it doesn't know an answer. It also directs the assistant to avoid discussing unethical topics.
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_110,E5D702E67E93752155510B56A3B2F464E190EBA2,"* The assistant is able to respond to a follow-up question that relies on information from an earlier exchange in the same dialogue.
-* The model expects the input to follow a specific pattern.
-* The generated response from the model is clearly indicated by the keyword END_KEY. You can use this keyword as a stop sequence to help the model generate succinct responses.
-
-
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_111,E5D702E67E93752155510B56A3B2F464E190EBA2," Sample 7b: Converse in a dialogue
-
-Scenario: Generate dialogue output like a chatbot.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_112,E5D702E67E93752155510B56A3B2F464E190EBA2,"Model choice
-Like other foundation models, Llama 2 (in both the 70 billion and 13 billion sizes) can be used for multiple tasks. But both Llama 2 models are optimized for dialogue use cases. The llama-2-70b-chat and llama-2-13b-chat are the only models in watsonx.ai that are fine-tuned for the [INST]<>< >[/INST] prompt format. For more information about this prompt format, see [How to prompt Llama 2](https://huggingface.co/blog/llama2how-to-prompt-llama-2).
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_113,E5D702E67E93752155510B56A3B2F464E190EBA2,"Decoding
-Greedy. This sample is answering general knowledge, factual questions, so we don't want creative output.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_114,E5D702E67E93752155510B56A3B2F464E190EBA2,"Stopping criteria
-Set the Max tokens parameter to 100.
-
-The template for Llama 2 uses an end-of-string (EOS) token () to delimit the end of the agent's response. The model output stops generating text when the EOS token is encountered.
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_115,E5D702E67E93752155510B56A3B2F464E190EBA2,"Prompt text
-Paste this prompt text into the freeform prompt editor in Prompt Lab, select the model, set parameters, and then click Generate to see the result.
-
-[INST] <>
-You are a helpful assistant. Begin each response with the phrase ""Dear user, "". If you are unsure about an answer, truthfully say ""I don't know""< >
-
-I have a question about the Earth.[/INST]
-Dear user, go on.
-[INST]
-
-How many moons are there?
-[/INST]
-
-After the initial output is generated, continue the dialogue to ask about more planets, wrapping your user messages in the same [INST] structure. You don't need to add an token to the start of follow-up questions.
-
-The output looks something like this:
-
-[INST] <>
-You are a helpful assistant. Begin each response with the phrase ""Dear user, "". If you are unsure about an answer, truthfully say ""I don't know""< >
-
-I have a question about the Earth.[/INST]
-Dear user, go on.
-[INST]
-
-How many moons are there?
-[/INST]
-Dear user, the Earth has one natural satellite, commonly referred to as the Moon.
-[INST]
-
-What about Saturn?
-[/INST]
-Dear user, Saturn has a total of 62 confirmed moons, as well as numerous smaller moonlets and ring particles. The largest moon of Saturn is Titan, which is the second-largest moon in the solar system, after Jupiter's moon Ganymede.
-[INST]
-
-And Mars?
-[/INST]
-Dear user, Mars has two small natural satellites, Phobos and Deimos. Both moons are thought to be captured asteroids, and are irregularly shaped and very small, with diameters of about 22 kilometers and 12 kilometers respectively.
-
-Notice a few important points about the output that the model generates as you go:
-
-
-
-"
-E5D702E67E93752155510B56A3B2F464E190EBA2_116,E5D702E67E93752155510B56A3B2F464E190EBA2,"* The so-called system instruction, in the <>< > tags, continues to influence the output at each dialogue turn without having to repeat the system instruction. In other words, the assistant responses continue to be prepended with ""Dear user, "".
-* In true dialogue fashion, the assistant response to the user input ""How many moons are there?"" takes into account the previous user input ""I have a question about the Earth."" and returns the number of moons orbiting Earth.
-* Also in proper dialogue form, the assistant responses continue to follow the topic of the conversation, which is the number of moons. (Otherwise, the generated output to the vague user message ""And Mars?"" could wander off in any direction.)
-* Caution: Newline (carriage-return) characters especially, and spaces to a lesser extent, in the prompt text can have a dramatic impact on the output generated.
-* When you use Llama 2 for chat use cases, follow the recommended prompt template format as closely as possible. Do not use the [INST]<>< >[/INST] prompt format when you use Llama 2 for any other tasks besides chat.
-
-
-
-Parent topic:[Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html)
-"
-38DBE0E16434502696281563802B76F3E38B25D2_0,38DBE0E16434502696281563802B76F3E38B25D2," Saving your work
-
-Prompt engineering involves trial and error. Keep track of your experimentation and save model-and-prompt combinations that generate the output you want.
-
-When you save your work, you can choose to save it as different asset types. Saving your work as an asset makes it possible to share your work with collaborators in the current project.
-
-
-
-Table 1: Asset types
-
- Asset type When to use this asset type What is saved How to retrieve the asset
-
- Prompt template asset When you find a combination of prompt static text, prompt variables, and prompt engineering parameters that generate the results you want from a specific model and want to reuse it. Prompt text, model, prompt engineering parameters, and prompt variables. Note: The output that is generated by the model is not saved. From the Saved prompt templates tab
- Prompt session asset When you want to keep track of the steps involved with your experimentation so you know what you've tried and what you haven't. Prompt text, model, prompt engineering parameters, and model output for up to 500 prompts that are submitted during a prompt engineering session. From the History tab
- Notebook asset When you want to work with models programmatically, but want to start from the Prompt Lab interface for a better prompt engineering experience. Prompt text, model, prompt engineering parameters, and prompt variable names and default values are formatted as Python code and stored as a notebook. From the Assets page of the project
-
-
-
-Each of these asset types is available from the project's Assets page. Project collaborators with the Admin or Editor role can open and work with them. Your prompt template and prompt session assets are locked automatically, but you can unlock them by clicking the lock icon ().
-
-"
-38DBE0E16434502696281563802B76F3E38B25D2_1,38DBE0E16434502696281563802B76F3E38B25D2," Saving your work
-
-To save your prompt engineering work, complete the following steps:
-
-
-
-1. From the header of the prompt editor, click Save work, and then click Save as.
-2. Choose an asset type.
-3. Name the asset, and then optionally add a description.
-4. Choose the task type that best matches your goal.
-5. If you save the prompt as a notebook asset only: Select View in project after saving.
-6. Click Save.
-
-
-
-"
-38DBE0E16434502696281563802B76F3E38B25D2_2,38DBE0E16434502696281563802B76F3E38B25D2," Working with prompts saved in a notebook
-
-When you save your work as a notebook asset, a Python notebook is built.
-
-To work with a prompt notebook asset, complete the following steps:
-
-
-
-1. Open the notebook asset from the Assets tab of your project.
-2. Click the Edit icon () to instantiate the notebook so you can step through the code.
-
-The notebook contains runnable code that manages the following steps for you:
-
-
-
-* Authenticates with the service.
-* Defines a Python class.
-* Defines the input text for the model and declares any prompt variables. You can edit the static prompt text and assign values to prompt variables.
-* Uses the defined class to call the watsonx.ai inferencing API and pass your input to the foundation model.
-* Shows the output that is generated by the foundation model.
-
-
-
-3. Use the notebook as is, or change it to meet the needs of your use case.
-
-The Python code that is generated by using the Prompt Lab executes successfully. You must test and validate any changes that you make to the code.
-
-
-
-"
-38DBE0E16434502696281563802B76F3E38B25D2_3,38DBE0E16434502696281563802B76F3E38B25D2," Working with saved prompt templates
-
-To continue working with a saved prompt, open it from the Saved prompt templates tab of the Prompt Lab.
-
-When you open a saved prompt template, Autosave is on, which means that any changes you make to the prompt will be reflected in the saved prompt template asset. If you want the prompt template that you saved to remain unchanged, click New prompt to start a new prompt.
-
-For more information, see [Building reusable prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-variables.html).
-
-"
-38DBE0E16434502696281563802B76F3E38B25D2_4,38DBE0E16434502696281563802B76F3E38B25D2," Working with saved prompt sessions
-
-To continue working with a saved prompt session, open it from the History tab of the Prompt Lab.
-
-To review previous prompt submissions, you can click a prompt entry from the history to open it in the prompt editor. If you prefer the results from the earlier prompt, you can reset it as your current prompt by clicking Restore. When you restore an earlier prompt, your current prompt session is replaced by the earlier version of the prompt session.
-
-"
-38DBE0E16434502696281563802B76F3E38B25D2_5,38DBE0E16434502696281563802B76F3E38B25D2," Learn more
-
-
-
-* [Security and privacy for foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-security.html)
-
-
-
-Parent topic:[Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html)
-"
-F839CD35991DF790F17239C9C63BFCAE701F3D65_0,F839CD35991DF790F17239C9C63BFCAE701F3D65," Tips for writing foundation model prompts: prompt engineering
-
-Part art, part science, prompt engineering is the process of crafting prompt text to best effect for a given model and parameters. When it comes to prompting foundation models, there isn't just one right answer. There are usually multiple ways to prompt a foundation model for a successful result.
-
-Use the Prompt Lab to experiment with crafting prompts.
-
-
-
-* For help using the prompt editor, see [Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html).
-* Try the samples that are available from the Sample prompts tab.
-* Learn from documented samples. See [Sample prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html).
-
-
-
-As you experiment, remember these tips. The tips in this topic will help you successfully prompt most text-generating foundation models.
-
-"
-F839CD35991DF790F17239C9C63BFCAE701F3D65_1,F839CD35991DF790F17239C9C63BFCAE701F3D65," Tip 1: Always remember that everything is text completion
-
-Your prompt is the text you submit for processing by a foundation model.
-
-The Prompt Lab in IBM watsonx.ai is not a chatbot interface. For most models, simply asking a question or typing an instruction usually won't yield the best results. That's because the model isn't answering your prompt, the model is appending text to it.
-
-This image demonstrates prompt text and generated output:
-
-
-
-* Prompt text: ""I took my dog ""
-* Generated output: ""to the park.""
-
-
-
-
-
-"
-F839CD35991DF790F17239C9C63BFCAE701F3D65_2,F839CD35991DF790F17239C9C63BFCAE701F3D65," Tip 2: Include all the needed prompt components
-
-Effective prompts usually have one or more of the following components: instruction, context, examples, and cue.
-
-"
-F839CD35991DF790F17239C9C63BFCAE701F3D65_3,F839CD35991DF790F17239C9C63BFCAE701F3D65," Instruction
-
-An instruction is an imperative statement that tells the model what to do. For example, if you want the model to list ideas for a dog-walking business, your instruction could be: ""List ideas for starting a dog-walking business:""
-
-"
-F839CD35991DF790F17239C9C63BFCAE701F3D65_4,F839CD35991DF790F17239C9C63BFCAE701F3D65," Context
-
-Including background or contextual information in your prompt can nudge the model output in a desired direction. Specifically, (tokenized) words that appear in your prompt text are more likely to be included in the generated output.
-
-"
-F839CD35991DF790F17239C9C63BFCAE701F3D65_5,F839CD35991DF790F17239C9C63BFCAE701F3D65," Examples
-
-To indicate the format or shape that you want the model response to be, include one or more pairs of example input and corresponding desired output showing the pattern you want the generated text to follow. (Including one example in your prompt is called one-shot prompting, including two or more examples in your prompt is called few-shot prompting, and when your prompt has no examples, that's called zero-shot prompting.)
-
-Note that when you are prompting models that have been fine-tuned, you might not need examples.
-
-"
-F839CD35991DF790F17239C9C63BFCAE701F3D65_6,F839CD35991DF790F17239C9C63BFCAE701F3D65," Cue
-
-A cue is text at the end of the prompt that is likely to start the generated output on a desired path. (Remember, as much as it seems like the model is responding to your prompt, the model is really appending text to your prompt or continuing your prompt.)
-
-"
-F839CD35991DF790F17239C9C63BFCAE701F3D65_7,F839CD35991DF790F17239C9C63BFCAE701F3D65," Tip 3: Include descriptive details
-
-The more guidance, the better. Experiment with including descriptive phrases related to aspects of your ideal result: content, style, and length. Including these details in your prompt can cause a more creative or more complete result to be generated.
-
-For example, you could improve upon the sample instruction given previously:
-
-
-
-* Original: ""List ideas for starting a dog-walking business""
-* Improved: ""List ideas for starting a large, wildly successful dog-walking business""
-
-
-
-"
-F839CD35991DF790F17239C9C63BFCAE701F3D65_8,F839CD35991DF790F17239C9C63BFCAE701F3D65," Example
-
-"
-F839CD35991DF790F17239C9C63BFCAE701F3D65_9,F839CD35991DF790F17239C9C63BFCAE701F3D65," Before
-
-In this image, you can see a prompt with the original, simple instruction. This prompt doesn't produce great results.
-
-
-
-"
-F839CD35991DF790F17239C9C63BFCAE701F3D65_10,F839CD35991DF790F17239C9C63BFCAE701F3D65," After
-
-In this image, you can see all the prompt components: instruction (complete with descriptive details), context, example, and cue. This prompt produces a much better result.
-
-
-
-You can experiment with this prompt in the Prompt Lab yourself:
-
-Model: gpt-neox-20b
-
-Decoding: Sampling
-
-
-
-* Temperature: 0.7
-* Top P: 1
-* Top K: 50
-* Repetition penalty: 1.02
-
-
-
-"
-F839CD35991DF790F17239C9C63BFCAE701F3D65_11,F839CD35991DF790F17239C9C63BFCAE701F3D65,"Stopping criteria:
-
-
-
-* Stop sequence: Two newline characters
-* Min tokens: 0
-* Max tokens: 80
-
-
-
-"
-F839CD35991DF790F17239C9C63BFCAE701F3D65_12,F839CD35991DF790F17239C9C63BFCAE701F3D65,"Prompt text:
-
-Copy this prompt text and paste it into the freeform prompt editor in Prompt Lab, then click Generate to see a result.
-
-With no random seed specified, results will vary each time you submit the prompt.
-
-Based on the following industry research, suggest ideas for starting a large, wildly
-successful dog-walking business.
-
-"
-F839CD35991DF790F17239C9C63BFCAE701F3D65_13,F839CD35991DF790F17239C9C63BFCAE701F3D65,"The most successful dog-walking businesses cater to owners' needs and desires while
-also providing great care to the dogs. For example, owners want flexible hours, a
-shuttle to pick up and drop off dogs at home, and personalized services, such as
-custom meal and exercise plans. Consider too how social media has permeated our lives.
-Web-enabled interaction provide images and video that owners will love to share online,
-"
-F839CD35991DF790F17239C9C63BFCAE701F3D65_14,F839CD35991DF790F17239C9C63BFCAE701F3D65,"Ideas for starting a lemonade business:
-- Set up a lemonade stand
-- Partner with a restaurant
-- Get a celebrity to endorse the lemonade
-
-Ideas for starting a large, wildly successful dog-walking business:
-
-"
-F839CD35991DF790F17239C9C63BFCAE701F3D65_15,F839CD35991DF790F17239C9C63BFCAE701F3D65," Learn more
-
-
-
-* [Sample prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-samples.html)
-* [Avoiding hallucinations](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-hallucinations.html)
-* [Generating accurate output](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-factual-accuracy.html)
-
-
-
-Parent topic:[Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html)
-"
-6049D5AA5DE41309E6281534A464ABD6898A758C_0,6049D5AA5DE41309E6281534A464ABD6898A758C," Building reusable prompts
-
-Prompt engineering to find effective prompts for a model takes time and effort. Stretch the benefits of your work by building prompts that you can reuse and share with others.
-
-A great way to add flexibility to a prompt is to add prompt variables. A prompt variable is a placeholder keyword that you include in the static text of your prompt at creation time and replace with text dynamically at run time.
-
-"
-6049D5AA5DE41309E6281534A464ABD6898A758C_1,6049D5AA5DE41309E6281534A464ABD6898A758C," Using variables to change prompt text dynamically
-
-Variables help you to generalize a prompt so that it can be reused more easily.
-
-For example, a prompt for a generative task might contain the following static text:
-
-Write a story about a dog.
-
-If you replace the text dog with a variable that is named {animal}, you add support for dynamic content to the prompt.
-
-Write a story about a {animal}.
-
-With the variable {animal}, the text can still be used to prompt the model for a story about a dog. But now it can be reused to ask for a story about a cat, a mouse, or another animal, simply by swapping the value that is specified for the {animal} variable.
-
-"
-6049D5AA5DE41309E6281534A464ABD6898A758C_2,6049D5AA5DE41309E6281534A464ABD6898A758C," Creating prompt variables
-
-To create a prompt variable, complete the following steps:
-
-
-
-1. From the Prompt Lab, review the text in your prompt for words or phrases that, when converted to a variable, will make the prompt easier to reuse.
-2. Click the Prompt variables icon () at the start of the page.
-
-The Prompt variables panel is displayed where you can add variable name-and-value pairs.
-3. Click New variable.
-4. Click to add a variable name, tab to the next field, and then add a default value.
-
-The variable name can contain alphanumeric characters or an underscore (_), but cannot begin with a number.
-
-The default value for the variable is a fallback value; it is used every time that the prompt is submitted, unless someone overwrites the default value by specifying a new value for the variable.
-5. Repeat the previous step to add more variables.
-
-The following table shows some examples of the types of variables that you might want to add.
-
-| Variable name | Default value | |---------------|---------------| | country | Ireland | | city | Boston | | project | Project X | | company | IBM |
-6. Replace static text in the prompt with your variables.
-
-Select the word or phrase in the prompt that you want to replace, and then click the Prompt variables icon () within the text box to see a list of available variables. Click the variable that you want to use from the list.
-
-The variable replaces the selected text. It is formatted with the syntax {variable name}, where the variable name is surrounded by braces.
-
-If your static text already contains variables that are formatted with braces, they are ignored unless prompt variables of the same name exist.
-"
-6049D5AA5DE41309E6281534A464ABD6898A758C_3,6049D5AA5DE41309E6281534A464ABD6898A758C,"7. To specify a value for a variable at run time, open the Prompt variables panel, click Preview, and then add a value for the variable.
-
-You can also change the variable value from the edit view of the Prompt variables panel, but the value you specify will become the new default value.
-
-
-
-When you find a set of prompt static text, prompt variables, and prompt engineering parameters that generates the results you want from a model, save the prompt as a prompt template asset. After you save the prompt template asset, you can reuse the prompt or share it with collaborators in the current project. For more information, see [Saving prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-save.html).
-
-"
-6049D5AA5DE41309E6281534A464ABD6898A758C_4,6049D5AA5DE41309E6281534A464ABD6898A758C," Examples of reusing prompts
-
-The following examples help illustrate ways that using prompt variables can add versatility to your prompts.
-
-
-
-* [Thank you note example](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-variables.html?context=cdpaas&locale=enthank-you-example)
-* [Devil's advocate example](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-variables.html?context=cdpaas&locale=endevil-example)
-
-
-
-"
-6049D5AA5DE41309E6281534A464ABD6898A758C_5,6049D5AA5DE41309E6281534A464ABD6898A758C," Thank you note example
-
-Replace static text in the Thank you note generation built-in sample prompt with variables to make the prompt reusable.
-
-To add versatility to a built-in prompt, complete the following steps:
-
-
-
-1. From the Prompt Lab, click Sample prompts to list the built-in sample prompts. From the Generation section, click Thank you note generation.
-
-The input for the built-in sample prompt is added to the prompt editor and the flan-ul2-20b model is selected.
-
-Write a thank you note for attending a workshop.
-
-Attendees: interns
-Topic: codefest, AI
-Tone: energetic
-2. Review the text for words or phrases that make good variable candidates.
-
-In this example, if the following words are replaced, the prompt meaning will change:
-
-
-
-* workshop
-* interns
-* codefest
-* AI
-* energetic
-
-
-
-3. Create a variable to represent each word in the list. Add the current value as the default value for the variable.
-
-| Variable name | Value | |---------------|---------------| | event | workshop | | attendees | interns | | topic1 | codefest | | topic2 | AI | | tone | energetic |
-4. Click Preview to review the variables that you added.
-5. Update the static prompt text to use variables in place of words.
-
-Write a thank you note for attending a {event}.
-
-Attendees: {attendees}
-Topic: {topic1}, {topic2}
-Tone: {tone}
-
-
-
-The original meaning of the prompt is maintained.
-6. Now, change the values of the variables to change the meaning of the prompt.
-
-From the Fill in prompt variables view of the prompt variables panel, add values for the variables.
-
-"
-6049D5AA5DE41309E6281534A464ABD6898A758C_6,6049D5AA5DE41309E6281534A464ABD6898A758C,"| Variable name | Value | |---------------|---------------| | event | human resources presentation | | attendees | expecting parents | | topic1 | resources for new parents | | topic2 | parental leave | | tone | supportive |
-
-You effectively converted the original prompt into the following prompt:
-
-Write a thank you note for attending a human resources presentation.
-
-Attendees: expecting parents
-Topic: resources for new parents, parental leave
-Tone: supportive
-
-Click Generate to see how the model responds.
-7. Swap the values for the variables to reuse the same prompt again to generate thank you notes for usability test attendees.
-
-| Variable name | Value | |---------------|-------| | event | usability test | | attendees | user volunteers | | topic1 | testing out new features | | topic2 | sharing early feedback | | tone | appreciative |
-
-Click Generate to see how the model responds.
-
-
-
-"
-6049D5AA5DE41309E6281534A464ABD6898A758C_7,6049D5AA5DE41309E6281534A464ABD6898A758C," Devil's advocate example
-
-Use prompt variables to reuse effective examples that you devise for a prompt.
-
-You can guide a foundation model to answer in an expected way by adding a few examples that establish a pattern for the model to follow. This kind of prompt is called a few-shot prompt. Inventing good examples for a prompt requires imagination and testing and can be time-consuming. If you successfully create a few-shot prompt that proves to be effective, you can make it reusable by adding prompt variables.
-
-Maybe you want to use the granite-13b-instruct-v1 model to help you consider risks or problems that might arise from an action or plan under consideration.
-
-For example, the prompt might have the following instruction and examples:
-
-You are playing the role of devil's advocate. Argue against the proposed plans. List 3 detailed, unique, compelling reasons why moving forward with the plan would be a bad choice. Consider all types of risks.
-
-Plan we are considering:
-Extend our store hours.
-Three problems with this plan are:
-1. We'll have to pay more for staffing.
-2. Risk of theft increases late at night.
-3. Clerks might not want to work later hours.
-
-Plan we are considering:
-Open a second location for our business.
-Three problems with this plan are:
-1. Managing two locations will be more than twice as time-consuming than managed just one.
-2. Creating a new location doesn't guarantee twice as many customers.
-3. A new location means added real estate, utility, and personnel expenses.
-
-Plan we are considering:
-Refreshing our brand image by creating a new logo.
-Three problems with this plan are:
-
-You can reuse the prompt by completing the following steps:
-
-
-
-1. Replace the text that describes the action that you are considering with a variable.
-
-For example, you can add the following variable:
-
-| Variable name | Default value | |---------------|---------------| | plan | Refreshing our brand image by creating a new logo. |
-2. Replace the static text that defines the plan with the {plan} variable.
-
-"
-6049D5AA5DE41309E6281534A464ABD6898A758C_8,6049D5AA5DE41309E6281534A464ABD6898A758C,"You are playing the role of devil's advocate. Argue against the proposed plans. List 3 detailed, unique, compelling reasons why moving forward with the plan would be a bad choice. Consider all types of risks.
-
-Plan we are considering:
-Extend our store hours.
-Three problems with this plan are:
-1. We'll have to pay more for staffing.
-2. Risk of theft increases late at night.
-3. Clerks might not want to work later hours.
-
-Plan we are considering:
-Open a second location for our business.
-Three problems with this plan are:
-1. Managing two locations will be more than twice as time-consuming than managed just one.
-2. Creating a new location doesn't guarantee twice as many customers.
-3. A new location means added real estate, utility, and personnel expenses.
-
-Plan we are considering:
-{plan}
-Three problems with this plan are:
-
-Now you can use the same prompt to prompt the model to brainstorm about other actions.
-3. Change the text in the {plan} variable to describe a different plan, and then click Generate to send the new input to the model.
-
-
-
-Parent topic:[Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html)
-"
-B1AF301F18E6444DA2842CC71F9AC38505EE5E1F_0,B1AF301F18E6444DA2842CC71F9AC38505EE5E1F," Foundation models Python library
-
-You can prompt foundation models in IBM watsonx.ai programmatically by using the Python library.
-
-The Watson Machine Learning Python library is a publicly available library that you can use to work with Watson Machine Learning services. The Watson Machine Learning service hosts the watsonx.ai foundation models.
-
-"
-B1AF301F18E6444DA2842CC71F9AC38505EE5E1F_1,B1AF301F18E6444DA2842CC71F9AC38505EE5E1F," Using the Python library
-
-After you create a prompt in the Prompt Lab, you can save the prompt as a notebook, and then edit the notebook. Using the generated notebook as a starting point is useful because it handles the initial setup steps, such as getting credentials and the project ID information for you.
-
-If you want to work with the models directly from a notebook, you can do so by using the Watson Machine Learning Python library.
-
-The ibm-watson-machine-learning Python library is publicly available on PyPI from the url: [https://pypi.org/project/ibm-watson-machine-learning/](https://pypi.org/project/ibm-watson-machine-learning/). However, you can install it in your development environment by using the following command:
-
-pip install ibm-watson-machine-learning
-
-If you installed the library before, include the -U parameter to ensure that you have the latest version.
-
-pip install -U ibm-watson-machine-learning
-
-For more information about the available methods for working with foundation models, see [Foundation models Python library](https://ibm.github.io/watson-machine-learning-sdk/foundation_models.html).
-
-You need to take some steps before you can use the Python library:
-
-
-
-* [Setting up credentials](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-credentials.html)
-* [Looking up your project ID](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-python-lib.html?context=cdpaas&locale=enproject-id)
-
-
-
-"
-B1AF301F18E6444DA2842CC71F9AC38505EE5E1F_2,B1AF301F18E6444DA2842CC71F9AC38505EE5E1F," Looking up your project ID
-
-To prompt foundation models in IBM watsonx.ai programmatically, you need to pass the identifier (ID) of a project that has an instance of IBM Watson Machine Learning associated with it.
-
-To get the ID of a project, complete the following steps:
-
-
-
-1. Navigate to the project in the watsonx web console, open the project, and then click the Manage tab.
-2. Copy the project ID from the Details section of the General page.
-
-
-
-"
-B1AF301F18E6444DA2842CC71F9AC38505EE5E1F_3,B1AF301F18E6444DA2842CC71F9AC38505EE5E1F," Examples
-
-The following examples show you how to use the library to perform a few basic tasks in a notebook.
-
-"
-B1AF301F18E6444DA2842CC71F9AC38505EE5E1F_4,B1AF301F18E6444DA2842CC71F9AC38505EE5E1F," Example 1: List available foundation models
-
-You can view [ModelTypes](https://ibm.github.io/watson-machine-learning-sdk/foundation_models.htmlibm_watson_machine_learning.foundation_models.utils.enums.ModelTypes) to see available foundation models.
-
-"
-B1AF301F18E6444DA2842CC71F9AC38505EE5E1F_5,B1AF301F18E6444DA2842CC71F9AC38505EE5E1F,"Python code
-
-from ibm_watson_machine_learning.foundation_models.utils.enums import ModelTypes
-import json
-
-print( json.dumps( ModelTypes._member_names_, indent=2 ) )
-
-"
-B1AF301F18E6444DA2842CC71F9AC38505EE5E1F_6,B1AF301F18E6444DA2842CC71F9AC38505EE5E1F,"Sample output
-
-[
-""FLAN_T5_XXL"",
-""FLAN_UL2"",
-""MT0_XXL"",
-...
-]
-
-"
-B1AF301F18E6444DA2842CC71F9AC38505EE5E1F_7,B1AF301F18E6444DA2842CC71F9AC38505EE5E1F," Example: View details of a foundation model
-
-You can view details, such as a short description and foundation model limits, by using [get_details()](https://ibm.github.io/watson-machine-learning-sdk/foundation_models.htmlibm_watson_machine_learning.foundation_models.Model.get_details).
-
-"
-B1AF301F18E6444DA2842CC71F9AC38505EE5E1F_8,B1AF301F18E6444DA2842CC71F9AC38505EE5E1F,"Python code
-
-from ibm_watson_machine_learning.foundation_models.utils.enums import ModelTypes
-from ibm_watson_machine_learning.foundation_models import Model
-import json
-
-my_credentials = {
-""url"" : ""https://us-south.ml.cloud.ibm.com"",
-""apikey"" : {my-IBM-Cloud-API-key}
-}
-
-model_id = ModelTypes.MPT_7B_INSTRUCT2
-gen_parms = None
-project_id = {my-project-ID}
-space_id = None
-verify = False
-
-model = Model( model_id, my_credentials, gen_parms, project_id, space_id, verify )
-
-model_details = model.get_details()
-
-print( json.dumps( model_details, indent=2 ) )
-
-Note:Replace {my-IBM-Cloud-API-key} and {my-project-ID} with your API key and project ID.
-
-"
-B1AF301F18E6444DA2842CC71F9AC38505EE5E1F_9,B1AF301F18E6444DA2842CC71F9AC38505EE5E1F,"Sample output
-
-{
-""model_id"": ""ibm/mpt-7b-instruct2"",
-""label"": ""mpt-7b-instruct2"",
-""provider"": ""IBM"",
-""source"": ""Hugging Face"",
-""short_description"": ""MPT-7B is a decoder-style transformer pretrained from
-scratch on 1T tokens of English text and code. This model was trained by IBM."",
-...
-}
-
-"
-B1AF301F18E6444DA2842CC71F9AC38505EE5E1F_10,B1AF301F18E6444DA2842CC71F9AC38505EE5E1F," Example: Prompt a foundation model with default parameters
-
-Prompt a foundation model to generate a response.
-
-"
-B1AF301F18E6444DA2842CC71F9AC38505EE5E1F_11,B1AF301F18E6444DA2842CC71F9AC38505EE5E1F,"Python code
-
-from ibm_watson_machine_learning.foundation_models.utils.enums import ModelTypes
-from ibm_watson_machine_learning.foundation_models import Model
-import json
-
-my_credentials = {
-""url"" : ""https://us-south.ml.cloud.ibm.com"",
-""apikey"" : {my-IBM-Cloud-API-key}
-}
-
-model_id = ModelTypes.FLAN_T5_XXL
-gen_parms = None
-project_id = {my-project-ID}
-space_id = None
-verify = False
-
-model = Model( model_id, my_credentials, gen_parms, project_id, space_id, verify )
-
-prompt_txt = ""In today's sales meeting, we ""
-gen_parms_override = None
-
-generated_response = model.generate( prompt_txt, gen_parms_override )
-
-print( json.dumps( generated_response, indent=2 ) )
-
-Note:Replace {my-IBM-Cloud-API-key} and {my-project-ID} with your API key and project ID.
-
-"
-B1AF301F18E6444DA2842CC71F9AC38505EE5E1F_12,B1AF301F18E6444DA2842CC71F9AC38505EE5E1F,"Sample output
-
-{
-""model_id"": ""google/flan-t5-xxl"",
-""created_at"": ""2023-07-27T03:40:17.575Z"",
-""results"": [
-{
-""generated_text"": ""will discuss the new product line."",
-""generated_token_count"": 8,
-""input_token_count"": 10,
-""stop_reason"": ""EOS_TOKEN""
-}
-],
-...
-}
-
-"
-B1AF301F18E6444DA2842CC71F9AC38505EE5E1F_13,B1AF301F18E6444DA2842CC71F9AC38505EE5E1F," Learn more
-
-
-
-* [Credentials for prompting foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-credentials.html)
-
-
-
-Parent topic:[Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-overview.html)
-"
-752D982C2F694FFEE2A312CEA6ADF22C2384D4B2_0,752D982C2F694FFEE2A312CEA6ADF22C2384D4B2," Retrieval-augmented generation
-
-You can use foundation models in IBM watsonx.ai to generate factually accurate output that is grounded in information in a knowledge base by applying the retrieval-augmented generation pattern.
-
-This video provides a visual method to learn the concepts and tasks in this documentation.
-
-"
-752D982C2F694FFEE2A312CEA6ADF22C2384D4B2_1,752D982C2F694FFEE2A312CEA6ADF22C2384D4B2,"Video chapters
-[ 0:08 ] Scenario description
-[ 0:27 ] Overview of pattern
-[ 1:03 ] Knowledge base
-[ 1:22 ] Search component
-[ 1:41 ] Prompt augmented with context
-[ 2:13 ] Generating output
-[ 2:31 ] Full solution
-[ 2:55 ] Considerations for search
-[ 3:58 ] Considerations for prompt text
-[ 5:01 ] Considerations for explainability
-
-"
-752D982C2F694FFEE2A312CEA6ADF22C2384D4B2_2,752D982C2F694FFEE2A312CEA6ADF22C2384D4B2," Providing context in your prompt improves accuracy
-
-Foundation models can generate output that is factually inaccurate for various reasons. One way to improve the accuracy of generated output is to provide the needed facts as context in your prompt text.
-
-"
-752D982C2F694FFEE2A312CEA6ADF22C2384D4B2_3,752D982C2F694FFEE2A312CEA6ADF22C2384D4B2," Example
-
-The following prompt includes context to establish some facts:
-
-Aisha recently painted the kitchen yellow, which is her favorite color.
-
-Aisha's favorite color is
-
-Unless Aisha is a famous person whose favorite color was mentioned in many online articles that are included in common pretraining data sets, without the context at the beginning of the prompt, no foundation model could reliably generate the correct completion of the sentence at the end of the prompt.
-
-If you prompt a model with text that includes fact-filled context, then the output the model generates is more likely to be accurate. For more details, see [Generating factually accurate output](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-factual-accuracy.html).
-
-"
-752D982C2F694FFEE2A312CEA6ADF22C2384D4B2_4,752D982C2F694FFEE2A312CEA6ADF22C2384D4B2," The retrieval-augmented generation pattern
-
-You can scale out the technique of including context in your prompts by using information in a knowledge base.
-
-The following diagram illustrates the retrieval-augmented generation pattern. Although the diagram shows a question-answering example, the same workflow supports other use cases.
-
-
-
-The retrieval-augmented generation pattern involves the following steps:
-
-
-
-1. Search in your knowledge base for content that is related to the user's input.
-2. Pull the most relevant search results into your prompt as context and add an instruction, such as “Answer the following question by using only information from the following passages.”
-3. Only if the foundation model that you're using is not instruction-tuned: Add a few examples that demonstrate the expected input and output format.
-4. Send the combined prompt text to the model to generate output.
-
-
-
-"
-752D982C2F694FFEE2A312CEA6ADF22C2384D4B2_5,752D982C2F694FFEE2A312CEA6ADF22C2384D4B2," The origin of retrieval-augmented generation
-
-The term retrieval-augmented generation (RAG) was introduced in this paper: [Retrieval-augmented generation for knowledge-intensive NLP tasks](https://arxiv.org/abs/2005.11401).
-
-> We build RAG models where the parametric memory is a pre-trained seq2seq transformer, and the non-parametric memory is a dense vector index of Wikipedia, accessed with a pre-trained neural retriever.
-
-In that paper, the term ""RAG models"" refers to a specific implementation of a retriever (a specific query encoder and vector-based document search index) and a generator (a specific pre-trained, generative language model). However, the basic search-and-generate approach can be generalized to use different retriever components and foundation models.
-
-"
-752D982C2F694FFEE2A312CEA6ADF22C2384D4B2_6,752D982C2F694FFEE2A312CEA6ADF22C2384D4B2," Knowledge base
-
-The knowledge base can be any collection of information-containing artifacts, such as:
-
-
-
-* Process information in internal company wiki pages
-* Files in GitHub (in any format: Markdown, plain text, JSON, code)
-* Messages in a collaboration tool
-* Topics in product documentation
-* Text passages in a database like Db2
-* A collection of legal contracts in PDF files
-* Customer support tickets in a content management system
-
-
-
-"
-752D982C2F694FFEE2A312CEA6ADF22C2384D4B2_7,752D982C2F694FFEE2A312CEA6ADF22C2384D4B2," Retriever
-
-The retriever can be any combination of search and content tools that reliably returns relevant content from the knowledge base:
-
-
-
-* Search tools like IBM Watson Discovery
-* Search and content APIs (GitHub has APIs like this, for example)
-* Vector databases (such as chromadb)
-
-
-
-"
-752D982C2F694FFEE2A312CEA6ADF22C2384D4B2_8,752D982C2F694FFEE2A312CEA6ADF22C2384D4B2," Generator
-
-The generator component can use any model in watsonx.ai, whichever one suits your use case, prompt format, and content you are pulling in for context.
-
-"
-752D982C2F694FFEE2A312CEA6ADF22C2384D4B2_9,752D982C2F694FFEE2A312CEA6ADF22C2384D4B2," Examples
-
-The following examples demonstrate how to apply the retrieval-augmented generation pattern.
-
-
-
-Retrieval-augmented generation examples
-
- Example Description Link
-
- Simple introduction Uses a small knowledge base and a simple search component to demonstrate the basic pattern. [Introduction to retrieval-augmented generation](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/fed7cf6b-1c48-4d71-8c04-0fce0e000d43)
- Introduction to RAG with Discovery Contains the steps and code to demonstrate the retrieval-augmented generation pattern in IBM watsonx.ai by using IBM Watson Discovery as the search component. [Simple introduction to retrieval-augmented generation with watsonx.ai and Discovery](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/ba4a9e35-2091-49d3-9364-a1284afab7ec)
- Real-world example The watsonx.ai documentation has a search-and-answer feature that can answer basic what-is questions by using the topics in the documentation as a knowledge base. [Answering watsonx.ai questions using a foundation model](https://ibm.biz/watsonx-llm-search)
- Example with LangChain Contains the steps and code to demonstrate support of retrieval-augumented generation with LangChain in watsonx.ai. It introduces commands for data retrieval, knowledge base building and querying, and model testing. [Use watsonx and LangChain to answer questions by using RAG](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/d3a5f957-a93b-46cd-82c1-c8d37d4f62c6)
-"
-752D982C2F694FFEE2A312CEA6ADF22C2384D4B2_10,752D982C2F694FFEE2A312CEA6ADF22C2384D4B2," Example with LangChain and an Elasticsearch vector database Demonstrates how to use LangChain to apply an embedding model to documents in an Elasticsearch vector database. The notebook then indexes and uses the data store to generate answers to incoming questions. [Use watsonx, Elasticsearch, and LangChain to answer questions (RAG)](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/ebeb9fc0-9844-4838-aff8-1fa1997d0c13?context=wx&audience=wdp)
- Example with the Elasticsearch Python SDK Demonstrates how to use the Elasticsearch Python SDK to apply an embedding model to documents in an Elasticsearch vector database. The notebook then indexes and uses the data store to generate answers to incoming questions. [Use watsonx, and Elasticsearch Python SDK to answer questions (RAG)](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/bdbc8ad4-9c1f-460f-99ee-5c3a1f374fa7?context=wx&audience=wdp)
- Example with LangChain and a SingleStore database Shows you how to apply retrieval-augmented generation to large language models in watsonx by using the SingleStore database. [RAG with SingleStore and watsonx](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/daf645b2-281d-4969-9292-5012f3b18215)
-
-
-
-"
-752D982C2F694FFEE2A312CEA6ADF22C2384D4B2_11,752D982C2F694FFEE2A312CEA6ADF22C2384D4B2," Learn more
-
-Try these tutorials:
-
-
-
-* [Prompt a foundation model by using Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-prompt-lab.html)
-* [Prompt a foundation model with the retrieval-augmented generation pattern](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-fm-notebook.html)
-
-
-
-Parent topic:[Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-overview.html)
-"
-38FB0908B90954D96CEFF54BA975DE832286A0A7_0,38FB0908B90954D96CEFF54BA975DE832286A0A7," Security and privacy for foundation models
-
-Your work with foundation models is secure and private, in the same way that all your work on watsonx is secure and private.
-
-Foundation models that you interact with through watsonx are hosted in IBM Cloud. Your data is not sent to any third-party or open source platforms.
-
-The foundation model prompts that you create and engineer in the Prompt Lab or send by using the API are accessible only by you. Your prompts are used only by you and are submitted only to models you choose. Your prompt text is not accessible or used by IBM or any other person or organization.
-
-You control whether prompts, model choices, and prompt engineering parameter settings are saved. When saved, your data is stored in a dedicated IBM Cloud Object Storage bucket that is associated with your project.
-
-Data that is stored in your project storage bucket is encrypted at rest and in motion. You can delete your stored data at any time.
-
-"
-38FB0908B90954D96CEFF54BA975DE832286A0A7_1,38FB0908B90954D96CEFF54BA975DE832286A0A7," Privacy of text in Prompt Lab during a session
-
-Text that you submit by clicking Generate from the prompt editor in Prompt Lab is reformatted as tokens, and then submitted to the foundation model you choose. The submitted message is encrypted in transit.
-
-Your prompt text is not saved unless you choose to save your work.
-
-Unsaved prompt text is kept in the web page until the page is refreshed, at which time the prompt text is deleted.
-
-"
-38FB0908B90954D96CEFF54BA975DE832286A0A7_2,38FB0908B90954D96CEFF54BA975DE832286A0A7," Privacy and security of saved work
-
-How saved work is managed differs based on the asset type that you choose to save:
-
-
-
-* Prompt template asset: The current prompt text, model, prompt engineering parameters, and any prompt variables are saved as a prompt template asset and stored in the IBM Cloud Object Storage bucket that is associated with your project. Prompt template assets are retained until they are deleted or changed by you. When autosave is on, if you open a saved prompt and change the text, the text in the saved prompt template asset is replaced.
-* Prompt session asset: A prompt session asset includes the prompt input text, model, prompt engineering parameters, and model output. After you create the prompt session asset, prompt information for up to 500 submitted prompts is stored in the project storage bucket where it is retained for 30 days.
-* Notebook asset: Your prompt, model, prompt engineering parameters, and any prompt variables are formatted as Python code and stored as a notebook asset in the project storage bucket.
-
-
-
-Only people with Admin or Editor role access to the project or the project storage bucket can view saved assets. You control who can access your project and its associated Cloud Object Storage bucket.
-
-
-
-* For more information about asset security, see [Data security](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/security-data.html).
-* For more information about managing project access, see [Project collaborators](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/collaborate.html)
-
-
-
-"
-38FB0908B90954D96CEFF54BA975DE832286A0A7_3,38FB0908B90954D96CEFF54BA975DE832286A0A7," Logging and text in the Prompt Lab
-
-Nothing that you add to the prompt editor or submit to a model from the Prompt Lab or by using the API is logged by IBM. Messages that are generated by foundation models and returned to the Prompt Lab also are not logged.
-
-"
-38FB0908B90954D96CEFF54BA975DE832286A0A7_4,38FB0908B90954D96CEFF54BA975DE832286A0A7," Ownership of your content and foundation model output
-
-Content that you upload into watsonx is yours.
-
-IBM does not use the content that you upload to watsonx or the output generated by a foundation model to further train or improve any IBM developed models.
-
-IBM does not claim to have any ownership rights to any foundation model outputs. You remain solely responsible for your content and the output of any foundation model.
-
-"
-38FB0908B90954D96CEFF54BA975DE832286A0A7_5,38FB0908B90954D96CEFF54BA975DE832286A0A7," Learn more
-
-
-
-* [Watsonx terms](https://www.ibm.com/support/customer/csol/terms/?id=i126-9640&lc=endetail-document)
-* [IBM Watson Machine Learning terms](http://www.ibm.com/support/customer/csol/terms/?id=i126-6883)
-* [IBM Watson Studio terms](https://www.ibm.com/support/customer/csol/terms/?id=i126-7747)
-
-
-
-Parent topic:[Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-overview.html)
-"
-B193A2795BDEF17A5D204CDD18188A767E2FE7B7_0,B193A2795BDEF17A5D204CDD18188A767E2FE7B7," Tokens and tokenization
-
-A token is a collection of characters that has semantic meaning for a model. Tokenization is the process of converting the words in your prompt into tokens.
-
-You can monitor foundation model token usage in a project on the Environments page on the Resource usage tab.
-
-"
-B193A2795BDEF17A5D204CDD18188A767E2FE7B7_1,B193A2795BDEF17A5D204CDD18188A767E2FE7B7," Converting words to tokens and back again
-
-Prompt text is converted to tokens before being processed by foundation models.
-
-The correlation between words and tokens is complex:
-
-
-
-* Sometimes a single word is broken into multiple tokens
-* The same word might be broken into a different number of tokens, depending on context (such as: where the word appears, or surrounding words)
-* Spaces, newline characters, and punctuation are sometimes included in tokens and sometimes not
-* The way words are broken into tokens varies from language to language
-* The way words are broken into tokens varies from model to model
-
-
-
-For a rough idea, a sentence that has 10 words could be 15 to 20 tokens.
-
-The raw output from a model is also tokens. In the Prompt Lab in IBM watsonx.ai, the output tokens from the model are converted to words to be displayed in the prompt editor.
-
-"
-B193A2795BDEF17A5D204CDD18188A767E2FE7B7_2,B193A2795BDEF17A5D204CDD18188A767E2FE7B7," Example
-
-The following image shows how this sample input might be tokenized:
-
-> Tomatoes are one of the most popular plants for vegetable gardens. Tip for success: If you select varieties that are resistant to disease and pests, growing tomatoes can be quite easy. For experienced gardeners looking for a challenge, there are endless heirloom and specialty varieties to cultivate. Tomato plants come in a range of sizes.
-
-
-
-Notice a few interesting points:
-
-
-
-* Some words are broken into multiple tokens and some are not
-* The word ""Tomatoes"" is broken into multiple tokens at the beginning, but later ""tomatoes"" is all one token
-* Spaces are sometimes included at the beginning of a word-token and sometimes spaces are a token all by themselves
-* Punctuation marks are tokens
-
-
-
-"
-B193A2795BDEF17A5D204CDD18188A767E2FE7B7_3,B193A2795BDEF17A5D204CDD18188A767E2FE7B7," Token limits
-
-Every model has an upper limit to the number of tokens in the input prompt plus the number of tokens in the generated output from the model (sometimes called context window length, context window, context length, or maximum sequence length.) In the Prompt Lab, an informational message shows how many tokens are used in a given prompt submission and the resulting generated output.
-
-In the Prompt Lab, you use the Max tokens parameter to specify an upper limit on the number of output tokens for the model to generate. The maximum number of tokens that are allowed in the output differs by model. For more information, see the Maximum tokens information in [Supported foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-models.html).
-
-Parent topic:[Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-overview.html)
-"
-96597F608C26E68BFC4BDCA45061400D63793523_0,96597F608C26E68BFC4BDCA45061400D63793523," Data formats for tuning foundation models
-
-Prepare a set of prompt examples to use to tune the model. The examples must contain the type of input that the model will need to process at run time and the appropriate output for the model to generate in response.
-
-You can add one file as training data. The maximum file size that is allowed is 200 MB.
-
-Prompt input-and-output example pairs are sometimes also referred to as samples or records.
-
-Follow these guidelines when you create your training data:
-
-
-
-* Add 100 to 1,000 labeled prompt examples to a file. Between 50 to 10,000 examples are allowed.
-* Use one of the following formats:
-
-
-
-* JavaScript Object Notation (JSON)
-* JSON Lines (JSONL) format
-
-
-
-* Each example must include one input and output pair.
-* The language of the training data must be English.
-* If the input or output text includes quotation marks, escape each quotation mark with a backslash(). For example, He said, ""Yes."".
-* To represent a carriage return or line break, you can use a backslash followed by n (n) to represent the new line. For example, ...end of paragraph.nStart of new paragraph.
-
-
-
-You can control the number of tokens from the input and output that are used during training. If an input or output example from the training data is longer than the specified limit, it will be truncated. Only the allowed maximum number of tokens will be used by the experiment. For more information, see [Controlling the number of tokens used](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-tune.htmltuning-tokens).
-
-How tokens are counted differs by model, which makes the number of tokens difficult to estimate. For language-based foundation models, you can think of 256 tokens as about 130—170 words and 128 tokens as about 65—85 words. To learn more about tokens, see [Tokens and tokenization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tokens.html).
-
-If you are using the model to classify data, follow these extra guidelines:
-
-
-
-"
-96597F608C26E68BFC4BDCA45061400D63793523_1,96597F608C26E68BFC4BDCA45061400D63793523,"* Try to limit the number of class labels to 10 or fewer.
-* Include an equal number of examples of each class type.
-
-
-
-You can use the Prompt Lab to craft examples for the training data. For more information, see [Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html).
-
-"
-96597F608C26E68BFC4BDCA45061400D63793523_2,96597F608C26E68BFC4BDCA45061400D63793523," JSON example
-
-The following example shows an excerpt from a training data file with labeled prompts for a classification task in JSON format.
-
-{
-[
-{
-""input"":""Message: When I try to log in, I get an error."",
-""output"":""Class name: Problem""
-}
-{
-""input"":""Message: Where can I find the plan prices?"",
-""output"":""Class name: Question""
-}
-{
-""input"":""Message: What is the difference between trial and paygo?"",
-""output"":""Class name: Question""
-}
-{
-""input"":""Message: The registration page crashed, and now I can't create a new account."",
-""output"":""Class name: Problem""
-}
-{
-""input"":""Message: What regions are supported?"",
-""output"":""Class name: Question""
-}
-{
-""input"":""Message: I can't remember my password."",
-""output"":""Class name: Problem""
-}
-{
-""input"":""Message: I'm having trouble registering for a new account."",
-""output"":""Classname: Problem""
-}
-{
-""input"":""Message: A teammate shared a service instance with me, but I can't access it. What's wrong?"",
-""output"":""Class name: Problem""
-}
-{
-""input"":""Message: What extra privileges does an administrator have?"",
-""output"":""Class name: Question""
-}
-{
-""input"":""Message: Can I create a service instance for data in a language other than English?"",
-""output"":""Class name: Question""
-}
-]
-}
-
-"
-96597F608C26E68BFC4BDCA45061400D63793523_3,96597F608C26E68BFC4BDCA45061400D63793523," JSONL example
-
-The following example shows an excerpt from a training data file with labeled prompts for a classification task in JSONL format.
-
-{""input"":""Message: When I try to log in, I get an error."",""output"":""Class name: Problem""}
-{""input"":""Message: Where can I find the plan prices?"",""output"":""Class name: Question""}
-{""input"":""Message: What is the difference between trial and paygo?"",""output"":""Class name: Question""}
-{""input"":""Message: The registration page crashed, and now I can't create a new account."",""output"":""Class name: Problem""}
-{""input"":""Message: What regions are supported?"",""output"":""Class name: Question""}
-{""input"":""Message: I can't remember my password."",""output"":""Class name: Problem""}
-{""input"":""Message: I'm having trouble registering for a new account."",""output"":""Classname: Problem""}
-{""input"":""Message: A teammate shared a service instance with me, but I can't access it. What's wrong?"",""output"":""Class name: Problem""}
-{""input"":""Message: What extra privileges does an administrator have?"",""output"":""Class name: Question""}
-{""input"":""Message: Can I create a service instance for data in a language other than English?"",""output"":""Class name: Question""}
-
-Parent topic:[Tuning a model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-tune.html)
-"
-FC8DBF139A485E98914CBB73B8BA684B283AE983_0,FC8DBF139A485E98914CBB73B8BA684B283AE983," Deploying a tuned foundation model
-
-Deploy a tuned model so you can add it to a business workflow and start to use foundation models in a meaningful way.
-
-"
-FC8DBF139A485E98914CBB73B8BA684B283AE983_1,FC8DBF139A485E98914CBB73B8BA684B283AE983," Before you begin
-
-The tuning experiment that you used to tune the foundation model must be finished. For more information, see [Tuning a foundation model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-tune.html).
-
-"
-FC8DBF139A485E98914CBB73B8BA684B283AE983_2,FC8DBF139A485E98914CBB73B8BA684B283AE983," Deploy a tuned model
-
-To deploy a tuned model, complete the following steps:
-
-
-
-1. From the navigation menu, expand Projects, and then click All projects.
-2. Click to open your project.
-3. From the Assets tab, click the Experiments asset type.
-4. Click to open the tuning experiment for the model you want to deploy.
-5. From the Tuned models list, find the completed tuning experiment, and then click New deployment.
-6. Name the tuned model.
-
-The name of the tuning experiment is used as the tuned model name if you don't change it. The name has a number after it in parentheses, which counts the deployments. The number starts at one and is incremented by one each time you deploy this tuning experiment.
-7. Optional: Add a description and tags.
-8. In the Target deployment space field, choose a deployment space.
-
-The deployment space must be associated with a machine learning instance that is in the same account as the project where the tuned model was created.
-
-If you don't have a deployment space, choose Create a new deployment space, and then follow the steps in [Creating deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-create.html).
-
-For more information, see [What is a deployment space?](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-deploy.html?context=cdpaas&locale=endeployment-space)
-9. In the Deployment serving name field, add a label for the deployment.
-
-The serving name is used in the URL for the API endpoint that identifies your deployment. Adding a name is helpful because the human-readable name that you add replaces a long, system-generated ID that is assigned otherwise.
-
-The serving name also abstracts the deployment from its service instance details. Applications refer to this name that allows for the underlying service instance to be changed without impacting users.
-
-"
-FC8DBF139A485E98914CBB73B8BA684B283AE983_3,FC8DBF139A485E98914CBB73B8BA684B283AE983,"The name can have up to 36 characters. The supported characters are [a-z,0-9,_].
-
-The name must be unique across the IBM Cloud region. You might be prompted to change the serving name if the name you choose is already in use.
-10. Tip: Select View deployment in deployment space after creating. Otherwise, you need to take more steps to find your deployed model.
-11. Click Deploy.
-
-
-
-After the tuned model is promoted to the deployment space and deployed, a copy of the tuned model is stored in your project as a model asset.
-
-"
-FC8DBF139A485E98914CBB73B8BA684B283AE983_4,FC8DBF139A485E98914CBB73B8BA684B283AE983," What is a deployment space?
-
-When you create a new deployment, a tuned model is promoted to a deployment space, and then deployed. A deployment space is separate from the project where you create the asset. A deployment space is associated with the following services that it uses to deploy assets:
-
-
-
-* Watson Machine Learning: A product with tools and services you can use to build, train, and deploy machine learning models. This service hosts your turned model.
-* IBM Cloud Object Storage: A secure platform for storing structured and unstructured data. Your deployed model asset is stored in a Cloud Object Storage bucket that is associated with your project.
-
-
-
-For more information, see [Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html).
-
-"
-FC8DBF139A485E98914CBB73B8BA684B283AE983_5,FC8DBF139A485E98914CBB73B8BA684B283AE983," Testing the deployed model
-
-The true test of your tuned model is how it responds to input that follows tuned-for patterns.
-
-You can test the tuned model from one of the following pages:
-
-
-
-* Prompt Lab: A tool with an intuitive user interface for prompting foundation models. You can customize the prompt parameters for each input. You can also save the prompt as a notebook so you can interact with it programmatically.
-* Deployment space: Useful when you want to test your model programmatically. From the API Reference tab, you can find information about the available endpoints and code examples. You can also submit input as text and choose to return the output or in a stream, as the output is generated. However, you cannot change the prompt parameters for the input text.
-
-
-
-To test your tuned model, complete the following steps:
-
-
-
-1. From the navigation menu, select Deployments.
-2. Click the name of the deployment space where you deployed the tuned model.
-3. Click the name of your deployed model.
-4. Follow the appropriate steps based on where you want to test the tuned model:
-
-
-
-* From Prompt Lab:
-
-
-
-1. Click Open in Prompt Lab, and then choose the project where you want to work with the model.
-
-Prompt Lab opens and the tuned model that you deployed is selected from the Model field.
-2. In the Try section, add a prompt to the Input field that follows the prompt pattern that your tuned model is trained to recognize, and then click Generate.
-
-
-
-For more information about how to use the prompt editor, see [Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html).
-* From the deployment space:
-
-
-
-1. Click the Test tab.
-2. In the Input data field, add a prompt that follows the prompt pattern that your tuned model is trained to recognize, and then click Generate.
-
-You can click View parameter settings to see the prompt parameters that are applied to the model by default. To change the prompt parameters, you must go to the Prompt Lab.
-
-
-
-
-
-
-
-"
-FC8DBF139A485E98914CBB73B8BA684B283AE983_6,FC8DBF139A485E98914CBB73B8BA684B283AE983," Learn more
-
-
-
-* [Tuning a foundation model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-tune.html)
-* [Security and privacy for foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-security.html)
-
-
-
-Parent topic:[Deploying foundation model assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-found-assets.html)
-"
-15A014C514B00FF78C689585F393E21BAE922DB2_0,15A014C514B00FF78C689585F393E21BAE922DB2," Methods for tuning foundation models
-
-Learn more about different tuning methods and how they work.
-
-Models can be tuned in the following ways:
-
-
-
-* Fine-tuning: Changes the parameters of the underlying foundation model to guide the model to generate output that is optimized for a task.
-
-Note: You currently cannot fine-tune models in Tuning Studio.
-* Prompt-tuning: Adjusts the content of the prompt that is passed to the model to guide the model to generate output that matches a pattern you specify. The underlying foundation model and its parameters are not edited. Only the prompt input is altered.
-
-When you prompt-tune a model, the underlying foundation model can be used to address different business needs without being retrained each time. As a result, you reduce computational needs and inference costs.
-
-
-
-"
-15A014C514B00FF78C689585F393E21BAE922DB2_1,15A014C514B00FF78C689585F393E21BAE922DB2," How prompt-tuning works
-
-Foundation models are sensitive to the input that you give them. Your input, or how you prompt the model, can introduce context that the model will use to tailor its generated output. Prompt engineering to find the right prompt often works well. However, it can be time-consuming, error-prone, and its effectiveness can be restricted by the context window length that is allowed by the underlying model.
-
-Prompt-tuning a model in the Tuning Studio applies machine learning to the task of prompt engineering. Instead of adding words to the input itself, prompt-tuning is a method for finding a sequence of values that, when added as a prefix to the input text, improve the model's ability to generate the output you want. This sequence of values is called a prompt vector.
-
-Normally, words in the prompt are vectorized by the model. Vectorization is the process of converting text to tokens, and then to numbers defined by the model's tokenizer to identify the tokens. Lastly, the token IDs are encoded, meaning they are converted into a vector representation, which is the input format that is expected by the embedding layer of the model. Prompt-tuning bypasses the model's text-vectorization process and instead crafts a prompt vector directly. This changeable prompt vector is concatenated to the vectorized input text and the two are passed as one input to the embedding layer of the model. Values from this crafted prompt vector affect the word embedding weights that are set by the model and influence the words that the model chooses to add to the output.
-
-To find the best values for the prompt vector, you run a tuning experiment. You demonstrate the type of output that you want for a corresponding input by providing the model with input and output example pairs in training data. With each training run of the experiment, the generated output is compared to the training data output. Based on what it learns from differences between the two, the experiment adjusts the values in the prompt vector. After many runs through the training data, the model finds the prompt vector that works best.
-
-"
-15A014C514B00FF78C689585F393E21BAE922DB2_2,15A014C514B00FF78C689585F393E21BAE922DB2,"You can choose to start the training process by providing text that is vectorized by the experiment. Or you can let the experiment use random values in the prompt vector. Either way, unless the initial values are exactly right, they will be changed repeatedly as part of the training process. Providing your own initialization text can help the experiment reach a good result more quickly.
-
-The result of the experiment is a tuned version of the underlying model. You submit input to the tuned model for inferencing and the model generates output that follows the tuned-for pattern.
-
-For more information about this tuning method, read the research paper named [The Power of Scale for Parameter-Efficient Prompt Tuning](https://arxiv.org/abs/2104.08691).
-
-"
-15A014C514B00FF78C689585F393E21BAE922DB2_3,15A014C514B00FF78C689585F393E21BAE922DB2," Learn more
-
-
-
-* [Tuning parameters](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-parameters.html)
-
-
-
-Parent topic:[Tuning Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-studio.html)
-"
-51747F17F413F1F34CFD73D170DE392D874D03DD_0,51747F17F413F1F34CFD73D170DE392D874D03DD," Parameters for tuning foundation models
-
-Tuning parameters configure the tuning experiments that you use to tune the model.
-
-During the experiment, the tuning model repeatedly adjusts the structure of the prompt so that its predictions can get better over time.
-
-The following diagram illustrates the steps that occur during a tuning training experiment run. The parts of the experiment flow that you can configure are highlighted. These decision points correspond with experiment tuning parameters that you control.
-
-
-
-The diagram shows the following steps of the experiment:
-
-
-
-1. Starts from the initialization method that you choose to use to initialize the prompt.
-
-If the initialization method parameter is set to text, then you must add the initialization text.
-2. If specified, tokenizes the initialization text and converts it into a prompt vector.
-3. Reads the training data, tokenizes it, and converts it into batches.
-
-The size of the batches is determined by the batch size parameter.
-4. Sends input from the examples in the batch to the foundation model for the model to process and generate output.
-5. Compares the model's output to the output from the training data that corresponds to the training data input that was submitted. Then, computes the loss gradient, which is the difference between the predicted output and the actual output from the training data.
-
-At some point, the experiment adjusts the prompt vector that is added to the input based on the performance of the model. When this adjustment occurs depends on how the Accumulation steps parameter is configured.
-6. Adjustments are applied to the prompt vector that was initialized in Step 2. The degree to which the vector is changed is controlled by the Learning rate parameter. The edited prompt vector is added as a prefix to the input from the next example in the training data, and is submitted to the model as input.
-7. The process repeats until all of the examples in all of the batches are processed.
-8. The entire set of batches are processed again as many times as is specified in the Number of epochs parameter.
-
-
-
-"
-51747F17F413F1F34CFD73D170DE392D874D03DD_1,51747F17F413F1F34CFD73D170DE392D874D03DD,"Note: No layer of the base foundation model is changed during this process.
-
-"
-51747F17F413F1F34CFD73D170DE392D874D03DD_2,51747F17F413F1F34CFD73D170DE392D874D03DD," Parameter details
-
-The parameters that you change when you tune a model are related to the tuning experiment, not to the underlying foundation model.
-
-
-
-Table 1: Tuning parameters
-
- Parameter name Value options Default value Learn more
-
- Initialization method Random, Text Random [Initializing prompt tuning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-parameters.html?context=cdpaas&locale=eninitialize)
- Initialization text None None [Initializing prompt tuning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-parameters.html?context=cdpaas&locale=eninitialize)
- Batch size 1 - 16 16 [Segmenting the training data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-parameters.html?context=cdpaas&locale=ensegment)
- Accumulation steps 1 - 128 16 [Segmenting the training data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-parameters.html?context=cdpaas&locale=ensegment)
- Learning rate 0.01 - 0.5 0.3 [Managing the learning rate](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-parameters.html?context=cdpaas&locale=enlearning-rate)
- Number of epochs (training cycles) 1 - 50 20 [Choosing the number of training runs to complete](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-parameters.html?context=cdpaas&locale=enruns)
-
-
-
-"
-51747F17F413F1F34CFD73D170DE392D874D03DD_3,51747F17F413F1F34CFD73D170DE392D874D03DD," Segmenting the training data
-
-When an experiment runs, the experiment first breaks the training data into smaller batches, and then trains on one batch at a time. Each batch must fit in GPU memory to be processed. To reduce the amount of GPU memory that is needed, you can configure the tuning experiment to postpone making adjustments until more than one batch is processed. Tuning runs on a batch and its performance metrics are calculated, but the prompt vector isn't changed. Instead, the performance information is collected over some number of batches before the cumulative performance metrics are evaluated.
-
-Use the following parameters to control how the training data is segmented:
-
-Batch size Number of labeled examples (also known as samples) to process at one time.
-
-For example, for a data set with 1,000 examples and a batch size of 10, the data set is divided into 100 batches of 10 examples each.
-
-If the training data set is small, specify a smaller batch size to ensure that each batch has enough examples in it.
-
-Accumulation steps: Number of batches to process before the prompt vector is adjusted.
-
-For example, if the data set is divided into 100 batches and you set the accumulation steps value to 10, then the prompt vector is adjusted 10 times instead of 100 times.
-
-"
-51747F17F413F1F34CFD73D170DE392D874D03DD_4,51747F17F413F1F34CFD73D170DE392D874D03DD," Initializing prompt tuning
-
-When you create an experiment, you can choose whether to specify your own text to serve as the initial prompt vector or let the experiment generate it for you. These new tokens start the training process either in random positions, or based on the embedding of a vocabulary or instruction that you specify in text. Studies show that as the size of the underlying model grows beyond 10 billion parameters, the initialization method that is used becomes less important.
-
-The choice that you make when you create the tuning experiment customizes how the prompt is initialized.
-
-Initialization method: Choose a method from the following options:
-
-
-
-* Text: The Prompt Tuning method is used where you specify the initialization text of the prompt yourself.
-* Random: The Prompt Tuning method is used that allows the experiment to add values that are chosen at random to include with the prompt.
-
-
-
-Initialization text: The text that you want to add. Specify a task description or instructions similar to what you use for zero-shot prompting.
-
-"
-51747F17F413F1F34CFD73D170DE392D874D03DD_5,51747F17F413F1F34CFD73D170DE392D874D03DD," Managing the learning rate
-
-The learning rate parameter determines how much to change the prompt vector when the it is adjusted. The higher the number, the greater the change to the vector.
-
-"
-51747F17F413F1F34CFD73D170DE392D874D03DD_6,51747F17F413F1F34CFD73D170DE392D874D03DD," Choosing the number of training runs to complete
-
-The Number of epochs parameter specifies the number of times to cycle through the training data.
-
-For example, with a batch size of 10 and a data set with 1,000 examples, one epoch must process 100 batches and update the prompt vector 100 times. If you set the number of epochs to 20, the model is passed through the data set 20 times, which means it processes a total of 2,000 batches during the tuning process.
-
-The higher the number of epochs and bigger your training data, the longer it takes to tune a model.
-
-"
-51747F17F413F1F34CFD73D170DE392D874D03DD_7,51747F17F413F1F34CFD73D170DE392D874D03DD," Learn more
-
-
-
-* [Data formats](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-data.html)
-
-
-
-Parent topic:[Tuning a model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-tune.html)
-"
-8745FB7BF19F0E2B0A78C3CD43AA4BF79A25DBCE_0,8745FB7BF19F0E2B0A78C3CD43AA4BF79A25DBCE," Tuning Studio
-
-Tune a foundation model with the Tuning Studio to guide an AI foundation model to return useful output.
-
-Required permissions : To run training experiments, you must have the Admin or Editor role in a project.
-
-: The Tuning Studio is not available with all plans or in all data centers. See [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html) and [Regional availability for services and features](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/regional-datactr.html).
-
-Data format : Tabular: JSON, JSONL. For details, see [Data formats](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-data.html).
-
-Note: You can use the same training data file with one or more tuning experiments.
-
-Data size : 50 to 10,000 input and output example pairs. The maximum file size is 200 MB.
-
-You use the Tuning Studio to create a tuned version of an existing foundation model.
-
-This video provides a visual method to learn the concepts and tasks in this documentation.
-
-Foundation models are AI models that are pretrained on terabytes of data from across the internet and other public resources. They are unrivaled in their ability to predict the next best word and generate language. While language-generation can be useful for brainstorming and spurring creativity, it is less useful for achieving concrete tasks. Model tuning, and other techniques, such as retrieval-augmented generation, help you to use foundation models in meaningful ways for your business.
-
-With the Tuning Studio, you can tune a smaller foundation model to improve its performance on natural language processing tasks such as classification, summarization, and generation. Tuning can help a smaller foundation model achieve results comparable to larger models in the same model family. By tuning and deploying the smaller model, you can reduce long-term inference costs.
-
-"
-8745FB7BF19F0E2B0A78C3CD43AA4BF79A25DBCE_1,8745FB7BF19F0E2B0A78C3CD43AA4BF79A25DBCE,"Much like prompt engineering, tuning a foundation model helps you to influence the content and format of the foundation model output. Knowing what to expect from a foundation model is essential if you want to plug the step of inferencing a foundation model into a business workflow.
-
-The following diagram illustrates how tuning a foundation model can help you guide the model to generate useful output. You provide labeled data that illustrates the format and type of output that you want the model to return, which helps the foundation model to follow the established pattern.
-
-
-
-You can tune a foundation model to optimize the model's ability to do many things, including:
-
-
-
-* Generate new text in a specific style
-* Generate text that summarizes or extracts information in a certain way
-* Classify text
-
-
-
-To learn more about when tuning a model is the right approach, see [When to tune a foundation model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-when.html).
-
-"
-8745FB7BF19F0E2B0A78C3CD43AA4BF79A25DBCE_2,8745FB7BF19F0E2B0A78C3CD43AA4BF79A25DBCE," Workflow
-
-Tuning a model involves the following tasks:
-
-
-
-1. Engineer prompts that work well with the model you want to use.
-
-
-
-* Find the largest foundation model that works best for the task.
-* Experiment until you understand which prompt formats show the most potential for getting good results from the model.
-
-
-
-Tuning doesn't mean you can skip prompt engineering altogether. Experimentation is necessary to find the right foundation model for your use case. Tuning means you can do the work of prompt engineering once and benefit from it again and again.
-
-You can use the Prompt Lab to experiment with prompt engineering. For help, see [Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html).
-2. Create training data to use for model tuning.
-3. Create a tuning experiment to tune the model.
-4. Evaluate the tuned model.
-
-If necessary, change the training data or the experiment parameters and run more experiments until you're satisfied with the results.
-5. Deploy the tuned model.
-
-
-
-"
-8745FB7BF19F0E2B0A78C3CD43AA4BF79A25DBCE_3,8745FB7BF19F0E2B0A78C3CD43AA4BF79A25DBCE," Learn more
-
-
-
-* [When to tune](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-when.html)
-* [Methods for tuning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-methods.html)
-* [Tuning a model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-tune.html)
-
-
-
-
-
-* [Quick start: Tune a foundation model](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html)
-* [Sample notebook: Tune a model to classify CFPB documents in watsonx](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/bf57e8896f3e50c638b5a378780f7502)
-
-
-
-Parent topic:[Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-overview.html)
-"
-2372AEEEB4DA6A3E94273CB46224ED09CD84CD9E_0,2372AEEEB4DA6A3E94273CB46224ED09CD84CD9E," Tuning a foundation model
-
-To tune a foundation model, create a tuning experiment that guides the foundation model to return the output you want in the format you want.
-
-"
-2372AEEEB4DA6A3E94273CB46224ED09CD84CD9E_1,2372AEEEB4DA6A3E94273CB46224ED09CD84CD9E," Requirements
-
-If you signed up for watsonx.ai and specified the Dallas region, all requirements are met and you're ready to use the Tuning Studio.
-
-The Tuning Studio is available from a project that is created for you automatically when you sign up for watsonx.ai. The project is named sandbox and you can use it to get started with testing and customizing foundation models.
-
-"
-2372AEEEB4DA6A3E94273CB46224ED09CD84CD9E_2,2372AEEEB4DA6A3E94273CB46224ED09CD84CD9E," Before you begin
-
-Experiment with the Prompt Lab to determine the best model to use for your task. Craft and try prompts until you find the input and output patterns that generate the best results from the model. For more information, see [Prompt Lab](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-lab.html).
-
-Create a set of example prompts that follow the patterns that generate the best results based on your prompt engineering work. For more information, see [Data formats](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-data.html).
-
-"
-2372AEEEB4DA6A3E94273CB46224ED09CD84CD9E_3,2372AEEEB4DA6A3E94273CB46224ED09CD84CD9E," Tune a model
-
-
-
-1. Click the Tune a foundation model with labeled data task.
-2. Name the tuning experiment.
-3. Optional: Add a description and tags. Add a description as a reminder to yourself and to help collaborators understand the goal of the tuned model. Assigning a tag gives you a way to filter your tuning assets later to show only the assets associated with a tag.
-4. Click Create.
-5. The flan-t5-xl foundation model is selected for you to tune.
-
-To read more about the model, click the Preview icon () that is displayed from the drop-down list.
-
-For more information, see the [model card](https://huggingface.co/google/flan-t5-xl)
-6. Choose how to initialize the prompt from the following options:
-
-Text : Uses text that you specify.
-
-Random : Uses values that are generated for you as part of the tuning experiment.
-
-These options are related to the prompt tuning method for tuning models. For more information about how each option affects the tuning experiment, see [How prompt-tuning works](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-methods.htmlhow-prompt-tuning-works).
-7. Required for the Text initialization method only: Add the initialization text that you want to include with the prompt.
-
-
-
-* For a classification task, give an instruction that describes what you want to classify and lists the class labels to be used. For example, Classify whether the sentiment of each comment is Positive or Negative.
-* For a generative task, describe what you want the model to provide in the output. For example, Make the case for allowing employees to work from home a few days a week.
-* For a summarization task, give an instruction such as, Summarize the main points from a meeting transcript.
-
-
-
-8. Choose a task type.
-
-Choose the task type that most closely matches what you want the model to do:
-
-"
-2372AEEEB4DA6A3E94273CB46224ED09CD84CD9E_4,2372AEEEB4DA6A3E94273CB46224ED09CD84CD9E,"Classification : Predicts categorical labels from features. For example, given a set of customer comments, you might want to label each statement as a question or a problem. By separating out customer problems, you can find and address them more quickly.
-
-Generation : Generates text. For example, writes a promotional email.
-
-Summarization : Generates text that describes the main ideas that are expressed in a body of text. For example, summarizes a research paper.
-
-Whichever task you choose, the input is submitted to the underlying foundation model as a generative request type during the experiment. For classification tasks, class names are taken into account in the prompts that are used to tune the model. As models and tuning methods evolve, task-specific enhancements are likely to be added that you can leverage if tasks are represented accurately.
-9. Required for classification tasks only: In the Classification output (verbalizer) field, add the class labels that you want the model to use one at a time.
-
-Important: Specify the same labels that are used in your training data.
-
-During the tuning experiment, class label information is submitted along with the input examples from the training data.
-10. Add the training data that will be used to tune the model. You can upload a file or use an asset from your project.
-
-To see examples of how to format your file, expand What should your data look like?, and then click Preview template. For more information, see [Data formats](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-data.html).
-11. Optional: If you want to limit the size of the input or output examples that are used during training, adjust the maximum number of tokens that are allowed. Expand What should your data look like?, and then drag the sliders to change the values. Limiting the size can reduce the time that it takes to run the tuning experiment. For more information, see [Controlling the number of tokens used](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-tune.html?context=cdpaas&locale=entuning-tokens).
-"
-2372AEEEB4DA6A3E94273CB46224ED09CD84CD9E_5,2372AEEEB4DA6A3E94273CB46224ED09CD84CD9E,"12. Optional: Click Configure parameters to edit the parameters that are used by the tuning experiment.
-
-The tuning run is configured with parameter values that represent a good starting point for tuning a model. You can adjust them if you want.
-
-For more information about the available parameters and what they do, see [Tuning parameters](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-parameters.html).
-
-After you change parameter values, click Save.
-13. Click Start tuning.
-
-
-
-The tuning experiment begins. It might take a few minutes to a few hours depending on the size of your training data and the availability of compute resources. When the experiment is finished, the status shows as completed.
-
-A tuned model asset is not created until after you create a deployment from a completed tuning experiment. For more information, see [Deploying a tuned model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-deploy.html).
-
-"
-2372AEEEB4DA6A3E94273CB46224ED09CD84CD9E_6,2372AEEEB4DA6A3E94273CB46224ED09CD84CD9E," Controlling the number of tokens used
-
-You can change the number of tokens that are allowed in the model input and output during a tuning experiment.
-
-
-
-Table 1: Token number parameters
-
- Parameter name Value options Default value
-
- Maximum input tokens 1 - 256 256
- Maximum output tokens 1 - 128 128
-
-
-
-You already have some control over the input size. The input text that is used during a tuning experiment comes from your training data. So, you can manage the input size by keeping your example inputs to a set length. However, you might be getting training data that isn't curated from another team or process. In that case, you can use the Maximum input tokens slider to manage the input size. If you set the parameter to 200 and the training data has an example input with 1,000 tokens, for example, the example is truncated. Only the first 200 tokens of the example input are used.
-
-The Max output tokens value is important because it controls the number of tokens that the model is allowed to generate as output at training time. You can use the slider to limit the output size, which helps the model to generate concise output.
-
-For classification tasks, minimizing the size of the output is a good way to force a generative model to return the class label only, without repeating the classification pattern in the output.
-
-For natural language models, words are converted to tokens. 256 tokens is equal to approximately 130—170 words. 128 tokens is equal to approximately 65—85 words. However, token numbers are difficult to estimate and can differ by model. For more information, see [Tokens and tokenization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tokens.html).
-
-"
-2372AEEEB4DA6A3E94273CB46224ED09CD84CD9E_7,2372AEEEB4DA6A3E94273CB46224ED09CD84CD9E," Evaluating the tuning experiment
-
-When the experiment is finished, a loss function graph is displayed that illustrates the improvement in the model output over time. The epochs are shown on the x-axis and a measure of the difference between predicted and actual results per epoch is shown on the y-axis. The value that is shown per epoch is calculated from the average gradient value from all of the accumulation steps in the epoch.
-
-The best experiment outcome is represented by a downward-sloping curve. A decreasing curve means that the model gets better at generating the expected outputs in the expected format over time.
-
-If the gradient value for the last epoch remains too high, you can run another experiment. To help improve the results, try one of the following approaches:
-
-
-
-* Augment or edit the training data that you're using.
-* Adjust the experiment parameters.
-
-
-
-When you're satisfied with the results from the tuning experiment, deploy the tuned foundation model. For more information, see [Deploying a tuned model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-deploy.html).
-
-"
-2372AEEEB4DA6A3E94273CB46224ED09CD84CD9E_8,2372AEEEB4DA6A3E94273CB46224ED09CD84CD9E," Learn more
-
-
-
-* [Data formats](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-data.html)
-* [Tuning parameters](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-parameters.html)
-
-
-
-
-
-* [Quick start: Tune a foundation model](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-tuning-studio.html)
-* [Sample notebook: Tune a model to classify CFPB documents in watsonx](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/bf57e8896f3e50c638b5a378780f7502)
-
-
-
-Parent topic:[Tuning Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-studio.html)
-"
-FBC3C5F81D060CD996489B772ABAC886F12130A3_0,FBC3C5F81D060CD996489B772ABAC886F12130A3," When to tune a foundation model
-
-Find out when tuning a model can help you use a foundation model to achieve your goals.
-
-Tune a foundation model when you want to do the following things:
-
-
-
-* Reduce the cost of inferencing at scale
-
-Larger foundation models typically generate better results. However, they are also more expensive to use. By tuning a model, you can get similar, sometimes even better results from a smaller model that costs less to use.
-* Get the model's output to use a certain style or format
-* Improve the model's performance by teaching the model a specialized task
-* Generate output in a reliable form in response to zero-shot prompts
-
-
-
-"
-FBC3C5F81D060CD996489B772ABAC886F12130A3_1,FBC3C5F81D060CD996489B772ABAC886F12130A3," When not to tune a model
-
-Tuning a model is not always the right approach for improving the output of a model. For example, tuning a model cannot help you do the following things:
-
-
-
-* Improve the accuracy of answers in model output
-
-If you're using a foundation model for factual recall in a question-answering scenario, tuning will marginally improve answer accuracy. To get factual answers, you must provide factual information as part of your input to the model. Tuning can be used to help the generated factual answers conform to a format that can be more-easily used by a downstream process in a workflow. To learn about methods for returning factual answers, see [Retreival-augmented generation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-rag.html).
-* Get the model to use a specific vocabulary in its output consistently
-
-Large language models that are trained on large amounts of data formulate a vocabulary based on that initial set of data. You can introduce significant terms to the model from training data that you use to tune the model. However, the model might not use these preferred terms reliably in its output.
-* Teach a foundation model to perform an entirely new task
-
-Experimenting with prompt engineering is an important first step because it helps you understand the type of output that a foundation model is and is not capable of generating. You can use tuning to tweak, tailor, and shape the output that a foundation model is able to return.
-
-
-
-"
-FBC3C5F81D060CD996489B772ABAC886F12130A3_2,FBC3C5F81D060CD996489B772ABAC886F12130A3," Learn more
-
-
-
-* [Retreival-augmented generation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-rag.html)
-* [Tuning methods](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-methods.html)
-
-
-
-Parent topic:[Foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-overview.html)
-"
-E3B9F33C36E5636808B137CFA4745E39F3B48D62_0,E3B9F33C36E5636808B137CFA4745E39F3B48D62," SPSS predictive analytics forecasting using data preparation for time series data in notebooks
-
-Data preparation for time series data (TSDP) provides the functionality to convert raw time data (in Flattened multi-dimensional format, which includes transactional (event) based and column-based data) into regular time series data (in compact row-based format) which is required by the subsequent time series analysis methods.
-
-The main job of TSDP is to generate time series in terms of the combination of each unique value in the dimension fields with metric fields. In addition, it sorts the data based on the timestamp, extracts metadata of time variables, transforms time series with another time granularity (interval) by applying an aggregation or distribution function, checks the data quality, and handles missing values if needed.
-
-"
-E3B9F33C36E5636808B137CFA4745E39F3B48D62_1,E3B9F33C36E5636808B137CFA4745E39F3B48D62,"Python example code:
-
-from spss.ml.forecasting.timeseriesdatapreparation import TimeSeriesDataPreparation
-
-tsdp = TimeSeriesDataPreparation().
-setMetricFieldList([""Demand""]).
-setDateTimeField(""Date"").
-setEncodeSeriesID(True).
-setInputTimeInterval(""MONTH"").
-setOutTimeInterval(""MONTH"").
-setQualityScoreThreshold(0.0).
-setConstSeriesThreshold(0.0)
-
-tsdpOut = tsdp.transform(data)
-
-"
-E3B9F33C36E5636808B137CFA4745E39F3B48D62_2,E3B9F33C36E5636808B137CFA4745E39F3B48D62," TimeSeriesDataPreparationConvertor
-
-This is the date/time convertor API that's used to provide some functionalities of the date/time convertor inside TSDP for applications to use. There are two use cases for this component:
-
-
-
-* Compute the time points between a specified start and end time. In this case, the start and end time both occur after the first observation in the previous TSDP\'s output.
-* Compute the time points between a start index and end index referring to the last observation in the previous TSDP\'s output.
-
-
-
-"
-E3B9F33C36E5636808B137CFA4745E39F3B48D62_3,E3B9F33C36E5636808B137CFA4745E39F3B48D62," Temporal causal modeling
-
-Temporal causal modeling (TCM) refers to a suite of methods that attempt to discover key temporal relationships in time series data by using a combination of Granger causality and regression algorithms for variable selection.
-
-"
-E3B9F33C36E5636808B137CFA4745E39F3B48D62_4,E3B9F33C36E5636808B137CFA4745E39F3B48D62,"Python example code:
-
-from spss.ml.forecasting.timeseriesdatapreparation import TimeSeriesDataPreparation
-from spss.ml.common.wrapper import LocalContainerManager
-from spss.ml.forecasting.temporalcausal import TemporalCausal
-from spss.ml.forecasting.params.predictor import MaxLag, MaxNumberOfPredictor, Predictor
-from spss.ml.forecasting.params.temporal import FieldNameList, FieldSettings, Forecast, Fit
-from spss.ml.forecasting.reversetimeseriesdatapreparation import ReverseTimeSeriesDataPreparation
-
-tsdp = TimeSeriesDataPreparation().setDimFieldList([""Demension1"", ""Demension2""]).
-setMetricFieldList([""m1"", ""m2"", ""m3"", ""m4""]).
-setDateTimeField(""date"").
-setEncodeSeriesID(True).
-setInputTimeInterval(""MONTH"").
-setOutTimeInterval(""MONTH"")
-tsdpOutput = tsdp.transform(changedDF)
-
-lcm = LocalContainerManager()
-lcm.exportContainers(""TSDP"", tsdp.containers)
-
-estimator = TemporalCausal(lcm).
-setInputContainerKeys([""TSDP""]).
-setTargetPredictorList([Predictor(
-targetList="""", """", """"]],
-predictorCandidateList="""", """", """"]])]).
-setMaxNumPredictor(MaxNumberOfPredictor(False, 4)).
-setMaxLag(MaxLag(""SETTING"", 5)).
-setTolerance(1e-6)
-
-tcmModel = estimator.fit(tsdpOutput)
-transformer = tcmModel.setDataEncoded(True).
-setCILevel(0.95).
-setOutTargetValues(False).
-"
-E3B9F33C36E5636808B137CFA4745E39F3B48D62_5,E3B9F33C36E5636808B137CFA4745E39F3B48D62,"setTargets(FieldSettings(fieldNameList=FieldNameList(seriesIDList=[""da1"", ""db1"", ""m1""]]))).
-setReestimate(False).
-setForecast(Forecast(outForecast=True, forecastSpan=5, outCI=True)).
-setFit(Fit(outFit=True, outCI=True, outResidual=True))
-
-predictions = transformer.transform(tsdpOutput)
-rtsdp = ReverseTimeSeriesDataPreparation(lcm).
-setInputContainerKeys([""TSDP""]).
-setDeriveFutureIndicatorField(True)
-
-rtsdpOutput = rtsdp.transform(predictions)
-rtsdpOutput.show()
-
-"
-E3B9F33C36E5636808B137CFA4745E39F3B48D62_6,E3B9F33C36E5636808B137CFA4745E39F3B48D62," Temporal Causal Auto Regressive Model
-
-Autoregressive (AR) models are built to compute out-of-sample forecasts for predictor series that aren't target series. These predictor forecasts are then used to compute out-of-sample forecasts for the target series.
-
-"
-E3B9F33C36E5636808B137CFA4745E39F3B48D62_7,E3B9F33C36E5636808B137CFA4745E39F3B48D62,"Model produced by TemporalCausal
-
-TemporalCausal exports outputs:
-
-
-
-* a JSON file that contains TemporalCausal model information
-* an XML file that contains multi series model
-
-
-
-"
-E3B9F33C36E5636808B137CFA4745E39F3B48D62_8,E3B9F33C36E5636808B137CFA4745E39F3B48D62,"Python example code:
-
-from spss.ml.common.wrapper import LocalContainerManager
-from spss.ml.forecasting.temporalcausal import TemporalCausal, TemporalCausalAutoRegressiveModel
-from spss.ml.forecasting.params.predictor import MaxLag, MaxNumberOfPredictor, Predictor
-from spss.ml.forecasting.params.temporal import FieldNameList, FieldSettingsAr, ForecastAr
-
-lcm = LocalContainerManager()
-arEstimator = TemporalCausal(lcm).
-setInputContainerKeys([tsdp.uid]).
-setTargetPredictorList([Predictor(
-targetList = ""da1"", ""db1"", ""m2""]],
-predictorCandidateList = ""da1"", ""db1"", ""m1""],
-""da1"", ""db2"", ""m1""],
-""da1"", ""db2"", ""m2""],
-""da1"", ""db3"", ""m1""],
-""da1"", ""db3"", ""m2""],
-""da1"", ""db3"", ""m3""]])]).
-setMaxNumPredictor(MaxNumberOfPredictor(False, 5)).
-setMaxLag(MaxLag(""SETTING"", 5))
-
-arEstimator.fit(df)
-
-tcmAr = TemporalCausalAutoRegressiveModel(lcm).
-setInputContainerKeys([arEstimator.uid]).
-setDataEncoded(True).
-setOutTargetValues(True).
-setTargets(FieldSettingsAr(FieldNameList(
-seriesIDList=[""da1"", ""db1"", ""m1""],
-""da1"", ""db2"", ""m2""],
-""da1"", ""db3"", ""m3""]]))).
-setForecast(ForecastAr(forecastSpan = 5))
-
-scored = tcmAr.transform(df)
-scored.show()
-
-"
-E3B9F33C36E5636808B137CFA4745E39F3B48D62_9,E3B9F33C36E5636808B137CFA4745E39F3B48D62," Temporal Causal Outlier Detection
-
-One of the advantages of building TCM models is the ability to detect model-based outliers. Outlier detection refers to a capability to identify the time points in the target series with values that stray too far from their expected (fitted) values based on the TCM models.
-
-"
-E3B9F33C36E5636808B137CFA4745E39F3B48D62_10,E3B9F33C36E5636808B137CFA4745E39F3B48D62," Temporal Causal Root Cause Analysis
-
-The root cause analysis refers to a capability to explore the Granger causal graph in order to analyze the key/root values that resulted in the outlier in question.
-
-"
-E3B9F33C36E5636808B137CFA4745E39F3B48D62_11,E3B9F33C36E5636808B137CFA4745E39F3B48D62," Temporal Causal Scenario Analysis
-
-Scenario analysis refers to a capability of the TCM models to ""play-out"" the repercussions of artificially setting the value of a time series. A scenario is the set of forecasts that are performed by substituting the values of a root time series by a vector of substitute values.
-
-"
-E3B9F33C36E5636808B137CFA4745E39F3B48D62_12,E3B9F33C36E5636808B137CFA4745E39F3B48D62," Temporal Causal Summary
-
-TCM Summary selects Top N models based on one model quality measure. There are five model quality measures: Root Mean Squared Error (RMSE), Root Mean Squared Percentage Error (RMSPE), Bayesian Information Criterion (BIC), Akaike Information Criterion (AIC), and R squared (RSQUARE). Both N and the model quality measure can be set by the user.
-
-"
-E3B9F33C36E5636808B137CFA4745E39F3B48D62_13,E3B9F33C36E5636808B137CFA4745E39F3B48D62," Time Series Exploration
-
-Time Series Exploration explores the characteristics of time series data based on some statistics and tests to generate preliminary insights about the time series before modeling. It covers not only analytic methods for expert users (including time series clustering, unit root test, and correlations), but also provides an automatic exploration process based on a simple time series decomposition method for business users.
-
-"
-E3B9F33C36E5636808B137CFA4745E39F3B48D62_14,E3B9F33C36E5636808B137CFA4745E39F3B48D62,"Python example code:
-
-from spss.ml.forecasting.timeseriesexploration import TimeSeriesExploration
-
-tse = TimeSeriesExploration().
-setAutoExploration(True).
-setClustering(True)
-
-tseModel = tse.fit(data)
-predictions = tseModel.transform(data)
-predictions.show()
-
-"
-E3B9F33C36E5636808B137CFA4745E39F3B48D62_15,E3B9F33C36E5636808B137CFA4745E39F3B48D62," Reverse Data preparation for time series data
-
-Reverse Data preparation for time series data (RTSDP) provides functionality that converts the compact row based (CRB) format that's generated by TimeSeriesDataPreperation (TSDP) or TemporalCausalModel (TCM Score) back to the flattened multidimensional (FMD) format.
-
-"
-E3B9F33C36E5636808B137CFA4745E39F3B48D62_16,E3B9F33C36E5636808B137CFA4745E39F3B48D62,"Python example code:
-
-from spss.ml.common.wrapper import LocalContainerManager
-from spss.ml.forecasting.params.temporal import GroupType
-from spss.ml.forecasting.reversetimeseriesdatapreparation import ReverseTimeSeriesDataPreparation
-from spss.ml.forecasting.timeseriesdatapreparation import TimeSeriesDataPreparation
-
-manager = LocalContainerManager()
-tsdp = TimeSeriesDataPreparation(manager).
-setDimFieldList([""Dimension1"", ""Dimension2"", ""Dimension3""]).
-setMetricFieldList(
-[""Metric1"", ""Metric2"", ""Metric3"", ""Metric4"", ""Metric5"", ""Metric6"", ""Metric7"", ""Metric8"", ""Metric9"", ""Metric10""]).
-setDateTimeField(""TimeStamp"").
-setEncodeSeriesID(False).
-setInputTimeInterval(""WEEK"").
-setOutTimeInterval(""WEEK"").
-setMissingImputeType(""LINEAR_INTERP"").
-setQualityScoreThreshold(0.0).
-setConstSeriesThreshold(0.0).
-setGroupType(
-GroupType([(""Metric1"", ""MEAN""), (""Metric2"", ""SUM""), (""Metric3"", ""MODE""), (""Metric4"", ""MIN""), (""Metric5"", ""MAX"")]))
-
-tsdpOut = tsdp.transform(changedDF)
-rtsdp = ReverseTimeSeriesDataPreparation(manager).
-setInputContainerKeys([tsdp.uid]).
-setDeriveFutureIndicatorField(True)
-
-rtdspOut = rtsdp.transform(tsdpOut)
-
-import com.ibm.spss.ml.forecasting.traditional.TimeSeriesForecastingModelReEstimate
-
-val tsdp = TimeSeriesDataPreparation().
-setDimFieldList(Array(""da"", ""db"")).
-setMetricFieldList(Array(""metric"")).
-setDateTimeField(""date"").
-"
-E3B9F33C36E5636808B137CFA4745E39F3B48D62_17,E3B9F33C36E5636808B137CFA4745E39F3B48D62,"setEncodeSeriesID(false).
-setInputTimeInterval(""MONTH"").
-setOutTimeInterval(""MONTH"")
-
-val lcm = LocalContainerManager()
-lcm.exportContainers(""k"", tsdp.containers)
-
-val reestimate = TimeSeriesForecastingModelReEstimate(lcm).
-setForecast(ForecastEs(outForecast = true, forecastSpan = 4, outCI = true)).
-setFitSettings(Fit(outFit = true, outCI = true, outResidual = true)).
-setOutInputData(true).
-setInputContainerKeys(Seq(""k""))
-
-val rtsdp = ReverseTimeSeriesDataPreparation(tsdp.manager).
-setInputContainerKeys(List(tsdp.uid)).
-setDeriveFutureIndicatorField(true)
-
-val pipeline = new Pipeline().setStages(Array(tsdp, reestimate, rtsdp))
-val scored = pipeline.fit(data).transform(data)
-scored.show()
-
-Parent topic:[SPSS predictive analytics algorithms](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-algorithms.html)
-"
-3508F0DDA4CCBDBB07BD583218F4E4260DC01C0D_0,3508F0DDA4CCBDBB07BD583218F4E4260DC01C0D," Geospatial data analysis
-
-You can use the geospatio-temporal library to expand your data science analysis in Python notebooks to include location analytics by gathering, manipulating and displaying imagery, GPS, satellite photography and historical data.
-
-The gespatio-temporal library is available in all IBM Watson Studio Spark with Python runtime environments.
-
-"
-3508F0DDA4CCBDBB07BD583218F4E4260DC01C0D_1,3508F0DDA4CCBDBB07BD583218F4E4260DC01C0D," Key functions
-
-The geospatio-temporal library includes functions to read and write data, topological functions, geohashing, indexing, ellipsoidal and routing functions.
-
-Key aspects of the library include:
-
-
-
-* All calculated geometries are accurate without the need for projections.
-* The geospatial functions take advantage of the distributed processing capabilities provided by Spark.
-* The library includes native geohashing support for geometries used in simple aggregations and in indexing, thereby improving storage retrieval considerably.
-* The library supports extensions of Spark distributed joins.
-* The library supports the SQL/MM extensions to Spark SQL.
-
-
-
-"
-3508F0DDA4CCBDBB07BD583218F4E4260DC01C0D_2,3508F0DDA4CCBDBB07BD583218F4E4260DC01C0D," Getting started with the library
-
-Before you can start using the library in a notebook, you must register STContext in your notebook to access the st functions.
-
-To register STContext:
-
-from pyst import STContext
-stc = STContext(spark.sparkContext._gateway)
-
-"
-3508F0DDA4CCBDBB07BD583218F4E4260DC01C0D_3,3508F0DDA4CCBDBB07BD583218F4E4260DC01C0D," Next steps
-
-After you have registered STContext in your notebook, you can begin exploring the spatio-temporal library for:
-
-
-
-* Functions to read and write data
-* Topological functions
-* Geohashing functions
-* Geospatial indexing functions
-* Ellipsoidal functions
-* Routing functions
-
-
-
-Check out the following sample Python notebooks to learn how to use these different functions in Python notebooks:
-
-
-
-* [Use the spatio-temporal library for location analytics](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/92c6ab6ea922d1da6a2cc9496a277005)
-* [Use spatial indexing to query spatial data](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/a7432f0c29c5bda2fb42749f3628d981)
-* [Spatial queries in PySpark](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/27ecffa80bd3a386fffca1d8d1256ba7)
-
-
-
-Parent topic:[Notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html)
-"
-B61649DF5425DEA0C1F16942BDE0EEC79B3E4F80_0,B61649DF5425DEA0C1F16942BDE0EEC79B3E4F80," Publishing notebooks on GitHub
-
-To collaborate with stakeholders and other data scientists, you can publish your notebooks in GitHub repositories. You can also use GitHub to back up notebooks for source code management.
-
-Watch this video to see how to enable GitHub integration.
-
-Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform.
-
-This video provides a visual method to learn the concepts and tasks in this documentation.
-
-
-
-* Transcript
-
-Synchronize transcript with video
-
-
-
- Time Transcript
-
- 00:00 This video shows you how to publish notebooks from your Watson Studio project to your GitHub account.
- 00:07 Navigate to your profile and settings.
- 00:11 On the ""Integrations"" tab, visit the link to generate a GitHub personal access token.
- 00:17 Provide a descriptive name for the token and select the repo and gist scopes, then generate the token.
- 00:29 Copy the token, return to the GitHub integration settings, and paste the token.
- 00:36 The token is validated when you save it to your profile settings.
- 00:42 Now, navigate to your projects.
- 00:44 You enable GitHub integration at the project level on the ""Settings"" tab.
- 00:50 Simply scroll to the bottom and paste the existing GitHub repository URL.
- 00:56 You'll find that on the ""Code"" tab in the repo.
- 01:01 Click ""Update"" to make the connection.
- 01:05 Now, go to the ""Assets"" tab and open the notebook you want to publish.
- 01:14 Notice that this notebook has the credentials replaced with X's.
- 01:19 It's a best practice to remove or replace credentials before publishing to GitHub.
- 01:24 So, this notebook is ready for publishing.
- 01:27 You can provide the target path along with a commit message.
- 01:31 You also have the option to publish content without hidden code, which means that any cells in the notebook that began with the hidden cell comment will not be published.
- 01:42 When you're, ready click ""Publish"".
- 01:45 The message tells you that the notebook was published successfully and provides links to the notebook, the repository, and the commit.
- 01:54 Let's take a look at the commit.
-"
-B61649DF5425DEA0C1F16942BDE0EEC79B3E4F80_1,B61649DF5425DEA0C1F16942BDE0EEC79B3E4F80," 01:57 So, there's the commit, and you can navigate to the repository to see the published notebook.
- 02:04 Lastly, you can publish as a gist.
- 02:07 Gists are another way to share your work on GitHub.
- 02:10 Every gist is a git repository, so it can be forked and cloned.
- 02:15 There are two types of gists: public and secret.
- 02:19 If you start out with a secret gist, you can convert it to a public gist later.
- 02:24 And again, you have the option to remove hidden cells.
- 02:29 Follow the link to see the published gist.
- 02:32 So that's the basics of Watson Studio's GitHub integration.
- 02:37 Find more videos in the Cloud Pak for Data as a Service documentation.
-
-
-
-
-
-"
-B61649DF5425DEA0C1F16942BDE0EEC79B3E4F80_2,B61649DF5425DEA0C1F16942BDE0EEC79B3E4F80," Enabling access to GitHub from your account
-
-Before you can publish notebooks on GitHub, you must enable your IBM watsonx account to access GitHub. You enable access by creating a personal access token with the required access scope in GitHub and linking the token to your IBM watsonx account.
-
-Follow these steps to create a personal access token:
-
-
-
-1. Click your avatar in the header, and then click Profile and settings.
-2. Go to the Integrations tab and click the GitHub personal access tokens link on the dialog and generate a new token.
-3. On the New personal access token page, select repo scope and then click to generate a token.
-4. Copy the generated access token and paste it in the GitHub integration dialog window in IBM watsonx.
-
-
-
-"
-B61649DF5425DEA0C1F16942BDE0EEC79B3E4F80_3,B61649DF5425DEA0C1F16942BDE0EEC79B3E4F80," Linking a project to a GitHub repository
-
-After you have saved the access token, your project must be connected to an existing GitHub repository. You can only link to one existing GitHub repository from a project. Private repositories are supported.
-
-To link a project to an existing GitHub repository, you must have administrator permission to the project. All project collaborators, who have adminstrator or editor permission, can publish files to this GitHub repository. However, these users must have permission to access the repository. Granting user permissions to repositories must be done in GitHub.
-
-To connect a project to an existing GitHub repository:
-
-
-
-1. Select the Manage tab and go to the Services and Integrations page.
-2. Click the Third-party integrations tab.
-3. Click Connect integration.
-4. Enter your generated access token from Github.
-
-
-
-Now you can begin publishing notebooks on GitHub.
-
-Note:For information on how to change your Git integration, refer to [Managing your integrations](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/personal-settings.htmlintegrations).
-
-"
-B61649DF5425DEA0C1F16942BDE0EEC79B3E4F80_4,B61649DF5425DEA0C1F16942BDE0EEC79B3E4F80," Publishing a notebook on GitHub
-
-To publish a notebook on GitHub:
-
-
-
-1. Open the notebook in edit mode.
-2. Click the GitHub integration icon () and select Publish on GitHub from the opened notebook's action bar.
-
-
-
-When you enter the name of the file you want to publish on GitHub, you can specify a folder path in the GitHub repository. Note that notebook files are always pushed to the master branch.
-
-If you get this error: An error occurred while publishing the notebook. Invalid access token permissions or repository does not exist. make sure that:
-
-
-
-* You generated your personal access token, as described in [Enabling access to GitHub from your account](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/enabling-access.html) and the token was not deleted.
-* The repository that you want to publish your notebook to still exists.
-
-
-
-Parent topic:[Managing the lifecycle of notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-nb-lifecycle.html)
-"
-3C307031346D4FD7DD1A66E2A2F919713582B075,3C307031346D4FD7DD1A66E2A2F919713582B075," Hiding sensitive code cells in a notebook
-
-If your notebook includes code cells with sensitive data, such as credentials for data sources, you can hide those code cells from anyone you share your notebook with. Any collaborators in the same project can see the cells, but when you share a notebook with a link, those cells will be hidden from anyone who uses the link.
-
-To hide code cells:
-
-
-
-1. Open the notebook and select the code cell to hide.
-2. Insert a comment with the hide tag on the first line of the code cell.
-
-For the Python and R languages, enter the following syntax: @hidden_cell
-
-
-
-
-
-Parent topic:[Sharing notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/share-notebooks.html)
-"
-AF2AC67B66D3A2DB0D4F2AF2D6743F903F1385D7_0,AF2AC67B66D3A2DB0D4F2AF2D6743F903F1385D7," Installing custom libraries through notebooks
-
-The prefered way of installing additional Python libraries to use in a notebook is to customize the software configuration of the environment runtime associated with the notebook. You can add the conda or PyPi packages through a customization template when you customize the environment template.
-
-See [Customizing environment templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/customize-envs.html).
-
-However, if you want to install packages from somewhere else or packages you created on your local machine, for example, you can install and import the packages through the notebook.
-
-To install packages other than conda or PyPi packages through your notebook:
-
-
-
-1. Add the package to your project storage by clicking the Upload asset to project icon (), and then browsing the package file or dragging it into your notebook sidebar.
-2. Add a project token to the notebook by clicking More > Insert project token from the notebook action bar. The code that is generated by this action initializes the variable project, which is required to access the library you uploaded to object storage.
-
-Example of an inserted project token:
-
- @hidden_cell
- The project token is an authorization token that is used to access project resources like data sources, connections, and used by platform APIs.
-from project_lib import Project
-project = Project(project_id='7c7a9455-1916-4677-a2a9-a61a75942f58', project_access_token='p-9a4c487075063e610471d6816e286e8d0d222141')
-pc = project.project_context
-
-If you don't have a token, you need to create one. See [Adding a project token](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/token.html).
-3. Install the library:
-
-
-
-
-"
-AF2AC67B66D3A2DB0D4F2AF2D6743F903F1385D7_1,AF2AC67B66D3A2DB0D4F2AF2D6743F903F1385D7," Fetch the library file, for example the tar.gz or whatever installable distribution you created
-with open(""xxx-0.1.tar.gz"",""wb"") as f:
-f.write(project.get_file(""xxx-0.1.tar.gz"").read())
-
- Install the library
-!pip install xxx-0.1.tar.gz
-
-
-
-
-1. Now you can import the library:
-
-import xxx
-
-
-
-Parent topic:[Libraries and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/libraries.html)
-"
-7623F4FA0F93DB33077A8B64F7A7B27FBC84E9E4_0,7623F4FA0F93DB33077A8B64F7A7B27FBC84E9E4," Jupyter kernels and notebook environments
-
-Jupyter notebooks run in kernels in Jupyter notebook environments or, if the notebooks use Spark APIs, those kernels run in a Spark environment.
-
-The number of notebook Juypter kernels started in an environment depends on the environment type:
-
-
-
-* CPU or GPU environments
-
-When you open a notebook in edit mode, exactly one interactive session connects to a Jupyter kernel for the notebook language and the environment runtime that you select. The runtime is started per user and not per notebook. This means that if you open a second notebook with the same environment template, a second kernel is started in that runtime. Resources are shared. If you want to avoid sharing runtime resources, you must associate each notebook with its own environment template.
-
-Important: Stopping a notebook kernel doesn't stop the environment runtime in which the kernel is started because other notebook kernels could still be active in that runtime. Only stop an environment runtime if you are sure that no kernels are active.
-* Spark environments
-
-When you open a notebook in edit mode in a Spark environment, a dedicated Spark cluster is started, even if another notebook was opened in the same Spark environment template. Each notebook kernel has its own Spark driver and set of Spark executors. No resources are shared.
-
-
-
-If necessary, you can restart or reconnect to a kernel. When you restart a kernel, the kernel is stopped and then started in the same session, but all execution results are lost. When you reconnect to a kernel after losing a connection, the notebook is connected to the same kernel session, and all previous execution results which were saved are available.
-
-The kernel remains active even if you leave the notebook or close the web browser window. When you reopen the same notebook, the notebook is connected to the same kernel. Only the output cells that were saved (auto-save happens every 2 minutes) before you left the notebook or closed the web browser window will be visible. You will not see the output for any cells which ran in the background after you left the notebook or closed the window. To see all of the output cells, you need to rerun the notebook.
-
-"
-7623F4FA0F93DB33077A8B64F7A7B27FBC84E9E4_1,7623F4FA0F93DB33077A8B64F7A7B27FBC84E9E4," Learn more
-
-
-
-* [Notebook environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html)
-
-
-
-
-
-* [Associated Spark services](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assoc-services.html)
-
-
-
-
-
-* [Runtime scope in notebook environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.htmlruntime-scope)
-
-
-
-Parent topic:[Creating notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/creating-notebooks.html)
-"
-A99D0A49CDC1C3C38EFF43A6B1B51B0A177E5573_0,A99D0A49CDC1C3C38EFF43A6B1B51B0A177E5573," Libraries and scripts for notebooks
-
-Watson Studio includes a large selection of preinstalled open source libraries for Python and R in its runtime environments. You can also use preinstalled IBM libraries or install custom libraries.
-
-Watson Studio includes the following libraries and the appropriate runtime environments with which you can expand your data analysis:
-
-
-
-* The Watson Natural Language Processing library in Python and Python with GPU runtime environments.
-* The gespatio-temporal library in Spark with Python runtime environments
-* The Xskipper library for data skipping uses the open source in Spark with Python runtime environments
-* Parquet encryption in Spark with Python runtime environments
-* The tspy library for time series analysis in Spark with Python runtime environments
-
-
-
-"
-A99D0A49CDC1C3C38EFF43A6B1B51B0A177E5573_1,A99D0A49CDC1C3C38EFF43A6B1B51B0A177E5573," Listing installed libraries
-
-Many of your favorite open source libraries are pre-installed on runtime environments. All you have to do is import them. See [Import preinstalled libraries and packages](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/libraries.html?context=cdpaas&locale=enimport-lib).
-
-If a library is not preinstalled, you can add it:
-
-
-
-* Through the notebook
-
-Some libraries require a kernel restart after a version change. If you need to work with a library version that isn't pre-installed in the environment in which you start the notebook, and you install this library version through the notebook, the notebook only runs successfully after you restart the kernel.
-
-Note that when you run the notebook non-interactively, for example as a notebook job, it fails because the kernel can't be restarted.
-* By adding a customization to the environment in which the notebook runs
-
-If you add a library with a particular version to the software customization, the library is preinstalled at the time the environment is started and no kernel restart is required. Also, if the notebook is run in a scheduled job, it won't fail.
-
-The advantage of adding an environment customization is that the library is preinstalled each time the environment runtime is started. Libraries that you add through a notebook are persisted for the lifetime of the runtime only. If the runtime is stopped and later restarted, those libraries are not installed.
-
-
-
-To see the list of installed libraries in your environment runtime:
-
-
-
-1. From the Manage tab, on the project's Environments page, select the environment template.
-2. From a notebook, run the appropriate command from a notebook cell:
-
-
-
-* Python: !pip list --isolated
-* R: installed.packages()
-
-
-
-3. Optional: Add custom libraries and packages to the environment. See [customizing an environment template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/customize-envs.html).
-
-
-
-"
-A99D0A49CDC1C3C38EFF43A6B1B51B0A177E5573_2,A99D0A49CDC1C3C38EFF43A6B1B51B0A177E5573," Importing an installed library
-
-To import an installed library into your notebook, run the appropriate command from a notebook cell with the library name:
-
-
-
-* Python: import library_name
-* R: library(library_name)
-
-
-
-Alternatively, you can write a script that includes multiple classes and methods and then [import the script into your notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/add-script-to-notebook.html).
-
-"
-A99D0A49CDC1C3C38EFF43A6B1B51B0A177E5573_3,A99D0A49CDC1C3C38EFF43A6B1B51B0A177E5573," Learn more
-
-
-
-* [Installing custom libraries](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/install-cust-lib.html)
-* [Importing scripts into a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/add-script-to-notebook.html)
-* [Watson Natural Language Processing library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp.html)
-* [gespatio-temporal library for location analysis](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/geo-spatial-lib.html)
-* [Xskipper library for data skipping](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-skipping-spark-sql.html)
-* [Parquet encryption](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/parquet-encryption.html)
-* [tspy library for time series analysis](https://dataplatform.cloud.ibm.com/docs/content/wsj/spark/time-series-lib.html)
-
-
-
-Parent topic:[Notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html)
-"
-773FA6558F9FD3115F36AF9E4B11F67C1F501432_0,773FA6558F9FD3115F36AF9E4B11F67C1F501432," Loading and accessing data in a notebook
-
-You can integrate data into notebooks by accessing the data from a local file, from free data sets, or from a data source connection. You load that data into a data structure or container in the notebook, for example, a pandas.DataFrame, numpy.array, Spark RDD, or Spark DataFrame.
-
-To work with data in a notebook, you can choose between the following options:
-
-
-
-Recommended methods for adding data to your notebook
-
- Option Recommended method Requirements Details
-
- Add data from a file on your local system Add a Code snippet that loads your data The file must exist as an asset in your project [Add a file from your local system](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html?context=cdpaas&locale=enadd-file-local) and then [Use a code snippet to load the data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html?context=cdpaas&locale=enfiles)
- Add data from a free data set from the Samples Add a Code snippet that loads your data The data set (file) must exist as an asset in your project [Add a free data set from the Samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html?context=cdpaas&locale=enloadcomm) and then [Use a code snippet to load the data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html?context=cdpaas&locale=enfiles)
-"
-773FA6558F9FD3115F36AF9E4B11F67C1F501432_1,773FA6558F9FD3115F36AF9E4B11F67C1F501432," Load data from data source connections Add a Code snippet that loads your data The connection must exist as an asset in your project [Add a connection to your project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html) and then [Add a code snippet that loads the data from your data source connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html?context=cdpaas&locale=enconns)
- Access project assets and metadata programmatically Use ibm-watson-studio-lib The data asset must exist in your project [Use the ibm-watson-studio-lib library to interact with data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/using-ibm-ws-lib.html)
- Create and use feature store data Use assetframe-lib library functions The data asset must exist in your project [Use the assetframe-lib library for Python to create and use feature store data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html)
- Access data using an API function or an operating system command For example, use wget N/A [Access data using an API function or an operating system command](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html?context=cdpaas&locale=enapi-function)
-
-
-
-Important: Make sure that the environment in which the notebook is started has enough memory to store the data that you load to the notebook. The environment must have significantly more memory than the total size of the data that is loaded to the notebook. Some data frameworks, like pandas, can hold multiple copies of the data in memory.
-
-"
-773FA6558F9FD3115F36AF9E4B11F67C1F501432_2,773FA6558F9FD3115F36AF9E4B11F67C1F501432," Adding a file from your local system
-
-To add a file from your local system to your project by using the Jupyterlab notebook editor:
-
-
-
-1. Open your notebook in edit mode.
-2. From the toolbar, click the Upload asset to project icon () and add your file.
-
-
-
-Tip: You can also drag the file into your notebook sidebar.
-
-"
-773FA6558F9FD3115F36AF9E4B11F67C1F501432_3,773FA6558F9FD3115F36AF9E4B11F67C1F501432," Load data sets from the Samples
-
-The data sets on the Samples contain open data. Watch this short video to see how to work with public data sets in the Samples.
-
-This video provides a visual method to learn the concepts and tasks in this documentation.
-
-To add a data set from the Samples to your project:
-
-
-
-1. From the IBM watsonx navigation menu, select Samples.
-2. Find the card for the data set that you want to add. 
-3. Click Add to project, select the project, and click Add. Clicking View project takes you to the project Overview page. The data asset is added to the list of data assets on the project's Assets page.
-
-
-
-"
-773FA6558F9FD3115F36AF9E4B11F67C1F501432_4,773FA6558F9FD3115F36AF9E4B11F67C1F501432," Loading data from files
-
-Prerequisites The file must exist as an asset in your project. For details, see [Adding a file from your local system](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html?context=cdpaas&locale=enadd-file-local) or [Loading a data set from the Samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html?context=cdpaas&locale=enloadcomm).
-
-To load data from a project file to your notebook:
-
-
-
-1. Open your notebook in edit mode.
-2. Click the Code snippets icon (), click Read data, and then select the data file from your project. If you want to change your selection, use Edit icon.
-3. From the Load as drop-down list, select the load option that you prefer. If you select Credentials, only file access credentials will be generated. For details, see [Adding credentials](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html?context=cdpaas&locale=enadding-creds).
-4. Click in an empty code cell in your notebook and then click Insert code to cell to insert the generated code. Alternatively, click to copy the generated code to the clipboard and then paste the code into your notebook.
-
-
-
-The generated code serves as a quick start to begin working with a data set. For production systems, carefully review the inserted code to determine whether to write your own code that better meets your needs.
-
-"
-773FA6558F9FD3115F36AF9E4B11F67C1F501432_5,773FA6558F9FD3115F36AF9E4B11F67C1F501432,"To learn which data structures are generated for which notebook language and data format, see [Data load support](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-load-support.htmlfile-types).
-
-"
-773FA6558F9FD3115F36AF9E4B11F67C1F501432_6,773FA6558F9FD3115F36AF9E4B11F67C1F501432," Loading data from data source connections
-
-Prerequisites Before you can load data from an IBM data service or from an external data source, you must create or add a connection to your project. See [Adding connections to projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html).
-
-To load data from an existing data source connection into a data structure in your notebook:
-
-
-
-1. Open your notebook in edit mode.
-2. Click the Code snippets icon (), click Read data, and then select the data source connection from your project.
-3. Select the schema and choose a table. If you want to change your selection, use Edit icon.
-4. Select the load option. If you select Credentials, only metadata will be generated. For details, see [Adding credentials](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html?context=cdpaas&locale=enadding-creds).
-5. Click in an empty code cell in your notebook and then insert code to the cell. Alternatively, click to copy the generated code to the clipboard and then paste the code into your notebook.
-6. If necessary, enter your personal credentials for locked data connections that are marked with a key icon (). This is a one-time step that permanently unlocks the connection for you. After you unlock the connection, the key icon is no longer displayed. For more information, see [Adding connections to projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-conn.html).
-
-
-
-"
-773FA6558F9FD3115F36AF9E4B11F67C1F501432_7,773FA6558F9FD3115F36AF9E4B11F67C1F501432,"The generated code serves as a quick start to begin working with a connection. For production systems, carefully review the inserted code to determine whether to write your own code that better meets your needs.
-
-To learn which data structures are generated for which notebook language and data format, see [Data load support](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-load-support.htmlfile-types).
-
-"
-773FA6558F9FD3115F36AF9E4B11F67C1F501432_8,773FA6558F9FD3115F36AF9E4B11F67C1F501432," Adding credentials
-
-You can generate your own code to access the file located in your IBM Cloud Object Storage or a file accessible through a connection. This is useful when, for example, your file format is not supported by the snippet generation tool. With the credentials, you can write your own code to load the data into a data structure in a notebook cell.
-
-To add the credentials:
-
-
-
-1. Click the Code snippets icon () and then click Read data.
-2. Click in an empty code cell in your notebook, select Credentials as the load option, and then load the credentials to the cell. You can also click to copy the credentials to the clipboard and then paste them into your notebook.
-3. Insert your credentials into the code in your notebook to access the data. For example, see this code in a [blog for Python](https://medium.com/ibm-data-science-experience/working-with-ibm-cloud-object-storage-in-python-fe0ba8667d5f).
-
-
-
-"
-773FA6558F9FD3115F36AF9E4B11F67C1F501432_9,773FA6558F9FD3115F36AF9E4B11F67C1F501432," Use an API function or an operating system command to access the data
-
-You can use API functions or operating system commands in your notebook to access data, for example, the wget command to access data by using the HTTP, HTTPS or FTP protocols. When you use these types of API functions and commands, you must include code that sets the project access token. See [Manually add the project access token](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/token.html).
-
-For reference information about the API, see [Watson Data API](https://cloud.ibm.com/apidocs/watson-data-api).
-
-Parent topic:[Notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html)
-"
-7BAB40E15D18920009E4168C32265A950A8AFE38_0,7BAB40E15D18920009E4168C32265A950A8AFE38," Managing compute resources
-
-If you have the Admin role or Editor in a project, you can perform management tasks for environments.
-
-
-
-* [Create an environment template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-customize-env-definition.html)
-* [Customize an environment template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/customize-envs.html)
-* [Stop active runtimes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.html?context=cdpaas&locale=enstop-active-runtimes)
-* [Promote an environment template to a space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/promote-envs.html)
-* [Track capacity unit consumption of runtimes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/track-runtime-usage.html)
-
-
-
-"
-7BAB40E15D18920009E4168C32265A950A8AFE38_1,7BAB40E15D18920009E4168C32265A950A8AFE38," Stop active runtimes
-
-You should stop all active runtimes when you no longer need them to prevent consuming extra capacity unit hours (CUHs).
-
-Jupyter notebook runtimes are started per user and not per notebook. Stopping a notebook kernel doesn't stop the environment runtime in which the kernel is started because you could have started other notebooks in the same environment. You should only stop a notebook runtime if you are sure that no other notebook kernels are active.
-
-Only runtimes that are started for jobs are automatically shut down after the scheduled job has completed. For example, if you schedule to run a notebook once a day for 2 months, the runtime instance will be activated every day for the duration of the scheduled job and deactivated again after the job has finished.
-
-Project users with Admin role can stop all runtimes in the project. Users added to the project with Editor role can stop the runtimes they started, but can't stop other project users' runtimes. Users added to the project with the viewer role can't see the runtimes in the project.
-
-You can stop runtimes from:
-
-
-
-* The Environment Runtimes page, which lists all active runtimes across all projects for your account, by clicking Administration > Environment runtimes from the Watson Studio navigation menu.
-* Under Tool runtimes on the Environments page on the Manage tab of your project, which lists the active runtimes for a specific project.
-* The Environments page when you click the Notebook Info icon () from the notebook toolbar in the notebook editor. You can stop the runtime under Runtime status.
-
-
-
-Idle timeouts for:
-
-
-
-* [Jupyter notebook runtimes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.html?context=cdpaas&locale=encpu)
-"
-7BAB40E15D18920009E4168C32265A950A8AFE38_2,7BAB40E15D18920009E4168C32265A950A8AFE38,"* [Spark runtimes for notebooks and Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.html?context=cdpaas&locale=enspark)
-* [Notebook with GPU runtimes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.html?context=cdpaas&locale=engpu)
-* [RStudio runtimes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.html?context=cdpaas&locale=enrstudio)
-
-
-
-"
-7BAB40E15D18920009E4168C32265A950A8AFE38_3,7BAB40E15D18920009E4168C32265A950A8AFE38," Jupyter notebook idle timeout
-
-Runtime idle times differ for the Jupyter notebook runtimes depending on your Watson Studio plan.
-
-
-
-Idle timeout for default CPU runtimes
-
- Plan Idle timeout
-
- Lite - Idle stop time: 1 hour - CUH limit: 10 CUHs
- Professional - Idle stop time: 1 hour - CUH limit: no limit
- Standard (Legacy) - Idle stop time: 1 hour - CUH limit: no limit
- Enterprise (Legacy) - Idle stop time: 3 hours - CUH limit: no limit
- All plans Free runtime - Idle stop time: 1 hour - Maximum lifetime: 12 hours
-
-
-
-Important: A runtime is started per user and not per notebook. Stopping a notebook kernel doesn't stop the environment runtime in which the kernel is started because you could have started other notebooks in the same environment. Only stop a runtime if you are sure that no kernels are active.
-
-"
-7BAB40E15D18920009E4168C32265A950A8AFE38_4,7BAB40E15D18920009E4168C32265A950A8AFE38," Spark idle timeout
-
-All Spark runtimes, for example for notebook and Data Refinery, are stopped after 3 hours of inactivity. The Default Data Refinery XS runtime that is used when you refine data in Data Refinery is stopped after an idle time of 1 hour.
-
-Spark runtimes that are started when a job is started, for example to run a Data Refinery flow or a notebook, are stopped when the job finishes.
-
-"
-7BAB40E15D18920009E4168C32265A950A8AFE38_5,7BAB40E15D18920009E4168C32265A950A8AFE38," GPU idle timeout
-
-All GPU runtimes are automatically stopped after 3 hours of inactivity for Enterprise plan users and after 1 hour of inactivity for other paid plan users.
-
-"
-7BAB40E15D18920009E4168C32265A950A8AFE38_6,7BAB40E15D18920009E4168C32265A950A8AFE38," RStudio idle timeout
-
-An RStudio is stopped for you after an idle time of 2 hour. During this idle time, you will continue to consume CUHs for which you are billed. Long compute-intensive jobs are hard stopped after 24 hours.
-
-Parent topic:[Projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html)
-"
-6349E43EA9B4AC5775DB122E0F6C365D5DB810BF,6349E43EA9B4AC5775DB122E0F6C365D5DB810BF," Managing the lifecycle of notebooks and scripts
-
-After you have created and tested your notebooks, you can add them to pipelines, publish them to a catalog so that other catalog members can use the notebook in their projects, or share read-only copies outside of Watson Studio so that people who aren't collaborators in your Watson Studio projects can see and use them. R scripts and Shiny apps can't be published or shared using functionality in a project at this time.
-
-You can use any of these methods for notebooks:
-
-
-
-* [Add notebooks to a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-config.html)
-* [Share a URL on social media](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/share-notebooks.html)
-* [Publish on GitHub](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/github-integration.html)
-* [Publish as a gist](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-gist.html)
-* [Publish your notebook to a catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/catalog/publish-asset-project.html)
-
-
-
-Make sure that before you share or publish a notebook, you hide any sensitive code, like credentials, that you don't want others to see! See [Hide sensitive cells in a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/hide_code.html).
-
-Parent topic:[Notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html)
-"
-FF69E780BD8FECEAF7A0ADD24C159679F7359F81_0,FF69E780BD8FECEAF7A0ADD24C159679F7359F81," Markdown cheatsheet
-
-You can use Markdown tagging to improve the readability of a project readme or the Markdown cells in Jupyter notebooks. The differences between Markdown in the readme files and in notebooks are noted.
-
-Headings: Use #s followed by a blank space for notebook titles and section headings:
-
- title
- major headings
- subheadings
- 4th level subheadings
-
-Emphasis: Use this code: Bold: __string__ or string, Italic: _string_ or string, Strikethrough: string
-
-Mathematical symbols: Use this code: $ mathematical symbols $
-
-Monospace font: Surround text with a back single quotation mark (`). Use monospace for file path and file names and for text users enter or message text users see.
-
-Line breaks: Sometimes Markdown doesn’t make line breaks when you want them. Put two spaces at the end of the line, or use this code for a manual line break:
-
-Indented quoting: Use a greater-than sign (>) and then a space, then type the text. The text is indented and has a gray horizontal line to the left of it until the next carriage return.
-
-Bullets: Use the dash sign (-) with a space after it or a space, a dash, and a space (-), to create a circular bullet. To create a sub bullet, use a tab followed a dash and a space. You can also use an asterisk instead of a dash, and it works the same.
-
-Numbered lists: Start with 1. followed by a space, then your text. Hit return and numbering is automatic. Start each line with some number and a period, then a space. Tab to indent to get subnumbering.
-
-Checkboxes in readme files: Use this code for an unchecked box: ( )
-Use this code for a checked box: (x)
-
-Tables in readme files: Use this code:
-
- Heading Heading
-
- text text
- text text
-
-"
-FF69E780BD8FECEAF7A0ADD24C159679F7359F81_1,FF69E780BD8FECEAF7A0ADD24C159679F7359F81,"Graphics in notebooks: Drag and drop images to the Markdown cell to attach it to the notebook. To add images to other cell types, use graphics that are hosted on the web with this code, substituting url/name with the full URL and name of the image:
-
-Graphics in readme files: Use this code: Alt text]
-
-Geometric shapes: Use this code with a decimal or hex reference number from here: [!UTF-8 Geometric shapes](https://www.w3schools.com/charsets/ref_utf_geometric.asp)reference_number;
-
-Horizontal lines: Use three asterisks:
-
-Internal links: To link to a section, add an anchor above the section title and then create a link.
-
-Use this code to create an anchor:
-Use this code to create the link: [section title](section-ID)
-Make sure that the section_ID is unique within the notebook or readme.
-
-Alternatively, for notebooks you can skip creating anchors and use this code: [section title](section-title)
-For the text in the parentheses, replace spaces and special characters with a hyphen and make all characters lowercase.
-
-Test all links!
-
-External links: Use this code: [link text](http://url)
-
-To create a link that opens in a new window or tab, use this code: link text
-
-Test all links!
-
-Parent topic:[Projects ](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/manage-projects.html)
-"
-FD32E17FF88251CDFC3FA01A1AD8EEBDA98EDA06_0,FD32E17FF88251CDFC3FA01A1AD8EEBDA98EDA06," Accessing asset details
-
-Display details about an asset and preview data assets in a deployment space.
-
-To display details about the asset, click the asset name. For example, click a model name to view details such as the associated software and hardware specifications, the model creation date, and more. Some details, such as the model name, description, and tags, are editable.
-
-For data assets, you can also preview the data.
-
-"
-FD32E17FF88251CDFC3FA01A1AD8EEBDA98EDA06_1,FD32E17FF88251CDFC3FA01A1AD8EEBDA98EDA06," Previewing data assets
-
-To preview a data asset, click the data asset name.
-
-
-
-* User's access to the data is based on the API layer. This means that if user's bearer token allows for viewing data, the data preview is displayed.
-* For tabular data, only a subset of the data is displayed. Also, column names are displayed but their data types are not inferred.
-* For data in XLS files, only the first worksheet is displayed for preview.
-* All data from Cloud Object Storage connectors is assumed to be tabular data.
-
-
-
-MIME types supported for preview:
-
-
-
- Format Mime types
-
- Image image/bmp, image/cmu-raster, image/fif, image/florian, image/g3fax, image/gif, image/ief, image/jpeg, image/jutvision, image/naplps, image/pict, image/png, image/svg+xml, image/vnd.net-fpx, image/vnd.rn-realflash, image/vnd.rn-realpix, image/vnd.wap.wbmp, image/vnd.xiff, image/x-cmu-raster, image/x-dwg, image/x-icon, image/x-jg, image/x-jps, image/x-niff, image/x-pcx, image/x-pict, image/x-portable-anymap, image/x-portable-bitmap, image/x-portable-greymap, image/x-portable-pixmap, image/x-quicktime, image/x-rgb, image/x-tiff, image/x-windows-bmp, image/x-xwindowdump, image/xbm, image/xpm
-"
-FD32E17FF88251CDFC3FA01A1AD8EEBDA98EDA06_2,FD32E17FF88251CDFC3FA01A1AD8EEBDA98EDA06," Text application/json, text/asp, text/css, text/csv, text/html, text/mcf, text/pascal, text/plain, text/richtext, text/scriplet, text/tab-separated-values, text/tab-separated-values, text/uri-list, text/vnd.abc, text/vnd.fmi.flexstor, text/vnd.rn-realtext, text/vnd.wap.wml, text/vnd.wap.wmlscript, text/webviewhtml, text/x-asm, text/x-audiosoft-intra, text/x-c, text/x-component, text/x-fortran, text/x-h, text/x-java-source, text/x-la-asf, text/x-m, text/x-pascal, text/x-script, text/x-script.csh, text/x-script.elisp, text/x-script.ksh, text/x-script.lisp, text/x-script.perl, text/x-script.perl-module, text/x-script.python, text/x-script.rexx, text/x-script.tcl, text/x-script.tcsh, text/x-script.zsh, text/x-server-parsed-html, text/x-setext, text/x-sgml, text/x-speech, text/x-uil, text/x-uuencode, text/x-vcalendar, text/xml
- Tabular data text/csv, application/excel, application/vnd.ms-excel, application/vnd.openxmlformats-officedocument.spreadsheetml.sheet, data from connections
-
-
-
-Parent topic:[Assets in deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html)
-"
-B518A7A2D4AA3B05564C965889116F6A6151A34B_0,B518A7A2D4AA3B05564C965889116F6A6151A34B," Authenticating for programmatic access
-
-To use Watson Machine Learning with the Python client library or the REST API, you must authenticate to secure your work. Learn about the different ways to authenticate and how to apply them to the service of your choosing.
-
-You use IBM Cloud® Identity and Access Management (IAM) to make authenticated requests to public IBM Watson™ services. With IAM access policies, you can assign access to more than one resource from a single key. In addition, a user, service ID, and service instance can hold multiple API keys.
-
-"
-B518A7A2D4AA3B05564C965889116F6A6151A34B_1,B518A7A2D4AA3B05564C965889116F6A6151A34B," Security overview
-
-Refer to the section that describes your security needs.
-
-
-
-* [Authentication credentials](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-authentication.html?context=cdpaas&locale=enterminology)
-* [Python client](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-authentication.html?context=cdpaas&locale=enpython-client)
-* [Rest API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-authentication.html?context=cdpaas&locale=enrest-api)
-
-
-
-"
-B518A7A2D4AA3B05564C965889116F6A6151A34B_2,B518A7A2D4AA3B05564C965889116F6A6151A34B," Authentication credentials
-
-These terms relate to the security requirements described in this topic.
-
-
-
-* API keys allow you to easily authenticate when you are using the Python client or APIs and can be used across multiple services. API Keys are considered confidential because they are used to grant access. Treat all API keys as you would a password because anyone with your API key can access your service.
-* An IAM token is an authentication token that is required to access IBM Cloud services. You can generate a token by using your API key in the token request. For details on using IAM tokens, refer to [Authenticating to Watson Machine Learning API](https://cloud.ibm.com/apidocs/machine-learningauthentication).
-
-
-
-To authenticate to a service through its API, pass your credentials to the API. You can pass either a bearer token in an authorization header or an API key.
-
-"
-B518A7A2D4AA3B05564C965889116F6A6151A34B_3,B518A7A2D4AA3B05564C965889116F6A6151A34B," Generating an API key
-
-To generate an API key from your IBM Cloud user account, go to [Manage access and users - API Keys](https://cloud.ibm.com/iam/apikeys) and create or select an API key for your user account.
-
-You can also generate and rotate API keys from Profile and settings > User API key. For more information, see [Managing the user API key](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-apikeys.html).
-
-"
-B518A7A2D4AA3B05564C965889116F6A6151A34B_4,B518A7A2D4AA3B05564C965889116F6A6151A34B," Authenticate with an IAM token
-
-IAM tokens are temporary security credentials that are valid for 60 minutes. When a token expires, you generate a new one. Tokens can be useful for temporary access to resources. For more information, see [Generating an IBM Cloud IAM token by using an API key](https://cloud.ibm.com/docs/account?topic=account-iamtoken_from_apikey).
-
-"
-B518A7A2D4AA3B05564C965889116F6A6151A34B_5,B518A7A2D4AA3B05564C965889116F6A6151A34B," Getting a service-level token
-
-You can also authenticate with a service-level token. To generate a service-level token:
-
-
-
-1. Refer to the IBM Cloud instructions for [creating a Service ID](https://cloud.ibm.com/iam/serviceids).
-2. Generate an API key for that Service ID.
-3. Open the space where you plan to keep your deployable assets.
-4. On the Access control tab, add the Service ID and assign an access role of Admin or Editor.
-
-
-
-You can use the service-level token with your API scoring requests.
-
-"
-B518A7A2D4AA3B05564C965889116F6A6151A34B_6,B518A7A2D4AA3B05564C965889116F6A6151A34B," Interfaces
-
-
-
-* [Python client](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-authentication.html?context=cdpaas&locale=enpython)
-* [REST API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-authentication.html?context=cdpaas&locale=enrest-api)
-
-
-
-"
-B518A7A2D4AA3B05564C965889116F6A6151A34B_7,B518A7A2D4AA3B05564C965889116F6A6151A34B," Python client
-
-Refer to: [Watson Machine Learning Python client ](https://ibm.github.io/watson-machine-learning-sdk/)
-
-To create an instance of the Watson Machine Learning Python client object, you need to pass your credentials to Watson Machine Learning API client.
-
-wml_credentials = {
-""apikey"":""123456789"",
-""url"": "" https://HIJKL""
-}
-from ibm_watson_machine_learning import APIClient
-wml_client = APIClient(wml_credentials)
-
-Note:Even though you do not explicitly provide an instance_id, it will be picked up from the associated space or project for billing purposes. For details on plans and billing for Watson Machine Learning services, refer to [Watson Machine Learning plans and runtime usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html).
-
-Refer to [sample notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html) for examples of how to authenticate and then score a model by using the Python client.
-
-"
-B518A7A2D4AA3B05564C965889116F6A6151A34B_8,B518A7A2D4AA3B05564C965889116F6A6151A34B," REST API
-
-Refer to: [Watson Machine Learning REST API ](https://cloud.ibm.com/apidocs/machine-learning)
-
-To use the Watson Machine Learning REST API, you must obtain an IBM Cloud Identity and Access Management (IAM) token. In this example, you would supply your API key in place of the example key.
-
-"
-B518A7A2D4AA3B05564C965889116F6A6151A34B_9,B518A7A2D4AA3B05564C965889116F6A6151A34B," cURL example
-
-curl -k -X POST
---header ""Content-Type: application/x-www-form-urlencoded""
---header ""Accept: application/json""
---data-urlencode ""grant_type=urn:ibm:params:oauth:grant-type:apikey""
---data-urlencode ""apikey=123456789""
-""https://iam.cloud.ibm.com/identity/token""
-
-The obtained IAM token needs to be prefixed with the word Bearer, and passed in the Authorization header for API calls.
-
-"
-B518A7A2D4AA3B05564C965889116F6A6151A34B_10,B518A7A2D4AA3B05564C965889116F6A6151A34B," Python example
-
-import requests
-
- Paste your Watson Machine Learning service apikey here
-
-apikey = ""123456789""
-
- Get an IAM token from IBM Cloud
-url = ""https://iam.cloud.ibm.com/identity/token""
-headers = { ""Content-Type"" : ""application/x-www-form-urlencoded"" }
-data = ""apikey="" + apikey + ""&grant_type=urn:ibm:params:oauth:grant-type:apikey""
-response = requests.post( url, headers=headers, data=data, auth=( apikey )
-iam_token = response.json()[""access_token""]
-
-"
-B518A7A2D4AA3B05564C965889116F6A6151A34B_11,B518A7A2D4AA3B05564C965889116F6A6151A34B," Node.js example
-
-var btoa = require( ""btoa"" );
-var request = require( 'request' );
-
-// Paste your Watson Machine Learning service apikey here
-var apikey = ""123456789"";
-
-// Use this code as written to get an access token from IBM Cloud REST API
-//
-var IBM_Cloud_IAM_uid = ""bx"";
-var IBM_Cloud_IAM_pwd = ""bx"";
-
-var options = { url : ""https://iam.cloud.ibm.com/identity/token"",
-headers : { ""Content-Type"" : ""application/x-www-form-urlencoded"",
-""Authorization"" : ""Basic "" + btoa( IBM_Cloud_IAM_uid + "":"" + IBM_Cloud_IAM_pwd ) },
-body : ""apikey="" + apikey + ""&grant_type=urn:ibm:params:oauth:grant-type:apikey"" };
-
-request.post( options, function( error, response, body )
-{
-var iam_token = JSON.parse( body )[""access_token""];
-} );
-
-Parent topic:[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html)
-"
-CD27E36E95AE5324468C33CF3A112DC1611CA74C_0,CD27E36E95AE5324468C33CF3A112DC1611CA74C," Customizing with third-party and private Python libraries
-
-If your model requires custom components such as user-defined transformers, estimators, or user-defined tensors, you can create a custom software specification that is derived from a base, or a predefined specification. Python functions and Python scripts also support custom software specifications.
-
-You can use custom software specification to reference any third-party libraries, user-created Python packages, or both. Third-party libraries or user-created Python packages must be specified as package extensions so that they can be referenced in a custom software specification.
-
-You can customize deployment runtimes in these ways:
-
-
-
-* [Define customizations in a Watson Studio project and then promote them to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-create-custom-software-spec.html?context=cdpaas&locale=encustom-ws)
-* [Create package extensions and custom software specifications in a deployment space by using the Watson Machine Learning Python client](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-create-custom-software-spec.html?context=cdpaas&locale=encustom-wml)
-
-
-
-For more information, see [Troubleshooting](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-create-custom-software-spec.html?context=cdpaas&locale=ents).
-
-"
-CD27E36E95AE5324468C33CF3A112DC1611CA74C_1,CD27E36E95AE5324468C33CF3A112DC1611CA74C," Defining customizations in a Watson Studio project and then promoting them to a deployment space
-
-Environments in Watson Studio projects can be customized to include third-party libraries that can be installed from Anaconda or from the PyPI repository.
-
-For more information, see [Environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html).
-
-As part of custom environment creation, these steps are performed internally (visible to the user):
-
-
-
-* A package extension that contains the details of third-party libraries is created in conda YAML format.
-* A custom software specification with the same name as the custom environment is created and the package extension that is created is associated with this custom software specification.
-
-
-
-The models or Python functions/scripts created with the custom environment must reference the custom software specification when they are saved in Watson Machine Learning repository in the project scope.
-
-"
-CD27E36E95AE5324468C33CF3A112DC1611CA74C_2,CD27E36E95AE5324468C33CF3A112DC1611CA74C," Propagating software specifications and package extensions from projects to deployment spaces
-
-To export custom software specifications and package extensions that were created in a Watson Studio project to a deployment space:
-
-
-
-1. From your project interface, click the Manage tab.
-2. Select Environments.
-3. Click the Templates tab.
-4. From your custom environment's Options menu, select Promote to space.
-
-
-
-
-
-Alternatively, when you promote any model or Python function that is associated with a custom environment from a Watson Studio project to a deployment space, the associated custom software specification and package extension is also promoted to the deployment space.
-
-If you want to update software specifications and package extensions after you promote them to deployment space, follow these steps:
-
-
-
-1. In the deployment space, delete the software specifications, package extensions, and associated models (optional) by using the Watson Machine Learning Python client.
-2. In a project, promote the model, function, or script that is associated with the changed custom software specification and package extension to the space.
-
-
-
-Software specifications are also included when you import a project or space that includes one.
-
-"
-CD27E36E95AE5324468C33CF3A112DC1611CA74C_3,CD27E36E95AE5324468C33CF3A112DC1611CA74C," Creating package extensions and custom software specifications in a deployment space by using the Watson Machine Learning Python client
-
-You can use the Watson Machine Learning APIs or Python client to define a custom software specification that is derived from a base specification.
-
-High-level steps to create a custom software specification that uses third-party libraries or user-created Python packages:
-
-
-
-1. Optional: [Save a conda YAML file that contains a list of third-party libraries](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-create-custom-software-spec.html?context=cdpaas&locale=ensave-conda-yaml) or [save a user-created Python library and create a package extension](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-create-custom-software-spec.html?context=cdpaas&locale=ensave-user-created).
-
-Note: This step is not required if the model does not have any dependency on a third-party library or a user-created Python library.
-2. Create a custom software specification
-3. Add a reference of the package extensions to the custom software specification that you created.
-
-
-
-"
-CD27E36E95AE5324468C33CF3A112DC1611CA74C_4,CD27E36E95AE5324468C33CF3A112DC1611CA74C," Saving a conda YAML file that contains a list of third-party libraries
-
-To save a conda YAML file that contains a list of third-party libraries as a package extension and create a custom software specification that is linked to the package extension:
-
-
-
-1. Authenticate and create the client.
-
-Refer to [Authentication](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-authentication.html).
-2. Create and set the default deployment space, then list available software specifications.
-
-metadata = {
-wml_client.spaces.ConfigurationMetaNames.NAME:
-'examples-create-software-spec',
-wml_client.spaces.ConfigurationMetaNames.DESCRIPTION:
-'For my models'
-}
-space_details = wml_client.spaces.store(meta_props=metadata)
-space_uid = wml_client.spaces.get_id(space_details)
-
- set the default space
-wml_client.set.default_space(space_uid)
-
- see available meta names for software specs
-print('Available software specs configuration:', wml_client.software_specifications.ConfigurationMetaNames.get())
-wml_client.software_specifications.list()
-
-asset_id = 'undefined'
-pe_asset_id = 'undefined'
-3. Create the metadata for package extensions to add to the base specification.
-
-pe_metadata = {
-wml_client.package_extensions.ConfigurationMetaNames.NAME:
-'My custom library',
- optional:
- wml_client.software_specifications.ConfigurationMetaNames.DESCRIPTION:
-wml_client.package_extensions.ConfigurationMetaNames.TYPE:
-'conda_yml'
-}
-4. Create a yaml file that contains the list of packages and then save it as customlibrary.yaml.
-
-Example yaml file:
-
-name: add-regex-package
-dependencies:
-- regex
-
-"
-CD27E36E95AE5324468C33CF3A112DC1611CA74C_5,CD27E36E95AE5324468C33CF3A112DC1611CA74C,"For more information, see [Examples of customizations](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/example-customizations.html).
-5. Store package extension information.
-
-pe_asset_details = wml_client.package_extensions.store(
-meta_props=pe_metadata,
-file_path='customlibrary.yaml'
-)
-pe_asset_id = wml_client.package_extensions.get_id(pe_asset_details)
-6. Create the metadata for the software specification and store the software specification.
-
- Get the id of the base software specification
-base_id = wml_client.software_specifications.get_id_by_name('default_py3.9')
-
- create the metadata for software specs
-ss_metadata = {
-wml_client.software_specifications.ConfigurationMetaNames.NAME:
-'Python 3.9 with pre-installed ML package',
-wml_client.software_specifications.ConfigurationMetaNames.DESCRIPTION:
-'Adding some custom libraries like regex', optional
-wml_client.software_specifications.ConfigurationMetaNames.BASE_SOFTWARE_SPECIFICATION:
-{'guid': base_id},
-wml_client.software_specifications.ConfigurationMetaNames.PACKAGE_EXTENSIONS:
-[{'guid': pe_asset_id}]
-}
-
- store the software spec
-ss_asset_details = wml_client.software_specifications.store(meta_props=ss_metadata)
-
- get the id of the new asset
-asset_id = wml_client.software_specifications.get_id(ss_asset_details)
-
- view new software specification details
-import pprint as pp
-
-ss_asset_details = wml_client.software_specifications.get_details(asset_id)
-print('Package extensions', pp.pformat(
-ss_asset_details['entity']['package_extensions']
-))
-
-
-
-"
-CD27E36E95AE5324468C33CF3A112DC1611CA74C_6,CD27E36E95AE5324468C33CF3A112DC1611CA74C," Saving a user-created Python library and creating a package extension
-
-For more information, see [Requirements for using custom components in models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-custom_libs_overview.html).
-
-To save a user-created Python package as a package extension and create a custom software specification that is linked to the package extension:
-
-
-
-1. Authenticate and create the client.
-
-Refer to [Authentication](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-authentication.html).
-2. Create and set the default deployment space, then list available software specifications.
-
-metadata = {
-wml_client.spaces.ConfigurationMetaNames.NAME:
-'examples-create-software-spec',
-wml_client.spaces.ConfigurationMetaNames.DESCRIPTION:
-'For my models'
-}
-space_details = wml_client.spaces.store(meta_props=metadata)
-space_uid = wml_client.spaces.get_id(space_details)
-
- set the default space
-wml_client.set.default_space(space_uid)
-
- see available meta names for software specs
-print('Available software specs configuration:', wml_client.software_specifications.ConfigurationMetaNames.get())
-wml_client.software_specifications.list()
-
-asset_id = 'undefined'
-pe_asset_id = 'undefined'
-3. Create the metadata for package extensions to add to the base specification.
-
-Note:You can specify pip_zip only as a value for the wml_client.package_extensions.ConfigurationMetaNames.TYPE metadata property.
-
-pe_metadata = {
-wml_client.package_extensions.ConfigurationMetaNames.NAME:
-'My Python library',
- optional:
- wml_client.software_specifications.ConfigurationMetaNames.DESCRIPTION:
-"
-CD27E36E95AE5324468C33CF3A112DC1611CA74C_7,CD27E36E95AE5324468C33CF3A112DC1611CA74C,"wml_client.package_extensions.ConfigurationMetaNames.TYPE:
-'pip.zip'
-}
-4. Specify the path of the user-created Python library.
-
-python_lib_file_path=""my-python-library-0.1.zip""
-
-For more information, see [Requirements for using custom components in models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-custom_libs_overview.html).
-5. Store package extension information.
-
-pe_asset_details = wml_client.package_extensions.store(
-meta_props=pe_metadata,
-file_path=python_lib_file_path
-)
-pe_asset_id = wml_client.package_extensions.get_id(pe_asset_details)
-6. Create the metadata for the software specification and store the software specification.
-
- Get the id of the base software specification
-base_id = wml_client.software_specifications.get_id_by_name('default_py3.9')
-
- create the metadata for software specs
-ss_metadata = {
-wml_client.software_specifications.ConfigurationMetaNames.NAME:
-'Python 3.9 with pre-installed ML package',
-wml_client.software_specifications.ConfigurationMetaNames.DESCRIPTION:
-'Adding some custom libraries like regex', optional
-wml_client.software_specifications.ConfigurationMetaNames.BASE_SOFTWARE_SPECIFICATION:
-{'guid': base_id},
-wml_client.software_specifications.ConfigurationMetaNames.PACKAGE_EXTENSIONS:
-[{'guid': pe_asset_id}]
-}
-
- store the software spec
-ss_asset_details = wml_client.software_specifications.store(meta_props=ss_metadata)
-
- get the id of the new asset
-asset_id = wml_client.software_specifications.get_id(ss_asset_details)
-
- view new software specification details
-import pprint as pp
-
-"
-CD27E36E95AE5324468C33CF3A112DC1611CA74C_8,CD27E36E95AE5324468C33CF3A112DC1611CA74C,"ss_asset_details = wml_client.software_specifications.get_details(asset_id)
-print('Package extensions', pp.pformat(
-ss_asset_details['entity']['package_extensions']
-))
-
-
-
-"
-CD27E36E95AE5324468C33CF3A112DC1611CA74C_9,CD27E36E95AE5324468C33CF3A112DC1611CA74C," Troubleshooting
-
-When a conda yml based custom library installation fails with this error: Encountered error while installing custom library, try these alternatives:
-
-
-
-* Use a different version of the same package that is available in Anaconda for the concerned Python version.
-* Install the library from the pypi repository, by using pip. Edit the conda yml installation file contents:
-
-name:
-dependencies:
-- numpy
-- pip:
-- pandas==1.2.5
-
-
-
-Parent topic:[Customizing deployment runtimes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-customize.html)
-"
-9DFF39B0FB5FE6195AA75E50040B6D669FCE2BB6_0,9DFF39B0FB5FE6195AA75E50040B6D669FCE2BB6," Requirements for using custom components in ML models
-
-You can define your own transformers, estimators, functions, classes, and tensor operations in models that you deploy in IBM Watson Machine Learning as online deployments.
-
-"
-9DFF39B0FB5FE6195AA75E50040B6D669FCE2BB6_1,9DFF39B0FB5FE6195AA75E50040B6D669FCE2BB6," Defining and using custom components
-
-To use custom components in your models, you need to package your custom components in a [Python distribution package](https://packaging.python.org/glossary/term-distribution-package).
-
-"
-9DFF39B0FB5FE6195AA75E50040B6D669FCE2BB6_2,9DFF39B0FB5FE6195AA75E50040B6D669FCE2BB6," Package requirements
-
-
-
-* The package type must be: [source distribution](https://packaging.python.org/glossary/term-source-distribution-or-sdis) (distributions of type Wheel and Egg are not supported)
-* The package file format must be: .zip
-* Any third-party dependencies for your custom components must be installable by pip and must be passed to the install_requires argument of the setup function of the setuptools library.
-
-
-
-Refer to: [Creating a source distribution](https://docs.python.org/2/distutils/sourcedist.html)
-
-"
-9DFF39B0FB5FE6195AA75E50040B6D669FCE2BB6_3,9DFF39B0FB5FE6195AA75E50040B6D669FCE2BB6," Storing your custom package
-
-You must take extra steps when you store your trained model in the Watson Machine Learning repository:
-
-
-
-* Store your custom package in the [Watson Machine Learning repository](https://ibm.github.io/watson-machine-learning-sdk/core_api.htmlibm_watson_machine_learning.runtimes.Runtimes.store_library) (use the runtimes.store_library function from the Watson Machine Learning Python client, or the store libraries Watson Machine Learning CLI command.)
-* Create a runtime resource object that references your stored custom package, and then [store the runtime resource object](https://ibm.github.io/watson-machine-learning-sdk/core_api.htmlibm_watson_machine_learning.runtimes.Runtimes.store) in the Watson Machine Learning repository (use the runtimes.store function, or the store runtimes command.)
-* When you store your trained model in the Watson Machine Learning repository, reference your stored runtime resource in the [metadata](https://ibm.github.io/watson-machine-learning-sdk/core_api.htmlclient.Repository.store_model) that is passed to the store_model function (or the store command.)
-
-
-
-"
-9DFF39B0FB5FE6195AA75E50040B6D669FCE2BB6_4,9DFF39B0FB5FE6195AA75E50040B6D669FCE2BB6," Supported frameworks
-
-These frameworks support custom components:
-
-
-
-* Scikit-learn
-* XGBoost
-* Tensorflow
-* Python Functions
-* Python Scripts
-* Decision Optimization
-
-
-
-For more information, see [Supported frameworks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/pm_service_supported_frameworks.html)
-
-Parent topic:[Customizing deployment runtimes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-customize.html)
-"
-F8E12F246225210B8C984D447B3E15867D2E8869,F8E12F246225210B8C984D447B3E15867D2E8869," Customizing Watson Machine Learning deployment runtimes
-
-Create custom Watson Machine Learning deployment runtimes with libraries and packages that are required for your deployments. You can build custom images based on deployment runtime images available in IBM Watson Machine Learning. The images contain preselected open source libraries and selected IBM libraries.
-
-For a list of requirements for creating private Python packages, refer to [Requirements for using custom components in ML models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-custom_libs_overview.html).
-
-You can customize your deployment runtimes by [customizing Python runtimes with third-party libraries and user-created Python packages](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-create-custom-software-spec.html)
-
-Parent topic:[Deploying and managing assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/wmls/wmls-deploy-overview.html)
-"
-82512A3915BF43DF08D9106027A67D5E059B2719_0,82512A3915BF43DF08D9106027A67D5E059B2719," Creating an SPSS Modeler batch job with multiple data sources
-
-In an SPSS Modeler flow, it's common to have multiple import and export nodes, where multiple import nodes can be fetching data from one or more relational databases. Learn how to use Watson Machine Learning to create an SPSS Modeler batch job with multiple data sources from relational databases.
-
-Note:The examples use IBM Db2 and IBM Db2 Warehouse, referred to in examples as dashdb.
-
-"
-82512A3915BF43DF08D9106027A67D5E059B2719_1,82512A3915BF43DF08D9106027A67D5E059B2719," Connecting to multiple relational databases as input to a batch job
-
-The number of import nodes in an SPSS Modeler flow can vary. You might use as many as 60 or 70. However, the number of distinct connections to databases in these cases are just a few, though the table names that are accessed through the connections vary. Rather than specifying the details for every table connection, the approach that is described here focuses on the database connections. Therefore, the batch jobs accept a list of data connections or references by node name that are mapped to connection names in the SPSS Modeler flow's import nodes.
-
-For example, assume that if a flow has 30 nodes, only three database connections are used to connect to 30 different tables. In this case, you submit three connections (C1, C2, and C3) to the batch job. C1, C2, and C3 are connection names in the import node of the flow and the node name in the input of the batch job.
-
-When a batch job runs, the data reference for a node is provided by mapping the node name with the connection name in the import node. This example illustrates the steps for creating the mapping.
-
-The following diagram shows the flow from model creation to job submission:
-
-
-
-Limitation: The connection reference for a node in a flow is overridden by the reference that is received from the batch job. However, the table name in the import or export node is not overridden.
-
-"
-82512A3915BF43DF08D9106027A67D5E059B2719_2,82512A3915BF43DF08D9106027A67D5E059B2719," Deployment scenario with example
-
-In this example, an SPSS model is built by using 40 import nodes and a single output. The model has the following configuration:
-
-
-
-* Connections to three databases: 1 Db2 Warehouse (dashDB) and 2 Db2.
-* The import nodes are read from 40 tables (30 from Db2 Warehouse and 5 each from the Db2 databases).
-* A single output table is written to a Db2 database.
-
-
-
-
-
-"
-82512A3915BF43DF08D9106027A67D5E059B2719_3,82512A3915BF43DF08D9106027A67D5E059B2719," Example
-
-These steps demonstrate how to create the connections and identify the tables.
-
-
-
-1. Create a connection in your project.
-
-To run the SPSS Modeler flow, you start in your project and create a connection for each of the three databases your model connects to. You then configure each import node in the flow to point to a table in one of the connected databases.
-
-For this example, the database connections in the project are named dashdb_conn, db2_conn1, and db2_conn2.
-2. Configure Data Asset to import nodes in your SPSS Modeler flow with connections.
-
-Configure each node in the flow to reference one of the three connections you created (dashdb_conn, db2_conn1, and db2_conn2), then specify a table for each node.
-
-Note: You can change the name of the connection at the time of the job run. The table names that you select in the flow are referenced when the job runs. You can't overwrite or change them.
-3. Save the SPSS model to the Watson Machine Learning repository.
-
-For this example, it's helpful to provide the input and output schema when you are saving the model. It simplifies the process of identifying each input when you create and submit the batch job in the Watson Studio user interface. Connections that are referenced in the Data Asset nodes of the SPSS Modeler flow must be provided in the node name field of the input schema. To find the node name, double-click the Data Asset import node in your flow to open its properties:
-
-
-
-Note:SPSS models that are saved without schemas are still supported for jobs, but you must enter node name fields manually and provide the data asset when you submit the job.
-
-This code sample shows how to save the input schema when you save the model (Endpoint: POST /v4/models).
-
-{
-""name"": ""SPSS Drug Model"",
-""label_column"": ""label"",
-"
-82512A3915BF43DF08D9106027A67D5E059B2719_4,82512A3915BF43DF08D9106027A67D5E059B2719,"""type"": ""spss-modeler_18.1"",
-""runtime"": {
-""href"": ""/v4/runtimes/spss-modeler_18.1""
-},
-""space"": {
-""href"": ""/v4/spaces/""
-},
-""schemas"": {
-""input"": [ { ""id"": ""dashdb_conn"", ""fields"": ] },
-{ ""id"": ""db2_conn1 "", ""fields"": ] } ,
-{ ""id"": ""db2_conn2"", ""fields"": ] } ],
-""output"": [{ ""id"": ""db2_conn2 "",""fields"": ] }]
-}
-}
-
-Note: The number of fields in each of these connections doesn't matter. They’re not validated or used. What's important is the number of connections that are used.
-4. Create the batch deployment for the SPSS model.
-
-For SPSS models, the creation process of the batch deployment job is the same. You can submit the deployment request with the model that was created in the previous step.
-5. Submit SPSS batch jobs.
-
-You can submit a batch job from the Watson Studio user interface or by using the REST API. If the schema is saved with the model, the Watson Studio user interface makes it simple to accept input from the connections specified in the schema. Because you already created the data connections, you can select a connected data asset for each node name field that displays in the Watson Studio user interface as you define the job.
-
-The name of the connection that is created at the time of job submission can be different from the one used at the time of model creation. However, it must be assigned to the node name field.
-
-
-
-"
-82512A3915BF43DF08D9106027A67D5E059B2719_5,82512A3915BF43DF08D9106027A67D5E059B2719," Submitting a job when schema is not provided
-
-If the schema isn't provided in the model metadata at the time the model is saved, you must enter the import node name manually. Further, you must select the data asset in the Watson Studio user interface for each connection. Connections that are referenced in the Data Asset import nodes of the SPSS Modeler flow must be provided in the node name field of the import/export data references.
-
-"
-82512A3915BF43DF08D9106027A67D5E059B2719_6,82512A3915BF43DF08D9106027A67D5E059B2719," Specifying the connections for a job with data asset
-
-This code sample demonstrates how to specify the connections for a job that is submitted by using the REST API (Endpoint: /v4/deployment_jobs).
-
-{
-""deployment"": {
-""href"": ""/v4/deployments/""
-},
-""scoring"": {
-""input_data_references"": [
-{
-""id"": ""dashdb_conn"",
-""name"": ""dashdb_conn"",
-""type"": ""data_asset"",
-""connection"": {},
-""location"": {
-""href"": ""/v2/assets/?space_id=""
-},
-""schema"": {}
-},
-{
-""id"": ""db2_conn1 "",
-""name"": ""db2_conn1 "",
-""type"": ""data_asset"",
-""connection"": {},
-""location"": {
-""href"": ""/v2/assets/?space_id=""
-},
-""schema"": {}
-},
-{
-""id"": ""db2_conn2 "",
-""name"": ""db2_conn2"",
-""type"": ""data_asset"",
-""connection"": {},
-""location"": {
-""href"": ""/v2/assets/?space_id=""
-},
-""schema"": {}
-}],
-""output_data_reference"": {
-""id"": ""db2_conn2""
-""name"": ""db2_conn2"",
-""type"": ""data_asset "",
-""connection"": {},
-""location"": {
-""href"": ""/v2/assets/?space_id=""
-},
-""schema"": {}
-}
-}
-
-Parent topic:[Creating a batch deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-details.html)
-"
-315971AE6C6A4EEDE13E9E1449B2A36F548B928F_0,315971AE6C6A4EEDE13E9E1449B2A36F548B928F," Deleting a deployment
-
-Delete your deployment when you no longer need it to free up resources. You can delete a deployment from a deployment space, or programmatically, by using the Python client or Watson Machine Learning APIs.
-
-"
-315971AE6C6A4EEDE13E9E1449B2A36F548B928F_1,315971AE6C6A4EEDE13E9E1449B2A36F548B928F," Deleting a deployment from a space
-
-To remove a deployment:
-
-
-
-1. Open the Deployments page of your deployment space.
-2. Choose Delete from the action menu for the deployment name.
-
-
-
-
-"
-315971AE6C6A4EEDE13E9E1449B2A36F548B928F_2,315971AE6C6A4EEDE13E9E1449B2A36F548B928F," Deleting a deployment by using the Python client
-
-Use the following method to delete the deployment.
-
-client.deployments.delete(deployment_uid)
-
-Returns a SUCCESS message. To check that the deployment was removed, you can list deployments and make sure that the deleted deployment is no longer listed.
-
-client.deployments.list()
-
-Returns:
-
----- ---- ----- ------- -------------
-GUID NAME STATE CREATED ARTIFACT_TYPE
----- ---- ----- ------- -------------
-
-"
-315971AE6C6A4EEDE13E9E1449B2A36F548B928F_3,315971AE6C6A4EEDE13E9E1449B2A36F548B928F," Deleting a deployment by using the REST API
-
-Use the DELETE method for deleting a deployment.
-
-DELETE /ml/v4/deployments/{deployment_id}
-
-For more information, see [Delete](https://cloud.ibm.com/apidocs/machine-learningdeployments-delete).
-
-For example, see the following code snippet:
-
-curl --location --request DELETE 'https://us-south.ml.cloud.ibm.com/ml/v4/deployments/:deployment_id?space_id=&version=2020-09-01'
-
-Parent topic:[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html)
-"
-53019DD52EDB5790460DFF9A02363856B83CAFB7_0,53019DD52EDB5790460DFF9A02363856B83CAFB7," Managing predictive deployments
-
-For proper deployment, you must set up a deployment space and then select and configure a specific deployment type. After you deploy assets, you can manage and update them to make sure they perform well and to monitor their accuracy.
-
-To be able to deploy assets from a space, you must have a machine learning service instance that is provisioned and associated with that space. For more information, see [Associating a service instance with a space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-service-instance.htmlassociating-instance-with-space).
-
-Online and batch deployments provide simple ways to create an online scoring endpoint or do batch scoring with your models.
-
-If you want to implement a custom logic:
-
-
-
-* Create a Python function to use for creating your online endpoint
-* Write a notebook or script for batch scoring
-
-
-
-Note: If you create a notebook or a script to perform batch scoring such an asset runs as a platform job, not as a batch deployment.
-
-"
-53019DD52EDB5790460DFF9A02363856B83CAFB7_1,53019DD52EDB5790460DFF9A02363856B83CAFB7," Deployable assets
-
-Following is the list of assets that you can deploy from a Watson Machine Learning space, with information on applicable deployment types:
-
-
-
-List of assets that you can deploy
-
- Asset type Batch deployment Online deployment
-
- Functions Yes Yes
- Models Yes Yes
- Scripts Yes No
-
-
-
-An R Shiny app is the only asset type that is supported for web app deployments.
-
-Notes:
-
-
-
-* A deployment job is a way of running a batch deployment, or a self-contained asset like a flow in Watson Machine Learning. You can select the input and output for your job and choose to run it manually or on a schedule. For more information, see [Creating a deployment job](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-jobs.html).
-* Notebooks and flows use notebook environments. You can run them in a deployment space, but they are not deployable.
-
-
-
-For more information, see:
-
-
-
-* [Creating online deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-online.html)
-* [Creating batch deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-details.html)
-* [Deploying Python functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function.html)
-* [Deploying scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-script.html)
-
-
-
-After you deploy assets, you can manage and update them to make sure they perform well and to monitor their accuracy. Some ways to manage or update a deployment are as follows:
-
-
-
-"
-53019DD52EDB5790460DFF9A02363856B83CAFB7_2,53019DD52EDB5790460DFF9A02363856B83CAFB7,"* [Manage deployment jobs](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-jobs.html). After you create one or more jobs, you can view and manage them from the Jobs tab of your deployment space.
-* [Update a deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-update.html). For example, you can replace a model with a better-performing version without having to create a new deployment.
-* [Scale a deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-scaling.html) to increase availability and throughput by creating replicas of the deployment.
-* [Delete a deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-delete.html) to remove a deployment and free up resources.
-
-
-
-"
-53019DD52EDB5790460DFF9A02363856B83CAFB7_3,53019DD52EDB5790460DFF9A02363856B83CAFB7," Learn more
-
-
-
-* [Full list of asset types that can be added to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html)
-
-
-
-Parent topic:[Deploying and managing models](https://dataplatform.cloud.ibm.com/docs/content/wsj/wmls/wmls-deploy-overview.html)
-"
-45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2_0,45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2," Writing deployable Python functions
-
-Learn how to write a Python function and then store it as an asset that allows for deploying models.
-
-For a list of general requirements for deployable functions refer to [General requirements for deployable functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=enreqs). For information on what happens during a function deployment, refer to [Function deployment process](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=enfundepro)
-
-"
-45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2_1,45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2," General requirements for deployable functions
-
-To be deployed successfully, a function must meet these requirements:
-
-
-
-* The Python function file on import must have the score function object as part of its scope. Refer to [Score function requirements](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=enscore)
-* Scoring input payload must meet the requirements that are listed in [Scoring input requirements](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=enscoinreq)
-* The output payload expected as output of score must include the schema of the score_response variable for status code 200. Note that the prediction parameter, with an array of JSON objects as its value, is mandatory in the score output.
-* When you use the Python client to save a Python function that contains a reference to an outer function, only the code in the scope of the outer function (including its nested functions) is saved. Therefore, the code outside the outer function's scope will not be saved and thus will not be available when you deploy the function.
-
-
-
-"
-45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2_2,45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2," Score function requirements
-
-
-
-* Two ways to add the score function object exist:
-
-
-
-* explicitly, by user
-* implicitly, by the method that is used to save the Python function as an asset in the Watson Machine Learning repository
-
-
-
-* The score function must accept a single, JSON input parameter.
-* The score function must return a JSON-serializable object (for example: dictionaries or lists)
-
-
-
-"
-45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2_3,45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2," Scoring input requirements
-
-
-
-* The scoring input payload must include an array with the name values, as shown in this example schema.
-
-{""input_data"": [!{
-""values"": ""Hello world""]]
-}]
-}
-
-Note:
-- The input_data parameter is mandatory in the payload.
-- The input_data parameter can also include additional name-value pairs.
-* The scoring input payload must be passed as input parameter value for score. This way you can ensure that the value of the score input parameter is handled accordingly inside the score.
-* The scoring input payload must match the input requirements for the concerned Python function.
-* The scoring input payload must include an array that matches the [Example input data schema](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=enexschema).
-
-
-
-"
-45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2_4,45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2," Example input data schema
-
-{""input_data"": [!{
-""values"": ""Hello world""]]
-}]
-}
-
-"
-45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2_5,45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2," Example Python code
-
-wml_python_function
-def my_deployable_function():
-
-def score( payload ):
-
-message_from_input_payload = payload.get(""input_data"")[0].get(""values"")[0]
-response_message = ""Received message - {0}"".format(message_from_input_payload)
-
- Score using the pre-defined model
-score_response = {
-'predictions': [{'fields': 'Response_message_field'],
-'values': response_message]]
-}]
-}
-return score_response
-
-return score
-
-score = my_deployable_function()
-
-You can test your function like this:
-
-input_data = { ""input_data"": [{ ""fields"": ""message"" ]!,
-""values"": ""Hello world"" ]]
-}
-]
-}
-function_result = score( input_data )
-print( function_result )
-
-It returns the message ""Hello world!"".
-
-"
-45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2_6,45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2," Function deployment process
-
-The Python code of your Function asset gets loaded as a Python module by the Watson Machine Learning engine by using an import statement. This means that the code will be executed exactly once (when the function is deployed or each time when the corresponding pod gets restarted). The score function that is defined by the Function asset is then called in every prediction request.
-
-"
-45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2_7,45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2," Handling deployable functions
-
-Use one of these methods to create a deployable Python function:
-
-
-
-* [Creating deployable functions through REST API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=enrest)
-* [Creating deployable functions through the Python client](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=enpy)
-
-
-
-"
-45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2_8,45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2," Creating deployable functions through REST API
-
-For REST APIs, because the Python function is uploaded directly through a file, the file must already contain the score function. Any one time import that needs to be done to be used later within the score function can be done within the global scope of the file. When this file is deployed as a Python function, the one-time imports available in the global scope get executed during the deployment and later simply reused with every prediction request.
-
-Important:The function archive must be a .gz file.
-
-Sample score function file:
-
-Score function.py
----------------------
-def score(input_data):
-return {'predictions': [{'values': 'Just a test']]}]}
-
-Sample score function with one time imports:
-
-import subprocess
-subprocess.check_output('pip install gensim --user', shell=True)
-import gensim
-
-def score(input_data):
-return {'predictions': [{'fields': 'gensim_version'], 'values': gensim.__version__]]}]}
-
-"
-45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2_9,45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2," Creating deployable functions through the Python client
-
-To persist a Python function as an asset, the Python client uses the wml_client.repository.store_function method. You can do that in two ways:
-
-
-
-* [Persisting a function through a file that contains the Python function](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=enpersfufile)
-* [Persisting a function through the function object](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=enpersfunob)
-
-
-
-"
-45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2_10,45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2," Persisting a function through a file that contains the Python function
-
-This method is the same as persisting the Python function file through REST APIs (score must be defined in the scope of the Python source file). For details, refer to [Creating deployable functions through REST API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=enrest).
-
-Important:When you are calling the wml_client.repository.store_function method, pass the file name as the first argument.
-
-"
-45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2_11,45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2," Persisting a function through the function object
-
-You can persist Python function objects by creating Python Closures with a nested function named score. The score function is returned by the outer function that is being stored as a function object, when called. This score function must meet the requirements that are listed in [General requirements for deployable functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html?context=cdpaas&locale=enreqs). In this case, any one time imports and initial setup logic must be added in the outer nested function so that they get executed during deployment and get used within the score function. Any recurring logic that is needed during the prediction request must be added within the nested score function.
-
-Sample Python function save by using the Python client:
-
-def my_deployable_function():
-
-import subprocess
-subprocess.check_output('pip install gensim', shell=True)
-import gensim
-
-def score(input_data):
-import
-message_from_input_payload = payload.get(""input_data"")[0].get(""values"")[0]
-response_message = ""Received message - {0}"".format(message_from_input_payload)
-
- Score using the pre-defined model
-score_response = {
-'predictions': [{'fields': 'Response_message_field', 'installed_lib_version'],
-'values': response_message, gensim.__version__]]
-}]
-}
-return score_response
-
-return score
-
-function_meta = {
-client.repository.FunctionMetaNames.NAME:""test_function"",
-client.repository.FunctionMetaNames.SOFTWARE_SPEC_ID: sw_spec_id
-}
-func_details = client.repository.store_function(my_deployable_function, function_meta)
-
-"
-45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2_12,45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2,"In this scenario, the Python function takes up the job of creating a Python file taht contains the score function and persisting the function file as an asset in the Watson Machine Learning repository:
-
-score = my_deployable_function()
-
-"
-45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2_13,45A1C384D8D6A730D73357E2BB3216EDBD2F7FF2," Learn more
-
-
-
-* [Python Closures](https://www.programiz.com/python-programming/closure)
-* [Closures](https://www.learnpython.org/en/Closures)
-* [Nested function, Scope of variable & closures in Python](https://www.codesdope.com/blog/article/nested-function-scope-of-variable-closures-in-pyth/)
-
-
-
-Parent topic:[Deploying Python functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function.html)
-"
-03FF997603B065D2DF1FBB49934CA8C348765ACF_0,03FF997603B065D2DF1FBB49934CA8C348765ACF," Deploying Python functions in Watson Machine Learning
-
-You can deploy Python functions in Watson Machine Learning the same way that you can deploy models. Your tools and apps can use the Watson Machine Learning Python client or REST API to send data to your deployed functions the same way that they send data to deployed models. Deploying Python functions gives you the ability to hide details (such as credentials). You can also preprocess data before you pass it to models. Additionally, you can handle errors and include calls to multiple models, all within the deployed function instead of in your application.
-
-"
-03FF997603B065D2DF1FBB49934CA8C348765ACF_1,03FF997603B065D2DF1FBB49934CA8C348765ACF," Sample notebooks for creating and deploying Python functions
-
-For examples of how to create and deploy Python functions by using the Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/), refer to these sample notebooks:
-
-
-
- Sample name Framework Techniques demonstrated
-
- [Use Python function to recognize hand-written digits](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/1eddc77b3a4340d68f762625d40b64f9) Python Use a function to store a sample model and deploy it.
- [Predict business for cars](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/61a8b600f1bb183e2c471e7a64299f0e) Hybrid(Tensorflow) Set up an AI definition Prepare the data Create a Keras model by using Tensorflow Deploy and score the model Define, store, and deploy a Python function
- [Deploy Python function for software specification](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/56825df5322b91daffd39426038808e9) Core Create a Python function Create a web service Score the model
-
-
-
-The notebooks demonstrate the six steps for creating and deploying a function:
-
-
-
-1. Define the function.
-2. Authenticate and define a space.
-3. Store the function in the repository.
-4. Get the software specification.
-5. Deploy the stored function.
-6. Send data to the function for processing.
-
-
-
-For links to other sample notebooks that use the Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/), refer to [Using Watson Machine Learning in a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html).
-
-"
-03FF997603B065D2DF1FBB49934CA8C348765ACF_2,03FF997603B065D2DF1FBB49934CA8C348765ACF," Increasing scalability for a function
-
-When you deploy a function from a deployment space or programmatically, a single copy of the function is deployed by default. To increase scalability, you can increase the number of replicas by editing the configuration of the deployment. More replicas allow for a larger volume of scoring requests.
-
-The following example uses the Python client API to set the number of replicas to 3.
-
-change_meta = {
-client.deployments.ConfigurationMetaNames.HARDWARE_SPEC: {
-""name"":""S"",
-""num_nodes"":3}
-}
-
-client.deployments.update(, change_meta)
-
-"
-03FF997603B065D2DF1FBB49934CA8C348765ACF_3,03FF997603B065D2DF1FBB49934CA8C348765ACF," Learn more
-
-
-
-* To learn more about defining a deployable Python function, see General requirements for deployable functions section in [Writing and storing deployable Python functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function-write.html).
-* You can deploy a function from a deployment space through the user interface. For more information, see [Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html).
-
-
-
-Parent topic:[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html)
-"
-8279C6C73A8DB1A593945E5EA339F9EFDE96A61E_0,8279C6C73A8DB1A593945E5EA339F9EFDE96A61E," Scaling a deployment
-
-When you create an online deployment for a model or function from a deployment space or programmatically, a single copy of the asset is deployed by default. To increase scalability and availability, you can increase the number of copies (replicas) by editing the configuration of the deployment. More copies allow for a larger volume of scoring requests.
-
-Deployments can be scaled in the following ways:
-
-
-
-* Update the configuration for a deployment in a deployment space.
-* Programmatically, using the Watson Machine Learning Python client library, or the Watson Machine Learning REST APIs.
-
-
-
-"
-8279C6C73A8DB1A593945E5EA339F9EFDE96A61E_1,8279C6C73A8DB1A593945E5EA339F9EFDE96A61E," Changing the number of copies of an online deployment from a space
-
-
-
-1. Click the Deployment tab of your deployment space.
-2. From the action menu for your deployment name, click Edit.
-3. In the Edit deployment dialog box, change the number of copies and click Save.
-
-
-
-"
-8279C6C73A8DB1A593945E5EA339F9EFDE96A61E_2,8279C6C73A8DB1A593945E5EA339F9EFDE96A61E," Increasing the number of replicas of a deployment programmatically
-
-To view or run a working sample of scaling a deployment programmatically, you can increase the number of replicas in the metadata for a deployment.
-
-"
-8279C6C73A8DB1A593945E5EA339F9EFDE96A61E_3,8279C6C73A8DB1A593945E5EA339F9EFDE96A61E," Python example
-
-This example uses the Python client to set the number of replicas to 3.
-
-change_meta = {
-client.deployments.ConfigurationMetaNames.HARDWARE_SPEC: {
-""name"":""S"",
-""num_nodes"":3}
-}
-
-client.deployments.update(, change_meta)
-
-The HARDWARE_SPEC value includes a name because the API requires a name or an ID to be provided.
-
-"
-8279C6C73A8DB1A593945E5EA339F9EFDE96A61E_4,8279C6C73A8DB1A593945E5EA339F9EFDE96A61E," REST API example
-
-curl -k -X PATCH -d '[ { ""op"": ""replace"", ""path"": ""/hardware_spec"", ""value"": { ""name"": ""S"", ""num_nodes"": 2 } } ]'
-
-You must specify a name for the hardware_spec value, but the argument is not applied for scaling.
-
-Parent topic:[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html)
-"
-462A5BA596AADF9C38762611CA2578398F234BD4_0,462A5BA596AADF9C38762611CA2578398F234BD4," Updating a deployment
-
-After you create an online or a batch deployment, you can still update your deployment details and update the assets that are associated with your deployment.
-
-For more information, see:
-
-
-
-* [Update deployment details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-update.html?context=cdpaas&locale=enupd-general)
-* [Update assets associated with a deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-update.html?context=cdpaas&locale=enupd-assets)
-
-
-
-"
-462A5BA596AADF9C38762611CA2578398F234BD4_1,462A5BA596AADF9C38762611CA2578398F234BD4," Updating deployment details
-
-You can update general deployment details, such as deployment name, description, metadata, and tags by using one of these methods:
-
-
-
-* [Update deployment details from the UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-update.html?context=cdpaas&locale=enupdate-details-ui).
-* [Update deployment details by using the Patch API command](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-update.html?context=cdpaas&locale=enupdate-details-api).
-
-
-
-"
-462A5BA596AADF9C38762611CA2578398F234BD4_2,462A5BA596AADF9C38762611CA2578398F234BD4," Updating deployment details from the UI
-
-
-
-1. From the Deployments tab of your deployment space, click the action menu for the deployment and choose Edit settings.
-2. Update the details and then click Save.
-
-Tip: You can also update a deployment from the information sheet for the deployment.
-
-
-
-"
-462A5BA596AADF9C38762611CA2578398F234BD4_3,462A5BA596AADF9C38762611CA2578398F234BD4," Updating deployment details by using the Patch API command
-
-Use the [Watson Machine Learning API Patch](https://cloud.ibm.com/apidocs/machine-learning-cpmodels-update) command to update deployment details.
-
-curl -X PATCH '/ml/v4/deployments/?space_id=&version=' n--data-raw '[
-{
-""op"": """",
-""path"": """",
-""value"": """"
-},
-{
-""op"": """",
-""path"": """",
-""value"": """"
-}
-]'
-
-For example, to update a description for deployment:
-
-curl -X PATCH '/ml/v4/deployments/?space_id=&version=' n--data-raw '[
-{
-""op"": ""replace"",
-""path"": ""/description"",
-""value"": """"
-},
-]'
-
-Notes:
-
-
-
-* For , use ""add"", ""remove"", or ""replace"".
-
-
-
-"
-462A5BA596AADF9C38762611CA2578398F234BD4_4,462A5BA596AADF9C38762611CA2578398F234BD4," Updating assets associated with a deployment
-
-After you create an online or batch deployment, you can update the deployed asset from the same endpoint. For example, if you have a better performing model, you can replace the deployed model with the improved version. When the update is complete, the new model is available from the REST API endpoint.
-
-Before you update an asset, make sure that these conditions are true:
-
-
-
-* The framework of the new model is compatible with the existing deployed model.
-* The input schema exists and matches for the new and deployed model.
-
-Caution: Failure to follow these conditions can result in a failed deployment.
-* For more information, see [Updating an asset from the deployment space UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-update.html?context=cdpaas&locale=enupdate-asset-ui).
-* For more information, see [Updating an asset by using the Patch API command](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-update.html?context=cdpaas&locale=enupdate-asset-api).
-
-
-
-"
-462A5BA596AADF9C38762611CA2578398F234BD4_5,462A5BA596AADF9C38762611CA2578398F234BD4," Updating an asset from the deployment space UI
-
-
-
-1. From the Deployments tab of your deployment space, click the action menu for the deployment and choose Edit.
-2. Click Replace asset. From the Select an asset dialog box, select the asset that you want to replace the current asset with and click Select asset.
-3. Click Save.
-
-
-
-Important: Make sure that the new asset is compatible with the deployment.
-
-
-
-"
-462A5BA596AADF9C38762611CA2578398F234BD4_6,462A5BA596AADF9C38762611CA2578398F234BD4," Updating an asset by using the Patch API command
-
-Use the Watson Machine Learning [API](https://cloud.ibm.com/apidocs/machine-learning)Patch command to update any supported asset.
-
-Use this method to patch a model for an online deployment.
-
-curl -X PATCH '/ml/v4/models/?space_id=&project_id=&version=' n--data-raw '[
-{
-""op"": """",
-""path"": """",
-""value"": """"
-},
-{
-""op"": """",
-""path"": """",
-""value"": """"
-}
-]'
-
-For example, patch a model with ID 6f01d512-fe0f-41cd-9a52-1e200c525c84 in space ID f2ddb8ce-7b10-4846-9ab0-62454a449802:
-
-curl -X PATCH '/ml/v4/models/6f01d512-fe0f-41cd-9a52-1e200c525c84?space_id=f2ddb8ce-7b10-4846-9ab0-62454a449802&project_id=&version=' n--data-raw '[
-
-{
-""op"":""replace"",
-""path"":""/asset"",
-""value"":{
-""id"":""6f01d512-fe0f-41cd-9a52-1e200c525c84"",
-""rev"":""1""
-}
-}
-]'
-
-A successful output response looks like this:
-
-{
-""entity"": {
-""asset"": {
-""href"": ""/v4/models/6f01d512-fe0f-41cd-9a52-1e200c525c84?space_id=f2ddb8ce-7b10-4846-9ab0-62454a449802"",
-"
-462A5BA596AADF9C38762611CA2578398F234BD4_7,462A5BA596AADF9C38762611CA2578398F234BD4,"""id"": ""6f01d512-fe0f-41cd-9a52-1e200c525c84""
-},
-""custom"": {
-},
-""description"": ""Test V4 deployments"",
-""name"": ""test_v4_dep_online_space_hardware_spec"",
-""online"": {
-},
-""space"": {
-""href"": ""/v4/spaces/f2ddb8ce-7b10-4846-9ab0-62454a449802"",
-""id"": ""f2ddb8ce-7b10-4846-9ab0-62454a449802""
-},
-""space_id"": ""f2ddb8ce-7b10-4846-9ab0-62454a449802"",
-""status"": {
-""online_url"": {
-""url"": ""https://example.com/v4/deployments/349dc1f7-9452-491b-8aa4-0777f784bd83/predictions""
-},
-""state"": ""updating""
-}
-},
-""metadata"": {
-""created_at"": ""2020-06-08T16:51:08.315Z"",
-""description"": ""Test V4 deployments"",
-""guid"": ""349dc1f7-9452-491b-8aa4-0777f784bd83"",
-""href"": ""/v4/deployments/349dc1f7-9452-491b-8aa4-0777f784bd83"",
-""id"": ""349dc1f7-9452-491b-8aa4-0777f784bd83"",
-""modified_at"": ""2020-06-08T16:55:28.348Z"",
-""name"": ""test_v4_dep_online_space_hardware_spec"",
-""parent"": {
-""href"": """"
-},
-""space_id"": ""f2ddb8ce-7b10-4846-9ab0-62454a449802""
-}
-}
-
-Notes:
-
-
-
-* For , use ""add"", ""remove"", or ""replace"".
-"
-462A5BA596AADF9C38762611CA2578398F234BD4_8,462A5BA596AADF9C38762611CA2578398F234BD4,"* The initial state for the PATCH API output is ""updating"". Keep polling the status until it changes to ""ready"", then retrieve the deployment meta.
-* Only the ASSET attribute can be specified for the asset patch. Changing any other attribute results in an error.
-* The schema of the current model and the model being patched is compared to the deployed asset. A warning message is returned in the output of the Patch request API if the two don't match. For example, if a mismatch is detected, you can find this information in the output response.
-
-""status"": {
-""message"": {
-""text"": ""The input schema of the asset being patched does not match with the currently deployed asset. Please ensure that the score payloads are up to date as per the asset being patched.""
-},
-* For more information, see [Updating software specifications by using the API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.htmlupdate-soft-specs-api).
-
-
-
-Parent topic:[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html)
-"
-0310B7FB9072E7F7E5D73F5AF90EDE62FAA81286_0,0310B7FB9072E7F7E5D73F5AF90EDE62FAA81286," Managing hardware configurations
-
-When you deploy certain assets in Watson Machine Learning, you can choose the type, size, and power of the hardware configuration that matches your computing needs.
-
-"
-0310B7FB9072E7F7E5D73F5AF90EDE62FAA81286_1,0310B7FB9072E7F7E5D73F5AF90EDE62FAA81286," Deployment types that require hardware specifications
-
-Selecting a hardware specification is available for all [batch deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-details.html) types. For [online deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-online.html), you can select a specific hardware specification if you're deploying:
-
-
-
-* Python Functions
-* Tensorflow models
-* Models with custom software specifications
-
-
-
-"
-0310B7FB9072E7F7E5D73F5AF90EDE62FAA81286_2,0310B7FB9072E7F7E5D73F5AF90EDE62FAA81286," Hardware configurations available for deploying assets
-
-
-
-* XS: 1x4 = 1 vCPU and 4 GB RAM
-* S: 2x8 = 2 vCPU and 8 GB RAM
-* M: 4x16 = 4 vCPU and 16 GB RAM
-* L: 8x32 = 8 vCPU and 32 GB RAM
-* XL: 16x64 = 16 vCPU and 64 GB RAM
-
-
-
-You can use the XS configuration to deploy:
-
-
-
-* Python functions
-* Python scripts
-* R scripts
-* Models based on custom libraries and custom images
-
-
-
-For Decision Optimization deployments, you can use these hardware specifications:
-
-
-
-* S
-* M
-* L
-* XL
-
-
-
-"
-0310B7FB9072E7F7E5D73F5AF90EDE62FAA81286_3,0310B7FB9072E7F7E5D73F5AF90EDE62FAA81286," Learn more
-
-
-
-* [Monitoring account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html)
-
-
-
-Parent topic:[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html)
-"
-19BA0BFC40B6212B42F38487F1533BB65647850E_0,19BA0BFC40B6212B42F38487F1533BB65647850E," Importing models to a deployment space
-
-Import machine learning models trained outside of IBM Watson Machine Learning so that you can deploy and test the models. Review the model frameworks that are available for importing models.
-
-Here, to import a trained model means:
-
-
-
-1. Store the trained model in your Watson Machine Learning repository
-2. Optional: Deploy the stored model in your Watson Machine Learning service
-
-
-
-and repository means a Cloud Object Storage bucket. For more information, see [Creating deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-create.html).
-
-You can import a model in these ways:
-
-
-
-* [Directly through the UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enui-import)
-* [By using a path to a file](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enpath-file-import)
-* [By using a path to a directory](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enpath-dir-import)
-* [Import a model object](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enobject-import)
-
-
-
-For more information, see [Importing models by ML framework](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=ensupported-formats).
-
-"
-19BA0BFC40B6212B42F38487F1533BB65647850E_1,19BA0BFC40B6212B42F38487F1533BB65647850E,"For more information, see [Things to consider when you import models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enmodel-import-considerations).
-
-For an example of how to add a model programmatically by using the Python client, refer to this notebook:
-
-
-
-* [Use PMML to predict iris species.](https://github.com/IBM/watson-machine-learning-samples/blob/df8e5122a521638cb37245254fe35d3a18cd3f59/cloud/notebooks/python_sdk/deployments/pmml/Use%20PMML%20to%20predict%20iris%20species.ipynb)
-
-
-
-For an example of how to add a model programmatically by using the REST API, refer to this notebook:
-
-
-
-* [Use scikit-learn to predict diabetes progression](https://github.com/IBM/watson-machine-learning-samples/blob/be84bcd25d17211f41fb34ec262b418f6cd6c87b/cloud/notebooks/rest_api/curl/deployments/scikit/Use%20scikit-learn%20to%20predict%20diabetes%20progression.ipynb)
-
-
-
-"
-19BA0BFC40B6212B42F38487F1533BB65647850E_2,19BA0BFC40B6212B42F38487F1533BB65647850E," Available ways to import models, per framework type
-
-This table lists the available ways to import models to Watson Machine Learning, per framework type.
-
-
-
-Import options for models, per framework type
-
- Import option Spark MLlib Scikit-learn XGBoost TensorFlow PyTorch
-
- [Importing a model object](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enobject-import) ✓ ✓ ✓
- [Importing a model by using a path to a file](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enpath-file-import) ✓ ✓ ✓ ✓
- [Importing a model by using a path to a directory](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enpath-dir-import) ✓ ✓ ✓ ✓
-
-
-
-"
-19BA0BFC40B6212B42F38487F1533BB65647850E_3,19BA0BFC40B6212B42F38487F1533BB65647850E," Adding a model by using UI
-
-Note:If you want to import a model in the PMML format, you can directly import the model .xml file.
-
-To import a model by using UI:
-
-
-
-1. From the Assets tab of your space in Watson Machine Learning, click Import assets.
-2. Select Local file and then select Model.
-3. Select the model file that you want to import and click Import.
-
-
-
-The importing mechanism automatically selects a matching model type and software specification based on the version string in the .xml file.
-
-"
-19BA0BFC40B6212B42F38487F1533BB65647850E_4,19BA0BFC40B6212B42F38487F1533BB65647850E," Importing a model object
-
-Note:This import method is supported by a limited number of ML frameworks. For more information, see [Available ways to import models, per framework type](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=ensupported-formats).
-
-To import a model object:
-
-
-
-1. If your model is located in a remote location, follow [Downloading a model that is stored in a remote location](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enmodel-download).
-2. Store the model object in your Watson Machine Learning repository. For more information, see [Storing model in Watson Machine Learning repository](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enstore-in-repo).
-
-
-
-"
-19BA0BFC40B6212B42F38487F1533BB65647850E_5,19BA0BFC40B6212B42F38487F1533BB65647850E," Importing a model by using a path to a file
-
-Note:This import method is supported by a limited number of ML frameworks. For more information, see [Available ways to import models, per framework type](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=ensupported-formats).
-
-To import a model by using a path to a file:
-
-
-
-1. If your model is located in a remote location, follow [Downloading a model that is stored in a remote location](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enmodel-download) to download it.
-2. If your model is located locally, place it in a specific directory:
-
-!cp
-!cd
-3. For Scikit-learn, XGBoost, Tensorflow, and PyTorch models, if the downloaded file is not a .tar.gz archive, make an archive:
-
-!tar -zcvf .tar.gz
-
-The model file must be at the top-level folder of the directory, for example:
-
-assets/
-
-variables/
-variables/variables.data-00000-of-00001
-variables/variables.index
-4. Use the path to the saved file to store the model file in your Watson Machine Learning repository. For more information, see [Storing model in Watson Machine Learning repository](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enstore-in-repo).
-
-
-
-"
-19BA0BFC40B6212B42F38487F1533BB65647850E_6,19BA0BFC40B6212B42F38487F1533BB65647850E," Importing a model by using a path to a directory
-
-Note:This import method is supported by a limited number of ML frameworks. For more information, see [Available ways to import models, per framework type](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=ensupported-formats).
-
-To import a model by using a path to a directory:
-
-
-
-1. If your model is located in a remote location, refer to [Downloading a model stored in a remote location](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enmodel-download).
-2. If your model is located locally, place it in a specific directory:
-
-!cp
-!cd
-
-For scikit-learn, XGBoost, Tensorflow, and PyTorch models, the model file must be at the top-level folder of the directory, for example:
-
-assets/
-
-variables/
-variables/variables.data-00000-of-00001
-variables/variables.index
-3. Use the directory path to store the model file in your Watson Machine Learning repository. For more information, see [Storing model in Watson Machine Learning repository](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enstore-in-repo).
-
-
-
-"
-19BA0BFC40B6212B42F38487F1533BB65647850E_7,19BA0BFC40B6212B42F38487F1533BB65647850E," Downloading a model stored in a remote location
-
-Follow this sample code to download your model from a remote location:
-
-import os
-from wget import download
-
-target_dir = ''
-if not os.path.isdir(target_dir):
-os.mkdir(target_dir)
-filename = os.path.join(target_dir, '')
-if not os.path.isfile(filename):
-filename = download('', out = target_dir)
-
-"
-19BA0BFC40B6212B42F38487F1533BB65647850E_8,19BA0BFC40B6212B42F38487F1533BB65647850E," Things to consider when you import models
-
-To learn more about importing a specific model type, see:
-
-
-
-* [Models saved in PMML format](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enpmml-import)
-* [Spark MLlib models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enspark-ml-lib-import)
-* [Scikit-learn models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enscikit-learn-import)
-* [XGBoost models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enxgboost-import)
-* [TensorFlow models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=entf-import)
-* [PyTorch models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=enpt-import)
-
-
-
-To learn more about frameworks that you can use with Watson Machine Learning, see [Supported frameworks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/pm_service_supported_frameworks.html).
-
-"
-19BA0BFC40B6212B42F38487F1533BB65647850E_9,19BA0BFC40B6212B42F38487F1533BB65647850E," Models saved in PMML format
-
-
-
-* The only available deployment type for models that are imported from PMML is online deployment.
-* The PMML file must have the .xml file extension.
-* PMML models cannot be used in an SPSS stream flow.
-* The PMML file must not contain a prolog. Depending on the library that you are using when you save your model, a prolog might be added to the beginning of the file by default. For example, if your file contains a prolog string such as spark-mllib-lr-model-pmml.xml, remove the string before you import the PMML file to the deployment space.
-
-
-
-Depending on the library that you are using when you save your model, a prolog might be added to the beginning of the file by default, like in this example:
-
-::::::::::::::
-spark-mllib-lr-model-pmml.xml
-::::::::::::::
-
-You must remove that prolog before you can import the PMML file to Watson Machine Learning.
-
-"
-19BA0BFC40B6212B42F38487F1533BB65647850E_10,19BA0BFC40B6212B42F38487F1533BB65647850E," Spark MLlib models
-
-
-
-* Only classification and regression models are available.
-* Custom transformers, user-defined functions, and classes are not available.
-
-
-
-"
-19BA0BFC40B6212B42F38487F1533BB65647850E_11,19BA0BFC40B6212B42F38487F1533BB65647850E," Scikit-learn models
-
-
-
-* .pkl and .pickle are the available import formats.
-* To serialize or pickle the model, use the joblib package.
-* Only classification and regression models are available.
-* Pandas Dataframe input type for predict() API is not available.
-* The only available deployment type for scikit-learn models is online deployment.
-
-
-
-"
-19BA0BFC40B6212B42F38487F1533BB65647850E_12,19BA0BFC40B6212B42F38487F1533BB65647850E," XGBoost models
-
-
-
-* .pkl and .pickle are the available import formats.
-* To serialize or pickle the model, use the joblib package.
-* Only classification and regression models are available.
-* Pandas Dataframe input type for predict() API is not available.
-* The only available deployment type for XGBoost models is online deployment.
-
-
-
-"
-19BA0BFC40B6212B42F38487F1533BB65647850E_13,19BA0BFC40B6212B42F38487F1533BB65647850E," TensorFlow models
-
-
-
-* .pb, .h5, and .hdf5 are the available import formats.
-* To save or serialize a TensorFlow model, use the tf.saved_model.save() method.
-* tf.estimator is not available.
-* The only available deployment types for TensorFlow models are: online deployment and batch deployment.
-
-
-
-"
-19BA0BFC40B6212B42F38487F1533BB65647850E_14,19BA0BFC40B6212B42F38487F1533BB65647850E," PyTorch models
-
-
-
-* The only available deployment type for PyTorch models is online deployment.
-* For a Pytorch model to be importable to Watson Machine Learning, it must be previously exported to .onnx format. Refer to this code.
-
-torch.onnx.export(, , "".onnx"", verbose=True, input_names= , output_names=)
-
-
-
-"
-19BA0BFC40B6212B42F38487F1533BB65647850E_15,19BA0BFC40B6212B42F38487F1533BB65647850E," Storing a model in your Watson Machine Learning repository
-
-Use this code to store your model in your Watson Machine Learning repository:
-
-from ibm_watson_machine_learning import APIClient
-
-client = APIClient()
-sw_spec_uid = client.software_specifications.get_uid_by_name("""")
-
-meta_props = {
-client.repository.ModelMetaNames.NAME: """",
-client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: sw_spec_uid,
-client.repository.ModelMetaNames.TYPE: """"}
-
-client.repository.store_model(model=, meta_props=meta_props)
-
-Notes:
-
-
-
-* Depending on the model framework used, can be the actual model object, a full path to a saved model file, or a path to a directory where the model file is located. For more information, see [Available ways to import models, per framework type](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html?context=cdpaas&locale=ensupported-formats).
-* For a list of available software specifications to use as , use the client.software_specifications.list() method.
-* For a list of available model types to use as model_type, refer to [Software specifications and hardware specifications for deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/pm_service_supported_frameworks.html).
-* When you export a Pytorch model to the .onnx format, specify the keep_initializers_as_inputs=True flag and set opset_version to 9 (Watson Machine Learning deployments use the caffe2 ONNX runtime that doesn't support opset versions higher than 9).
-
-"
-19BA0BFC40B6212B42F38487F1533BB65647850E_16,19BA0BFC40B6212B42F38487F1533BB65647850E,"torch.onnx.export(net, x, 'lin_reg1.onnx', verbose=True, keep_initializers_as_inputs=True, opset_version=9)
-* To learn more about how to create the dictionary, refer to [Watson Machine Learning authentication](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-authentication.html).
-
-
-
-Parent topic:[Assets in deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html)
-"
-E008266C010ADFEF841C513AE7BCB91436F9AE9C_0,E008266C010ADFEF841C513AE7BCB91436F9AE9C," Frameworks and software specifications in Watson Machine Learning
-
-You can use popular tools, libraries, and frameworks to train and deploy your machine learning models and functions.
-
-"
-E008266C010ADFEF841C513AE7BCB91436F9AE9C_1,E008266C010ADFEF841C513AE7BCB91436F9AE9C," Overview of software specifications
-
-Software specifications define the programming language and version that you use for a building a model or a function. You can use software specifications to configure the software that is used for running your models and functions. You can also define the software version to be used and include your own extensions. For example, you can use conda .yml files or custom libraries.
-
-"
-E008266C010ADFEF841C513AE7BCB91436F9AE9C_2,E008266C010ADFEF841C513AE7BCB91436F9AE9C," Supported frameworks and software specifications
-
-You can use predefined tools, libraries, and frameworks to train and deploy your machine learning models and functions. Examples of supported frameworks include Scikit-learn, Tensorflow, and more.
-
-For more information, see [Supported deployment frameworks and software specifications](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/pm_service_supported_frameworks.html).
-
-
-
-"
-E008266C010ADFEF841C513AE7BCB91436F9AE9C_3,E008266C010ADFEF841C513AE7BCB91436F9AE9C," Managing outdated frameworks and software specifications
-
-Update software specifications and frameworks in your models when they become outdated. Sometimes, you can seamlessly update your assets. In other cases, you must retrain or redeploy your assets.
-
-For more information, see [Managing outdated software specifications or frameworks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html).
-
-Parent topic:[Deploying assets with Watson Machine Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-overview.html)
-"
-29A9834843B2D6E7417C09A5385B83BCB13D814C_0,29A9834843B2D6E7417C09A5385B83BCB13D814C," Managing outdated software specifications or frameworks
-
-Use these guidelines when you are updating assets that refer to outdated software specifications or frameworks.
-
-In some cases, asset update is seamless. In other cases, you must retrain or redeploy the assets. For general guidelines, refer to [Migrating assets that refer to discontinued software specifications](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=endiscont-soft-spec) or [Migrating assets that refer to discontinued framework versions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=endiscont-framewrk).
-
-For more information, see the following sections:
-
-
-
-* [Updating software specifications](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=enupdate-soft-specs)
-* [Updating a machine learning model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=enupgrade-model)
-* [Updating a Python function](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=enupgr-function)
-* [Retraining an SPSS Modeler flow](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=enretrain-spss)
-
-
-
-"
-29A9834843B2D6E7417C09A5385B83BCB13D814C_1,29A9834843B2D6E7417C09A5385B83BCB13D814C," Managing assets that refer to discontinued software specifications
-
-
-
-* During migration, assets that refer to the discontinued software specification are mapped to a comparable-supported default software specification (only in cases where the model type is still supported).
-* When you create new deployments of the migrated assets, the updated software specification in the asset metadata is used.
-* Existing deployments of the migrated assets are updated to use the new software specification. If deployment or scoring fails due to framework or library version incompatibilities, follow the instructions in [Updating software specifications](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=enupdate-soft-specs). If the problem persists, follow the steps that are listed in [Updating a machine learning model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=enupgrade-model).
-
-
-
-"
-29A9834843B2D6E7417C09A5385B83BCB13D814C_2,29A9834843B2D6E7417C09A5385B83BCB13D814C," Migrating assets that refer to discontinued framework versions
-
-
-
-* During migration, model types are not be updated. You must manually update this data. For more information, see [Updating a machine learning model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=enupgrade-model).
-* After migration, the existing deployments are removed and new deployments for the deprecated framework are not allowed.
-
-
-
-"
-29A9834843B2D6E7417C09A5385B83BCB13D814C_3,29A9834843B2D6E7417C09A5385B83BCB13D814C," Updating software specifications
-
-You can update software specifications from the UI or by using the API. For more information, see the following sections:
-
-
-
-* [Updating software specifications from the UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=enupdate-soft-specs-ui)
-* [Updating software specifications by using the API](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=enupdate-soft-specs-api)
-
-
-
-"
-29A9834843B2D6E7417C09A5385B83BCB13D814C_4,29A9834843B2D6E7417C09A5385B83BCB13D814C," Updating software specifications from the UI
-
-
-
-1. From the deployment space, click the model (make sure it does not have any active deployments.)
-2. Click the i symbol to check model details.
-3. Use the dropdown list to update the software specification.
-
-
-
-Refer to the example image:
-
-
-
-"
-29A9834843B2D6E7417C09A5385B83BCB13D814C_5,29A9834843B2D6E7417C09A5385B83BCB13D814C," Updating software specifications by using the API
-
-You can update a software specification by using the API Patch command:
-
-For software_spec field, type /software_spec. For value field, use either the ID or the name of the new software specification.
-
-Refer to this example:
-
-curl -X PATCH '/ml/v4/models/6f01d512-fe0f-41cd-9a52-1e200c525c84?space_id=f2ddb8ce-7b10-4846-9ab0-62454a449802&project_id=&version=' n--data-raw '[
-{
-""op"":""replace"",
-""path"":""/software_spec"",
-""value"":{
-""id"":""6f01d512-fe0f-41cd-9a52-1e200c525c84"" // or ""name"":""tensorflow_rt22.1-py3.9""
-}
-}
-]'
-
-For more information, see [Updating an asset by using the Patch API command](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-update.htmlupdate-asset-api).
-
-"
-29A9834843B2D6E7417C09A5385B83BCB13D814C_6,29A9834843B2D6E7417C09A5385B83BCB13D814C," Updating a machine learning model
-
-Follow these steps to update a model built with a deprecated framework.
-
-"
-29A9834843B2D6E7417C09A5385B83BCB13D814C_7,29A9834843B2D6E7417C09A5385B83BCB13D814C," Option 1: Save the model with a compatible framework
-
-
-
-1. Download the model by using either the Watson Machine Learning [REST API](https://cloud.ibm.com/apidocs/machine-learning) or the Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/).
-
-The following example shows how to download your model:
-
-client.repository.download(, filename=""xyz.tar.gz"")
-2. Edit model metadata with the model type and version that is supported in the current release. For more information, see [Software specifications and hardware specifications for deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/pm_service_supported_frameworks.html).
-
-The following example shows how to edit model metadata:
-
-model_metadata = {
-client.repository.ModelMetaNames.NAME: ""example model"",
-client.repository.ModelMetaNames.DESCRIPTION: ""example description"",
-client.repository.ModelMetaNames.TYPE: """",
-client.repository.ModelMetaNames.SOFTWARE_SPEC_UID:
-client.software_specifications.get_uid_by_name("""")
-}
-3. Save the model to the Watson Machine Learning repository. The following example shows how to save the model to the repository:
-
-model_details = client.repository.store_model(model=""xyz.tar.gz"", meta_props=model_metadata)
-4. Deploy the model.
-5. Score the model to generate predictions.
-
-
-
-If deployment or scoring fails, the model is not compatible with the new version that was used for saving the model. In this case, use [Option 2](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=enretrain-option2).
-
-"
-29A9834843B2D6E7417C09A5385B83BCB13D814C_8,29A9834843B2D6E7417C09A5385B83BCB13D814C," Option 2: Retrain the model with a compatible framework
-
-
-
-1. Retrain the model with a model type and version that is supported in the current version.
-2. Save the model with the supported model type and version.
-3. Deploy and score the model.
-
-
-
-It is also possible to update a model by using the API. For more information, see [Updating an asset by using the Patch API command](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-update.htmlupdate-asset-api).
-
-"
-29A9834843B2D6E7417C09A5385B83BCB13D814C_9,29A9834843B2D6E7417C09A5385B83BCB13D814C," Updating a Python function
-
-Follow these steps to update a Python function built with a deprecated framework.
-
-"
-29A9834843B2D6E7417C09A5385B83BCB13D814C_10,29A9834843B2D6E7417C09A5385B83BCB13D814C," Option 1: Save the Python function with a compatible runtime or software specification
-
-
-
-1. Download the Python function by using either the Watson Machine Learning [REST API](https://cloud.ibm.com/apidocs/machine-learning) or the Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/).
-2. Save the Python function with a supported runtime or software specification version. For more information, see [Software specifications and hardware specifications for deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/pm_service_supported_frameworks.html).
-3. Deploy the Python function.
-4. Score the Python function to generate predictions.
-
-
-
-If your Python function fails during scoring, the function is not compatible with the new runtime or software specification version that was used for saving the Python function. In this case, use [Option 2](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-outdated.html?context=cdpaas&locale=enmodify-option2).
-
-"
-29A9834843B2D6E7417C09A5385B83BCB13D814C_11,29A9834843B2D6E7417C09A5385B83BCB13D814C," Option 2: Modify the function code and save it with a compatible runtime or software specification
-
-
-
-1. Modify the Python function code to make it compatible with the new runtime or software specification version. In some cases, you must update dependent libraries that are installed within the Python function code.
-2. Save the Python function with the new runtime or software specification version.
-3. Deploy and score the Python function.
-
-
-
-It is also possible to update a function by using the API. For more information, see [Updating an asset by using the Patch API command](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-update.htmlupdate-asset-api).
-
-"
-29A9834843B2D6E7417C09A5385B83BCB13D814C_12,29A9834843B2D6E7417C09A5385B83BCB13D814C," Retraining an SPSS Modeler flow
-
-Some models that were built with SPSS Modeler in IBM Watson Studio Cloud before 1 September 2020 can no longer be deployed by using Watson Machine Learning. This problem is caused by an upgrade of the Python version in supported SPSS Modeler runtimes. If you're using one of the following six nodes in your SPSS Modeler flow, you must rebuild and redeploy your models with SPSS Modeler and Watson Machine Learning:
-
-
-
-* XGBoost Tree
-* XGBoost Linear
-* One-Class SVM
-* HDBSCAN
-* KDE Modeling
-* Gaussian Mixture
-
-
-
-To retrain your SPSS Modeler flow, follow these steps:
-
-
-
-* If you're using the Watson Studio user interface, open the SPSS Modeler flow in Watson Studio, retrain, and save the model to Watson Machine Learning. After you save the model to the project, you can promote it to a deployment space and create a new deployment.
-* If you're using [REST API](https://cloud.ibm.com/apidocs/machine-learning) or [Python client](https://ibm.github.io/watson-machine-learning-sdk/), retrain the model by using SPSS Modeler and save the model to the Watson Machine Learning repository with the model type spss-modeler-18.2.
-
-
-
-Parent topic:[Frameworks and software specifications](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-frame-and-specs.html)
-"
-6F51A9033343574AEE2D292CB23F09D542456389,6F51A9033343574AEE2D292CB23F09D542456389," Enabling model tracking with AI factsheets
-
-If your organization is using AI Factsheets as part of an AI governance strategy, you can track models after adding them to a space.
-
-Tracking a model populates a factsheet in an associated model use case. The model use cases are maintained in a model inventory in a catalog, providing a way for all stakeholders to view the lifecyle details for a machine learning model. From the inventory, collaborators can view the details for a model as it moves through the model lifecycle, including the request, development, deployment, and evaluation of the model.
-
-To enable model tracking by using AI Factsheets:
-
-
-
-1. From the asset list in your space, click a model name and then click the Model details tab.
-2. Click Track this model.
-3. Associate the model with an existing model use case in the inventory or create a new use case.
-4. Specify the details for the new use case, including specifying a catalog if you have access to more than one, and save to register the model. A link to the model inventory is added to the model details page.
-"
-035EF4A1D7C465E8A72ACC1C5C98198B4E95068B_0,035EF4A1D7C465E8A72ACC1C5C98198B4E95068B," Adding conditions to the pipeline
-
-Add conditions to a pipeline to handle various scenarios.
-
-"
-035EF4A1D7C465E8A72ACC1C5C98198B4E95068B_1,035EF4A1D7C465E8A72ACC1C5C98198B4E95068B," Configuring conditions for the pipeline
-
-As you create a pipeline, you can specify conditions that must be met before you run the pipeline. For example, you can set a condition that the output from a node must satisfy a particular condition before you proceed with the pipeline execution.
-
-To define a condition:
-
-
-
-1. Hover over the link between two nodes.
-2. Click Add condition.
-3. Choose the type of condition:
-
-
-
-* [Condition Response](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-conditions.html?context=cdpaas&locale=ennode) checks a condition on the status of the previous node.
-* [Simple condition](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-conditions.html?context=cdpaas&locale=ensimple) is a no-code condition in the form of an if-then statement.
-* [Advanced condition](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-conditions.html?context=cdpaas&locale=enadvanced) Advanced condition uses expression code, providing the most features and flexibility.
-
-
-
-4. Define and save your expression.
-
-
-
-
-
-When you define your expression, a summary captures the condition and the expected result. For example:
-
-If Run AutoAI is Successful, then Create deployment node.
-
-When you return to the flow, you see an indicator that you defined a condition. Hover over the icon to edit or delete the condition.
-
-
-
-"
-035EF4A1D7C465E8A72ACC1C5C98198B4E95068B_2,035EF4A1D7C465E8A72ACC1C5C98198B4E95068B," Configuring a condition based on node status
-
-If you select Condition Response as your condition type, the previous node status must satisfy at least one of these conditions to continue with the flow:
-
-
-
-* Completed - the node activity is completed without error.
-* Completed with warnings - the node activity is completed but with warnings.
-* Completed with errors - the node activity is completed, but with errors.
-* Failed - the node activity failed to complete.
-* Cancelled - the previous action or activity was canceled.
-
-
-
-"
-035EF4A1D7C465E8A72ACC1C5C98198B4E95068B_3,035EF4A1D7C465E8A72ACC1C5C98198B4E95068B," Configuring a simple condition
-
-To configure a simple condition, choose the condition that must be satisfied to continue with the flow.
-
-
-
-1. Optional: edit the default name.
-2. Depending on the node, choose a variable from the drop-down options. For example, if you are creating a condition based on a Run AutoAI node, you can choose Model metric as the variable to base your condition on.
-3. Based on the variable, choose an operator from: Equal to, Not equal to, Greater than, Less than, Greater than or equal to, Less than or equal to.
-4. Specify the required value. For example, if you are basing a condition on an AutoAI metric, specify a list of values that consists of the available metrics.
-5. Optional: click the plus icon to add an And (all conditions must be met) or an Or (either condition must be met) to the expression to build a compound conditional statement.
-6. Review the summary and save the condition.
-
-
-
-"
-035EF4A1D7C465E8A72ACC1C5C98198B4E95068B_4,035EF4A1D7C465E8A72ACC1C5C98198B4E95068B," Configuring an advanced condition
-
-Use coding constructs to build a more complex condition. The next node runs when the condition is met. You build the advanced condition by using the expression builder.
-
-
-
-1. Optional: edit the default name.
-2. Add items from the Expression elements panel to the Expression canvas to build your condition. You can also type your conditions and the elements autocomplete.
-3. When your expression is complete, review the summary and save the condition.
-
-
-
-"
-035EF4A1D7C465E8A72ACC1C5C98198B4E95068B_5,035EF4A1D7C465E8A72ACC1C5C98198B4E95068B," Learn more
-
-For more information on using the code editor to build an expression, see:
-
-
-
-* [Functions used in pipelines Expression Builder](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-expr-builder.html)
-
-
-
-Parent topic:[Creating a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-create.html)
-"
-8CF8260D0474AD73D9878CCD361C83102B724733_0,8CF8260D0474AD73D9878CCD361C83102B724733," Configuring pipeline nodes
-
-Configure the nodes of your pipeline to specify inputs and to create outputs as part of your pipeline.
-
-"
-8CF8260D0474AD73D9878CCD361C83102B724733_1,8CF8260D0474AD73D9878CCD361C83102B724733," Specifying the workspace scope
-
-By default, the scope for a pipeline is the project that contains the pipeline. You can explicitly specify a scope other than the default, to locate an asset used in the pipeline. The scope is the project, catalog, or space that contains the asset. From the user interface, you can browse for the scope.
-
-"
-8CF8260D0474AD73D9878CCD361C83102B724733_2,8CF8260D0474AD73D9878CCD361C83102B724733," Changing the input mode
-
-When you are configuring a node, you can specify any resources that include data and notebooks in various ways. Such as directly entering a name or ID, browsing for an asset, or by using the output from a prior node in the pipeline to populate a field. To see what options are available for a field, click the input icon for the field. Depending on the context, options can include:
-
-
-
-* Select resource: use the asset browser to find an asset such as a data file.
-* Assign pipeline parameter: assign a value by using a variable configured with a pipeline parameter. For more information, see [Configuring global objects](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-flow-param.html).
-* Select from another node: use the output from a node earlier in the pipeline as the value for this field.
-* Enter the expression: enter code to assign values or identify resources. For more information, see [Coding elements](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-expr-builder.html).
-
-
-
-"
-8CF8260D0474AD73D9878CCD361C83102B724733_3,8CF8260D0474AD73D9878CCD361C83102B724733," Pipeline nodes and parameters
-
-Configure the following types of pipeline nodes:
-
-"
-8CF8260D0474AD73D9878CCD361C83102B724733_4,8CF8260D0474AD73D9878CCD361C83102B724733," Copy nodes
-
-Use Copy nodes to add assets to your pipeline or to export pipeline assets.
-
-
-
-* Copy assets
-
-Copy selected assets from a project or space to a nonempty space. You can copy these assets to a space: - AutoAI experiment - Code package job - Connection - Data Refinery flow - Data Refinery job - Data asset - Deployment job - Environment - Function - Job - Model - Notebook - Notebook job - Pipelines job - Script - Script job - SPSS Modeler job #### Input parameters |Parameter|Description| |---|---| |Source assets |Browse or search for the source asset to add to the list. You can also specify an asset with a pipeline parameter, with the output of another node, or by entering the asset ID| |Target|Browse or search for the target space| |Copy mode|Choose how to handle a case where the flow tries to copy an asset and one of the same name exists. One of: ignore, fail, overwrite| #### Output parameters |Parameter|Description| |---|---| |Output assets |List of copied assets|
-
-
-
-
-
-* Export assets
-
-"
-8CF8260D0474AD73D9878CCD361C83102B724733_5,8CF8260D0474AD73D9878CCD361C83102B724733,"Export selected assets from the scope, for example, a project or deployment space. The operation exports all the assets by default. You can limit asset selection by building a list of resources to export. #### Input parameters |Parameter|Description| |---|---| |Assets |Choose Scope to export all exportable items or choose List to create a list of specific items to export| |Source project or space |Name of project or space that contains the assets to export| |Exported file |File location for storing the export file| |Creation mode (optional)|Choose how to handle a case where the flow tries to create an asset and one of the same name exists. One of: ignore, fail, overwrite| #### Output parameters |Parameter|Description| |---|---| |Exported file|Path to exported file| Notes: - If you export a project that contains a notebook, the latest version of the notebook is included in the export file. If the Pipeline with the Run notebook job node was configured to use a different notebook version other than the latest version, the exported Pipeline is automatically reconfigured to use the latest version when imported. This might produce unexpected results or require some reconfiguration after the import. - If assets are self-contained in the exported project, they are retained when you import a new project. Otherwise, some configuration might be required following an import of exported assets.
-
-
-
-
-
-* Import assets
-
-"
-8CF8260D0474AD73D9878CCD361C83102B724733_6,8CF8260D0474AD73D9878CCD361C83102B724733,"Import assets from a ZIP file that contains exported assets. #### Input parameters |Parameter|Description| |---|---| |Path to import target |Browse or search for the assets to import| |Archive file to import |Specify the path to a ZIP file or archive| Notes: After you import a file, paths and references to the imported assets are updated, following these rules: - References to assets from the exported project or space are updated in the new project or space after the import. - If assets from the exported project refer to external assets (included in a different project), the reference to the external asset will persist after the import. - If the external asset no longer exists, the parameter is replaced with an empty value and you must reconfigure the field to point to a valid asset.
-
-
-
-"
-8CF8260D0474AD73D9878CCD361C83102B724733_7,8CF8260D0474AD73D9878CCD361C83102B724733," Create nodes
-
-Configure the nodes for creating assets in your pipeline.
-
-
-
-* Create AutoAI experiment
-
-Use this node to train an [AutoAI classification or regression experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) and generate model-candidate pipelines. #### Input parameters |Parameter|Description| |---|---| |AutoAI experiment name|Name of the new experiment| |Scope|A project or a space, where the experiment is going to be created| |Prediction type|The type of model for the following data: binary, classification, or regression| |Prediction column (label)|The prediction column name| |Positive class (optional)|Specify a positive class for a binary classification experiment| |Training data split ratio (optional)|The percentage of data to hold back from training and use to test the pipelines(float: 0.0 - 1.0)| |Algorithms to include (optional)|Limit the list of estimators to be used (the list depends on the learning type)| |Algorithms to use|Specify the list of estimators to be used (the list depends on the learning type)| |Optimize metric (optional)| The metric used for model ranking| |Hardware specification (optional)|Specify a hardware specification for the experiment| |AutoAI experiment description|Description of the experiment| |AutoAI experiment tags (optional)|Tags to identify the experiment| |Creation mode (optional)|Choose how to handle a case where the pipeline tries to create an experiment and one of the same name exists. One of: ignore, fail, overwrite| #### Output parameters |Parameter|Description| |---|---| |AutoAI experiment|Path to the saved model|
-
-
-
-
-
-* Create AutoAI time series experiment
-
-"
-8CF8260D0474AD73D9878CCD361C83102B724733_8,8CF8260D0474AD73D9878CCD361C83102B724733,"Use this node to train an [AutoAI time series experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) and generate model-candidate pipelines. #### Input parameters |Parameter|Description| |---|---| |AutoAI time series experiment name|Name of the new experiment| |Scope|A project or a space, where the pipeline is going to be created| |Prediction columns (label)|The name of one or more prediction columns| |Date/time column (optional)|Name of date/time column| |Leverage future values of supporting features|Choose ""True"" to enable the consideration for supporting (exogenous) features to improve the prediction. For example, include a temperature feature for predicting ice cream sales.| |Supporting features (optional)|Choose supporting features and add to list| |Imputation method (optional)|Choose a technique for imputing missing values in a data set| |Imputation threshold (optional)|Specify an higher threshold for percentage of missing values to supply with the specified imputation method. If the threshold is exceeded, the experiment fails. For example, if you specify that 10% of values can be imputed, and the data set is missing 15% of values, the experiment fails.| |Fill type|Specify how the specified imputation method fill null values. Choose to supply a mean of all values, and median of all values, or specify a fill value.| |Fill value (optional)|If you selected to sepcify a value for replacing null values, enter the value in this field.| |Final training data set|Choose whether to train final pipelines with just the training data or with training data and holdout data. "
-8CF8260D0474AD73D9878CCD361C83102B724733_9,8CF8260D0474AD73D9878CCD361C83102B724733,"If you choose training data, the generated notebook includes a cell for retrieving holdout data| |Holdout size (optional)|If you are splitting training data into training and holdout data, specify a percentage of the training data to reserve as holdout data for validating the pipelines. Holdout data does not exceed a third of the data.| |Number of backtests (optional)|Customize the backtests to cross-validate your time series experiment| |Gap length (optional)|Adjust the number of time points between the training data set and validation data set for each backtest. When the parameter value is non-zero, the time series values in the gap is not used to train the experiment or evaluate the current backtest.| |Lookback window (optional)|A parameter that indicates how many previous time series values are used to predict the current time point.| |Forecast window (optional)|The range that you want to predict based on the data in the lookback window.| |Algorithms to include (optional)|Limit the list of estimators to be used (the list depends on the learning type)| |Pipelines to complete|Optionally adjust the number of pipelines to create. More pipelines increase training time and resources.| |Hardware specification (optional)|Specify a hardware specification for the experiment| |AutoAI time series experiment description (optional)|Description of the experiment| |AutoAI experiment tags (optional)|Tags to identify the experiment| |Creation mode (optional)|Choose how to handle a case where the pipeline tries to create an experiment and one of the same name exists. One of: ignore, fail, overwrite| #### Output parameters |Parameter|Description| |---|---| |AutoAI time series experiment|Path to the saved model|
-* Create batch deployment
-
-"
-8CF8260D0474AD73D9878CCD361C83102B724733_10,8CF8260D0474AD73D9878CCD361C83102B724733,"Use this node to create a batch deployment for a machine learning model. #### Input parameters |Parameter|Description| |---|---| |ML asset|Name or ID of the machine learning asset to deploy| |New deployment name (optional)|Name of the new job, with optional description and tags| |Creation mode (optional)|How to handle a case where the pipeline tries to create a job and one of the same name exists. One of: ignore, fail, overwrite| |New deployment description (optional)| Description of the deployment| |New deployment tags (optional)| Tags to identify the deployment| |Hardware specification (optional)|Specify a hardware specification for the job| #### Output parameters |Parameter|Description| |---|---| |New deployment| Path of the newly created deployment|
-
-
-
-
-
-* Create data asset
-
-Use this node to create a data asset. #### Input parameters |Parameter|Description| |---|---| |File |Path to file in a file storage| |Target scope| Path to the target space or project| |Name (optional)|Name of the data source with optional description, country of origin, and tags| |Description (optional)| Description for the asset| |Origin country (optional)|Origin country for data regulations| |Tags (optional)| Tags to identify assets| |Creation mode|How to handle a case where the pipeline tries to create a job and one of the same name exists. One of: ignore, fail, overwrite| #### Output parameters |Parameter|Description| |---|---| |Data asset|The newly created data asset|
-
-
-
-
-
-* Create deployment space
-
-"
-8CF8260D0474AD73D9878CCD361C83102B724733_11,8CF8260D0474AD73D9878CCD361C83102B724733,"Use this node to create and configure a space that you can use to organize and create deployments. #### Input parameters |Parameter|Description| |---|---| |New space name|Name of the new space with optional description and tags| |New space tags (optional)| Tags to identify the space| |New space COS instance CRN |CRN of the COS service instance| |New space WML instance CRN (optional)|CRN of the Watson Machine Learning service instance| |Creation mode (optional)|How to handle a case where the pipeline tries to create a space and one of the same name exists. One of: ignore, fail, overwrite| |Space description (optional)|Description of the space| #### Output parameters |Parameter|Description| |---|---| |Space|Path of the newly created space|
-
-
-
-
-
-* Create online deployment
-
-Use this node to create an online deployment where you can submit test data directly to a web service REST API endpoint. #### Input parameters |Parameter|Description| |---|---| |ML asset|Name or ID of the machine learning asset to deploy| |New deployment name (optional)|Name of the new job, with optional description and tags| |Creation mode (optional)|How to handle a case where the pipeline tries to create a job and one of the same name exists. One of: ignore, fail, overwrite| |New deployment description (optional)| Description of the deployment| |New deployment tags (optional)| Tags to identify the deployment| |Hardware specification (optional)|Specify a hardware specification for the job| #### Output parameters |Parameter|Description| |---|---| |New deployment| Path of the newly created deployment|
-
-
-
-"
-8CF8260D0474AD73D9878CCD361C83102B724733_12,8CF8260D0474AD73D9878CCD361C83102B724733," Wait
-
-Use nodes to pause a pipeline until an asset is available in the location that is specified in the path.
-
-
-
-* Wait for all results
-
-Use this node to wait until all results from the previous nodes in the pipeline are available so the pipeline can continue. This node takes no inputs and produces no output. When the results are all available, the pipeline continues automatically.
-
-
-
-
-
-* Wait for any result
-
-Use this node to wait until any result from the previous nodes in the pipeline is available so the pipeline can continue. Run the downstream nodes as soon as any of the upstream conditions are met. This node takes no inputs and produces no output. When any results are available, the pipeline continues automatically.
-
-
-
-
-
-* Wait for file
-
-"
-8CF8260D0474AD73D9878CCD361C83102B724733_13,8CF8260D0474AD73D9878CCD361C83102B724733,"Wait for an asset to be created or updated in the location that is specified in the path from a job or process earlier in the pipeline. Specify a timeout length to wait for the condition to be met. If 00:00:00 is the specified timeout length, the flow waits indefinitely. #### Input parameters |Parameter|Description| |---|---| |File location|Specify the location in the asset browser where the asset resides. Use the format data_asset/filename where the path is relative to the root. The file must exist and be in the location you specify or the node fails with an error. | |Wait mode| By default the mode is for the file to appear. You can change to waiting for the file to disappear| |Timeout length (optional)|Specify the length of time to wait before you proceed with the pipeline. Use the format hh:mm:ss| |Error policy (optional)| See [Handling errors](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-errors.html)| #### Output parameters |Parameter|Description| |---|---| |Return value|Return value from the node| |Execution status| Returns a value of: Completed, Completed with warnings, Completed with errors, Failed, or Canceled| |Status message| Message associated with the status|
-
-
-
-"
-8CF8260D0474AD73D9878CCD361C83102B724733_14,8CF8260D0474AD73D9878CCD361C83102B724733," Control nodes
-
-Control the pipeline by adding error handling and logic.
-
-
-
-* Loops
-
-"
-8CF8260D0474AD73D9878CCD361C83102B724733_15,8CF8260D0474AD73D9878CCD361C83102B724733,"Loops are a node in a Pipeline that operates like a coded loop. The two types of loops are parallel and sequential. You can use loops when the number of iterations for an operation is dynamic. For example, if you don't know the number of notebooks to process, or you want to choose the number of notebooks at run time, you can use a loop to iterate through the list of notebooks. You can also use a loop to iterate through the output of a node or through elements in a data array. ### Loops in parallel Add a parallel looping construct to the pipeline. A parallel loop runs the iterating nodes independently and possibly simultaneously. For example, to train a machine learning model with a set of hyperparameters to find the best performer, you can use a loop to iterate over a list of hyperparameters to train the notebook variations in parallel. The results can be compared later in the flow to find the best notebook. To see limits on the number of loops you can run simultaneously, see [Limitations](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/known-issues.htmlpipeline-issues). \\ Input parameters when iterating List types |Parameter|Description| |---|---| |List input| The List input parameter contains two fields, the data type of the list and the list content that the loop iterates over or a standard link to pipeline input or pipeline output.| |Parallelism |Maximum number of tasks to be run simultaneously. Must be greater than zero| \\ Input parameters when iterating String types |Parameter|Description| |---|---| |Text input| Text data that the loop reads from| |Separator| A char used to split the text | |Parallelism (optional)| Maximum number of tasks to be run simultaneously. Must be greater than zero| If the input array element type is JSON or any type that is represented as such, this field might decompose it as dictionary. "
-8CF8260D0474AD73D9878CCD361C83102B724733_16,8CF8260D0474AD73D9878CCD361C83102B724733,"Keys are the original element keys and values are the aliases for output names. \ Loops in sequence Add a sequential loop construct to the pipeline. Loops can iterate over a numeric range, a list, or text with a delimiter. A use case for sequential loops is if you want to try an operation 3 time before you determine whether an operation failed. \\ Input parameters |Parameter|Description| |---|---| |List input| The List input parameter contains two fields, the data type of the list and the list content that the loop iterates over or a standard link to pipeline input or pipeline output.| |Text input| Text data that the loop reads from. Specify a character to split the text.| |Range| Specify the start, end, and optional step for a range to iterate over. The default step is 1.| After you configure the loop iterative range, define a subpipeline flow inside the loop to run until the loop is complete. For example, it can invoke notebook, script, or other flow per iteration. \ Terminate loop In a parallel or sequential loop process flow, you can add a Terminate pipeline node to end the loop process anytime. You must customize the conditions for terminating. Attention: If you use the Terminate loop node, your loop cancels any ongoing tasks and terminates without completing its iteration.
-* Set user variables
-
-"
-8CF8260D0474AD73D9878CCD361C83102B724733_17,8CF8260D0474AD73D9878CCD361C83102B724733,"Configure a user variable with a key/value pair, then add the list of dynamic variables for this node. For more information on how to create a user variable, see [Configuring global objects](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-flow-param.html). #### Input parameters x |Parameter|Description| |---|---| |Name| Enter the name, or key, for the variable| |Input type|Choose Expression or Pipeline parameter as the input type. - For expressions, use the built-in Expression Builder to create a variable that results from a custom expression. - For pipeline parameters, assign a pipeline parameter and use the parameter value as input for the user variable.
-
-
-
-
-
-* Terminate pipeline
-
-You can initiate and control the termination of a pipeline with a Terminate pipeline node from the Control category. When the error flow runs, you can optionally specify how to handle notebook or training jobs that were initiated by nodes in the pipeline. You must specify whether to wait for jobs to finish, cancel the jobs then stop the pipeline, or stop everything without canceling. Specify the options for the Terminate pipeline node. #### Input parameters |Parameter|Description| |---|---| |Terminator mode (optional)| Choose the behavior for the error flow| Terminator mode can be: - Terminate pipeline run and all running jobs stops all jobs and stops the pipeline. - Cancel all running jobs then terminate pipeline cancels any running jobs before stopping the pipeline. - Terminate pipeline run after running jobs finish waits for running jobs to finish, then stops the pipeline. - Terminate pipeline that is run without stopping jobs stops the pipeline but allows running jobs to continue.
-
-
-
-"
-8CF8260D0474AD73D9878CCD361C83102B724733_18,8CF8260D0474AD73D9878CCD361C83102B724733," Update nodes
-
-Use update nodes to replace or update assets to improve performance. For example, if you want to standardize your tags, you can update to replace a tag with a new tag.
-
-
-
-* Update AutoAI experiment
-
-Update the training details for an [AutoAI experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html). #### Input parameters |Parameter|Description| |---|---| |AutoAI experiment|Path to a project or a space, where the experiment resides| |AutoAI experiment name (optional)| Name of the experiment to be updated, with optional description and tags| |AutoAI experiment description (optional)|Description of the experiment| |AutoAI experiment tags (optional)|Tags to identify the experiment| #### Output parameters |Parameter|Description| |---|---| |AutoAI experiment|Path of the updated experiment|
-
-
-
-
-
-* Update batch deployment
-
-Use these parameters to update a batch deployment. #### Input parameters |Parameter|Description| |---|---| |Deployment| Path to the deployment to be updated| |New name for the deployment (optional)|Name or ID of the deployment to be updated | |New description for the deployment (optional)|Description of the deployment| |New tags for the deployment (optional)| Tags to identify the deployment| |ML asset|Name or ID of the machine learning asset to deploy| |Hardware specification|Update the hardware specification for the job| #### Output parameters |Parameter|Description| |---|---| |Deployment|Path of the updated deployment|
-
-
-
-
-
-* Update deployment space
-
-"
-8CF8260D0474AD73D9878CCD361C83102B724733_19,8CF8260D0474AD73D9878CCD361C83102B724733,"Update the details for a space. #### Input parameters |Parameter|Description| |---|---| |Space|Path of the existing space| |Space name (optional)|Update the space name| |Space description (optional)|Description of the space| |Space tags (optional)|Tags to identify the space| |WML Instance (optional)| Specify a new Machine Learning instance| |WML instance| Specify a new Machine Learning instance. Note: Even if you assign a different name for an instance in the UI, the system name is Machine Learning instance. Differentiate between different instances by using the instance CRN| #### Output parameters |Parameter|Description| |---|---| |Space|Path of the updated space|
-
-
-
-
-
-* Update online deployment
-
-Use these parameters to update an online deployment (web service). #### Input parameters |Parameter|Description| |---|---| |Deployment|Path of the existing deployment| |Deployment name (optional)|Update the deployment name| |Deployment description (optional)|Description of the deployment| |Deployment tags (optional)|Tags to identify the deployment| |Asset (optional)|Machine learning asset (or version) to be redeployed| #### Output parameters |Parameter|Description| |---|---| |Deployment|Path of the updated deployment|
-
-
-
-"
-8CF8260D0474AD73D9878CCD361C83102B724733_20,8CF8260D0474AD73D9878CCD361C83102B724733," Delete nodes
-
-Configure parameters for delete operations.
-
-
-
-* Delete
-
-You can delete: - AutoAI experiment - Batch deployment - Deployment space - Online deployment For each item, choose the asset for deletion.
-
-
-
-"
-8CF8260D0474AD73D9878CCD361C83102B724733_21,8CF8260D0474AD73D9878CCD361C83102B724733," Run nodes
-
-Use these nodes to train an experiment, execute a script, or run a data flow.
-
-
-
-* Run AutoAI experiment
-
-Trains and stores [AutoAI experiment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html) pipelines and models. #### Input parameters |Parameter|Description| |---|---| |AutoAI experiment|Browse for the ML Pipeline asset or get the experiment from a pipeline parameter or the output from a previous node. | |Training data asset|Browse or search for the data to train the experiment. Note that you can supply data at runtime by using a pipeline parameter| |Holdout data asset (optional)|Optionally choose a separate file to use for holdout data for testingmodel performance| |Models count (optional)| Specify how many models to save from best performing pipelines. The limit is 3 models| |Run name (optional)|Name of the experiment and optional description and tags| |Model name prefix (optional)| Prefix used to name trained models. Defaults to <(experiment name)> | |Run description (optional)| Description of the new training run| |Run tags (optional)| Tags for new training run| |Creation mode (optional)| Choose how to handle a case where the pipeline flow tries to create an asset and one of the same name exists. "
-8CF8260D0474AD73D9878CCD361C83102B724733_22,8CF8260D0474AD73D9878CCD361C83102B724733,"One of: ignore, fail, overwrite| |Error policy (optional)| Optionally, override the default error policy for the node| #### Output parameters |Parameter|Description| |---|---| |Models | List of paths of highest N trained and persisted model (ordered by selected evaluation metric)| |Best model | path of the winning model (based on selected evaluation metric)| |Model metrics | a list of trained model metrics (each item is a nested object with metrics like: holdout_accuracy, holdout_average_precision, ...)| |Winning model metric |elected evaluation metric of the winning model| |Optimized metric| Metric used to tune the model| |Execution status| Information on the state of the job: pending, starting, running, completed, canceled, or failed with errors| |Status message|Information about the state of the job|
-* Run Bash script
-
-"
-8CF8260D0474AD73D9878CCD361C83102B724733_23,8CF8260D0474AD73D9878CCD361C83102B724733,"Run an inline Bash script to automate a function or process for the pipeline. You can enter the Bash script code manually, or you can import the bash script from a resource, pipeline parameter, or the output of another node. You can also use a Bash script to process large output files. For example, you can generate a large, comma-separated list that you can then iterate over using a loop. In the following example, the user entered the inline script code manually. The script uses the cpdctl tool to search all notebooks with a set variable tag and aggregates the results in a JSON list. The list can then be used in another node, such as running the notebooks returned from the search. {: height=""50%"" width=""50%""} #### Input parameters |Parameter|Description| |---|---| |Inline script code|Enter a Bash script in the inline code editor. Optional: Alternatively, you can select a resource, assign a pipeline parameter, or select from another node. | |Environment variables (optional)| Specify a variable name (the key) and a data type and add to the list of variables to use in the script.| |Runtime type (optional)| Select either use standalone runtime (default) or a shared runtime. "
-8CF8260D0474AD73D9878CCD361C83102B724733_24,8CF8260D0474AD73D9878CCD361C83102B724733,"Use a shared runtime for tasks that require running in shared pods. | |Error policy (optional)| Optionally, override the default error policy for the node| #### Output parameters |Parameter|Description| |---|---| |Output variables |Configure a key/value pair for each custom variable, then click the Add button to populate the list of dynamic variables for the node| |Return value|Return value from the node| |Standard output|Standard output from the script| |Execution status|Information on the state of the job: pending, starting, running, completed, canceled, or failed with errors| |Status message| Message associated with the status| #### Rules for Bash script output The output for a Bash script is often the result of a computed expression and can be large. When you are reviewing the properties for a script with valid large output, you can preview or download the output in a viewer. These rules govern what type of large output is valid. - The output of a list_expression is a calculated expression, so it is valid a large output. - String output is treated as a literal value rather than a calculated expression, so it must follow the size limits that govern inline expressions. For example, you are warned when a literal value exceeds 1 KB and values of 2 KB and higher result in an error. #### Referencing a variable in a Bash script The way that you reference a variable in a script depends on whether the variable was created as an input variable or as an output variable. Output variables are created as a file and require a file path in the reference. Specifically: - Input variables are available using the assigned name - Output variable names require that _PATH be appended to the variable name to indicate that values have to be written to the output file pointed by the {output_name}_PATH variable. #### Using SSH in Bash scripts
-The following steps describe how to use ssh to run your remote Bash script. 1. Create a private key and public key.
-bash ssh-keygen -t rsa -C ""XXX"" 2. Copy the public key to the remote host.
-"
-8CF8260D0474AD73D9878CCD361C83102B724733_25,8CF8260D0474AD73D9878CCD361C83102B724733,"bash ssh-copy-id USER@REMOTE_HOST 3. On the remote host, check whether the public key contents are added into /root/.ssh/authorized_keys.
-4. Copy the public and private keys to a new directory in the Run Bash script node. bash mkdir -p $HOME/.ssh copy private key content echo ""-----BEGIN OPENSSH PRIVATE KEY----- ... ... -----END OPENSSH PRIVATE KEY-----"" > $HOME/.ssh/id_rsa copy public key content echo ""ssh-rsa ...... "" > $HOME/.ssh/id_rsa.pub chmod 400 $HOME/.ssh/id_rsa.pub chmod 400 $HOME/.ssh/id_rsa ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o GlobalKnownHostsFile=/dev/null -i $HOME/.ssh/id_rsa USER@REMOTE_HOST ""cd /opt/scripts; ls -l; sh 1.sh"" \\ Using SSH utilities in Bash scripts
-The following steps describe how to use sshpass to run your remote Bash script. 1. Put your SSH password file in your system path, such as the mounted storage volume path. 2. Use the SSH password directly in the Run Bash script node: bash cd /mnts/orchestration ls -l sshpass chmod 777 sshpass ./sshpass -p PASSWORD ssh -o StrictHostKeyChecking=no USER@REMOTE_HOST ""cd /opt/scripts; ls -l; sh 1.sh""
-
-
-
-
-
-* Run batch deployment
-
-Configure this node to run selected deployment jobs. #### Input parameters |Parameter|Description| |---|---| |Deployment|Browse or search for the deployment job | |Input data assets|Specify the data used for the batch job
-"
-8CF8260D0474AD73D9878CCD361C83102B724733_26,8CF8260D0474AD73D9878CCD361C83102B724733,"Restriction: Input for batch deployment jobs is limited to data assets. Deployments that require JSON input or multiple files as input, are not supported. For example, SPSS models and Decision Optimization solutions that require multiple files as input are not supported.| |Output asset|Name of the output file for the results of the batch job. You can either select Filename and enter a custom file name, or Data asset and select an existing asset in a space.| |Hardware specification (optional)|Browse for a hardware specification to apply for the job| |Error policy (optional)| Optionally, override the default error policy for the node| #### Output parameters |Parameter|Description| |---|---| |Job|Path to the file with results from the deployment job| |Job run|ID for the job| |Execution status|Information on the state of the job: pending, starting, running, completed, canceled, or failed with errors| |Status message| Information about the state of the job|
-
-
-
-
-
-* Run DataStage job
-
-
-
-
-
-* Run Data Refinery job
-
-"
-8CF8260D0474AD73D9878CCD361C83102B724733_27,8CF8260D0474AD73D9878CCD361C83102B724733,"This node runs a specified Data Refinery job. #### Input parameters |Parameter|Description| |---|---| |Data Refinery job |Path to the Data Refinery job.| |Environment | Path of the environment used to run the job Attention: Leave the environments field as is to use the default runtime. If you choose to override, specify an alternate environment for running the job. Be sure any environment that you specify is compatible with the component language and hardware configuration to avoid a runtime error.| |Error policy (optional)| Optionally, override the default error policy for the node| #### Output parameters |Parameter|Description| |---|---| |Job |Path to the results from the Data Refinery job| |Job run|Information about the job run| |Job name |Name of the job | |Execution status|Information on the state of the flow: pending, starting, running, completed, canceled, or failed with errors| |Status message| Information about the state of the flow|
-
-
-
-
-
-* Run notebook job
-
-"
-8CF8260D0474AD73D9878CCD361C83102B724733_28,8CF8260D0474AD73D9878CCD361C83102B724733,"Use these configuration options to specify how to run a Jupyter Notebook in a pipeline. #### Input parameters |Parameter|Description| |---|---| |Notebook job|Path to the notebook job. | |Environment |Path of the environment used to run the notebook. Attention: Leave the environments field as is to use the default environment. If you choose to override, specify an alternate environment for running the job. Be sure any environment that you specify is compatible with the notebook language and hardware configuration to avoid a runtime error.| |Environment variables (optional)|List of environment variables used to run the notebook job| |Error policy (optional)| Optionally, override the default error policy for the node| Notes: - Environment variables that you define in a pipeline cannot be used for notebook jobs you run outside of Watson Pipelines. - You can run a notebook from a code package in a regular package. #### Output parameters |Parameter|Description| |---|---| |Job |Path to the results from the notebook job| |Job run|Information about the job run| |Job name |Name of the job | |Output variables |Configure a key/value pair for each custom variable, then click Add to populate the list of dynamic variables for the node| |Execution status|Information on the state of the run: pending, starting, running, completed, canceled, or failed with errors| |Status message|Information about the state of the notebook run|
-
-
-
-
-
-* Run Pipelines component
-
-"
-8CF8260D0474AD73D9878CCD361C83102B724733_29,8CF8260D0474AD73D9878CCD361C83102B724733,"Run a reusable pipeline component that is created by using a Python script. For more information, see [Creating a custom component](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-custom-comp.html). - If a pipeline component is available, configuring the node presents a list of available components. - The component that you choose specifies the input and output for the node. - Once you assign a component to a node, you cannot delete or change the component. You must delete the node and create a new one.
-
-
-
-
-
-* Run Pipelines job
-
-Add a pipeline to run a nested pipeline job as part of a containing pipeline. This is a way of adding reusable processes to multiple pipelines. You can use the output from a nested pipeline that is run as input for a node in the containing pipeline. #### Input parameters |Parameter|Description| |---|---| |Pipelines job|Select or enter a path to an existing Pipelines job.| |Environment (optional)| Select the environment to run the Pipelines job in, and assign environment resources. Attention: Leave the environments field as is to use the default runtime. If you choose to override, specify an alternate environment for running the job. Be sure any environment that you specify is compatible with the component language and hardware configuration to avoid a runtime error.| |Job Run Name (optional) |A default job name is used unless you override it by specifying a custom job name. You can see the job name in the Job Details dashboard.| |Values for local parameters (optional) | Edit the default job parameters. This option is available only if you have local parameters in the job. | |Values from parameter sets (optional) |Edit the parameter sets used by this job. "
-8CF8260D0474AD73D9878CCD361C83102B724733_30,8CF8260D0474AD73D9878CCD361C83102B724733,"You can choose to use the parameters as defined by default, or use value sets from other pipelines' parameters. | |Error policy (optional)| Optionally, override the default error policy for the node| #### Output parameters |Parameter|Description| |---|---| |Job |Path to the results from the pipeline job| |Job run|Information about the job run| |Job name |Name of the job | |Execution status| Returns a value of: Completed, Completed with warnings, Completed with errors, Failed, or Canceled| |Status message| Message associated with the status| #### Notes for running nested pipeline jobs If you create a pipeline with nested pipelines and run a pipeline job from the top-level, the pipelines are named and saved as project assets that use this convention: - The top-level pipeline job is named ""Trial job - pipeline guid"". - All subsequent jobs are named ""pipeline_ pipeline guid"".
-* Run SPSS Modeler job
-
-"
-8CF8260D0474AD73D9878CCD361C83102B724733_31,8CF8260D0474AD73D9878CCD361C83102B724733,"Use these configuration options to specify how to run an SPSS Modeler in a pipeline. #### Input parameters |Parameter|Description| |---|---| |SPSS Modeler job|Select or enter a path to an existing SPSS Modeler job.| |Environment (optional)| Select the environment to run the SPSS Modeler job in, and assign environment resources. Attention: Leave the environments field as is to use the default SPSS Modeler runtime. If you choose to override, specify an alternate environment for running the job. Be sure any environment that you specify is compatible with the hardware configuration to avoid a runtime error.| |Values for local parameters | Edit the default job parameters. This option is available only if you have local parameters in the job. | |Error policy (optional)| Optionally, override the default error policy for the node| #### Output parameters |Parameter|Description| |---|---| |Job |Path to the results from the pipeline job| |Job run|Information about the job run| |Job name |Name of the job | |Execution status| Returns a value of: Completed, Completed with warnings, Completed with errors, Failed, or Canceled| |Status message| Message associated with the status|
-
-
-
-"
-8CF8260D0474AD73D9878CCD361C83102B724733_32,8CF8260D0474AD73D9878CCD361C83102B724733," Learn more
-
-Parent topic:[Creating a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-create.html)
-"
-536EF493AB96990DE8E237EDB8A97DB989EF15C8_0,536EF493AB96990DE8E237EDB8A97DB989EF15C8," Creating a pipeline
-
-Create a pipeline to run an end-to-end scenario to automate all or part of the AI lifecycle. For example, create a pipeline that creates and trains an asset, promotes it to a space, creates a deployment, then scores the model.
-
-Watch this video to see how to create and run a sample pipeline.
-
-This video provides a visual method to learn the concepts and tasks in this documentation.
-
-"
-536EF493AB96990DE8E237EDB8A97DB989EF15C8_1,536EF493AB96990DE8E237EDB8A97DB989EF15C8," Overview: Adding a pipeline to a project
-
-Follow these steps to add a pipeline to a project:
-
-
-
-1. Open a project.
-2. Click New task > Automate model lifecycle.
-3. Enter a name and an optional description.
-4. Click Create to open the canvas.
-
-
-
-"
-536EF493AB96990DE8E237EDB8A97DB989EF15C8_2,536EF493AB96990DE8E237EDB8A97DB989EF15C8," Pipeline access
-
-When you use a pipeline to automate a flow, you must have access to all of the elements in the pipeline. Make sure that you create and run pipelines with the proper access to all assets, projects, and spaces used in the pipeline.
-
-"
-536EF493AB96990DE8E237EDB8A97DB989EF15C8_3,536EF493AB96990DE8E237EDB8A97DB989EF15C8," Related services
-
-In addition to access to all elements in a pipeline, you must have the services available to run all assets you add to a pipeline. For example, if you automate a pipeline that trains and deploys a model, you must have the Watson Studio and Watson Machine Learning services. If a required service is missing, the pipeline will not run. This table lists assets that require services in addition to Watson Studio:
-
-
-
- Asset Required service
-
- AutoAI experiment Watson Machine Learning
- Batch deployment job Watson Machine Learning
- Online deployment (web service) Watson Machine Learning
-
-
-
-"
-536EF493AB96990DE8E237EDB8A97DB989EF15C8_4,536EF493AB96990DE8E237EDB8A97DB989EF15C8," Overview: Building a pipeline
-
-Follow these high-level steps to build and run a pipeline.
-
-
-
-1. Drag any node objects onto the canvas. For example, drag a Run notebook job node onto the canvas.
-2. Use the action menu for each node to view and select options.
-3. Configure a node as required. You are prompted to supply the required input options. For some nodes, you can view or configure output options as well. For examples of configuring nodes, see [Configuring pipeline components](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-config.html).
-4. Drag from one node to another to connect and order the pipeline.
-5. Optional: Click the Global objects icon  in the toolbar to configure runtime options for the pipeline.
-6. When the pipeline is complete, click the Run icon on the toolbar to run the pipeline. You can run a trial to test the pipeline, or you can schedule a job when you are confident in the pipeline.
-
-
-
-"
-536EF493AB96990DE8E237EDB8A97DB989EF15C8_5,536EF493AB96990DE8E237EDB8A97DB989EF15C8," Configuring nodes
-
-As you add nodes to a pipeline, you must configure them to provide all of the required details. For example, if you add a node to run an AutoAI experiment, you must configure the node to specify the experiment, load the training data, and specify the output file:
-
-
-
-"
-536EF493AB96990DE8E237EDB8A97DB989EF15C8_6,536EF493AB96990DE8E237EDB8A97DB989EF15C8," Connecting nodes
-
-When you build a complete pipeline, the nodes must be connected in the order in which they run in the pipeline. To connect nodes, hover over a node and drag a connection to the target node. Disconnected nodes are run in parallel.
-
-
-
-"
-536EF493AB96990DE8E237EDB8A97DB989EF15C8_7,536EF493AB96990DE8E237EDB8A97DB989EF15C8," Defining pipeline parameters
-
-A pipeline parameter defines a global variable for the whole pipeline. Use pipeline parameters to specify data from one of these categories:
-
-
-
- Parameter type Can specify
-
- Basic JSON types such as string, integer, or a JSON object
- CPDPath Resources available within the platform, such as assets, asset containers, connections, notebooks, hardware specs, projects, spaces, or jobs
- InstanceCRN Storage, machine learning instances, and other services.
- Other Various configuration types, such as status, timeout length, estimator, error policies and other various configuration types.
-
-
-
-To specify a pipeline parameter:
-
-
-
-1. Click the global objects icon  in the toolbar to open the Manage global objects window.
-2. Select the Pipeline parameters tab to configure parameters.
-3. Click Add pipeline parameter.
-4. Specify a name and an optional description.
-5. Select a type and provide any required information.
-6. Click Add when the definition is complete, and repeat the previous steps until you finish defining the parameters.
-7. Close the Manage global objects dialog.
-
-
-
-The parameters are now available to the pipeline.
-
-"
-536EF493AB96990DE8E237EDB8A97DB989EF15C8_8,536EF493AB96990DE8E237EDB8A97DB989EF15C8," Next steps
-
-[Configure pipeline components](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-config.html)
-
-Parent topic:[IBM Watson Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-overview.html)
-"
-7F2731C1EBB3F492687A336E1369CD6232512118_0,7F2731C1EBB3F492687A336E1369CD6232512118," Creating a custom component for use in the pipeline
-
-A custom pipeline component runs a script that you write. You can use custom components to share reusable scripts between pipelines.
-
-You create custom components as project assets. You can then use the components in pipelines you create in that project. You can create as many custom components for pipelines as needed. Currently, to create a custom component you must create one programmatically, using a Python function.
-
-"
-7F2731C1EBB3F492687A336E1369CD6232512118_1,7F2731C1EBB3F492687A336E1369CD6232512118," Creating a component as a project asset
-
-To create a custom component, use the Python client to authenticate with IBM Watson Pipelines, code the component, then publish the component to the specified project. After it is available in the project, you can assign it to a node in a pipeline and run it as part of a pipeline flow.
-
-This example demonstrates the process of publishing a component that adds two numbers together, then assigning the component to a pipeline node.
-
-
-
-1. Publish a function as a component with the latest Python client. Run the following code in a Jupyter Notebook in a project of IBM watsonx.
-
- Install libraries
-! pip install ibm-watson-pipelines
-
- Authentication
-from ibm_watson_pipelines import WatsonPipelines
-
-apikey = ''
-project_id = 'your_project_id'
-
-client = WatsonPipelines.from_apikey(apikey)
-
- Define the function of the component
-
- If you define the input parameters, users are required to
- input them in the UI
-
-def add_two_numbers(a: int, b: int) -> int:
-print('Adding numbers: {} + {}.'.format(a, b))
-return a + b + 10
-
- Other possible functions might be sending a Slack message,
- or listing directories in a storage volume, and so on.
-
- Publish the component
-client.publish_component(
-name='Add numbers', Appears in UI as component name
-func=add_two_numbers,
-description='Custom component adding numbers', Appears in UI as component description
-project_id=project_id,
-overwrite=True, Overwrites an existing component with the same name
-)
-
-To generate a new API key:
-
-
-
-1. Go to the [IBM Cloud home page](https://cloud.ibm.com/)
-2. Click Manage > Access (IAM)
-3. Click API keys
-4. Click Create
-
-
-
-
-
-
-
-1. Drag the node called Run Pipelines component under Run to the canvas.
-
-"
-7F2731C1EBB3F492687A336E1369CD6232512118_2,7F2731C1EBB3F492687A336E1369CD6232512118,"2. Choose the name of the component that you want to use.
-
-3. Connect and run the node as part of a pipeline job.
-
-
-
-
-"
-7F2731C1EBB3F492687A336E1369CD6232512118_3,7F2731C1EBB3F492687A336E1369CD6232512118," Manage pipeline components
-
-To manage your components, use the Python client to manage them.
-
-
-
-Table 1. Manage pipeline components
-
- Method Function
-
- client.get_components(project_id=project_id) List components from a project
- client.get_component(project_id=project_id, component_id=component_id) Get a component by ID
- client.get_component(project_id=project_id, name=component_name) Get a component by name
- client.publish_component(component name) Publish a new component
- client.delete_component(project_id=project_id, component_id=component_id) Delete a component by ID
-
-
-
-"
-7F2731C1EBB3F492687A336E1369CD6232512118_4,7F2731C1EBB3F492687A336E1369CD6232512118," Import and export
-
-IBM Watson Pipelines can be imported and exported with pipelines only.
-
-Parent topic:[Creating a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-create.html)
-"
-05D687FC92FD17804374E20E7F330EDAE142F725_0,05D687FC92FD17804374E20E7F330EDAE142F725," Handling Pipeline errors
-
-You can specify how to respond to errors in a pipeline globally, with an error policy, and locally, by overriding the policy on the node level. You can also create a custom error-handling response.
-
-"
-05D687FC92FD17804374E20E7F330EDAE142F725_1,05D687FC92FD17804374E20E7F330EDAE142F725," Setting global error policy
-
-The error policy sets the default behavior for errors in a pipeline. You can override this behavior for any node in the pipeline.
-
-To set the global error policy:
-
-
-
-1. Click the Manage default settings icon on the toolbar.
-2. Choose the default response to an error under the Error policy:
-
-
-
-* Fail pipeline on error stops the flow and initiates an error-handling flow.
-* Continue pipeline on error tries to continue running the pipeline.
-
-Note: Continue pipeline on error affects nodes that use the default error policy and does not affect node-specific error policies.
-
-
-
-3. You can optionally create a custom error-handling response for a flow failure.
-
-
-
-"
-05D687FC92FD17804374E20E7F330EDAE142F725_2,05D687FC92FD17804374E20E7F330EDAE142F725," Specifying an error response
-
-If you opt for Fail pipeline on error for either the global error policy or for a node-specific policy, you can further specify what happens on failure. For example, if you check the Show icon on nodes that are linked to an error-handling pipeline, an icon flags a node with an error to help debug the flow.
-
-"
-05D687FC92FD17804374E20E7F330EDAE142F725_3,05D687FC92FD17804374E20E7F330EDAE142F725," Specifying a node-specific error policy
-
-You can override the default error policy for any node in the pipeline.
-
-
-
-1. Click a node to open the configuration pane.
-2. Check the option to Override default error policy with:
-
-
-
-* Fail pipeline on error
-* Continue pipeline on error
-
-
-
-
-
-"
-05D687FC92FD17804374E20E7F330EDAE142F725_4,05D687FC92FD17804374E20E7F330EDAE142F725," Viewing all node policies
-
-To view all node-specific error handling for a pipeline:
-
-
-
-1. Click Manage default settings on the toolbar.
-2. Click the view all node policies link under Error policy.
-
-
-
-A list of all nodes in the pipeline show which nodes use the default policy, and which override the default policy. Click a node name to see the policy details. Use the view filter to show:
-
-
-
-* All error policies: all nodes
-* Default policy: all nodes that use the default policy
-* Override default policy: all nodes that override the default policy
-* Fail pipeline on error: all nodes that stop the flow on error
-* Continue pipeline on error: all nodes that try to continue the flow on error
-
-
-
-"
-05D687FC92FD17804374E20E7F330EDAE142F725_5,05D687FC92FD17804374E20E7F330EDAE142F725," Running the Fail on error flow
-
-If you specify that the flow fails on error, a secondary error handling flow starts when an error is encountered.
-
-"
-05D687FC92FD17804374E20E7F330EDAE142F725_6,05D687FC92FD17804374E20E7F330EDAE142F725," Adding a custom error response
-
-If Create custom error handling response is checked on default settings for error policy, you can add an error handling node to the canvas so you can configure a custom error response. The response applies to all nodes configured to fail when an error occurs.
-
-Parent topic:[Creating a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-create.html)
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_0,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Functions used in Watson Pipelines's Expression Builder
-
-Use these functions in Pipelines code editors, for example, to define a user variable or build an advanced condition.
-
-The Experssion Builder uses the categories for coding functions:
-
-
-
-* [Conversion functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-expr-builder.html?context=cdpaas&locale=enconversion)
-* [Standard functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-expr-builder.html?context=cdpaas&locale=enofext)
-* [Accessing advanced global objects](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-expr-builder.html?context=cdpaas&locale=enadvanced)
-
-
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_1,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Conversion functions
-
-Converts a single data element format to another.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_2,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Table for basic data type conversion
-
-
-
- Type Accepts Returns Syntax
-
- double int, uint, string double double(val)
- duration string duration duration(string) Duration must end with ""s"", which stands for seconds.
- int int, uint, double, string, timestamp int int(val)
- timestamp string timestamp timestamp(string) Converts strings to timestamps according to RFC3339, that is ""1972-01-01T10:00:20.021-05:00"".
- uint int, double, string uint uint(val)
-
-
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_3,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Example
-
-For example, to cast a value to type double:
-
-double(%val%)
-
-When you cast double to int | uint, result rounds toward zero and errors if result is out of range.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_4,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Standard functions
-
-Functions that are unique to IBM Watson Pipelines.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_5,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," sub
-
-Replaces substrings of a string that matches the given regular expression that starts at position offset.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_6,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax
-
-(string).sub(substring (string), replacement (string), [occurrence (int), offset (int)]])
-
-returns: the string with substrings updated.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_7,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples
-
-'aaabbbcccbbb'.sub('[b]+','RE')
-
-Returns 'aaaREcccRE'.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_8,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," format
-
-Formats a string or timestamp according to a format specifier and returns the resulting string.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_9,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_10,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C,"format as a method of strings
-
-(string).format(parameter 1 (string or bool or number)... parameter 10 (string or bool or number))
-
-returns: the string that contains the formatted input values.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_11,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C,"format as a method of timestamps
-
-(timestamp).format(layout(string))
-
-returns: the formatted timestamp in string format.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_12,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples
-
-'number=%d, text=%s'.format(1, 'str')
-
-Returns the string 'number=1, text=str'.
-
-timestamp('2020-07-24T09:07:29.000-00:00').format('%Y/%m/%d')
-
-Returns the string '2020/07/24'.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_13,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," now
-
-Returns the current timestamp.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_14,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax
-
-now()
-
-returns: the current timestamp.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_15,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," parseTimestamp
-
-Returns the current timestamp in string format.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_16,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax
-
-parseTimestamp([timestamp_string(string)] [layout(string)])
-
-returns: the current timestamp to a string of type string.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_17,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples
-
-parseTimestamp('2020-07-24T09:07:29Z')
-
-Returns '2020-07-24T09:07:29.000-00:00'.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_18,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," min
-
-Returns minimum value in list.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_19,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax
-
-(list).min()
-
-returns: the minimum value of the list.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_20,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples
-
-[1,2,3].min()
-
-Returns the integer 1.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_21,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," max
-
-Returns maximum value in list.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_22,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax
-
-(list).max()
-
-returns: the maximum value of the list.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_23,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples
-
-[1,2,3].max()
-
-Returns the integer 3.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_24,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," argmin
-
-Returns index of minimum value in list.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_25,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax
-
-(list).argmin()
-
-returns: the index of the minimum value of the list.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_26,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples
-
-[1,2,3].argmin()
-
-Returns the integer 0.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_27,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," argmax
-
-Returns index of maximum value in list.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_28,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax
-
-(list).argmax()
-
-returns: the index of the maximum value of the list.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_29,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples
-
-[1,2,3].argmax()
-
-Returns the integer 2.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_30,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," sum
-
-Returns the sum of values in list.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_31,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax
-
-(list).sum()
-
-returns: the index of the maximum value of the list.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_32,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples
-
-[1,2,3].argmax()
-
-Returns the integer 2.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_33,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," base64.decode
-
-Decodes base64-encoded string to bytes. This function returns an error if the string input is not base64-encoded.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_34,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax
-
-base64.decode(base64_encoded_string(string))
-
-returns: the decoded base64-encoded string in byte format.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_35,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples
-
-base64.decode('aGVsbG8=')
-
-Returns 'hello' in bytes.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_36,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," base64.encode
-
-Encodes bytes to a base64-encoded string.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_37,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax
-
-base64.encode(bytes_to_encode (bytes))
-
-returns: the encoded base64-encoded string of the original byte value.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_38,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples
-
-base64.decode(b'hello')
-
-Returns 'aGVsbG8=' in bytes.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_39,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," charAt
-
-Returns the character at the given position. If the position is negative, or greater than the length of the string, the function produces an error.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_40,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax
-
-(string).charAt(index (int))
-
-returns: the character of the specified position in integer format.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_41,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples
-
-'hello'.charAt(4)
-
-Returns the character 'o'.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_42,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," indexOf
-
-Returns the integer index of the first occurrence of the search string. If the search string is not found the function returns -1.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_43,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax
-
-(string).indexOf(search_string (string), [offset (int)])
-
-returns: the index of the first character occurrence after the offset.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_44,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples
-
-'hello mellow'.indexOf('ello', 2)
-
-Returns the integer 7.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_45,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," lowerAscii
-
-Returns a new string with ASCII characters turned to lowercase.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_46,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax
-
-(string).lowerAscii()
-
-returns: the new lowercase string.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_47,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples
-
-'TacoCat'.lowerAscii()
-
-Returns the string 'tacocat'.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_48,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," replace
-
-Returns a new string based on the target, which replaces the occurrences of a search string with a replacement string if present. The function accepts an optional limit on the number of substring replacements to be made.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_49,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax
-
-(string).replace(search_string (string), replacement (string), [offset (int)])
-
-returns: the new string with occurrences of a search string replaced.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_50,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples
-
-'hello hello'.replace('he', 'we')
-
-Returns the string 'wello wello'.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_51,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," split
-
-Returns a list of strings that are split from the input by the separator. The function accepts an optional argument that specifies a limit on the number of substrings that are produced by the split.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_52,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax
-
-(string).split(separator (string), [limit (int)])
-
-returns: the split string as a string list.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_53,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples
-
-'hello hello hello'.split(' ')
-
-Returns the string list ['hello', 'hello', 'hello'].
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_54,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," substring
-
-Returns the substring given a numeric range corresponding to character positions. Optionally you might omit the trailing range for a substring from a character position until the end of a string.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_55,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax
-
-(string).substring(start (int), [end (int)])
-
-returns: the substring at the specified index of the string.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_56,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples
-
-'tacocat'.substring(4)
-
-Returns the string 'cat'.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_57,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," trim
-
-Returns a new string, which removes the leading and trailing white space in the target string. The trim function uses the Unicode definition of white space, which does not include the zero-width spaces.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_58,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax
-
-(string).trim()
-
-returns: the new string with white spaces removed.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_59,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples
-
-' ttrimn '.trim()
-
-Returns the string 'trim'.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_60,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," upperAscii
-
-Returns a new string where all ASCII characters are upper-cased.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_61,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax
-
-(string).upperAscii()
-
-returns: the new string with all characters turned to uppercase.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_62,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples
-
-'TacoCat'.upperAscii()
-
-Returns the string 'TACOCAT'.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_63,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," size
-
-Returns the length of the string, bytes, list, or map.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_64,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax
-
-(string | bytes | list | map).size()
-
-returns: the length of the string, bytes, list, or map array.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_65,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples
-
-'hello'.size()
-
-Returns the integer 5.
-
-'hello'.size()
-
-Returns the integer 5.
-
-['a','b','c'].size()
-
-Returns the integer 3.
-
-{'key': 'value'}.size()
-
-Returns the integer 1.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_66,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," contains
-
-Tests whether the string operand contains the substring.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_67,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax
-
-(string).contains(substring (string))
-
-returns: a Boolean value of whether the substring exists in the string operand.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_68,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples
-
-'hello'.contains('ll')
-
-Returns true.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_69,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," endsWith
-
-Tests whether the string operand ends with the specified suffix.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_70,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax
-
-(string).endsWith(suffix (string))
-
-returns: a Boolean value of whether the string ends with specified suffix in the string operand.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_71,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples
-
-'hello'.endsWith('llo')
-
-Returns true.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_72,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," startsWith
-
-Tests whether the string operand starts with the prefix argument.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_73,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax
-
-(string).startsWith(prefix (string))
-
-returns: a Boolean value of whether the string begins with specified prefix in the string operand.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_74,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples
-
-'hello'.startsWith('he')
-
-Returns true.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_75,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," matches
-
-Tests whether the string operand matches regular expression.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_76,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax
-
-(string).matches(prefix (string))
-
-returns: a Boolean value of whether the string matches the specified regular expression.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_77,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples
-
-'Hello'.matches('[Hh]ello')
-
-Returns true.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_78,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," getDate
-
-Get the day of the month from the date with time zone (default Coordinated Universal Time), one-based indexing.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_79,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax
-
-(timestamp).getDate([time_zone (string)])
-
-returns: the day of the month with one-based indexing.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_80,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples
-
-timestamp('2020-07-24T09:07:29.000-00:00').getDate()
-
-Returns 24.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_81,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," getDayOfMonth
-
-Get the day of the month from the date with time zone (default Coordinated Universal Time), zero-based indexing.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_82,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax
-
-(timestamp).getDayOfMonth([time_zone (string)])
-
-returns: the day of the month with zero-based indexing.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_83,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples
-
-timestamp('2020-07-24T09:07:29.000-00:00').getDayOfMonth()
-
-Returns 23.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_84,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," getDayOfWeek
-
-Get day of the week from the date with time zone (default Coordinated Universal Time), zero-based indexing, zero for Sunday.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_85,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax
-
-(timestamp).getDayOfWeek([time_zone (string)])
-
-returns: the day of the week with zero-based indexing.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_86,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples
-
-timestamp('2020-07-24T09:07:29.000-00:00').getDayOfWeek()
-
-Returns 5.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_87,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," getDayOfYear
-
-Get the day of the year from the date with time zone (default Coordinated Universal Time), zero-based indexing.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_88,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax
-
-(timestamp).getDayOfYear([time_zone (string)])
-
-returns: the day of the year with zero-based indexing.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_89,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples
-
-timestamp('2020-07-24T09:07:29.000-00:00').getDayOfYear()
-
-Returns 205.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_90,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," getFullYear
-
-Get the year from the date with time zone (default Coordinated Universal Time).
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_91,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax
-
-(timestamp).getFullYear([time_zone (string)])
-
-returns: the year from the date.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_92,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples
-
-timestamp('2020-07-24T09:07:29.000-00:00').getFullYear()
-
-Returns 2020.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_93,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," getMonth
-
-Get the month from the date with time zone, 0-11.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_94,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax
-
-(timestamp).getMonth([time_zone (string)])
-
-returns: the month from the date.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_95,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples
-
-timestamp('2020-07-24T09:07:29.000-00:00').getMonth()
-
-Returns 6.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_96,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," getHours
-
-Get hours from the date with time zone, 0-23.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_97,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax
-
-(timestamp).getHours([time_zone (string)])
-
-returns: the hour from the date.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_98,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples
-
-timestamp('2020-07-24T09:07:29.000-00:00').getHours()
-
-Returns 9.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_99,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," getMinutes
-
-Get minutes from the date with time zone, 0-59.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_100,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax
-
-(timestamp).getMinutes([time_zone (string)])
-
-returns: the minute from the date.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_101,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples
-
-timestamp('2020-07-24T09:07:29.000-00:00').getMinutes()
-
-Returns 7.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_102,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," getSeconds
-
-Get seconds from the date with time zone, 0-59.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_103,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax
-
-(timestamp).getSeconds([time_zone (string)])
-
-returns: the second from the date.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_104,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples
-
-timestamp('2020-07-24T09:07:29.000-00:00').getSeconds()
-
-Returns 29.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_105,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," getMilliseconds
-
-Get milliseconds from the date with time zone, 0-999.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_106,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax
-
-(timestamp).getMilliseconds([time_zone (string)])
-
-returns: the millisecond from the date.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_107,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples
-
-timestamp('2020-07-24T09:07:29.021-00:00').getMilliseconds()
-
-Returns 21.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_108,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Access to advanced global objects
-
-Get node outputs, user variables, and pipeline parameters by using the following Pipelines code.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_109,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Get user variable
-
-Gets the most up-to-date value of a user variable.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_110,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax
-
-vars.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_111,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples
-
-
-
- Example Output
-
- vars.my_user_var Gets the value of the user variable my_user_var
-
-
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_112,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Get parameters
-
-Gets the flow parameters.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_113,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax
-
-params.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_114,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples
-
-
-
- Example Output
-
- params.a Gets the value of the parameter a
-
-
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_115,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Get parameter sets
-
-Gets the flow parameter sets.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_116,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax
-
-param_set..
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_117,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples
-
-
-
- Example Output
-
- param_set.ps.a Gets the value of the parameter a from a parameter set ps
- param_sets.config Gets the pipeline configuration values
- param_sets.config.deadline Gets a date object from the configurations parameter set
- param_sets.ps[""$PARAM""] Gets the value of the parameter $PARAM from a parameter set ps
-
-
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_118,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Get task results
-
-Get a pipeline task's resulting output and other metrics from a pipeline task after it completes its run.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_119,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Syntax
-
-tasks..
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_120,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples
-
-
-
- Example Output
-
- tasks.run_datastage_job Gets the results dictionary of job output
- tasks.run_datastage_job.results.score Gets the value score of job output
- tasks.run_datastage_job.results.timestamp Gets the end timestamp of job run
- tasks.run_datastage_job.results.error Gets the number of errors from job run
- tasks.loop_task.loop.counter Gets the current loop iterative counter of job run
- tasks.loop_task.loop.item Gets the current loop iterative item of job run
- tasks.run_datastage_job.results.status Gets either success or fail status of job run
- tasks.run_datastage_job.results.status_message Gets the status message of job run
- tasks.run_datastage_job.results.job_name Gets the job name
- tasks.run_datastage_job.results.job Gets the Cloud Pak for Data path of job
- tasks.run_datastage_job.results.job_run Gets the Cloud Pak for Data run path of job run
-
-
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_121,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Get pipeline context objects
-
-Gets values that are evaluated in the context of a pipeline that is run in a scope (project, space, catalog).
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_122,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples
-
-
-
- Example Output
-
- ctx.scope.id Gets scope ID
- ctx.scope.type Returns either ""project"", ""space"", or ""catalog""
- ctx.scope.name Gets scope name
- ctx.pipeline.id Gets pipeline ID
- ctx.pipeline.name Gets pipeline name
- ctx.job.id Gets job ID
- ctx.run_datastage_job.id Gets job run ID
- ctx.run_datastage_job.started_at Gets job run start time
- ctx.user.id Gets the user ID
-
-
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_123,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Get error status
-
-If the exception handler is triggered, an error object is created and becomes accessible only within the exception handler.
-
-"
-E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C_124,E933C12C1DF97E13CBA40BCD54E4F4B8133DA10C," Examples
-
-
-
- Example Output
-
- error.status Gets either success or fail status of job run, usually failed
- error.status_message Gets the error status message
- error.job Gets the Cloud Pak for Data path of job
- error.run_datastage_job Gets the Cloud Pak for Data run path of job
-
-
-
-Parent topic:[Adding conditions to a Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-conditions.html)
-"
-445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C_0,445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C," Configuring global objects for Watson Pipelines
-
-Use global objects to create configurable constants to configure your pipeline at run time. Use parameters or user variables in pipelines to specify values at run time, rather than hardcoding the values. Unlike pipeline parameters, user variables can be dynamically set during the flow.
-
-Learn about creating:
-
-
-
-* [Pipeline parameters](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-flow-param.html?context=cdpaas&locale=enflow)
-* [Parameter sets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-flow-param.html?context=cdpaas&locale=enparam-set)
-* [User variables](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-flow-param.html?context=cdpaas&locale=enuser)
-
-
-
-"
-445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C_1,445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C," Pipeline parameters
-
-Use pipeline parameters to specify a value at pipeline runtime. For example, if you want a user to enter a deployment space for pipeline output, use a parameter to prompt for the space name to use when the pipeline runs. Specifying the value of the parameter each time that you run the job helps you use the correct resources.
-
-About pipeline parameters:
-
-
-
-* can be assigned as a node value or assign it for the pipeline job.
-* can be assigned to any node, and a status indicator alerts you.
-* can be used for multiple nodes.
-
-
-
-"
-445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C_2,445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C," Defining a pipeline parameter
-
-
-
-1. Create a pipeline parameter from the node configuration panel from the toolbar.
-2. Enter a name and an optional description. The name must be lower snake case with lowercase letters, numbers, and underscores. For example, lower_snake_case_with_numbers_123 is a valid name. The name must begin with a letter. If the name does not comply, you get a 404 error when you try to run the pipeline.
-3. Assign a parameter type. Depending on the parameter type, you might need to provide more details or assign a default value.
-4. Click Add to list to save the pipeline parameter.
-
-
-
-Note:
-
-"
-445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C_3,445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C," Parameter types
-
-Parameter types are categorized as:
-
-
-
-* Basic: including data types to structure input to a pipeline or options for handling the creation of a duplicate space or asset.
-* Resource: for selecting a project, catalog, space, or asset.
-* Instance: for selecting a machine learning instance or a Cloud Object Storage instance.
-* Other: for specifying details, such as creation mode or error policy.
-
-
-
-"
-445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C_4,445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C," Example of using pipeline types
-
-To create a parameter of the type Path:
-
-
-
-1. Create a parameter set called MASTER_PARAMETER_SET.
-2. Create a parameter called file_path and set the type to Path.
-3. Set the value of file_path to mnts/workspace/masterdir.
-4. Drag the node Wait for file onto the canvas and set the File location value to MASTER_PARAMETER_SET.file_path.
-5. Connect the Wait for file with the Run Bash script node so that the latter node runs after the former.
-6. Optional: Test your parameter variable:
-
-
-
-1. Add the environment variable parameter to your MASTER_PARAMETER_SET parameter set, for example FILE_PATH.
-2. Paste the following command into the Script code of the Run Bash script:
-
-echo File: $FILE_PATH
-cat $FILE_PATH
-
-
-
-7. Run the pipeline. The path mnts/workspace/masterdir is in both of the nodes' execution logs to see they passed successfully.
-
-
-
-"
-445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C_5,445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C," Configuring a node with a pipeline parameter
-
-When you configure a node with a pipeline parameter, you can choose an existing pipeline parameter or create a new one as part of configuring a node.
-
-For example:
-
-
-
-1. Create a pipeline parameter called creationmode and save it to the parameter list.
-2. Configure a Create deployment space node and click to open the configuration panel.
-3. Choose the Pipeline parameter as the input for the Creation mode option.
-4. Choose the creationmode pipeline parameter and save the configuration.
-
-
-
-When you run the flow, the pipeline parameter is assigned when the space is created.
-
-"
-445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C_6,445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C," Parameter sets
-
-Parameter sets are a group of related parameters to use in a pipeline. For example, you might create one set of parameters to use in a test environment and another for use in a production environment.
-
-Parameter sets can be created as a project asset. Parameter sets created in the project are then available for use in pipelines in that project.
-
-"
-445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C_7,445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C," Creating a parameter set as a project asset
-
-You can create a parameter set as a reusable project asset to use in pipelines.
-
-
-
-1. Open an existing project or create a project.
-2. Click New task > Collect multiple job parameters with specified values to reuse in jobs from the available tasks.
-3. Assign a name for the set, and specify the details for each parameter in the set, including:
-
-
-
-* Name for the parameter
-* Data type
-* Prompt
-* Default value
-
-
-
-4. Optionally create value sets for the parameters in the parameter set. The value sets can be the different values for different contexts. For example, you can create a Test value set with values for a test environment, and a production set for production values.
-5. Save the parameter set after you create all the parameters, s. It becomes available for use in pipelines that are created in that project.
-
-
-
-"
-445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C_8,445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C," Adding a parameter set for use in a pipeline
-
-To add a parameter set from a project:
-
-
-
-1. Click the global objects icon and switch to the Parameter sets tab.
-2. Click Add parameter set to add parameter sets from your project that you want to use in your pipeline.
-3. You can add or remove parameter sets from the list. The parameter sets you specify for use in your pipeline becomes available when you assign parameters as input in the pipeline.
-
-
-
-"
-445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C_9,445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C," Creating a parameter set from the parameters list in your pipeline
-
-You can create a parameter set from the parameters list for your pipeline
-
-
-
-1. Click the global objects icon and open the Pipeline Parameters.
-2. Select the parameters that you want in the set, then click the Save as parameter set icon.
-3. Enter a name and optional description for the set.
-4. Save to add the parameter set for use in your pipeline.
-
-
-
-"
-445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C_10,445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C," Using a parameter set in a pipeline
-
-To use a parameter set:
-
-
-
-1. Choose Assign pipeline parameter as an input type from a node property sheet.
-2. Choose the parameter to assign. A list displays all available parameters of the type for that input. Available parameters can be individual parameters, and parameters defined as part of a set. The parameter set name precedes the name of the parameter. For example, Parameter_set_name.Parameter_name.
-3. Run the pipeline and select a value set for the corresponding value (if available), assign a value for the parameter, or accept the default value.
-
-
-
-Note:You can use a parameter set in the expression builder by using the format param_sets.. If a parameter set value contains an environment variable, you must use this syntax in the expression builder: param_sets.MyParamSet[""$ICU_DATA""]. Attention: If you delete a parameter, make sure that you remove the references to the parameter from your job design. If you do not remove the references, your job might fail.
-
-"
-445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C_11,445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C," Editing a parameter set in a job
-
-If you use a parameter set when you define a job, you can choose a value set to populate variables with the values in that set. If you change and save the values, then edit the job and save changes, the parameter set values reset to the defaults.
-
-"
-445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C_12,445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C," User variables
-
-Create user variables to assign values when the flow runs. Unlike pipeline parameters, user variables can be modified during processing.
-
-"
-445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C_13,445B99372919DE6B2C3E6A7E2C3F4CAAB0BF174C," Defining a user variable
-
-You can create user variables for use in your pipeline. User variables, like parameters, are defined on the global level and are not specific to any node. The initial value for a user variable must be set when you define it and cannot be set dynamically as the result of any node output. When you define a user variable, you can use the Set user variables node to update it with node output.
-
-To create a user variable:
-
-
-
-1. Create a variable from the Update variable node configuration panel or from the toolbar.
-2. Enter a name and an optional description. The name must be lower snake case with lowercase letters, numbers, and underscores. For example, lower_snake_case_with_numbers_123 is a valid name. The name must begin with a letter. If the name does not comply, you get a 404 error when you try to run the pipeline.
-3. Complete the definition of the variable, including choosing a variable type and input type.
-4. Click Add to add the variable to the list. It is now available for use in a node.
-
-
-
-Parent topic:[Creating a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-create.html)
-"
-484AF9BAF43AC6BCFDFAF7B0D353CCDF119033DF_0,484AF9BAF43AC6BCFDFAF7B0D353CCDF119033DF," Getting started with the Watson Pipelines editor
-
-The Watson Pipelines editor is a graphical canvas where you can drag and drop nodes that you connect together into a pipeline for automating machine model operations.
-
-You can open the Pipelines editor by creating a new Pipelines asset or editing an existing Pipelines asset. To create a new asset in your project from the Assets tab, click New asset > Automate model lifecycle. To edit an existing asset, click the pipeline asset name on the Assets tab.
-
-The canvas opens with a set of annotated tools for you to use to create a pipeline. The canvas includes the following components:
-
-
-
-
-
-* The node palette provides nodes that represent various actions for manipulating assets and altering the flow of control in a pipeline. For example, you can add nodes to create assets such as data files, AutoAI experiments, or deployment spaces. You can configure node actions based on conditions if files import successfully, such as feeding data into a notebook. You can also use nodes to run and update assets. As you build your pipeline, you connect the nodes, then configure operations on the nodes to create the pipeline. These pipelines create a dynamic flow that addresses specific stages of the machine learning lifecycle.
-* The toolbar includes shortcuts to options related to running, editing, and viewing the pipeline.
-* The parameters pane provides context-sensitive options for configuring the elements of your pipeline.
-
-
-
-"
-484AF9BAF43AC6BCFDFAF7B0D353CCDF119033DF_1,484AF9BAF43AC6BCFDFAF7B0D353CCDF119033DF," The toolbar
-
-
-
-Use the Pipeline editor toolbar to:
-
-
-
-* Run the pipeline as a trial run or a scheduled job
-* View the history of pipeline runs
-* Cut, copy, or paste canvas objects
-* Delete a selected node
-* Drop a comment onto the canvas
-* Configure global objects, such as pipeline parameters or user variables
-* Manage default settings
-* Arrange nodes vertically
-* View last saved timestamp
-* Zoom in or out
-* Fit the pipeline to the view
-* Show or hide global messages
-
-
-
-Hover over an icon on the toolbar to view the shortcut text.
-
-"
-484AF9BAF43AC6BCFDFAF7B0D353CCDF119033DF_2,484AF9BAF43AC6BCFDFAF7B0D353CCDF119033DF," The node palette
-
-The node palette provides the objects that you need to create an end-to-end pipeline. Click a top-level node in the palette to see the related nodes.
-
-
-
- Node category Description Node type
-
- Copy Use nodes to copy an asset or file, import assets, or export assets Copy assets Export assets Import assets
- Create Create assets or containers for assets Create AutoAI experiment Create AutoAI time series experiment Create batch deployment Create data asset Create deployment space Create online deployment
- Wait Specify node-level conditions for advancing the pipeline run Wait for all results Wait for any result Wait for file
- Control Specify error handling Loop in parallel Loop in sequence Set user variables Terminate pipeline
- Update Update the configuration settings for a space, asset, or job. Update AutoAI experiment Update batch deployment Update deployment space Update online deployment
- Delete Remove a specified asset, job, or space. Delete AutoAI experiment Delete batch deployment Delete deployment space Delete online deployment
- Run Run an existing or ad hoc job. Run AutoAI experiment Run Bash script Run batch deployment Run Data Refinery job Run notebook job Run pipeline job Run Pipelines component job Run SPSS Modeler job
-
-
-
-"
-484AF9BAF43AC6BCFDFAF7B0D353CCDF119033DF_3,484AF9BAF43AC6BCFDFAF7B0D353CCDF119033DF," The parameters pane
-
-Double-click a node to edit its configuration options. Depending on the type, a node can define various input and output options or even allow the user to add inputs or outputs dynamically. You can define the source of values in various ways. For example, you can specify that the source of value for ""ML asset"" input for a batch deployment must be the output from a run notebook node.
-
-For more information on parameters, see [Configuring pipeline components](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-config.html).
-
-"
-484AF9BAF43AC6BCFDFAF7B0D353CCDF119033DF_4,484AF9BAF43AC6BCFDFAF7B0D353CCDF119033DF," Next steps
-
-
-
-* [Planning a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-planning.html)
-* [Explore the sample pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-sample.html)
-* [Create a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-create.html)
-
-
-
-Parent topic:[Watson Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-overview.html)
-"
-28243D1C0B8BCF04FE3556990D40D1A31F4CB58D_0,28243D1C0B8BCF04FE3556990D40D1A31F4CB58D," Manage default settings
-
-You can manage the global settings of your IBM Watson Pipelines such as a default error policy and default rules for node caching.
-
-Global settings apply to all nodes in the pipeline unless local node settings overwrite them. To update global settings, click the Manage default settings icon  on the toolbar. You can configure:
-
-
-
-* [Error policy](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-global-settings.html?context=cdpaas&locale=enerr-pol)
-* [Node caching](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-global-settings.html?context=cdpaas&locale=ennode-cache)
-
-
-
-"
-28243D1C0B8BCF04FE3556990D40D1A31F4CB58D_1,28243D1C0B8BCF04FE3556990D40D1A31F4CB58D," Setting global error policy
-
-You can define the behavior of Pipelines when an error occurs.
-
-
-
-* Fail pipeline on error stops the flow and initiates an error-handling flow.
-* Continue pipeline on error tries to continue running the pipeline.
-
-
-
-"
-28243D1C0B8BCF04FE3556990D40D1A31F4CB58D_2,28243D1C0B8BCF04FE3556990D40D1A31F4CB58D," Error handling
-
-You can configure the behavior of Pipelines for error handling.
-
-
-
-* Create custom-error handling response: Customize an error-handling response. Add an error handling node to the canvas so you can configure a custom error response. The response applies to all configured nodes to fail when an error occurs.
-
-
-
-* Show icon on nodes linked to error handling pipeline: An icon flags a node with an error to help debug the flow.
-
-
-
-
-
-To learn more about error handling, see [Managing pipeline errors](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-errors.html)
-
-"
-28243D1C0B8BCF04FE3556990D40D1A31F4CB58D_3,28243D1C0B8BCF04FE3556990D40D1A31F4CB58D," Setting node caches
-
-Manual caching for nodes sets the default for how the pipeline caches and stores information. You can override these settings for individual nodes.
-
-"
-28243D1C0B8BCF04FE3556990D40D1A31F4CB58D_4,28243D1C0B8BCF04FE3556990D40D1A31F4CB58D," Default cache usage frequency
-
-You can change the following cache settings:
-
-"
-28243D1C0B8BCF04FE3556990D40D1A31F4CB58D_5,28243D1C0B8BCF04FE3556990D40D1A31F4CB58D," Caching method
-
-Choose whether to enable automatic caching for all nodes or choose to manually set cache conditions for specific nodes.
-
-
-
-* Enable automatic caching for all nodes (recommended)
-All nodes that support caching enable it by default. Setting Creation Mode or Copy Mode in your node's settings to Overwrite automatically disables cache, if the node supports these setting parameters.
-* Enable caching for specific nodes in the node properties panel.
-In individual nodes, you can select Create data cache at this node in Output to allow caching for individual nodes. A save icon appears on nodes that uses this feature.
-
-
-
-"
-28243D1C0B8BCF04FE3556990D40D1A31F4CB58D_6,28243D1C0B8BCF04FE3556990D40D1A31F4CB58D," Cache usage
-
-Choose the conditions for using cached data.
-
-
-
-* Do not use cache
-* Always use cache
-* Use cache when all selected conditions are met
-
-
-
-* Retrying from a previous failed run
-* Input values for the current pipeline are unchanged from previous run
-* Pipeline version is unchanged from previous run
-
-
-
-
-
-To view and download your cache data, see Run tracker in your flow. You can download the results by opening a preview of the node's cache and clicking the download icon.
-
-"
-28243D1C0B8BCF04FE3556990D40D1A31F4CB58D_7,28243D1C0B8BCF04FE3556990D40D1A31F4CB58D," Resetting the cache
-
-If your cache was enabled, you can choose to reset your cache when you run a Pipelines job. When you click Run again, you can select Clear pipeline cache in Define run settings. By choosing this option, you are overriding the default cache settings to reset the cache for the current run. However, the pipeline still creates cache for subsequent runs while cache is enabled.
-
-"
-28243D1C0B8BCF04FE3556990D40D1A31F4CB58D_8,28243D1C0B8BCF04FE3556990D40D1A31F4CB58D," Managing your Pipelines settings
-
-Configure other global settings for your Pipelines asset.
-
-"
-28243D1C0B8BCF04FE3556990D40D1A31F4CB58D_9,28243D1C0B8BCF04FE3556990D40D1A31F4CB58D," Autosave
-
-Choose to automatically save your current Pipelines canvas at a selected frequency. Only changes that impact core pipeline flow are saved.
-
-Parent topic:[IBM Watson Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-overview.html)
-"
-606EF22CF35AF0EDC961776FB893B07A880F11D4_0,606EF22CF35AF0EDC961776FB893B07A880F11D4," IBM Watson Pipelines
-
-The Watson Pipelines editor provides a graphical interface for orchestrating an end-to-end flow of assets from creation through deployment. Assemble and configure a pipeline to create, train, deploy, and update machine learning models and Python scripts.
-
-To design a pipeline that you drag nodes onto the canvas, specify objects and parameters, then run and monitor the pipeline.
-
-"
-606EF22CF35AF0EDC961776FB893B07A880F11D4_1,606EF22CF35AF0EDC961776FB893B07A880F11D4," Automating the path to production
-
-Putting a model into a product is a multi-step process. Data must be loaded and processed, models must be trained and tuned before they are deployed and tested. Machine learning models require more observation, evaluation, and updating over time to avoid bias or drift.
-
-
-
-Automating the pipeline makes it simpler to build, run, and evaluate a model in a cohesive way, to shorten the time from conception to production. You can assemble the pipeline, then rapidly update and test modifications. The Pipelines canvas provides tools to visualize the pipeline, customize it at run time with pipeline parameter variables, and then run it as a trial job or on a schedule.
-
-The Pipelines editor also allows for more cohesive collaboration between a data scientist and a ModelOps engineer. A data scientist can create and train a model. A ModelOps engineer can then automate the process of training, deploying, and evaluating the model after it is published to a production environment.
-
-"
-606EF22CF35AF0EDC961776FB893B07A880F11D4_2,606EF22CF35AF0EDC961776FB893B07A880F11D4," Next steps
-
-[Add a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-get-started.html) to your project and get to know the canvas tools.
-
-"
-606EF22CF35AF0EDC961776FB893B07A880F11D4_3,606EF22CF35AF0EDC961776FB893B07A880F11D4," Additional resources
-
-For more information, see this blog post about [automating the AI lifecycle with a pipeline flow](https://yairschiff.medium.com/automating-the-ai-lifecycle-with-ibm-watson-studio-orchestration-flow-4450f1d725d6).
-"
-1BD28F052373C2E70130C7539D399D76F9D2AAFE_0,1BD28F052373C2E70130C7539D399D76F9D2AAFE," Accessing the components in your pipeline
-
-When you use a pipeline to automate a flow, you must have access to all of the elements in the pipeline. Make sure that you create and run pipelines with the proper access to all assets, projects, and spaces used in the pipeline. Collaborators who run the pipeline must also be able to access the pipeline components.
-
-"
-1BD28F052373C2E70130C7539D399D76F9D2AAFE_1,1BD28F052373C2E70130C7539D399D76F9D2AAFE," Managing pipeline credentials
-
-To run a job, the pipeline must have access to IBM Cloud credentials. Typically, a pipeline uses your personal IBM Cloud API key to execute long-running operations in the pipeline without disruption. If credentials are not available when you create the job, you are prompted to supply an API key or create a new one.
-
-To generate an API key from your IBM Cloud user account, go to [Manage access and users - API Keys](https://cloud.ibm.com/iam/apikeys) and create or select an API key for your user account.
-
-You can also generate and rotate API keys from Profile and settings > User API key. For more information, see [Managing the user API key](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/admin-apikeys.html).
-
-Alternatively, you can request that a key is generated for the pipeline. In either scenario, name and copy the key, protecting it as you would a password.
-
-"
-1BD28F052373C2E70130C7539D399D76F9D2AAFE_2,1BD28F052373C2E70130C7539D399D76F9D2AAFE," Adding assets to a pipeline
-
-When you create a pipeline, you add assets, such as data, notebooks, deployment jobs, or Data Refinery jobs to the pipeline to orchestrate a sequential process. The strongly recommended method for adding assets to a pipeline is to collect the assets in the project containing the pipeline and use the asset browser to select project assets for the pipeline.
-
-Attention: Although you can include assets from other projects, doing so can introduce complexities and potential problems in your pipeline and could be prohibited in a future release. The recommended practice is to use assets from the current project.
-
-Parent topic:[Getting started with Watson Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-get-started.html)
-"
-F5086D0B6258FEF503CB3219F427FFBFF73135E1_0,F5086D0B6258FEF503CB3219F427FFBFF73135E1," Programming IBM Watson Pipelines
-
-You can program in a pipeline by using a notebook, or running Bash scripts in a pipeline.
-
-"
-F5086D0B6258FEF503CB3219F427FFBFF73135E1_1,F5086D0B6258FEF503CB3219F427FFBFF73135E1," Programming with Bash scripts
-
-[Run Bash scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-config.htmlrun-bash) in a pipeline to compute or process data as part of the flow.
-
-"
-F5086D0B6258FEF503CB3219F427FFBFF73135E1_2,F5086D0B6258FEF503CB3219F427FFBFF73135E1," Programming with notebooks
-
-You can use a notebook to run an end-to-end pipeline or to run parts of a pipeline, such as model training.
-
-
-
-* For details on creating notebooks and for links to sample notebooks, see [Notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-editor.html).
-* For details on running a notebook as a pipeline job, see [Run notebook job](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-config.htmlrun-notebook).
-
-
-
-"
-F5086D0B6258FEF503CB3219F427FFBFF73135E1_3,F5086D0B6258FEF503CB3219F427FFBFF73135E1," Using the Python client
-
-Use the [Watson Pipelines Python client](https://pypi.org/project/ibm-watson-pipelines/) for working with pipelines in a notebook.
-
-To install the library, use pip to install the latest package of ibm-watson-pipelines in your coding environment. For example, run the following code in your notebook environment or console.
-
-!pip install ibm-watson-pipelines
-
-Use the client documentation for syntax and descriptions for commands that access pipeline components.
-
-"
-F5086D0B6258FEF503CB3219F427FFBFF73135E1_4,F5086D0B6258FEF503CB3219F427FFBFF73135E1," Go further
-
-To learn more about how to orchestrate external tasks efficiently, see [Making tasks more efficiently with Tekton](https://medium.com/@rafal.bigaj/tekton-and-friends-how-to-orchestrate-external-tasks-efficiently-3fcacf882f6d), a key continuous delivery framework used for Pipelines.
-
-Parent topic:[Creating a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-create.html)
-"
-AE57C56703B39C9097516D1466B70A3DE57AA1C4_0,AE57C56703B39C9097516D1466B70A3DE57AA1C4," Running a pipeline
-
-You can run a pipeline in real time to test a flow as you work. When you are satisfied with a pipeline, you can then define a job to run a pipeline with parameters or to run on a schedule.
-
-To run a pipeline:
-
-
-
-1. Click Run pipeline on the toolbar.
-2. Choose an option:
-
-
-
-* Trial run runs the pipeline without creating a job. Use this to test a pipeline.
-* Create a job presents you with an interface for configuring and scheduling a job to run the pipeline. You can save and reuse run details, such as pipeline parameters, for a version of your pipeline.
-* View history compares all of your runs over time.
-
-
-
-
-
-You must make sure requirements are met when you run a pipeline. For example, you might need a deployment space or an API key to run some of your nodes before you can begin.
-
-"
-AE57C56703B39C9097516D1466B70A3DE57AA1C4_1,AE57C56703B39C9097516D1466B70A3DE57AA1C4," Using a job run name
-
-You can optionally specify a job run name when running a pipeline flow or a pipeline job and see the different jobs in the Job details dashboard. Otherwise, you can also assign a local parameter DSJobInvocationId to either a Run pipeline job node or Run DataStage job node.
-
-If both the parameter DSJobInvocationId and job name of the node are set, DSJobInvocationId will be used. If neither are set, the default value ""job run"" is used.
-
- Notes on running a pipeline
-
-
-
-* When you run a pipeline from a trial run or a job, click the node output to view the results of a successful run. If the run fails, error messages and logs are provided to help you correct issues.
-* Errors in the pipeline are flagged with an error badge. Open the node or condition with an error to change or complete the configuration.
-* View the consolidated logs to review operations or identify issues with the pipeline.
-
-
-
-"
-AE57C56703B39C9097516D1466B70A3DE57AA1C4_2,AE57C56703B39C9097516D1466B70A3DE57AA1C4," Creating a pipeline job
-
-The following are all the configuration options for defining a job to run the pipeline.
-
-
-
-1. Name your pipeline job and choose a version.
-2. Input your IBM API key.
-3. (Optional) Schedule your job by toggling the Schedule button.
-
-
-
-1. Choose the start date and fine tune your schedule to repeat by any minute, hour, day, week, month.
-2. Add exception days to prevent the job from running on certain days.
-3. Add a time for terminating the job.
-
-
-
-4. (Optional) Enter the pipeline parameters needed for your job, for example assigning a space to a deployment node. To see how to create a pipeline parameter, see Defining pipeline parameters in [Creating a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-create.html).
-5. (Optional) Choose if you want to be notified of pipeline job status after running.
-
-
-
-"
-AE57C56703B39C9097516D1466B70A3DE57AA1C4_3,AE57C56703B39C9097516D1466B70A3DE57AA1C4," Saving a version of a pipeline
-
-You can save a version of a pipeline and revert to it at a later time. For example, if you want to preserve a particular configuration before you make changes, save a version. You can revert the pipeline to a previous version. When you share a pipeline, the latest version is used.
-
-To save a version:
-
-
-
-1. Click the Versions icon on the toolbar.
-2. In the Versions pane, click Save version to create a new version with a version number incremented by 1.
-
-
-
-When you run the pipeline, you can choose from available saved versions.
-
-Note: You cannot delete a saved version.
-
-"
-AE57C56703B39C9097516D1466B70A3DE57AA1C4_4,AE57C56703B39C9097516D1466B70A3DE57AA1C4," Exporting pipeline assets
-
-When you export project or space assets to import them into a deployment space, you can include pipelines in the list of assets you export to a zip file and then import into a project or space.
-
-Importing a pipeline into a space extends your MLOps capabilities to run jobs for various assets from a space, or to move all jobs from a pre-production to a production space. Note these considerations for working with pipelines in a space:
-
-
-
-* Pipelines in a space are read-only. You cannot edit the pipeline. You must edit the pipeline from the project, then export the updated pipeline and import it into the space.
-* Although you cannot edit the pipeline in a space, you can create new jobs to run the pipeline. You can also use parameters to assign values for jobs so you can have different values for each job you configure.
-* If there is already a pipeline in the space with the same name, the pipeline import will fail.
-* If there is no pipeline in the space with the same name, a pipeline with version 1 is created in the space.
-* Any supporting assets or references required to run a pipeline job must also be part of the import package or the job will fail.
-* If your pipeline contains assets or tools not supported in a space, such as an SPSS modeler job, the pipeline job will fail.
-
-
-
-Parent topic:[IBM Watson Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-overview.html)
-"
-2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2_0,2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2," Run the built-in sample pipeline
-
-You can view and run a built-in sample pipeline that uses sample data to learn how to automate machine learning flows in Watson Pipelines.
-
-"
-2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2_1,2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2," What's happening in the sample pipeline?
-
-The sample pipeline gets training data, trains a machine learning model by using the AutoAI tool, and selects the best pipeline to save as a model. The model is then copied to a deployment space where it is deployed.
-
-The sample illustrates how you can automate an end-to-end flow to make the lifecycle easier to run and monitor.
-
-The sample pipeline looks like this:
-
-
-
-The tutorial steps you through this process:
-
-
-
-1. [Prerequisites](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-sample.html?context=cdpaas&locale=enset-up)
-2. [Preview creating and running the sample pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-sample.html?context=cdpaas&locale=enpreview)
-3. [Creating the sample pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-sample.html?context=cdpaas&locale=encreate-sample)
-4. [Running the sample pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-sample.html?context=cdpaas&locale=enrun-flow)
-5. [Reviewing the results](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-sample.html?context=cdpaas&locale=enreview-results)
-"
-2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2_2,2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2,"6. [Exploring the sample nodes and configuration](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-sample.html?context=cdpaas&locale=enexplore-sample)
-
-
-
-"
-2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2_3,2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2," Prerequisites
-
-To run this sample, you must first create:
-
-
-
-* A [project](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/projects.html), where you can run the sample pipeline.
-* A [deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-create.html), where you can view and test the results. The deployment space is required to run the sample pipeline.
-
-
-
-"
-2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2_4,2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2," Preview creating and running the sample pipeline
-
-Watch this video to see how to create and run a sample pipeline.
-
-This video provides a visual method to learn the concepts and tasks in this documentation.
-
-"
-2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2_5,2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2," Creating the sample pipeline
-
-Create the sample pipeline in the Pipelines editor.
-
-
-
-1. Open the project where you want to create the pipeline.
-2. From the Assets tab, click New asset > Automate model lifecycle.
-3. Click the Samples tab, and select the Orchestrate an AutoAI experiment.
-4. Enter a name for the pipeline. For example, enter Bank marketing sample.
-5. Click Create to open the canvas.
-
-
-
-"
-2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2_6,2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2," Running the sample pipeline
-
-To run the sample pipeline:
-
-
-
-1. Click Run pipeline on the canvas toolbar, then choose Trial run.
-2. Select a deployment space when prompted to provide a value for the deployment_space pipeline parameter.
-
-
-
-1. Click Select Space.
-2. Expand the Spaces section.
-3. Select your deployment space.
-4. Click Choose.
-
-
-
-3. Provide an API key if it is your first time to run a pipeline. Pipeline assets use your personal IBM Cloud API key to run operations securely without disruption.
-
-
-
-* If you have an existing API key, click Use existing API key, paste the API key, and click Save.
-* If you don't have an existing API key, click Generate new API key, provide a name, and click Save. Copy the API key, and then save the API key for future use. When you're done, click Close.
-
-
-
-4. Click Run to start the pipeline.
-
-
-
-"
-2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2_7,2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2," Reviewing the results
-
-When the pipeline run completes, you can view the output to see the results.
-
-
-
-Open the deployment space that you specified as part of the pipeline. You see the new deployment in the space:
-
-
-
-If you want to test the deployment, use the deployment space Test page to submit payload data in JSON format and get a score back. For example, click the JSON tab and enter this input data:
-
-{""input_data"": [{""fields"": ""age"",""job"",""marital"",""education"",""default"",""balance"",""housing"",""loan"",""contact"",""day"",""month"",""duration"",""campaign"",""pdays"",""previous"",""poutcome""],""values"": ""30"",""unemployed"",""married"",""primary"",""no"",""1787"",""no"",""no"",""cellular"",""19"",""oct"",""79"",""1"",""-1"",""0"",""unknown""]]}]}
-
-When you click Predict, the model generates output with a confidence score for the prediction of whether a customer subscribes to a term deposit promotion.
-
-
-
-In this case, the prediction of ""no"" is accompanied by a confidence score of close to 95%, predicting that the client will most likely not subscribe to a term deposit.
-
-"
-2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2_8,2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2," Exploring the sample nodes and configuration
-
-Get a deeper understanding of how the sample nodes were configured to work in concert in the pipeline sample.
-
-"
-2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2_9,2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2," Viewing the pipeline parameter
-
-A pipeline parameter specifies a setting for the entire pipeline. In the sample pipeline, a pipeline parameter is used to specify a deployment space where the model that is saved from the AutoAI experiment is stored and deployed. You are prompted to select the deployment space the pipeline parameter links to.
-
-Click the Global objects icon  on the canvas toolbar to view or create pipeline parameters. In the sample pipeline, the pipeline parameter is named deployment_space and is of type Space. Click the name of the pipeline parameter to view the details. In the sample, the pipeline parameter is used with the Create data file node and the Create AutoAI experiment node.
-
-
-
-"
-2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2_10,2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2," Loading the training data for the AutoAI experiment
-
-In this step, a Create data file node is configured to access the data set for the experiment. Click the node to view the configuration. The data file is bank-marketing-data.csv, which provides sample data to predict whether a bank customer signs up for a term deposit. The data rests in a Cloud Object Storage bucket and can be refreshed to keep the model training up to date.
-
-
-
- Option Value
-
- File The location of the data asset for training the AutoAI experiment. In this case, the data file is in a project.
- File path The name of the asset, bank-marketing-data.csv.
- Target scope For this sample, the target is a deployment space.
-
-
-
-"
-2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2_11,2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2," Creating the AutoAI experiment
-
-The node to Create AutoAI experiment is configured with these values:
-
-
-
- Option Value
-
- AutoAI experiment name onboarding-bank-marketing-prediction
- Scope For this sample, the target is a deployment space.
- Prediction type binary
- Prediction column (label) y
- Positive class yes
- Training data split ration 0.9
- Algorithms to include GradientBoostingClassifierEstimator XGBClassifierEstimator
- Algorithms to use 1
- Metric to optimize ROC AUC
- Optimize metric (optional) default
- Hardware specification (optional) default
- AutoAI experiment description This experiment uses a sample file, which contains text data that is collected from phone calls to a Portuguese bank in response to a marketing campaign. The classification goal is to predict whether a client subscribes to a term deposit, represented by variable y.
- AutoAI experiment tags (optional) none
- Creation mode (optional) default
-
-
-
-Those options define an experiment that uses the bank marketing data to predict whether a customer is likely to enroll in a promotion.
-
-"
-2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2_12,2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2," Running the AutoAI experiment
-
-In this step, the Run AutoAI experiment node runs the AutoAI experiment onboarding-bank-marketing-prediction, trains the pipelines, then saves the best model.
-
-
-
- Option Value
-
- AutoAI experiment Takes the output from the Create AutoAI node as the input to run the experiment.
- Training data assets Takes the output from the Create Data File node as the training data input for the experiment.
- Model count 1
- Holdout data asset (optional) none
- Models count (optional) 3
- Run name (optional) none
- Model name prefix (optional) none
- Run description (optional) none
- Run tags (optional) none
- Creation mode (optional) default
- Error policy (optional) default
-
-
-
-"
-2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2_13,2EE7BF839FFA16EC1A7F9ED82662EFE539FD29C2," Deploying the model to a web service
-
-The Create Web deployment node creates an online deployment that is named onboarding-bank-marketing-prediction-deployment so you can deliver data and get predictions back in real time from the REST API endpoint.
-
-
-
- Option Value
-
- ML asset Takes the best model output from the Run AutoAI node as the input to create the deployment.
- Deployment name onboarding-bank-marketing-prediction-deployment
-
-
-
-Parent topic:[IBM Watson Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-overview.html)
-"
-BB961AB67F88B50475329FCD1EE2F64137480426_0,BB961AB67F88B50475329FCD1EE2F64137480426," Run a sample pipeline to compare models
-
-Download a pre-populated project with the assets you need to run a sample pipeline. The pipeline compares two AutoAI experiments and compares the output, selecting the best model and deploying it as a Web service.
-
-The Train AutoAI and reference model sample creates a pre-populated project with the assets you need to run a pre-built pipeline that trains models using a sample data set. After performing some set up and configuration tasks, you can run the sample pipeline to automate the following sequence:
-
-
-
-* Copy sample assets into a space.
-* Run a notebook and an AutoAI experiment simultaneously, on a common training data set.
-* Run another notebook to compare the results from the previous nodes and select the best model, ranked for accuracy.
-* Copy the winning model to a space and create a web service deployment for the selected model.
-
-
-
-After the run completes, you can inspect the output in the pipeline editor and then switch to the associated deployment space to [view and test the resulting deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-sample2.html?context=cdpaas&locale=enview-deploy).
-
-"
-BB961AB67F88B50475329FCD1EE2F64137480426_1,BB961AB67F88B50475329FCD1EE2F64137480426," Learning goals
-
-After running this sample you will know how to:
-
-
-
-* Configure a Watson Pipeline
-* Run a Watson Pipeline
-
-
-
-"
-BB961AB67F88B50475329FCD1EE2F64137480426_2,BB961AB67F88B50475329FCD1EE2F64137480426," Downloading the sample
-
-Follow these steps to create the sample project from the Samples so you can test the capabilities of IBM Watson Pipelines:
-
-
-
-1. Open the [Train AutoAI and reference model sample](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/496c1220779cbe5cccc063534600789f) from the Samples.
-2. Click Create project to create the project.
-3. Open the project and follow the instructions on the Readme page to set up the pipeline assets.
-
-
-
-"
-BB961AB67F88B50475329FCD1EE2F64137480426_3,BB961AB67F88B50475329FCD1EE2F64137480426," The sample pipeline components
-
-The sample project includes:
-
-
-
-* Pre-built sample Watson Pipeline
-* Data set called german_credit_data_biased_training.csv used for training a model to predict credit risk
-* Data set called german_credit_test_data.csv used to test the deployed model
-* Notebook called reference-model-training-notebook that trains an AutoAI experiment and saves the best pipeline as a model
-* Notebook called select-winning-model that compares the models and chooses the best to save to the designated deployment space
-
-
-
-"
-BB961AB67F88B50475329FCD1EE2F64137480426_4,BB961AB67F88B50475329FCD1EE2F64137480426," Getting started with the sample
-
-To run the sample pipeline, you will need to perform some set-up tasks:
-
-
-
-1. Create a deployment space, for example, dev-space which you'll need when you run the notebooks. From the navigation menu, select Deployments > View All Spaces > New deployment space. Fill in the required fields.
-
-Note:Make sure you associate a Watson Machine Learning instance with the space or the pipeline run will fail.
-2. From the Assets page of the sample project, open the reference-model-training-notebook and follow the steps in the Set up the environment section to acquire and insert an api_key variable as your credentials.
-3. After inserting your credentials, click File > Save as version to save the updated notebook to your project.
-4. Do the same for the select-winning-model notebook to add credentials and save the updated version of the notebook.
-
-
-
-"
-BB961AB67F88B50475329FCD1EE2F64137480426_5,BB961AB67F88B50475329FCD1EE2F64137480426," Exploring the pipeline
-
-After you complete the set up tasks, open the sample pipeline On-boarding - Train AutoAI and reference model and select the best from the Assets page of the sample project.
-
-You will see the sample pipeline:
-
-
-
-"
-BB961AB67F88B50475329FCD1EE2F64137480426_6,BB961AB67F88B50475329FCD1EE2F64137480426," Viewing node configuration
-
-As you explore the sample pipeline, double-click on the various nodes to view their configuration. For example, if you click on the first node for copying an asset, you will see this configuration:
-
-
-
-Note that the node that will copy the data asset to a deployment space is configured using a pipeline parameter. The pipeline parameter creates a placeholder for the space you created to use for this pipeline. When you run the pipeline, you are prompted to choose the space.
-
-"
-BB961AB67F88B50475329FCD1EE2F64137480426_7,BB961AB67F88B50475329FCD1EE2F64137480426," Running the pipeline
-
-When you are ready to run the pipeline, click the Run icon and choose Trial job. You are prompted to choose the deployment space for the pipeline and create or supply an API key for the pipeline if one is not already available.
-
-As the pipeline runs, you will see status notifications about the progress of the run. Nodes that are processed successfully are marked with a checkmark.
-
-
-
-"
-BB961AB67F88B50475329FCD1EE2F64137480426_8,BB961AB67F88B50475329FCD1EE2F64137480426," Viewing the output
-
-When the job completes, click Pipeline output for the run to see a summary of pipeline processes. You can click to expand each section and view the details for each operation.
-
-
-
-"
-BB961AB67F88B50475329FCD1EE2F64137480426_9,BB961AB67F88B50475329FCD1EE2F64137480426," Viewing the deployment in your space
-
-After you are done exploring the pipeline and its output, you can view the assets that were created in the space you designated for the pipeline.
-
-Open the space. You can see that the models and training data were copied to the space. The winning model is tagged as selected_model.
-
-
-
-"
-BB961AB67F88B50475329FCD1EE2F64137480426_10,BB961AB67F88B50475329FCD1EE2F64137480426," Viewing the deployment
-
-The last step of the pipeline created a web service deployment for the selected model. Click the Deployments tab to view the deployment.
-
-
-
-"
-BB961AB67F88B50475329FCD1EE2F64137480426_11,BB961AB67F88B50475329FCD1EE2F64137480426," Testing the deployment
-
-You can test the deployment to see the predictions the model will generate.
-
-
-
-1. Click the deployment name to view the details.
-2. Click the Test tab.
-3. Enter this JSON data into the Input form. The payload (input) must match the schema for the model but should not include the prediction column.
-
-
-
-{""input_data"":[{
-""fields"": ""CheckingStatus"",""LoanDuration"",""CreditHistory"",""LoanPurpose"",""LoanAmount"",""ExistingSavings"",""EmploymentDuration"",""InstallmentPercent"",""Sex"",""OthersOnLoan"",""CurrentResidenceDuration"",""OwnsProperty"",""Age"",""InstallmentPlans"",""Housing"",""ExistingCreditsCount"",""Job"",""Dependents"",""Telephone"",""ForeignWorker""],
-""values"": ""no_checking"",28,""outstanding_credit"",""appliances"",5990,""500_to_1000"",""greater_7"",5,""male"",""co-applicant"",3,""car_other"",55,""none"",""free"",2,""skilled"",2,""yes"",""yes""]]
-}]}
-
-Clicking Predict returns this prediction, indicating a low credit risk for this customer.
-
-
-
-"
-BB961AB67F88B50475329FCD1EE2F64137480426_12,BB961AB67F88B50475329FCD1EE2F64137480426," Next steps
-
-[Create a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-create.html) using your own assets.
-
-Parent topic:[Watson Pipelines](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-overview.html)
-"
-D1DB4F3B084CB401795C925F280207CBCB3D94AA_0,D1DB4F3B084CB401795C925F280207CBCB3D94AA," Storage and data access for IBM Watson Pipelines
-
-Learn where files and data are stored outside of IBM Watson Pipelines and use it in a Pipelines.
-
-"
-D1DB4F3B084CB401795C925F280207CBCB3D94AA_1,D1DB4F3B084CB401795C925F280207CBCB3D94AA," Access data on Cloud Object Storage
-
-File storage refers to the repository where you store assets to use with the pipeline. It is a Cloud Object Storage bucket that is used as storage for a particular scope, such as a project or deployment space.
-
-A storage location is referenced by a Cloud Object Storage data connection in its scope. Refer to a file by pointing to a location such as an object key in a dedicated, self-managed bucket.
-
-Parent topic:[Creating a pipeline](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-create.html)
-"
-CE13AE6812F1E2CA6AD429D4B01AF25F9F398148_0,CE13AE6812F1E2CA6AD429D4B01AF25F9F398148," Deploying models with Watson Machine Learning
-
-Using IBM Watson Machine Learning, you can deploy models, scripts, and functions, manage your deployments, and prepare your assets to put into production to generate predictions and insights.
-
-This graphic illustrates a typical process for a machine learning model. After you build and train a machine learning model, use Watson Machine Learning to deploy the model, manage the input data, and put your machine learning assets to use.
-
-
-
-"
-CE13AE6812F1E2CA6AD429D4B01AF25F9F398148_1,CE13AE6812F1E2CA6AD429D4B01AF25F9F398148," IBM Watson Machine Learning architecture and services
-
-Watson Machine Learning is a service on IBM Cloud with features for training and deploying machine learning models and neural networks. Built on a scalable, open source platform based on Kubernetes and Docker components, Watson Machine Learning enables you to build, train, deploy, and manage machine learning and deep learning models.
-
-"
-CE13AE6812F1E2CA6AD429D4B01AF25F9F398148_2,CE13AE6812F1E2CA6AD429D4B01AF25F9F398148," Deploying and managing models with Watson Machine Learning
-
-Watson Machine Learning supports popular frameworks, including: TensorFlow, Scikit-Learn, and PyTorch to build and deploy models. For a list of supported frameworks, refer to [Supported frameworks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/pm_service_supported_frameworks.html).
-
-To build and train a model:
-
-
-
-* Use one of the tools that are listed in [Analyzing data and building models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html).
-* [Import a model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html) that you built and trained outside of Watson Studio.
-
-
-
-"
-CE13AE6812F1E2CA6AD429D4B01AF25F9F398148_3,CE13AE6812F1E2CA6AD429D4B01AF25F9F398148," Deployment infrastructure
-
-
-
-* [Deploy trained models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html) as a web service or for batch processing.
-* [Deploy Python functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-py-function.html) to simplify AI solutions.
-
-
-
-"
-CE13AE6812F1E2CA6AD429D4B01AF25F9F398148_4,CE13AE6812F1E2CA6AD429D4B01AF25F9F398148," Programming Interfaces
-
-
-
-* Use [Python client library](https://ibm.github.io/watson-machine-learning-sdk/) to work with all of your Watson Machine Learning assets in a notebook.
-* Use [REST API](https://cloud.ibm.com/apidocs/machine-learning) to call methods from the base URLs for the Watson Machine Learning API endpoints.
-* When you call the API, use the URL and add the path for each method to form the complete API endpoint for your requests. For details on checking endpoints, refer to [Looking up a deployment endpoint](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-online.html).
-
-
-
-Parent topic:[Deploying and managing models](https://dataplatform.cloud.ibm.com/docs/content/wsj/wmls/wmls-deploy-overview.html)
-"
-577964B0C132F5EA793054C3FF67417DDA6511D3_0,577964B0C132F5EA793054C3FF67417DDA6511D3," Watson Machine Learning Python client samples and examples
-
-Review and use sample Jupyter Notebooks that use Watson Machine Learning Python library to demonstrate machine learning features and techniques. Each notebook lists learning goals so you can find the one that best meets your goals.
-
-"
-577964B0C132F5EA793054C3FF67417DDA6511D3_1,577964B0C132F5EA793054C3FF67417DDA6511D3," Training and deploying models from notebooks
-
-If you choose to build a machine learning model in a notebook, you must be comfortable with coding in a Jupyter Notebook. A Jupyter Notebook is a web-based environment for interactive computing. You can run small pieces of code that process your data, and then immediately view the results of your computation. Using this tool, you can assemble, test, and run all of the building blocks you need to work with data, save the data to Watson Machine Learning, and deploy the model.
-
-"
-577964B0C132F5EA793054C3FF67417DDA6511D3_2,577964B0C132F5EA793054C3FF67417DDA6511D3," Learn from sample notebooks
-
-Many ways exist to build and train models and then deploy them. Therefore, the best way to learn is to look at annotated samples that step you through the process by using different frameworks. Review representative samples that demonstrate key features.
-
-The samples are built by using the V4 version of the Watson Machine Learning [Python client library](https://ibm.github.io/watson-machine-learning-sdk/).
-
-Video disclaimer: Some minor steps and graphical elements in the videos might differ from your deployment.
-
-Watch this video to learn how to train, deploy, and test a machine learning model in a Jupyter Notebook. This video mirrors the Use scikit-learn to recognize hand-written digits found in the Deployment samples table.
-
-This video provides a visual method to learn the concepts and tasks in this documentation.
-
-Watch this video to learn how to test a model that was created with AutoAI by using the Watson Machine Learning APIs in Jupyter Notebook.
-
-This video provides a visual method to learn the concepts and tasks in this documentation.
-
-"
-577964B0C132F5EA793054C3FF67417DDA6511D3_3,577964B0C132F5EA793054C3FF67417DDA6511D3," Helpful variables
-
-Use the pre-defined PROJECT_ID environment variable to call the Watson Machine Learning Python client APIs. PROJECT_ID is the guide of the project where your environment is running.
-
-"
-577964B0C132F5EA793054C3FF67417DDA6511D3_4,577964B0C132F5EA793054C3FF67417DDA6511D3," Deployment samples
-
-View or run these Jupyter Notebooks to see how techniques are implemented by using various frameworks. Some of the samples rely on trained models, which are also available for you to download from the public repository.
-
-
-
- Sample name Framework Techniques demonstrated
-
- [Use scikit-learn and custom library to predict temperature](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/9365d34eeacef267026a2b75b92bfa2f) Scikit-learn Train a model with custom defined transformer Persist the custom-defined transformer and the model in Watson Machine Learning repository Deploy the model by using Watson Machine Learning Service Perform predictions that use the deployed model
- [Use PMML to predict iris species](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/8bddf7f7e5d004a009c643750b16f5b4) PMML Deploy and score a PMML model
- [Use Python function to recognize hand-written digits](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/1eddc77b3a4340d68f762625d40b64f9) Python Use a function to store a sample model, then deploy the sample model.
- [Use scikit-learn to recognize hand-written digits](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/e20607d75c8473daaade1e77c21717d4) Scikit-learn Train sklearn model Persist trained model in Watson Machine Learning repository Deploy model for online scoring by using client library Score sample records by using client library
-"
-577964B0C132F5EA793054C3FF67417DDA6511D3_5,577964B0C132F5EA793054C3FF67417DDA6511D3," [Use Spark and batch deployment to predict customer churn](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/e20607d75c8473daaade1e77c21719c1) Spark Load a CSV file into an Apache Spark DataFrame Explore data Prepare data for training and evaluation Create an Apache Spark machine learning pipeline Train and evaluate a model Persist a pipeline and model in Watson Machine Learning repository Explore and visualize prediction result by using the plotly package Deploy a model for batch scoring by using Watson Machine Learning API
- [Use Spark and Python to predict Credit Risk](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/e20607d75c8473daaade1e77c2173364) Spark Load a CSV file into an Apache® Spark DataFrame Explore data Prepare data for training and evaluation Persist a pipeline and model in Watson Machine Learning repository from tar.gz files Deploy a model for online scoring by using Watson Machine Learning API Score sample data by using the Watson Machine Learning API Explore and visualize prediction results by using the plotly package
- [Use SPSS to predict customer churn](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/e20607d75c8473daaade1e77c2175eb9) SPSS Work with the instance Perform an online deployment of the SPSS model Score data by using deployed model
-"
-577964B0C132F5EA793054C3FF67417DDA6511D3_6,577964B0C132F5EA793054C3FF67417DDA6511D3," [Use XGBoost to classify tumors](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/ac820b22cc976f5cf6487260f4c8d9c8) XGBoost Load a CSV file into numpy array Explore data Prepare data for training and evaluation Create an XGBoost machine learning model Train and evaluate a model Use cross-validation to optimize the model's hyperparameters Persist a model in Watson Machine Learning repository Deploy a model for online scoring Score sample data
- [Predict business for cars](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/61a8b600f1bb183e2c471e7a64299f0e) Spark Download an externally trained Keras model with dataset. Persist an external model in the Watson Machine Learning repository. Deploy a model for online scoring by using client library. Score sample records by using client library.
- [Deploy Python function for software specification](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/56825df5322b91daffd39426038808e9) Core Create a Python function Create a web service Score the model
- [Machine Learning artifact management](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/55ef73c276cd1bf2bae266613d08c0f3) Core Export and import artifacts Load, deploy, and score externally created models
- [Use Decision Optimization to plan your diet](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/5502accad754a3c5dcb3a08f531cea5a) Core Create a diet planning model by using Decision Optimization
-"
-577964B0C132F5EA793054C3FF67417DDA6511D3_7,577964B0C132F5EA793054C3FF67417DDA6511D3," [Use SPSS and batch deployment with Db2 to predict customer churn](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/e5e78be14e2260ccb4bcf8181d0955ef) SPSS Load a CSV file into an Apache Spark DataFrame Explore data Prepare data for training and evaluation Persist a pipeline and model in Watson Machine Learning repository from tar.gz files Deploy a model for online scoring by using Watson Machine Learning API Score sample data by using the Watson Machine Learning API Explore and visualize prediction results by using the plotly package
- [Use scikit-learn and AI lifecycle capabilities to predict Boston house prices](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/8bddf7f7e5d004a009c643750b1c7b47) Scikit-learn Load a sample data set from scikit-learn Explore data Prepare data for training and evaluation Create a scikit-learn pipeline Train and evaluate a model Store a model in the Watson Machine Learning repository Deploy a model with AutoAI lifecycle capabilities
- [German credit risk prediction with Scikit-learn for model monitoring](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/f63c83c7368d2487c943c91a9a28ad67) Scikit-learn Train, create, and deploy a credit risk prediction model with monitoring
- [Monitor German credit risk model](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/48e9f342365736c7bb7a8dfc481bca6e) Scikit-learn Train, create, and deploy a credit risk prediction model with IBM Watson OpenScale capabilities
-
-
-
-"
-577964B0C132F5EA793054C3FF67417DDA6511D3_8,577964B0C132F5EA793054C3FF67417DDA6511D3," AutoAI samples
-
-View or run these Jupyter Notebooks to see how AutoAI model techniques are implemented.
-
-
-
- Sample name Framework Techniques demonstrated
-
- [Use AutoAI and Lale to predict credit risk](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/8bddf7f7e5d004a009c643750b16d0c0) Hybrid (AutoAI) with Lale Work with Watson Machine Learning experiments to train AutoAI models Compare trained models quality and select the best one for further refinement Refine the best model and test new variations Deploy and score the trained model
- [Use AutoAI to predict credit risk](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/029d77a73d72a4134c81383d6f103330) Hybrid (AutoAI) Work with Watson Machine Learning experiments to train AutoAI models Compare trained models quality and select the best one for further refinement Refine the best model and test new variations Deploy and score the trained model
-
-
-
-"
-577964B0C132F5EA793054C3FF67417DDA6511D3_9,577964B0C132F5EA793054C3FF67417DDA6511D3," Next steps
-
-
-
-* To learn more about using notebook editors, see [Notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html).
-* To learn more about working with notebooks, see [Coding and running notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/code-run-notebooks.html).
-
-
-
-
-
-* To learn more about authenticating in a notebook, see [Authentication](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-authentication.html).
-
-
-
-Parent topic:[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html)
-"
-67FBC6967ED56285CC4EB1FF12D0E2E23B2F7BD5_0,67FBC6967ED56285CC4EB1FF12D0E2E23B2F7BD5," Managing the Watson Machine Learning service endpoint
-
-You can use IBM Cloud connectivity options for accessing cloud services securely by using service endpoints. When you provision a Watson Machine Learning service instance, you can choose if you want to access your service through the public internet, which is the default setting, or over the IBM Cloud private network.
-
-For more information, refer to [IBM Cloud service endpoints](https://cloud.ibm.com/docs/account?topic=account-vrf-service-endpoint).{: new_window}
-
-You can use the Service provisioning page to choose a default endpoint from the following options:
-
-
-
-* [Public network](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-service-endpoint.html?context=cdpaas&locale=enpublic_net)
-* [Private network](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-service-endpoint.html?context=cdpaas&locale=enprivate_net)
-* Both, public and private networks
-
-
-
-"
-67FBC6967ED56285CC4EB1FF12D0E2E23B2F7BD5_1,67FBC6967ED56285CC4EB1FF12D0E2E23B2F7BD5," Public network
-
-You can use public network endpoints to connect to Watson Machine Learning service instance on the public network. Your environment needs to have internet access to connect.
-
-"
-67FBC6967ED56285CC4EB1FF12D0E2E23B2F7BD5_2,67FBC6967ED56285CC4EB1FF12D0E2E23B2F7BD5," Private network
-
-You can use private network endpoints to connect to your IBM Watson Machine Learning service instance over the IBM Cloud Private network. After you configure your Watson Machine Learning service to use private endpoints, the service is not accessible from the public internet.
-
-"
-67FBC6967ED56285CC4EB1FF12D0E2E23B2F7BD5_3,67FBC6967ED56285CC4EB1FF12D0E2E23B2F7BD5," Private URLs for Watson Machine Learning
-
-Private URLs for Watson Machine Learning for each region are as follows:
-
-
-
-* Dallas - [https://private.us-south.ml.cloud.ibm.com](https://private.us-south.ml.cloud.ibm.com)
-* London - [https://private.eu-gb.ml.cloud.ibm.com](https://private.eu-gb.ml.cloud.ibm.com)
-* Frankfurt - [https://private.eu-de.ml.cloud.ibm.com](https://private.eu-de.ml.cloud.ibm.com)
-* Tokyo - [https://private.jp-tok.ml.cloud.ibm.com](https://private.jp-tok.ml.cloud.ibm.com)
-
-
-
-"
-67FBC6967ED56285CC4EB1FF12D0E2E23B2F7BD5_4,67FBC6967ED56285CC4EB1FF12D0E2E23B2F7BD5," Using IBM Cloud service endpoints
-
-Follow these steps to enable private network endpoints on your clusters:
-
-
-
-1. Use [IBM Cloud CLI](https://cloud.ibm.com/docs/cli?topic=cli-getting-started) to enable your account to use IBM Cloud service endpoints.
-2. Provision a Watson Machine Learning service instance with private endpoints.
-
-
-
-"
-67FBC6967ED56285CC4EB1FF12D0E2E23B2F7BD5_5,67FBC6967ED56285CC4EB1FF12D0E2E23B2F7BD5," Provisioning with service endpoints
-
-You can provision a Watson Machine Learning service instance with service endpoint by using IBM Cloud UI or IBM Cloud CLI.
-
-"
-67FBC6967ED56285CC4EB1FF12D0E2E23B2F7BD5_6,67FBC6967ED56285CC4EB1FF12D0E2E23B2F7BD5," Provisioning a service endpoint with IBM Cloud UI
-
-To configure the endpoints of your IBM Watson Machine Learning service instance, you can use the Endpoints field on the IBM Cloud catalog page. You can configure a public, private, or a mixed network.
-
-
-
-"
-67FBC6967ED56285CC4EB1FF12D0E2E23B2F7BD5_7,67FBC6967ED56285CC4EB1FF12D0E2E23B2F7BD5," IBM Cloud CLI
-
-If you provision an IBM Watson Machine Learning service instance by using the IBM Cloud CLI, use the command-line option service-endpoints to configure the Watson Machine Learning endpoints. You can specify the value public (the default value), private, or public-and-private:
-
-ibmcloud resource service-instance-create pm-20 --service-endpoints
-
-For example:
-
-ibmcloud resource service-instance-create wml-instance pm-20 standard us-south -p --service-endpoints private
-
-or
-
-ibmcloud resource service-instance-create wml-instance pm-20 standard us-south --service-endpoints public-and-private
-
-Parent topic:[First steps](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-overview.html)
-"
-80AE02DC6E3E4FF10C8FD97E1C3F5A5E87270D57_0,80AE02DC6E3E4FF10C8FD97E1C3F5A5E87270D57," Assets in deployment spaces
-
-Learn about various ways of adding and promoting assets to a space. Find the list of asset types that you can add to a space.
-
-Note these considerations for importing assets into a space:
-
-
-
-* Upon import, some assets are automatically assigned a version number, starting with version 1. This version numbering prevents overwriting existing assets if you import their updated versions later.
-* Assets or references that are required to run jobs in the space must be part of the import package, or must be added separately. If you don't add these supporting assets or references, jobs fail.
-
-
-
-The way to add an asset to a space depends on the asset type. You can add some assets directly to a space (for example a model that was created outside of watsonx). Other asset types originate in a project and must be transferred from a project to a space. The third class includes asset types that you can add to a space only as a dependency of another asset. These asset types do not display in the Assets tab in the UI.
-
-For more information, see:
-
-
-
-* [Asset types that you can directly add to a space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html?context=cdpaas&locale=enadd_directly)
-* [Asset types that are created in projects and can be transferred into a space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html?context=cdpaas&locale=enadd_transfer)
-* [Asset types that can be added to a space only as a dependency](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html?context=cdpaas&locale=enadd_dependency)
-
-
-
-For more information about working with space assets, see:
-
-
-
-"
-80AE02DC6E3E4FF10C8FD97E1C3F5A5E87270D57_1,80AE02DC6E3E4FF10C8FD97E1C3F5A5E87270D57,"* [Accessing asset details](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-access-detailed-info.html)
-
-
-
-"
-80AE02DC6E3E4FF10C8FD97E1C3F5A5E87270D57_2,80AE02DC6E3E4FF10C8FD97E1C3F5A5E87270D57," Asset types that you can directly add to a space
-
-
-
-* Connection
-* Data asset (from a connection or an uploaded file)
-* Model
-
-
-
-For more information, see:
-
-
-
-* For data assets and connections: [Adding data assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html)
-* For models: [Importing models into a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-importing-model.html)
-
-
-
-"
-80AE02DC6E3E4FF10C8FD97E1C3F5A5E87270D57_3,80AE02DC6E3E4FF10C8FD97E1C3F5A5E87270D57," Assets types that are created in projects and can be transferred into a space
-
-
-
-* Connection
-* Data Refinery flow
-* Environment
-* Function
-* Job
-* Model
-* Script
-
-
-
-If your asset is located in a standard Watson Studio project, you can transfer the asset to the deployment space by promoting it.
-
-For more information, see [Promoting assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-promote-assets.html).
-
-Alternatively, you can export the project and then import it into the deployment space. For more information, see:
-
-
-
-* [Exporting a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/export-project.html)
-* [Importing spaces and projects](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-import-to-space.html)
-
-
-
-If you export the whole project, any matching custom environments are exported as well.
-
-"
-80AE02DC6E3E4FF10C8FD97E1C3F5A5E87270D57_4,80AE02DC6E3E4FF10C8FD97E1C3F5A5E87270D57," Asset types that can be added to a space only as a dependency
-
-
-
-* Hardware Specification
-* Package Extension
-* Software Specification
-* Watson Machine Learning Experiment
-* Watson Machine Learning Model Definition
-
-
-
-"
-80AE02DC6E3E4FF10C8FD97E1C3F5A5E87270D57_5,80AE02DC6E3E4FF10C8FD97E1C3F5A5E87270D57," Learn more
-
-
-
-* [Deploying assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html)
-* [Training and deploying machine learning models in notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html)
-
-
-
-Parent topic:[Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html)
-"
-D8BD7C30F776F7218860187F535C6B72D1A8DC74_0,D8BD7C30F776F7218860187F535C6B72D1A8DC74," Adding data assets to a deployment space
-
-Learn about various ways of adding and promoting data assets to a space and data types that are used in deployments.
-
-Data can be:
-
-
-
-* A data file such as a .csv file
-* A connection to data that is located in a repository such as a database.
-* Connected data that is located in a storage bucket. For more information, see [Using data from the Cloud Object Storage service](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html?context=cdpaas&locale=encos-data).
-
-
-
-Notes:
-
-
-
-* For definitions of data-related terms, refer to [Asset types and properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html).
-
-
-
-You can add data to a space in one of these ways:
-
-
-
-* [Add data and connections to space by using UI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html?context=cdpaas&locale=enadd-directly)
-* [Promote a data source, such as a file or a connection from an associated project](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-promote-assets.html)
-* [Save a data asset to a space programmatically](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets.html?context=cdpaas&locale=enadd-programmatically)
-* [Import a space or a project, including data assets, into an existing space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-import-to-space.html).
-
-
-
-Data added to a space is managed in a similar way to data added to a Watson Studio project. For example:
-
-
-
-"
-D8BD7C30F776F7218860187F535C6B72D1A8DC74_1,D8BD7C30F776F7218860187F535C6B72D1A8DC74,"* Adding data to a space creates a new copy of the asset and its attachments within the space, maintaining a reference back to the project asset. If an asset such as a data connection requires access credentials, they persist and are the same whether you are accessing the data from a project or from a space.
-* Just like with data connection in a project, you can edit data connection details from the space.
-* Data assets are stored in a space in the same way that they are stored in a project. They use the same file structure for the space as the structure used for the project.
-
-
-
-"
-D8BD7C30F776F7218860187F535C6B72D1A8DC74_2,D8BD7C30F776F7218860187F535C6B72D1A8DC74," Adding data and connections to space by using UI
-
-To add data or connections to space by using UI:
-
-
-
-1. From the Assets tab of your deployment space, click Import assets.
-2. Choose between adding a connected data asset, local file, or connection to a data source:
-
-
-
-* If you want to add a connected data asset, select Connected data. Choose a connection and click Import.
-* If you want to add a local file, select Local file > Data asset. Upload your file and click Done.
-* If you want to add a connection to a data source, select Data access > Connection. Choose a connection and click Import.
-
-
-
-
-
-The data asset displays in the space and is available for use as an input data source in a deployment job.
-
-Note:Some types of connections allow for using your personal platform credentials. If you add a connection or connected data that uses your personal platform credentials, tick the Use my platform login credentials checkbox.
-
-"
-D8BD7C30F776F7218860187F535C6B72D1A8DC74_3,D8BD7C30F776F7218860187F535C6B72D1A8DC74," Adding data to space programmatically
-
-If you are using APIs to create, update, or delete Watson Machine Learning assets, make sure that you are using only Watson Machine Learning [API calls](https://cloud.ibm.com/apidocs/machine-learning).
-
-For an example of how to add assets programmatically, refer to this sample notebook: [Use SPSS and batch deployment with Db2 to predict customer churn](https://github.com/IBM/watson-machine-learning-samples/blob/df8e5122a521638cb37245254fe35d3a18cd3f59/cloud/notebooks/python_sdk/deployments/spss/Use%20SPSS%20and%20batch%20deployment%20with%20DB2%20to%20predict%20customer%20churn.ipynb)
-
-"
-D8BD7C30F776F7218860187F535C6B72D1A8DC74_4,D8BD7C30F776F7218860187F535C6B72D1A8DC74," Data source reference types in Watson Machine Learning
-
-Data source reference types are referenced in Watson Machine Learning requests to represent input data and results locations. Use data_asset and connection_asset for these types of data sources:
-
-
-
-* Cloud Object Storage
-* Db2
-* Database data
-
-
-
-Notes:
-
-
-
-* For Decision Optimization, the reference type is url.
-
-
-
-"
-D8BD7C30F776F7218860187F535C6B72D1A8DC74_5,D8BD7C30F776F7218860187F535C6B72D1A8DC74," Example data_asset payload
-
-{""input_data_references"": [{
-""type"": ""data_asset"",
-""connection"": {
-},
-""location"": {
-""href"": ""/v2/assets/?space_id=""
-}
-}]
-
-"
-D8BD7C30F776F7218860187F535C6B72D1A8DC74_6,D8BD7C30F776F7218860187F535C6B72D1A8DC74," Example connection_asset payload
-
-""input_data_references"": [{
-""type"": ""connection_asset"",
-""connection"": {
-""id"": """"
-},
-""location"": {
-""bucket"": """",
-""file_name"": ""/""
-}
-
-}]
-
-For more information, see:
-
-
-
-* Watson Machine Learning [REST API](https://cloud.ibm.com/apidocs/machine-learning)
-
-
-
-"
-D8BD7C30F776F7218860187F535C6B72D1A8DC74_7,D8BD7C30F776F7218860187F535C6B72D1A8DC74," Using data from the Cloud Object Storage service
-
-Cloud Object Storage service can be used with deployment jobs through a connected data asset or a connection asset. To use data from the Cloud Object Storage service:
-
-
-
-1. Create a connection to IBM Cloud Object Storage by adding a Connection to your project or space and selecting Cloud Object Storage (infrastructure) or Cloud Object Storage as the connector. Provide the secret key, access key, and login URL.
-
-Note:When you are creating a connection to Cloud Object Storage or Cloud Object Storage (Infrastructure), you must specify both access_key and secret_key. If access_key and secret_key are not specified, downloading the data from that connection doesn't work in a batch deployment job. For reference, see [IBM Cloud Object Storage connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos.html) and [IBM Cloud Object Storage (infrastructure) connection](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/conn-cos-infra.html).
-2. Add input and output files to the deployment space as connected data by using the Cloud Object Storage connection that you created.
-
-
-
-Parent topic:[Assets in deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html)
-"
-451244D4E0CD8A3E96CD15FFAF0F3BDA526CCED2_0,451244D4E0CD8A3E96CD15FFAF0F3BDA526CCED2," Creating deployment spaces
-
-Create a deployment space to store your assets, deploy assets, and manage your deployments.
-
-"
-451244D4E0CD8A3E96CD15FFAF0F3BDA526CCED2_1,451244D4E0CD8A3E96CD15FFAF0F3BDA526CCED2,"Required permissions:
-All users in your IBM Cloud account with the Editor IAM platform access role for all IAM enabled services or for Cloud Pak for Data can manage to create deployment spaces. For more information, see [IAM Platform access roles](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/roles.htmlplatform).
-
-A deployment space is not associated with a project. You can publish assets from multiple projects to a space. For example, you might have a test space for evaluating deployments, and a production space for deployments you want to deploy in business applications.
-
-Follow these steps to create a deployment space:
-
-
-
-1. From the navigation menu, select Deployments > New deployment space. Enter a name for your deployment space.
-2. Optional: Add a description and tags.
-
-3. Select a storage service to store your space assets.
-
-
-
-* If you have a [Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/storage-options.html) repository that is associated with your IBM Cloud account, choose a repository from the list to store your space assets.
-* If you do not have a [Cloud Object Storage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/storage-options.html) repository that is associated with your IBM Cloud account, you are prompted to create one.
-
-
-
-4. Optional: If you want to deploy assets from your space, select a machine learning service instance to associate with your deployment space.
-To associate a machine learning instance to a space, you must:
-
-
-
-* Be a space administrator.
-* Have admin access to the machine learning service instance that you want to associate with the space. For more information, see [Creating a Watson Machine Learning service instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-service-instance.html). Tip: If you want to evaluate assets in the space, switch to the Manage tab and associate a Watson OpenScale instance.
-
-
-
-"
-451244D4E0CD8A3E96CD15FFAF0F3BDA526CCED2_2,451244D4E0CD8A3E96CD15FFAF0F3BDA526CCED2,"5. Optional: Assign the space to a deployment stage. Deployment stages are used for [MLOps](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/modelops-overview.html), to manage access for assets in various stages of the AI lifecycle. They are also used in [governance](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-overview.html), for tracking assets. Choose from:
-
-
-
-* Development for assets under development. Assets that are tracked for governance are displayed in the Develop stage of their associated use case.
-* Testing for assets that are being validated. Assets that are tracked for governance are displayed in the Validate stage of their associated use case.
-* Production for assets in production. Assets that are tracked for governance are displayed in the Operate stage of their associated use case.
-
-
-
-6. Optional: Upload space assets, such as [exported project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/export-project.html) or [exported space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-export.html). If the imported space is encrypted, you must enter the password.
-
-Tip: If you get an import error, clear your browser cookies and then try again.
-7. Click Create.
-
-
-
-"
-451244D4E0CD8A3E96CD15FFAF0F3BDA526CCED2_3,451244D4E0CD8A3E96CD15FFAF0F3BDA526CCED2," Viewing and managing deployment spaces
-
-
-
-* To view all deployment spaces that you can access, click Deployments on the navigation menu.
-* To view any of the details about the space after you create it, such as the associated service instance or storage ID, open your deployment space and then click the Manage tab.
-* Your space assets are stored in a Cloud Object Storage repository. You can access this repository from IBM Cloud. To find the bucket ID, open your deployment space, and click the Manage tab.
-
-
-
-"
-451244D4E0CD8A3E96CD15FFAF0F3BDA526CCED2_4,451244D4E0CD8A3E96CD15FFAF0F3BDA526CCED2," Learn more
-
-To learn more about adding assets to a space and managing them, see [Assets in deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html).
-
-To learn more about creating a space and accessing its details programmatically, see [Notebook on managing spaces](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/e5e78be14e2260ccb4bcf8181d0967e3).
-
-To learn more about handling spaces programmatically, see [Python client](https://ibm.github.io/watson-machine-learning-sdk/) or [REST API](https://cloud.ibm.com/apidocs/machine-learning).
-
-Parent topic:[Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html)
-"
-C11E8DEEDBABE64F4789061D10E55AEA415FD51E_0,C11E8DEEDBABE64F4789061D10E55AEA415FD51E," Deleting deployment spaces
-
-Delete existing deployment spaces that you don't require anymore.
-
-Important:Before you delete a deployment space, you must delete all the deployments that are associated with it. Only a project admin can delete a deployment space. For more information, see [Deployment space collaborator roles and permissions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/collaborator-permissions-wml.html).
-
-To remove a deployment space, follow these steps:
-
-
-
-1. From the navigation menu, click Deployments.
-2. In the deployments list, click the Spaces tab and find the deployment space that you want to delete.
-3. Hover over the deployment space, select the menu () icon, and click Delete.
-4. In the confirmation dialog box, click Delete.
-
-
-
-"
-C11E8DEEDBABE64F4789061D10E55AEA415FD51E_1,C11E8DEEDBABE64F4789061D10E55AEA415FD51E," Learn more
-
-To learn more about how to clean up a deployment space and delete it programmatically, refer to:
-
-
-
-* [Notebook on managing machine learning artifacts](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/e5e78be14e2260ccb4bcf8181d093d7b)
-* [Notebook on managing spaces](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/e5e78be14e2260ccb4bcf8181d0967e3)
-
-
-
-Parent topic:[Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html)
-"
-85E9CAC1F581E61092CFF1F6BE38570EE734C115_0,85E9CAC1F581E61092CFF1F6BE38570EE734C115," Exporting space assets from deployment spaces
-
-You can export assets from a deployment space so that you can share the space with others or reuse the assets in another space.
-
-For a list of assets that you can export from space, refer to [Assets in a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html).
-
-"
-85E9CAC1F581E61092CFF1F6BE38570EE734C115_1,85E9CAC1F581E61092CFF1F6BE38570EE734C115," Exporting space assets from the UI
-
-Important:To avoid problems with importing the space, export all dependencies together with the space. For more information, see [Exporting a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/export-project.html).
-
-To export space assets from the UI:
-
-
-
-1. From your deployment space, click the import and export space () icon. From the list, select Export space.
-2. Click New export file. Specify a file name and an optional description.
-Tip: To encrypt sensitive data in the exported archive, type the password in the Password field.
-3. Select the assets that you want to export with the space.
-4. Click Create to create the export file.
-5. After the space is exported, click the download () to save the file.
-
-
-
-You can reuse this space by choosing Create a space from a file when you create a new space.
-
-"
-85E9CAC1F581E61092CFF1F6BE38570EE734C115_2,85E9CAC1F581E61092CFF1F6BE38570EE734C115," Learn more
-
-
-
-* [Importing spaces and projects into existing deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-import-to-space.html).
-
-
-
-Parent topic:[Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html)
-"
-A11374B50B49477362FA00BBB32A277776F7E8E2_0,A11374B50B49477362FA00BBB32A277776F7E8E2," Importing space and project assets into deployment spaces
-
-You can import assets that you export from a deployment space or a project (either a project export or a Git archive) into a new or existing deployment space. This way, you can add assets or update existing assets (for example, replacing a model with its newer version) to use for your deployments.
-
-You can import a space or a project export file to [a new deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-import-to-space.html?context=cdpaas&locale=enimport-to-new) or an [existing deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-import-to-space.html?context=cdpaas&locale=enimport-to-existing) to populate the space with assets.
-
-Tip: The export file can come from a Git-enabled project and a Watson Studio project. To create the file to export, create a compressed file for the project that contains the assets to import. Then, follow the steps for importing the compressed file into a new or existing space.
-
-"
-A11374B50B49477362FA00BBB32A277776F7E8E2_1,A11374B50B49477362FA00BBB32A277776F7E8E2," Importing a space or a project to a new deployment space
-
-To import a space or a project when you are creating a new deployment space:
-
-
-
-1. Click New deployment space.
-2. Enter the details for the space. For more information, see [Creating deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-create.html).
-3. In the Upload space assets section, upload the exported compressed file that contains data assets and click Create.
-
-
-
-The assets from the exported file is added as space assets.
-
-"
-A11374B50B49477362FA00BBB32A277776F7E8E2_2,A11374B50B49477362FA00BBB32A277776F7E8E2," Importing a space or a project to an existing deployment space
-
-To import a space or a project into an existing space:
-
-
-
-1. From your deployment space, click the import and export space () icon. From the list, select Import space.
-2. Add your compressed file that contains assets from a Watson Studio project or deployment space.
-Tip: If the space that you are importing is encrypted, enter the password in the Password field.
-3. After your asset is imported, click Done.
-
-
-
-The assets from the exported file is added as space assets.
-
-"
-A11374B50B49477362FA00BBB32A277776F7E8E2_3,A11374B50B49477362FA00BBB32A277776F7E8E2," Resolving issues with asset duplication
-
-The importing mechanism compares assets that exist in your space with the assets that are being imported. If it encounters an asset with the same name and of the same type:
-
-
-
-* If the asset type supports revisions, the importing mechanism creates a new revision of the existing asset and fixes the new revision.
-* If the asset type does not support revisions, the importing mechanism fixes the existing asset.
-
-
-
-This table describes how import works to resolve cases where assets are duplicated between the import file and the existing space.
-
-
-
-Scenarios for importing duplicated assets
-
- Your space File being imported Result
-
- No assets with matching name or type One or more assets with matching name or type All assets are imported. If multiple assets in the import file have the same name, they are imported as duplicate assets in the target space.
- One asset with matching name or type One asset with matching name or type Matching asset is updated with new version. Other assets are imported normally.
- One asset with matching name or type More than one asset with matching name or type The first matching asset that is processed is imported as a new version for the existing asset in the space, extra assets with matching name are created as duplicates in the space. Other assets are imported normally.
- Multiple assets with matching name or type One or more assets with matching name or type Assets with matching names fail to import. Other assets are imported normally.
-
-
-
-Warning: Multiple assets of the same name in an existing space or multiple assets of the same name in an import file are not fully supported scenarios. The import works as described for the scenarios in the table, but you cannot use versioning capabilities specific to the import.
-
-Existing deployments get updated differently, depending on deployment type:
-
-
-
-* If a batch deployment was created by using the previous version of the asset, the next invocation of the batch deployment job will refer to the updated state of the asset.
-* If an online deployment was created by using the previous version of the asset, the next ""restart"" of the deployment refers to the updated state of the asset.
-
-
-
-"
-A11374B50B49477362FA00BBB32A277776F7E8E2_4,A11374B50B49477362FA00BBB32A277776F7E8E2," Learn more
-
-
-
-* To learn about adding other types of assets to a space, refer to [Assets in deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html).
-* To learn about exporting assets from a deployment space, refer to [Exporting space assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-export.html).
-
-
-
-Parent topic:[Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html)
-"
-4DD17198B8E7413469C1837FFDBAF109B307078C_0,4DD17198B8E7413469C1837FFDBAF109B307078C," Promoting assets to a deployment space
-
-Learn about how to promote assets from a project to a deployment space and the requirements for promoting specific asset types.
-
-"
-4DD17198B8E7413469C1837FFDBAF109B307078C_1,4DD17198B8E7413469C1837FFDBAF109B307078C," Promoting assets to your deployment space
-
-You can promote assets from your project to a deployment space. For a list of assets that can be promoted from a project to a deployment space, refer to [Adding assets to a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html). When you are promoting assets, you can:
-
-
-
-* Choose an existing space or create a new one.
-* Add tags to help identify the promoted asset.
-* Choose dependent assets to promote them at the same time.
-
-
-
-Follow these steps to promote your assets to your deployment space:
-
-
-
-1. From your project, go to the Assets tab.
-2. Select the Options () icon and click Promote to space.
-
-
-
-Tip: If the asset that you want to promote is a model, you can also click the model name to open the model details page, and then click Promote to deployment space.
-
-Notes:
-
-
-
-* Promoting assets and their dependencies from a project to a space by using the Watson Studio user interface is the recommended method to guarantee that the promotion flow results in a complete asset definition. For example, relying on the [Watson Data API](https://cloud.ibm.com/apidocs/watson-data-api-cpd) to manage the promotion flow of an asset, together with its dependencies, can result in the promoted asset from being inaccessible from the space.
-* Promoting assets from default Git-based projects is not supported.
-* Depending on your configuration and the type of asset that you promote, large asset attachments, typically more than 2 GB, can cause the promotion action to time out.
-
-
-
-For more information, see:
-
-
-
-* [Promoting connections and connected data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-promote-assets.html?context=cdpaas&locale=enpromo-conn)
-"
-4DD17198B8E7413469C1837FFDBAF109B307078C_2,4DD17198B8E7413469C1837FFDBAF109B307078C,"* [Promoting models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-promote-assets.html?context=cdpaas&locale=enpromo-model)
-* [Promoting notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-promote-assets.html?context=cdpaas&locale=enpromo-nbs)
-
-
-
-"
-4DD17198B8E7413469C1837FFDBAF109B307078C_3,4DD17198B8E7413469C1837FFDBAF109B307078C," Promoting connections and connected data
-
-When you promote a connection that uses personal credentials or Cloud Pak for Data authentication to a deployment space, the credentials are not promoted. You must provide the credentials information again or allow Cloud Pak for Data authentication. Because Storage Volume connections support only personal credentials, to be able to use this type of asset after it is promoted to a space, you must provide the credentials again.
-
-Some types of connections allow for using your personal platform credentials. If you promote a connection or connected data that uses your personal platform credentials, tick the Use my platform login credentials checkbox.
-
-Although you can promote any kind of data connection to a space, where you can use the connection is governed by factors such as model and deployment type. For example, you can access any of the connected data by using a script. However, in batch deployments you are limited to particular types of data, as listed in [Creating a batch deployment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/deploy-batch-details.html).
-
-"
-4DD17198B8E7413469C1837FFDBAF109B307078C_4,4DD17198B8E7413469C1837FFDBAF109B307078C," Promoting models
-
-When you promote a model to a space:
-
-
-
-* Components that are required for a successful deployment, such as a custom software specification, model definition, or pipeline definition are automatically promoted as well.
-* The data assets that were used to train the model are not promoted with it. Information on data assets used to train the model is included in model metadata.
-
-
-
-"
-4DD17198B8E7413469C1837FFDBAF109B307078C_5,4DD17198B8E7413469C1837FFDBAF109B307078C," Promoting notebooks and scripts
-
-Tip: If you are using the Notebook editor, you must save a version of the notebook before you can promote it.
-
-
-
-* If you created a job for a notebook and you selected Log and updated version as the job run result output, the notebook cannot be promoted to a deployment space.
-* If you are working in a notebook that you created before IBM Cloud Pak for Data 4.0, and you want to promote this notebook to a deployment space, follow these steps to enable promoting it:
-
-
-
-1. Save a new version of the notebook.
-2. Select the newly created version.
-3. Select either Log and notebook or Log only as the job run result output under Advanced configuration.
-4. Run your job again.
-
-Now you can promote it manually from the project Assets page or programmatically by using CPDCTL commands.
-
-
-
-
-
-
-
-* If you want to promote a notebook programmatically, use CPDCTL commands to move the notebook or script to a deployment space. To learn how to use CPDCTL to move notebooks or scripts to spaces, refer to [CPDCTL code samples](https://github.com/IBM/cpdctl/tree/master/samples). For the reference guide, refer to [CPDCTL command reference](https://github.com/IBM/cpdctl/blob/master/README_command_reference.mdnotebook_promote).
-
-
-
-Parent topic:[Assets in deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html)
-"
-47CC4851C049D805F02BD2058CD5C2FFA157981C_0,47CC4851C049D805F02BD2058CD5C2FFA157981C," Deployment spaces
-
-Deployment spaces contain deployable assets, deployments, deployment jobs, associated input and output data, and the associated environments. You can use spaces to deploy various assets and manage your deployments.
-
-Deployment spaces are not associated with projects. You can publish assets from multiple projects to a space, and you can deploy assets to more than one space. For example, you might have a test space for evaluating deployments, and a production space for deployments that you want to deploy in business applications.
-
-The deployments dashboard is an aggregate view of deployment activity available to you, across spaces. For details, refer to [Deployments dashboard](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/operator-view.html).
-
-When you open a space from the UI, you see these elements:
-
-
-
-You can share a space with other people. When you add collaborators to a deployment space, you can specify which actions they can do by assigning them access levels. For details on space collaborator permissions, refer to [Deployment space collaborator roles and permissions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/collaborator-permissions-wml.html).
-
-"
-47CC4851C049D805F02BD2058CD5C2FFA157981C_1,47CC4851C049D805F02BD2058CD5C2FFA157981C," Learn more
-
-
-
-* [Creating deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-create.html)
-* [Managing assets in a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-add-assets-all.html)
-* [Creating deployments from a space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html)
-"
-88A9F08917918D1D74C1C2CA702E999747EEB422_0,88A9F08917918D1D74C1C2CA702E999747EEB422," Jupyter Notebook editor
-
-The Jupyter Notebook editor is largely used for interactive, exploratory data analysis programming and data visualization. Only one person can edit a notebook at a time. All other users can access opened notebooks in view mode only, while they are locked.
-
-You can use the preinstalled open source libraries that come with the notebook runtime environments, add your own libraries, and benefit from the IBM libraries provided at no extra cost.
-
-When your notebooks are ready, you can create jobs to run the notebooks directly from the Jupyter Notebook editor. Your job configurations can use environment variables that are passed to the notebooks with different values when the notebooks run.
-
-"
-88A9F08917918D1D74C1C2CA702E999747EEB422_1,88A9F08917918D1D74C1C2CA702E999747EEB422," Learn more
-
-
-
-* [Quick start: Analyze data in a Jupyter notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-analyze.html)
-* [Create notebooks in the Jupyter Notebook editor](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/creating-notebooks.html)
-* [Runtime environments for notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html)
-* [Libraries and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/libraries.html)
-* [Code and run notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/code-run-notebooks.html)
-* [Schedule a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-nb-editor.html)
-* [Share and publish notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/share-notebooks.html)
-
-
-
-Parent topic:[Notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html)
-"
-CF4254CE9E6D890CCAA2564DA3E9B57071ADE342_0,CF4254CE9E6D890CCAA2564DA3E9B57071ADE342," Compute resource options for the notebook editor in projects
-
-When you run a notebook in the notebook editor in a project, you choose an environment template, which defines the compute resources for the runtime environment. The environment template specifies the type, size, and power of the hardware configuration, plus the software configuration. For notebooks, environment templates include a supported language of Python and R.
-
-
-
-* [Types of environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=entypes)
-* [Runtime releases](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=enruntime-releases)
-* [CPU environment templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=endefault-cpu)
-* [Spark environment templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=endefault-spark)
-* [GPU environment templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=endefault-gpu)
-* [Default hardware specifications for scoring models with Watson Machine Learning](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=enwml)
-* [Data files in notebook environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=endata-files)
-"
-CF4254CE9E6D890CCAA2564DA3E9B57071ADE342_1,CF4254CE9E6D890CCAA2564DA3E9B57071ADE342,"* [Compute usage by service](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=encompute)
-* [Runtime scope](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=enscope)
-* [Changing environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=enchange-env)
-
-
-
-"
-CF4254CE9E6D890CCAA2564DA3E9B57071ADE342_2,CF4254CE9E6D890CCAA2564DA3E9B57071ADE342," Types of environments
-
-You can use these types of environments for running notebook:
-
-
-
-* [Anaconda CPU environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=endefault-cpu) for standard workloads.
-* [Spark environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=endefault-spark) for parallel processing that is provided by the platform or by other services.
-* [GPU environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html?context=cdpaas&locale=endefault-gpu) for compute-intensive machine learning models.
-
-
-
-Most environment types for notebooks have default environment templates so you can get started quickly. Otherwise, you can [create custom environment templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-customize-env-definition.html).
-
-
-
-Environment types for notebooks
-
- Environment type Default templates Custom templates
-
- Anaconda CPU ✓ ✓
- Spark clusters ✓ ✓
- GPU ✓ ✓
-
-
-
-"
-CF4254CE9E6D890CCAA2564DA3E9B57071ADE342_3,CF4254CE9E6D890CCAA2564DA3E9B57071ADE342," Runtime releases
-
-The default environments for notebooks are added as an affiliate of a runtime release and prefixed with Runtime followed by the release year and release version.
-
-A runtime release specifies a list of key data science libraries and a language version, for example Python 3.10. All environments of a runtime release are built based on the library versions defined in the release, thus ensuring the consistent use of data science libraries across all data science applications.
-
-The Runtime 22.2 and Runtime 23.1 releases are available for Python 3.10 and R 4.2.
-
-While a runtime release is supported, IBM will update the library versions to address security requirements. Note that these updates will not change the . versions of the libraries, but only the versions. This ensures that your notebook assets will continue to run.
-
-"
-CF4254CE9E6D890CCAA2564DA3E9B57071ADE342_4,CF4254CE9E6D890CCAA2564DA3E9B57071ADE342," Library packages included in Runtimes
-
-For specific versions of popular data science library packages included in Watson Studio runtimes refer to these tables:
-
-
-
-Table 3. Packages and their versions in the various Runtime releases for Python
-
- Library Runtime 22.2 on Python 3.10 Runtime 23.1 on Python 3.10
-
- Keras 2.9 2.12
- Lale 0.7 0.7
- LightGBM 3.3 3.3
- NumPy 1.23 1.23
- ONNX 1.12 1.13
- ONNX Runtime 1.12 1.13
- OpenCV 4.6 4.7
- pandas 1.4 1.5
- PyArrow 8.0 11.0
- PyTorch 1.12 2.0
- scikit-learn 1.1 1.1
- SciPy 1.8 1.10
- SnapML 1.8 1.13
- TensorFlow 2.9 2.12
- XGBoost 1.6 1.6
-
-
-
-
-
-Table 4. Packages and their versions in the various Runtime releases for R
-
- Library Runtime 22.2 on R 4.2 Runtime 23.1 on R 4.2
-
- arrow 8.0 11.0
- car 3.0 3.0
- caret 6.0 6.0
- catools 1.18 1.18
- forecast 8.16 8.16
- ggplot2 3.3 3.3
- glmnet 4.1 4.1
- hmisc 4.7 4.7
- keras 2.9 2.12
- lme4 1.1 1.1
- mvtnorm 1.1 1.1
- pandoc 2.12 2.12
- psych 2.2 2.2
- python 3.10 3.10
- randomforest 4.7 4.7
- reticulate 1.25 1.25
- sandwich 3.0 3.0
- scikit-learn 1.1 1.1
- spatial 7.3 7.3
- tensorflow 2.9 2.12
- tidyr 1.2 1.2
- xgboost 1.6 1.6
-
-
-
-In addition to the libraries listed in the tables, runtimes include many other useful libraries. To see the full list, select the Manage tab in your project, then click Templates, select the Environments tab, and then click on one of the listed environments.
-
-"
-CF4254CE9E6D890CCAA2564DA3E9B57071ADE342_5,CF4254CE9E6D890CCAA2564DA3E9B57071ADE342," CPU environment templates
-
-You can select any of the following default CPU environment templates for notebooks. The default environment templates are listed under Templates on the Environments page on the Manage tab of your project.
-
-DO Indicates that the environment templates includes the CPLEX and the DOcplex libraries to model and solve decision optimization problems that exceed the complexity that is supported by the Community Edition of the libraries in the other default Python environments. See [Decision Optimization notebooks](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/DONotebooks.html).
-
-NLP Indicates that the environment templates includes the Watson Natural Language Processing library with pre-trained models for language processing tasks that you can run on unstructured data. See [Using the Watson Natural Language Processing library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp.html). This default environment should be large enough to run the pre-trained models.
-
-
-
-Default CPU environment templates for notebooks
-
- Name Hardware configuration CUH rate per hour
-
- Runtime 22.2 on Python 3.10 XXS 1 vCPU and 4 GB RAM 0.5
- Runtime 22.2 on Python 3.10 XS 2 vCPU and 8 GB RAM 1
- Runtime 22.2 on Python 3.10 S 4 vCPU and 16 GB RAM 2
- Runtime 23.1 on Python 3.10 XXS 1 vCPU and 4 GB RAM 0.5
- Runtime 23.1 on Python 3.10 XS 2 vCPU and 8 GB RAM 1
- Runtime 23.1 on Python 3.10 S 4 vCPU and 16 GB RAM 2
- DO + NLP Runtime 22.2 on Python 3.10 XS 2 vCPU and 8 GB RAM 6
- NLP + DO Runtime 23.1 on Python 3.10 XS 2 vCPU and 8 GB RAM 6
- Runtime 22.2 on R 4.2 S 4 vCPU and 16 GB RAM 2
- Runtime 23.1 on R 4.2 S 4 vCPU and 16 GB RAM 2
-
-
-
-"
-CF4254CE9E6D890CCAA2564DA3E9B57071ADE342_6,CF4254CE9E6D890CCAA2564DA3E9B57071ADE342,"You should stop all active CPU runtimes when you no longer need them to prevent consuming extra capacity unit hours (CUHs). See [CPU idle timeout](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.htmlstop-active-runtimes).
-
- Notebooks and CPU environments
-
-When you open a notebook in edit mode in a CPU runtime environment, exactly one interactive session connects to a Jupyter kernel for the notebook language and the environment runtime that you select. The runtime is started per single user and not per notebook. This means that if you open a second notebook with the same environment template in the same project, a second kernel is started in the same runtime. Runtime resources are shared by the Jupyter kernels that you start in the runtime. Runtime resources are also shared if the CPU has GPU.
-
-If you want to avoid sharing runtimes but want to use the same environment template for multiple notebooks in a project, you should create custom environment templates with the same specifications and associate each notebook with its own template.
-
-If necessary, you can restart or reconnect to the kernel. When you restart a kernel, the kernel is stopped and then started in the same session again, but all execution results are lost. When you reconnect to a kernel after losing a connection, the notebook is connected to the same kernel session, and all previous execution results which were saved are available.
-
-"
-CF4254CE9E6D890CCAA2564DA3E9B57071ADE342_7,CF4254CE9E6D890CCAA2564DA3E9B57071ADE342," Spark environment templates
-
-You can select any of the following default Spark environment templates for notebooks. The default environment templates are listed under Templates on the Environments page on the Manage tab of your project.
-
-
-
-Default Spark environment templates for notebooks
-
- Name Hardware configuration CUH rate per hour
-
- Default Spark 3.3 & R 4.2 2 Executors each: 1 vCPU and 4 GB RAM; Driver: 1 vCPU and 4 GB RAM 1
- Default Spark 3.4 & R 4.2 2 Executors each: 1 vCPU and 4 GB RAM; Driver: 1 vCPU and 4 GB RAM 1
-
-
-
-You should stop all active Spark runtimes when you no longer need them to prevent consuming extra capacity unit hours (CUHs). See [Spark idle timeout](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.htmlstop-active-runtimes).
-
-"
-CF4254CE9E6D890CCAA2564DA3E9B57071ADE342_8,CF4254CE9E6D890CCAA2564DA3E9B57071ADE342," Large Spark environments
-
-If you have the Watson Studio Professional plan, you can create custom environment templates for larger Spark environments.
-
-Professional plan users can have up to 35 executors and can choose from the following options for both driver and executor:
-
-
-
-Hardware configurations for Spark environments
-
- Hardware configuration
-
- 1 vCPU and 4 GB RAM
- 1 vCPU and 8 GB RAM
- 1 vCPU and 12 GB RAM
-
-
-
-The CUH rate per hour increases by 0.5 for every vCPU that is added. For example, 1x Driver: 3vCPU with 12GB of RAM and 4x Executors: 2vCPU with 8GB of RAM amounts to (3 + (4 * 2)) = 11 vCPUs and 5.5 CUH.
-
- Notebooks and Spark environments
-
-You can select the same Spark environment template for more than one notebook. Every notebook associated with that environment has its own dedicated Spark cluster and no resources are shared.
-
-When you start a Spark environment, extra resources are needed for the Jupyter Enterprise Gateway, Spark Master, and the Spark worker daemons. These extra resources amount to 1 vCPU and 2 GB of RAM for the driver and 1 GB RAM for each executor. You need to take these extra resources into account when selecting the hardware size of a Spark environment. For example: if you create a notebook and select Default Spark 3.3 & Python 3.10, the Spark cluster consumes 3 vCPU and 12 GB RAM but, as 1 vCPU and 4 GB RAM are required for the extra resources, the resources remaining for the notebook are 2 vCPU and 8 GB RAM.
-
-"
-CF4254CE9E6D890CCAA2564DA3E9B57071ADE342_9,CF4254CE9E6D890CCAA2564DA3E9B57071ADE342," File system on a Spark cluster
-
-If you want to share files across executors and the driver or kernel of a Spark cluster, you can use the shared file system at /home/spark/shared.
-
-If you want to use your own custom libraries, you can store them under /home/spark/shared/user-libs/. There are four subdirectories under /home/spark/shared/user-libs/ that are pre-configured to be made available to Python and R or Java runtimes.
-
-The following tables lists the pre-configured subdirectories where you can add your custom libaries.
-
-
-
-Table 5. Pre-configured subdirectories for custom libraries
-
- Directory Type of library
-
- /home/spark/shared/user-libs/python3/ Python 3 libraries
- /home/spark/shared/user-libs/R/ R packages
- /home/spark/shared/user-libs/spark2/ Java JAR files
-
-
-
-To share libraries across a Spark driver and executors:
-
-
-
-1. Download your custom libraries or JAR files to the appropriate pre-configured directory.
-2. Restart the kernel from the notebook menu by clicking Kernel > Restart Kernel. This loads your custom libraries or JAR files in Spark.
-
-
-
-Note that these libraries are not persisted. When you stop the environment runtime and restart it again later, you need to load the libraries again.
-
-"
-CF4254CE9E6D890CCAA2564DA3E9B57071ADE342_10,CF4254CE9E6D890CCAA2564DA3E9B57071ADE342," GPU environment templates
-
-You can select the following GPU environment template for notebooks. The environment templates are listed under Templates on the Environments page on the Manage tab of your project.
-
-The GPU environment template names indicate the accelerator power. The GPU environment templates include the Watson Natural Language Processing library with pre-trained models for language processing tasks that you can run on unstructured data. See [Using the Watson Natural Language Processing library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp.html).
-
- Indicates that the environment template requires the Watson Studio Professional plan. See [Offering plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/ws-plans.html).
-
-
-
-Default GPU environment templates for notebooks
-
- Name Hardware configuration CUH rate per hour
-
- GPU V100 Runtime 22.2 on Python 3.10 40 vCPU + 172 GB RAM + 1 NVIDIA TESLA V100 (1 GPU) 68
- GPU V100 Runtime 23.1 on Python 3.10 40 vCPU + 172 GB RAM + 1 NVIDIA TESLA V100 (1 GPU) 68
- GPU 2xV100 Runtime 22.2 on Python 3.10 80 vCPU and 344 GB RAM + 2 NVIDIA TESLA V100 (2 GPU) 136
- GPU 2xV100 Runtime 23.1 on Python 3.10 80 vCPU and 344 GB RAM + 2 NVIDIA TESLA V100 (2 GPU) 136
-
-
-
-You should stop all active GPU runtimes when you no longer need them to prevent consuming extra capacity unit hours (CUHs). See [GPU idle timeout](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.htmlstop-active-runtimes).
-
- Notebooks and GPU environments
-
-GPU environments for notebooks are available only in the Dallas IBM Cloud service region.
-
-You can select the same Python and GPU environment template for more than one notebook in a project. In this case, every notebook kernel runs in the same runtime instance and the resources are shared. To avoid sharing runtime resources, create multiple custom environment templates with the same specifications and associate each notebook with its own template.
-
-"
-CF4254CE9E6D890CCAA2564DA3E9B57071ADE342_11,CF4254CE9E6D890CCAA2564DA3E9B57071ADE342," Default hardware specifications for scoring models with Watson Machine Learning
-
-When you invoke the Watson Machine Learning API within a notebook, you consume compute resources from the Watson Machine Learning service as well as the compute resources for the notebook kernel.
-
-You can select any of the following hardware specifications when you connect to Watson Machine Learning and create a deployment.
-
-
-
-Hardware specifications available when invoking the Watson Machine Learning service in a notebook
-
- Capacity size Hardware configuration CUH rate per hour
-
- Extra small 1x4 = 1 vCPU and 4 GB RAM 0.5
- Small 2x8 = 2 vCPU and 8 GB RAM 1
- Medium 4x16 = 4 vCPU and 16 GB RAM 2
- Large 8x32 = 8 vCPU and 32 GB RAM 4
-
-
-
-"
-CF4254CE9E6D890CCAA2564DA3E9B57071ADE342_12,CF4254CE9E6D890CCAA2564DA3E9B57071ADE342," Data files in notebook environments
-
-If you are working with large data sets, you should store the data sets in smaller chunks in the IBM Cloud Object Storage associated with your project and process the data in chunks in the notebook. Alternatively, you should run the notebook in a Spark environment.
-
-Be aware that the file system of each runtime is non-persistent and cannot be shared across environments. To persist files in Watson Studio, you should use IBM Cloud Object Storage. The easiest way to use IBM Cloud Object Storage in notebooks in projects is to leverage the [project-lib package for Python](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/project-lib-python.html) or the [project-lib package for R](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/project-lib-r.html).
-
-"
-CF4254CE9E6D890CCAA2564DA3E9B57071ADE342_13,CF4254CE9E6D890CCAA2564DA3E9B57071ADE342," Compute usage by service
-
-The notebook runtimes consumes compute resources as CUH from Watson Studio, while running default or custom environments. You can monitor the Watson Studio CUH consumption in the project on the Resource usage page on the Manage tab of the project.
-
-Notebooks can also consume CUH from the Watson Machine Learning service when the notebook invokes the Watson Machine Learning to score a model. You can monitor the total monthly amount of CUH consumption for the Watson Machine Learning service on the Resource usage page on the Manage tab of the project.
-
-"
-CF4254CE9E6D890CCAA2564DA3E9B57071ADE342_14,CF4254CE9E6D890CCAA2564DA3E9B57071ADE342," Track CUH consumption for Watson Machine Learning in a notebook
-
-To calculate capacity unit hours consumed by a notebook, run this code in the notebook:
-
-CP = client.service_instance.get_details()
-CUH = CUH[""entity""][""capacity_units""]/(36001000)
-print(CUH)
-
-For example:
-
-'capacity_units': {'current': 19773430}
-
-19773430/(36001000)
-
-returns 5.49 CUH
-
-For details, see the Service Instances section of the [IBM Watson Machine Learning API](https://cloud.ibm.com/apidocs/machine-learning) documentation.
-
-"
-CF4254CE9E6D890CCAA2564DA3E9B57071ADE342_15,CF4254CE9E6D890CCAA2564DA3E9B57071ADE342," Runtime scope
-
-Environment runtimes are always scoped to an environment template and a user within a project. If different users in a project work with the same environment, each user will get a separate runtime.
-
-If you select to run a version of a notebook as a scheduled job, each scheduled job will always start in a dedicated runtime. The runtime is stopped when the job finishes.
-
-"
-CF4254CE9E6D890CCAA2564DA3E9B57071ADE342_16,CF4254CE9E6D890CCAA2564DA3E9B57071ADE342," Changing the environment of a notebook
-
-You can switch environments for different reasons, for example, you can:
-
-
-
-* Select an environment with more processing power or more RAM
-* Change from using an environment without Spark to a Spark environment
-
-
-
-You can only change the environment of a notebook if the notebook is unlocked. You can change the environment:
-
-
-
-* From the notebook opened in edit mode:
-
-
-
-1. Save your notebook changes.
-2. Click the Notebook Info icon () from the notebook toolbar and then click Environment.
-3. Select another template with the compute power and memory capacity from the list.
-4. Select Change environment. This stops the active runtime and starts the newly selected environment.
-
-
-
-* From the Assets page of your project:
-
-
-
-1. Select the notebook in the Notebooks section, click Actions > Change Environment and select another environment. The kernel must be stopped before you can change the environment. This new runtime environment will be instantiated the next time the notebook is opened for editing.
-
-
-
-* In the notebook job by editing the job template. See [Editing job settings](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.htmlview-job-details).
-
-
-
-"
-CF4254CE9E6D890CCAA2564DA3E9B57071ADE342_17,CF4254CE9E6D890CCAA2564DA3E9B57071ADE342," Next steps
-
-
-
-* [Creating a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/creating-notebooks.html)
-* [Creating your own environment template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-customize-env-definition.html)
-* [Customizing an environment template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/customize-envs.html)
-* [Stopping active notebook runtimes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.htmlstop-active-runtimes)
-
-
-
-"
-CF4254CE9E6D890CCAA2564DA3E9B57071ADE342_18,CF4254CE9E6D890CCAA2564DA3E9B57071ADE342," Learn more
-
-
-
-* [Monitoring account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html)
-
-
-
-Parent topic:[Compute resources for tools](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html)
-"
-C6B0055426C9E91760F4923ED42BE91D64FCA6C8_0,C6B0055426C9E91760F4923ED42BE91D64FCA6C8," Notebooks and scripts
-
-You can create, edit and execute Python and R code using Jupyter notebooks and scripts in code editors, for example the notebook editor or an integrated development environment (IDE), like RStudio.
-
-Notebooks : A Jupyter notebook is a web-based environment for interactive computing. You can use notebooks to run small pieces of code that process your data, and you can immediately view the results of your computation. Notebooks include all of the building blocks you need to work with data, namely the data, the code computations that process the data, the visualizations of the results, and text and rich media to enhance understanding.
-
-Scripts : A script is a file containing a set of commands and comments. The script can be saved and used later to re-execute the saved commands. Unlike in a notebook, the commands in a script can only be executed in a linear fashion.
-
- Notebooks
-
-Required permissions : Editor or Admin role in a project
-
-Tools : Notebook editor
-
-Programming languages : Python and R
-
-Data format : All types
-
-Code support is available for loading and accessing data from project assets for:
-
-: Data assets, such as CSV, JSON and .xlsx and .xls files : Database connections and connected data assets
-
-See [Data load support](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-load-support.html). for the supported file and database types.
-
-Data size : 5 GB. If your files are larger, you must load the data in multiple parts.
-
-"
-C6B0055426C9E91760F4923ED42BE91D64FCA6C8_1,C6B0055426C9E91760F4923ED42BE91D64FCA6C8," Scripts
-
-Required permissions : Editor or Admin role in a project
-
-Tools : RStudio
-
-Programming languages : R
-
-Data format : All types
-
-Code support is available for loading and accessing data from project assets for: : Data assets, such as CSV, JSON and .xlsx and .xls files : Database connections and connected data assets
-
-See [Data load support](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-load-support.html). for the supported file and database types.
-
-Data size : 5 GB. If your files are larger, you must load the data in multiple parts.
-
-"
-C6B0055426C9E91760F4923ED42BE91D64FCA6C8_2,C6B0055426C9E91760F4923ED42BE91D64FCA6C8," Working in the notebook editor
-
-The notebook editor is largely used for interactive, exploratory data analysis programming and data visualization. Only one person can edit a notebook at a time. All other users can access opened notebooks in view mode only, while they are locked.
-
-You can use the preinstalled open source libraries that come with the notebook runtime environments, add your own libraries, and benefit from the IBM libraries provided at no extra cost.
-
-When your notebooks are ready, you can create jobs to run the notebooks directly from the notebook editor. Your job configurations can use environment variables that are passed to the notebooks with different values when the notebooks run.
-
-"
-C6B0055426C9E91760F4923ED42BE91D64FCA6C8_3,C6B0055426C9E91760F4923ED42BE91D64FCA6C8," Working in RStudio
-
-RStudio is an integrated development environment for working with R scripts or Shiny apps. Although the RStudio IDE cannot be started in a Spark with R environment runtime, you can use Spark in your R scripts and Shiny apps by accessing Spark kernels programmatically.
-
-R scripts and Shiny apps can only be created and used in the RStudio IDE. You can't create jobs for R scripts or R Shiny deployments.
-
-"
-C6B0055426C9E91760F4923ED42BE91D64FCA6C8_4,C6B0055426C9E91760F4923ED42BE91D64FCA6C8," Learn more
-
-
-
-* [Quick start: Analyze data in a Jupyter notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-analyze.html)
-* [RStudio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-overview.html)
-* [Notebook editor](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-editor.html)
-
-
-
-Parent topic:[Analyzing data and building models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-science.html)
-"
-A957ADC1E11B5DC6AA15DC17EF6293C40F89FC20_0,A957ADC1E11B5DC6AA15DC17EF6293C40F89FC20," Deployments dashboard
-
-The deployments dashboard provides an aggregate view of deployment activity available to you, across spaces. You can get a broad view of deployment activity such as the status of job runs or a list of online deployments. You can also use filters and views to focus on specific job runs or category of runs such as failed runs. ModelOps or DevOps users can review and monitor the activity for an organization.
-
-"
-A957ADC1E11B5DC6AA15DC17EF6293C40F89FC20_1,A957ADC1E11B5DC6AA15DC17EF6293C40F89FC20," Accessing the Deployments dashboard
-
-From the navigation menu, click Deployments. If you don't have any deployment spaces, you are prompted to create a space. This following illustration shows an example of the Deployments dashboard:
-
-
-
-The dashboard view has two tabs:
-
-
-
-* [Activity](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/operator-view.html?context=cdpaas&locale=enactivity): Use the Activity tab to review all of the deployment activity across spaces. You can sort and filter this view to focus on a particular type of activity, such as failed deployments, or jobs with active runs. You can also review metrics such as the number of deployment spaces with active deployments.
-* [Spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/operator-view.html?context=cdpaas&locale=enspaces): Use the Spaces tab to list all the spaces that you can access. You can read the overview information, such as the number of deployments and job runs in a space, or click a space name to view details and create deployments or jobs.
-
-
-
-"
-A957ADC1E11B5DC6AA15DC17EF6293C40F89FC20_2,A957ADC1E11B5DC6AA15DC17EF6293C40F89FC20," Viewing activity
-
-View the overview information for finished runs, active runs, or online deployments, or drill down to view details.
-
-"
-A957ADC1E11B5DC6AA15DC17EF6293C40F89FC20_3,A957ADC1E11B5DC6AA15DC17EF6293C40F89FC20," Finished runs
-
-The Finished runs section shows activity in jobs over a specified time interval. The default is to view finished jobs for the last 8 hours. It shows jobs that are completed, canceled, or failed across all of your deployment spaces within the specified time frame. Click View finished runs to view a list of runs.
-
-The view provides more detail on the finished runs and a visualization that shows run times.
-
-
-
-Filter the view to focus on a particular type of activity:
-
-
-
-
-
-* Jobs with active runs - Shows jobs that have active runs (running, started, or queued) across all spaces you can access.
-* Active runs - Shows runs that are in the running, started, or queued state across all jobs you can access.
-* Jobs with finished runs - Shows jobs with runs that are completed, canceled, or failed.
-* Finished runs - Shows runs that are completed, canceled, or failed.
-
-
-
-"
-A957ADC1E11B5DC6AA15DC17EF6293C40F89FC20_4,A957ADC1E11B5DC6AA15DC17EF6293C40F89FC20," Active runs
-
-The Active runs section displays runs that are currently running or are in the starting or queued state. Click View active runs to view a list of the runs.
-
-"
-A957ADC1E11B5DC6AA15DC17EF6293C40F89FC20_5,A957ADC1E11B5DC6AA15DC17EF6293C40F89FC20," Online deployments
-
-The Deployments section shows all online and R-Shiny deployments, which are sorted into categories for by status. Click View deployments to view the list of deployments that you can access.
-
-From any view, you can start from the overview and drill down to see the details for a particular job or run. You can also filter the view to focus on a particular type of deployment.
-
-"
-A957ADC1E11B5DC6AA15DC17EF6293C40F89FC20_6,A957ADC1E11B5DC6AA15DC17EF6293C40F89FC20," Viewing spaces
-
-View a list of spaces that you can access, with overview information such as number of deployments and collaborators. Click the name of a space to view details or add assets, and to create new deployments or jobs. Use filters to modify the view from the default list of all spaces to show Active spaces, with deployments or jobs, or Inactive spaces, with no deployments or jobs.
-
-"
-A957ADC1E11B5DC6AA15DC17EF6293C40F89FC20_7,A957ADC1E11B5DC6AA15DC17EF6293C40F89FC20," Next steps
-
-[Use spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html) to organize your deployment activity.
-
-Parent topic:[Deploying and managing models and functions](https://dataplatform.cloud.ibm.com/docs/content/wsj/wmls/wmls-deploy-overview.html)
-"
-BD995B62F35EC624DA9E86F9A3383B73B54D9ED7_0,BD995B62F35EC624DA9E86F9A3383B73B54D9ED7," The parts of a notebook
-
-You can see some information about a notebook before you open it on the Assets page of a project. When you open a notebook in edit mode, you can do much more with the notebook by using multiple menu options, toolbars, an information pane, and by editing and running the notebook cells.
-
-You can view the following information about a notebook by clicking the Notebooks asset type in the Assets page of your project:
-
-
-
-* The name of the notebook
-* The date when the notebook was last modified and the person who made the change
-* The programming language of the notebook
-* Whether the notebook is currently locked
-
-
-
-When you open a notebook in edit mode, the notebook editor includes the following features:
-
-
-
-* [Menu bar and toolbar](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/parts-of-a-notebook.html?context=cdpaas&locale=enmenu-bar-and-toolbar)
-* [Notebook action bar](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/parts-of-a-notebook.html?context=cdpaas&locale=ennotebook-action-bar)
-* [The cells in a Jupyter notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/parts-of-a-notebook.html?context=cdpaas&locale=enthe-cells-in-a-jupyter-notebook)
-
-
-
-* [Jupyter Code cells](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/parts-of-a-notebook.html?context=cdpaas&locale=enjupyter-code-cells)
-* [Jupyter markdown cells](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/parts-of-a-notebook.html?context=cdpaas&locale=enjupyter-markdown-cells)
-"
-BD995B62F35EC624DA9E86F9A3383B73B54D9ED7_1,BD995B62F35EC624DA9E86F9A3383B73B54D9ED7,"* [Raw Jupyter NBConvert cells](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/parts-of-a-notebook.html?context=cdpaas&locale=enraw-jupyter-nbconvert-cells)
-
-
-
-* [Spark job progress bar](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/parts-of-a-notebook.html?context=cdpaas&locale=enspark-job-progress-bar)
-* [Project token for authorization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/token.html)
-
-
-
-
-
-You can select notebook features that affect the way the notebook functions and perform the most-used operations within the notebook by clicking an icon.
-
- Notebook action bar
-
-
-
-You can select features that enhance notebook collaboration. From the action bar, you can:
-
-
-
-* Publish your notebook as a gist or on GitHub.
-* Create a permanent URL so that anyone with the link can view your notebook.
-* Create jobs in which to run your notebook. See [Schedule a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/create-job-nb-editor.html).
-* Download your notebook.
-* Add a project token so that code can access the project resources. See [Add code to set the project token](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/token.html).
-* Generate code snippets to add data from a data asset or a connection to a notebook cell.
-* View your notebook information. You can:
-
-
-
-"
-BD995B62F35EC624DA9E86F9A3383B73B54D9ED7_2,BD995B62F35EC624DA9E86F9A3383B73B54D9ED7,"* Change the name of your notebook by editing it in the Name field.
-* Edit the description of your notebook in the Description field.
-* View the date when the notebook was created.
-* View the environment details and runtime status; you can change the notebook runtime from here. See [Notebook environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html).
-
-
-
-* Save versions of your notebook.
-* Upload assets to the project.
-
-
-
-"
-BD995B62F35EC624DA9E86F9A3383B73B54D9ED7_3,BD995B62F35EC624DA9E86F9A3383B73B54D9ED7," The cells in a Jupyter notebook
-
-A Jupyter notebook consists of a sequence of cells. The flow of a notebook is sequential. You enter code into an input cell, and when you run the cell, the notebook executes the code and prints the output of the computation to an output cell.
-
-You can change the code in an input cell and re-run the cell as often as you like. In this way, the notebook follows a read-evaluate-print loop paradigm. You can choose to use tags to describe cells in a notebook.
-
-The behavior of a cell is determined by a cell’s type. The different types of cells include:
-
-"
-BD995B62F35EC624DA9E86F9A3383B73B54D9ED7_4,BD995B62F35EC624DA9E86F9A3383B73B54D9ED7," Jupyter code cells
-
-Where you can edit and write new code.
-
-
-
-"
-BD995B62F35EC624DA9E86F9A3383B73B54D9ED7_5,BD995B62F35EC624DA9E86F9A3383B73B54D9ED7," Jupyter markdown cells
-
-Where you can document the computational process. You can input headings to structure your notebook hierarchically.
-
-You can also add and edit image files as attachments to the notebook. The markdown code and images are rendered when the cell is run.
-
-
-
-See [Markdown for Jupyter notebooks cheatsheet](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/markd-jupyter.html).
-
-"
-BD995B62F35EC624DA9E86F9A3383B73B54D9ED7_6,BD995B62F35EC624DA9E86F9A3383B73B54D9ED7," Raw Jupyter NBConvert cells
-
-Where you can write output directly or put code that you don’t want to run. Raw cells are not evaluated by the notebook.
-
-
-
-"
-BD995B62F35EC624DA9E86F9A3383B73B54D9ED7_7,BD995B62F35EC624DA9E86F9A3383B73B54D9ED7," Spark job progress bar
-
-When you run code in a notebook that triggers Spark jobs, it is often challenging to determine why your code is not running efficiently.
-
-To help you better understand what your code is doing and assist you in code debugging, you can monitor the execution of the Spark jobs for a code cell.
-
-To enable Spark monitoring for a cell in a notebook:
-
-
-
-* Select the code cell you want to monitor.
-* Click the Enable Spark Monitoring icon () on the notebook toolbar.
-
-
-
-The progress bars you see display the real time runtime progress of your jobs on the Spark cluster. Each Spark job runs on the cluster in one or more stages, where each stage is a list of tasks that can be run in parallel. The monitoring pane can become very large is the Spark job has many stages.
-
-The job monitoring pane also displays the duration of each job and the status of the job stages. A stage can have one of the following statuses:
-
-
-
-* Running: Stage active and started.
-* Completed: Stage completed.
-* Skipped: The results of this stage were cached from a earlier operation and so the task doesn't have to run again.
-* Pending: Stage hasn't started yet.
-
-
-
-Click the icon again to disable monitoring in a cell.
-
-Note: Spark monitoring is currently only supported in notebooks that run on Python.
-
-Parent topic:[Creating notebooks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/creating-notebooks.html)
-"
-B3F8FB433FC6730284E636B068A5DE98C002DABD_0,B3F8FB433FC6730284E636B068A5DE98C002DABD," Planning your notebooks and scripts experience
-
-To make a plan for using Jupyter notebooks and scripts, first understand the choices that you have, the implications of those choices, and how those choices affect the order of implementation tasks.
-
-You can perform most notebook and script related tasks with Editor or Admin role in an analytics project.
-
-Before you start working with notebooks and scripts, you should consider the following questions as most tasks need to be completed in a particular order:
-
-
-
-* Which programming language do you want to work in?
-* What will your notebooks be doing?
-* What libraries do you want to work with?
-* How can you use the notebook or script in IBM watsonx?
-
-
-
-To create a plan for using Jupyter notebooks or scripts, determine which of the following tasks you must complete.
-
-
-
- Task Mandatory? Timing
-
- [Creating a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/planning-for-notebooks.html?context=cdpaas&locale=enproject) Yes This must be your very first task
- [Adding data assets to the project](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/planning-for-notebooks.html?context=cdpaas&locale=endata-assets) Yes Before you begin creating notebooks
- [Picking a programming language](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/planning-for-notebooks.html?context=cdpaas&locale=enprogramming-lang) Yes Before you select the tool
- [Selecting a tool](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/planning-for-notebooks.html?context=cdpaas&locale=enselect-tool) Yes After you've picked the language
-"
-B3F8FB433FC6730284E636B068A5DE98C002DABD_1,B3F8FB433FC6730284E636B068A5DE98C002DABD," [Checking the library packages](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/planning-for-notebooks.html?context=cdpaas&locale=enprogramming-libs) Yes Before you select a runtime environment
- [Choosing an appropriate runtime environment](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/planning-for-notebooks.html?context=cdpaas&locale=enruntime-env) Yes Before you open the development environment
- [Managing the notebooks and scripts lifecycle](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/planning-for-notebooks.html?context=cdpaas&locale=enmanage-lifecycle) No When the notebook or script is ready
- [Uses for notebooks and scripts after creation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/planning-for-notebooks.html?context=cdpaas&locale=enuse-options) No When the notebook is ready
-
-
-
-"
-B3F8FB433FC6730284E636B068A5DE98C002DABD_2,B3F8FB433FC6730284E636B068A5DE98C002DABD," Creating a project
-
-You need to create a project before you can start working in notebooks.
-
-Projects You can create an empty project, one from file, or from URL. In this project:
-
-
-
-* You can use the Juypter Notebook and RStudio.
-* Notebooks are assets in the project.
-* Notebook collaboration is based on locking by user at the project level.
-* R scripts and Shiny apps are not assets in the project.
-* There is no collaboration on R scripts or Shiny apps.
-
-
-
-"
-B3F8FB433FC6730284E636B068A5DE98C002DABD_3,B3F8FB433FC6730284E636B068A5DE98C002DABD," Picking a programming language
-
-You can choose to work in the following languages:
-
-Notebooks : Python and R
-
-Scripts : R scripts and R Shiny apps
-
-"
-B3F8FB433FC6730284E636B068A5DE98C002DABD_4,B3F8FB433FC6730284E636B068A5DE98C002DABD," Selecting a tool
-
-In IBM watsonx, you can work with notebook and scripts in the following tool:
-
-Juypter Notebook editor : In the Juypter Notebook editor, you can create Python or R notebooks. Notebooks are assets in a project. Collaboration is only at the project level. The notebook is locked by a user when opened and can only be unlocked by the same user or a project admin.
-
-RStudio : In RStudio, you can create R scripts and Shiny apps. R scripts are not assets in a project, which means that there is no collaboration at the project level.
-
-"
-B3F8FB433FC6730284E636B068A5DE98C002DABD_5,B3F8FB433FC6730284E636B068A5DE98C002DABD," Checking the library packages
-
-When you open a notebook in a runtime environment, you have access to a large selection of preinstalled data science library packages. Many environments also include libraries provided by IBM at no extra charge, such as the Watson Natural Language Processing library in Python environments, libraries to help you access project assets, or libraries for time series or geo-spatial analysis in Spark environments.
-
-For a list of the library packages and the versions included in an environment template, select the template on the Templates page from the Manage tab on the project's Environments page.
-
-If libraries are missing in a template, you can add them:
-
-Through the notebook or script : You can use familiar package install commands for your environment. For example, in Python notebooks, you can use mamba, conda or pip.
-
-By creating a custom environment template : When you create a custom template, you can create a software customization and add the libraries you want to include. For details, see [Customizing environment templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/customize-envs.html).
-
-"
-B3F8FB433FC6730284E636B068A5DE98C002DABD_6,B3F8FB433FC6730284E636B068A5DE98C002DABD," Choosing a runtime environment
-
-Choosing the compute environment for your notebook depends on the amount of data you want to process and the complexity of the data analysis processes.
-
-Watson Studio offers many default environment templates with different hardware sizes and software configurations to help you quickly get started, without having to create your own templates. These included templates are listed on the Templates page from the Manage tab on the project's Environments page. For more information about the included environments, see [Environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html).
-
-If the available templates don't suit your needs, you can create custom templates and determine the hardware size and software configuration. For details, see [Customizing environment templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/customize-envs.html).
-
-Important: Make sure that the environment has enough memory to store the data that you load to the notebook. Oftentimes this means that the environment must have significantly more memory than the total size of the data loaded to the notebook because some data frameworks, like pandas, can hold multiple copies of the data in memory.
-
-"
-B3F8FB433FC6730284E636B068A5DE98C002DABD_7,B3F8FB433FC6730284E636B068A5DE98C002DABD," Working with data
-
-To work with data in a notebook, you need to:
-
-
-
-* Add the data to your project, which turns the data into a project asset. See [Adding data to a project](https://dataplatform.cloud.ibm.com/docs/content/wsj//manage-data/add-data-project.html) for the different methods for adding data to a project.
-* Use generated code that loads data from the asset to a data structure in your notebook. For a list of the supported data types, see [Data load support](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/data-load-support.html).
-* Write your own code to load data if the data source isn't added as a project asset or support for adding generated code isn't available for the project asset.
-
-
-
-"
-B3F8FB433FC6730284E636B068A5DE98C002DABD_8,B3F8FB433FC6730284E636B068A5DE98C002DABD," Managing the notebooks and scripts lifecycle
-
-After you have created and tested a notebook in your tool, you can:
-
-
-
-* Publish it to a catalog so that other catalog members can use the notebook in their projects. See [Publishing assets from a project into a catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/catalog/publish-asset-project.html).
-* Share a read-only copy outside of Watson Studio so that people who aren't collaborators in your projects can see and use it. See [Sharing notebooks with a URL](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/share-notebooks.html).
-* Publish it to a GitHub repository. See [Publishing notebooks on GitHub](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/github-integration.html).
-* Publish it as a gist. See [Publishing a notebook as a gist](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-gist.html).
-
-
-
-R scripts and Shiny apps can't be published or shared using functionality in a project.
-
-"
-B3F8FB433FC6730284E636B068A5DE98C002DABD_9,B3F8FB433FC6730284E636B068A5DE98C002DABD," Uses for notebooks and scripts after creation
-
-The options for a notebook that is created and ready to use in IBM watsonx include:
-
-
-
-* Running it as a job in a project. See [Creating and managing jobs in a project](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.html).
-* Running it as part of a Watson Pipeline. See [Configuring pipeline nodes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-orchestration-config.html).
-
-To ensure that a notebook can be run as a job or in a pipeline:
-
-
-
-* Ensure that no cells require interactive input by a user.
-* Ensure that the notebook logs enough detailed information to enable understanding the progress and any failures by looking at the log.
-* Use environment variables in the code to access configurations if a notebook or script requires them, for example the input data file or the number of training runs.
-
-
-
-* Using the Watson Machine Learning Python client to build, train and then deploy your models. See [Watson Machine Learning Python client samples and examples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-samples-overview.html).
-* Using the Watson Machine Learning REST API to build, train and then deploy your models.
-
-
-
-R scripts and Shiny apps can only be created and used in the RStudio IDE in IBM watsonx. You can't create jobs for R scripts or R Shiny deployments.
-
-Parent topic:[Notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html)
-"
-1483016BE71021F31B8193239D319F34D8E01C9C_0,1483016BE71021F31B8193239D319F34D8E01C9C," Supported machine learning tools, libraries, frameworks, and software specifications
-
-In IBM Watson Machine Learning, you can use popular tools, libraries, and frameworks to train and deploy machine learning models and functions. The environment for these models and functions is made up of specific hardware and software specifications.
-
-Software specifications define the language and version that you use for a model or function. You can use software specifications to configure the software that is used for running your models and functions. By using software specifications, you can precisely define the software version to be used and include your own extensions (for example, by using conda .yml files or custom libraries).
-
-You can get a list of available software and hardware specifications and then use their names and IDs for use with your deployment. For more information, see [Python client](https://ibm.github.io/watson-machine-learning-sdk/) or [REST API](https://cloud.ibm.com/apidocs/machine-learning).
-
-"
-1483016BE71021F31B8193239D319F34D8E01C9C_1,1483016BE71021F31B8193239D319F34D8E01C9C," Predefined software specifications
-
-You can use popular tools, libraries, and frameworks to train and deploy machine learning models and functions.
-
-This table lists the predefined (base) model types and software specifications.
-
-
-
-List of predefined (base) model types and software specifications
-
- Framework** Versions Model Type Default software specification
-
- AutoAI 0.1 NA autoai-kb_rt22.2-py3.10 autoai-ts_rt22.2-py3.10 hybrid_0.1 autoai-kb_rt23.1-py3.10 autoai-ts_rt23.1-py3.10 autoai-tsad_rt23.1-py3.10 autoai-tsad_rt22.2-py3.10
- Decision Optimization 20.1 do-docplex_20.1 do-opl_20.1 do-cplex_20.1 do-cpo_20.1 do_20.1
- Decision Optimization 22.1 do-docplex_22.1 do-opl_22.1 do-cplex_22.1 do-cpo_22.1 do_22.1
- Hybrid/AutoML 0.1 wml-hybrid_0.1 hybrid_0.1
- PMML 3.0 to 4.3 pmml. (or) pmml..*3.0 - 4.3 pmml-3.0_4.3
- PyTorch 1.12 pytorch-onnx_1.12 pytorch-onnx_rt22.2 runtime-22.2-py3.10 pytorch-onnx_rt22.2-py3.10 pytorch-onnx_rt22.2-py3.10-edt
- PyTorch 2.0 pytorch-onnx_2.0 pytorch-onnx_rt23.1 runtime-23.1-py3.10 pytorch-onnx_rt23.1-py3.10 pytorch-onnx_rt23.1-py3.10-edt pytorch-onnx_rt23.1-py3.10-dist
-"
-1483016BE71021F31B8193239D319F34D8E01C9C_2,1483016BE71021F31B8193239D319F34D8E01C9C," Python Functions 0.1 NA runtime-22.2-py3.10 runtime-23.1-py3.10
- Python Scripts 0.1 NA runtime-22.2-py3.10 runtime-23.1-py3.10
- Scikit-learn 1.1 scikit-learn_1.1 runtime-22.2-py3.10 runtime-23.1-py3.10
- Spark 3.3 mllib_3.3 spark-mllib_3.3
- SPSS 17.1 spss-modeler_17.1 spss-modeler_17.1
- SPSS 18.1 spss-modeler_18.1 spss-modeler_18.1
- SPSS 18.2 spss-modeler_18.2 spss-modeler_18.2
- Tensorflow 2.9 tensorflow_2.9 tensorflow_rt22.2 runtime-22.2-py3.10 tensorflow_rt22.2-py3.10
- Tensorflow 2.12 tensorflow_2.12 tensorflow_rt23.1 runtime-23.1-py3.10 tensorflow_rt23.1-py3.10-dist tensorflow_rt23.1-py3.10-edt tensorflow_rt23.1-py3.10
- XGBoost 1.6 xgboost_1.6 or scikit-learn_1.1 (see notes) runtime-22.2-py3.10 runtime-23.1-py3.10
-
-
-
-When you have assets that rely on discontinued software specifications or frameworks, in some cases the migration is seamless. In other cases, your action is required to retrain or redeploy assets.
-
-
-
-* Existing deployments of models that are built with discontinued framework versions or software specifications are removed on the date of discontinuation.
-* No new deployments of models that are built with discontinued framework versions or software specifications are allowed.
-
-
-
-"
-1483016BE71021F31B8193239D319F34D8E01C9C_3,1483016BE71021F31B8193239D319F34D8E01C9C," Learn more
-
-
-
-* To learn more about how to customize software specifications, see [Customizing with third-party and private Python libraries](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-create-custom-software-spec.html).
-* To learn more about how to use and customize environments, see [Environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html).
-* To learn more about how to use software specifications for deployments, see the following Jupyter notebooks:
-
-
-
-* [Using REST API and cURL](https://github.com/IBM/watson-machine-learning-samples/tree/master/cloud/notebooks/rest_api/curl/deployments)
-* [Using the Python client](https://github.com/IBM/watson-machine-learning-samples/tree/master/cloud/notebooks/python_sdk/deployments)
-
-
-
-
-
-Parent topic:[Frameworks and software specifications](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-manage-frame-and-specs.html)
-"
-6406A3BCB4E9210A9FB00AF248F11F392AF5C205,6406A3BCB4E9210A9FB00AF248F11F392AF5C205," Promoting an environment template to a space
-
-If you created an environment template and associated it with an asset that you promoted to a deployment space, you can also promote the environment template to the same space. Promoting the environment template to the same space enables running the asset in the same environment that was used in the project.
-
-You can only promote environment templates that you created.
-
-To promote an environment template associated with an asset that you promoted to a deployment space:
-
-
-
-1. From the Manage tab of your project on the Environments page under Templates, select the custom environment template and click Actions > Promote.
-2. Select the space that you promoted your asset to as the target deployment space and optionally provide a description and tags.
-
-
-
-Parent topic:[Environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html)
-"
-B2117B2CD0FEA469149B23FACB6A9F7F32905AFD_0,B2117B2CD0FEA469149B23FACB6A9F7F32905AFD," Deploying a prompt template
-
-Deploy a prompt template so you can add it to a business workflow or so you can evaluate the prompt template to measure performance.
-
-"
-B2117B2CD0FEA469149B23FACB6A9F7F32905AFD_1,B2117B2CD0FEA469149B23FACB6A9F7F32905AFD," Before you begin
-
-Save a prompt template that contains at least one variable as a project asset. See [Building reusable prompts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-prompt-variables.html).
-
-"
-B2117B2CD0FEA469149B23FACB6A9F7F32905AFD_2,B2117B2CD0FEA469149B23FACB6A9F7F32905AFD," Promote a prompt template to a deployment space
-
-To deploy a prompt template, complete the following steps:
-
-
-
-1. Open the project containing the prompt template.
-2. Click Promote to space for the template.
-
-
-3. In the Target deployment space field, choose a deployment space or create a new space. Note the following:
-
-The deployment space must be associated with a machine learning instance that is in the same account as the project where the prompt template was created.
-
-If you don't have a deployment space, choose Create a new deployment space, and then follow the steps in [Creating deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-space-create.html).
-
-If you plan to evaluate the prompt template in the space, the recommended Deployment stage type for the space is Production. For more information on evaluating, see [Evaluating a prompt template in a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-eval-prompt-spaces.html).
-
-Note: The deployment space stage cannot be changed after the space is created.
-
-
-
-
-
-1. Tip: Select View deployment in deployment space after creating. Otherwise, you need to take more steps to find your deployed asset.
-2. From the Assets tab of the deployment space, click Deploy. You create an online deployment, which means you can send data to the endpoint and receive a response in real-time.
-
-
-3. Optional: In the Deployment serving name field, add a unique label for the deployment.
-
-"
-B2117B2CD0FEA469149B23FACB6A9F7F32905AFD_3,B2117B2CD0FEA469149B23FACB6A9F7F32905AFD,"The serving name is used in the URL for the API endpoint that identifies your deployment. Adding a name is helpful because the human-readable name that you add replaces a long, system-generated unique ID that is assigned otherwise.
-
-The serving name also abstracts the deployment from its service instance details. Applications refer to this name, which allows for the underlying service instance to be changed without impacting users.
-
-The name can have up to 36 characters. The supported characters are [a-z,0-9,_].
-
-The name must be unique across the IBM Cloud region. You might be prompted to change the serving name if the name you choose is already in use.
-
-
-
-"
-B2117B2CD0FEA469149B23FACB6A9F7F32905AFD_4,B2117B2CD0FEA469149B23FACB6A9F7F32905AFD," Testing the deployed prompt template
-
-After the deployment successfully completes, click the deployment name to view the deployment.
-
-
-
-
-
-* API reference tab includes the API endpoints and code snippets that you need to add this prompt template to an application.
-* Test tab supports testing the prompt template. Enter test data as text, streamed text, or in a JSON file. For details on testing a prompt template, see.
-
-
-
-If the watsonx.governance service is enabled, you also see these tabs:
-
-
-
-* Evaluate provides the tools for evaluating the prompt template in the space. Click Activate to choose the dimensions to evaluate. For details, see [Evaluating prompt templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-eval-prompt.html).
-* AI Factsheets displays all of the metadata that is collected for the prompt template. Use these details for tracking the prompt template for governance and compliance goals. See [Tracking prompt templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-track-prompt-temp.html).
-
-
-
-For more information, see [Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html).
-
-"
-B2117B2CD0FEA469149B23FACB6A9F7F32905AFD_5,B2117B2CD0FEA469149B23FACB6A9F7F32905AFD," Learn more
-
-
-
-* [Tracking prompt templates ](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/xgov-track-prompt-temp.html)
-* [Evaluating a prompt template in a deployment space](https://dataplatform.cloud.ibm.com/docs/content/wsj/model/wos-eval-prompt-spaces.html)
-* [Security and privacy for foundation models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-security.html)
-
-
-
-Parent topic:[Deploying and managing assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/wmls/wmls-deploy-overview.html)
-"
-0A507FF5262BAD7A3FB3F3C478388CFF78949941_0,0A507FF5262BAD7A3FB3F3C478388CFF78949941," Managing feature groups with assetframe-lib for Python (beta)
-
-You can use the assetframe-lib to create, view and edit feature group information for data assets in Watson Studio notebooks.
-
-Feature groups define additional metadata on columns of your data asset that can be used in downstream Machine Learning tasks. See [Managing feature groups](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/feature-group.html) for more information about using feature groups in the UI.
-
-"
-0A507FF5262BAD7A3FB3F3C478388CFF78949941_1,0A507FF5262BAD7A3FB3F3C478388CFF78949941," Setting up the assetframe-lib and ibm-watson-studio-lib libraries
-
-The assetframe-lib library for Python is pre-installed and can be imported directly in a notebook in Watson Studio. However, it relies on the [ibm-watson-studio-lib](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/using-ibm-ws-lib.html) library. The following steps describe how to set up both libraries.
-
-To insert the project token to your notebook:
-
-
-
-1. Click the More icon on your notebook toolbar and then click Insert project token.
-
-If a project token exists, a cell is added to your notebook with the following information:
-
-from ibm_watson_studio_lib import access_project_or_space
-wslib = access_project_or_space({""token"":""""})
-
- is the value of the project token.
-
-If you are told in a message that no project token exists, click the link in the message to be redirected to the project's Access Control page where you can create a project token. You must be eligible to create a project token.
-
-To create a project token:
-
-
-
-1. From the Manage tab, select the Access Control page, and click New access token under Access tokens.
-2. Enter a name, select Editor role for the project, and create a token.
-3. Go back to your notebook, click the More icon on the notebook toolbar and then click Insert project token.
-
-
-
-2. Import assetframe-lib and initialize it with the created ibm-watson-studio-lib instance.
-
-from assetframe_lib import AssetFrame
-AssetFrame._wslib = wslib
-
-
-
-"
-0A507FF5262BAD7A3FB3F3C478388CFF78949941_2,0A507FF5262BAD7A3FB3F3C478388CFF78949941," The assetframe-lib functions and methods
-
-The assetframe-lib library exposes a set of functions and methods that are grouped in the following way:
-
-
-
-* [Creating an asset frame](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=encreate-assetframe)
-* [Creating, retrieving and removing features](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=encreate-features)
-* [Specifying feature attributes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=enspecify-featureatt)
-
-
-
-* [Role](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=enrole)
-* [Description](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=endescription)
-* [Fairness information for favorable and unfavorable outcomes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=enfairnessinfo)
-* [Fairness information for monitored and reference groups](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=enmonitoredreference)
-* [Value descriptions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=envalue-desc)
-"
-0A507FF5262BAD7A3FB3F3C478388CFF78949941_3,0A507FF5262BAD7A3FB3F3C478388CFF78949941,"* [Recipe](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=enrecipe)
-* [Tags](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=entags)
-
-
-
-* [Previewing data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=enpreview-data)
-* [Getting fairness information](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/py_assetframe.html?context=cdpaas&locale=enget-fairness)
-
-
-
-"
-0A507FF5262BAD7A3FB3F3C478388CFF78949941_4,0A507FF5262BAD7A3FB3F3C478388CFF78949941," Creating an asset frame
-
-An asset frame is used to define feature group metadata on an existing data asset or on a pandas DataFrame. You can have exactly one feature group for each asset. If you create an asset frame on a pandas DataFrame, you can store the pandas DataFrame along with the feature group metadata as a data asset in your project.
-
-You can use one of the following functions to create your asset frame:
-
-
-
-* AssetFrame.from_data_asset(asset_name, create_default_features=False)
-
-This function creates a new asset frame wrapping an existing data asset in your project. If there is already a feature group for this asset, for example created in the user interface, it is read from the asset metadata.
-
-
-
-"
-0A507FF5262BAD7A3FB3F3C478388CFF78949941_5,0A507FF5262BAD7A3FB3F3C478388CFF78949941,"Parameters:
-- asset_name: (Required) The name of a data asset in your project.
-- create_default_features: (Optional) Creates features for all columns in the data asset.
-
-
-
-* AssetFrame.from_pandas(name, dataframe, create_default_features=False)
-
-This function creates a new asset frame wrapping a pandas DataFrame.
-
-"
-0A507FF5262BAD7A3FB3F3C478388CFF78949941_6,0A507FF5262BAD7A3FB3F3C478388CFF78949941,"Parameters:
-
-
-
-* name: (Required) The name of the asset frame. This name will be used as the name of the data asset if you store your feature group in your project in a later step.
-* dataframe: (Required) A pandas DataFrame that you want to store along with feature group information.
-* create_default_features: (Optional) Create features for all columns in the dataframe.
-
-Example of creating a asset frame from a pandas DataFrame:
-
- Create an asset frame from a pandas DataFrame and set
- the name of the asset frame.
-af = AssetFrame.from_pandas(dataframe=credit_risk_df, name=""Credit Risk Training Data"")
-
-
-
-
-
-"
-0A507FF5262BAD7A3FB3F3C478388CFF78949941_7,0A507FF5262BAD7A3FB3F3C478388CFF78949941," Creating, retrieving and removing features
-
-A feature defines metadata that can be used by downstream Machine Learning tasks. You can create one feature per column in your data set.
-
-You can use one of the following functions to create, retrieve or remove columns from your asset frame:
-
-
-
-* add_feature(column_name, role='Input')
-
-This function adds a new feature to your asset frame with the given role.
-
-"
-0A507FF5262BAD7A3FB3F3C478388CFF78949941_8,0A507FF5262BAD7A3FB3F3C478388CFF78949941,"Parameters:
-
-
-
-* column_name: (Required) The name of the column to create a feature for.
-* role: (Optional) The role of the feature. It defaults to Input.
-
-Valid roles are:
-
-
-
-* Input: The input for a machine learning model
-
-* Target: The target of a prediction model
-
-* Identifier: The identifier of a row in your data set.
-
-
-
-
-
-* create_default_features()
-
-This function creates features for all columns in your data set. The roles of the features will default to Input.
-* get_features()
-
-This function retrieves all features of the asset frame.
-* get_feature(column_name)
-
-This function retrieves the feature for the given column name.
-
-"
-0A507FF5262BAD7A3FB3F3C478388CFF78949941_9,0A507FF5262BAD7A3FB3F3C478388CFF78949941,"Parameters:
-
-
-
-* column_name: (Required) The string name of the column to create the feature for.
-
-
-
-* get_features_by_role(role)
-
-This function retrieves all features of the dataframe with the given role.
-
-"
-0A507FF5262BAD7A3FB3F3C478388CFF78949941_10,0A507FF5262BAD7A3FB3F3C478388CFF78949941,"Parameters:
-
-
-
-* role: (Required) The role that the features must have. This can be Input, Target or Identifier.
-
-
-
-* remove_feature(feature_or_column_name)
-
-This function removes the feature from the asset frame.
-
-"
-0A507FF5262BAD7A3FB3F3C478388CFF78949941_11,0A507FF5262BAD7A3FB3F3C478388CFF78949941,"Parameters:
-
-
-
-* feature_or_column_name: (Required) A feature or the name of the column to remove the feature for.
-
-
-
-
-
-Example that shows creating features for all columns in the data set and retrieving one of those columns for further specifications:
-
- Create features for all columns in the data set and retrieve a column
- for further specifications.
-af.create_default_features()
-risk_feat = af.get_feature('Risk')
-
-"
-0A507FF5262BAD7A3FB3F3C478388CFF78949941_12,0A507FF5262BAD7A3FB3F3C478388CFF78949941," Specifying feature attributes
-
-Features specify additional metadata on columns that may be used in downstream Machine Learning tasks.
-
-You can use the following function to retrieve the column that the feature is defined for:
-
-
-
-* get_column_name()
-
-This function retrieves the column name that the feature is defined for.
-
-
-
-"
-0A507FF5262BAD7A3FB3F3C478388CFF78949941_13,0A507FF5262BAD7A3FB3F3C478388CFF78949941," Role
-
-The role specifies the intended usage of the feature in a Machine Learning task.
-
-Valid roles are:
-
-
-
-* Input: The feature can be used as an input to a Machine Learning model.
-* Identifier: The feature uniquely identifies a row in the data set.
-* Target: The feature can be used as a target in a prediction algorithm.
-
-
-
-At this time, a feature must have exactly one role.
-
-You can use the following methods to work with the role:
-
-
-
-* set_roles(roles)
-
-This method sets the roles of the feature.
-
-"
-0A507FF5262BAD7A3FB3F3C478388CFF78949941_14,0A507FF5262BAD7A3FB3F3C478388CFF78949941,"Parameters:
-
-
-
-* roles : (Required) The roles to be used. Either as a single string or an array of strings.
-
-
-
-* get_roles()
-
-This method returns all roles of the feature.
-
-
-
-Example that shows getting a feature and setting a role:
-
- Set the role of the feature 'Risk' to 'Target' to use it as a target in a prediction model.
-risk_feat = af.get_feature('Risk')
-risk_feat.set_roles('Target')
-
-"
-0A507FF5262BAD7A3FB3F3C478388CFF78949941_15,0A507FF5262BAD7A3FB3F3C478388CFF78949941," Description
-
-An optional description of the feature. It defaults to None.
-
-You can use the following methods to work with the description.
-
-
-
-* set_description(description)
-
-This method sets the description of the feature.
-
-"
-0A507FF5262BAD7A3FB3F3C478388CFF78949941_16,0A507FF5262BAD7A3FB3F3C478388CFF78949941,"Parameters:
-
-
-
-* description: (Required) Either a string or None to remove the description.
-
-
-
-* get_description()
-
-This method returns the description of the feature.
-
-
-
-"
-0A507FF5262BAD7A3FB3F3C478388CFF78949941_17,0A507FF5262BAD7A3FB3F3C478388CFF78949941," Fairness information for favorable and unfavorable outcomes
-
-You can specify favorable and unfavorable labels for a feature with a Target role.
-
-You can use the following methods to set and retrieve favorable or unfavorable labels.
-
-"
-0A507FF5262BAD7A3FB3F3C478388CFF78949941_18,0A507FF5262BAD7A3FB3F3C478388CFF78949941," Favorable outcomes
-
-You can use the following methods to set and get favorable labels:
-
-
-
-* set_favorable_labels(labels)
-
-This method sets favorable labels for the feature.
-
-"
-0A507FF5262BAD7A3FB3F3C478388CFF78949941_19,0A507FF5262BAD7A3FB3F3C478388CFF78949941,"Parameters:
-
-
-
-* labels: (Required) A string or list of strings with favorable labels.
-
-
-
-* get_favorable_labels()
-
-This method returns the favorable labels of the feature.
-
-
-
-"
-0A507FF5262BAD7A3FB3F3C478388CFF78949941_20,0A507FF5262BAD7A3FB3F3C478388CFF78949941," Unfavorable outcomes
-
-You can use the following methods to set and get unfavorable labels:
-
-
-
-* set_unfavorable_labels(labels)
-
-This method sets unfavorable labels for the feature.
-
-Parameters:
-
-
-
-* labels: (Required) A string or list of strings with unfavorable labels.
-
-
-
-* get_unfavorable_labels()
-
-This method gets the unfavorable labels of the feature.
-
-
-
-Example that shows setting favorable and unfavorable labels:
-
- Set favorable and unfavorable labels for the target feature 'Risk'.
-risk_feat = af.get_feature('Risk')
-risk_feat.set_favorable_labels(""No Risk"")
-risk_feat.set_unfavorable_labels(""Risk"")
-
-"
-0A507FF5262BAD7A3FB3F3C478388CFF78949941_21,0A507FF5262BAD7A3FB3F3C478388CFF78949941," Fairness information for monitored and reference groups
-
-Some columns in your data might by prone to unfair bias. You can specify monitored and reference groups for further usage in Machine Learning tasks. They can be specified for features with the role Input.
-
-You can either specify single values or ranges of numeric values as a string with square brackets and a start and end value, for example [0,15].
-
-You can use the following methods to set and retrieve monitored and reference groups:
-
-
-
-* set_monitored_groups(groups)
-
-This method sets monitored groups for the feature.
-
-Parameters:
-
-
-
-* groups: (Required) A string or list of strings with monitored groups.
-
-
-
-* get_monitored_groups()
-
-This method gets the monitored groups of the feature.
-* set_reference_groups(groups)
-
-This method sets reference groups for the feature.
-
-Parameters:
-
-
-
-* groups: (Required) A string or list of strings with reference groups.
-
-
-
-* get_reference_groups()
-
-This method gets the reference groups of the feature.
-
-
-
-Example that shows setting monitored and reference groups:
-
- Set monitored and reference groups for the features 'Sex' and 'Age'.
-sex_feat = af.get_feature(""Sex"")
-sex_feat.set_reference_groups(""male"")
-sex_feat.set_monitored_groups(""female"")
-
-age_feat = af.get_feature(""Age"")
-age_feat.set_monitored_groups(""[0,25]"")
-age_feat.set_reference_groups(""[26,80]"")
-
-"
-0A507FF5262BAD7A3FB3F3C478388CFF78949941_22,0A507FF5262BAD7A3FB3F3C478388CFF78949941," Value descriptions
-
-You can use value descriptions to specify descriptions for column values in your data.
-
-You can use the following methods to set and retrieve descriptions:
-
-
-
-* set_value_descriptions(value_descriptions)
-
-This method sets value descriptions for the feature.
-
-"
-0A507FF5262BAD7A3FB3F3C478388CFF78949941_23,0A507FF5262BAD7A3FB3F3C478388CFF78949941,"Parameters:
-
-
-
-* value_descriptions: (Required) A Pyton dictionary or list of dictionaries of the following format: {'value': '', 'description': ''}
-
-
-
-* get_value_descriptions()
-
-This method returns all value descriptions of the feature.
-* get_value_description(value)
-
-This method returns the value description for the given value.
-
-Parameters:
-
-
-
-* value: (Required) The value to retrieve the value description for.
-
-
-
-* add_value_description(value, description)
-
-This method adds a value description with the given value and description to the list of value descriptions for the feature.
-
-Parameters:
-
-
-
-* value: (Required) The string value of the value description.
-* description: (Required) The string description of the value description.
-
-
-
-* remove_value_description(value)
-
-This method removes the value description with the given value from the list of value descriptions of the feature.
-
-Parameters:
-
-
-
-* value: (Required) A value of the value description to be removed.
-
-
-
-
-
-Example that shows how to set value descriptions:
-
-plan_feat = af.get_feature(""InstallmentPlans"")
-val_descriptions = [
-{'value': 'stores',
-'description': 'customer has additional business installment plan'},
-{'value': 'bank',
-'description': 'customer has additional personal installment plan'},
-{'value': 'none',
-'description': 'customer has no additional installment plan'}
-]
-plan_feat.set_value_descriptions(val_descriptions)
-
-"
-0A507FF5262BAD7A3FB3F3C478388CFF78949941_24,0A507FF5262BAD7A3FB3F3C478388CFF78949941," Recipe
-
-You can use the recipe to describe how a feature was created, for example with a formula or a code snippet. It defaults to None.
-
-You can use the following methods to work with the recipe.
-
-
-
-* set_recipe(recipe)
-
-This method sets the recipe of the feature.
-
-Parameters:
-
-
-
-* recipe: (Required) Either a string or None to remove the recipe.
-
-
-
-* get_recipe()
-
-This method returns the recipe of the feature.
-
-
-
-"
-0A507FF5262BAD7A3FB3F3C478388CFF78949941_25,0A507FF5262BAD7A3FB3F3C478388CFF78949941," Tags
-
-You can use tags to attach additional labels or information to your feature.
-
-You can use the following methods to work with tags:
-
-
-
-* set_tags(tags)
-
-This method sets the tags of the feature.
-
-Parameters:
-
-
-
-* tags: (Required) Either as a single string or an array of strings.
-
-
-
-* get_tags()
-
-This method returns all tags of the feature.
-
-
-
-"
-0A507FF5262BAD7A3FB3F3C478388CFF78949941_26,0A507FF5262BAD7A3FB3F3C478388CFF78949941," Previewing data
-
-You can preview the data of your data asset or pandas DataFrame with additional information about your features like fairness information.
-
-The data is displayed like a pandas DataFrame with optional header information about feature roles, descriptions or recipes. Fairness information is displayed with coloring for favorable or unfavorable labels, monitored and reference groups.
-
-At this time, you can retrieve up to 100 rows of sample data for a data asset.
-
-Use the following function to preview data:
-
-
-
-* head(num_rows=5, display_options=['role'])
-
-This function returns the first num_rows rows of the data set in a pandas DataFrame.
-
-Parameters:
-
-
-
-* num_rows : (Optional) The number of rows to retrieve.
-* display_options: (Optional) The column header can display additional information for a column in your data set.
-
-Use these options to display feature attributes:
-
-
-
-* role: Displays the role of a feature for this column.
-* description: Displays the description of a feature for this column.
-* recipe: Displays the recipe of a feature for this column.
-
-
-
-
-
-
-
-"
-0A507FF5262BAD7A3FB3F3C478388CFF78949941_27,0A507FF5262BAD7A3FB3F3C478388CFF78949941," Getting fairness information
-
-You can retrieve the fairness information of all features in your asset frame as a Python dictionary. This includes all features containing monitored or reference groups (or both) as protected attributes and the target feature with favorable or unfavorable labels.
-
-If the data type of a column with fairness information is numeric, the values of labels and groups are transformed to numeric values if possible.
-
-Fairness information can be used directly in [AutoAI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-fairness.html) or [AI Fairness 360](https://www.ibm.com/opensource/open/projects/ai-fairness-360/).
-
-You can use the following function to retrieve fairness information of your asset frame:
-
-
-
-* get_fairness_info(target=None)
-
-This function returns a Python dictionary with favorable and unfavorable labels of the target column and protected attributes with monitored and reference groups.
-
-Parameters:
-
-
-
-* target: (Optional) The target feature. If there is only one feature with role Target, it will be used automatically.
-
-Example that shows how to retrieve fairness information:
-
-af.get_fairness_info()
-
-Output showing fairness information:
-
-{
-'favorable_labels': ['No Risk'],
-'unfavorable_labels': ['Risk'],
-'protected_attributes': [
-{'feature': 'Sex',
-'monitored_group': 'female'],
-'reference_group': 'male']},
-{'feature': 'Age',
-'monitored_group': 0.0, 25]],
-'reference_group': 26, 80]]
-}]
-}
-
-
-
-
-
-"
-0A507FF5262BAD7A3FB3F3C478388CFF78949941_28,0A507FF5262BAD7A3FB3F3C478388CFF78949941," Saving feature group information
-
-After you have fully specified or updated your features, you can save the whole feature group definition as metadata for your data asset.
-
-If you created the asset frame from a pandas DataFrame, a new data asset will be created in the project storage with the name of the asset frame.
-
-You can use the following method to store your feature group information:
-
-
-
-* to_data_asset(overwrite_data=False)
-
-This method saves feature group information to the assets metadata. It creates a new data asset, if the asset frame was created from a pandas DataFrame.
-
-Parameters:
-
-
-
-* overwrite_data: (Optional) Also overwrite the asset contents with the data from the asset frame. Defaults to False.
-
-
-
-
-
-"
-0A507FF5262BAD7A3FB3F3C478388CFF78949941_29,0A507FF5262BAD7A3FB3F3C478388CFF78949941," Learn more
-
-See the [Creating and using feature store data](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/e756adfa2855bdfc20f588f9c1986382) sample project in the Samples.
-
-Parent topic:[Loading and accessing data in a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html)
-"
-A724F6E91162B52C519F6887F06DF40626C0F698_0,A724F6E91162B52C519F6887F06DF40626C0F698," Using Python functions to work with Cloud Object Storage
-
-To access and work with data that is in IBM Cloud Object Storage, you can use Python functions from a notebook.
-
-With your IBM Cloud Object Storage credentials, you can access and load data from IBM Cloud Object Storage to use in a notebook. This data can be any object of type file-like-object, for example, byte buffers or string buffers. The data that you upload can reside in a different IBM Cloud Object Storage bucket than the project's bucket.
-
-You can also upload data from a local system into IBM Cloud Object Storage from within a notebook. This data can be a compressed file or Pickle object.
-
-See [Working With IBM Cloud Object Storage In Python](https://medium.com/ibm-data-science-experience/working-with-ibm-cloud-object-storage-in-python-fe0ba8667d5f) for more information.
-
-"
-A724F6E91162B52C519F6887F06DF40626C0F698_1,A724F6E91162B52C519F6887F06DF40626C0F698," Learn more
-
-
-
-* Use [ibm-watson-studio-lib for Python](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html) to interact with Watson Studio projects and project assets. The library also contains functions that simplify fetching files from IBM Cloud Object Storage.
-* [Control access to COS buckets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/cos_buckets.html)
-
-
-
-Parent topic:[Loading and accessing data in a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html)
-"
-F43870B5B6CE4D191950FDAAE6AAFC36F05360C9_0,F43870B5B6CE4D191950FDAAE6AAFC36F05360C9," Compute resource options for RStudio in projects
-
-When you run RStudio in a project, you choose an environment template for the runtime environment. The environment template specifies the type, size, and power of the hardware configuration, plus the software template.
-
-
-
-* [Types of environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-envs.html?context=cdpaas&locale=entypes)
-* [Default environment templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-envs.html?context=cdpaas&locale=endefault)
-* [Compute usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-envs.html?context=cdpaas&locale=encompute)
-* [Runtime scope](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-envs.html?context=cdpaas&locale=enscope)
-* [Changing the runtime](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-envs.html?context=cdpaas&locale=enchange-env)
-
-
-
-"
-F43870B5B6CE4D191950FDAAE6AAFC36F05360C9_1,F43870B5B6CE4D191950FDAAE6AAFC36F05360C9," Types of environments
-
-You can use this type of environment with RStudio:
-
-
-
-* Default RStudio CPU environments for standard workloads
-
-
-
-"
-F43870B5B6CE4D191950FDAAE6AAFC36F05360C9_2,F43870B5B6CE4D191950FDAAE6AAFC36F05360C9," Default environment templates
-
-You can select any of the following default environment templates for RStudio in a project. These default environment templates are listed under Templates on the Environments page on the Manage tab of your project. All environment templates use RStudio with Runtime 23.1 on the R 4.2 programming language.
-
-
-
-Default RStudio environment templates
-
- Name Hardware configuration Local storage CUH rate per hour
-
- Default RStudio L 16 vCPU and 64 GB RAM 2 GB 8
- Default RStudio M 8 vCPU and 32 GB RAM 2 GB 4
- Default RStudio XS 2 vCPU and 8 GB RAM 2 GB 1
-
-
-
-If you don't explicitly select an environment, Default RStudio M is the default. The hardware configuration of the available RStudio environments is preset and cannot be changed.
-
-For compute-intensive processing on a large data set, consider pushing your data processing to Spark from your RStudio session. See [Using Spark in RStudio](https://medium.com/ibm-data-science-experience/access-ibm-analytics-for-apache-spark-from-rstudio-eb11bf8b401b).
-
-To prevent consuming extra capacity unit hours (CUHs), stop all active RStudio runtimes when you no longer need them. See [RStudio idle timeout](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.htmlstop-active-runtimes).
-
-"
-F43870B5B6CE4D191950FDAAE6AAFC36F05360C9_3,F43870B5B6CE4D191950FDAAE6AAFC36F05360C9," Compute usage in projects
-
-RStudio consumes compute resources as CUH from the Watson Studio service in projects.
-
-You can monitor the Watson Studio CUH consumption on the Resource usage page on the Manage tab of your project.
-
-"
-F43870B5B6CE4D191950FDAAE6AAFC36F05360C9_4,F43870B5B6CE4D191950FDAAE6AAFC36F05360C9," Runtime scope
-
-An RStudio environment runtime is always scoped to a project and a user. Each user can only have one RStudio runtime per project at one time. If you start RStudio in a project in which you already have an active RStudio session, the existing active session is disconnected and you can continue working in the new RStudio session.
-
-"
-F43870B5B6CE4D191950FDAAE6AAFC36F05360C9_5,F43870B5B6CE4D191950FDAAE6AAFC36F05360C9," Changing the RStudio runtime
-
-If you notice that processing is very slow, you can restart RStudio and select a larger environment runtime.
-
-To change the RStudio environment runtime:
-
-
-
-1. Save any data from your current session before switching to another environment.
-2. Stop the active RStudio runtime under Tool runtimes on the Environments page on the Manage tab of your project.
-3. Restart RStudio from the Launch IDE menu on your project's action bar and select another environment with the compute power and memory capacity that better meets your data processing requirements.
-
-
-
-"
-F43870B5B6CE4D191950FDAAE6AAFC36F05360C9_6,F43870B5B6CE4D191950FDAAE6AAFC36F05360C9," Learn more
-
-
-
-* [RStudio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-overview.html)
-* [Monitoring account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html)
-
-
-
-Parent topic:[Choosing compute resources for tools](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html)
-"
-BB832BB5CE4B3E6E6272967D547D652B1DAF2C4D_0,BB832BB5CE4B3E6E6272967D547D652B1DAF2C4D," RStudio
-
-R is a popular statistical analysis and machine-learning package that enables data management and includes tests, models, analyses and graphics. RStudio, included in IBM Watson Studio, provides an integrated development environment for working with R scripts.
-
-"
-BB832BB5CE4B3E6E6272967D547D652B1DAF2C4D_1,BB832BB5CE4B3E6E6272967D547D652B1DAF2C4D," Accessing RStudio
-
-RStudio is integrated in IBM Watson Studio projects and can be launched after you create a project. With RStudio integration in projects, you can access and use the data files that are stored in the IBM Cloud Object Storage bucket associated with your project in RStudio.
-
-To start RStudio in your project:
-
-
-
-1. Click RStudio from the Launch IDE menu on your project's action bar.
-2. Select an environment.
-3. Click Launch.
-
-The environment runtime is initiated and the development environment opens.
-
-
-
-Sometimes, when you start an RStudio session, you might experience a corrupted RStudio state from a previous session and your session will not start. If this happens, select to reset the workspace at the time you select the RStudio environment and then start the RStudio IDE again. By resetting the workspace, RStudio is started using the default settings with a clean RStudio workspace.
-
-"
-BB832BB5CE4B3E6E6272967D547D652B1DAF2C4D_2,BB832BB5CE4B3E6E6272967D547D652B1DAF2C4D," Working with data files
-
-In RStudio, you can work with data files from different sources:
-
-
-
-* Files in the RStudio server file structure, which you can view by clicking Files in the bottom right section of RStudio. This is where you can create folders, upload files from your local system, and delete files.
-
-To access these files in R, you need to set the working directory to the directory with the files. You can do this by navigating to the directory with the files and clicking More > Set as Working Directory.
-
-Be aware that files stored in the Home directory of your RStudio instance are persistent within your instance only and cannot be shared across environments nor within your project.
-
-
-
-Video disclaimer: Some minor steps and graphical elements in the videos on this page may differ from your deployment.
-
-Watch this video to see how to load data to RStudio.
-
-This video provides a visual method to learn the concepts and tasks in this documentation.
-
-
-
-* Project data assets that are stored in the IBM Cloud Object Storage bucket associated with your project. When RStudio is launched, the IBM Cloud Object Storage bucket content is mounted to the project-objectstorage directory in your RStudio Home directory.
-
-If you want data files to appear in the project-objectstorage directory, you must add them as assets to your project. See [Adding files as project assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-overview.html?context=cdpaas&locale=enadding-files).
-
-If new data assets are added to the project while you are in RStudio and you want to access them, you need to refresh the project-objectstorage folder.
-
-See how to [read and write data to and from Cloud Object Storage](https://medium.com/ibm-data-science-experience/read-and-write-data-to-and-from-bluemix-object-storage-in-rstudio-276282347ce1).
-* Data stored in a database system.
-
-Watch this video to see how to connect to external data sources in RStudio.
-
-This video provides a visual method to learn the concepts and tasks in this documentation.
-"
-BB832BB5CE4B3E6E6272967D547D652B1DAF2C4D_3,BB832BB5CE4B3E6E6272967D547D652B1DAF2C4D,"* Files stored in local storage that are mounted to /home/rstudio. The home directory has a storage limitation of 2 GB and is used to store the RStudio session workspace. Note that you are allocated 2 GB for your home directory storage across all of your projects, irrespective of whether you use RStudio in each project. As a consequence, you should only store R script files and small data files in the home directory. It is not intended for large data files or large generated output. All large data files should be uploaded as project assets, which are mounted to the project-objectstorage directory from where you can access them.
-
-
-
-"
-BB832BB5CE4B3E6E6272967D547D652B1DAF2C4D_4,BB832BB5CE4B3E6E6272967D547D652B1DAF2C4D," Adding files as project assets
-
-If you worked with data files and want them appear in the project-objectstorage directory, you must add them to your project as data assets. To add these files as data assets to the project:
-
-
-
-1. On the Assets page of the project, click the Upload asset to project icon () and select the Files tab.
-2. Select the files you want to add to the project as assets.
-3. From the Actions list, select Add as data asset and apply your changes.
-
-
-
-"
-BB832BB5CE4B3E6E6272967D547D652B1DAF2C4D_5,BB832BB5CE4B3E6E6272967D547D652B1DAF2C4D," Capacity consumption and runtime scope
-
-An RStudio environment runtime is always scoped to an environment template and an RStudio session user. Only one RStudio session can be active per Watson Studio user at one time. If you started RStudio in another project, you are asked if you want to stop that session and start a new RStudio session in the context of the current project you're working in.
-
-Runtime usage is calculated by the number of capacity unit hours (CUHs) consumed by the active environment runtime. The CUHs consumed by an active RStudio runtime in a project are billed to the account of the project creator. See [Capacity units per hour billing for RStudio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/track-runtime-usage.htmlrstudio).
-
-You can see which RStudio environment runtimes are active on the project's Environments page. You can stop your runtime from this page.
-
-Remember: The CUH counter continues to increase while the runtime is active so stop the runtime if you aren't using RStudio. If you don't explicitly stop the runtime, it is stopped for you after an idle time of 2 hour. During this idle time, you will continue to consume CUHs for which you are billed. Long compute-intensive jobs are hard stopped after 24 hours.
-
-Watch this video to see an overview of the RStudio IDE.
-
-Video disclaimer: Some minor steps and graphical elements in this video might differ from your platform.
-
-This video provides a visual method to learn the concepts and tasks in this documentation.
-
-
-
-* Transcript
-
-Synchronize transcript with video
-
-
-
- Time Transcript
-
- 00:00 This video is a quick tour of the RStudio integrated development environment inside a Watson Studio project.
- 00:07 From any project, you can launch the RStudio IDE.
- 00:12 RStudio is a free and open-source integrated development environment for R, a programming language for statistical computing and graphics.
- 00:22 In RStudio, there are four panes: the source pane, the console pane, the environment pane, and the files pane.
-"
-BB832BB5CE4B3E6E6272967D547D652B1DAF2C4D_6,BB832BB5CE4B3E6E6272967D547D652B1DAF2C4D," 00:32 The panes help you organize your work and separate the different tasks you'll do with R.
- 00:39 You can drag to resize the panes or use the icons to minimize and maximize a pane.
- 00:47 You can also rearrange the panes in global options.
- 00:53 The console pane is your interface to R.
- 00:56 It's exactly what you would see in terminal window or user interfaces bundled with R.
- 01:01 The console pane does have some added features that you'll find helpful.
- 01:06 To run code from the console, just type the command.
- 01:11 Start typing a command to see a list of commands that begin with the letters you started typing.
- 01:17 Highlight a command in the list and press ""Enter"" to insert it.
- 01:24 Use the up arrow to scroll through the commands you've previously entered.
- 01:31 As you issue more commands, you can scroll through the results.
- 01:36 Use the menu option to clear the console.
- 01:39 You can also use tab completion to see a list of the functions, objects, and data sets beginning with that text.
- 01:47 And use the arrows to highlight a command to see help for that command.
- 01:51 When you're ready, just press ""Enter"" to insert it.
- 01:55 Next, you'll see a list of the options for that command in the current context.
- 01:59 For example, the first argument for the read.csv function is the file.
- 02:05 RStudio will display a list of the folders and files in your working directory, so you can easily locate the file to include with the argument.
- 02:16 Lastly, if you use the tab completion with a function that expects a package name, such as a library, you'll see a list of all the installed packages.
- 02:28 Next, let's look at the source pane, which is simply a text editor for you to write your R code.
- 02:34 The text editor supports R command files and plain text, as well as several other languages, and includes language-specific highlighting in context.
- 02:47 And you'll notice the tab completion is also available in the text editor.
-"
-BB832BB5CE4B3E6E6272967D547D652B1DAF2C4D_7,BB832BB5CE4B3E6E6272967D547D652B1DAF2C4D," 02:53 From the text editor, you can run a single line of code, or select several lines of code to run, and you'll see the results in the console pane.
- 03:08 You can save your code as an R script to share or run again later.
- 03:15 The view function opens a new tab that shows the dataframe in spreadsheet format.
- 03:22 Or you can display it in its own window.
- 03:25 Now, you can scroll through the data, sort the columns, search for specific values, or filter the rows using the sliders and drop-down menus.
- 03:41 The environment pane contains an ""Environment"" tab, a ""History"" tab, and a ""Connections"" tab, and keeps track of what's been happening in this R session.
- 03:51 The ""Environment"" tab contains the R objects that exist in your global environment, created during the session.
- 03:58 So, when you create a new object in the console pane, it automatically displays in the environment pane.
- 04:04 You can also view the objects related to a specific package, and even see the source code for a specific function.
- 04:12 You can also see a list of the data sets, expand a data set to inspect its individual elements, and view them in the source pane.
- 04:22 You can save the contents of an environment as an .RData file, so you can load that .RData file at a later date.
- 04:29 From here, you can also clear the objects from the workspace.
- 04:33 If you want to delete specific items, use the grid view.
- 04:38 For example, you can easily find large items to delete to free up memory in your R session.
- 04:45 The ""Environment"" tab also allows you to import a data set.
- 04:50 You can see a preview of the data set and change options before completing the import.
- 04:55 The imported data will display in the source pane.
- 05:00 The ""History"" tab displays a history of each of the commands that you run at the command line.
- 05:05 Just like the ""Environment"" tab, you can save the history as an .Rhistory file, so you can open it at a later date.
- 05:11 And this tab has the same options to clear all of the history and individual entries in the history.
-"
-BB832BB5CE4B3E6E6272967D547D652B1DAF2C4D_8,BB832BB5CE4B3E6E6272967D547D652B1DAF2C4D," 05:17 Select a command and send it to the console to rerun the command.
- 05:23 You can also copy a command to the source pane to include it in a script.
- 05:31 On the ""Connections"" tab, you can create a new connection to a data source.
- 05:36 The choices in this dialog box are dependent upon which packages you have installed.
- 05:41 For example, a ""BLUDB"" connection allows you to connect to a Db2 Warehouse on Cloud service.
- 05:49 The files pane contains the ""Files"", ""Plots"", ""Packages"", ""Help"", and ""Viewer"" tabs.
- 05:55 The ""Files"" tab displays the contents of your working directory.
- 05:59 RStudio will load files from this directory and save files to this directory.
- 06:04 Navigate to a file and click the file to view it in the source pane.
- 06:09 From here, you can create new folders and upload files, either by selecting individual files to upload or selecting a .zip file containing all of the files to upload.
- 06:25 From here, you can also delete and rename files and folders.
- 06:30 In order to access the file in R, you need to set the data folder as a working directory.
- 06:36 You'll see that the setwd command was executed in the console.
- 06:43 You can access the data assets in your project by opening the project folder.
- 06:50 The ""Plots"" tab displays the results of R's plot functions, such as: plot, hist, ggplot, and xyplot
- 07:00 You can navigate through different plots using the arrows or zoom to see a graph full screen.
- 07:09 You can also delete individual plots or all plots from here.
- 07:13 Use the ""Export"" option to save the plot as a graphic or print file at the specified resolution.
- 07:21 The ""Packages"" tab displays the packages you currently have installed in your system library.
- 07:26 The search bar lets you quickly find a specific package.
- 07:30 The checked packages are the packages that were already loaded, using the library command, in the current session.
- 07:38 You can check additional packages from here to load them or uncheck packages to detach them from the current session.
-"
-BB832BB5CE4B3E6E6272967D547D652B1DAF2C4D_9,BB832BB5CE4B3E6E6272967D547D652B1DAF2C4D," 07:45 The console pane displays the results.
- 07:48 Use the ""X"" next to a package name to remove it from the system library.
- 07:54 You can also find new packages to install or update to the latest version of any package.
- 08:03 Clicking any of the packages opens the ""Help"" tab with additional information for that package.
- 08:09 From here, you can search for functions to get more help.
- 08:13 And from the console, you can use the help command, or simply type a question mark followed by the function, to get help with that function.
- 08:21 The ""Viewer"" tab displays HTML output.
- 08:25 Some R functions generate HTML to display reports and interactive graphs.
- 08:31 The R Markdown package creates reports that you can view in the ""Viewer"" tab.
- 08:38 The Shiny package creates web apps that you can view in the ""Viewer"" tab.
- 08:44 And other packages build on the htmlwidgets framework and include Java-based, interactive visualizations.
- 08:54 You can also publish the visualization to the free site, called ""RPubs.com"".
- 09:01 This is been a brief overview of the RStudio IDE.
- 09:05 Find more videos on RStudio in the Cloud Pak for Data as a Service documentation.
-
-
-
-
-
-"
-BB832BB5CE4B3E6E6272967D547D652B1DAF2C4D_10,BB832BB5CE4B3E6E6272967D547D652B1DAF2C4D," Learn more
-
-
-
-* [RStudio environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-envs.html)
-* [Using Spark in RStudio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-spark.html)
-
-
-
-Parent topic:[Notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html)
-"
-2BCC4276EA71978FFA874621715BE92A9667390F_0,2BCC4276EA71978FFA874621715BE92A9667390F," Using Spark in RStudio
-
-Although the RStudio IDE cannot be started in a Spark with R environment runtime, you can use Spark in your R scripts and Shiny apps by accessing Spark kernels programmatically. RStudio uses the sparklyr package to connect to Spark from R. The sparklyr package includes a dplyr interface to Spark data frames as well as an R interface to Spark’s distributed machine learning pipelines.
-
-You can connect to Spark from RStudio:
-
-
-
-* By connecting to a Spark kernel that runs locally in the RStudio container in IBM Watson Studio
-
-
-
-RStudio includes sample code snippets that show you how to connect to a Spark kernel in your applications for both methods.
-
-To use Spark in RStudio after you have launched the IDE:
-
-
-
-1. Locate the ibm_sparkaas_demos directory under your home directory and open it. The directory contains the following R scripts:
-
-
-
-* A readme with details on the included R sample scripts
-* spark_kernel_basic_local.R includes sample code of how to connect to a local Spark kernel
-* spark_kernel_basic_remote.R includes sample code of how to connect to a remote Spark kernel
-* The files sparkaas_flights.Rand sparkaas_mtcars.R are two examples of how to use Spark in a small sample application
-
-
-
-2. Use the sample code snippets in your R scripts or applications to help you get started using Spark.
-
-
-
-"
-2BCC4276EA71978FFA874621715BE92A9667390F_1,2BCC4276EA71978FFA874621715BE92A9667390F," Connecting to Spark from RStudio
-
-To connect to Spark from RStudio using the Sparklyr R package, you need a Spark with R environment. You can either use the default Spark with R environment that is provided or create a custom Spark with R environment. To create a custom environment, see [Creating environment templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-customize-env-definition.html).
-
-Follow these steps after you launch RStudio in an RStudio environment:
-
-Use the following sample code to get a listing of the Spark environment details and to connect to a Spark kernel from your RStudio session:
-
- load spark R packages
-library(ibmwsrspark)
-library(sparklyr)
-
- load kernels
-kernels <- load_spark_kernels()
-
- display kernels
-display_spark_kernels()
-
- get spark kernel Configuration
-
-conf <- get_spark_config(kernels[1])
- Set spark configuration
-conf$spark.driver.maxResultSize <- ""1G""
- connect to Spark kernel
-
-sc <- spark_connect(config = conf)
-
-Then to disconnect from Spark, use:
-
- disconnect
-spark_disconnect(sc)
-
-Examples of these commands are provided in the readme under /home/wsuser/ibm_sparkaas_demos.
-
-Parent topic:[RStudio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/rstudio-overview.html)
-"
-42F34465DD884E8110BB08A708A138532999714F_0,42F34465DD884E8110BB08A708A138532999714F," Compute resource options for AutoAI experiments in projects
-
-When you run an AutoAI experiment in a project, the type, size, and power of the hardware configuration available depend on the type of experiment you build.
-
-
-
-* [Default hardware configurations](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-autoai.html?context=cdpaas&locale=endefault)
-* [Compute usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-autoai.html?context=cdpaas&locale=encompute)
-
-
-
-"
-42F34465DD884E8110BB08A708A138532999714F_1,42F34465DD884E8110BB08A708A138532999714F," Default hardware configurations
-
-The type of hardware configuration available for your AutoAI experiment depends on the type of experiment you are building. A standard AutoAI experiment, with a single data source, has a single, default hardware configuration. An AutoAI experiment with joined data has options for increasing computational power.
-
-"
-42F34465DD884E8110BB08A708A138532999714F_2,42F34465DD884E8110BB08A708A138532999714F," Capacity units per hour for AutoAI experiments
-
-
-
-Hardware configurations available in projects for AutoAI with a single data source
-
- Capacity type Capacity units per hour
-
- 8 vCPU and 32 GB RAM 20
-
-
-
-The runtimes for AutoAI stop automatically when processing is complete.
-
-"
-42F34465DD884E8110BB08A708A138532999714F_3,42F34465DD884E8110BB08A708A138532999714F," Compute usage in projects
-
-AutoAI consumes compute resources as CUH from the Watson Machine Learning service.
-
-You can monitor the total monthly amount of CUH consumption for the Watson Machine Learning service on the Resource usage page on the Manage tab of your project.
-
-"
-42F34465DD884E8110BB08A708A138532999714F_4,42F34465DD884E8110BB08A708A138532999714F," Learn more
-
-
-
-* [AutoAI](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-overview.html)
-* [Watson Machine Learning service](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html)
-* [Compute resource options for assets and deployments in spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-cuh-deploy-spaces.html)
-* [Monitoring account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html)
-
-
-
-Parent topic:[Choosing compute resources for tools](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html)
-"
-B08A6B7A0F11FD3AB62A14F44FD4E1A771174C61_0,B08A6B7A0F11FD3AB62A14F44FD4E1A771174C61," Compute options for model training and scoring
-
-When you train or score a model or function, you choose the type, size, and power of the hardware configuration that matches your computing needs.
-
-
-
-* [Default hardware configurations](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-cuh-deploy-spaces.html?context=cdpaas&locale=endefault)
-* [Compute usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-cuh-deploy-spaces.html?context=cdpaas&locale=encompute)
-
-
-
-"
-B08A6B7A0F11FD3AB62A14F44FD4E1A771174C61_1,B08A6B7A0F11FD3AB62A14F44FD4E1A771174C61," Default hardware configurations
-
-Choose the hardware configuration for your Watson Machine Learning asset when you train the asset or when you deploy it.
-
-
-
-Hardware configurations available for training and deploying assets
-
- Capacity type Capacity units per hour
-
- Extra small: 1x4 = 1 vCPU and 4 GB RAM 0.5
- Small: 2x8 = 2 vCPU and 8 GB RAM 1
- Medium: 4x16 = 4 vCPU and 16 GB RAM 2
- Large: 8x32 = 8 vCPU and 32 GB RAM 4
- Extra large: 16x64 = 16 vCPU and 64 GB RAM 8
-
-
-
-"
-B08A6B7A0F11FD3AB62A14F44FD4E1A771174C61_2,B08A6B7A0F11FD3AB62A14F44FD4E1A771174C61," Compute usage for Watson Machine Learning assets
-
-Deployments and scoring consume compute resources as capacity unit hours (CUH) from the Watson Machine Learning service.
-
-To check the total monthly CUH consumption for your Watson Machine Learning services, from the navigation menu, select Administration -> Environment runtimes.
-
-Additionally, you can monitor the monthly resource usage in each specific deployment space. To do that, from your deployment space, go to the Manage tab and then select Resource usage. The summary shows CUHs used by deployment type: separately for AutoAI deployments, Federated Learning deployments, batch deployments, and online deployments.
-
-"
-B08A6B7A0F11FD3AB62A14F44FD4E1A771174C61_3,B08A6B7A0F11FD3AB62A14F44FD4E1A771174C61," Compute usage details
-
-The rate of consumed CUHs is determined by the computing requirements of your deployments. It is based on such variables as:
-
-
-
-* type of deployment
-* type of framework
-* complexity of scoring Scaling a deployment to support more concurrent users and requests also increases CUH consumption. As many variables affect resource consumption for a deployment, it is recommended that you run tests on your models and deployments to analyze CUH consumption.
-
-
-
-The way that online deployments consume capacity units is based on framework. For some frameworks, CUHs are charged for the number of hours that the deployment asset is active in a deployment space. For example, SPSS models in online deployment mode that run for 24 hours a day, seven days a week, consume CUHs and are charged for that period. An active online deployment has no idle time. For other frameworks, CUHs are charged according to scoring duration. Refer to the CUH consumption table for details on how CUH usage is calculated.
-
-Compute time is calculated to the millisecond, with a 1-minute minimum for each distinct operation. For example:
-
-
-
-* A training run that takes 12 seconds is billed as 1 minute
-* A training run that takes 83.555 seconds is billed exactly as calculated
-
-
-
-"
-B08A6B7A0F11FD3AB62A14F44FD4E1A771174C61_4,B08A6B7A0F11FD3AB62A14F44FD4E1A771174C61," CUH consumption by deployment and framework type
-
-CUH consumption is calculated by using these formulas:
-
-
-
- Deployment type Framework CUH calculation
-
- Online AutoAI, AI function, SPSS, Scikit-Learn custom libraries, Tensorflow, RShiny Deployment active duration * Number of nodes * CUH rate for capacity type framework
- Online Spark, PMML, Scikit-Learn, Pytorch, XGBoost Score duration in seconds * Number of nodes * CUH rate for capacity type framework
- Batch all frameworks Job duration in seconds * Number of nodes * CUH rate for capacity type framework
-
-
-
-"
-B08A6B7A0F11FD3AB62A14F44FD4E1A771174C61_5,B08A6B7A0F11FD3AB62A14F44FD4E1A771174C61," Learn more
-
-
-
-* [Deploying assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/wmls/wmls-deploy-overview.html)
-* [Watson Machine Learning service](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html)
-* [Monitoring account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html)
-
-
-
-Parent topic:[Managing predictive deployments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-deploy-general.html)
-"
-5B66F4F408827FE62B0584882D7F25FB9C6CA839_0,5B66F4F408827FE62B0584882D7F25FB9C6CA839," Compute resource options for Decision Optimization
-
-When you run a Decision Optimization model, you use the Watson Machine Learning instance that is linked to the deployment space associated with your experiment.
-
-
-
-* [Default hardware configurations](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-decisionopt.html?context=cdpaas&locale=endefault)
-* [Compute usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-decisionopt.html?context=cdpaas&locale=encompute)
-
-
-
-"
-5B66F4F408827FE62B0584882D7F25FB9C6CA839_1,5B66F4F408827FE62B0584882D7F25FB9C6CA839," Default hardware configuration
-
-The following hardware configuration is used by default when running models in an experiment:
-
-
-
- Capacity type Capacity units per hour (CUH)
-
- 2 vCPU and 8 GB RAM 6
-
-
-
-The CUH is consumed only when the model is running and not when you are adding data or editing your model.
-
-You can also switch to any other experiment environment as required. See the [Decision Optimization plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.htmldo) for a list of environments for Decision Optimization experiments.
-
-For more information on how to configure Decision Optimization experiment environments, see [Configuring environments](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/configureEnvironments.html).
-
-"
-5B66F4F408827FE62B0584882D7F25FB9C6CA839_2,5B66F4F408827FE62B0584882D7F25FB9C6CA839," Compute usage in projects
-
-Decision Optimization experiments consume compute resources as CUH from the Watson Machine Learning service.
-
-You can monitor the total monthly amount of CUH consumption for the Watson Machine Learning service on the Resource usage page on the Manage tab of your project.
-
-"
-5B66F4F408827FE62B0584882D7F25FB9C6CA839_3,5B66F4F408827FE62B0584882D7F25FB9C6CA839," Learn more
-
-
-
-* [Decision Optimization](https://dataplatform.cloud.ibm.com/docs/content/DO/DOWS-Cloud_home.html)
-* [Watson Machine Learning plans and compute usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html)
-* [Monitoring account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html)
-
-
-
-Parent topic:[Choosing compute resources for tools](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html)
-"
-17AC1BECAE0867381BC236D4C0CC8FC4B8921A0A_0,17AC1BECAE0867381BC236D4C0CC8FC4B8921A0A," Compute resource options for Tuning Studio experiments in projects
-
-A Tuning Studio experiment has a single hardware configuration.
-
-The following table shows the hardware configuration that is used when tuning foundation models in a tuning experiment.
-
-
-
-Hardware configuration available in projects for Tuning Studio
-
- Capacity type Capacity units per hour
-
-
-
-
-NVIDIA A100 80GB GPU|43|
-
-"
-17AC1BECAE0867381BC236D4C0CC8FC4B8921A0A_1,17AC1BECAE0867381BC236D4C0CC8FC4B8921A0A," Compute usage in projects
-
-Tuning Studio consumes compute resources as CUH from the Watson Machine Learning service.
-
-You can monitor the total monthly amount of CUH consumption for the Watson Machine Learning service on the Resource usage page on the Manage tab of your project.
-
-"
-17AC1BECAE0867381BC236D4C0CC8FC4B8921A0A_2,17AC1BECAE0867381BC236D4C0CC8FC4B8921A0A," Learn more
-
-
-
-* [Tuning Studio](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-tuning-studio.html)
-* [Watson Machine Learning plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html)
-* [Compute resource options for assets and deployments in spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/run-cuh-deploy-spaces.html)
-* [Monitoring account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html)
-
-
-
-Parent topic:[Choosing compute resources for tools](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html)
-"
-9DAE797269714235C8D9287B5D358BCF72E2C9F5_0,9DAE797269714235C8D9287B5D358BCF72E2C9F5," SPSS predictive analytics algorithms for scoring
-
-A PMML-compliant scoring engine supports:
-
-
-
-* PMML-compliant models (4.2 and earlier versions) produced by various vendors, except for Baseline Model, ScoreCard Model, Sequence Model, and Text Model. Refer to the [Data Mining Group (DMG) web site](http://www.dmg.org/) for a list of supported models.
-* Non-PMML models produced by IBM SPSS products: Discriminant and Bayesian networks
-* PMML 4.2 transformations completely
-
-
-
-Different kinds of models can produce various scoring results. For example:
-
-
-
-* Classification models (those with a categorical target: Bayes Net, General Regression, Mining, Naive Bayes, k-Nearest Neighbor, Neural Network, Regression, Ruleset, Support Vector Machine, and Tree) produce:
-
-
-
-* Predicted values
-* Probabilities
-* Confidence values
-
-
-
-* Regression models (those with a continuous target: General Regression, Mining, k-Nearest Neighbor, Neural Network, Regression, and Tree) produce predicted values; some also produce standard errors.
-* Cox regression (in General Regression) produces predicted survival probability and cumulative hazard values.
-* Tree models also produce Node ID.
-* Clustering models produce Cluster ID and Cluster affinity.
-* Anomaly Detection (represented as Clustering) produces anomaly index and top reasons.
-* Association models produce Consequent, Rule ID, and confidence for top matching rules.
-
-
-
-"
-9DAE797269714235C8D9287B5D358BCF72E2C9F5_1,9DAE797269714235C8D9287B5D358BCF72E2C9F5,"Python example code:
-
-from spss.ml.score import Score
-
-with open(""linear.pmml"") as reader:
-pmmlString = reader.read()
-
-score = Score().fromPMML(pmmlString)
-scoredDf = score.transform(data)
-scoredDf.show()
-
-Parent topic:[SPSS predictive analytics algorithms](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-algorithms.html)
-"
-2A3F647A7F4EB8FB4270D4E78245F18BFDE29AD8_0,2A3F647A7F4EB8FB4270D4E78245F18BFDE29AD8," Sharing notebooks with a URL
-
-You can create a URL to share the last saved version of a notebook on social media or with people outside of Watson Studio. The URL shows a read-only view of the notebook. Anyone who has the URL can view or download the notebook.
-
-"
-2A3F647A7F4EB8FB4270D4E78245F18BFDE29AD8_1,2A3F647A7F4EB8FB4270D4E78245F18BFDE29AD8,"Required permissions:
-
-You must have the Admin or Editor role in the project to share a notebook URL. The shared notebook shows the author of the shared version and when the notebook version was last updated.
-
-"
-2A3F647A7F4EB8FB4270D4E78245F18BFDE29AD8_2,2A3F647A7F4EB8FB4270D4E78245F18BFDE29AD8," Sharing a notebook URL
-
-To share a notebook URL:
-
-
-
-1. Open the notebook in edit mode.
-2. If necessary, add code to [hide sensitive code cells](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/hide_code.html).
-3. Create a saved version of the notebook by clicking File > Save Version.
-4. Click the Share icon () from the notebook action bar.
-
-
-5. Select to share the link.
-6. Choose a sharing option:
-
-
-
-* Choose Only text and output to hide all code cells.
-* Choose All content excluding sensitive code cells to hide code cells that you marked as sensitive.
-* Choose All content, including code to show everything, even code cells that you marked as sensitive. Make sure that you remove your credential and other sensitive information before you choose this option and every time before you save a new version of the notebook.
-
-
-
-7. Copy the link or choose a social media site on which to share the URL.
-
-
-
-Note: The URL remains valid while the project and notebook exist and while the notebook is shared. If you [unshare the notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/share-notebooks.html?context=cdpaas&locale=enunsharing), the URL becomes invalid. When you unshare, and then re-share the notebook, the URL will be the same again.
-
-"
-2A3F647A7F4EB8FB4270D4E78245F18BFDE29AD8_3,2A3F647A7F4EB8FB4270D4E78245F18BFDE29AD8," Updating a shared notebook
-
-To update a shared notebook:
-
-
-
-1. Open the notebook in edit mode.
-2. Make changes to the notebook.
-3. Create a new version of the notebook by clicking File > Save Version.
-
-
-
-Note: Clicking File > Save saves your changes but it doesn't create a new version of the notebook; the shared URL still points to the older version of the notebook.
-
-"
-2A3F647A7F4EB8FB4270D4E78245F18BFDE29AD8_4,2A3F647A7F4EB8FB4270D4E78245F18BFDE29AD8," Unsharing a notebook URL
-
-To unshare a notebook URL:
-
-
-
-1. Open the notebook in edit mode.
-2. Click the Share icon () from the notebook action bar.
-
-
-3. Unselect the Share with anyone who has the link toggle.
-
-
-
-Parent topic:[Managing the lifecycle of notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-nb-lifecycle.html)
-"
-C0E0C248B3934E34883814B5F9CEB792D734042A_0,C0E0C248B3934E34883814B5F9CEB792D734042A," Compute resource options for Data Refinery in projects
-
-When you create or edit a Data Refinery flow in a project, you use the Default Data Refinery XS runtime environment. However, when you run a Data Refinery flow in a job, you choose an environment template for the runtime environment. The environment template specifies the type, size, and power of the hardware configuration, plus the software template.
-
-
-
-* [Types of environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spark-dr-envs.html?context=cdpaas&locale=entypes)
-* [Default environment templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spark-dr-envs.html?context=cdpaas&locale=endefault)
-* [Compute usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spark-dr-envs.html?context=cdpaas&locale=encompute)
-* [Changing the runtime](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spark-dr-envs.html?context=cdpaas&locale=enchange-env)
-* [Runtime logs for jobs](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spark-dr-envs.html?context=cdpaas&locale=enlogs)
-
-
-
-"
-C0E0C248B3934E34883814B5F9CEB792D734042A_1,C0E0C248B3934E34883814B5F9CEB792D734042A," Types of environments
-
-You can use these types of environments with Data Refinery:
-
-
-
-* Default Data Refinery XS runtime environment for running jobs on small data sets.
-* Spark environments for running jobs on larger data sets. The Spark environments have [default environment templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spark-dr-envs.html?context=cdpaas&locale=endefault) so you can get started quickly. Otherwise, you can [create custom environment templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-customize-env-definition.html) for Spark environments. You should use a Spark & R environment only if you are working on a large data set. If your data set is small, you should select the Default Data Refinery XS runtime. The reason is that, although the SparkR cluster in a Spark & R environment is fast and powerful, it requires time to create, which is noticeable when you run a Data Refinery job on small data set.
-
-
-
-"
-C0E0C248B3934E34883814B5F9CEB792D734042A_2,C0E0C248B3934E34883814B5F9CEB792D734042A," Default environment templates
-
-When you work in Data Refinery, the Default Data Refinery XS environment runtime is started and appears as an active runtime under Tool runtimes on the Environments page on the Manage tab of your project. This runtime stops after an hour of inactivity in the Data Refinery interface. However, you can stop it manually under Tool runtimes on the Environments page.
-
-When you create a job to run a Data Refinery flow in a project, you select an environment template. After a runtime for a job is started, it is listed as an active runtime under Tool runtimes on the Environments page on the Manage tab of your project. The runtime for a job stops when the Data Refinery job stops running.
-
-Compute usage is tracked by capacity unit hours (CUH).
-
-
-
-Preset environment templates available in projects for Data Refinery
-
- Name Hardware configuration Capacity units per hour (CUH)
-
- Default Data Refinery XS 3 vCPU and 12 GB RAM 1.5
- Default Spark 3.3 & R 4.2 2 Executors each: 1 vCPU and 4 GB RAM; Driver: 1 vCPU and 4 GB RAM 1.5
-
-
-
-All default environment templates for Data Refinery are HIPAA ready.
-
-The Spark default environment templates are listed under Templates on the Environments page on the Manage tab of your project.
-
-"
-C0E0C248B3934E34883814B5F9CEB792D734042A_3,C0E0C248B3934E34883814B5F9CEB792D734042A," Compute usage in projects
-
-You can monitor the Watson Studio CUH consumption on the Resource usage page on the Manage tab of your project.
-
-"
-C0E0C248B3934E34883814B5F9CEB792D734042A_4,C0E0C248B3934E34883814B5F9CEB792D734042A," Changing the runtime
-
-You can't change the runtime for working in Data Refinery.
-
-You can change the runtime for a Data Refinery flow job by editing the job template. See [Creating jobs in Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.htmlcreate-jobs-in-dr).
-
-"
-C0E0C248B3934E34883814B5F9CEB792D734042A_5,C0E0C248B3934E34883814B5F9CEB792D734042A," Runtime logs for jobs
-
-To view the accumulated logs for a Data Refinery job:
-
-
-
-1. From the project's Jobs page, click the job that ran the Data Refinery flow for which you want to see logs.
-2. Click the job run. You can view the log tail or download the complete log file.
-
-
-
-"
-C0E0C248B3934E34883814B5F9CEB792D734042A_6,C0E0C248B3934E34883814B5F9CEB792D734042A," Next steps
-
-
-
-* [Creating your own environment template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-customize-env-definition.html)
-* [Creating jobs in Data Refinery](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/jobs.htmlcreate-jobs-in-dr)
-* [Stopping active runtimes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.htmlstop-active-runtimes)
-
-
-
-"
-C0E0C248B3934E34883814B5F9CEB792D734042A_7,C0E0C248B3934E34883814B5F9CEB792D734042A," Learn more
-
-
-
-* [Monitoring account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html)
-
-
-
-Parent topic:[Choosing compute resources for tools](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html)
-"
-6F544922DE2638796837398F7EC15A4AFE6B0781,6F544922DE2638796837398F7EC15A4AFE6B0781," SPSS predictive analytics algorithms
-
-You can use the following SPSS predictive analytics algorithms in your notebooks. Code samples are provided for Python notebooks.
-
-Notebooks must run in a Spark with Python environment runtime. To run the algorithms described in this section, you don't need the SPSS Modeler service.
-
-
-
-* [Data preparation](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/datapreparation-guides.html)
-* [Classification and regression](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/classificationandregression-guides.html)
-* [Clustering](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/clustering-guides.html)
-* [Forecasting](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/forecasting-guides.html)
-* [Survival analysis](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/survivalanalysis-guides.html)
-* [Score](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/score-guides.html)
-
-
-
-Parent topic:[Notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html)
-"
-54029DD42BAE3A23D68D928AC3B6C04D0C735DEC_0,54029DD42BAE3A23D68D928AC3B6C04D0C735DEC," Compute resource options for SPSS Modeler in projects
-
-When you run an SPSS Modeler flow in a project, you choose an environment template for the runtime environment. The environment template specifies the type, size, and power of the hardware configuration, plus the software template.
-
-
-
-* [Types of environments](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-envs.html?context=cdpaas&locale=entypes_spss)
-* [Default environment templates](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-envs.html?context=cdpaas&locale=endefault_spss)
-* [Compute usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-envs.html?context=cdpaas&locale=encompute_spss)
-* [Changing the runtime](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-envs.html?context=cdpaas&locale=enchange-env_spss)
-
-
-
-"
-54029DD42BAE3A23D68D928AC3B6C04D0C735DEC_1,54029DD42BAE3A23D68D928AC3B6C04D0C735DEC," Types of environments
-
-You can use this type of environment with SPSS Modeler:
-
-
-
-* Default SPSS Modeler CPU environments for standard workloads
-
-
-
-"
-54029DD42BAE3A23D68D928AC3B6C04D0C735DEC_2,54029DD42BAE3A23D68D928AC3B6C04D0C735DEC," Default environment templates
-
-You can select any of the following default environment templates for SPSS Modeler in a project. The included environment templates are listed under Templates on the Environments page on the Manage tab of your project.
-
-
-
-Default SPSS Modeler environment templates
-
- Name Hardware configuration Local storage CUH rate per hour
-
- Default SPSS Modeler S 2 vCPU and 8 GB RAM 128 GB 1
- Default SPSS Modeler M 4 vCPU and 16 GB RAM 128 GB 2
- Default SPSS Modeler L 6 vCPU and 24 GB RAM 128 GB 3
-
-
-
-After selecting an environment, any other SPSS Modeler flows opened in that project will use the same runtime. The hardware configuration of the available SPSS Modeler environments is preset and cannot be changed.
-
-"
-54029DD42BAE3A23D68D928AC3B6C04D0C735DEC_3,54029DD42BAE3A23D68D928AC3B6C04D0C735DEC," Compute usage in projects
-
-SPSS Modeler consumes compute resources as CUH from the Watson Studio service in projects.
-
-You can monitor the Watson Studio CUH consumption on the Resource usage page on the Manage tab of your project.
-
-"
-54029DD42BAE3A23D68D928AC3B6C04D0C735DEC_4,54029DD42BAE3A23D68D928AC3B6C04D0C735DEC," Changing the SPSS Modeler runtime
-
-If you notice that processing is very slow, you can restart SPSS Modeler and select a larger environment runtime.
-
-To change the SPSS Modeler environment runtime:
-
-
-
-1. Save any data from your current session before switching to another environment.
-2. Stop the active SPSS Modeler runtime under Tool runtimes on the Environments page on the Manage tab of your project.
-3. Restart SPSS Modeler and select another environment with the compute power and memory capacity that better meets your requirements.
-
-
-
-"
-54029DD42BAE3A23D68D928AC3B6C04D0C735DEC_5,54029DD42BAE3A23D68D928AC3B6C04D0C735DEC," Learn more
-
-
-
-* [SPSS Modeler](https://dataplatform.cloud.ibm.com/docs/content/wsd/spss-modeler.html)
-
-
-
-Parent topic:[Choosing compute resources for tools](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html)
-"
-2D81FCD3E78A5CC7B435198A59522AE6BF8640ED_0,2D81FCD3E78A5CC7B435198A59522AE6BF8640ED," SPSS predictive analytics survival analysis algorithms in notebooks
-
-You can use non-parametric distribution fitting, parametric distribution fitting, or parametric regression modeling SPSS predictive analytics algorithms in notebooks.
-
-"
-2D81FCD3E78A5CC7B435198A59522AE6BF8640ED_1,2D81FCD3E78A5CC7B435198A59522AE6BF8640ED," Non-Parametric Distribution Fitting
-
-Survival analysis analyzes data where the outcome variable is the time until the occurrence of an event of interest. The distribution of the event times is typically described by a survival function.
-
-Non-parametric Distribution Fitting (NPDF) provides an estimate of the survival function without making any assumptions concerning the distribution of the data. NPDF includes Kaplan-Meier estimation, life tables, and specialized extension algorithms to support left censored, interval censored, and recurrent event data.
-
-"
-2D81FCD3E78A5CC7B435198A59522AE6BF8640ED_2,2D81FCD3E78A5CC7B435198A59522AE6BF8640ED,"Python example code:
-
-from spss.ml.survivalanalysis import NonParametricDistributionFitting
-from spss.ml.survivalanalysis.params import DefinedStatus, Points, StatusItem
-
-npdf = NonParametricDistributionFitting().
-setAlgorithm(""KM"").
-setBeginField(""time"").
-setStatusField(""status"").
-setStrataFields([""treatment""]).
-setGroupFields([""gender""]).
-setUndefinedStatus(""INTERVALCENSORED"").
-setDefinedStatus(
-DefinedStatus(
-failure=StatusItem(points = Points(""1"")),
-rightCensored=StatusItem(points = Points(""0"")))).
-setOutMeanSurvivalTime(True)
-
-npdfModel = npdf.fit(df)
-predictions = npdfModel.transform(data)
-predictions.show()
-
-"
-2D81FCD3E78A5CC7B435198A59522AE6BF8640ED_3,2D81FCD3E78A5CC7B435198A59522AE6BF8640ED," Parametric Distribution Fitting
-
-Survival analysis analyzes data where the outcome variable is the time until the occurrence of an event of interest. The distribution of the event times is typically described by a survival function.
-
-Parametric Distribution Fitting (PDF) provides an estimate of the survival function by comparing the functions for several known distributions (exponential, Weibull, log-normal, and log-logistic) to determine which, if any, describes the data best. In addition, the distributions for two or more groups of cases can be compared.
-
-"
-2D81FCD3E78A5CC7B435198A59522AE6BF8640ED_4,2D81FCD3E78A5CC7B435198A59522AE6BF8640ED,"Python excample code:
-
-from spss.ml.survivalanalysis import ParametricDistributionFitting
-from spss.ml.survivalanalysis.params import DefinedStatus, Points, StatusItem
-
-pdf = ParametricDistributionFitting().
-setBeginField(""begintime"").
-setEndField(""endtime"").
-setStatusField(""status"").
-setFreqField(""frequency"").
-setDefinedStatus(
-DefinedStatus(
-failure=StatusItem(points=Points(""F"")),
-rightCensored=StatusItem(points=Points(""R"")),
-leftCensored=StatusItem(points=Points(""L"")))
-).
-setMedianRankEstimation(""RRY"").
-setMedianRankObtainMethod(""BetaFDistribution"").
-setStatusConflictTreatment(""DERIVATION"").
-setEstimationMethod(""MRR"").
-setDistribution(""Weibull"").
-setOutProbDensityFunc(True).
-setOutCumDistFunc(True).
-setOutSurvivalFunc(True).
-setOutRegressionPlot(True).
-setOutMedianRankRegPlot(True).
-setComputeGroupComparison(True)
-
-pdfModel = pdf.fit(data)
-predictions = pdfModel.transform(data)
-predictions.show()
-
-"
-2D81FCD3E78A5CC7B435198A59522AE6BF8640ED_5,2D81FCD3E78A5CC7B435198A59522AE6BF8640ED," Parametric regression modeling
-
-Parametric regression modeling (PRM) is a survival analysis technique that incorporates the effects of covariates on the survival times. PRM includes two model types: accelerated failure time and frailty. Accelerated failure time models assume that the relationship of the logarithm of survival time and the covariates is linear. Frailty, or random effects, models are useful for analyzing recurrent events, correlated survival data, or when observations are clustered into groups.
-
-PRM automatically selects the survival time distribution (exponential, Weibull, log-normal, or log-logistic) that best describes the survival times.
-
-"
-2D81FCD3E78A5CC7B435198A59522AE6BF8640ED_6,2D81FCD3E78A5CC7B435198A59522AE6BF8640ED,"Python example code:
-
-from spss.ml.survivalanalysis import ParametricRegression
-from spss.ml.survivalanalysis.params import DefinedStatus, Points, StatusItem
-
-prm = ParametricRegression().
-setBeginField(""startTime"").
-setEndField(""endTime"").
-setStatusField(""status"").
-setPredictorFields([""age"", ""surgery"", ""transplant""]).
-setDefinedStatus(
-DefinedStatus(
-failure=StatusItem(points=Points(""0.0"")),
-intervalCensored=StatusItem(points=Points(""1.0""))))
-
-prmModel = prm.fit(data)
-PMML = prmModel.toPMML()
-statXML = prmModel.statXML()
-predictions = prmModel.transform(data)
-predictions.show()
-
-Parent topic:[SPSS predictive analytics algorithms](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/spss-algorithms.html)
-"
-FAE139F839DAB4C6EB794D689DACCEFF869C718F_0,FAE139F839DAB4C6EB794D689DACCEFF869C718F," Switching the platform for a space
-
-You can switch the platform for some spaces between the Cloud Pak for Data as a Service and the watsonx platform. When you switch a space to another platform, you can use the tools that are specific to that platform.
-
-For example, you might switch an existing space from Cloud Pak for Data as a Service to watsonx to consolidate your collaborative work on one platform. See [Comparison between watsonx and Cloud Pak for Data as a Service](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/compare-platforms.html).
-
-Note: You cannot promote Prompt Lab assets created with foundation model inferencing to a space.
-
-
-
-* [Requirements](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/switch-platform-space.html?context=cdpaas&locale=enrequirements)
-* [Restrictions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/switch-platform-space.html?context=cdpaas&locale=enrestrictions)
-* [What happens when you switch a space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/switch-platform-space.html?context=cdpaas&locale=enconsequences)
-* [Switch the platform for a space](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/switch-platform-space.html?context=cdpaas&locale=enmove-one)
-
-
-
-"
-FAE139F839DAB4C6EB794D689DACCEFF869C718F_1,FAE139F839DAB4C6EB794D689DACCEFF869C718F," Requirements
-
-You can switch a space from one platform to the other if you have the required accounts and permissions.
-
-Required accounts : You must be signed up for both Cloud Pak for Data as a Service and watsonx.
-
-Required permissions : You must have the Admin role in the space that you want to switch.
-
-Required services : The current account that you are working in must have both of these services provisioned: : - Watson Studio : - Watson Machine Learning
-
-"
-FAE139F839DAB4C6EB794D689DACCEFF869C718F_2,FAE139F839DAB4C6EB794D689DACCEFF869C718F," Restrictions
-
-To switch a space from Cloud Pak for Data as a Service to watsonx, all the assets in the space must be supported by both platforms.
-
-Spaces that contain any of the following asset types, but no other types of assets, are eligible to switch from Cloud Pak for Data as a Service to watsonx:
-
-
-
-* Connected data asset
-* Connection
-* Data asset from a file
-* Deployment
-* Jupyter notebook
-* Model
-* Python function
-* Script
-
-
-
-You can’t switch a space that contains assets that are specific to Cloud Pak for Data as a Service. If you add any assets that you created with services other than Watson Studio and Watson Machine Learning to a project, you can't switch that space to watsonx. Although Pipelines assets are supported in both Cloud Pak for Data as a Service and watsonx spaces, you can't switch a space that contains pipeline assets because pipelines can reference unsupported assets.
-
-For more information about asset types, see [Asset types and properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html).
-
-"
-FAE139F839DAB4C6EB794D689DACCEFF869C718F_3,FAE139F839DAB4C6EB794D689DACCEFF869C718F," What happens when you switch the platform for a space
-
-Switching a space between platforms has the following effects:
-
-Collaborators : Collaborators in the project receive notifications of the switch on the original platform. If any collaborators do not have accounts for the destination platform, those collaborators can no longer access the project.
-
-Jobs : Scheduled jobs are retained. Any jobs that are running at the time of the switch continue until completion on the original platform. Any jobs that are scheduled for times after the switch are run on the destination platform. Job history is not retained.
-
-Environments : Custom hardware and software specifications are retained.
-
-Space history : Recent activity and asset activities are not retained.
-
-Resource usage : Resource usage is cumulative because you continue to use the same service instances.
-
-Storage : The space's IBM Cloud Object Storage bucket remains the same.
-
-"
-FAE139F839DAB4C6EB794D689DACCEFF869C718F_4,FAE139F839DAB4C6EB794D689DACCEFF869C718F," Switch the platform for a space
-
-You can switch the platform for a space from within the space on the original platform. You can switch between Cloud Pak for Data as a Service and watsonx.
-
-To switch the platform for a space:
-
-
-
-1. From the space you want to switch, open the Manage tab, select the General page, and in the Controls section, click Switch platform. If you don't see a Switch platform button or the button is not active, you can't switch the space.
-2. Select the destination platform and click Switch platform.
-
-
-
-"
-FAE139F839DAB4C6EB794D689DACCEFF869C718F_5,FAE139F839DAB4C6EB794D689DACCEFF869C718F," Learn more
-
-
-
-* [Comparison between watsonx and Cloud Pak for Data as a Service](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/compare-platforms.html)
-* [Asset types and properties](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/assets.html)
-
-
-
-Parent topic:[Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html)
-"
-384EB2033AD74EA7044AFC8BF1DDB06FF392CB08_0,384EB2033AD74EA7044AFC8BF1DDB06FF392CB08," Compute resource options for Synthetic Data Generator in projects
-
-To create data with the Synthetic Data Generator, you must have the Watson Studio and Watson Machine Learning services provisioned. Running a synthetic data flow consumes compute resources from the Watson Studio service.
-
-"
-384EB2033AD74EA7044AFC8BF1DDB06FF392CB08_1,384EB2033AD74EA7044AFC8BF1DDB06FF392CB08," Capacity units per hour for Synthetic Data Generator
-
-
-
- Capacity type Capacity units per hour
-
- 2 vCPU and 8 GB RAM 7
-
-
-
-"
-384EB2033AD74EA7044AFC8BF1DDB06FF392CB08_2,384EB2033AD74EA7044AFC8BF1DDB06FF392CB08," Compute usage in projects
-
-Running a synthetic data flow consumes compute resources from the Watson Studio service.
-
-You can monitor the total monthly amount of CUH consumption for Watson Studio on the Resource usage page on the Manage tab of your project.
-
-"
-384EB2033AD74EA7044AFC8BF1DDB06FF392CB08_3,384EB2033AD74EA7044AFC8BF1DDB06FF392CB08," Learn more
-
-
-
-* [Synthetic data](https://dataplatform.cloud.ibm.com/docs/content/wsj/synthetic/synthetic_data_overview_sd.html)
-* [Watson Machine Learning service](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/wml-plans.html)
-* [Watson Studio service](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/ws-plans.html)
-* [Monitoring account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html)
-
-
-
-Parent topic:[Choosing compute resources for tools](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/environments-parent.html)
-"
-8A411252B81F0E159C1F63EE64F63A987D1BEF9F_0,8A411252B81F0E159C1F63EE64F63A987D1BEF9F," Manually adding the project access token
-
-All projects have an authorization token that is used to access data assets, for example files and connections, and is used by platform APIs. This token is called the project access token, or simply access token in the project user interface. This project access token must be set in notebooks so that project and platform functions can access the project resources.
-
-When you load data to your notebook by clicking Read data on the Code snippets pane, selecting the asset and the load option, the project access token is added for you, if the generated code that is inserted uses project functions.
-
-However, when you use API functions in your notebook that require the project token, for example, if you're using Wget to access data by using the HTTP, HTTPS or FTP protocols, or the ibm-watson-studio-lib library, you must add the project access token to the notebook yourself.
-
-To add a project access token to a notebook if you are not using the generated code:
-
-
-
-1. From the Manage tab, select Access Control and click New access token under Access tokens. Only project administrators can create project access tokens.
-
-Enter a name and select the access role. To enable using API functions in a notebook, the access token must have the Editor access role. An access token with Viewer access role enables read access only to a notebook.
-2. Add the project access token to a notebook by clicking More > Insert project token from the notebook action bar.
-
-By running the inserted hidden code cell, a project object is created that you can use for functions in the ibm-watson-studio-lib library. For example to get the name of the current project run:
-
-project.get_name()
-
-For details on the available ibm-watson-studio-lib functions, see [Accessing project assets with ibm-watson-studio-lib](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/using-ibm-ws-lib.html).
-
-"
-8A411252B81F0E159C1F63EE64F63A987D1BEF9F_1,8A411252B81F0E159C1F63EE64F63A987D1BEF9F,"Note that a project administrator can revoke a project access token at any time. An access token has no expiration date and is valid until it is revoked.
-
-
-
-Parent topic:[Loading and accessing data in a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html)
-"
-E76A86B7EE87A78FA06482285BAD02694ABCC3CA_0,E76A86B7EE87A78FA06482285BAD02694ABCC3CA," Watson Studio environments compute usage
-
-Compute usage is calculated by the number of capacity unit hours (CUH) consumed by an active environment runtime in Watson Studio. Watson Studio plans govern how you are billed monthly for the resources you consume.
-
-
-
-Capacity units included in each plan per month
-
- Feature Lite Professional Standard (legacy) Enterprise (legacy)
-
- Processing usage 10 CUH per month Unlimited CUH billed for usage per month 10 CUH per month + pay for more 5000 CUH per month + pay for more
-
-
-
-
-
-Capacity units included in each plan per month
-
- Feature Lite Professional
-
- Processing usage 10 CUH per month Unlimited CUH billed for usage per month
-
-
-
-"
-E76A86B7EE87A78FA06482285BAD02694ABCC3CA_1,E76A86B7EE87A78FA06482285BAD02694ABCC3CA," Capacity units per hour for notebooks
-
-
-
-Notebooks
-
- Capacity type Language Capacity units per hour
-
- 1 vCPU and 4 GB RAM Python R 0.5
- 2 vCPU and 8 GB RAM Python R 1
- 4 vCPU and 16 GB RAM Python R 2
- 8 vCPU and 32 GB RAM Python R 4
- 16 vCPU and 64 GB RAM Python R 8
- Driver: 1 vCPU and 4 GB RAM; 1 Executor: 1 vCPU and 4 GB RAM Spark with Python Spark with R 1 CUH per additional executor is 0.5
- Driver: 1 vCPU and 4 GB RAM; 1 Executor: 2 vCPU and 8 GB RAM Spark with Python Spark with R 1.5 CUH per additional executor is 1
- Driver: 2 vCPU and 8 GB RAM; 1 Executor: 1 vCPU and 4 GB RAM; Spark with Python Spark with R 1.5 CUH per additional executor is 0.5
- Driver: 2 vCPU and 8 GB RAM; 1 Executor: 2 vCPU and 8 GB RAM; Spark with Python Spark with R 2 CUH per additional executor is 1
-
-
-
-The rate of capacity units per hour consumed is determined for:
-
-
-
-* Default Python or R environments by the hardware size and the number of users in a project using one or more runtimes
-
-For example: The IBM Runtime 22.2 on Python 3.10 XS with 2 vCPUs will consume 1 CUH if it runs for one hour. If you have a project with 7 users working on notebooks 8 hours a day, 5 days a week, all using the IBM Runtime 22.2 on Python 3.10 XS environment, and everyone shuts down their runtimes when they leave in the evening, runtime consumption is 5 x 7 x 8 = 280 CUH per week.
-
-The CUH calculation becomes more complex when different environments are used to run notebooks in the same project and if users have multiple active runtimes, all consuming their own CUHs. Additionally, there might be notebooks, which are scheduled to run during off-hours, and long-running jobs, likewise consuming CUHs.
-"
-E76A86B7EE87A78FA06482285BAD02694ABCC3CA_2,E76A86B7EE87A78FA06482285BAD02694ABCC3CA,"* Default Spark environments by the hardware configuration size of the driver, and the number of executors and their size.
-
-
-
-"
-E76A86B7EE87A78FA06482285BAD02694ABCC3CA_3,E76A86B7EE87A78FA06482285BAD02694ABCC3CA," Capacity units per hour for notebooks with Decision Optimization
-
-The rate of capacity units per hour consumed is determined by the hardware size and the price for Decision Optimization.
-
-
-
-Decision Optimization notebooks
-
- Capacity type Language Capacity units per hour
-
- 1 vCPU and 4 GB RAM Python + Decision Optimization 0.5 + 5 = 5.5
- 2 vCPU and 8 GB RAM Python + Decision Optimization 1 + 5 = 6
- 4 vCPU and 16 GB RAM Python + Decision Optimization 2 + 5 = 7
- 8 vCPU and 32 GB RAM Python + Decision Optimization 4 + 5 = 9
- 16 vCPU and 64 GB RAM Python + Decision Optimization 8 + 5 = 13
-
-
-
-"
-E76A86B7EE87A78FA06482285BAD02694ABCC3CA_4,E76A86B7EE87A78FA06482285BAD02694ABCC3CA," Capacity units per hour for notebooks with Watson Natural Language Processing
-
-The rate of capacity units per hour consumed is determined by the hardware size and the price for Watson Natural Language Processing.
-
-
-
-Watson Natural Language Processing notebooks
-
- Capacity type Language Capacity units per hour
-
- 1 vCPU and 4 GB RAM Python + Watson Natural Language Processing 0.5 + 5 = 5.5
- 2 vCPU and 8 GB RAM Python + Watson Natural Language Processing 1 + 5 = 6
- 4 vCPU and 16 GB RAM Python + Watson Natural Language Processing 2 + 5 = 7
- 8 vCPU and 32 GB RAM Python + Watson Natural Language Processing 4 + 5 = 9
- 16 vCPU and 64 GB RAM Python + Watson Natural Language Processing 8 + 5 = 13
-
-
-
-"
-E76A86B7EE87A78FA06482285BAD02694ABCC3CA_5,E76A86B7EE87A78FA06482285BAD02694ABCC3CA," Capacity units per hour for Synthetic Data Generator
-
-
-
- Capacity type Capacity units per hour
-
- 2 vCPU and 8 GB RAM 7
-
-
-
-"
-E76A86B7EE87A78FA06482285BAD02694ABCC3CA_6,E76A86B7EE87A78FA06482285BAD02694ABCC3CA," Capacity units per hour for SPSS Modeler flows
-
-
-
-SPSS Modeler flows
-
- Name Capacity type Capacity units per hour
-
- Default SPSS XS 4 vCPU 16 GB RAM 2
-
-
-
-"
-E76A86B7EE87A78FA06482285BAD02694ABCC3CA_7,E76A86B7EE87A78FA06482285BAD02694ABCC3CA," Capacity units per hour for Data Refinery and Data Refinery flows
-
-
-
-Data Refinery and Data Refinery flows
-
- Name Capacity type Capacity units per hour
-
- Default Data Refinery XS runtime 3 vCPU and 12 GB RAM 1.5
- Default Spark 3.3 & R 4.2 2 Executors each: 1 vCPU and 4 GB RAM; Driver: 1 vCPU and 4 GB RAM 1.5
-
-
-
-"
-E76A86B7EE87A78FA06482285BAD02694ABCC3CA_8,E76A86B7EE87A78FA06482285BAD02694ABCC3CA," Capacity units per hour for RStudio
-
-
-
-RStudio
-
- Name Capacity type Capacity units per hour
-
- Default RStudio XS 2 vCPU and 8 GB RAM 1
- Default RStudio M 8 vCPU and 32 GB RAM 4
- Default RStudio L 16 vCPU and 64 GB RAM 8
-
-
-
-"
-E76A86B7EE87A78FA06482285BAD02694ABCC3CA_9,E76A86B7EE87A78FA06482285BAD02694ABCC3CA," Capacity units per hour for GPU environments
-
-
-
-GPU environments
-
- Capacity type GPUs Language Capacity units per hour
-
- 1 x NVIDIA Tesla V100 1 Python with GPU 68
- 2 x NVIDIA Tesla V100 2 Python with GPU 136
-
-
-
-"
-E76A86B7EE87A78FA06482285BAD02694ABCC3CA_10,E76A86B7EE87A78FA06482285BAD02694ABCC3CA," Runtime capacity limit
-
-You are notified when you're about to reach the monthly runtime capacity limit for your Watson Studio service plan. When this happens, you can:
-
-
-
-* Stop active runtimes you don't need.
-* Upgrade your service plan. For up-to-date information, see the[Services catalog page for Watson Studio](https://dataplatform.cloud.ibm.com/data/catalog/data-science-experience?context=wx&target=services).
-
-
-
-Remember: The CUH counter continues to increase while a runtime is active so stop the runtimes you aren't using. If you don't explicitly stop a runtime, the runtime is stopped after an idle timeout. During the idle time, you will continue to consume CUHs for which you are billed.
-
-"
-E76A86B7EE87A78FA06482285BAD02694ABCC3CA_11,E76A86B7EE87A78FA06482285BAD02694ABCC3CA," Track runtime usage for a project
-
-You can view the environment runtimes that are currently active in a project, and monitor usage for the project from the project's Environments page.
-
-"
-E76A86B7EE87A78FA06482285BAD02694ABCC3CA_12,E76A86B7EE87A78FA06482285BAD02694ABCC3CA," Track runtime usage for an account
-
-The CUH consumed by the active runtimes in a project are billed to the account that the project creator has selected in his or her profile settings at the time the project is created. This account can be the account of the project creator, or another account that the project creator has access to. If other users are added to the project and use runtimes, their usage is also billed against the account that the project creator chose at the time of project creation.
-
-You can track the runtime usage for an account on the Environment Runtimes page if you are the IBM Cloud account owner or administrator.
-
-To view the total runtime usage across all of the projects and see how much of your plan you have currently used, choose Administration > Environment runtimes.
-
-A list of the active runtimes billed to your account is displayed. You can see who created the runtimes, when, and for which projects, as well as the capacity units that were consumed by the active runtimes at the time you view the list.
-
-"
-E76A86B7EE87A78FA06482285BAD02694ABCC3CA_13,E76A86B7EE87A78FA06482285BAD02694ABCC3CA," Learn more
-
-
-
-* [Idle runtime timeouts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.htmlstop-active-runtimes)
-* [Monitor account resource usage](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/monitor-resources.html)
-* [Upgrade your service](https://dataplatform.cloud.ibm.com/docs/content/wsj/admin/upgrade.html)
-
-
-
-Parent topic:[Managing compute resources](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/manage-envs-new.html)
-"
-DE60E212953766B4698982B3B631D1A25A019F2E_0,DE60E212953766B4698982B3B631D1A25A019F2E," Accessing project assets with ibm-watson-studio-lib
-
-The ibm-watson-studio-lib library for Python and R contains a set of functions that help you to interact with IBM Watson Studio projects and project assets. You can think of the library as a programmatical interface to a project. Using the ibm-watson-studio-lib library, you can access project metadata and assets, including files and connections. The library also contains functions that simplify fetching files associated with the project.
-
-"
-DE60E212953766B4698982B3B631D1A25A019F2E_1,DE60E212953766B4698982B3B631D1A25A019F2E," Next steps
-
-
-
-* Start using ibm-watson-studio-lib in new notebooks:
-
-
-
-* [ibm-watson-studio-lib for Python](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html)
-* [ibm-watson-studio-lib for R](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-r.html)
-
-
-
-
-
-Parent topic:[Loading and accessing data in a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/load-and-access-data.html)
-"
-15D57C8193B99B8525BC2999EF82EF1CD7EAE8AD_0,15D57C8193B99B8525BC2999EF82EF1CD7EAE8AD," Watson Natural Language Processing task catalog
-
-Watson Natural Language Processing encapsulates natural language functionality in standardized components called blocks or workflows. Each block or workflow can be loaded and run in a notebook, some directly on input data, others in a given order.
-
-This topic contains descriptions of the natural language processing tasks supported in the Watson Natural Language Processing library. It lists the task names, the languages that are supported, dependencies to other blocks and includes sample code of how you use the natural language processing functionality in a Python notebook.
-
-The following natural language processing tasks are supported as blocks or workflows in the Watson Natural Language Processing library:
-
-
-
-* [Language detection](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-language-detection.html)
-* [Syntax analysis](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-syntax.html)
-* [Noun phrase extraction](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-noun-phrase.html)
-* [Keyword extraction and ranking](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-keyword.html)
-* [Entity extraction](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-entity-enhanced.html)
-* [Sentiment classification](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-sentiment.html)
-* [Tone classification](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-tone.html)
-"
-15D57C8193B99B8525BC2999EF82EF1CD7EAE8AD_1,15D57C8193B99B8525BC2999EF82EF1CD7EAE8AD,"* [Emotion classification](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-emotion.html)
-* [Concepts extraction](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-concept-ext.html)
-* [Relations extraction](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-relation-extraction.html)
-* [Hierarchical text categorization](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-hierarchical-cat.html)
-
-
-
-"
-15D57C8193B99B8525BC2999EF82EF1CD7EAE8AD_2,15D57C8193B99B8525BC2999EF82EF1CD7EAE8AD," Language codes
-
-Many of the pre-trained models are available in many languages. The following table lists the language codes and the corresponding language.
-
-
-
-Language codes and their corresponding language equivalents
-
- Language code Corresponding language Language code Corresponding language
-
- af Afrikaans ar Arabic
- bs Bosnian ca Catalan
- cs Czech da Danish
- de German el Greek
- en English es Spanish
- fi Finnish fr French
- he Hebrew hi Hindi
- hr Croatian it Italian
- ja Japanese ko Korean
- nb Norwegian Bokmål nl Dutch
- nn Norwegian Nynorsk pl Polish
- pt Portuguese ro Romanian
- ru Russian sk Slovak
- sr Serbian sv Swedish
- tr Turkish zh_cn Chinese (Simplified)
- zh_tw Chinese (Traditional)
-
-
-
-Parent topic:[Watson Natural language Processing library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp.html)
-"
-156F8A58809D3A4D8F80D02481E5ADDE513EDEAA_0,156F8A58809D3A4D8F80D02481E5ADDE513EDEAA," Concepts extraction block
-
-The Watson Natural Language Processing Concepts block extracts general DBPedia concepts (concepts drawn from language-specific Wikipedia versions) that are directly referenced or alluded to, but not directly referenced, in the input text.
-
-"
-156F8A58809D3A4D8F80D02481E5ADDE513EDEAA_1,156F8A58809D3A4D8F80D02481E5ADDE513EDEAA,"Block name
-
-concepts_alchemy__stock
-
-"
-156F8A58809D3A4D8F80D02481E5ADDE513EDEAA_2,156F8A58809D3A4D8F80D02481E5ADDE513EDEAA,"Supported languages
-
-The Concepts block is available for the following languages. For a list of the language codes and the corresponding language, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.htmllang-codes).
-
-de, en, es, fr, it, ja, ko, pt
-
-"
-156F8A58809D3A4D8F80D02481E5ADDE513EDEAA_3,156F8A58809D3A4D8F80D02481E5ADDE513EDEAA,"Capabilities
-
-Use this block to assign concepts from [DBPedia](https://www.dbpedia.org/) (2016 edition). The output types are based on DBPedia.
-
-"
-156F8A58809D3A4D8F80D02481E5ADDE513EDEAA_4,156F8A58809D3A4D8F80D02481E5ADDE513EDEAA,"Dependencies on other blocks
-
-The following block must run before you can run the Concepts extraction block:
-
-
-
-* syntax_izumo__stock
-
-
-
-"
-156F8A58809D3A4D8F80D02481E5ADDE513EDEAA_5,156F8A58809D3A4D8F80D02481E5ADDE513EDEAA,"Code sample
-
-import watson_nlp
-
- Load Syntax and a Concepts model for English
-syntax_model = watson_nlp.load('syntax_izumo_en_stock')
-concepts_model = watson_nlp.load('concepts_alchemy_en_stock')
- Run the syntax model on the input text
-syntax_prediction = syntax_model.run('IBM announced new advances in quantum computing')
-
- Run the concepts model on the result of syntax
-concepts = concepts_model.run(syntax_prediction)
-print(concepts)
-
-Output of the code sample:
-
-{
-""concepts"": [
-{
-""text"": ""IBM"",
-""relevance"": 0.9842190146446228,
-""dbpedia_resource"": ""http://dbpedia.org/resource/IBM""
-},
-{
-""text"": ""Quantum_computing"",
-""relevance"": 0.9797260165214539,
-""dbpedia_resource"": ""http://dbpedia.org/resource/Quantum_computing""
-},
-{
-""text"": ""Computing"",
-""relevance"": 0.9080164432525635,
-""dbpedia_resource"": ""http://dbpedia.org/resource/Computing""
-},
-{
-""text"": ""Shor's_algorithm"",
-""relevance"": 0.7580527067184448,
-""dbpedia_resource"": ""http://dbpedia.org/resource/Shor's_algorithm""
-},
-{
-""text"": ""Quantum_dot"",
-""relevance"": 0.7069802284240723,
-""dbpedia_resource"": ""http://dbpedia.org/resource/Quantum_dot""
-},
-{
-""text"": ""Quantum_algorithm"",
-""relevance"": 0.7063655853271484,
-""dbpedia_resource"": ""http://dbpedia.org/resource/Quantum_algorithm""
-},
-{
-""text"": ""Qubit"",
-""relevance"": 0.7063655853271484,
-"
-156F8A58809D3A4D8F80D02481E5ADDE513EDEAA_6,156F8A58809D3A4D8F80D02481E5ADDE513EDEAA,"""dbpedia_resource"": ""http://dbpedia.org/resource/Qubit""
-},
-{
-""text"": ""DNA_computing"",
-""relevance"": 0.7044616341590881,
-""dbpedia_resource"": ""http://dbpedia.org/resource/DNA_computing""
-},
-{
-""text"": ""Computation"",
-""relevance"": 0.7044616341590881,
-""dbpedia_resource"": ""http://dbpedia.org/resource/Computation""
-},
-{
-""text"": ""Computer"",
-""relevance"": 0.7044616341590881,
-""dbpedia_resource"": ""http://dbpedia.org/resource/Computer""
-}
-],
-""producer_id"": {
-""name"": ""Alchemy Concepts"",
-""version"": ""0.0.1""
-}
-}
-
-Parent topic:[Watson Natural Language Processing block catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html)
-"
-B32394103127310AF0F4BF240CFD0B26399B685D_0,B32394103127310AF0F4BF240CFD0B26399B685D," Emotion classification
-
-The Emotion model in the Watson Natural Language Processing classification workflow classifies the emotion in the input text.
-
-Workflow nameensemble_classification-workflow_en_emotion-stock
-
-"
-B32394103127310AF0F4BF240CFD0B26399B685D_1,B32394103127310AF0F4BF240CFD0B26399B685D,"Supported languages
-
-
-
-* English and French
-
-
-
-"
-B32394103127310AF0F4BF240CFD0B26399B685D_2,B32394103127310AF0F4BF240CFD0B26399B685D,"Capabilities
-
-The Emotion classification model is a pre-trained document classification model for the task of classifying the emotion in the input document. The model identifies the emotion of a document, and classifies it as:
-
-
-
-* Anger
-* Disgust
-* Fear
-* Joy
-* Sadness
-
-
-
-Unlike the Sentiment model, which classifies each individual sentence, the Emotion model classifies the entire input document. As such, the Emotion model works optimally when the input text to classify is no longer than 1000 characters. If you would like to classify texts longer than 1000 characters, split the text into sentences or paragraphs for example and apply the Emotion model on each sentence or paragraph.
-
-A document may be classified into multiple categories or into no category.
-
-
-
-Capabilities of emotion classification based on an example
-
- Capabilities Example
-
- Identifies the emotion of a document and classifies it ""I'm so annoyed that this code won't run --> anger, sadness
-
-
-
-"
-B32394103127310AF0F4BF240CFD0B26399B685D_3,B32394103127310AF0F4BF240CFD0B26399B685D,"Dependencies on other blocks
-
-None
-
-"
-B32394103127310AF0F4BF240CFD0B26399B685D_4,B32394103127310AF0F4BF240CFD0B26399B685D,"Code sample
-
-import watson_nlp
-
- Load the Emotion workflow model for English
-emotion_model = watson_nlp.load('ensemble_classification-workflow_en_emotion-stock')
-
- Run the Emotion model
-emotion_result = emotion_model.run(""I'm so annoyed that this code won't run"")
-print(emotion_result)
-
-Output of the code sample:
-
-{
-""classes"": [
-{
-""class_name"": ""anger"",
-""confidence"": 0.6074999913276445
-},
-{
-""class_name"": ""sadness"",
-""confidence"": 0.2913303280964709
-},
-{
-""class_name"": ""fear"",
-""confidence"": 0.10266377929247113
-},
-{
-""class_name"": ""disgust"",
-""confidence"": 0.018745421312542355
-},
-{
-""class_name"": ""joy"",
-""confidence"": 0.0020577122567564804
-}
-],
-""producer_id"": {
-""name"": ""Voting based Ensemble"",
-""version"": ""0.0.1""
-}
-
-Parent topic:[Watson Natural Language Processing task catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html)
-"
-A8A2D53661EB9EF173F7CC4794096A134123DACA_0,A8A2D53661EB9EF173F7CC4794096A134123DACA," Entity extraction
-
-The Watson Natural Language Processing Entity extraction models extract entities from input text.
-
-For details, on available extraction types, refer to these sections:
-
-
-
-* [Machine-learning-based extraction for general entities](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-entity-enhanced.html?context=cdpaas&locale=enmachine-learning-general)
-* [Machine-learning-based extraction for PII entities](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-entity-enhanced.html?context=cdpaas&locale=enmachine-learning-pii)
-* [Rule-based extraction for general entities](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-entity-enhanced.html?context=cdpaas&locale=enrule-based-general)
-* [Rule-based extraction for PII entities](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-entity-enhanced.html?context=cdpaas&locale=enrule-based-pii)
-
-
-
-"
-A8A2D53661EB9EF173F7CC4794096A134123DACA_1,A8A2D53661EB9EF173F7CC4794096A134123DACA," Machine-learning-based extraction for general entities
-
-The machine-learning-based extraction models are trained on labeled data for the more complex entity types such as person, organization and location.
-
-"
-A8A2D53661EB9EF173F7CC4794096A134123DACA_2,A8A2D53661EB9EF173F7CC4794096A134123DACA,"Capabilities
-
-The entity models extract entities from the input text. The following types of entities are recognized:
-
-
-
-* Date
-* Duration
-* Facility
-* Geographic feature
-* Job title
-* Location
-* Measure
-* Money
-* Ordinal
-* Organization
-* Person
-* Time
-
-
-
-
-
-Capabilities of machine-learning-based extraction based on an example
-
- Capabilities Examples
-
- Extracts entities from the input text. IBM's CEO Arvind Krishna is based in the US -> IBMOrganization , CEOJobTitle, Arvind KrishnaPerson, USLocation
-
-
-
-Available workflows and blocks differ, depending on the runtime used.
-
-
-
-Blocks and workflows for handling general entities with their corresponding runtimes
-
- Block or workflow name Available in runtime
-
- entity-mentions_transformer-workflow_multilingual_slate.153m.distilled [Runtime 23.1](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-entity-enhanced.html?context=cdpaas&locale=enruntime-231)
- entity-mentions_transformer-workflow_multilingual_slate.153m.distilled-cpu [Runtime 23.1](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-entity-enhanced.html?context=cdpaas&locale=enruntime-231)
- entity-mentions_bert_multi_stock [Runtime 22.2](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-entity-enhanced.html?context=cdpaas&locale=enruntime-222)
-
-
-
-"
-A8A2D53661EB9EF173F7CC4794096A134123DACA_3,A8A2D53661EB9EF173F7CC4794096A134123DACA," Machine-learning-based workflows for general entities in Runtime 23.1
-
-"
-A8A2D53661EB9EF173F7CC4794096A134123DACA_4,A8A2D53661EB9EF173F7CC4794096A134123DACA,"Workflow names
-
-
-
-* entity-mentions_transformer-workflow_multilingual_slate.153m.distilled: this workflow can be used on both CPUs and GPUs.
-* entity-mentions_transformer-workflow_multilingual_slate.153m.distilled-cpu: this workflow is optimized for CPU-based runtimes.
-
-
-
-"
-A8A2D53661EB9EF173F7CC4794096A134123DACA_5,A8A2D53661EB9EF173F7CC4794096A134123DACA,"Supported languages
-
-Entity extraction is available for the following languages.
-
-For a list of the language codes and the corresponding language, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.htmllang-codes):
-
-ar, cs, da, de, en, es, fi, fr, he, hi, it, ja, ko, nb, nl, nn, pt, ro, ru, sk, sv, tr, zh-cn
-
-"
-A8A2D53661EB9EF173F7CC4794096A134123DACA_6,A8A2D53661EB9EF173F7CC4794096A134123DACA,"Code sample
-
-import watson_nlp
- Load the workflow model
-entities_workflow = watson_nlp.load('entity-mentions_transformer-workflow_multilingual_slate.153m.distilled')
- Run the entity extraction workflow on the input text
-entities = entities_workflow.run('IBM's CEO Arvind Krishna is based in the US', language_code=""en"")
-print(entities.get_mention_pairs())
-
-"
-A8A2D53661EB9EF173F7CC4794096A134123DACA_7,A8A2D53661EB9EF173F7CC4794096A134123DACA,"Output of the code sample:
-
-[('IBM', 'Organization'), ('CEO', 'JobTitle'), ('Arvind Krishna', 'Person'), ('US', 'Location')]
-
-"
-A8A2D53661EB9EF173F7CC4794096A134123DACA_8,A8A2D53661EB9EF173F7CC4794096A134123DACA," Machine-learning-based blocks for general entities in Runtime 22.2
-
-Block namesentity-mentions_bert_multi_stock
-
-"
-A8A2D53661EB9EF173F7CC4794096A134123DACA_9,A8A2D53661EB9EF173F7CC4794096A134123DACA,"Supported languages
-
-Entity extraction is available for the following languages. For a list of the language codes and the corresponding language, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.htmllang-codes).
-
-ar, cs, da, de, en, es, fi, fr, he, hi, it, ja, ko, nb, nl, nn, pt, ro, ru, sk, sv, tr, zh-cn
-
-"
-A8A2D53661EB9EF173F7CC4794096A134123DACA_10,A8A2D53661EB9EF173F7CC4794096A134123DACA,"Dependencies on other blocks
-
-The following block must run before you can run the Entity extraction block:
-
-
-
-* syntax_izumo__stock
-
-
-
-"
-A8A2D53661EB9EF173F7CC4794096A134123DACA_11,A8A2D53661EB9EF173F7CC4794096A134123DACA,"Code sample
-
-import watson_nlp
-
- Load Syntax Model for English, and the multilingual BERT Entity model
-syntax_model = watson_nlp.load('syntax_izumo_en_stock')
-bert_entity_model = watson_nlp.load('entity-mentions_bert_multi_stock')
-
- Run the syntax model on the input text
-syntax_prediction = syntax_model.run('IBM's CEO Arvind Krishna is based in the US')
-
- Run the entity mention model on the result of syntax model
-bert_entity_mentions = bert_entity_model.run(syntax_prediction)
-print(bert_entity_mentions.get_mention_pairs())
-
-"
-A8A2D53661EB9EF173F7CC4794096A134123DACA_12,A8A2D53661EB9EF173F7CC4794096A134123DACA,"Output of the code sample:
-
-[('IBM', 'Organization'), ('CEO', 'JobTitle'), ('Arvind Krishna', 'Person'), ('US', 'Location')]
-
-"
-A8A2D53661EB9EF173F7CC4794096A134123DACA_13,A8A2D53661EB9EF173F7CC4794096A134123DACA," Machine-learning-based extraction for PII entities
-
-Block namesentity-mentions_bilstm_en_pii
-
-
-
-Blocks for handling Personal Identifiable Information (PII) entities with their corresponding runtimes
-
- Block name Available in runtime
-
- entity-mentions_bilstm_en_pii Runtime 22.2, Runtime 23.1
-
-
-
-The entity-mentions_bilstm_en_pii machine-learning based extraction model is trained on labeled data for types person and location.
-
-"
-A8A2D53661EB9EF173F7CC4794096A134123DACA_14,A8A2D53661EB9EF173F7CC4794096A134123DACA,"Capabilities
-
-The entity-mentions_bilstm_en_pii block recognizes the following types of entities:
-
-
-
-Entities extracted by the entity-mentions_bilstm_en_pii block
-
- Entity type name Description Supported languages
-
- Location All geo-political regions, continents, countries, and street names, states, provinces, cities, towns or islands. en
- Person Any being; living, nonliving, fictional or real. en
-
-
-
-"
-A8A2D53661EB9EF173F7CC4794096A134123DACA_15,A8A2D53661EB9EF173F7CC4794096A134123DACA,"Dependencies on other blocks
-
-The following block must run before you can run the entity-mentions_bilstm_en_pii block:
-
-
-
-* syntax_izumo_en_stock
-
-
-
-"
-A8A2D53661EB9EF173F7CC4794096A134123DACA_16,A8A2D53661EB9EF173F7CC4794096A134123DACA,"Code sample
-
-import os
-
-import watson_nlp
-
- Load Syntax and a Entity Mention BiLSTM model for English
-
-syntax_model = watson_nlp.load('syntax_izumo_en_stock')
-
-entity_model = watson_nlp.load('entity-mentions_bilstm_en_pii')
-
-text = 'Denver is the capital of Colorado. The total estimated government spending in Colorado in fiscal year 2016 was $36.0 billion. IBM office is located in downtown Denver. Michael Hancock is the mayor of Denver.'
-
- Run the syntax model on the input text
-
-syntax_prediction = syntax_model.run(text)
-
- Run the entity mention model on the result of the syntax analysis
-
-entity_mentions = entity_model.run(syntax_prediction)
-
-print(entity_mentions)
-
-"
-A8A2D53661EB9EF173F7CC4794096A134123DACA_17,A8A2D53661EB9EF173F7CC4794096A134123DACA,"Output of the code sample:
-
-{
-""mentions"": [
-{
-""span"": {
-""begin"": 0,
-""end"": 6,
-""text"": ""Denver""
-},
-""type"": ""Location"",
-""producer_id"": {
-""name"": ""BiLSTM Entity Mentions"",
-""version"": ""1.0.0""
-},
-""confidence"": 0.6885626912117004,
-""mention_type"": ""MENTT_UNSET"",
-""mention_class"": ""MENTC_UNSET"",
-""role"": """"
-},
-{
-""span"": {
-""begin"": 25,
-""end"": 33,
-""text"": ""Colorado""
-},
-""type"": ""Location"",
-""producer_id"": {
-""name"": ""BiLSTM Entity Mentions"",
-""version"": ""1.0.0""
-},
-""confidence"": 0.8509215116500854,
-""mention_type"": ""MENTT_UNSET"",
-""mention_class"": ""MENTC_UNSET"",
-""role"": """"
-},
-{
-""span"": {
-""begin"": 78,
-""end"": 86,
-""text"": ""Colorado""
-},
-""type"": ""Location"",
-""producer_id"": {
-""name"": ""BiLSTM Entity Mentions"",
-""version"": ""1.0.0""
-},
-""confidence"": 0.9928259253501892,
-""mention_type"": ""MENTT_UNSET"",
-""mention_class"": ""MENTC_UNSET"",
-""role"": """"
-},
-{
-""span"": {
-""begin"": 151,
-""end"": 166,
-""text"": ""downtown Denver""
-},
-""type"": ""Location"",
-""producer_id"": {
-""name"": ""BiLSTM Entity Mentions"",
-""version"": ""1.0.0""
-},
-""confidence"": 0.48378944396972656,
-""mention_type"": ""MENTT_UNSET"",
-""mention_class"": ""MENTC_UNSET"",
-""role"": """"
-},
-{
-""span"": {
-"
-A8A2D53661EB9EF173F7CC4794096A134123DACA_18,A8A2D53661EB9EF173F7CC4794096A134123DACA,"""begin"": 168,
-""end"": 183,
-""text"": ""Michael Hancock""
-},
-""type"": ""Person"",
-""producer_id"": {
-""name"": ""BiLSTM Entity Mentions"",
-""version"": ""1.0.0""
-},
-""confidence"": 0.9972871541976929,
-""mention_type"": ""MENTT_UNSET"",
-""mention_class"": ""MENTC_UNSET"",
-""role"": """"
-}
-],
-""producer_id"": {
-""name"": ""BiLSTM Entity Mentions"",
-""version"": ""1.0.0""
-}
-}
-
-"
-A8A2D53661EB9EF173F7CC4794096A134123DACA_19,A8A2D53661EB9EF173F7CC4794096A134123DACA," Rule-based extraction for general entities
-
-The rule-based model entity-mentions_rbr_xx_stock identifies syntactically regular entities.
-
-Block nameentity-mentions_rbr_xx_stock
-
-"
-A8A2D53661EB9EF173F7CC4794096A134123DACA_20,A8A2D53661EB9EF173F7CC4794096A134123DACA,"Capabilities
-
-Rule-based extraction handles syntactically regular entity types. The entity block extract entities from the input text. The following types of entities are recognized:
-
-
-
-* PhoneNumber
-* EmailAddress
-* Number
-* Percent
-* IPAddress
-* HashTag
-* TwitterHandle
-* URLDate
-
-
-
-
-
-Capabilities of rule-based extraction based on an example
-
- Capabilities Examples
-
- Extracts syntactically regular entity types from the input text. My email is john@us.ibm.com -> john@us.ibm.comEmailAddress
-
-
-
-"
-A8A2D53661EB9EF173F7CC4794096A134123DACA_21,A8A2D53661EB9EF173F7CC4794096A134123DACA,"Supported languages
-
-Entity extraction is available for the following languages. For a list of the language codes and the corresponding language, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.htmllang-codes).
-
-ar, cs, da, de, en, es, fi, fr, he, hi, it, ja, ko, nb, nl, nn, pt, ro, ru, sk, sv, tr, zh-cn, zh-tw
-
-"
-A8A2D53661EB9EF173F7CC4794096A134123DACA_22,A8A2D53661EB9EF173F7CC4794096A134123DACA,"Dependencies on other blocks
-
-None
-
-"
-A8A2D53661EB9EF173F7CC4794096A134123DACA_23,A8A2D53661EB9EF173F7CC4794096A134123DACA,"Code sample
-
-import watson_nlp
-
- Load a rule-based Entity Mention model for English
-rbr_entity_model = watson_nlp.load('entity-mentions_rbr_en_stock')
-
- Run the entity model on the input text
-rbr_entity_mentions = rbr_entity_model.run('My email is john@us.ibm.com')
-print(rbr_entity_mentions)
-
-Output of the code sample:
-
-{
-""mentions"": [
-{
-""span"": {
-""begin"": 12,
-""end"": 27,
-""text"": ""john@us.ibm.com""
-},
-""type"": ""EmailAddress"",
-""producer_id"": {
-""name"": ""RBR mentions"",
-""version"": ""0.0.1""
-},
-""confidence"": 0.8,
-""mention_type"": ""MENTT_UNSET"",
-""mention_class"": ""MENTC_UNSET"",
-""role"": """"
-}
-],
-""producer_id"": {
-""name"": ""RBR mentions"",
-""version"": ""0.0.1""
-}
-}
-
-"
-A8A2D53661EB9EF173F7CC4794096A134123DACA_24,A8A2D53661EB9EF173F7CC4794096A134123DACA," Rule-based extraction for PII entities
-
-The rule-based model entity-mentions_rbr_multi_pii handles the majority of the types by identifying common formats of PII entities and performing possible checksum or validations as appropriate for each entity type. For example, credit card number candidates are validated using the Luhn algorithm.
-
-Block nameentity-mentions_rbr_multi_pii
-
-"
-A8A2D53661EB9EF173F7CC4794096A134123DACA_25,A8A2D53661EB9EF173F7CC4794096A134123DACA,"Capabilities
-
-The entity block entity-mentions_rbr_multi_pii recognizes the following types of entities:
-
-
-
-Entities extracted by the entity-mentions_rbr_multi_pii block
-
- Entity type name Description Supported languages
-
- BankAccountNumber.CreditCardNumber.Amex Credit card number for card types AMEX (15 digits). Checked through the Luhn algorithm. All
- BankAccountNumber.CreditCardNumber.Master Credit card number for card types Master card (16 digits). Checked through the Luhn algorithm. All
- BankAccountNumber.CreditCardNumber.Other Credit card number for left-over category of other types. Checked through the Luhn algorithm. All
- BankAccountNumber.CreditCardNumber.Visa Credit card number for card types VISA (16 to 19 digits). Checked through the Luhn algorithm. All
- EmailAddress Email addresses, for example: john@gmail.com ar, cs, da, de, en, es, fi, fr, he, hi, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sv, tr, zh-cn
- IPAddress IPv4 and IPv6 addresses, for example, 10.142.250.123 ar, cs, da, de, en, es, fi, fr, he, hi, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sv, tr, zh-cn
- PhoneNumber Any specific phone number, for example, 0511-123-456 ar, cs, da, de, en, es, fi, fr, he, hi, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sv, tr, zh-cn
-
-
-
-Some PII entity type names are country-specific. The _ in the following entity types is a placeholder for a country code.
-
-
-
-* BankAccountNumber.BBAN._ : These are more variable national bank account numbers and the extraction is mostly language-specific without a general checksum algorithm.
-"
-A8A2D53661EB9EF173F7CC4794096A134123DACA_26,A8A2D53661EB9EF173F7CC4794096A134123DACA,"* BankAccountNumber.IBAN._ : Highly standardized IBANs are supported in a language-independent way and with a checksum algorithm.
-* NationalNumber.NationalID._: These national IDs don’t have a (published) checksum algorithm, and are being extracted on a language-specific basis.
-* NationalNumber.Passport._ : Checksums are implemented only for the countries where a checksum algorithm exists. These are specifically extracted language with additional context restrictions.
-* NationalNumber.TaxID._ : These IDs don't have a (published) checksum algorithm, and are being extracted on a language-specific basis.
-
-
-
-Which entity types are available for which languages and which country code to use is listed in the following table.
-
-
-
-Country-specific PII entity types
-
- Country Entity Type Name Description Supported Languages
-
- Austria BankAccountNumber.BBAN.AT Basic bank account number de
- BankAccountNumber.IBAN.AT International bank account number all
- NationalNumber.Passport.AT Passport number de
- NationalNumber.TaxID.AT Tax identification number de
- Belgium BankAccountNumber.BBAN.BE Basic bank account number fr, nl
- BankAccountNumber.IBAN.BE International bank account number all
- NationalNumber.NationalID.BE National identification number fr, nl
- NationalNumber.Passport.BE Passport number fr, nl
- Bulgaria BankAccountNumber.BBAN.BG Basic bank account number bg
- BankAccountNumber.IBAN.BG International bank account number all
- NationalNumber.NationalID.BG National identification number bg
- Canada NationalNumber.SocialInsuranceNumber.CA Social insurance number. Checksum algorithm is implemented. en, fr
- Croatia BankAccountNumber.BBAN.HR Basic bank account number hr
- BankAccountNumber.IBAN.HR International bank account number all
- NationalNumber.NationalID.HR National identification number hr
- NationalNumber.TaxID.HR Tax identification number hr
- Cyprus BankAccountNumber.BBAN.CY Basic bank account number el
- BankAccountNumber.IBAN.CY International bank account number all
- NationalNumber.TaxID.CY Tax identification number el
-"
-A8A2D53661EB9EF173F7CC4794096A134123DACA_27,A8A2D53661EB9EF173F7CC4794096A134123DACA," Czechia BankAccountNumber.BBAN.CZ Basic bank account number cs
- BankAccountNumber.IBAN.CZ International bank account number cs
- NationalNumber.NationalID.CZ National identification number cs
- NationalNumber.TaxID.CZ Tax identification number cs
- Denmark BankAccountNumber.BBAN.DK Basic bank account number da
- BankAccountNumber.IBAN.DK International bank account number all
- NationalNumber.NationalID.DK National identification number da
- Estonia BankAccountNumber.BBAN.EE Basic bank account number et
- BankAccountNumber.IBAN.EE International bank account number all
- NationalNumber.NationalID.EE National identification number et
- Finland BankAccountNumber.BBAN.FI Basic bank account number fi
- BankAccountNumber.IBAN.FI International bank account number all
- NationalNumber.NationalID.FI National identification number fi
- NationalNumber.Passport.FI Passport number fi
- France BankAccountNumber.BBAN.FR Basic bank account number fr
- BankAccountNumber.IBAN.FR International bank account number all
- NationalNumber.Passport.FR Passport number fr
- NationalNumber.SocialInsuranceNumber.FR Social insurance number. Checksum algorithm is implemented. fr
- Germany BankAccountNumber.BBAN.DE Basic bank aAccount number de
- BankAccountNumber.IBAN.DE International bank account number all
- NationalNumber.Passport.DE Passport number de
- NationalNumber.SocialInsuranceNumber.DE Social insurance number. Checksum algorithm is implemented. de
- Greece BankAccountNumber.BBAN.GR Basic bank account number el
- BankAccountNumber.IBAN.GR International bank account number all
- NationalNumber.Passport.GR Passport number el
- NationalNumber.TaxID.GR Tax identification number el
- NationalNumber.NationalID.GR National ID number el
- Hungary BankAccountNumber.BBAN.HU Basic bank account number hu
- BankAccountNumber.IBAN.HU International bank account number all
- NationalNumber.NationalID.HU National identification number hu
- NationalNumber.TaxID.HU Tax identification number hu
- Iceland BankAccountNumber.BBAN.IS Basic bank account number is
- BankAccountNumber.IBAN.IS International bank account number all
- NationalNumber.NationalID.IS National identification number is
-"
-A8A2D53661EB9EF173F7CC4794096A134123DACA_28,A8A2D53661EB9EF173F7CC4794096A134123DACA," Ireland BankAccountNumber.BBAN.IE Basic bank account number en
- BankAccountNumber.IBAN.IE International bank account number all
- NationalNumber.NationalID.IE National identification number en
- NationalNumber.Passport.IE Passport number en
- NationalNumber.TaxID.IE Tax identification number en
- Italy BankAccountNumber.BBAN.IT Basic bank account number it
- BankAccountNumber.IBAN.IT International bank account number all
- NationalNumber.NationalID.IT National identification number it
- NationalNumber.Passport.IT Passport number it
- Latvia BankAccountNumber.BBAN.LV Basic bank account number lv
- BankAccountNumber.IBAN.LV International bank account number all
- NationalNumber.NationalID.LV National identification number lv
- Liechtenstein BankAccountNumber.BBAN.LI Basic bank account number de
- BankAccountNumber.IBAN.LI International bank account number all
- Lithuania BankAccountNumber.BBAN.LT Basic bank account number lt
- BankAccountNumber.IBAN.LT International bank account number all
- NationalNumber.NationalID.LT National identification number lt
- Luxembourg BankAccountNumber.BBAN.LU Basic bank account number de, fr
- BankAccountNumber.IBAN.LU International bank account number all
- NationalNumber.TaxID.LU Tax identification number de, fr
- Malta BankAccountNumber.BBAN.MT Basic bank account number mt
- BankAccountNumber.IBAN.MT International bank account number all
- Netherlands BankAccountNumber.BBAN.NL Basic bank account number nl
- BankAccountNumber.IBAN.NL International bank account number all
- NationalNumber.NationalID.NL National identification number nl
- NationalNumber.Passport.NL Passport number nl
- Norway BankAccountNumber.BBAN.NO Basic bank account number no
- BankAccountNumber.IBAN.NO International bank account number all
- NationalNumber.NationalID.NO National identification number no
- NationalNumber.NationalID.NO.Old National identification number old no
- NationalNumber.Passport.NO Passport number no
- Poland BankAccountNumber.BBAN.PL Basic bank account number pl
- BankAccountNumber.IBAN.PL International bank account number all
- NationalNumber.NationalID.PL National identification number pl
-"
-A8A2D53661EB9EF173F7CC4794096A134123DACA_29,A8A2D53661EB9EF173F7CC4794096A134123DACA," NationalNumber.Passport.PL Passport number pl
- NationalNumber.TaxID.PL Tax identification number pl
- Portugal BankAccountNumber.IBAN.PT International bank account number all
- BankAccountNumber.BBAN.PT Basic bank account number pt
- NationalNumber.NationalID.PT National identification number pt
- NationalNumber.NationalID.PT.Old National identification number, obsolete format pt
- NationalNumber.TaxID.PT Tax identification number pt
- Romania BankAccountNumber.BBAN.RO Basic bank account number ro
- BankAccountNumber.IBAN.RO International bank account number all
- NationalNumber.NationalID.RO National identification number ro
- NationalNumber.TaxID.RO Tax identification number ro
- Slovakia BankAccountNumber.IBAN.SK International bank account number all
- BankAccountNumber.BBAN.SK Basic bank account number sk
- NationalNumber.TaxID.SK Tax identification number sk
- NationalNumber.NationalID.SK National identification number sk
- Slovenia BankAccountNumber.IBAN.SI International bank account number all
- Spain BankAccountNumber.IBAN.ES International bank account number all
- BankAccountNumber.BBAN.ES Basic bank account number es
- NationalNumber.NationalID.ES National identification number es
- NationalNumber.Passport.ES Passport number es
- NationalNumber.TaxID.ES Tax identification number es
- Sweden BankAccountNumber.IBAN.SE International bank account number all
- BankAccountNumber.BBAN.SE Basic bank account number sv
- NationalNumber.NationalID.SE National identification number sv
- NationalNumber.Passport.SE Passport number sv
- Switzerland BankAccountNumber.IBAN.CH International bank account number all
- BankAccountNumber.BBAN.CH Basic bank account number de, fr, it
- NationalNumber.NationalID.CH National identification number de, fr, it
- NationalNumber.Passport.CH Passport number de, fr, it
- NationalNumber.NationalID.CH.Old National identification number, obsolete format de, fr, it
- United Kingdom of Great Britain and Northern Ireland BankAccountNumber.IBAN.GB International bank account number all
- NationalNumber.SocialSecurityNumber.GB.NHS National Health Service number all
-"
-A8A2D53661EB9EF173F7CC4794096A134123DACA_30,A8A2D53661EB9EF173F7CC4794096A134123DACA," NationalNumber.SocialSecurityNumber.GB.NINO National Social Security Insurance number all
- NationalNumber.NationalID.GB.Old National ID number, obsolete format all
- NationalNumber.Passport.GB Passport Number. Checksum algorithm is not implemented and hence come with additional context restrictions. all
- United States NationalNumber.SocialSecurityNumber.US Social Security number. Checksum algorithm is not implemented and hence come with additional context restrictions. en
- NationalNumber.Passport.US Passport Number. Checksum algorithm is not implemented and hence come with additional context restrictions. en
-
-
-
-"
-A8A2D53661EB9EF173F7CC4794096A134123DACA_31,A8A2D53661EB9EF173F7CC4794096A134123DACA,"Dependencies on other blocks
-
-None
-
-"
-A8A2D53661EB9EF173F7CC4794096A134123DACA_32,A8A2D53661EB9EF173F7CC4794096A134123DACA,"Code sample
-
-import watson_nlp
-
- Load the RBR PII model. Note that this is a multilingual model supporting multiple languages.
-rbr_entity_model = watson_nlp.load('entity-mentions_rbr_multi_pii')
-
- Run the RBR model. Note that language code of the input text is passed as a parameter to the run method.
-rbr_entity_mentions = rbr_entity_model.run('Please find my credit card number here: 378282246310005. Thanks for the payment.', language_code='en')
-print(rbr_entity_mentions)
-
-Output of the code sample:
-
-{
-""mentions"": [
-{
-""span"": {
-""begin"": 40,
-""end"": 55,
-""text"": ""378282246310005""
-},
-""type"": ""BankAccountNumber.CreditCardNumber.Amex"",
-""producer_id"": {
-""name"": ""RBR mentions"",
-""version"": ""0.0.1""
-},
-""confidence"": 0.8,
-""mention_type"": ""MENTT_UNSET"",
-""mention_class"": ""MENTC_UNSET"",
-""role"": """"
-}
-],
-""producer_id"": {
-""name"": ""RBR mentions"",
-""version"": ""0.0.1""
-}
-}
-
-Parent topic:[Watson Natural Language Processing task catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html)
-"
-1EC0AABFA78901776901CB2C57AFF822855B6B5E_0,1EC0AABFA78901776901CB2C57AFF822855B6B5E," Hierarchical text categorization
-
-The Watson Natural Language Processing Categories block assigns individual nodes within a hierarchical taxonomy to an input document. For example, in the text IBM announces new advances in quantum computing, examples of extracted categories are technology and computing/hardware/computer and technology and computing/operating systems. These categories represent level 3 and level 2 nodes in a hierarchical taxonomy.
-
-This block differs from the Classification block in that training starts from a set of seed phrases associated with each node in the taxonomy, and does not require labeled documents.
-
-Note that the Hierarchical text categorization block can only be used in a notebook that is started in an environment based on Runtime 22.2 or Runtime 23.1 that includes the Watson Natural Language Processing library.
-
-"
-1EC0AABFA78901776901CB2C57AFF822855B6B5E_1,1EC0AABFA78901776901CB2C57AFF822855B6B5E,"Block name
-
-categories_esa_en_stock
-
-"
-1EC0AABFA78901776901CB2C57AFF822855B6B5E_2,1EC0AABFA78901776901CB2C57AFF822855B6B5E,"Supported languages
-
-The Categories block is available for the following languages. For a list of the language codes and the corresponding language, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.htmllang-codes).
-
-de, en
-
-"
-1EC0AABFA78901776901CB2C57AFF822855B6B5E_3,1EC0AABFA78901776901CB2C57AFF822855B6B5E,"Capabilities
-
-Use this block to determine the topics of documents on the web by categorizing web pages into a taxonomy of general domain topics, for ad placement and content recommendation. The model was tested on data from news reports and general web pages.
-
-For a list of the categories that can be returned, see [Category types](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-returned-categories.html).
-
-"
-1EC0AABFA78901776901CB2C57AFF822855B6B5E_4,1EC0AABFA78901776901CB2C57AFF822855B6B5E,"Dependencies on other blocks
-
-The following block must run before you can run the hierarchical categorization block:
-
-
-
-* syntax_izumo__stock
-
-
-
-"
-1EC0AABFA78901776901CB2C57AFF822855B6B5E_5,1EC0AABFA78901776901CB2C57AFF822855B6B5E,"Code sample
-
-import watson_nlp
-
- Load Syntax and a Categories model for English
-syntax_model = watson_nlp.load('syntax_izumo_en_stock')
-categories_model = watson_nlp.load('categories_esa_en_stock')
-
- Run the syntax model on the input text
-syntax_prediction = syntax_model.run('IBM announced new advances in quantum computing')
-
- Run the categories model on the result of syntax
-categories = categories_model.run(syntax_prediction)
-print(categories)
-
-Output of the code sample:
-
-{
-""categories"": [
-{
-""labels"":
-""technology & computing"",
-""computing""
-],
-""score"": 0.992489,
-""explanation"": ]
-},
-{
-""labels"":
-""science"",
-""physics""
-],
-""score"": 0.945449,
-""explanation"": ]
-}
-],
-""producer_id"": {
-""name"": ""ESA Hierarchical Categories"",
-""version"": ""1.0.0""
-}
-}
-
-Parent topic:[Watson Natural Language Processing task catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html)
-"
-BCA763BF5F62BDC635AC2E0E7C9C6A47B04745A4_0,BCA763BF5F62BDC635AC2E0E7C9C6A47B04745A4," Keyword extraction and ranking
-
-The Watson Natural Language Processing Keyword extraction with ranking block extracts noun phrases from input text based on their relevance.
-
-"
-BCA763BF5F62BDC635AC2E0E7C9C6A47B04745A4_1,BCA763BF5F62BDC635AC2E0E7C9C6A47B04745A4,"Block name
-
-keywords_text-rank__stock
-
-"
-BCA763BF5F62BDC635AC2E0E7C9C6A47B04745A4_2,BCA763BF5F62BDC635AC2E0E7C9C6A47B04745A4,"Supported language
-
-Keyword extraction with text ranking is available for the following languages. For a list of the language codes and the corresponding language, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.htmllang-codes).
-
-ar, cs, da, de, en, es, fi, fr, he, hi, it, ja, ko, nb, nl, nn, pt, ro, ru, sk, sv, tr, zh-cn
-
-"
-BCA763BF5F62BDC635AC2E0E7C9C6A47B04745A4_3,BCA763BF5F62BDC635AC2E0E7C9C6A47B04745A4,"Capabilities
-
-The keywords and text rank block ranks noun phrases extracted from an input document based on how relevant they are within the document.
-
-
-
-Capabilities of keyword extraction and ranking based on an example
-
- Capabilities Examples
-
- Ranks extracted noun phrases based on relevance ""Anna went to school at University of California Santa Cruz. Anna joined the university in 2015."" -> Anna, University of California Santa Cruz
-
-
-
-"
-BCA763BF5F62BDC635AC2E0E7C9C6A47B04745A4_4,BCA763BF5F62BDC635AC2E0E7C9C6A47B04745A4,"Dependencies on other blocks
-
-The following blocks must run before you can run the Keyword extraction with ranking block:
-
-
-
-* syntax_izumo__stock
-* noun-phrases_rbr__stock
-
-
-
-"
-BCA763BF5F62BDC635AC2E0E7C9C6A47B04745A4_5,BCA763BF5F62BDC635AC2E0E7C9C6A47B04745A4,"Code sample
-
-import watson_nlp
-text = ""Anna went to school at University of California Santa Cruz. Anna joined the university in 2015.""
-
- Load Syntax, Noun Phrases and Keywords models for English
-syntax_model = watson_nlp.load('syntax_izumo_en_stock')
-noun_phrases_model = watson_nlp.load('noun-phrases_rbr_en_stock')
-keywords_model = watson_nlp.load('keywords_text-rank_en_stock')
-
- Run the Syntax and Noun Phrases models
-syntax_prediction = syntax_model.run(text, parsers=('token', 'lemma', 'part_of_speech'))
-noun_phrases = noun_phrases_model.run(text)
-
- Run the keywords model
-keywords = keywords_model.run(syntax_prediction, noun_phrases, limit=2)
-print(keywords)
-
-Output of the code sample:
-
-'keywords':
-[{'text': 'University of California Santa Cruz', 'relevance': 0.939524, 'count': 1},
-{'text': 'Anna', 'relevance': 0.891002, 'count': 2}]
-
-Parent topic:[Watson Natural Language Processing task catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html)
-"
-E1074D5C232CB13E3CD1FB6E832753626D2FE30E_0,E1074D5C232CB13E3CD1FB6E832753626D2FE30E," Language detection
-
-The Watson Natural Language Processing Language Detection identifies the language of input text.
-
-Block namelang-detect_izumo_multi_stock
-
-"
-E1074D5C232CB13E3CD1FB6E832753626D2FE30E_1,E1074D5C232CB13E3CD1FB6E832753626D2FE30E,"Supported languages
-
-The Language Detection block is able to detect the following languages:
-
-af, ar, bs, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh_cn, zh_tw
-
-"
-E1074D5C232CB13E3CD1FB6E832753626D2FE30E_2,E1074D5C232CB13E3CD1FB6E832753626D2FE30E,"Capabilities
-
-Use this block to detect the language of an input text.
-
-"
-E1074D5C232CB13E3CD1FB6E832753626D2FE30E_3,E1074D5C232CB13E3CD1FB6E832753626D2FE30E,"Dependencies on other blocks
-
-None
-
-"
-E1074D5C232CB13E3CD1FB6E832753626D2FE30E_4,E1074D5C232CB13E3CD1FB6E832753626D2FE30E,"Code sample
-
- Load the language detection model
-lang_detection_model = watson_nlp.load('lang-detect_izumo_multi_stock')
-
- Run it on input text
-detected_lang = lang_detection_model.run('IBM announced new advances in quantum computing')
-
- Retrieve language ISO code
-print(detected_lang.to_iso_format())
-
-Output of the code sample:
-
-EN
-
-Parent topic:[Watson Natural Language Processing task catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html)
-"
-883359C27F09C3368292819B64149182441721E1_0,883359C27F09C3368292819B64149182441721E1," Noun phrase extraction
-
-The Watson Natural Language Processing Noun phrase extraction block extracts noun phrases from input text.
-
-"
-883359C27F09C3368292819B64149182441721E1_1,883359C27F09C3368292819B64149182441721E1,"Block name
-
-noun-phrases_rbr__stock
-
-Note: The ""rbr"" abbreviation in model name means rule-based reasoning. RBR models handle syntactically regular entity types such as number, email and phone.
-
-"
-883359C27F09C3368292819B64149182441721E1_2,883359C27F09C3368292819B64149182441721E1,"Supported languages
-
-Noun phrase extraction is available for the following languages. For a list of the language codes and the corresponding language, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.htmllang-codes).
-
-ar, cs, da, de, es, en, fi, fr, he, hi, it, ja, ko, nb, nl, nn, pt, ro, ru, sk, sv, tr, zh_cn, zh_tw
-
-"
-883359C27F09C3368292819B64149182441721E1_3,883359C27F09C3368292819B64149182441721E1,"Capabilities
-
-The Noun phrase extraction block extracts non-overlapping noun phrases from the input text.
-
-
-
-Capabilities of noun phrase extraction based on an example
-
- Capabilities Examples
-
- Extraction of non-overlapping noun phrases ""Anna went to school at University of California Santa Cruz"" -> Anna, school, University of California Santa Cruz
-
-
-
-"
-883359C27F09C3368292819B64149182441721E1_4,883359C27F09C3368292819B64149182441721E1,"Dependencies on other blocks
-
-None
-
-"
-883359C27F09C3368292819B64149182441721E1_5,883359C27F09C3368292819B64149182441721E1,"Code sample
-
-import watson_nlp
-
- Load the model for English
-noun_phrases_model = watson_nlp.load('noun-phrases_rbr_en_stock')
-
- Run the model on the input text
-noun_phrases = noun_phrases_model.run('Anna went to school at University of California Santa Cruz')
-print(noun_phrases)
-
-Output of the code sample:
-
-{
-""noun_phrases"": [
-{
-""span"": {
-""begin"": 0,
-""end"": 4,
-""text"": ""Anna""
-}
-},
-{
-""span"": {
-""begin"": 13,
-""end"": 19,
-""text"": ""school""
-}
-},
-{
-""span"": {
-""begin"": 23,
-""end"": 58,
-""text"": ""University of California Santa Cruz""
-}
-}
-],
-""producer_id"": {
-""name"": ""RBR Noun phrases"",
-""version"": ""0.0.1""
-}
-}
-
-Parent topic:[Watson Natural Language Processing task catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html)
-"
-B4B2E864E1ABD4EA20845750E9567225BB3F417E_0,B4B2E864E1ABD4EA20845750E9567225BB3F417E," Relations extraction
-
-Watson Natural Language Processing Relations extraction encapsulates algorithms for extracting relations between two entity mentions. For example, in the text Lionel Messi plays for FC Barcelona. a relation extraction model may decide that the entities Lionel Messi and F.C. Barcelona are in a relationship with each other, and the relationship type is works for.
-
-"
-B4B2E864E1ABD4EA20845750E9567225BB3F417E_1,B4B2E864E1ABD4EA20845750E9567225BB3F417E,"Capabilities
-
-Use this model to detect relations between discovered entities.
-
-The following table lists common relations types that are available out-of-the-box after you have run the entity models.
-
-
-
-Table 1. Available common relation types between entities
-
- Relation Description
-
- affiliatedWith Exists between two entities that have an affiliation or are similarly connected.
- basedIn Exists between an Organization and the place where it is mainly, only, or intrinsically located.
- bornAt Exists between a Person and the place where they were born.
- bornOn Exists between a Person and the Date or Time when they were born.
- clientOf Exists between two entities when one is a direct business client of the other (that is, pays for certain services or products).
- colleague Exists between two Persons who are part of the same Organization.
- competitor Exists between two Organizations that are engaged in economic competition.
- contactOf Relates contact information with an entity.
- diedAt Exists between a Person and the place at which he, she, or it died.
- diedOn Exists between a Person and the Date or Time on which he, she, or it died.
- dissolvedOn Exists between an Organization or URL and the Date or Time when it was dissolved.
- educatedAt Exists between a Person and the Organization at which he or she is or was educated.
- employedBy Exists between two entities when one pays the other for certain work or services; monetary reward must be involved. In many circumstances, marking this relation requires world knowledge.
- foundedOn Exists between an Organization or URL and the Date or Time on which it was founded.
- founderOf Exists between a Person and a Facility, Organization, or URL that they founded.
- locatedAt Exists between an entity and its location.
- managerOf Exists between a Person and another entity such as a Person or Organization that he or she manages as his or her job.
- memberOf Exists between an entity, such as a Person or Organization, and another entity to which he, she, or it belongs.
- ownerOf Exists between an entity, such as a Person or Organization, and an entity that he, she, or it owns. The owner does not need to have permanent ownership of the entity for the relation to exist.
-"
-B4B2E864E1ABD4EA20845750E9567225BB3F417E_2,B4B2E864E1ABD4EA20845750E9567225BB3F417E," parentOf Exists between a Person and their children or stepchildren.
- partner Exists between two Organizations that are engaged in economic cooperation.
- partOf Exists between a smaller and a larger entity of the same type or related types in which the second entity subsumes the first. If the entities are both events, the first must occur within the time span of the second for the relation to be recognized.
- partOfMany Exists between smaller and larger entities of the same type or related types in which the second entity, which must be plural, includes the first, which can be singular or plural.
- populationOf Exists between a place and the number of people located there, or an organization and the number of members or employees it has.
- measureOf This relation indicates the quantity of an entity or measure (height, weight, etc) of an entity.
- relative Exists between two Persons who are relatives. To identify parents, children, siblings, and spouses, use the parentOf, siblingOf, and spouseOf relations.
- residesIn Exists between a Person and a place where they live or previously lived.
- shareholdersOf Exists between a Person or Organization, and an Organization of which the first entity is a shareholder.
- siblingOf Exists between a Person and their sibling or stepsibling.
- spokespersonFor Exists between a Person and an Facility, Organization, or Person that he or she represents.
- spouseOf Exists between two Persons that are spouses.
- subsidiaryOf Exists between two Organizations when the first is a subsidiary of the second.
-
-
-
-"
-B4B2E864E1ABD4EA20845750E9567225BB3F417E_3,B4B2E864E1ABD4EA20845750E9567225BB3F417E,"In [Runtime 22.2](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-relation-extraction.html?context=cdpaas&locale=enruntime-222), relation extraction is provided as an analysis block, which depends on the Syntax analysis block and a entity mention extraction block. Starting with [Runtime 23.1](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-relation-extraction.html?context=cdpaas&locale=enruntime-231), relation extraction is provided as a workflow, which is directly run on the input text.
-
-"
-B4B2E864E1ABD4EA20845750E9567225BB3F417E_4,B4B2E864E1ABD4EA20845750E9567225BB3F417E," Relation extraction in Runtime 23.1
-
-"
-B4B2E864E1ABD4EA20845750E9567225BB3F417E_5,B4B2E864E1ABD4EA20845750E9567225BB3F417E,"Workflow name
-
-relations_transformer-workflow_multilingual_slate.153m.distilled
-
-Supported languages The Relations Workflow is available for the following languages. For a list of the language codes and the corresponding language, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.htmllang-codes).
-
-ar, de, en, es, fr, it, ja, ko, pt
-
-"
-B4B2E864E1ABD4EA20845750E9567225BB3F417E_6,B4B2E864E1ABD4EA20845750E9567225BB3F417E,"Code sample
-
-import watson_nlp
-
- Load the workflow model
-relations_workflow = watson_nlp.load('relations_transformer-workflow_multilingual_slate.153m.distilled')
-
- Run the relation extraction workflow on the input text
-relations = relations_workflow.run('Anna Smith is an engineer. Anna works at IBM.', language_code=""en"")
-print(relations.get_relation_pairs_by_type())
-
-"
-B4B2E864E1ABD4EA20845750E9567225BB3F417E_7,B4B2E864E1ABD4EA20845750E9567225BB3F417E,"Output of the code sample
-
-{'employedBy': [(('Anna', 'Person'), ('IBM', 'Organization'))]}
-
-"
-B4B2E864E1ABD4EA20845750E9567225BB3F417E_8,B4B2E864E1ABD4EA20845750E9567225BB3F417E," Relation extraction in Runtime 22.2
-
-"
-B4B2E864E1ABD4EA20845750E9567225BB3F417E_9,B4B2E864E1ABD4EA20845750E9567225BB3F417E,"Block name
-
-relations_transformer_en_stock
-
-"
-B4B2E864E1ABD4EA20845750E9567225BB3F417E_10,B4B2E864E1ABD4EA20845750E9567225BB3F417E,"Supported languages
-
-The Relations extraction block is available for English only.
-
-"
-B4B2E864E1ABD4EA20845750E9567225BB3F417E_11,B4B2E864E1ABD4EA20845750E9567225BB3F417E,"Dependencies on other blocks
-
-The following block must run before you can run the relations_transformer_en_stock block:
-
-
-
-* syntax_izumo_en_stock
-
-
-
-This must be followed by one of the following entity models on which the relations extraction block can build its results:
-
-
-
-* entity-mentions_rbr_en_stock
-* entity-mentions_bert_multi_stock
-
-
-
-"
-B4B2E864E1ABD4EA20845750E9567225BB3F417E_12,B4B2E864E1ABD4EA20845750E9567225BB3F417E,"Code sample
-
-import watson_nlp
-
- Load the models for English
-syntax_model = watson_nlp.load('syntax_izumo_en_stock')
-entity_mentions_model = watson_nlp.load('entity-mentions_bert_multi_stock')
-relation_model = watson_nlp.load('relations_transformer_en_stock')
-
- Run the prerequisite models
-syntax_prediction = syntax_model.run('Anna Smith is an engineer. Anna works at IBM.')
-entity_mentions = entity_mentions_model.run(syntax_prediction)
-
- Run the relations model
-relations_on_mentions = relation_model.run(syntax_prediction, mentions_prediction=entity_mentions)
-print(relations_on_mentions.get_relation_pairs_by_type())
-
-Output of the code sample:
-
-{'employedBy': [(('Anna', 'Person'), ('IBM', 'Organization'))]}
-
-Parent topic:[Watson Natural Language Processing task catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html)
-"
-A152F3047C3B41F06773051EA4B5B6B14DDE709E_0,A152F3047C3B41F06773051EA4B5B6B14DDE709E," Sentiment classification
-
-The Watson Natural Language Processing Sentiment classification models classify the sentiment of the input text.
-
-"
-A152F3047C3B41F06773051EA4B5B6B14DDE709E_1,A152F3047C3B41F06773051EA4B5B6B14DDE709E,"Supported languages
-
-Sentiment classification is available for the following languages. For a list of the language codes and the corresponding language, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.htmllang-codes).
-
-ar, cs, da, de, en, es, fi, fr, he, hi, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sv, tr, zh-cn
-
-"
-A152F3047C3B41F06773051EA4B5B6B14DDE709E_2,A152F3047C3B41F06773051EA4B5B6B14DDE709E," Sentiment
-
-The sentiment of text can be positive, negative or neutral.
-
-The sentiment model computes the sentiment for each sentence in the input document. The aggregated sentiment for the entire document is also calculated using the sentiment transformer workflow in Runtime 23.1. If you are using the sentiment models in Runtime 22.2 the overall document sentiment can be computed by the helper method called predict_document_sentiment.
-
-The classifications returned contain a probability. The sentiment score varies from -1 to 1. A score greater than 0 denotes a positive sentiment, a score less than 0 a negative sentiment, and a score of 0 a neutral sentiment.
-
-"
-A152F3047C3B41F06773051EA4B5B6B14DDE709E_3,A152F3047C3B41F06773051EA4B5B6B14DDE709E," Sentence sentiment workflows in runtime 23.1
-
-"
-A152F3047C3B41F06773051EA4B5B6B14DDE709E_4,A152F3047C3B41F06773051EA4B5B6B14DDE709E,"Workflow names
-
-
-
-* sentiment-aggregated_transformer-workflow_multilingual_slate.153m.distilled
-* sentiment-aggregated_transformer-workflow_multilingual_slate.153m.distilled-cpu
-
-
-
-The sentiment-aggregated_transformer-workflow_multilingual_slate.153m.distilled workflow can be used on both CPUs and GPUs.
-
-The sentiment-aggregated_transformer-workflow_multilingual_slate.153m.distilled-cpu workflow is optimized for CPU-based runtimes.
-
-Code sample using the sentiment-aggregated_transformer-workflow_multilingual_slate.153m.distilled workflow
-
- Load the Sentiment workflow
-sentiment_model = watson_nlp.load('sentiment-aggregated_transformer-workflow_multilingual_slate.153m.distilled-cpu')
-
- Run the sentiment model on the result of the syntax results
-sentiment_result = sentiment_model.run('The rooms are nice. But the beds are not very comfortable.')
-
- Print the sentence sentiment results
-print(sentiment_result)
-
-"
-A152F3047C3B41F06773051EA4B5B6B14DDE709E_5,A152F3047C3B41F06773051EA4B5B6B14DDE709E,"Output of the code sample
-
-{
-""document_sentiment"": {
-""score"": -0.339735,
-""label"": ""SENT_NEGATIVE"",
-""mixed"": true,
-""sentiment_mentions"": [
-{
-""span"": {
-""begin"": 0,
-""end"": 19,
-""text"": ""The rooms are nice.""
-},
-""sentimentprob"": {
-""positive"": 0.9720447063446045,
-""neutral"": 0.011838269419968128,
-""negative"": 0.016117043793201447
-}
-},
-{
-""span"": {
-""begin"": 20,
-""end"": 58,
-""text"": ""But the beds are not very comfortable.""
-},
-""sentimentprob"": {
-""positive"": 0.0011594508541747928,
-""neutral"": 0.006315878126770258,
-""negative"": 0.9925248026847839
-}
-}
-]
-},
-""targeted_sentiments"": {
-""targeted_sentiments"": {},
-""producer_id"": {
-""name"": ""Aggregated Sentiment Workflow"",
-""version"": ""0.0.1""
-}
-},
-""producer_id"": {
-""name"": ""Aggregated Sentiment Workflow"",
-""version"": ""0.0.1""
-}
-}
-
-"
-A152F3047C3B41F06773051EA4B5B6B14DDE709E_6,A152F3047C3B41F06773051EA4B5B6B14DDE709E," Sentence sentiment blocks in 22.2 runtimes
-
-"
-A152F3047C3B41F06773051EA4B5B6B14DDE709E_7,A152F3047C3B41F06773051EA4B5B6B14DDE709E,"Block name
-
-sentiment_sentence-bert_multi_stock
-
-"
-A152F3047C3B41F06773051EA4B5B6B14DDE709E_8,A152F3047C3B41F06773051EA4B5B6B14DDE709E,"Dependencies on other blocks
-
-The following block must run before you can run the Sentence sentiment block:
-
-
-
-* syntax_izumo__stock
-
-
-
-Code sample using the sentiment_sentence-bert_multi_stock block
-
-import watson_nlp
-from watson_nlp.toolkit.sentiment_analysis_utils import predict_document_sentiment
- Load Syntax and a Sentiment model for English
-syntax_model = watson_nlp.load('syntax_izumo_en_stock')
-sentiment_model = watson_nlp.load('sentiment_sentence-bert_multi_stock')
-
- Run the syntax model on the input text
-syntax_result = syntax_model.run('The rooms are nice. But the beds are not very comfortable.')
-
- Run the sentiment model on the result of the syntax results
-sentiment_result = sentiment_model.run_batch(syntax_result.get_sentence_texts(), syntax_result.sentences)
-
- Print the sentence sentiment results
-print(sentiment_result)
-
- Get the aggregated document sentiment
-document_sentiment = predict_document_sentiment(sentiment_result, sentiment_model.class_idxs)
-print(document_sentiment)
-
-Output of the code sample:
-
-[{
-""score"": 0.9540348989256836,
-""label"": ""SENT_POSITIVE"",
-""sentiment_mention"": {
-""span"": {
-""begin"": 0,
-""end"": 19,
-""text"": ""The rooms are nice.""
-},
-""sentimentprob"": {
-""positive"": 0.919123649597168,
-""neutral"": 0.05862388014793396,
-""negative"": 0.022252488881349564
-}
-},
-""producer_id"": {
-""name"": ""Sentence Sentiment Bert Processing"",
-""version"": ""0.1.0""
-}
-}, {
-""score"": -0.9772116371114815,
-""label"": ""SENT_NEGATIVE"",
-""sentiment_mention"": {
-""span"": {
-""begin"": 20,
-""end"": 58,
-""text"": ""But the beds are not very comfortable.""
-"
-A152F3047C3B41F06773051EA4B5B6B14DDE709E_9,A152F3047C3B41F06773051EA4B5B6B14DDE709E,"},
-""sentimentprob"": {
-""positive"": 0.015949789434671402,
-""neutral"": 0.025898978114128113,
-""negative"": 0.9581512808799744
-}
-},
-""producer_id"": {
-""name"": ""Sentence Sentiment Bert Processing"",
-""version"": ""0.1.0""
-}
-}]
-{
-""score"": -0.335185,
-""label"": ""SENT_NEGATIVE"",
-""mixed"": true,
-""sentiment_mentions"": [
-{
-""span"": {
-""begin"": 0,
-""end"": 19,
-""text"": ""The rooms are nice.""
-},
-""sentimentprob"": {
-""positive"": 0.919123649597168,
-""neutral"": 0.05862388014793396,
-""negative"": 0.022252488881349564
-}
-},
-{
-""span"": {
-""begin"": 20,
-""end"": 58,
-""text"": ""But the beds are not very comfortable.""
-},
-""sentimentprob"": {
-""positive"": 0.015949789434671402,
-""neutral"": 0.025898978114128113,
-""negative"": 0.9581512808799744
-}
-}
-]
-}
-
-"
-A152F3047C3B41F06773051EA4B5B6B14DDE709E_10,A152F3047C3B41F06773051EA4B5B6B14DDE709E," Targets sentiment extraction
-
-Targets sentiment extraction extracts sentiments expressed in text and identifies the targets of those sentiments.
-
-It can handle multiple targets with different sentiment in one sentence as opposed to the sentiment block described above.
-
-For example, given the input sentence The served food was delicious, yet the service was slow., the Targets sentiment block identifies that there is a positive sentiment expressed in the target ""food"", and a negative sentiment expressed in ""service"".
-
-The model has been fine-tuned on English data only. Although you can use the model on the other languages listed under Supported languages, the results might vary.
-
-"
-A152F3047C3B41F06773051EA4B5B6B14DDE709E_11,A152F3047C3B41F06773051EA4B5B6B14DDE709E," Targets sentiment workflows in Runtime 23.1
-
-"
-A152F3047C3B41F06773051EA4B5B6B14DDE709E_12,A152F3047C3B41F06773051EA4B5B6B14DDE709E,"Workflow names
-
-
-
-* targets-sentiment_transformer-workflow_multilingual_slate.153m.distilled
-* targets-sentiment_transformer-workflow_multilingual_slate.153m.distilled-cpu
-
-
-
-The targets-sentiment_transformer-workflow_multilingual_slate.153m.distilled workflow can be used on both CPUs and GPUs.
-
-The targets-sentiment_transformer-workflow_multilingual_slate.153m.distilled-cpu workflow is optimized for CPU-based runtimes.
-
-Code sample for the targets-sentiment_transformer-workflow_multilingual_slate.153m.distilled workflow
-
-import watson_nlp
- Load Targets Sentiment model for English
-targets_sentiment_model = watson_nlp.load('targets-sentiment_transformer-workflow_multilingual_slate.153m.distilled')
- Run the targets sentiment model on the input text
-targets_sentiments = targets_sentiment_model.run('The rooms are nice, but the bed was not very comfortable.')
- Print the targets with the associated sentiment
-print(targets_sentiments)
-
-"
-A152F3047C3B41F06773051EA4B5B6B14DDE709E_13,A152F3047C3B41F06773051EA4B5B6B14DDE709E,"Output of the code sample:
-
-{
-""targeted_sentiments"": {
-""rooms"": {
-""score"": 0.990798830986023,
-""label"": ""SENT_POSITIVE"",
-""mixed"": false,
-""sentiment_mentions"": [
-{
-""span"": {
-""begin"": 4,
-""end"": 9,
-""text"": ""rooms""
-},
-""sentimentprob"": {
-""positive"": 0.990798830986023,
-""neutral"": 0.0,
-""negative"": 0.00920116901397705
-}
-}
-]
-},
-""bed"": {
-""score"": -0.9920912981033325,
-""label"": ""SENT_NEGATIVE"",
-""mixed"": false,
-""sentiment_mentions"": [
-{
-""span"": {
-""begin"": 28,
-""end"": 31,
-""text"": ""bed""
-},
-""sentimentprob"": {
-""positive"": 0.00790870189666748,
-""neutral"": 0.0,
-""negative"": 0.9920912981033325
-}
-}
-]
-}
-},
-""producer_id"": {
-""name"": ""Transformer-based Targets Sentiment Extraction Workflow"",
-""version"": ""0.0.1""
-}
-}
-
-"
-A152F3047C3B41F06773051EA4B5B6B14DDE709E_14,A152F3047C3B41F06773051EA4B5B6B14DDE709E," Targets sentiment blocks in 22.2 runtimes
-
-Block nametargets-sentiment_sequence-bert_multi_stock
-
-"
-A152F3047C3B41F06773051EA4B5B6B14DDE709E_15,A152F3047C3B41F06773051EA4B5B6B14DDE709E,"Dependencies on other blocks
-
-The following block must run before you can run the Targets sentiment extraction block:
-
-
-
-* syntax_izumo__stock
-
-
-
-Code sample using the sentiment-targeted_bert_multi_stock block
-
-import watson_nlp
-
- Load Syntax and the Targets Sentiment model for English
-syntax_model = watson_nlp.load('syntax_izumo_en_stock')
-targets_sentiment_model = watson_nlp.load('targets-sentiment_sequence-bert_multi_stock')
-
- Run the syntax model on the input text
-syntax_result = syntax_model.run('The rooms are nice, but the bed was not very comfortable.')
-
- Run the targets sentiment model on the syntax results
-targets_sentiments = targets_sentiment_model.run(syntax_result)
-
- Print the targets with the associated sentiment
-print(targets_sentiments)
-
-Output of the code sample:
-
-{
-""targeted_sentiments"": {
-""rooms"": {
-""score"": 0.9989274144172668,
-""label"": ""SENT_POSITIVE"",
-""mixed"": false,
-""sentiment_mentions"": [
-{
-""span"": {
-""begin"": 4,
-""end"": 9,
-""text"": ""rooms""
-},
-""sentimentprob"": {
-""positive"": 0.9989274144172668,
-""neutral"": 0.0,
-""negative"": 0.0010725855827331543
-}
-}
-]
-},
-""bed"": {
-""score"": -0.9977545142173767,
-""label"": ""SENT_NEGATIVE"",
-""mixed"": false,
-""sentiment_mentions"": [
-{
-""span"": {
-""begin"": 28,
-""end"": 31,
-""text"": ""bed""
-},
-""sentimentprob"": {
-""positive"": 0.002245485782623291,
-""neutral"": 0.0,
-""negative"": 0.9977545142173767
-}
-}
-]
-}
-},
-""producer_id"": {
-""name"": ""BERT TSA"",
-""version"": ""0.0.1""
-}
-"
-A152F3047C3B41F06773051EA4B5B6B14DDE709E_16,A152F3047C3B41F06773051EA4B5B6B14DDE709E,"}
-
-Parent topic:[Watson Natural Language Processing task catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html)
-"
-DCE29488A4D041B77F6E9B1B514F41335FAE0696_0,DCE29488A4D041B77F6E9B1B514F41335FAE0696," Syntax analysis
-
-The Watson Natural Language Processing Syntax block encapsulates syntax analysis functionality.
-
-"
-DCE29488A4D041B77F6E9B1B514F41335FAE0696_1,DCE29488A4D041B77F6E9B1B514F41335FAE0696,"Block names
-
-
-
-* syntax_izumo__stock
-* syntax_izumo__stock-dp (Runtime 23.1 only)
-
-
-
-"
-DCE29488A4D041B77F6E9B1B514F41335FAE0696_2,DCE29488A4D041B77F6E9B1B514F41335FAE0696,"Supported languages
-
-The Syntax analysis block is available for the following languages. For a list of the language codes and the corresponding language, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.htmllang-codes).
-
-Language codes to use for model syntax_izumo__stock: af, ar, bs, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh_cn, zh_tw
-
-Language codes to use for model syntax_izumo__stock-dp: af, ar, bs, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh
-
-
-
-List of the supported languages for each syntax task
-
- Task Supported language codes
-
- Tokenization af, ar, bs, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh_cn, zh_tw, zh
- Part-of-speech tagging af, ar, bs, ca, cs, da, de, nl, nn, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, pl, pt, ro, ru, sk, sr, sv, tr, zh_cn, zh_tw, zh
-"
-DCE29488A4D041B77F6E9B1B514F41335FAE0696_3,DCE29488A4D041B77F6E9B1B514F41335FAE0696," Lemmatization af, ar, bs, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh_cn, zh_tw, zh
- Sentence detection af, ar, bs, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh_cn, zh_tw, zh
- Paragraph detection af, ar, bs, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh_cn, zh_tw, zh
- Dependency parsing af, ar, bs, cs, da, de, en, es, fi, fr, hi, hr, it, ja, nb, nl, nn, pt, ro, ru, sk, sr, sv
-
-
-
-"
-DCE29488A4D041B77F6E9B1B514F41335FAE0696_4,DCE29488A4D041B77F6E9B1B514F41335FAE0696,"Capabilities
-
-Use this block to perform tasks like sentence detection, tokenization, part-of-speech tagging, lemmatization and dependency parsing in different languages. For most tasks, you will likely only need sentence detection, tokenization, and part-of-speech tagging. For these use cases use the syntax_model_xx_stock model. If you want to run dependency parsing in Runtime 23.1, use the syntax_model_xx_stock-dp model. In Runtime 22.2, dependency parsing is included in the syntax_model_xx_stock model.
-
-The analysis for Part-of-speech (POS) tagging and dependencies follows the Universal Parts of Speech tagset ([Universal POS tags](https://universaldependencies.org/u/pos/)) and the Universal Dependencies v2 tagset ([Universal Dependency Relations](https://universaldependencies.org/u/dep/)).
-
-The following table shows you the capabilities of each task based on the same example and the outcome to the parse.
-
-
-
-Capabilities of each syntax task based on an example
-
- Capabilities Examples Parser attributes
-
- Tokenization I don't like Mondays"" --> ""I"" , ""do"", ""n't"", ""like"", ""Mondays token
- Part-Of_Speech detection ""I don't like Mondays"" --> ""I""\POS_PRON, ""do""\POS_AUX, ""n't""\POS_PART, ""like""\POS_VERB, ""Mondays""\POS_PROPN part_of_speech
- Lemmatization I don't like Mondays"" --> ""I"", ""do"", ""not"", ""like"", ""Monday lemma
- Dependency parsing I don't like Mondays"" --> ""I""-SUBJECT->""like""<-OBJECT-""Mondays dependency
- Sentence detection ""I don't like Mondays"" --> returns this sentence sentence
- Paragraph detection (Currently paragraph detection is still experimental and returns similar results to sentence detection.) ""I don't like Mondays"" --> returns this sentence as being a paragraph sentence
-
-
-
-"
-DCE29488A4D041B77F6E9B1B514F41335FAE0696_5,DCE29488A4D041B77F6E9B1B514F41335FAE0696,"Dependencies on other blocks
-
-None
-
-"
-DCE29488A4D041B77F6E9B1B514F41335FAE0696_6,DCE29488A4D041B77F6E9B1B514F41335FAE0696,"Code sample
-
-import watson_nlp
-
- Load Syntax for English
-syntax_model = watson_nlp.load('syntax_izumo_en_stock')
-
- Detect tokens, lemma and part-of-speech
-text = 'I don't like Mondays'
-syntax_prediction = syntax_model.run(text, parsers=('token', 'lemma', 'part_of_speech'))
-
- Print the syntax result
-print(syntax_prediction)
-
-Output of the code sample:
-
-{
-""text"": ""I don't like Mondays"",
-""producer_id"": {
-""name"": ""Izumo Text Processing"",
-""version"": ""0.0.1""
-},
-""tokens"": [
-{
-""span"": {
-""begin"": 0,
-""end"": 1,
-""text"": ""I""
-},
-""lemma"": ""I"",
-""part_of_speech"": ""POS_PRON""
-},
-{
-""span"": {
-""begin"": 2,
-""end"": 4,
-""text"": ""do""
-},
-""lemma"": ""do"",
-""part_of_speech"": ""POS_AUX""
-},
-{
-""span"": {
-""begin"": 4,
-""end"": 7,
-""text"": ""n't""
-},
-""lemma"": ""not"",
-""part_of_speech"": ""POS_PART""
-},
-{
-""span"": {
-""begin"": 8,
-""end"": 12,
-""text"": ""like""
-},
-""lemma"": ""like"",
-""part_of_speech"": ""POS_VERB""
-},
-{
-""span"": {
-""begin"": 13,
-""end"": 20,
-""text"": ""Mondays""
-},
-""lemma"": ""Monday"",
-""part_of_speech"": ""POS_PROPN""
-}
-],
-""sentences"": [
-{
-""span"": {
-""begin"": 0,
-""end"": 20,
-""text"": ""I don't like Mondays""
-}
-}
-],
-""paragraphs"": [
-{
-""span"": {
-""begin"": 0,
-"
-DCE29488A4D041B77F6E9B1B514F41335FAE0696_7,DCE29488A4D041B77F6E9B1B514F41335FAE0696,"""end"": 20,
-""text"": ""I don't like Mondays""
-}
-}
-]
-}
-
-Parent topic:[Watson Natural Language Processing task catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html)
-"
-ABCA967CD96AB805BE518E8A52EF984499C62F6C_0,ABCA967CD96AB805BE518E8A52EF984499C62F6C," Tone classification
-
-The Tone model in the Watson Natural Language Processing classification workflow classifies the tone in the input text.
-
-"
-ABCA967CD96AB805BE518E8A52EF984499C62F6C_1,ABCA967CD96AB805BE518E8A52EF984499C62F6C,"Workflow name
-
-ensemble_classification-workflow_en_tone-stock
-
-"
-ABCA967CD96AB805BE518E8A52EF984499C62F6C_2,ABCA967CD96AB805BE518E8A52EF984499C62F6C,"Supported languages
-
-
-
-* English and French
-
-
-
-"
-ABCA967CD96AB805BE518E8A52EF984499C62F6C_3,ABCA967CD96AB805BE518E8A52EF984499C62F6C,"Capabilities
-
-The Tone classification model is a pre-trained document classification model for the task of classifying the tone in the input document. The model identifies the tone of the input document and classifies it as:
-
-
-
-* Excited
-* Frustrated
-* Impolite
-* Polite
-* Sad
-* Satisfied
-* Sympathetic
-
-
-
-Unlike the Sentiment model, which classifies each individual sentence, the Tone model classifies the entire input document. As such, the Tone model works optimally when the input text to classify is no longer than 1000 characters. If you would like to classify texts longer than 1000 characters, split the text into sentences or paragraphs for example and apply the Tone model on each sentence or paragraph.
-
-A document may be classified into multiple categories or into no category.
-
-
-
-Capabilities of tone classification
-
- Capabilities Example
-
- Identifies the tone of a document and classifies it ""I'm really happy with how this was handled, thank you!"" --> excited, satisfied
-
-
-
-"
-ABCA967CD96AB805BE518E8A52EF984499C62F6C_4,ABCA967CD96AB805BE518E8A52EF984499C62F6C,"Dependencies on other blocks
-
-None
-
-"
-ABCA967CD96AB805BE518E8A52EF984499C62F6C_5,ABCA967CD96AB805BE518E8A52EF984499C62F6C,"Code sample
-
-import watson_nlp
-
- Load the Tone workflow model for English
-tone_model = watson_nlp.load('ensemble_classification-workflow_en_tone-stock')
-
- Run the Tone model
-tone_result = tone_model.run(""I'm really happy with how this was handled, thank you!"")
-print(tone_result)
-
-Output of the code sample:
-
-{
-""classes"": [
-{
-""class_name"": ""excited"",
-""confidence"": 0.6896854620082722
-},
-{
-""class_name"": ""satisfied"",
-""confidence"": 0.6570277557333078
-},
-{
-""class_name"": ""polite"",
-""confidence"": 0.33628806679460566
-},
-{
-""class_name"": ""sympathetic"",
-""confidence"": 0.17089694967744093
-},
-{
-""class_name"": ""sad"",
-""confidence"": 0.06880583874412932
-},
-{
-""class_name"": ""frustrated"",
-""confidence"": 0.010365418217209686
-},
-{
-""class_name"": ""impolite"",
-""confidence"": 0.002470793624966174
-}
-],
-""producer_id"": {
-""name"": ""Voting based Ensemble"",
-""version"": ""0.0.1""
-}
-}
-
-Parent topic:[Watson Natural Language Processing task catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html)
-"
-9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B_0,9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B," Classifying text with a custom classification model
-
-You can train your own models for text classification using strong classification algorithms from three different families:
-
-
-
-* Classic machine learning using SVM (Support Vector Machines)
-* Deep learning using CNN (Convolutional Neural Networks)
-* A transformer-based algorithm using a pre-trained transformer model:
-
-
-
-* Runtime 23.1: Slate IBM Foundation model
-* Runtime 22.x: Google BERT Multilingual model
-
-
-
-
-
-The Watson Natural Language Processing library also offers an easy to use Ensemble classifier that combines different classification algorithms and majority voting.
-
-The algorithms support multi-label and multi-class tasks and special cases, like if the document belongs to one class only (single-label task), or binary classification tasks.
-
-Note:Training classification models is CPU and memory intensive. Depending on the size of your training data, the environment might not be large enough to complete the training. If you run into issues with the notebook kernel during training, create a custom notebook environment with a larger amount of CPU and memory, and use that to run your notebook. Especially for transformer-based algorithms, you should use a GPU-based environment, if it is available to you. See [Creating your own environment template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-customize-env-definition.html).
-
-Topic sections:
-
-
-
-* [Input data format for training](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=eninput-data)
-* [Input data requirements](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=eninput-data-reqs)
-"
-9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B_1,9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B,"* [Stopwords](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=enstopwords)
-* [Training SVM algorithms](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=entrain-svm)
-* [Training the CNN algorithm](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=entrain-cnn)
-* [Training the transformer algorithm by using the Slate IBM Foundation model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=entrain-slate)
-* [Training a custom transformer model by using a model provided by Hugging Face](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=entrain-huface)
-* [Training the multilingual BERT algorithm](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=entrain-bert)
-* [Training an ensemble model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=entrain-ensemble)
-"
-9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B_2,9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B,"* [Training best practices](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=enbest-practices)
-* [Applying the model on new data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=enapply-model)
-* [Choosing the right algorithm for your use case](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=enchoose-algorithm)
-
-
-
-"
-9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B_3,9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B," Input data format for training
-
-Classification blocks accept training data in CSV and JSON formats.
-
-
-
-* The CSV Format
-
-The CSV file should contain no header. Each row in the CSV file represents an example record. Each record has one or more columns, where the first column represents the text and the subsequent columns represent the labels associated with that text.
-
-Note:
-
-
-
-* The SVM and CNN algorithms do not support training data where an instance has no labels. So, if you are using the SVM algorithm, or the CNN algorithm, or an Ensemble including one of these algorithms, each CSV row must have at least one label, i.e., 2 columns.
-* The BERT-based and Slate-based Transformer algorithms support training data where each instance has 0, 1 or more than one label.
-
-Example 1,label 1
-Example 2,label 1,label 2
-
-
-
-* The JSON Format
-
-The training data is represented as an array with multiple JSON objects. Each JSON object represents one training instance, and must have a text and a labels field. The text represents the training example, and labels stores the labels associated with the example (0, 1, or more than one label).
-
-[
-{
-""text"": ""Example 1"",
-""labels"": ""label 1""]
-},
-{
-""text"": ""Example 2"",
-""labels"": ""label 1"", ""label 2""]
-},
-{
-""text"": ""Example 3"",
-""labels"": ]
-}
-]
-
-Note:
-
-
-
-* ""labels"": [] denotes an example with no labels. The SVM and CNN algorithms do not support training data where an instance has no labels. So, if you are using the SVM algorithm, or the CNN algorithm, or an Ensemble including one of these algorithms, each JSON object must have at least one label.
-* The BERT-based and Slate-based Transformer algorithms support training data where each instance has 0, 1 or more than one label.
-
-
-
-
-
-"
-9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B_4,9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B," Input data requirements
-
-For SVM and CNN algorithms:
-
-
-
-* Minimum number of unique labels required: 2
-* Minimum number of text examples required per label: 5
-
-
-
-For the BERT-based and Slate-based Transformer algorithms:
-
-
-
-* Minimum number of unique labels required: 1
-* Minimum number of text examples required per label: 5
-
-
-
-Note that the training data in CSV or JSON format is converted to a DataStream before training. Instead of training data files, you can also pass data streams directly to the training functions of classification blocks.
-
-"
-9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B_5,9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B," Stopwords
-
-You can provide your own stopwords that will be removed during preprocessing. Stopwords file inputs are expected in a standard format: a single text file with one phrase per line. Stopwords can be provided as a list or as a file in a standard format.
-
-Stopwords can be used only with the Ensemble classifier.
-
-"
-9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B_6,9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B," Training SVM algorithms
-
-SVM is a support vector machine classifier that can be trained using predictions on any kind of input provided by the embedding or vectorization blocks as feature vectors, for example, by USE (Universal Sentence Encoder) embeddings and TF-IDF vectorizers. It supports multi-class and multi-label text classification and produces confidence scores via Platt Scaling.
-
-For all options that are available for configuring SVM training, enter:
-
-help(watson_nlp.blocks.classification.svm.SVM.train)
-
-To train SVM algorithms:
-
-
-
-1. Begin with these preprocessing steps:
-
-import watson_nlp
-from watson_core.data_model.streams.resolver import DataStreamResolver
-from watson_nlp.blocks.classification.svm import SVM
-
-training_data_file = """"
-
- Create datastream from training data
-data_stream_resolver = DataStreamResolver(target_stream_type=list, expected_keys={'text': str, 'labels': list})
-training_data = data_stream_resolver.as_data_stream(training_data_file)
-
- Load a Syntax model
-syntax_model = watson_nlp.load('syntax_izumo_en_stock')
-
- Create Syntax stream
-text_stream, labels_stream = training_data[0], training_data[1]
-syntax_stream = syntax_model.stream(text_stream)
-
-
-
-
-
-1. Train the classification model using USE embeddings. See [Pretrained USE embeddings available out-of-the-box](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html?context=cdpaas&locale=enuse-embeddings) for a list of the pretrained blocks that are available.
-
- download embedding
-use_embedding_model = watson_nlp.load('embedding_use_en_stock')
-
-"
-9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B_7,9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B,"use_train_stream = use_embedding_model.stream(syntax_stream, doc_embed_style='raw_text')
- NOTE: doc_embed_style can be changed to avg_sent as well. For more information check the documentation for Embeddings
- Or the USE run function API docs
-use_svm_train_stream = watson_nlp.data_model.DataStream.zip(use_train_stream, labels_stream)
-
- Train SVM using Universal Sentence Encoder (USE) training stream
-classification_model = SVM.train(use_svm_train_stream)
-
-
-
-"
-9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B_8,9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B," Pretrained USE embeddings available out-of-the-box
-
-USE embeddings are wrappers around Google Universal Sentence Encoder embeddings available in TFHub. These embeddings are used in the document classification SVM algorithm.
-
-The following table lists the pretrained blocks for USE embeddings that are available and the languages that are supported. For a list of the language codes and the corresponding language, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.htmllang-codes).
-
-
-
-List of pretrained USE embeddings with their supported languages
-
- Block name Model name Supported languages
-
- use embedding_use_en_stock English only
- use embedding_use_multi_small ar, de, en, es, fr, it, ja, ko, nl, pl, pt, ru, tr, zh-cn, zh-tw
- use embedding_use_multi_large ar, de, en, es, fr, it, ja, ko, nl, pl, pt, ru, tr, zh-cn, zh-tw
-
-
-
-When using USE embeddings, consider the following:
-
-
-
-* Choose embedding_use_en_stock if your task involves English text.
-* Choose one of the multilingual USE embeddings if your task involves text in a non-English language, or you want to train multilingual models.
-* The USE embeddings exhibit different trade-offs between quality of the trained model and throughput at inference time, as described below. Try different embeddings to decide the trade-off between quality of result and inference throughput that is appropriate for your use case.
-
-
-
-* embedding_use_multi_small has reasonable quality, but it is fast at inference time
-* embedding_use_en_stock is a English-only version of embedding_embedding_use_multi_small, hence it is smaller and exhibits higher inference throughput
-"
-9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B_9,9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B,"* embedding_use_multi_large is based on Transformer architecture, and therefore it provides higher quality of result, with lower throughput at inference time
-
-
-
-
-
-"
-9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B_10,9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B," Training the CNN algorithm
-
-CNN is a simple convolutional network architecture, built for multi-class and multi-label text classification on short texts. It utilizes GloVe embeddings. GloVe embeddings encode word-level semantics into a vector space. The GloVe embeddings for each language are trained on the Wikipedia corpus in that language. For information on using GloVe embeddings, see the open source GloVe embeddings documentation.
-
-For all the options that are available for configuring CNN training, enter:
-
-help(watson_nlp.blocks.classification.cnn.CNN.train)
-
-To train CNN algorithms:
-
-import watson_nlp
-from watson_core.data_model.streams.resolver import DataStreamResolver
-from watson_nlp.blocks.classification.cnn import CNN
-
-training_data_file = """"
-
- Create datastream from training data
-data_stream_resolver = DataStreamResolver(target_stream_type=list, expected_keys={'text': str, 'labels': list})
-training_data = data_stream_resolver.as_data_stream(training_data_file)
-
- Load a Syntax model
-syntax_model = watson_nlp.load('syntax_izumo_en_stock')
-
- Create Syntax stream
-text_stream, labels_stream = training_data[0], training_data[1]
-syntax_stream = syntax_model.stream(text_stream)
-
- Download GloVe embeddings
-glove_embedding_model = watson_nlp.load('embedding_glove_en_stock')
-
- Train CNN
-classification_model = CNN.train(watson_nlp.data_model.DataStream.zip(syntax_stream, labels_stream), embedding=glove_embedding_model.embedding)
-
-"
-9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B_11,9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B," Training the transformer algorithm by using the IBM Slate model
-
-The transformer algorithm using the pretrained Slate IBM Foundation model can be used for multi-class and multi-label text classification on short texts.
-
-The pretrained Slate IBM Foundation model is only available in Runtime 23.1.
-
-For all the options available for configuring Transformer training, enter:
-
-help(watson_nlp.blocks.classification.transformer.Transformer.train)
-
-To train Transformer algorithms:
-
-import watson_nlp
-from watson_nlp.blocks.classification.transformer import Transformer
-from watson_core.data_model.streams.resolver import DataStreamResolver
-training_data_file = ""train_data.json""
-
- create datastream from training data
-data_stream_resolver = DataStreamResolver(target_stream_type=list, expected_keys={'text': str, 'labels': list})
-train_stream = data_stream_resolver.as_data_stream(training_data_file)
-
- Load pre-trained Slate model
-pretrained_model_resource = watson_nlp.load('pretrained-model_slate.153m.distilled_many_transformer_multilingual_uncased ')
-
- Train model
-classification_model = Transformer.train(train_stream, pretrained_model_resource)
-
-"
-9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B_12,9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B," Training a custom transformer model by using a model provided by Hugging Face
-
-Note: This training method is only available in Runtime 23.1.
-
-You can train your custom transformer-based model by using a pretrained model from Hugging Face.
-
-To use a Hugging Face model, specify the model name as the pretrained_model_resource parameter in the train method of watson_nlp.blocks.classification.transformer.Transformer. Go to [https://huggingface.co/models](https://huggingface.co/models) to copy the model name.
-
-To get a list of all the options available for configuring a transformer training, type this code:
-
-help(watson_nlp.blocks.classification.transformer.Transformer.train)
-
-For information on how to train transformer algorithms, refer to this code example:
-
-import watson_nlp
-from watson_nlp.blocks.classification.transformer import Transformer
-from watson_core.data_model.streams.resolver import DataStreamResolver
-training_data_file = ""train_data.json""
-
- create datastream from training data
-data_stream_resolver = DataStreamResolver(target_stream_type=list, expected_keys={'text': str, 'labels': list})
-train_stream = data_stream_resolver.as_data_stream(training_data_file)
-
- Specify the name of the Hugging Face model
-huggingface_model_name = 'xml-roberta-base'
-
- Train model
-classification_model = Transformer.train(train_stream, pretrained_model_resource=huggingface_model_name)
-
-"
-9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B_13,9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B," Training the multilingual BERT algorithm
-
-BERT is a transformer-based architecture, built for multi-class and multi-label text classification on short texts.
-
-Note: The Google BERT Multilingual model is available in 22.2 runtimes only.
-
-For all the options available for configuring BERT training, enter:
-
-help(watson_nlp.blocks.classification.bert.BERT.train)
-
-To train BERT algorithms:
-
-import watson_nlp
-from watson_nlp.blocks.classification.bert import BERT
-from watson_core.data_model.streams.resolver import DataStreamResolver
-training_data_file = """"
-
- create datastream from training data
-data_stream_resolver = DataStreamResolver(target_stream_type=list, expected_keys={'text': str, 'labels': list})
-train_stream = data_stream_resolver.as_data_stream(training_data_file)
-
- Load pre-trained BERT model
-pretrained_model_resource = watson_nlp.load('pretrained-model_bert_multi_bert_multi_uncased')
-
- Train model
-classification_model = BERT.train(train_stream, pretrained_model_resource)
-
-"
-9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B_14,9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B," Training an ensemble model
-
-The Ensemble model is a weighted ensemble of these three algorithms: CNN, SVM with TF-IDF and SVM with USE. It computes the weighted mean of a set of classification predictions using confidence scores. The ensemble model is very easy to use.
-
-"
-9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B_15,9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B," Using the Runtime 22.2 and Runtime 23.1 environments
-
-The GenericEnsemble classifier allows more flexibility for the user to choose from the three base classifiers TFIDF-SVM, USE-SVM and CNN. For texts ranging from 50 to 1000 characters, using the combination of TFIDF-SVM and USE-SVM classifiers often yields a good balance of quality and performance. On some medium or long documents (500-1000+ characters), adding the CNN to the Ensemble could help increase quality, but it usually comes with a significant runtime performance impact (lower throughput and increased model loading time).
-
-For all of the options available for configuring Ensemble training, enter:
-
-help(watson_nlp.workflows.classification.GenericEnsemble)
-
-To train Ensemble algorithms:
-
-import watson_nlp
-from watson_nlp.workflows.classification import GenericEnsemble
-from watson_nlp.workflows.classification.base_classifier import GloveCNN
-from watson_nlp.workflows.classification.base_classifier import TFidfSvm
-from watson_nlp.workflows.classification.base_classifier import UseSvm
-
-training_data_file = """"
-
- Create datastream from training data
-data_stream_resolver = DataStreamResolver(target_stream_type=list, expected_keys={'text': str, 'labels': list})
-training_data = data_stream_resolver.as_data_stream(training_data_file)
-
- Syntax Model
-syntax_model = watson_nlp.load('syntax_izumo_en_stock')
- USE Embedding Model
-use_model = watson_nlp.load('embedding_use_en_stock')
- GloVE Embedding model
-glove_model = watson_nlp.load('embedding_glove_en_stock')
-
-ensemble_model = GenericEnsemble.train(training_data, syntax_model,
-base_classifiers_params=[
-"
-9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B_16,9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B,"TFidfSvm.TrainParams(syntax_model=syntax_model),
-GloveCNN.TrainParams(syntax_model=syntax_model, glove_embedding_model=glove_model, cnn_epochs=5),
-UseSvm.TrainParams(syntax_model=syntax_model, use_embedding_model=use_model, doc_embed_style='raw_text')],
-use_ewl=True)
-
-"
-9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B_17,9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B," Pretrained stopword models available out-of-the-box
-
-The text model for identifying stopwords is used in training the document classification ensemble model.
-
-The following table lists the pretrained stopword models and the language codes that are supported (xx stands for the language code). For a list of the language codes and the corresponding language, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.htmllang-codes).
-
-
-
-List of pretrained stopword models with their supported languages
-
- Resource class Model name Supported languages
-
- text text_stopwords_classification_ensemble_xx_stock ar, de, es, en, fr, it, ja, ko
-
-
-
-"
-9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B_18,9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B," Training best practices
-
-There are certain constraints on the quality and quantity of data to ensure that the classifications model training can complete in a reasonable amount of time and also meets various performance criteria. These are listed below. Note that none are hard restrictions. However, the further one deviates from these guidelines, the greater the chance that the model fails to train or the model will not be satisfactory.
-
-
-
-* Data quantity
-
-
-
-* The highest number of classes classification model has been tested on is 1200.
-* The best suited text size for training and testing data for classification is around 3000 code points. However, larger texts can also be processed, but the runtime performance might be slower.
-* Training time will increase based on the number of examples and number of labels.
-* Inference time will increased based on the number of labels.
-
-
-
-* Data quality
-
-
-
-* Size of each sample (for example, number of phrases in each training sample) can affect quality.
-* Class separation is important. In other words, classes among the training (and test) data should be semantically distinguishable from each another in order to avoid misclassifications. Since the classifier algorithms in Watson Natural Language Processing rely on word embeddings, training classes that contain text examples with too much semantic overlap may make high-quality classification computationally intractable. While more sophisticated heuristics may exist for assessing the semantic similarity between classes, you should start with a simple ""eye test"" of a few examples from each class to discern whether or not they seem adequately separated.
-* It is recommended to use balanced data for training. Ideally there should be roughly equal numbers of examples from each class in the training data, otherwise the classifiers may be biased towards classes with larger representation in the training data.
-* It is best to avoid circumstances where some classes in the training data are highly under-represented as compared to other classes.
-
-
-
-
-
-Limitations and caveats:
-
-
-
-"
-9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B_19,9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B,"* The BERT classification block has a predefined sequence length of 128 code points. However, this can be configured at train time by changing the parameter max_seq_length. The maximum value allowed for this parameter is 512. This means that the BERT classification block can only be used to classify short text. Text longer than max_seq_length is trimmed and discarded during classification training and inference.
-* The CNN classification block has a predefined sequence length of 1000 code points. This limit can be configured at train time by changing the parameter max_phrase_len. There is no maximum limit for this parameter, but increasing the maximum phrase length will affect CPU and memory consumption.
-* SVM blocks do not have such limit on sequence length and can be used with longer texts.
-
-
-
-"
-9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B_20,9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B," Applying the model on new data
-
-After you have trained the model on a data set, apply the model on new data using the run() method, as you would use on any of the existing pre-trained blocks.
-
-"
-9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B_21,9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B,"Sample code
-
-
-
-* For the Ensemble and BERT models, for example for Ensemble:
-
- run Ensemble model on new text
-ensemble_prediction = ensemble_classification_model.run(""new input text"")
-* For SVM and CNN models, for example for CNN:
-
- run Syntax model first
-syntax_result = syntax_model.run(""new input text"")
- run CNN model on top of syntax result
-cnn_prediction = cnn_classification_model.run(syntax_result)
-
-
-
-"
-9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B_22,9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B," Choosing the right algorithm for your use case
-
-You need to choose the model algorithm that best suits your use case.
-
-When choosing between SVM, CNN, and Transformers, consider the following:
-
-
-
-* BERT and Transformer-based Slate
-
-
-
-* Choose when high quality is required and higher computing resources are available.
-
-
-
-* CNN
-
-
-
-* Choose when decent size data is available
-* Choose if GloVe embedding is available for the required language
-* Choose if you have the option between single label versus multi-label
-* CNN fine tunes embeddings, so it could give better performance for unknown terms or newer domains.
-
-
-
-* SVM
-
-
-
-* Choose if an easier and simpler model is required
-* SVM has the fastest training and inference time
-* Choose if your data set size is small
-
-
-
-
-
-If you select SVM, you need to consider the following when choosing between the various implementations of SVM:
-
-
-
-* SVMs train multi-label classifiers.
-* The larger the number of classes, the longer the training time.
-* TF-IDF:
-
-
-
-* Choose TF-IDF vectorization with SVM if the data set is small, i.e. has a small number of classes, a small number of examples and shorter text size, for example, sentences containing fewer phrases.
-* TF-IDF with SVM can be faster than other algorithms in the classification block.
-* Choose TF-IDF if embeddings for the required language are not available.
-
-
-
-* USE:
-
-
-
-* Choose Universal Sentence Encoder (USE) with SVM if the data set has one or more sentences in input text.
-* USE can perform better on data sets where understanding the context of words or sentences is important.
-
-
-
-
-
-The Ensemble model combines multiple individual (diverse) models together to deliver superior prediction power. Consider the following key data for this model type:
-
-
-
-* The ensemble model combines CNN, SVM with TF-IDF and SVM with USE.
-* It is the easiest model to use.
-* It can give better performance than the individual algorithms.
-* It works for all kinds of data sets. However, training time for large datasets (more than 20000 examples) can be high.
-"
-9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B_23,9E2277EC0ED75EC2871C8BCCB4B9AF3F78350C9B,"* An ensemble model allows you to set weights. These weights decides how the ensemble model combines the results of individual classifiers. Currently, the selection of weights is a heuristics and needs to be set by trial and error. The default weights that are provided in the function itself are a good starting point for the exploration.
-
-
-
-Parent topic:[Creating your own models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-create-model.html)
-"
-97C26F347FD5A13FBC5B24FC567FCF7ADF8CE0C3_0,97C26F347FD5A13FBC5B24FC567FCF7ADF8CE0C3," Creating your own models
-
-Certain algorithms in Watson Natural Language Processing can be trained with your own data, for example you can create custom models based on your own data for entity extraction, to classify data, to extract sentiments, and to extract target sentiments.
-
-Starting with Runtime 23.1 you can use the new built-in transformer-based IBM foundation model called Slate to create your own models. The Slate model has been trained on a very large data set that was preprocessed to filter hate, bias, and profanity.
-
-To create your own classification, entity extraction model, or sentiment model you can fine-tune the Slate model on your own data. To train the model in reasonable time, it's recommended to use GPU-based environments.
-
-
-
-* [Detecting entities with a custom dictionary](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-entities-dict.html)
-* [Detecting entities with regular expressions](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-entities-regex.html)
-* [Detecting entities with a custom transformer model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-entities-transformer.html)
-* [Classifying text with a custom classification model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-classify-text.html)
-* [Extracting sentiment with a custom transformer model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-extract-sentiment.html)
-* [Extracting targets sentiment with a custom transformer model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-target-sentiment.html)
-
-
-
-"
-97C26F347FD5A13FBC5B24FC567FCF7ADF8CE0C3_1,97C26F347FD5A13FBC5B24FC567FCF7ADF8CE0C3," Language support for custom models
-
-You can create custom models and use the following pretrained dictionary and classification models for the shown languages. For a list of the language codes and the corresponding languages, see [Language codes](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.htmllang-codes).
-
-
-
-Supported languages for out-of-the-box custom models
-
- Custom model Supported language codes
-
- Dictionary models af, ar, bs, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh_cn, zh_tw (all languages supported in the Syntax part of speech tagging)
- Regexes af, ar, bs, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh_cn, zh_tw (all languages supported in the Syntax part of speech tagging)
- SVM classification with TFIDF af, ar, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh_cn, zh_tw
- SVM classification with USE ar, de, en, es, fr, it, ja, ko, nl, pl, pt, ru, tr, zh_cn, zh_tw
- CNN classification with GloVe ar, de, en, es, fr, it, ja, ko, nl, pt, zh_cn
-"
-97C26F347FD5A13FBC5B24FC567FCF7ADF8CE0C3_2,97C26F347FD5A13FBC5B24FC567FCF7ADF8CE0C3," BERT Multilingual classification af, ar, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh_cn, zh_tw
- Transformer model af, ar, bs, ca, cs, da, de, el, en, es, fi, fr, he, hi, hr, it, ja, ko, nb, nl, nn, pl, pt, ro, ru, sk, sr, sv, tr, zh_cn, zh_tw
- Stopword lists ar, de, en, es, fr, it, ja, ko
-
-
-
-"
-97C26F347FD5A13FBC5B24FC567FCF7ADF8CE0C3_3,97C26F347FD5A13FBC5B24FC567FCF7ADF8CE0C3," Saving and loading custom models
-
-If you want to use your custom model in another notebook, save it as a Data Asset to your project. This way, you can export the model as part of a project export.
-
-Use the ibm-watson-studio-lib library to save and load custom models.
-
-To save a custom model in your notebook as a data asset to export and use in another project:
-
-
-
-1. Ensure that you have an access token on the Access control page on the Manage tab of your project. Only project admins can create access tokens. The access token can have viewer or editor access permissions. Only editors can inject the token into a notebook.
-2. Add the project token to a notebook by clicking More > Insert project token from the notebook action bar and then run the cell. When you run the inserted hidden code cell, a wslib object is created that you can use for functions in the ibm-waton-studio-lib library. For details on the available ibm-watson-studio-lib functions, see [Using ibm-watson-studio-lib for Python](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html).
-3. Run the train() method to create a custom dictionary, regular expression, or classification model and assign this custom model to a variable. For example:
-
-custom_block = CNN.train(train_stream, embedding_model.embedding, verbose=2)
-4. If you want to save a custom dictionary or regular expression model, convert it to a RBRGeneric block. Converting a custom dictionary or regular expression model to a RBRGeneric block is useful if you want to load and execute the model using the [API for Watson Natural Language Processing for Embed](https://www.ibm.com/docs/en/watson-libraries?topic=home-api-reference). To date, Watson Natural Language Processing for Embed supports running dictionary and regular expression models only as RBRGeneric blocks. To convert a model to a RBRGeneric block, run the following commands:
-
- Create the custom regular expression model
-"
-97C26F347FD5A13FBC5B24FC567FCF7ADF8CE0C3_4,97C26F347FD5A13FBC5B24FC567FCF7ADF8CE0C3,"custom_regex_block = watson_nlp.resources.feature_extractor.RBR.train(module_folder, language='en', regexes=regexes)
-
- Save the model to the local file system
-custom_regex_model_path = 'some/path'
-custom_regex_block.save(custom_regex_model_path)
-
- The model was saved in a file ""executor.zip"" in the provided path, in this case ""some/path/executor.zip""
-model_path = os.path.join(custom_regex_model_path, 'executor.zip')
-
- Re-load the model as a RBRGeneric block
-custom_block = watson_nlp.blocks.rules.RBRGeneric(watson_nlp.toolkit.rule_utils.RBRExecutor.load(model_path), language='en')
-5. Save the model as a Data Asset to your project using ibm-watson-studio-lib:
-
-wslib.save_data("""", custom_block.as_bytes(), overwrite=True)
-
-When saving transformer models, you have the option to save the model in CPU format. If you plan to use the model only in CPU environments, using this format will make your custom model run more efficiently. To do that, set the CPU format option as follows:
-
-wslib.save_data('', data=custom_model.as_bytes(cpu_format=True), overwrite=True)
-
-
-
-To load a custom model to a notebook that was imported from another project:
-
-
-
-1. Ensure that you have an access token on the Access control page on the Manage tab of your project. Only project admins can create access tokens. The access token can have viewer or editor access permissions. Only editors can inject the token into a notebook.
-"
-97C26F347FD5A13FBC5B24FC567FCF7ADF8CE0C3_5,97C26F347FD5A13FBC5B24FC567FCF7ADF8CE0C3,"2. Add the project token to a notebook by clicking More > Insert project token from the notebook action bar and then run the cell. When you run the the inserted hidden code cell, a wslib object is created that you can use for functions in the ibm-watson-studio-lib library. For details on the available ibm-watson-studio-lib functions, see [Using ibm-watson-studio-lib for Python](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html).
-3. Load the model using ibm-watson-studio-lib and watson-nlp:
-
-custom_block = watson_nlp.load(wslib.load_data(""""))
-
-
-
-Parent topic:[Watson Natural Language Processing library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp.html)
-"
-34BC2F43F99778FFA7E2C3E414C3CFB32509276D_0,34BC2F43F99778FFA7E2C3E414C3CFB32509276D," Detecting entities with a custom dictionary
-
-If you have a fixed set of terms that you want to detect, like a list of product names or organizations, you can create a dictionary. Dictionary matching is very fast and resource-efficient.
-
-Watson Natural Language Processing dictionaries contain advanced matching capabilities that go beyond a simple string match, including:
-
-
-
-* Dictionary terms can consist of a single token, for example wheel, or multiple tokens, for example, steering wheel.
-* Dictionary term matching can be case-sensitive or case-insensitive. With a case-sensitive match, you can ensure that acronyms, like ABS don't match terms in the regular language, like abs that have a different meaning.
-* You can specify how to consolidate matches when multiple dictionary entries match the same text. Given the two dictionary entries, Watson and Watson Natural Language Processing, you can configure which entry should match in ""I like Watson Natural Language Processing"": either only Watson Natural Language Processing, as it contains Watson, or both.
-* You can specify to match the lemma instead of enumerating all inflections. This way, the single dictionary entry mouse will detect both mouse and mice in the text.
-* You can attach a label to each dictionary entry, for example Organization category to include additional metadata in the match.
-
-
-
-All of these capabilities can be configured, so you can pick the right option for your use case.
-
-"
-34BC2F43F99778FFA7E2C3E414C3CFB32509276D_1,34BC2F43F99778FFA7E2C3E414C3CFB32509276D," Types of dictionary files
-
-Watson Natural Language Processing supports two types of dictionary files:
-
-
-
-* Term list (ending in .dict)
-
-Example of a term list:
-
-Arthur
-Allen
-Albert
-Alexa
-* Table (ending in .csv)
-
-Example of a table:
-
-""label"", ""entry""
-""ORGANIZATION"", ""NASA""
-""COUNTRY"", ""USA""
-""ACTOR"", ""Christian Bale""
-
-
-
-You can use multiple dictionaries during the same extraction. You can also use both types at the same time, for example, run a single extraction with three dictionaries, one term list and two tables.
-
-"
-34BC2F43F99778FFA7E2C3E414C3CFB32509276D_2,34BC2F43F99778FFA7E2C3E414C3CFB32509276D," Creating dictionary files
-
-Begin by creating a module directory inside your notebook. This is a directory inside the notebook file system that will be used temporarily to store your dictionary files.
-
-To create dictionary files in your notebook:
-
-
-
-1. Create a module directory. Note that the name of the module folder cannot contain any dashes as this will cause errors.
-
-import os
-import watson_nlp
-module_folder = ""NLP_Dict_Module_1""
-os.makedirs(module_folder, exist_ok=True)
-2. Create dictionary files, and store them in the module directory. You can either read in an external list or CSV file, or you can create dictionary files like so:
-
- Create a term list dictionary
-term_file = ""names.dict""
-with open(os.path.join(module_folder, term_file), 'w') as dictionary:
-dictionary.write('Bruce')
-dictionary.write('n')
-dictionary.write('Peter')
-dictionary.write('n')
-
- Create a table dictionary
-table_file = 'Places.csv'
-with open(os.path.join(module_folder, table_file), 'w') as places:
-places.write(""""label"", ""entry"""")
-places.write(""n"")
-places.write(""""SIGHT"", ""Times Square"""")
-places.write(""n"")
-places.write(""""PLACE"", ""5th Avenue"""")
-places.write(""n"")
-
-
-
-"
-34BC2F43F99778FFA7E2C3E414C3CFB32509276D_3,34BC2F43F99778FFA7E2C3E414C3CFB32509276D," Loading the dictionaries and configuring matching options
-
-The dictionaries can be loaded using the following helper methods.
-
-
-
-* To load a single dictionary, use watson_nlp.toolkit.rule_utils.DictionaryConfig ()
-* To load multiple dictionaries, use watson_nlp.toolkit.rule_utils.DictionaryConfig.load_all([)])
-
-
-
-For each dictionary, you need to specify a dictionary configuration. The dictionary configuration is a Python dictionary, with the following attributes:
-
-
-
- Attribute Value Description Required
-
- name string The name of the dictionary Yes
- source string The path to the dictionary, relative to module_folder Yes
- dict_type file or table Whether the dictionary artifact is a term list (file) or a table of mappings (table) No. The default is file
- consolidate ContainedWithin (Keep the longest match and deduplicate) / NotContainedWithin (Keep the shortest match and deduplicate) / ContainsButNotEqual (Keep longest match but keep duplicate matches) / ExactMatch (Deduplicate) / LeftToRight (Keep the leftmost longest non-overlapping span) What to do with dictionary matches that overlap. No. The default is to not consolidate matches.
- case exact / insensitive Either match exact case or be case insensitive. No. The default is exact match.
- lemma True / False Match the terms in the dictionary with the lemmas from the text. The dictionary should contain only lemma forms. For example, add mouse in the dictionary to match both mouse and mice in text. Do not add mice in the dictionary. To match terms that consist of multiple tokens in text, separate the lemmas of those terms in the dictionary by a space character. No. The default is False.
-"
-34BC2F43F99778FFA7E2C3E414C3CFB32509276D_4,34BC2F43F99778FFA7E2C3E414C3CFB32509276D," mappings.columns (columns as attribute of mappings: {}) list [ string ] List of column headers in the same order as present in the table csv Yes if dict_type: table
- mappings.entry (entry as attribute of mappings: {}) string The name of the column header that contains the string to match against the document. Yes if dict_type: table
- label string The label to attach to matches. No
-
-
-
-"
-34BC2F43F99778FFA7E2C3E414C3CFB32509276D_5,34BC2F43F99778FFA7E2C3E414C3CFB32509276D,"Code sample
-
- Load the dictionaries
-dictionaries = watson_nlp.toolkit.rule_utils.DictionaryConfig.load_all([{
-'name': 'Names',
-'source': term_file,
-'case':'insensitive'
-}, {
-'name': 'places_and_sights_mappings',
-'source': table_file,
-'dict_type': 'table',
-'mappings': {
-'columns': 'label', 'entry'],
-'entry': 'entry'
-}
-}])
-
-"
-34BC2F43F99778FFA7E2C3E414C3CFB32509276D_6,34BC2F43F99778FFA7E2C3E414C3CFB32509276D," Training a model that contains dictionaries
-
-After you have loaded the dictionaries, create a dictionary model and train the model using the RBR.train() method. In the method, specify:
-
-
-
-* The module directory
-* The language of the dictionary entries
-* The dictionaries to use
-
-
-
-"
-34BC2F43F99778FFA7E2C3E414C3CFB32509276D_7,34BC2F43F99778FFA7E2C3E414C3CFB32509276D,"Code sample
-
-custom_dict_block = watson_nlp.resources.feature_extractor.RBR.train(module_folder,
-language='en', dictionaries=dictionaries)
-
-"
-34BC2F43F99778FFA7E2C3E414C3CFB32509276D_8,34BC2F43F99778FFA7E2C3E414C3CFB32509276D," Applying the model on new data
-
-After you have trained the dictionaries, apply the model on new data using the run() method, as you would use on any of the existing pre-trained blocks.
-
-"
-34BC2F43F99778FFA7E2C3E414C3CFB32509276D_9,34BC2F43F99778FFA7E2C3E414C3CFB32509276D,"Code sample
-
-custom_dict_block.run('Bruce is at Times Square')
-
-Output of the code sample:
-
-{(0, 5): ['Names'], (12, 24): ['SIGHT']}
-
-To show the labels or the name of the dictionary:
-
-RBR_result = custom_dict_block.executor.get_raw_response('Bruce is at Times Square', language='en')
-print(RBR_result)
-
-Output showing the labels:
-
-{'annotations': {'View_Names': [{'label': 'Names', 'match': {'location': {'begin': 0, 'end': 5}, 'text': 'Bruce'}}], 'View_places_and_sights_mappings': [{'label': 'SIGHT', 'match': {'location': {'begin': 12, 'end': 24}, 'text': 'Times Square'}}]}, 'instrumentationInfo': {'annotator': {'version': '1.0', 'key': 'Text match extractor for NLP_Dict_Module_1'}, 'runningTimeMS': 3, 'documentSizeChars': 32, 'numAnnotationsTotal': 2, 'numAnnotationsPerType': [{'annotationType': 'View_Names', 'numAnnotations': 1}, {'annotationType': 'View_places_and_sights_mappings', 'numAnnotations': 1}], 'interrupted': False, 'success': True}}
-
-Parent topic:[Creating your own models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-create-model.html)
-"
-6ACE7C519D2C4FCA9FC0498BCE82F75FFA05CFFD_0,6ACE7C519D2C4FCA9FC0498BCE82F75FFA05CFFD," Detecting entities with regular expressions
-
-Similar to detecting entities with dictionaries, you can use regex pattern matches to detect entities.
-
-Regular expressions are not provided in files like dictionaries but in-memory within a regex configuration. You can use multiple regex configurations during the same extraction.
-
-Regexes that you define with Watson Natural Language Processing can use token boundaries. This way, you can ensure that your regular expression matches within one or more tokens. This is a clear advantage over simpler regular expression engines, especially when you work with a language that is not separated by whitespace, such as Chinese.
-
-Regular expressions are processed by a dedicated component called Rule-Based Runtime, or RBR for short.
-
-"
-6ACE7C519D2C4FCA9FC0498BCE82F75FFA05CFFD_1,6ACE7C519D2C4FCA9FC0498BCE82F75FFA05CFFD," Creating regex configurations
-
-Begin by creating a module directory inside your notebook. This is a directory inside the notebook file system that is used temporarily to store the files created by the RBR training. This module directory can be the same directory that you created and used for dictionary-based entity extraction. Dictionaries and regular expressions can be used in the same training run.
-
-To create the module directory in your notebook, enter the following in a code cell. Note that the module directory can't contain a dash (-).
-
-import os
-import watson_nlp
-module_folder = ""NLP_RBR_Module_2""
-os.makedirs(module_folder, exist_ok=True)
-
-A regex configuration is a Python dictionary, with the following attributes:
-
-
-
-Available attributes in regex configurations with their values, descriptions of use and indication if required or not
-
- Attribute Value Description Required
-
- name string The name of the regular expression. Matches of the regular expression in the input text are tagged with this name in the output. Yes
- regexes list (string of perl based regex patterns) Should be non-empty. Multiple regexes can be provided. Yes
- flags Delimited string of valid flags Flags such as UNICODE or CASE_INSENSITIVE control the matching. Can also be a combination of flags. For the supported flags, see [Pattern (Java Platform SE 8)](https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html). No (defaults to DOTALL)
- token_boundary.min int token_boundary indicates whether to match the regular expression only on token boundaries. Specified as a dict object with min and max attributes. No (returns the longest non-overlapping match at each character position in the input text)
-"
-6ACE7C519D2C4FCA9FC0498BCE82F75FFA05CFFD_2,6ACE7C519D2C4FCA9FC0498BCE82F75FFA05CFFD," token_boundary.max int max is an optional attribute for token_boundary and needed when the boundary needs to extend for a range (between min and max tokens). token_boundary.max needs to be >= token_boundary.min No (if token_boundary is specified, the min attribute can be specified alone)
- groups list (string labels for matching groups) String index in list corresponds to matched group in pattern starting with 1 where 0 index corresponds to entire match. For example: regex: (a)(b) on ab with group: ['full', 'first', 'second'] will yield full: ab, first: a, second: b No (defaults to label match on full match)
-
-
-
-The regex configurations can be loaded using the following helper methods:
-
-
-
-* To load a single regex configuration, use watson_nlp.toolkit.RegexConfig.load()
-* To load multiple regex configurations, use watson_nlp.toolkit.RegexConfig.load_all([)])
-
-
-
-"
-6ACE7C519D2C4FCA9FC0498BCE82F75FFA05CFFD_3,6ACE7C519D2C4FCA9FC0498BCE82F75FFA05CFFD,"Code sample
-
-This sample shows you how to load two different regex configurations. The first configuration detects person names. It uses the groups attribute to allow easy access to the full, first and last name at a later stage.
-
-The second configuration detects acronyms as a sequence of all-uppercase characters. By using the token_boundary attribute, it prevents matches in words that contain both uppercase and lowercase characters.
-
-from watson_nlp.toolkit.rule_utils import RegexConfig
-
- Load some regex configs, for instance to match First names or acronyms
-regexes = RegexConfig.load_all([
-{
-'name': 'full names',
-'regexes': '(A-Z]a-z]) (A-Z]a-z])'],
-'groups': 'full name', 'first name', 'last name']
-},
-{
-'name': 'acronyms',
-'regexes': '(A-Z]+)'],
-'groups': 'acronym'],
-'token_boundary': {
-'min': 1,
-'max': 1
-}
-}
-])
-
-"
-6ACE7C519D2C4FCA9FC0498BCE82F75FFA05CFFD_4,6ACE7C519D2C4FCA9FC0498BCE82F75FFA05CFFD," Training a model that contains regular expressions
-
-After you have loaded the regex configurations, create an RBR model using the RBR.train() method. In the method, specify:
-
-
-
-* The module directory
-* The language of the text
-* The regex configurations to use
-
-
-
-This is the same method that is used to train RBR with dictionary-based extraction. You can pass the dictionary configuration in the same method call.
-
-"
-6ACE7C519D2C4FCA9FC0498BCE82F75FFA05CFFD_5,6ACE7C519D2C4FCA9FC0498BCE82F75FFA05CFFD,"Code sample
-
- Train the RBR model
-custom_regex_block = watson_nlp.resources.feature_extractor.RBR.train(module_path=module_folder, language='en', regexes=regexes)
-
-"
-6ACE7C519D2C4FCA9FC0498BCE82F75FFA05CFFD_6,6ACE7C519D2C4FCA9FC0498BCE82F75FFA05CFFD," Applying the model on new data
-
-After you have trained the dictionaries, apply the model on new data using the run() method, as you would use on any of the existing pre-trained blocks.
-
-"
-6ACE7C519D2C4FCA9FC0498BCE82F75FFA05CFFD_7,6ACE7C519D2C4FCA9FC0498BCE82F75FFA05CFFD,"Code sample
-
-custom_regex_block.run('Bruce Wayne works for NASA')
-
-Output of the code sample:
-
-{(0, 11): ['regex::full names'], (0, 5): ['regex::full names'], (6, 11): ['regex::full names'], (22, 26): ['regex::acronyms']}
-
-To show the matching subgroups or the matched text:
-
-import json
- Get the raw response including matching groups
-full_regex_result = custom_regex_block.executor.get_raw_response('Bruce Wayne works for NASA‘, language='en')
-print(json.dumps(full_regex_result, indent=2))
-
-Output of the code sample:
-
-{
-""annotations"": {
-""View_full names"": [
-{
-""label"": ""regex::full names"",
-""fullname"": {
-""location"": {
-""begin"": 0,
-""end"": 11
-},
-""text"": ""Bruce Wayne""
-},
-""firstname"": {
-""location"": {
-""begin"": 0,
-""end"": 5
-},
-""text"": ""Bruce""
-},
-""lastname"": {
-""location"": {
-""begin"": 6,
-""end"": 11
-},
-""text"": ""Wayne""
-}
-}
-],
-""View_acronyms"": [
-{
-""label"": ""regex::acronyms"",
-""acronym"": {
-""location"": {
-""begin"": 22,
-""end"": 26
-},
-""text"": ""NASA""
-}
-}
-]
-},
-...
-}
-
-Parent topic:[Creating your own models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-create-model.html)
-"
-D71261B71A4CF5A1AD5E148EDE7751B630060BDF_0,D71261B71A4CF5A1AD5E148EDE7751B630060BDF," Detecting entities with a custom transformer model
-
-If you don't have a fixed set of terms or you cannot express entities that you like to detect as regular expressions, you can build a custom transformer model. The model is based on the pretrained Slate IBM Foundation model.
-
-When you use the pretrained model, you can build multi-lingual models. You don't have to have separate models for each language.
-
-You need sufficient training data to achieve high quality (2000 – 5000 per entity type). If you have GPUs available, use them for training.
-
-Note:Training transformer models is CPU and memory intensive. The predefined environments are not large enough to complete the training. Create a custom notebook environment with a larger amount of CPU and memory, and use that to run your notebook. If you have GPUs available, it's highly recommended to use them. See [Creating your own environment template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-customize-env-definition.html).
-
-"
-D71261B71A4CF5A1AD5E148EDE7751B630060BDF_1,D71261B71A4CF5A1AD5E148EDE7751B630060BDF," Input data format
-
-The training data is represented as an array with multiple JSON objects. Each JSON object represents one training instance, and must have a text and a mentions field. The text field represents the training sentence text, and mentions is an array of JSON objects with the text, type, and location of each mention:
-
-[
-{
-""text"": str,
-""mentions"": {
-""location"": {
-""begin"": int,
-""end"": int
-},
-""text"": str,
-""type"": str
-},...]
-},...
-]
-
-Example:
-
-[
-{
-""id"": 38863234,
-""text"": ""I'm moving to Colorado in a couple months."",
-""mentions"": {
-""text"": ""Colorado"",
-""type"": ""Location"",
-""location"": {
-""begin"": 14,
-""end"": 22
-}
-},
-{
-""text"": ""couple months"",
-""type"": ""Duration"",
-""location"": {
-""begin"": 28,
-""end"": 41
-}
-}]
-}
-]
-
-"
-D71261B71A4CF5A1AD5E148EDE7751B630060BDF_2,D71261B71A4CF5A1AD5E148EDE7751B630060BDF," Training your model
-
-The transformer algorithm is using the pretrained Slate model. The pretrained Slate model is only available in Runtime 23.1.
-
-To get the options available for configuring Transformer training, enter:
-
-help(watson_nlp.workflows.entity_mentions.transformer.Transformer.train)
-
-"
-D71261B71A4CF5A1AD5E148EDE7751B630060BDF_3,D71261B71A4CF5A1AD5E148EDE7751B630060BDF,"Sample code
-
-import watson_nlp
-from watson_nlp.toolkit.entity_mentions_utils.train_util import prepare_stream_of_train_records_from_JSON_collection
-
- load the syntax models for all languages to be supported
-syntax_model = watson_nlp.load('syntax_izumo_en_stock')
-syntax_models = [syntax_model]
-
- load the pretrained Slate model
-pretrained_model_resource = watson_nlp.load('pretrained-model_slate.153m.distilled_many_transformer_multilingual_uncased')
-
- prepare the train and dev data
- entity_train_data is a directory with one or more json files in the input format specified above
-train_data_stream = prepare_stream_of_train_records_from_JSON_collection('entity_train_data')
-dev_data_stream = prepare_stream_of_train_records_from_JSON_collection('entity_train_data')
-
- train a transformer workflow model
-trained_workflow = watson_nlp.workflows.entity_mentions.transformer.Transformer.train(
-train_data_stream=train_data_stream,
-dev_data_stream=dev_data_stream,
-syntax_models=syntax_models,
-template_resource=pretrained_model_resource,
-num_train_epochs=3,
-)
-
-"
-D71261B71A4CF5A1AD5E148EDE7751B630060BDF_4,D71261B71A4CF5A1AD5E148EDE7751B630060BDF," Applying the model on new data
-
-Apply the trained transformer workflow model on new data by using the run() method, as you would use on any of the existing pre-trained blocks.
-
-"
-D71261B71A4CF5A1AD5E148EDE7751B630060BDF_5,D71261B71A4CF5A1AD5E148EDE7751B630060BDF,"Code sample
-
-trained_workflow.run('Bruce is at Times Square')
-
-"
-D71261B71A4CF5A1AD5E148EDE7751B630060BDF_6,D71261B71A4CF5A1AD5E148EDE7751B630060BDF," Storing and loading the model
-
-The custom transformer model can be stored as any other model as described in ""Loading and storing models"", using ibm_watson_studio_lib.
-
-To load the custom transformer model, extra steps are required:
-
-
-
-1. Ensure that you have an access token on the Access control page on the Manage tab of your project. Only project admins can create access tokens. The access token can have Viewer or Editor access permissions. Only editors can inject the token into a notebook.
-2. Add the project token to the notebook by clicking More > Insert project token from the notebook action bar and then run the cell.
-
-By running the inserted hidden code cell, a wslib object is created that you can use for functions in the ibm-watson-studio-lib library. For information on the available ibm-watson-studio-lib functions, see [Using ibm-watson-studio-lib for Python](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html).
-3. Download and extract the model to your local runtime environment:
-
-import zipfile
-model_zip = 'trained_workflow_file'
-model_folder = 'trained_workflow_folder'
-wslib.download_file('trained_workflow', file_name=model_zip)
-
-with zipfile.ZipFile(model_zip, 'r') as zip_ref:
-zip_ref.extractall(model_folder)
-4. Load the model from the extracted folder:
-
-trained_workflow = watson_nlp.load(model_folder)
-
-
-
-Parent topic:[Creating your own models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-create-model.html)
-"
-355EA8BD00A0246EACFEF090AF6A6B6F2BD92D4F_0,355EA8BD00A0246EACFEF090AF6A6B6F2BD92D4F," Extracting sentiment with a custom transformer model
-
-You can train your own models for sentiment extraction based on the Slate IBM Foundation model. This pretrained model can be find-tuned for your use case by training it on your specific input data.
-
-The Slate IBM Foundation model is available only in Runtime 23.1.
-
-Note: Training transformer models is CPU and memory intensive. Depending on the size of your training data, the environment might not be large enough to complete the training. If you run into issues with the notebook kernel during training, create a custom notebook environment with a larger amount of CPU and memory, and use that to run your notebook. Use a GPU-based environment for training and also inference time, if it is available to you. See [Creating your own environment template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-customize-env-definition.html).
-
-
-
-* [Input data format for training](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-extract-sentiment.html?context=cdpaas&locale=eninput)
-* [Loading the pretrained model resources](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-extract-sentiment.html?context=cdpaas&locale=enload)
-* [Training the model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-extract-sentiment.html?context=cdpaas&locale=entrain)
-* [Applying the model on new data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-extract-sentiment.html?context=cdpaas&locale=enapply)
-
-
-
-"
-355EA8BD00A0246EACFEF090AF6A6B6F2BD92D4F_1,355EA8BD00A0246EACFEF090AF6A6B6F2BD92D4F," Input data format for training
-
-You need to provide a training and development data set to the training function. The development data is usually around 10% of the training data. Each training or development sample is represented as a JSON object. It must have a text and a labels field. The text represents the training example text, and the labels field is an array, which contains exactly one label of positive, neutral, or negative.
-
-The following is an example of an array with sample training data:
-
-[
-{
-""text"": ""I am happy"",
-""labels"": ""positive""]
-},
-{
-""text"": ""I am sad"",
-""labels"": ""negative""]
-},
-{
-""text"": ""The sky is blue"",
-""labels"": ""neutral""]
-}
-]
-
-The training and development data sets are created as data streams from arrays of JSON objects. To create the data streams, you might use the utility method prepare_data_from_json:
-
-import watson_nlp
-from watson_nlp.toolkit.sentiment_analysis_utils.training import train_util as utils
-
-training_data_file = ""train_data.json""
-dev_data_file = ""dev_data.json""
-
-train_stream = utils.prepare_data_from_json(training_data_file)
-dev_stream = utils.prepare_data_from_json(dev_data_file)
-
-"
-355EA8BD00A0246EACFEF090AF6A6B6F2BD92D4F_2,355EA8BD00A0246EACFEF090AF6A6B6F2BD92D4F," Loading the pretrained model resources
-
-The pretrained Slate IBM Foundation model needs to be loaded before it passes to the training algorithm. In addition, you need to load the syntax analysis models for the languages that are used in your input texts.
-
-To load the model:
-
- Load the pretrained Slate IBM Foundation model
-pretrained_model_resource = watson_nlp.load('pretrained-model_slate.153m.distilled_many_transformer_multilingual_uncased')
-
- Download relevant syntax analysis models
-syntax_model_en = watson_nlp.load('syntax_izumo_en_stock')
-syntax_model_de = watson_nlp.load('syntax_izumo_de_stock')
-
- Create a list of all syntax analysis models
-syntax_models = [syntax_model_en, syntax_model_de]
-
-"
-355EA8BD00A0246EACFEF090AF6A6B6F2BD92D4F_3,355EA8BD00A0246EACFEF090AF6A6B6F2BD92D4F," Training the model
-
-For all options that are available for configuring sentiment transformer training, enter:
-
-help(watson_nlp.workflows.sentiment.AggregatedSentiment.train_transformer)
-
-The train_transformer method creates a workflow model, which automatically runs syntax analysis and the trained sentiment classification. In a subsequent step, enable language detection so that the workflow model can run on input text without any prerequisite information.
-
-The following is a sample call using the input data and pretrained model from the previous section (Training the model):
-
-from watson_nlp.workflows.sentiment import AggregatedSentiment
-
-sentiment_model = AggregatedSentiment.train_transformer(
-train_data_stream = train_stream,
-dev_data_stream = dev_stream,
-syntax_model=syntax_models,
-pretrained_model_resource=pretrained_model_resource,
-label_list=['negative', 'neutral', 'positive'],
-learning_rate=2e-5,
-num_train_epochs=10,
-combine_approach=""NON_NEUTRAL_MEAN"",
-keep_model_artifacts=True
-)
-lang_detect_model = watson_nlp.load('lang-detect_izumo_multi_stock')
-
-sentiment_model.enable_lang_detect(lang_detect_model)
-
-"
-355EA8BD00A0246EACFEF090AF6A6B6F2BD92D4F_4,355EA8BD00A0246EACFEF090AF6A6B6F2BD92D4F," Applying the model on new data
-
-After you train the model on a data set, apply the model on new data by using the run() method, as you would use on any of the existing pre-trained blocks.
-
-Sample code:
-
-input_text = 'new input text'
-sentiment_predictions = sentiment_model.run(input_text)
-
-Parent topic:[Creating your own models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-create-model_cloud.html)
-"
-D174298E1DD7898C08771488715D83FC7A7740AE_0,D174298E1DD7898C08771488715D83FC7A7740AE," Working with pre-trained models
-
-Watson Natural Language Processing provides pre-trained models in over 20 languages. They are curated by a dedicated team of experts, and evaluated for quality on each specific language. These pre-trained models can be used in production environments without you having to worry about license or intellectual property infringements.
-
-"
-D174298E1DD7898C08771488715D83FC7A7740AE_1,D174298E1DD7898C08771488715D83FC7A7740AE," Loading and running a model
-
-To load a model, you first need to know its name. Model names follow a standard convention encoding the type of model (like classification or entity extraction), type of algorithm (like BERT or SVM), language code, and details of the type system.
-
-To find the model that matches your needs, use the task catalog. See [Watson NLP task catalog](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-block-catalog.html).
-
-You can find the expected input for a given block class (for example to the Entity Mentions model) by using help() on the block class run() method:
-
-import watson_nlp
-
-help(watson_nlp.blocks.keywords.TextRank.run)
-
-Watson Natural Language Processing encapsulates natural language functionality through blocks and workflows. Each block or workflow supports functions to:
-
-
-
-* load(): load a model
-* run(): run the model on input arguments
-* train(): train the model on your own data (not all blocks and workflows support training)
-* save(): save the model that has been trained on your own data
-
-
-
-"
-D174298E1DD7898C08771488715D83FC7A7740AE_2,D174298E1DD7898C08771488715D83FC7A7740AE," Blocks
-
-Two types of blocks exist:
-
-
-
-* [Blocks that operate directly on the input document](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-pretrained.html?context=cdpaas&locale=enoperate-data)
-* [Blocks that depend on other blocks](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-pretrained.html?context=cdpaas&locale=enoperate-blocks)
-
-
-
-[Workflows](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-pretrained.html?context=cdpaas&locale=enworkflows) run one more blocks on the input document, in a pipeline.
-
-"
-D174298E1DD7898C08771488715D83FC7A7740AE_3,D174298E1DD7898C08771488715D83FC7A7740AE," Blocks that operate directly on the input document
-
-An example of a block that operates directly on the input document is the Syntax block, which performs natural language processing operations such as tokenization, lemmatization, part of speech tagging or dependency parsing.
-
-Example: running syntax analysis on a text snippet:
-
-import watson_nlp
-
- Load the syntax model for English
-syntax_model = watson_nlp.load('syntax_izumo_en_stock')
-
- Run the syntax model and print the result
-syntax_prediction = syntax_model.run('Welcome to IBM!')
-print(syntax_prediction)
-
-"
-D174298E1DD7898C08771488715D83FC7A7740AE_4,D174298E1DD7898C08771488715D83FC7A7740AE," Blocks that depend on other blocks
-
-Blocks that depend on other blocks cannot be applied on the input document directly. They are applied on the output of one or more preceeding blocks. For example, the Keyword Extraction block depends on the Syntax and Noun Phrases block.
-
-These blocks can be loaded but can only be run in a particular order on the input document. For example:
-
-import watson_nlp
-text = ""Anna went to school at University of California Santa Cruz.
-Anna joined the university in 2015.""
-
- Load Syntax, Noun Phrases and Keywords models for English
-syntax_model = watson_nlp.load('syntax_izumo_en_stock')
-noun_phrases_model = watson_nlp.load('noun-phrases_rbr_en_stock')
-keywords_model = watson_nlp.load('keywords_text-rank_en_stock')
-
- Run the Syntax and Noun Phrases models
-syntax_prediction = syntax_model.run(text, parsers=('token', 'lemma', 'part_of_speech'))
-noun_phrases = noun_phrases_model.run(text)
-
- Run the keywords model
-keywords = keywords_model.run(syntax_prediction, noun_phrases, limit=2)
-print(keywords)
-
-"
-D174298E1DD7898C08771488715D83FC7A7740AE_5,D174298E1DD7898C08771488715D83FC7A7740AE," Workflows
-
-Workflows are predefined end-to-end pipelines from a raw document to a final block, where all necessary blocks are chained as part of the workflow pipeline. For instance, the Entity Mentions block offered in Runtime 22.2 requires syntax analysis results, so the end-to-end process would be: input text -> Syntax analysis -> Entity Mentions -> Entity Mentions results. Starting with Runtime 23.1, you can call the Entity Mentions workflow. Refer to this sample:
-
-import watson_nlp
-
- Load the workflow model
-mentions_workflow = watson_nlp.load('entity-mentions_transformer-workflow_multilingual_slate.153m.distilled')
-
- Run the entity extraction workflow on the input text
-mentions_workflow.run('IBM announced new advances in quantum computing', language_code=""en"")
-
-Parent topic:[Watson Natural Language Processing library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp.html)
-"
-174D6FDF73627D7B2258D7F351C3D0156C06D1DC_0,174D6FDF73627D7B2258D7F351C3D0156C06D1DC," Category types
-
-The categories that are returned by the the Watson Natural Language Processing Categories block are based on the IAB Tech Lab Content Taxonomy, which provides common language categories that can be used when describing content.
-
-The following table lists the IAB categories taxonomy returned by the Categories block.
-
-
-
- LEVEL 1 LEVEL 2 LEVEL 3 LEVEL 4
-
- Automotive
- Automotive Auto Body Styles
- Automotive Auto Body Styles Commercial Trucks
- Automotive Auto Body Styles Sedan
- Automotive Auto Body Styles Station Wagon
- Automotive Auto Body Styles SUV
- Automotive Auto Body Styles Van
- Automotive Auto Body Styles Convertible
- Automotive Auto Body Styles Coupe
- Automotive Auto Body Styles Crossover
- Automotive Auto Body Styles Hatchback
- Automotive Auto Body Styles Microcar
- Automotive Auto Body Styles Minivan
- Automotive Auto Body Styles Off-Road Vehicles
- Automotive Auto Body Styles Pickup Trucks
- Automotive Auto Type
- Automotive Auto Type Budget Cars
- Automotive Auto Type Certified Pre-Owned Cars
- Automotive Auto Type Classic Cars
- Automotive Auto Type Concept Cars
- Automotive Auto Type Driverless Cars
- Automotive Auto Type Green Vehicles
- Automotive Auto Type Luxury Cars
- Automotive Auto Type Performance Cars
- Automotive Car Culture
- Automotive Dash Cam Videos
- Automotive Motorcycles
- Automotive Road-Side Assistance
- Automotive Scooters
- Automotive Auto Buying and Selling
- Automotive Auto Insurance
- Automotive Auto Parts
- Automotive Auto Recalls
- Automotive Auto Repair
- Automotive Auto Safety
- Automotive Auto Shows
- Automotive Auto Technology
- Automotive Auto Technology Auto Infotainment Technologies
- Automotive Auto Technology Auto Navigation Systems
- Automotive Auto Technology Auto Safety Technologies
- Automotive Auto Rentals
- Books and Literature
- Books and Literature Art and Photography Books
- Books and Literature Biographies
- Books and Literature Children's Literature
- Books and Literature Comics and Graphic Novels
- Books and Literature Cookbooks
- Books and Literature Fiction
- Books and Literature Poetry
- Books and Literature Travel Books
- Books and Literature Young Adult Literature
- Business and Finance
- Business and Finance Business
- Business and Finance Business Business Accounting & Finance
- Business and Finance Business Human Resources
- Business and Finance Business Large Business
- Business and Finance Business Logistics
-"
-174D6FDF73627D7B2258D7F351C3D0156C06D1DC_1,174D6FDF73627D7B2258D7F351C3D0156C06D1DC," Business and Finance Business Marketing and Advertising
- Business and Finance Business Sales
- Business and Finance Business Small and Medium-sized Business
- Business and Finance Business Startups
- Business and Finance Business Business Administration
- Business and Finance Business Business Banking & Finance
- Business and Finance Business Business Banking & Finance Angel Investment
- Business and Finance Business Business Banking & Finance Bankruptcy
- Business and Finance Business Business Banking & Finance Business Loans
- Business and Finance Business Business Banking & Finance Debt Factoring & Invoice Discounting
- Business and Finance Business Business Banking & Finance Mergers and Acquisitions
- Business and Finance Business Business Banking & Finance Private Equity
- Business and Finance Business Business Banking & Finance Sale & Lease Back
- Business and Finance Business Business Banking & Finance Venture Capital
- Business and Finance Business Business I.T.
- Business and Finance Business Business Operations
- Business and Finance Business Consumer Issues
- Business and Finance Business Consumer Issues Recalls
- Business and Finance Business Executive Leadership & Management
- Business and Finance Business Government Business
- Business and Finance Business Green Solutions
- Business and Finance Business Business Utilities
- Business and Finance Economy
- Business and Finance Economy Commodities
- Business and Finance Economy Currencies
- Business and Finance Economy Financial Crisis
- Business and Finance Economy Financial Reform
- Business and Finance Economy Financial Regulation
- Business and Finance Economy Gasoline Prices
- Business and Finance Economy Housing Market
- Business and Finance Economy Interest Rates
- Business and Finance Economy Job Market
- Business and Finance Industries
- Business and Finance Industries Advertising Industry
- Business and Finance Industries Education industry
- Business and Finance Industries Entertainment Industry
- Business and Finance Industries Environmental Services Industry
- Business and Finance Industries Financial Industry
- Business and Finance Industries Food Industry
- Business and Finance Industries Healthcare Industry
- Business and Finance Industries Hospitality Industry
- Business and Finance Industries Information Services Industry
- Business and Finance Industries Legal Services Industry
- Business and Finance Industries Logistics and Transportation Industry
- Business and Finance Industries Agriculture
- Business and Finance Industries Management Consulting Industry
- Business and Finance Industries Manufacturing Industry
- Business and Finance Industries Mechanical and Industrial Engineering Industry
- Business and Finance Industries Media Industry
- Business and Finance Industries Metals Industry
- Business and Finance Industries Non-Profit Organizations
- Business and Finance Industries Pharmaceutical Industry
- Business and Finance Industries Power and Energy Industry
- Business and Finance Industries Publishing Industry
- Business and Finance Industries Real Estate Industry
- Business and Finance Industries Apparel Industry
- Business and Finance Industries Retail Industry
- Business and Finance Industries Technology Industry
- Business and Finance Industries Telecommunications Industry
- Business and Finance Industries Automotive Industry
-"
-174D6FDF73627D7B2258D7F351C3D0156C06D1DC_2,174D6FDF73627D7B2258D7F351C3D0156C06D1DC," Business and Finance Industries Aviation Industry
- Business and Finance Industries Biotech and Biomedical Industry
- Business and Finance Industries Civil Engineering Industry
- Business and Finance Industries Construction Industry
- Business and Finance Industries Defense Industry
- Careers
- Careers Apprenticeships
- Careers Career Advice
- Careers Career Planning
- Careers Job Search
- Careers Job Search Job Fairs
- Careers Job Search Resume Writing and Advice
- Careers Remote Working
- Careers Vocational Training
- Education
- Education Adult Education
- Education Private School
- Education Secondary Education
- Education Special Education
- Education College Education
- Education College Education College Planning
- Education College Education Postgraduate Education
- Education College Education Postgraduate Education Professional School
- Education College Education Undergraduate Education
- Education Early Childhood Education
- Education Educational Assessment
- Education Educational Assessment Standardized Testing
- Education Homeschooling
- Education Homework and Study
- Education Language Learning
- Education Online Education
- Education Primary Education
- Events and Attractions
- Events and Attractions Amusement and Theme Parks
- Events and Attractions Fashion Events
- Events and Attractions Historic Site and Landmark Tours
- Events and Attractions Malls & Shopping Centers
- Events and Attractions Museums & Galleries
- Events and Attractions Musicals
- Events and Attractions National & Civic Holidays
- Events and Attractions Nightclubs
- Events and Attractions Outdoor Activities
- Events and Attractions Parks & Nature
- Events and Attractions Party Supplies and Decorations
- Events and Attractions Awards Shows
- Events and Attractions Personal Celebrations & Life Events
- Events and Attractions Personal Celebrations & Life Events Anniversary
- Events and Attractions Personal Celebrations & Life Events Wedding
- Events and Attractions Personal Celebrations & Life Events Baby Shower
- Events and Attractions Personal Celebrations & Life Events Bachelor Party
- Events and Attractions Personal Celebrations & Life Events Bachelorette Party
- Events and Attractions Personal Celebrations & Life Events Birth
- Events and Attractions Personal Celebrations & Life Events Birthday
- Events and Attractions Personal Celebrations & Life Events Funeral
- Events and Attractions Personal Celebrations & Life Events Graduation
- Events and Attractions Personal Celebrations & Life Events Prom
- Events and Attractions Political Event
- Events and Attractions Religious Events
- Events and Attractions Sporting Events
- Events and Attractions Theater Venues and Events
- Events and Attractions Zoos & Aquariums
- Events and Attractions Bars & Restaurants
- Events and Attractions Business Expos & Conferences
-"
-174D6FDF73627D7B2258D7F351C3D0156C06D1DC_3,174D6FDF73627D7B2258D7F351C3D0156C06D1DC," Events and Attractions Casinos & Gambling
- Events and Attractions Cinemas and Events
- Events and Attractions Comedy Events
- Events and Attractions Concerts & Music Events
- Events and Attractions Fan Conventions
- Family and Relationships
- Family and Relationships Bereavement
- Family and Relationships Dating
- Family and Relationships Divorce
- Family and Relationships Eldercare
- Family and Relationships Marriage and Civil Unions
- Family and Relationships Parenting
- Family and Relationships Parenting Adoption and Fostering
- Family and Relationships Parenting Daycare and Pre-School
- Family and Relationships Parenting Internet Safety
- Family and Relationships Parenting Parenting Babies and Toddlers
- Family and Relationships Parenting Parenting Children Aged 4-11
- Family and Relationships Parenting Parenting Teens
- Family and Relationships Parenting Special Needs Kids
- Family and Relationships Single Life
- Fine Art
- Fine Art Costume
- Fine Art Dance
- Fine Art Design
- Fine Art Digital Arts
- Fine Art Fine Art Photography
- Fine Art Modern Art
- Fine Art Opera
- Fine Art Theater
- Food & Drink
- Food & Drink Alcoholic Beverages
- Food & Drink Vegan Diets
- Food & Drink Vegetarian Diets
- Food & Drink World Cuisines
- Food & Drink Barbecues and Grilling
- Food & Drink Cooking
- Food & Drink Desserts and Baking
- Food & Drink Dining Out
- Food & Drink Food Allergies
- Food & Drink Food Movements
- Food & Drink Healthy Cooking and Eating
- Food & Drink Non-Alcoholic Beverages
- Healthy Living
- Healthy Living Children's Health
- Healthy Living Fitness and Exercise
- Healthy Living Fitness and Exercise Participant Sports
- Healthy Living Fitness and Exercise Running and Jogging
- Healthy Living Men's Health
- Healthy Living Nutrition
- Healthy Living Senior Health
- Healthy Living Weight Loss
- Healthy Living Wellness
- Healthy Living Wellness Alternative Medicine
- Healthy Living Wellness Alternative Medicine Herbs and Supplements
- Healthy Living Wellness Alternative Medicine Holistic Health
- Healthy Living Wellness Physical Therapy
- Healthy Living Wellness Smoking Cessation
- Healthy Living Women's Health
- Hobbies & Interests
- Hobbies & Interests Antiquing and Antiques
- Hobbies & Interests Magic and Illusion
- Hobbies & Interests Model Toys
- Hobbies & Interests Musical Instruments
- Hobbies & Interests Paranormal Phenomena
- Hobbies & Interests Radio Control
-"
-174D6FDF73627D7B2258D7F351C3D0156C06D1DC_4,174D6FDF73627D7B2258D7F351C3D0156C06D1DC," Hobbies & Interests Sci-fi and Fantasy
- Hobbies & Interests Workshops and Classes
- Hobbies & Interests Arts and Crafts
- Hobbies & Interests Arts and Crafts Beadwork
- Hobbies & Interests Arts and Crafts Candle and Soap Making
- Hobbies & Interests Arts and Crafts Drawing and Sketching
- Hobbies & Interests Arts and Crafts Jewelry Making
- Hobbies & Interests Arts and Crafts Needlework
- Hobbies & Interests Arts and Crafts Painting
- Hobbies & Interests Arts and Crafts Photography
- Hobbies & Interests Arts and Crafts Scrapbooking
- Hobbies & Interests Arts and Crafts Woodworking
- Hobbies & Interests Beekeeping
- Hobbies & Interests Birdwatching
- Hobbies & Interests Cigars
- Hobbies & Interests Collecting
- Hobbies & Interests Collecting Comic Books
- Hobbies & Interests Collecting Stamps and Coins
- Hobbies & Interests Content Production
- Hobbies & Interests Content Production Audio Production
- Hobbies & Interests Content Production Freelance Writing
- Hobbies & Interests Content Production Screenwriting
- Hobbies & Interests Content Production Video Production
- Hobbies & Interests Games and Puzzles
- Hobbies & Interests Games and Puzzles Board Games and Puzzles
- Hobbies & Interests Games and Puzzles Card Games
- Hobbies & Interests Games and Puzzles Roleplaying Games
- Hobbies & Interests Genealogy and Ancestry
- Home & Garden
- Home & Garden Gardening
- Home & Garden Remodeling & Construction
- Home & Garden Smart Home
- Home & Garden Home Appliances
- Home & Garden Home Entertaining
- Home & Garden Home Improvement
- Home & Garden Home Security
- Home & Garden Indoor Environmental Quality
- Home & Garden Interior Decorating
- Home & Garden Landscaping
- Home & Garden Outdoor Decorating
- Medical Health
- Medical Health Diseases and Conditions
- Medical Health Diseases and Conditions Allergies
- Medical Health Diseases and Conditions Ear, Nose and Throat Conditions
- Medical Health Diseases and Conditions Endocrine and Metabolic Diseases
- Medical Health Diseases and Conditions Endocrine and Metabolic Diseases Hormonal Disorders
- Medical Health Diseases and Conditions Endocrine and Metabolic Diseases Menopause
-"
-174D6FDF73627D7B2258D7F351C3D0156C06D1DC_5,174D6FDF73627D7B2258D7F351C3D0156C06D1DC," Medical Health Diseases and Conditions Endocrine and Metabolic Diseases Thyroid Disorders
- Medical Health Diseases and Conditions Eye and Vision Conditions
- Medical Health Diseases and Conditions Foot Health
- Medical Health Diseases and Conditions Heart and Cardiovascular Diseases
- Medical Health Diseases and Conditions Infectious Diseases
- Medical Health Diseases and Conditions Injuries
- Medical Health Diseases and Conditions Injuries First Aid
- Medical Health Diseases and Conditions Lung and Respiratory Health
- Medical Health Diseases and Conditions Mental Health
- Medical Health Diseases and Conditions Reproductive Health
- Medical Health Diseases and Conditions Reproductive Health Birth Control
- Medical Health Diseases and Conditions Reproductive Health Infertility
- Medical Health Diseases and Conditions Reproductive Health Pregnancy
- Medical Health Diseases and Conditions Blood Disorders
- Medical Health Diseases and Conditions Sexual Health
- Medical Health Diseases and Conditions Sexual Health Sexual Conditions
- Medical Health Diseases and Conditions Skin and Dermatology
- Medical Health Diseases and Conditions Sleep Disorders
- Medical Health Diseases and Conditions Substance Abuse
- Medical Health Diseases and Conditions Bone and Joint Conditions
- Medical Health Diseases and Conditions Brain and Nervous System Disorders
- Medical Health Diseases and Conditions Cancer
- Medical Health Diseases and Conditions Cold and Flu
- Medical Health Diseases and Conditions Dental Health
- Medical Health Diseases and Conditions Diabetes
- Medical Health Diseases and Conditions Digestive Disorders
- Medical Health Medical Tests
- Medical Health Pharmaceutical Drugs
- Medical Health Surgery
- Medical Health Vaccines
- Medical Health Cosmetic Medical Services
- Movies
- Movies Action and Adventure Movies
- Movies Romance Movies
- Movies Science Fiction Movies
- Movies Indie and Arthouse Movies
- Movies Animation Movies
- Movies Comedy Movies
- Movies Crime and Mystery Movies
- Movies Documentary Movies
- Movies Drama Movies
- Movies Family and Children Movies
- Movies Fantasy Movies
- Movies Horror Movies
- Movies World Movies
- Music and Audio
- Music and Audio Adult Contemporary Music
- Music and Audio Adult Contemporary Music Soft AC Music
- Music and Audio Adult Contemporary Music Urban AC Music
- Music and Audio Adult Album Alternative
- Music and Audio Alternative Music
- Music and Audio Children's Music
- Music and Audio Classic Hits
- Music and Audio Classical Music
- Music and Audio College Radio
- Music and Audio Comedy (Music and Audio)
- Music and Audio Contemporary Hits/Pop/Top 40
- Music and Audio Country Music
- Music and Audio Dance and Electronic Music
- Music and Audio World/International Music
- Music and Audio Songwriters/Folk
-"
-174D6FDF73627D7B2258D7F351C3D0156C06D1DC_6,174D6FDF73627D7B2258D7F351C3D0156C06D1DC," Music and Audio Gospel Music
- Music and Audio Hip Hop Music
- Music and Audio Inspirational/New Age Music
- Music and Audio Jazz
- Music and Audio Oldies/Adult Standards
- Music and Audio Reggae
- Music and Audio Blues
- Music and Audio Religious (Music and Audio)
- Music and Audio R&B/Soul/Funk
- Music and Audio Rock Music
- Music and Audio Rock Music Album-oriented Rock
- Music and Audio Rock Music Alternative Rock
- Music and Audio Rock Music Classic Rock
- Music and Audio Rock Music Hard Rock
- Music and Audio Rock Music Soft Rock
- Music and Audio Soundtracks, TV and Showtunes
- Music and Audio Sports Radio
- Music and Audio Talk Radio
- Music and Audio Talk Radio Business News Radio
- Music and Audio Talk Radio Educational Radio
- Music and Audio Talk Radio News Radio
- Music and Audio Talk Radio News/Talk Radio
- Music and Audio Talk Radio Public Radio
- Music and Audio Urban Contemporary Music
- Music and Audio Variety (Music and Audio)
- News and Politics
- News and Politics Crime
- News and Politics Disasters
- News and Politics International News
- News and Politics Law
- News and Politics Local News
- News and Politics National News
- News and Politics Politics
- News and Politics Politics Elections
- News and Politics Politics Political Issues
- News and Politics Politics War and Conflicts
- News and Politics Weather
- Personal Finance
- Personal Finance Consumer Banking
- Personal Finance Financial Assistance
- Personal Finance Financial Assistance Government Support and Welfare
- Personal Finance Financial Assistance Student Financial Aid
- Personal Finance Financial Planning
- Personal Finance Frugal Living
- Personal Finance Insurance
- Personal Finance Insurance Health Insurance
- Personal Finance Insurance Home Insurance
- Personal Finance Insurance Life Insurance
- Personal Finance Insurance Motor Insurance
- Personal Finance Insurance Pet Insurance
- Personal Finance Insurance Travel Insurance
- Personal Finance Personal Debt
- Personal Finance Personal Debt Credit Cards
- Personal Finance Personal Debt Home Financing
- Personal Finance Personal Debt Personal Loans
- Personal Finance Personal Debt Student Loans
- Personal Finance Personal Investing
- Personal Finance Personal Investing Hedge Funds
- Personal Finance Personal Investing Mutual Funds
- Personal Finance Personal Investing Options
- Personal Finance Personal Investing Stocks and Bonds
- Personal Finance Personal Taxes
- Personal Finance Retirement Planning
- Personal Finance Home Utilities
- Personal Finance Home Utilities Gas and Electric
- Personal Finance Home Utilities Internet Service Providers
- Personal Finance Home Utilities Phone Services
- Personal Finance Home Utilities Water Services
- Pets
- Pets Birds
- Pets Cats
- Pets Dogs
- Pets Fish and Aquariums
- Pets Large Animals
- Pets Pet Adoptions
- Pets Reptiles
- Pets Veterinary Medicine
-"
-174D6FDF73627D7B2258D7F351C3D0156C06D1DC_7,174D6FDF73627D7B2258D7F351C3D0156C06D1DC," Pets Pet Supplies
- Pop Culture
- Pop Culture Celebrity Deaths
- Pop Culture Celebrity Families
- Pop Culture Celebrity Homes
- Pop Culture Celebrity Pregnancy
- Pop Culture Celebrity Relationships
- Pop Culture Celebrity Scandal
- Pop Culture Celebrity Style
- Pop Culture Humor and Satire
- Real Estate
- Real Estate Apartments
- Real Estate Retail Property
- Real Estate Vacation Properties
- Real Estate Developmental Sites
- Real Estate Hotel Properties
- Real Estate Houses
- Real Estate Industrial Property
- Real Estate Land and Farms
- Real Estate Office Property
- Real Estate Real Estate Buying and Selling
- Real Estate Real Estate Renting and Leasing
- Religion & Spirituality
- Religion & Spirituality Agnosticism
- Religion & Spirituality Spirituality
- Religion & Spirituality Astrology
- Religion & Spirituality Atheism
- Religion & Spirituality Buddhism
- Religion & Spirituality Christianity
- Religion & Spirituality Hinduism
- Religion & Spirituality Islam
- Religion & Spirituality Judaism
- Religion & Spirituality Sikhism
- Science
- Science Biological Sciences
- Science Chemistry
- Science Environment
- Science Genetics
- Science Geography
- Science Geology
- Science Physics
- Science Space and Astronomy
- Shopping
- Shopping Coupons and Discounts
- Shopping Flower Shopping
- Shopping Gifts and Greetings Cards
- Shopping Grocery Shopping
- Shopping Holiday Shopping
- Shopping Household Supplies
- Shopping Lotteries and Scratchcards
- Shopping Sales and Promotions
- Shopping Children's Games and Toys
- Sports
- Sports American Football
- Sports Boxing
- Sports Cheerleading
- Sports College Sports
- Sports College Sports College Football
- Sports College Sports College Basketball
- Sports College Sports College Baseball
- Sports Cricket
- Sports Cycling
- Sports Darts
- Sports Disabled Sports
- Sports Diving
- Sports Equine Sports
- Sports Equine Sports Horse Racing
- Sports Extreme Sports
- Sports Extreme Sports Canoeing and Kayaking
- Sports Extreme Sports Climbing
- Sports Extreme Sports Paintball
- Sports Extreme Sports Scuba Diving
- Sports Extreme Sports Skateboarding
- Sports Extreme Sports Snowboarding
- Sports Extreme Sports Surfing and Bodyboarding
- Sports Extreme Sports Waterskiing and Wakeboarding
- Sports Australian Rules Football
- Sports Fantasy Sports
- Sports Field Hockey
- Sports Figure Skating
- Sports Fishing Sports
- Sports Golf
- Sports Gymnastics
- Sports Hunting and Shooting
- Sports Ice Hockey
- Sports Inline Skating
- Sports Lacrosse
- Sports Auto Racing
- Sports Auto Racing Motorcycle Sports
- Sports Martial Arts
- Sports Olympic Sports
- Sports Olympic Sports Summer Olympic Sports
- Sports Olympic Sports Winter Olympic Sports
- Sports Poker and Professional Gambling
- Sports Rodeo
- Sports Rowing
- Sports Rugby
- Sports Rugby Rugby League
- Sports Rugby Rugby Union
- Sports Sailing
- Sports Skiing
- Sports Snooker/Pool/Billiards
- Sports Soccer
- Sports Badminton
- Sports Softball
-"
-174D6FDF73627D7B2258D7F351C3D0156C06D1DC_8,174D6FDF73627D7B2258D7F351C3D0156C06D1DC," Sports Squash
- Sports Swimming
- Sports Table Tennis
- Sports Tennis
- Sports Track and Field
- Sports Volleyball
- Sports Walking
- Sports Water Polo
- Sports Weightlifting
- Sports Baseball
- Sports Wrestling
- Sports Basketball
- Sports Beach Volleyball
- Sports Bodybuilding
- Sports Bowling
- Sports Sports Equipment
- Style & Fashion
- Style & Fashion Beauty
- Style & Fashion Beauty Hair Care
- Style & Fashion Beauty Makeup and Accessories
- Style & Fashion Beauty Nail Care
- Style & Fashion Beauty Natural and Organic Beauty
- Style & Fashion Beauty Perfume and Fragrance
- Style & Fashion Beauty Skin Care
- Style & Fashion Women's Fashion
- Style & Fashion Women's Fashion Women's Accessories
- Style & Fashion Women's Fashion Women's Accessories Women's Glasses
- Style & Fashion Women's Fashion Women's Accessories Women's Handbags and Wallets
- Style & Fashion Women's Fashion Women's Accessories Women's Hats and Scarves
- Style & Fashion Women's Fashion Women's Accessories Women's Jewelry and Watches
- Style & Fashion Women's Fashion Women's Clothing
- Style & Fashion Women's Fashion Women's Clothing Women's Business Wear
- Style & Fashion Women's Fashion Women's Clothing Women's Casual Wear
- Style & Fashion Women's Fashion Women's Clothing Women's Formal Wear
- Style & Fashion Women's Fashion Women's Clothing Women's Intimates and Sleepwear
- Style & Fashion Women's Fashion Women's Clothing Women's Outerwear
- Style & Fashion Women's Fashion Women's Clothing Women's Sportswear
- Style & Fashion Women's Fashion Women's Shoes and Footwear
- Style & Fashion Body Art
- Style & Fashion Children's Clothing
- Style & Fashion Designer Clothing
- Style & Fashion Fashion Trends
- Style & Fashion High Fashion
- Style & Fashion Men's Fashion
- Style & Fashion Men's Fashion Men's Accessories
- Style & Fashion Men's Fashion Men's Accessories Men's Jewelry and Watches
- Style & Fashion Men's Fashion Men's Clothing
- Style & Fashion Men's Fashion Men's Clothing Men's Business Wear
- Style & Fashion Men's Fashion Men's Clothing Men's Casual Wear
- Style & Fashion Men's Fashion Men's Clothing Men's Formal Wear
- Style & Fashion Men's Fashion Men's Clothing Men's Outerwear
-"
-174D6FDF73627D7B2258D7F351C3D0156C06D1DC_9,174D6FDF73627D7B2258D7F351C3D0156C06D1DC," Style & Fashion Men's Fashion Men's Clothing Men's Sportswear
- Style & Fashion Men's Fashion Men's Clothing Men's Underwear and Sleepwear
- Style & Fashion Men's Fashion Men's Shoes and Footwear
- Style & Fashion Personal Care
- Style & Fashion Personal Care Bath and Shower
- Style & Fashion Personal Care Deodorant and Antiperspirant
- Style & Fashion Personal Care Oral care
- Style & Fashion Personal Care Shaving
- Style & Fashion Street Style
- Technology & Computing
- Technology & Computing Artificial Intelligence
- Technology & Computing Augmented Reality
- Technology & Computing Computing
- Technology & Computing Computing Computer Networking
- Technology & Computing Computing Computer Peripherals
- Technology & Computing Computing Computer Software and Applications
- Technology & Computing Computing Computer Software and Applications 3-D Graphics
- Technology & Computing Computing Computer Software and Applications Photo Editing Software
- Technology & Computing Computing Computer Software and Applications Shareware and Freeware
- Technology & Computing Computing Computer Software and Applications Video Software
- Technology & Computing Computing Computer Software and Applications Web Conferencing
- Technology & Computing Computing Computer Software and Applications Antivirus Software
- Technology & Computing Computing Computer Software and Applications Browsers
- Technology & Computing Computing Computer Software and Applications Computer Animation
- Technology & Computing Computing Computer Software and Applications Databases
- Technology & Computing Computing Computer Software and Applications Desktop Publishing
- Technology & Computing Computing Computer Software and Applications Digital Audio
- Technology & Computing Computing Computer Software and Applications Graphics Software
- Technology & Computing Computing Computer Software and Applications Operating Systems
- Technology & Computing Computing Data Storage and Warehousing
- Technology & Computing Computing Desktops
- Technology & Computing Computing Information and Network Security
- Technology & Computing Computing Internet
- Technology & Computing Computing Internet Cloud Computing
- Technology & Computing Computing Internet Web Development
- Technology & Computing Computing Internet Web Hosting
- Technology & Computing Computing Internet Email
- Technology & Computing Computing Internet Internet for Beginners
- Technology & Computing Computing Internet Internet of Things
- Technology & Computing Computing Internet IT and Internet Support
- Technology & Computing Computing Internet Search
- Technology & Computing Computing Internet Social Networking
- Technology & Computing Computing Internet Web Design and HTML
- Technology & Computing Computing Laptops
- Technology & Computing Computing Programming Languages
- Technology & Computing Consumer Electronics
- Technology & Computing Consumer Electronics Cameras and Camcorders
-"
-174D6FDF73627D7B2258D7F351C3D0156C06D1DC_10,174D6FDF73627D7B2258D7F351C3D0156C06D1DC," Technology & Computing Consumer Electronics Home Entertainment Systems
- Technology & Computing Consumer Electronics Smartphones
- Technology & Computing Consumer Electronics Tablets and E-readers
- Technology & Computing Consumer Electronics Wearable Technology
- Technology & Computing Robotics
- Technology & Computing Virtual Reality
- Television
- Television Animation TV
- Television Soap Opera TV
- Television Special Interest TV
- Television Sports TV
- Television Children's TV
- Television Comedy TV
- Television Drama TV
- Television Factual TV
- Television Holiday TV
- Television Music TV
- Television Reality TV
- Television Science Fiction TV
- Travel
- Travel Travel Accessories
- Travel Travel Locations
- Travel Travel Locations Africa Travel
- Travel Travel Locations Asia Travel
- Travel Travel Locations Australia and Oceania Travel
- Travel Travel Locations Europe Travel
- Travel Travel Locations North America Travel
- Travel Travel Locations Polar Travel
- Travel Travel Locations South America Travel
- Travel Travel Preparation and Advice
- Travel Travel Type
- Travel Travel Type Adventure Travel
- Travel Travel Type Family Travel
- Travel Travel Type Honeymoons and Getaways
- Travel Travel Type Hotels and Motels
- Travel Travel Type Rail Travel
- Travel Travel Type Road Trips
- Travel Travel Type Spas
- Travel Travel Type Air Travel
- Travel Travel Type Beach Travel
- Travel Travel Type Bed & Breakfasts
- Travel Travel Type Budget Travel
- Travel Travel Type Business Travel
- Travel Travel Type Camping
- Travel Travel Type Cruises
- Travel Travel Type Day Trips
- Video Gaming
- Video Gaming Console Games
- Video Gaming eSports
- Video Gaming Mobile Games
- Video Gaming PC Games
- Video Gaming Video Game Genres
- Video Gaming Video Game Genres Action Video Games
- Video Gaming Video Game Genres Role-Playing Video Games
- Video Gaming Video Game Genres Simulation Video Games
- Video Gaming Video Game Genres Sports Video Games
- Video Gaming Video Game Genres Strategy Video Games
- Video Gaming Video Game Genres Action-Adventure Video Games
- Video Gaming Video Game Genres Adventure Video Games
- Video Gaming Video Game Genres Casual Games
- Video Gaming Video Game Genres Educational Video Games
- Video Gaming Video Game Genres Exercise and Fitness Video Games
- Video Gaming Video Game Genres MMOs
-"
-D92A34A349CEE727B017AF7D40B880B232220959_0,D92A34A349CEE727B017AF7D40B880B232220959," Watson Natural Language Processing library usage samples
-
-The sample notebooks demonstrate how to use the different Watson Natural Language Processing blocks and how to train your own models.
-
-"
-D92A34A349CEE727B017AF7D40B880B232220959_1,D92A34A349CEE727B017AF7D40B880B232220959," Sample project and notebooks
-
-To help you get started with the Watson Natural Language Processing library, you can download a sample project and notebooks from the Samples.
-
-You can access the Samples by selecting Samples from the Cloud Pak for Data navigation menu.
-
-"
-D92A34A349CEE727B017AF7D40B880B232220959_2,D92A34A349CEE727B017AF7D40B880B232220959,"Sample notebooks
-
-
-
-* [Financial complaint analysis](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/39047aede50128e7cbc8ea19660fe1f6)
-
-This notebook shows you how to analyze financial customer complaints using Watson Natural Language Processing. It uses data from the Consumer Complaint Database published by the Consumer Financial Protection Bureau (CFPB). The notebook teaches you to use the Tone classification and Emotion classification models.
-* [Car complaint analysis](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/4b8aa2c1ee67a6cd1172a1cf760f65f7)
-
-This notebook demonstrates how to analyze car complaints using Watson Natural Language Processing. It uses publicly available complaint records from car owners stored by the National Highway and Transit Association (NHTSA) of the US Department of Transportation. This notebook shows you how use syntax analysis to extract the most frequently used nouns, which typically depict the problems that review authors talk about and combine these results with structured data using association rule mining.
-* [Complaint classification with Watson Natural Language Processing](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/636001e59902133a4a23fd89f011c232)
-
-This notebook demonstrates how to train different text classifiers using Watson Natural Language Processing. The classifiers predict the product group from the text of a customer complaint. This could be used, for example to route a complaint to the appropriate staff member. The data that is used in this notebook is taken from the Consumer Complaint Database that is published by the Consumer Financial Protection Bureau (CFPB), a U.S. government agency and is publicly available. You will learn how to train a custom CNN model and a VotingEnsemble model and evaluate their quality.
-* [Entity extraction on Financial Complaints with Watson Natural Language Processing](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/636001e59902133a4a23fd89f0112100)
-
-"
-D92A34A349CEE727B017AF7D40B880B232220959_3,D92A34A349CEE727B017AF7D40B880B232220959,"This notebook demonstrates how to extract named entities from financial customer complaints using Watson Natural Language Processing. It uses data from the Consumer Complaint Database published by the Consumer Financial Protection Bureau (CFPB). In the notebook you will learn how to do dictionary-based term extraction to train a custom extraction model based on given dictionaries and extract entities using the BERT or a transformer model.
-
-
-
-"
-D92A34A349CEE727B017AF7D40B880B232220959_4,D92A34A349CEE727B017AF7D40B880B232220959,"Sample project
-
-If you don't want to download the sample notebooks to your project individually, you can download the entire sample project [Text Analysis with Watson Natural Language Processing](https://dataplatform.cloud.ibm.com/exchange/public/entry/view/636001e59902133a4a23fd89f010e4cb) from the IBM watsonx Gallery.
-
-The sample project contains the sample notebooks listed in the previous section, including:
-
-
-
-* Analyzing hotel reviews using Watson Natural Language Processing
-
-This notebook shows you how to use syntax analysis to extract the most frequently used nouns from the hotel reviews, classify the sentiment of the reviews and use targets sentiment analysis. The data file that is used by this notebook is included in the project as a data asset.
-
-
-
-You can run all of the sample notebooks with the NLP + DO Runtime 23.1 on Python 3.10 XS environment except for the Analyzing hotel reviews using Watson Natural Language Processing notebook. To run this notebook, you need to create an environment template that is large enough to load the CPU-optimized models for sentiment and targets sentiment analysis.
-
-Parent topic:[Watson Natural Language Processing library](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp.html)
-"
-715ABFB108ED8F6361D07762656DBD0443C57904_0,715ABFB108ED8F6361D07762656DBD0443C57904," Extracting targets sentiment with a custom transformer model
-
-You can train your own models for targets sentiment extraction based on the Slate IBM Foundation model. This pretrained model can be find-tuned for your use case by training it on your specific input data.
-
-The Slate IBM Foundation model is available only in Runtime 23.1.
-
-Note: Training transformer models is CPU and memory intensive. Depending on the size of your training data, the environment might not be large enough to complete the training. If you run into issues with the notebook kernel during training, create a custom notebook environment with a larger amount of CPU and memory, and use that to run your notebook. Use a GPU-based environment for training and also inference time, if it is available to you. See [Creating your own environment template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-customize-env-definition.html).
-
-
-
-* [Input data format for training](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-target-sentiment.html?context=cdpaas&locale=eninput)
-* [Loading the pretrained model resources](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-target-sentiment.html?context=cdpaas&locale=enload)
-* [Training the model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-target-sentiment.html?context=cdpaas&locale=entrain)
-* [Applying the model on new data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-target-sentiment.html?context=cdpaas&locale=enapply)
-"
-715ABFB108ED8F6361D07762656DBD0443C57904_1,715ABFB108ED8F6361D07762656DBD0443C57904,"* [Storing and loading the model](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-target-sentiment.html?context=cdpaas&locale=enstore)
-
-
-
-"
-715ABFB108ED8F6361D07762656DBD0443C57904_2,715ABFB108ED8F6361D07762656DBD0443C57904," Input data format for training
-
-You must provide a training and development data set to the training function. The development data is usually around 10% of the training data. Each training or development sample is represented as a JSON object. It must have a text and a target_mentions field. The text represents the training example text, and the target_mentions field is an array, which contains an entry for each target mention with its text, location, and sentiment.
-
-Consider using Watson Knowledge Studio to enable your domain subject matter experts to easily annotate text and create training data.
-
-The following is an example of an array with sample training data:
-
-[
-{
-""text"": ""Those waiters stare at you your entire meal, just waiting for you to put your fork down and they snatch the plate away in a second."",
-""target_mentions"":
-{
-""text"": ""waiters"",
-""location"": {
-""begin"": 6,
-""end"": 13
-},
-""sentiment"": ""negative""
-}
-]
-}
-]
-
-The training and development data sets are created as data streams from arrays of JSON objects. To create the data streams, you may use the utility method read_json_to_stream. It requires the syntax analysis model for the language of your input data.
-
-Sample code:
-
-import watson_nlp
-from watson_nlp.toolkit.targeted_sentiment.training_data_reader import read_json_to_stream
-
-training_data_file = 'train_data.json'
-dev_data_file = 'dev_data.json'
-
- Load the syntax analysis model for the language of your input data
-syntax_model = watson_nlp.load('syntax_izumo_en_stock')
-
- Prepare train and dev data streams
-train_stream = read_json_to_stream(json_path=training_data_file, syntax_model=syntax_model)
-dev_stream = read_json_to_stream(json_path=dev_data_file, syntax_model=syntax_model)
-
-"
-715ABFB108ED8F6361D07762656DBD0443C57904_3,715ABFB108ED8F6361D07762656DBD0443C57904," Loading the pretrained model resources
-
-The pretrained Slate IBM Foundation model needs to be loaded before passing it to the training algorithm.
-
-To load the model:
-
- Load the pretrained Slate IBM Foundation model
-pretrained_model_resource = watson_nlp.load('pretrained-model_slate.153m.distilled_many_transformer_multilingual_uncased')
-
-"
-715ABFB108ED8F6361D07762656DBD0443C57904_4,715ABFB108ED8F6361D07762656DBD0443C57904," Training the model
-
-For all options that are available for configuring sentiment transformer training, enter:
-
-help(watson_nlp.blocks.targeted_sentiment.SequenceTransformerTSA.train)
-
-The train method will create a new targets sentiment block model.
-
-The following is a sample call that uses the input data and pretrained model from the previous section (Training the model):
-
- Train the model
-custom_tsa_model = watson_nlp.blocks.targeted_sentiment.SequenceTransformerTSA.train(
-train_stream,
-dev_stream,
-pretrained_model_resource,
-num_train_epochs=5
-)
-
-"
-715ABFB108ED8F6361D07762656DBD0443C57904_5,715ABFB108ED8F6361D07762656DBD0443C57904," Applying the model on new data
-
-After you train the model on a data set, apply the model on new data by using the run() method, as you would use on any of the existing pre-trained blocks. Because the created custom model is a block model, you need to run syntax analysis on the input text and pass the results to the run() methods.
-
-Sample code:
-
-input_text = 'new input text'
-
- Run syntax analysis first
-syntax_model = watson_nlp.load('syntax_izumo_en_stock')
-syntax_analysis = syntax_model.run(input_text, parsers=('token',))
-
- Apply the new model on top of the syntax predictions
-tsa_predictions = custom_tsa_model.run(syntax_analysis)
-
-"
-715ABFB108ED8F6361D07762656DBD0443C57904_6,715ABFB108ED8F6361D07762656DBD0443C57904," Storing and loading the model
-
-The custom targets sentiment model can be stored as any other model as described in ""Loading and storing models"", using ibm_watson_studio_lib.
-
-To load the custom targets sentiment model, additional steps are required:
-
-
-
-1. Ensure that you have an access token on the Access control page on the Manage tab of your project. Only project admins can create access tokens. The access token can have Viewer or Editor access permissions. Only editors can inject the token into a notebook.
-2. Add the project token to the notebook by clicking More > Insert project token from the notebook action bar. Then run the cell.
-
-By running the inserted hidden code cell, a wslib object is created that you can use for functions in the ibm-watson-studio-lib library. For information on the available ibm-watson-studio-lib functions, see [Using ibm-watson-studio-lib for Python](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html).
-3. Download and extract the model to your local runtime environment:
-
-import zipfile
-model_zip = 'custom_TSA_model_file'
-model_folder = 'custom_TSA'
-wslib.download_file('custom_TSA_model', file_name=model_zip)
-
-with zipfile.ZipFile(model_zip, 'r') as zip_ref:
-zip_ref.extractall(model_folder)
-4. Load the model from the extracted folder:
-
-custom_TSA_model = watson_nlp.load(model_folder)
-
-
-
-Parent topic:[Creating your own models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-create-model_cloud.html)
-"
-F7E8527824E15B4194A3FD12CEEE049F910016DB_0,F7E8527824E15B4194A3FD12CEEE049F910016DB," Watson Natural Language Processing library
-
-The Watson Natural Language Processing library provides natural language processing functions for syntax analysis and pre-trained models for a wide variety of text processing tasks, such as sentiment analysis, keyword extraction, and classification. The Watson Natural Language Processing library is available for Python only.
-
-With Watson Natural Language Processing, you can turn unstructured data into structured data, making the data easier to understand and transferable, in particular if you are working with a mix of unstructured and structured data. Examples of such data are call center records, customer complaints, social media posts, or problem reports. The unstructured data is often part of a larger data record that includes columns with structured data. Extracting meaning and structure from the unstructured data and combining this information with the data in the columns of structured data:
-
-
-
-* Gives you a deeper understanding of the input data
-* Can help you to make better decisions.
-
-
-
-Watson Natural Language Processing provides pre-trained models in over 20 languages. They are curated by a dedicated team of experts, and evaluated for quality on each specific language. These pre-trained models can be used in production environments without you having to worry about license or intellectual property infringements.
-
-Although you can create your own models, the easiest way to get started with Watson Natural Language Processing is to run the pre-trained models on unstructured text to perform language processing tasks.
-
-Some examples of language processing tasks available in Watson Natural Language Processing pre-trained models:
-
-
-
-* Language detection: detect the language of the input text
-* Syntax: tokenization, lemmatization, part of speech tagging, and dependency parsing
-* Entity extraction: find mentions of entities (like person, organization, or date)
-* Noun phrase extraction: extract noun phrases from the input text
-* Text classification: analyze text and then assign a set of pre-defined tags or categories based on its content
-* Sentiment classification: is the input document positive, negative or neutral?
-* Tone classification: classify the tone in the input document (like excited, frustrated, or sad)
-* Emotion classification: classify the emotion of the input document (like anger or disgust)
-"
-F7E8527824E15B4194A3FD12CEEE049F910016DB_1,F7E8527824E15B4194A3FD12CEEE049F910016DB,"* Keywords extraction: extract noun phrases that are relevant in the input text
-* Concepts: find concepts from DBPedia in the input text
-* Relations: detect relations between two entities
-* Hierarchical categories: assign individual nodes within a hierarchical taxonomy to the input document
-* Embeddings: map individual words or larger text snippets into a vector space
-
-
-
-Watson Natural Language Processing encapsulates natural language functionality through blocks and workflows. Blocks and workflows support functions to load, run, train, and save a model.
-
-For more information, refer to [Working with pre-trained models](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-pretrained.html).
-
-Some examples of how you can use the Watson Natural Language Processing library:
-
-Running syntax analysis on a text snippet:
-
-import watson_nlp
-
- Load the syntax model for English
-syntax_model = watson_nlp.load('syntax_izumo_en_stock')
-
- Run the syntax model and print the result
-syntax_prediction = syntax_model.run('Welcome to IBM!')
-print(syntax_prediction)
-
-Extracting entities from a text snippet:
-
-import watson_nlp
-entities_workflow = watson_nlp.load('entity-mentions_transformer-workflow_multilingual_slate.153m.distilled')
-entities = entities_workflow.run('IBM's CEO Arvind Krishna is based in the US', language_code=""en"")
-print(entities.get_mention_pairs())
-
-For examples of how to use the Watson Natural Language Processing library, refer to [Watson Natural Language Processing library usage samples](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/watson-nlp-samples.html).
-
-"
-F7E8527824E15B4194A3FD12CEEE049F910016DB_2,F7E8527824E15B4194A3FD12CEEE049F910016DB," Using Watson Natural Language Processing in a notebook
-
-You can run your Python notebooks that use the Watson Natural Language Processing library in any of the environments that listed here. The GPU environment templates include the Watson Natural Language Processing library.
-
-DO + NLP: Indicates that the environment template includes both the CPLEX and the DOcplex libraries to model and solve decision optimization problems and the Watson Natural Language Processing library.
-
- : Indicates that the environment template requires the Watson Studio Professional plan. See [Offering plans](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/ws-plans.html).
-
-
-
-Environment templates that include the Watson Natural Language Processing library
-
- Name Hardware configuration CUH rate per hour
-
- NLP + DO Runtime 23.1 on Python 3.10 XS 2vCPU and 8 GB RAM 6
- DO + NLP Runtime 22.2 on Python 3.10 XS 2 vCPU and 8 GB RAM 6
- GPU V100 Runtime 23.1 on Python 3.10 40 vCPU + 172 GB + 1 NVIDIA® V100 (1 GPU) 68
- GPU 2xV100 Runtime 23.1 on Python 3.10 80 vCPU + 344 GB + 2 NVIDIA® V100 (2 GPU) 136
- GPU V100 Runtime 22.2 on Python 3.10 40 vCPU + 172 GB + 1 NVIDIA® V100 (1 GPU) 68
- GPU 2xV100 Runtime 22.2 on Python 3.10 80 vCPU + 344 GB + 2 NVIDIA® V100 (2 GPU) 136
-
-
-
-Normally these environments are sufficient to run notebooks that use prebuilt models. If you need a larger environment, for example to train your own models, you can create a custom template that includes the Watson Natural Language Processing library. Refer to [Creating your own environment template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-customize-env-definition.html).
-
-
-
-* Create a custom template without GPU by selecting the engine type Default, the hardware configuration size that you need, and choosing NLP + DO Runtime 23.1 on Python 3.10 or DO + NLP Runtime 22.2 on Python 3.10 as the software version.
-"
-F7E8527824E15B4194A3FD12CEEE049F910016DB_3,F7E8527824E15B4194A3FD12CEEE049F910016DB,"* Create a custom template with GPU by selecting the engine type GPU, the hardware configuration size that you need, and choosing GPU Runtime 23.1 on Python 3.10 or GPU Runtime 22.2 on Python 3.10 as the software version.
-
-
-
-"
-F7E8527824E15B4194A3FD12CEEE049F910016DB_4,F7E8527824E15B4194A3FD12CEEE049F910016DB," Learn more
-
-
-
-* [Creating your own environment template](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/create-customize-env-definition.html)
-
-
-
-Parent topic:[Notebooks and scripts](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebooks-and-scripts.html)
-"
-0ECEAC44DA213D067B5B5EA66694E6283457A441_0,0ECEAC44DA213D067B5B5EA66694E6283457A441," ibm-watson-studio-lib for Python
-
-The ibm-watson-studio-lib library for Python provides access to assets. It can be used in notebooks that are created in the notebook editor. ibm-watson-studio-lib provides support for working with data assets and connections, as well as browsing functionality for all other asset types.
-
-There are two kinds of data assets:
-
-
-
-* Stored data assets refer to files in the storage associated with the current project. The library can load and save these files. For data larger than one megabyte, this is not recommended. The library requires that the data is kept in memory in its entirety, which might be inefficient when processing huge data sets.
-* Connected data assets represent data that must be accessed through a connection. Using the library, you can retrieve the properties (metadata) of the connected data asset and its connection. The functions do not return the data of a connected data asset. You can either use the code that is generated for you when you click Read data on the Code snippets pane to access the data or you must write your own code.
-
-
-
-Note: The ibm-watson-studio-lib functions do not encode or decode data when saving data to or getting data from a file. Additionally, the ibm-watson-studio-lib functions can't be used to access connected folder assets (files on a path to the project storage).
-
-"
-0ECEAC44DA213D067B5B5EA66694E6283457A441_1,0ECEAC44DA213D067B5B5EA66694E6283457A441," Setting up the ibm-watson-studio-lib library
-
-The ibm-watson-studio-lib library for Python is pre-installed and can be imported directly in a notebook in the notebook editor. To use the ibm-watson-studio-lib library in your notebook, you need the ID of the project and the project token.
-
-To insert the project token to your notebook:
-
-
-
-1. Click the More icon on your notebook toolbar and then click Insert project token.
-
-If a project token exists, a cell is added to your notebook with the following information:
-
-from ibm_watson_studio_lib import access_project_or_space
-wslib = access_project_or_space({""token"":""""})
-
- is the value of the project token.
-
-If you are told in a message that no project token exists, click the link in the message to be redirected to the project's Access Control page where you can create a project token. You must be eligible to create a project token. For details, see [Manually adding the project token](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/token.html).
-
-To create a project token:
-
-
-
-1. From the Manage tab, select the Access Control page, and click New access token under Access tokens.
-2. Enter a name, select Editor role for the project, and create a token.
-3. Go back to your notebook, click the More icon on the notebook toolbar and then click Insert project token.
-
-
-
-
-
-"
-0ECEAC44DA213D067B5B5EA66694E6283457A441_2,0ECEAC44DA213D067B5B5EA66694E6283457A441," Helper functions
-
-You can get information about the supported functions in the ibm-watson-studio-lib library programmatically by using help(wslib), or for an individual function by using help(wslib., for example help(wslib.get_connection).
-
-You can use the helper function wslib.show(...) for formatted printing of Python dictionaries and lists of dictionaries, which are the common result output type of the ibm-watson-studio-lib functions.
-
-"
-0ECEAC44DA213D067B5B5EA66694E6283457A441_3,0ECEAC44DA213D067B5B5EA66694E6283457A441," The ibm-watson-studio-lib functions
-
-The ibm-watson-studio-lib library exposes a set of functions that are grouped in the following way:
-
-
-
-* [Get project information](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html?context=cdpaas&locale=enget-infos)
-* [Get authentication token](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html?context=cdpaas&locale=enget-auth-token)
-* [Fetch data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html?context=cdpaas&locale=enfetch-data)
-* [Save data](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html?context=cdpaas&locale=ensave-data)
-* [Get connection information](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html?context=cdpaas&locale=enget-conn-info)
-* [Get connected data information](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html?context=cdpaas&locale=enget-conn-data-info)
-* [Access assets by ID instead of name](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html?context=cdpaas&locale=enaccess-by-id)
-"
-0ECEAC44DA213D067B5B5EA66694E6283457A441_4,0ECEAC44DA213D067B5B5EA66694E6283457A441,"* [Access project storage directly](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html?context=cdpaas&locale=endirect-proj-storage)
-* [Spark support](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html?context=cdpaas&locale=enspark-support)
-* [Browse project assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ws-lib-python.html?context=cdpaas&locale=enbrowse-assets)
-
-
-
-"
-0ECEAC44DA213D067B5B5EA66694E6283457A441_5,0ECEAC44DA213D067B5B5EA66694E6283457A441," Get project information
-
-While developing code, you might not know the exact names of data assets or connections. The following functions provide lists of assets, from which you can pick the relevant ones. In all examples, you can use wslib.show(assets) to pretty-print the list. The index of each item is printed in front of the item.
-
-
-
-* list_connections()
-
-This function returns a list of the connections. The list of returned connections is not sorted by any criterion and can change when you call the function again. You can pass a dictionary item instead of a name to the get_connection function.
-
-For example:
-
- Import the lib
-from ibm_watson_studio_lib import access_project_or_space
-wslib = access_project_or_space({""token"":""""})
-
-assets = wslib.list_connections()
-wslib.show(assets)
-connprops = wslib.get_connection(assets[0])
-wslib.show(connprops)
-* list_connected_data()
-
-This function returns the connected data assets. The list of returned connected data assets is not sorted by any criterion and can change when you call the function again. You can pass a dictionary item instead of a name to the get_connected_data function.
-* list_stored_data()
-
-This function returns a list of the stored data assets (data files). The list of returned data assets is not sorted by any criterion and can change when you call the function again. You can pass a dictionary item instead of a name to the load_data and save_datafunctions.
-
-Note: A heuristic is applied to distinguish between connected data assets and stored data assets. However, there may be cases where a data asset of the wrong kind appears in the returned lists.
-* wslib.here
-
-By using this entry point, you can retrieve metadata about the project that the lib is working with. The entry point wslib.here provides the following functions:
-
-
-
-* get_name()
-
-This function returns the name of the project.
-* get_description()
-
-This function returns the description of the project.
-* get_ID()
-
-"
-0ECEAC44DA213D067B5B5EA66694E6283457A441_6,0ECEAC44DA213D067B5B5EA66694E6283457A441,"This function returns the ID of the project.
-* get_storage()
-
-This function returns storage information for the project.
-
-
-
-
-
-"
-0ECEAC44DA213D067B5B5EA66694E6283457A441_7,0ECEAC44DA213D067B5B5EA66694E6283457A441," Get authentication token
-
-Some tasks require an authentication token. For example, if you want to run your own requests against the [Watson Data API](https://cloud.ibm.com/apidocs/watson-data-api-cpd), you need an authentication token.
-
-You can use the following function to get the bearer token:
-
-
-
-* get_current_token()
-
-
-
-For example:
-
-from ibm_watson_studio_lib import access_project_or_space
-wslib = access_project_or_space({""token"":""""})
-token = wslib.auth.get_current_token()
-
-This function returns the bearer token that is currently used by the ibm-watson-studio-lib library.
-
-"
-0ECEAC44DA213D067B5B5EA66694E6283457A441_8,0ECEAC44DA213D067B5B5EA66694E6283457A441," Fetch data
-
-You can use the following functions to fetch data from a stored data asset (a file) in your project.
-
-
-
-* load_data(asset_name_or_item, attachment_type_or_item=None)
-
-This function loads the data of a stored data asset into a BytesIO buffer. The function is not recommended for very large files.
-
-The function takes the following parameters:
-
-
-
-* asset_name_or_item: (Required) Either a string with the name of a stored data asset or an item like those returned by list_stored_data().
-* attachment_type_or_item: (Optional) Attachment type to load. A data asset can have more than one attachment with data. Without this parameter, the default attachment type, namely data_asset is loaded. Specify this parameter if the attachment type is not data_asset. For example, if a plain text data asset has an attached profile from Natural Language Analysis, this can be loaded as attachment type data_profile_nlu.
-
-Here is an example that shows you how to load the data of a data asset:
-
-
-
-
-
-
- Import the lib
-from ibm_watson_studio_lib import access_project_or_space
-wslib = access_project_or_space({""token"":""""})
-
- Fetch the data from a file
-my_file = wslib.load_data(""MyFile.csv"")
-
- Read the CSV data file into a pandas DataFrame
-my_file.seek(0)
-import pandas as pd
-pd.read_csv(my_file, nrows=10)
-
-
-
-
-* download_file(asset_name_or_item, file_name=None, attachment_type_or_item=None)
-
-This function downloads the data of a stored data asset and stores it in the specified file in the file system of your runtime. The file is overwritten if it already exists.
-
-The function takes the following parameters:
-
-
-
-"
-0ECEAC44DA213D067B5B5EA66694E6283457A441_9,0ECEAC44DA213D067B5B5EA66694E6283457A441,"* asset_name_or_item: (Required) Either a string with the name of a stored data asset or an item like those returned by list_stored_data().
-* file_name: (Optional) The name of the file that the downloaded data is stored to. It defaults to the asset's attachment name.
-* attachment_type_or_item: (Optional) The attachment type to download. A data asset can have more than one attachment with data. Without this parameter, the default attachment type, namely data_asset is downloaded. Specify this parameter if the attachment type is not data_asset. For example, if a plain text data asset has an attached profile from Natural Language Analysis, this can be downlaoded loaded as attachment type data_profile_nlu.
-
-Here is an example that shows you how to you can use download_file to make your custom Python script available in your notebook:
-
-
-
-
-
-
- Import the lib
-from ibm_watson_studio_lib import access_project_or_space
-wslib = access_project_or_space({""token"":""""})
-
- Let's assume you have a Python script ""helpers.py"" with helper functions on your local machine.
- Upload the script to your project using the Data Panel on the right of the opened notebook.
-
- Download the script to the file system of your runtime
-wslib.download_file(""helpers.py"")
-
- import the required functions to use them in your notebook
-from helpers import my_func
-my_func()
-
-
-"
-0ECEAC44DA213D067B5B5EA66694E6283457A441_10,0ECEAC44DA213D067B5B5EA66694E6283457A441," Save data
-
-The functions to save data in your project storage do multiple things:
-
-
-
-* Store the data in project storage
-* Add the data as a data asset (by creating an asset or overwriting an existing asset) to your project so you can see the data in the data assets list in your project.
-* Associate the asset with the file in the storage.
-
-
-
-You can use the following functions to save data:
-
-
-
-* save_data(asset_name_or_item, data, overwrite=None, mime_type=None, file_name=None)
-
-This function saves data in memory to the project storage.
-
-The function takes the following parameters:
-
-
-
-* asset_name_or_item: (Required) The name of the created asset or list item that is returned by list_stored_data(). You can use the item if you like to overwrite an existing file.
-* data: (Required) The data to upload. This can be any object of type bytes-like-object, for example a byte buffer.
-* overwrite: (Optional) Overwrites the data of a stored data asset if it already exists. By default, this is set to false. If an asset item is passed instead of a name, the behavior is to overwrite the asset.
-* mime_type: (Optional) The MIME type for the created asset. By default the MIME type is determined from the asset name suffix. If you use asset names without a suffix, specify the MIME type here. For example mime_type=application/text for plain text data. This parameter is ignored when overwriting an asset.
-* file_name: (Optional) The file name to be used in the project storage. The data is saved in the storage associated with the project. When creating a new asset, the file name is derived from the asset name, but might be different. If you want to access the file directly, you can specify a file name. This parameter is ignored when overwriting an asset.
-
-Here is an example that shows you how to save data to a file:
-
-
-
-
-
-
- Import the lib
-from ibm_watson_studio_lib import access_project_or_space
-"
-0ECEAC44DA213D067B5B5EA66694E6283457A441_11,0ECEAC44DA213D067B5B5EA66694E6283457A441,"wslib = access_project_or_space({""token"":""""})
-
- let's assume you have the pandas DataFrame pandas_df which contains the data
- you want to save as a csv file
-wslib.save_data(""my_asset_name.csv"", pandas_df.to_csv(index=False).encode())
-
- the function returns a dict which contains the asset_name, asset_id, file_name and additional information upon successful saving of the data
-
-
-
-
-* upload_file(file_path, asset_name=None, file_name=None, overwrite=False, mime_type=None) This function saves data in the file system in the runtime to a file associated with your project. The function takes the following parameters:
-
-
-
-* file_path: (Required) The path to the file in the file system.
-* asset_name: (Optional) The name of the data asset that is created. It defaults to the name of the file to be uploaded.
-* file_name: (Optional) The name of the file that is created in the storage associated with the project. It defaults to the name of the file to be uploaded.
-* overwrite: (Optional) Overwrites an existing file in storage. Defaults to false.
-* mime_type: (Optional) The MIME type for the created asset. By default the MIME type is determined from the asset name suffix. If you use asset names without a suffix, specify the MIME type here. For example mime_type='application/text' for plain text data. This parameter is ignored when overwriting an asset.
-
-Here is an example that shows you how you can upload a file to the project:
-
-
-
-
-
-
- Import the lib
-from ibm_watson_studio_lib import access_project_or_space
-wslib = access_project_or_space({""token"":""